Report 2: The Pixel-based Image & its Processing

glegrady
Posts: 203
Joined: Wed Sep 22, 2010 12:26 pm

Report 2: The Pixel-based Image & its Processing

Post by glegrady » Thu Apr 08, 2021 8:59 am

MAT255 Techniques, History & Aesthetics of the Computational Photographic Image
https://www.mat.ucsb.edu/~g.legrady/aca ... s255b.html

Please provide a response to any of the material covered in this week's two presentations by clicking on "Post Reply". Consider this to be a journal to be viewed by class members. The idea is to share thoughts and other information through links: anything that may be of interest to you and the topic at hand.


The report for this topic is due by April 22, 2021, but each of your reports can be updated throughout the course.
George Legrady
legrady@mat.ucsb.edu

kevinclancy
Posts: 6
Joined: Thu Apr 01, 2021 5:33 pm

Re: Report 2: The Pixel-based Image & its Processing

Post by kevinclancy » Mon Apr 19, 2021 5:23 pm

[Image: Rachel Rossin, Stalking The Trace (2019), installation view, Zabludowicz Collection. Photo: Tim Bowditch]

I watched an excellent conversation between interdisciplinary artist Rachel Rossin and curator Michael Connor, held in conjunction with the exhibition World on a Wire, a partnership between Rhizome, the New Museum, and HYUNDAI. This partnership is itself interesting in relation to our historical look at E.A.T. and Bell Labs, Alvy Ray Smith's collaborations with JPL, and contemporary examples that pair artists, engineers, and scientists. Michael Connor, Artistic Director at Rhizome, curated the exhibition, which features Rachel Rossin's work I'm my loving memory. According to Connor's curatorial statement, World on a Wire "transforms the gallery space into a hybrid-reality vivarium of vivid, artist-made synthetic life forms, exploring the possibilities and poetics of simulation as artistic practice." Rossin's I'm my loving memory consists of 3D virtual assets that are UV printed onto clear plexiglass, which is then melted with a torch and molded by the artist's body. This index of virtual assets then reappears, floating in the space, through an Augmented Reality app on a tablet.

I initially watched this conversation during Week 1, but after the lectures of Week 2 I thought it might fit better in this response, as we began to examine the digital image, compression, entropy, and the computational image. Rossin’s work touches on facial tracking, deep fakes / cheap fakes, VR, AR, photogrammetry, holograms, entropic vision, programming, game design, and blockchain in experimental and thoughtful ways.

[Image: Rachel Rossin, I'm my loving memory (2020–2021), UV printed plexiglass, AR application. Courtesy Hyundai Motorstudio Beijing.]

Rossin talked about her 2015 exhibition Lossy, which combined oil paintings of VR glitches with a VR simulation on an Oculus Rift headset, the two feeding off each other in a recursive loop. In Lossy, Rossin introduced the fascinating idea of entropic vision: the photogrammetry scans of the artist's studio, apartment, and paintings in the VR environment are compressed, eroded, and finally disappeared by the tracked gaze of the audience. The VR simulation was rebooted every 24 hours, beginning the entropic loop anew. Rossin's creative exploration of irreversible lossy compression aligns with our class discussion of Stan Douglas' creative use of DCT (discrete cosine transform) compression in his Corrupt Files series. This notion of entropic vision also finds interesting resonance with the Epicurean and atomist philosopher Lucretius (c. 99–55 BC), as quoted in Tom Gunning's essay "To Scan A Ghost: The Ontology of Mediated Vision" that I mentioned last week:
Vision, Lucretius claimed, was carried by images (simulacra), which he described quite materially as films, "a sort of outer skin perpetually peeled off the surface of objects and flying about this way and that through the air." He explained their effect on human vision as one of direct contact: "while the individual films that strike upon the eye are invisible, the objects from which they emanate are perceived."
[Image: Rachel Rossin, I Came And Went As A Ghost Hand (Cycle 2), 2015, still from VR]

While Lucretius' theories don't entirely hold up under the current scientific understanding of vision and perception, they remain interesting in relation to Rossin's visual manipulations of virtual simulacra and the interactions between the physical and the virtual. It is also interesting to extend these perpetually peeling skins of simulacra to our contemporary moment of constant selfies and surveillance, the perpetual erosion of a subject.

Georges Seurat's early pointillist painting "La Grande Jatte" (1884) is also interesting in relation to Rossin's enduring use of traditional painting techniques in combination with Virtual Reality. From this perspective, "La Grande Jatte" can be envisioned as a precursor to the "point cloud" of photogrammetry, or as an early form of visual synthesis, as we begin to think about pixels, compression, and the computational image.

[Image: Still from Rachel Rossin's VR piece The Sky is a Gap (2017)]

Connor and Rossin also discuss Rossin's VR piece The Sky is a Gap (2017), included in her solo show Stalking the Trace (2019) at the Zabludowicz Collection in London. The Sky is a Gap is inspired by the surreal final scene of Michelangelo Antonioni's film Zabriskie Point (1970), which features a montage of slow-motion explosions of domestic items. The Antonioni connection is an interesting parallel to Professor Legrady's mention of Antonioni's Blow-up (1966). Rossin similarly constructs a virtual explosion of a virtual home, and along with it a surreal physics simulation of floating, fragmented domestic objects. What is fascinating and innovative about Rossin's approach is that she uses the viewer's position and velocity in the space (as tracked and spatialized by the Oculus Rift headset) as a temporal cursor that scrubs the timeline of the explosion back and forth. The viewer not only examines the contents of this virtual world, but controls the flow of time based on their body's position in space. I was fortunate enough to get to experience this VR piece in her DUMBO studio, when I tagged along with a friend's Yale field trip.
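As a rough sketch of the underlying idea (my own reconstruction with made-up names and bounds, not Rossin's actual code), mapping a tracked position onto a precomputed explosion timeline might look something like this:

# Hypothetical sketch: a tracked head position becomes a temporal cursor
# over a precomputed simulation. The axis choice, room bounds in meters,
# and frame count are all assumptions, not Rossin's implementation.

def scrub(position_z, z_min=0.0, z_max=4.0):
    """Map the viewer's position along one room axis to normalized time in [0, 1]."""
    t = (position_z - z_min) / (z_max - z_min)
    return min(max(t, 0.0), 1.0)  # clamp: walking past the bounds saturates time

num_frames = 600  # stand-in for precomputed physics states, one per timestep
for head_z in (0.5, 2.0, 3.9):  # sample tracked positions
    frame = int(scrub(head_z) * (num_frames - 1))
    print("head at z =", head_z, "m -> frame", frame)

Walking forward advances the explosion; walking back reverses it, which matches the description above of the body as a cursor scrubbing time.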

[Image: Rachel Rossin, Recursive Truth, 2019, still frame from digital video]

In her video Recursive Truth (2019), Rossin uses motion tracking and facial tracking to create crude deep fakes that superimpose her face onto footage of a young Steve Jobs and footage of Marco Rubio discussing the political implications of deep fakes at a Senate hearing. Throughout the video Rossin keeps the curtain pulled back, revealing the artifice of her manipulation through Python scripts and backend facial tracking with her webcam. The obvious artifice highlights the uncanny experience of "cheap fakes", but it also points to a near future where the illusion of "deep fakes" seamlessly blends into reality. Politically, we are already aware that deep/cheap fakes do not need to blend seamlessly with reality; the sowing of doubt, distrust, and confusion is effective enough.

Links:
Rachel Rossin http://rossin.co/
Michael Connor https://rhizome.org/profile/michaelconnor3/
World On A Wire exhibition website: https://worldonawire.net/#list
World on a Wire: Rachel Rossin Artist Talk https://vimeo.com/531959602
New York Times review of Lossy: https://www.nytimes.com/2015/11/06/arts ... nting.html
ARTFORUM Interview: https://www.artforum.com/video/rachel-r ... work-71956
Final Scene of Zabriskie Point: https://www.youtube.com/watch?v=guOmJM8xvHA

alexiskrasnoff
Posts: 6
Joined: Thu Apr 01, 2021 5:32 pm

Re: Report 2: The Pixel-based Image & its Processing

Post by alexiskrasnoff » Wed Apr 21, 2021 11:36 pm

Hi guys! I was pretty excited about the photogrammetry stuff we talked about this week, as my field of interest mainly lies in digital 3D. I had tried photogrammetry a while ago with a free app called display.land, but it's no longer supported. I dug back into my initial research on the topic and found this video, which describes a more involved process that isn't confined to your phone, but it's free and seems to get nice results! https://youtu.be/k4NTf0hMjtY I also have another video that I found more recently that I really love, where he uses photogrammetry to scan random objects on the street and then uses those objects to create a robot animation. He uses an app that requires LiDAR scanning functionality (which my phone doesn't have, haha), but I think the concept of using photogrammetry to essentially steal mundane objects and repurpose them for art is really cool! https://youtu.be/CCauytLuF-o

I was also interested in the computer vision and motion/depth sensing concepts we talked about. The way the technology could recognize people's movement using infrared projection made me think of a website that I was using last quarter that uses a different technique to sense body movement. The website Teachable Machine allows you to create machine learning models by uploading or capturing training data sets that are then used to train your model. You can train it on images, sounds, or poses. I only worked with the image models, but I was pretty impressed by the accuracy I could get without too many training samples. Here is the website: https://teachablemachine.withgoogle.com/ and here is a tutorial on using it with the pose recognition model: https://medium.com/@warronbebster/teach ... 4f6116f491 and here is a demo where you can test out the model from the tutorial: https://tm-pose-demo.glitch.me/ Since this technique uses AI to estimate poses rather than actually capturing infrared data, it works with images, videos, and regular webcams. It's interesting to see how the same problem can be solved in totally different ways! :)
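For anyone curious what camera-only pose estimation looks like in code, here is a minimal sketch using MediaPipe's pose solution. It is a different library than the PoseNet model Teachable Machine builds on, but it illustrates the same idea: skeletal landmarks estimated from an ordinary image, no infrared hardware. The image filename is a placeholder.

# Minimal sketch of camera-only pose estimation with MediaPipe
# (not Teachable Machine's backend, but the same general approach).
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# static_image_mode=True treats each input as an independent photo.
with mp_pose.Pose(static_image_mode=True) as pose:
    image = cv2.imread("person.jpg")  # placeholder path: any ordinary photo
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 body landmarks; x, y are normalized image coordinates,
        # z is a relative depth estimate inferred from the image alone.
        for i, lm in enumerate(results.pose_landmarks.landmark):
            print(i, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))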

jungahson
Posts: 6
Joined: Thu Apr 01, 2021 5:31 pm

Re: Report 2: The Pixel-based Image & its Processing

Post by jungahson » Thu Apr 22, 2021 8:53 pm

As an engineer in the signal/image processing field, my goal has always been to reduce noise and increase the signal-to-noise ratio. The original meaning of "noise" was "unwanted signal" [1]. In digital images, it means that pixels show intensity values that differ from the true values obtained from the image [2]. Given this definition, it is natural that noise reduction techniques have been developed in the signal/image processing fields. Noise removal algorithms reduce or remove the visibility of noise by smoothing the image. There are various types of filters by which noise can be removed from images, such as the linear smoothing filter, median filter, Wiener filter, and fuzzy filter. An example of a linear filter is the Gaussian filter, whose impulse response is a Gaussian function [3]. Since this kind of filter tends to blur an image, nonlinear filters are more frequently used for noise reduction. For example, with a median filter, the value of an output pixel is determined by the median of the neighborhood pixels [2].
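To make the contrast between the two filter families concrete, here is a minimal sketch using SciPy; the test image and noise are synthetic stand-ins:

# Minimal sketch contrasting a linear (Gaussian) and a nonlinear (median)
# denoising filter. The test image and noise are synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0  # a bright square with sharp edges

# Salt-and-pepper noise, a case where the median filter shines.
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

smooth = gaussian_filter(noisy, sigma=1.5)  # linear: averages, blurs edges
median = median_filter(noisy, size=3)       # nonlinear: keeps edges sharper

for name, img in [("gaussian", smooth), ("median", median)]:
    print(name, "MSE:", float(np.mean((img - clean) ** 2)))

On this kind of impulse noise the median filter typically achieves a lower error while preserving the square's edges, which is exactly why nonlinear filters are preferred here.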

The technique that I used most frequently when I was working on medical image processing was anisotropic diffusion. While an ordinary diffusion process is a linear transformation of the original image, anisotropic diffusion is a non-linear transformation, as it adapts locally to different image regions [4].
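For reference, here is a minimal sketch of the classic Perona-Malik formulation of anisotropic diffusion (my own NumPy illustration; [4] applies diffusion to reflection and illumination components separately, which this sketch does not do):

# Minimal Perona-Malik anisotropic diffusion sketch (not the exact
# variant of [4], which diffuses reflection/illumination components).
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Iteratively smooth while preserving edges.

    kappa: edge threshold; gradients much larger than kappa diffuse little.
    gamma: step size (<= 0.25 for stability with four neighbors).
    """
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbors
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # conductance: near 1 in flat regions, near 0 across strong edges,
        # so noise is smoothed while edges are preserved
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        img += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img

# usage: denoised = anisotropic_diffusion(noisy_image_as_2d_array)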

[1] Farooque, Mohd Awais, and Jayant S. Rohankar. "Survey on various noises and techniques for denoising the color image." International Journal of Application or Innovation in Engineering & Management (IJAIEM) 2.11 (2013): 217-221.

[2] Hambal, Abdalla Mohamed, Zhijun Pei, and Faustini Libent Ishabailu. "Image noise reduction and filtering techniques." International Journal of Science and Research (IJSR) 6.3 (2017): 2033-2038.

[3] Wikipedia contributors. "Gaussian filter." Wikipedia, The Free Encyclopedia, 8 April 2021. Web. 25 April 2021.

[4] Hassanpour, Hamid, and Mohammad Hossein Khosravi. "Image denoising using anisotropic diffusion equations on reflection and illumination components of image." International Journal of Engineering 27.9 (2014): 1339-1348.
