wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

italo
Posts: 4
Joined: Wed Oct 01, 2025 2:34 pm

Re: wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

Post by italo » Tue Oct 14, 2025 8:43 am

Time-based art installations are one of the most interesting fields in contemporary art. Thanks to technology, artists can now manipulate and create beyond traditional concepts. Today, with advances in audio, video, and real-time data processing, artworks can be transformed and made to feel “alive.”
I am especially interested in “living sculptures” or installations inspired by nature that use principles of biomimesis. One of the most relevant artists in this field for me is Neri Oxman, who has been working with insects and living materials to create art and design.

Based on this week’s resources, I selected the following works:

1. David Rokeby – San Marco Flow (Generative Video Installation, 2004)

San Marco Flow was an installation in Piazza San Marco in Venice, where movement and time were central elements. As people walked or moved through the space, the installation visualized their presence, leaving a kind of trace behind them that showed their past pathways. This generated the idea that as we move forward, we always leave traces behind.

What I found most interesting is that without movement, there is no image — elements that do not move remain invisible to the program.
The key concepts I identify are:
a) time and the traces we leave behind;
b) movement as evidence of life; and
c) data transformation to reveal the invisible.

2. Jim Campbell – Data Transformation 3 (2017)
Jim Campbell holds a B.S. in Electrical Engineering and Mathematics from the Massachusetts Institute of Technology. His work has been exhibited internationally in institutions such as the Whitney Museum of American Art (New York) and the San Francisco Museum of Modern Art, among others.

In this work, he transforms visual information by adding noise or effects that create the illusion of low resolution. The piece uses electronic devices such as LEDs to display images in a way that merges abstraction and data representation. As Campbell explains about Data Transformation 3 (2017):

“By reducing the resolution of the color side of the image, the two sides present similar amounts of information, with each side representing the data in a different way.”

What attracted my attention was how the transformation of movement captured in the videos completes the experience. The changing images make me reinterpret the same “reality” by focusing on different aspects of it.

The key elements I identify are:
a) visuals based on simple rules that generate complex results;
b) shifting focus to different aspects of the same information; and
c) inviting the viewer to change perspective — to “see with new eyes.”


Image
Image
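Campbell's actual hardware and processing are not documented here, but the quoted idea of "reducing the resolution of the color side" so both sides carry comparable amounts of information can be sketched as simple block averaging. The block size below is an arbitrary choice for illustration, not Campbell's:

```python
import numpy as np

def block_average(img, block=8):
    """Reduce effective resolution by averaging over block x block tiles,
    then replicating each tile back up to the (cropped) original size."""
    h, w = img.shape[:2]
    h2, w2 = h - h % block, w - w % block          # crop to a multiple of the block size
    img = img[:h2, :w2]
    small = img.reshape(h2 // block, block, w2 // block, block, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

# toy usage: a random 64x64 RGB "frame" keeps its overall structure but loses detail
frame = np.random.rand(64, 64, 3)
low_res = block_average(frame, block=8)
print(frame.shape, low_res.shape)   # same size, but only 8x8 distinct blocks per channel
```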


3. Ulrike Gabriel – terrain_02 (1997)
Ulrike Gabriel explores human reality through robotics, virtual reality (VR), installations, performative formats, and painting. In terrain_02, she worked with biodata and light-responsive devices.

Two participants sit in a nonverbal dialogue facing each other at a round table. They are connected via EEG interfaces to a system of solar-powered robots. Their brainwaves are measured, analyzed, and compared. The ratio between both frequencies is projected onto the robots through changing light intensities—from above using lamps and from below using electroluminescent sheets. The light controls the speed and behavior of the robots, activating or deactivating different areas of the terrain. Depending on the participants’ relationship and their inner responses to each other, unique motion patterns emerge.

I selected this piece because I am interested in biodata as well. It is fascinating to understand the work as one that evolves over time, depending on the dialogue between the participants. This dialogue is unique, as is their relationship. For me, it is important to see the interaction between the different elements: the display of small robots that begin to move because of light creates an internal narrative. Light induces movement and, as a consequence, sets the conditions for life.

The key concepts I identify are:
a) light as a source of life;
b) biodata as a tool to reveal relationships between humans and non-humans.
Image
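Gabriel's actual signal chain is not specified in the sources, but the described mapping (measure two EEG signals, compare them, project the ratio as light intensity that drives the robots) could look roughly like the sketch below. The choice of the alpha band and the specific ratio formula are my assumptions, purely for illustration:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Rough FFT-based power estimate of an EEG signal within a frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[(freqs >= low) & (freqs < high)].sum()

def light_level(eeg_a, eeg_b, fs=256, band=(8, 13)):
    """Map the ratio of the two participants' alpha-band power to a 0..1 light
    intensity; the light, in turn, would drive the solar-powered robots."""
    pa = band_power(eeg_a, fs, *band)
    pb = band_power(eeg_b, fs, *band)
    return pa / (pa + pb + 1e-9)            # 0.5 means the two participants are balanced

# toy usage: two seconds of synthetic 10 Hz "alpha" at different amplitudes
t = np.arange(0, 2, 1 / 256)
a = 1.0 * np.sin(2 * np.pi * 10 * t)
b = 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(light_level(a, b), 2))   # ~0.8: participant A dominates the light
```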

zixuan241
Posts: 4
Joined: Wed Oct 01, 2025 2:41 pm

Re: wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

Post by zixuan241 » Wed Oct 15, 2025 11:35 pm

I think the progression of digital technology has redefined how artists work with images, time, and space. When artistic expression is combined with digital technology, wholly new methods of revealing worlds can be developed, methods previously unimaginable. In this week's exploration of time-based and interactive media art, we can see how artists work with emerging technologies to combine human imagination with computational systems.

The first work that attracted my attention is Cangjie's Poetry, in which Yu Ren Zhang skillfully integrates art and digital technology. She trained an AI system using over 9,000 Chinese characters to generate an entirely new poetic language. The installation immerses the audience in an interactive, data-driven visual space that is ever-changing in real time. Through collaborative authorship with AI, the work blurs the distinction between people and machines, questioning authorship and meaning. Merging ancient linguistic culture with the latest algorithms, the artist investigates how the machine can not only process language but also take part in generating it. The result is an ephemeral and charming dialogue between organic sentiment and machine logic, revealing the poetic possibility embedded in computation itself.
Image
Cangjie's Poetry
The second work that really interested me was Text Rain (1999), by Utterback and Achituv. It converts people’s bodies into containers for language. Facing a projection screen, audiences see their reflected images engage with flowing text—words that react to light and movement. This is the piece’s most compelling aspect: using digital technology to immerse audiences within the artwork. Text Rain demonstrates how digital media can transform viewers into co-creators, inviting them to “decode” meaning through movement. The work's simplicity and elegance showcase how technology can evoke wonder while grounding its effects in fundamental human body language.
Image
Text Rain
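Utterback and Achituv's implementation is their own, but the core interaction (letters fall through bright background pixels and come to rest on dark silhouette pixels) can be sketched in a few lines. Everything below, including the threshold and the letter data layout, is hypothetical:

```python
import numpy as np

def step_letters(letters, silhouette, fall_speed=1, darkness_threshold=0.4):
    """One animation step of a Text Rain style interaction: each letter keeps
    falling while the pixel below it is bright (background), and comes to rest
    when the pixel below is dark (part of a body silhouette)."""
    h = silhouette.shape[0]
    for letter in letters:                       # letter = dict with 'x', 'y', 'char'
        y_next = min(letter["y"] + fall_speed, h - 1)
        if silhouette[y_next, letter["x"]] > darkness_threshold:
            letter["y"] = y_next                 # bright pixel below: keep falling
        # else: dark silhouette pixel below, the letter is "caught" and rests
    return letters

# toy usage: a 10x10 brightness image with a dark "arm" across row 6
frame = np.ones((10, 10)); frame[6, :] = 0.0
letters = [{"x": 3, "y": 2, "char": "a"}]
for _ in range(5):
    letters = step_letters(letters, frame)
print(letters[0]["y"])   # 5: the letter rests just above the dark row at 6
```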
Lastly, my favorite work is David Rokeby’s San Marco Flow (2004). It extends interactivity from private space into the street via generative video. Based on live footage of the Piazza San Marco in Venice, the work converts people’s activity into abstract, expressive, painterly strokes and colors. Instead of freezing a moment, it displays the trajectories of urban movement as layers of time. In spatial-design terms, the work offers a deep insight: it shows how crowds animate architectural space with energy and how their flows construct spatial identity. Rokeby’s visualization works like a behavioral map, converting people’s movement and social rhythms into dynamic aesthetic shapes. In the same way that a designer analyzes foot traffic to optimize a spatial layout, San Marco Flow re-transcribes information, showing how technology can make the unseen and unheard life of a space visible.
Image
San Marco Flow

lpfreiburg
Posts: 4
Joined: Wed Oct 01, 2025 2:20 pm

Re: wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

Post by lpfreiburg » Sun Oct 26, 2025 2:11 pm

Lunar Orbiter Program (1966–1967) and the Rise of Digital Cameras

I saw original prints of these images when I visited JPL, which prompted me to explore further. Diving into NASA’s *Lunar Orbiter* program, I was most surprised by how its image pipeline prefigures the logic of modern digital cameras. Although the spacecraft didn’t have a digital sensor, its workflow was digital in concept: expose 70 mm film in space, develop the film onboard, scan it line by line with a photomultiplier, transmit brightness data to Earth as an electrical signal, and reassemble the lines into an image. Essentially, Lunar Orbiter separated “capturing light” from “creating a photograph,” with the latter handled by electronics and signal processing. This modular approach is precisely what later digital cameras formalized with solid-state sensors and onboard processing (Figure 1).

Image
(Figure 1 – Lunar Orbiter 2 “Picture of the Century,” Copernicus crater, frame L02-162-H3. The image seen on Earth was reconstructed from scanned brightness data transmitted from space. Credit: NASA/Wikimedia Commons.)
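The actual ground reconstruction involved analog demodulation hardware and film re-recording, but the conceptual step of turning a serial stream of brightness samples back into a picture is just a reshape by scan-line length. A toy sketch with invented sample counts, not the real telemetry format:

```python
import numpy as np

def reassemble_scan_lines(brightness_stream, line_length):
    """Reassemble a 1D stream of transmitted brightness samples into a 2D image,
    one scan line per row, mimicking the ground-station reconstruction step."""
    n_lines = len(brightness_stream) // line_length
    return np.array(brightness_stream[: n_lines * line_length]).reshape(n_lines, line_length)

# toy usage: a fake telemetry stream of 4 scan lines, 6 samples each
stream = list(range(24))
image = reassemble_scan_lines(stream, line_length=6)
print(image.shape)   # (4, 6): four rows of six brightness values
```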

The visible “venetian-blind” seams and edge dashes in early releases aren’t just quirks; they reveal the machine’s handwriting. These marks come from how the negatives were scanned into thin strips (“framelets”) and stitched together. Comparing the original transmission with the later LOIRP restoration, you can see the same data rendered with greater dynamic range and fewer artifacts, thanks to improved demodulation and digitization. This directly connects to digital photography’s core idea: the *image is data*, and different processing yields different results (Figures 2–3).

Image
(Figure 2 – 1966 Lunar Orbiter 1 “first view of Earth from the Moon,” showing characteristic striping from line-scan assembly. Credit: NASA/Wikimedia Commons.)

Image
(Figure 3 – LOIRP’s 2008 reprocessed version of the same Earth image, recovered from original analog telemetry tapes using a modern digitization chain. Credit: NASA/LOIRP/Wikimedia Commons.)
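None of this is LOIRP's actual pipeline, but the point that the same data yields different pictures under different processing can be shown with a generic tone-mapping sketch; the black point, white point, and gamma values below are arbitrary:

```python
import numpy as np

def render(raw, black_point, white_point, gamma=1.0):
    """Map the same raw brightness data to display values using different
    black/white points and gamma: same data, different picture."""
    x = np.clip((raw - black_point) / (white_point - black_point), 0.0, 1.0)
    return (255 * x ** gamma).astype(np.uint8)

# toy usage: identical "telemetry" rendered two ways, as in the 1966 vs. LOIRP versions
raw = np.linspace(0.2, 0.8, 5)
print(render(raw, 0.0, 1.0))          # conservative mapping: mid grays only
print(render(raw, 0.2, 0.8, 0.9))     # stretched dynamic range: full tonal scale
```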

What separated Lunar Orbiter’s “almost-digital” method from actual digital cameras was the sensor. In 1969, Boyle and Smith at Bell Labs invented the charge-coupled device (CCD), which directly converts incoming photons into stored charge that can be read as digital values. Replacing film and mechanical scanning with a 2D array of light-sensitive elements made the entire image pipeline inherently electronic: capture, readout, digitize, and store. That’s why Steve Sasson’s 1975 Kodak prototype could record a 100 × 100-pixel image to a cassette and play it back on a TV—the capture process was already numerical, not film-based.
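As a rough illustration of "capture, readout, digitize, and store" (not a model of any real CCD; the full-well depth and bit depth are made up), here is a toy quantization of per-photosite charge into digital numbers, at the same 100 x 100 pixel count as Sasson's prototype:

```python
import numpy as np

def digitize_frame(photon_counts, full_well=10000, bit_depth=8):
    """Minimal model of a sensor readout: clip charge at the full-well capacity,
    then quantize to an n-bit digital number, one value per photosite."""
    charge = np.clip(photon_counts, 0, full_well)
    levels = 2 ** bit_depth - 1
    return np.round(charge / full_well * levels).astype(np.uint16)

# toy usage: a 100 x 100 "exposure" with Poisson photon noise
photons = np.random.poisson(lam=3000, size=(100, 100))
digital_image = digitize_frame(photons)
print(digital_image.shape, digital_image.min(), digital_image.max())
```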

Viewing the Lunar Orbiter system this way makes it seem like a missing chapter in the history of digital photography. The project shows that, decades before consumer digicams, engineers were already encoding images as electrical signals, moving and storing those signals, and reconstructing pictures algorithmically. Swap out the film-drum scanner for a CCD or, later, a CMOS sensor, and the process remains the same as in today’s phones. Even the LOIRP lab photos (featuring towering Ampex FR-900 tape machines used to recover the Moon’s telemetry) look like a prehistory of RAW workflows and non-destructive reprocessing (Figure 4).

Image
(Figure 4 – Ampex FR-900 two-inch tape recorder, used at “McMoon’s” to recover the original line-scan telemetry and digitize it with modern hardware. Credit: Wikimedia Commons.)

Personally, what I take from the Lunar Orbiter’s story in relation to digital cameras is that the “digital turn” wasn’t just about inventing a sensor. It represented a cultural and technical shift toward treating images as *signals that can be computed*. Lunar Orbiter experienced this shift early, transforming film into radio signals and signals back into images, and the CCD (and later CMOS) embedded the entire cycle in a single device.

Works Cited

Boyle, Willard S., and George E. Smith. “The 2009 Nobel Prize in Physics — Press Release.” *NobelPrize.org*, Nobel Prize Outreach, 6 Oct. 2009. Accessed 26 Oct. 2025.

“Lunar Orbiter Program.” *Wikipedia*, Wikimedia Foundation. Accessed 26 Oct. 2025. (Overview of the camera, onboard film development, and readout scanner.)

“Lunar Orbiter Image Recovery Project.” *Wikipedia*, Wikimedia Foundation. Accessed 26 Oct. 2025. (LOIRP process and outcomes.)

“First TV Image of Mars, Hand Colored.” *NASA Science* (JPL/Caltech), 12 Apr. 2011. Accessed 26 Oct. 2025. (Context for pre-CCD digital imaging via spacecraft bitstreams.)

Nokia Bell Labs. “Charge-Coupled Device.” *Innovation Stories*. Accessed 26 Oct. 2025. (CCD invention and development for imaging.)

“Kodak’s First Digital Still Camera from 1975.” *Wired*, 7 May 2008. Accessed 26 Oct. 2025. (Sasson prototype details.)

NASA. “Lunar Orbiter 2.” *NASA Science*, 3 Nov. 2024. Accessed 26 Oct. 2025. (Mission goals and imaging context.)

lucianparisi
Posts: 4
Joined: Thu Sep 26, 2024 2:16 pm

Re: wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

Post by lucianparisi » Mon Oct 27, 2025 10:08 am

I am extremely drawn to the work of Weidi Zhang. Her pieces bring data visualization into an immersive space through the lens of many important topics. Her piece ReCollection highlights Alzheimer’s disease and dementia by creating AI hallucinations to represent fragmented memory. [https://www.zhangweidi.com/recollection]

Image

The AI uses human input to artificially recreate a uniquely human experience. I find this usage of AI very strange and compelling. As explained in Neural Magazine’s write-up on the installation, “The initial inspiration is artificially addressed by the system, whose attitude to recreate serves what we can’t (re)produce anymore.” Zhang also discusses world building and shared, cross-cultural experience as driving forces for the piece. This piece provides an interesting case study of how to amplify a representation of human experience through data and immersion, without replacing it.

Ulrike Gabriel’s “Terrain” is another piece that uses uniquely human phenomena as a source for non-human activity. The work takes user participation in the form of EEG data. The brainwave information is used to control robots. Depending on how the participants react to each other and to the motion of the robots, certain behaviors seem to emerge.

Image


An important theme that pervades our field is how we can connect the human experience to the artificial. These pieces take different angles on that same question.

felix_yuan
Posts: 3
Joined: Wed Oct 01, 2025 2:40 pm

Re: wk3 10.07/10.09: Digital Image, Time, Space, Interactivity, Narrative

Post by felix_yuan » Tue Oct 28, 2025 12:29 am

Revealing thing-power through the constructed reality of the camera and the digital image

Digital images construct an alternative reality of space and reveal the relations between human and non-human objects. By showing the thing-power of non-human objects, a concept proposed by Jane Bennett, the relation of time and space, the two most pervasive non-human things in the universe, can demonstrate its ability to affect the world [https://pages.mtu.edu/~jdslack/readings ... Things.pdf], and this can be artistically very interesting. Two representative examples are the film Blow-Up by Michelangelo Antonioni and the project Netropolis | Berlin by Michael Najjar.

Blow-Up by Michelangelo Antonioni

Blow-Up is an adventurous, experimental masterpiece by Antonioni and one of his most representative works. Through the camera’s ability to capture and create a constructed reality in 2D form, Blow-Up tells a thriller-suspense story about the mystery of a “hidden truth” that the photographer believes he has captured. [https://en.wikipedia.org/wiki/Blowup]

Throughout the story, Antonioni presents the process of the photographic image capturing the space and time of reality, which leads to both real and unreal results: the film begins with the murder photograph, follows the photographer shooting the fashion scene happening in London, and ends with a mysterious tennis game in which there is no tennis ball at all.

Image

As the photographer processes the image in a traditional chemical way, the space, time, and so-called “truth” are slowly unveiled. Because the photographic image carries the constructed reality of a non-human object (space), yet has to be processed, manipulated, and perceived by a human audience, Antonioni illustrated how film can stage photography as a nonhuman medium peculiarly able to represent the production and dissolution of the human subject. [https://www.sensesofcinema.com/2024/fil ... d-godland/]

Michelangelo Antonioni, one of my favorite directors of all time, was inspired by the Italian Neorealism movement in his early work and then began to explore modernist and art cinema, experimenting with how movies, or the camera as a medium itself, push the boundaries of narrative and the relationships among the space of the scene, the characters, the camera, and the gap between the camera’s constructed reality and the actual reality. Some of his best-known works are his "trilogy on modernity and its discontents", L'Avventura (1960), La Notte (1961), and L'Eclisse (1962); the English-language film Blowup (1966); and the multilingual The Passenger (1975) [https://en.wikipedia.org/wiki/Michelang ... movie-ma-2], which is one of my favorite movies of all time. Throughout these works, he explored the way the image shapes space, time, relations (among characters and also non-human elements such as the scene and the night), and narrative, and he became a true pioneer of modernist cinema.

Netropolis | Berlin, Michael Najjar (2003-2006)

Michael Najjar explores the ability of the digital image to reconstruct time and space in a more direct way in his project Netropolis. By overlapping megacity scenes from different times, the 2D image space, which is a constructed reality of the 3D city space, becomes a high-dimensional dynamic space where time and space flow through the audience’s eye. The work reflects an epoch in which the global flow of capital and an invisible torrent of data conflate into one all-embracing network, and it shows how megacities work as the nodes that connect and compact global networks in the 21st century. [https://www.michaelnajjar.com/artworks/netropolis]
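Najjar builds his composites photographically, so the sketch below is only a generic analogue of the gesture he describes: layering exposures of the same city taken at different times into one frame as a weighted blend. The weights and image sizes are invented for illustration:

```python
import numpy as np

def overlay_time_slices(frames, weights=None):
    """Blend photographs of the same scene taken at different times into one
    composite, a rough analogue of layering time inside a single 2D image."""
    frames = np.stack([f.astype(float) for f in frames])
    if weights is None:
        weights = np.ones(len(frames)) / len(frames)     # equal weight per time slice
    weights = np.asarray(weights, dtype=float).reshape(-1, 1, 1, 1)
    return (frames * weights).sum(axis=0)

# toy usage: three fake 4x4 RGB exposures of the "same" city at different times
frames = [np.random.rand(4, 4, 3) for _ in range(3)]
composite = overlay_time_slices(frames)
print(composite.shape)   # (4, 4, 3): one image carrying all three moments
```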

The Netropolis project documented Berlin from 2003 to 2006; Najjar restarted the project in 2016, and it is still ongoing, portraying the megacities of New York, Los Angeles, Mexico City, São Paulo, Tokyo, Shanghai, Beijing, Shenzhen, Hong Kong, Singapore, Seoul, Paris, Dubai, London, and Berlin.

Image

Inspired by Fritz Lang’s monumental futuristic film opus Metropolis (1927) and Ridley Scott’s dystopian cyberpunk vision Blade Runner (1982), Najjar’s series “netropolis” moves the aesthetic exploration of the megacity into the 21st century. The pictures force viewers to continually switch perspectives between near and far, between micro- and macrostructures, since the vibrant nature of the city is grounded in the correlation between closeness and distance.

Exploring the digital image then, and AI now

We are now in a state similar to when the digital image was popularized and became more accessible, but with AI. Artists are trying to explore what AI can do for art, yet some attempts are not as adventurous or as thoroughly thought through as what Antonioni and Najjar did in exploring what the image itself carries as a medium, for example discovering how the space and time within the image itself tell a unique story. That is what revolutionized the film industry, transforming it from recording motion pictures for entertainment into an actual, delicate art form. This is probably because AI gets things done more easily and in a far more accessible way, so instead of looking into what could not be achieved without AI, many attempts use AI to re-implement what already existed in other tools.

But the clues are clear. One example: with AI, a non-human can become an autonomous agent and thus open up endless possibilities in interactive narrative by showing its thing-power as a non-human object. In this way, AI may one day become as revolutionary an art form as the digital image.
