wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

glegrady
Posts: 223
Joined: Wed Sep 22, 2010 12:26 pm

wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by glegrady » Sun Sep 14, 2025 2:26 pm


Give a brief response to any of the material covered in this week's presentations
George Legrady
legrady@mat.ucsb.edu

shashank86
Posts: 9
Joined: Wed Oct 01, 2025 2:36 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by shashank86 » Sat Nov 15, 2025 2:54 pm

Calculating Empires (Ars Electronica)

Screenshot 2025-11-15 at 2.51.10 PM.png


Calculating Empires immediately stood out to me because it doesn’t just show AI as a tool or as a cultural trend. It explains the entire anatomy behind what we call “AI.” I really liked how the work breaks down everything that happens from the moment you type a prompt to the moment an LLM produces a response. It turns something that is usually treated as magical into something mechanical, industrial and heavily interconnected. This actually makes the term “artificial intelligence” feel inaccurate, because the work shows how these systems are neither artificial nor intelligent in the way we imagine. They are infrastructures, pipelines and layers of computation powered by real materials, energy and human labor.

The piece also helped me understand why today’s AI models feel limited and why achieving full imagination-to-output realism is almost impossible. It shows that the current LLM ecosystem is basically feeding on already existing data, recycling and reorganizing what has been previously encoded. Engineers are already saying that training new models is becoming harder because we have stopped producing truly new material and rely too much on existing models to make more material. This work exposes that loop. It turns the entire AI narrative into something closer to a global extraction system than a creative engine.
Screenshot 2025-11-15 at 2.51.26 PM.png
What I also appreciate is that Calculating Empires visualizes this entire structure in a way that an AI model itself would never be able to. The artwork is visually clear, conceptually strong and creatively complete. It proves, in a very direct way, how much of AI output is non-original. Even if you asked an LLM to explain itself, it would never produce such a coherent, emotionally intelligent and truthfully grounded visualization. That is why this piece, almost accidentally, becomes evidence against the idea that AI is “intelligent.” It is really a reflection of us, our systems and how we behave.

The work also made me think about how humans are not very different. We call ourselves original thinkers, but most of our behavior, responses, habits and beliefs are trained by our surroundings. We imitate parents, communities, culture, social rules, media, and internalize what is “right” or “wrong.” We are also a model trained on previous data. So part of me thinks LLMs might eventually mirror this aspect of human learning. But still, the artwork makes it clear that the scale, cost and impact of AI systems are enormous. It goes beyond just algorithms and talks about the entire network: water usage, rare earth minerals, servers, labor, supply chains and geopolitical power. It is a complete ecosystem, not just a clever piece of software.
Screenshot 2025-11-15 at 2.51.43 PM.png
All of this makes the artwork feel not just interesting but necessary. It presents AI not as a futuristic fantasy but as a global organism, a huge infrastructure that needs to be understood before being blindly celebrated. It gives me a realistic perspective on the gap between imagination and reality in the AI world.

jintongyang
Posts: 10
Joined: Wed Oct 01, 2025 2:38 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by jintongyang » Sat Nov 22, 2025 10:38 pm

Artificial intelligence today is often discussed through its impressive outputs, but I am more interested in the processes behind those results. This includes how datasets are collected, categorized, and structured, and what kinds of human decisions shape the training environment of a model. The “intelligence” of a system is deeply dependent on these early, invisible stages.

In Atlas of AI, Kate Crawford argues that AI systems rely heavily on hidden forms of labor, and in many cases intensify the burden of labor rather than reduce it. She describes AI not as a neutral technology, but as an extension of extractive and capitalist logics. This made me think about what the world was like before AI emerged, and whether AI is truly innovative or simply a more violent and accelerated version of the same economic systems.

Anna Ridler’s Myriad (Tulips) (2018) (fig.1) brings this idea back to the materiality of datasets. By photographing and hand-labeling thousands of tulips, she reveals the emotional, selective, and highly subjective labor behind dataset construction. Her work made me reflect on my own experience collecting personal text messages for a data visualization project, where I had to infer “mood” labels from ambiguous language contexts. It made me wonder: What does AI actually learn when the datasets themselves are built upon personal interpretation, bias, and incomplete categories?
Image
Fig.1. Myriad (Tulips) (2018) by Anna Ridler

During model training, machine learning is typically divided into supervised, unsupervised, and semi-supervised learning, each representing a different degree of human intervention. But even in so-called “unsupervised” systems, human choices shape what the model learns. Refik Anadol’s Unsupervised: Machine Hallucinations (2022–2023) (fig.2) is a useful example. The model did not receive labels or categories, so the process is technically unsupervised. However, the dataset itself consists of MoMA’s entire digitized art collection. The range of artworks, the history of what the museum chooses to collect, and the structure of the metadata are all human decisions. The model learns patterns from these selections rather than from a neutral or universal archive. This shows that human judgment enters the system long before the training stage.
Image
Fig.2. Unsupervised: Machine Hallucinations (2022-2023) by Refik Anadol
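The supervised/unsupervised distinction above can be made concrete with a toy sketch. This is purely illustrative (pure Python, not the pipeline behind any artwork discussed): a supervised learner is handed human labels, while an unsupervised one only groups whatever data it is given — yet in both cases the dataset itself is a human selection.

```python
# Toy contrast between supervised and unsupervised learning.
# Illustrative sketch only -- not the pipeline behind Anadol's work.

# A tiny 1-D "dataset": the points themselves are a human selection.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]

# --- Supervised: humans also supply a label for each point. ---
labels = ["low", "low", "low", "high", "high", "high"]

def nearest_label(x, data, labels):
    """Classify x by the label of its nearest training point (1-NN)."""
    nearest = min(range(len(data)), key=lambda i: abs(data[i] - x))
    return labels[nearest]

# --- Unsupervised: no labels; the algorithm finds structure itself. ---
def two_means(data, iters=10):
    """Minimal 1-D k-means with k=2; returns the two cluster centers."""
    a, b = min(data), max(data)  # initialize centers at the extremes
    for _ in range(iters):
        ca = [x for x in data if abs(x - a) <= abs(x - b)]
        cb = [x for x in data if abs(x - a) > abs(x - b)]
        a = sum(ca) / len(ca)
        b = sum(cb) / len(cb)
    return a, b

if __name__ == "__main__":
    print(nearest_label(1.1, data, labels))  # supervised prediction
    print(two_means(data))                   # unsupervised cluster centers
```

Either way, the clusters the unsupervised routine “discovers” are entirely determined by the six numbers a human put in `data` — the same dependency that Anadol’s choice of the MoMA collection illustrates at museum scale.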

This week’s material encouraged me to think more critically about the construction of datasets and the assumptions embedded within them. Instead of focusing only on AI outputs, it feels important to ask who collects the data, how it is labeled, and how these early decisions influence what AI eventually learns. The learning process is never entirely separate from the human world that produces the data, and it reflects aspects of human society and human cognition.

gevher
Posts: 9
Joined: Fri Oct 10, 2025 12:44 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by gevher » Tue Nov 25, 2025 2:08 am

I believe when it comes to AI and using AI in artworks, I belong in the more “spider brained” folk (for lack of a better word) as I still cannot bring myself to see the value in it as much as I see it in human-centred artworks. I don’t like how one-dimensional everything seems and how uniform the stylistic choices are for certain generative models no matter how fancy the language being used is in artist statements (e.g. machine hallucinations, mechanic dreams, etc.)

With that being said, I do enjoy seeing and learning about how it works behind the scenes. I think the technology is indeed fascinating, especially the rapidity of how everything emerged in the last decade. Even though I personally would prefer not to use generative things in my work (at the moment), seeing the training process and how different datasets can behave is interesting.

I particularly liked the HBO documentary of Trevor Paglen on how computers see the world (link: https://www.youtube.com/watch?v=HEI8cuGKiNk). There was a specific section where he talks about “surrealist datasets” and how we can observe the computer “think” instead of “see” (as opposed to the raw image training datasets like animals, flowers, etc.) What’s fascinating about cognition and perception is that we constantly redefine and reassess the way we think about concepts and meaning. When it comes to the machine, no matter how iterative or randomised an algorithm is, there are still fixed values and variables in the code for it to make sense of things. This is fundamentally different from how we humans see and interact with the world around us.

What I like about Trevor Paglen’s work is that he is aware of this fact, and is actively trying to show that machine vision is politically, historically, and culturally constructed in the form of “invisible images”. After that documentary, I also watched another video of his called “At the Expense of Everybody Else” (link: https://www.youtube.com/watch?v=Qmty4__lV30) where he talks about how seeing is never really neutral. There’s a major problem with classification when it comes to massive datasets where going through all the data is not feasible. This creates the issue of the machine “interpreting” abstract nouns and concepts with whatever object or image it can find on the internet, which turns out to be problematic since media always harbours bias (the easiest example is the ethnocentric, socio-economic pattern of white men appearing for the word “CEO”, while Black people are associated with demeaning descriptions).

Some interesting ideas he mentions are subjectivity and reactive curation where everything we see online will be generated individually for us (by tracking our interests and interactions with content just as the algorithms do now). This has the potential to become a very isolating experience and makes me question what might happen to the concept of the word “social” in “social media” in the near future.

hyuncho
Posts: 11
Joined: Wed Oct 01, 2025 2:08 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by hyuncho » Thu Nov 27, 2025 3:04 pm

I would like to discuss two artists who take contrasting approaches to artificial intelligence. The first is Kate Crawford’s Calculating Empires, and the second is Refik Anadol’s Unsupervised: Machine Hallucinations.

Image
Figure 1.
Crawford, K., & Joler, V. (2023). Calculating Empires: A Genealogy of Power and Technology, 1500–2025.

In Calculating Empires, Crawford visualizes the life cycle of the Amazon Echo from its creation to its disposal. Through this diagrammatic approach, she reveals the political and social dimensions behind AI, including the hidden labor, resources, and infrastructures that make such systems possible. Her work exposes the mechanisms within the black box of AI rather than its surface-level aesthetics.

Image
Figure 2.
Anadol, R. (2022–2023). Unsupervised — Machine Hallucinations.

In contrast, Anadol’s Unsupervised trains an AI model on a dataset composed of a museum’s collection and reconfigures it into a large-scale visual installation. His work treats AI as a material for sensory and aesthetic exploration, transforming data into immersive abstract imagery.

The most significant difference between these two works is that Crawford focuses on dissecting the black box of AI, whereas Anadol approaches AI as a generative and aesthetic tool that further abstracts and mystifies the technology.

Art critic Jerry Saltz once described Anadol’s work as a “glorified lava lamp,” a critique that highlights concerns that his installations aestheticize complex technological systems without addressing their political, social, or material implications. Although I do not fully agree with Saltz, I share a sense of disappointment that Anadol’s work can make it difficult for viewers to understand the structural realities of AI, because the system’s complexities are transformed into beautiful abstract images.

This contrast raises an important question about how we might use AI in artistic practice. I believe the distinction begins with whether we treat AI as the subject of art or as a tool for art.

It seems valuable to use AI as a creative tool while also giving viewers opportunities to reflect on the hidden conditions behind its outputs, including the data it relies on and the infrastructures that support it. By comparing Crawford’s and Anadol’s works, I was able to rethink how AI can be incorporated into artistic practice in a way that encourages deeper reflection on the worlds it generates.

References

Crawford, K., & Joler, V. (2023). Calculating Empires: A Genealogy of Power and Technology, 1500–2025. S+T+ARTS Prize / Ars Electronica. Retrieved from https://ars.electronica.art/starts-priz ... g-empires/

Anadol, R. (2022–2023). Unsupervised — Machine Hallucinations — MoMA. Retrieved from https://refikanadol.com/works/unsupervised/

Saltz, J. (2023, February 22). MoMA’s Glorified Lava Lamp: Refik Anadol’s Unsupervised is a Crowd-Pleasing, Like-Generating Mediocrity. Vulture. Retrieved from https://www.vulture.com/article/jerry-s ... vised.html

xuegao
Posts: 8
Joined: Wed Oct 01, 2025 2:25 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by xuegao » Fri Nov 28, 2025 1:33 pm

This week I would like to discuss two artists, Anna Ridler and Egor Kraft, whose artworks share similar visual effects but follow different logics.

Ridler's projects with tulips, irises, and other flowers show how she interprets and deconstructs natural forms into encodable parameters. By photographing and hand-labeling thousands of tulips, she cultivates a dataset: the flowers become units of data with measurable parameters like color, shape, and dimensions. What becomes interesting is that Ridler embeds seemingly unrelated data, such as stock market data, into the ML process and, to some extent, synthesizes a new species of tulip. Her ALife creations show how human values drive and manipulate the data of ALife.

Image
Fig.1 Stock market ordering introduced into the tulip dataset, reshaping the tulips' appearance

Kraft's approach is completely different. He uses ML to fill gaps in human knowledge, practicing in the field of reverse archaeology. Using historical archives and generative AI, he tries to recover the missing parts. One example is CAS 14V Voynich Code. Kraft works with the Voynich manuscript, a mysterious manuscript that no human has ever been able to decode. By applying AI and content-aware learning, he explores how ML might learn rules that humans cannot perceive, whether AI can generate possible interpretations or knowledge, and how AI might reconstruct something that had no clearly defined meaning to begin with.

Image
Fig. 2 CAS 14V Voynich Code

I personally think the contrast between their practices is very interesting; it exemplifies the evolving relationship between humans and ALife.

firving-beck
Posts: 9
Joined: Wed Oct 01, 2025 2:26 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by firving-beck » Sun Nov 30, 2025 2:40 pm

I usually dislike generative AI, from both a conceptual and an aesthetic standpoint. However, I was really drawn to Memo Akten’s work, particularly his 2017 piece Learning to see: We are made of star dust (#2). Part of the appeal was the nostalgia for an earlier era of image generation: both the lower quality and the choppiness and softness that result.
Screenshot 2025-11-29 at 12.31.28 AM.png
Memo Akten - Learning to see: We are made of star dust (#2)
I believe part of what makes the work so special is the extent to which Akten engages with AI and with society at large. In the interview (https://www.artnome.com/news/2018/12/13 ... memo-akten), he describes himself as interested both in the technology itself and in its social impact. For this reason, he focuses his work on what he finds most significant at a larger scale. Akten draws a distinction between AI art and computational art. Engaging critically, he describes using models to expose cognitive bias and polarization rather than reinforce them. The generative deep neural network only sees what it already knows.

Akten is optimistic about human compassion and values but not about technology somehow “saving” society. I feel like a lot of perceived “innovation” and push to integrate new tech into society comes from surface level novelty/trends, simultaneously fueled by a constant drive to prioritize efficiency by automating the creative process. In contrast, this use of generative technology feels more compassionate and comes from a place of genuine curiosity.

ericmrennie
Posts: 9
Joined: Wed Oct 01, 2025 2:33 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by ericmrennie » Sun Nov 30, 2025 6:31 pm

For this week’s discussion, I’d like to focus on artificial intelligence in art and its broader impact on society, rather than highlighting examples of AI-generated works.

There’s a lot of legitimate controversy surrounding the use of machine learning in the arts. Many popular models are trained on artists’ work without permission, raising copyright and consent issues. That unauthorized training data is what enables style-transfer systems and image generators like Midjourney. Because of this, I’m generally opposed to art created with AI. Beyond the ethical concerns, much of the output feels disconnected from the human creative process, and in most cases, the results aren’t very good.

That's why I applaud Holly Herndon for protecting visual artists. She and her organization Spawning created a website, haveibeentrained.com, which allows artists to see whether their images have been included in the training datasets of AI art models. Artists can then ask for their work to be removed from the training data. Two major AI companies, Stability AI and Hugging Face, agreed to honor these requests.
Screenshot 2025-11-30 at 6.34.07 PM.png
Splash page of haveibeentrained.com

I realize that some of my concerns echo the criticisms the art community directed at generative and algorithmic art in the mid-20th century. At the time, many dismissed early computer art as “cold, soulless, and rubbish.” But I disagree with those critiques. Generative artists had to write, shape, and refine the algorithms that governed their outputs. Their work required intention, technical skill, and a sense of design. The machine was their brush.

What feels different, and “lazy,” about much of today’s LLM-generated art is the level of artistic engagement. Instead of crafting systems, AI artists now mainly craft text prompts, which doesn’t feel equivalent to the creative and technical depth that earlier generative artists brought to their work. The process seems more about steering a pre-built model than making something of one’s own.

With that in mind, I think the most meaningful art made with artificial intelligence will come from using these systems in unconventional, purposeful ways, much like the pioneers of the generative art movement did. As Memo Akten notes in his interview, there’s an important distinction between an “AI artist” and a “computational artist.” The real artistry, in my view, comes from those who shape and manipulate the underlying code or models, not from those who simply type prompts. In that approach, the machine becomes a collaborator rather than a content factory. Akten emphasizes this, arguing that a random GAN sample lacks the conceptual depth and intentionality that define thoughtful creative work.

jcrescenzo
Posts: 9
Joined: Wed Oct 01, 2025 2:17 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by jcrescenzo » Sun Dec 07, 2025 9:59 pm

Week 8: AI the Misnomer

Alan Turing, a pioneer of the computer, said of the question “Can machines think?” that “it is too meaningless to deserve discussion.” Drawing on Harold Cohen’s AARON and the MIT paper “A Survey of Recent Practice of Artificial Life in Visual Art,” I will argue that there is no such thing as artificially intelligent art; rather, there is something called autonomous art.

Cohen, a painter, conceived the AARON software in the late 1960s while at UC San Diego. As the Whitney Museum of American Art describes on its page, AARON is software that “interpret[s] commands from a computer to make line drawings on paper with automated pens and add color with brushes.” In other words, AARON is an automated system, a text-prompt interface, not unlike the point-and-click command systems of a GUI.
Screenshot 2025-12-07 at 9.58.42 PM.png
Even the authors of the MIT paper admit: “This survey therefore adopts the second definition of AI art, that is, AI as a collaborator with humans to create ALife art.” How is this any different from before? Generative tools have long been a part of software programs.

The paper goes on to define it in relation to Lev Manovich’s definition of AI:
“AI can create unique, systematic art forms by interpreting and extending human cultural patterns. Thus, AI art is a type of art we humans cannot create because of the limitations of our bodies and brains and other constraints.”

In this case AARON does not so much extend “cultural patterns” as reproduce them. One constraint of computers and machine learning is that “zero-shot” ability, the capacity to perform genuinely novel tasks, is simply not feasible. One research paper points out that we would need exponentially larger amounts of data for this, and that synthetic data cannot substitute for real data.
Computer programs are certainly collaborators in the design of digital art. What do you think Photoshop is for?

Noam Chomsky put it best: “A better question is can programs think? What is a program? It is a theory written in a crazy notation so that a computer can implement it. The real question is can these theories of programming think. In other words can they provide insight into the nature of thinking.”
A Survey of Recent Practice of Artificial Life in Visual Art Open Access
https://direct.mit.edu/artl/article/30/ ... al-Life-in

Harold Cohen at the Whitney Museum
https://whitney.org/exhibitions/harold-cohen-aaron

Alan Turing: Can Machines Think?
courses.cs.umbc.edu/471/papers/turing.pdf

Noam Chomsky
https://www.youtube.com/watch?v=Ex9GbzX6tMo

Week 9 Digital Preservation

Digital preservation is a bit ironic to me. I grew up in the early days of the commercial Internet (the 1990s and 2000s; yes, I am that old) and saw the advent of compression formats such as JPEG and QuickTime video, as well as the adoption of broadband internet. We expanded the speed of the internet and reduced the size of media data. This is what made streaming possible, and why we no longer go to Blockbuster or the library for DVDs.

This amplified the prevailing argument that the internet would be a digital archival ecosystem, where things live forever. But this has not been the case.

The preservation of time-based media has and will be primarily constrained by the cost of space (digital and physical) and organizational cost of maintaining and repairing outdated technology.

Take Nam June Paik’s Video Flag, which organizes a giant array of 90s televisions with rolling images of news and culture shaped into the US flag.

The Smithsonian had to identify “condition issues and risk associated with electrical components, fire safety, and ventilation; and preparation for exhibition–monitor repair, calibration, and addressing weak signal flow (Video Flag)”

There are technical challenges, but more importantly, these are challenges of human knowledge. In an environment of constantly proliferating technologies, this technical knowledge is often tied to people’s age.

For example, my grandfather designed and repaired radio communication systems on airplanes for the Air Force during WWII. He could repair any device in the home except a computer. To be fair, he didn’t own one.

This human knowledge, or the lack of it, has real economic consequences. During the pandemic, for example, state governments watched their unemployment systems collapse and had to scramble to find COBOL programmers. COBOL was designed in 1959 and adopted in the 1960s. In the US, the pandemic forced millions of people to file for unemployment through digital systems unable to cope with that many people at once, crashing or running so slowly as to be unusable.

It is quite astonishing that the most in-demand programming knowledge in 2020 was not C++ or its counterparts. It was a programming language from the Eisenhower era.

Unemployment agencies are poorly funded and haven’t been able to update their systems. The cost of digital systems is the cost of updating software, and that is a herculean task. The difficulty was finding people still alive and able to work with this technology.

The challenge of preserving digital media is putting capital towards the productive use of people’s technical knowledge. It is a marvelous opportunity to give people purpose and reward them for their contribution to society. And it is crystal clear that, without a publicly funded program, much artwork and knowledge will be lost. It will be a loss to our culture if we do not take advantage of the opportunity to set people to the task of doing this work.

As a kid, I spent Sundays with my PopPop repairing televisions in his basement, broken TVs he found on the street. Then he would donate them to neighbors and the community. I didn’t know till later that after he left the Military, he was offered a job by the Pentagon to develop electronics. He chose to open a pizza parlor instead.


Video Flag: Nam June Paik 1996
https://tbma.si.edu/work/video-flag

'COBOL Cowboys' Aim To Rescue Sluggish State Unemployment Systems
https://www.npr.org/2020/04/22/84168262 ... nt-systems

COBOL
https://en.wikipedia.org/wiki/COBOL

italo
Posts: 8
Joined: Wed Oct 01, 2025 2:34 pm

Re: wk8 11.11/11.13: Artificial Neural Networks | CNN | Style Transfer, Artificial Intelligence

Post by italo » Mon Dec 08, 2025 12:01 am

Image

It feels really important to talk about AI right now, because we’re witnessing a revolution happening day by day. The rapid advances amaze me, but at the same time they spark serious debates around surveillance, privacy, and power. One artist who, for me, opens this conversation in a brilliant way is Trevor Paglen. He shows how computer vision and AI systems “see” the world, and he uses his art to question what that means. One work I find especially interesting is Clouds, where he presents how computer-vision algorithms “see” the sky: photographs of clouds are overlaid with the circles, lines, and shapes these algorithms detect when analyzing them. By combining photography with this machine gaze, he invites us to reflect on what it means when technology “looks,” especially when used for surveillance, classification, or control.

Image

Also this week we saw the project Fencing Hallucination by Weihao Qiu, an interactive installation that mixes real human movement with AI-generated imagery, turning physical gestures into virtual fencing and then creating chronophotographs with AI. Interacting directly with artists, and seeing how they use technology to provoke thought or evoke emotion, makes the potential of such projects feel much more alive, concrete, and close. It was a good opportunity to talk with him and learn more about his techniques and tools.

Image
