Report 7: Machine-Learning & Fake News

glegrady
Posts: 203
Joined: Wed Sep 22, 2010 12:26 pm

Report 7: Machine-Learning & Fake News

Post by glegrady » Tue Mar 29, 2022 2:22 pm

Report 7: Machine-Learning & Fake News

Weeks 7, 8, and 9 focus on machine learning, convolutional neural networks, deep fakes, and other issues we are facing through artificial intelligence and its impact on photography. We are in an unresolved situation: a) we believe in photographs; b) photographs and videos can be faked. The culture has yet to figure out how to make sense of this discrepancy!

Please write a report on a topic of your choice related to any of these topics, addressing technical, cultural, visual, or other perspectives. The report is due around May 27, 2022.
George Legrady
legrady@mat.ucsb.edu

siennahelena
Posts: 8
Joined: Tue Mar 29, 2022 3:33 pm

Re: Report 7: Machine-Learning & Fake News

Post by siennahelena » Tue May 24, 2022 11:27 am

Throughout the past few weeks of class, we have had several conversations about text-visual AI systems. For instance, Fabian Offert presented his imgs.ai search engine for digital art and described how machine learning can distinguish between different classes and categories to help identify canons of historical artworks. He mentioned how a machine-learning model can be trained on one dataset and then transferred to label and classify another.

While we were having these conversations, I have also been reading the book “Invisible Women” by Caroline Criado-Perez. In this book, Criado-Perez describes how the world has largely been designed for and around men, with a dual consequence: [1] a dearth of data about women and [2] inadequately designed systems, policies, and products for women. In one chapter, she elaborates specifically on bias in datasets, describing how training AI models on biased datasets will amplify that bias. As an example, Criado-Perez refers to an empirical study that examined how a dataset with stereotyped image labeling affected the bias of models trained on it (Zhao, Wang, Yatskar, Ordonez, & Chang, 2017). The researchers quantified the amplification of bias in a model trained on a dataset that gender-stereotyped the activity of cooking: “the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time” (p. 1). Additionally, because it was trained on the biased dataset, the model even mislabeled images such that the stereotype of cooking = women superseded other indicators that the person in an image was a man (as in the image below).
Screen Shot 2022-05-24 at 12.15.03 PM.png
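To make those numbers concrete, here is a minimal sketch (in Python; this is not the study's actual code) of how such amplification can be measured: compare how often an activity co-occurs with the “woman” label in the training annotations versus in the model's predictions. The label lists below are hypothetical stand-ins for real dataset annotations.

def gender_ratio(pairs, activity):
    """Fraction of instances of `activity` labeled as involving a woman."""
    women = sum(1 for act, gender in pairs if act == activity and gender == "woman")
    total = sum(1 for act, gender in pairs if act == activity)
    return women / total if total else 0.0

# Hypothetical (activity, gender) annotations and model outputs, chosen to
# mirror the paper's reported numbers (~33% training disparity, 68% at test).
training_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_ratio = gender_ratio(training_labels, "cooking")   # 0.66
pred_ratio = gender_ratio(model_predictions, "cooking")  # 0.84

# Zhao et al. call the gap between these ratios "bias amplification":
# the model doesn't just inherit the dataset's skew, it exaggerates it.
print(f"training skew:  {train_ratio:.2f}")
print(f"predicted skew: {pred_ratio:.2f}")
print(f"amplification:  {pred_ratio - train_ratio:+.2f}")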
In terms of social relevance, this can have severe implications. If AI developers don't consider the bias of the datasets on which they train their models, the ramifications for marginalized and oppressed groups will only amplify. In another book, “Race After Technology: Abolitionist Tools for the New Jim Code”, sociologist Ruha Benjamin similarly warns that the trajectory of new technologies will exacerbate already-existing racism and bias. As one example, Benjamin describes the racism built into risk algorithms for law enforcement. Courts use these algorithms to predict which defendants are more likely to become repeat offenders. The algorithms tend to label Black individuals as likely repeat offenders at much higher rates than white individuals, yet they are incredibly unreliable at forecasting crime. This happens because the risk algorithms are trained on data from law enforcement that is systemically biased against Black people; the algorithms built on that data naturally reflect and amplify the bias.
Screen Shot 2022-05-24 at 12.14.30 PM.png
Image from a ProPublica study (Angwin, Larson, Mattu, & Kirchner, 2016) about racist risk algorithms
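The feedback loop Benjamin describes can be illustrated with a toy simulation. To be clear, this is not the COMPAS algorithm or its data; everything below is synthetic and exaggerated for clarity. The point is that if the historical “reoffended” labels are driven by how heavily a group is policed, a model trained on them reproduces that skew.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
prior_arrests = rng.poisson(1 + 2 * group)  # group B is policed more heavily,
                                            # so it accumulates more arrests

# Biased historical labels: driven by arrest counts, not underlying behavior.
rearrested = (prior_arrests + rng.normal(0, 1, n) > 2).astype(int)

model = LogisticRegression().fit(np.c_[group, prior_arrests], rearrested)
risk = model.predict_proba(np.c_[group, prior_arrests])[:, 1]

# The model rates group B far riskier on average, purely because the
# training data encoded heavier policing of that group.
for g in (0, 1):
    print(f"group {'AB'[g]}: mean predicted risk = {risk[group == g].mean():.2f}")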

I find a quote from Benjamin’s book particularly salient when thinking about the impact of building algorithms/AI on biased data: “Zeros and ones, if we are not careful, could deepen the divides between haves and have-nots, between the deserving and the undeserving – rusty value judgments embedded in shiny new systems.”

References
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
  • Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code (1st edition). Medford, MA: Polity.
  • Criado-Perez, C. (2019). Invisible Women: Data Bias in a World Designed for Men. Abrams Press.
  • Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K.-W. (2017). Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. arXiv:1707.09457.

ashleybruce
Posts: 11
Joined: Thu Jan 07, 2021 2:59 pm

Re: Report 7: Machine-Learning & Fake News

Post by ashleybruce » Fri Jun 03, 2022 9:27 am

One of the topics I was interested in exploring more from this week was computer-generated art. Especially after learning about DALLE2, I was intrigued that a computer can take user input and produce art from it, and that it actually works.

DALLE2 is still a waitlisted program and, of course, not open source, so the exact mechanisms of how it works are largely unknown to the public. But OpenAI includes some examples of its output on their website: https://openai.com/dall-e-2/

If the user types the input "An astronaut lounging on a tropical resort in space in a vaporwave style", these are some of the outputs it will give:
astronaut1.png
astronaut2.png
astronaut3.png
This is only one example, and there are others shown on the website, but it's crazy how DALLE2 was able to produce such an accurate representation of what the user requested. It suggests the machine understands not just each of the individual words, but the request as a whole, and how each element interacts with the others.
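OpenAI has not released DALLE2's code, but one of its publicly available building blocks, CLIP (which DALLE2's pipeline uses to connect text and images), can demonstrate this whole-sentence matching. Here is a small sketch using the Hugging Face transformers package; the image filename is hypothetical, standing in for one of the generated images above.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("astronaut1.png")  # hypothetical: one of the images above
captions = [
    "an astronaut lounging on a tropical resort in space in a vaporwave style",
    "an astronaut walking on the moon",
    "a tropical resort at sunset",
]

# Score the image against each caption; probabilities sum to 1 across captions.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

# The full prompt should score highest: the model is matching the sentence
# as a whole, not just spotting the word "astronaut" or "tropical".
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")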

Another example of an AI art-generation program that I've played around with is called WOMBO Dream: https://app.wombo.art. It takes in user input and lets the user select a style for the output. The results from this are far more abstract, though. Here is the input "fire and water" with the "rose gold" style selected:
fire and water.png
I think some interesting questions stem from this branch of media. The first is whether we can consider the things generated by these algorithms to be art. I believe they are. It shouldn't matter who or what generated the art; especially nowadays, there is plenty of computer-assisted art, so why can't machines make art themselves? But this does lead to asking about the future of art. If machines make art, will there still be a need for human artists? I think there will always be a want and a need for art made by humans. We are, at our core, artistic, and art will always be a way to represent ourselves.

Another, more social question this type of art raises deals with "fake news". As we can see, technology already exists that can take our words and make something from them. While these are clearly artistic pieces, malicious users could potentially use such tools in the future to fake images of people doing things that never happened. How do we go about preventing and recognizing this? I don't know if there is a good answer for that yet.

nataliadubon
Posts: 15
Joined: Tue Mar 29, 2022 3:30 pm

Re: Report 7: Machine-Learning & Fake News

Post by nataliadubon » Sun Jun 05, 2022 1:01 am

Photo-editing apps have become commercialized and increasingly prominent in the area of body and facial distortion. Edited photos could once be exposed by comparing them to unedited versions or to video, but in this age of technology that is no longer reliable. There is a growing lack of trust toward videos, and rightfully so. Danae Mercer, a journalist and model, released a video on her social media platforms that dispels the myth that only still images can be readily manipulated. Using a screen recorder, Mercer showed an editing tool altering footage of herself, making her legs longer, her waist thinner, and her skin airbrushed.

This becomes even more problematic given that the current generation of young adults has reportedly used social media much more frequently since the start of the pandemic a couple of years ago. Research conducted by The Brandon Agency describes this phenomenon:
For Gen Z, social media has lessened the loneliness of isolation (65%), 61% among Millennials. Some students and those just joining the workforce can’t maintain or build relationships in person. Social media and video conferencing have become the lifeline.
Social media platforms such as Instagram and Facebook focused primarily on sharing photos rather than videos. However, with the emergence of TikTok and Snapchat (two platforms that focus instead on video, the latter in real time), using beautifying filters has become a new norm. The shift from relying on photographs to relying on videos has also shifted where facial and body editing happens. A popular facial-distortion app known as "Facetune" enables quick and effortless photo editing, but the technology for body modification in videos had yet to be developed. That is, until now.


Apps such as "Pretty Up" allow users to distort their bodies and faces in videos in multiple ways, such as thinning the waist, widening the hips, and lengthening the legs. Users who have already kept up a certain public image through edited photos can now continue that look through video as well, which was once a measure of authenticity. However, there are still plenty of people who are unaware of the software used to edit videos and other deep fakes. Acts like these are therefore detrimental to those who see exaggerated beauty standards as attainable and realistic when they truly aren't. Even Zoom has its own beauty filter, which further shows how "normal" facial and body editing has become.
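For a sense of how the simplest ingredient of these video "beautify" filters works, here is a minimal sketch of per-frame skin smoothing with OpenCV. Real apps like "Pretty Up" add face tracking and mesh warping (for waists, hips, legs) on top of this; the filenames here are hypothetical.

import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("smoothed.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Bilateral filtering blurs skin texture while keeping strong edges
    # (eyes, hair, jawline) sharp, which is why the result looks
    # "airbrushed" rather than simply blurry.
    out.write(cv2.bilateralFilter(frame, 9, 75, 75))

cap.release()
out.release()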
