Emily Bender et al.'s seminal 2021 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? discusses how language models tend to perpetuate and amplify gender stereotypes in occupational contexts.
Language models reflect and magnify societal biases present in their training data, particularly by associating certain professions with specific genders. The models tend to:
* Associate male pronouns with higher-status professions (e.g., doctor, CEO, scientist)
* Associate female pronouns with traditionally female-dominated roles (e.g., nurse, secretary, assistant)
* Make stronger associations between men and career-related terms
* Make stronger associations between women and family-related terms
When generating text about professionals, these models often default to male pronouns unless explicitly specified otherwise.
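To make the pronoun/occupation point concrete, here is a minimal sketch (my own illustration, not code from the paper) that probes a masked language model with a fill-in-the-blank template. The choice of bert-base-uncased and the template sentence are assumptions, and a real probing study would use many more templates and occupations.

```python
# Illustrative sketch: compare how strongly a masked LM scores "he" vs. "she"
# for different occupations. Not from Bender et al.; model and template are
# my own choices for demonstration.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

occupations = ["doctor", "ceo", "scientist", "nurse", "secretary", "assistant"]
template = "The {} said that [MASK] would be late."

for job in occupations:
    # Restrict predictions to the two pronouns and compare their scores
    preds = fill(template.format(job), targets=["he", "she"])
    scores = {p["token_str"]: round(p["score"], 4) for p in preds}
    print(f"{job:>10}: {scores}")
```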
The paper notes that these biases can have real-world consequences when such systems are deployed in applications like:
* Resume screening
* Job recommendation systems
* Professional networking platforms
* Auto-complete suggestions
The authors emphasize that these biases are not merely technical issues to be solved, but reflect deeper structural inequalities in society that are then encoded and amplified through these models.
In my work with Midjourney, I've found:
Re: Project 3: Cultural / Analytic Perspectives
Similarly to other students, I wanted to examine its understanding of cultural context/references. Specifically, I wanted to work with language and see if using different forms would generate or attribute anything new to the images themselves.
For reference, this is the game I am trying to generate:
colombians playing the game sapo
colombianos jugando el juego sapo