Layers of Bias
“Layers of Bias” is an interactive physical installation that explores how different layers of bias (machine learning algorithms, image-generation models, and human perspectives) affect our interpretation of information in this digital age.
Video explaining Layers of Bias
Collaborators: Nathan Williams
Recognition: Exhibited in RIXC’s Art Science Festival: Symbiotic Senses 2024
Timeline: Fall 2024
Documentation: Our GitHub Repository
Process
1. Brainstorming & Concept
This project emerged from our growing frustration with the biased and often contradictory ways current events are reported across different media outlets. At the same time, AI-generated imagery was becoming increasingly widespread and accessible—used by citizen journalists, media platforms, and even political figures like President Donald Trump to spread awareness of certain political events, or advance particular agendas.
Examples of AI-generated imagery used in recent events:
In response, Nathan and I set out to create an interactive installation that invites users to critically engage with the layers of bias embedded in the media they consume. Our goal is to raise awareness of how these overlapping biases shape public perception and influence how people make sense of information in today’s digital age.
The project highlights two key sources of bias we seek to unpack:
- Machine Learning Algorithms – including speech-to-text language models and AI image generators, which are not neutral tools but reflections of the data they’re trained on*.
- Inherent Human Biases – the subjective perspectives we all bring when interpreting and sharing information.
We also wanted to show how these layers of bias reinforce one another: by the time the information reaches the user, it is no longer clear which type of bias is truly at play.
*The point of our project is not to denounce or criticize AI or image-generation tools. Rather, we hope the project critically explores both the biases in the data that machine learning algorithms are fed and the biases they feed back into the information they generate.
2. How it works
1. The experience begins with one image. The first player acts as a journalist and is prompted to describe the image in as much detail as possible by speaking into a microphone. Their speech is converted into text by a speech-to-text ML model, and that text is used to generate a new image.
2. The second player is then prompted to describe this generated image. Again, their response is used to generate yet another image.
3. This describe-and-generate cycle repeats until there are four uniquely generated images, which are shown together beside the original, untouched image.
Users can observe how the generated images drift from the original, untouched image, a result of biases in human perception and interpretation as well as in the machine-learning algorithms used.
4. The players also receive a receipt of their experience, in which all of their responses and generated images are recorded and printed.
Here’s an example:
Player 2:
“There are three white men with all of brown and kind of wavy hair, and they’re all lying on a pink mattress with pink pillows. The floor is blue. There are wooden tables next to them. The two of them are wearing blue shorts. One of them is wearing white shorts. They’re all wearing white shirts.”
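The alternating describe-and-generate loop above can be sketched in a few lines of Python. This is a simplified illustration, not the actual repo code: `transcribe` and `generate_image` are hypothetical stand-ins for the speech-to-text model and the image generator.

```python
# A minimal sketch of the describe-and-generate loop. `transcribe` and
# `generate_image` are hypothetical callables injected by the caller,
# standing in for the real speech-to-text and image-generation steps.

def run_round(original_image, transcribe, generate_image, rounds=4):
    """Alternate between player descriptions and generated images.

    Returns the list of generated images (length `rounds`) and a log of
    every (player, description) pair, mirroring the printed receipt.
    """
    current = original_image
    images, transcripts = [], []
    for i in range(rounds):
        player = 1 if i % 2 == 0 else 2            # players take turns
        description = transcribe(player, current)  # speech -> text
        current = generate_image(description)      # text -> new image
        transcripts.append((player, description))
        images.append(current)
    return images, transcripts
```

In the installation, `transcribe` records from the microphone; injecting it as a parameter keeps the loop itself easy to test in isolation.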
3. Prototyping
First prototype
Originally, we wanted the two users to sit on opposite sides of the monitor, and for the monitor to rotate 180 degrees, back and forth, between the two users, as such:
Second prototype
We decided to simplify the entire set-up. We took away the rotating monitor, and integrated the buttons into a microphone to make the user’s experience resemble that of a news reporter.
Creating wooden encasing
Testing thermal printer
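To give a sense of how the receipt text is prepared for the thermal printer, here is a small formatting sketch. The 32-character line width is an assumption for a typical 58 mm thermal printer; the actual repo may drive the printer differently.

```python
# A sketch of formatting one round of the experience for the receipt.
# LINE_WIDTH = 32 is an assumption for a 58 mm thermal printer.
import textwrap

LINE_WIDTH = 32  # characters per printed line (assumed)

def format_receipt_entry(round_number, player, description):
    """Wrap one player's description into printer-width lines."""
    header = f"Round {round_number} - Player {player}"
    body = textwrap.fill(f'"{description}"', width=LINE_WIDTH)
    return f"{header}\n{'-' * LINE_WIDTH}\n{body}\n"
```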
Creating the “microphone”
4. Code
Excerpt of code showing the use of speech recognition to transform speech to text:
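As a rough sketch of this step, here is what the speech-to-text capture can look like using the SpeechRecognition library; the function name and parameters below are illustrative assumptions, not the actual excerpt from our repo.

```python
# A minimal sketch of the speech-to-text step using the SpeechRecognition
# library (https://pypi.org/project/SpeechRecognition/). Names and
# parameters here are assumptions, not the repo's actual code.
import speech_recognition as sr

def listen_for_description(phrase_time_limit=10):
    """Record from the microphone and return the transcribed text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        audio = recognizer.listen(source, phrase_time_limit=phrase_time_limit)
    try:
        return recognizer.recognize_google(audio)  # Google Web Speech API
    except sr.UnknownValueError:
        return ""  # speech was unintelligible; let the player retry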
Excerpt of code showing the use of StabilityAI’s API to generate images:
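A hedged sketch of the generation call, assuming Stability AI's v1 REST text-to-image endpoint; the engine id and parameters below are illustrative and may differ from what our repo actually uses.

```python
# A minimal sketch of calling Stability AI's text-to-image REST API.
# ENGINE_ID and the parameter values are assumptions for illustration.
import os
import requests

API_HOST = "https://api.stability.ai"
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"  # assumed engine

def build_payload(description):
    """Turn a player's spoken description into the request body."""
    return {
        "text_prompts": [{"text": description}],
        "cfg_scale": 7,   # how strictly the image follows the prompt
        "samples": 1,
        "steps": 30,
    }

def generate_image(description):
    response = requests.post(
        f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
        headers={
            "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
            "Accept": "application/json",
        },
        json=build_payload(description),
        timeout=60,
    )
    response.raise_for_status()
    # each returned artifact's image is base64-encoded
    return response.json()["artifacts"][0]["base64"]
```

Separating `build_payload` from the network call keeps the prompt construction easy to inspect and test without hitting the API.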
5. Final Product
Credits
- Nathan Williams for the fantastic collaboration
- RIXC Art and Science Festival for giving us the opportunity to exhibit our work
- Jesse Harding and Lai Yi Ohlsen for your help and guidance on this project