Layers of Bias
“Layers of Bias” is an interactive physical installation that explores how different layers of bias (machine learning algorithms, image-generation models, and human perspectives) affect our interpretation of information in this digital age.
Video explaining Layers of Bias
Collaborators: Nathan Williams
Recognition: Exhibited in RIXC’s Art Science Festival: Symbiotic Senses 2024
Timeline: Fall 2024
Documentation: Our GitHub Repository
Process
1. Brainstorming & Concept
This project emerged from our growing frustration with the biased and often contradictory ways current events are reported across different media outlets. At the same time, AI-generated imagery was becoming increasingly widespread and accessible—used by citizen journalists, media platforms, and even political figures like President Donald Trump to spread awareness of certain political events, or advance particular agendas.
Examples of AI-generated imagery used in recent news events:
The AI-generated image “ALL EYES ON RAFAH” (a pro-Palestinian political slogan) was shared 45 million times on Instagram. The origin of the image is disputed.
While the image raised awareness of the dire situation in Palestine and near the Gaza Strip, we have to think critically about why this AI-generated image gained so much more of the world’s attention than the thousands of photos and videos coming authentically out of Gaza. What are our moral responsibilities when it comes to sharing images of an ongoing genocide on social media?
An AI-generated image purporting to show Vice President Kamala Harris wearing communist garb. The image was shared by X's owner, Elon Musk, to millions of his social media followers in August.
AI-generated video posted by Trump on his social media account in March 2025. It depicts a family emerging from the wreckage of war-torn Gaza into a beachside resort town lined with skyscrapers. Trump is seen sipping cocktails with a topless Benjamin Netanyahu (Israeli Prime Minister) on sun loungers, while Elon Musk tears flatbread into dips.
The video first emerged in February, shortly after Trump unveiled his property development plan for Gaza, under which he said he wants to “clean out” the population of about 2 million people to create the “Riviera of the Middle East”.
The video was generated by Solo Avital, an LA-based film-maker, who said that it was meant to be satire.
Source: The Guardian
In response, Nathan and I set out to create an interactive installation that invites users to critically engage with the layers of bias embedded in the media they consume. Our goal is to raise awareness of how these overlapping biases shape public perception and influence how people make sense of information in today’s digital age.
The project highlights two key sources of bias we seek to unpack:
- Machine Learning Algorithms – including speech-to-text language models and AI image generators, which are not neutral tools but reflections of the data they’re trained on*.
- Inherent Human Biases – the subjective perspectives we all bring when interpreting and sharing information.
We also wanted to show how these layers of bias reinforce one another: by the time the information is presented to the user, it is no longer clear which type of bias is truly at play.
*Our point is not to denounce or criticize AI or image-generation tools. Rather, we hope the project critically explores the biases that machine learning algorithms are fed and that they, in turn, feed into the information they generate.
2. How it works
1. The experience begins with a single image. The first player acts as a journalist and is prompted to describe the image in as much detail as possible by speaking into a microphone. Their speech is converted into text by a speech-to-text model, and that transcript is then used to generate a new image.
2. The second player is then prompted to describe this generated image. Again, their response is used to generate yet another image.
3. This entire process repeats three more times, until there are four uniquely generated images, which are shown together beside the original, untouched image (a minimal code sketch of this describe-and-generate loop appears after this list).
Users can observe how the generated images drift from the original, untouched image. This drift is the result of biases in human perception and interpretation, as well as in the machine-learning algorithms used.
4. The players also receive a receipt of their experience, on which all of their responses and generated images are recorded and printed (also sketched below).
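For those curious about the plumbing, here is a minimal sketch of the describe-and-generate loop in Python. It is not the exact code from our GitHub repository: it assumes OpenAI’s open-source Whisper model for speech-to-text and Stable Diffusion (via the Hugging Face diffusers library) for image generation, and the file paths and round count are illustrative.

```python
# Minimal sketch of the describe-and-generate loop, assuming Whisper for
# speech-to-text and Stable Diffusion for image generation. Paths are
# illustrative; the audio-recording step is omitted.
import whisper
from diffusers import StableDiffusionPipeline

ROUNDS = 4  # four generated images are shown beside the original

stt_model = whisper.load_model("base")
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

transcripts, images = [], []
for round_num in range(ROUNDS):
    # Each player describes the current image into a microphone; the
    # recording is saved to disk and then transcribed.
    audio_path = f"descriptions/round_{round_num}.wav"  # hypothetical path
    transcript = stt_model.transcribe(audio_path)["text"]
    transcripts.append(transcript)

    # The transcript becomes the prompt for the next image, so both the
    # speaker's interpretation and the model's training biases shape the result.
    image = pipe(transcript).images[0]
    image.save(f"generated/round_{round_num}.png")
    images.append(image)
```

Because each round’s prompt is a human paraphrase of a machine-generated image, any bias introduced in one layer is compounded by the next.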
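The receipt step can be sketched in a similar spirit. The snippet below continues from the loop above and assumes a USB thermal printer driven by the python-escpos library; the vendor and product IDs are placeholders, not the actual hardware used in the installation.

```python
# Hedged sketch of printing the receipt on a USB thermal printer with
# python-escpos. Vendor/product IDs below are placeholders.
from escpos.printer import Usb

printer = Usb(0x04B8, 0x0202)  # hypothetical USB vendor/product IDs
printer.text("LAYERS OF BIAS\n\n")
for round_num, transcript in enumerate(transcripts):
    printer.text(f"Round {round_num + 1}: {transcript}\n")
    printer.image(f"generated/round_{round_num}.png")  # printed as a monochrome raster
printer.cut()
```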
3. Final Product