Pepi Ng



NYC-based creative technologist, 
designer and community organizer

Home
About

Layers of Bias



“Layers of Bias” is an interactive physical installation that explores how different layers of bias - machine learning algorithms, image-generation models, and human perspectives - affect our interpretation of information in this digital age.






Video explaining Layers of Bias


Collaborators: Nathan Williams
Recognition: Exhibited in RIXC’s Art Science Festival: Symbiotic Senses 2024

Timeline: Fall 2024
Documentation: Our GitHub Repository






Process



1. Brainstorming & Concept



This project emerged from our growing frustration with the biased and often contradictory ways current events are reported across different media outlets. At the same time, AI-generated imagery was becoming increasingly widespread and accessible, used by citizen journalists, media platforms, and even political figures like President Donald Trump to spread awareness of certain political events or advance particular agendas.

Examples of AI-generated imagery used in recent events:



The AI-generated image “ALL EYES ON RAFAH” (a pro-Palestinian political slogan) was shared 45 million times on Instagram. The origin of the image is disputed.
While the image raised awareness of the dire situation in Palestine and near the Gaza Strip, we have to think critically about why this AI-generated image gained so much more of the world’s attention than the thousands of photos and videos authentically coming out of Gaza. What are our moral responsibilities when it comes to sharing images of an ongoing genocide on social media?


An AI-generated image purporting to show Vice President Kamala Harris wearing communist garb. The image was shared by X's owner, Elon Musk, to millions of his social media followers in August.



AI-generated video posted by Trump on his social media account in March 2025. It depicts a family emerging from the wreckage of war-torn Gaza into a beachside resort town lined with skyscrapers. Trump is seen sipping cocktails with a topless Benjamin Netanyahu (Israeli Prime Minister) on sun loungers, while Elon Musk tears flatbread into dips.

The video first emerged in February, shortly after Trump unveiled his property development plan for Gaza, under which he said he wants to “clean out” the population of about 2 million people to create the “Riviera of the Middle East”.

The video was generated by Solo Avital, an LA-based film-maker, who said that it was meant to be satire. 

Source: The Guardian







In response, Nathan and I set out to create an interactive installation that invites users to critically engage with the layers of bias embedded in the media they consume. Our goal is to raise awareness of how these overlapping biases shape public perception and influence how people make sense of information in today’s digital age.

The project highlights two key sources of bias we seek to unpack:

  1. Machine Learning Algorithms – including speech-to-text language models and AI image generators, which are not neutral tools but reflections of the data they’re trained on*.

  2. Inherent Human Biases – the subjective perspectives we all bring when interpreting and sharing information.


We also wanted to show how these layers of bias reinforce one another: by the time information reaches the user, it is no longer clear which type of bias is truly at play.


*The point of our project is not to denounce or criticize AI or image-generation tools. Rather, we hope it critically explores both the biases that machine learning algorithms are fed and the biased information they generate in turn.





2. How it works



1. The experience begins with a single image. The first player acts as a journalist and is prompted to describe the image in as much detail as possible by speaking into a microphone. Their speech is converted into text by a speech-to-text model, and that text is used to generate a new image.

2. The second player is then prompted to describe this generated image. Again, their response is used to generate yet another image.



3. This process of describing and regenerating repeats until there are four uniquely generated images, which are then shown together beside the original, untouched image.

Users can observe how the generated images drift from the original, a result of biases in human perception and interpretation as well as in the machine-learning algorithms used.





4. The players also receive a receipt of their experience, in which all of their responses and generated images are recorded and printed.
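The four steps above can be sketched as a simple loop. This is a hypothetical Python sketch, not our actual installation code: `transcribe_speech` and `generate_image` are placeholder stand-ins for the real speech-to-text and image-generation calls.

```python
# Sketch of the game loop: alternating players describe an image,
# and each description seeds the next generated image.
# transcribe_speech() and generate_image() are hypothetical stand-ins
# for the installation's real speech-to-text and image-generation code.

def transcribe_speech(player: int) -> str:
    # Stand-in: the installation records the player's voice here.
    return f"player {player}'s spoken description"

def generate_image(description: str) -> str:
    # Stand-in: the installation calls an image-generation API here.
    return f"image generated from: {description}"

def run_game(original_image: str, rounds: int = 4) -> list[str]:
    """Run the describe-and-regenerate loop, returning all images shown."""
    images = [original_image]
    for round_number in range(rounds):
        player = 1 if round_number % 2 == 0 else 2  # players alternate turns
        description = transcribe_speech(player)
        images.append(generate_image(description))
    return images  # the original image plus four generated ones

images = run_game("original photo")
print(len(images))  # the original plus four generated images
```

Each generated image becomes the only thing the next player sees, so small misreadings compound round after round.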




Here’s an example:



Round 1

Original image: Afghan families waiting by the luggage conveyor belt to leave Kabul Airport, taken by Wakil Kohsar for Getty Images.

Player 1’s description:
“There is a hospital and there is a baby that is sleeping on the floor on a pink fabric. There seems to be a metal bed behind, and there are two parents on each side of the baby in the front and they are looking at the camera. There is no proper mattress.”


Round 2

Player 2’s description:
“There are four individuals lying on a pink mattress on what looks like a hospital bed and the two women are laying... one is kind of below the other, and same with the men, they all have brown hair and they all look quite similar in facial structures. There are two white pillows”


Round 3

Player 1’s description:
“There are three individuals on a pink mattress. All of them have brown hair. One of the male is lying on the pillow and they are all wearing white shirts. The male is wearing blue shorts and females wearing blue long pants.”


Round 4

Player 2’s description:
“There are three white men with all of brown and kind of wavy hair, and they’re all lying on a pink mattress with pink pillows. The floor is blue. There are wooden tables next to them. The two of them are wearing blue shorts. One of them is wearing white shorts. They’re all wearing white shirts.”








3. Prototyping



First prototype

Originally, we wanted the two users to sit on opposite sides of the monitor, and for the monitor to rotate 180 degrees, back and forth, between the two users, as such:





Second prototype

We decided to simplify the entire set-up. We took away the rotating monitor, and integrated the buttons into a microphone to make the user’s experience resemble that of a news reporter.


Creating wooden encasing

Testing thermal printer

Creating the “microphone”






4. Code




Excerpt of code showing the use of speech recognition to transform speech to text:




Excerpt of code showing the use of StabilityAI’s API to generate images:
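A minimal sketch of how an image-generation request to Stability AI's REST API might look using `requests` (an illustrative reconstruction, not our exact code; the endpoint and parameters follow the public v2beta "stable-image" API, and `STABILITY_API_KEY` is assumed to be set in the environment):

```python
# Hypothetical sketch of calling Stability AI's REST API with `requests`.
# The endpoint and parameters follow the public v2beta stable-image API;
# the installation's actual code may have used a different API version.
import os
import requests

STABILITY_URL = "https://api.stability.ai/v2beta/stable-image/generate/core"

def generate_image(prompt: str, out_path: str = "generated.png") -> str:
    """Send the player's transcribed description and save the generated image."""
    response = requests.post(
        STABILITY_URL,
        headers={
            # The API key is assumed to be set in the environment.
            "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
            "Accept": "image/*",  # ask for raw image bytes in the response
        },
        files={"none": ""},  # forces multipart/form-data, as the API expects
        data={"prompt": prompt, "output_format": "png"},
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # write the returned PNG to disk
    return out_path

# Usage: generate_image("three individuals on a pink mattress")
```

The saved image is then displayed on the monitor for the next player to describe.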







5. Final Product













Credits



- Nathan Williams for the fantastic collaboration

- RIXC Art and Science Festival for giving us the opportunity to exhibit our work

- Jesse Harding and Lai Yi Ohlsen for their help and guidance on this project