Nouf Aljowaysir: PG Studios: Online Exhibition

15 May - 9 July 2025
  • Nouf Aljowaysir

    Nouf Aljowaysir is a self-taught new media artist who explores our ever-changing relationship with artificial intelligence systems and algorithms. She challenges AI tools with intimate questions about inclusivity and biases: whom do they benefit, and whom do they ignore? 

     

    Nouf's Ancestral Seeds series continues her exploration of identity and genealogy through the lens of AI and colonial archives. The project is grounded in the Salaf dataset: a photographic archive of 6,000 colonial-era images from the 1800s to the early 1900s, altered using AI to erase the original subjects. These photographs, filtered through a Western colonial gaze, often depict staged and exoticized scenes that perpetuate reductive stereotypes. By processing these images through computer vision models, Nouf exposes systemic biases embedded in AI, such as the misidentification of veiled women and the mislabeling of Bedouin figures with militaristic terms. These failures point to broader issues of prejudiced AI models, data collection, representation, and the selective preservation or exclusion of histories.

     

    The works that form Ancestral Seeds use StyleGAN3 to generate new images trained on this biased dataset. The resulting figures are intentionally empty - silhouettes that evoke Middle Eastern subjects yet lack stereotypical features. This absence invites viewers to reflect on what has been lost, omitted, or distorted, transforming erasure into a form of resistance. Rather than using AI to imagine the future, Nouf turns it toward the past, challenging the ways in which technology reinforces orientalist narratives and offering a space for critical reinterpretation.

  • Artist Interview

    PG: When did you first come across the Salaf dataset? How does one find these photographic archives? Did something draw you to this dataset over others that were similar?


    NA: I chose this dataset because it matched the genealogical path I was tracing from my mother's stories. At the time, I was searching online for images that could capture the regions and time periods my ancestors migrated through, but the further back I went, the harder it became to find anything. Since oral storytelling was the dominant form of documentation during that era, finding visual representations created from native perspectives was nearly impossible. One of the few image collections I found from the 1800s and early 1900s in the Middle East was the "Ken and Jenny Jacobson Orientalist Photography Collection" from the Getty Museum. The archive consists of works by various European photographers who often staged their subjects in exaggerated, exoticized ways, a remnant of how the British Empire and Europe framed the Middle East as an "Oriental" destination. Using this problematic dataset was a deliberate choice to confront the inherent biases in data collection. It raises a critical question about which histories get recorded and from whose perspective. When datasets like these are used to train AI, they perpetuate and amplify historical distortions, further erasing diverse cultural narratives from the digital vernacular.


    PG: Your work highlights AI’s inability to accurately identify certain elements of Middle Eastern culture due to the Western-centric nature of its training. Does this mean the AI you use has now been trained by your process? Or are these prejudices ongoing? Do you know to what extent generative AI companies are addressing these kinds of issues?


    NA: When I trained a generative model (StyleGAN3) to create Ancestral Seeds, my goal wasn’t to fix issues but to reveal and confront the invisible layers of erasure embedded in limited data collection and prejudiced AI tools. While some of the biases I identified back in 2020 have since been “corrected,” the problem remains ongoing. Often, when companies address these issues – like when Google adjusted its search results after Safiya Noble highlighted the pornographic depictions of Black women – it feels more like a quick, reactive fix than a genuine, proactive effort to address deeper biases. The core issue isn’t just how the model functions as a machine, but the lack of comprehensive, diverse data that forms its foundation. Instead of addressing this fundamental problem, companies tend to focus on making superficial adjustments to the model's behavior.


    PG: What was your process in training StyleGAN3 on the dataset for Ancestral Seeds? Were there any aesthetic or technical decisions that changed your direction as you progressed?


    NA: I trained mainly on portraits from the Salaf dataset because I was curious about how the models would replicate patterns from Orientalist photography – like women dancing with instruments, or men wearing hats and smoking shisha. It’s been interesting to see how the models picked up on those patterns and generated similar aesthetics.


    PG: How do you see the relationship between AI’s "gaze" and the colonial photographic gaze? Are they mirror images, descendants, or something entirely different?


    NA: I firmly believe that AI empires and colonial empires are rooted in the same fundamental principle: extraction. Just as colonial empires exploited land, labor, and cultural assets, AI empires today extract vast amounts of data from users, often without their full awareness or consent, and repurpose it for profit. In both cases, the systems are designed to benefit those in power while perpetuating existing inequalities. In Ancestral Seeds, I draw connections between how the Middle East was portrayed in the past and how similar narratives continue to shape perceptions today.


    PG: Would you be interested in building your own AI model or dataset from scratch to avoid the biases embedded in existing systems? Do you believe such a system (one devoid of any kind of cultural/aesthetic bias) could exist?


    NA: I get asked that question a lot, actually. While I do think one solution could be to develop localized tools that better respond to specific communities and environments, rather than relying on supposedly “global” and “connected” systems, I also believe that applying AI to culture is inherently complex. The way AI is designed to generalize and simplify often fails to capture the nuanced, poetic aspects of what it means to be human.


    PG: What's next in your exploration of identity, heritage, and machine vision?


    NA: I’m exploring how language is evolving with the constant presence of AI “assistance.” I’m particularly interested in how the self changes when we accept AI-generated help or suggestions – when we follow without thinking, what do we lose in the process? Algorithms often promote conformity, even influencing the repetitive use of certain words. This makes me think about the words and expressions rooted in ancestry and heritage that are entirely absent from the digital space. I’m examining this merging of the self and machine, as our online experiences increasingly blur the boundaries between what is real, generated, and “human.”

  • Artworks

  • About the Artist

    Nouf Aljowaysir was born in Riyadh, Saudi Arabia, in 1993.

    After moving to the United States at the age of 13, Nouf went on to complete a bachelor’s degree in Architecture and Human-Computer Interaction at Carnegie Mellon University in 2016, followed by a master’s degree from the Interactive Telecommunications Program at New York University in 2018. She began to use technology to create artworks that engage with her heritage, showcasing work around the world. Nouf’s work has been exhibited internationally at galleries and festivals including the Centre Pompidou, Museo Tamayo, M+ Museum, CPH:DOX, and the Tribeca Film Festival. In 2022 she took part in the PATH-AI residency at Somerset House in London, and in 2023 her film Ana Min Wein? (Where Am I From?) won the Lumen Prize for Moving Image; it was released by The New York Times Op-Docs series in June 2024. In 2024 Nouf exhibited in Palmer Gallery’s group exhibition Post Photography: The Uncanny Valley. Her latest projects, Salaf (Ancestors) and Ancestral Seeds, are currently featured in the group exhibition The World through AI at Jeu de Paume.

  • About PG Studios

    PG Studios is Palmer Gallery's online exhibitions programme, principally aimed at supporting self-taught artists who did not attend traditional art school.

     

    In today’s art ecosystem, great importance is placed on formal training and art education, often to the detriment of talented creatives who choose not to study in a formal context or who cannot take on the financial risk of a fine art degree. Studios are arguably the great leveller across all artists and an omnipresent feature of the creative process: from artists with huge warehouses subsidised by commercial galleries to those who make work in their bedrooms, nearly every artist has a private space in which they can create.

     

    PG Studios will highlight the work of different types of artists, alternating between those who are self-taught and those with more formal training, and focusing on the artist studio as a sacred space of creation that runs like a golden thread through the experience of all artists.