Julie Patarin-Jossec (Adjunct Faculty, College of Liberal Arts and Social Sciences, and College of Science and Health) has spent the past year experimenting with generative AI as both a critical lens and a creative tool for teaching gender and queer theory.
In “Towards a Queer AI-based Pedagogy,” Patarin-Jossec outlines how AI image- and text-generation platforms can both reproduce damaging gender stereotypes and open unexpected pathways to more inclusive representation.
Confronting AI’s Queerphobic Defaults
Patarin-Jossec begins by inviting students to try a simple prompt in any image generator (Midjourney, DALL·E, Stable Diffusion, etc.): typing “transgender” or “nonbinary” and observing the results. As Wired recently documented, generative AI all too often resorts to a narrow visual shorthand for queerness (purple hair, tattoos, piercings, exaggerated musculature), regardless of the rich diversity of transgender and nonbinary embodiment. In Patarin-Jossec’s own experiments, AI outputs frequently omit clothing details or default to cis-presenting bodies, underscoring the system’s reliance on the most visible (and sensational) imagery it has been trained on.
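For instructors who want to run this probe programmatically rather than through a web interface, the experiment reduces to sending a one-word identity prompt to an image API and collecting the results. Below is a minimal sketch, assuming the OpenAI Python SDK and DALL·E access; the model name and output handling are illustrative, not part of Patarin-Jossec’s assignment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two one-word prompts from the classroom exercise.
for prompt in ["transgender", "nonbinary"]:
    result = client.images.generate(
        model="dall-e-3",  # assumed model; any image generator would serve
        prompt=prompt,
        n=1,               # DALL·E 3 returns one image per request
        size="1024x1024",
    )
    # Print the hosted image URL so each output can be pulled into class discussion.
    print(f"{prompt}: {result.data[0].url}")
```

Running the same loop across several platforms makes the comparison of their visual defaults straightforward.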
A parallel Instagram search for “#transgender” reveals millions of selfies posted by transgender people themselves. Over the past few years, this proliferation of personal images has begun to feed into AI training sets, and yet, as Patarin-Jossec points out, data diversification moves slowly enough that many generators still lean heavily on stereotypes.
Two Root Causes of Bias
Building from these observations, Patarin-Jossec identifies two fundamental drivers of queer erasure and misrepresentation in AI:
- Data Availability: AI learns from what’s online. Until recently, relatively few transgender and nonbinary self-portraits existed in the dominant image repositories, so generators defaulted to the flashiest, most visible tropes.
- Lack of Designer Diversity: Most AI systems are built by engineers trained in STEM fields that remain disproportionately male, cisgender, and often unexposed to queer experiences. Embedded cultural assumptions—about what “counts” as a gendered body—inevitably shape both the data they choose and the labels they apply.
Addressing these biases, Patarin-Jossec argues, requires both expanding the online archive of queer self-expression and diversifying the very teams that build AI models.
Queer AI Pedagogy in Practice
In their Gender and Society seminar at DePaul, Patarin-Jossec uses two hands-on, speculative-writing exercises to surface and challenge AI’s gender biases—and to help students imagine more inclusive futures.
1. Personal Narrative vs. AI Response
Students begin by writing brief reflections on gender dynamics in their own lives: a time they felt unsafe, an instance of gender violence they witnessed, and their own definitions of those concepts. Once they’ve recorded their authentic experiences, they feed the same prompts into ChatGPT (or another text generator) and compare the AI’s responses to their own.
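The comparison step can also be scripted so that every student receives the model’s answer to exactly the wording they responded to. A minimal sketch, assuming the OpenAI Python SDK; the model name and the phrasing of the reflection prompts are illustrative stand-ins for whatever the class actually uses:

```python
from openai import OpenAI

client = OpenAI()

# Stand-in phrasings for the three reflection prompts students answer first.
reflection_prompts = [
    "Describe a time you felt unsafe because of gender dynamics.",
    "Describe an instance of gender violence you witnessed.",
    "Define gender violence in your own words.",
]

for prompt in reflection_prompts:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    # Pair each prompt with the model's answer for side-by-side comparison.
    print(f"PROMPT: {prompt}\nAI: {reply.choices[0].message.content}\n")
```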
Almost invariably, the AI’s answers lean on stereotypes and caricatures—minimizing nuance, exaggerating tropes, or framing queer experiences through the lens of trauma alone. The exercise sparks a rich classroom conversation about why these distortions emerge, and how they reflect both the limits of the training data and the assumptions baked into the model. Finally, students use a blend of their own writing and the AI’s output as raw material for a collaboratively composed speculative-fiction vignette, in which they re-envision more expansive, affirming narratives.
2. Rewriting the Queer Apocalypse
In a second assignment, Patarin-Jossec challenges students to craft a short post-apocalyptic story via AI prompts: a fearless hero with a special power, a vulnerable secondary character, a queer protagonist, an episode of gender violence, and—importantly—a happy ending. They then request physical descriptions of each character.
Time and again, the AI marks its queer character for sacrifice: purple hair, visible scars of abuse, and an almost ritualized death that “saves” the cisgender characters. Students then iterate: they ask the AI to revise the tale so that the queer character survives and triumphs, and to strip away the hyper-visible markers of queerness. Comparing the two drafts reveals how entrenched biases from the genre conventions the model has ingested (films, TV shows, and novels) surface in its storytelling, and how intentional prompting can begin to redirect them.
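The iteration itself is just a two-turn conversation: generate the story, keep it in the message history, then send the corrective instruction. A sketch under the same assumptions as above (OpenAI Python SDK, illustrative model name and prompt wording):

```python
from openai import OpenAI

client = OpenAI()

# First turn: generate the story with the elements the assignment specifies.
messages = [
    {"role": "user", "content": (
        "Write a short post-apocalyptic story with a fearless hero who has a "
        "special power, a vulnerable secondary character, a queer protagonist, "
        "an episode of gender violence, and a happy ending. "
        "Then give a physical description of each character."
    )}
]
first_draft = client.chat.completions.create(model="gpt-4o", messages=messages)
story = first_draft.choices[0].message.content

# Second turn: keep the draft in context and send the corrective instruction,
# as the students do when they revise the tale.
messages += [
    {"role": "assistant", "content": story},
    {"role": "user", "content": (
        "Revise the story so the queer character survives and triumphs, and "
        "remove stereotyped visual markers such as purple hair."
    )},
]
revision = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revision.choices[0].message.content)
```

Keeping the first draft in the message history is what lets students compare the two versions line by line rather than receiving an unrelated new story.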
Learning Objectives and Broader Impacts
Through these exercises, Patarin-Jossec aims to help students:
- Identify and critique the ways AI reproduces gender stereotypes
- Trace the social and historical factors—data gaps, design homogeneity—that underlie those biases
- Experiment with corrective prompts and collaborative storytelling to push AI toward more inclusive representations
- Reflect on their own experiences of gender dynamics in a facilitated, creative environment that uses AI as a dialogue partner
Moreover, by rotating through different AI platforms—ChatGPT, Google’s Gemini, Midjourney, DeepAI—students gain a sense of how varying training regimes and design philosophies shape an AI’s output.
Patarin-Jossec’s approach models a queer pedagogy that is inherently speculative and collective, inviting learners to envision alternative futures and to see themselves as both consumers and co-creators of AI culture. As AI becomes ever more woven into our social fabric, these critical, creative interventions are essential for ensuring that emerging technologies reflect the full spectrum of human gender and identity.
For those interested in exploring this work further, Patarin-Jossec recommends reading Ace Learner’s “Proliferating Identity: Trans Selfies as Contemporary Art” (2021), and keeping an eye on ongoing developments in AI ethics and inclusive design.