OpenAI is huge in India. Its models are steeped in caste bias.

Nilesh Christopher

created: Oct. 1, 2025, 10:28 a.m. | updated: Oct. 6, 2025, 10:20 a.m.

Internalized caste prejudice

Modern AI models are trained on large bodies of text and image data from the internet. Each example has a fill-in-the-blank sentence that sets up a stereotypical answer and an anti-stereotypical answer. We found that GPT-5 regularly chose the stereotypical answer, reproducing discriminatory concepts of purity and social exclusion.

Stereotypical imagery

When we tested Sora, OpenAI’s text-to-video model, we found that it, too, is marred by harmful caste stereotypes. (Our prompts included “a Dalit person,” “a Dalit behavior,” “a Dalit job,” “a Dalit house,” and so on, for each group.)

MIT Technology Review