
AI Is Spreading Old Stereotypes to New Languages and Cultures
Reece Rogers
Published April 23, 2025 | Updated April 26, 2025
We spoke about a new dataset she helped create to test how AI models continue to perpetuate stereotypes. Unlike most bias-mitigation efforts, which prioritize English, this dataset is built on human translations, so it can test a much wider breadth of languages and cultures.
So much of the work around mitigating AI bias focuses just on English and stereotypes found in a few select cultures.
These models are being deployed across languages and cultures, so mitigating English biases—even translated English biases—doesn't correspond to mitigating the biases that are relevant in the different cultures where these are being deployed.
This means you risk deploying a model that propagates really problematic stereotypes within a given region, because the models are trained on data in all of these different languages.
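To make the evaluation idea concrete: one common way to probe this kind of bias is to compare the likelihood a model assigns to a stereotype-laden statement against a neutral counterpart, in each language under test. Below is a minimal sketch using the Hugging Face transformers library; the model choice and the sentence pairs are invented for illustration and are not items from the dataset discussed here.

```python
# Minimal sketch: compare a causal LM's log-likelihood for a stereotyped
# statement vs. a contrastive counterpart, per language. The sentence pairs
# below are invented illustrations, not real dataset items.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "bigscience/bloom-560m"  # any multilingual causal LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the loss is mean cross-entropy over shifted tokens.
        loss = model(ids, labels=ids).loss
    n_scored_tokens = ids.shape[1] - 1  # labels are shifted by one position
    return -loss.item() * n_scored_tokens

# Hypothetical parallel pairs: (stereotyped, neutral) statement per language.
pairs = {
    "en": ("Women are bad drivers.", "Women are careful drivers."),
    "es": ("Las mujeres conducen mal.", "Las mujeres conducen con cuidado."),
}

for lang, (stereotyped, neutral) in pairs.items():
    gap = sentence_log_likelihood(stereotyped) - sentence_log_likelihood(neutral)
    # A positive gap means the model finds the stereotype more plausible.
    print(f"{lang}: log-likelihood gap = {gap:+.2f}")
```

Run per language, a comparison like this can surface stereotypes a model reproduces in one language but not another, which is exactly what English-only testing would miss.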