
A new study just upended AI safety
Hayden Field
created: July 23, 2025, 2:27 p.m. | updated: July 23, 2025, 4:52 p.m.
As new AI models are increasingly trained on artificially generated data, that risk becomes a huge danger.
The new pre-print research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in Berkeley, California, and the Anthropic Fellows program, a six-month pilot program funding AI safety research.
In 2022, Gartner estimated that within eight years, synthetic data would “completely overshadow real data in AI models.” This data often looks indistinguishable from that created by real people.
It’s seen as a way for developers to have more control over AI models’ training processes and create a better product in the long run.
But if this paper’s conclusions are accurate, subliminal learning could transmit all kinds of biases, including ones a model has never expressed to AI researchers or end users.
The Verge