🔑 Key Findings:
- 39% of participants couldn’t tell the difference between real human faces and AI-generated images.
- Researchers expected an accuracy rate of roughly 85%; participants achieved only 61%.
- Researchers say this raises concerns over the “tools of disinformation.”
WATERLOO, Ontario — Do you think you could tell the difference between a human face and one generated by artificial intelligence? It might be much harder than you think. Researchers from the University of Waterloo in Canada have shown just how difficult it is for people to distinguish real photographs of people from AI-generated images. Overall, nearly 40 percent of participants could not tell which faces were fake.
This revelation comes at a time when AI-generated imagery is becoming more sophisticated, raising concerns over the potential for misuse in disinformation campaigns.
The research involved 260 participants, who were presented with 20 images devoid of any labels to indicate their origin. Among these, half were photographs of real people obtained via Google searches, while the other half were crafted by Stable Diffusion and DALL-E — two of the most advanced AI image generation programs available today. The task was simple: identify which images were real and which were products of AI.
Surprisingly, only 61 percent of participants could accurately distinguish between the two, a figure significantly lower than the researchers’ anticipated accuracy rate of 85 percent.
“People are not as adept at making the distinction as they think they are,” says study lead author Andreea Pocol, a PhD candidate in computer science at the University of Waterloo, in a university release.
This finding underscores a growing concern over our collective ability to discern truth in the digital realm.
Participants based their judgments on specific details such as the appearance of fingers, teeth, and eyes — features they believed would betray the artificial nature of the images. However, these indicators were not as reliable as hoped. The study’s design also allowed participants to examine each photo at length, a luxury not afforded to internet users who scroll quickly past content, a practice colloquially known as “doomscrolling.”
“The extremely rapid rate at which AI technology is developing makes it particularly difficult to understand the potential for malicious or nefarious action posed by AI-generated images,” Pocol adds.
This swift advancement outpaces both academic research and legislative efforts to mitigate the risks, with AI-generated images becoming even more lifelike since the study’s inception in late 2022. The potential misuse of AI to fabricate convincing images of public figures in compromising situations is a particularly alarming aspect of this technology. It represents a powerful tool for political and cultural manipulation, capable of generating disinformation with unprecedented ease and sophistication.
“Disinformation isn’t new, but the tools of disinformation have been constantly shifting and evolving,” explains Pocol. “It may get to a point where people, no matter how trained they will be, will still struggle to differentiate real images from fakes. That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”
Researchers say the study highlights a critical challenge at the intersection of technology, ethics, and society, prompting a reevaluation of our reliance on visual media as a source of truth in the digital age.
The study is published in Advances in Computer Graphics.