A new study has found that most people can no longer distinguish between human voices and their artificial intelligence (AI)-generated counterparts, heightening concerns about misinformation, fraud, and the ethical use of voice-cloning technology.
The research, published in the journal PLOS One by scientists from Queen Mary University of London, revealed that participants identified genuine human voices only slightly more often than they spotted cloned AI voices. Out of 80 voice samples, half human and half AI-generated, participants mistook 58 percent of cloned voices for real, while correctly identifying 62 percent of the genuine human voices.
“The most important aspect of the research is that AI-generated voices, specifically voice clones, sound as human as recordings of real human voices,” said Dr. Nadine Lavan, lead author of the study and senior lecturer in psychology at Queen Mary University. She added that these realistic voices were created using commercially available tools, meaning anyone can produce convincing replicas without advanced technical skills or large budgets.
AI voice cloning works by analyzing vocal data to capture and reproduce unique characteristics such as tone, pitch, and rhythm. This precise imitation has made the technology increasingly popular among scammers, who use cloned voices to impersonate loved ones or public figures. According to research by the University of Portsmouth, nearly two-thirds of people over 75 have been targeted by attempted phone scams, with about 60 percent of those attempts arriving as voice calls.
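To give a flavor of what "analyzing vocal data" means in practice, the toy sketch below estimates one of the characteristics mentioned above, pitch, from a synthetic vowel-like signal using simple autocorrelation. It is an illustrative example only; real cloning systems are far more sophisticated, and none of the names or parameters here come from the study or from any commercial tool.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of a voiced signal
    by finding the strongest peak in its autocorrelation."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    lag_min = int(sample_rate / fmax)     # shortest plausible pitch period
    lag_max = int(sample_rate / fmin)     # longest plausible pitch period
    peak = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / peak

# A synthetic "voice": a 220 Hz fundamental plus one harmonic,
# roughly imitating the periodic structure of a sung vowel.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(estimate_pitch(tone, sr))  # close to 220 Hz
```

A cloning system would extract many such features, pitch contours, timbre, speaking rhythm, across hours or minutes of audio, then train a model to regenerate them in new sentences.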
The spread of AI-generated “deepfake” audio has also been used to mimic politicians, journalists, and celebrities, raising fears about its potential to manipulate public opinion and spread false information.
Dr. Lavan urged developers to adopt stronger ethical safeguards and work closely with policymakers. “Companies creating the technology should consult ethicists and lawmakers to address issues around voice ownership, consent, and the legal implications of cloning,” she said.
Despite its risks, researchers say the technology also has significant potential for positive impact. AI-generated voices can help restore speech to people who have lost their ability to speak or allow users to design custom voices that reflect their identity.
“This technology could transform accessibility in education, media, and communication,” Lavan noted. She highlighted examples such as AI-assisted audio learning, which has been shown to improve reading engagement among students with neurodiverse conditions like ADHD.
Lavan and her team plan to continue studying how people interact with AI-generated voices, exploring whether knowing a voice is artificial affects trust, engagement, or emotional response.
“As AI voices become part of our daily lives, understanding how we relate to them will be crucial,” she said.
