What GPT-4.5's Turing Test Triumph Means for the Future of AI


A new study has found that OpenAI’s latest large language model not only passed the Turing test but was perceived as human more often than actual humans.

The results, published in a preprint paper and awaiting peer review, indicate that GPT-4.5 could convince participants of its humanity more often than real people when assigned a specific persona.

Why It Matters

The Turing test, proposed by British mathematician Alan Turing in 1950, assesses whether a machine can pass as human: a human judge holds a conversation and must tell whether the other party is a person or an AI. That AIs now consistently pass the test shows how far the technology has developed.

What To Know

Researchers from the University of California San Diego ran a variation of the classic Turing test. Participants conversed with a human and an AI and then attempted to distinguish which was which.

When given persona-based instructions, GPT-4.5 was misidentified as human 73 percent of the time, which is well above the 50 percent success rate expected by chance.

The study compared several models, including Meta’s Llama 3.1-405B, OpenAI’s GPT-4o, and the historic chatbot ELIZA. Participants were prompted with either a “persona” or “no-persona” scenario.

The AI fared significantly better when given a persona to adopt—for instance, a culturally aware young internet user.


Without this persona layer, GPT-4.5’s effectiveness dropped to 36 percent. GPT-4o scored just 21 percent, and even ELIZA, developed in the 1960s, outperformed it slightly at 23 percent.

Carsten Jung, head of Macroeconomics and AI at the Institute for Public Policy Research, told Newsweek that the breakthrough was significant and that government policy needed to better adapt to AI’s rapidly expanding capabilities.

“AI has advanced so much that people are no longer able to distinguish it from humans in chat conversations. In fact, this new study shows that AI appears even more human to people than actual humans,” Jung said.

“We have passed the ‘uncanny valley,’ where machines are eerily similar to humans and have entered truly uncharted territory. An increasing number of people are already using AI for companionship and therapy. Advanced AI bots on social media could forever transform online conversations,” he said.

“We will need to decide what role we want this new type of intelligence to play in society, and put in place policies to deliver this. At the moment, policy is not keeping pace.”

What People Are Saying

Researcher Cameron Jones, writing Tuesday on X (formerly Twitter): “The results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell.

“This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.”

What Happens Next

Artificial intelligence continues to develop at a rapid pace, both in the United States, which leads the industry, and in China, which made huge strides in 2025 with the launch of DeepSeek’s R1 model.
