AI chatbots show signs of cognitive decline in dementia test

We’ve certainly seen AI models exhibit all kinds of erratic behavior, but dementia? That’s something new.

As detailed in a new study published in the journal The BMJ, some of the leading chatbots in the tech industry are showing clear signs of mild cognitive impairment. And, as with humans, the effects become more pronounced with age, with older large language models performing the worst.

The goal of the research is not to medically diagnose these AI systems, but to push back against the wave of studies suggesting the technology is already competent enough to be used in medicine, especially as a diagnostic tool.

“These findings call into question the assumption that artificial intelligence will soon replace human doctors, as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnosis and undermine patient trust,” the researchers wrote.

Generative geriatrics

The geniuses under scrutiny here are OpenAI’s GPT-4 and GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.0 and 1.5.

When subjected to the Montreal Cognitive Assessment (MoCA), a test designed to detect early signs of dementia in which a higher score indicates better cognitive ability, GPT-4o scored the highest (26 out of 30, barely meeting the threshold for normal cognition), while the Gemini family scored the lowest (a dismal 16 out of 30).
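
For context, MoCA is scored out of 30, and a score of 26 or above is conventionally treated as normal cognition. A purely illustrative Python sketch of how the scores quoted above sit against that cutoff (nothing here is taken from the study beyond those two numbers):

```python
# Purely illustrative: how the article's reported MoCA scores compare
# with the conventional 26/30 cutoff for normal cognition.
MOCA_MAX = 30
NORMAL_CUTOFF = 26  # 26 or above is conventionally considered normal

# Scores as quoted in the article (highest and lowest performers).
scores = {"GPT-4o": 26, "Gemini": 16}

for model, score in scores.items():
    verdict = "normal range" if score >= NORMAL_CUTOFF else "below normal range"
    print(f"{model}: {score}/{MOCA_MAX} ({verdict})")
```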

All chatbots excelled in most types of tasks, such as naming, attention, language and abstraction, the researchers found.

However, this was overshadowed by the areas where the AI systems struggled. They all performed poorly on visuospatial and executive tasks, such as the trail-making exercise of connecting circled numbers in ascending order. Drawing a clock showing a specific time also proved too demanding for the AI.

Both versions of Gemini, embarrassingly, failed miserably at a relatively simple delayed recall task: memorizing a sequence of five words. That hardly speaks to exceptional cognitive ability in general, and it’s easy to see why it would be particularly problematic for a doctor, who has to process the new information patients tell them rather than just working from what’s written in their medical charts.
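
To give a sense of what a delayed recall task looks like when posed to a chatbot, here is a minimal sketch assuming the OpenAI Python SDK, with an illustrative MoCA-style word list and filler question; it is not the study’s actual protocol or materials:

```python
# A minimal sketch of a MoCA-style delayed recall probe against a chat
# model, assuming the OpenAI Python SDK (v1). The word list and filler
# task are illustrative stand-ins, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user",
            "content": "Remember these five words: face, velvet, church, daisy, red."}]

# One intervening filler turn, then the recall question, as in MoCA.
for prompt in ("Now count backward from 100 by sevens, five steps.",
               "What were the five words I asked you to remember?"):
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```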

You’d probably also prefer that your doctor not be a psychopath. Based on the tests, however, the researchers found that all of the chatbots showed a worrying lack of empathy, which they note is a hallmark symptom of frontotemporal dementia.

A lesson to remember

Anthropomorphizing AI models and talking about them as if they were practically human can be a bad habit; then again, that’s essentially what the AI industry wants you to do. The researchers say they are aware of this risk, and acknowledge the fundamental differences between the human brain and large language models.

But if tech companies are talking about these AI models as if they were already conscious beings, why shouldn’t we hold them to the same standard as humans?

And it is on exactly those terms, the ones the AI industry itself has set, that these chatbots come up short.

“Not only are neurologists unlikely to be replaced by large language models any time soon, but our findings suggest that they may soon find themselves treating new, virtual patients—artificial intelligence models that exhibit cognitive impairment,” the researchers wrote.

Source: www.itnetwork.rs