Our minds may process language much like Artificial Intelligence

A recent study has found fascinating similarities between how Artificial Intelligence (AI) models and the human brain process language.

Published in Nature Communications, the research suggests that the brain, like AI systems such as GPT-2, may use a continuous, context-sensitive space to derive the meaning of language.

This discovery could fundamentally change our understanding of how the mind processes language.

The study was led by Dr. Ariel Goldstein of the Department of Cognitive and Brain Sciences and the Business School at the Hebrew University of Jerusalem, in collaboration with Google Research in Israel and the New York University School of Medicine (USA).

Unlike traditional language models built on fixed rules, advanced models such as GPT-2 use neural networks to create “embedding spaces”: high-dimensional vector representations that capture the relationships between words across different contexts. This mechanism lets a model interpret the same word differently depending on the surrounding text, yielding a more nuanced understanding. Goldstein’s team investigated whether the human brain uses similar methods to process language, Medical Xpress reports.
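
To make “contextual embedding” concrete, here is a minimal sketch using GPT-2 via the Hugging Face transformers library (an illustrative tool choice, not the study’s pipeline; the word_vector helper is hypothetical). It shows the same word receiving a different vector in each sentence:

```python
# Minimal sketch: contextual embeddings from GPT-2 (Hugging Face transformers).
# Illustrative only -- not the study's actual analysis pipeline.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the hidden state of `word`'s first sub-token in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # shape: (seq_len, 768)
    # GPT-2's tokenizer marks word-initial pieces with a leading space.
    first_piece = tokenizer.tokenize(" " + word)[0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return hidden[tokens.index(first_piece)]

# The same word gets a different vector depending on its context.
v_river = word_vector("They sat on the bank of the river.", "bank")
v_money = word_vector("She deposited the cash at the bank.", "bank")
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```

A static, rule-based lexicon would give “bank” a single entry; the embedding space instead places each occurrence at a context-dependent position.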

The researchers recorded neural activity in the inferior frontal gyrus, a region known for processing language, in participants who listened to a 30-minute podcast. By mapping each word onto a “brain embedding” specific to this region, the scientists observed that these neural representations displayed geometric patterns similar to the contextual spaces of advanced language models.
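
One standard way to compare two such geometries, shown here as a hedged sketch with random placeholder arrays rather than the study’s recordings, is to check whether pairwise word distances in one space correlate with those in the other:

```python
# Hedged sketch: comparing the geometry of a model embedding space and a
# "brain embedding" space. Arrays are random stand-ins for real data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 100
model_emb = rng.normal(size=(n_words, 768))  # GPT-2-style word vectors
brain_emb = rng.normal(size=(n_words, 50))   # per-word neural patterns

# Each space's geometry, summarized as pairwise cosine distances.
model_geometry = pdist(model_emb, metric="cosine")
brain_geometry = pdist(brain_emb, metric="cosine")

# Shared geometry would show up as correlated distance structures.
rho, p = spearmanr(model_geometry, brain_geometry)
print(f"geometry correlation: rho={rho:.2f} (p={p:.3f})")
```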

The study could influence neuroscience

Remarkably, this shared geometry allowed the team to predict the brain’s responses to words that had been entirely held out of the analysis, using a method called zero-shot inference. This suggests that the brain may rely on contextual relationships between words rather than fixed meanings, mirroring the adaptability of deep learning models.
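
In encoding-model terms, zero-shot inference can be pictured as follows; this is a hedged sketch with synthetic data and an assumed ridge-regression mapping, not the paper’s exact procedure. A linear map is fitted from embeddings to neural responses on one set of words, then evaluated on words the map never saw:

```python
# Hedged sketch: zero-shot prediction of neural responses from embeddings.
# Synthetic stand-in data; the study used intracranial recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 200, 768, 50

embeddings = rng.normal(size=(n_words, emb_dim))  # contextual embeddings
true_map = rng.normal(size=(emb_dim, n_electrodes))
neural = embeddings @ true_map + rng.normal(scale=5.0,
                                            size=(n_words, n_electrodes))

# Hold out entire words, not just occurrences: that is the "zero-shot" part.
train, test = np.arange(150), np.arange(150, 200)
encoder = Ridge(alpha=1.0).fit(embeddings[train], neural[train])
pred = encoder.predict(embeddings[test])

# Score: correlation between predicted and observed activity per electrode.
held_out = neural[test]
r = [np.corrcoef(pred[:, e], held_out[:, e])[0, 1]
     for e in range(n_electrodes)]
print(f"mean zero-shot correlation: {np.mean(r):.2f}")
```

If the correlation holds for held-out words, the mapping generalizes through the geometry of the embedding space rather than through memorized word-response pairs.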

“Our findings suggest a transition from symbolic and rule-based representations in the brain to a continuous, context-driven system. We observed that contextual embeddings, similar to those in advanced language models, align better with neural activity than static representations, advancing our understanding of language processing in the brain,” explains Dr. Goldstein.

The study indicates that the brain dynamically updates its language representations depending on context, challenging traditional psycholinguistic theories that emphasize rule-based processing. This highlights the potential of AI-inspired models to deepen our understanding of the neural basis of language comprehension.

The team plans to expand the research by including a larger sample and more detailed neural recordings to validate and extend these conclusions. By connecting Artificial Intelligence to brain function, this study may influence the future of both neuroscience and language processing technologies, paving the way for innovations in AI that better reflect human cognition.

Source: www.descopera.ro