
Neuroscience Team Unveils Model for AI Interpreting Language-Driven Instructions in Unseen Tasks

by Pieter Werner

Researchers at the University of Geneva’s Department of Basic Neuroscience, Reidar Riveland and Alexandre Pouget, have unveiled a neural model that can interpret linguistic instructions to perform novel tasks, mirroring a distinctly human cognitive ability.

The study, titled “Natural language instructions induce compositional generalization in networks of neurons,” investigates the neural computations that enable this process. Drawing on recent advances in natural language processing, the researchers trained models on a set of common psychophysical tasks, pairing each task with written instructions embedded by a pre-trained language model.
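The core idea, stripped to its essentials, is that a sensorimotor network receives task stimuli alongside a fixed-length embedding of the written instruction. The sketch below is a hedged toy illustration of that wiring, not the authors' architecture: the hash-based `instruction_embedding` stand-in, the single linear unit, and all numbers are invented for demonstration.

```python
# Hedged sketch: a "sensorimotor" unit driven by a sensory input plus a
# (hypothetical) instruction embedding from a pre-trained language model.
# All values here are invented for illustration only.

def instruction_embedding(text):
    """Stand-in for a pre-trained language model: map an instruction
    string to a small fixed-length unit vector (toy hash encoding)."""
    vec = [0.0] * 4
    for i, ch in enumerate(text.lower()):
        vec[i % 4] += (ord(ch) % 13) / 13.0
    norm = sum(v * v for v in vec) ** 0.5
    return [v / norm for v in vec]

def sensorimotor_step(stimulus, embedding, weights):
    """One linear 'decision' unit: combine stimulus features with the
    instruction embedding and threshold the weighted sum."""
    inputs = stimulus + embedding
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0 else 0

# Same stimulus, different instructions -> different embeddings, so a
# fixed readout can in principle respond differently per instruction.
emb_go = instruction_embedding("respond in the direction of the stimulus")
emb_anti = instruction_embedding("respond opposite to the stimulus")
weights = [0.5, -0.2, 1.0, -1.0, 0.3, 0.7]  # 2 stimulus dims + 4 embedding dims
print(sensorimotor_step([1.0, 0.0], emb_go, weights))
```

The point of the sketch is only the data flow: language enters the motor pathway as an extra input vector rather than as a separate training signal.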

The models executed tasks they had never encountered during training, averaging 83% correct based solely on the linguistic instructions. This is known as zero-shot generalization: the model applies previously learned knowledge to an entirely new task without any task-specific training.
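Concretely, zero-shot evaluation means the network is tested on a task whose trials it never saw, receiving only that task's instruction. A minimal, hypothetical version of such an evaluation loop is sketched below; the task names ("go", "anti"), the trial generator, and the string-parsing toy model are all invented stand-ins for the paper's trained network.

```python
# Hedged sketch of a zero-shot evaluation protocol: test on a held-out
# task using only its instruction, with no task-specific training trials.
# Task names and trial logic are invented for illustration.

import random

def run_trial(model, task, instruction):
    """One trial: draw a stimulus, query the model with the instruction,
    and check the response against the task's correct answer."""
    stimulus = random.choice([-1.0, 1.0])
    correct = stimulus if task == "go" else -stimulus  # "anti" reverses it
    return model(stimulus, instruction) == correct

def evaluate_zero_shot(model, held_out_task, instruction, n_trials=100):
    random.seed(0)  # fixed seed so the toy run is reproducible
    hits = sum(run_trial(model, held_out_task, instruction)
               for _ in range(n_trials))
    return hits / n_trials

# A toy "instruction-following" model that parses the instruction text,
# standing in for a network that reads an instruction embedding.
def toy_model(stimulus, instruction):
    return -stimulus if "opposite" in instruction else stimulus

acc = evaluate_zero_shot(toy_model, "anti", "respond opposite to the stimulus")
print(f"zero-shot accuracy on held-out task: {acc:.0%}")  # → 100%
```

The toy model succeeds trivially because it literally reads the keyword; the paper's result is that a trained network achieves high accuracy from the instruction's *embedding* alone, which is far less direct.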

A critical discovery by Riveland and Pouget concerns the role of language in structuring sensorimotor representations. Their research suggests that neural activity for related tasks shares a common geometry with the semantic representations of their instructions, indicating that language can effectively cue the appropriate composition of practiced skills in novel settings.
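This geometric claim can be made concrete with cosine similarity: if two tasks are related, their activity patterns should be close in neural space *and* their instructions close in semantic space. Below is a hedged toy computation with made-up vectors, intended only to show what "shared geometry" means operationally.

```python
# Hedged illustration of "shared geometry": the similarity structure of
# (made-up) task activity vectors should mirror that of (made-up)
# instruction embeddings. Tasks A and B are related; C is not.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Invented vectors for three hypothetical tasks.
activity = {"A": [1.0, 0.9, 0.1], "B": [0.9, 1.0, 0.0], "C": [0.0, 0.1, 1.0]}
embed    = {"A": [0.8, 0.7, 0.2], "B": [0.7, 0.8, 0.1], "C": [0.1, 0.0, 0.9]}

sim_activity_AB = cosine(activity["A"], activity["B"])
sim_activity_AC = cosine(activity["A"], activity["C"])
sim_embed_AB = cosine(embed["A"], embed["B"])

print(f"A-B activity similarity:    {sim_activity_AB:.2f}")
print(f"A-C activity similarity:    {sim_activity_AC:.2f}")
print(f"A-B instruction similarity: {sim_embed_AB:.2f}")
```

In this toy, related tasks A and B are similar in both spaces while the unrelated task C is distant in both; the paper's analysis tests an analogous correspondence in trained networks.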

The study also shows that the model can generate a linguistic description of a new task it has identified only through motor feedback. A partner model can then use this description to perform that task, demonstrating that the model can communicate what it has learned.
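That communication step can be sketched as two agents: an "instructor" that has worked out which task it is performing emits a description, and a "partner" that acts on the text alone. The sketch below is purely illustrative; the task names, phrases, and keyword parsing are invented, and the feedback-based identification step is omitted.

```python
# Hedged sketch of instruction-based task sharing: one agent describes a
# task in words; a partner that never trained on the task acts on the
# description alone. All names and phrases here are invented.

DESCRIPTIONS = {
    "go": "respond in the direction of the stimulus",
    "anti": "respond opposite to the stimulus",
}

def instructor(identified_task):
    """Emit a linguistic description of the task this agent identified
    from feedback (the identification process itself is omitted)."""
    return DESCRIPTIONS[identified_task]

def partner(stimulus, instruction):
    """A partner that has never seen the task, only its description."""
    return -stimulus if "opposite" in instruction else stimulus

message = instructor("anti")
print(partner(1.0, message))  # the partner reverses the stimulus → -1.0
```

The interesting property is that the only channel between the two agents is natural language, which is exactly the collaborative capability the paper highlights.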

The findings of Riveland and Pouget extend beyond theoretical neuroscience, offering several experimentally testable predictions about how linguistic information must be represented in the human brain to support flexible, general cognition. The research also holds implications for building artificial intelligence systems that interpret and execute tasks from natural language instructions.
