18 March 2024 | Reidar Riveland & Alexandre Pouget
This study explores how natural language processing can be used to build neural models that generalize to novel tasks from linguistic instructions alone. The researchers trained a set of recurrent neural networks (RNNs) on a variety of psychophysical tasks, using a pretrained language model to embed the instructions for each task. The best-performing models achieved an average of 83% correct on tasks they had never been trained on, demonstrating zero-shot learning. In these models, language scaffolds sensorimotor representations: interrelated tasks come to share a common geometry with the semantic representations of their instructions, which enables practiced skills to be composed in novel settings. The study also found that individual neurons modulate their tuning according to the semantics of the instructions. Additionally, the models were able to generate linguistic descriptions of novel tasks from motor feedback alone, and these descriptions could guide a partner model to perform the task. The results offer insight into how the human brain processes natural language to support flexible, general cognition, and the models make several experimentally testable predictions about the neural representations required for language-based generalization in the human brain.
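To make the setup concrete, below is a minimal sketch of an instruction-conditioned RNN of the kind the summary describes: a recurrent sensorimotor network that receives, at every time step, a fixed-size embedding of the task instruction produced by a frozen pretrained language model. This is an illustrative PyTorch sketch under assumed dimensions; the class name `InstructedRNN`, the 768-unit embedding width, and the sensory/motor sizes are hypothetical choices for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class InstructedRNN(nn.Module):
    """Illustrative sketch: a recurrent sensorimotor network conditioned on a
    fixed-size embedding of a natural-language instruction. All dimensions
    and wiring here are assumptions, not the paper's exact model."""

    def __init__(self, sensory_dim=65, instruct_dim=64, hidden_dim=256, motor_dim=33):
        super().__init__()
        # Project the frozen language-model embedding of the instruction
        # down to a conditioning vector fed to the RNN at every time step.
        self.instruct_proj = nn.Linear(768, instruct_dim)  # 768 = typical LM width
        self.rnn = nn.GRU(sensory_dim + instruct_dim, hidden_dim, batch_first=True)
        self.motor_out = nn.Linear(hidden_dim, motor_dim)

    def forward(self, sensory, instruct_embedding):
        # sensory: (batch, time, sensory_dim)
        # instruct_embedding: (batch, 768), from a frozen pretrained encoder
        cond = torch.relu(self.instruct_proj(instruct_embedding))
        # Tile the instruction embedding across time and concatenate with
        # the sensory stream, so language conditions every time step.
        cond = cond.unsqueeze(1).expand(-1, sensory.size(1), -1)
        hidden, _ = self.rnn(torch.cat([sensory, cond], dim=-1))
        return self.motor_out(hidden)  # motor output per time step
```

Under this scheme, zero-shot evaluation amounts to embedding the instruction for a held-out task and running the same forward pass with no weight updates; generalization then depends on how the language embedding situates the new instruction relative to the practiced ones.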