Natural language instructions induce compositional generalization in networks of neurons

18 March 2024 | Reidar Riveland & Alexandre Pouget
This study explores how natural language instructions can enable compositional generalization in networks of neurons. Drawing on advances in natural language processing, the authors build a neural model that performs novel tasks based solely on linguistic instructions, reaching an average of 83% correct in zero-shot settings. Language scaffolds the model's sensorimotor representations: activity for related tasks shares a common geometry with the semantic representations of their instructions, which lets language cue the proper composition of practiced skills in new settings. The model can also generate a linguistic description of a novel task from motor feedback, and that description can then guide a partner model to perform the task.

Comparing models built on nonlinguistic and linguistic instructions, the study finds that models with sentence-level semantics generalize significantly better. For example, SBERTNET (L) reaches an average of 97% on validation instructions, while GPTNET (XL) reaches 68% on held-out tasks. The ability to process sentence-level semantics thus appears crucial for effective generalization, and the best-performing models learn to represent tasks in a structured way that supports high performance even on previously unseen tasks.

The research also highlights the semantic modulation of single-unit tuning properties: individual neurons adjust their activity based on the semantics of the instructions, allowing the model to adapt to different task demands and perform well in novel settings. Finally, the study demonstrates that linguistic communication between networks is possible: models can produce linguistic descriptions of tasks from sensorimotor feedback, and these descriptions can then guide partner models in performing the task.
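The architecture described above can be sketched minimally: a pretrained sentence embedding of the instruction is fed, alongside the sensory input at each timestep, into a recurrent network that produces motor output. The sketch below is illustrative only; the dimensions, the random stand-in for the sentence embedding, and the untrained weights are all assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16   # instruction embedding size (SBERT is larger; shrunk here)
SENSE_DIM = 8    # sensory input size per timestep
HIDDEN_DIM = 32  # recurrent units
MOTOR_DIM = 4    # motor output size

# Hypothetical stand-in for a pretrained sentence embedding of an instruction.
instruction_embedding = rng.normal(size=EMBED_DIM)

# Randomly initialised weights; training is omitted in this sketch.
W_in = 0.1 * rng.normal(size=(HIDDEN_DIM, SENSE_DIM + EMBED_DIM))
W_rec = 0.1 * rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = 0.1 * rng.normal(size=(MOTOR_DIM, HIDDEN_DIM))

def run_trial(instruction, sensory_seq):
    """Condition a recurrent network on an instruction embedding and roll it
    over a sensory sequence, returning the motor output at each step."""
    h = np.zeros(HIDDEN_DIM)
    outputs = []
    for x_t in sensory_seq:
        u = np.concatenate([x_t, instruction])  # instruction fed at every step
        h = np.tanh(W_in @ u + W_rec @ h)
        outputs.append(W_out @ h)
    return np.stack(outputs)

sensory_seq = rng.normal(size=(10, SENSE_DIM))
motor = run_trial(instruction_embedding, sensory_seq)
print(motor.shape)  # (10, 4)
```

Because the instruction enters as a conditioning input rather than a task index, a new instruction embedding can, in principle, steer the same trained weights toward a novel task, which is the mechanism behind the zero-shot results reported above.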
Overall, the study provides insights into how linguistic information can be represented in the brain to facilitate flexible and general cognition. The results suggest that the human brain uses structured semantic representations to relate practiced and novel tasks in sensorimotor space, enabling the composition of practiced behaviors in new settings. The findings have implications for understanding the neural basis of language-based generalization and could inform future research on how language influences cognitive processes in the brain.
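The claim that task activity shares a common geometry with instruction semantics can be made concrete with a representational-similarity-style check: compare the pairwise similarity structure of instruction embeddings with that of the corresponding task representations. The toy data below (the variable names, dimensions, and the construction of `task_reps`) are hypothetical, chosen purely to illustrate the measurement, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings: one row per task. In the study these would come
# from a sentence encoder (instructions) and from recurrent activity (tasks).
instruction_reps = rng.normal(size=(5, 16))
# Construct task representations that preserve the instruction geometry,
# up to a small perturbation, so the alignment score should be high.
task_reps = np.concatenate([instruction_reps, np.zeros((5, 16))], axis=1)
task_reps = task_reps + 0.01 * rng.normal(size=(5, 32))

def similarity_matrix(X):
    """Pairwise cosine similarities between the row vectors of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def geometry_alignment(A, B):
    """Correlate the off-diagonal similarity structures of two spaces:
    an RSA-style score of how well their geometries match."""
    i, j = np.triu_indices(A.shape[0], k=1)
    return np.corrcoef(similarity_matrix(A)[i, j],
                       similarity_matrix(B)[i, j])[0, 1]

score = geometry_alignment(instruction_reps, task_reps)
print(round(score, 2))  # close to 1 when the geometries align
```

A score near 1 would indicate that tasks whose instructions are semantically similar also evoke similar network activity, which is the kind of shared geometry the study argues supports composing practiced behaviors in new settings.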