Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

9 July 2024 | Laura N. Driscoll, Krishna Shenoy & David Sussillo
Neural networks can flexibly reconfigure for different computations, but little is known about how this occurs. This study identifies an algorithmic neural substrate for modular computation through the analysis of multitasking artificial recurrent neural networks (RNNs). Dynamical motifs, recurring patterns of neural activity that implement specific computations through dynamics such as attractors, decision boundaries, and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. Dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.

Cognitive flexibility is a key feature of the human brain. Although artificial systems can outperform humans in specific tasks, they lack the flexibility needed for rapid learning and task switching. A major open question in neuroscience and artificial intelligence is how the same circuit reconfigures to perform multiple tasks. Conceptual models of cognitive flexibility propose a hierarchy of elementary processes that are reused across similar tasks. According to these models, the neural substrate for computation is modular, such that combinations of previously learned subtasks may be reconfigured to perform unfamiliar tasks; this combination of subtasks is referred to as compositionality. For example, a saccade task typically involves a cue that indicates in which direction to move the eyes. After learning a saccade task, a person could quickly learn an 'anti' version of the same task, in which the same cue now instructs a saccade in the opposite direction. The new task may be learned quickly by combining a computational building block for the original task with a previously learned 'anti' building block. Although there is some experimental evidence that neural computation is compositional, a concrete model of its implementation hinges on identifying modular components with compositional potential.

The time and effort required to train animals to perform many tasks have limited the exploration of multitask computation in biological networks, but artificial neural networks now present an opportunity to explore the topic. The study of cognition through simulations in artificial networks has led to substantial advances in understanding neural computation over the past decade. However, researchers have typically trained artificial neural networks to perform single tasks in isolation, with few exceptions, limiting the insights into biological neural circuits that perform many tasks. One exception to this trend is the study by Yang et al., in which the authors trained a single network to perform 20 related tasks and thereby identified clustered representations in state space that supported task compositionality.
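To make this training setup concrete, the sketch below builds a minimal task-cued multitask RNN in PyTorch. The positive (softplus) activation mirrors the paper's restriction of unit activations to positive values, and the lesion helper zeroes the outgoing weights of a chosen cluster of units to probe for modular deficits. All names, layer sizes, and the specific nonlinearity are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed sizes, naming and softplus nonlinearity) of a
# task-cued multitask RNN of the kind analyzed in the paper.
import torch
import torch.nn as nn

class MultitaskRNN(nn.Module):
    def __init__(self, n_stim=4, n_tasks=20, n_hidden=256, n_out=3):
        super().__init__()
        # Input = stimulus channels plus a one-hot task (rule) cue.
        self.w_in = nn.Linear(n_stim + n_tasks, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.act = nn.Softplus()        # positive activation -> clustered units
        self.n_hidden = n_hidden

    def forward(self, x):
        # x: (time, batch, n_stim + n_tasks); simple discrete-time update.
        h = x.new_zeros(x.shape[1], self.n_hidden)
        outs = []
        for t in range(x.shape[0]):
            h = self.act(self.w_in(x[t]) + self.w_rec(h))
            outs.append(self.w_out(h))
        return torch.stack(outs), h

def lesion_cluster(model, unit_idx):
    """Silence a cluster of units by zeroing their outgoing weights."""
    with torch.no_grad():
        model.w_rec.weight[:, unit_idx] = 0.0   # remove recurrent influence
        model.w_out.weight[:, unit_idx] = 0.0   # remove readout influence

# Example: lesion a hypothetical cluster (units 10-19), then re-evaluate
# performance task by task to test for a modular deficit.
rnn = MultitaskRNN()
lesion_cluster(rnn, list(range(10, 20)))
```

In this setup, clusters might be identified, for example, by grouping units with similar task-variance profiles; re-running each task after `lesion_cluster` corresponds to the modular-deficit test described above.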
In the present work, we identified the computational substrate that allowed for modular computation in these networks, which we call 'dynamical motifs'.
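Dynamical motifs such as attractors and decision boundaries are typically exposed by locating approximate fixed points of the trained dynamics: hidden states that the update rule leaves nearly unchanged under a frozen input. The sketch below follows this standard recipe, minimizing the speed q(h) = ½‖F(h) − h‖² from many initial states. It reuses the hypothetical `MultitaskRNN` above, and the optimizer settings and tolerances are assumptions.

```python
# Sketch of fixed-point analysis: search for hidden states h where the
# recurrent update F(h) ~= h under a frozen input x_const, exposing
# attractors, ring attractors and other dynamical motifs.
def find_fixed_points(model, x_const, h_inits, steps=2000, lr=0.01, tol=1e-6):
    for p in model.parameters():                 # freeze weights; optimize h only
        p.requires_grad_(False)
    fixed_points = []
    for h0 in h_inits:
        h = h0.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([h], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            h_next = model.act(model.w_in(x_const) + model.w_rec(h))
            q = 0.5 * ((h_next - h) ** 2).sum()  # "speed" of the dynamics at h
            q.backward()
            opt.step()
        with torch.no_grad():                    # re-check speed at the final h
            h_next = model.act(model.w_in(x_const) + model.w_rec(h))
            q = 0.5 * ((h_next - h) ** 2).sum()
        if q.item() < tol:                       # keep only sufficiently slow points
            fixed_points.append(h.detach())
    return fixed_points
```

Linearizing the dynamics around each recovered point (via the Jacobian of the update) then distinguishes stable attractors from the saddle points that implement decision boundaries.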