The paper explores how to represent time in connectionist models, emphasizing the importance of temporal structure in human behavior. It proposes a method where time is implicitly represented through the dynamic memory of recurrent networks, rather than explicitly as a spatial dimension. This approach allows networks to learn internal representations that incorporate both task demands and memory demands, enabling them to process temporal sequences effectively.
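To make the idea of implicit time concrete, the sketch below shows a minimal Elman-style simple recurrent network in plain numpy: the hidden activations from the previous time step are copied into context units and fed back alongside the current input, so temporal information lives in the network's state rather than in a spatial input window. The layer sizes, weight initialization, and names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleRecurrentNetwork:
    """Elman-style SRN: the hidden state is fed back as 'context' input.

    Sizes and initialization are illustrative, not from the paper.
    """
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))       # input -> hidden
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context -> hidden
        self.W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)                      # dynamic memory

    def step(self, x):
        # Hidden units see the current input plus a copy of the previous
        # hidden state; this is where time is represented implicitly.
        h = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        y = sigmoid(self.W_out @ h)
        self.context = h.copy()   # becomes the context for the next step
        return h, y

# Feed a short bit sequence one element at a time; the output at each
# step depends on the whole history, not just the current input.
net = SimpleRecurrentNetwork(n_in=1, n_hidden=4, n_out=1)
for bit in [1, 0, 1, 1, 0]:
    hidden, prediction = net.step(np.array([float(bit)]))
```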
The paper discusses various simulations, including a temporal version of the XOR function, discovering syntactic/semantic features for words, and processing letter sequences. These simulations demonstrate that recurrent networks can learn rich, context-dependent representations that capture temporal structure and generalize across classes of items. The networks are able to learn internal representations that reflect the structure of the input, allowing them to make predictions based on previous context.
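As one concrete illustration of the temporal XOR simulation, the sketch below builds the kind of training stream that task uses: pairs of random bits are each followed by their XOR, the stream is presented one bit at a time, and the target at each step is simply the next bit. Only every third bit is actually predictable from context, which is what makes the task a temporal analogue of XOR. The helper name and stream length are assumptions for illustration.

```python
import numpy as np

def temporal_xor_stream(n_triples, seed=0):
    """Build a bit stream where every third bit is the XOR of the two before it."""
    rng = np.random.default_rng(seed)
    bits = []
    for _ in range(n_triples):
        a, b = rng.integers(0, 2, size=2)
        bits.extend([a, b, a ^ b])   # the third bit is fully determined
    return np.array(bits, dtype=float)

stream = temporal_xor_stream(n_triples=200)

# The network sees one bit per time step and is trained to predict the
# next bit; inputs and targets are just the stream offset by one step.
inputs, targets = stream[:-1], stream[1:]

# A successful network shows low error on every third position (the XOR
# bits) and chance-level error on the unpredictable random bits.
```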
The paper also examines the discovery of lexical categories and the type/token distinction. It shows that recurrent networks trained to predict the next element of a sequence learn the structure of words and sentences and develop internal representations that reflect relationships among word classes. These representations are context-dependent and support predictions about which words are likely to follow a given context.
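The lexical-category analysis works by averaging, for each word, the hidden-state vectors the trained network produces whenever that word occurs, and then clustering those averages. The sketch below shows that analysis step in outline, assuming the per-occurrence hidden vectors have already been collected from a trained network; the placeholder data, variable names, and the use of scipy's hierarchical clustering are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Assume hidden_states maps each word to a list of hidden-state vectors,
# one per occurrence of that word in the training corpus (collected from
# a trained network; placeholder random data stands in for them here).
rng = np.random.default_rng(0)
vocab = ["woman", "man", "dog", "cat", "eat", "chase", "break"]
hidden_states = {w: [rng.normal(size=16) for _ in range(5)] for w in vocab}

# Context-averaged representation: the mean hidden vector for each word.
mean_vectors = np.stack([np.mean(hidden_states[w], axis=0) for w in vocab])

# Hierarchical clustering of the mean vectors; in the paper this kind of
# analysis groups words into broad classes such as nouns and verbs.
tree = linkage(mean_vectors, method="average")
dendrogram(tree, labels=vocab, no_plot=True)  # inspect or plot the hierarchy
```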
The paper further discusses how recurrent networks learn sentence structure. It shows that these networks develop hierarchical representations of words, with categories organized into nested groupings in which broad classes subdivide into finer ones. Because a word's internal representation depends on the context in which it occurs, the same word is represented slightly differently across occurrences while remaining recognizable as an instance of its category.
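Because the hidden state carries context, each occurrence of a word (a token) receives a slightly different internal representation, yet those token vectors stay closest to the average vector for that word (its type). The self-contained sketch below shows one way to check this with cosine similarity; the data is synthetic and all names are illustrative assumptions, not outputs of the paper's network.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder hidden-state vectors: several tokens (occurrences in
# different sentence contexts) for each of two word types.
rng = np.random.default_rng(1)
type_centers = {"dog": rng.normal(size=16), "eat": rng.normal(size=16)}
tokens = {w: [c + 0.1 * rng.normal(size=16) for _ in range(4)]
          for w, c in type_centers.items()}

# The "type" is the mean of a word's token vectors; individual tokens
# differ with context but remain closer to their own type than to others.
types = {w: np.mean(vs, axis=0) for w, vs in tokens.items()}
for word, vectors in tokens.items():
    for tok in vectors:
        own = cosine(tok, types[word])
        other = max(cosine(tok, types[w]) for w in types if w != word)
        assert own > other  # context-dependent token still identifiable as its type
```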
The paper concludes that connectionist models, particularly recurrent networks, are capable of learning complex temporal structure and of representing lexical categories and the type/token distinction. Such networks offer a flexible and powerful way to represent time, allowing them to process and learn from temporal sequences effectively.