The chapter "Finding Structure in Time" by Jeffrey L. Elman explores the representation of time in connectionist models, emphasizing the importance of time in human behaviors and cognition. The author argues that time should be represented implicitly through its effects on processing rather than explicitly as a spatial dimension. This approach is implemented using recurrent links to provide networks with dynamic memory, allowing hidden unit patterns to feed back to themselves and develop internal representations that reflect task demands and prior states.
The chapter discusses the limitations of representing time explicitly, such as the need for an input buffer (shift register), a rigid limit on the duration of patterns, and the difficulty of distinguishing relative from absolute temporal position, so that shifted versions of the same pattern appear dissimilar. It introduces a network architecture with context units that provide memory, enabling the network to learn complex temporal structures. The architecture is tested on various problems, including the temporal XOR function, letter sequences, and word order in sentences.
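To make the copy-back mechanism concrete, here is a minimal sketch of an Elman-style forward pass in Python/NumPy. The class and its names (`SimpleRecurrentNetwork`, `step`) are illustrative assumptions, not Elman's original implementation; the point is only that the network's sole memory is a copy of its own previous hidden state.

```python
import numpy as np

class SimpleRecurrentNetwork:
    """Sketch of an Elman-style network: hidden activations are copied to
    context units and fed back together with the next input."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Weights from [input + context] to hidden, and hidden to output.
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in + n_hidden))
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # context units start at rest

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(self, x):
        """Process one time step; memory is implicit in the context copy."""
        combined = np.concatenate([x, self.context])
        hidden = self._sigmoid(self.W_xh @ combined)
        output = self._sigmoid(self.W_hy @ hidden)
        self.context = hidden.copy()  # one-to-one copy-back of hidden state
        return output
```

Because the context units hold the prior hidden pattern rather than raw past inputs, the network's "memory" is shaped by what the task has required it to encode, which is exactly the sense in which time is represented implicitly.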
Key findings include:
1. **Temporal XOR**: Trained to predict the next bit of a stream built from pairs of random bits followed by their XOR, the network learns to anticipate the deterministic third bit of each triplet, demonstrating sensitivity to temporal structure (a sketch of this task follows the list).
2. **Letter Sequences**: The network can learn the temporal structure of letter sequences, predicting vowels and consonants based on co-occurrence statistics.
3. **Word Order**: The network can learn the underlying structure of word order in sentences, identifying lexical classes and hierarchical relationships.
4. **Context Effects**: The network's internal representations are context-dependent, reflecting the influence of surrounding words on word meaning.
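The temporal XOR task can be illustrated with a short, self-contained sketch of the training stream. The function name and parameters below are assumptions for illustration; the data could be fed, input by input, to a recurrent network such as the `SimpleRecurrentNetwork` sketch above.

```python
import numpy as np

def temporal_xor_stream(n_triplets, seed=0):
    """Build the temporal XOR stream described in the chapter: two random
    bits followed by their XOR, concatenated into one long bit sequence."""
    rng = np.random.default_rng(seed)
    bits = []
    for _ in range(n_triplets):
        a, b = rng.integers(0, 2, size=2)
        bits.extend([int(a), int(b), int(a ^ b)])  # third bit is predictable
    return np.array(bits, dtype=float)

# The prediction task: at every time step the target is simply the next bit.
seq = temporal_xor_stream(1000)
inputs, targets = seq[:-1], seq[1:]
# A network trained on (inputs, targets) can only reduce error on every
# third bit, which is fully determined by the two preceding bits; the other
# bits are random. A cyclic dip in prediction error is therefore the sign
# that the network has picked up the temporal structure.
```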
The chapter concludes by discussing the implications of these findings for understanding the representation of type/token differences and the nature of symbolic representations in connectionist models.