1998 | Jeffrey L. Elman, Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi and Kim Plunkett
The chapter introduces the concept of connectionism and its relevance to developmental psychology. Connectionism, a framework inspired by neural networks, is characterized by simple processing elements (nodes) and weighted connections between them. The authors argue that connectionist models can capture complex behaviors and developmental changes more effectively than traditional associationist models, which are linear and lack the ability to learn complex relationships. They emphasize that connectionist networks are nonlinear dynamical systems that can develop internal representations and learn abstract structural relationships, challenging the notion of innate stages in development.
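To make the basic computation concrete, here is a minimal sketch of a single connectionist node as the chapter describes it: a weighted sum of inputs passed through a nonlinear activation function. This is not the authors' code; the function name, weights, and the choice of a sigmoid are illustrative.

```python
import math

def node_activation(inputs, weights, bias):
    """One connectionist node: a weighted sum of its inputs passed
    through a nonlinear (here sigmoid) activation function."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # squashes the net input into (0, 1)

# Illustrative call with hand-picked weights.
print(node_activation([1.0, 0.0], [0.5, -0.3], 0.1))
```

The nonlinearity is the crucial ingredient: it is what lets networks built from such nodes compute relationships that a purely linear associationist model cannot.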
The chapter also discusses the limitations of early connectionist models, such as their inability to capture higher-order correlations, exemplified by the exclusive-OR (XOR) problem: because no single linear boundary separates the XOR cases, a single layer of modifiable connections cannot solve it, and hidden units are required. The authors introduce Hebbian learning and the Perceptron Convergence Procedure (PCP) as early methods for training networks, but note that both are limited to what a single layer of modifiable weights can learn. They then present backpropagation of error as a more powerful learning algorithm that can train networks with one or more hidden layers by propagating error information backward through the network.
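A minimal sketch of the Perceptron Convergence Procedure may help show the limitation the chapter points to. The rule nudges the weights toward the target whenever the thresholded output is wrong; the training values and function name below are illustrative, not from the chapter.

```python
def train_perceptron(examples, epochs=25, lr=0.1):
    """Perceptron Convergence Procedure: adjust weights by the error
    whenever the thresholded output is wrong. Guaranteed to converge
    only when the patterns are linearly separable."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            y = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
# AND is linearly separable, so the weights settle on a solution.
# XOR is not, so the updates cycle forever without converging.
```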
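By contrast, backpropagation can solve XOR by training a layer of hidden units. The following is a minimal sketch under assumed settings (a 2-2-1 sigmoid network, squared error, online updates, illustrative learning rate and seed), not the authors' implementation.

```python
import math
import random

random.seed(0)  # illustrative seed; an unlucky draw can land in a local minimum

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2-2-1 network: the two hidden units provide the internal
# representation that XOR requires.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

for _ in range(10000):
    for x, t in XOR:
        # Forward pass.
        h = [sig(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
        y = sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
        # Backward pass: the output error is propagated back through the
        # output weights to assign blame to each hidden unit.
        d_o = (t - y) * y * (1 - y)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] += lr * d_o * h[j]
            w_h[j][0] += lr * d_h[j] * x[0]
            w_h[j][1] += lr * d_h[j] * x[1]
            b_h[j] += lr * d_h[j]
        b_o += lr * d_o

for x, t in XOR:
    h = [sig(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    print(x, "->", round(sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o), 2), "target", t)
```

The backward pass is the key step: because error information flows back through the output weights, hidden units receive a training signal even though no target is specified for them directly.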
The authors conclude by emphasizing the importance of connectionism in cognitive neuroscience and its potential to integrate developmental perspectives into connectionist theories. They stress that connectionist models can capture the emergence of complex behaviors from simple interactions, and that the learning process itself can lead to changes in the network's ability to learn, similar to how children learn over time.