The chapter discusses the importance of canonical neural models for studying the fundamental principles of information processing in the brain. It notes that mathematical analysis of a specific neural model is of limited value, because the results depend on the model's details and therefore vary from model to model. To address this, the authors reduce a family of Hodgkin-Huxley-type models to a canonical model that retains the key features of the family while being simpler and more tractable. The canonical model is derived through a continuous change of variables, allowing the study of universal neurocomputational properties shared by all members of the family.
The chapter also explores weakly connected neural networks, where the typical size of postsynaptic potentials is small compared to the threshold for cell discharge. This assumption leads to models of the form \(\dot{x}_i = f(x_i, \lambda_i) + \varepsilon \sum_{j=1}^{n} g_{ij}(x_i, x_j, \varepsilon)\), where \(\varepsilon\) is a small parameter reflecting the strength of connections.
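A weakly connected network of this form can be sketched numerically. The following is a minimal illustration, not a model from the chapter: the intrinsic dynamics \(f\) and the coupling \(g\) below are hypothetical choices made for concreteness, and the integration uses a plain Euler step.

```python
import numpy as np

def f(x, lam):
    # Illustrative intrinsic dynamics for each unit (not from the chapter):
    # a cubic nonlinearity with bifurcation parameter lam.
    return lam * x - x**3

def g(xi, xj):
    # Illustrative pairwise coupling g_ij(x_i, x_j): linear diffusive term.
    return xj - xi

def simulate(x0, lam, eps, dt=0.01, steps=1000):
    """Euler integration of dx_i/dt = f(x_i, lam) + eps * sum_j g(x_i, x_j)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        coupling = np.array([sum(g(x[i], x[j]) for j in range(n))
                             for i in range(n)])
        x = x + dt * (f(x, lam) + eps * coupling)
    return x

# Small eps reflects weak synaptic connections relative to the threshold.
x_final = simulate([0.5, -0.3, 0.1], lam=1.0, eps=0.05)
```

Because \(\varepsilon\) is small, the coupling only weakly perturbs each unit's intrinsic attractor, which is the regime the perturbation analysis in the chapter exploits.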
Key bifurcations, such as the cusp bifurcation, are discussed, and the authors show how these can be transformed into canonical forms. For example, a cusp bifurcation in a sigmoidal neuron can be represented by the canonical model \(y_i' = r_i - y_i^3 + \sum_{j=1}^{n} s_{ij} y_j\). The chapter also covers small amplitude oscillations, large amplitude oscillations, and neural excitability, providing detailed mathematical formulations and theorems for each case.
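The canonical model for the cusp bifurcation can be integrated directly. The sketch below uses illustrative values for the parameters \(r_i\) and the connection matrix \(s_{ij}\) (they are assumptions, not values from the chapter) and a simple Euler scheme; with a symmetric coupling matrix the system is gradient-like and settles onto an equilibrium.

```python
import numpy as np

# Illustrative parameters (not taken from the chapter).
r = np.array([0.1, -0.1, 0.0])      # bifurcation parameters r_i
S = np.array([[0.0, 0.2, 0.1],      # synaptic coefficients s_ij
              [0.2, 0.0, 0.1],
              [0.1, 0.1, 0.0]])

def step(y, dt=0.01):
    # Euler step of the canonical model y_i' = r_i - y_i^3 + sum_j s_ij y_j.
    return y + dt * (r - y**3 + S @ y)

y = np.zeros(3)
for _ in range(5000):
    y = step(y)
```

After integration, `y` sits near an equilibrium of the canonical model, i.e. the right-hand side \(r_i - y_i^3 + \sum_j s_{ij} y_j\) is close to zero in each component.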
The authors emphasize that the canonical model approach provides a rigorous way to derive simple yet accurate models, even when detailed equations are not available. This method is valuable for understanding universal neurocomputational properties and can be applied to a wide range of neural systems, including those with different dynamics and equations.