The Capacity of the Hopfield Associative Memory


JULY 1987 | ROBERT J. MCELIECE, FELLOW, IEEE, EDWARD C. POSNER, FELLOW, IEEE, EUGENE R. RODEMICH, AND SANTOSH S. VENKATESH, STUDENT MEMBER, IEEE
The paper analyzes the capacity of the Hopfield associative memory using techniques from coding theory. The memory stores n-tuples of ±1's; each component is updated to a hard-limited (sign-thresholded) linear function of the other components. With symmetric connections, the dynamics ultimately reach a stable state. The connection matrix is built as a sum of outer products of m fundamental memories, and the goal is to recover one of the m memories from a probe vector lying less than Hamming distance n/2 from it. If the m memories are chosen at random, the maximum asymptotic value of m for which most of the original memories are exactly recoverable is n/(2 log n). Under the stronger requirement that every one of the m memories be exactly recoverable, m can be no more than n/(4 log n) asymptotically. Extensions consider capacity under quantization of the outer-product connection matrix, a problem closely related to the capacity of the quantized Gaussian channel.

The Hopfield network consists of n simple bistable elements (neurons), each representing one bit of information, so the state of the system is an n-tuple of ±1's. The network is densely interconnected, with a symmetric connection matrix whose diagonal is zero. The state evolves by a threshold decision rule: each neuron takes the sign of a weighted sum of the other neurons' states, and the dynamics converge to a fixed point (attractor).

The analysis addresses the stability of the fundamental memories, their radius of attraction, and the resulting capacity. A key observation is that the fundamental memories are approximately eigenvectors of the connection matrix T with eigenvalue close to n, so each memory reinforces itself under the update rule while the remaining memories contribute only noise. Capacity is the number of memories that can be stored while still guaranteeing recovery with error correction; the answer is asymptotically n/(2 log n) or n/(4 log n), depending on whether most or all memories must be recovered. The paper also discusses the implications of allowing a small fraction of errors in the final stable state and the relationship between the Hopfield model and information theory.
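The approximate-eigenvector claim rests on a standard signal-plus-noise decomposition. The LaTeX sketch below reproduces that heuristic; it is a simplified version of the argument rather than the paper's rigorous proof, and the symbols x^(α) for the fundamental memories are introduced here for illustration.

```latex
% Heuristic sketch (assumptions: random +/-1 memories, diagonal terms ignored).
% Connection matrix as a sum of outer products of the m fundamental memories:
\[
  T \;=\; \sum_{\beta=1}^{m} x^{(\beta)} \bigl(x^{(\beta)}\bigr)^{\mathsf T},
  \qquad x^{(\beta)} \in \{-1,+1\}^{n}.
\]
% Applying T to a fundamental memory splits into signal and crosstalk noise:
\[
  T x^{(\alpha)}
  \;=\; \underbrace{n\,x^{(\alpha)}}_{\text{signal}}
  \;+\; \underbrace{\sum_{\beta\neq\alpha}
        x^{(\beta)} \bigl\langle x^{(\beta)},\,x^{(\alpha)}\bigr\rangle}_{\text{crosstalk}},
\]
% since \langle x^{(\alpha)}, x^{(\alpha)}\rangle = n.  Each crosstalk component is a sum of
% roughly (m-1)n independent +/-1 terms, hence approximately Gaussian with standard
% deviation about \sqrt{mn}; a component of x^{(\alpha)} flips only when the crosstalk
% exceeds the signal n, which happens with probability about Q(\sqrt{n/m}).  A union
% bound over the n components of one memory gives m \lesssim n/(2\log n); extending the
% bound over all nm components of all m memories roughly doubles the log factor,
% giving the m \lesssim n/(4\log n) threshold quoted above.
```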
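The outer-product construction and the sign-threshold dynamics translate directly into a few lines of code. The Python/NumPy sketch below is illustrative only: the function names `store` and `recall` and all parameter values are choices made here, not taken from the paper. It builds the zero-diagonal connection matrix and runs asynchronous threshold updates until a fixed point is reached.

```python
import numpy as np

def store(memories):
    """Build the connection matrix T as a sum of outer products of the
    fundamental memories (rows of `memories`, each an n-vector of +/-1),
    with the diagonal zeroed as in the Hopfield model."""
    T = memories.T @ memories          # sum over memories of x x^T
    np.fill_diagonal(T, 0)             # no self-connections
    return T

def recall(T, probe, max_sweeps=100):
    """Asynchronous threshold dynamics: repeatedly set each component to the
    sign of a linear function of the other components until a stable state
    (fixed point) is reached or max_sweeps is exhausted."""
    x = probe.copy()
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(n):    # update neurons in random order
            s = np.sign(T[i] @ x)             # diagonal is zero, so x[i] is excluded
            if s != 0 and s != x[i]:
                x[i] = s
                changed = True
        if not changed:                        # attractor reached
            return x
    return x

# Illustrative usage: store a modest number of random fundamental memories and
# try to recover one of them from a probe corrupted in a few positions.
rng = np.random.default_rng(0)
n, m = 200, 10
memories = rng.choice([-1, 1], size=(m, n))
T = store(memories)

probe = memories[0].copy()
flip = rng.choice(n, size=20, replace=False)   # corrupt 20 of the n components
probe[flip] *= -1

recovered = recall(T, probe)
print("Hamming distance to fundamental memory:",
      int(np.sum(recovered != memories[0])))
```

With these parameters the load m/n is small, so the corrupted probe typically converges back to the stored memory; pushing m well past the n/(4 log n) regime makes exact recovery of every memory fail with appreciable probability, which is the phenomenon the capacity results quantify.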