2017 | Geoffrey W. Burr, Robert M. Shelby, Abu Sebastian, Sangbum Kim, Seyoung Kim, Severin Sidler, Kumar Virwani, Masatoshi Ishii, Prithish Narayanan, Alessandro Fumarola, Lucas L. Sanches, Irem Boybat, Manuel Le Gallo, Kibong Moon, Jiyoo Woo, Hyunsang Hwang and Yusuf Leblebici
Neuromorphic computing using non-volatile memory (NVM) is a promising approach for implementing massively-parallel and energy-efficient systems. This review discusses recent advances in applying NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and 'memcomputing'. In SNNs, NVM synaptic connections are updated using spike-timing-dependent plasticity (STDP), a biologically-inspired learning rule. For DNNs, NVM arrays can represent synaptic weights, enabling matrix-vector multiplication in an analog, parallel manner, potentially improving power and speed compared to GPU-based training. NVM devices such as phase change memory (PCM), conductive-bridging RAM (CBRAM), filamentary and non-filamentary RRAM, and others have been explored for use as synapses or neurons in neuromorphic applications. The virtues and limitations of these devices, including conductance dynamic range, non-linearity, retention, endurance, and variability, are assessed. The review also discusses the challenges of implementing NVM-based neuromorphic systems, including the need for robust computational schemes, peripheral circuitry, and integrated crossbar selection devices. The paper highlights the potential of NVM for neuromorphic computing, including applications in STDP, matrix-vector multiplication, and memcomputing. It also addresses the impact of NVM device variability on network performance and the need for efficient, low-power implementations. The review concludes with a discussion of the potential of NVM-based neuromorphic systems for future computing applications.
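The pairwise STDP rule mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the amplitudes `A_PLUS`, `A_MINUS` and time constant `TAU` are assumed values chosen for demonstration.

```python
import math

# Hedged sketch of pairwise STDP: a synapse is potentiated when the
# presynaptic spike precedes the postsynaptic spike, and depressed
# otherwise, with magnitude decaying exponentially in the spike-time
# difference. All constants below are illustrative assumptions.
A_PLUS, A_MINUS = 0.01, 0.012   # assumed potentiation/depression amplitudes
TAU = 20.0                      # assumed decay time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # pre fires before post: causal pairing -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU)
    else:
        # post fires before pre: anti-causal pairing -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(10.0, 15.0))   # causal pair -> positive weight change
print(stdp_dw(15.0, 10.0))   # anti-causal pair -> negative weight change
```

In an NVM implementation, the positive branch would map to a partial-SET (conductance increase) pulse and the negative branch to a partial-RESET (conductance decrease) pulse.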
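The analog matrix-vector multiplication described for DNN weight arrays can be simulated in a few lines. This is a minimal sketch under assumed device parameters (conductance range, voltages), not the paper's hardware: in a real crossbar each weight is stored as a device conductance, and Ohm's and Kirchhoff's laws produce all column currents simultaneously.

```python
import numpy as np

# Hypothetical NVM crossbar: weight W[i, j] is stored as conductance
# G[i, j] (siemens). Applying read voltages V to the rows yields column
# currents I = G^T V in one analog step; here we emulate that digitally.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # assumed conductance range, 4 rows x 3 columns
V = np.array([0.2, 0.0, 0.1, 0.3])         # assumed read voltages on the rows (volts)

# Each column current is the dot product of its conductances with the
# input voltages -- the crossbar computes all columns in parallel.
I = G.T @ V

print(I)   # one current per column (amperes)
```

Because every device contributes its current at once, the operation is O(1) in time regardless of array size, which is the source of the power and speed advantage over sequential digital multiply-accumulate.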