Memory devices and applications for in-memory computing

February 6, 2020 | Abu Sebastian, Manuel Le Gallo, Riduan Khaddam-Aljameh, and Evangelos Eleftheriou
In-memory computing is an alternative to traditional von Neumann architectures in which computational tasks are performed directly within memory devices, leveraging their physical properties. This reduces data movement between memory and processing units, which is costly in terms of both time and energy.

Memory devices such as SRAM, DRAM, Flash, and resistive memory (e.g., RRAM, PCM, MRAM) are being explored for in-memory computing. These devices enable computational primitives such as matrix-vector multiplication (MVM), logical operations, and arithmetic functions, which are crucial for applications in scientific computing, signal processing, optimization, machine learning, and deep learning. In-memory computing also offers potential improvements in computational time complexity due to massive parallelism and the physical coupling between memory and processing units.

Charge-based memory devices such as SRAM and DRAM are used for in-memory logic and arithmetic operations, while resistance-based devices such as RRAM, PCM, and MRAM offer non-volatile storage and stateful logic. These devices can be used for tasks such as image compression, compressed sensing, combinatorial optimization, and associative memory.
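As a rough illustration of the MVM primitive mentioned above: in a resistive crossbar array, matrix elements are encoded as device conductances, the input vector is applied as voltages along the rows, and the result is read out as column currents (Ohm's law and Kirchhoff's current law). The NumPy sketch below simulates this behaviour; the differential conductance mapping, the conductance range, and the Gaussian read-noise model are illustrative assumptions, not values taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative device parameters (assumed, not from the article)
G_MIN, G_MAX = 1e-6, 1e-4    # conductance range in siemens
READ_NOISE_STD = 0.02        # relative std of conductance variability

def to_conductances(W):
    """Map a real-valued matrix onto differential conductance pairs.

    Positive and negative weights are carried by two separate devices
    per matrix element, a common differential encoding."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def analog_mvm(W, x):
    """Simulate y = W @ x on a crossbar: voltages in, currents out."""
    G_pos, G_neg, scale = to_conductances(W)
    # Device variability modelled as multiplicative Gaussian noise
    G_pos = G_pos * (1 + READ_NOISE_STD * rng.standard_normal(G_pos.shape))
    G_neg = G_neg * (1 + READ_NOISE_STD * rng.standard_normal(G_neg.shape))
    i_pos = G_pos @ x          # column currents sum (Kirchhoff's law)
    i_neg = G_neg @ x
    return (i_pos - i_neg) / scale   # rescale to recover W @ x

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("exact  :", W @ x)
print("analog :", analog_mvm(W, x))   # approximate, due to simulated noise
```

Because the multiply-accumulate happens in the analog domain across the whole array at once, the operation is performed in effectively constant time with respect to the matrix size, which is the source of the parallelism noted above; the noisy output also makes clear why precision and device variability are recurring challenges.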
In machine learning, in-memory computing can support deep learning, spiking neural networks (SNNs), and stochastic computing. It enables efficient inference and training of neural networks by reducing data movement and improving energy efficiency, although challenges remain in terms of device variability, precision, and the need for custom training and retraining.

In stochastic computing, memristive devices can generate random numbers and support probabilistic inference (a simulation sketch of this idea appears below). In security, memristive devices can be used to implement physically unclonable functions (PUFs) for secure authentication.

Overall, in-memory computing offers significant opportunities for improving computational efficiency and reducing energy consumption, but further research is needed to overcome the challenges associated with device variability, precision, and scalability.
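The random-number point can be made concrete with a small simulation: a memristive device driven by a weak programming pulse switches only with some probability, so repeated pulse/read/reset cycles yield a random bit stream, which can then be de-biased (here with simple von Neumann extraction). The switching probability and the pulse/read abstraction below are assumptions for illustration, not parameters from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed probability that a weak SET pulse switches the device
P_SWITCH = 0.37

def pulse_and_read(n_bits):
    """Model n pulse/read/reset cycles of a stochastically switching device.

    Each cycle: apply a weak SET pulse, read whether the device switched
    (1 = low-resistance state reached), then RESET it for the next cycle."""
    return (rng.random(n_bits) < P_SWITCH).astype(np.uint8)

def von_neumann_extract(bits):
    """De-bias the raw bit stream: keep the first bit of each unequal pair."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

raw = pulse_and_read(100_000)
unbiased = von_neumann_extract(raw)
print("raw bit bias      :", raw.mean())       # close to P_SWITCH
print("unbiased bit bias :", unbiased.mean())  # close to 0.5
```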