All-Optical Machine Learning Using Diffractive Deep Neural Networks


Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, and Aydogan Ozcan
This paper introduces an all-optical Diffractive Deep Neural Network (D²NN) architecture that learns to implement various functions through deep-learning-based design of passive diffractive layers. A D²NN is physically formed by multiple layers of diffractive surfaces that work collaboratively to perform an arbitrary function the network can statistically learn. The framework rests on wave propagation and optical diffraction: each neuron is connected to the neurons of the following layer through a secondary wave whose amplitude and phase are modulated by both the input interference pattern and the neuron's local transmission/reflection coefficient. The network is trained in a computer using an error back-propagation method; once trained and fabricated, it performs the learned function at the speed of light using only optical diffraction and passive optical components.

The authors experimentally validated the framework with 3D-printed D²NNs operating in the terahertz part of the spectrum. One network classified handwritten digits from the MNIST dataset with 91.75% accuracy; another, comprising 0.45 million neurons, implemented the function of an imaging lens. The design is scalable and power-efficient, making it suitable for applications in all-optical image analysis, feature detection, and object classification, and it can be fabricated with various high-throughput 3D fabrication methods. It may also enable new camera, microscope, and optical-component designs that learn to perform unique imaging tasks.
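The forward model described above — free-space diffraction between layers, followed by a local phase modulation at each diffractive "neuron" — can be sketched numerically with the angular-spectrum method. This is a simplified illustration, not the authors' implementation; the grid size, wavelength, pixel pitch, and layer spacing below are illustrative placeholders, and evanescent spatial frequencies are simply discarded.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field a distance z using the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (cycles per unit length)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(field, phase_layers, wavelength, dx, z):
    """Pass an input field through successive phase-only diffractive layers.

    Each element of a layer acts as a 'neuron' that applies a local phase
    shift to the secondary wave arriving at it; layers are separated by
    free-space propagation over distance z.
    """
    for phi in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * phi)
    # final propagation to the detector plane; detectors measure intensity
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field)**2
```

In a training setup, the phase values `phi` would be the learnable parameters, optimized by back-propagating an error defined on the detector-plane intensity; the fabricated layers then realize the learned phases passively.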
The authors also discussed using spatial light modulators to create reconfigurable D²NNs, which would enable transfer learning and further performance improvement, and noted that the approach can be scaled up to large-scale optical components. They additionally considered applying the mathematical basis of D²NNs to computer-based neural networks, where virtual wave propagation might improve network performance. Overall, the study highlights the potential of all-optical deep learning frameworks for a range of applications in imaging and optical processing.
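For the digit-classification task, the class decision is read out from the detector plane: each digit is assigned a detector region, and the predicted class is the region collecting the most optical intensity. The sketch below illustrates that readout; the region layout is a hypothetical placeholder, not the paper's exact detector geometry.

```python
import numpy as np

def classify_from_intensity(intensity, regions):
    """Sum detector-plane intensity over each class region; argmax gives the class.

    `regions` is a list of (row_start, row_end, col_start, col_end) tuples,
    one per class, defining where each class's detector sits.
    """
    scores = [intensity[r0:r1, c0:c1].sum() for (r0, r1, c0, c1) in regions]
    return int(np.argmax(scores)), scores

# illustrative layout: ten stacked detector regions on a 100x100 plane
digit_regions = [(10 * i, 10 * i + 10, 45, 55) for i in range(10)]
```

During training, the loss would push the diffractive layers to focus light onto the region matching the true digit, so that this simple argmax readout implements the classifier.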