TVM: An Automated End-to-End Optimizing Compiler for Deep Learning

20 May 2018 | Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy
TVM is an automated end-to-end compiler that optimizes deep learning workloads for a wide range of hardware back-ends, including CPUs, GPUs, and FPGA-based specialized accelerators. It addresses the difficulty of deploying deep learning models across such diverse platforms by performing both graph-level optimizations (such as operator fusion and data layout transformation) and operator-level optimizations. Operators are described in a tensor expression language that separates the algorithm from its schedule, and a machine-learning-based cost model guides the search for efficient schedules on each hardware target. The system is open source and, in evaluations across server-class GPUs, embedded CPUs and GPUs, and an FPGA accelerator, achieves performance competitive with state-of-the-art hand-tuned libraries. The experiments also highlight TVM's ability to support new hardware back-ends and emerging workloads such as depthwise convolution and low-precision operations.
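To make the declare-then-schedule workflow concrete, the sketch below expresses a vector addition in TVM's tensor expression language and applies a simple schedule by hand. It uses the tvm.te module found in current TVM releases (the paper predates this module name), and the split factor of 64 is an illustrative assumption rather than a tuned value; in practice, TVM's ML-based cost model searches over such schedule parameters automatically.

    # Minimal sketch, assuming the tvm.te API of current TVM releases.
    import tvm
    from tvm import te

    n = te.var("n")                      # symbolic vector length
    A = te.placeholder((n,), name="A")   # input tensors
    B = te.placeholder((n,), name="B")

    # Algorithm: declares WHAT to compute, with no commitment to HOW.
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    # Schedule: decides HOW the computation maps onto the hardware.
    s = te.create_schedule(C.op)
    outer, inner = s[C].split(C.op.axis[0], factor=64)  # factor=64 is an arbitrary example
    s[C].parallel(outer)                                # run outer loop chunks in parallel
    s[C].vectorize(inner)                               # vectorize the inner loop

    # Lower and compile for an LLVM CPU back-end.
    mod = tvm.build(s, [A, B, C], target="llvm")

Because the algorithm and schedule are decoupled, the same te.compute declaration can be retargeted to a GPU or accelerator by swapping in a different schedule, which is the degree of freedom TVM's automated search exploits.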