June 26, 2017 | Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Geib, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Haggman, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacey, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon
This paper evaluates a custom ASIC, the Tensor Processing Unit (TPU), deployed in datacenters since 2015 to accelerate the inference phase of neural networks (NNs). The TPU's core is a 65,536 8-bit MAC matrix multiply unit with a peak throughput of 92 TeraOps/second (TOPS) and a large on-chip memory. The TPU's deterministic execution model is a better match for the 99th-percentile response-time requirements of NN applications than CPUs and GPUs, whose time-varying optimizations help average throughput more than guaranteed latency. Despite having many MACs and a large memory, the TPU is relatively small and low power. The paper compares the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, both deployed in the same datacenters. The workload, written in TensorFlow, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of the datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X-30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X-80X higher. Using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU. The paper also discusses the TPU's architecture, performance, and energy efficiency, evaluates alternative TPU designs, and closes with related work and a discussion of fallacies and pitfalls in NN hardware.
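As a back-of-the-envelope check on the headline 92 TOPS figure, the short sketch below reproduces the peak-throughput arithmetic. The 65,536-MAC count comes from the summary above; the 700 MHz clock rate is an assumption drawn from the paper's reported design, not from this summary.

```python
# Back-of-the-envelope check of the TPU's quoted peak throughput.
# The 65,536 MACs and ~92 TOPS figures appear in the summary above;
# the 700 MHz clock is an assumption taken from the TPU paper.

NUM_MACS = 256 * 256    # 65,536 8-bit MAC units in the matrix multiply unit
OPS_PER_MAC = 2         # each MAC performs one multiply and one add per cycle
CLOCK_HZ = 700e6        # assumed 700 MHz clock rate

peak_ops_per_s = NUM_MACS * OPS_PER_MAC * CLOCK_HZ
print(f"Peak throughput: {peak_ops_per_s / 1e12:.1f} TOPS")  # ~91.8 TOPS, i.e. the quoted 92 TOPS
```

The TOPS/Watt comparisons follow the same pattern: achieved (not peak) TOPS on the production workload divided by measured power for each platform.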