April 27-May 1, 2024 | Henry N. Schuh, Arvind Krishnamurthy, David Culler, Henry M. Levy, Luigi Rizzo, Samira Khan, Brent E. Stephens
The paper "CC-NIC: a Cache-Coherent Interface to the NIC" by Henry N. Schuh et al. explores integrating network interface controllers (NICs) into the CPU's cache hierarchy through emerging coherent interconnects, using Intel's UPI interconnect on Ice Lake and Sapphire Rapids platforms as a stand-in. The authors highlight the limitations of traditional PCIe NIC interfaces, which trade latency for CPU efficiency, and propose CC-NIC, a host-NIC interface designed from the ground up for coherent interconnects. CC-NIC optimizes data structures, memory layouts, and signaling to minimize overheads and exploit the benefits of cache coherence. Key contributions include:
1. **Modeling and Design**: CC-NIC is designed to take advantage of the streamlined data paths and cache interactions provided by coherent interconnects, reducing latency and improving throughput.
2. **Performance Evaluation**: On Intel's Ice Lake and Sapphire Rapids platforms, CC-NIC achieves a maximum packet rate of 1.5 Gpps and 980 Gbps of throughput, with 77% lower minimum latency and 88% lower latency under 80% load compared to an optimized PCIe-style interface.
3. **Application-Level Benefits**: CC-NIC demonstrates significant improvements in application-level core savings and maintains these benefits across a range of interconnect performance characteristics.
The paper also discusses the challenges and trade-offs in designing a coherent host-NIC interface, including the need for careful management of caching and data structure sharing. The evaluation shows that CC-NIC's design principles can be applied to other coherent interconnects, making it a valuable contribution to the field of network interface optimization.