This paper analyzes the challenges in deploying long-context transformers, focusing on the inefficiencies caused by the large size of the key-value (KV) cache. Long-context transformers, such as those supporting 100K-token contexts, are significantly more expensive to deploy than short-context models (e.g., 4K tokens) because of the massive memory and compute requirements of the KV cache. The paper presents a concurrent programming framework for analyzing the efficiency challenges of serving multiple long-context requests under limited GPU high-bandwidth memory (HBM). It shows that the primary bottleneck is the size of the KV cache, which leads to four main deployment challenges: (1) prefilling long inputs requires far more compute time and GPU memory than short inputs; (2) the large KV cache restricts the number of concurrent users; (3) decoding is memory-bound, with repeated reads of the KV cache from HBM increasing latency; and (4) context switching between users is costly because the KV cache must be swapped between HBM and DDR. The paper also discusses how hardware architecture, attention types, and model compression techniques influence these challenges. It concludes that reducing the size of the KV cache through lossless compression is the key to making long-context models as cost-effective as short-context ones. The work provides a foundational framework for analyzing and optimizing long-context transformer deployment.
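
To make the scale of the KV cache concrete, the sketch below computes its size from the standard formula (2 for keys and values × layers × KV heads × head dimension × sequence length × bytes per element). The model configuration is a hypothetical LLaMA-2-70B-style setup assumed for illustration, not a figure taken from the paper.

```python
# Back-of-the-envelope KV cache size, illustrating why 100K-token requests
# are far more expensive to serve than 4K-token ones.
# The configuration below is an assumed LLaMA-2-70B-like model (80 layers,
# 8 KV heads via grouped-query attention, head dim 128, fp16 weights).

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_elem=2):
    """Bytes of K and V tensors held in HBM for one batch.

    The leading 2 accounts for keys and values; bytes_per_elem=2 assumes fp16/bf16.
    """
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)


if __name__ == "__main__":
    cfg = dict(num_layers=80, num_kv_heads=8, head_dim=128)  # hypothetical config
    for seq_len in (4_096, 100_000):
        gib = kv_cache_bytes(seq_len=seq_len, **cfg) / 2**30
        print(f"{seq_len:>7} tokens -> {gib:5.1f} GiB of KV cache per request")
    # ~1.3 GiB at 4K tokens vs ~30.5 GiB at 100K tokens: roughly a 25x gap,
    # which is what limits concurrency, makes decoding memory-bound, and
    # makes swapping the cache between HBM and DDR expensive.
```

Under these assumptions a single 100K-token request consumes tens of GiB of HBM on its own, which is why the paper treats KV cache size as the central quantity driving all four deployment challenges.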