
10/9/25 4:56 PM | Streaming

Beyond the CPU: How a Single Server Achieves 400Gbps Streaming

How fast can we go? We explore the role of the CPU, the TCP vs UDP debate, and the hardware required to handle extreme throughput on a single server.

How fast can a single streaming server really go? For years, the conventional wisdom pointed to the CPU as the primary bottleneck. More users, more transcoding, more cores needed. But what if the true limits of massive-scale streaming lie elsewhere?

The engineers at Netflix, the undisputed king of streaming, set out to answer this question. They can achieve a staggering 400Gbps of streaming throughput from a single server. Let’s deconstruct their achievement, explore the real-world bottlenecks of streaming, and show you how fast a single streaming server can really go.

CPU in streaming

The first step is to understand where CPU power actually matters. As our Product Manager, Sjoerd van Koning, points out, for simple streaming of static content even an older HPE Gen8 server can saturate a 10Gbps connection. The workload simply isn't CPU-intensive.
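
To see why, it helps to look at what the server is actually doing: serving a pre-encoded file is mostly a matter of handing bytes from storage to the network card, with the CPU only orchestrating the transfer. The minimal Python sketch below illustrates the idea using the kernel's zero-copy sendfile path; the port and filename are placeholders, not part of any real setup.

```python
import socket

PORT = 8080                    # placeholder port
VIDEO = "movie_1080p.mp4"      # placeholder, pre-encoded file (no transcoding)

srv = socket.create_server(("0.0.0.0", PORT))
conn, addr = srv.accept()
with open(VIDEO, "rb") as f:
    # socket.sendfile() uses the kernel's zero-copy sendfile path where
    # available, so the payload never passes through userspace. The CPU
    # barely touches the data, which is why static delivery isn't CPU-bound.
    conn.sendfile(f)
conn.close()
srv.close()
```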

The equation changes dramatically with transcoding.

📱 Live event streaming: For a live streaming server broadcasting an event, you might re-encode a single source into a few different bitrates (SD, HD, 4K). This is CPU-intensive but manageable, as the core task is a one-to-many delivery.

📹 High-density ingest streaming: A scenario with thousands of incoming camera feeds, each needing to be re-encoded into a specific format for storage, creates a massive parallel processing demand. In this case, CPU core count is king.

For these transcoding-heavy workloads, a CPU with a high core count is essential. However, for delivering pre-encoded static content at scale, the bottleneck lies elsewhere.
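
To make the live-event scenario concrete, here is a rough Python sketch that fans a single source out into several renditions with ffmpeg. The ingest URL and bitrate ladder are purely illustrative, but it shows why core count matters: every rendition is an independent, CPU-hungry encode running in parallel.

```python
import subprocess

# Illustrative renditions for one live source (names and bitrates are
# examples, not any platform's actual encoding ladder).
RENDITIONS = [
    ("sd",  "854x480",   "1400k"),
    ("hd",  "1280x720",  "2800k"),
    ("fhd", "1920x1080", "5000k"),
]

SOURCE = "rtmp://localhost/live/stream"  # placeholder ingest URL

procs = []
for name, size, bitrate in RENDITIONS:
    cmd = [
        "ffmpeg", "-i", SOURCE,
        "-c:v", "libx264", "-preset", "veryfast",
        "-s", size, "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls", f"out_{name}.m3u8",
    ]
    # Each encode runs in parallel and is heavily CPU-bound,
    # which is where high core counts pay off.
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```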

The 400Gbps benchmark

This is where the Netflix study becomes so fascinating. They set out to push a single server to its absolute limit, and the results were groundbreaking.

The hardware they used in 2021 to achieve 400Gbps streaming was surprisingly modest. It was not an exotic supercomputer, but a single-socket system built on a 32-core AMD EPYC 7502P processor. This is a powerful, enterprise-grade CPU, yet it is hardware that is accessible to many businesses, including our clients at NovoServe.

The engineers discovered that the ultimate performance limit was not the CPU, the PCIe bus, or even the network card. The bottleneck was memory bandwidth—the sheer speed at which data could be moved from the system's RAM to the network card for egress. This is a profound insight: at extreme scale, your server's memory subsystem becomes just as critical as its network interface.
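
A quick back-of-the-envelope calculation makes the point. 400Gbps on the wire is roughly 50GB/s, and every byte typically crosses the memory bus more than once before it leaves the machine. The multiplier in the sketch below is an assumption for illustration, not Netflix's measured figure.

```python
# Rough, illustrative arithmetic: how much memory bandwidth does 400 Gbps
# of egress imply? The "bus_crossings" factor is an assumption (a byte may
# traverse the memory bus several times: storage -> RAM, RAM -> CPU for TLS,
# CPU -> RAM, RAM -> NIC), not a measured value.
target_gbps = 400
egress_gbytes_per_s = target_gbps / 8        # ~50 GB/s on the wire
bus_crossings = 4                            # assumed multiplier
memory_bw_needed = egress_gbytes_per_s * bus_crossings

print(f"Wire rate: {egress_gbytes_per_s:.0f} GB/s")
print(f"Implied memory traffic: {memory_bw_needed:.0f} GB/s")
# A typical dual-channel system tops out around 50 GB/s, while the EPYC
# 7502P's 8-channel DDR4 offers on the order of 200 GB/s. That is exactly
# the resource the Netflix engineers found themselves exhausting.
```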


TCP vs UDP for streaming

Perhaps the most counter-intuitive part of the Netflix study is their choice of network protocol. The long-standing debate of TCP vs UDP for streaming has a conventional answer: UDP for live, TCP for downloads.

UDP (User Datagram Protocol): This is the standard for live events like sports or video calls. It's fast and lightweight, but it doesn't guarantee packet delivery. If a packet is lost, it's gone forever—there's no point in resending a video frame that's already in the past.

TCP (Transmission Control Protocol): This protocol guarantees that every packet arrives in the correct order. If a packet is lost, it's re-sent. This ensures data integrity but can introduce buffering and latency.

Netflix chose TCP for their VOD streaming. Why? Quality. For a movie, a missing packet results in a visible glitch or artifact. A few extra milliseconds of initial buffering is an acceptable trade-off to ensure a perfect, uninterrupted viewing experience.

The catch is that TCP requires a memory buffer for each and every stream to handle potential retransmissions. The higher the latency between your server and the viewer, the larger this buffer needs to be. This is where hardware and network architecture become critically intertwined. Netflix can make this strategy work because their Open Connect CDN consists of over 20,000 servers distributed globally, ensuring latency to the end-user is always minimal.
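
The buffer in question is governed by the classic bandwidth-delay product: the data a TCP sender must hold in memory for possible retransmission grows linearly with round-trip time. The short sketch below uses illustrative per-stream bitrates and latencies, not Netflix's actual numbers.

```python
# Bandwidth-delay product: bytes a TCP sender must keep buffered for
# possible retransmission, per stream. All values are illustrative.
def tcp_buffer_bytes(stream_mbps: float, rtt_ms: float) -> float:
    return (stream_mbps * 1_000_000 / 8) * (rtt_ms / 1000)

STREAM_MBPS = 15           # assumed bitrate for a 4K stream
for rtt in (10, 50, 150):  # nearby CDN node vs. cross-continent viewer
    buf_kb = tcp_buffer_bytes(STREAM_MBPS, rtt) / 1024
    print(f"RTT {rtt:>3} ms -> ~{buf_kb:,.0f} KiB per stream")
```

At 10ms the per-stream buffer is tiny; at 150ms it is fifteen times larger. Multiply that by tens of thousands of concurrent viewers and the difference is measured in many gigabytes of RAM, which is exactly why keeping servers close to viewers matters so much.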

Build your own streaming powerhouse

The lessons from Netflix are clear: building a world-class streaming platform is about a balanced and intelligent architecture.

The Right CPU for the Job: You need the right processor for your specific workload. For transcoding-heavy tasks, you need high core counts. That’s why we offer a full range of AMD EPYC dedicated servers, from the 64-core workhorses to the latest 128-core powerhouses—far exceeding the specs used in the Netflix test. When it comes to EPYC for streaming, we have the perfect solution.

Massive Memory Capacity: To support a TCP-based streaming model or extensive caching, you need abundant RAM. Our servers are fully customizable and can be configured with hundreds of gigabytes of high-speed ECC memory.

A Network Built for Quality: The viability of both TCP and UDP streaming depends on a network with minimal packet loss and low latency. Our 16+Tbps premium global network, built on 10+ Tier-1 transit providers and over 800 peering partners, is engineered to find the optimal, most direct route for your data, ensuring the highest possible quality for your streams.

Truly Unmetered Bandwidth: To handle massive throughput like 400Gbps streaming without fear of surprise bills, you need a predictable cost model. We offer dedicated ports up to 50Gbps with truly unmetered bandwidth—no hidden "fair use" clauses, just pure, sustained performance.

Architecting for streaming success

Building a world-class live streaming server or VOD platform is about more than just raw CPU power; it's about smart architectural choices. It's about understanding the real bottlenecks and investing in a balanced system of powerful compute, high-speed memory, and a world-class network.

Ready to build your own streaming powerhouse? Contact our account managers to help you design the optimal solution for your streaming ambitions.