Brought to you by Data Center Knowledge
While the focus early this week might have been on Microsoft’s Inspire in the nation’s capital, Intel was having an event of its own in New York City on Tuesday.
Promising it will revolutionize the data center, Intel launched its latest Xeon Scalable line based on its Skylake architecture.
“Today Intel is bringing the industry the biggest data center platform advancement in a decade,” said Navin Shenoy, vice president and general manager of Intel’s data center group. “A platform that brings breakthrough performance, advanced security, and unmatched agility for the broadest set of workloads — from professional business applications, to cloud and network services, to emerging workloads like artificial intelligence and automated driving.”
Intel wants to tighten its grip on the data center market, where workloads are accelerating as new technologies such as blockchain and IoT compete for compute and bandwidth. The new line promises a major leap in performance, with Platinum-level processors supporting two-, four-, or eight-socket configurations and offering up to 28 cores with 56 threads and up to three 10.4 GT/s UPI links. Add a clock speed of up to 3.6GHz, 48 PCIe 3.0 lanes, six memory channels of DDR4-2666 DRAM, and support for up to 1.5TB of memory, and you have a server ready for some heavy lifting.
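For a sense of what six channels of DDR4-2666 buys, here's a back-of-envelope calculation of theoretical per-socket memory bandwidth (a rough sketch; the 64-bit channel width is the standard DDR4 figure, not a number from the launch):

```python
# Rough theoretical memory bandwidth from the specs quoted above.
transfers_per_sec = 2666e6   # DDR4-2666 = 2666 mega-transfers per second
bytes_per_transfer = 8       # standard 64-bit DDR4 channel width
channels = 6                 # memory channels per socket on Xeon Scalable

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9
total_gbs = per_channel_gbs * channels
print(f"{per_channel_gbs:.1f} GB/s per channel, {total_gbs:.1f} GB/s per socket")
```

That works out to roughly 21 GB/s per channel, or about 128 GB/s per socket in theory, before real-world overheads.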
If that sounds like overkill, it isn’t.
The launch comes just as AMD is becoming a player that matters again, following the recent launch of its less expensive Epyc server line, which offers up to 32 cores, 64 threads, and a 3.2GHz boost clock. Epyc's benchmarks probably aren't as good as Xeon Scalable's, though that also depends on who's talking. There's additional competition in the pipeline from power-sipping, even cheaper ARM-based server chips, which will undoubtedly find increased data center uptake for some workloads.
The company showed no indication that it’s looking over its shoulder, however, as it launched the Xeon Scalable line. There was no mention of any competition, just a forward-looking focus on the new offering and what it can do for data centers.
“The Intel Xeon Scalable Platform includes a completely rearchitected microprocessor designed from the ground up for the data center specifically,” Shenoy explained early on at the launch event, “offering greater levels of integration and workload specific accelerators. The Xeon Scalable Platform is the industry’s highest performance per watt platform.”
The platform offers more than a mere increase in raw performance or faster speeds, however, as Lisa Spelman, vice president and general manager of Xeon Products and Data Center Marketing, pointed out.
“We’re delivering on traditional performance drivers, like more cores, higher performing cores, more memory, and more I/O,” she said. “Intel’s taking it far beyond that. We’re moving beyond the basics and we’re adding unique innovations that truly address the unique requirements of data center workloads.”
These innovations include AVX-512, an instruction set in the Xeon Scalable processor that accelerates data processing.
“This has traditionally been used or valued in the high performance computing space,” she said, “but is actually expanding and growing into new workloads as well. It takes vector performance to a new level by doubling the width of the registers and doubling again the number of registers available. This results in a 2X flops-per-clock efficiency compared to the previous generation, and it makes vector computation a lot more effective and more applicable across those wider workloads.”
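The 2X flops-per-clock figure Spelman cites follows directly from the wider registers. A minimal sketch of the arithmetic (illustrative numbers; the FMA assumption is standard for these cores but not spelled out in her quote):

```python
# Why doubling register width doubles flops per clock.
# AVX2 registers are 256 bits wide; AVX-512 doubles that to 512 bits,
# so each register holds twice as many double-precision (64-bit) values.
def dp_lanes(register_bits):
    return register_bits // 64   # 64-bit doubles per vector register

avx2_lanes = dp_lanes(256)       # 4 doubles per instruction
avx512_lanes = dp_lanes(512)     # 8 doubles per instruction

# A fused multiply-add does one multiply and one add per lane,
# so flops per instruction is 2 * lanes.
avx2_flops = 2 * avx2_lanes      # 8 DP flops per FMA
avx512_flops = 2 * avx512_lanes  # 16 DP flops per FMA
print(avx512_flops / avx2_flops)
```

The ratio is the 2X efficiency gain per clock she describes; doubling the register count on top of that gives the compiler more room to keep those wider units fed.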
QuickAssist is also integrated into the platform, a technology originally used to accelerate compute-intensive operations such as compression, which reduces the size of data packets. Here it doubles as something of a security tool, reducing any server performance hit caused by encryption.
“We’ve also worked with our customers across the cloud service providers and across enterprises and realized that Quick Assist technology delivers great speedups for security algorithms,” she explained. “With Quick Assist integrated into the chipset in this generation, you get a hundred gigabits of cryptography and a hundred gigabits of compression, which allows you to increase the overall workload performance, while freeing up that precious compute capacity on the cores for higher order function.”
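To see what that offload is worth, consider how much core time software compression alone eats up. This sketch uses Python's zlib as a stand-in for the work QuickAssist would absorb in hardware (timings are machine-dependent and purely illustrative):

```python
# Measure the core time software compression consumes -- the kind of
# work QuickAssist offloads so the cores stay free for other tasks.
import time
import zlib

# ~2.9 MB of compressible, repetitive data (a hypothetical payload)
payload = b"data center telemetry record " * 100_000

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed = time.perf_counter() - start

ratio = len(payload) / len(compressed)
print(f"compressed {len(payload)} -> {len(compressed)} bytes "
      f"({ratio:.0f}:1) in {elapsed * 1000:.1f} ms of core time")
```

Every millisecond spent here is a millisecond a core isn't serving requests, which is the "precious compute capacity" Spelman says the integrated accelerator frees up.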
There's also a new architectural design for the processor itself: a mesh interconnect rather than a ring, a change necessitated by Xeon Scalable's higher core counts.
“In the previous design, data would have to go around the ring — if you’re starting at the last core and need to get to the one on the other side — through a buffer and all the way back around the second ring to get to the final destination. With Mesh, the data simply cuts across the improved and increased number of data pathways and avoids the buffer. Also, CPUs don’t actually process one cache line at a time. A real data center CPU needs to have all cores accessing all memory, all I/O, at the same time, and the mesh fundamentally provides this.”
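The advantage described above can be sketched with a toy hop-count model. The 4x7 grid shape below is an assumption chosen to fit 28 cores for illustration, not Intel's actual die layout:

```python
# Toy model: worst-case hops between cores on a ring vs. a 2D mesh.
def ring_hops(n):
    """Worst-case hops on an n-core bidirectional ring (halfway around)."""
    return n // 2

def mesh_hops(rows, cols):
    """Worst-case Manhattan distance on a rows x cols 2D mesh
    (corner to opposite corner)."""
    return (rows - 1) + (cols - 1)

cores = 28
print("ring worst case:", ring_hops(cores))   # traverse half the ring
print("mesh worst case:", mesh_hops(4, 7))    # cut across the grid
```

In this simplified model a 28-core ring needs 14 hops in the worst case, while a 4x7 mesh needs only 9, and the gap widens as core counts grow, which is why the higher core counts forced the change.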
The benchmarks offered by Intel were impressive, showing performance improvements of up to 1.65X over Xeon’s last generation and a data protection performance increase of 2X. The company also claims a 4.2X greater VM capacity, with total cost of ownership dropping 65 percent.
More interesting than Intel's own benchmarks were performance numbers reported by software companies given early access to Xeon Scalable. Running its cloud-based video-stitching application optimized for the new platform, Tencent saw a 72 percent gain, and SAS, running its business analytics stack, reported a doubling of performance over Xeon's last generation.
The platform has been field tested under production workloads as well.
“We took our most aggressive early ship program that we’ve ever delivered in the data center group,” Spelman said, “delivering production processors ahead of launch to cloud service providers, enterprises, high-performance computing users, and communications service providers like AT&T. Starting in November of last year, we have sold over 500,000 processors optimized for data center workloads to over thirty customers.”
Google was the first of these customers to launch instances of Xeon Scalable — on its Google Compute Engine platform in February — and now has customers in retail, financial services, oil and gas, and education running in production.
Bart Sano, Google’s vice president of platforms, spoke briefly at the event by video. “Our customers have reported up to 40 percent improvement in many cases versus previous platforms,” he said. “In some cases, where the customers tuned for the AVX-512, they saw more than 100 percent improvement.”