They’re slower, in the way you’re probably thinking. That’s not clickbait; this is a very common misconception about “powerful” computers.
Data centers aren’t built like desktop PCs at all. A lot of answers here discuss the size and scale of Google’s data center operations and briefly cover hardware. I’d like to nail down the hardware a bit more. To put it simply, the ecosystems are different enough that direct “speed” comparisons are a little goofy.
I don’t know the exact specs for Google machines, and even if I did I wouldn’t want to release them, but everyone uses the same basic two-socket (“2P”) platform whenever they can. Standardized hardware is cheaper, and more people working on the same code means more efficient code. So, high-performance networked data servers are overwhelmingly dual-socket machines.
Each socket gets filled with a high-end Intel or AMD processor, which currently top out at 56 cores and 64 cores respectively. There are some systems with a lot more sockets, but they’re nowhere near as common: they’re harder to network, less stable, and produce a ton of heat in a relatively small area.
The name of the game with these servers is always efficiency – there’s only so much heat you can pull out of a CPU reliably. If you have more cores, you can run them at a lower clock speed and still get the same amount of work done.
Power usage increases faster than clock speed does (dynamic power scales roughly with frequency times voltage squared, and voltage has to climb along with frequency), so lower clocks and more cores win – as long as you have software that can work that way. Luckily, servers are usually doing a lot of parallel tasks.
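To make that concrete, here’s a toy sketch of the trade-off. The usual first-order model says dynamic power scales with frequency × voltage², and the voltage and frequency numbers below are invented for illustration, not pulled from any real part:

```python
# Toy model of dynamic CPU power: P ~ C * V^2 * f, where voltage has to
# rise roughly in step with frequency. All numbers are illustrative.

def dynamic_power(freq_ghz, volts, cap=1.0):
    """Relative dynamic power of one core at a given frequency/voltage."""
    return cap * volts ** 2 * freq_ghz

# One desktop-style core: high clock, high voltage.
fast_core = dynamic_power(freq_ghz=4.7, volts=1.30)

# Two server-style cores at half the clock and lower voltage deliver the
# same total cycles/second (2 x 2.35 GHz = 1 x 4.7 GHz).
slow_cores = 2 * dynamic_power(freq_ghz=2.35, volts=1.00)

print(f"one fast core:  {fast_core:.2f} (relative power)")
print(f"two slow cores: {slow_cores:.2f} (relative power)")
```

In this toy model the two slower cores do the same raw work for roughly 40% less power, which is exactly the trade servers make.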
That was a long lead-in, but with that background, let’s tackle the performance question. In terms of raw computing power, a clock-for-clock comparison is pretty fair, especially for the Intel parts.
A high-end, normal desktop PC might have an i9-9900K running an all-core turbo of 4.7 GHz across its 8 cores. Bleeding-edge hardware from Google will be running 2x Xeon Platinum 9282s (56 cores each), at a turbo speed of 3.8 GHz.
That means the desktop will have an effective 3.76 × 10^10 clock cycles each second, and the Xeon Platinum system will have about 4.26 × 10^11.
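If you want to check that arithmetic, it’s just cores × clock – which deliberately ignores IPC, memory, and everything else that actually decides performance:

```python
# Raw cycle throughput = cores * clock speed. This is the "horsepower"
# number only; it says nothing about how useful each cycle is.

desktop = 8 * 4.7e9        # i9-9900K: 8 cores at a 4.7 GHz all-core turbo
server = 2 * 56 * 3.8e9    # 2x Xeon Platinum 9282: 56 cores each at 3.8 GHz

print(f"desktop: {desktop:.2e} cycles/s")   # 3.76e+10
print(f"server:  {server:.2e} cycles/s")    # 4.26e+11
print(f"ratio:   {server / desktop:.1f}x")  # ~11x
```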
So a Google server has roughly 10x the raw power of a fast desktop.
But, most people don’t use their PC like that!
If you installed Windows on a data server and tried to play games, those games would probably run badly. In fact, I’ve used a bunch of multi-CPU systems as workstations, and depending on the specifics of the hardware they can be pretty painful to work with, especially compared to a high-end consumer PC that costs a fraction of the price.
Consumer software has been designed for consumer hardware, and vice versa. Video games, for example, don’t use that many cores, care a lot about latency between hardware components, and benefit from fine-tuned support on gaming PCs. If your CPU and GPU get old enough, some games will have strange bugs and much worse performance than you’d expect.
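Amdahl’s law is the clean way to see why core count alone doesn’t save a game. As a back-of-the-envelope sketch – the 60% parallel fraction below is an invented number, not a measurement of any real game:

```python
# Amdahl's law: with n cores and a parallel fraction p, the serial part
# of the work caps your speedup no matter how many cores you add.

def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

p = 0.60  # hypothetical: 60% of a game's frame time parallelizes
for cores in (8, 112):
    print(f"{cores:>3} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
# 8 cores   -> ~2.11x
# 112 cores -> ~2.47x
```

Fourteen times the cores buys you almost nothing, and since each server core is clocked lower, the game can come out slower overall.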
While some servers do need fine-tuned, low-latency hardware, the big concern is usually just raw horsepower and bandwidth. You know what kinds of tasks you’re doing, and you’re doing a lot of them, so good scheduling often lets you ignore low-level latency issues. As long as you have enough bandwidth that the hardware-to-hardware links never saturate, the transfer delays stay small and you can work around them.
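Here’s a toy illustration of that throughput-first mindset (the task and job counts are made up). With a big backlog of independent jobs, what matters is keeping every core fed, not how quickly any single job finishes:

```python
# Throughput-oriented workload: a pool of workers drains a backlog of
# independent CPU-bound jobs. Per-job latency barely matters as long as
# the queue never runs dry.

from concurrent.futures import ProcessPoolExecutor
import math

def job(n):
    # Stand-in for real work; any CPU-bound function would do here.
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    backlog = [200_000] * 1_000  # lots of identical, independent jobs
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        results = list(pool.map(job, backlog, chunksize=50))
    print(f"finished {len(results)} jobs")
```

A 112-core server clears this kind of backlog far sooner than an 8-core desktop, even though the desktop would finish any single job first.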
Flip it around, though, and server hardware is often outperformed at consumer tasks by relatively cheap consumer CPUs. Barring AMD’s most recent parts, high-core-count enthusiast chips don’t do as well in games as their lower-core-count competition, even though they have more “horsepower”.
And I think that covers pretty much everything I needed to say here. While server-grade hardware is more expensive and more powerful than a consumer-grade CPU, that doesn’t actually mean it’s faster.
I get questions about this all the time when I mention I work with supercomputers. Probably the most common one is “hey, have you tried running Crysis?”
And the truth is most server hardware is relatively bad at consumer tasks. It’s not terrible, but it tends to perform like a lot of half-decent computers stapled together rather than one really fast system. At the end of the day, lots of cores running a bit slower is not strictly better than a few cores running really fast. There are always design trade-offs.
A dump truck might have 500 horsepower, but it’s not going to win a drag race.