TL;DR - FB has historically used a dual-socket setup. They noticed that Intel chips were getting only marginally faster while using vastly more power, and they needed to change that curve. So three years ago they started working directly with Intel. The result was the Xeon-D chip. FB's current setup is a single-socket Xeon-D that is faster than the old dual-socket setup and uses half the power.
We have been using traditional two-socket servers for more than five server generations now at Facebook.
Because our web servers are heavily compute-bound and don't require much memory capacity, the two-socket servers we had in production had several limitations.
Mono Lake is the server embodiment of Xeon-D and the building block for our SoC-based server designs.
This should be an HN feature: the ability to submit TL;DRs to a post in a way similar to commenting, with an option to always show the top-voted TL;DR and a link to dive into other interpretations.
Actually, the power use per rack is the same; for their workloads it's just faster than the previous generation's performance per rack, and faster than what they'd get by keeping dual-socket with this CPU generation.
Well... they wrote their own PHP-to-C++ compiler (which is both insane and incredibly impressive), then their own PHP tracing JIT compiler, and their own PHP type checker...
At what point does it become their own language?
Also, a good number of their backend services use better languages like Haskell, D, Erlang, and whatever else they need.
There's a reason the combination of type checking, proper containers and generics, inline XML, and asynchronous processing is all called Hack, and the runtime is called HHVM. At this point, new code using all these features only vaguely resembles PHP, but it's far more robust and maintainable, and vastly more performant.
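To make that concrete, here's a minimal sketch of what typed, async Hack code can look like (UserStore::genLoad is a made-up helper for illustration, not Facebook's actual code):

    <?hh // strict

    // Generics plus checked nullable types: the type checker makes
    // callers handle the null case.
    function first_or_null<T>(array<T> $items): ?T {
      foreach ($items as $item) {
        return $item;
      }
      return null;
    }

    // async/await: HHVM can overlap this wait with other work in the request.
    async function gen_display_name(int $user_id): Awaitable<string> {
      $user = await UserStore::genLoad($user_id); // hypothetical data-access helper
      return $user->name;
    }

Squint and it's still PHP ($-variables and all), but a wrong return type or a missed await is now a type-checker error instead of a runtime surprise.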
Facebook has spent a lot of time and man-hours working around many of the issues most people have with PHP. Throw enough talented engineers at it (or any language) and they'll come up with some mighty creative solutions. It's not like they can do a rewrite, nor should they when they've obviously managed to make it work for them. Plus, their investors would kill them - very, very slowly - for such a boneheaded idea.
Anyhow, most new startups don't really have the resources to do anything remotely like that. Best to choose a well-established language that works well for your needs with manageable downsides.
Is there any potential for Facebook to get into the cloud services market? It seems like Facebook has the experience to build a high-performance, reliable cloud platform. Using something like Kubernetes could make it easy for customers to try out FB.
They just closed $18B in revenue in the past fiscal year. Even if they successfully began selling hardware and were making twice that, they'd be idiots to give up that huge, huge income stream. Their investors would kill them, and for good reason.
Those in-market cloud providers are very secretive about their solutions. But given that they are running and expanding businesses comparable to or bigger than Facebook in size, there is no reason to believe they don't have something highly tailored to their own needs.
There's a lot that goes into building a public cloud that in-house solutions won't do because it's unnecessary complexity. A lot of it is security-related, and that ends up being very difficult to bolt onto an existing system.
It sure does sound like they could potentially build an AWS/Google Cloud competitor, but with Oculus just getting started I guess it's not a priority for them at the moment.
Facebook lost their AWS keys not many months ago. Until recently, it was also possible to trivially brute-force your way into any user account. Cloud services containing the same bugs would not remain standing for very long. I'm not sure these facts would prevent them from trying, though.
Interesting that they worked directly with Intel on this. I wonder whether the cost savings from a power perspective were worth all this engineering effort. It's impressive in its own right to go to the lengths of customizing the hardware in such a way, but I'm not sold on the business case.
It seems like you'd get a lot more for your money just by finding cheaper sources of energy (or building your data centers near a cheap, renewable source) and using commodity low-power chips that have already been proven to work well for server workloads.
Facebook is already using the cheapest energy they can find. If Facebook is buying, say, $20M worth of Intel processors per year, then you can imagine how it's possible to get ROI on a few million dollars' worth of engineering effort.
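To put rough, entirely made-up numbers on it (none of these figures are from the article):

    rack power:          11 kW       (assumed)
    electricity price:   $0.06/kWh   (assumed)
    per rack, per year:  11 kW x 8,760 h x $0.06/kWh = ~$5,800
    across 1,000 racks:  ~$5.8M/year on power alone

At that scale, even a modest efficiency gain - or needing fewer racks for the same work - pays back a few million dollars of engineering within a year or two.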
Their hardware has been custom before as well, and I'd guess the CPU work was mostly Intel getting input from Facebook to help figure out what's best to counter eventual ARM server chips. It's not like Xeon-D is a limited design custom-made for Facebook; it's going to be an important part of Intel's lineup.
And even if your electricity is cheap, you still have to get that energy back out of the data center again as heat, which means paying for cooling too.
They're probably taking a pretty long view on the cost savings and considering future expansion. Based on the article, I'm thinking Intel absorbed some of the costs, planning to recoup them through other sales.
Amazing to think you can have problems of scale that can only be fixed by building a better CPU. But at FB's scale, everything is a bottleneck, including power. I think it would be fun to think about ways to fix problems of this magnitude with a virtually unlimited budget. Of course, having too much money can also be a bottleneck, letting you come up with too many wrong answers as well.
Sites these days love their whitespace, don't they? But now the content itself is fading. At this rate, in 7 years most websites will just be pointless white rectangles. Still, that should save on server costs, as it'll cache well.
I don't understand your complaint. Out of all the articles I've read lately, this page seems the easiest to read. It focuses on just the content and formats it well.
The text is grey instead of black. It probably looks fine on most monitors, but I can see how it might exacerbate readability issues. It's become some kind of received wisdom that body text shouldn't be black anymore; I just don't understand it.
My complaint is that grey text is harder to read than black. People could have made text grey years ago if it were better; it doesn't require new hardware or new protocols. It's just a trend. It'll die out, but until then - given that browsers don't allow the user much control over how content is presented - it's annoying.