It's worth noting that this discusses the CentOS-based provisioning and hardware management platform.
The actual distribution used to _run_ all Meta backend services is completely separate and built from source
(it does share the kernel, which is not the CentOS one). This is done for flexibility, performance, and service-isolation reasons.
This is not accurate; the system packages are from the CentOS binary distribution. You may be thinking of the C/C++ runtime libraries, which are distributed as separate packages and used by the internal ("fbcode") binaries, but the system binaries link against the standard distribution.
Right, the standard CentOS system binaries are used to provision services and manage hardware.
The internal services linking against the runtime libs you mention are actually linking against roughly 2,000 built-from-source packages, and are essentially a separate distro (a distro in this sense being an ABI-compatible set of libs running on a kernel).
> The actual distribution used to _run_ all Meta backend services is completely separate and built from source (it does share the kernel).
That does not seem to be true. The hosts I'm looking at have systemd and glibc RPMs that were built on centos.org hosts, with coreutils from redhat.com. The kernel was built on a Facebook host, but that's it (of the handful of components I've spot-checked).
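For anyone who wants to repeat that spot-check, a minimal sketch (the package names are just the ones mentioned above):

    # Print the build host and vendor recorded in each installed RPM
    rpm -q --qf '%{NAME}: built on %{BUILDHOST} (%{VENDOR})\n' \
        systemd glibc coreutils kernel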
For cases where you might not know which ldd will actually work, I like to use patchelf or readelf to get the interpreter, then invoke it with the --list argument directly. That way it always gives correct results; using a different loader can change the selected library paths even if it appears to work. A one-liner would be approximately: $(patchelf --print-interpreter tgt) --list tgt
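Spelled out a bit, the same idea as a sketch (the target path is just a placeholder):

    # Resolve a binary's dependencies via its own ELF interpreter,
    # rather than trusting whichever ldd happens to be first on PATH.
    tgt=/usr/bin/python3                          # placeholder target binary
    interp=$(patchelf --print-interpreter "$tgt")
    "$interp" --list "$tgt"                       # same output format as ldd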
Maybe they just don’t trust you enough and all of the machines you think are physical hosts that you ssh into are in fact nothing but containers, and there is a whole other team of people at said company controlling the real servers :thinking_face:
I'm curious as to why Meta settled on CentOS Stream. Isn't that unstable by design? How do they manage change in such a system? What are its benefits and drawbacks vis-a-vis the alternatives?
CentOS Stream isn’t unstable in the way that something like Arch Linux or Debian Sid is.
They effectively only got rid of point releases. Instead of going from CentOS 7 to 7.1, you just regularly get updates. Since they’re a part of the same major release, they don’t contain breaking changes.
It’s no different from running Debian Stable with the -updates repository enabled.
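For reference, that setup is just an extra suite in apt's sources, roughly like this (illustrative lines; the suite name depends on the release):

    # /etc/apt/sources.list (illustrative; bookworm as the example release)
    deb http://deb.debian.org/debian bookworm main
    deb http://deb.debian.org/debian bookworm-updates main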
Right, but it does not matter in this case: Meta rolls out its own kernel (we employ a lot of kernel engineers, and PSA: our Linux teams are hiring software engineers!), and we maintain our systemd backports and some other packages in the CentOS Hyperscale SIG.
That was my take on it. But CentOS Stream didn't get a fix for the last major CVE before RHEL/Oracle did, or Ubuntu/Debian for that matter. That killed it for me.
AlmaLinux mentioned this in their 2023 year-in-review:
>[After the split, we] have been able to ship critical security and bug fixes sooner than any other enterprise linux distro. [...] In some cases, any distribution that is still relying on RHEL as their upstream is still waiting for these patches to be released.
https://almalinux.org/blog/2023-12-14-2023-highlights/
Ironically, faster fixes were one of the reported reasons for moving CentOS upstream; turns out that was only for metabook.
Alma is quickly becoming the self-hosting distro of choice for me, and I feel that by now they're better than they were before.
Stream gets faster fixes in general, but yeah, for embargoed security fixes it goes the other way around.
Even if you are Meta. The only trick we use is something anyone can do: hotfix it in Hyperscale, or download the Stream fix as soon as a signed build is available, without waiting for it to be fully released.
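As a rough sketch of that second option, assuming a koji profile pointing at the CentOS Stream build system is configured (the profile name and package NVR below are hypothetical placeholders):

    # Grab a signed build directly from the build system before full release;
    # the koji profile name and the NVR are hypothetical placeholders
    koji -p stream download-build --arch=x86_64 openssl-3.0.7-2.el9
    sudo dnf install ./openssl-3.0.7-2.el9.x86_64.rpm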
It may be due to them testing fairly rigorously before deployment and thereby benefitting from the latest kernel security patches, for example. Google does something similar with Debian internally on workstations.
I've been really enjoying this year's batch of CCC videos. I hope some decent English translations start to crop up for the non-English presentations. The Breaking Train DRM presentation was fantastic!
Their media hosting platform is used by many conferences. That's a video from a non-CCC conference called ASG, held earlier in 2023, not to be confused with the recently held Congress or other CCC-headlined events.
Meta was a primary sponsor for the most recent All-Systems-Go, and employs several folks contributing upstream to systemd.
Also note Lennart is no longer even at Red Hat; he's at Microsoft nowadays. I'm surprised Microsoft wasn't a major All-Systems-Go sponsor alongside Meta...
There's been a lot of earth shifting under these projects, for better or worse.
Was All Systems Go organized by CCC? I can't find this information on their website.
My comment was not about Meta sponsoring stuff. It was about the long history of CCC being against companies like Meta and my puzzlement that they provide such a company a platform.