How Google got to rolling Linux releases for Desktops (cloud.google.com)
174 points by HieronymusBosch on July 12, 2022 | 162 comments



Cool article. My two takeaways: 1) If your deployment process hurts, do it more often (small, frequent releases are easier than a few big-bang releases). 2) Don't commit to being x days behind upstream; if there is a big workload on your team (upgrading, vacations, etc.) you have the flexibility to delay upgrades and reduce the stream of incoming issues.


This is true for your codebase and your dependencies as well.


Yes, one of those lessons they don't really teach you in school but that is really obvious once you take note of the simple reality that integration and testing effort does not scale linearly with the amount of change. Twice the change is not twice the testing effort but more like five times. The more change you allow to pile up, the more of a hurdle testing and integrating become. At some point the integration and testing work becomes the dominant activity. Usually, a good way out of that is simply increasing the release frequency.

That's also why agile processes work. Simply shortening feedback loops reduces the amount of work related to integrating and testing. Nothing magical about it. CI/CD works great too. Deploy when tested automatically instead of when the calendar says it is time to release. Get your feedback in early instead of weeks after a change.

A good way to fix a poorly performing team is simply to shorten their sprints. People hate this (because of all the meetings) but it makes sprints easier. Of course getting rid of sprints entirely is optimal. I actually prefer Kanban style processes usually and tend to separate planning iterations from day to day development work or releasing stuff. Leads to much more relaxed teams and it's also a lot easier on remote teams.


Fast deploys are good for many things, but that 5x testing effort means the few releases you do are of higher quality. When it has to be perfect, as with many embedded systems, you don't do many releases.

Of course, the above assumes you actually do the 5x testing before release. Most companies skipped that, and it showed.


Automated tests are key for this. If you have those, it empowers deploying frequently. There is only so much that can break for a small delta. That typically also enables very targeted manual testing if you need that.

Many companies have the wrong reflex of releasing less often when things don't go smoothly so they can test more, not realizing that if they release more often, they can get away with less testing because there is less new stuff that can break.


Automated tests are good, but even a large suite misses things. End-to-end full-integration issues, for example, are very hard to automate and take a lot of time to run.


Any sort of integration, but some can be broken down into steps more easily than others. About a decade ago, one of my roles at the startup I was at was keeping our browser, built atop Chromium, up to date with Chromium trunk. I normally would merge in all of Chromium's changes each morning. Some weeks I just had other stuff to do, and so it would be a full week I'd gone without merging.

Merging a full week of Chromium changes in all at once was just too many merge conflicts to deal with.

So I'd simulate as if I'd been merging daily by just doing 5-7 separate merges, which worked out well enough.
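For what it's worth, a rough sketch of that catch-up strategy with plain git - assuming an upstream remote with a main branch; picking each day's last commit via rev-list is my own approximation, not necessarily what we actually did:

  # Hypothetical: replay a week of upstream history one day at a time.
  git fetch upstream
  for days in 6 5 4 3 2 1 0; do
    # Find the last upstream commit as of $days days ago...
    rev=$(git rev-list -1 --before="$days days ago" upstream/main)
    # ...and merge it, resolving the (smaller) conflicts each round.
    [ -n "$rev" ] && git merge --no-edit "$rev"
  done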


This. Remaining on a known-good release is a form of technical debt to take on judiciously and pay down when convenient. Beta testing everything immediately can be a favor to the community, but it's probably not your main job.


Actually, it's the opposite in my experience. Beta testing has typically already been done by the time most software is released. Basically, at that point your technical debt is all the known & fixed issues that you still have, the features you don't have access to and can't benefit from, the performance issues you still have, the newly deprecated APIs that you are still using, etc. Opting into all that without good reason is a bad idea. If you are afraid things will break, the best way to find out is to just try it. Worst case you have to roll back a change. But at least then you know, and you can plan a fix. IMHO anything that isn't on a current version should have a documented reason why that is, or be fixed ASAP. I rarely see this on projects. Usually that means nobody cared enough to try, which is just a form of negligence.

On most of my projects I update dependencies whenever I can. On my older projects, I don't even touch any source code until after I update dependencies whenever I do maintenance on them. Typically either nothing breaks, I need to do some minor fixes, or I need to temporarily roll back some dependency to plan some bigger change. The thing is, if a new version is going to require any kind of non-trivial work, I want to know about it as early as possible, especially if it is a lot of work. If you wait a year, you are looking at a lot of unknown work, which is basically technical debt you did not even know you had. I don't like that kind of uncertainty.

Mostly staying on top of things minimizes the work you actually have to do. And you get to benefit from all the fixes early. A lot of projects I join are hopelessly outdated. It's usually the first thing I fix. It's rare to find documented reasons why a particular thing can't or shouldn't be updated. If it's not documented, I'm just going to go ahead and fix it. In case it doesn't work, I'll document it.


> If you are afraid things will break, the best way to find out is to just try it.

How much of the team is awake, sober, and on the grid at that moment? Some times are worse than others for an outage.


Shouldn't need to be anyone's job. Automate:

Daily cron:

- Branch the repo, auto-update one dependency (ideally a smarter way to batch up groups)

- Run CI

- Auto-merge commit if CI passes, else discard commit.

- Loop to create a new branch for the next dependency waiting to be auto-updated.

Otherwise, not having the right testing or CI is technical debt, like the grandparent commenter suggests.
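For illustration, here's a minimal sketch of that daily loop for an npm project, assuming npm test stands in for CI and main is the default branch (the branch naming is made up):

  # Hypothetical daily cron job: one branch per outdated dependency.
  for dep in $(npm outdated --json | jq -r 'keys[]'); do
    git checkout -b "bump-$dep" main
    npm install "$dep@latest"
    if npm test; then
      git commit -am "Auto-bump $dep"
      git checkout main && git merge --ff-only "bump-$dep"
    else
      git checkout -f main   # CI failed: discard the bump
    fi
    git branch -D "bump-$dep"
  done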


Jesus... here I am in my BigCorp, not allowed to develop on Linux because too much mandated corporate crapware doesn't work.

I dream of the day my employer provides an in-house distro for me.


I am loving the Netflix model. You can ask for a MacBook or a ThinkPad. If you opt for the ThinkPad, you can also install whatever Linux you want and all tools will still work.

In fact you can bring your own laptop and, without reimaging, use that for development. All internal tools (of which there aren't too many) are written in Go, so they are cross-platform. I ordered a Framework laptop and will expense it once it is here.


That's flexible! Does this flexibility derive from having a simpler threat model at Netflix? How damaging would insider risk be?


Not a security engineer, so I can't comment on the threat model. But an insider threat like a rogue employee can do as much damage with a managed laptop as with an unmanaged one, IMO.


We're in the same boat. Since a lot of people complained, they allow Linux, but only in a VM (VirtualBox). Still better than no Linux at all.


I've only seen Windows and Mac options at work, other than the rare devops / sysadmin types who are using, like, Red Hat in some cases. Is there a generally agreed-upon standard distro?

A lot of random programs that seem to be needed for corporate work, like Outlook and Teams, I imagine don't work or are somehow even worse on Linux. Or is that what people are referring to with the junk software?

Does the general dev stack really run significantly faster on Linux distros? Significant as in using 50% of the resources, compared to a 5% performance increase.

Windows drivers are comparatively so well optimized that I've seen battery life double when switching from Ubuntu to Windows on laptops, although I haven't tested it in a while. The constant random headaches with webcams / mics / mice not working as expected have basically been my deal breaker in the past. Mac has been a decent middle ground.


> A lot of random programs that seem to be needed for corporate work, like Outlook and Teams, I imagine don't work or are somehow even worse on Linux.

I thought Teams was shitty on Linux, then I got switched to a Mac. Teams was just as shitty there too. I don't think it's any better on Windows either.

> Does the general dev stack really run significantly faster on Linux distros? Significant as in using 50% of the resources, compared to a 5% performance increase.

Depends on the stack. One example: Being able to use Docker Engine natively rather than Docker Desktop saves a lot of resources.


The driver behind a Unix desktop is that you're usually deploying to a Unix environment, so parity between the environments means little to no friction when developing or debugging.


WSL2 works really well.


Unlikely to work in any corporate environment that I've witnessed.

This is why so many devs I know moved to Macs. Not because Macs are necessarily that great - who cares about battery life when it just sits on a desk - but because it's the closest you can get to Linux in a corporate environment.

I would take native Linux on a ThinkPad as the first option, but denied that, I would go for a Mac over Windows. I don't find that WSL fixes all the other Windows problems, such as the lack of support for multiple desktops. They exist, but the implementation is terrible.


Macs are closer to BSD than Linux and suck for Linux-oriented development. Brew is more duct tape than a solution, and you basically need to replace all the GNU tools since they are horribly out of date.


I completely agree that Macs suck for development.

Running 32 GB laptops to run a few development environments that really should be fine in 16 GB is sad.


Windows 11 natively supports multiple desktops.


Is it better than 7 or 10? Those "supported" multiple desktops, but it was implemented in such a bad way.

Gnome and KDE are so much smoother when it comes to these workflows.


As part of the crapware image process, WSL (1&2) is often blocked


Firewall bypass. Terrible performance when using Docker while accessing files stored in a Windows folder.


I only use WSL2 at work. Got my full-blown NixOS with systemd and all in it.


No it doesn't, unless you've never run bare-metal Linux and are happy with VM overhead.


Personally, I'm a pretty big fan of openSUSE Tumbleweed for a rolling distribution, because they ship fully tested snapshots each day rather than individual packages. If I understand the article correctly, Google's Sieve is similar to the openSUSE Build Service in that regard.

It seems like they went with Debian because of the proximity to Ubuntu, but I wonder if they ever considered SUSE; the ecosystem is pretty good.


Have to say I didn't expect much from it. I don't distro-hop usually, and when I do it's Debian-based distros so that I have the familiar package management tools. I also usually stick to XFCE + plank. But for the first time in over a decade I broke tradition and installed openSUSE Tumbleweed with KDE. The config panel you get on first login has an extremely handy feature that lets you choose a layout for KDE, and there's one that almost perfectly matches my XFCE + plank setup. Several weeks on, and I'm really happy with it. Wayland, PipeWire, everything up to date. Not a single issue! I'm thinking about using it as my daily driver on my work laptop.


For other people reading: I like everything you said about TW. I also can't believe I ended up on openSUSE and KDE (with a modified Bismuth tiling setup).

But to me the killer app of TW is btrfs snapshots and rollbacks via snapper. (It can be configured on other systems too, but the TW installer takes care of the subvolumes nicely.)


The upgrade-in-place plan is made more difficult the more distant from Ubuntu you get. Suse has a great ecosystem, but the transition costs would have been quite a bit higher.


Thanks for your comment. I have been thinking about Tumbleweed or Manjaro to try out a rolling release. Not sure yet which to choose, as they seem quite different in a lot of ways.


I liked SUSE before Novell acquired it. Is it still good?


FWIW, SUSE is a public company again, and Novell was split up into various different places: NetIQ (eDirectory), then Micro Focus for most of the "dead-ish" products like GroupWise and file/print services.


Yes. I've been using it for over a decade and it's still great. Like Fedora, it's leading the charge in modernization (btrfs, /usr/etc, snapper, YaST, etc.) and like Arch it has a big set of up-to-date packages, except they're in the main repo instead of an untrusted AUR.


You need third-party repositories if you want full ffmpeg with H.264 support, and since these are binary (IIRC) repos, each third-party package is often compatible either with the latest or an older snapshot, or with Tumbleweed, but often not both (unlike Arch's AUR, which only has one distro to target, though it's a moving target).


>You need third-party repositories if you want full ffmpeg with H.264 support

Repository. Singular. Packman.

>each third-party package is often compatible either with the latest or an older snapshot, or with Tumbleweed, but often not both

The conversation was about TW. There is a packman repo for TW.
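For anyone following along, adding it is quick (a sketch from memory; the mirror URL and priority 90 are the usual openSUSE wiki defaults, so double-check before running):

  zypper ar -cfp 90 https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
  zypper dup --from packman --allow-vendor-change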


The conversation was about getting packages "in the main repo instead of an untrusted AUR". I used Tumbleweed a few years ago. To get a package selection as rich as the full AUR (which is very broad but not 100% comprehensive for niche packages), I had to use multiple package repos besides Packman, and many packages I wanted didn't ship Tumbleweed builds (reminding me of my experiences with Ubuntu's PPAs).


Well, if openSUSE Tumbleweed used apt I would use it instantly; however, zypper is so niche...


Why is niche an issue? Most of the packages on apt are available on SUSE. Then you have the openSUSE Build Service (OBS) for the rest.


Glad to see that they're planning on contributing more to the upstream. But considering the size and budget of Google, and the amount of free work they're getting from the Debian community, it would be utterly irresponsible not to. Not blaming the team that's doing this good work though - I'm sure they're working hard enough as it is.


I think there's a big misunderstanding as to what exactly gLinux _IS_.

Nearly every large company I've worked at worked hard to standardize the dev environment, ensuring the right set of dev tools (and more) is available on every machine. It's not as simple as 'apt-get install build-essential' and you're done. For example, many large companies use Perforce, which also has a licensing requirement. Google in particular has a large set of custom binaries, remote file systems, unified login (think krb but maybe not exactly that), unified management, etc, etc.

That is what gLinux is. This isn't something you're going to want or need. In fact it doesn't even run outside of the Google hard wired VLANs really (for setup).

There are a LOT of net benefits for much of Google to being on gLinux. It means, for example, that nearly every product works on Linux - Google Docs, Sheets, Meet, etc. That's a huge win. Not only that, they are on par with Windows or Mac support (thanks, browser as a platform).


While this is true, I think a consumer gLinux (i.e. a Debian spin with support from Google) would be more appealing to vendors than Ubuntu and similar, just because of brand recognition.

I think Google had the opportunity to accidentally displace some of the Windows monopoly, not by doing a land grab a la ChromeOS or Android, but instead by just releasing a version of something they use themselves, which would be appealing to like-minded developers and companies that would follow Google's "stamp of approval".


Open source = no strings attached. I don’t get the shaming of big users of open source.

(And if we really want to get there: chrome, android, angular, etc.)


Technically, you are right. Free software provides freedom for every user to use the code in any way they want, and to distribute their changes to others at their own will - including never.

However, I very much understand the sentiment - the same companies that keep all their code closed and mistreat their users with proprietary software use free software so they can make more proprietary software. It is antithetical to the spirit of free software. It is understandable why most hackers are bitter about it.

The GPL (and AGPL) licenses were created to prevent such scenarios, but some people disagree with the rules of the GPL - calling them "restrictions of freedom" - and use more permissive licenses instead, which allows big companies to do whatever they want with the software, including making it proprietary.


GPL absolutely has strings attached.

I would say the most successful open source projects are all in the GPL family precisely because of the strings attached.


> GPL absolutely has strings attached.

Only if you're redistributing the resulting work. If not, you do whatever you like.


Note that AGPL (Affero GPL) expands the attached strings to any kind of user-facing network interface.

So if you take AGPL code and make a Service-as-a-Software-Substitute with it, you're still legally obliged to provide source code to the users of your SaaSS.

In fact, AGPL is so badass that not even Google [0] wants to touch it with a 10 yard stick :)

[0] https://opensource.google/documentation/reference/using/agpl...


> So if you take AGPL code and make a Service-as-a-Software-Substitute with it, you're still legally obliged to provide source code to the users of your SaaSS.

So the users of the software can do whatever they want with it with no strings attached then. I don't see how this concept is so hard.


I think the concept is hard because it's not clear which "users" you are talking about. The people using the software to host the SaaSS have strings; the users of the SaaSS do not. There are two groups who could be considered users.


Only if outsiders can use the service. Internal corporate use of GPL code has zero strings. You're all one entity and no distribution has happened.


> you're still legally obliged to provide source code to the users of your SaaSS.

IIUC, only insofar as the software already does that (i.e. by a download link in the UI). If the software currently lacks such a link, you have no obligation to add that feature, AGPL or not.


Affero GPL [0] states that:

> [...] if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.

So the way I understand it is as long as you don't modify the software, you can run the original version as a network service without providing source code.

But if you do modify the program and serve it to users through a network interface, you have to include some means of copying the source code.

[0] https://www.gnu.org/licenses/agpl-3.0.en.html


Interesting; I may have been misinformed.


I would agree on legal terms, but there is something to be said (and not positively) about leveraging open source while not contributing back in any meaningful way, especially when (as has occasionally been the case) they presume to make requests of said software maintainers.


Legally - yes. Morally... not exactly. And shaming is one of the very few ways to tell big users that it'd be damn nice to chip in here and there.


Last time I checked, most popular OSS projects were predominantly developed by developers at big users, on company time. So I don't get this shaming?


It can certainly seem that way with all the companies pretending to be free-software-friendly, but I don't think the percentage is that significant.

If you have any hard data you're willing to share, I'd be grateful for it.


This survey certainly doesn't answer all questions, but I think it does show that corporate funding of FOSS development is significant:

https://www.linuxfoundation.org/wp-content/uploads/2020FOSSC...

The Linux kernel is mostly funded by a few large corporations:

https://www.extremetech.com/computing/175919-who-actually-de...

Clearly, corporate funding is concentrated in a few large projects, which inevitably means that there will be volunteers out there who carry a huge burden keeping critical smaller projects going.


Thanks for the links.

> Figure 4 shows the employment status of the survey respondents. The overwhelming majority are employed full-time. The next two most popular answers were self-employed/freelancer or full-time student. This makes sense as most of the skills necessary to contribute to FOSS are highly valued in today’s job market (programming, technical documentation, etc.).

The way I understand this is that most contributors are employed full-time, not that most contributors are employed to work on free software full-time.

> In aggregate, of 577 survey respondents, 48.7% said they are paid for time spent on open source contributions by their current employer, 2.95% said another party pays them, 4.33% said they are not paid because their employment contract prevents them from accepting payment for open source development, and 44.02% said they are not paid for any other reason.

So almost half of all contributors are actually paid to work on free software projects full-time. That is a fairly high number, but I feel it's still not that overwhelming.

> Clearly, corporate funding is concentrated in a few large projects, which inevitably means that there will be volunteers out there who carry a huge burden keeping critical smaller projects going.

I suppose it's just basic economics - a piece of software that solves a general problem (e.g. a kernel) will be used by more people than software that serves some specific purpose (e.g. a text editor). But it seems to me that in no way is free software dependent on any company funding - there are more than enough hackers out there to keep the thing going, even if every company in the world decided to pull the plug at once.


>So almost half of all contributors are actually paid to work on free software projects full-time.

I'm not sure the report allows this conclusion. I think what it says is that there are full-time employees who work on FOSS projects as part of their employment. But I don't see where it says how much of their paid time they can spend on FOSS work.

>But it seems to me that in no way is free software dependent on any company funding

I don't know. Linux does look very dependent on corporate contributions. But maybe this is a special case as most kernel code is device drivers.


It all boils down to what kind of freedom you want.

Open source = libertarianism. You can do whatever you want, even if it implies making the life of everyone else worse by making them subjects of surveillance capitalism, akin to what Facebook and Google do.

GPL = "collective" freedom. You can do whatever you want, but it cannot trample on other people's lives. You as an individual cannot benefit more than you harm the rest of society.


Rolling releases are basically the main reason why I choose Arch Linux over Debian.

It would be nice if Google and Debian could team up to bring out a Debian version with rolling releases.

Imagine the stability of Debian packages together with the ease of upgrading an Arch Linux system - it would be a dream for every server.


Funny, a stable release is the main reason why I choose Debian over anything else. I enjoy that it doesn't change too much. And for the occasional software for which I really want (rarely need) the upstream version, backports, flatpak, docker or compile from sources got me covered.

But the beauty of libre software is its diversity. Keep on using what you like ;)


It exists: either Debian Testing (a week or so of proven stability) or Debian Unstable (latest and greatest).

If one follows the best practices for them[1] (installing apt-listbugs and apt-listchanges), one gets information about bugs and API-breaking changes before performing the upgrade, and can decide to pin the specific package as needed; the pin is automatically lifted when the bug is solved.
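Concretely, that setup boils down to something like this (a sketch):

  sudo apt install apt-listbugs apt-listchanges
  # From then on, apt shows release-critical bugs and NEWS entries
  # before each upgrade and offers to pin the affected packages.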

This is thanks to how the Debian packages treat bug reports, and the debian/NEWS file. I can't believe other distributions get it so wrong for so many decades.

Plus, these 2 distributions (Testing and Unstable) get all the automated QA usual from Debian: autopkgtest integration tests, piuparts, lintian, reproducible builds, policy conformance...

It seems it's no coincidence that gLinux from Google, the rolling release in TFA, is based on Debian.

[1]: https://wiki.debian.org/DebianUnstable#What_are_some_best_pr...


No, it didn't exist (outside Google). Debian testing is not a rolling release because it is not a "release"; it lacks internal coherence. As the article mentioned, testing packages are built in unstable and then simply moved to testing individually, with no regard for moving the whole build dependency chain together. So you end up with packages in testing that were built using packages that are not in testing yet.

Edit: also, testing lacks timely security updates; they don't bother to cherry-pick security patches for testing and instead just wait until the new secure version makes its way through unstable.


That sounds like moving goalposts: the OP mentions Arch's rolling release. And Arch doesn't rebuild the dependents when a package gets updated.

On the point of Testing lacking timely security updates, it's fair. But I invite you to compare the timelines with other distributions (including from corporate backers), and you will realize that their manual testing takes even more time.

Plus, you always have Debian Unstable. And apt-listchanges and apt-listbugs work fine in Unstable. It's the same experience: you get notified of serious bugs before upgrading, you get to pin packages and the pins are automatically lifted with the fixed version.


> And Arch doesn't rebuild the dependents when a package gets updated.

That’s not the point they argued, though. According to the comment you replied to, given version X of package A built against version Y of package B, Debian testing might contain package A version X but not package B version Y.

As far as I know, this situation does not occur in Arch.


Debian testing is a rolling release, with one caveat*

I use it on my desktop, but use stable for my servers.

* it is rolling until a new stable is released; at that point testing is frozen and you eventually move on to the new testing version
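(One mitigation, for what it's worth: point sources.list at the suite name instead of the codename, so apt follows whatever is currently testing - a sketch, assuming the default mirrors:)

  deb http://deb.debian.org/debian testing main
  deb http://deb.debian.org/debian-security testing-security main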


I'm surprised no one has mentioned NixOS. Easy rolling forward or back, with config, system snapshots, and cached binaries. Can roll individual packages or the whole system very easily.


From the article,

> Rolling releases with Linux distributions today are getting more common (Arch Linux, NixOS).


NixOS users are socialized to not bother reading


Can't roll back those HN comments though...


Can edit and/or delete within some time window.


I'm talking about the comments.


Btw I am using NixOS


The more important question is: where is this magical Google-blessed rolling release Debian available to download? We've all heard rumors of this internal Google distro, and what I'm reading is their intent to give back. You can use Microsoft's CBL-Mariner today.

Where is Google's?


Most of the stuff that makes the distribution different from Debian Testing is tightly coupled with Google's internal infrastructure and generally uninteresting (or unuseful) to users writ large. Just run Debian!


If you want to get really technical, there's not much difference between Google's gLinux and Debian testing.

Notably, our security, provisioning, and tooling is installed, and that's about it. Nothing that's particularly useful outside of the corp bubble.

Disc: Googler, not in CorpEng.


Not to mention many of those things don't work outside the rarified environment that is a fully managed VLAN.

Regarding "security", remember plenty of security is things like auditing, remote logs, ability of remote admins to control machines, etc, etc. It's ultimately all boring, but essential, large fleet management stuff.


It seems like they are just using Debian testing with periodic version freezes. It's very likely not all that different from just running testing.


[flagged]


I have never seen a gender-based criticism of the fragmentation of Linux distros, that’s for sure…


Why yes, they could share Goobuntu with the world. But they don't have to, and so they won't.


It’s really not that interesting


I've been using Debian testing for years, with unattended-upgrades upgrading everything 4 times daily (once for each Debian repository update). Not at Google.

AMA!


Have you ever used/considered using another rolling release? If yes, how would you compare them to Debian testing?

Also, are there any parts of Debian testing that are not rolling (i.e. the OS version number, or some package versions like the kernel or Node.js)?


I've always used Debian since I discovered it via Knoppix (which was my first Linux distro, Cygwin was my first Unix distro), I initially used Debian unstable but downgraded to testing some years later as upgrading testing often is smoother than unstable. I've never considered even trying any other distro. Everything is rolling, except unmaintained software, that just gets removed from the distro when other faster rolling things change incompatibly.


Why 4 times daily and do you do that while you're working?

How do you deal with packages being broken until the next reboot if you do updates during the day?

I don't even update my Arch boxes that regularly and always do a reboot afterwards.


The Debian archive is updated 4 times daily, so I update my system after each Debian update. I do that no matter whether I am working or not.

Packages almost always aren't broken in Debian testing but if they were, a reboot would not fix them anyway.

I restart processes instead of rebooting (using the needrestart/needrestart-session tools). I only reboot after Linux kernel image/module updates or microcode updates, because Debian cannot yet live-update those components.
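For anyone wanting to replicate the non-reboot part by hand, it's roughly this (a sketch of the manual equivalent; -r a is needrestart's automatic-restart mode per its manpage):

  # After each of the 4 daily archive updates:
  sudo apt update && sudo apt full-upgrade -y
  # Restart any daemons still running outdated libraries:
  sudo needrestart -r a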


Switch to Arch for God's sake


Regular Debian testing, unless you are doing something really esoteric, is really quite stable for desktop use. Such as what you might get if you installed the basic barebones Debian CLI setup, totaling about 1 GB of disk space occupied, and then used apt to install xorg and xfce4 or any other window manager.

Debian stable is extremely conservative in packages, and stuff is often several years behind.
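For the curious, that barebones path is roughly (a sketch, assuming a netinst install with all tasks unticked):

  sudo apt update
  sudo apt install xorg xfce4   # add lightdm for a graphical login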


What about security updates? They might come slowly to unstable (since the security team doesn't concern itself with unstable) and might be held up arbitrarily long before trickling down to testing, no?


I tried to search more regarding Rodete then found this[1]

> … was to get off of Ubuntu's more classic LTS release cycle, and move to the freely-rolling waters of Debian Unstable. An underpublicized but very real reason was that the Google Linux team had a number of Debian Developers on it, who reasonably wanted to be closer to their primary love and affiliation

:blink :blink

1: https://www.reddit.com/r/debian/comments/j4liv4/whats_going_...


Interesting post! I've been thinking about trying Linux with a rolling distro. They mention NixOS; do people here have an opinion on that one?


I am not sure I would call NixOS a rolling release distribution. It has releases every 6 months like Ubuntu, but it does not have LTS releases, and due to the relatively small community only the latest release is supported. I am doing software development on a desktop workstation that has followed all releases since the start of 2018, and overall it has been rock solid. If an update, either a new release or a config change of mine, breaks something important, I reboot with the previous setup and can continue to work until I have time to investigate what is going on. I've used that perhaps 5 times since 2018. But when I used it, it was absolutely critical.

But expect a steep learning curve.


nixos-unstable is a rolling release. I use it daily and on all of my servers.


I love NixOS. I would 100% recommend it for servers and probably 70% recommend it for desktop usage.

Feel free to ask me anything but instead of repeating past points I'll just link some past writings:

- https://kevincox.ca/2015/12/13/nixos-managed-system/

- https://kevincox.ca/2020/09/06/switching-to-desktop-nixos/

- https://kevincox.ca/2021/05/06/workstation-install-with-nixo...


NixOS is configured in a functional programming language and aims to be fully pure and reproducible. I've been using it for over a year now and my entire hackspace runs on it. Can highly recommend it.


Why not Debian testing branch, as per TFA?


I always thought creating your own rolling distro with Gentoo was precisely the best case use of portage, but I’ve never seen anyone do it. I must be missing something.


Coincidentally, Google has sorta done that too!

ChromiumOS/ChromeOS is built using portage, and is certainly rolling release. It's not really a traditional linux distro, but it's something.


Container-Optimized OS is based on Chromium OS and is more of a traditional Linux, in the sense that it runs on servers.

https://cloud.google.com/container-optimized-os/docs


...and so was CoreOS before the equally named company was acquired by RedHat.


In a funny twist of events, the author of the grandparent comment above yours was indeed building that OS (Container Linux) at CoreOS :)


Guessing because a fair amount of stuff is tested, packaged, or unintentionally tied to how Debian, Ubuntu, or Red Hat does things (or places things, etc). So anything outside that universe doesn't get popular.


Can I ask the following, as what Google did here is literally what I'm trying to achieve on a Raspberry Pi:

I have configured unattended-upgrades (the package from apt) on a Raspberry Pi on Raspbian 10, but no security updates ever seem to be released. Is there a gotcha I'm missing, or is there a better way of configuring an always-on computer and keeping it secure?


It sounds like you missed a step. You need to create an apt conf file that runs unattended-upgrades (in addition to the configuration you already did in /etc/apt/apt.conf.d/50unattended-upgrades). There are a lot of options (see /usr/lib/apt/apt.systemd.daily for a description of options; the manpages are lacking), but just running:

  dpkg-reconfigure unattended-upgrades
will create a minimal config file, '/etc/apt/apt.conf.d/20auto-upgrades', that may be all you need. But you might consider adding something like the below to clean up downloaded package files:

  // Auto cleanup archives
  APT::Periodic::MaxAge "10";
  // Setting minage too is a good idea to prevent race conditions
  APT::Periodic::MinAge "8";
Or, if you prefer (but, this will delete all archives present when it triggers and not just old ones like above):

  APT::Periodic::AutocleanInterval "10";
(Note: I'm assuming Raspbian is the same as the upstream Debian; I've never used Raspbian)
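(For reference, the minimal 20auto-upgrades file that dpkg-reconfigure generates typically contains just these two lines, enabling the daily package-list refresh and the unattended-upgrade run respectively:)

  APT::Periodic::Update-Package-Lists "1";
  APT::Periodic::Unattended-Upgrade "1";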


The article mentions:

> We build packages in package groups, to take into account separate packages that need to be upgraded together. Once the whole group has been built, we run a virtualized test suite to make sure none of our core components and developer workflows are broken. Each group is tested separately with a full system installation, boot and local test suite run on that version of the operating system.

> We then proceed to carefully guide this release to the fleet utilizing SRE principles like incremental canarying and monitoring the fleet health.


Raspbian, or Raspberry Pi OS? The latter is much closer to Debian AFAIK; there is a 64-bit version, etc.

But if you're using it for server tasks, you're probably better off actually running Debian. Although if you're looking for something closer to a rolling release, you could just pick one of the distros with a release policy that strikes your fancy WRT how they roll out new packages/etc. Most distros have RPi-specific images, or you can just install their generic arm64 images on top of something like the PFTF firmware.

Put another way, Raspberry Pi OS has a fairly dated kernel/etc but works well with all the hardware on the machine for desktop purposes. If you're using it as a server, it's not a good choice because they frequently don't have useful things turned on (various filesystems, networking options, RAID levels, etc). The core storage/networking/cpu/etc type of things have been working quite well in mainline for a couple of years now, so that has trickled down into all these distros.


Does anybody know which laptops Google is using to run their Linux distro on?


X1 Carbons are common. It's not exclusively those, though.



Why not ChromeOS?


I actually evaluated whether it was plausible to use a ChromeOS derivative as a workstation OS while I was at Google. At the time the answer was "no" for a bunch of reasons, including the following:

1) Crostini didn't integrate with the ChromeOS accessibility infrastructure, so it wasn't practical for users who required assistive technologies, which meant gLinux would have to be supported anyway

2) While there is some degree of support for graphics acceleration for Crostini instances, there's no real way to provide direct access to the hardware. A bunch of people needed to do work that required more direct GPU access (either very resource intensive rendering, or GPU-offloaded ML models and the like), which meant gLinux would have to be supported anyway

3) We were in the process of moving to using hardware-backed machine identity for Beyondcorp, and Crostini had no way of providing that and tying that identity to the host identity (you don't want a situation where a guest VM appears to be trustworthy when it's running on an unpatched host)

4) This was also before the acquisition of Neverware, which meant at the time that a team would need to be built to maintain a build of ChromeOS for generic workstation hardware

This was a few years back, and I left getting on for 18 months ago, so it wouldn't surprise me if this is reappraised at some point.


Isn't Crostini just a Linux VM inside ChromeOS? So you'd still be running some other Linux distro (I'm guessing gLinux itself) and not actually using ChromeOS for development?


They use ChromeOS as well, this article focuses on those people who want to use a mainstream Linux Distro at Google.


Pretty much everyone has a gLinux workstation or logs into a gLinux VM - they were, traditionally, the only devices that had access to the main source repository, so it's not so much about whether people want to or not.


Googler and ChromeOS engineer here. Opinions are my own, etc etc.

This has changed/is changing. It heavily depends on your team (especially if you're not in engineering; I don't think most non-engineers would have a gLinux machine or a workstation out of the box). We've developed tools for Crostini that let people access a gLinux environment. I don't know how much I can talk about it, so I won't go into details (although it's nothing super secret either), but the bottom line is that more and more engineers these days are able to do 100% of their job on a ChromeOS machine, including dealing with google3 stuff.


I mean, you can always use the web-based Cider to do your work. It's not great, but it gets the job done.


Likely because these are not end-user machines but developer machines. Most Google engineers don't checkout/write/build/test/run code on their machines (they either do these tasks through a browser or SSH to another machine).

I believe gLinux can be installed on trusted machines that you connect to over SSH, as well as on client laptops (where you can't check out code, etc).

That said many engineers/non-engineers at Google use ChromeOS on their client laptops (same with macOS).


Just use Arch


Or NixOS unstable hehe


The nice thing with flake-based NixOS is that it's trivial to cherry-pick unstable onto a stable base. I do a bunch of that in my nixconfigs: https://gitlab.com/jcdickinson/nix
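(Tangent: even without a flake-based config you can grab a single package from unstable ad hoc - a sketch, assuming the nix-command and flakes experimental features are enabled; htop is just an example:)

  nix shell github:NixOS/nixpkgs/nixos-unstable#htop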


Thanks for sharing. I'm new to flakes, so seeing that idiom of having a separate nixpkgs-unstable arg propagated around set off a lightbulb!


Too busy compiling gentoo



I wanted to give it a try, but SecureBoot is refusing to be set up correctly, despite following the manual :(

(I suspect the problem is with NVidia drivers..)


Works fine here on my laptop with an nvidia gpu. I followed the instructions here: https://wiki.archlinux.org/title/Unified_Extensible_Firmware...

Though I had to stick to a slightly older version of grub to get it to load up properly with the given guide.


Tbh, I've been trying these same instructions on Pop!_OS and that's why it didn't work. Maybe Arch itself is much better in that regard.


EndeavourOS here (arch-based). Love it.


Nothing like a good ol' 9 month late pacman -Syyu


I dunno, a late xbps-install -Su may give it a run for its money.


Out of curiosity, why would a Googler elect to use a Linux based workstation as opposed to a macOS workstation? Are they testing Linux binaries? Developing kernel extensions? Linux build tools? macOS offers such a superior desktop experience I can't imagine why else you'd ask an employer offering you a macOS machine to substitute it with a Linux one.


(Source: Ex-googler for 5yrs)

So first, this comment is clearly influenced by your own preference for macOS. The Stack Overflow Developer Survey shows that in the real world about 40% of development is done on Windows, 30% on Mac, and 30% on Linux. So the platforms are at least roughly equally appealing in real-world circumstances.

Secondly, Goobuntu was insanely well integrated with Google's custom source control and internal ecosystem. No one actually did real engineering work on Macs because it was disallowed for security reasons by policy, but I also never met anyone who wanted to. The tooling and customization on Goobuntu was very good, and although I'm sure people asked for a better experience on Mac, the ability to customize and control the Linux dev experience just beat Google's ability to modify macOS hands down.

Also I think there’s very little MacOS offers from a developer perspective that isn’t matched or easily trumped by Linux, but now that’s my personal bias showing ;)


> So first, this comment is clearly influenced by your own preference for macOS.

I've always found the love for macOS professed by so many here on HN very strange!

> The Stack Overflow Developer Survey shows that in the real world about 40% of development is done on Windows, 30% on Mac, and 30% on Linux. So the platforms are at least roughly equally appealing in real-world circumstances.

Indeed. I use Windows, and there's no amount of money you could pay me to use a Mac: at the bare minimum, I want an OLED screen, and there's no such option on Macs.

OTOH, I try Linux every now and then; the terminal experience is still sub-par, but the foot terminal emulator, despite how hard it is to configure, is quite promising!


Kitty and alacritty are both good options for Linux. With Kitty you get very easy access to themes.


> I want an OLED screen, and there's no such option on Macs.

Mini LED on M1 MBPs is not OLED quality, but it's quite close in many cases.

Additionally, in my experience (and from reading various reviews), M1 laptops are really quiet (fans are not loud even under load), have great battery life, and don't get nearly as hot as most (if not all) PC counterparts (under the same load conditions.)


> Mini LED on M1 MBPs is not OLED quality, but it's quite close in many cases.

No, it's not even close.

When I say OLED, I mean OLED, not qled or something that's "almost like" it.

To be clear: no OLED= I won't buy it.

> Additionally, in my experience (and from reading various reviews), M1 laptops are really quiet (fans are not loud even under load), have great battery life, and don't get nearly as hot as most (if not all) PC counterparts (under the same load conditions.)

Thanks, it might be nice if I cared about such things, but I absolutely don't - especially if the only non-negotiable requirement (OLED) can't even be met.

FYI, if you want to know, the other things I care about once the bare minimum of an OLED screen is there: ECC RAM and removable NVMe drives, ideally 2 of them (or better: 3).

Unfortunately, there are no such things on Macs.

I'm sure "quiet, long battery life, cold to the touch" are important factors for those who purchase macs- but not for me.

It seems that those who have found the M1 macs suitable to their needs think that it must be ideal for everyone else - it's not!

Different people have different needs.


Thanks, this is a great answer. So are the others; I think I poorly expressed my initial post.

I use Linux on the desktop myself. I can't justify the price, so I don't use a Mac. But if my employer gave me a choice in the matter, I'd pick a macOS machine - why not go for the fancier option? One where the UI, OS, and hardware are tightly integrated. I can always download Debian for free and run it on anything. That's what I was wondering, and what you've answered.


(Not a Googler, but discussed this with some.) If I recall correctly, if you choose a Mac and you aren't on a team directly working on Mac stuff, then it means you're going to do everything over SSH to a Linux workstation - IIRC the choice is essentially about what kind of laptop you want, and you can't do development on the laptops by policy.


You can work on your Mac (I do, and my official duties pertain to Android) but you get to deal with the janky FUSE/network server mount, and if you're running builds locally they're going to be slow.


thanks, TIL :)


> But if my employer gave me a choice in the matter, I'd pick a macOS machine

Provided the convenience is the same. If one is used to a certain way of working with Linux (maybe scripts, desktop environment, or workflow - keyboard shortcuts, etc.), then just picking a machine for the "fancier option" is worthless or even annoying. Also, wait until you have to migrate scripts from bash to zsh. And again, not all Macs allow for a bare-metal install of Debian.


People who typically want to use Macs don't care much for Google's ability to modify macOS. In fact, the less Google puts on the computer the happier I think most of them would be.


Because almost all of our production fleet is Linux derived, it makes sense to use a platform that is nearly identical.

Additionally, cost is a huge factor as well. Think about the cost of purchasing 100,000 macOS machines. Compare that to purchasing 100,000 generic x86_64 machines from several manufacturers, which are generally more friendly to customization for Google's needs.

Feel free to ask questions!

Disclaimer: Googler that has a gLinux Desktop, a gLinux Cloud VM, and a macOS laptop. Not in CorpEng.


>Because almost all of our production fleet is Linux derived, it makes sense to use a platform that is nearly identical.

Yeah, it surprises me that more people who use Linux on their infrastructure don't also dogfood it on their personal machines. We run a Linux-based cloud, and my colleagues use a variety of OSes for their daily drivers; I definitely see a difference in capability between those who dogfood and those who think they can get by just fine on Windows or Mac.


As someone currently working at a company that has a Linux-only product but somehow decided to standardise on MacBooks in engineering...

It turns out I'm not the only person with the complaint "macOS in Engineering considered Harmful". Pretty much all backend work is done either in a Linux VM using Parallels or in Docker containers, and the developer experience sucks because of that. While theoretically someone could try building some components on a Mac, that would be a small portion, useless for general work.

We might not be at the scale of Google, but we have even less Mac-specific work and a similar Linux focus (hell, I'm working on a custom distro even).


It doesn't surprise me at all. It's not the software, it's the hardware.

PC hardware is absolutely terrible i.e. cheaply built, poor battery life, poor quality displays, numerous bugs and Linux compatibility issues and so on. My 16" MBP will reliably allow me to work between 15 and 20 hours without needing to recharge. The mini-LED screen is far more advanced than anything on a PC laptop, daylight visible, and can go head-to-head with professional reference displays. The CPU is substantially faster and cooler than Intel's top of the line. The audio is the best on any modern laptop. It's not like there are one or two features that Macs excel at, they are better in too many ways to count.


>>> macOS offers such a superior desktop experience

[citation needed]

I don't work at Google, but recently I had to discover macOS at work, and boy, the "desktop experience" you talk about sucks.

I miss my Linux env (I have a Slackware box at home) and even my Windows one (the old machines at work).


Yeah, it still surprises me every time I try to copy and paste a file in Finder by right-clicking just how crap it is.


The average Googler doesn't get to pick the OS that their workstation runs. It's going to be Linux. The typical work setup is a browser, some kind of editor and terminals - which is something that works well in Linux.


It depends on the team, and the individual.

Generally, a Googler does actually get to pick their OS, it's part of the onboarding process that every Noogler goes through, and you're given 30 days from your start to swap around and test the other platforms (gLinux, macOS, gWindows, or Chrome OS).


I was under the impression that unless you have an actual business need that calls for something other than gLinux, then it's going to be gLinux. Like a Chromium dev who primarily works on the Windows side of Chrome obviously has a need for it.

I'll fully admit that my onboarding experience is quite out of date since I've been there for a decade. When I joined I didn't really have a choice for workstation since I only worked on server side stuff. Sounds like things have changed.


No, they're the same. Pretty much all google3 work is done on Linux, but folks usually get 2 machines: a laptop where you pick the OS, and the workstation (or cloudtop), which is Linux unless there's a special need like Chrome eng. Lots of folks do stuff like use ChromeOS + Cider + cloudtop though.


Okay, then it sounds like I did give an accurate answer to OP's question (which was about workstations, not laptops) :)

I've used different setups over the years, but my workstation has always been Linux. Currently running gLinux on the workstation and cloudtop. My current laptop is running ChromeOS (I'm a ChromeOS swe).


Are you sure you're talking about workstation and not laptop? What you're saying sounds true for laptops, but I was at Google for 10 years on a variety of teams and I was only ever aware of Linux workstations unless one has a specific need for a different OS. (Or you could get a Chromebox at one point, but then you'd be SSHing to and building on a VM.)


I was certainly under the impression you got the choice, thanks for confirming that applies generally.


You get a choice for the laptop, but unless your work touches another platform, it's going to be done on a Linux workstation (or remoting to one), IIRC.


> The typical work setup is a browser, some kind of editor and terminals

I have all those things open in my work setup, but I also have 3 or 4 different chat apps that don't run on Linux, a few music apps that don't run on Linux, a ton of Adobe software that doesn't run on Linux, a stock trading app that doesn't run on Linux, Kindle, a huge number of random productivity apps and games that don't run on Linux; the list goes on.

Having to deal with half-baked Linux alternatives or mess with a Windows compatibility layer sounds like a total nightmare. I also think it doesn't support modern DRM, so you're SOL if you want to put on Netflix or something in the background.


> but I also have 3 or 4 different chat apps that don't run on Linux, a few music apps that don't run on Linux, a ton of Adobe software that doesn't run on Linux, a stock trading app that doesn't run on Linux, Kindle, a huge number of random productivity apps and games that don't run on Linux; the list goes on.

Well, from the viewpoint of Google the corporation, devs being unable to run all that software that only distracts them from real work can be considered a feature rather than a bug.



