I wish the author's complaints about RPMs would be taken seriously. I'm a pretty experienced dev and sysadmin, and packaging a simple CLI app written in C with an easy compile process (and no dependencies other than glibc) was a giant pain in the ass. In the end I doubt I did it the "right" way despite hours and hours of effort, as I made use of Docker to get builds for the various active Fedora and RHEL versions without having to provision a complete system for each.
I don't know if it's the documentation or the RPM tools/macros that suck, or some combination of both, but it's a real setback for someone trying to contribute packages. Getting a contribution into the Fedora repos was also a giant pain in the ass: there is a process, but the documentation is entirely unclear about what that process is unless you already know what the process is.
My opinion of the RPM macro system is that I can see why it was done that way, and it was an admirable approach to make packages easy to read, but there's way too much magic. I'd rather parse a couple lines of shell script that made it clear what was happening, than have to look through a handful of macros that aren't documented well (or good documentation is impossible to find). Arch's PKGBUILD format is amazing, and I'd love to see it used as inspiration for an RPM replacement (or at least alternative approach). I try to be very wary of the "the last developer sucked" fallacy when coming into a new and unfamiliar project, but there are enough similar experiences to mine that I think this criticism is valid.
That said, much appreciation and love for the dedicated people who maintain and build RPMs for us. It's often thankless work, but without it we would have nothing.
The issue is that mock does nothing for moving sources in place, it's just rpmbuild and a chroot. From what I could figure out, fedpkg is more about working with existing Fedora packages shipped as part of the official repositories, and less about "Here's how to get this random project you don't own into copr".
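Something along these lines works (the release, COPR project name, and chroot below are placeholders, not real values):

    # build the SRPM and binary RPMs locally in a mock chroot
    fedpkg --release f38 mockbuild
    # then submit the resulting SRPM to your COPR project
    # ($mock_root is a chroot name like fedora-38-x86_64; my-copr-project is a placeholder)
    copr-cli build my-copr-project ./results_*/*/*/*.src.rpm -r "$mock_root"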
This assumes there's an RPM spec file in your working directory and that the sources it references are handled properly within/nearby it. Fedpkg will create a 'results' directory with the SRPM and RPMs.
The `-r $mock_root` part can be repeated if you want to build against multiple targets in COPR.
My experience with RPM is fairly limited, but I have a similar experience with Debian's .deb format. My rough understanding is that deb is not as complicated as rpm, but I still have yet to succeed at packaging something with either. Meanwhile I've had a much easier time with APK and PKGBUILD from Alpine and Arch respectively. Why is it that these packaging systems make it so much easier while the juggernauts of the Linux distro world have such complicated, poorly documented messes?
> Why is it [APK and PKGBUILD] make it so much easier while the juggernauts of the Linux distro world have such complicated, poorly documented messes?
If I had to venture a guess, I'd point to 3 factors:
1. They're both much younger than their 'juggernaut' counterparts.
2. They both have explicit commitments to simplicity, perhaps even at the cost of other virtues should push come to shove.
3. The documentation exists primarily to facilitate the maintenance of the distro itself, which is sustained by inducting members into a community; the documents are referred to users as part of that process of inculcation.
Documents that work well enough for (3) may not be that helpful to someone who expects to be able to sit down by themselves and slap together a package to maintain for their own usage or to submit as a drive-by contribution.
In the Arch world, at least, drive-by contributions play a crucial and esteemed role in the ecosystem, namely in the AUR. Perhaps similar is true in Alpine, as well.
I have had a similar experience packaging a simple Python program, coming from zero packaging experience. PKGBUILD was the first I did and by far the easiest; deb was about 3x more time and effort, and rpm was a bit more time and effort still, despite me now having experience with the other two. I wanted to package for nix next, but got stuck because the program daemonizes and afterwards can no longer import its dependencies.
It's picosnitch; if you know what I'm doing wrong / how to fix it, that'd be great! It depends on bcc and psutil, and I am able to build and run it from a .nix file using python3.pkgs.fetchPypi.
However, when I run it with just start, it can no longer find psutil and bcc. It uses a daemon class here [1].
I also encounter the same issue when running it without sudo, since it will re-execute itself here [2].
It can also use systemd, but I didn't get around to figuring out how to use systemd and install the service file on nix (I've never used nix before and know very little).
All nix packages are self-contained ("portable" in Windows parlance), so programs don't get a default working PATH. You need to craft PATH yourself, usually using wrapProgram.
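A rough sketch of what that can look like here (this normally lives in the derivation's postInstall phase; the store paths below are placeholders that Nix would interpolate, e.g. via makePythonPath/makeBinPath, and the exact dependency set is a guess on my part):

    # bake the Python deps' locations into the installed entry point, so the
    # daemonized / re-exec'd process still finds psutil and bcc
    # (the /nix/store paths are placeholders, not real hashes)
    wrapProgram "$out/bin/picosnitch" \
      --prefix PYTHONPATH : "/nix/store/<hash>-python3-env/lib/python3.11/site-packages" \
      --prefix PATH : "/nix/store/<hash>-runtime-tools/bin"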
If you run into new issues with the package in trying to leverage wrapProgram, feel free to open a thread on discourse.nixos.org and also to @ me there!
Picosnitch seems like a nifty utility and it's awesome that you've taken up that packaging effort to make your work more available for users of so many distros. :)
I've lots of experience with both. Debian, while not perfect, is miles better than the rpm systems. Both require knowledge, because thinking of all the multitudes of edge cases makes things complicated. Don't have personal experience with Alpine or Arch, but I suspect they're not as general. Nobody WANTS to create lots of complexity for no reason.
> Nobody WANTS to create lots of complexity for no reason.
Agreed. I think in the case of RPMs and Debs, it evolved to this point. I would guess the first versions of RPM and its macros made packaging amazingly easy, probably much better than PKGBUILD is now.
The problem is that software started getting written in tons of different languages rather than mostly in C, with different libraries/dependencies, different strategies and edge cases, and the system kept expanding to accommodate them. Eventually it's a mess because there are too many macros, it's now too magical, and documentation hasn't been made a high priority because it's a relatively small group that uses it, and they don't need the docs. Once you know it too, there's not a lot of reason to change it and even more reason not to change it (because that's a ton of work for little to no gain for the majority of the maintainers). I don't mean this as a criticism, just a pattern I've seen over and over having worked on software for a long time.
I think the actual package format for RPM probably doesn't need to change too much, it's the tooling and docs that do. I'd love to see a new project that is compatible with the RPM format but uses a system more like PKGBUILD.
rpm is heinous. They need to handle comments and then expand macros so that you can comment-out stuff in your .spec without getting mystery errors from multi-line macros. If they can't even get this right after decades, they're hopeless.
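A contrived example of the trap (a spec fragment; %configure here is just a stand-in for any multi-line macro):

    # This line looks commented out, but rpm still expands %configure on it, and
    # since the expansion spans multiple lines everything past the first line stays live:
    #%configure --disable-some-feature
    # The safe way to mention a macro inside a comment is to double the percent sign:
    # %%configure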
I have rolled my own RPMs for OpenSUSE TW. I learned about the build process from a site aimed at Fedora and got the syntax right(ish) by looking at a bunch of official .SPECs from the OpenSUSE project. I also looked at packaging .debs, but that looks like an utter pain.
This matches my impression and experience, and I've been surprised to see many commenters here write that they found DEB packages to be simpler to write.
Perhaps as openSUSE users we both found the whole process easier because openSUSE drew us to OBS, and that made actually building the package easier?
RPM is much simpler than .deb, so a guide is not really necessary. It's just a simple .spec file with a few headers and sections. See src.fedoraproject.org[0] for Fedora .spec files (click or search for a package, then click the Files tab, then the .spec file).
However, Fedora uses the "convention over configuration" principle, so a packager must know Fedora conventions to package a program properly.
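For a trivial C program the whole thing can look roughly like this (the name, version, URL, and changelog entry are made up, and it assumes a recent rpm with the usual helper macros and an upstream Makefile that honors DESTDIR):

    cat > hello.spec << 'EOF'
    Name:           hello
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Toy CLI program
    License:        MIT
    URL:            https://example.com/hello
    Source0:        hello-1.0.tar.gz
    BuildRequires:  gcc, make

    %description
    A toy package showing the basic spec sections.

    %prep
    %autosetup

    %build
    %make_build

    %install
    %make_install

    %files
    %{_bindir}/hello

    %changelog
    * Mon Jan 01 2024 Example Packager <packager@example.com> - 1.0-1
    - Initial package
    EOF
    # with the tarball placed in ~/rpmbuild/SOURCES, build the SRPM and binary RPMs:
    rpmbuild -ba hello.spec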
I find the Debian maintainer guide helpful enough! To me the difficulty difference is not large, but I like the simplicity of managing just a single spec file for RPM source packages.
I think the problem is greater than RPM vs DEB vs ... It goes all the way back to the Linux philosophy "let the user break things".
I think some people use Linux to avoid paying Windows license fees or Apple's premium. There are tools only developed for Linux, but the opposite is also true for Windows and macOS. I've found most macOS apps follow Apple's core philosophy to be simple, aesthetically appealing, and easy to use. Can't say that for Linux packages (and to some extent, even Windows apps suck).
I view Linux mostly as an environment where you're free to do whatever you want, even shoot yourself in the foot. But I'd never recommend that to average Joe, for reasons such as the fact that this article exists.
Nobody uses Linux to avoid paying Windows because Windows comes preinstalled on most computers and because Windows piracy exists and is tolerated by Microsoft in order to entrench Windows usage. Switching to Linux is a deliberate choice to prefer the benefits only Linux offers and/or to avoid the detriments of all other options. The majority use the preinstalled default: Windows.
Also nobody uses Linux to avoid Apple's premium because macOS is free. A minority of people install Linux on Macs because they prefer/need Linux even if they already paid the Apple premium. The rest simply use the preinstalled default: macOS.
For a user with basic needs and no dependency on some Windows-only software, Linux is a viable choice.
Was there ever a time when "Unix" apps primarily targeted macOS? Because in most cases I've encountered, apps are only tested on Linux and macOS compatibility is an afterthought.
If it was a Linux problem (which is an interesting thing to think about), wouldn't you expect to see the same issues in all the distros? Arch, Alpine, etc seem to have a pretty good system. PKGBUILD in particular I found great.
>packaging a simple CLI app written in C with an easy compile process (and no dependencies other than glibc) was a giant pain in the ass
That's why I love to work with FreeBSD ports and pkgsrc from NetBSD; it's just so easy, and the community helps you quite a lot if you are in a corner (case).
This article is funny. The author chose Silverblue, then proceeds to remove/disable and/or criticize almost everything that makes it a Fedora distro.
Fedora Silverblue is not the only immutable distro; maybe the author should have looked at its alternatives. openSUSE MicroOS, for example, does not use rpm-ostree but, I believe, btrfs snapshot capabilities. Ashlinux describes how to build a similar distro out of Arch Linux [1], rlxos [2] chooses distrobox instead of toolbox by default, VanillaOS [3] is Ubuntu/deb based, BlendOS [4] has appeared on Hacker News recently, and there are probably many more I'm not thinking of atm.
Silverblue is pretty great. But it's really been taken to another level in the last few months since it started allowing you to _boot_ from a container image. That means you can have some automated container build somewhere that you point rpm-ostree to, which gets automatically pulled down to all your machines. It's like building your own distro, but without any of the work. Here's mine, for example:
And then, of course, I _also_ have an automated Arch build that I use with Distrobox on top of my custom Silverblue build. I get the stability of an immutable OS layer with the easy installs and package availability of Arch.
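Pointing a machine at such an image is basically a one-liner (the registry/image below is only a placeholder, not my actual one):

    # switch the booted deployment to follow a container image, then reboot into it
    rpm-ostree rebase ostree-unverified-registry:ghcr.io/someuser/custom-silverblue:latest
    systemctl reboot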
I've contributed to the Ublue project (https://ublue.it/) and it's totally a game changer to me. I can have my system image, with the proprietary Nvidia drivers and all the packages I want on my host, declared, built and signed in the cloud. Ublue also has an automatic system update service, which means all of the upstream updates to the host system just get pulled and are available after a reboot.
Almost everything not strictly necessary on the host is installed as a Flatpak, and I have an Arch Linux container/distrobox for all of my development needs. I've actually grown to love the distrobox system, as I never have to think about breaking or polluting my host system when installing new packages. GUI apps work flawlessly too, and I've heard that the integration is even better than with Flatpaks, with GTK apps using the system theme automatically; even LibreOffice apps in different containers (which you probably shouldn't do anyway) just work.
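The day-to-day flow is roughly this (the container name and packages are just examples):

    distrobox create --name archbox --image docker.io/library/archlinux:latest
    distrobox enter archbox
    sudo pacman -S --needed base-devel git   # lands inside the container, host untouched
    distrobox-export --app someapp           # optionally expose a container GUI app to the host menu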
Thank you so much for sharing this, it looks really promising. Here is a video by Jorge Castro, linked from their site, that explains the upsides: https://www.youtube.com/watch/X8h304Jp9N8?t=435
I considered using an Arch container for development, but ultimately felt this wouldn't actually make my life easier, as I'd still have to deal with updating issues/manual work. Instead I'm just running a Fedora 37 container, and thus far it's working great.
> Silverblue is pretty great. But it's really been taken to another level in the last few months since it started allowing you to _boot_ from a container image.
What was the state of affairs before? Is the main advantage that at install time, all you do is download the image (i.e., you don't have to actually run the automation on the target)?
Well, before it was just a lot more work to create an install image and keep it updated. So you'd just stick with the official Silverblue image, or the spins (KDE, Sway, etc).
And yeah, the advantage now is that you just pull from your image, and all the automations are run on the server beforehand. That means if something breaks, you just don't upgrade because there's no new image, everything stays in sync across all your machines for free, and you don't have to use a bash/ansible/whatever script when you set up a new machine. Upgrades are also faster because all the layering has been done beforehand, though that's not much of an issue if you have upgrades run in the background, which you should.
I have to applaud this post for many reasons, and I agree with the approach entirely, but I worry it will scare people off.
Fedora is solid, and eschews the issues that make Arch hard to maintain and Ubuntu hard to stomach. The release cycle is very sane. IMHO it's the first choice now for daily driving in the RPM world since CentOS was… insert your favorite word.
If I were OP, I wouldn’t have gone straight from Arch to Silverblue on my laptop. I would have done vanilla Fedora, and used one of its several Spins which ship with Gnome alternatives. Then I would have tried Silverblue on my desktop on a separate partition. Not in a VM, because I would want to be using the hardware unmediated, for science. It’s pretty exciting to have NixOS-style immutability in a mainstream distro. It’s still very much beta, and I’m not sure I have the energy to give to its edges. And that’s on desktop. Laptop driver support isn’t a guarantee, it’s not clear to me how a reboot-to-update workflow would look for me, and I worry that a lot of software makes assumptions that aren’t true under Silverblue’s paradigms. What about hibernation!?
So I applaud the author, for jumping in with both feet. The amount of useful detail in that blog post will be saving people time for years to come.
And like a lot of people in this thread, getting off of Arch has been kind of liberating. I only recently switched over, but have had nothing but good experiences so far.
Just to note about the partition thing, you CAN do this with Silverblue, but it's not for beginners. The default is to use the whole disk. I dual boot, but give one whole disk to Silverblue and another whole disk to Windows.
I'm a long time Fedora user (on various laptops), so I'm already pretty comfortable in that ecosystem, but I don't think it's accurate to describe Silverblue as beta. It's not marketed nor considered as beta by the devs/community and it's based on technologies that were intended to be used in containerized production environments (Fedora IOT and CoreOS). Anecdotally, it's been far more stable than the (already stable) Fedora WS I've used before.
Re the other immutable Fedora desktop distros (e.g., Kinoite), those don't get quite as much support, so YMMV.
Partitions: That is interesting - are you saying partitions in general are challenging, or a particular aspect of how Sliverblue works? I'm on team whole disk too, KISS
Silverblue as beta: "Emerging Fedora Editions \ Preview the future of Fedora." GetFedora.org (bravo to whoever worked on that, it's great for new folks). After you commented that it's not in "beta" I went looking for where I picked that notion up, and all I could find was that quote. Perhaps some minor copy revisions would help, or just eliminating the "Emerging Fedora Editions" category altogether for now, especially given Silverblue is the only thing in it. If it were moved up into the main section, that would have eliminated the confusion entirely. I made a mock of that: https://imgur.com/a/DLMqK2e. Issue worthy?
Kinoite is interesting, but you've inspired enough confidence that I'm going to give Silverblue a go as a daily driver. I suppose with my early foray into NixOS it wasn't obvious if or how a desktop environment would work, and my use case was more about exploring its devops features. Tell me if it isn't, but Silverblue seems like the best of all worlds...
Hmm, I suppose you're right, beta might be accurate based on that. Silverblue seems quite stable, but I guess the Fedora project isn't throwing all their resources behind it just yet. It's interesting because Fedora IOT uses `ostree` too, and isn't "emerging", perhaps because as a server/embedded device OS it's obviously for advanced users?
Re partitioning, it's mostly because the Anaconda installer doesn't support it. It isn't equipped to deal with the fact that what can be mounted as separate partitions in Silverblue isn't the same as in normal Fedora. So it's not really too difficult if you read the docs and are comfortable with Linux, but it didn't seem worth it to me.
I've been really pleased with Silverblue. The only issue I had was an update silently failing last August. Otherwise, it just took some getting used to installing and running dev tools in toolbox/distrobox.
> having gone through this process I think I now understand why RPM packages are less commonly available compared to those for other distributions.
The most popular option of this kind, DEB, is actually even worse to work with imo, due to essentially similar kinds of problems. DEB is the most likely format to be supported by application vendors solely due to the popularity of Ubuntu.
If Fedora, rather than Ubuntu, was the distro of choice for newbies, dilettantes, and overburdened engineers who hope their choice of distro will afford them one fewer thing to focus on, then RPM would be the format Slack and Discord distribute for Linux. (And it would be about as much work for them as maintaining their DEB packages is for them now.)
Maybe interesting, but not actually relevant as far as I'm concerned.
When I'm packaging, I don't care about the archive format of the binary packages. I care about the contents inside the source package archives and the tooling and documentation for working with those files.
(I'm also not super interested in third-party tooling like the author of the linked blog post was working on. I don't want to use something like Holo or FPM or whatever and treat the binary archive as a glorified zip file, naive of the base system and its conventions. I want to conform to the policies and norms of the target distro as far as possible, vendoring as few deps as possible, running the same linters that distro developers are expected to use, etc.)
That approach is fundamentally broken because binaries from one distro aren't guaranteed to be compatible with binaries in another distro. Heck, even binaries from the same distro at different points in time can be incompatible with one another. There'd be incompatibilities across ABIs, compile time feature flags, system paths, and much more. There are plenty of assumptions about the system baked into binaries that aren't always trivial to fix.
Automated conversion can be helpful if you don't have access to the source code though. There are plenty of examples running proprietary software on unsupported distros using this approach. However, because of its shortcomings, it shouldn't be used when the source code is available.
I had a very similar experience as the author: I used and adored Arch, but the rolling updates got to be overwhelming. Fedora (not Silverblue in my case, I still need to try that out) was the perfect solution. Very similar in philosophy to Arch, enough so that the Arch wiki is still 95% applicable, but very stable within major releases. It also was a bonus that learning Fedora also taught me CentOS/RHEL at the same time, which started coming in super handy when I needed a super-stable and/or server-side OS. Nearly all of the skills (and even scripts) are fully compatible with both, with only the occasional minor difference. I still have a huge warm spot in my heart for Arch, but I rarely use it anymore (mainly on my Pinebook Pro for building ARM packages of my software).
Yes, likewise. Debian-based distros won't be wildly dissimilar, but you'll have to modify bash commands and stuff to fix paths that are different, conventions, and other things, whereas on Fedora it usually just works as is, minus a package name and a `dnf` v. `pacman` here and there. High level concepts will usually be the same, but when tweaking a Debian-based system you'll have to learn whether or how your distro of choice does it differently.
You didn't have to convince me that using Flatpak is awful, but now I have one more piece of written evidence of just how awful it is. Unfortunately, I too need to modify an evdev file to get the keyboard layout I want. I'm not going to build RPMs to do that.
Unrelated to above, my last experience with Fedora some years ago was running fedup only to discover that it had some Python syntax error on an unlikely path (something like a misspelled error type when handling errors) which broke mid-upgrade. And that was probably the third upgrade which broke the system to the point that wiping and starting fresh was easier than trying to fix the broken parts.
My experience is likely very out of date though. Today, I maintain software that deals with configuring and installing RHEL / CentOS / Rocky / SLES (but not Fedora). I don't know if I want that approach to be the approach I use on my personal computer. I don't like the tooling. There are too many levels in it, and every now and then it breaks in ways that are very hard to deal with (e.g. the BerkeleyDB code of RPM stores some cursor information in its database and tries to reuse it on subsequent launches, but fails if some part of the database was modified without touching that cursor, which invalidates it). The latter isn't usually a difficult fix (just delete the whole database state file), but it's annoying that this problem has been there for ages, and if it happens as part of some other automation step, it may put you in some half-finished state that's hard to make progress from. Similar problems with getting dnf to reliably discard stale info.
Also, I feel like the distro doesn't have a "character". I mean, if I want new shiny stuff, then I'm not getting that. If I want old reliable, still not getting it. Great flexibility? -- still no. Great defaults that need no intervention? -- still no. It's an OK distro, but there's plenty of that around.
I've been a user of Arch for the past several years and I've always been a huge proponent of the rolling release model. When I first picked up Arch, it was because I needed more modern versions of several tools than were packaged with Fedora or Ubuntu and Arch was a really easy way to get those updates. No more waiting 6 months for the next release cycle to get a new piece of software (yes, I know, "compile it yourself" is always an option--but, I've personally found that nothing destabilizes my system like adding a bunch of software from outside the package manager).
I find my own argument somewhat less compelling today. With systems like Flatpak gaining traction, we're seeing a trend towards separating the Operating System (and I'm thinking more of the overall foundations of a complete, modern system, not just OS = Linux kernel) from the applications for that operating system. Existing package managers handling the OS while Flatpak, AppImage, Snap, etc. become how applications are installed and managed seems to be a good direction.
To be clear, the divide today is far from perfect and we still run into the "Are you running the Flatpak or the distro version of X?" There are also compatibility issues to be worked out. All that said, I do still find the story of "a stable OS with up-to-date applications" compelling.
It's so rarely an issue on Arch. Between the massive official repos and the incredibly comprehensive AUR, I've only needed to do it a couple of times.
My initial comment was about needing to do that on non-arch systems. I've created my own RPM and DEB packages in the past as well; but, at least when I did it years ago, it wasn't as effective as a PKGBUILD on arch.
I've been using Fedora since 2014 and quite happy with it. It just works and gets out of my way, allowing me to focus on actual work. Gone are the days of me using minimalistic window managers with custom config, I just use vanilla gnome now and get on with real things.
This last decade I've come to hate any config of my work environment that takes time away from actual work.
Jumped on Silverblue around november last year and now it's my new favorite distro. I honestly believe this is the future of all Linux distros, even for servers.
The main feature I think is great for end users is the ability to quickly restore your system to a previous state. I was always of the opinion that things WILL go wrong, so an OS must be able to handle that.
I've since heard that using Arch you can set up btrfs snapshots in GRUB in a similar way. The goal here isn't to use one distro over another, the goal is to be more user friendly. Even for users such as me with 20+ years of experience in Linux, because every second I spend fixing problems or config is a second away from actual work.
I did something similar about 2 years ago. I do not regret it. I do love Arch, but now and then I just wish it worked. It is a pain to sometimes do a small update and then spend hours fixing things. I do not have time for that anymore :(
Also the support for some packages is pretty limited and if you need to have some custom driver you can say goodbye to part of your Sunday.
I am using fedora, but not silverblue. It is sufficiently good.
It works, is stable, has good package coverage and nothing really pisses me off. (And believe me when I say it is something hard to achieve)
Since my first install, I believe at Fedora 30... I have only encountered a small issue once. I think it's currently at 38, I guess... I don't even need to know, because it works.
I ditched Ubuntu after the Amazon stuff scandal. I know they have to make money somehow, but that was concerning.
Anyway. I would love to go back to arch someday, but the community is toxic af too. I never asked anything in the forums because the answer is always the same.
It is not something for beginners nor for anyone who just needs to use the computer
I'm surprised that openSUSE MicroOS isn't more popular. It's a rolling distribution, but the OS is mounted read-only, and updates are installed into a snapshot, which you then reboot into.
If you get a bad update, you can simply reboot back into the previous snapshot.
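The flow is roughly this (the package is just an example):

    sudo transactional-update dup                # prepare an updated snapshot
    sudo transactional-update pkg install htop   # or add a package into a new snapshot
    sudo reboot                                  # boot into the new snapshot
    sudo transactional-update rollback           # if it misbehaves, go back to the previous one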
I use opensuse tumbleweed and rollbacks are a thing there too. I would have never thought I'd end up on an rpm based distro and run KDE. But that is the best combination I've found and it works great.
(KDE remembers my dual monitor setup best. I need to use it in 3 different locations)
> That is, sudo pacman -S some-package may lead to problems, so it's recommended to use sudo pacman -Syu some-package instead (see this section for more details).
I think `pacman -S some-package` is fine, it's `pacman -Sy some-package` that could be a problem.
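Concretely, the difference looks like this (the package name is just an example):

    sudo pacman -Sy some-package    # refreshes the repo databases but doesn't upgrade:
                                    # the new package may need newer libs than you have installed
    sudo pacman -Syu some-package   # refresh + full system upgrade + install: the supported path
    sudo pacman -S some-package     # no refresh: installs against the databases you already synced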
This particular quirk is a good example of one of the shortcuts pacman takes which would not be tolerated in dnf.
The Arch devs here say 'partial upgrades are unsupported'. Fedora devs might say 'a dependency resolver that can't handle partial upgrades is literally incorrect'.
At the same time, pacman is very fast compared to zypper or dnf, and many users prefer it for that reason.
I think both sides of the tradeoff can be valid choices, especially if the simpler implementation tends to fail in predictable, manageable ways. I think that's how most Arch users must feel, something like:
> Upgrades via pacman rarely cause issues, and when they do I always know how to manage it because the design is simple and clearly documented enough that I can always understand what's going on. Partial upgrades don't make much sense for the kind of rolling release Arch is anyway.
If that's someone's experience using Arch or whatever for $N years, then who would I be to say they haven't made the right choice for them?
Personally, my preferences are similar to yours. My favorite package managers have never been the fastest or the simplest, but the most featureful and robust. But I'm trying to develop a deeper appreciation for the things that many legitimately love about the simplest and fastest ones as well.
I am a very happy Silverblue user. There is a learning curve to adapt your workflow to containers, Toolbox and serious Flatpak tuning, but I got a solid and clean workstation experience. The clutter-free setup that prevents the gradual rot of the system pays for itself.
I like it so much that 2 years ago I jumped to the Fedora CoreOS/IoT thing for my servers. A somewhat deeper learning curve (ignition files...), but I'm very pleased to use a lightweight, Fedora rpm-ostree based immutable OS on metal and virtual machines. The cleanest, most rock-solid Linux server experience (one that actually updates worry-free) of my career.
I haven't seen any description of an immutable distro that has given me the impression that I would want to use it. I install a Debian stable GNOME desktop and some extra applications via apt. Then I get on with my work. I don't have to worry about rolling back to stable environments because updates never break my system.
Silverblue is intended for the opposite of that. It is a single-system-image, mostly immutable root OS. The goal is to *not* tinker with the core OS, and to do your tinkering in containers (toolbox, distrobox, whatever).
Complaining that rpm-ostree took 2 minutes to update 2 packages (rpm-ostree downloaded an entirely new pre-baked filesystem snapshot and applied it atomically) is completely missing the point, as is complaining about SELinux after swapping out a bunch of system-level stuff. This is like the antithesis of the use case for Silverblue, and an author who has somehow never had to build a package for anything but Arch (deb would be much worse) complaining about the difficulty of building RPMs to install/overlay on the host system, which is not the intended use case *anyway*, is silly.
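The intended flow for that kind of tinkering is more like this (the Fedora release is just an example):

    toolbox create --distro fedora --release 39   # throwaway dev environment
    toolbox enter fedora-toolbox-39
    sudo dnf install gcc make                     # installs land in the container, not the host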
> and an author who has somehow never had to build a package for anything but Arch (deb would be much worse) complaining about the difficulty of building RPMs to install/overlay on the host system, which is not the intended use case anyway, is silly.
I can assure you I've built packages for more than just Arch Linux, and that building RPMs was by far the most frustrating experience of all :)
I don't want to tinker at all. I just want to get some work done. That is why I run Debian stable. Back when I first installed Slackware in 1994 I spent a lot of time tinkering. I am through with that now.
Fedora basically works out of the box, but Silverblue is an immutable system that you can modify by layering packages on top of the image. It's really cool, and probably The Right Way™ to handle system upgrades, but it's got a bit of a learning curve.
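Layering itself is simple (the package is just an example); the learning curve is more about deciding what belongs in the image, what gets layered, and what lives in a container or Flatpak:

    rpm-ostree install htop   # stages the package as a layer in a new deployment
    systemctl reboot          # the layered package is available after rebooting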
Every now and then I give it a spin, but most IDEs aren't great at doing development inside of a container just yet, which makes it painful to work with.
I hadn’t considered this. It makes me glad I keep as much as possible within my shell and browser.
In its current state I would recommend Silverblue to two classes of users:
1. Highly technical with a strong background in immutable systems and containerization.
2. Non-technical users whose app needs are completely fulfilled by Flathub.
The latter may require some assistance with initial setup, but if I needed to help a friend or family member maintain their computer then Silverblue would be my first choice.
I use Fedora and everything works out of the box- even Steam, Nvidia drivers from the package manager, and games. This is about Silverblue, a bespoke and immutable variant of Fedora Workstation.
Thank you, I saw it was about Silverblue. It's in the title! I didn't claim Fedora (or in particular, Fedora distributions separate from Silverblue) doesn't work out of the box. I just observed what's in the shared article, which is a lot of steps to get things how they want them.
As a Linux Mint user, obviously I don't know a ton about every distribution and all of the differences! (There's an implicit assumption there, that if you know a lot more, you might choose to customize a less "off the shelf" distribution, though I don't know if that is accurate!)
I'm just glad my Linux installation process didn't involve all the steps that this article included to get up and running and being able to use my computer. To be sure, they have used Linux for over a decade, and like things a certain way, so their pickiness leads to a lot of the tinkering! I'm just saying, personally, I don't want to dig that deep if I don't have to.
I still think it's funny that everyone is pointing out that it's about Silverblue, when it's in the title, and I didn't say anything to contradict that. What am I missing here?
I’ve been running Debian distros exclusively for the past few years. A few weeks back I attempted a reinstall, but I was blocked by a combo of nonfree firmware (that was incompatible with the kernel in stable) and partitioning/bootloader issues in testing.
After a few failed attempts at pre-partitioning in stable, then installing from testing, I gave up and tried Silverblue. Gotta admit, the lack of driver issues out of the box was refreshing!
Like any immutable distro, Silverblue has its quirks, but hardware support hasn’t been one of them in my recent experience.
Yes, though I don’t recall needing any Fedora knowledge to use Silverblue. You can use toolbox containers of any distro, and if you choose Fedora really all you need to know is the package manager is dnf.
I've had the same experience with EndeavourOS (more user-friendly Arch). It's running on an old OptiPlex desktop with an added GTX1650 GPU. I've had basically no issues for the last 8 months, with my dual monitor set-up working out of the box. I've used Arch prior to this for about 2 years but I wouldn't say that stopped me from making some mistakes. Nothing that broke the install for me though.
The author complains that Arch doesn't have a properly tested kernel, pointing to the Intel flickering issue, but Fedora had the exact same issue. That's where LTS kernels can be useful.
Interesting. The whole article reads like a cautionary tale - even someone experienced with managing their Linux system and all has trouble with something that purports to be user-friendly.
I run Linux on a few dozen servers currently and have been using it for at least 20 years.
However, I can’t imagine running it as my primary desktop OS.
Reading these posts about the hassle and battles it takes to get a desktop linux OS running sounds like madness to me.
And the end result is usually not entirely stable, and often involves many tradeoffs like trackpads not working correctly or trying to print causing WIFI to drop.
A good operating system should Get Out Of The Way, so you can work, build, create, explore, play.
Honest question: do you run Linux OS primarily because it is the best OS for you, or do you run it more because you identify with the philosophy and ethos of open source software? (Both options are completely fine.)
> And the end result is usually not entirely stable, and often involves many tradeoffs like trackpads not working correctly or trying to print causing WIFI to drop.
So have you actually tried desktop Linux, or are you working from 20-year-old stereotypes?
> A good operating system should Get Out Of The Way, so you can work, build, create, explore, play.
That rules out Windows, and MacOS is 50/50 depending on whether you stay 100% on the happy path and nothing goes wrong; what are you using?
Seriously. If you aren't on 100% Apple hardware, it gets annoying quickly. Things like scroll wheel acceleration, which gives you the option of scrolling a quarter line or 10 lines at a time. No problem, you can just turn it off, right? Nope, that option was removed a few major releases ago and now you need third-party accessibility software.