Life is too short to depend on unstable software (sidebits.tech)
199 points by mooreds on Nov 13, 2021 | 171 comments


Honestly, I've had better luck on the bleeding edge. Newer Linux kernels are almost always better (and when they're not, the problems get fixed quickly). Wayland is definitely better than X. Newer Gnome is better than old Gnome.

Ruby 3 and 3.1 are definitely better than 2.7. Rails 7 is way better than 6, even in alpha state. Deno is nicer than Node (although the ecosystem needs to catch up). Hell, V8 and then Node led to a boom in web technologies. In fact, I think that with every language, newer compilers and interpreters just keep getting better; I've never seen enough regressions to make me not want to upgrade.

With games, newer kernels and drivers are better. Proton Experimental has Age of Empires 4 working perfectly on Linux what, a week after release?

I really can't think of a time I wanted older technology. I just upgraded to Fedora 35, it's the best laptop experience I've ever had. Ubuntu and others have been stable and good enough, but Fedora 35 is snappy and everything works in a way that's just better. No more slow software centre. No weirdness with snaps. Quicker suspend and awake. Fingerprint sensor works out of the box. Etc...


The thing is that the blog entry mixes up, as too frequently happens, the two commonly used meanings of "stable":

- Rock solid/reliable/"good"

- Not changing

And they are completely independent!

- Windows 95: It's definitely not rock solid. But is it "stable"? Sure! You will have a hard time finding something more "stable". When was the last time it received an update or changed? It's "stable" (unchanging), but not "stable" (reliable).

- RHEL: Rock solid? I don't really have a lot of experience with it, but let's say yes. And its whole business model is about changing as little as it can. It's "stable" (reliable) and "stable" (unchanging).

- Fedora: Rock solid? Let's say yes. Does it change? All the time. It's not "stable" (unchanging), but it is "stable" (reliable).

- Linux in 1991: No idea. But I'm guessing it was crashing all the time. And it surely was changing fast. It was not "stable" (reliable) and not "stable" (unchanging).

"Stable" (unchanging) software can be not "stable" (reliable) on purpose: you may want to avoid change so badly that you keep the bugs, because people rely on that buggy behaviour!

Not "stable" (unchanging) software may have shitty QA and not be "stable" (reliable); or it may be so well tested that, even changing all the time, it never fails.

If I have a contract with the government for software that needs to provide a service for the next 5 years, I surely will target RHEL X. Not because I think it's especially good, but because I don't want to find myself in court arguing about whether the contract says I need to keep supporting them every time they update the OS. I will deliver something that works, better or worse, today, and once accepted... it will keep working the same, bug for bug, relying on the same CentOS X bugs, in 5 years, because the underlying system has not changed a bit.


However, keeping up with the bleeding edge is expensive: it requires continuous maintenance and a lot of effort to sift out fads. Python is a great example of where the bleeding edge is anything but stable, and Python projects decay really quickly. I've seen 2-year-old Python projects in the machine learning field that just can't be run any more as a practical matter, because the dependencies have changed too much.


That's in large part due to the garbage fire that is the python ML ecosystem... Python projects that are not ML are pretty stable IME.


Changing too quickly might be bad, but a certain level of decay is good.


That sounds like a Python specific problem...


I agree with this mostly, but the new GNOME releases are almost universally a design regression IMO. Pretty much everything that they added was doable with extensions in 3.38 (most of which are now broken with GNOME 40), and the only substantial changes that I can discern are the shift to libadwaita and forcing people away from things like custom stylesheets and shell extensions, which were the only things that made GNOME tolerable for many people, myself included.

Combined with the overall hostility and 'my way or the highway' mentality that's been pervasive throughout the GNOME team, I didn't feel too bad dumping their DE for KDE Plasma. I respect the constant desire to improve things, but their refusal to kill sacred cows and their infatuation with destroying people's workflows don't really inspire me to spend more time with their software. It's especially ironic when you consider that they recently re-wrote their CoC to be vague enough for the developers to systematically silence anyone who makes feature requests they disagree with, or posts bug reports for stale issues. If this mentality continues to spread across the rest of desktop Linux, it will probably be dead in the water. It's no surprise to me that Valve decided to eschew all that GNOME drama with the Steam Deck and just shipped it with Plasma. Qt is a genuinely terrible toolkit by many metrics, but at least it doesn't have developers that call you a fascist for enforcing a custom stylesheet in their open source software.


I still see GNOME as the most practical DE to run - but it's telling that absolutely nobody uses it as shipped: at minimum they install dash to dock or dash to panel, re-enable minimise buttons, and install a system tray extension.

The project seems to usually come around in the end, and it reminds me of a quote, critical of US foreign policy I think: "You can always count on the Americans to do the right thing - after they have tried everything else".

Perfectly practical solutions to problems are proposed on bug trackers, and dismissed with demands for impossibly strict, well-justified use cases, when the current behaviour was never subject to such a high bar. Then, years later, the obvious fix is quietly implemented, after the sheen has worn off whatever sacred cow of "the right way to do it" was blocking the obvious, impure, against-the-vision fix from being considered.

Of course nobody can rightfully expect work to be done on a FOSS project even if their suggestion is a brilliant one. So a response like "Patches welcome" or "Yes we'd like that too but this is on the backburner and not a priority" would be fine. Instead the response too often is along the lines of "that is not in our vision and there is no use case".

I've also infuriatingly gotten a "patches welcome" response before, and then simply faced the "that's not in our vision" response upon providing a patch. It's not that there was a problem with my patch (there may have been, but the conversation didn't get that far), and I got the feeling that "patches welcome" was just a way of kicking the can down the road to make me go away. I'm now very hesitant to provide patches to GNOME projects unless I can see that the core developers are explicitly in favor of a change in advance.

Honestly, it's a few bad eggs in the project and not the majority - but it leaves an extremely sour taste in my mouth as someone who has been participating in FOSS for a long time.

And on the CoC discussion, I can't help but notice that the most abrasive participants seem to be the ones most likely to have rainbows and pronouns in their bio. Hypocrites.


"I've also infuriatingly gotten a 'patches welcome' response before, and then simply faced the 'that's not in our vision' response upon providing a patch."

That has been my experience on quite a few FOSS projects. I had to learn that the hard way (on other projects, not GNOME). Some are more conservative than others, some may delay making a decision until after they get a patch, etc. Ultimately, since it's your time, you have to do your research before you decide to spend months working on a patch or however much time you plan.

Since GNOME is a large umbrella project, it can vary between sub-projects. I have noticed the ones that are popular and user-facing tend to have a much higher bar for contribution and are much more "cathedral-like" in their design approach, and they kind of have to be if they want to have any kind of semblance of stability. So I don't think it really helps much to blame any project for this or call them "bad eggs"; it's their choice of how they want to do things.


I doubt stability was their intention. They completely destroyed 3.38 compatibility, and all the poor bastards like me who decided to update their distro got a completely broken desktop. And I mean completely broken, too: my wallpaper was gone, every single extension broke, apps would no longer appear system-native when I defined a stylesheet, and GNOME apps themselves looked inconsistent and glued-together.

I really recommend reading the blogs of Tobias Bernard and other GNOME representatives. Not because they're insightful, but because the self-righteousness that they use to talk about fairly personal and benign things completely clouds their view of the community. People complain about inconsistent apps: his solution is to destroy the theming ecosystem instead of fixing the shit that's wrong in Adwaita. People complain about broken extensions: his solution is to tell people that extensions are dumb, and you shouldn't be using them anyways. People complain about a toxic working environment: his response is to draft an even more exclusive CoC with vaguer rules and no precedent for enforcement. There are so many of these little anecdotes that it blows my mind they even have a desktop left at this point.

Like you say, it's free software, so I can't complain too much. Nevertheless, it's wild to see how a perfectly functional desktop has lost the majority of its beloved features with nothing more than a few philosophy adjustments.


Sorry that was your experience, but you should probably know: stylesheets and extensions have never actually been considered stable. I don't know what distribution you were using, but if you rely on those things, it would probably make sense to wait until they're fully tested before upgrading. If the extensions/stylesheets you use are complex, then sadly it will take a lot more time to update them. Yes, I know this thread is trying to make a comment about unstable software; there are plenty of "stable" desktops you could use, but in my experience those have a lot less feature velocity than the big "umbrella" projects. So it's really a trade-off: it seems that at some point you will have to sacrifice some of those beloved features if you want to prioritize something else, such as stability or portability or design choices or anything else really.

I've read the blogs and I don't really agree with any of your comments in the second paragraph, especially the part about destroying themes. I'm currently developing against libadwaita, and one of the benefits that it (already) brings is much better theming support. I also have never actually seen the code of conduct enforced on anybody; most people I interact with are very respectful. Although, if you have something you want to ask Tobias, you might consider asking him directly; I can't speak for him. Just please remember to be kind.


> I still see GNOME as the most practical DE to run - but it's telling that absolutely nobody uses it as shipped: at minimum they install dash to dock or dash to panel, re-enable minimise buttons, and install a system tray extension.

And all those things can be done easily, intuitively, through a GUI. Kind of a testament to Gnome if you ask me. I don't mind the default behaviour though. Minimalist and nice.


Only if you are in the know, and aware that dash to panel, dash to dock, and the one system tray extension maintained by the Ubuntu developers are the ones you want. Otherwise you get one of the million broken extensions.

And this is after figuring out how to install a browser extension (from the browser extension store) and the "connector" library from your package manager, to be able to install shell extensions from the browser. I wouldn't call it intuitive at all.

And the re-enabling of minimise buttons is in gnome-tweaks, which you have to install separately. I'm used to that, but it's a bit weird to put such crucial functionality in a separate settings app.
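
For reference, the whole dance on an Ubuntu-family system looks roughly like this (a sketch; package names vary by distro):

    # browser integration for extensions.gnome.org (plus the add-on inside the browser itself)
    sudo apt install chrome-gnome-shell
    # the separate settings app that hides the minimise-button toggle
    sudo apt install gnome-tweaks
    # or flip the window buttons directly, without gnome-tweaks:
    gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'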

Good thing Ubuntu sets these things up by default for their users.


Right, but I feel like you're missing the point of the article, since the latest version of something as long-running as the Linux kernel is unlikely to be flakey.


It's not that simple.

I've been using pure Arch Linux for ~8 years by now, and for me it's the best-suited distro.

BUT twice, after a kernel update, my system didn't boot because of some bug in the kernel. Sure, not a problem for me: I can just boot the recovery system, downgrade the kernel temporarily, and all is fine (and the bug gets fixed fast).
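
(The downgrade itself is just reinstalling the cached package; a sketch, with an illustrative version string:)

    # from the recovery system or a chroot:
    sudo pacman -U /var/cache/pacman/pkg/linux-5.14.14.arch1-1-x86_64.pkg.tar.zst
    # then hold the package back until the fix lands, via /etc/pacman.conf:
    #   IgnorePkg = linux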

But still, even the latest stable release of the Linux kernel sometimes has problems (though those were the only problems I ever ran into with the kernel, and they were at the vendored-EFI<->Linux boundary; now that I think about it, it might have been EFISTUB, so maybe not even the kernel).

Anyway, in general I have fewer problems with software not working on Arch than I had on Ubuntu.


Lol, the Linux kernel is super flakey. But it moves quickly enough that things get fixed when it does flake out.


I have the same experience; my Linux experience (as a user) improved by an order of magnitude when switching to Arch, which has the same bleeding-edge philosophy.


I feel like there's a balancing point though. I've used Gentoo, Arch, Ubuntu, Debian, Fedora, RHEL, SUSE, and CentOS. I've found the most stable and least problematic distro to be Gentoo. After that was Arch, and then the rest, basically sorted by average package age.

I think there's a trade-off to be made between bleeding edge and stability, and I find that Gentoo tends to hit it on the head. I've yet to have the system break anything for me without explicitly warning me beforehand, providing me a full mitigation & migration plan, and then requiring me to explicitly continue.

Arch has been pretty stable in the past but I've had problems come up numerous times due to the bleeding edge philosophy.

Point being that you should really only be going as bleeding edge as your community can reliably audit and provide support for. I think Arch toes that line most days and sometimes hops past the line. Gentoo however tends to stay a few steps back from the line and gets close to it but never actually steps over it. And then most other distros are sitting comfortably half a mile back.


Hmm, oddly enough, I've found Debian stable to be the most stable OS for my uses. That said, I do more writing and email than I do programming, so perhaps library dependencies aren't such a big deal for me. If I can run tex-live, emacs, pandoc, along with GNU standard utilities, email client, web, and music, I'm good.


Yeah but that's also the disadvantage of the Linux ecosystem. Either you're on the same version as the devs or you're screwed.


Same experience here. Everything is just better when I'm running the latest software. My system is not "unstable" either.


Counterpoint: when using a system with hybrid graphics which required me to use proprietary video drivers, switching to the latest Fedora version often brings issues with suspend, or random freezes, or lack of external HDMI output... After a few months, the release stabilizes and my issues disappear. But whenever I tried to be optimistic and jump to the latest release too soon, I got bitten by it and wasted hours trying to find workarounds.


No argument from me there. Even on Arch it takes a while for the nvidia updates to arrive precisely because of these issues. Proprietary nvidia drivers have such a poor user experience.


I do like to think that the effort put into software development with each iteration is actually making software better: optimizing performance, fixing bugs, closing security vulnerabilities, etc. So I've been happy with Arch-based distros and most Windows updates, at least. When I take the long-term view, I'm amazed by the improvements in software since I started using a computer in the 1990s. Many more things "just work" now (e.g. Linux on a laptop). Home computer operating systems and applications can run for years now without breaking, whereas reinstalling everything (e.g. the cesspool of Windows 95/98/XP) was almost a monthly requirement back in the day. An operating system that used to take 5-10 minutes to fully boot now takes only seconds on a typical machine.

I know that this isn't always true. Sometimes it's just feature bloat, or a design change for the sake of design change, which introduces more bugs and vulnerabilities.

Maybe what I really want is the newest version of the software that I already have, as opposed to the newest software.

At work we still use Red Hat Enterprise Linux 6 on some mission-critical systems. So that's version 2.6.32 of the Linux kernel with equally old applications, and I'm not sure what we get for updates from the vendor in 2021. The problem is that there are recurring bugs and serious vulnerabilities in that system which will never get fixed at this point. You could argue that we know what the bugs are, at least.


Tbh, this dichotomy is kind of Linux's fault. Consider the issues Linux faces today, or that were only recently resolved.

With no attempt at being exhaustive (as I'm not a huge Linux user nowadays):

- Wayland is just becoming production ready, imo, it's still years away from being trouble-free

- Gnome is still a resource hog, Gtk is not in a good place, KDE is still prone to crashes

- Installing software/dependency management is still a headache - the distro-agnostic package management solutions (Snap/Flatpak/AppImage) are ironically anything but - apt and its ilk have known issues as well - the final solution in this domain is still unclear

- Imo it was proven that PulseAudio won't work - its replacement, PipeWire, is still just becoming mainstream

All these issues - software installs, accelerated desktop, audio that just works, stable desktop libs - are things that both Windows and macOS have had for literally more than a DECADE (some for almost two), allowing people to be extremely conservative and stay on Win7 to this day. State-of-the-art desktop Linux is still a long way from having all of this stuff working.


I wish someone would try to refute what I wrote instead of just downvoting it.


Why? The onus of proof is on the one making assertions. You seem to feel you're asserting something that's common knowledge, but it isn't, and the downvotes prove it. I responded to your comment anyway.


> Wayland is just becoming production ready, imo, it's still years away from being trouble-free

Literally have had zero problems with Wayland.

> Gnome is still a resource hog, Gtk is not in a good place, KDE is still prone to crashes

Resource hog? Chrome with a single tab open is roughly equivalent, in resources consumed, to everything Gnome-related on my laptop. Steam takes almost as much memory just idling (window closed). Gnome Shell (the most resource-intensive Gnome thing) itself is just 134 MB, which is nothing when an entry-level laptop has 8 gigs of memory.
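
(Easy to check on any machine; RSS here is in kilobytes:)

    ps -C gnome-shell -o pid,rss,comm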

I have a pretty basic laptop (Acer Swift 3 with Ryzen 2700u). Only 8 gigs of memory. Fedora with Gnome, with Chrome open, Steam running in the background, some other random stuff, still only takes like 30% of that memory.

Compare that to my old work laptop where random Windows crap consumed so much memory I'd have to restart just to be able to open up MS Teams SMH... (I don't work there anymore thank god)

As for KDE, I personally don't like KDE so no idea if it crashes or not. And what do you think is wrong with GTK?

> Installing software/dependency management is still a headache - the distro-agnostic package management solutions (Snap/Flatpak/AppImage) are ironically anything but - apt and its ilk have known issues as well - the final solution in this domain is still unclear

Versus Windows or Mac, where you have a store but everyone just ships random EXEs or whatever? All that matters to the user is that there's a distro-packaged app for nearly everything, and Chrome provides .deb and .rpm packages for all major distros. It seems like a mess to developers, but it's actually not that big a problem, and it's still better than Windows or Mac, where despite the store most software is random installers you download from the internet.

Take Fedora for example. If you're a normal, non-dev user, you can download a Chrome .rpm. Install it by double clicking and following prompts. Almost every other piece of software is in the app called 'Software'. Hell, Steam is there.
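
(The CLI equivalent is a one-liner; the filename is what Google ships:)

    sudo dnf install ./google-chrome-stable_current_x86_64.rpm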

> Imo it was proven that PulseAudio won't work - its replacement, PipeWire, is still just becoming mainstream

PulseAudio works fine but for some reason some Linux 'users' mess around and break it. It's literally always worked perfectly for me.

If you use a sane distro (Fedora, Suse, Ubuntu) and don't fuck around Linux works great.

On Fedora 35, everything you mentioned works. All my function keys work. Fingerprint sensor works. Suspend/Resume works. My USB-C headphones work and switching sound output is automatic. There's nothing that doesn't work perfectly. Gnome is light and snappy (as I said, roughly equivalent to Chrome with 1 tab, this page, open). Every piece of consumer software I installed, I did so through the GUI. The only time I touch the terminal is for dev shit.


While I agree with you, I sincerely see no real advantage in Wayland over X. Steam does not work properly with Wayland, and client-side decorations are stupid and broke the decorations of Kitty and Alacritty (requiring me to work around it).


> Steam does not work properly with Wayland

How so? I'm running Wayland and running Steam. Everything works perfectly.


I can open a game but the interface freezes after a few seconds. I'm using the beta.


Mostly the same here, but in fresh environments instead of continually upgraded ones.

So local virtualization a la Docker and Vagrant, with their isolated environments, has been a godsend


In theory we all mostly agree with this: stable, well understood software is to be preferred.

In practice, it's not true that most businesses or teams want newer software just to be "on the bleeding edge". The bleeding edge is not a goal on its own. What usually happens is that you need a feature (for actual business reasons) that is not available in the older version of the software you are using; or there is a serious bug that is only fixed in the "bleeding edge" version and is nontrivial to backport.

So you often have two choices: make the change yourself in the stable version (risky, time consuming, and can it be considered "stable" anymore once you mess with it?) or move to an unstable version (risky, new bugs).

And that's assuming the software is open source; if it's proprietary you have even fewer choices...


For personal use, here's the same thing again. Say a Linux user wants to play a game. The stable old version of their distro doesn't play well with the libs/drivers needed to play the game.

So the user must install a newer, less tested, distro. But the goal was not to be "on the bleeding edge" for its own sake; it's playing the game, and there's no other (easy) way.


This is a flaw in Linux Desktops' choice of application management paradigms, which insists that everything be tightly coupled and managed. It is entirely possible and reasonable to have a stable set of base system libraries everyone can depend on and otherwise applications must bring their own.


It's not just possible and reasonable. It's how literally every other platform works.

It's also how the Linux Standard Base worked. It was intended to be a stable well-defined backwards-compatible set of libraries common across distributions. Of course the LSB had its share of problems but they had the right idea to bring stability to Linux as a platform for binary apps.


Eh, it doesn't really work for volunteer-based projects. It's already hard to find people who want to fix bugs in open software; it would be even harder to find someone willing to fix them in a version 10 years old and then have them verify it works in all possible cases without regression.


NixOS manages this: it's entirely possible to use the stable channel for system packages but subscribe to the unstable channel for packages installed per-user.

Of course, NixOS also has atomic upgrades and rollback, so there's not much risk in just running unstable everywhere.
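
A minimal sketch of that split (channel names as of late 2021):

    # system-wide: track the stable channel (as root)
    sudo nix-channel --add https://nixos.org/channels/nixos-21.05 nixos
    sudo nixos-rebuild switch --upgrade

    # per-user: track unstable for your own packages
    nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
    nix-channel --update
    nix-env -iA nixpkgs.htop   # htop is just an example package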


Agreed whole-heartedly, and this is one of the reasons I love the FreeBSD model. My ideal Linux distro would be the inverse of Debian k/FreeBSD – a Linux kernel with FreeBSD kernel interfaces provided by a loadable module, and a FreeBSD-style userland.

Might be possible soon, now that building Linux with clang is supported.


Huh. There are already others implementing the Linux ABI, but I can only think of NetBSD rump kernels and some Plan 9 thing...vx9? going the other way. No reason it shouldn't work.

Edit: Oh, and Darling runs Darwin binaries on Linux, which isn't quite a BSD but is non-Linux


People focusing on nitpicking my Linux example: it's irrelevant. In general the principle holds: sometimes, to get a feature you need, you must install unstable software. Nobody wants instability for its own sake; what they want is the feature.

I fear my Linux example may have led people down the wrong rabbit hole. Linux is irrelevant in this context.

I could have used Windows. Sometimes, in the history of videogames, you needed a newer version of Windows. If you wanted to play the game, you needed to install this version.

Or DirectX, or whatever lib. Pick your poison. Nitpicking particular examples is missing the point.


Even on Windows, you generally have to install the very latest GPU drivers when an AAA game comes out, though


Exactly. People focused too much on my Linux example (in hindsight, I should have foreseen this), when it's a problem of all software.

Not just OSes, everything. You want a feature or bug fix that doesn't exist in the old stable version, you need the next one, so you upgrade.


Sure. But I don't have to upgrade my whole userland from Win 10 to Win 11 for hardware compatibility. I don't have to upgrade my core OS to run a new version of Lightroom.


Throughout the history of Windows I've had to upgrade the whole OS in order to run some software or play some videogame, so I'm not sure what you mean.


> Sure. But I don't have to upgrade my whole userland from Win 10 to Win 11 for hardware compatibility. I don't have to upgrade my core OS to run a new version of Lightroom.

uh... yes, you do, all the time. Lightroom currently requires Windows 10 1903, a 2-year-old OS. Most likely, in a very few years it'll require Win11.


I'm not sure what you mean; this is exactly what Flatpak and Snap were meant to solve. IIRC Steam also bundles older copies of the libraries it needs.


No, you misunderstand. You need new libraries to make your new hardware work. You can't use versions of driver libraries like Mesa that are older than your hardware, and Mesa has a ton of dependencies like libstdc++ and LLVM so you can't use old versions of those either. This is a major problem for Flatpak.


I don't see why that's any bigger a problem than anything else; flatpak includes mesa as part of the SDK: https://docs.flatpak.org/en/latest/available-runtimes.html#f...

If there ends up being a problem with libstdc++ and LLVM, it's not hard to statically link those, if it's not being done already.
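
(You can inspect and pin runtimes explicitly; the branch name here is illustrative:)

    flatpak list --runtime
    flatpak install flathub org.freedesktop.Platform//21.08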


It is nowhere near as simple as you make it out to be.

Yes, the freedesktop runtimes ship extensions with newer versions of Mesa and its dependencies. This doesn't entirely solve the problem. For one thing, libstdc++ before GCC 5 did not maintain a backwards-compatible ABI, so if the app was compiled too long ago it won't work with a new libstdc++. Steam is now working around this problem by using dlmopen() namespaces to load different versions of libstdc++ into the same process. Flatpak is not there yet.

For another, NVidia drivers complicate this even more. The NVidia client-side library must exactly match the version of the loaded kernel module, which Flatpak can't control. So NVidia drivers are broken out into yet another runtime extension, and since Flatpak can't package these drivers due to licensing issues, it dynamically downloads NVidia drivers to generate an extension on the fly. NVidia drivers also depend on libstdc++, by the way; another reason why static linking doesn't magically solve the problem.

On top of all this is just the massive complexity and maintenance burden of keeping it all working. All these runtimes with all their extensions: somebody has to keep updating them, and when a runtime is deprecated, all of the software that was built for it is defunct. All of this could be avoided just by keeping libraries backwards compatible and building for native Linux, not Flatpak or anything else.


I'm still not sure I understand, it sounds like a solution exists and both Steam and Flatpak are working towards it. I don't see why nvidia can't also do the same things.

I hope you can see that "keep libraries backwards compatible forever" is not really a good option either and is probably orders of magnitude more work than just doing all the things you said. In some situations, it is also impossible: if there are bugs in the API contract then it has to be broken eventually.


The "solutions" are hacks at best. This is not the way to build stable software.

> I hope you can see that "keep libraries backwards compatible forever" is not really a good option

??? Why would I be able to see that? You've given zero explanation or evidence for why that would be the case. I see a whole lot of people in this thread in addition to the article explaining why backwards compatibility is good. Nobody is giving valid reasons as to why it's bad.

Microsoft has managed to keep the whole Win32 API compatible "forever". GUI apps built for Windows 95 still work out of the box on Windows 10. Backwards compatibility is a major part of why they are still the dominant platform: businesses actually care about this. They use ancient proprietary software that is critical to their business whose source code has long been lost to the sands of time. A platform that breaks their software is no platform at all.

> probably orders of magnitude more work than just doing all the things you said.

Really? How hard is it to not break things?

It's sometimes more work to add new features or support new hardware without breaking the ABI but clearly it's feasible. glibc 2.1 was released in 1999 and the maintainers decided at that point that they would preserve backwards compatibility forever. We're now at 22 years without a major ABI break. There have been some hiccups of course (the memcpy() fiasco) but they've been fixed.
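
You can see that machinery directly: glibc keeps the old versioned symbols around forever, which is how the memcpy() fiasco was contained (the library path is distro-dependent):

    objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy'
    # typical output shows both memcpy@GLIBC_2.2.5 (the old semantics)
    # and memcpy@@GLIBC_2.14 (the current default)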

The GCC team has decided to follow in their footsteps: since version 5, they're not going to break the libstdc++ ABI anymore. The culture of backwards compatibility is finally growing on the Linux desktop. This is a far better solution than Flatpak.


AFAIK the Win32 API is kept backwards compatible by doing exactly what we describe: shipping older versions of the system libraries and automatically using them when it's detected that an application needs them. So it's the same thing you call a "hack". Please don't misunderstand, I'm not saying backwards compatibility is bad. But it does cost a non-trivial amount of money and time; it's not a magic solution that reduces the maintenance cost of something to zero. If you're doing a cost comparison, that always has to be taken into account.

Glibc isn't really a good example; it has a ton of unfortunate broken APIs that should probably be removed entirely (the most notorious example probably being gets) but never will be, and I suspect they will continue to be a source of bugs as long as applications use them and aren't patched. I mean, the whole reason musl exists is to get away from some of these maintenance issues in glibc.


To be fair, the equivalent of libstdc++ on Windows broke the ABI on every MSVC release until very recently.

The difference is that Windows applications historically shipped with the appropriate version of the C++ runtime bundled in (and there was no guarantee that one was provided by the OS), while Linux apps usually rely on the system .so.


The C++ standards committee regularly debates breaking the ABI. They forced it for C++11; who knows if they will again.


> NVidia client-side library must exactly match the version of the loaded kernel module

Sure about that? I'm pretty sure I've run Docker containers with GPU stuff compiled for versions different from my driver's. Their nbody container image, for instance, runs every time I set up the nv container runtime with Docker, regardless of driver version.


NVidia has special support for making their drivers work in Docker containers:

https://docs.nvidia.com/datacenter/cloud-native/container-to...


Flatpak didn't always exist.

If my Linux example misled you, let me rephrase: can you run modern AAA Windows games using good old Windows XP? If you can't, you've hit exactly the problem I describe.

It's a problem of software in general, and flatpak doesn't solve it. Flatpak itself may be subject to it!


A huge dimension concerning the decision on using "unvetted" and/or "cutting edge" technology is how MISSION CRITICAL the system you are creating is...

Building a new social media app as a startup? Depends on the data you're storing for users and how you market the stability of the system to your user base.

Building a new Government healthcare system? You better use properly vetted technologies.

This includes using cloud service providers as well.

Some systems simply need to be old school. Old-school tech relies on structured data, which can prove better for security and for testing. Methods that have been in place for years are not only more reliable; the ways of fixing problems when they occur are well documented as well. Countermeasures to security threats are also well documented for older solutions. Yet we also have to acknowledge that the Internet is still a relatively new thing for business and commerce, so things these days are often declared "Legacy" by companies and individuals selling alternative solutions as part of the "new money marketing" pipeline, not because they are truly "out of date" or "no longer viable"... I am not defending nor advocating COBOL or mainframe systems with that statement, though (just to be clear).

Newer concepts/solutions like blockchain, unstructured data, and even cloud hosting are vetted to an extent, but they introduce very new threats to the stability of mission-critical systems, and they are not perfect solutions. These newer solutions also, by nature, dictate costly refactoring that locks buyers into platform-specific situations they can't easily migrate back from if the ideas don't work out, and compromise of data integrity or security for mission-critical systems is more costly than ever as data builds...

Not every solution should enlist "cutting edge" technology as its backbone. A gradual approach may be a more reasonable option, like introducing new technology in "siloed" and/or "smaller" aspects as a part or feature of a traditional system before a complete refactor, for example.

There are some really good reasons why COBOL programmers still get paid a lot of money to this day, even though I am not one mind you.

Choose wisely my friends.


> There are some really good reasons why COBOL programmers still get paid a lot of money to this day, even though I am not one mind you.

I haven't found this to be true. It's one of those things that gets repeated, but as a former COBOL programmer, let me tell you that's not where the money is.

I agree with the rest of your comment.


> Building a new Government healthcare system? You better use properly vetted technologies.

Is there any evidence these systems are more stable and dependable?


one needs bleeding edge + some form of retrocompatibility

people deployed new machines with new stacks at the building I'm in; most things are better, except for a few conflicts which make some tasks 3x slower

people don't care about these things unless they cost them too much


How badly do you really need that feature? Why did no one need it a couple of years ago?


Because sometimes "stable" software has bugs.

A true story about one of my websites: it runs on Debian Stable, because I like stability and, at the time, Debian was the OS I was most familiar with. It also does a lot of image manipulation, for which it uses ImageMagick.

In March of 2018, I discover a bug in ImageMagick: if you perform various hue/saturation modulations, sometimes pixels just turn "black" for no reason -- essentially it looks like someone sprinkled sand on the image. Reported here [1]. Apparently some code ends up with a divide-by-zero error. The good news is that the bug is fixed within a day, and is released to the beta version one day after that.

My website is quite literally built around image layering, manipulation, and generation. My users are experiencing what looks like sand thrown on their images every day. So what do I do? Do I assure my users that stable software is actually good and they should just sit tight for a year or two (or more?) until a version containing this patch hits the Debian Stable repos? Do I rewrite the core of my application to replace ImageMagick? Or do I update to run some unstable software?
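
(For context, the triggering operation is an ordinary modulate call; the values here are illustrative:)

    # brightness,saturation,hue as percentages of the original
    convert input.png -modulate 100,150,100 output.png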

[1]: https://legacy.imagemagick.org/discourse-server/viewtopic.ph...


You could have just built IM yourself? No need to switch release channels for this.

https://imagemagick.org/script/install-source.php

I used to have a similar version-freshness issue with ffmpeg on Ubuntu, for a video-encoding system I was running. Turns out that building ffmpeg isn't actually that hard. :) Later, I switched to using Nix as a layer over the distro; then I could just build ffmpeg once on my build system, and push the "closure" (the app & all its dependencies) to the other nodes in my encoding farm.
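
The classic dance, straight from the linked page (prefix and version are your choice):

    wget https://imagemagick.org/archive/ImageMagick.tar.gz
    tar xzf ImageMagick.tar.gz && cd ImageMagick-*
    ./configure --prefix=/usr/local
    make && sudo make install
    sudo ldconfig /usr/local/lib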


Technically you're not wrong, though I will point out that it'd still be running the bleeding-edge unstable ImageMagick by definition.

But even so, in practice, building ImageMagick yourself, along with all the configuration and integration into PHP (yes, it was a PHP site) on a production webserver, is a much bigger lift than that. And then you have to maintain it manually. Arguably this is a much less stable result than running a bleeding-edge Debian install where you just `apt-get install php-imagemagick`.


Fair. Re: bleeding edge, my point was that you could keep the rest of your system stable -- building one component yourself lets you make that tradeoff.


Yup. Both of those options were bad enough that I found a really janky [0] workaround to buy me the time in which to rewrite the entire image generation stack. I ended up writing it in Rust, and it's worked beautifully ever since. My implementation was approx 4x faster than the ImageMagick version, and afforded me flexibility and features not implemented in ImageMagick, so it's been great.

[0] https://legacy.imagemagick.org/discourse-server/viewtopic.ph...


That janky (but clever!) hack sounds like something I'd do... and then leave running in production for 10 years. :)

Well done for writing your own solution!


You could backport the patch; that's exactly how Debian achieves stability.
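
A sketch of that workflow (assumes deb-src entries are enabled and devscripts/quilt are installed; the patch filename is illustrative):

    apt-get source imagemagick
    sudo apt-get build-dep imagemagick
    cd imagemagick-*/
    export QUILT_PATCHES=debian/patches
    quilt import ~/fix-modulate-divide-by-zero.patch
    debuild -us -uc
    sudo dpkg -i ../libmagickcore*.deb   # exact package names vary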


Doing so myself is the antithesis of depending on stable software.

As someone who maintains a PHP website in my spare time, no, I'm not going to go backport a patch to a C library -- a language I have a passing familiarity with -- which may depend on other updates that have occurred between now and the last Debian Stable release, and which may, if I do it wrong, compromise the security and stability of my entire system.


If you build IM yourself, then you are no longer using the old, stable IM version and you've hit the problem: you're on the cutting edge, and also running nonstandard software!


You can choose what version of IM you want to build. There may be several releases more recent than Debian Stable!

I'm not sure what your objection to "nonstandard" software is. I write software for a living -- all of my work is "nonstandard" by definition, but it runs just fine. :)


It's not an objection. It's an explanation of why people install unstable or newer versions of software; a counterpoint to the article.

Applying your own patch to a piece of software makes it nonstandard and unstable (because it may introduce new bugs, like any change), the very thing the author of the article fears.

See my initial comment here.


> How badly do you really need that feature?

Very badly. New regulations require this feature or we risk fines. Also we need to support some new hardware. Also we finally discovered a longstanding bug that was losing us money. Also, the business opened a new branch that needs this.

> Why did no one need it a couple of years ago?

The regulation didn't exist. We didn't understand the bug. The new hardware didn't exist. The new business opportunity hadn't been thought of yet.


Most of the time, the people who need the feature are not the same as the people who will be in charge of developing it. I also doubt most people have the leverage to convince the business that they don't need the feature.


But what did they do about it a year ago? They didn’t have the option of using an unstable version (because the feature still wouldn’t have been there), so what happened? Did they go out of business?

IMO, the “bleeding edge” is just overly tempting and people need to learn to resist it. It’s hard to know for sure when we’re speaking in hypotheticals, but I think that in most cases, the trade-offs aren’t being weighed accurately.


Seems like you're arguing that if they didn't go out of business last year from lack of $feature, then they never will.

Imagine you run a company selling things and can't offer next-day shipping and none of your competitors does either. The company you use for deliveries announces they support next day shipping now but you have to add a priority flag to requests so they can schedule it differently, and they released a new API client to support that. Your competitors all start offering next-day shipping and you don't. You ask your employees why. The development team says the new API client is unproven and the one you have is stable and you didn't go out of business last year so you probably just have no self control and are demanding the latest shiny and are weak and impulsive.

Would you be OK with that reasoning? Or with literally any reason that ended "so we're not offering next day shipping and will definitely lose customers to our competitors because of it."? Or would you start looking for a way you can take the change and isolate it? "We'll run next-day shipping requests through a different queue and watch it more closely", etc.


They endured it.

People didn't have personal computers or cell phones a century ago. What happened? Did businesses go extinct?

The entire digital empire that we enjoy today is built on people's wants. Some things are wanted so badly, that we are willing to get our hands dirty and actually create new things. It's not hard to imagine that people can want things badly enough to try out those new things.


Maybe they needed it, but the devs only had time for it last month


People often conflate stability with stagnation.

When you have poorly written software, then old versions of it are "stable" only because people have already learned how to live with and work around its obvious bugs. Updating means learning to live with new bugs, which is seen as instability, but it's just the same software again.

OTOH if you're dealing with reliable software with a low defect rate, then all versions are "stable". Updating it isn't painful, and you can expect it to improve things without introducing new bugs.

For example: autotools is "stable", but in the stagnant sense. I use it by googling weird error messages. OTOH, cargo is "stable" in the well-tested sense: the newer the better. I can use its nightly build and expect it to work with every one of my projects.
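
(Trying it on a project is two commands:)

    rustup toolchain install nightly
    cargo +nightly build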


Would you call TeX or Bash stagnant? They have a few commits per year. Some projects are like Michelangelo's David, in that they arc toward completion. At that point the game usually becomes inventing new stuff on top, like LaTeX. It's a good model for a gift economy like open source, and less disruptive than things like the Python 2 to 3 transition.


Users of only "stable" FOSS releases almost never contribute: not in code, nor to its health or longevity. They are leeching off it, sometimes over decades.

They don't contribute to testing either, since they just consume releases and ignore development, then wonder loudly why there are bugs.

Don't be that guy... if you want your dependencies to have a long life being maintained, find some time to contribute. Leeching is not contribution.


I don't bother filing bugs or sending PRs any more. I spend hours filing a detailed bug report: no response. I spend hours crafting a PR: ignored. I file a bug report along with several possible solutions: I get a snarky response totally unrelated to the proposed solutions.

But I get it. Maintainers are overworked and jaded. I'm on the receiving end as well, with 95% of the issues like: "help, it's broken" (with no error message), "please teach me git", "please debug my application code", "please implement this massive feature for me" (for free), "how do I do xxx?" (obviously didn't read the README.md), "your library is buggy" (no it's not, your pointer has run off the end of your array, causing Undefined Behavior).

With all of that noise, it is difficult to figure out which bug reports or PRs are in the 5% that are actually well-researched and valid.


Well put.

A reputation/karma system for bug reports could be an interesting way to deal with this problem.

I wonder if it’s ever been tried.


This is exactly the problem with an overadherence to "stability". If you submit a PR to fix a bug in a "stable" product, it will be a year before it ever becomes "stable", so the incentive to fix it is nearly zero, because you can't have both the stability and your bug fix. (If they just ship every random PR to the stable build, well, now it's the "unstable" build.)


I file bug reports. It greatly depends on the person running the project; some are like you describe, or they're incompetent (particularly problematic in non-FOSS projects). With those, it's usually pretty clear that you're going to have to weigh "how much do I want this bug fixed" against how painful the process will be.

But I've also had maintainers who were very helpful. Recently had to deal with the DST cross-sign expiring, and needed a third-party container to be rebuilt on a later version of Debian. Filed an issue with really nothing more than "Would you mind releasing with a newer version?"¹ and the maintainer had it published within like a day! Greatly made my life easier.

So I really want to separate out those maintainers that are doing a stellar job; certainly some merit your criticism, just not all.

¹I didn't have time then to track down the Dockerfile & figure out the required patch. Might have once my time freed up, but the maintainer beat me to it.


> particularly problematic in non-FOSS projects

In my experience, the products I pay for usually fix bugs faster than FOSS.


I would advise against calling users leeches. A product is meaningless without users, and those users that do go out of their way to file helpful bug reports should be lauded, not disparaged.

The phrase leeching implies an entity is consuming a host's life force in some way (seeders' bandwidth, in the torrenting analogy); simply using the product does not meet this criterion.


Thank you for confirming that "Free and Open Source" does not value stability or quality.

I'd rather pay for a closed program that comes with guarantees of stability and quality than participate in a community that considers these concerns tertiary or off the menu entirely.


This is an absurd sweeping generalization. There are many FOSS projects that value stability and quality. Just like there are many closed source projects that don’t.


The only one I'm aware of that does a good job of this is the Go language project, but it's not really your typical FOSS project; it's a Google project, and Google relies on it internally, so the incentive to make it stable and high quality is as high as it will ever get. They don't sell it to others, but they themselves rely on it as a core part of their own business.


When was the last time the Linux kernel or the gnu utils broke for you?

For me? Can't remember.


Yea, Linux is another good example, but that's really mostly because of Linus' adamant commitment to never break user space. A lot of other people would be happy to break user-space programs. As in this famous rant: https://lkml.org/lkml/2012/12/23/75

For the "gnu" user space programs, I don't use much of them so I can't comment, but for the desktop environment? It's a huge mess. Even with stable/sane environments like xfce, somethings are confusing and brittle, such as getting things setup properly for Kanji/Chinese character input. I've tried it several times, and everytime it takes a lot of effort to get it right, and I always forget what the right steps were. It's not even worth remembering because in a few years it will change.


The Linux kernel and many other big FOSS projects receive a non-trivial number of their contributions from professional programmers on the clock at their day jobs.


What does that have to do with anything?


It's not breaking because the contributions are from paid professional developers with commercial priorities and incentives, with internal or external customers often driving those changes. Not unlike Go.


Oftentimes you can get paid support for stable OSS, and then you actually help the ecosystem by giving it resources


Unfortunately, the "pay for support" business model means the incentive is to make a system that needs support but pretends on the surface to be stable and high quality.

One way to make a system so incomprehensible that your customers absolutely need support is to make it configurable in a million different ways, and make the configuration invisible / opaque.

Example:

"Why is this program doing this when I told it to do that?"

Answer: Because this file in that directory configures service A to do X, and this other file in that other directory configures the program to behave like Y if it detects that service A is doing X.

But the program does not tell you that via its UI; you have to read obscure documentation.


I don't understand, isn't the original complaint here that everything needs support eventually?


Everything should not need support. The fact that most everything needs support is a symptom of how crazy the software industry is.


I keep hearing people say that, but how do you propose we write software that has no bugs and does everything perfectly right the first time? That seems impossible.


The first time? Sure, nothing ever gets it right the first time, but over time software should converge on being bug free and not requiring any support at all. Free-with-paid-support has a perverse incentive against this.


"over time software should converge on being bug free"

Yeah, and the way that's done is by refactoring things, removing buggy/deprecated things, and not adding any more new features/requirements... So, pick your poison, I guess? I'd love to move on to the next job as much as the next person, but somebody still has to be paid to do those things. I don't see what the significant difference is there with free-with-paid-support; if you pay for it up front, you're still paying the same cost.


All I hear is excuses. I'm not interested in hearing excuses.


That seems dismissive and I don't know why you would be hearing that. If there is a way to do this that you think would not qualify as an excuse, then you should mention it, otherwise both of us are stuck to what we are limited to by these bodies that require rest and sustenance.


You're creating a false dichotomy: we can either add new features, or stabilize the current version, but we can't do both!

Yes you can. You can spend some time to stabilize, and then add new features on top, then stabilize again if needed.


I am not saying that's a dichotomy, I mean this purely as a function of resources. Sure you can do both but that takes up twice as much time as if you only did one of them. So however much you want to do of either is really up to the project's management goals and what the customer is willing to pay for...


What other projects besides (La)TeX and METAFONT have noticeably converged on bug-free over time? Perhaps the Linux & BSD kernels.


LaTeX specifically might be bug-free, but it's so incomprehensible to a not-very-experienced user (like me) that it exactly matches the notion of "software where you need support". The only good thing is that you can often get the support for free on the internet. Even though it might be stable and bug-free, I'm just avoiding it for "shinier" things like HTML and Ctrl+P in a browser...


Is it incomprehensible because it can be configured in different ways, or just because the notation language it uses is not familiar to you?

The distinction is crucial.


I'd love to! Did you make sure your FOSS project is easy to build from source? I won't have to spend hours nagging the dev team about mysterious compilation bugs and dependencies? Projects that make this easy are the extreme exception, so if you actually get it right, then yes, I completely sympathize.


It's probably a bit blind to think that fixing issues is the only way to contribute. New feature development is very important too, and it usually does not necessarily require a testing or beta channel.


we believed that continuous integration/continuous deployment (ci/cd) would only bring good. but we started optimizing for metrics such as profit, ignoring actual customer issues.

we started solving issues that are not real customer issues.

people claim that software in the past was very unstable. sure, but let's start putting some context and numbers around these claims, or at least cite such software. in fact, let's do a quick mental exercise for those of us who have been there.

winamp vs. any current music "player".

in quotes, because most modern music apps can go around the entire web but often can't play music. i can't remember ever having any issue with winamp, so let me know if your experience was different.

winamp played "my" music. i don't know where the music these modern apps play is coming from. i must have bought them at some point. sometimes i don't remember when because they use so many user interface dark patterns. so many problems at each release or update these days that i want them to stop releasing.

on the other hand, they can't (or don't want to) solve real long-standing customer issues. every year there is a new round of "missing cover art"[0] issues opened for itunes / apple music.

[0]: https://www.google.com/search?q=apple+music+cover+art+missin...


The big difference between WinAmp and the current state is content availability. While I believe WinAmp is much better than Amazon Music, my WinAmp library was full of "questionable content", whilst I have no anxiety around using a streaming service.

The problem that I have with Amazon Music isn't the occasional need to restart the tab, but the lack of selection. My wife is in the They Might Be Giants fan club, which sends her stuff that isn't available on Amazon Music. I used to be able to upload it, until they removed that feature. And Sonos's software integration with my NAS was lackluster, with a surprising number of "why isn't song X on it?!" issues.

My goal before my wife's next birthday is to set up a myalexamedia on a VM/S3 which I can turn on/off quickly to save pennies!

Content, and integration with either my Alexa speakers or Sonos, are what make quality an issue.


what the continuous idea brought is a never-ending shape-shifting of everything.

maybe android web players are better than winamp, but every month there's a new one which differs a bit, and the others will update and modify lots of things just because they can and there's a new fad.

the experience of 90s software was obviously slower, and riskier too (if you had a bug, you had it until the next service patch 18 months later). but it made people make long-term choices, and as a user you enjoyed a longer trip on the same plane. humanly it feels more fulfilling (even though, technically, i got fewer new features per month than at Chrome's pace, for instance).


> Version history. For projects using SemVer, frequently changing major version numbers is an obvious red flag. Note that many old, stable projects don’t use SemVer.

This matches my experience. Obviously it's impossible to generalize without making mistakes, but I've started to notice that projects that loudly talk about their use of SemVer often break compatibility. In other words, it seems like they treat SemVer as a license to break APIs: now that they have a way to tell the world about it explicitly, nothing can go wrong.

Ecosystems that have adopted SemVer en masse tend not to value backward compatibility (npm comes to mind), and their package managers often have to provide solvers for complex dependency graphs; users can get backed into a corner where they must upgrade something but can't, because it depends on something else that bumped its major version and now the interfacing code has to be rewritten.
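
To make that corner concrete, here is a minimal sketch of how caret ranges behave, written against the node-semver package (the same range semantics npm itself uses); the version numbers are made up:

    // Caret ranges accept compatible upgrades but exclude major bumps.
    import * as semver from 'semver';

    semver.satisfies('1.9.3', '^1.2.0'); // true:  still within 1.x
    semver.satisfies('2.0.0', '^1.2.0'); // false: the major bump escapes the range

    // The newest version a solver may pick for "^1.2.0" is still 1.x, so
    // moving to 2.0.0 means rewriting the interfacing code by hand first.
    semver.maxSatisfying(['1.9.3', '2.0.0'], '^1.2.0'); // -> '1.9.3'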

Go is an ecosystem that values backward compatibility a lot. It uses SemVer as well now, but on the other hand the Go team says that modules shouldn't really bump their major version often, if at all (which in turn makes me question whether adopting SemVer was a good idea, or a compromise they had to make to concede something to the subset of the community that was pushing for a more standard package-management solution).

I think Qt is a project that used SemVer (before it was named that!) in the right way. They break the major version every 8-10 years, and they work hard to make sure not to do it more often. In C++ that's not even easy, because of include files and the ABI, but they manage to keep the ABI stable across all minor version upgrades, so you can upgrade to a newer minor version of Qt without even recompiling the software that uses it.


I believe it’s exactly the opposite: life is too short to wait around for stable software.

Don’t expect code to be like a cathedral that can stand the test of time; think of it more like an ad-hoc bazaar where you can quickly set up shop and start making money. Yeah, sometimes it will not work right or will break, but even the physical things in this world that people depend on are shitty and break. Shitty things are just part of the imperfect human condition, and we must live our lives oscillating between stability and instability, until of course we die and it all means nothing in the end.


> don’t ever let your career depend on an unstable platform and tooling unless you directly profit from that instability.

Woah, an unstable platform is bad, unless it makes you money? Am I reading this right?


> Am I reading this right?

You're not. There may be no correlation between the platform being bad and it being profitable. If it happens to be bad, don't let your career depend upon it, unless it also happens to generate revenue for you.


The human body is an unstable platform that doctors make money off of. Lawyers make money off of the unstable legal platform. So it has been for thousands of years. It's even better in recent times, as you need a license to work on that unstable platform ;)


Yes. This is why I charge by the hour, ad hoc, when I have to deal with Windows. I can’t estimate all the weird-ass problems in my initial quote.


Without context, I think the part after "unless" is sort of sarcastic or ironic.


It's just good business advice.


Innovative tech companies are dominated by business analysts and product managers. Shoddy specs are commonplace, and it's crazy at work because big customers push hard deadlines that have to be met to prevent them from switching to That Competitor's app. They are not empty threats. Unstable software is just the reflection of an unstable industry.


J2EE entirely spec’ed out a reproducible way to create and deploy reliable software reliably. It is so reliable that no-one uses it, except companies paying $14k per developer per month.

The truth is, the cost of unreliable software is not the same as the cost of an unreliable bridge or plane. And during the ascending phase of the Schumpeter cycle (a 70-year economic cycle tied to an industry), entrepreneurs who move swiftly and with agility always win.


Exactly. Look no further than promotional processes that value launches over maintenance.


Regular Releases are Wrong. Roll For Your Life

> Upstream packages change fast

> Upstream support is typically short, shorter than the life-time of some LTS distributions

> SUSE (and others) maintain a large number of distribution variants that need to be updated regularly

> Upstream projects are getting larger and larger

> Using stable release and trying to back-port security fixes isn't safer than using the latest versions with all the security fixes

> The closer you are to upstream the better it is for everyone

> It's easier to work with upstream

> It's easier to contribute and submit patches

> Slow and conservative updates models don't work

> Slow update models are not more sustainable

> Slow update models undermine "Open Source"

> "Partially Slow" is "Totally Broken"

https://linuxreviews.org/Richard_Brown:_Regular_Releases_are...


Related talk distilled into a blog(gish?) post: http://boringtechnology.club/


Proprietary software with no competitors (Nikon Elements, cough) creates situations where you have to depend on very unstable and buggy software.


Does this also apply to iOS and forced updates? ;)

Trade-offs between hardware and software are what I constantly have to think about.


The amount of software needed to protect my Windows-based work laptop suggests that "unstable software" is about more than just "does it work for the user?" It feels like it should also include how well your data is protected in the application you are using.


We make trade-offs between hardware and software all the time; the fact is, you must rely on unstable software if the hardware requires it. I can’t think of a better example than iOS devices, or the M1 (Pro) and its lack of OS choice: if you want to run Linux on the M1, say goodbye to the GPU and various other benefits.

Unless your only platform is embedded software, where this occurs less, it’s impossible to realistically expect to never run unstable software. I’m going to guess most consumer devices are running hundreds if not thousands of JavaScript VM instances every hour.


And yet everything is built on web services nowadays that break contracts and change behaviour constantly.


I've been thinking about learning Crystal recently and using it for a personal project. However, now that I have lots of experience with Node.js, Golang and others, I'm torn between the "use what's mature" and "learn a new language" decision. Sure, I'm using Crystal now to learn a new language, but what if this becomes a serious project? Anyhow, I agree with this blog post somewhat but it's always good to expand your repertoire of tools.


I code in a new/unfamiliar language if I want to learn something new about the "art" or "science" of programming and have fun doing it.

I code in an established language if I'm more interested in solving a specific technical problem.

As for "what if this becomes a serious project?" question, remember that a rewrite always goes faster than the original.


There are well-established projects using Crystal; at least I've heard of one: Invidious [1], an alternative frontend for YouTube. If your project becomes serious, it may well work for you too!

(I don't know anything about Crystal)

[1] https://github.com/iv-org/invidious


Hey! Thanks for this! I've always loved the Ruby syntax so I'm excited to see what Crystal has to offer. Thanks for sending this over!


Why not learn a stable technology instead?

For example, instead of Crystal, you could try something like Common Lisp or Haskell, both of which are really, really stable.

I would even put Rust into the "stable" category... they value backwards compatibility very highly, while allowing breaking changes to happen in newer "editions" of the language (without breaking code written for previous "editions"). Stable does not necessarily mean old.


I'm not really interested in Lisp or Haskell. I don't feel like I'll be all that productive with them and there is little to no chance I'll use them professionally.

If anything I'd be interested in Rust, but it's a bit too low-level for most of the things that I need.


If it becomes serious, and Crystal becomes a problem, then you can always port it. What are the odds of both of those things happening? Give it a shot.


Unstable, or even worse, vanishing software. Sometimes we adopt projects that disappear, or they change their pricing so that we have to drop them.


Well, the title mentions unstable software, but the article discusses libraries and APIs.

Topic-wise, backwards compatibility is indeed really important and useful. However, achieving a stable API takes a long time and a lot of effort.

You either use the tooling now and risk changes, albeit small ones (honestly, almost everything programming-related has breaking changes at some point), or roll your own API.
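
There is a middle path, sketched below: use the tooling now but hide it behind a thin interface you own, so a breaking change in the dependency is confined to one adapter. This is just an illustration; the vendor package and its method names are hypothetical:

    // Hypothetical third-party SDK; only this file imports it.
    import { VendorClient } from 'some-vendor-sdk';

    // The contract the rest of the codebase programs against.
    export interface Storage {
      put(key: string, value: string): Promise<void>;
      get(key: string): Promise<string | undefined>;
    }

    export class VendorStorage implements Storage {
      private client = new VendorClient();

      async put(key: string, value: string): Promise<void> {
        // If the vendor renames uploadObject() in their next major
        // version, this adapter is the only code that changes.
        await this.client.uploadObject(key, value);
      }

      async get(key: string): Promise<string | undefined> {
        return this.client.fetchObject(key);
      }
    }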


The most unstable software that I experienced was the original Mac. I'm sure that it cost me a non-trivial drop in my GPA.


Do you have any more details on this? I am very curious.


I still have the Mac. It's now over 35 years old! Remember that the computer only had 128K of memory. There were like three applications: MacWrite, MacPaint, MacDraw. MacDraw was the most crash-prone.

What else might you like to know? I'll have to dust off some old neurons.


I had the same Mac. It crashed sometimes -- enough that "save often" was a habit, but I don't remember it being a productivity problem for me. My grade in English went up from B to A because I have always had trouble spelling and MacWrite had spell check. If the machine had a negative impact on my grades at all, it was the distraction of me learning Turbo Pascal on the thing. :-)


Okay, you're probably right that the grade impact was due as much to the distractions as to the instability.


Is that the infamous Mac model that Steve Jobs intentionally under-specced in order to hit a consumer-friendly price point (at the time), and which as a consequence was dog-slow and basically unusable for anything serious?


And the rest is history. And the first or second largest market cap company in the world.


This is "unstable" as in "recent versions", not as in "broken". Anecdotally, "unstable" is a complete misnomer. Arch Linux (rolling distro, upgraded on my machines almost every day for ~6 years) was far more stable than Ubuntu (8.04 through 14.10 or so, 18.04 and 20.04) ever managed.

My pet hypothesis is that the core developers of any system spend 95%+ of their time working on the latest release, and that (understandably, since that's what earns all the credit) hardly anybody wants to spend any more time than necessary supporting older versions of anything. This goes doubly for combinations of two or more old systems which maybe ten people worldwide might be using.


The last piece of advice was a bit humorous (sad but true):

> And one final piece of advice: don’t ever let your career depend on an unstable platform and tooling unless you directly profit from that instability.


I know that titles have to be written to capture attention, but I was really hoping there was going to be a story here: something about how someone realized in their old age that the use of unstable software was a big regret. It sounded pretty odd, so I clicked through to see. I was disappointed it was just an article about backward compatibility and preferring 3rd-party libraries.

I’m sure that on my deathbed software will not be coming to mind, but if it did, I’d probably wish I had taken more chances on some crazy things like beta iOS ;-p


On my deathbed I don’t think I’ll think about software… But if I do I think my biggest regret will be not embracing event driven software design sooner.


> For projects using SemVer, frequently changing major version numbers is an obvious red flag.

Huh, I have the opposite impression, particularly if the project keeps a CHANGELOG of the breakages. Smaller releases mean easier upgrades, and the changelog tends to indicate which call sites require investigation or changes prior to upgrading; projects with absolutely massive amounts of breaking changes that require rewriting everything are so much more painful.


The point is that well-designed contracts don't need many breaking changes in the first place.

As a corollary, badly designed software is more likely to require breaking changes.

The same applies to badly managed software, where compatibility is broken inconsiderately, wasting people's time.
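
As a concrete (and entirely made-up) sketch of what a well-designed contract looks like under SemVer: additive, optional changes keep every existing call site working (a minor bump), while renames or newly required parameters invalidate callers and force a major bump:

    // v1.0.0 of a hypothetical client library.
    interface FetchOptions {
      timeoutMs?: number;
      retries?: number; // added in v1.1.0: optional, so old callers compile unchanged
    }

    function fetchUser(id: string, opts: FetchOptions = {}): Promise<string> {
      return Promise.resolve(`user:${id}`); // stub body for the sketch
    }

    fetchUser('42');                 // written against v1.0.0, still fine
    fetchUser('42', { retries: 3 }); // the new capability is strictly opt-in

    // By contrast, renaming fetchUser() or making a parameter required
    // would break every caller: that's what should cost a major version.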


My project has used RoR 4, React 1.5, PostgreSQL 9.6 and Solr since 2012 and is still running stably in production.

Such a beautiful tech stack, and it's still my recommendation for any junior programmer who wants to learn how to do full-stack development the right way (before jumping on more bleeding-edge stuff).


Python, Django, etc. is similarly good. Elasticsearch or Solr. I don’t even use React… just Bootstrap and plain HTML+CSS where I can. Front-end work makes me sad inside. Celery for distributed work. Redis for this and that because it’s so stupidly reliable and easy to use and manage. Ansible to configure and deploy stuff. One repo. There was a great post on here about boring tech stacks, and their stack was our stack almost exactly. It’s stuff one human can wrap their arms around and build meaningful software with.


[flagged]


Genuinely curious: how is Ruby any better there? You can build usable front ends in either language. I care about delivering usable software. Interactive front ends for their own sake are not a good use of effort. If your app and situation justify it, okay, but it isn’t a hard requirement like many other aspects of full stack; it just depends on what you need for good UX. A pile of JavaScript isn’t a requirement there. I also stopped caring about developer titles a long time ago; that is a straw man. Real full-stack devs with adequate experience in every aspect are unicorns anyway.

E: I don’t even hate JS, it’s fine; it's just not a great investment of time for a lot of apps. And the ecosystem, and getting it deployed, is an absolute chore.


Without full-stack knowledge, in this case, if you don't care how the frontend will fetch data, how can you optimize the backend? (Just curious.)

Full-stack knowledge is necessary for optimized backend code, seriously.

Ruby has elegant syntax and structure for functional programming as well as OOP, unlike broken Python (one-line lambdas and broken OOP).


Your bias against Python is full of prejudice. I wonder where you got the idea that all Python devs lack knowledge about FE?


It's real. I know about 3 people doing Python, and all of them hate JS for no reason.


I thought we were arguing for rapid iteration and continuous integration? Why would I NOT live as I preach? How can I teach others if I'm not learning the bugs, quirks and workarounds BEFORE those I work with?


It would be interesting to see ratings of software, much like Moody's ratings for financial products.


"Life is too short to depend on unstable programming languages" would be a good title for a blog post ;)


Counterpoint: Stable software may persist poor UI/API, limitations, and mistakes indefinitely.


Security issues can also be more common in older, infrequently used code, and it can be a real source of burnout for anyone who wants to fix obvious issues of whatever kind to have to keep supporting dozens of obscure workflows that quite possibly no one is actually using.


Not really, as advancements in evolution are predicated on unstable software (genetic variations).


I value the new too much to depend on stable software


Life is too short to make all software stable.


agreed



