
Just a generalization. It seems like Wine and Steam/Proton embody the early Microsoft philosophy of (backward) compatibility as a selling point, while the overall Linux mindset seems to be: "fuck you, you should have provided the source code if you wanted your software to work after the next dist-upgrade"

Just to be clear: I wrote this on my Linux Mint laptop. I'm thankful for all the hard work people put into Debian, Ubuntu, and Free Software in general. This is just a generalization.




Linux takes backward compatibility very seriously. The FOSS ecosystem as a whole covers the entire spectrum, so the bad apples break it for everyone.

The result isn't even "fuck you, recompile." The result is, "fuck you, you better be constantly fixing the shit that we broke with our breaking changes."

This is part of the value that distros bring to the table: providing a snapshot of versions that actually interoperate well.


> Linux takes backward compatibility very seriously. The FOSS ecosystem as a whole covers the entire spectrum, so the bad apples break it for everyone.

> The result isn't even "fuck you, recompile." The result is, "fuck you, you better be constantly fixing the shit that we broke with our breaking changes."

Linux's approach to drivers is the most "fuck you, you better be constantly fixing the shit that we broke with our breaking changes" of any OS. It does take userland compatibility seriously though.


What you say is not true. Once a driver is in the kernel, there's usually nothing a user must do to get the device working. It works so well that devices work on architectures their manufacturers haven't even thought about. The policy in the kernel when any internal interface changes is: if you change it, you have to modify all the drivers to keep them working. It works splendidly well.


> What you say is not true.

It's true, and I have 3 devices that stopped working under Linux to prove it.

> Once a driver is in the kernel, there's usually nothing a user must do to get the device working.

If the driver is accepted into the kernel tree in the first place, and until it gets removed, yes. The fact remains that that's a much narrower window than for most OSes.


> the overall Linux mindset seems to be: "fuck you, you should have provided the source code if you wanted your software to work after the next dist-upgrade"

These days you just need a container. Or use Nix/Guix, which makes it easy to preserve required dependencies for an older app even when upgrading other parts of the system.
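
Roughly something like this, as a sketch (the image tag, package name, nixpkgs revision, and binary paths below are just placeholders):

    # Run the old binary against an older distro userland in a container
    docker run --rm -it -v "$PWD/oldapp:/opt/oldapp" debian:8 /opt/oldapp/run.sh

    # Or pin a nixpkgs revision that still carries the libraries it needs
    nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz \
        -p libpng12 --run ./oldapp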


And a lot of space on the hard drive to house the last 20 versions of your operating system...


And these days, Windows is more about chasing the shiny than ensuring old applications run.

I've had more luck with old Linux binaries running, myself.


I wouldn't say so. On Windows you can run the same built package on anything from 7 to 10 without too many issues; on Linux, trying to install a package that hasn't been released in the current OS version's package manager repo feels like blasting your brains out, at least for Debian/Ubuntu. And the only fallback is to compile the entire thing yourself, because fuck me, right?

Like, sure, I understand that having specific packages for each version should make things work reliably, but the trade-off and the demand on devs to keep up is pretty huge.


Usually tracking down old versions of libraries works, except when it doesn't of course. If the source is available, I think "just compile it" is kind of okay to be honest; it's usually not that hard, and if it is, then that's kind of a problem with that software's build system IMHO.

I compiled xv from 1994 the other day on my Linux box; I just had to make a tiny patch to fix an include (lots of warnings, but it compiles and runs).

That said, there's some room for improvement. For example, pkg-config/pkgconf could automatically suggest which packages to install if "pkg-config --libs x11" fails, or there could be some other distro-agnostic way for people to track down dependencies.

If you're shipping a binary program without source (a game, for example, since those tend to be closed source) you should ship the libraries or compile things statically. Some of the older Linux games on gog.com can be a bit tricky to run on modern systems due to this.
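
One hedged way to do the bundling (the library names here are just examples) is to bake an $ORIGIN-relative rpath into the binary so the loader finds the shipped copies first:

    # Link against bundled libs and record a relative rpath
    gcc -o game main.o -L./bundled -lSDL2 -Wl,-rpath,'$ORIGIN/lib'

    # Ship the binary together with the exact .so versions it was built against
    mkdir -p dist/lib
    cp game dist/
    cp bundled/libSDL2-2.0.so.0 dist/lib/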


> tracking down old versions of libraries works

See, that's the thing: everything on Linux is designed to share as much as possible, depending on thirty thousand packages that must all be installed at exactly the right version by apt or else nothing works. A good system if you need to get a fully featured OS running in 100 MB of disk space, which tbf is Linux's niche, but it's absolute horseshit to maintain.

Windows, on the other hand, tends to have flatpak-style monolithic executables, with the odd .NET Framework or C++ redistributable here and there, but that's the rare exception. Things tend to actually work when they bundle their dependencies. Hell, the average Java app ships the JRE along with it because nothing works if the wrong version is installed globally. Linux just takes that problem as a fact of life and tells you to fuck off.


In general, I'm a fan of statically linked binaries or shipping the libraries with the application, so I mostly agree with you. But I also can't deny there are advantages to the shared-linking approach as well. In short, there are upsides and downsides to both approaches and no perfect solution.

In practice however, I rarely encounter issues, except for closed-source programs that don't ship their libraries. That, I think, is mostly the fault of the vendor and not the Linux system. Of course, as a user it doesn't really matter whose fault it is if your application doesn't work, because you just want the damn thing to work. It does mean the problem (and solution) is mostly an educational one, rather than a technical one. You certainly can ship binary programs that should work for decades to come: the two core components (the Linux kernel and GNU libc) take backwards compatibility pretty seriously, more or less on an equal level with Windows.

You don't even need Flatpak. A wrapper script with "LD_LIBRARY_PATH=. ./binary" gets you a long way (and still gives people the ability to use a system library if they want, so sort of the best of both worlds).
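
As a rough sketch (the binary name is a placeholder), such a wrapper could look like:

    #!/bin/sh
    # Look in the binary's own directory first, then fall back to the
    # user's LD_LIBRARY_PATH and the normal system search path
    cd "$(dirname "$0")"
    LD_LIBRARY_PATH=".${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" exec ./binary "$@"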


These days, Windows doesn't tend to have monolithic executables - look inside the installation folder of a random app, and chances are good that you'll see a bunch of DLLs aside from the CRT. Also, .NET apps are normally redistributed as a bunch of assemblies, even aside from the CLR.

The key difference is that each app bundles its DLLs for its own private use. From the user's perspective, it's essentially the same as static linking, if you treat the entire folder as "the app".


That is exactly what the OP meant by "flatpak-style monolithic executables".

In MS speak, it is an xcopy install, based on how MS-DOS applications used to be distributed, even when static linking was the only option (with overlays).


That's everything on Unix-likes with shared libraries, yes. You can statically link, or use a system like Nix or Guix that doesn't enforce a single library version (by ditching the "default library search path" concept), and you get the best of both worlds: space saved for most of your shared library dependencies, and separate versions for those binaries that need them. It's basically Unix with a garbage collector.
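
As a made-up illustration of what that looks like in practice (the store hashes, package names, and versions below are invented), each binary records absolute store paths for its libraries, so two programs can depend on different versions of the same library without clashing:

    ldd /nix/store/<hash>-oldapp-1.0/bin/oldapp   # -> .../openssl-1.1.1/lib/libssl.so.1.1
    ldd /nix/store/<hash>-newapp-2.3/bin/newapp   # -> .../openssl-3.0/lib/libssl.so.3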


This was one of the things I found very painless with Gentoo, surprisingly enough. I could easily slap together an ebuild in my local overlay that matched the dependencies of the deb/rpm, even if that meant some old-as-dirt dep that wouldn't be on my machine normally. Then just `emerge sync -r localoverlay && emerge category/package-name` like normal. Maybe 5-10 minutes at most, and I'd have a fully managed install that'd continue to work even if I tried to install it 5 years from now.
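
A bare-bones ebuild for that kind of repackaging job doesn't need much; here's a hedged sketch, where the category, names, versions, and the pinned dep are all made up:

    # localoverlay/app-misc/oldapp/oldapp-1.0.ebuild (hypothetical)
    EAPI=8

    DESCRIPTION="Repackaged binary app that needs an old library"
    HOMEPAGE="https://example.com"
    SRC_URI="https://example.com/${P}.tar.gz"

    LICENSE="all-rights-reserved"
    SLOT="0"
    KEYWORDS="~amd64"

    # Pin the old dependency version the upstream deb/rpm was built against
    RDEPEND="=dev-libs/somelib-1.2*"

    src_install() {
        dobin "${S}"/oldapp
    }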

I've tried something similar on Debian, Ubuntu, and CentOS, but fighting with apt or yum and their (seemingly) comparatively brittle packaging systems got very old very quickly. Not that it can't be done easily on those systems, but so far I haven't managed it.

Nix, I find, can also be really nice for this, especially since flake-based packages are pretty much self-contained. Still a lot less pleasant compared to the Portage/ebuild route, though.


I disagree. Couldn't get Deadly Premonition working. Windows recommended XP compatibility mode and it worked!

All the 25 years of code is still in Windows 11; they just hide it with a lick of paint. If you want to play old games, Microsoft is still your best bet.


As the joke goes, Win32 is the Linux ABI.

There are a lot of baffling decisions by Microsoft’s Windows division, but their extreme dedication towards backward compatibility is nothing short of Herculean.

Sometimes I wish they’d split Windows into an ultra-slow ‘Enterprise’ ring and a move-fast (think about the speed of macOS changes) ‘consumer’ ring, where they could drop much of the legacy cruft in exchange for speedy/forced improvements. Think macOS going fully 64-bit, making the system image immutable, etc.

Hell, imagine if they’d allowed the Xbox One, X and S to run Windows 11 in S mode. Instant cheap performant computer for the layperson! And with it running S mode, they’d make their money through the Microsoft Store.


> ultra-slow ‘Enterprise’ ring and a move-fast (think about the speed of macOS changes) ‘consumer’ ring

They do: Long-term Servicing Channel releases: https://techcommunity.microsoft.com/t5/windows-it-pro-blog/l...


Unfortunately for Microsoft, consumers are as wedded to legacy apps as enterprise customers are. A lot of Steam's library immediately vanishes if you lose 32-bit support, for example.


> Couldn't get Deadly Premonition working. Windows recommended XP compatibility mode and it worked!

This just tells me that Windows with its compatibility mode is on a level with Linux + Wine.

> All the 25 years of code is still in Windows 11; they just hide it with a lick of paint.

That's not true. Windows had a big, discontinuous shift when they abandoned the DOS-centric Windows codebase in the move to XP.


> Windows had a big, discontinuous shift when they abandoned the DOS-centric Windows codebase in the move to XP.

You mean the move to the NT platform, which finally made the consumer Windows desktop OS stable.

Compatibility features, while not perfect, were implemented to help get Win9x and DOS workloads functioning normally. Some DOS apps ran fine on XP in compatibility mode, although I couldn't say what percentage.


> Windows had a big, discontinuous shift when they abandoned the DOS-centric Windows codebase in the move to XP.

Windows 11 is the latest member of the Windows NT family, which originates with Windows NT 3.1, which coincides with Windows 3.1.

Windows NT 4.0, which coincides with Windows 95, is where things start to look more familiar to us today.

Windows 2000 followed, coinciding with Windows 98 and ME, and is where the NT family really got consumer-aimed software working right, with DirectX integration all the way up to DirectX 9.

Windows XP, which followed 2000, is really just 2000 with some more cleanup and QoL improvements. All the foundations were laid by the time of Windows 2000.

So yes, "all the 25 years of code" is definitely still in Windows 11 under a lick of paint.


You're welcome to try getting Deadly Premonition working on Linux!


I was about to gloat: as it's so cheap to buy, I thought it was a good opportunity to test Steam Proton by selecting "enforce compatibility", which so far has worked for everything I've thrown at it (Civ V, Hitman Absolution, a bunch of other obscure Windows-only titles).

I was initially happy, as the install and intro ran fine, but pressing Enter to skip the video exits the game immediately.

Maybe that's further than you would expect; Wine/Proton is incredibly good nowadays. But your point still stands.


Which is why you need DOSBox to play DOS games.


That wasn't true for Windows XP, which shipped with the NT Virtual DOS Machine (ntvdm.exe). Unfortunately, this was never available in 64-bit Windows, and it survives only as an optional feature in 32-bit Windows 10.


That mindset only really applies to the weird enthusiast OSes like Arch and Gentoo, where the need to recompile half your stuff after an upgrade is almost the point. Everything else, in my experience, has been pretty good at maintaining compatibility.


What about that classic meme of Linus (Tech Tips, not Torvalds) running the equivalent of `apt update` on Pop OS and breaking the entire Desktop Environment?

I'd be lying if I said I'd never done something similar when trying to switch from the open-source Nvidia drivers (glitchy at 4k60, at the time) to the officially provided ones.


To be clear, Linus got an error trying to install an app via the GUI, ran the equivalent terminal command, and directly overrode the warning telling him what would happen before his desktop environment broke. It wasn’t quite as simple as just apt upgrade.


The point is that it gave him a text warning that basically said "press y to destroy your whole Desktop Environment" buried in a wall of text that you'd normally ignore. This is beyond terrible UX and would never happen outside of Linux/FOSS.


In a proprietary operating system, you just wouldn't have been allowed to uninstall the desktop environment at all. Unfortunately, you are also forbidden from uninstalling Facebook if they have a deal with your device's manufacturer.


There is a huge space between CLI-everything and a pre-infested walled garden. I have always felt that there should be more visual differentiation in the standard newbie-friendly tools for Linux. Many problems would be avoided with a basic graphical interface.


In my experience that's backwards; Arch and Gentoo are far better at telling you exactly what you need to stay backwards compatible and letting you do it than the fancy commercial distros are. (E.g. compare the difficulty of running the original Linux release of Quake 3 on those distros).


This is very much a distro-specific experience. Other distros and Linux upstream care very much about compatibility.


I thought that at least the kernel philosophy was the exact opposite? The “never break userland” motto?


That philosophy covers the kernel only - there are multiple other dependencies for running programs that may not have a stable API/ABI or the same compatibility approach. Shared libraries like glibc may be updated, graphical interfaces may differ, search paths may not be uniform, etc., and these can all break a program.


If you want stable binary compatibility, use a distro that focuses on that (SUSE, Red Hat). Not some joke built around the open-source idea and communism!



