
Much of what he's complaining about is a failure to obey a rule that appeared in the original Macintosh User Interface Guidelines.

"You should never have to tell the computer something it already knows."

For example, users should never have to type in a pathname. The user should, at worst, be offered a list of workable alternatives from which they can select. Expecting the user to manually type in a value for "$LD_LIBRARY_PATH" fails this test.
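
To make that concrete, here is a minimal sketch of "offer alternatives instead of demanding a path": probe a few plausible locations and report what was tried, rather than telling the user to export LD_LIBRARY_PATH by hand. The library name and directory list are hypothetical, not something from the article.

    /* Sketch: probe likely locations for a library instead of asking the
       user to type a path. Names and directories are illustrative only.
       Build with: gcc probe.c -o probe -ldl */
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void) {
        const char *candidates[] = {
            "/usr/lib/libexample.so",          /* hypothetical library */
            "/usr/local/lib/libexample.so",
            "./libexample.so",
        };
        for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
            void *h = dlopen(candidates[i], RTLD_NOW);
            if (h) {
                printf("loaded %s\n", candidates[i]);
                dlclose(h);
                return 0;
            }
            printf("not found: %s\n", candidates[i]);  /* tell the user what was tried */
        }
        return 1;
    }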

The original Macintosh software had applications which were one file, with a resource fork containing various resources. This kept all the parts together. UNIX/Linux and Windows never got this organized. Hence "installers", and all the headaches and security problems associated with them. The Windows platform now has "UWP apps" with a standard structure, but they're mostly tied to Microsoft's "app store" to give Microsoft control over them.



Doesn't packaging each piece of software as a single file lead to a security disaster? When openssl has a vulnerability, suddenly the end user needs to upgrade multiple applications, and that is assuming that all of the developers have actually built the upgraded software.


I go back and forth on this. Yours is the most compelling argument for shared libraries (along with the ability for OS writers to break ABI), but there's so much bad behavior out there in the wild that I don't know if the reality fully reflects the ideal.

For example, end users never actually upgrade applications in response to a vulnerability. If they're on a commercial OS, those fixes are pushed to them by the supporting entity (Microsoft or Apple) in the form of OS updates. If they're on an open source OS, their package manager handles things. The number of conscientious, security-aware users is minuscule. In practice, this would just create a little more packaging work for upstream maintainers, negligible compared to their current responsibilities.

So, from the user's perspective, the only functional difference they'd notice between shared and static apps would be an increase in the insecurity[1] of third-party, closed-source apps. Which are already the largest vector for viruses, adware, etc. for most end users.

In other words, I'm not sure it would be different from the current environment in practice. Exploits would continue to, more often than not, target single apps, not shared libraries. More attack surface, easier pickings.

[1]: There are a few strategies to mitigate this (partially-relocatable compilation, where only a few libraries are dynamically loaded; OS-level services in place of in-memory libraries).


A similar issue arose on Windows some years back.

Yes, Windows has DLLs that do much the same as .so files on *nix. But each program on Windows will first check its own folder and subfolders for a matching DLL before asking the OS.

And each program ships with a set of redistributable DLLs from Visual C++.

One of those DLLs was found to have a vulnerability. And while MS could patch their office suite and related products quite readily, for everything else they could at best offer a tool that scanned program folders for vulnerable DLLs and beg users to badger the software providers for updates.
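
A rough sketch of why the app-local copies were the problem (Windows-specific; "example.dll" is a placeholder, not one of the actual redistributables): loading a DLL by bare name searches the application's own directory before the system directories, so a patched system-wide copy doesn't help a program that carries its own.

    /* Sketch (Windows): load a DLL by bare name and report which copy was
       actually picked up. An app-local copy next to the .exe wins over the
       system-wide one, so patching the OS copy alone does not fix the app.
       "example.dll" is a placeholder name. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HMODULE h = LoadLibraryA("example.dll");  /* bare name -> search order applies */
        if (!h) {
            printf("not found (error %lu)\n", GetLastError());
            return 1;
        }
        char path[MAX_PATH];
        if (GetModuleFileNameA(h, path, MAX_PATH))
            printf("loaded from: %s\n", path);    /* app folder copy vs. system copy */
        FreeLibrary(h);
        return 0;
    }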

The sad thing right now is that various big names within the Linux community are pushing for a more Windows-like distribution model, when the problem they are trying to fix is related to overly rigid package dependency trees (with the RPM package format in particular; the primary user of the DEB format has worked around the issue with a careful package naming policy).


That's also the one reason why I'm very reluctant to throw shared libraries out the window.


Ok, security-critical stuff as shared libraries (libjpeg, libz, libssl, etc).

Why do we package the other stuff as shared libraries too? GMP, GTK, Qt, OpenCV, SDL, BLAS, ...


The original motive for adding shared libraries to Unix was so that X11 would fit in memory on the machines in use at the time (according to a comment I read many years ago).

If the use of shared libraries saves on memory, it probably also saves on L3 and L2 cache, so on the aggressively cached CPU architectures of today, replacing a shared library with a statically-linked version might slow things down by decreasing cache hit rates.

In particular, if every KDE application is statically linked to Qt, then when KDE application A's time slice ends, whatever parts of A's copy of Qt are in cache will be invalidated, with the result that if B wants one of those parts, it will have to fetch it from its own copy of Qt in memory, whereas if A and B shared a copy of Qt, the fetch from memory could be avoided.
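
For what it's worth, the sharing that argument relies on is easy to see on Linux: every process using a shared library maps the same .so file, so its read-only code pages exist once in physical memory (and hence compete for cache once). A small sketch, using libc only because it is guaranteed to be present; substitute Qt or whatever library you care about:

    /* Sketch (Linux): print this process's file-backed mappings for a given
       library by reading /proc/self/maps. With dynamic linking the code
       pages come from one shared file; with static linking each binary
       would carry its own copy. "libc" is just an example substring. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) return 1;
        char line[512];
        while (fgets(line, sizeof line, maps)) {
            if (strstr(line, "libc"))   /* substitute the library you care about */
                fputs(line, stdout);
        }
        fclose(maps);
        return 0;
    }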


Has anybody actually measured it in the last decades?


I don't know.

But we know that Intel and AMD design their CPUs to go as fast as possible on the operating systems people actually use (Windows, macOS, Linux), all of which use dynamic linking. Plan 9 is the only OS I know of that does not support dynamic linking (and Plan 9 simply does not have large libraries -- they have what they call services instead, which are similar to souped-up Unix daemons).

Linux and Windows in turn are designed to run as fast as possible on Intel and AMD hardware.

After a few iterations of this sort of mutual evolution, it becomes very unlikely that a change as big as switching a bunch of big libraries from dynamic to static linking would actually improve performance: lots of optimizations have been made to squeeze a few percentage points of performance out of the existing system (which includes the practice of shipping most large libraries as shared libraries), and those optimizations typically stop working when the system changes that much.


You are implying that, for any given shared library, you can classify it as "clearly security-critical" or "clearly not security-critical".


Patching security flaws only works against adversaries who are out of the loop on zero-day vulnerabilities. Script kiddies, not organized crime or intelligence agencies.


http://xahlee.info/UnixResource_dir/_/ldpath.html

> The original Macintosh software had applications which were one file... UNIX/Linux never got this organized.

In Unix, you link at install time. A lot of pain results from well-meaning people importing concepts from other systems and doing it badly.



