
Because kernel developers can only work with what they're given. If the hardware isn't supplied by the company or by some grant, or if it's IP-encumbered, then it's unlikely to be supported natively by the kernel.

At that point, it becomes a short-term versus long-term benefit assessment. Long-term, it's better to work to get the code accepted upstream, but that's more work: you have to make sure your patch conforms to what the kernel devs want and that they can verify the support works, rather than it just working as you've set things up in-house. With the rush to market, and with the specific configuration of chips possibly not being reused in the next product released, short-term patches likely look really attractive.




I don't think I'm explaining myself well. Can't the kernel be modified so that manufacturer-specific code can stay in-house?

Let me give you an example: I don't need to fork my text editor to make it highlight and autocomplete any new programming language I invent myself. And I don't need to give the editor developers anything to work with either. Because they've put the needed hooks and interfaces in place for me to write my stuff without having to modify theirs.


> I don't think I'm explaining myself well. Can't the kernel be modified so that manufacturer-specific code can stay in-house?

The kernel is under the GPLv2, so redistributing kernel binaries also requires making the source available (that is all part of the "freedom").

The kernel internals constantly change. For example, the USB stack has been rewritten four times, and each time all the in-tree drivers were converted over. By contrast, Microsoft has also rewritten its USB stack four times, but has to retain backwards compatibility with each generation of driver API/ABI simultaneously!

While in theory it is possible to have a stable API, in practice real life is more complicated. There are concurrency issues, power management, suspend/resume, memory allocation, constantly changing hardware, etc. A stable API would result in far less optimal drivers (e.g. they may consume more memory or power, or cause cores to run at higher clock speeds).

The kernel developers decided they weren't going to have that layer of indirection. Instead, drivers do the one true correct thing for the current kernel (and are updated as kernel internals change). That makes them simpler, and better performing along the dimensions mentioned in the previous paragraph.

That all said, Linux is fanatical about the userspace API to the kernel being stable. You can implement drivers (e.g. USB) and filesystems (e.g. via FUSE), amongst other things, in userspace. However, they won't be as "good" as kernel drivers.
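
For a sense of what that looks like, here's a minimal sketch of userspace USB access via libusb-1.0; the vendor/product IDs and the bulk endpoint below are placeholders rather than any real device:

    /* Minimal userspace USB access via libusb-1.0 (build with -lusb-1.0).
       The vendor/product IDs and endpoint below are placeholders. */
    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0)
            return 1;

        /* Open a device by (hypothetical) vendor/product ID. */
        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!h) {
            fprintf(stderr, "device not found (or no permission)\n");
            libusb_exit(ctx);
            return 1;
        }

        /* Claim interface 0 and read up to 64 bytes from bulk endpoint 0x81. */
        libusb_claim_interface(h, 0);
        unsigned char buf[64];
        int got = 0;
        int rc = libusb_bulk_transfer(h, 0x81, buf, sizeof(buf), &got, 1000);
        printf("bulk read rc=%d, %d bytes\n", rc, got);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }

Because that whole path goes through the stable userspace interface, a program like this keeps working across kernel versions, which is exactly the guarantee in-kernel driver code doesn't get.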

Here is a good post from a kernel developer: http://www.kroah.com/log/linux/stable_api_nonsense.html

To get an idea of what changes in the kernel, check out the LWN kernel page. If you're interested in Linux, I'd recommend subscribing, too. https://lwn.net/Kernel/


Thank you for the explanation and the link to that article. I disagree with some of the philosophical statements, like:

> While in theory it is possible to have a stable API, in practice real life is more complicated.

(In practice it's also possible, as Microsoft demonstrated with Windows for decades).

or (from the article):

> benefits if your driver is in the main kernel tree, all of which has made Linux into such a strong, stable, and mature operating system which is the reason you are using it in the first place

(The reality is that this wasn't enough of a reason to move users from Windows before the smartphone era, nor is it the reason Linux won over Windows in the smartphone market.)

I think the article hits the crux of the issue, though: nobody wants to spend their free time maintaining an old interface instead of working on the new one. A side effect of this policy is a better kernel, granted (for the devices that actually make it into the main tree).

So given that a stable API isn't an option, and the current model is clearly not working, I guess the solution lies in removing the hurdles these companies face in upstreaming their code. Do you know anything about that, or where I could learn more?


Microsoft only had to support one architecture (well, two, counting 32-bit and 64-bit) and still changed driver models continuously; you also can't reuse drivers across Windows versions all that much. At the end of the day it was a financial decision by Microsoft to maintain a very large degree of backwards compatibility, and by their customers to pay that price. But also remember just how "sticky" Windows 98, XP, etc. were; people wouldn't give them up. And have you tried to use USB 3 on Windows 7? Or how about Skylake support, until the backpedal?

Note that with Linux you can get something of a semblance of Microsoft-style stability. Use RHEL (Red Hat Enterprise Linux), and they do the work of backporting relevant drivers and changes. You also get to pay handsomely for that.

Note that the majority of devices do make it to the kernel. You can plug in virtually anything and it works!

You really should read LWN. They do a fantastic job of covering various issues. For example here is an article on hurdles: https://lwn.net/Articles/647524/


With this:

> Note that with Linux you can get something of a semblance of Microsoft-style stability. Use RHEL (Red Hat Enterprise Linux), and they do the work of backporting relevant drivers and changes. You also get to pay handsomely for that.

> Note that the majority of devices do make it to the kernel. You can plug in virtually anything and it works!

it looks like you're mostly thinking of the server space. The problem of manufacturers forking the kernel, and the topic of this HN post in general, is on Android. For example, my last two phones, a Samsung Galaxy Nexus and an LG Nexus 5, won't get any more OS updates.

I'm going to read that article, and check LWN regularly.


No, the majority of devices really do work. Plug in a random webcam, drive controller, SPI peripheral, etc.

Embedded systems are different, but it isn't the device driver support that is the problem. x86 has a mostly standardised architecture and, due to an accident of history, an I/O address space that is separate from the memory address space. Consequently it has been possible to probe for devices with a very low probability of collateral damage, which is why Linux and Windows can "just work" on virtually any x86 system.
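
To make that concrete, here's a sketch of what touching that separate I/O space looks like from Linux userspace. It assumes an x86 machine and root privileges, and uses the legacy keyboard controller status port purely as an example; real device probing is done by the kernel:

    /* Sketch: x86 port I/O from userspace on Linux (x86 only, needs root).
       Reads the legacy keyboard controller status port 0x64, illustrating
       that port I/O uses dedicated IN/OUT instructions, not memory loads. */
    #include <stdio.h>
    #include <sys/io.h>   /* ioperm(), inb() */

    int main(void)
    {
        if (ioperm(0x64, 1, 1) != 0) {   /* request access to one I/O port */
            perror("ioperm");
            return 1;
        }
        unsigned char status = inb(0x64);
        printf("keyboard controller status: 0x%02x\n", status);
        return 0;
    }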

In the ARM world, you get the CPU design itself from ARM and copy/paste that into your design program. Then you copy/paste in other pieces you want such as memory controllers, USB controllers, serial ports, displays, storage, SPI, and whatever else fits your needs. Then you hit print and get chips with all that working together. There is no separate I/O address space, so these devices all end up in memory address space. There is no standard location for them. Somehow you have to know the USB controller is at address 0x12345678, and you can't practically scan the address space trying to find all possible devices, and certainly not without collateral damage.
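
By contrast, a memory-mapped peripheral is just a physical address you have to know up front. Here's a userspace sketch of what that looks like; the 0x12345678 base and the "ID register" are entirely made up, matching the hypothetical address above:

    /* Sketch: reading a memory-mapped peripheral register via /dev/mem.
       The base address and register layout are hypothetical; without a
       datasheet or a device tree there is no way to discover them. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define PERIPH_BASE 0x12345678UL   /* made-up, SoC-specific address */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        long page = sysconf(_SC_PAGESIZE);
        unsigned long base = PERIPH_BASE & ~((unsigned long)page - 1);
        volatile uint32_t *regs = mmap(NULL, page, PROT_READ, MAP_SHARED,
                                       fd, (off_t)base);
        if (regs == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Read a (hypothetical) ID register at the peripheral's base. */
        uint32_t id = regs[(PERIPH_BASE - base) / sizeof(uint32_t)];
        printf("peripheral ID register: 0x%08x\n", id);

        munmap((void *)regs, page);
        close(fd);
        return 0;
    }

A board-specific kernel fork bakes exactly this kind of knowledge into its drivers, which is part of why those forks are so tied to one piece of hardware.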

So what would happen is that system developers would fork the current kernel and make a permutation of an existing system, with hardcoded knowledge of where all the pieces making up the system live in the address space and how they are connected. Then they'd add support for unique devices, connections, quirks and bugs. Throw in some binary blobs, NDAs, "IP", etc. and they now have their own unique kernel. And that is why they get stuck on a kernel version that can't practically be updated.

The kernel developers adopted device trees as a solution. A device tree provides the kernel with a data description, at boot time, of what platform devices are present, how they are connected and where to find them. That allows the same kernel binary to support a very wide range of hardware. ARM64 server systems typically use ACPI instead, which, if you squint, also lets the platform provide code that runs to deal with platform specifics.
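
On the driver side, that means a platform driver no longer hardcodes addresses: it declares which "compatible" strings it handles and pulls its register window out of the device tree node. A rough sketch of what that looks like in the kernel (the "acme,demo-uart" compatible string is invented for illustration):

    /* Sketch of a device-tree-matched platform driver. The compatible
       string is hypothetical; the register address comes from the DT
       node's "reg" property, not from the driver source. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>
    #include <linux/ioport.h>
    #include <linux/io.h>
    #include <linux/err.h>

    static int demo_probe(struct platform_device *pdev)
    {
        struct resource *res;
        void __iomem *regs;

        /* The register window is described by the device tree, not here. */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(regs))
            return PTR_ERR(regs);

        dev_info(&pdev->dev, "mapped registers at %pR\n", res);
        return 0;
    }

    static const struct of_device_id demo_of_match[] = {
        { .compatible = "acme,demo-uart" },   /* made-up vendor,device pair */
        { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
        .probe  = demo_probe,
        .driver = {
            .name           = "acme-demo-uart",
            .of_match_table = demo_of_match,
        },
    };
    module_platform_driver(demo_driver);

    MODULE_LICENSE("GPL");

The matching device tree node carries the actual address, so moving the peripheral on a new board means editing the .dts, not the driver.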

What this means is that the problem has been addressed technically. It does not address the financial issues: the vendors get no more money after the first sale, so they don't care about updates. Nor does it address the folks doing binary blobs, NDAs and other "IP" games. The Linux (GPL) attitude is all about user freedom (as in freedom of speech), and it gives everyone participating a level playing field (no party has a special position).


Hey, I wanted to thank you for taking the time to explain this. The situation and its context are much clearer to me now.


You are welcome. It is actually something that has been very painful for me and the project I've been working on. The newest kernel available for the ARM boards we were considering was three years old, and hence missing a lot of newer device drivers and other functionality, not to mention bug and security fixes.


Two groups probably have more experience in that than others:

Linaro: https://www.linaro.org/

OpenWRT: https://www.openwrt.org/

Going on a bit of a tangent:

Most of the time these companies just don't care.

If you don't mind getting your hands dirty, have a look at some vendor kernel trees. For example, an AUD $300 router released last year is still running 2.6.34brcm - http://www.tp-link.com.au/download/Archer-D9.html#GPL-Code

The 802.11ac card in that router will likely never have open-source drivers. I'm pretty sure the routing engine is closed source too, although with two Cortex-A9 cores I would have thought you could get away with doing it in software.


> (In practice it's also possible, as Microsoft demonstrated with Windows for decades).

There may be a handful of narrow cases where Microsoft has preserved third-party driver compatibility across long spans of time, but looking across a decade or more, they break as much as they preserve. Consider DirectX, which is now completely antithetical to its original purpose and name of giving applications (games) relatively direct hardware access. Modern systems can't survive without a level of hardware abstraction and sharing that mid-90s systems couldn't afford. The end result is that you get essentially no hardware-accelerated graphics on decade-old GPUs anymore, except where the drivers were rewritten for recent versions of Windows. Vista's new audio subsystem killed off an entire product segment of hardware-accelerated audio processing.


Oh, I meant that the demonstration has lasted decades, not that the backwards compatibility is eternal. Even once you take the stance of preserving compatibility, there's a strong market force to deprecate and eliminate old interfaces and guarantees once it's practical. In both the PC market of then and the smartphone market of today, hardware gets replaced more often than once per decade.


Somehow the majority of other operating systems manage to have such stable APIs.



