Linux Kernel 6.0 (lwn.net)
351 points by ecliptik on Oct 3, 2022 | 166 comments



>> Linux is a registered trademark of Linus Torvalds

I hadn't realized that before. The link below discusses the reasons why this was done.

https://www.linux.com/news/linux-mark-institute-protecting-l...


It's because some douchebag named William Della Croce Jr. claimed the trademark for Linux and tried hustling people (book/CD publishers and such) for the rights to use it without being the actual author of Linux. A judge eventually awarded the mark to Linus himself and since then he's been vigorously defending it to ensure this malarkey doesn't happen again.



Fascinating. So he’s basically the BDFL of the name by making sure one person (him) can police its use.


Leaving it to some entity like the Linux Foundation would likely have been much, much worse.


Why would that be worse?


It's the same principle as with the largely family-owned German Mittelstand - so many of these companies are "hidden champions" and are where they are because their leadership / ownership doesn't have to bow to the will of some obscure "committee", but can do what they feel is right for the company and will ensure its long-term success.

In contrast, when the trademark is transferred to a committee-run foundation (or, to go with the economic analogy again, a company that's traded on the stock market), the incentives for those in control can drift: short-term profit wins over long-term interests (e.g. the trust of the public). We've seen way too many organizations in the FOSS world fall victim to that, most notably Mozilla and MySQL - the former is always embroiled in some sort of scandal and Firefox usage has fallen through the floor, the latter was mismanaged by Sun and eventually Oracle so hard that its co-founder was forced to fork the codebase.


Then don't we just take the source code (which we have the legal right to) and make some other Linux?


You can do that any time you want, but enough deviation from the mainline kernel tree and you're no longer allowed to call it Linux.


"Design by committee" is a great way to create something that works for a lot of people, but is average at best. If you want to create something that really excels, you need someone to drive home the vision and that's really hard to do when you're more than one, because it's really hard to be 100% on the same page.

Not to say it's impossible, it's just a lot harder.


Apart from the “design by committee” argument, the Linux Foundation seems to be ... slightly evil, with so many WEF members filling up its ranks.


What is a WEF member?



"As a CEO of Linux Foundation, I too deserve a competitive compensation package, like the CEO of Mozilla Foundation."


Is this a real quote?


No, just an extrapolation.


More like the other way around: the courts appointed him owner of the trademark because he's the BDFL.


They once made commercial entities with Linux in the name rename themselves, while leaving distros and nonprofits alone.


Major versions of Linux used to be awaited with bated breath. For many years I ran a "Linux kernel major release pool", which probably nobody here remembers, but it was fairly popular. Now, it's fairly boring when a new release comes out. Don't get me wrong, I don't think that's necessarily a bad thing...


It’s mostly because Linux stopped having major release. Linus changes the first number when he finds that the second one is growing too big which apparently means twenty-ish.


I personally wish it had gone semver and the version expressed how "boring" and stable it was; "Linux 6.0" sounds like a big deal (but in spite of all the features and changes really isn't*), where "Linux 2.1028.0" would better reflect the reality IMO - smooth, stable progression.

* EDIT: In the sense of being non-disruptive and safe; the features are very non-boring :)


Semver really makes no sense here. The userspace ABI tries to never have breaking changes; yet the internal APIs for use by drivers, and indeed the drivers themselves, can have breaking changes every single release.

Not that most users care, as most of the tens of thousands of devices supported are fairly obscure and the changes don't affect the other users… yet kernel API changes that break, say, Nvidia's drivers or ZFS or other high-profile out of tree drivers, have an outsized effect. You either have to increase the semver major version with every single release, or somehow version internal APIs separately, which isn't really practical either.


Semver makes sense when you want to have a way of breaking the API interface and signal that to people.

If there is one thing I've learned from reading various messages from Linus on the mailing lists, is that WE DO NOT BREAK USERSPACE!

So Linux would be perpetually on the same major version, which means there is little reason to use Semver in this case.

Also, the first number gets updated whenever Linus feels like the second number gets too large, not sure how "2.1028.0" would solve that :)
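
For contrast, here's a minimal sketch of the compatibility rule semver is meant to encode (not how the kernel versions things, as noted above):

    def semver_compatible(old, new):
        # Under semver a drop-in upgrade needs the same major version and a
        # version that is not older; 0.x majors promise nothing at all.
        if old[0] != new[0] or old[0] == 0:
            return old == new
        return new >= old

    print(semver_compatible((1, 4, 0), (1, 5, 2)))  # True: minor/patch bumps are additive
    print(semver_compatible((1, 4, 0), (2, 0, 0)))  # False: a major bump may break callers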


> So Linux would be perpetually on the same major version,

Yes:)

> which means there is little reason to use Semver in this case.

Okay, that's fair. If semver is there to signal breaking changes and there will never be breaking changes maybe it's not super useful.

> Also, the first number gets updated whenever Linus feels like the second number gets too large, not sure how "2.1028.0" would solve that :)

That's not "also"; I'm specifically saying that Torvald's versioning system is bad. Honestly I'd be happier with losing the leading number completely and doing an increment per feature release; right now the "major" version field just doesn't mean anything.


I blame browsers for this... They were the ones who started the "version inflation", where a new major version meant almost nothing... and everything else followed slowly.

The problem I see is: what's next? Google Chrome is at 106... a few more years, and 200... 300? What happens if Linus runs out of fingers for the major number too?

I miss the "good old times", where 2.4.17 -> 2.4.17.1 meant a minor patch, -> 2.4.18 meant a new version with some new features, -> 2.6.x meant some large changes, and 3.x meant something huge, with possible backwards incompatibilities, making you choose between a mature 2.x version and a newish 3.x version.


The "good old times" include Solaris 2.6 -> Solaris 7 ;)


While true, that was also SunOS 5.6 to SunOS 5.7.

One name for the marketing brochures, one for techies (and their scripts). Works for me.


Linux version 2 was released in June 1996, and v3 in July 2011, 15 years later. Everything in between was incremental.

This massively predates web browsers with 100 versions.


Linux v3 should be 2.6.40

Also, in 2011 Firefox went from "slow versioning" (up to version 4) to a new major version number every few months... the unofficial reason was that Chrome was "overtaking" Firefox in version numbers, and "higher number == better browser" (Chrome went from 9 to 16 that year).


I still have scars from 2.4->2.6.


That was an exciting year, in the “our house is on fire” sense.


My company's first embedded Linux project was right smack in the middle of that. I think there were some big changes in GCC. NAND flash was relatively new. So many out-of-date docs, broken/incompatible code, and false starts.


I was working on an embedded system long after that transition happened. The BSP we got from the chip vendor was still targeting 2.4. We had to port all the drivers over ourselves. The long tail of old software in the embedded world is ridiculous and painful.


At least Android did a good job in forcing chipset vendors to keep up their BSPs with recent kernel versions... basically every chipset in use by anything with Android will have a decently modern kernel version.

Now, if Google would also enforce open-source compliance... the number of hoops one has to jump through to get OSS dumps is absurd, and most of them lack critical parts (e.g. build toolchain, kernel config, partition layouts, DTBs), so it's OSS in theory but unusable for developing alternative OSes in practice.


But we did get alsa in our kernels :D


ALSA fixed so many problems and caused so many more. It was great to finally get so much hardware support in the kernel.


2.4.15 was still the very best Linux kernel, everything bloated like crazy afterwards.


That’s because major releases are done when Linus feels like it, not because there’s something major in the new release.


Can someone explain what Linus meant by "running out of fingers and toes"?

EDIT: Ah I got the joke - he meant DIGITS!


Linus used to post to Google+, so a lot of the original posts are gone to the wind, however, he originally said that he'd bump major versions when the numbers get too big, i.e. he runs out of fingers and toes to count the minor version.

Here's an article for reference (2015): https://arstechnica.com/information-technology/2015/02/linux...


It is a reference to how children sometimes count on their fingers. It's an old, lame joke that if someone needs to count higher than 10 they can use their toes too. He's making that joke at himself.


Conundrum: our hands give us eleven symbols: various fingers and thumbs for 1 to 9, everything out for ten, and closed fists for zero.

While the sociologists are off figuring out why this gave us a base ten number system instead of base eleven, the computer scientists are showing off how they can count to a thousand instead.

(And also get thrown out of every bar where they order four of something.)


You can actually conveniently count to 12 with one hand. Use the tip of the finger plus the first and second finger joints: 3 segments x 4 fingers = 12. Then use the thumb as the pointer to keep track.

A base-12 number system would be advantageous because 12 is a "superior highly composite number". However, needless to say, despite the number theoretical advantages changing from our current base-10 system is essentially impossible.


Well you could count to 31 with one hand (5 fingers = 5 bits) and to 1023 with two hands (10 fingers = 10 bits) but not as conveniently :)


Note that other cultures started numbering the individual finger bones using their thumb, ending up with a base 12 or 24 system (or so the "just so" story goes).

Regardless of its origin, base 12 is still a major part of our lives (24 hours to a day, 60 minutes to an hour, even 60 minutes to a degree in angles).


Just use each finger as a binary digit, open or closed, and you can get to 1023. If you're really struggling to count you might add your eyes, elbows and knees and get to 2^16 - 1. That should be enough for anyone.
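
The arithmetic, for anyone counting along (one bit per body part, nothing more to it):

    fingers = 10                       # one bit per finger: counts 0..1023
    print(2**fingers - 1)              # 1023
    extras = 6                         # eyes, elbows and knees add six more bits
    print(2**(fingers + extras) - 1)   # 65535, i.e. 2^16 - 1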


There are 4 kinds of people: those who count binary on their fingers, and the other three kinds can f--- off.


I would think a kernel dev would know to count higher by using binary on his fingers :)


Yeah, but also clever enough to know bumping the version is less annoying - the old intelligence/wisdom split :)


The last release number was 5.19.

Linux 6.0 could have been versioned as 5.20, but Linus chose 6.0 out of thin air. Nothing more, nothing less.


4.20 exists, but 3.20 and 5.20 don't. I don't think there is any meaning to the numbers. It has been "the next release" since the 2.15 branch has been abandoned.


you have 10 fingers and 10 toes. 10+10=20. So instead of 5.20, he named it 6.0


But 20 is countable with fingers and toes.


Not if you start from 0 :)


Wouldn't 0 just be all toes and fingers clenched?


Counting with fingers is 1-based.

Linus made an off by one error :)


Beautiful. And a sign that I'm very old. I started in the transition from 2.2 to 2.4 which I remember being a big deal but I don't recall why. Anyway, Linux has been my daily driver ever since. No regrets on that front.


The architecture of the kernel changed a lot between those two versions. There were a lot of changes, more than I could reasonably fit in a quick comment.

Here is a quick overview: https://wiki.gentoo.org/wiki/Kernel/Migrate_2.4_to_2.6


Waiting for kernel 6.6.6


Given the release cadence of Linux, I'll predict 6.6.6 to come out in August 2023.


I hope Linus skips that version or goes from 6.6.5 to 6.7.0.


6.6.6 would be a patch release to 6.6 and Linus almost never bothers with those. It'll be more the territory of Greg Kroah-Hartman.

6.5, 6.6, and 6.7 will be major versions of the kernel. All of them will receive patch releases (6.5.1, 6.6.6, 6.7.2, etc.); one of them may become LTS, or they may not. Whether gkh decides to skip 6.6.6 and go to 6.6.7 instead is up to him, but it'd probably cause more issues with scripts than it'd be worth. I get that people can be superstitious, but IMO it's not worth it.


So what are the headline-grabbing features?


A very Intel-specific ACPI hack was ruining AMD Zen performance in some scenarios (by up to 1000%). Real-world (non-benchmark) perf impact has yet to be determined.

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...


ublk -- performant user-space block drivers; though maybe it's not of much utility unless the network-RAID things companies have been building on it get open-sourced (might've already happened, I haven't been following closely).


This could benefit ZFS and FUSE as a whole.


amdgpu has finally fixed the flickering bug for high refresh rate displays.


It’s adopted a new, modern, microkernel design.

… kidding


It may be off-topic, but don't you think that semver is somewhat dead if even the kernel (software everyone depends on) does not use it any more?

According to semver, there should be some heavy incompatible changes in this release, while it's only a regular update.

Maybe we should all just agree that semver is simply a pipe dream and we are never sure that a lib update won't break our app. Or that lib maintainers should take compatibility really seriously and not break it, like kernel hackers do.


The kernel has never used semver, and it doesn't have to because it never breaks userspace.


It's not supposed to break userspace, but it does from time to time.

There's a surprising amount of userspace code which has to check the kernel version number because some behaviour has no "feature check" that can be used. On rare occasions that gets tricky when a distro has backported some kernel change to an old kernel version.

Because it's not supposed to be done, most breaking changes are unexpected so it wouldn't be possible to implement semver anyway, or are bug fixes which happen far too often for semver.
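
A sketch of the kind of check being described, assuming a typical "uname -r" release string; the 5.15 cutoff below is a made-up example, not a real feature boundary:

    import os
    import re

    def kernel_version():
        # Release strings look like "6.0.0" or "5.19.12-arch1-1"; keep only
        # the leading numeric fields.
        m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", os.uname().release)
        return tuple(int(x or 0) for x in m.groups())

    # Hypothetical feature gate: pretend the behaviour we rely on changed in
    # 5.15 and there is no runtime feature flag to probe for it. (As noted
    # above, distro backports can make even this check unreliable.)
    USE_NEW_BEHAVIOUR = kernel_version() >= (5, 15)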


Wasn’t it that version 2.xx was introduced due to incompatible changes? At least that’s what I remember.


More than that, for many years, the odd minor releases were unstable (1.1, 1.3, 2.1, 2.3) and the evens were stable (1.2, 2.0, 2.2, etc). There could definitely be big churn going from 1.2 -> 2.0, 2.0 -> 2.2 and so on.


Dunno, but 2.x was ~20 years ago; things are different now.


Sorry, it looks like I’m a bit old :P


20 years was a friendly underestimate. 2.x was introduced 26 years ago ;-)

If it helps, I remember the transition from Linux 0.99 to 1.0, the switch to ELF and Glibc, the first non-x86 architecture support, the introduction of kernel modules, threads being a new feature, and the dropping of 386 support - all but the last happened while I was using Linux as my desktop OS at my day job :-)


But 2.6.39 was "only" 11 years ago.

I would nearly bet that there are still vendors shipping 2.6.x kernels in their embedded systems today. Probably rare...


Don't know about 2.0, can't find anything about it. I know 2 -> 3 was due to changes in development, not breaking of userland.


No, the change from 2.6.39 to 3.0 had absolutely no meaning. Changes to the development model happened early in the 2.6 series.


Indeed. And also for libraries, semver is happily alive and thriving.


It's also not clear what it's supposed to be compatible with. There are many compatibility concerns, but semver admits only one. Linux removes and deprecates features and changes driver API too, so you can expect every release to break some things and not break other things.


Semver has its uses. Those uses are not in something like a kernel. A kernel can be considered to be a collection of multiple APIs, each with its own audience. Every version brings changes to some of these APIs, which makes it a breaking release for some of the audience.

Semver is good for things like libraries that have a single audience. Either the workflow is broken for all of them or none of them (at least mostly). It can also be used for individual APIs. But for systems that are a collection of multiple APIs (kernel, browser, apps, etc), semver starts making much less sense.


> we are never sure that lib update won’t break our app. Or that lib maintainers take compatibility really serious and they don’t break it, like kernel hackers do.

That is extremely unlikely to catch on. It's both annoying and imposes significant costs. It also tends to trap you in early design decisions. (Don't get me wrong, in my dream world where all software was formally designed and written perfectly the first time and orgs gave infinite resources to engineering and addressing tech debt, everyone would use semver and almost nobody would ever pass 1.x - it's just that I don't think it's likely)


I wonder if various proxy software (haproxy?) would be able to benefit from these new uring features.


kernel.org still showing 5.19.12 at this moment. Guess it will be updated soon :D



It's very easy to armchair-criticise the Linux kernel, but it's surely one of the marvels of modern software engineering in terms of size, scope, usage and longevity.

Well done to the team.


Agreed.

What amazes me is that they (and a number of other projects) do it without any traditional “management.” Sure, there might be a release manager, but it’s more like a coordinator/comptroller function. It’s just a bunch of (mostly) “let’s act like adults” sort of people, and without all of the bureaucracy that usually accompanies even the smallest software efforts, it just keeps going.


It's inspirational, to say the least. Linux is a testament that a truly distributed team can not only work efficiently, but deliver a successful product.

I'm inclined to think that it's partly because they're not chasing growth metrics, don't have to report to stakeholders, etc. Most companies drown in the overhead related to such processes.


I don’t disagree but there are many corporate contributors who are getting paid based on the success of the optimizations and features they upstream to the kernel. At FAANG it’s the same metrics-based success you refer to.


Sure, but if any one of those corporate contributors could merge any change they wanted into the kernel they could likely get way more "metrics" gain, but they can't, because that's not the goal of the kernel team. The gains of one corporation are secondary to the gains of everyone.


Those corporations work together to contribute to the same open source code base and at the same time they are trying to (commercially) kill each other. Quite a wonder.


Or maybe somewhat similar to biological evolution: competition and cooperation.


Kinda reminds me a bit of the operations of the nation states and counties of past Europe…


Good point. But there’s not one such paradigm ruling the whole process of developing Linux. I guess we actually could say there are competing paradigms.


> and longevity

The recent funeral of Queen Elizabeth II was an opportunity to read up on the many bits of trivia about British royalty. (PS: the British monarch is definitely not the head of my country.)

One thing that struck me was that they still have plenty of things (actual physical objects) from the Middle Ages that have survived to the modern day.

I think it's awesome that something from the Middle Ages has been immaculately maintained to the modern day.

So perhaps, by the time it's the year 7298, the Linux kernel will still be in use, immaculately maintained over thousands of years. Wouldn't that just be great?

(I wonder how intergalactic pull requests are gonna work out...)


Where are you from? In Europe you walk past medieval-era buildings all the time, and seeing something 1000+ years old (or a few times that) is just a museum visit away...

I grew up near a church built in the 11th century, for example. The interior is newer but the building itself is original.

My own family inherited things from circa the 16th-17th century (useless junk, but anyway).


> seeing something 1000+ years old (or few times that) is just a museum visit away

That's terrific!

Facepalm.

My country got plundered, conquered, and colonized; they took away everything they could possibly take away. Some of those objects are in some European museums and palaces ;)

(Don't take this the wrong way. I'm not blaming anybody. What has happened has happened.)


Well, same here, most of the oldest artefacts were taken away. Europe is a big place; your fate is shared even here. But even the stuff that remains at home is interesting - it just doesn't have as much gold on it.


Check out this book: A Deepness in the Sky.

In the far future, humanity has ships which are mostly automated and don't need actual pilots. However, they need tech guys to constantly fix and patch the software. They have to know a lot of the history of the software to be good at their job. The hero is basically the last member of humanity who knows what the Unix epoch means.


"Software archaeology"


We're not going to be intergalactic by the year 7298, for the simple fact that the Milky Way is 100,000 light years across and traveling faster than light is physically forbidden.


You really think our model of reality and what is possible will apply in 5,276 years? What did we understand 5,276 years ago? Most certainly countless things that we have now would be considered impossible or witchcraft back then.


The physics was the same 5,000 years ago; people just didn't know it yet.

Everything we've built so far obeyed physics, and I expect that to remain true in perpetuity.


The physics were the same, but you assume our current understanding of physics is complete.

At least until we solve general relativity vs quantum mechanics (theory of everything), I would be hesitant to claim we know enough to state definitely what is or isn't possible.


One thing that both GR and QM agree on is that the speed of light (well, the speed of massless particles) is a constant of nature.

We also know this very clearly from relatively simple direct measurements: there never was and there never will be a way for matter to accelerate to speeds greater than c.


Or plot twist, the year is 4765. Turns out physics is deterministic, the universe still may or may not be a simulation, but that hardly matters because the only variable in it is human experience. While interstellar travel remains impossible, we can perfectly simulate any point in spacetime to enter as our reality. We create our own multiverse out of curiosity, powered by other verses we simply run backwards to restore entropy. We no longer know what's a simulation, nor do we care and choose difference experiences for lifetimes for infinity.


plot twist, the year is 7654. Turns out earth is flat.

The likelihood of your statement is similar to that of mine. It's just nicer to conjecture about the former because the increased complexity brings the illusion of potential.


But even these current laws allow for non-Euclidean space configurations and wormholes that leave the possibility of quick trips around the galaxy.


They allow for paths across the galaxy to essentially be shorter than they appear - they don't really allow for such shortcuts to be created if they don't already happen to exist.


It is not complete, but you're assuming that our current understanding of physics has major deficiencies such that significant prior observations can be overturned.

Even Newtonian physics from 300 years ago largely remains the same; relativity complemented it and explained issues in edge cases. It did not completely overturn Newtonian physics, and gravity did not become repulsive instead of attractive.


They need not be deficiencies.

The discovery of more fundamental models may reveal dualities between our existing theories and new ones.

These duals may show that our current models are accurate, but only to an extent, or don't apply in certain circumstances.

It's hard to fathom, but with gaps to fill in our knowledge of the universe, we may let our imaginations run wild yet :').


There are already highly theoretical schemes for FTL travel. For example, https://en.m.wikipedia.org/wiki/Alcubierre_drive


I would call them "science fiction", but whatever


Certainly the Alcubierre warp drive requires exotic matter which may not exist.

There's now a scheme that doesn't require negative energy.

https://newatlas.com/physics/ftl-warp-drive-no-negative-ener...

It may be almost impossible to build due to the energy required, but it isn't prohibited by the physics we currently know.


Why bother advancing science at all, then? If everything is forever the way it will be, clearly we know everything


There are tons of things in physics that we are unsure about or know nothing about. The fact that the speed of light is the ultimate speed limit is not one of them.


Advancing science means trying in the places we know we don't know about, not repeatedly trying to overturn something that we know about and that all observation (what do you think those spacecraft do?) agrees on.

It's like complaining about why people don't try to build a perpetual motion machine - it's because it is not possible. You're welcome to keep trying, though.


Think of everything we’ve discovered even in just the last 50 years, and how impossible it seemed from the perspective of the past.

You can always argue that we know more now, that there are fundamental limits, etc., but there’s always more to learn and revise.


Nothing we've discovered in the last 50 years seemed impossible from the perspective of the past. They did seem much farther away, and some had just not been thought of, but no one in the 1970s would be completely incredulous at anything that exists today.

Instead, many things that seemed possible 50-150 years ago have turned out in the meantime to be impossible - speeds faster than c (before Maxwell's equations and SR), arbitrary precision measurements (before Heisenberg's uncertainty principle), uniquely determining the future state of a system based on a previous state (before wave functions), arbitrary amounts of matter and energy (before quantization of these and other properties) etc.

The history of physics is usually one of discovering new constraints, not one of discovering new possibilities. Before physics, we used to believe in unconstrained magic. Not only have we always believed we can reach the stars, we used to believe we could do it by finding the right magical formula or praying hard enough.

Of course there are some exceptions - electricity and quantum computing come to mind.


50 years ago is not 500 years ago is not 5,000 years ago. In the last 50 years we've had some incremental improvements to our understanding; at 500, you're going back to before Newton; at 5,000, we were just developing complex farming societies and the first civilisations.

5000 years is before Ozymandias. Look on my works, ye Mighty, and despair!


Until we figure out how to pull spacetime from adjacent timelines or whatever wacky thing the Unified Theory turns out to be.


Are you claiming that we finally found a perfect and complete description of reality in modern physics?

This is such a glaring and huge blindspot in western scientific thought process - a blindness to the human element of the scientific process. As if "physics" is something that has been around forever and we just found it in perfect form like some ancient artifact in a cave, rather than the truth of the matter, which is that it is a human creation that is modified with time and is a complex communal phenomenon with many different facets and pieces that not everyone agrees upon. In fact many of the most fundamental elements of physics are under revision and without consensus.


But our model of physics was different 5000 years ago. And I'm pretty sure it'll be different 5000 years from now.


Physics doesn't care what our model of it is.


That's the point.

Stating that our current models of physics preclude X neglects to mention it's our current understanding of physics which preclude X.

Our models may change to better match reality in the future.


Physics is our model. There may be something that exists outside of our model, but we can't claim to know with certainty the limits of whatever that is. And whatever it is, it's not physics.

Is a statement that nothing can go faster than the speed of light not a model? And if not, how did we arrive at that knowledge? God? Direct experience of the speed of light?


That's good news, because our current model suggests that we can't exceed the speed of light. Since physics doesn't care about our model, we might find out that we can somehow.


If faster-than-light travel were possible, any single species that discovered it would be able to cover an appreciable portion of the galaxy in only a few thousand years out of billions.

Either life is virtually non-existent thus far, virtually always suicidal, or discovering it is tremendously unlikely.

Neither of the last two bodes well for us. It seems to me most likely that FTL being possible would suggest an empty universe, or one in which we too are likely to die without learning it.


> Either life is virtually non-existent thus far, virtually always suicidal, or discovering it is tremendously unlikely.

Or they tend to sublime [0], or advanced societies prefer to live in virtual reality, or they tend towards being enlightened enough not to mess with species in an earlier stage of development, or they're not very interested in us, or we've been unlucky/lucky to have avoided discovery so far, or FTL is possible but can only be exceeded slightly, or FTL is possible but requires billions of years of technological development to achieve, or FTL requires a prohibitive amount of energy to travel at longer distances, etc., etc.

[0] https://theculture.fandom.com/wiki/The_Sublimed


You’re assuming that all species seek to grow infinitely.

What’s wrong with a species of sentient house cats that just want to nap all day?


> What’s wrong with a species of sentient house cats that just want to nap all day?

Nothing wrong with that, but if life is common, you’d need all life to be like that to explain the Fermi paradox, which seems unlikely.


It could also be that the technology to achieve FTL is just over a threshold where the civilization would be unrecognizable to us as humans.

We aren't far from such technology already. For instance imagine just a few generations of biotech and computational technology - it may already be enough for us to transform and evolve into some biomechanical hive that bears little resemblance to our current selves.

It could also be that once you are advanced enough, you transcend this physical universe. For example, the entire universe to a deep sea squid is cold dark water without much other life. They have no way to comprehend that there is so much more to reality. They just aren't capable of it. We are not special and also likely not able to detect or comprehend other levels of reality that would be more vast and interesting to advanced beings than simple FTL leaps around the galaxy. Perhaps they just aren't interested in our little ant hill the milky way.

The lack of imagination around what is possible and the supposed paradox posited by Fermi is disheartening.


The squid can see, even if it can't comprehend, that the water continues, or, if it gets high enough, that there is a boundary between water and air. If indeed it isn't able to see that there is more, it would not merely be because of a lack of specific knowledge but because of a lack of even basic analytical tools to reason abstractly, communicate between individuals, and pass down analysis between generations over time.

I think there is reason to believe that this represents a unique threshold rather than one of many, and that even highly advanced beings would be fundamentally comprehensible and describable (even if their technology is not) in a way that probably isn't true of the squid, because our abstract reasoning can represent the basic facts needed to describe what they are doing even if it misses the why or how.

Even a biomechanical hive would have greater longevity and resources by expanding even if we don't much recognize what they are doing with said resources.


> The lack of imagination around what is possible and the supposed paradox posited by Fermi is disheartening.

I think similar reasoning could apply though. If advanced civilizations escape into a different dimension or state of being where they are undetectable, we’d need all of them to do that to explain the Fermi paradox.


It doesn't seem unfathomable that it could be the case that most advanced civs escape the confines of our limited universe or even evolved outside of it. Considering how many species there are, millions, it seems unlikely that we are at the "top" of the tree of life, but rather that we are unable to detect more advanced "animals" that are part of it.


Exactly. Life being rare and spread out, space requiring enormous resources to move between stars, most species never outgrowing their backyards seems like it could credibly explain the paradox without FTL.


Assuming 0.1c travel, the galaxy can be covered in a million years already, which is way less than the lifetime of many species here on Earth. But if you extend your argument to intergalactic travel, then you definitely have to start talking about billions of years. I generally think this is the best argument that FTL travel is impossible, because if it were possible it would have dramatically increased the likelihood of non-human intelligent life close to, but outside of, Earth. That's apart, of course, from the physics of the last 160 years being built on the fact that c is the fastest speed there is.


[Caution: vague speculation follows]

Apparently exotic objects like wormholes may exist which can "teleport" one from one part of the universe to another while never violating the faster than light principle.

[See https://en.wikipedia.org/wiki/Wormhole ]

In other words, we could have intergalactic messages.

Our understanding of Physics is still evolving and things considered impossible today may become possible tomorrow without necessarily bringing the whole edifice of Physics down by taking paths that are not known so far.

Grand as the achievements of modern physics are, we should still remain humble about what is possible and impossible.


It's not easy keeping a wormhole open; while they are valid solutions to the general relativity equations, they are wildly unstable, if memory serves.

Also, gravity is very weak and needs a significant amount of mass/energy to source it; a black hole with the mass of the Earth would be less than two centimeters in diameter.
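
(Back-of-the-envelope check of that size, using the Schwarzschild radius r_s = 2GM/c^2 with rounded constants:)

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24    # mass of the Earth, kg
    c = 2.998e8     # speed of light, m/s

    r_s = 2 * G * M / c**2
    print(r_s)      # ~8.9e-3 m, so an Earth-mass black hole is ~1.8 cm across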

And yes, smaller black holes are in theory possible to make (in some theories at least), but they tend to evaporate very, very quickly via Hawking radiation (and none were observed at CERN).

Then there's the ER=EPR conjecture that quantum entanglement is due to microscopic wormholes connecting the states, but again it doesn't seem you can shove a photon in there and get superluminal communication, as quantum entanglement cannot transfer information faster than light.

In the end, Lorentz invariance, and thus the speed of light, is deeply embedded in the fabric of spacetime (we've never seen any violations, and I saw the superluminal-neutrino drama first-hand as I was doing my PhD in Bern).

They are a bit like warp drives: possible in some theory, wildly impractical, certainly lethal, and requiring infinite energy and/or destroying the arrival galaxy.

More realistically speaking, and I know people will hate me for this: we have a decline in biodiversity probably unprecedented since the dinosaurs went extinct, a near-vertical climate trajectory to tackle (which makes the first problem even worse), limited resources to mitigate it, but more importantly: very few people paying attention.

At this rate I doubt the successor to the Large Hadron Collider will be ready before the collapse of our civilization.

I get that it's nice to dream, but we are tripping all the regulation mechanisms on this planet, and the fossil record shows that's not something that's good for survival.

Maybe I'm wrong, I'm a physicist after all, and SuperElon FromTheFuture will ride a closed timelike curve and come Save Us All (TM) with electric rockets and SmartTechnology (C).

(edit: added a pinch of dark humor.)


Based on your interesting response I would then say:

Without violating the speed-of-light limit, intergalactic messages are highly improbable but not impossible. The science could conceivably allow it, but no practical path has been found yet.

Give it 100-200 years. If civilisation does not collapse before that, we might be able to revise our answer. It took decades of experimental research before gravitational-wave detection became possible. Who knows what 100-200 years of research might uncover?

Would you agree?


We can't rule it out, indeed, but then sending a receiver to another galaxy would require more energy than saving our civilization.

It's a matter of priority, I guess.


What if we don't want to travel faster than light, just warp space around us?

(Sorry, I finished the third book of the Three Body Problem series literally an hour ago, and, well, curvature propulsion [1])

[1]: https://www.reddit.com/r/threebodyproblem/comments/6lj9df/ca...


It would be cool, but the thought of maintaining any kind of code with more than 5000 years of legacy scares me deeply


Keeping legacy code around is a choice.


Not even Pham Nuwen could avoid it...

(for context, read A Deepness in the Sky - but it's better to begin with A Fire Upon the Deep)


There is essentially zero chance that Linux will still be in development 50 years from now, never mind 5000.

We might have strong AI by then, and surely one of its first tasks (assuming it doesn't wipe out humanity) would be to redesign our entire information technology stack from the ground up, which would make any software built by humans instantly obsolete.

But even if that somehow doesn't happen, I fully expect an operating system to emerge in the next few decades that is built around a formally verifiable safety and security model. Once it is production ready, it will dominate the market soon afterwards. The stakes associated with information technology are ever-increasing, and relying on a system written in a language where "off by one" can mean "privilege escalation" isn't sustainable in the long term.


In 1966, IBM released its new mainframe operating system, OS/360. 56 years later, it is still actively developed, although it's been rebranded a few times – most recently to z/OS, although it is still probably best-known as MVS. I'm very confident it will still be around in 20 years, and being actively maintained – although no doubt its install base will have shrunk a lot further. By then, it will be 76.

Considering how many untold millions of devices run Linux worldwide – servers for myriad business applications, consumer electronics, motor vehicles, industrial machinery, medical devices, lab equipment, aircraft, spacecraft, satellites – if z/OS can make it to 75, why not Linux? Linux will turn 75 in 2066, and then it need only survive another 6 years to make it to 2072.

Even if some new hotness (such as Google's Fuchsia) overtakes it, its existing momentum will be enough to sustain it for decades to come – and people will be willing to pay for necessary maintenance (bug fixes, support for new hardware, network protocol enhancements, etc.), just like they do with countless other legacy systems today.

> I fully expect an operating system to emerge in the next few decades that is built around a formally verifiable safety and security model

We already have formally verified operating systems – the open source seL4 microkernel was proven correct in 2009. But, few have adopted it – because, whatever the theoretical advantages a formally-verified microkernel may have, they don't amount to enough in practice to justify the switching costs.


> We already have formally verified operating systems

But none of them are adequate replacements for today's mainstream operating systems. seL4 is a research project, not a Linux competitor. If it offered features similar to Linux, companies absolutely would switch – the cost of security vulnerabilities and their management is staggering.


This is kinda like saying if we had electric vehicles that can charge instantly and have million km range then everyone would switch. Sure, that's correct. But it totally glosses over the fact that something like that is just not possible to build in the foreseeable future. A kernel that rivals Linux in its feature set and is formally verified is even more of a bonkers proposition.


BTW z/OS has a Unix subsystem, which enables it to run practically any modern Unix software, including all modern scripting languages, etc., which are actively used. So there's no urgent motivation to change the platform for those who're invested in it but also want to use more recent software. I think there's no reason why it couldn't live to be 100 years old - maybe not the most popular software by then, but still alive for those who need it.


I haven't personally done it, but from what I've heard, porting things to z/OS UNIX can be rather painful, due to its use of EBCDIC, and being based on an older UNIX standard (UNIX 95) which means it is missing many newer APIs (for example openat and friends).

IBM has now added to z/OS the ability to run z/Linux Docker containers (a feature they call "zCX"), which run in a hypervisor built in to the OS. It is a lot easier to just run the software under Linux, which is all ASCII/UTF-8 and has pretty up-to-date APIs (even though few Linux distributions have formal UNIX certification, Linux generally implements all the standard UNIX APIs, except for obscure things with dubious value and little use). When it is just VMs on the same machine, network communication between two operating systems can be very fast.
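
As a small illustration of the EBCDIC friction mentioned above, Python ships codecs for several EBCDIC code pages; cp037 is used here (z/OS UNIX itself commonly uses the related IBM-1047 variant):

    text = "hello from z/OS"
    ebcdic = text.encode("cp037")   # cp037 is one of Python's built-in EBCDIC codecs
    print(ebcdic.hex())             # e.g. 'h' becomes 0x88 here, not ASCII 0x68
    print(ebcdic.decode("cp037"))   # round-trips fine; the pain starts at every
                                    # boundary between EBCDIC and ASCII/UTF-8 code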


I don't think EBCDIC use is mandatory. It was a while ago, but I did some ports for that Unix, and I don't remember EBCDIC being an issue. Of course, z/Linux is even easier.


zSeries machines also run Linux virtual machines extremely well (e.g. the internal throughput between VMs is crazy good). Just because Linux becomes popular doesn't mean zSeries stops being relevant to the kinds of corporations it's intended for.


> There is essentially zero chance that Linux will still be in development 50 years from now

I doubt this. Linux is going to be around for a while just like COBOL, mainframes, and a bunch of other legacy technology.

Even today there’s not really a clear Linux competitor. It has been dominant for quite a while.


Except, forces that be won't be particularly thrilled about letting a formally verifiable, safe and secure operating system dominate the market. There's going to be serious resistance as long as humans are at the top.

Maybe strong AI can help in this regard. Or maybe it will just emulate the prejudices of its creators.


It'll have to compete with GNU Hurd 1.0.


I don't think rushing to release Hurd out by an arbitrary deadline like 2072 will be in anyone's interest.


Hundreds of thousands of people Google "how to center a div" monthly and you're talking about AI wiping out humanity or becoming an omni-sentient coder? Okay...


People google "how to center a div" because CSS is garbage and makes such basic things incredibly difficult and unintuitive. Just because the question seems simple doesn't mean it is.

Either way, I don't understand how these two issues are related.


AI has no problem centering a div

https://youtu.be/DdAhRpXE_ww


We are doomed


Does the NT kernel get the same accommodation, where we admit that despite its shortcomings, it's a marvel of software engineering?


In informed circles, NT gets a lot of respect.


And it's done without SCRUM :D


What impresses me is that the Linux kernel has maintained ABI compatibility with userspace forever. Meanwhile, people in userspace keep talking about binary compatibility as some kind of impossible unicorn of a goal and bring us terrible things to allow them to "evolve" ABIs instead of just, well, not breaking callers.

I love it when Very Serious People say "X is impossible". You then show them an example of "X", and these Very Serious People hem, and haw, or rage-quit meetings, or do whatever it takes to avoid admitting that they were wrong.



