Hacker News
Why we downplay Fermi’s paradox (nautil.us)
42 points by dnetesn on Aug 7, 2018 | 85 comments



Here's a strange thing:

Many people believe that strong AI is inevitable, or at least possible within the next few decades to centuries. Many people also believe that there's a natural solution to the Fermi paradox (space travel is hard, signals are faint).

But if strong AI is possible, then the Fermi paradox suddenly becomes much more acute, because it makes von Neumann probes seem way easier -- once you have self-replication across interstellar distances, you will blanket the galaxy with your Cylon spawn pretty quickly. The question isn't just why we haven't heard signals from people like us, but why their robotic spawn haven't already shown up and disassembled a few moons into robot factories.


My favorite theory: the von Neumann probes are already in the solar system. They've been continuously monitoring this system for a billion years. They could just be unwilling to make the first move toward contact with humans -- maybe we'll notice them, eventually, once we have better monitoring of small in-system objects. They could even be deliberately avoiding detection, like wildlife researchers who don't want animals to notice them. They haven't turned moons into copies of themselves for, basically, the same reason they haven't turned everything into paperclips -- that sort of unbounded-growth directive is useless and/or dangerous.

Note that I don't really believe this, but it does seem to be a resolution of the paradox that (like many others) we don't yet have enough evidence to reject.


My favorite theory is that there's been at least one Bracewell probe in our solar system since the 1920s.

https://en.wikipedia.org/wiki/Bracewell_probe

https://en.wikipedia.org/wiki/Long_delayed_echo

http://folk.uio.no/sverre/LDE/


I'd object on the basis that it is hard to imagine everyone cooperating on that sort of project. It implies everyone has about the same incentives and motivations, which suggests a remarkable uniformity... so if it's true, there's probably only one civilization in our galaxy, and it's spread uniformly everywhere.


Or that one civilization (more likely: a tiny number of decision makers within one civilization) got the capability first, and had outposts waiting everywhere before any other species could reach the same galaxy-spanning capabilities. Or that a handful of civilizations spread out before there was one galaxy-wide winner. But all faced similar incentives against paperclip maximizers and toward peacefulness upon contact with other interstellar civilizations. World powers here on Earth recognized that thermonuclear war is a terribly bad idea even though a thermonuclear weapon has never been used in anger. Multiple alien civilizations could perhaps foresee that deploying Nicoll-Dyson Laser weapons would end badly without a single proof-of-concept firing.

If a civilization does expand via von Neumann replicator machine life, it becomes a lot more plausible that every machine has similar incentives. It also becomes a lot harder to fully exterminate all machines. There may be no winnable wars between interstellar-capable machine civilizations, assuming that more than one exists in the first place.

I know, I'm invoking a lot of speculation. But answering the "where are they?" question of the original Paradox with "lots of places, but we've failed to notice" feels more satisfying to me than guessing that Earth is extraordinarily special. Also more satisfying than guessing that technologically developed species inevitably self-destruct before developing interstellar capabilities.


So we are another kind of special, deserving observation and a galaxy-wide bubble of silence.


No, I'm assuming that probes are widely distributed. Laser communication is better than radio communication over long distances. Shorter wavelengths make for tighter beams. The whole galaxy could be mesh-networked with ultraviolet laser links and repeaters, and it would still take a rare stroke of luck for any of our telescopes to notice, even if the links aren't operated for deliberate stealthiness.


Tight laser beams are limited by diffraction. A kilometer-wide laser beam at a wavelength of 308 nm will have a diameter of around 15 million meters at a distance of 4 light years. A meter-wide beam will have a diameter of around 15 billion meters, all other things being equal.
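As a quick sanity check on those figures, here is a minimal sketch using the standard 1.22 λ/D diffraction estimate (conventions for where the beam "edge" lies differ by roughly a factor of two); the 308 nm wavelength, the two apertures, and the 4-light-year distance are the ones from the paragraph above.

    def spot_diameter(aperture_m, wavelength_m, distance_m):
        # Approximate diffraction-limited spot size at a given distance,
        # using spot ~ 1.22 * wavelength * distance / aperture.
        return 1.22 * wavelength_m * distance_m / aperture_m

    LIGHT_YEAR = 9.461e15      # meters
    wavelength = 308e-9        # 308 nm ultraviolet
    distance = 4 * LIGHT_YEAR  # roughly the distance to Alpha Centauri

    for aperture in (1000.0, 1.0):  # 1 km and 1 m transmitting apertures
        d = spot_diameter(aperture, wavelength, distance)
        print(f"{aperture:>6.0f} m aperture -> spot diameter ~ {d:.1e} m")

This comes out to about 1.4e7 m and 1.4e10 m, consistent with the 15-million and 15-billion-meter figures above.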

I can't, right now, estimate the probability of breaking the silence, though.


Right, a 15-billion-meter-wide beam aimed at, e.g., a solar gravitational lens relay station some 542 AU out, or even aimed at much closer receiving stations around Jupiter, would not intersect Earth most of the time. At 308 nm it wouldn't be visible to a ground-based telescope, since the atmosphere would mostly absorb it. If using gravitational lensing to set up system-to-system communication is the norm, the beam might not be bright enough to be observable by current space-based UV telescopes.

All of this could just be driven by ordinary post-biological-alien good engineering practices. It would seem extremely stealthy to humans, but that doesn't need to be intentional, any more than ham radio communications are intentionally stealthy against observation by whales.


The obvious answer is that one or more of the assumptions in those statements is incorrect.


Or they are true but we are the first to reach this technology level.


I believe the classic argument against this is that it's statistically improbable/impossible for us to be first.


It's a lottery paradox.

It is statistically improbable for you to win. Given a large enough pool of players, statistically someone has to win.

It's incredibly unlikely, but the chance isn't 0. Someone has to be first.


If we're talking about intelligent species, the first question should be: why? Why would they come here in the first place?

Now, you could make the same argument about evolution. Bacterial evolution is far quicker than human evolution. If they saw an advantage in killing humanity, we'd be dead before dinner. So why haven't bacteria killed us yet? No advantage.

The simple truth is: it's not worth it. Large bacterial species lose and gain more biomass on a daily basis, purely through accidents, than all human biomass combined, and they wouldn't be able to use all that biomass if they took it from us, nor would they be able to harness the energy humans use, so that's no good either. So it's an idiotic thing to spend energy on, and so they don't spend energy on it.

For space-faring civilizations you can make a similar argument. Despite the movies, there is nothing on Earth that is worth coming down the gravity well for (especially not water: there is FAR more water in the asteroid belt, and it's far easier to collect and use), and maybe it goes so far that it's just not worth coming into a yellow dwarf solar system at all: there will be exponentially more materials and energy available in many other solar systems. So at the very least we'd be last in line for a visit, and maybe just not in line at all.

Reality is that humanity will never leave Earth, and AI will never return home. We will launch AIs, and they'll decide to venture out, and conquer and multiply in star systems with huge stars and massive asteroid belts (lots of resources, no gravity wells you have to spend energy getting into and out of).

Why haven't we seen aliens? Same reason bacteria don't kill us (well, only by accident): we're just not worth their attention.

And would they do it to study biology? Well, you'd have to take into account that for any race hopping between solar systems, we wouldn't be the first race they meet. Or the 100th. We'd be the 10,000th. So would they still be interested?


> Why haven't we seen aliens? Same reason bacteria don't kill us (well, only by accident): we're just not worth their attention.

It's also entirely possible that amidst the vastness of space, we have yet to even be noticeable, much less noticed. What few traces of our civilization have been leaking into space for merely a century might be indistinguishable from background noise, and impossible to trace back to us if another civilization just happens to come across them.

Maybe they were listening for millions of years, when there were only dinosaurs here.


It is also statistically improbable/impossible for us to be here in the first place.


What's the point of the von Neumann probe?

Making a self-replicating probe to litter the galaxy seems like a pretty piss-poor thing to do to any interstellar neighbors. Ideally, those probes would have to do something other than make more probes.

Ideally, those probes would send information back here.

And once we ascribe a goal to the probes, all of the problems come back. Information from 4 light years away will take 4 years to reach here at best. From further, longer. What good is information that 3 centuries ago, there was a civilization on planet X?


> What good is information that 3 centuries ago, there was a civilization on planet X?

People go out of their way to try and uncover ancient and forgotten civilizations here on Earth already.

If we have strong AI, the probe itself will be smart enough to chat with the locals in realtime. Any local AIs might send a probe on their own and have a century-long nap here to wait for a response. If our AIs are based on some sort of mind-uploading tech (probably unlikely, but who knows) then arguably there is no wait -- if you want to go, just upload yourself to the probe and go to sleep. Upload a thousand copies to a thousand ships.


If "if" was a skiff, we'd all take a boat ride.

We don't even really know what "strong AI" is. We have a kind of fuzzy "we'll know it when we see it" definition. We don't even know how we think.

And why is the probe smart enough to chat with the locals in real time? And that's not really the problem. It's not communicating with the locals: it's communicating with us. And then that information being of some actual use.

We are not investigating random rocks on Mars. We are exploring the possibility of moving there. If it can be made hospitable. Part of that is knowing rather dull stuff, like is there water, is it liquid, is the soil poisonous to plant life, etc.

Performing billion dollar archaeology with no actual return is not something that's going to go far. I'm sorry to say. At some point, you have to dedicate some resources to taking care of the population of the planet you are on.

> If our AIs are based on some sort of mind-uploading tech

Or how about we just instantaneously beam there since we can just pretend whatever limitations we know we currently have won't exist in the future.

I don't have a problem with speculating where technology may lead. However, that speculation needs to be grounded in reality. When we sent a man to the moon, we weren't waiting for something fanciful to be created. No, we took what currently existed and expanded upon that.


What good (in practical terms) is information that an hour ago, a rock on Mars looked like this? It's of no practical value. In terms of learning about the universe, however, it's great.

Learning that there was a contemporary civilization within 300 light years of here would be even more revolutionary in terms of knowledge about the universe.


If that news broke today, it would be the most significant discovery in history, probably a major turning point for our entire civilization. I don't understand how that could be perceived as useless information.


>But if strong AI is possible[...]but why haven't their robotic spawn already shown up and disassembled a few moons into robot factories?

But what for? Per the article's assumptions, let's start by assuming an STL (slower-than-light) regime. It's not unreasonable to guess that a strong AI, even if it was entirely benevolent and decided to maintain an eternal space for its originators (in fact perhaps particularly in that case), would wish to have more computation and storage capacity. It has plenty of time. So it constructs huge computational systems, with the thermodynamic end goal being some sort of stellar engine like a matrioshka brain. Even long before that, just with near-future tech, it would likely move into a rather huge computer system, probably planetary scale.

However, doesn't that immediately back it into a bit of a corner, at least near term? In an STL regime there is no way to send huge amounts of mass, or even information, between stars except over extreme time scales and at great energy cost. A "big intelligence" without miracle tech would in fact be physically big too; it will consume a lot of mass and energy running itself. The only way to leave its solar system would be either to lobotomize itself to the point of effective suicide, or to construct a fully equivalent remote location and then either clone itself or transfer (the difference being whether it deletes the original after a successful copy), which would still be an incredibly difficult, energy-intensive, and time-costly affair. It might be justified in the case where a being wanted to continue to exist after its star left the main sequence, but it wouldn't be trivial. The very fact of its strong-AI existence could easily make it hard to travel.

So what are the von Neumann probes for? Picking a couple of young, promising systems as backups makes sense, but why the whole galaxy? At some point extra backups with 100k-year deltas or worse wouldn't actually add any significant redundancy; the lifespan of the galaxy itself would become limiting. Long before that, having another strong AI develop might even be seen as attractive, assuming it wasn't insane. A well-entrenched and developed AI or AI civilization could conceivably feel secure enough that it could survive and snuff out trouble from anything new before it became an existential threat, and in turn decide the risk versus the reward of something new and interesting would be worth it. And the more parties in an agreement like that, the more secure each can be.


There's always star lifting. Building stockpiles of hydrogen seems a reasonable step to prolong active existence in light of the inevitable heat death of the universe.


Are you arguing that the probes would be too difficult, or that the civilization/AI would simply choose not to use them? If we assume that the system is still working at the behest of or at least in cooperation with its human creators, I think "because we can" is enough to make it happen. We seem to have an innate drive to expand and explore and see what's out there.


>Are you arguing that the probes would be too difficult, or that the civilization/AI would simply choose not to use them?

I'm arguing a strong AI (singly, or a civilization based around some) might simply choose not to go the full "deconstruct everything in every solar system in the galaxy" route, because by virtue of its very existence it can't actually travel easily or at all. Exploration, sure, but in the form of lots of star wisps and maybe a few stealthy replications in Oort clouds or in systems already checked for activity.

Now, if a civilization explicitly chose to avoid strong AI, or at least put some sort of firm cap on it, precisely because it foresaw being locked to its star (or maybe its star and a few near neighbors), that would be different, but would that actually be stable itself? It might be risky if they encountered an alien, stronger AI too.

I mean, we're by definition in pretty deep pure-speculation territory here, and others have devoted far more thought to all of this than I have. I'm just saying that if there is no magic, then everything still has to deal with the laws of thermodynamics and relativity. And widely spread, concentrated point sources of energy and mass (stars and solar systems) are what everything has to work with in that case, which means there has to be a pretty hard tradeoff between "intelligence" (processing/storage ability) and the extent to which something can exist away from stars or move between them. Go even a fraction of the way toward maxing out what you can do with the available mass and energy in a solar system, and it simultaneously becomes hard to move away from that solar system, right?

So basically I'm saying going maximal von Neumann across the galaxy isn't clearly the ultimate end state of a civ reaching strong AI level.


Go even a fraction of the way toward maxing out what you can do with the available mass and energy in a solar system, and it simultaneously becomes hard to move away from that solar system, right?

I don't understand the "too big/locked to a star" idea. It's not like all that mass/energy would be locked into one giant indivisible entity (if that is your assumption, why?). With current space travel, we don't attempt to move our entire civilization to the Moon or to Mars in one big go. We send out a small piece that establishes itself and then grows, and perhaps over time the rest is migrated if necessary, but the original entity goes on existing for some time while the probe gets established.

I'm not saying they would deconstruct everything in every solar system but it seems it would be pretty likely for a civilization at that level to develop a program that establishes some kind of presence in as many systems as possible.


>It's not like all that mass/energy would be locked into one giant indivisible entity (if that is your assumption, why?)

Yeah, it is. That's the assumption right from the start here, embodied in "strong AI", and one I've repeatedly and explicitly referred to throughout, including a reflection on why it might be consciously avoided and whether that could work. A strong AI could grow to use any amount of available computational and memory capacity, and might well need to in order to accomplish some things. There would certainly be advantages to having a higher degree of thought possible. That's the whole question raised by the OP of this thread.

Again, if you want to instead theorize why strong AI might be explicitly avoided by a civilization that's fine, but that's not the premise I was replying to originally is all.


I guess I just don’t share your belief that strong AI is necessarily equivalent to a paperclip maximizer that will seek to assimilate all available resources into one monolithic mass.

Do the humans have any say in this, any control? If not, how/why are they being kept alive? Again, even if the AI is so myopic that it won’t spare any resources to expand into other systems, the humans will still want to.


0.0001% of the Sun's energy output can give about 500,000 m/s^2 of acceleration to a million-tonne payload, if I'm not mistaken.


Self-replication across interstellar distances is probably much harder than it sounds, also for strong AI.


My personal take:

We are physically and mentally bound by 4-dimensional spacetime. We have begun to discover further dimensional notions via computing. As our technological intelligence advances, we will have the opportunity to leave behind our current physical states and transcend into a higher-level dimensional existence. The reason we have not found any other life, or intelligence, is not because it doesn't exist, but because in our current state we are simply incapable of comprehending or experiencing it.

In Hadith (I was raised Muslim), Abu Huraira states: "Allah has said: I have prepared for My righteous servants what no eye has seen, what no ear has heard, and what no heart has conceived."

Regardless of religious context, I believe this notion lends credibility to the idea that we exist in a dimensional bubble, and that there is indeed existence beyond this bubble (regardless if it's God/s, forces of nature, multi-dimensional aliens performing simulations, etc).

There have been several discussions on psychedelics on HN. If you discuss the notion of inter-dimensional beings with someone who has experienced a strong psychedelic journey (Ayahuasca, etc), they will almost always discuss the foreign beings they interact with during their trip, out of the realm of their physical body. Why is that?

I strongly believe that humanity's salvation will not be in the form of space travel, but inter-dimensional travel.

We gotta go full Tron in this b&!%$.


Maybe, like we now try to protect wild areas and animals as long as they don't inconvenience us too much, the aliens leave us alone and hide from us as long as they don't really need something here.

Maybe aliens also hide from each other, for protection. Even the UN agreed that we should avoid disclosing our existence and position through long-range transmissions as much as possible.


The Three-Body Problem and especially its follow up, The Dark Forest, explore this a bit. Great books to anyone interested in this kind of topic.

TLDR is that technology ramps to effectively infinite levels far faster than anyone can reach the nearest star. The only safe option, strategically, is to kill off any intelligent life you are capable of killing, lest the balance go the other way before you know it. And knowing that, no intelligent species should let itself be found.

Even if you accept the premise, a counter-argument could be that the only way to truly "win" the race would be to be the first one out the gate and ramp your resources to infinity. If a strategy requires an unknown condition to be true to win, you have to assume it's true as all other outcomes are a loss.

I've also heard it argued that any species that makes it that far will need to have overcome its violent impulses.

It's a lot of speculation, but interesting stuff.


I was also going to bring up the Dark Forest - but it doesn't answer the question of the Cylon Spawn. The Cylon Spawn can be visible without revealing the source of the spawn (which was the issue in the Dark Forest).


I suppose you might go out of your way not to produce artificial intelligences that could outcompete you, since if there isn't life out there, now you've just created it, and they'd wipe you out as soon as possible just in case.

It would still just take one civilization to build Skynet by accident, or to hold a firm belief in mind-uploading, and then we're knee-deep in Cybermen.


Yup, more people should read the Remembrance of Earth's Past trilogy if they're interested in Fermi's Paradox.


David Brin also wrote up his notions on Fermi's Paradox in Existence.

In the same novel he suggests a means for avoiding a negative AI "Singularity".


Fermi's paradox is basically an example of saying "the only two possibilities are that it's impossible or inevitable, and since we don't know it to be impossible, it must be inevitable"--actually multiplying a few of those together--and then being surprised that the evidence doesn't show that it's inevitable.

We don't have enough knowledge of fundamental physics, chemistry, or biology to come up with any reasonably tight bound (tight here meaning only a few orders of magnitude) as to how likely alien interstellar voyaging is, to say nothing of our complete ignorance of xenosociology. Our own experience is n=1, which means it's extremely dangerous to generalize.

But even then, we run into a problem: intrastellar travel is hard. We don't yet have the technology to colonize a single other body in our own solar system, and it's dubious that we could send a human to and from any other body save our own Moon. The only mechanism we have to travel in space is via rockets, whose fuel costs are prohibitive for any sort of long journey. Even getting out of our gravity well requires rockets, and while we have concepts of other ideas that are less expensive, they tend to have problematic requirements such as "requires things we haven't discovered yet."
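To put a number on why rocket fuel costs are prohibitive for long journeys, here is a minimal sketch of the Tsiolkovsky rocket equation; the exhaust velocity and delta-v values are illustrative assumptions of mine, not figures from the comment.

    import math

    def log10_mass_ratio(delta_v_m_s, exhaust_velocity_m_s):
        # Tsiolkovsky rocket equation: delta_v = v_e * ln(m_initial / m_final).
        # Returned as log10 of the wet/dry mass ratio so huge values don't overflow.
        return (delta_v_m_s / exhaust_velocity_m_s) / math.log(10)

    CHEMICAL_EXHAUST = 4_400.0  # m/s, roughly a hydrogen/oxygen engine
    C = 299_792_458.0           # speed of light, m/s

    # Delta-v to reach low Earth orbit vs. a one-way, no-braking 10%-of-c cruise.
    for label, dv in (("LEO (~9.4 km/s)", 9_400.0), ("0.1 c cruise", 0.1 * C)):
        log_r = log10_mass_ratio(dv, CHEMICAL_EXHAUST)
        print(f"{label}: wet/dry mass ratio ~ 10^{log_r:.0f}")

Under these assumptions a chemical rocket needs a wet/dry mass ratio of roughly 10 to reach orbit, but something on the order of 10^3000 to reach a tenth of light speed, which is why chemistry alone is a non-starter for interstellar journeys.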


I'm not sure who you're quoting, but I don't think that is Fermi.

The point of the paradox is not merely wondering how likely alien life is. Rather, it starts with the simple fact that no evidence for alien life has been observed, ever, anywhere. Why?

Well, there are all kinds of potential explanations: maybe it's far away, maybe it's not advanced anywhere else, etc. We don't know the answer, of course, but given the vastness of space one might expect to at least see some evidence of alien life, if not alien life directly, somewhere. That we do not suggests something fundamentally bigger is going on (i.e., the speed-of-light limit is legit).


The very name of the Fermi paradox suggests that we ought to be surprised that we haven't seen evidence of alien life. Instead, what I'm trying to say is that we don't even have enough evidence to be able to assert that we should expect to see alien life in the first place.


OK, thanks for the clarification. So if Fermi and you were looking up at all the stars in the night sky and he said to you, "Where is everyone??", you would say, "What do you mean? Why would there be anyone out there?"

I don't think most people would have that intuition, but it's fair.

Drake's equation is a fun way to try to answer Fermi. It seems you would assign n_e or f_l a value close to 0.
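For anyone who wants to play with it, the Drake equation is just a product of seven factors; this is a minimal sketch with made-up illustrative values, showing how pushing f_l (or n_e) toward zero drives the expected number of detectable civilizations toward zero.

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        # Drake equation: expected number of detectable civilizations in the
        # galaxy, as a plain product of its seven factors.
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    # Illustrative, made-up optimistic values -- not measurements.
    optimistic = dict(R_star=1.5, f_p=1.0, n_e=1.0, f_l=1.0, f_i=0.1, f_c=0.1, L=1e4)
    print("optimistic:", drake(**optimistic))   # ~150 civilizations

    # Same values, but with the chance of abiogenesis (f_l) near zero.
    pessimistic = dict(optimistic, f_l=1e-10)
    print("f_l near 0:", drake(**pessimistic))  # ~1.5e-8 civilizations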


We also don't even know how likely life is to arise at all. Abiogenesis could be a freak accident with a 1 in a googolplex probability, rendering us (Earth) a freak accident, a la the anthropic principle.

We just don't know.


> We don't yet have the technology to colonize a single other body in our own solar system

Is it? Or is it that no one wants to put any resources into it, because you would probably get very little out of it?

We are a money-based, mostly capitalistic society, so unless there is some gain for an entity, whether a government or a commercial company, investment in developing the technology or actually attempting colonisation will not get anywhere anytime soon.


If you define colonize as "build a self-sustaining colony," that is very much outside of our current capabilities. We don't know how to build a self-sustaining biosphere, let alone one on a body as fundamentally hostile to life as Mars or any other rocky body in the solar system.


Honestly I don't understand the point of this article. The evidence that "we" as a collective whole have not given enough attention to the paradox is lacking. I'm not even sure what he's calling for - more research into specific areas? More people chatting about it on Twitter?

Based on the evidence we have, the proposition that "there are no interstellar civilizations that have colonized our galaxy" seems reasonable. This whole psychological thing about how we think we're special and how we treat animals...I don't see how that is relevant at all. Is the assertion that this attitude is preventing massive amounts of funding for SETI programs? We don't have a single shred of evidence of extraterrestrial life, nor much of a clue on how to go about finding it. All the evidence we have says that, yes, life on earth actually is quite special. Didn't find any argument against that in this article.


Personally, my guess is that we will find that life is actually pretty common, but that civilization like ours is very rare. I mean, life on Earth has been going for what, 4 billion years? And yet only in the last 10,000 or so did civilization actually start. Even when (anatomically modern) humans showed up on the scene, it still took them another 300,000 years to start doing civilization-looking things.

There is just such a huge variety of factors that had to go right on top of life being on the planet already for civilization to get going.


Also, dogs are pretty rare. And kangaroos. Dolphins are as uniquely rare as humans.

We do have a surprising variety of life forms here on Earth. True, they all developed from a common base, but they've splintered into what seems like an infinite diversity.

I mean, as far as we know, dinosaurs weren't building spaceships.

But proponents of the Fermi paradox say that even if human-style intelligence is incredibly rare, the universe is incredibly old. The sheer magnitude of the timescales involved means that there are literally no new ideas we could think of. That it only needs to have happened once for it to be noticeable.

Of course, the Fermi paradox assumes that it is possible and desirable to do such a thing.


How old is the universe?


IIRC, it's roughly 14 billion years or so. Earth has been around for about 4.5 billion. Anatomically modern humans have been around for a few hundred thousand years, and civilization for around 10,000. In that time we, as a species, have gone from crudely drawn shapes on cave walls to starting to explore beyond the bounds of our planet.

The claim is that our journey into the stars is inevitable. That we will begin to colonize other planets. And that this colonization is exponential. To illustrate, let's say tomorrow we know how to do it; the only issue is the time it takes. Let's pretend we can do it in 10 years: from nothing to a fully functional, autonomous human colony in 10 years, technologically equivalent, which means that colony is also capable of sending out colonies. So now instead of one planet sending out colonies, you have two. Then four, then eight, and so on. Basically, every 10 years, you would double the number of planets inhabited by the species. In just 100 years, there would be 1024 planets supporting human life. In 1000 years? 100 cycles of colonization? 1.2e+30. Huge. There are "only" about 40 billion planets that seem habitable in our galaxy, so we'd run out of planets to colonize by the 36th cycle. And even if it takes 100 years to colonize a planet to the point where it's just as capable, you're really just multiplying the timescale by ten: that's 3600 years to a fully inhabited galaxy. Cosmically, a blip in time.
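A trivial sketch of the doubling arithmetic above; the 40-billion figure and the 10- and 100-year cycle times are the ones the comment uses.

    HABITABLE_PLANETS = 40_000_000_000  # rough figure quoted above

    def cycles_to_fill(start=1):
        # Count doubling cycles until colonized planets exceed the supply.
        planets, cycles = start, 0
        while planets < HABITABLE_PLANETS:
            planets *= 2
            cycles += 1
        return cycles

    cycles = cycles_to_fill()  # 36 doublings
    for years_per_cycle in (10, 100):
        print(f"{years_per_cycle}-year cycles: galaxy filled in ~{cycles * years_per_cycle} years")

This prints roughly 360 years for 10-year cycles and 3600 years for 100-year cycles, matching the figures above.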

So, the Fermi paradox says that given that colonization of other worlds is inevitable for an intelligent species, if life and human-style intelligence aren't unique, why haven't we seen evidence of other civilizations yet?

Either been contacted, run into an artifact of some sort, or found some sort of signal.


But there is a clear trend in increasing complexity and sophistication in the natural world, from the first bacteria to us. Some say this might even be a sort of a physical law:

https://www.quantamagazine.org/a-new-thermodynamics-theory-o...


>and yet only in the last 10,000 or so did civilization actually start.

and quickly heading towards self-destruction. If this is the norm, then living civilizations would be rare indeed.


In summary, because we think we're special. Which brings to mind my long-running thought experiment: what if we're no more special on a larger scale than our own stomach flora are on our scale? Maybe there's no one else out there because we're the special oil-eating bacteria, only instead of oil we eat carbon and oxygen, turning them into "harmless CO2". Then, like many bacteria, we consume all of the carbon and oxygen and die off.

Or maybe it's just the vast distances involved and some harsh physical reality. Warp drives, FTL, even approaching the speed of light, perhaps it's been tried repeatedly on one of those planets that's billions of years older than ours. And after billions of years and the best minds of planet Argwagh toiling away, turns out that c is the universe's speed limit...period. And when asked, "who wants to spend the remainder of their life in a tin can for the cause? And to doom your descendants to the same fate?", not a single hand went up.


> what if we're no more special on a larger scale than our own stomach flora are on our scale?

I mean, this is kind of true already, although not exactly in that sense. Companies are essentially meta-entities made of humans, making their own decisions in the same way that humans are meta-entities made of cells, making decisions independent of any of those cells.


The interesting part is what ties all of them together: mutual incentives. It’s in your interest to eat because otherwise you’ll die, even if you don’t quite know what “you” is. And that’s sort of true of companies as well.


If c is the issue, then some civilization will use solar systems as space ships. They'll figure out a way to move a star along a chosen path and keep the planets in orbit. The travelers would have all the comforts of home while wandering the galaxies.

In fact, I wonder if anyone has done that already. It seems like we'd be able to see their stars dancing about.


See https://www.schlockmercenary.com/2003-08-03

Be sure to read the text below the strip (on how to do this with a gas giant planet).


> then some civilization will use solar systems as space ships.

That is a huge leap.


Yes it's a huge leap, but the leap might not be as big as it seems: if we can figure out artificial gravity, then maybe we can scale up the technology and move stars, leading to more fruitful space travel than metal ships alone can achieve.

OTOH, that method of travel would still be rather slow and aliens might have abandoned the idea because it's too boring.


Yes, just figure out how to reverse one of the fundamental forces of the universe that may not actually be a force.

Then perform that on a scale capable of shifting the path of something that contains practically all of the mass of the solar system.

Far from "too boring", it seems "too implausible". Your "step 1" would be a huge leap in and of itself.


> only instead of oil we eat carbon and oxygen, turning it to "harmless CO2". Then, like many bacteria, we consume all of the carbon and oxygen and die off.

CO2 levels were far higher in the past.

During the Jurassic period CO2 concentrations were 4x higher than in 2018. During the Cambrian period, 15x higher.

https://upload.wikimedia.org/wikipedia/commons/7/76/Phaneroz...

https://en.wikipedia.org/wiki/Carbon_dioxide_in_Earth%27s_at...


But my house and my kids don't exist in the Cambrian period; the greenhouse effect is going to change our coastlines and weather patterns in the next century.


I'm not denying climate change here, merely trying to be honest and realistic about the expected effects. False apocalyptic predictions only provide fuel to those who deny CC entirely.

> we consume all of the carbon and oxygen and die off

is a far cry from

> change our coastlines and weather patterns


> If we humans are now on the cusp of colonizing our solar system

We're not. And we may never be, let alone able to travel through space continuously while somehow maintaining generation after generation of life using the finite amount of resources that could travel with us. The paradox assumes so many things that are far from a given; to me it's confusing why such a great thinker couldn't see his own fallacious thinking. Maybe there is a ton of intelligent life out there that can barely get to its moon once or twice before giving up forever. That intelligent life would be like us, possibly dreaming of colonizing the next planet over while destroying itself. We could not, with current technology, possibly detect such life. We can barely detect Earth-like planets as it is. The assumptions Fermi makes are simply too great for this idea to be compelling.


This cannot be overstated. Also, we are treating this planet like we've got options elsewhere. This is setting people up for a rude awakening, whenever that may be down the line.


This sort of pessimism is depressing.

We have to at least try, and we have more than enough capability if we cared to try -- we put men on the Moon while fighting a major war and engaging in a nuclear arms race, with computers orders of magnitude less powerful than a Raspberry Pi.


Pessimism? I think you mean a dose of reality. We aren't even close. That's reality no matter what Musk and others like him say. If anything, it's extreme, unwarranted, delusional optimism that sees us colonizing the planets. And that's just our solar system. We can't even get to Mars and probably couldn't even get to the moon at this point. I don't see why we would assume that other intelligent species would be able to escape, except briefly, their planets. There is nothing at all that warrants that assumption now or when Fermi was alive. Why would anyone assume this even for the future? Progress is not guaranteed and as we have seen throughout history, periods of progress are often followed by periods of decline.


There has been a big push, I've noticed, towards excessive generalization: we aren't special, and life elsewhere is assumed to be much like ours. This seems to have appeared as a pushback against the thought that we are somehow special, but it has, IMO, gone too far. The fact is, we're attempting to quantify extremely unknown unknowns, and to claim that we're not special because we can conceive of other life that is essentially equivalent to us is to make a huge set of assumptions about the plausibility of said life. Let the argument that exponential societal/technological growth goes on forever be a warning that over-generalisation is sensationalism!


“Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying.”

― Arthur C. Clarke


The problem is that Fermi’s paradox assumes interstellar travel to be an inevitable and achievable goal of intelligent life given enough time. Why should that be? I would argue that assumption is itself far more self-aggrandizing than what this article accuses the doubters of being. Why should we assume that intelligent life elsewhere in the galaxy that has progressed hundreds or thousands of years beyond our level of technology would even value interstellar travel? Maybe by the time such technology is possible, other grander frontiers that the human race cannot even perceive will be more compelling. Or maybe the practicalities make interstellar travel impossible. Why do we assume this is even a solvable problem given the resources available in a typical solar system? Or that such exploration would then be sustainable beyond the first leg?


There are two issues that are not generally considered. The first is that the normal size of intelligent beings may be rather larger than humans. The second is that the worlds they live on may be larger than the Earth.

Bigger worlds will have stronger gravity and might have thicker, though possibly shallower, atmospheres.

All of these factors would make space travel much harder: life support would be more difficult, take-off (much) harder, and return to the surface much, much harder. So I think that even if civilisation is common, space-faring civilisation is going to be rare, and as we know, even a space-faring civilisation doesn't necessarily approach interstellar civilisation.


Also chemistry: the chemistry of an atmosphere may screw up rocketry.


Two counterpoints would be:

(1) life as we know it can exist at extremely small scales; (2) intelligence is not strictly correlated with brain size (e.g., birds).


We downplay Fermi's paradox because most of us know that it's making up numbers, and then using the made-up numbers to reach a conclusion. Most of us have little faith in the numbers, and therefore little faith in the conclusion. Why should it be otherwise?

People love to make these big-sounding claims of Having Figured It All Out. But the data don't support the conclusions, so most of us don't really buy them. That's a good thing.


This is handwaving at best. Here's a more rigorous argument that was published recently:

Dissolving the Fermi Paradox https://arxiv.org/abs/1806.02404
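As I understand it, the paper's core move is to propagate uncertainty through the Drake-style factors instead of multiplying point estimates. Here is a heavily simplified sketch of that idea; the log-uniform ranges below are placeholders I picked for illustration, not the distributions used in the paper.

    import math
    import random

    def log_uniform(lo, hi):
        # Sample uniformly in log space between lo and hi.
        return math.exp(random.uniform(math.log(lo), math.log(hi)))

    def sample_N():
        # One draw of the Drake product under wide, hypothetical uncertainty ranges.
        R_star = log_uniform(1, 100)      # star formation rate per year
        f_p    = log_uniform(0.1, 1)      # fraction of stars with planets
        n_e    = log_uniform(0.1, 1)      # habitable planets per such star
        f_l    = log_uniform(1e-30, 1)    # chance life arises (hugely uncertain)
        f_i    = log_uniform(1e-3, 1)     # chance of intelligence
        f_c    = log_uniform(1e-2, 1)     # chance of detectable technology
        L      = log_uniform(1e2, 1e10)   # detectable lifetime in years
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    random.seed(0)
    draws = [sample_N() for _ in range(100_000)]
    frac_alone = sum(n < 1 for n in draws) / len(draws)
    print(f"fraction of draws with fewer than one other civilization: {frac_alone:.2f}")

With point estimates you always get some definite N; with wide uncertainty ranges, a large fraction of the probability mass ends up at N < 1, which is the paper's explanation for why an empty-looking galaxy shouldn't surprise us.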


So that would be option number 1: Rare Earth Hypothesis: https://en.wikipedia.org/wiki/Fermi_paradox#Hypothetical_exp...


The obvious resolution to the "paradox" is that we are alone, at least in the Milky Way. It is reasonable to conclude that, if the development of intelligent, technologically capable life was likely, then we would have seen signs of it. We have not. So it is not likely. So it is not really a paradox. But the author of the article says we want to ignore Fermi because we think we are special. No. If we don't ignore the paradox, the conclusion is that we are special. Mind you, if we are special, I am not that impressed.


We are special in one sense: we're really near the start of the universe, something like 1% of the way into its lifespan, rather than the average of 50%.

Given that you probably need third-generation stars to create enough complex elements for life, it's only very recently that life has been possible at all.


I’ve never bought into the Fermi thesis. We’re still burning petroleum instead of using it for its other amazing uses and blowing each other up over myths - but we think we know the fundamental nature of all life in the universe enough to say we should see it.


We know that a star burns its available fuel and then expires, with a hard limit on its size based on various factors. We know that an algae bloom similarly exhausts its medium and shrinks.

However, the going assumption of the Fermi thesis is that intelligent life is different. Intelligent life forms civilizations that are expected to naturally expand forever, eating planets, stars, and so forth.

So far, human behavior looks much like the other phenomena that expand to the limit of their fuel and then die.

And, if somehow against all odds, we wise up and stop exhausting the resources needed for our existence, well, then that infinite expansion tendency quite likely won't be there.

"If we humans are now on the cusp of colonizing our solar system, and we are not much faster than other civilizations, those civilizations should have completed this colonization long ago and spread to other parts of the galaxy."

But what if we aren't on the cusp of this colonization? One can argue that such colonization would depend on quite a few technological advances that are purely hypothetical, and that "colonizing the solar system" only seems plausible by an incorrect analogy with the colonization of various areas of Earth by various local societies.

Space seems wholly hostile to humans, from its lack of gravity to its lack of all the other components we need. Our era has gone from believing in "space, the final frontier" to seeing just how fragile our condition on Earth is.


> So far, human behavior looks much like the other phenomena that expand to the limit of their fuel and then die.

Are you sure about that? It seems to me that humanity has often made a "leap" in the past... The Romans "fed" on slaves, then collapsed, and eventually we found coal. The industrial revolution "fed" on coal, then we found oil; there's probably a next thing!


I shouldn't have implied the feed-and-collapse sequence could only happen once. There can be repeats even with bacteria, and sure, humans are more complex than bacteria, so the process can be more complex.

The only thing not justified is the expand-forever assumption.


There's a huge list of explanations on wikipedia https://en.wikipedia.org/wiki/Fermi_paradox#Hypothetical_exp...


All life? No. But if life is very common, what are the odds that absolutely none of it looks like something we would recognize? If that's true then we are absolutely unique in the universe.

It just takes one civilization with a runaway von Neumann probe to drop huge monoliths on every single rocky body in the galaxy over a few million years (a blip, cosmically), and since evidently nobody did that in our solar system, the question is why not. If there's a rule that forbids it, what is the rule, and does it apply to us? That's the question posed by the Fermi paradox.



I found this answer to the Fermi Paradox compelling: http://slatestarcodex.com/2018/07/03/ssc-journal-club-dissol...



