
The most realistic and rational path to life extension is the creation of behavioural replicants, with the sole purpose of extending the lifespan of one's agency.

To reach the next 1000 years, you need:

(1) Information-theoretic preservation, i.e. body imaging, cryo, and proper archival/storage.

(2) Behavioural emulation, i.e. a virtual replicant that roughly makes the same decisions, close enough for you to identify with it and trust that it will carry on your pursuits, even though, at least in the beginning, it will be slower than the meat was. Behavioural emulation is much less difficult and much more computationally efficient than whole-brain emulation.

Many humans would say they want someone to be there taking care of their kids, if nothing else. But nothing is really pioneering this. I hope the separate developments in neuroimaging and qualia research will eventually converge.




The problem is that without a massive internal shift toward some kind of altruistic behavior, this won't ever really be a focus, I bet.

The simulant may be Me (as in, another version of me), but it's not me (the consciousness I experience inside my body). Therefore, it's life extension of Me, but for everyone's benefit but mine. That doesn't help the selfish ego inside me that wants to live forever.

That just does not scratch the eternal life itch, I'm afraid.


If there were a shift to altruistic behavior, we could all increase the sum of other people's lives through charity and by having children.


Altruism has a massive freeloading problem; that is likely why it is unstable beyond a very limited set of very close relatives. If you want your statement above to be true, you need to either crush freeloading or expand the very limited set of people the population doesn't mind freeloading. Both of those carry huge enforcement/creation risks, because they damage other aspects of human dignity we like and benefit from. The fundamental freedoms that give us a competitive productivity edge over time, by allowing innovators to get fabulously wealthy off their innovations, are heavily damaged either by authoritarianism, which is the easiest path to crushing freeloading (up until the authoritarians become the freeloaders), or by nationalism, which is the easiest way to expand the group that can freeload without causing instability but can lead to problematic outcomes when interfacing with the inevitable outgroup.
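
To make that instability concrete, here is a minimal public-goods-game sketch in Python (the contribution and multiplier values are made-up illustrations, not anything from the discussion): whatever the group composition, the freeloader nets strictly more than the altruist, so altruists get outcompeted over time.

    # Public goods game: altruists contribute, freeloaders don't,
    # and the multiplied pot is shared equally by everyone.
    def payoffs(n_altruists, n_freeloaders, contribution=1.0, multiplier=1.6):
        n = n_altruists + n_freeloaders
        pot = n_altruists * contribution * multiplier
        share = pot / n
        return share - contribution, share  # (altruist, freeloader)

    print(payoffs(9, 1))  # altruists net 0.44 each, the freeloader nets 1.44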


"Freeloading" is a feature, not a bug.

Human[0] socialization is hard-wired to be altruistic. If altruism didn't work, we wouldn't be socializing, we'd still be hairless primates wandering around the jungles of Africa, alone, killing and eating anything and everything we met. Hell, ants wouldn't be in colonies if that were the case. Altruism works evolutionarily because selfless actions improve group fitness, even if they don't increase individual reproductive success.

Inasmuch as "freeloading" is an actually deleterious behavior, it is because individuals are trying to move resources out of their altruistic in-group and towards themselves. To address your specific examples:

- The whole goal of authoritarianism is to "become the freeloaders". The key rhetorical strategy of authoritarianism is to accuse your opponents of what you plan to do. If they agree with you and stop doing that thing, then you've won, because they are now tying their hands behind their back. If they try to normalize the thing as OK, then you've won, because now you get to do the thing. So in this case, authoritarians will identify and demonize the freeloading of some out-group, both to attack the out-group as well as internally justify their own freeloading.

- Nationalism is rather arbitrary in what is and isn't considered to be the altruistic in-group. In fact, I would argue that it is a subset of the freeloading authoritarianism I mentioned above, at least in our modern age. Nationalists want to divide and conquer humanity.

- "Allowing innovators to get fabulously wealthy off their innovation" is a freeloading behavior. Copyright and patent laws allow inventors to take from the public commons of knowledge and legally enclose it off for themselves. The intent is for this to be limited, but the limits are extremely weak[1]. "IP"[2] is the engine by which large corporate empires build fiefdoms around themselves. The counterargument to this is to gesture vaguely at the European medieval period's economic stagnation, but my counter-counterargument is to point out that this economic stagnation was itself the product of a system in which the vast majority of economic wealth was the product of rents.

Diving further into that last point... the economic system in which the majority of wealth is the product of rents is called "feudalism". We associate this with agrarian economies and extreme material poverty, but modern "IP"-heavy business practices are not that far off from feudalism, recast in the mold of modernity. You can't innovate in a feudal system because of all the owners demanding their cut. Innovation requires freeloading. If those who own the resources of innovation are positioned to charge more than you could ever hope to make from that innovation, then there will not be innovation.

[0] actually most mammals, and primates especially

[1] Copyrights last "practically forever". Patents have 20-year terms, but the patent office is not shy about permitting another 20 years on minor modifications to the invention, which can then be used to bully competing users of the original patent. This practice is known as evergreening, and it's endemic in the pharmaceutical industry.

[2] "Intellectual property" as in "federal contempt of business model"


Wow, you sure like to write essays...

Your whole essay utterly misses the point in multiple ways:

- Freeloading is not a feature or a bug; it's just something that happens when the expected value of exchange is unbalanced. If it is too unbalanced, it makes the system unstable (i.e. causes war, violence, unfriending, etc. at different scales). The OP's cry for more altruism almost always leads to a too-unbalanced system, thus the problem.

- Yes, we are wired to be altruistic to our in-group (as I mentioned already). The out-group we will routinely do horrible atrocities to with little thought or care. How far into the in-group someone is will determine how unbalanced one is willing to allow the exchange to get. I.e. one will generally be fine with personally dumping large amounts of resources into a disabled immediate family member, but will not be ok with personally dumping the same resources into a stranger across the globe (or across the city, or perhaps even into an acquaintance down the street).

- It's not that relevant how you classify authoritarianism; it's only relevant that it's bad for freedoms many enjoy and that benefit society. Same deal with your take on nationalism. You should have edited these points out after rereading my comment, because they don't matter. It's a missing-the-forest-for-the-trees situation.

- Yeah, you chose the examples of innovators getting wealthy that are least relevant to societal improvement (and that have huge problems I disagree with, like excessive copyright terms, among many others). It's much more basic than that. Through much of history you lacked the basic rights that ensured you could operate a business around your idea. A subset of the population had the right to just take it, and that killed any motivation for an individual to act on their good ideas. Copyrights and patents came much later than the actual freedom that really matters for this. Your following paragraph is also a swing and a miss because of the above; it's not what I was talking about.


You aren’t the same “you” that you were a few years ago, from a mental perspective. Brains change all the time. We don’t generally get hung up on this concern because, fundamentally, the self-preservation drive is just an evolved reflex rather than anything fundamentally rational. We want some specific evolutionary-selected version of “self-preservation” for the same reason we crave unhealthy foods or are revolted by certain smells.

TL;DR Let your rational brain decide what it wants, try to cope with the rest from there.


That's easy to say now, but it will be little comfort when you're watching from your hospital bed as your younger clone holds your wife's hand while they both watch the doctors pull the plug on your obsoleted body.


Sure, that would suck, but would you actually say that the sum of that loss, weighed against the gain of N more happy and healthy years with your wife, is negative?

I don't think so, I'd sign up for it. Do you not think you could cooperate with yourself like that?


That's my point though. It's not me. It's me, but it isn't in me.

To your earlier question: am I a different person than I was in the past?

No. My body and consciousness continued.

The other new me will be me, but the me that is inside my body right now will cease.

That is the flaw.


In realistic replication your body probably would not be continued, at least for a while. You'd have a digital clone that shares some memories and most values, and is cognitively at your level or above. If the preservation part is done well enough, and you continue to hold it important to actualize your past wish of getting a realistic clone, one day they may be able to use the cryonically frozen body and the scans taken while you were alive to re-animate it, or to create a digital clone that is much closer to you.

A commenter above pointed out that the body changes all the time. Getting replicated and having the old body die is almost the same as what you have already gone through during your whole first biological life, only at a larger scale. It does entirely replace the "hardware" (the body), but most humans would agree that "they" are made of "personality, dreams, ideas, algorithms, memories, knowledge, logic, thinking, emotions, feelings, values" more than of the computational medium these run on, as long as it emulates what they have had in the past to some degree.


Our bodies and minds get replaced bit by bit over time. We're ships of Theseus[1] -- changing all the time, but still the same entity throughout.

Mind-scanning your consciousness and pouring the resulting Copied Intelligence[2] into a computer and / or a cloned body[3] is not the same thing. That's more like building a copy of Theseus' ship, nailing the name-plate from the original onto it, and claiming it's still the same ship.

It's not.

___

[1]: Or for simple guys like me, George Washington's axe.

[2]: CI may exist some day; about "AI", I'm still in doubt.

[3]: Or an android body, or a pickle-jar on Mr Burns' desk.


This isn't a problem if you've integrated non-duality.


There's nothing rational about a desire to live forever. It's completely driven by a fear of the unknown that comes after death.


I don't see that as a universal explanation. Anyone who enjoys life will want to prolong it (without sacrificing the enjoyment part, of course). Many people see death not as the "unknown", but as the "end of experience".


The problem isn't the end of experience. The problem is that the universe exists in the first place.

A lot of atheist afterlife logic runs into the problem that if nothing follows death, then this would also mean the end of the universe, but this is in contradiction with the fact that we can experience the universe and that it exists. Lots of people die every day and yet that "nothing" has failed to arrive.


> the problem that if nothing follows death, then this would also mean the end of the universe

No.


> if nothing follows death, then this would also mean the end of the universe

Huh?!? How the heck do you get from one to the other??? I see absolutely no logical connection there. Care to explain?


Why do they see it as the "end of experience"? Because they don't know what comes after it? When I see people use that explanation, I also see a fear of the unknown. Nobody has any idea what comes after death: It could be the start of a brighter and better experience or it could be absolutely nothing.

Taking things to the far extreme, for all we know, there could actually be a heaven or a hell as described by one of the desert-dwelling hallucinogen-enjoying people whose book caught on. And I don't mean some ethereal concept, I mean the actual things with 72 virgins or angels with 100 eyes and 50 wings and wheels on their wheels. Despite feeling implausible, we have exactly as much evidence of nothingness after death as we do of a heaven or a hell.

Before you mention that this is absurd because there's no brain activity after death, we still don't know how the brain and "mind" work, we can't observe the vast majority of matter or energy in the universe, and there's a lot we don't know. Filling that unknown space in with "it's the end of everything I experience" is as irrational as filling that in with "72 virgins if I kill enough infidels."


You yourself seem to have internalized the idea that it is the end of the experience. The things you describe as possible are all different experiences to this one.

People aren’t required to be rational for GP’s point to be correct. I don’t even think it is necessary that they hold a particular view on death. Plenty of Christians don’t fear death because they believe in heaven. Plenty of those who believe in nothingness fear the end of their experience.

Nothingness has evidence. Memory and consciousness both appear tied to the body. Suggesting that’s equivalent to anything else because technically anything is possible is at best a god of the gaps argument.

The rational take here is that we don’t know, we may never know, but that the evidence is suggestive of the same sort of nothingness we “experience” when unconscious or before we were born.

Regardless, all that is required for the GP’s point to be true is that people do not universally fear death.


> You yourself seem to have internalized the idea that it is the end of the experience.

The previous commenter didn't say the end of "the experience." They said the end of "experience" (no "the"). If you want to be pedantic about the semantics, that's a pretty big thing to add, don't you think? One is the end of all sensation, the other the end of a particular set of sensations.

And no, it doesn't require that people not universally fear death, it requires that people who see death as the end of all experience don't fear death, which appears to be tautologically false since they adopt an irrational and negative belief about what the post-death state is.

> Nothingness has evidence. Memory and consciousness both appear tied to the body. Suggesting that’s equivalent to anything else because technically anything is possible is at best a god of the gaps argument.

The wordplay is interesting here - I didn't mention memory, only consciousness. Memory does appear to be an embodied phenomenon in your brain. Regarding consciousness, I'm not filling the gaps with a god, I'm suggesting that denying the existence of the gaps is as bad as filling it with a god.


> which appears to be tautologically false since they adopt an irrational and negative belief about what the post-death state is.

"Probably nothing" is not an irrational belief. You don't need 100% certainty to want to avoid that.

> The wordplay is interesting here - I didn't mention memory, only consciousness. Memory does appear to be an embodied phenomenon in your brain.

If I don't have my memories, then the old me is effectively gone forever. Wanting to avoid such a drastic and disruptive change has nothing to do with "fear of the unknown".


> "Probably nothing" is not an irrational belief. You don't need 100% certainty to want to avoid that.

The words "probably nothing" imply that on something more than belief, you can assign a probability to nothingness. Can you provide an objective measure of probability as to whether nothingness is what awaits you after death? When you say "probably nothing," the belief in "probably nothing" is an emotionally nice but similarly irrational hedge on "nothing," because nobody can assign a probability to an unknown unknown like "what happens after you die."

> If I don't have my memories, then the old me is effectively gone forever. Wanting to avoid such a drastic and disruptive change has nothing to do with "fear of the unknown".

Wanting to avoid that change is almost definitionally due to a fear of the unknown. You are afraid that the new state you will be in will be worse for lack of those memories. Many people who lose their memories are happier for it, and it is in fact a common trauma response to block out old, bad memories.


> The words "probably nothing" imply that on something more than belief, you can assign a probability to nothingness.

Maybe not "probability", but likelihood. That's the way Ockham's razor cuts: In the total absence of evidence for any continuation, there is no sensible reason to assume the existence of it.

> Wanting to avoid that change is almost definitionally due to a fear of the unknown. You are afraid that the new state you will be in will be worse for lack of those memories.

No, you only need to know that it won't be you who is in whatever state that whoever-it-is will be in. Our memories are who we are. No need to feel fear on behalf of whoever it is that you're talking about.


> Can you provide an objective measure of probability as to whether nothingness is what awaits you after death?

Yes, with some effort, I can start at a default 50:50 and incorporate all the evidence we have access to. The resulting number will be pretty high and as objective as a person can reasonably be asked to be.
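
As a minimal sketch of that sort of updating (the likelihood ratios below are hypothetical placeholders, not measurements), you can work in odds form, starting from 50:50 and multiplying in each piece of evidence:

    # Odds-form Bayesian updating: start at even odds (1.0) and multiply in a
    # likelihood ratio P(evidence | nothingness) / P(evidence | continuation)
    # for each observation, e.g. "consciousness tracks the state of the body".
    def posterior(prior_odds, likelihood_ratios):
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)  # convert odds back to a probability

    print(posterior(1.0, [10, 5, 2]))  # ~0.99 with these made-up ratios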

> nobody can assign a probability to an unknown unknown

Giving up like that is not a way to make rational decisions.

Also when you have a very precise scenario and question, doesn't that make it a known unknown?

> Wanting to avoid that change is almost definitionally due to a fear of the unknown. You are afraid that the new state you will be in will be worse for lack of those memories.

Wrong. Even with a guaranteed blissful existence, I'm still busy using my consciousness on my current life and don't want it to end.

> it is in fact a common trauma response to block out old, bad memories.

Yeah a few of them, that's not remotely the same as a clean slate.


I'm sorry, but I don't understand how you can equate "end of experience" with "fear of the unknown". The first is about valuing your life and not wanting it to end. The second is about what comes after the end of life. I do not care about the second, but I care about my current life a lot. If, for some unlikely but rhetorically valuable reason, my experience decides to NOT END after my body dies — great, more fun. I do not care about the political or religious debates, especially here, but it has always seemed strange to me that people assume the fear of the unknown to be some universal factor.


In one of the detective stories my wife watches, one of the suspects was a kooky spiritual medium. "Don't you wonder what happens after death?" she asks the detective. The skeptical detective responds: "I know exactly what will happen after I die: I will go back to being what I was for millions of years before I was born."

We know exactly what happens after death: nothing. You cease to be as a living being. What we don't know, and can't ever know, is what it's like to not be. But every investigation so far has failed to produce evidence of a soul separate from the body, so until that changes we can assume such souls don't exist, and neither will we when our body dies.

Don't handwave it away with "we don't know how the mind really works". For all intents and purposes we do know. The mind working at all depends on the body working; once the latter stops, so does the former. We can't accept this because our mind, from our mind's perspective, is everything, but it is limited in space and time because it too is composed of matter and energy and one day, it will stop. That fills us with horror and dread, the idea of (from our tiny perspective) everything stopping, so we fight it. We make up stories about heavens and hells. Even in this era we fight it with hopes of becoming transfinite and infinite through technology. It's all hopium and copium, and incredibly dangerous. People like Elon Musk are now shooting giant penises into the sky, and planning to send actual humans on one-way missions to interplanetary hellscapes which should inspire visions of an angry Hayao Miyazaki saying "what you have done is an insult to life itself." Meanwhile we're neglecting the care of the only hospitable home we know we have, Earth.

Accept your fate. Live, as the fictional gorilla Ishmael put it, in the hands of the gods. Doing otherwise will doom us all, and a lot of other living things too.


I don’t remember anything from before I was born, obviously. But I also don’t think my consciousness is particularly unique or special, and conscious human beings lived before me, so I assume it’s reasonable to imagine “I” was one of them. This is a pretty nonsensical way to define the word “I”, but not much more nonsensical than using the singular “I” to refer to the six year old and present versions of me.

What does bum me out is losing a lifetime of knowledge and capability. You get old just when you’re starting to be good at things. The human lifespan could definitely be a few decades longer.


Since you seem to know, can you tell me at what precise moment a person becomes conscious at birth? It would solve a lot of problems in the world if you could share that knowledge with us.

People fill the unknown with lots of things. I am simply suggesting that you should let the unknown remain unknown, especially if you're going to make major life choices around it.


Fundamentalists are fond of responding to claims about evolution (dinosaurs, etc.) with, "Were you there? How could you know if you were not there?" This is even taught as a rhetorical tactic in fundamentalist elementary schools (which, as an American, I'm embarrassed to admit exist here).

This seems to be an approach similar to what you're taking here, except you put an interesting twist on it by handwaving your appeal to spooks with stuff about "the unknown" and then claiming it is the more rational position. Once again: we know, as certainly as we can know anything, that the mind cannot function without the body functioning. Therefore, the idea that there is no experience after body death is a more rational position to take than anything involving 72 virgins, nirvana, reincarnation, or blah blah Bible Jesus magic.


The difference is that we know the fundamentalists are wrong because their beliefs solidly run up against known facts. I am suggesting that filling the gaps one way (even though it feels more rational) is as irrational as filling the gaps any other way.

And we know very little about the mind. We know a lot about the brain. As far as the exact links between mind and brain, that is still quite a bit up in the air.


We know the mind exists as a phenomenon of the body's operation. No operating body... no mind. We have plenty of evidence to support this and zero evidence to the contrary, even if we may not have all the details, so your attempt to rescue a spook reality of disembodied minds by furiously waving your hands and going "it's all unknown!" is disingenuous.


What you are describing is just an AGI aligned with a particular person. So we (humanity) are working on that problem.

Not sure if it will ever satisfy the desire to lengthen the human lifespan though. Just as a thought experiment, imagine that we have this tech. You have your perfect replica. It responds exactly like you would, and no one, not even you, can tell its responses apart from yours. Once you have that, and have attained “immortality” as such, do you mind if someone shoots you in the head? The real you, I mean. After all, you are immortal. Your behaviourally emulated clone will keep doing what you do: loving your wife, taking care of your kids, supporting the causes you support, etc.

For me the answer is that I would absolutely not let my real body be killed just because I have my behavioural clone. Which to me implies that, at least for me, it is not a true continuation of my life. More like having a living will, or a son who is way too similar to me, but still not me.

Basically I would not reach 1000 years. This thing created in my image would.


You cannot make such an AGI if the information is gone. Imaging & cryopreservation sort of insure against early death.

I agree that by the time behavioural replication is possible, AI probably also will be. Harsh.

Now, the point you ended your reply on is a very common response; many follow the same direction of thought. I think, however, that deciding against a behavioural replica because you don't think it would be you is a non sequitur: if the behavioural replica continues to advance your interests, is it not the rational thing to do?

Moreover, if you said "no, it doesn't matter, I'll be dead", you would be following a strategy that'd lead to a huge loss if it turned out you actually never died.


> I think, however, that deciding against a behavioural replica because you don't think it would be you is a non sequitur

Didn’t say that. Do get one if you can afford it; it would be useful for all kinds of things. But a continuation of my life it is not. It simply does not solve the longevity-extension problem from my perspective.


Probably something like 95% or more of people aren't even doing the basics for health: https://www.barbellmedicine.com/blog/where-should-my-priorit...


So I thought about the brain-uploading stuff for my next novel, "The Godlike Era", and my conclusion is: Dude, brain uploading to ONE replicant after you die is totally pathetic. Make 100,000 brain replicants. Run them in parallel on a nuclear-powered GPU cluster. Have them learn all the specialties of modern civilization in faster than real time. Have them teleoperate 100,000 robots. Build out a whole civilization's worth of infrastructure on other planets, with you as the CEO of that 100,000-person planetary development corporation, WHILE YOU'RE STILL ALIVE.


That is a common scifi trope. For example it is the starting point of We Are Legion (We Are Bob).


The part that I don't like about that one is that Bob is dead. What if you do this while you're still alive? Von Neumann probes would be super energy inefficient too. Just power people with electricity via advanced wetware and static nitrogen atmosphere and a bit of climate control and people could live in deep space or uninhabitable planets easily.


> What if you do this while you're still alive?

Nothing? It remains the same story. What do you think it would change?

> Von Neumann probes would be super energy inefficient too.

Stars radiate a lot of energy. Simple fact of the Kardashev scale.
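
(For reference, Sagan's continuous version of the Kardashev scale is K = (log10(P) - 6) / 10 for total power P in watts; the power figures below are standard order-of-magnitude estimates, not precise values:)

    from math import log10

    # Sagan's interpolation: Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W.
    def kardashev(power_watts):
        return (log10(power_watts) - 6) / 10

    print(kardashev(2e13))    # humanity today: ~0.73
    print(kardashev(3.8e26))  # total solar output: ~2.06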

> Just power people with electricity via advanced wetware and static nitrogen atmosphere and a bit of climate control and people could live in deep space or uninhabitable planets easily.

This is mumbo-jumbo.


>Nothing? It remains the same story. What do you think it would change?

The AI would have loyalty to its creator, and it would get into the deep philosophical aspects of what is a living organism vs what is a machine and why that matters.

>Von Neumann probes would be super energy inefficient too.

Yeah, but if someone became intelligent 10,000 years before us, which is a pretty trivial amount of time technologically, their Von Neumann probes would have eaten our planet already, so probably not going to be militarily feasible if there's other intelligent life in the galaxy that's keeping an eye on us.

>This is mumbo-jumbo

It doesn't defy the laws of physics to power people with electricity. Sure, we're going to need a lot of engineering to figure out how to plug people into the wall to recharge, but with AI we might get there in 100 or 200 years. The benefits would be enormous. People need 2000 calories a day which is a trivial amount of electricity compared to say an AI cluster. It would make a lot of sense to send humans around for most things if they used such a small amount of energy.
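
The back-of-the-envelope supports that, if you treat the 2000 kcal diet as if it were delivered as electricity (the GPU and cluster figures are rough, assumed orders of magnitude):

    # 2000 kcal/day expressed as electrical power.
    joules = 2000 * 4184            # ~8.4 MJ per day
    watts = joules / 86_400         # ~97 W average draw
    kwh_per_day = joules / 3.6e6    # ~2.3 kWh per day

    print(f"{watts:.0f} W, {kwh_per_day:.1f} kWh/day")
    # A single modern datacenter GPU draws on the order of 700 W; a large
    # training cluster, tens of megawatts -- i.e. hundreds of thousands of
    # "human equivalents" of power.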


> The AI would have loyalty to its creator, and it would get into the deep philosophical aspects of what is a living organism vs what is a machine and why that matters.

I guess you will have to write it for me to see. :)

> People need 2000 calories a day which is a trivial amount of electricity compared to say an AI cluster.

We have no idea how much energy an AGI will need. You are literally looking at steam engines de-watering a mine and trying to guess what a Shinkansen ticket will cost. It would be very surprising to me if it turns out that keeping human bodies around is the most efficient form of intelligence.

> Yeah, but if someone became intelligent 10,000 years before us, which is a pretty trivial amount of time technologically, their Von Neumann probes would have eaten our planet already, so probably not going to be militarily feasible if there's other intelligent life in the galaxy that's keeping an eye on us.

You will have to spell this one out for me a bit more.

We can't have Von Neumann probes because if we could someone who came before us would have already eaten Earth before we came? There are quite a few assumptions here. And the thesis is not entirely clear either.

Also, we just assume that there are little grey ones watching us, who have never contacted us or told us anything, but whose military red line is seemingly us creating Von Neumann probes? Do you feel that this is built on a pile of shaky assumptions?


Congratulations, you just invented a Sybil attack on humanity.


OK, then I have some sort of AI clone who roughly acts like I would. What does that have to do with lifespan extension, and (if I'm not a delusional tech billionaire) why would I want that?

Actually, how would that sort of "immortality" even be fundamentally different from the traditional way of becoming "immortal" - by having your children or contemporaries carry on your estate in your name, according to their interpretation of "what you would have wanted"?



