> I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

Can you explain more? I’ve found a definition of the SSA, but I’m unsure how it applies to AGI…




The self-sampling assumption says that you should reason as if you are a random sample drawn from the set of all observers in your reference class, i.e. all the minds that ever exist.

It's the anthropic principle applied to the distribution/shape of intelligence.

Since your self-sample came up human-shaped, worlds where human-shaped minds are the most common kind of observer are the most likely. If AGI were possible, artificial minds would eventually vastly outnumber human ones, and finding yourself in a human-shaped mind would be a surprising draw. So AGI is probably not possible (a toy version of the update is sketched below).
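Concretely, the update described above is just Bayes' rule on the observation "my mind is human-shaped". A minimal sketch in Python, with made-up numbers (the 50/50 prior and the 1% human fraction are purely illustrative, not from Bostrom):

    # Toy Bayesian version of the SSA argument. Hypotheses, priors,
    # and observer fractions are all assumptions for illustration.

    # Hypothesis A: AGI is possible, so most minds ever instantiated
    # are AI-shaped. Hypothesis B: AGI is impossible, so every mind
    # is human-shaped.
    prior_a, prior_b = 0.5, 0.5

    # Under SSA, treat yourself as a random draw from all observers.
    # Assumed fraction of human-shaped minds under each hypothesis:
    p_human_given_a = 0.01  # humans are a small minority among AIs
    p_human_given_b = 1.00  # all observers are human-shaped

    # Bayes' rule on the observation "my mind is human-shaped":
    evidence = prior_a * p_human_given_a + prior_b * p_human_given_b
    posterior_a = prior_a * p_human_given_a / evidence
    posterior_b = prior_b * p_human_given_b / evidence

    print(f"P(AGI possible | human-shaped): {posterior_a:.3f}")    # ~0.010
    print(f"P(AGI impossible | human-shaped): {posterior_b:.3f}")  # ~0.990

The exact numbers don't matter much; any hypothesis under which human-shaped minds are a small minority of all observers gets heavily penalized by the observation that you are one.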



