
General thinking requires an AGI, which GPT-4 is not. But it can already have a major impact. Unlike self-driving cars, which need 99.999+% safety before they can be deployed widely, the imperfect GPT-3 and ChatGPT are already being used for many productive tasks.

Driving as well as an attentive human, in real time and in all conditions, probably requires AGI as well.

GPT-4 is not an AGI, and GPT-5 might not be either. But the barriers to getting there keep getting thinner. Are we really ready for AGI arriving plausibly within our lifetimes?

Sam Altman has written that AGI is a top candidate explanation for the Fermi Paradox. If that is even remotely true, we should be doing 10x-100x more work on AI alignment research.
