
My experience so far is very similar to yours. The technology is _really_ impressive (we have managed to transform electricity into knowledge!), but to say it is at the same level as the atom bomb seems a bit premature. My impression (or maybe my hope) is that your thinking is in line with the "silent majority" of people watching the hysteria from the sidelines.

My personal experience is that the GPTs are essentially a better Google. Why people seem to think that the models' "intelligence" will start scaling exponentially beyond where it is today (somehow _vastly_ exceeding the intelligence of the humans that created the model/training data itself, no less) is beyond me.

Will the models continue to improve? I suspect they will. Will they suddenly turn into vengeful gods and enslave/exterminate us all? That seems like a leap. I think we would need a true Hiroshima-style moment with AI to shift public opinion that far.

I wonder if there is something deep inside the human psyche that endlessly looks for and, at some level, _craves_ existential crises like this. We look for danger everywhere and project our own fears and anxiety at whatever seems to fit the bill.



The potential of large language models is huge, but their impact will probably be smaller than the Internet's.

The potential of full AGI though? That could be as big a difference as the change from monkeys to humans, far bigger than the atomic bomb. A superintelligent AGI Hiroshima doesn't leave survivors, because it's obvious that such a system would only implement its plan and kill everyone once it had a high certainty of success.

What really matters is how long it takes to go from human level intelligence to superhuman level intelligence.


> What really matters is how long it takes to go from human level intelligence to superhuman level intelligence.

probably a few hours if it can self-improve
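A toy way to see why "a few hours" isn't crazy: if each improvement cycle both doubles capability and shortens the next cycle, total wall-clock time is a convergent geometric series. Here's a minimal Python sketch; the doubling rule, the speedup factor, and the name hours_to_superhuman are all made-up assumptions for illustration, not claims about any real system:

    # Toy model of recursive self-improvement. Every number here is
    # an assumption chosen for illustration, not a prediction.
    def hours_to_superhuman(capability=1.0, target=1000.0,
                            cycle_hours=1.0, speedup=2.0):
        """Each cycle doubles capability (assumed), and the improved
        system runs the next cycle `speedup`x faster (assumed).
        Returns total wall-clock hours to reach `target`."""
        total = 0.0
        while capability < target:
            total += cycle_hours
            capability *= 2.0        # assumed: capability doubles per cycle
            cycle_hours /= speedup   # assumed: smarter system iterates faster
        return total

    # 1 + 1/2 + 1/4 + ... stays under 2 hours no matter how high the
    # target is -- the whole "fast takeoff" intuition in one loop.
    print(hours_to_superhuman())   # ~1.998

If each cycle instead takes longer than the last (diminishing returns), the same loop never converges, which is the standard counterargument.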



