
I hope you have some advance predictions about what capabilities the current paradigm would and would not successfully generate.

Separately, it's very clear that LLMs have "world models" in most useful senses of the term. Ex: https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-o...

I don't give much credit to the claim that it's impossible for current approaches to get us to any specific type or level of capabilities. We're doing program search over a very wide space of programs; what that can result in is an empirical question about both the space of possible programs and the training procedure (including the data distribution). Unfortunately it's one where we don't have a good way of making advance predictions other than "try it and find out".
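(To make the "program search" framing concrete, here's a toy sketch of my own, not anything from the post: enumerate tiny arithmetic programs and keep whichever best fits some input/output examples. LLM training is gradient descent over a vastly larger continuous space, but the point carries over: what the search finds is an empirical fact about the space plus the data, and the function/variable names here are just illustrative.)

    # Toy "program search": enumerate small expressions over x, constants, +, *
    # and keep the one with the lowest squared error on the data.
    import itertools

    def candidate_programs(max_depth=2):
        """Yield (source, callable) pairs for tiny programs f(x)."""
        leaves = [("x", lambda x: x)] + [
            (str(c), (lambda c: lambda x: c)(c)) for c in range(-2, 3)
        ]
        progs = list(leaves)
        for _ in range(max_depth):
            new = []
            for (sa, fa), (sb, fb) in itertools.product(progs, repeat=2):
                new.append((f"({sa}+{sb})", (lambda fa, fb: lambda x: fa(x) + fb(x))(fa, fb)))
                new.append((f"({sa}*{sb})", (lambda fa, fb: lambda x: fa(x) * fb(x))(fa, fb)))
            progs = leaves + new
        return progs

    def search(data):
        """Return the candidate program with the lowest squared error."""
        def loss(f):
            return sum((f(x) - y) ** 2 for x, y in data)
        return min(candidate_programs(), key=lambda sf: loss(sf[1]))

    if __name__ == "__main__":
        # Target behaviour y = x*x + 1, given only as examples.
        data = [(x, x * x + 1) for x in range(-3, 4)]
        src, f = search(data)
        print("best program found:", src)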



It is in moments like these that I wish I wasn’t anonymous on here and could bet a six-figure sum on AGI not happening in the next 10 years, which is how I define “foreseeable future”.


You disagreed that 2047 was reasonable on the basis that researchers didn't think it would happen in the foreseeable future, so your definition must be at least 23 years for consistency's sake.


I'd be OK with that, too, if we adjusted the bet for inflation. This is, in a way, similar to fusion. We're at a point where we managed to ignite plasma for a few milliseconds. Predictions of when we're going to be able to generate energy have become a running joke. The same will be the case with AGI.




