
I don't pretend that an argument from authority resolves this debate, but the fact that people like Stuart Russell take these arguments seriously implies that there's a bit more substance there than you're acknowledging.

To actually argue the point a little bit: the theory of expected-utility-maximizing agents is pretty much the framework in which all of mainstream AI research is situated. Yes, most current work is focused on tiny special cases, in limited domains, with a whole lot of tricks, hacks, and one-off implementations required to get decent results. You really do need a lot of specialized knowledge to be a successful researcher in deep learning, computer vision, probabilistic inference, robotics, etc. But almost all of that knowledge is ultimately in the service of trying to implement better and better approximations to optimal decision-theoretic agents. It's not an unreasonable question to ask, "what if this project succeeds?" (not at true optimality -- that's obviously excluded by computational hardness results -- but at approximations that are as good as or better than what the human brain does).
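
To make "optimal decision-theoretic agent" concrete: stripped of all the engineering, the idealized agent just picks the action with the highest probability-weighted utility over outcomes. A minimal sketch in toy Python (the names and structure are mine, purely illustrative, not anyone's actual system):

    # Toy expected-utility maximizer: pick argmax_a E[U | a].
    # P[a][s]: probability of outcome s if we take action a
    # U[s]:    utility of outcome s
    def best_action(actions, outcomes, P, U):
        def expected_utility(a):
            return sum(P[a][s] * U[s] for s in outcomes)
        return max(actions, key=expected_utility)

Real systems can't enumerate outcomes like this, which is exactly why the field is full of approximations -- but the thing being approximated is this argmax.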

Do Nick Bostrom and Eliezer Yudkowsky understand when you would use a tanh vs rectified linear nonlinearity in a deep network? Do they know the relative merits of extended vs unscented Kalman filters, MCMC vs variational inference, gradient descent vs BFGS? I don't know, but I'd guess largely not. Is it relevant to their arguments? Not really. You can do a lot of interesting and clarifying reasoning about the behavior of agents at the decision-theoretic level of abstraction, without cluttering the argument with details of current techniques that may or may not be relevant to the limitations of whatever we eventually build.



All that talk about maximizing utility isn't about intelligence at all; it's about a sensory feedback loop, maybe. Intelligence is what lets you say "This isn't working. Maybe I should try a different approach. Maybe I should change the problem. Maybe I should get a different job." Until you're operating at that meta level, you're not talking about 'intelligence' at all, just control systems.


That's not the definition the mainstream AI community has taken, for what I think are largely good reasons, but you could define intelligence that way if you wanted. It's only a renaming of the debate though - instead of calling the things we're worried about "intelligent machines", you'd now call them "very effective control systems".

The issue is still the same: if a system that doesn't perfectly share your goals is making decisions more effectively than you, it's cold comfort to tell yourself "this is just a control system, it's not really intelligent." As Garry Kasparov can confirm, a system with non-human reasoning patterns is still perfectly capable of beating you.


Yeah, you can imagine a lizard brain being introduced into a biomechanical machine to calculate chess moves. That doesn't make the lizard more intelligent, or even add intelligence to the lizard.

If we don't regard intelligence as something different from control, then I guess birds are the most intelligent because they can navigate complex air currents, etc. That is a poor definition of intelligence, because it's not helpful in distinguishing what we normally mean by 'smart' from mechanistic/logical systems.

And the discussion of rogue AIs is all about intelligence gone awry. Does anybody fear a control system that mis-estimates the corn crop? No, it's about a malicious, non-empathetic machine entity that coldly calculates how to defeat us. And that requires more than the current AIs are delivering.



