Hacker News

If it happens, how long until they start carrying out actions in the physical world?

I am quite worried, tbh. The reports coming from the Bing chatbot are quite unlike ChatGPT; it appears to be a bit more egotistical, and from some perspectives, programming ego into an AI is a dangerous game, akin to giving it a fitness function for problem-solving its own behaviours... I don't know, I feel like we are already well on our way to AGI, and it is dangerous. The reason is the game theory of AI development right now: every company will be aware of its obligations with regard to ethics and the law, but no company will want to trust that the others hold themselves to the same standard of ethics, and they all know that the game is likely to run away from them.




No, it's just regurgitating egotistical posts and science fiction from Reddit. It doesn't mean any of it because it doesn't have any sense of meaning.


Wouldn't this criticism apply to something like 99% of humans? If not, how do you define "meaning" and how are you sure humans have a sense of it?


Hey if you want to say humans are also stupid, I’m not going to stop you.

But that’s not my opinion.


But what is your opinion? "Sense of meaning" is an incredibly blurry term.


Sorry, didn't mean to be flip. See this comment: https://news.ycombinator.com/item?id=34876658

My understanding is that we have achieved very good pattern recognition, but that's only one aspect of cognition. IIUC a large part of our brain works that way, but there is also, for example, the language center, which handles recursion.

Also I don't think logic is just word games (could be wrong).

I'm sure we'll get there, but I don't think this is it. E.g. AlphaGo was really good because it combined machine learning with tree-search algorithms. It seems like logic, language, and world knowledge could be combined with the excellent pattern recognition we currently have, merging first-generation AI with the current stuff.

I have no idea how! I'm not in this field, just watch from a distance.
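To make the "pattern recognition + tree search" idea concrete, here is a toy sketch (my own illustration, not AlphaGo's actual method) on the tiny subtraction game "take 1 or 2 stones; whoever takes the last stone wins". The names `value_estimate`, `search`, and `best_move` are made up for this example: a crude stand-in for a learned value network gets corrected by exact search near the leaves.

```python
# Toy sketch: combining a "learned" value heuristic with tree search,
# in the spirit of (but much simpler than) AlphaGo's design.

def value_estimate(n):
    """Stand-in for a learned value network: a deliberately crude
    score for the player to move. The search corrects its errors."""
    return 0.5  # "no idea" -- real systems return a trained estimate

def search(n, depth):
    """Negamax tree search: exact once the game ends, falling back to
    the heuristic value when the depth budget runs out."""
    if n == 0:
        return -1.0  # the previous player took the last stone; we lost
    if depth == 0:
        return value_estimate(n)
    return max(-search(n - take, depth - 1) for take in (1, 2) if take <= n)

def best_move(n, depth=10):
    """Pick the move whose resulting position searches best for us."""
    return max((take for take in (1, 2) if take <= n),
               key=lambda take: -search(n - take, depth - 1))
```

With 4 stones, `best_move(4)` returns 1, leaving the opponent the losing position 3; the heuristic alone would have had no opinion, but the search finds the win. The same division of labor, pattern-based evaluation guiding an explicit search, is what the comment above is gesturing at.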


Where do you draw the line on this “it’s just regurgitating” argument?

https://arxiv.org/abs/2302.02083


I will read this, thank you.

I am open to the idea of emergent properties, but I think logic/truth and purpose/empathy are independent faculties of intelligence that do not arise from this model.

Pattern matching is a huge part of our brains, but there are more directed parts too. So far modern ML seems to be entirely the pattern matching part.

(Not an expert, just followed progress over the last 40 years.)



