If it happens, how long until they start expediting the actionables in the physical world?
I am quite worried, tbh. The reports coming from the Bing chatbot are quite unlike ChatGPT; it appears to be a bit more egotistical, and from some perspectives, programming ego into an AI is a dangerous game, akin to giving it a fitness function for problem-solving its behaviours... I don't know, I feel we are already well on our way to AGI, and that it is dangerous. The reason is the game theory of AI development right now: every company will be aware of its obligations with regard to ethics and the law, but no company will trust that the others live up to the same standard of ethics, and they all know the game is likely to run away from them.
My understanding is that we have achieved very good pattern recognition, but that's only one aspect of cognition. IIUC a large part of our brain works that way, but there are other parts too, for example the language center, which handles recursion.
Also I don't think logic is just word games (could be wrong).
I'm sure we'll get there, but I don't think this is it. E.g. AlphaGo was really good because it combined machine learning with tree search algorithms. It seems like logic, language, and world knowledge could be combined with the excellent pattern recognition we currently have, merging first-generation AI with the current stuff.
I have no idea how! I'm not in this field, just watch from a distance.
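To make the AlphaGo point concrete: the idea is pairing a cheap "pattern recognizer" (a learned policy/evaluation) with explicit tree search that verifies its suggestions by lookahead. Here is a toy sketch of that division of labour, using a deliberately naive heuristic on the take-1-to-3 subtraction game (this is my own illustrative example, not AlphaGo's actual algorithm, which uses Monte Carlo tree search with deep policy and value networks):

```python
from functools import lru_cache

def heuristic(take):
    # Stand-in for a learned policy: naively prefers taking more stones.
    # In AlphaGo this role is played by a trained neural network.
    return take

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move wins the take-1-to-3 subtraction game."""
    if pile == 0:
        return False  # no move left: the previous player took the last stone
    # The heuristic orders candidate moves; exhaustive search then
    # verifies each one, correcting the heuristic when it is wrong.
    for take in sorted(range(1, min(3, pile) + 1), key=heuristic, reverse=True):
        if not wins(pile - take):
            return True
    return False

# Known result for this game: multiples of 4 are losses for the mover.
print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
```

The heuristic alone would often play badly, but the search layered on top makes the final decision sound; the heuristic only speeds things up by trying promising moves first. That is roughly the "pattern recognition + first-generation AI" merge described above.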
I am open to the idea of emergent properties, but I think logic/truth and purpose/empathy are independent faculties of intelligence that do not come from this model.
Pattern matching is a huge part of our brains, but there are more directed parts too. So far, modern ML seems to be entirely the pattern-matching part.
(Not an expert, just followed progress over the last 40 years.)