But humans failed to find an algorithmic solution for Go. All they could do was throw a lot of data at it and get a bunch of coefficients without discovering the underlying rules.
Same with drawing images and understanding language: these are not solved yet.
This is like giving the answer on an exam but failing to explain how you got it. I doubt you could get away with that.
Well, we humans have also failed to find an algorithmic solution for how our brain plays Go. I mean, the ways AI and our brains work are both mysterious. Surely at different levels/layers, but they share the mystery.
> But humans failed to find an algorithmic solution for Go.
sure we have algorithmic solutions for go, they're just not very good.
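for illustration, here's what the textbook kind of "algorithmic solution" looks like - plain negamax over a generic game interface (a python sketch of mine, demonstrated on a toy nim heap because a full go implementation is beside the point). it's exact, but with go's roughly 250 legal moves per turn over roughly 150 moves the game tree is on the order of 250^150 positions, so this approach is hopeless in practice:

```python
def negamax(state, legal_moves, apply_move, is_terminal, score):
    """Best achievable outcome for the player to move, by exhaustive search."""
    if is_terminal(state):
        return score(state)
    return max(-negamax(apply_move(state, m), legal_moves, apply_move, is_terminal, score)
               for m in legal_moves(state))

# toy game: one nim heap, take 1-3 stones per turn, whoever takes the last stone wins
legal_moves = lambda n: [take for take in (1, 2, 3) if take <= n]
apply_move  = lambda n, take: n - take
is_terminal = lambda n: n == 0
score       = lambda n: -1.0   # the player facing an empty heap has already lost

print(negamax(10, legal_moves, apply_move, is_terminal, score))  # 1.0: a 10-stone heap is a win for the mover
```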
> All they could do was throw a lot of data at it and get a bunch of coefficients without discovering the underlying rules.
that's not completely true either. the special thing about ~alphago~ alphazero* was that it learned by playing itself instead of learning from a pre-recorded catalog of human games (which is the reason for its - for humans - peculiar playstyle). there's a toy sketch of the self-play idea at the end of this comment.
now i'm not sure how you're arguing a neural network trained to play go doesn't understand the "underlying rules" of the game. to the contrary, it doesn't understand ANYTHING BUT the underlying rules.
explaining why you did something isn't always easy for a human either. most of the time a player couldn't say anything more concrete than "well, it's obviously the best move according to my experience" without just making stuff up.
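and to make the self-play point concrete, here's the toy sketch mentioned above - my own tic-tac-toe stand-in, not anything from deepmind (no neural net, no mcts): the agent never sees a human game, it just plays against itself and backs up values over the positions it generated:

```python
import random
from collections import defaultdict

# toy self-play value learning on tic-tac-toe. the only point: no human games
# anywhere - the training data is generated by the agent playing itself.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i in range(9) if board[i] == "."]

V = defaultdict(float)      # learned position values, from x's point of view
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def value(board):
    w = winner(board)
    if w == "x": return 1.0
    if w == "o": return -1.0
    if not moves(board): return 0.0   # draw
    return V[board]

def self_play_game():
    board, player = "." * 9, "x"
    history = [board]
    while winner(board) is None and moves(board):
        legal = moves(board)
        if random.random() < EPSILON:
            m = random.choice(legal)
        else:
            # greedy w.r.t. the current value estimates: x maximizes, o minimizes
            scored = [(value(board[:i] + player + board[i+1:]), i) for i in legal]
            m = (max(scored) if player == "x" else min(scored))[1]
        board = board[:m] + player + board[m+1:]
        history.append(board)
        player = "o" if player == "x" else "x"
    # TD(0)-style backups along the game the agent just played against itself
    for s, s_next in zip(history, history[1:]):
        V[s] += ALPHA * (value(s_next) - V[s])

for _ in range(20000):
    self_play_game()
print("value of the empty board after self-play:", round(V["." * 9], 3))
```

the learned values are just a table of numbers - exactly the "bunch of coefficients" being complained about - yet the only thing they encode is what follows from the rules of the game.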
By "underlying rules" I meant not rules of Go, but a detailed, commented algorithm that can win against human. Not a bunch of weights without any explanation.
It is possible that there is no algorithm understandable by normal humans, or by humans at all, in the sense of a typical textbook algorithm like quicksort.
In other words, the shortest such algorithm may simply be very long when written in a reasonably compact programming language.
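One way to make that precise (my gloss, not something anyone above said) is Kolmogorov complexity: the length of the shortest program that produces a given behavior on a fixed universal machine $U$,

$K(x) = \min\{\, |p| : U(p) = x \,\}$

Nothing guarantees that the shortest program playing Go at a superhuman level is short enough for anyone to read, let alone to comment.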
Imagine you go to your job at the bank tomorrow and, instead of well-documented, maintainable, formatted code, you see gibberish. And your neural coworker tells you it's just a problem with your capabilities if you cannot understand it; he merely refactored it to improve performance. That's the situation with machine learning today.
The explanation is perfectly sensible, just too complex for humans to follow as the model scales up.
The thing you're looking for - a reductive explanation of the weights of an ANN that's easy to fit in your head - does not exist. If it were simple enough to satisfy your demands, it wouldn't work at all.
Yet when a master player decides which move to play, they often have concrete reasons for it, which they discuss in post-game analysis. They weigh some advantages or chances higher than others, or some risks greater than others, calculate specific sequences ahead to make sure they solve a subproblem correctly, and base their decision on that.
Banks don't typically attempt to solve NP-hard problems.
Meanwhile, things like stock markets do, with things like partial prediction of the future, where all possible outcomes are not computable in any practical amount of time - hence they use things like ML/AI.