I'm afraid I don't understand the point you're making with "computer says no".
> Genetic algorithms have designed circuitry that we failed to even understand at first but that did work.
This is an excellent example of what I think of as not machine intelligence. If humans can't understand it, then it's something entirely different that needs a different word - an "artefact", perhaps. Meaningfully contributing to the state of the art of human knowledge requires being built upon. If these genetic algorithms could explain their designs well enough for humans to incorporate them into the design process, that would be intelligence. If they are more like evolving a mantis shrimp by fiddling with DNA, that is marvellous but not what I would regard as intelligence.
We apply the same standard to human intelligence: someone who can multiply numbers very fast but cannot explain how they do it is a savant; someone who can discover and teach other people a faster way of multiplying is intelligent.
Savant literally means 'one who knows', and savants are not required to explain to you how they know; it's up to you to verify that they do. Just as a chess grandmaster doesn't have to prove to you that he or she is intelligent - it's enough that they beat you. They are under no obligation to prove their intelligence by teaching you to play as well (assuming you could follow in the first place).
> Meaningfully contributing to the state of the art of human knowledge requires being built upon.
No, it requires us to understand. But we will not always be able to (in the case of those circuits we eventually figured it out, but not at first). In chess we did, too: computer chess made some chess players (the best ones) better at chess. But there is no reason to assume this will always be the case, and that's a limit of our intelligence.
I think you're indeed arguing a sensible definition of Artificial Intelligence, but it's not what most people (especially laymen) mean by the phrase. I think most actually conflate AI with Artificial Sapience: some cognitive architecture that can, in theory, with the right training, do all the things humans do (socialize, write persuasive essays, create art, etc.) at least as well as the average human.
Though in all honesty, I think a lot of people just want to see a machine with emotional "instincts" and an understanding of tribal status-hierarchy dynamics such that you can empathize with it. A lot of people would consider a machine that accurately simulated a rather dumb chimpanzee to be "smart enough" to qualify as AI, even if it couldn't do any useful human intellectual labor.