I studied AI shortly after the "AI Winter", which is where I got my definition. Strong AI---working towards a general intelligence---was strongly out of favor, especially with funding agencies. It still remains so (but see Watson). But solving limited problems heuristically (or statistically) that are not otherwise algorithmically tractable (a loose translation of "would appear to require general intelligence") has always been a fertile field.
Turing's argument, which is a philosophical argument, is not meaningful if you put any limits on it---time, topic, behavior (really, Parry is better than the Doctor)---which is why it is better thought of as a thought experiment. If the limits are such that it can be gamed, then yes, it is fair to say "you only need to simulate intelligence well enough to fool the judge." Which makes it uninteresting.
But the question is, if you can "simulate intelligence" well enough under any conceivable circumstance (and yes, all actual human beings will fail here), how can you say that it cannot "actually think"?