In the world of computing, few figures loom as large as Alan Turing, the British mathematician who not only cracked the Nazi codes during World War II, but also laid much of the groundwork for the creation of modern digital computers. Among his contributions, one of the most enduring is a simple test he proposed in 1950 that remains one of the most debated issues in the world of artificial intelligence.
Alan Turing proposed his test as a way to measure the progress of technology, but it just as easily presents us with a way to measure our own. Oxford philosopher John Lucas says, for instance, that if we fail to prevent the machines from passing the Turing test, it will be "not because machines are so intelligent, but because humans, many of them at least, are so wooden."
... word problems, spatial-reasoning questions, deliberate misspellings ... This type of thing is extraordinarily hard for programmers to prepare against, because anything goes. And that is both (a) the reason Turing had language, and conversation, in mind for his test, because it is really, in some sense, a test of everything, and (b) the kind of conversation Turing seemed to have envisioned, judging from the hypothetical conversation snippets in his 1950 paper.
The Loebner Prize competition is based on the Turing Test, one of the biggest challenges in the world of Artificial Intelligence. ... The judges at the competition will conduct conversations with the four finalist chatbots and with some human surrogates, and will then rank all their conversation partners from most humanlike to least humanlike. The chatbot with the highest overall ranking wins the prize.
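The ranking procedure described above can be sketched in a few lines. The excerpt does not specify how the per-judge rankings are combined into an overall ranking, so the aggregation by mean rank below, along with the bot names and sample data, is an assumption for illustration only.

```python
# Hypothetical sketch of the Loebner-style ranking described above.
# Each judge ranks every chatbot from most humanlike (1) to least
# humanlike (higher numbers). Combining per-judge ranks by averaging
# is an assumption; the competition's actual aggregation may differ.

from statistics import mean

# Illustrative data: the rank each of three judges gave each finalist.
rankings = {
    "bot_a": [2, 1, 3],
    "bot_b": [1, 2, 1],
    "bot_c": [4, 4, 4],
    "bot_d": [3, 3, 2],
}

def winner(rankings):
    """Return the chatbot with the best (lowest) mean rank across judges."""
    return min(rankings, key=lambda bot: mean(rankings[bot]))

print(winner(rankings))  # bot_b: best average rank in this sample data
```

Note that ranking (rather than a pass/fail judgment per conversation) means the prize always has a winner, even in a year when no chatbot fools any judge.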
Turing himself called his test the 'Imitation Game.' Reading Turing's 1950 article, a number of differences between the popular conception of the test and Turing's original are apparent. ... While in the Loebner competition all judges know that they are interrogating a mixture of human collaborators and machines, in Turing's description it is not made clear whether the judges are to be aware of the possibility that a contestant is not in fact human.
The one thing everyone likes about the Turing test is its proxy function, the idea that the test is a proxy for a great many, wide-ranging intellectual capabilities. Dennett puts it this way:
"Nothing could possible pass the Turing test by winning the imitation game without being able to perform indefinitely many other intelligent actions. [Turing's] test was so severe, he thought, that nothing that could pass it fair and square would disappoint us in other quarters."
Tests should be challenging, but tests that cannot be passed provide no information. ... My point is that even the very best technology in AI today would not bring us anywhere close to passing the Turing Test, and this has a very bad consequence: Few AI researchers try to pass the test.
Indeed, even if the challenge had been met, strong arguments have been made that we still would not necessarily have built an intelligent machine (the most famous objection being Searle's “Chinese Room” argument). The basis of Searle's argument is that a program could pass the test, outputting strings of symbols for each string of symbols input, without any understanding of what it was doing, and hence pass without possessing intelligence.
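Searle's point can be made concrete with a deliberately dumb sketch: a program that maps input strings to output strings by pure lookup, with no representation of meaning anywhere. The rule-book entries and the fallback reply below are invented for illustration.

```python
# A toy illustration of the Chinese Room argument: the program pairs
# input symbol strings with output symbol strings by lookup alone.
# Nothing in it represents the meaning of any exchange.

RULE_BOOK = {
    "Hello, how are you?": "Fine, thanks. And you?",
    "What is your favorite color?": "Blue, I suppose. Why do you ask?",
}

def room(input_symbols: str) -> str:
    """Return the output string the rule book pairs with the input string."""
    # Fall back to a stock deflection when no rule matches, as a
    # lookup-based chatbot might.
    return RULE_BOOK.get(input_symbols, "That's interesting. Tell me more.")

print(room("Hello, how are you?"))  # prints "Fine, thanks. And you?"
```

However fluent the replies, all of the apparent intelligence lives in the table; the program that consults it understands nothing, which is precisely the gap Searle's argument points at.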
Robot football and other embodied tasks are now seen as more promising challenges than Turing's test. But for game developers, the façade is what counts; it provides a simulation of intelligence to characters in a game world. Believability is more important than truth. Thus the goal of AI in games is generally the same as that of attempts to beat the Turing test, i.e., to create a believable intelligence by any means necessary.
The new 'grand challenges,' which tend to focus on tasks embodied in the real world, consist not only of some final test or objective, but also provide incremental goals, and it is the incremental goals that enable systematic progress to be made, whereas the Turing test is an 'all or nothing' test. The key surviving feature of the Turing test in these new challenges is the focus on the ability of machines to undertake tasks which demonstrate intelligence, rather than on developing machines which are 'provably' intelligent.