Turing’s Imitation Game
Recently Published Documents

Total documents: 17 (five years: 1)
H-index: 4 (five years: 0)

2021 ◽  
Vol 29 (4) ◽  
pp. 409-435
Author(s):  
Hajo Greif

Abstract: The aim of this paper is to grasp the relevant distinctions between the various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the traditional dichotomies between symbolic and embodied AI, between general intelligence and cognitive simulation, and between human-like and non-human-like AI. According to the taxonomy proposed here, one can distinguish between four general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programs: first, phenomenal simulations (e.g., Turing’s “imitation game”); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr’s stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI). In continuation of pragmatic views of the modes of modeling and simulating world affairs, this taxonomy of approaches to modeling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research—and what made that research program uniquely dependent on them.


2019 ◽  
pp. 406-426
Author(s):  
Colin Burrow

The Postscript asks whether a machine could in the future successfully imitate a human poet. It discusses the history of artificially generated poems from early schoolroom manuals through John Clark’s ‘Eureka Machine’ of 1845 to the age of the computer. It relates Alan Turing’s ‘Imitation Game’, in which a computer mimics the linguistic behaviour of a human being, to a wider mid-twentieth-century tendency to see poetry as the ultimate challenge for an electronic imitator of human behaviour. The chapter argues that a computer which depended on statistical modelling of prior poetic corpora would not be able to replicate the actions of a human imitator, because imitating authors imitate not simply words but practices, and those are not simply codifiable. Imitators do not simply follow the rules implicit in earlier texts, but might imitate an earlier author’s willingness to break those rules. The chapter shows that a pervasive opposition between biological and digital systems runs through writing about the possibility of artificially imitating human consciousness, the latest manifestation of the opposition between a ‘living’ recreation of a past author and a simulacrum. It concludes by discussing the Xenotext by the Canadian experimental poet Christian Bök, which seeks to create a perpetually living poetry engine embedded in the DNA of a permanently durable microbe. This takes the long-standing metaphor of a ‘living’ imitation to the cellular level, and makes of imitatio an unending biological process of transformation.


Author(s):  
Huma Shah ◽  
Kevin Warwick

Trust is the expectation of certainty needed to transact confidently. But how accurate is our decision-making in human–machine interaction? This chapter presents evidence from experimental conditions in which human interrogators, relying on their judgement of what constitutes a satisfactory response, trusted that a hidden interlocutor was human when it was actually a machine. A simultaneous-comparison Turing test is presented, with conversation between a human judge and two hidden entities, conducted during Turing100 at Bletchley Park, UK. Post-test conversational analysis by the audience at Turing Education Day shows that more than 30% made the same identification errors as the Turing test judge. Trust is found to be misplaced in subjective certainty, which could lead to susceptibility to deception in cyberspace.




Author(s):  
Diane Proudfoot

Can machines think? Turing’s famous test is one way of determining the answer. On the sixtieth anniversary of his death, the University of Reading announced that a ‘historic milestone in artificial intelligence’ had been reached at the Royal Society: a computer program had passed the ‘iconic’ Turing test. According to an organizer, this was ‘one of the most exciting’ advances in human understanding. In a frenzy of worldwide publicity, the news was described as a ‘breakthrough’ showing that ‘robot overlords creep closer to assuming control’ of human beings. Yet after only a single day it was claimed that ‘almost everything about the story is bogus’: it was ‘nonsense, complete nonsense’ to say that the Turing test had been passed. The program concerned ‘actually got an F’ on the test. The backlash spread to the test itself; critics said that the ‘whole concept of the Turing Test is kind of a joke . . . a needless distraction’. So, what is the Turing test—and why does it matter? In 1948, in a report entitled ‘Intelligent machinery’, Turing described a ‘little experiment’ that, he said, was ‘a rather idealized form of an experiment I have actually done’. It involved three subjects, all chess players. Player A plays chess as he/she normally would, while player B is proxy for a computer program, following a written set of rules and working out what to do using pencil and paper—this ‘paper machine’ was the only sort of programmable computer freely available in 1948 (see Ch. 31). Both of these players are hidden from the third player, C. Turing said, ‘Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine’. How did the experiment fare? According to Turing, ‘C may find it quite difficult to tell which he is playing’. This is the first version of what has come to be known as ‘Turing’s imitation game’ or the ‘Turing test’.
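The hidden-players experiment Turing describes can be sketched as a toy simulation: two anonymous respondents answer the same questions, and an interrogator must guess which seat hides the machine. Everything here (the function names, the stand-in players, the naive judge) is an illustrative invention for this sketch, not anything from Turing’s 1948 report:

```python
import random

def imitation_game(judge, human, machine, questions, rng=random):
    """One round of the imitation game: the interrogator sees two
    anonymous answer streams and must say which seat is the machine."""
    machine_seat = rng.randrange(2)  # hide the machine behind seat 0 or 1
    players = [human, machine] if machine_seat == 1 else [machine, human]
    # The judge sees only (seat-0 answer, seat-1 answer) pairs, never identities.
    transcript = [(players[0](q), players[1](q)) for q in questions]
    guess = judge(transcript)        # judge returns a seat index, 0 or 1
    return guess == machine_seat

# Hypothetical stand-ins for the hidden players:
human_player = lambda q: f"Hmm, {q.lower()} I'd say it depends."
machine_player = lambda q: f"QUERY RECEIVED: {q.upper()}"

def naive_judge(transcript):
    """A naive interrogator: accuse the seat whose answers shout in capitals."""
    upper_counts = [sum(a.isupper() for a in seat) for seat in zip(*transcript)]
    return 0 if upper_counts[0] >= upper_counts[1] else 1
```

With such transparently mechanical stand-ins the judge identifies the machine every time; Turing’s point is that against a good imitator, C ‘may find it quite difficult to tell which he is playing’.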


AI Magazine ◽  
2016 ◽  
Vol 37 (1) ◽  
pp. 97-101 ◽  
Author(s):  
Douglas B. Lenat

Turing’s Imitation Game was a brilliant early proposed test of machine intelligence — one that is still compelling today, even though, with the hindsight of all we have learned in the intervening 65 years, we can see the flaws in his original test. And our field needs a good “Is it AI yet?” test more than ever, with so many of us spending our research time looking under the “shallow processing of big data” lamppost. If Turing were alive today, what sort of test might he propose?

