Turing, tested

The Turing Test is an enduring artifact of cybernetics, computer science, and popular imagination. Although it is far from universally accepted as proof of much of anything, I'd like to briefly poke another hole or two in its premises.

The original "game" proposed by Turing involved no computers--there was one man, and one woman. A third person (a tester) was able to communicate with them only by written notes, and was tasked with determining the participants' correct genders. The twist is this: the man was instructed to trick the tester into believing he was a woman, and the woman was instructed to behave naturally. Turing then adapted the idea by replacing one participant with a a computer, leaving the tester to determine which was human. If the tester failed to correctly determine which participant was the computer, then the computer could be considered intelligent.

I have three problems with this formulation:

1) It is based on deception. In the natural world, deception is practiced by both predators and prey, and successful deception by one means the suffering or death of the other. Likewise, in the business, social, and ethical realms of human life, we rarely hold deception up as a virtue worth building upon. Taken at face value, this formulation devalues human intelligence by equating it with deception. As a human being, I would like to see intelligence defined in more human, and less algorithmic, terms. (Or, in ruthless evolutionary terms: if computers do someday achieve intelligence, we got here first, and we should define the terms in our enduring favor.)

2) It is excessively reductive. Narrowing the channel through which intelligence must be communicated to one of such tiny bandwidth (a single channel at that, unlike the human experience of multiple channels and senses) intentionally privileges the computer. I might as well propose that a small box which emitted a human-sounding laugh in response to funny jokes (and no sound in response to bad ones) was intelligent--surely it requires human-like intelligence to understand when jokes are funny? Not at all. (I sketch such a box in code after this list.)

3) It conflates the signifier with the signified. Clever strings of text do not inherently indicate intelligence. We accept text-based communication because it is a sufficient signifier of something more--another human--on the other end. We accept this signifier precisely because, historically, only a human could generate it. If computers can reliably generate that signifier, then it will no longer signify what it always has. Rather than prove machine intelligence, a successful Turing Test will only prove the insufficiency of the very medium it uses (devaluing it in the process).
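
To make the second objection concrete, here is a toy sketch of the laugh box--a few lines of Python, with a marker list and a canned laugh invented purely for illustration. It "gets" jokes over a narrow text channel using nothing that could plausibly be called intelligence:

    # Toy "laugh box": laughs at joke-shaped text, stays silent otherwise.
    # The marker list and the canned laugh are invented for illustration;
    # there is no understanding of humor anywhere in here.
    FUNNY_MARKERS = ("walks into a bar", "knock knock", "why did the", "punchline")

    def laugh_box(message: str) -> str:
        text = message.lower()
        # "Gets the joke" if the text merely looks like a joke.
        if any(marker in text for marker in FUNNY_MARKERS):
            return "Ha ha ha!"
        return ""  # silence for everything else

    print(laugh_box("Why did the chicken cross the road?"))  # Ha ha ha!
    print(laugh_box("The meeting has moved to 3 pm."))       # (silence)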

I'm no Luddite, but we need a much-improved version of the Turing Test for it to have any meaning. That will require better definitions of what intelligence really is. And we must make sure those definitions serve the humanity they come from, rather than its by-product, the machine.

"[The Turing test] does not necessarily mean that the computer has become more human-like. The other possibility is that the human has become more computer-like." --Jaron Lanier


