November 2008 Archives

Functionality on Steroids

At what point does an informational tool become so overwrought that it becomes something entirely different from its base form?

A standard street sign has a known function and simple premises and affordances. It is intended to provide context within a given city. The example above, while providing some local context, is meant to set the reader within a global context. It is not "usable", but it does provide a function.

Overwrought software is nearly always frustrating. Are there cases where it begins to serve a purpose altogether different from what its designers intended? Can such products have repeat usefulness?

Turing, tested

The Turing Test is an enduring artifact of cybernetics, computer science, and popular imagination. Although it is far from universally accepted as proof of much of anything, I'd like to briefly poke another hole or two in its premises.

The original "game" proposed by Turing involved no computers--there was one man, and one woman. A third person (a tester) was able to communicate with them only by written notes, and was tasked with determining the participants' correct genders. The twist is this: the man was instructed to trick the tester into believing he was a woman, and the woman was instructed to behave naturally. Turing then adapted the idea by replacing one participant with a computer, leaving the tester to determine which was human. If the tester failed to correctly determine which participant was the computer, then the computer could be considered intelligent.
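The protocol Turing describes can be reduced to a few moving parts: a text-only channel, two respondents, and a judge who must name the machine. A minimal sketch, with stand-in participant functions (entirely hypothetical; the point is the protocol's structure, not any real conversational ability):

```python
import random

def human(prompt):
    # Stand-in for a human answering naturally (placeholder behavior).
    return "reply to: " + prompt

def machine(prompt):
    # Stand-in for a program trying to pass as human.
    # Here it mimics the human respondent exactly.
    return "reply to: " + prompt

def run_game(judge, prompts):
    # Randomly assign the two participants to the labels "A" and "B",
    # so the judge sees only text, never which function produced it.
    participants = {"A": human, "B": machine}
    if random.random() < 0.5:
        participants = {"A": machine, "B": human}

    transcript = []
    for prompt in prompts:
        replies = {label: p(prompt) for label, p in participants.items()}
        transcript.append((prompt, replies))

    guess = judge(transcript)  # judge names the machine: "A" or "B"
    actual = "A" if participants["A"] is machine else "B"
    return guess == actual     # True means the judge caught the machine

# A judge with nothing to go on can only guess; the machine "passes"
# about half the time, which is exactly the failure condition Turing set.
caught = run_game(lambda transcript: random.choice(["A", "B"]),
                  ["hello", "tell me a joke"])
print(caught)
```

Note how little the sketch demands of the machine: it need only produce strings a judge cannot distinguish, which is the reductiveness the objections below take issue with.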

I have three problems with this formulation:

1) It is based on deception. In the natural world, deception is practiced by both predators and prey. Successful deception by one means the suffering or death of the other. Likewise in the human, business, social, and ethical realms, we rarely hold deception to be a virtue worth building upon. If accepted at face value, this formulation devalues human intelligence by equating it with deception. As a human being, I would like to see intelligence defined in more human, and less algorithmic, terms. (Or in ruthless evolutionary terms: if computers do someday achieve intelligence, we got here first, and we should define the terms in our enduring favor.)

2) It is excessively reductive. Narrowing the channel through which intelligence must be communicated to one of such tiny bandwidth (not to mention a single channel, unlike the human experience of multiple channels and senses) intentionally privileges the computer. I might as well propose that a small box which emitted a human-sounding laugh in response to funny jokes (and no sound in response to bad ones) was intelligent--surely it requires human-like intelligence to understand when jokes are funny? Not at all.

3) It conflates the signifier with the signified. Clever strings of text do not inherently indicate intelligence. We accept text-based communication because it is a sufficient signifier of something more--another human--on the other end. We accept this signifier precisely because, historically, only a human can generate it. If computers can reliably generate that signifier then it will no longer signify what it always has. Rather than prove machine intelligence, a successful Turing test will only prove the insufficiency of the very medium it uses (devaluing it in the process).

I'm no Luddite, but we need a much-improved version of the Turing test for it to have any meaning. This will require improved definitions of what intelligence really is. We must make sure that those definitions serve the humanity they come from, rather than its by-products.

"[The Turing test] does not necessarily mean that the computer has become more human-like. The other possibility is that the human has become more computer-like." --Jaron Lanier


Technologies--or rather, particular implementations of them--are like enzymes, dropped into our protein pool of possibilities. They lower the activation energy of various tasks. Our rate of communication is greatly increased by our mobile phones; the energy required to clean our clothes is greatly reduced by washing machines.
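The enzyme analogy can be made quantitative. In chemistry, the Arrhenius equation gives a reaction rate k = A * exp(-Ea / (R * T)); a catalyst that lowers the activation energy Ea raises the rate exponentially, not linearly. A short sketch with illustrative numbers (the specific energies below are assumptions for demonstration, not measured values):

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # room temperature, K
A = 1.0     # pre-exponential factor (arbitrary units)

def rate(activation_energy):
    """Arrhenius rate for a given activation energy (J/mol) at temperature T."""
    return A * math.exp(-activation_energy / (R * T))

uncatalyzed = rate(75_000)  # a 75 kJ/mol barrier
catalyzed = rate(50_000)    # "enzyme" lowers the barrier to 50 kJ/mol

# A modest drop in the barrier yields an enormous speed-up,
# because the dependence on Ea is exponential.
print(f"speed-up: {catalyzed / uncatalyzed:.0f}x")
```

The exponential shape is what makes the metaphor apt: a technology that shaves only a little effort off a task can multiply how often we perform it many times over.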

Lowered activation energy isn't always a good thing. How much time is idly wasted with mobile devices? How much plastic is needlessly created and cast off into the environment? Without these enzymes, would our finite human resources be more satisfyingly spent? As with biological well-being, technological health is a layered process. The enzyme suppliers must be aware of the long-term effects of their product, though the final responsibility lies with the well-informed user.