Artificial Intelligence or Mechanically Mindless?

Part I: What you see is what you get, but isn’t what you think

There is an interesting exception to an already interesting exception concerning one of the largest domains of scientific research (what I’ll call the “applied sciences”, made up of fields like engineering, nanotechnology, computer science, etc.). The latter exception is that the applied sciences, perhaps the most widely publicized of the sciences, are the least lied about. The former goes by many names, but the most well-known and widely used is A.I. (artificial intelligence). Generally, exposure to developments in the sciences is limited to a tiny fraction of research from a small minority of scientific fields: research that is easy to sensationalize (quantum physics, neuroscience, etc.), research that is directly relevant to many people (e.g., research on health and the efficacy of medical treatments), or both (such as reports on the wondrous curative power of some alternative medicine). Often enough, the actual research behind the claims made in popular sources is already inaccurate before it’s sensationalized. Even good research, though, rarely makes it to sources for laypersons without significant distortion.
Engineering, computer science, and other mostly “applied” sciences are different in two ways. First, as just about everybody uses technology, we aren’t only exposed to developments in these fields; we use them. We can find the equivalent of popular science headlines in Google-sponsored ads or on a glossy page in any number of popular magazines. Whether it’s some new self-guided unmanned vehicle or a cutting-edge smartphone, there are far more numerous and diverse ways for us to see the products of the applied sciences because of their obvious and immediate applications.
Second, and perhaps even more interesting, is the relative lack of distortion. This is again a consequence of the “applied” part of applied sciences. If your speech recognition software doesn’t work, you’ll probably return it; likewise, if automated suggestions on sites like Netflix, Amazon, etc., are rarely helpful, you’ll tend to ignore them. There is a limit to how much scientists in fields ranging from nanotechnology to aerospace engineering can claim about what their work shows, simply because these claims are tested “directly” rather than via, e.g., some complicated statistical model. Ads for cutting-edge technology can’t rely on overly sensational claims for very long before consumers stop buying it (bad pun intended).
The interesting exception to this interesting exception is artificial intelligence (skim down for “the takeaway”). It’s not that claims here aren’t tested directly. The computer system Deep Blue beat the world champion chess legend Garry Kasparov over a decade ago. More recently, the DeepQA architecture was used to produce the Jeopardy! winner “Watson”, which not only won on national television but was the object of a massive flurry of popular science reports. Hardly a month goes by without some new development referencing the famous “Turing test”, popularly held (if just as popularly misunderstood) to be the gold standard by which we can test A.I. Given that claims about A.I. research made by researchers at some prestigious university or at a company like I.B.M. are put to the same tests as are those made elsewhere in the applied sciences, what’s the exception? More importantly, how can there be one?
The answer is that there is no other field in the applied sciences in which the claims made depend so heavily upon specific choices of terms and carefully phrased descriptions. For example, it’s certainly reasonable to say that “Watson” beat its opponents on Jeopardy!. It’s just as reasonable, but far more misleading, to say that

“[T]he Watson intelligent computer system from IBM Corp. was a triumph, and not just because it trounced two human champions in the TV game show “Jeopardy!…”1

It’s downright wrong to describe

“Watson, billed as ‘the smartest machine on Earth’…” as having the “ability to understand the meaning and context of human language”.2

This is at the heart of all the problematic claims made about this or that development in A.I. Modern technology makes it relatively easy for human minds to get machines to appear capable of human intelligence when they are really about as intelligent as slugs. I often refer to the ability of modern A.I. systems to learn as “sea slug learning” for two reasons: first, it’s the kind of sensational claim that gets attention yet is accurate; second, it’s very easy to find support for its accuracy. Machine learning and A.I. depend upon more general models of learning, and a major contribution to the theory of learning was based upon Dr. Eric Kandel’s Nobel Prize-winning study of the sea slug Aplysia californica. However, this kind of learning is often called “adaptation”, because it is qualitatively the same as the way a single neuron “adapts” to changes in electrical potential or a Venus flytrap reacts to something touching the inside of its leafy “mouth”. There is no awareness, no understanding, and no intelligence in any commonly used sense of the word.
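To make “sea slug learning” concrete, here is a minimal sketch of habituation, about the simplest form of this kind of “adaptation”: the response to a repeated stimulus weakens by a fixed, mechanical rule. (The numbers are purely illustrative, not data from Kandel’s work.)

```python
# A minimal sketch of habituation, the simplest kind of "adaptation":
# the response to a repeated stimulus decays by a fixed mechanical rule.
# The decay rate below is an arbitrary illustrative value, not measured data.

def habituate(n_stimuli, response=1.0, decay=0.7):
    """Return the response strength at each of n_stimuli repetitions."""
    responses = []
    for _ in range(n_stimuli):
        responses.append(round(response, 4))
        response *= decay  # each exposure mechanically weakens the response
    return responses

print(habituate(5))  # responses shrink geometrically: 1.0, 0.7, 0.49, ...
```

That’s the whole mechanism: no awareness required, just a number shrinking according to a rule.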
The best machine learning algorithms built into the most complex A.I. systems are still algorithms: they are procedures so precisely defined that they can mechanically process input the way a pocket calculator can (a sketch further below makes this concrete). Although a lot of methods in A.I. and machine learning come from studying how insects behave (“swarm intelligence”) or even genes (“evolutionary algorithms” and “gene expression”, among others), the only reason a computer can appear to be smarter than ants is that computers are very, very, very good at fast calculations and can store massive amounts of data. We’re still building calculators that can’t even “learn” as well as ants can and can’t understand anything the way even rats can. How we get machines to look like they understand anything is something I’ll address in Part II. Here, I’ll simply quote a popular science commentary on the “Watson”/Jeopardy! “spectacle” I happened upon:

“when machines are pitted against people, an unstated assertion is inevitably propagated: that human thinking and machine ‘intelligence’ are already known to be at least comparable. Of course, this is not true…Even if it had been stated (in fine print, as it were) that the task of competing at Jeopardy! shouldn’t be confused with complete mastery of human language, the extravaganza would have left the impression that scientists are on a rapid, inexorable march toward conquering language and meaning…
Much of what computer scientists were actually doing in this case, however, was teaching the software to identify statistical correlations in giant databases of text.”3
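Lanier’s point about “statistical correlations in giant databases of text” is easy to illustrate. Here is a toy sketch of my own (nothing like IBM’s actual DeepQA pipeline, which is vastly more elaborate): it “answers” a clue by nothing more than counting shared words between the clue and stored passages.

```python
# A toy illustration of "answering" via statistical correlation in text:
# score each stored passage by how many words it shares with the clue,
# then return the best match. No meaning or context is involved anywhere.

def best_match(clue, passages):
    """Pick the passage with the largest word overlap with the clue."""
    clue_words = set(clue.lower().split())
    return max(passages, key=lambda p: len(clue_words & set(p.lower().split())))

passages = [
    "Garry Kasparov was world chess champion until 2000",
    "Deep Blue was the chess computer that beat Kasparov in 1997",
    "Watson won Jeopardy against two human champions in 2011",
]
print(best_match("This computer beat the world chess champion in 1997", passages))
# -> the Deep Blue passage, found purely by counting overlapping words
```

Real systems use far more sophisticated statistics, but the difference is one of scale, not of kind.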

Although not all of machine learning/A.I. falls under statistical learning theory, all of it is the meaningless, mechanical manipulation of input. It exploits processing speed and storage, as well as human ingenuity, not only to design such technology but to take abstract ideas and strip away anything meaningful until they are, quite literally, purely mathematical computations.
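To see just how literally “purely mathematical” such learning is, consider the perceptron, one of the oldest learning algorithms (the sketch below is a standard textbook version, not code from any system discussed here). Every step of “learning” is plain arithmetic on a list of numbers.

```python
# The perceptron learning rule: "learning" is nothing but arithmetic.
# Each mistake triggers a mechanical nudge to a list of numbers (weights).

def train_perceptron(examples, n_features, epochs=10, lr=1.0):
    """examples: list of (feature_vector, label) pairs, labels in {-1, +1}."""
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # The "prediction" is just a weighted sum compared against zero.
            score = sum(w * x for w, x in zip(weights, features)) + bias
            predicted = 1 if score > 0 else -1
            if predicted != label:  # wrong? mechanically adjust the numbers
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Toy task: "learn" the logical AND of two binary inputs.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
print(train_perceptron(data, n_features=2))
```

Run it and you get a handful of numbers that happen to separate the cases; at no point does anything “understand” what AND means.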

The Takeaway:
Every time some claim is made about artificial intelligence, machine learning, “smart” robots, and other computer or computer-based “intelligent” systems, be aware of the phrasing and terms used. Words or phrases like “understands”, “knows”, “can guess”, “is able to tell”, etc., all imply conceptual processing, or the ability to understand concepts. No A.I. system is remotely close to this, and they are no closer than computers were 50 years ago. Likewise, be wary of sources that lean heavily on the grammatical role of “agent”, as in “Watson defeated…”, “Watson answered…”, and even relatively negative examples such as “Watson didn’t get all the questions correct”. Such descriptions are perhaps more misleading than the terms and phrases listed above because they are subtler: they exploit grammar to make the action described appear as if it were intended by the A.I. system in question.
Finally, look for specifics. The more a source describes exactly what some A.I. system does or did and (even more importantly) how, the more likely it is to be accurate and informative.

1 Diaz, J. (2011, February 26). The plan behind Watson: Winning hearts. Boston Globe.
2 Competing on ‘Jeopardy!’ next week: University of Massachusetts Amherst computer scientists helped to develop IBM’s ‘Watson’ computing system. (2011, February 11). US Fed News Service, Including US State News.
3 Lanier, J. (2011, May). It’s not a game. Technology Review, 114, 80–81.
