The Man Who Would Teach Machines to Think
Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we’ve lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
“It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”
Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think.
Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
In the early 1980s, the field was retrenching: funding for long-term “basic science” was drying up, and the focus was shifting to practical systems. Ambitious AI research had acquired a bad reputation. Wide-eyed overpromises were the norm, going back to the birth of the field in 1956 at the Dartmouth Summer Research Project, where the organizers—including the man who coined the term artificial intelligence, John McCarthy—declared that “if a carefully selected group of scientists work on it together for a summer,” they would make significant progress toward creating machines with one or more of the following abilities: the ability to use language; to form concepts; to solve problems now solvable only by humans; to improve themselves. McCarthy later recalled that they failed because “AI is harder than we thought.”
In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence—at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility.
Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent’s responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world’s best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
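The brute-force scheme described here can be sketched as a plain fixed-depth minimax search. The tiny number "game" and the evaluation function below are hypothetical stand-ins for chess, not Deep Blue's actual code, which also used specialized hardware and far deeper, pruned search.

```python
# Minimal sketch of brute-force game-tree search in the Deep Blue style:
# enumerate legal moves, recurse a fixed number of plies, score the leaf
# positions with a fast static evaluation function, and pick the move
# that leads to the best score.

def legal_moves(position):
    # Hypothetical game: a position is an integer; a move adds 1, 2, or 3.
    return [position + d for d in (1, 2, 3)]

def evaluate(position):
    # Fast static evaluation: here, arbitrarily prefer multiples of 4.
    return 1 if position % 4 == 0 else 0

def minimax(position, depth, maximizing):
    # Look ahead `depth` plies, alternating between our moves and the
    # opponent's assumed best replies.
    if depth == 0:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximizing) for m in legal_moves(position)]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth=6):
    # Consider each reply, the replies to those replies, and so on,
    # six plies down the line, then take the highest-scoring move.
    return max(legal_moves(position), key=lambda m: minimax(m, depth - 1, False))
```

The point of the sketch is what it lacks: there is no representation of strategy or understanding anywhere, only exhaustive enumeration plus a scoring rule, which is exactly the property Hofstadter objects to below.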
Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions—however impressive it might have been—was, in Hofstadter’s mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”
This, then, is the trillion-dollar question: Will the approach undergirding AI today—an approach that borrows little from the mind, that’s grounded instead in big data and big engineering—get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand? Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”
“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.”
In the years after the release of GEB, Hofstadter and AI went their separate ways. Today, if you were to pull AI: A Modern Approach off the shelf, you wouldn’t find Hofstadter’s name—not in more than 1,000 pages. Colleagues talk about him in the past tense. New fans of GEB, seeing when it was published, are surprised to find out its author is still alive.
Of course in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,” Hofstadter says. “That’s what we’re interested in—what is thinking?—and we don’t lose track of that question.”
“I mean, who knows?” he says. “Who knows what’ll happen. Maybe someday people will say, ‘Hofstadter already did this stuff and said this stuff and we’re just now discovering it.’ ”
Which sounds exactly like the self-soothing of the guy who lost. But Hofstadter has the kind of mind that tempts you to ask: What if the best ideas in artificial intelligence—“genuine artificial intelligence,” as Hofstadter now calls it, with apologies for the oxymoron—are yellowing in a drawer in Bloomington?
This is a very significant article, one that captures what is currently happening in the AI industry. Researchers are chasing results that can be sold. But that (so far) has nothing to do with intelligence. As far back as 1956, John McCarthy wrote that “if a carefully selected group of scientists work on it together for a summer,” they would make significant progress toward creating machines with one or more of the following abilities: the ability to use language; to form concepts; to solve problems now solvable only by humans; to improve themselves. McCarthy later recalled that they failed because “AI is harder than we thought.” We can see that some of these tasks are today solved only partially (the ability to use language, to improve themselves), while others are not solved at all: to form concepts; to solve problems now solvable only by humans.
In Gödel, Escher, Bach: An Eternal Golden Braid (Vintage Books, 1979), Hofstadter wrote about what artificial intelligence lacks: “My belief is that the explanation of emergent phenomena in our brains…for instance ideas, hopes, images, analogies, and finally consciousness and free will…are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back towards the bottom level and influences it, while at the same time being itself determined by the bottom level.” [Ibid, p. 709].
The above quote likens Strange Loops to a spiral in which, at ever-expanding higher levels, there exists the capability to suck up a continuing input of information from the bottom, more basic level. And, as previously mentioned, these Strange Loops can also be compared to a feedback cycle of continual enfoldment and unfoldment.
But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates. Hofstadter’s understanding is treated more fully on this site in the article ‘Consciousness in the Cosmos’ and at http://www.bizcharts.com/stoa_del_sol/conscious/conscious2.html
This can be understood the way Jeff Hawkins describes it in his book ‘On Intelligence’: the lower layers of the cortical columns record direct sensory experience, while the upper layers form generalizations, essential outlines, principles, and understandings, which consciousness designates with simple symbols: schematic images of the perceived objects and processes. For example, every wheel a person has observed is linked in consciousness to the symbol of a circle, and every person seen or met is linked to an abstract human figure, one that most often walks on legs, acts with its hands, talks, looks, hears, feels, and so on. In these higher layers, schematic representations of event algorithms also take shape: models of processes. A person’s entire understanding of the world rests on such models. (It should be added that Hofstadter uses a different terminology, strange loops. How far it coincides with Hawkins’s ideas cited here is for the reader to judge.)
All of today’s robots work with models of the external world put in by their programmers. Today we still do not know how to ‘prompt’ a robot, how to teach it to build models of the external world on its own. Take, for example, the addition of two numbers from 0 to 9. If the program is rewarded for a correct result (this is called supervised learning), it quickly ‘learns’ the correct answers ‘by heart’, as a table of correspondences. But if the program is asked to add two numbers outside its prior experience, say 8 and 11, it cannot add them, because no ‘model of the external world’, no addition algorithm, has formed inside it. (This experiment was carried out at the AI laboratory in Ogre.) The same applies to the performance of other tasks. We know that small children unconsciously build thousands of models of the external world (to the brain, the individual’s own body is also external world), which let them move successfully through space, cooperate with the people around them, and predict the reactions of their environment. At school we then purposefully teach them to build models of the external world: to understand and use various formulas of mathematics and physics, to understand and apply the laws of nature.
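The addition experiment described above can be sketched roughly as follows. This is a toy reconstruction, not the Ogre laboratory’s actual code, and the function names are hypothetical: a “learner” rewarded for correct answers on digit pairs simply memorizes a lookup table, with no model of addition behind it.

```python
# Toy reconstruction of the experiment described above: a "learner"
# rewarded for correct sums memorizes a (question -> answer) table.
# It has no addition algorithm, so it fails outside its experience.

def train_by_reward(examples):
    table = {}
    for a, b, answer in examples:
        table[(a, b)] = answer   # the rewarded (correct) answer is stored
    return table

# Training experience: every pair of digits from 0 to 9.
training = [(a, b, a + b) for a in range(10) for b in range(10)]
table = train_by_reward(training)

def answer(table, a, b):
    # Returns the memorized sum, or None for pairs never seen in training.
    return table.get((a, b))

print(answer(table, 3, 4))    # seen in training: prints 7
print(answer(table, 8, 11))   # outside its experience: prints None
```

A child who has grasped the addition algorithm generalizes to 8 + 11 immediately; the table-learner cannot, which is the point of the experiment.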
Hofstadter’s main point is that machines will be intelligent only when they learn to build models of the external world in their own minds and to use them to predict the external world’s reactions. Jeff Hawkins (pdf: http://failiem.lv/u/mueqlnt) calls this thinking. I.V.