Chatbots are boring. They aren’t AI. [Pharyngula]


You know I’m a bit sour on the whole artificial intelligence thing. It’s not that I think natural intelligences are anything more than natural constructions, or that I think building a machine that thinks is impossible — it’s that most of the stories from AI researchers sound like jokes. Jon Ronson takes a tour of the state of the art in chatbots, which is entertaining and revealing.


Chatbots are kind of the lowest of the low, the over-hyped fruit decaying at the base of the tree. They aren’t even particularly interesting. What you’ve got is basically a program that tries to parse spoken language, and then picks lines from a script that sort of correspond to whatever the interlocutor is talking about. There is no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.
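If you want to see how little is going on, here's a toy sketch of the whole mechanism in Python. The keywords and canned lines are invented for illustration (no actual bot's script is being quoted), but this is the general shape of an ELIZA-style responder:

```python
import random

# A toy ELIZA-style responder. No model of the world, no "thinking":
# just keyword triggers mapped to pre-written lines. All rules and
# phrasing below are invented for illustration.
SCRIPT = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "dream": ["Why do you think you had that dream?"],
    "robot": ["Do machines worry you?"],
}

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    for keyword, lines in SCRIPT.items():
        if keyword in words:
            # All the "intelligence": pick a scripted line that
            # sort of corresponds to what the speaker said.
            return random.choice(lines)
    return "Please, go on."  # stall when nothing matches

print(respond("I had a strange dream last night."))
```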


It’s most obvious when these chatbots hit the wall of something they can’t interpret: all of a sudden you get a flurry of excuses. An abrupt change of subject, ‘I’m just a 15-year-old boy’, ‘sorry, I missed that, I was daydreaming’. All lies, all more revealing of the literary skills of the programmer (usually pretty low) than of any attempt by the machine to model the world around it.
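And the excuses are no deeper than the rest of it. In a sketch like the one above, the dodge is just one more canned list; these lines paraphrase the deflections quoted here and are, again, purely illustrative:

```python
import random

# The "flurry of excuses" branch: when no rule fires, the bot doesn't
# reason about its confusion. It just emits a pre-written deflection.
EXCUSES = [
    "Sorry, I missed that, I was daydreaming.",
    "I'm just a 15-year-old boy, you know.",
    "Anyway! Tell me about your favorite movie.",  # abrupt subject change
]

def deflect() -> str:
    return random.choice(EXCUSES)

print(deflect())
```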


Which would be OK if the investigators recognized that they were just spawning more bastard children of Eliza, but no…some of their rationalizations are delusional.



David Hanson is a believer in the tipping-point theory of robot consciousness. Right now, he says, Zeno is "still a long way from human-level intellect, like one to two decades away, at a crude guess. He learns in ways crudely analogous to a child. He maps new facts into a dense network of associations and then treats these as theories that are strengthened or weakened by experience." Hanson’s plan, he says, is to keep piling more and more information into Zeno until, hopefully, "he may awaken—gaining autonomous, creative, self-reinventing consciousness. At this point, the intelligence will light ‘on fire.’ He may start to evolve spontaneously and unpredictably, producing surprising results, totally self-determined…. We keep tinkering in the quest for the right software formula to light that fire."



Aargh, no. Programming in associations is not how consciousness is going to arise. What you need to work on is a general mechanism for making associations and rules. The model has to be something like a baby. Have you noticed that babies do not immediately start parroting their parents’ speech and reciting grammatically correct sentences? They flail about, they’re surprised when they bump some object and it moves, they notice that suckling makes their tummy full, and they begin to construct mental models about how the world works. I’ll be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech.


Then maybe I’ll believe this speculation about an emergent consciousness. Minds aren’t going to be produced by a sufficiently large info dump, but by developing general heuristics for interpreting complex information.
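To be concrete about the distinction: here's a deliberately naive sketch of a learner that forms its own associations from experience instead of having them typed in. It's nothing but co-occurrence counting, invented for illustration, and nobody should mistake it for a theory of mind, but the point is that the associations come from observation, not from a script:

```python
from collections import defaultdict
from itertools import combinations

# Nothing here is pre-scripted: associations are formed from whatever
# the learner observes, and strengthened by repetition. Deliberately
# naive co-occurrence counting, for illustration only.
class AssociationLearner:
    def __init__(self):
        self.weights = defaultdict(int)

    def observe(self, events):
        # Strengthen the link between every pair of co-occurring events.
        for a, b in combinations(sorted(set(events)), 2):
            self.weights[(a, b)] += 1

    def associates(self, event):
        # Everything linked to this event, strongest first.
        pairs = ((b if a == event else a, w)
                 for (a, b), w in self.weights.items() if event in (a, b))
        return sorted(pairs, key=lambda p: -p[1])

baby = AssociationLearner()
baby.observe(["suckle", "full tummy"])
baby.observe(["bump object", "object moves"])
baby.observe(["suckle", "full tummy"])
print(baby.associates("suckle"))  # [('full tummy', 2)]
```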


