Facebook had to pull the plug on rogue AI bots Alice and Bob
California, August 7: The internet has been afire with dramatically doom-laden headlines proclaiming that Facebook had to pull the plug on rogue artificial intelligence (AI) bots which developed a language of their own, in which they were carrying on inhumanly private conversations. They read like teasers for a Terminator globocalypse, which is the tool-using primate’s worst nightmare — to be superseded by its own creation, the machine.
The reality is less dramatic but more exciting. Bots are autonomous agents originally programmed to perform housekeeping tasks on communications channels, or to try to pass the Turing test. They can sign you on, kick you out if you misbehave, keep chatrooms open when no one’s home, provide information and masquerade as humans. On the Internet Relay Chat system, the Eggdrop bot was the all-time favourite.
The grandchildren of such janitor bots are embedded in modern search agents and messengers and have AI capabilities. They talk with users rather like a human would, respond to routine queries and offer advice. If your phone seems to know your mind and pulls up the right stuff without having to be asked, much of the credit must go to bots working behind the scenes. Like humans, bots can learn from experience and are destined for greater things in real-world situations.
Alice and Bob, the Facebook bots which have gained infamy because of an unanticipated deficiency in programming logic, are part of an experiment to build negotiating machines. They were simply figuring out how to share a set of objects, such as balls, so that neither party felt cheated. While bargaining, Alice made initially incomprehensible statements like, “Balls have zero to me to me to me to me to me to me to me…” And the scary headlines followed.
Learning is driven by incentives, as teachers and parents know. In this case, the reward system of the exercise was defined — a better share and mutual satisfaction, the essence of bargaining. But there was no incentive for the bots to keep communicating in English, which is a notoriously illogical language. So they slipped into a simplified, more efficient Newspeak-like argot, which is not quite English, but not unintelligible as advertised.
Alice’s statement, which has been misread as an assertion of machine independence, only indicated dismay at being short-changed (“have zero”), and each “to me” stood for an object she demanded. She was doing precisely the job she was programmed for: Bargaining as hard as a shopkeeper in Istanbul’s Grand Bazaar. And if she found that the Queen’s English got in the way, she was not alone. Many peoples of the former colonies felt that the master tongue impeded communications in their communities.
Professional human negotiators have reason to be anxious if the giants of Silicon Valley are investing in bargaining programmes. Terrorists and summiteers should fear superannuation too, for in hostage situations and international deals, they may find themselves facing an inscrutable, implacable machine that’s way smarter than Deep Blue, the IBM supercomputer which challenged chess champion Garry Kasparov in 1996.
The urge to develop private languages is a very human trait. Before bourses were computerised, stockbrokers on the trading floor communicated bids with hand signals which were unintelligible to others. For centuries, law enforcement has been baffled by “thieves’ cants”, the artificial languages of convicts which are gibberish to their jailers. Among 20th century English-speaking criminals, the nonsense word “arkitnay” meant, “Shut up, someone is eavesdropping”. In India, William Henry Sleeman studied Ramaseeana, the cant of Thuggee, and published a vocabulary in 1836. One of Tom Stoppard’s least performed but most intriguing plays is Dogg’s Hamlet, in which schoolchildren perform Shakespeare in their language, Dogg. It was somewhat like a substitution cipher. For instance, “afternoons” meant “hello” in Dogg.
Never mind the doomsayers, what’s interesting about Alice and Bob is that in creating a language, they have betrayed a very human trait, which they were not explicitly programmed to exhibit. Sixty years ago, the first bots were written to explore precisely this question: Could machines be programmed to behave like humans? Could they pass the Turing test? In a 1950 article titled “Computing Machinery and Intelligence”, Turing essentially suggested that if a machine’s communications appear to be human, then the machine should be regarded as human.
The first natural language processing bot to step out of the lab and gain mass popularity was Eliza, created in 1964 by Joseph Weizenbaum at MIT. He wrote it to demonstrate that human-machine textual communications could not rise to the level of a human conversation. On the contrary, Eliza raised public expectations that it would pass the Turing test. Thirty years later, even greater hopes were inspired by Julia, created by Lycos founder Michael Mauldin to compete for the Loebner Prize, the Holy Grail of Turing testing. But beyond a point, Julia’s chats meandered into random musings about the properties of dogs and cats (you can chat with a modern version of her at scratch.mit.edu/projects/2208608).
Now, bots with creative language skills have seized the popular imagination, rekindling anxieties about a robocalypse. But a development at Google’s AI lab is actually more exciting. In September 2016, Google went live with its Neural Machine Translation System, which applies deep learning to language. Two months later, they pushed the envelope: If a machine learned to translate, say, between Hindi and German, and between Hindi and English, could it translate between English and German without the bridge language of Hindi? It could, suggesting that the neural network had learnt something fundamental about how the mind links concepts and grammars to forge languages. The “small, yellow, leechlike” Babel fish, the living universal translator dreamed up by Douglas Adams in the late Seventies, is now hovering near your ear.
Of course, since there is no long-term prognosis for AI, the warnings of people who should know better, like Elon Musk and Stephen Hawking, should be given due attention. There should be regulation, despite the protestations of proponents of the freedom to innovate, like Mark Zuckerberg. And a general consensus should develop, agreeing on lines which must not be crossed, as in the case of interventions in the human genome. But it cannot be denied that the experiments at Facebook and Google are advancing the original purpose of AI, which was to model and understand aspects of the human mind. The lurid media stories they attract are passing sensations. The next day, they are suitable for wrapping fish.