Like many people nowadays, I do not talk on my iPhone as much as talk to it. That’s because it runs a program called Siri (Speech Interpretation and Recognition Interface) that works as an intelligent personal assistant and knowledge navigator. It’s useful, in a way. If I ask it for “weather in London today”, it’ll present an hour-by-hour weather forecast. Tell it to “phone home” and it’ll make a decent effort to find the relevant number. Ask it to “text James” and it will come back with: “What do you want to say to James?” Not exactly Socratic dialogue, but it has its uses.
Ask Siri: “What’s the meaning of life?”, however, and it loses its nerve. “Life,” it replies, “is a principle or force that is considered to underlie the distinctive quality of animate beings. I guess that includes me.” Ten points for that last sentence. But the question: “What should I do with my life?” really stumps it. “Interesting question” is all it can offer, which suggests that we haven’t really moved much beyond Joseph Weizenbaum’s famous Eliza program, which was created in the MIT Artificial Intelligence Laboratory between 1964 and 1966. Eliza in fact operated by using a script called Doctor, a simulation of a Rogerian psychotherapist. Thus, if asked: “What should I do with my life?”, it might respond: “Have you asked such questions before?” And so on ad infinitum.
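For the curious, Eliza’s trick can be sketched in a few lines of Python. This is a toy illustration of the general pattern-matching idea, not Weizenbaum’s actual Doctor script; the rules and replies below are invented for the example.

```python
import re

# A toy Rogerian responder in the spirit of Weizenbaum's Eliza.
# These patterns and canned replies are illustrative inventions,
# not the original Doctor script.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bWhat should I (.*)\?", re.IGNORECASE),
     "Have you asked such questions before?"),
]

def respond(utterance: str) -> str:
    """Match the utterance against each rule and reflect it back
    as a question; fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("What should I do with my life?"))
# Prints: Have you asked such questions before?
```

No understanding is involved anywhere: the program never represents what “life” means, it simply reshuffles the user’s own words.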
Eliza, of course, had no intelligence, artificial or otherwise. That didn’t prevent some people from allegedly becoming addicted to her, but it meant that she posed no existential threat to humanity. The same cannot be said for contemporary manifestations of AI, as represented by the combination of massive processing power, big data, machine learning, advanced robotics and neural networks. Some latter-day luminaries – Stephen Hawking, Elon Musk and Bill Gates, to name just three – have taken to worrying about the prospect of superintelligent machines that might, so to speak, have minds of their own – and could therefore regard humans as disposable life forms.
Given global warming, the planet may well have reached the same conclusion about humans some time before superintelligent machines walk the Earth, and so existential worries may turn out to be moot. But let’s suppose that we survive long enough to develop such machines. How will we communicate with them?
Easy-peasy, say the AI evangelists: we’ll just use natural language, ie we’ll talk to them just as we talk to one another. At this point, a whirring noise can be heard: it’s Ludwig Wittgenstein rotating at 5,000rpm in his grave. “What can be said at all can be said clearly,” he wrote in the Tractatus Logico-Philosophicus, “and what we cannot talk about we must pass over in silence.” And therein lies the problem. Because often what really matters to us humans is stuff that we have difficulty articulating.
What’s brought this to mind is an extraordinary interview with Stephen Wolfram that’s just appeared on John Brockman’s Edge.org site. The term “genius” is often overused, but I think it’s merited in Wolfram’s case. Those of us who bear the scars from school and university years spent wrestling with advanced maths are forever in his debt, because he invented Mathematica, a computer program that takes much of the pain out of solving equations, graphing complex functions and other arcane tasks. But he’s also worked in computer science and mathematical physics and is the founder of the WolframAlpha “computational knowledge engine”, which is one of the wonders of the online world.
As befits someone who has built such powerful tools for augmenting human capabilities, Wolfram doesn’t seem too concerned about the threat of superintelligent machines. They may be able to do all kinds of things that humans cannot, he thinks, but there is one area where we are unquestionably unique – we have notions of purposes and goals. What machines do is to help us achieve those goals and “that’s what we can increasingly automate. We’ve been automating it for thousands of years. We will succeed in having very good automation of those goals. I’ve spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.”
So as machines become more intelligent, and our requirements of them become more demanding, how will we communicate our desires to them? Wolfram’s conclusion is that “it’s a mixture. Human natural language is good up to a point and has evolved to describe what we typically encounter in the world. Things that exist from nature, things that we’ve chosen to build in the world – these are things which human natural language has evolved to describe. But there’s a lot that exists out there in the world for which human natural language doesn’t have descriptions yet.” He’s right. Come back Ludwig, all is forgiven. How is it that you can never find a philosopher when you need one?
guardian.co.uk © Guardian News & Media Limited 2010