There’s a problem with AI. In the movies, AI is something you have conversations with. It’s intelligent enough to wax lyrical about ‘C-beams glittering in the dark’ while looking like Rutger Hauer. It never misinterprets what you’re saying. It can understand Northern accents.
True, the AI then tends to betray you, develop self-awareness and annihilate the human race, but it’s hard not to feel a crushing sense of disappointment that the current state of the AI art extends to telling your Echo Dot to play Mumford & Sons.
Yet, perhaps the problem isn’t that AI is not yet (and may never be) at a level which requires Harrison Ford to gun it down when it gets out of hand. Perhaps the issue is in the terminology. We say ‘AI’ when what we mean is ‘machine learning’. And that’s a different beast altogether.
Think of AI as a three-stage story. At its crudest, it is the chatbot that pops up during your online banking. Whilst it has the veneer of AI, there’s nothing intelligent about it: it’s simply a series of keywords matched to default responses. Some of them are quite sophisticated, but if you’ve ever used one and felt you were dealing with an actual human, you really haven’t been paying attention.
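To make that concrete, here’s a minimal sketch of the mechanism being described: keyword matching against canned replies. The keywords and responses below are invented for illustration; real banking bots are more elaborate, but the principle is the same.

```python
# A minimal keyword-matching "chatbot" of the kind described above.
# No understanding, just string matching against canned responses.

RESPONSES = {
    "balance": "Your current balance is available under 'Accounts'.",
    "card": "To report a lost or stolen card, call us on 0800 000 0000.",
    "mortgage": "Our mortgage team is available Monday to Friday, 9am to 5pm.",
}

DEFAULT = "Sorry, I didn't understand that. Try asking about your balance or card."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return DEFAULT

print(reply("I've lost my card!"))            # matches "card"
print(reply("What's the meaning of life?"))   # falls through to DEFAULT
```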
Next is machine learning, a form of AI on rails – there can be no breaking the programming, but within those limits it can learn, adapt and grow. Its popular origins lie in the chess challenges of the 1970s and ’80s, when the pinnacle of machine intelligence was for a computer to beat a grandmaster.
Today, what we refer to as AI is invariably machine learning. It is Twitter selecting the most relevant tweets for you – not because of intelligence, but because it has assessed what you’ve engaged with previously and serves up more of the same (a mechanism sketched after the examples below).
It’s the photo app that has learned to recognise the facial profiles of your family and can slot new photos into family albums.
And it is the ‘intelligent’ personal assistant that knows what you usually order from the Chinese takeaway, so asking Siri, Alexa and the like to order your meal via Just Eat is quick and simple.
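That ‘more of the same’ behaviour is, at its heart, just counting. The sketch below makes the timeline example concrete: score each candidate tweet by how often you’ve engaged with its topics before, and surface the highest scorers. The topics and engagement counts are made up; a real system uses learned models over thousands of signals, but the underlying logic is the same.

```python
# An illustrative sketch of engagement-based ranking. All data here is
# hypothetical: the point is that "relevance" is a tally of past
# behaviour, not intelligence.

from collections import Counter

# How often this user has engaged with tweets on each topic (invented numbers).
engagement_history = Counter({"football": 12, "politics": 5, "cooking": 1})

candidate_tweets = [
    {"id": 1, "topics": ["football", "politics"]},
    {"id": 2, "topics": ["cooking"]},
    {"id": 3, "topics": ["astronomy"]},  # never engaged with, so it scores zero
]

def score(tweet: dict) -> int:
    """Score a tweet by the user's past engagement with its topics."""
    return sum(engagement_history[topic] for topic in tweet["topics"])

ranked = sorted(candidate_tweets, key=score, reverse=True)
print([t["id"] for t in ranked])  # [1, 2, 3]: more of the same, served first
```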
Mighty clever though this all is, it isn’t intelligence. For proof, you need only cast your mind back to the launch of Microsoft’s @TayandYou bot on Twitter in 2016. “The more you chat…the smarter she gets” was the billing, which acted as an open invitation for the trolls of the world to teach Tay to say things you wouldn’t want your mum to hear.
Then there’s the recent YouTube controversy, which saw algorithms mistakenly placing ads alongside hate-filled, violent and extremist videos.
And Google’s new Clips camera does away with anything so mundane as choosing when to take a picture yourself. Instead, it snaps, films and files the things it thinks you’d like to see. Google, anticipating a flurry of privacy issues, has built in numerous safeguards, but that hasn’t stopped the camera being described as “invasive and creepy”, despite its impressive tech.
And that’s the crucial difference between machine learning – our current, second-stage level of AI – and true AI.
Machine learning can do virtually anything except, it would seem, make an accurate value call about whether it should tweet that racist statement, film that child, or place that ad appropriately.
True AI can make that call. It can decide what’s right. It can be left to make its own decisions, and be trusted to make the right ones (without bringing about the end of humanity).
We’re not there yet. We may never be there. We may not want to be there. But as long as we continue to refer to machine learning as AI, there’s a danger we’re raising expectations too far, of attributing awareness and understanding to systems that are still locked into their algorithms.
So for now, let’s stick to using the term ‘machine learning’ and leave the AI to the movies.
Do you want to have your say in the AI debate? Please complete our BIMA AI survey.