The AI revolution: why we need the right frameworks to avoid disaster

by Kai Grunwitz

28 August 2019

The fourth industrial revolution is no longer a topic for economists and future-gazers. It’s happening before our very eyes. This new era of digitization promises to change how we live, work, interact with each other – even how we’re governed. It’s no exaggeration to say it will transform human society as we know it. AI is at the heart of this revolution, already deeply embedded in our working and private lives. But there’s danger ahead if we let this technology grow unfettered.

Governments and industry need to act now to put the right political, economic and ethical frameworks in place to manage the growth of AI – ensuring we use it responsibly and in a way that gets mass buy-in from the populace.

The revolution starts here

The benefits of AI are well understood by now. In analysing, evaluating and making decisions based on vast data sets, it’s already adding value in countless scenarios. Think of the speech recognition algorithms that power AI assistants like Alexa and Siri; the image recognition used by connected cars to identify potential hazards; or even the machine learning employed by e-commerce sites to improve the shopping experience through personalized recommendations.

It’s all AI in one form or another and it’s already having a major impact on our personal lives. But it’s set to have an even bigger impact on our working lives – and this is where we could start to see resistance.

Some AI risk factors

Although the Skynet-like doomsday scenarios painted by Hollywood, and even Elon Musk, make for great headlines, there are greater risks facing us in the nearer term.

The first is based on mankind’s existential fear of one day being replaced by machines. Many people are rightly concerned that their jobs could become obsolete as machines get smarter. There’s no holding back progress, but we can manage these fears to avoid a widespread backlash against the technology. It’s vital that governments act now to develop sociopolitical initiatives that educate the populace about the social and economic benefits of AI-driven progress, while focusing on ways to upskill those whose roles are most at risk. Nations that fail to do so may well be overtaken by those that do.

The second concern is our over-reliance as a society on AI-based technologies. When critical decisions are taken by AI bots, can we be sure they’re the right decisions? Machines are seen as dispassionate, but even they can have prejudices. After all, the data their algorithms are built on is effectively a large collection of decisions made in the past by fallible humans. A further challenge arises if AI applications in turn develop new iterations without human input, as with Google’s AutoML project.

Thinking clearly

When starting AI projects, it’s important to bear in mind some fundamental questions. What do I want to achieve? Which methods best support my goals? What impact will the technology have on my organization and employees? What risks might be created by these changes? How much decision-making power do I hand over to the AI bot? And at what point should we expect humans to intervene?

We’re at the start of an exciting journey here; one which could result in some fantastic wins for us all. But first we need to develop the right social, ethical and even political frameworks for AI, so that the technology wonderland of today doesn’t become the dystopia of tomorrow.

Kai Grunwitz

Country Managing Director, NTT Ltd.