Ethical AI: Keeping It Human


There’s no doubt that artificial intelligence (AI) is changing the world. It’s also fuelling a growing debate over the impact these new technologies are having now, and might have in the future. This conversation is taking place at the frontline of AI development and is being amplified by some of the biggest names in business and academia. To its most passionate proponents, the benefits of AI for humanity are seemingly boundless: ending hunger, poverty and disease, and solving climate change are among its promises.

But there’s a flip side. AI’s disruptive power is raising a host of ethical, legal and moral issues, including privacy concerns, racial and other social biases, potential job destruction and social discord, not to mention worries over the reliability, security and accountability of AI systems. The chorus of cautionary voices calling for international regulation of many advanced AI technologies is growing louder.


While there are differing views concerning the degree of risk involved, the expert consensus is that AI will continue to advance at an exponential pace, far faster than most laypeople realise. But if we get AI right, its benefits are huge. Now is the time to join the global AI conversation, to ensure that we use this powerful new technology for the benefit of everyone. Let’s start talking!


Framing the debate

AI technologies in use today are raising fundamental questions that go well beyond the usual issues of product safety. The cautionary voices are coming from the world’s leading tech companies, scientists, researchers and entrepreneurs, as well as from watchdog groups and anti-technologists.

While it’s true that disruption caused by technological innovation is nothing new, this “fourth industrial revolution” is happening much faster than anyone expected, fuelling both utopian arguments that AI will propel humanity to new heights and dystopian warnings that it will enslave us after a robot revolution.

Headline-grabbing advances in deep-learning systems, which power semi-autonomous cars, facial recognition technologies and chatbots, and which have mastered the ancient Chinese game of Go, are not only exciting in their own right; they are clear indicators that AI is moving forward at an increasingly rapid pace.


Responsibility is a human thing

The speed and magnitude of change in the AI world, coupled with its numerous risks and “known unknowns”, are pushing the ethical debate into the fast lane. Indeed, some of the world’s leading AI inventors and investors are calling for regulation of, and even bans on, things like autonomous weapons.

While “killer robots” grab the headlines, there are many other areas of concern. Privacy is one of them. Intelligent systems can increasingly not only monitor our every online move but also follow us in the real world via facial recognition technology; in the absence of legislation regulating this intrusive activity, red flags are starting to go up.

In July, Microsoft president Brad Smith published a letter on the company’s website urging the US Congress to enact laws to regulate and restrict the use of facial recognition technologies. As Smith puts it: “It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse.”

Smith goes on to list a series of concerns: the possibility that law enforcement could decide whom to track, detain and prosecute based on faulty or biased systems; the risk that governments could use the technology to stifle free speech and dissent; and the prospect of retailers and other companies tracking our every move and sharing this information with other AI systems, all without our knowledge or consent.

Indeed, many companies are realising the need to address AI risks, joining the scientists, technologists, ethicists, legal experts, public policy experts and human-rights advocates in asking the hard questions about how to keep next-generation technologies ethical and human-focused.


Leaders of the pack

“Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work.”

This quote, on the DeepMind website, is from the team behind some of the most astounding advances in machine learning using artificial neural networks. It is a sentiment echoed by a number of high-profile scientists, academics and business leaders. Bill Gates, Elon Musk, and the late Stephen Hawking have all called for concerted international action to regulate AI technologies in decidedly pro-human ways. They all see the imperative of safeguarding individual rights and ensuring that the transition to an AI-powered future goes smoothly.

These thought leaders are doing more than talking. They are getting involved in a range of institutes, think tanks and NGOs that are delving deep into the ethical and social issues that AI raises to find practical solutions.

The list of such organisations is large and high-level. Partnership on AI is a tech industry consortium of leading AI developers, including Amazon, Facebook, Google and fellow Alphabet company DeepMind, Microsoft, Apple and IBM. Its mission is to establish best practices for ethical AI systems and to educate the public about AI and its impacts.

The Future of Life Institute, whose scientific advisory board has included Elon Musk and the late Stephen Hawking, focuses on existential risks to humanity posed by AI. AI Now, a research institute based at New York University, focuses on four key areas: bias and inclusion, labour and automation, rights and liberties, and safety and critical infrastructure. DeepMind Ethics & Society, a dedicated unit within AI pioneer DeepMind, funds external research into AI risk areas such as privacy, transparency and fairness; economic impact; governance and accountability; risk management; and morality and values.

AI is a global phenomenon with global implications, so it is good news that AI is also on the agenda of leading international and multinational organisations. The World Economic Forum, for example, is among the global groups helping to lead the debate on the risks of AI and is lending its voice to calls for international regulation of AI at the United Nations.

The IEEE Standards Association, which develops global standards across a broad range of industries, is also a leading voice calling for AI regulation at the global level. In 2017 it established the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, whose stated mission is worth quoting in its entirety: “To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” It also publishes a regularly updated report, “Ethically Aligned Design”, which may be the definitive global treatise on the issues surrounding AI and ethics.


Europe: thought leader in AI

While Europe as a whole has lagged somewhat behind North America, Japan and China in developing an AI strategy, this is set to change soon.

In April this year, a group of leading European scientists published an open letter calling for the establishment of ELLIS, the European Lab for Learning and Intelligent Systems. This EU-funded research institute would have labs in several EU member states and would drive not only cutting-edge AI research, but also the debate on keeping AI ethical.

France recently unveiled its own AI strategy with ethics as a centrepiece, and ELLIS would build on and strengthen this approach. There is every reason to believe that ELLIS will start taking shape in the near future, and one of its stated goals is to stem the rapid brain drain of European talent to the United States and Asia.

It is expected that the first step toward ELLIS will be AI collaboration between France and Germany, with other EU members joining later. Each local lab is expected to be a EUR 100 million facility with an annual budget of around EUR 30 million. Once established, ELLIS is expected to be a major magnet for private investment in AI technologies.


Join the conversation

Companies across the board are leaping on the AI bandwagon for all the right reasons. For many businesses, the smart deployment of AI is crucial to ensure competitiveness in this fast-changing landscape. AI and other advanced technologies have already started to revolutionise the workplace across diverse industries, and this technological sea change is still in its infancy.

A recent McKinsey study predicts that the total value AI will add to the global economy across 19 industries and nine business functions in the coming decade will be in the range of USD 3.5 trillion to 5.8 trillion a year. Recent PwC research supports this outlook, predicting that AI could raise global GDP by as much as 14% by 2030, adding some USD 15.7 trillion to the world economy.
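As a rough sanity check, the two PwC figures are mutually consistent (assuming both describe the same 2030 scenario): if a 14% uplift is worth USD 15.7 trillion, the implied baseline is 15.7 ÷ 0.14 ≈ USD 112 trillion of global GDP by 2030.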

Companies that prepare for and understand AI are going to gain significant advantages over those that do not. Issues of legal accountability and the opacity of AI systems’ decision-making processes are crucial for companies to consider before rolling out AI investments. The lack of precedent in this area reinforces the need for companies to have a strong AI rulebook and to ensure compliance with the new national and international regulations that are almost certainly on the way.

There are also more mundane limitations to AI: it can still handle only a small share of the work-related tasks that humans currently perform. Only operations where available technology can be deployed at scale merit serious consideration; otherwise, the costs of implementing AI outweigh its benefits, particularly once things like employee severance and retraining are factored in. Depending on the industry, AI could also have an unforeseen and adverse impact on a company’s brand and underlying business. It’s best to look (and think) before you leap.

AI is a huge subject. It is easy to get lost in its many euphoric promises and in the many dystopian scenarios and existential quandaries it presents. But one thing is clear: AI advances taking place today are already challenging us to rethink some of our basic premises about things like the nature of work, the goals of society, and what it means to be a human being.

If you believe the experts, these changes are coming fast. There has never been a better time to get smart about AI.


Which kind of brave new world?

The most popular cinematic conceptions of a world populated with futuristic AI systems tend toward dystopian landscapes. The grander utopian vision, by contrast, sees an AI that is benign toward humanity, carries strong ethical dimensions and propels us further along our evolutionary journey.

While both visions are certainly controversial, the good news is that the future is in our hands. The world has woken up to AI, and all it takes is the right outlook combined with smart decisions today. A human-machine partnership is possible. The world of Star Trek is an example of the kind of AI-powered future we should be aiming for; the one popularised in “The Terminator” is also possible, and it is up to the people of today to determine which outcome we get.