Reflections on AI from Davos 2020 - World Economic Forum

The AI narrative is moving on from questions to actions, as leaders seek practical solutions to manage the risks and ethics of AI

Simon Greenman
Towards Data Science


Some of the members of the World Economic Forum’s Global AI Council who met in Davos, Jan 22nd 2020

What do you say to a Nobel Prize winner when discussing how to make AI explainable in a deep neural network with over one billion parameters? This was my first trip to Davos, and it coincided with the World Economic Forum’s (WEF) celebration of its 50th annual meeting. The setting was picture perfect: an idyllic mountain town framed by snow-capped mountains under crystal clear blue skies. The world’s elite were out in force in their designer sunglasses. I spotted senior government ministers, billionaires, tech titans and rock stars all within an hour. And here I was talking to the Nobel Prize winner Joseph Stiglitz, and making sure that our boutique AI management consultancy, Best Practice AI, was represented at the highest level. We discussed AI explainability, a phrase on everyone’s lips. While Professor Stiglitz approaches the issue from an academic point of view, I deal with it from a different perspective: bringing practical tools to boards who are grappling with AI ethics and how to evidence the management of AI risks.

Economics Nobel Prize winner Joseph Stiglitz with author Simon Greenman

World Economic Forum’s Global AI Council

I am a member of the WEF’s Global AI Council. I attended a Council meeting chaired by Dr Kai-Fu Lee, former CEO of Google China, investor and author of AI Superpowers, and Brad Smith, the President of Microsoft. The Council is made up of senior government representatives, global institutions such as the United Nations and UNICEF, industry bodies such as the IEEE, tech giants such as IBM, Salesforce and Accenture, leading AI academics, the brilliant Will.i.am, and a sprinkling of AI start-ups.

Stakeholder Capitalism

The theme this year at Davos was stakeholder capitalism: pushing corporations to look beyond a single metric of success (shareholder return) and to factor customers, employees, partners and society as a whole into the calculus. This has come at a time when we all face clear issues of wealth inequality, political instability, and the challenges of global sustainability. While US President Trump couldn’t resist taking a dig at 17-year-old Greta Thunberg in his Davos speech, there is no doubt that sustainability and climate change were top of the agenda for the world’s most powerful. It is an incredible gathering of those who truly control this planet. For all the griping about the hypocrisy of a record number of private jets parked at the airport, I couldn’t help but feel a sense of optimism that those who can change the world are coalescing around the right agenda.

Underlying much of the discussion was the role that technology can play in helping to address many of the United Nations’ Sustainable Development Goals (SDGs), including climate action, good health, quality education, gender equality, and reduced inequality. Sundar Pichai, CEO of Alphabet and Google, pointed to the importance of AI technology and said:

“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.”

While maybe a bit hyperbolic, there is no doubt AI will be woven into the fabric of society impacting nations, governments, institutions, companies and people. The global spend on AI is expected to hit $52 billion in the next three years and will help double the GDP growth rates of major economies in the next fifteen years, according to Accenture.

Balancing the opportunities of AI with the risk

But as Pichai also said “there is no question” AI needs to be regulated. As with the introduction of any new technology the opportunities need to be balanced with the risks. For example, facial recognition technology is now a commodity that stands on the cusp of becoming ubiquitous. It offers great benefits for society. It can make us more secure by identifying known criminals and terrorists. It can streamline our busy lives by speeding up identification at the airport. But it can also put at risk our right to privacy. It can be used by bad actors to single out persons or ethnic groups for persecution. Governments and non-profits have a role to play in identifying and managing these risks. Businesses have a role to play. Academics have a role to play. And civic leaders have a role to play. And this needs to be done on a multilateral basis. We need to empower AI leadership globally to help address these risks.

Much of the narrative arc of AI has been dominated by fear. The existential fear of AI. The fear for our jobs. The fear that AI is fundamentally unjust, lacking ethics and riddled with intrinsic bias. The fear that AI will mean the loss of our privacy as facial recognition becomes ubiquitous. This narrative was repeated in Davos across numerous sessions. But instead of everyone simply asking questions, the discussion has finally turned to potential solutions.

The world has moved on in the past year. AI is also (finally) moving on, beyond experimentation into practical use. Barry O’Byrne, CEO of HSBC Global Commercial Banking, talked about how the company has over 300 AI use cases with a focus on improving the customer journey. The move to scale up AI is forcing us all to address the question of how best to manage AI in the real world.

Empowering AI Leadership Board Toolkit

To this end, the WEF launched its Empowering AI Leadership Board Toolkit here in Davos. As Kay Firth-Butterfield, Head of AI and Machine Learning at the WEF, said:

“our research found that many executives and investors do not understand the full scope of what AI can do for them and what parameters they can set to ensure the use of the technology is ethical and responsible.”

The Toolkit is designed to help corporate boards understand the value of AI and to ensure it is used responsibly, with practical tools for risk management in their governance and compliance practices. We at Best Practice AI were key contributors to this Toolkit, along with IBM, Accenture, BBVA and others. The Toolkit is available here for free.

AI Governance Framework

The WEF also launched an updated AI Governance Framework that provides an implementation and self-assessment guide for organisations. The implementation of this framework is being led by the Singaporean government. Best Practice AI was privileged to have been invited to provide input into this work.

An AI Healthcheck and Compliance Framework

We also announced a partnership in Davos with the law firm Simmons & Simmons, and Jacob Turner, a barrister at Fountain Court Chambers and author of Robot Rules: Regulating Artificial Intelligence, to launch one of the most comprehensive AI healthcheck and compliance services. This will help organisations ensure the responsible and trustworthy use of AI. More information can be found here.

Best Practice AI partnered with Simmons & Simmons LLP and Jacob Turner of Fountain Court Chambers to offer a Healthcheck and Compliance Framework

Workday and ethical AI guidelines

I liked what Workday, the California-based finance, HR and planning SaaS company, published with the WEF on how your company can build ethical AI. As they say:

1. Define what ‘AI ethics’ means.

2. Build ethical AI into the product development and release framework.

3. Create cross-functional groups of experts.

4. Bring customer collaboration into the design, development and deployment of responsible AI.

5. Take a lifecycle approach to bias in machine learning.

6. Be transparent.

7. Empower your employees to design responsible products.

8. Share what you know and learn from others in the industry.

The WEF’s Global AI Council met in Davos, continuing its focus on how to balance the opportunities of AI with the management of its risks. We returned to the importance of AI ethics and the need to move beyond high-level principles to practical policy and implementation.

All in all, the question now is how we make AI real and trustworthy at scale. I look forward to seeing how the AI narrative arc progresses over the next year.

About Simon Greenman

Simon Greenman is a partner at Best Practice AI — an AI Management Consultancy that helps companies create competitive advantage with AI. Simon is on the World Economic Forum’s Global AI Council; an AI Expert in Residence at Seedcamp; and Chairs the Harvard Business School Alumni Angels of London. He has twenty years of leadership of digital transformations across Europe and the US. Please get in touch by emailing him directly or find him on LinkedIn or Twitter or follow him on Medium.
