Towards Responsible AI (Part 3)

Top-down and end-to-end governance for the responsible use of AI

Three Lines of Defense for AI Governance

Anand S. Rao · Towards Data Science · 9 min read · Jan 10, 2021


Source: Photo by Anton Polidovets on Unsplash

Responsible AI is a broad topic covering multiple dimensions of the socio-technical system called Artificial Intelligence. We call AI a socio-technical system because it encompasses not just the technology but also how humans interact with it. In the first part of this series we looked at AI risks from five dimensions. In the second part we examined the ten principles of Responsible AI for corporates.

In this article we dive into AI governance: what do we really mean by governance? What does AI governance entail? What does the governance process look like, and how should companies go about setting it up?

What is Governance?

The dictionary defines governance as “the act or process of governing or overseeing the control and direction of something (such as a country or an organization)”. Governance is distinct from management: management makes decisions, and governance oversees how those decisions are made.

Applying this definition to AI, we arrive at the following definition of AI governance for companies:

AI Governance is the process of overseeing the responsible application of AI and its impact on all the relevant stakeholders (e.g., individuals, social groups, countries).

Data, analytics, and AI groups within organizations typically follow an AI model development process that starts with the decision to build (or at least experiment with) AI, followed by designing, building, deploying, and monitoring the AI model. This is the core management function of the AI group. AI governance is about who oversees this core AI function, which decisions or actions they oversee, and how that oversight works.

It is useful to consider the following questions when deciding whether something falls under the purview of AI governance or remains a pure management function (a minimal sketch of this triage follows the list):

  1. Is it big? — The bigger the societal or customer impact of the AI, the more natural it is for AI governance to oversee it. Size here may be determined by the overall revenues or profits it generates, the costs it saves, or the number of customers or employees it impacts.
  2. Is it strategic? — The more strategic the decision, e.g., AI principles to be adopted, code of conduct, ethical policies, or leading AI practices, the better it is for the AI governance group to be involved.
  3. Is a red flag flying? — The riskier the application of AI (see AI risks for a categorization of some of these risks), the more relevant it is for AI governance. Key ‘red flag’ AI applications include facial recognition, AI for recruitment, and bias in AI-based decisions and recommendations.
  4. Is a watchdog watching? — If the AI falls under existing regulation or the remit of independent self-governance professional groups, it is better to have oversight of the primary development and approval process.
  5. Is it novel? — If there is no precedent to follow, there may be additional risks that have not yet been considered.
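To make the screen concrete, here is a minimal Python sketch of how such a triage might be encoded. The field names, the impact threshold, and the red-flag list are illustrative assumptions, not part of any standard:

```python
from dataclasses import dataclass

# Red-flag application areas named above; extend to fit your own risk taxonomy.
RED_FLAG_DOMAINS = {"facial_recognition", "recruitment", "credit_decisions"}

@dataclass
class AIUseCase:
    name: str
    customers_impacted: int   # 1. Is it big?
    is_strategic: bool        # 2. Is it strategic?
    domain: str               # 3. Is a red flag flying?
    regulated: bool           # 4. Is a watchdog watching?
    has_precedent: bool       # 5. Is it novel?

def needs_governance_oversight(uc: AIUseCase, impact_threshold: int = 10_000) -> bool:
    """Escalate to AI governance if any of the five questions is answered 'yes'."""
    return any([
        uc.customers_impacted >= impact_threshold,
        uc.is_strategic,
        uc.domain in RED_FLAG_DOMAINS,
        uc.regulated,
        not uc.has_precedent,
    ])

# Example: a resume screener touches a red-flag domain, so it is escalated.
screener = AIUseCase("resume-screener", 5_000, False, "recruitment", False, True)
print(needs_governance_oversight(screener))  # True
```

In practice the answers will be judgment calls recorded by the second line rather than hard-coded booleans; the point is that the screen can be made explicit and auditable.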

These key governance questions have been adapted from the six key questions for boards of directors and modified for AI governance.

The Three Lines Model

Governance and risk management are not new in the corporate world. There are well-accepted standards, guidelines, and regulations that ensure the smooth functioning of corporations. The three lines of defense model was developed in 2008–10 by the Federation of European Risk Management Associations (FERMA) and the European Confederation of Institutes of Internal Auditing (ECIIA) as guidance on the 8th EU Company Law Directive, Article 41. It was adopted by the Institute of Internal Auditors (IIA) in 2013 with their position paper on The Three Lines of Defense in Effective Risk Management and Control. Since then it has become the standard way of assessing and managing risks and performing governance.

In June 2020, the IIA updated the guidance with its position paper on The IIA’s Three Lines Model. The document describes six key principles of governance, the key roles in the three lines model, the relationships between those roles, and how to apply the model. It clearly articulates the responsibilities of management, the internal audit function, and the governing body (see figure below).

Figure 1: Three Lines Model (Source: The IIA’s Three Lines Model)

The three lines model has been used extensively in many organizations and has been applied to a variety of risks, including technology risk and model risk in financial services organizations. Credit risk, market risk, and operational risk models in banks have been routinely governed based on these three lines of defense. Rather than invent an entirely new governance structure, process, and set of roles and responsibilities, we have adapted this model for AI governance.

  • First line of defense — Creators, Executors, and Operations: Those specifying, designing, building, deploying, and operating data, AI/ML models, automations, and software. The first line also includes the operations team involved in operating and monitoring the data, software, and models.
  • Second line of defense — Managers, Supervisors, and Quality Assurance: Those assessing the risks of data, AI/ML models, automation, and software, as well as those responsible for developing the strategy. Ongoing monitoring is also reviewed by the second line of defense. In addition, the second line is responsible for checking that the first line has built its systems in alignment with expected practice.
  • Third line of defense — Auditors and Ethicists: Those overseeing the other two lines of defense to ensure compliance with laws, policies, and strategies of the organization, as well as the ethical and responsible use of technology.
  • Ethics Board: The ethics board is a diverse and inclusive group of executives and staff within the organization. Some organizations may also choose to appoint external members to the board. (A sketch of how release sign-offs might route across the three lines follows this list.)
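As a rough illustration of how such sign-offs might route, here is a minimal Python sketch. The specific rule that red-flag applications require third-line review is an assumption for illustration, not an IIA prescription:

```python
from enum import Enum, auto

class Line(Enum):
    FIRST = auto()    # creators, executors, and operations
    SECOND = auto()   # managers, supervisors, and quality assurance
    THIRD = auto()    # auditors and ethicists

def release_approved(signoffs: dict[Line, bool], red_flag: bool) -> bool:
    """Illustrative routing: every release needs first- and second-line
    sign-off; red-flag applications additionally need third-line review."""
    required = {Line.FIRST, Line.SECOND} | ({Line.THIRD} if red_flag else set())
    return all(signoffs.get(line, False) for line in required)

# A facial-recognition feature (red flag) stalls without third-line review.
print(release_approved({Line.FIRST: True, Line.SECOND: True}, red_flag=True))  # False
```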

In addition to these roles within the organization, companies will also be interacting with external auditors, certifiers, or other assurance providers as well as regulators. The diagram below outlines the details of the different roles and the key responsibilities of each role.

Figure 2: Three Lines Model for AI Governance (Source: PwC Analysis)

End-to-end Governance

In addition to the top-down governance prescribed by the ‘three lines model’, we also need end-to-end governance, from cradle to grave, or from inception to retirement, of models and AI-embedded systems. The starting point for governance is not when model building starts, but much earlier in the lifecycle.

It should really start from strategy: the corporate or business strategy for the company overall, and specifically for the group responsible for data, automation, analytics, and AI strategy (see my article on this Unbeatable Quartet). Every company, especially one that consumes or generates data and insights, should have a policy on ethics and internal policies and procedures for adapting, adopting, and practicing ethical behavior. An integral component of this stage is also understanding any regulations as well as best practices or guidelines from industry bodies or professional associations. For example, the Data Science Code of Professional Conduct by the Data Science Association and the Oxford-Munich Code of Conduct for Professional Data Science are great starting points for a code of conduct for data scientists. Also worth tracking are the standards being developed by IEEE, especially The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

The next stage is planning. The quartet of data, automation, analytics, and AI should be part of the planning phase. As we have discussed elsewhere, software and AI models need to be treated very differently (see my article Data Scientists are from Mars and Software Developers are from Venus). The experimental nature of AI models requires a portfolio approach. At any point in time, organizations that are mature in the adoption and rollout of AI have a portfolio of models at different stages of evolution: conception, experimentation, deployment, production, or retirement. The ROI needs to be measured with respect to the overall portfolio and, depending on the overall strategy, adjusted for the right mix of business use cases, efficiency-versus-effectiveness initiatives, and so on (see the article Ten human abilities and four intelligences to exploit human-centered AI for more details; a sketch of such a portfolio view follows this paragraph). The convergence of data, software, and AI models requires careful attention to the delivery approach. Waterfall and agile software development methodologies need to be modified and interleaved for delivering AI-embedded software, or Software 2.0 (see the article Time to combine agile programming and agile data science). The specific delivery approach will determine the key metrics that should be reported and monitored for oversight.
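A minimal sketch of such a portfolio view, in Python; the stage names come from the paragraph above, while the record fields and the figures in the example are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    CONCEPTION = "conception"
    EXPERIMENTATION = "experimentation"
    DEPLOYMENT = "deployment"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class ModelRecord:
    name: str
    stage: Stage
    invested: float   # cumulative cost to date
    returned: float   # value attributed to date (often zero for early-stage work)

def portfolio_roi(portfolio: list[ModelRecord]) -> float:
    """ROI across the whole portfolio: early-stage experiments are expected
    to drag the number down; production models should pull it up."""
    invested = sum(m.invested for m in portfolio)
    returned = sum(m.returned for m in portfolio)
    return (returned - invested) / invested if invested else 0.0

portfolio = [
    ModelRecord("churn-v2", Stage.PRODUCTION, 200_000, 650_000),
    ModelRecord("pricing-poc", Stage.EXPERIMENTATION, 80_000, 0),
    ModelRecord("forecast-v1", Stage.RETIRED, 150_000, 400_000),
]
print(f"Portfolio ROI: {portfolio_roi(portfolio):.0%}")  # 144%
```

The design point is that no single experiment is judged on its own ROI; governance reviews the mix and rebalances it against strategy.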

The next stage of end-to-end governance is the overall ecosystem. By this we mean the ecosystem in which the AI models will be embedded, as well as the context in which they will be used by others within and outside the company. The broader social impact of the AI being released by the company should be evaluated here; IEEE’s well-being metrics are a strong candidate for this purpose. The ecosystem view must also cover the context in which the AI-embedded software will be used. For example, whether the system provides automated, assisted, augmented, or autonomous intelligence will determine the level of governance and escalation required (a sketch of such a mapping follows this paragraph). Change management for the people who will be using the AI systems is a critical element for the successful adoption and continuous improvement of the combined human-AI ecosystem. Finally, given the availability of open-source and vendor-based AI tools and techniques, a good understanding of the minimum procurement standards for sourcing AI models is essential. Although targeted at public-sector organizations, the World Economic Forum’s AI Procurement in a Box, co-developed with the UK’s Office for AI, is a good starting point.
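One way to make the escalation explicit is a simple mapping from autonomy level to governance tier. The four levels come from the paragraph above; the specific tier assignments below are an illustrative assumption, not a standard:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    AUTOMATED = 1   # fixed rules; humans design and own the process
    ASSISTED = 2    # AI informs a human decision
    AUGMENTED = 3   # human and AI decide together; human can override
    AUTONOMOUS = 4  # AI acts without a human in the loop

# Illustrative escalation policy: higher autonomy, heavier governance.
GOVERNANCE_TIER = {
    AutonomyLevel.AUTOMATED: "first-line review",
    AutonomyLevel.ASSISTED: "first-line review + second-line risk assessment",
    AutonomyLevel.AUGMENTED: "second-line sign-off before each release",
    AutonomyLevel.AUTONOMOUS: "second-line sign-off + third-line / ethics board review",
}

print(GOVERNANCE_TIER[AutonomyLevel.AUTONOMOUS])
```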

Next we get to the core model development and deployment stages. At this point the governance focus shifts from the broader set of stakeholders to the development team. This team is made up not just of the technical roles (data scientists, data engineers, technologists, product managers, and the operations team) but also business domain experts and ethicists. The governance oversight spans the phases of value scoping (business and data understanding, solution design), value discovery (data extraction, pre-processing, and model building), value delivery (model deployment, transition, and execution), and finally value stewardship (ongoing monitoring, evaluation, and check-in) (see my article Model Lifecycle: From ideas to Value for more details on the underlying nine-step process). A sketch of how these phases might be gated follows.
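A minimal sketch of stage gating across the four value phases; the phase names come from the article above, while the specific gate checks are hypothetical examples of artifacts a reviewer might require:

```python
from enum import Enum

class Phase(Enum):
    VALUE_SCOPING = "value scoping"          # business & data understanding, solution design
    VALUE_DISCOVERY = "value discovery"      # data extraction, pre-processing, model building
    VALUE_DELIVERY = "value delivery"        # deployment, transition, execution
    VALUE_STEWARDSHIP = "value stewardship"  # ongoing monitoring, evaluation & check-in

# Illustrative gate checks; a real second line would maintain its own list.
GATE_CHECKS = {
    Phase.VALUE_SCOPING: ["problem statement signed off",
                          "data sources assessed for consent and bias"],
    Phase.VALUE_DISCOVERY: ["pre-processing documented",
                            "performance and fairness metrics reported"],
    Phase.VALUE_DELIVERY: ["deployment runbook approved",
                           "rollback plan tested"],
    Phase.VALUE_STEWARDSHIP: ["drift monitors live",
                              "periodic re-validation scheduled"],
}

def gate_passed(phase: Phase, completed: set[str]) -> bool:
    """A phase closes only when every required artifact for it is complete."""
    return all(check in completed for check in GATE_CHECKS[phase])

print(gate_passed(Phase.VALUE_DELIVERY, {"deployment runbook approved"}))  # False
```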

The final stage in the end-to-end governance process is the operate-and-monitor stage. Governance here operates at two levels. At the model level, the question is to what extent the data, decisions, usage, algorithms, and the context in which the AI models are used are changing; a sketch of one common drift check follows this paragraph. At the portfolio level, the task is monitoring the value delivered by the AI models or AI-embedded software and retiring models or initiating new ones, going back either to the value-scoping stage of model development or all the way back to the overall strategy of the company. In addition to this operational and strategic oversight, a compliance and internal audit assessment needs to be performed periodically; as discussed in the previous section, these fall under the second and third lines of defense.
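At the model level, “is the data changing?” is often operationalized with a drift statistic. The article does not prescribe one, so as an example here is the widely used population stability index (PSI), with the common rule of thumb (an assumption, not a regulation) that PSI above 0.2 warrants investigation:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample (e.g., training data) and live data.
    Larger values mean the live distribution has drifted further."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range live values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Avoid division by zero / log of zero in sparse bins.
    e_frac, a_frac = np.clip(e_frac, 1e-6, None), np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(0.5, 1.0, 10_000)            # shifted mean: drift has occurred
psi = population_stability_index(train, live)
if psi > 0.2:                                  # common threshold (assumption)
    print(f"PSI={psi:.2f}: escalate to second line for review")
```

Similar checks can be run on model outputs and usage patterns, with breaches routed to the second line per the escalation policy sketched earlier.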

Figure 3: End-to-end Governance for Responsible AI (Source: PwC Analysis)

Conclusion

In this article we have described the top-down and end-to-end governance required to manage and mitigate the key AI risks outlined in Part 1 of this series and to adhere to the ten principles of Responsible AI outlined in Part 2. In future articles we will delve into the details of the nine-step process and the different phases of end-to-end governance, and examine the key artifacts or deliverables that the three lines need to produce to ensure the responsible and beneficial use of AI.

Authors: Anand S. Rao and Ilana Golbin

Related Content

  1. Part 1 — Five Views of AI Risk: Understanding the darker side of AI
  2. Part 2 — Ten Principles of Responsible AI for Corporates
