
Gain trust by addressing the responsible AI gaps

Results from the Global Responsible AI Survey

Photo by Brett Jordan on Unsplash

Over the past couple of years, AI risks and the ethical considerations surrounding AI have come to the forefront. With the increased use of AI during the pandemic for contact tracing, workforce safety and planning, demand forecasting, and managing supply chain disruption, a number of risks around the privacy, bias, safety, robustness, and explainability of AI models have emerged.

AI risk identification, assessment, and mitigation vary by level of AI maturity, company size, industry sector, and country of domicile. PwC’s Global Responsible AI survey of over 1,000 C-level executives, conducted in November 2020, reveals a number of insights into the risks of AI and how companies are assessing, managing, and mitigating them. The companies surveyed spanned a number of industry sectors, including financial services, technology, energy, utilities, and health. They varied in size: small companies with less than $500 million in annual revenues, medium-sized companies with revenues between $500 million and $1 billion, and large companies with over $1 billion in revenues. Nearly 49% were large companies, 29% were medium-sized, and the rest were small companies. We also segmented these companies by AI maturity level, determined by the number of enterprise-wide AI applications deployed. This resulted in three clusters: AI Leaders, which have fully embraced AI (26%); AI Experimenters, which are in the early stages of implementation (58%); and AI Laggards, which have not yet implemented AI (16%). Not surprisingly, large companies (over $1 billion in annual revenues) made up nearly 65% of the AI Leaders.
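As a rough illustration of this kind of segmentation, the sketch below bins respondents into revenue bands and maturity clusters using pandas. The column names, sample data, and the cut points for the maturity clusters are illustrative assumptions; the survey itself does not publish the exact thresholds used to separate Leaders, Experimenters, and Laggards.

```python
import pandas as pd

# Hypothetical survey extract: one row per respondent. Column names and
# cluster thresholds are illustrative assumptions, not PwC's actual coding.
df = pd.DataFrame({
    "company": ["A", "B", "C", "D", "E"],
    "annual_revenue_musd": [300, 750, 2400, 5200, 450],
    "enterprise_ai_apps": [0, 2, 6, 9, 1],
})

# Revenue bands used in the article: small (< $500M), medium ($500M-$1B), large (> $1B)
df["size_band"] = pd.cut(
    df["annual_revenue_musd"],
    bins=[0, 500, 1000, float("inf")],
    labels=["small", "medium", "large"],
)

# Maturity clusters keyed off the number of enterprise-wide AI applications
# deployed; the cut points (0, 1-4, 5+) are placeholders.
df["maturity"] = pd.cut(
    df["enterprise_ai_apps"],
    bins=[-1, 0, 4, float("inf")],
    labels=["AI Laggard", "AI Experimenter", "AI Leader"],
)

# Share of respondents in each maturity cluster, and the size mix within each cluster
print(df["maturity"].value_counts(normalize=True))
print(pd.crosstab(df["maturity"], df["size_band"], normalize="index"))
```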

AI Ethics is still not on the horizon for a significant number of companies

Nearly 33% of respondents either do not take ethical considerations into account at all (4%) or take them into account only to a limited degree (29%). This figure rises to 37% for AI Experimenters and 44% for AI Laggards, and in Japan it reaches 58% for AI Experimenters. Small companies ($500 million or less in revenues) tend to pay little or no attention to ethical considerations: among these companies, 46% of AI Experimenters and 52% of AI Laggards fall into this category.

Figure 1: Concern about AI ethics by Maturity level (Source: PwC Responsible AI Survey)

Bias, Safety, and Explainability of AI still rank high

Algorithmic bias is still a primary concern for many organizations. It is a primary focus for 36% of respondents, and 56% say they address it adequately. As companies mature in their AI adoption, they make algorithmic bias a primary focus: nearly 60% of AI Leaders treat it as such.

Figure 2: Focus on Bias by Maturity level (Source: PwC Responsible AI Survey)

Safety of AI systems is a primary concern for 28% of respondents, an important concern for 37%, and somewhat of a concern for 31%. As companies mature, however, safety becomes more of a concern: 50% of AI Leaders rate it as highly important.

Figure 3: Focus on Safety by Maturity level (Source: PwC Responsible AI Survey)

In our survey, 27% of companies said they could definitely explain or justify the decisions made by a model, 41% could explain them reasonably well, and 30% could explain them to some degree. AI Leaders were more proficient on this front, with 48% definitely having the ability to explain model decisions.

Figure 4: Focus on Explainability by Maturity level (Source: PwC Responsible AI Survey)

Companies are using a variety of approaches to assess and manage AI risk

Among the survey respondents, nearly 52% neither have an ethical AI framework nor have incorporated ethical principles into day-to-day operations. This number rises to 68% for AI Laggards and 66% for companies with less than $500 million in revenues.

AI codes of conduct (63% of respondents), AI impact assessments (52%), AI ethical boards (43%), and AI ethics training (37%) are some of the mechanisms companies are using to manage AI risks. While these percentages are relatively consistent across countries, the UK shows lower adoption of AI ethical boards (32%) and AI ethics training (28%). As companies mature in their AI adoption, they appear to embrace ethics training and ethical boards more widely: 60% of AI Leaders have AI ethical boards (compared to 43% overall) and 47% have ethics training (compared to 37% overall).

Figure 5: Approaches to managing AI risk by Maturity level (Source: PwC Responsible AI Survey)

AI risk identification and accountability are still in their infancy

Only 12% of companies have fully embedded and automated their AI risk management and internal controls; 26% have an enterprise-wide approach that has been standardized and communicated; the rest take a siloed or non-standardized approach to AI risk management. Among AI Leaders, nearly 29% have fully embedded and automated risk management and controls, and 38% have an enterprise-wide standardized approach.

Figure 6: AI Risk Identification by Maturity level (Source: PwC Responsible AI Survey)

When it comes to transparency and accountability, only 19% of companies have a formal, documented process that is reported to all stakeholders; 29% have a formal process only when a specific event occurs; the rest have an informal process or no clearly defined process at all.

Figure 7: AI Accountability by Maturity Level (Source: PwC Responsible AI Survey)

It is clear from the above survey results that there is still a significant gap between AI Leaders and AI Laggards with respect to

  • their concern for issues like bias, safety, and explainability;
  • their approaches to managing AI risk using AI code of conduct, AI ethical boards, training, and AI impact assessments; and
  • their ability to identify AI risks and hold people accountable.

We call this the Responsible AI gap. Companies need to bridge this gap to gain the trust of their customers, employees, and other stakeholders. Failing to do so is likely to hurt the return on investment (ROI) of their AI initiatives and prevent them from realizing the desired benefits and value.
