
AI adoption accelerated in 2020, but instances of widely publicized AI problems eclipsed those of previous years. From TikTok and OpenAI to Twitter and Zoom, companies increasingly faced PR issues stemming from a lack of transparency and rigor in their AI development, all of which ultimately eroded customer trust. The swift reaction from users prompted quick fixes and an increased awareness of the importance of accountability in AI.
2021 is shaping up to be the year when companies embrace Responsible AI practices. Here are the top predictions for Responsible AI:
Prediction 1: New Federal and State Regulations for Algorithmic and AI Accountability
Building AI leadership and accountability has been a bipartisan issue: Democratic lawmakers proposed the Algorithmic Accountability Act in 2019, while the White House recently signed an executive order backing trustworthy AI. Earlier in the year, the Pentagon adopted its Ethical AI principles. With China vying for global AI dominance and AI issues constantly in the news, the incoming administration will be compelled to approve the first federal regulations on AI accountability. Media sources and staffers have already indicated that this is on the immediate agenda. While the EU and other countries have passed lightweight AI regulations, a well-drafted American regulation could turbocharge AI's adoption and set the stage for a decade of growth.
Prediction 2: Censorship, Ethical AI Whistleblowers and Chief Ethics Officers
In 2020, AI leaders pushed the limits of AI for significant breakthroughs, from GPT-3 to a solution to the decades-old protein folding problem. However, unfettered growth without consideration for its broader human impact is causing unforeseen ramifications. Unless companies take a more human-centric approach to developing AI, the teams working on these breakthroughs will increasingly run into moral dilemmas and vocalize the shortcomings of their advances. Companies will restrict this information to avoid PR fallout while they attempt to mitigate the problem, leading to more AI whistleblowers. Market innovators will hire Chief Ethics Officers to address this problem proactively.
Prediction 3: ML teams will adopt bias testing
National protests against racial inequity shone a powerful spotlight on bias in society in 2020. Against this backdrop, the multitude of AI bias issues at Twitter, Zoom and other companies made ML teams increasingly aware of AI's potential to perpetuate, amplify or even introduce bias. No company introduces bias into its products deliberately; it is the result of inadequate tools and processes. Consumers are overwhelmingly demanding change. Fortunately, 2020 saw the release of several open source ML fairness tools, such as Microsoft's Fairlearn. With stronger ecosystem support, ML teams will embrace bias testing as part of their production development, even for use cases with no regulatory requirement. Adoption will start small, with the initial focus likely on bias assessment rather than mitigation.
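To make the idea concrete, here is a minimal sketch of what a bias assessment could look like with Fairlearn. The dataset, the sensitive attribute and the model below are hypothetical placeholders; only the Fairlearn and scikit-learn calls themselves are real.

```python
# A minimal bias-assessment sketch using Fairlearn's MetricFrame.
# The data, group labels and model here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # synthetic features
y = rng.integers(0, 2, size=1000)              # synthetic binary labels
group = rng.choice(["A", "B"], size=1000)      # hypothetical sensitive attribute

model = RandomForestClassifier(random_state=0).fit(X, y)
pred = model.predict(X)

# Per-group accuracy: large gaps between groups flag potential bias.
mf = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                 sensitive_features=group)
print(mf.by_group)      # accuracy broken out by group
print(mf.difference())  # worst-case gap between groups

# Demographic parity: difference in selection rate between groups.
print(demographic_parity_difference(y, pred, sensitive_features=group))
```

Even a report this small, run as part of the release pipeline, turns bias assessment into a repeatable test rather than a one-off audit.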
Prediction 4: Monitoring becomes a critical part of MLOps
The pandemic caused a dramatic shift in consumer behavior that degraded models and caught teams off guard. A lack of real-time operational visibility into production models delayed team responses and hurt underlying business metrics. As AI accelerates from labs into the real world, business leaders now see the need for visibility into deployed AI systems, to ensure that key metrics are continuously monitored and that no inadvertent liability, such as bias, is being introduced. Just as in DevOps, ML teams will therefore establish monitoring as a critical part of MLOps in 2021.
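As an illustration of what such monitoring can look like in practice, below is a minimal sketch of one widely used drift check, the population stability index (PSI), comparing a feature's training-time distribution against live traffic. The data, bin count and alert thresholds are illustrative assumptions, not prescriptions from this article.

```python
# A minimal drift-monitoring sketch: population stability index (PSI)
# between a training-time distribution and live production traffic.
# All data here is synthetic; thresholds are common rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI over quantile bins of the reference (training) distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # keep live values in range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)            # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 50_000)  # distribution at training time
live_scores = rng.normal(0.4, 1.2, 5_000)    # shifted distribution in production

drift = psi(train_scores, live_scores)
# Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 alert.
print(f"PSI = {drift:.3f}")
```

In production, the same check would run on a schedule against each model input and output, with alerts wired to thresholds like those above.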
Prediction 5: ML model validation spreads beyond banking
The Federal Reserve and the OCC mandated validation of banking models in the aftermath of the 2008 financial crisis. With AI models replacing quantitative ones, banks are applying the same rigor and process to ensure AI models are sound. After years in research and development, Explainable AI products are finally mature enough for broad adoption in financial services, giving rise to new roles like AI Validator within AI Governance teams. With a model validation step, banks have been able to limit inadvertent AI issues and increase their AI-driven top line. In 2021, other verticals, like insurance, retail, healthcare and recruiting, will adopt aspects of this model validation process, not only to ensure that their models are robust but also to bring ML transparency to their partner teams.
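To give a flavor of the explainability side of validation, here is a minimal sketch of a feature-attribution check using the open source SHAP library. The model, the features and the data are hypothetical; a real validation workflow would run checks like this against candidate models before sign-off.

```python
# A minimal sketch of an explainability check a model validation step
# might include, using the open source SHAP library.
# The model, features and data here are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic outcomes

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer produces per-feature attributions for each prediction,
# which a validator can review for unexpected drivers of model output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute attribution across the dataset.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature_{i}: mean |SHAP| = {importance[i]:.3f}")
```

A validator can compare the ranked attributions against domain expectations; a model leaning heavily on a feature it should not rely on gets flagged before deployment.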
2021 is shaping up to be a seminal year for Responsible AI, when companies finally begin to adopt key practices with broader ecosystem, government and end user support. This will inject more trust and transparency into AI products, powering their next stage of growth.
Did I miss a prediction? Tweet me at @amitpaka.