Why AI Ethics Requires a Culture-Driven Approach

There is a big piece missing in many organizations: building culture collaboratively to stand for principles.

Sundar Narayanan
Towards Data Science


Photo by 43 Clicks North on Unsplash

Timnit Gebru, an Ethiopian American researcher known for her progressive AI ethics work and co-lead of the Ethical Artificial Intelligence Team at Google, said in a tweet on Thursday that she was fired from Google over a research paper that highlighted bias in AI. While AI ethics researchers and sociologists are expressing their concerns over the move, a clear gap remains in organizations' current approach to AI ethics: building culture collaboratively to stand for AI ethics principles.

Data and technology ownership and access will always be imbalanced, with economics and politics on one side and rights and transparency on the other. With concerns around bias, political influence, hate speech, and discrimination, technology ethics (AI ethics) is becoming a boardroom conversation for technology businesses across the globe, whether framed as platform or product ethics or as responsible artificial intelligence.

Academic think tanks and technology behemoths have developed and shared several frameworks and principles for AI ethics. Companies like Google and Facebook have made their ethics principles public and have begun sharing insights on what critical ethics issues they face with technology today and how they are dealing with them. Typically, these frameworks or corporate principles attempt to cover many of the themes (if not all), including privacy, non-discrimination, safety and security, accountability, transparency and explainability, and protecting human values.

Focus on AI Ethics

Corporate efforts revolve around three major areas: (a) establishing principles, policies, guidelines, checklists, and focus teams to deal with AI ethics, including responsible AI leaders and/or product managers; (b) conducting research to understand and find solutions for key ethics issues (leveraging or collaborating with academic/scholarly support where required) and periodically publishing or sharing updates on efforts or research outcomes; (c) aligning strategy and initiatives with AI ethics principles and, in some cases, building or bringing in tools to help the community at large address select ethics challenges.

These efforts, while commendable, are extremely limited. Many are driven by a few individuals or small groups, definitions of fairness and responsibility are dynamic and still evolving, and efforts are directed at only the most visible challenges. They do not address conflicting approaches that exist within the organization. A big piece is missing from all of this: building culture collaboratively to stand for the principles.

The missing gap

We have unmasked fewer challenges than actually exist with respect to ethics in AI. When Joy Buolamwini gave her TED Talk ("How I'm fighting bias in algorithms") in 2017, organizations using facial recognition started re-examining their products and the bias inherently embedded in them. Bias in algorithms did not end there; it continues as more models are developed, more data is annotated, and more use cases are identified. We as a society are inherently biased and are only partially attempting to take preliminary steps to recover in certain aspects (e.g., pay parity between women and men).

While policies and principles are a great start, enabling the right culture is what makes them last, with people sharing a mission and voluntarily aligning to responsible behavior. Culture cannot be built with a set of tasks; organizations that have established their AI ethics principles as their purpose need to look at progressing towards them. That progress requires aligning several factors for the stakeholders who collaboratively work towards the mission, including beliefs (an acceptance that something exists or is true, without proof), perceptions (the way something is understood or regarded), identity (the characteristics determining who or what someone or something is), imagery (visual symbolism), judgement (a conclusion or opinion), and emotion (an instinctive or intuitive feeling).

Culture driven approach to AI Ethics

None of these factors depends only on facts. Each has an independent or inter-dependent impact on our thoughts and actions, individually and collectively. Let us look at the key factors and how influencing them drives a cultural phenomenon:

1. Develop wider emotions towards the ethics principles

Emotions are strong messengers and influencers of the principles that corporations stand for. At a macro level they enable effective communication; at an individual level they inspire pride and a sense of purpose for the many individuals who carry that emotion towards the principles. Structured communication, engagement through stories and life experiences, and efforts to help parts of society evolve away from patriarchy are some of the ways in which emotion can be developed. Emotion produces a positive neural response when actions and efforts towards the stated AI ethics principles are taken, and a negative neural response for deviations from those principles, thereby functioning as a deterrent.

2. Instill beliefs on organizational direction towards ethics principles

Beliefs and perceptions are important because employees are not always close enough to see how deeply the organization feels about the principles it has laid out for AI ethics. Instilling shared beliefs and perceptions requires a strategic approach: shaping the business model in sync with the purpose, aligning leadership across levels, designing communications, and exhibiting representative behavior. For example, if stakeholders believe that an organization's effort towards addressing discrimination is limited and unreliable, the aforementioned efforts (principles, policies, research, etc.) are going to have limited impact on them.

If the organization expects employees and stakeholders to act in a certain manner, it is necessary to pay attention to instilling those beliefs. This can be done by making ethics principles part of the goals and objectives of employees and stakeholders, or by ensuring that ethics principles are a critical strategic discussion point among stakeholders. For instance, Martin Fishbein and Icek Ajzen, in their 'Theory of Reasoned Action', argue that the intention to perform a behavior precedes the actual behavior, and that such intention results from the belief that performing the behavior will lead to certain outcomes. These measures in turn help organizations create an environment in which people raise their voice for the values and principles the organization should stand for, thereby steering its direction.

3. Be consistent in enriching the beliefs and emotions

Instilling beliefs and emotions won't by itself have a significant impact unless the effort is consistent. Being consistent in this context is not limited to doing the same thing; it means enriching the effort with every attempt: for instance, identifying or inventing new ways to engage with employees and stakeholders, or modulating storytelling to relate to real-world insights and events. Eric Van den Steen, in his research ('On the Origin of Shared Beliefs'), notes that people prefer to work with others who share their beliefs and assumptions, since such others 'will do the right thing', and that beliefs evolve over time through shared learning. This approach has to percolate to new hires, including lateral hires, third parties, and business partners, and above all to the board and senior management.

Conclusion

In some ways, being silent on AI ethics issues may reflect poorly on an organization and signal complicity, as could be perceived in Gebru's case. This is not about semantic differences in how communication is handled inside or outside the organization. Instilling shared beliefs, enabling positive perceptions, and creating a compelling emotion towards AI ethics is not just essential but imperative for the brand and business to flourish, as the reputational downside of being inconsistent with stated principles and values can be catastrophic. Hence, it is necessary to take a holistic outlook on AI ethics and to build a responsible AI culture collaboratively by instilling values among stakeholders across the spectrum.

Previously published on LinkedIn Pulse.
