
A cautionary tale: multi-stakeholder feedback in AI Ethics

Multi-stakeholder feedback has inherent flaws; hence, it is best treated as 'one of the means' rather than 'an end' to establishing…

AI Alignment and Safety

Source: Freepik | High view diverse wooden characters inclusion concept

Setting the context

Ethics is a system of accepted beliefs that guides or controls the behaviour of individuals or groups. These beliefs are partly rational, truthful, and factual, and partly their contrary, relative to the current time. Accepted beliefs may derive from social philosophies learnt from history, morals recognized by groups, principles defined by communities, and sometimes rules set by those in charge of governance. Their common theme is that they support and empower people. Over centuries, communities, groups, organizations, and governance structures have established ethics through multi-stakeholder consultation and the gathering of diverse inputs. Societies and cultures across the globe have ethics embedded as a way of life, well beyond the regulatory exercise of defining ‘what is ethical’ (establishing a system of accepted beliefs). Hence, ethics (not tied to any territorial constraints) is and should be greater than the relevant regulatory requirements (which are inherently tied to a geographic territory).

Need for multi-stakeholder feedback

Artificial intelligence is an agent of change that empowers people and optimizes the utility of the resources at hand. Any agent of change with a significant impact on people also carries within it certain side effects of the change it brings. For artificial intelligence, such side effects include harm and discrimination, among others. Given the significance of the potential impact of these side effects on people, there is an essential need to establish ethics. There is an ongoing debate about who should be empowered to establish AI ethics, and how and when. For ethics to span a gamut larger than regulation, there must be wider inputs and inclusive participation from people across different sections of society, communities, and regions. Hence, multi-stakeholder participation, social dialogue, and diverse inputs are considered critical contributors to establishing ‘Ethics in Artificial Intelligence’.

TRUST: The core of this mechanism

The underlying intent of multi-stakeholder and diverse inputs (both online and offline) is to ensure inclusive considerations in framing the system of accepted beliefs. Multi-stakeholder feedback mechanisms are designed to inherently trust the (1) intent and (2) integrity of stakeholders. This is at once the greatest strength and the greatest limitation of the multi-stakeholder feedback mechanism. Recent history has shown that social media trends driven by multi-stakeholder feedback have been used to rig political systems and disturb religious harmony.

Unintended sampling bias in identifying stakeholders for feedback can produce more divisive feedback that amplifies existing social constructs. This is because intent conflicts (differences in accepted beliefs) and uneven baselines (under-representation or lack of access) already exist in those social constructs. Social systems handle these conflicts by providing territorial independence for groups (not necessarily geographic territory; e.g., race) or by setting minimum or maximum limits on opportunities for different groups (e.g., quota systems) to ensure their co-existence, which is practically difficult in the world of AI. For instance, there is a practical difficulty and dilemma in establishing a quota system for a select group in an AI application when the application currently has no identifiers for that group. The dilemma is whether the AI application should expand on the existing social discrimination.
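To make the sampling-bias point concrete, here is a toy sketch (the group names, proportions, and recruitment weights are all invented for illustration): when stakeholders are recruited through a channel that over-represents one group, the aggregated feedback drifts away from the population's actual view.

```python
import random

random.seed(42)

# Hypothetical population: 50% group A (60% support a policy),
# 50% group B (40% support it) -> true support is about 50%.
population = (
    [("A", random.random() < 0.6) for _ in range(5000)]
    + [("B", random.random() < 0.4) for _ in range(5000)]
)

def support_rate(sample):
    return sum(vote for _, vote in sample) / len(sample)

# Unbiased sample: every stakeholder equally likely to be heard.
unbiased = random.sample(population, 1000)

# Biased recruitment: group A is 4x as likely to be reached,
# e.g. the feedback channel is more accessible to them.
weights = [4 if group == "A" else 1 for group, _ in population]
biased = random.choices(population, weights=weights, k=1000)

print(f"True support:    {support_rate(population):.2f}")
print(f"Unbiased sample: {support_rate(unbiased):.2f}")
print(f"Biased sample:   {support_rate(biased):.2f}")  # skews toward group A's view
```

The numbers are arbitrary, but the mechanism is the general one: the biased channel systematically over-weights one group's accepted beliefs without anyone intending it.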

Trust in integrity is difficult to validate. Economic opportunity and unsettled territorial independence in the social system sometimes induce stakeholders or stakeholder groups to exploit this trust and dilute or divert the impressions on the system of accepted beliefs (ethics). While these exploitations surface from dormancy sooner or later, by then the damage is already done.

Why can multi-stakeholder inputs be flawed?

Looking at ‘multi-stakeholder inputs in ethics’ through the lens of a commercial organization building AI systems helps expose the challenges within the multi-stakeholder feedback mechanism and the caution to be applied while using it.

1. Research: AI research by commercial organizations is significantly relevant for building better solutions that serve the existing and future customer base. It is done through research collaborations with educational institutions, fellowships/internships, and funding for social institutions and think tanks working on a subject of interest to the organization. Such research involves collecting multi-stakeholder feedback, which has been questioned in the recent past over issues such as (1) silenced feedback, traded for loyalties, incentives, or a higher-purpose ideology, and (2) feedback that does not explore the demerits of the research, either by choice or due to inadequate data or insufficient time.

2. Automated feedback: AI applications that adopt self-learning, human-trained learning, or a combination thereof are prone to (1) false feedback, e.g., bot-driven trending news, (2) influenced feedback, e.g., sharing and commenting on fake news, and (3) irrational feedback, e.g., unqualified opinion, judgement, or advice on how a country should deal with terrorists. In many circumstances it is difficult to differentiate between feedback that empowers people and feedback that divides them. These attacks are mounted through various adversarial techniques, including data poisoning and feedback channelled through hired adversaries.

3. Decisions and associated actions: AI applications may make a choice or decide on a trade-off based on multi-stakeholder feedback (online and offline). These choices/trade-offs may rest on an understanding or impression of ground truth and its conflict with the economics or business principles of the organization. They may be plagued by (1) feedback that is not representatively independent (conflict of interest), (2) narratives that selectively represent issues about one or more groups against others, e.g., seeking feedback on certain issues faster than on others, (3) feedback with an inherent notion of protecting the interests of the commercial organization, (4) feedback from stakeholder groups with an imbalance of power and capacity, and (5) feedback where consensus is not reached while the underlying circumstances change.
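The ‘false feedback’ problem in point 2 above can be sketched in a few lines (the account names and volumes are invented for illustration): a handful of high-frequency bot accounts can dominate a naive popularity count, while even a simple per-account cap blunts the attack.

```python
from collections import Counter

# Hypothetical feedback stream: 1000 genuine users each mention
# a topic once; 20 bots each push a planted topic 100 times.
events = [("user%d" % i, "organic_topic") for i in range(1000)]
events += [("bot%d" % i, "planted_topic") for i in range(20) for _ in range(100)]

# Naive trending: count raw mentions -> the bots win.
naive = Counter(topic for _, topic in events)

# Defensive aggregation: count each account at most once per topic.
unique_pairs = {(account, topic) for account, topic in events}
capped = Counter(topic for _, topic in unique_pairs)

print("naive :", naive.most_common())   # planted_topic dominates
print("capped:", capped.most_common())  # organic_topic recovers the lead
```

Real platforms face far more sophisticated adversaries than this toy cap can handle, but it illustrates why trusting raw feedback volume amounts to trusting stakeholder integrity.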

It is important to understand that not all of this is influenced by the commercial organizations building AI systems themselves; sometimes it is driven by (a) the economic and social needs of the stakeholders and (b) who is catering to such needs (the company; its competitor, as in espionage attacks; activists; and/or political representatives). This determines whether stakeholders express themselves (favourably, unfavourably, or falsely) or stay silent.

Conclusion

This is not to paint the multi-stakeholder feedback mechanism as ineffective. It is one of the most inclusive approaches to restoring faith in humanity regarding AI systems; however, it is important to recognize that it is flawed. The reason to expose the flaws is to take adequate caution while using such mechanisms and to treat them as ‘one of the means’ rather than ‘an end’ to establishing and maintaining Ethics in Artificial Intelligence.

