Series on AI Ethics

Participatory Approaches to Algorithmic Responsibility

A framework for involving people in enabling the responsible use of algorithmic decision-making systems (ADS)

Maya Murad
Towards Data Science
11 min read · May 11, 2022


Source: image by the author

A framework on responsible algorithmic systems would be incomplete (and not very responsible) without provisions for citizen participation.

The premise is simple: in deliberative democracies, citizens should have agency over how their data is used, and have a say in policies that affect their well-being. This policy influence should also extend to algorithmic decision-making systems (ADS) deployed by private entities.

High-risk ADS are algorithmic decision-making systems that directly or indirectly affect the benefits, punishments, or opportunities individuals or groups can receive. The risk is that the system's output produces incorrect or unfair outcomes that harm those individuals or groups. High-risk ADS implementations are currently facing increased regulatory pressure.

In this article, I propose a broad framework for involving citizens in enabling the responsible design, development, and deployment of algorithmic decision-making systems. This framework aims to challenge the status quo, in which civil society is largely in the dark about risky ADS.

How public-interest groups opened the path for better algorithmic transparency and accountability

We start by acknowledging the role public-interest groups already play in providing demand-side accountability and in influencing supply-side accountability mechanisms. A participatory framework should account for the role of public-interest groups in advancing responsible ADS and should support their mission.

Public-interest groups consist of individuals and organizations that work to protect and advocate for the rights of civil society and marginalized groups. They typically bring domain expertise to the table, and may or may not be remunerated for their work. Public-interest groups include advocacy organizations, think tanks, investigative journalists, academics, and others.

There are several examples of public-interest groups playing a critical role in bringing visibility to problematic ADS implementations (such as demonstrating bias in a judicial recidivism algorithm in the US) and in helping pull the plug on harmful systems (such as the banning of an invasive and inaccurate fraud-detection algorithm in the Netherlands).

These examples illustrate demand-side accountability, where, in the absence of clear or swift regulatory action, non-state actors take on the responsibility, often in a bottom-up way, of challenging the stakeholders believed to be at fault.

Excerpt from the “Reporter’s Guide on Processing Algorithms” by the Society of Professional Journalists (Source). A compilation of journalistic pieces on problematic algorithmic systems can be found here.

Public-interest groups also play a key role in shaping policy and regulation on algorithmic decision-making systems. For example, in Canada, calls from academics and advocacy groups scrutinizing the use of AI in the immigration system helped halt problematic practices and resulted in the creation of responsible-use guidelines that mandate impact assessments and quality-assurance measures for ADS deployed by the public sector.

Although ADS regulation is still in its infancy, public-interest groups can help grow supply-side accountability (performed by state actors) by continuing to exercise pressure on regulators.

More broadly, governments stand to benefit from engaging civil society early in the shaping of ADS regulation. A detailed study from the OECD shows that implementing representative deliberative processes can lead to better policy outcomes.

Participatory pitfalls and how to avoid them

Before diving into how to enhance participatory approaches in enabling transparent and responsible algorithmic decision-making, we should first examine some common participatory pitfalls:

  1. The “free auditor problem,” which pertains to the reliance on the labor of public-interest groups (instead of state actors) to provide accountability.
  2. The tokenization of public participation and “participation washing.”

The first pitfall is relying primarily on the labor of public-interest groups and broader civil society to enforce algorithmic transparency and accountability mechanisms, which they can do only with difficulty in the absence of supporting regulation. The fear is that the burden of action will remain on the shoulders of citizens acting as "free auditors" for the benefit of the public. While some public-interest groups have explicit mandates to safeguard the interests of civil society and/or specific minority groups, the primary burden of accountability should lie with government actors: they are responsible for drafting and enforcing regulation, and they can deploy incentives and controls to guarantee the responsible use and development of algorithmic decision-making systems.

That being said, public-interest groups should continue to play a key role in enforcing demand-side transparency and accountability. There are opportunities to empower public-interest groups and reduce the labor burden by providing privileged access channels to ADS documentation (especially for high-risk systems). Currently, accessing ADS documentation constitutes a major pain point for public-interest groups, even in contexts where access should be guaranteed by “Freedom of Information” laws.

Another pitfall to avoid is the tokenization of public participation in the design and development of algorithmic decision-making systems, wherein these systems can appear to gain legitimacy in public perception thanks to their association with trusted public-interest groups. This is problematic when the owner of an ADS publicly shares the names of the parties consulted but does not disclose the consultation's outcome, or when citizens' recommendations are not reflected in the system's design. In short, public consultations do not guarantee better ADS outcomes. There should be additional consideration of how to disclose the engagement of civil society while avoiding "participation washing".

Towards a better participatory framework

A comprehensive framework for responsible algorithmic decision-making systems should outline roles and engagement modalities for public-interest groups, impacted groups, and broader society, while avoiding the participatory pitfalls stated above.

There are seven guiding principles for participatory approaches to algorithmic decision-making systems.

Guiding principles to participatory approaches on algorithmic decision-making systems. Source: image by author.

A participatory framework should:

1. Center on responsible algorithmic decision-making principles

Participatory approaches should support the responsible use of algorithmic decision-making systems. This means ensuring that an ADS produces fair and explainable outcomes, performs robustly, upholds data security and privacy, and avoids harm to impacted users.

To uphold these values, participatory mechanisms should complement and enhance the transparency, accountability, and agency measures already in place.

Excerpt from Revisiting the Responsible Algorithmic Decision-Making (ADS) Framework. Source: image by author.

The north star of a responsible ADS framework is the safeguarding of human agency, including the possibility of recourse that allows an impacted individual to challenge the system's outcome.

2. Enable agency and empowerment of impacted groups

Building on the responsible ADS framework, participatory approaches should lead to the empowerment of involved groups, specifically individuals impacted directly or indirectly by the system in question.

Arnstein’s “Ladder of Citizen Participation” is a cornerstone framework for thinking about how to enable increasing levels of citizen control and agency. It has been adopted in data stewardship frameworks to enable the responsible collection, use, and management of data.

Left: The ‘ladder of participation’. Adapted from Arnstein, S.R. 1969 (Source). Right: Framework for participation in data stewardship, Ada Lovelace Institute, 2021 (Source)

The “ladder of participation” framework is also useful in the context of enabling the responsible use of algorithmic decision-making systems. At the lowest level of agency, impacted groups are informed about how their outcomes are determined by the ADS. Next, they can be consulted about the ADS in question, eliciting feedback and concerns. They can be involved in the design, development, and monitoring of the ADS, ensuring that their feedback is reflected. They can also collaborate with system owners throughout the ADS lifecycle to uphold responsible-use tenets. Finally, at the highest level of agency, impacted groups are empowered to challenge ADS outcomes, demand recourse, and influence the system’s design.

These levels of participation can be thought of as building blocks, as sketched below. We first need to start by informing citizens about how different algorithmic decision-making systems affect their outcomes, a minimum participatory threshold that is sadly not yet a requirement in most countries. Once citizens are informed about what is going on, they can then be consulted, involved, and empowered in the decision-making process.
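To make the ordering concrete, here is a minimal sketch of the ladder as an ordered enum. The names and the threshold helper are illustrative assumptions, not part of any published standard or library:

```python
from enum import IntEnum

class ParticipationLevel(IntEnum):
    """Levels of citizen agency over an ADS, ordered from lowest to highest.
    Names follow the ladder described above and are illustrative only."""
    INFORM = 1       # impacted groups are told how the ADS shapes their outcomes
    CONSULT = 2      # feedback and concerns are elicited
    INVOLVE = 3      # groups take part in design, development, and monitoring
    COLLABORATE = 4  # ongoing work with system owners across the lifecycle
    EMPOWER = 5      # groups can challenge outcomes and influence design

def meets_threshold(level: ParticipationLevel,
                    required: ParticipationLevel = ParticipationLevel.INFORM) -> bool:
    """Check whether an engagement model reaches a minimum participatory threshold."""
    return level >= required
```

Treating the levels as ordered values captures the "building blocks" idea: an engagement model that empowers impacted groups necessarily clears the lower rungs of informing and consulting them.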

3. Take place throughout the ADS lifecycle

An effective participatory engagement model, especially for a high-risk decision-making system, should start at the commissioning stage, when the intent to use an algorithmic system for the problem at hand first materializes. During this stage, the system owner should research the problem, look into different solutions and their trade-offs, agree on a solution proposal, and form a roadmap for the ADS. This stage offers the most impactful opportunity to ensure that the ADS is designed and planned in a responsible manner.

Participatory engagement at the commissioning stage should contribute to answering the following:

  • Does the proposed ADS adequately solve a problem or add value to a procedure while minimizing risk of harm?
  • Is an ADS appropriate in the first place (as opposed to other decision-making systems)?
  • Has the proposed risk assessment and mitigation plan been validated by potentially impacted groups or domain experts?

Beyond the commissioning stage, participatory engagement can help ensure that the ADS is ready for launch, that it effectively enables agency and recourse, and that it actually supports good decision-making.

A lifecycle-based engagement model between internal and external stakeholders for a high-risk algorithmic decision-making system (illustrative). Source: image by author.

A register can be a helpful tool for providing access to process and outcome information about an algorithmic system throughout its lifecycle. It can also capture the inputs and outcomes of participatory approaches.

An ADS register is a governance tool that enables transparency about the outcomes and processes related to a set of algorithmic decision-making systems. It is essentially a log of disclosures shared by different stakeholders about the governed algorithmic systems throughout their lifecycle. For more information on algorithmic registers, check out my previous article.
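As a rough illustration of what a single register disclosure might capture, here is a minimal sketch. The field names are hypothetical and not drawn from any published register schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterEntry:
    """One disclosure in an ADS register. Field names are illustrative only."""
    system_name: str
    lifecycle_stage: str          # e.g. "commissioning", "development", "deployment"
    risk_level: str               # e.g. "minimal", "high"
    disclosed_by: str             # stakeholder making the disclosure
    disclosure_date: date
    summary: str                  # plain-language description of the disclosure
    participatory_inputs: list[str] = field(default_factory=list)  # e.g. consultation outcomes
    documents: list[str] = field(default_factory=list)             # impact assessments, audits, etc.

# Hypothetical example entry logged at the commissioning stage
entry = RegisterEntry(
    system_name="benefits-eligibility-scoring",
    lifecycle_stage="commissioning",
    risk_level="high",
    disclosed_by="system owner",
    disclosure_date=date(2022, 5, 11),
    summary="Published the initial impact assessment for public comment.",
    participatory_inputs=["citizen panel recommendations"],
)
```

Because every stakeholder appends disclosures to the same log, participants and auditors can trace what was shared, by whom, and at which lifecycle stage.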

4. Adapt to the local context

Recommended participation modalities should be influenced by the risk-level of the ADS and the local context where it is applied.

Existing policy proposals plan to adopt a risk-proportionate approach to regulating ADS. This approach can also extend to recommended participatory modalities. It is important to note that all ADS, regardless of risk, should meet a minimum disclosure requirement that can help verify that the risk level is correctly categorized. This documentation should be available for relevant civil groups to comment on or contest.
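A minimal sketch of what a risk-proportionate mapping could look like follows; the tier names and required engagement levels are assumptions for the sake of the example, not taken from any regulatory proposal:

```python
# Illustrative mapping from ADS risk tier to a minimum engagement level.
# Tier names and requirements are assumptions, not drawn from any specific regulation.
MIN_ENGAGEMENT_BY_RISK = {
    "minimal": "inform",    # baseline disclosure required for every ADS
    "moderate": "consult",
    "high": "involve",
}

def required_engagement(risk_tier: str) -> str:
    """Every ADS gets at least the baseline disclosure, so unknown tiers default to 'inform'."""
    return MIN_ENGAGEMENT_BY_RISK.get(risk_tier, "inform")
```

The baseline default reflects the point above: even a system categorized as minimal risk owes the public enough disclosure to verify that the categorization itself is correct.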

It can be helpful to look at participatory modalities already used in deliberative processes and leverage them as a basis for participatory engagement around an ADS.

Participatory modalities can be:

  • Formal (instituted into governance and regulation) or informal (without legally-binding engagement requirements);
  • Collective (all concerned parties reach one set of decisions or recommendations) or fragmented (representing standalone opinions or recommendations).

Different participatory modalities (non-exhaustive). Source: image by author.

5. Provide meaningful information for engagement

Individuals or groups involved in a participatory mechanism should:

  • Have a clear and well-defined task relating to promoting responsible ADS;
  • Have access to relevant documentation and training required to help them deliver on their task.

For example, if an individual who is likely to be impacted by a proposed ADS is asked to comment on its impact assessment, they first need to understand how the ADS would work. They would need access to information about its design, testing, and monitoring (among other things), presented in a format they can actually understand. Access to raw code or complex system architectures is not meaningful information on its own for most individuals. What the individual is looking for is an understanding of how a decision that affects them would be made and what guardrails are in place to ensure that decisions are made responsibly.

Algorithmic registers can be a helpful tool for enabling participating individuals and groups to have access to meaningful information on the system being evaluated.

6. Disclose outcomes of participation

Disclosing participatory outcomes is a means to avoid “participation washing”, particularly in cases where engaged individuals or groups disapprove of certain practices or decisions related to an ADS.

In the case of formal participatory modalities with collective outputs, such as referendums or citizen juries, the participatory outcome is usually publicly available.

Entities hosting informal public engagements do not usually have an obligation to disclose the recommendations of engaged individuals or groups.

Ideally, ADS owners should be transparent about the participatory approaches they leveraged and the recommendations and decisions that ensued.

There should also be third-party mechanisms to clearly demonstrate whether an ADS has passed the approval of the different participatory modalities, for example in the form of certifications or audit reports.

7. Compensate participation efforts

Finally, participatory efforts should be compensated proportionally to the effort invested. This measure is proposed to avoid situations where big corporations and state actors benefit from and rely on the labor of civil society members to bring transparency and accountability into problematic ADS.

This proposal may be the most challenging to implement: on the one hand, mandatory compensation may discourage organizations from embracing participatory approaches; on the other, it might distort participants' incentives.

Governments should unlock adequate funding to support formal participatory modalities where needed. There are legal precedents for compensating participants or for enshrining participation as part of civic duty.

Targeted funding towards public-interest groups could be a potential vehicle for compensating informal participatory modalities.

In the private sector, paying to recruit participants for informal user studies and citizen juries is already an established practice (check out this example from Microsoft). Corporations that would like to extend their ADS footprint should also consider the participatory costs related to establishing responsible processes.

Putting it all together

The database below provides detailed recommendations for participatory modalities at each stage of the ADS lifecycle. The right engagement model would largely depend on the system’s risk level and local implementation context.

Responsible ADS Lifecycle Management Database — Created by author.

Towards implementation

Participatory mechanisms should be a cornerstone of algorithmic regulations. Unfortunately, provisions for citizen participation have been lacking in recent regulatory proposals.

One criticism of the EU AI Act proposal is that it does not properly account for citizen empowerment in the face of risky ADS. As the Ada Lovelace Institute pointed out, the proposed regulation “fails to create meaningful space for individuals’ participation”. Furthermore, the initial draft had no provisions related to impacted-group consultations or the right to recourse. Thanks to public-interest group pressure, new provisions regarding recourse have been added, but we have yet to see provisions on informing end users that they are subject to an AI system and guaranteeing their right to an explanation.

With the lack of “supply-side” enforcement to empower citizens in the ADS lifecycle, we can consider “demand-side” mechanisms in the meantime, such as certifications and third-party audits.

Companies that center their values on the responsible use of ADS will have the chance to lead by example. Public registers can be a great tool to invite and encourage participatory modalities in the absence of implemented regulation.

Part of a series on Responsible AI based on my graduate thesis “Beyond the Black Box” (MIT 2021). The ideas presented were developed based on the feedback and support of several practitioners with direct experience in regulating, deploying, and assessing AI systems. I am sharing and open-sourcing my findings to enable others to easily study and contribute to this space.


Interdisciplinary tech strategist living between Boston and Beirut. I write about ethics in AI, innovation ecosystems, and my creative coding experiments.