AI Could Learn a Thing or Two from These Three Fields

Malak Sadek
Towards Data Science
7 min read · Sep 8, 2022


As AI systems constantly learn from the data produced by the world around them, so must their creators. Photo by Aideal Hwa from Unsplash.

It’s no secret that the field of Artificial Intelligence (AI) is fraught with scandals, biases, and limitations. There’s also no shortage of attempts to fix these issues, coming from tech, mathematics, ethics, and even design. What’s becoming increasingly clear is that there will never be a one-size-fits-all solution to these problems; instead of trying to reinvent the wheel, the field could benefit greatly from taking advantage of existing movements towards human-centeredness and inclusivity.

Data Feminism — Participation

In a broad sense, participation means taking part in something; as a term, it is non-specific, covering anyone getting involved in an activity at any time for any purpose. One of the key benefits of stakeholder participation in the field of AI is that it more evenly distributes decision-making power, and an influential voice, among the parties affected by a technology or intervention, especially those experiencing “structural oppression” or “systemic disadvantages”. While the term is often used in human-computer interaction and design, it describes widely varying degrees of engagement that can be unequal or asymmetric, whether between stakeholders and the design/development team or between different user groups, forming a skewed model of collaboration.

Data Feminism goes beyond gender inequalities and focuses on equitable and meaningful participation in the entire technology life-cycle. Photo by That’s Her Business from Unsplash.

There’s also been very little work done on documenting and analyzing the actual effects of interactions between researchers and participants when participatory activities take place. By using a framework of inclusion and diversity, on the other hand, the goal becomes building an environment where people with diverse backgrounds and experiences are comfortable and empowered enough to participate meaningfully and effectively. Participation also enables value-sensitivity, as engaging stakeholders is the only way to understand their values and allow them to make decisions reflective of those values. It avoids having assumptions and decisions made by a homogeneous group of creators, who can fall prey to the privilege hazard and tunnel vision.

Interdisciplinary Studies — Interdisciplinary Collaboration

Despite the numerous global calls for interdisciplinarity that have catapulted it into buzzword status, interdisciplinary work has been taking place for a very long time. While the term ‘interdisciplinary studies’ first appeared around the 20th century, evidence suggests that civilizations as old as those of the ancient Egyptians and Greeks were engaged in interdisciplinary work and research. Any activity can be described as interdisciplinary so long as the tools, knowledge, or frameworks it makes use of span two or more disciplines, where a discipline is an academic area with its own established way of generating knowledge, asking questions, and seeking answers to them (i.e. its own tools, frameworks, models, terms, and so on). By utilising the knowledge, skills, tools, and frameworks of different and often dissimilar disciplines, interdisciplinary teams are better equipped to solve problems and answer questions that span disciplines, which a mono-disciplinary expert would not have the knowledge to address on their own. Creating AI-based systems is a highly interdisciplinary process in that it involves several scientific domains such as data science, mathematics, computer science, and physics. What must not be overlooked, however, is the need for interdisciplinary collaboration with social scientists, domain experts, users, and design experts for a truly holistic view of the dimensions and impacts of these systems.

Interdisciplinary collaboration is a powerful tool that can help bridge the gaps between disciplines’ tools and understanding and build a stronger and more united whole. Photo by Vardan Papikyan from Unsplash.

There have been countless calls for the introduction of an interdisciplinary, participatory design process for AI-based systems, especially for aiding explainability and transparency, embedding values into these systems, providing accountability, and mitigating the downstream harms that arise from cascading biases and limitations. There have also been calls for collaboration across the entire AI pipeline, including data creation and selection, instead of having designers at the front-end or start of the process and engineers at the back-end. In fact, it has been said that the only way to combat the structural and data biases that creep into AI systems is to step away from custom-built, solely-technical solutions and instead use open constructive dialogues, collaborations, and group reflections: to bridge the gulf between stakeholder visions and what actually gets implemented, to examine the wider socio-cultural contexts in which AI systems are used, and to have the needed voices give their input in a meaningful, impactful way.

These calls and recommendations highlight that a human-centered and value-sensitive AI system is one that was built through diverse participation and interdisciplinary collaboration.

Software Engineering — Non-Functional Requirements and Documentation

Looking towards the field of Software Engineering, several important lessons emerge.

Making Use of Non-Functional Requirements

Requirements specification is a crucial bridge between the abstract, higher-level concepts that emerge in the form of non-functional requirements in software engineering and the concrete requirements that engineers can actually implement. These non-functional requirements can be considered synonymous with user values and priorities, as well as other socio-technical elements and constraints: factors that are commonly overlooked in technological domains.

Having practical ways of operationalising non-functional and value-based requirements alongside functional ones can go a long way in creating more human-centered AI systems. Photo by Joan Gamell from Unsplash.

High-level non-functional requirements collected from stakeholders are typically re-framed as ‘quality objectives’; the underlying values are then extracted in the form of ‘quality factors’, those are quantified into ‘quality criteria’, and these are finally measured using ‘quality metrics’. For example, a user might express that they want the system to always be online whenever they need it. This non-functional requirement would translate into a quality objective stating that the system should always be online for users. The underlying quality factor would be ‘reliability’, the quality criterion might be having the system online 99% of the time, and the metric used could be the amount of time the servers were offline in a year (which should be less than 1%). Through this series of translations and conversions, ambiguous stakeholder statements can be transformed into quantifiable metrics and well-defined desired outcomes across managerial, design, and implementation levels.
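To make that chain concrete, here’s a minimal sketch in Python of how the reliability example above might be traced from objective to metric. The class and field names are purely illustrative, not part of any established requirements standard:

```python
from dataclasses import dataclass

# Illustrative representation of the translation chain:
# objective -> factor -> criterion -> metric.
@dataclass
class QualityRequirement:
    objective: str   # high-level restatement of the stakeholder's wish
    factor: str      # the underlying value, e.g. "reliability"
    criterion: str   # the quantified target
    metric: str      # what actually gets measured

def uptime_fraction(seconds_online: float, seconds_total: float) -> float:
    """The measured value for an availability-style metric."""
    return seconds_online / seconds_total

availability = QualityRequirement(
    objective="The system should always be online for users.",
    factor="reliability",
    criterion="online at least 99% of the time",
    metric="fraction of the year the servers were reachable",
)

SECONDS_PER_YEAR = 365 * 24 * 3600
measured = uptime_fraction(SECONDS_PER_YEAR - 80_000, SECONDS_PER_YEAR)
print(f"{availability.factor}: measured {measured:.4f} "
      f"(target >= 0.99, {'met' if measured >= 0.99 else 'not met'})")
```

The point of keeping all four levels in one record is traceability: each measured number can be followed back to the stakeholder statement it operationalises.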

This specification process can help address calls for operationalizing abstract chosen qualities and values into measurable, actionable steps and metrics, and for “moving from principles to practices” when it comes to AI-based systems. It can also help answer the criticism that the ethical guidelines and recommendations published within the field are too theoretical and difficult to implement. The key is to translate values that would otherwise remain theoretical constructs, impossible to quantify or even observe, into concrete goals and outcomes.

Air-Tight Documentation

One of the massive limitations of the way AI systems are built is the lack of transparency. This partially results from overly-technical and/or limited documentation of, and reflection on, the knowledge produced and the design decisions taken throughout these projects. This lack of documentation can lead to data going missing and to confusion over team roles, expected outcomes, and the processes taking place, especially in interdisciplinary settings. Documentation, on the other hand, provides a shared representation of the team’s understanding as well as a point of reference, and it can be created incrementally and collaboratively, making it an incredibly useful tool. As such, focusing on having documentation phases, or producing artifacts throughout the process that act as documentation themselves, is paramount. Within the context of AI-based systems, documentation has been found to increase transparency and explainability, helping creators make sense of the diverse needs of the various stakeholders involved, reflect on and justify the different decisions made, bridge the gap between AI ethics and practice, and encompass and unify the whole AI lifecycle in an artifact, turning the document into both a tangible artifact and a process in itself.
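As a small illustration of documentation created incrementally and collaboratively, here’s a sketch of a decision log that a team might maintain throughout a project. The structure and field names are hypothetical, loosely in the spirit of artifacts like model cards rather than any fixed standard:

```python
import json
from datetime import date

# A hypothetical, incrementally built documentation artifact: a shared
# record of the team's decisions that doubles as a point of reference.
doc = {
    "project": "example-ai-system",   # illustrative name
    "stakeholders": ["domain experts", "designers", "engineers", "end users"],
    "decisions": [],                  # grows as the project evolves
}

def log_decision(record: dict, summary: str, rationale: str) -> None:
    """Append a dated, justified design decision to the shared record."""
    record["decisions"].append({
        "date": date.today().isoformat(),
        "summary": summary,
        "rationale": rationale,
    })

log_decision(
    doc,
    summary="Dropped ZIP code as an input feature",
    rationale="Flagged by domain experts as a proxy for protected attributes",
)

print(json.dumps(doc, indent=2))
```

Because every entry carries a rationale, the log captures not just what was decided but why, which is exactly what downstream auditors and new team members need.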

Making AI systems more human-centered and value-sensitive is a challenging but crucial endeavor. Photo by Alexander Sinns from Unsplash.

Bringing it All Together

The field of AI and its design processes can stand to learn a thing or two from other fields and ongoing trends. By focusing on:

  1. Meaningful and equitable participation (Data Feminism),
  2. Supporting and prioritizing interdisciplinary collaboration (Interdisciplinary Studies),
  3. Capturing and operationalizing people’s values and priorities (Software Engineering), and
  4. Maintaining documentation for transparency (Software Engineering),

the field can take strides towards addressing several of its current shortcomings by using tried-and-tested methods from right next door.

Where I Fit In

This current reality, in which AI systems hang in the balance, with the potential to become even more isolated, exclusive, and complicated, or to open up and become more accessible and inclusive, is what inspired my PhD project. I’m working towards creating a participatory process, and a toolkit to support it, to systematically involve people throughout the AI life-cycle, with a focus on value-sensitivity.

You can check out the official page for my project on the Imperial College London website. You can also check out this other article I wrote explaining the details of my PhD project.

I’ve set up this Medium account to publish interesting findings as I work on my PhD project to hopefully spread news and information about AI systems in a way that makes it understandable to anyone and everyone. If you’ve liked this article then please consider following along as I post new things, and please like and share!


Hi! I’m a Design Engineering PhD Candidate at Imperial College London working at the intersection of AI and design.