
Directing your company towards ethical AI

A comprehensive strategy to enable organisations to place ethics at the heart of their AI modelling

Photo by Aaron Burden on Unsplash

For most organisations, AI ethics is the equivalent of flossing: they know it’s good for them, but they prefer to do something – possibly anything – else. But with growing scrutiny from governments, the press, and international organisations, companies must ensure they consider the ethics of their models.

In this article, we’re going to explore a comprehensive strategy for producing responsible AI models. To do this, we’re going to consider several factors, including: (1) fairness, (2) explainability and transparency, (3) accountability, (4) security and safety, and (5) the level of human-centricity and benefit to society as a whole.

Fair

Organisations should ensure that their AI models are fair by carefully choosing the model’s features, testing the model, and adopting IBM’s AI Fairness 360 toolkit

The society we live in today is not fair, but the models we deploy should aim to make it fairer. Fairness should therefore sit at the heart of all of an organisation’s models, particularly since discrimination carries large legal penalties. However, ‘fair’ is a subjective term. On the one hand, fairness could mean treating everyone equally regardless of their circumstances; on the other, it could mean treating each individual according to their own circumstances. And a seemingly ‘fair’ decision, as one Google report found, such as giving disadvantaged people access to credit, can end up negatively impacting their credit scores. Machine learning models, however, have no awareness of context, and therefore no awareness of fairness. Amazon’s AI recruitment-screening model, for example, was biased towards male applicants because, historically, men had filled those roles. Although there are no quick fixes for the ethical issues facing organisations that implement AI models, they should adopt an ethical framework that can increase the fairness of their models throughout the lifecycle, including:

  • Choosing the features. Discrimination can be broken down into two main components: intentional (disparate treatment) and unintentional (disparate impact). Steering clear of disparate impact, however, is not as straightforward as simply removing protected features; some apparently neutral features can act as proxies for protected ones. Organisations should therefore carefully consider the impact of each feature used in their models.
  • Adopting IBM’s AI Fairness 360. Organisations should consider adopting IBM’s AI Fairness 360 toolkit to detect and mitigate biases within a given dataset. Its bias-mitigating pre-processing techniques stand out for balancing three concerns: group discrimination, individual distortion, and utility preservation. Throughout the design and development of an AI model, developers should consider using these techniques to minimise bias within the algorithm.
  • Validating the model. During development, the organisation’s developers should test the AI model, including with the ‘eighty-percent rule’ (also known as the four-fifths rule). This rule, frequently used by statisticians, is calculated by dividing the favourable-outcome rate of the disadvantaged group by that of the advantaged group; a ratio below 0.8 suggests disparate impact (a minimal sketch follows this list). In addition, organisations should consider having an external team validate the model’s performance, particularly from an ethical standpoint.
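As a rough illustration, here is a minimal sketch of the eighty-percent rule check in Python. It assumes a pandas DataFrame with hypothetical column names (`group` and `approved`); AI Fairness 360 offers more robust, production-ready metrics for the same purpose.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           disadvantaged: str, advantaged: str) -> float:
    """Eighty-percent (four-fifths) rule: ratio of favourable-outcome rates."""
    rate_disadvantaged = df.loc[df[group_col] == disadvantaged, outcome_col].mean()
    rate_advantaged = df.loc[df[group_col] == advantaged, outcome_col].mean()
    return rate_disadvantaged / rate_advantaged

# Hypothetical example: loan approvals by group
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(outcomes, "group", "approved",
                               disadvantaged="B", advantaged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # a value below 0.8 suggests disparate impact
```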

Key takeaways:

  • Organisations should adopt a fairness-by-design approach to all AI model decision-making.
  • There is no one-size-fits-all methodology for fairness. However, organisations should adopt two core principles: first, document their approach to minimising disparate impact; and second, clearly justify all of their decisions.

Explainable and transparent

To guide AI model production, organisations should ensure that they weigh the risk and accuracy requirements associated with their model

One of the key trade-offs in AI model ethics is between a model’s accuracy and its explainability. On the one hand, it’s tempting to believe that, in all situations, a more accurate model outweighs the costs of it being less explainable. Certainly, this is true in some low-risk, low-impact situations, such as a streaming service recommending which movie you should watch next. On the other hand, a model that’s difficult to explain may not be acceptable in many situations, such as offering a loan to a customer. So, how should an organisation decide how explainable its model should be? First, it should weigh the risk and accuracy requirements against each other. Second, if a lack of transparency is acceptable, the organisation can consider black-box modelling techniques (e.g. neural networks, random forests); however, if explainability is important, it should focus on using interpretable models (e.g. decision trees, logistic regression).
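To make the trade-off concrete, here is a minimal sketch using scikit-learn (the article does not prescribe a library): it fits an interpretable logistic regression alongside a less transparent random forest on the same dataset. The former exposes per-feature coefficients that can be read directly; the latter typically needs post-hoc explanation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: coefficients map directly to feature influence
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
interpretable.fit(X_train, y_train)

# Less transparent model: often competitive or better on accuracy, harder to explain case by case
black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

print("Logistic regression accuracy:", interpretable.score(X_test, y_test))
print("Random forest accuracy:      ", black_box.score(X_test, y_test))

# The interpretable model's reasoning is visible in its (standardised) coefficients
coefs = interpretable.named_steps["logisticregression"].coef_[0]
top_features = sorted(zip(X.columns, coefs), key=lambda p: abs(p[1]), reverse=True)[:5]
for name, coef in top_features:
    print(f"{name}: {coef:+.2f}")
```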

As organisations embrace AI, they should:

  • Clarify the explainability requirements. Organisations, in conjunction with business and legal stakeholders, should define the level of explainability they expect from their model.
  • Utilise your data scientists. Machine learning algorithms range from high transparency (e.g. decision trees) through partial transparency (e.g. random forests) to little transparency (e.g. neural networks). Organisations should therefore leverage their data scientists to determine the level of transparency required for the desired use case.
  • Hire AI ethicists. Organisations should consider hiring people who have a deep understanding of the model and can, upon request, explain the approach the model took to produce a given outcome (see the sketch after this list).
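When a less transparent model is justified, an ethicist or data scientist can still answer “why did the model decide this?” for an individual case using post-hoc attribution tools. The sketch below uses the SHAP library, which is our choice of illustration rather than something the article prescribes; note that SHAP’s return format for tree ensembles differs between versions, so the example handles both.

```python
import numpy as np
import shap  # pip install shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Explain one individual prediction: which features pushed it towards each class?
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X.iloc[[0]])

if isinstance(sv, list):                       # older SHAP: one array per class
    positive_contrib = sv[1][0]
else:                                          # newer SHAP: (samples, features, classes)
    positive_contrib = np.asarray(sv)[0, :, 1]

# Report the five largest contributions for this single decision
contributions = dict(zip(X.columns, positive_contrib))
for feature, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(f"{feature}: {value:+.3f}")
```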

Key takeaways:

  • Organisations – particularly banks, government agencies, and pharmaceutical companies – face a large amount of scrutiny; therefore, they should focus on making their algorithms explainable. To do this, they should use highly or partially transparent models, unless a less transparent model (e.g. a neural network) offers a clear competitive advantage that outweighs the associated risks. Note, however, that in most use cases humans will not need to vet all of the AI’s decision-making – only the high-risk decisions.

Accountable

Organisations should ensure the AI model’s designers, developers, and managers are responsible for the societal and legal impacts of their solution once it goes live

Algorithms, if left unchecked, can adversely impact businesses, individuals, and societies. It is therefore vital that organisations have an artificial intelligence governance policy in place. Although an AI’s decision-making may at first appear objective, human judgement plays a role throughout the design and development of the model. Consequently, data scientists and their management should remain accountable for the actions of their models. The list below explores three ways to make an organisation’s AI more accountable: implementing strong AI governance, defining responsibility for the consequences of the AI system, and documenting all ethical decisions.

  • Governance. Organisations should produce clear and accessible policies, standards, and procedures for their ethical considerations before undertaking any AI modelling project, thus minimising confusion around who is responsible for any ethical consequences of the model.
  • Responsibility. Data scientists and their management should remain accountable for their models. Organisations should therefore provide employees working in the AI space with relevant compliance training, so that they understand their accountability for producing responsible, socially beneficial models. As one Fast Company study found, fifty per cent of developers believed that the developers who produced an AI model should be accountable for its consequences.
  • Documentation. All design decisions should be clearly documented and accessible to leadership and programmers. Furthermore, if an organisation’s consultants or contractors leave, it should consider hiring full-time ethicists to monitor the model throughout its lifetime and ensure that any ethical concerns are addressed swiftly and effectively (a lightweight documentation sketch follows this list).
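One lightweight way to keep design decisions documented and accessible is a machine-readable “model card” stored alongside the model. The sketch below is purely illustrative; every field name and value is hypothetical.

```python
import json
from datetime import date

# A hypothetical, minimal "model card" recording ethical design decisions
model_card = {
    "model_name": "loan_default_classifier",       # illustrative name
    "version": "1.3.0",
    "date": date.today().isoformat(),
    "owners": ["data-science-team", "model-risk-office"],
    "intended_use": "Rank retail loan applications for manual review",
    "excluded_features": ["gender", "ethnicity", "postcode"],
    "exclusion_rationale": "Protected attributes or close proxies for them",
    "fairness_checks": {"disparate_impact_ratio": 0.87, "threshold": 0.8},
    "explainability": "Logistic regression; coefficients reviewed quarterly",
    "escalation_contact": "ai-ethics@example.com",
}

# Persist the record so leadership and programmers can always retrieve it
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```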

Key takeaways:

  • Produce clear and accessible policies, standards, and procedures that outline the ethical considerations associated with AI model development.
  • Data scientists and their management are responsible for the implications of the organisation’s AI models.
  • Produce detailed records of the ethical considerations.

Secure and safe

Organisations should ensure that they respect their users’ privacy, monitor the algorithm, and do no harm to their customers

The AI model should be implemented in a secure and safe manner, which includes respecting customers’ privacy, monitoring the algorithm, and doing no harm.

  • Respect privacy. Companies are increasingly providing their customers with greater control over their data. However, this type of data-privacy optimisation requires a careful balancing act. On the one hand, companies that collect more data may be at an increased risk of litigation penalties; on the other, companies with strong data-privacy standards may forego the monetary benefits of that data. So, what should organisations do? The optimal strategy is to collect and use no more – but no less – data than the organisation’s competitors. In addition, organisations should clearly articulate to their customers the intricacies of their AI model.
  • Monitor the algorithm. The analytics team should ensure that it clearly articulates the methodology of the AI model, so that if key team members leave, the organisation can still respond to ethical queries posed by its customers. Furthermore, organisations should consider creating a permanent role for AI ethicists, who can monitor the AI algorithm throughout its entire lifecycle. The monitoring process should cover the model’s inputs, its outputs, and local legislation (a minimal monitoring sketch follows this list).
  • Do no harm. Ensure that the AI system is not used to harm any of the organisation’s customers. Take, for instance, a credit card customer who has had trouble managing their finances and may have gotten into significant debt – it’s not appropriate for the organisation to encourage this customer to borrow more money from the bank. Instead, the organisation should focus on supporting the customer to help them achieve their goals. And if that isn’t something to be proud of – what is? The system should therefore ensure that a human can step in when necessary, to make sure the algorithm doesn’t harm individuals.
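As one concrete illustration of the monitoring step above, here is a minimal sketch, under the assumption that the model’s output scores were saved at deployment time and are logged in production (the file names are hypothetical). It flags when the live score distribution drifts away from the baseline so a human can investigate.

```python
import numpy as np
from scipy.stats import ks_2samp

def output_has_drifted(baseline_scores: np.ndarray,
                       recent_scores: np.ndarray,
                       alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between deployment-time and live scores.

    Returns True when the two distributions differ significantly, which should
    trigger a human review of the model's recent inputs and outputs.
    """
    _, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < alpha

# Hypothetical usage: scores saved at deployment time vs. last week's live scores
baseline = np.load("baseline_scores.npy")
recent = np.load("recent_scores.npy")
if output_has_drifted(baseline, recent):
    print("Score distribution has drifted: escalate to the model's owners.")
```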

Key takeaways:

  • Adopt a privacy-by-design approach to all decision-making associated with designing, developing, and testing of an AI model.
  • Produce an AI model that is customer-centric.

Human-centric and socially beneficial

Organisations should ensure their system is managed by a human, socially beneficial, and lawful. To do this, high-risk tasks should be augmented by humans and low-risk tasks automated

When designing and developing an AI system, organisations should take a variety of social and ethical factors into consideration, and should only produce algorithms where the benefits clearly outweigh the risks. Three principles stand out in terms of both their human-centricity and their potential to benefit society: human-in-the-loop, socially beneficial, and lawful.

  • Human-centric. A human should, at all times, be in charge of the AI model. This doesn’t mean that a human should approve every decision the AI makes; rather, organisations should ensure that high-risk decisions are augmented by humans (a routing sketch follows this list).
  • Socially beneficial. AI should positively augment society, particularly by helping to address its largest issues. Note, however, that not everything that is socially beneficial is human-centric. China’s social credit score, for instance, aims to shield society as a whole, but it doesn’t take an individual’s need for privacy into account.
  • Lawful. Organisations shall, at all times, ensure that their AI models and digital solutions comply with all local and extraterritorial policies, standards, and procedures, including the DPA 2018, the GDPR, and any data-sharing agreements.
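The human-in-the-loop principle can be expressed directly in the serving logic. The sketch below, with entirely hypothetical thresholds and function names, automates low-risk decisions and routes high-risk or low-confidence ones to a human reviewer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    approved: Optional[bool]   # None means the case is awaiting human review
    reviewer: str              # "model" or "human"
    reason: str

HIGH_RISK_AMOUNT = 25_000      # hypothetical threshold for a high-risk decision
CONFIDENCE_FLOOR = 0.90        # below this score, the model defers to a person

def decide(amount: float, model_score: float, model_approves: bool) -> Decision:
    """Automate low-risk decisions; escalate high-risk or low-confidence ones."""
    if amount >= HIGH_RISK_AMOUNT or model_score < CONFIDENCE_FLOOR:
        return Decision(approved=None, reviewer="human",
                        reason="High-risk or low-confidence case queued for review")
    return Decision(approved=model_approves, reviewer="model",
                    reason="Low-risk decision automated")

print(decide(amount=5_000, model_score=0.97, model_approves=True))
print(decide(amount=60_000, model_score=0.99, model_approves=True))
```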

Key takeaways

  • Focus on ensuring that the algorithm benefits the end customer, and is compliant with local and extraterritorial laws.

Conclusion

There are no quick fixes for any of the ethical issues surrounding artificial intelligence, and sometimes that can feel paralysing. But the lesson is clear: organisations should ensure their systems are fair, explainable to a non-technical audience, held accountable by their designers, respectful of users’ privacy, and socially beneficial. More importantly, by placing ethics at the heart of all of their AI models, organisations can build trust with their customers. This additional trust may make customers feel more comfortable sharing additional data, which could improve the AI model’s efficacy – potentially more so than making the model less transparent would – and, in turn, the customers’ experience. And that’s the mission of every organisation: to put the customer first.

