Code of Ethics — AI and Analytics for HR

Jasmit Kaur
Towards Data Science
4 min read · Aug 10, 2019


Photo by ev on Unsplash

Last night, I watched a documentary about the information war started by Cambridge Analytica (“The Great Hack” on Netflix). It featured Christopher Wylie, a former data scientist and eventual whistleblower, who played a role in harvesting the personal data of millions of Facebook users, data that foreign agents ultimately used to target them with political ads. Nothing is more thrilling to data scientists than having access to a large dataset. And we all want to use the data to change the world in some positive way. But what if we inadvertently cause harm rather than benefit? That’s a key question for any data scientist, and it applies just as much to people analytics, the application of AI or data science to HR.

At a recent conference on employee assessments, the discussion repeatedly veered into the legal and ethical issues of people analytics. During the Q&A following a talk I gave there on stages of organizational sophistication in people analytics, participants asked a range of questions. One was about the ethics of employee surveys that claim to be anonymous, even though someone in HR often has access to identifying data that can pinpoint individual employees in the results. Another: how would one defend the use of an AI-based video screening technology in a court of law? Yet another: can you explain a machine-learning-based predictive model for identifying the “best” candidates? And another: what is behind these algorithms that claim to match the right person to the job?

Leading with Ethical Issues

Legal and ethical issues are intertwined. The law is, to some extent, a manifestation of society’s ethical standards. As a society, we think it’s ethical to treat people fairly and equitably, and hence we have laws to prevent adverse impact. As a society, we believe people should be allowed to do many things away from the public eye, and hence we have privacy laws.

Yet the law does not compel every kind of ethical action, and where ethical standards remain unsettled, the law necessarily trails behind.

Title VII prohibits discrimination in employment on two grounds: “demonstrable intent to discriminate” and “disparate impact”. However, there is legal precedent that allows organizations to have disparate impact if it arises from business requirements, especially where there appears to be no intent to discriminate. This loophole could undo all of Title VII once data science and AI enter the picture. Organizations could make people decisions based on data that ends up having disparate impact, even without including obviously sensitive variables such as race or gender. Algorithms might find statistical proxies for protected groups in biased historical data and discriminate based on them, all while the organizations responsible claim both business requirement and discriminatory innocence.
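
To make this concrete, here is a minimal sketch in Python of how proxy discrimination can arise. The data is synthetic and every variable name is mine, for illustration only; the 0.8 threshold is the EEOC’s conventional “four-fifths” rule of thumb for flagging disparate impact. The model never sees the protected attribute, yet it reproduces the historical disparity through a correlated proxy.

```python
# A minimal sketch of proxy discrimination, using synthetic data.
# "zip_code" stands in for any feature correlated with a protected
# attribute; none of these names come from a real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# A "neutral" feature that happens to correlate with group membership,
# e.g., residential zip code.
zip_code = group + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)

# Biased historical labels: past hiring favored group 0.
hired = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, n)) > 0.8

X = np.column_stack([skill, zip_code])  # no protected attribute included
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Selection rates by group, and the "four-fifths" ratio.
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"selection rate ratio: {min(rate0, rate1) / max(rate0, rate1):.2f}")
# A ratio below 0.8 is the conventional red flag for disparate impact,
# even though the model never saw the protected attribute.
```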

It’s not yet clear how the law will treat such “black-box” algorithms, so those of us working in people analytics sometimes operate in a Wild West where the law is uncertain and anything goes. When the law is ambiguous, we need to rely on our own ethical compass.

So, what are the biggest ethical concerns in people analytics outcomes and products?

  • Are decisions susceptible to bias because of bias in the data?
  • What is the right balance between the desire for more data and personal privacy?
  • How should we be held accountable for potential errors made by algorithms (whether they were “correctly” programmed or not)?
  • What should be communicated to people who were affected by decisions made in part by an algorithm?
  • When is it OK for job candidates and employees to interact only with machines? When should a real person be the point of contact?

Code of Ethics for People Analytics

The Association for Computing Machinery has had a code of ethics since at least 1992, and the American Library Association has had a code of ethics regarding information users since 1939, well before mainstream computing. Just as medical doctors take the Hippocratic oath, these other professions needed a code of ethics because of the potential impact their work can have on real people.

I believe those of us working in people analytics need a code of ethics that can guide our work, as well. As a starting point, here is my own draft, which has been inspired by other professional codes of ethics:

  • I will be ever-mindful of privacy and security.
  • I will be transparent with my clients about the assumptions and limitations in my methods.
  • I will encourage organizational clients to be transparent with their stakeholders.
  • I will use data only for the purposes for which it was collected.
  • If I or my organizational clients want to use data for purposes other than the original intent, we will seek explicit permission from those about whom the data was collected.
  • I will strive to communicate in ways that make the information I produce difficult to misuse.
  • I will strive to use interpretable algorithms, especially when they have the potential to influence decisions about an individual’s selection, performance evaluation, or career growth (see the sketch after this list).
  • I will be clear and up front about what data analytics and technology can solve, and what it cannot.
  • I will inform my clients about trade-offs when using any form of automation.
  • I will be vigilant against the potential for undesired bias, even where it might be allowed by law.
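
As one illustration of the interpretability principle above, here is a hedged sketch (synthetic data; the feature names are hypothetical): a logistic regression whose coefficients translate directly into odds ratios, so a recruiter can explain why the model scored a candidate the way it did. A more opaque model of similar accuracy generally offers no such account.

```python
# A sketch of an interpretable screening model: logistic regression
# coefficients map directly to odds ratios per feature. The feature
# names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "skills_test_score", "interview_rating"]
X = np.random.default_rng(1).normal(size=(500, 3))
y = (X @ np.array([0.9, 1.2, 0.4]) + np.random.default_rng(2).normal(0, 1, 500)) > 0

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    # exp(coef) is the multiplicative change in the odds of a positive
    # screen per one-unit increase in the (standardized) feature.
    print(f"{name}: odds ratio {np.exp(coef):.2f}")
```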

This code of ethics covers five categories: data privacy, purpose-driven usage, transparency, visibility into limitations, and action against bias. This is a start for me. I will continue to evolve this code as I learn from other professionals in the people analytics world.


I am CEO and co-founder of Culturebie, a people analytics company based in Ann Arbor, MI. My goal is to make people analytics accessible to all companies.