Looking at AI-focused Case Studies

Alongside a discussion of how they may develop in the US and the EU

Yash Mittal
Towards Data Science


This post is written for Dr. Darakhshan Mir’s class on Computing and Society at Bucknell University. We discuss problems in tech and analyze them using ethical frameworks. Taehwan Kim and I worked together on this project.

Photo by Franck V. on Unsplash

In my last post, I discussed the impact of Artificial Intelligence (AI) on the job market and underlined the need for foresight in regulating AI. I believe that our society is unable to cope with the ethical repercussions of a growing dependence on AI. Tech leaders in many countries acknowledge this issue, and in the past few years, each has come out with a strategy to promote the development of AI in an effective way. An Overview of National AI Strategies [1] briefly discusses the different AI policies proposed since the beginning of 2017.

Figure 1: No two policies are the same. They focus on different aspects of AI: the role of the public sector, investment in research, and ethics. | Tim Dutton

In this article, I focus on the differences between the policies proposed by the US and the EU. Thereafter, I discuss three (semi) hypothetical scenarios and how they may play out in two regions with vastly contrasting takes on AI development. For the analysis, I extensively use the ethical frameworks described in An Introduction to Data Ethics [2].

Comparing US and EU Policies on AI

The US focuses on continued innovation with limited regulations from the government. | Wikimedia Commons

In October 2016, the White House released its first strategy to tackle the societal challenges posed by AI [3]. The report underlines the importance of public R&D and of holding those who advance AI research accountable. It suggests that innovation in this sector should be allowed to prosper and that the public sector should impose minimal regulations on the private sector. The understanding is that a free market-oriented approach to AI progress would require little intervention from the government.

A more recent report [4], published a year into the Trump administration, focuses on maintaining American leadership in the field and on removing barriers to innovation so that companies hire locally and are discouraged from moving overseas.

The EU promotes greater regulation while still being competitive in the global AI ecosystem. | Wikimedia Commons

On the other hand, the report published by the EU treats AI as a component of “smart autonomous robots” [5]. AI is thought of as an enabler for automation in other technological systems [6]. The report suggests the creation of a stricter ethical and legal framework based on the principles of autonomy, human dignity, and non-discrimination. In fact, this was further developed in 2018, when the European Commission adopted the Communication on Artificial Intelligence. The original report also discusses the key responsibilities of the public sector in ensuring that the said ethical framework is carried through without bias from the industry.

— Case Study I

A company based in Silicon Valley is known to invest heavily in AI research. Recently, it developed a state-of-the-art Generative Adversarial Network (GAN) that is able to generate hyper-realistic human faces. The program can produce a face of a specified sex, race, and age.

The applications of GANs are multiplying, and so is the risk of misinformation spread through their use. In this day and age, when fake content is so prevalent, GANs pose a significant challenge to the efforts already underway to combat misinformation. This X Does Not Exist is a website that compiles a list of popular GANs. Most of them are quite harmless; however, This Person Does Not Exist raises critical ethical concerns.

Figure 2: StyleGAN developed by NVIDIA. All these faces are “fake.” Or are they?
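The adversarial setup behind such models is simple to state: a generator learns to produce samples that a discriminator cannot tell apart from real data. As a rough illustration (nothing like NVIDIA's StyleGAN, which involves far larger networks and image data), here is a toy one-dimensional GAN in plain Python, where the “real data” is just numbers drawn from a Gaussian and both players are tiny linear models trained by hand-derived gradient steps:

```python
import math
import random
import statistics

random.seed(0)

# Toy 1-D GAN: the generator learns to mimic samples from N(4, 1).
# Generator:      g(z) = a*z + b,  with noise z ~ N(0, 1)
# Discriminator:  d(x) = sigmoid(w*x + c), estimating P(x is real)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

a, b = 0.1, 0.0   # generator parameters (starts far from the target)
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

for step in range(5000):
    # Discriminator update: push d(real) -> 1 and d(fake) -> 0.
    x_real = random.gauss(4.0, 1.0)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log d(x_real) + log(1 - d(x_fake)).
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: push d(fake) -> 1, i.e. fool the discriminator.
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient descent on -log d(x_fake), chained through x_fake.
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
gen_mean = statistics.mean(samples)
print(gen_mean)  # drifts from 0 toward the real mean of 4
```

The same adversarial pressure that makes the generated numbers statistically indistinguishable from the real ones is what, at scale, makes generated faces indistinguishable from photographs.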

What are the benefits and risks of harm that could be created by this project?

As I wrote earlier, the US has a relatively greater focus on R&D. Such a project may receive further funding in the hope that the faces generated by this program would replace the training data needed for other facial recognition algorithms. At the same time, the risk of fake identities could increase, with people passing themselves off as a generated person who closely matches their facial features.

In the EU, however, due to stricter regulations, a similar project may never become widespread. Other machine learning algorithms could then be trained only on “real” faces, which in turn may lead to privacy issues. In this scenario, it is important for data practitioners to follow ethical data storage practices. There have already been numerous instances where people’s pictures were downloaded without their consent and then used for AI experiments.

— Case Study II

A start-up X in Seattle, Washington wants to join the e-commerce industry. Recently, X recruited a number of talented computer science graduates from the University of Washington and built a novel pair-matching algorithm to match consumers with products. When X ran an anonymous survey of its product, it found that most participants preferred it over the product of company Y, which currently dominates the e-commerce industry. However, due to a lack of data and consumers, X is struggling to grow.
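The case study does not specify how X's pair-matching algorithm works, so the following is purely a hypothetical sketch (all names and vectors are invented): score each consumer-product pair by the cosine similarity between a consumer's preference vector and a product's feature vector, and match each consumer to the highest-scoring product.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match(consumers, products):
    """Return {consumer_id: best product_id} by preference similarity."""
    return {
        cid: max(products, key=lambda pid: cosine(prefs, products[pid]))
        for cid, prefs in consumers.items()
    }

# Tiny made-up example: three feature dimensions per vector.
consumers = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.2, 0.8]}
products = {"p1": [1.0, 0.0, 0.0], "p2": [0.0, 0.0, 1.0]}
print(match(consumers, products))  # {'alice': 'p1', 'bob': 'p2'}
```

The sketch also makes the start-up's predicament concrete: with only a handful of consumers, the preference vectors are sparse and noisy, whereas an incumbent with millions of recorded interactions can estimate them far more accurately.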

My last post discussed the concept of large datasets as a barrier to entry for AI start-ups. As described in Furman and Seamans [7], businesses that rely on a network of users with sustained interconnectivity, be it direct (as in the case of Facebook) or indirect (as in the case of Amazon), have an advantage over entrants in the same industry.

How are the stakeholders impacted in the two economies? What are the most relevant ethical issues here?

The biggest stakeholders are the competitors and their respective customers. Even though both the US and the EU have antitrust laws in place, the two regions have drifted apart in terms of antitrust enforcement since the beginning of the century [8]. Due to considerable lobbying, corporations in the US enjoy “leniency in antitrust enforcement,” while markets in the EU are generally more competitive [9].

Figure 3: FANG + Microsoft dominate the US tech industry as well as investments in AI. [10] | Image Source

Therefore, the US is less likely to adopt data portability policies, which allow the sharing of data among competitors. This would hinder the growth of newcomers in any industry dominated by a monopoly (e.g., search engines) or an oligopoly (e.g., ride sharing). In our case study specifically, the start-up may struggle to stay relevant and attract more customers because the incumbent knows the preferences of a larger customer base. This also has the potential to cause discriminatory marketing practices that target groups of people who cannot afford the product.

In the EU, data sharing would be accompanied by a myriad of data storage and privacy issues. It is important that the companies develop a mitigation strategy in case an unauthorized third-party gets access to the customer data.

— Case Study III

A CEO who holds dual citizenship in the EU and the US frequently travels between her workplace (US) and her home (EU). Recently, rumors about her personal life have surfaced on the internet, severely affecting her company's reputation. She is hoping to find a way to take the rumors down, but she is constrained by the policies of the two regions she resides in.

Figure 4: According to Bertram et al. [11], Google had received 2.4 million requests to delist links in the first three years of Right to be Forgotten. | Image Source

In May 2014, the European Court of Justice introduced the “Right to be Forgotten” to give people more control over their personal data. It allows Europeans to request a search engine, say Google, to delist specific links from its search results [11]. The search engine is not obliged to entertain every delisting request. It gauges whether a person’s privacy rights outweigh the interest of the public in relation to the concerned search results.

It is interesting to note that the right to be forgotten does not apply outside Europe, so even for Europeans, the search results in the US remain unchanged.

Is there any harm to transparency and autonomy in society?

The two concepts are closely related and often go hand in hand. Vallor and Rewak [2] define transparency as the ability to see how a given social system or institution works; autonomy, on the other hand, is the ability to take charge of one's own life.

The adoption of the right to be forgotten in the EU promotes a greater sense of autonomy in that an individual can get their personal data deleted if the data is justifiably inaccurate, irrelevant or excessive. However, this is tricky because the search engine itself, not an independent third party, is responsible for determining whether to delist the requested links.

Search engine companies carry a lot of power already and having them decide on the delisting requests can lead to “personal, social, and business” damages [2]. We are left to wonder whether establishing an unbiased agency to process the requests should have been the logical next step after introducing the right.

Are there any downstream impacts?

In the US, the right to be forgotten is largely inadmissible because it might restrict citizens' freedom of speech. Overall, this could lead to a more transparent society, albeit at the expense of autonomy over the personal data that appears in search results. As an extreme side effect, companies may lose lots of “relevant” data that they could otherwise have used to train their AI algorithms.

Conclusion

As we have seen in the three case studies above, the US and the EU focus on different aspects of AI policy. The US promotes a liberal notion of free markets in addition to furthering public R&D in the industry. The EU has tighter regulations but wishes to maintain its competitive edge in the field of AI. Unlike the US, the EU does not want to offload the regulatory responsibilities onto the private sector.


Taehwan and I had assumed that more regulations on the development and use of AI would cause fewer issues. However, the case studies tell a different story. Each strategy raises its own set of ethical concerns. Investing in GANs, for example, might mean choosing between data privacy issues and fake identities. In the case of a data-driven market, the regulators need to gauge whether to enforce data portability practices to break up a monopoly.

— The Path Forward

The aforementioned (extreme) scenarios are avoidable with a plan that takes input from a diverse group of stakeholders. The strategies proposed by the two regions are comprehensive, but, as Cath et al. [6] argue, answering the following question will be key to shaping sound policy.

What is the human project for the mature information societies of the twenty-first century?

The paper offers a two-pronged approach to developing a clear understanding of our vision for an AI society. Firstly, the authors suggest the creation of an independent council which mediates between the different stakeholders, including the government and the corporations. Secondly, they ask that human dignity be put at the center of all decisions. In essence, this is done to ensure that the interests of the most vulnerable stakeholders are considered in decision making.

There is no question that a massive collaborative effort is needed to realize this ambitious plan. However, we can draw inspiration from similar undertakings, such as the General Data Protection Regulation (GDPR) [13], in our recent past. The EU and the US would still need their own AI policies, but at least discussing their priorities could help them reach some common ground.

References

[1] Dutton, Tim. “An Overview of National AI Strategies.” Politics + AI (Jun 2018).

[2] Vallor, Shannon and Rewak, William. “An Introduction to Data Ethics.”

[3] Executive Office of the President National Science and Technology Council Committee on Technology. “Preparing for the Future of Artificial Intelligence.” (Oct 2016).

[4] Office of Science and Technology Policy. “Summary of the 2018 White House Summit on AI for American Industry.” (May 2018).

[5] Directorate-General for Internal Policies. “European Civil Law Rules in Robotics.” (2016).

[6] Cath, Corinne, et al. “Artificial intelligence and the ‘good society’: the US, EU, and UK approach.” Science and engineering ethics (2018).

[7] Furman, Jason, and Robert Seamans. “AI and the Economy.” Innovation Policy and the Economy (2019).

[8] Gutiérrez, Germán, and Philippon, Thomas. “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift.” The National Bureau of Economic Research (Jun 2018).

[9] Gutiérrez, Germán, and Philippon, Thomas. “You’re paying more in America than you would in Europe.” The Washington Post (Aug 2018).

[10] Bughin, Jacques, et al. “Artificial Intelligence: The Next Digital Frontier?” MGI Report, McKinsey Global Institute (Jun 2017).

[11] Bertram, Theo, et al. “Three years of the Right to be Forgotten.” Elie Bursztein’s site (2018). Link.

[12] House of Commons Science and Technology Committee. “Robotics and Artificial Intelligence.” (2017).

[13] European Union. “General Data Protection Regulation.” (May 2018). Link.

If you liked this, please check out my other Medium posts and my personal blog. Comment below on how I can improve.
