My comments on the EU’s leaked white paper on AI

Xeno Acharya
Towards Data Science
5 min read · Jan 23, 2020


The “Structure for the White Paper on Artificial Intelligence — a European approach” provides an overview of the regulatory lay of the land for AI in the EU and proposes several options for regulation going forward. It lays out the existing policy framework for artificial intelligence at the EU level and beyond, outlines why AI needs to be promoted across Europe, proposes better access to data through edge computing, digs into the key elements of a future comprehensive European legislative framework for AI, and closes with five options for governance.

My comments draw on a thorough reading of the paper and on considerable time spent in my day job thinking through the implications of AI regulation and ethics. They focus on the parts I find most interesting (troubling?) rather than going in depth into every section. I will concentrate on sections 5 and 6, the meat of this white paper, as most of the rest is background material. In section 5B (EU legislative framework for artificial intelligence), the paper discusses the weaknesses of the current legislative framework:

  1. Limitations of scope as regards fundamental rights: the Charter of Fundamental Rights does not apply to situations involving only private-sector parties, and it covers only access to employment, social protection, education, and public services such as housing. The implications of AI systems go far beyond these areas.
  2. Limitations of scope with respect to products: thus far, EU product safety legislation has applied only to the placing of products on the market, not to services based on AI. With the new wave of companies touting ‘AI as a service’, this is going to be a problem.
  3. Uncertainty as regards the division of responsibilities between different economic operators in the supply chain: under current EU legislation, AI software in existing products (such as self-driving cars) is not covered; only the products themselves are. For example, if BMW’s self-driving vehicles used Chinese algorithms and those vehicles malfunctioned, only BMW would be liable, not the algorithm developer.
  4. Changing nature of products: EU legislation was designed for products that do not change significantly once placed on the market, and has not accounted for products that receive software upgrades or learn over time, such as pacemakers that dynamically adjust the current they deliver as your body’s functioning changes.
  5. Emergence of new risks: current legislation does not adequately cover cybersecurity risks, malfunction due to loss of connectivity, machine learning during product use, etc.
  6. Difficulties linked to enforcement: current laws are not designed for ‘black-box’ systems that make automated decisions, where causality is difficult to prove and liability is therefore difficult to attribute.

Section 5E (Possible types of obligations) outlines possible ex ante and ex post requirements for regulating AI systems in the future. My comments focus on two ex ante requirements that seem problematic or incomplete.

As part of the accountability and transparency requirements for developers (by which I assume the paper means the organization that creates the AI system, not just its coders), the proposed requirement is to “disclose the design parameters of the AI system, metadata of datasets used for training, on conducted audits, etc.” While this is a great suggestion, the EU needs to think seriously about how it will protect intellectual property. Companies have built businesses around keeping their algorithms secret, much as pharmaceutical companies have built theirs around keeping drug formulas secret. That said, intellectual property principles similar to those applied in drug manufacturing could be applied to AI algorithms. A sketch of what such a disclosure might look like follows below.
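
To make the disclosure idea concrete, here is a minimal sketch of what a machine-readable disclosure record could look like. The field names and structure below are entirely my own illustration (the white paper specifies no schema), and note how it can describe design parameters, dataset metadata, and audits without revealing the algorithm itself:

```python
# Illustrative sketch only: every field name here is my own invention,
# not a schema proposed in the white paper.
import json

disclosure = {
    "system": {
        "name": "loan-approval-scorer",  # hypothetical system name
        "version": "2.3.1",
        "design_parameters": {
            "model_family": "gradient-boosted trees",
            "objective": "binary default-risk classification",
        },
    },
    "training_data": {
        # Metadata only: describes the datasets without disclosing them.
        "sources": ["internal loan book 2015-2019"],
        "record_count": 1_200_000,
        "known_gaps": ["under-represents applicants under 25"],
    },
    "audits": [
        {
            "date": "2019-11-02",
            "auditor": "external",
            "finding": "no disparate impact detected",
        },
    ],
}

print(json.dumps(disclosure, indent=2))
```

A record like this would let a regulator inspect how a system was built and tested while the model weights and code stay private, which is roughly the balance drug regulators strike with proprietary formulas.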

The other ex ante requirement I want to flag is human oversight: the possibility of a human reviewing an automated decision made by an AI system. While I agree with this in principle, AI systems are going to be deployed in increasingly complex areas of our lives (e.g. population health), and no single human, or even a collection of humans, may be able to provide unbiased oversight without knowing why the AI system made the decision it made. Knowing this is not always possible with black-box algorithms, although a growing set of tools is making black boxes more explainable.
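
As one example of such tools, here is a minimal sketch using the open-source SHAP library with a scikit-learn model; the dataset and model are placeholders for illustration, and SHAP is only one of several explainability approaches (LIME and counterfactual methods are others):

```python
# Minimal sketch, assuming the open-source `shap` and `scikit-learn` packages.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model: a random forest trained on a toy dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) for each
# individual prediction, turning a black-box output into an additive breakdown
# that a human overseer can actually read.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # shape: (5, n_features)

# For the first prediction, show which features pushed the output up or down.
print(dict(zip(X.columns, shap_values[0])))
```

Tools like this do not fully open the black box, but they give a human reviewer a per-decision account of which inputs mattered, which is the minimum needed for the oversight the paper proposes.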

The EU’s five proposed regulatory options are all weak:

  1. Voluntary labeling: this gives the illusion of responsible behavior while letting potential perpetrators get off scot-free, or it creates a whack-a-mole problem;
  2. Sectoral requirements for public administration and facial recognition: first, the issue is not just facial recognition (any kind of biometrics raises similar concerns, including gait or movement analysis, fingerprint or retinal scans, and behavioral pattern recognition); second, this option covers only the public sector, and the use of AI systems by the private sector is what concerns me more. Finally, it suggests banning technologies such as facial recognition for 3–5 years while the EU figures out the rules. That must be a joke: in AI terms, the world will have advanced by light years in 3–5 years, after which EU players will not only be far behind in these technologies, they will also have completely missed the boat on improving regulation through open and productive discourse now;
  3. Mandatory risk-based requirements for high-risk applications: this comes closest to a workable solution, but it leaves considerable wiggle room over what counts as a high-risk sector or a high-risk requirement (opening up the potential for bias). That said, it seems like the most sensible of the five options;
  4. Options 4 (safety and liability) and 5 (governance) apply to all three options above and don’t really belong in this list; the group writing the white paper must have been exhausted by the time they got to this point!

The paper concludes that “the Commission is of the view that” combining option 3 with options 4 and 5 would be best. That is great; however, it is also important to create an arm within each existing regulatory body that employs a cadre of AI experts able to judge whether the design, development, and deployment of an AI system is ethical, and to provide informed recommendations or enforcement decisions when it is not.

Historically, the EU has been extremely progressive in pioneering regulations that protect the fundamental tenets of human rights, dignity, and respect. This has often required radical thought leadership, significant resources, and a shake-up of the status quo. I believe the time is right for such a shake-up again. Regulating AI systems and putting in place the right governance for them requires not incremental, but disruptive innovation, and I would rather it come from the EU than from Google.
