Illustration by author with assets from Undraw

AI Strategy in EU Towards 2020

A Summary of the Various Strategies and Policies Relating to Artificial Intelligence 2018–2019

Alex Moltzau
Towards Data Science
24 min read · Dec 28, 2019

--

Summary

The EU's investment in ethical AI is coordinated through guidelines and recommendations, neither of which is legally binding for member countries. It has led to broad engagement and specific initiatives that will be rolled out across the region, such as access to the free course created by Finland (Elements of AI) translated into all the languages of the member states. The strategy also includes the piloting of ethics guidelines and a clearly stated commitment to investing in AI with environmental and climate concerns in mind, alongside a human-centric approach addressing issues of sustainability. These strategic documents foreshadow a great increase in investment into AI research in the coming decade, 2020–2030. The EU's current main focus is on ethical use; increased understanding by the public; and practical, responsible collaboration on applications.

Why Attempt Understanding the EU Strategy on AI?

In the last week of 2019, I decided to go through the strategies relating to artificial intelligence issued within the European Union (EU) over the last two years. Since it is now the end of December 2019, I thought it would be useful to recap certain strategic initiatives within the European Union relating to the field of artificial intelligence. In doing so I have looked at five different documents issued by the EU. This is of course not a comprehensive review; rather, it is an attempt to take a selection of the documents issued by the EU over the last few years and give brief introductions to what they contain. I do this firstly to learn; secondly, so that citizens in member countries of the EU can pursue the strategy in a more coordinated way; and thirdly so that those outside the EU interested in policy may get a grounding in where the EU is currently headed in terms of its investment in ethical artificial intelligence.

The five documents are as follows:

  1. Declaration of cooperation on Artificial Intelligence (2018, April)
  2. Artificial Intelligence for Europe (2018, April)
  3. Ethics Guidelines for Trustworthy AI (2019, April)
  4. Policy and investment recommendations for trustworthy Artificial Intelligence (2019, June)
  5. The European AI Alliance Assembly (2019, June)

I will go through each document below, starting with the declaration.

1. Declaration of Cooperation on Artificial Intelligence

25 European countries signed a Declaration of cooperation on Artificial Intelligence on the 10th of April 2018. Norway was part of this, although it is not an EU member (it is part of the EEA, the European Economic Area). The declaration was said to build on the pre-existing investment and community in Europe.

It set out boosting technology and industrial capacity through access to public sector data; addressing socio-economic changes, especially in the labour market; and ensuring an adequate legal and ethical framework building on fundamental rights and values, as well as transparency and accountability.

The commitments that “Member States agree to” (I have simplified at times):

  1. Work towards a comprehensive and integrated European approach on AI to increase the EU’s competitiveness, attractiveness and excellence in R&D in AI (where needed modernise national policies).
  2. Encourage discussions with stakeholders on AI and support the development of a broad and diverse community of stakeholders in a European AI Alliance to build awareness and foster the development of AI that maximises benefit to economy and society.
  3. Consider allocation of R&D&I funding to the further development and deployment of AI, including on disruptive innovation and applications, as a matter of priority.
  4. Reinforce AI research centres and support their pan-European dimension.
  5. Establish Digital Innovation Hubs at the European level.
  6. Make AI available in the public sector and exchange best practices on procuring and using AI in government.
  7. Help SMEs and companies from non-technological sectors get access to AI.
  8. Exchange views on ethical and legal frameworks related to AI.
  9. Ensure humans remain at the centre of the development, deployment and decision-making of AI, and prevent the harmful creation and use of AI applications.
  10. Advance the public understanding of AI.
  11. Engage in a continuous dialogue with the Commission on AI.

2. Artificial Intelligence for Europe

This communication has a more positive slant in terms of what AI can solve and gives an introduction to what AI is. The message is that AI is transforming society like the steam engine or electricity. It says a solid European framework is required.

This report says that the EU should have a ‘coordinated approach’ towards AI for good and for all. It suggests doing so through:

  1. World-class researchers, labs and startups
  2. Digital Single Market — common rules for data protection, cybersecurity and connectivity
  3. Unlocking data (termed as ‘the raw material for AI’ by the report)

It mentions the commitment in the declaration. In almost the same breath, it puts particular emphasis on competition, leaving no one behind, and the EU's sustainable approach to technologies: an approach that benefits people and society as a whole.

It goes back to the review of the Digital Single Market strategy in 2017, where the invitation to explore the EU approach to AI partly originated (with civil law rules on robotics, for example). The three points there were (1) boosting tech capacity, (2) preparing for socio-economic changes and (3) ensuring an ethical and legal framework.

A different part of the communication, on the EU's position in the competitive international landscape, outlines the increased investment in unclassified AI research by the US and China, thus making it clear that the EU is ‘behind in private investments’. There was an expressed wish to create an environment that stimulates investments. The EU apparently produces one-fourth of the world's professional service robots. “Europe cannot miss the train”, and the benefits of adopting AI are widely recognised; a few projects funded by the EU are mentioned (agriculture, healthcare, infrastructure and manufacturing).

In outlining the way forward, the communication says that a joint effort by the private and public sectors is needed by 2020 and beyond. It outlines an increase in investment from around EUR 4–5 billion towards EUR 20 billion over the following decade.

Stepping up investments is thus stated as a goal, with growth towards this number by the end of 2020. This report also mentions supporting centres of excellence (and digital innovation hubs). The ‘AI-on-demand platform’ is mentioned here as something that can help facilitate collaboration between the more than 400 digital innovation hubs, and hubs focused on AI will be created.

Toward 2020 they will invest EUR 1.5 billion in research and innovation; strengthening centres of excellence; and a toolbox for potential users. They talk of an AI-on-demand platform and industrial data platforms giving access to quality datasets. Beyond 2020 they will be upgrading and supporting public-interest applications and a support centre for data sharing, alongside a variety of upgrades to existing policies.

There is an aim to make more data available; the EU has been doing so over the last 15 years, an example being the EU's space programmes. The communication talks of an ageing society and enhancing people, and under ‘leaving no one behind’ it talks of new skills. This was largely oriented towards STEM, while in another section it talks of diversity as well as interdisciplinary approaches:

  • More women and people of diverse backgrounds, including people with disabilities.
  • Interdisciplinary approaches, combining joint degrees, for example in law or psychology and AI. The importance of ethics is mentioned here, as is the creation of an attractive environment to make talent stay in Europe.

The report had plans for education policies in 2018 with (re-)training schemes, analysis of the labour market, digital traineeships in advanced digital skills, business-education partnerships and social partners to include AI in impact studies.

“Proposals under the next EU multiannual financial framework (2021–2027) will include strengthened support for the acquisition of advanced digital skills including AI-specific expertise.”

In ensuring the ethical framework there is a mention of fundamental rights, the GDPR, the digital single market and explainable AI systems. In the last paragraph, there is additionally a question of intellectual property rights. Draft ethics guidelines were to be developed towards the end of the year. Safety and liability were mentioned, leading into the empowerment of individuals through a guidance document on the Product Liability Directive in light of technological development by mid-2019. A pilot project on Algorithmic Awareness Building was mentioned too, as well as support for consumer organisations on data protection.

Engaging member states is important in this work, and this section mentioned Finland's national strategy. It says: “Every member state is encouraged to have an AI strategy, including on investment.” A multi-stakeholder European AI Alliance was mentioned here, as well as international outreach.

“With AI being easily tradeable across borders, only global solutions will be sustainable in this domain.”

The EU's contribution is mentioned with its values and fundamental rights, and this is repeated in the conclusion, ending with the wish to place the power of AI at the service of human progress.

3. Ethics Guidelines for Trustworthy AI

3.1 The Independent High-Level Expert Group on Artificial Intelligence

The documents that I examine related to ethics and specific policy, as well as investments, were put together by the Independent High-Level Expert Group on Artificial Intelligence (AI HLEG). Therefore, I thought it may be good to first explain what the AI HLEG is, its role and its members. There is a page for the AI HLEG on the EU website.

“Following an open selection process, the Commission has appointed 52 experts to a High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society, as well as industry.”

The group's general objective is to support the implementation of the European Strategy on Artificial Intelligence. This relates to policy development and to ethical, legal and societal issues around AI, including socio-economic challenges. Since its creation, the EU states, the group has delivered both ethics guidelines and policy and investment recommendations.

The AI HLEG is also the steering group for the European AI Alliance, a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society. There was a European AI Alliance Assembly in June 2019, and a recording of the conference is available online.

The focus of the assembly was discussing investment and ethics. There is a piloting process under way whose learnings may lead to additional documents in the coming year, or at least to information released internally to participating members.

The European AI Alliance is a forum that engages more than 3000 European citizens and stakeholders in a dialogue on the future of AI in Europe.

You can register online to join at Futurium. Once your Futurium account is created, you will be able to fill in the online registration form to join the European AI Alliance.

All the members of AI HLEG are publicly available online.

3.2 Ethics Guidelines for Trustworthy AI

The document is split into three sections: foundations, realising and assessing trustworthy AI. You could say, in some sense, these cover what it is built on in terms of values, how we build it, and how we know whether what we have built is good or not. They outline that trustworthy AI should be (1) lawful, complying with applicable law; (2) ethical, adhering to values; and (3) robust from a technical and social perspective. If tensions arise between these components: “…society should endeavour to align them.”

One should develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability.

Tensions between these should also be resolved when they appear. Situations involving vulnerable groups should be prioritised; within this consideration we find, for example, children, persons with disabilities, and asymmetries of power (employee/employer and business/consumer). While bringing benefits, AI systems pose certain risks, and some of these might be hard to measure, such as the impact on democracy, the rule of law and the human mind. Measures have to be taken to mitigate these risks.

There are seven requirements that AI systems should meet through both technical and non-technical methods.

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Environmental and societal well-being
  7. Accountability

Technical and non-technical methods have to be considered to ensure implementation of those requirements: fostering innovation, communicating in a clear manner to stakeholders, and facilitating the traceability and auditability of AI systems. Adopting a trustworthy AI assessment list can be useful, adapting it to specific cases while keeping in mind that such lists are not exhaustive.

In short, we could say that according to the report there are three components of trustworthy AI:

  • Lawful
  • Ethical
  • Robust

Each of the three is necessary, but not sufficient.

Ideally, all three work in harmony and overlap in their operation. In practice, however, there may be tensions between these elements (e.g. at times the scope and content of existing law might be out of step with ethical norms). It is our individual and collective responsibility as a society to work towards ensuring that all three components help to secure Trustworthy AI.

The report talks of this as ‘responsible competitiveness’ in a global framework. Stakeholders can voluntarily use these guidelines as a method to operationalise their commitment. The report argues that different situations raise different challenges (a music recommendation system vs. critical medical treatment), so the guidelines have to be adapted to different situations. As mentioned, people are invited to pilot the Trustworthy AI assessment list that operationalises this framework.

These Guidelines articulate a framework for achieving Trustworthy AI based on fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (EU Charter), and in relevant international human rights law.

Below, I sum up Trustworthy AI’s three components.

(I) Lawful: AI does not operate in a lawless world. It is important to consider EU primary law: the Treaties of the European Union and its Charter of Fundamental Rights; EU secondary law such as the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and safety and health at work Directives; the UN Human Rights treaties and the Council of Europe conventions (such as the Convention on Human Rights); and numerous EU Member State laws. Beyond these, various domain-specific laws apply. The guidelines do not primarily deal with these, and none of the text should be regarded as legal advice.

(II) Ethical AI: laws are not always up to speed with technical developments, and can be out of step with ethical norms or not suited to addressing certain issues.

(III) Robust AI: individuals and society must be confident AI systems will not cause any intentional harm. Systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. This is needed both from a technical and social perspective.

A model is used in the guideline document to display this approach, moving from the foundations of trustworthy AI through its realisation to its assessment.

The report frames AI ethics as a subfield of applied ethics relating it to the EU Agenda 2030. It speaks as well of building an ethical culture and mind-set through public debate, education and practical learning.

The fundamental rights mentioned are (1) respect for human dignity; (2) freedom of the individual; (3) respect for democracy, justice and the rule of law; (4) equality, non-discrimination and solidarity; and (5) citizens’ rights.

3.3 The Four Principles

It further outlines the four principles mentioned earlier.

Human autonomy: following human-centric design principles and leaving options for meaningful human choice and human oversight over work processes in AI systems. It should aim for the creation of meaningful work.

Prevention of harm: AI should not exacerbate harm to human beings, so attention must be paid to systems where asymmetries of power or information can arise. Preventing harm also entails consideration of the natural environment and all living beings.

Fairness: development and deployment must be fair. This has both a substantive and a procedural dimension. It should increase societal fairness and equal opportunity while balancing competing interests and objectives. In order to seek redress against decisions, the entity accountable for a decision must be identifiable and the process of making the decision should be explicable.

Explicability: processes need to be transparent, capabilities communicated, and systems explainable to those directly and indirectly affected. According to the report, an explanation is not always possible (the so-called ‘black box’ cases); in such instances other measures may be required (traceability, auditability and transparent communication on system capabilities). This is dependent on the context and the severity of the consequences.

Different stakeholders have different roles to play:

a. Developers should implement and apply the requirements to design and development processes;

b. Deployers should ensure that the systems they use and the products and services they offer meet the requirements;

c. End-users and the broader society should be informed about these requirements and able to request that they be upheld.

3.4 Requirements of Trustworthy AI

Both systemic and individual aspects matter in these requirements.

These different aspects are described in detail within the report. Within each requirement there is a breakdown of sub-requirements, or perhaps keywords to consider.

Human agency and oversight. Systems should support human autonomy and act as an enabler of a democratic and equitable society by supporting the user's agency. Fundamental rights can help in letting people track their personal data or in increasing the accessibility of education. Given the reach and capacity of AI systems, they can negatively affect fundamental rights; therefore, in cases where such risks exist, a fundamental rights impact assessment should be undertaken. This should be done prior to the system's development and include an evaluation of whether those risks can be reduced or justified in order to respect the freedoms of others. Moreover, mechanisms should be put in place to receive external feedback regarding AI systems that potentially infringe on fundamental rights. Users should be able to self-assess or challenge the system where reasonable. Human autonomy should be preserved so that humans are not subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them.

Additionally, governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach can be required.

Human-in-the-loop (HITL): refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable.

Human-on-the-loop (HOTL): refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation.

Human-in-command (HIC): refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation.

This can include the decision not to use an AI system in a particular situation. Oversight mechanisms can be required in varying degrees to support safety and control depending on the application area and potential risk.

“All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”
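To make these oversight levels concrete, here is a minimal Python sketch of how such a governance mechanism could look. The function names, confidence threshold and escalation logic are my own illustration, not something specified by the guidelines.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def oversee(
    model_predict: Callable[[dict], tuple[str, float]],
    ask_human: Callable[[dict], str],
    case: dict,
    confidence_floor: float = 0.9,  # hypothetical threshold: below this, escalate
    system_enabled: bool = True,    # HIC: a human may refuse to use AI at all
) -> Decision:
    """Route a single case through an AI system under human oversight."""
    if not system_enabled:
        # Human-in-command: the decision not to use an AI system in this situation.
        return Decision(ask_human(case), 1.0, "human")
    label, confidence = model_predict(case)
    if confidence < confidence_floor:
        # Human-in-the-loop: a human intervenes in this decision cycle.
        return Decision(ask_human(case), 1.0, "human")
    return Decision(label, confidence, "model")
```

A human-on-the-loop (HOTL) variant would instead log every model decision for later human monitoring rather than intervening case by case.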

Technical robustness and safety. According to the report this is closely linked to the principle of prevention of harm. AI systems must reliably behave as intended while minimising unintentional and unexpected harm; this should also apply to changes in the operating environment or the presence of other agents (one could perhaps relate this to AI safety in an operational sense). The physical and mental integrity of humans should be ensured.

Resilience to attack and security is an aspect of this, and as such AI systems need to be protected from hacking. This includes the targeting of the data (data poisoning), the model (model leakage) or the underlying infrastructure, both software and hardware. If an AI system is attacked, it can make different decisions or be caused to shut down. Unintended applications and potential abuse by malicious actors should be taken into account, and steps should be taken to mitigate these.

A fallback plan and general safety measures in case of problems can be devised. This could mean switching from a statistical to a rule-based procedure, or asking a human before continuing an action. A process to clarify and assess the potential risks of AI across various application areas should be established, and safety measures must be treated proactively.

Accuracy concerns correct judgements, for example in classifying information into the proper categories. A high level of accuracy is especially crucial in situations where the AI system directly affects human lives.

Reliability and reproducibility are critical to be able to scrutinise an AI system and to prevent unintended harms. Reproducibility describes whether an AI experiment exhibits the same behaviour when repeated under the same conditions. This enables scientists and policy makers to accurately describe what AI systems do, and replication files can facilitate the process of testing and reproducing behaviours.
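The fallback idea mentioned above, switching from a statistical to a rule-based procedure or deferring to a human, can be sketched in a few lines. The confidence threshold and the toy rule below are hypothetical assumptions for illustration only.

```python
def classify_with_fallback(features, model, rules, min_confidence=0.8):
    """Prefer the statistical model; fall back to a transparent
    rule-based procedure when the model is not confident enough."""
    label, confidence = model(features)
    if confidence >= min_confidence:
        return label, "statistical-model"
    # Fallback: deterministic, auditable rules.
    for condition, rule_label in rules:
        if condition(features):
            return rule_label, "rule-based-fallback"
    # Last resort: refuse to decide automatically and defer to a human.
    return None, "deferred-to-human"

# Hypothetical usage: a toy model and one hand-written rule.
model = lambda f: ("approve", 0.55)  # returns (label, confidence); too uncertain
rules = [(lambda f: f["amount"] > 10_000, "flag-for-review")]
print(classify_with_fallback({"amount": 25_000}, model, rules))
# -> ('flag-for-review', 'rule-based-fallback')
```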

Privacy and data governance. According to the report, privacy is a fundamental right affected by AI systems. This means we need the right data governance, data integrity, access protocols and a data-processing capability that protects privacy. Data protection is important in this regard throughout a system's lifecycle; there needs to be consideration both for information initially provided and for information generated in interactions. Digital records of human behaviour may allow AI systems to infer not only individual preferences, but also sexual orientation, age, gender, and religious or political views.

The quality and integrity of data are paramount to the performance of AI systems, and this has to be addressed prior to training with any given data set. The integrity of the data must be ensured so that malicious data is not fed to an AI system in a way that changes its behaviour, especially with self-learning systems. Data sets must therefore be tested and documented at each step of the way. This should also apply to AI systems that were not developed in-house but acquired elsewhere.

In any given organisation, handling data is important, and protocols governing data should be put in place. Access to data needs to be clearly regulated, with only qualified personnel who have both the competence and the need to access individuals' data being allowed to do so.
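As one way of picturing a data access protocol of this kind, here is a minimal sketch in which access to data categories is gated by role and every attempt is logged. The roles, categories and rules are my own hypothetical examples, not taken from the guidelines.

```python
# Hypothetical access rules: role -> data categories that role may read.
ACCESS_RULES = {
    "data-scientist": {"anonymised-features"},
    "clinician": {"anonymised-features", "patient-records"},
}

AUDIT_LOG = []  # every access attempt is recorded for later review

def request_access(user_role: str, data_category: str, purpose: str) -> bool:
    """Grant access only to qualified roles, and log every attempt."""
    allowed = data_category in ACCESS_RULES.get(user_role, set())
    AUDIT_LOG.append({
        "role": user_role,
        "category": data_category,
        "purpose": purpose,   # documenting the need, not just the identity
        "granted": allowed,
    })
    return allowed

# A data scientist may read anonymised features but not raw patient records.
assert request_access("data-scientist", "anonymised-features", "model training")
assert not request_access("data-scientist", "patient-records", "model training")
```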

Transparency covers the elements relevant to an AI system: the data, the system and the business models. The process that yields decisions in AI systems should be documented to the best possible standard to allow for traceability. This helps us know why an AI decision was erroneous and, in turn, helps prevent future mistakes, enabling easier auditability and explainability.

Explainability concerns both technical processes and human decisions. The technical side requires that decisions can be traced and understood by human beings. The report mentions a trade-off whereby explainability may reduce accuracy; the explanation also has to be adapted to the stakeholder involved (layperson, regulator, researcher).

In communication, AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. AI must be identifiable as such, and options to decide against this interaction in favour of human interaction should be provided to ensure compliance with fundamental rights. Limitations should be communicated, encompassing the system's level of accuracy.
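Traceability of this kind is often implemented as structured decision logging. Below is a minimal sketch that records the inputs, model version and output of every decision so that an erroneous one can be reconstructed later; the field names and file format are my own assumptions.

```python
import json
import time
import uuid

def log_decision(inputs: dict, output, model_version: str,
                 logfile: str = "decisions.jsonl") -> str:
    """Append one auditable record per decision (JSON Lines format)."""
    record = {
        "decision_id": str(uuid.uuid4()),  # stable reference for redress requests
        "timestamp": time.time(),
        "model_version": model_version,    # ties the decision to exact code/data
        "inputs": inputs,
        "output": output,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Hypothetical usage: every prediction leaves an audit trail.
decision_id = log_decision({"age": 42, "income": 30_000}, "approve", "v1.3.0")
print(f"Logged decision {decision_id}")
```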

Diversity, non-discrimination and fairness. This means involvement of all affected stakeholders, giving equal access throughout the design process as well as equal treatment, linked to the principle of fairness. Avoidance of unfair bias must be strived for; bias against groups of people can arise from inadvertent historic bias, incompleteness and bad governance models. Harm can also result from the intentional exploitation of (consumer) biases or unfair competition, and could be counteracted by putting in place oversight processes to analyse and address the system's purpose, constraints, requirements and decisions in a clear and transparent manner. Moreover, hiring from a diversity of backgrounds, cultures and disciplines can ensure diversity of opinions and should be encouraged. Accessibility and universal design should enable the use of AI products regardless of age, gender, abilities or characteristics; access for people with disabilities is of particular importance. AI systems should therefore not take a one-size-fits-all approach; universal design will enable equitable access and active participation. Stakeholder participation is advisable and beneficial, and could take place throughout the system life cycle.
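Avoidance of unfair bias can be supported by simple statistical checks. As an illustration, the sketch below computes a demographic parity ratio across groups; the 0.8 threshold echoes the common ‘four-fifths rule’ and is my own assumption, not part of the guidelines.

```python
from collections import defaultdict

def demographic_parity_ratio(outcomes):
    """outcomes: iterable of (group, got_positive_outcome) pairs.
    Returns min rate / max rate across groups (1.0 = perfect parity)."""
    positives, totals = defaultdict(int), defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit: group A gets positive outcomes twice as often as group B.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio = demographic_parity_ratio(sample)
if ratio < 0.8:
    print(f"Potential disparate impact: parity ratio = {ratio:.2f}")
```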

Societal and environmental well-being. AI systems should be used to benefit all human beings, including future generations. The sustainability and ecological responsibility of AI systems should be encouraged, and research fostered into AI solutions addressing areas of global concern, such as the Sustainable Development Goals (SDGs). The system's development, deployment and use process, as well as its entire supply chain, should be assessed in this regard. The effects of these systems on social life in all areas must be monitored and considered as well. For society and democracy, the effect on institutions must be given careful consideration, including in both political decision-making and electoral contexts.

Accountability. This last requirement complements the previous ones, as it necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use. Auditability entails enabling the assessment of algorithms, data and design processes. Evaluation must be possible by internal and external auditors, and the availability of such reports can contribute to the trustworthiness of the technology. In applications affecting fundamental rights, including safety-critical applications, AI systems should be independently auditable. The ability to report on actions and respond to consequences must be ensured, minimising and reporting negative impacts. The use of impact assessments, for example through red teaming or forms of Algorithmic Impact Assessment, both prior to and during development, can help minimise negative impact proportionate to the risk that an AI system poses. Trade-offs may arise when implementing these requirements; each trade-off should be reasoned about and properly documented. Redress needs to be available when unjust adverse impact occurs, especially for vulnerable persons or groups.

These are the seven requirements: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; (7) accountability.

In addition, it is important to evaluate and justify these requirements throughout the system's life cycle: during use, analysis, development and re-design.

They describe technical and non-technical methods to ensure trustworthy AI.

4. Policy and Investment Recommendations for Trustworthy Artificial Intelligence

4.1 Using Trustworthy AI to Build A Positive Impact in Europe

The following are highlights of the 2019 policy and investment recommendations of the High-Level Expert Group on Artificial Intelligence. The document detailing these recommendations was made public on the 26th of June 2019. I call these ‘highlights’ because they are what I took notice of within the report; you may notice something else.

The report relates to the private sector and how human-centric AI-based services and government as a platform can catalyse AI development in Europe. As described previously, this is done through world-class research capabilities with the right infrastructure, and through generating skills and education within the field of AI. From this follows establishing the appropriate governance and regulatory framework, as well as raising funding and investment, particularly addressing the question of how to enable an open and lucrative investment climate that rewards trustworthy AI.

The report is split into two chapters and a conclusion. The first gives specific recommendations for policies in Europe, and the second is more specifically about “leveraging Europe’s enablers”.

4.2 Highlights Within the First Chapter on AI Policy to Create a Positive Impact in Europe

Humans and society. A first suggestion, which I think is a good one, is building digital literacy through courses (Massive Open Online Courses, MOOCs) across Europe to provide elementary AI training. Another is to integrate AI training more closely across education levels, as well as informing people about educational resources online and facilitating discussions. There is also the suggestion of a yearly European AI Awareness Day (for example on the birthday of Alan Turing).

Protecting the integrity of humans, society and the environment is a clear recommendation too: refraining from disproportionate mass surveillance, commercial surveillance and asymmetries in digital power.

Other recommendations include encouraging the automation of dangerous tasks, establishing a fund to manage the transformation, introducing a duty of care for developers, and encouraging better and safer AI for children.

Measuring and monitoring the societal impact of AI is additionally said to be an important priority. There is talk of establishing monitoring mechanisms and supporting civil society organisations.

(There are three more sections detailing the private sector, the public sector, and research and academia.)

4.3 Leveraging Europe’s Enablers for Trustworthy AI

This chapter covers investing in computing infrastructure and a network of facilities; developing compliant data management and trusted data spaces, as well as creating a data donor scheme; and supporting mechanisms for cutting-edge research and commercial development while developing an infrastructure for cybersecurity covering entire data-transmission systems. Attaining the necessary skills is also covered, with a mention of gender competence training in STEM, and developing and retaining talent in Europe is noted as important.

5. The European AI Alliance Assembly

After the launch of the European AI Strategy in April 2018 a High-Level Expert Group on AI (AI HLEG) was created. This group drafted a document on AI Policy and Investment Recommendations. The European AI Alliance was set up in parallel to the AI HLEG.

In June 2019, 500 members of the group met in the European AI Alliance Assembly to discuss the latest achievements in AI policy and its future perspectives.

It is a forum that engages more than 3000 European citizens.

It was seen as a multi-stakeholder forum that could give input to EU policy-making more generally. Input received from the AI Alliance after the presentation of the Ethics Guidelines for Trustworthy AI (another report) was part of creating the policy and investment recommendations.

AI HLEG is the steering group of the AI Alliance.

It is possible to join the forum online. If you register for the AI Alliance, you can access an EU platform called Futurium.

The goals of the AI Alliance are the following:

  • Full mobilisation of a diverse set of participants including businesses, consumer organisations, trade unions, and other representatives of civil society
  • In particular, helping to prepare the ethics guidelines and ensuring the competitiveness of the European region in the field of artificial intelligence.
  • Piloting the Ethics Guidelines for Trustworthy AI.

Thus, one should follow this development closely if one is interested in the field of artificial intelligence.

The piloting phase is set to be six months from the meeting of the AI Alliance, towards December 2019, according to the European Commissioner for Digital Economy and Society, Mariya Gabriel.

Following it is particularly worthwhile if you are interested in policy relating to artificial intelligence, regulation and ethics.

Pekka Ala-Pietilä has been the chair of the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG). He mentioned eleven key takeaways during his talk at the AI Alliance Assembly in June 2019.

Key takeaways from the AI HLEG Policy and Investment Recommendations according to Pekka Ala-Pietilä, as presented at the June assembly:

  1. Empower and protect humans and society
  2. Take up a tailored approach to the AI market
  3. Secure a Single European Market for Trustworthy AI
  4. Enable AI ecosystems through sectoral multi-stakeholder alliances
  5. Foster the European data economy
  6. Exploit the multi-faceted role of the public sector
  7. Strengthen and unite Europe’s research capabilities
  8. Nurture education to the Fourth Power
  9. Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework
  10. Stimulate an open and lucrative investment environment
  11. Embrace a holistic way of working, combining a 10-year vision with a rolling action plan

“A major opportunity is knocking on Europe’s door. That opportunity is AI-enabled.” — Pekka Ala-Pietilä, Chair of AI HLEG, June 2019

It was mentioned later, during the panel on launching a piloting process for trustworthy AI, that certain members of the AI HLEG will spend some time with those piloting the guidelines, and that it might be good to tailor the guidelines to different sectors. There is a question of whether the guidelines can be operationalised at all, since AI is more than just technology. An important aspect of this discussion was the incentives around requiring some form of self-assessment or externally certified practice: the difference between it feeling like a test or external audit versus a self-assessment. There was also a discussion of bridging ethics with current laws. According to Richard Benjamins from Telefónica, consultancies like KPMG and Deloitte are setting up teams in Europe to support large technology companies with ethical approaches.

Summary

The EU's investment in ethical AI is coordinated through guidelines and recommendations, neither of which is legally binding for member countries. It has led to broad engagement and specific initiatives that will be rolled out across the region, such as access to the free course created by Finland (Elements of AI) translated into all the languages of the member states. The strategy also includes the piloting of ethics guidelines and a clearly stated commitment to investing in AI with environmental and climate concerns in mind, alongside a human-centric approach addressing issues of sustainability. These strategic documents foreshadow a great increase in investment into AI research in the coming decade, 2020–2030. The EU's current main focus is on ethical use; increased understanding by the public; and practical, responsible collaboration on applications.

Otherwise

It would be wise to understand DG Connect in this context, that is, the Directorate‑General for Communications Networks, Content and Technology. Its Strategic Plan 2016–2020 is of course relevant.

There was a press release on the 6th of June 2018 regarding the EU budget 2021–2027 for the Digital Europe programme, with a proposed investment of €9.2 billion.

I have yet to look into Liability for Artificial Intelligence (2019, November). However, I will do so and add it to this summary once I get the chance.

There is additionally the Strategic Research, Innovation and Deployment Agenda for an AI PPP that is in the consultation phase. PPP in this context is an abbreviation for Public-Private Partnerships.

A relevant aspect of the AI strategy is the European High-Performance Computing Joint Undertaking (EuroHPC), which has selected 8 sites for supercomputing centres, located in 8 different Member States, to host new high-performance computing machines. The eight sites were announced on the 7th of June 2019.

Please do notify me if you think I have missed any important documents or if parts of my summaries are lacking; I will strive to amend this if given notice.

This is #500daysofAI and you are reading article 207. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for 100 days 200–300 is national and international strategies for artificial intelligence.
