
Artificial Intelligence — Ethics vs. World Domination?

A strange question for Norway to ask and to answer

Alex Moltzau
Towards Data Science
10 min read · Aug 6, 2019


I was contacted by a friend who is helping to host an event at a large business conference in Norway where industry and politicians meet. The name of the event is “Artificial Intelligence – Ethics vs. World Domination?”. In this context I was asked a few questions, and I will do my best to answer them. First, however, I will discuss the series of questions I was sent relating to the topic. These concern competitiveness, human-centric AI, Norwegian interests and socially responsible AI. Let us begin with the description of the event.

The (Security) Dilemma

Artificial intelligence has been singled out as a strategically important technology for the competitive power of states. Globally, there is an arms race to set the most ambitious goals and strategies. The European Union wishes to take global leadership, both to strengthen business development and to strengthen the EU's strategic position in the world. At the same time, European countries have clear targets with regard to data security, ethics and a “human-centric” development of technology. According to some, these targets could be in conflict with the goal of winning the “AI arms race”, while global competitors such as China and the US face fewer such limitations, for example regarding access to the raw material of artificial intelligence development, in particular data.

I have in this context been asked the following questions:

  • Which Norwegian professors should be contacted in relation to this topic?
  • How does the EU strengthen its position within AI, and what does it mean to go for a “human-centric” artificial intelligence?
  • What is Norway’s role and possibilities in this picture and who is looking after Norwegian interests?
  • How important is it to develop AI for Norwegian and European competitiveness?
  • How can socially responsible AI become a competitive advantage in the geopolitical game?

Which Norwegian professors should be contacted in relation to this topic?

There are various professors in different fields who could be worth contacting. Each has their own specialisation, as professors do, carving out an area of focus. This is my short list of three people, in order of priority:

  1. Bjørn Høyland. He taught the module I completed this spring, Machine Learning in the Social Sciences, and is a professor at the Institute for Political Science. He teaches both STV2500 European Decision-Making Processes and Policy Areas and STV1515 Machine Learning and Programming for Social Scientists. He holds a PhD from the London School of Economics and Political Science (2005) and is part of the editorial board of the journal European Union Politics. His research interests are legislative politics, applied political methodology and computational social science.
  2. Lene Pettersen is an Associate Professor at the University of Oslo who has recently written about a participatory business model for news. Her background is a mix of anthropology, media studies and business. Moreover, she is a member of the board of the Center for Interdisciplinary Media Research (STM) and is one of the two editors of a special issue of Norsk medietidsskrift (1/2019) on behalf of STM, about algorithms, automation and data analytics in the digital media landscape.
  3. Leonora Onarheim Bergsjø and Håkon Bergsjø, two Norwegian academics, are coming out with a book on digital ethics, so they would likely have some clear thoughts on this subject. Leonora has a doctorate in ethics and philosophy of religion from the University of Oslo, and her research focuses on digital ethics.

Do I have any further tips for professors at UiO for an event during Arendalsuka? It is a tough question, since I have spent half a year trying to find professors at the University of Oslo who are interested in this subject from a social science perspective. I have organised a series of talks on the topic through AI Social Research, a youth initiative concerned with social science questions around the application of artificial intelligence.

How does the EU strengthen its position within AI, and what does it mean to go for a “human-centric” artificial intelligence?

A challenging and abstract question, to be sure. Are not most efforts within the field of AI human-centric? It is created by humans, after all, and instilled with actions based on our decisions. AI is not an ‘it’ or a ‘thing’, at least not in most cases yet; we compare it to ourselves, and the cliché has become the image of one robot arm and one human arm. It is a persistent image.

Wanting to ‘recreate in our image’ sounds intuitive. In particular, deep neural networks (DNNs), the most popular machine learning technique and the one that reinvigorated the field, are modelled after neurons (specialised cells that emit nerve impulses). Because of this, and due to a series of robots (almost) resembling human beings, it seems to have become an obsession of scientists and transhumanists, who advocate transforming the human condition through technological enhancement of human physiology. We are talking of the superhuman, the singularity and so on. It certainly has a religious vibe to it.
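To make the analogy concrete, here is a minimal sketch of what an ‘artificial neuron’ actually is: a weighted sum passed through a nonlinearity. This is my own illustration (the inputs and weights are made up) and is not tied to any particular framework, but it shows how loose the comparison with biological neurons really is.

```python
# Minimal sketch of an artificial neuron: a weighted sum plus a bias,
# squashed through a sigmoid "activation". The numbers below are made up.
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    pre_activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-pre_activation))  # sigmoid, output between 0 and 1

x = np.array([0.5, -1.0, 2.0])   # hypothetical inputs
w = np.array([0.8, 0.1, -0.4])   # hypothetical learned weights
print(artificial_neuron(x, w, bias=0.2))
```

A deep neural network is essentially many such units stacked in layers, with the weights adjusted during training; whatever else it is, it is arithmetic rather than a brain.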

I have previously written about AI and the Imposter Syndrome, discussing anthropomorphism (ascribing human characteristics) and why it may create a dangerous culture for developers and lead to a series of misunderstandings about the capabilities of a given project. In a sense it can be compared to striving for immortality or chasing the holy grail, if being human is a great goal in itself. I am not saying some people should not pursue it; however, it is potentially dangerous if this perception becomes the dominant ideology or dogma in the development of AI.

We must see the limitations and how technological applications function differently; even ‘machine behaviour’, as suggested by MIT, alludes to this opposition between human and machine. As in: it is not human behaviour, it is different, and it must be studied or understood as such. On the other hand, if we take a less social constructivist approach (in relation to this political perspective) and consider realism in the state-centred perspective on power, we could venture into completely different discussions. Yet let us keep to the constructivist perspective a tad longer.

How important is it to develop AI for Norwegian and European competitiveness?

The EU is strengthening its position within AI because AI has become securitised. Wæver and Buzan's theory of securitisation is fascinating in this regard. A common example used by theorists is how terrorism is a top priority in security discussions, even though people are much more likely to be killed by automobiles or preventable diseases than by terrorism. Securitisation studies aim to understand “who securitises (securitising actor), on what issues (threats), for whom (referent object), why, with what results, and not least, under what conditions.”

In this sense, the EU (securitising actor) is securitising AI (threat) on behalf of the human-centric approach (referent object, the ideal that has to be protected) in order to protect the citizens of the EU, as well as the community in Europe (audience). Of course, this could be redefined and is a simplification. Yet ‘human-centric’ assumes there is something inhuman about the alternative, and it could be argued that they are right.

I previously wrote about Facebook vs. EU Artificial Intelligence and Data Politics, in connection with the fines imposed by the EU against Facebook for its breach of privacy regulations. In that article I focus on the report by the European Union Agency for Fundamental Rights (FRA) called Data quality and artificial intelligence — mitigating bias and error to protect fundamental rights. This notion moves closer to what could be called fairness, which I have also explored in relation to gender and equality.

It is worth mentioning that this approach is not exclusive to the EU. Stanford recently opened its Institute for Human-Centered Artificial Intelligence. However, it can be problematic when such an institute receives a large portion of its funding from Microsoft (correct me if I am wrong), and when ethics institutes in Europe, such as the one in Munich, receive so much funding from Facebook. If being human-centred includes clear conflicts of interest, then these institutions are highly human-centric. We have a hard time making up our minds.

What is Norway’s role and possibilities in this picture and who is looking after Norwegian interests?

I have not fully answered this question; however, I did previously outline all the Scandinavian AI strategies for 2019. The greatest risk concerning us all is the climate crisis, and this should not be forgotten. Yes, it is important to develop responsible machine learning applications; however, the biggest threat in AI Safety right now is, and should be, climate change. If we include emissions resulting from exports, Norway is one of the largest CO2 emitters in the world. Cutting oil and gas and focusing on renewable energy, alongside high-technology development also within the field of AI, could be an approach.

There is a general consensus that this has to be done if we are to meet our commitments under the Paris Agreement, as well as our responsibility to ensure human survival. I am not hinting that AI is not a threat. Meanwhile, it is important to consider that Norway has no nuclear weapons, although it sells a sizeable amount of weapons through exports. The threat of nuclear war has been looming over humanity for a while, and it could be because we ask ourselves questions such as the one posed in this debate.

If we touch back on political science and international relations, the US is the hegemonic power with the greatest influence, although it is challenged by emerging powers such as China, and in the future we may have a bipolar world with two great powers as opposed to one (unipolar). Who is looking after Norwegian interests? China has recently been acquiring a series of Norwegian companies, and a rich businessman has been buying territory in Norway.

Chinese involvement in mining (copper) and forestry (the purchase of Norske Skog) could lead to Norway becoming a supplier to China to a greater extent. The US interest in Norway through NATO, with Jens Stoltenberg as its head, seems relatively clear. The EU is expanding its energy collaboration with Norway and as such also has a vested interest. Yet if we ask ‘who’ more specifically, as a person, I find it harder to answer the question.

If we ask ‘who’ as an institution, and Norwegian interest lies in peace, then the Peace Research Institute Oslo (PRIO) seems to be an important institution. Additionally, the Norwegian Defence Research Establishment (FFI), together with the Norwegian defence sector, has strong interests in maintaining our defence interests. There is large-scale recruitment to the Norwegian cyber defence across various roles; this much seems clear.

How can socially responsible AI become a competitive advantage in the geopolitical game?

It is hard to know whether drastic moves, such as basing all Norwegian government systems on blockchain, are being considered. Quantum computing combined with artificial intelligence in hacking efforts (particularly recurrent neural networks) seems a threat that could be lessened with stronger encryption (although this is not my area).

Estonia has done something along these lines with e-Estonia. This can be important in light of the recent ransomware attack on Georgia's courts and the hacking of VISMA, one of the largest Norwegian financial technology companies. As cyberattacks keep occurring, awareness of security, or the lack thereof, increases. However, it is vital that the energy use of AI be considered in this context as well. This digital security dilemma, or these digital insecurities, can additionally become a threat to our existence by worsening the climate crisis.
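To ground what ‘stronger encryption’ can mean in practice, here is a minimal sketch using the Python cryptography library to encrypt a record with a symmetric scheme (Fernet, built on AES). The record contents and key handling are purely illustrative assumptions on my part, not anything from this article; symmetric schemes of this kind are also generally considered less exposed to known quantum attacks than the public-key schemes used for key exchange.

```python
# Minimal, illustrative example: symmetric encryption of a made-up record
# with Fernet from the `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the secret key; must itself be stored securely
cipher = Fernet(key)

record = b"case_id=12345; status=pending"   # hypothetical sensitive data
token = cipher.encrypt(record)   # ciphertext, safe to store or transmit
print(cipher.decrypt(token))     # recovers the original bytes, only with the key
```

Encrypting data at rest and in transit is of course only one layer; the ransomware and breach examples above are as much about backups, patching and access control as about cryptography.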

The security dilemma is just as pertinent in regard to the digital; however, the focus on securitising data in this context can lead to two-level games, as described by Putnam, with relevance for international as well as domestic policy. It can also be said that this can benefit certain political figures when they make a threat out of ‘Russian’ (FaceApp being one example) or ‘Chinese’ interference in data.

Giving Facebook or Google large fines can in this context be seen both as a protective measure and as a way to prove political strength. Thus AI, or the ‘idea’ or ideology of what AI is supposed to be, can be exploited in a political context. There is a fear of new technology (well founded, of course) that can become pervasive in certain societies, and showing political will could of course be a strategic move.

Norway has already begun developing machine learning capabilities in NAV (the state welfare administration). I know this because they tried to recruit us at the university through talks. Combining programming with social science is increasingly needed; at least, that is what studies by the University of Copenhagen showed prior to the creation of their master's programme in Social Data Science. Can artificial intelligence be social democratic? Can AI Safety consider job safety or not?

Socially responsible AI does of course take social science, the natural sciences and other scientific fields into consideration. I believe the UK, as an example, has made a grand mistake by focusing its £200 million investment purely on the natural sciences. Yet they can of course prove me wrong. Why not go down a unique path and focus on what is believed to be Scandinavian, with participation in the democratic process as well as a focus on social democratic AI? Can we make sustainability a focal point in our development of AI? These are, to me, the interesting questions that need to be raised, rather than getting too hung up on world domination.

It is indeed a strange question to ask for little Norway, yet we have experienced invasions and do have to consider the possible repercussions of this development, while taking care of our citizens (new and old) in addition to tackling the challenging climate crisis we find ourselves in.

This is day 65 of #500daysofAI. My current focus for days 50–100 is AI Safety. If you enjoy this article, please give me a response, as I do want to improve my writing and discover new research, companies and projects.
