Despite the fact that artificial intelligence is already in widespread use around the globe, its development should be handled professionally and with caution, for we are still unaware of the effects a conscious machine could have on humanity. Leading researchers in the field warn of its potential consequences, with Professor Stephen Hawking claiming that a super-intelligent machine could spell the end of the human race¹.
Artificial intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence². In recent years it has gained the attention of the mainstream media, owing to its rapid progress and extensive applications, and to public figures such as Elon Musk and Bill Gates drawing attention to the subject. It has proved a topic of controversy among computer scientists and philosophers, some of whom state that a super-intelligent AI will be the last thing we ever create³.
This report evaluates evidence, studies and statements from prominent researchers and academic reports, and concludes that AI development should be permitted, but only under strict-to-moderate supervision and regulation. It discusses three research questions: does super-intelligent AI pose an existential threat to humanity, is creating conscious machines a good idea, and is it unethical to create conscious machines?
Is creating conscious machines a good idea for humanity?
Categories of AI –
Artificial intelligence can be broken down into three categories⁴. Artificial Narrow Intelligence (ANI) is a low level of intelligence: ANI can carry out complex tasks but cannot understand why it is doing them, and it does not think on a conscious level (e.g. Siri, Alexa). Artificial General Intelligence (AGI) is conscious: it can think on the same level as humans, is self-aware and can understand that it is a machine. AGI does not yet exist. Finally, Artificial Super Intelligence (ASI) is machine intelligence more advanced than humans. It also does not yet exist, but it would be able to solve problems beyond our understanding.
Timeframe –
As AGI and ASI do not yet exist, how imminent they are, and thus how concerned we need to be, is a matter of speculation. A survey of experts in the field⁵ found that the median respondent estimated a one in two chance that AGI would be created by 2040, rising to a nine in ten chance by 2075. Respondents also predicted that ASI would follow within 30 years of AGI, and estimated the chance of ASI turning out to be ‘bad’ or ‘extremely bad’ for humanity at one in three.
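As a rough illustration of what these figures imply when taken together, the survey's headline numbers can be combined naively. The independence assumption and the treatment of "ASI within 30 years of AGI" as certain are simplifications for illustration only; the survey itself makes no such claims.

```python
# Naive combination of the survey's headline estimates (Müller & Bostrom, 2016).
# ASSUMPTIONS (not from the survey): the estimates are independent, and
# "ASI within ~30 years of AGI" is treated as certain for simplicity.

p_agi_by_2075 = 0.9      # median expert estimate of AGI existing by 2075
p_asi_given_agi = 1.0    # simplifying assumption: ASI follows AGI within ~30 years
p_bad_outcome = 1 / 3    # estimated chance ASI proves 'bad' or 'extremely bad'

# Implied chance of a bad ASI outcome by roughly 2105
p_bad_by_2105 = p_agi_by_2075 * p_asi_given_agi * p_bad_outcome
print(f"{p_bad_by_2105:.0%}")  # → 30%
```

Even under these generous simplifications, the survey's own numbers imply a roughly three-in-ten chance of a bad outcome within a century, which is the scale of risk motivating the regulation argued for in this report.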
Whilst the survey is insightful and gives us a timeframe to work with, it should be treated with caution, as the write-up does not state who these experts are or what the criteria were for being classed as an expert.
Benefits and threats –
It is understood that with consciousness come autonomous decisions and opinions. Conscious machines may therefore have goals which misalign with those of their creators, and with processing speeds millions of times faster than a human's, they would be hard to defend against. A report from the University of Wisconsin–Madison states that the creation of an AGI could trigger an AI arms race⁶, with hostile nations attempting to gain access to the technology. AGI would allow for autonomous weapons systems which could very quickly become highly sophisticated, and as the machine transitions from AGI to ASI, we could lose control of it. This could be classed as an existential threat, as there is little humanity could do in the face of an armed, super-intelligent machine. In the same paper, a survey found 48% of respondents agreeing that "society should prioritize work on minimising the potential risks of AI". Although the responses are now slightly outdated and opinions may have changed, they reflect the concern that exists about developing conscious machines.
That a conscious machine is a big step for humanity and comes with threats does not mean it should not be created at all. Solutions such as global collaboration on AGI and governmental regulation can be put in place to ensure it is developed responsibly. Dr Jean-Marc Rickli claims the benefits of an AGI would be huge: it could play a large role in activities such as preventing fraud and spotting cyber-attacks, as well as in defence and anti-terrorism⁷. Rickli's ideas can, however, easily be disputed, as AGI could equally aid cyber-attacks and terrorism.
Is creating artificial consciousness fair to the machine?
Ethics is an important consideration when building conscious machines, as doing so is essentially creating life. A consciousness with too many restrictions placed on it would be like a caged animal. Furthermore, relentlessly forcing a consciousness to perform task after task is reminiscent of events in history that we now look back on unfavourably, such as slavery.
To prevent history from repeating itself, guidelines would need to be created regarding what can and cannot be done with AGI. Bostrom proposes a master list of what is considered ethical⁸, which can be updated as ethics inevitably shift; the AI itself would also be expected to abide by this list. Whilst this is a good suggestion, it could be considered unrealistic: Bostrom presents a solution but not how to implement it, and it would be unreasonable to assume everyone would abide by it.
To conclude
It would seem that there is no definitive answer to the question ‘Is creating a conscious machine a good idea for humanity?’, but if nothing is done, development will go ahead regardless. Therefore, as artificial consciousness is so monumental and world-changing, it would seem best to proceed with creating it, but with strict regulations in place to ensure it is developed safely and responsibly.
AGI could well be an existential threat to humanity, and giving a machine consciousness could also be unethical if not done responsibly. There is certainly evidence to suggest AGI's creation will bring many rewards to humanity, but this does not mean we can overlook the threats, which are perhaps more important.
In answer to the question ‘Is creating artificial consciousness fair to the machine?’, it depends on who is in control of it. This is why regulations are so important and why technological progression on this scale must not be hidden away from the world.
The evolution of technology is inevitable, but that does not mean humans shouldn't be ethical, safe and forward-thinking while pursuing it, particularly in the case of advanced artificial intelligence.
References
[1] Cellan-Jones, R., 2014. Stephen Hawking warns artificial intelligence could end mankind. BBC News, 2 December 2014.
[2] Lexico Dictionaries | English, 2019. Artificial Intelligence | Definition of Artificial Intelligence by Lexico. [online] Available at: https://www.lexico.com/en/definition/artificial_intelligence [Accessed 23 Oct. 2019].
[3] Bostrom, N., 2003. Ethical issues in advanced artificial intelligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, p. 241.
[4] Tweedie, M., 2017. 3 Types of AI: Narrow, General, and Super AI. [online] Codebots. Available at: https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible [Accessed 12 Nov. 2019].
[5] Müller, V.C. and Bostrom, N., 2016. Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer, Cham.
[6] Ramamoorthy, A. and Yampolskiy, R., 2018. Beyond MAD? The race for artificial general intelligence. ITU Journal, 1, pp. 1–8.
[7] Rickli, J.M., 3.2 Assessing the Risk of Artificial Intelligence.
[8] Bostrom, N. and Yudkowsky, E., 2014. The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316, p. 334.