Backpropagating AI’s future
Backpropagation (and Gradient Descent): The most prevalent algorithm used to train neural networks. It identifies which weights contribute most to the error in the output, and adjusts them accordingly to give better results.
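To make the definition concrete, here is a minimal sketch of gradient descent on a single weight. The model, numbers and learning rate are all made up for illustration; a real network does this simultaneously for millions of weights, with backpropagation computing each gradient.

```python
# Toy model: prediction = w * x, trained on one example with squared error.

def loss(w, x, target):
    """Squared error of the one-weight 'network'."""
    return (w * x - target) ** 2

def gradient(w, x, target):
    """d(loss)/dw, derived by hand for the squared-error loss above."""
    return 2 * x * (w * x - target)

w = 0.0                   # start with an uninformed weight
x, target = 2.0, 10.0     # one training example: input 2 should map to 10
lr = 0.05                 # learning rate: how big each adjustment is

for step in range(100):
    w -= lr * gradient(w, x, target)   # move against the gradient

print(round(w, 3))  # prints 5.0, since 5.0 * 2.0 == 10.0
```

Each step moves the weight a little in the direction that reduces the error, which is all "training" means at this scale.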
Artificial Intelligence is a challenge first and foremost, in every sense of the word. Technologically, ethically and economically, we have never faced a disruption like this in the history of mankind. It makes sense to explore where we are today and where this will lead us, and to sketch a roadmap of where to go from here.
In Part 1, I present some of the current arguments for what AI’s future will look like, its philosophical implications and the far-fetched.
In Part 2, I backpropagate all the way to today and have a more technical discussion about where AI Research is, where it’s heading and what the key issues are.
Part 1
Machines today aren’t intelligent or smart (or any other synonym you can think of) in the way you think they are. In fact, they aren’t even close. The promise of Terminators and I, Robots (AI cliche #1) is misplaced.
Computers need A LARGE amount of data to learn anything, and even then they don’t always generalize well. Imagine calling a 5-year-old intelligent if he had to see 10,000 images of cats before understanding what a cat looked like. I wouldn’t open a college fund for him. I would also not recommend taking the time to show anyone 10,000 images of your cats.
This is not what the future of AI will look like. In the future, artificial agents will be able to generalize almost effortlessly with minimal (or no) data. This is called Artificial General Intelligence (AGI): an agent that can understand or learn any intellectual task that a human can, and probably much better than us, now that you think of it.
Nick Bostrom’s thought-provoking book, Superintelligence, presents the concept of an Artificial Super Intelligence (ASI) that would supersede AGI by a good margin, and then some.
Let’s get our hands dirty (with robot oil am I right) and see what all this means.
“Everybody hates moral philosophy professors!”
These artificial agents would not only be implementable on a computer, but on any virtual machine that we come up with (some researchers argue that our mind is a virtual machine too). Needless to say, we have found the most pervasive technology in all of human history. Not only will this be the most advanced technology ever created, it would be an entirely new being: Life 3.0, as Max Tegmark calls it in his book of the same name.
What moral status do these new beings have?
Principle of Substrate Non-Discrimination:
If two beings have the same functionality and the same conscious experience,
and differ only in the substrate of their implementation, then they have the
same moral status.¹
It wouldn’t matter that I’m carbon and my new AI friend is silicon, just as my skin colour doesn’t matter. Bostrom et al.¹ argue that substrate lacks any moral significance; hence these agents would have the same moral status as us.
Here’s the problem with moral debates regarding AI: unless moral philosophers can provide an impeccable framework for morality and ethics, all debates would be unfruitful. This job is innately and recursively difficult, and has far-reaching consequences. An AI’s utility function could allow for potential harm, biases and global catastrophes, simply because common sense and ethics cannot be formalized into a language that these agents can understand.
The moral discussion is important because it’s likely that these beings would be so powerful that they would be indistinguishable from the concept of a God. An “Intelligence Explosion” could take place as soon as we achieve general intelligence in just one machine. Any agent that is sufficiently intelligent could enter a loop of recursive self-improvement until it slips out of human hands.
Let’s talk about what happens when these beings begin to live among us.
Well, they took my job
The leading argument against AI, indubitably, is that they’re going to take our jobs. The counter-argument: even if they do, we would be freed to spend our time on meaningful things, avoid all mechanical work, and someday cringe at the fact that ancient humans spent the majority of their lives on unnecessary, back-breaking, dangerous work.
Weizenbaum argued that AI should not be used to replace people in positions that require respect and care, such as any of these²:
- A therapist
- A soldier
- A judge
- A police officer
- Customer service
AI is already being used profusely in almost all of these professions. Google’s demo of their assistant booking an appointment hints at AI’s future dominance in all jobs involving human conversation (therapy, customer service etc.). Hannah Fry’s book Hello World demonstrates many examples of AI being deployed as judges and doctors, and discusses the issues with this. AI has also proved its mettle in domains like art and music, which is the last thing anybody could have predicted a hundred years ago.
Jobs and wages are provided on the basis of a person’s contribution to society, and pay is (usually) directly proportional to the difficulty of the problem a person solves. What happens when an effortlessly intelligent being steps up?
There are scant reasons why our jobs wouldn’t go to them, and the few that exist are hardly rational. This needn’t be a bad thing at all: economists and technologists suggest that as these jobs go to AI agents, new jobs will open up for humans, the nature of which we can’t even predict. Looks like you can show 10,000 images of your cats to people after all.
It’s very difficult to predict what will happen, and it’s not wise to extrapolate from past trends, as the rise of true AI is unprecedented and highly uncertain.
Goals of an Intelligent being
AI is potentially dangerous, but not because it’s going to inhabit the body of Arnold Schwarzenegger and unleash its wrath; it won’t do that, simply because it doesn’t want to. Today’s systems have no motivation or goals that could instigate them to do such a thing (OR DO THEY). That brings us to the actual danger: what happens when an AI’s goals aren’t aligned with ours, and how do we ensure that this never happens?
By the very nature of being intelligent beings, we learn to make decisions on our own, considering our survival and advancement. No AI system today has any goals, although they spit out decisions every second. They are finely tuned to a specific, domain-restricted task and are completely oblivious to anything other than what they’ve already seen millions of times. It’s possible for these agents to decide something without any internal representation of a goal; they do that billions of times a day. But it’s hard to imagine a generally intelligent being with no goals; even a 2-year-old has goals. It makes sense to demand transparent goals in AGI systems, as a fail-safe if nothing else. But as mentioned previously, an intelligence explosion could get out of hand fast, and we wouldn’t know, or have any tool to interpret, the goals of these silicon brains. It’s imperative to tackle this issue before any such advanced machine is even in the making.
Creativity and Emotions
Creativity is perhaps our biggest strength and our most valuable asset, probably in the argument against AI too. They might be able to do a billion calculations per second, BUT CAN THEY DRAW?
Recognizing creativity is a difficult problem, let alone coming up with creative ideas. With the advent of Computer Generated art and Neural Style Transfer, which stirred up the Deep Learning Community, we have hints of AI’s creativity in conventional domains. In these restricted domains like painting, music and playing chess, AI already has a head start, but matching human creativity in the general sense is something that we can only make assumptions about.
Another thing that is seen as alien when we mention AI is emotion.
AI systems can fake emotions very well (think chatbots). They have recently begun to write beautifully (OpenAI’s GPT-2) and talk like humans. I say fake emotions because there is no internal representation of any emotion in these algorithms, yet they spit out magnificent pieces of writing, painting and music. Here’s the kicker: some researchers argue that feelings like anxiety would also need to be included and used³ in building AGI systems. It makes sense, because our decision-making stems not only from our feelings, which we take for emotion, but also from functional and phenomenal consciousness and past history; these things enable us to schedule and prioritize motives in our lives, something we want to pass on to our intelligent overlords.
There is a general consensus about what AI’s future will look like, although nobody agrees on the details of it.
It’s important to be optimistic and imaginative, but also skeptical of every piece of information. Trying to predict the future has never worked out very well in the past, and extrapolating extravagantly is even worse. Glaringly obvious facts and their inevitable consequences can lead us astray, as everything today is transient and could well turn out to be wrong in the first place. I haven’t discussed many major issues (the Trolley Problem, the problem of consciousness and qualia, robot rights etc.), but this is a starting point to get your feet wet with some of the key issues in AI ethics and safety, and their impact on all of us (spoiler: it’ll affect us a lot more than just recommending movies and products).
AI agents will remember what topping I liked on my ice cream, what my favourite movie is, whom I talk with the most and what I have forgotten. They’re continuously taking away our need to perform any mental computation, and we like it.
We aren’t the same humans our grandparents were; technology has already become an extension of us. We’re cyborgs already, and AI is the next chapter of the revolution.
It’s important to think critically about whether all this is an actual possibility or have all of us grossly overestimated Alexa. In Part 2, I attempt to explore what is already true, where we’re heading and what happened to the cat images.
Part 2
Referring to AI today means referring to either Machine Learning or Deep Learning (Neural Networks), both of which are subsets of Artificial Intelligence. The underlying principle behind both classes of algorithms is simple: to solve any problem, you require sufficient examples of the problem being solved. This labelled data gives us the features and the answer for a question (supervised learning), and we hope that, using the features we’ve extracted, our system produces a good prediction. By comparing our prediction with the correct answer (and of course beating our computer up with a stick each time it gives a wrong answer), we adjust the parameters so that we get better outputs each time. Do this a million times maybe, for millions of parameters (or more), and we have engendered intelligence.
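The loop described above can be sketched in a few lines. This is a toy, hand-rolled logistic regression on made-up data (the "cat detector" labelling rule and all the numbers are my own invention); real systems use millions of parameters and dedicated frameworks, but the predict-compare-adjust cycle is the same.

```python
import math
import random

random.seed(0)

# Labelled data: each example is (features, answer). Here the "answer"
# is 1 when x1 + x2 > 1.0 -- a stand-in for "this image contains a cat".
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0)
        for x1, x2 in ((random.random(), random.random()) for _ in range(200))]

w1, w2, b = 0.0, 0.0, 0.0    # the parameters we adjust
lr = 0.5                     # learning rate

def predict(x1, x2):
    """Sigmoid of a weighted sum: a probability between 0 and 1."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(500):                      # "do this a million times maybe"
    for (x1, x2), answer in data:
        p = predict(x1, x2)
        err = p - answer                  # compare prediction with the answer
        w1 -= lr * err * x1               # adjust parameters to do better
        w2 -= lr * err * x2
        b -= lr * err

accuracy = sum((predict(x1, x2) > 0.5) == (y == 1.0)
               for (x1, x2), y in data) / len(data)
print(accuracy)   # should end up close to 1.0 on this easy toy problem
```

Every modern supervised system, from spam filters to image classifiers, is an elaborate version of this loop.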
Intuitively, it makes sense: you look at the image of a cat, you see features like eyes, ears and whiskers, and with enough training you understand what a cat looks like. This oversimplified version should be enough to convince you that, given enough data and time, these algorithms work wonders. But there are also big problems with our algorithms today. Let’s look at some of those and see whether the future really is as smart as a 5-year-old.
The intelligent bias
A very subtle problem is baked into our current approach to intelligence, by no fault of ours. All of us have inherent biases, and unfortunately our machines have picked them up too.
Computers aren’t intelligent, but they are extremely good pattern finders. A Google search for images of doctors reveals that more than 95% of the results are of people (predominantly male) of American or English ethnicity. These kinds of biases are prevalent in our algorithms. If I train my cat classifier on only two breeds of cats, it would stop all other cats from entering the cat cafe. Great care and attention is given to making balanced and representative datasets, but that’s not the only problem.
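The two-breeds problem can be shown with a deliberately silly toy (all numbers made up): a model that simply answers with the majority class looks impressively accurate, while failing every member of the minority class.

```python
# A hypothetical cat-cafe dataset: 98 tabbies, 2 siamese.
labels = ["tabby"] * 98 + ["siamese"] * 2

def lazy_model(example):
    """A 'pattern finder' that learnt only the majority pattern."""
    return "tabby"

accuracy = sum(lazy_model(x) == x for x in labels) / len(labels)
recall_siamese = sum(lazy_model(x) == x for x in labels if x == "siamese") / 2

print(accuracy)        # 0.98 -- looks great on paper
print(recall_siamese)  # 0.0  -- every siamese turned away at the door
```

This is why raw accuracy on an imbalanced dataset can hide exactly the bias we care about, and why balanced data and per-class metrics both matter.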
The problem begins with humans themselves. The majority of the people behind AI’s progress, and of government representatives, have been white males, whereas a problem like AI needs to be addressed by representatives of all of humanity. Many examples of bias in the human factor of AI development have been observed, and it’s dangerous because AI is one of the most weaponizable technologies in the world right now. If only a small group of people control it, and god forbid their intentions aren’t right, the future of humanity could be very adversely affected. To the credit of AI researchers, they identified this problem swiftly, and steps are being taken to tackle such issues.
Bias can slip in not only because of imbalanced datasets; it can creep in much earlier, during the framing of the problem itself.
The prevalent way of building these algorithms is to use APIs that big companies (e.g. Google, Facebook) provide; needless to say, if a bias is induced at that stage, it will propagate promptly.
That’s all well and good as long as this is limited to cat cafes and Google Images, but future AI systems will be deployed in high-stakes situations like courts, job interviews and hospitals. If these biases creep in there, a lot of lives will be affected.
Imagine an AI judge being presented with identical data about a white and a black person, but only sentencing the black person to prison, because it learnt statistically that black people are more likely to be found guilty. Here, framing the problem incorrectly and an imbalanced dataset could both play a role (among many other factors) in producing a wrong or biased decision.
So much for our all-intelligent overlords.
How do you know that’s a cat: Transparency in AI algorithms
For multiple reasons, it makes sense to understand how a computer arrives at its decisions. Many people treat AI algorithms as black boxes, but recent research (e.g. on Bayesian networks) is opening up frontiers in figuring out why an algorithm arrived at a particular decision.
This could also be effective against biases in these algorithms. It’s imperative to develop AI in a way that is transparent to inspection.
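What "transparent to inspection" can mean is easiest to see with a linear model, where each feature's contribution to a decision can be read off directly (deep networks need far more sophisticated tools). The feature names and weights below are entirely made up for illustration:

```python
# A hypothetical linear "risk score": weight * feature value per feature.
weights = {"prior_convictions": 1.8, "age": -0.4, "employment": -0.9}
applicant = {"prior_convictions": 2.0, "age": 1.5, "employment": 1.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Instead of a bare yes/no, the system can report *why* it decided,
# listing features by how strongly they pushed the decision.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"total score: {score:+.2f}")
```

A decision that comes with its contribution breakdown can be audited; a bare probability from a black box cannot, and that difference is what the transparency debate is about.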
Imagine a future where your AI doctor advises you to take a particular medicine, and you trust it because you don’t wanna mess with Arnold Schwarzenegger. Here’s the catch: Terminator is not real (it won’t be in the future either), and you don’t trust your AI doctor. You want to know exactly why and how it arrived at its decision; your life depends on it.
Black boxes aren’t viable in the long term; AGI agents would need to explain the rationale behind a decision, like us humans do. This is another problem we need to overcome, without which AGI systems won’t come into existence.
“What bomb?”: Adversarial attacks
What happens when our intelligent overlords are fooled? In the famous panda example, adding a tiny amount of carefully crafted noise to an image resulted in a rather funny misclassification, even though a 5-year-old could still identify the panda. Turns out, this is another important weak point in our algorithms today, one that a safe AGI system of the future should stay a million miles away from.
Adversarial attacks designed by other computers or humans, or even plain random noise, could break this sophisticated technology.
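The trick behind such attacks is easiest to see on a toy linear "panda detector" (everything below is made up; the real panda attack, FGSM, applies the same idea to a deep network using its gradients). Each pixel is nudged by an invisibly small amount, but every nudge pushes the classifier's score the same way, so the tiny changes add up to a huge swing:

```python
n = 784                                    # pixels of a 28x28 "image"
weights = [0.5 if i % 2 == 0 else -0.5 for i in range(n)]
bias = 2.0

def score(pixels):
    """Linear classifier: positive score means 'panda'."""
    return bias + sum(w * p for w, p in zip(weights, pixels))

x = [0.5] * n                              # a flat grey image, score > 0
eps = 0.02                                 # a 2% per-pixel nudge, imperceptible

# Move every pixel by eps *against* the sign of its weight: each nudge
# is tiny, but all 784 of them push the score in the same direction.
x_adv = [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, x)]

print(score(x))       # 2.0   -> classified "panda"
print(score(x_adv))   # ~ -5.84 -> misclassified, yet no pixel moved more than 0.02
```

High-dimensional inputs are what make this possible: hundreds of imperceptible per-pixel changes, all aligned with the model's weaknesses, compound into a confident wrong answer.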
Imagine if you could place a bomb in your pocket or suitcase such that it easily fools the AI vision security system at the mall, the airport or the White House.
These systems will be deployed everywhere, and a sufficiently advanced AGI system should not be fooled by manipulations from humans or computers. Robustness against adversarial attacks is an important factor to consider before we have truly intelligent agents.
Alexa! Cure my headache
A lot of optimistic promises have been made regarding AI’s contribution to medicine and biology. Many argue that only an AI agent will be able to solve cancer, protein folding, genome sequencing and other challenges whose solutions are nowhere in reach. More often than not, computer vision systems have outperformed radiologists at detecting tumors and other anomalies.
Your personal assistant, the AGI system of the future, would have your entire medical history, would have enough data about the diseases in your locality, and would have deduced the several diseases you might be prone to. This is not a difficult feat: given adequate data and good computing power, it can be implemented right now.
AGI systems and agents of the future might or might not be more intelligent than us, but at any given time they would have more information and better extrapolation skills than us. They could predict the outbreak of epidemics, mobilize resources and also be your personal doctor. This does not mean that human doctors won’t be needed, but as in many domains, a doctor’s job would become exponentially easier.
Democratizing AI
Artificial Intelligence goes beyond engineers and scientists. Many companies, start-ups, and universities have recognized the danger AI would pose, and have taken significant steps towards decentralizing it. Countries are coming up with policies and strategies to tackle AI-posed dangers and stay ahead in the world of AI.
Contributions from everyone are required. We need to have an important conversation about which direction we want to take AI in. How do we implement it? How do we make fail-safe systems in case things go awry? What do we want our future to look like? Will it be an endless cycle of tweets and Instagram posts, or will it offer something more promising?
Deep Networks and Shallow Relationships
In Part 1, I explored the philosophical implications of what the future of AI will bring. In this part, I presented some areas of interest and key issues, and how we need to tackle them responsibly to ensure a better, meaningful, safe future that utilizes AI to its fullest.
I conclude with a humble introspection, which has to be the first step in establishing what we want for our future generations. Although Artificial Intelligence has created all the hype, today and for the future, it’s important to realize how it’s already affecting society. Personalized recommendations, information retrieval in an instant, data traveling at enormous speeds: all of this is affecting us in ways that nobody could have thought of. I have argued that AGI systems, if not handled responsibly, without ethical and safety frameworks in place (or at least a way to induce these things in machines), would be disastrous. But it’s not the future we need to worry about most; it’s that we don’t think about the future enough.
We’re always entertained (you cannot be bored in this climate: watch me fight ants in my next Facebook exclusive), mostly living our lives on virtual platforms (Instagram eats before I touch my food), and this is the kind of future that nobody predicted 50 years ago. The good-story bias governs our thinking in subtle ways and gives an optimistic measure of the future, but AI technology has crept insidiously into almost all domains of life. It’s vital to think critically about where it’s headed: do we want a generation living inside virtual-reality headsets and social media, or do we have something else in place?
AI has solved, and will continue to solve, major problems across innumerable domains. But while our networks get deeper, human relationships become ephemeral. It’s important to be vigilant, to overcome the hype and see what’s actually going on, to think clearly, decide what we want, and be brave in making the choices that are subtly shaping our future.
AI is changing the world for all of us, and it’s time we had this conversation as a species, before things get out of hand and the 5-year-old who got frustrated with you showing him pictures of your cats ends up becoming the Terminator himself. Plot twist of the century, am I right?
References
[1] The Ethics of Artificial Intelligence — Nick Bostrom and Eliezer Yudkowsky
[2] Computer Power and Human Reason — Joseph Weizenbaum
[3] AI: Its nature and its future — Margaret A. Boden