
Introduction
This is the fourth – and final – article in my series "AI and the Environment". If you have not read the first three, that's no problem. Still, feel free to scan through their introductions [here](https://medium.com/@jeffrey.sardina/ai-and-the-environment-ii-engaging-the-dialogue-e0460f3ad6d9), here, and here – it won't take more than a minute, and it provides some nice context for this article.
After this series, I’ll be starting up a new one – I am still deciding on a topic, so stay tuned!
In this post, I will look at the cases in which "Red AI" (AI that requires large amounts of energy to run) can be beneficial. In some cases, Red AI benefits not only AI performance itself, but the environment as well. The key take-away points are:
- Red AI can lead to an overall reduction of energy use, even though it takes huge amounts of energy to create
- Red AI for health and for climate science is often justified, since its benefits to human and ecosystem life can outweigh the cost of its creation
- Red AI for its own sake sometimes has value, because it helps create better AIs for the above uses. However, this is dangerous, and should only be done with great care and consideration.
- Legitimate use of Red AI is not an excuse for environmental irresponsibility. When using Red AI, it is even more important that researchers minimise energy costs.
Red AI for climate science and response
Red AI can have a very strong positive impact on the climate through two main routes: improving our understanding of how climate change is progressing, and responding intelligently to climate change.
AI has been used to predict global temperature changes and to model storms, oceanic events, cloud changes, and extreme weather events [5]. It has also had success in predicting the results of these changes, such as water shortages, wildfires, rain damage, and forced human migration due to climate disasters [5]. At a more local level, it can be applied effectively to energy management [3, 5] and city energy grids [5] – saving energy and reducing emissions.
Interestingly, a Microsoft report in 2018 estimated that using AI to understand and respond to climate change could result in a 1.5% to 4.4% decrease in emissions while also increasing global GDP by 3.1% to 4.4% by 2030 [6].
However, many of the AI models needed for this change will require a large up-front energy and emissions cost. Let’s take a look at data centres, for example.
Data centres are huge consumers of electricity. They are, in essence, hundreds or thousands of industrial computers stacked together in a warehouse. Power is needed not only to run the computers, but also to cool them and perform other auxiliary tasks [1]. In fact, cooling is the single most expensive auxiliary task for most data centres [1].
Google and DeepMind found that by using an AI to manage the internal cooling system in a data centre, they reduced the cost of cooling by 40% and reached record values for energy efficiency [3].
The AI behind this, however, had to take many factors into account and required millions of data points for training [2]. It relies on a large number of different neural networks working together – a so-called "ensemble" setup [3]. While such setups can be very powerful, they often require much more energy than smaller, non-ensemble methods.
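To make the idea of an ensemble concrete, here is a minimal sketch in Python (using PyTorch) of several independently trained networks whose predictions are averaged. The architecture, the member count, and the input sizes are all invented for illustration – the actual DeepMind system has not been published, and this is not it.

```python
import torch
import torch.nn as nn

def make_member(n_inputs: int, n_outputs: int) -> nn.Module:
    # One small feed-forward network; each ensemble member is trained separately.
    return nn.Sequential(
        nn.Linear(n_inputs, 64),
        nn.ReLU(),
        nn.Linear(64, n_outputs),
    )

class Ensemble(nn.Module):
    """Averages the predictions of several independently trained members."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        # One full forward pass *per member* – this is where the extra energy
        # cost of an ensemble shows up at prediction time (on top of one full
        # training run per member beforehand).
        predictions = [member(x) for member in self.members]
        return torch.stack(predictions).mean(dim=0)

# Hypothetical numbers: 5 members, 20 sensor inputs, 2 predicted set-points.
ensemble = Ensemble([make_member(20, 2) for _ in range(5)])
sensor_readings = torch.randn(1, 20)  # placeholder input
print(ensemble(sensor_readings))
```

Five members means roughly five times the training energy and five times the prediction-time compute of a single network – exactly the kind of multiplier that pushes a model into Red AI territory.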
While the exact details of the AI involved have not been published (to my knowledge), it certainly falls under the category of Red AI. But despite the high cost of its creation, its ability to shave 40% off the cooling cost and set record-level efficiency speaks far more to its benefits than to its detriments. Now that it has been created, it can be applied in many different data centres – and will likely yield similarly powerful results there.
What it comes down to, in the end, is cost-benefit analysis. Red AI has a large up-front cost, but sometimes it can lead to huge benefits later. And often, those benefits are worth it.
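To show what that cost-benefit reasoning looks like in practice, here is a back-of-the-envelope sketch in Python. Every number in it is invented for illustration – none of them come from Google, DeepMind, or the papers cited here.

```python
# Hypothetical cost-benefit sketch for a Red AI project.
# All figures are invented for illustration only.

training_cost_kwh = 500_000                 # one-off energy to train the model
baseline_cooling_kwh_per_year = 10_000_000  # cooling energy per data centre, without the AI
cooling_reduction = 0.40                    # fraction of cooling energy the AI saves
deployments = 3                             # data centres the trained model is re-used in

yearly_savings_kwh = baseline_cooling_kwh_per_year * cooling_reduction * deployments
break_even_years = training_cost_kwh / yearly_savings_kwh

print(f"Energy saved per year: {yearly_savings_kwh:,.0f} kWh")
print(f"Training cost recovered after about {break_even_years:.2f} years")
```

With these made-up numbers, the training cost is paid back within a few weeks; with other numbers it might never be. The point is not the specific figures, but the habit of actually doing the calculation before committing to a Red AI project.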
Red AI for health
When AI has a direct impact on life, it is important that it make as few errors as possible. AI has no shortage of applications in medicine. For example, AI models have been created to predict whether two pharmaceutical drugs can be safely combined in humans, or whether they are likely to do harm when combined [7].
Such a system could be used to help guide researchers when exploring what drugs to combine when treating a disease. Of course, if the researchers rely on such a system as a guide and it is wrong, then they will have wasted valuable time – something that could have a real impact on patients. If AI is to be used in this area, it absolutely must be reliable.
I have mentioned in a previous post that Red AI is marked by diminishing returns [4]. Using more and more energy-hungry AIs with more data becomes less and less beneficial [4]. However, that does not mean it is not beneficial – even a 0.1% change in accuracy can have a large impact if an AI is used in millions of cases or to help millions of patients. And if this improvement has the potential to impact life positively, then it must be considered.
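As a quick worked example of that scale effect (again, with hypothetical numbers):

```python
# Hypothetical illustration of how a tiny accuracy gain scales with volume.
predictions_per_year = 10_000_000  # e.g. drug-combination checks made with the model
accuracy_gain = 0.001              # a "mere" 0.1% improvement in accuracy

extra_correct = predictions_per_year * accuracy_gain
print(f"Additional correct predictions per year: {extra_correct:,.0f}")  # 10,000
```

Ten thousand fewer wrong calls a year is not a rounding error when each one could affect a patient.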
Pollution and emissions caused by Red AI are clearly very harmful to humans, and to the whole world, in the long run. This must be balanced, however, with improving (perhaps even saving) lives in the present.
To go back to the example of predicting pharmaceutical drug combinations: imagine that the system is highly reliable. Researchers using it would then need less time to identify drugs that can be safely combined to treat a given disease. Treatments could be developed sooner, and more people helped as a result.
Applying Red AI to health requires the correct balance of present and future – emphasizing either one to the exclusion of the other would do more harm than good.
Red AI for Red AI’s sake
In their argument for promoting Green AI, Schwartz et al. argue that Red AI does have a place [4]. The place they give it is so-called "pure research" – research for its own sake. It is often Red AI, after all, that leads to huge improvements in how we understand and use AI [4]. Without those past uses of Red AI, and the improvements they brought, using AI to help with climate response, climate prediction, and health would not be possible.
Therefore, to assume that every case of Red AI not directly tied to climate or health is bad could preclude similar progress that would be of benefit.
This is, of course, a very dangerous line to walk. This does not, and should not, mean that all Red AI is good. In fact, I would argue that most Red AI is not good. But it can be, and which Red AIs are good and which are not cannot always be known in advance.
As a final note on using Red AI for its own sake, there is one point I must emphasize:
Red AI – even when it is legitimate to use – is never an excuse to be environmentally irresponsible.
In fact, when researchers use Red AI, they need to be even more focused on reducing energy usage as much as they can. They must also be dedicated to sharing their AI models so that others can re-use them – this leads to huge energy savings in the long run.
Red AI is not irresponsible AI – it is simply AI whose creators accept in advance that large amounts of energy are needed to work towards an ultimately greater good. If it is not clear that a greater good for the environment or for human life is being worked towards, then Red AI should not be used.
This all needs to be part of the dialogue – from public discourse to policy debate and research methods, the environmental impact must be considered. Every use of Red AI must be clearly justified.
Conclusion
Drawing a hard "good / bad" line is rarely useful, and Red AI is no exception – it is a tool, and it has very good uses. It is up to researchers, policymakers, and the general public to keep the dialogue focused on the impacts – for better and for worse – of AI, and to move forward along the path that will do the most good and the least harm. Red AI fits into that just as much as Green AI does.
The key take-aways of this post are:
- Red AI can lead to an overall reduction of energy use, even though it takes huge amounts of energy to create
- Red AI for health and for climate science is often justified, since its benefits to human and ecosystem life can outweigh the cost of its creation
- Red AI for its own sake sometimes has value, because it helps create better AIs for the above uses. However, this is dangerous, and should only be done with great care and consideration.
- Legitimate use of Red AI is not an excuse for environmental irresponsibility. When using Red AI, it is even more important that researchers minimise energy costs.
References
- [1] Jones, N. How to stop data centres from gobbling up the world's electricity. Nature. 2018 Sep;561(7722):163–166. doi: 10.1038/d41586-018-06610-y. PMID: 30209383.
- [2] DeepMind. Safety-first AI for autonomous data centre cooling and industrial control.
- [3] DeepMind. DeepMind AI Reduces Google Data Centre Cooling Bill by 40%.
- [4] Schwartz, Roy, et al. "Green AI." Communications of the ACM 63.12 (2020): 54–63.
- [5] Cowls, J., Tsamados, A., Taddeo, M., et al. The AI gambit: leveraging artificial intelligence to combat climate change – opportunities, challenges, and recommendations. AI & Society (2021). https://doi.org/10.1007/s00146-021-01294-x
- [6] Microsoft. How AI can enable a sustainable future. (microsoft.com)
- [7] Karim, Md, Cochez, Michael, Chaves, João Bosco Jares, Uddin, Mamtaz, Beyan, Oya, and Decker, Stefan. Drug-Drug Interaction Prediction Based on Knowledge Graph Embeddings and Convolutional-LSTM Network. (2019): 113–123. doi: 10.1145/3307339.3342161.