Data for Change

What Mainstream AI is (Not) Doing

The pandemic accelerated AI adoption — and made Big Tech richer — but did AI adoption happen in the places where it was needed?

Kunj Mehta
Towards Data Science
14 min read · Jul 6, 2021


After dabbling in machine learning during my undergrad, I scored a job as a Business Analyst catering to the Public Sector and Education business of one of the leading AI service providers in India. Having landed the job in the pandemic (for which, I must say, I consider myself fortunate) and been onboarded fully online, I have been at the forefront of organizations' transition from the real world to the digital one. I say this not only for the company I work for, but also for the numerous governmental agencies and educational institutions that have approached us seeking a digital transformation, both to keep their processes running in the current situation and to strengthen them for the future.

I genuinely believe that these past one-and-a-half years have been a blessing for the entire IT industry and, by association, the AI industry (since AI is now nearly ubiquitous in IT), as evidenced by the incredible growth Big Tech has achieved through a two-pronged strategy: undergoing digital transformation themselves (thereby cutting expenditure) and providing the same services to other industries (thereby generating immense revenue).

Mainstream AI needs to bridge the gap between commercial and social applications. (Photo by Indira Tjokorda on Unsplash)

Increasing AI Adoption

As previously mentioned, Big Tech companies, while initially apprehensive about the impact of the ‘new normal’, soon realized its potential and took full advantage of it. Everything increased: ad revenues, as digital ads became the main avenue to reach consumers; online shopping; sales of laptops and mobiles as everything went online; social media engagement; and cloud consumption as businesses went digital. Inadvertently, this forced shift online, along with the fear of missing out, acted as a catalyst for stakeholders to adopt not only traditional digital transformation but also newer technologies like cloud, artificial intelligence and blockchain.

As per a KPMG survey, compared to the previous year there was a 37% increase in AI adoption in at least one business function in FinTech, a 20% increase in Tech and a 29% increase in Retail. Couple this with the $67.9 billion invested in AI in 2020 (including Microsoft’s $1 billion investment in OpenAI, previously a not-for-profit organization) and McKinsey’s representative mapping of the use cases this money was spent on: about 50% of AI use cases in 2020 related to Natural Language Processing and 20% to Computer Vision, across industries like IT, FMCG, Healthcare, FinTech, Legal and Automotive. In comparison to this mainstream, Google’s AI for Social Good initiative offered just $25 million for research into AI applications that improve society.

This renewed push for digital transformation, the change in the adoption mindset and the amount of money being poured into the space (this time even from the public sector) created the perfect opportunity for the AI community to capitalize on it: to step up efforts to tackle the problems facing AI and to build applications for social good, especially in these times. Unfortunately, examples of that happening are few and far between.

Use Cases that need AI (based on UN SDGs)

International organizations like the International Telecommunication Union (ITU) and McKinsey use the United Nations’ Sustainable Development Goals (UN SDGs), adopted in 2015, as a guide to the world’s major problems today and try to map how AI can be applied to them. Serendipitously, a research paper published in Nature estimates that 79% of the targets within the 17 SDGs can be advanced using AI in some manner. Let’s look at the 17 SDGs and how AI can help achieve the targets set forth by the UN:

  • No Poverty: To achieve the UN’s goal of eradicating extreme poverty by 2030, AI can be used to analyze satellite or mobile-usage data to detect areas of poverty, followed by complementary policies to aid the residents of those areas. Better AI-driven weather prediction can help evacuate these people to safety. Economically, in a bid to bridge the gap between rich and poor, AI can be used to give credit ratings for loans to those below the poverty line (assuming the underlying data is not biased).
  • Zero Hunger: Ancillary AI applications that indirectly contribute to this goal have already been gaining traction in the agriculture sector, while last-mile routing solutions were developed to deliver food during the pandemic. Examples in agriculture include using Computer Vision to detect crop diseases, analyzing and modeling historical data to support farmers’ decisions, harvest forecasting, weather forecasting, and IoT-enabled, AI-powered farm equipment. However, agricultural AI research will count for nothing if the end product cannot be put into the hands of poor farmers in developing countries.
  • Good Health and Well-Being: This goal falls squarely into the healthcare sector, which has received focus and funds to battle the pandemic. Intel research shows a 39 percentage point increase in healthcare leaders adopting or wanting to adopt AI since the pandemic began. Examples of AI use cases in healthcare include AI- and data-powered support systems in hospitals, automation of drug discovery, tuberculosis and cancer detection, increasing AIDS awareness, solutions to reduce traffic accidents (cue autonomous vehicles), and suicide prevention and reduction of distressing posts on social media (credit where it’s due: this last one is Facebook). However, many of these solutions sit on an ethical borderline, and with datasets that may not represent the real world and lives at stake, they remain a big gamble for now.
  • Quality Education: Another sector that has seen a tremendous increase in AI adoption due to the pandemic. I can say from personal experience that AI is increasingly being used to monitor students’ attention, to carry out emotional surveillance to determine how comfortable children are learning certain subjects, and to identify struggling students before their test results become available. Attempts at using AI to deliver personalized and adaptive teaching are being made, especially on online learning platforms. However, AI in education has a long way to go before it provides truly personalized education.
AI investment in education and healthcare increased during the pandemic (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)
  • Gender Equality: There has been very little direct involvement of AI in achieving gender equality, mainly because it is a socio-cultural problem. That being said, the AI community needs to make sure it does not itself perpetuate gender bias, by ensuring that the underlying datasets and deployed systems are free of it.
  • Clean Water and Sanitation: The only well-researched use case for this goal is using AI to predict, and suggest steps to improve, water quality in treatment plants. Potential use cases include catchment-area management and water pipeline and flow management.
  • Affordable and Clean Energy: Again a sparsely researched field; most efforts have gone towards how AI can optimize energy production and consumption and better predict demand. More research is needed into how AI can help set up smart grids.
  • Decent Work and Economic Growth: This is again more a socio-cultural problem than a technological one. Generally speaking, the rise of the AI industry and its share of the global economy can spur economic growth, but at the risk of replacing low-skilled workers, which is why the AI industry needs to be self-aware and balance the advantages and disadvantages it brings to society and to the environment. In addition, the automation that AI provides will also increase productivity.
  • Industry, Innovation and Infrastructure: Though research into many applications of AI in infrastructure is ongoing and the underlying concepts are sound, there is not much adoption in this area. An adoption push is needed in use cases such as air and water quality management, energy management, asset and construction management, predictive maintenance, productivity enhancement, automation, and smart cities and grids.
  • Reduced Inequalities: This social problem can be partially and indirectly addressed using AI. For financial equality, applications can be developed that predict credit ratings and extend loans to low-income groups, as well as help them understand the personal-investment world through data. For cultural, racial and caste equality, I believe it would be more beneficial if all AI applications eradicated the underlying bias in the data itself rather than coming up with specific applications targeted at promoting equality; a representative example would be tweaking recommendation engines to not promote hate speech. On a larger scale, demographic and social-pattern data for a particular community can be analyzed to gauge how inclusive that community is.
  • Sustainable Cities and Communities: Many points pertaining to this goal have been covered above. At scale, AI can help achieve sustainable and smart cities through solutions geared towards better transportation infrastructure and fewer accidents, water and energy management, prediction of earthquakes, wildfire spread and oil spills, ecology analysis, air-quality analysis, and so on. Again, the challenge here is large-scale adoption and policy support.
  • Responsible Consumption and Production: AI-enabled systems can leverage historical data and patterns to optimize production and consumption schedules in sectors like manufacturing and energy, while the automation AI brings will improve productivity. From a policy standpoint, government procurement of goods and services, a process that is slow and corrupt in many places, can benefit from AI systems that automate the filtering out of unqualified vendors.
  • Climate Action: AI cuts both ways when it comes to climate action: the enormous energy required to develop and run AI models indirectly adds CO2 to the air, while automated energy management, climate modeling and natural-disaster prediction are its positive side.
  • Life Below Water: Research has shown that oceans and sea life can be conserved by leveraging AI to detect oil spills, automate mitigation planning, and manage catchment areas. Satellite data analysis can be useful for ecological forecasting (which includes coral bleaching and algal blooms, among other events) and for tracking and regulating trawler activity.
  • Life on Land: Research shows AI can identify forest animals from their footprints, which can play a non-invasive part in wildlife conservation efforts. Remote sensing leveraging AI can be used to assess, predict and map forest structural features, which serve as indicators of forest condition and help in conserving forests. Neural networks and object-oriented classification techniques can be used to better classify vegetation cover types and hence detect desertification and droughts.
  • Peace, Justice and Strong Institutions: AI can be leveraged in surveillance systems and on social media to detect, flag and filter violent content, bullying, child pornography and violent actors. Facial recognition software can also be used in predictive policing. However, this is one goal for which the required technology is ready but adoption is controversial due to ethical, privacy and discrimination concerns.
  • Partnerships for the Goals: This section of the UN’s SDG list sets out targets for collaboration towards achieving the other 16 goals and as such is not related to technology, so we will skip it.

How Big Tech and Government can Control AI

Pursuing research of any kind, in any sector or industry, requires funding, and research gets focused where the funds are. Keeping this in mind, let’s take a look at how Big Tech and governments around the world have the potential to centralize AI research and determine how the industry moves forward.

Corporate Participation in Academic Research Across Conferences (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)
Big Tech’s Acquisition of AI Startups (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)

From 2012, when Big Tech started entering the AI field after a breakthrough in machine learning techniques, to 2019, there was a 550% increase in attendance at NeurIPS, the largest machine learning conference. Big Tech representatives attend it with the aim of luring PhDs into their companies, and these companies also hire tenure-track professors to help them in their research. Big Tech’s talent grab is not limited to academia; it has expanded to startups in recent years, as this article illustrates.

The graphs below reinforce the view that corporates have a strong interest in academic AI research, especially in America. Their position in peer-reviewed research is mirrored in papers published in journals and showcased at conferences as well.

Private Investment in AI Across Countries. USA has the highest (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)
Peer-Reviewed AI Publications in the USA. Corporate has the highest involvement (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)
Peer-Reviewed AI Publications in China. Government has the highest involvement (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)

It is instantly noticeable from the graphs above that the USA, China, and the European Union are the leaders in AI research. In America, corporates have a much larger stake than the government or academia. In China, however, it is the government and not the corporates that centralizes AI, which is unsurprising given China’s political structure. The image below reinforces this: the corporate-academic partnership in publishing papers is much smaller in China (in spite of the large number of papers China publishes) than in the USA.

Academic-Corporate Partnership in AI Publications. USA leads the others by a large margin (Image via Stanford AI Index, 2021 under the Attribution-NoDerivatives 4.0 International license)

China is a case of above-average centralization of AI by the government. Nonetheless, the reason governments have a stake in AI is that they have the power to roll out legislation concerning the proper use of publicly and privately held data for AI applications (case in point: the EU’s GDPR). Specifically for projects in the social realm, not to mention large-scale sectors such as infrastructure, smart cities and water management over which governments have greater control, it is the government that can announce policies and development schemes, and scope out and fund large-scale projects that leverage AI to achieve their objectives. To that extent, ITU’s research shows that, as of 2020, 131 countries had passed or were in the process of passing a data policy, with 18 countries having a specific AI policy in place.

Challenges in the Industry

However the AI industry moves forward as a result of corporate and/or government support, the things that must be addressed first, to ensure progress isn’t ‘two steps forward, one step back’, are the problems facing the industry and community right now.

  • Explainability of AI: Machine learning models, neural networks especially, frequently behave as black boxes, and it is often unclear how they arrive at their conclusions. It is often preferable to have a model whose result can be traced back, so that the relevant parameters can be adjusted whenever required. This gains increased focus in the healthcare industry because of the stakes involved, and recently in facial recognition systems, where erroneous outputs have led to disastrous results. (A minimal sketch of one common post-hoc explainability technique appears after this list.)
  • Unavailability of Data: In many use cases, collecting the right data and creating an actionable dataset is time-consuming. First, the avenues from which data can be collected have to be figured out; second, gathering the data from disparate sources takes time; third, organizations have to modify the collected data to comply with data and privacy regulations while still keeping it actionable. For academia, even getting past the first stage is tough, because Big Tech and governments hold much of the data (collected through their various services) and keep it private or classified, as the case may be.
  • Bias and Misrepresented Datasets: The data organizations collect, and the methods by which they collect it, reflect the reality of the place the data came from. Cultural or social prejudices thus make their way into datasets, and it becomes the organization’s responsibility to weed the bias out. Taking it a step further, it is also the organization’s responsibility to ensure the application is deployed on a population representative of the one from which the dataset was prepared, so that controversies and failed deployments do not occur. Bias creeping into models is a real threat for facial recognition systems and large language models. (A simple representation check is sketched after this list.)
  • Generalizability: OpenAI is the torchbearer for the AI industry’s quest towards Artificial General Intelligence: a single AI that can do everything a human can do. At present this is a distant possibility, so for every use case discussed above, chances are the whole process from data collection to model deployment will be redone specifically for that use case. This seems like a waste of time and effort, and it is. However, the lack of generalizability in itself isn’t as bad as the problem it leads to next.
  • Energy Consumption: OpenAI itself, chasing generalizability in natural language processing, has kept outdoing itself by building ever-bigger neural language models (cue GPT-3). The energy required to train (even once) and deploy these models is enormous, so much so that if the current trend of building ever-bigger models continues, whatever advantages AI brings for the environment may not actually matter. Add to this the fact that GPT-3 doesn’t even understand human language; it is just a master at manipulating it. (A rough back-of-envelope energy estimate appears after this list.)
  • Corporate-Academia Gap: The gap between corporate and academic resources and research is there for all to see, which is why the AI community needs to refocus itself and build more applications that do actual good, irrespective of corporate or government interests.
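
To make the explainability point concrete, here is a minimal sketch in Python of one common post-hoc technique, permutation importance, using scikit-learn on a synthetic stand-in dataset; the feature names are purely hypothetical and not drawn from any real system.

```python
# Minimal sketch: post-hoc explainability via permutation importance.
# The dataset is synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real tabular dataset (e.g. loan or patient records).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["age", "income", "tenure", "balance", "region_code", "score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black box": an ensemble whose individual predictions
# are hard to trace by hand.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name:12s} importance: {mean_drop:.3f}")
```

Techniques like this only describe a model's behaviour rather than proving causation, and they are one option among many (surrogate models, SHAP, LIME), but they do give a traceable answer to which inputs drove a prediction.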
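
The dataset-bias point can likewise be illustrated with a small representation check run before any model is trained. This is only a sketch with assumed column names ("gender", "approved"); in practice the groups and outcomes would come from the problem at hand.

```python
# Minimal sketch: check group representation and outcome rates in a dataset
# before training on it. Column names ("gender", "approved") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "F", "M"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1],
})

# Share of each group in the data; compare against the population the
# application will actually be deployed on.
representation = df["gender"].value_counts(normalize=True)
print("Representation:\n", representation)

# Positive-outcome rate per group; large gaps hint at label or sampling bias
# that a model trained on this data would likely reproduce.
positive_rate = df.groupby("gender")["approved"].mean()
print("Positive-outcome rate:\n", positive_rate)
```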
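
Finally, the energy argument can be felt with a back-of-envelope calculation. Every number below (GPU count, power draw, run length, grid carbon intensity) is an illustrative assumption, not a measurement of GPT-3 or any other real model.

```python
# Back-of-envelope training-energy estimate. All inputs are illustrative
# assumptions, not measured figures for any specific model.
num_gpus = 1000            # accelerators used for one training run
power_per_gpu_kw = 0.3     # average draw per accelerator, in kilowatts
training_days = 14         # wall-clock duration of the run
carbon_intensity = 0.4     # kg CO2 per kWh for an assumed grid mix

energy_kwh = num_gpus * power_per_gpu_kw * training_days * 24
co2_tonnes = energy_kwh * carbon_intensity / 1000

print(f"Energy for one run: {energy_kwh:,.0f} kWh")       # ~100,800 kWh
print(f"Approximate emissions: {co2_tonnes:,.0f} tonnes CO2")
```

Even with these modest assumptions, a single run lands around a hundred megawatt-hours; scale up the accelerator count or run length and the environmental cost quickly becomes material.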

Conclusion

We can clearly see that sectors such as healthcare and education, which already had some AI presence, saw a sharp rise in AI adoption during the pandemic. Other sectors have had research conducted but lack large-scale visibility, interest, investment, implementation and adoption. This can be partly attributed to the problems the AI industry is facing, chiefly around the availability of data and the control that Big Tech and government policies wield over the industry. That being said, Big Tech and government interest in AI is a good thing, because continuous funding keeps the technology from fading away. What is needed is a shift in focus by all the stakeholders in the industry.

However, fear not, because not all hope is lost! AI clearly has the technical capability to help society, as evidenced by the mapping of its capabilities to social goals. And as the title of the article implies, AI is already being used for much social good; it just has not reached a tipping point of mainstream interest and investment. People, think tanks and organizations are actively involved in researching and developing applications that aim for a better society. Here’s hoping they get the support they deserve!
