Photo by @joelfilip

Artificial Intelligence and Nonprofits

How can nonprofits use AI for good?

Alex Moltzau
Towards Data Science
14 min read · Sep 1, 2019


Could part of AI safety be ensuring fair distribution, or working towards equality? I have written before about fairness in AI; the importance of data quality; and equality relating to gender. Yet the most challenging article to write was Inequalities and AI. Is artificial intelligence truly safe if it worsens inequality? And what is one of the greatest inequalities?

It has been important for nonprofits to connect with makers of new technology to see if any part of the revenue can be funnelled towards humanitarian purposes or programs. As much as we can question these technologies, because they are of course not faultless, it is arguably important that nonprofits are able to raise funds and address issues. The question for these organisations is often a large, looming ‘how?’. In an ideal world their operations would not be needed, yet in the current situation there is a place for the charity sector, and how it operates is certainly changing.

With these services moving to apps and social media, with a variety of actors involved, it does seem a challenge to keep up. In many instances technologies such as AI or ML are integrated into existing products or services. Is it necessary to collaborate? We proceed with the assumption that revenue can be generated in conjunction with machine learning projects and that part of that money should go to charity. Let us explore a few options, but first a quick look at AI for Good.

AI for Good

Since I have been writing about AI intensively, I have seen the rising trend of AI for Good. It does seem like parts of the charity sector have caught an interest in this, but let us first look at the possibilities and a few possible issues. I will start with a review of the recent report by Salesforce called AI for Good Nonprofit Trends & Use Cases.

The report from Salesforce covers five areas: (1) a description of AI; (2) why nonprofits should be involved; (3) imagining a better future; (4) using AI to advance your mission; (5) how to keep AI ethical.

First, I find their description of AI slightly confusing, as it lumps everything into one basket. In this regard I believe the distinction made by Finland’s well-known course on artificial intelligence, Elements of AI, which I quoted in a previous article, is much more apt:

“Narrow AI or applied AI is the use of software to study or accomplish specific problem solving or reasoning tasks. Perhaps it can be said that applied AI in this sense is the most common usage and easier to define than its counterpart broad/strong AI, which has been said to at times border towards thoughts of AGI — more capable of experiencing consciousness.”

With this clearer, and with our specific focus being narrow AI, we can move into the report’s description. It presents a few one-liner examples: Siri, the voice assistant; Facebook’s recommendation engine for photos; Amazon recommending products; and Google Maps providing optimal routes to a desired location. After this the report pitches the Salesforce product and outlines how nonprofits can start using AI (a minimal sketch of this loop in code follows the list below):

  1. Capture data
  2. Learn from data
  3. Act on insights

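To make this concrete, here is a minimal, self-contained sketch of what the capture-learn-act loop could look like in practice. Everything in it is invented for illustration (the features, labels and threshold are hypothetical), and scikit-learn’s logistic regression stands in for whatever model a real project would choose.

```python
# A minimal, self-contained sketch of the capture -> learn -> act loop.
# All data below is made up for illustration.
from sklearn.linear_model import LogisticRegression

# 1. Capture data: e.g. per-beneficiary records a nonprofit already keeps.
#    Features: [visits last month, distance to service centre in km]
X = [[5, 1.0], [0, 12.5], [3, 4.0], [1, 9.0], [6, 0.5], [0, 15.0]]
y = [0, 1, 0, 1, 0, 1]  # 1 = dropped out of the programme, 0 = stayed

# 2. Learn from data: fit a simple model on historical outcomes.
model = LogisticRegression().fit(X, y)

# 3. Act on insights: estimate risk for a new case and decide on outreach.
risk = model.predict_proba([[1, 11.0]])[0][1]
if risk > 0.5:  # hypothetical threshold a programme team would set
    print(f"Schedule an outreach call (estimated drop-out risk {risk:.0%})")
```
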
This sounds like an easy three-step process, yet it conceals a large degree of complexity. Processing information well is increasingly important, and good quality data is additionally an area where nonprofits can make a difference. The report supports its case with a compelling quote from an article in the Economic Times about for-profit organisations.

“AI and machine learning are making their mark as hot topics in the for-profit sector right now. In fact, for-profit organisations with AI expect to see a 39% increase on average in their revenues by 2020, alongside a 37% reduction in costs.” — Economic Times (India Times), 17 January 2017 [bold added]

In addition to this, the use of artificial intelligence by nonprofits is projected to grow 361% in the next two years. This may say more about how few nonprofits currently use artificial intelligence, and how expensive AI or machine learning labour is. It is argued that on a global level, the values and principles that nonprofits embody can help shape the future of AI. Instead of applications that exploit vulnerable people, applications that support them could be identified and funded.

The report additionally refers to five principles of AI for Good.

  1. Being of benefit: funding for research to ensure the beneficial good of AI should not only go to defence or health — there has to be a beneficial use of AI that can address the seriously challenging tasks that nonprofits have been working on for a long time.
  2. Human value alignment: data citizenship is increasingly discussed after the launch of documentaries such as The Great Hack, as well as the billion-dollar fines issued by the EU to Facebook and Google over their stewardship of data or breaches of competition rules. What is the perceived ‘good’ for these companies? It may not match what people in nonprofits perceive as ‘good’. Involvement in this value alignment is crucial.
  3. Open debate between science and policy: healthy exchange between science and policy is vital for progress that benefits humanity. An open debate between private companies, science, policymakers and nonprofits on the possible ‘good’ or risks to society would be beneficial.
  4. Cooperation, trust and transparency in systems and among the AI community: if we consider the aspect of cooperation, it is vital that nonprofits begin involving different communities in developing their understanding of this new technology, in areas such as warfare or defence, just as much as they begin to understand the potential financial upside of using this technology for fundraising. The European Union has in addition decided to take an ethical and human-centric approach to the development of artificial intelligence, so this could align well with a broader European strategy.
  5. Safety and responsibility: there seems to be an increasing number of data scientists and developers using machine learning techniques, yet whether AI safety is keeping pace is a tough question to answer. On the surface it would be fair to generalise that it is not: if we are building these applications for society while far smaller investments are made in safety, it does not seem to be a priority. Cybersecurity is important, yet interdisciplinary teams working with security for nonprofits and corporations seem increasingly necessary. KPMG Lighthouse has people from security, programming and social science backgrounds, and this team is located close to KPMG IDAS, which works directly with international development and sustainability. Since I currently work in this environment [as an intern for KPMG], I can see that such combinations may be important for ensuring responsible operations, as not all charities or small organisations can maintain an operational team addressing the variety of issues in this field. Combined with financial expertise, it is relatively easy to make a case for a move towards greater safety in the implementation of artificial intelligence in nonprofit projects.

We stand before a digital divide, and we must do our best to act responsibly in the face of grave inequalities.

Why could this be of greater benefit to nonprofits?

The report by Salesforce outlines the benefits of AI for individual nonprofits as opposed to commercial businesses. I have taken their two suggestions (as I felt the third was more like a sales pitch) and added some of my own thoughts to the mix:

  1. Nonprofits have limited resources and limited employee time, so AI-assisted operations with responsible use of technology can be beneficial in this regard. This can be the case both in day-to-day work and in a crisis with emergency communication. I have previously interviewed Morten Goodwin at the Centre for Artificial Intelligence Research (CAIR) in Norway, which has written recent papers on how machine learning can be used in a disaster to identify those who need help more quickly.
  2. Nonprofits with advocacy goals can benefit from access to more sophisticated metrics (often displayed better) to better understand their audience and their impact on attitudes and behaviours. Metrics do not say everything, but they can help give an overview of the bigger picture at times, and new ways of displaying information may help constituents, members or funders to better understand the situations which they are helping to address (with the need to maintain privacy ever important in this regard, of course).

An interesting initiative mentioned in this regard is GovLab, a think tank that often works with government-related subjects such as activism and health care. The success of technology depends greatly on the relationship between the actors developing it, and there have been several less successful governance projects (even by Microsoft). I reviewed a report on AI governance in Argentina and Uruguay written by the World Wide Web Foundation (founded by Tim Berners-Lee, the inventor of the World Wide Web). It is highly possible that uncritical applications within the field of AI can damage both current trust and the future implementation of new projects.

What is ethical AI?

There can be many possible approaches to addressing this. One classic approach focuses on donors. With data on your donors’ giving history, predictive insights could be generated at the click of a button: a prediction of likelihood to give, for instance, or of volunteering. Program outcomes can also be analysed and displayed with historical data, though this should not always be used for judgement so much as for better understanding.
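
As a sketch of what such a donor analysis could look like in practice, here is a minimal example. The file donors.csv, its column names and the model choice are all hypothetical; none of this comes from the Salesforce report.

```python
# A sketch of predicting likelihood to give from giving history.
# "donors.csv" and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

donors = pd.read_csv("donors.csv")
features = donors[["gifts_last_3_years", "avg_gift_size",
                   "months_since_last_gift", "volunteer_hours"]]
target = donors["gave_this_year"]  # 1 if the donor gave this year, else 0

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Rank donors by predicted likelihood to give, for fundraiser follow-up.
donors["likelihood_to_give"] = model.predict_proba(features)[:, 1]
print(donors.sort_values("likelihood_to_give", ascending=False).head())
```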

However, this does of course pose several questions regarding ethics (moral principles). There is a growing trend of addressing this issue, with Facebook beginning to make sizeable investments in the area, both in training and research. Google is attempting to develop privacy technologies such as federated learning: trying to learn from what people do without having any way to understand in depth who they are, which does seem a challenging task.
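
To give a feel for the idea behind federated learning, here is a toy illustration of federated averaging, my own sketch rather than Google’s implementation, on synthetic data: each participant fits a model on its own local data, and only the model weights, never the raw data, are sent to a server and averaged.

```python
# Toy illustration of federated averaging: raw data never leaves a "device",
# only model weights are shared and averaged. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each device holds its own private dataset (never sent to the server).
devices = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_weights = []
    for X, y in devices:
        w = global_w.copy()
        for _ in range(10):  # a few steps of local gradient descent
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_weights.append(w)
    # The server only ever sees model weights, which it averages.
    global_w = np.mean(local_weights, axis=0)

print("Recovered weights:", global_w.round(2))  # close to [2.0, -1.0]
```

The Salesforce report outlines eight different ways to ‘keep AI ethical’.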

  1. Build a diverse team. It is suggested to recruit for a diversity of backgrounds. In this case it must be mentioned that Equal AI, amongst others, has argued that the field of machine learning is currently perhaps one of the least diverse there is.
  2. Cultivate an ethical mindset. Ethics is a mindset, not a checklist. Empowering employees to do the right thing is important. Companies can cultivate an ethical mindset together with nonprofits.
  3. Conduct a social systems analysis. This means involving stakeholders at every stage of the process to correct for the impact of systemic social inequalities in AI data. Open source platforms can get input from communities and create so-called ‘community sprints’; these can be particularly relevant in the case of disasters or a crisis, to enable participation of a diverse group.
  4. Be transparent. Understanding values, knowing who benefits and who pays, and giving users control over their data and a way to give feedback is an absolute necessity. Explaining the development is particularly important, as there is a possibility to either confuse or inform. With great quantity comes great responsibility, and the last thing an NGO wants is a ‘black box’: algorithms that are impossible to explain to funders, or where you know there may be externalities (damage) that will return to catch you off guard.
  5. Understand your values. As previously mentioned, which critical applications should or should not be automated? Examining the outcomes and trade-offs that may come into conflict when making decisions, and the compromises that result, is vital. When trade-offs are made, they must be made explicit to everyone affected. This can be difficult if AI algorithms prevent people from knowing exactly how decisions are made. Cassie Kozyrkov, Chief Decision Scientist at Google, has written a lot on this topic.
  6. Give users control of their data. It is inevitable that nonprofits gather user data, and given the role a nonprofit has, this sector can be an actor that takes data citizenship more seriously than other companies or organisations. The report suggests that you allow users to correct or delete data you have collected about them. Nonprofits can end up with a lot of data on their constituents through a variety of touchpoints online or through IoT. (A minimal sketch of such a correction-and-deletion workflow follows this list.)
  7. Protect your data. Data security is not spoken about very often by nonprofit organisations (or not often enough). In May 2019 one of New York’s largest nonprofits, People Inc., had a major data breach (affecting up to 1,000 clients). Again, many nonprofits do not have a large dedicated IT staff, yet cyberattacks are a real threat and the GDPR (in effect since May 2018) must be considered. Small charities could go bankrupt from non-compliance, and larger charities may face large fines.
  8. Take feedback. Allow users to give feedback about what they think of your narrow AI. Naming your offering ‘AI’ may be a quick way to get attention, yet also a quick way to earn abstract distrust from constituents if the technology is not explained. Perhaps it is wiser to talk of machine learning techniques than of AI? Regardless, it must be clearly explained and possible for people to interact with your approach. The community developing solutions in the field of AI seems to have an open source attitude, but this is not the case across the board. Without feedback your offering is weaker and less compliant when you work with governance issues.

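As flagged in point six above, giving users control can be as simple as exposing view, correct and erase operations over whatever constituent store you keep, with an audit trail. The sketch below is entirely hypothetical (a plain in-memory Python class), not an API from the report or from any vendor.

```python
# Hypothetical sketch of "give users control of their data":
# letting a constituent view, correct and erase their own record.
from datetime import datetime, timezone

class ConstituentStore:
    def __init__(self):
        self._records = {}    # user_id -> dict of personal data
        self._audit_log = []  # GDPR-style trail of what was done, and when

    def view(self, user_id):
        """Right of access: show a user everything held about them."""
        return dict(self._records.get(user_id, {}))

    def correct(self, user_id, field, value):
        """Right to rectification: let users fix inaccurate data."""
        self._records.setdefault(user_id, {})[field] = value
        self._audit_log.append((datetime.now(timezone.utc), user_id, f"corrected {field}"))

    def erase(self, user_id):
        """Right to erasure: delete the record, keep only an audit entry."""
        self._records.pop(user_id, None)
        self._audit_log.append((datetime.now(timezone.utc), user_id, "erased"))

store = ConstituentStore()
store.correct("donor-42", "email", "new@example.org")
print(store.view("donor-42"))   # {'email': 'new@example.org'}
store.erase("donor-42")
print(store.view("donor-42"))   # {} -- nothing retained about the person
```
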
KPMG Intelligent Automation in Financial Technology

A recent report by KPMG from March 2019, Easing the pressure points: The state of intelligent automation, gives an overview of certain relevant technologies related to AI. It states: “Intelligent automation is the catch-all phrase for disruptive technologies. It includes robotic process automation (RPA), artificial intelligence (AI), machine learning (ML), cognitive computing (CC), and smart analytics.” This is explained in a relatively accessible overview which I will relay.

In this report KPMG describes artificial intelligence as: the capability of a machine to imitate intelligent human behaviour.

Machine learning (ML): Machine learning is an application of artificial intelligence that enables systems to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

Cognitive computing (CC): Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works.

Robotic process automation (RPA): Robotic process automation enables organizations to configure computer software or a ‘bot’ to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems.

These automation technologies are highly relevant when we talk of artificial intelligence. It is often a combination of them that is used, as in the example of self-driving cars. It is more about the operating model of a nonprofit than a simple one-shot revenue solution that will solve everything. There is an overview site called Artificial Intelligence: Enhancing, Accelerating, and Automating Decisions that has collected different reports on this topic.

If we look beyond the Salesforce report, the KPMG Venture Pulse report for the second quarter of 2019 notes that over the past decade or so, advances in computing have begun to yield significant advantages from harnessing the power of intelligent algorithms to automate key processes, with artificial intelligence and machine learning applied particularly in the field of cybersecurity. Other projections by Credit Suisse predict rising revenues worldwide within artificial intelligence. Part of this could be directed towards the nonprofit sector.

Source: Credit Suisse Group

Three Examples of Initiatives

Before we dive into three short case studies, we can ask how nonprofits are operating currently. A short search for nonprofit cases and AI at the time of writing turns up few cases that are readily available to understand. Rather, there is much thinking and little doing; a careful approach that may be necessary. Are nonprofits not using applied AI because they are unsure of it, or unaware of it? This is a question which is not immediately easy to answer. Even nonprofits aimed towards AI seem hesitant to use or display their usage of such technologies.

Google AI for Social Good

Google.org issued an open call to organisations around the world to submit their ideas for how they could use AI to help address societal challenges. They received applications from 119 countries, spanning 6 continents, with projects ranging from environmental to humanitarian. From these applications, they selected 20 organisations to support. This is a treasure trove of fascinating information on how different organisations are using AI applications, or at least experimenting with their possible usage. I will mention a few I found interesting:

  • Nexleaf Analytics: The storage conditions of a vaccine can significantly affect its effectiveness, which is especially challenging in remote regions with limited infrastructure. They will use AI technologies to build data models that predict vaccine degradation, quantify the value of vaccines at risk, and ultimately develop an end-to-end system to ensure safe, effective vaccine delivery.
  • HURIDOCS: Human rights lawyers are currently required to sift through vast document repositories to identify the most relevant facts for their case. They are using natural language processing and machine learning methods to extract, explore and connect relevant information in laws, jurisprudence, victim testimonies, and resolutions.
  • Crisis Text Line, Inc.: With over 100 million messages exchanged to date between people in crisis and Crisis Text Line’s counselors, it can be challenging to balance spikes in volume and counselor availability. Crisis Text Line will use natural language processing and data on counselor capacity to optimize how they allocate texters to counselors, with the goal of reducing wait times while still ensuring effective communication and de-escalation (a toy sketch of this kind of triage follows the list below).
  • Rainforest Connection: Rainforests are under increasing threat from illegal logging and global warming. Rainforest Connection is using commonplace mobile technology and deep learning for bioacoustic monitoring to detect immediate threats and track rainforest health.

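To illustrate the kind of allocation problem Crisis Text Line describes, here is a toy sketch of risk-ranked triage. It is entirely my own invention: a crude keyword score stands in for a real NLP severity model, and the counselor capacity data is made up.

```python
# Toy sketch of risk-ranked triage: score incoming messages, then assign
# the highest-risk texters to available counselors first.
# The keyword score is a crude stand-in for a real NLP severity model.
import heapq

HIGH_RISK_WORDS = {"hurt", "pills", "goodbye"}  # illustrative only

def severity(message: str) -> int:
    return len(set(message.lower().split()) & HIGH_RISK_WORDS)

incoming = [
    "i just need someone to talk to",
    "i took some pills and want to say goodbye",
    "i want to hurt myself tonight",
]

# heapq is a min-heap, so negate the score to pop highest severity first.
queue = [(-severity(m), i, m) for i, m in enumerate(incoming)]
heapq.heapify(queue)

available_counselors = ["counselor_a", "counselor_b"]  # invented capacity data
while queue and available_counselors:
    neg_score, _, message = heapq.heappop(queue)
    counselor = available_counselors.pop(0)
    print(f"{counselor} <- severity {-neg_score}: {message!r}")
```
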
DataKind

The mission of DataKind is: “Harnessing the power of data science in the service of humanity.” Their compelling argument is that the same algorithms and techniques that companies use to boost profits can be leveraged by mission-driven organisations to improve the world, from battling hunger to advocating for child well-being and more. This must be taken with a grain of salt, but in certain cases there are existing applications that can be used within the nonprofit sector too. They bring together top data scientists with leading social change organisations to collaborate on cutting-edge analytics and advanced algorithms to maximise social impact.

Photo by @alexacea

Gravyty

Adam Martel is the CEO and co-founder of Gravyty. Gravyty helps nonprofit organizations raise more money by increasing their fundraisers’ efficiency with actionable artificial intelligence. Gravyty gives fundraisers the ability to maximize their time building relationships with the right donors at the right time. Gravyty was recently named the #1 “new fundraising idea that worked” in the Chronicle of Philanthropy. In an article [curiously, again written by a Salesforce member] he gives a set of pointers, arguing for:

  • AI and predictive analytics integrated into a platform
  • AI will not replace fundraisers, but rather make them more successful and engaged
  • Proactive AI-powered applications will optimize performance and business outcomes
  • Nonprofits have the opportunity to lead use of prescriptive analytics with AI-powered applications

Conclusion — Nonprofits Connecting to Technology

Making this work requires collaboration between nonprofit, private and government actors, due to considerations of fairness, security, accountability and transparency.

With the increasing focus on artificial intelligence, together with its projected revenue growth, it would make sense for nonprofits to work more deliberately towards understanding how they can keep coupling with frontrunners in technology to raise funds in the time ahead. The trends do point towards a convoluted space with disproportionate data, or even inequalities worsened by a push towards large quantities of data with disregard for quality or security.

In this chaotic space of irresponsible and responsible actors, nonprofits can take on the role of a bridge towards responsible and ethical use of technology while channelling resources towards addressing the harshest inequalities we see on this planet.

This is day 89 of #500daysofAI. My current focus for days 50–100 is on AI Safety. If you enjoy this, please give me a response, as I do want to improve my writing or discover new research, companies and projects.
