
Our Next Enlightenment will be AI-Driven

A realistic and symbiotic future between humans and AI

New York City Lights. Image by author

Personal Opinion

The Enlightenment Ride to the Top

The Age of Enlightenment took place during the 17th and 18th centuries and is often described as the period when we collectively "turned on the light bulbs" in our heads. This intellectual and philosophical movement saw the rise of some of history's most prominent thinkers, such as Kant, Voltaire, and Adam Smith.

The Enlightenment was largely fueled by philosophers and mathematicians like Descartes and Newton. Before this point, the world ascribed knowledge and power to higher authorities – deities, the stars, and so on – and these religious and spiritual forces were treated as the primary authorities shaping the societies humans lived in.

The Enlightenment tackled complex issues like individual liberty, happiness, and knowledge. Before this period, societal structures, laws, and perspectives were largely driven by the doctrines of the Catholic Church, and this Age shifted the focus to a different higher power – reason.

This focus on reason was coupled with the Scientific Revolution, which closely preceded the Enlightenment. Together, the two led us to trade fealty to the Catholic Church for fealty to empiricism and rationality. This incredibly significant moment spawned a marriage between philosophers, mathematicians, and scientists that spurred the Age of Computation through ideas like symbolic logic, Moore's Law, and, now at its apex, Machine Learning.

The first Enlightenment refocused our sights on science and experimentation as the way to yield insights. This led generations of us to record our experiences in the form of data so we could conduct science and engineering in the name of growth. That growth eventually produced innovations that changed everything from personal lives to global societies. The Internet, iPhones, social media, and so much more transformed the world in a mere 50 years, and I believe we're now at the apex of the ride the first Enlightenment took us on.

Where do we go from here?

The Rise of Machines

After generations of relying on reason to make decisions, we developed intuitions based on the patterns of outcomes and experiences we saw in the decisions we made. Each time we taught these skills to the next generation, the cycle contained a little more intuition and a little less reason. Essentially, we whittled away at our reasoning capacity in the name of efficiency – it's much faster to decide "because it's always been done this way" for something that seems to repeat endlessly than to employ critical thinking each time.

That's fine for contained, local problems with minimal variation that need to scale. The issue arises when you attempt to employ reason at scale in today's hyper-connected world. Not only does our reasoning ability have limits, but our intuition also falters when we're dealing with a vast number of externalities. Society's solution to this was computer programming. Using computers, we can run simulations, conduct statistical tests, automate entire workflows, and more, at a scale previously incomprehensible for any one person to manage. Learning to code gave one person the leverage of five.

We've all seen how the natural evolution of this has played out in recent history, with AI and Machine Learning taking the world by storm. All those experiences we recorded and stored have become the bread and butter of institutions worldwide, because we can train statistical models to learn, at a massive global scale, the patterns we once handled with intuition. The technology has been so transformative that technologists, futurists, philosophers, and more have come out to warn us against creating a Terminator-like future.

Everything from Sam Harris's famous TED Talk to Nick Bostrom's book on the emergence of superintelligence to Max Tegmark's book outlining our potential futures with AI has shown that AI's integration into human society is no longer a pop-culture, sci-fi concept. After countless hours of reading op-eds, articles, and books, the consensus I find is to build with extreme caution – almost as if someone could [overnight] build AGI (Artificial General Intelligence) able to contend with and potentially overtake humans. Every now and then, a futurist or philosopher argues that we are unlikely to see AGI built for generations to come, especially considering how naïve and relatively "simplistic" today's implementations are.

Even with all the theories, speculations, and concerns, I have seen very few, if any, take the opinion that AI will make us better humans. It’s an optimistic take that I probably wouldn’t have written a story about until just recently.

Let’s dive right into it.

Raising the Bar of Being a Human

The AI hype cycle has been going on for some time now, but it really started making waves when DeepMind's AlphaGo beat the world champion at Go, one of the most complex games in the world. Since then, we have seen AI systems that can determine a protein's structure from its amino acid sequence alone, build a website from text prompts, answer philosophical questions with nuance, create art in the style of Van Gogh, and much more. Even as a Data Scientist by trade, it has been astonishing to watch how far AI has come.

All of the concerns and cautionary tales about the future make complete sense when you see these innovations arrive nearly every year. And then a paper published in July 2021 by Choi et al., titled "How Does AI Improve Human Decision-Making? Evidence from the AI-Powered Go Program," completely blew me away.

The researchers evaluated the move quality of human players before and after the AI-powered Go program (Leela) was released to gauge the difference. They controlled for the underlying trend – players simply getting better or worse on their own – and evaluated measures like the rate of human errors, the magnitude of the most critical mistake, and the quality of decision-making under high uncertainty (the opening of the game).

The authors found a dramatic increase in move quality, a decrease in human errors, a decrease in the magnitude of the most critical mistake, and an increase in the quality of decision-making under high uncertainty over the three years after Leela was released. Their graph shows the drastic change in human performance and decision-making skills.

Choi, Sukwoong and Kim, Namil and Kim, Junsik and Kang, Hyo, How Does AI Improve Human Decision-Making? Evidence from the AI-Powered Go Program (July 26, 2021). USC Marshall School of Business Research Paper Sponsored by iORB, No. Forthcoming, Available at SSRN: https://ssrn.com/abstract=3893835 or http://dx.doi.org/10.2139/ssrn.3893835
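The kind of before-and-after analysis the authors describe – estimating the post-release shift in move quality while controlling for a background trend of players improving on their own – can be sketched as a simple interrupted time-series regression. The data below is synthetic and purely illustrative; it is not the paper's data or method in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one average "move quality" score per quarter,
# 12 quarters before and 12 quarters after the AI program's release.
quarters = np.arange(24)
post_ai = (quarters >= 12).astype(float)   # 1 after the (hypothetical) release
trend = 0.3 * quarters                     # players improving on their own
jump = 4.0 * post_ai                       # extra improvement after AI arrives
quality = 50 + trend + jump + rng.normal(0, 0.5, 24)

# Regression: quality ~ intercept + trend + post_ai.
# The post_ai coefficient isolates the shift beyond the pre-existing trend.
X = np.column_stack([np.ones(24), quarters, post_ai])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
intercept, trend_coef, ai_effect = beta
print(f"estimated trend per quarter: {trend_coef:.2f}")
print(f"estimated post-AI jump:      {ai_effect:.2f}")
```

Because the trend term absorbs the gradual, self-driven improvement, the recovered `ai_effect` reflects only the discontinuity at the release date – the same logic that lets the paper attribute the change to the AI rather than to players getting better anyway.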

This may seem relatively insignificant since it's based on a game, but I promise it's anything but. This study shows that when we collectively think about AI and its role in society, we take the human capacity for learning, growing, and improving for granted. We forget the immense progress human society has made in such a short amount of time, and the apex of the Enlightenment ride makes us feel that a long ride down is forthcoming.

I disagree.

I think we’ll see a second Enlightenment driven by AI systems. In the first, we traded the authority of the Catholic Church for the authority of reason. Over generations, this has led to an over-reliance on our intuition and capacity to critically think which, as mentioned, has many limitations. I think in the second Enlightenment we will trade the authority of reason for the authority of Artificial Intelligence.

In other words, we'll give up the ego of human wisdom to become better at handling complex problems at a scale we cannot comprehend. How will this make us better? Is this a reality we want? Like every movement, it won't be wholly good or bad, but I do think it will force us to raise the bar for being human. Ignorance, or willful laziness, will be seen as one of the worst things you can be instead of a default. When global knowledge, large-scale pattern recognition, and the tools to improve are at your fingertips, it becomes "wrong" to remain closed off in your box, refusing to grow past your biases.

A big concern with AI, and something I deal with a lot in my work, is how easily we can bake our human biases into AI models. This is true and needs to be consistently monitored, controlled, and fixed. And that's exactly why this can make us better. Reason was, and is, thought of as an infallible authority immune to our biases and human errors. But reason is a product of our cognitive functions and is therefore prone to the same biases we bake into our autonomous systems – except those biases are invisible and can take generations to change, if they change at all. The biases baked into AI models sit right in front of us, monitored by an [ideally] diverse team, and we have the means to fix them because they live in code and math we ourselves wrote. The cycle from identification to debate to fix to result is a million times faster for biases found in AI than for biases in reason.

There may come a time in the distant future when we have to reconcile the power dynamics between humans and AI, but I believe we’ll see a long era of human evolution guided by AI making us more egalitarian and efficient at the same time.

How will this start?

I believe we're still quite early when you think of our evolution in these terms. So far, widespread adoption of AI has really only come where regulation has been minimal and technology has been heavily funded. Domains like advertising, marketing, and other consumer tech industries have found massive success (and issues) in deploying AI into the wild.

I think we'll see a drastic shift in perspectives on AI, and in approaches to using it, when it proliferates into industries that are more heavily regulated or where technology is less funded. Solutions for climate change or public health will warrant governments and private enterprises collaborating to create AI systems at a global scale with regulatory oversight. This will kickstart an era of AI symbiosis that most will welcome more readily.

It is a public good that open-source communities are emerging and becoming the default as AI comes to dominate every aspect of society. Effective transparency and user-controlled systems are going to be crucial features if we're to trust autonomous systems being further embedded in our lives.

Ultimately, as domain experts start to translate their intuition to code and allow themselves to grow past their own reasoning abilities, that’s when we’ll start to see the next stage of human evolution take shape.


Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details.

