50-year-old cybernetics questions for an ethical future of AI

Norbert Wiener, one of the pioneers of cybernetics, envisioned the ethical problems of AI decades ahead of his time

David Pereira
Towards Data Science


“43081” by Tekniska museet is licensed with CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/

Ethics has definitely become a trending topic in the field of Artificial Intelligence, and it seems clear that AI faces many challenges if we want it to have a positive impact on our society. This is not the first time, however, that researchers have warned us about the risks of this kind of technology. Norbert Wiener, a pioneer of cybernetics, wrote this somewhat prophetic passage in his book God & Golem, Inc., back in 1964:

It is relatively easy to promote good and to fight evil when good and evil are arranged against each other in two clear lines, and when those on the other side are our unquestioned enemies and those on our side our trusted allies. What, however, if we must ask, each time and in every situation, where is the friend and where is the enemy? What, moreover, when we have to put the decision in the hands of an inexorable magic or an inexorable machine of which we must ask the right questions in advance, without fully understanding the operations of the process by which they will be answered?

Quote taken from God & Golem, Inc. — Norbert Wiener, 1964.

It is mesmerizing to think that this was written more than 50 years ago, and yet it reflects so well some of the main challenges we face in ensuring a truly ethical future for AI. Let’s break Wiener’s passage down.

Wiener starts by pointing out one of the main challenges for modern AI: the definition of good and evil. It is clear by now that many developments in AI are driven by economic powers, whether those come from national governments or big technology companies. The problem with defining good and evil in AI is not only one of finding clear lines, but also one of who gets a seat in the conversation that decides where those lines should be drawn. Recent events have clearly shown that underrepresented groups are not only discriminated against by AI algorithms, but also struggle to have their positions considered even when they are recognised experts in the field, as the well-known case of Timnit Gebru has demonstrated.

Wiener then deals with another challenge of AI: the black box problem, or, as Wiener calls it, an inexorable magic. This is a well-known problem whose implications I covered in two previous articles. In summary, we need not only algorithmic explainability, but also traceability and auditability, moving us from explainable AI to traceable and transparent AI.

Finally, let me point out a third challenge that Wiener uncovers in that quote, one that can easily go unnoticed. While discussing the challenge of putting decisions in the hands of an inexorable machine, he also writes “of which we must ask the right questions in advance”. This might feel like the easy part of defining an AI model, but it has been proven several times that it is not, even when we think we are setting a positive social-impact goal for the algorithm. Take the case of an algorithm that was designed with an admirable cause in mind: finding the patients who could benefit most from extra medical care based on their clinical history. That seems like the right goal for an algorithm to pursue, right? Well, it turned out that the algorithm “dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine”, as subsequent reporting on the study showed.

We might think that this algorithm’s problem is simply one of data bias. It is not only that. As Professor Stuart Russell and other researchers point out, we should drop the assumption that algorithms are given perfectly known goals, and acknowledge, as I wrote in a previous article, that bias in AI is much more than a data problem.
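To make this concrete, here is a minimal simulation of the proxy-objective problem. This is a sketch with entirely hypothetical numbers, not the actual model from the study: it assumes two groups with identical health needs, where one group incurs lower healthcare costs for the same need because of unequal access to care. Even a perfect cost predictor then underserves that group.

```python
import numpy as np

# Hypothetical illustration of optimizing a proxy (cost) instead of the
# true target (health need). All numbers are invented for this sketch.
rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, same distribution for both

# Cost is a noisy function of need, but group B spends ~40% less
# per unit of need due to unequal access to care.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access * rng.lognormal(mean=0.0, sigma=0.3, size=n)

# We rank by cost itself, i.e. a cost predictor with zero error,
# so any disparity below cannot be blamed on bad data or a bad fit.
k = int(0.02 * n)  # enroll the top 2% in the extra-care program
by_cost = np.argsort(-cost)[:k]
by_need = np.argsort(-need)[:k]

print("Share of group B among patients selected for extra care:")
print(f"  ranking by cost (the proxy objective):   {group[by_cost].mean():.2%}")
print(f"  ranking by true need (the intended goal): {group[by_need].mean():.2%}")
```

Ranking by true need selects both groups roughly equally, while ranking by the cost proxy selects far fewer patients from the group with less access to care. The question we ask the algorithm to answer matters as much as the data we feed it.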

If you enjoyed reading this piece, please consider a Medium membership to get full access to every story while supporting me and other writers on Medium.
