Should AI be allowed to make our gambles? Facebook and Carnegie Mellon AI researchers have just created an AI that can beat professional players at poker. This is a monumental achievement. In poker, there is no single optimal answer, no winning series of moves to find. Instead, there is the best gamble to take to maximize reward and minimize risk. But can we responsibly apply agents trained to succeed in scenarios with imperfect information? In what ways could these types of superhuman AI make bad bets, and how could we design systems around them for social good?
![[2]](https://towardsdatascience.com/wp-content/uploads/2019/08/1laP46tdYiK1r89hV9dTTLA.jpeg)
Let’s look at a thought experiment. Medical diagnosis and treatment are among the most developed and prolific areas of machine learning research. Diagnosis is, for the most part, a classification problem. You have a large amount of input data from the patient, such as symptom data, environmental data, and so on. Machine learning algorithms are applied to find patterns in that mass of data and diagnose the patient. Often the patterns they find are so intricate that professionals cannot fully make sense of them. Diagnosis is a classic machine learning application, and it comes with its own ethical questions. For the sake of the experiment, let’s assume that either a sharp doctor or an ML algorithm reliably produces the correct diagnosis.
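To make that framing concrete, here is a minimal sketch of diagnosis-as-classification. Everything in it is synthetic: the features, labels, and model choice are placeholders for illustration, not any real clinical system.

```python
# A toy sketch of diagnosis framed as binary classification.
# All data here is synthetic; real systems use vastly richer patient features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: a symptom score, patient age (scaled), an environmental exposure index.
X = rng.normal(size=(1000, 3))
# Hypothetical label: 1 = condition present, 0 = absent, generated by a toy rule.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("toy diagnostic accuracy:", model.score(X_test, y_test))
```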
Recommending treatment, however, is not a standard classification problem. It’s a game of imperfect information. You have to bet on the best procedure given the person and the diagnosis. The algorithm’s job is to weigh the success rates of the various treatments against the diagnosis and the patient, and recommend the course of treatment most likely to save their life. The patient might not have time to try another treatment, so every recommendation must be of high quality. This problem is where Facebook and CMU’s research could potentially be applied. After all, if everyone were correctly diagnosed and given the best possible treatment, wouldn’t that be a world worth building?
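To see what “betting on the best procedure” means in code, here is a minimal sketch of picking a treatment by expected value. It is my own simplification, not Facebook and CMU’s method, and every number in it is invented: the point is that the recommendation is only as good as the estimated probabilities it is fed.

```python
# A toy expected-value picker for treatments.
# The probabilities and payoffs are invented placeholders for illustration.
treatments = {
    # name: (estimated success probability, benefit if it works, harm if it fails)
    "treatment_a": (0.70, 10.0, -2.0),
    "treatment_b": (0.40, 25.0, -8.0),
    "treatment_c": (0.90,  4.0, -1.0),
}

def expected_value(p_success, benefit, harm):
    """Expected payoff of a treatment, treated as a simple gamble."""
    return p_success * benefit + (1 - p_success) * harm

for name, params in treatments.items():
    print(f"{name}: EV = {expected_value(*params):.2f}")

best = max(treatments, key=lambda name: expected_value(*treatments[name]))
print("recommended:", best)
```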
![[3]](https://towardsdatascience.com/wp-content/uploads/2019/08/1RMB5tnfmzJwCw_i_UVtCng.jpeg)
Sadly, it might not be possible. If we build an AI agent to recommend treatment, we might find the same blind spots that doctors have. An agent recommending treatments would probably have to consider the success rates of the various therapies among the many data features it analyzes. Success rates can be p-hacked, or disingenuously manipulated with statistics to look higher than they are. Lipitor, for example, was famously marketed as cutting heart-attack risk by 36%, a relative figure; the absolute risk reduction in the underlying trial was roughly one percentage point. An AI agent recommending treatments based on metrics like success rate could make the wrong bet.
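To see how a headline figure like that can mislead, here is a small sketch of relative versus absolute risk reduction, using round illustrative numbers in the spirit of the Lipitor trial rather than its exact results:

```python
# Relative vs. absolute risk reduction, with illustrative round numbers.
# Suppose 3 in 100 untreated patients have a heart attack vs. 2 in 100 treated patients.
control_event_rate = 0.03
treated_event_rate = 0.02

relative_risk_reduction = (control_event_rate - treated_event_rate) / control_event_rate
absolute_risk_reduction = control_event_rate - treated_event_rate

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # ~33%: the headline-friendly number
print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1.0%: what an individual patient gains
# An agent fed only the headline-friendly number would overweight this treatment in its bet.
```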
Doctors can also fall prey to faulty success rates, and that shared weakness is the real problem: doctors and superhuman AI both depend on data that can be corrupt or biased. But doctors can at least explain the reasons behind their recommendations. Look as hard as you might, it is unrealistic to trace the exact pattern through 50,000 matrices multiplied together in differing sequential ways and correctly identify the reason why. Maybe you could build an AI to translate another AI’s insights into human understanding, but you would run into the same problem all over again.
Even with agents that can outperform humans in areas of imperfect information, we still depend on accurate and unbiased data about the problems at hand. Gambling on whether such data is attainable is a crucial question for the future of AI.
![[4]](https://towardsdatascience.com/wp-content/uploads/2019/08/1GssYt_rx9slqipa-4XIsiQ.jpeg)