
Everyday Life and Microprediction

B2C Everyday Life Prediction Tools With Artificial Intelligence

Photo by @chengfengrecord


Do we as individuals have the right to use artificial intelligence to try to predict behaviour in everyday life? As an example, should you be able to predict the risk of hiring a specific babysitter? In 2018 Predictim advertised a service that promised to vet possible babysitters by scanning their presence on the web, social media and online criminal databases. AI safety for personal use, so to speak.

Predictim provides a new and innovative way to vet people instantly using artificial intelligence and alternative data.

Predictim never attracted any great investment ($100,000 in a seed round). In late 2018, Twitter and Facebook announced that Predictim had been banned from their platforms. Joel Simonoff and Sal Parsa appear to have taken no further steps to start a new company. In fact, in the good or bad name of social media stalking, I now see that Joel Simonoff is a machine learning researcher with NASA.

What is okay and not okay?

However, the startup's rise and decline pose an interesting question about the way we use technology to predict, and take actions based on those predictions. With Predictim there was a clear case of inherent racism in the way it was structured. An AI scan for "respect" and "attitude", with such a lacklustre understanding of ethics in the sphere of the home, seems an obvious intrusion. What is okay and not okay?

We still seem to think it is somewhat okay for companies to gather predictive information about our behaviour in our house, as long as they speak with an eloquent voice or we get the answer we are looking for online. The same goes for our Fitbit (health information), phone (contextual awareness, location) and social media (psychographics, preferences).

These predictive tools, embedded in larger frameworks, can certainly carry bias just as bad as or worse than the babysitter app's, but we can perhaps discuss that another time.

Microprediction

You may have heard of micromanagement: a management style in which a manager closely observes, controls, or reminds subordinates about their work. The term is also used in relationships outside a work context.

By saying microprediction out loud I am using it more or less as a discussion point. Prediction is nothing new, and in a way you could say it is part of what makes us unique as humans:

"Symbolic abstract thinking: Very simply, this is our ability to think about objects, principles, and ideas that are not physically present. It gives us the ability for complex language. This is supported by a lowered larynx (which allows for a wider variety of sounds than all other animals) and brain structures for complex language."

How wonderful, symbolic abstract thinking, and yet it raises the question of whether there is a limit to what we should or should not predict. There may even be questions about how to predict, or more likely laws, as ethical considerations pass into regulation. Indeed, prediction as a tool of governance can be traced far back; many mention The Prince by Niccolò Machiavelli, estimated to have been distributed as early as 1513, as an early example of the idea of gathering quantitative information for better decision-making.

Recent examples of rights protection in this context of prediction include the EU General Data Protection Regulation (GDPR) and the FDA's consideration of regulating machine learning in healthcare.

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead.
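To make that definition concrete, here is a minimal sketch in plain Python of a model that relies on "patterns and inference instead" of explicit instructions: a 1-nearest-neighbour classifier that labels a new point by looking at labelled examples. The data and labels are purely hypothetical.

```python
# A minimal 1-nearest-neighbour classifier: no explicit rules are coded,
# the prediction comes entirely from patterns in labelled examples.

def predict(examples, point):
    """Return the label of the training example closest to `point`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Hypothetical training data: (hours of screen time, hours of sleep) -> mood.
examples = [
    ((1.0, 8.0), "rested"),
    ((2.0, 7.5), "rested"),
    ((6.0, 5.0), "tired"),
    ((7.0, 4.5), "tired"),
]

print(predict(examples, (1.5, 7.8)))  # → rested
print(predict(examples, (6.5, 5.2)))  # → tired
```

Nothing in the code states *why* screen time or sleep should matter; the mapping is inferred from the examples, which is also where any bias in those examples gets inherited.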

Without drawing definitive guidelines, there seems to be only a vague understanding of what is okay to do and what is not. The anthropologist Marilyn Strathern wrote a piece on this some time ago, Future Kinship and the Study of Culture (1995), in which she discusses our notion of what is artificial and how it has changed over time. The artificial, or artifice, is not set in stone, and what is conceived of as natural changes.

Why microprediction?

Micromanagement generally carries a negative connotation, mainly because it signals a lack of freedom in the workplace. Microprediction as a talking point may need further discussion, and I have not defined it because I do not yet understand how it could be used in a good manner. However, to spark further interest, I want to finish my article with this comment by Predictim regarding ethics:

"We take ethics and bias extremely seriously," Sal Parsa, Predictim’s CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process." –Brian Merchant, writing in Gizmodo on the 12th of June 2018

Does that sound familiar? If so, who has the right to use, or misuse, user data?
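The step Parsa describes, taking protected attributes out of the training set, can be sketched in a few lines. This is my own illustration with hypothetical column names, not Predictim's code, and it also shows why the step alone settles little: proxy features that correlate with the removed attributes stay in the data.

```python
# Hypothetical applicant records; the column names are illustrative only.
records = [
    {"age": 29, "gender": "f", "race": "a", "zip_code": "94110", "score": 0.8},
    {"age": 41, "gender": "m", "race": "b", "zip_code": "10001", "score": 0.6},
]

# The "sensitive attributes, protected classes" from the quote above.
PROTECTED = {"gender", "race"}

def strip_protected(record):
    """Drop protected attributes before the record enters a training set."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

training_set = [strip_protected(r) for r in records]
print(training_set[0])  # gender and race are gone...
# ...but proxies such as zip_code remain and can still encode the removed
# attributes, so this step by itself does not guarantee an unbiased model.
```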

I am not sure whether I would support scanning a babysitter on these criteria, but would I support insurance companies or credit scoring agencies doing the same?

Credit scoring based on narrow AI techniques combined with nontraditional data, such as social media, could be an interesting field to keep monitoring, as new companies pop up and apply machine learning in novel and potentially uncomfortable ways.

Bias: inclination or prejudice for or against one person or group, especially in a way considered to be unfair. In the field of anthropology we discuss ethnocentrism: evaluation of other cultures according to preconceptions originating in the standards and customs of one’s own culture.

How we use technology or condemn the use of technology is fascinating.

I will be sure to explore this further.

This is day 22 of #500daysofAI, I hope you enjoyed it.

What is #500daysofAI?

I am challenging myself to write and think about the topic of artificial intelligence for the next 500 days with the #500daysofAI. It is a challenge I invented to keep myself thinking of this topic and share my thoughts.

This is inspired by the film 500 Days of Summer where the main character tries to figure out where a love affair went sour, and in doing so, rediscovers his true passions in life.

