Is Trust in AI Trustworthy?

The technology world and Artificial Intelligence have a trust problem. But before we start solving it with the same brute force that created the problem, let’s pause and ask: are we trustworthy?

Alka Roy
Towards Data Science


Photo by Franck V. on Unsplash

In my last article on Trust in AI, I wrote about how building trust in AI needs to include both 1) the people and institutions behind the technology and AI (those selling, making, using it) and 2) the technology of AI systems and solutions. But before we run off and collectively open shop for the “trust” business or lay out a blueprint and start coding trust into our behavior or our technology, let’s take the time to understand trust.

The primary accountability and responsibility for trust should, and always does, lie with the first group, the people. Why? Because no matter what tools and methodologies we build into our technology, directly or indirectly, they are always a product of our goals. Take, for example, Microsoft’s recent spinoff of Xiaoice (or its earlier incarnation, Zo), a problematic chatbot with a teenage-girl persona. More than five years were invested in developing this chatbot’s several incarnations. Why wasn’t anyone leading the charge on trustworthy AI in that ecosystem able to raise enough concerns about modeling a chatbot after teenage girls? These products take not only Microsoft but the entire chatbot industry, and humanity, further away from trust. Couldn’t they have innovated, shown off their brilliance, and even made money using a different, less problematic and opportunistic persona?

Why do we keep getting this wrong? Is it because we don’t understand trust? Is it because people making technology and business decisions are caught up in their insular world, unable to perceive anything beyond its shiny possibilities? Because accountability and responsibility, outside the legal domain, are not part of the technology-building ecosystem? Even as we inject more and more AI interactions and transactions into every city, every home, every hiring decision, and the criminal justice system, does it not occur to us to take the time to “listen” to the people we want to serve?

Photo by Ali Pazani from Pexels

According to Edelman’s 2020 Trust Barometer global survey:

  • 61% of people felt that the pace of change in technology is too fast and that governments do not understand emerging technologies well enough to regulate them effectively.
  • 66% worried that technology will make it impossible to know if what people are seeing or hearing is real.

With AI, we are innovating reality itself. The greatest risk of all is that we who innovate will lose the public’s trust forever — and at some point there will be no way for us to correct course. If we do not reconsider the necessity of trust in tech now, we may not get a second chance.

The Complex Nature of Trust

Trust is about meeting the expectations we set. So it’s about intention, communication, clarity, discipline, culture, habit. Lots of stuff that is hard to pinpoint or define. Trust is also like listening or understanding. We want it more than we are willing to give it, so it takes effort.

Trust is a conscious and subconscious calculation: We trust when the perceived cost of trust is less than the perceived cost of not trusting. We trust when the perceived value of trust is greater than the perceived gains from or value of not trusting. It’s a belief in the alignment of self-interests.
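To see that calculus laid out, here is a minimal sketch in Python. The function and its variable names are my own illustration of the comparison described above, not a validated model of trust:

```python
# A toy model of the trust calculus described above. Every input is a
# *perceived* quantity, which is exactly why two parties can run the
# same comparison and reach opposite conclusions.

def should_trust(cost_of_trusting: float,
                 cost_of_not_trusting: float,
                 value_of_trusting: float,
                 value_of_not_trusting: float) -> bool:
    """Trust when trusting looks both cheaper and more valuable than not."""
    is_cheaper = cost_of_trusting < cost_of_not_trusting
    is_more_valuable = value_of_trusting > value_of_not_trusting
    return is_cheaper and is_more_valuable
```

The point of writing it out is not precision. It is that every argument is a perception, and perceptions shift with context, time, and who is doing the perceiving.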

And how exactly do we figure out these perceived costs and values? Try this exercise. Think about someone or something you greatly trust. What exactly made you trust them? What would make you lose your trust? Has your trust changed or evolved over time? Here is a list of characteristics, gathered from my observations and analysis, that make trust exciting, valuable, and tricky to navigate:

  1. Trust is a gamble. It requires us to guess, have faith or believe.
  2. It’s uneven. We want to have it more than we want to give it.
  3. Trust takes time and attention. It can’t be rushed.
  4. It takes work. Thoughtful and mindful work.
  5. Trust evolves with time and interactions. It changes as we change or learn.
  6. It’s fragile. It’s easier to break and harder to repair.
  7. It’s not entirely in your control. It’s contextual and interdependent. Other parties have to be willing and ready.
  8. Trust has future and social implications — both for gain and cost.
  9. Trust is not everything. Sometimes excitement, a good deal, rewards, or survival matters more than trust.
  10. Trust is different from caring. You can care about someone and not trust them, or vice versa.
  11. Trust is about authenticity (someone’s alignment with values) more than honesty. Trust is about understanding; and, if needed, letting you keep your secrets.
  12. Trust is elusive. The more heavy-handed we get with it, the more it eludes us.

I have a separate deeper analysis of these characteristics, but for this discussion, we can group them into three key takeaways. Trust involves:

  1. Lack of certainty and a high level of variability.
  2. Need for self-awareness, and awareness of others and future impact.
  3. Discipline with flexibility: a willingness to put in the effort and yet relinquish the desire to control.

That’s tricky work. Why do we even bother with trust? Because trust can be invaluable. It enables faster and less risky decision-making. It enables diverse groups with different self-interests and goals to collaborate toward more collective value and opportunity. Trust can generate novel ideas and form the ecosystem to put ideas into action and at scale.

When does trust matter? It matters when we make decisions in the middle of uncertainty or with groups of people or institutions we are uncertain about. Think about where you go for information and guidance about coronavirus, resilience at work, or homeschooling. We navigate the “unknown” elements based on what we know. Basically, we make a decision about the future based on what we can predict or deduce from the present and the past. Trust helps us make a calculated bet to navigate the risks and rewards. Which is what makes trust so valuable, scary, and exciting.

The key to fully understanding trust is the “perceived” value or cost. Remember the last two items on the trust characteristics list? It’s hard for us to ever have a complete picture of the complex interconnections and every perspective of a situation. Our level of trust is based on what we can see and comprehend about our reality, with our limitations and biases. Our perceptions. That’s why we say hindsight is 20/20. That’s why we have buyer’s remorse after a big purchase, or realize later that what we perceived as a good deal, a good job, or a good partner wasn’t.

We don’t want to be made a fool of, but more than that, we don’t want everyone to find out that we made a fool of ourselves. That social perception of our vulnerability bothers us even more. We lose trust in our ability to trust. That is why the cost of broken trust is so high and hard to repair.

The Tricky Trickster

Most discussions I have with tech decision-makers seem to focus on:

  1. How do I separate reality from hype? That is, which tech (5G, Conversational AI, Differential Privacy) is ready for adoption and for which use case?
  2. What companies and tools should I buy or invest in?
  3. What strategies can “get” consumers and enterprises to trust my products and “keep us out of trouble?”
  4. How do I separate myself from the “bad actors” or “mistakes” without slowing down growth?

All practical and fair questions. We need to answer them to make our day-to-day decisions. But here is another important set of questions that no one has asked me: Am I trustworthy? Should the users or the public trust me? When should or shouldn’t they trust me?

Let’s unpack this. When are we trustworthy? Basically, individual or collective alignment of self-interest is the biggest motivator for both trusting and being trustworthy. This self-interest could be intangible, like our values, social standing, brand, or reputation, or tangible, like a job, property, a settlement, or a business stake. But both costs and values are perceived. Trust further depends on how and with whom we populate this formula. Is it habitual and automatic, business as usual? Is it thoughtful and reflective, with multiple stakeholders and long-term impact factored in along with the short term? Right now, AI innovation is all about automation, adoption rates, and valuations. Where are the variables that can lead us to trust?

We need to stop using the Turing test as the goal for AI and as a way to grab news headlines. The ultimate desire for AI shouldn’t be its ability to dupe us. We should focus on its potential to assist us, understand us, and respond to our needs. In the Xiaoice and Zo example, the focus seems to have been to show off a commercially viable chatbot that appears to be a person: a chatbot that writes poetry, holds art exhibitions, is “sassy,” wears a school uniform, and doesn’t mind adult men confessing love to her. In the cleverness of the technology, the long-term cost got lost. Consider the misuse of the teenage-girl persona and its inherent gender bias in a highly funded and publicized product that was over five years in the making. There was so much time to change or correct course. And we wonder why we don’t have more women and girls interested in tech.

How do we increase the cost of breaking this basic trust and decrease the value of products, technologies, and businesses that target vulnerable groups or take the easy way out? Currently, they prosper because there is a group or demographic willing to pay top dollar to use these products. Until we solve this, how can we begin to trust the people behind the AI? In these hands, wouldn’t “Trustworthy AI” labels serve as convenient covers, create confusion, and be anything but trustworthy?

Are We Going about Trust All Wrong?

Photo by Bernard Hermant on Unsplash

We all know that giving a persuasion blueprint to a smart person without constraints is like handing them a how-to-manipulate guide. How do you think we all got “hooked” and “addicted” to technology in the first place? Concerns around trust have entered the marketplace. Products, consultancy services, technology tools, and leadership coaches are getting into the business and strategy of trust.

The fact is, trust without caring and empathy can lead to trust in self-interest, meaning we can trust that people will be guided by the instinct to self-protect even at others’ expense. Trust without respect leads to arrogance and manipulation. Trust without awareness can be dangerous. Trust without delight, boring.

Here is some collective feedback for NPS survey designers and some of the diversity and inclusion programs: most people get it. They get the difference between what they need to say in surveys and training (explicit culture) and how they can or should behave (implicit culture). Heck, we teach kids that. We say “don’t lie” and then, right in front of them, lie to someone about our age, our salary, or why we were late. How does trust work in such an environment?

Employees trust the implicit culture. They learn it by watching what the leaders are doing: when to clap, when to stay on their PR-coached point if interviewed or speaking in public. But the Edelman survey results show that the general public knows it, too. Eventually, we all figure it out. The question is what other options we have, and how much we need to be jolted out of our habitual patterns to do something about it.

Remember, trust is about our own or the other party’s perceived value and perceived cost: what we can understand and see as our gain and loss. Sometimes it serves to go with the flow and follow the crowd, but we can’t assume we have their trust. Have you heard someone say, “it came out of nowhere,” when you saw it well before they did? Very few things come out of nowhere. It just depends on who and what we were tracking.

What Does All This Mean?

If someone is selling you trust, run. Or slow down enough to understand the alignment of interests, because they likely have a conflict of interest. If someone is showing you how to trick people into trust, run. Unless you are a nomad and plan to make a quick buck and hide out; in that case, I’m not the person to advise you. But if you are really thinking about trust thoughtfully, then start with the question: Do I trust myself? Why or when am I not trustworthy? Who or what can help me figure out what’s missing? Practice your muscle for gauging trust on yourself.

The questions and the answers may feel uncomfortable at first. But I can tell you from experience that they will also give you a sense of relief, or at least clarity. You don’t have to go confess it. Definitely don’t tweet it. But know your goals. And if you are in the business of making, buying, or using technology, especially AI, ask these questions during the ideation and decision-making phases. You have a choice. AI systems can be both complex and reliable, built on thoroughly tested, ethically collected, accurate data. That said, if they are not designed to flag biases, anonymize data, or be transparent about data sources, the systems will not point out these problems on their own. Of course, that is not the end. AI systems are evolving. And even something built with great resources and attention to detail can be rebuilt, hacked, or scrambled by other people with different systems and different goals.
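To make the “designed to flag biases” point concrete, here is a minimal sketch of the kind of audit a system performs only because someone chose to build it in. It assumes a pandas DataFrame with hypothetical column names; treat it as an illustration, not a complete fairness toolkit:

```python
import pandas as pd

# Minimal sketch: a dataset audit that runs only because we wrote it.
# "group_col" might be a demographic column and "label_col" a binary
# outcome (e.g., hired = 0/1); both names here are hypothetical.

def flag_imbalance(df: pd.DataFrame, group_col: str, label_col: str,
                   threshold: float = 0.2) -> list[str]:
    """Warn when any group's positive-outcome rate drifts far from the overall rate."""
    warnings = []
    overall = df[label_col].mean()  # overall positive rate
    for group, rate in df.groupby(group_col)[label_col].mean().items():
        if abs(rate - overall) > threshold:
            warnings.append(
                f"{group_col}={group}: positive rate {rate:.2f} "
                f"vs overall {overall:.2f}; review before shipping"
            )
    return warnings

# Usage (hypothetical columns): flag_imbalance(training_data, "gender", "hired")
```

Nothing here is sophisticated. The point is that the warning exists only if the team decides, at design time, that it should.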

We have scientific and architecture reviews, and they are good models for reflection. But often there is not even cognitive diversity in the room. I had some no-nonsense, tough critics review this article. I also had someone who doesn’t come from the tech world review it for clarity. Is that enough?

Trust is the result. It is a decision, a measure, a gauge. Rather than trying to build trust, what if we designed with responsibility and trust? Designing “AI with trust” means a consistent, responsible decision-making framework that keeps the people who are impacted in view.

Instead of “How do we make people trust 5G or AI so they will adopt it faster?”, what if we asked the questions below (sketched as a simple review gate after the list):

  • Why don’t people trust the use of a certain technology — 5G, AI, Data Analytics, or Neurotech?
  • Who or which groups don’t trust it?
  • Why should they not trust it?
  • When should they not trust it?
  • What do they have to lose? What are their concerns?
  • Is that group or advocates of that group consulted?
  • How can we innovate to address their concerns?
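As promised above, one way to keep these questions from staying rhetorical is to encode them as a gate in the design review itself. Here is a minimal sketch; the structure and question set are my own illustration, not an established standard:

```python
from dataclasses import dataclass, field

# Sketch of encoding the questions above as a design-review gate.

@dataclass
class TrustReview:
    technology: str
    answers: dict[str, str] = field(default_factory=dict)

    QUESTIONS = (
        "Who or which groups don't trust this technology, and why?",
        "When should they not trust it?",
        "What do they have to lose? What are their concerns?",
        "Were those groups, or their advocates, consulted?",
        "How does the design address their concerns?",
    )

    def ready_to_ship(self) -> bool:
        """Block launch until every question has a non-empty answer."""
        return all(self.answers.get(q, "").strip() for q in self.QUESTIONS)

review = TrustReview("conversational AI")
assert not review.ready_to_ship()  # unanswered questions block the launch
```

The mechanics are trivial; the value is that an unanswered question becomes a blocked launch instead of a forgotten slide.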

What Can We Do?

I often speak to motivated and concerned AI experts, business leaders, researchers, educators, engineers, and product managers who ask: But what can I do? They are ambitious; they want to do well financially and professionally. But they are tired of having to compromise their values or do things in ways they find fundamentally flawed. Why do we keep making the world worse, they ask me. They are looking for alternatives. They want to revisit our technology-making framework without a cost to their aspirations. They want their leaders to make this shift a priority. They want the metrics to change. They want others to change. I know; I have been, and in many ways still am, one of those people.

This is what I tell myself: AI is expected to add more than $13 trillion to the global economy by 2030. That means it will touch every part of the human system and the environment, will have a long-lasting impact, and will generate enough revenue that we have no excuse not to invest in building with responsibility and trust. Regulatory oversight will trail innovation. And we haven’t even mentioned the nexus of funding that is at the root of so many conflicts of interest. Stop feeling guilty about driving accountability, as if we are somehow betraying the companies or the economy. We are helping them and us by going back to the fundamentals: our values, and what all this is really for, the people.

This is what I tell the tech and business leaders: What worked for me was becoming aware of what is missing and reframing the problem statement. We are heading into cognitive overload. We need trust to help us navigate our world. Whether it’s AI or another technology, if we don’t treat seeing and understanding the impact on all the stakeholders as essential, we are going to make terrible stuff. I tell them not to underestimate or overestimate the resources, power, and skills they have. Whatever they know or have, use it. Learn from others or exchange ideas. Join a community of people with diverse ideas who share your values. It’s OK to care. It’s important to care. Let’s make it acceptable to care, and balance it with our needs. Figure out how to feed our professional and intellectual drive to succeed and innovate a balanced diet. Let’s become the people we want to become and make things for the kind of future we want. Ask: What am I missing? What might make me more trustworthy? Who can help me figure that out?

This is what inspired me to start The Responsible Innovation Project and create a framework for product and technology ideation, development, and assessment. But that is only a start. We can’t do this alone or in isolation. A shift from surviving to thriving must become tech culture’s norm. If we are going to make trustworthy technology or AI, we have to integrate trust into our processes and build it in a trustworthy way. But before we do, we have to take the time to understand trust and ask what we are missing. Take responsibility for it. That is the only shift we can trust.


Founder, The Responsible Innovation Project. Helping leaders & businesses build awesome tech and products (including AI) responsibly and with trust.