
Trust in AI is more than a moral problem


The economic potential of AI is uncontested, but it remains largely unrealized by organizations, with an astounding 87% of AI projects failing.

Some consider this a technology problem, others a business problem, a culture problem or an industry problem — but the latest evidence reveals that it is a trust problem.

According to recent research, nearly two-thirds of C-suite executives say that trust in AI drives revenue, competitiveness and customer success.

Trust has been a complicated word to unpack when it comes to AI. Can you trust an AI system? If so, how? We don’t trust humans immediately, and we’re even less likely to trust AI systems immediately.


But a lack of trust in AI is holding back economic potential, and many of the recommendations for building trust in AI systems have been criticized as too abstract or far-reaching to be practical.

It’s time for a new “AI Trust Equation” focused on practical application.

The AI trust equation

The Trust Equation, a concept for building trust between people, was first proposed in The Trusted Advisor by David Maister, Charles Green and Robert Galford. The equation is Trust = Credibility + Reliability + Intimacy, divided by Self-Orientation.

It is clear at first glance why this is an ideal equation for building trust between humans, but it does not translate to building trust between humans and machines.

For building trust between humans and machines, the new AI Trust Equation is Trust = Security + Ethics + Accuracy, divided by Control.
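
Written out side by side, here is a plain rendering of the two equations exactly as stated above; each term is a qualitative judgment rather than a measured quantity.

    \[ \text{Trust} = \frac{\text{Credibility} + \text{Reliability} + \text{Intimacy}}{\text{Self-Orientation}} \]

    \[ \text{Trust} = \frac{\text{Security} + \text{Ethics} + \text{Accuracy}}{\text{Control}} \]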

Security forms the first step in the path to trust, and it is made up of several key tenets that are well outlined elsewhere. For the exercise of building trust between humans and machines, it comes down to the question: “Will my information be secure if I share it with this AI system?”


Ethics is more complicated than security because it is a moral question rather than a technical question. Before investing in an AI system, leaders need to consider:

  1. How were people treated in the making of this model, such as the Kenyan workers in the making of ChatGPT? Is that something I/we feel comfortable with supporting by building our solutions with it?
  2. Is the model explainable? If it produces a harmful output, can I understand why? And is there anything I can do about it (see Control)?
  3. Are there implicit or explicit biases in the model? This is a thoroughly documented problem, from the Gender Shades research by Joy Buolamwini and Timnit Gebru to Google’s recent attempt to eliminate bias in its models, which instead introduced ahistorical biases.
  4. What is the business model for this AI system? Are those whose information and life’s work have trained the model being compensated when the model built on their work generates revenue?
  5. What are the stated values of the company that created this AI system, and how well do the actions of the company and its leadership track to those values? OpenAI’s recent choice to imitate Scarlett Johansson’s voice without her consent, for example, shows a significant divide between the company’s stated values and Altman’s decision to ignore her refusal to lend her voice to ChatGPT.

Accuracy can be defined as how reliably the AI system provides an accurate answer to a range of questions across the flow of work. This can be simplified to: “When I ask this AI a question based on my context, how useful is its answer?” The answer is directly intertwined with 1) the sophistication of the model and 2) the data on which it’s been trained.
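
As a rough illustration of what measuring that might look like in practice, here is a minimal Python sketch of an accuracy check over a small set of context-specific questions. The ask_model function and the example questions are hypothetical stand-ins for whatever system and evaluation set your organization actually uses, and exact-match scoring is a deliberate oversimplification.

    # Minimal sketch: estimate accuracy as the share of workflow questions the system
    # answers acceptably. ask_model() and the evaluation set are hypothetical
    # placeholders for whatever system and questions your organization actually uses.

    def ask_model(question: str) -> str:
        # Hypothetical placeholder: replace with a call to the AI system under evaluation.
        return "net 60"

    def is_acceptable(answer: str, expected: str) -> bool:
        # Simplest possible check; real evaluations typically rely on human review
        # or more nuanced scoring than exact matching.
        return answer.strip().lower() == expected.strip().lower()

    eval_set = [
        ("What is our standard payment term for new suppliers?", "net 60"),
        ("Which region had the highest churn last quarter?", "EMEA"),
    ]

    correct = sum(is_acceptable(ask_model(q), expected) for q, expected in eval_set)
    accuracy = correct / len(eval_set)
    print(f"Accuracy on workflow questions: {accuracy:.0%}")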

Control is at the heart of the conversation about trusting AI, and it ranges from the most tactical question: “Will this AI system do what I want it to do, or will it make a mistake?” to one of the most pressing questions of our time: “Will we ever lose control over intelligent systems?” In both cases, the ability to control the actions, decisions and output of AI systems underpins the notion of trusting and implementing them.


5 steps to using the AI trust equation

  1.  Determine whether the system is useful: Before investing time and resources in investigating whether an AI platform is trustworthy, organizations would benefit from determining whether a platform is useful in helping them create more value.
  2. Investigate if the platform is secure: What happens to your data if you load it into the platform? Does any information leave your firewall? Working closely with your security team or hiring security advisors is critical to ensuring you can rely on the security of an AI system.
  3. Set your ethical threshold and evaluate all systems and organizations against it: If any models you invest in must be explainable, define, to absolute precision, a common, empirical definition of explainability across your organization, with upper and lower tolerable limits, and measure proposed systems against those limits (a simple sketch of this kind of threshold check follows this list). Do the same for every ethical principle your organization determines is non-negotiable when it comes to leveraging AI.
  4. Define your accuracy targets and don’t deviate: It can be tempting to adopt a system that doesn’t perform well because it’s a precursor to human work. But if it’s performing below an accuracy target you’ve defined as acceptable for your organization, you run the risk of low quality work output and a greater load on your people. More often than not, low accuracy is a model problem or a data problem, both of which can be addressed with the right level of investment and focus.
  5. Decide what degree of control your organization needs and how it’s defined: How much control you want decision-makers and operators to have over AI systems will determine whether you want a fully autonomous system, a semi-autonomous system or an AI-powered one, or whether your organization’s tolerance for sharing control with AI sets a higher bar than any current AI system can reach.
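
To make the steps above concrete, here is a minimal Python sketch of gating an adoption decision on thresholds your organization defines for security, ethics (explainability), accuracy and control. Every field name and numeric value is illustrative, not a standard or a recommendation, and real evaluations will involve far more than a single pass/fail check per dimension.

    # Minimal sketch of gating adoption on organization-defined thresholds drawn
    # from the steps above. All names and values here are illustrative.

    from dataclasses import dataclass

    @dataclass
    class CandidateSystem:
        name: str
        is_useful: bool                # Step 1: creates enough value to justify evaluation
        passes_security_review: bool   # Step 2: cleared by your security team or advisors
        explainability_score: float    # Step 3: scored against your empirical definition
        accuracy: float                # Step 4: measured on your own evaluation set
        control_level: str             # Step 5: e.g. "fully autonomous", "semi-autonomous"

    # Organization-defined thresholds (illustrative values)
    EXPLAINABILITY_MIN = 0.7
    ACCURACY_TARGET = 0.9
    ACCEPTABLE_CONTROL_LEVELS = {"semi-autonomous", "ai-powered"}

    def meets_trust_bar(system: CandidateSystem) -> bool:
        return (
            system.is_useful
            and system.passes_security_review
            and system.explainability_score >= EXPLAINABILITY_MIN
            and system.accuracy >= ACCURACY_TARGET
            and system.control_level in ACCEPTABLE_CONTROL_LEVELS
        )

    candidate = CandidateSystem(
        name="example-assistant",
        is_useful=True,
        passes_security_review=True,
        explainability_score=0.8,
        accuracy=0.92,
        control_level="semi-autonomous",
    )
    print(f"{candidate.name} meets the trust bar: {meets_trust_bar(candidate)}")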

In the era of AI, it can be easy to search for best practices or quick wins, but the truth is: no one has quite figured all of this out yet, and by the time they do, it won’t be differentiating for you and your organization anymore.


So, rather than wait for the perfect solution or follow the trends set by others, take the lead. Assemble a team of champions and sponsors within your organization, tailor the AI Trust Equation to your specific needs, and start evaluating AI systems against it. The rewards of such an endeavor are not just economic but also foundational to the future of technology and its role in society.

Some technology companies see the market forces moving in this direction and are working to develop the right commitments, control and visibility into how their AI systems work, such as with Salesforce’s Einstein Trust Layer, while others claim that any level of visibility would cede competitive advantage. You and your organization will need to determine what degree of trust you want to have both in the output of AI systems and in the organizations that build and maintain them.

AI’s potential is immense, but it will only be realized when AI systems and the people who make them can reach and maintain trust within our organizations and society. The future of AI depends on it.

Brian Evergreen is author of “Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence.”
