Trust: The Currency of AI

Part I: The Sanctity of Truth

In the past two weeks, we’ve been flooded with “AI news.” There was an uproar over OpenAI’s non-disparagement exit agreements. Consumers were uneasy over the realization that Slack customers are opted in by default to having their data used to train its AI product. Scarlett Johansson’s voice was seemingly imitated to voice Sky for GPT-4o. And Google’s AI Overviews, a search insights summary, produced a number of inaccuracies that caught attention on social media. The running theme across these examples was trust, or the lack of it. In today’s newsletter, we break down why trust matters to consumers and offer small steps companies can take to build trust in AI, today.

The Responsibility of Building Trust

The “bubble” of AI hype is certainly bursting, and the public is increasingly concerned about the stability of these consumer products and whether the products, and the companies that build them, can be trusted. Many argue that the answer is no, we can’t fully trust AI. That lack of trust isn’t meant in a “boogeyman” or “doomsday” way. It simply means that we collectively lack an understanding of AI, its opportunities and its harms today, and how to standardize and regulate how it is built and used. Consumers at large seemingly agree: over the past five years, trust in AI companies in the U.S. has declined from 50% to 35%. While the EU AI Act is promising, we have a long way to go to establish a global taxonomy and standards for AI, along with protections for consumers around its use. As we solidify regulation, one of our greatest concerns should be who defines what is safe, who defines what is regulated, and who decides what is trustworthy.

At Everi, we believe the responsibility for these definitions should be shared beyond technologists alone. We must engage civil society at large, as well as consumers, to define the boundaries of AI and to ensure we build for a real need or demand.

As we all await regulation, we want to provide companies with a launchpad for creating internal processes and guardrails for building AI consumer products. Ultimately, it will be the responsibility of companies building AI consumer products to abide by regulation, but that work shouldn’t wait until a new law is enacted. Trust is built now. To build trust in AI, companies should consider the three pillars of trust: (1) The Sanctity of Truth, (2) The Security of Consumer Data, and (3) The Ethics of Companies and Their Leaders. In today’s newsletter, we will focus on the first pillar, The Sanctity of Truth.

The Sanctity of Truth (Is It Accurate?)

Why Accuracy of Information Matters

Among the many hallucinations and inappropriate answers we’ve seen from AI over the past few years, the pitfalls of AI this past week were largely about information, or misinformation. With Google’s AI Overviews, we saw a harsh truth: the internet is noisy, biased, and unfiltered. To apply AI summaries at scale, we need better outcomes that we can trust. Admittedly, many of the examples shared across social media were innocently funny and harmless, but they revived a longstanding concern: what do we do when we can’t trust AI to be accurate? I can imagine some cynic out there thinking, “we do what we’ve always done, because it’s never been accurate.” Well, yes. The quick caveat is that we shouldn’t expect the internet to be right all the time, and we never have been able to. In its pre-existing form, searching for answers on Google always required a bit of work and verification to make sure your answers were accurate. But in a world where AI summaries encourage you to forsake active search, it becomes more important that the information being summarized is accurate and true.

This need for accuracy is only amplified in the context of how we use information on the internet to make decisions. In the battle against “fake news,” social media companies and major media networks worked overtime during the 2020 election to combat misinformation online. In 2024, technology companies are asking consumers to trust ChatGPT, Gemini, and the like to provide easy access to accurate information. In an election year, the consequences of misinformation escalate from harmless and funny to dangerous and consequential. With deepfakes and voice cloning becoming harder to detect, increasingly sophisticated fraud poses a serious risk during the election and beyond. Trust is no longer about getting the stats right in an English paper; it’s about how we perceive government officials and how the public makes decisions about the future of the country.

Implementing the Truth (as Companies)

So what can we do? In the US, regulation is on the horizon but still pending as of today. How can companies honor The Sanctity of Truth and build trust with their customers?

To build trust with customers, you have to build trust in your products and services through quality and accuracy. A lot of today’s technology has been driven by the popular tech philosophy: “ship fast and break things.” But when we ship too fast and break too many things, we break trust. And when we break trust, we erode demand for the products that come next.

Now, I don’t believe consumers reject AI altogether. Much of what everyone now calls “AI” has been powering recommendation systems for years, serving personalized ads and recommended content on social media. But for newer AI consumer products, the tolerance for false information is lower. AI can be cool, but it isn’t quite cool enough to justify consistently false information (or the perception of it).

Follow these three steps to improve quality and accuracy and build trust in your AI products (hint: it works for all products):

(1) Listen to your customers. The 2024 Edelman Trust Barometer report identified “listening” as a top trust-building action for innovators bringing new AI technologies to market. Don’t just listen for demand, listen for risk. This goes beyond user experience research or customer feedback. You need to understand your customers’ fears. Try to understand their questions, and roll out new features with as much information as possible to address those questions at launch. Innovation requires more than building what people say they want. Most people don’t know they want something they’ve never seen. Customers will tell you what they don’t want once they’ve seen it. When they do, listen.

(2) Improve your product releases. Are you ready when you launch your product? We all ship with small known bugs, but do you understand your customers’ tolerance levels? Instead of “ship fast and break things,” we encourage companies to “learn fast and test things.” Use alpha and beta group testing, and incentivize loyalty from that early group of testers. If we’ve learned anything from the past few weeks, it’s that the race for the best AI consumer products is not about who is first; it’s about who is best (quality) and who is right (accuracy). Test your features, fix your bugs, rinse and repeat until you’re ready for scale.
(3) Make it “accurate” for everyone. If your product’s output, in this case information, is only accurate for a subset of your customers, it isn’t accurate; it’s broken. Information on the internet scales beyond the US or the UK, and we encourage companies to invest in accuracy for their global markets. If you’re scaling, your total addressable market reaches far beyond North America and Europe. When you get information right for everyone, you increase your reach and your scalability.

Trust is the currency of AI. Invest this currency well, and it will pay dividends. In the next newsletter, we’ll talk about the second pillar of trust: The Security of Consumer Data.

How Everi Can Help

The key building block of truthful, accurate AI is clean, accurate data. At Everi AI, we help companies curate high-quality data to train accurate and precise AI models. We partner with companies to build AI products that work for Everi one, Everi where.

If you’re also focused on ethically curating datasets to train or fine-tune your models, we’d love to hear from you at [email protected] today!