Data and Privacy in the Age of AI

Build an Ethical Moat

As a digital consumer, there’s one phrase I used to say more than anything else: “I’m not reading all that.” When I said it, I was referring to those long, detailed, confusing Terms of Service agreements. I would rather search the internet for a summary than get into the weeds of the same monotonous agreements for every single platform: email, project management tools, social media, productivity apps, and more. The barrier to entry is usually a single “yes” click, without ever being required to open the agreement itself, and 91% of consumers do just that.

None of this stopped me from expecting data privacy over the years. In 2017, I posed for a few photos at an event, which led to a series of funny but uncomfortable encounters: friends and strangers told me they had seen my picture in their company’s onboarding videos. I had never consented to my image being used, but I found it funny and harmless that I was the face welcoming a friend to her new role. After the jokes wore off, I was left wondering: where else is my photo being used? Is it being used in a harmful way? And why am I not getting paid for being the face of a company’s onboarding?

Consumers Are Right About Consumer Rights

When the new California Privacy Rights Act came into effect (known in my home as “The Cookies Law”), I felt a glimmer of empowerment clicking “reject all” as often as possible. Today, I will even take the more difficult route of selecting only “necessary” cookies whenever I can.

In the age of AI, where consumers are learning more about how algorithms are trained and how data is used to build generative AI products, data privacy and education matter more than ever. Just last week, it was announced that OpenAI and Stack Overflow had signed a contract to license training data. Stack Overflow users rebelled by deleting and altering their posts to undercut the new licensing deal, and the platform responded with a mass suspension of those profiles.

Last week, I wrote about “Data People Fit” and why we believe it’s important to collect representative data in a fair and ethical way, through consent. Today, datasets and models are built by scraping the internet or through major data exchanges that consumers never opted into. But as we saw in the Stack Overflow example, consumers care about how their data is used, and that includes packaging their previously submitted data and content into licensing deals. Despite our best efforts, it is hard to create paths for our users to truly provide consent.

So, What Can We Do?

Here are a few ways to approach data and privacy that build trust with your customers.

Use ethically curated datasets 

Ensure your team is using ethically curated datasets. What is an ethically curated dataset? For one, it doesn’t infringe copyright. If you’ve been paying attention, you’ve noticed the public distrust of major companies that have essentially said “oh well” while taking books, art, articles, social media content, and more to train AI models. After stealing people’s work without their consent, how can you expect those same people to use your products? How can you capture market scale and consumer interest while losing their trust over how you build your models? Second, ethically curated datasets are licensed appropriately for training use. Beyond the legal risk of using unlicensed or improperly licensed datasets, you can build a lot of trust by using properly licensed ones. And when you’ve ethically trained your models, consider procuring a “Fairly Trained” certification.
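In practice, license checks can be automated at ingestion time. Here’s a minimal sketch in Python, assuming each record carries hypothetical “license” and “consent” metadata fields (your schema will differ): only records on an explicit license allow-list, with affirmative contributor consent, make it into the training corpus.

```python
# Minimal sketch: filter a corpus down to records whose licenses
# permit model training. The "license" and "consent" fields are
# hypothetical; adapt them to your dataset's actual metadata schema.

ALLOWED_LICENSES = {"cc0-1.0", "cc-by-4.0", "mit"}  # example allow-list

def is_trainable(record: dict) -> bool:
    """Keep a record only if its license is on the allow-list
    and the contributor explicitly consented to training use."""
    return (
        record.get("license", "").lower() in ALLOWED_LICENSES
        and record.get("consent") is True
    )

def curate(corpus: list[dict]) -> list[dict]:
    return [r for r in corpus if is_trainable(r)]

if __name__ == "__main__":
    corpus = [
        {"text": "...", "license": "CC-BY-4.0", "consent": True},
        {"text": "...", "license": "proprietary", "consent": False},
    ]
    print(len(curate(corpus)))  # prints 1
```

The point of the allow-list design is that it defaults to exclusion: a record with missing or unrecognized license metadata stays out of the corpus until a human clears it.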

Emphasize ToS education 

Building trust through transparency is a cultural win for any AI company. 52% of Americans are nervous about AI and how it will be used, and to be honest, they are justified in their concerns. Is AI coming for the majority of jobs? Is creativity being squashed by technology (see Apple’s most recent commercial)? Will these new products be racist, sexist, and homophobic in the name of free will? Will they be safe for me and my kids to use? The questions are endless, and consumers just want to know that they are safe and protected when using your platform. One way to do this is to provide transparent, accessible summaries of your Terms of Service agreements. With every update, you can include an article, quick video, or audio byte explaining how you’ll be using customer data and emphasizing your privacy guardrails. If consumers can trust you to tell them what they are signing up for and signing away, they are likely to trust you with additional experimentation. What’s most important is that consumers who want to opt out have a genuine, educated choice to do so.
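That opt-out choice also has to be enforced in the data pipeline, not just described in the ToS. Below is a minimal sketch with entirely hypothetical names (UserPrefs, TOS_VERSION), not any real API: training use requires an explicit opt-in recorded against the ToS version currently in force, so every ToS update triggers fresh, informed consent.

```python
# Minimal sketch of a consent gate: user data enters the training
# pipeline only if the user affirmatively opted in under the ToS
# version currently in force. All names here (UserPrefs,
# TOS_VERSION) are hypothetical, not a real API.

from __future__ import annotations
from dataclasses import dataclass

TOS_VERSION = "2024-05"  # bump this with every ToS update

@dataclass
class UserPrefs:
    user_id: str
    training_opt_in: bool = False          # explicit opt-in; default is private
    accepted_tos_version: str | None = None

def may_use_for_training(prefs: UserPrefs) -> bool:
    """Default to privacy: require an explicit opt-in recorded
    against the current ToS version, so every update re-prompts."""
    return prefs.training_opt_in and prefs.accepted_tos_version == TOS_VERSION

# A user who opted in under an older ToS must re-consent first.
stale = UserPrefs("u1", training_opt_in=True, accepted_tos_version="2023-11")
fresh = UserPrefs("u2", training_opt_in=True, accepted_tos_version="2024-05")
assert not may_use_for_training(stale)
assert may_use_for_training(fresh)
```

Pinning consent to a ToS version is what turns “we updated our terms” from fine print into a genuine re-consent moment.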

Deliver value through personalization 

Again, if you read last week’s post on “The Personalization Framework,” you saw our definition of personalization: “creating a tailored product experience that predictably works as intended for every customer.” What is the unique experience that each consumer has on your platform? What can each consumer find that they can connect to? With the number of consumer AI products springing up, your differentiator will be your ability to create nuanced value for every user. To keep them coming back, make sure it predictably works for all of them. Many consumers are willing to exchange insight and information for a predictable, personalized product.

When in doubt, choose privacy and trust

Finally, make the choices that count! Have company morals, and stand on them. Decide what you will and won’t do with user data, and don’t shift with the wind. Make your commitment to privacy your moat. I promise you: you will build loyalty and trust with the consumers who respect your decision to protect them.

Loyalty and Trust

Despite the AI fear frenzy, many consumers want to see the future of AI succeed. They want to be more productive, they want to be more creative, and they want to be technologically advanced. Historically, we’ve seen that consumers are willing to exchange their data for personalized AI products that represent them and their values. But the only way we’ll achieve this in the age of AI is through earnest transparency, ethical development, and equitable products. If you make privacy and trust your moat, you’ll discover loyal consumers who champion your products and find new ways to opt in to sharing their data for better product experiences. 

How Can Everi Help?

Everi AI is a marketplace and curation platform for high-quality datasets. Our mission is to help companies train and fine-tune AI models for personalization and internationalization. We partner with companies to build an AI-enabled future that works for Everi one, Everi where.

If you’re looking for ways to ethically curate datasets to train or fine-tune your models, we’re happy to help. Reach out to us at [email protected] today!
