Truth, Lies, BS, and the Frightening Future of AI Models

I write about strategies to turn fans into customers and customers into fans. I also share ways to use real-time strategies to spread ideas, influence minds, and build business.


In our polarized world, we cannot agree on the truth. Who won the 2020 US Presidential election? Are vaccines safe and effective? What’s going on with university campus protests? When we add dumb AI robots like the models underlying ChatGPT spitting out “answers,” things get super scary.

I’m a fan of the Marketing AI Institute podcast and look forward to each week’s episode when hosts Paul Roetzer and Mike Kaput jump into a range of quick-fire subjects. In a special episode last week, they discussed a conversation on The Ezra Klein Show with Dario Amodei, the former vice president of research at OpenAI and now co-founder and CEO of Anthropic, the company behind the Claude series of large language models. The show's topic: What if Dario Amodei Is Right About A.I.?

“It’s one of the craziest interviews I’ve ever listened to,” Paul says. “If we, as a society, can’t agree on truth, how do we build models that agree on truth? Anyone can build whatever they believe truth to be. It can create cults, it can create new religions – all these things because they are insanely good, they’re superhuman at persuading people to believe something.”

I listened to the Amodei interview myself a few days ago and have been thinking about the issues raised since then.

AI models: the ultimate bullshit artists

In the interview with Dario Amodei, Ezra Klein brought up the book “On Bullshit” by Harry Frankfurt.

The “On Bullshit” book description reads, in part: bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Liars at least acknowledge that it matters what is true. By virtue of this, Frankfurt writes, bullshit is a greater enemy of the truth than lies are.

Ezra Klein: “When I began interacting with the more modern versions of these systems, what they struck me as is the perfect bullshitter, in part because they don’t know that they’re bullshitting. There’s no difference in the truth value to the system, how the system feels.”

Dario Amodei: “If you think of how the models are trained, they read a bunch of stuff on the internet. A lot of it’s true. Some of it, more than we’d like, is false. And when you’re training the model, it has to model all of it.”

"Truth" and bad actors

Tools like ChatGPT are great at many things, and I use AI nearly every day in my work. As I interact with the various large language models, I don’t really think about “truth.” Clearly there is a lot of bad information out there that these models have been trained on, but as with Google or other search engines, I tend to be careful with the results and give them a personal “smell test” before using them.

I’m an outlier because I understand the basics. I’ve been working in and around the professional information business for nearly 40 years. Most people simply don’t consider “truth” in what they read or find online. 

Imagine a future where bad actors deliberately create AI models that are false or that skew the truth.

  • What happens if a nation-state creates an AI model to push its own version of the truth?
  • Will we see a day when there are AI models built by a political party or a religion to deliver their version of “truth” to followers?
  • Will Fox News, MSNBC, the Wall Street Journal, and the New York Times have their own models?

“These things can do so much damage because they’re perfect,” Paul says. “They don’t care about the truth. They have no relation to the truth. They just achieve whatever objective is set out. The leaders of these AI companies all think these models may go very bad. But publicly they say we have to build the smartest versions possible, and we’ll just figure it out.”

This is an important topic. I encourage you to listen to Ezra Klein’s interview with Dario Amodei as well as Paul and Mike’s analysis of it.

Also! At the Marketing AI Institute’s AI for B2B Marketers virtual event on June 6, 2024, I will be moderating a keynote discussion: “The Reality of AI Adoption in B2B Marketing - A Panel Discussion with B2B Leaders”. If you are a B2B marketer looking to reinvent what's possible, check out this virtual event.

Disclosure: I am an investor in the Marketing AI Institute.