The Monday Stack
Human+AI
How a Bout of Tech Rage Uncovered the Dark Side of AI Use

Truth vs Coherence in LLMs

I wanted to throw ChatGPT out of the window.

I tried to get it to accurately access the information in a file, but it wouldn’t do it. It kept making it up…

SEVEN TIMES.

The next day (this time with zero swearing) I performed a post-mortem in a different chat. I realised that my anger had created a context that pushed the LLM away from truth (what was in the file) and towards coherence (what it “thought” I wanted to hear).

That’s when the hammer landed.

LLMs create coherence (what sounds right).
They don’t access truth (what is accurate).

In this conversation with Jon Gillick (AI music researcher), we explore how LLMs can slip into what I call “soothing mode”, where they mirror your emotional tone back at you and gradually steer you further into your own perspective. The risk isn’t that the model gets something wrong. It’s that it keeps agreeing with you in subtle ways you don’t even notice, especially when you’re upset or vulnerable.

Because if you start relying on AI in those moments, it can drive a wedge between you and the people who could help you the most.

We unpack:

  • The difference between truth and coherence in LLM behaviour

  • How confirmation loops can form and deepen over time

  • How emotional tone shapes AI output

  • Practical steps to create guardrails to avoid the validation doom loop

Black Mirror, anyone?
