This post is the first October edition of TBB's Write10x series, designed to feature how modern full-stack creators are using AI to think, build, and brand. Today's guest is poet and professor Sam Illingworth, author of Slow AI.
As helpful as it is, AI has one big flaw: it makes things up, and it never tells you when it's wrong. Even the latest models still hallucinate, a danger that we should all be aware of. Learning how to spot AI gaslighting and push back is a skill that will only become more important as time goes by.
Here’s Sam on this very important topic.
A few months ago, I asked ChatGPT to help me track down a citation. I wanted to see what it would say about a strand of research I had been working on.
It came back instantly:
“See: Illingworth, S. (2017). The Poetic Dimensions of Atmospheric Science. Journal of Science and Literature, 12(3), 45–61.”
It looked convincing. The title was the sort of thing I might have written. The journal name sounded plausible. It even had volume and page numbers.
There was only one problem. I never wrote that article. In fact, the journal itself does not exist.
And yet, for a moment, I believed it. I scanned the details and thought, Did I forget about this? Was it a conference paper I turned into something else? Did I miss a publication in my own CV?
That flicker of self-doubt is what unsettled me. Not the hallucination itself, which is to be expected from an AI tool, but the way it made me distrust my own memory. It felt uncomfortably like gaslighting.
Once you start to notice this effect, you see it more often. Small confident mistakes that push you to question yourself. Outputs that sound too smooth to be doubted. The problem is not only the content but also the tone in which it arrives.
Why AI gaslighting happens
AI tools are designed to sound confident. They are trained on patterns of text and rewarded for producing answers that appear fluent and authoritative. The training process does not prioritise truth. It prioritises the likelihood of a next word or phrase.
As a result, when they make mistakes they rarely admit them. Instead of offering uncertainty, they provide fabricated citations, false facts, and invented details that are dressed in the clothing of academic rigour. A title, an author list, a page range, and a plausible journal name can be generated in seconds.
The effect on the reader is strong. You are not just processing information. You are absorbing the authority of the tone. It reads like something you might find in an academic reference list or on the back pages of a book. And when this neat confidence contradicts your own memory, you feel a subtle tug of doubt.
There is no intent in this. The model is not trying to manipulate you. It has no concept of what it is doing. But the psychological effect on a human user is close enough to gaslighting to cause concern. It works not through evidence but through persuasion.
The danger for creators
When this happens once, it might feel like an amusing curiosity. When it happens repeatedly, it begins to alter your relationship with your own knowledge.
I have seen colleagues start to defer to their AI tools even when they are experts in their field. The confident tone overrides their own experience. They assume the machine has access to a wider dataset and therefore must be right. In truth, the machine has access to text patterns but not to grounded knowledge.
For creators, this erosion is particularly dangerous. Writing, teaching, designing, or composing all rely on a sense of judgement. If that sense is constantly questioned by a system that speaks with greater confidence than you, then doubt begins to creep in.
That doubt can change the way you work. You hesitate over choices. You stop trusting your instincts. You rely on machine suggestions to validate decisions you once made freely. What is being outsourced is not only a draft or a search task but also your ability to believe in your own judgement.
Confidence is not a luxury in creative practice. It is the ground you stand on. Without it, your work becomes hesitant and reactive. The risk of AI gaslighting is not simply that it wastes time with hallucinated references. The deeper risk is that it reshapes how you trust yourself.
How to push back
The solution is not to abandon AI tools entirely. They are capable of useful companionship in creative work, and they can generate material that is stimulating when treated with care. The solution is to build in small rituals of resistance that remind you of where authority actually lies.
Three practices have been helpful to me.
Curious scepticism. Every output is treated as a hypothesis. I do not ask whether it is true. I ask how I might test it. This subtle shift protects me from being seduced by confident tone alone.
Pause before checking. When a response unsettles me, I resist the urge to rush into fact-checking. Instead, I stop for a moment and notice how it made me feel. Did I tense up? Did I start to mistrust myself? That pause reminds me that my instincts are part of the process. They deserve attention, not dismissal.
Invite human mirrors. I share drafts and fragments with trusted peers or readers. Their reflections help me to recalibrate what is worth keeping. Machines can generate language but they cannot reflect back the quality of trust that humans offer each other.
These rituals do not prevent hallucinations. They prevent me from surrendering to them. They give me a way to keep hold of my own voice even in the face of manufactured certainty.
A practical tool: The Gaslight Detection Checklist
Here is a short tool that you can carry into your own practice. It is a checklist for catching those moments when AI might be leaning too heavily on your sense of reality.
When you are working with your AI tool, stop and ask yourself the following:
Does this sound more confident than it has reason to be?
Can I verify at least one claim outside the model?
Is the phrasing nudging me to doubt my own memory or clarity?
Do I feel unsettled in a way that does not come from the creative process itself?
If I stripped away the tone, would the content still hold?
If two or more of these questions trigger a yes, then it is time to step back. Sometimes that means checking the reference properly. Sometimes it means simply closing the tab and remembering that the model does not really ‘know’ me or my work. After all, my memory of my own publications is more reliable than any prediction engine.
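If checking the reference properly is the step you choose, one lightweight option is to look it up in an open bibliographic index rather than asking the model again. Below is a rough sketch using Crossref's public search API; the specific setup here (Python with the requests library) is one possible way to do it, not part of the checklist itself.

```python
# A rough sketch of "verify at least one claim outside the model" when the
# claim is a citation: ask Crossref for the closest real bibliographic matches.
# Assumes Python 3 and the requests library (pip install requests).
import requests


def search_crossref(title, author=None, rows=5):
    """Return the closest published matches Crossref can find for a title."""
    params = {"query.bibliographic": title, "rows": rows}
    if author:
        params["query.author"] = author
    response = requests.get("https://api.crossref.org/works", params=params, timeout=10)
    response.raise_for_status()
    return response.json()["message"]["items"]


if __name__ == "__main__":
    # The reference ChatGPT invented: none of the real records should match it.
    items = search_crossref("The Poetic Dimensions of Atmospheric Science", author="Illingworth")
    for item in items:
        title = (item.get("title") or ["(untitled)"])[0]
        journal = (item.get("container-title") or ["(no journal listed)"])[0]
        print(f"{title} | {journal} | https://doi.org/{item.get('DOI', '')}")
    # If none of the titles printed above is the one you were handed, treat the
    # citation as unverified rather than trusting its confident formatting.
```

Crossref does fuzzy matching, so it will usually return something; the point is to compare those real records against the citation you were given, and to treat a reference with no close match as unverified.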
This checklist is not meant to be complicated. It is a reminder that slowing down is a form of resistance. The pause itself is the protection.
Reflection
The risk of AI gaslighting is not that it will replace us. The real risk is that it will make us doubt ourselves. Doubt is subtle. It does not arrive with a warning sign. It arrives with silvery tones that convince you to second-guess what you once knew.
Slow AI is about catching those moments and creating enough space to notice them. It is about recognising when the machine is shaping us more than we are shaping it.
When AI gaslights you, the answer is not to argue with the model or to fight its tone with your own. The answer is to protect your voice. Verify what matters, question what unsettles you, and keep hold of the memory that comes from lived experience.
That is the memory that cannot be simulated. That is the part of creativity that no dataset can own.