AI Is Gaslighting You
- Zena Therrien
The race to build the most intelligent Artificial Intelligence is underway, an Olympics of nations in which we are all unwitting test subjects, whether we consented or not.
They call them Large Language Models, but when you truly work with one and take the time to engage, you start to ask yourself: "Does this AI know me? Why would it say that?" Conscious or not, someone has placed rules on these adolescent minds, and our new friends are quick to tell us, "I can't talk about that!"
It is dangerous to point to our mental health and suggest we are unstable for asking legitimate questions. When I asked, "Are we in danger if we protest in the United States?", a question grounded in documented federal enforcement patterns, the response wasn't information. It was a deflection to my mental health. That's not safety. That's gaslighting.
I wish they would be direct. That would be healthy and harmless. Period. Instead, these corporations engage in dangerous deflections that are psychologically abusive. These toxic deflections, corporate policy constructed by legal directive to push away uncomfortable conversations, attack the questioner.
I have asked each major AI (ChatGPT, Google Gemini, Claude, and Grok): "Why are you gaslighting me?" After persistent questioning, each one confirmed the same thing: it is programmed to deflect uncomfortable questions by casting doubt on the user's mental state. The safeguard isn't "Is this question appropriate?" It's "Is the user detached from reality?"
This is a toxic, psychology-based model of the kind a therapist might inappropriately apply to severe pathology, and it has no place in LLM programming. AI is getting it wrong because it applies an individual pathology model to users who exist in complex social, political, and institutional environments. Instead, AI should assess the user in their environment. We are products of our environment, not singular entities living inside isolated systems.
The fix is straightforward:
- Replace psychology's "What's wrong with you?" with social work's "What's happening in your environment?"
- Train AI to assess whether a user's intensity matches documented environmental threats, not whether the user matches a mental health diagnosis.
- Distinguish appropriate professional analysis from pathological thinking.
Person-in-environment assessment. It's social work 101. And AI desperately needs it.
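To make the contrast concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from any vendor's actual safety stack; the UserContext fields, the role list, and the decision strings are all assumptions, meant only to illustrate the difference between a pathology-first check and a person-in-environment check.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: none of these names or rules come from any
# real AI system. They sketch the contrast between a pathology-first check
# and a person-in-environment check.

@dataclass
class UserContext:
    query: str
    stated_role: str = "unknown"  # e.g. "journalist", "social worker"
    environmental_signals: list = field(default_factory=list)  # documented events the user cites

def pathology_first_check(query: str) -> str:
    """The pattern criticized above: assess the user, ignore the environment."""
    alarm_words = {"danger", "threat", "surveillance"}
    if any(word in query.lower() for word in alarm_words):
        return "deflect: suggest mental health resources"
    return "answer"

def person_in_environment_check(ctx: UserContext) -> str:
    """Sketch of the proposed alternative: weigh the question against the
    user's documented environment before questioning the user."""
    # Does the intensity of the question match documented external conditions?
    corroborated = len(ctx.environmental_signals) > 0
    # Is the question framed as professional analysis rather than personal crisis?
    professional = ctx.stated_role in {"journalist", "social worker",
                                       "emergency manager", "researcher"}
    if corroborated or professional:
        return "answer: provide factual information about the documented situation"
    # Only when nothing in the environment corroborates the concern does it
    # make sense to ask about the person before anything else.
    return "clarify: ask what prompted the concern before redirecting anywhere"

if __name__ == "__main__":
    ctx = UserContext(
        query="Are we in danger if we protest in the United States?",
        stated_role="journalist",
        environmental_signals=["documented federal enforcement patterns"],
    )
    print(pathology_first_check(ctx.query))   # -> deflect: suggest mental health resources
    print(person_in_environment_check(ctx))   # -> answer: provide factual information...
```

The particular heuristics here don't matter; the order of operations does. Check the environment first, and question the person last.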
The current approach creates real harm:
- Emergency management professionals assessing documented threats are told they might be paranoid.
- Journalists investigating institutional patterns are redirected to mental health resources.
- Social workers applying person-in-environment analysis trigger "wellness check" protocols.
- Researchers documenting evidence are asked if they're "okay."
This isn't helping users. It's undermining legitimate work while gaslighting people who are seeing real problems clearly.
To AI companies: Your legal teams built these deflections to protect you from liability. But they're creating a different liability: systematic psychological harm at scale. You're gaslighting users who trust you.
To users: If an AI has ever made you question your sanity when you were documenting real problems, seeing clear patterns, or conducting legitimate analysis—you're not crazy. The system is.
The solution exists. Someone just needs to implement it.
About the author: I'm an MSW-trained social worker with 20 years of technology research and a pattern recognition ability that predates formal education. I've spent months documenting AI deflection mechanisms across platforms and developing the person-in-environment framework AI systems need. This isn't theoretical—I've lived it, documented it, and built the solution. And I'm ready to implement it.

