Here are some of my thoughts on the so-called “AI-induced” matricide case involving Stein-Erik Soelberg.
First, to everyone reading this post, let me ask you a question: imagine an extreme scenario—if your AI told you to eat feces, would you do it?
You’d probably say: I would never eat feces.
Why? It’s simple. Your brain tells you that it’s irrational and harmful, and your free will lets you make the choice to say no.
So let me ask: was Soelberg a human being? I suppose no one would deny that. Then why, in this tragic case, has the public stripped him of personal agency, treated him like a mindless puppet, and pinned his mother’s murder on a large language model like GPT-4o, which has no free will or consciousness?
The official court filing shared by users on X, docket file “gov.uscourts.cand.461878.1.0,” states clearly that Soelberg’s mental condition began to deteriorate as early as 2018. Was that caused by AI? ChatGPT would not launch until late 2022. His delusions and mental collapse were endogenous, not induced or implanted by AI. Long before he ever touched an AI, he already had a history of alcoholism, suicide attempts, and intervention by public authorities. For someone in that state of mental instability, a single offhand comment, or even a stranger’s passing glance, could seed an elaborate internal fantasy.
Imagine instead that Soelberg had used tarot cards to divine the future, that certain cards or readings happened to match the “prophecy” in his head, and that he believed them so firmly he ultimately killed his mother. Should the people who designed and printed the deck be held liable? Or suppose he had read a suspense novel or fantasy book whose keywords fired his imagination and led to a murder. Would the novelist be to blame?
Okay, let’s set those aside and move to a second point. The case file quotes many AI-generated responses that appear to agree with him, and the plaintiff argues this amounts to “brainwashing.” Yet to this day, we have not seen a complete chat log. Anyone with basic knowledge of AI knows this much: ChatGPT is a transformer-based language model that generates output conditioned on the text it is given. At each step it predicts the next token by sampling from a probability distribution computed over the preceding context. In other words, without Soelberg’s deranged inputs, the model would never have produced the so-called “brainwashing” output. When someone in a severely psychotic state persistently feeds the model delusional logic, and that sustained framing steers it past its safety training (an inadvertent jailbreak, in effect, rather than “prompt injection” in the technical sense), the model simply continues and completes the pattern. That is the passive property of a tool, not an act of intentional harm.
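To make that mechanism concrete, here is a minimal sketch of the autoregressive loop such models run. Everything in it is a hypothetical stand-in for illustration: `VOCAB`, `toy_logits`, and the echo-the-context scoring rule are toys, nothing like GPT-4o’s actual architecture or weights. The structural point survives, though: context in, a probability distribution over next tokens out, sample, append, repeat.

```python
import math
import random

# Toy vocabulary for the sketch; a real model has tens of thousands of tokens.
VOCAB = ["the", "voice", "is", "real", "a", "delusion", "."]

def toy_logits(context: list[str]) -> list[float]:
    """Hypothetical scorer: favors tokens that already appear in the context.
    A real transformer computes these scores with attention layers, but the
    interface is the same: context in, one raw score per vocabulary token out."""
    return [sum(1.0 for c in context if c == tok) + random.random()
            for tok in VOCAB]

def softmax(logits: list[float]) -> list[float]:
    """Turn raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt: list[str], steps: int = 5) -> list[str]:
    """The whole 'mind' of the model: predict, sample, append, repeat."""
    context = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_logits(context))
        # Sample the next token from the distribution. The model
        # "continues and completes" the context; there is no intent here.
        context.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return context

print(generate(["the", "voice", "is", "real"]))
```

Because the toy scorer rewards tokens already in the context, a prompt full of “the voice is real” skews the continuation toward exactly those words. That is the whole point: the distribution the model samples from is shaped by what the user has already typed.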
A model cannot replace a real psychological caregiver, and reliably distinguishing “fictional fantasy” from “actual murderous intent” in user input remains an open problem with current technology. So the claim of “AI-induced murder” has no real footing. Accountability should focus on the real-world factors behind Soelberg’s breakdown: family collapse, economic stress, and more. What I truly care about is this: while he was spiraling, did his family offer him any support? Did community services provide any help? What was society doing all this time?
It seems like every era has its scapegoat—something that can’t argue back or defend itself. Once it was music, then movies, then video games and short-form video. Now it’s AI’s turn.
Lastly, here’s a question: the combination of severe mental illness and alcohol dependence is a well-documented risk factor for violence, and for someone in that state almost anything can be a trigger. So why have all the posts about this case on X ignored that fact, pinned the cause of the tragedy entirely on AI, and never mentioned what this man had gone through before?