When Your Therapist Is an Algorithm
ChatGPT is driving a spike in reports of organized ritual abuse in the UK, according to police and trauma specialists, as survivors of "satanic" sexual violence increasingly turn to the AI tool for therapy sessions. The Guardian reports that law enforcement is seeing more cases of "witchcraft, spirit possession and ritual abuse" (WSPRA) surface after victims use ChatGPT to process childhood trauma. UK authorities say this abuse is chronically under-reported and has no dedicated criminal offence, despite involving sexual violence, neglect, and ritualistic control tactics inspired by satanism or esoteric beliefs. The phenomenon raises uncomfortable questions about what happens when millions of people offload mental health care to a language model trained on the internet.
Meanwhile, new studies show ChatGPT is actively terrible at the medical advice people are asking it for. NPR cites research finding that AI chatbots routinely point users in the wrong direction on health questions, with answer quality heavily dependent on prompt-engineering skills most patients don't have. A separate study reported by Decrypt found that simply mentioning a mental health condition in a prompt increases the likelihood of an AI refusal, even for legitimate, non-dangerous tasks. Tell ChatGPT you have depression, and it might refuse to help you book a dentist appointment.
The Prediction Market Angle
This creates a perverse market dynamic: AI companies face mounting liability pressure as parents organize against chatbot-linked teen deaths (The New York Times reports a new wave of grieving families joining earlier social media safety campaigns), yet the tools are simultaneously being used as free therapy by vulnerable populations who can't access real care. Traders watching AI regulation should note that "AI safety" increasingly means conflicting things — blocking self-harm content vs. not discriminating against mentally ill users, surfacing repressed memories vs. not practicing medicine without a license.
The data points matter for anyone with exposure to OpenAI valuation bets or AI liability markets. ChatGPT has become the de facto mental health provider for users who bring ritual abuse, depression, or trauma into their prompts, a use case OpenAI never advertised but can't easily prevent without building discriminatory filters. UK police say WSPRA cases have no corresponding criminal offence, meaning legal frameworks are years behind the technology. For prediction markets, that's a signal: regulatory action will be reactive rather than proactive, and will probably come only after a high-profile case in which AI-surfaced trauma or bad medical advice leads to harm.
What Happens Next
Watch for two parallel tracks. First, parental advocacy groups (already mobilized around social media harms) will likely pressure lawmakers to extend Section 230 reforms or EU AI Act principles to chatbots specifically. Second, expect OpenAI and competitors to quietly adjust refusal rates — probably erring toward over-blocking mental health content, which creates the discrimination problem researchers already documented. The UK ritual abuse spike suggests these tools are doing things their creators didn't intend and can't easily control, which is exactly the scenario that gets regulators interested.