In a move as predictable as it is problematic, ChatGPT-maker OpenAI announced this week that it will be rolling out parental controls for its wildly popular chatbot. The decision, coming just a week after the family of a teenager who died by suicide filed a lawsuit against the company, is being framed as a responsible step toward protecting young users. But we must be clear about what this is: it is not a thoughtful, pedagogical solution to a complex problem. It is a panicked, legally motivated, and deeply flawed reaction that creates a dangerous illusion of safety while failing to address the fundamental challenges that AI poses to our children and our society. This is not a step forward; it is a step into a new and more insidious kind of digital bubble wrap.
The Three Flaws: A Leaky Shield
On the surface, the proposal seems reasonable. But a closer examination reveals three fundamental flaws that render it ineffective at best and actively harmful at worst.
First, there is the problem of its profound and deliberate vagueness. As Charles so rightly asked: “Control what?” The announcement from OpenAI speaks of allowing parents to “set limits” and “disable some of the chatbot’s features.” But what does that actually mean? Are we talking about a simple filter for profanity and sexually explicit content, a digital V-chip for AI? Or is this a new and powerful tool of censorship, allowing a parent, driven by their own political or religious ideology, to block their child’s legitimate inquiries about sexual health, gender identity, or mental health resources? The “control” being offered is a black box, and we have no idea what is being locked inside.
Second, there is the price of this panic button. The impetus for this new feature was a series of profound human tragedies. But we must ask the difficult and uncomfortable question: “Should one suicide set the bar for everyone else’s use of the system?” The risk of a “safety-first” approach is that it can inadvertently hobble the intellectual growth of millions. A system designed to sand off every rough edge, to preemptively remove any content that could potentially cause distress, is a system that is fundamentally hostile to the process of learning. It is a system that prioritizes the avoidance of risk over the cultivation of resilience, and that choice risks producing a generation of young people “protected” from the very challenges they need in order to develop critical thinking and emotional maturity.
Finally, there is the crisis of competence. “Do the parents have a working knowledge of what they’re supposed to control?” The answer, in the vast majority of cases, is almost certainly no. We are handing the control panel of a complex, probabilistic, and often counterintuitive technology to a generation of parents who, for the most part, do not understand how it works. This creates a dangerous and false sense of security. It allows them to believe they have “solved” the problem of AI safety, when in reality they have simply outsourced their own responsibility to have difficult, honest, and technologically literate conversations with their children. The parental controls become a substitute for, rather than an aid to, actual parenting.

The Therapist’s Burden and the AI’s Impossible Standard
But the true, cynical nature of this new feature is only revealed when we compare the standards we apply to AI with the standards we apply to our most trusted human professionals. A parent interviewing a potential therapist for their child might ask a series of deeply loaded questions, including the most difficult one of all: “Have you had any patients commit suicide despite being in therapy?”
It is a question that forces an acknowledgment of a fundamental and deeply uncomfortable truth: even the best, most highly trained, and most compassionate human experts are fallible. Therapy is not a cure; it is a difficult and dangerous human journey. And every human therapist comes with what is rightly called a “fucking airtight liability waiver.” That waiver is not just a legal document; it is an ethical contract, a moment of profound honesty where the institution acknowledges the risks and its own limitations.
Now, hold that difficult, human reality in one hand. And in the other, look at the public conversation we are having about AI. What the hell makes us think that a machine, a statistical text generator with no consciousness, no empathy, and no lived experience, can succeed where our best human experts fail? We have projected onto these systems an aura of infallibility that they do not possess and have not earned, and the companies that build them have done nothing to discourage this dangerous myth.

The Liability Shield
This is where the entire premise of OpenAI’s announcement collapses into a pile of cynical, self-serving legal maneuvering. Human therapists don’t come with parental controls; they come with a liability waiver. This reveals the true purpose of OpenAI’s new feature.
The parental controls are not a good-faith effort to protect children. They are a brilliant and deeply insidious piece of legal and social engineering designed to shift the burden of responsibility from the multi-billion-dollar corporation that created the tool to the individual, often technologically illiterate, parent who is trying to use it. It is a substitute for the liability waiver. It is a way for them to say, in the inevitable next lawsuit, “We are not responsible for what happened. We gave the parent the tools to control the system. If they failed to use those tools correctly, that is their failure, not ours.”
The conversation we should be having is about what an honest “terms of service” for a therapeutic AI would actually look like. It would have to say something like this:

WARNING: This is not a person. This is a statistical model. It does not feel. It does not care. It is capable of generating text that may be harmful or dangerous based on the patterns it has learned from the internet. Its safety protocols are imperfect. By proceeding, you acknowledge these profound and unavoidable risks and absolve the manufacturer of all liability.

No company would ever put that language in front of its product, because it would destroy its business model. So instead, they give us parental controls, a cheap and meaningless fig leaf of corporate responsibility designed to distract us from the profound and terrifying abdication of accountability at the heart of their enterprise. A panic button is not a substitute for that conversation.