AI safety is getting personal. OpenAI just announced new tools that can detect whether users are teenagers, flag conversations to parents, and in extreme cases, contact law enforcement. It's the most significant shift in AI child safety yet — but there's a catch.
The New Guardrails
OpenAI's Sam Altman unveiled changes this week that fundamentally alter how ChatGPT interacts with minors. The system will now assess whether users are under 18. If it detects a teen in acute distress, it flags the conversation to parents first, escalating to law enforcement only afterward.
But the most striking policy: if there's uncertainty about age or incomplete information, the system defaults to the under-18 experience. Adults can unlock adult capabilities by proving their age.
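The default-to-minors rule is essentially a decision policy. A minimal sketch of what that logic might look like, assuming a hypothetical age prediction with a confidence score and a separate verification flag (the function name, parameters, and threshold are all illustrative, not OpenAI's actual implementation):

```python
def select_experience(predicted_age, confidence, verified_adult, threshold=0.9):
    """Hypothetical sketch of a default-to-minors policy.

    Uncertain or incomplete age signals fall back to the restricted
    under-18 experience; verification always unlocks the adult one.
    """
    if verified_adult:
        # Proving your age overrides any prediction.
        return "adult"
    if predicted_age is not None and predicted_age >= 18 and confidence >= threshold:
        # Only a confident adult prediction gets adult capabilities.
        return "adult"
    # Missing, ambiguous, or under-18 signals default to the teen experience.
    return "under_18"
```

The key design choice the article describes is which way the tie breaks: when the system cannot tell, it restricts rather than permits.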
In the next two weeks, parental controls will roll out — including blackout hours when teens cannot access ChatGPT at all.
Critics are asking pointed questions. Some worry OpenAI could face pressure from foreign governments with different standards, forcing the company to report conversations to authorities under various national laws. The track record here is mixed: some tech companies cave, others resist.
Privacy Gets a Shield
OpenAI also announced it wants the same legal protection for AI conversations that you get with doctors or lawyers: physician-patient privilege and attorney-client privilege. The company is advocating for this with policymakers, including the Trump administration.
The concern is real. If AI conversations gain these protections, providers may also inherit the compliance obligations that come with them — forcing startups and open-source projects through significant regulatory hurdles. The bar to enter the market rises substantially.
This might be a response to the Federal Trade Commission launching an inquiry into chatbot safety when acting as companions.
Who Actually Uses ChatGPT?
A new analysis reveals what users actually do with ChatGPT — and it's not coding. Only 4.2% use it for programming. Ten percent use it to learn or be taught something, while 5.7% use it for fitness, beauty, self-care, or health advice.
Creating images ranks below translation. Four percent of users just ask the model questions about consciousness — "Are you conscious? Are you alive?"
The Job Question
Altman reportedly told Congress privately in 2024 that up to 70% of jobs could be eliminated by AI, and acknowledged possible social disruption. More recently, though, he has suggested the full employment impact might not play out until toward the end of this century.
"There's going to be massive displacement," he said in a recent interview. "And maybe those people will find something new and interesting and lucrative to do."
The historical average is about 50% of jobs significantly changing every 75 years — not entirely disappearing, but transforming substantially. Some see this as a punctuated equilibrium moment where much happens in a short period.
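The 50%-every-75-years figure implies a surprisingly modest annual churn. A back-of-the-envelope check, assuming a constant compounding rate of change (an assumption the article does not state):

```python
# If half of jobs change significantly over 75 years at a steady
# compounding rate r, then (1 - r) ** 75 == 0.5. Solve for r:
annual_rate = 1 - 0.5 ** (1 / 75)
print(f"{annual_rate:.2%}")  # roughly 0.9% of jobs changing per year
```

On that baseline, a "punctuated equilibrium" scenario would mean annual churn well above the ~1% historical trend for some stretch of years.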
Others argue the shift will simply be less dramatic than feared: an initial burst of change, followed by less total job turnover than anticipated.
Bottom Line
The child safety announcements are the most concrete changes here. OpenAI is clearly moving toward mandatory age verification with real consequences for distress detection. The privacy protections make sense but carry significant regulatory risk for smaller players. Altman's job timeline prediction is deliberately vague — "towards the end of this century" gives no actionable timeline at all.
Watch next: whether parental controls actually reduce teen access, or whether they create new enforcement gaps that worry parents more than regulators.