ChatGPT can now call the cops, but 'wait till 2100 for full job impact' - Altman

"AI safety is getting personal. OpenAI just announced new tools that can detect whether users are teenagers, flag conversations to parents, and in extreme cases, contact law enforcement. It's the most significant shift in AI child safety yet — but there's a catch.

The New Guardrails

OpenAI's Sam Altman unveiled changes this week that fundamentally alter how ChatGPT interacts with minors. The system will now assess whether users are under 18. If it detects a teen in acute distress, it flags parents first — then law enforcement only afterward.

But the most striking policy: if there's uncertainty about age or incomplete information, the system defaults to the under-18 experience. Adults can unlock adult capabilities by proving their age.

In the next two weeks, parental controls will roll out — including blackout hours when teens cannot access ChatGPT at all.
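
To make that flow concrete, here is a minimal sketch of the rules as described above: default to the under-18 experience when age is uncertain, flag parents before law enforcement on acute distress, and enforce parental blackout hours. OpenAI has not published any of this as an interface, so every function and parameter name below is hypothetical.

```python
from datetime import datetime, time

# Illustrative-only sketch of the policy flow described in the announcement;
# none of these names come from OpenAI, and the real system is not public.

UNDER_18, ADULT = "under_18_experience", "adult_experience"

def select_experience(estimated_minor: bool | None, verified_adult: bool) -> str:
    """Uncertain or incomplete age information defaults to the under-18
    experience; adults can prove their age to unlock adult capabilities."""
    if verified_adult:
        return ADULT
    if estimated_minor is None or estimated_minor:
        return UNDER_18          # "take the safer route"
    return ADULT                 # confidently estimated to be an adult

def escalation_steps(is_teen: bool, acute_distress: bool) -> list[str]:
    """Acute distress in a teen flags parents first, law enforcement afterward."""
    if is_teen and acute_distress:
        return ["notify_parents", "contact_law_enforcement"]
    return []

def in_blackout(now: datetime, start: time, end: time) -> bool:
    """Parental blackout hours during which a teen cannot use ChatGPT at all."""
    t = now.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end  # window that crosses midnight

# Example: a 22:00-07:00 blackout window set by a parent.
print(in_blackout(datetime(2025, 9, 20, 23, 30), time(22, 0), time(7, 0)))  # True
```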

Critics are asking pointed questions. Some worry OpenAI could face pressure from foreign governments with different standards, forcing it to report conversations to authorities under various national laws. The track record here is mixed: some tech companies cave in, others resist.

Privacy Gets a Shield

OpenAI also announced it wants the same legal protection for AI conversations that you get with doctors or lawyers — physician-patient privilege and attorney-client privilege. It's advocating for this with policymakers, including the Trump administration.

The concern is real. If any chat with an AI system gains these protections, it could force startups and open-source projects through significant regulatory hurdles. The bar to enter the market rises substantially.

This might be a response to the Federal Trade Commission launching an inquiry into chatbot safety when chatbots act as companions.

Who Actually Uses ChatGPT?

A new analysis reveals what users actually do with ChatGPT — and it's not coding. Only 4.2% use it for programming. Ten percent use it to learn or be taught something, while 5.7% use it for fitness, beauty, self-care, or health advice.

Creating images ranks below translation. Four percent of users just ask the model questions about consciousness — "Are you conscious? Are you alive?"

The Job Question

Altman told Congress privately in 2024 that up to 70% of jobs could be eliminated by AI and acknowledged possible social disruption. But recently, he implied the full impact on jobs might not play out until towards the end of this century.

"There's going to be massive displacement," he said in a recent interview. "And maybe those people will find something new and interesting and lucrative to do."

The historical average is about 50% of jobs significantly changing every 75 years — not entirely disappearing, but transforming substantially. Some see this as a punctuated equilibrium moment where much happens in a short period.

Others argue it will be less dramatic than feared: a burst of change, followed by less total job turnover than anticipated.

Bottom Line

The child safety announcements are the most concrete changes here. OpenAI is clearly moving toward mandatory age verification with real consequences for distress detection. The privacy protections make sense but carry significant regulatory risk for smaller players. Altman's job timeline prediction is deliberately vague — "towards the end of this century" gives no actionable timeline at all.

Watch next: whether parental controls actually reduce teen access, or whether they create new enforcement gaps that worry parents more than regulators.

Sources

ChatGPT can now call the cops, but 'wait till 2100 for full job impact' - Altman

by AI Explained · Watch video

Sam Altman announced in the last couple of hours that ChatGPT will start trying to assess whether you are a child and in some circumstances can flag conversations for review by parents and the authorities. For those of us who aren't children, ChatGPT will also sometimes begin flirting. This video then will give you the five-minute TLDR on this announcement, which is not unique to ChatGPT by the way, as well as some other things Sam Altman said this week that 99% of people may have missed, but a good chunk of those should hear. First, the classic corporation speak, which is that OpenAI are building toward a long-term system to understand whether someone is over or under 18.

Unless I missed it, I can't find anywhere where they announced when this would occur or whether it starts as of today. One thing to immediately flag is that as of July, YouTube already does this based on the type of videos that you watch. Okay, but what will ChatGPT do if it assesses that you're a teen? Well, first of all, it won't flirt with you ever.

And second of all, in extreme circumstances, depending on the discussion, it may contact law enforcement. You may of course have seen some recent very sad headlines about why they may have felt they needed to take this step. Like many of you, I think the goal is admirable. The question is, they had better be really confident they're flagging the right conversations.

Then comes a really key sentence. If we are not confident about someone's age or have incomplete information, we'll take the safer route and default to the under-18 experience, and give adults ways to prove their age to unlock adult capabilities. In the next two weeks, we do know that there will be parental controls enabling parents to, for example, set blackout hours when a teen cannot use ChatGPT. Then as before, if the system detects their teen is in a moment of acute distress, it will flag to the parent first and foremost and only afterwards to law enforcement.

Again, I totally understand the motivation. I guess one thing I'd flag to OpenAI is what happens if, like with Twitter, a foreign country with different standards asks them and says, according to our law, you have to notify us when X occurs, when a user says Y ...