
ChatGPT Can Now Call the Cops, but 'Wait till 2100 for Full Job Impact' - Altman

Sam Altman announced in the last couple of hours that ChatGPT will start trying to assess whether you are a child and, in some circumstances, can flag conversations for review by parents and the authorities. For those of us who aren't children, ChatGPT will also sometimes begin flirting. This video, then, will give you the 5-minute TL;DR on this announcement, which is not unique to ChatGPT by the way, as well as some other things Altman said this week that 99% of people may have missed, but a good chunk of those should hear. First, the classic corporation speak, which is that OpenAI are building toward a long-term system to understand whether someone is over or under 18.

Unless I missed it, I can't find anywhere where they announced when this would occur or whether it starts as of today. One thing to immediately flag is that, as of July, YouTube already does this based on the type of videos that you watch. Okay, but what will ChatGPT do if it assesses that you're a teen? Well, first of all, it won't flirt with you, ever.

And second of all, in extreme circumstances, depending on the discussion, it may contact law enforcement. You may of course have seen some recent, very sad headlines about why they may have felt they needed to take this step. Like many of you, I think the goal is admirable. The question is whether they can be really confident they're flagging the right conversations.

Then comes a really key sentence: "If we are not confident about someone's age or have incomplete information, we'll take the safer route and default to the under-18 experience, and give adults ways to prove their age to unlock adult capabilities." We do know that in the next two weeks there will be parental controls enabling parents to, for example, set blackout hours when a teen cannot use ChatGPT. Then, as before, if the system detects their teen is in a moment of acute distress, it will flag to the parent first and foremost, and only afterwards to law enforcement.

Again, I totally understand the motivation. I guess one thing I'd flag to OpenAI is: what happens if, as with Twitter, a foreign country with different standards asks them and says, "According to our law, you have to notify us when X occurs, when a user says Y" ...


Watch the full video by AI Explained on YouTube.