OpenAI Is Reporting ChatGPT Conversations to Law Enforcement

Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening.

"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
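The quote describes a two-stage escalation flow: an automated pass flags conversations, then trained human reviewers decide whether to ban the account or, for imminent threats, refer the case to law enforcement. Purely as an illustration of that flow, here is a minimal, entirely hypothetical sketch; every name (`automated_flag`, `human_review`, `route`, the toy keyword check) is invented for this example and does not reflect OpenAI's actual systems.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    NONE = auto()
    BAN_ACCOUNT = auto()
    REFER_TO_LAW_ENFORCEMENT = auto()


@dataclass
class Conversation:
    user_id: str
    text: str


def automated_flag(convo: Conversation) -> bool:
    """Stand-in for an automated classifier that detects planned harm.

    A real system would use a trained model; this toy heuristic just
    matches keywords so the routing logic can be demonstrated.
    """
    keywords = ("harm", "attack")
    return any(k in convo.text.lower() for k in keywords)


def human_review(convo: Conversation, imminent: bool) -> Action:
    """Stand-in for the human-reviewer step described in the quote."""
    if imminent:
        return Action.REFER_TO_LAW_ENFORCEMENT
    return Action.BAN_ACCOUNT


def route(convo: Conversation, imminent: bool = False) -> Action:
    """Route a conversation: unflagged ones pass through untouched;
    flagged ones go to human review for a decision."""
    if not automated_flag(convo):
        return Action.NONE
    return human_review(convo, imminent)
```

The key design point the quote implies is that automation only *routes*; the consequential decisions (bans, law-enforcement referrals) sit behind the human-review step.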

by Anonymous | reply 1 | September 15, 2025 1:42 AM

I always figured this was happening.

by Anonymous | reply 1 | September 15, 2025 1:42 AM