Shipra Sanganeria reports:
The leak of thousands of ChatGPT conversations in August 2025 revealed two concerning realities. First, users are not fully aware of how AI platforms handle and distribute their data. Second, people seem to place a high level of trust in their AI assistants—and many of their chats have now been made public.
The problem stemmed from a now-removed feature: when sharing a conversation, users had the option to “Make [the] chat discoverable.” And while the opt-in clearly stated that enabling the feature “allows [the chat] to be shown in web searches,” perhaps not all users fully understood what this meant: that their chats would be crawled and indexed by search engines and become available to anyone on the web.
The discovery of countless publicly indexed ChatGPT conversations not only exposed a flaw in the chatbot’s user experience (UX), which unnecessarily created more opportunities for human error, but also highlighted some concerning facts about how people use AI chatbots.
Read what Safety Detectives found when they analyzed 1,000 chats that had been made “discoverable.” And if you or your children use ChatGPT, be sure to read the report and then review with family members what their settings should be when it comes to sharing chats or making them discoverable.