Security researcher Sandeep Hodkasia disclosed a vulnerability in the Meta AI chatbot that let logged-in users view other users' private prompts and AI-generated answers. While experimenting with the prompt-edit and regenerate features, Hodkasia noticed that each prompt and its corresponding bot response carried a numeric ID. By intercepting his own network traffic and altering that number, he could retrieve another user's conversation.

Technical Details Behind the Leak
When a Meta AI user edits a prompt, the backend assigns the prompt and its generated response an easily guessable numeric ID. When serving the stored dialogue, the server did not verify that the requester actually owned the prompt referenced by that ID. As a result, any logged-in user could scrape large volumes of other users' confidential prompts and AI-generated content simply by automating sequential changes to the ID; no special links or attachments were required.
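The flaw described above fits the classic insecure direct object reference (IDOR) pattern: the server looks up a record by its guessable ID without checking who owns it. The following is a minimal sketch of that pattern in Python, using a hypothetical in-memory store and function names (Meta's actual backend is not public):

```python
# Hypothetical illustration of an IDOR bug and its fix.
# The store, IDs, and handlers are invented for this sketch.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft my resume", "reply": "..."},
    1002: {"owner": "bob",   "prompt": "private medical question", "reply": "..."},
}

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> dict:
    """Looks up the record by its guessable numeric ID and returns it
    without checking ownership -- any logged-in user can read any prompt."""
    return PROMPTS[prompt_id]

def get_prompt_fixed(requesting_user: str, prompt_id: int) -> dict:
    """Same lookup, but refuses to serve a record the requester does not own."""
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        # Deny with the same error either way, so the response
        # does not reveal whether the ID exists.
        raise PermissionError("not found")
    return record

# "alice" increments the ID by one and reads bob's private conversation:
leaked = get_prompt_vulnerable("alice", 1002)
assert leaked["owner"] == "bob"

# With the ownership check, the same request is denied:
denied = False
try:
    get_prompt_fixed("alice", 1002)
except PermissionError:
    denied = True
assert denied
```

The key point is that authorization must happen on every object lookup, not just at login: being authenticated says nothing about which records a user may read.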
Bug Bounty and Patch Deployment
Hodkasia privately reported the issue to Meta on December 26, 2024, and received a $10,000 reward under the company's bug bounty program. Meta deployed a server-side fix on January 24, 2025, and said it found no evidence the flaw was ever exploited in the wild.
Implications for AI Privacy and Security
This incident highlights the privacy risks AI assistants inherit from traditional web applications: the flaw is a textbook insecure direct object reference, in which private conversations leak simply by manipulating numeric identifiers. Experts warn that AI platforms must enforce strict access controls and use unpredictable ID schemes to prevent unauthorized data disclosure.
Lessons for AI Platform Providers
AI developers should treat prompts and responses as sensitive user data and apply the same authorization checks used for other personal information. Attacks on guessable IDs can be thwarted by randomizing or hashing internal identifiers. Extensive logging and anomaly detection are also needed to catch rapid ID-enumeration attempts.
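The two mitigations above can be sketched briefly. This is an illustrative outline only, with invented names: unpredictable identifiers generated with Python's standard `secrets` module, plus a crude sliding-window counter that flags rapid ID enumeration (a real deployment would use proper rate limiting and alerting):

```python
# Sketch of two mitigations: unpredictable IDs and enumeration detection.
# Class and function names are hypothetical.
import secrets
import time
from collections import defaultdict, deque

def new_prompt_id() -> str:
    """A 128-bit random URL-safe token: infeasible to guess or enumerate,
    unlike an auto-incrementing integer."""
    return secrets.token_urlsafe(16)

class EnumerationDetector:
    """Flags a user who issues more than `limit` prompt lookups
    within `window` seconds -- a simple anomaly signal, not a full IDS."""
    def __init__(self, limit: int = 20, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.requests = defaultdict(deque)  # user -> recent request timestamps

    def record(self, user: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.requests[user]
        q.append(now)
        while q and now - q[0] > self.window:  # drop timestamps outside window
            q.popleft()
        return len(q) > self.limit  # True means "suspicious"

# 21 lookups within one second trips a 20-per-minute threshold:
detector = EnumerationDetector(limit=20, window=60.0)
flags = [detector.record("mallory", now=i * 0.05) for i in range(21)]
assert flags[0] is False and flags[-1] is True
```

Random tokens remove the enumerable sequence entirely, while the detector catches abuse of any remaining guessable surface; the two measures complement, rather than replace, per-request ownership checks.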

Meta AI’s Early Launch Challenges
Meta AI launched in early 2025 as a competitor to ChatGPT but suffered early privacy setbacks. Some users accidentally shared confidential conversations to the public Discover feed, prompting Meta to add warning pop-ups. The prompt-ID leak is a further reminder that continuous security review is needed as AI capabilities evolve.
The fix underscores the importance of responsible vulnerability disclosure and proactive patching of AI services. As AI chatbots see wider use, people need assurance that their conversations remain confidential and secure. Meta's rapid response and reward to the researcher set a good example for the industry.