OpenAI Admits ChatGPT Security Breach: Unauthorised Access And Data Disclosure
OpenAI has officially acknowledged that ChatGPT user login credentials were compromised, resulting in unauthorised access to and misuse of accounts, a claim the company had initially denied.
This follows recent allegations by Ars Technica, supported by a user's screenshots, that ChatGPT had inadvertently disclosed private conversations containing sensitive information such as usernames and passwords.
OpenAI said the compromised credentials enabled a malicious actor to log in to and misuse the affected accounts. The leaked chat history and files were a consequence of that unauthorised access, not a case of ChatGPT displaying another user's history.
Ars Technica had reported that ChatGPT was revealing private conversations, citing screenshots in which the user's chat history included additional conversations that were unrelated to their queries and did not belong to them.
The incident adds to a history of security issues with ChatGPT: in March 2023, a bug leaked chat titles, and in November 2023, researchers extracted private data from the language model by manipulating queries.