Anthropic Will Use Claude Chats For AI Training Unless You Opt Out
Anthropic has announced changes to how it handles user data, giving Claude AI customers the option to allow their conversations to be used for training future models.
The company said all Claude Free, Pro, Max, and Claude Code users must decide by September 28, 2025, whether to share their data for model development.
Data shared will be stored for up to five years to improve model reasoning, coding, and analysis. Users who opt out will have their data deleted within 30 days, unless flagged for policy or legal reasons.
The update does not apply to enterprise offerings such as Claude for Work, Claude Gov, Claude for Education, or API access via Amazon Bedrock and Google Cloud Vertex AI, which follow separate agreements.
New users will make the choice at signup, while existing users will receive a notification prompting them to decide. Anthropic says deleted chats will not be used for training and that user data is not sold.
The company said the move is intended to give consumers more control while supporting AI development. Users must make their choice by the September deadline to continue accessing Claude.