India, Aug. 29 -- Anthropic has rolled out a major shift in its privacy policy. All Claude users must now explicitly decide whether to allow their chat and coding sessions to be used for AI training, or opt out before September 28, 2025; failing to act counts as consent.
Previously, Anthropic deleted consumer data within 30 days unless a conversation was flagged or legal requirements mandated longer retention. Now, the conversations and code of users who do not opt out will be used in model training, with retention extended to five years.
The update covers all consumer tiers, including Claude Free, Pro, and Max, as well as Claude Code. Business and enterprise products, such as Claude for Work, Claude for Education, Claude Gov, and API use, remain unaffected.
Existing users encounter a prompt labelled ...