India, Aug. 31 -- Anthropic is changing how it uses data from Claude users, asking them to decide by September 28 whether their conversations can be included in future AI training. The move introduces new rules on data retention and consent, giving users the option to opt out if they do not want their chats analysed.

Until now, Anthropic has not used consumer chat data to train its models. With the update, the company plans to train on conversations and coding sessions from users who don't opt out. Data from these accounts could be stored for up to five years. This is a sharp shift from the earlier policy, under which prompts and outputs were automatically deleted after 30 days unless flagged for violations or required for legal reasons...