New Delhi, Aug. 19 -- Anthropic, the AI research company behind Claude, has introduced a new feature that allows its chatbot to end conversations in specific situations. The update, integrated into Claude Opus 4 and 4.1, is designed not for user safety but to protect the AI model itself from persistent abuse or misuse. This marks a significant step in the company's broader initiative known as model welfare, which explores the ethical treatment of artificial intelligence systems.

Claude will terminate chats only in extreme cases, where multiple attempts to redirect the conversation have failed and hope of a productive interaction has been exhausted. In normal interactions, including discussions of controversial topics, most users will not notice any change or be affected by it.