New Delhi, March 25 -- Artificial intelligence (AI) firms such as OpenAI and Anthropic have adopted 'pledges' and 'constitutions' promising that their AI will "do no harm", but recent conflicts in West Asia have exposed the limits of such voluntary guardrails.

Even as companies articulate these principles, AI tools are finding their way into military and strategic use. The gap between promise and practice raises a broader question: do these self-imposed rules carry any weight, and why have Indian firms largely avoided them? Mint explains.

AI constitutions are internal, self-regulatory frameworks that outline how a company intends to build and deploy its technology. They go beyond standard terms of service, setting broad principles for safety...