India, March 12 -- When an AI system produces a harmful outcome, responsibility diffuses across a chain of actors -- developers, data suppliers, deployers -- none of whom appears to own the harm in full.
Ask which actor built the underlying model, and you get one answer. Ask which actor deployed it, or which actor's decisions most directly produced the harm, and you get other answers. How the question is framed determines the regulatory framework that follows.
In a country simultaneously investing in AI capacity through the IndiaAI Mission and signaling light-touch regulation, this design choice is not academic: it determines whether governance reinforces development goals.
This is the classification problem at the heart of AI governance.