New Delhi, Jan. 20 -- If there is one thing those who design artificial intelligence (AI) policies insist on, it is that the AI systems we build should be explainable. It seems to be a reasonable request. After all, if an algorithm denies someone a loan, misdiagnoses a disease or autonomously executes an action that results in harm, surely those affected have the right to an explanation.

But getting an AI model to explain 'why' it behaved the way it did is not as easy as it seems.

When a traditional software program fails, we can study the error message to identify what went wrong. Since such a program follows a series of logical steps spelled out in code, it is easy to trace exactly where it failed. In neural networks, on the other hand, ...