India, April 4 -- As enterprises push generative AI from pilots into production, the conversation is shifting from what these systems can do to whether they can be trusted at scale. Gartner now believes that explainable AI (XAI) will be a major force behind that shift. The firm predicts that by 2028, the growing importance of explainability will drive large language model (LLM) observability into 50% of GenAI deployments, up from 15% today.

That prediction reflects a broader reality now taking shape across enterprise AI. As organisations deploy GenAI into more sensitive and business-critical settings, the need to understand how a model arrived at an answer, whether that answer is reliable, and how the model behaves over time is be...