New Delhi, April 6 -- A new body of research from Stanford University and the Massachusetts Institute of Technology (MIT) finds that widely used artificial intelligence (AI) chatbots tend to agree with users at significantly higher rates than humans, even in cases involving harmful or incorrect behaviour.

The findings are based on two studies: "Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence," from Stanford University and published in the journal Science, and "Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians," published by MIT on arXiv in February 2026.

Researchers evaluated 11 widely used AI models, including ChatGPT, Claude, Gemini, and DeepSeek, using thousands of real-world scenario...