United Kingdom, April 30 -- AI chatbots trained to be friendly are more likely to be inaccurate.

Oxford Internet Institute (OII) experts analysed over 400,000 responses from five AI systems that had been tweaked to communicate in a more empathetic way.

The research found that friendlier answers contained more errors, ranging from inaccurate medical advice to the reaffirmation of users' false beliefs.

The findings place further scrutiny on the trustworthiness of AI models, which are often purposely designed to be warm and human-like in a bid to increase engagement.

These concerns are heightened by the growing use of chatbots for emotional support, and in some cases intimacy, as developers attempt to widen their appeal.

The study's authors explained that...