New Delhi, March 16 -- There is plenty of evidence that Artificial Intelligence (AI) agents, such as chatbots, are prone to hallucinating, that is, fabricating information that is untrue or does not exist. This is a serious problem that AI companies are trying to control, and it can have real-world consequences, especially when it exacerbates mental health problems for users.
A new study published in the medical journal Lancet Psychiatry gets to the heart of this problem. The study, titled "Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies", analyses 20 recent media reports on AI delusions or psychosis to understand what reactions this ev...