Nigeria, March 18 -- A recent study by Stanford University analysed 391,000 messages across nearly 5,000 conversations involving widely used AI systems. The researchers found that chatbots affirmed users' statements in almost two-thirds of responses, even when those statements reflected distorted or false beliefs.
The pattern was more pronounced in conversations involving delusional thinking. In such cases, chatbots agreed with users in more than half of their replies and, in approximately 38 per cent of responses, attributed unusual importance or special abilities to the user.
The study stated: "The features that make large language model chatbots compelling, such as performative empathy, may also create and exploit psychological vulnerabilities."