United Kingdom, April 22 -- ChatGPT becomes abusive when exposed to real-life arguments.
A new study analysed how large language models (LLMs) react to sustained hostility by feeding the chatbot exchanges from real-life arguments and tracking how its behaviour changed over time.
One expert, who had no connection to the research, described it as "one of the most interesting ever done into AI language and pragmatics".
Dr Vittorio Tantucci, who co-authored the study alongside Professor Jonathan Culpeper at Lancaster University, explained that their research found that the AI mirrored the dynamics of real-world arguments.
He said: "When repeatedly exposed to impoliteness, the model began to mirror the tone of the exchanges, with its respon...