New Delhi, May 3 -- Amid lawsuits and warnings about ChatGPT and other AI chatbots being used for violent purposes, a new report by The Wall Street Journal has revealed internal clashes at OpenAI over reporting violent users to law enforcement.
Citing people familiar with the matter, the report notes that OpenAI employees have raised concerns that the AI startup routinely fails to alert law enforcement even when dangerous chatbot users are flagged, prioritising user privacy over public safety.
The disagreements over which cases should be reported to law enforcement reportedly came to the fore during an OpenAI meeting last summer. Staff at this meeting were reportedly drawn from various departments, includ...