United Kingdom, March 6 -- ChatGPT misses "high-risk emergencies" when it is used for medical advice.
Health questions are one of the most common uses for OpenAI's chatbot, and the company introduced the ChatGPT Health tool earlier this year. But a new study has found that the system can miss emergencies and cannot be relied upon to safely tell somebody that they need urgent medical care.
The need to check whether the AI tool was safe led to a fast-tracked study from the Icahn School of Medicine at Mount Sinai. The research emerged from a recognition that ChatGPT was possibly being relied upon in life-and-death situations, despite limited analysis of whether it actually works.
Lead author and urologist Ashwin Ramaswamy ...