India, Feb. 10 -- The largest user study to date on large language models (LLMs) assisting the general public with medical decisions has found that these systems pose significant risks due to their tendency to provide inaccurate and inconsistent information.
A new study conducted by the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, in partnership with MLCommons and other institutions, highlights a substantial gap between the promise of LLMs and their real-world usefulness for people seeking medical advice.
While LLMs now perform impressively on standardised medical knowledge tests, the study found that they struggle when supporting individuals with their own symptoms.