(HealthDay News) — The artificial intelligence (AI) chatbot assistant ChatGPT was rated more highly than physicians when answering patient questions in a study published in JAMA Internal Medicine.

Researchers examined the ability of ChatGPT to answer patient questions using 195 exchanges randomly drawn from a public social media forum in which a verified physician had responded to a patient's publicly posted question.

ChatGPT responses were generated by entering the original question into a fresh session. A team of licensed health care professionals evaluated the original question together with anonymized and randomly ordered physician and ChatGPT responses in triplicate.


The researchers found that evaluators preferred ChatGPT responses to physician responses in 78.6% of the 585 evaluations. Physician responses were also significantly shorter than ChatGPT responses (mean, 52 vs 211 words; P <.001).

In addition, ChatGPT responses were rated as being of significantly higher quality than physician responses (P <.001). For example, the proportion of responses rated as good or very good was higher for ChatGPT than physicians (78.5% vs 22.1%). ChatGPT responses were more likely than physician responses to be rated as empathetic or very empathetic (45.1% vs 4.6%).

“While this cross-sectional study has demonstrated promising results in the use of AI assistants for patient questions, it is crucial to note that further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings,” the researchers wrote.

Two researchers disclosed financial ties to biopharmaceutical and technology companies, including Lifelink, a health care chatbot company.

