Clin Mol Hepatol. 2023 Jul;29(3):721-732. doi: 10.3350/cmh.2023.0089.

Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma

Affiliations
  • 1Karsh Division of Gastroenterology and Hepatology, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
  • 2Bristol Medical School, University of Bristol, Bristol, UK
  • 3School of Medicine, Tulane University, New Orleans, LA, USA
  • 4Comprehensive Transplant Center, Cedars-Sinai Medical Center, Los Angeles, CA, USA
  • 5Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
  • 6Department of Psychiatry and Behavioral Sciences, Cedars-Sinai, Los Angeles, CA, USA
  • 7Division of Health Services Research, Department of Medicine, Cedars-Sinai, Los Angeles, CA, USA

Abstract

Background/Aims
Patients with cirrhosis and hepatocellular carcinoma (HCC) require extensive and personalized care to improve outcomes. ChatGPT (Generative Pre-trained Transformer), a large language model, holds the potential to provide professional yet patient-friendly support. We aimed to examine the accuracy and reproducibility of ChatGPT in answering questions regarding knowledge, management, and emotional support for cirrhosis and HCC.
Methods
ChatGPT’s responses to 164 questions were independently graded by two transplant hepatologists, with discrepancies resolved by a third reviewer. The performance of ChatGPT was also assessed using two published questionnaires and 26 questions formulated from the quality measures of cirrhosis management. Finally, its emotional support capacity was tested.
Results
We showed that ChatGPT demonstrated extensive knowledge of cirrhosis (79.1% correct) and HCC (74.0% correct), but only small proportions of responses (47.3% in cirrhosis, 41.1% in HCC) were labeled as comprehensive. The performance was better in basic knowledge, lifestyle, and treatment than in the domains of diagnosis and preventive medicine. For the quality measures, the model answered 76.9% of questions correctly but failed to specify decision-making cut-offs and treatment durations. ChatGPT also lacked knowledge of regional guideline variations, such as HCC screening criteria. However, it provided practical and multifaceted advice to patients and caregivers regarding next steps and adjusting to a new diagnosis.
Conclusions
We analyzed the areas of robustness and limitations of ChatGPT’s responses on the management of cirrhosis and HCC and relevant emotional support. ChatGPT may have a role as an adjunct informational tool for patients and physicians to improve outcomes.

Keywords

Artificial intelligence; Patient education as topic; Health communication; Telemedicine; Chronic disease management