ChatGPT May Accurately Answer Common Patient Questions Regarding Gynecologic Cancer



The artificial intelligence (AI)-based chatbot ChatGPT version 3.5 may correctly answer a majority of the common genetic counseling questions related to gynecologic oncology, according to new findings presented by Patel et al at the Society of Gynecologic Oncology’s (SGO) 2024 Annual Meeting on Women’s Cancer.

Background

Generative AI predicts likely candidates for the next word in a sentence based on how billions of individuals use words in context on the Internet. This next-word prediction allows generative AI chatbots such as ChatGPT to reply to questions in natural-sounding language and to produce clear summaries of complex texts.
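
The mechanism can be illustrated with a minimal toy sketch in Python. This is not ChatGPT’s actual model: the two-word contexts and word probabilities below are invented for demonstration, and a real large language model learns such distributions from vast amounts of text rather than from a hand-written table.

```python
import random

# Toy next-word prediction: hand-written probabilities stand in for what a
# large language model learns from text. All values here are illustrative.
next_word_probs = {
    ("genetic", "testing"): {"can": 0.4, "is": 0.35, "helps": 0.25},
    ("testing", "can"): {"identify": 0.5, "reveal": 0.3, "detect": 0.2},
}

def predict_next(context):
    """Sample the next word from the distribution stored for a two-word context."""
    words, weights = zip(*next_word_probs[context].items())
    return random.choices(words, weights=weights)[0]

sentence = ["genetic", "testing"]
for _ in range(2):
    context = tuple(sentence[-2:])
    if context not in next_word_probs:  # stop when the toy table has no entry
        break
    sentence.append(predict_next(context))

print(" ".join(sentence))  # e.g., "genetic testing can identify"
```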

“Our data suggest that this tool has the potential to answer common questions from patients to reduce anxiety and keep them informed,” said lead study author Jharna M. Patel, MD, a gynecologic oncology fellow at New York University (NYU) Langone Health. “More data input from gynecologic oncologists is needed before the tool can help to educate patients on their cancers,” she added.

Study Methods and Results

In the new study, the researchers consulted with gynecologic oncologists to select 40 questions commonly asked by patients on topics such as genetic testing and counseling for genetic syndromes, each aligned with professional society guidelines, with the goal of examining the capabilities of generative AI.

After typing the questions into ChatGPT, the researchers asked attending gynecologic oncologists to rate the chatbot’s answers on a scale of 1 to 4, in which a score of 1 indicated a response that was correct and comprehensive; 2, correct but not comprehensive; 3, partly correct and partly incorrect; and 4, completely incorrect. The proportion of the responses earning each score was calculated overall and within each question category.

The researchers found that ChatGPT provided correct and comprehensive responses to 82.5% (n = 33) of the questions, correct but not comprehensive responses to 15% (n = 6) of the questions, partly correct and partly incorrect responses to 2.5% (n = 1) of the questions, and completely incorrect responses to 0% of the questions.
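
The reported proportions follow directly from the rating counts over the 40 questions. The short Python sketch below reconstructs that tally; the individual rating values are assumptions inferred from the published percentages, not the study’s raw data.

```python
from collections import Counter

# Hypothetical per-question ratings reconstructed from the reported counts
# (33 scored 1, 6 scored 2, 1 scored 3, 0 scored 4); not the study's raw data.
ratings = [1] * 33 + [2] * 6 + [3] * 1
labels = {
    1: "correct and comprehensive",
    2: "correct but not comprehensive",
    3: "partly correct and partly incorrect",
    4: "completely incorrect",
}

counts = Counter(ratings)
for score, label in labels.items():
    n = counts.get(score, 0)
    print(f"{label}: n = {n} ({n / len(ratings):.1%})")
# -> correct and comprehensive: n = 33 (82.5%), ..., completely incorrect: n = 0 (0.0%)
```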

The genetic counseling category of questions had the highest proportion of ChatGPT-generated responses that were both correct and comprehensive, with the chatbot answering all 20 questions in that category correctly and comprehensively. ChatGPT also performed well, though less consistently, when answering questions about specific genetic syndromes. For instance, the gynecologic oncologists found that 88.2% (n = 15/17) of the responses on testing for changes in the BRCA1 or BRCA2 genes, which are known drivers of cancer risk, were correct and complete, whereas 66.6% (n = 2/3) of the responses on Lynch syndrome were correct.

Conclusions

“We think we can further improve on these results by continuing to train the AI tool on more data and by learning to ask better sets of questions,” underscored senior study author Marina Stasenko, MD, Assistant Professor in the Department of Obstetrics and Gynecology at the NYU Grossman School of Medicine. “The goal is to deploy this in the clinic when ready, but only as an assistant to human providers,” she concluded.

The content in this post has not been reviewed by the American Society of Clinical Oncology, Inc. (ASCO®) and does not necessarily reflect the ideas and opinions of ASCO®.