
Patient-Facing AI in Cancer Care



Currently, most use of artificial intelligence (AI) in cancer care has been limited to diagnostics: the U.S. Food and Drug Administration has approved nearly 700 medical devices that use AI and/or machine learning, the majority of them in radiology and pathology. However, direct patient-facing AI technologies are an “area of new growth with massive therapeutic potential, but few safeguards,” according to a new paper by bioethics researchers at Dana-Farber Cancer Institute.

While AI is poised to play a significant role in both expanding access to cancer care and assuring its efficacy, they write, it also raises serious ethical concerns and has the potential to jeopardize relationships between patients and clinicians. The report's authors call on oncology medical societies and government leaders to collaborate with patients, clinicians, and researchers to develop policies, guidelines, and frameworks that ensure AI-driven health care remains ethically sound, equitable, and patient-centric. The paper by Kelkar et al was published in JCO Oncology Practice.

Areas of Engagement

In the paper, the authors focus on three areas in which patients with cancer are likely to engage with AI, along with the potential risks of each:

  • Telehealth: While this technology can make it easier to collect patient-reported outcomes and other patient-reported data before or after oncology clinic visits and to integrate them into clinical decision-making, extracting data from telehealth visits with AI poses inherent risks to patient confidentiality.
  • Remote monitoring: These digital health tools combine patient reporting with wearable sensors to track health data, including vital signs, physical activity, sleep patterns, and patient-reported outcomes. However, these tools also pose inherent risks to patient confidentiality.
  • Health coaching: Virtual and AI-assisted health coaching offers support and guidance to patients with cancer and their caregivers through personalized health advice, education, goal-setting, and psychosocial support. As autonomous health coaching programs become more human-like, the authors argue, there is a danger that actual humans will have less oversight of them, eliminating person-to-person contact in cancer medicine and raising ethical concerns.

Developing Ethical and Equitable AI-Driven Patient-Centered Care

The authors cite several principles to guide the development and adoption of AI in patient-facing care, including human dignity, patient autonomy, equity and justice, regulatory oversight, and collaboration to ensure that AI-driven health care is ethically sound and equitable.

“Patient-facing AI is poised to play a significant role in both expanding access to cancer care and assuring its efficacy. It also raises serious ethical concerns. We call upon our oncology medical societies and government leaders to collaborate with patients, clinicians, and researchers to develop policies, guidelines, and frameworks to ensure AI-driven health care remains ethically sound, equitable, and patient-centric. The path to ethical patient-facing AI in oncology is complex, but with careful navigation, will lead to a future where such technology truly improves the lives of patients with cancer,” concluded the authors.

Gregory A. Abel, MD, MPH, Director of the Older Adult Hematologic Malignancy Program at Dana-Farber Cancer Institute and a member of Dana-Farber’s Population Sciences Division, is the corresponding author of the JCO Oncology Practice report.

Disclosure: For full disclosures of the study authors, visit ascopubs.org.

The content in this post has not been reviewed by the American Society of Clinical Oncology, Inc. (ASCO®) and does not necessarily reflect the ideas and opinions of ASCO®.