• Chest · Sep 2024

    Review

    An Ethically Supported Framework for Determining Patient Notification and Informed Consent Practices when Using Artificial Intelligence in Healthcare.

    • Susannah L Rose and Devora Shapiro.
    • Center for Bioethics and Society, Department of Bioinformatics, Vanderbilt University Medical Center, Vanderbilt University, Nashville, TN. Electronic address: Susannah.Rose@vumc.org.
• Chest. 2024 Sep 1; 166(3): 572-578.

    Abstract

    Artificial intelligence (AI) is increasingly being used in health care. Without an ethically supportable, standard approach to knowing when patients should be informed about AI, hospital systems and clinicians run the risk of fostering mistrust among their patients and the public. Therefore, hospital leaders need guidance on when to tell patients about the use of AI in their care. In this article, we provide such guidance. To determine which AI technologies fall into each of the identified categories (no notification or no informed consent [IC], notification only, and formal IC), we propose that AI use-cases should be evaluated using the following criteria: (1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient facing, (4) clinical risk introduced by the model, and (5) administrative burdens. We take each of these in turn, using a case example of AI in health care to illustrate our proposed framework. As AI becomes more commonplace in health care, our proposal may serve as a starting point for creating consensus on standards for notification and IC for the use of AI in patient care.

    Copyright © 2024 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
