• Ann. Intern. Med. · Oct 2023

    Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings: A Simulation Study.

    • Akhil Vaid, Ashwin Sawant, Mayte Suarez-Farinas, Juhee Lee, Sanjeev Kaul, Patricia Kovatch, Robert Freeman, Joy Jiang, Pushkala Jayaraman, Zahi Fayad, Edgar Argulian, Stamatios Lerakis, Alexander W Charney, Fei Wang, Matthew Levin, Benjamin Glicksberg, Jagat Narula, Ira Hofer, Karandeep Singh, and Girish N Nadkarni.
    • Division of Data-Driven and Digital Medicine, Department of Medicine, and The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York (A.V., P.J.).
    • Ann. Intern. Med. 2023 Oct 1; 176(10): 1358-1369.

    Background: Substantial effort has been directed toward demonstrating uses of predictive models in health care. However, implementation of these models into clinical practice may influence patient outcomes, which in turn are captured in electronic health record data. As a result, deployed models may affect the predictive ability of current and future models.

    Objective: To estimate changes in predictive model performance with use through 3 common scenarios: model retraining, sequentially implementing 1 model after another, and intervening in response to a model when 2 are simultaneously implemented.

    Design: Simulation of model implementation and use in critical care settings at various levels of intervention effectiveness and clinician adherence. Models were either trained or retrained after simulated implementation.

    Setting: Admissions to the intensive care unit (ICU) at Mount Sinai Health System (New York, New York) and Beth Israel Deaconess Medical Center (Boston, Massachusetts).

    Patients: 130 000 critical care admissions across both health systems.

    Intervention: Across 3 scenarios, interventions were simulated at varying levels of clinician adherence and effectiveness.

    Measurements: Statistical measures of performance, including threshold-independent (area under the curve) and threshold-dependent measures.

    Results: At a fixed 90% sensitivity, a mortality prediction model lost 9% to 39% specificity after a single retraining in scenario 1, and lost 8% to 15% specificity in scenario 2 when it was created after implementation of an acute kidney injury (AKI) prediction model. In scenario 3, AKI and mortality prediction models implemented simultaneously each reduced the effective accuracy of the other by 1% to 28%.

    Limitations: In real-world practice, the effectiveness of and adherence to model-based recommendations are rarely known in advance. Only binary classifiers for tabular ICU admissions data were simulated.

    Conclusion: In simulated ICU settings, no universally effective model-updating approach for maintaining model performance appears to exist. Model use may have to be recorded to maintain the viability of predictive modeling.

    Primary Funding Source: National Center for Advancing Translational Sciences.
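
    The feedback loop described in scenario 1 (a deployed model changes recorded outcomes, which then contaminate the data used to retrain it) can be illustrated with a small simulation. The sketch below is not the authors' code: the synthetic data generator, the 0.5 flagging threshold, and the adherence and effectiveness values are hypothetical choices for illustration only. It shows the mechanics of measuring specificity at a fixed 90% sensitivity before and after retraining on model-influenced labels; the magnitude of any degradation will differ from the study's reported ranges.

```python
# Minimal sketch of scenario 1 (retraining after deployment), with
# hypothetical data and parameters -- not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

def make_cohort(n, n_features=10):
    """Synthetic tabular ICU-style cohort with a binary mortality label."""
    X = rng.normal(size=(n, n_features))
    logit = X @ np.linspace(1.0, 0.1, n_features) - 1.5
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return X, y

def specificity_at_sensitivity(model, X, y, target_sens=0.90):
    """Specificity at the threshold that first reaches ~90% sensitivity."""
    fpr, tpr, _ = roc_curve(y, model.predict_proba(X)[:, 1])
    idx = np.argmax(tpr >= target_sens)
    return 1 - fpr[idx]

# 1. Train the original mortality model on pre-deployment data.
X_pre, y_pre = make_cohort(20_000)
model_v1 = LogisticRegression(max_iter=1000).fit(X_pre, y_pre)

# 2. Deploy: flagged patients receive an intervention with a given adherence,
#    and the intervention averts death with a given effectiveness.
adherence, effectiveness = 0.7, 0.5              # hypothetical values
X_post, y_post = make_cohort(20_000)
flagged = model_v1.predict_proba(X_post)[:, 1] > 0.5
treated = flagged & (rng.random(len(y_post)) < adherence)
averted = treated & (y_post == 1) & (rng.random(len(y_post)) < effectiveness)
y_observed = np.where(averted, 0, y_post)        # EHR records post-intervention outcomes

# 3. Retrain on post-deployment data, whose labels now reflect model-driven care.
model_v2 = LogisticRegression(max_iter=1000).fit(X_post, y_observed)

# 4. Compare specificity at fixed 90% sensitivity on a fresh, untreated cohort.
X_test, y_test = make_cohort(20_000)
print("original :", round(specificity_at_sensitivity(model_v1, X_test, y_test), 3))
print("retrained:", round(specificity_at_sensitivity(model_v2, X_test, y_test), 3))
```

    The key design point the sketch captures is that the retrained model never sees the counterfactual outcome: deaths averted by acting on the first model's alerts are stored as survivals, which is why the abstract argues that model use may need to be recorded for predictive modeling to remain viable.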
