Critical care medicine · Feb 2024
Does Reinforcement Learning Improve Outcomes for Critically Ill Patients? A Systematic Review and Level-of-Readiness Assessment.
- Martijn Otten, Ameet R Jagesar, Tariq A Dam, Laurens A Biesheuvel, Floris den Hengst, Kirsten A Ziesemer, Patrick J Thoral, Harm-Jan de Grooth, Armand R J Girbes, Vincent François-Lavet, Mark Hoogendoorn, and Paul W G Elbers.
- Department of Intensive Care Medicine, Center for Critical Care Computational Intelligence, Amsterdam Medical Data Science (AMDS), Amsterdam Cardiovascular Science (ACS), Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands.
- Crit. Care Med. 2024 Feb 1; 52 (2): e79-e88.
Objective: Reinforcement learning (RL) is a machine learning technique uniquely effective at sequential decision-making, which makes it potentially relevant to ICU treatment challenges. We set out to systematically review, assess the level-of-readiness of, and meta-analyze the effect of RL on outcomes for critically ill patients.

Data Sources: A systematic search was performed in PubMed, Embase.com, Clarivate Analytics/Web of Science Core Collection, Elsevier/SCOPUS, and the Institute of Electrical and Electronics Engineers Xplore Digital Library from inception to March 25, 2022, with subsequent citation tracking.

Data Extraction: Journal articles that used an RL technique in an ICU population and reported on patient health-related outcomes were included for full analysis. Conference papers were included for level-of-readiness assessment only. Descriptive statistics, characteristics of the models, outcome compared with clinicians' policy, and level-of-readiness were collected. An RL-health risk of bias and applicability assessment was performed.

Data Synthesis: A total of 1,033 articles were screened, of which 18 journal articles and 18 conference papers were included. Thirty of those were prototyping or modeling articles and six were validation articles. All articles reported that RL algorithms outperformed clinical decision-making by ICU professionals, but only in retrospective data. The modeling techniques for the state-space, action-space, reward function, RL model training, and evaluation varied widely. The risk of bias was high in all articles, mainly due to the evaluation procedure.

Conclusion: In this first systematic review on the application of RL in intensive care medicine, we found no studies that demonstrated improved patient outcomes from RL-based technologies. All studies reported that RL-agent policies outperformed clinician policies, but these assessments were all based on retrospective off-policy evaluation.

Copyright © 2023 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.