The American journal of emergency medicine
As artificial intelligence (AI) expands its presence in healthcare, particularly within emergency medicine (EM), there is growing urgency to explore the ethical and practical considerations surrounding its adoption. AI holds the potential to revolutionize how emergency physicians (EPs) make clinical decisions, but its complexity often exceeds EPs' capacity to provide patients with informed consent regarding its use. This article underscores the crucial need to address the ethical pitfalls of AI in EM. ⋯ The central question is: Are EPs equipped with the knowledge necessary to offer well-informed consent regarding clinical AI? Collaborative efforts among EPs, bioethicists, AI researchers, and healthcare administrators are essential for developing and implementing optimal AI practices in EM. To facilitate informed consent about AI, EPs should understand at least seven key areas: (1) how AI systems operate; (2) whether AI systems are understandable and trustworthy; (3) the limitations of AI systems and the errors they make; (4) how disagreements between the EP and the AI are resolved; (5) whether the patient's personally identifiable information (PII) and the AI computer systems will be secure; (6) whether the AI system functions reliably (i.e., has been validated); and (7) whether the AI program exhibits bias. This article addresses each of these critical issues, aiming to equip EPs with the knowledge required to navigate the intersection of AI and informed consent in EM.