Journal of evaluation in clinical practice
-
This paper examines the use of artificial intelligence (AI) for the diagnosis of autism spectrum disorder (ASD, hereafter autism). In so doing, we examine some problems in existing diagnostic processes and criteria, including issues of bias and interpretation, and concepts like the 'double empathy problem'. We then consider how novel applications of AI might contribute to these contexts. We focus specifically on adult diagnostic procedures, as childhood diagnosis is already well covered in the literature.
-
The COVID-19 pandemic has transformed traditional in-person care into a new reality of virtual care for patients with complex chronic disease (CCD), but how has this transformation affected clinical judgement? I argue that virtual specialist-patient interaction challenges clinical reasoning and clinical judgement (clinical reasoning combined with statistical reasoning). However, clinical reasoning can improve by recognising the abductive, deductive, and inductive methods that the clinician employs. Abductive reasoning, leading to an inference to the best explanation or the invention of an explanatory hypothesis, is the default response to unfamiliar or confusing situations. ⋯ Clinical judgement in virtual encounters especially calls for Gestalt cognition to assess a situational pattern irreducible to its parts and independent of its particulars, so that efficient data interpretation and self-reflection are enabled. Gestalt cognition integrates abduction, deduction, and induction, appropriately divides the time and effort spent on each, and can compensate for reduced available information. Evaluating one's clinical judgement for those components especially vulnerable to compromise can help optimize the delivery of virtual care for patients with CCD.
-
The diversity of types of evidence (e.g., case reports, animal studies and observational studies) makes the assessment of a drug's safety profile a formidable challenge. While frequentist uncertain inference struggles to aggregate these signals, the more flexible Bayesian approaches seem better suited for this task. Artificial Intelligence (AI) offers great promise to these approaches for information retrieval, decision support, and learning probabilities from data. ⋯ Properly applied, AI can help translate philosophical principles and considerations concerning evidence aggregation for drug safety into a tool that can be used in practice.
-
Despite the great promise that artificial intelligence (AI) holds for health care, the uptake of such technologies into medical practice is slow. In this paper, we focus on the epistemological issues arising from the development and implementation of a class of AI for clinical practice, namely clinical decision support systems (CDSS). We will first provide an overview of the epistemic tasks of medical professionals, and then analyse which of these tasks can be supported by CDSS, while also explaining why some of them should remain the territory of human experts. ⋯ In practice, this means that the system indicates which factors contributed to its advice, allowing the user (clinician) to evaluate whether these factors are medically plausible and applicable to the patient. Finally, we argue that proper implementation of CDSS allows combining human and artificial intelligence into hybrid intelligence, where both perform clearly delineated and complementary empirical tasks. Whereas CDSS can assist with statistical reasoning and finding patterns in complex data, it is the clinician's task to interpret, integrate and contextualize.
-
In today's culture of the medical profession, it is fairly unusual for students to witness physicians talking with patients about anything outside scientific explanation. That other side of medicine - the one that goes beyond explanation to understanding - goes unexplored, and the patient's personal narrative is consequently less understood. Meanwhile, although reflective writing is the most frequently used didactic method to promote introspection and deeper consolidation of new ideas for medical learners, there is robust evidence that other art forms - such as storytelling, dance, theatre, literature and the visual arts - can also help deepen reflection on, and understanding of, the human aspect of medical practice. ⋯ BEAM is envisioned as a modular, online resource of "third things" that any clinician anywhere will be able to access via a smartphone application to deliver brief, focused, humanistic clinical teaching in either hospital or ambulatory care settings. This commentary foregrounds a learner's perspective to model BEAM's usage in depth; it examines the relation of a painting by Edward Hopper to medical education through the lens of a poem by Victoria Chang, in the context of the BEAM web-based app educational resource. By assessing the poignancy of the painting via the poem, I demonstrate the capacity of the arts and humanities in medical education, with a specific focus on the development of interpretative skills and the tolerance for ambiguity that all authentic, engaged physicians need.