Journal of evaluation in clinical practice
-
Despite the great promise that artificial intelligence (AI) holds for health care, the uptake of such technologies into medical practice is slow. In this paper, we focus on the epistemological issues arising from the development and implementation of a class of AI for clinical practice, namely clinical decision support systems (CDSSs). We first provide an overview of the epistemic tasks of medical professionals, and then analyse which of these tasks can be supported by CDSSs, while also explaining why some of them should remain the territory of human experts. ⋯ In practice, this means that the system indicates which factors contributed to its advice, allowing the user (the clinician) to evaluate whether these factors are medically plausible and applicable to the patient. Finally, we defend that proper implementation of CDSSs allows combining human and artificial intelligence into hybrid intelligence, where both perform clearly delineated and complementary epistemic tasks. Whereas CDSSs can assist with statistical reasoning and finding patterns in complex data, it is the clinicians' task to interpret, integrate and contextualize.
-
Proponents of clinical case formulations argue that the causes and mechanisms contributing to and maintaining a patient's problems should be analysed and integrated into a case conceptualization, on which treatment planning ought to be based. Empirical evidence shows that an individualized treatment based on a case formulation is at least sometimes better than a standardized evidence-based treatment. ⋯ We show how PACT works in practice by discussing treatment planning for a clinical case involving symptoms of social anxiety, depression and post-traumatic stress disorder.
-
This paper explores the possibility of AI-based addendum therapy for borderline personality disorder, along with its potential advantages and limitations. Identity disturbance in this condition is strongly connected to self-narratives, which manifest excessive incoherence, causal gaps, dysfunctional beliefs, and diminished self-attributions of agency. ⋯ The suggestion of this paper is that human-to-human therapy could be complemented by AI assistance, which holds out the promise of making patients' self-narratives more coherent by improving the accuracy of their self-assessments, their reflection on their emotions, and their understanding of their relationships with others. Theoretical and pragmatic arguments are presented in favour of this idea, and certain technical solutions are suggested to implement it.
-
The COVID-19 pandemic has transformed traditional in-person care into a new reality of virtual care for patients with complex chronic disease (CCD), but how has this transformation affected clinical judgement? I argue that virtual specialist-patient interaction challenges clinical reasoning and clinical judgement (clinical reasoning combined with statistical reasoning). However, clinical reasoning can improve by recognising the abductive, deductive, and inductive methods that the clinician employs. Abductive reasoning, leading to an inference to the best explanation or the invention of an explanatory hypothesis, is the default response to unfamiliar or confusing situations. ⋯ Clinical judgement in virtual encounters especially calls for Gestalt cognition to assess a situational pattern irreducible to its parts and independent of its particulars, so that efficient data interpretation and self-reflection are enabled. Gestalt cognition integrates abduction, deduction, and induction, appropriately divides the time and effort spent on each, and can compensate for reduced available information. Evaluating one's clinical judgement for those components especially vulnerable to compromise can help optimise the delivery of virtual care for patients with CCD.