Journal of Evaluation in Clinical Practice
-
In recent years there has been an explosion of interest in Artificial Intelligence (AI), both in health care and in academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have enabled rapid progress in many areas. However, the use of this technology has brought with it philosophical issues and practical problems, in particular epistemic and ethical ones. ⋯ The authors argue that, although effective current or future AI-enhanced electronic fetal monitoring (EFM) may impose an epistemic obligation on clinicians to rely on such systems' predictions or diagnoses as input to shared decision-making (SDM), such obligations may be overridden by inherited defeaters, caused by a form of algorithmic bias. The existence of inherited defeaters implies that the duty of care to the client's knowledge extends to any situation in which a clinician (or anyone else) is involved in producing training data for a system that will be used in SDM. Any future AI must be capable of assessing women individually, taking into account a wide range of factors, including women's preferences, in order to provide a holistic range of evidence for clinical decision-making.
-
Artificial intelligence and big data are increasingly used in medicine, whether in prevention, diagnosis, or treatment, and are clearly changing the way medicine is conceived and practiced. Some authors argue that the use of artificial intelligence techniques to analyze big data would even constitute a scientific revolution, in medicine as much as in other scientific disciplines. Moreover, artificial intelligence techniques, coupled with mobile health technologies, could deliver a personalized medicine, adapted to the individuality of each patient. In this paper we argue that this conception is largely a myth: what health professionals and patients need is not more data, but data that have been critically appraised, especially to avoid bias. ⋯ The large amount of data thus appears to be a problem rather than a solution. What contemporary medicine needs is not more data or more algorithms, but a critical appraisal of the data and of the analyses of those data. Drawing on the history of epidemiology, we propose three research priorities concerning the use of artificial intelligence and big data in medicine.
-
Conventional models of cultural humility, even those that extend analysis beyond the healthcare provider-patient dyad to include the concentric social influences (families, communities, and institutions) that make clinical relationships possible, are not conceptually or methodologically calibrated to accommodate the shifts occurring in contemporary biomedical cultures. More complex methodological frameworks are required, ones attuned to how advances in biomedical, communications, and information technologies are increasingly transforming the very cultural and material conditions of health care and its delivery structures, and thus how power manifests in clinical encounters. ⋯ Engaging evaluative inquiry diffractively allows for a different ethical practice of care, one that attends to the forms of patient and health provider accountability and responsibility emerging in the clinical encounter.
-
How should the human condition be classified? This is one of the main problems psychiatry has struggled with since the first diagnostic systems were developed. The furore over the recent editions of the diagnostic systems, DSM-5 and ICD-11, has shown that it still poses a wicked problem. ⋯ The promises of AI for mental disorders are threatened by the unmeasurable aspects of those disorders, and for this reason using AI to process them may lead to ethically and practically undesirable consequences. We consider in detail the novel and unique questions that AI presents for mental disorders and evaluate their potential novel, AI-specific ethical implications.