Hastings Cent Rep
-
As we reread Mary Shelley's Frankenstein at two hundred years, it is evident that Victor Frankenstein is both a mad scientist (fevered, obsessive) and a bad scientist (secretive, hubristic, irresponsible). He's also not a very nice person. He's a narcissist, a liar, and a bad "parent." But he is not genuinely evil. And yet when we reimagine him as evil, as an evil scientist and as an evil person, we can learn some important lessons about science and technology, our contemporary society, and ourselves.
-
Brain death, or the determination of death by neurological criteria, has been described as a legal fiction. Legal fictions are devices by which the law treats two analogous things (in this case, biological death and brain death) in the same way so that the law developed for one can also cover the other. ⋯ I will argue that diagnosing brain death as a hidden legal fiction is a helpful way to understand its historical development and current status. For the legal-fictions approach to be ethically justifiable, however, the fact that brain death is a legal fiction not aligned with the standard biological conception of death must be acknowledged and made transparent.
-
The bioethical, professional, and policy discourse over brain death criteria has been portrayed by some scholars as illustrative of the minimal influence of religious perspectives in bioethics. Three questions then lie at the core of my inquiry: What interests of secular pluralistic societies and the medical profession are advanced in examining religious understandings of criteria for determining death? Can bioethical and professional engagement with religious interpretations of death present substantive insights for policy discussions on neurological criteria for death? And finally, how extensive should the scope of policy accommodations be for deeply held religiously based dissent from neurological criteria for death? I begin with a short synopsis of a recent case litigated in Ontario, Canada, Ouanounou v. Humber River Hospital, to illuminate this contested moral terrain.
-
Artificial intelligence and machine learning have the potential to revolutionize the delivery of health care. But designing machine learning-based decision support systems is not a merely technical challenge. It also requires attention to bioethical principles. As AI and machine learning advance, bioethical frameworks need to be tailored to address the problems that these evolving systems might pose, and the development of these automated systems also needs to be tailored to incorporate bioethical principles.
-
In January 2016, Medicare began reimbursing clinicians for time spent engaging in advance care planning with their patients or patients' surrogates. Such planning involves discussing the care an individual would want to receive should he or she one day lose the capacity to make health care decisions, or having conversations with a surrogate about, for example, end-of-life wishes. ⋯ Although it seems that political barriers to reimbursement for such planning have largely faded, the Medicare policy's impact on provider billing practices appears to be limited, suggesting other barriers to clinician engagement in advance care planning. Additionally, the effects of this policy on patient behavior and the clinician-patient relationship are not yet known.