Hastings Center Report
-
As we reread Mary Shelley's Frankenstein two hundred years after its publication, it is evident that Victor Frankenstein is both a mad scientist (fevered, obsessive) and a bad scientist (secretive, hubristic, irresponsible). He's also not a very nice person: he's a narcissist, a liar, and a bad "parent." But he is not genuinely evil. And yet when we reimagine him as evil, as an evil scientist and an evil person, we can learn some important lessons about science and technology, our contemporary society, and ourselves.
-
One of the most recent and original adaptations of Mary Wollstonecraft (Godwin) Shelley's Frankenstein; or, The Modern Prometheus (1818) is the ballet version choreographed by Liam Scarlett and performed by the Royal Ballet in 2016 and the San Francisco Ballet in 2017 and 2018. What emerges from this translation is an economical, emotionally wrenching, and visually elegant drama of family tragedy, from which we can draw a cautionary tale about the contemporary bioethical dilemmas in family making that new and forthcoming biomedical technologies present. ⋯ In the Frankenstein ballet, the narrative genre of dance, what I'll call "story in the flesh," invites viewers to identify with the characters and enter into the complexity of their interpersonal relations. The ballet becomes compelling testimony about possible unintended outcomes set in motion by well-intentioned, fallible humans like themselves.
-
The bioethical, professional, and policy discourse over brain death criteria has been portrayed by some scholars as illustrative of the minimal influence of religious perspectives in bioethics. Three questions lie at the core of my inquiry: What interests of secular pluralistic societies and of the medical profession are advanced by examining religious understandings of the criteria for determining death? Can bioethical and professional engagement with religious interpretations of death offer substantive insights for policy discussions on neurological criteria for death? And finally, how extensive should the scope of policy accommodations be for deeply held, religiously based dissent from neurological criteria for death? I begin with a short synopsis of a recent case litigated in Ontario, Canada, Ouanounou v. Humber River Hospital, to illuminate this contested moral terrain.
-
Artificial intelligence and machine learning have the potential to revolutionize the delivery of health care. But designing machine learning-based decision support systems is not merely a technical challenge; it also requires attention to bioethical principles. As AI and machine learning advance, bioethical frameworks need to be tailored to address the problems that these evolving systems might pose, and the development of these automated systems, in turn, needs to incorporate bioethical principles.
-
When a patient lacks decision-making capacity, standard clinical ethics practice in the United States directs the health care team to seek guidance from a surrogate decision-maker, either previously selected by the patient or appointed by the courts. If no surrogate is willing or able to exercise substituted judgment, the team is to choose interventions that promote the patient's best interests. We argue that, even when there is input from a surrogate, patient preferences should be an additional source of guidance for decisions about patients who lack decision-making capacity. ⋯ Patients who lack decision-making capacity are well served by these efforts to solicit and use their preferences to promote best interests or to craft would-be autonomous patient images for use by surrogates. However, we go further: the moral reasons for valuing the preferences of patients without decision-making capacity are not reducible to either best-interests or (surrogate) autonomy considerations but can be grounded in the values of liberty and respect for persons. This has important consequences for treatment decisions involving these vulnerable patients.