Journal of Evaluation in Clinical Practice
-
The COVID-19 pandemic has impacted every facet of society, including medical research. This paper is the second in a series of articles exploring the intricate relationship between the different challenges that have hindered biomedical research and the generation of novel scientific knowledge during the COVID-19 pandemic. In the first part of this series, we demonstrated that, in the context of COVID-19, the scientific community has faced numerous challenges with respect to (1) finding and prioritizing relevant research questions and (2) choosing study designs that are appropriate for a time of emergency. ⋯ The COVID-19 pandemic also presented challenges in terms of (3) evaluating evidence for the purpose of making evidence-based decisions and (4) sharing scientific findings with the rest of the scientific community. This second paper demonstrates that the four challenges outlined across the two papers have often compounded one another and have contributed to slowing the generation of novel scientific knowledge during the pandemic.
-
The diversity of types of evidence (eg, case reports, animal studies and observational studies) makes assessing a drug's safety profile a formidable challenge. While frequentist uncertain inference struggles to aggregate these heterogeneous signals, the more flexible Bayesian approaches seem better suited to the task. Artificial Intelligence (AI) holds great promise for these approaches in information retrieval, decision support, and learning probabilities from data. ⋯ Properly applied, AI can help translate philosophical principles and considerations concerning evidence aggregation for drug safety into a tool that can be used in practice.
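The Bayesian aggregation the abstract alludes to can be illustrated with a minimal sketch. This is not the authors' method: it assumes a conjugate Beta-Binomial model of an adverse-event rate, with invented counts and quality weights standing in for heterogeneous evidence sources (case reports down-weighted relative to trials).

```python
import random

# Hypothetical illustration: Beta-Binomial aggregation of adverse-event
# counts from heterogeneous evidence sources. All numbers are invented.
# Each source: (adverse events observed, patients observed, quality weight)
# Lower weights discount lower-quality evidence such as case reports.
sources = [
    (3, 10, 0.3),    # case reports: small n, heavily discounted
    (12, 400, 0.7),  # observational study
    (5, 300, 1.0),   # randomized trial arm
]

# Weakly informative Beta(1, 1) prior on the adverse-event rate.
alpha, beta = 1.0, 1.0
for events, n, w in sources:
    alpha += w * events          # weighted successes (events)
    beta += w * (n - events)     # weighted failures (no event)

posterior_mean = alpha / (alpha + beta)

# Monte Carlo estimate of P(event rate > 2%) under the posterior.
random.seed(0)
draws = [random.betavariate(alpha, beta) for _ in range(100_000)]
p_exceeds = sum(d > 0.02 for d in draws) / len(draws)

print(f"posterior mean rate: {posterior_mean:.4f}")
print(f"P(rate > 2%): {p_exceeds:.3f}")
```

The design choice is the point: unlike a single frequentist significance test, the posterior accumulates evidence of varying quality into one probability statement a decision-maker can act on.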
-
Though strong evidence-based medicine (EBM) is assertive in its claims, its insufficient theoretical basis and patchwork of arguments make a good case that, rather than introducing a new paradigm, EBM is resisting a shift to genuinely revolutionary complexity theory and other emergent approaches. This refusal to move beyond discredited positivism is manifest in strong EBM's unsuccessful attempts to continually modify its already inadequate previous modifications, much as the defenders of the Ptolemaic astronomical model multiplied circular epicycles until the entire epicycle-deferent system proved untenable. ⋯ The analysis in Part 1 of this three-part series showed epistemological confusion as strong EBM plays the discredited positivistic tradition out to the end, thus repeating in a medical sphere and vocabulary the major assumptions and inadequacies that have appeared in the trajectory of modern science. Paper 2 in this series examines application, attending to strong EBM's claim that EBM research findings transfer directly to clinical settings and to its assertion of epistemological normativity. EBM's contention that it provides the "only valid" approach to knowledge and action is questioned by analyzing the troubled story of proposed hierarchies of the quality of research findings (especially of RCTs, with other factors marginalized), which falsely equates evaluating findings with operationally utilizing them in clinical recommendations and decision-making. Further, its claim to carry over its normative guidelines to cover the ethical responsibilities of researchers and clinicians is questioned.
-
Review
The transition from inquiry to evidence to actionable clinical knowledge: A proposed roadmap.
We consider the question "What should we do?" in the context of clinical research/practice. There are several steps along the way to a satisfactory answer, many of which have received considerable attention in the literature. We aim to provide a unified summary and explication of these "steps along the way". The result will be an increased appreciation for the meaning and structure of "actionable clinical knowledge". ⋯ Clinical decision-making is not infallible, and the steps we can take to minimize error are context-dependent. Medical evidence, produced as it is by human effort, can never be perfect. We do well to ensure that the evidence we use has been produced by a reliable process and is relevant to the question posed.