Article Notes
Who are EM3?
EM3, or ‘East Midlands Emergency Medicine Educational Media’, is an online emergency medicine educational resource based at Leicester Royal Infirmary ED. While their website is the foundation of their online resources, they are most notable for the very successful way they translate emergency medicine research and education across multi-platform social media and FOAMed.
So, what happened?
In late October, two inadvertent errors appeared in educational resources simultaneously posted by EM3 to Twitter, Instagram, Facebook and Reddit. The errors were quickly identified and corrected, but the incorrect posts nonetheless continued to be shared, reaching some 15,000 people.
Edwards and Roland carefully describe the events, the approach EM3 took to correcting the errors, and their analysis of the potential impact. They discuss the challenges of correcting what is, by its very nature, a dynamic resource over which there is limited control once released, and the additional oversight EM3 have added to their peer review process in response.
Their report is a cautionary tale for the FOAMed community and a useful resource for avoiding and managing social media errors when they inevitably occur.
Don’t be hasty...
While the reach and velocity offered by social media and FOAMed bring legitimate concerns about accuracy and credibility, traditional academic publishing is not without its own problems.
Whether through outright academic fraud, replication crises or information overload, we already know that incorrect medical information can persist for decades after being disproven. This is not a new problem, though FOAMed accelerates its speed and scope, for both good and bad.
Between the lines
The context of the article’s publication reveals the ongoing tension between FOAMed and the reality of traditional academic publishers, such as the BMJ: ‘Learning from mistakes on social media’ is not itself open access...
There is ever greater interest in mitigating medical errors, particularly through cognitive aids and the checklist systems long used in the aviation industry.
Jelacic and team instituted a computerised pre-induction checklist, using an observational before-and-after study design across 1,570 cases. This is the first study of a computerised anaesthesia checklist in a real clinical environment.
They found an absolute risk reduction of almost 4% in failure to perform critical pre-induction steps, along with a reduction in non-routine events and several examples of pre-induction mistakes identified through checklist use.
Although the researchers claim the results “strongly argue for the routine use of a pre-induction anaesthesia checklist”, this overstates the case a little. This study, like many similar ones, struggles with confounding effects on anaesthetic vigilance that may explain some of the results, particularly given its observational, non-randomised, non-blinded design.
Be careful
The challenge for cognitive aid research is that it must commonly rely on surrogate markers (workflow step failures, behavioural deviations, efficiency, time spent on task, etc.) rather than the safety outcomes that actually matter to patients: death and injury.
There is no easy way around this other than large multi-centre studies focused on outcomes, such as the WHO surgical safety checklist study – which, even then, has not escaped criticism!
Thinking deeper...
There will continue to be tension between those pro-checklist and those against. The irony is that both camps share a similar rationale for their positions: advocates of routine checklists point to the safety benefits of reducing cognitive load, whereas opponents argue that enforced use disregards individual practice and itself adds task and cognitive burden for clinicians.