- S Frisch, P Werner, A Al-Hamadi, H C Traue, S Gruss, and S Walter.
- Sektion Medizinische Psychologie, Klinik für Psychosomatische Medizin und Psychotherapie, Universitätsklinikum Ulm, Frauensteige 6, 89075, Ulm, Deutschland.
- Schmerz. 2020 Oct 1; 34 (5): 376-387.
Background: In patients with limited communication skills, conventional scales or external assessment can be used only to a limited extent or not at all. Multimodal pain recognition based on artificial intelligence (AI) algorithms could be a solution.

Objective: To provide an overview of methods for automated multimodal pain measurement and the recognition rates achieved with AI algorithms.

Methods: In April 2018, 101 studies on automated pain recognition were found in the Web of Science database, illustrating the current state of research. A selective literature review with special consideration of the recognition rates of automated multimodal pain measurement yielded 14 studies, which are the focus of this review.

Results: Recognition rates ranged from 52.9–55.0% for the pain threshold and from 66.8–85.7% for pain tolerance; in nine studies the recognition rate for pain tolerance was ≥80%, while one study reported recognition rates of 79.3% (pain threshold) and 90.9% (pain tolerance).

Conclusion: Pain is generally assessed multimodally, based on external observation scales. With regard to automated pain recognition, the 14 selected studies provide no conclusive evidence to date that multimodal automated pain recognition is superior to unimodal pain recognition. In the clinical context, however, multimodal pain recognition could be advantageous because this approach is more flexible: if one modality is unavailable, e.g., electrodermal activity in hand burns, the algorithm could draw on other modalities (video) and thus compensate for the missing information.
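The fallback idea described in the conclusion can be illustrated with a minimal late-fusion sketch: each modality produces its own pain-probability estimate, and the fused decision averages only the modalities that are actually present. All function names, modality labels, and thresholds below are hypothetical illustrations, not the method of any reviewed study.

```python
from typing import Dict, Optional

def fuse_pain_scores(scores: Dict[str, Optional[float]],
                     threshold: float = 0.5) -> bool:
    """Fuse per-modality pain probabilities, skipping missing modalities.

    `scores` maps a modality name (e.g. "video", "eda") to a probability
    in [0, 1], or to None when the signal is unavailable. The fused score
    is the mean over available modalities only, so a missing modality
    (e.g. electrodermal activity in hand burns) does not break the
    pipeline; the remaining modalities compensate.
    """
    available = [s for s in scores.values() if s is not None]
    if not available:
        raise ValueError("no modality available")
    fused = sum(available) / len(available)
    return fused >= threshold

# EDA unavailable: the decision falls back to video and audio alone.
print(fuse_pain_scores({"video": 0.8, "eda": None, "audio": 0.6}))  # True
```

Averaging is only one of many possible fusion rules; weighted or learned fusion would behave differently, but the same skip-missing-modalities pattern applies.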