- Patrick Ramos, Jeremy Montez, Adrian Tripp, Casey K Ng, Inderbir S Gill, and Andrew J Hung.
- USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
- BJU Int. 2014 May 1;113(5):836-42.
Objectives
To evaluate robotic dry laboratory (dry lab) exercises in terms of their face, content, construct and concurrent validities, and to evaluate the applicability of the Global Evaluative Assessment of Robotic Skills (GEARS) tool for assessing dry lab performance.

Materials and Methods
Participants were prospectively categorized into two groups: robotic novice (no cases as primary surgeon) and robotic expert (≥30 cases). Participants completed three virtual reality (VR) exercises using the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA), as well as corresponding dry lab versions of each exercise (Mimic Technologies, Seattle, WA, USA) on the da Vinci Surgical System. Simulator performance was assessed by metrics measured on the simulator. Dry lab performance was blindly video-evaluated by expert review using the six-metric GEARS tool. Participants completed a post-study questionnaire (to evaluate face and content validity). A Wilcoxon non-parametric test was used to compare performance between groups (construct validity), and Spearman's correlation coefficient was used to assess the relationship between simulator and dry lab performance (concurrent validity).

Results
Novices had performed a mean of 0 robotic cases; experts had performed a mean (range) of 200 (30-2000) cases. Expert surgeons found the dry lab exercises both 'realistic' (median [range] score 8 [4-10] out of 10) and 'very useful' for training of residents (median [range] score 9 [5-10] out of 10). Overall, expert surgeons completed all dry lab tasks more efficiently (P < 0.001) and effectively (GEARS total score P < 0.001) than novices. In addition, experts outperformed novices in each individual GEARS metric (P < 0.001). Finally, in comparing dry lab with simulator performance, there was a moderate correlation overall (r = 0.54, P < 0.001). Most simulator metrics correlated moderately to strongly with corresponding GEARS metrics (r = 0.54, P < 0.001).

Conclusions
The robotic dry lab exercises in the present study have face, content, construct and concurrent validity with the corresponding VR tasks. Until now, the assessment of dry lab exercises has been limited to basic metrics (i.e. time to completion and error avoidance). For the first time, we have shown it is feasible to apply a global assessment tool (GEARS) to dry lab training.

© 2013 The Authors. BJU International © 2013 BJU International.
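As a rough illustration of the two validation statistics described above, a minimal sketch of the group comparison (construct validity) and the simulator-to-dry-lab correlation (concurrent validity) is shown below. This is not the authors' analysis code, and all scores are hypothetical placeholders, not study data.

```python
# Illustrative sketch of the validation statistics named in the abstract
# (hypothetical scores, NOT the study's data): a Wilcoxon rank-sum test
# compares novice vs expert GEARS totals (construct validity), and
# Spearman's rho relates simulator scores to dry lab GEARS scores
# (concurrent validity).
from scipy import stats

# Hypothetical GEARS total scores (6 metrics, max 30) for each group
novice_gears = [12, 14, 15, 13, 16, 14]
expert_gears = [26, 27, 25, 28, 27, 26]

# Construct validity: non-parametric comparison of the two independent groups
stat, p_construct = stats.ranksums(novice_gears, expert_gears)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, P={p_construct:.4f}")

# Hypothetical overall simulator scores (%) for the same participants,
# ordered to pair with the dry lab GEARS totals above
simulator_scores = [45, 50, 55, 48, 60, 52, 88, 92, 85, 95, 90, 87]
dry_lab_gears = novice_gears + expert_gears

# Concurrent validity: Spearman correlation between simulator and dry lab performance
rho, p_concurrent = stats.spearmanr(simulator_scores, dry_lab_gears)
print(f"Spearman correlation: r={rho:.2f}, P={p_concurrent:.4f}")
```

Both tests are non-parametric, matching the abstract's description: the rank-sum test makes no normality assumption about the score distributions, and Spearman's rho captures monotonic (not necessarily linear) association between the two performance measures.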