- Fei Dai, Dengyi Zhang, Kehua Su, and Ning Xin.
- School of Computer Science, Wuhan University, Wuhan, China.
- J Burn Care Res. 2021 Aug 4;42(4):755-762.
Abstract

Burn injuries are a severe problem for humans. Accurate segmentation of burn wounds on the patient's body surface can improve the precision of the %TBSA (total burn surface area) calculation, which helps determine the treatment plan. Recently, deep learning methods have been used to segment wounds automatically; however, owing to the difficulty of collecting relevant images as training data, these methods often cannot achieve fine segmentation. A burn image-generating framework is proposed in this paper to produce annotated burn image datasets automatically; these datasets can be used to increase segmentation accuracy and save annotation time. The proposed framework, called Burn-GAN, consists of four parts: generating burn wounds with the mainstream Style-GAN network; fusing wounds with human skin by Color Adjusted Seamless Cloning (CASC); simulating real burn scenes in three-dimensional space; and acquiring annotated datasets through three-dimensional and local burn coordinate transformations. Using this framework, a large variety of burn image datasets can be obtained. Standard metrics such as precision, Pixel Accuracy (PA), and Dice Coefficient (DC) were used to assess the framework. With the nonsaturating loss with R2 regularization (NSLR2) and CASC, the segmentation network achieves the best results: a precision of 90.75%, a PA of 96.88%, and an improvement in DC from 84.5% to 89.3%. A burn data-generating framework has thus been built to improve the segmentation network, which can automatically segment burn images with higher accuracy and in less time than traditional methods.

© The Author(s) 2020. Published by Oxford University Press on behalf of the American Burn Association. All rights reserved.
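As a note on the fusion step: the sketch below is a minimal approximation of color-adjusted seamless cloning, assuming a simple LAB-space color transfer followed by OpenCV's Poisson blending (`cv2.seamlessClone`). The function names and the color-matching strategy are illustrative assumptions, not the authors' CASC implementation.

```python
# Minimal sketch of color-adjusted seamless cloning (assumed approximation of CASC,
# not the authors' implementation).
import cv2
import numpy as np

def match_color_lab(wound_bgr, skin_bgr):
    """Shift the wound patch's LAB mean/std toward the skin image's (simple color transfer)."""
    wound = cv2.cvtColor(wound_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    skin = cv2.cvtColor(skin_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        w_mean, w_std = wound[..., c].mean(), wound[..., c].std() + 1e-6
        s_mean, s_std = skin[..., c].mean(), skin[..., c].std() + 1e-6
        wound[..., c] = (wound[..., c] - w_mean) * (s_std / w_std) + s_mean
    wound = np.clip(wound, 0, 255).astype(np.uint8)
    return cv2.cvtColor(wound, cv2.COLOR_LAB2BGR)

def fuse_wound(wound_bgr, wound_mask, skin_bgr, center_xy):
    """Blend a generated wound patch into a skin image with Poisson seamless cloning."""
    adjusted = match_color_lab(wound_bgr, skin_bgr)
    return cv2.seamlessClone(adjusted, skin_bgr, wound_mask, center_xy, cv2.NORMAL_CLONE)
```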
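The evaluation metrics named in the abstract (precision, PA, DC) are standard pixel-wise measures for binary segmentation masks; a minimal sketch follows. The function name and smoothing constants are illustrative only.

```python
# Standard pixel-wise segmentation metrics (precision, Pixel Accuracy, Dice Coefficient).
import numpy as np

def segmentation_metrics(pred, gt):
    """pred, gt: boolean arrays of the same shape (True = burn wound pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    precision = tp / (tp + fp + 1e-9)
    pixel_accuracy = (tp + tn) / (tp + tn + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn + 1e-9)
    return precision, pixel_accuracy, dice
```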