Comput Methods Programs Biomed · Sep 2019
Automatic Multi-Level In-Exhale Segmentation and Enhanced Generalized S-Transform for wheezing detection.
- Hai Chen, Xiaochen Yuan, Jianqing Li, Zhiyuan Pei, and Xiaobin Zheng.
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau; School of Information Technology, Beijing Normal University, Zhuhai, Zhuhai, China. Electronic address: isabell@bnuz.edu.cn.
- Comput Methods Programs Biomed. 2019 Sep 1; 178: 163-173.
Background and Objective: Wheezing is a common symptom of asthma and chronic obstructive pulmonary disease. Wheezing detection identifies wheezing lung sounds and helps physicians in the diagnosis, monitoring, and treatment of pulmonary diseases. Unlike the traditional way of detecting wheezing sounds with digital image processing methods, automatic wheezing detection uses computerized tools or algorithms to objectively and accurately assess and evaluate lung sounds. We propose an innovative machine learning-based approach for wheezing detection, in which the phases of the respiratory sounds are separated automatically and the wheezing features are extracted accordingly to improve the classification accuracy.
Methods: To enhance the wheezing features for classification, Adaptive Multi-Level In-Exhale Segmentation (AMIE_SEG) is proposed to automatically and precisely segment the respiratory sounds into inspiratory and expiratory phases. Furthermore, the Enhanced Generalized S-Transform (EGST) is proposed to extract the wheezing features. The highlighted wheezing features improve the accuracy of wheezing detection with machine learning-based classifiers.
Results: To evaluate the novelty and superiority of the proposed AMIE_SEG and EGST for wheezing detection, we employ three machine learning-based classifiers, Support Vector Machine (SVM), Extreme Learning Machine (ELM), and K-Nearest Neighbor (KNN), on public datasets at the segment level and the record level. According to the experimental results, the proposed method performs best with the KNN classifier at the segment level, with an average accuracy, sensitivity, and specificity of 98.62%, 95.9%, and 99.3%, respectively. At the record level, all three classifiers perform excellently, with accuracy, sensitivity, and specificity up to 99.52%, 100%, and 99.27%, respectively. We validate the method with a public respiratory sounds dataset.
Conclusion: The comparison results indicate the very good performance of the proposed methods for long-term wheezing monitoring and telemedicine.
Copyright © 2019 Elsevier B.V. All rights reserved.
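The abstract gives no implementation details, so the sketch below only illustrates the general shape of an S-transform-plus-classifier pipeline: a plain discrete generalized S-transform (frequency-domain Stockwell formulation with illustrative window parameters `lam` and `p`), magnitude pooling into band-energy features, and a scikit-learn KNN classifier. This is not the authors' AMIE_SEG or EGST code; the enhancement and in-exhale segmentation steps are omitted, and the names `generalized_s_transform`, `tf_features`, `segments`, and `labels` are assumptions made for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def generalized_s_transform(x, lam=1.0, p=1.0):
    """Discrete generalized S-transform (frequency-domain Stockwell form).

    For each nonzero frequency index k, row k is
    IFFT_m( X[m + k] * exp(-2*pi^2 * m^2 * lam^2 / k^(2p)) ),
    where X is the FFT of x. lam = p = 1 recovers the standard
    S-transform; lam and p adjust the Gaussian window width.
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                 # signed integer frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                        # DC row: the signal mean
    for k in range(1, N // 2 + 1):
        window = np.exp(-2.0 * np.pi**2 * m**2 * lam**2 / k**(2 * p))
        S[k, :] = np.fft.ifft(np.roll(X, -k) * window)
    return np.abs(S)                          # magnitude time-frequency map

def tf_features(segment, lam=1.0, p=1.0, n_bands=16):
    """Pool the S-transform magnitude into a compact band-energy vector."""
    mag = generalized_s_transform(segment, lam, p)
    bands = np.array_split(mag, n_bands, axis=0)   # group frequency rows
    return np.array([b.mean() for b in bands])

# Hypothetical usage: `segments` is a list of 1-D respiratory-sound arrays
# (one per inspiratory/expiratory phase) and `labels` marks each segment
# as wheeze (1) or normal (0).
# X = np.vstack([tf_features(s) for s in segments])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
# predictions = clf.predict(X)
```

Segment-level accuracy, sensitivity, and specificity such as those reported in the abstract would then be computed from the classifier's predictions on a held-out split.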