Document Type

Dissertation

Date of Award

5-31-2021

Degree Name

Doctor of Philosophy in Computing Sciences - (Ph.D.)

Department

Computer Science

First Advisor

Frank Y. Shih

Second Advisor

Xiaoning Ding

Third Advisor

Zhi Wei

Fourth Advisor

Gareth J. Russell

Fifth Advisor

Hai Nhat Phan

Abstract

Deep learning in computer vision and image processing has attracted attention from various fields, including ecology and medical imaging. Ecologists are interested in finding effective model structures to classify different species. Traditional deep learning models based on convolutional neural networks, such as LeNet, AlexNet, the VGG models, residual neural networks, and the Inception models, are first applied to classifying bee wing and butterfly datasets. However, insufficient data samples and unbalanced samples across classes cause poor accuracy. To improve the test accuracy, data augmentation and transfer learning are applied. A recently developed deep learning framework based on mathematical morphology also shows its effectiveness in shape representation, contour detection, and image smoothing. The experimental results show that the morphological neural network (MNN) is also effective on the ecology and medical datasets. Compared with CNNs, the MNN achieves similar or better results on the following datasets.
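The data augmentation step mentioned above can be sketched minimally as follows. This is an illustrative NumPy example under assumed transforms (random flip and quarter-turn rotation); the dissertation's actual augmentation pipeline is not specified here.

```python
import numpy as np

def augment(img, rng):
    """Apply a random horizontal flip and a random 90-degree rotation.

    A minimal augmentation sketch; real pipelines typically also use
    crops, small-angle rotations, and intensity jitter.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]                           # horizontal flip
    img = np.rot90(img, k=int(rng.integers(4)))      # 0-3 quarter turns
    return img.copy()

# Oversampling a minority class via augmentation is one common way to
# counter unbalanced samples across classes.
rng = np.random.default_rng(0)
minority_image = np.arange(16.0).reshape(4, 4)
augmented = [augment(minority_image, rng) for _ in range(8)]
```

Because flips and quarter turns preserve image dimensions, each augmented sample keeps the original shape and can be fed to the same network input layer.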

Chest X-ray images are notoriously difficult for radiologists to analyze due to their noisy nature. Existing models based on convolutional neural networks contain a huge number of parameters and thus require multiple advanced GPUs to deploy. In this research, morphological neural networks are developed to classify chest X-ray images, including the Pneumonia dataset and the COVID-19 dataset. A novel structure, which can self-learn a morphological dilation or erosion, is proposed for determining the most suitable depth of the adaptive layer. Experimental results on the chest X-ray dataset and the COVID-19 dataset show that the proposed model achieves the highest classification rate compared with existing models. More significantly, the proposed model reduces the computational parameters of the existing models by around 97%.
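The idea of a layer that self-learns whether to dilate or erode can be illustrated with a smooth (log-sum-exp) morphological operator, where the sign of a single parameter selects between the two operations. The following is a hedged 1-D NumPy sketch of that general idea, not the dissertation's actual layer; the function name and parameterization are assumptions.

```python
import numpy as np

def soft_morph(x, w, beta):
    """Smooth morphological operator on a 1-D signal x.

    w holds the structuring-element weights (window length = len(w)).
    beta >> 0 approximates grayscale dilation (windowed max of x + w);
    beta << 0 approximates erosion (windowed min, for a flat w), so a
    learnable beta lets one layer choose between the two operations.
    """
    k = len(w)
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty(len(x))
    for i in range(len(x)):
        v = beta * (xp[i:i + k] + w)
        m = v.max()
        # log-sum-exp: a differentiable surrogate for max (or min)
        out[i] = (m + np.log(np.exp(v - m).sum())) / beta
    return out

x = np.array([0.0, 1.0, 0.0, 0.0])
w = np.zeros(3)                      # flat structuring element
dil = soft_morph(x, w, beta=60.0)    # close to windowed max: [1, 1, 1, 0]
ero = soft_morph(x, w, beta=-60.0)   # close to windowed min: [0, 0, 0, 0]
```

Because log-sum-exp is differentiable, `beta` (and `w`) can be trained by backpropagation, which is what allows the layer to decide between dilation and erosion from data.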

Automatic identification of pneumonia in medical images has attracted intensive study recently. Detecting pneumonia requires both a precise classification model and a localization model. A joint-task learning model with shared parameters is proposed to combine the classification model and the segmentation model, in order to accurately classify and localize the pneumonia area. Experimental results on the massive dataset of the Radiological Society of North America confirm its efficiency, showing a test mean intersection over union (IoU) of 89.27% and a mean precision of 58.45% for area detection in the segmentation model. Two new models are then proposed to improve the performance of the original joint-task learning model. In the first model, two new modules, an image preprocessing module and an attention module, are developed to improve both classification and segmentation accuracies. In the second model, a novel design combines convolutional layers and morphological layers with an attention mechanism. Experimental results on the massive dataset of the Radiological Society of North America confirm its superiority over other existing methods: the classification test accuracy is improved from 0.89 to 0.95, and the segmentation model improves the mean precision from 0.58 to 0.78. Finally, two weakly supervised learning methods, class saliency maps and Grad-CAM, are used to highlight the pixels or areas that most influence the classification model, so that the refined segmentation can focus on the correct areas with high confidence.
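The mean IoU reported above is, per image, the intersection of the predicted and ground-truth masks divided by their union. A minimal sketch (the function name and the toy masks are illustrative):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection over union of two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:        # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union

gt = np.zeros((4, 4), dtype=int)
gt[0:2, 0:2] = 1          # 4-pixel ground-truth region
pred = np.zeros((4, 4), dtype=int)
pred[0:2, 0:3] = 1        # 6-pixel prediction covering all 4 true pixels
print(mask_iou(pred, gt)) # 4 / 6 ≈ 0.667
```

The dataset-level score is then the mean of this per-image IoU over the test set.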
