Background and Objectives: Medical image segmentation is a challenging task due to the low contrast between the region of interest and other textures, hair artifacts in dermoscopic images, illumination variations in images such as chest X-rays, and varying image acquisition conditions.
Methods: In this paper, we utilize a novel method based on convolutional neural networks (CNNs) for medical image segmentation and compare our results with two well-known architectures, U-Net and FCN. For the loss function, we use both the Jaccard distance and binary cross-entropy, and the optimizer is SGD with Nesterov momentum. Two preprocessing steps are applied: resizing the images to speed up processing, and image augmentation to improve the network's results. Finally, a thresholding technique is applied as postprocessing to the network outputs to improve the contrast of the predictions. We evaluate our model on the publicly available PH2 database for melanoma lesion segmentation and on chest X-ray images because, as mentioned, these two types of medical images contain hair artifacts and illumination variations; we demonstrate the robustness of our method in segmenting these images and compare it with the other methods.
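The two loss functions named above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the smoothing constant `smooth` and the clipping epsilon are assumptions, since the abstract does not specify them.

```python
import numpy as np

def jaccard_distance(y_true, y_pred, smooth=1.0):
    """Jaccard distance loss: 1 - IoU, with a smoothing term
    so the loss stays defined when both masks are empty."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return 1.0 - jac

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between a ground-truth mask
    and a predicted probability map."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))
```

Both losses approach zero as the predicted mask approaches the ground truth, which is why either can drive the SGD+Nesterov training described above.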
Results: Experimental results show that this method outperforms the two well-known U-Net and FCN architectures. Additionally, it improves on the performance metrics previously reported for dermoscopic and chest X-ray segmentation.
Conclusion: In this work, we proposed an encoder-decoder framework based on deep convolutional neural networks for medical image segmentation of dermoscopic and chest X-ray images. Two image augmentation techniques, rotation and horizontal flipping, are applied to the training dataset before it is fed to the network. The predictions produced by the model on test images are postprocessed with a thresholding technique to remove blurry boundaries around the predicted lesions.
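The augmentation and postprocessing steps described above can be sketched as below. This is a hedged illustration: the abstract does not state the rotation angles or the threshold value, so the 90-degree rotations and `t=0.5` here are assumptions.

```python
import numpy as np

def augment(image, mask):
    """Return the original image/mask pair plus its horizontally
    flipped and rotated copies (rotation angles assumed: 90/180/270)."""
    pairs = [(image, mask)]
    pairs.append((np.fliplr(image), np.fliplr(mask)))  # horizontal flip
    for k in (1, 2, 3):                                # 90-degree rotations
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    return pairs

def threshold(prob_map, t=0.5):
    """Binarize the network's soft prediction to sharpen the
    blurry boundaries around predicted lesions (t assumed)."""
    return (prob_map >= t).astype(np.uint8)
```

Applying the augmentation only to the training set, while thresholding only the test-time predictions, matches the pipeline order given in the conclusion.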
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit: http://creativecommons.org/licenses/by/4.0/
JECEI Publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Shahid Rajaee Teacher Training University