SLSNet: Skin lesion segmentation using a lightweight generative adversarial network

dc.contributor.author: Sarker, Md Mostafa Kamal
dc.contributor.author: Rashwan, Hatem A.
dc.contributor.author: Akram, Farhan
dc.contributor.author: Singh, Vivek Kumar
dc.contributor.author: Banu, Syeda Furruka
dc.contributor.author: Chowdhury, Forhad U.H.
dc.contributor.author: Choudhury, Kabir Ahmed
dc.contributor.author: Chambon, Sylvie
dc.contributor.author: Radeva, Petia
dc.contributor.author: Puig, Domènec
dc.contributor.author: Abdel-Nasser, Mohamed
dc.date.accessioned: 2022-03-31T12:00:51Z
dc.date.available: 2022-03-31T12:00:51Z
dc.date.issued: 2021-11-30
dc.date.updated: 2022-03-31T12:00:51Z
dc.description.abstract: The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most importantly the presence of hair, inconspicuous lesion edges and low contrast in dermoscopic images, and variability in the color, texture and shape of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory. Consequently, running such segmentation algorithms requires a powerful GPU and high-bandwidth memory, which are not available in dermoscopy devices. Thus, this article aims to achieve precise skin lesion segmentation with minimum resources: a lightweight, efficient generative adversarial network (GAN) model called SLSNet, which combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminative ability between lesion and non-lesion feature representations in the spatial and channel dimensions, respectively. A multiscale block is also used to aggregate the coarse-to-fine features of the input skin images and reduce the effect of artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61% and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet can run at more than 110 frames per second (FPS) on a single GTX 1080 Ti GPU, which is faster than well-known deep learning-based image segmentation models such as FCN. Therefore, SLSNet can be used in practical dermoscopic applications.
dc.format.mimetype: application/pdf
dc.identifier.idgrec: 722515
dc.identifier.issn: 0957-4174
dc.identifier.uri: https://hdl.handle.net/2445/184577
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.isformatof: Reproduction of the document published at: https://doi.org/10.1016/j.eswa.2021.115433
dc.relation.ispartof: Expert Systems with Applications, 2021, vol. 183
dc.relation.uri: https://doi.org/10.1016/j.eswa.2021.115433
dc.rights: cc-by-nc-nd (c) Elsevier, 2021
dc.rights: cc by (c) Sarker, Md Mostafa Kamal et al., 2021
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.source: Articles published in journals (Mathematics and Computer Science)
dc.subject.classification: Skin diseases
dc.subject.classification: Machine learning
dc.subject.other: Skin diseases
dc.subject.other: Machine learning
dc.title: SLSNet: Skin lesion segmentation using a lightweight generative adversarial network
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
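
The abstract above credits 1-D kernel factorization as the main source of SLSNet's computational savings: a k x k convolution is replaced by a k x 1 convolution followed by a 1 x k convolution. The PyTorch sketch below only illustrates that general idea and is not the published SLSNet code; the module name Factorized2DConv, the channel sizes and the ReLU placement are illustrative assumptions.

    # Minimal sketch of 1-D kernel factorization (illustrative, not the authors' code).
    # A k x k convolution is approximated by a k x 1 convolution followed by a 1 x k
    # convolution, reducing per-layer weights roughly from k^2 to 2k per channel pair.
    import torch
    import torch.nn as nn

    class Factorized2DConv(nn.Module):
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            pad = k // 2
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(pad, 0)),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, pad)),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.block(x)

    standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    factorized = Factorized2DConv(64, 64, k=3)
    print(sum(p.numel() for p in standard.parameters()))    # 36928
    print(sum(p.numel() for p in factorized.parameters()))  # 24704
    x = torch.randn(1, 64, 128, 128)
    print(factorized(x).shape)  # torch.Size([1, 64, 128, 128])

For a 3 x 3 kernel at these (assumed) channel sizes the factorized block uses about a third fewer parameters than the standard convolution while preserving the spatial resolution, which is consistent with the lightweight design goal described in the abstract.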

Files

Original bundle

Name: 722515.pdf
Size: 2.25 MB
Format: Adobe Portable Document Format