Please use this identifier to cite or link to this item:
http://hdl.handle.net/2445/192635
Title: | Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid $\ell_1-\ell_0$ layer decomposition domain |
Author: | Das, Manisha; Gupta, Deep; Radeva, Petia; Bakde, Ashwini M. |
Keywords: | Diagnostic imaging; Neurologic manifestations of general diseases; Medical imaging; Imaging systems in medicine |
Issue Date: | Jul-2021 |
Publisher: | Elsevier Ltd |
Abstract: | Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging the complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed that applies an optimized biologically inspired feedforward neural model in a two-scale hybrid $\ell_1-\ell_0$ decomposition domain, with gray wolf optimization used to preserve both the structural and the texture information present in the source CT and MR images. Initially, the source images are subjected to two-scale $\ell_1-\ell_0$ decomposition with optimized parameters, yielding a scale-1 detail layer, a scale-2 detail layer and a scale-2 base layer. The detail layers at scales 1 and 2 are fused using an optimized biologically inspired neural model and a weighted-average scheme based on local energy and modified spatial frequency, to maximize the preservation of edges and local textures, respectively, while the scale-2 base layer is fused using a choose-max rule to preserve the background information. To optimize the hyper-parameters of the hybrid $\ell_1-\ell_0$ decomposition and the biologically inspired neural model, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed through extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method produces better fused images and outperforms other state-of-the-art fusion methods in both visual and quantitative assessments. |
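For a concrete feel of the simpler building blocks named in the abstract above, the Python sketch below illustrates a local-energy weighted-average rule for detail layers, a choose-max rule for the base layer, and the spatial-frequency term of the fitness function. This is not the authors' code: the hybrid $\ell_1-\ell_0$ decomposition, the biologically inspired spiking neural model, the edge-index term and the gray wolf optimizer are not reproduced, and the function names and the 7×7 window size are assumptions.

```python
# Minimal illustrative sketch, NOT the authors' implementation.
# Only generic fusion rules and the spatial-frequency metric named in the
# abstract are shown; decomposition, spiking neural model, edge index and
# gray wolf optimization are omitted. Names and window size are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter


def spatial_frequency(img: np.ndarray) -> float:
    """Classical spatial frequency SF = sqrt(RF^2 + CF^2)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))


def local_energy(layer: np.ndarray, size: int = 7) -> np.ndarray:
    """Sum of squared coefficients in a size-by-size neighbourhood."""
    return uniform_filter(layer.astype(np.float64) ** 2, size=size) * size * size


def fuse_detail_weighted(d_ct: np.ndarray, d_mr: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Weighted-average rule driven by local energy (one common variant)."""
    e_ct, e_mr = local_energy(d_ct), local_energy(d_mr)
    w = e_ct / (e_ct + e_mr + eps)
    return w * d_ct + (1.0 - w) * d_mr


def fuse_base_choose_max(b_ct: np.ndarray, b_mr: np.ndarray) -> np.ndarray:
    """Choose-max rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(b_ct) >= np.abs(b_mr), b_ct, b_mr)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct, mr = rng.random((128, 128)), rng.random((128, 128))  # stand-ins for real layers
    fused_detail = fuse_detail_weighted(ct, mr)
    fused_base = fuse_base_choose_max(ct, mr)
    print("SF of fused detail layer:", spatial_frequency(fused_detail))
```

In the paper, such per-layer fusion outputs are summed to form the final image, whose spatial frequency (together with an edge index) drives the optimization of the decomposition and neural-model hyper-parameters.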
Note: | Postprint version of the document published at: https://doi.org/10.1016/j.bspc.2021.102535 |
It is part of: | Biomedical Signal Processing and Control, 2021, vol. 68 |
URI: | http://hdl.handle.net/2445/192635 |
Related resource: | https://doi.org/10.1016/j.bspc.2021.102535 |
ISSN: | 1746-8094 |
Appears in Collections: | Articles published in journals (Matemàtiques i Informàtica) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
722517.pdf | | 8.16 MB | Adobe PDF | View/Open (Request a copy) |
Document embargoed until 31-7-2023
This item is licensed under a Creative Commons License