DSpace Repository

Brain Tumor Auto-Segmentation on Multimodal Imaging Modalities Using Deep Neural Network


dc.contributor.author Hossain, Elias
dc.contributor.author Hossain, Md. Shazzad
dc.contributor.author Hossain, Md. Selim
dc.contributor.author Jannat, Sabila Al
dc.contributor.author Huda, Moontahina
dc.contributor.author Alsharif, Sameer
dc.contributor.author Faragallah, Osama S.
dc.contributor.author Eid, Mahmoud M. A.
dc.contributor.author Rashed, Ahmed Nabih Zaki
dc.date.accessioned 2024-03-28T08:15:50Z
dc.date.available 2024-03-28T08:15:50Z
dc.date.issued 2022-02-16
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/11888
dc.description.abstract Due to the difficulty of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net architecture and ResNet50, followed by conventional classification strategies. In this study, ResNet50 achieved the highest accuracy at 98.96%, and the 3D U-Net scored 97.99% among the deep learning methods evaluated; a traditional Convolutional Neural Network (CNN) achieved 97.90% accuracy on the 3D MRI data. In addition, an image fusion approach combines the multimodal images into a single fused image to extract more features from the medical images. Furthermore, we evaluated the loss function using several Dice metrics and obtained Dice results on specific test cases: the average of the Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were 0.0211 and 0.5867, respectively, using patch-level predictions. A software integration pipeline was also built to deploy the trained model to a web server so that it can be accessed from a software system through a Representational State Transfer (REST) API. Finally, the proposed models were validated with the Area Under the Receiver Operating Characteristic curve (AUC-ROC) and confusion matrices, and compared with existing research articles to understand the underlying problem. Through this comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps. The proposed model can be adapted in daily life and in the healthcare domain to identify infected regions and brain cancer through various imaging modalities. en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Brain cancer en_US
dc.subject Neural networks en_US
dc.subject Diseases en_US
dc.title Brain Tumor Auto-Segmentation on Multimodal Imaging Modalities Using Deep Neural Network en_US
dc.type Article en_US
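The abstract reports Dice coefficient and soft Dice loss figures for the segmentation masks. As an illustrative sketch only (the authors' code is not part of this record, and all function names here are hypothetical), these two metrics are commonly computed as follows, using NumPy:

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-6):
    """Hard Dice overlap between two binary segmentation masks.

    Returns 2*|A ∩ B| / (|A| + |B|); eps avoids division by zero
    when both masks are empty.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def soft_dice_loss(y_true, y_prob, eps=1e-6):
    """1 minus the soft Dice score, computed on predicted probabilities.

    Differentiable variant typically used for training; y_prob holds
    per-voxel tumor probabilities rather than a thresholded mask.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    numerator = 2.0 * (y_true * y_prob).sum() + eps
    denominator = (y_true ** 2).sum() + (y_prob ** 2).sum() + eps
    return 1.0 - numerator / denominator
```

A perfect prediction yields a Dice coefficient of 1.0 and a soft Dice loss near 0; how the paper averages these over test cases and patches is specific to its evaluation protocol.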