EDDNet30: A Spatial Attention and Multi-Scale Fusion Model for Enhanced Eye Disease Classification with Explainable AI

Show simple item record

dc.contributor.author Paul, Showmick Guha
dc.date.accessioned 2026-04-12T09:22:23Z
dc.date.available 2026-04-12T09:22:23Z
dc.date.issued 2025-10-14
dc.identifier.citation CSE en_US
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/16737
dc.description Thesis en_US
dc.description.abstract Eye diseases are a leading global cause of blindness and vision impairment, underscoring the critical need for accurate and timely diagnosis to prevent further deterioration. Despite major advances in medical imaging, classifying retinal diseases from fundus images remains difficult because the visual indicators are complex and subtle. The goal of this work is to develop and evaluate EDDNet30, a novel 30-layer deep learning model for classifying eye diseases from fundus photographs. The model incorporates key architectural components, namely spatial attention and multi-scale fusion modules, that improve diagnostic accuracy and reliability. To validate the proposed model, a diverse dataset of 5,531 images covering nine major disease classes was collected from multiple sources. Extensive pre-processing, including histogram equalization, color space conversion, and contrast adjustment, was applied to ensure the images were clear and sharp, and image data augmentation was used to enlarge the dataset, helping the model generalize better during training. The spatial attention module directs the network toward the most diagnostically relevant regions of each image, while the multi-scale fusion modules capture and combine features at different scales, which substantially improves classification. EDDNet30 was benchmarked against a variety of transfer learning models and consistently outperformed them, achieving 95.29% accuracy on the held-out 10% test split. This indicates that EDDNet30 distinguishes between eye diseases more accurately and reliably.
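The thesis metadata does not include code, but the spatial attention mechanism described above can be sketched framework-free. The following is a minimal, hypothetical illustration of a CBAM-style spatial attention gate: the feature map is pooled across channels (average and max), the two pooled maps are mixed with placeholder learned weights standing in for a convolution, and a sigmoid mask reweights every spatial location. The function name, weights, and shapes are illustrative assumptions, not EDDNet30's actual implementation.

```python
import numpy as np

def spatial_attention(feature_map, mix_weights=None):
    """CBAM-style spatial attention over a (H, W, C) feature map.

    Pools across the channel axis, mixes the two pooled maps with
    placeholder weights (standing in for a learned convolution), and
    gates the input with a sigmoid mask in [0, 1].
    """
    avg_pool = feature_map.mean(axis=-1)            # (H, W)
    max_pool = feature_map.max(axis=-1)             # (H, W)
    if mix_weights is None:
        mix_weights = np.array([0.5, 0.5])          # hypothetical learned weights
    logits = mix_weights[0] * avg_pool + mix_weights[1] * max_pool
    mask = 1.0 / (1.0 + np.exp(-logits))            # sigmoid attention mask
    return feature_map * mask[..., None]            # broadcast over channels

# Toy usage: one 4x4 spatial grid with 8 channels.
x = np.random.rand(4, 4, 8)
y = spatial_attention(x)
```

Because the sigmoid mask lies strictly between 0 and 1, the gated output preserves the input's shape while attenuating every activation in proportion to its location's estimated importance.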
We also applied explainable AI techniques, namely Grad-CAM, Grad-CAM++, and LIME, to identify and highlight the factors that most influence the decision-making process, making the model easier to interpret. EDDNet30 may represent a significant advance in the automated detection of eye diseases, offering diagnostic accuracy suitable for clinical use. en_US
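The core Grad-CAM computation mentioned above can be sketched independently of any deep-learning framework. Given the convolutional feature maps and the gradient of the class score with respect to them, each channel's weight is the global average of its gradient, and the heatmap is the ReLU of the weighted channel sum. The synthetic arrays below are illustrative only; the thesis applies Grad-CAM to EDDNet30's actual activations.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv activations and class-score gradients.

    feature_maps, gradients: arrays of shape (H, W, K) for K channels.
    Each channel weight alpha_k is the global average of its gradient;
    the heatmap is ReLU(sum_k alpha_k * A_k), normalized to [0, 1].
    """
    weights = gradients.mean(axis=(0, 1))                         # (K,) alpha_k
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)  # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                     # scale to [0, 1]
    return cam

# Toy usage with synthetic activations and gradients.
A = np.random.rand(7, 7, 16)
dY = np.random.rand(7, 7, 16)
heat = grad_cam(A, dY)
```

The resulting heatmap can be upsampled to the fundus image's resolution and overlaid on it, which is how the highlighted decision regions described in the abstract are typically produced.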
dc.description.sponsorship DIU en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Explainable Artificial Intelligence (XAI) en_US
dc.subject Eye Disease en_US
dc.subject Classification en_US
dc.subject Spatial Attention Mechanism en_US
dc.subject Multi-Scale Feature Fusion en_US
dc.title EDDNet30: A Spatial Attention and Multi-Scale Fusion Model for Enhanced Eye Disease Classification with Explainable AI en_US
dc.type Thesis en_US


