
Explainable AI Model Using Federated Learning for Eye Disease Diagnostics


dc.contributor.author Farid, Md Nur Hossain
dc.date.accessioned 2026-04-25T09:24:35Z
dc.date.available 2026-04-25T09:24:35Z
dc.date.issued 2025-12-30
dc.identifier.citation SWT en_US
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/17030
dc.description Thesis Report en_US
dc.description.abstract Retinal disease remains a leading cause of vision loss worldwide. The most effective way to improve outcomes is to shift diagnosis from late-stage to early-stage disease. However, high-performing models are generally trained on siloed datasets and behave like "black boxes," which hinders collaboration across institutions and clinicians' trust. We show how to combine Federated Learning (FL) and Explainable AI (XAI) to make retinal disease detection accurate, private, and transparent. We trained six deep learning backbones on decentralized data using the FedAvg algorithm, so patient images never leave their home institutions. Our platform includes federated explainability: clients produce local Grad-CAM visualizations together with a quantitative faithfulness metric (Deletion AUC), and the central server aggregates these scores so the model's trustworthiness can be monitored globally, round by round. Our findings demonstrate that accuracy and transparency can be attained together. The proposed HVR-18 (Hybrid ViT–ResNet-18, MLP) model proved the best option, achieving a state-of-the-art validation F1-score of 0.9677 and a federated Deletion AUC of 58.8, almost twice that of the next-best model, DenseNet-121 (Val F1 0.9671, Deletion AUC ≈ 30.7). We observed a strong positive association between the global F1-score and the aggregated Deletion AUC, which rose together during training, and local heatmaps showed the models increasingly focusing on diagnostically relevant regions. These results support a paradigm in which privacy, accuracy, and interpretability improve together, providing a clear path for building and monitoring trustworthy clinical AI in real-world, multi-institutional settings. en_US
dc.description.sponsorship DIU en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Medical Image Analysis en_US
dc.subject Explainable AI (XAI) en_US
dc.subject Federated Learning en_US
dc.subject Eye Disease Diagnosis en_US
dc.title Explainable AI Model Using Federated Learning for Eye Disease Diagnostics en_US
dc.type Thesis en_US
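The abstract describes two server-side aggregation steps: FedAvg averaging of client model weights, and pooling of per-client Deletion AUC faithfulness scores into a global round-by-round metric. A minimal Python sketch of both steps is below; it assumes sample-size weighting, and all names (`fedavg`, `aggregate_scores`, `client_sizes`) are illustrative, not code from the thesis.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: sample-size-weighted average of client model weights.

    client_weights: one list of np.ndarray layers per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]

def aggregate_scores(client_scores, client_sizes):
    """Server-side weighted mean of per-client Deletion AUC scores,
    yielding one global faithfulness value per federated round."""
    total = sum(client_sizes)
    return sum(s * n for s, n in zip(client_scores, client_sizes)) / total

# Toy round with two equally sized clients: the averaged layer is the
# midpoint of the clients' layers, and the global score is their mean.
global_layers = fedavg(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    client_sizes=[100, 100],
)
global_deletion_auc = aggregate_scores([0.6, 0.3], client_sizes=[100, 100])
```

In the toy round above, `global_layers[0]` is `[2.0, 3.0]` and `global_deletion_auc` is `0.45`. Because each client is weighted by its local sample size, larger hospitals pull both the model weights and the trust metric proportionally harder, which matches the standard FedAvg formulation.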

