DSpace Repository

Integrating Explainable AI with Federated Based Deep Learning for Accurate and Transparent Lung Cancer Classification


dc.contributor.author Khan, Md Rayhan
dc.date.accessioned 2026-04-27T04:25:12Z
dc.date.available 2026-04-27T04:25:12Z
dc.date.issued 2025-12-27
dc.identifier.citation SWT en_US
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/17074
dc.description Thesis Report en_US
dc.description.abstract Lung cancer remains the leading cause of cancer death, and shifting diagnosis from late- to early-stage disease is the most effective way to improve survival. High-performing models, however, are often trained on small, siloed datasets and behave as "black boxes," which hinders inter-institutional collaboration and patient trust. We present a framework that combines Federated Learning (FL) with Explainable AI (XAI) to make lung cancer detection accurate, privacy-preserving, and transparent. Six deep learning backbones were trained on decentralized data with the FedAvg algorithm, so patient images never left their originating sites. The framework introduces federated explainability: clients generate local Grad-CAM visualizations together with a quantitative faithfulness metric (Deletion AUC), and the central server aggregates these scores to track the global model's trustworthiness round by round. Our findings indicate that accuracy and transparency can be attained concurrently. The proposed DenseNet-121 and HVR-18 (Hybrid ViT–ResNet-18, MLP) models performed best, with validation F1-scores of 0.9678 and 0.9677, respectively, while HSD-121 (Hybrid Swin-T + DenseNet-121, MLP) also performed strongly, with a validation F1-score of 0.9555. For federated explainability, DenseNet-121 achieved a Deletion AUC of 0.36, while HVR-18 and HSD-121 scored about 0.38 and 0.33, respectively, indicating that Grad-CAM-based explanations were reliable across all three architectures. During training we observed a strong positive relationship between the global F1-score and the aggregated Deletion AUC, and local heatmaps further confirmed that the models progressively learned to focus on diagnostically relevant features.
These results support a framework in which privacy, accuracy, and interpretability improve together, offering a clear path for building and monitoring trustworthy clinical AI in real-world, multi-institutional settings. en_US
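The abstract's training setup relies on FedAvg, which averages client model parameters weighted by each client's local dataset size. As a rough sketch of that aggregation step (not the thesis's actual implementation; all names and shapes here are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each layer's parameters across
    clients, weighted by the number of local training samples.

    client_weights: list (per client) of lists of np.ndarray layers
    client_sizes:   list of int local dataset sizes
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # weighted sum of this layer across all clients
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two hypothetical clients sharing a one-layer "model".
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
global_w = fedavg(clients, sizes)
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

The same weighted-average pattern would apply per communication round: the server broadcasts `global_w`, clients fine-tune locally, and the server re-aggregates.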
dc.description.sponsorship DIU en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Medical Image Analysis en_US
dc.subject Lung Cancer Classification en_US
dc.subject Federated Deep Learning en_US
dc.subject Explainable AI (XAI) en_US
dc.title Integrating Explainable AI with Federated Based Deep Learning for Accurate and Transparent Lung Cancer Classification en_US
dc.type Thesis en_US
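The Deletion AUC metric mentioned in the abstract measures explanation faithfulness by removing the most salient pixels first and tracking how quickly the model's confidence drops. A minimal sketch of that idea, assuming a generic `score_fn` (the thesis's exact masking and scoring details may differ):

```python
import numpy as np

def deletion_auc(score_fn, image, saliency, steps=10):
    """Deletion AUC sketch: zero out pixels in order of decreasing
    saliency and integrate the model score over the deleted fraction.
    A faster score drop (lower AUC) suggests a more faithful map.
    """
    order = np.argsort(-saliency.flatten())  # most salient first
    n = order.size
    base = image.astype(float)
    scores = [score_fn(base)]
    for k in range(1, steps + 1):
        masked = base.flatten().copy()
        masked[order[: int(n * k / steps)]] = 0.0  # delete top pixels
        scores.append(score_fn(masked.reshape(image.shape)))
    # trapezoidal area under score vs. deleted-fraction curve
    s = np.asarray(scores)
    return float(np.sum((s[:-1] + s[1:]) / 2.0) / steps)

# Toy check: with a mean-intensity "model" on an all-ones image,
# the score falls linearly from 1 to 0 as pixels are deleted.
img = np.ones((10, 10))
sal = np.arange(100, dtype=float).reshape(10, 10)
auc = deletion_auc(lambda x: float(x.mean()), img, sal)
# → 0.5
```

In the federated setting described above, each client would compute such a score on its own Grad-CAM maps, and the server would aggregate the per-client values alongside the model weights.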

