Abstract:
Lung cancer remains the leading cause of cancer death, and shifting diagnosis from late- to early-stage disease is the most effective way to improve survival. However, high-performing models are typically trained on small, siloed datasets and operate as "black boxes," which hinders cross-institutional collaboration and patient trust. We present a framework that combines Federated Learning (FL) and Explainable AI (XAI) to make lung cancer detection accurate, privacy-preserving, and transparent. We trained six deep learning backbones on decentralized data using the FedAvg algorithm, so patient images never left their originating sites. Our framework introduces federated explainability: clients generate local Grad-CAM visualizations together with a quantitative faithfulness metric (Deletion AUC), and the central server aggregates these scores to monitor the global model's trustworthiness on a round-by-round basis. Our findings indicate that accuracy and transparency can be attained concurrently. The proposed DenseNet-121 and HVR-18 (Hybrid ViT–ResNet-18, MLP) models performed best, with validation F1-scores of 0.9678 and 0.9677, respectively; the HSD-121 (Hybrid Swin-T + DenseNet-121, MLP) model also performed strongly, with a validation F1-score of 0.9555. For federated explainability, DenseNet-121 achieved a Deletion AUC of 0.36, while HVR-18 and HSD-121 scored approximately 0.38 and 0.33, respectively, indicating that Grad-CAM-based explanations were reliable across all three architectures. We observed a strong positive relationship between the global F1-score and the aggregated Deletion AUC during training, and local heatmaps further confirmed that the models progressively learned to focus on diagnostically relevant features. These results validate a framework in which privacy, accuracy, and interpretability improve together, providing a clear path for developing and monitoring trustworthy clinical AI in real-world, multi-institutional settings.
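To make the federated explainability mechanism concrete, the sketch below illustrates (under stated assumptions, not the authors' released implementation) how a client could compute a Deletion AUC from a Grad-CAM saliency map and how a server could aggregate client scores with FedAvg-style sample weighting. The names `predict_fn`, `deletion_auc`, and `aggregate_deletion_auc`, the masking schedule, and the number of deletion steps are illustrative choices, not details confirmed by the abstract.

```python
# Minimal sketch: per-client Deletion AUC and server-side, sample-weighted
# aggregation of faithfulness scores. Hypothetical helper names throughout.
import numpy as np

def deletion_auc(predict_fn, image, saliency, steps=20):
    """Progressively zero out the most salient pixels and record how the
    model's confidence for the target class decays; the (normalized) area
    under this confidence-vs-fraction-deleted curve is the Deletion AUC."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]      # most salient pixels first
    scores = [predict_fn(image)]                    # confidence on intact image
    masked = image.copy()
    per_step = max(1, (h * w) // steps)
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        rows, cols = np.unravel_index(idx, (h, w))
        masked[..., rows, cols] = 0.0               # delete this batch of pixels
        scores.append(predict_fn(masked))
    scores = np.asarray(scores, dtype=float)
    # Trapezoidal rule over equally spaced deletion fractions in [0, 1]
    return float(((scores[:-1] + scores[1:]) / 2.0).mean())

def aggregate_deletion_auc(client_scores, client_sizes):
    """Server-side aggregation: weight each client's mean Deletion AUC by its
    local sample count, mirroring FedAvg's weighting of model updates."""
    sizes = np.asarray(client_sizes, dtype=float)
    return float(np.dot(np.asarray(client_scores, dtype=float), sizes) / sizes.sum())

if __name__ == "__main__":
    # Toy usage with a dummy classifier and a random saliency map.
    rng = np.random.default_rng(0)
    img = rng.random((3, 64, 64))
    sal = rng.random((64, 64))
    dummy_predict = lambda x: float(x.mean())       # stand-in for a CNN's softmax score
    local_auc = deletion_auc(dummy_predict, img, sal)
    global_auc = aggregate_deletion_auc([local_auc, 0.40], [120, 80])
    print(local_auc, global_auc)
```

Weighting the aggregated faithfulness score by client sample counts keeps it consistent with how FedAvg weights model parameters, so the round-by-round trustworthiness signal reflects the same data distribution as the global model itself.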