Abstract:
This paper proposes an interpretable, privacy-preserving lung cancer classification framework that combines federated learning (FL) with Explainable AI (XAI) techniques. The system comprises four deep learning models (a custom CNN, LeNet-5, AlexNet, and ResNet50) trained on the IQ-OTH/NCCD CT scan dataset to classify lung abnormalities as malignant, benign, or normal. Both centralized and federated training schemes were evaluated. For federated learning, three aggregation algorithms (FedAvg, FedProx, and FedADMM) were applied under both IID and non-IID partitions of the dataset. We show that ResNet50 with FedProx performed best overall, achieving high global accuracy in non-IID settings while preserving data privacy. Visual explanations were generated with Grad-CAM to improve interpretability and clinical confidence in the AI predictions. A Django-based web application allows clinicians to upload a CT scan, receive the corresponding classification, and view the Grad-CAM heatmaps that explain it. The system complies with the ISO/IEC 25010 software quality standard, transmits data over HTTPS, and is designed for future integration with HL7 FHIR. Despite computational and data limitations, this work demonstrates that combining federated deep learning with explainability is practical for secure, accurate, and transparent diagnostic tasks in medicine. It offers a scalable approach to real-world clinical translation, supporting both technical innovation and the ethical application of AI in healthcare.
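To make the aggregation schemes named above concrete, the following is a minimal, framework-agnostic sketch of server-side FedAvg aggregation and the client-side proximal gradient term that distinguishes FedProx. Function names and the flat-list parameter representation are illustrative assumptions, not the paper's implementation.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client model parameters, weighted by local dataset size.

    client_weights: list of flat parameter vectors (one per client).
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]


def fedprox_prox_grad(w_local, w_global, mu):
    """Gradient of the FedProx proximal term (mu/2)*||w_local - w_global||^2.

    Each client adds this to its ordinary loss gradient, which limits local
    drift from the global model and helps under non-IID data.
    """
    return [mu * (wl - wg) for wl, wg in zip(w_local, w_global)]


# Example: two clients with unequal data sizes.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(global_w)  # the larger client pulls the average toward its weights
```

FedProx uses the same server-side averaging as FedAvg; the difference lies entirely in the proximal penalty applied during each client's local updates.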