| dc.description.abstract |
This paper presents a deep learning approach to distinguishing real medical images from AI-generated counterparts across X-ray, CT, and MRI modalities, addressing a critical challenge in healthcare diagnostics. Using transfer learning with the pre-trained models InceptionV3, ResNet50, DenseNet121, VGG19, and MobileNetV2 on a 3,000-image dataset (500 images per class), the study reports a peak accuracy of 0.88 with InceptionV3; MobileNetV2 reaches 0.82, with an F1-score of 0.97 and near-perfect recall (1.00) on the AI_MRI class, while the deeper models ResNet50, DenseNet121, and VGG19 remain below 0.60 accuracy. The framework follows the IEEE 1012-2016 software verification and validation standard and the DICOM standard, making it relevant to engineering and medical practice for improving diagnostic reliability. Limitations include the dataset's small size, which leads to overfitting (e.g., 0.9766 training vs. 0.8833 validation accuracy for InceptionV3). Future work proposes expanding the dataset to 10,000 images, adversarial training, and edge-device optimization to enhance diagnostic reliability and scalability in healthcare.
en_US |