dc.description.abstract |
The increasing incidence of skin cancer necessitates advancements in early detection methods, where deep learning can be beneficial. This study introduces SkinNet-14, a novel deep learning model designed to classify skin cancer types from low-resolution dermoscopy images. Unlike existing models that require high-resolution images and extensive training times, SkinNet-14 leverages a modified compact convolutional transformer (CCT) architecture to process 32 × 32 pixel images effectively, significantly reducing both the computational load and the training duration. The framework employs several image preprocessing and augmentation strategies to enhance input image quality and to address the class imbalance common in medical datasets. The model was tested on three distinct datasets (HAM10000, ISIC and PAD), demonstrating high performance with accuracies of 97.85%, 96.00% and 98.14%, respectively, while reducing the training time to 2–8 s per epoch. Compared with traditional transfer learning models, SkinNet-14 not only improves accuracy but also maintains stability even with smaller training sets. This research addresses a critical gap in automated skin cancer detection, particularly in resource-limited settings, and highlights the potential of efficient transformer-based models for medical image analysis. |
en_US |
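
Illustrative sketch: the abstract describes a modified compact convolutional transformer (CCT) operating on 32 × 32 pixel dermoscopy images. The PyTorch snippet below is a minimal, assumption-based sketch of a generic CCT-style classifier at that input size; the layer widths, encoder depth, attention heads and the 7-class output (as in HAM10000) are illustrative choices, not the published SkinNet-14 configuration.

# Minimal sketch of a CCT-style classifier for 32x32 dermoscopy images.
# NOT the published SkinNet-14 architecture; sizes and the 7-class head
# (e.g. HAM10000) are illustrative assumptions.
import torch
import torch.nn as nn


class ConvTokenizer(nn.Module):
    """Convolutional tokenizer: maps a 32x32 image to a token sequence."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(64, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8, i.e. 64 tokens
        )

    def forward(self, x):
        x = self.net(x)                           # (B, D, 8, 8)
        return x.flatten(2).transpose(1, 2)       # (B, 64, D)


class CCTClassifier(nn.Module):
    """Conv tokenizer + transformer encoder + sequence pooling + linear head."""
    def __init__(self, num_classes=7, embed_dim=128, depth=4, heads=4):
        super().__init__()
        self.tokenizer = ConvTokenizer(embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, dim_feedforward=embed_dim * 2,
            dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Sequence pooling: attention-weighted average of tokens
        # (positional embeddings omitted for brevity).
        self.attn_pool = nn.Linear(embed_dim, 1)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        tokens = self.tokenizer(x)                # (B, N, D)
        tokens = self.encoder(tokens)             # (B, N, D)
        w = torch.softmax(self.attn_pool(tokens), dim=1)   # (B, N, 1)
        pooled = (w * tokens).sum(dim=1)          # (B, D)
        return self.head(pooled)


if __name__ == "__main__":
    model = CCTClassifier()
    logits = model(torch.randn(2, 3, 32, 32))     # two 32x32 RGB images
    print(logits.shape)                           # torch.Size([2, 7])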