| dc.description.abstract |
Breast cancer remains one of the leading causes of mortality among women
worldwide. Early and accurate detection is essential for effective treatment
and improved survival rates. This thesis presents a robust and efficient
deep learning-based breast cancer detection system that integrates Vision
Transformer (ViT), ResNet, and ResViT models into a hybrid ensemble
framework. While CNNs like ResNet are effective in capturing local image
features, they often fail to represent long-range dependencies. Conversely,
ViTs can capture both local and global features but are
computationally expensive. The proposed ensemble combines the strengths
of these architectures to improve classification accuracy while maintaining
deployment efficiency. The dataset undergoes preprocessing steps such as
DPI adjustment, resizing, normalization, and contrast enhancement to
improve model input quality. To address class imbalance, data
augmentation and class-weighted loss functions are applied. The trained
model is converted to TensorFlow Lite format and deployed in a Flutter-based
mobile application, enabling real-time, offline diagnosis.
Experimental results show that ResNet achieved 82% test accuracy and
0.90 AUC, ViT reached 94% test accuracy with 0.94 AUC, and the ResViT
model outperformed both with 97% test accuracy and 0.99 AUC. These
findings highlight the effectiveness of the proposed hybrid model in breast
cancer classification tasks. By combining high performance with mobile
accessibility, the system offers a practical and scalable solution for early
detection in low-resource clinical environments, contributing significantly
to mobile health innovation. |
en_US |