| dc.description.abstract |
Timely detection of road surface damage plays a crucial role in maintaining
transportation safety and reducing infrastructure repair costs. Traditional road
inspection methods are time-consuming, labor-intensive, and often lack
precision, particularly in regions with limited resources. This research proposes
a lightweight, real-time road damage detection system that integrates deep learning-based object detection models into a mobile application. The study
evaluates and compares three recent YOLO models (YOLOv9s, YOLOv10s, and
YOLOv12s) trained on a custom-annotated dataset of road surface images. Each
model is assessed based on detection accuracy (mAP50 and mAP50-95),
computational complexity (GFLOPs), and inference speed. Among the three,
YOLOv9s demonstrated the best overall performance, achieving 88.2% mAP50
and 52.8% mAP50-95 with an inference time of 9.8 ms at 26.7 GFLOPs. In
contrast, YOLOv10s and YOLOv12s achieved lower accuracy but offered
faster inference times of 6.9 ms and 4.3 ms, respectively, with significantly
reduced computational loads. Based on the evaluation, YOLOv9s was selected
as the optimal model and exported in TensorFlow Lite format (.tflite) for
integration into a Flutter-based Android mobile application. The final system
enables users to detect road damage in real time directly from their
smartphones, with results displayed instantly and without the need for cloud
processing. The proposed solution bridges the gap between academic model
development and practical deployment, offering a scalable, cost-effective tool for
road maintenance in both urban and rural environments. By ensuring low
latency, high detection accuracy, and lightweight design, this study contributes
a robust framework for intelligent infrastructure monitoring and supports future
smart city applications. |
en_US |