| dc.description.abstract |
In the rapidly growing field of self-driving cars and intelligent driver-assistance systems, safety is paramount. These vehicles must be able to read a traffic sign immediately and accurately, as a human driver would. Teaching a computer to do this is difficult, however, because the real world is messy and unpredictable. In poor weather such as heavy rain or fog, vehicle cameras often struggle to see signs clearly. Signs are also hard to detect in poor lighting, as at night, and when a sign is distant, small, or partially occluded by a tree or another vehicle. The consequences of a car missing a "Stop" or "Speed Limit" sign can be dire. This is the problem the present paper addresses: our goal is to develop a robust detection system that reliably identifies traffic signs even under such challenging real-world conditions. To this end, we used a large and diverse image collection, the "Traffic and Road Signs" dataset, which consists of 10,000 real-world images spanning 29 categories of traffic signs. Our approach to this challenge is distinctive. Whereas much prior work designs entirely new and complex network architectures to achieve better results, we adopted a "data-centric" approach: our primary concern is improving the quality of the data used to train the model, not modifying the model's architecture. We employed data augmentation, deliberately altering the training images, for example by darkening them, rotating them, or adding simulated blur, to teach the model to identify signs under all conditions. This approach yields a powerful and well-generalized model. The YOLOv8-Large architecture was selected for detection.
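The augmentation idea described above can be sketched as follows. This is a minimal illustration using NumPy only, not the paper's actual pipeline; the function names, parameter values, and the choice of 90-degree rotation and a box filter are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def darken(img, factor=0.5):
    """Simulate low-light conditions by scaling pixel intensities down."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def rotate90(img, k=1):
    """Rotate the image by k * 90 degrees (a coarse stand-in for rotation jitter)."""
    return np.rot90(img, k)

def box_blur(img, size=3):
    """Simulate fog/motion blur with a simple box filter applied per channel."""
    pad = size // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (size * size)).astype(np.uint8)

# Chain several corruptions on one synthetic image, as an augmentation
# pipeline would do for each training sample.
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = box_blur(rotate90(darken(img, factor=0.4)))
print(augmented.shape)  # prints: (64, 64, 3)
```

In a real training setup each transform would typically be applied with some probability and with randomized parameters, so the model sees a different corrupted variant of each image every epoch.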
We selected this model because it is state-of-the-art and has sufficient capacity to capture complex detail. We evaluated our method with standard performance metrics: Precision, Recall, and mean Average Precision (mAP). The final results were outstanding. Our optimized system achieved an mAP@0.5 of 94.18%, indicating that the method is highly accurate, and an mAP@0.5:0.95 of 81.06%, showing that its detections are also precisely localized. The system reached a precision of 97.70% and a recall of 94.11%, so it rarely makes false predictions and rarely misses a sign. We also performed ablation studies, targeted experiments confirming that our augmentation strategy, rather than other factors, was responsible for this success. These results demonstrate that the proposed data-centric solution is a reliable, efficient, and reproducible method, ready to help build safer intelligent vehicles in the coming decade. |
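The precision and recall figures quoted in the abstract follow their standard definitions over true positives, false positives, and false negatives; a minimal sketch (the counts below are hypothetical, chosen for illustration, and are not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of predicted boxes that are correct.
    Recall: fraction of ground-truth signs that are found.
    (mAP additionally averages precision over recall levels and,
    for mAP@0.5:0.95, over a range of IoU thresholds.)"""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical detection counts for one evaluation run:
p, r = precision_recall(tp=940, fp=22, fn=59)
print(round(p, 4), round(r, 4))  # prints: 0.9771 0.9409
```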
en_US |