ICCK Transactions on Intelligent Systematics, Volume 3, Issue 1, 2026: 32-54

Free to Read | Research Article | 03 March 2026
Real-Time Detection of Road Anomalies for Integration in Rider Assistance Systems
1 University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
2 INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
* Corresponding Authors: Tiago Silva, [email protected] ; João Silva, [email protected] ; António Sousa, [email protected] ; Vítor Filipe, [email protected]
ARK: ark:/57805/tis.2025.418469
Received: 29 September 2025, Accepted: 19 November 2025, Published: 03 March 2026  
Abstract
Road safety has become an increasingly important concern, and the integration of Advanced Rider Assistance Systems (ARAS) and Advanced Driver Assistance Systems (ADAS) plays a crucial role in preventing accidents. This work proposes a computer vision pipeline to automatically detect hazardous road anomalies—loose gravel, potholes, and puddles—from a motorcycle-mounted camera, targeting real-time operation on embedded edge devices. A hybrid dataset of 28,764 annotated images was created by combining real-world photos, Blender-rendered synthetic scenes, and AI-generated images to improve diversity and coverage. Multiple state-of-the-art object detectors were trained and benchmarked, including the YOLOv5/7/11/12 families and the transformer-based RT-DETR architecture. While the RT-DETR model achieved the highest precision overall, its computational complexity and heavy resource requirements limited its suitability for real-time deployment on low-cost embedded platforms. Conversely, the YOLOv11n model demonstrated the best accuracy–efficiency trade-off, reaching mAP@0.5 = 0.872 at 320×320 resolution with 0.045 s/frame on a Jetson Nano, while lighter variants remained viable on Raspberry Pi boards. Across classes, gravel was the most reliably detected, and operating points around a confidence threshold of τ ≈ 0.31 yielded balanced F1 scores up to 0.82. Although the results show that automatic road-condition monitoring on affordable hardware is feasible, the prototype has not yet undergone on-road field trials and does not yet include an integrated rider alert module or an energy-use assessment; these gaps define the immediate roadmap for deployment.
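For context, the balanced F1 score cited in the abstract is the harmonic mean of precision and recall at a chosen confidence threshold. A minimal sketch (the precision and recall values below are hypothetical, chosen only to show how an F1 near the reported 0.82 can arise at a single operating point):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall at one operating point."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical precision/recall at a confidence threshold near 0.31;
# these numbers are illustrative, not taken from the paper's results.
p, r = 0.85, 0.79
print(round(f1_score(p, r), 2))  # → 0.82
```

Sweeping the detector's confidence threshold trades precision against recall; the paper's τ ≈ 0.31 is the point where this harmonic mean peaks.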

Graphical Abstract
Real-Time Detection of Road Anomalies for Integration in Rider Assistance Systems

Keywords
road monitoring
computer vision
object detection
edge devices
advanced rider assistance systems

Data Availability Statement
Data will be made available on request.

Funding
The study was developed under the project A-MoVeR – “Mobilizing Agenda for the Development of Products & Systems towards an Intelligent and Green Mobility”, operation no. 02/C05-i01.01/2022.PC646908627-00000069, approved under call no. 02/C05-i01/2022 – Mobilizing Agendas for Business Innovation, financed by European funds provided to Portugal by the Recovery and Resilience Plan (RRP), within the scope of the European Recovery and Resilience Facility (RRF), framed in NextGenerationEU, for the period 2021–2026.

Conflicts of Interest
The authors declare no conflicts of interest.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Silva, T., Silva, J., Sousa, A., & Filipe, V. (2026). Real-Time Detection of Road Anomalies for Integration in Rider Assistance Systems. ICCK Transactions on Intelligent Systematics, 3(1), 32–54. https://doi.org/10.62762/TIS.2025.418469
Export Citation
RIS Format
Compatible with EndNote, Zotero, Mendeley, and other reference managers
RIS format data for reference managers
TY  - JOUR
AU  - Silva, Tiago
AU  - Silva, João
AU  - Sousa, António
AU  - Filipe, Vítor
PY  - 2026
DA  - 2026/03/03
TI  - Real-Time Detection of Road Anomalies for Integration in Rider Assistance Systems
JO  - ICCK Transactions on Intelligent Systematics
T2  - ICCK Transactions on Intelligent Systematics
JF  - ICCK Transactions on Intelligent Systematics
VL  - 3
IS  - 1
SP  - 32
EP  - 54
DO  - 10.62762/TIS.2025.418469
UR  - https://www.icck.org/article/abs/TIS.2025.418469
KW  - road monitoring
KW  - computer vision
KW  - object detection
KW  - edge devices
KW  - advanced rider assistance systems
AB  - Road safety has become an increasingly important concern, and the integration of Advanced Rider Assistance Systems (ARAS) and Advanced Driver Assistance Systems (ADAS) plays a crucial role in preventing accidents. This work proposes a computer vision pipeline to automatically detect hazardous road anomalies—loose gravel, potholes, and puddles—from a motorcycle-mounted camera, targeting real-time operation on embedded edge devices. A hybrid dataset of 28,764 annotated images was created by combining real-world photos, Blender-rendered synthetic scenes, and AI-generated images to improve diversity and coverage. Multiple state-of-the-art object detectors were trained and benchmarked, including the YOLOv5/7/11/12 families and the transformer-based RT-DETR architecture. While the RT-DETR model achieved the highest precision overall, its computational complexity and heavy resource requirements limited its suitability for real-time deployment on low-cost embedded platforms. Conversely, the YOLOv11n model demonstrated the best accuracy–efficiency trade-off, reaching mAP@0.5 = 0.872 at 320×320 resolution with 0.045 s/frame on a Jetson Nano, while lighter variants remained viable on Raspberry Pi boards. Across classes, gravel was the most reliably detected, and operating points around a confidence threshold of τ ≈ 0.31 yielded balanced F1 scores up to 0.82. Although the results show that automatic road-condition monitoring on affordable hardware is feasible, the prototype has not yet undergone on-road field trials and does not yet include an integrated rider alert module or an energy-use assessment; these gaps define the immediate roadmap for deployment.
SN  - 3068-5079
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
Compatible with LaTeX, BibTeX, and other reference managers
BibTeX format data for LaTeX and reference managers
@article{Silva2026RealTime,
  author = {Tiago Silva and João Silva and António Sousa and Vítor Filipe},
  title = {Real-Time Detection of Road Anomalies for Integration in Rider Assistance Systems},
  journal = {ICCK Transactions on Intelligent Systematics},
  year = {2026},
  volume = {3},
  number = {1},
  pages = {32-54},
  doi = {10.62762/TIS.2025.418469},
  url = {https://www.icck.org/article/abs/TIS.2025.418469},
  abstract = {Road safety has become an increasingly important concern, and the integration of Advanced Rider Assistance Systems (ARAS) and Advanced Driver Assistance Systems (ADAS) plays a crucial role in preventing accidents. This work proposes a computer vision pipeline to automatically detect hazardous road anomalies—loose gravel, potholes, and puddles—from a motorcycle-mounted camera, targeting real-time operation on embedded edge devices. A hybrid dataset of 28,764 annotated images was created by combining real-world photos, Blender-rendered synthetic scenes, and AI-generated images to improve diversity and coverage. Multiple state-of-the-art object detectors were trained and benchmarked, including the YOLOv5/7/11/12 families and the transformer-based RT-DETR architecture. While the RT-DETR model achieved the highest precision overall, its computational complexity and heavy resource requirements limited its suitability for real-time deployment on low-cost embedded platforms. Conversely, the YOLOv11n model demonstrated the best accuracy–efficiency trade-off, reaching mAP@0.5 = 0.872 at 320×320 resolution with 0.045 s/frame on a Jetson Nano, while lighter variants remained viable on Raspberry Pi boards. Across classes, gravel was the most reliably detected, and operating points around a confidence threshold of τ ≈ 0.31 yielded balanced F1 scores up to 0.82. Although the results show that automatic road-condition monitoring on affordable hardware is feasible, the prototype has not yet undergone on-road field trials and does not yet include an integrated rider alert module or an energy-use assessment; these gaps define the immediate roadmap for deployment.},
  keywords = {road monitoring, computer vision, object detection, edge devices, advanced rider assistance systems},
  issn = {3068-5079},
  publisher = {Institute of Central Computation and Knowledge}
}

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 39 | PDF Downloads: 10

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Intelligent Systematics
ISSN: 3068-5079 (Online) | ISSN: 3069-003X (Print)
Email: [email protected]
Portico
All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/