ICCK Transactions on Intelligent Systematics, Volume 3, Issue 1, 2026: 55-69

Free to Read | Research Article | 05 March 2026
Fatigue Driving Detection via Multi-Head Transformer with Adaptive Weighted Loss
Ling Huang 1, Shifeng Li 1, Yaxin Man 1, Xiaoyan Wang 1,*, Xiu Tang 1 and Rendong Ji 1
1 Faculty of Electronic Information Engineering, Huaiyin Institute of Technology, Huai’an 223003, China
* Corresponding Author: Xiaoyan Wang, [email protected]
ARK: ark:/57805/tis.2025.633754
Received: 25 September 2025, Accepted: 09 November 2025, Published: 05 March 2026  
Abstract
Fatigue driving is widely recognized as one of the major factors contributing to traffic accidents, posing not only a serious threat to road safety but also potential risks to drivers’ health and public security. With the rapid development of modern transportation, efficiently and accurately detecting and warning against driver fatigue has become a critical issue in intelligent transportation. To address this issue, this paper proposes a novel fatigue driving detection method based on a Multi-Head Transformer with an Adaptive Weighted Loss. In the proposed framework, the YOLOv8 model is first employed to locate key facial regions of the driver in real-time video streams, ensuring both high-speed processing and robustness. A Multi-Head Transformer model is then introduced to capture the temporal dependencies and feature correlations among facial landmarks, enabling a more comprehensive characterization of fatigue-related behaviors such as eye closure, blinking, and yawning. In addition, an Adaptive Weighted Loss function is designed to dynamically balance the contributions of multiple fatigue features during training, alleviating class imbalance and enhancing the model’s generalization capability. Experimental results demonstrate that the proposed method maintains stable and superior detection accuracy even under complex, long-duration driving conditions. Compared with traditional approaches, the system improves fatigue detection accuracy by 7.2%, reaching 95.5%, while satisfying real-time requirements. In summary, this study presents an intelligent, adaptive, and efficient fatigue driving detection framework that provides a reliable theoretical foundation for intelligent warning systems and holds significant value for enhancing road traffic safety.
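The abstract does not specify the exact form of the Adaptive Weighted Loss. As a hedged illustration only, one common way to "dynamically balance the contributions of multiple fatigue features" (eye closure, blinking, yawning) is to weight each per-feature loss inversely to its running-average magnitude, so that no single cue dominates training. The sketch below assumes that inverse-magnitude rule; the class name, the momentum parameter, and the update rule are illustrative and not taken from the paper.

```python
# Minimal pure-Python sketch of an adaptive weighted multi-feature loss.
# ASSUMPTION: each feature's weight is set inversely proportional to a
# running (exponential moving) average of its loss, then normalized to
# sum to 1 - one plausible reading of "adaptive weighting"; the paper's
# actual formulation may differ.

class AdaptiveWeightedLoss:
    def __init__(self, n_features, momentum=0.9, eps=1e-8):
        self.running = [1.0] * n_features  # running average of each per-feature loss
        self.momentum = momentum
        self.eps = eps

    def __call__(self, per_feature_losses):
        # Update the exponential moving average of each per-feature loss.
        m = self.momentum
        self.running = [m * r + (1 - m) * l
                        for r, l in zip(self.running, per_feature_losses)]
        # Inverse-magnitude weights, normalized to sum to 1.
        inv = [1.0 / (r + self.eps) for r in self.running]
        total_inv = sum(inv)
        weights = [w / total_inv for w in inv]
        # Weighted sum of the per-feature losses.
        total = sum(w * l for w, l in zip(weights, per_feature_losses))
        return total, weights


# Example: losses for eye-closure, blinking, and yawning branches.
loss_fn = AdaptiveWeightedLoss(3)
total, weights = loss_fn([0.9, 0.3, 0.1])
```

Under this rule, a feature whose loss is persistently small (e.g. a rare class that is easy to over-ignore) receives a larger weight on the next step, which is one standard way to counter the class-imbalance effect the abstract mentions.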

Graphical Abstract
Fatigue Driving Detection via Multi-Head Transformer with Adaptive Weighted Loss

Keywords
fatigue driving detection
facial keypoints
transformer
adaptive weighted loss

Data Availability Statement
The datasets used in this study are publicly available at https://ieee-dataport.org/open-access/yawdd-yawning-detection-dataset, and https://www.nexdata.ai/datasets/computervision/1588. The self-collected dataset consists of driving video recordings from five drivers, all of whom provided informed consent. Due to privacy protection concerns, this dataset cannot be publicly shared.

Funding
This work was supported in part by the Natural Science Research Project of Higher Education Institutions in Jiangsu Province of China under Grant 24KJA510002; in part by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant SJCX25_2194; in part by the Postgraduate Science and Technology Innovation Program of Huaiyin Institute of Technology under Grant HGYK202516.

Conflicts of Interest
The authors declare no conflicts of interest.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Huang, L., Li, S., Man, Y., Wang, X., Tang, X., & Ji, R. (2026). Fatigue Driving Detection via Multi-Head Transformer with Adaptive Weighted Loss. ICCK Transactions on Intelligent Systematics, 3(1), 55–69. https://doi.org/10.62762/TIS.2025.633754

Article Metrics
Citations: Crossref 0 | Scopus 0 | Web of Science 0
Article Access Statistics: Views 27 | PDF Downloads 7

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Intelligent Systematics

ISSN: 3068-5079 (Online) | ISSN: 3069-003X (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/