ICCK Transactions on Sensing, Communication, and Control, Volume 2, Issue 4, 2025: 250-262

Free to Read | Research Article | 18 December 2025
LAE-GSDetect: A Lightweight Fusion Framework for Robust Small-Face Detection in Low-Light Conditions
Bo Chang, Lichao Tang, Chen Hu, Mengxiao Zhu, Huijie Dou and Kharudin Bin Ali
1 Faculty of Electronic Engineering, Huaiyin Institute of Technology, Huaian 223003, China
2 Faculty of Engineering Technology (Electrical and Automation), University College TATI, Kemaman 24000, Malaysia
* Corresponding Authors: Bo Chang, [email protected]; Lichao Tang, [email protected]
ARK: ark:/57805/tscc.2025.972040
Received: 30 September 2025, Accepted: 26 November 2025, Published: 18 December 2025  
Abstract
To address the insufficient accuracy of face detection and the missed small targets that arise under low-light conditions, this paper proposes a detection scheme that combines image preprocessing with detection-model optimization. First, Zero-DCE low-light enhancement is introduced to adaptively restore image detail and contrast, providing high-quality inputs for subsequent detection. Second, YOLOv11n is improved as follows: a P2 small-target detection layer is added and the P5 layer is removed, addressing the original model's weakness on small targets while streamlining computation to balance model complexity and efficiency; the P2 upsampling is replaced with DySample dynamic upsampling, which adaptively adjusts the sampling strategy according to the features to improve the accuracy of feature fusion; a lightweight adaptive extraction (LAE) module is incorporated to reduce the parameter count and computational cost; finally, the detection head is replaced with GSDetect to maintain accuracy while reducing computational overhead. Experimental results show that the improved model reaches an mAP50 of 58.7%, a 10.2% increase over the original model. Although the computational complexity rises to 7.5 GFLOPs, the parameter count is reduced by 45%, offering a better-balanced solution for face detection in low-light environments.
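
As a rough, hypothetical sketch of the two-stage pipeline described in the abstract (not the authors' released code), the Python snippet below applies a Zero-DCE-style iterative brightening curve, LE(x) = x + a·x·(1−x), and then runs a stock Ultralytics YOLO11n model on the enhanced frame. The fixed scalar curve parameter `alpha`, the input filename, and the use of pretrained `yolo11n.pt` weights are all assumptions for illustration: the real Zero-DCE learns per-pixel curve maps with a small network, and the paper's detector further adds a P2 head, removes the P5 head, and swaps in DySample, LAE, and GSDetect.

```python
# Hypothetical sketch of the enhance-then-detect pipeline; parameters and file
# names are illustrative, not the paper's implementation.
import cv2
import numpy as np
from ultralytics import YOLO  # pip install ultralytics


def zero_dce_style_enhance(img_bgr: np.ndarray, alpha: float = 0.3, iterations: int = 8) -> np.ndarray:
    """Iteratively apply the Zero-DCE light-enhancement curve LE(x) = x + a*x*(1-x).

    A single scalar `alpha` stands in for the per-pixel curve maps that Zero-DCE
    actually learns from unpaired low-light data.
    """
    x = img_bgr.astype(np.float32) / 255.0
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)


# Stand-in detector: stock YOLO11n weights. The paper's model would instead load a
# custom architecture with a P2 head added, the P5 head removed, DySample upsampling,
# the LAE module, and a GSDetect head.
model = YOLO("yolo11n.pt")

frame = cv2.imread("dark_face_scene.jpg")   # hypothetical low-light input image
enhanced = zero_dce_style_enhance(frame)    # stage 1: low-light enhancement
results = model(enhanced)                   # stage 2: detection on the enhanced frame
cv2.imwrite("detections.jpg", results[0].plot())  # save the annotated result
```

Keeping the enhancement as a separate preprocessing stage, as the abstract describes, means the same brightened frames can be fed to any downstream detector without retraining the enhancement step.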

Keywords
low-light
face detection
image enhancement
lightweight
deep learning

Data Availability Statement
Data will be made available on request.

Funding
This work was supported in part by the National Natural Science Foundation of China under Grant 62205120; in part by the Key Research and Development Program of Tianjin, China under Grant 22YFZCSN00210.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Chang, B., Tang, L., Hu, C., Zhu, M., Dou, H. & Ali, K. B. (2025). LAE-GSDetect: A Lightweight Fusion Framework for Robust Small-Face Detection in Low-Light Conditions. ICCK Transactions on Sensing, Communication, and Control, 2(4), 250–262. https://doi.org/10.62762/TSCC.2025.972040

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 180 | PDF Downloads: 48

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Sensing, Communication, and Control
ISSN: 3068-9287 (Online) | ISSN: 3068-9279 (Print)
Email: [email protected]
Portico
All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/