Academic Editor
You He
Tsinghua University, China
Chinese Journal of Information Fusion, Volume 2, Issue 3, 2025: 237-252

Open Access | Research Article | 25 September 2025
Lightweight Mura Defect Detection via Semantic Interscale Integration and Neighbor Fusion
1 School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, China
* Corresponding Author: Huaixin Chen, [email protected]
Received: 25 April 2025, Accepted: 23 July 2025, Published: 25 September 2025  
Abstract
Considering the large-area distribution, smooth brightness gradients, and blurred boundaries of Mura defects in real industrial scenarios, as well as the difficulty existing methods have in balancing accuracy and efficiency, we propose a lightweight deep-learning detection method for large-area Mura defects, termed SIFNet. SIFNet adopts a classical encoder-decoder architecture with MobileNet-V2 as the backbone. On top of this, we design a Graph-based Semantic Interscale-fusion Block (GSIB) that integrates a Semantic Fluid Aggregation Module (SFAM) and a Semantic Graph Inference Module (SGIM) to collaboratively extract high-level semantic features across multiple scales and establish abstract semantic representations for accurately localizing large-area Mura defects. Specifically, SFAM leverages a global attention mechanism to extract cross-spatial semantic flows, guiding the model to focus on potential brightness-anomaly regions in the image, while SGIM explicitly models the semantic relationships between multi-scale features using graph convolution, enhancing the model's ability to interpret regions with blurred boundaries and ambiguous structures. To further improve the model's sensitivity to edges in regions with smooth brightness transitions, we introduce a NeighborFusion Edge Enhancement Module (NEEM). NEEM combines depthwise separable convolutions with a spatial attention mechanism and introduces a CrossNorm-based feature-alignment strategy to enhance spatial collaboration across feature layers. An additional edge enhancement mechanism significantly improves the model's ability to delineate blurred Mura defect boundaries while keeping computational cost low.
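The SGIM described above relates multi-scale features through graph convolution. As a rough, framework-agnostic illustration of the underlying message-passing step only (the toy graph, feature values, and function name below are invented for illustration and are not taken from the paper), a single symmetric-normalized graph-convolution layer can be sketched in plain Python:

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: relu(A_hat @ feats @ W),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    n = len(adj)
    # Add self-loops so each node keeps its own features.
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric normalization: A_hat = D^-1/2 (A + I) D^-1/2.
    a_hat = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
             for i in range(n)]
    # Message passing: aggregate features from connected nodes.
    agg = [[sum(a_hat[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear transform followed by ReLU.
    out_dim = len(weight[0])
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(out_dim)] for i in range(n)]

# Toy example: three chain-connected "scale" nodes with 2-d features
# and an identity weight matrix, so only aggregation changes the values.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [[1.0, 0.0], [0.0, 1.0]]
out = gcn_layer(adj, feats, w)
```

This is only the generic mechanism; the paper's SGIM presumably defines its own node construction and learned weights.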
Extensive quantitative and qualitative experiments on three large-area Mura defect datasets constructed in this study demonstrate that SIFNet achieves excellent detection performance with only 3.92M parameters and 6.89 GFLOPs, striking an effective balance between accuracy and efficiency, and fully meeting the demands of industrial deployment.
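The low parameter and FLOP counts reported above are consistent with the use of depthwise separable convolutions in NEEM and the MobileNet-V2 backbone. As a minimal sketch of why that factorization is cheap (the layer sizes and function names are illustrative assumptions, not figures from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 128 -> 128 channels with a 3 x 3 kernel.
std = conv_params(128, 128, 3)          # 147456 weights
sep = dw_separable_params(128, 128, 3)  # 1152 + 16384 = 17536 weights
ratio = std / sep                       # roughly 8x fewer parameters
```

The same factorization also reduces multiply-accumulate operations by a similar factor, which is how architectures built on it stay within a few GFLOPs.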

Graphical Abstract
Lightweight Mura Defect Detection via Semantic Interscale Integration and Neighbor Fusion

Keywords
mura defect detection
lightweight network
interscale-fusion
neighbor fusion

Data Availability Statement
Data will be made available on request.

Funding
This work was supported by the YangFan Project of Guangdong Province of China under Grant 2020-05.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.

Cite This Article
APA Style
Wang, Z., He, J., & Chen, H. (2025). Lightweight Mura Defect Detection via Semantic Interscale Integration and Neighbor Fusion. Chinese Journal of Information Fusion, 2(3), 237–252. https://doi.org/10.62762/CJIF.2025.864944

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 85 | PDF Downloads: 11

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2025 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/