Academic Editor: Jing Na, Kunming University of Science and Technology, China
ICCK Transactions on Intelligent Systematics, Volume 3, Issue 1, 2026: 21-31

Free to Read | Research Article | 19 February 2026
SemanticBlur: Semantic-Aware Attention Network with Multi-Scale Feature Refinement for Defocus Blur Detection
1 Ilmenau University of Technology, Ilmenau 98693, Germany
2 Democritus University of Thrace, Xanthi 67100, Greece
3 Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
4 BRAINS Institute Peshawar, Peshawar 25000, Pakistan
* Corresponding Author: Faryal Zahoor, [email protected]
ARK: ark:/57805/tis.2025.879161
Received: 05 September 2025, Accepted: 23 October 2025, Published: 19 February 2026  
Abstract
Defocus blur detection is essential for computational photography applications, but existing methods struggle with accurate blur localization and boundary preservation. We propose SemanticBlur, a deep learning framework that integrates semantic understanding with attention mechanisms for robust defocus blur detection. Our semantic-aware attention module combines channel attention, spatial attention, and semantic enhancement to leverage high-level features for low-level feature refinement. The architecture employs a modified ResNet-50 backbone with dilated convolutions to preserve spatial resolution while expanding receptive fields, coupled with a feature pyramid decoder that uses learnable fusion weights for adaptive multi-scale integration. A combined loss function balancing binary cross-entropy and structural similarity achieves both pixel-wise accuracy and structural coherence. Extensive experiments on four benchmark datasets (CUHK, DUT, CTCUG, EBD) demonstrate state-of-the-art (SOTA) performance, with ablation studies confirming that semantic enhancement provides the most significant gains while maintaining computational efficiency. SemanticBlur generates visually coherent detection maps with sharp boundaries, validating its practical applicability for real-world deployment.
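
To make the description above concrete, the following is a minimal, illustrative PyTorch sketch of the components named in the abstract: a semantic-aware attention block (channel attention, spatial attention, and semantic enhancement), a learnable multi-scale fusion step, and a BCE + SSIM loss. This is not the authors' released implementation; all module layouts, channel widths, the SSIM window, and the loss weights are assumptions made for illustration.

# Illustrative sketch only (not the authors' released code): module layouts, channel
# widths, the SSIM window, and all loss weights below are assumptions made to mirror
# the components named in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAwareAttention(nn.Module):
    """Refines low-level features with channel and spatial attention, then adds
    semantic guidance projected from high-level features."""

    def __init__(self, low_ch: int, high_ch: int, reduction: int = 4):
        super().__init__()
        # Channel attention (squeeze-and-excitation style gating; an assumption).
        self.channel_fc = nn.Sequential(
            nn.Linear(low_ch, low_ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(low_ch // reduction, low_ch), nn.Sigmoid(),
        )
        # Spatial attention over channel-pooled statistics (CBAM-style; an assumption).
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Semantic enhancement: 1x1 projection of high-level features onto low-level channels.
        self.semantic_proj = nn.Conv2d(high_ch, low_ch, kernel_size=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        b, c, h, w = low.shape
        ca = self.channel_fc(low.mean(dim=(2, 3))).view(b, c, 1, 1)   # channel weights
        x = low * ca
        sa = torch.sigmoid(self.spatial_conv(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        x = x * sa                                                     # spatial gating
        sem = F.interpolate(self.semantic_proj(high), size=(h, w),
                            mode="bilinear", align_corners=False)
        return x + torch.sigmoid(sem) * x                              # semantic enhancement


class LearnableFusion(nn.Module):
    """Fuses same-channel decoder features from several scales with learnable,
    softmax-normalised weights (the exact fusion rule is an assumption)."""

    def __init__(self, num_scales: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_scales))

    def forward(self, feats):
        target = feats[0].shape[-2:]
        w = torch.softmax(self.weights, dim=0)
        resized = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                   for f in feats]
        return sum(wi * fi for wi, fi in zip(w, resized))


def combined_loss(pred, target, bce_weight=1.0, ssim_weight=1.0):
    """BCE + (1 - SSIM) on single-channel (B, 1, H, W) probability maps.
    The balance weights and the 11x11 uniform SSIM window are placeholders."""
    bce = F.binary_cross_entropy(pred, target)
    window = torch.ones(1, 1, 11, 11, device=pred.device) / 121.0
    mu_p = F.conv2d(pred, window, padding=5)
    mu_t = F.conv2d(target, window, padding=5)
    var_p = F.conv2d(pred * pred, window, padding=5) - mu_p ** 2
    var_t = F.conv2d(target * target, window, padding=5) - mu_t ** 2
    cov = F.conv2d(pred * target, window, padding=5) - mu_p * mu_t
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return bce_weight * bce + ssim_weight * (1.0 - ssim.mean())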

Graphical Abstract
SemanticBlur: Semantic-Aware Attention Network with Multi-Scale Feature Refinement for Defocus Blur Detection

Keywords
blur detection
semantic segmentation
attention mechanisms
deep neural networks
image analysis
multi-scale processing

Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Haider, A. U., Gazis, A., & Zahoor, F. (2026). SemanticBlur: Semantic-Aware Attention Network with Multi-Scale Feature Refinement for Defocus Blur Detection. ICCK Transactions on Intelligent Systematics, 3(1), 21–31. https://doi.org/10.62762/TIS.2025.879161
Export Citation
RIS Format
Compatible with EndNote, Zotero, Mendeley, and other reference managers
TY  - JOUR
AU  - Haider, Asad Ullah
AU  - Gazis, Alexandros
AU  - Zahoor, Faryal
PY  - 2026
DA  - 2026/02/19
TI  - SemanticBlur: Semantic-Aware Attention Network with Multi-Scale Feature Refinement for Defocus Blur Detection
JO  - ICCK Transactions on Intelligent Systematics
T2  - ICCK Transactions on Intelligent Systematics
JF  - ICCK Transactions on Intelligent Systematics
VL  - 3
IS  - 1
SP  - 21
EP  - 31
DO  - 10.62762/TIS.2025.879161
UR  - https://www.icck.org/article/abs/TIS.2025.879161
KW  - blur detection
KW  - semantic segmentation
KW  - attention mechanisms
KW  - deep neural networks
KW  - image analysis
KW  - multi-scale processing
AB  - Defocus blur detection is essential for computational photography applications, but existing methods struggle with accurate blur localization and boundary preservation. We propose SemanticBlur, a deep learning framework which integrates semantic understanding with attention mechanisms for robust defocus blur detection. Our semantic-aware attention module combines channel attention, spatial attention, and semantic enhancement to leverage high-level features for low-level feature refinement. The architecture employs a modified ResNet-50 backbone with dilated convolutions that preserves spatial resolution while expanding receptive fields, coupled with a feature pyramid decoder using learnable fusion weights for adaptive multi-scale integration. A combined loss function balancing binary cross-entropy and structural similarity achieves both pixel-wise accuracy and structural coherence. Extensive experiments on four benchmark datasets (CUHK, DUT, CTCUG, EBD) demonstrate state-of-the-art (SOTA) performance, with ablation studies confirming that semantic enhancement provides the most significant gains while maintaining computational efficiency. SemanticBlur generates visually coherent detection maps with sharp boundaries, validating its practical applicability for real-world deployment.
SN  - 3068-5079
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
Compatible with LaTeX, BibTeX, and other reference managers
@article{Haider2026SemanticBl,
  author = {Asad Ullah Haider and Alexandros Gazis and Faryal Zahoor},
  title = {SemanticBlur: Semantic-Aware Attention Network with Multi-Scale Feature Refinement for Defocus Blur Detection},
  journal = {ICCK Transactions on Intelligent Systematics},
  year = {2026},
  volume = {3},
  number = {1},
  pages = {21-31},
  doi = {10.62762/TIS.2025.879161},
  url = {https://www.icck.org/article/abs/TIS.2025.879161},
  abstract = {Defocus blur detection is essential for computational photography applications, but existing methods struggle with accurate blur localization and boundary preservation. We propose SemanticBlur, a deep learning framework which integrates semantic understanding with attention mechanisms for robust defocus blur detection. Our semantic-aware attention module combines channel attention, spatial attention, and semantic enhancement to leverage high-level features for low-level feature refinement. The architecture employs a modified ResNet-50 backbone with dilated convolutions that preserves spatial resolution while expanding receptive fields, coupled with a feature pyramid decoder using learnable fusion weights for adaptive multi-scale integration. A combined loss function balancing binary cross-entropy and structural similarity achieves both pixel-wise accuracy and structural coherence. Extensive experiments on four benchmark datasets (CUHK, DUT, CTCUG, EBD) demonstrate state-of-the-art (SOTA) performance, with ablation studies confirming that semantic enhancement provides the most significant gains while maintaining computational efficiency. SemanticBlur generates visually coherent detection maps with sharp boundaries, validating its practical applicability for real-world deployment.},
  keywords = {blur detection, semantic segmentation, attention mechanisms, deep neural networks, image analysis, multi-scale processing},
  issn = {3068-5079},
  publisher = {Institute of Central Computation and Knowledge}
}

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 18 | PDF Downloads: 5

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Intelligent Systematics

ISSN: 3068-5079 (Online) | ISSN: 3069-003X (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/