ICCK Transactions on Sensing, Communication, and Control, Volume 2, Issue 4, 2025: 263-275

Free to Read | Research Article | 20 December 2025
Strip Pooling Coordinate Attention with Directional Learning for Intelligent Fire Recognition in Smart Cities
Asad Ullah Haider *, Shadab Khan, Muhammad Jamal Ahmed and Taimur Ali Khan
1 Ilmenau University of Technology, Ilmenau 98693, Germany
2 Department of Software Convergence, Sejong University, Seoul 05006, South Korea
3 Departamento de Sistemas Informáticos, Universidad Politécnica de Madrid, Madrid 28031, Spain
4 Department of Information Technology, Saudi Media Systems, Riyadh, Saudi Arabia
* Corresponding Author: Asad Ullah Haider, [email protected]
ARK: ark:/57805/tscc.2025.675097
Received: 20 October 2025, Accepted: 26 November 2025, Published: 20 December 2025  
Abstract
Fire detection in smart cities requires intelligent visual recognition systems capable of distinguishing fire from visually similar phenomena while maintaining real-time performance under diverse environmental conditions. Existing deep learning approaches employ attention mechanisms that aggregate spatial information isotropically, failing to capture the inherently directional characteristics of fire and smoke patterns. This paper presents DirFireNet, a novel fire detection framework that exploits directional fire dynamics through Strip Pooling Coordinate Attention (SPCA). Unlike conventional attention mechanisms, DirFireNet explicitly models vertical flame propagation and horizontal smoke dispersion via directional strip pooling operations that decompose features along horizontal and vertical axes. The framework integrates a progressive top-down fusion pathway with attention-guided weighting that synthesizes multi-scale representations from coarse to fine resolutions. Furthermore, dual global pooling captures complementary scene statistics: holistic fire intensity and salient flame regions. Built upon the lightweight EfficientNetV2-S backbone, DirFireNet achieves superior accuracy while maintaining computational efficiency. Extensive experiments on the FD and BoWFire benchmarks demonstrate state-of-the-art (SOTA) performance. Comprehensive ablation studies confirm that directional attention drives the accuracy gains, validating that the attention mechanism provides strong inductive biases for intelligent fire recognition in smart city applications.
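Note on the mechanism: the abstract describes SPCA as decomposing features into vertical and horizontal strips, in the spirit of coordinate attention [39] combined with strip pooling, plus a dual (average + max) global pooling head. Since the full text is not reproduced on this page, the PyTorch sketch below is only one plausible reading of that description; the class names, reduction ratio, and activation choices are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class StripPoolingCoordinateAttention(nn.Module):
    """Hypothetical sketch of the SPCA idea: directional strip pooling
    along the vertical and horizontal axes, followed by coordinate-
    attention-style reweighting (cf. Hou et al., CVPR 2021 [39])."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        # Strip pooling collapses one spatial axis at a time, so each
        # descriptor stays fully resolved along the other (directional) axis.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1): vertical strips
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W): horizontal strips
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.SiLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Vertical context (flame-like propagation) and horizontal context
        # (smoke-like dispersion), kept separate instead of pooled isotropically.
        x_h = self.pool_h(x)                           # (B, C, H, 1)
        x_w = self.pool_w(x).transpose(2, 3)           # (B, C, W, 1)
        y = self.reduce(torch.cat([x_h, x_w], dim=2))  # shared 1x1 transform
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                  # (B, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.transpose(2, 3)))  # (B, C, 1, W)
        return x * a_h * a_w                           # anisotropic reweighting


class DualGlobalPoolHead(nn.Module):
    """Hypothetical sketch of the dual global pooling head: average pooling
    for holistic fire intensity, max pooling for salient flame regions."""

    def __init__(self, channels: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(2 * channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=(2, 3))  # holistic scene statistics
        mx = torch.amax(x, dim=(2, 3))   # strongest (salient) responses
        return self.fc(torch.cat([avg, mx], dim=1))


# Smoke test on a backbone-stage-sized feature map (channel count assumed).
feat = torch.randn(2, 256, 14, 14)
attn = StripPoolingCoordinateAttention(256)
logits = DualGlobalPoolHead(256)(attn(feat))
print(logits.shape)  # torch.Size([2, 2])

In a DirFireNet-like design, a module of this shape would presumably sit on the fused multi-scale features of the EfficientNetV2-S backbone, so that the map is reweighted anisotropically before classification; the exact placement and fusion weighting are described only at a high level in the abstract.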

Graphical Abstract
Strip Pooling Coordinate Attention with Directional Learning for Intelligent Fire Recognition in Smart Cities

Keywords
fire detection
directional attention
strip pooling
smart cities
anisotropic feature learning
multi-scale fusion

Data Availability Statement
Data will be made available on request.

Funding
This work received no funding.

Conflicts of Interest
Taimur Ali Khan is affiliated with the Department of Information Technology, Saudi Media Systems, Riyadh, Saudi Arabia. The authors declare that this affiliation had no influence on the study design, data collection, analysis, interpretation, or the decision to publish, and that no other competing interests exist.

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Urza, A. K., Hanberry, B. B., & Jain, T. B. (2023). Landscape-scale fuel treatment effectiveness: Lessons learned from wildland fire case studies in forests of the western United States and Great Lakes region. Fire Ecology, 19(1), 1.
  2. Tan, C., & Feng, Z. (2023). Mapping forest fire risk zones using machine learning algorithms in Hunan province, China. Sustainability, 15(7), 6292.
  3. U.S. Fire Administration. (n.d.). Residential fire estimate summaries (2014-2023). Available from: https://www.usfa.fema.gov/statistics/residential-fires/
  4. Keith, D. A., Allen, S. P., Gallagher, R. V., Mackenzie, B. D., Auld, T. D., Barrett, S., ... & Tozer, M. G. (2022). Fire-related threats and transformational change in Australian ecosystems. Global Ecology and Biogeography, 31(10), 2070-2084.
  5. Gaur, A., Singh, A., Kumar, A., Kumar, A., & Kapoor, K. (2020). Video flame and smoke based fire detection algorithms: A literature review. Fire Technology, 56(5), 1943-1980.
  6. Swain, D. L., Abatzoglou, J. T., Kolden, C., Shive, K., Kalashnikov, D. A., Singh, D., & Smith, E. (2023). Climate change is narrowing and shifting prescribed fire windows in western United States. Communications Earth & Environment, 4(1), 340.
  7. Ahmed, I., & Ledger, K. (2023). Lessons from the 2019/2020 'Black Summer bushfires' in Australia. International Journal of Disaster Risk Reduction, 96, 103947.
  8. Celik, T., Ozkaramanli, H., & Demirel, H. (2007, April). Fire pixel classification using fuzzy logic and statistical color model. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07) (Vol. 1, pp. I-1205). IEEE.
  9. Celik, T., & Demirel, H. (2009). Fire detection in video sequences using a generic color model. Fire Safety Journal, 44(2), 147-158.
  10. Frizzi, S., Kaabi, R., Bouchouicha, M., Ginoux, J. M., Moreau, E., & Fnaiech, F. (2016, October). Convolutional neural network for video fire and smoke detection. In IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society (pp. 877-882). IEEE.
  11. Mao, W., Wang, W., Dou, Z., & Li, Y. (2018). Fire recognition based on multi-channel convolutional neural network. Fire Technology, 54(2), 531-554.
  12. Chen, T. H., Wu, P. H., & Chiou, Y. C. (2004, October). An early fire-detection method based on image processing. In 2004 International Conference on Image Processing (ICIP '04) (Vol. 3, pp. 1707-1710). IEEE.
  13. Chino, D. Y., Avalhais, L. P., Rodrigues, J. F., & Traina, A. J. (2015, August). BoWFire: Detection of fire in still images by integrating pixel color and texture analysis. In 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images (pp. 95-102). IEEE.
  14. Habiboğlu, Y. H., Günay, O., & Çetin, A. E. (2012). Covariance matrix-based fire and flame detection method in video. Machine Vision and Applications, 23(6), 1103-1113.
  15. Ko, B. C., Cheong, K. H., & Nam, J. Y. (2009). Fire detection based on vision sensor and support vector machines. Fire Safety Journal, 44(3), 322-329.
  16. Lee, W., Kim, S., Lee, Y. T., Lee, H. W., & Choi, M. (2017, January). Deep neural networks for wild fire detection with unmanned aerial vehicle. In 2017 IEEE International Conference on Consumer Electronics (ICCE) (pp. 252-253). IEEE.
  17. Sharma, J., Granmo, O. C., Goodwin, M., & Fidje, J. T. (2017, August). Deep convolutional neural networks for fire detection in images. In International Conference on Engineering Applications of Neural Networks (pp. 183-193). Cham: Springer International Publishing.
  18. Dunnings, A. J., & Breckon, T. P. (2018, October). Experimentally defined convolutional neural network architecture variants for non-temporal real-time fire detection. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 1558-1562). IEEE.
  19. Ye, S., Feng, X., Zhang, T., Ma, X., Lin, S., Li, Z., ... & Wang, Y. (2019). Progressive DNN compression: A key to achieve ultra-high weight pruning and quantization rates using ADMM. arXiv preprint arXiv:1903.09769.
  20. Carreira-Perpiñán, M. Á., & Idelbayev, Y. (2018, June). "Learning-Compression" algorithms for neural net pruning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8532-8541). IEEE.
  21. Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016, September). XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision (pp. 525-542). Cham: Springer International Publishing.
  22. Wang, Z., Wang, Z., Zhang, H., & Guo, X. (2017, July). A novel fire detection approach based on CNN-SVM using TensorFlow. In International Conference on Intelligent Computing (pp. 682-693). Cham: Springer International Publishing.
  23. Wu, X., Lu, X., & Leung, H. (2017, October). An adaptive threshold deep learning method for fire and smoke detection. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 1954-1959). IEEE.
  24. Maksymiv, O., Rak, T., & Peleshko, D. (2017, February). Real-time fire detection method combining AdaBoost, LBP and convolutional neural network in video sequence. In 2017 14th International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics (CADSM) (pp. 351-353). IEEE.
  25. Shen, C., Qi, G. J., Jiang, R., Jin, Z., Yong, H., Chen, Y., & Hua, X. S. (2018). Sharp attention network via adaptive sampling for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 29(10), 3016-3027.
  26. Ullah, W., Ullah, A., Hussain, T., Khan, Z. A., & Baik, S. W. (2021). An efficient anomaly recognition framework using an attention residual LSTM in surveillance videos. Sensors, 21(8), 2811.
  27. Majid, S., Alenezi, F., Masood, S., Ahmad, M., Gündüz, E. S., & Polat, K. (2022). Attention based CNN model for fire detection and localization in real-world images. Expert Systems with Applications, 189, 116114.
  28. Yar, H., Hussain, T., Agarwal, M., Khan, Z. A., Gupta, S. K., & Baik, S. W. (2022). Optimized dual fire attention network and medium-scale fire classification benchmark. IEEE Transactions on Image Processing, 31, 6331-6343.
  29. Li, S., Yan, Q., & Liu, P. (2020). An efficient fire detection method based on multiscale feature extraction, implicit deep supervision and channel attention mechanism. IEEE Transactions on Image Processing, 29, 8467-8475.
  30. Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., & Hu, Q. (2020, June). ECA-Net: Efficient channel attention for deep convolutional neural networks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 11531-11539). IEEE.
  31. Foggia, P., Saggese, A., & Vento, M. (2015). Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Transactions on Circuits and Systems for Video Technology, 25(9), 1545-1556.
  32. Zhang, D., Han, S., Zhao, J., Zhang, Z., Qu, C., Ke, Y., & Chen, X. (2009, April). Image based forest fire detection using dynamic characteristics with artificial neural networks. In 2009 International Joint Conference on Artificial Intelligence (pp. 290-293). IEEE.
  33. Jia, Y., Zeng, Y., & Guo, H. (2025). Cascade aggregation network for accurate polyp segmentation. IET Systems Biology, 19(1), e70036.
  34. Hu, J., Shen, L., Albanie, S., Sun, G., & Wu, E. (2019). Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(8), 2011-2023.
  35. Usman, M. T., Khan, H., Rida, I., & Koo, J. (2025). Lightweight transformer-driven multi-scale trapezoidal attention network for saliency detection. Engineering Applications of Artificial Intelligence, 155, 110917.
  36. Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018, September). CBAM: Convolutional block attention module. In European Conference on Computer Vision (pp. 3-19). Cham: Springer International Publishing.
  37. Khan, H., Usman, M. T., & Koo, J. (2025). Bilateral feature fusion with hexagonal attention for robust saliency detection under uncertain environments. Information Fusion, 121, 103165.
  38. Khan, H., Usman, M. T., Rida, I., & Koo, J. (2024). Attention enhanced machine instinctive vision with human-inspired saliency detection. Image and Vision Computing, 152, 105308.
  39. Hou, Q., Zhou, D., & Feng, J. (2021, June). Coordinate attention for efficient mobile network design. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 13708-13717). IEEE.

Cite This Article
APA Style
Haider, A. U., Khan, S., Ahmed, M. J., & Khan, T. A. (2025). Strip Pooling Coordinate Attention with Directional Learning for Intelligent Fire Recognition in Smart Cities. ICCK Transactions on Sensing, Communication, and Control, 2(4), 263–275. https://doi.org/10.62762/TSCC.2025.675097
Export Citation
RIS Format
Compatible with EndNote, Zotero, Mendeley, and other reference managers
TY  - JOUR
AU  - Haider, Asad Ullah
AU  - Khan, Shadab
AU  - Ahmed, Muhammad Jamal
AU  - Khan, Taimur Ali
PY  - 2025
DA  - 2025/12/20
TI  - Strip Pooling Coordinate Attention with Directional Learning for Intelligent Fire Recognition in Smart Cities
JO  - ICCK Transactions on Sensing, Communication, and Control
T2  - ICCK Transactions on Sensing, Communication, and Control
JF  - ICCK Transactions on Sensing, Communication, and Control
VL  - 2
IS  - 4
SP  - 263
EP  - 275
DO  - 10.62762/TSCC.2025.675097
UR  - https://www.icck.org/article/abs/TSCC.2025.675097
KW  - fire detection
KW  - directional attention
KW  - strip pooling
KW  - smart cities
KW  - anisotropic feature learning
KW  - multi-scale fusion
AB  - Fire detection in smart cities requires intelligent visual recognition systems capable of distinguishing fire from visually similar phenomena while maintaining real-time performance under diverse environmental conditions. Existing deep learning approaches employ attention mechanisms that aggregate spatial information isotropically, failing to capture the inherently directional characteristics of fire and smoke patterns. This paper presents DirFireNet, a novel fire detection framework that exploits directional fire dynamics through Strip Pooling Coordinate Attention (SPCA). Unlike conventional attention mechanisms, DirFireNet explicitly models vertical flame propagation and horizontal smoke dispersion via directional strip pooling operations that decompose features along horizontal and vertical axes. The framework integrates a progressive top-down fusion pathway with attention-guided weighting that synthesizes multi-scale representations from coarse to fine resolutions. Furthermore, dual global pooling captures complementary scene statistics: holistic fire intensity and salient flame regions. Built upon the lightweight EfficientNetV2-S backbone, DirFireNet achieves superior accuracy while maintaining computational efficiency. Extensive experiments on the FD and BoWFire benchmarks demonstrate state-of-the-art (SOTA) performance. Comprehensive ablation studies confirm that directional attention drives the accuracy gains, validating that the attention mechanism provides strong inductive biases for intelligent fire recognition in smart city applications.
SN  - 3068-9287
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
Compatible with LaTeX, BibTeX, and other reference managers
@article{Haider2025Strip,
  author = {Asad Ullah Haider and Shadab Khan and Muhammad Jamal Ahmed and Taimur Ali Khan},
  title = {Strip Pooling Coordinate Attention with Directional Learning for Intelligent Fire Recognition in Smart Cities},
  journal = {ICCK Transactions on Sensing, Communication, and Control},
  year = {2025},
  volume = {2},
  number = {4},
  pages = {263-275},
  doi = {10.62762/TSCC.2025.675097},
  url = {https://www.icck.org/article/abs/TSCC.2025.675097},
  abstract = {Fire detection in smart cities requires intelligent visual recognition systems capable of distinguishing fire from visually similar phenomena while maintaining real-time performance under diverse environmental conditions. Existing deep learning approaches employ attention mechanisms that aggregate spatial information isotropically, failing to capture the inherently directional characteristics of fire and smoke patterns. This paper presents DirFireNet, a novel fire detection framework that exploits directional fire dynamics through Strip Pooling Coordinate Attention (SPCA). Unlike conventional attention mechanisms, DirFireNet explicitly models vertical flame propagation and horizontal smoke dispersion via directional strip pooling operations that decompose features along horizontal and vertical axes. The framework integrates a progressive top-down fusion pathway with attention-guided weighting that synthesizes multi-scale representations from coarse to fine resolutions. Furthermore, dual global pooling captures complementary scene statistics: holistic fire intensity and salient flame regions. Built upon the lightweight EfficientNetV2-S backbone, DirFireNet achieves superior accuracy while maintaining computational efficiency. Extensive experiments on the FD and BoWFire benchmarks demonstrate state-of-the-art (SOTA) performance. Comprehensive ablation studies confirm that directional attention drives the accuracy gains, validating that the attention mechanism provides strong inductive biases for intelligent fire recognition in smart city applications.},
  keywords = {fire detection, directional attention, strip pooling, smart cities, anisotropic feature learning, multi-scale fusion},
  issn = {3068-9287},
  publisher = {Institute of Central Computation and Knowledge}
}

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics:
Views: 206
PDF Downloads: 53

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Sensing, Communication, and Control

ISSN: 3068-9287 (Online) | ISSN: 3068-9279 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/