ICCK Transactions on Intelligent Systematics, Volume 2, Issue 2, 2025
Academic Editor
Xue-Bo JIN
Beijing Technology and Business University, China
ICCK Transactions on Intelligent Systematics, Volume 2, Issue 2, 2025: 125-136

Research Article | 25 June 2025
ColoSegNet: Visual Intelligence Driven Triple Attention Feature Fusion Network for Endoscopic Colorectal Cancer Segmentation
1 Erciyes University, Kayseri, Turkey
2 Department of Electrical and Computer Engineering, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
3 Heriot-Watt University, Edinburgh, United Kingdom
4 University of the West of Scotland, London E14 2BE, United Kingdom
5 Khyber Medical University, Peshawar, Pakistan
6 University of Haripur, Khyber Pakhtunkhwa, Pakistan
7 Northwest School of Medicine, Peshawar 25000, Pakistan
8 School of Computing, University of Eastern Finland, Joensuu 80100, Finland
* Corresponding Author: Hasnain Ali Shah, [email protected]
Received: 04 May 2025, Accepted: 20 May 2025, Published: 25 June 2025  
Abstract
Accurate segmentation of colorectal cancer (CRC) from endoscopic images is crucial for computer-aided diagnosis. Visual intelligence enhances detection precision, supporting clinical decision-making. However, current segmentation methods often struggle with accurately delineating fine-grained lesion boundaries due to limited context comprehension and inadequate attention to optimal features. Additionally, the poor fusion of multi-scale semantic cues hinders performance, especially in complex endoscopic scenarios. To address these issues, we introduce ColoSegNet, a Visual Intelligence-Driven Triple Attention Feature Fusion Network designed for high-precision CRC segmentation. Our approach begins with an analysis of backbone networks to identify optimal intermediate-level feature extraction. The architecture includes a non-local attention module to capture long-range dependencies, enhanced by channel and spatial attention mechanisms for better feature selectivity across semantic hierarchies. Multi-scale fusion strategies are used to preserve both low-level details and high-level context, enabling accurate localization under challenging conditions. Extensive evaluations across four publicly available endoscopic datasets, along with cross-dataset assessments, demonstrate the effectiveness and generalizability of ColoSegNet, outperforming state-of-the-art segmentation methods.
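The abstract names three attention mechanisms (non-local, channel, and spatial) fused across scales. The paper's exact module definitions are not reproduced on this page, so the following is only a minimal NumPy sketch of how those three generic attention branches can gate and fuse a single feature map; the function names, the summation-based fusion, and the residual in the non-local branch are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze spatial dims, then gate each channel.
    w = sigmoid(feat.mean(axis=(1, 2)))               # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # Pool across channels, then gate each spatial location.
    w = sigmoid(feat.mean(axis=0))                    # (H, W)
    return feat * w[None, :, :]

def non_local_attention(feat):
    # Self-affinity over all spatial positions: each position
    # aggregates context from every other (long-range dependencies).
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                        # (C, N)
    affinity = x.T @ x / np.sqrt(C)                   # (N, N)
    affinity = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    affinity /= affinity.sum(axis=1, keepdims=True)   # row-wise softmax
    out = x @ affinity.T                              # context per position
    return feat + out.reshape(C, H, W)                # residual connection

def triple_attention_fusion(feat):
    # Fuse the three attention branches; summation is one simple choice.
    return channel_attention(feat) + spatial_attention(feat) + non_local_attention(feat)

feat = np.random.default_rng(0).standard_normal((8, 16, 16))
fused = triple_attention_fusion(feat)
print(fused.shape)  # (8, 16, 16) -- attention preserves the feature-map shape
```

Each branch preserves the input shape, so the outputs can be summed (or concatenated and projected) without resampling; in a full encoder-decoder, such a block would typically be applied at several scales before multi-scale fusion.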

Graphical Abstract
ColoSegNet: Visual Intelligence Driven Triple Attention Feature Fusion Network for Endoscopic Colorectal Cancer Segmentation

Keywords
colorectal segmentation
visual intelligence
medical imaging
attention mechanisms
feature fusion
semantic segmentation
deep learning

Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
This study utilized only publicly available datasets (ClinicDB, ColonDB, Endoscene, ETIS) with pre-existing ethical approvals. No additional human or animal participation was involved, hence no further ethical approval was required.


Cite This Article
APA Style
Salman, T., Gazis, A., Ali, A., Khan, M. H., Ali, M., Khan, H., & Shah, H. A. (2025). ColoSegNet: Visual Intelligence Driven Triple Attention Feature Fusion Network for Endoscopic Colorectal Cancer Segmentation. ICCK Transactions on Intelligent Systematics, 2(2), 125–136. https://doi.org/10.62762/TIS.2025.385365


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Intelligent Systematics
ISSN: 3068-5079 (Online) | ISSN: pending (Print)

Email: [email protected]

Portico
All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/