Volume 3, Issue 2, ICCK Transactions on Emerging Topics in Artificial Intelligence
ICCK Transactions on Emerging Topics in Artificial Intelligence, Volume 3, Issue 2, 2026: 128-141

Open Access | Research Article | 06 March 2026
SEFF-Net: A Hybrid Feature Fusion Network for Accurate Segmentation of Breast Ultrasound Images
Tingli Su, Rui Wan, Senmao Wang*, and Yuting Bai
1 School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
2 Taiyuan Central Hospital, Taiyuan 030024, China
* Corresponding Author: Senmao Wang, [email protected]
ARK: ark:/57805/tetai.2026.494190
Received: 10 January 2026, Accepted: 04 March 2026, Published: 06 March 2026  
Abstract
Breast ultrasound imaging plays a crucial role in early breast cancer screening and diagnosis due to its noninvasive nature and cost-effectiveness. However, accurate lesion segmentation remains challenging because of severe speckle noise, low contrast, and blurred tumor boundaries. To address these issues, this paper proposes SEFF-Net, a novel edge-aware feature fusion network with a U-shaped encoder–decoder architecture that captures multi-level semantic representations for the breast ultrasound image segmentation task. To enhance boundary perception, a Self-learning Edge Enhancement Module is embedded in the shallow encoding stages, while a Spatial Feature Fusion Module is introduced to effectively integrate multi-scale features by leveraging spatial context, thereby achieving a better balance between low-level spatial details and high-level semantic information. To further alleviate the class imbalance between foreground and background regions and to improve boundary learning, a novel joint loss function is designed by combining region-based consistency constraints with boundary-sensitive supervision. This optimization strategy reinforces contour awareness while maintaining overall segmentation accuracy. Experimental results demonstrate that SEFF-Net consistently outperforms state-of-the-art segmentation methods across multiple evaluation metrics, including the Dice coefficient, IoU, and boundary-related measures. Overall, SEFF-Net provides an effective and reliable solution for accurate breast ultrasound image segmentation, showing promising potential for clinical computer-aided diagnosis systems.
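The joint loss described above (a region-based consistency term combined with boundary-sensitive supervision) could be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the Dice-style region term, the 4-neighbour contour extraction, and the `alpha` weighting are all assumptions made for illustration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Region-based term: 1 minus the Dice overlap between prediction and mask.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_map(mask):
    # Contour pixels: binary mask minus its 4-neighbour erosion.
    padded = np.pad(mask, 1, mode="edge")
    eroded = np.minimum.reduce([
        padded[1:-1, 1:-1], padded[:-2, 1:-1], padded[2:, 1:-1],
        padded[1:-1, :-2], padded[1:-1, 2:],
    ])
    return mask - eroded

def joint_loss(pred, target, alpha=0.5):
    # Weighted sum of the region term and a boundary-sensitive Dice term
    # computed on the extracted contours (alpha is a hypothetical weight).
    region = dice_loss(pred, target)
    boundary = dice_loss(boundary_map((pred > 0.5).astype(float)),
                         boundary_map(target))
    return alpha * region + (1.0 - alpha) * boundary
```

For a perfect prediction both terms vanish, while a prediction that misses the lesion is penalized by both the region overlap and the contour mismatch, which is the intuition behind pairing the two supervision signals.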

Graphical Abstract
SEFF-Net: A Hybrid Feature Fusion Network for Accurate Segmentation of Breast Ultrasound Images

Keywords
breast ultrasound segmentation
edge enhancement
feature fusion
hybrid loss strategy
image processing

Data Availability Statement
Data will be made available on request.

Funding
This work was supported by the Taiyuan Bureau of Science and Technology through the Science, Technology and Innovation Program of the National Regional Medical Center under Grant 202243.

Conflicts of Interest
Senmao Wang is affiliated with the Taiyuan Central Hospital, Taiyuan 030024, China. The authors declare that this affiliation had no influence on the study design, data collection, analysis, interpretation, or the decision to publish, and that no other competing interests exist.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
This study used only publicly available, de-identified datasets, with no human subject involvement or new data collection; therefore, no ethical approval or informed consent was required.


Cite This Article
APA Style
Su, T., Wan, R., Wang, S., & Bai, Y. (2026). SEFF-Net: A Hybrid Feature Fusion Network for Accurate Segmentation of Breast Ultrasound Images. ICCK Transactions on Emerging Topics in Artificial Intelligence, 3(2), 128–141. https://doi.org/10.62762/TETAI.2026.494190


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2026 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
ICCK Transactions on Emerging Topics in Artificial Intelligence
ISSN: 3068-6652 (Online)
Email: [email protected]

All published articles are permanently preserved in Portico:
https://www.portico.org/publishers/icck/