ICCK Journal of Software Engineering, Volume 1, Issue 2, 2025: 75-89

Open Access | Review Article | 31 October 2025
A Comprehensive Review on Software Architectures for Facial Emotion Recognition Using Deep Learning Techniques
1 Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, 57000, Pakistan
2 Department of Computer Engineering, COMSATS University Islamabad, Sahiwal Campus, 57000, Pakistan
* Corresponding Author: Amna Farooq, [email protected]
Received: 12 August 2025, Accepted: 17 September 2025, Published: 31 October 2025  
Abstract
Facial Emotion Recognition (FER) software is an important component of modern software applications, used in intelligent user interfaces, diagnostics in psychiatry and psychology, human-computer interaction, and even surveillance. Recent advances in deep learning, and the architectures built on it, including Convolutional Neural Networks (CNNs) and transformer models, have made the development of FER software far more efficient and scalable. This review provides a comprehensive synthesis of FER systems spanning 2015 to the present. Unlike prior surveys, it emphasizes the software engineering perspective, covering software construction, software architecture, system integration, training frameworks, dataset usage, deployment on edge devices, MLOps practices for continuous testing and monitoring, and deployment challenges. It examines common datasets such as FER2013, RAF-DB, and AffectNet, along with the prevailing model architectures, including CNNs, Long Short-Term Memory (LSTM) networks, hybrid models, and transformers. Importantly, over half of the surveyed studies continue to rely on demographically narrow datasets (e.g., FER2013, JAFFE, CK+), which limits generalizability and raises fairness concerns across ethnicity, age, and gender. The review concludes with directions for developing fair, lightweight, and robust FER systems suitable for real-world deployment.
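As a rough illustration of the pipeline stages the CNN-based FER systems surveyed here share (convolution, activation, pooling, classification), the sketch below runs those operations in plain NumPy with random, untrained weights. The input size (48×48 grayscale), filter count, and 7-class output are assumptions matching FER2013's format; a real system would use a trained model, not this toy.

```python
import numpy as np

# Hypothetical sketch of the core operations inside a CNN-based FER
# classifier: convolution -> ReLU -> global average pooling -> softmax.
# Weights are random stand-ins, not learned parameters.

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2D cross-correlation of a (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def softmax(z):
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

image = rng.random((48, 48))          # stand-in for a normalized face crop
kernels = rng.normal(size=(8, 3, 3))  # 8 "learned" 3x3 filters (random here)
W_out = rng.normal(size=(7, 8))       # 7 emotion classes, as in FER2013

feat = np.maximum(conv2d(image, kernels), 0)  # ReLU feature maps
pooled = feat.mean(axis=(1, 2))               # global average pooling
probs = softmax(W_out @ pooled)               # per-class probabilities

print(probs.shape)  # (7,)
```

Production FER models stack many such convolution blocks and train the filters end-to-end; the point here is only the data flow from face crop to emotion distribution.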

Graphical Abstract
A Comprehensive Review on Software Architectures for Facial Emotion Recognition Using Deep Learning Techniques

Keywords
convolutional neural network
long-short-term memory
facial emotion recognition

Data Availability Statement
Not applicable.

Funding
This work was supported without any funding.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Farooq, A., Wattoo, W. A., Farooq, W., Mukhtar, A., Zahid, M., & Iqra. (2025). A Comprehensive Review on Software Architectures for Facial Emotion Recognition Using Deep Learning Techniques. ICCK Journal of Software Engineering, 1(2), 75–89. https://doi.org/10.62762/JSE.2025.285106


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Copyright © 2025 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
ICCK Journal of Software Engineering

ISSN: 3069-1834 (Online)

Email: [email protected]

All published articles are preserved permanently in Portico:
https://www.portico.org/publishers/icck/