Journal of Reliable and Secure Computing, Volume 2, Issue 1, 2026: 27-38

Open Access | Research Article | 07 February 2026
A Novel System for Detecting Model Poisoning Attacks in Federated Learning
Jagdeep Singh 1, Saru Kumari 2 and Seema Agrawal 3,*
1 Department of Computer Science and Engineering, Sant Longowal Institute of Engineering and Technology, Longowal, India
2 Department of Mathematics, Chaudhary Charan Singh University, Meerut, Uttar Pradesh, India
3 Department of Mathematics, S.S.V. College, Uttar Pradesh, India
* Corresponding Author: Seema Agrawal, [email protected]
ARK: ark:/57805/jrsc.2025.385825
Received: 20 November 2025, Accepted: 21 January 2026, Published: 07 February 2026  
Abstract
Federated learning (FL) enables decentralized model training and enhances user privacy by keeping data on local devices. Despite these advantages, FL remains vulnerable to sophisticated adversarial attacks. Federated recommender systems (FRS), an important application of FL, are particularly susceptible to threats such as model poisoning. In this paper, we propose DyMUSA, a novel model poisoning attack tailored for FRS. DyMUSA exploits systemic vulnerabilities through dynamic user selection and adaptive poisoning strategies. Specifically, it leverages the Isolation Forest algorithm to identify anomalous users and generate poisoned gradients that compromise the integrity of the recommender system. Experiments conducted on real-world datasets demonstrate that DyMUSA significantly increases the exposure of targeted items while maintaining minimal impact on overall system performance.
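
The abstract describes the attack mechanism only at a high level. The sketch below is a minimal Python illustration of the general technique the abstract names, not the authors' DyMUSA implementation: the synthetic update matrix, the top-k selection rule, and the poisoned_gradient helper are hypothetical stand-ins. It shows how scikit-learn's Isolation Forest can flag anomalous users from their local updates, and how a local gradient could be blended toward a target item while keeping its norm close to a benign one.

# Minimal illustrative sketch -- NOT the authors' DyMUSA implementation.
# Assumption: each user's local update is summarized as a fixed-length
# vector; a real FRS would use per-item embedding gradients instead.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_users, dim = 100, 16
user_updates = rng.normal(size=(n_users, dim))  # stand-in for per-user updates

# Step 1 (per the abstract): use Isolation Forest to identify anomalous
# users. Lower score_samples values mean more anomalous.
forest = IsolationForest(contamination=0.1, random_state=0).fit(user_updates)
scores = forest.score_samples(user_updates)
k = 5
anomalous_users = np.argsort(scores)[:k]  # the k most anomalous users

# Step 2 (hypothetical helper): craft a poisoned gradient that pushes the
# model toward a target item while rescaling to the benign update's norm,
# so the update is less likely to trip simple norm-based server filters.
def poisoned_gradient(benign_grad, target_direction, epsilon=0.5):
    g = benign_grad + epsilon * target_direction
    return g * (np.linalg.norm(benign_grad) / (np.linalg.norm(g) + 1e-12))

target_direction = rng.normal(size=dim)  # toward the target item's embedding
for u in anomalous_users:
    user_updates[u] = poisoned_gradient(user_updates[u], target_direction)

The norm-rescaling step here is one plausible camouflage constraint; the paper's adaptive poisoning strategy may differ in its details.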

Keywords
federated learning
attacks
privacy
real world data
datasets

Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Fan, W., Zhao, X., Chen, X., Su, J., Gao, J., Wang, L., ... & Li, Q. (2022). A comprehensive survey on trustworthy recommender systems. CoRR, abs/2209.10117.
    [Google Scholar]
  2. He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., & Wang, M. (2020, July). LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 639-648).
    [CrossRef]   [Google Scholar]
  3. Cheng, H. T., Koc, L., Harmsen, J., Shaked, T., Chandra, T., Aradhye, H., ... & Shah, H. (2016, September). Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems (pp. 7-10).
    [CrossRef]   [Google Scholar]
  4. Wang, X., He, X., Wang, M., Feng, F., & Chua, T. S. (2019, July). Neural graph collaborative filtering. In Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval (pp. 165-174).
    [CrossRef]   [Google Scholar]
  5. Gunes, I., Kaleli, C., Bilge, A., & Polat, H. (2014). Shilling attacks against recommender systems: a comprehensive survey. Artificial Intelligence Review, 42(4), 767-799.
    [CrossRef]   [Google Scholar]
  6. Xia, G., Chen, J., Yu, C., & Ma, J. (2023). Poisoning attacks in federated learning: A survey. IEEE Access, 11, 10708-10722.
    [CrossRef]   [Google Scholar]
  7. Zhang, H., Li, Y., Ding, B., & Gao, J. (2020, April). Practical data poisoning attack against next-item recommendation. In Proceedings of the web conference 2020 (pp. 2458-2464).
    [CrossRef]   [Google Scholar]
  8. Lam, S. K., & Riedl, J. (2004, May). Shilling recommender systems for fun and profit. In Proceedings of the 13th international conference on World Wide Web (pp. 393-402).
    [CrossRef]   [Google Scholar]
  9. Alonso, S., Bobadilla, J., Ortega, F., & Moya, R. (2019). Robust model-based reliability approach to tackle shilling attacks in collaborative filtering recommender systems. IEEE Access, 7, 41782-41798.
    [CrossRef]   [Google Scholar]
  10. Yu, J., Gao, M., Rong, W., Li, W., Xiong, Q., & Wen, J. (2017). Hybrid attacks on model-based social recommender systems. Physica A: Statistical Mechanics and its Applications, 483, 171-181.
    [CrossRef]   [Google Scholar]
  11. Deldjoo, Y., Noia, T. D., & Merra, F. A. (2021). A survey on adversarial recommender systems: from attack/defense strategies to generative adversarial networks. ACM Computing Surveys (CSUR), 54(2), 1-38.
    [CrossRef]   [Google Scholar]
  12. Rong, D., Ye, S., Zhao, R., Yuen, H. N., Chen, J., & He, Q. (2022, May). Fedrecattack: Model poisoning attack to federated recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE) (pp. 2643-2655). IEEE.
    [CrossRef]   [Google Scholar]
  13. Zhou, W., Wen, J., Qu, Q., Zeng, J., & Cheng, T. (2018). Shilling attack detection for recommender systems based on credibility of group users and rating time series. PloS one, 13(5), e0196533.
    [CrossRef]   [Google Scholar]
  14. Gao, C., Wang, X., He, X., & Li, Y. (2022, February). Graph neural networks for recommender system. In Proceedings of the fifteenth ACM international conference on web search and data mining (pp. 1623-1625).
    [CrossRef]   [Google Scholar]
  15. Cho, J. H., Sharma, D. P., Alavizadeh, H., Yoon, S., Ben-Asher, N., Moore, T. J., ... & Nelson, F. F. (2020). Toward proactive, adaptive defense: A survey on moving target defense. IEEE Communications Surveys & Tutorials, 22(1), 709-745.
    [CrossRef]   [Google Scholar]
  16. Anelli, V. W., Deldjoo, Y., Di Noia, T., & Merra, F. A. (2021). Adversarial recommender systems: Attack, defense, and advances. In Recommender systems handbook (pp. 335-379). New York, NY: Springer US.
    [CrossRef]   [Google Scholar]
  17. Fang, M., Yang, G., Gong, N. Z., & Liu, J. (2018, December). Poisoning attacks to graph-based recommender systems. In Proceedings of the 34th annual computer security applications conference (pp. 381-392).
    [CrossRef]   [Google Scholar]
  18. Di Noia, T., Malitesta, D., & Merra, F. A. (2020, June). Taamr: Targeted adversarial attack against multimedia recommender systems. In 2020 50th Annual IEEE/IFIP international conference on dependable systems and networks workshops (DSN-W) (pp. 1-8). IEEE.
    [CrossRef]   [Google Scholar]
  19. Si, M., & Li, Q. (2020). Shilling attacks against collaborative recommender systems: a review. Artificial Intelligence Review, 53(1), 291-319.
    [CrossRef]   [Google Scholar]
  20. Nawara, D., Aly, A., & Kashef, R. (2024). Shilling attacks and fake reviews injection: Principles, models, and datasets. IEEE Transactions on Computational Social Systems.
    [CrossRef]   [Google Scholar]
  21. Nguyen, T. T., Quoc Viet Hung, N., Nguyen, T. T., Huynh, T. T., Nguyen, T. T., Weidlich, M., & Yin, H. (2024). Manipulating recommender systems: A survey of poisoning attacks and countermeasures. ACM Computing Surveys, 57(1), 1-39.
    [CrossRef]   [Google Scholar]
  22. Lee, J. S., & Zhu, D. (2012). Shilling attack detection—a new approach for a trustworthy recommender system. INFORMS Journal on Computing, 24(1), 117-131.
    [CrossRef]   [Google Scholar]
  23. Mazurowski, M. A. (2013). Estimating confidence of individual rating predictions in collaborative filtering recommender systems. Expert Systems with Applications, 40(10), 3847-3857.
    [CrossRef]   [Google Scholar]
  24. Moskalenko, V., Kharchenko, V., Moskalenko, A., & Kuzikov, B. (2023). Resilience and resilient systems of artificial intelligence: taxonomy, models and methods. Algorithms, 16(3), 165.
    [CrossRef]   [Google Scholar]
  25. O'Mahony, M., Hurley, N., Kushmerick, N., & Silvestre, G. (2004). Collaborative recommendation: A robustness analysis. ACM Transactions on Internet Technology (TOIT), 4(4), 344-377.
    [CrossRef]   [Google Scholar]
  26. Joshi, P., Shaikh, M. Z., Varshney, N., & Dwivedy, B. (2024, March). Robustness Challenges in Deep Learning: Strategies for Enhancing Model Resilience. In 2024 2nd International Conference on Disruptive Technologies (ICDT) (pp. 757-762). IEEE.
    [CrossRef]   [Google Scholar]
  27. Du, J., Lin, T., Jiang, C., Yang, Q., Bader, C. F., & Han, Z. (2024). Distributed foundation models for multi-modal learning in 6G wireless networks. IEEE Wireless Communications, 31(3), 20-30.
    [CrossRef]   [Google Scholar]
  28. Zhou, X., Xu, M., Wu, Y., & Zheng, N. (2021). Deep model poisoning attack on federated learning. Future Internet, 13(3), 73.
    [CrossRef]   [Google Scholar]
  29. Hasan, M. M. (2025). Federated Learning Models for Privacy-Preserving AI In Enterprise Decision Systems. International Journal of Business and Economics Insights, 5(3), 238-269.
    [CrossRef]   [Google Scholar]
  30. Feng, Y., Guo, Y., Hou, Y., Wu, Y., Lao, M., Yu, T., & Liu, G. (2025). A survey of security threats in federated learning. Complex & Intelligent Systems, 11(2), 165.
    [CrossRef]   [Google Scholar]
  31. Alansary, S. A., Ayyad, S. M., Talaat, F. M., & Saafan, M. M. (2025). Emerging AI threats in cybercrime: a review of zero-day attacks via machine, deep, and federated learning. Knowledge and Information Systems, 67(11), 10951-10987.
    [CrossRef]   [Google Scholar]
  32. Guo, P., Wang, R., Zeng, S., Zhu, J., Jiang, H., Wang, Y., ... & Qu, L. (2025). Exploring the vulnerabilities of federated learning: A deep dive into gradient inversion attacks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
    [CrossRef]   [Google Scholar]
  33. Rong, D., Ye, S., Zhao, R., Yuen, H. N., Chen, J., & He, Q. (2022, May). Fedrecattack: Model poisoning attack to federated recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE) (pp. 2643-2655). IEEE.
    [CrossRef]   [Google Scholar]
  34. Song, J., Li, Z., Hu, Z., Wu, Y., Li, Z., Li, J., & Gao, J. (2020, April). Poisonrec: an adaptive data poisoning framework for attacking black-box recommender systems. In 2020 IEEE 36th international conference on data engineering (ICDE) (pp. 157-168). IEEE.
    [CrossRef]   [Google Scholar]
  35. Fang, M., Cao, X., Jia, J., & Gong, N. (2020). Local model poisoning attacks to Byzantine-Robust federated learning. In 29th USENIX security symposium (USENIX Security 20) (pp. 1605-1622).
    [Google Scholar]
  36. Zhang, K., Cao, Q., Sun, F., Wu, Y., Tao, S., Shen, H., & Cheng, X. (2025). Robust recommender system: a survey and future directions. ACM Computing Surveys, 58(1), 1-38.
    [CrossRef]   [Google Scholar]
  37. Muhammad, K., Wang, Q., O'Reilly-Morgan, D., Tragos, E., Smyth, B., Hurley, N., ... & Lawlor, A. (2020, August). Fedfast: Going beyond average for faster training of federated recommender systems. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 1234-1242).
    [CrossRef]   [Google Scholar]
  38. Mothukuri, V., Khare, P., Parizi, R. M., Pouriyeh, S., Dehghantanha, A., & Srivastava, G. (2021). Federated-learning-based anomaly detection for IoT security attacks. IEEE Internet of Things Journal, 9(4), 2545-2554.
    [CrossRef]   [Google Scholar]
  39. Harper, F. M., & Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4), 1-19.
    [CrossRef]   [Google Scholar]
  40. Suryam, A. (n.d.). Ecommerce electronics dataset [Data set]. Kaggle. Retrieved from https://www.kaggle.com/datasets/ecommerce-data/amazon-electronics-dataset
    [Google Scholar]
  41. Netflix-Inc. (n.d.). Netflix Prize data [Data set]. Kaggle. Retrieved from https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data
    [Google Scholar]
  42. Yelp, Inc. (n.d.). Yelp Open Dataset [Data set]. Yelp. Retrieved from https://www.yelp.com/dataset
    [Google Scholar]
  43. Kohli, K. (n.d.). Steam dataset [Data set]. Kaggle. Retrieved from https://www.kaggle.com/code/kkohli/steam-dataset
    [Google Scholar]

Cite This Article
APA Style
Singh, J., Kumari, S., & Agrawal, S. (2026). A Novel System for Detecting Model Poisoning Attacks in Federated Learning. Journal of Reliable and Secure Computing, 2(1), 27–38. https://doi.org/10.62762/JRSC.2025.385825
Export Citation
RIS Format
Compatible with EndNote, Zotero, Mendeley, and other reference managers
RIS format data for reference managers
TY  - JOUR
AU  - Singh, Jagdeep
AU  - Kumari, Saru
AU  - Agrawal, Seema
PY  - 2026
DA  - 2026/02/07
TI  - A Novel System for Detecting Model Poisoning Attacks in Federated Learning
JO  - Journal of Reliable and Secure Computing
T2  - Journal of Reliable and Secure Computing
JF  - Journal of Reliable and Secure Computing
VL  - 2
IS  - 1
SP  - 27
EP  - 38
DO  - 10.62762/JRSC.2025.385825
UR  - https://www.icck.org/article/abs/JRSC.2025.385825
KW  - federated learning
KW  - attacks
KW  - privacy
KW  - real world data
KW  - datasets
AB  - Federated learning (FL) enables decentralized model training and enhances user privacy by keeping data on local devices. Despite these advantages, FL remains vulnerable to sophisticated adversarial attacks. Federated recommender systems (FRS), an important application of FL, are particularly susceptible to threats such as model poisoning. In this paper, we propose DyMUSA, a novel model poisoning attack tailored for FRS. DyMUSA exploits systemic vulnerabilities through dynamic user selection and adaptive poisoning strategies. Specifically, it leverages the Isolation Forest algorithm to identify anomalous users and generate poisoned gradients that compromise the integrity of the recommender system. Experiments conducted on real-world datasets demonstrate that DyMUSA significantly increases the exposure of targeted items while maintaining minimal impact on overall system performance.
SN  - 3070-6424
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
Compatible with LaTeX, BibTeX, and other reference managers
BibTeX format data for LaTeX and reference managers
@article{Singh2026A,
  author = {Jagdeep Singh and Saru Kumari and Seema Agrawal},
  title = {A Novel System for Detecting Model Poisoning Attacks in Federated Learning},
  journal = {Journal of Reliable and Secure Computing},
  year = {2026},
  volume = {2},
  number = {1},
  pages = {27-38},
  doi = {10.62762/JRSC.2025.385825},
  url = {https://www.icck.org/article/abs/JRSC.2025.385825},
  abstract = {Federated learning (FL) enables decentralized model training and enhances user privacy by keeping data on local devices. Despite these advantages, FL remains vulnerable to sophisticated adversarial attacks. Federated recommender systems (FRS), an important application of FL, are particularly susceptible to threats such as model poisoning. In this paper, we propose DyMUSA, a novel model poisoning attack tailored for FRS. DyMUSA exploits systemic vulnerabilities through dynamic user selection and adaptive poisoning strategies. Specifically, it leverages the Isolation Forest algorithm to identify anomalous users and generate poisoned gradients that compromise the integrity of the recommender system. Experiments conducted on real-world datasets demonstrate that DyMUSA significantly increases the exposure of targeted items while maintaining minimal impact on overall system performance.},
  keywords = {federated learning, attacks, privacy, real world data, datasets},
  issn = {3070-6424},
  publisher = {Institute of Central Computation and Knowledge}
}

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 29 | PDF Downloads: 12

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2026 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Journal of Reliable and Secure Computing

ISSN: 3070-6424 (Online)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/