ICCK Transactions on Machine Intelligence, Volume 2, Issue 2, 2026: 65-76

Free to Read | Research Article | 08 February 2026
FedTLRec: Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression
Xudong Wang 1,* and Ruixin Zhao 2
1 School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
2 Beijing Kunchi Technology Co., Ltd, Beijing, China
* Corresponding Author: Xudong Wang, [email protected]
ARK: ark:/57805/tmi.2025.882476
Received: 30 December 2025, Accepted: 20 January 2026, Published: 08 February 2026  
Abstract
Federated learning has emerged as a key paradigm in privacy-preserving computing due to its "data usable but not visible" property, which enables users to collaboratively train models without sharing raw data. Building on this paradigm, federated recommendation systems offer a promising architecture that balances user privacy with recommendation accuracy through distributed collaborative learning. However, existing federated recommendation systems struggle to balance model performance, communication efficiency, and user privacy. In this paper, we propose FedTLRec (Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression), a federated recommendation framework that integrates Low-Rank Adaptation (LoRA) for parameter compression with Transformer-based aggregation. FedTLRec addresses key challenges in communication efficiency and model performance by compressing client updates via LoRA and employing a Transformer with attention mechanisms to aggregate parameters from multiple clients; a K-means clustering strategy further improves efficiency by grouping similar clients. Experiments on real-world datasets show that FedTLRec achieves superior recommendation accuracy at a significantly reduced communication cost while remaining robust under client dropout. Code is available at: https://github.com/trueWangSyutung/FedTLRec.
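
The abstract names three mechanisms: LoRA compression of client updates, attention-based (Transformer-style) aggregation on the server, and K-means grouping of similar clients. The Python sketch below illustrates how these pieces could fit together in one federated round. It is a minimal illustration under assumed shapes and hyperparameters, not the authors' implementation (see the linked repository for FedTLRec itself); the function names client_compress, server_aggregate, and group_clients, the single-head attention stand-in, and all dimensions are hypothetical.

# Minimal, illustrative sketch of the three mechanisms named in the abstract.
# NOT the authors' implementation; every shape, name, and hyperparameter
# below is an assumption made for illustration only.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

n_items, d_emb, rank = 1000, 64, 8  # assumed item count, embedding dim, LoRA rank

def client_compress(full_delta: torch.Tensor, rank: int):
    """LoRA-style compression: approximate a dense update (n_items x d_emb)
    by low-rank factors B @ A via truncated SVD, so a client uploads
    O(rank * (n_items + d_emb)) numbers instead of O(n_items * d_emb)."""
    U, S, Vh = torch.linalg.svd(full_delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]   # (n_items, rank)
    A = Vh[:rank, :]             # (rank, d_emb)
    return B, A

def server_aggregate(factors):
    """Attention-based aggregation: treat each client's reconstructed update
    as a token and mix the tokens with scaled dot-product attention weights
    (a single-head stand-in for a full Transformer encoder)."""
    deltas = torch.stack([B @ A for B, A in factors])     # (n_clients, n_items, d_emb)
    tokens = deltas.flatten(start_dim=1)                  # (n_clients, n_items*d_emb)
    scores = tokens @ tokens.T / tokens.shape[1] ** 0.5   # pairwise client similarity
    weights = F.softmax(scores, dim=-1)
    mixed = weights @ tokens                              # attention-weighted mixture
    return mixed.mean(dim=0).view(n_items, d_emb)         # aggregated global update

def group_clients(factors, n_groups: int = 4):
    """K-means grouping: cluster clients on their small A factors so that
    aggregation runs within groups of similar clients."""
    feats = torch.stack([A.flatten() for _, A in factors]).numpy()
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(feats)

# Toy round: 8 clients upload LoRA factors; the server groups them and
# aggregates each group separately.
factors = [client_compress(torch.randn(n_items, d_emb) * 0.01, rank) for _ in range(8)]
labels = group_clients(factors, n_groups=2)
for g in set(labels):
    group = [f for f, lab in zip(factors, labels) if lab == g]
    update = server_aggregate(group)  # (n_items, d_emb) update for group g

In this sketch the communication saving comes entirely from uploading the factors B and A rather than the dense update, and the grouping step bounds the cost of the pairwise attention to each cluster's size.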

Keywords
federated recommendation
low-rank adaptation
transformer

Data Availability Statement
The source code supporting the findings of this study is publicly available at: https://github.com/trueWangSyutung/FedTLRec.

Funding
This work received no funding.

Conflicts of Interest
Ruixin Zhao is affiliated with Beijing Kunchi Technology Co., Ltd, Beijing, China. The authors declare that this affiliation had no influence on the study design, data collection, analysis, interpretation, or the decision to publish, and that no other competing interests exist.

AI Use Statement
The authors declare that DeepSeek-V3 was used solely for data analysis and language editing in the preparation of this manuscript. All scientific interpretations, results, and conclusions were determined by the authors.

Ethical Approval and Consent to Participate
Not applicable.

Cite This Article
APA Style
Wang, X., & Zhao, R. (2026). FedTLRec: Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression. ICCK Transactions on Machine Intelligence, 2(2), 65–76. https://doi.org/10.62762/TMI.2025.882476
Export Citation
RIS Format
Compatible with EndNote, Zotero, Mendeley, and other reference managers
TY  - JOUR
AU  - Wang, Xudong
AU  - Zhao, Ruixin
PY  - 2026
DA  - 2026/02/08
TI  - FedTLRec: Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression
JO  - ICCK Transactions on Machine Intelligence
T2  - ICCK Transactions on Machine Intelligence
JF  - ICCK Transactions on Machine Intelligence
VL  - 2
IS  - 2
SP  - 65
EP  - 76
DO  - 10.62762/TMI.2025.882476
UR  - https://www.icck.org/article/abs/TMI.2025.882476
KW  - federated recommendation
KW  - low-rank adaptation
KW  - transformer
AB  - Federated learning has emerged as a key paradigm in privacy-preserving computing due to its "data usable but not visible" property, which enables users to collaboratively train models without sharing raw data. Building on this paradigm, federated recommendation systems offer a promising architecture that balances user privacy with recommendation accuracy through distributed collaborative learning. However, existing federated recommendation systems struggle to balance model performance, communication efficiency, and user privacy. In this paper, we propose FedTLRec (Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression), a federated recommendation framework that integrates Low-Rank Adaptation (LoRA) for parameter compression with Transformer-based aggregation. FedTLRec addresses key challenges in communication efficiency and model performance by compressing client updates via LoRA and employing a Transformer with attention mechanisms to aggregate parameters from multiple clients; a K-means clustering strategy further improves efficiency by grouping similar clients. Experiments on real-world datasets show that FedTLRec achieves superior recommendation accuracy at a significantly reduced communication cost while remaining robust under client dropout. Code is available at: https://github.com/trueWangSyutung/FedTLRec.
SN  - 3068-7403
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
Compatible with LaTeX, BibTeX, and other reference managers
@article{Wang2026FedTLRec,
  author = {Xudong Wang and Ruixin Zhao},
  title = {FedTLRec: Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression},
  journal = {ICCK Transactions on Machine Intelligence},
  year = {2026},
  volume = {2},
  number = {2},
  pages = {65-76},
  doi = {10.62762/TMI.2025.882476},
  url = {https://www.icck.org/article/abs/TMI.2025.882476},
  abstract = {Federated learning has emerged as a key paradigm in privacy-preserving computing due to its "data usable but not visible" property, which enables users to collaboratively train models without sharing raw data. Building on this paradigm, federated recommendation systems offer a promising architecture that balances user privacy with recommendation accuracy through distributed collaborative learning. However, existing federated recommendation systems struggle to balance model performance, communication efficiency, and user privacy. In this paper, we propose FedTLRec (Federated Recommendation with Transformer-based Parameter Aggregation and LoRA Compression), a federated recommendation framework that integrates Low-Rank Adaptation (LoRA) for parameter compression with Transformer-based aggregation. FedTLRec addresses key challenges in communication efficiency and model performance by compressing client updates via LoRA and employing a Transformer with attention mechanisms to aggregate parameters from multiple clients; a K-means clustering strategy further improves efficiency by grouping similar clients. Experiments on real-world datasets show that FedTLRec achieves superior recommendation accuracy at a significantly reduced communication cost while remaining robust under client dropout. Code is available at: https://github.com/trueWangSyutung/FedTLRec.},
  keywords = {federated recommendation, low-rank adaptation, transformer},
  issn = {3068-7403},
  publisher = {Institute of Central Computation and Knowledge}
}

Article Metrics
Citations: Crossref 0, Scopus 0, Web of Science 0
Article Access Statistics: Views 99, PDF Downloads 13

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Machine Intelligence

ISSN: 3068-7403 (Online)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/