ICCK Transactions on Sensing, Communication, and Control, Volume 2, Issue 4, 2025: 226-237

Free to Read | Research Article | 28 November 2025
Federated Learning Privacy Protection via Training Randomness
Zhiyuan Wang, Ming Tian, Yutao Qiu, Zhixu Zhao, Yongpo Zhang and Kun Liu*
1 School of Automation, Beijing Institute of Technology, Beijing 100081, China
2 Aviation University of Air Force, Jilin 130022, China
* Corresponding Author: Kun Liu, [email protected]
Received: 22 September 2025, Accepted: 13 November 2025, Published: 28 November 2025  
Abstract
Federated learning is a collaborative machine learning paradigm that trains models across multiple computing nodes while aiming to preserve the privacy of the local data held by participants. However, because it operates over open networks, federated learning faces severe privacy and security challenges. Studies have shown that attackers can reconstruct original training data by intercepting gradients transmitted across the network, thereby posing a serious threat to user privacy. One representative attack is Deep Leakage from Gradients (DLG), which iteratively recovers training data by optimizing dummy inputs to match the observed gradients. To address this challenge, this paper proposes a novel privacy-preserving strategy that leverages the randomness inherent in model training. Specifically, during training we introduce the dropout method to randomly disconnect connections between neurons, while adding Gaussian noise to mask the gradients of the disconnected neurons. This approach effectively disrupts gradient leakage attacks and reduces overfitting, thereby enhancing model generalization. We further provide a theoretical analysis of the privacy guarantees using differential privacy metrics. Extensive experiments under federated learning attack-defense scenarios demonstrate the effectiveness of the proposed strategy. Compared with existing defenses, our method achieves strong privacy protection against gradient leakage while maintaining competitive model accuracy, offering new insights and techniques for privacy preservation in federated learning.
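
To make the two mechanisms described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it computes a client's gradients, applies a dropout-style mask that replaces the gradients of dropped coordinates with Gaussian noise before they are shared, and then runs a DLG-style attacker that optimizes a dummy input and soft label to match the shared gradients. The toy model, data shapes, dropout rate, and noise scale are illustrative assumptions.

# Illustrative sketch only: defense = dropout-style gradient masking with Gaussian
# noise; attack = DLG-style gradient matching. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy client model and one private training sample (shapes are illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.Sigmoid(), nn.Linear(64, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])

# Honest client: the gradients that would normally be shared with the server.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Defense sketch: drop a random subset of gradient coordinates (dropout-style)
# and mask the dropped coordinates with Gaussian noise before sharing.
def mask_gradients(grads, drop_rate=0.3, sigma=0.1):
    masked = []
    for g in grads:
        keep = (torch.rand_like(g) > drop_rate).float()   # 1 = connection kept
        noise = sigma * torch.randn_like(g)                # Gaussian mask
        masked.append(g * keep + noise * (1.0 - keep))
    return masked

shared_grads = mask_gradients(true_grads)

# DLG-style attacker: optimize a dummy input and soft label so that their
# gradients match the gradients observed on the network.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("reconstruction MSE vs. private sample:", F.mse_loss(x_dummy.detach(), x_true).item())

With the masked gradients, the attacker's gradient-matching objective no longer has the true sample as its minimizer, which is the intuition behind the defense; the paper quantifies this protection using differential privacy metrics.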

Graphical Abstract
Federated Learning Privacy Protection via Training Randomness

Keywords
federated learning
privacy-preserving strategy
gradient leakage attack
dropout method
differential privacy

Data Availability Statement
Data will be made available on request.

Funding
This work was supported in part by the National Natural Science Foundation of China under Grant U24A20264 and Grant 62273041; and in part by the Open Projects of the Institute of Systems Science, Beijing Wuzi University under Grant BWUISS11.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.

Cite This Article
APA Style
Wang, Z., Tian, M., Qiu, Y., Zhao, Z., Zhang, Y. & Liu, K. (2025). Federated Learning Privacy Protection via Training Randomness. ICCK Transactions on Sensing, Communication, and Control, 2(4), 226–237. https://doi.org/10.62762/TSCC.2025.779613

Article Metrics
Citations: Crossref: 0 | Scopus: 0 | Web of Science: 0
Article Access Statistics: Views: 191 | PDF Downloads: 25

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
Institute of Central Computation and Knowledge (ICCK) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ICCK Transactions on Sensing, Communication, and Control

ISSN: 3068-9287 (Online) | ISSN: 3068-9279 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/