Volume 2, Issue 4 (In Progress) - Table of Contents


Free Access | Research Article | 30 November 2025
RUL Prediction of the Injection Lance in Copper Top-Blown Smelting Using KPCA and TSO-Optimized LSSVM
ICCK Transactions on Sensing, Communication, and Control | Volume 2, Issue 4: 238-249, 2025 | DOI: 10.62762/TSCC.2025.978286
Abstract
As the core component of copper top-blown smelting, the injection lance has a service life that critically affects production stability. To monitor the operating condition of the injection lance, a data-driven model, termed DKT-LSSVM, is proposed to predict its Remaining Useful Life (RUL), i.e., its remaining service life. First, to reduce noise interference, Daubechies wavelet denoising with four vanishing moments (DB4) is applied to the raw data. Then, Kernel Principal Component Analysis (KPCA) is used to extract principal components from the denoised data, retaining at least 90% of the information content (18 principal components are obtained). These principal component...
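
The preprocessing pipeline summarized in the abstract (DB4 wavelet denoising followed by KPCA with a 90% information-retention criterion) can be sketched as follows. This is a minimal illustration using PyWavelets and scikit-learn under assumed data shapes, kernel choice, and thresholding rule; it is not the authors' implementation.

```python
# Hedged sketch: DB4 wavelet denoising + KPCA keeping >= 90% of the variance.
# The sensor matrix, RBF kernel, and soft-threshold rule are assumptions.
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA

def db4_denoise(signal, level=4):
    """Soft-threshold denoising with the Daubechies-4 (DB4) wavelet."""
    coeffs = pywt.wavedec(signal, "db4", level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, "db4")[: len(signal)]

# Hypothetical sensor matrix: rows = time steps, columns = process variables.
X = np.random.randn(1000, 40)
X_denoised = np.apply_along_axis(db4_denoise, 0, X)

# KPCA on the denoised data; keep enough components to cover >= 90% of the
# kernel-space variance (the abstract reports 18 components for their data).
kpca = KernelPCA(n_components=None, kernel="rbf")
Z = kpca.fit_transform(X_denoised)
ratio = np.cumsum(kpca.eigenvalues_) / np.sum(kpca.eigenvalues_)
n_keep = int(np.searchsorted(ratio, 0.90) + 1)
Z_reduced = Z[:, :n_keep]
```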


Free Access | Research Article | 28 November 2025
Federated Learning Privacy Protection via Training Randomness
ICCK Transactions on Sensing, Communication, and Control | Volume 2, Issue 4: 226-237, 2025 | DOI: 10.62762/TSCC.2025.779613
Abstract
Federated learning is a collaborative machine learning paradigm that trains models across multiple computing nodes while aiming to preserve the privacy of participants' local data. However, owing to the open network environment, federated learning faces severe privacy and security challenges. Studies have shown that attackers can reconstruct original training data by intercepting gradients transmitted over the network, posing a serious threat to user privacy. One representative attack is Deep Leakage from Gradients (DLG), which iteratively recovers training data by optimizing dummy inputs to match the observed gradients. To address this challenge, this paper proposes a...
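
For context on the gradient-leakage threat the abstract describes, below is a minimal PyTorch sketch of a DLG-style attack: dummy data and labels are optimized so that their gradients match intercepted ones. The toy model, input shapes, and optimizer settings are illustrative assumptions, not the paper's setup or its proposed defense.

```python
# Hedged sketch of a DLG-style gradient-matching attack on a toy classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model (assumption)
criterion = nn.CrossEntropyLoss()

# "Victim" batch whose gradients the attacker is assumed to have intercepted.
x_true = torch.randn(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# Attacker initializes dummy data/labels and optimizes them to match gradients.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft label logits
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        optimizer.zero_grad()
        loss = criterion(model(x_dummy), torch.softmax(y_dummy, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        # Gradient-matching objective: squared distance between gradient sets.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)
```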
