ICCK Transactions on Machine Intelligence | Volume 1, Issue 3: 138-147, 2025 | DOI: 10.62762/TMI.2025.225921
Abstract
Deep learning has achieved great success primarily because it encodes large amounts of data and manipulates billions of model parameters. Despite this, deploying these cumbersome deep models on devices with limited resources, such as mobile phones and embedded devices, is challenging due to their high computational complexity and storage requirements. Various techniques are available to compress and accelerate models for this purpose. Knowledge distillation is a novel technique for model compression and acceleration, in which a small student model is learned from a large teacher model. The student network is then fine-tuned on a downstream task so that it is applicable to resource-constrained ...
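As a point of reference for the teacher-student idea described above, the snippet below is a minimal PyTorch sketch of the classic response-based distillation loss (temperature-scaled soft targets combined with a hard-label cross-entropy term). It is an illustrative assumption of how such a loss is commonly written, not the specific method surveyed in this paper; the function name and the `temperature` and `alpha` hyperparameters are hypothetical choices.

```python
# Illustrative sketch of a response-based knowledge-distillation loss.
# All names and default values here are assumptions for illustration.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a soft-target KL term with the usual cross-entropy term."""
    # Soften both output distributions with the temperature before comparing.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between softened teacher and student outputs,
    # rescaled by T^2 so gradients keep a comparable magnitude.
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this sketch, a higher temperature exposes more of the teacher's "dark knowledge" (the relative probabilities of incorrect classes), while `alpha` balances imitation of the teacher against fitting the ground-truth labels.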
Graphical Abstract