Adaptive quantization based on ensemble distillation to support FL enabled edge intelligence
GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 2022 · ieeexplore.ieee.org
Federated learning (FL) has recently become one of the most widely acknowledged technologies for promoting the development of intelligent edge networks, driven by the ever-increasing computing capability of user equipment (UE). In the traditional FL paradigm, local models are usually required to be homogeneous so that they can be aggregated into an accurate global model. Moreover, considerable communication cost and training time may be incurred in resource-constrained edge networks due to the large number of UEs participating in model transmission and the large size of the transmitted models. It is therefore imperative to develop effective training schemes for heterogeneous FL models that also reduce communication cost and training time. In this paper, we propose an adaptive quantization scheme based on ensemble distillation (AQeD) for FL to facilitate personalized quantized model training over heterogeneous local models with different sizes, structures, and quantization levels. Specifically, we design an augmented loss function that jointly considers the distillation loss, quantization values, and available wireless resources; UEs train their local personalized machine learning models and send the quantized models to a server. Based on the local quantized models, the server first performs global aggregation for cluster ensembles and then sends the aggregated model of each cluster back to the participating UEs. Numerical results show that the proposed AQeD scheme can significantly reduce communication cost and training time in comparison with known state-of-the-art solutions.
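The UE-side step described in the abstract (train a personalized local model against an augmented loss, quantize it, and upload it for cluster-ensemble aggregation) can be illustrated with a minimal sketch. The code below is only an interpretation of that description, not the paper's method: the uniform quantizer, the KL-based distillation term, the quantization-error and communication-cost penalties, and the weights `lambda_q` / `lambda_r` are all assumptions introduced for illustration.

```python
# Minimal sketch of an AQeD-style augmented loss at one UE (hypothetical details).
import numpy as np

def uniform_quantize(weights, num_bits):
    """Uniformly quantize a weight vector to 2**num_bits levels (assumed quantizer)."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** num_bits - 1
    step = (hi - lo) / levels if levels > 0 else 1.0
    return lo + np.round((weights - lo) / step) * step

def distillation_loss(student_logits, ensemble_logits, temperature=2.0):
    """KL divergence between softened cluster-ensemble and local (student) outputs."""
    def softmax(z):
        z = z / temperature
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p, q = softmax(ensemble_logits), softmax(student_logits)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

def augmented_loss(student_logits, ensemble_logits, weights, quantized_weights,
                   num_bits, bandwidth_budget, lambda_q=0.1, lambda_r=0.01):
    """Distillation loss + quantization error + a crude wireless-cost proxy (all assumed)."""
    l_distill = distillation_loss(student_logits, ensemble_logits)
    l_quant = float(np.mean((weights - quantized_weights) ** 2))    # quantization error
    l_comm = num_bits * weights.size / bandwidth_budget             # proxy for radio cost
    return l_distill + lambda_q * l_quant + lambda_r * l_comm

# Example: one UE evaluates the loss for an 8-bit quantized upload.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
w_q = uniform_quantize(w, num_bits=8)
student = rng.normal(size=(32, 10))
ensemble = student + 0.1 * rng.normal(size=(32, 10))
print(augmented_loss(student, ensemble, w, w_q, num_bits=8, bandwidth_budget=1e6))
```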