Abstract
Federated learning enables multiple clients to collaboratively train a shared model without transmitting their data. Although this approach offers significant advantages for data privacy, variations in data distribution among clients can cause inconsistent model updates, particularly on long-tailed data, which impairs the model's ability to learn the generalizable features needed to improve local performance. In this study, we propose a novel re-weighting federated learning method that incorporates a dynamic weight-allocation mechanism to balance each client's local model updates against the aggregation of the global model during training. Specifically, we apply balanced resampling locally at each client to correct class bias, and we cluster clients by feature similarity to assign aggregation weights accordingly. This strategy not only strengthens the model's capacity to learn features that generalize across clients but also reduces the divergence between local models and the global model. Empirical results on the MNIST-LT and EMNIST-LT datasets show that our method outperforms baseline approaches and reveal key factors behind its effectiveness.
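The abstract describes the method only at a high level, so the following is a minimal Python sketch, under our own assumptions, of the two mechanisms it names: class-balanced resampling at each client and similarity-based clustering of client updates to set aggregation weights. The function names (`balanced_sample_weights`, `cluster_aggregate`), the cosine-similarity k-means, and the equal-weight-per-cluster rule are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def balanced_sample_weights(labels: np.ndarray) -> np.ndarray:
    # Inverse class-frequency probabilities: sampling indices with these
    # probabilities yields, in expectation, a class-balanced mini-batch,
    # one common form of the "balanced resampling" the abstract mentions.
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), counts.tolist()))
    w = np.array([1.0 / freq[int(y)] for y in labels])
    return w / w.sum()

def cluster_aggregate(client_updates, n_clusters=3, n_iters=20, seed=0):
    # Normalize flattened client updates to unit vectors so that k-means
    # with the dot product behaves as cosine-similarity clustering.
    X = np.stack(client_updates)
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    n_clusters = min(n_clusters, len(Xn))
    rng = np.random.default_rng(seed)
    centers = Xn[rng.choice(len(Xn), size=n_clusters, replace=False)]
    for _ in range(n_iters):
        assign = np.argmax(Xn @ centers.T, axis=1)   # nearest center by cosine
        for k in range(n_clusters):
            members = Xn[assign == k]
            if len(members):
                c = members.mean(axis=0)
                centers[k] = c / (np.linalg.norm(c) + 1e-12)
    # Give each cluster equal total weight, shared among its members, so a
    # large group of similar clients cannot dominate the global update.
    weights = np.zeros(len(X))
    for k in range(n_clusters):
        idx = np.flatnonzero(assign == k)
        if len(idx):
            weights[idx] = 1.0 / len(idx)
    weights /= weights.sum()
    return weights @ X  # weighted average = new global update

# Example: 5 clients, each local update flattened to a 100-dim vector.
updates = [np.random.randn(100) for _ in range(5)]
global_update = cluster_aggregate(updates, n_clusters=2)
```

Equal weight per cluster is one plausible reading of "assigning weights appropriately"; the paper's actual dynamic weight-allocation rule may weight clusters or clients differently.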
Acknowledgments
This study was funded by the Beijing Natural Science Foundation, China (No. L181010), and the BIT Research and Innovation Promoting Project (Grant No. 2023YCXY036).
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Li, Y., Li, K. (2024). Federated Learning for Assigning Weights to Clients on Long-Tailed Data. In: Huang, D.S., Zhang, C., Pan, Y. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14876. Springer, Singapore. https://doi.org/10.1007/978-981-97-5666-7_37
DOI: https://doi.org/10.1007/978-981-97-5666-7_37
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5665-0
Online ISBN: 978-981-97-5666-7
eBook Packages: Computer Science, Computer Science (R0)