Sparse FedAdam Reduces Communication Overhead in Federated Learning
FedAdam‑SSM applies a sparse mask to model updates, cutting uplink traffic to roughly one‑third of standard FedAdam's and converging 1.1× faster with 14.5% higher accuracy than quantized variants. getnews.me/sparse-fedadam-reduces-c... #fedadam #sparsity
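The post gives no algorithmic detail, but sparsifying an update before uplink is commonly done by keeping only the largest-magnitude entries. A minimal sketch, assuming a top-k magnitude mask at a keep ratio of about one‑third (function names and the ratio are illustrative, not taken from the paper):

```python
import numpy as np

def sparsify_update(update: np.ndarray, keep_ratio: float = 0.33) -> np.ndarray:
    """Zero out all but the largest-magnitude entries of a model update.

    Hypothetical illustration of mask-based update compression; not the
    exact FedAdam-SSM procedure.
    """
    flat = update.ravel()
    k = max(1, int(flat.size * keep_ratio))
    # Indices of the k largest-magnitude entries.
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[top_idx] = True
    return (flat * mask).reshape(update.shape)

rng = np.random.default_rng(0)
delta = rng.normal(size=(4, 4))        # a client's model update
sparse = sparsify_update(delta, keep_ratio=0.25)
print(np.count_nonzero(sparse))        # 4 of 16 entries survive
```

Only the surviving entries (plus their indices) need to be transmitted, which is the source of the bandwidth savings the post describes.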