Optimizing Decentralized Federated Learning with Asynchronous Model Updates and Robust Differential Privacy
Introduction

Federated learning (FL) has emerged as a compelling paradigm for training machine learning models across a network of edge devices while keeping raw data localized. In its classic formulation, a central server orchestrates training rounds: it collects model updates from participants, aggregates them (typically via weighted averaging), and redistributes the improved global model. While this centralized FL model works well for many scenarios, it suffers from several practical limitations: ...
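The weighted-averaging aggregation step described above can be sketched as follows. This is a minimal illustration, assuming each client reports its model parameters as a flat list of floats along with the number of local training samples (the usual FedAvg-style weighting); the function name and data layout are hypothetical, not part of any specific library.

```python
def federated_average(client_updates):
    """Sample-weighted average of client parameter vectors.

    client_updates: list of (params, num_samples) pairs, where params
    is a list of floats and num_samples is the client's local data size.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_params = [0.0] * dim
    for params, n in client_updates:
        weight = n / total  # clients with more data contribute more
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Example: two clients with unequal data sizes (100 vs. 300 samples).
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300)]
print(federated_average(updates))  # → [2.5, 3.5]
```

The weighting by sample count ensures the aggregate approximates training on the pooled data; with equal sample counts it reduces to a plain mean.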