# Federated Learning

This is a partial reproduction of the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).

Only experiments on MNIST and CIFAR-10 (both IID and non-IID) have been produced so far.

Note: the scripts will be slow because parallel computing is not implemented.

## Requirements

- python >= 3.6
- pytorch >= 0.4

## Run

The MLP and CNN models are produced by:
> python [main_nn.py](main_nn.py)

Federated learning with MLP and CNN is produced by:
> python [main_fed.py](main_fed.py)

See the arguments in [options.py](utils/options.py). For example:
> python main_fed.py --dataset mnist --iid --num_channels 1 --model cnn --epochs 50 --gpu 0

NB: for CIFAR-10, `num_channels` must be 3.

## Results

### MNIST

Results are shown in Table 1 and Table 2, with the parameters C=0.1, B=10, E=5.

Table 1. Results of 10 epochs of training with a learning rate of 0.01

| Model      | Acc. of IID | Acc. of Non-IID |
| ---------- | ----------- | --------------- |
| FedAVG-MLP | 94.57%      | 70.44%          |
| FedAVG-CNN | 96.59%      | 77.72%          |

Table 2. Results of 50 epochs of training with a learning rate of 0.01

| Model      | Acc. of IID | Acc. of Non-IID |
| ---------- | ----------- | --------------- |
| FedAVG-MLP | 97.21%      | 93.03%          |
| FedAVG-CNN | 98.60%      | 93.81%          |

## Acknowledgements

Acknowledgements to [youkaichao](https://github.com/youkaichao).

## References

McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics (AISTATS), 2017.

Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, and Zi Huang. Learning Private Neural Language Modeling with Attentive Aggregation. In the 2019 International Joint Conference on Neural Networks (IJCNN), 2019. [[Paper](https://arxiv.org/abs/1812.07108)] [[Code](https://github.com/shaoxiongji/fed-att)]

Jing Jiang, Shaoxiong Ji, and Guodong Long. Decentralized Knowledge Acquisition for Mobile Internet Applications. World Wide Web, 2020. [[Paper](https://link.springer.com/article/10.1007/s11280-019-00775-w)]
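
## Sketch: FedAvg Aggregation

For readers new to the method, the step at the heart of FedAvg is an element-wise average of the clients' updated model weights after each communication round. Below is a minimal PyTorch sketch of that step; the function name `fed_avg` is illustrative and equal client weighting is assumed (the paper weights each client by its local dataset size), so the repo's own implementation may differ.

```python
import copy
import torch

def fed_avg(local_weights):
    """Equal-weight FedAvg over a list of client state_dicts.

    Assumes every client trains the same architecture, so all
    state_dicts share identical keys and tensor shapes.
    """
    avg = copy.deepcopy(local_weights[0])
    for key in avg.keys():
        # Sum the remaining clients' tensors onto the first client's copy,
        # then divide by the number of clients to get the average.
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg
```

In a full round, the server broadcasts the averaged weights back to a freshly sampled fraction C of clients, each of which runs E local epochs with batch size B before the next aggregation, matching the C, B, E parameters reported in the Results section.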
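
## Sketch: Non-IID Partitioning

The non-IID results in the tables use the paper's "pathological" partition: the training set is sorted by label, cut into shards, and each client receives only a couple of shards, so most clients see just one or two digits. A rough sketch of such a split follows; the name `mnist_noniid` and the defaults are illustrative, and the repo's own sampling code may partition differently.

```python
import numpy as np

def mnist_noniid(labels, num_users=100, num_shards=200):
    """Sort-and-shard split: with these defaults, each of the 100
    clients gets 2 random shards of 300 label-sorted examples."""
    shard_size = len(labels) // num_shards
    idxs = np.argsort(labels)  # example indices sorted by digit label
    # Shuffle the shard ids and deal them out evenly across clients.
    shards = np.random.permutation(num_shards).reshape(num_users, -1)
    return {
        user: np.concatenate(
            [idxs[s * shard_size:(s + 1) * shard_size] for s in user_shards]
        )
        for user, user_shards in enumerate(shards)
    }
```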