This repository provides the code for Wireless Ad Hoc Federated Learning (WAFL) -- Fully Autonomous Collaborative Learning with Device-to-Device Communication.
As of May 2025, this repository contains the following six projects.
- WAFL-MLP: The most basic code, using a fully connected neural network [1], for starters. A good place to learn what WAFL is.
- WAFL-ViT: WAFL with Vision Transformer [5] for image recognition.
- WAFL-DETR: WAFL with Detection Transformer [13] for object detection.
- WAFL-YOLO: WAFL with YOLOv9 for object detection.
- WAFL-Whisper: WAFL with Whisper for speech recognition.
- WAFL-Efficiency: WAFL's efficient model exchange with Top-K Difference Sparsification and Difference Quantization [15].
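The idea behind Top-K Difference Sparsification can be sketched as follows: rather than transmitting a full model, a device sends only the k parameter changes with the largest magnitude since the last exchange. This is an illustrative sketch only; the function names, the flat-list parameter representation, and the dense reconstruction step are assumptions, not the repository's actual API (see [15] for the real method, which also applies difference quantization).

```python
def topk_sparsify(prev_params, curr_params, k):
    """Keep only the k largest-magnitude parameter differences.

    Returns a sparse {index: difference} dict, which is much smaller
    to transmit than the full parameter list when k << len(params).
    """
    diffs = [(i, c - p) for i, (p, c) in enumerate(zip(prev_params, curr_params))]
    diffs.sort(key=lambda t: abs(t[1]), reverse=True)
    return dict(diffs[:k])


def apply_sparse_update(base_params, sparse_diff):
    """Reconstruct the peer's parameters from the sparse difference."""
    out = list(base_params)
    for i, d in sparse_diff.items():
        out[i] += d
    return out
```

A receiving device that cached the previously exchanged parameters can recover the peer's current model from the sparse update alone, trading a small approximation error (the dropped differences) for bandwidth.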
Wireless ad hoc federated learning (WAFL) allows collaborative learning via device-to-device communication among devices that are physically nearby. Each device has a wireless interface and can communicate with others when they are within radio range. The devices are expected to move with people, vehicles, or robots, producing opportunistic contacts with each other.
Each device trains a model individually with its local data. When a device encounters another device, they exchange their local models with each other through the ad hoc communication channel. Each device then aggregates the models into a new model, which is expected to be more general than the locally trained ones. After adjusting the new model with the local training data, the devices repeat this process while they are in contact. Please note that no third-party server is operated for the federation among multi-vendor devices.
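The exchange-and-aggregate step above can be sketched as simple parameter interpolation between the local model and a peer's model. This is a minimal illustration, not the repository's implementation: the flat-list parameters, the function name, and the mixing coefficient `coef` are assumptions (the actual projects operate on PyTorch model state and follow the aggregation defined in [1]).

```python
def wafl_aggregate(own_params, peer_params, coef=0.5):
    """Merge a peer's model into the local one by moving each local
    parameter a fraction `coef` of the way toward the peer's value.
    With coef=0.5 this is plain parameter averaging."""
    return [w + coef * (p - w) for w, p in zip(own_params, peer_params)]
```

After aggregation, the device continues training (the adjustment step) on its local data before the next encounter, so the model gains generality from peers without losing fit to local data.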

[1] Hideya Ochiai, Yuwei Sun, Qingzhe Jin, Nattanon Wongwiwatchai, Hiroshi Esaki, "Wireless Ad Hoc Federated Learning: A Fully Distributed Cooperative Machine Learning", arXiv preprint, May 2022 (https://arxiv.org/abs/2205.11779).
[2] Naoya Tezuka, Hideya Ochiai, Yuwei Sun, Hiroshi Esaki, "Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks", IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA), 2022 (https://ieeexplore.ieee.org/abstract/document/10063735).
[3] Eisuke Tomiyama, Hiroshi Esaki, Hideya Ochiai, "WAFL-GAN: Wireless Ad Hoc Federated Learning for Distributed Generative Adversarial Networks", IEEE International Conference on Knowledge and Smart Technology, 2023 (https://ieeexplore.ieee.org/document/10086811).
[4] Hideya Ochiai, Riku Nishihata, Eisuke Tomiyama, Yuwei Sun, and Hiroshi Esaki, "Detection of Global Anomalies on Distributed IoT Edges with Device-to-Device Communication", ACM MobiHoc, 2023 (https://dl.acm.org/doi/abs/10.1145/3565287.3616528).
[5] Hideya Ochiai, Atsuya Muramatsu, Yudai Ueda, Ryuhei Yamaguchi, Kazuhiro Katoh, and Hiroshi Esaki, "Tuning Vision Transformer with Device-to-Device Communication for Targeted Image Recognition", IEEE World Forum on Internet of Things (WF-IoT), 2023 (Best Paper Award) (https://ieeexplore.ieee.org/document/10539480).
[6] Ryusei Higuchi, Hiroshi Esaki, and Hideya Ochiai, "Personalized Wireless Ad Hoc Federated Learning for Label Preference Skew", IEEE World Forum on Internet of Things (WF-IoT), 2023 (https://ieeexplore.ieee.org/document/10539563).
[7] Yusuke Sugizaki, Hideya Ochiai, Muhammad Asad, Manabu Tsukada, and Hiroshi Esaki, "Wireless Ad-Hoc Federated Learning for Cooperative Map Creation and Localization Models", IEEE World Forum on Internet of Things (WF-IoT), 2023 (https://ieeexplore.ieee.org/document/10539517).
[8] Koshi Eguchi, Hideya Ochiai, and Hiroshi Esaki, "MemWAFL: Efficient Model Aggregation for Wireless Ad Hoc Federated Learning in Sparse Dynamic Networks", IEEE Future Networks World Forum, 2023 (https://ieeexplore.ieee.org/document/10520500).
[9] Yoshihiko Ito, Hideya Ochiai, and Hiroshi Esaki, "Self-Organizing Hierarchical Topology in Peer-to-Peer Federated Learning: Strategies for Scalability, Robustness, and Non-IID Data", IEEE Future Networks World Forum, 2023 (https://ieeexplore.ieee.org/document/10520530).
[10] Ryusei Higuchi, Hiroshi Esaki, and Hideya Ochiai, "Collaborative Multi-Task Learning across Internet Edges with Device-to-Device Communications", IEEE Cybermatics Congress (SmartData), 2023 (https://ieeexplore.ieee.org/document/10501784).
[11] Yudai Ueda, Hideya Ochiai, Hiroshi Esaki, "Device-to-Device Collaborative Learning for Self-Localization with Previous Model Utilization", IEEE International Conference on Knowledge and Smart Technology, 2024 (https://ieeexplore.ieee.org/document/10499694).
[12] Atsuya Muramatsu, Hideya Ochiai, Hiroshi Esaki, "Tuning Personalized Models by Two-Phase Parameter Decoupling with Device-to-Device Communication", IEEE International Conference on Knowledge and Smart Technology, 2024 (Best Paper Award) (https://ieeexplore.ieee.org/document/10499649).
[13] Ryuhei Yamaguchi, Hideya Ochiai, "Tuning Detection Transformer with Device-to-Device Communication for Mission-Oriented Object Detection", IEEE WiMob, 2024 (https://ieeexplore.ieee.org/document/10770328).
[14] Ryusei Higuchi, Hiroshi Esaki, Hideya Ochiai, "Neuron Personalization of Collaborative Federated Learning via Device-to-Device Communications", IEEE WiMob, 2024 (https://ieeexplore.ieee.org/document/10770527).
[15] Kaito Tsuchiya, Hiroshi Esaki, Hideya Ochiai, "Top-K Difference Sparsification and Quantization for Communication-Efficient Model Aggregation in Wireless Ad Hoc Federated Learning", IEEE Conference on Artificial Intelligence, 2025 (in-press).
[16] Yudai Ueda, Hideya Ochiai, "Fully Decentralized Collaborative Learning for Visual Question Answering in Distributed Scenarios", IEEE Conference on Artificial Intelligence, 2025 (in-press).



