Published in: IEEE Transactions on Wireless Communications
By leveraging edge servers as intermediaries that perform partial model aggregation in proximity and relieve the transmission overhead on the core network, mobile edge computing offers great potential for low-latency and energy-efficient FL. Hence, we introduce a novel Hierarchical Federated Edge Learning (HFEL) framework in which model aggregation is partially migrated from the cloud to edge servers.
Key Words: Federated Learning, Resource Scheduling
MEC allows delay-sensitive and computation-intensive tasks to be offloaded from distributed mobile devices to nearby edge servers, offering real-time response and high energy efficiency.
We propose a novel Hierarchical Federated Edge Learning (HFEL) framework in which edge servers, typically co-deployed with base stations, act as intermediaries between mobile devices and the cloud and perform edge aggregation of the local models transmitted from nearby devices.
Once each edge server reaches a given learning accuracy, the updated edge models are transmitted to the cloud for global aggregation.
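The two-tier aggregation described above can be sketched as a hierarchical weighted average: each edge server first averages the local models of its associated devices, and the cloud then averages the resulting edge models. The sketch below is a minimal illustration of this structure; the device counts, sample sizes, and two-element model vectors are toy assumptions, not the paper's setup.

```python
import numpy as np

def fedavg(models, weights):
    """Weighted average of model parameter vectors (standard FedAvg rule)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

# Hypothetical setup: two edge servers, each aggregating local models
# from its associated devices, followed by one cloud aggregation.
edge1_locals = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]  # 2 devices
edge2_locals = [np.array([5.0, 6.0])]                        # 1 device

edge1_model = fedavg(edge1_locals, [100, 100])  # 100 samples each (assumed)
edge2_model = fedavg(edge2_locals, [200])

# Cloud aggregation, weighted by total samples under each edge server.
global_model = fedavg([edge1_model, edge2_model], [200, 200])
```

Because the averages are weighted by sample counts at both tiers, the result matches what a single flat FedAvg over all devices would produce.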
We still face the following challenges: (i) how to jointly allocate computation and communication resources for each device to achieve training acceleration and energy saving; and (ii) how to associate a proper set of devices with each edge server for efficient edge model aggregation.
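To make the first challenge concrete, a per-device cost in this setting is typically the sum of local-training energy/time and model-upload energy/time. The sketch below uses the dynamic CPU energy model and a Shannon-rate uplink commonly assumed in MEC analyses; all symbols and numbers are illustrative assumptions, not the paper's exact notation or parameters.

```python
import math

def local_computation(kappa, cycles_per_sample, samples, freq):
    """Energy (J) and time (s) of one local training round.

    kappa: effective switched capacitance of the device CPU (assumed).
    freq:  allocated CPU frequency in Hz -- a computation-resource decision.
    """
    total_cycles = cycles_per_sample * samples
    energy = kappa * freq**2 * total_cycles  # dynamic CPU energy model
    time = total_cycles / freq
    return energy, time

def uplink_transmission(power, bandwidth, gain, noise, model_bits):
    """Energy (J) and time (s) to upload the local model to the edge server.

    power/bandwidth: transmit power and allocated bandwidth --
    communication-resource decisions; gain/noise define the channel.
    """
    rate = bandwidth * math.log2(1 + power * gain / noise)  # Shannon rate
    time = model_bits / rate
    energy = power * time
    return energy, time

# Toy evaluation of one candidate allocation (all values assumed).
e_cmp, t_cmp = local_computation(1e-28, 20, 500, 1e9)
e_com, t_com = uplink_transmission(0.1, 1e6, 1e-7, 1e-9, 1e6)
```

Raising `freq` or `power` shortens the round but raises energy, which is exactly the latency-energy trade-off the joint resource allocation must balance.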