dc.description.abstract |
Federated Learning (FL) is a collaborative machine learning paradigm that enables users to train models on their own devices using their local data, while the aggregated model they receive back reflects the data of all participants. Each participating user trains the model locally and sends the updated weights to a server for aggregation. Given the diversity of devices involved, there has been growing interest in how offloading can improve FL's performance. Traditionally, offloading strategies have required all users to send the same number of model layers to the server for additional training. This approach, however, does not account for stragglers, leading to delays in the overall training process. Offloading also introduces privacy concerns, as sensitive information could potentially be compromised, which makes it harder to encourage users to participate in FL training. This thesis introduces a game-theoretic framework that balances the trade-offs among privacy protection, latency reduction, and energy efficiency, ultimately enhancing user participation in FL offloading. We propose an adaptive offloading strategy tailored to each user's privacy requirements, device capabilities, and energy consumption. Our results show that this adaptive approach surpasses traditional methods: it allows stragglers to offload more layers, ensuring that all users complete model training in a synchronized manner. We also examine how varying privacy levels affect users' offloading decisions. |
en_US |