dc.contributor.author |
Kamran |
|
dc.date.accessioned |
2024-10-10T08:02:04Z |
|
dc.date.available |
2024-10-10T08:02:04Z |
|
dc.date.issued |
2024 |
|
dc.identifier.other |
330623 |
|
dc.identifier.uri |
http://10.250.8.41:8080/xmlui/handle/123456789/47182 |
|
dc.description |
Supervisor: Dr. Muhammad Imran
Co-Supervisor: Dr. Muhammad Shahzad Younis |
en_US |
dc.description.abstract |
Approximate Computing has emerged as a promising solution to address the increasing computational demands of modern applications by allowing controlled inaccuracies. This thesis explores the integration of Approximate Computing into Federated Learning (FL), a decentralized machine learning framework designed to protect data privacy. The proposed method introduces an Approximate Stochastic Gradient Descent (SGD) with Batch Averaging (BatchAvg) aggregation, reducing communication and computational costs while maintaining model performance. By utilizing techniques like Fixed Point with Error Compensation (FPEC) and BiScaled-DNN, the 32-bit floating-point weights are quantized to 8-bit representations, minimizing energy consumption and bandwidth usage. This approach mimics the effects of stragglers in FL, allowing resource-constrained devices to participate effectively in the learning process. Evaluations using the CIFAR-10 dataset demonstrate that this method achieves significant energy and bandwidth savings with only minimal impact on model accuracy. The results indicate the potential for Approximate Computing to improve the scalability and efficiency of Federated Learning, especially in edge computing environments. |
en_US |
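As an illustrative note on the quantization step the abstract describes: the sketch below is a minimal, hypothetical Python example of mapping 32-bit floating-point client weights to 8-bit integers before a FedAvg-style aggregation. It uses plain symmetric uniform quantization as a simplified stand-in for the FPEC and BiScaled-DNN schemes named in the abstract; all function names and parameters are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def quantize_int8(w: np.ndarray):
        # Symmetric uniform quantization of float32 weights to int8.
        # Simplified stand-in for FPEC / BiScaled-DNN; illustrative only.
        max_abs = np.abs(w).max()
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        # Recover approximate float32 weights from the 8-bit payload.
        return q.astype(np.float32) * scale

    # Hypothetical round: clients quantize before upload (8-bit payloads),
    # the server dequantizes and takes a FedAvg-style mean.
    rng = np.random.default_rng(0)
    client_weights = [rng.normal(size=1000).astype(np.float32) for _ in range(4)]
    uploads = [quantize_int8(w) for w in client_weights]
    recovered = [dequantize(q, s) for q, s in uploads]
    global_avg = np.mean(recovered, axis=0)
    err = np.mean([np.abs(w - r).mean() for w, r in zip(client_weights, recovered)])
    print("aggregated weight vector shape:", global_avg.shape)
    print("mean quantization error per weight:", err)

Each upload shrinks from 4 bytes to 1 byte per weight, which is the roughly 4x bandwidth reduction the abstract's 32-bit to 8-bit claim implies, at the cost of the small per-weight error printed above.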
dc.language.iso |
en |
en_US |
dc.publisher |
School of Electrical Engineering & Computer Science (SEECS), NUST |
en_US |
dc.subject |
Machine Learning, Deep Learning, Federated Learning, Approximate Computing, Decentralized Learning |
en_US |
dc.title |
Approximate Computing in Federated Learning Settings |
en_US |
dc.type |
Thesis |
en_US |