NUST Institutional Repository

Towards efficient resource utilization in collaborative fog environment using multi-armed bandit.

dc.contributor.author Salee, Rabia
dc.date.accessioned 2023-07-26T13:39:08Z
dc.date.available 2023-07-26T13:39:08Z
dc.date.issued 2022
dc.identifier.other 275454
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/35194
dc.description Supervisor: Dr. Asad Waqar M en_US
dc.description.abstract Offloading and resource utilization in vehicular networks and smart cities have been an important problem due to the excessive load on vehicles despite the availability of multiple resources. These resources are located at the cloud or fog end of the network. Smart vehicles produce a large number of tasks owing to the multiple duties they perform on the road. Tasks that are CPU-intensive and require real-time results should not wait in a queue to be executed. An efficient offloading technique is therefore required that can utilize the resources of the network efficiently while ensuring task execution with lower waiting time and higher efficiency. Many existing offloading techniques have been implemented to solve this problem, but none of them attempt to solve it by making the system learn from its own behavior. Hence, in our proposed framework, we introduce an intelligent offloading system that generates rewards, based on certain parameters, for each entity included in the offloading decision. The multi-armed bandit, a reinforcement learning algorithm, is implemented on the fog federation. Fog nodes act as both the agent and the arms of the bandit, where rewards are assigned to each arm based on different parameters in different variants of the algorithm. After running the algorithm, the task is offloaded to the fog node generating the highest reward. We have also implemented the network without the multi-armed bandit algorithm and compared the results of six variants of the system. The aim of this research is to show that offloading and resource utilization can be improved if the system acts intelligently by learning from its past behavior and using that knowledge to make efficient offloading decisions. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST en_US
dc.title Towards efficient resource utilization in collaborative fog environment using multi-armed bandit. en_US
dc.type Thesis en_US
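
The abstract describes fog nodes acting as the arms of a multi-armed bandit, with each arm accumulating a reward estimate and tasks being offloaded to the node with the highest reward. As a rough illustration of that idea only, and not the thesis implementation, the sketch below shows a simple epsilon-greedy bandit over a set of fog nodes; the node names, the epsilon value, and the waiting-time-based reward are assumptions made purely for this example.

    # Illustrative sketch only: an epsilon-greedy multi-armed bandit treating each
    # fog node as an arm. Node names, the reward model, and all parameters below
    # are hypothetical assumptions, not taken from the thesis.
    import random

    class FogNodeBandit:
        def __init__(self, node_ids, epsilon=0.1):
            self.node_ids = list(node_ids)
            self.epsilon = epsilon                    # exploration probability
            self.counts = {n: 0 for n in node_ids}    # times each node was chosen
            self.values = {n: 0.0 for n in node_ids}  # running mean reward per node

        def select_node(self):
            # Explore a random fog node with probability epsilon, otherwise exploit
            # the node with the highest estimated reward.
            if random.random() < self.epsilon:
                return random.choice(self.node_ids)
            return max(self.node_ids, key=lambda n: self.values[n])

        def update(self, node, reward):
            # Incrementally update the running mean reward for the chosen node.
            self.counts[node] += 1
            self.values[node] += (reward - self.values[node]) / self.counts[node]

    def simulated_reward(node):
        # Hypothetical reward: shorter waiting time yields a higher reward.
        waiting_time = random.uniform(0.1, 1.0)       # stand-in for measured latency
        return 1.0 / waiting_time

    if __name__ == "__main__":
        bandit = FogNodeBandit(["fog-1", "fog-2", "fog-3"])
        for _ in range(1000):                         # offload 1000 tasks
            node = bandit.select_node()
            bandit.update(node, simulated_reward(node))
        print("Estimated rewards per fog node:", bandit.values)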


This item appears in the following Collection(s)

  • MS [376]
