Abstract:
In today's vehicular networks, where features such as autonomous driving, real-time traffic monitoring, and infotainment integration are becoming commonplace, there is a growing demand for substantial computational capability and highly optimized operation. Supporting this progress, Mobile Edge Computing (MEC) can process the exponentially increasing volume of data in real time and enhance the capabilities of vehicular networks by placing edge servers close to the vehicles, achieving very low latency. In this work, we propose the use of deep reinforcement learning (DRL) to reduce the service delay between the vehicle and the MEC-enabled vehicular network. We present a two-stage DRL-based resource management scheme.
First, as the initial step of computation offloading, the vehicle adjusts its transmit power using the Deep Deterministic Policy Gradient (DDPG) algorithm, which reduces the transmission delay. Second, once the data has been passed to the roadside unit (RSU), the scheme applies the Deep Q-Network (DQN) algorithm to determine the number of cores used to perform the computational work, reducing the time spent on computation. In addition, the scheme compares the vehicle's energy efficiency (EE) and spectral efficiency (SE) to evaluate the trade-off between them.
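For concreteness, under a widely used system model (the symbols below are illustrative assumptions, not necessarily the paper's exact notation), the two delay components targeted by the two stages can be written as

\[
T_{\text{service}} = \frac{D}{B \log_2\!\left(1 + \frac{p h}{\sigma^2}\right)} + \frac{C}{n f},
\]

where $D$ is the task size in bits, $B$ the channel bandwidth, $p$ the transmit power chosen by DDPG, $h$ the channel gain, $\sigma^2$ the noise power, $C$ the task's CPU-cycle requirement, $n$ the number of cores chosen by DQN, and $f$ the per-core frequency. The associated metrics are $\mathrm{SE} = \log_2(1 + p h / \sigma^2)$ in bit/s/Hz and $\mathrm{EE} = B\,\mathrm{SE}/p$ in bit/J; raising $p$ improves SE but generally degrades EE, which is why the two are compared.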
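The following minimal Python/PyTorch sketch illustrates one decision step of such a two-stage scheme. All names, dimensions, and value ranges (STATE_DIM, P_MIN, P_MAX, N_CORES) are hypothetical placeholders, and the networks are untrained; this shows the control structure only, not the paper's implementation.

# Sketch of the two-stage decision step. Stage 1: a DDPG-style actor maps
# the vehicle's local state to a continuous transmit power. Stage 2: a DQN
# at the RSU picks a discrete number of cores for the offloaded task.
import torch
import torch.nn as nn

P_MIN, P_MAX = 0.01, 1.0   # assumed transmit-power range in watts
N_CORES = 8                # assumed maximum cores an RSU server can allocate
STATE_DIM = 4              # assumed state: e.g. channel gain, queue, task size, speed

class DDPGActor(nn.Module):
    """Continuous policy: state -> transmit power in [P_MIN, P_MAX]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())  # tanh output lies in [-1, 1]

    def forward(self, state):
        # Rescale the tanh output to the feasible power range.
        return P_MIN + (self.net(state) + 1) * 0.5 * (P_MAX - P_MIN)

class DQN(nn.Module):
    """Discrete policy: RSU state -> Q-value for each core count 1..N_CORES."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_CORES))

    def forward(self, state):
        return self.net(state)

actor, qnet = DDPGActor(), DQN()
vehicle_state = torch.rand(1, STATE_DIM)   # placeholder observation
rsu_state = torch.rand(1, STATE_DIM)       # placeholder observation

power = actor(vehicle_state)                      # stage 1: continuous action
cores = qnet(rsu_state).argmax(dim=1).item() + 1  # stage 2: greedy discrete action
print(f"transmit power = {power.item():.3f} W, allocated cores = {cores}")

In a full DDPG/DQN pipeline these decisions would be refined by replay-buffer training against a delay-based reward; the greedy argmax would be replaced by an epsilon-greedy rule during exploration.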
To assess the performance of the proposed scheme, the results are compared with those of other strategies, namely always offload, never offload, random offload, and cooperative offloading based on a Deep Neural Network (DNN). Simulation results show that the proposed resource allocation strategy outperforms the benchmark approaches and improves overall system performance by decreasing the overall service delay.