dc.description.abstract |
As vehicular networks increasingly rely on real-time applications such as autonomous
driving and intelligent transportation systems (ITS), Mobile Edge Computing (MEC)
has emerged as a crucial technology for reducing latency by offloading computational
tasks to nearby servers. However, optimizing service delays in MEC-enabled vehicular
networks remains a significant challenge due to the limited computational capacity of
vehicles and the dynamic nature of the environment. Existing research has primarily
focused on task offloading strategies and single-core RSU resource allocation, but these
approaches often fail to adequately address propagation and computational delays without
requiring extensive RSU cooperation. To address these issues, prior studies introduced a
dual-stage deep reinforcement learning (DRL)-based framework that uses deep deterministic
policy gradient (DDPG) to optimize transmit power for minimizing the overall service
delay and a deep Q-network (DQN) for core allocation. In this paper, we extend the dual-stage
DRL approach by incorporating advanced DRL algorithms—prioritized experience
replay DDPG (PER-DDPG), combined experience replay DDPG (CER-DDPG), twin
delayed DDPG (TD3), and proximal policy optimization (PPO)—to enhance power optimization
and reduce propagation delays. Additionally, we introduce a comparative
analysis of core allocation algorithms, including dueling double DQN (DDDQN) and
double DQN (DDQN), to evaluate computational delay with varying core counts. Our
findings show that the dueling double DQN (DDDQN) offers the most efficient core allocation, leading to
lower computational delays. This research fills a critical gap in the literature, showcasing
how advanced DRL techniques can significantly improve resource-constrained vehicular
networks and guide the optimization of MEC systems for more responsive and scalable
ITS solutions. |
en_US |