Yeong-Jun Seok's research while affiliated with Korea University of Technology and Education and other places

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically generated by ResearchGate to provide a record of this author's body of work. We create such pages to advance our goal of building and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (5)


Graph-Powered Reinforcement Learning for Intelligent Task Offloading in Vehicular Networks
  • Conference Paper
  • February 2024 · 20 Reads
  • Yeong-Jun Seok · [...]


Figures:
  • Algorithm 1: Training stage of the DDQNEC algorithm
  • The system model and structure of edge-cloud computing
  • The architecture of the DDQNEC scheme for task offloading and resource allocation
  • Task rejection comparison (a: small environment, b: large environment)
  • Resource utilization comparison (a: small environment, b: large environment)
  • +4 more figures

Optimizing task offloading and resource allocation in edge-cloud networks: a DRL approach
  • Article · Full-text available
  • July 2023 · 114 Reads · 7 Citations
  • Journal of Cloud Computing

Edge-cloud computing is an emerging approach in which tasks are offloaded from mobile devices to edge or cloud servers. However, task offloading may increase energy consumption and delay, and the offloading decision depends on various factors such as time-varying radio channels, available computation resources, and the locations of devices. Because edge-cloud computing is a dynamic and resource-constrained environment, making optimal offloading decisions is challenging. This paper aims to optimize offloading and resource allocation so as to minimize delay while meeting computation and communication requirements in edge-cloud computing. Optimizing task offloading in the edge-cloud environment is a multi-objective problem, for which we employ deep reinforcement learning to find the optimal solution. To accomplish this, we formulate the problem as a Markov decision process and use a Double Deep Q-Network (DDQN) algorithm. Our DDQN-edge-cloud (DDQNEC) scheme dynamically makes offloading decisions by analyzing resource utilization, task constraints, and the current status of the edge-cloud network. Simulation results demonstrate that DDQNEC outperforms heuristic approaches in terms of resource utilization, task offloading, and task rejection.
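The abstract describes formulating the offloading problem as a Markov decision process and solving it with a Double Deep Q-Network. The following is a minimal sketch of the Double DQN update at the core of such a scheme, written in PyTorch; the state dimension, action set, network sizes, and hyperparameters below are illustrative assumptions and are not taken from the DDQNEC paper.

```python
# Minimal Double DQN update sketch for an offloading-style decision problem.
# All sizes and constants are assumptions, not the DDQNEC authors' settings.
import torch
import torch.nn as nn

STATE_DIM = 8   # e.g. task size, deadline, channel quality, server loads (assumed)
N_ACTIONS = 3   # e.g. run locally, offload to edge, offload to cloud (assumed)
GAMMA = 0.99

def make_q_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on a mini-batch of transitions."""
    # Q-values of the actions actually taken.
    q_sa = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN: the online net selects the next action,
        # the target net evaluates it.
        best_next = online_net(next_states).argmax(dim=1, keepdim=True)
        q_next = target_net(next_states).gather(1, best_next).squeeze(1)
        target = rewards + GAMMA * (1.0 - dones) * q_next
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy mini-batch of 32 transitions, just to show the expected shapes.
loss = ddqn_update(torch.randn(32, STATE_DIM),
                   torch.randint(N_ACTIONS, (32,)),
                   torch.randn(32),
                   torch.randn(32, STATE_DIM),
                   torch.zeros(32))
```

In a DDQNEC-style setup the reward would encode delay, resource utilization, and task-rejection penalties, and a replay buffer plus periodic target-network synchronization (omitted here) would complete the training stage.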



Citations (2)


... Network Connectivity: Managing dynamic network conditions for efficient task transfer [15]. ...

Reference:
An Optimal Novel Approach for Dynamic Energy-Efficient Task Offloading in Mobile Edge-Cloud Computing Networks
Optimizing task offloading and resource allocation in edge-cloud networks: a DRL approach · Journal of Cloud Computing

... In an edge-cloud computing environment, it can be challenging to determine the optimal location for task offloading, as there are many factors to consider, including the computational capacity of edge servers, the transmission delays of networks, and the diverse requirements of end devices. Numerous studies have been conducted on computation offloading in edge-cloud networks [5][6][7][8][9]. However, due to the diverse requirements of end devices and the limited information available about wireless channels, bandwidth, and computing resources in edge-cloud networks, it is challenging to design an optimal offloading strategy. ...

Optimal Task Offloading with Deep Q-Network for Edge-Cloud Computing Environment
  • Citing Conference Paper
  • October 2022
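The citation context quoted above lists the main factors behind an offloading decision: server compute capacity, network transmission delay, and the requirements of end devices. Purely as a toy illustration of how those factors trade off (not taken from either cited paper; the site names, bandwidths, and CPU rates below are made up), a simple delay-minimizing heuristic might look like:

```python
# Toy offloading heuristic: estimated completion time =
# transmission delay + compute time at the chosen site.
# All numbers are illustrative assumptions.
def completion_time(task_cycles, task_bits, site):
    tx_delay = task_bits / site["bandwidth_bps"]             # network transfer
    compute_delay = task_cycles / site["cpu_cycles_per_s"]   # execution on that site
    return tx_delay + compute_delay

def choose_offload_site(task_cycles, task_bits, sites):
    # Pick the site (local device, edge server, cloud) with the lowest estimate.
    return min(sites, key=lambda s: completion_time(task_cycles, task_bits, s))

sites = [
    {"name": "local", "bandwidth_bps": float("inf"), "cpu_cycles_per_s": 1e9},
    {"name": "edge",  "bandwidth_bps": 50e6,         "cpu_cycles_per_s": 8e9},
    {"name": "cloud", "bandwidth_bps": 10e6,         "cpu_cycles_per_s": 40e9},
]
best = choose_offload_site(task_cycles=2e9, task_bits=4e6, sites=sites)
print(best["name"])
```

DRL approaches such as DDQNEC replace this kind of static estimate with a learned policy that also accounts for time-varying channels and the current load of the edge-cloud network.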