
CN117938851A - DAG real-time task offloading optimization method based on edge computing - Google Patents

DAG real-time task offloading optimization method based on edge computing Download PDF

Info

Publication number
CN117938851A
CN117938851A CN202410087384.9A
Authority
CN
China
Prior art keywords
time
task
sub
real
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410087384.9A
Other languages
Chinese (zh)
Inventor
龙林波
邓姚
沈靖程
刘智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202410087384.9A priority Critical patent/CN117938851A/en
Publication of CN117938851A publication Critical patent/CN117938851A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/56Queue scheduling implementing delay-aware scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract


The present invention belongs to the technical field of task offloading, and specifically relates to a DAG real-time task offloading optimization method based on edge computing; the method comprises: an SDN controller receives real-time tasks to be offloaded sent by a user, and constructs an earliest task completion time model; the execution time of tasks on different servers is calculated to obtain an initial task offloading strategy; all tasks are offloaded to a waiting queue in a server according to the initial task offloading strategy, and real-time tasks with dependencies in the waiting queue are executed; the remaining real-time tasks in the waiting queue in the server are sorted according to the remaining tolerance delay and a set of tasks to be offloaded for a second time is constructed; a secondary offloading strategy is obtained according to the remaining tolerance delay of the real-time tasks, and the secondary offloading of tasks is executed according to the secondary offloading strategy; the server executes the real-time tasks in the waiting queue to complete the offloading execution of the DAG real-time tasks; the present invention improves the system utilization rate and reduces the congestion of the server when a large number of tasks are executed.

Description

DAG real-time task offloading optimization method based on edge computing
Technical Field
The invention belongs to the technical field of task offloading, and particularly relates to a DAG real-time task offloading optimization method based on edge computing.
Background
With the rapid arrival of the era of ubiquitous interconnection and the spread of wireless networks, the number of devices at the network edge has grown rapidly. Edge computing (EC), an emerging distributed computing architecture, provides services close to the user side; this proximity ensures lower network latency and better satisfies the low-latency requirements of Internet of Things applications. However, real-time processing remains a major challenge in task offloading, and the real-time processing requirements arising from unstructured data and large volumes of real-time data are an urgent issue to be addressed. Achieving real-time offloading in a resource-limited edge environment is a major research goal.
To fully exploit the advantages of edge computing and further optimize application latency, many research works have proposed new edge computing task models; by modeling and optimizing around the DAG task model, task parallelism is increased and processing latency is reduced. Offloading DAG real-time tasks requires simultaneously considering the execution delays of multiple tasks on servers and the communication delays incurred when task data is transferred between servers. Some researchers have proposed new multi-user resource allocation schemes and application-level adjustment decisions to achieve near-real-time processing of tasks. However, these schemes still do not effectively meet the low-latency requirements of the real-time task offloading problem.
When an application performs real-time offloading in a scenario with multiple heterogeneous edge servers, how can the completion time of the offloading policy for DAG application subtasks with dependencies be guaranteed to be minimal? When a large number of real-time tasks are offloaded onto the same server, how can the task waiting and blocking caused by limited server resources be avoided? And how can servers cooperate to improve the utilization of edge computing resources? These are the main problems to be solved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a DAG real-time task offloading optimization method based on edge computing, which comprises the following steps:
S1: the SDN controller receives real-time tasks to be offloaded sent by a user, constructs a DAG real-time task flow chart, and constructs an earliest task completion time model from the DAG real-time task flow chart;
S2: the execution time of the tasks on different servers is calculated to obtain a primary task offloading policy; all tasks are offloaded to waiting queues in the servers according to the primary task offloading policy;
S3: the real-time tasks with dependency relations in the waiting queues are executed according to the earliest task completion time model;
S4: the remaining tolerable delay of the tasks is calculated, the remaining real-time tasks in each server's waiting queue are sorted by remaining tolerable delay, and a set of tasks to be secondarily offloaded is constructed;
S5: a secondary offloading policy is obtained from the remaining tolerable delay of the real-time tasks in the secondary offloading task set, and secondary task offloading is executed according to the secondary offloading policy;
S6: the servers execute the real-time tasks in the waiting queues, completing the offloading execution of the DAG real-time tasks.
Preferably, the earliest task completion time model is expressed as:
F_i = min{ F_max_i + D_i + p_i }
where F_i represents the earliest completion time of task R_i, F_max_i the earliest completion time of the latest-finishing predecessor of task R_i, D_i the transmission delay of the task, and p_i the computation delay of the task.
Preferably, obtaining the primary task offloading policy comprises:
the SDN controller periodically obtains the execution rate of each server and the data size of its unfinished tasks; the execution time of a real-time task on different servers is calculated from the task's data size, the servers' execution rates, and the unfinished data sizes, and the server with the minimum execution time is selected as the offloading server.
Further, the execution time of a task on a server is calculated as:
T_{i,j} = p_{i,j} + W_j / q_j
where T_{i,j} represents the execution time of task R_i on server ES_j, p_{i,j} the computation delay of task R_i on server ES_j, W_j the unfinished task data size on server ES_j, and q_j the execution rate of server ES_j.
Preferably, constructing the set of tasks to be secondarily offloaded comprises: selecting, from the remaining real-time tasks, the real-time tasks that cannot be completed within the defined tolerable delay, and sorting these tasks by remaining tolerable delay to obtain the set of tasks to be secondarily offloaded.
Preferably, obtaining the secondary offloading policy comprises: judging, from the remaining tolerable delay of the real-time tasks in the secondary offloading task set, whether each server satisfies the delay condition; if so, the server with the minimum sum of transmission delay and computation delay is selected as the offloading server.
Further, the delay condition is:
d_i / v_{j,k} + W_k / q_k + p_{i,k} ≤ T_i′
where p_{i,k} represents the computation delay of task R_i on server ES_k, d_i the amount of data task R_i needs to process, v_{j,k} the data transmission rate between servers ES_j and ES_k, W_k the unfinished task data size on server ES_k, and q_k the execution rate of server ES_k; T_i′ represents the remaining tolerable delay of task R_i.
The beneficial effects of the invention are as follows:
(1) While satisfying real-time processing of tasks, the invention optimizes the data communication delay of DAG tasks. Processing follows the data dependencies among tasks; an integer programming formulation with real-time requirements as constraints minimizes the real-time task completion time, yielding an optimized offloading decision that effectively maps servers to DAG real-time tasks.
(2) In edge-environment offloading scenarios with a large number of real-time tasks, the invention adopts multi-server cooperative processing, which improves system utilization, reduces server congestion when many tasks are executed, and satisfies the real-time processing of tasks.
Drawings
FIG. 1 is a schematic diagram of the DAG real-time task offloading optimization method based on edge computing of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a DAG real-time task offloading optimization method based on edge computing, as shown in FIG. 1, comprising the following steps:
S1: the SDN controller receives real-time tasks to be offloaded sent by a user, builds a DAG real-time task flow chart, and builds an earliest task completion time model from the DAG real-time task flow chart.
The SDN controller periodically obtains initial information about the edge computing server nodes. A complete graph G_t = (ES, E, v) represents the communication relationships among the edge servers in the edge computing environment, where the vertex set ES = {ES_1, ..., ES_j, ..., ES_J} comprises the edge server computing nodes; E is the set of edges of G_t, with element e_{x,y} denoting the edge connecting vertices ES_x and ES_y; and v is the set of edge weights, i.e., the transmission rates between edge server computing nodes, v = {v_1, ..., v_j, ..., v_J}, the data transmission rate from ES_x to ES_y being v_{x,y}. The waiting-queue information of the edge computing server nodes is obtained at the same time.
The user sends real-time tasks to the SDN, and the SDN controller receives the real-time tasks to be offloaded and constructs a DAG real-time task flow chart with I nodes. Specifically: the dependency relationships of the tasks are analyzed to obtain, for each task, its predecessor tasks and the amount of data each predecessor must transmit. The dependencies are represented by an I×I matrix pro, where pro_{i,x} = u_x indicates that task R_i needs to receive u_x units of transmitted data from predecessor task R_x; when pro_{i,x} = 0, R_i does not need to receive data from R_x, i.e., R_x is not a predecessor of R_i. A task may have multiple predecessor tasks, and it can start executing only after all of its predecessors have finished. The DAG real-time task flow chart is constructed from these dependencies; a virtual start task R_0 is linked to every actual start task, and every actual end task points to a virtual end task R_{I+1}, so as to model the actual requirements. The processing time of the virtual start and end tasks is 0, and they are placed on the same server.
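The dependency-matrix and virtual start/end construction described above can be sketched in Python. This is a minimal illustration under our own assumptions: the function name `build_dag`, the list-based predecessor storage, and the index mapping are illustrative, not the patent's implementation.

```python
# Sketch of the DAG construction step: 'pro' is the I x I dependency
# matrix, where pro[i][x] > 0 means the task in row i receives data from
# the task in column x. Rows/columns 0..I-1 map to tasks R_1..R_I;
# id 0 is the virtual start task R_0, id I+1 the virtual end task R_{I+1}.

def build_dag(pro):
    I = len(pro)
    preds = {i: [] for i in range(I + 2)}     # task id -> predecessor ids
    for i in range(I):
        for x in range(I):
            if pro[i][x] > 0:                 # R_{x+1} precedes R_{i+1}
                preds[i + 1].append(x + 1)
    for i in range(1, I + 1):
        if not preds[i]:                      # actual start task: link R_0
            preds[i].append(0)
    # any task that is nobody's predecessor points to the virtual end task
    has_successor = {p for lst in preds.values() for p in lst}
    for i in range(1, I + 1):
        if i not in has_successor:
            preds[I + 1].append(i)
    return preds
```

Both virtual tasks carry zero processing time, matching the construction in the text, so they do not change any completion time.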
An earliest task completion time model is constructed from the DAG real-time task flow chart. Specifically:
The task earliest completion time model is expressed as:
F_i = min{ F_max_i + D_i + p_i }
where F_i represents the earliest completion time of task R_i, D_i the transmission delay of the task, p_i the computation delay of the task, and F_max_i the earliest completion time of the latest-finishing predecessor of task R_i:
F_max_i = max{ F_x | pro_{i,x} > 0 }
where F_x represents the earliest completion time of predecessor task R_x.
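As a hedged sketch, the model F_i = F_max_i + D_i + p_i can be evaluated by a single pass over the tasks in topological order; the function signature and the dict-based inputs below are illustrative assumptions, not the patent's data structures.

```python
def earliest_completion_times(preds, D, p, order):
    """F_i = F_max_i + D_i + p_i, where F_max_i is the largest earliest
    completion time among R_i's predecessors (0 if it has none).

    preds: task id -> list of predecessor ids
    D, p:  task id -> transmission / computation delay
    order: a topological order of the task ids
    """
    F = {}
    for i in order:
        F_max = max((F[x] for x in preds.get(i, [])), default=0.0)
        F[i] = F_max + D.get(i, 0.0) + p.get(i, 0.0)
    return F
```

Because each task is visited after all of its predecessors, every F_max_i is already available when F_i is computed.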
S2: calculating the execution time of the task on different servers to obtain a primary task unloading strategy; and unloading all tasks to a waiting queue in the server according to the primary task unloading strategy.
The DAG real-time task offloading policy is X = {X_1, ..., X_i, ..., X_n}, where X_i = {x_1, ..., x_j, ..., x_m} and x_j ∈ {0, 1}; 0 means not offloaded and 1 means offloaded. The SDN controller periodically obtains the execution rate of each server and the data size of its unfinished tasks, and calculates the execution time of a real-time task on different servers from the task's data size, the server's execution rate, and the unfinished data size. The execution time of task R_i on server ES_j is:
T_{i,j} = p_{i,j} + W_j / q_j = (d_i + W_j) / q_j
where p_{i,j} = d_i / q_j represents the computation delay of task R_i, W_j the unfinished task data size on server ES_j, d_i the data size of task R_i, and q_j the execution rate of server ES_j.
The server with the minimum execution time is selected as the offloading server, yielding the primary task offloading policy. Under this policy, the execution time of a task on its offloading server is:
T_i = Σ_{j=1}^{m} X_{i,j} · T_{i,j}
where T_i represents the execution time of task R_i, m the number of servers, and X_{i,j} indicates whether task R_i is offloaded to server ES_j.
All tasks are offloaded to the waiting queues in the servers according to the primary task offloading policy. A real-time task enters a waiting queue of length L in the target edge server computing node to await execution; the queue length is updated as L = L′ + 1, where L′ denotes the queue length before the update.
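The primary offloading decision can be sketched as an argmin over servers. Note that the execution-time formula T_{i,j} = (d_i + W_j) / q_j is our reconstruction from the symbol definitions in the text, and the data-structure choices are illustrative.

```python
def primary_offload(d_i, servers):
    """Select the offloading server for a task of data size d_i.

    servers: server id -> (W_j, q_j), i.e. the unfinished data size
    queued on that server and its execution rate.
    Returns (best server id, its execution time for this task).
    """
    def exec_time(W, q):
        # queued work W/q plus this task's own computation delay d_i/q
        return (W + d_i) / q

    best = min(servers, key=lambda j: exec_time(*servers[j]))
    return best, exec_time(*servers[best])
```

For example, a lightly loaded but slow server can lose to a busier, faster one, since the selection weighs queued work and execution rate together.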
S3: and executing the real-time tasks with the dependency relationship in the waiting queue according to the earliest task completion time model.
And acquiring an execution sequence corresponding to the real-time tasks with the dependency relationships according to the earliest task completion time model, and executing the real-time tasks with the dependency relationships by the server.
S4: calculating the residual tolerance time delay of the tasks, sequencing the residual real-time tasks in the waiting queue in the server according to the residual tolerance time delay, and constructing a task set to be secondarily offloaded.
After the primary unloading execution, tasks from different DAG flowcharts or tasks without data dependency relationship exist in a waiting queue of the same server, and the tasks can be optimized for the execution sequence again in the actual unloading; specific:
The remaining tolerable delay of a task is calculated as:
T_i′ = θ_i − D_i
where T_i′ represents the remaining tolerable delay of task R_i, θ_i the defined tolerable delay of task R_i, and D_i the transmission delay of task R_i.
The remaining real-time tasks in the server's waiting queue are sorted by remaining tolerable delay, with smaller remaining tolerable delay ranked earlier. From these remaining tasks, the real-time tasks that cannot be completed within the defined tolerable delay are selected and likewise sorted in ascending order of remaining tolerable delay, yielding the set of tasks to be secondarily offloaded.
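The construction of the secondary offloading set can be sketched as below; the per-task estimate of whether it can finish in time (`est_finish`) is an illustrative stand-in for the server-side check the text describes.

```python
def secondary_offload_set(tasks):
    """tasks: task id -> (theta_i, D_i, est_finish), i.e. the defined
    tolerable delay, the transmission delay, and the estimated time to
    finish on the current server.

    Remaining tolerance is T_i' = theta_i - D_i; tasks that cannot meet
    it are returned sorted ascending by T_i' (most urgent first).
    """
    unmet = []
    for i, (theta, D, est_finish) in tasks.items():
        remaining = theta - D
        if est_finish > remaining:        # cannot complete within T_i'
            unmet.append((remaining, i))
    unmet.sort()                          # smallest remaining delay first
    return [i for _, i in unmet]
```

Tasks that can still meet their deadline stay in the local queue and are not considered for secondary offloading.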
S5: and obtaining a secondary unloading strategy according to the residual tolerance time delay of the real-time task in the secondary unloading task set, and executing task secondary unloading according to the secondary unloading strategy.
Acquiring real-time information of each server so as to calculate task calculation time delay, task residual tolerance time delay and transmission time delay among the servers; screening servers meeting the time delay conditions according to the residual tolerance time delay of the real-time tasks in the secondary unloading task set, and selecting a server with the minimum sum of the transmission time delay and the calculation time delay as an unloading server to obtain a secondary unloading strategy; and if the server meeting the condition does not exist, canceling the secondary unloading. Preferably, the delay conditions are:
Wherein, p i,k represents the calculation time delay of the task R i on the server ES k, d i represents the data amount required to be processed by the task R i, v j,k represents the data transmission rate between the server ES j and the server ES k, W k represents the unfinished task data size on the server ES k, and q k represents the execution rate of the server ES k; t i' represents task R i.
And executing task secondary unloading according to the secondary unloading strategy, and sequentially unloading real-time tasks in the secondary unloading task set to the waiting queues of the corresponding target servers.
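The secondary offloading decision can then be sketched as follows. The delay condition d_i/v_{j,k} + W_k/q_k + p_{i,k} <= T_i' is our reconstruction from the symbol list; taking p_{i,k} = d_i/q_k and ranking candidates by the full left-hand side are additional assumptions.

```python
def secondary_offload(d_i, T_rem, j, servers, v):
    """Pick a target server ES_k for a task currently queued on ES_j.

    servers: server id -> (W_k, q_k); v: (j, k) -> transmission rate.
    Returns the qualifying server with the smallest total delay, or
    None, in which case the secondary offloading is cancelled.
    """
    best, best_cost = None, None
    for k, (W, q) in servers.items():
        if k == j:
            continue
        p_ik = d_i / q                          # assumed computation delay on ES_k
        cost = d_i / v[(j, k)] + W / q + p_ik   # transmission + waiting + computation
        if cost <= T_rem and (best_cost is None or cost < best_cost):
            best, best_cost = k, cost
    return best
```

Returning None mirrors the text's fallback: when no server can meet the remaining tolerable delay, the task simply stays where it is.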
S6: and the server executes the real-time tasks in the waiting queue to finish the unloading execution of the DAG real-time tasks.
And the server executes the real-time tasks in the waiting queue, finishes the unloading execution of the DAG real-time tasks, and obtains the minimum time delay after the DAG real-time tasks are optimized, wherein the minimum completion time of the DAG real-time tasks is the completion time of the ending tasks in the DAG real-time task flow chart.
In summary, in the large-scale DAG real-time task unloading scene of edge computing processing, the task unloading efficiency is improved, the transmission delay of the task in the unloading process is considered, the task with large computing capacity or low priority is cooperatively processed with other servers on the edge server with limited resources, congestion is avoided, and real-time processing of the task is ensured.
The foregoing describes embodiments, aspects, and advantages of the present invention. It will be understood that these embodiments are merely exemplary; any changes, substitutions, or alterations made without departing from the spirit and principles of the invention fall within its scope.

Claims (7)

1. A DAG real-time task offloading optimization method based on edge computing, characterized by comprising:
S1: an SDN controller receives real-time tasks to be offloaded sent by a user and constructs a DAG real-time task flow chart, and constructs an earliest task completion time model from the DAG real-time task flow chart;
S2: the execution time of the tasks on different servers is calculated to obtain a primary task offloading policy, and all tasks are offloaded to waiting queues in the servers according to the primary task offloading policy;
S3: the real-time tasks with dependency relations in the waiting queues are executed according to the earliest task completion time model;
S4: the remaining tolerable delay of the tasks is calculated, the remaining real-time tasks in each server's waiting queue are sorted by remaining tolerable delay, and a set of tasks to be secondarily offloaded is constructed;
S5: a secondary offloading policy is obtained from the remaining tolerable delay of the real-time tasks in the secondary offloading task set, and secondary task offloading is executed according to the secondary offloading policy;
S6: the servers execute the real-time tasks in the waiting queues, completing the offloading execution of the DAG real-time tasks.
2. The DAG real-time task offloading optimization method based on edge computing according to claim 1, characterized in that the earliest task completion time model is expressed as:
F_i = min{ F_max_i + D_i + p_i }
where F_i represents the earliest completion time of task R_i, F_max_i the earliest completion time of the latest-finishing predecessor of task R_i, D_i the transmission delay of the task, and p_i the computation delay of the task.
3. The DAG real-time task offloading optimization method based on edge computing according to claim 1, characterized in that obtaining the primary task offloading policy comprises: the SDN controller periodically obtains the execution rate of each server and the data size of its unfinished tasks; the execution time of a real-time task on different servers is calculated from the task's data size, the servers' execution rates, and the unfinished data sizes, and the server with the minimum execution time is selected as the offloading server.
4. The DAG real-time task offloading optimization method based on edge computing according to claim 3, characterized in that the execution time of a task on a server is calculated as:
T_{i,j} = p_{i,j} + W_j / q_j
where T_{i,j} represents the execution time of task R_i on server ES_j, p_{i,j} the computation delay of task R_i on server ES_j, W_j the unfinished task data size on server ES_j, and q_j the execution rate of server ES_j.
5. The DAG real-time task offloading optimization method based on edge computing according to claim 1, characterized in that constructing the set of tasks to be secondarily offloaded comprises: selecting, from the remaining real-time tasks, the real-time tasks that cannot be completed within the defined tolerable delay, and sorting them by remaining tolerable delay to obtain the set of tasks to be secondarily offloaded.
6. The DAG real-time task offloading optimization method based on edge computing according to claim 1, characterized in that obtaining the secondary offloading policy comprises: judging, from the remaining tolerable delay of the real-time tasks in the secondary offloading task set, whether each server satisfies the delay condition, and if so, selecting the server with the minimum sum of transmission delay and computation delay as the offloading server.
7. The DAG real-time task offloading optimization method based on edge computing according to claim 6, characterized in that the delay condition is:
d_i / v_{j,k} + W_k / q_k + p_{i,k} ≤ T_i′
where p_{i,k} represents the computation delay of task R_i on server ES_k, d_i the amount of data task R_i needs to process, v_{j,k} the data transmission rate between servers ES_j and ES_k, W_k the unfinished task data size on server ES_k, q_k the execution rate of server ES_k, and T_i′ the remaining tolerable delay of task R_i.
CN202410087384.9A 2024-01-22 2024-01-22 DAG real-time task unloading optimization method based on edge calculation Pending CN117938851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410087384.9A CN117938851A (en) 2024-01-22 2024-01-22 DAG real-time task unloading optimization method based on edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410087384.9A CN117938851A (en) 2024-01-22 2024-01-22 DAG real-time task unloading optimization method based on edge calculation

Publications (1)

Publication Number Publication Date
CN117938851A true CN117938851A (en) 2024-04-26

Family

ID=90755040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410087384.9A Pending CN117938851A (en) 2024-01-22 2024-01-22 DAG real-time task unloading optimization method based on edge calculation

Country Status (1)

Country Link
CN (1) CN117938851A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515324A (en) * 2021-07-16 2021-10-19 广东工业大学 A collaborative edge computing method, device, electronic device and storage medium for offloading decision-making based on directed acyclic graph
US20220232423A1 (en) * 2022-03-25 2022-07-21 Intel Corporation Edge computing over disaggregated radio access network functions
CN115695424A (en) * 2022-10-27 2023-02-03 北京师范大学珠海校区 Dependent task online unloading method based on cooperative edge computing
CN115883561A (en) * 2022-12-01 2023-03-31 重庆邮电大学 Safety scheduling method for DAG task flow in edge computing
WO2023073403A1 (en) * 2021-10-27 2023-05-04 Telefonaktiebolaget Lm Ericsson (Publ) Data transfer scheduling
CN116521345A (en) * 2023-05-18 2023-08-01 重庆邮电大学空间通信研究院 Joint scheduling and unloading method based on task dependency relationship
CN116709378A (en) * 2023-05-04 2023-09-05 华南理工大学 Task Scheduling and Resource Allocation Method Based on Federated Reinforcement Learning in Internet of Vehicles

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515324A (en) * 2021-07-16 2021-10-19 广东工业大学 A collaborative edge computing method, device, electronic device and storage medium for offloading decision-making based on directed acyclic graph
WO2023073403A1 (en) * 2021-10-27 2023-05-04 Telefonaktiebolaget Lm Ericsson (Publ) Data transfer scheduling
US20220232423A1 (en) * 2022-03-25 2022-07-21 Intel Corporation Edge computing over disaggregated radio access network functions
CN115695424A (en) * 2022-10-27 2023-02-03 北京师范大学珠海校区 Dependent task online unloading method based on cooperative edge computing
CN115883561A (en) * 2022-12-01 2023-03-31 重庆邮电大学 Safety scheduling method for DAG task flow in edge computing
CN116709378A (en) * 2023-05-04 2023-09-05 华南理工大学 Task Scheduling and Resource Allocation Method Based on Federated Reinforcement Learning in Internet of Vehicles
CN116521345A (en) * 2023-05-18 2023-08-01 重庆邮电大学空间通信研究院 Joint scheduling and unloading method based on task dependency relationship

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOBO ZHOU ET AL.: "DAG-Based Dependent Tasks Offloading in MEC-Enabled IoT With Soft Cooperation", IEEE, 30 October 2023 (2023-10-30) *
LYU JIENA; ZHANG JIABO; ZHANG ZUFAN; GAN CHENQUAN: "A Survey of Offloading Strategies for Mobile Edge Computing" (in Chinese), Journal of Chinese Computer Systems, no. 09, 4 September 2020 (2020-09-04) *

Similar Documents

Publication Publication Date Title
CN113220356B (en) User computing task unloading method in mobile edge computing
CN112799823B (en) Online dispatching and scheduling method and system for edge computing tasks
CN109561148B (en) Distributed task scheduling method based on directed acyclic graph in edge computing network
CN113950103A (en) Multi-server complete computing unloading method and system under mobile edge environment
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN112000388B (en) Concurrent task scheduling method and device based on multi-edge cluster cooperation
CN109656703A (en) A kind of mobile edge calculations auxiliary vehicle task discharging method
CN110928651B (en) A fault-tolerant scheduling method for service workflow in mobile edge environment
CN111782627B (en) Task and data cooperative scheduling method for wide-area high-performance computing environment
CN111711962B (en) A method for coordinated scheduling of subtasks in mobile edge computing systems
CN110096362A (en) A kind of multitask discharging method based on Edge Server cooperation
CN116366576A (en) Computing power network resource scheduling method, device, equipment and medium
CN113377516A (en) Centralized scheduling method and system for unloading vehicle tasks facing edge computing
CN113626104A (en) Multi-objective optimization unloading strategy based on deep reinforcement learning under edge cloud architecture
Zhao et al. Dynamic caching dependency-aware task offloading in mobile edge computing
CN114172558B (en) A task offloading method based on edge computing and UAV cluster collaboration in vehicle networks
CN116954866A (en) Edge cloud task scheduling method and system based on deep reinforcement learning
CN110888745A (en) A MEC Node Selection Method Considering Task Transmission Arrival Time
CN118708343A (en) A service migration and resource optimization method based on imitation learning
CN118055160A (en) Edge computing server task allocation system and method
CN120658307A (en) Low orbit satellite network communication calculation integrated method and simulation realization system
CN109298932B (en) Resource scheduling method, scheduler and system based on OpenFlow
CN118301666B (en) QoE-aware mobile assisted edge service method, system and device
CN117938851A (en) DAG real-time task unloading optimization method based on edge calculation
CN119052273B (en) A method and apparatus for offloading vehicle-side collaborative computing tasks based on deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination