CN113326140B - Process migration method, device, computing device and storage medium - Google Patents


Info

Publication number
CN113326140B
Authority
CN
China
Prior art keywords
load
processor
task
active
computing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110738697.2A
Other languages
Chinese (zh)
Other versions
CN113326140A
Inventor
叶中玉
周鹏
余昇锦
胡翔
Current Assignee
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd
Priority claimed from application CN202110738697.2A
Publication of CN113326140A
PCT filing PCT/CN2021/124293 (published as WO2023273015A1)
Application granted
Publication of CN113326140B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load


Abstract

The invention discloses a process migration method, apparatus, computing device, and storage medium. The process migration method is executed in a computing device and comprises the following steps: classifying each process on a processor as an active process or an inactive process based on the real load of each process; judging whether an active process is the only active process on the processor; if it is not the only active process, determining the task hierarchical loads of the processes in turn; and migrating a process to another processor when its task hierarchical load satisfies a preset condition.

Description

Process migration method, device, computing device and storage medium
Technical Field
The present invention relates to the field of the Internet, and in particular to a process migration method, apparatus, computing device, and storage medium.
Background
In a multi-core SMP (symmetric multiprocessing) system, reasonable task scheduling is an important precondition for exploiting the potential of the multi-core system. In multi-core scheduling, each processor runs its own process queue, and a runnable process can be moved to another processor's run queue to achieve load balancing among the processors, avoiding the situation in which some processors are busy while others are idle.
Current load-balancing implementations use the task hierarchical load (task_h_load), i.e., the load contribution of the current process to the current processor, and judge whether a task satisfies the migration condition according to the size of its task hierarchical load. However, the existing process migration method has the following problem: in a full-load scenario, a high-load process may be migrated across processors, or even across memory nodes, under the influence of a background low-load process, causing serious cache invalidation and degrading the normal performance of the high-load process.
Disclosure of Invention
The present invention has been made in view of the above problems, and provides a process migration method, apparatus, computing device, and storage medium that overcome, or at least partially solve, the above problems.
According to one aspect of the present invention, there is provided a process migration method, executed in a computing device, the method comprising: classifying each process on a processor as an active process or an inactive process based on the real load of each process; judging whether an active process is the only active process on the processor; if it is not the only active process, determining the task hierarchical loads of the processes in turn; and migrating a process to another processor when its task hierarchical load satisfies a preset condition.
Optionally, in the process migration method according to the present invention, if the process is not the only active process, the step of determining the task hierarchical loads of the processes in turn comprises: polling a linked list of the processes stored in the processor to obtain the order in which the processes joined; and determining the task hierarchical loads of the processes in that join order, from back to front.
Optionally, in the process migration method according to the present invention, after the step of classifying each process as active or inactive based on its real load in the processor, the method further comprises: counting the number of active processes in the processor.
Optionally, in the process migration method according to the present invention, the step of calculating the real load comprises: obtaining time information of each process in the working state and in the non-working state respectively; and calculating the real load of each process based on the time information.
Optionally, in the process migration method according to the present invention, the step of calculating the task hierarchical load comprises: obtaining the number of processors over which the process's group is distributed; obtaining the process's load within that group; and taking the ratio of the process load to the processor count as the task hierarchical load of the process.
Optionally, in the process migration method according to the present invention, the step of classifying each process as active or inactive based on its real load in the processor comprises: if the real load of a process is greater than a preset load threshold, determining that the process is an active process; otherwise, determining that the process is an inactive process.
Optionally, in the process migration method according to the present invention, the step of migrating a process to another processor when its task hierarchical load satisfies the preset condition comprises: judging whether the task hierarchical load of the process is less than half of the load imbalance value; and if so, migrating the process to another processor.
According to still another aspect of the present invention, there is provided a process migration apparatus comprising: a process state determining module adapted to classify each process as an active process or an inactive process based on its real load in the processor; a judging module adapted to judge whether an active process is the only active process in the processor; a process task hierarchical load determining module adapted to determine the task hierarchical loads of the processes in turn; and a process migration module adapted to migrate a process to another processor.
According to yet another aspect of the present invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be adapted to be executed by the at least one processor, the program instructions comprising instructions for performing the above-described method.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the above-described method.
According to the scheme of the present invention, the dual effects of a task's real load and its task hierarchical load are considered together: when a processor carries only one process with a high real load, a low-load process newly added to the processor is migrated preferentially, the high-real-load process is not migrated, and its normal performance is thereby guaranteed.
According to the scheme of the present invention, in a full-thread use case the influence of background processes on the main process is reduced; the main process is no longer migrated across processors, or even across memory nodes, because of background activity. Cache utilization is then at its highest, and program performance at its best.
The foregoing description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may be more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 illustrates a schematic diagram of a principle 100 of process migration;
FIG. 2 illustrates a flow chart of a prior art process migration method 200;
FIG. 3 shows a schematic diagram of a computing device 300 according to one embodiment of the invention;
FIG. 4 illustrates a flow diagram of a process migration method 400 according to one embodiment of the invention.
Fig. 5 illustrates a block diagram of a process migration apparatus 500 according to one embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An important part of load balancing is selecting an appropriate process in a processor's process queue for migration. Specifically, the load of a processor's process queue is the sum of the loads of all processes on that queue; the load of one process is related to its actual running time, and broadly, the longer the continuous running time, the higher the load. The goal of load balancing is therefore to make the most of processor resources so that every process gets sufficient processor time. To achieve this, an appropriate process (typically, a process with a smaller load is more likely to satisfy the migration condition) must be selected from a busy processor, one whose run queue holds relatively many processes with a relatively large total load, and migrated to a relatively idle processor.
As shown in fig. 1, which illustrates the principle 100 of process migration, processor 0 runs process 1, process 2, and process 3, while processor 1 runs process 4. Processor 0 is busy compared with processor 1, so, for example, process 3 on processor 0 may be migrated to processor 1 to balance the load of the system.
FIG. 2 illustrates the flow of a prior-art process migration method 200. After the process migration flow starts, each process on the busy processor is polled in turn; whether the task hierarchical load of each process satisfies the migration requirement is then judged; finally, the processes that satisfy the requirement are migrated from the busy processor to an idle processor. This process migration method uses the task hierarchical load, i.e., the load contribution of the current process to the current processor, to judge whether a task satisfies the migration condition.
When group scheduling is enabled, the weight of a task group is a default value unless actively adjusted, so the more tasks run in the same group, the smaller the per-processor share of the group's weight relative to the standard value. The task hierarchical load then becomes task_h_load = task load / number of CPUs, and at this point the hierarchical load of a background process may be greater than or equal to that of a working process.
In one specific example, after group scheduling is turned on, a specified program runs with all threads busy.
Process 1 (a user process running continuously):
it is located in task group A, and the 10 processes in the group are distributed over 10 processors;
process load: task_load = 1000;
task hierarchical load of the process: task_h_load = 1000/10 = 100.
Process 2 (a background process running periodically):
it is located in task group B, and the 1 process in the group runs on 1 processor;
process load: task_load = 120;
task hierarchical load of the process: task_h_load = 120/1 = 120.
In this case, the hierarchical load of the background process is greater than or equal to that of the working process. As noted above, a process with a smaller hierarchical load satisfies the migration condition more easily, and when several tasks on one processor all satisfy it, they are migrated in the order in which they joined the run queue, with the most recently added process migrated first. As a result, the continuously running user process gets migrated. In a full-load scenario, a high-load process may thus be migrated across processors, or even across memory nodes, because of a background low-load process, causing serious cache invalidation and degrading the normal performance of the high-load process.
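The imbalance described above can be sketched numerically. The helper below is a hypothetical simplification: it models only the full-load approximation task_h_load = task load / number of CPUs the group spans, not the kernel's actual calculation.

```python
def task_h_load(task_load: int, cpus_in_group: int) -> float:
    """Hierarchical load of one task under the full-load approximation:
    the task's load divided by the number of CPUs its group spans."""
    return task_load / cpus_in_group

# Task group A: 10 continuously running user processes on 10 processors.
worker = task_h_load(1000, 10)       # 1000 / 10 = 100
# Task group B: 1 periodically running background process on 1 processor.
background = task_h_load(120, 1)     # 120 / 1 = 120

# The low-duty background process ends up with the LARGER hierarchical
# load, so the continuously running worker looks cheaper to migrate.
assert worker == 100.0
assert background == 120.0
assert background >= worker
```

Because the migration logic prefers tasks with smaller hierarchical load, this is exactly the configuration in which the busy worker, not the background task, gets picked for migration.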
The technical scheme of the present invention is proposed to solve the above problems in the prior art. One embodiment of the present invention provides a process migration method that may be executed in a computing device. Specifically, FIG. 3 illustrates a block diagram of a computing device 300 according to one embodiment of the invention. As shown in FIG. 3, in its basic configuration 302, computing device 300 typically includes a system memory 306 and one or more processors 304. A memory bus 308 may be used for communication between the processor 304 and the system memory 306.
Depending on the desired configuration, processor 304 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 304 may include one or more levels of cache, such as a first-level cache 310 and a second-level cache 312, a processor core 314, and registers 316. The example processor core 314 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 318 may be used with the processor 304, or in some implementations the memory controller 318 may be an internal part of the processor 304.
Depending on the desired configuration, system memory 306 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. Physical memory in a computing device is usually the volatile RAM; data on disk must be loaded into physical memory before the processor 304 can read it. System memory 306 may include an operating system 320, one or more applications 322, and program data 324. An application 322 is in effect a set of program instructions that direct the processor 304 to perform corresponding operations. In some implementations, the application 322 may be arranged to run with the program data 324 on the operating system 320. The operating system 320 may be, for example, Linux or Windows, and includes program instructions for handling basic system services and performing hardware-dependent tasks. The applications 322 include program instructions for implementing various user-desired functions, and may be, for example, a browser, instant messaging software, or a software development tool (e.g., an integrated development environment (IDE) or a compiler), but are not limited thereto. When an application 322 is installed into computing device 300, a driver module may be added to the operating system 320.
When the computing device 300 starts up running, the processor 304 reads the program instructions of the operating system 320 from the memory 306 and executes them. Applications 322 run on top of operating system 320, utilizing the interfaces provided by operating system 320 and the underlying hardware to implement various user-desired functions. When a user launches the application 322, the application 322 is loaded into the memory 306, and the processor 304 reads and executes the program instructions of the application 322 from the memory 306.
Computing device 300 may also include an interface bus 340 that facilitates communication from various interface devices (e.g., output devices 342, peripheral interfaces 344, and communication devices 346) to the basic configuration 302 via the bus/interface controller 330. The example output devices 342 include a graphics processing unit 348 and an audio processing unit 350, which may be configured to communicate with various external devices, such as a display or speakers, via one or more A/V ports 352. Example peripheral interfaces 344 may include a serial interface controller 354 and a parallel interface controller 356, which may be configured to communicate, via one or more I/O ports 358, with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner). The example communication device 346 may include a network controller 360, which may be arranged to communicate with one or more other computing devices 362 over a network communication link via one or more communication ports 364.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery medium. A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media, such as a wired or dedicated network, and wireless media, such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable media as used herein includes both storage media and communication media.
Computing device 300 also includes a storage interface bus 334 coupled to the bus/interface controller 330. The storage interface bus 334 connects to the storage device 332, which is adapted to store data. Example storage devices 332 include removable storage 336 (e.g., CD, DVD, USB flash drive, removable hard disk) and non-removable storage 338 (e.g., a hard disk drive, HDD).
In computing device 300 according to the present invention, application 322 includes a plurality of program instructions that perform method 400.
FIG. 4 illustrates a flow diagram of a process migration method 400 according to one embodiment of the invention. The method 400 is suitable for execution in a computing device (e.g., the computing device 300 described previously).
As shown in fig. 4, the method 400 implements process migration. It begins with step S402, in which each process in the processor is classified as an active process or an inactive process based on its real load.
It should be noted that, according to this embodiment of the invention, before step S402 is executed it has already been determined that the computing device is load-imbalanced and that a process on a processor needs to be migrated; the load imbalance may be detected using the foregoing or any existing load-balancing policy, which is not described here.
It should be further noted that the process migration method provided in this embodiment applies to a scenario in which every process runs at or near full load on the processor, i.e., processor occupancy is near 100%.
Preferably, the real load of a process can be calculated by the following steps.
In step S422, time information of each process in the working state and in the non-working state is obtained respectively.
A process executes intermittently: after running for a period of time, the processor stops it for a while and switches to other processes, and when those stop running, the processor runs the process again. The time a process spends in the working state is its running time.
In step S424, the real load of each process is calculated based on the time information.
The load of a process is the accumulation of multiple running periods, with the load from earlier periods decayed; the decay coefficient is related to the time the process spends in the non-working state.
The real load is calculated from the process's running time in each interval and the decay coefficient of that interval. For example, if the process ran in 3 time intervals before the current time, then the real load of the process = (first time interval × first decay coefficient) + (second time interval × second decay coefficient) + (third time interval × third decay coefficient).
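The decayed-sum formula above can be written as a short sketch. The interval lengths and decay coefficients below are illustrative assumptions only; the patent does not fix concrete values.

```python
def real_load(intervals):
    """Real load of a process: the sum of (running time x decay
    coefficient) over its past running intervals. Older intervals
    carry smaller coefficients, so past activity decays away."""
    return sum(run_time * decay for run_time, decay in intervals)

# Three running intervals before the current time, oldest first;
# the decay coefficients (illustrative only) shrink with age.
load = real_load([(10, 0.25), (10, 0.5), (10, 1.0)])
assert load == 17.5   # 2.5 + 5.0 + 10.0
```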
After the real load of a process is obtained, it is compared with a preset load threshold: when the real load is greater than the threshold, the process is judged to be an active process; otherwise it is judged to be an inactive process. The load threshold may be set by a person skilled in the art or according to the attributes of the computing device, which this embodiment does not limit.
In step S404, it is judged whether the active process is the only active process in the processor. Step S402 has already determined the state of each process in the processor (active or inactive) from its real load, so it can be judged directly whether the target process is the only active process in the processor.
It should be noted that when a process is the only active process in the processor, it is skipped directly; in other words, migration of the only active process is abandoned.
Of course, to facilitate determining the number of active processes in the processor, that number may be counted before step S404 is executed.
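Steps S402 and S404 can be sketched together as follows. The threshold value and the per-process loads are illustrative assumptions; as noted above, the patent leaves the threshold configurable.

```python
LOAD_THRESHOLD = 50  # assumed value; the patent leaves this configurable

def classify(real_loads):
    """Step S402: map each pid to 'active' or 'inactive' by comparing
    its real load with the preset load threshold."""
    return {pid: "active" if load > LOAD_THRESHOLD else "inactive"
            for pid, load in real_loads.items()}

states = classify({1: 120, 2: 30, 3: 10})
active = [pid for pid, state in states.items() if state == "active"]

assert states == {1: "active", 2: "inactive", 3: "inactive"}
# Step S404: process 1 is the only active process, so it is skipped
# and its migration is abandoned.
assert len(active) == 1
```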
In step S406, if the process is not the only active process, the task hierarchical loads of the processes are determined in turn. Polling determines the task hierarchical load of each process on the processor, starting with the process that joined the run queue last.
Specifically, a linked list of the processes stored in the processor is polled to obtain the order in which the processes joined; the task hierarchical loads of the processes are then determined in that join order, from back to front.
A process is placed at the head of the designated linked list when it joins the process queue, so polling from the list head visits the processes in order of when they joined the queue; that is to say, the join order can be obtained by polling the linked list.
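The head-insertion behaviour described above can be sketched as follows; a deque stands in for the kernel's linked list (an assumption made purely for illustration).

```python
from collections import deque

run_queue = deque()  # the list head is on the left

def enqueue(pid: int) -> None:
    """A process joining the queue is placed at the list head."""
    run_queue.appendleft(pid)

for pid in (1, 2, 3):  # process 1 joined first, process 3 last
    enqueue(pid)

# Polling from the head therefore visits processes in back-to-front
# join order: the most recently added process is examined first.
assert list(run_queue) == [3, 2, 1]
```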
In some embodiments, the task hierarchical load of each process is calculated as follows: obtain the number of processors over which the process's group is distributed; obtain the process's load within that group; and take the ratio of the process load to the processor count as the task hierarchical load of the process.
This calculation applies to a scenario in which each process in the group runs at or near full load on its processor.
In one specific example, the conventional task hierarchical load is calculated as follows:
task hierarchical load = real load of the process × relative top-layer load of the upper-level queue / load of the upper-level queue;
thus, task hierarchical load / real load of the task = relative top-layer load of the upper-level queue / load of the upper-level queue = scheduling-entity weight of the current CPU's group / 1024 = load of the current process / total load of all processes in the group;
so when every process runs at full load, their loads are all equal, and the load of the current process divided by the total load of all processes in the group is approximately 1 / total number of processors.
In other words, when each process in the group runs at or near full load on its processor, the task hierarchical load of each process = process load / number of processors over which the group is distributed.
In one specific example, the processes are located in task group 0, which is distributed over 5 processors in total, and every process in task group 0 runs at full load. Load of a process = 10; task hierarchical load of the process = 10/5 = 2.
In step S408, when the task hierarchical load of a process satisfies the preset condition, the process is migrated to another processor.
Specifically, a load imbalance value is calculated based on the current load of the computing device, and it is judged whether the task hierarchical load of the process is less than half of that load imbalance value; if so, the process is migrated to another processor.
It should be noted that the load of the computing device is not a fixed value: it changes as the processes run, and the load imbalance value therefore also changes over time. In general, the smaller the task hierarchical load, the more easily the migration condition is satisfied.
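The preset condition of step S408 can be sketched as a single predicate; the imbalance values used below are illustrative assumptions.

```python
def should_migrate(task_h_load: float, imbalance: float) -> bool:
    """Migration condition of step S408: migrate only when the task's
    hierarchical load is less than half the load-imbalance value."""
    return task_h_load < imbalance / 2

assert should_migrate(100.0, 300.0)        # 100 < 150: migrate
assert not should_migrate(120.0, 200.0)    # 120 >= 100: keep in place
```

Because the imbalance value tracks the device's changing load, the same task may satisfy this condition at one moment and fail it the next.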
Fig. 5 illustrates a block diagram of a process migration apparatus 500 according to one embodiment of the present invention.
As shown in fig. 5, the apparatus 500 includes: a state determining module adapted to classify each process as an active process or an inactive process based on its real load in the processor; a judging module adapted to judge whether an active process is the only active process in the processor; a task hierarchical load determining module adapted to determine the task hierarchical loads of the processes in turn; and a migration module adapted to migrate a process to another processor.
It should be noted that the principle and workflow of the process migration apparatus provided in this embodiment are similar to those of the foregoing process migration method; for relevant details, reference may be made to the description of the method above, which is not repeated here.
In this embodiment, the task migration criterion for load balancing considers together the dual effects of a task's real load and its task hierarchical load: when a processor carries only one process with a high real load, a low-load process newly added to the processor is migrated preferentially. In a full-thread use case, this reduces the influence of background processes on the main process; the main process is no longer migrated across processors, or even across memory nodes, because of background activity. Cache utilization is then at its highest, and program performance at its best.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions of the methods and apparatus of the present invention, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, U-drives, floppy diskettes, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the invention in accordance with instructions in said program code stored in the memory.
By way of example, and not limitation, readable media comprise readable storage media and communication media. The readable storage medium stores information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with examples of the invention. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It should be appreciated that the teachings of the present invention as described herein may be implemented in a variety of programming languages, and that the foregoing description of specific languages is provided for the purpose of disclosing preferred embodiments of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that may be implemented by a processor of a computer system or by other means of performing the functions. Thus, a processor with the necessary instructions for implementing the described method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is for carrying out the functions performed by the elements for carrying out the objects of the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first," "second," "third," etc. to describe a common object merely denotes different instances of like objects, and is not intended to imply that the objects so described must have a given order, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (9)

1. A process migration method, executed in a computing device in which a plurality of processors reside, the method comprising:
classifying each process in a processor as an active process or an inactive process based on the real load of each process;
determining whether the active process is the only active process in the processor;
if it is not the only active process, determining the task hierarchical load of each process in sequence, wherein calculating the task hierarchical load comprises: obtaining the number of processors in the group corresponding to the process, obtaining the process load of the process within that group, and taking the ratio of the process load to the number of processors as the task hierarchical load of the process; and
when the task hierarchical load of a process meets a preset condition, migrating the process to another processor.

2. The method of claim 1, wherein the step of determining the task hierarchical load of each process in sequence if it is not the only active process comprises:
polling a linked list that stores the processes in the processor to obtain the order in which each process was added; and
determining the task hierarchical load of each process in turn, from the most recently added process to the earliest added.

3. The method of claim 1, wherein after the step of classifying each process as an active process or an inactive process based on the real load of each process in the processor, the method further comprises:
counting the number of active processes in the processor.

4. The method of claim 1, wherein the step of calculating the real load comprises:
obtaining, for each process, time information for its working state and its non-working state; and
calculating the real load of each process based on the time information.

5. The method of claim 1, wherein the step of classifying each process as an active process or an inactive process based on the real load of each process in the processor comprises:
if the real load of a process is greater than a preset load threshold, determining that the process is an active process;
otherwise, determining that the process is an inactive process.

6. The method of claim 2, wherein the step of migrating the process to another processor when the task hierarchical load of the process meets a preset condition comprises:
calculating a load imbalance value according to the current load value of the computing device;
determining whether the task hierarchical load of the process is less than half of the load imbalance value; and
if so, migrating the process to another processor.

7. A process migration apparatus, comprising:
a state determination module adapted to classify each process in a processor as an active process or an inactive process based on the real load of each process;
a determination module adapted to determine whether the active process is the only active process in the processor;
a task hierarchical load determination module adapted to determine the task hierarchical load of each process in sequence, wherein calculating the task hierarchical load comprises: obtaining the number of processors in the group corresponding to the process, obtaining the process load of the process within that group, and taking the ratio of the process load to the number of processors as the task hierarchical load of the process; and
a migration module adapted to migrate the process to another processor.

8. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for performing the method of any one of claims 1-6.

9. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any one of claims 1-6.
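Reading claims 1 and 6 together, the migration test reduces to a simple arithmetic check. The sketch below is an informal illustration under assumed units, not the patented implementation; all function and parameter names are invented.

```python
def task_hierarchical_load(group_process_load, processors_in_group):
    """Claim 1: the task hierarchical load is the process's load within its
    group divided by the number of processors in that group."""
    return group_process_load / processors_in_group

def should_migrate(group_process_load, processors_in_group, imbalance):
    """Claim 6: migrate when the task hierarchical load is less than half
    of the computing device's current load-imbalance value."""
    return task_hierarchical_load(group_process_load, processors_in_group) < imbalance / 2
```

For example, a process contributing a load of 100 inside a 4-processor group has a hierarchical load of 25, so under this reading it would be migrated only when the device-wide imbalance value exceeds 50.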
CN202110738697.2A 2021-06-30 2021-06-30 Process migration method, device, computing device and storage medium Active CN113326140B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110738697.2A CN113326140B (en) 2021-06-30 2021-06-30 Process migration method, device, computing device and storage medium
PCT/CN2021/124293 WO2023273015A1 (en) 2021-06-30 2021-10-18 Process migration method and apparatus, computing device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110738697.2A CN113326140B (en) 2021-06-30 2021-06-30 Process migration method, device, computing device and storage medium

Publications (2)

Publication Number Publication Date
CN113326140A CN113326140A (en) 2021-08-31
CN113326140B true CN113326140B (en) 2024-11-26

Family

ID=77425256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110738697.2A Active CN113326140B (en) 2021-06-30 2021-06-30 Process migration method, device, computing device and storage medium

Country Status (2)

Country Link
CN (1) CN113326140B (en)
WO (1) WO2023273015A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326140B (en) * 2021-06-30 2024-11-26 统信软件技术有限公司 Process migration method, device, computing device and storage medium
CN114398329A (en) * 2021-12-15 2022-04-26 西安统信软件技术有限公司 A file cache-based scheduling method, device and computing device
CN114942791A (en) * 2022-05-26 2022-08-26 统信软件技术有限公司 Process awakening method and device, computing device and readable storage medium
CN115857418B (en) * 2023-02-28 2023-05-02 深圳华龙讯达信息技术股份有限公司 Programmable logic control system based on coupling design
CN116542707A (en) * 2023-03-14 2023-08-04 读书郎教育科技有限公司 Dynamic user layering method and system based on behavior data
CN117290075B (en) * 2023-11-23 2024-02-27 苏州元脑智能科技有限公司 Process migration method, system, device, communication equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103729248A (en) * 2012-10-16 2014-04-16 华为技术有限公司 Method and device for determining tasks to be migrated based on cache perception
CN109766180A (en) * 2017-11-09 2019-05-17 阿里巴巴集团控股有限公司 Load-balancing method and device, calculate equipment and computing system at storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN102834807B (en) * 2011-04-18 2015-09-09 华为技术有限公司 The method and apparatus of multicomputer system load balancing
US20130332608A1 (en) * 2012-06-06 2013-12-12 Hitachi, Ltd. Load balancing for distributed key-value store
CN105574141B (en) * 2015-12-15 2021-04-27 杭州朗和科技有限公司 Method and device for carrying out data migration on database
CN107196865B (en) * 2017-06-08 2020-07-24 中国民航大学 Load-aware adaptive threshold overload migration method
CN108549574B (en) * 2018-03-12 2022-03-15 深圳市万普拉斯科技有限公司 Thread scheduling management method, apparatus, computer equipment and storage medium
US11216314B2 (en) * 2018-11-02 2022-01-04 EMC IP Holding Company LLC Dynamic reallocation of resources in accelerator-as-a-service computing environment
JP7234704B2 (en) * 2019-03-11 2023-03-08 富士通株式会社 Information processing device and information processing program
CN113326140B (en) * 2021-06-30 2024-11-26 统信软件技术有限公司 Process migration method, device, computing device and storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103729248A (en) * 2012-10-16 2014-04-16 华为技术有限公司 Method and device for determining tasks to be migrated based on cache perception
CN109766180A (en) * 2017-11-09 2019-05-17 阿里巴巴集团控股有限公司 Load-balancing method and device, calculate equipment and computing system at storage medium

Also Published As

Publication number Publication date
WO2023273015A1 (en) 2023-01-05
CN113326140A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN113326140B (en) Process migration method, device, computing device and storage medium
CN106502791B (en) A kind of method for allocating tasks and device
US8914805B2 (en) Rescheduling workload in a hybrid computing environment
CN102834807B (en) The method and apparatus of multicomputer system load balancing
CN113504985B (en) A task processing method and network device
CN108549574B (en) Thread scheduling management method, apparatus, computer equipment and storage medium
CN113553164B (en) Process migration method, computing device and storage medium
WO2017016421A1 (en) Method of executing tasks in a cluster and device utilizing same
US20120054770A1 (en) High throughput computing in a hybrid computing environment
US9063750B2 (en) Mapping high-performance computing applications to platforms
CN107391031A (en) Data migration method and device in a kind of computing system based on mixing storage
CN109992366A (en) Task scheduling method and scheduling device
CN101887383A (en) A real-time process scheduling method
CN105491117B (en) Streaming diagram data processing system and method towards real-time data analysis
CN113918527B (en) Scheduling method and device based on file cache and computing equipment
CN115391026A (en) Process migration method, computing device and readable storage medium
CN107885579A (en) The load-balancing method and computer-readable recording medium of virtual machine
CN112286623A (en) Information processing method and device and storage medium
CN114416310A (en) A multiprocessor load balancing method, computing device and storage medium
CN115951988B (en) Job scheduling method, computing equipment and storage medium
CN110308991B (en) A method and system for energy-saving optimization of data center based on random tasks
US20110191775A1 (en) Array-based thread countdown
CN108139938A (en) For assisting the device of main thread executing application task, method and computer program using secondary thread
CN113094155B (en) Task scheduling method and device under Hadoop platform
US11269525B2 (en) Co-processing a plurality of dependent systems with a finite number of processing threads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant