
CN100542139C - Method and device for resource allocation based on task grouping - Google Patents

Method and device for resource allocation based on task grouping Download PDF

Info

Publication number
CN100542139C
CN100542139C · CNB2006101564647A · CN200610156464A · CN101009642A
Authority
CN
China
Prior art keywords
task
resources
grouping
module
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101564647A
Other languages
Chinese (zh)
Other versions
CN101009642A (en)
Inventor
任艳花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB2006101564647A priority Critical patent/CN100542139C/en
Publication of CN101009642A publication Critical patent/CN101009642A/en
Application granted granted Critical
Publication of CN100542139C publication Critical patent/CN100542139C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a resource allocation method based on task grouping. Received tasks are first grouped to obtain different task groups, the expected number of resources of each task group is then determined, and resources are allocated to the different task groups according to the expected resource numbers, so that tasks in different groups have an equal opportunity to be served by the resources. The invention also discloses a resource allocation device based on task grouping, comprising a task grouping module and a management module. The task grouping module groups the tasks; the management module determines the expected resource numbers of the different task groups and allocates resources, according to the expected resource numbers, to the task groups divided by the task grouping module. The device gives tasks in different groups an equal opportunity to be served by the resources.

Description

Resource allocation method and device based on task grouping
Technical field
The present invention relates to resource allocation techniques, and in particular to a resource allocation method and device based on task grouping.
Background art
In a server program based on the client/server (C/S) model, reasonable allocation of server-side resources allows the server program to serve its clients better. Server-side resources include threads, processors, memory, bandwidth, and so on.
A thread is a single sequential flow of control within a process. Compared with creating a process, creating a thread consumes far fewer system resources, so for applications with a particularly large number of concurrent flows, using threads yields better performance than using processes.
Multithreading appeared in order to improve processor utilization. Multithreading allows several threads to execute concurrently on a processor unit, which significantly reduces the idle time of the processor unit and increases its throughput. However, creating and destroying a thread both consume processor time slices, so if the system frequently creates and destroys threads while it is busy, the processing time of individual tasks grows and the performance of the server program actually suffers.
To reduce the impact of thread creation and destruction time on server performance, the thread pool technique appeared. A thread pool moves thread creation and thread destruction to the startup and shutdown of the server program, or to other idle periods, respectively. After a certain number of threads have been created, they are kept in the idle state; when a client submits a new task, an idle thread in the pool is woken up to handle the task, and after the task has been handled the thread returns to the idle state. In this way, when the server program handles client requests it no longer pays the cost of creating and destroying threads.
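As an illustration of the thread-pool idea described above (a minimal sketch, not taken from the patent; the pool size and task bodies are arbitrary placeholders), the following Java example pre-creates a fixed number of worker threads and reuses them for incoming tasks:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolSketch {
    public static void main(String[] args) {
        // Create the worker threads once, at program startup.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Each client request is submitted as a task; an idle thread picks it up,
        // runs it, and then returns to the pool instead of being destroyed.
        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println("handling task " + taskId));
        }

        // Shut the pool down when the server program ends.
        pool.shutdown();
    }
}
```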
At present, a thread pool schedules client tasks as follows: as long as an idle thread is available, it simply takes the next task from the task queue and processes it.
Fig. 1 is a schematic diagram illustrating the existing thread pool task scheduling method. As shown in Fig. 1, suppose the task queue of the server application contains two kinds of tasks: a lightweight task J1, which occupies a thread for 1 millisecond (ms), and a heavyweight task J2, which occupies a thread for 100000 ms. Suppose further that the thread pool contains two threads, Th1 and Th2, with identical processing capability.
As shown in Fig. 1, as long as Th1 and Th2 are idle, tasks are taken from the task queue one after another and processed. The scheduling proceeds as follows:
Th1 takes task J2 at 0 ms, and is occupied by J2 from 0 ms to 100000 ms;
Th2 takes a task J1 at 0 ms, and is occupied by J1 from 0 ms to 1 ms;
Th2 takes another task J1 at 1 ms, and is occupied by J1 from 1 ms to 2 ms;
Th2 takes task J2 at 2 ms, and is occupied by J2 from 2 ms to 100002 ms;
Th1 takes task J1 at 100000 ms, and is occupied by J1 from 100000 ms to 100001 ms.
It can be seen that from 2 ms to 100000 ms both threads Th1 and Th2 in the pool are occupied by heavyweight tasks J2, so an easily processed lightweight task J1 has to wait throughout this period before it gets a chance to be processed by a thread. Although heavyweight and lightweight tasks have an equal chance of seizing an idle thread, once a heavyweight task seizes a thread in the pool it occupies it for a long time. Thus, in the existing thread pool scheduling method, heavyweight tasks and lightweight tasks occupy the threads in the pool for very different proportions of the time. For the example shown in Fig. 1, suppose the queue contains 3 lightweight tasks J1 and 2 heavyweight tasks J2; the total time Totaltime needed to process these five tasks equals the time to process the 3 J1 tasks plus the time to process the 2 J2 tasks, that is:
Totaltime = 3 × 1 ms + 2 × 100000 ms
Therefore, the proportion RateJ1 of the total time spent processing the 3 J1 tasks and the proportion RateJ2 of the total time spent processing the 2 J2 tasks are respectively:
RateJ1 = 3 × 1 ms / Totaltime = 3 × 1 ms / (3 × 1 ms + 2 × 100000 ms) ≈ 0.0015%
RateJ2 = 2 × 100000 ms / Totaltime = 2 × 100000 ms / (3 × 1 ms + 2 × 100000 ms) ≈ 99.9985%
It can be seen from the above formulas that even when the numbers of lightweight and heavyweight tasks are similar, the heavyweight tasks occupy a far larger proportion of the thread time, and the lightweight tasks are at a disadvantage when competing for the thread resources in the pool. It is then easy for the heavyweight tasks to occupy all the thread resources while the lightweight tasks never get processed by a thread.
There are many practical cases in which heavyweight tasks occupy all the thread resources. For example, Fig. 2 is the flow chart of intelligent network prepaid signaling. In Fig. 2, the calling Mobile Switching Center/Visitor Location Register/Service Switching Point (MSCa/VLR/SSP) can be regarded as the client, and the calling Service Control Point (SCPa) can be regarded as the server. Relative to the other tasks, the Initial Detection Point message (IDP) and the Apply Charging Report message (ACR) are heavyweight tasks, while the Basic Call State Model event report message (ERB) is a lightweight task.
If the thread resources in the SCPa are scheduled by the existing thread resource scheduling method, then when the number of calls per second is large, the thread resources of the SCPa are all occupied by IDP and ACR, the ERB tasks in this flow cannot be processed by the SCPa threads in time, sessions time out, and the call loss rate rises.
As can be seen from the above description, heavyweight tasks occupying all the thread resources harms both the server application and the client program. For the server program, during the period in which all thread resources are occupied by heavyweight tasks, the other tasks in the task queue cannot be scheduled and processed, so more and more tasks accumulate in the task queue and the memory occupied by the server program keeps growing; in the end the task queue may become congested and the server application may even crash. For the client program, responses to its requests become untimely, and when one client operation is composed of tasks of several different weight levels, the tasks of the other weight levels cannot get a response because all of the server's thread resources are occupied by the heavyweight tasks, so the client's service cannot be carried out.
In short, in the existing thread pool scheduling method, some tasks may occupy all the resources, so that the other tasks cannot be served by the thread resources.
Summary of the invention
An embodiment of the invention provides a resource allocation method based on task grouping, so that tasks in different groups have an equal opportunity to be served by the resources.
An embodiment of the invention also provides a resource allocation device based on task grouping, so that tasks in different groups have an equal opportunity to be served by the resources.
An embodiment of the invention discloses a resource allocation method based on task grouping, the method comprising:
grouping received tasks to obtain different task groups;
determining the expected number of resources of the different task groups according to the total number of resources, the total number of task groups, and the task weights of the different task groups;
allocating resources to the different task groups according to the expected resource numbers;
the method further comprising: receiving a new task and judging whether the new task belongs to an already existing task group; if it does, assigning the new task to the corresponding existing task group; otherwise, creating a new task group, assigning the new task to the newly created task group, and at the same time re-determining the expected resource numbers of the different task groups according to the changed total number of task groups and re-allocating resources to the different task groups according to the re-determined expected resource numbers (a minimal sketch of this flow is given below).
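The following Java sketch illustrates the new-task handling flow summarized above. It is not the patented implementation; the class names, the map keyed by a group identifier, and the reallocation call are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TaskGrouper {
    // One queue of pending tasks per task group, keyed by a group identifier.
    private final Map<String, List<Runnable>> groups = new HashMap<>();

    /** Assign a new task; create a new group and trigger reallocation if needed. */
    public void onNewTask(String groupKey, Runnable task) {
        if (groups.containsKey(groupKey)) {
            // The task belongs to an existing group: just enqueue it.
            groups.get(groupKey).add(task);
        } else {
            // Otherwise create a new group, enqueue the task, and
            // recompute the expected resource numbers for all groups.
            List<Runnable> queue = new ArrayList<>();
            queue.add(task);
            groups.put(groupKey, queue);
            reallocateResources();
        }
    }

    private void reallocateResources() {
        // Placeholder: recompute expected resource numbers per formula (1)
        // and redistribute resources among the groups.
    }
}
```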
The invention also discloses a resource allocation device based on task grouping, the device comprising:
a task grouping module, configured to group received tasks and to assign the tasks of different groups to corresponding task group queues;
a management module, configured to calculate the expected number of resources for the different task groups in the task grouping module, and to allocate resources to the different task group queues according to the expected resource numbers;
the task grouping module being further configured to receive a new task and judge whether the new task belongs to an already existing task group; if it does, to assign the new task to the corresponding existing task group queue; otherwise, to create a new task group queue and assign the new task to the newly created task group queue;
the management module being further configured to recalculate the expected resource numbers of the different task groups according to the changed total number of task groups, and to re-allocate resources to the different task groups according to the recalculated expected resource numbers.
As can be seen from the above technical solution, the scheme of the embodiment of the invention first groups the tasks to obtain different task groups, then determines the expected resource numbers of the different task groups, and allocates resources to each task group queue according to the expected resource numbers, so that tasks in different groups have an equal opportunity to be served by the resources; this avoids the situation in which the tasks of some groups occupy all the resources while the tasks of other groups cannot be served.
Description of drawings
Fig. 1 is a schematic diagram illustrating the existing thread pool task scheduling method;
Fig. 2 is the flow chart of intelligent network prepaid signaling;
Fig. 3 is the flow chart of the resource allocation method based on task grouping according to an embodiment of the invention;
Fig. 4 is the block diagram of the resource allocation device based on task grouping according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the thread pool resource allocation scheme based on task grouping according to an embodiment of the invention;
Fig. 6 is the block diagram of the thread resource allocation device based on task grouping according to an embodiment of the invention;
Fig. 7 is the flow chart of task processing by the server application;
Fig. 8 is the flow chart of idle thread scheduling by the server application;
Fig. 9 is the flow chart of busy thread scheduling by the server application.
Embodiment
In order to give all kinds of tasks an equal opportunity to be served by the resources, the embodiment of the invention first groups the received tasks, then determines the expected number of resources of the different task groups, and allocates resources to the different task groups according to the expected resource numbers.
Fig. 3 is the flow chart of the resource allocation method based on task grouping according to an embodiment of the invention, comprising the following steps:
Step 301: group the received tasks to obtain different task groups.
Here, the received tasks may be grouped according to the number of resources required to process them and/or according to attributes of the tasks, where an attribute of a task may be the client the task belongs to or the weight of the task; a sketch of deriving such a grouping key is given below.
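As an illustration only (the grouping criterion, the bucket boundaries, and the key format below are assumptions, not part of the patent), a grouping key could be derived from a task's estimated processing time and the identifier of the client that submitted it:

```java
public final class GroupKeys {
    /**
     * Derive a group key from the estimated processing time (ms) of a task
     * and the identifier of the client that submitted it.
     * The time buckets chosen here are arbitrary examples.
     */
    public static String groupKey(long estimatedMillis, String clientId) {
        String weightBucket;
        if (estimatedMillis < 10) {
            weightBucket = "light";
        } else if (estimatedMillis < 1000) {
            weightBucket = "medium";
        } else {
            weightBucket = "heavy";
        }
        return clientId + ":" + weightBucket;
    }
}
```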
Step 302: determine the expected number of resources of the different task groups according to the total number of resources, and/or the total number of task groups, and/or the task weights of the different task groups, and allocate resources to the different task groups according to the expected resource numbers.
In this step, the expected resource numbers of the different task groups may be determined by the resource allocation formula shown in formula (1), whose concrete form is as follows:
R_i = R × ( r_i / Σ_{m=1}^{M} r_m ),  i = 1, 2, …, M    (1)
wherein the expected resource number R_i denotes the expected number of server resources allocated to the i-th task group;
the total resource number R denotes the total number of server resources;
r_i denotes the task weight of the current i-th task group;
r_m denotes the task weight of the m-th task group, m = 1, 2, …, M;
the task group number M denotes the total number of task groups.
The meaning of formula (1) is that the percentage of the total resource number allocated to the i-th task group as its expected resource number equals the percentage that the weight of the tasks in the i-th task group represents of the sum of the weights of the tasks in all M task groups.
Whenever the parameter R, and/or r_i, and/or M in formula (1) changes, the resource allocation formula is recomputed, and server resources are re-allocated to every task group according to the new result.
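A minimal Java sketch of formula (1) follows; it returns fractional shares, and how the shares are rounded to whole resources is left open here because the patent does not specify a rounding rule.

```java
public final class ResourceShares {
    /**
     * Expected resource number for each group per formula (1):
     * R_i = R * (r_i / sum of all r_m).
     *
     * @param totalResources R, the total number of server resources
     * @param weights        r_1..r_M, one task weight per group
     * @return expected resource numbers R_1..R_M as fractional shares
     */
    public static double[] expectedResources(int totalResources, double[] weights) {
        double sum = 0.0;
        for (double w : weights) {
            sum += w;
        }
        double[] expected = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            expected[i] = totalResources * (weights[i] / sum);
        }
        return expected;
    }
}
```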
The method shown in Fig. 3 can, to a certain extent, guarantee that tasks in different groups have an equal opportunity to be served by the server resources.
Fig. 4 is the block diagram of the resource allocation device based on task grouping according to an embodiment of the invention. The device comprises a management module 401 and a task grouping module 402.
The management module 401 is configured to calculate the expected resource numbers for the different task groups in the task grouping module, and to allocate resources to the different task group queues according to the expected resource numbers.
The task grouping module 402 receives the tasks sent by clients, groups the received tasks, and assigns the tasks of different groups to the corresponding task group queues.
The embodiment of the invention is further described below using the thread pool resources of the server side as a preferred embodiment.
Fig. 5 is a schematic diagram of the thread pool resource allocation scheme based on task grouping according to an embodiment of the invention. As shown in Fig. 5, a task of a client is first forwarded through the task executor 501, the interface device between the client and the server side, and then sent to the thread resource allocation device 502 based on task grouping. The thread resource allocation device 502 assigns the task, according to its type, to the corresponding one of the task group queues, and dynamically allocates the thread resources in the thread pool 503 to each group of tasks according to the resource allocation formula.
In the embodiment shown in Fig. 5, assuming that the resources are the threads in the server, the resource allocation formula shown in formula (1) can be rewritten as the following thread resource allocation formula:
N_i = N × ( T_i / Σ_{m=1}^{M} T_m ),  i = 1, 2, …, M    (2)
wherein the expected thread number N_i denotes the expected number of threads serving the i-th task group;
the total thread number N denotes the total number of threads in the thread pool;
T_i denotes the time one thread needs to process a task in the i-th task group;
T_m denotes the time one thread needs to process a task in the m-th task group;
the task group number M denotes the total number of task groups.
Suppose all the tasks in Fig. 5 are divided into 3 groups according to their weight, i.e. M = 3: the 1st group contains the heavyweight tasks, the 3rd group contains the lightweight tasks, and the 2nd group contains tasks whose weight lies between those of the 1st and 3rd groups. Here the weight of a task means how long the task occupies a thread when it is processed. The tasks of each kind are arranged in their own queue in order of arrival. The thread pool contains N threads in total, and allocating the thread resources according to formula (2) gives: the number of threads serving the 1st group is N_1, the number serving the 2nd group is N_2, the number serving the 3rd group is N_3, and N_1 + N_2 + N_3 = N.
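As a worked example (the concrete numbers are illustrative assumptions, not values from the patent), the following sketch applies formula (2) to a pool of N = 10 threads and three groups whose per-task processing times are 5 ms, 3 ms and 2 ms:

```java
public class ThreadShareExample {
    public static void main(String[] args) {
        int totalThreads = 10;                    // N: threads in the pool (assumed)
        double[] perTaskTimeMs = {5.0, 3.0, 2.0}; // T_1..T_3 (assumed values)

        double sum = 0.0;
        for (double t : perTaskTimeMs) {
            sum += t;
        }
        // Formula (2): N_i = N * (T_i / sum of T_m)
        for (int i = 0; i < perTaskTimeMs.length; i++) {
            double ni = totalThreads * (perTaskTimeMs[i] / sum);
            System.out.printf("group %d: expected threads = %.1f%n", i + 1, ni);
        }
        // Prints 5.0, 3.0 and 2.0 for the values above, and 5 + 3 + 2 = N.
    }
}
```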
Fig. 6 is the block diagram of the thread resource allocation device based on task grouping according to an embodiment of the invention. As shown in Fig. 6, the thread resource allocation device 502 based on task grouping comprises: a management module 601, a task grouping module 602, a dynamic analysis module 603, a thread resource scheduling module 604 and a task queue overload detection module 605.
The management module 601 is configured to allocate threads, according to the thread resource allocation formula (2), to each task group divided by the task grouping module 602. The management module 601 also records the expected thread number N_i of the i-th task group, the actual number of threads currently serving the i-th task group, the time T_i a thread needs to process a task in the i-th task group, the total thread number N of the thread pool, the maximum number of tasks J_maxi that each task queue can hold, and the identification information of each task group queue. The identification information of a task group queue uniquely marks the corresponding task group. In the above parameters, i = 1, 2, …, M. When the total group number M changes, the management module 601 recomputes the thread resource allocation formula (2) and records the result.
The task grouping module 602 is configured to receive the tasks sent by clients, group the received tasks, and register the identification information of each different task group queue in the management module 601.
Before assigning the grouped tasks to the corresponding task group queues, the task grouping module 602 sends a detection notification to the task queue overload detection module 605, and only after receiving the not-overloaded notification returned by the task queue overload detection module 605 does it assign the tasks of the different groups to the corresponding task group queues.
In this embodiment the task grouping module 602 groups tasks by the length of time a thread needs to process them, i.e. by task weight; such a grouping can, to a certain extent, guarantee that tasks of different weight levels have an equal opportunity to be processed by the threads. Tasks may also be grouped by the client identifier of the task, and such a grouping can, to a certain extent, guarantee that the tasks of different clients have an equal opportunity to be processed by the threads. By analogy, tasks can be grouped by various task attributes so that tasks in different groups have an equal opportunity to be processed by the threads.
The task queue overload detection module 605 is configured, after receiving a detection notification sent by the task grouping module 602, to read from the management module 601 the maximum number of tasks J_maxi that the corresponding task group queue can hold, and to check whether the number of tasks in the corresponding task queue of the task grouping module has reached J_maxi. If it has not, the task queue overload detection module 605 returns a not-overloaded notification to the task grouping module 602.
The task queue overload detection module 605 may also itself record the maximum number of tasks J_maxi that each task group queue can hold; in that case it does not need to query the management module 601.
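A minimal sketch of the overload check described above, assuming the per-queue capacity and current size are simply held in maps (the data structures and method names are illustrative, not from the patent):

```java
import java.util.Map;

public class OverloadDetector {
    private final Map<String, Integer> maxTasksPerQueue;   // J_maxi per group key
    private final Map<String, Integer> currentQueueSizes;  // current task count per group key

    public OverloadDetector(Map<String, Integer> maxTasksPerQueue,
                            Map<String, Integer> currentQueueSizes) {
        this.maxTasksPerQueue = maxTasksPerQueue;
        this.currentQueueSizes = currentQueueSizes;
    }

    /** Returns true (not overloaded) if the queue still has room for one more task. */
    public boolean notOverloaded(String groupKey) {
        int max = maxTasksPerQueue.getOrDefault(groupKey, Integer.MAX_VALUE);
        int current = currentQueueSizes.getOrDefault(groupKey, 0);
        return current < max;
    }
}
```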
The dynamic analysis module 603 is configured to obtain tasks from the i-th task group queue in the task grouping module 602, analyze the time T_i a thread needs to process a task in the i-th task group queue, and register T_i in the management module 601, i = 1, 2, …, M. When a new task group queue is added, i.e. when tasks with an (M+1)-th kind of attribute appear in the client task requests, the dynamic analysis module receives the analysis notification sent by the thread resource scheduling module 604, obtains tasks of the (M+1)-th task group queue from the task grouping module 602, analyzes the time T_{M+1} a thread needs to process a task in this task group queue, and registers T_{M+1} in the management module 601.
After receiving T_{M+1}, the management module 601 recomputes the expected thread number N_i of each task group queue according to the thread resource allocation formula, i = 1, 2, …, M+1, and records the result.
In this embodiment the weight of a task in the resource allocation formula is the time T_i a thread needs to process the task, and the tasks received by the task grouping module 602 do not themselves carry information about T_i, so the dynamic analysis module is needed to analyze the task weights T_i of the different task groups. However, the tasks received by the task grouping module 602 may also carry task weight information directly; in that case the task grouping module 602 registers the weights of the tasks in the different task groups directly in the management module, and the dynamic analysis module 603 is not needed to analyze the task weights.
The thread resource scheduling module 604 is configured to schedule the idle threads that are in the idle state and the busy threads that are processing current tasks, i.e. when the total number of task groups changes, re-allocation of resources is achieved by scheduling the idle resources and the busy resources.
The thread resource scheduling module 604 schedules idle resources as follows:
First the thread resource scheduling module 604 sends an activation notification to the idle resources to activate a thread that is in the idle state. It then queries the management module 601 whether there is a task group queue whose expected resource number has not yet been determined, i.e. whether there is a newly added task group queue. If there is, it sends an analysis notification to the dynamic analysis module 603; the analysis notification contains the identification information of the task group queue whose expected resource number has not yet been determined. The dynamic analysis module 603 obtains tasks from that task group queue in the task grouping module 602 according to the identification information in the analysis notification, analyzes the time T_{M+1} a thread needs to process a task in this task group queue, and registers T_{M+1} in the management module 601;
When there is no task group queue whose expected resource number has not yet been determined, the thread resource scheduling module 604 queries the management module 601 whether there is a task group queue whose actual thread number is less than its expected thread number N_i. The actual thread number means the number of threads currently serving that task group queue. If such a task group queue exists, the thread resource scheduling module 604 adds the newly activated thread to the thread working space of that task group queue and updates the registered actual thread number of that task group queue in the management module 601; otherwise it releases the activated thread so that it returns to the idle state.
The thread resource scheduling module 604 schedules busy resources as follows:
After a busy thread has finished processing its current task, the thread resource scheduling module 604 likewise queries the management module 601 whether there is a task group queue whose expected resource number has not yet been determined. If there is, it sends an analysis notification to the dynamic analysis module 603, so that the dynamic analysis module 603 analyzes the time T_{M+1} a thread needs to process a task in the task group whose expected resource number has not yet been determined and registers T_{M+1} in the management module 601;
When there is no task group queue whose expected resource number has not yet been determined, the thread resource scheduling module 604 queries the management module 601 whether there is a task group queue whose actual thread number is less than its expected thread number N_i. If such a task group queue exists, the thread resource scheduling module 604 adds the busy thread to the thread working space of that task group queue and updates the registered actual thread number of that task group queue in the management module 601; otherwise it releases the busy thread so that it becomes an idle thread.
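The following sketch condenses the idle-thread and busy-thread scheduling logic above into one method (the Group record, its fields, and the single-method shape are assumptions made to keep the example short):

```java
import java.util.List;

public class ThreadScheduler {
    /** Per-group bookkeeping, as the management module would record it. */
    public static class Group {
        String id;
        Double expectedThreads;   // null until determined per formula (2)
        int actualThreads;        // threads currently serving this group
    }

    /**
     * Decide where a freed or newly activated thread should go.
     * Returns the group that receives the thread, or null if the thread
     * should be released back to the idle state.
     */
    public Group placeThread(List<Group> groups) {
        // 1. A group whose expected thread number is not yet determined
        //    triggers analysis first (handled elsewhere), so no placement here.
        for (Group g : groups) {
            if (g.expectedThreads == null) {
                return null; // an analysis notification would be sent instead
            }
        }
        // 2. Otherwise, give the thread to a group that is under its expected share.
        for (Group g : groups) {
            if (g.actualThreads < g.expectedThreads) {
                g.actualThreads++;
                return g;
            }
        }
        // 3. No group is under-served: release the thread.
        return null;
    }
}
```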
In the above process, the thread resource scheduling module 604 queries the management module 601 whether there is a task group queue whose expected resource number has not yet been determined by looking up the identification information and expected resource numbers of the different task group queues recorded in the management module 601. For example, if the management module 601 has recorded the identification information of a certain task group but has not recorded the expected resource number of that group, then that task group is one whose expected resource number has not yet been determined.
The technical scheme of the embodiment of the invention is further described below through the processes of the server application processing tasks and scheduling the threads in the thread pool.
Fig. 7 is the flow chart of task processing by the server application. As shown in Fig. 7, it comprises the following steps:
Step 701: determine the weight of the task submitted by the client program.
Step 702: query, according to the weight of the task, whether a task group queue of the corresponding weight exists; if it exists, execute step 703, otherwise execute step 704.
Step 703: detect whether the task group queue of the corresponding weight is overloaded, i.e. check whether the number of tasks in the corresponding task group queue has reached the maximum number J_maxi that the queue can hold; if it has, execute step 706, otherwise execute step 707.
Step 704: create a new task group queue and add the task to this task group queue.
Step 705: re-allocate the threads according to the thread resource allocation formula (2). End the process.
Step 706: reject the task submitted by the client. End the process.
Step 707: add the task to the corresponding task group queue and send an activation notification to the idle threads in the thread pool.
Re-allocating the threads according to the thread resource allocation formula (2) in step 705 is achieved by scheduling the idle threads and busy threads with a certain method. The idle thread and busy thread scheduling methods provided by the invention are introduced below.
Fig. 8 is the flow chart of idle thread scheduling by the server application. As shown in Fig. 8, it comprises the following steps:
Step 801: send an activation notification to the idle threads in the thread pool to activate a thread that is in the idle state.
Step 802: query, in the order in which the task group queues were created, whether there is a task group whose expected resource number has not yet been determined; if there is, execute step 803, otherwise, execute step 804.
Step 803: obtain, through the self-learning of the thread, the time T_{M+1} a thread needs to process a task in the task group whose expected resource number has not yet been determined, recompute the expected thread number N_i serving each task group according to the thread resource allocation formula (2), and record the result. Execute step 802.
Step 804: query, in the order in which the task group queues were created, whether there is a task group whose actual thread number has not reached its expected value; if there is, execute step 805; otherwise, execute step 806.
A task group whose actual thread number has not reached its expected value is one for which the number of threads currently serving it is less than the expected thread number N_i of that task group queue calculated by the thread resource allocation formula (2).
Step 805: add the activated idle thread to the thread working space of the task group whose actual thread number has not reached its expected value. End the process.
Step 806: release the thread, so that it returns to the idle state and waits to be activated.
Fig. 9 is the flow chart of busy thread scheduling by the server application. As shown in Fig. 9, it comprises the following steps:
Step 901: when the threads are busy, wait for a busy thread to finish processing its task, then execute step 902.
Step 902: query, in the order in which the task group queues were created, whether there is a task group whose expected resource number has not yet been determined; if there is, execute step 903, otherwise, execute step 904.
Step 903: obtain, through the self-learning of the thread, the time T_{M+1} a thread needs to process a task in the task group whose expected resource number has not yet been determined, recompute the expected thread number N_i serving each task group according to the thread resource allocation formula (2), and record the result. Execute step 902.
Step 904: query, in the order in which the task group queues were created, whether there is a task group whose actual thread number has not reached its expected value; if there is, execute step 905; otherwise, execute step 906.
Step 905: add the thread to the thread working space of the task group whose actual thread number has not reached its expected value. End the process.
Step 906: release the thread, so that it becomes an idle thread and waits to be activated.
In the above process of the server application scheduling threads, it is first queried whether there is a task group whose expected resource number has not yet been determined; if there is, the weight of the tasks in that task group is analyzed first, and then the expected resource numbers of all the task groups are re-determined. This scheme avoids the situation in which a newly added task group cannot get thread processing for a long time.
The above are only preferred embodiments of the invention and are not intended to limit the protection scope of the invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (10)

1. A resource allocation method based on task grouping, characterized in that the method comprises: grouping received tasks to obtain different task groups; determining the expected number of resources of the different task groups according to the total number of resources, the total number of task groups, and the task weights of the different task groups; allocating resources to the different task groups according to the expected resource numbers; the method further comprising: receiving a new task and judging whether the new task belongs to an already existing task group; if it does, assigning the new task to the corresponding existing task group; otherwise, creating a new task group, assigning the new task to the newly created task group, and at the same time re-determining the expected resource numbers of the different task groups according to the changed total number of task groups and re-allocating resources to the different task groups according to the re-determined expected resource numbers.

2. The method according to claim 1, characterized in that, before assigning the new task to the corresponding existing task group, the method further comprises: determining that the corresponding existing task group is not overloaded.

3. The method according to claim 1, characterized in that re-determining the expected resource numbers of the different task groups and allocating resources to the different task groups according to the re-determined expected resource numbers comprises: querying whether there is a task group whose expected resource number has not yet been determined, and if there is, re-determining the expected resource numbers of the current different task groups according to the total number of resources, the total number of task groups, and the task weights of the current different task groups; when there is no task group whose expected resource number has not yet been determined, querying whether there is a task group whose actual resource number is less than its expected resource number, and if there is, allocating an idle resource, or a busy resource that has finished processing its current task, to the task group whose actual resource number is less than the expected resource number; otherwise, releasing the idle resource or the busy resource that has finished processing its current task.

4. The method according to claim 1, characterized in that grouping the received tasks specifically comprises: grouping according to the number of resources required to process the received tasks and/or according to attributes of the received tasks.

5. The method according to any one of claims 1 to 4, characterized in that the resources are specifically server-side processors, threads, memory or bandwidth.

6. A resource allocation device based on task grouping, characterized in that the device comprises: a task grouping module, configured to group received tasks and to assign the tasks of different groups to corresponding task group queues; a management module, configured to calculate the expected number of resources for the different task groups in the task grouping module and to allocate resources to the different task group queues according to the expected resource numbers; the task grouping module being further configured to receive a new task and judge whether the new task belongs to an already existing task group, and if it does, to assign the new task to the corresponding existing task group queue, otherwise to create a new task group queue and assign the new task to the newly created task group queue; the management module being further configured to recalculate the expected resource numbers of the different task groups according to the changed total number of task groups and to re-allocate resources to the different task groups according to the recalculated expected resource numbers.

7. The device according to claim 6, characterized in that the device further comprises: a dynamic analysis module, configured to obtain tasks from the different task group queues in the task grouping module, analyze the weights of the tasks in the different task group queues, and register the weights of the tasks in the different task group queues in the management module; the task grouping module being further configured to register the identification information of the different task group queues in the management module; the management module being further configured to receive and record the registered values of the task weights of the different task group queues sent by the dynamic analysis module and the identification information of the different task group queues sent by the task grouping module, and to record the calculated expected resource numbers of the different task group queues and the actual resource numbers actually allocated to the different task group queues; the management module calculating the expected resource numbers of the different task groups according to the total number of resources, the total number of task groups, and the task weights of the different task groups.

8. The device according to claim 7, characterized in that the device further comprises: a resource scheduling module, configured to query the management module whether there is a task group queue whose expected resource number has not yet been determined, and, when there is, to send an analysis notification to the dynamic analysis module; the dynamic analysis module being further configured to receive the analysis notification from the resource scheduling module, obtain tasks from the task group queue in the grouping module whose expected resource number has not yet been determined, analyze the weights of the tasks in that task group queue, and register the analyzed task weights in the management module; the management module being further configured to receive the registered values of the task weights from the dynamic analysis module, and to recalculate and record the expected resource numbers of the current different task groups.

9. The device according to claim 8, characterized in that, when there is no task group queue whose expected resource number has not yet been determined, the resource scheduling module is further configured to query the management module whether there is a task group queue whose actual resource number is less than its expected resource number, and, when there is, to allocate an idle resource, or a busy resource that has finished processing its current task, to the task group queue whose actual resource number is less than the expected resource number and to update the actual resource number of that task group queue recorded in the management module; when there is not, the resource scheduling module releases the idle resource or the busy resource that has finished processing its current task.

10. The device according to claim 6, characterized in that the device further comprises: a task queue overload detection module, configured to receive a detection notification from the task grouping module, detect whether the corresponding task group queue is overloaded, and, when it is not overloaded, send a not-overloaded notification to the task grouping module; the task grouping module being further configured to receive the not-overloaded notification from the task queue overload detection module and assign the grouped tasks to the corresponding task group queues.
CNB2006101564647A 2006-12-31 2006-12-31 Method and device for resource allocation based on task grouping Expired - Fee Related CN100542139C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101564647A CN100542139C (en) 2006-12-31 2006-12-31 Method and device for resource allocation based on task grouping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101564647A CN100542139C (en) 2006-12-31 2006-12-31 Method and device for resource allocation based on task grouping

Publications (2)

Publication Number Publication Date
CN101009642A CN101009642A (en) 2007-08-01
CN100542139C true CN100542139C (en) 2009-09-16

Family

ID=38697785

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101564647A Expired - Fee Related CN100542139C (en) 2006-12-31 2006-12-31 Method and device for resource allocation based on task grouping

Country Status (1)

Country Link
CN (1) CN100542139C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108370499A (en) * 2015-10-27 2018-08-03 黑莓有限公司 Resource is detected to access

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140528B (en) * 2007-08-31 2013-03-20 中兴通讯股份有限公司 Method and device for realizing timing tasks load in cluster
CN101562622B (en) * 2009-06-05 2012-09-26 杭州华三通信技术有限公司 Method for executing user request and corresponding server thereof
US20120066396A1 (en) * 2010-09-10 2012-03-15 Samsung Electronics Co. Ltd. Apparatus and method for supporting periodic multicast transmission in machine to machine communication system
CN102467412B (en) * 2010-11-16 2015-04-22 金蝶软件(中国)有限公司 Method, device and business system for processing operation request
CN102567086B (en) * 2010-12-30 2014-05-07 中国移动通信集团公司 Task scheduling method, equipment and system
CN102307198A (en) * 2011-08-30 2012-01-04 苏州阔地网络科技有限公司 Audio and video data transmission method
CN102333226A (en) * 2011-09-01 2012-01-25 苏州阔地网络科技有限公司 Audio/video data transmission method
US9465662B2 (en) * 2011-10-17 2016-10-11 Cavium, Inc. Processor with efficient work queuing
CN103179285B (en) * 2011-12-21 2015-10-07 中国移动通信集团山西有限公司 A kind of acquisition method of CDR file and device
CN103248644B (en) * 2012-02-08 2016-07-06 腾讯科技(深圳)有限公司 The load-balancing method of a kind of plug-in unit upgrading Detection task and device
CN102629220A (en) * 2012-03-08 2012-08-08 北京神州数码思特奇信息技术股份有限公司 Dynamic task allocation and management method
CN103533002A (en) * 2012-07-05 2014-01-22 阿里巴巴集团控股有限公司 Data processing method and system
CN102902573B (en) * 2012-09-20 2014-12-17 北京搜狐新媒体信息技术有限公司 Task processing method and device based on shared resources
CN103365729A (en) * 2013-07-19 2013-10-23 哈尔滨工业大学深圳研究生院 Dynamic MapReduce dispatching method and system based on task type
CN103810048B (en) * 2014-03-11 2017-01-18 国家电网公司 Automatic adjusting method and device for thread number aiming to realizing optimization of resource utilization
CN104750556A (en) * 2015-04-14 2015-07-01 浪潮电子信息产业股份有限公司 Method and device for dispatching HPC (high performance computing) cluster work
CN106557366B (en) * 2015-09-28 2020-09-08 阿里巴巴集团控股有限公司 Task distribution method, device and system
CN107643944A (en) * 2016-07-21 2018-01-30 阿里巴巴集团控股有限公司 A kind of method and apparatus of processing task
CN107220077B (en) 2016-10-20 2019-03-19 华为技术有限公司 Using the management-control method and management and control devices of starting
CN108965364B (en) * 2017-05-22 2021-06-11 杭州海康威视数字技术股份有限公司 Resource allocation method, device and system
CN107341056A (en) * 2017-07-05 2017-11-10 郑州云海信息技术有限公司 A kind of method and device of the thread distribution based on NFS
CN109426561A (en) * 2017-08-29 2019-03-05 阿里巴巴集团控股有限公司 A kind of task processing method, device and equipment
CN109697118A (en) * 2017-10-20 2019-04-30 北京京东尚科信息技术有限公司 Streaming computing task management method, device, electronic equipment and storage medium
CN109063037A (en) * 2018-07-17 2018-12-21 叶舒婷 A kind of querying method, service equipment, terminal device and computer readable storage medium
CN109614222B (en) * 2018-10-30 2022-04-08 成都飞机工业(集团)有限责任公司 Multithreading resource allocation method
CN109669776B (en) * 2018-12-12 2023-08-04 北京文章无忧信息科技有限公司 Detection task processing method, device and system
CN111338882A (en) * 2018-12-18 2020-06-26 北京京东尚科信息技术有限公司 Data monitoring method, device, medium and electronic equipment
CN111435315A (en) * 2019-01-14 2020-07-21 北京沃东天骏信息技术有限公司 Method, apparatus, device and computer readable medium for allocating resources
CN110347489B (en) 2019-07-12 2021-08-03 之江实验室 A Stream Processing Method for Multi-center Data Collaborative Computing Based on Spark
CN112667369A (en) * 2020-06-08 2021-04-16 宸芯科技有限公司 Thread scheduling method and device, storage medium and electronic equipment
CN112559148A (en) * 2020-12-14 2021-03-26 用友网络科技股份有限公司 Execution method, execution device and execution system of ordered tasks
CN115811614A (en) * 2021-09-13 2023-03-17 华为技术有限公司 Video data processing method, chip, electronic device and readable storage medium
CN114283046B (en) * 2021-11-19 2022-08-19 广州市城市规划勘测设计研究院 Point cloud file registration method and device based on ICP (inductively coupled plasma) algorithm and storage medium
CN114565284B (en) * 2022-03-02 2025-03-28 北京百度网讯科技有限公司 A method, system, electronic device and storage medium for task allocation

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108370499A (en) * 2015-10-27 2018-08-03 黑莓有限公司 Resource is detected to access
US10952087B2 (en) 2015-10-27 2021-03-16 Blackberry Limited Detecting resource access
CN108370499B (en) * 2015-10-27 2022-05-10 黑莓有限公司 Detecting resource access

Also Published As

Publication number Publication date
CN101009642A (en) 2007-08-01

Similar Documents

Publication Publication Date Title
CN100542139C (en) Method and device for resource allocation based on task grouping
US20050055694A1 (en) Dynamic load balancing resource allocation
CN101938396B (en) Data stream control method and device
US9390130B2 (en) Workload management in a parallel database system
CN109542608B (en) Cloud simulation task scheduling method based on hybrid queuing network
CN103763378A (en) Task processing method and system and nodes based on distributive type calculation system
CN106817499A (en) A kind of resources for traffic dispatching method and forecast dispatching device
CN108681481A (en) The processing method and processing device of service request
CN109710416B (en) Resource scheduling method and device
CN107967175A (en) A kind of resource scheduling system and method based on multiple-objection optimization
CN113312160A (en) Techniques for behavioral pairing in a task distribution system
CN112148449A (en) Local area network scheduling algorithm and system based on edge calculation
CN107343112A (en) Intelligent traffic distribution method based on the layering of call center's seat
CN118656216B (en) A data center resource management system and method based on cloud computing
CN109450803A (en) Traffic scheduling method, device and system
CN109117280A (en) The method that is communicated between electronic device and its limiting process, storage medium
CN111343275A (en) Resource scheduling method and system
CN106325997B (en) Virtual resource allocation method and device
CN109117279A (en) The method that is communicated between electronic device and its limiting process, storage medium
CN114416355A (en) Resource scheduling method, apparatus, system, electronic device and medium
CN109933433A (en) A kind of GPU resource scheduling system and its dispatching method
EP4557095A1 (en) Internet of vehicles platform expansion and contraction method and system, and storage medium
CN118678306A (en) Short message sending method, device, equipment, storage medium and program product
CN114827033B (en) Data flow control method, device, equipment and computer readable storage medium
CN114157717B (en) A system and method for dynamic current limiting of microservices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090916

Termination date: 20121231