
CN102710489B - Dynamic flow-splitting scheduling system and method - Google Patents


Info

Publication number
CN102710489B
CN102710489B, CN201110085812.7A, CN201110085812A
Authority
CN
China
Prior art keywords
flow
path
specific flow
subflow
bandwidth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110085812.7A
Other languages
Chinese (zh)
Other versions
CN102710489A (en)
Inventor
胡艳
夏勇
陈松杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC China Co Ltd
Original Assignee
NEC China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC China Co Ltd filed Critical NEC China Co Ltd
Priority to CN201110085812.7A priority Critical patent/CN102710489B/en
Priority to JP2011244242A priority patent/JP5324637B2/en
Publication of CN102710489A publication Critical patent/CN102710489A/en
Application granted granted Critical
Publication of CN102710489B publication Critical patent/CN102710489B/en

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)

Abstract

The invention provides a dynamic flow-splitting scheduling system and method for a data center network. A host applies to the data center's flow scheduler to reserve bandwidth for a specific flow. The flow scheduler checks the bandwidth usage of all paths that this flow may take; if the sum of the remaining bandwidth on these paths meets the flow's bandwidth demand, the flow is split into one or more subflows, which are then forwarded over the corresponding paths. If the remaining bandwidth on all of these paths cannot meet the demand, existing flows on them are moved onto other paths to free space for forwarding the flow. The dynamic flow-splitting scheduling system and method of the invention can obtain the actual bandwidth demand of a flow promptly and accurately, and can schedule at a granularity finer than a whole flow, so that the utilization of the entire data center network is balanced and its aggregate bandwidth is increased.

Description

Dynamic flow-splitting scheduling system and method
Technical field
The present invention relates to the field of data center networks, and in particular to a dynamic flow-splitting scheduling system and method.
Background art
With the development of applications such as the Internet and cloud computing services, data centers keep growing in scale. A present-day data center contains thousands of servers and switches. Moreover, a data center usually supports multiple different applications simultaneously, some of which require heavy data communication between servers inside the data center. The growing scale of data centers and the evolution of their applications pose new challenges to the network architecture.
Because the number of ports on a switch is limited, data centers usually adopt a multi-rooted tree topology. Fig. 1 shows such a topology, which provides multiple alternative paths for communication between hosts. With many alternative paths available, a key problem is how to schedule flows dynamically over these paths so as to obtain a large aggregate network bandwidth.
The most common flow scheduling mechanism today is Equal Cost Multipath (ECMP). A switch is configured with multiple candidate forwarding paths for a given destination address. When a packet arrives, the switch hashes certain fields of the packet header, takes the result modulo the total number of paths to obtain the final path, and forwards the packet on that path. The hash spreads traffic randomly over the feasible paths and thereby achieves load balancing.
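As an illustration of the hash-and-modulo selection just described, the following Python sketch (with assumed field names and path labels, not taken from any particular switch implementation) shows how a flow's 5-tuple deterministically maps to one of the candidate paths:

```python
import hashlib

def ecmp_select_path(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Pick one of the candidate paths by hashing the flow 5-tuple.

    All packets of the same flow hash to the same path, which is exactly
    the static behavior the text below identifies as a limitation.
    """
    key = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}:{proto}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# Example: two large flows may collide on the same core path purely by chance.
paths = ["via-Core1", "via-Core2", "via-Core3", "via-Core4"]
print(ecmp_select_path("10.0.0.1", "10.0.1.1", 5001, 80, "tcp", paths))
```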
ECMP, however, is a static mapping: packets belonging to the same flow are always mapped onto a fixed path. Moreover, ECMP selects paths purely from the hash result, without considering network utilization or flow size. As a result, several large flows may pick the same path, forming a network bottleneck and lowering the overall network utilization. Fig. 2 illustrates two types of flow collisions. As shown in Fig. 2, flow A and flow B pick the same path because their hash values collide at switch Agg0. In addition, switches Agg1 and Agg2 independently forward the packets of flow C and flow D, which then collide at switch Core2. Assuming every link in Fig. 2 is 1 Gbps, each of the four flows can obtain only 500 Mbps because of these collisions. If the system adopted a better scheduling strategy based on network resource usage and flow information, for example scheduling flow A to switch Core1 and flow D to switch Core3, all four flows could reach 1 Gbps.
Thus, the static mapping of ECMP ignores network utilization and flow size, which may cause flow collisions and reduce the overall utilization of the network.
Reference 1 ("Hedera: Dynamic Flow Scheduling for Data Center Networks", Mohammad Al-Fares et al., NSDI 2010) proposes a dynamic flow scheduling system for data center networks. The system first detects large flows at the edge switches, then estimates the bandwidth demands of these large flows and computes optimal paths for them. Finally, these paths are installed on the switches.
Fig. 3 shows the architecture of Hedera. As shown in Fig. 3, Hedera is a flow scheduling system designed and implemented on the fat-tree network topology (see Reference 2: "A Scalable, Commodity Data Center Network Architecture", Mohammad Al-Fares et al., SIGCOMM 2008). The switches in a fat tree are organized in three layers from bottom to top: the edge layer, the aggregation layer and the core layer. All switches in the different layers are identical, each with k ports. Fig. 3 shows a fat tree with k = 4. The fat tree is divided into k independent pods, shown as the k dashed boxes spanning the edge and aggregation layers in the figure; the k switches inside each box form one pod. A fat-tree topology can support k³/4 hosts using 5k²/4 k-port switches. The switches run OpenFlow, and the OpenFlow controller can access the switches' forwarding tables and query, insert or modify their entries. The flow scheduler schedules flows onto non-conflicting paths according to current network conditions. The OpenFlow controller and the flow scheduler work in a logically centralized central manager.
Fig. 4 shows the block diagram of Hedera. As shown in Fig. 4, the Hedera system 40 consists of switches 410 and a central manager 420.
A switch 410 comprises a forwarding table 4110, a packet forwarding unit 4120 and a flow statistics reporting unit 4130. The forwarding table 4110 stores the mapping between flows and output ports. The packet forwarding unit 4120 forwards packets according to the mappings stored in the forwarding table 4110. The flow statistics reporting unit 4130 collects flow statistics and sends them to the central manager 420 periodically (for example every 5 seconds).
The central manager 420 comprises a switch control unit 4210, a flow scheduling unit 4220, a flow demand estimation unit 4230, a flow demand table 4240 and a flow statistics collection unit 4250. The flow statistics collection unit 4250 receives flow information from the flow statistics reporting unit 4130 and detects large flows in every scheduling cycle (e.g. 5 seconds). The flow demand table 4240 stores the flow demand matrix of all hosts. The flow demand estimation unit 4230 uses the collected flow statistics to estimate the bandwidth demands of flows. The flow scheduling unit 4220 assigns suitable paths to large flows from among all feasible paths. The switch control unit 4210 creates forwarding entries on the corresponding switches according to the scheduling results of the flow scheduling unit 4220, so that the flows are forwarded over the newly selected paths.
Specifically, when a new flow starts transmission, the default behavior of a switch 410 is to select one of the equal-cost paths based on the flow's hash value (as described for ECMP). The switch 410 keeps forwarding the flow on this path until the flow's rate exceeds a predetermined threshold, for example 100 Mbps (10% of a 1 GigE link). In every scheduling cycle, the flow statistics collection unit 4250 obtains traffic statistics from the edge switches to detect large flows. The flow demand estimation unit 4230 then estimates the bandwidth demands of the large flows. Next, the flow scheduling unit 4220 computes suitable paths for the large flows based on their bandwidth demands. Finally, the switch control unit 4210 installs these paths on the corresponding switches and modifies the forwarding entries so that the flows are forwarded over the newly selected paths.
The input to the flow demand estimation unit 4230 is the flow information of all current large flows. It maintains an N×N matrix M, where N is the number of hosts, and the element M(i, j) in row i and column j holds the estimated demand of the flow from host i to host j. The flow demand estimation unit 4230 repeats the following operations: increase the bandwidth demand of each flow at its source host, and decrease the bandwidth of flows that exceed the bandwidth limit at their destination hosts, until all flows converge to their own bandwidth demands.
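The following is a minimal, illustrative sketch of one such source-increase / destination-decrease round, under the simplifying assumption that each sender divides its link capacity equally among its flows and each receiver scales down flows that oversubscribe its link; it is an approximation for clarity, not Hedera's exact procedure:

```python
def estimate_demands(flows, link_capacity=1.0):
    """One round of source-increase / destination-decrease demand estimation.

    flows: list of dicts with 'src' and 'dst'. Returns one demand per flow,
    expressed as a fraction of link_capacity.
    """
    demand = [0.0] * len(flows)
    # Source step: share each sender's link capacity equally among its flows.
    by_src = {}
    for idx, f in enumerate(flows):
        by_src.setdefault(f["src"], []).append(idx)
    for idxs in by_src.values():
        for i in idxs:
            demand[i] = link_capacity / len(idxs)
    # Destination step: scale down flows that oversubscribe a receiver's link.
    by_dst = {}
    for idx, f in enumerate(flows):
        by_dst.setdefault(f["dst"], []).append(idx)
    for idxs in by_dst.values():
        total = sum(demand[i] for i in idxs)
        if total > link_capacity:
            for i in idxs:
                demand[i] *= link_capacity / total
    return demand

flows = [{"src": "h1", "dst": "h3"}, {"src": "h1", "dst": "h4"},
         {"src": "h2", "dst": "h3"}]
print(estimate_demands(flows))  # the two flows into h3 are scaled down
```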
After the bandwidth demands of the large flows have been estimated, the flow scheduling unit 4220 computes suitable paths for them according to these demands. Reference 1 employs two scheduling algorithms. The first is Global First Fit: when a new large flow is detected, the scheduler linearly searches all possible paths and selects the first path on which every link can satisfy the bandwidth demand. The second is Simulated Annealing, a general probabilistic algorithm that searches the state space for a near-optimal solution. To reduce the number of states in the search space, the Hedera system 40 assigns a core switch to each destination host rather than to each flow.
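For concreteness, a small sketch of the Global First Fit idea follows; the path and reservation representations are assumptions made for illustration and are not code from Reference 1:

```python
def global_first_fit(flow_demand, paths, reserved):
    """Return the first path whose every link still has room for the flow.

    paths: list of paths, each a list of link identifiers.
    reserved: dict mapping link id -> bandwidth already reserved on it.
    Link capacities are normalized to 1.0 for simplicity.
    """
    for path in paths:
        if all(reserved.get(link, 0.0) + flow_demand <= 1.0 for link in path):
            for link in path:  # reserve the demand on the chosen path
                reserved[link] = reserved.get(link, 0.0) + flow_demand
            return path
    return None  # no single path can fit the demand

reserved = {"edge0-agg0": 0.6}
paths = [["edge0-agg0", "agg0-core0"], ["edge0-agg1", "agg1-core2"]]
print(global_first_fit(0.5, paths, reserved))  # picks the second path
```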
However, the flow scheduling mechanism of the Hedera system has the following two shortcomings:
First, it cannot obtain the actual bandwidth demand of a flow promptly and accurately, because the demand is estimated from traffic statistics measured at the edge switches, which sometimes do not reflect the application's real bandwidth demand. Moreover, scheduling is performed only once per scheduling cycle (e.g. 5 seconds), and the traffic may change considerably within one cycle.
Second, because the scheduling granularity is a whole flow, in some cases it is not sufficient to balance network utilization and achieve a large aggregate network bandwidth.
Summary of the invention
To solve the above problems, the invention provides a dynamic flow-splitting scheduling system for a data center network. A host applies to the data center's flow scheduler to reserve bandwidth for a specific flow. The flow scheduler checks the bandwidth usage of all paths that this flow may take; if the sum of the remaining bandwidth on these paths meets the flow's bandwidth demand, the flow is split into one or more subflows, which are then forwarded over the corresponding paths. If the remaining bandwidth on all of these paths cannot meet the demand, existing flows on them are moved onto other paths to free space for forwarding this flow.
According to one aspect of the invention, a central manager is provided, comprising: a bandwidth demand collection unit for collecting the bandwidth demand of a specific flow from a source host; a storage unit for storing the bandwidth usage of links and the feasible paths between host pairs; a flow scheduling unit for allocating one or more paths to the specific flow from its feasible paths according to the current link bandwidth usage, returning the allocated paths to the source host, and receiving subflow information from the source host; and a switch control unit for configuring the corresponding switches according to the scheduling results of the flow scheduling unit.
Preferably, the flow scheduling unit is configured to: calculate the sum of the idle bandwidth of the feasible paths of the specific flow; and, if this sum is greater than or equal to the bandwidth demand of the specific flow, allocate one or more paths to the specific flow from its feasible paths.
Preferably, the flow scheduling unit is further configured to: if the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to its bandwidth demand, sort the feasible paths of the specific flow in descending order of idle bandwidth and select paths one by one from the sorted list, until the total idle bandwidth of the selected paths is greater than or equal to the bandwidth demand of the specific flow.
Preferably, the flow scheduling unit is further configured to: if the sum of the idle bandwidth of the feasible paths of the specific flow is less than its bandwidth demand, find the other flows that pass through these feasible paths and calculate the sum of the idle bandwidth of the feasible paths of those other flows; and, if that sum is greater than or equal to the bandwidth demand of the specific flow, reschedule the other flows and then allocate one or more paths to the specific flow from its feasible paths.
Preferably, the flow scheduling unit is further configured to: if the sum of the idle bandwidth of the feasible paths of the other flows is greater than or equal to the bandwidth demand of the specific flow, sort the other flows in descending order of the idle bandwidth of their feasible paths and reschedule them one by one, until the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to its bandwidth demand.
Preferably, the flow scheduling unit is further configured to: if the sum of the idle bandwidth of the feasible paths of the other flows is less than the bandwidth demand of the specific flow, reschedule the other flows onto their idle paths, and then distribute the idle bandwidth of the feasible paths of the specific flow according to the max-min fairness criterion between the specific flow and the rescheduled flows that still pass through those feasible paths.
Preferably, the flow scheduling unit is further configured to: send the subflow information received from the source host to the destination host.
Preferably, the storage unit also stores the subflow information.
Preferably, the subflow information comprises the port numbers of the subflows.
According to another aspect of the invention, a flow scheduling method is provided, comprising: collecting the bandwidth demand of a specific flow from a source host; allocating one or more paths to the specific flow from its feasible paths according to the current link bandwidth usage, returning the allocated paths to the source host, and receiving subflow information from the source host; and configuring the corresponding switches according to the scheduling results.
Preferably, the step of allocating paths comprises: calculating the sum of the idle bandwidth of the feasible paths of the specific flow; and, if this sum is greater than or equal to the bandwidth demand of the specific flow, allocating one or more paths to the specific flow from its feasible paths.
Preferably, if the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to its bandwidth demand, the feasible paths of the specific flow are sorted in descending order of idle bandwidth and paths are selected one by one from the sorted list, until the total idle bandwidth of the selected paths is greater than or equal to the bandwidth demand of the specific flow.
Preferably, if the sum of the idle bandwidth of the feasible paths of the specific flow is less than its bandwidth demand, the other flows passing through these feasible paths are found and the sum of the idle bandwidth of the feasible paths of those other flows is calculated; and, if that sum is greater than or equal to the bandwidth demand of the specific flow, the other flows are rescheduled and one or more paths are then allocated to the specific flow from its feasible paths.
Preferably, if the sum of the idle bandwidth of the feasible paths of the other flows is greater than or equal to the bandwidth demand of the specific flow, the other flows are sorted in descending order of the idle bandwidth of their feasible paths and are rescheduled one by one, until the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to its bandwidth demand.
Preferably, if the sum of the idle bandwidth of the feasible paths of the other flows is less than the bandwidth demand of the specific flow, the other flows are rescheduled onto their idle paths, and the idle bandwidth of the feasible paths of the specific flow is then distributed according to the max-min fairness criterion between the specific flow and the rescheduled flows that still pass through those feasible paths.
Preferably, the method further comprises: sending the subflow information received from the source host to the destination host.
Preferably, the subflow information comprises the port numbers of the subflows.
According to a further aspect of the invention, a host is provided, comprising: a bandwidth reservation unit for applying to a central manager for bandwidth for a specific flow; and a flow control unit for splitting the specific flow into one or more subflows according to the scheduling result returned by the central manager, and establishing the corresponding connections so that the one or more subflows are forwarded over the paths allocated by the central manager.
Preferably, the host further comprises: a data reassembly unit for reassembling the data received over the one or more subflows.
Preferably, the flow control unit is configured to: adjust the corresponding connections if an instruction to change a subflow is received from the central manager.
The dynamic flow-splitting scheduling system and method of the invention can obtain the actual bandwidth demand of a flow promptly and accurately, and can schedule at a granularity finer than a whole flow (i.e. subflows). Therefore, the dynamic flow-splitting scheduling system and method of the invention balance the utilization of the entire data center network and increase its aggregate bandwidth.
Brief description of the drawings
The above and other features of the invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a multi-rooted tree topology in the prior art;
Fig. 2 is a schematic diagram of two types of flow collisions in the prior art;
Fig. 3 is a schematic diagram of the Hedera system in the prior art;
Fig. 4 is a block diagram of the Hedera system in the prior art;
Fig. 5 is a block diagram of a dynamic flow-splitting scheduling system according to an embodiment of the invention;
Fig. 6 is a flowchart of the operation of the dynamic flow-splitting scheduling system according to an embodiment of the invention;
Fig. 7 is a flowchart of the flow scheduling algorithm according to an embodiment of the invention;
Fig. 8 is a block diagram of a dynamic flow-splitting scheduling system according to another embodiment of the invention; and
Fig. 9 is a flowchart of the operation of the dynamic flow-splitting scheduling system according to another embodiment of the invention.
Detailed description of the embodiments
The principles and implementation of the invention will become apparent from the following description of specific embodiments in conjunction with the accompanying drawings. It should be noted that the invention is not limited to the specific embodiments described below. In addition, for brevity, detailed descriptions of well-known techniques not directly related to the invention are omitted.
Fig. 5 shows a block diagram of a dynamic flow-splitting scheduling system 50 according to an embodiment of the invention. As shown in Fig. 5, the dynamic flow-splitting scheduling system 50 consists of three types of nodes: hosts 510, a central manager 520 and switches 530.
A switch 530 comprises a forwarding table 5310 and a packet forwarding unit 5320. The forwarding table 5310 stores the mapping between flows and output ports. The packet forwarding unit 5320 forwards packets according to the mappings stored in the forwarding table 5310.
A host 510 comprises a flow control unit 5110 and a bandwidth reservation unit 5120. The bandwidth reservation unit 5120 applies to the central manager 520 for reserved bandwidth for a specific flow. The flow control unit 5110 splits the specific flow into one or more subflows according to the scheduling result provided by the central manager 520, and then forwards these subflows over the corresponding paths.
The central manager 520 comprises a switch control unit 5210, a flow scheduling unit 5220, a bandwidth usage and routing information storage unit 5230 and a bandwidth demand collection unit 5240. The bandwidth demand collection unit 5240 collects the bandwidth demands of flows from the hosts 510. The bandwidth usage and routing information storage unit 5230 stores the bandwidth usage of all links and all feasible paths between any pair of hosts. The flow scheduling unit 5220 allocates one or more suitable paths to a specific flow from all its feasible paths, based on the bandwidth usage of the links in the current network. The switch control unit 5210 creates forwarding entries on the corresponding switches according to the scheduling results of the flow scheduling unit 5220, so that the flow is forwarded.
The operation of the dynamic flow-splitting scheduling system 50 shown in Fig. 5 is described in detail below with reference to Fig. 6.
Fig. 6 shows the operation flowchart of the dynamic flow-splitting scheduling system 50 according to an embodiment of the invention. In Fig. 6, host A, host B and host C can each be realized by the host 510 shown in Fig. 5, the central manager can be realized by the central manager 520 shown in Fig. 5, and the switches can be realized by the switch 530 shown in Fig. 5. The specific operations are as follows:
1.1: An application on source host A informs the bandwidth reservation unit of the bandwidth demand of the flow corresponding to that application.
1.2: The bandwidth reservation unit in source host A applies to the central manager for a bandwidth reservation.
1.3: The flow scheduling unit in the central manager performs flow scheduling according to the scheduling algorithm (described below), i.e. it allocates one or more paths to this specific flow from all its feasible paths.
1.4: The central manager returns the allocated paths and the bandwidth reserved on each path to source host A, which applied for the reservation.
1.5: The flow control unit of source host A splits this specific flow into one or more subflows according to the scheduling result.
1.6: Source host A reports the subflow information (for example the port numbers used by the subflows) to the central manager.
1.7: The switch control unit in the central manager creates forwarding entries on the corresponding switches, so that the switches forward these flows over the newly selected paths.
1.8: In some cases, the central manager may require other hosts (for example host C) to change the paths and occupied bandwidth of some existing flows, in order to make room for the specific flow whose bandwidth is being reserved.
1.9: Host C, on receiving the instruction, changes the paths and occupied bandwidth of some existing flows as required by the central manager.
1.10: Source host A sends data over one or more connections, each connection corresponding to one subflow.
1.11: Destination host B receives the data over the one or more subflows.
Fig. 7 shows the flowchart of the flow scheduling algorithm according to an embodiment of the invention. This algorithm can be executed by the flow scheduling unit 5220 of the central manager 520 shown in Fig. 5 to carry out flow scheduling.
First, in step S701, the flow F and its bandwidth demand R_F are obtained.
In step S702, all candidate paths of flow F are found. Assume there are N paths {P_i, 1 <= i <= N}, and the idle bandwidth of path P_i is C_i.
In step S703, the total idle bandwidth of the N paths is calculated, i.e. S_F = Σ_{i=1..N} C_i.
Next, in step S704, it is judged whether S_F is greater than or equal to R_F. If S_F >= R_F, the idle bandwidth on the existing N paths can meet the bandwidth demand of flow F, and processing continues at step S705. If S_F < R_F, the idle bandwidth on the existing N paths cannot meet the bandwidth demand of flow F, so other flows on these paths must make room; processing then continues at step S707.
If S_F >= R_F, then in step S705 the paths P_i are sorted in descending order of their idle bandwidth C_i and selected one by one in this order until the selected paths satisfy Σ C_i >= R_F; assume M paths have been selected. Next, in step S706, the M selected paths are allocated to flow F, and the idle bandwidth C_i on each selected path is reserved for flow F. The flow scheduling then finishes and returns.
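A minimal sketch of the selection in steps S703 to S706 is given below; the data structures are assumed for illustration, and the sketch reserves only the still-needed bandwidth on the last selected path, which is one reasonable reading of step S706 rather than the only possible one:

```python
def select_paths(paths, demand):
    """Greedy path selection corresponding to steps S705-S706.

    paths: dict mapping path name -> idle bandwidth C_i.
    demand: bandwidth demand R_F of flow F.
    Returns a dict of reserved bandwidth per selected path, or None if the
    total idle bandwidth S_F is insufficient (the S_F < R_F branch of S704).
    """
    total_idle = sum(paths.values())            # S_F (step S703)
    if total_idle < demand:
        return None                             # fall through to step S707
    reserved, remaining = {}, demand
    for name, idle in sorted(paths.items(), key=lambda kv: kv[1], reverse=True):
        take = min(idle, remaining)             # reserve on this path
        reserved[name] = take
        remaining -= take
        if remaining <= 0:
            break
    return reserved

print(select_paths({"P1": 300, "P2": 500, "P3": 100}, 600))
# {'P2': 500, 'P1': 100}: two paths suffice to cover the demand of 600
```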
If S_F < R_F, then in step S707 all flows {F_j, 1 <= j <= L} that pass through any of the paths {P_i, 1 <= i <= N} are found, i.e. the L flows that share paths with flow F. Because the idle bandwidth on the existing N paths cannot meet the bandwidth demand of flow F, some of these L flows must vacate enough space to satisfy it.
In step S708, for each flow F_j, all of its candidate paths {H_jk, 1 <= k <= N_j} are found, where flow F_j has N_j paths and the idle bandwidth of path H_jk is C_jk. Over all these paths {H_jk, 1 <= k <= N_j}, assume there are T distinct paths in total; because the paths H_jk of different flows F_j may overlap, denote the idle bandwidth of each of the T paths by C_t.
In step S709, the total idle bandwidth of the T paths, Sum = Σ_{t=1..T} C_t, and the idle bandwidth of each flow F_j, S_j = Σ_{k=1..N_j} C_jk, are calculated.
In step S710, it is judged whether the total idle bandwidth Sum of the T paths is greater than or equal to R_F. If Sum >= R_F, the space vacated by rescheduling the L flows F_j onto other paths can meet the bandwidth demand of flow F, and processing continues at step S711. If Sum < R_F, even rescheduling the L flows F_j onto other paths cannot meet the bandwidth demand of flow F, and processing continues at step S712.
If Sum >= R_F, then in step S711 the flows F_j are sorted in descending order of the idle bandwidth S_j of all their paths; the flows F_j are chosen one by one and rescheduled onto their idle paths, and the vacated bandwidth is allocated to flow F, until S_F >= R_F. Rescheduling includes re-splitting existing subflows, merging them and changing their bandwidth. Once S_F >= R_F, processing continues at step S705.
If Sum < R_F, then in step S712 each flow F_j is first rescheduled onto its idle paths so that the bandwidth of those idle paths is fully used, because the other paths of F_j may still have idle bandwidth S_j, whereas the paths {P_i, 1 <= i <= N} of flow F have no idle bandwidth left once F joins. Then, in step S713, since even rescheduling the L flows F_j cannot meet the bandwidth demand of flow F, the idle bandwidth of the paths {P_i, 1 <= i <= N} is distributed according to the max-min fairness criterion between flow F and the flows F_j that, after the rescheduling of step S712, still pass through any of the paths {P_i, 1 <= i <= N}; the resulting paths P_i and the corresponding bandwidth are then allocated to flow F. The flow scheduling then finishes and returns.
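The max-min fairness criterion used in step S713 can be computed with the classic water-filling procedure; the sketch below shows that standard computation for a single shared capacity and is not code taken from the patent:

```python
def max_min_share(capacity, demands):
    """Max-min fair allocation of `capacity` among flows with given demands.

    Repeatedly gives every unsatisfied flow an equal share of the leftover
    capacity, capping flows whose remaining demand is below that share.
    """
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        satisfied = {f for f in active if demands[f] - alloc[f] <= share}
        if not satisfied:
            for f in active:           # nobody can be fully satisfied:
                alloc[f] += share      # split the rest equally
            remaining = 0.0
        else:
            for f in satisfied:        # cap the small demands first
                remaining -= demands[f] - alloc[f]
                alloc[f] = demands[f]
            active -= satisfied
    return alloc

# Flow F demands 600; two leftover flows demand 200 and 800 on a 1000 capacity.
print(max_min_share(1000, {"F": 600, "Fj1": 200, "Fj2": 800}))
# {'F': 400.0, 'Fj1': 200, 'Fj2': 400.0}
```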
After the flow scheduling finishes, the flow control unit 5110 in the host 510 obtains the scheduling result from the central manager 520, which comprises the paths allocated to the specific flow and the bandwidth reserved on each path. The flow control unit 5110 splits the specific flow into one or more subflows according to the allocated paths (each subflow corresponding to one allocated path), and then creates one or more connections, each connection corresponding to one subflow. In this way, the flow control unit 5110 sends the corresponding data traffic according to the bandwidth reservation on each path.
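A minimal host-side sketch of this splitting step is shown below, under the assumption that each subflow is carried by its own TCP connection and that data is dispatched in proportion to the bandwidth reserved on each path; the connection setup details and the signalling with the central manager are omitted:

```python
import socket

def open_subflows(dst_addr, scheduling_result):
    """Open one TCP connection per allocated path.

    dst_addr: (host, port) of the destination.
    scheduling_result: dict mapping path name -> reserved bandwidth (bps),
    as returned by the central manager in operation 1.4.
    """
    subflows = []
    for path, reserved_bw in scheduling_result.items():
        conn = socket.create_connection(dst_addr)
        subflows.append({"path": path, "conn": conn, "bw": reserved_bw})
    return subflows

def dispatch(data, subflows):
    """Split `data` across the subflows in proportion to reserved bandwidth."""
    total_bw = sum(sf["bw"] for sf in subflows)
    offset = 0
    for i, sf in enumerate(subflows):
        if i == len(subflows) - 1:
            size = len(data) - offset          # last subflow takes the rest
        else:
            size = int(len(data) * sf["bw"] / total_bw)
        sf["conn"].sendall(data[offset:offset + size])
        offset += size
```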
During data transmission, if the flow control unit 5110 receives an instruction from the central manager 520 to change a flow (for example to re-split the flow, to merge subflows, or to change the bandwidth reserved for a subflow), the flow control unit 5110 can close the corresponding connections and possibly open new ones.
In addition, during data transmission the flow control unit 5110 controls the data traffic of the upper-layer application. If the traffic sent by the application exceeds the bandwidth reserved for it, the flow control unit 5110 can buffer the application's data and send it at the rate of the bandwidth reserved by the central manager 520. If a large bandwidth has been reserved for an application but its sending rate stays below the reservation for a long time, the flow control unit 5110 can notify the central manager 520 to revise the resource reservation.
After the application finishes, the flow control unit 5110 can notify the central manager 520 to cancel the corresponding resource reservation.
It should be noted that, during flow scheduling, the flow scheduling unit 5220 in the central manager 520 needs to know the bandwidth usage of all links and all feasible paths between any pair of hosts. For this purpose, the bandwidth usage and routing information storage unit 5230 stores a routing table containing all feasible paths between any pair of hosts, and a bandwidth usage table containing, for every subflow, the flow it belongs to, the path it traverses and the bandwidth reserved on that path. Table 1 and Table 2 below give an example of the routing table and the bandwidth usage table, respectively:
(Source, destination)     Feasible paths
(host A, host B)          path 1, path 2, ...
(host A, host C)          path 2, path 3, ...
...                       ...

Table 1: Routing table
Subflow      Flow      Path      Reserved bandwidth
subflow 1    flow 1    path 1    100 Mbps
subflow 2    flow 1    path 2    100 Mbps
...          ...       ...       ...

Table 2: Bandwidth usage table
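As a sketch of how the storage unit 5230 might represent these two tables in memory, the following uses assumed field names chosen for illustration rather than definitions from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class RoutingInfo:
    """Routing table (Table 1): feasible paths per (source, destination) pair."""
    feasible_paths: dict = field(default_factory=dict)

    def paths_for(self, src, dst):
        return self.feasible_paths.get((src, dst), [])

@dataclass
class SubflowEntry:
    """One row of the bandwidth usage table (Table 2)."""
    subflow: str
    flow: str
    path: str
    reserved_mbps: int

routing = RoutingInfo({("hostA", "hostB"): ["path1", "path2"],
                       ("hostA", "hostC"): ["path2", "path3"]})
usage = [SubflowEntry("subflow1", "flow1", "path1", 100),
         SubflowEntry("subflow2", "flow1", "path2", 100)]
print(routing.paths_for("hostA", "hostB"), usage[0])
```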
Some specific applications, such as streaming media, require the received data to remain in order. However, because the data is split into multiple subflows at the host and transmitted over multiple connections, ordering usually cannot be guaranteed. For this purpose, the receiving host needs to perform data reassembly.
Fig. 8 shows a block diagram of a dynamic flow-splitting scheduling system 80 according to another embodiment of the invention, which can reassemble the received data. The dynamic flow-splitting scheduling system 80 shown in Fig. 8 differs from the dynamic flow-splitting scheduling system 50 shown in Fig. 5 only in that the host 810 additionally comprises a data reassembly unit 8130, and the bandwidth usage and routing information storage unit 8230 in the central manager 820 stores different content. Specifically, in this embodiment the data reassembly unit 8130 reassembles the data received over multiple subflows, and the bandwidth usage and routing information storage unit 8230 stores not only the bandwidth usage of all links and all feasible paths between any pair of hosts, but also the subflow information, such as port numbers. The operation of the dynamic flow-splitting scheduling system 80 shown in Fig. 8 is described in detail below with reference to Fig. 9.
Fig. 9 shows the operation flowchart of the dynamic flow-splitting scheduling system 80 according to another embodiment of the invention. In Fig. 9, host A, host B and host C can each be realized by the host 810 shown in Fig. 8, the central manager can be realized by the central manager 820 shown in Fig. 8, and the switches can be realized by the switch 830 shown in Fig. 8.
It can be seen that operations 1.1 to 1.11 shown in Fig. 9 are identical to operations 1.1 to 1.11 shown in Fig. 6, so their detailed description is omitted here. Compared with Fig. 6, Fig. 9 additionally comprises two operations: 1.12 and 1.13.
To enable destination host B to reassemble the data received over multiple subflows so that the upper-layer application receives the data in order, the data reassembly unit needs to know which subflows belong to the same flow. Therefore, the central manager supplies the subflow information to destination host B (operation 1.12). Destination host B then receives data from the multiple subflows and reassembles the received data according to the subflow information provided by the central manager (operation 1.13).
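A minimal receiver-side sketch of operation 1.13 is given below; because the patent does not specify the framing, it assumes the sender tags each chunk with its byte offset in the original flow, which is an illustrative choice:

```python
def reassemble(tagged_chunks):
    """Reassemble data received over several subflows of the same flow.

    tagged_chunks: iterable of (offset, bytes) pairs, possibly out of order
    and interleaved across the subflows that the central manager has
    reported as belonging to the same flow.
    """
    buffer = {}
    for offset, chunk in tagged_chunks:
        buffer[offset] = chunk
    return b"".join(buffer[off] for off in sorted(buffer))

chunks = [(6, b"data center "), (0, b"hello "), (18, b"network")]
print(reassemble(chunks))  # b'hello data center network'
```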
Thus, even if the data is split into multiple subflows and transmitted over multiple connections, the dynamic flow-splitting scheduling system 80 can restore the order of the received data and thereby meet the requirements of specific applications such as streaming media.
Although the invention has been shown above in conjunction with its preferred embodiments, those skilled in the art will appreciate that various modifications, substitutions and changes can be made to the invention without departing from its spirit and scope. Therefore, the invention should not be limited by the embodiments described above, but should be defined by the appended claims and their equivalents.

Claims (13)

1. A central manager, characterized in that the central manager comprises:
a bandwidth demand collection unit for collecting the bandwidth demand of a specific flow from a source host;
a storage unit for storing the bandwidth usage of links and the feasible paths between host pairs;
a flow scheduling unit for allocating one or more paths to the specific flow from its feasible paths according to the current link bandwidth usage, returning the allocated paths to the source host, and receiving subflow information from the source host; wherein the source host splits the specific flow into one or more subflows according to the allocated paths and forwards the subflows over the corresponding paths; and
a switch control unit for configuring the corresponding switches according to the scheduling results of the flow scheduling unit;
wherein the flow scheduling unit is configured to: calculate the sum of the idle bandwidth of the feasible paths of the specific flow; and, if this sum is greater than or equal to the bandwidth demand of the specific flow, allocate one or more paths to the specific flow from its feasible paths, such that the sum of the idle bandwidth of the allocated paths is greater than or equal to the bandwidth demand of the specific flow;
wherein the flow scheduling unit is further configured to: if the sum of the idle bandwidth of the feasible paths of the specific flow is less than the bandwidth demand of the specific flow, find the other flows that pass through these feasible paths and calculate the sum of the idle bandwidth of the feasible paths of those other flows; and, if that sum is greater than or equal to the bandwidth demand of the specific flow, reschedule the other flows and then allocate one or more paths to the specific flow from its feasible paths, such that the sum of the idle bandwidth of the allocated paths is greater than or equal to the bandwidth demand of the specific flow.
2. The central manager according to claim 1, characterized in that the flow scheduling unit is further configured to:
if the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to the bandwidth demand of the specific flow, sort the feasible paths of the specific flow in descending order of idle bandwidth and select paths one by one from the sorted list, until the total idle bandwidth of the selected paths is greater than or equal to the bandwidth demand of the specific flow.
3. The central manager according to claim 1, characterized in that the flow scheduling unit is further configured to:
if the sum of the idle bandwidth of the feasible paths of the other flows is greater than or equal to the bandwidth demand of the specific flow, sort the other flows in descending order of the idle bandwidth of their feasible paths and reschedule them one by one, until the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to the bandwidth demand of the specific flow.
4. The central manager according to claim 1, characterized in that the flow scheduling unit is further configured to:
if the sum of the idle bandwidth of the feasible paths of the other flows is less than the bandwidth demand of the specific flow, reschedule the other flows onto their idle paths, and then distribute the idle bandwidth of the feasible paths of the specific flow according to the max-min fairness criterion between the specific flow and the rescheduled flows that still pass through those feasible paths.
5. The central manager according to claim 1, characterized in that the flow scheduling unit is further configured to:
send the subflow information received from the source host to the destination host.
6. The central manager according to claim 1, characterized in that the storage unit also stores the subflow information.
7. The central manager according to claim 5 or 6, characterized in that the subflow information comprises the port numbers of the subflows.
8. A flow scheduling method, characterized in that the flow scheduling method comprises:
collecting the bandwidth demand of a specific flow from a source host;
allocating one or more paths to the specific flow from its feasible paths according to the current link bandwidth usage, returning the allocated paths to the source host, and receiving subflow information from the source host; wherein the source host splits the specific flow into one or more subflows according to the allocated paths and forwards the subflows over the corresponding paths; and
configuring the corresponding switches according to the scheduling results;
wherein the step of allocating paths comprises: calculating the sum of the idle bandwidth of the feasible paths of the specific flow; and, if this sum is greater than or equal to the bandwidth demand of the specific flow, allocating one or more paths to the specific flow from its feasible paths, such that the sum of the idle bandwidth of the allocated paths is greater than or equal to the bandwidth demand of the specific flow;
wherein, if the sum of the idle bandwidth of the feasible paths of the specific flow is less than the bandwidth demand of the specific flow, the other flows passing through these feasible paths are found and the sum of the idle bandwidth of the feasible paths of those other flows is calculated; and, if that sum is greater than or equal to the bandwidth demand of the specific flow, the other flows are rescheduled and one or more paths are then allocated to the specific flow from its feasible paths, such that the sum of the idle bandwidth of the allocated paths is greater than or equal to the bandwidth demand of the specific flow.
9. The method according to claim 8, characterized in that, if the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to the bandwidth demand of the specific flow, the feasible paths of the specific flow are sorted in descending order of idle bandwidth and paths are selected one by one from the sorted list, until the total idle bandwidth of the selected paths is greater than or equal to the bandwidth demand of the specific flow.
10. The method according to claim 8, characterized in that, if the sum of the idle bandwidth of the feasible paths of the other flows is greater than or equal to the bandwidth demand of the specific flow, the other flows are sorted in descending order of the idle bandwidth of their feasible paths and are rescheduled one by one, until the sum of the idle bandwidth of the feasible paths of the specific flow is greater than or equal to the bandwidth demand of the specific flow.
11. The method according to claim 8, characterized in that, if the sum of the idle bandwidth of the feasible paths of the other flows is less than the bandwidth demand of the specific flow, the other flows are rescheduled onto their idle paths, and the idle bandwidth of the feasible paths of the specific flow is then distributed according to the max-min fairness criterion between the specific flow and the rescheduled flows that still pass through those feasible paths.
12. The method according to claim 8, characterized in that the method further comprises:
sending the subflow information received from the source host to the destination host.
13. The method according to claim 12, characterized in that the subflow information comprises the port numbers of the subflows.
CN201110085812.7A 2011-03-28 2011-03-28 Dynamic flow-splitting scheduling system and method Expired - Fee Related CN102710489B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110085812.7A CN102710489B (en) 2011-03-28 2011-03-28 Dynamic flow-splitting scheduling system and method
JP2011244242A JP5324637B2 (en) 2011-03-28 2011-11-08 Dynamic flowlet scheduling system, flow scheduling method, and flow scheduling program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110085812.7A CN102710489B (en) 2011-03-28 2011-03-28 Dynamic flow-splitting scheduling system and method

Publications (2)

Publication Number Publication Date
CN102710489A CN102710489A (en) 2012-10-03
CN102710489B true CN102710489B (en) 2015-07-29

Family

ID=46903059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110085812.7A Expired - Fee Related CN102710489B (en) 2011-03-28 2011-03-28 Dynamic flow-splitting scheduling system and method

Country Status (2)

Country Link
JP (1) JP5324637B2 (en)
CN (1) CN102710489B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254273A (en) * 2016-08-09 2016-12-21 清华大学深圳研究生院 Scheduling method and system for data center network flows

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2614811B2 (en) 1993-05-20 1997-05-28 富士電気化学株式会社 Manufacturing method of magnetite powder
JP2614810B2 (en) 1993-05-20 1997-05-28 富士電気化学株式会社 Method for producing single-phase magnetite powder
US10735327B2 (en) 2013-06-14 2020-08-04 Microsoft Technology Licensing, Llc Fault tolerant and load balanced routing
CN104426792B (en) * 2013-09-05 2019-08-13 中兴通讯股份有限公司 Method and device for querying and notifying scheduler support capability
CN104579722A (en) * 2013-10-11 2015-04-29 中兴通讯股份有限公司 Flow statistics capability negotiation method and apparatus
US9300528B2 (en) * 2013-12-13 2016-03-29 International Business Machines Corporation Trill network with multipath redundancy
US9397957B2 (en) * 2013-12-23 2016-07-19 Google Inc. Traffic engineering for large scale data center networks
CN103729427B (en) * 2013-12-25 2017-08-29 南京未来网络产业创新有限公司 Flow table conversion method based on incremental updates of user-defined multilevel flow tables
CN103825838B (en) * 2014-02-24 2017-11-10 上海交通大学 Flow scheduling method for removing bandwidth fragmentation in a data center
CN104468352B (en) * 2014-12-26 2018-05-01 深圳市新格林耐特通信技术有限公司 SDN-based QoS support method for special flows
CN104618157B (en) * 2015-01-27 2018-05-18 华为技术有限公司 Network management method, device and system
CN105610709B (en) * 2016-02-03 2018-09-11 西安电子科技大学 SDN-based load balancing system and method for large flows
CN106027416B (en) * 2016-05-23 2019-03-01 北京邮电大学 Data center network traffic scheduling method and system based on space-time combination
CN107634912B (en) * 2016-07-19 2020-04-28 华为技术有限公司 Load balancing method, apparatus and device
CN109327400B (en) * 2017-08-01 2022-04-26 华为技术有限公司 Data communication method and data communication network
CN108199966A (en) * 2018-01-09 2018-06-22 莫毓昌 Dynamic network flow scheduling method and system for a high-performance data center
CN109361603B (en) * 2018-11-26 2021-03-23 浪潮思科网络科技有限公司 Method and system for dynamically adjusting equal-cost path traffic based on a programmable switching chip
CN109905329B (en) * 2019-01-04 2021-06-08 东南大学 Task-type-aware adaptive flow queue management method in a virtualized environment
CN110061935B (en) * 2019-03-13 2022-02-18 平安科技(深圳)有限公司 Flow source ratio adjusting method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1622524A (en) * 2004-12-17 2005-06-01 北京邮电大学 Mitigation and adjustment method of mobile IP burst traffic
CN101072171A (en) * 2006-05-09 2007-11-14 薄明霞 Integrated routing method for multi-fiber IP over WDM net
US20100020806A1 (en) * 2008-07-22 2010-01-28 Amin Vahdat Scalable Commodity Data Center Network Architecture

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3769544B2 (en) * 2003-01-31 2006-04-26 富士通株式会社 Transmission band control device
JP2006081086A (en) * 2004-09-13 2006-03-23 Matsushita Electric Ind Co Ltd Route selection device, route selection method, communication terminal device, and packet communication system
JP2010245640A (en) * 2009-04-01 2010-10-28 Mitsubishi Electric Corp Simulation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1622524A (en) * 2004-12-17 2005-06-01 北京邮电大学 Mitigation and adjustment method of mobile IP burst traffic
CN101072171A (en) * 2006-05-09 2007-11-14 薄明霞 Integrated routing method for multi-fiber IP over WDM net
US20100020806A1 (en) * 2008-07-22 2010-01-28 Amin Vahdat Scalable Commodity Data Center Network Architecture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Hedera: Dynamic flow scheduling for data center";M. Al-Fares ET AL;《Usenix NSDI 2010 》;20101231;第1页左栏第1行到第14页右栏末行 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254273A (en) * 2016-08-09 2016-12-21 清华大学深圳研究生院 Scheduling method and system for data center network flows

Also Published As

Publication number Publication date
JP2012209928A (en) 2012-10-25
CN102710489A (en) 2012-10-03
JP5324637B2 (en) 2013-10-23

Similar Documents

Publication Publication Date Title
CN102710489B (en) Dynamic flow-splitting scheduling system and method
CN105765928B (en) Method and network device for processing network packets
CN108173761A (en) A resource optimization method for the integration of SDN and NFV
CN105337861B (en) Routing method based on energy-efficiency priority and cognitive theory
CN101707788B (en) Differential pricing strategy based dynamic programming method of multilayer network services
CN109039897B (en) A software-defined backhaul network routing method based on service awareness
CN111245722B (en) SDN data center network flow forwarding method based on genetic algorithm
JP5943431B2 (en) Network, data transfer node, communication method and program
CN105960783A (en) Inter-domain SDN traffic engineering
CN104871490B (en) Multipath communication device capable of improving energy efficiency and traffic distribution method thereof
CN106209669A (en) Maximum-probability-path flow scheduling method and device for SDN data center networks
CN103259739A (en) Load balancing device and load balancing method
CN103795805A (en) Distributed server load balancing method based on SDN
Gálvez et al. Efficient rate allocation, routing and channel assignment in wireless mesh networks supporting dynamic traffic flows
CN108289064A (en) Hybrid load balancing method in a data center network
CN101841487A (en) Configuration method for aggregating link service flow and packet switching device
CN107579922A (en) Network load balancing device and method
Liu Intelligent routing based on deep reinforcement learning in software-defined data-center networks
CN102055675A (en) Multipath routing distribution method based on load equilibrium
CN103916265A (en) A method and a controller system for configuring a software-defined network
CN104144135A (en) Resource allocation method and indestructible resource allocation method for multicast virtual network
Pathan et al. Priority based energy and load aware routing algorithms for SDN enabled data center network
Al-Darrab et al. Software-Defined Networking load distribution technique for an internet service provider
Ren et al. A sdn-based dynamic traffic scheduling algorithm
CN107018018A (en) SDN-based incremental online server upgrade method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150729

Termination date: 20170328