CN1633081A - A method for assigning path bandwidth in bearing control layer
Info
- Publication number
- CN1633081A, CN200310123099, CN200310123099A
- Authority
- CN
- China
- Prior art keywords
- bandwidth
- path
- overhead
- user data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
This invention discloses a method for allocating path bandwidth in a bearer control layer, including: a. after each hop's bearer network resource manager receives a connection resource application request, it selects paths and, according to the maximum path tag stack depth (MTD) of the bearer network, the total header length of the user data, the bandwidth requested in the request, and the maximum peak packet length (MPPL) of the user service, obtains the bandwidth value to be assigned to each selected hop path; b. after the connections of all paths have been set up, each hop's bearer network resource manager obtains a new bandwidth value according to the relative path tag stack depth (RTD) of its hop path, the total header length of the user data, the requested bandwidth, and the MPPL, and replaces the bandwidth previously assigned to each hop path with this value.
Description
Technical field
The present invention relates to Differentiated Services (Diff-Serv) technology with an independent bearer control layer, and in particular to a method for allocating path bandwidth in the bearer control layer of a differentiated-service network that has an independent bearer control layer.
Background art
With the continuous growth of the Internet, various quality of service (QoS) technologies have emerged, and the Internet Engineering Task Force (IETF) has proposed many service models and mechanisms to meet QoS requirements. The approach currently favored by the industry is to use the Integrated Services model (Int-Serv) at the access and edge of the network and the Differentiated Services model (Diff-Serv) in the core. Differentiated Services guarantees QoS only by setting priority levels; although this measure uses lines efficiently, its concrete effect is hard to predict. To further improve QoS, the industry has therefore begun to introduce an independent bearer control layer for backbone-network Differentiated Services and to establish a dedicated QoS signaling mechanism for it. Such a service is called Differentiated Services with an independent bearer control layer.
Fig. 1 is a diagram of Differentiated Services with an independent bearer control layer. As shown in Fig. 1, in this model the bearer control layer 102 is located between the bearer network 103 and the service control layer 101. A call agent (CA) in the service control layer 101 is a service server, such as a softswitch, a video-on-demand (VOD) control server, or a routing gatekeeper (GK); the CA receives call requests from user equipment and completes the call requests and exchanges on behalf of the user equipment. In the bearer control layer 102, the bearer network resource managers are configured with rules and the network topology and allocate resources for clients' service bandwidth requests. Only three bearer network resource managers are drawn in the figure, namely bearer network resource manager 1, bearer network resource manager 2 and bearer network resource manager 3, but the number of bearer network resource managers is not fixed; the bearer network resource managers exchange, via signaling, the clients' service bandwidth requests, the processing results, and the routing path information allocated to the requested services. In the bearer network 103, each bearer network resource manager manages a specific bearer network region, called the management domain of that bearer network resource manager; the figure shows the management domain 107 of bearer network resource manager 1, the management domain 108 of bearer network resource manager 2 and the management domain 109 of bearer network resource manager 3. The management domain 107 contains an edge router (ER) 110, a core router 111 and a border router (BR) 112, where the ER leads the call service flows of user equipment into or out of the bearer network; the management domains 108 and 109 likewise contain core routers and border routers.
In Differentiated Services with an independent bearer control layer, the bearer network resource manager establishes the communication paths for the user's service connections and allocates bandwidth to the requested paths. Many Differentiated Services schemes with an independent bearer control layer include a method for allocating bandwidth, for example the bandwidth broker model of the QoS backbone experimental network (QBone). Fig. 2 is a diagram of the QBone bandwidth broker model. As shown in Fig. 2, bandwidth broker 1, bandwidth broker 2 and bandwidth broker 3 implement exactly the function of bearer network resource managers. In this model, a bandwidth broker is responsible for handling bandwidth requests from user hosts, service servers or network maintenance staff; it uses traffic-engineering statistical algorithms to derive the bandwidth to allocate from the bandwidth request and the large amount of information recorded in the bandwidth broker, which includes various configuration information, the topology of the physical network, router configuration and policy information, current resource reservation information, network occupancy information, and other static or dynamic information.
The drawback of the above QBone bandwidth broker scheme is that the calculation involves a large number of parameters, the calculation procedure is complex and the computational load is heavy; it also consumes a great deal of device resources such as processor capacity, resulting in a relatively high cost.
In addition, there is the Rich QoS scheme proposed by NEC Corporation. Fig. 3 is a diagram of the Rich QoS scheme. As shown in Fig. 3, a QoS server 301 is the key component, complemented by a policy server 302, a directory server 303 and a network management/monitoring server 304. In this scheme, bandwidth is allocated as follows: the network management/monitoring server 304 collects the raw network topology data from the routers of the bearer network and stores the collected topology data in the directory server 303; when bandwidth needs to be allocated, the policy server 302 reads the relevant data from the directory server 303 and derives the bandwidth, and the QoS server 301 then reads the result from the policy server 302 and allocates the bandwidth. The bandwidth is derived with a traffic-engineering statistical algorithm based on Multiprotocol Label Switching (MPLS), which obtains the bandwidth to be allocated from multiple parameters such as the length of the user data packets and the round-trip time of the data.
The drawbacks of path bandwidth allocation in the above Rich QoS scheme are: the network management traffic between the bearer control layer and the bearer network is large, the bearer control layer carries a heavy bandwidth-calculation load, and too many servers are involved on the hardware side, so a great deal of device resources is consumed. Moreover, the bandwidth derivation requires many parameters and a complex procedure, the computational load is heavy, and the consumption of device resources such as processors is large, leading to a very high cost. In addition, measuring the round-trip time takes time, so the real-time performance of this scheme is poor.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a method for allocating path bandwidth in a bearer control layer, so as to simplify the bandwidth allocation steps, reduce the pointless waste of resources, and lower the cost.
To achieve this goal, the technical solution of the present invention is implemented as follows:
A method for allocating path bandwidth in a bearer control layer, characterized in that the method comprises:
a. during a resource request in the bearer control layer, after each hop's bearer network resource manager receives a connection resource application request, it selects a path according to the request; it obtains the maximum additional overhead incurred when user data traverses this hop path, according to the maximum path tag stack depth (MTD) of the bearer network and the total header length of the user data; it then obtains the bandwidth occupied by this maximum additional overhead, according to the maximum additional overhead, the bandwidth requested by the user carried in the connection resource application request, and the maximum peak packet length (MPPL) of the user service; and it allocates to each selected hop path, as the bandwidth value, the sum of the bandwidth occupied by the maximum additional overhead and the bandwidth requested by the user;
b. after the source bearer network resource manager of the resource request has set up the connections of all paths according to the paths selected by each hop's bearer network resource manager, each hop's bearer network resource manager obtains the actual additional overhead incurred when user data traverses its hop path, according to the relative path tag stack depth (RTD) of that hop path and the total header length of the user data; it then obtains the bandwidth occupied by this actual additional overhead, according to the actual additional overhead, the bandwidth requested by the user, and the MPPL; and it replaces the bandwidth previously allocated to each hop path with a bandwidth value equal to the sum of the bandwidth occupied by the actual additional overhead and the bandwidth requested by the user.
In step a, the method for obtaining the maximum additional overhead is:
taking the value of MTD × 4 × 2 + the total header length of the user data as the maximum additional overhead.
In step a, the method for obtaining the bandwidth occupied by the maximum additional overhead is:
taking the value of the bandwidth requested by the user × the maximum additional overhead / MPPL as the bandwidth occupied by the maximum additional overhead.
After step b, the method further comprises the following step:
c. when a hop's bearer network resource manager receives a connection resource modification request, it obtains the actual additional overhead incurred when user data traverses its hop path, according to the RTD of that hop path and the total header length of the user data; it then obtains the bandwidth occupied by this actual additional overhead, according to the actual additional overhead, the bandwidth requested by the user carried in the connection resource modification request, and the MPPL; and it replaces the bandwidth previously allocated to each hop path with the sum of the bandwidth occupied by the actual additional overhead and the bandwidth requested by the user.
The method for obtaining the actual additional overhead incurred when user data traverses a hop path is: judging whether the maximum packet length of the user service is greater than the path maximum transfer unit (PMTU) of the current hop path; if so, taking the value of RTD × 4 × 2 + the total header length of the user data of the current hop path as the actual additional overhead; otherwise, taking the value of RTD × 4 of the current hop path as the actual additional overhead.
The maximum packet length of the user service is: the maximum peak packet length + 4 × the RTD of the respective hop path.
The method for obtaining the bandwidth occupied by the actual additional overhead incurred when user data traverses a hop path is:
taking the value of the bandwidth requested by the user × the actual additional overhead / the maximum peak packet length as the bandwidth occupied by the actual additional overhead.
The total header length is the sum of the header lengths of each layer that the user data packet passes through.
The headers of each layer include the link-layer header and the IP header.
Because the method for the invention is utilized resource network carrier independent allocation path bandwidth, thereby saved device resource, and method of the present invention only just can more accurately be obtained with a spot of parameters such as message length and bandwidth request and is required to be the bandwidth of respectively jumping path allocation, greatly reduce the complexity of obtaining bandwidth, workload is little, thereby save a large amount of processor resources, greatly reduce cost; In addition, the speed ratio of the method for the invention is very fast, does not also spend the two-way time of measurement data, so real-time is fine.
Description of drawings
Fig. 1 is a diagram of Differentiated Services with an independent bearer control layer;
Fig. 2 is a diagram of the QBone bandwidth broker model;
Fig. 3 is a diagram of the Rich QoS scheme;
Fig. 4 is a flowchart of completing a resource request in the bearer network;
Fig. 5 is a diagram of the usual packet format of a raw user-service data packet;
Fig. 6 is a diagram of the user-service data packet format when (MPPL + 4 × RTD) <= PMTU;
Fig. 7 is a diagram of the user-service data packet format when (MPPL + 4 × RTD) > PMTU.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
When the bearer control layer selects paths for a user-service connection, it must allocate path bandwidth according to the user's resource request. The method of the present invention mainly consists in allocating bandwidth to the chosen paths, at the bearer network resource manager, according to the user's packet length, requested bandwidth and routing information.
The paths described in this embodiment are label switched paths (LSPs). For a service connection requested by a user, each bearer network resource manager in the bearer control layer can select LSPs within the management domain under its own jurisdiction, obtain the bandwidth resources that need to be allocated on each hop LSP, and allocate bandwidth to each hop LSP according to the result.
Fig. 4 is a flowchart of completing a resource request in the bearer network. As shown in Fig. 4, the process of applying for connection resources for an ordinary service connection, or of modifying and adjusting resources, comprises the following steps:
a. The CA sends a connection resource application request to the source bearer network resource manager, i.e. bearer network resource manager 1; the request carries the bandwidth RB applied for by the user. After receiving the connection resource application request, the source bearer network resource manager selects LSPs, allocates reserved bandwidth on each selected LSP, and then sends the connection resource application request to the next-hop bearer network resource manager;
b. After the current bearer network resource manager receives the connection resource application request, it selects LSPs and allocates reserved bandwidth on each selected hop LSP. If the current bearer network resource manager is the destination bearer network resource manager of the resource request, i.e. bearer network resource manager n, it returns a connection resource application response to the previous-hop bearer network resource manager and step c is executed; otherwise, it sends the connection resource application request to the next-hop bearer network resource manager and step b is repeated;
c. The current bearer network resource manager receives the connection resource application response. If the current bearer network resource manager is the source bearer network resource manager of the resource request, it sets up all LSP connections of the service connection according to the LSP information in the resource response and returns a connection resource application response to the CA; otherwise, it returns the connection resource application response to the previous-hop bearer network resource manager and step c is repeated. A minimal code sketch of this hop-by-hop flow is given below.
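The sketch below is illustrative only: the class and function names (ResourceManager, select_lsps, handle_application) and the recursive structure are assumptions made for exposition and are not part of the patent.

```python
# Minimal sketch of the hop-by-hop application flow in steps a-c
# (illustrative names and structure, not part of the patent).

class ResourceManager:
    def __init__(self, name, next_hop=None):
        self.name = name
        self.next_hop = next_hop   # next-hop bearer network resource manager, or None
        self.reserved = []         # (lsp, reserved bandwidth) pairs held by this manager

    def select_lsps(self, request):
        # Placeholder: select LSPs inside this manager's own management domain.
        return [f"{self.name}-lsp"]

    def handle_application(self, request):
        """Reserve bandwidth on the selected LSPs, forward the request to the
        next hop, and pass the response back toward the source (steps a-c)."""
        lsps = self.select_lsps(request)
        for lsp in lsps:
            self.reserved.append((lsp, request["reserved_bandwidth"]))
        if self.next_hop is None:                     # destination manager (step b)
            return {"status": "ok", "lsps": lsps}
        response = self.next_hop.handle_application(request)   # forward (step b)
        response["lsps"] = lsps + response["lsps"]              # return path (step c)
        return response

# Example: source -> intermediate -> destination manager chain.
dst = ResourceManager("rm3")
mid = ResourceManager("rm2", next_hop=dst)
src = ResourceManager("rm1", next_hop=mid)
print(src.handle_application({"reserved_bandwidth": 2_000_000}))
```

The modification flow of steps d to f below follows the same forward-and-return pattern, with modification requests and responses in place of application requests and responses.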
After the source bearer network resource manager has set up all LSP connections of the service connection, it needs to modify and adjust the previously reserved bandwidth according to the information of all the LSPs. Afterwards, whenever the source bearer network resource manager receives a request from the CA to modify and adjust the previously reserved bandwidth, it likewise needs to modify and adjust the previously reserved bandwidth according to the modification request. These two modification processes are identical, and their concrete steps are as follows:
d. The source bearer network resource manager modifies and adjusts the bandwidth it previously reserved, according to the bandwidth requested by the user carried in the connection resource application request, or the bandwidth requested by the user carried in the connection resource modification request, and sends a connection resource modification request to the next-hop bearer network resource manager;
e. After the current bearer network resource manager receives the connection resource modification request, it modifies and adjusts the bandwidth it previously reserved. If the current bearer network resource manager is the destination bearer network resource manager of the resource request, i.e. bearer network resource manager n, it returns a connection resource modification response to the previous-hop bearer network resource manager and step f is executed; otherwise, it sends the connection resource modification request to the next-hop bearer network resource manager and step e is repeated;
f. The current bearer network resource manager receives the connection resource modification response. If the current bearer network resource manager is the source bearer network resource manager of the resource request, it returns a connection resource modification response to the CA; otherwise, it returns the connection resource modification response to the previous-hop bearer network resource manager and step f is repeated.
During the whole process of applying for or modifying resources, the bearer network resource managers allocate bandwidth on each hop LSP of the user's service connection. The method of the present invention is precisely how to obtain the bandwidth needed on each hop LSP and to allocate bandwidth accordingly:
This embodiment uses the transmission of IP packets as an example to explain the method of the invention. First, the factors that determine the bandwidth required on each hop LSP are explained. Fig. 5 shows the usual packet format of a raw user-service data packet; as shown in Fig. 5, such a packet comprises a link-layer header 501, an IP header 502 and user-service payload data 503. The link-layer header 501 is the header of the link layer, the IP header 502 is the header added when the packet passes through the IP protocol layer, and the user-service payload data 503 are the user's service data. The bandwidth RB requested by the user is determined according to the bandwidth occupied by these raw user-service data packets.
When the user-service data is transmitted over LSPs in the bearer network, its packet format changes accordingly. Fig. 6 shows the user-service data packet format in this case; as shown in Fig. 6, the user data packet comprises a link-layer header 501, an LSP tag stack 601, an IP header 502 and user-service payload data 503. In the bearer network each hop LSP has its own label, and each hop LSP stores its own label together with the labels of the preceding hop LSPs in the LSP tag stack 601. The number of stored labels is expressed by the relative path tag stack depth (RTD) of the LSP tag stack 601, i.e. the number of LSP hops traversed, counted from the start of the whole LSP set, up to and including the current hop LSP. For example, the RTD of the first hop LSP is 1, the RTD of the second hop LSP is 2, the RTD of the third hop LSP is 3, and so on. In Differentiated Services with an independent bearer control layer, the bearer network resource manager also has a specification attribute, the maximum path tag stack depth (MTD); the MTD indicates the maximum number of LSP hops that a service connection is allowed to traverse in the whole LSP set, and its value can be defined freely according to the scale of the network.
Because bandwidth depends not only on the packet length but also on the transmission frequency of the packets, and because the length of the transmitted packets varies continuously, the maximum peak packet length (MPPL) is used to represent the packet that occupies the most bandwidth in the user's service connection; the MPPL is the maximum length of a single data packet of the user's service connection under peak-bandwidth conditions. In addition, the largest data packet that one hop LSP allows to be transmitted is its maximum transfer unit (MTU); among all the LSPs that the service connection may traverse between the end nodes, the smallest MTU value is the path maximum transfer unit (PMTU).
Under peak conditions, the maximum packet length of the user service is: the original packet length + the length of the additional overhead, i.e. (MPPL + 4 × RTD). When (MPPL + 4 × RTD) <= PMTU, as shown in Fig. 6, compared with the user's original data packet of Fig. 5 the additional overhead of the packet is the LSP tag stack 601. The bandwidth occupied by this additional overhead must therefore be included when obtaining the bandwidth, and the additional overhead is the length of the LSP tag stack 601, i.e. RTD × 4.
When (MPPL + 4 × RTD) > PMTU, the current LSP does not allow the user-service data packet to pass, so the packet must be fragmented: the user-service payload data of the packet is placed into two packets, each of which can pass through the current LSP. As shown in Fig. 7, the user's original data packet is split into packet 701 and packet 702. Packet 701 comprises the link-layer header 501, the LSP tag stack 601, the IP header 502 and the first part 703 of the user-service payload data 503; packet 702 comprises the link-layer header 501, the LSP tag stack 601, the IP header 502 and the remaining part 704 of the user-service payload data 503. Compared with the user's original data packet of Fig. 5, the LSP tag stack 601 in packet 701 is additional overhead 1, and the link-layer header 501, the LSP tag stack 601 and the IP header 502 in packet 702 are additional overhead 2; both parts must be taken into account when obtaining the bandwidth, so the additional overhead of the whole packet is: additional overhead 1 + additional overhead 2 = LSP tag stack length × 2 + (link-layer header length + IP header length), where the LSP tag stack length is RTD × the number of bytes occupied by each label, i.e. RTD × 4.
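As a minimal sketch of the overhead calculation just described, the additional overhead in bytes of the current hop LSP can be computed as follows; the function name and the 14-byte link-layer and 20-byte IPv4 header values are illustrative assumptions, not values fixed by the patent.

```python
LABEL_SIZE = 4  # bytes per label in the LSP tag stack

def additional_overhead_bytes(rtd, link_header_len, ip_header_len, mppl, pmtu):
    """Additional overhead, in bytes, of one user packet on the current hop LSP.

    If the largest packet plus its tag stack fits within the PMTU, the overhead
    is only the tag stack (RTD * 4); otherwise the packet is fragmented into two
    packets and the second packet repeats the link-layer header, tag stack and
    IP header (overhead 1 + overhead 2).
    """
    tag_stack_len = rtd * LABEL_SIZE
    if mppl + tag_stack_len <= pmtu:
        return tag_stack_len
    return tag_stack_len * 2 + link_header_len + ip_header_len

# Illustrative values (assumed): 14-byte link-layer header, 20-byte IPv4 header.
print(additional_overhead_bytes(rtd=3, link_header_len=14, ip_header_len=20,
                                mppl=1480, pmtu=1500))  # fits in PMTU -> 12
print(additional_overhead_bytes(rtd=3, link_header_len=14, ip_header_len=20,
                                mppl=1500, pmtu=1500))  # fragmented -> 58
```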
As described above, the packets transmitted in the bearer network carry more overhead than the user's original data packets, and this additional overhead occupies part of the bandwidth; therefore the bandwidth to be allocated to the current hop LSP is obtained with formula (1):
bandwidth allocated to the current hop LSP = RB + ΔRB (1)
In formula (1), the unit of bandwidth is bits per second (bps), RB is the bandwidth requested by the user, and ΔRB is the bandwidth occupied by the additional overhead. Because the additional overhead differs under different conditions, the value of ΔRB, and hence the bandwidth of the corresponding current LSP, also differs; the cases are explained separately below.
When reserved bandwidth is allocated to each hop LSP in steps a and b above, the complete set of LSPs of the service connection cannot yet be determined. To ensure that the bandwidth reserved for the current hop LSP is sufficient, the reserved bandwidth of the current hop LSP must at this stage be obtained from the maximum additional overhead, i.e. the sum of additional overhead 1 and additional overhead 2 above, with the tag stack depth of the current hop LSP taken as the maximum value MTD, as in formula (2):
ΔRB = RB × maximum additional overhead / MPPL (2)
In formula (2), the additional overhead is the number of bytes occupied by the maximum additional overhead, i.e. 4 × MTD × 2 + (link-layer header length + IP header length).
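The following is a minimal sketch of formulas (1) and (2), i.e. of the bandwidth reserved for the current hop LSP during path selection; the function name and the numeric example values are assumptions made for illustration.

```python
LABEL_SIZE = 4  # bytes per label in the LSP tag stack

def reserved_bandwidth(rb, mtd, total_header_len, mppl):
    """Bandwidth (bps) to reserve on the current hop LSP during path selection.

    Uses the worst-case additional overhead of formula (2), 4 * MTD * 2 plus the
    total user-data header length, because the final RTD of each hop is not yet
    known at this stage.
    """
    max_overhead = LABEL_SIZE * mtd * 2 + total_header_len  # bytes
    delta_rb = rb * max_overhead / mppl                     # formula (2)
    return rb + delta_rb                                    # formula (1)

# Illustrative values (assumed): 2 Mbps requested, MTD = 8, 34-byte total header
# (14-byte link-layer + 20-byte IPv4), MPPL = 1500 bytes.
print(reserved_bandwidth(rb=2_000_000, mtd=8, total_header_len=34, mppl=1500))
```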
After the whole LSP set has been established successfully, each hop's bearer network resource manager knows the RTD of each hop LSP, so a first resource modification and adjustment can be made to the whole LSP set applied for previously, i.e. the bandwidth previously reserved on each hop LSP is modified and adjusted. Alternatively, the bandwidth previously reserved for the LSPs may have to be modified and adjusted for other reasons, for example when a bearer network resource manager receives a connection resource modification request from the CA and therefore has to modify and adjust the original bandwidth. In this case, to obtain the allocated bandwidth more accurately, the bandwidth of the current hop LSP is obtained and allocated using the exact additional overhead; there are two cases:
If (MPPL + 4 × RTD) <= PMTU, then:
ΔRB = RB × exact additional overhead / MPPL (3)
As shown in Fig. 6, the additional overhead in formula (3) is the tag stack length of the current hop LSP, i.e. 4 × RTD.
If (MPPL + 4 × RTD) > PMTU, then:
ΔRB = RB × exact additional overhead / MPPL (4)
As shown in Fig. 7, the additional overhead in formula (4) is: the number of bytes occupied by the tag stack of the current hop LSP × 2 + (link-layer header length + IP header length), i.e. 4 × RTD × 2 + (link-layer header length + IP header length).
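Likewise, the following is a minimal sketch of the adjustment phase, combining formula (1) with formulas (3) and (4) once the RTD of each hop LSP is known; the function name and example values are illustrative assumptions.

```python
LABEL_SIZE = 4  # bytes per label in the LSP tag stack

def adjusted_bandwidth(rb, rtd, link_header_len, ip_header_len, mppl, pmtu):
    """Bandwidth (bps) to allocate on the current hop LSP after all LSPs are set up.

    Chooses between formulas (3) and (4) by checking whether the largest user
    packet plus its tag stack exceeds the PMTU (the fragmentation case), then
    applies formula (1).
    """
    tag_stack_len = LABEL_SIZE * rtd
    if mppl + tag_stack_len <= pmtu:
        overhead = tag_stack_len                                        # formula (3)
    else:
        overhead = tag_stack_len * 2 + link_header_len + ip_header_len  # formula (4)
    delta_rb = rb * overhead / mppl
    return rb + delta_rb                                                # formula (1)

# Illustrative values (assumed): 2 Mbps requested, RTD = 3, MPPL = 1500 bytes,
# PMTU = 1500 bytes, 14-byte link-layer header, 20-byte IPv4 header.
print(adjusted_bandwidth(rb=2_000_000, rtd=3, link_header_len=14,
                         ip_header_len=20, mppl=1500, pmtu=1500))
```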
In this embodiment the user's payload data is carried at the second layer, i.e. the IP layer. If the user's payload data is carried in third-layer or higher-layer protocols, the IP header length above should be replaced with: the IP header length + the sum of the header lengths of the third and all higher layers.
In general, the method described in the above embodiment is used to obtain and allocate the path bandwidth. However, the method of the present invention may also obtain the bandwidth occupied by the additional overhead with formula (2) only, calculate the bandwidth to be allocated from it, and make no later modification or adjustment to the allocated bandwidth. Although this implementation is simpler and its computational load is small, its accuracy is low and it easily wastes bandwidth resources. It is suitable for services with strict timing requirements and low bandwidth requirements, but in general it is not adopted.
The above are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited to them; any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.
Claims (9)
1. A method for allocating path bandwidth in a bearer control layer, characterized in that the method comprises:
a. during a resource request in the bearer control layer, after each hop's bearer network resource manager receives a connection resource application request, selecting a path according to the request; obtaining the maximum additional overhead incurred when user data traverses this hop path, according to the maximum path tag stack depth (MTD) of the bearer network and the total header length of the user data; obtaining the bandwidth occupied by this maximum additional overhead, according to the maximum additional overhead, the bandwidth requested by the user carried in the connection resource application request, and the maximum peak packet length (MPPL) of the user service; and allocating to each selected hop path, as the bandwidth value, the sum of the bandwidth occupied by the maximum additional overhead and the bandwidth requested by the user;
b. after the source bearer network resource manager of the resource request has set up the connections of all paths according to the paths selected by each hop's bearer network resource manager, each hop's bearer network resource manager obtaining the actual additional overhead incurred when user data traverses its hop path, according to the relative path tag stack depth (RTD) of that hop path and the total header length of the user data; obtaining the bandwidth occupied by this actual additional overhead, according to the actual additional overhead, the bandwidth requested by the user, and the MPPL; and replacing the bandwidth previously allocated to each hop path with a bandwidth value equal to the sum of the bandwidth occupied by the actual additional overhead and the bandwidth requested by the user.
2. The method according to claim 1, characterized in that, in step a, the method for obtaining the maximum additional overhead is:
taking the value of MTD × 4 × 2 + the total header length of the user data as the maximum additional overhead.
3. The method according to claim 1, characterized in that, in step a, the method for obtaining the bandwidth occupied by the maximum additional overhead is:
taking the value of the bandwidth requested by the user × the maximum additional overhead / MPPL as the bandwidth occupied by the maximum additional overhead.
4. The method according to claim 1, characterized in that, after step b, the method further comprises the following step:
c. when a hop's bearer network resource manager receives a connection resource modification request, obtaining the actual additional overhead incurred when user data traverses its hop path, according to the RTD of that hop path and the total header length of the user data; obtaining the bandwidth occupied by this actual additional overhead, according to the actual additional overhead, the bandwidth requested by the user carried in the connection resource modification request, and the MPPL; and replacing the bandwidth previously allocated to each hop path with the sum of the bandwidth occupied by the actual additional overhead and the bandwidth requested by the user.
5. The method according to claim 1 or 4, characterized in that the method for obtaining the actual additional overhead incurred when user data traverses a hop path is: judging whether the maximum packet length of the user service is greater than the path maximum transfer unit (PMTU) of the current hop path; if so, taking the value of RTD × 4 × 2 + the total header length of the user data of the current hop path as the actual additional overhead; otherwise, taking the value of RTD × 4 of the current hop path as the actual additional overhead.
6. The method according to claim 5, characterized in that the maximum packet length of the user service is: the maximum peak packet length + 4 × the RTD of the respective hop path.
7. The method according to claim 1 or 4, characterized in that the method for obtaining the bandwidth occupied by the actual additional overhead incurred when user data traverses a hop path is:
taking the value of the bandwidth requested by the user × the actual additional overhead / the maximum peak packet length as the bandwidth occupied by the actual additional overhead.
8. The method according to claim 1, characterized in that the total header length is the sum of the header lengths of each layer that the user data packet passes through.
9. The method according to claim 8, characterized in that the headers of each layer include the link-layer header and the IP header.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2003101230996A CN100334837C (en) | 2003-12-24 | 2003-12-24 | A method for assigning path bandwidth in bearing control layer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2003101230996A CN100334837C (en) | 2003-12-24 | 2003-12-24 | A method for assigning path bandwidth in bearing control layer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1633081A (en) | 2005-06-29 |
CN100334837C CN100334837C (en) | 2007-08-29 |
Family
ID=34844737
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2003101230996A Expired - Fee Related CN100334837C (en) | 2003-12-24 | 2003-12-24 | A method for assigning path bandwidth in bearing control layer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100334837C (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104468408B (en) * | 2013-09-22 | 2018-04-06 | 中国电信股份有限公司 | For dynamically adjusting the method and control centre's server of service bandwidth |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10147748A1 (en) * | 2001-09-27 | 2003-04-17 | Siemens Ag | Method and device for adapting label-switched paths in packet networks |
US20030145105A1 (en) * | 2002-01-30 | 2003-07-31 | Harikishan Desineni | Method and apparatus for obtaining information about one or more paths terminating at a subject node for a group of packets |
- 2003-12-24: CN application CNB2003101230996A filed; granted as CN100334837C (status: not active, Expired - Fee Related)
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008071112A1 (en) * | 2006-12-15 | 2008-06-19 | Huawei Technologies Co., Ltd. | Method of resource schedule for a wireless system and system thereof |
CN101517969B (en) * | 2006-12-15 | 2012-02-29 | 华为技术有限公司 | Resource dispatching method and resource dispatching system based on wireless system |
US8325646B2 (en) | 2006-12-15 | 2012-12-04 | Huawei Technologies Co., Ltd. | Method and system for resource scheduling in wireless system |
CN101414956B (en) * | 2007-10-15 | 2011-08-03 | 华为技术有限公司 | Method, system and apparatus for bandwidth request |
CN108462596A (en) * | 2017-02-21 | 2018-08-28 | 华为技术有限公司 | SLA decomposition methods, equipment and system |
CN108462596B (en) * | 2017-02-21 | 2021-02-23 | 华为技术有限公司 | SLA decomposition method, equipment and system |
CN112350935A (en) * | 2019-08-08 | 2021-02-09 | 南京中兴软件有限责任公司 | Path calculation method and device for path with stack depth constraint |
CN112350935B (en) * | 2019-08-08 | 2023-03-24 | 中兴通讯股份有限公司 | Path calculation method and device for path with stack depth constraint |
CN112699660A (en) * | 2019-10-23 | 2021-04-23 | 阿里巴巴集团控股有限公司 | Data processing method, system and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN100334837C (en) | 2007-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1283079C (en) | IP network service quality assurance method and system | |
KR100853045B1 (en) | Automatic IP Traffic Optimization in Mobile Communication Systems | |
JP4889174B2 (en) | Service parameter network connection method | |
CN1552143A (en) | Method and arrangement in an IP network | |
JP2002158700A (en) | Control method for allocating qos server and resource | |
CN105164982A (en) | Managing bandwidth allocation among flows through assignment of drop priority | |
CN1585357A (en) | Method for selecting server in network | |
CN1708947A (en) | Method and arrangement to reserve resources in an IP network | |
CN1805366A (en) | Method of implementing resource application for multi-service streams | |
CN1633081A (en) | A method for assigning path bandwidth in bearing control layer | |
CN1283071C (en) | Method for assigning route in network | |
CN101222417A (en) | Method, device and system for realizing flow group QoS control in NGN network | |
WO2009049676A1 (en) | Method and apparatus for use in a network | |
CN1808986A (en) | Method of implementing resource allocation in bearer network | |
CN1705296A (en) | Data packet transmission method capable of guaranteeing service quality | |
CN1756186A (en) | Resource management realizing method | |
CN100502370C (en) | A media transmission optimization system and optimization method on different transmission channels | |
CN1601966A (en) | Route path selection method | |
CN100352215C (en) | Automatic detecting and processing method of label exchange path condition | |
CN1601971A (en) | Resource allocation method of bearing control layer | |
CN100589401C (en) | Method for configuring path route at carrying network resource supervisor | |
CN1599328A (en) | Selecting method of path in resource supervisor | |
CN1783797A (en) | Method for distributing bearing net resource | |
CN100382540C (en) | Method for realizing service connection resource management | |
CN100396050C (en) | An independent operating network crossing routing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20070829; Termination date: 20151224 |
EXPY | Termination of patent right or utility model |