WO2024260574A1 - Link aggregation with bandwidth monitoring - Google Patents
- Publication number: WO2024260574A1
- Application: PCT/EP2023/084662
- Authority: WO (WIPO/PCT)
- Prior art keywords: data, flow, flows, group, data flow
Classifications
- H04L47/41—Flow control; Congestion control by acting on aggregated flows or links
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0894—Packet rate
- H04L43/16—Threshold monitoring
- H04L43/022—Capturing of monitoring data by sampling
- H04L43/026—Capturing of monitoring data using flow identification
- H04L45/245—Link aggregation, e.g. trunking
- H04L45/38—Flow based routing
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
- H04L69/14—Multichannel or multilink protocols
Definitions
- a communication link can be said to be associated with a congestion state.
- Congestion in data networking and queueing theory refers to the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
- a communication link that is in a congestion state, or is entering a congestion state regularly, is said to be associated with a congestion condition.
- Information about when a given link is associated with a congestion state can be obtained via delay measurement, or reports from a buffering module, a queuing module, or the like.
- a communication link is also associated with a transmission capacity, which indicates an amount of data that can be transferred over the link.
- An operator may have invested in equipment and spectrum assets to establish a point-to-point connection for mobile backhaul.
- the aggregated capacity is preferably as close as possible to the sum of the capacities from the individual channels.
- a communication link is also often associated with a transmission cost.
- the transmission cost can be determined, e.g., in terms of energy expenditure, monetary cost, equipment cost, or the like.
- An operator may be interested in reducing transmission costs, which may influence a choice between two or more communication links. For instance, a given high transmission rate link may be associated with an expensive lease contract, which makes it undesirable to use except for in cases where other less costly links are in congested states or otherwise not available.
- a wireline link may be, e.g., an Ethernet cable, or an optical fiber link for use in a trunking network or the like.
- a wireless link may be, e.g., a microwave point-to-point link used in a backhauling application, or it can be an in-band backhauling wireless link in a cellular access network.
- Link aggregation can be performed in a number of different ways, as was discussed earlier, and can be divided into two main categories:
- FIG. 1 is a reference diagram that illustrates one embodiment of a system 100 that implements link aggregation including supervision with measurement and dynamic re-allocation to a flow-aware Link Aggregation Group, in accordance with one embodiment of the present disclosure.
- the main components (i.e., the transmitter and the receiver) of the system 100 illustrated in Figure 1 are divided into the following sub-blocks (i.e., blocks that together constitute an embodiment of the present disclosure) in the transmit (TX) direction:
- Parser & Distributor 104 includes a parser function and a distributor function and, in general, operates to parse incoming data segments and search for the flow identifier.
- the parser function classifies and reports all the flows identified to Flow Manager 106.
- the distributor function distributes traffic according to the configuration set by the Flow Manager 106. Flow IDs belonging to a certain Aggregation Group need to be identified (parsed) and then redirected to the applicable member port in the Aggregation Group.
- the parser function and the distributor function may be implemented in software or a combination of software and hardware (e.g., software executed by processing circuitry such as, e.g., one or more Central Processing Units (CPUs), one or more Digital Signal Processors (DSPs), one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Arrays (FPGAs), and/or the like).
- the Flow Manager 106 moves flows between the flow groups, optimizing one member link at a time, and sends information about how the flows are to be moved between the flow groups to the distributor function, which in turn re-assigns flows accordingly. In other words, when/if there are updates on which flow is to be moved from/to which member port, that configuration update is sent to the distributor function.
- the system may end up in a situation where one or more of the member links in the Aggregation Group are empty (i.e., have no flows assigned to the respective flow group), due to flows having been moved such that there are flows that occupy some but not all of the member links.
- This is one of the targets of the solution, since this situation can be translated into member links that are not needed once the procedure has settled (i.e., reached the defined thresholds), which for a wireless point-to-point communication system translates into saved operational expense and saved spectrum costs due to needing fewer carriers for sending data.
- Aggregation Group Port 108: Typically, several links are combined into an Aggregation Group (AG) Port 108.
- This AG port 108 is a logical construct residing above the Medium Access Control (MAC) layer, which means that in standard L2/L3 forwarding schemes, it is identified as a valid target egress port (meaning that if the Aggregation Group Port 108 is part of std L2/L3 forwarding schemes, it will behave as a std port in the system, as illustrated in Figure 3).
- the Aggregation Group Port 108 is simply a standard egress port in the transmit (TX) direction. In the receive (RX) direction, data segments from the members will be forwarded towards the corresponding L2/L3 forwarding system, and the source port for all the members will be the Aggregation Group Port.
- the Aggregation Group Port 108 is one of several potential target ports as a result of L2 MAC address table lookup; and in the RX direction, the source MAC addresses learned are associated with the Aggregation Group Port 112.
- the Aggregation Group Port 108 also measures the bandwidth on sent octets from each of the member links and reports this value continuously to the Flow Manager 106.
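For illustration, a minimal sketch of how such a continuous per-member egress rate measurement could be derived from sent-octet counters is shown below; the counter reader, reporting callback, and sampling interval are assumptions, not taken from the disclosure.

```python
import time

def monitor_member_rates(read_tx_octets, report, interval_s=1.0):
    """Continuously estimate the egress rate of each member link from its
    sent-octet counter and report the result (e.g., to the flow manager).

    read_tx_octets(): hypothetical reader returning {member_id: octets sent}
    report(rates):    hypothetical callback receiving {member_id: rate in Mbps}
    """
    prev = read_tx_octets()
    while True:                       # runs for the lifetime of the AG port
        time.sleep(interval_s)
        curr = read_tx_octets()
        rates = {member: (curr[member] - prev[member]) * 8 / interval_s / 1e6
                 for member in curr}
        report(rates)
        prev = curr
```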
- the attributes desired on the Aggregation Group Port 108 are the following:
- Aggregation Group Port Bandwidth: The aggregated bandwidth of all the member links in the aggregation group.
- the aggregation group port bandwidth is to be used by the communication system to calculate overall utilization of the complete Aggregation Group Port.
- the bandwidth for that member link needs to be known, either as a configuration input or as a result of protocol communication, e.g., BNM, Bandwidth Notification Message as per ITU-T Y.1731 Ethernet Bandwidth Notification standard or other similar methods.
- In a case with a wired system (e.g., copper or optical cable), the bandwidth typically equals the interface bandwidth.
- the bandwidth is a result of available spectrum and carriers, and is typically a value less than the interface bandwidth to/from the wireless carrier.
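A small sketch of how per-member bandwidth could be tracked, whether it comes from configuration or from a bandwidth notification, follows; the class and method names are assumptions, and the notification format itself is not modeled.

```python
class MemberBandwidthTable:
    """Tracks the usable bandwidth of each member link, seeded from
    configuration and optionally updated by a bandwidth notification
    (e.g., a BNM-style message); parsing the notification itself is
    outside the scope of this sketch."""

    def __init__(self, configured_mbps):
        self.bw_mbps = dict(configured_mbps)

    def on_bandwidth_notification(self, member_id, current_mbps):
        # A wireless member may report reduced capacity, e.g., due to
        # adaptive modulation; the flow manager re-reads this table when
        # it evaluates its per-member thresholds.
        self.bw_mbps[member_id] = current_mbps

table = MemberBandwidthTable({1: 600, 2: 600, 3: 200, 4: 200})
table.on_bandwidth_notification(3, 150)   # member 3 degraded to 150 Mbps
```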
- the Aggregation Group Port 112 forwards the data segments as a std port in any system. There is no need for any specific functions at the receiver, since conversation order is preserved and managed by the transmitter side.
- the input traffic (i.e., data segments to be transmitted over the communication system) is in this example divided as per the following:
- the Aggregation Group Port consists of four member links (referred to here as member ports).
- the bandwidth of the Aggregation Group Port and the bandwidths of the member ports are as follows: o Group Port BW: 1.6 Gbps o Member Port 1: 600 Megabits per second (Mbps) o Member Port 2: 600 Mbps o Member Port 3: 200 Mbps o Member Port 4: 200 Mbps
- the incoming data segments constitute 80 TEID flows, with a uniform bandwidth distribution (e.g., 15 Mbps each in this example).
- the TEID flow IDs in this example are incremental, starting from a value of zero.
- Parser & Distributor: In this example, the user has configured the TEID value as the flow identifier, meaning that this sub-block will search and classify frames based on their TEID values, visible in the GTP header.
- the parser function classifies and reports all the 80 TEID flows identified to the Flow Manager.
- the distributor function distributes traffic according to the configuration set by the Flow Manager. Flow IDs (i.e., TEID values in this example) belonging to a certain flow group are identified (parsed) and then re-directed to the applicable member port in the Aggregation Group.
- the default allocation is as per the following:
- Default member port allocation is 20 GTP flows per member port of the aggregation group, with an aggregated bandwidth of 300 Mbps per member port. This result is based on incremental numbers starting from the LSB, where each flow group has a default allocation towards member ports in the Aggregation Group. The default allocation using TEID values is shown in Figure 5.
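A minimal sketch of such a default binning of flow IDs into flow groups is shown below; binning by the low-order bits of the flow ID is one plausible reading of the LSB-based rule, and the exact rule used by the disclosure is not spelled out here.

```python
def default_flow_group(flow_id, num_groups=4):
    """Default (sampling-free) allocation of a flow ID, e.g., a TEID, to a
    flow group, here simply by the low-order bits of the flow ID."""
    return flow_id % num_groups

# With 80 incremental TEIDs (0..79) and 4 flow groups, each group receives
# 20 flows; at 15 Mbps per flow this is 300 Mbps offered per member port.
groups = {}
for teid in range(80):
    groups.setdefault(default_flow_group(teid), []).append(teid)
assert all(len(members) == 20 for members in groups.values())
```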
- the bandwidth utilization after the default allocation is, in this example, as per the following:
- the flow manager is continuously notified of the actual rate on each of the four members, and the data collected after default allocation are, in this example, as listed above. Based on the result of the measurement, the Flow Manager moves flows between the flow groups, optimizing one member port at a time, and sends the information about how the flows are to be re-assigned to the distributor, which in turn will re-assign flows. This process is then iterated continuously where the system settles at some defined threshold on a per member port basis.
- the defined threshold will typically be a percentage (%) of utilization on a per member port basis. In this example, the utilization for the member ports is set at 90%.
- the system will optimize starting with member #1, and once settled will move to member #2, until finishing with member #4.
- the result of the illustrated example shows that the fourth link will not be necessary, resulting in operational savings.
- the example also shows that even the third member link can be saved by setting a threshold of 100% for member links 1 and 2.
- the system will continuously measure even when the system has reached its threshold, since if the bandwidth of the flow changes, the bandwidth for the member where the flow is directed will be affected.
- the optimization order started with member port #1, but the solution is not limited thereto.
- in the illustrated example, flows were first moved to member #1 from member #2, but the solution is not limited thereto.
- An alternative could be to move flows from over-provisioned members instead (members #3 and #4 in this example), which would result in fewer iterations until settled.
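For reference, the settled state described in this example can be reproduced with a short greedy simulation; the move-selection order below is an illustrative assumption, and only the end state is meant to match the example (members 1 and 2 at 90% utilization, member 3 lightly loaded, member 4 empty).

```python
member_bw = {1: 600, 2: 600, 3: 200, 4: 200}        # member link bandwidths, Mbps
flows = {teid: 15 for teid in range(80)}             # 80 TEID flows at 15 Mbps each
alloc = {teid: 1 + (teid % 4) for teid in flows}     # default: 20 flows per member

def load(member):
    """Sum of flow rates currently directed to a member link."""
    return sum(rate for f, rate in flows.items() if alloc[f] == member)

def optimize(threshold_pct):
    """Fill member links one at a time up to their thresholds, pulling flows
    only from members that have not been optimized yet."""
    done = set()
    for target in sorted(member_bw):
        limit = member_bw[target] * threshold_pct[target] / 100
        for f in sorted(flows):
            if alloc[f] == target or alloc[f] in done:
                continue
            if load(target) + flows[f] <= limit:
                alloc[f] = target
        done.add(target)

optimize({1: 90, 2: 90, 3: 90, 4: 90})
print({m: load(m) for m in member_bw})   # {1: 540, 2: 540, 3: 120, 4: 0}
# With thresholds of 100% for members 1 and 2, they absorb all 80 flows
# (600 Mbps each), leaving both member 3 and member 4 empty.
```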
- FIG. 6 is a flow chart that illustrates the operation of the transmit node 102, and more specifically the Parser & Distributor 104 and Flow Manager 106 of the transmit node 102, in accordance with one embodiment of the present disclosure.
- the transmit node 102 transmits data segments over member links of an aggregation group (steps 600-604). More specifically, the Parser & Distributor 104 receives a data segment (step 600) and identifies a data flow to which the data segment belongs (step 602).
- the data flow is one of multiple data flows, wherein: each data flow is allocated to one of multiple data flow groups (e.g., initially via a default allocation and subsequently via adjusted allocations) and each data flow group is mapped to one of the member links of the aggregation group.
- the Parser & Distributor 104 directs the data segment to the member link that is mapped to the data flow group to which the identified data flow is allocated (step 604). Steps 600-604 are repeated for each data segment. Note that while steps 600-604 are shown sequentially, these steps may be performed in a pipeline architecture where each stage starts to operate on the next data segment once it has completed processing on the current data segment. Also note that the description of the operation of the Parser & Distributor 104 above is equally applicable here to the description of Figure 6 (and in particular to steps 600-604).
- the Flow Manager 106 obtains measurements for the data flows (step 606) and adjusts the allocation of the data flows to the data flow groups (and thus the member links of the aggregation group) based on the measurements for the data flows, as described above (step 608). Steps 606 and 608 are repeated (e.g., continuously) such that the allocation of flows to the flow groups is continuously updated. Note that the description of the operation of the Flow Manager 106 above is equally applicable here to the description of Figure 6 (and in particular to steps 606 and 608).
- the one or more measurements obtained in step 606 comprise actual transmission rates for the data flows.
- adjusting the allocation of the data flows to the data flow groups in step 608 comprises adjusting the allocation of the data flows to the data flow groups based on the actual transmission rates for the data flows and known bandwidths of the member links of the aggregation group.
- adjusting the allocation of the plurality of data flows to the plurality of data flow groups in step 608 comprises adjusting the allocation of the data flows to the data flow groups on a per member link basis.
- adjusting the allocation of the data flows to the data flow groups in step 608 comprises, for a particular member link of the aggregation group: determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied and, in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link.
- the one or more measurements comprise actual transmission rates for the plurality of data flows
- the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold.
- the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
- the member links of the aggregation group comprise a plurality of wired links. In one embodiment, the member links of the aggregation group comprise a plurality of wireless links. In one embodiment, the member links of the aggregation group comprise one or more wired links and one or more wireless links.
- Figure 7 illustrates one example embodiment of a transmit node 700.
- the transmit node 700 includes processing circuitry 702, memory 704, and a communication interface 706.
- the processing circuitry 702 includes any type(s) of processors such as, e.g., one or more CPUs, one or more DSPs, one or more ASICs, one or more FPGAs, or the like, or any combination thereof.
- a parser and distributor 708 is implemented in software stored in memory 704 and executed by the processing circuitry 702.
- the parser and distributor 708 performs the functionality of the Parser & Distributor 104 described above.
- a flow manager 710 is implemented in software stored in memory 704 and executed by the processing circuitry 702.
- the flow manager 710 performs the functionality of the Flow Manager 106 described above.
- the communication interface 706 is any type of communication interface providing multiple links (wired and/or wireless).
- the communication interface 706 includes or provides an aggregation group port 712 that includes N member ports 714-1 through 714-N, which are also referred to herein as member links, of an aggregation group. Note that details of the Aggregation Group Port 108 described above are equally applicable here to Figure 7.
- computing devices described herein may include the illustrated combination of hardware components
- computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
- a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
- non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
- processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
- some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hardwired manner.
- the processing circuitry can be configured to perform the described functionality.
- Embodiment 1 A method performed by a transmit node for link aggregation, the method comprising: transmitting (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments: o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
- each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
- each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
- while transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtaining (606) one or more measurements for the plurality of data flows; and o adjusting (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
- Embodiment 2 The method of embodiment 1, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows.
- Embodiment 3 The method of embodiment 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows.
- Embodiment 4 The method of embodiment 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows and/or known bandwidths of the plurality of member links of the aggregation group.
- Embodiment 5 The method of any of embodiments 1 to 4, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises moving a particular data flow from a data flow group to which the particular data flow is currently assigned to another data flow group.
- Embodiment 6 The method of any of embodiments 1 to 5, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups on a per member link basis.
- Embodiment 7 The method of any of embodiments 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group are satisfied; and, in response thereto, moving at least one flow from the flow group mapped to the particular member link to another flow group that is mapped to another member link.
- Embodiment 8 The method of embodiment 7, wherein: the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group exceeds a predefined or configured threshold.
- Embodiment 9 The method of embodiment 8, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
- Embodiment 10 The method of any of embodiments 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied; and, in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link.
- Embodiment 11 The method of embodiment 10, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold.
- Embodiment 12 The method of embodiment 11, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
- Embodiment 13 The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise a plurality of wireless links.
- Embodiment 14 The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise a plurality of wired links.
- Embodiment 15 The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise one or more wireless links and one or more wired links.
- Embodiment 16 A transmit node adapted to perform the method of any of embodiments 1 to 15.
- Embodiment 17 A transmit node comprising:
- a parser and distributor function configured to transmit (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments: o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
- each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups
- each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated;
- a flow manager function configured to, while the parser and distributor function is transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtain (606) one or more measurements for the plurality of data flows; and o adjust (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
- Embodiment 18 A computer program comprising instructions which, when executed on at least one processor, cause the processor to carry out the method according to any of embodiments 1 to 15.
- Embodiment 19 A carrier containing the computer program of embodiment 18, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.
- Embodiment 20 A non-transitory computer-readable medium comprising instructions executable by processing circuitry of a transmit node, whereby the transmit node is operable to perform the method of any of embodiments 1 to 15.
Abstract
Systems and methods for link aggregation are disclosed. In one embodiment, a method performed by a transmit node for link aggregation comprises transmitting data segments over member links of an aggregation group. Transmitting the data segments comprises, for each data segment, identifying a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein each data flow is allocated to one of a plurality of data flow groups and each data flow group is mapped to one of the member links of the aggregation group, and directing the data segment to one of the member links of the aggregation group that is mapped to one of the data flow groups to which the identified data flow is allocated. The method further comprises, while transmitting the data segments over the member links of the aggregation group, obtaining measurements for the data flows and adjusting an allocation of the data flows to the data flow groups based on the measurements for the data flows.
Description
LINK AGGREGATION WITH BANDWIDTH MONITORING
TECHNICAL FIELD
The present disclosure relates to methods, and devices capable of executing methods, for aggregation of communication links, performed in a transmitter communication arrangement, e.g. comprising a transmit node, and a receiver communication arrangement which together form a communication system. The data communication links may comprise wireline or wireless data communication links.
BACKGROUND
For communication links, it is known to aggregate two or more links to increase capacity of data transmission. Several ways exist to aggregate links to increase data bandwidth between two points in a network.
One way to do this is communication link bonding, or Radio Link Bonding (RLB), which refers to layer one (L1) schemes and is agnostic to layer two (L2) and higher protocol layers. Bonding means that different parts of the traffic are conveyed over different links and reassembled when received. If the links have different rates, the delays are different, implying buffering and/or delay equalization before reassembly when waiting for the subsequent parts of data to arrive over slower links. The link speed may also change arbitrarily between links due to, e.g., different susceptibility to external conditions for different carrier frequencies. Buffering is therefore often centralized and needs to be dimensioned for a worst-case scenario.
There are also L2 and layer three (L3) link aggregation methods. One such known method is the Link Aggregation (LAG) standard, IEEE Std 802.1AX-2008, where link/route allocation is performed based on flow identification assigned via higher protocol layer address fields.
L2/L3 schemes are, in comparison to L1 schemes, less complicated to implement. Basically, a flow is identified by, e.g., its hash checksum value, often calculated from static address fields. The flow is then assigned to a physical link in an aggregation group (AG). Subsequent data segments with the same hash checksum value are thereafter forwarded to the link originally assigned. As a result, each flow is only forwarded over one specific corresponding link, which in turn means that data segment order within flows is preserved.
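For comparison, a minimal sketch of this conventional hash-based assignment is shown below; the field names and hash choice are illustrative assumptions, not taken from IEEE Std 802.1AX.

```python
import hashlib

def assign_link_by_hash(packet_fields, num_links):
    """Conventional hash-based LAG assignment: hash static address fields
    and map the checksum onto one member link. Segments of the same flow
    always hash to the same link (order preserved), but the resulting
    distribution over links can be statistically biased."""
    key = "|".join(f"{name}={packet_fields[name]}" for name in sorted(packet_fields))
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Illustrative flow keyed on MAC/IP address fields.
flow = {"mac_da": "00:11:22:33:44:55", "mac_sa": "66:77:88:99:aa:bb",
        "ip_sa": "10.0.0.1", "ip_da": "10.0.0.2"}
print(assign_link_by_hash(flow, num_links=4))
```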
SUMMARY
Systems and methods for link aggregation are disclosed. In one embodiment, a method performed by a transmit node for link aggregation comprises transmitting a plurality of data segments over a plurality of member links of an aggregation group. Transmitting the plurality of data segments comprises, for each data segment of the plurality of data segments, identifying a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups and each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group, and directing the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated. The method further comprises, while transmitting the plurality of data segments over the plurality of member links of the aggregation group, obtaining one or more measurements for the plurality of data flows and adjusting an allocation of
the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows. In this manner, a feedback loop is provided that enables high link utilization to be achieved while minimizing the risk of ending up with biased and poorly utilized links.
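A minimal transmit-path sketch of the method summarized above follows; the class and method names, and the use of simple dictionaries for the flow-to-group and group-to-link mappings, are illustrative assumptions rather than the claimed implementation.

```python
from collections import defaultdict

class Transmitter:
    """Sketch of the transmit path: flow -> flow group -> member link."""

    def __init__(self, flow_to_group, group_to_link):
        self.flow_to_group = dict(flow_to_group)   # data flow ID -> flow group ID
        self.group_to_link = dict(group_to_link)   # flow group ID -> member link ID
        self.tx_octets = defaultdict(int)          # per-flow measurement input

    def transmit(self, segment, flow_id):
        """Identify the flow, direct the segment to the member link mapped to
        the flow's group, and count octets for later rate measurement."""
        link = self.group_to_link[self.flow_to_group[flow_id]]
        self.tx_octets[flow_id] += len(segment)
        # send_on_link(link, segment)  # actual I/O is omitted in this sketch
        return link

    def reallocate(self, flow_id, new_group):
        """Adjustment step: move a flow to another flow group; the decision
        is taken by the flow manager based on the obtained measurements."""
        self.flow_to_group[flow_id] = new_group
```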
In one embodiment, the one or more measurements comprise actual transmission rates for the plurality of data flows. In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows. In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows and/or known bandwidths of the plurality of member links of the aggregation group.
In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises moving a particular data flow from a data flow group to which the particular data flow is currently assigned to another data flow group.
In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting the allocation of the plurality of data flows to the plurality of data flow groups on a per member link basis.
In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group, determining that one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group are satisfied and, in response thereto, moving at least one flow from the flow group mapped to the particular member link to another flow group that is mapped to another member link. In one embodiment, the one or more measurements comprise actual transmission rates for the plurality of data flows, and the one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group exceeds a predefined or configured threshold. In one embodiment, the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group, determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied and, in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link. In one embodiment, the one or more measurements comprise actual transmission rates for the plurality of data flows, and the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold. In one embodiment, the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
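The per-member-link criteria described above can be expressed roughly as follows; the function names, threshold handling, and data structures are assumptions for illustration only.

```python
def group_rate(flow_ids, measured_rate_mbps):
    """Sum of measured transmission rates of the flows currently allocated
    to the flow group mapped to a given member link."""
    return sum(measured_rate_mbps[f] for f in flow_ids)

def should_move_flows_out(flow_ids, measured_rate_mbps, link_bw_mbps, threshold_pct):
    """Move-out criterion: the aggregate measured rate exceeds a configured
    percentage of the member link bandwidth."""
    return group_rate(flow_ids, measured_rate_mbps) > link_bw_mbps * threshold_pct / 100

def may_receive_flows(flow_ids, measured_rate_mbps, link_bw_mbps, threshold_pct):
    """Move-in criterion: the aggregate measured rate is still below the
    configured percentage of the member link bandwidth."""
    return group_rate(flow_ids, measured_rate_mbps) < link_bw_mbps * threshold_pct / 100
```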
In one embodiment, the member links of the aggregation group comprise a plurality of wireless links. In one embodiment, the member links of the aggregation group comprise a plurality of wired links. In one embodiment, the member links of the aggregation group comprise one or more wireless links and one or more wired links.
Corresponding embodiments of a transmit node are also disclosed. In one embodiment, a transmit node comprises a parser and distributor function and a flow manager function. The parser and distributor function is configured to transmit a plurality of data segments over a plurality of member links of an aggregation group. Transmitting the plurality of data segments comprises, for each data segment of the plurality of data segments, identifying a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups and each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group, and directing the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow group to which the identified data flow is allocated. The flow manager function is configured to, while the parser and distributor function is transmitting the plurality of data segments over the plurality of member links of the aggregation group, obtain one or more measurements for the plurality of data flows and adjust an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
Figure 1 illustrates a system that performs link aggregation in accordance with one embodiment of the present disclosure;
Figure 2 illustrates an example of a default allocation of flows to flow groups and member links of an aggregation group;
Figure 3 illustrates an example embodiment of an aggregation group port as part of a L2/L3 forwarding scheme;
Figure 4 illustrates an aggregation group example with four member links;
Figure 5 illustrates an example of a default allocation of flows to flow groups and member links of an aggregation group in which TEIDs are used as flow IDs;
Figure 6 is a flow chart that illustrates the operation of a transmit node to provide link aggregation in accordance with an embodiment of the present disclosure; and
Figure 7 is a block diagram of an example embodiment of the transmit node.
DETAILED DESCRIPTION
The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and
will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
There currently exist certain challenge(s). Specifically, problems associated with Radio Link Bonding (RLB) include the following. RLB schemes can be made very effective when it comes to making best use of the available spectrum resources (i.e., maximizing utilization of the links in the group). The main disadvantage is that latency between any endpoints is set by segment latency over the slowest physical link.
Problems associated with L2 and L3 link aggregation methods include statistical bias. This could be that the hashing algorithm interferes with address assignment rules in the network thus causing a systematic preference for one link.
There are evolved concepts around link aggregation, normally referred to as "dynamic link aggregation”, but dynamic link aggregation only addresses parts of the problem since it is still based on hashing schemes. Dynamic link aggregation typically means that there is a method to measure how the hash distribution has ended up, and that there are methods to hash the header with different input data, which can help overcome some of the problems with bias in hashing schemes.
Another problem with L2 and L3 link aggregation methods is the Quality of Service (QoS) impact on individual overprovisioned links: biased link assignment (or temporary congestion on a single link, even if the distribution is otherwise balanced) may lead to unintended data segment drops, i.e., data segments can be dropped even when there is capacity available, so the QoS system does not work well.
Certain aspects of the disclosure and their embodiments may provide solutions to these or other challenges. The present disclosure overcomes several of the listed limitations of existing aggregation methods. Specifically, systems and methods are disclosed herein that:
• eliminate statistical bias, and
• avoid that latency between endpoints is set by the slowest link in the aggregation group.
Embodiments of the present disclosure achieve these effects by distributing traffic based on: classification of packet data; continuous supervision; measurement of egress utilization for each member in the aggregation group; and re-assignment (if needed), rather than hashing or rehashing flows.
The present disclosure parses, bins, and distributes flows for members in a Link Aggregation Group based on flow classification. The flow can be identified by any field in the packet header, including, but not limited to, L2 Medium Access Control (MAC) Destination Address (DA)/Source Address (SA), Internet Protocol (IP) DA/SA, Transmission Control Protocol (TCP)/User Datagram Protocol (UDP), Virtual Local Area Network (VLAN) Identifier (ID), VLAN Priority Code Point (PCP), port number, or General Packet Radio System (GPRS) Tunneling Protocol (GTP) User (GTP-U) session (Tunnel Endpoint Identifier, TEID). It can be any field that is used to identify flows/conversations where data order must be preserved. The solution continuously supervises and measures the rate of each flow and dynamically reassigns flows to achieve maximum utilization for each of the members in the Aggregation Group.
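As an illustration of the classification step, the following is a minimal sketch (in Python, which is not part of the disclosure) of building a flow identifier from configured header fields; the field names used here are assumptions for the example only.

```python
# Minimal sketch of flow classification, assuming packets arrive as parsed
# header dictionaries; the field names ("ip_src", "teid", ...) are illustrative
# and not taken from the disclosure.
from typing import Hashable, Mapping, Sequence, Tuple

def extract_flow_id(headers: Mapping[str, Hashable],
                    key_fields: Sequence[str]) -> Tuple[Hashable, ...]:
    """Build a flow identifier from the configured header fields.

    Any field whose order must be preserved (MAC DA/SA, IP DA/SA, VLAN ID,
    TEID, ...) can be listed in key_fields; missing fields are treated as None.
    """
    return tuple(headers.get(field) for field in key_fields)

# Example: classify GTP-U traffic by TEID only.
flow_id = extract_flow_id({"ip_src": "10.0.0.1", "teid": 42}, ["teid"])
```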
Embodiments of the present disclosure are not based on hashing the result for which member a flow ID is to be forwarded to. Instead, the decision is based on bandwidth measurement of the flow ID and utilization of the member port in the Aggregation Group.
Embodiments of the present disclosure use flow groups for default allocation of flows towards the Aggregation Group member ports. This is to avoid runtime traffic sampling, which would otherwise be needed to get a default allocation of bandwidth for the individual members in the Link Aggregation Group.
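A minimal sketch of such a default allocation is given below, assuming numeric flow IDs and an even spread of flow groups over the member links in flow-ID order; the disclosure does not mandate this particular rule.

```python
# A possible default allocation, assuming numeric flow IDs: flow groups are
# spread evenly over the member links in flow-ID order, without measuring any
# traffic. This is only one way to realize the default allocation described
# above.
from typing import Dict, List

def default_allocation(flow_ids: List[int], num_members: int) -> Dict[int, int]:
    """Map each flow ID to a member index without sampling any data segments."""
    flows_per_member = max(1, -(-len(flow_ids) // num_members))  # ceiling division
    allocation = {}
    for position, flow_id in enumerate(sorted(flow_ids)):
        allocation[flow_id] = position // flows_per_member
    return allocation

# 80 incremental TEIDs over 4 member links -> 20 flows per member by default.
alloc = default_allocation(list(range(80)), 4)
assert sum(1 for member in alloc.values() if member == 0) == 20
```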
Certain embodiments may provide one or more of the following technical advantage(s). Embodiments of the present disclosure allow high link utilization to be achieved and minimize the risk of ending up with biased and poorly utilized links, because the solution parses and classifies flows, then distributes and measures them. This creates a feedback loop that tunes towards the best possible utilization of the available flows in the system, without requiring more configuration input than which flow type to search for and the bandwidth of each member.
Embodiments of the present disclosure do not require any special arrangement on the receiving side either, since all traffic handling and preservation of conversation order are handled by the transmitter side. This gives the advantage of lower latency compared to the link bonding method, while avoiding the statistical bias of hash-based link aggregation schemes that leads to poor utilization.
Aspects of the present disclosure will now be described more fully with reference to the accompanying drawings. The different devices, computer programs and methods disclosed herein can, however, be realized in many different forms and should not be construed as being limited to the aspects set forth herein. Like numbers in the drawings refer to like elements throughout.
The terminology used herein is for describing aspects of the disclosure only and is not intended to limit the present disclosure.
Herein, a communication link can be said to be associated with a congestion state. Congestion in data networking and queueing theory refers to the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput. A communication link that is in a congestion state, or is entering a congestion state regularly, is said to be associated with a congestion condition. Information about when a given link is associated with a congestion state can be obtained via delay measurement, or reports from a buffering module, a queuing module, or the like.
A communication link is associated with a transmission rate. A transmission rate of a communication link can be measured in terms of, e.g., information bits per second (bps) or packets per second. The transmission rate may be an information or payload transmission rate, which does not include overhead such as headers and address fields, or it can be a raw transmission rate which includes all data transmitted over the communication link.
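For illustration, a transmission rate could be estimated from an octet counter sampled at two points in time, as in the following sketch; the counter interface shown here is an assumption, not part of the disclosure.

```python
# Hedged sketch of deriving a per-flow or per-member-link transmission rate
# from an octet counter sampled periodically.
import time

class RateMeter:
    """Estimate a transmission rate in bits per second from an octet counter."""

    def __init__(self) -> None:
        self._last_octets = 0
        self._last_time = time.monotonic()

    def update(self, octets_sent: int) -> float:
        """Return the rate (bps) observed since the previous update."""
        now = time.monotonic()
        elapsed = max(now - self._last_time, 1e-9)
        rate_bps = (octets_sent - self._last_octets) * 8 / elapsed
        self._last_octets, self._last_time = octets_sent, now
        return rate_bps
```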
A communication link is also associated with a transmission capacity, which indicates an amount of data that can be transferred over the link. An operator may have invested in equipment and spectrum assets to establish a point-to-point connection for mobile backhaul. When aggregating channels using that equipment, the aggregated capacity is preferably as close as possible to the sum of the capacities from the individual channels. When using, e.g., LAG, that is not the case, and data can be discarded in the QoS domain even if there is capacity available on one of the links.
A communication link is also often associated with a transmission cost. The transmission cost can be determined, e.g., in terms of energy expenditure, monetary cost, equipment cost, or the like. An operator may be interested in reducing transmission costs, which may influence a choice between two or more communication links. For instance, a given high transmission rate link may be associated with an expensive lease contract, which makes it undesirable to use except for in cases where other less costly links are in congested states or otherwise not available.
It is thus appreciated that a ‘preferred link’ can be preferred for many different reasons beyond transmission rate.
A wireline link may be, e.g., an Ethernet cable, or an optical fiber link for use in a trunking network or the like. A wireless link may be, e.g., a microwave point-to-point link used in a backhauling application, or it can be an in-band backhauling wireless link in a cellular access network.
Herein, a flow or data flow is a coherent and consecutive flow of data segments. A data flow can according to some aspects correspond to a user streaming a film, a user sending an e-mail, or a user having a telephone conversation.
A data segment is a unit of data, such as a data frame or a data packet.
Link aggregation can be performed in a number of different ways, as was discussed earlier, and can be divided into two main categories:
1. Flow-aware Link Aggregation. This is the typical link aggregation as per IEEE 802.1AX, where the basic mechanism is based on hashing flows from static packet header content (a minimal sketch of such hash-based member selection follows this list). The advantage of this approach is that conversations (i.e., flows) are identified, and thus the risk of re-ordering of frames is removed, meaning that no buffer is needed on either the transmit (TX) or receive (RX) side for managing out-of-order delivery between flows. The biggest problem with standard link aggregation is that the function is aware of neither the bandwidth of any of the member links in the aggregation group nor the bandwidth of the flows in the system. This translates into poorly utilized links, and there is a large risk of dropping high priority data even if there is capacity in the system. This, together with statistical bias, is the main reason this method is not used for wireless communication systems in mobile backhaul.
2. Flow-agnostic Link Aggregation: Link Bonding, or Radio Link Bonding (RLB). The main principle of link bonding is that it is not flow aware. Instead, link bonding works on layer 1 (L1) and distributes data segments among the member links in the aggregation group. This has the advantage that it is optimized towards capacity, and there is no risk of dropping any high priority packets when there is capacity available, because bonding works towards a single QoS entity (i.e., high priority frames will normally be prioritized in case of congestion). The main disadvantage of link bonding, as mentioned earlier, is that re-ordering is needed in the system, which adds latency, and the slowest link in the group will determine the overall latency.
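For contrast with the measurement-based embodiments described below, the hash-based member selection used by conventional flow-aware link aggregation can be sketched as follows; the hash function shown is only an example.

```python
# For contrast only: conventional hash-based member selection (category 1
# above). The member is chosen purely from static header content, with no
# knowledge of flow bandwidth or member link bandwidth.
import zlib

def hash_based_member(flow_key: bytes, num_members: int) -> int:
    """Pick a member link from a hash of static packet header fields."""
    return zlib.crc32(flow_key) % num_members

member = hash_based_member(b"00:11:22:33:44:55->66:77:88:99:aa:bb", 4)
```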
The present disclosure describes embodiments of systems and methods that address some of the limitations with existing aggregation methods by adding supervision with measurement and dynamic re-allocation to a flow- aware Link Aggregation Group.
Figure 1 is a reference diagram that illustrates one embodiment of a system 100 that implements link aggregation including supervision with measurement and dynamic re-allocation to a flow-aware Link Aggregation Group, in accordance with one embodiment of the present disclosure. The main components (i.e., the transmitter and the receiver) of the system 100 illustrated in Figure 1 are divided into the following sub-blocks (i.e., blocks that together constitute an embodiment of the present disclosure) in the transmit (TX) direction:
• Transmitter 102:
o Parser & Distributor 104: The Parser & Distributor 104 includes a parser function and a distributor function and, in general, operates to parse incoming data segments and search for the flow identifier. The parser function classifies and reports all the flows identified to the Flow Manager 106. The distributor function distributes traffic according to the configuration set by the Flow Manager 106. Flow IDs belonging to a certain Aggregation Group need to be identified (parsed) and then redirected to the applicable member port in the Aggregation Group. Note that the parser function and the distributor function may be implemented in software or a combination of software and hardware (e.g., software executed by processing circuitry such as, e.g., one or more Central Processing Units (CPUs), one or more Digital Signal Processors (DSPs), one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Arrays (FPGAs), and/or the like).
o Flow Manager 106: The Flow Manager 106 puts the various flows into different groups, called flow groups. At start-up, the Flow Manager 106 is not aware of the bandwidth of each flow, and a default allocation is performed, as illustrated in the example of Figure 2. The default allocation does a distribution based on available flow IDs. The purpose of doing the default allocation based on flow IDs is to avoid sampling of data segments to assess the bandwidth of each of the flows. Instead, with the default allocation, the data segments are put in default groups based on the identified flow ID. Each flow group, and thus the flows in the flow group, is mapped to a member link identified by a respective Member ID. Note that the Flow Manager 106 may be implemented in software or a combination of software and hardware (e.g., software executed by processing circuitry such as, e.g., one or more Central Processing Units (CPUs), one or more Digital Signal Processors (DSPs), one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Arrays (FPGAs), and/or the like).
o Aggregation Group Port 108: The Aggregation Group Port 108 includes ports for N member physical links of the Aggregation Group. Further details about the Aggregation Group Port 108 are provided below.
Receiver 110:
o Aggregation Group Port 112: The Aggregation Group Port 112 includes ports for N member physical links of the Aggregation Group. Further details about the Aggregation Group Port 112 are provided below.
In operation, when data segments start to be sent into the communication system (i.e., entering the Parser & Distributor 104 of the transmitter 102 of Figure 1), a default allocation is applied, as described above. With the default allocation of flow IDs into flow groups, data segments can be passed from the Parser & Distributor 104 to the individual member links of the Aggregation Group Port (see Figure 1). In other words, the data segments for each flow in a flow group are passed from the Parser & Distributor 104 to the member link allocated to that flow group. Once flows are allocated to flow groups and flow groups are mapped to member links, measurement of the actual transmission rate of each flow in each flow group (or member link) begins, and the Flow Manager 106 is continuously notified of the actual transmission rates (also referred to herein as actual bandwidths) of the flows on each of the member links. Based on this information, the Flow Manager 106 moves flows between the flow groups, optimizing one member link at a time, and sends information about how the flows are to be moved between the flow groups to the distributor function, which in turn re-assigns flows accordingly. In other words, when/if there are updates on which flow is to be moved from/to which member port, that configuration update is sent by the Flow Manager 106 to the distributor function; so if flows x and y are to be moved from member n to member m, that is a configuration update sent to the distributor. This is a recurring event in which the system settles at some defined threshold on a per member link basis. The defined threshold will typically be a percentage (%) of utilization on a per member link basis, but the present disclosure puts no limit on how such a threshold is defined. Since the transmission rate of different flows must be assumed to be variable, the system will continue to measure even when it has reached its threshold: if the bandwidth of a flow changes, the bandwidth of that specific flow group will be affected, and the bandwidth of the member will be affected.
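The re-allocation behavior of the Flow Manager 106 described above can be illustrated with the following simplified sketch, which assumes that per-flow rates and per-member capacities are already known from measurement and fills one member at a time up to a utilization threshold; it is an illustration, not the claimed method.

```python
# Simplified sketch of a flow-manager re-allocation step: measured per-flow
# rates and known member capacities are used to refill the members one at a
# time, up to a configurable utilization threshold.
from typing import Dict, List

def rebalance(flow_rates: Dict[int, float],
              member_capacity: List[float],
              threshold: float = 0.9) -> Dict[int, int]:
    """Return a new flow -> member allocation, optimizing one member at a time.

    flow_rates: measured rate per flow ID (bits per second).
    member_capacity: known bandwidth per member link (bits per second).
    """
    allocation: Dict[int, int] = {}
    # Consider the largest flows first so each member fills up quickly.
    remaining = sorted(flow_rates, key=flow_rates.get, reverse=True)
    for member, capacity in enumerate(member_capacity):
        load = 0.0
        for flow in list(remaining):
            if load + flow_rates[flow] <= capacity:
                allocation[flow] = member
                load += flow_rates[flow]
                remaining.remove(flow)
            if load >= threshold * capacity:
                break
    # Flows that did not fit anywhere stay on the last member as a fallback.
    for flow in remaining:
        allocation[flow] = len(member_capacity) - 1
    return allocation
```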
Once all links are measured and traffic is distributed, the system may end up in a situation where one or more of the member links in the Aggregation Group are empty (i.e., have no flows assigned to the respective flow group), due to flows having been moved such that there are flows that occupy some but not all of the member links. This is one of the targets with the solution since this situation can be translated into not needed member links once the procedure has settled (i.e., reached the defined thresholds), which for a wireless point-to-point communication system translates into saved operational expense and saved spectrum costs due to not needing that many carriers for sending data.
Aggregation Group Port 108: Typically, several links are combined into an Aggregation Group (AG) Port 108. This AG Port 108 is a logical construct residing above the Medium Access Control (MAC) layer, which means that in standard L2/L3 forwarding schemes, it is identified as a valid target egress port (meaning that if the Aggregation Group Port 108 is part of standard L2/L3 forwarding schemes, it will behave as a standard port in the system, as illustrated in Figure 3).
When incoming data segments enter the L2/L3 forwarding system, there will be an L2 switching or L3 routing decision happening, and as a result an egress port will be identified. The Aggregation Group Port 108 is simply a standard egress port in the transmit (TX) direction. In the receive (RX) direction, data segments from the members
will be forwarded towards the corresponding L2/L3 forwarding system, and the source port for all the members will be the Aggregation Group Port.
Overall, this means, for example, that in the case of an L2 switching system, in the TX direction, the Aggregation Group Port 108 is one of several potential target ports as a result of the L2 MAC address table lookup; and in the RX direction, the source MAC addresses learned are associated with the Aggregation Group Port 112.
The Aggregation Group Port 108 also measures the bandwidth on sent octets from each of the member links and reports this value continuously to the Flow Manager 106. The attributes desired on the Aggregation Group Port 108 are the following:
• Aggregation Group Port Bandwidth: The Aggregated bandwidth of all the member links in the aggregation group. The aggregation group port bandwidth is to be used by the communication system to calculate overall utilization of the complete Aggregation Group Port.
• Aggregation Group Member Bandwidth: For each of the configured member links in the aggregation group, the bandwidth for that member link needs to be known, either as a configuration input or as a result of protocol communication, e.g., BNM, Bandwidth Notification Message as per ITU-T Y.1731 Ethernet Bandwidth Notification standard or other similar methods. In a case with a wired system (e.g., copper or optical cable), the bandwidth typically equals the interface bandwidth. In case of a wireless point-to-point microwave system, the bandwidth is a result of available spectrum and carriers, and is typically a value less than the interface bandwidth to/from the wireless carrier.
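The Aggregation Group Port attributes listed above can be captured in a small data structure such as the following sketch; the field and method names are illustrative only, and member bandwidths would come from configuration or from a protocol such as ITU-T Y.1731 BNM.

```python
# Hedged sketch of an Aggregation Group Port with per-member bandwidths and
# the derived group bandwidth and overall utilization described above.
from dataclasses import dataclass
from typing import List

@dataclass
class AggregationGroupPort:
    member_bandwidth_bps: List[float]   # per configured member link

    @property
    def group_bandwidth_bps(self) -> float:
        """Aggregated bandwidth of all member links in the aggregation group."""
        return sum(self.member_bandwidth_bps)

    def utilization(self, sent_bps: List[float]) -> float:
        """Overall utilization of the complete Aggregation Group Port."""
        return sum(sent_bps) / self.group_bandwidth_bps

# Values matching the example later in this description: 1.6 Gbps in total.
ag = AggregationGroupPort([600e6, 600e6, 200e6, 200e6])
```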
In the RX direction, the Aggregation Group Port 112 forwards the data segments as a standard port in any system. There is no need for any specific functions at the receiver because conversations are preserved and managed by the transmitter side.
Example: The solution is now illustrated more in detail with an example. Starting with configuration input, below are the inputs used for this example embodiment:
• Which field(s) in the packet header is(are) to be used to identify flows. This is configured to be able to parse and classify the wanted flows properly. In this example, GTP flows are used, which are identified by their TEID values.
• Bandwidth on each of the member links in the Aggregation Group. This is used since, in this embodiment, the solution is based on continuous measurement of each of the member links and dynamically assigning flows to reach as high utilization as possible. To do this, the bandwidth on each of the member links is either manually configured or entered as a result of dynamic protocols, such as BNM, Bandwidth Notification Message, as mentioned earlier.
As illustrated in Figure 4, the input traffic (i.e., data segments to be transmitted over the communication system) are in this example divided as per the following:
• Aggregated input data: 1.2 Gigabits per second (Gbps)
• The Aggregation Group Port consists of four member links (referred to here as member ports). The bandwidth of the Aggregation Group Port and the bandwidths of the member ports are as follows:
o Group Port BW: 1.6 Gbps
o Member Port 1: 600 Megabits per second (Mbps)
o Member Port 2: 600 Mbps
o Member Port 3: 200 Mbps
o Member Port 4: 200 Mbps
The incoming data segments constitute 80 TEID flows with a uniform bandwidth distribution (15 Mbps each in this example). The TEID flow IDs in this example are incremental, starting from a value of zero.
The different blocks in the system are explained in the context of the TEID example:
Parser & Distributor: In this example, the user has configured the TEID value as flow identifier, meaning that this subblock will search and classify frames based on their TEID values, visible in the GTP header. The parser function classifies and reports all the 80 TEID flows identified to the Flow Manager. The distributor function distributes traffic according to the configuration set by the Flow Manager. Flow IDs (i.e., TEID values in this example) belonging to a certain flow group are identified (parsed) and then re-directed to the applicable member port in the Aggregation Group. The default allocation is as per the following:
• 80 Flow Groups, each containing 1x TEID flow with a bandwidth of 15 Mbps
• Default Member port allocation is 20 GTP flows per member port of the aggregation group, with an aggregated bandwidth of 300Mbps. This result is based on incremental numbers starting from LSB, where each flow group has a default allocation towards member ports in the Aggregation group. The default allocation using TEID values is shown in Figure 5.
The bandwidth utilization after the default allocation is, in this example, as per the following:
• Member Port 1: 600 Mbps available, 300 Mbps after default allocation (50% utilization)
• Member Port 2: 600 Mbps available, 300 Mbps after default allocation (50% utilization)
• Member Port 3: 200 Mbps available, 300 Mbps offered after default allocation (150% utilization, packet drops)
• Member Port 4: 200 Mbps available, 300 Mbps offered after default allocation (150% utilization, packet drops)
Flow Manager: The Flow Manager is continuously notified of the actual rate on each of the four members, and the data collected after the default allocation are, in this example, as listed above. Based on the result of the measurement, the Flow Manager moves flows between the flow groups, optimizing one member port at a time, and sends the information about how the flows are to be re-assigned to the distributor, which in turn re-assigns the flows. This process is iterated continuously, and the system settles at some defined threshold on a per member port basis. The defined threshold will typically be a percentage (%) of utilization on a per member port basis. In this example, the utilization target for the member ports is set at 90%. This means that once flows are distributed and have reached a utilization per link/port that is equal to or exceeds 90% (the present disclosure puts no limit on how to set this limit), the system will move to the next member link and re-assign the remaining flows among the remaining links. In this example, after the default allocation, the Flow Manager knows that each flow ID has a bandwidth of ~15 Mbps, and to reach at least 90% utilization for member #1, it will move 17 flows from member #2, resulting in the following flow ID distribution and bandwidth among the members after the 1st iteration to optimize flow IDs for member port #1:
• Member Port 1: 555/600 Mbps, 92.5% utilization, 37 GTP flows
• Member Port 2: 45/600 Mbps, 3 GTP flows
• Member Port 3: 300/200 Mbps offered (150% utilization), 20 GTP flows
• Member Port 4: 300/200 Mbps offered (150% utilization), 20 GTP flows
The system will optimize starting with member #1, and once settled will move to member #2, until finishing with member #4.
The result once all four members are optimized in round robin order will be as per the following (assuming a configured threshold of 90% per member):
• Member Port 1: 555/600 Mbps, 92.5% utilization, 37 GTP flows
• Member Port 2: 555/600 Mbps, 92.5% utilization, 37 GTP flows
• Member Port 3: 90/200Mbps, 45% utilization, 6 GTP flows
• Member Port 4: 0/200Mbps (0% utilization), 0 GTP flows
The result of the illustrated example shows that the fourth link will not be necessary, resulting in operational savings. The example also shows that even the third member link can be saved by setting a threshold of 100% for member links 1 and 2.
Since the rate of different flows must be assumed to be variable, the system will continuously measure even when the system has reached its threshold, since if the bandwidth of the flow changes, the bandwidth for the member where the flow is directed will be affected.
In this example, the optimization order started with member port #1, but the solution is not limited thereto. Likewise, in this example, flows were first moved to member #1 from member #2, but the solution is not limited thereto. An alternative could be to move flows from over-provisioned members instead (members #3 and #4 in this example), which would result in fewer iterations until the system settles.
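The settled figures of the example above can be reproduced with a short numeric sketch that fills each member, one at a time, until its utilization strictly exceeds the 90% threshold; this is only one way to arrive at the same distribution and is not part of the disclosure.

```python
# 80 TEID flows of 15 Mbps each over member links of 600/600/200/200 Mbps.
flows = {teid: 15.0 for teid in range(80)}           # Mbps per flow
members = [600.0, 600.0, 200.0, 200.0]               # Mbps per member link
threshold = 0.9                                      # 90% per-member target

allocation = {m: [] for m in range(len(members))}
remaining = list(flows)                              # flow IDs in TEID order
for m, capacity in enumerate(members):
    load = 0.0
    # Fill this member until its utilization strictly exceeds the threshold.
    while remaining and load + flows[remaining[0]] <= capacity:
        flow = remaining.pop(0)
        allocation[m].append(flow)
        load += flows[flow]
        if load > threshold * capacity:
            break

print([len(allocation[m]) for m in range(4)])                    # [37, 37, 6, 0]
print([sum(flows[f] for f in allocation[m]) for m in range(4)])  # [555.0, 555.0, 90.0, 0.0]
```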
Figure 6 is a flow chart that illustrates the operation of the transmit node 102, and more specifically the Parser & Distributor 104 and Flow Manager 106 of the transmit node 102, in accordance with one embodiment of the present disclosure. As illustrated, the transmit node 102 transmits data segments over member links of an aggregation group (steps 600-604). More specifically, the Parser & Distributor 104 receives a data segment (step 600) and identifies a data flow to which the data segment belongs (step 602). The data flow is one of multiple data flows, wherein: each data flow is allocated to one of multiple data flow groups (e.g., initially via a default allocation and subsequently via adjusted allocations) and each data flow group is mapped to one of the member links of the aggregation group. The Parser & Distributor 104 directs the data segment to the member link that is mapped to the data flow group to which the identified data flow is allocated (step 604). Steps 600-604 are repeated for each data segment. Note that while steps 600-604 are shown sequentially, these steps may be performed in a pipeline architecture where each stage starts to operate on the next data segment once it has completed processing on the current data segment. Also note that the description of the operation of the Parser & Distributor 104 above is equally applicable here to the description of Figure 6 (and in particular to steps 600-604).
In addition, while the Parser & Distributor 104 transmits the data segments over the member links of the aggregation group, the Flow Manager 106 obtains measurements for the data flows (step 606) and adjusts the allocation of the data flows to the data flow groups (and thus the member links of the aggregation group) based on the measurements for the data flows, as described above (step 608). Steps 606 and 608 are repeated (e.g.,
continuously) such that the allocation of flows to the flow groups is continuously updated. Note that the description of the operation of the Flow Manager 106 above is equally applicable here to the description of Figure 6 (and in particular to steps 606 and 608).
In one embodiment, the one or more measurements obtained in step 606 comprise actual transmission rates for the data flows. In one embodiment, adjusting the allocation of the data flows to the data flow groups in step 608 comprises adjusting the allocation of the data flows to the data flow groups based on the actual transmission rates for the data flows and known bandwidths of the member links of the aggregation group.
In one embodiment, adjusting the allocation of the data flows to the data flow groups in step 608 comprises moving a particular data flow from a data flow group to which the particular data flow is currently assigned to another data flow group.
In one embodiment, adjusting the allocation of the plurality of data flows to the plurality of data flow groups in step 608 comprises adjusting the allocation of the data flows to the data flow groups on a per member link basis.
In one embodiment, adjusting the allocation of the data flows to the data flow groups in step 608 comprises, for a particular member link of the aggregation group: determining that one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group are satisfied and, in response thereto, moving at least one flow from the flow group mapped to the particular member link to another flow group that is mapped to another member link. In one embodiment, the one or more measurements comprise actual transmission rates for the plurality of data flows, and the one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group exceeds a predefined or configured threshold. In one embodiment, the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
In one embodiment, adjusting the allocation of the data flows to the data flow groups in step 608 comprises, for a particular member link of the aggregation group: determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied and, in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link. In one embodiment, the one or more measurements comprise actual transmission rates for the plurality of data flows, and the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold. In one embodiment, the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
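The two move criteria described above can be expressed as simple predicates, as in the following sketch; the percentage-based threshold interpretation is an assumption consistent with the example given earlier.

```python
# Hedged sketch of the move-out and move-in criteria: a flow group gives up
# flows when its measured load exceeds a configured share of its member link,
# and may receive flows while it is below that share.
def should_move_flows_out(group_rate_bps: float, link_bandwidth_bps: float,
                          threshold_pct: float) -> bool:
    return group_rate_bps > (threshold_pct / 100.0) * link_bandwidth_bps

def can_receive_flows(group_rate_bps: float, link_bandwidth_bps: float,
                      threshold_pct: float) -> bool:
    return group_rate_bps < (threshold_pct / 100.0) * link_bandwidth_bps
```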
In one embodiment, the member links of the aggregation group comprise a plurality of wired links. In one embodiment, the member links of the aggregation group comprise a plurality of wireless links. In one embodiment, the member links of the aggregation group comprise one or more wired links and one or more wireless links.
Figure 7 illustrates one example embodiment of a transmit node 700. As illustrated, the transmit node 700 includes processing circuitry 702, memory 704, and a communication interface 706. The processing circuitry 702
includes any type(s) of processors such as, e.g., one or more CPUs, one or more DSPs, one or more ASICs, one or more FPGAs, or the like, or any combination thereof. In this embodiment, a parser and distributor 708 is implemented in software stored in the memory 704 and executed by the processing circuitry 702. The parser and distributor 708 performs the functionality of the Parser & Distributor 104 described above. In this embodiment, a flow manager 710 is implemented in software stored in the memory 704 and executed by the processing circuitry 702. The flow manager 710 performs the functionality of the Flow Manager 106 described above. The communication interface 706 is any type of communication interface providing multiple links (wired and/or wireless). In this example, the communication interface 706 includes or provides an aggregation group port 712 that includes N member ports 714-1 through 714-N, which are also referred to herein as member links, of an aggregation group. Note that the details of the Aggregation Group Port 108 described above are equally applicable here to Figure 7.
Although the computing devices described herein (e.g., transmit node, network node, etc.) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Determining, calculating, obtaining, or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the device, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box or nested within multiple boxes, in practice computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hardwired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole and/or by end users and a wireless network generally.
Those skilled in the art will recognize improvements and modifications to the embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein.
Some example embodiments of the present disclosure are as follows:
Embodiment 1: A method performed by a transmit node for link aggregation, the method comprising:
• transmitting (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments: o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
■ each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
■ each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
• while transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtaining (606) one or more measurements for the plurality of data flows; and o adjusting (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
Embodiment 2: The method of embodiment 1, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows.
Embodiment 3: The method of embodiment 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows.
Embodiment 4: The method of embodiment 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows and/or known bandwidths of the plurality of member links of the aggregation group.
Embodiment 5: The method of any of embodiments 1 to 4, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises moving a particular data flow from a data flow group to which the particular data flow is currently assigned to another data flow group.
Embodiment 6: The method of any of embodiments 1 to 5, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups on a per member link basis.
Embodiment 7: The method of any of embodiments 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group are satisfied; and, in response thereto, moving at least one flow from the flow group mapped to the particular member link to another flow group that is mapped to another member link.
Embodiment 8: The method of embodiment 7, wherein: the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group exceeds a predefined or configured threshold.
Embodiment 9: The method of embodiment 8, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
Embodiment 10: The method of any of embodiments 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied; and, in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link.
Embodiment 11: The method of embodiment 10, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold.
Embodiment 12: The method of embodiment 11, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
Embodiment 13: The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise a plurality of wireless links.
Embodiment 14: The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise a plurality of wired links.
Embodiment 15: The method of any of embodiments 1 to 12, wherein the member links of the aggregation group comprise one or more wireless links and one or more wired links.
Embodiment 16: A transmit node adapted to perform the method of any of embodiments 1 to 15.
Embodiment 17: A transmit node comprising:
• a parser and distributor function configured to transmit (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments:
o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
■ each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
■ each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
• a flow manager function configured to, while the parser and distributor function is transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtain (606) one or more measurements for the plurality of data flows; and o adjust (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
Embodiment 18: A computer program comprising instructions which, when executed on at least one processor, cause the processor to carry out the method according to any of embodiments 1 to 15.
Embodiment 19: A carrier containing the computer program of embodiment 18, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.
Embodiment 20: A non-transitory computer-readable medium comprising instructions executable by processing circuitry of a transmit node, whereby the transmit node is operable to perform the method of any of embodiments 1 to 15.
Claims
1. A method performed by a transmit node for link aggregation, the method comprising:
• transmitting (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments: o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
■ each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
■ each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
• while transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtaining (606) one or more measurements for the plurality of data flows; and o adjusting (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
2. The method of claim 1, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows.
3. The method of claim 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows.
4. The method of claim 2, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups based on the actual transmission rates for the plurality of data flows, known bandwidths of the plurality of member links of the aggregation group, or both the actual transmission rates for the plurality of data flows and known bandwidths of the plurality of member links of the aggregation group.
5. The method of any of claims 1 to 4, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises moving a particular data flow from a data flow group to which the particular data flow is currently assigned to another data flow group.
6. The method of any of claims 1 to 5, wherein adjusting (608) the allocation of the plurality of data flows to the
plurality of data flow groups comprises adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups on a per member link basis.
7. The method of any of claims 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group are satisfied; and in response thereto, moving at least one flow from the flow group mapped to the particular member link to another flow group that is mapped to another member link.
8. The method of claim 7, wherein: the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows from the flow group mapped to the particular member link to another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group exceeds a predefined or configured threshold.
9. The method of claim 8, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
10. The method of any of claims 1 to 6, wherein adjusting (608) the allocation of the plurality of data flows to the plurality of data flow groups comprises, for a particular member link from among the plurality of member links of the aggregation group: determining that one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group are satisfied; and in response thereto, moving at least one flow from another flow group that is mapped to another member link to the flow group mapped to the particular member link.
11. The method of claim 10, wherein the one or more measurements comprise actual transmission rates for the plurality of data flows; and the one or more criteria for moving one or more flows to the flow group mapped to the particular member link from another flow group comprise a criterion that a sum of the actual transmission rates of one or more flows currently allocated to the particular flow group is less than a predefined or configured threshold.
12. The method of claim 11, wherein the predefined or configured threshold is a threshold percentage of the member link mapped to the particular flow group.
13. The method of any of claims 1 to 12, wherein the member links of the aggregation group comprise a plurality of wireless links, a plurality of wired links, or both one or more wireless links and one or more wired links.
14. A transmit node for link aggregation, the transmit node adapted to:
• transmit (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein, in order to transmit (600-604) the plurality of data segments, the transmit node is further adapted to, for each data segment of the plurality of data segments: o identify (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
■ each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
■ each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o direct (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
• while transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtain (606) one or more measurements for the plurality of data flows; and o adjust (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
15. The transmit node of claim 14, further adapted to perform the method of any of claims 2 to 13.
16. A transmit node comprising:
• a parser and distributor function configured to transmit (600-604) a plurality of data segments over a plurality of member links of an aggregation group, wherein transmitting (600-604) the plurality of data segments comprises, for each data segment of the plurality of data segments: o identifying (602) a data flow to which the data segment belongs, the data flow being one of a plurality of data flows wherein:
■ each data flow of the plurality of data flows is allocated to one of a plurality of data flow groups; and
■ each data flow group of the plurality of data flow groups is mapped to one of the plurality of member links of the aggregation group; and o directing (604) the data segment to one of the plurality of member links of the aggregation group that is mapped to one of the plurality of data flow groups to which the identified data flow is allocated; and
• a flow manager function configured to, while the parser and distributor function is transmitting (600-604) the plurality of data segments over the plurality of member links of the aggregation group: o obtain (606) one or more measurements for the plurality of data flows; and o adjust (608) an allocation of the plurality of data flows to the plurality of data flow groups based on the one or more measurements for the plurality of data flows.
17. A computer program comprising instructions which, when executed on at least one processor, cause the processor to carry out the method according to any of claims 1 to 13.
18. A carrier containing the computer program of claim 17, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.
19. A non-transitory computer-readable medium comprising instructions executable by processing circuitry of a transmit node, whereby the transmit node is operable to perform the method of any of claims 1 to 13.