Method and system for transmitting data over an aggregated connection
Technical Field
The present invention relates generally to the field of computer networks. More specifically, the present invention discloses a method and system for transmitting a data packet from a first communication router to a second communication router over an aggregated connection comprising a plurality of tunnels.
Background
A multi-Wide Area Network (WAN) site-to-site VPN router is a router that bonds the bandwidth of multiple interconnections (e.g., WAN connections for accessing one or more remote private networks). In some embodiments, each TCP/IP session is routed to only one WAN connection. In this configuration, a single TCP file transfer session can only utilize the bandwidth of one WAN connection at each terminal. For example, in a session-based site-to-site Virtual Private Network (VPN) connection, VPN traffic is routed over multiple WAN connections between two sites (e.g., sites A and B).
In one embodiment, M × N tunnels are first formed between the WAN connections, where M and N are the numbers of WAN connections of site A and site B, respectively. Each application TCP/IP session is then routed over one of the tunnels. It should be noted, however, that while a session-based site-to-site VPN can utilize different tunnels for different sessions, a single download session in this type of connection can only utilize one tunnel.
In wireless communications, the quality of packet transmission may be unpredictable and the packet loss rate may change often. This can reduce the quality of the overall packet transmission. Even if the bandwidth limit of each tunnel is high, throughput may not increase when the packet loss rate is high. There is a need for a solution that utilizes multiple tunnels to increase the likelihood of successful data transmission, which can be achieved by transmitting a data packet more than once and also by transmitting a packet that can be used to recover the original data packet.
Disclosure of Invention
Methods and systems for transmitting a data packet from a first communication router to a second communication router over an aggregated connection comprising a plurality of tunnels are disclosed. The first communication router receives a first data packet belonging to a data session from a first host over a Local Area Network (LAN) interface. The first data packet is destined for a second host, wherein the second host is reachable through the second communication router. The first communication router then transmits the first data packet to the second communication router and stores the first data packet in a local storage medium. The first communication router determines whether to activate an error correction mode. If the error correction mode is activated, the first communication router retransmits the first data packet and also transmits an error correction packet corresponding to the first data packet.
According to one embodiment of the present invention, the error correction mode is activated at the first communication router when the first communication router receives a first delayed query message from the second communication router. The first delayed query message indicates that the second communication router did not successfully receive the first data packet. The first delayed query message may include a first global sequence number of the first data packet. When the second communication router does not successfully receive a plurality of data packets, the delayed query message includes a plurality of global sequence numbers of the plurality of data packets.
According to one embodiment of the present invention, when the first communication router receives a resume normal message from the second communication router, the first communication router deactivates the error correction mode. The resume normal message is used to indicate that the first communication router no longer needs to be in the error correction mode. In one embodiment, the error correction mode may be activated by an administrator of the first communication router.
According to one embodiment of the invention, the first data packet is transmitted through a first tunnel of the aggregated connection, the first data packet is retransmitted through a second tunnel of the aggregated connection, and the error correction packet is transmitted through a third tunnel of the aggregated connection.
According to one embodiment of the present invention, when the error correction mode is activated, the first data packet is retransmitted and the error correction packet is transmitted only when an acknowledgement corresponding to the first data packet is not received from the second host within a waiting time period.
According to one embodiment of the invention, the waiting time period is preferably less than the first time period minus the third time period, and is also preferably less than the second time period minus the third time period. The first time period is the time between the first host transmitting the first data packet and the first host determining to retransmit the first data packet. The second time period is the time between the first host transmitting the first data packet and the second host sending a request to the first host to retransmit the first data packet. The third time period is a time required for the first packet to be transmitted from the first communication router to the second communication router.
According to one embodiment of the invention, the first data packet is retransmitted and the error correction packet is transmitted before the first host retransmits the first data packet.
Detailed Description
The following description provides preferred exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the following description of the preferred exemplary embodiments will provide those skilled in the art with an enabling description for implementing a preferred exemplary embodiment of the invention, it being understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.
In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by those skilled in the art that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed, but the process may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
Embodiments, or portions thereof, may be implemented in program instructions operable on a processing unit for performing functions and operations as described herein. Program instructions that make up the various embodiments may be stored in a storage medium.
Furthermore, as disclosed herein, the term "storage medium" may represent one or more devices for storing data, including read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), Random Access Memory (RAM), magnetic RAM, core memory, floppy disks, hard disks, magnetic tape, CD-ROMs, flash memory devices, memory cards, and/or other machine-readable media for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage media, magnetic media, memory chips or cartridges, wireless channels, and various other media capable of storing, containing or carrying instruction(s) and/or data. The machine-readable medium may be implemented by virtualization and may be a virtual machine-readable medium, including a virtual machine-readable medium in a cloud-based instance.
The term "computer-readable medium, main memory or secondary storage device" as used herein refers to any medium that participates in providing instructions to a processing unit for execution. The computer-readable medium is but one example of a machine-readable medium that may carry instructions for implementing any of the methods and/or techniques described herein. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks. Volatile media includes dynamic memory. Transmission media include coaxial cables, copper wire and fiber optics. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Volatile storage may be used for storing temporary variables or other intermediate information during execution of instructions by the processing unit. Non-volatile storage or static storage may be used to store static information and instructions for the processor, as well as various system configuration parameters.
The storage medium may contain a number of software modules that may be embodied as software code that is executed by the processing unit using any suitable type of computer instructions. The software code may be stored as a series of instructions or commands, or as a program in a storage medium.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. Alternatively, the remote computer may load the instructions into its dynamic memory and send the instructions to the system, which executes one or more sequences of one or more instructions.
The processing unit may be a microprocessor, a microcontroller, a Digital Signal Processor (DSP), any combination of those devices, or any other circuitry configured to process information.
The processing unit executes program instructions or code segments for implementing embodiments of the present invention. Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program instructions for performing the necessary tasks may be stored in a computer-readable storage medium. The processing unit may be implemented by virtualization and may be a virtual processing unit, including a virtual processing unit in a cloud-based instance.
Embodiments of the invention relate to the use of computer systems to implement the techniques described herein. In an embodiment, the processing unit of the present invention may reside on a machine such as a computer platform. According to one embodiment of the invention, the techniques described herein are performed by a computer system in response to a processing unit executing one or more sequences of one or more instructions contained in volatile memory. Such instructions may be read into the volatile memory from another computer-readable medium. Execution of the sequences of instructions contained in volatile memory causes the processing unit to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
A code segment such as a program instruction may represent a step, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the principles of the invention. Thus, implementations consistent with the principles of the invention are not limited to any specific combination of hardware circuitry and software.
Network interfaces that may be provided by a node are ethernet interfaces, frame relay interfaces, fiber optic interfaces, cable interfaces, DSL interfaces, token ring interfaces, serial bus interfaces, Universal Serial Bus (USB) interfaces, firewire interfaces, Peripheral Component Interconnect (PCI) interfaces, and the like.
The network interface may be implemented by a separate electronic component or may be integrated with other electronic components. Depending on the configuration, the network interface may have no network connection or at least one network connection. The network interface may be an ethernet interface, a frame relay interface, a fiber optic interface, a cable interface, a Digital Subscriber Line (DSL) interface, a token ring interface, a serial bus interface, a Universal Serial Bus (USB) interface, a firewire interface, a Peripheral Component Interconnect (PCI) interface, a cellular network interface, or the like.
The network interface may be connected to a wired or wireless access network. The access network may carry one or more network protocol data. A wired access network may be implemented using ethernet, fiber optic, cable, DSL, frame relay, token ring, serial bus, USB, firewire, PCI, or any material that can transfer information. A wireless access network may use infrared, High Speed Packet Access (HSPA), HSPA+, Long Term Evolution (LTE), WiMax, General Packet Radio Service (GPRS), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), WiFi, CDMA2000, Wideband CDMA (WCDMA), Time Division-Synchronous CDMA (TD-SCDMA), Bluetooth, WiBRO, Evolution-Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), Integrated Digital Enhanced Network (iDEN), or any other wireless technology. For example, the network interface may function as a Local Area Network (LAN) interface or a Wide Area Network (WAN) interface.
Embodiments, or portions thereof, may be implemented in a computer data signal, which may take any suitable form for communication over a transmission medium such that the computer data signal is readable for execution by a functional device (e.g., a processing unit) to perform the operations described herein. The computer data signal may include any binary digital electronic signal capable of being propagated over a transmission medium, such as an electronic network channel, an optical fiber, air, an electromagnetic medium, a Radio Frequency (RF) link, etc., and thus the data signal may take the form of an electrical signal, an optical signal, a radio frequency or other wireless communication signal, etc. In certain embodiments, the code segments may be downloaded via a computer network, such as the Internet, an intranet, a LAN, a Metropolitan Area Network (MAN), a Wide Area Network (WAN), the Public Switched Telephone Network (PSTN), a satellite communication system, a cable transmission system, and/or the like.
Fig. 1A illustrates a system 100 adapted according to an embodiment, which is configured to optimize the throughput of a combined plurality of variable bandwidth connections by adjusting a tunnel bandwidth weighting pattern during a data transfer session. System 100 includes a plurality of sites 102 and 104, each of which includes at least one network node. The network node may be referred to as a communication router. However, the scope of the present invention is not limited to communication routers, such that the present invention may be performed at a gateway, router, server, or any other type of network node. For simplicity, fig. 1A illustrates sites 102 and 104 as including communication routers 106 and 108, respectively. Communication routers 106 and 108 may be implemented as multi-WAN routers that support bonding the bandwidth of multiple Internet connections. Communication routers 106 and 108 are connected by a network 110. Network 110 may include a LAN, MAN, WAN, wireless network, the PSTN, the Internet, an intranet, an extranet, and the like.
Site 102 and router 106 may include M connections 112, and site 104 and router 108 may include N connections 114. Connections 112 and 114 are data connections for transferring information within network 110 between sites 102 and 104. In the illustrated embodiment, M equals 3 and N equals 2; however, these values may vary depending on the desired routers and configuration. Connections 112 and 114 may have similar or different bandwidth capabilities. Further, connections 112 and 114 may include different types of WAN connections, such as WiFi, cable, DSL, T1, 3G, 4G, satellite connections, and so forth. It should also be noted that site 102 and site 104 may each be considered a transmitter or a receiver, and that the discussion regarding the functionality of either site may be implemented on the other site. In other words, the system 100 may be implemented as a symmetric network.
FIG. 1B illustrates a network environment according to one embodiment of the invention. Tunnels 103A, 103B, and 103C are established between communication router 106 and communication router 108. Tunnels 103A, 103B, and 103C may be combined to form an aggregate connection.
According to one embodiment, the communication routers 106 and 108 may have multiple network interfaces. Communication router 106 establishes tunnels 103A, 103B, and 103C with one or more network interfaces of communication router 108 via one or more of its multiple network interfaces.
The communication devices 106 and 108 may act as gateways, routers, switches, access points, hubs, bridges, and the like.
Fig. 1C illustrates a system 100 adapted according to an embodiment, the system being configured to optimize the throughput of a combined plurality of variable bandwidth connections. The system of fig. 1C is similar to that of fig. 1A, except that M × N virtual tunnels 116 are shown. When a bonded connection is established between sites 102 and 104, for example by implementing a bonded site-to-site VPN connection, M × N tunnels 116 may be created. Each tunnel 116 corresponds to a unique pairing of one network connection of site 102 and one network connection of site 104. An aggregated connection may be formed between the communication routers 106 and 108. Tunnels 116 may be virtual tunnels.
Multiple established tunnels 116 may be aggregated, combined, or bonded together to form one aggregated connection. Those skilled in the art will appreciate that there are a variety of methods for aggregating, combining, or bonding multiple established tunnels to form one aggregated connection. A session or application that uses the aggregated connection perceives the aggregated connection as one tunnel. The aggregated connection may be an end-to-end connection, a virtual private network connection, or a connection-oriented or connectionless connection. For example, the aggregated connection may be a TCP connection or a UDP connection. In another example, the aggregated connection is a collection of multiple tunnels, and each tunnel connection is between communication router 106 and communication router 108. In another example, the aggregated connection may be a VPN tunnel comprising a plurality of established tunnels, wherein each established tunnel connection is between communication router 106 and communication router 108.
Fig. 2A illustrates a high-level flow diagram of the operation of system 100, which depicts a method 200 for increasing the throughput of a bonded connection. It is to be understood that the specific functions, sequences of functions, etc. provided in fig. 2 are intended to be exemplary operations according to the concepts of the present invention. Thus, the concepts herein may be practiced in ways different from the illustrated embodiments.
At step 201 of the illustrated embodiment, when a bonded connection is established between sites 102 and 104, for example by implementing a bonded site-to-site VPN connection, M × N virtual tunnels 116 may be created, as illustrated in fig. 1C. Each virtual tunnel 116 corresponds to a unique pairing of one network connection of site 102 and one network connection of site 104.
At step 202 of the illustrated embodiment, a default weight is determined and/or assigned for each tunnel. To determine the default weights, embodiments exchange the upstream and downstream bandwidth data of connections 112 and 114 between sites 102 and 104. Using this bandwidth data, a default weight may be calculated as follows: suppose the downstream bandwidths of connections 1 to M of site 102 are D1, D2, ..., DM, and the upstream bandwidths of connections 1 to N of site 104 are U1, U2, ..., UN; the default weight for the tunnel between connection X of site 102 and connection Y of site 104 may then be defined as DW(X, Y) = DX * UY. Using this method to calculate the default weights, if connections 112-1 to 112-3 are WAN connections of a multi-WAN router with respective upstream/downstream bandwidths of 10M/6M, 8M/4M, and 6M/6M, and connections 114-1 to 114-2 are WAN connections of a multi-WAN router with respective upstream/downstream bandwidths of 7M/5M and 9M/3M, then the respective default weights for each tunnel would be as follows:
[ Table 0001]
TABLE 0001

For site 102            For site 104
DW(1,1)=6*7=42          DW(1,1)=5*10=50
DW(1,2)=6*9=54          DW(1,2)=5*8=40
DW(2,1)=4*7=28          DW(1,3)=5*6=30
DW(2,2)=4*9=36          DW(2,1)=3*10=30
DW(3,1)=6*7=42          DW(2,2)=3*8=24
DW(3,2)=6*9=54          DW(2,3)=3*6=18
It should be noted that other ways of calculating the default weights can be envisaged, and the above is only one example of an implementation of an embodiment of the present invention. It should also be noted that many different weighting patterns may be used to define the initial bandwidth of a tunnel. For example, it may be desirable to weight a tunnel in one direction using only the downstream capacity of the receiving site and the upstream capacity of the transmitting site. Any weighting scheme used to characterize the tunnel capacity when establishing a bonded connection may be used for the purposes of the present invention.
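For illustration only, the default weight calculation described above can be sketched in Python as follows. The function and variable names are assumptions made for this example and do not form part of any claimed implementation; the bandwidth figures are the example values given above for connections 112-1 to 112-3 and 114-1 to 114-2.

    # Sketch of DW(X, Y) = DX * UY, where DX is the downstream bandwidth of local
    # connection X and UY is the upstream bandwidth of remote connection Y.
    # Tuples are (upstream, downstream) in Mbps, as in the example above.
    site_102 = [(10, 6), (8, 4), (6, 6)]   # connections 112-1 to 112-3
    site_104 = [(7, 5), (9, 3)]            # connections 114-1 to 114-2

    def default_weights(local, remote):
        # Default weight of the tunnel between local connection x and remote connection y.
        return {(x + 1, y + 1): down * up
                for x, (_, down) in enumerate(local)
                for y, (up, _) in enumerate(remote)}

    print(default_weights(site_102, site_104))  # for site 102: DW(1,1)=42, ..., DW(3,2)=54
    print(default_weights(site_104, site_102))  # for site 104: DW(1,1)=50, ..., DW(2,3)=18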
When routing packets from site 102 to site 104 according to an embodiment, the packets are distributed to the tunnels at a rate according to the effective weight EW(x, y). Initially, the effective weight of an embodiment is set equal to the default weight, i.e., EW(x, y) = DW(x, y), which is optimal for packet distribution if the bandwidth of tunnels 116 remains unchanged from the initial setting. However, if a user is downloading a file via the bonded network connection in a TCP session and one or more tunnels experience packet loss, the overall throughput of the session will drop dramatically. This is partly because packet loss will continue to cause TCP retransmissions, and the flow control of TCP will keep the throughput low even if the tunnels without packet loss are not fully occupied.
An effective way to improve throughput is to avoid such packet losses. To do so, at step 203 of the illustrated embodiment, an embodiment of the present invention detects when a tunnel is experiencing an increase or decrease in packet loss rate. At step 204, the embodiment is further configured to modify the effective weights of tunnels that are experiencing or have experienced a change in packet loss rate. The packet loss rate information may be monitored continuously or monitored over specific time periods. Upon determining that a tunnel is experiencing an unacceptable packet loss rate (step 204-1), the illustrated embodiment reduces the effective weight of the tunnel at step 204-2. In some embodiments, unacceptable may mean that the packet loss rate is any non-zero value, while other embodiments may define an unacceptable rate as any rate above a predefined threshold. Embodiments may implement these reductions in a stepwise manner, in a continuous manner, in a manner that reduces the weight once in proportion to the increase in packet loss rate, and the like. When the reduction is done in a stepwise manner, embodiments may continue to monitor the tunnel in order to optimize the amount of reduction implemented.
Tunnels 116 may be established or monitored by sending heartbeat packets through each tunnel from router 106 or router 108. In some embodiments, when the receiving terminal fails to receive heartbeat packets from a tunnel for a period of time, the receiving terminal will treat the tunnel as down, and the tunnel will not be used to route traffic. If heartbeat packets start to be received again, the tunnel may be re-established and weighted along with the other tunnels. Thus, embodiments may use heartbeat packets to monitor and re-establish connections in the event that all packets in a tunnel are dropped and the effective weight of the tunnel is reduced to zero.
Further, when a tunnel recovers all or a portion of its corresponding bandwidth, e.g., determines that the packet loss rate is decreasing (step 204-3), the illustrated embodiment operates to increase the effective weight of such tunnels (step 204-4) in order to fully or more fully utilize the bandwidth. Some embodiments increase the effective weight of the tunnel using a predetermined step size until the exact effective weight is restored. Other embodiments increase the effective weight in proportion to the latest measured bandwidth, which may correspond to the latest measured packet loss rate. Further, embodiments may increase the effective weight of the tunnel based on a predetermined linear or exponential ratio.
After the effective weights of the tunnels are adjusted, or after it is determined that no adjustment is needed, the weighting pattern of the system is updated at step 205 of the illustrated embodiment. This updating may include storing any processed information, using such information in further processing, causing the system to take action, and the like. For example, the processing performed with respect to step 205 may average the weighting pattern over a period of time in order to reduce errors associated with short-lived anomalies. Further, the updated information may be used by the system 100 to modify the packet distribution of the data transfer session, as discussed with respect to fig. 2B. System 100 may continue to implement steps 203 through 205 continuously or periodically throughout the data transfer session.
Fig. 2B illustrates an embodiment in which packets are distributed based, at least in part, on the modified weights of the tunnels after the weighting method 200 has been implemented. In particular, step 206 of the illustrated embodiment is used to distribute packets over the tunnels according to the weighting scheme determined by the operation of method 200. In some embodiments, this distribution will change throughout the data transfer session, and thus the steps of fig. 2B are shown as being repeated. Some embodiments change the packet distribution each time the system is updated at step 205. Further, step 205 may cause the change to be implemented periodically, in response to certain loss rate change thresholds, and so on. It should be appreciated that determining the weights by operation of the method 200 and applying the determined weights to the packet distribution at step 206 may have different periodicities. For example, the method 200 may provide updates of the weighting pattern information using a relatively short iteration loop, while the distribution of packets changes based on such weighting pattern information using a longer iteration loop.
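As a non-limiting illustration of step 206, the sketch below picks a tunnel for each outgoing packet with probability proportional to its effective weight; a weighted round-robin scheduler would serve equally well, and nothing in this snippet is mandated by the embodiments above.

    import random

    def pick_tunnel(effective_weights):
        # Choose a tunnel (x, y) with probability proportional to EW(x, y);
        # tunnels whose effective weight has been reduced to zero are skipped.
        tunnels = [t for t, w in effective_weights.items() if w > 0]
        if not tunnels:
            raise ValueError("no tunnel with a positive effective weight")
        weights = [effective_weights[t] for t in tunnels]
        return random.choices(tunnels, weights=weights, k=1)[0]

    ew = {(1, 1): 42, (1, 2): 54, (2, 1): 28, (2, 2): 36, (3, 1): 42, (3, 2): 54}
    print(pick_tunnel(ew))  # e.g. (1, 2)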
To monitor the bandwidth of the various tunnels 116, some embodiments of the present invention encapsulate each transmitted IP packet with various information. Fig. 3 illustrates an example embodiment showing the types of information 300 that may be encapsulated in a transmitted IP packet. The version field 302 may contain information about the protocol version utilized, and the protocol type field 303 may contain the protocol type of the payload packet. In general, the value of this field will correspond to the Ethernet protocol type of the packet. However, additional values may be defined in other documents. The tunnel ID field 304 may be a 32-bit field and may contain an identifier that identifies the tunnel of the IP packet. The Advanced Encryption Standard (AES) initialization vector field 306 may be a 32-bit field and may contain an initialization vector for AES encryption. The global sequence number field 308 may be a 32-bit field and may contain a sequence number that is used to reorder the packets of the respective sessions into the proper order once they have emerged from their respective tunnels. The per-tunnel sequence number field 310 may be a 32-bit field that represents the sequence number assigned to each packet routed to a particular tunnel. The AES encrypted payload field 312 may carry the payload of the IP packet. AES encryption may be applied for higher security of the payload in order to prevent third-party attacks.
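A minimal sketch of packing the fields of fig. 3 is shown below. The widths of the version and protocol type fields, the byte order, and the field ordering on the wire are assumptions made purely for illustration; only the four 32-bit fields are taken directly from the description above.

    import struct

    def encapsulate(version, proto_type, tunnel_id, aes_iv, gsn, ptsn, encrypted_payload):
        # Assumed layout: 1-byte version, 2-byte protocol type (Ethernet protocol
        # types are 16 bits), then the 32-bit tunnel ID, AES initialization vector,
        # global sequence number and per-tunnel sequence number, followed by the
        # AES encrypted payload.
        header = struct.pack("!BHIIII", version, proto_type, tunnel_id, aes_iv, gsn, ptsn)
        return header + encrypted_payload

    pkt = encapsulate(version=1, proto_type=0x0800, tunnel_id=7, aes_iv=0x1234ABCD,
                      gsn=1001, ptsn=55, encrypted_payload=b"\x00" * 16)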
The per-tunnel sequence numbers discussed above may be used to monitor packet loss in a tunnel. In one embodiment, the router at the receiving terminal calculates the packet loss rate DR(x, y) of each tunnel every f seconds by monitoring the per-tunnel sequence numbers of the received packets. DR(x, y) may be characterized as the number of sequence numbers missing in period f divided by the total sequence number increase in period f. The length of period f may vary, and in one embodiment, f equals 5 seconds.
Packet loss can also be monitored in other ways. For example, the transmitter may periodically inform the receiving terminal of the number of packets it has transmitted; the transmitter may transmit heartbeat packets to the receiving terminal at constant time intervals so that the receiving terminal can estimate the overall loss rate by monitoring the loss rate of the heartbeat packets; or a loss rate measure may be obtained from the physical interface/device/layer, and so on.
The receiving terminal may return the loss rate, the effective weight, or another bandwidth indication for a particular tunnel to the sending router. When the transmitter receives information about packet loss, some embodiments reduce the effective weight EW(x, y) of the tunnel by EW(x, y) * DR(x, y). Other metrics may be used to modify the effective weight of a tunnel. In some embodiments, the transmitter may receive feedback and the effective weight may be reduced by an amount greater or less than the packet loss rate. Such variations may be configured according to the specific needs of the communication system. The above example attempts to balance reducing the effective weight of a tunnel to prevent further packet loss against keeping the weight high enough to use the available bandwidth of the tunnel. Any metric that strikes this balance may be used.
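For illustration, the receiving-side loss rate estimate DR(x, y) and the sending-side weight reduction described above can be sketched as follows; the monitoring period and the sample numbers are placeholders, not measured values.

    def loss_rate(missing_sequences, sequence_increase):
        # DR(x, y): per-tunnel sequence numbers missing in period f divided by the
        # total sequence number increase observed in period f.
        return missing_sequences / sequence_increase if sequence_increase else 0.0

    def reduce_effective_weight(ew, dr):
        # EW(x, y) <- EW(x, y) - EW(x, y) * DR(x, y)
        return ew - ew * dr

    dr = loss_rate(missing_sequences=3, sequence_increase=100)  # 3% loss in period f
    print(reduce_effective_weight(42, dr))                      # 42 -> 40.74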
Fig. 4A illustrates an example embodiment of the types of information 400 that may be encapsulated in a feedback packet sent to the transmitting router to report the packet loss rate or other bandwidth-related data observed at the receiving end router. The type field 402 may contain data about the type of data contained in the data 1 field 404 and the data 2 field 406. Data 1 field 404 and data 2 field 406 may contain any information that may be used to help the router determine tunnel information regarding the number of tunnels, tunnel bandwidth, the number of packets dropped in a tunnel, etc. An example of possible values of the type field 402 and the corresponding contents of data fields 404 and 406 is shown in the chart of fig. 4B.
Information encapsulated in transmitted IP packets, such as that shown in figs. 3 and 4, may also be used for packet buffering and reordering. Because the latency of each tunnel may be different, when two consecutive packets of the same TCP session are sent to a VPN peer through the bonded VPN tunnels, the two consecutive packets may arrive out of order because they are routed via two different tunnels. If a TCP session receives out-of-order packets from the VPN, the TCP session will slow down due to TCP retransmissions. Thus, the receiving terminal should buffer packets that arrive too early until the slower packets arrive or until an expiration time has elapsed. With this buffering, delayed packets that arrive before the expiration time will be forwarded in sequence to the target device. This buffering helps to optimize end-to-end throughput.
It should be noted that the embodiments described herein are often discussed in the context of VPN connections. These discussions are presented to illustrate example embodiments of the combined connections. The inventive concepts described in the claims herein are not limited to this connection. Indeed, any connection in which sufficient data may be obtained and exchanged to dynamically monitor the bandwidth of multiple communication paths for a data transfer session may be implemented by embodiments of the present invention.
As discussed above, each packet may be assigned two different sequence numbers, namely a Global Sequence Number (GSN) and a Per-Tunnel Sequence Number (PTSN). These numbers may be used to facilitate packet buffering and reordering operations. After passing a packet to the upper layer, the receiving terminal may update the next expected per-tunnel sequence number (NE-PTSN) and the next expected global sequence number (NE-GSN).
One method of how packets may be buffered or forwarded to the target device after they have been received and decrypted is described below; an illustrative code sketch follows the list.
1. If the GSN of the packet is equal to zero, the packet is immediately forwarded to the target device.
2. It is checked whether the packet's PTSN is equal to the NE-PTSN of its tunnel. If not, all buffered packets with smaller GSNs are dequeued (forwarded to the target device) in order; otherwise, the buffer is left untouched.
3. NE-PTSN is updated (i.e., NE-PTSN is set to PTSN + 1).
4. If the GSN of the packet is less than NE-GSN, the packet is forwarded to the target device.
5. If the GSN of the packet is equal to NE-GSN, NE-GSN is updated (i.e., NE-GSN is set to GSN + 1) and the packet is forwarded to the target device. NE-GSN is then repeatedly updated, and the head of the buffer is dequeued, as long as the GSN of the buffer head equals the new NE-GSN.
6. Otherwise (the GSN is greater than NE-GSN), the packet is queued in order of GSN.
7. If a packet has been in the queue for longer than a fixed amount of time, NE-GSN is set to the GSN of that packet plus one, and all packets with smaller GSNs are dequeued in order.
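The following Python sketch illustrates steps 1 to 7 above under several simplifying assumptions: a single buffer ordered by GSN, one NE-PTSN counter per tunnel (the first packet seen on a tunnel is treated as in order), and an expiry check that only examines the buffered packet with the smallest GSN. It is an illustration of the procedure, not a claimed implementation.

    import heapq, time

    class Reorderer:
        def __init__(self, expire_seconds=1.0):
            self.ne_gsn = 1        # next expected global sequence number
            self.ne_ptsn = {}      # tunnel id -> next expected per-tunnel sequence number
            self.buffer = []       # min-heap of (gsn, arrival_time, packet)
            self.expire = expire_seconds

        def receive(self, tunnel_id, ptsn, gsn, packet, forward):
            if gsn == 0:                                      # step 1
                forward(packet)
                return
            if ptsn != self.ne_ptsn.get(tunnel_id, ptsn):     # step 2: loss detected on this tunnel
                self._flush_below(gsn, forward)
            self.ne_ptsn[tunnel_id] = ptsn + 1                # step 3
            if gsn < self.ne_gsn:                             # step 4
                forward(packet)
                return
            if gsn == self.ne_gsn:                            # step 5
                self.ne_gsn = gsn + 1
                forward(packet)
                while self.buffer and self.buffer[0][0] == self.ne_gsn:
                    _, _, head = heapq.heappop(self.buffer)
                    self.ne_gsn += 1
                    forward(head)
                return
            heapq.heappush(self.buffer, (gsn, time.time(), packet))   # step 6

        def expire_stale(self, forward):                      # step 7, called periodically
            now = time.time()
            while self.buffer and now - self.buffer[0][1] > self.expire:
                gsn, _, packet = heapq.heappop(self.buffer)
                self.ne_gsn = gsn + 1
                forward(packet)
                self._flush_below(self.ne_gsn, forward)

        def _flush_below(self, gsn, forward):
            # Dequeue (forward) all buffered packets whose GSN is smaller than gsn, in order.
            while self.buffer and self.buffer[0][0] < gsn:
                forward(heapq.heappop(self.buffer)[2])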
Thus, the encapsulated packet information discussed in figs. 3 and 4 may contain information that optimizes the overall throughput of the data transmission system (e.g., system 100), both by helping to optimize tunnel bandwidth in response to monitored packet loss rates and by helping to efficiently reorder packets received in a data transfer session.
Fig. 5 illustrates an exemplary processor-based system 500 that may be used to implement systems, apparatus, and methods according to some embodiments. The processor-based system 500 may represent the architecture of the communication routers 106 and 108. A Central Processing Unit (CPU) 501 is coupled to a system bus 502. CPU 501 may be any general purpose CPU or may be a special purpose CPU designed to implement the teachings above. The present invention is not limited by the architecture of CPU 501 (or other components of exemplary system 500) as long as CPU 501 (and the other components of system 500) support the inventive operations as described herein. CPU 501 may execute the various logical instructions described herein. For example, CPU 501 may execute machine-level instructions according to the exemplary operational flow described above in connection with fig. 2. When executing instructions representing the operational steps illustrated in fig. 2, CPU 501 becomes a special-purpose processor of a special-purpose computing platform specifically configured to operate in accordance with various embodiments of the teachings described herein.
The system 500 also includes Random Access Memory (RAM) 503, which may be SRAM, DRAM, SDRAM, or the like. RAM 503 may be a secondary storage device that stores program instructions executable by CPU 501. The system 500 includes Read Only Memory (ROM) 504, which may be PROM, EPROM, EEPROM, and the like. RAM 503 and ROM 504 hold user and system data and programs, as is well known in the art.
System 500 also includes input/output (I/O) adapter 505, communications adapter 511, user interface adapter 508, and display adapter 509. In certain embodiments, I/O adapter 505, user interface adapter 508, and/or communications adapter 511 may enable a user to interact with system 500 to enter information.
I/O adapter 505 connects storage devices 506, such as hard disk drives, Compact Disc (CD) drives, floppy disk drives, tape drives, etc., to system 500. In addition to RAM 503, the storage devices may be used to meet the memory requirements of the operations discussed in the embodiments above. The communication adapter 511 is adapted to couple the system 500 to a network 512, which may enable information to be input to and/or output from the system 500 via such network 512 (e.g., the Internet or other wide area network, a local area network, a public or private switched telephone network, a wireless network, or any combination thereof). The communication adapter 511 may be considered a network interface, and the system 500 may include a plurality of communication adapters 511. The user interface adapter 508 couples user input devices, such as a keyboard 513, a pointing device 507, and a microphone 514, and/or output devices, such as a speaker 515, to the system 500. The display adapter 509 is driven by the CPU 501 to control the display on the display device 510. The display adapter 509 transmits instructions that transform or manipulate the state of various numbers of pixels of the display device 510 to visually present the desired information to a user. Such instructions include instructions for changing state from on to off, setting a particular color, intensity, duration, or the like. Each such instruction constitutes a rendering instruction that controls how and what content is displayed on the display device 510.
Fig. 6 illustrates a network environment based on the network environment illustrated in fig. 1A. One of the LAN interfaces of communication router 106 is connected to host 103, and one of the LAN interfaces of communication router 108 is connected to host 105. Hosts 103 and 105 are located in sites 102 and 104, respectively. Multiple tunnels may be established between communication routers 106 and 108 through network 110. Communication router 106 can establish tunnels with communication router 108 through one or more WAN interfaces of communication router 106 and one or more WAN interfaces of communication router 108. For purposes of illustration only, communication router 108 is connected to network 110 by two connections, and communication router 106 is also connected to network 110 by two connections. Network 110 may be an interconnected network, such as the Internet.
Fig. 7 is a timing diagram illustrating how data is transferred between host 105 and host 103, according to one embodiment of the invention. In flow 701, data-1 is transmitted from host 103 to communication router 106. Data-1 originates at host 103 and is expected to be transmitted to host 105. Data-1 may be included in one or more data packets, one or more frames, or any other unit of data. Whether data-1 is included in one packet or in a plurality of packets depends on the size of data-1. If the size of data-1 is such that data-1 can be encapsulated in one packet, data-1 is included in one packet. If data-1 is too large to be encapsulated in one packet, data-1 may be segmented into multiple portions, and the multiple portions are encapsulated in multiple packets, respectively. Communication router 106 then sends data-1 to communication router 108 in flow 702 over the interconnected network 110. Communication router 108 forwards data-1 to host 105. When host 105 receives data-1, it generates an acknowledgement packet, Ack-1, destined for host 103. Ack-1 is received at communication router 108 in flow 704 and forwarded to communication router 106 in flow 705. Communication router 106 then sends Ack-1 to host 103 in flow 706. Thus, host 103 is now aware that data-1 successfully arrived at host 105. Host 103 then transmits data-2 to host 105 through communication routers 106 and 108. In flow 707, data-2 is transmitted from host 103 to communication router 106. Communication router 106 forwards data-2 to communication router 108 in flow 708. Communication router 108 does not receive data-2 on time because data-2 may be discarded or delayed when transmitted to communication router 108. Communication router 108 then generates and sends a delayed query message in flow 709 to inform communication router 106 that data-2 has not been received and to inquire about the delay. When communication router 106 receives the delayed query message, it determines that data-2 has been lost and activates the error correction mode accordingly. In the error correction mode, communication router 106 sends data-2-1 and data-2-EC in flows 710 and 711, respectively. Data-2-1 may be the same as data-2, while data-2-EC includes error correction data corresponding to data-2. Using data-2-1 and/or data-2-EC, communication router 108 may regenerate data-2 and send data-2 to host 105 in flow 712. When host 105 receives data-2, it generates an acknowledgement packet, Ack-2, destined for host 103. Ack-2 is received at communication router 108 in flow 713 and forwarded to communication router 106 in flow 714. Communication router 106 then sends Ack-2 to host 103 in flow 715. Host 103 next sends data-3 to host 105 through communication routers 106 and 108. Data-3 is transmitted to communication router 106 in flow 716. Since the error correction mode is activated in communication router 106, communication router 106 sends data-3, data-3-1, and data-3-EC to communication router 108 in flows 717, 718, and 722, respectively. In flow 719, communication router 108, having successfully received data-3-1, forwards data-3 to host 105. Host 105 then generates and sends an acknowledgement packet, Ack-3, destined for host 103. Ack-3 is received at communication router 108 in flow 720 and forwarded to communication router 106 in flow 721. Communication router 106 then sends Ack-3 to host 103 in flow 723.
When communication router 108 receives both data-3-1 and data-3-EC, it determines that packets are no longer being lost and therefore sends a resume normal message to communication router 106 in flow 724. When communication router 106 receives the resume normal message, it may deactivate the error correction mode. When the error correction mode is deactivated, communication router 106 no longer sends error correction packets, e.g., data-2-EC and data-3-EC.
In some communication protocols, such as TCP, an end message may be sent from the sender to the receiver when a data transfer session ends, in order to indicate that the data transfer session has ended, i.e., that no more data packets belonging to the data transfer session will be sent. Communication router 108 may determine that data, such as data-2, is delayed or lost because it has not received any end message after receiving data-1. Therefore, communication router 108 sends the delayed query message in flow 709 to request communication router 106 to resend data-2.
When data is received at communication router 106 from host 103, communication router 106 can save the data in a local storage medium, such as a cache memory, so that it can send a copy of the data when necessary without host 103 having to resend the data. In the event that data is lost while being transmitted to communication router 108, communication router 106 can quickly resend the data. There is no need to wait for host 103 to resend the data or to notify host 103 of the loss. Those skilled in the art will appreciate that, in some communication protocols, host 103 may slow down its data transmission when it becomes aware of data loss. Because of the reduced latency, communication router 108 is able to receive the retransmission of the data from communication router 106, rather than a retransmission request having to be issued to host 103.
In flow 722, communication router 106 sends data-3-EC. For purposes of illustration, it is coincidental that data-3-EC is sent before Ack-3 is received at communication router 106. In one variation, if Ack-3 had been received earlier at communication router 106, communication router 106 would not have transmitted data-3-EC.
The contents of data-2-1 may be the same as the contents of data-2 because communication router 106 resends data-2 by sending data-2-1. Data-2-EC is encapsulated in an error correction packet. In one variation, the packet encapsulating data-2-1 is identical to the packet encapsulating data-2 because the contents of data-2-1 are the same as the contents of data-2, the source addresses of the packets are the same, and the destination addresses of the packets are the same.
The error correction packet may be a packet transmitted by communication router 106 to correct or detect errors in the data packets transmitted from communication router 106 to communication router 108. Error correction packets include Forward Error Correction (FEC) packets, Automatic Repeat reQuest (ARQ) packets, parity packets, and the like. In a variant, the error correction packet may also be a combination of two or more packets. For example, if a first packet includes data-2 and a second packet includes data-2-EC, a third packet may include data content based on an exclusive-OR (XOR) operation applied to the contents of the first and second packets. Using data-2-EC, communication router 108 may be able to recover data-2 and send data-2 to host 105.
In one variation, data-2 may be segmented into two portions. A first portion of data-2 is encapsulated in a first data packet and a second portion of data-2 is encapsulated in a second data packet. For purposes of illustration, when communication router 106 sends data-2 to communication router 108 in flow 708, communication router 108 receives the first data packet but discards or loses the second data packet. Communication router 108 then sends the delayed query message in flow 709. Data-2-EC may include an error correction packet whose data contents are based on an XOR operation applied to the contents of the first and second data packets, i.e., to the first portion and the second portion of data-2. When communication router 106 sends data-2-EC, communication router 108 may recover the second portion of data-2 by using the first portion of data-2 and data-2-EC. Those skilled in the art will appreciate how the first portion of data-2 and data-2-EC may be used to recover the second portion of data-2, since data-2-EC is based on both the first and second portions of data-2.
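A minimal sketch of the XOR-based recovery described in this variation is given below, assuming the two portions of data-2 are of equal length; padding and the actual error correction packet framing are ignored.

    def xor_bytes(a, b):
        # Byte-wise XOR of two equal-length byte strings.
        return bytes(x ^ y for x, y in zip(a, b))

    portion_1 = b"first portion..."    # carried in the first data packet (received)
    portion_2 = b"second portion.."    # carried in the second data packet (lost)
    data_2_ec = xor_bytes(portion_1, portion_2)   # contents of the error correction packet

    # The receiver recovers the lost second portion from the first portion and data-2-EC.
    recovered = xor_bytes(portion_1, data_2_ec)
    assert recovered == portion_2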
In one variation, when communication router 106 receives the delayed query message and activates the error correction mode, communication router 106 may send only the error correction packet, e.g., data-2-EC, and not data-2-1. In another variation, when communication router 106 receives the delayed query message, communication router 106 may send only data-2-1 and not data-2-EC. The benefit of sending only one of data-2-1 and data-2-EC is that fewer packets are transmitted and therefore less bandwidth is consumed. This may help reduce traffic congestion. However, in some cases, data-2-EC alone may not assist in recovering data-2 if data-2-1 is not received. For example, when data-2-EC contains only a checksum or parity information, data-2-EC may be used for error detection only. In another example, when data-2-EC is generated by applying an XOR operation to two packets of data-2, data-2 may only be recovered from data-2-EC if at least one of the two packets of data-2 has been received. Thus, it may be helpful to send data-2-1 so that data-2 may be recovered using data-2-1 and data-2-EC. In one variation, data-2-EC may be used to correct errors when there are errors in data-2-1 as received at communication router 108.
Fig. 11 illustrates the structure of a delayed query message according to one embodiment of the invention. A delayed query message 1100 may be sent when any data packet is dropped or lost, i.e., when there is a missing data packet. The delayed query message 1100 may be an IP packet including an IP header 1101, other information fields 1104, and a payload 1105. The IP header 1101 includes a source address field 1102 and a destination address field 1103. For example, referring to fig. 7, a delayed query message is sent in flow 709 with the source address being the IP address of communication router 108 and the destination address being the IP address of communication router 106. The other information fields 1104 may contain different information, such as the nature of the IP packet, i.e., the information that the IP packet is a delayed query message. This indicates to communication router 106 that the IP packet is not a data packet, but is a management message. Communication router 106 may process data packets and management messages in different ways. For example, communication router 106 may store data packets received from communication router 108 in a cache memory, but communication router 106 may not store management messages, such as delayed query messages, in the cache memory. The other information fields 1104 may further contain session information to indicate to which session the missing packet belongs. The payload 1105 may contain the GSN of the missing packet. For example, in flow 709, the payload 1105 of the delayed query message 1100 sent by communication router 108 to communication router 106 may contain a GSN corresponding to one or more packets of data-2, because data-2 has not yet successfully arrived at communication router 108. When communication router 106 receives the delayed query message 1100 from communication router 108 and the payload 1105 contains a GSN corresponding to one or more packets of data-2, communication router 106 may determine that communication router 108 has not received data-2. Therefore, communication router 106 then sends data-2-1 and data-2-EC to communication router 108. The scope of the invention is not limited to the delayed query message being an IP packet. The delayed query message may be a Short Message Service (SMS) message, a Multimedia Message Service (MMS) message, or any other type of message that may be sent by communication router 108 to communication router 106 to provide information about any missing data packets.
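Purely as an illustration, a delayed query message carrying the GSNs of the missing packets could be assembled as in the sketch below; the numeric type code, the field widths, and the layout are invented for this example and are not the claimed message format.

    import struct

    MSG_DELAYED_QUERY = 0x01   # hypothetical marker placed in the other information fields

    def build_delayed_query(session_id, missing_gsns):
        # Assumed payload layout: message type, session identifier, count of
        # missing packets, then one 32-bit GSN per missing packet.
        header = struct.pack("!BIH", MSG_DELAYED_QUERY, session_id, len(missing_gsns))
        body = b"".join(struct.pack("!I", gsn) for gsn in missing_gsns)
        return header + body

    msg = build_delayed_query(session_id=42, missing_gsns=[1001, 1002])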
Fig. 8 is a timing diagram illustrating how data is transferred between host 105 and host 103, according to one embodiment of the invention. Host 103 sends data-1 to host 105 through communication routers 106 and 108. In flow 801, data-1 is transmitted from host 103 to communication router 106. Communication router 106 then transmits data-1 to communication router 108 in flow 802, and communication router 108 forwards data-1 to host 105 in flow 803. Communication router 106 is initially in the error correction mode, and thus, after a waiting time period, it sends data-1-1 and data-1-EC in flows 806 and 807, respectively. The benefit of transmitting data-1-1 and data-1-EC is that, in the event that data-1 is lost, host 103 will not have to retransmit data-1. Data-1-1 and data-1-EC may assist communication router 108 in regenerating data-1 and sending data-1 to host 105 in the event that data-1 is lost in flow 802. When host 105 receives data-1, it generates an acknowledgement packet, i.e., Ack-1, destined for host 103. Ack-1 is transmitted to communication router 108 in flow 804 and is forwarded by communication router 108 to communication router 106 in flow 805. When communication router 108 receives both data-1 and data-1-1 from communication router 106, communication router 108 sends a resume normal message to communication router 106 in flow 808. After receiving the resume normal message, communication router 106 deactivates the error correction mode because packets are no longer being lost. Host 103 then transmits data-2 destined for host 105. When data-2 is received at communication router 106 in flow 810, communication router 106 forwards data-2 to communication router 108 in flow 811. Communication router 108 then forwards data-2 to host 105 in flow 812. Because the error correction mode has been deactivated, communication router 106 does not send any error correction packets for data-2. When host 105 receives data-2, it generates an acknowledgement packet, i.e., Ack-2, destined for host 103. Ack-2 is sent to communication router 108 in flow 813 and forwarded to communication router 106 in flow 814. Communication router 106 then sends Ack-2 to host 103 in flow 815.
The waiting time period between communication router 106 sending data-1 and sending data-1-1 may be adjusted or adapted to different network requirements. The purpose of transmitting data-1-1 is to reduce the likelihood of host 103 retransmitting data-1. Host 103 may retransmit data-1 for one or both of the following reasons: (i) host 103 determines that data-1 is lost because no acknowledgement is received; and (ii) host 105 sends a request to host 103 to resend data-1. Thus, in one variation, the waiting time period is preferably less than both (i) a first time period between host 103 sending data-1 and host 103 deciding to retransmit data-1, and (ii) a second time period between host 103 transmitting data-1 and host 105 deciding to send a request to host 103 to retransmit data-1. The first time period and the second time period may be defined in a transport protocol, such as the TCP protocol. Since there is some delay for data to travel from communication router 106 to communication router 108, a third time period, namely the time required for a data packet to travel from communication router 106 to communication router 108, may also be taken into account when setting the waiting time period. The third time period may vary depending on network conditions. The waiting time period may then be less than the first time period minus the third time period, and may also be less than the second time period minus the third time period. The benefit of reducing the likelihood of host 103 retransmitting data-1 is that host 103 does not slow down its data transmission due to packet loss. In another variation, the waiting time period is greater than a fourth time period, which is approximately the time required for Ack-1 to be transmitted from communication router 108 to communication router 106. This allows communication router 106 to wait long enough to receive Ack-1, and to transmit data-1-1 and data-1-EC only if Ack-1 is not received within the fourth time period. The benefit of having a waiting time period greater than the fourth time period is that less bandwidth may be consumed, because data-1-1 and data-1-EC are not transmitted if Ack-1 is received. However, the fourth time period may vary depending on network conditions. The disadvantage of the waiting time period being greater than the fourth time period is that host 103 may retransmit data-1 when the fourth time period has elapsed, which may also slow down the data transfer. Therefore, it is preferred to take the first, second, and third time periods into account as discussed above and to ignore the fourth time period. This ensures that data is transmitted successfully and that data transmission is not slowed down, even though bandwidth consumption may be higher. In another variation, the waiting time period may be adjusted by an administrator of communication routers 106 and 108.
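The bound on the waiting time period described above reduces to the following relationship; the numerical values in the sketch are placeholders rather than measured protocol timings.

    def max_waiting_period(first_period, second_period, third_period):
        # The waiting time period should be less than (first period - third period)
        # and less than (second period - third period).
        return min(first_period, second_period) - third_period

    # Placeholder values in seconds: host retransmission timeout, receiver-request
    # timeout, and router-to-router transit time.
    print(max_waiting_period(first_period=0.60, second_period=0.45, third_period=0.05))  # 0.40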
The difference between the timing diagrams in Fig. 7 and Fig. 8 is that in Fig. 7 the error correction mode is activated at communication router 106 when a delayed query message is received at communication router 106. Alternatively, in Fig. 8, the error correction mode may be activated by default, or may be activated by a user or administrator of communication router 106. If the error correction mode is activated by default, the retransmitted data packets and the error correction packets generate additional traffic, which may cause congestion. This slows down data transmission and consumes significant bandwidth. Therefore, it may be beneficial to activate the error correction mode only when a delayed query message is received, in order to conserve bandwidth and reduce traffic congestion. However, activating the error correction mode by default may make the process of resending data packets and sending error correction packets faster, because communication router 106 need not wait for communication router 108 to send a delayed query message. This may be beneficial when communication router 106 knows that the packet drop rate or packet loss rate of the tunnels is high. When the packet drop rate or packet loss rate is high, communication router 106 retransmits the data packets and transmits the error correction packets.
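The choice between activating the error correction mode on receipt of a delayed query message (Fig. 7) and activating it pre-emptively (Fig. 8) can be expressed as a simple policy. The sketch below is illustrative only; the loss-rate threshold and the function and field names are assumptions, not part of the specification.

```python
# Sketch of an activation policy for the error correction mode at the sending
# router. The 5% threshold and all names are illustrative assumptions.

LOSS_RATE_THRESHOLD = 0.05  # assumed: treat more than 5% tunnel loss as "high"

def should_activate_error_correction(received_delayed_query: bool,
                                     activated_by_admin: bool,
                                     tunnel_loss_rates: list[float]) -> bool:
    # Activate on a delayed query message (Fig. 7), on explicit configuration,
    # or pre-emptively when any tunnel of the aggregated connection is lossy (Fig. 8).
    if received_delayed_query or activated_by_admin:
        return True
    return any(rate > LOSS_RATE_THRESHOLD for rate in tunnel_loss_rates)

# Example: no delayed query yet, but one tunnel is dropping 8% of packets.
print(should_activate_error_correction(False, False, [0.01, 0.08, 0.02]))  # True
```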
In one example, when host 103 transmits data to host 105 through communication routers 106 and 108 using TCP, and multiple packets are dropped or lost, host 103 may reduce its transmission rate in order to reduce packet loss and packet discard. This can make the overall transmission much slower, even if the increase in packet loss is temporary. In addition, when host 103 determines that a packet has been dropped or lost, it resends the packet. When communication router 106 retransmits the packet instead, the overall transmission can be faster than when host 103 retransmits it. To avoid host 103 reducing its transmission rate and resending packets, communication router 106 sends more than one copy of the same packet, e.g., data-2 and data-2-1, and may also send an error correction packet, e.g., data-2-EC. When communication router 106 sends more than one copy of the same packet, the redundancy is higher and it is more likely that the packet will be received at host 105. Thus, an acknowledgement packet can be sent from host 105 and received at host 103 in time, and host 103 will not need to reduce its transmission rate or resend the packet.
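The benefit of redundancy can be quantified under the simplifying assumption that losses of the individual copies are independent: if each copy is lost with probability p, at least one of k copies arrives with probability 1 − p^k. The snippet below is a worked illustration of that assumption only.

```python
# Worked example, assuming independent losses across copies (e.g., different tunnels):
# probability that at least one of k copies of a packet arrives is 1 - p**k.

def delivery_probability(loss_rate: float, copies: int) -> float:
    return 1.0 - loss_rate ** copies

# With 10% loss per copy, one copy arrives 90% of the time,
# two copies raise this to 99%, and three copies to 99.9%.
for k in (1, 2, 3):
    print(k, round(delivery_probability(0.10, k), 4))
```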
In a preferred embodiment, copies of the same packet are transmitted through different tunnels of the aggregated connection where possible. For example, referring to Fig. 7, if data-2 is transmitted in flow 708 through tunnel 103A, then data-2-1 is transmitted in flow 710 through tunnel 103B and data-2-EC is transmitted in flow 711 through tunnel 103C. This may allow data-2, data-2-1, and data-2-EC to be transmitted at substantially the same time. Further, if tunnels 103A, 103B, and 103C are established using networks provided by different service providers, the probability that at least one of data-2, data-2-1, and data-2-EC successfully reaches communication router 108 is high, because the network quality of the tunnels may differ. If the network quality of tunnel 103A is not satisfactory, or tunnel 103A is broken, or tunnel 103A is experiencing a large number of packet drops, data-2 may not successfully reach communication router 108. If data-2-1 were also sent through tunnel 103A, it may experience the same network problems that data-2 has just experienced. Thus, data-2-1 is preferably sent through a different tunnel, e.g., tunnel 103B, that may experience better network performance, because the path, route, or connection used by tunnel 103B may be different from the path, route, or connection used by tunnel 103A. Using different tunnels increases the likelihood that data-2-1 will successfully reach communication router 108.
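A minimal way to realize this preference is to assign each copy to a distinct tunnel when enough tunnels are available, and to reuse tunnels only when there are more copies than tunnels. The sketch below is an illustration under those assumptions; the tunnel identifiers and the round-robin fallback are not taken from the specification.

```python
# Sketch: spread a packet and its copies (including the EC packet) across
# distinct tunnels of the aggregated connection when possible. Reuse of tunnels
# when copies outnumber tunnels is an illustrative assumption.

from itertools import cycle

def assign_tunnels(copy_labels: list[str], tunnels: list[str]) -> dict[str, str]:
    if not tunnels:
        raise ValueError("aggregated connection has no tunnels")
    pool = iter(tunnels) if len(tunnels) >= len(copy_labels) else cycle(tunnels)
    return {label: next(pool) for label in copy_labels}

# Example mirroring Fig. 7: three copies spread over three tunnels.
print(assign_tunnels(["data-2", "data-2-1", "data-2-EC"],
                     ["tunnel 103A", "tunnel 103B", "tunnel 103C"]))
```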
Fig. 9 is a flow diagram illustrating a process performed at communication router 106 according to one embodiment of the invention. In step 901, communication router 106 receives data from host 103, where the data is destined for host 105. In step 902, communication router 106 determines whether to activate the error correction mode. If the error correction mode is activated, communication router 106 sends both the original data and error correction data for the original data in step 903. The original data is the same as the data received from host 103. Alternatively, if the error correction mode is not activated, communication router 106 sends only the original data in step 904. The process ends in step 905.
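The Fig. 9 flow at the sending router can be sketched as follows. This is a simplified illustration under assumed names (handle_packet_from_host, send_to_peer, the placeholder _ec_for, and the buffering of sent packets); it is not the specification's implementation.

```python
# Sketch of the Fig. 9 sender-side flow. The message tuples, the buffering of
# sent packets for later resends, and the EC placeholder are assumptions.

sent_buffer: dict[int, bytes] = {}   # global sequence number -> payload, kept for resends

def _ec_for(payload: bytes) -> bytes:
    # Placeholder for an error correction payload (e.g., parity over a block);
    # a plain copy is used here purely for illustration.
    return payload

def handle_packet_from_host(seq: int, payload: bytes,
                            error_correction_mode: bool,
                            send_to_peer) -> None:
    # Step 901: data received from host 103, destined for host 105.
    sent_buffer[seq] = payload            # kept in case a resend is requested later
    send_to_peer(("DATA", seq, payload))  # original data towards communication router 108
    # Step 902: is the error correction mode active?
    if error_correction_mode:
        # Step 903: additionally send a copy and an error correction packet.
        send_to_peer(("DATA-COPY", seq, payload))
        send_to_peer(("EC", seq, _ec_for(payload)))
    # Step 904: with the mode inactive, only the original data was sent. Step 905: end.

# Example: error correction mode active; three messages are emitted.
handle_packet_from_host(1, b"data-1", True, print)
```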
Fig. 10 is a flow diagram illustrating a process performed at communication router 108 according to one embodiment of the invention. In step 1001, communication router 108 receives data from communication router 106. Communication router 108 then determines, in step 1002, whether all data is received in time. When the error correction mode is activated in communication router 106, all data includes the original data sent by host 103 and the error correction data originating from communication router 106. If all data is received in time, communication router 108 sends a resume normal message to communication router 106 in step 1003. Communication router 108 may then forward the original data to host 105. Alternatively, if not all data is received in time, communication router 108 sends a request to communication router 106 in step 1004, wherein the request is for resending the data. The process ends in step 1005.
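Correspondingly, the Fig. 10 flow at the receiving router can be sketched as below. The deadline check, the deduplication of copies by global sequence number, and the message formats are assumptions made for illustration.

```python
# Sketch of the Fig. 10 receiver-side flow. The deadline, the deduplication by
# global sequence number, and the message tuples are illustrative assumptions.

import time

received_seqs: set[int] = set()   # global sequence numbers already forwarded to host 105

def handle_block_from_peer(expected_seqs: set[int],
                           arrivals: dict[int, bytes],
                           deadline: float,
                           send_to_peer,
                           forward_to_host) -> None:
    # Steps 1001-1002: wait until the deadline, then check what arrived in time.
    time.sleep(max(0.0, deadline - time.time()))
    missing = expected_seqs - set(arrivals)
    for seq, payload in sorted(arrivals.items()):
        if seq not in received_seqs:          # duplicate copies are forwarded only once
            received_seqs.add(seq)
            forward_to_host(payload)
    if not missing:
        send_to_peer(("RESUME_NORMAL",))                   # step 1003
    else:
        send_to_peer(("RESEND_REQUEST", sorted(missing)))  # step 1004
    # Step 1005: end.

# Example: packet 2 of the expected block {1, 2} never arrived in time.
handle_block_from_peer({1, 2}, {1: b"data-1"}, time.time(), print, print)
```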
In one variation, step 1003 is omitted if the error correction mode has been deactivated. In another variation, communication router 108 determines whether the error correction mode is to be activated.
Preferably, communication router 108 sends the request to resend the data in step 1004 before host 103 determines that the data has not been successfully transmitted. When communication router 106 receives the request to resend the data, it resends a copy of the data to communication router 108. Communication router 106 may also send error correction data to communication router 108.
Communication routers 106 and 108 preferably have large caches. In accordance with the present invention, communication router 106 creates and sends error correction packets and copies of packets based on the packets that were sent by host 103 and stored in the cache memory of communication router 106. Thus, communication router 106 may need to hold multiple packets in its cache. Additionally, multiple management messages, e.g., resume normal messages and delayed query messages, may need to be exchanged between communication routers 106 and 108. These management messages may also need to be stored in the cache memory of communication routers 106 and 108.
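One possible shape for such a cache at the sending router is a buffer of recently sent packets, keyed by global sequence number and evicted after a retention window, so that resend requests can still be served. The sketch below is an assumption-laden illustration; the retention window and class name are not taken from the specification.

```python
# Sketch of a bounded retention cache at the sending router. The retention
# window (RETENTION_SECONDS) and the eviction policy are illustrative assumptions.

import time
from collections import OrderedDict

RETENTION_SECONDS = 20.0   # see the storage sizing discussion below

class PacketCache:
    def __init__(self) -> None:
        self._items: "OrderedDict[int, tuple[float, bytes]]" = OrderedDict()

    def store(self, seq: int, payload: bytes) -> None:
        self._items[seq] = (time.time(), payload)
        self._evict()

    def get(self, seq: int) -> bytes | None:
        entry = self._items.get(seq)
        return entry[1] if entry else None

    def _evict(self) -> None:
        # Drop packets older than the retention window, oldest first.
        cutoff = time.time() - RETENTION_SECONDS
        while self._items:
            _, (timestamp, _) = next(iter(self._items.items()))
            if timestamp >= cutoff:
                break
            self._items.popitem(last=False)

# Example: serve a resend request for global sequence number 42.
cache = PacketCache()
cache.store(42, b"payload of data-1")
print(cache.get(42))
```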
It should be appreciated that communication router 108 is capable of performing the same processes as communication router 106, and vice versa. In the above description, for ease of reading, communication router 106 has been described as the data sender and communication router 108 as the data receiver. It should be noted that the scope of the present invention is not limited to host 103 sending data to host 105; data transfer from host 105 to host 103 may also occur. The data transfer may also be bidirectional, such that host 103 and host 105 may send data to each other.
Since the communication router needs to store data for retransmission, a storage medium is used to store the data. The storage should be able to hold a few seconds of transmitted data, preferably about 20 seconds. For example, for an access connection capable of transmitting 100 megabits per second, the required amount of storage is about 20 seconds multiplied by 100 megabits per second, i.e., about 2000 megabits, or roughly 250 megabytes. Those skilled in the art will appreciate that the greater the amount of storage, the more data may be retained for retransmission.
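The sizing rule above is simple arithmetic; the helper below reproduces it for arbitrary link rates and retention windows. The 20-second window is the preference stated above, while the function name is merely illustrative.

```python
# Worked sizing: storage (bytes) = retention window (s) * link rate (bit/s) / 8.

def retransmission_storage_bytes(link_rate_mbps: float, window_seconds: float = 20.0) -> float:
    return window_seconds * link_rate_mbps * 1_000_000 / 8

# 100 Mbit/s for 20 s -> 250,000,000 bytes, i.e. about 250 megabytes.
print(retransmission_storage_bytes(100))
```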
Drawings
FIG. 1A illustrates an overall system for optimizing throughput for a plurality of variable bandwidth connections according to an embodiment of the present invention;
FIG. 1B illustrates a network environment in accordance with various embodiments of the invention;
FIG. 1C illustrates a system 100 adapted according to an embodiment, the system being configured to optimize the throughput of a plurality of combined variable bandwidth connections;
FIG. 2A illustrates a flow diagram depicting a method for increasing throughput of bonded connections, in accordance with an embodiment of the invention;
FIG. 2B illustrates a flow diagram depicting a method for increasing throughput of a bonded connection, in accordance with an embodiment of the invention;
FIG. 3 is an exemplary embodiment illustrating the types of information that may be encapsulated in a transmitted IP packet according to an embodiment of the present invention;
FIG. 4A is an exemplary embodiment illustrating the types of information that may be encapsulated in a feedback packet according to embodiments of the invention;
FIG. 4B is a chart illustrating possible values for fields of the feedback packet of FIG. 4A;
FIG. 5 depicts a block diagram of a processing system adapted to implement the present invention;
FIG. 6 illustrates a network environment in accordance with various embodiments of the invention;
FIG. 7 is a timing diagram illustrating how data is transferred between hosts according to one embodiment of the invention;
FIG. 8 is a timing diagram illustrating how data is transferred between hosts according to one embodiment of the invention;
FIG. 9 is a flow diagram illustrating a process performed at a first communications router according to one embodiment of the present invention;
FIG. 10 is a flow diagram illustrating a process performed at a second communications router according to one embodiment of the present invention;
FIG. 11 illustrates the structure of a delayed query message according to one embodiment of the invention.