WO2022105686A1 - Packet processing method and related apparatus - Google Patents
Packet processing method and related apparatus
- Publication number: WO2022105686A1 (application PCT/CN2021/130315)
- Authority: WIPO (PCT)
- Prior art keywords: queue, network device, burst, data stream, packet
Classifications
- H04L47/62 — Queue scheduling characterised by scheduling criteria
- H04L47/6275 — Queue scheduling based on priority, for service slots or service orders
- H04L43/087 — Monitoring or testing based on specific metrics: jitter
- H04L47/17 — Flow control; congestion control: interaction among intermediate nodes, e.g. hop by hop
- H04L47/56 — Queue scheduling implementing delay-aware scheduling
- H04L47/24 — Traffic characterised by specific attributes, e.g. priority or QoS
Definitions
- the present application relates to the field of communication technologies, and in particular, to a packet processing method and related devices.
- Delay determinism means that, for any packet in a data flow, the end-to-end delay experienced in the network does not exceed a certain value; that is, the network guarantees a delay bound for the data flow.
- Latency determinism represents the ability of the network to deliver packets in a "timely" manner.
- the jitter of a data stream refers to the difference between the upper and lower delay bounds that the packets in the data stream may experience. Jitter determinism specifies not only the upper bound but also the lower bound of the delay of the packets in the data stream; it expresses whether the network can deliver packets on time, neither too early nor too late.
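The jitter definition above can be expressed as a short sketch (an illustrative aside, not part of the application; the function name and nanosecond units are assumptions):

```python
def jitter_bound(delays_ns):
    """Jitter of a data stream: the difference between the upper and
    lower delay bounds experienced by its packets (illustrative only)."""
    return max(delays_ns) - min(delays_ns)

# Example: per-packet end-to-end delays in nanoseconds.
delays = [100400, 100100, 100300, 100200]
print(jitter_bound(delays))  # prints 300
```

Zero jitter, as required below for remote control of a robotic arm, corresponds to `jitter_bound` returning 0: every packet experiences exactly the same delay.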
- the controller needs to remotely control the robotic arm to complete many delicate operations, requiring the delay between the controller and the robotic arm to be less than 1ms (milliseconds), and the jitter to be less than 1us (microseconds), or even zero jitter.
- Current scheduling methods include, for example, Damper-model-based schemes and cyclic queuing and forwarding (CQF) schemes.
- Embodiments of the present application provide a packet processing method and a related device, which are used to ensure a deterministic upper bound of delay and end-to-end zero jitter of packets.
- a first aspect of the embodiments of the present application provides a packet processing method, where the packet processing method includes:
- the first network device receives a first packet in the network at a first moment; the first packet is the first packet of the first burst of the first data stream, and the first burst is one of multiple bursts included in the first data stream received by the first network device. The first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream. Then, the first network device determines a first target queue from a plurality of queues included in the first queue system according to the first moment; the one or more packets included in the first burst are added to the first target queue; and the first network device processes the first target queue according to the scheduling rules of the multiple queues.
- the first network device determines the first target queue according to the first moment at which the first packet of the first burst is received, and enqueues the first burst at burst granularity: the one or more packets of the first burst are sequentially added to the first target queue.
- correspondingly, the last-hop network device that processes the one or more packets included in the first burst may determine the corresponding third target queue, and then sequentially add the one or more packets included in the first burst to the third target queue.
- enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, ensuring that the shape of the data flow entering the network is the same as the shape of the data flow leaving it, thereby ensuring a deterministic upper bound on packet delay and end-to-end zero jitter.
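The time-based queue selection and burst-granularity enqueue described above can be sketched as follows (an illustrative sketch, not the patent's normative algorithm; the gating granularity, queue count, and all names are assumed for the example):

```python
GATE_GRANULARITY_US = 10   # assumed first time interval between queue openings
NUM_QUEUES = 8             # assumed number of queues in the first queue system

def first_target_queue(arrival_us: int) -> int:
    """Map the arrival time of a burst's first packet to a target queue."""
    return (arrival_us // GATE_GRANULARITY_US) % NUM_QUEUES

def enqueue_burst(queues, arrival_us, burst_packets):
    """Burst-granularity enqueue: every packet of the burst joins the
    queue chosen for the burst's first packet, in order."""
    q = first_target_queue(arrival_us)
    queues[q].extend(burst_packets)
    return q
```

Under these assumptions, a first packet arriving at 25 µs selects queue (25 // 10) % 8 = 2, and all packets of that burst are appended to queue 2.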
- the first time intervals between the opening times of two adjacent queues among the multiple queues included in the first queue system are equal.
- the first time intervals between the opening times of two adjacent queues in the first queue system are equal, so that the interval at which two adjacent bursts of each data stream arrive at the first network device can be configured to be an integer multiple of the gating granularity. This provides the basis for implementing the scheme, thereby ensuring a deterministic upper bound of packet delay and zero end-to-end jitter.
- the second time intervals at which two adjacent bursts of the multiple bursts included in the first data stream reach the first network device are equal, and the second time interval is an integer multiple of the first time interval.
- the time interval between two adjacent bursts of the first data stream reaching the first network device is an integer multiple of the gating granularity, so as to ensure that the shape of the first data stream entering the first-hop network device is the same as the shape of the data stream leaving the last-hop network device, thereby ensuring a deterministic upper bound of packet delay and zero end-to-end jitter.
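The arrival-interval constraint above — equal gaps between adjacent bursts, each gap an integer multiple of the gating granularity — can be checked with a short sketch (function name and microsecond units are assumptions):

```python
def arrival_intervals_valid(arrivals_us, gate_granularity_us):
    """True if all inter-burst arrival gaps are equal and each gap is
    an integer multiple of the gating granularity."""
    gaps = [b - a for a, b in zip(arrivals_us, arrivals_us[1:])]
    return len(set(gaps)) == 1 and gaps[0] % gate_granularity_us == 0
```

For example, bursts arriving at 0, 40, 80, 120 µs against a 10 µs granularity satisfy the constraint; arrivals at 0, 40, 75 µs do not.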
- the number of bits of the multiple bursts included in the first data stream is the same.
- the number of bits of each burst in a data stream should be the same, so that differences in burst bit counts do not cause the bursts to experience different delays through the network devices, which would result in end-to-end jitter of the packets.
- the first burst includes multiple packets of the same size.
- the first burst includes multiple packets of the same size, which avoids end-to-end jitter caused by packets of different sizes taking different times to pass through the network devices in the network.
- the method further includes: the first network device receives a second packet in the network at a second moment, where the second packet is the first packet of a second burst of the second data stream; the second burst is one of multiple bursts included in the second data stream received by the first network device, and the second burst includes one or more packets. The first network device determines a second target queue from the plurality of queues included in the first queue system according to the second moment; the second target queue is the first target queue, or the second target queue is located after the first target queue; and the first target queue is the last queue of the first queue system, or the first target queue is before the last queue of the first queue system.
- when the first network device receives multiple data streams, it can still determine the corresponding target queue according to the reception time of the first packet of each burst of the different data streams, and add the burst to that target queue. Bursts of different data streams can be added to the same target queue.
- the third time intervals at which two adjacent bursts of the multiple bursts included in the second data stream reach the first network device are equal, and the third time interval is an integer multiple of the first time interval.
- the time interval between two adjacent bursts of the second data stream reaching the first network device is an integer multiple of the gating granularity, so as to ensure that the shape of the second data stream entering the first-hop network device is the same as the shape of the second data stream leaving the last-hop network device, thereby ensuring a deterministic upper bound of the packet delay and zero end-to-end jitter.
- packets included in N bursts are added to the first target queue, where the N bursts include the first burst, each of the N bursts corresponds to a data stream, and the data streams corresponding to different bursts in the N bursts are different. The total number of bits of the N bursts is less than the number of bits that the first target queue can hold, and the number of bits that the first target queue can hold is equal to the port rate of the first network device multiplied by the time interval between the opening time and the closing time of the first target queue.
- the first target queue can accept the packets of N bursts, and the total number of bits of the N bursts is less than the number of bits that the first target queue can accommodate, so that transmission of the N bursts' packets completes within the interval between the first target queue's opening time and closing time. This avoids affecting the sending of packets in other target queues, thereby ensuring deterministic packet delay and zero end-to-end jitter.
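The capacity condition above — the N bursts' total bits must stay below the queue capacity, which equals the port rate times the queue's open window — can be sketched as (all names and units assumed):

```python
def bursts_fit(burst_bits, port_rate_bps, open_us, close_us):
    """True if the total bits of the bursts can be transmitted within
    the target queue's open window at the given port rate."""
    capacity_bits = port_rate_bps * (close_us - open_us) / 1_000_000
    return sum(burst_bits) < capacity_bits
```

At an assumed 1 Gbit/s port with a 100 µs open window the queue holds 100 000 bits, so two bursts totalling 70 000 bits fit, while bursts totalling 110 000 bits do not.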
- the first packet includes queue information of the first target queue; or, one or more packets included in the first burst respectively include queue information of the first target queue.
- the first packet, or each packet of the first burst, carries the queue information of the first target queue, so that the last-hop network device can determine the queue corresponding to the first target queue according to the queue information. Enqueuing and scheduling are thereby performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, ensuring a deterministic upper bound of packet delay and zero end-to-end jitter.
- the queue information of the first target queue includes a queue number of the first target queue.
- the queue information of the first target queue is expressed in the form of queue numbers.
- the one or more packets included in the first burst further include information indicating the queue group to which the queue, into which the second network device is to add the one or more packets of the first burst, belongs.
- the second network device is the last-hop network device that processes one or more packets included in the first data flow.
- the first network device receives multiple data streams, and bursts of different data streams among the multiple data streams may fall into the same queue.
- the first network device may carry, in the packets, the number of the queue group to which the queue, into which the second network device is to add the one or more packets of the first burst, belongs. The last-hop network device determines the corresponding queue group according to this queue group number and processes the queue group according to the scheduling rules of that queue group, so as to avoid contention between different data streams and ensure a deterministic upper bound of the packet delay and end-to-end zero jitter.
- each of the one or more packets included in the first burst includes first time information of that packet, and the first time information is used to indicate the first remaining processing time of the packet. The first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process the packet. The first theoretical time upper limit is the theoretical upper limit of the time the packet spends passing through the network device from the first reference time to the second reference time. The first reference time is the reference time at which the first network device releases the packet to the first queue system, or the time at which the first network device receives the packet. The second reference time is the reference time at which the packet enters the queue system of the second network device that processes the one or more packets included in the first burst. The first actual time is the actual time the packet experiences inside the first network device, from the packet's first reference time to the time at which the packet is output from the first network device.
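The remaining-time bookkeeping defined above can be sketched as (illustrative; the function name, argument names, and microsecond units are assumptions):

```python
def remaining_processing_time(theoretical_limit_us, ref_time_us, output_time_us):
    """First remaining processing time: the theoretical upper limit for
    this hop minus the time the packet actually spent inside it."""
    actual_us = output_time_us - ref_time_us   # first actual time
    return theoretical_limit_us - actual_us
```

For example, a packet released into the queue system at t = 1000 µs and output at t = 1030 µs, against a 50 µs theoretical upper limit, would carry a remaining processing time of 20 µs for downstream nodes to absorb.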
- each packet of the first burst may carry its first time information, so that the second network device in the network that processes the first data stream can determine the reference time of each packet at that device and select a corresponding target queue for each packet according to the reference time.
- the first time information includes the first reference time of each packet and the time when each packet is output from the first network device.
- the first time information may specifically include the first reference time of each packet at the first network device and the time at which each packet is output from the first network device, so that the second network device in the network that processes the first data stream can determine the reference time of each packet at that device and select a corresponding target queue for each packet according to the reference time.
- the first time information further includes a first theoretical upper limit of time for each packet.
- the first time information further includes the first theoretical time upper limit of each packet, so that the second network device in the network that processes the first data stream can determine the reference time of each packet at that device and select a corresponding target queue according to the reference time.
- the first packet includes first time information of the first packet, where the first time information is used to indicate the first remaining processing time of the first packet. The first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process the first packet; the first theoretical time upper limit is the theoretical upper limit of the time the first packet spends passing through the network device from the first reference time to the second reference time. The first reference time is the reference time at which the first network device releases the first packet to the first queue system, and the second reference time is the reference time at which the first packet enters the queue system of the second network device that processes the one or more packets included in the first burst.
- the intermediate node that processes the first data stream may add bursts of the first data stream to the corresponding target queue at burst granularity. Therefore, the first network device may carry the first time information of the first packet in the first packet of the first burst, and the second network device in the network that processes the first data stream can determine the target queue corresponding to the first burst based on that time information and add the first burst to the target queue, thereby reducing the overhead of packet transmission.
- the second time information includes a first reference time of the first packet and a time when the first packet is output from the first network device.
- the first time information may specifically include the first reference time of the first packet at the first network device and the time at which the first packet is output from the first network device, so that the second network device in the network that processes the first data stream can determine the reference time of the first packet at that device and select a corresponding target queue for the first burst according to the reference time.
- the first time information further includes a first theoretical upper limit of the time of the first packet.
- the first time information further includes the first theoretical time upper limit of the first packet, so that the second network device in the network that processes the first data stream can determine the reference time of the first packet at that device and select a corresponding target queue for the first burst according to the reference time.
- a second aspect of an embodiment of the present application provides a packet processing method, where the packet processing method includes:
- the second network device receives a first data stream; the first data stream includes multiple bursts, a first burst of the multiple bursts includes one or more packets, and a third burst of the multiple bursts includes one or more packets. The first burst and the third burst are two adjacent bursts in the first data stream, and the second network device is the last-hop network device that processes the packets included in the first data stream. The second network device adds the first burst and the third burst of the first data stream to a third target queue and a fourth target queue, respectively, in a burst-granularity enqueue manner.
- enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, ensuring that the shape of the data flow entering the network is the same as the shape of the data flow leaving it, thereby ensuring a deterministic upper bound on packet delay and end-to-end zero jitter.
- the third target queue and the fourth target queue are two adjacent or non-adjacent queues in the second queue system.
- the target queues to which two adjacent bursts of the same data flow are added at the last-hop network device may be two adjacent queues or two non-adjacent queues.
- this is determined by the first-hop network device's method of mapping bursts of the data stream to target queues and by the time interval between two adjacent bursts of the data stream reaching the first-hop network device.
- the time interval between the moment at which the second network device releases the one or more packets included in the first burst to the second queue system and the moment at which it releases the one or more packets included in the third burst to the second queue system is a fourth time interval; the time interval between the opening time of the third target queue and the opening time of the fourth target queue is a fifth time interval; and the fourth time interval is equal to the fifth time interval.
- the above-mentioned fourth time interval is equal to the fifth time interval, so as to achieve deterministic delay of packets and zero end-to-end jitter.
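The equality of the fourth and fifth time intervals can be expressed as a check (an illustrative sketch with assumed names and microsecond units):

```python
def release_matches_opening(release_1_us, release_3_us, open_3rd_us, open_4th_us):
    """True if the interval between releasing the first and third bursts
    into the second queue system (the fourth time interval) equals the
    interval between the opening times of their target queues (the
    fifth time interval)."""
    return (release_3_us - release_1_us) == (open_4th_us - open_3rd_us)
```

When the two intervals match, the spacing of bursts leaving the last hop mirrors the spacing at which they were released, which is what preserves the stream's shape.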
- the method further includes: the second network device receives a second data stream, where the second data stream includes multiple bursts and a second burst of the multiple bursts includes one or more packets; the moment at which the second data stream arrives at the second network device is after the moment at which the first burst of the first data stream arrives at the second network device, and before the moment at which the last burst of the first data stream arrives at the second network device. The second network device selects a first queue group from the second queue system and adds the bursts included in the first data stream to the first queue group in the order of those bursts; the second network device selects a second queue group from the second queue system and adds the bursts included in the second data stream to the second queue group in the order of those bursts; the priority of the first queue group is higher than the priority of the second queue group; and the second network device processes the first queue group and the second queue group according to the scheduling rules of the multiple queues in the second queue system.
- the second network device may map the different data streams to different queue groups; each queue group corresponds to a priority, and different queue groups correspond to different priorities, so as to avoid contention between different data streams and ensure a deterministic upper bound of packet delay and zero end-to-end jitter.
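The per-stream queue groups with distinct priorities can be sketched as strict-priority service (illustrative; the data structure and names are assumptions, not the patent's scheduler):

```python
from collections import deque

def schedule_groups(queue_groups):
    """Strict-priority service: queue_groups is ordered from highest to
    lowest priority; all packets of a higher-priority group are emitted
    before any packet of a lower-priority group."""
    for group in queue_groups:
        while group:
            yield group.popleft()
```

With one group per data stream, the higher-priority stream's bursts are sent first, so the two streams never interleave within a shared target queue.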
- the second network device determining the third target queue from the second queue system of the second network device includes: the second network device determines the first target queue, where the first target queue is the queue to which the one or more packets included in the first burst were added at the first network device, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream; then, the second network device determines, according to a first mapping relationship, the third target queue corresponding to the first target queue from the second queue system, where the first mapping relationship includes the mapping relationship between queues in the first queue system of the first network device and queues in the second queue system.
- the second network device may determine the third target queue corresponding to the first target queue according to the mapping relationship between the queues of the first queue system and the queues of the second queue system, so that enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, ensuring a deterministic upper bound of packet delay and zero end-to-end jitter.
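The first mapping relationship — resolving the last hop's third target queue from the first hop's queue number — can be sketched as a lookup table (the table contents are an assumed example, not taken from the application):

```python
# Assumed example mapping from queue numbers of the first-hop queue
# system to queue numbers of the last-hop (second) queue system.
FIRST_TO_THIRD_QUEUE = {0: 3, 1: 4, 2: 5, 3: 6}

def third_target_queue(first_queue_no: int) -> int:
    """Resolve the third target queue via the first mapping relationship."""
    return FIRST_TO_THIRD_QUEUE[first_queue_no]
```

In practice the table would be configured so that the opening-time offsets between mapped queues preserve the stream's shape across the network.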
- the first packet of the first burst includes the queue information of the first target queue, and the second network device determining the first target queue includes: the second network device determines the first target queue according to the queue information of the first target queue.
- the second network device may determine, according to the queue information of the first target queue carried in the first packet of the first burst, the first target queue to which the first burst was added at the first network device, so as to facilitate determining the third target queue corresponding to the first target queue.
- packets included in N bursts are added to the third target queue, where the N bursts include the first burst, each of the N bursts corresponds to a data stream, and the data streams corresponding to different bursts in the N bursts are different; the N bursts correspond to N queue groups, each of the N queue groups corresponds to a priority, and the priorities of different queue groups are different.
- when bursts of multiple data streams fall into the same target queue at the same time, the burst of each data stream should be allocated to its corresponding queue group, and each queue group is then processed according to the scheduling rules of that queue group, so as to avoid transmission conflicts between bursts of the multiple data streams and ensure a deterministic upper bound of packet delay and zero end-to-end jitter.
- the number of bits of the multiple bursts included in the first data stream is the same.
- the number of bits of each burst in a data stream should be the same, so that differences in burst bit counts do not cause the bursts to experience different delays through the network devices, which would result in end-to-end jitter of the packets.
- the first burst includes multiple packets of the same size.
- the first burst includes multiple packets of the same size, which avoids end-to-end jitter caused by packets of different sizes taking different times to pass through the network devices in the network.
- a third aspect of the embodiments of the present application provides a first network device, where the first network device includes:
- a receiving unit, configured to receive a first packet in the network at the first moment, where the first packet is the first packet of the first burst of the first data stream, and the first burst is one of multiple bursts included in the first data stream received by the first network device;
- a processing unit, configured to determine the first target queue from the plurality of queues included in the first queue system according to the first moment, and to add the one or more packets included in the first burst to the first target queue in the order of those packets;
- a sending unit, configured to process the first target queue according to the scheduling rules of the multiple queues.
- the first time intervals between the opening times of two adjacent queues among the multiple queues included in the first queue system are equal.
- the second time intervals at which two adjacent bursts of the multiple bursts included in the first data stream reach the first network device are equal, and the second time interval is an integer multiple of the first time interval.
- the number of bits of the multiple bursts included in the first data stream is the same.
- the first burst includes multiple packets of the same size.
- the receiving unit is further configured to receive a second packet in the network at a second moment, where the second packet is the first packet of the second burst of the second data stream, the second burst is one of multiple bursts included in the second data stream received by the first network device, and the second burst includes one or more packets;
- the processing unit is further configured to determine a second target queue from the plurality of queues included in the first queue system according to the second moment; the second target queue is the first target queue, or the second target queue is located after the first target queue; and the first target queue is the last queue of the first queue system, or the first target queue is before the last queue of the first queue system.
- the third time intervals at which two adjacent bursts of the multiple bursts included in the second data stream reach the first network device are equal, and the third time interval is an integer multiple of the first time interval.
- packets included in N bursts are added to the first target queue, where the N bursts include the first burst, each of the N bursts corresponds to a data stream, and the data streams corresponding to different bursts in the N bursts are different. The total number of bits of the N bursts is less than the number of bits that the first target queue can hold, and the number of bits that the first target queue can hold is equal to the port rate of the first network device multiplied by the time interval between the opening time and the closing time of the first target queue.
- the first packet includes queue information of the first target queue; or, one or more packets included in the first burst respectively include queue information of the first target queue.
- the queue information of the first target queue includes a queue number of the first target queue.
- the one or more packets included in the first burst further include information indicating the queue group to which the queue, into which the second network device is to add the one or more packets of the first burst, belongs.
- the second network device is the last-hop network device that processes one or more packets included in the first data flow.
- each of the one or more packets included in the first burst includes first time information of that packet, and the first time information is used to indicate the first remaining processing time of the packet. The first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process the packet. The first theoretical time upper limit is the theoretical upper limit of the time the packet spends passing through the network device from the first reference time to the second reference time. The first reference time is the reference time at which the first network device releases the packet to the first queue system, or the time at which the first network device receives the packet. The second reference time is the reference time at which the packet enters the queue system of the second network device that processes the one or more packets included in the first burst. The first actual time is the actual time the packet experiences inside the first network device, from the packet's first reference time to the time at which the packet is output from the first network device.
- the first time information includes the first reference time of each packet and the time when each packet is output from the first network device.
- the first time information further includes a first theoretical upper limit of time for each packet.
- the first packet includes first time information of the first packet, where the first time information is used to indicate the first remaining processing time of the first packet; the first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process the first packet; the first theoretical time upper limit is the theoretical upper limit of the time that the first packet takes to pass through the network device from the first reference time to the second reference time; the first reference time is the reference time when the first network device releases the first packet to the first queue system, and the second reference time is the reference time when the first packet enters the queue system of the second network device that processes the one or more packets included in the first burst.
- the first time information includes the first reference time of the first packet and the time when the first packet is output from the first network device.
- the first time information further includes a first theoretical upper limit of the time of the first packet.
- a fourth aspect of the embodiments of the present application provides a second network device, where the second network device includes:
- a receiving unit configured to receive a first data stream, where the first data stream includes multiple bursts, a first burst of the multiple bursts includes one or more packets, a third burst of the multiple bursts includes one or more packets, the first burst and the third burst are two adjacent bursts in the first data stream, and the second network device is the last-hop network device that processes the one or more packets included in the first data stream;
- a processing unit configured to determine a third target queue and a fourth target queue from the second queue system of the second network device, add the one or more packets included in the first burst to the third target queue according to the sequence of the one or more packets included in the first burst, and add the one or more packets included in the third burst to the fourth target queue according to the sequence of the one or more packets included in the third burst; and
- a sending unit configured to process the third target queue and the fourth target queue according to the scheduling rules of the third target queue and the fourth target queue.
- the third target queue and the fourth target queue are two adjacent or non-adjacent queues in the second queue system.
- the time interval between the moment when the second network device releases the one or more packets included in the first burst to the second queue system and the moment when the second network device releases the one or more packets included in the third burst to the second queue system is the fourth time interval; the time interval between the opening time of the third target queue and the opening time of the fourth target queue is the fifth time interval; and the fourth time interval is equal to the fifth time interval.
- the receiving unit is further configured to:
- receive a second data stream, where the second data stream includes multiple bursts, a second burst of the multiple bursts includes one or more packets, and the time at which the second data stream reaches the second network device is after the time when the first burst of the first data stream reaches the second network device and before the time when the last burst of the first data stream reaches the second network device;
- the processing unit is further configured to:
- the sending unit is further configured to:
- the first queue group and the second queue group are processed according to the scheduling rules of the multiple queues of the second queue system.
- the processing unit is specifically configured to:
- the first target queue is the queue to which the one or more packets included in the first burst are added in the first network device, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream;
- a third target queue corresponding to the first target queue is determined from the second queue system according to a first mapping relationship, where the first mapping relationship includes a mapping relationship between queues in the first queue system of the first network device and queues in the second queue system.
- the first packet of the first burst includes queue information of the first target queue; the processing unit is specifically configured to:
- the first target queue is determined according to the queue information of the first target queue.
- the third target queue adds packets included in N bursts, the N bursts include the first burst, and each burst in the N bursts corresponds to one data stream.
- the data streams corresponding to different bursts in the N bursts are different; the N bursts correspond to N queue groups, each of the N queue groups corresponds to a priority, and the priorities of different queue groups are different.
- the multiple bursts included in the first data stream have the same number of bits.
- the first burst includes multiple packets of the same size.
- a fifth aspect of an embodiment of the present application provides a network device, where the network device includes a processor configured to execute a program stored in a memory; when the program is executed, the network device is caused to execute the method of the first aspect or any possible design of the first aspect.
- the memory is located outside the network device.
- a sixth aspect of an embodiment of the present application provides a network device, where the network device includes a processor configured to execute a program stored in a memory; when the program is executed, the network device is caused to execute the method of the second aspect or any possible design of the second aspect.
- the memory is located outside the network device.
- a seventh aspect of the embodiments of the present application provides a computer-readable storage medium including computer instructions which, when executed on a computer, cause the computer to execute the method of any possible design of the first aspect or the second aspect.
- an eighth aspect of the embodiments of the present application provides a computer program product including computer instructions which, when run on a computer, cause the computer to execute the method of any possible design of the first aspect or the second aspect.
- a ninth aspect of an embodiment of the present application provides a network device, where the network device includes a processor, a memory, and computer instructions stored on the memory and executable on the processor; when the computer instructions are executed, the network device performs the method of the first aspect or any possible design of the first aspect.
- a tenth aspect of the embodiments of the present application provides a network device, where the network device includes a processor, a memory, and computer instructions stored on the memory and executable on the processor; when the computer instructions are executed, the network device performs the method of the second aspect or any possible design of the second aspect.
- An eleventh aspect of the embodiments of the present application provides a network system, where the network system includes the first network device according to the third aspect and the second network device according to the fourth aspect.
- the embodiments of the present application have the following advantages:
- the first network device receives the first packet at the first moment; the first packet is the first packet of the first burst of the first data stream, and the first burst is one of the multiple bursts included in the first data stream received by the first network device; the first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream; then, the first network device determines the first target queue from the multiple queues included in the first queue system according to the first moment, and adds the one or more packets included in the first burst to the first target queue according to the sequence of the one or more packets included in the first burst; the first network device processes the first target queue according to the scheduling rules of the multiple queues.
- in the embodiments of the present application, the first network device determines the first target queue at the first moment of receiving the first packet of the first burst, and sequentially adds the one or more packets included in the first burst to the first target queue in an enqueuing manner that takes the burst as the enqueuing granularity.
- in this way, the last-hop network device that processes the one or more packets included in the first burst can determine the corresponding third target queue, and then sequentially add the one or more packets included in the first burst to the third target queue.
- enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, so as to ensure a deterministic upper bound of packet delay and zero end-to-end jitter.
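- the burst-granularity enqueuing and the first-hop/last-hop queue mapping described above can be sketched as follows. This is a minimal illustrative sketch, not part of the claimed method: the 1:1 queue mapping table and all names are assumptions introduced only for illustration.

```python
# Assumed 1:1 mapping between first-hop queues and last-hop queues
# (a stand-in for the "first mapping relationship" of the description).
FIRST_TO_LAST_QUEUE_MAP = {0: 0, 1: 1, 2: 2, 3: 3}

def enqueue_burst(queues, target_index, burst_packets):
    """Add all packets of one burst, in order, to a single target queue
    (enqueuing with the burst as the granularity)."""
    for pkt in burst_packets:
        queues[target_index].append(pkt)

first_hop_queues = [[] for _ in range(4)]
last_hop_queues = [[] for _ in range(4)]

burst = ["pkt1", "pkt2", "pkt3"]   # one burst, packets in arrival order
first_target = 1                   # queue chosen from the first moment
enqueue_burst(first_hop_queues, first_target, burst)

# The last-hop device derives its third target queue from the mapping.
third_target = FIRST_TO_LAST_QUEUE_MAP[first_target]
enqueue_burst(last_hop_queues, third_target, burst)
```

Because the whole burst lands in one queue on both devices, packet order inside the burst is preserved end to end, which is what makes the deterministic delay bound and zero jitter possible.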
- FIG. 1 is a schematic diagram of the reasons for the formation of burst accumulation;
- FIG. 2 is a schematic diagram of a system to which an embodiment of the present application can be applied;
- FIG. 3A is a schematic structural block diagram of a router capable of implementing an embodiment of the present application;
- FIG. 3B is a schematic diagram of the opening times of multiple queues of the queue system;
- FIG. 4A is a schematic diagram of an embodiment of a packet processing method according to an embodiment of the present application;
- FIG. 4B is a schematic diagram of another embodiment of the packet processing method according to the embodiment of the present application;
- FIG. 4C is a schematic diagram of another embodiment of the packet processing method according to the embodiment of the present application;
- FIG. 5A is a schematic diagram of a transmission scenario of a first data stream in a first-hop network device and a last-hop network device according to an embodiment of the present application;
- FIG. 5B is a schematic diagram of another transmission scenario of the first data stream in the first-hop network device and the last-hop network device according to the embodiment of the present application;
- FIG. 5C is a schematic diagram of another transmission scenario of the first data stream in the first-hop network device and the last-hop network device according to the embodiment of the present application;
- FIG. 6 is a sequence diagram of the ingress edge device 231 and the network device 232 processing packets;
- FIG. 7 is a sequence diagram of the network device 232 and the network device 233 processing packets;
- FIG. 8 is a schematic diagram of a deterministic delay after a packet is forwarded by a network device according to an embodiment of the present application;
- FIG. 9A is a schematic diagram of a transmission scenario of a first data stream and a second data stream in a first-hop network device and a last-hop network device according to an embodiment of the present application;
- FIG. 9B is a schematic diagram of another transmission scenario of the first data stream and the second data stream in the first-hop network device and the last-hop network device according to the embodiment of the present application;
- FIG. 10 is a schematic diagram of another transmission scenario of the first data stream and the second data stream in the first-hop network device and the last-hop network device according to the embodiment of the present application;
- FIG. 11 is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application;
- FIG. 12 is a schematic diagram of a transmission scenario of a first data stream, a second data stream, and a third data stream in a first-hop network device and a last-hop network device according to an embodiment of the present application;
- FIG. 13 is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application;
- FIG. 14 is a schematic structural diagram of a first network device according to an embodiment of the present application;
- FIG. 15 is a schematic structural diagram of a second network device according to an embodiment of the present application;
- FIG. 16 is a schematic diagram of a network system according to an embodiment of the present application.
- the embodiments of the present application provide a packet processing method and network device, which are used to ensure the deterministic upper bound of the delay and the end-to-end zero jitter of the packet.
- the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
- with the evolution of the network architecture and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
- references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
- appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
- the terms "including", "comprising", "having" and their variants all mean "including but not limited to" unless specifically emphasized otherwise.
- "at least one" means one or more, and "plurality" means two or more.
- "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist at the same time, or B exists alone, where A and B may be singular or plural.
- the character “/” generally indicates that the associated objects are an “or” relationship.
- "at least one item of the following" or similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one item of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
- IP: Internet Protocol.
- Burst accumulation is the root cause of delay uncertainty.
- the burst accumulation is caused by packets of different data streams squeezing against each other.
- FIG. 1 is a schematic diagram of the reasons for the formation of burst accumulation.
- FIG. 2 is a schematic diagram of a system to which an embodiment of the present application is applied.
- the network 200 shown in FIG. 2 may be composed of an edge network 210 , an edge network 220 and a core network 230 .
- edge network 210 includes user equipment 211, and edge network 220 includes user equipment 221.
- Core network 230 includes ingress edge device 231 , network device 232 , network device 233 , network device 234 , and egress edge device 235 . As shown in FIG. 2 , the user equipment 211 may communicate with the user equipment 221 through the core network.
- a device capable of implementing the embodiments of the present application may be a router, a switch, or the like.
- FIG. 3A is a schematic structural block diagram of a router capable of implementing an embodiment of the present application.
- the router 300 includes an upstream board 301 , a switch fabric 302 and a downstream board 303 .
- the upstream board may also be called the upstream interface board.
- Upstream board 301 may include multiple input ports.
- the upstream board can decapsulate the packets received by the input port, and use the forwarding table to find the output port. Once the output port is found (for convenience of description, the found output port is hereinafter referred to as the target output port), the packet is sent to the switch fabric 302 .
- the switch fabric 302 forwards the received packet to the target output port. Specifically, the switch fabric 302 forwards the received packet to the downstream board 303 that includes the target output port. The downstream board may also be referred to as the downstream interface board.
- the downstream board 303 includes a plurality of output ports.
- the downstream board 303 receives the packet from the switch fabric 302.
- the downstream board can perform buffer management and encapsulation processing on the received packet, and then send the packet to the next node through the target output port.
- a router may include multiple upstream boards and/or multiple downstream boards.
- FIG. 4A is a schematic flowchart of a packet processing method provided according to an embodiment of the present application.
- the packet processing method provided by the embodiment of the present application is described below with reference to FIG. 2 and FIG. 4A. It is assumed that the packet processing method in this embodiment of the present application is applied to the core network 230 shown in FIG. 2.
- Ingress edge device 231 may receive multiple data streams.
- the ingress edge device 231 handles each of the multiple data streams in the same manner. It is assumed that the paths of the multiple data streams received by the ingress edge device 231 pass through, in sequence, the ingress edge device 231, the network device 232, the network device 233, the network device 234, and the egress edge device 235. The ingress edge device 231 is the first network device through which the multiple data streams enter the core network 230; therefore, the ingress edge device 231 may also be referred to as the first-hop network device.
- the network device 232 is the second-hop network device
- the network device 233 is the third-hop network device
- the network device 234 is the fourth-hop network device
- the egress edge device 235 is the last-hop network device.
- the average bandwidth reserved for the ith data flow by the output port of each network device in the path is r i .
- the multiple data streams satisfy a traffic model that can be expressed by the following formula 4.1: Gi(t) ≤ ri·t + Di, where t is the time, Gi(t) is the total amount of data of the i-th data stream within time t, ri is the average bandwidth reserved for the i-th data stream, and Di is the maximum burst degree of the i-th data stream.
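- as an illustration of the traffic model above, the following sketch checks that the cumulative data of a stream never exceeds ri·t + Di. The helper name and the sample stream are assumptions for illustration only; this is a simplified conformance check, not a normative definition.

```python
def conforms(arrivals, r, D):
    """Check formula 4.1: cumulative data G(t) <= r*t + D at every arrival.

    arrivals: list of (t, bits) tuples in ascending time order.
    r: average bandwidth reserved for the stream; D: maximum burst degree.
    """
    total = 0
    for t, bits in arrivals:
        total += bits
        if total > r * t + D:
            return False
    return True

# A stream sending 100 bits every 1 ms, checked against r = 100 bits/ms
# and a maximum burst degree D = 200 bits.
stream = [(i, 100) for i in range(1, 11)]
```

Here `conforms(stream, 100, 200)` holds, while a single 500-bit arrival at t=1 would violate the bound, since 500 > 100·1 + 200.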
- the ingress edge device 231 has a queue system that includes a plurality of queues.
- the moment when the network device receives the packet may be referred to as the reference moment of the packet in the network device;
- this moment may be referred to as the reference moment of the packet at the first-hop network device.
- Each of the network device 232, the network device 233, the network device 234, and the egress edge device 235 has a queue system, and the queue system includes multiple queues.
- the network device determines, according to a moment, the queue in the queue system to which the received packet is added. This moment may be referred to as the reference moment of the packet at a non-first-hop network device.
- the theoretical upper limit of time is calculated based on the network calculus theory to obtain the maximum time required for two adjacent network devices to process packets. In other words, the processing time of two adjacent network devices will not exceed the theoretical upper limit of time.
- the theoretical upper limit of time does not include the transmission delay of packets transmitted between two adjacent network devices.
- the theoretical upper limit of the time for a packet to travel from the first network device to the second network device refers to the theoretical upper limit of the time from the reference time of the packet at the first network device to the reference time of the packet at the second network device. The theoretical upper limit of the time for a packet to travel from the second network device to the third network device refers to the theoretical upper limit of the time from the reference time of the packet at the second network device to the reference time of the packet at the third network device, and so on between the reference moments at each pair of adjacent network devices.
- the theoretical upper limit of the time for a packet from the first network device in the network device to the second network device in the network device is referred to as the theoretical upper limit of the time of the first network device.
- the theoretical time upper limit of the packet from the second network device to the third network device is called the theoretical time upper limit of the second network device; the theoretical time upper limits of the other network devices are defined similarly.
- the actual time refers to the actual time that the packet passes through in the network device from the reference time of a certain network device to the time when the packet is output from the network device.
- the first actual time of the first packet refers to the time from the reference time of the first packet at the ingress edge device 231 to the time when the first packet is output from the ingress edge device 231 .
- the second actual time of the first packet refers to the actual time that the first packet passes through the network device 232 from the reference time of the first packet at the network device 232 to the time when the first packet is output from the network device 232 .
- the queuing system of the network devices in the core network 230 is described below.
- the queue opening and packet sending in the queue system both meet the following criteria: a queue is opened at a specified time, and packets are allowed to be sent only after the queue is opened. Multiple queues may be open at the same time, but the queue that was opened first sends the packets added to it first; only after the queue that was opened first has sent its packets is the next opened queue allowed to send the packets added to it.
- the queue system includes M queues, which are queues Q1 to QM, respectively.
- α is the time interval between the opening times of two adjacent queues in the M queues.
- the opening time of queue Q1 is T+α, the opening time of queue Q2 is T+2α, the opening time of queue Q3 is T+3α, and so on; the opening time of queue QM is T+α+Dmax.
- M is equal to (α+Dmax)/α. When a queue meets the closing conditions, it is closed, and that queue is set as the queue with the lowest priority in the queue system. For example, as shown in FIG. 3B, after queue Q1 is closed, the opening time of queue Q1 is reset to T+2α+Dmax. The same applies to the other queues.
- D max should be set in combination with the theoretical upper limit of the time of the network device.
- the theoretical upper limit of the time of the ingress edge device 231 is D 1 max
- the D max of the queue system of the ingress edge device 231 should not be less than D 1 max
- the theoretical upper limit of the time of the network device 232 is D 2 max , so the D max of the queue system of the network device 232 should not be less than D 2 max .
- the queue is closed after both the first condition and the second condition are met.
- the first condition is that the queue is opened for at least the time interval ⁇
- the second condition is that the queue is empty, that is, the packets of the queue have been emptied.
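- the opening-time schedule described above can be sketched as follows, under the stated assumptions: M = (α+Dmax)/α queues, adjacent opening times spaced by α, and a closed queue reopening one full cycle (M·α) later. The function names are illustrative, not from the patent.

```python
def initial_open_times(T, alpha, D_max):
    """Opening times T+alpha, T+2*alpha, ..., T+alpha+D_max for the
    M = (alpha + D_max) / alpha queues of the queue system."""
    M = int((alpha + D_max) / alpha)
    return [T + (i + 1) * alpha for i in range(M)]

def close_queue(open_times, i, alpha):
    """Close queue i: push its next opening time one full cycle forward,
    making it the lowest-priority (latest-opening) queue."""
    M = len(open_times)
    open_times[i] += M * alpha
    return open_times

# T = 0, alpha = 10, D_max = 30 gives M = 4 queues opening at 10, 20, 30, 40.
times = initial_open_times(T=0, alpha=10, D_max=30)
# After Q1 closes, it reopens at T + 2*alpha + D_max = 50, as in FIG. 3B.
close_queue(times, 0, alpha=10)
```

The closing step is only taken once both conditions above hold: the queue has been open for at least α, and it is empty.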
- the queue systems corresponding to the ingress edge device 231 , the network device 232 , the network device 233 , the network device 234 and the egress edge device 235 can be implemented on the upstream board or on the downstream board, which is not limited in this application.
- a unit in a network device for implementing a queue system may be referred to as a queue system unit, and the queue system unit is used to add packets to a corresponding target queue.
- a unit in a network device for actively delaying or staying for a period of time may be called an active delay unit.
- the following describes how the network device in the network processes the received first data stream by taking the first data stream as an example in conjunction with FIG. 4A .
- the first data stream is any one of the multiple data streams received by the ingress edge device 231.
- Steps 401 to 410 in FIG. 4A are described by taking the processing process of the first burst of the first data stream as an example, and the same applies to other bursts of the first data stream.
- FIG. 4A is a schematic diagram of an embodiment of a packet processing method according to an embodiment of the present application.
- the message processing method includes:
- the ingress edge device 231 receives the first packet at the first moment.
- the first packet is the first packet of the first burst of the first data stream, and the first burst is one of multiple bursts included in the first data stream received by the ingress edge device 231 .
- a burst includes one or more messages.
- the ingress edge device 231 is a first-hop network device that processes one or more packets included in the first data flow.
- the ingress edge device 231 receives multiple bursts included in the first data stream, which are respectively burst B1 , burst B2 , burst B3 , and burst B4 .
- the first burst is a burst B1
- the burst B1 includes one or more packets.
- the burst B1 includes three packets, and the packet sizes of the three packets are the same or different. Then, it can be known that the first moment is the moment when the first packet of the burst B1 reaches the ingress edge device 231 .
- the packet sizes included in each burst in the first data stream are the same or different.
- when the sizes of the packets included in each burst are the same, end-to-end jitter caused in the network by varying packet sizes of the data flow can be avoided.
- the ingress edge device 231 determines that the first packet is the first packet of the first burst, and two possible implementations are shown below.
- the ingress edge device 231 negotiates with the sender in advance to determine the arrival time of the packet.
- the ingress edge device 231 monitors the first data stream in real time; when it finds that the packets of the first data stream arrive discontinuously, the ingress edge device 231 can distinguish the different bursts of the first data stream and determine the first packet of each burst.
- the first packet of each burst carries a special identifier, and the special identifier is used to identify the packet as the first packet of the burst.
- the ingress edge device 231 determines the first packet of each burst based on the special identifier.
- the manner in which the ingress edge device 231 determines the first packet of each burst of the first data flow is similar.
- the manner in which the network device 232 , the network device 233 , the network device 234 , and the egress edge device 235 determine the first packet of each burst in the following is also similar, and details will not be described one by one later.
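- the two implementations above for finding the first packet of each burst can be sketched as follows: by a gap in arrival times, or by a special identifier carried in the packet. The field names (`burst_start`, `payload`) and the gap threshold are hypothetical, introduced only for illustration.

```python
def first_packets_by_gap(packets, gap_threshold):
    """packets: list of (arrival_time, payload) in arrival order.
    An inter-arrival gap larger than the threshold starts a new burst."""
    firsts = []
    prev_t = None
    for t, payload in packets:
        if prev_t is None or t - prev_t > gap_threshold:
            firsts.append(payload)
        prev_t = t
    return firsts

def first_packets_by_flag(packets):
    """packets carry an explicit identifier marking the burst's first packet."""
    return [p["payload"] for p in packets if p.get("burst_start")]

# Packets at t = 0, 1 form one burst; the gap to t = 10 starts another.
pkts = [(0, "a"), (1, "b"), (10, "c"), (11, "d")]
```

Either way, the result is the same per-burst boundary that every hop (231 through 235) needs in order to enqueue at burst granularity.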
- the ingress edge device 231 determines a first target queue from a plurality of queues included in the queue system unit of the ingress edge device 231 according to the first moment.
- the first time interval between the opening times of two adjacent queues among the multiple queues included in the queue system unit of the ingress edge device 231 is equal.
- queue x and queue x+1 are two adjacent queues
- queue x+1 and queue x+2 are two adjacent queues
- queue x+2 and queue x+3 are two adjacent queues.
- the time interval between the opening time of queue x and the opening time of queue x+1 is equal to the time interval between the opening time of queue x+1 and the opening time of queue x+2.
- the time interval between the opening time of queue x+1 and the opening time of queue x+2 is equal to the time interval between the opening time of queue x+2 and the opening time of queue x+3.
- the ingress edge device 231 selects the kth queue opened after the first time in the queue system unit of the ingress edge device 231 as the first target queue, where k is an integer greater than or equal to 1.
- for example, when k is equal to 1, the ingress edge device 231 selects queue x, the first queue opened after the first moment in the queue system unit of the ingress edge device 231, as the first target queue.
- when k is equal to 2, the ingress edge device 231 selects queue x+1, the second queue opened after the first moment in the queue system unit of the ingress edge device 231, as the first target queue.
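- the selection of the k-th queue opened after the first moment can be sketched as follows, assuming the opening times T+α, T+2α, ..., cycling over the M queues as described earlier. The function name is illustrative; "after" is taken strictly here, which is an assumption.

```python
import math

def kth_queue_after(t_first, T, alpha, M, k):
    """0-based index of the k-th queue that opens strictly after t_first,
    given opening times T + alpha, T + 2*alpha, ..., cycling over M queues."""
    n = math.floor((t_first - T) / alpha) + 1  # queue Qn opens first after t_first
    return (n + k - 2) % M                     # queue Qn has 0-based index n - 1
```

For example, with T=0, α=10 and M=4, a first packet arriving at t=15 maps to index 1 (queue Q2, opening at 20) when k=1, and to index 2 (queue Q3) when k=2; choosing k>1 leaves headroom before the target queue opens.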
- the ingress edge device 231 adds the one or more packets included in the first burst to the first target queue according to the sequence of the one or more packets included in the first burst.
- the sequence of the one or more packets included in the first burst may be understood as the sequence in which the one or more packets arrive at the ingress edge device 231 .
- the first burst includes Packet 1, Packet 2, and Packet 3.
- Packet 1 arrives at ingress edge device 231 before Packet 2
- Packet 2 arrives at ingress edge device 231 before Packet 3.
- the ingress edge device 231 sequentially adds the packet 1, the packet 2 and the packet 3 to the first target queue.
- the ingress edge device 231 first sends the packet 1, then sends the packet 2, and finally sends the packet 3.
- the ingress edge device 231 sequentially adds one or more packets included in the first burst to the first target queue in a queuing manner in which the burst is the queuing granularity. For example, as shown in FIG. 5A , the first burst is burst B1 , and the ingress edge device 231 sequentially adds one or more packets included in the burst B1 to the queue x in the queue system unit of the ingress edge device 231 .
- each of the one or more packets included in the first burst includes queue information of the first target queue; or, the first packet (the first packet of the first burst) includes the queue information of the first target queue.
- the queue information of the first target queue includes the queue number of the first target queue.
- the first burst is B1
- the queue information of the first target queue includes the queue number x.
- each of the one or more packets included in the first burst includes first time information corresponding to each packet.
- the first time information of each packet is used to indicate the first remaining processing time of each packet.
- the first remaining processing time of each packet is the difference between the first theoretical time upper limit of each packet and the first actual time of each packet.
- the first theoretical time upper limit of each packet is the theoretical upper limit of the time for each packet to pass through the network device from the reference time of each packet at the ingress edge device 231 to the reference time of each packet at the network device 232 .
- the first actual time is the actual time that each packet passes inside the ingress edge device 231 from the reference time of each packet in the ingress edge device 231 to the time when each packet is output from the ingress edge device 231 .
- for the reference time, please refer to the introduction of the aforementioned terms.
- the reference time of the first packet at the ingress edge device 231 is E 1
- the time when the first packet is output from the ingress edge device 231 is t 1 out , that is, the first actual time is the time interval between the reference time E 1 and the time t 1 out .
- the first theoretical upper limit of time is D 1 max .
- the first remaining processing time of the first packet is D 1 max minus the time interval between the reference time E 1 and the time t 1 out .
- the first time information of each packet includes the reference time of each packet at the ingress edge device 231, the time when each packet is output from the ingress edge device 231, and the first theoretical upper limit of the time for each packet.
- the first time information of the first packet is introduced as an example.
- the first time information of the first packet includes the reference time E 1 of the first packet at the ingress edge device 231 , the time t 1 out at which the first packet is output from the ingress edge device 231 , and the first theoretical upper limit of time, namely D 1 max . The same applies to the other packets of the first burst.
- the first packet includes first time information of the first packet.
- the first time information of the first packet is used to indicate the first remaining processing time of the first packet.
- since the network device 232 adopts the queuing method in which the burst is the queuing granularity, the network device 232 only needs to determine the first time information of the first packet of the first burst; that is, the network device 232 determines the target queue based on the first packet of the first burst.
- for the first time information and the related process of determining the target queue, please refer to the following introduction.
- for the content included in the first time information of the first packet, please refer to the foregoing introduction; details are not repeated here.
- D 1 max may be pre-configured in the network device 232 or be a preset default value. In this case, the first time information of each packet, or the first time information of the first packet, may not include D 1 max .
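One possible layout of the first time information, and the remaining-processing-time computation it supports, is sketched below. The field names and the optional-D 1 max handling are assumptions for illustration, not the patent's packet encoding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstTimeInfo:
    """Illustrative carrier of the first time information in a packet."""
    reference_time: float           # E1: reference time at the ingress edge device
    output_time: float              # t1_out: time the packet leaves the ingress edge device
    d1_max: Optional[float] = None  # may be omitted when D1_max is preconfigured downstream

    def remaining_time(self, d1_max_default: float) -> float:
        # first remaining processing time = D1_max - (t1_out - E1)
        d1_max = self.d1_max if self.d1_max is not None else d1_max_default
        return d1_max - (self.output_time - self.reference_time)
```

For example, with E 1 = 2.0, t 1 out = 5.0 and a preconfigured D 1 max of 10.0, the first remaining processing time is 7.0.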
- the ingress edge device 231 sends one or more packets included in the first burst to the network device 232 according to the scheduling rule of the first target queue.
- the ingress edge device 231 sends one or more packets included in the first burst to the network device 232 according to the scheduling rule of the first target queue in the queue system unit of the ingress edge device 231 .
- the scheduling rule of the queue can be known in conjunction with the related introduction of the M queues of the queue system in FIG. 3B .
- FIG. 6 shows a sequence diagram of processing the first packet by the ingress edge device 231 and the network device 232 .
- the first packet arrives at the ingress edge device 231 at time t 1 in , and the first packet enters the queue system unit of the ingress edge device 231 .
- the first packet is output from the ingress edge device 231 at time t 1 out .
- the first packet is input to the network device 232 at time t 2 in .
- the first packet leaves the switching fabric of the network device 232 at time t' 2 in and enters the active delay unit of the network device 232 .
- the network device 232 determines the reference time E 2 of the first packet at the network device 232 according to the first time information of the first packet, selects the target queue from the queue system unit of the network device 232 according to the reference time E 2 , and the first packet is output from the network device 232 at time t 2 out .
- the queue system unit Q and the active delay unit D shown in FIG. 6 and subsequent figures are distinct units only in a logical sense; in terms of the specific device form, the two may be the same physical unit.
- the reference time E 1 of the first packet in the ingress edge device 231 is set as the first time t 1 in when the ingress edge device 231 receives the first packet.
- the first theoretical time upper limit of the first packet is the theoretical upper limit of the time that the first packet spends inside the network devices from the reference time E 1 of the first packet at the ingress edge device 231 to the reference time E 2 of the first packet at the network device 232 .
- the first theoretical time upper limit of the first packet does not include the transmission delay of the first packet from the ingress edge device 231 to the network device 232 .
- the first actual time of the first packet is the time elapsed by the first packet in the ingress edge device 231 from the reference time E 1 of the first packet in the ingress edge device 231 to the time t 1 out .
- the network device 232 sends one or more packets included in the first burst to the network device 233.
- the network device 232 may add the one or more packets included in the first burst to the target queue in a queuing manner in which the burst is the queuing granularity, or may add the one or more packets included in the first burst to the target queue in a queuing manner in which the packet is the queuing granularity.
- the following describes step 405 with reference to FIG. 4B , based on the network device 232 using the burst as the enqueuing granularity.
- this embodiment further includes steps 405 a to 405 b.
- the network device 232 determines a sixth target queue from the queue system unit of the network device 232 according to the first time information of the first packet included in the first packet.
- the first time information of the first packet is used to indicate the first remaining processing time of the first packet.
- the network device 232 may determine the reference time E 2 of the first packet at the network device 232 according to the first remaining processing time of the first packet. Then, the network device 232 selects the sixth target queue according to the reference time E 2 , and the opening time of the sixth target queue is after the reference time E 2 .
- the first time information corresponding to the first packet includes the reference time of the first packet at the ingress edge device 231 and the first theoretical upper limit of the time of the first packet.
- the reference time of the first packet at the ingress edge device 231 is E 1
- the upper limit of the first theoretical time is D 1 max
- the network device 232 can determine, through D 1 max and E 1 , the reference time E 2 of the first packet at the network device 232 .
- as can be seen from FIG. 6 , the network device 232 can determine, according to time t' 2 in and reference time E 2 , the length of time for which the first packet stays in the active delay unit of the network device 232 . Then, the network device 232 can select the sixth target queue according to the reference time E 2 of the first packet at the network device 232 , and the sixth target queue is opened after the reference time E 2 .
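The derivation above can be sketched as follows. The relation E 2 = E 1 + D 1 max is an assumption implied by the definitions in the text (transmission delay excluded); names are illustrative:

```python
def plan_active_delay(e1, d1_max, t_fabric_exit):
    """Derive the downstream reference time E2 and the dwell time the packet
    spends in the active delay unit after leaving the switching fabric at
    t'2_in (here t_fabric_exit). Assumes E2 = E1 + D1_max."""
    e2 = e1 + d1_max
    dwell = max(0.0, e2 - t_fabric_exit)  # wait until the reference time E2
    return e2, dwell
```

For example, a packet with E 1 = 0.0 and D 1 max = 10.0 that leaves the switching fabric at t' 2 in = 7.0 has E 2 = 10.0 and dwells 3.0 time units in the active delay unit.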
- the network device 232 adds the one or more packets included in the first burst to the sixth target queue in the order of the one or more packets included in the first burst.
- the first packet added to the sixth target queue includes second time information of the first packet, and the second time information of the first packet is used to indicate the second remaining processing time of the first packet.
- the second remaining processing time of the first packet is the difference between the second theoretical upper limit of the first packet and the second actual time of the first packet.
- the second theoretical time upper limit of the first packet is the theoretical upper limit of the time for the first packet to pass through the network device from the reference time of the first packet at the network device 232 to the reference time of the first packet at the network device 233 .
- the second actual time is the actual time that the first packet has experienced inside the network device 232 from the reference time of the first packet in the network device 232 to the time when the first packet is output from the network device 232 .
- the network device 232 sequentially adds one or more packets included in the first burst to the sixth target queue in a queueing manner in which the burst is the queueing granularity.
- the second time information of the first packet includes the reference time of the first packet at the network device 232, the time at which the first packet is output from the network device 232, and the second theoretical upper limit of the time of the first packet.
- the reference time of the first packet at the network device 232 is the reference time E 2
- the second theoretical upper limit of the time of the first packet is D 2 max .
- D 2 max is the maximum delay from when the first packet is queued by the network device 232 to the queue system unit of the network device 232 to when the first packet is queued to the queue system unit of the network device 233 .
- D 2 max may be pre-configured in the network device 233 or a preset default value.
- the second time information of the first packet may not include D 2 max .
- if, in step 404 , the one or more packets included in the first burst respectively include the queue information of the first target queue, the one or more packets of the first burst added to the sixth target queue also respectively include the queue information of the first target queue. If, in step 404 , the first packet includes the queue information of the first target queue, the first packet of the first burst added to the sixth target queue includes the queue information of the first target queue.
- step 405 specifically includes step 405c.
- the network device 232 sends one or more packets included in the first burst to the network device 233 according to the scheduling rule of the sixth target queue.
- the network device 232 sends one or more packets included in the first burst to the network device 233 according to the scheduling rule of the sixth target queue in the queue system unit of the network device 232 .
- the scheduling rule of the queue can be understood in conjunction with the foregoing description of the M queues of the queue system in FIG. 3B .
- FIG. 7 shows a sequence diagram of processing the first packet by the network device 232 and the network device 233 .
- the first packet arrives at the network device 232 at time t 2 in , and the first packet leaves the switching fabric of the network device 232 at time t' 2 in and enters the active delay unit of the network device 232 .
- the network device 232 determines the reference time E 2 of the first packet at the network device 232 according to the first time information of the first packet, selects the sixth target queue from the queue system unit of the network device 232 according to the reference time E 2 , and the first packet is output from the network device 232 at time t 2 out .
- the following describes step 405 based on the network device 232 using the packet as the enqueuing granularity with reference to FIG. 4B .
- the enqueuing and sending of packets are described below by taking the first packet of the first burst as an example with reference to FIG. 4C .
- this embodiment further includes steps 405d to 405e.
- the network device 232 determines the sixth target queue from the queue system unit of the network device 232 according to the first time information included in the first packet.
- Step 405d is similar to step 405a.
- Step 405d please refer to the related introduction of the foregoing step 405a.
- the network device 232 adds the first packet to the sixth target queue.
- the first packet added to the sixth target queue includes second time information of the first packet.
- for the related introduction of the second time information, please refer to the aforementioned step 405b.
- the first packet added to the sixth target queue also includes the queue information of the first target queue.
- step 405 specifically includes step 405f.
- Step 405f: the network device 232 sends the first packet to the network device 233 according to the scheduling rule of the sixth target queue.
- the processing flow for other packets in the first burst is also similar.
- the network device 232 determines the target queue corresponding to each packet according to the first time information included in each of the other packets in the first burst, adds each packet to the target queue corresponding to that packet, and sends each packet to the network device 233 according to the scheduling rule of the target queue corresponding to that packet.
- the network device 233 sends one or more packets included in the first burst to the network device 234.
- Step 406 is similar to the foregoing step 405 , please refer to the relevant introduction of the foregoing step 405 for details.
- FIG. 7 shows a sequence diagram of processing the first packet by the network device 232 and the network device 233 .
- the first packet arrives at the network device 233 at time t 3 in , and the first packet leaves the switching fabric of the network device 233 at time t' 3 in and enters the active delay unit of the network device 233 .
- the network device 233 determines the reference time E 3 of the first packet at the network device 233 according to the second time information of the first packet, selects the target queue from the queue system unit of the network device 233 according to the reference time E 3 , and the first packet is output from the network device 233 at time t 3 out .
- the network device 234 sends one or more packets included in the first burst to the egress edge device 235.
- Step 407 is similar to the processing process of step 405 , please refer to the relevant introduction of the foregoing step 405 for details.
- queue mapping and scheduling are performed between the first-hop network device at the ingress and the last-hop network device at the egress, so as to ensure a deterministic upper bound on the packet delay and end-to-end zero jitter. Therefore, the one or more packets included in the first burst sent by the network device 234 to the egress edge device 235 in step 407 may not carry the time information of the packets.
- if the network device 234 receives the one or more packets included in the first burst sent by the network device 233 and these packets respectively include the queue information of the first target queue, the one or more packets included in the first burst sent by the network device 234 to the egress edge device 235 respectively include the queue information of the first target queue.
- if the network device 234 receives the first packet in the first burst sent by the network device 233 and the first packet includes the queue information of the first target queue, the first packet of the first burst sent by the network device 234 to the egress edge device 235 includes the queue information of the first target queue.
- the egress edge device 235 determines a third target queue from the queue system unit of the egress edge device 235.
- the egress edge device 235 determines the first target queue to which the one or more packets included in the first burst were added at the ingress edge device 231; then, the egress edge device 235 determines the third target queue corresponding to the first target queue according to the first mapping relationship.
- the first mapping relationship includes the mapping relationship between the queues of the queue system unit of the ingress edge device 231 and the queues of the queue system unit of the egress edge device 235 .
- the first mapping relationship may be preconfigured in the egress edge device 235, or may be acquired by the egress edge device 235 through data plane learning or control plane configuration, which is not specifically limited in this application. Also, the mapping relationship between the queues of the queue system unit of the ingress edge device 231 and the queues of the queue system unit of the egress edge device 235 may be determined through experimental data.
- the egress edge device 235 may determine the first target queue according to the queue information of the first target queue included in the first packet sent by the network device 234; then, the egress edge device 235 determines, according to the first mapping relationship, the third target queue corresponding to the first target queue.
- the queue information of the first target queue includes the queue number x.
- the egress edge device 235 determines the queue number y corresponding to the queue number x according to the first mapping relationship, that is, the third target queue is the queue with the queue number y in the queue system unit of the egress edge device 235 .
- the first mapping relationship may include the mapping relationship between the queue numbers of the queues of the queue system unit of the ingress edge device 231 and the queue numbers of the queues of the queue system unit of the egress edge device 235 .
- the first mapping relationship can be expressed as:
- the queue with the queue number x in the queue system unit of the ingress edge device 231 corresponds to the queue with the queue number y in the queue system unit of the egress edge device 235 .
- the queue with the queue number x+1 in the queue system unit of the ingress edge device 231 corresponds to the queue with the queue number y+1 in the queue system unit of the egress edge device 235 .
- the queue with the queue number x+2 in the queue system unit of the ingress edge device 231 corresponds to the queue with the queue number y+2 in the queue system unit of the egress edge device 235 .
- the queue with the queue number x+3 in the queue system unit of the ingress edge device 231 corresponds to the queue with the queue number y+3 in the queue system unit of the egress edge device 235 .
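The first mapping relationship enumerated above (queue x+i at the ingress maps to queue y+i at the egress) can be sketched as a function; the modulo wrap-around for a cyclic queue system is an assumption for illustration:

```python
def map_to_egress_queue(ingress_queue, x, y, num_queues):
    """First-mapping-relationship sketch: ingress queue x+i corresponds to
    egress queue y+i, wrapping modulo the number of queues."""
    offset = (ingress_queue - x) % num_queues
    return (y + offset) % num_queues
```

For example, with x = 3, y = 10 and 16 queues per queue system unit, ingress queue 3 maps to egress queue 10 and ingress queue 5 maps to egress queue 12.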
- the egress edge device 235 adds the one or more packets included in the first burst to the third target queue according to the sequence of the one or more packets included in the first burst.
- the egress edge device 235 adds one or more packets included in the first burst to the third target queue in a queuing manner in which the burst is the queuing granularity.
- the sequence of one or more packets included in the first burst please refer to the related introduction of step 403 .
- the first burst is burst B1
- the first target queue is queue x.
- the egress edge device 235 determines that queue x corresponds to queue y according to the first mapping relationship.
- the egress edge device 235 adds the one or more packets included in the first burst to the queue y in the order of the one or more packets included in the first burst.
- the first burst is burst B1
- the first target queue is queue x+1.
- Egress edge device 235 determines that queue x+1 corresponds to queue y+1.
- the egress edge device 235 adds the one or more packets included in the first burst to queue y+1 in the order of the one or more packets included in the first burst.
- the egress edge device 235 sends one or more packets included in the first burst according to the scheduling rule of the third target queue.
- the egress edge device 235 sends the one or more packets included in the first burst according to the scheduling rule of the third target queue in the queue system unit of the egress edge device 235 .
- the scheduling rule of the queue can be understood in conjunction with the above-mentioned description of the M queues in the queue system in FIG. 3B .
- the ingress edge device 231 receives the first packet in the network at the first moment, where the first packet is the first packet of the first burst of the first data stream, the first burst is one burst among the multiple bursts included in the first data stream received by the ingress edge device 231 , and the first burst includes one or more packets.
- the ingress edge device 231 determines the first target queue from the multiple queues included in the queue system unit of the ingress edge device 231 according to the first moment, and adds the one or more packets included in the first burst to the first target queue according to the sequence of the one or more packets; the ingress edge device 231 then processes the first target queue according to the scheduling rules of the multiple queues of the queue system unit of the ingress edge device 231 .
- the ingress edge device 231 determines the first target queue according to the first moment at which the first packet of the first burst is received, and sequentially adds the one or more packets included in the first burst to the first target queue in a queuing manner in which the burst is the queuing granularity.
- the last-hop network device that processes one or more packets included in the first burst may determine the corresponding third target queue, and then sequentially adds the one or more packets included in the first burst to the third target queue .
- enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, which ensures that the shape of the data flow is the same when entering and leaving the network, and guarantees a deterministic upper bound on the delay of the data flow and end-to-end zero jitter.
- the first data stream includes multiple bursts, the second time intervals at which two adjacent bursts in the first data stream arrive at the ingress edge device 231 are equal, and the second time interval is an integer multiple of the first time interval.
- the fifth time interval between the opening times of the target queues of the egress edge device 235 to which the two adjacent bursts are respectively mapped is equal to the second time interval.
- the following describes the process of processing two adjacent bursts (the first burst and the third burst) in the first data stream by the network device in the network in conjunction with steps 401 to 420.
- the processing procedure applies analogously to any other two adjacent bursts.
- the ingress edge device 231 receives the third packet at the third moment.
- the third packet is the first packet of the third burst, the third burst is a burst of the first data stream, and the first burst and the third burst are two adjacent bursts of the first data stream.
- for example, the first burst is the burst B1 of the first data stream, the third burst is the burst B2 of the first data stream, and the burst B1 and the burst B2 are two adjacent bursts in the first data stream.
- alternatively, the first burst is the burst B2 of the first data stream, the third burst is the burst B3 of the first data stream, and the burst B2 and the burst B3 are two adjacent bursts in the first data stream.
- the second time intervals at which two adjacent bursts of the first data stream arrive at the ingress edge device 231 are equal, and the second time interval is an integer multiple of the first time interval.
- the first burst is burst B1
- the third burst is burst B2.
- Burst B1 arrives at the ingress edge device 231 at the first time
- burst B2 arrives at the ingress edge device 231 at the third time.
- the second time interval between the first time and the third time is equal to the first time interval, that is, equal to the gating granularity of the queues of the queue system unit of the ingress edge device 231 (the gating granularity is the open duration of one queue of the queue system unit of the ingress edge device 231 ).
- the first burst is B1 and the third burst is B2.
- Burst B1 arrives at the ingress edge device 231 at the first time
- burst B2 arrives at the ingress edge device 231 at the third time.
- the second time interval between the first time instant and the third time instant is equal to twice the first time interval. That is, equal to twice the gating granularity of the queue of the queue system unit of the ingress edge device 231 .
- the numbers of bits of the multiple bursts included in the first data stream may be the same or different.
- the burst B1 , the burst B2 , the burst B3 and the burst B4 included in the first data stream all include the same number of bits. That is, it can be understood that the burst B1, the burst B2, the burst B3 and the burst B4 respectively contain the same amount of data.
- the numbers of packets included in the multiple bursts may be the same or different.
- burst B1 includes 3 packets
- burst B2 includes 4 packets
- burst B3 includes 3 packets. That is, the number of packets included in the burst B1 is the same as the number of packets included in the burst B3, and the number of packets included in the burst B1 is different from the number of packets included in the burst B2.
- the packets included in each burst have the same size. If the packets included in each burst have the same size, end-to-end jitter of packets caused by different packet sizes can be avoided.
- burst B1 includes 3 packets, and each of the 3 packets includes the same number of bits. In this way, it can be avoided that, during transmission, differences in packet size cause the 3 packets to experience different times on the network devices in the network, resulting in end-to-end jitter of the packets.
- the ingress edge device 231 determines a fifth target queue from a plurality of queues included in the queue system unit of the ingress edge device 231 according to the third moment.
- the ingress edge device 231 selects the kth queue opened after the third time in the queue system unit of the ingress edge device 231 as the fifth target queue.
- the first burst is burst B1
- the third burst is burst B2.
- the ingress edge device 231 determines that burst B1 is mapped to queue x, and determines that burst B2 is mapped to queue x+1.
- the first burst is burst B1
- the third burst is burst B2.
- Ingress edge device 231 determines that burst B1 maps to queue x+1 and burst B2 maps to queue x+2.
- the ingress edge device 231 adds the one or more packets included in the third burst to the fifth target queue in the order of the one or more packets included in the third burst.
- the ingress edge device 231 sends one or more packets included in the third burst to the network device 232 according to the scheduling rule of the fifth target queue.
- Steps 413 to 414 are similar to the foregoing steps 403 to 404 .
- the network device 232 sends one or more packets included in the third burst to the network device 233.
- the network device 233 sends one or more packets included in the third burst to the network device 234.
- the network device 234 sends one or more packets included in the third burst to the egress edge device 235.
- Steps 415 to 417 are similar to the aforementioned steps 405 to 407 .
- the egress edge device 235 determines a fourth target queue from the queue system unit of the egress edge device 235.
- the time interval between the time when the egress edge device 235 releases the one or more packets included in the first burst to the queue system unit of the egress edge device 235 and the time when the egress edge device 235 releases the one or more packets included in the third burst to the queue system unit of the egress edge device 235 is the fourth time interval.
- the first burst is burst B1
- the third burst is burst B2.
- the time when the egress edge device 235 releases the burst B1 to the queue system unit of the egress edge device 235 is T13
- the time when the egress edge device 235 releases the burst B2 to the queue system unit of the egress edge device 235 is T31
- the time interval between time T13 and time T31 is the fourth time interval.
- the time interval between the turn-on time of the third target queue and the turn-on time of the fourth target queue is the fifth time interval.
- the fourth time interval is equal to the fifth time interval.
- burst B1 is mapped to queue y and burst B2 is mapped to queue y+2.
- the opening time of queue y is T12
- the opening time of queue y+2 is T22.
- the time interval between the turn-on time T12 and the turn-on time T22 is the fifth time interval.
- the time interval between time T13 and time T31 is the fourth time interval, and the fourth time interval is equal to the fifth time interval.
- the time interval between the turn-on times of two adjacent queues among the multiple queues included in the queue system unit of the egress edge device 235 is a sixth time interval.
- queue y and queue y+1 are two adjacent queues, and the time interval between the opening time of queue y and the opening time of queue y+1 is the sixth time interval.
- the third target queue and the fourth target queue are adjacent or non-adjacent queues.
- the fifth time interval between the turn-on time of the third target queue and the turn-on time of the fourth target queue is an integer multiple of the sixth time interval.
- the third target queue is queue y
- the fourth target queue is queue y+1
- queue y and queue y+1 are two adjacent queues.
- the fifth time interval between the on time of queue y and the on time of queue y+1 is equal to the sixth time interval.
- the third target queue is queue y
- the fourth target queue is queue y+2
- queue y and queue y+2 are two non-adjacent queues.
- the fifth time interval between the on time of queue y and the on time of queue y+2 is equal to twice the sixth time interval.
- the second time interval and the fifth time interval have the following relationship: the second time interval is equal to the fifth time interval.
- the time when the first packet of burst B1 reaches the ingress edge device 231 is T11, and the time when the first packet of burst B2 reaches the ingress edge device 231 is T21.
- the time when the first packet of the burst B1 leaves the egress edge device 235 is T12, and the time when the first packet of the burst B2 leaves the egress edge device 235 is T22.
- T12-T11 is the time elapsed by the first packet of the burst B1 in the network devices in the network, and T22-T21 is the time elapsed by the first packet of the burst B2 in the network devices in the network.
- T12-T11 should be equal to T22-T21. That is, it is specifically expressed as formula 4.2:
- T12 - T11 = T22 - T21 (Formula 4.2)
- Rearranging Formula 4.2 gives: T12 - T22 = T11 - T21.
- T12-T22 is the fifth time interval
- T11-T21 is the second time interval. It can therefore be determined that the second time interval is equal to the fifth time interval.
- the fifth time interval is equal to an integral multiple of the sixth time interval.
- the second time interval is equal to the fifth time interval, so it can be known that the first time interval is equal to the sixth time interval. That is, the gating granularity of the queues of the queue system unit of the ingress edge device 231 is equal to the gating granularity of the queues of the queue system unit of the egress edge device 235.
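The interval relations above can be checked with a short numerical sketch. All values here (the granularity g, the arrival times, and the constant in-network delay) are illustrative assumptions, not values from the embodiment:

```python
# Sketch of Formula 4.2 with invented numbers: when every first packet
# spends the same time inside the network devices, the fifth interval
# (between egress queue opening times) equals the second interval
# (between ingress arrival times), an integer multiple of the gating
# granularity g (the first/sixth time interval).

g = 10.0                      # assumed gating granularity, e.g. microseconds
T11, T21 = 0.0, 20.0          # arrival times of bursts B1 and B2 (T21-T11 = 2g)
delay = 50.0                  # identical in-network time for both first packets
T12, T22 = T11 + delay, T21 + delay   # departure times from the egress edge

assert T12 - T11 == T22 - T21         # Formula 4.2
assert T12 - T22 == T11 - T21         # rearranged form
assert (T22 - T12) % g == 0           # fifth interval is a multiple of g
```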
- the egress edge device 235 adds the one or more packets included in the third burst to the fourth target queue according to the sequence of the one or more packets included in the third burst.
- the egress edge device 235 sends one or more packets included in the third burst according to the scheduling rule of the fourth target queue.
- Steps 419 to 420 are similar to the foregoing steps 409 to 410 .
- For details, please refer to the related introductions of steps 409 to 410, which will not be repeated here.
- D 1 max is the maximum delay from when the packet is received at the ingress edge device 231 to when the packet is queued to the queue system unit of the network device 232 .
- D 2 max is the maximum delay from when the packet is queued by the network device 232 to the queue system unit of the network device 232 to when the packet is queued to the queue system unit of the network device 233 .
- D 3 max is the maximum delay from when the packet is queued by the network device 233 to the queue system unit of the network device 233 until the packet is queued to the queue system unit of the network device 234 .
- D 4 max is the maximum delay from when the packet is queued by the network device 234 to the queue system unit of the network device 234 to when the packet is queued to the queue system unit of the egress edge device 235 .
- D h is the maximum delay of the packet in the queue system unit and the scheduling unit of the egress edge device 235 .
- the D h of different packets at the egress edge device 235 is the same, so as to ensure that different packets experience the same time from the ingress edge device 231 to the egress edge device 235, so that the end-to-end jitter of the packets across the network devices in the network is zero, thus solving the end-to-end jitter of packets caused by scheduling in the Damper scheme.
- the intermediate nodes may adopt the Damper scheme in combination with the technical solutions of the embodiments of the present application, so as to solve the problem of end-to-end jitter of packets caused by scheduling under the Damper scheme.
- the technical solutions of the embodiments of the present application may also be implemented based on other solutions, as long as those solutions can ensure that the jitter of different packets of the same data flow from the ingress edge device 231 to the network device 234 is zero; this application does not limit this.
- the embodiments of the present application mainly make the D h of different packets at the egress edge device 235 the same, so that the end-to-end jitter of the network devices in the network of the packets is zero.
- the ingress edge device 231 may receive multiple data streams. The following description will be made by taking the ingress edge device 231 receiving the first data stream and the second data stream as an example.
- the ingress edge device 231 receives the second packet at the second time, and determines the second target queue from the multiple queues included in the queue system unit of the ingress edge device 231 according to the second time; it then adds the one or more packets included in the second burst to the second target queue according to the sequence of those packets, and sends the one or more packets included in the second burst according to the scheduling rule of the second target queue.
- the second packet is the first packet of the second burst of the second data stream, the second burst is one of multiple bursts included in the second data stream received by the ingress edge device 231, and the second burst consists of one or more packets.
- the second data stream includes a plurality of bursts, namely, a burst A1, a burst A2, a burst A3, and a burst A4. If the second burst is the burst A2, then the second time is the time when the burst A2 reaches the ingress edge device 231 .
- the third time interval for two adjacent bursts to reach the ingress edge device 231 is equal, and the third time interval is an integer multiple of the first time interval.
- the first time interval is the time interval between the turn-on times of two adjacent queues in the queue system unit of the ingress edge device 231 .
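A minimal sketch of how an arrival time could be mapped to a target queue, assuming queues open cyclically every first time interval. The function name and the cyclic model are hypothetical, not prescribed by the embodiment:

```python
def target_queue(arrival_time: float, cycle_start: float,
                 gate_interval: float, num_queues: int) -> int:
    """Map the arrival time of a burst's first packet to a queue index,
    assuming the queues open one after another every gate_interval and
    wrap around cyclically."""
    slot = int((arrival_time - cycle_start) // gate_interval)
    return slot % num_queues

# Bursts arriving a fixed integer multiple of the gate interval apart
# land in queues whose opening times differ by that same multiple.
assert target_queue(5, 0, 10, 8) == 0    # first burst -> queue 0
assert target_queue(15, 0, 10, 8) == 1   # next burst, one interval later
assert target_queue(25, 0, 10, 8) == 2   # two intervals later
```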
- the second data stream includes burst A1, burst A2, burst A3, and burst A4. Take burst A1, burst A2, and burst A3 as examples for introduction.
- Burst A1 and burst A2 are two adjacent bursts in the second data stream
- burst A2 and burst A3 are two adjacent bursts in the second data stream.
- the time interval between the time burst A1 arrives at the ingress edge device 231 and the time burst A2 arrives at the ingress edge device 231 is equal to the time interval between the time burst A2 arrives at the ingress edge device 231 and the time burst A3 arrives at the ingress edge device 231. Moreover, each of these time intervals is equal to the first time interval. That is, the third time interval is equal to the first time interval.
- the time interval between the moments when the egress edge device 235 releases two adjacent bursts of the second data stream to the queue system unit of the egress edge device 235 is equal to the time interval between the opening times of the target queues to which the two adjacent bursts are respectively mapped on the egress edge device 235.
- burst A1 of the second data stream is mapped to queue y
- burst A2 of the second data stream is mapped to queue y+1.
- the time when the egress edge device 235 releases the burst A1 of the second data stream to the queue system unit of the egress edge device 235 is T1, and the time when the egress edge device 235 releases the burst A2 of the second data stream to the queue system unit is T2.
- the time interval between time T1 and time T2 is equal to the time interval between the opening time of queue y and the opening time of queue y+1.
- the time interval between the turn-on times of the target queues mapped by the egress edge device 235 for the two adjacent bursts is equal to the time interval between the times when the two adjacent bursts arrive at the ingress edge device 231 respectively.
- the time when the burst A1 of the second data stream reaches the ingress edge device 231 and the time when the burst A2 of the second data stream reaches the ingress edge device 231 is a third time interval.
- the burst A1 of the second data stream is mapped to the queue y, and the burst A2 of the second data stream is mapped to the queue y+1.
- the time interval between the on time of queue y and the on time of queue y+1 is equal to the third time interval.
- the relationship between some related time intervals of the second data stream is similar to that of the first data stream.
- For details, please refer to the related description of the first data stream in the embodiment shown in FIG. 4A.
- the first target queue is the second target queue, or the second target queue is located after the first target queue; and the first target queue is the last queue in the queue system unit of the ingress edge device 231, or the first target queue is before the last queue of the queue system unit of the ingress edge device 231.
- At least one burst of the second data flow and one burst of the first data flow are simultaneously added to the same target queue in the queue system unit of the ingress edge device 231.
- the first burst of the first data stream is burst B1
- the first burst A1 of the second data stream arrives at the ingress edge device 231 after the first burst B1 of the first data stream reaches the ingress edge device 231.
- both burst A1 and burst B1 are added to the queue x of the queue system unit of the ingress edge device 231.
- the second burst A2 of the second data stream arrives at the ingress edge device 231 after the second burst B2 of the first data stream reaches the ingress edge device 231.
- both burst A2 and burst B2 are added to the queue x+1 of the queue system unit of the ingress edge device 231.
- the first burst of the first data stream is burst B1
- some packets of the first burst A1 of the second data stream and some packets of burst B1 arrive at the ingress edge device 231 at the same time.
- both burst A1 and burst B1 are added to the queue x of the queue system unit of the ingress edge device 231.
- when the egress edge device 235 receives multiple data streams and bursts of different data streams among the multiple data streams fall into the same target queue of the queue system unit in the egress edge device 235 (the bursts of those different data streams fell into the same target queue of the queue system unit in the ingress edge device 231), the egress edge device 235 can select the corresponding queue group for each data stream and process it through the scheduling rules of that queue group.
- the following describes the multiple queue groups included in the queue system unit of the egress edge device 235 and the priorities of the queue groups.
- the multiple queues included in each queue group are consistent in working principle and setting mechanism with the multiple queues included in the queue system described in FIG. 3B, and different queue groups have different priorities.
- the egress edge device 235 includes a first queue group and a second queue group.
- the priority of the first queue group being higher than that of the second queue group means: of the two queues with the same queue number in the first queue group and the second queue group (that is, the two queues opened at the same time), the queue in the first queue group has the higher priority.
- the priority of the queue y of the first queue group is higher than that of the queue y of the second queue group; the queue y of the first queue group and the queue y of the second queue group are opened at the same time, but the packets in the queue y of the second queue group start to be sent only after the packets in the queue y of the first queue group are emptied.
- the packet sending modes of the queues with the same queue numbers in the first queue group and the second queue group are also similar, and will not be described one by one here.
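The strict-priority relation between same-numbered queues of different queue groups can be sketched as follows. The deque model and all names are illustrative assumptions, not part of the embodiment:

```python
from collections import deque

def drain_same_numbered_queues(groups):
    """groups is ordered from the highest-priority queue group to the
    lowest; each deque models the queue with the same queue number
    (i.e. the queues opened at the same time). A queue of a
    lower-priority group starts sending only after every
    higher-priority queue is emptied."""
    sent = []
    for queue in groups:
        while queue:
            sent.append(queue.popleft())
    return sent

# Queue y of the first (higher-priority) group drains before queue y of
# the second group, even though both open at the same time.
first_group_y = deque(["B1-pkt1", "B1-pkt2"])
second_group_y = deque(["A1-pkt1"])
assert drain_same_numbered_queues([first_group_y, second_group_y]) == [
    "B1-pkt1", "B1-pkt2", "A1-pkt1"]
```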
- the second burst of the second data flow and the first burst of the first data flow are both added to the first target queue of the ingress edge device 231 as an example for introduction, that is, the second target queue is the first target queue.
- the embodiment shown in FIG. 11 is only an example, and the second burst of the second data stream may also be added to the same target queue as other bursts of the first data stream.
- both the second burst of the second data flow and the third burst of the first data flow are added to the fifth target queue of the ingress edge device 231 , that is, the second target queue is the fifth target queue.
- FIG. 11 only introduces the case where the second burst of the second data stream and the first burst of the first data stream are both added to the same target queue. In practical applications, two or more bursts of the second data stream and bursts of the first data flow may be simultaneously added to the same target queue in the queue system unit of the ingress edge device 231, which is not specifically limited in this application.
- FIG. 11 is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
- the message processing method includes:
- the ingress edge device 231 receives the first packet at the first moment.
- the ingress edge device 231 determines the first target queue from the queue system unit of the ingress edge device 231 according to the first moment.
- the ingress edge device 231 adds the one or more packets included in the first burst to the first target queue according to the sequence of the one or more packets included in the first burst.
- Steps 1101 to 1103 are similar to steps 401 to 403 in the embodiment shown in FIG. 4A .
- For details, please refer to the related introductions of steps 401 to 403 in the embodiment shown in FIG. 4A.
- each of the one or more packets included in the first burst added to the first target queue further includes a queue group number used to instruct the egress edge device 235 of the first queue group to which the queue to which the one or more packets included in the first burst are added belongs; or, the first packet added to the first target queue includes the queue group number of the first queue group.
- the ingress edge device 231 determines the first queue group corresponding to the first data flow according to the second mapping relationship, and carries the queue group number of the first queue group in each packet of the first burst or in the first packet of the first burst.
- the second mapping relationship includes the mapping relationship between the queue groups in the queue system unit of the egress edge device 235 and the data flows; each data flow corresponds to a queue group, and each queue group corresponds to a priority.
- the second mapping relationship may be preconfigured in the ingress edge device 231, or may be acquired by the ingress edge device 231 through data plane learning or control plane configuration, which is not specifically limited in this application.
- the priority of the data flow may be determined according to factors such as the user level or the importance of the service corresponding to the data flow. For example, the higher the user level of a user, the higher the priority of the user's data flow. The higher the importance of the service of a certain data flow, the higher the priority of the data flow. The higher the priority of the data flow, the higher the priority of the queue group corresponding to the data flow.
- the ingress edge device 231 can identify the data flow type to which different bursts belong through the quintuple of the packet.
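A possible shape for the second mapping relationship, with the packet five-tuple as the flow key. The table entries, group names, and priority values here are invented for illustration; the embodiment does not prescribe a data structure:

```python
# Hypothetical second mapping relationship: data flow -> (queue group,
# priority), keyed by the packet five-tuple. Lower numbers mean higher
# priority in this sketch.
second_mapping = {
    # (src_ip, dst_ip, src_port, dst_port, protocol)
    ("10.0.0.1", "10.0.1.1", 5000, 6000, "udp"): ("first queue group", 0),
    ("10.0.0.2", "10.0.1.1", 5001, 6000, "udp"): ("second queue group", 1),
}

def queue_group_for(five_tuple):
    """Identify the data flow by its five-tuple and return its queue group."""
    group, _priority = second_mapping[five_tuple]
    return group

assert queue_group_for(
    ("10.0.0.1", "10.0.1.1", 5000, 6000, "udp")) == "first queue group"
```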
- the ingress edge device 231 receives the second packet at the second moment.
- the ingress edge device 231 determines the first target queue from multiple queues included in the queue system unit of the ingress edge device 231 according to the second moment.
- the ingress edge device 231 adds the one or more packets included in the second burst to the first target queue according to the sequence of the one or more packets included in the second burst.
- Steps 1104 to 1105 are similar to steps 401 to 403 in the embodiment shown in FIG. 4A .
- For details, please refer to the related introductions of steps 401 to 403 in the embodiment shown in FIG. 4A.
- the first burst is burst B1
- the second burst is burst A1.
- the second time point is after the first time point.
- Ingress edge device 231 determines that burst B1 maps to queue x in the queue system unit of ingress edge device 231 , and determines that burst A1 maps to queue x in the queue system unit of ingress edge device 231 . Since the burst B1 arrives at the ingress edge device 231 before the burst A1, the ingress edge device 231 may add the burst B1 to the queue x first, and then add the burst A1 to the queue x, as shown in FIG. 9A .
- the ingress edge device 231 first sends the one or more packets included in the burst B1 to the network device 232 in the order of the one or more packets included in the burst B1; after the one or more packets included in the burst B1 are sent, the ingress edge device 231 sends the one or more packets included in the burst A1 to the network device 232 in the order of the one or more packets included in the burst A1.
- the order in which the ingress edge device 231 adds the burst A1 and the burst B1 to the queue x may not be limited. For example, the ingress edge device 231 may add one or more packets included in burst A1 to queue x first, and then add one or more packets included in burst B1 to queue x.
- the first burst is a burst B2, and the second burst is a burst A1. It can be seen from FIG. 10 that the second time instant is before the first time instant.
- Ingress edge device 231 determines that burst A1 is mapped to queue x+1 in the queue system unit of ingress edge device 231 , and determines that burst B2 is mapped to queue x+1 in the queue system unit of ingress edge device 231 .
- the ingress edge device 231 may first add the one or more packets included in the burst A1 to the queue x+1 in the order of the packets, and then add the one or more packets included in the burst B2 in the order of the packets, as specifically shown in FIG. 10.
- the order in which the ingress edge device 231 adds the burst A1 and the burst B2 to the queue x+1 may not be limited. For example, the ingress edge device 231 may first add one or more packets included in burst B2 to queue x+1, and then add one or more packets included in burst A1 to queue x+1.
- each of the one or more packets included in the second burst added to the first target queue also includes a queue group number used to instruct the egress edge device 235 of the second queue group to which the queue to which the one or more packets included in the second burst are added belongs; or, the second packet added to the first target queue includes the queue group number of the second queue group.
- the priority of the first queue group in step 1103 is higher than the priority of the second queue group.
- the priority of the first data stream is higher than that of the second data stream, and the priority of the first queue group is higher than that of the second queue group, so the first data stream can be transmitted through the queues of the first queue group, and the second data stream can be transmitted through the queues of the second queue group.
- the ingress edge device 231 determines the second queue group corresponding to the second data flow according to the second mapping relationship, and carries the queue group number of the second queue group in each packet of the second burst or in the first packet of the second burst.
- the second mapping relationship includes the mapping relationship between queue groups and data flows in the queue system unit of the egress edge device 235.
- the ingress edge device 231 sends one or more packets included in the first burst and one or more packets included in the second burst to the network device 232 according to the scheduling rule of the first target queue.
- the sum of the number of bits of the first burst and the number of bits of the second burst is less than or equal to the number of bits that the first target queue can accommodate.
- the number of bits that the first target queue can hold is equal to the port rate of the ingress edge device 231 multiplied by the time interval between the opening time of the first target queue and the ending time of the first target queue.
- burst B1 corresponds to the first data stream
- burst A1 corresponds to the second data stream.
- the sum of the number of bits of burst B1 and the number of bits of burst A1 should be less than the number of bits that the ingress edge device 231 can transmit in the time interval between the opening time of the first target queue and the ending time of the first target queue.
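The capacity condition above can be expressed as a small admission check. Integer bits and microseconds are used to keep the arithmetic exact, and all names and values are illustrative:

```python
def queue_capacity_bits(port_rate_bits_per_us: int,
                        open_us: int, close_us: int) -> int:
    """Bits a target queue can accommodate: the port rate multiplied by
    the interval between the queue's opening time and ending time."""
    return port_rate_bits_per_us * (close_us - open_us)

def admits(burst_sizes_bits, port_rate_bits_per_us, open_us, close_us):
    """Bursts may share one target queue only if their total size fits
    in what the port can transmit while the queue is open."""
    return sum(burst_sizes_bits) <= queue_capacity_bits(
        port_rate_bits_per_us, open_us, close_us)

# Assumed 10 Gbit/s port = 10_000 bits/us; a queue open for 10 us can
# hold 100_000 bits.
assert queue_capacity_bits(10_000, 0, 10) == 100_000
assert admits([60_000, 30_000], 10_000, 0, 10)      # B1 + A1 fit
assert not admits([60_000, 50_000], 10_000, 0, 10)  # would exceed capacity
```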
- the network device 232 sends one or more packets included in the first burst and one or more packets included in the second burst to the network device 233.
- the network device 233 sends one or more packets included in the first burst and one or more packets included in the second burst to the network device 234.
- the network device 234 sends one or more packets included in the first burst and one or more packets included in the second burst to the egress edge device 235.
- Steps 1107 to 1110 are similar to steps 404 to 407 in the embodiment shown in FIG. 4A .
- For details, please refer to the related introductions of steps 404 to 407 in the embodiment shown in FIG. 4A.
- each of the one or more packets included in the first burst includes a queue group number used to instruct the egress edge device 235 of the first queue group to which the queue to which the one or more packets included in the first burst are added belongs, and each of the one or more packets included in the second burst includes the queue group number of the second queue group; or, the first packet includes the queue group number of the first queue group to which the queue to which the egress edge device 235 adds the one or more packets included in the first burst belongs, and the second packet includes the queue group number of the second queue group to which the queue to which the egress edge device 235 adds the one or more packets included in the second burst belongs. In either case, when the intermediate nodes (network device 232, network device 233, and network device 234) transmit the first burst and the second burst, the transmitted packets carry these queue group numbers.
- the egress edge device 235 determines the first queue group from the queue system unit of the egress edge device 235.
- the egress edge device 235 receives one or more packets included in the first burst sent by the network device 234 .
- the one or more packets included in the first burst respectively include the queue group number of the first queue group, or the first packet includes the queue group number of the first queue group.
- the egress edge device 235 determines the first queue group from the queue system unit of the egress edge device 235 according to the queue group number.
- the egress edge device 235 determines the first queue group corresponding to the first data stream from the queue system unit of the egress edge device 235 according to the second mapping relationship.
- the second mapping relationship may be preconfigured in the egress edge device 235, or may be acquired by the egress edge device 235 through data plane learning or control plane configuration, which is not specifically limited in this application.
- the egress edge device 235 can identify the data flow to which different bursts belong through the quintuple of the packet.
- the egress edge device 235 determines a second queue group from the queue system unit of the egress edge device 235.
- Step 1112 is similar to the aforementioned step 1111 .
- For details, please refer to the related introduction of step 1111, which will not be repeated here.
- the egress edge device 235 determines a third target queue from the queue system unit of the egress edge device 235.
- Step 1113 is similar to step 408 in the foregoing embodiment shown in FIG. 4A .
- For details, please refer to the related introduction of step 408 in the foregoing embodiment shown in FIG. 4A.
- the egress edge device 235 adds the one or more packets included in the first burst to the third target queue of the first queue group according to the sequence of the one or more packets included in the first burst.
- the egress edge device 235 adds the one or more packets included in the second burst to the third target queue of the second queue group according to the sequence of the one or more packets included in the second burst.
- the egress edge device 235 sends one or more packets included in the first burst according to the scheduling rule of the third target queue of the first queue group.
- the egress edge device 235 sends one or more packets included in the second burst according to the scheduling rule of the third target queue of the second queue group.
- Steps 1116 and 1117 are described below in conjunction with specific examples.
- the first burst is the burst B1 of the first data stream
- the second burst is the burst A1 of the second data stream.
- the queue y of the first queue group and the queue y of the second queue group are opened at the same time. Since the priority of the first queue group is higher than that of the second queue group, the egress edge device 235 first sends the one or more packets included in the first burst from the queue y of the first queue group, and then sends the one or more packets included in the second burst from the queue y of the second queue group.
- multiple bursts of the first data stream have the same number of bits.
- the first data stream includes burst B1, burst B2, burst B3, and burst B4.
- the second data stream includes burst A1, burst A2, burst A3, and burst A4.
- Burst B2 falls into queue y+1 of the first queue group, and burst A1 falls into queue y+1 of the second queue group.
- Burst B3 falls into queue y+2 of the first queue group, and burst A2 falls into queue y+2 of the second queue group.
- Burst B4 falls into queue y+3 of the first queue group, and burst A3 falls into queue y+3 of the second queue group.
- the time interval between the time when the first packet of burst A1 leaves the egress edge device 235 and the time when the first packet of burst A2 leaves the egress edge device 235 is equal to the time interval between the time when the first packet of burst A2 leaves the egress edge device 235 and the time when the first packet of burst A3 leaves the egress edge device 235.
- the port rate of the egress edge device 235 is fixed, so the numbers of bits included in the burst B1, burst B2, burst B3, and burst B4 of the first data stream should be the same, so as to ensure that duration 1, duration 2, duration 3, and duration 4 are all equal, to satisfy condition 1 above.
- the duration 1 is the sending duration occupied by the egress edge device 235 for sending the burst B1 from the queue y of the first queue group.
- the duration 2 is the sending duration occupied by the egress edge device 235 for sending the burst B2 from the queue y+1 of the first queue group.
- the duration 3 is the sending duration occupied by the egress edge device 235 for sending the burst B3 from the queue y+2 of the first queue group.
- the duration 4 is the sending duration occupied by the egress edge device 235 for sending the burst B4 from the queue y+3 of the first queue group.
- the multiple bursts of the first data stream comprise the same number of bits.
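The requirement that durations 1 through 4 be equal follows directly from the fixed port rate, as this sketch shows. The burst sizes and the rate are invented values:

```python
def send_duration_us(burst_bits: int, port_rate_bits_per_us: int) -> float:
    """Sending duration a fixed-rate egress port needs for one burst."""
    return burst_bits / port_rate_bits_per_us

rate = 10_000  # assumed 10 Gbit/s port, expressed in bits per microsecond
bursts = {"B1": 80_000, "B2": 80_000, "B3": 80_000, "B4": 80_000}
durations = {name: send_duration_us(bits, rate) for name, bits in bursts.items()}

# Equal numbers of bits on a fixed-rate port give equal durations 1..4.
assert len(set(durations.values())) == 1
```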
- the egress edge device 235 can select a corresponding queue group for each data flow, and each queue group corresponds to a priority; the egress edge device 235 then processes each data flow through the scheduling rule of its corresponding queue group, so as to achieve deterministic delay and zero end-to-end jitter for the packets of different data flows across the network devices in the network.
- the ingress edge device 231 can receive multiple data streams, and the time interval between two adjacent bursts of each data stream reaching the ingress edge device 231 is equal to the time interval between the opening times of the target queues to which the two adjacent bursts are respectively mapped at the egress edge device 235.
- the packets included in N bursts are added to the first target queue, the N bursts include the first burst and the second burst, each of the N bursts corresponds to one data stream, and the data streams corresponding to different bursts among the N bursts are different; the sum of the numbers of bits of the N bursts is less than or equal to the number of bits that can be accommodated by the first target queue, and N is an integer greater than or equal to 2.
- the number of bits that the first target queue can hold is equal to the port rate of the ingress edge device 231 multiplied by the time interval between the opening time of the first target queue and the ending time of the first target queue.
- the number of bits included in multiple bursts included in each of the N data streams corresponding to the N bursts is the same.
- N bursts correspond to N queue groups
- each of the N queue groups corresponds to a priority
- different queue groups in the N queue groups correspond to different priorities.
- the first target queue is the queue x+1 of the ingress edge device 231 .
- the first target queue joins three bursts, namely, burst A1, burst B2, and burst C1.
- Burst B2 corresponds to the first data stream
- burst A1 corresponds to the second data stream
- burst C1 corresponds to the third data stream.
- the sum of the number of bits in burst A1, the number of bits in burst B2, and the number of bits in burst C1 is less than or equal to the number of bits that the ingress edge device 231 can transmit in the first time interval (from the opening time of queue x+1 to the ending time of queue x+1).
- the priority of the first data stream is higher than the priority of the second data stream, and the priority of the second data stream is higher than that of the third data stream.
- the priority of the first queue group is higher than the priority of the second queue group, and the priority of the second queue group is higher than the priority of the third queue group.
- the egress edge device 235 maps the burst B2 to the queue y+1 of the first queue group, and sends one or more packets included in the burst B2 according to the scheduling rule of the queue y+1 of the first queue group;
- the egress edge device 235 maps the burst A1 to the queue y+1 of the second queue group, and sends one or more packets included in the burst A1 according to the scheduling rule of the queue y+1 of the second queue group;
- the egress edge device 235 maps the burst C1 to the queue y+1 of the third queue group, and sends one or more packets included in the burst C1 according to the scheduling rule of the queue y+1 of the third queue group.
- FIG. 13 is a schematic flowchart of a packet processing method provided according to an embodiment of the present application.
- the first network device receives the first packet in the network at the first moment.
- the first packet is the first packet of the first burst of the first data stream, and the first burst is one of multiple bursts included in the first data stream received by the first network device.
- the first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream.
- the first network device determines a first target queue from a plurality of queues included in the first queue system of the first network device according to the first moment.
- first time intervals between the opening times of two adjacent queues among the multiple queues included in the first queue system are equal.
- the second time interval between two adjacent bursts in the multiple bursts included in the first data stream reaching the first network device is equal, and the second time interval is an integer multiple of the first time interval.
- the plurality of bursts included in the first data stream have the same number of bits. Multiple packets included in the first burst have the same size.
- the first network device adds the one or more packets included in the first burst to the first target queue according to the sequence of the one or more packets included in the first burst.
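The enqueue logic above can be sketched as follows. This is a minimal illustration only: the cyclic slot formula, the helper names, and the parameter values are assumptions for the sake of example, since the embodiment does not fix a concrete formula for mapping the first moment to a queue.

```python
def select_target_queue(arrival_time, base_time, gate_interval, num_queues):
    """Map the arrival time of a burst's first packet to a target queue.

    Assumes queues open cyclically: queue i opens at
    base_time + i * gate_interval (hypothetical layout).
    """
    slot = int((arrival_time - base_time) // gate_interval)
    return slot % num_queues

def enqueue_burst(queues, queue_id, burst_packets):
    """Add the packets of one burst to the target queue in arrival order."""
    queues[queue_id].extend(burst_packets)

# Example: gate interval of 10 time units, 8 queues, and a burst whose
# first packet arrives 35 time units after the base time.
queues = {i: [] for i in range(8)}
q = select_target_queue(arrival_time=35, base_time=0, gate_interval=10, num_queues=8)
enqueue_burst(queues, q, ["pkt1", "pkt2", "pkt3"])
```

The whole burst follows the queue chosen for its first packet, which is what keeps a burst intact inside one gate window.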
- one or more packets included in the first burst respectively include queue information of the first target queue; or, the first packet includes queue information of the first target queue.
- each of the one or more packets included in the first burst includes first time information of that packet, where the first time information is used to indicate the first remaining processing time of that packet, and the first remaining processing time of each packet is the difference between the first theoretical time upper limit of the packet and the first actual time of the packet.
- the first theoretical time upper limit of each packet is the theoretical upper limit of the time the packet spends inside the network device between the first reference time and the second reference time.
- the first reference time is the reference time at which the first network device releases each packet to the first queue system, or the time at which the first network device receives each packet; the second reference time is the reference time at which each packet enters the queue system of the second network device that processes each packet.
- the first reference time may be referred to as the reference time of each packet in the first network device, and the second reference time may be referred to as the reference time of each packet in the second network device that processes each packet.
- the first actual time of each packet is the actual time the packet spends inside the first network device, from the first reference time of the packet in the first network device to the moment the packet is output from the first network device.
- the first time information of each packet includes the reference time of each packet in the first network device, the time at which each packet is output from the first network device, and the first theoretical time upper limit of each packet.
- the first packet includes first time information of the first packet.
- the first time information of the first packet is used to indicate the first remaining processing time of the first packet.
- the first remaining processing time of the first packet is the difference between the first theoretical time upper limit of the first packet and the first actual time of the first packet.
- the first theoretical time upper limit of the first packet is the theoretical upper limit of the time the first packet spends inside the network device between the first reference time and the second reference time.
- the first reference time is the reference time at which the first network device releases the first packet to the first queue system, or the time at which the first network device receives the first packet; the second reference time is the reference time at which the first packet enters the queue system of the second network device that processes the first packet.
- the first reference time may be referred to as the reference time of the first packet in the first network device, and the second reference time may be referred to as the reference time of the first packet in the second network device that processes the first packet.
- the first actual time of the first packet is the actual time the first packet spends inside the first network device, from the first reference time of the first packet at the first network device to the time at which the first packet is output from the first network device.
- the first time information of the first packet includes the first reference time of the first packet at the first network device, the time at which the first packet is output from the first network device, and the first theoretical time upper limit of the first packet.
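The bookkeeping behind the first time information can be sketched as follows. The function and variable names are assumptions for illustration; the description only defines the relationship: remaining time = theoretical upper limit minus actual time spent inside the device.

```python
def first_remaining_time(theoretical_upper_limit, reference_time, output_time):
    """Remaining processing time carried in a packet's first time information.

    actual_time = time the packet spent inside the first network device,
                  from its first reference time to the moment it is output.
    remaining   = theoretical upper limit - actual time.
    """
    actual_time = output_time - reference_time
    return theoretical_upper_limit - actual_time

# A packet released to the queue system at t=100 and output at t=140,
# with a theoretical per-hop upper limit of 60 time units:
remaining = first_remaining_time(60, 100, 140)  # 60 - 40 = 20
```

A downstream device can use this remaining time to recover the packet's reference moment at its own queue system, which is how the Damper-style compensation described later operates.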
- the first network device processes the first target queue according to the scheduling rules of the multiple queues included in the first queue system.
- Figure 13 also includes steps 1304a to 1304d.
- the first network device receives the second packet in the network at the second moment.
- the second packet is the first packet of a second burst of a second data stream.
- the second burst is one of multiple bursts included in the second data stream received by the first network device.
- the second burst includes one or more packets.
- the third time intervals at which two adjacent bursts among the multiple bursts included in the second data stream arrive at the first network device are equal, and the third time interval is an integer multiple of the first time interval.
- the first network device determines a second target queue from a plurality of queues included in the first queue system according to the second moment;
- the first network device adds the one or more packets included in the second burst to the second target queue in the order of the one or more packets included in the second burst.
- the first network device processes the second target queue according to the scheduling rules of the multiple queues included in the first queue system.
- the first target queue holds packets from N bursts, the N bursts include the first burst, each of the N bursts corresponds to one data stream, and different bursts among the N bursts correspond to different data streams.
- the number of bits in the N bursts is less than the number of bits the first target queue can accommodate.
- the number of bits the first target queue can accommodate equals the port rate of the first network device multiplied by the time interval between the opening time of the first target queue and the closing time of the first target queue.
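The capacity constraint above can be expressed directly. This is a sketch with assumed units (bits per second and seconds); the helper names are not part of the claimed method.

```python
def queue_capacity_bits(port_rate_bps, open_time_s, close_time_s):
    """Bits the target queue can drain while it is open:
    port rate multiplied by the open interval."""
    return port_rate_bps * (close_time_s - open_time_s)

def bursts_fit(burst_bit_counts, port_rate_bps, open_time_s, close_time_s):
    """The N bursts may share the queue only if their total bit count
    stays below the queue's capacity."""
    total_bits = sum(burst_bit_counts)
    return total_bits < queue_capacity_bits(port_rate_bps, open_time_s, close_time_s)

# 10 Gbit/s port, queue open for 10 us -> roughly 100 000 bits of capacity:
# two bursts of 40 000 and 30 000 bits fit, three bursts of 40 000 do not.
assert bursts_fit([40_000, 30_000], 10e9, 0.0, 10e-6)
assert not bursts_fit([40_000, 40_000, 40_000], 10e9, 0.0, 10e-6)
```

Keeping the total under capacity is what guarantees the queue is fully drained before the next gate opens, so later queues are never delayed.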
- the above steps 1301 to 1304 illustrate the process of processing the first burst of the first data stream by the first network device.
- the processing procedures for other bursts of the first data stream are similar, and are not described one by one here.
- the first network device is a first-hop network device that processes the first data stream.
- the first data stream passes through the first network device, then passes through the intermediate node device, and is finally transmitted to the last-hop network device that processes the first data stream, that is, the second network device.
- the second network device receives the first data stream.
- the first data stream includes one or more bursts, a first burst of the plurality of bursts includes one or more packets, and a third burst of the plurality of bursts includes one or more packets,
- the first burst and the third burst are two adjacent bursts in the first data stream.
- the second network device is the last-hop network device that processes the one or more packets included in the first data stream.
- the second network device determines a third target queue and a fourth target queue from the second queue system of the second network device.
- the sixth time interval between the opening times of two adjacent queues in the second queue system is equal to the first time interval, where the first time interval is the time interval between the opening times of two adjacent queues among the multiple queues included in the first queue system.
- the third target queue and the fourth target queue are adjacent or non-adjacent queues.
- the time interval between the moment at which the second network device releases the one or more packets included in the first burst to the second queue system and the moment at which the second network device releases the one or more packets included in the third burst to the second queue system is the fourth time interval.
- the time interval between the turn-on time of the third target queue and the turn-on time of the fourth target queue is the fifth time interval.
- the fourth time interval is equal to the fifth time interval, and the fifth time interval is equal to the second time interval, where the second time interval is the time interval at which two adjacent bursts among the multiple bursts included in the first data stream arrive at the first network device.
- the second network device determining the third target queue from the second queue system of the second network device includes: the second network device determines the first target queue, where the first target queue is the queue in the first network device to which the one or more packets included in the first burst were added; then, the second network device determines, according to a first mapping relationship, the third target queue corresponding to the first target queue from the second queue system, where the first mapping relationship includes the mapping between queues in the first queue system of the first network device and queues in the second queue system.
- the first packet of the first burst includes queue information of the first target queue; the second network device determining the first target queue includes: the second network device determines the first target queue according to the queue information of the first target queue.
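The mapping-based determination above can be sketched as a dictionary lookup. The dictionary layout and queue numbers are hypothetical; the description only requires that the first mapping relationship relate queues of the first queue system to queues of the second queue system.

```python
# First mapping relationship: queue number in the first-hop device's
# queue system -> queue number in the last-hop device's queue system.
# The concrete numbering here is an assumption for illustration.
first_mapping = {0: 4, 1: 5, 2: 6, 3: 7}

def third_target_queue(packet_queue_info, mapping):
    """The last-hop device reads the first target queue's number from the
    packet's queue information, then looks up the corresponding queue
    in its own (second) queue system."""
    first_target = packet_queue_info["queue_number"]
    return mapping[first_target]

packet = {"queue_number": 2}  # carried in the burst's first packet
assert third_target_queue(packet, first_mapping) == 6
```

Because both devices apply the same fixed mapping, a burst that entered queue 2 at the first hop always lands in queue 6 at the last hop, preserving the stream's shape end to end.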
- the second network device adds the one or more packets included in the first burst to the third target queue in the order of those packets, and adds the one or more packets included in the third burst to the fourth target queue in the order of those packets.
- the second network device processes the third target queue and the fourth target queue according to the scheduling rule of the third target queue and the scheduling rule of the fourth target queue.
- the plurality of bursts included in the first data stream have the same number of bits. Multiple packets included in the first burst have the same size.
- the third target queue holds packets from N bursts, the N bursts include the first burst, each of the N bursts corresponds to one data stream, and different bursts among the N bursts correspond to different data streams; the N bursts correspond to N queue groups, each of the N queue groups corresponds to a priority, and different queue groups have different priorities.
- steps 1305 to 1308 described above are replaced with steps 1309 to 1312 .
- the second network device receives the second data stream.
- the second data stream includes one or more bursts, a second burst among the multiple bursts includes one or more packets, and the moment at which the second data stream arrives at the second network device is after the moment at which the first burst of the first data stream arrives at the second network device and before the moment at which the last burst of the first data stream arrives at the second network device.
- the second network device selects a first queue group from the second queue system and adds the one or more bursts included in the first data stream to the first queue group in the order of those bursts;
- the second network device selects a second queue group from the second queue system and adds the one or more bursts included in the second data stream to the second queue group in the order of those bursts;
- the priority of the first queue group is higher than the priority of the second queue group.
- the second network device processes the first queue group and the second queue group according to the scheduling rules of multiple queues in the second queue system.
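The queue-group handling can be sketched as a strict-priority drain over the groups. This is a sketch under the assumption that a lower number means higher priority and that the group numbering is illustrative; the description only states that the earlier-arriving stream's group has the higher priority.

```python
def schedule_groups(queue_groups):
    """Drain queue groups in priority order.

    queue_groups: list of (priority, packets); lower number = higher
    priority (an assumption for this sketch). The stream that arrived
    first was assigned the higher-priority group, so its bursts are
    sent first and the two streams never squeeze each other.
    """
    sent = []
    for _, packets in sorted(queue_groups, key=lambda group: group[0]):
        sent.extend(packets)
    return sent

# The first data stream was assigned priority 0, the second priority 1.
groups = [(1, ["A1", "A2"]), (0, ["B1"])]
assert schedule_groups(groups) == ["B1", "A1", "A2"]
```

Strict priority between groups is what keeps bursts of different streams that share a gate window from reordering each other.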
- the plurality of bursts included in the first data stream have the same number of bits. Multiple packets included in the first burst have the same size.
- FIG. 14 is a schematic structural block diagram of a first network device provided according to an embodiment of the present application.
- the first network device 1400 shown in FIG. 14 includes a receiving unit 1401 , a processing unit 1402 and a sending unit 1403 .
- the receiving unit 1401 is configured to receive a first packet in the network at a first moment, where the first packet is the first packet of a first burst of a first data stream, the first burst is one of multiple bursts included in the first data stream received by the first network device, the first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream;
- the processing unit 1402 is configured to: determine a first target queue from multiple queues included in the first queue system according to the first moment; and add the one or more packets included in the first burst to the first target queue in the order of those packets;
- the sending unit 1403 is configured to process the first target queue according to the scheduling rules of the multiple queues.
- the first time intervals between the opening times of two adjacent queues among the multiple queues included in the first queue system are equal.
- the second time intervals at which two adjacent bursts among the multiple bursts included in the first data stream arrive at the first network device are equal, and the second time interval is an integer multiple of the first time interval.
- the number of bits of the multiple bursts included in the first data stream is the same.
- the first burst includes multiple packets of the same size.
- the receiving unit 1401 is further configured to receive a second packet in the network at a second moment, where the second packet is the first packet of a second burst of a second data stream, the second burst is one of multiple bursts included in the second data stream received by the first network device, and the second burst includes one or more packets;
- the processing unit 1402 is further configured to determine a second target queue from the multiple queues included in the first queue system according to the second moment, where the second target queue is the first target queue, or the second target queue is located after the first target queue; and the first target queue is the last queue of the first queue system, or the first target queue is before the last queue of the first queue system.
- the third time intervals at which two adjacent bursts among the multiple bursts included in the second data stream arrive at the first network device are equal, and the third time interval is an integer multiple of the first time interval.
- the first target queue holds packets from N bursts, the N bursts include the first burst, each of the N bursts corresponds to one data stream, and different bursts among the N bursts correspond to different data streams; the number of bits in the N bursts is less than the number of bits the first target queue can accommodate, which equals the port rate of the first network device multiplied by the time interval between the opening time of the first target queue and the closing time of the first target queue.
- the first packet includes queue information of the first target queue; or, one or more packets included in the first burst respectively include queue information of the first target queue.
- the queue information of the first target queue includes a queue number of the first target queue.
- the one or more packets included in the first burst each further include a queue group number used to indicate the queue group to which the queue, into which the second network device adds the one or more packets included in the first burst, belongs.
- the second network device is the last-hop network device that processes one or more packets included in the first data flow.
- each of the one or more packets included in the first burst includes first time information of that packet, where the first time information is used to indicate the first remaining processing time of that packet;
- the first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process each packet;
- the first theoretical time upper limit is the theoretical upper limit of the time each packet spends inside the network device from the first reference time to the second reference time;
- the first reference time is the reference time at which the first network device releases each packet to the first queue system, or the time at which the first network device receives each packet;
- the second reference time is the reference time at which each packet enters the queue system of the second network device that processes the one or more packets included in the first burst;
- the first actual time is the actual time each packet spends inside the first network device, from the first reference time of the packet to the time at which the packet is output from the first network device.
- the first time information includes the first reference time of each packet and the time when each packet is output from the first network device.
- the first time information further includes a first theoretical upper limit of time for each packet.
- the first packet includes first time information of the first packet, where the first time information is used to indicate the first remaining processing time of the first packet, and the first remaining processing time is the difference between the first theoretical time upper limit and the first actual time for the first network device to process the first packet; the first theoretical time upper limit is the theoretical upper limit of the time the first packet spends inside the network device from the first reference time to the second reference time; the first reference time is the reference time at which the first network device releases the first packet to the first queue system, and the second reference time is the reference time at which the first packet enters the queue system of the second network device that processes the one or more packets included in the first burst.
- the first time information includes the first reference time of the first packet and the time at which the first packet is output from the first network device.
- the first time information further includes a first theoretical upper limit of the time of the first packet.
- FIG. 15 is a schematic structural block diagram of a second network device provided according to an embodiment of the present application.
- the second network device 1500 shown in FIG. 15 includes a receiving unit 1501 , a processing unit 1502 and a sending unit 1503 .
- the receiving unit 1501 is configured to receive a first data stream, where the first data stream includes one or more bursts, a first burst among the multiple bursts includes one or more packets, a third burst among the multiple bursts includes one or more packets, the first burst and the third burst are two adjacent bursts in the first data stream, and the second network device is the last-hop network device that processes the one or more packets included in the first data stream;
- the processing unit 1502 is configured to: determine a third target queue and a fourth target queue from the second queue system of the second network device; add the one or more packets included in the first burst to the third target queue in the order of those packets; and add the one or more packets included in the third burst to the fourth target queue in the order of those packets;
- the sending unit 1503 is configured to process the third target queue and the fourth target queue according to the scheduling rules of the third target queue and the fourth target queue.
- the third target queue and the fourth target queue are two adjacent or non-adjacent queues in the second queue system.
- the time interval between the moment at which the second network device releases the one or more packets included in the first burst to the second queue system and the moment at which the second network device releases the one or more packets included in the third burst to the second queue system is the fourth time interval;
- the time interval between the opening time of the third target queue and the opening time of the fourth target queue is the fifth time interval;
- the fourth time interval is equal to the fifth time interval.
- the receiving unit 1501 is further configured to receive a second data stream, where the second data stream includes one or more bursts, a second burst among the multiple bursts includes one or more packets, and the moment at which the second data stream arrives at the second network device is after the moment at which the first burst of the first data stream arrives at the second network device and before the moment at which the last burst of the first data stream arrives at the second network device;
- the processing unit 1502 is further configured to: select a first queue group and a second queue group from the second queue system; add the one or more bursts included in the first data stream to the first queue group in the order of those bursts; and add the one or more bursts included in the second data stream to the second queue group in the order of those bursts, where the priority of the first queue group is higher than the priority of the second queue group;
- the sending unit 1503 is further configured to process the first queue group and the second queue group according to the scheduling rules of the multiple queues of the second queue system.
- the processing unit 1502 is specifically configured to: determine the first target queue, where the first target queue is the queue in the first network device to which the one or more packets included in the first burst were added, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream; and determine, according to a first mapping relationship, the third target queue corresponding to the first target queue from the second queue system, where the first mapping relationship includes the mapping between queues in the first queue system of the first network device and queues in the second queue system.
- the first packet of the first burst includes queue information of the first target queue; the processing unit is specifically configured to:
- the first target queue is determined according to the queue information of the first target queue.
- the third target queue holds packets from N bursts, the N bursts include the first burst, each of the N bursts corresponds to one data stream, and different bursts among the N bursts correspond to different data streams; the N bursts correspond to N queue groups, each of the N queue groups corresponds to a priority, and different queue groups have different priorities.
- the number of bits of the multiple bursts included in the first data stream is the same.
- the first burst includes multiple packets of the same size.
- the embodiment of the present application also provides a processing apparatus, including a processor and an interface.
- the processor may be used to execute the methods in the above method embodiments.
- the above processing device may be a chip.
- the processing apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
- each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in a processor or an instruction or program code in the form of software.
- the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
- the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
- the steps of the above method embodiments may be completed by hardware integrated logic circuits in the processor or instructions or program codes in the form of software.
- a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
- the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
- Volatile memory may be random access memory (RAM), which acts as an external cache.
- by way of example rather than limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
- the present application further provides a network system.
- FIG. 16 is a schematic diagram of a network system according to an embodiment of the present application.
- the network system includes a first network device as shown in FIG. 14 and a second network device as shown in FIG. 15 .
- the first network device shown in FIG. 14 is configured to perform some or all of the steps performed by the first network device in the foregoing method embodiments.
- the second network device shown in FIG. 15 is configured to perform some or all of the steps performed by the second network device in the foregoing method embodiments.
- the present application also provides a computer program product, the computer program product includes: computer program code, when the computer program code is run on a computer, the computer is made to execute any one of the above embodiments. method.
- the present application also provides a computer-readable medium, where the computer-readable medium stores program codes, and when the program codes run on a computer, causes the computer to execute any one of the above-mentioned embodiments. method.
- the disclosed system, apparatus and method may be implemented in other manners.
- the apparatus embodiments described above are only illustrative.
- the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code.
Abstract
A packet processing method, including: a first network device receives a first packet in a network at a first moment, where the first packet is the first packet of a first burst of a first data stream, the first burst is one of multiple bursts included in the first data stream received by the first network device, the first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream; the first network device determines a first target queue from multiple queues included in a first queue system of the first network device according to the first moment; the first network device adds the one or more packets included in the first burst to the first target queue in the order of those packets; and the first network device processes the first target queue according to the scheduling rules of the multiple queues. In this way, the end-to-end jitter of the packets of a data stream across the network devices in the network is zero.
Description
This application claims priority to Chinese Patent Application No. 202011287339.6, filed with the China National Intellectual Property Administration on November 17, 2020 and entitled "Packet Processing Method and Related Apparatus", which is incorporated herein by reference in its entirety.
This application relates to the field of communication technologies, and in particular, to a packet processing method and a related apparatus.
Delay determinism means that, for any packet in a data stream, the end-to-end delay the packet experiences in the network does not exceed a certain value; that is, the network guarantees a deterministic delay upper bound for the data stream. Delay determinism represents the network's ability to deliver packets "in time". The jitter of a data stream is the difference between the delay upper bound and the delay lower bound that packets in the stream may experience. Jitter determinism specifies both the delay upper bound and the delay lower bound of packets in the data stream, and represents whether the network can deliver packets "on time": neither too early nor too late.
In many industrial control scenarios, a controller needs to remotely control a robotic arm to complete many fine operations, which requires the delay between the controller and the robotic arm to be less than 1 ms (millisecond) and the jitter to be less than 1 us (microsecond), or even zero. Current scheduling methods (for example, Damper-model-based solutions and cyclic queuing and forwarding solutions) can achieve deterministic end-to-end delay, but their jitter is still large and cannot meet the extremely low jitter requirements of such services.
Summary of the Invention

Embodiments of this application provide a packet processing method and a related apparatus, which are used to guarantee a deterministic delay upper bound and zero end-to-end jitter for packets.

A first aspect of the embodiments of this application provides a packet processing method, the method including:

A first network device receives a first packet in a network at a first moment, where the first packet is the first packet of a first burst of a first data stream, the first burst is one of multiple bursts included in the first data stream received by the first network device, the first burst includes one or more packets, and the first network device is the first-hop network device that processes the one or more packets included in the first data stream; then, the first network device determines a first target queue from multiple queues included in a first queue system according to the first moment; the first network device adds the one or more packets included in the first burst to the first target queue in the order of those packets; and the first network device processes the first target queue according to the scheduling rules of the multiple queues.

In this embodiment, the first network device determines the first target queue based on the first moment at which the first packet of the first burst is received, and enqueues the one or more packets included in the first burst into the first target queue in order, at burst granularity. The last-hop network device that processes the one or more packets included in the first burst can determine the corresponding third target queue and then enqueue the one or more packets included in the first burst into the third target queue in order. That is, enqueuing and scheduling are performed through the mapping between the first target queue and the third target queue between the first-hop network device and the last-hop network device, which ensures that the shape of the data stream entering the network devices is the same as the shape of the data stream leaving the network devices, thereby guaranteeing a deterministic delay upper bound and zero end-to-end jitter for packets.
一种可能的实现方式中,第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。
在该实现方式中,第一队列系统中相邻的两个队列的开启时间之间的第一时间间隔相等,以便于配置每条数据流中相邻两个突发到达第一网络设备的时间间隔等于门控粒度的整数倍,为方案的实施提供基础,从而保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一数据流包括的多个突发中相邻的两个突发到达第一网络设备的第二时间间隔相等,第二时间间隔为第一时间间隔的整数倍。
在该实现方式中,第一数据流中相邻两个突发到达第一网络设备的时间间隔等于门控粒度的整数倍,以保证第一数据流进入首跳网络设备的形状与后续第一数据流离开末跳网络设备的形状相同,从而保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
在该可能的实现方式中,如果多个数据流同时进入第一网络设备时,那么每条数据流包括的突发的比特数应当相同,从而避免在末跳网络设备上由于数据流的突发的比特数不同导致报文在网络中的网络设备经历的时延不同导致报文的端到端抖动。
另一种可能的实现方式中,该第一突发包括的多个报文的大小相同。
在该可能的实现方式中,第一突发包括的多个报文的大小相同,这样可以避免因报文大小不同而使不同报文在网络中的网络设备经历的时间不同,进而导致报文的端到端抖动。
另一种可能的实现方式中,该方法还包括:该第一网络设备在第二时刻接收网络中的第二报文,该第二报文为第二数据流的第二突发的首个报文,该第二突发为该第一网络设备接收的该第二数据流包括的多个突发中的一个突发,该第二突发包括一个或多个报文;该第一网络设备根据该第二时刻从该第一队列系统包括的多个队列中确定第二目标队列;该第二目标队列为该第一目标队列,或者,该第二目标队列位于该第一目标队列之后;以及,该第一目标队列为该第一队列系统的最后一个队列,或者,该第一目标队列为该第一队列系统的最后一个队列之前。
在该可能的实现方式中,针对第一网络设备接收多条数据流的情况,第一网络设备仍可以根据不同数据流的突发的首个报文的接收时刻确定对应的目标队列,并加入该目标队列。不同数据流的两个突发可以加入同一目标队列中。
另一种可能的实现方式中,第二数据流包括的多个突发中相邻的两个突发到达该第一网络设备的第三时间间隔相等,第三时间间隔为第一时间间隔的整数倍。
在该可能的实现方式中,第二数据流中相邻两个突发到达第一网络设备的时间间隔等于门控粒度的整数倍,以保证第二数据流进入首跳网络设备的形状与后续第二数据流离开末跳网络设备的形状相同,从而保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流且该N个突发中不同突发对应的数据流不同,该N个突发的比特数小于该第一目标队列所能容纳的比特数,该第一目标队列所能容纳的比特数等于该第一网络设备的端口速率乘以该第一目标队列的开启时间与该第一目 标队列的结束时间之间的时间间隔。
在该可能的实现方式中,第一目标队列可以加入N个突发的报文,并且该N个突发的比特数小于该第一目标队列所能容纳的比特数,从而保证在第一目标队列的开启时间至结束时间之间的时间间隔将N个突发的报文发送完毕,避免影响其他目标队列的报文的发送,从而保证报文的确定性时延和端到端零抖动。
另一种可能的实现方式中,第一报文包括第一目标队列的队列信息;或者,第一突发包括的一个或多个报文分别包括第一目标队列的队列信息。
在该可能的实现方式中,第一报文或第一突发的每个报文都携带第一目标队列的队列信息,以便于末跳网络设备根据该队列信息确定第一目标队列对应的目标队列,实现首跳网络设备和末跳网络设备之间的第一目标队列与第三目标队列的映射进行入队和调度,从而保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一目标队列的队列信息包括第一目标队列的队列编号。
在该可能的实现方式中,示出了第一目标队列的队列信息通过队列编号的形式表示。
另一种可能的实现方式中,第一突发包括的一个或多个报文分别还包括用于指示第二网络设备加入第一突发包括的一个或多个报文的队列所属的队列组编号,该第二网络设备为对第一数据流包括的一个或多个报文进行处理的最后一跳网络设备。
在该可能的实现方式中,为了实现报文在网络中的网络设备的端到端抖动为零,第一网络设备接收到多条数据流且多条数据流中存在不同数据流的突发落入第一网络设备的第一队列系统的同一目标队列的情况,第一网络设备可以携带用于指示第二网络设备加入第一突发包括的一个或多个报文的队列所属的队列组编号。由于每个队列组对应一个优先级,不同队列组对应不同优先级,末跳网络设备根据用于指示第一突发包括的一个或多个报文的队列所属的队列组编号确定对应的队列组,并通过队列组的调度规则,对队列组进行处理,从而避免不同数据流之间的相互挤压,保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一突发包括的一个或多个报文中每个报文包括该每个报文的第一时间信息,第一时间信息用于指示该每个报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该每个报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为第一参考时刻开始至第二参考时刻该每个报文经过网络设备的内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该每个报文给该第一队列系统的参考时刻,或者,该第一参考时刻为该第一网络设备接收到该每个报文的时刻;该第二参考时刻为该每个报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻;该第一实际时间为该每个报文在该第一参考时刻至该每个报文从该第一网络设备输出的时刻为止,该每个报文在该第一网络设备内部经历的实际时间。
在该可能的实现方式中,在基于Damper模型的方案下,第一突发的每个报文可以携带每个报文的第一时间信息,以便于第二个对第一数据流进行处理的网络设备确定每个报文在该第二个对第一数据流进行处理的网络设备的参考时刻,并根据该参考时刻为该每个报文选择对应的目标队列。
另一种可能的实现方式中,第一时间信息包括每个报文的第一参考时刻以及每个报文 从第一网络设备输出的时刻。
在该可能的实现方式中,第一时间信息具体可以包括每个报文在第一网络设备的第一参考时刻和每个报文从第一网络设备输出的时刻,以便于网络中第二个对第一数据流进行处理的网络设备确定每个报文在该第二个对第一数据流进行处理的网络设备的参考时刻,并根据该参考时刻为该每个报文选择对应的目标队列。
另一种可能的实现方式中,第一时间信息还包括每个报文的第一理论时间上限。
在该可能的实现方式中,第一时间信息还包括每个报文的第一理论时间上限,以便于网络中第二个对第一数据流进行处理的网络设备确定每个报文在该第二个对第一数据流进行处理的网络设备的参考时刻。
另一种可能的实现方式中,第一报文包括该第一报文的第一时间信息,该第一时间信息用于指示该第一报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该第一报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为第一参考时刻开始至第二参考时刻该第一报文经过网络设备内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该第一报文给该第一队列系统的参考时刻,该第二参考时刻为该第一报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻。
在该可能的实现方式中,对第一数据流进行处理的中间节点可以以突发的粒度将第一数据流的突发加入对应的目标队列中。因此,第一网络设备可以在第一突发的首个报文携带第一报文的第一时间信息,网络中第二个对第一数据流进行处理的网络设备根据第一报文的第一时间信息就可以确定第一突发所对应的目标队列,并将该第一突发加入该目标队列,从而减少报文传输的开销。
另一种可能的实现方式中,第二时间信息包括第一报文的第一参考时刻和第一报文从第一网络设备输出的时刻。
在该可能的实现方式中,第一时间信息具体可以包括第一报文在第一网络设备的第一参考时刻和第一报文从第一网络设备输出的时刻,以便于网络中第二个对第一数据流进行处理的网络设备确定第一报文在该第二个对第一数据流进行处理的网络设备的参考时刻,并根据该参考时刻为第一突发选择对应的目标队列。
另一种可能的实现方式中,第一时间信息还包括第一报文的第一理论时间上限。
在该可能的实现方式中,第一时间信息还包括第一报文的第一理论时间上限,以便于网络中第二个对第一数据流进行处理的网络设备确定第一报文在该第二个对第一数据流进行处理的网络设备的参考时刻,并根据该参考时刻为第一突发选择对应的目标队列。
本申请实施例第二方面提供一种报文处理方法,该报文处理方法包括:
第二网络设备接收第一数据流,该第一数据流包括一个或多个突发,该多个突发中的第一突发包括一个或多个报文,该多个突发中的第三突发包括一个或多个报文,该第一突发和该第三突发为第一数据流中相邻的两个突发,该第二网络设备为对该第一数据流包括的一个或多个报文进行处理的最后一跳网络设备;然后,该第二网络设备从该第二网络设备的第二队列系统中确定第三目标队列和第四目标队列;该第二网络设备按照该第一突发 包括的一个或多个报文的顺序将该第一突发包括的一个或多个报文加入该第三目标队列;该第二网络设备按照该第三突发包括的一个或多个报文的顺序将该第三突发包括的一个或多个报文加入该第四目标队列;该第二网络设备根据该第三目标队列和该第四目标队列的调度规则,对该第三目标队列和该第四目标队列进行处理。
本实施例中,第二网络设备以突发粒度的入队方式将第一数据流的第一突发和第三突发加入第三目标队列和第四目标队列,第三目标队列与在第一网络设备上第一突发加入的第一目标队列对应,第四目标队列与在第一网络设备上第三突发加入的目标队列对应。即通过首跳网络设备和末跳网络设备之间的第一目标队列与第三目标队列的映射进行入队和调度,保证了数据流进入网络设备和离开网络设备的形状相同,从而保证报文的确定性时延上界和端到端零抖动。
一种可能的实现方式中,第三目标队列和所述第四目标队列为该第二队列系统中相邻或不相邻的两个队列。
在该可能的实现方式中,同一数据流相邻的两个突发在末跳网络设备加入的目标队列可以是相邻的两个队列,也可以是非相邻的两个队列,具体应当由首跳网络设备对该数据流的突发与目标队列的映射方式,以及该数据流相邻的两个突发到达首跳网络设备的时间间隔决定。
另一种可能的实现方式中,该第二网络设备释放该第一突发包括的一个或多个报文给该第二队列系统的时刻和该第二网络设备释放该第三突发包括的一个或多个报文给该第二队列系统的时刻之间的时间间隔为第四时间间隔,该第三目标队列的开启时间与该第四目标队列的开启时间之间的时间间隔为第五时间间隔,该第四时间间隔与该第五时间间隔相等。
在该可能的实现方式中,上述第四时间间隔与第五时间间隔相等,以实现报文的确定性时延和端到端零抖动。
另一种可能的实现方式中,该方法还包括:该第二网络设备接收第二数据流,该第二数据流包括一个或多个突发,该多个突发中的第二突发包括一个或多个报文,该第二数据流到达该第二网络设备的时刻在该第一数据流的首个突发到达该第二网络设备的时刻之后,并且在该第一数据流的最后一个突发到达该第二网络设备的时刻之前;该第二网络设备从该第二队列系统中选择第一队列组,按照该第一数据流包括的一个或多个突发的顺序将该第一数据流包括的一个或多个突发加入该第一队列组;该第二网络设备从该第二队列系统中选择第二队列组,按照该第二数据流包括的一个或多个突发的顺序将该第二数据流包括的一个或多个突发加入该第二队列组;该第一队列组的优先级高于该第二队列组的优先级;该第二网络设备根据该第二队列系统的多个队列的调度规则,对该第一队列组和该第二队列组进行处理。
在该可能的实现方式中,第二网络设备接收到多条数据流,且不同数据流的突发映射至同一目标队列时,第二网络设备可以将不同数据流分别映射至不同队列组,每个队列组对应一个优先级,不同队列组对应不同优先级,从而避免不同数据流之间的相互挤压,保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,该第二网络设备从该第二网络设备的第二队列系统中确定第三目标队列,包括:该第二网络设备确定第一目标队列,第一目标队列为第一网络设备中第一突发包括的一个或多个报文加入的队列,第一网络设备为对第一数据流包括一个或多个报文进行处理的首跳网络设备;然后,该第二网络设备根据第一映射关系从第二队列系统中确定第一目标队列对应的第三目标队列,第一映射关系包括第一网络设备的第一队列系统中的队列与第二队列系统中的队列之间的映射关系。
在该可能的实现方式中,第二网络设备可以根据第一队列系统的队列与第二队列系统的队列之间的映射关系确定第一目标队列对应的第三目标队列,这样通过首跳网络设备和末跳网络设备之间的第一目标队列与第三目标队列的映射进行入队和调度,从而保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一突发的首个报文包括该第一目标队列的队列信息;第二网络设备确定第一目标队列,包括:第二网络设备根据该第一目标队列的队列信息确定第一目标队列。
在该可能的实现方式中,第二网络设备可以根据第一突发的首个报文携带的第一目标队列的队列信息确定第一突发在第一网络设备中加入的第一目标队列,以便于第二网络设备确定第一目标队列对应的第三目标队列。
另一种可能的实现方式中,第三目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流,该N个突发中不同突发对应的数据流不同;该N个突发对应的N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
在该可能的实现方式中,当多条数据流的突发同时落入同一目标队列时,每条数据流的突发应当分配至对应的队列组,再通过每个队列组的调度规则对每个队列组进行处理,从而避免多条数据流的突发之间的传输冲突,以保证报文的确定性时延上界和端到端零抖动。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
在该可能的实现方式中,如果多个数据流同时进入第一网络设备,那么每条数据流包括的突发的比特数应当相同,从而避免在末跳网络设备上,因各数据流突发的比特数不同而使报文在网络中的网络设备经历的时延不同,进而导致报文的端到端抖动。
另一种可能的实现方式中,该第一突发包括的多个报文的大小相同。
在该可能的实现方式中,第一突发包括的多个报文的大小相同,这样可以避免因报文大小不同而使不同报文在网络中的网络设备经历的时间不同,进而导致报文的端到端抖动。
本申请实施例第三方面提供一种第一网络设备,该第一网络设备包括:
接收单元,用于在第一时刻接收网络中的第一报文,该第一报文为第一数据流的第一突发的首个报文,第一突发为第一网络设备接收的第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文,第一网络设备为对第一数据流包括的一个或多个报文进行处理的首跳网络设备;
处理单元,用于根据第一时刻从第一队列系统包括的多个队列中确定第一目标队列; 按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列;
发送单元,用于根据多个队列的调度规则,对第一目标队列进行处理。
一种可能的实现方式中,第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。
另一种可能的实现方式中,第一数据流包括的多个突发中相邻的两个突发到达第一网络设备的第二时间间隔相等,第二时间间隔为第一时间间隔的整数倍。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
另一种可能的实现方式中,该第一突发包括的多个报文的大小相同。
另一种可能的实现方式中,该接收单元还用于:
在第二时刻接收网络中的第二报文,该第二报文为第二数据流的第二突发的首个报文,该第二突发为该第一网络设备接收的该第二数据流包括的多个突发中的一个突发,该第二突发包括一个或多个报文;
该处理单元还用于:
根据该第二时刻从该第一队列系统包括的多个队列中确定第二目标队列;
该第二目标队列为该第一目标队列,或者,该第二目标队列位于该第一目标队列之后;以及,该第一目标队列为该第一队列系统的最后一个队列,或者,该第一目标队列为该第一队列系统的最后一个队列之前。
另一种可能的实现方式中,第二数据流包括的多个突发中相邻的两个突发到达该第一网络设备的第三时间间隔相等,第三时间间隔为第一时间间隔的整数倍。
另一种可能的实现方式中,第一目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流且该N个突发中不同突发对应的数据流不同,该N个突发的比特数小于该第一目标队列所能容纳的比特数,该第一目标队列所能容纳的比特数等于该第一网络设备的端口速率乘以该第一目标队列的开启时间与该第一目标队列的结束时间之间的时间间隔。
另一种可能的实现方式中,第一报文包括第一目标队列的队列信息;或者,第一突发包括的一个或多个报文分别包括第一目标队列的队列信息。
另一种可能的实现方式中,第一目标队列的队列信息包括第一目标队列的队列编号。
另一种可能的实现方式中,第一突发包括的一个或多个报文分别还包括用于指示第二网络设备加入第一突发包括的一个或多个报文的队列所属的队列组编号,该第二网络设备为对第一数据流包括的一个或多个报文进行处理的最后一跳网络设备。
另一种可能的实现方式中,第一突发包括的一个或多个报文中每个报文包括该每个报文的第一时间信息,第一时间信息用于指示该每个报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该每个报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为第一参考时刻开始至第二参考时刻该每个报文经过网络设备的内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该每个报文给该第一队列系统的参考时刻,或者,该第一参考时刻为该第一网络设备接收到该每个报文的时刻;该第二 参考时刻为该每个报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻;该第一实际时间为该每个报文在该第一参考时刻至该每个报文从该第一网络设备输出的时刻为止,该每个报文在该第一网络设备内部经历的实际时间。
另一种可能的实现方式中,第一时间信息包括每个报文的第一参考时刻以及每个报文从第一网络设备输出的时刻。
另一种可能的实现方式中,第一时间信息还包括每个报文的第一理论时间上限。
另一种可能的实现方式中,第一报文包括该第一报文的第一时间信息,该第一时间信息用于指示该第一报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该第一报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为第一参考时刻开始至第二参考时刻该第一报文经过网络设备内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该第一报文给该第一队列系统的参考时刻,该第二参考时刻为该第一报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻。
另一种可能的实现方式中,第二时间信息包括第一报文的第一参考时刻和第一报文从第一网络设备输出的时刻。
另一种可能的实现方式中,第一时间信息还包括第一报文的第一理论时间上限。
本申请实施例第四方面提供一种第二网络设备,该第二网络设备包括:
接收单元,用于接收第一数据流,该第一数据流包括一个或多个突发,该多个突发中的第一突发包括一个或多个报文,该多个突发中的第三突发包括一个或多个报文,该第一突发和该第三突发为第一数据流中相邻的两个突发,该第二网络设备为对该第一数据流包括的一个或多个报文进行处理的最后一跳网络设备;
处理单元,用于从该第二网络设备的第二队列系统中确定第三目标队列和第四目标队列;按照该第一突发包括的一个或多个报文的顺序将该第一突发包括的一个或多个报文加入该第三目标队列;按照该第三突发包括的一个或多个报文的顺序将该第三突发包括的一个或多个报文加入该第四目标队列;
发送单元,用于根据该第三目标队列和该第四目标队列的调度规则,对该第三目标队列和该第四目标队列进行处理。
一种可能的实现方式中,第三目标队列和所述第四目标队列为该第二队列系统中相邻或不相邻的两个队列。
另一种可能的实现方式中,该第二网络设备释放该第一突发包括的一个或多个报文给该第二队列系统的时刻和该第二网络设备释放该第三突发包括的一个或多个报文给该第二队列系统的时刻之间的时间间隔为第四时间间隔,该第三目标队列的开启时间与该第四目标队列的开启时间之间的时间间隔为第五时间间隔,该第四时间间隔与该第五时间间隔相等。
另一种可能的实现方式中,该接收单元还用于:
接收第二数据流,该第二数据流包括一个或多个突发,该多个突发中的第二突发包括一个或多个报文,该第二数据流到达该第二网络设备的时刻在该第一数据流的首个突发到 达该第二网络设备的时刻之后,并且在该第一数据流的最后一个突发到达该第二网络设备的时刻之前;
该处理单元还用于:
从该第二队列系统中选择第一队列组,按照该第一数据流包括的一个或多个突发的顺序将该第一数据流包括的一个或多个突发加入该第一队列组;从该第二队列系统中选择第二队列组,按照该第二数据流包括的一个或多个突发的顺序将该第二数据流包括的一个或多个突发加入该第二队列组;该第一队列组的优先级高于该第二队列组的优先级;
该发送单元还用于:
根据该第二队列系统的多个队列的调度规则,对该第一队列组和该第二队列组进行处理。
另一种可能的实现方式中,该处理单元具体用于:
确定第一目标队列,第一目标队列为第一网络设备中第一突发包括的一个或多个报文加入的队列,第一网络设备为对第一数据流包括一个或多个报文进行处理的首跳网络设备;
根据第一映射关系从第二队列系统中确定第一目标队列对应的第三目标队列,第一映射关系包括第一网络设备的第一队列系统中的队列与第二队列系统中的队列之间的映射关系。
另一种可能的实现方式中,第一突发的首个报文包括该第一目标队列的队列信息;该处理单元具体用于:
根据该第一目标队列的队列信息确定第一目标队列。
另一种可能的实现方式中,第三目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流,该N个突发中不同突发对应的数据流不同;该N个突发对应的N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
另一种可能的实现方式中,该第一突发包括的多个报文的大小相同。
本申请实施例第五方面提供一种网络设备,网络设备包括处理器,用于执行存储器中存储的程序,当该程序被执行时,使得该网络设备执行上述第一方面或第一方面的任一种可能的设计的方法。
一种可能的实现方式中,该存储器位于该网络设备之外。
本申请实施例第六方面提供一种网络设备,网络设备包括处理器,用于执行存储器中存储的程序,当该程序被执行时,使得该网络设备执行上述第二方面或第二方面的任一种可能的设计的方法。
一种可能的实现方式中,该存储器位于该网络设备之外。
本申请实施例第七方面提供一种计算机可读存储介质,包括计算机指令,当该计算机指令在计算机上运行时,使得计算机执行如第一方面和第二方面中的任一种可能的设计的方法。
本申请实施例第八方面提供一种包括计算机指令的计算机程序产品,当其在计算机上运行时,使得该计算机执行如第一方面至第二方面中任一种可能的设计的方法。
本申请实施例第九方面提供一种网络设备,该网络设备包括处理器、存储器以及存储在该存储器上并可在该处理器上运行的计算机指令,当该计算机指令被运行时,使得该网络设备执行如第一方面或第一方面中的任一种可能的设计的方法。
本申请实施例第十方面提供一种网络设备,该网络设备包括处理器、存储器以及存储在该存储器上并可在该处理器上运行的计算机指令,当该计算机指令被运行时,使得该网络设备执行如第二方面或第二方面中的任一种可能的设计的方法。
本申请实施例第十一方面提供一种网络系统,该网络系统包括如第三方面的第一网络设备和如第四方面的第二网络设备。
从以上技术方案可以看出,本申请实施例具有以下优点:
经由上述技术方案可知,第一网络设备在第一时刻接收网络中的第一报文,该第一报文为第一数据流的第一突发的首个报文,第一突发为第一网络设备接收的第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文,第一网络设备为对第一数据流包括的一个或多个报文进行处理的首跳网络设备;然后,第一网络设备根据第一时刻从第一队列系统包括的多个队列中确定第一目标队列,并按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列;该第一网络设备根据多个队列的调度规则,对第一目标队列进行处理。由此可知,本申请实施例的技术方案中,第一网络设备通过接收第一突发的首个报文的第一时刻确定第一目标队列,并第一网络设备以突发粒度的入队方式将该第一突发包括的一个或多个报文顺序加入第一目标队列。而对第一突发包括的一个或多个报文进行处理的末跳网络设备可以确定对应的第三目标队列,再将该第一突发包括一个或多个报文顺序加入第三目标队列。即通过首跳网络设备和末跳网络设备之间的第一目标队列与第三目标队列的映射进行入队和调度,从而保证报文的确定性时延上界和端到端零抖动。
图1为突发累积形成原因的示意图;
图2是能够应用本申请实施例的系统的示意图;
图3A是能够实现本申请实施例的路由器的示意性结构框图;
图3B为队列系统的多个队列的开启时间的示意图;
图4A为本申请实施例报文处理方法的一个实施例示意图;
图4B为本申请实施例报文处理方法的另一个实施例示意图;
图4C为本申请实施例报文处理方法的另一个实施例示意图;
图5A为本申请实施例第一数据流在首跳网络设备与末跳网络设备的一个传输场景示意图;
图5B为本申请实施例第一数据流在首跳网络设备与末跳网络设备的另一个传输场景示意图;
图5C为本申请实施例第一数据流在首跳网络设备与末跳网络设备的另一个传输场景示意图;
图6示出了入口边缘设备231和网络设备232处理报文的时序图;
图7示出了网络设备232和网络设备233处理报文的时序图;
图8是本申请实施例提供的报文通过网络设备转发后的确定性时延的示意图;
图9A为本申请实施例第一数据流和第二数据流在首跳网络设备与末跳网络设备的一个传输场景示意图;
图9B为本申请实施例第一数据流和第二数据流在首跳网络设备与末跳网络设备的另一个传输场景示意图;
图10为本申请实施例第一数据流和第二数据流在首跳网络设备与末跳网络设备的另一个传输场景示意图;
图11为本申请实施例报文处理方法的另一个实施例示意图;
图12为本申请实施例第一数据流、第二数据流和第三数据流在首跳网络设备与末跳网络设备的一个传输场景示意图;
图13为本申请实施例报文处理方法的另一个实施例示意图;
图14为本申请实施例第一网络设备的一个结构示意图;
图15为本申请实施例第二网络设备的一个结构示意图;
图16为本申请实施例网络系统的一个示意图。
本申请实施例提供一种报文处理方法和网络设备,用于保证报文的确定性时延上界和端到端零抖动。
本申请将围绕可包括多个设备、组件、模块等的系统来呈现各个方面、实施例或特征。应当理解和明白的是,各个系统可以包括另外的设备、组件、模块等,并且/或者可以并不包括结合附图讨论的所有设备、组件、模块等。此外,还可以使用这些方案的组合。
另外,在本申请实施例中,“示例的”、“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用示例的一词旨在以具体方式呈现概念。
本申请实施例描述的网络架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的 变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
互联网协议(Internet Protocol,IP)网络中,由于突发累积的存在,导致其无法为某条流提供确定性的端到端时延和抖动。
突发累积是导致时延不确定的根本原因。突发累积形成的原因是不同数据流的数据包之间的相互挤压。
图1是突发累积形成原因的示意图。
如图1所示,三条流(流1、流2和流3)同时到达节点101时是完全均匀的。由于节点101只能线速处理报文,流2受到其他两条流的挤压,其两个连续报文紧挨在了一起,突发度增加。以上过程循环若干次之后,会使某一跳的流形成一个难以预测的大突发,大突发进一步挤压其他流,导致其他流的时延增加且难以预测。微突发逐跳累积是时延不确定性的根本原因。现有解决上述问题的方法要么依赖于全网设备的时间同步,要么对传输距离有限制,很难适用于大规模IP网络。
因此,在传输IP网络中,由于突发累积的存在,导致无法为某条流提供确定性的端到端时延和抖动。
图2为应用本申请实施例的系统的一个示意图。如图2所示的网络200可以由边缘网络210、边缘网络220和核心网络230组成。
边缘网络210中包括用户设备211。边缘网络220包括用户设备221。核心网络230包括入口边缘(ingress edge)设备231、网络设备232、网络设备233、网络设备234和出口边缘(egress edge)设备235。如图2所示,用户设备211可以通过核心网络与用户设备221进行通信。
需要说明的是,能够实现本申请实施例的设备可以是路由器、交换机等。
图3A是能够实现本申请实施例的路由器的示意性结构框图。如图3A所示,路由器300包括上行板301、交换结构302和下行板303。
上行板也可以称为上行接口板。上行板301可以包括多个输入端口。上行板可以对输入端口接收到的报文进行拆封等处理,利用转发表查找输出端口。一旦查找到输出端口(为了便于描述,以下将查找到的输出端口称为目标输出端口),报文就会被发送至交换结构302。
交换结构302将接收到的报文转发到一个该目标输出端口。具体地,交换结构302将接收到的报文转发到包括该目标输出端口的下行板303上。下行板也可以称为下行接口板。下行板303中包括多个输出端口。下行板303接收来自于交换结构302的报文。下行板可 以对接收到的报文进行缓存管理、封装等处理,然后通过该目标输出端口将该报文发送至下一节点。
可以理解的是,如图3A所示的路由器仅示出了一个上行板301和一个下行板303。在一些实施例中,路由器可以包括多个上行板和/或多个下行板。
图4A是根据本申请实施例提供的报文处理方法的示意性流程图。图4A是结合图2对本申请实施例提供的报文处理方法进行描述的。假设本申请实施例中报文处理方法应用于图2所示的核心网络230中。
入口边缘设备231可以接收到多个数据流。入口边缘设备231对多个数据流中的每个数据流的处理方式是相同的。假设入口边缘设备231接收到的多个数据流的路径依次经过入口边缘设备231、网络设备232、网络设备233、网络设备234和出口边缘设备235。入口边缘设备231是该多个数据流进入核心网络230中的第一个网络设备,因此,入口边缘设备231也可以称为第一跳的网络设备或者首跳网络设备。相应的,网络设备232是第二跳的网络设备,网络设备233是第三跳的网络设备,网络设备234是第四跳的网络设备,出口边缘设备235是末跳网络设备或最后一跳的网络设备。
对于多个数据流中的第i条数据流,路径中的每个网络设备的输出端口为第i条数据流预留的平均带宽为r_i。多个数据流满足流量模型,流量模型可以通过以下公式4.1表示:
G_i = r_i × t + D_i (公式4.1)
其中,t为时间;G_i为t时间内第i条数据流的数据总流量;D_i为第i条数据流的最大突发度。
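公式4.1可以用如下Python片段示意(其中r_i、D_i、t的取值均为假设值,片段仅用于说明公式的含义,并非本申请方案的实际实现):

```python
def max_traffic(r_i, d_i, t):
    # 公式4.1:t时间内第i条数据流的数据总流量上界 G_i = r_i * t + D_i
    # 其中 r_i 为预留的平均带宽,d_i 为该数据流的最大突发度
    return r_i * t + d_i

# 例如:平均带宽100比特/单位时间、最大突发度50比特、时长2个单位时间
g_i = max_traffic(r_i=100, d_i=50, t=2)  # 上界为 100*2 + 50 = 250 比特
```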
为了便于更好地理解本申请的技术方案,下面对本申请涉及的一些概念进行介绍。其中,在介绍每个概念时结合第一报文为例进行介绍,对于其他报文同样类似。
1、参考时刻。
入口边缘设备231有一个队列系统,该队列系统包括多个队列。对于首跳网络设备(即入口边缘设备231)来说,网络设备接收到报文的时刻可以称为该报文在网络设备的参考时刻;或者,网络设备会根据一个时刻将接收到的报文加入该网络设备的队列系统中的队列中,这个时刻可以称为该报文在首跳网络设备的参考时刻。
网络设备232、网络设备233、网络设备234和出口边缘设备235中分别都有一个队列系统,该队列系统包括多个队列。对于非首跳网络设备(即网络设备232、网络设备233、网络设备234和出口边缘设备235)来说,网络设备会根据一个时刻确定将接收到的报文加入到队列系统中的队列中。这个时刻可以称为该报文在非首跳网络设备的参考时刻。
2、理论时间上限。
理论时间上限基于网络演算(network calculus)理论计算得到相邻两个网络设备处理报文需要的最大时间。换句话说,相邻两个网络设备处理报文的时间不会大于理论时间上限。理论时间上限并不包括相邻两个网络设备之间传输报文的传输时延。
例如,报文从网络设备中的第1个网络设备到网络设备中的第2个网络设备的理论时间上限是指从报文在第1个网络设备的参考时刻到报文在第2个网络设备的参考时刻之间的理论时间上限。报文从网络设备中的第2个网络设备到网络设备中的第3个网络设备的理论时间上限是指从报文在第2个网络设备的参考时刻到报文在网络设备中的第3个网络 设备的参考时刻之间的理论时间上限。
本申请中,将报文从网络设备中的第1个网络设备到网络设备中的第2个网络设备的理论时间上限称为第1个网络设备的理论时间上限。报文从网络设备中的第2个网络设备到网络设备中的第3个网络设备的理论时间上限称为第2个网络设备的理论时间上限,对于其他网络设备的理论时间上限同样类似。
3、实际时间。
实际时间是指报文在某个网络设备的参考时刻开始至该报文从该网络设备输出的时刻为止,该报文在该网络设备内部经历的实际时间。
例如,第一报文的第一实际时间指第一报文在入口边缘设备231的参考时刻开始至该第一报文从该入口边缘设备231输出的时刻为止,第一报文在入口边缘设备231内部经历的实际时间。第一报文的第二实际时间指第一报文在网络设备232的参考时刻开始至第一报文从网络设备232输出的时刻为止,第一报文在网络设备232内部经历的实际时间。
下面介绍核心网络230中的网络设备的队列系统。
入口边缘设备231、网络设备232、网络设备233、网络设备234、和出口边缘设备235都有一个队列系统。
队列系统中的队列开启和报文发送都满足以下准则:队列在规定时刻开启,开启之后才允许发送报文。多个队列可以同时保持开启状态,但是先开启的队列先发送该先开启的队列加入的报文,该先开启的队列发送该先开启的队列中加入的报文完毕后,才允许下一个开启的队列发送该下一个开启的队列中加入的报文。
下面结合图3B介绍队列系统的多个队列的开启时间。
请参阅图3B,队列系统包括M个队列,分别为队列Q1至队列QM。Δ为M个队列中相邻两个队列的开启时间之间的时间间隔。假设起始时刻为T,那么队列Q1的开启时间为T+Δ,队列Q2的开启时间为T+2Δ,队列Q3的开启时间为T+3Δ,以此类推,队列QM的开启时间为T+Δ+Dmax。M等于(Δ+Dmax)/Δ。当队列满足条件时关闭,并将该队列的优先级设置为队列系统中优先级最低的队列。例如,如图3B所示,队列Q1关闭后,将该队列Q1的开启时间设置为T+2Δ+Dmax。对于其他队列同样类似。
其中,Dmax应当结合网络设备的理论时间上限来设定。例如,对于入口边缘设备231来说,入口边缘设备231的理论时间上限为D1max,则入口边缘设备231的队列系统的Dmax应当不小于D1max;对于网络设备232来说,网络设备232的理论时间上限为D2max,则网络设备232的队列系统的Dmax应当不小于D2max。
下面示出队列系统中队列动态调整的两种可能的调整方式:
1、队列同时满足第一条件和第二条件后关闭。第一条件为该队列至少开启时间间隔Δ,第二条件为该队列为空,即该队列的报文已经排空。而队列关闭后,将该队列下一次的开启时间重置为Tlast+Δ,Tlast为当前优先级最低的队列的开启时间;或者,将该队列设置为队列系统中优先级最低的队列,该队列系统的其他队列的优先级相应升级。
2、确定当前最高优先级队列,该最高优先级队列开启Δ时间后,该最高优先级队列的开启时间为Tnow。如果该最高优先级队列中的报文已经排空,则将该最高优先级队列的开启时间重置为Tnow+Δ+Dmax;然后,将该最高优先级队列的优先级设置为最低优先级队列。
入口边缘设备231、网络设备232、网络设备233、网络设备234和出口边缘设备235分别所对应的队列系统可以在上行板实现,也可以在下行板实现,具体本申请不做限定。
本申请实施例中,网络设备中用于实现队列系统的单元可以称为队列系统单元,队列系统单元用于将报文加入相应的目标队列。网络设备中用于报文主动延迟或停留一段时长的单元可以称为主动延迟单元。
下面结合图4A以第一数据流为例介绍对网络中的网络设备如何处理接收到的第一数据流进行描述,第一数据流是入口边缘设备231接收到的多个数据流中的任一个。图4A中步骤401至步骤410以第一数据流的第一突发的处理过程为例进行介绍,对于第一数据流的其他突发同样适用。
请参阅图4A,图4A为本申请实施例报文处理方法的一个实施例示意图。在图4A中,报文处理方法包括:
401、入口边缘设备231在第一时刻接收第一报文。
其中,第一报文为第一数据流的第一突发的首个报文,第一突发为入口边缘设备231接收的第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文。入口边缘设备231为对第一数据流包括的一个或多个报文进行处理的首跳网络设备。
例如,如图5A所示,入口边缘设备231接收第一数据流包括的多个突发,分别为突发B1、突发B2、突发B3、突发B4。第一突发为突发B1,突发B1包括一个或多个报文,例如,突发B1包括3个报文,3个报文的报文大小相同或不同。那么可知,第一时刻为突发B1的首个报文到达入口边缘设备231的时刻。
第一数据流中每个突发包括的报文大小相同或不同。当每个突发包括的报文大小相同时,则可以避免由于数据流的报文大小导致报文在网络中的端到端的抖动。
入口边缘设备231确定第一报文为第一突发的首个报文的确定方式有多种,下面示出三种可能的实现方式。
1、入口边缘设备231预先和发送方(sender)协商确定报文到达的时间。
2、入口边缘设备231实时监测第一数据流,当发现第一数据流的报文到达不连续时,则入口边缘设备231可以确定第一数据流的不同突发,并确定每个突发的首个报文。
3、每个突发的首个报文中携带特殊标识符,该特殊标识符用于标识该报文为该突发的首个报文。入口边缘设备231根据特殊标识符确定每个突发的首个报文。
入口边缘设备231确定第一数据流的每个突发的首个报文的方式都类似。后文中网络设备232、网络设备233、网络设备234和出口边缘设备235确定每个突发的首个报文的方式也类似,具体后续不再一一说明。
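上述第2种方式(实时监测报文到达是否连续以识别突发边界)可以用如下Python片段示意(其中到达时刻序列与间隔阈值均为假设值,仅为示意,并非本申请方案的实际实现):

```python
def split_bursts(arrivals, gap_threshold):
    # arrivals 为报文到达时刻的升序序列;
    # 相邻报文到达间隔超过阈值时,视为新突发开始,其首个报文即突发的首个报文
    bursts, cur = [], [arrivals[0]]
    for prev, now in zip(arrivals, arrivals[1:]):
        if now - prev > gap_threshold:
            bursts.append(cur)
            cur = []
        cur.append(now)
    bursts.append(cur)
    return bursts

# 到达时刻0、1、2连续,10、11连续,间隔阈值取3,可分出两个突发
groups = split_bursts([0, 1, 2, 10, 11], gap_threshold=3)
```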
402、入口边缘设备231根据第一时刻从入口边缘设备231的队列系统单元包括的多个队列中确定第一目标队列。
其中,入口边缘设备231的队列系统单元包括的多个队列中相邻两个队列的开启时间之间的第一时间间隔相等。
例如,如图5A所示,在入口边缘设备231的队列系统单元中,队列x与队列x+1为相 邻的两个队列,队列x+1与队列x+2为相邻的两个队列,队列x+2与队列x+3为相邻的两个队列。队列x的开启时间与队列x+1的开启时间之间的时间间隔等于队列x+1的开启时间与队列x+2的开启时间之间的时间间隔。队列x+1的开启时间与队列x+2的开启时间之间的时间间隔等于队列x+2的开启时间与队列x+3的开启时间之间的时间间隔。
具体的,入口边缘设备231选择入口边缘设备231的队列系统单元中在第一时刻之后开启的第k个队列作为第一目标队列,k为大于或等于1的整数。
例如,如图5A所示,入口边缘设备231选择入口边缘设备231的队列系统单元中的在第一时刻之后首个开启的队列x。
例如,如图5B所示,入口边缘设备231选择入口边缘设备231的队列系统单元中的在第一时刻之后的第二个开启的队列x+1。
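"选择第一时刻之后开启的第k个队列作为第一目标队列"可以用如下Python片段示意(队列开启时间与到达时刻均为假设值,仅为示意,并非本申请方案的实际实现):

```python
import bisect

def pick_target_queue(open_times, arrival, k=1):
    # open_times 为各队列开启时间的升序序列;
    # 返回在 arrival 之后开启的第k个队列的下标(k>=1)
    i = bisect.bisect_right(open_times, arrival)
    return i + k - 1

# 报文在时刻12到达,各队列开启时间为10,20,30,40:
# k=1时选开启时间为20的队列;k=2时选开启时间为30的队列
q1 = pick_target_queue([10, 20, 30, 40], arrival=12, k=1)
q2 = pick_target_queue([10, 20, 30, 40], arrival=12, k=2)
```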
403、入口边缘设备231按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列。
其中,第一突发包括的一个或多个报文的顺序可以理解为该一个或多个报文到达入口边缘设备231的顺序。
例如,第一突发包括报文1、报文2和报文3,报文1在报文2之前到达入口边缘设备231,报文3在报文2之前到达入口边缘设备231。那么入口边缘设备231顺序将报文1、报文2和报文3加入第一目标队列。那么当第一目标队列开启之后,入口边缘设备231首先发送报文1,再发送报文2,最后发送报文3。
入口边缘设备231采用突发为入队粒度的入队方式将第一突发包括的一个或多个报文顺序加入第一目标队列。例如,如图5A所示,第一突发为突发B1,入口边缘设备231将突发B1包括的一个或多个报文顺序加入入口边缘设备231的队列系统单元中的队列x中。
可选的,第一突发包括的一个或多个报文中每个报文都包括第一目标队列的队列信息;或者,第一报文(第一突发的首个报文)包括第一目标队列的队列信息。
其中,第一目标队列的队列信息包括第一目标队列的队列编号。例如,如图5A所述,第一突发为B1,第一目标队列的队列信息包括队列编号x。
若网络设备232采用报文为入队粒度的入队方式,则可选的,第一突发包括的一个或多个报文中每个报文包括每个报文对应的第一时间信息。
其中,每个报文的第一时间信息用于指示每个报文的第一剩余处理时间。
每个报文的第一剩余处理时间为每个报文的第一理论时间上限与每个报文的第一实际时间的差。
每个报文的第一理论时间上限为从每个报文在入口边缘设备231的参考时刻与每个报文在网络设备232的参考时刻为止的每个报文经过网络设备的理论上限。
第一实际时间为从每个报文在入口边缘设备231的参考时刻与每个报文从入口边缘设备231输出的时刻为止,每个报文在入口边缘设备231内部经历的实际时间。关于参考时刻的相关介绍请参阅前述术语的介绍。
例如,如图6所示,第一报文在入口边缘设备231的参考时刻为E1,第一报文从入口边缘设备231输出的时刻为t1out,即第一实际时间为参考时刻E1与时刻t1out之间的时间间隔,第一理论时间上限为D1max。那么第一报文的第一剩余处理时间为D1max减去参考时刻E1与时刻t1out之间的时间间隔。
具体的,每个报文的第一时间信息包括每个报文在入口边缘设备231的参考时刻、每个报文从入口边缘设备231输出的时刻和每个报文的第一理论时间上限。
例如,以第一报文的第一时间信息为例进行介绍,第一报文的第一时间信息包括第一报文在入口边缘设备231的参考时刻、第一报文从入口边缘设备231输出的时刻和第一理论时间上限,即D1max。对于第一突发的其他报文同样类似。
若网络设备232采用突发为入队粒度的入队方式,则可选的,第一报文包括第一报文的第一时间信息。第一报文的第一时间信息用于指示第一报文的第一剩余处理时间。
由于网络设备232采用突发为入队粒度的入队方式,因此网络设备232只需要确定第一突发的首个报文的第一时间信息即可,具体网络设备232使用第一报文的第一时间信息以及确定目标队列的相关过程请参阅后文介绍。第一报文的第一时间信息包括的内容请参阅前述介绍,这里不再赘述。
需要说明的是,D1max可以是预先配置在网络设备232中的,或者是一个预设的默认值。在此情况下,每个报文的第一时间信息或第一报文的第一时间信息可以不包括D1max。
404、入口边缘设备231根据第一目标队列的调度规则,向网络设备232发送第一突发包括的一个或多个报文。
具体的,入口边缘设备231按照入口边缘设备231的队列系统单元中第一目标队列的调度规则,向网络设备232发送第一突发包括的一个或多个报文。关于第一目标队列的调度规则可以结合图3B中对队列系统的M个队列的相关介绍了解队列的调度规则。
图6示出了入口边缘设备231和网络设备232处理第一报文的时序图。
如图6所示,第一报文在时刻t1in到达入口边缘设备231,第一报文进入入口边缘设备231的队列系统单元。第一报文在时刻t1out从入口边缘设备231中输出。在图6中,第一报文在时刻t2in输入网络设备232。第一报文在时刻t'2in离开网络设备232的交换结构,进入网络设备232的主动延迟单元。网络设备232根据第一报文的第一时间信息确定第一报文在网络设备232的参考时刻E2,并根据第一报文在网络设备232的参考时刻E2从网络设备232的队列系统单元中选择目标队列,第一报文在时刻t2out从网络设备232输出。
可以理解的是,图6以及后续附图中所示的队列系统单元Q和主动延迟单元D仅仅是逻辑上划分的不同单元。具体设备形态上二者可以是相同的物理单元。
需要说明的是,图6中第一报文在入口边缘设备231的参考时刻E1设定为入口边缘设备231接收第一报文的第一时刻t1in。
第一报文的第一理论时间上限就是从第一报文在入口边缘设备231的参考时刻E1开始到第一报文在网络设备232的参考时刻E2为止,第一报文在入口边缘设备231以及网络设备232经历的理论时间上限。第一报文的第一理论时间上限不包括第一报文从入口边缘设备231到网络设备232之间的传输时延。
第一报文的第一实际时间是从第一报文在入口边缘设备231的参考时刻E1到时刻t1out为止,第一报文在入口边缘设备231经历的时间。
405、网络设备232向网络设备233发送第一突发包括的一个或多个报文。
网络设备232可以采用突发为入队粒度的入队方式将第一突发包括的一个或多个报文加入目标队列,也可以采用报文为入队粒度的入队方式将第一突发包括的一个或多个报文加入目标队列。下面分别结合两种不同的入队方式介绍步骤405。
一、下面结合图4B,基于网络设备232采用突发为入队粒度的入队方式介绍步骤405。
请参阅图4B,在步骤405之前,本实施例还包括步骤405a至步骤405b。
405a:网络设备232根据第一报文包括的第一时间信息从网络设备232的队列系统单元中确定第六目标队列。
第一报文的第一时间信息用于指示第一报文的第一剩余处理时间。网络设备232可以通过第一报文的第一剩余处理时间确定第一报文在入口边缘设备231的参考时刻。那么网络设备232根据第一报文在入口边缘设备231的参考时刻选择第六目标队列,第六目标队列的开启时间在参考时刻E2之后。
具体的,第一报文对应的第一时间信息包括第一报文在入口边缘设备231的参考时刻和第一报文的第一理论时间上限。例如,如图6所示,第一报文在入口边缘设备231的参考时刻为E1,第一理论时间上限为D1max,那么可知网络设备232通过D1max和E1可以确定第一报文在网络设备232的参考时刻E2。由图6可知,第一报文在时刻t'2in进入网络设备232的主动延迟单元,网络设备232可以根据时刻t'2in和参考时刻E2确定第一报文在网络设备232的主动延迟单元中停留的时长。那么,网络设备232可以根据第一报文在网络设备232的参考时刻E2选择第六目标队列,第六目标队列在参考时刻E2之后开启。
405b:网络设备232按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第六目标队列。
其中,加入第六目标队列的第一报文包括第一报文的第二时间信息,第一报文的第二时间信息用于指示第一报文的第二剩余处理时间。
第一报文的第二剩余处理时间为第一报文的第二理论时间上限与第一报文的第二实际时间的差。第一报文的第二理论时间上限为从第一报文在网络设备232的参考时刻至第一报文在网络设备233的参考时刻为止的第一报文经过网络设备的理论时间上限。第二实际时间为从第一报文在网络设备232的参考时刻与第一报文从网络设备232输出的时刻为止,第一报文在网络设备232内部经历的实际时间。
具体的,网络设备232采用突发为入队粒度的入队方式将第一突发包括的一个或多个报文顺序加入第六目标队列。
可选的,第一报文的第二时间信息包括第一报文在网络设备232的参考时刻、第一报文从网络设备232输出的时刻和第一报文的第二理论时间上限。
例如,如图6和图7所示,第一报文在网络设备232的参考时刻为参考时刻E2,第一报文的第二理论时间上限为D2max。对于第一报文来说,D2max为第一报文被网络设备232入队至网络设备232的队列系统单元至该第一报文入队至网络设备233的队列系统单元的最大时延。
需要说明的是,D2max可以是预先配置在网络设备233中的,或者是一个预设的默认值。在此情况下,第一报文的第二时间信息可以不包括D2max。
若步骤404中第一突发包括的一个或多个报文分别包括第一目标队列的队列信息,则加入第六目标队列的第一突发的一个或多个报文也分别都包括第一目标队列的队列信息。若步骤404中第一报文包括第一目标队列的队列信息,则加入第六目标队列的第一突发的第一报文包括第一目标队列的队列信息。
那么上述步骤405具体包括步骤405c。
405c:网络设备232根据第六目标队列的调度规则,向网络设备233发送第一突发包括的一个或多个报文。
具体的,网络设备232按照网络设备232的队列系统单元中第六目标队列的调度规则,向网络设备233发送第一突发包括的一个或多个报文。关于第六目标队列的调度规则可以结合前述对图3B对队列系统的M个队列的相关介绍了解队列的调度规则。
图7示出了网络设备232和网络设备233处理第一报文的时序图。第一报文在时刻t2in到达网络设备232,第一报文在时刻t'2in离开网络设备232的交换结构,进入网络设备232的主动延迟单元。网络设备232根据第一报文的第一时间信息确定第一报文在网络设备232的参考时刻E2,并根据第一报文在网络设备232的参考时刻E2从网络设备232的队列系统单元中选择第六目标队列,第一报文在时刻t2out从网络设备232输出。
二、下面结合图4C,基于网络设备232采用报文为入队粒度的入队方式介绍步骤405。下面通过图4C以第一突发的第一报文为例介绍报文的入队和发送。
请参阅图4C,步骤405之前,本实施例还包括步骤405d至步骤405e。
405d:网络设备232根据第一报文包括的第一时间信息从网络设备232的队列系统单元确定第六目标队列。
步骤405d与步骤405a类似,具体请参阅前述步骤405a的相关介绍。
405e:网络设备232将第一报文加入第六目标队列。
其中,加入第六目标队列的第一报文包括第一报文的第二时间信息。第二时间信息的相关介绍请参阅前述步骤405b。
若步骤404中第一报文包括第一目标队列的队列信息,则加入第六目标队列的第一报文也包括第一目标队列的队列信息。
那么上述步骤405具体包括步骤405f。
步骤405f:网络设备232根据第六目标队列的调度规则,向网络设备233发送第一报文。
对于第一突发的其他报文的处理流程也类似,网络设备232根据该第一突发的其他报文中每个报文包括每个报文对应的第一时间信息确定该每个报文对应的目标队列,并将该每个报文加入该每个报文对应的目标队列,再通过该每个报文对应的目标队列的调度规则向网络设备233发送该每个报文。
406、网络设备233向网络设备234发送第一突发包括的一个或多个报文。
步骤406与前述步骤405类似,具体请参阅前述步骤405的相关介绍。
图7示出了网络设备232和网络设备233处理第一报文的时序图。第一报文在时刻t3in到达网络设备233,第一报文在时刻t'3in离开网络设备233的交换结构,进入网络设备233的主动延迟单元。网络设备233根据第一报文的第二时间信息确定第一报文在网络设备233的参考时刻E3,并根据第一报文在网络设备233的参考时刻E3从网络设备233的队列系统单元中选择目标队列,第一报文在时刻t3out从网络设备233输出。
407、网络设备234向出口边缘设备235发送第一突发包括的一个或多个报文。
步骤407与步骤405的处理过程类似,具体请参阅前述步骤405的相关介绍。
不同的地方在于,本实施例中,首跳网络设备和末跳网络设备之间通过队列的映射进行入队和调度,从而保证报文的确定性时延上界和端到端零抖动。因此,步骤407中网络设备234向出口边缘设备235发送的第一突发包括的一个或多个报文可以不携带报文的时间信息。
若网络设备234接收到网络设备233发送的第一突发包括的一个或多个报文分别包括第一目标队列的队列信息时,则网络设备234向出口边缘设备235发送的第一突发包括的一个或多个报文分别都包括第一目标队列的队列信息。
若网络设备234接收到网络设备233发送的第一突发中的第一报文包括第一目标队列的队列信息时,则网络设备234向出口边缘设备235发送的第一突发的第一报文包括第一目标队列的队列信息。
408、出口边缘设备235从出口边缘设备235的队列系统单元确定第三目标队列。
具体的,出口边缘设备235确定第一突发包括的一个或多个报文在入口边缘设备231中加入的第一目标队列;然后,出口边缘设备235根据第一映射关系确定第一目标队列对应的第三目标队列。第一映射关系包括入口边缘设备231的队列系统单元的队列与出口边缘设备235的队列系统单元的队列之间的映射关系。
可选的,第一映射关系可以预先配置在出口边缘设备235中,也可以是出口边缘设备235可以通过数据面学习的方式或控制面配置的方式获取到的,具体本申请不做限定。并且,入口边缘设备231的队列系统单元的队列与出口边缘设备235的队列系统单元的队列之间的映射关系可以是通过实验数据确定的。
一种可能的实现方式中,出口边缘设备235可以根据网络设备234发送的第一报文包括的第一目标队列的队列信息确定第一目标队列;然后,出口边缘设备235根据第一映射关系确定第一目标队列对应的第三目标队列。
例如,如图5A所示,第一目标队列的队列信息包括队列编号x。出口边缘设备235根据第一映射关系确定队列编号x对应的队列编号y,即第三目标队列为出口边缘设备235的队列系统单元中队列编号为y的队列。由此可知,第一映射关系可以包括入口边缘设备231的队列系统单元的队列的队列编号与出口边缘设备235的队列系统单元的队列的队列编号之间的映射关系。例如,结合图5A所示的示例,第一映射关系可以表示为:
表1
入口边缘设备231的队列编号    出口边缘设备235的队列编号
x                y
x+1              y+1
x+2              y+2
x+3              y+3
由表1可知,入口边缘设备231的队列系统单元中队列编号x的队列对应出口边缘设备235的队列系统单元中队列编号y的队列。入口边缘设备231的队列系统单元中队列编号x+1的队列对应出口边缘设备235的队列系统单元中队列编号y+1的队列。入口边缘设备231的队列系统单元中队列编号x+2的队列对应出口边缘设备235的队列系统单元中队列编号y+2的队列。入口边缘设备231的队列系统单元中队列编号x+3的队列对应出口边缘设备235的队列系统单元中队列编号y+3的队列。
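表1所示的第一映射关系可以用如下Python片段示意(队列编号'x'、'y'等沿用图5A的记法,映射内容按表1给出,仅为示意,并非本申请方案的实际实现):

```python
# 第一映射关系:首跳(入口边缘设备231)队列编号 -> 末跳(出口边缘设备235)队列编号
first_mapping = {
    'x':   'y',
    'x+1': 'y+1',
    'x+2': 'y+2',
    'x+3': 'y+3',
}

def third_target_queue(first_queue_id, mapping):
    # 末跳网络设备根据报文携带的第一目标队列的队列编号,
    # 查映射关系得到对应的第三目标队列
    return mapping[first_queue_id]

q = third_target_queue('x', first_mapping)  # 第一目标队列x对应第三目标队列y
```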
409、出口边缘设备235按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第三目标队列。
具体的,出口边缘设备235以突发为入队粒度的入队方式将第一突发包括的一个或多个报文加入第三目标队列。第一突发包括的一个或多个报文的顺序的相关介绍请参阅步骤403的相关介绍。
例如,如图5A所示,第一突发为突发B1,第一目标队列为队列x。出口边缘设备235根据第一映射关系确定队列x对应队列y。出口边缘设备235按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入队列y。
例如,如图5B所示,第一突发为突发B1,第一目标队列为队列x+1。出口边缘设备235确定队列x+1对应队列y+1。出口边缘设备235按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入队列y+1。
410、出口边缘设备235根据第三目标队列的调度规则,发送第一突发包括的一个或多个报文。
具体的,出口边缘设备235按照出口边缘设备235的队列系统单元中第三目标队列的调度规则,向出口边缘设备235发送第一突发包括的一个或多个报文。关于第三目标队列的调度规则可以结合前述图3B对队列系统的M个队列的相关介绍了解队列的调度规则。
本申请实施例中,入口边缘设备231在第一时刻接收网络中的第一报文,该第一报文为第一数据流的第一突发的首个报文,第一突发为第一网络设备接收的第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文,入口边缘设备231为对第一数据流包括的一个或多个报文进行处理的首跳网络设备;然后,入口边缘设备231根据第一时刻从入口边缘设备231的队列系统单元包括的多个队列中确定第一目标队列,并按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列;入口边缘设备231根据入口边缘设备231的队列系统单元的多个队列的调度规则,对第一目标队列进行处理。由此可知,本申请实施例的技术方案中,入口边缘设备231通过接收第一突发的首个报文的第一时刻确定第一目标队列;并且,入口边缘设备231以突发粒度的入队方式将该第一突发包括的一个或多个报文顺序加入第一目标队列。而对第一突发包括的一个或多个报文进行处理的末跳网络设备可以确定对应的第三目标队列,再将该第一突发包括的一个或多个报文顺序加入第三目标队列。即通过首跳网络设备和末跳网络设备之间的第一目标队列与第三目标队列的映射进行入队和调度,保证了数据流进入网络设备和离开网络设备的形状相同,并且能够保证报文的确定性时延上界和端到端零抖动。
可选的,第一数据流包括多个突发,第一数据流中相邻的两个突发到达入口边缘设备231的第二时间间隔相等,且第二时间间隔等于第一时间间隔的整数倍。该相邻的两个突发在出口边缘设备235分别所映射至的目标队列的开启时间之间的第五时间间隔等于第二时间间隔。
下面结合步骤401至步骤420示出网络中的网络设备对第一数据流中相邻的两个突发(第一突发和第三突发)的处理过程,对于第一数据流的其他相邻的两个突发的处理过程同样适用。
411、入口边缘设备231在第三时刻接收第三报文。
其中,第三报文为第三突发的首个报文,第三突发为第一数据流的一个突发,第一突发与第三突发为第一数据流中相邻的两个突发。
例如,如图5A所示,第一突发为第一数据流的突发B1,第三突发为第一数据流的突发B2,突发B1和突发B2为第一数据流中相邻的两个突发。或者,第一突发为第一数据流的突发B2,第三突发为第一数据流的突发B3,突发B2和突发B3为第一数据流中相邻的两个突发。
第一数据流的相邻两个突发到达入口边缘设备231的第二时间间隔相等,而第二时间间隔为第一时间间隔的整数倍。
例如,如图5A,第一突发为突发B1,第三突发为突发B2。突发B1在第一时刻到达入口边缘设备231,突发B2在第三时刻到达入口边缘设备231。第一时刻与第三时刻之间的第二时间间隔等于第一时间间隔,即等于入口边缘设备231的队列系统单元的队列的门控粒度(即门控粒度等于入口边缘设备231的队列系统单元中一个队列的时长)。
例如,如图5C,第一突发为B1,第三突发为B2。突发B1在第一时刻到达入口边缘设备231,突发B2在第三时刻到达入口边缘设备231。第一时刻与第三时刻之间的第二时间间隔等于两倍第一时间间隔。即等于入口边缘设备231的队列系统单元的队列的两倍门控粒度。
其中,第一数据流包括的多个突发的比特数相同或不相同。
例如,如图5A所示,第一数据流包括的突发B1、突发B2、突发B3和突发B4均包括相同的比特数。即可以理解为突发B1、突发B2、突发B3和突发B4分别包含的数据量相同。
多个突发包括的报文数量相同或不相同。例如,突发B1包括3个报文,突发B2包括4个报文,突发B3包括3个报文。即突发B1包括的报文数量与突发B3包括的报文数量相同,而突发B1包括的报文数量与突发B2包括的报文数量不同。
可选的,每个突发包括的报文大小相同。若每个突发包括的报文大小相同,则可以避免由于报文大小不同导致报文的端到端抖动。例如,如图5A所示,突发B1包括3个报文,该3个报文中每个报文包括的比特数相同,这样网络中的网络设备中传输时可以避免由于该3个报文大小导致该3个报文在网络中的网络设备经历的时间不同导致报文的端到端抖 动。
412、入口边缘设备231根据第三时刻从入口边缘设备231的队列系统单元包括的多个队列中确定第五目标队列。
具体的,入口边缘设备231选择入口边缘设备231的队列系统单元中在第三时刻之后开启的第k个队列作为第五目标队列。
例如,如图5A所示,第一突发为突发B1,第三突发为突发B2。入口边缘设备231确定突发B1映射至队列x,确定突发B2映射至队列x+1。
例如,如图5B所示,第一突发为突发B1,第三突发为突发B2。入口边缘设备231确定突发B1映射至队列x+1,突发B2映射至队列x+2。
413、入口边缘设备231按照第三突发包括的一个或多个报文的顺序将第三突发包括的一个或多个报文加入第五目标队列。
414、入口边缘设备231根据第五目标队列的调度规则,向网络设备232发送第三突发包括的一个或多个报文。
步骤413至步骤414与前述步骤403至步骤404类似,具体请参阅前述步骤403至步骤404的相关介绍。
415、网络设备232向网络设备233发送第三突发包括的一个或多个报文。
416、网络设备233向网络设备234发送第三突发包括的一个或多个报文。
417、网络设备234向出口边缘设备235发送第三突发包括的一个或多个报文。
步骤415至步骤417与前述步骤405至步骤407类似,具体请参阅前述步骤405至步骤407的相关介绍。
418、出口边缘设备235从出口边缘设备235的队列系统单元确定第四目标队列。
出口边缘设备235释放第一突发包括的一个或多个报文给出口边缘设备235的队列系统单元的时刻与出口边缘设备235释放第三突发包括的一个或多个报文给出口边缘设备235的队列系统单元的时刻之间的时间间隔为第四时间间隔。
例如,如图5C所示,第一突发为突发B1,第三突发为突发B2。出口边缘设备235释放突发B1给出口边缘设备235的队列系统单元的时刻为T13,出口边缘设备235释放突发B2给出口边缘设备235的队列系统单元的时刻为T31,时刻T13与时刻T31之间的时间间隔为第四时间间隔。
第三目标队列的开启时间与第四目标队列的开启时间之间的时间间隔为第五时间间隔。第四时间间隔与第五时间间隔相等。
例如,如图5C所示,突发B1映射至队列y,突发B2映射至队列y+2。队列y的开启时间为T12,队列y+2的开启时间为T22。开启时间T12与开启时间T22之间的时间间隔为第五时间间隔。时刻T13与时刻T31之间的时间间隔为第四时间间隔,第四时间间隔与第五时间间隔相等。
出口边缘设备235的队列系统单元包括的多个队列中相邻两个队列的开启时间之间的时间间隔为第六时间间隔。
例如,如图5C所示,出口边缘设备235的队列系统单元中队列y与队列y+1是相邻的 两个队列,队列y的开启时间与队列y+1的开启时间之间的时间间隔为第六时间间隔。
第三目标队列与第四目标队列为相邻或不相邻的两个队列。第三目标队列的开启时间与第四目标队列的开启时间之间的第五时间间隔为第六时间间隔的整数倍。
例如,如图5A所示,第三目标队列为队列y,第四目标队列为队列y+1,队列y与队列y+1是相邻的两个队列。队列y的开启时间与队列y+1的开启时间之间的第五时间间隔等于第六时间间隔。
例如,如图5C所示,第三目标队列为队列y,第四目标队列为队列y+2,队列y与队列y+2为不相邻的两个队列。队列y的开启时间与队列y+2的开启时间之间的第五时间间隔等于第六时间间隔的两倍。
为了使得报文在网络中的网络设备的端到端抖动为零,第二时间间隔与第五时间间隔有如下关系:第二时间间隔等于第五时间间隔。
下面结合图5C进行说明。例如,如图5C所示,突发B1的首个报文到达入口边缘设备231的时刻为T11,突发B2的首个报文到达入口边缘设备231的时刻为T21。突发B1的首个报文离开出口边缘设备235的时刻为T12,突发B2的首个报文离开出口边缘设备235的时刻为T22。
那么可知,突发B1的首个报文在网络中的网络设备经历的时间为T12-T11,突发B2的首个报文在网络中的网络设备经历的时间为T22-T21。为了使得报文在网络中的网络设备的端到端抖动为零,所以T12-T11应当等于T22-T21。即具体如公式4.2表示:
T12-T11=T22-T21(公式4.2)
将公式4.2进行变换,得到T12-T22=T11-T21。而T12-T22为第五时间间隔,T11-T21为第二时间间隔。因此可知确定第二时间间隔等于第五时间间隔。
由于第二时间间隔等于第一时间间隔的整数倍,第五时间间隔等于第六时间间隔的整数倍。而第二时间间隔等于第五时间间隔,因此可知第一时间间隔等于第六时间间隔。即入口边缘设备231的队列系统单元的队列的门控粒度等于出口边缘设备235的队列系统单元的队列的门控粒度。
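公式4.2所表达的"端到端零抖动"条件可以用如下Python片段示意(其中到达与离开时刻均为假设值,对应图5C中的T11、T21与T12、T22,仅为示意,并非本申请方案的实际实现):

```python
def jitter(depart_times, arrive_times):
    # 每个突发首个报文在网络中经历的时间 = 离开末跳设备时刻 - 到达首跳设备时刻;
    # 端到端抖动 = 这些经历时间的最大值与最小值之差
    dwell = [d - a for d, a in zip(depart_times, arrive_times)]
    return max(dwell) - min(dwell)

# 当 T12-T11 == T22-T21(即第二时间间隔等于第五时间间隔)时,抖动为零:
# 到达时刻T11=1、T21=11,离开时刻T12=12、T22=22,经历时间均为11
j = jitter(depart_times=[12, 22], arrive_times=[1, 11])
```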
419、出口边缘设备235按照第三突发包括的一个或多个报文的顺序将第三突发包括的一个或多个报文加入第四目标队列。
420、出口边缘设备235根据第四目标队列的调度规则,发送第三突发包括的一个或多个报文。
步骤419至步骤420与前述步骤409至步骤410类似,具体请参阅前述步骤409至步骤410的相关介绍,这里不再赘述。
由此可知,结合图8所示,D1max为报文在入口边缘设备231接收到报文至报文入队至网络设备232的队列系统单元的最大时延。D2max为报文被网络设备232入队至网络设备232的队列系统单元至报文入队至网络设备233的队列系统单元的最大时延。D3max为报文被网络设备233入队至网络设备233的队列系统单元至该报文入队至网络设备234的队列系统单元的最大时延。D4max为报文被网络设备234入队至网络设备234的队列系统单元至该报文入队至出口边缘设备235的队列系统单元的最大时延。Dh为报文在出口边缘设备235的队列系统单元和调度单元的最大时延。经由本申请实施例的技术方案可知,不同的报文在出口边缘设备235的Dh都相同,从而保证不同的报文在入口边缘设备231至出口边缘设备235经历的时间相同,以实现报文在网络中的网络设备的端到端抖动为零,从而解决采用Damper方案下由于调度而引起的报文端到端抖动。
需要说明的是,上述是以中间节点(网络设备232、网络设备233和网络设备234)采用Damper方案介绍本申请实施例的技术方案,以解决采用Damper方案下由于调度而引起的报文端到端抖动。实际应用中,本申请实施例的技术方案也可以基于其他方案实施,只要该其他方案能够保证同一数据流的不同报文在入口边缘设备231至网络设备234的抖动为零即可,具体本申请不做限定。本申请实施例主要在于使得不同报文在出口边缘设备235的Dh都相同,从而实现报文在网络中的网络设备的端到端抖动为零。
本申请实施例中,入口边缘设备231可以接收多条数据流。下面以入口边缘设备231接收第一数据流和第二数据流为例进行介绍。
第一数据流的相关介绍可以参阅前述图4A所示的实施例。入口边缘设备231在第二时刻接收第二报文,并根据第二时刻从入口边缘设备231的队列系统单元包括的多个队列中确定第二目标队列;然后,入口边缘设备231按照第二突发包括的一个或多个报文的顺序将第二突发包括的一个或多个报文加入第二目标队列,再根据第二目标队列的调度规则,发送第二突发包括的一个或多个报文。
第二报文为第二数据流的第二突发的首个报文,第二突发为入口边缘设备231接收到的第二数据流包括的多个突发中的一个突发,第二突发包括一个或多个报文。
例如,如图9A所示,第二数据流包括多个突发,分别为突发A1、突发A2、突发A3和突发A4。若第二突发为突发A2,那么第二时刻为突发A2的首个报文到达入口边缘设备231的时刻。
第二数据流包括的多个突发中相邻两个突发到达入口边缘设备231的第三时间间隔相等,且第三时间间隔为第一时间间隔的整数倍。第一时间间隔为入口边缘设备231的队列系统单元中相邻两个队列的开启时间之间的时间间隔。
例如,如图9A所示,第二数据流包括突发A1、突发A2、突发A3、突发A4。以突发A1、突发A2和突发A3为例进行介绍。突发A1与突发A2为第二数据流中相邻的两个突发,突发A2与突发A3为第二数据流中相邻的两个突发。
突发A1到达入口边缘设备231的时刻与突发A2到达入口边缘设备231的时刻之间的时间间隔,与突发A2到达入口边缘设备231的时刻与突发A3到达入口边缘设备231的时刻之间的时间间隔相等。并且,突发A1到达入口边缘设备231的时刻与突发A2到达入口边缘设备231的时刻的时间间隔等于第一时间间隔,突发A2到达入口边缘设备231的时刻与突发A3到达入口边缘设备231的时刻之间的时间间隔等于第一时间间隔。即第三时间间隔等于第一时间间隔。
出口边缘设备235将第二数据流的相邻两个突发释放给出口边缘设备235的队列系统单元的时刻之间的时间间隔等于该相邻的两个突发在出口边缘设备235分别映射的目标队列的开启时间之间的时间间隔。
例如,如图9A所示,第二数据流的突发A1映射至队列y,第二数据流的突发A2映射 至队列y+1。出口边缘设备235将第二数据流的突发A1释放给出口边缘设备235的队列系统单元的时刻为T1,出口边缘设备235将第二数据流的突发A2释放给出口边缘设备235的队列系统单元的时刻为T2,时刻T1与时刻T2之间的时间间隔等于队列y的开启时间与队列y+1的开启时间之间的时间间隔。
该相邻的两个突发在出口边缘设备235分别映射的目标队列的开启时间之间的时间间隔等于该相邻的两个突发分别到达入口边缘设备231的时刻之间的时间间隔。
例如,如图9A所示,第二数据流的突发A1到达入口边缘设备231的时刻与第二数据流的突发A2到达入口边缘设备231的时刻为第三时间间隔。第二数据流的突发A1映射至队列y,第二数据流的突发A2映射至队列y+1。队列y的开启时间与队列y+1的开启时间之间的时间间隔等于第三时间间隔。
上述第二数据流的一些相关时间间隔的关系与第一数据流类似,具体设置原因的说明可以参阅前述图4A所示的实施例中第一数据流的相关说明。
其中,第一目标队列为第二目标队列,或者,第二目标队列位于第一目标队列之后;以及第一目标队列为入口边缘设备231的队列系统单元中的最后一个队列,或者,第一目标队列为入口边缘设备231的队列系统单元的最后一个队列之前。
第二数据流中至少存在一个突发与第一数据流的一个突发同时加入入口边缘设备231的队列系统单元中的同一目标队列的情况。
例如,如图9A所示,第一数据流的首个突发为突发B1,第二数据流的首个突发A1在第一数据流的首个突发B1到达入口边缘设备231之后到达入口边缘设备231。由图9A可知,突发A1与突发B1都加入入口边缘设备231的队列系统单元的队列x。第二数据流的第二个突发A2在第一数据流的第二个突发B2到达入口边缘设备231之后到达入口边缘设备231,由图9A可知,突发A2和突发B2都加入入口边缘设备231的队列系统单元的队列x+1。
例如,如图9B所示,第一数据流的首个突发为突发B1,第二数据流的首个突发A1的部分报文与突发B1的部分报文同时到达入口边缘设备231。由图9B可知,突发A1与突发B1都加入入口边缘设备231的队列系统单元的队列x。
为了实现报文在网络中的网络设备的端到端抖动为零,出口边缘设备235接收多条数据流且多条数据流中存在不同数据流的突发落入出口边缘设备235中的队列系统单元的同一目标队列的情况(在入口边缘设备231上该不同数据流的突发落入入口边缘设备231中的队列系统单元的同一目标队列),出口边缘设备235可以为每个数据流选择对应的队列组,并通过队列组的调度规则,对队列组进行处理。
下面介绍出口边缘设备235的队列系统单元包括的多个队列组以及队列组的优先级。
每个队列组包括的多个队列与前述图3B中介绍的队列系统包括的多个队列的工作原理和设置机制是一致的,而多个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
例如,如图10所示,出口边缘设备235包括第一队列组和第二队列组。第一队列组的优先级高于第二队列组是指:第一队列组和第二队列组中相同队列编号(即同时开启的两个队列)的两个队列中,第一队列组的队列的优先级高于第二队列组的队列。例如,第一队列组的队列y的优先级高于第二队列组的队列y,第一队列组的队列y和第二队列组的队列y同时开启,但是只有当第一队列组的队列y中的报文排空之后,第二队列组的队列y的报文才开始发送。对于第一队列组和第二队列组中其他相同队列编号的队列,报文发送方式也类似,这里不再一一说明。
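上述队列组之间的严格优先级调度可以用如下Python片段示意(报文标识为假设的字符串,仅为示意,并非本申请方案的实际实现):

```python
def schedule_same_queue(groups):
    # groups 按队列组优先级从高到低排列,每个元素是某编号队列在该队列组中的报文列表;
    # 同编号队列同时开启,但高优先级队列组的报文排空后,
    # 低优先级队列组的报文才开始发送
    out = []
    for g in groups:
        out.extend(g)
    return out

# 第一队列组(高优先级)的队列y中有突发B1的两个报文,
# 第二队列组(低优先级)的队列y中有突发A1的一个报文
order = schedule_same_queue([['B1-p1', 'B1-p2'], ['A1-p1']])
```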
下面结合图11所示的实施例进行介绍。其中,图11所示的实施例中以第二数据流的第二突发与第一数据流的第一突发都加入入口边缘设备231的第一目标队列为例进行介绍,即第二目标队列为第一目标队列。
需要说明的是,图11所示的实施例仅仅是一种示例,第二数据流的第二突发也可能是与第一数据流的其他突发加入同一目标队列。例如,第二数据流的第二突发与第一数据流的第三突发都加入入口边缘设备231的第五目标队列,即第二目标队列为第五目标队列。并且,图11仅仅以第二数据流的第二突发与第一数据流的第一突发都加入同一目标队列的情况进行介绍,实际应用中,第二数据流可以存在两个或两个以上突发与第一数据流的突发同时加入入口边缘设备231的队列系统单元中的同一目标队列,具体本申请不做限定。
请参阅图11,图11为本申请实施例报文处理方法的另一个实施例示意图。在图11中,报文处理方法包括:
1101、入口边缘设备231在第一时刻接收第一报文。
1102、入口边缘设备231根据第一时刻从入口边缘设备231的队列系统单元中确定第一目标队列。
1103、入口边缘设备231按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列。
步骤1101至步骤1103与前述图4A所示的实施例中的步骤401至步骤403类似,具体请参阅前述图4A所示的实施例中的步骤401至步骤403的相关介绍。
本实施例中,可选的,加入第一目标队列的第一突发包括的一个或多个报文中还分别包括用于指示出口边缘设备235将第一突发包括的一个或多个报文加入的队列所属的第一队列组的队列组编号;或者,加入第一目标队列的第一报文包括第一队列组的队列组编号。
具体的,入口边缘设备231根据第二映射关系确定第一数据流所对应的第一队列组,并将该第一队列组的队列组编号携带在第一突发的每个报文或第一突发的第一报文中。
第二映射关系包括出口边缘设备235的队列系统单元中队列组与数据流之间的映射关系,每个数据流对应一个队列组,每个队列组对应一个优先级。
可选的,第二映射关系可以是预先配置在入口边缘设备231中的,也可以是入口边缘设备231通过数据面学习或控制面配置获取到的,具体本申请不做限定。
数据流的优先级可以是根据用户等级或数据流对应的业务的重要程度等因素确定的。例如,某用户的用户等级越高,该用户的数据流的优先级越高。某数据流的业务的重要程度较高,则该数据流的优先级越高。而数据流的优先级越高,则该数据流对应的队列组的优先级也越高。
入口边缘设备231可以通过报文的五元组识别不同突发所属的数据流类型。
1104、入口边缘设备231在第二时刻接收第二报文。
1105、入口边缘设备231根据第二时刻从入口边缘设备231的队列系统单元包括的多个队列确定第一目标队列。
1106、入口边缘设备231按照第二突发包括的一个或多个报文的顺序将第二突发包括的一个或多个报文加入第一目标队列。
步骤1104至步骤1106与前述图4A所示的实施例中的步骤401至步骤403类似,具体请参阅前述图4A所示的实施例中的步骤401至步骤403的相关介绍。
例如,如图9A所示,第一突发为突发B1,第二突发为突发A1。由图9A可知,第二时刻在第一时刻之后。入口边缘设备231确定突发B1映射至入口边缘设备231的队列系统单元中的队列x,以及确定突发A1映射至入口边缘设备231的队列系统单元中的队列x。由于突发B1在突发A1之前到达入口边缘设备231,因此入口边缘设备231可以先将突发B1加入队列x,再将突发A1加入队列x,具体如图9A所示。
也就是当队列x开启后,入口边缘设备231先按照突发B1包括的一个或多个报文的顺序向网络设备232发送突发B1包括的一个或多个报文;当突发B1包括的一个或多个报文发送完毕后,入口边缘设备231再按照突发A1包括的一个或多个报文的顺序向网络设备232发送突发A1包括的一个或多个报文。在实际应用中,入口边缘设备231将突发A1和突发B1加入队列x的顺序也可以不限定。例如,入口边缘设备231将突发A1包括的一个或多个报文先加入队列x,再将突发B1包括的一个或多个报文加入队列x。
例如,如图10所示,第一突发为突发B2,第二突发为突发A1,由图10可知,第二时刻在第一时刻之前。入口边缘设备231确定突发A1映射至入口边缘设备231的队列系统单元中的队列x+1,确定突发B2映射至入口边缘设备231的队列系统单元中的队列x+1。由于突发A1在突发B2之前到达入口边缘设备231,因此入口边缘设备231可以先将突发A1包括的一个或多个报文按照报文的顺序加入队列x+1,再按照报文的顺序加入突发B2包括的一个或多个报文,具体如图10所示。实际应用中,入口边缘设备231将突发A1和突发B2加入队列x+1的顺序可以不限定。例如,入口边缘设备231可以先将突发B2包括的一个或多个报文加入队列x+1,再将突发A1包括的一个或多个报文加入队列x+1。
可选的,加入第一目标队列的第二突发包括的一个或多个报文中还分别包括用于指示出口边缘设备235将第二突发包括的一个或多个报文加入的队列所属的第二队列组的队列组编号;或者,加入第一目标队列的第二报文包括第二队列组的队列组编号。
步骤1103中的第一队列组的优先级高于第二队列组的优先级。
例如,第一数据流的优先级高于第二数据流的优先级,而第一队列组的优先级高于第二队列组的优先级,因此第一数据流的数据可以通过第一队列组的队列进行传输,第二数据流可以通过第二队列组的队列进行传输。
具体的,入口边缘设备231根据第二映射关系确定第二数据流所对应的第二队列组,并将该第二队列组的队列组编号携带在第二突发的每个报文或第二突发的第一报文中。第二映射关系包括出口边缘设备235的队列系统单元中队列组与数据流之间的映射关系。
1107、入口边缘设备231根据第一目标队列的调度规则,向网络设备232发送第一突发包括的一个或多个报文和第二突发包括的一个或多个报文。
需要说明的是,第一突发的比特数和第二突发的比特数之和小于或等于第一目标队列所能容纳的比特数。
其中,第一目标队列所能容纳的比特数等于入口边缘设备231的端口速率乘以第一目标队列的开启时间与第一目标队列的结束时间之间的时间间隔。
例如,如图9A所示,突发B1对应第一数据流,突发A1对应第二数据流。突发B1的比特数和突发A1的比特数之和应当小于在第一目标队列的开启时间与第一目标队列的结束时间之间的时间间隔内入口边缘设备231能够传输的比特数。
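上述容量约束(多个突发的比特数之和不超过第一目标队列所能容纳的比特数)可以用如下示意性计算说明(端口速率、时间等取值均为假设):

```python
def queue_capacity_bits(port_rate_bps, open_time, close_time):
    """队列所能容纳的比特数 = 端口速率 × (队列结束时间 - 队列开启时间)。"""
    return port_rate_bps * (close_time - open_time)

def fits_in_queue(burst_bits_list, port_rate_bps, open_time, close_time):
    """多个突发能加入同一目标队列的条件:
    各突发比特数之和小于或等于该队列的容量。"""
    return sum(burst_bits_list) <= queue_capacity_bits(
        port_rate_bps, open_time, close_time)
```

例如,端口速率为1000 bit/s、队列开启时长为2 s时,队列容量为2000 bit,突发B1与突发A1的比特数之和不超过2000 bit才能同时加入该队列。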
1108、网络设备232向网络设备233发送第一突发包括的一个或多个报文和第二突发包括的一个或多个报文。
1109、网络设备233向网络设备234发送第一突发包括的一个或多个报文和第二突发包括的一个或多个报文。
1110、网络设备234向出口边缘设备235发送第一突发包括的一个或多个报文和第二突发包括的一个或多个报文。
步骤1107至步骤1110与前述图4A所示的实施例中的步骤404至步骤407类似,具体请参阅前述图4A所示的实施例中的步骤404至步骤407的相关介绍。
需要说明的是,若入口边缘设备231发送的第一突发包括的一个或多个报文中分别包括用于指示出口边缘设备235将第一突发包括的一个或多个报文加入的队列所属的第一队列组的队列组编号,以及第二突发包括的一个或多个报文中分别包括用于指示出口边缘设备235将第二突发包括的一个或多个报文加入的队列所属的第二队列组的队列组编号,则中间节点(网络设备232、网络设备233和网络设备234)在传输第一突发和第二突发时,第一突发包括的一个或多个报文均分别包括用于指示出口边缘设备235将第一突发包括的一个或多个报文加入的队列所属的第一队列组的队列组编号,第二突发包括的一个或多个报文中均分别包括用于指示出口边缘设备235将第二突发包括的一个或多个报文加入的队列所属的第二队列组的队列组编号。
若入口边缘设备231发送的第一报文包括指示出口边缘设备235将第一突发包括的一个或多个报文加入的队列所属的第一队列组的队列组编号,第二报文包括指示出口边缘设备235将第二突发包括的一个或多个报文加入的队列所属的第二队列组的队列组编号,则中间节点(网络设备232、网络设备233和网络设备234)在传输第一突发和第二突发时,第一报文包括指示出口边缘设备235将第一突发包括的一个或多个报文加入的队列所属的第一队列组的队列组编号,第二报文包括指示出口边缘设备235将第二突发包括的一个或多个报文加入的队列所属的第二队列组的队列组编号。
1111、出口边缘设备235从出口边缘设备235的队列系统单元确定第一队列组。
下面示出出口边缘设备235确定第一队列组的两种可能的实现方式。
1、出口边缘设备235接收网络设备234发送的第一突发包括的一个或多个报文。该第一突发包括的一个或多个报文分别包括第一队列组的队列组编号,或者,第一报文包括第一队列组的队列组编号。出口边缘设备235根据该队列组编号从出口边缘设备235的队列系统单元确定第一队列组。
2、出口边缘设备235根据第二映射关系从出口边缘设备235的队列系统单元确定第一数据流所对应的第一队列组。
可选的,第二映射关系可以是预先配置在出口边缘设备235中,也可以是出口边缘设备235通过数据面学习或控制面配置的方式获取的,具体本申请不做限定。
出口边缘设备235可以通过报文的五元组识别不同突发所属的数据流。
1112、出口边缘设备235从出口边缘设备235的队列系统单元确定第二队列组。
步骤1112与前述步骤1111类似,具体请参阅步骤1111的相关介绍,这里不再赘述。
1113、出口边缘设备235从出口边缘设备235的队列系统单元确定第三目标队列。
步骤1113与前述图4A所示的实施例中步骤408类似,具体可以参阅前述图4A所示的实施例中步骤408的相关介绍。
1114、出口边缘设备235按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一队列组的第三目标队列。
1115、出口边缘设备235按照第二突发包括的一个或多个报文的顺序将第二突发包括的一个或多个报文加入第二队列组的第三目标队列。
1116、出口边缘设备235根据第一队列组的第三目标队列的调度规则,发送第一突发包括的一个或多个报文。
1117、出口边缘设备235根据第二队列组的第三目标队列的调度规则,发送第二突发包括的一个或多个报文。
下面结合具体示例介绍步骤1116和步骤1117。例如,如图9A所示,第一突发为第一数据流的突发B1,第二突发为第二数据流的突发A1。第一队列组的队列y与第二队列组的队列y同时开启,由于第一队列组的优先级高于第二队列组的优先级,因此出口边缘设备235先发送第一队列组的队列y中第一突发包括的一个或多个报文,当出口边缘设备235将第一队列组的队列y中的第一突发包括的一个或多个报文排空后,出口边缘设备235再发送第二队列组的队列y的第二突发包括的一个或多个报文。
本实施例中,第一数据流的多个突发的比特数相同。
例如,如图10所示,第一数据流包括突发B1、突发B2、突发B3和突发B4。第二数据流包括突发A1、突发A2、突发A3和突发A4。突发B2落入第一队列组的队列y+1,突发A1落入第二队列组的队列y+1。突发B3落入第一队列组的队列y+2,突发A2落入第二队列组的队列y+2。突发B4落入第一队列组的队列y+3,突发A3落入第二队列组的队列y+3。
为了保证第二数据流的报文的确定性时延,使得第二数据流的报文在网络中的网络设备的端到端抖动为零,那么应当满足以下条件1:突发A1的首个报文离开出口边缘设备235的时刻与突发A2的首个报文离开出口边缘设备235的时刻之间的时间间隔等于突发A2的首个报文离开出口边缘设备235的时刻与突发A3的首个报文离开出口边缘设备235的时刻之间的时间间隔。
而出口边缘设备235的端口速率是一定的,所以第一数据流的突发B1、突发B2、突发B3、突发B4分别包括的比特数应当相同,这样才能保证时长1、时长2、时长3和时长4均相等,以满足上述条件1。时长1为出口边缘设备235将突发B1从第一队列组的队列y中发送出去所占用的发送时长。时长2为出口边缘设备235将突发B2从第一队列组的队列y+1发送出去所占用的发送时长。时长3为出口边缘设备235将突发B3从第一队列组的队列y+2发送出去所占用的发送时长。时长4为出口边缘设备235将突发B4从第一队列组的队列y+3发送出去所占用的发送时长。因此,第一数据流的多个突发包括的比特数相同。
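上述"端口速率一定时,突发比特数相同则发送时长相同"的关系可以用如下示意性计算说明(速率与比特数均为假设的示例取值):

```python
def send_duration(burst_bits, port_rate_bps):
    """发送时长 = 突发比特数 / 端口速率。
    端口速率一定时,比特数相同的多个突发发送时长必然相同,
    从而低优先级队列组的突发在每个周期获得相同的起始发送时刻,
    满足前述端到端零抖动的条件1。"""
    return burst_bits / port_rate_bps
```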
本申请实施例中,由上述图11所示的实施例可知,当第一数据流的第一突发与第二数据流的第二突发都映射至出口边缘设备235的第三目标队列时,出口边缘设备235可以为每个数据流选择对应的队列组,每个队列组对应一个优先级;然后出口边缘设备235通过每个数据流选择对应的队列组的调度规则,对每个数据流选择对应的队列组进行处理,从而实现不同数据流的报文在网络中的网络设备的确定性时延和端到端零抖动。
入口边缘设备231可以接收多条数据流,该多条数据流中每条数据流中相邻的两个突发到达入口边缘设备231的时间间隔等于该相邻的两个突发在出口边缘设备235分别映射至的目标队列的开启时间之间的时间间隔。
可选的,第一目标队列加入N个突发包括的报文,第一目标队列包括第一突发和第二突发,N个突发中每个突发对应一条数据流且N个突发中不同突发对应的数据流不同,N个突发的比特数小于第一目标队列所能容纳的比特数,N为大于或等于2的整数。
其中,第一目标队列所能容纳的比特数等于入口边缘设备231的端口速率乘以第一目标队列的开启时间与第一目标队列的结束时间之间的时间间隔。
N个突发对应的N条数据流中每条数据流包括的多个突发的比特数相同,具体的原理介绍请参阅前述图11所示的实施例中对第一数据流包括的多个突发的比特数相同的设置原理的相关介绍。
在出口边缘设备235中,第三目标队列加入N个突发包括的报文,N个突发中每个突发对应一条数据流,且N个突发中不同突发对应的数据流不同。N个突发对应N个队列组,N个队列组中每个队列组对应一个优先级,N个队列组中不同队列组对应的优先级不同。
例如,如图12所示,第一目标队列为入口边缘设备231的队列x+1。第一目标队列加入三个突发,分别为突发A1、突发B2和突发C1。突发B2对应第一数据流,突发A1对应第二数据流,突发C1对应第三数据流。突发A1的比特数、突发B2的比特数和突发C1的比特数之和小于或等于入口边缘设备231在第一时间间隔(队列x+1的开启时间至队列x+1的结束时间之间的时间间隔)内传输的比特数。
第一数据流的优先级高于第二数据流的优先级,第二数据流的优先级高于第三数据流的优先级。第一队列组的优先级高于第二队列组的优先级,第二队列组的优先级高于第三队列组的优先级。因此,出口边缘设备235将突发B2映射至第一队列组的队列y+1,并根据第一队列组的队列y+1的调度规则,发送突发B2包括的一个或多个报文;出口边缘设备235将突发A1映射至第二队列组的队列y+1,并根据第二队列组的队列y+1的调度规则,发送突发A1包括的一个或多个报文;出口边缘设备235将突发C1映射至第三队列组的队列y+1,并根据第三队列组的队列y+1的调度规则发送突发C1包括的一个或多个报文。
请参阅图13,图13是根据本申请实施例提供的一种报文处理方法的示意性流程图。
1301、第一网络设备在第一时刻接收网络中的第一报文。
其中,第一报文为第一数据流的第一突发的首个报文,第一突发为第一网络设备接收到第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文,第一网络设备为对第一数据流包括的一个或多个报文进行处理的首跳网络设备。
1302、第一网络设备根据第一时刻从第一网络设备的第一队列系统包括的多个队列中确定第一目标队列。
其中,第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。第一数据流包括的多个突发中相邻的两个突发到达第一网络设备的第二时间间隔相等,且第二时间间隔为第一时间间隔的整数倍。
在一些实施例中,第一数据流包括的多个突发的比特数相同。第一突发包括的多个报文大小相同。
1303、第一网络设备按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列。
在一些实施例中,第一突发包括的一个或多个报文分别包括第一目标队列的队列信息;或者,第一报文包括第一目标队列的队列信息。
在一些实施例中,第一突发包括的一个或多个报文中每个报文包括每个报文的第一时间信息,第一时间信息用于指示该每个报文的第一剩余处理时间,该每个报文的第一剩余处理时间为该每个报文的第一理论时间上限与每个报文的第一实际时间的差。
每个报文的第一理论时间上限为从第一参考时刻起至第二参考时刻为止,该每个报文经过网络设备所经历时间的理论上限。第一参考时刻为该第一网络设备释放该每个报文给第一队列系统的参考时刻,或者,第一参考时刻为该第一网络设备接收到该每个报文的时刻;第二参考时刻为该每个报文进入对该每个报文进行处理的第二个网络设备的队列系统的参考时刻。第一参考时刻可以称为该每个报文在第一网络设备的参考时刻,第二参考时刻可以称为该每个报文在对该每个报文进行处理的第二个网络设备的参考时刻。
每个报文的第一实际时间为从该每个报文在第一网络设备的第一参考时刻起,至该每个报文从第一网络设备输出的时刻为止,该每个报文在第一网络设备内部经历的实际时间。
可选的,每个报文的第一时间信息包括该每个报文在第一网络设备的参考时刻、该每个报文从第一网络设备输出的时刻和该每个报文的第一理论时间上限。
在一些实施例中,第一报文包括第一报文的第一时间信息。第一报文的第一时间信息用于指示第一报文的第一剩余处理时间。第一报文的第一剩余处理时间为该第一报文的第一理论时间上限与第一报文的第一实际时间的差。
第一报文的第一理论时间上限为从第一参考时刻起至第二参考时刻为止,该第一报文经过网络设备所经历时间的理论上限。第一参考时刻为该第一网络设备释放该第一报文给第一队列系统的参考时刻,或者,第一参考时刻为该第一网络设备接收到该第一报文的时刻;第二参考时刻为该第一报文进入对该第一报文进行处理的第二个网络设备的队列系统的参考时刻。第一参考时刻可以称为该第一报文在第一网络设备的参考时刻,第二参考时刻可以称为该第一报文在对该第一报文进行处理的第二个网络设备的参考时刻。
第一报文的第一实际时间为从第一报文在第一网络设备的第一参考时刻起,至该第一报文从第一网络设备输出的时刻为止,第一报文在第一网络设备内部经历的实际时间。
可选的,第一报文的第一时间信息包括第一报文在第一网络设备的第一参考时刻、第一报文从第一网络设备输出的时刻和第一报文的第一理论时间上限。
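上述第一剩余处理时间的计算关系可以示意如下(各时刻均以同一时钟计,参数名为假设,仅为示意性草图):

```python
def first_remaining_time(theory_upper_bound, ref_time, output_time):
    """第一剩余处理时间 = 第一理论时间上限 - 第一实际时间。
    其中第一实际时间 = 报文从第一网络设备输出的时刻 - 第一参考时刻,
    即报文在第一网络设备内部实际经历的时间。"""
    actual = output_time - ref_time       # 第一实际时间
    return theory_upper_bound - actual    # 第一剩余处理时间
```

例如,理论时间上限为10、第一参考时刻为2、输出时刻为7时,剩余处理时间为10-(7-2)=5。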
1304、第一网络设备根据第一队列系统包括的多个队列的调度规则,对第一目标队列进行处理。
在一些实施例中,图13还包括步骤1304a至步骤1304d。
1304a、第一网络设备在第二时刻接收网络中的第二报文。
第二报文为第二数据流的第二突发的首个报文,第二突发为第一网络设备接收的第二数据流包括的多个突发中的一个突发,第二突发包括一个或多个报文。
第二数据流包括的多个突发中相邻的两个突发到达第一网络设备的第三时间间隔相等,第三时间间隔为第一时间间隔的整数倍。
1304b、第一网络设备根据第二时刻从第一队列系统包括的多个队列中确定第二目标队列;
1304c、第一网络设备按照第二突发包括的一个或多个报文的顺序将第二突发包括的一个或多个报文加入第二目标队列。
1304d、第一网络设备根据第一队列系统包括的多个队列的调度规则,对第二目标队列进行处理。
在一些实施例中,第一目标队列加入N个突发包括的报文,N个突发包括第一突发,N个突发中每个突发对应一个数据流且N个突发中不同突发对应的数据流不同,N个突发的比特数小于第一目标队列所能容纳的比特数,第一目标队列所能容纳的比特数等于第一网络设备的端口速率乘以第一目标队列的开启时间与第一目标队列的结束时间之间的时间间隔。
上述步骤1301至步骤1304示出了第一网络设备对第一数据流的第一突发的处理过程。对于第一数据流的其他突发的处理过程类似,这里不一一说明。
需要说明的是,第一网络设备为对第一数据流进行处理的首跳网络设备。第一数据流经过第一网络设备,再经过中间节点设备,最后传输到对第一数据流进行处理的末跳网络设备,即第二网络设备。其中,第一数据流经过中间节点设备的处理过程可以参阅前述图4A所示的实施例中的相关介绍。下面结合步骤1305至步骤1308介绍末跳网络设备对第一数据流的处理过程。
1305、第二网络设备接收第一数据流。
第一数据流包括一个或多个突发,该多个突发中的第一突发包括一个或多个报文,该多个突发中的第三突发包括一个或多个报文,第一突发和第三突发为第一数据流中相邻的两个突发。第二网络设备为对第一数据流包括的一个或多个报文进行处理的最后一跳网络设备。
1306、第二网络设备从第二网络设备的第二队列系统中确定第三目标队列和第四目标队列。
第二队列系统中相邻两个队列的开启时间之间的第六时间间隔等于第一时间间隔,第一时间间隔为第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的时间间隔。
其中,第三目标队列和第四目标队列为相邻或不相邻的两个队列。
第二网络设备将第一突发包括的一个或多个报文释放给第二队列系统的时刻与第二网络设备将第三突发包括的一个或多个报文释放给第二队列系统的时刻之间的时间间隔为第四时间间隔。
第三目标队列的开启时间与第四目标队列的开启时间之间的时间间隔为第五时间间隔。第四时间间隔等于第五时间间隔,第五时间间隔等于第二时间间隔,第二时间间隔为第一数据流包括的多个突发中相邻的两个突发到达第一网络设备的时间间隔。
在一些实施例中,第二网络设备从第二网络设备的第二队列系统中确定第三目标队列包括:第二网络设备确定第一目标队列,第一目标队列为第一网络设备中第一突发包括的一个或多个报文加入的队列;然后,第二网络设备根据第一映射关系从第二队列系统中确定第一目标队列对应的第三目标队列;第一映射关系包括第一网络设备的第一队列系统中的队列与第二队列系统中的队列之间的映射关系。
在一些实施例中,第一突发的首个报文包括第一目标队列的队列信息;第二网络设备确定第一目标队列包括:第二网络设备根据第一目标队列的队列信息确定第一目标队列。
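其中,第二网络设备根据第一映射关系由第一目标队列确定第三目标队列的过程可示意如下(映射表的内容为假设,仅为示意):

```python
def find_third_queue(first_queue_id, queue_mapping):
    """根据第一映射关系(第一队列系统中的队列 -> 第二队列系统中的队列),
    由第一突发在第一网络设备中加入的第一目标队列,
    确定其在第二网络设备的第二队列系统中对应的第三目标队列。"""
    return queue_mapping[first_queue_id]
```

例如,第一映射关系将入口队列x映射到出口队列y、入口队列x+1映射到出口队列y+1时,加入队列x的突发在第二网络设备被加入队列y。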
1307、第二网络设备按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第三目标队列,按照第三突发包括的一个或多个报文的顺序将第三突发包括的一个或多个报文加入第四目标队列。
1308、第二网络设备根据第三目标队列的调度规则和第四目标队列的调度规则,对第三目标队列和第四目标队列进行处理。
在一些实施例中,第一数据流包括的多个突发的比特数相同。第一突发包括的多个报文大小相同。
在一些实施例中,第三目标队列加入N个突发包括的报文,N个突发包括第一突发,N个突发中每个突发对应一个数据流,N个突发中不同突发对应的数据流不同;N个突发对应N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
在一些实施例中,上述步骤1305至步骤1308替换为步骤1309至步骤1312。
1309、第二网络设备接收第二数据流。
第二数据流包括一个或多个突发,多个突发中的第二突发包括一个或多个报文,第二数据流到达第二网络设备的时刻在所述第一数据流的首个突发到达第二网络设备的时刻之后,并且在第一数据流的最后一个突发到达第二网络设备的时刻之前。
1310、第二网络设备从第二队列系统中选择第一队列组,按照第一数据流包括的一个或多个突发的顺序将第一数据流包括的一个或多个突发加入第一队列组;
1311、第二网络设备从第二队列系统中选择第二队列组,按照第二数据流包括的一个或多个突发的顺序将第二数据流包括的一个或多个突发加入第二队列组;
第一队列组的优先级高于第二队列组的优先级。
1312、第二网络设备根据第二队列系统的多个队列的调度规则,对第一队列组和所述第二队列组进行处理。
在一些实施例中,第一数据流包括的多个突发的比特数相同。第一突发包括的多个报文大小相同。
请参阅图14,图14是根据本申请实施例提供的一种第一网络设备的示意性结构框图。如图14所示的第一网络设备1400包括接收单元1401、处理单元1402和发送单元1403。
接收单元1401,用于在第一时刻接收网络中的第一报文,该第一报文为第一数据流的第一突发的首个报文,第一突发为第一网络设备接收的第一数据流包括的多个突发中的一个突发,第一突发包括一个或多个报文,第一网络设备为对第一数据流包括的一个或多个报文进行处理的首跳网络设备;
处理单元1402,用于根据第一时刻从第一队列系统包括的多个队列中确定第一目标队列;按照第一突发包括的一个或多个报文的顺序将第一突发包括的一个或多个报文加入第一目标队列;
发送单元1403,用于根据多个队列的调度规则,对第一目标队列进行处理。
一种可能的实现方式中,第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。
另一种可能的实现方式中,第一数据流包括的多个突发中相邻的两个突发到达第一网络设备的第二时间间隔相等,第二时间间隔为第一时间间隔的整数倍。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
另一种可能的实现方式中,该第一突发包括的多个报文大小相同。
另一种可能的实现方式中,该接收单元1401还用于:
在第二时刻接收网络中的第二报文,该第二报文为第二数据流的第二突发的首个报文,该第二突发为该第一网络设备接收的该第二数据流包括的多个突发中的一个突发,该第二突发包括一个或多个报文;
该处理单元1402还用于:
根据该第二时刻从该第一队列系统包括的多个队列中确定第二目标队列;
该第二目标队列为该第一目标队列,或者,该第二目标队列位于该第一目标队列之后;以及,该第一目标队列为该第一队列系统的最后一个队列,或者,该第一目标队列位于该第一队列系统的最后一个队列之前。
另一种可能的实现方式中,第二数据流包括的多个突发中相邻的两个突发到达该第一网络设备的第三时间间隔相等,第三时间间隔为第一时间间隔的整数倍。
另一种可能的实现方式中,第一目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流且该N个突发中不同突发对应的数据流不同,该N个突发的比特数小于该第一目标队列所能容纳的比特数,该第一目标队列所能容纳的比特数等于该第一网络设备的端口速率乘以该第一目标队列的开启时间与该第一目标队列的结束时间之间的时间间隔。
另一种可能的实现方式中,第一报文包括第一目标队列的队列信息;或者,第一突发包括的一个或多个报文分别包括第一目标队列的队列信息。
另一种可能的实现方式中,第一目标队列的队列信息包括第一目标队列的队列编号。
另一种可能的实现方式中,第一突发包括的一个或多个报文分别还包括用于指示第二网络设备加入第一突发包括的一个或多个报文的队列所属的队列组编号,该第二网络设备为对第一数据流包括的一个或多个报文进行处理的最后一跳网络设备。
另一种可能的实现方式中,第一突发包括的一个或多个报文中每个报文包括该每个报文的第一时间信息,第一时间信息用于指示该每个报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该每个报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为从第一参考时刻开始至第二参考时刻为止,该每个报文经过网络设备内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该每个报文给该第一队列系统的参考时刻,或者,该第一参考时刻为该第一网络设备接收到该每个报文的时刻;该第二参考时刻为该每个报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻;该第一实际时间为从该第一参考时刻起至该每个报文从该第一网络设备输出的时刻为止,该每个报文在该第一网络设备内部经历的实际时间。
另一种可能的实现方式中,第一时间信息包括每个报文的第一参考时刻以及每个报文从第一网络设备输出的时刻。
另一种可能的实现方式中,第一时间信息还包括每个报文的第一理论时间上限。
另一种可能的实现方式中,第一报文包括该第一报文的第一时间信息,该第一时间信息用于指示该第一报文的第一剩余处理时间,该第一剩余处理时间为该第一网络设备处理该第一报文的第一理论时间上限和第一实际时间的差;该第一理论时间上限为从第一参考时刻开始至第二参考时刻为止,该第一报文经过网络设备内部经历的理论时间上限;该第一参考时刻为该第一网络设备释放该第一报文给该第一队列系统的参考时刻,该第二参考时刻为该第一报文进入对该第一突发包括的一个或多个报文进行处理的第二个网络设备的队列系统的参考时刻。
另一种可能的实现方式中,第一时间信息包括第一报文的第一参考时刻和第一报文从第一网络设备输出的时刻。
另一种可能的实现方式中,第一时间信息还包括第一报文的第一理论时间上限。
请参阅图15,图15是根据本申请实施例提供的一种第二网络设备的示意性结构框图。如图15所示的第二网络设备1500包括接收单元1501、处理单元1502和发送单元1503。
接收单元1501,用于接收第一数据流,该第一数据流包括一个或多个突发,该多个突发中的第一突发包括一个或多个报文,该多个突发中的第三突发包括一个或多个报文,该第一突发和该第三突发为第一数据流中相邻的两个突发,该第二网络设备为对该第一数据流包括的一个或多个报文进行处理的最后一跳网络设备;
处理单元1502,用于从该第二网络设备的第二队列系统中确定第三目标队列和第四目标队列;按照该第一突发包括的一个或多个报文的顺序将该第一突发包括的一个或多个报文加入该第三目标队列;该第二网络设备按照该第三突发包括的一个或多个报文的顺序将该第三突发包括的一个或多个报文加入该第四目标队列;
发送单元1503,用于根据该第三目标队列和该第四目标队列的调度规则,对该第三目标队列和该第四目标队列进行处理。
一种可能的实现方式中,第三目标队列和所述第四目标队列为该第二队列系统中相邻或不相邻的两个队列。
另一种可能的实现方式中,该第二网络设备释放该第一突发包括的一个或多个报文给该第二队列系统的时刻和该第二网络设备释放该第三突发包括的一个或多个报文给该第二队列系统的时刻之间的时间间隔为第四时间间隔,该第三目标队列的开启时间与该第四目标队列的开启时间之间的时间间隔为第五时间间隔,该第四时间间隔与该第五时间间隔相等。
另一种可能的实现方式中,该接收单元1501还用于:
接收第二数据流,该第二数据流包括一个或多个突发,该多个突发中的第二突发包括一个或多个报文,该第二数据流到达该第二网络设备的时刻在该第一数据流的首个突发到达该第二网络设备的时刻之后,并且在该第一数据流的最后一个突发到达该第二网络设备的时刻之前;
该处理单元1502还用于:
从该第二队列系统中选择第一队列组,按照该第一数据流包括的一个或多个突发的顺序将该第一数据流包括的一个或多个突发加入该第一队列组;从该第二队列系统中选择第二队列组,按照该第二数据流包括的一个或多个突发的顺序将该第二数据流包括的一个或多个突发加入该第二队列组;该第一队列组的优先级高于该第二队列组的优先级;
该发送单元1503还用于:
根据该第二队列系统的多个队列的调度规则,对该第一队列组和该第二队列组进行处理。
另一种可能的实现方式中,该处理单元1502具体用于:
确定第一目标队列,第一目标队列为第一网络设备中第一突发包括的一个或多个报文加入的队列,第一网络设备为对第一数据流包括一个或多个报文进行处理的首跳网络设备;
根据第一映射关系从第二队列系统中确定第一目标队列对应的第三目标队列,第一映射关系包括第一网络设备的第一队列系统中的队列与第二队列系统中的队列之间的映射关系。
另一种可能的实现方式中,第一突发的首个报文包括该第一目标队列的队列信息;该处理单元具体用于:
根据该第一目标队列的队列信息确定第一目标队列。
另一种可能的实现方式中,第三目标队列加入N个突发包括的报文,该N个突发包括第一突发,该N个突发中每个突发对应一个数据流,该N个突发中不同突发对应的数据流不同;该N个突发对应N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
另一种可能的实现方式中,第一数据流包括的多个突发的比特数相同。
另一种可能的实现方式中,该第一突发包括的多个报文大小相同。
本申请实施例还提供了一种处理装置,包括处理器和接口。所述处理器可用于执行上述方法实施例中的方法。
应理解,上述处理装置可以是一个芯片。例如,该处理装置可以是现场可编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processor unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程控制器(programmable logic device,PLD)、其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,或其他集成芯片。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令或程序代码完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令或程序代码完成。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
根据本申请实施例提供的方法,本申请还提供一种网络系统。请参阅图16,图16为本申请实施例网络系统的一个示意图。该网络系统包括如图14所示的第一网络设备和如图15所示的第二网络设备。图14所示的第一网络设备用于执行前述方法实施例中第一网络设备执行的部分或全部步骤。图15所示的第二网络设备用于执行前述方法实施例中第二网络设备执行的部分或全部步骤。
根据本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产品包括:计算机程序代码,当该计算机程序代码在计算机上运行时,使得该计算机执行上述任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种计算机可读介质,该计算机可读介质存储有程序代码,当该程序代码在计算机上运行时,使得该计算机执行上述任意一个实施例的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
Claims (28)
- 一种报文处理方法,其特征在于,所述方法包括:第一网络设备在第一时刻接收网络中的第一报文,所述第一报文为第一数据流的第一突发的首个报文,所述第一突发为所述第一网络设备接收的所述第一数据流包括的多个突发中的一个突发,所述第一突发包括一个或多个报文,所述第一网络设备为对所述第一数据流包括的一个或多个报文进行处理的首跳网络设备;所述第一网络设备根据所述第一时刻从所述第一网络设备的第一队列系统包括的多个队列中确定第一目标队列;所述第一网络设备按照所述第一突发包括的一个或多个报文的顺序将所述第一突发包括的一个或多个报文加入所述第一目标队列;所述第一网络设备根据所述多个队列的调度规则,对所述第一目标队列进行处理。
- 根据权利要求1所述的方法,其特征在于,所述第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。
- 根据权利要求1或2所述的方法,其特征在于,所述第一数据流包括的多个突发中相邻的两个突发到达所述第一网络设备的第二时间间隔相等,所述第二时间间隔为所述第一时间间隔的整数倍。
- 根据权利要求1至3中任一项所述的方法,其特征在于,所述第一数据流包括的多个突发的比特数相同。
- 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:所述第一网络设备在第二时刻接收网络中的第二报文,所述第二报文为第二数据流的第二突发的首个报文,所述第二突发为所述第一网络设备接收的所述第二数据流包括的多个突发中的一个突发,所述第二突发包括一个或多个报文;所述第一网络设备根据所述第二时刻从所述第一队列系统包括的多个队列中确定第二目标队列;所述第二目标队列为所述第一目标队列,或者,所述第二目标队列位于所述第一目标队列之后;以及,所述第一目标队列为所述第一队列系统的最后一个队列,或者,所述第一目标队列为所述第一队列系统的最后一个队列之前。
- 根据权利要求5所述的方法,其特征在于,所述第二数据流包括的多个突发中相邻的两个突发到达所述第一网络设备的第三时间间隔相等,所述第三时间间隔为所述第一时间间隔的整数倍。
- 一种报文处理方法,其特征在于,所述方法包括:第二网络设备接收第一数据流,所述第一数据流包括一个或多个突发,所述多个突发中的第一突发包括一个或多个报文,所述多个突发中的第三突发包括一个或多个报文,所述第一突发和所述第三突发为第一数据流中相邻的两个突发,所述第二网络设备为对所述第一数据流包括的一个或多个报文进行处理的最后一跳网络设备;所述第二网络设备从所述第二网络设备的第二队列系统中确定第三目标队列和第四目标队列;所述第二网络设备按照所述第一突发包括的一个或多个报文的顺序将所述第一突发包括的一个或多个报文加入所述第三目标队列;所述第二网络设备按照所述第三突发包括的一个或多个报文的顺序将所述第三突发包括的一个或多个报文加入所述第四目标队列;所述第二网络设备根据所述第三目标队列和所述第四目标队列的调度规则,对所述第三目标队列和所述第四目标队列进行处理。
- 根据权利要求7所述的方法,其特征在于,所述第二网络设备释放所述第一突发包括的一个或多个报文给所述第二队列系统的时刻和所述第二网络设备释放所述第三突发包括的一个或多个报文给所述第二队列系统的时刻之间的时间间隔为第四时间间隔,所述第三目标队列的开启时间与所述第四目标队列的开启时间之间的时间间隔为第五时间间隔,所述第四时间间隔与所述第五时间间隔相等。
- 根据权利要求7或8所述的方法,其特征在于,所述方法还包括:所述第二网络设备接收第二数据流,所述第二数据流包括一个或多个突发,所述多个突发中的第二突发包括一个或多个报文,所述第二数据流到达所述第二网络设备的时刻在所述第一数据流的首个突发到达所述第二网络设备的时刻之后,并且在所述第一数据流的最后一个突发到达所述第二网络设备的时刻之前;所述第二网络设备从所述第二队列系统中选择第一队列组,按照所述第一数据流包括的一个或多个突发的顺序将所述第一数据流包括的一个或多个突发加入所述第一队列组;所述第二网络设备从所述第二队列系统中选择第二队列组,按照所述第二数据流包括的一个或多个突发的顺序将所述第二数据流包括的一个或多个突发加入所述第二队列组;所述第一队列组的优先级高于所述第二队列组的优先级;所述第二网络设备根据所述第二队列系统的多个队列的调度规则,对所述第一队列组和所述第二队列组进行处理。
- 根据权利要求7至9中任一项所述的方法,其特征在于,所述第二网络设备从所述第二网络设备的第二队列系统中确定第三目标队列,包括:所述第二网络设备确定第一目标队列,所述第一目标队列为第一网络设备中所述第一突发包括的一个或多个报文加入的队列,所述第一网络设备为对所述第一数据流包括一个或多个报文进行处理的首跳网络设备;所述第二网络设备根据第一映射关系从所述第二队列系统中确定所述第一目标队列对应的所述第三目标队列,所述第一映射关系包括所述第一网络设备的第一队列系统中的队列与所述第二队列系统中的队列之间的映射关系。
- 根据权利要求7至10中任一项所述的方法,其特征在于,所述第三目标队列加入N个突发包括的报文,所述N个突发包括所述第一突发,所述N个突发中每个突发对应一个数据流,所述N个突发中不同突发对应的数据流不同;所述N个突发对应的N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
- 根据权利要求7至11中任一项所述的方法,其特征在于,所述第一数据流包括的多个突发的比特数相同。
- 一种第一网络设备,其特征在于,所述第一网络设备包括:接收单元,用于在第一时刻接收网络中的第一报文,所述第一报文为第一数据流的第一突发的首个报文,所述第一突发为所述第一网络设备接收的所述第一数据流包括的多个突发中的一个突发,所述第一突发包括一个或多个报文,所述第一网络设备为对所述第一数据流包括的一个或多个报文进行处理的首跳网络设备;处理单元,用于根据所述第一时刻从所述第一网络设备的第一队列系统包括的多个队列中确定第一目标队列;按照所述第一突发包括的一个或多个报文的顺序将所述第一突发包括的一个或多个报文加入所述第一目标队列;发送单元,用于根据所述多个队列的调度规则,对所述第一目标队列进行处理。
- 根据权利要求13所述的第一网络设备,其特征在于,所述第一队列系统包括的多个队列中相邻的两个队列的开启时间之间的第一时间间隔相等。
- 根据权利要求13或14所述的第一网络设备,其特征在于,所述第一数据流包括的多个突发中相邻的两个突发到达所述第一网络设备的第二时间间隔相等,所述第二时间间隔为所述第一时间间隔的整数倍。
- 根据权利要求13至15中任一项所述的第一网络设备,其特征在于,所述第一数据流包括的多个突发的比特数相同。
- 根据权利要求13至16中任一项所述的第一网络设备,其特征在于,所述接收单元还用于:在第二时刻接收网络中的第二报文,所述第二报文为第二数据流的第二突发的首个报文,所述第二突发为所述第一网络设备接收的所述第二数据流包括的多个突发中的一个突发,所述第二突发包括一个或多个报文;所述处理单元,用于根据所述第二时刻从所述第一队列系统包括的多个队列中确定第二目标队列;所述第二目标队列为所述第一目标队列,或者,所述第二目标队列位于所述第一目标队列之后;以及,所述第一目标队列为所述第一队列系统的最后一个队列,或者,所述第一目标队列为所述第一队列系统的最后一个队列之前。
- 根据权利要求17所述的第一网络设备,其特征在于,所述第二数据流包括的多个突发中相邻的两个突发到达所述第一网络设备的第三时间间隔相等,所述第三时间间隔为所述第一时间间隔的整数倍。
- 一种第二网络设备,其特征在于,所述第二网络设备包括:接收单元,用于接收第一数据流,所述第一数据流包括一个或多个突发,所述多个突发中的第一突发包括一个或多个报文,所述多个突发中的第三突发包括一个或多个报文,所述第一突发和所述第三突发为第一数据流中相邻的两个突发,所述第二网络设备为对所述第一数据流包括的一个或多个报文进行处理的最后一跳网络设备;处理单元,用于从所述第二网络设备的第二队列系统中确定第三目标队列和第四目标队列;按照所述第一突发包括的一个或多个报文的顺序将所述第一突发包括的一个或多个报文加入所述第三目标队列;按照所述第三突发包括的一个或多个报文的顺序将所述第三突发包括的一个或多个报文加入所述第四目标队列;发送单元,用于根据所述第三目标队列和所述第四目标队列的调度规则,对所述第三目标队列和所述第四目标队列进行处理。
- 根据权利要求19所述的第二网络设备,其特征在于,所述第二网络设备释放所述第一突发包括的一个或多个报文给所述第二队列系统的时刻和所述第二网络设备释放所述第三突发包括的一个或多个报文给所述第二队列系统的时刻之间的时间间隔为第四时间间隔,所述第三目标队列的开启时间与所述第四目标队列的开启时间之间的时间间隔为第五时间间隔,所述第四时间间隔与所述第五时间间隔相等。
- 根据权利要求19或20所述的第二网络设备,其特征在于,所述接收单元还用于:接收第二数据流,所述第二数据流包括一个或多个突发,所述多个突发中的第二突发包括一个或多个报文,所述第二数据流到达所述第二网络设备的时刻在所述第一数据流的首个突发到达所述第二网络设备的时刻之后,并且在所述第一数据流的最后一个突发到达所述第二网络设备的时刻之前;所述处理单元还用于:从所述第二队列系统中选择第一队列组,按照所述第一数据流包括的一个或多个突发的顺序将所述第一数据流包括的一个或多个突发加入所述第一队列组;从所述第二队列系统中选择第二队列组,按照所述第二数据流包括的一个或多个突发的顺序将所述第二数据流包括的一个或多个突发加入所述第二队列组;所述第一队列组的优先级高于所述第二队列组的优先级;所述发送单元还用于:根据所述第二队列系统的多个队列的调度规则,对所述第一队列组和所述第二队列组进行处理。
- 根据权利要求19至21中任一项所述的第二网络设备,其特征在于,所述处理单元具体用于:确定第一目标队列,所述第一目标队列为第一网络设备中所述第一突发包括的一个或多个报文加入的队列,所述第一网络设备为对所述第一数据流包括一个或多个报文进行处理的首跳网络设备;根据第一映射关系从所述第二队列系统中确定所述第一目标队列对应的所述第三目标队列,所述第一映射关系包括所述第一网络设备的第一队列系统中的队列与所述第二队列系统中的队列之间的映射关系。
- 根据权利要求19至22中任一项所述的第二网络设备,其特征在于,所述第三目标队列加入N个突发包括的报文,所述N个突发包括所述第一突发,所述N个突发中每个突发对应一个数据流,所述N个突发中不同突发对应的数据流不同;所述N个突发对应的N个队列组,N个队列组中每个队列组对应一个优先级,不同队列组的优先级不同。
- 根据权利要求19至23中任一项所述的第二网络设备,其特征在于,所述第一数据流包括的多个突发的比特数相同。
- 一种网络设备,其特征在于,所述网络设备包括处理器,用于执行存储器中存储的程序,当所述程序被执行时,使得所述网络设备执行如权利要求1至6中任一项所述的方法;或者,使得所述网络设备执行如权利要求7至12中任一项所述的方法。
- 根据权利要求25所述的网络设备,其特征在于,所述存储器位于所述网络设备之外。
- 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在计算机上运行时,使得如权利要求1至6中任一项所述的方法被执行;或者,使得如权利要求7至12中任一项所述的方法被执行。
- 一种网络设备,其特征在于,所述网络设备包括处理器、存储器以及存储在所述存储器上并可在所述处理器上运行的计算机指令,当所述计算机指令被运行时,使得所述网络设备执行如权利要求1至6中任一项所述的方法;或者,使得所述网络设备执行如权利要求7至12中任一项所述的方法。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21893833.0A EP4239975A4 (en) | 2020-11-17 | 2021-11-12 | PACKET PROCESSING METHOD AND RELATED APPARATUS |
US18/318,016 US20230283566A1 (en) | 2020-11-17 | 2023-05-16 | Packet processing method and related apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011287339.6A CN114513477A (zh) | 2020-11-17 | 2020-11-17 | 报文处理方法以及相关装置 |
CN202011287339.6 | 2020-11-17 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/318,016 Continuation US20230283566A1 (en) | 2020-11-17 | 2023-05-16 | Packet processing method and related apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022105686A1 true WO2022105686A1 (zh) | 2022-05-27 |
Family
ID=81546693
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/130315 WO2022105686A1 (zh) | 2020-11-17 | 2021-11-12 | 报文处理方法以及相关装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230283566A1 (zh) |
EP (1) | EP4239975A4 (zh) |
CN (1) | CN114513477A (zh) |
WO (1) | WO2022105686A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12068968B2 (en) * | 2022-01-30 | 2024-08-20 | Mellanox Technologies, Ltd. | Efficient scattering to buffers |
CN120301792A (zh) * | 2024-01-10 | 2025-07-11 | 华为技术有限公司 | 网络测量方法及相关设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030093267A1 (en) * | 2001-11-15 | 2003-05-15 | Microsoft Corporation | Presentation-quality buffering process for real-time audio |
CN1531276A (zh) * | 2003-03-13 | 2004-09-22 | 华为技术有限公司 | 用于消除ip语音数据抖动的自适应抖动缓存实现方法 |
CN1767457A (zh) * | 2004-10-27 | 2006-05-03 | 华为技术有限公司 | 一种ip网络抖动模拟的方法 |
CN108259383A (zh) * | 2016-12-29 | 2018-07-06 | 北京华为数字技术有限公司 | 一种数据的传输方法和网络设备 |
CN110086728A (zh) * | 2018-01-26 | 2019-08-02 | 华为技术有限公司 | 发送报文的方法、第一网络设备及计算机可读存储介质 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5282196A (en) * | 1991-10-15 | 1994-01-25 | Hughes Aircraft Company | Bursted and non-bursted data router |
US7298973B2 (en) * | 2003-04-16 | 2007-11-20 | Intel Corporation | Architecture, method and system of multiple high-speed servers to network in WDM based photonic burst-switched networks |
EP3734919A1 (en) * | 2019-04-30 | 2020-11-04 | Mitsubishi Electric R&D Centre Europe B.V. | In-band signalling for dynamic transmission time window control |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030093267A1 (en) * | 2001-11-15 | 2003-05-15 | Microsoft Corporation | Presentation-quality buffering process for real-time audio |
CN1531276A (zh) * | 2003-03-13 | 2004-09-22 | 华为技术有限公司 | 用于消除ip语音数据抖动的自适应抖动缓存实现方法 |
CN1767457A (zh) * | 2004-10-27 | 2006-05-03 | 华为技术有限公司 | 一种ip网络抖动模拟的方法 |
CN108259383A (zh) * | 2016-12-29 | 2018-07-06 | 北京华为数字技术有限公司 | 一种数据的传输方法和网络设备 |
CN110086728A (zh) * | 2018-01-26 | 2019-08-02 | 华为技术有限公司 | 发送报文的方法、第一网络设备及计算机可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20230283566A1 (en) | 2023-09-07 |
CN114513477A (zh) | 2022-05-17 |
EP4239975A4 (en) | 2024-04-24 |
EP4239975A1 (en) | 2023-09-06 |