CN114244738A - Switch cache scheduling method and system - Google Patents
- Publication number: CN114244738A (application CN202111544616.1A)
- Authority: CN (China)
- Prior art keywords: port, data, delay time, switch, flow
- Prior art date: 2021-12-16
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L 43/08 — Monitoring or testing data switching networks based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L 43/16 — Monitoring or testing data switching networks; threshold monitoring
- H04L 47/283 — Traffic control in data switching networks; flow control / congestion control in relation to timing considerations, in response to processing delays, e.g. caused by jitter or round trip time [RTT]
- H04L 47/29 — Traffic control in data switching networks; flow control / congestion control using a combination of thresholds
- H04L 49/90 — Packet switching elements; buffering arrangements
Abstract
The invention discloses a switch cache scheduling method and system. The method comprises: receiving and caching data packets; judging whether the data volume of the cached packets reaches an early warning value of the total cache amount; when the early warning value is reached, counting the input flow and the output flow of each port in the switch; calculating, from the input flow and the output flow, the delay time by which each port's data transmission is delayed and phase-shifted; and adjusting the data transmission time according to the delay time. By monitoring the cache occupancy in real time, the method judges whether the amount of buffered data exceeds the early warning value; when it does, the far-end port data of some ports is transmitted with a delayed phase shift according to the input and output conditions of the port data. Dynamically adjusting the transmission and reception of the communication ports improves cache utilization, making maximum use of the cache while guaranteeing communication efficiency and the stability of the communication data.
Description
Technical Field
The present application relates to switch cache scheduling, and more particularly to a method for scheduling data transmission by delaying the transmission time of ports.
Background
Store-and-forward is the general mode commonly adopted by switches to exchange data. Its working principle is that the main control unit of the switch caches the data packets received at an input port, checks them and filters out conflicting packets, then looks up the output port corresponding to the destination address in a lookup table and sends the packets. The cache is an important capability that allows the switch to store data packets, and the switch uses the cache to temporarily hold packets in a first-in, first-out manner.
When the traffic arriving at the input ports exceeds what the output ports can send, the cache fills up, packets overflow and are lost. Because the cache capacity of a switch is determined by the performance of its main IC, the existing solution is to replace that IC with one that has a larger cache capacity.
However, an IC with a larger cache increases the cost of the switch product. Moreover, replacing the IC in already-deployed devices not only causes a significant economic loss from discarding the original equipment, but also requires additional research and development effort to handle the replacement, resulting in further investment.
Disclosure of Invention
In order to solve the problem of insufficient cache capacity of the switch, the application provides a switch cache scheduling method.
A method for dispatching a switch cache comprises the following steps:
receiving and caching a data packet; judging whether the data volume of the cached data packets reaches an early warning value of the total cache amount; when the early warning value is reached, counting the input flow and the output flow of each port in the switch; calculating, from the input flow and the output flow, the delay time by which each port delays and phase-shifts its data transmission; and adjusting the data transmission time of a far-end port according to the delay time.
Further, calculating the delay time by which each port delays and phase-shifts its transmitted data according to the input flow and the output flow comprises: sorting the ports by input flow from largest to smallest, and calculating the delay time starting from the port with the second-largest input flow.
Further, calculating the delay time by which each port delays and phase-shifts its transmitted data according to the input flow and the output flow specifically comprises: calculating the number of ports to be controlled according to the input flow and the output flow of each port, and calculating the delay time of each port according to the input flow and the output flow of each port and the number of ports to be controlled.
Further, the number of ports to be controlled is calculated according to the input flow and the output flow of each port, and the method is realized by the following formula:
where Q_in represents the input flow, Q_out represents the output flow, i denotes the i-th port, P denotes the number of ports that need to be controlled, and I denotes the total number of ports of the switch.
Further, the delay time of each port is calculated according to the input flow and the output flow of each port and the number of ports to be controlled, and is realized by the following formula:
Further, adjusting the data sending time of the remote port according to the delay time specifically includes: and sending the delay time to a remote port corresponding to each port through a PAUSE frame, and adjusting the data sending time of the remote port according to the PAUSE frame.
Further, the early warning value is 70% to 90% of the total cache amount, and a remaining value obtained by subtracting the early warning value from the total cache amount is a reserved amount, where the reserved amount is greater than or equal to the data amount of the maximum data unit.
Further, the calculation method of the early warning value comprises the following steps:
wherein, Up represents the early warning value, and U represents the total amount of the buffer memory.
The invention also provides a switch cache scheduling system, which comprises:
the judging unit is used for receiving the data packet and judging whether the data volume of the data packet reaches an early warning value of the total cache volume;
the computing unit is used for counting the input flow and the output flow of each port in the switch and calculating, from the input flow and the output flow, the delay time by which each port delays and phase-shifts its data transmission;
and the adjusting unit is used for adjusting the data sending time according to the delay time.
Further, the computing unit specifically includes:
the port number calculating unit is used for calculating the number of ports to be controlled;
and the delay time calculation unit is used for calculating the delay time of each port.
The invention has the beneficial effects that:
when a large number of data packets enter the switch simultaneously, the buffer capacity is monitored in real time to judge whether the amount of buffered data exceeds the early warning value. When it does, the far-end port data of some ports is transmitted with a delayed phase shift according to the input and output conditions of the port data, and the transmission and reception of the communication ports are dynamically adjusted to improve cache utilization, making maximum use of the cache while guaranteeing communication efficiency and the stability of the communication data. In addition, the data communication capacity of the switch is improved without increasing the total cache, which effectively reduces the cache required by the switch, lowers networking cost, and avoids the re-development cost of replacing switches to improve network communication quality.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flow chart of the steps of the present scheme.
Detailed Description
In order to make the purpose, features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the embodiments described below are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The invention is further elucidated with reference to the drawings and the embodiments.
Example 1
A method for dispatching a switch cache, as shown in fig. 1, includes the following steps:
and S1, receiving the data packet and buffering the data packet. In this step, sometimes, checking the data packet, filtering the error and the duplicate data packets is also included, and checking the data packet is the prior art and is not described herein again.
S2, judging whether the data volume of the data packets reaches the early warning value of the total cache amount; when it does, executing steps S3-S5, otherwise transmitting according to the normal data transmission flow. Here, "reaching" the early warning value means that the data volume of the packets is greater than or equal to the early warning value.
S3, counting the input flow and the output flow of each port in the switch.
S4, calculating, from the input flow and the output flow of each port, the delay time by which each port delays and phase-shifts its transmitted data.
S5, adjusting the data transmission time according to the delay time and transmitting the data.
The dotted arrows in the figure indicate the prior-art data transmission process, in which a data packet is received, cached and sent directly without the cache scheduling method of this embodiment. The solid arrows indicate the flow of the present scheme.
The early warning value in step S2 is generally 70% to 90% of the total cache amount, and the remainder of the total cache amount after subtracting the early warning value is a reserved amount, which is greater than or equal to the data volume of the largest data unit. In this embodiment the early warning value is 80% of the total cache amount and the reserved amount must be at least 1518 bytes, because 1518 bytes is the size of the largest data unit in the TCP communication mode. Taking 80% of the total cache amount lets the cache trigger the subsequent steps as an early warning rather than only reacting once the cache capacity has overflowed, and the reserved portion leaves room for subsequent reception.
The calculation method comprises the following steps:
where Up represents the early warning value and U represents the total cache amount. The 1518 figure is in bytes while traffic is measured in bits, so to unify the two into the traffic unit the 1518 bytes must be multiplied by 8.
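Because the formula itself is not reproduced in the text above, the following minimal Python sketch only illustrates the constraints that are stated: a threshold of roughly 80% of the total cache and a reserve of at least one 1518-byte data unit converted to bits. The way warning_value combines the two is an assumption, not the patented formula.

```python
MAX_UNIT_BITS = 1518 * 8  # one largest data unit, converted from bytes to bits

def warning_value(total_cache_bits: int, ratio: float = 0.8) -> int:
    """Early warning value Up (assumed form): at most `ratio` of the total cache,
    and never so high that less than one largest data unit remains reserved."""
    return min(int(total_cache_bits * ratio), total_cache_bits - MAX_UNIT_BITS)

def reached_warning(buffered_bits: int, total_cache_bits: int) -> bool:
    # Step S2: "reaching" the warning value means greater than or equal to it.
    return buffered_bits >= warning_value(total_cache_bits)
```

For a 12 Mbit cache, for example, the 80% term gives Up = 9.6 Mbit, which is below the 12 Mbit − 12 144 bit reserve bound, so the percentage term dominates.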
The step of calculating the delay time in S4 specifically includes:
and calculating the number of ports to be controlled according to the input flow and the output flow of each port counted by the S3, and calculating the delay time of each port according to the output flow and the output flow of each port and the number of ports to be controlled.
In this step, the ports are sorted by input flow from largest to smallest, and the delay time is calculated starting from the port with the second-largest input flow; no delay time is calculated for the port with the largest input flow, and its data is not delayed. Not delaying the largest-flow port allows it to be received and forwarded in time, so delayed transmission is controlled starting from the port with the second-largest flow. First, letting the largest-flow port's data pass first frees up cache space sooner; second, the data on the largest-flow port is generally the most important.
Calculating the number of ports to be controlled according to the input flow and the output flow of each port, which specifically comprises the following steps:
where Q_in represents the input flow within the current 1 ms, Q_out represents the output flow within the current 1 ms, P denotes the number of ports that need to be controlled, i denotes the i-th port, and I denotes the total number of ports of the switch. The 1518 figure is in bytes while flow is measured in bits, so the 1518 bytes must be multiplied by 8 to unify the units. The number of ports to be controlled is governed by the difference between the input and output flow: the larger the difference, the more ports need to be controlled.
The number of ports to be controlled is calculated so that ports with smaller flow can be left uncontrolled and only the ports with larger flow are phase-shift controlled, rather than controlling every port, which saves calculation time and simplifies the procedure.
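The exact expression for P is likewise not reproduced above; the sketch below is an assumption that follows the stated behaviour — P grows with the gap between total input and output flow over the current 1 ms, the 1518-byte unit is converted to bits, and the largest-flow port is never counted among the controlled ports.

```python
import math

MAX_UNIT_BITS = 1518 * 8

def ports_to_control(q_in: list[int], q_out: list[int]) -> int:
    """q_in[i] / q_out[i]: input / output flow of port i over the current 1 ms, in bits.
    Returns P, the number of ports to delay (assumed formula)."""
    surplus = sum(q_in) - sum(q_out)        # data that cannot leave the switch this interval
    if surplus <= 0:
        return 0                            # output keeps up with input: nothing to control
    p = math.ceil(surplus / MAX_UNIT_BITS)  # one controlled port per surplus data unit (assumption)
    return min(p, len(q_in) - 1)            # the largest-flow port is never delayed
```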
Calculating the delay time of each port according to the input flow and the output flow of each port and the number of ports to be controlled specifically comprises:
where j = 1, 2, 3, …, P, and V denotes the transmission rate of the port. The calculation starts from the port with the second-largest flow, i.e. j = 1, and proceeds up to the P-th controlled port. The term 1518 × 8 is added to the numerator to ensure that at least one data unit is delayed.
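Again the formula is not reproduced in the text, so the expression below is only an assumption consistent with what is stated: the delay is computed for the ports ranked 2nd to (P+1)-th by input flow (j = 1 … P), it depends on the port's input and output flow and its transmission rate V, and 1518 × 8 bits are added to the numerator so that at least one data unit is delayed.

```python
MAX_UNIT_BITS = 1518 * 8

def delay_times(ranked_ports: list[tuple[int, int, int]], p: int) -> list[float]:
    """ranked_ports: (q_in, q_out, v) per port, sorted by q_in in descending order,
    with flows in bits per 1 ms and v the port transmission rate in bits per second.
    Returns the delay times t_1 ... t_P in seconds (assumed formula)."""
    times = []
    for j in range(1, p + 1):                        # index 0 is the largest-flow port: skipped
        q_in, q_out, v = ranked_ports[j]
        backlog_bits = max(q_in - q_out, 0) + MAX_UNIT_BITS
        times.append(backlog_bits / v)               # time to drain the backlog at line rate
    return times
```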
In S5, adjusting the data sending time according to the delay time specifically includes:
according to delay time tjAnd transmitting the PAUSE frame to a far-end port corresponding to each port, and adjusting the data transmission time of the far-end port according to the PAUSE frame to realize the phase shift of data transmission. The PAUSE frame is a standard protocol frame, the phase shift is a concept of a phase, and the transmission time of 1518byte is a phase in this embodiment.
The 1518 bytes in this embodiment is the storage space of the largest data unit and serves as a default value; in other embodiments it may be fine-tuned according to the actual situation to suit different switches' requirements on packet loss and delay.
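A standard IEEE 802.3x PAUSE frame carries the pause duration in a 16-bit field measured in quanta of 512 bit times, so a switch applying this step would need to translate t_j into that unit before sending the frame to the far-end port. The rounding policy below is an assumption; frame construction itself is hardware-specific and omitted.

```python
import math

PAUSE_QUANTUM_BITS = 512          # one IEEE 802.3x pause quantum = 512 bit times
MAX_PAUSE_QUANTA = 0xFFFF         # pause_time is a 16-bit field

def delay_to_pause_quanta(delay_seconds: float, link_rate_bps: int) -> int:
    """Convert a delay time t_j into a pause_time value for the PAUSE frame."""
    bit_times = delay_seconds * link_rate_bps
    quanta = math.ceil(bit_times / PAUSE_QUANTUM_BITS)   # round up: never pause shorter than t_j
    return min(quanta, MAX_PAUSE_QUANTA)
```

At 1 Gbit/s, for example, one quantum corresponds to 512 ns, so a delay of 100 µs maps to ⌈100 000 / 512⌉ = 196 quanta.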
Example 2
This embodiment provides a switch cache scheduling system, comprising:
and the judging unit is used for receiving the data packet and judging whether the data volume of the data packet reaches the early warning value of the total cache volume.
After the judging unit determines that the data volume of the data packets exceeds the early warning value, a cache scheduling procedure is started and the computing unit is called to calculate the delay time of each port.
The computing unit is used for counting the input flow and the output flow of each port in the switch and calculating, from the input flow and the output flow, the delay time by which each port delays and phase-shifts its data transmission;
the calculation unit specifically includes:
the port number calculating unit is used for calculating the number of ports to be controlled;
and the delay time calculation unit is used for calculating the delay time of each port.
After the port number calculation unit has calculated the number of ports to be controlled, that number is sent to the delay time calculation unit, which calculates the delay time of each port accordingly and sends the delay times to the adjusting unit.
And the adjusting unit is used for adjusting the data sending time according to the delay time.
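To tie Example 2 together, the sketch below mirrors the unit decomposition (judging unit, computing unit with its port-number and delay-time sub-units, adjusting unit) using the helper functions from the earlier sketches. All class and function names are illustrative assumptions; the patent does not prescribe an API, and send_pause_frame is only a placeholder for the hardware-specific frame transmission.

```python
class JudgingUnit:
    """Receives packets and checks the cache occupancy against the early warning value."""
    def __init__(self, total_cache_bits: int):
        self.total_cache_bits = total_cache_bits

    def reached_warning(self, buffered_bits: int) -> bool:
        return buffered_bits >= warning_value(self.total_cache_bits)   # sketch from step S2

class ComputingUnit:
    """Wraps the port-number calculating unit and the delay-time calculating unit."""
    def compute(self, q_in, q_out, rates):
        p = ports_to_control(q_in, q_out)                               # port-number sub-unit
        ranked = sorted(zip(q_in, q_out, rates), key=lambda t: t[0], reverse=True)
        return delay_times(ranked, p)                                   # delay-time sub-unit

class AdjustingUnit:
    """Pushes each delay time to the corresponding far-end port via a PAUSE frame."""
    def adjust(self, times, link_rate_bps: int = 1_000_000_000):
        for j, t in enumerate(times, start=1):
            send_pause_frame(j, delay_to_pause_quanta(t, link_rate_bps))

def send_pause_frame(port_index: int, pause_quanta: int) -> None:
    print(f"port {port_index}: PAUSE for {pause_quanta} quanta")        # placeholder only
```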
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed.
The units may or may not be physically separate, and components displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Claims (10)
1. A method for dispatching a switch cache is characterized by comprising the following steps:
receiving and caching a data packet; judging whether the data volume of the cached data packets reaches an early warning value of the total cache amount; when the early warning value is reached, counting the input flow and the output flow of each port in the switch; calculating, from the input flow and the output flow, the delay time by which each port delays and phase-shifts its data transmission; and adjusting the data transmission time of a far-end port according to the delay time.
2. The method for dispatching the switch buffer according to claim 1, wherein calculating the delay time for delaying and phase-shifting the transmission data of each port according to the input flow and the output flow comprises: sorting the ports by input flow from largest to smallest, and calculating the delay time starting from the port with the second-largest input flow.
3. The method for dispatching the switch cache according to claim 1, wherein calculating the delay time for delaying and phase-shifting the transmission data of each port according to the input traffic and the output traffic comprises:
and calculating the number of ports to be controlled according to the input flow and the output flow of each port, and calculating the delay time of each port according to the output flow and the output flow of each port and the number of ports to be controlled.
4. The switch cache scheduling method according to claim 3, wherein the number of ports to be controlled is calculated according to the input traffic and the output traffic of each port, and is implemented by the following formula:
where Q_in represents the input flow, Q_out represents the output flow, i denotes the i-th port, P denotes the number of ports that need to be controlled, and I denotes the total number of ports of the switch.
5. The switch cache scheduling method according to claim 4, wherein the delay time of each port is calculated according to the input flow and the output flow of each port and the number of ports to be controlled, and is implemented by the following formula:
where j = 1, 2, 3, …, P, and V denotes the transmission rate of the port.
6. The method for dispatching the cache of the switch according to claim 1, wherein the adjusting the data transmission time of the remote port according to the delay time specifically comprises: and sending the delay time to a remote port corresponding to each port through a PAUSE frame, and adjusting the data sending time of the remote port according to the PAUSE frame.
7. The switch cache scheduling method according to claim 1, wherein the warning value is 70% to 90% of the total cache amount, and a value remaining after subtracting the warning value from the total cache amount is a reserved amount, and the reserved amount is greater than or equal to a data amount of a maximum data unit.
9. A switch cache scheduling system, comprising:
the judging unit is used for receiving the data packet and judging whether the data volume of the data packet reaches an early warning value of the total cache volume;
the computing unit is used for counting the input flow and the output flow of each port in the switch and calculating, from the input flow and the output flow, the delay time by which each port delays and phase-shifts its data transmission;
and the adjusting unit is used for adjusting the data sending time according to the delay time.
10. The switch cache scheduling system according to claim 9, wherein the computing unit specifically includes:
the port number calculating unit is used for calculating the number of ports to be controlled;
and the delay time calculation unit is used for calculating the delay time of each port.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111544616.1A CN114244738B (en) | 2021-12-16 | 2021-12-16 | Switch cache scheduling method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114244738A true CN114244738A (en) | 2022-03-25 |
CN114244738B CN114244738B (en) | 2024-07-19 |
Family ID: 80757584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111544616.1A Active CN114244738B (en) | 2021-12-16 | 2021-12-16 | Switch cache scheduling method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114244738B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5901140A (en) * | 1993-10-23 | 1999-05-04 | International Business Machines Corporation | Selective congestion control mechanism for information networks |
US6118761A (en) * | 1997-12-18 | 2000-09-12 | Advanced Micro Devices, Inc. | Apparatus and method for generating rate control frames in a workgroup switch based on traffic contribution from a network switch port |
CN102932263A (en) * | 2012-06-29 | 2013-02-13 | 浙江宇视科技有限公司 | Access terminal |
CN102932267A (en) * | 2012-11-22 | 2013-02-13 | 合肥华云通信技术有限公司 | Distributed flow control method for Ethernet switch |
CN103023806A (en) * | 2012-12-18 | 2013-04-03 | 武汉烽火网络有限责任公司 | Control method and control device of cache resource of shared cache type Ethernet switch |
JP2015231137A (en) * | 2014-06-05 | 2015-12-21 | 株式会社日立製作所 | Transfer control device, computer system and management device |
CN104852863A (en) * | 2015-04-15 | 2015-08-19 | 清华大学 | Method and device for managing dynamic threshold in switch of shared cache |
CN107948103A (en) * | 2017-11-29 | 2018-04-20 | 南京大学 | A kind of interchanger PFC control methods and control system based on prediction |
CN110855580A (en) * | 2019-11-09 | 2020-02-28 | 许继集团有限公司 | Mirror processing method for relay protection service in station and switching equipment |
Non-Patent Citations (2)
Title |
---|
BAHAREH PAHLEVANZADEH et al.: "New approach for flow control using PAUSE frame management", IEEE, 18 February 2009 (2009-02-18) *
MA Hongwei, QIAN Hualin: "Research on buffer management schemes for input-buffered switches" (输入缓冲交换机的缓冲管理方案研究), Microelectronics & Computer (微电子学与计算机), no. 12 *
Also Published As
Publication number | Publication date |
---|---|
CN114244738B (en) | 2024-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7872973B2 (en) | Method and system for using a queuing device as a lossless stage in a network device in a communications network | |
JP2870569B2 (en) | Congestion processing method and congestion processing circuit in frame relay switching equipment | |
US7260104B2 (en) | Deferred queuing in a buffered switch | |
CN109120544B (en) | A transmission control method based on host-side traffic scheduling in a data center network | |
CN100574310C (en) | A kind of credit flow control method | |
CN112953848B (en) | Traffic supervision method, system and equipment based on strict priority | |
US8040889B2 (en) | Packet forwarding device | |
CN108243116B (en) | Flow control method and switching equipment | |
US8514741B2 (en) | Packet forwarding device | |
CN107948103A (en) | A kind of interchanger PFC control methods and control system based on prediction | |
CN111224888A (en) | Method for sending message and message forwarding device | |
CN103023806A (en) | Control method and control device of cache resource of shared cache type Ethernet switch | |
JPH09130400A (en) | Priority control system | |
CN111400206A (en) | Cache Management Method Based on Dynamic Virtual Threshold | |
CN112565102A (en) | Load balancing method, device, equipment and medium | |
CN103888372B (en) | Traffic shaping method and data processing equipment | |
CN114244738A (en) | Switch cache scheduling method and system | |
CN102497285A (en) | Byte-based filtering and policing system and method of avionics full duplex switched Ethernet (AFDX) switch | |
CN101022414A (en) | Message retransmitting method and apparatus | |
US20130107711A1 (en) | Packet traffic control in a network processor | |
CN115801639B (en) | Bandwidth detection method, device, electronic device and storage medium | |
CN116170377B (en) | Data processing method and related equipment | |
US20120057605A1 (en) | Bandwidth Control Method and Bandwidth Control Device | |
US7353366B2 (en) | Processing device | |
CN101296189A (en) | Distributed stream processing network appliance and packet transmission method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||