CN114095513B - Method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario, and application thereof - Google Patents
- Publication number: CN114095513B (application CN202111421264.0A)
- Authority
- CN
- China
- Prior art keywords
- forwarding
- flow
- mirror image
- processing engine
- traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
- H04L47/2433—Allocation of priorities to traffic types
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario, and an application thereof. The method comprises: obtaining a configuration state of an egress-direction processing engine, the configuration state being either a mirror-traffic-only state or a normal forwarding state; and setting the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer according to the configuration state of the egress-direction processing engine, so as to schedule the forwarding traffic and the mirrored traffic. By adjusting the buffer depths of the forwarding traffic and the mirrored traffic, the method forwards both kinds of traffic to the greatest extent that actual requirements allow. When only mirrored traffic is to be forwarded, forwarding traffic is prevented from entering the buffer and occupying bandwidth: it is discarded in advance, so packet loss of the mirrored traffic is avoided.
Description
Technical Field
The present invention relates to the field of communications, and in particular to a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario, and an application thereof.
Background
Mirroring refers to copying the packets of a mirror port (source port) to an observation port (destination port). During network maintenance, situations arise in which packets must be captured and analyzed without affecting packet forwarding, for example when an attack packet is suspected. Mirroring copies the packets of the mirror port to the observation port without affecting the normal processing flow of the packets; a user then uses data-monitoring equipment to analyze the packets copied to the observation port, for network monitoring and fault elimination.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention, and should not be taken as an acknowledgement or any form of suggestion that this information constitutes prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario, and an application thereof, solving the problem that mirrored-traffic packets are lost due to bandwidth limitation when a mirror destination port forwards only mirrored traffic.
To achieve the above objective, an embodiment of the present invention provides a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario.
In one or more embodiments of the invention, the method comprises: obtaining a configuration state of the egress-direction processing engine, the configuration state being one of a mirror-traffic-only state and a normal forwarding state; and setting the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer according to the configuration state of the egress-direction processing engine, so as to schedule forwarding traffic and mirrored traffic.
In one or more embodiments of the present invention, setting the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer according to the configuration state of the egress-direction processing engine includes: judging whether the configuration state of the egress-direction processing engine is the mirror-traffic-only state; if so, reducing the depth of the forwarding-traffic buffer and increasing the depth of the mirrored-traffic buffer; if not, increasing the depth of the forwarding-traffic buffer and reducing the depth of the mirrored-traffic buffer.
In one or more embodiments of the present invention, forwarding the forwarding traffic or the mirrored traffic in sequence from the egress-direction processing engine includes: when the configuration state of the egress-direction processing engine is the mirror-traffic-only state, discarding all forwarding traffic, sending the mirrored traffic in sequence from the mirrored-traffic buffer to the egress-direction processing engine, and forwarding it from the egress-direction processing engine; or, when the configuration state of the egress-direction processing engine is the normal forwarding state, sending the forwarding traffic and the mirrored traffic in sequence from their corresponding buffers to the egress-direction processing engine, and forwarding them from the egress-direction processing engine.
In one or more embodiments of the invention, the method further comprises: configuring a mirror mark in the ingress-direction processing engine, and copying the forwarding traffic according to the mirror mark to generate mirrored traffic.
In another aspect of the present invention, an apparatus for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario is provided, which includes an ingress-direction processing engine, a scheduling module, and an egress-direction processing engine.
The scheduling module is used to obtain the configuration state of the egress-direction processing engine, the configuration state being one of a mirror-traffic-only state and a normal forwarding state.
The egress-direction processing engine is used to set the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer according to its configuration state, so as to schedule forwarding traffic and mirrored traffic.
The ingress-direction processing engine is used to configure the mirror mark in the ingress-direction processing engine.
In one or more embodiments of the present invention, the scheduling module is further configured to: judge whether the configuration state of the egress-direction processing engine is the mirror-traffic-only state; if so, reduce the depth of the forwarding-traffic buffer and increase the depth of the mirrored-traffic buffer; if not, increase the depth of the forwarding-traffic buffer and reduce the depth of the mirrored-traffic buffer.
In one or more embodiments of the invention, the egress-direction processing engine is further configured to: when its configuration state is the mirror-traffic-only state, discard all forwarding traffic, send the mirrored traffic in sequence from the mirrored-traffic buffer to the egress-direction processing engine, and forward it out; or, when its configuration state is the normal forwarding state, send the forwarding traffic and the mirrored traffic in sequence from their corresponding buffers to the egress-direction processing engine and forward them out.
In one or more embodiments of the present invention, the scheduling module is further configured to copy the forwarding traffic according to the mirror mark to generate mirrored traffic.
In another aspect of the present invention, there is provided an electronic device including: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario as described above.
In another aspect of the invention, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the steps of the method of scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario as described above.
Compared with the prior art, the method and application for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario adjust the buffer depths of the forwarding traffic and the mirrored traffic, so that both kinds of traffic can be forwarded to the greatest extent that actual requirements allow. When only mirrored traffic is forwarded, forwarding traffic is prevented from entering the buffer and occupying bandwidth: it is discarded in advance, and packet loss of the mirrored traffic is avoided.
Drawings
FIG. 1 is a flow chart of a method of forwarding traffic and mirrored traffic scheduling in a limited bandwidth scenario according to an embodiment of the present invention;
FIG. 2 is a specific flow diagram of a method of forwarding traffic and mirrored traffic scheduling in a limited bandwidth scenario according to an embodiment of the present invention;
FIG. 3 is a block diagram of a method of forwarding traffic and mirrored traffic scheduling in a limited bandwidth scenario in accordance with an embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for forwarding traffic and mirrored traffic scheduling in a limited bandwidth scenario in accordance with one embodiment of the present invention;
FIG. 5 is a hardware block diagram of a computing device for forwarding-traffic and mirrored-traffic scheduling in a limited bandwidth scenario according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention should be read in conjunction with the accompanying drawings; it should be understood that the scope of the invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the term "comprise" or variations thereof such as "comprises" or "comprising" will be understood to include the stated element or component without excluding other elements or components.
The following describes in detail the technical solutions provided by the embodiments of the present invention with reference to the accompanying drawings.
Example 1
As shown in fig. 1, a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario according to one embodiment of the present invention comprises the following steps.
In step S101, the configuration state of the egress-direction processing engine is acquired.
The egress-direction processing engine typically forwards two kinds of traffic: the port's own forwarding traffic and the mirrored traffic. The configuration state of the egress-direction processing engine is one of: the mirror-traffic-only state and the normal forwarding state. In the mirror-traffic-only state, only mirrored traffic is forwarded out and normal traffic is not, so that only mirrored traffic leaves the port. In the normal forwarding state, the mirrored traffic and the forwarding traffic are forwarded out of the egress-direction engine in order of priority from high to low.
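The two configuration states can be modeled as a small enumeration. This is a sketch only; the names `EgressState`, `MIRROR_ONLY`, and `NORMAL` are illustrative and do not come from the patent:

```python
from enum import Enum

class EgressState(Enum):
    """Configuration state of the egress-direction processing engine (illustrative)."""
    MIRROR_ONLY = "mirror_traffic_only"  # forward mirrored traffic only; drop normal traffic
    NORMAL = "normal_forwarding"         # forward both kinds, highest priority first

def forwards_normal_traffic(state: EgressState) -> bool:
    # Only the normal forwarding state lets ordinary forwarded packets leave the port.
    return state is EgressState.NORMAL

print(forwards_normal_traffic(EgressState.MIRROR_ONLY))  # False
```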
In step S102, the depth of the mirrored traffic buffer and/or the forwarding traffic buffer is set according to the configuration status of the outbound processing engine.
After forwarding traffic and mirrored traffic enter the scheduling module, they are placed into the corresponding queue buffers in order of traffic priority from high to low. A total buffer pool is provided at the egress port of the scheduling module; its size is the total number of packets the egress allows to be queued across all queues. The forwarding-traffic queue and the mirrored-traffic queue each have their own buffer and an actual buffer count: the depth of a queue's buffer is the maximum number of packets it is allowed to hold, and the actual buffer count is the number of packets it currently holds.
Packets actually buffered in the forwarding-traffic and mirrored-traffic queues consume resources of the total buffer pool. When the packets actually buffered by either queue reach that queue's depth, subsequent packets of that queue are discarded; likewise, if the actual buffers of all queues together reach the capacity of the total buffer pool, subsequent packets are discarded. Queues occupy the total resource pool on a highest-priority-first basis.
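The admission rule described above, per-queue depth plus a shared pool, can be sketched as follows. The class and attribute names are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

class TrafficQueue:
    """A per-class queue with its own depth limit, drawing on a shared pool."""
    def __init__(self, depth):
        self.depth = depth          # max packets this queue may hold
        self.packets = deque()      # actual buffered packets

class SchedulerBuffers:
    def __init__(self, total_pool, fwd_depth, mirror_depth):
        self.total_pool = total_pool  # capacity of the total buffer pool
        self.queues = {"forward": TrafficQueue(fwd_depth),
                       "mirror": TrafficQueue(mirror_depth)}

    def used(self):
        # Total-pool resources consumed by all queues together.
        return sum(len(q.packets) for q in self.queues.values())

    def enqueue(self, kind, packet):
        """Admit a packet only if both its queue depth and the total pool allow it."""
        q = self.queues[kind]
        if len(q.packets) >= q.depth or self.used() >= self.total_pool:
            return False  # subsequent traffic is discarded
        q.packets.append(packet)
        return True

buf = SchedulerBuffers(total_pool=4, fwd_depth=3, mirror_depth=3)
for i in range(5):
    buf.enqueue("forward", i)      # high-priority forwarding traffic floods in first
print(len(buf.queues["forward"].packets))  # 3: capped by the queue depth
print(buf.enqueue("mirror", "m0"))         # True: the pool still has room for one
```

With the pool now full, any further packet of either class is dropped, which is exactly the starvation the next paragraphs address.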
Before the buffer depths are adjusted, the forwarding traffic has the higher priority, so a large amount of it enters the forwarding-traffic buffer; its actual buffer count keeps growing and occupies most of the buffer resources. Because high-priority traffic is enqueued first, by the time mirrored traffic can be enqueued, most of the total buffer pool is already occupied by forwarding traffic, so only a small part of the mirrored traffic enters its queue. The egress-direction processing engine then discards the forwarding traffic anyway, so even though mirrored traffic is the only traffic wanted, only a small part of it can be forwarded.
Therefore, the buffer depths corresponding to the forwarding traffic and the mirrored traffic are adjusted according to actual requirements, flexibly controlling the forwarding of both kinds of traffic in a limited-bandwidth scenario. Specifically, it is judged whether the egress-direction processing engine carries the mirror-traffic-only mark. If not, both forwarding traffic and mirrored traffic need to be forwarded from the egress-direction processing engine, so the depth of the forwarding-traffic buffer is increased and the depth of the mirrored-traffic buffer is reduced. If so, the depth of the forwarding-traffic buffer is reduced and the depth of the mirrored-traffic buffer is increased. Since the egress-direction processing engine carries the mirror-traffic-only mark, only mirrored traffic needs to be forwarded out and the forwarding traffic must be discarded entirely; in this embodiment the depth of the forwarding-traffic buffer may therefore be set to 0 and the whole capacity of the total buffer pool allocated to the mirrored traffic, so that the mirrored traffic can be buffered to the maximum extent. No forwarding packet is then allowed to enter the queue; the forwarding traffic is discarded in advance and does not occupy resources of the total buffer pool, so the mirrored traffic can be fully buffered into its queue and finally forwarded out of the egress.
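The depth-setting step can be sketched as a single function splitting the total pool between the two queues. The 3:1 split in the normal state is an illustrative choice of ours; the patent only specifies that the forwarding-traffic depth becomes 0 in the mirror-traffic-only state:

```python
def set_buffer_depths(mirror_only: bool, total_pool: int):
    """Return (forwarding_depth, mirrored_depth) for the two queue buffers.

    In the mirror-traffic-only state the forwarding-traffic depth is set to 0,
    so forwarding packets are discarded in advance and the whole pool is left
    to mirrored traffic. Otherwise forwarding traffic, the higher-priority
    class, gets the larger share (the 3:1 ratio is an assumption for
    illustration, not taken from the patent).
    """
    if mirror_only:
        return 0, total_pool
    fwd = (total_pool * 3) // 4
    return fwd, total_pool - fwd

print(set_buffer_depths(True, 100))   # (0, 100)
print(set_buffer_depths(False, 100))  # (75, 25)
```

With a depth of 0, the admission check `len(q.packets) >= q.depth` fails for every forwarding packet, which is precisely the "discarded in advance" behavior described above.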
When packets are forwarded, if the configuration state of the egress-direction processing engine is the normal forwarding state, both forwarding traffic and mirrored traffic are forwarded from the egress-direction processing engine; if the configuration state is the mirror-traffic-only state, only mirrored traffic is forwarded from it.
Specifically, when the configuration state of the egress-direction processing engine is the mirror-traffic-only state, only the mirrored traffic is forwarded from the egress-direction processing engine and the forwarding traffic must be discarded entirely. Since the depth of the forwarding-traffic buffer has been set to 0, no forwarding traffic exists in the queues, and the mirrored traffic only needs to be sent in sequence from the mirrored-traffic buffer to the egress-direction processing engine and forwarded from it.
When the configuration state of the egress-direction processing engine is the normal forwarding state, both forwarding traffic and mirrored traffic need to be forwarded from the egress-direction processing engine; they are sent in sequence from their corresponding buffers to the egress-direction processing engine and forwarded from it.
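The two forwarding behaviors can be sketched as a drain step over the buffered queues. This is a simplified sketch of the ordering described above; the string states and function name are illustrative:

```python
from collections import deque

def drain_to_egress(state, fwd_queue, mirror_queue):
    """Send buffered packets toward the egress engine in scheduling order.

    In the mirror-traffic-only state forwarding packets never reach the egress
    (with a forwarding-buffer depth of 0 the queue would be empty anyway); in
    the normal state both queues drain, the higher-priority forwarding
    traffic first.
    """
    out = []
    if state == "normal":
        while fwd_queue:
            out.append(("forward", fwd_queue.popleft()))
    else:
        fwd_queue.clear()  # mirror-traffic-only: forwarding traffic is all discarded
    while mirror_queue:
        out.append(("mirror", mirror_queue.popleft()))
    return out

print(drain_to_egress("mirror_only", deque(["f1"]), deque(["m1", "m2"])))
# [('mirror', 'm1'), ('mirror', 'm2')]
```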
Example 2
As shown in fig. 2 to fig. 3, a method for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario according to one embodiment of the present invention comprises the following steps.
In step S201, the ingress-direction processing engine configures a mirror mark, and the forwarding traffic is copied according to the mirror mark to generate mirrored traffic.
A mirror mark is configured on the ingress-direction processing engine, and forwarding traffic carries the mirror mark when it enters the scheduling module. If the scheduling module finds that the forwarding traffic carries the mirror mark, it copies the forwarding traffic to generate the corresponding mirrored traffic, and the forwarding traffic and the mirrored traffic then enter their corresponding queues.
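The ingress copy step can be sketched as follows. The dict-based packet format and the function name `on_ingress` are illustrative assumptions for the sketch:

```python
import copy

def on_ingress(packet, mirror_marked: bool):
    """If the port carries the mirror mark, duplicate the packet into mirrored
    traffic; both copies then head to their corresponding queues."""
    flows = [("forward", packet)]
    if mirror_marked:
        mirrored = copy.deepcopy(packet)  # a full copy, leaving the original untouched
        flows.append(("mirror", mirrored))
    return flows

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "payload": b"hello"}
out = on_ingress(pkt, mirror_marked=True)
print([kind for kind, _ in out])  # ['forward', 'mirror']
```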
In step S202, the configuration state of the egress-direction processing engine is acquired.
The egress-direction processing engine typically forwards two kinds of traffic: the port's own forwarding traffic and the mirrored traffic. The configuration state of the egress-direction processing engine is one of: the mirror-traffic-only state and the normal forwarding state. In the mirror-traffic-only state, only mirrored traffic is forwarded out and normal traffic is not, so that only mirrored traffic leaves the port. In the normal forwarding state, the mirrored traffic and the forwarding traffic are forwarded out of the egress-direction engine in order of priority from high to low.
In step S203, the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer is set according to the configuration state of the egress-direction processing engine.
Before the buffer depths are adjusted, the forwarding traffic has the higher priority, so a large amount of it enters the forwarding-traffic buffer; its actual buffer count keeps growing and occupies most of the buffer resources. Because high-priority traffic is enqueued first, by the time mirrored traffic can be enqueued, most of the total buffer pool is already occupied by forwarding traffic, so only a small part of the mirrored traffic enters its queue. The egress-direction processing engine then discards the forwarding traffic anyway, so even though mirrored traffic is the only traffic wanted, only a small part of it can be forwarded.
Therefore, the buffer depths corresponding to the forwarding traffic and the mirrored traffic are adjusted according to actual requirements, flexibly controlling the forwarding of both kinds of traffic in a limited-bandwidth scenario. Specifically, it is judged whether the egress-direction processing engine carries the mirror-traffic-only mark. If not, both forwarding traffic and mirrored traffic need to be forwarded from the egress-direction processing engine, so the depth of the forwarding-traffic buffer is increased and the depth of the mirrored-traffic buffer is reduced. If so, the depth of the forwarding-traffic buffer is reduced and the depth of the mirrored-traffic buffer is increased. Since the egress-direction processing engine carries the mirror-traffic-only mark, only mirrored traffic needs to be forwarded out and the forwarding traffic must be discarded entirely; in this embodiment the depth of the forwarding-traffic buffer may therefore be set to 0 and the whole capacity of the total buffer pool allocated to the mirrored traffic, so that the mirrored traffic can be buffered to the maximum extent. No forwarding packet is then allowed to enter the queue; the forwarding traffic is discarded in advance and does not occupy resources of the total buffer pool, so the mirrored traffic can be fully buffered into its queue and finally forwarded out of the egress.
In step S204, the forwarding traffic and the mirrored traffic are placed in their corresponding buffers in priority order, and the forwarding traffic or the mirrored traffic is forwarded in sequence from the egress-direction processing engine.
When the egress-direction processing engine carries the mirror-traffic-only mark, only the mirrored traffic is forwarded from it and the forwarding traffic must be discarded entirely. Since the depth of the forwarding-traffic buffer has been set to 0, no forwarding traffic exists in the queues, and the mirrored traffic only needs to be sent in sequence from the mirrored-traffic buffer to the egress-direction processing engine and forwarded from it.
When the egress-direction processing engine does not carry the mirror-traffic-only mark, both forwarding traffic and mirrored traffic need to be forwarded from the egress-direction processing engine; they are sent in sequence from their corresponding buffers to the egress-direction processing engine and forwarded from it.
As shown in fig. 4, an apparatus for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario according to an embodiment of the present invention is described.
In an embodiment of the present invention, the apparatus for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario includes an ingress-direction processing engine 401, a scheduling module 402, and an egress-direction processing engine 403.
The scheduling module 402 is configured to obtain the configuration state of the egress-direction processing engine, the configuration state being one of a mirror-traffic-only state and a normal forwarding state.
The egress-direction processing engine 403 is configured to set the depth of the mirrored-traffic buffer and/or the forwarding-traffic buffer according to its configuration state, so as to schedule the forwarding traffic and the mirrored traffic.
The ingress-direction processing engine 401 is configured to configure the mirror mark in the ingress-direction processing engine.
The scheduling module 402 is further configured to: judge whether the configuration state of the egress-direction processing engine is the mirror-traffic-only state; if so, reduce the depth of the forwarding-traffic buffer and increase the depth of the mirrored-traffic buffer; if not, increase the depth of the forwarding-traffic buffer and reduce the depth of the mirrored-traffic buffer.
The egress-direction processing engine 403 is further configured to: when its configuration state is the mirror-traffic-only state, discard all forwarding traffic, send the mirrored traffic in sequence from the mirrored-traffic buffer to the egress-direction processing engine, and forward it out; or, when its configuration state is the normal forwarding state, send the forwarding traffic and the mirrored traffic in sequence from their corresponding buffers to the egress-direction processing engine and forward them out.
The scheduling module 402 is further configured to copy the forwarding traffic according to the mirror mark to generate mirrored traffic.
Fig. 5 illustrates a hardware block diagram of a computing device 50 for forwarding-traffic and mirrored-traffic scheduling in a limited-bandwidth scenario according to an embodiment of the present description. As shown in fig. 5, the computing device 50 may include at least one processor 501, a storage 502 (e.g., a non-volatile storage), a memory 503, and a communication interface 504, which are connected together via a bus 505. The at least one processor 501 executes at least one computer-readable instruction stored or encoded in the storage 502.
It should be appreciated that the computer-executable instructions stored in the storage 502, when executed, cause the at least one processor 501 to perform the various operations and functions described above in connection with figs. 1-5 in the various embodiments of the present description.
In embodiments of the present description, the computing device 50 may include, but is not limited to: a personal computer, a server computer, a workstation, a desktop computer, a laptop computer, a notebook computer, a mobile computing device, a smart phone, a tablet computer, a cellular phone, a personal digital assistant (PDA), a handset, a messaging device, a wearable computing device, a consumer electronic device, and the like.
According to one embodiment, a program product, such as a machine-readable medium, is provided. The machine-readable medium may have instructions (i.e., the elements described above implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with figs. 1-5 in the various embodiments of the specification. In particular, a system or apparatus provided with a readable storage medium storing software program code that implements the functions of any of the above embodiments may be provided, and a computer or processor of the system or apparatus may be caused to read out and execute the instructions stored in the readable storage medium.
According to the method and application for scheduling forwarding traffic and mirrored traffic in a limited-bandwidth scenario, adjusting the buffer depths of the forwarding traffic and the mirrored traffic allows both kinds of traffic to be forwarded to the greatest extent that actual requirements allow. When only mirrored traffic is forwarded, forwarding traffic is prevented from entering the buffer and occupying bandwidth: it is discarded in advance, and packet loss of the mirrored traffic is avoided.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present invention are presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the invention and its practical application to thereby enable one skilled in the art to make and utilize the invention in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.
Claims (8)
1. A method for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario, the method comprising:
obtaining a configuration state of an outbound direction processing engine, wherein the configuration state comprises: a mirror-traffic-only state and a normal forwarding state; the mirror-traffic-only state refers to a state in which only mirror traffic is forwarded and normal traffic is not forwarded, and the normal forwarding state refers to a state in which mirror traffic and forwarding traffic are forwarded from the outbound direction processing engine in order of priority from high to low; and
judging whether the configuration state of the outbound direction processing engine is the mirror-traffic-only state;
if yes, reducing the depth of a forwarding traffic buffer and increasing the depth of a mirror traffic buffer;
if not, increasing the depth of the forwarding traffic buffer and reducing the depth of the mirror traffic buffer; and
setting the depths of the mirror traffic buffer and the forwarding traffic buffer according to the configuration state of the outbound direction processing engine, thereby scheduling the forwarding traffic and the mirror traffic.
2. The method for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario according to claim 1, the method further comprising:
when the configuration state of the outbound direction processing engine is the mirror-traffic-only state, discarding all forwarding traffic, sending the mirror traffic in sequence from the mirror traffic buffer to the outbound direction processing engine, and forwarding it from the outbound direction processing engine; or
when the configuration state of the outbound direction processing engine is the normal forwarding state, sending the forwarding traffic and the mirror traffic in sequence from their respective buffers to the outbound direction processing engine, and forwarding them from the outbound direction processing engine.
3. The method for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario according to claim 1, the method further comprising:
configuring a mirror flag in an inbound direction processing engine, and copying the forwarding traffic according to the mirror flag to generate the mirror traffic.
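Claims 1 to 3 together describe a simple scheme: buffer depths are re-split according to the outbound engine's configured state, mirror traffic is produced by copying flagged forwarding traffic, and mirror traffic drains at higher priority. A minimal sketch of that logic follows; all names (`TrafficScheduler`, `EngineState`, the 1:3 depth-split ratio) are illustrative assumptions, not identifiers from the patent.

```python
from collections import deque
from enum import Enum


class EngineState(Enum):
    MIRROR_ONLY = "mirror_only"  # forward mirror traffic only; drop normal traffic
    NORMAL = "normal"            # forward mirror then normal traffic by priority


class TrafficScheduler:
    """Sketch of the buffer-depth scheduling described in claims 1-3."""

    def __init__(self, total_depth=8):
        self.total_depth = total_depth
        self.mirror_buf = deque()
        self.forward_buf = deque()
        self.set_state(EngineState.NORMAL)

    def set_state(self, state):
        """Re-split the shared depth budget per the configured state."""
        self.state = state
        if state is EngineState.MIRROR_ONLY:
            # Mirror-only: shrink the forwarding buffer, grow the mirror buffer.
            self.forward_depth = self.total_depth // 4
            self.mirror_depth = self.total_depth - self.forward_depth
        else:
            # Normal: grow the forwarding buffer, shrink the mirror buffer.
            self.mirror_depth = self.total_depth // 4
            self.forward_depth = self.total_depth - self.mirror_depth

    def enqueue(self, pkt, mirror_flag=False):
        """Claim 3: the mirror flag causes forwarding traffic to be copied."""
        if self.state is EngineState.MIRROR_ONLY:
            # Claim 2: all forwarding traffic is discarded; only mirrors queue.
            if mirror_flag and len(self.mirror_buf) < self.mirror_depth:
                self.mirror_buf.append(("mirror", pkt))
            return
        if mirror_flag and len(self.mirror_buf) < self.mirror_depth:
            self.mirror_buf.append(("mirror", pkt))
        if len(self.forward_buf) < self.forward_depth:
            self.forward_buf.append(("forward", pkt))

    def dequeue(self):
        """Mirror traffic drains before forwarding traffic (higher priority)."""
        if self.mirror_buf:
            return self.mirror_buf.popleft()
        if self.forward_buf:
            return self.forward_buf.popleft()
        return None
```

Under this reading, switching to the mirror-only state does not merely drop normal traffic at dequeue time: it also hands most of the limited buffer budget to mirror traffic, which is what lets mirroring survive a constrained egress link.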
4. An apparatus for forwarding traffic and mirrored traffic scheduling in a limited bandwidth scenario, the apparatus comprising:
a scheduling module, configured to obtain a configuration state of an outbound direction processing engine, wherein the configuration state comprises: a mirror-traffic-only state and a normal forwarding state; the mirror-traffic-only state refers to a state in which only mirror traffic is forwarded and normal traffic is not forwarded, and the normal forwarding state refers to a state in which mirror traffic and forwarding traffic are forwarded from the outbound direction processing engine in order of priority from high to low; and
the outbound direction processing engine, configured to set the depths of a mirror traffic buffer and a forwarding traffic buffer according to the configuration state of the outbound direction processing engine, so as to schedule forwarding traffic and mirror traffic;
wherein the scheduling module is further configured to judge whether the configuration state of the outbound direction processing engine is the mirror-traffic-only state; if yes, to reduce the depth of the forwarding traffic buffer and increase the depth of the mirror traffic buffer; and if not, to increase the depth of the forwarding traffic buffer and reduce the depth of the mirror traffic buffer.
5. The apparatus for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario according to claim 4, wherein the outbound direction processing engine is further configured to:
when the configuration state of the outbound direction processing engine is the mirror-traffic-only state, discard all forwarding traffic, send the mirror traffic in sequence from the mirror traffic buffer to the outbound direction processing engine, and forward it from the outbound direction processing engine; or
when the configuration state of the outbound direction processing engine is the normal forwarding state, send the forwarding traffic and the mirror traffic in sequence from their respective buffers to the outbound direction processing engine, and forward them from the outbound direction processing engine.
6. The apparatus for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario according to claim 4, further comprising:
an inbound direction processing engine, in which a mirror flag is configured;
wherein the scheduling module is further configured to copy the forwarding traffic according to the mirror flag to generate the mirror traffic.
7. An electronic device, comprising:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario of any one of claims 1 to 3.
8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for forwarding traffic and mirror traffic scheduling in a limited bandwidth scenario according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111421264.0A CN114095513B (en) | 2021-11-26 | 2021-11-26 | Method for forwarding traffic and mirror image traffic scheduling under limited bandwidth scene and application |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114095513A CN114095513A (en) | 2022-02-25 |
CN114095513B true CN114095513B (en) | 2024-03-29 |
Family
ID=80304964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111421264.0A Active CN114095513B (en) | 2021-11-26 | 2021-11-26 | Method for forwarding traffic and mirror image traffic scheduling under limited bandwidth scene and application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114095513B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1870566A (en) * | 2005-05-24 | 2006-11-29 | 华为技术有限公司 | Method for implementing image in exchange system |
CN102932262A (en) * | 2011-08-11 | 2013-02-13 | 中兴通讯股份有限公司 | Network processor and image realizing method thereof |
US8745264B1 (en) * | 2011-03-31 | 2014-06-03 | Amazon Technologies, Inc. | Random next iteration for data update management |
CN105657031A (en) * | 2016-01-29 | 2016-06-08 | 盛科网络(苏州)有限公司 | Service-aware chip cache resource management method |
CN106330765A (en) * | 2015-06-30 | 2017-01-11 | 中兴通讯股份有限公司 | Cache distribution method and device |
CN107168820A (en) * | 2017-04-01 | 2017-09-15 | 华为技术有限公司 | A kind of data image method and storage system |
CN107682446A (en) * | 2017-10-24 | 2018-02-09 | 新华三信息安全技术有限公司 | A kind of message mirror-image method, device and electronic equipment |
CN108449270A (en) * | 2018-03-21 | 2018-08-24 | 中南大学 | A Priority-Based Cache Management Method in Opportunistic Networks |
CN108664354A (en) * | 2017-04-01 | 2018-10-16 | 华为技术有限公司 | A kind of data image method and storage system |
CN109496410A (en) * | 2016-05-18 | 2019-03-19 | 马维尔以色列(M.I.S.L.)有限公司 | Outflow traffic mirroring in the network equipment |
CN110493145A (en) * | 2019-08-01 | 2019-11-22 | 新华三大数据技术有限公司 | A kind of caching method and device |
CN112286671A (en) * | 2020-12-29 | 2021-01-29 | 湖南星河云程信息科技有限公司 | Containerization batch processing job scheduling method and device and computer equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7127566B2 (en) * | 2003-12-18 | 2006-10-24 | Intel Corporation | Synchronizing memory copy operations with memory accesses |
US7484058B2 (en) * | 2004-04-28 | 2009-01-27 | Emc Corporation | Reactive deadlock management in storage area networks |
US8095683B2 (en) * | 2006-03-01 | 2012-01-10 | Cisco Technology, Inc. | Method and system for mirroring dropped packets |
US7734950B2 (en) * | 2007-01-24 | 2010-06-08 | Hewlett-Packard Development Company, L.P. | Bandwidth sizing in replicated storage systems |
US9423978B2 (en) * | 2013-05-08 | 2016-08-23 | Nexgen Storage, Inc. | Journal management |
US9203711B2 (en) * | 2013-09-24 | 2015-12-01 | International Business Machines Corporation | Port mirroring for sampling measurement of network flows |
US11070654B2 (en) * | 2019-10-03 | 2021-07-20 | EMC IP Holding Company LLC | Sockets for shared link applications |
US11165721B1 (en) * | 2020-04-09 | 2021-11-02 | Arista Networks, Inc. | Reprogramming multicast replication using real-time buffer feedback |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3629162B1 (en) | Technologies for control plane separation at a network interface controller | |
CN109388504B (en) | Method and device for processing informationized docking, computer equipment and storage medium | |
CN106708607B (en) | Congestion control method and device for message queue | |
US10417062B2 (en) | Method and apparatus of unloading out of memory processing flow to user space | |
EP3504849B1 (en) | Queue protection using a shared global memory reserve | |
US9900837B2 (en) | Multi-channel communications for sending push notifications to mobile devices | |
US20050276222A1 (en) | Platform level overload control | |
EP3075126A1 (en) | Method and system for adjusting heavy traffic loads between personal electronic devices and external services | |
CN112600878B (en) | Data transmission method and device | |
CN112148644A (en) | Method, apparatus and computer program product for processing input/output requests | |
CN107667352A (en) | File cache and synchronous technology for predictability | |
CN112799793B (en) | Scheduling method and device, electronic equipment and storage medium | |
CN112600761A (en) | Resource allocation method, device and storage medium | |
CN110830388B (en) | Data scheduling method, device, network equipment and computer storage medium | |
US9178838B2 (en) | Hash perturbation with queue management in data communication | |
CN114095513B (en) | Method for forwarding traffic and mirror image traffic scheduling under limited bandwidth scene and application | |
CN102055671A (en) | Priority management method for multi-application packet sending | |
CN109257806B (en) | Carrier aggregation mode setting method for communication terminal, communication terminal and medium | |
CN111813557A (en) | Task processing apparatus, method, terminal device and readable storage medium | |
KR101523145B1 (en) | Method for traffic distribution for multi-link terminal | |
JP2022534557A (en) | Using client computers for document processing | |
Banerjee et al. | Priority based K-Erlang distribution method in cloud computing | |
US12224943B2 (en) | Service flow transmission method and apparatus, device, and storage medium | |
US10250515B2 (en) | Method and device for forwarding data messages | |
CN116541185A (en) | Data interaction method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||