WO2025020081A1 - Method and apparatus for flow information handling
- Publication number
- WO2025020081A1 (PCT/CN2023/109122)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- flow
- cam
- network device
- processor
- packet
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
- H04L45/745—Address table lookup; Address filtering
- H04L45/74591—Address table lookup; Address filtering using content-addressable memories [CAM]
Definitions
- the non-limiting and exemplary embodiments of the present disclosure generally relate to the technical field of communications, and specifically to methods and apparatuses for flow information handling.
- Traffic on a network can be seen as consisting of flows passing through network devices. For administrative or other purposes, it is often interesting, useful, or even necessary to have access to information about these flows that pass through the network devices.
- Network flow monitoring may be an essential tool for a lot of network administrators. Flow monitoring allows administrators to collect and record the traffic going to and from network devices. Network flow monitoring can provide visibility into causes of congestion, which applications are using the most resources, abnormal traffic patterns, or the ability to provide usage-based billing.
- Examples of flow monitoring technologies include:
- IPFIX (Internet Protocol Flow Information Export)
- J-Flow (Juniper flow)
- NetStream
- AppFlow, etc.
- IPFIX is the standard that tracks all the developments made initially by Cisco with NetFlow, including all the enhancements up to NetFlow version 10 and beyond. IPFIX captures a rich set of flow statistics, and the captured data offers a variety of uses to network planning and/or operations teams.
- IPFIX is now well defined in the Internet Engineering Task Force (IETF) by several Request for Comments (RFCs) , such as RFC 7011, the disclosure of which is incorporated by reference herein in its entirety.
- Flow monitoring protocols such as IPFIX may comprise three components e.g. Metering Process, Exporting Process and Collecting Process.
- the Metering Process may sample the traffic on an observation point and store the sampled packets into a cache.
- the observation point is a location in the network where packets can be observed. Examples of the observation point include a line to which a probe is attached; a shared medium, such as an Ethernet-based local area network (LAN) ; a single port of a router; or a set of interfaces (physical or logical) of a router.
- the Metering Process generates Flow Records.
- Inputs to the Metering Process are packet headers, characteristics, and Packet Treatment observed at one or more observation points.
- the Metering Process consists of a set of functions that includes packet header capturing, timestamping, sampling, classifying, and maintaining Flow Records.
- the maintenance of Flow Records may include creating new records, updating existing ones, computing Flow statistics, deriving further Flow properties, detecting Flow expiration, passing Flow Records to the Exporting Process, and deleting Flow Records.
- the Exporting Process may export the sampled traffic to the Collecting Process through a message such as IPFIX message.
- the message may include a template and a record.
- the Exporting Process sends IPFIX Messages to one or more Collecting Processes.
- the Flow Records in the Messages are generated by one or more Metering Processes.
- the Collecting Process may receive the flow messages and parse the flow messages based on the flow template. Then it could do the further analysis, for example, if there is denial of service attack.
- the Collecting Process receives IPFIX Messages from one or more Exporting Processes.
- the Collecting Process might process or store Flow Records received within these Messages.
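- For illustration only, a minimal sketch of how these three components may interact is given below (all class, field and method names are hypothetical and not taken from the IPFIX RFCs or from the claimed implementation):

```python
# Non-limiting illustration of the three IPFIX components; names are hypothetical.
from collections import defaultdict

class MeteringProcess:
    """Samples packets at an observation point and maintains Flow Records in a cache."""
    def __init__(self):
        self.cache = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, flow_key, packet_len):
        record = self.cache[flow_key]
        record["packets"] += 1
        record["bytes"] += packet_len

class CollectingProcess:
    """Receives exported Flow Records and stores them for further analysis."""
    def __init__(self):
        self.records = []

    def receive(self, message):
        self.records.append(message)

class ExportingProcess:
    """Packages Flow Records into messages and sends them to one or more Collecting Processes."""
    def __init__(self, collectors):
        self.collectors = collectors

    def export(self, cache):
        for flow_key, record in cache.items():
            for collector in self.collectors:
                collector.receive({"flow": flow_key, **record})
```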
- the Metering Process and the Exporting Process may be implemented on a network device such as a network switch, router, bridge, etc. Several observation points can be enabled to capture traffic for different applications at the same time. Customers may expect a traffic sampling rate of 1:1 to capture all the traffic.
- the packet parsing and flow data caching may be managed by a processor of the network device, such as a Central Processing Unit (CPU) or packet processor, which means the capacity is limited by the processor and the memory of the network device.
- the network device throughput has increased rapidly.
- the fifth generation (5G) system or the data center may require a large network device throughput.
- previously, the line bit rates of the ports of the network device were mainly 1 Gbit/s, 2.5 Gbit/s or 10 Gbit/s.
- nowadays, the line bit rates of the ports of the network device are mainly 40 Gbit/s, 100 Gbit/s or 400 Gbit/s.
- FIG. 1 shows an example of flow monitoring implementation according to an embodiment of the present disclosure.
- the observation point may be enabled on a port of the network device. To avoid eating up all the resources of the network device, some sampled traffic has to be dropped if the network device is overloaded.
- it is hard for the traditional flow monitoring implementation to meet the flow monitoring demand on a high throughput network device.
- the power consumption is high if the processor of the network device is running heavily when handling the sampled traffic.
- an improved solution for flow information handling may be desirable.
- a method performed by a network device may comprise providing a search key corresponding to a packet of a first flow to a content addressable memory (CAM) .
- the method may further comprise determining, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM.
- the method may further comprise counting a first hit number of the first flow.
- the method may further comprise suppressing providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the method may further comprise providing the search key or the information regarding the packet of the first flow to the processor of the network device.
- the method may further comprise determining, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
- the method may further comprise counting a second hit number of the first flow.
- the method may further comprise, when information of the first flow is exported to a collector, combining the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
- the method may further comprise sending the information of the first flow comprising the third hit number of the first flow to the collector.
- the method may further comprise determining a weight of the first flow.
- the method may further comprise determining whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
- the method may further comprise selecting at least one flow whose weight exceeds a threshold as at least one candidate flow.
- the method may further comprise finding a predefined number of highest weight candidate flows from the at least one candidate flow.
- the method may further comprise, when the first flow belongs to the predefined number of highest weight candidate flows, determining to store the search pattern for the first flow in the CAM.
- the method may further comprise, when the first flow does not belong to the predefined number of highest weight candidate flows, determining to not store the search pattern for the first flow in the CAM and removing the search pattern for the first flow from the CAM if the search pattern for the first flow has been previously stored in the CAM.
- the determining the weight of the first flow may comprise determining the weight of the first flow when a timer expires.
- the method may further comprise obtaining first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
- the method may further comprise enabling or disabling the first functionality based on the first information.
- the method may further comprise obtaining second information indicating maximum CAM entries used for the first functionality.
- the method may further comprise allocating a CAM entry used for the first functionality based on the second information.
- the first flow may comprise Internet protocol (IP) Flow Information Export (IPFIX) flow.
- the CAM may comprise at least one of binary CAM, or ternary CAM.
- a network device comprising a processor, a content addressable memory (CAM) coupled to the processor and a memory coupled to the processor.
- Said memory contains instructions executable by said processor.
- Said network device is operative to provide a search key corresponding to a packet of a first flow to the CAM.
- Said network device is further operative to determine, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM.
- Said network device is further operative to count a first hit number of the first flow.
- Said network device is further operative to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the network device may comprise a first providing module configured to provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) .
- the network device may further comprise a first determining module configured to determine by the CAM that the search key matches a search pattern for the first flow stored in the CAM.
- the network device may further comprise a first counting module configured to count a first hit number of the first flow.
- the network device may further comprise a suppressing module configured to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the network device may further comprise a second providing module configured to provide the search key or the information regarding the packet of the first flow to the processor of the network device.
- the network device may further comprise a second determining module configured to determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
- the network device may further comprise a second counting module configured to count a second hit number of the first flow.
- the network device may further comprise a combining module configured to, when information of the first flow is exported to a collector, combine the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
- the network device may further comprise a sending module configured to send the information of the first flow comprising the third hit number of the first flow to the collector.
- the network device may further comprise a third determining module configured to determine a weight of the first flow.
- the network device may further comprise a fourth determining module configured to determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
- the network device may further comprise an obtaining module configured to obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
- the network device may further comprise an enabling module configured to enable the first functionality based on the first information.
- the network device may further comprise a disabling module configured to disable the first functionality based on the first information.
- the network device may further comprise a third obtaining module configured to obtain second information indicating maximum CAM entries used for the first functionality.
- the network device may further comprise an allocating module configured to allocate a CAM entry used for the first functionality based on the second information.
- a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the method according to the above first aspect.
- a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to carry out any of the method according to the above first aspect.
- Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows.
- it may increase the flow (such as IPFIX) handling performance and capacity.
- it may decrease the power consumption as CAM is used for flow handling.
- the dynamic CAM allocation method can guarantee that CAM resources are allocated to the flows with high rate.
- FIG. 1 shows an example of flow monitoring implementation according to an embodiment of the present disclosure
- FIG. 2 shows a flowchart of a method according to an embodiment of the present disclosure
- FIG. 3 shows a flowchart of a method according to another embodiment of the present disclosure
- FIG. 4 shows a flowchart of a method according to another embodiment of the present disclosure
- FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure
- FIG. 6 shows a flowchart of a method according to another embodiment of the present disclosure.
- FIG. 7 shows a flowchart of a method according to another embodiment of the present disclosure.
- FIG. 8 shows a flowchart of a method according to another embodiment of the present disclosure.
- FIG. 9 shows an example of an IPFIX flow handling method
- FIG. 10 shows an example of an IPFIX flow handling method according to another embodiment of the present disclosure.
- FIG. 11 shows an example of a relation between the flows in IPFIX cache DB and TCAM entries according to an embodiment of the present disclosure
- FIG. 12 shows a flowchart of an IPFIX flow handling method according to another embodiment of the present disclosure
- FIG. 13 is a block diagram showing an apparatus suitable for use in practicing some embodiments of the disclosure.
- FIG. 14 is a block diagram showing a network device according to an embodiment of the disclosure.
- the term “network” refers to a network following any suitable communication standards such as new radio (NR), long term evolution (LTE), LTE-Advanced, wideband code division multiple access (WCDMA), high-speed packet access (HSPA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency-Division Multiple Access (OFDMA), Single carrier frequency division multiple access (SC-FDMA) and other wireless networks.
- a CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA) , etc.
- a TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM) .
- An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA) , Ultra Mobile Broadband (UMB) , IEEE 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20, Flash-OFDMA, Ad-hoc network, wireless sensor network, etc.
- the terms “network” and “system” can be used interchangeably.
- the communications between two devices in the network may be performed according to any suitable communication protocols, including, but not limited to, the communication protocols as defined by a standard organization such as 3GPP.
- the communication protocols may comprise the first generation (1G), the second generation (2G) and subsequent generations of communication protocols.
- the term “network device”, “network node” or “network function” refers to any suitable function which can be implemented in a network entity (physical or virtual) of a communication network.
- the network function can be implemented either as a network element on a dedicated hardware, as a software instance running on a dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g. on a cloud infrastructure.
- Virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
- virtualization can be applied to a provider edge node and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks) .
- some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments hosted by one or more of hardware nodes. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node) , then the provider edge node or PE may be entirely virtualized.
- the functions may be implemented by one or more applications (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc. ) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
- Applications are run in virtualization environment which provides hardware comprising processing circuitry and memory.
- Memory contains instructions executable by processing circuitry whereby application is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
- Virtualization environment comprises general-purpose or special-purpose network hardware devices comprising a set of one or more processors or processing circuitry, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs) , or any other type of processing circuitry including digital or analog hardware components or special purpose processors.
- Each hardware device may comprise memory which may be non-persistent memory for temporarily storing instructions or software executed by processing circuitry.
- Each hardware device may comprise one or more network interface controllers (NICs) , also known as network interface cards, which include physical network interface.
- Each hardware device may also include non-transitory, persistent, machine-readable storage media having stored therein software and/or instructions executable by processing circuitry.
- Software may include any type of software including software for instantiating one or more virtualization layers (also referred to as hypervisors), software to execute virtual machines as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments.
- Virtual machines comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer or hypervisor. Different embodiments of the instance of virtual appliance may be implemented on one or more of virtual machines, and the implementations may be made in different ways.
- processing circuitry executes software to instantiate the hypervisor or virtualization layer, which may sometimes be referred to as a virtual machine monitor (VMM) .
- Virtualization layer may present a virtual operating platform that appears like networking hardware to virtual machine.
- references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- the terms “first” and “second” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
- the term “and/or” includes any and all combinations of one or more of the associated listed terms.
- the phrase “at least one of A and B” or “at least one of A or B” should be understood to mean “only A, only B, or both A and B. ”
- the phrase “A and/or B” should be understood to mean “only A, only B, or both A and B” .
- FIG. 2 shows a flowchart of a method according to an embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 200 as well as means or modules or circuits for accomplishing other processes in conjunction with other components.
- the network device may provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) .
- the network device may be any suitable network device which comprises the CAM.
- the network devices may support functionalities such as data forwarding, data exchanging, data transmission, data processing, etc.
- the network device may be network switch, router, bridge, Ethernet device, Radio over Ethernet (RoE) device, etc.
- the network devices may receive the packet communicated over a network.
- the network devices may process the packet communicated over the network.
- the processing may include generating a search key representative of a packet, determining a processing rule for the packet and processing the packet according to the processing rule.
- the packet may be any suitable packet of any suitable communication protocol or network.
- the packet may be an Internet protocol (IP) packet.
- the packet may be a packet in Information-Centric Networking (ICN) .
- the packet may be a packet used in any suitable data center network.
- the first flow may be any suitable flow.
- the first flow may comprise IP Flow Information Export (IPFIX) flow.
- a flow is defined as a set of packets or frames passing an observation point in the network during a certain time interval. All packets belonging to a particular flow have a set of common properties. Each property is defined as the result of applying a function to the values of:
- packet header fields e.g., destination IP address
- transport header fields e.g., destination port number
- application header fields e.g., Real-time Transport Protocol (RTP) header fields [IETF RFC3550]
- one or more characteristics of the packet itself e.g., number of Multi-Protocol Label Switching (MPLS) labels, etc. .
- a packet may be defined as belonging to a flow if it completely satisfies all the defined properties of the Flow.
- the search key may include any suitable header information retrieved from the packet and/or metadata associated with the packet, such as an identifier of a port that received the packet, etc.
- the search key may include network address information (e.g., one of, or any suitable combination of two or more of, a destination address, such as a destination media access control (MAC) address, a destination IP address, etc. ; a source address, e.g., a source MAC address, a source IP address, etc. ) .
- the search key may also include transmission control protocol (TCP) port information and/or user datagram protocol (UDP) port information (e.g., one of or any suitable combination of two or more of a TCP source port, a TCP destination port, a UDP source port, a UDP destination port) .
- the search key may additionally or alternatively include other header information such as virtual local area network (VLAN) identifier (ID) , a protocol type, etc.
- the search key may additionally or alternatively include metadata associated with the packet, such as an ID of a port or network interface of the network device that received the packet.
- the search key may additionally or alternatively include IP type of service (ToS) .
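- As a non-limiting illustration, a search key could be assembled from such header fields and metadata as sketched below (the field selection and names are assumptions, not a mandated key layout):

```python
# Non-limiting illustration of a search key; the field selection is an assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class SearchKey:
    src_ip: str
    dst_ip: str
    ip_protocol: int
    src_port: int
    dst_port: int
    ingress_port: int  # metadata: identifier of the port that received the packet

def build_search_key(headers: dict, ingress_port: int) -> SearchKey:
    """Assemble a search key from parsed packet headers plus receive-port metadata."""
    return SearchKey(
        src_ip=headers["src_ip"],
        dst_ip=headers["dst_ip"],
        ip_protocol=headers["protocol"],
        src_port=headers.get("src_port", 0),
        dst_port=headers.get("dst_port", 0),
        ingress_port=ingress_port,
    )
```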
- the CAM may comprise at least one of binary CAM or ternary CAM (TCAM) .
- Binary CAMs support storage and searching of binary bits, zero or one (0, 1) .
- TCAMs support storing of zero, one, or don't care bit (0, 1, X) .
- the network device may determine, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM.
- the search pattern for the first flow may correspond to known patterns of header information of packets of the first flow and/or metadata associated with packets of the first flow.
- the search pattern for the first flow may be stored in the CAM according to various rules.
- the search pattern for the first flow may be removed from the CAM according to various rules.
- the search pattern for the first flow is stored in the CAM.
- Binary CAM requires an exact match.
- a feature of TCAM is that one or more portions of a search pattern can be designated as “don't care,” where portions marked as “don't care” do not need to match a search key in order for the TCAM to determine a match result.
- a stored word of “01XX0” in a TCAM with “X” indicating a “don't care” bit, will match any of the search keys “01000” , “01010” , “01100” , and “01110” .
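- A minimal sketch of this ternary matching behaviour follows (illustrative only; a hardware TCAM performs the comparison in parallel over all stored entries):

```python
def tcam_match(key_bits: str, pattern: str) -> bool:
    """Return True if the key matches the pattern, where 'X' in the pattern is a don't-care bit."""
    return len(key_bits) == len(pattern) and all(
        p == "X" or p == k for k, p in zip(key_bits, pattern)
    )

# The stored word "01XX0" matches "01000", "01010", "01100" and "01110", as noted above.
assert all(tcam_match(key, "01XX0") for key in ("01000", "01010", "01100", "01110"))
assert not tcam_match("11000", "01XX0")
```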
- the network device may count a first hit number of the first flow. For example, in response to the determination that the search key matches the search pattern for the first flow stored in the CAM, the network device may count a first hit number of the first flow.
- the first hit number of the first flow may be stored in the memory such as random access memory (RAM) of the network device.
- the network device may suppress providing the search key or information regarding the packet of the first flow to a processor of the network device. For example, in response to the determination that the search key matches the search pattern for the first flow stored in the CAM, the network device may suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the processor may be any suitable processor such as packet processor.
- the CAM may output an index that indicates the pattern that matches the search key.
- the index output by the CAM may point to a location in another memory, such as RAM, that stores information indicating one or more actions to be taken in connection with the packet. Examples of actions include counting a first hit number of the first flow and suppressing providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the suppressing operation may comprise dropping the search key or the information regarding the packet of the first flow.
- the information regarding the packet of the first flow may include any suitable header information retrieved from the packet and/or metadata associated with the packet, such as an identifier of a port that received the packet, etc.
- the information regarding the packet of the first flow may include the header information of the packet and/or metadata associated with the packet.
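- Putting the above together, the hit path may be sketched as below (a non-limiting illustration; the entry layout, action encoding and names are assumptions):

```python
# Non-limiting illustration of the CAM hit path; all data structures are assumptions.
cam_entries = {}      # search key -> CAM entry index (exact match here; a TCAM would mask bits)
action_table = {}     # CAM entry index -> {"flow_id": ..., "suppress": True}
hw_hit_counters = {}  # flow_id -> first hit number, maintained without involving the processor

def on_sampled_packet(search_key, send_to_processor):
    """Count the hit in hardware and suppress the processor path when the CAM matches."""
    index = cam_entries.get(search_key)
    if index is not None:
        action = action_table[index]
        flow_id = action["flow_id"]
        hw_hit_counters[flow_id] = hw_hit_counters.get(flow_id, 0) + 1
        if action["suppress"]:
            return                    # hit: counted in hardware, not forwarded to the processor
    send_to_processor(search_key)     # CAM miss (or no suppress action): software path
```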
- FIG. 3 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 300 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
- the search pattern for the first flow is not stored in the CAM.
- the search pattern for the first flow has been removed from the CAM.
- the network device determines to not store the search pattern for the first flow in the CAM.
- the search pattern for the first flow is not stored in the CAM when the functionality of using CAM to offload flow handling from the processor is disabled.
- the network device may provide the search key or the information regarding the packet of the first flow to the processor of the network device. For example, when the functionality of using CAM to offload flow handling from the processor is disabled, the network device may provide the search key or the information regarding the packet of the first flow to the processor of the network device and the method 200 of FIG. 2 may be omitted. When the functionality of using CAM to offload flow handling from the processor is enabled, the method 200 of FIG. 2 may be performed before method 300. Since the search pattern for the first flow is not stored in the CAM, the network device does not count the first hit number of the first flow and does not suppress providing the search key or the information regarding the packet of the first flow to the processor of the network device.
- the network device may determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
- a memory such as RAM may store various search templates for various flows.
- the processor of the network device may determine whether the search key or the information regarding the packet of the first flow matches a search template within search templates stored in the memory such as RAM. Since the search template for the first flow is stored in the memory such as RAM, the processor of the network device determines that the search key or the information regarding the packet of the first flow matches the search template for the first flow.
- the search template for the first flow may correspond to known patterns of header information of packets of the first flow and/or metadata associated with packets of the first flow.
- the operation of block 304 may eat up many resources. In some cases, some sampled traffic has to be dropped if the processor of the network device is overloaded.
- it may be hard for the operation of block 302 to meet the flow demand on a high throughput network device. In addition, the power consumption of the operation of block 302 is high if the processor is running heavily.
- the network device may count a second hit number of the first flow.
- the second hit number of the first flow may be stored in the memory such as RAM.
- the packet may be a first packet for the first flow and the network device may store the search template for the first flow in the memory and count the second hit number of the first flow.
- FIG. 4 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 400 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
- the network device may combine the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
- the collector may be any suitable device that hosts one or more Collecting Processes.
- the third hit number of the first flow may be a sum of the second hit number of the first flow and first hit number of the first flow.
- the network device may send the information of the first flow comprising the third hit number of the first flow to the collector.
- the information of the first flow may comprise any suitable information for example depending on a specific flow monitoring protocol.
- the information of the first flow may be an IPFIX message.
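- A small sketch of this export-time combination is given below (illustrative names; the first hit number is the CAM-counted value and the second hit number is the processor-counted value):

```python
# Non-limiting illustration; counter and message layouts are assumptions.
def export_flow(flow_id, sw_hit_counters, hw_hit_counters, send_to_collector):
    """Combine the processor-counted and CAM-counted hits into the exported (third) hit number."""
    second_hit_number = sw_hit_counters.get(flow_id, 0)  # counted by the processor
    first_hit_number = hw_hit_counters.get(flow_id, 0)   # counted via the CAM entry
    third_hit_number = first_hit_number + second_hit_number
    send_to_collector({"flow_id": flow_id, "packet_count": third_hit_number})
```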
- FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 500 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
- the network device may determine a weight of the first flow.
- the network device may determine the weight of the first flow in various ways and the present disclosure has no limit on it.
- the weight of the first flow may be determined based on machine learning or flow statistical data or flow congestion data or abnormal traffic data or usage-based billing, etc.
- the network device may determine the weight of the first flow based on flow count increase at a current interval (flow_cnt_incr_cur_interval) , flow count at a previous interval (flow_cnt_prev_interval) and flow count at the one before previous interval (flow_cnt_last_prev_prev_interval) .
- the network device may determine the weight of the first flow as below.
- Flow weight = 50%*flow_cnt_incr_cur_interval + 30%*flow_cnt_prev_interval + 20%*flow_cnt_last_prev_prev_interval.
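- For example, the weighting above may be computed as sketched below (the coefficients are those given in the formula; the function name is illustrative):

```python
def flow_weight(flow_cnt_incr_cur_interval, flow_cnt_prev_interval,
                flow_cnt_last_prev_prev_interval):
    """Weight a flow by its recent activity, favouring the current interval."""
    return (0.5 * flow_cnt_incr_cur_interval
            + 0.3 * flow_cnt_prev_interval
            + 0.2 * flow_cnt_last_prev_prev_interval)

# A flow with a 1000-packet increase this interval and 600 and 400 packets in the two
# preceding intervals gets weight 0.5*1000 + 0.3*600 + 0.2*400 = 760.0.
assert flow_weight(1000, 600, 400) == 760.0
```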
- the network device may determine the weight of the first flow periodically or based on an event. For example, when the load of the processor of the network device exceeds a predefined threshold, which means that more flow handling is required to be offloaded from the processor to the CAM, the network device may determine the flow weight.
- the network device may determine the weight of the first flow when a timer expires.
- the network device may determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
- the network device may select a flow whose weight exceeds a predefined threshold to store its search pattern in the CAM.
- FIG. 6 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 600 as well as means or modules or circuits for accomplishing other processes in conjunction with other components.
- the network device may select at least one flow whose weight exceeds a threshold as at least one candidate flow.
- the network device may find a predefined number of highest weight candidate flows from the at least one candidate flow.
- the predefined number may be determined in various ways. For example, the predefined number may be specified by an operator. The predefined number may be determined based on available resources of CAM. The predefined number may be determined based on load information of the network device. The predefined number may be determined based on the maximum CAM entries used for offloading flow handling from the processor.
- the network device may determine to store the search pattern for the first flow in the CAM.
- the network device may store the search pattern for the first flow in the CAM.
- the network device may determine to not store the search pattern for the first flow in the CAM and remove the search pattern for the first flow from the CAM if the search pattern for the first flow has been previously stored in the CAM.
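- A hedged sketch of this selection and eviction step is given below (the threshold, the predefined number and the data structures are illustrative):

```python
# Non-limiting illustration; threshold, max_entries and the flow-weight map are assumptions.
def select_flows_for_cam(flow_weights, threshold, max_entries):
    """Pick at most max_entries of the highest-weight flows whose weight exceeds the threshold."""
    candidates = [(fid, w) for fid, w in flow_weights.items() if w > threshold]
    candidates.sort(key=lambda item: item[1], reverse=True)
    return {fid for fid, _ in candidates[:max_entries]}

def adjust_cam(flow_weights, threshold, max_entries, cam_flows):
    """Store search patterns for selected flows and evict patterns of flows no longer selected."""
    selected = select_flows_for_cam(flow_weights, threshold, max_entries)
    for fid in set(cam_flows) - selected:
        cam_flows.discard(fid)    # remove the search pattern for the flow from the CAM
    for fid in selected - set(cam_flows):
        cam_flows.add(fid)        # store the search pattern for the flow in the CAM
```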
- FIG. 7 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 700 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
- the network device may obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
- the first functionality needs to be enabled/disabled by the user or the network device.
- the user may send the first information to the network device and the network device may obtain the first information.
- when the network device determines to enable or disable the first functionality, it may obtain the first information by itself.
- the first functionality may be enabled or disabled due to various reasons and the present disclosure has no limit on it.
- the first functionality may be enabled or disabled based on at least one of available resources of CAM, the load information of the network device, flow weights, etc.
- the first information may be sent to the network device via Command Line Interface (CLI) .
- CLI Command Line Interface
- An example of a CLI command for enabling/disabling the first functionality is as follows.
- the first functionality is off by default.
- the network device may enable or disable the first functionality based on the first information.
- FIG. 8 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality.
- the network device may provide means or modules or circuits for accomplishing various parts of the method 800 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
- the network device may obtain second information indicating maximum CAM entries used for the first functionality.
- the maximum CAM entries used for the first functionality may be determined in various ways and the present disclosure has no limit on it.
- the maximum CAM entries used for the first functionality may be determined based on at least one of available CAM entries, the load information of the network device, flow weights, etc.
- the maximum CAM entries used for the first functionality may be configured by the user or the network device. For example, when the user determines the maximum CAM entries used for the first functionality, the user may send the second information to the network device and the network device may obtain the second information. When the network device determines the maximum CAM entries used for the first functionality, it may obtain the second information by itself.
- the second information may be sent to the network device via CLI.
- An example of a CLI command for configuring the maximum CAM entries used for the first functionality is as follows.
- the network device may allocate a CAM entry used for the first functionality based on the second information. For example, when a CAM entry is to be allocated, the network device may check if there is an available CAM entry used for the first functionality. If so, the network device may allocate the CAM entry, otherwise the network device may reject the CAM entry allocation.
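- A minimal sketch of this allocation check against the configured maximum follows (names are illustrative; the configuration interface described above is a CLI, which is not reproduced here):

```python
# Non-limiting illustration; class and attribute names are assumptions.
class CamOffloadConfig:
    """Tracks whether the offload functionality is enabled and how many CAM entries it may use."""
    def __init__(self, enabled=False, max_entries=0):
        self.enabled = enabled          # first information: the functionality is off by default
        self.max_entries = max_entries  # second information: maximum CAM entries for the offload
        self.used_entries = 0

    def allocate_entry(self):
        """Allocate one CAM entry for the offload functionality, or reject the allocation."""
        if not self.enabled or self.used_entries >= self.max_entries:
            return False                # reject: disabled, or no entry left within the budget
        self.used_entries += 1
        return True

    def free_entry(self):
        if self.used_entries > 0:
            self.used_entries -= 1
```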
- the proposed solution is to use CAM (such as TCAM) resource on the network device (e.g. chipset) to offload flow such as IPFIX handling for the flow with high rate.
- since CAM (such as TCAM) is a common hardware (HW) resource on network devices such as routers, there is no need to install extra CAM on the network device.
- the CAM is used for classifying the flow and doing the assigned actions.
- the flow (such as IPFIX) template is composed of several elements, for example, source IP address, TCP port, protocol type, etc. Those elements can be classified by a CAM entry.
- the CAM resources are valuable, and the number of flows is much bigger than the number of CAM resources. Therefore, a dynamic CAM adjustment method is proposed for allocating the CAM resources for the flows with high rate.
- a CAM offload functionality is enabled/disabled by the user or the network device.
- the user or the network device decides if the CAM is used and the maximum CAM entries can be used.
- the proposed solution can use CAM resources to offload the flow (such as IPFIX) handling.
- When IPFIX is enabled, the user traffic is sampled on the observation points at ingress/egress direction, and the sampled traffic is sent to the IPFIX software (SW) module through a processor (such as CPU) channel. IPFIX flows are generated based on the configured IPFIX templates. Finally, the corresponding IPFIX messages are sent to the IPFIX collector.
- FIG. 9 shows an example of an IPFIX flow handling method.
- the network device 900 only depicts some exemplary elements, such as hardware (HW) chipset 901, IPFIX SW module 902, and two observation points 903 and 904.
- the user traffic is sampled by the observation points 903 and 904. Then the sampled traffic is sent to the IPFIX SW module 902 through a CPU channel.
- the IPFIX message is generated by the IPFIX SW module 902 and sent to the IPFIX collector 905 via a network 906.
- FIG. 10 shows an example of an IPFIX flow handling method according to another embodiment of the present disclosure.
- the network device 1000 only depicts some exemplary elements, such as HW TCAM 1001, IPFIX metering process 1002, two observation points 1003 and 1004, IPFIX TCAM offloading module 1005, IPFIX cache database (DB) 1006, Buffered traffic queue 1007, IPFIX Exporting process 1008.
- the user traffic is sampled by the observation points 1003 and 1004. Then the sampled traffic may be stored in Buffered traffic queue 1007 and then sent to the IPFIX metering process 1002 through a CPU channel.
- the IPFIX record may be stored in IPFIX cache DB 1006.
- the IPFIX message is generated by the IPFIX Exporting process 1008 and sent to the IPFIX collector 1010 via a network 1009.
- the IPFIX TCAM offloading module 1005 is newly added in the network device 1000. It monitors flow entries in the IPFIX cache DB 1006. If a flow meets a criterion such as high rate, then a corresponding TCAM entry for the flow may be created in the HW TCAM 1001.
- the functionalities of the TCAM entry may be to suppress snooping the sampled traffic to the CPU for the selected flow and to count the hit number of the selected flow.
- When exporting the IPFIX flow to the IPFIX collector, the network device needs to combine the information of the flow in the IPFIX cache DB 1006 and the TCAM information for that flow.
- FIG. 11 shows an example of a relation between the flows in IPFIX cache DB and TCAM entries according to an embodiment of the present disclosure.
- a flow is identified by IP protocol, source IP address, destination IP address, source port, destination port and input interface.
- Flow IDs 2, 3 and 9 are flows with high rate, and corresponding TCAM entries for these flows are created in the HW TCAM.
- the actions of TCAM are suppression of the snoop action and allocation of hit counter.
- FIG. 12 shows a flowchart of an IPFIX flow handling method according to another embodiment of the present disclosure.
- the network device may check a timer.
- the network device may restart the timer and the following steps may continue.
- the network device may add the delta flow count in HW TCAM into IPFIX flow cache.
- the network device may calculate the weight for each IPFIX flow.
- the network device may select the flows whose weight exceeds the threshold as the candidates.
- the network device may find the top highest weight candidates (e.g. at most 100) .
- the network device may free the previous allocated TCAM resource of flows which are not the top highest weight candidates.
- the network device may allocate TCAM resources for the top highest weight candidates that were not using TCAM before.
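- Reading the steps of FIG. 12 together, one pass of the dynamic adjustment may be sketched as below (a non-limiting illustration; the cache layout and names are assumptions, and the weighting here is simplified to the per-interval count increase):

```python
# Non-limiting illustration of one timer-driven adjustment pass; all structures are assumptions.
def on_timer_expiry(ipfix_cache, hw_delta_counts, threshold, max_entries, cam_flows):
    """Sync HW counters into the flow cache, re-weight the flows and re-allocate TCAM entries."""
    # Fold the delta hit counts accumulated in the HW TCAM back into the IPFIX flow cache.
    for flow_id, delta in hw_delta_counts.items():
        ipfix_cache[flow_id]["count"] += delta
    hw_delta_counts.clear()

    # Weight each flow; here simplified to the count increase during the current interval.
    weights = {}
    for fid, rec in ipfix_cache.items():
        weights[fid] = rec["count"] - rec.get("count_prev", 0)
        rec["count_prev"] = rec["count"]

    # Keep candidates above the threshold and retain at most the top max_entries of them.
    ranked = sorted((fid for fid, w in weights.items() if w > threshold),
                    key=lambda fid: weights[fid], reverse=True)
    selected = set(ranked[:max_entries])

    # Free TCAM entries of flows that dropped out, then allocate entries for the new top flows.
    for fid in set(cam_flows) - selected:
        cam_flows.discard(fid)
    for fid in selected - set(cam_flows):
        cam_flows.add(fid)
```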
- Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows.
- it may increase the flow (such as IPFIX) handling performance and capacity. For example, assuming there are 10 flows with 10Gbps rate, some sampled packets have to be dropped if the processor of the network device is used for the flow handling and the processor is overloaded. It takes little CPU resource if 10 CAM entries are used for classifying the 10 flows.
- it may decrease the power consumption as CAM is used for flow handling.
- the dynamic CAM allocation method can guarantee that CAM resources are allocated to the flows with high rate.
- FIG. 13 is a block diagram showing an apparatus suitable for use in practicing some embodiments of the disclosure.
- the network device described above may be implemented through the apparatus 1300.
- the apparatus 1300 comprises at least one processor 1321, such as a packet processor, CAM 1326 such as TCAM, and at least one memory (MEM) 1322 coupled to the processor 1321.
- the apparatus 1300 may further comprise a plurality of network interfaces 1323 (e.g., ports, link aggregate groups (LAGs) , tunnel interfaces, etc. ) configured to couple to network links.
- the network device 100 includes any suitable number of network interfaces 1323.
- the MEM 1322 stores a program (PROG) 1324.
- the PROG 1324 may include instructions that, when executed on the associated processor 1321, enable the apparatus 1300 to operate in accordance with the embodiments of the present disclosure.
- a combination of the at least one processor 1321 and the at least one MEM 1322 may form processing means 1325 adapted to implement various embodiments of the present disclosure.
- Various embodiments of the present disclosure may be implemented by computer program executable by one or more of the processor 1321, software, firmware, hardware or in a combination thereof.
- the processor 1321 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors DSPs and processors based on multicore processor architecture, as non-limiting examples.
- the processor 1321 may be configured to process packets (e.g., by analyzing header information in the packets and, optionally, metadata associated with packets, such as indicators of ports of the network device that received the packets, etc.).
- the processor 1321 may include a processing engine that is coupled to a lookup engine and TCAM.
- the processing engine is configured to use header information from a packet, and optionally metadata associated with the packet (e.g., an indicator of a port of the network device that received the packet) , to generate a search key and to provide the search key to the lookup engine and TCAM.
- the lookup engine is configured to perform a lookup in a lookup table.
- the lookup table includes information that indicates actions to be performed by the network device on packets received by the network device, such as classifying a packet as belonging to a particular packet flow.
- the MEM 1322 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memories and removable memories, as non-limiting examples.
- the CAM 1326 may be of any type suitable to the local technical environment, and may include one or more of binary CAM or ternary CAM as non-limiting examples.
- CAM 1326 has a plurality of entries storing search patterns.
- the search patterns correspond to known patterns of header information of packets and/or metadata associated with packets.
- the MEM 1322 may include a plurality of entries that store rules associated with the search patterns stored in the CAM 1326.
- the rules in the MEM 1322 may indicate processing actions to be performed on packets that match respective search patterns stored in the CAM 1326.
- the memory 1322 contains instructions executable by the processor 1321, whereby the network device operates according to any of the methods performed by the network device.
- FIG. 14 is a block diagram showing a network device according to an embodiment of the disclosure.
- the network device 1400 may comprise a first providing module 1401 configured to provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) .
- the network device 1400 may further comprise a first determining module 1402 configured to determine by the CAM that the search key matches a search pattern for the first flow stored in the CAM.
- the network device 1400 may further comprise a first counting module 1403 configured to count a first hit number of the first flow.
- the network device 1400 may further comprise a suppressing module 1404 configured to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
- the network device 1400 may further comprise a second providing module 1405 configured to provide the search key or the information regarding the packet of the first flow to the processor of the network device.
- the network device 1400 may further comprise a second determining module 1406 configured to determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
- the network device 1400 may further comprise a second counting module 1407 configured to count a second hit number of the first flow.
- the network device 1400 may further comprise a combining module 1408 configured to, when information of the first flow is exported to a collector, combine the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
- the network device 1400 may further comprise a sending module 1409 configured to send the information of the first flow comprising the third hit number of the first flow to the collector.
- the network device 1400 may further comprise a third determining module 1410 configured to determine a weight of the first flow.
- the network device 1400 may further comprise a fourth determining module 1411 configured to determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
- the network device 1400 may further comprise an obtaining module 1412 configured to obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
- the network device 1400 may further comprise an enabling module 1413 configured to enable the first functionality based on the first information.
- the network device 1400 may further comprise a disabling module 1414 configured to disable the first functionality based on the first information.
- the network device 1400 may further comprise a third obtaining module 1415 configured to obtain second information indicating maximum CAM entries used for the first functionality.
- the network device 1400 may further comprise an allocating module 1416 configured to allocate a CAM entry used for the first functionality based on the second information.
- a computer program product being tangibly stored on a computer readable storage medium and including instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the methods as described above.
- a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to carry out any of the methods as described above.
- the present disclosure may also provide a carrier containing the computer program as mentioned above, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
- the computer readable storage medium can be, for example, an optical compact disk or an electronic memory device like a RAM (random access memory) , a ROM (read only memory) , Flash memory, magnetic tape, CD-ROM, DVD, Blu-ray disc and the like.
- an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior art means, but also means for implementing the one or more functions of the corresponding apparatus described with the embodiment and it may comprise separate means for each separate function, or means that may be configured to perform two or more functions.
- these techniques may be implemented in hardware (one or more apparatuses) , firmware (one or more apparatuses) , software (one or more modules) , or combinations thereof.
- firmware or software implementation may be made through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Embodiments of the present disclosure provide methods and apparatuses for flow information handling. A method performed by a network device may comprise providing a search key corresponding to a packet of a first flow to a content addressable memory (CAM). The method may further comprise determining, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM. The method may further comprise counting a first hit number of the first flow. The method may further comprise suppressing providing the search key or information regarding the packet of the first flow to a processor of the network device.
Description
The non-limiting and exemplary embodiments of the present disclosure generally relate to the technical field of communications, and specifically to methods and apparatuses for flow information handling.
This section introduces aspects that may facilitate a better understanding of the disclosure. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is in the prior art or what is not in the prior art.
Traffic on a network can be seen as consisting of flows passing through network devices. For administrative or other purposes, it is often interesting, useful, or even necessary to have access to information about these flows that pass through the network devices.
Network flow monitoring may be an essential tool for many network administrators. Flow monitoring allows various traffic going to and from network devices to be collected and recorded. Network flow monitoring can provide visibility into causes of congestion, which applications are using the most resources, abnormal traffic patterns, or the ability to provide usage-based billing.
There are many flow monitoring protocols, such as NetFlow, sampled NetFlow (sFlow) , Internet protocol (IP) Flow Information Export (IPFIX) , Juniper flow (J-Flow) , NetStream, Appflow, etc.
IPFIX is the standard that tracks all the developments initially made by Cisco with NetFlow, including all the enhancements up to NetFlow version 10 and beyond. IPFIX captures a rich set of flow statistics, and the captured data offers a variety of uses to network planning and/or operations teams.
IPFIX is now well defined in the Internet Engineering Task Force (IETF) by several Request for Comments (RFCs) , such as RFC 7011, the disclosure of which is incorporated by reference herein in its entirety.
Flow monitoring protocols such as IPFIX may comprise three components, e.g., a Metering Process, an Exporting Process and a Collecting Process.
The Metering Process may sample the traffic on an observation point and store the sampled packets into a cache. The observation point is a location in the network where packets can be observed. Examples of the observation point include a line to which a probe is attached; a shared medium,
such as an Ethernet-based local area network (LAN) ; a single port of a router; or a set of interfaces (physical or logical) of a router.
For example, as described in RFC 7011, the Metering Process generates Flow Records. Inputs to the Metering Process are packet headers, characteristics, and Packet Treatment observed at one or more observation points. The Metering Process consists of a set of functions that includes packet header capturing, timestamping, sampling, classifying, and maintaining Flow Records. The maintenance of Flow Records may include creating new records, updating existing ones, computing Flow statistics, deriving further Flow properties, detecting Flow expiration, passing Flow Records to the Exporting Process, and deleting Flow Records.
The Exporting Process may export the sampled traffic to the Collecting Process through a message such as an IPFIX message. The message may include a template and a record.
For example, as described in RFC 7011, the Exporting Process sends IPFIX Messages to one or more Collecting Processes. The Flow Records in the Messages are generated by one or more Metering Processes.
The Collecting Process may receive the flow messages and parse them based on the flow template. Then it could do further analysis, for example, to determine whether there is a denial of service attack.
For example, as described in RFC 7011, the Collecting Process receives IPFIX Messages from one or more Exporting Processes. The Collecting Process might process or store Flow Records received within these Messages.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Usually, the Metering Process and the Exporting Process may be implemented on a network device such as a network switch, router, bridge, etc. Several observation points can be enabled to capture traffic for different applications at the same time. Customers may expect a traffic sampling rate of 1:1 to capture all the traffic.
The packet parsing and flow data caching may be managed by a processor of the network device such as a Central Processing Unit (CPU) or a packet processor, which means the capacity is limited by the processor and the memory of the network device.
The network device throughput has increased rapidly. For example, the fifth generation (5G) system or the data center may require a large network device throughput. In the past, the line bit rates of the ports of the network device were mainly 1Gbit/s, 2.5Gbit/s or 10Gbit/s. Currently, the line bit rates of the ports of the network device are mainly 40Gbit/s, 100Gbit/s or 400Gbit/s.
FIG. 1 shows an example of flow monitoring implementation according to an embodiment of the present disclosure. The observation point may be enabled on a port of the network device. To avoid eating up all the resources of the network device, some sampled traffic has to be dropped if the network device is overloaded. The traditional flow monitoring implementation can hardly meet the flow monitoring demand on a high-throughput network device.
In addition, the power consumption is high if the processor of the network device is running heavily when handling the sampled traffic.
To overcome or mitigate at least one of the above mentioned problems or other problems, an improved solution for flow information handling may be desirable.
In a first aspect of the disclosure, there is provided a method performed by a network device. The method may comprise providing a search key corresponding to a packet of a first flow to a content addressable memory (CAM) . The method may further comprise determining, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM. The method may further comprise counting a first hit number of the first flow. The method may further comprise suppressing providing the search key or information regarding the packet of the first flow to a processor of the network device.
In an embodiment, when the search pattern for the first flow is not stored in the CAM, the method may further comprise providing the search key or the information regarding the packet of the first flow to the processor of the network device. The method may further comprise determining, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow. The method may further comprise counting a second hit number of the first flow.
In an embodiment, the method may further comprise, when information of the first flow is exported to a collector, combining the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow. The method may further comprise sending the information of the first flow comprising the third hit number of the first flow to the collector.
In an embodiment, the method may further comprise determining a weight of the first flow. The method may further comprise determining whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
In an embodiment, the method may further comprise selecting at least one flow whose weight exceeds a threshold as at least one candidate flow. The method may further comprise finding a predefined number of highest weight candidate flows from the at least one candidate flow. The method may further comprise, when the first flow belongs to the predefined number of highest weight candidate flows, determining to store the search pattern for the first flow in the CAM. The method may further comprise, when the first flow does not belong to the predefined number of highest weight candidate flows, determining to not store the search pattern for the first flow in the CAM and removing the search pattern for the first flow from the CAM if the search pattern for the first flow has been previously stored in the CAM.
In an embodiment, the determining the weight of the first flow may comprise determining the weight of the first flow when a timer expires.
In an embodiment, the method may further comprise obtaining first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor. The method may further comprise enabling or disabling the first functionality based on the first information.
In an embodiment, the method may further comprise obtaining second information indicating maximum CAM entries used for the first functionality. The method may further comprise allocating a CAM entry used for the first functionality based on the second information.
In an embodiment, the first flow may comprise Internet protocol (IP) Flow Information Export (IPFIX) flow.
In an embodiment, the CAM may comprise at least one of binary CAM, or ternary CAM.
In a second aspect of the disclosure, there is provided a network device. The network device comprises a processor, a content addressable memory (CAM) coupled to the processor and a memory coupled to the processor. Said memory contains instructions executable by said processor. Said network device is operative to provide a search key corresponding to a packet of a first flow to the CAM. Said network device is further operative to determine, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM. Said network device is further operative to count a first hit number of the first flow. Said network device is further operative to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
In a third aspect of the disclosure, there is provided a network device. The network device may comprise a first providing module configured to provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) . The network device may further
comprise a first determining module configured to determine by the CAM that the search key matches a search pattern for the first flow stored in the CAM. The network device may further comprise a first counting module configured to count a first hit number of the first flow. The network device may further comprise a suppressing module configured to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
In an embodiment, the network device may further comprise a second providing module configured to provide the search key or the information regarding the packet of the first flow to the processor of the network device.
In an embodiment, the network device may further comprise a second determining module configured to determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
In an embodiment, the network device may further comprise a second counting module configured to count a second hit number of the first flow.
In an embodiment, the network device may further comprise a combining module configured to, when information of the first flow is exported to a collector, combine the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
In an embodiment, the network device may further comprise a sending module configured to send the information of the first flow comprising the third hit number of the first flow to the collector.
In an embodiment, the network device may further comprise a third determining module configured to determine a weight of the first flow.
In an embodiment, the network device may further comprise a fourth determining module configured to determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
In an embodiment, the network device may further comprise an obtaining module configured to obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
In an embodiment, the network device may further comprise an enabling module configured to enable the first functionality based on the first information.
In an embodiment, the network device may further comprise a disabling module configured to disable the first functionality based on the first information.
In an embodiment, the network device may further comprise a third obtaining module configured to obtain second information indicating maximum CAM entries used for the first functionality.
In an embodiment, the network device may further comprise an allocating module configured to allocate a CAM entry used for the first functionality based on the second information.
In a fourth aspect of the disclosure, there is provided a computer program product, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the methods according to the above first aspect.
In a fifth aspect of the disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to carry out any of the methods according to the above first aspect.
Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows. In some embodiments herein, it may increase the flow (such as IPFIX) handling performance and capacity. In some embodiments herein, it may decrease the power consumption as CAM is used for flow handling. In some embodiments herein, the dynamic CAM allocation method can guarantee that the CAM resource is allocated for the flow with high rate. The embodiments herein are not limited to the features and advantages mentioned above. A person skilled in the art will recognize additional features and advantages upon reading the following detailed description.
The above and other aspects, features, and benefits of various embodiments of the present disclosure will become more fully apparent, by way of example, from the following detailed description with reference to the accompanying drawings, in which like reference numerals or letters are used to designate like or equivalent elements. The drawings are illustrated for facilitating better understanding of the embodiments of the disclosure and not necessarily drawn to scale, in which:
FIG. 1 shows an example of flow monitoring implementation according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a method according to an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 4 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 6 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 7 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 8 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 9 shows an example of an IPFIX flow handling method;
FIG. 10 shows an example of an IPFIX flow handling method according to another embodiment of the present disclosure;
FIG. 11 shows an example of a relation between the flows in IPFIX cache DB and TCAM entries according to an embodiment of the present disclosure;
FIG. 12 shows a flowchart of an IPFIX flow handling method according to another embodiment of the present disclosure;
FIG. 13 is a block diagram showing an apparatus suitable for use in practicing some embodiments of the disclosure; and
FIG. 14 is a block diagram showing a network device according to an embodiment of the disclosure.
The embodiments of the present disclosure are described in detail with reference to the accompanying drawings. It should be understood that these embodiments are discussed only for the purpose of enabling those skilled persons in the art to better understand and thus implement the present disclosure, rather than suggesting any limitations on the scope of the present disclosure. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present disclosure should be or are in any single embodiment of the disclosure. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present disclosure. Furthermore, the described features, advantages, and characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the disclosure may be practiced
without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the disclosure.
As used herein, the term “network” refers to a network following any suitable communication standards such as new radio (NR) , long term evolution (LTE) , LTE-Advanced, wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , Code Division Multiple Access (CDMA) , Time Division Multiple Access (TDMA) , Frequency Division Multiple Access (FDMA) , Orthogonal Frequency-Division Multiple Access (OFDMA) , Single carrier frequency division multiple access (SC-FDMA) and other wireless networks. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA) , etc. UTRA includes WCDMA and other variants of CDMA. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM) . An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA) , Ultra Mobile Broadband (UMB) , IEEE 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20, Flash-OFDMA, Ad-hoc network, wireless sensor network, etc. In the following description, the terms “network” and “system” can be used interchangeably. Furthermore, the communications between two devices in the network may be performed according to any suitable communication protocols, including, but not limited to, the communication protocols as defined by a standard organization such as 3GPP. For example, the communication protocols may comprise the first generation (1G) , 2G, 3G, 4G, 4.5G, 5G, 6G communication protocols, and/or any other protocols either currently known or to be developed in the future.
The term “network device” or “network node” or “network function” refers to any suitable function which can be implemented in a network entity (physical or virtual) of a communication network. For example, the network function can be implemented either as a network element on a dedicated hardware, as a software instance running on a dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g. on a cloud infrastructure.
Virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to a provider edge node and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks) .
In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in
one or more virtual environments hosted by one or more of hardware nodes. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node) , then the provider edge node or PE may be entirely virtualized.
The functions may be implemented by one or more applications (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc. ) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications are run in virtualization environment which provides hardware comprising processing circuitry and memory. Memory contains instructions executable by processing circuitry whereby application is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
A virtualization environment comprises general-purpose or special-purpose network hardware devices comprising a set of one or more processors or processing circuitry, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs) , or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device may comprise memory which may be non-persistent memory for temporarily storing instructions or software executed by processing circuitry. Each hardware device may comprise one or more network interface controllers (NICs) , also known as network interface cards, which include a physical network interface. Each hardware device may also include non-transitory, persistent, machine-readable storage media having stored therein software and/or instructions executable by processing circuitry. Software may include any type of software including software for instantiating one or more virtualization layers (also referred to as hypervisors) , software to execute virtual machines as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
Virtual machines comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer or hypervisor. Different embodiments of the instance of virtual appliance may be implemented on one or more of virtual machines, and the implementations may be made in different ways.
During operation, processing circuitry executes software to instantiate the hypervisor or virtualization layer, which may sometimes be referred to as a virtual machine monitor (VMM) . Virtualization layer may present a virtual operating platform that appears like networking hardware to virtual machine.
References in the specification to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular
feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed terms.
As used herein, the phrase “at least one of A and B” or “at least one of A or B” should be understood to mean “only A, only B, or both A and B. ” The phrase “A and/or B” should be understood to mean “only A, only B, or both A and B” .
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
It is noted that these terms as used in this document are used only for ease of description and differentiation among nodes, devices or networks etc. With the development of the technology, other terms with the similar/same meanings may also be used.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
It is noted that some embodiments of the present disclosure are mainly described in relation to IPFIX being used as a non-limiting example for flow monitoring protocols. As such, the description of exemplary embodiments given herein specifically refers to terminology which is directly related thereto. Such terminology is only used in the context of the presented non-limiting examples and embodiments, and does naturally not limit the present disclosure in any way. Rather, any other flow monitoring protocols may equally be utilized as long as exemplary embodiments described herein are applicable.
FIG. 2 shows a flowchart of a method according to an embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 200 as well as means or modules or circuits for accomplishing other processes in conjunction with other components.
At block 202, the network device may provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) .
The network device may be any suitable network device which comprises the CAM. The network device may support functionalities such as data forwarding, data exchanging, data transmission, data processing, etc. For example, the network device may be a network switch, a router, a bridge, an Ethernet device, a Radio over Ethernet (RoE) device, etc.
The network device may receive the packet communicated over a network. The network device may process the packet communicated over the network. The processing may include generating a search key representative of the packet, determining a processing rule for the packet and processing the packet according to the processing rule.
The packet may be any suitable packet of any suitable communication protocol or network. For example, the packet may be an Internet protocol (IP) packet. The packet may be a packet in Information-Centric Networking (ICN) . The packet may be a packet used in any suitable data center network.
The first flow may be any suitable flow. In an embodiment, the first flow may comprise IP Flow Information Export (IPFIX) flow.
There may be any suitable definitions of the term ‘flow’ . For example, within the context of IPFIX, a flow is defined as a set of packets or frames passing an observation point in the network during a certain time interval. All packets belonging to a particular flow have a set of common properties. Each property is defined as the result of applying a function to the values of:
1. one or more packet header fields (e.g., destination IP address) , transport header fields (e.g., destination port number) , or application header fields (e.g., Real-time Transport Protocol (RTP) header fields [IETF RFC3550] ) .
2. one or more characteristics of the packet itself (e.g., number of Multi-Protocol Label Switching (MPLS) labels, etc. ) .
3. one or more of the fields derived from Packet Treatment (e.g., next-hop IP address, the output interface, etc. ) .
A packet may be defined as belonging to a flow if it completely satisfies all the defined properties of the Flow.
The search key may include any suitable header information retrieved from the packet and/or metadata associated with the packet, such as an identifier of a port that received the packet, etc.
In some embodiments, the search key may include network address information (e.g., one of, or any suitable combination of two or more of, a destination address, such as a destination media access control (MAC) address, a destination IP address, etc. ; a source address, e.g., a source MAC address, a source IP address, etc. ) . In some embodiments, the search key may also include transmission control protocol (TCP) port information and/or user datagram protocol (UDP) port information (e.g., one of or any suitable combination of two or more of a TCP source port, a TCP destination port, a UDP source port, a UDP destination port) . In some embodiments, the search key may additionally or alternatively include other header information such as virtual local area network (VLAN) identifier (ID) , a protocol type, etc. In some embodiments, the search key may additionally or alternatively include metadata associated with the packet, such as an ID of a port or network interface of the network device that received the packet. In some embodiments, the search key may additionally or alternatively include IP type of service (ToS) .
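By way of non-limiting illustration only, the following Python sketch shows one possible way to assemble such a search key from a handful of header fields and port metadata. The chosen fields, their bit widths, and the packing order are assumptions made for illustration and are not mandated by the embodiments herein.

def build_search_key(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                     ip_proto: int, ingress_port: int) -> int:
    # Pack selected header fields and metadata into one fixed-width key.
    # The field set and ordering here are illustrative assumptions.
    key = src_ip & 0xFFFFFFFF                        # source IPv4 address
    key = (key << 32) | (dst_ip & 0xFFFFFFFF)        # destination IPv4 address
    key = (key << 16) | (src_port & 0xFFFF)          # TCP/UDP source port
    key = (key << 16) | (dst_port & 0xFFFF)          # TCP/UDP destination port
    key = (key << 8) | (ip_proto & 0xFF)             # IP protocol number
    key = (key << 8) | (ingress_port & 0xFF)         # ingress port identifier (metadata)
    return key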
In an embodiment, the CAM may comprise at least one of binary CAM or ternary CAM (TCAM) . Binary CAMs support storage and searching of binary bits, zero or one (0, 1) . TCAMs support storing of zero, one, or don't care bit (0, 1, X) .
At block 204, the network device may determine, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM.
The search pattern for the first flow may correspond to known patterns of header information of packets of the first flow and/or metadata associated with packets of the first flow.
The search pattern for the first flow may be stored in the CAM according to various rules. In addition, the search pattern for the first flow may be removed from in the CAM according to various rules. In this embodiment, the search pattern for the first flow is stored in the CAM.
Binary CAM requires an exact match. A feature of TCAM is that one or more portions of a search pattern can be designated as “don't care, ” where portions marked as “don't care” do not need to match a search key in order for the TCAM to determine a match result. As a simple illustrative example, a stored word of “01XX0” in a TCAM, with “X” indicating a “don't care” bit, will match any of the search keys “01000” , “01010” , “01100” , and “01110” .
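A minimal sketch of this masked matching, assuming each TCAM entry is modelled as a stored value plus a care mask in which cleared mask bits are “don't care” , may look as follows (illustrative only):

def tcam_entry_matches(search_key: int, value: int, care_mask: int) -> bool:
    # A key matches when it equals the stored value on every 'care' bit.
    return (search_key & care_mask) == (value & care_mask)

# The stored word "01XX0" (bits 2 and 1 are don't care) matches all four keys below.
value, care_mask = 0b01000, 0b11001
assert all(tcam_entry_matches(k, value, care_mask)
           for k in (0b01000, 0b01010, 0b01100, 0b01110))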
At block 206, the network device may count a first hit number of the first flow. For example, in response to determining that the search key matches the search pattern for the first flow stored in the CAM, the network device may count a first hit number of the first flow. The first hit number of the first flow may be stored in the memory such as random access memory (RAM) of the network device.
At block 208, the network device may suppress providing the search key or information regarding the packet of the first flow to a processor of the network device. For example, in response to determining that the search key matches the search pattern for the first flow stored in the CAM, the network device may suppress providing the search key or information regarding the packet of the first flow to a processor of the network device. The processor may be any suitable processor such as a packet processor.
For example, whenever the CAM detects that the search key matches a pattern stored in the CAM, the CAM may output an index that indicates the pattern that matches the search key. Typically, the index output by the CAM may point to a location in another memory, such as RAM, that stores information indicating one or more actions to be taken in connection with the packet. Examples of actions include counting a first hit number of the first flow and suppressing providing the search key or information regarding the packet of the first flow to a processor of the network device. For example, the suppressing operation may comprise dropping the search key or the information regarding the packet of the first flow.
The information regarding the packet of the first flow may include any suitable header information retrieved from the packet and/or metadata associated with the packet, such as an identifier of a port that received the packet, etc.
In an embodiment, the information regarding the packet of the first flow may include the header information of the packet and/or metadata associated with the packet.
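As a hedged, non-limiting sketch of the fast path of blocks 202 to 208, the behaviour may be modelled as below. The dictionary standing in for the CAM, the action table held in RAM, and the names FlowAction and handle_lookup are assumptions made purely for illustration, not part of the disclosure.

from dataclasses import dataclass

@dataclass
class FlowAction:
    flow_id: int
    suppress_to_cpu: bool = True  # block 208: do not forward to the processor on a hit

def handle_lookup(cam_index: dict, actions: dict, hit_counters: dict, search_key: int) -> bool:
    # cam_index stands in for the CAM match logic and maps a search key to an index;
    # actions maps that index to the rule stored in RAM for the matching pattern.
    index = cam_index.get(search_key)
    if index is not None:
        action = actions[index]
        # block 206: count the first hit number of the matching flow
        hit_counters[action.flow_id] = hit_counters.get(action.flow_id, 0) + 1
        if action.suppress_to_cpu:
            return False  # suppressed: the processor never sees this packet
    return True           # CAM miss (or no suppression): hand over to the processor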
FIG. 3 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 300 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
In this embodiment, the search pattern for the first flow is not stored in the CAM. For example, the search pattern for the first flow has been removed from the CAM, or the network device has determined not to store the search pattern for the first flow in the CAM. The search pattern for the first flow is also not stored in the CAM when the functionality of using the CAM to offload flow handling from the processor is disabled.
At block 302, the network device may provide the search key or the information regarding the packet of the first flow to the processor of the network device. For example, when the functionality of using CAM to offload flow handling from the processor is disabled, the network device may provide the search key or the information regarding the packet of the first flow to the processor of the network device and the method 200 of FIG. 2 may be omitted. When the functionality of using CAM to offload flow handling from the processor is enabled, the method 200 of FIG. 2 may be performed before method 300. Since the search pattern for the first flow is not stored in the CAM, the network device does not count the first hit number of the first flow and does not suppress providing the search key or the information regarding the packet of the first flow to the processor of the network device.
At block 304, the network device may determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow. For example, a memory such as RAM may store various search templates for various flows. The processor of the network device may determine whether the search key or the information regarding the packet of the first flow matches a search template within search templates stored in the memory such as RAM. Since the search template for the first flow is stored in the memory such as RAM, the processor of the network device determines that the search key or the information regarding the packet of the first flow matches the search template for the first flow.
The search template for the first flow may correspond to known patterns of header information of packets of the first flow and/or metadata associated with packets of the first flow.
The operation of block 304 may eat up many resources. In some cases, some sampled traffic has to be dropped if the processor of the network device is overloaded. The operation of block 302 can hardly meet the flow demand on a high-throughput network device. In addition, the power consumption of the operation of block 302 is high if the processor is running heavily.
At block 306, the network device may count a second hit number of the first flow. The second hit number of the first flow may be stored in the memory such as RAM.
In an embodiment, if the search template for the first flow is not stored in the memory, the packet may be a first packet for the first flow and the network device may store the search template for the first flow in the memory and count the second hit number of the first flow.
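A minimal sketch of this processor-side path of blocks 302 to 306, assuming the flow cache is a plain in-memory mapping from a flow key to a record whose layout is hypothetical, is given below.

def process_on_cpu(flow_cache: dict, flow_key: tuple) -> dict:
    record = flow_cache.get(flow_key)
    if record is None:
        # first packet of the flow: create the search template / flow record
        record = {"second_hit_number": 0}
        flow_cache[flow_key] = record
    record["second_hit_number"] += 1  # block 306: count the second hit number
    return record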
FIG. 4 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively
coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 400 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
At block 402, when information of the first flow is exported to a collector, the network device may combine the second hit number of the first flow and the first hit number of the first flow to generate the third hit number of the first flow.
The collector may be any suitable device that hosts one or more Collecting Processes.
For example, the third hit number of the first flow may be a sum of the second hit number of the first flow and the first hit number of the first flow.
At block 404, the network device may send the information of the first flow comprising the third hit number of the first flow to the collector.
The information of the first flow may comprise any suitable information for example depending on a specific flow monitoring protocol. For example, the information of the first flow may be an IPFIX message.
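The export step of blocks 402 to 404 may be sketched as follows. The exporter callback and the message layout shown here are illustrative assumptions rather than a prescribed IPFIX encoding.

def export_flow(flow_key, second_hit_number: int, first_hit_number: int, send_fn) -> int:
    # block 402: the third hit number combines the processor-side and CAM-side counts
    third_hit_number = second_hit_number + first_hit_number
    message = {"flow": flow_key, "packet_count": third_hit_number}
    send_fn(message)  # block 404: send the flow information to the collector
    return third_hit_number

# Example usage with a stand-in sender:
# export_flow(("10.0.0.1", "10.0.0.2", 6, 1234, 80), 120, 880, print)  # -> 1000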
FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 500 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
At block 502, the network device may determine a weight of the first flow.
The network device may determine the weight of the first flow in various ways and the present disclosure has no limit on it. For example, the weight of the first flow may be determined based on machine learning or flow statistical data or flow congestion data or abnormal traffic data or usage-based billing, etc.
In an embodiment, the network device may determine the weight of the first flow based on flow count increase at a current interval (flow_cnt_incr_cur_interval) , flow count at a previous interval (flow_cnt_prev_interval) and flow count at the one before previous interval (flow_cnt_last_prev_prev_interval) .
For example, the network device may determine the weight of the first flow as below.
Flow weight = 50%*flow_cnt_incr_cur_interval + 30%*flow_cnt_prev_interval + 20%*flow_cnt_last_prev_prev_interval.
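A direct transcription of this example formula into code may look as follows; the 50%/30%/20% split is only the illustrative weighting used in this embodiment.

def flow_weight(flow_cnt_incr_cur_interval: float,
                flow_cnt_prev_interval: float,
                flow_cnt_last_prev_prev_interval: float) -> float:
    return (0.5 * flow_cnt_incr_cur_interval
            + 0.3 * flow_cnt_prev_interval
            + 0.2 * flow_cnt_last_prev_prev_interval)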
The network device may determine the weight of the first flow periodically or based on an event. For example, when the load of the processor of the network device exceeds a predefined threshold, which means that more flow handling is required to be offloaded from the processor to the CAM, the network device may determine the flow weight.
In an embodiment, the network device may determine the weight of the first flow when a timer expires.
At block 504, the network device may determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
For example, the network device may select a flow whose weight exceeds a predefined threshold to store its search pattern in the CAM.
FIG. 6 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 600 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
At block 602, the network device may select at least one flow whose weight exceeds a threshold as at least one candidate flow.
At block 604, the network device may find a predefined number of highest weight candidate flows from the at least one candidate flow.
The predefined number may be determined in various ways. For example, the predefined number may be specified by an operator. The predefined number may be determined based on available resources of CAM. The predefined number may be determined based on load information of the network device. The predefined number may be determined based on the maximum CAM entries used for offloading flow handling from the processor.
At block 606, when the first flow belongs to the predefined number of highest weight candidate flows, the network device may determine to store the search pattern for the first flow in the CAM. When the network device determines to store the search pattern for the first flow in the CAM, the network device may store the search pattern for the first flow in the CAM.
At block 608, when the first flow does not belong to the predefined number of highest weight candidate flows, the network device may determine to not store the search pattern for the first flow in the CAM and remove the search pattern for the first flow from the CAM if the search pattern for the first flow has been previously stored in the CAM.
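As a non-limiting sketch of blocks 602 to 608, the selection and eviction logic may be expressed as below. The two helper functions stand in for chipset driver calls and are hypothetical; weights maps a flow identifier to its weight, and cam_flows is the set of flows whose search patterns are currently installed in the CAM.

def store_pattern_in_cam(flow_id):
    pass  # placeholder for the real driver call that installs a search pattern

def remove_pattern_from_cam(flow_id):
    pass  # placeholder for the real driver call that removes a search pattern

def adjust_cam_flows(weights: dict, cam_flows: set, threshold: float, max_entries: int) -> set:
    candidates = {f: w for f, w in weights.items() if w > threshold}                # block 602
    top = set(sorted(candidates, key=candidates.get, reverse=True)[:max_entries])   # block 604
    for flow_id in cam_flows - top:   # block 608: evict flows that fell out of the top set
        remove_pattern_from_cam(flow_id)
    for flow_id in top - cam_flows:   # block 606: install patterns for newly selected flows
        store_pattern_in_cam(flow_id)
    return top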
FIG. 7 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 700 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
At block 702, the network device may obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
As the CAM is key resource on the network device such as switch/router, the first functionality needs to be enabled/disabled by the user or the network device. For example, when the user determines to enable or disable the first functionality, the user may send the first information to the network device and the network device may obtain the first information. When the network device determines to enable or disable the first functionality, it may obtain the first information by itself.
The first functionality may be enabled or disabled due to various reasons and the present disclosure has no limit on it. For example, the first functionality may be enabled or disabled based on at least one of available resources of CAM, the load information of the network device, flow weights, etc.
For example, the first information may be sent to the network device via Command Line Interface (CLI) . An example of CLI for enabling/disabling the first functionality is as following.
[local] router6000 (config) #ipfix tcam-offload [on/off] .
In an embodiment, the first functionality is off by default.
At block 704, the network device may enable or disable the first functionality based on the first information.
FIG. 8 shows a flowchart of a method according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a network device or any other entity having similar functionality. As such, the network device may provide means or modules or circuits for accomplishing various parts of the method 800 as well as means or modules or circuits for accomplishing other processes in conjunction with other components. For some parts which have been described in the above embodiments, detailed description thereof is omitted here for brevity.
At block 802, the network device may obtain second information indicating maximum CAM entries used for the first functionality.
The maximum CAM entries used for the first functionality may be determined in various ways and the present disclosure has no limit on it. For example, the maximum CAM entries used for the first functionality may be determined based on at least one of available CAM entries, the load information of the network device, flow weights, etc.
As the CAM is key resource on the network device such as switch/router, the maximum CAM entries used for the first functionality may be configured by the user or the network device. For example, when the user determines the maximum CAM entries used for the first functionality, the user may send the second information to the network device and the network device may obtain the second information. When the network device determines the maximum CAM entries used for the first functionality, it may obtain the second information by itself.
For example, the second information may be sent to the network device via CLI. An example of CLI for configuring the maximum CAM entries used for the first functionality is as following.
[local] router6000 (config) #ipfix tcam-offload-max-entries [maximum count] .
At block 804, the network device may allocate a CAM entry used for the first functionality based on the second information. For example, when a CAM entry is to be allocated, the network device may check if there is an available CAM entry used for the first functionality. If so, the network device may allocate the CAM entry, otherwise the network device may reject the CAM entry allocation.
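A hedged sketch of the allocation check of block 804 is given below; the counter names are chosen only for illustration.

def allocate_offload_entry(entries_in_use: int, max_offload_entries: int):
    # Returns (granted, new_entries_in_use); the entry is granted only while the
    # configured maximum for the first functionality is not exceeded.
    if entries_in_use < max_offload_entries:
        return True, entries_in_use + 1   # allocate the CAM entry
    return False, entries_in_use          # reject the allocation request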
In an embodiment, the proposed solution is to use CAM (such as TCAM) resources on the network device (e.g. chipset) to offload flow handling (such as IPFIX handling) for the flow with high rate.
In an embodiment, since CAM such as TCAM is a common hardware (HW) resource on the network device such as a router, there is no need to install an extra CAM on the network device. The CAM is used for classifying the flow and doing the assigned actions. The flow (such as IPFIX) template is a combination of several elements, for example, source IP address, TCP port, protocol type, etc. Those elements can be classified by a CAM entry.
In an embodiment, the CAM resources are valuable, and the number of flows is much bigger than the number of CAM resources. Therefore, a dynamic CAM adjustment method is proposed for allocating the CAM resources for the flows with high rate.
In an embodiment, a CAM offload functionality is enabled/disabled by the user or the network device. The user or the network device decides whether the CAM is used and how many CAM entries can be used at most.
In an embodiment, the proposed solution can use CAM resources to offload the flow (such as IPFIX) handling.
In an embodiment, dynamic CAM resource allocation is proposed to make sure the CAM resources are allocated for the flows with high rate.
When IPFIX is enabled, the user traffic is sampled at the observation points in the ingress/egress direction, and the sampled traffic is sent to an IPFIX software (SW) module through a processor (such as CPU) channel. IPFIX flows are generated based on the configured IPFIX templates. Finally, the corresponding IPFIX messages are sent to the IPFIX collector.
FIG. 9 shows an example of an IPFIX flow handling method.
For simplicity, the network device 900 only depicts some exemplary elements, such as hardware (HW) chipset 901, IPFIX SW module 902, and two observation points 903 and 904.
The user traffic is sampled by the observation points 903 and 904. Then the sampled traffic is sent to the IPFIX SW module 902 through a CPU channel. The IPFIX message is generated by the IPFIX SW module 902 and sent to the IPFIX collector 905 via a network 906.
FIG. 10 shows an example of an IPFIX flow handling method according to another embodiment of the present disclosure.
For simplicity, the network device 1000 only depicts some exemplary elements, such as HW TCAM 1001, IPFIX metering process 1002, two observation points 1003 and 1004, IPFIX TCAM offloading module 1005, IPFIX cache database (DB) 1006, Buffered traffic queue 1007, IPFIX Exporting process 1008.
The user traffic is sampled by the observation points 1003 and 1004. Then the sampled traffic may be stored in Buffered traffic queue 1007 and then sent to the IPFIX metering process 1002 through a CPU channel. The IPFIX record may be stored in IPFIX cache DB 1006. The IPFIX message is generated by the IPFIX Exporting process 1008 and sent to the IPFIX collector 1010 via a network 1009.
The IPFIX TCAM offloading module 1005 is newly added in the network device 1000. It monitors flow entries in the IPFIX cache DB 1006. If the flow meets a criterion such as high rate, then a corresponding TCAM entry for the flow may be created in the HW TCAM 1001.
The functionalities of the TCAM entry may be to suppress snooping the sampled traffic to the CPU for the selected flow and to count the hit number of the selected flow.
When exporting the IPFIX flow to the IPFIX collector, the network device needs to combine the information of the flow in the IPFIX cache DB 1006 and the TCAM information for that flow.
FIG. 11 shows an example of a relation between the flows in IPFIX cache DB and TCAM entries according to an embodiment of the present disclosure.
As shown, a flow is identified by IP protocol, source IP address, destination IP address, source port, destination port and input interface. Flow IDs 2, 3 and 9 are flows with high rate and corresponding TCAM entries for these flows are created in the HW TCAM. The actions of the TCAM are suppression of the snoop action and allocation of a hit counter.
FIG. 12 shows a flowchart of an IPFIX flow handling method according to another embodiment of the present disclosure.
At step 1201, the network device may check a timer.
If the timer expires, the network device may restart the timer and the following steps may continue.
At step 1202, the network device may add the delta flow counts in the HW TCAM into the IPFIX flow cache.
At step 1203, the network device may calculate the weight of each IPFIX flow.
At step 1204, the network device may select the flows whose weights exceed the threshold as the candidates.
At step 1205, the network device may find the top highest weight candidates (e.g. at most 100) .
At step 1206, the network device may free the previously allocated TCAM resources of flows which are not among the top highest weight candidates.
At step 1207, the network device may allocate TCAM resources for the top highest weight candidates that were not using TCAM before.
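One pass of steps 1201 to 1207 may be sketched as follows, under the assumption that each cache entry keeps the per-interval counters used by the weight formula and that the TCAM delta counts have already been read back from hardware. The data layout and parameter names are illustrative only; the driver calls for freeing and allocating TCAM entries are omitted here (see the adjust_cam_flows sketch above).

def periodic_adjustment(ipfix_cache: dict, tcam_delta_counts: dict,
                        threshold: float, max_entries: int) -> set:
    # step 1202: fold the delta flow counts read from the HW TCAM into the IPFIX flow cache
    for flow_id, delta in tcam_delta_counts.items():
        ipfix_cache[flow_id]["count"] += delta
    # step 1203: calculate the weight of each IPFIX flow
    weights = {flow_id: (0.5 * rec["incr_cur"] + 0.3 * rec["prev"] + 0.2 * rec["prev_prev"])
               for flow_id, rec in ipfix_cache.items()}
    # steps 1204-1205: keep candidates above the threshold and pick the top highest weights
    candidates = {f: w for f, w in weights.items() if w > threshold}
    top = set(sorted(candidates, key=candidates.get, reverse=True)[:max_entries])
    # steps 1206-1207: flows outside the top set have their TCAM entries freed, and
    # newly selected flows have TCAM entries allocated (driver calls omitted)
    return top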
Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows. In some embodiments herein, it may increase the flow (such as IPFIX) handling performance and capacity. For example, assuming there are 10 flows with 10Gbps rate, some sampled packets have to be dropped if the processor of the network device is used for the flow handling and the processor is overloaded. It takes little CPU resource if 10 CAM entries are used for classifying the 10 flows. In some embodiments herein, it may decrease the power consumption as CAM is used for flow handling. In some embodiments herein, the dynamic CAM allocation method can guarantee that the CAM resource is allocated for the flow with high rate. The embodiments herein are not limited to the features and advantages mentioned above. A person skilled in the art will recognize additional features and advantages upon reading the following detailed description.
FIG. 13 is a block diagram showing an apparatus suitable for use in practicing some embodiments of the disclosure. For example, the network device described above may be implemented through the apparatus 1300.
The apparatus 1300 comprises at least one processor 1321, such as a packet processor, CAM 1326 such as TCAM, and at least one memory (MEM) 1322 coupled to the processor 1321. The apparatus 1300 may further comprise a plurality of network interfaces 1323 (e.g., ports, link aggregate groups (LAGs) , tunnel interfaces, etc. ) configured to couple to network links. In various embodiments, the apparatus 1300 includes any suitable number of network interfaces 1323. The MEM 1322 stores a program (PROG) 1324. The PROG 1324 may include instructions that, when executed on the associated processor 1321, enable the apparatus 1300 to operate in accordance with the embodiments of the present disclosure. A combination of the at least one processor 1321 and the at least one MEM 1322 may form processing means 1325 adapted to implement various embodiments of the present disclosure.
Various embodiments of the present disclosure may be implemented by a computer program executable by one or more of the processors 1321, by software, firmware, hardware, or by a combination thereof.
The processor 1321 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples.
The processor 1321 may be configured to process packets (e.g., by analyzing header information in the packets and, optionally, metadata associated with the packets, such as indicators of the ports of the network device that received the packets, etc.). The processor 1321 may include a processing engine that is coupled to a lookup engine and the TCAM. The processing engine is configured to use header information from a packet, and optionally metadata associated with the packet (e.g., an indicator of a port of the network device that received the packet), to generate a search key and to provide the search key to the lookup engine and the TCAM. The lookup engine is configured to perform a lookup in a lookup table. The lookup table includes information that indicates actions to be performed by the network device on packets received by the network device, such as classifying packets as belonging to particular packet flows.
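For illustration only, and assuming a hypothetical fixed key layout (a real packet processor would build the key in hardware from the parsed header fields and ingress metadata), the search key generation could be sketched as:

```python
import ipaddress
import struct

def build_search_key(ip_protocol: int, src_ip: str, dst_ip: str,
                     src_port: int, dst_port: int, ingress_port: int) -> bytes:
    """Pack the parsed header fields and the ingress-port metadata into a
    fixed-width key that can be presented to the lookup engine / TCAM."""
    return struct.pack(
        "!B4s4sHHH",
        ip_protocol,
        ipaddress.IPv4Address(src_ip).packed,
        ipaddress.IPv4Address(dst_ip).packed,
        src_port,
        dst_port,
        ingress_port,
    )

# Example: a key for a TCP flow received on ingress port 7.
key = build_search_key(6, "10.0.0.1", "10.0.0.2", 12345, 443, 7)
```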
The MEM 1322 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memories and removable memories, as non-limiting examples.
The CAM 1326 may be of any type suitable to the local technical environment, and may include one or more of binary CAM or ternary CAM as non-limiting examples.
The CAM 1326 has a plurality of entries storing search patterns. The search patterns correspond to known patterns of header information of packets and/or metadata associated with packets. The MEM 1322 may include a plurality of entries that store rules associated with the search patterns stored in the CAM 1326. The rules in the MEM 1322 may indicate processing actions to be performed on packets that match respective search patterns stored in the CAM 1326.
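A minimal software model of such a ternary match and rule lookup, with hypothetical TcamEntry and tcam_lookup names (real TCAM hardware compares all entries in parallel and returns the highest-priority match), might look like this:

```python
from typing import Optional

class TcamEntry:
    """One ternary search pattern: bits where the mask is 1 must match the key."""
    def __init__(self, value: bytes, mask: bytes):
        self.value = value
        self.mask = mask

    def matches(self, key: bytes) -> bool:
        return all((k & m) == (v & m) for k, v, m in zip(key, self.value, self.mask))

def tcam_lookup(entries: list, key: bytes) -> Optional[int]:
    """Return the index of the first matching entry, or None on a miss.
    The index is then used to fetch the associated rule from memory, e.g.
    'suppress the snoop action and increment hit counter N'."""
    for index, entry in enumerate(entries):
        if entry.matches(key):
            return index
    return None
```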
In an embodiment where the apparatus is implemented as or at the network device, the memory 1322 contains instructions executable by the processor 1321, whereby the network device operates according to any of the methods performed by the network device.
FIG. 14 is a block diagram showing a network device according to an embodiment of the disclosure. As shown, the network device 1400 may comprise a first providing module 1401 configured to provide a search key corresponding to a packet of a first flow to a content addressable memory (CAM) . The network device 1400 may further comprise a first determining module 1402 configured to determine by the CAM that the search key matches a search pattern for the first flow stored in the CAM. The network device 1400 may further comprise a first counting module 1403 configured to count a first hit number of the first flow. The network device 1400 may further comprise a suppressing module 1404 configured to suppress providing the search key or information regarding the packet of the first flow to a processor of the network device.
In an embodiment, the network device 1400 may further comprise a second providing module 1405 configured to provide the search key or the information regarding the packet of the first flow to the processor of the network device.
In an embodiment, the network device 1400 may further comprise a second determining module 1406 configured to determine, by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow.
In an embodiment, the network device 1400 may further comprise a second counting module 1407 configured to count a second hit number of the first flow.
In an embodiment, the network device 1400 may further comprise a combining module 1408 configured to, when information of the first flow is exported to a collector, combine the first hit number of the first flow and the second hit number of the first flow to generate a third hit number of the first flow.
In an embodiment, the network device 1400 may further comprise a sending module 1409 configured to send the information of the first flow comprising the third hit number of the first flow to the collector.
In an embodiment, the network device 1400 may further comprise a third determining module 1410 configured to determine a weight of the first flow.
In an embodiment, the network device 1400 may further comprise a fourth determining module 1411 configured to determine whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
In an embodiment, the network device 1400 may further comprise an obtaining module 1412 configured to obtain first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor.
In an embodiment, the network device 1400 may further comprise an enabling module 1413 configured to enable the first functionality based on the first information.
In an embodiment, the network device 1400 may further comprise a disabling module 1414 configured to disable the first functionality based on the first information.
In an embodiment, the network device 1400 may further comprise a second obtaining module 1415 configured to obtain second information indicating maximum CAM entries used for the first functionality.
In an embodiment, the network device 1400 may further comprise an allocating module 1416 configured to allocate a CAM entry used for the first functionality based on the second information.
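As a non-authoritative sketch of how the first information (enable/disable) and the second information (maximum CAM entries) could gate CAM entry allocation, with all class and method names hypothetical:

```python
class CamOffloadConfig:
    """Configuration derived from the first and second information."""
    def __init__(self, offload_enabled: bool, max_cam_entries: int):
        self.offload_enabled = offload_enabled   # first information
        self.max_cam_entries = max_cam_entries   # second information

class CamAllocator:
    def __init__(self, config: CamOffloadConfig):
        self.config = config
        self.used_entries = 0

    def try_allocate(self, flow_key) -> bool:
        """Allocate a CAM entry for the flow only if the offload functionality is
        enabled and the configured maximum number of entries is not yet reached."""
        if not self.config.offload_enabled:
            return False
        if self.used_entries >= self.config.max_cam_entries:
            return False
        self.used_entries += 1
        # The search pattern for flow_key would be programmed into the CAM here.
        return True
```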
According to an aspect of the disclosure, there is provided a computer program product tangibly stored on a computer readable storage medium and including instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the methods as described above.
According to an aspect of the disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to carry out any of the methods as described above.
In addition, the present disclosure may also provide a carrier containing the computer program as mentioned above, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. The computer readable storage medium can be, for example, an optical compact disc or an electronic memory device like a RAM (random access memory), a ROM (read only memory), Flash memory, magnetic tape, CD-ROM, DVD, Blu-ray disc and the like.
The techniques described herein may be implemented by various means, so that an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior art means, but also means for implementing the one or more functions of the corresponding apparatus described with the embodiment, and it may comprise separate means for each separate function, or means that may be configured to perform two or more functions. For example, these techniques may be implemented in hardware (one or more apparatuses), firmware (one or more apparatuses), software (one or more modules), or combinations thereof. For firmware or software, the implementation may be made through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
Exemplary embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the subject matter described herein, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any implementation or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular implementations. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The above described embodiments are given for describing rather than limiting the disclosure, and it is to be understood that modifications and variations may be resorted to without departing from the spirit and scope of the disclosure as those skilled in the art readily understand. Such modifications and variations are considered to be within the scope of the disclosure and the appended claims. The protection scope of the disclosure is defined by the accompanying claims.
Claims (14)
- A method (200) performed by a network device, comprising:
providing (202) a search key corresponding to a packet of a first flow to a content addressable memory (CAM);
determining (204), by the CAM, that the search key matches a search pattern for the first flow stored in the CAM;
counting (206) a first hit number of the first flow; and
suppressing (208) providing the search key or information regarding the packet of the first flow to a processor of the network device.
- The method according to claim 1, wherein when the search pattern for the first flow is not stored in the CAM, the method further comprises:
providing (302) the search key or the information regarding the packet of the first flow to the processor of the network device;
determining (304), by the processor of the network device, that the search key or the information regarding the packet of the first flow matches a search template for the first flow; and
counting (306) a second hit number of the first flow.
- The method according to claim 2, further comprising:
when information of the first flow is exported to a collector, combining (402) the first hit number of the first flow and the second hit number of the first flow to generate a third hit number of the first flow; and
sending (404) the information of the first flow comprising the third hit number of the first flow to the collector.
- The method according to any of claims 1-3, further comprising:
determining (502) a weight of the first flow; and
determining (504) whether to store the search pattern for the first flow in the CAM based on the weight of the first flow.
- The method according to claim 4, wherein the determining whether to store the search pattern for the first flow in the CAM based on the weight of the first flow comprises:
selecting (602) at least one flow whose weight exceeds a threshold as at least one candidate flow;
finding (604) a predefined number of highest weight candidate flows from the at least one candidate flow;
when the first flow belongs to the predefined number of highest weight candidate flows, determining (606) to store the search pattern for the first flow in the CAM; and
when the first flow does not belong to the predefined number of highest weight candidate flows, determining (608) to not store the search pattern for the first flow in the CAM and removing (608) the search pattern for the first flow from the CAM if the search pattern for the first flow has been previously stored in the CAM.
- The method according to any of claims 4-5, wherein the determining the weight of the first flow comprises:
determining the weight of the first flow when a timer expires.
- The method according to any of claims 1-6, further comprising:
obtaining (702) first information indicating enabling or disabling a first functionality of using CAM to offload flow handling from the processor; and
enabling or disabling (704) the first functionality based on the first information.
- The method according to claim 7, further comprising:
obtaining (802) second information indicating maximum CAM entries used for the first functionality; and
allocating (804) a CAM entry used for the first functionality based on the second information.
- The method according to any of claims 1-8, wherein the first flow comprises an Internet Protocol (IP) Flow Information Export (IPFIX) flow.
- The method according to any of claims 1-9, wherein the CAM comprises at least one of:
binary CAM, or
ternary CAM.
- A network device (1300), comprising:
a processor (1321);
a memory (1322) coupled to the processor (1321); and
a content addressable memory (CAM) (1326) coupled to the processor (1321),
said memory (1322) containing instructions executable by said processor (1321), whereby said network device (1300) is operative to:
provide a search key corresponding to a packet of a first flow to the CAM;
determine, by the CAM, that the search key matches a search pattern for the first flow stored in the CAM;
count a first hit number of the first flow; and
suppress providing the search key or information regarding the packet of the first flow to the processor (1321) of the network device.
- The network device according to claim 11, wherein the network device is further operative to perform the method of any one of claims 2 to 10.
- A computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform the method according to any one of claims 1 to 10.
- A computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform the method according to any of claims 1 to 10.