WO2011068091A1 - Server and Flow Control Program - Google Patents
Server and Flow Control Program
- Publication number
- WO2011068091A1 (PCT/JP2010/071316)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- flow
- transmission
- packet
- reception
- function
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/38—Flow based routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/021—Ensuring consistency of routing table updates, e.g. by using epoch numbers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/58—Association of routers
- H04L45/586—Association of routers of virtual routers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/70—Virtual switches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
Definitions
- the present invention relates to a server based on virtualization technology and a flow control program executed by the server.
- a virtual machine (VM: Virtual Machine) is built on a physical server by virtualization software such as VMware (registered trademark) or Xen (registered trademark).
- a virtual switch (Virtual Switch) is also built in the physical server along with the virtual machine.
- the virtual switch is a software-based packet switch, and relays communication between virtual machines and between a virtual machine and the outside, as shown in FIGS. 1A and 1B. Since the virtual switch is adjacent to the virtual machine, traffic control is easy. Moreover, since the virtual switch is software-based, it has excellent flexibility and expandability.
- I/O (input/output) virtualization technology such as VT-d/VT-c (registered trademark) is known.
- data can be directly exchanged between a virtual machine and a network interface card (NIC: Network Interface Card) without using a virtual switch.
- as shown in FIG. 2, a virtual NIC is constructed for each virtual machine. By using these virtual NICs, the virtual switch can be completely bypassed. Such processing is hereinafter referred to as “NIC offload”.
- Patent Literature 1 Japanese Patent Publication No. 2007-522583
- the device disclosed in Patent Literature 1 includes at least one router and a data structure.
- the data structure is used by the at least one router to organize connections between one or more virtual network interface cards (VNICs) to form a virtual network.
- Japanese Patent Laid-Open No. 2008-102929 discloses a technique for communicating with a network adapter using a queue data structure.
- the device driver invokes the device driver service to initialize address translation and protection table (ATPT) entries in the root complex for the queue data structure.
- the device driver service returns an untranslated address to the device driver, which is then provided to the network adapter.
- in response to retrieving a queue element from the queue data structure, the network adapter requests translation of the untranslated address specified in the queue element, so that the translated address can be stored in the network adapter before a data packet targeting the buffer associated with the queue element is received.
- Japanese Patent Laid-Open No. 2009-151745 discloses a virtual machine monitor that operates a virtual server on a multiprocessor system.
- the virtual machine monitor includes a physical hardware information acquisition unit, a reception unit, and an allocation processing unit.
- the physical hardware information acquisition unit acquires hardware configuration information of the multiprocessor system, including physical location information of hardware such as the processors, memory, I/O devices, and network.
- the reception unit accepts a generation request that includes the number of processors, the amount of memory, the I/O devices, and the resource allocation policy of the virtual server to be generated.
- the allocation processing unit allocates an I / O device to the virtual server based on the received generation request, and then allocates a processor and memory to the virtual server so as to satisfy the allocation policy.
- the virtual switch relays all traffic between the virtual machine and the outside. That is, traffic concentrates on the virtual switch.
- the virtual switch is software-based, and in some cases switch processing proceeds in a single thread. In that case, the virtual switch cannot keep up with the concentrated traffic. For this reason, the virtual switch tends to be a bottleneck in network processing.
- with NIC offload, the virtual switch can be completely bypassed. In that case, however, the packet communication path is fixed, and the advantage of flexible traffic control using the virtual switch cannot be enjoyed.
- One object of the present invention is to suppress the concentration of traffic to the virtual switch while enjoying the advantages of flexible traffic control using the virtual switch.
- a server in one aspect of the present invention includes a processing device, a network adapter connected to the processing device, and a path switching unit.
- the processing device includes a virtual machine and a virtual switch that relays a packet transmitted and received by the virtual machine to and from the outside.
- the network adapter has a transmission function for transmitting and receiving packets to and from a virtual machine without going through a virtual switch.
- the path switching unit dynamically switches the flow of packets transmitted and received by the virtual machine to the first path pattern flow or the second path pattern flow. Then, the path switching unit causes the transmission function of the network adapter to process the first path pattern flow and causes the virtual switch to process the second path pattern flow.
- a flow control program in another aspect of the present invention is executed by a server that includes a processing device and a network adapter connected to the processing device.
- the processing device includes a virtual machine and a virtual switch that relays a packet transmitted and received by the virtual machine to and from the outside.
- the network adapter has a transmission function for transmitting and receiving packets to and from a virtual machine without going through a virtual switch.
- the flow control program causes the server to realize a path switching function.
- the path switching function dynamically switches the flow of packets transmitted and received by the virtual machine to the first path pattern flow or the second path pattern flow.
- the path switching function causes the transmission function of the network adapter to process the first path pattern flow and causes the virtual switch to process the second path pattern flow.
- a network adapter in still another aspect of the present invention is connected to a processing device of a server. The processing device includes a virtual machine and a virtual switch that relays packets transmitted and received by the virtual machine to and from the outside.
- the network adapter has a transmission function for transmitting and receiving packets to and from a virtual machine without going through a virtual switch.
- the network adapter includes a path switching unit. The path switching unit dynamically switches the flow of packets transmitted and received by the virtual machine to the first path pattern flow or the second path pattern flow. Then, the path switching unit causes the transmission function to process the first path pattern flow and causes the virtual switch to process the second path pattern flow.
- FIG. 1A is a conceptual diagram illustrating an example of a virtual switch.
- FIG. 1B is a conceptual diagram illustrating another example of a virtual switch.
- FIG. 2 is a conceptual diagram showing the NIC offload function.
- FIG. 3 is a block diagram schematically showing a configuration example of the network system according to the embodiment of the present invention.
- FIG. 4 is a block diagram showing a hardware configuration of the server according to the embodiment of the present invention.
- FIG. 5 is a block diagram conceptually showing the structure of the server according to the embodiment of the present invention.
- FIG. 6 is a block diagram showing a basic configuration of the network adapter according to the embodiment of the present invention.
- FIG. 7 is a conceptual diagram showing an example of the reception filter table in the embodiment of the present invention.
- FIG. 8 is a schematic diagram for explaining the function of the path switching unit according to the embodiment of the present invention.
- FIG. 9 is a conceptual diagram for explaining an example of the route switching process according to the embodiment of the present invention.
- FIG. 10 is a conceptual diagram showing an example of the transmission filter table in the embodiment of the present invention.
- FIG. 11 is a conceptual diagram for explaining two route patterns in the embodiment of the present invention.
- FIG. 12 is a conceptual diagram showing an example of the transmission / reception filter table in the embodiment of the present invention.
- FIG. 13 is a block diagram illustrating a configuration example of the virtual switch according to the embodiment of the present invention.
- FIG. 14 is a conceptual diagram for explaining the cache control according to the embodiment of the present invention.
- FIG. 15 is a block diagram showing the configuration of the virtual switch according to the first embodiment of the present invention.
- FIG. 16 is a flowchart showing the processing in the first embodiment.
- FIG. 17 is a conceptual diagram showing an example of a flow table in the embodiment of the present invention.
- FIG. 18 is a conceptual diagram showing an example of the port-VM correspondence table in the embodiment of the present invention.
- FIG. 19 is a conceptual diagram showing processing in the first embodiment.
- FIG. 20 is a conceptual diagram for explaining the processing in the first embodiment.
- FIG. 21 is a block diagram showing a configuration example according to the second embodiment of the present invention.
- FIG. 22 is a flowchart showing the processing in the second embodiment.
- FIG. 23 is a block diagram showing a configuration example according to the third embodiment of the present invention.
- FIG. 24 is a block diagram for explaining another example of the route switching processing according to the embodiment of the present invention.
- FIG. 25 is a conceptual diagram showing an example of a flow table referred to by the branch function of the virtual machine shown in FIG.
- FIG. 26 is a block diagram showing the configuration of the virtual switch in the case of FIG.
- FIG. 3 is a block diagram schematically showing a configuration example of the network system 1 according to the present embodiment.
- the network system 1 includes a plurality of servers 10 connected to a network (not shown). A plurality of switches are arranged between the servers 10.
- the network system 1 is connected to an external network via a network appliance such as a firewall or a load balancer.
- the network system 1 is a network system in a data center, for example.
- FIG. 4 is a block diagram showing a hardware configuration of the server (physical server) 10 according to the present embodiment.
- the server 10 includes a CPU (Central Processing Unit) 20, a main memory 30, and a network adapter (network interface device) 100.
- the network adapter 100 is also called a network card or NIC (Network Interface Card).
- the CPU 20, the main memory 30, and the network adapter 100 are connected to each other.
- the main memory 30 stores virtualization software and a flow control program PROG.
- the virtualization software is a computer program executed by the CPU 20, and constructs a virtual machine (VM: Virtual Machine) or a virtual switch (Virtual Switch) on the server 10.
- the flow control program PROG is a computer program executed by the CPU 20 and implements a “route switching function” described later on the server 10.
- the virtualization software and the flow control program PROG may be recorded on a computer-readable recording medium.
- the flow control program PROG may be incorporated in the virtualization software.
- FIG. 5 is a block diagram conceptually showing the configuration of the server 10 according to the present embodiment.
- the server 10 includes a processing device 40 and a network adapter 100 connected to the processing device 40.
- the processing device 40 is realized by the cooperation of the CPU 20, the main memory 30, the virtualization software, and the flow control program PROG, and has various functions based on the virtual environment.
- the processing device 40 includes a hypervisor 50, a virtual switch 200, and at least one virtual machine (virtual server) 300.
- the hypervisor 50 manages the operation of each virtual machine 300 and provides a communication transmission path between the virtual machines 300.
- the hypervisor 50 is also called a virtual machine monitor (VMM).
- the virtual switch 200 relays packets transmitted / received by the virtual machine 300 to / from the outside.
- the virtual switch 200 may operate on a control virtual machine (Control VM) (see FIG. 1A) or may operate on the hypervisor 50 (see FIG. 1B). Each application runs on a virtual machine 300 (Guest VM). The control virtual machine (Control VM) is also called an input/output processing virtual machine (IOVM).
- NIC offload by the network adapter 100 is possible. That is, data can be directly exchanged between the network adapter 100 and the virtual machine 300 without going through the virtual switch 200.
- FIG. 6 is a block diagram showing a basic configuration of the network adapter 100 according to the present embodiment.
- the network adapter 100 includes a virtual NIC (indicated by a broken line in the figure), a reception filter 110, a transmission filter 120, a storage device 130, and a data direct transmission function 140.
- the data direct transmission function 140 is a function for directly transmitting / receiving a packet to / from the virtual machine 300 without going through the virtual switch 200. Specifically, the data direct transmission function 140 performs direct data transfer between the transmission / reception queue of the network adapter 100 and the address space used by the virtual machine 300.
- a virtual NIC is provided for each virtual machine 300 (VM1, VM2,).
- Each virtual NIC includes a reception queue 101 and a transmission queue 102.
- the reception queue 101 stores reception packets received from the data link by the network adapter 100.
- the received packet stored in the reception queue 101 is directly sent to the corresponding virtual machine 300 by the data direct transmission function 140.
- the transmission packet directly received from the virtual machine by the network adapter 100 by the data direct transmission function 140 is stored in the transmission queue 102 corresponding to the virtual machine.
- a virtual NIC is provided for the virtual switch 200.
- the reception queue 101 and the transmission queue 102 of the virtual NIC connected to the virtual switch 200 are referred to as the reception queue 101-S and the transmission queue 102-S, respectively.
- the reception queue 101-S stores reception packets received by the network adapter 100 from the external data link.
- the received packet stored in the reception queue 101-S is sent to the virtual switch 200.
- the transmission packet received by the network adapter 100 from the virtual switch 200 is stored in the transmission queue 102-S.
- the transmission filter 120 selects the transmission queues 102 and 102-S in a predetermined order / timing. Then, the transmission filter 120 extracts the transmission packet from the selected transmission queue 102 or 102-S, and transmits the transmission packet to the data link.
- the transmission queue 102 can store not only the data of a packet itself but also metadata of the packet, such as the address in the virtual machine 300 where the packet is stored. In this case, when the transmission filter 120 selects the transmission queue 102 from which a packet is to be extracted next, it instructs the data direct transmission function 140 to transfer the packet from the virtual machine 300 by using the metadata stored in the corresponding queue.
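As a rough illustration of this queueing scheme (an assumed model, not the patent's implementation), a transmission queue entry may hold either the packet data itself or only its metadata; the `fetch_from_vm` callback below is a hypothetical stand-in for the data direct transmission function 140:

```python
# Sketch: a transmission queue entry is either packet data or metadata
# (e.g. an address in the VM's memory). When a metadata entry is selected,
# the packet is pulled from the VM via a direct-transfer callback that
# stands in for the data direct transmission function 140.
from collections import deque

class TxQueue:
    def __init__(self, fetch_from_vm):
        self.entries = deque()
        self.fetch_from_vm = fetch_from_vm  # hypothetical direct-transfer hook

    def enqueue_packet(self, packet):
        self.entries.append(("data", packet))

    def enqueue_metadata(self, vm_address):
        self.entries.append(("meta", vm_address))

    def dequeue(self):
        kind, value = self.entries.popleft()
        if kind == "meta":
            # Use the stored metadata to fetch the packet from the VM directly.
            return self.fetch_from_vm(value)
        return value

# Toy "VM address space" mapping addresses to packet bytes.
vm_memory = {0x1000: b"packet-bytes"}
q = TxQueue(fetch_from_vm=vm_memory.__getitem__)
q.enqueue_metadata(0x1000)
q.enqueue_packet(b"inline-bytes")
```

Dequeuing the first entry triggers the direct transfer; the second returns the inline data unchanged.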
- the reception filter 110 receives a reception packet from the data link.
- the reception filter 110 determines in which reception queue 101 or 101-S the received packet is stored.
- the “reception filter table FILT1” is used for the determination.
- the reception filter table FILT1 is stored in the storage device 130. Examples of the storage device 130 include DRAM, SRAM, associative memory (CAM: Content Addressable Memory), and the like.
- the reception filter table FILT1 is a table showing the correspondence between flows and reception actions.
- the reception filter 110 refers to the reception filter table FILT1 and performs a reception action associated with the flow of the reception packet on the reception packet.
- the first reception action is “direct transmission of the received packet to the designated virtual machine 300 by using the data direct transmission function 140”. In this case, the reception filter 110 stores the received packet in the designated reception queue 101.
- the second reception action is “transmit the received packet to the virtual switch 200”. In this case, the reception filter 110 stores the reception packet in the reception queue 101-S addressed to the virtual switch 200.
- FIG. 7 shows an example of the reception filter table FILT1.
- the reception filter table FILT1 has a plurality of filter entries. Each filter entry indicates a key (Key) for identifying a flow and a reception action (Action) performed on a received packet of the corresponding flow.
- the key is flow identification information, and is composed of a combination of predetermined protocol header fields in the header information of the received packet. This key is the same as the key in the flow table of OpenFlowSwitch (see http://www.openflowswitch.org/), for example.
- a reception queue serving as a storage destination of the reception packet is described.
- “reception action: VM1” means the reception queue 101 addressed to the virtual machine VM1, and corresponds to the first reception action described above.
- “Reception action: vsswitch” means the reception queue 101-S addressed to the virtual switch 200, and corresponds to the above-described second reception action.
- when receiving a packet, the reception filter 110 searches the reception filter table FILT1 for an exact match entry using the header information of the received packet. If there is an exact match entry that matches the flow of the received packet, the reception filter 110 performs the first reception action specified by that entry on the received packet. For example, in the example of FIG. 7, the reception filter 110 stores a received packet belonging to the flow flow1 in the reception queue 101 addressed to the virtual machine VM1. On the other hand, if there is no exact match entry that matches the flow of the received packet, the reception filter 110 performs the second reception action on the received packet, that is, stores it in the reception queue 101-S addressed to the virtual switch 200. In this way, NIC offload is possible.
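The reception-side dispatch can be sketched as follows. This is an illustrative model only, not the patent's implementation: the header field names, the dictionary-based table, and the queue representation are all assumptions.

```python
# Illustrative sketch of the reception filter and table FILT1: an
# exact-match hit dispatches the packet directly to a VM's reception
# queue (NIC offload); a miss falls back to the virtual switch queue.
from collections import deque

VSWITCH = "vswitch"  # default destination (second reception action)

def flow_key(header):
    """Exact-match key built from predetermined header fields (assumed set)."""
    return (header["src_ip"], header["dst_ip"],
            header["protocol"], header["src_port"], header["dst_port"])

class ReceptionFilter:
    def __init__(self):
        self.filt1 = {}                   # flow key -> destination VM
        self.queues = {VSWITCH: deque()}  # one queue per VM, plus 101-S

    def add_entry(self, header, vm):
        self.filt1[flow_key(header)] = vm
        self.queues.setdefault(vm, deque())

    def receive(self, packet):
        # Hit: first reception action (direct to VM queue).
        # Miss: second reception action (queue addressed to the vswitch).
        dest = self.filt1.get(flow_key(packet["header"]), VSWITCH)
        self.queues[dest].append(packet)
        return dest

rf = ReceptionFilter()
h1 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
      "protocol": 6, "src_port": 1000, "dst_port": 80}
rf.add_entry(h1, "VM1")  # e.g. an entry for flow1
```

A packet matching the installed entry lands in VM1's queue; any other flow lands in the virtual switch's queue.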
- the server 10 according to the present embodiment further includes a “route switching unit 60”.
- FIG. 8 is a schematic diagram for explaining the function of the route switching unit 60 according to the present embodiment.
- the path switching unit 60 dynamically switches the transmission path of packets transmitted and received by the virtual machine 300.
- in the first path pattern, the data direct transmission function 140 of the network adapter 100 described above directly transmits and receives packets between the network adapter 100 and the virtual machine 300 without going through the virtual switch 200 (NIC offload).
- in the second path pattern, packets are transmitted to and received from the virtual machine 300 via at least the virtual switch 200.
- the flows of the first and second path patterns are hereinafter referred to as the “first path pattern flow” and the “second path pattern flow”, respectively.
- the path switching unit 60 sets the flow path of packets transmitted and received by the virtual machine 300 to the first path pattern or the second path pattern. Furthermore, the path switching unit 60 dynamically switches the path setting according to a predetermined condition. That is, the path switching unit 60 dynamically switches (sorts) the flow of packets transmitted and received by the virtual machine 300 to the first path pattern flow or the second path pattern flow. Then, the path switching unit 60 causes the data direct transmission function 140 of the network adapter 100 to process the first path pattern flow, while causing the virtual switch 200 to process the second path pattern flow.
- not all flows fixedly bypass the virtual switch 200. Only the desired flows (first path pattern flows) are NIC offloaded and bypass the virtual switch 200. The other flows (second path pattern flows) are transmitted through the virtual switch 200 as usual. As a result, it is possible to suppress the concentration of traffic on the virtual switch 200 while enjoying the advantages of flexible traffic control using the virtual switch 200.
- the path switching unit 60 is realized by the server 10 (CPU 20) executing the flow control program PROG.
- the path switching unit 60 may be incorporated in the processing device 40 as shown in FIG.
- the path switching unit 60 may be incorporated in the network adapter 100 (described later in section 3-3).
- in the following description, the path switching unit 60 is incorporated in the virtual switch 200 or the hypervisor 50 of the processing device 40, as an example. However, the present invention is not limited to this.
- FIG. 9 is a conceptual diagram for explaining an example of a path switching process according to the present embodiment.
- the network adapter 100 is provided with not only the reception filter table FILT1 but also the “transmission filter table FILT2”. Similar to the reception filter table FILT1, the transmission filter table FILT2 is also stored in the storage device 130.
- the reception filter table FILT1 and the transmission filter table FILT2 may be collectively referred to as “filter table FILT”.
- the transmission filter table FILT2 is a table showing the correspondence between flows and transmission actions.
- the transmission filter 120 refers to the transmission filter table FILT2, and performs a transmission action associated with the flow of the transmission packet on the transmission packet. Two patterns can be considered as the transmission action.
- the first transmission action is “transmit the transmission packet to an external data link”. In this case, the transmission filter 120 transmits the transmission packet to the data link.
- the second transmission action is “to receive a transmission packet and loop it back to the reception filter 110 (reception path)”. In this case, the transmission filter 120 loops back the transmission packet to the reception filter 110 (reception path) as a reception packet.
- FIG. 10 shows an example of the transmission filter table FILT2.
- the transmission filter table FILT2 has a plurality of filter entries. Each filter entry indicates a key (Key) for identifying a flow and a transmission action (Action) performed on a transmission packet of the corresponding flow.
- the key is flow identification information, and is composed of a combination of predetermined protocol header fields in the header information of the transmission packet. This key is the same as the key in the flow table of OpenFlowSwitch (see http://www.openflowswitch.org/), for example.
- OpenFlowSwitch see http://www.openflowswitch.org/
- when the transmission filter 120 extracts a transmission packet from the selected transmission queue 102, it searches the transmission filter table FILT2 for an exact match entry using the header information of the transmission packet. When there is an exact match entry that matches the flow of the transmission packet, the transmission filter 120 performs the first transmission action (out) specified by that entry, that is, sends the transmission packet out to the data link. On the other hand, when there is no exact match entry that matches the flow of the transmission packet, the transmission filter 120 performs the second transmission action (loopback), that is, loops the transmission packet back to the reception filter 110 (reception path) as a received packet.
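The transmit-side decision can be sketched symmetrically. Again, this is only an illustrative model; the key fields and the `send_to_link`/`loopback` hooks are assumptions standing in for the data link output and the loopback to the reception path:

```python
# Sketch of the transmission filter and table FILT2:
# exact-match hit -> first transmission action "out" (send to data link);
# miss -> second transmission action "loopback" into the reception path,
# from which the packet reaches the virtual switch via queue 101-S.
def tx_filter(filt2, packet, send_to_link, loopback):
    key = (packet["src_ip"], packet["dst_ip"], packet["dst_port"])
    if filt2.get(key) == "out":
        send_to_link(packet)   # first transmission action
        return "out"
    loopback(packet)           # second transmission action: the packet
    return "loopback"          # re-enters as a received packet

sent, looped = [], []
filt2 = {("10.0.0.1", "10.0.0.2", 80): "out"}  # e.g. an entry for flow1
```

A packet of the installed flow goes straight to the data link; any other packet is looped back toward the virtual switch.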
- referring to FIGS. 9 and 11, the two path patterns will be described.
- in the reception filter table FILT1, only the flows flow1 and flow2 are associated with the first reception action, and the other flows are associated with the second reception action.
- in the transmission filter table FILT2, only the flows flow1 and flow2 are associated with the first transmission action, and the other flows are associated with the second transmission action.
- a transmission packet sent from the virtual machine 300 is first input to the network adapter 100.
- the transmission packet is directly input to the network adapter 100 by the data direct transmission function 140 of the network adapter 100 without passing through the virtual switch 200.
- the transmission filter 120 extracts a transmission packet from the selected transmission queue 102.
- the transmission filter 120 transmits the transmission packet to the data link. That is, the transmission packet is transmitted from the virtual machine 300 to the outside via the network adapter 100 without passing through the virtual switch 200. This corresponds to the first route pattern described above.
- the transmission filter 120 loops back the transmission packet to the reception filter 110 as a reception packet.
- the reception filter 110 sends the received packet to the virtual switch 200 via the reception queue 101-S. That is, the packet is once input to the network adapter 100 and then processed by the virtual switch 200. This corresponds to the above-described second route pattern.
- the received packets received from the data link are as follows. When the received packet belongs to the flow flow1 or flow2, the exact match entry in the reception filter table FILT1 is hit. Accordingly, the reception filter 110 stores the received packet in the reception queue 101 addressed to the corresponding virtual machine 300. The received packet is directly sent to the corresponding virtual machine 300 by the data direct transmission function 140 without going through the virtual switch 200. This corresponds to the first route pattern described above.
- the reception filter 110 stores the received packet in the reception queue 101-S addressed to the virtual switch 200. Therefore, the received packet is processed by the virtual switch 200. This corresponds to the above-described second route pattern.
- reception filter table FILT1 and the transmission filter table FILT2 may be combined and provided as a single transmission / reception filter table.
- the second reception action and the second transmission action are common and “vswitch: store the packet in the reception queue 101-S addressed to the virtual switch 200”. This also realizes loopback of the transmission packet to the reception path.
- Path switching unit 60: As described above, the flow path of packets transmitted and received by the virtual machine 300 can be set to the first path pattern or the second path pattern by setting entries in the reception filter table FILT1 and the transmission filter table FILT2. The flow path can also be switched dynamically by changing the entry settings in these tables. The path switching unit 60 performs such entry setting and setting change.
- specifically, the path switching unit 60 assigns the flow of packets transmitted and received by the virtual machine 300 to the first path pattern flow or the second path pattern flow based on a predetermined criterion. The assignment can be changed dynamically. The path switching unit 60 then sets the reception filter table FILT1 so that the first path pattern flow is associated with the first reception action and the second path pattern flow is associated with the second reception action. Similarly, it sets the transmission filter table FILT2 so that the first path pattern flow is associated with the first transmission action and the second path pattern flow is associated with the second transmission action. As a result, the first path pattern flow is processed without going through the virtual switch 200, that is, NIC offloaded, while the second path pattern flow is processed by the virtual switch 200.
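Under this scheme, assigning a flow to one path pattern or the other reduces to installing or removing paired entries in FILT1 and FILT2. The following is a minimal sketch under assumed table layouts; `offload`/`unoffload` are hypothetical names, not the patent's API:

```python
# Sketch of the path switching unit 60: assigning a flow to the first
# path pattern installs matching entries in both filter tables; removing
# them returns the flow to the virtual switch (second path pattern).
class PathSwitcher:
    def __init__(self):
        self.filt1 = {}  # reception table: flow key -> VM reception queue
        self.filt2 = {}  # transmission table: flow key -> "out"

    def offload(self, flow_key, vm):
        """First path pattern: NIC offload, bypassing the virtual switch."""
        self.filt1[flow_key] = vm
        self.filt2[flow_key] = "out"

    def unoffload(self, flow_key):
        """Second path pattern: fall back to the virtual switch (table miss)."""
        self.filt1.pop(flow_key, None)
        self.filt2.pop(flow_key, None)

    def path_pattern(self, flow_key):
        return "first" if flow_key in self.filt1 else "second"

ps = PathSwitcher()
ps.offload("flow1", "VM1")
```

Because an absent entry already means "go via the virtual switch", dynamic switching never needs an explicit second-pattern entry in this model.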
- a filter entry related to the same flow may be set only in one of the reception filter table FILT1 and the transmission filter table FILT2.
- the path pattern is asymmetric between the receiving side and the transmitting side.
- the filter entry related to the flow flow1 is set only in the transmission filter table FILT2 in FIGS.
- the transmission path of the transmission packet is the first path pattern that does not pass through the virtual switch 200,
- while the transmission path of the reception packet is the second path pattern that passes through the virtual switch 200.
- the path switching unit 60 is incorporated in the virtual switch 200, for example.
- FIG. 13 is a block diagram illustrating a functional configuration example of the virtual switch 200 in that case.
- the virtual switch 200 includes a flow identification function 210, a packet switch function 220, a VM identification function 230, a queue determination function 240, and a NIC setting function 250.
- the virtual switch 200 receives a packet from the network adapter 100 or the virtual machine 300.
- The flow identification function 210 identifies the flow to which the packet belongs based on the header information of the received packet. Further, the flow identification function 210 refers to the flow table TBL, which indicates the correspondence between flow identification information (Key) and actions (Action), to obtain the action to be performed on the packet.
- the packet switch function 220 processes the packet according to the action. Typically, the action of the flow table TBL describes a packet output port (transfer destination). The packet switch function 220 outputs a packet from the output port specified by the action. The output packet is sent to the network adapter 100 or the virtual machine 300.
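The lookup and switch steps above can be sketched as follows. The header field names and the fallback action are illustrative assumptions; the miss case mirrors the predetermined processing described next.

```python
# Minimal sketch of flow identification and switching: a flow key is built
# from predetermined header fields, the flow table maps the key to an
# action, and a missing entry falls back to a predetermined process
# (e.g. asking a controller for a path). Field names are assumptions.

def flow_key(header):
    # Combination of predetermined protocol header fields (illustrative set).
    return (header["src_ip"], header["dst_ip"],
            header["src_port"], header["dst_port"], header["proto"])

def switch_packet(header, flow_table, default_action="to_controller"):
    # Hit: return the action (typically an output port).
    # Miss: return the predetermined default processing.
    return flow_table.get(flow_key(header), default_action)

tbl = {("192.0.2.1", "192.0.2.9", 1234, 80, "tcp"): "output:vm1"}
pkt = {"src_ip": "192.0.2.1", "dst_ip": "192.0.2.9",
       "src_port": 1234, "dst_port": 80, "proto": "tcp"}
action = switch_packet(pkt, tbl)   # "output:vm1"
```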
- When there is no flow entry matching the packet in the flow table TBL, the flow identification function 210 performs a predetermined process on the packet. For example, the flow identification function 210 transfers the packet to an OpenFlow controller (OFC) and requests path setting.
- For a designated flow, the VM identification function 230 identifies the virtual machine 300 that transmits and receives packets belonging to that flow.
- The “designated flow” is a flow for which an entry is to be set in the filter table FILT on the network adapter 100.
- the queue determination function 240 determines the transmission / reception queue (101, 102) associated with the virtual machine 300 specified by the VM identification function 230.
- the NIC setting function 250 creates a filter entry to be set in the filter table FILT by appropriately referring to the transmission / reception queue.
- the NIC setting function 250 notifies the network adapter 100 of the created filter entry, and sets / changes the filter table FILT.
- The path switching unit 60 includes the VM identification function 230, the queue determination function 240, and the NIC setting function 250.
- Cache control Cache control of the filter table FILT is also possible. This is suitable when only a relatively small storage device 130 can be mounted on the network adapter 100. The cache control will be described with reference to FIG. 14.
- the main body of the filter table FILT is stored in the main memory 30 (see FIG. 4) of the server 10.
- the NIC setting function 250 (route switching unit 60) described above sets and changes the filter table FILT on the main memory 30.
- the storage device 130 of the network adapter 100 is a cache memory that is relatively small (for example, several tens of kilobytes).
- the filter table FILT (cache) cached in the cache memory 130 is a part of the main body of the filter table FILT stored in the main memory 30.
- Each of the reception filter 110 and the transmission filter 120 of the network adapter 100 includes a search function 115.
- The search function 115 first checks the entries cached in the cache memory 130. On a cache hit, the search function 115 processes the packet as described above according to the hit entry. On a cache miss, the search function 115 accesses the main memory 30, searches the body of the filter table FILT, and acquires the necessary entry. The search function 115 then stores the acquired entry in the cache memory 130 and processes the packet according to the entry. When there is no empty entry, the search function 115 replaces an existing cache entry.
- each entry of the filter table FILT may include statistical information that is updated each time a packet is processed.
- each entry includes the number of matches of the entry.
- the search function 115 writes back the statistical information from the cache memory 130 to the main memory 30 at a predetermined timing. Examples of the predetermined timing include when the path switching unit 60 needs the statistical information, when an entry is deleted from the cache memory 130, and the like.
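The cache behavior above (hit, miss with a fetch from the table body, replacement, and statistics write-back) can be sketched as follows. The class, capacity, and replacement policy are illustrative assumptions, not the patent's design.

```python
# Sketch of the cache control: the full filter table lives in main memory,
# the adapter caches a subset, and per-entry match counts are written back
# on eviction. Capacity and eviction order (oldest first) are assumptions.

class FilterCache:
    def __init__(self, main_table, capacity=2):
        self.main = main_table          # filter table body in main memory
        self.cache = {}                 # entries cached on the adapter
        self.hits = {}                  # per-entry statistics (match count)
        self.capacity = capacity

    def lookup(self, key):
        if key not in self.cache:                   # cache miss
            if key not in self.main:
                return None
            if len(self.cache) >= self.capacity:    # no empty entry: replace
                self.evict()
            self.cache[key] = self.main[key]
            self.hits.setdefault(key, 0)
        self.hits[key] += 1                          # update statistics
        return self.cache[key]

    def evict(self):
        victim = next(iter(self.cache))              # oldest cached entry
        self.write_back(victim)                      # write stats to memory
        del self.cache[victim]

    def write_back(self, key):
        # Statistics are written back to the main-memory side at eviction.
        self.main.setdefault("_stats", {})[key] = self.hits.get(key, 0)

main = {"flow1": "to_vm1", "flow2": "to_vm2", "flow3": "to_vswitch"}
fc = FilterCache(main)
fc.lookup("flow1"); fc.lookup("flow1"); fc.lookup("flow2")
fc.lookup("flow3")   # forces a replacement and a statistics write-back
```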
- As described above, the first path pattern flow is processed without going through the virtual switch 200, that is, it is NIC offloaded. This NIC offload suppresses the concentration of traffic on the virtual switch 200. Various candidates are conceivable for the first route pattern flow to be NIC offloaded, and various timings are possible for the NIC offload setting. Several embodiments are described below.
- In the first embodiment, the first route pattern flow targeted for NIC offload is an “overload flow” whose load exceeds a predetermined threshold, while the second route pattern flow is a “normal load flow” whose load is equal to or less than the predetermined threshold. The start timing of NIC offload is when a flow changes from a normal load flow to an overload flow, and the end timing of NIC offload is when a flow returns from an overload flow to a normal load flow.
- The path switching unit 60 measures the load for each flow based on the packets transmitted and received by the virtual machine 300. The path switching unit 60 compares the measured load with the predetermined threshold and determines, for each flow, whether it is a normal load flow or an overload flow. The path switching unit 60 switches an overload flow to the first path pattern flow; the overload flow is then NIC offloaded and bypasses the virtual switch 200. When an overload flow returns to a normal load flow, the path switching unit 60 returns the flow from the first path pattern flow to the second path pattern flow, and the flow is again processed by the virtual switch 200.
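The threshold comparison above can be sketched as a simple classification step. The threshold value and its unit are assumptions for illustration.

```python
# Illustrative sketch of the per-flow load comparison: flows whose measured
# load exceeds the threshold become first-path-pattern (NIC offload) flows;
# the others stay on the second pattern. Units are an assumption.

THRESHOLD = 100  # e.g. packets per measurement interval (assumed unit)

def classify(load_by_flow, threshold=THRESHOLD):
    patterns = {}
    for flow, load in load_by_flow.items():
        # pattern 1 = overload flow (offload), pattern 2 = normal load flow
        patterns[flow] = 1 if load > threshold else 2
    return patterns

loads = {"flow1": 250, "flow2": 40}
patterns = classify(loads)
```

Re-running the classification on fresh load measurements implements the dynamic switching in both directions.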
- In the first embodiment, only the overload flows are NIC offloaded, so traffic concentration on the virtual switch 200 can be reduced efficiently. Moreover, the number of entries set in the filter table FILT remains relatively small, so the first embodiment is feasible even when only a relatively small storage device 130 can be installed in the network adapter 100. Further, when the load is nonuniform between the transmission side and the reception side, the path pattern may be asymmetric between the transmission side and the reception side.
- the path switching unit 60 is incorporated in the virtual switch 200.
- FIG. 15 is a block diagram showing the configuration of the virtual switch 200 in the first embodiment.
- In addition to the configuration shown in FIG. 13, the virtual switch 200 further includes a processing load measurement function 260, a path change determination function 270, and a destination attached data addition function 280.
- the processing load measurement function 260 samples transmission / reception packets at a predetermined frequency, and measures the load (packet processing amount, processing load) for each flow based on the transmission / reception packets.
- the processing load measurement function 260 holds load information indicating the measurement result.
- the route change determination function 270 determines whether each flow is an overload flow (first route pattern flow) or a normal load flow (second route pattern flow) by referring to the load information.
- the route change determination function 270 dynamically changes the classification of the first route pattern flow and the second route pattern flow based on the load information. Then, the route change determination function 270 designates a flow for switching the route pattern to the VM identification function 230.
- the destination attached data adding function 280 will be described later.
- FIG. 16 is a flowchart showing an example of processing in the first embodiment.
- the virtual switch 200 receives a packet from the network adapter 100 or the virtual machine 300 (step A10).
- The flow identification function 210 identifies the flow to which the packet belongs based on the header information of the received packet. Further, the flow identification function 210 refers to the flow table TBL and obtains the action to be performed on the packet (step A20).
- FIG. 17 shows an example of the flow table TBL.
- the flow table TBL has a plurality of table entries. Each table entry indicates a key (Key) for identifying the flow and an action (Action) to be performed on the packet of the corresponding flow.
- the key is flow identification information, and is composed of a combination of predetermined protocol header fields in the packet header information.
- the action typically describes the output port (transfer destination) of the packet.
- the flow table TBL is stored in a predetermined storage device (typically the main memory 30).
- each table entry also has a flag indicating whether or not there is a corresponding entry on the network adapter 100. This flag is provided in order for the virtual switch 200 to know what filter entry the network adapter 100 holds.
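One table entry as described above (key, action, and the on-adapter flag) could look like the following sketch; the field names are illustrative assumptions.

```python
# Sketch of one flow table entry: a key built from flow identification
# fields, an action (e.g. output port), and a flag recording whether a
# corresponding filter entry exists on the network adapter.

from dataclasses import dataclass

@dataclass
class TableEntry:
    key: tuple                 # flow identification fields
    action: str                # e.g. "output:vm1"
    on_adapter: bool = False   # the "present"/"none" flag for the NIC copy

entry = TableEntry(key=("10.0.0.1", 80), action="output:vm1")
entry.on_adapter = True   # set to "present" after the NIC filter is written
```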
- the packet switch function 220 performs a switch process according to the action obtained in step A20 (step A30). Typically, the packet switch function 220 outputs a packet from the output port specified by the action. The output packet is sent to the network adapter 100 or the virtual machine 300.
- The processing load measurement function 260 updates the load information in response to the packet processing (step A40). Further, the route change determination function 270 refers to the load information and compares the load of the flow of the processed packet with a predetermined threshold (step A50). When the load exceeds the predetermined threshold (step A50; Yes), the route change determination function 270 regards the flow as an overload flow and assigns it to the first path pattern flow. The route change determination function 270 then designates the flow as a NIC offload target and notifies the VM identification function 230.
- the virtual switch 200 performs an offload setting process (step A60). Specifically, for the flow specified by the path change determination function 270, the VM identification function 230 specifies the virtual machine 300 that transmits and receives packets belonging to the flow (step A61). At this time, the VM identification function 230 may identify the virtual machine 300 with reference to the port-VM correspondence table as shown in FIG.
- the queue determination function 240 determines a transmission / reception queue associated with the virtual machine 300 specified by the VM identification function 230 (step A62).
- the NIC setting function 250 creates a filter entry to be set in the filter table FILT by appropriately referring to the transmission / reception queue. Then, the NIC setting function 250 notifies the created filter entry to the network adapter 100 and sets the filter table FILT (step A63). Also, the NIC setting function 250 sets the flag of the corresponding entry shown in FIG. 17 to “present”.
- FIG. 19 is a conceptual diagram showing a processing image in the first embodiment. Note that when the flow returns from the overload flow to the normal load flow, the offload setting is canceled. When canceling the offload setting, the filter entry relating to the flow may be deleted from the filter table FILT. In addition, the flag of the corresponding entry shown in FIG. 17 is set to “none”.
- The role of the destination attached data adding function 280 will be described with reference to FIG. In some cases, there is no filter entry on the network adapter 100, and a packet is delivered along the path data link → virtual switch 200 → virtual machine 300.
- In such a case, the network adapter 100 does not know in which direction the packet should be sent. Therefore, destination data is attached to the packet itself.
- the destination attached data adding function 280 of the virtual switch 200 adds destination data indicating “outside address” or “VM address” to a packet output from the virtual switch 200.
- the network adapter 100 includes a packet transmission determination function 150 that refers to the destination data and determines a packet delivery destination.
- When receiving a flow3 packet, the virtual switch 200 recognizes, by referring to the flow table TBL, that the packet is addressed to the virtual machine VM1. Accordingly, the destination attached data adding function 280 adds destination data indicating “to VM1” to the packet.
- the packet transmission determining function 150 refers to the destination data attached to the packet and determines to transmit the packet to the virtual machine VM1.
- the destination data is not attached to the transmission packet from the virtual machine VM1 as in the case of flow1 and flow2 in FIG. In that case, as described above, the transmission packet is processed according to the filter entry of the transmission filter table FILT2.
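The tagging and delivery determination above can be sketched as follows; the tag values and the fallback behavior for untagged packets are illustrative assumptions.

```python
# Sketch of the destination-attached-data mechanism: the virtual switch tags
# each packet it outputs with destination data ("outside" or a VM address),
# and the adapter's packet transmission determination function reads the tag
# to pick the delivery direction. Tag values are illustrative.

def add_destination_data(packet, destination):
    tagged = dict(packet)
    tagged["dest_data"] = destination     # e.g. "outside" or "VM1"
    return tagged

def determine_delivery(packet):
    # Untagged packets (e.g. VM transmissions) fall back to the
    # transmission filter table, as described above.
    return packet.get("dest_data", "use_transmission_filter")

tagged = add_destination_data({"payload": b"flow3"}, "VM1")
```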
- In the second embodiment, NIC offload setting is triggered by a “predetermined packet”. That is, when the route switching unit 60 receives a “predetermined packet” of a certain flow, it assigns the flow to the first route pattern flow as a NIC offload target. Thereafter, the flow is NIC offloaded, and packets belonging to the flow bypass the virtual switch 200.
- When a timeout occurs for a first route pattern flow, that is, when no packet of the flow is processed for a certain period, the route switching unit 60 may return the flow from the first route pattern flow to the second route pattern flow.
- The first packet is an example of the “predetermined packet”. The first packet is the packet received first among the packets belonging to a flow, that is, a packet received while no entry for the flow has yet been created. In this case, the flow is NIC offloaded after its first packet.
- Another example of the “predetermined packet” is a packet including an HTTP request URL.
- DPI processing determines the destination and processing method of the flow to which a packet belongs by using information of a layer higher than the transport layer included in the packet, for example, the contents of a URL included in the packet.
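A DPI-style trigger could, for example, parse the request URL out of a plain HTTP request line; the following sketch assumes unencrypted HTTP and is purely illustrative.

```python
# Hedged sketch of extracting the HTTP request URL (information above the
# transport layer) from a packet payload, as a possible "predetermined
# packet" trigger. Assumes a plain "GET /path HTTP/1.1" style request line.

def http_request_url(payload: bytes):
    """Return the URL of an HTTP request line, or None if not a request."""
    try:
        line = payload.split(b"\r\n", 1)[0].decode("ascii")
        method, url, version = line.split(" ")
    except (UnicodeDecodeError, ValueError):
        return None
    if not version.startswith("HTTP/"):
        return None
    return url

url = http_request_url(b"GET /video/1.mp4 HTTP/1.1\r\nHost: example\r\n\r\n")
```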
- In the second embodiment, much of the data plane traffic is NIC offloaded. Therefore, traffic concentration on the virtual switch 200 can be reduced further than in the first embodiment. Moreover, while much of the data plane traffic is NIC offloaded, the control plane is not; this preserves the flexibility provided by the virtual switch 200.
- the path switching unit 60 is incorporated in the virtual switch 200.
- the “predetermined packet” is the first packet.
- FIG. 21 is a block diagram illustrating a configuration example of the network adapter 100 and the virtual switch 200 in the second embodiment.
- the configuration of the network adapter 100 is the same as that shown in FIG. However, the reception filter table FILT1 and the transmission filter table FILT2 are not shown.
- the configuration of the virtual switch 200 is the same as that shown in FIG.
- FIG. 22 is a flowchart illustrating an example of processing in the second embodiment.
- the reception filter 110 of the network adapter 100 receives a packet from the data link (step B10).
- the reception filter 110 searches for an exact match entry in the reception filter table FILT1 using the header information of the reception packet (step B20). If there is an exact match entry that matches the flow of the received packet (step B20; Yes), the reception filter 110 stores the received packet in the reception queue 101 addressed to the corresponding virtual machine 300.
- the received packet is directly transmitted to the corresponding virtual machine 300 by the data direct transmission function 140 (step B30).
- the reception filter 110 stores the received packet in the reception queue 101-S addressed to the virtual switch 200.
- the received packet stored in the reception queue 101-S is sent to the virtual switch 200 (step B40).
- the virtual switch 200 receives the received packet.
- The flow identification function 210 identifies the flow to which the received packet belongs based on its header information, and searches the flow table TBL (step B50). In this case there is no flow entry (exact match entry) matching the received packet in the flow table TBL. Therefore, the flow identification function 210 regards the received packet as a first packet, designates its flow as a NIC offload target, and notifies the VM identification function 230.
- the virtual switch 200 performs an offload setting process (step B60). Specifically, for the flow specified by the flow identification function 210, the VM identification function 230 identifies the virtual machine 300 that transmits and receives packets belonging to the flow (step B61). The queue determination function 240 determines the transmission / reception queue associated with the virtual machine 300 specified by the VM identification function 230 (step B62). The NIC setting function 250 creates a filter entry to be set in the filter table FILT by appropriately referring to the transmission / reception queue. Then, the NIC setting function 250 notifies the network adapter 100 of the created filter entry, and sets the filter table FILT (step B63). The NIC setting function 250 also stores a copy of the filter entry in the flow table TBL.
- the packet switch function 220 returns the first packet to the network adapter 100 (step B70). This time, the exact match entry in the reception filter table FILT1 is hit (step B20; Yes). Therefore, the first packet is directly transmitted to the corresponding virtual machine 300 by the data direct transmission function 140 (step B30). The same applies to subsequent packets following the first packet. In this way, the flow is NIC offloaded.
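The first-packet cycle above (miss → offload setting → return to the adapter → hit) can be sketched as follows; the data structures and the install callback are illustrative assumptions.

```python
# Sketch of first-packet handling in the second embodiment: a miss in the
# reception filter sends the packet to the virtual switch, which installs a
# filter entry (and keeps a copy) and returns the packet; the retry then
# hits and the packet is delivered directly to the VM.

def receive(packet, filt1, flow_table, install):
    flow = packet["flow"]
    if flow in filt1:                         # step B20: exact match hit
        return ("direct_to_vm", filt1[flow])  # step B30: direct transmission
    # Miss: to the virtual switch (steps B40-B50); treat as a first packet.
    vm = install(flow)                        # offload setting (step B60)
    filt1[flow] = vm                          # entry set on the adapter
    flow_table[flow] = vm                     # copy kept by the switch
    # The switch returns the packet to the adapter (step B70): retry.
    return receive(packet, filt1, flow_table, install)

filt1, tbl = {}, {}
result = receive({"flow": "f1"}, filt1, tbl, install=lambda f: "vm1")
```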
- The offload setting may be cancelled. For example, the reception filter 110 or the transmission filter 120 of the network adapter 100 records the last match time in each filter entry of the filter table FILT. The flow identification function 210 of the virtual switch 200 checks the last match time at predetermined intervals and determines whether a timeout has occurred. When a timeout is detected for a flow, the flow identification function 210 instructs cancellation of the offload setting for the flow. The NIC setting function 250 then deletes the filter entry related to the flow from the filter table FILT, and also deletes the corresponding entry from the flow table TBL.
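The timeout-driven cancellation can be sketched as a periodic sweep over last-match times; times and the timeout value are illustrative assumptions.

```python
# Sketch of timeout-based cancellation: entries whose last match time is
# older than the timeout are deleted from the adapter's filter table and
# from the switch's record. All values are illustrative.

def expire_entries(filter_table, last_match, now, timeout):
    expired = [f for f, t in last_match.items() if now - t > timeout]
    for flow in expired:
        filter_table.pop(flow, None)   # delete from the adapter's table
        last_match.pop(flow, None)     # and from the switch's record
    return expired

filt = {"f1": "vm1", "f2": "vm2"}
last = {"f1": 100.0, "f2": 995.0}
gone = expire_entries(filt, last, now=1000.0, timeout=60.0)
```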
- FIG. 23 is a block diagram illustrating a configuration example of a network adapter 100 and a virtual switch 200 according to a third embodiment.
- changes from the second embodiment will be mainly described.
- the network adapter 100 includes a flow identification function 160 and a flow setting function 170 in addition to the configuration shown in FIG.
- the flow identification function 160 is the same as the flow identification function 210 of the virtual switch 200.
- the reception filter 110 transfers the received packet to the flow identification function 160 (step B40).
- the flow setting function 170 sets a filter entry related to the flow designated by the flow identification function 160 in the filter table FILT.
- the path switching unit 60 is incorporated in the network adapter 100. That is, in addition to the data plane, the setting of the filter table FILT is also NIC offloaded.
- In this way, “routine processing”, exemplified by the setting of the filter table FILT, is NIC offloaded.
- the virtual switch 200 delegates a program for performing a routine process to the network adapter 100.
- a program that performs such a routine process can be implemented as an action of a wildcard match entry in the flow table TBL.
- An example is a program that performs NAPT (Network Address/Port Translation).
- the virtual switch 200 includes a flow processing rule offload function 290.
- the flow processing rule offload function 290 sets a part or all of its own flow table TBL (exact match flow table and wild card match flow table) in the flow table TBL on the network adapter 100.
- In this manner, the routine processing is NIC offloaded, while processing that does not end in a short time and advanced extended processing are handled by the virtual switch 200 as usual.
- the route switching means is not limited to that described in Section 2 above. Hereinafter, another example of the route switching process will be described. In this processing example, the route of the transmission packet branches inside the virtual machine 300.
- FIG. 24 is a block diagram illustrating a configuration of the virtual machine 300 in the present processing example.
- the virtual machine 300 includes a NIC packet transmission / reception function 310, a virtual switch packet transmission / reception function 320, a branching function 330, and a protocol processing function 340.
- the protocol processing function 340 is realized by a program (typically a TCP / IP stack) that performs protocol processing.
- the NIC packet transmission / reception function 310 (first transmission / reception function) is realized by a device driver that performs packet transmission / reception with the data direct transmission function 140 of the network adapter 100.
- the virtual switch packet transmission / reception function 320 (second transmission / reception function) is realized by a device driver that performs packet transmission / reception with the virtual switch 200.
- the branch function 330 is arranged between the protocol processing function 340 and the packet transmission / reception functions 310 and 320.
- the branch function 330 receives the transmission packet from the protocol processing function 340 and transfers the transmission packet to either the NIC packet transmission / reception function 310 or the virtual switch packet transmission / reception function 320.
- the branch function 330 refers to the flow table TBL2 indicating the correspondence between the flow and the packet transfer destination.
- the packet transfer destination is the NIC packet transmission / reception function 310 (first packet transfer destination) or the virtual switch packet transmission / reception function 320 (second packet transfer destination).
- FIG. 25 is a conceptual diagram showing the flow table TBL2.
- the flow table TBL2 has a plurality of table entries. Each table entry indicates a key (Key) for identifying a flow and an action (Action) to be performed on a transmission packet of the corresponding flow.
- the key is flow identification information, and is composed of a combination of predetermined protocol header fields in the header information of the transmission packet.
- the action describes the transfer destination of the transmission packet. For example, “action: NIC” indicates that the transfer destination of the transmission packet is the NIC packet transmission / reception function 310 (first packet transfer destination). “Action: vsswitch” means that the transfer destination of the transmission packet is the virtual switch packet transmission / reception function 320 (second packet transfer destination).
- the flow table TBL2 is stored in a predetermined storage device (typically the main memory 30).
- the branch function 330 transfers the transmission packet of the virtual machine 300 to the packet transfer destination associated with the flow of the transmission packet by referring to the flow table TBL2. More specifically, the branch function 330 includes a flow identification function 331 and an attached information rewriting function 332.
- the flow identification function 331 identifies the flow of the transmission packet based on the header information of the transmission packet. Further, the flow identification function 331 refers to the flow table TBL2 and determines a packet transfer destination associated with the flow.
- The attached information rewriting function 332 rewrites the transmission interface in the attached information of the transmission packet to the determined packet transfer destination. The branch function 330 then transfers the transmission packet to that packet transfer destination.
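The branch step above (flow lookup in TBL2, then rewriting the transmission interface) can be sketched as follows; the destination labels and the attached-information representation are illustrative assumptions.

```python
# Sketch of the branch function 330: the flow of a transmission packet is
# looked up in the flow table TBL2 (flow identification function 331) and
# the packet's transmission interface in its attached information is
# rewritten to the chosen transfer destination (attached information
# rewriting function 332). Labels like "nic"/"vswitch" are illustrative.

def branch(packet, tbl2, default="vswitch"):
    dest = tbl2.get(packet["flow"], default)   # flow identification
    packet["attached"]["tx_interface"] = dest  # attached info rewriting
    return dest                                # transfer destination

tbl2 = {"flow1": "nic"}
pkt = {"flow": "flow1", "attached": {"tx_interface": None}}
dest = branch(pkt, tbl2)
```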
- the NIC packet transmission / reception function 310 receives the transmission packet from the branch function 330.
- the NIC packet transmission / reception function 310 stores the received transmission packet in a buffer and instructs the network adapter 100 to transmit the packet.
- the data direct transmission function 140 of the network adapter 100 reads the transmission packet from the buffer, and stores the transmission packet in the transmission queue 102 corresponding to the transmission source virtual machine 300.
- the virtual switch packet transmission / reception function 320 receives the transmission packet from the branch function 330.
- the virtual switch packet transmission / reception function 320 stores the received transmission packet in a buffer and requests the hypervisor 50 to deliver a packet.
- the hypervisor 50 instructs the virtual switch 200 to process the transmission packet.
- the virtual switch 200 reads the transmission packet from the buffer and performs switch processing.
- the virtual machine 300 receives a received packet from the network adapter 100 or the virtual switch 200.
- the NIC packet transmission / reception function 310 receives the received packet and passes it to the branch function 330.
- the virtual switch packet transmission / reception function 320 receives the received packet and passes it to the branch function 330.
- In either case, the branch function 330 makes the received packet appear to have been received from the same interface.
- the attached information rewriting function 332 rewrites the receiving interface in the attached information of the received packet to the branch function 330.
- the branch function 330 sends the received packet to the protocol processing function 340.
- As a result, the plurality of reception paths is not visible to the protocol stack.
- the packet attachment information indicates packet attributes and additional information held in association with packet data.
- the attached information typically includes a packet length, a payload, a head address of header data of each protocol, and the like.
- the “interface” of the transmission interface and the reception interface refers to a virtual connection point between the virtual machine 300 and the network. In the example of FIG. 24, the entrance / exit of the transmission path with the network adapter 100 and the entrance / exit of the transmission path with the virtual switch 200 are “interfaces”.
- the network adapter 100 is not provided with the transmission filter table FILT2.
- the virtual machine 300 is provided with a flow table TBL2.
- the path switching unit 60 dynamically changes the setting of the flow table TBL2 of each virtual machine 300 instead of dynamically changing the setting of the transmission filter table FILT2 of the network adapter 100.
- The route switching unit 60 sets the flow table TBL2 so that the first route pattern flow is associated with the first packet transfer destination and the second route pattern flow is associated with the second packet transfer destination.
- the first path pattern flow is processed without going through the virtual switch 200, that is, NIC offloaded.
- the second route pattern flow is processed by the virtual switch 200.
- Note that the path switching unit 60 sets the reception filter table FILT1 of the network adapter 100 as described in Section 2 above. Further, the path pattern may be asymmetric between the reception side and the transmission side.
- FIG. 26 is a block diagram illustrating a functional configuration example of the virtual switch 200 in that case.
- the virtual switch 200 shown in FIG. 26 further includes a branch function setting function 255 in addition to the configuration shown in FIG. 13 or FIG.
- the NIC setting function 250 sets the reception filter table FILT1 of the network adapter 100.
- the branch function setting function 255 sets the flow table TBL2 of the virtual machine 300.
- the virtual machine 300 adds, deletes, and updates entries in the flow table TBL2 according to the setting by the branch function setting function 255.
Description
FIG. 3 is a block diagram schematically showing a configuration example of the network system 1 according to the present embodiment. The network system 1 includes a plurality of servers 10 connected to a network (not shown). A plurality of switches are arranged between the servers 10. The network system 1 is connected to an external network via network appliances such as firewalls and load balancers. The network system 1 is, for example, a network system in a data center.
VM) is also called a virtual machine for input/output processing (IOVM).
FIG. 9 is a conceptual diagram for explaining an example of the route switching processing according to the present embodiment. In this processing example, the network adapter 100 is provided not only with the reception filter table FILT1 but also with a “transmission filter table FILT2”. Like the reception filter table FILT1, the transmission filter table FILT2 is stored in the storage device 130. The reception filter table FILT1 and the transmission filter table FILT2 may be collectively referred to as the “filter table FILT”.
The transmission filter table FILT2 is a table indicating the correspondence between flows and transmission actions. The transmission filter 120 refers to the transmission filter table FILT2 and performs, on a transmission packet, the transmission action associated with the flow of the transmission packet. Two patterns of transmission actions are conceivable. The first transmission action is “transmitting the transmission packet to the external data link”; in this case, the transmission filter 120 sends the transmission packet out to the data link. The second transmission action is “looping the transmission packet back to the reception filter 110 (reception path) as a reception packet”; in this case, the transmission filter 120 loops the transmission packet back to the reception filter 110 (reception path) as a reception packet.
As described above, by setting entries in the reception filter table FILT1 and the transmission filter table FILT2, the flow path of packets transmitted and received by the virtual machine 300 can be set to the first path pattern or the second path pattern. Moreover, by changing the entry settings in the reception filter table FILT1 and the transmission filter table FILT2, the flow path can be switched “dynamically”. It is the route switching unit 60 that performs such entry setting and setting change.
Cache control of the filter table FILT is also possible. This is suitable when only a relatively small storage device 130 can be mounted on the network adapter 100. The cache control will be described with reference to FIG. 14.
As described above, the first route pattern flow is processed without going through the virtual switch 200, that is, it is NIC offloaded. This NIC offload suppresses the concentration of traffic on the virtual switch 200. Various candidates are conceivable for the first route pattern flow to be NIC offloaded, and various timings are conceivable for the NIC offload setting. Several embodiments are described below.
In the first embodiment, the first route pattern flow targeted for NIC offload is an “overload flow” whose load exceeds a predetermined threshold. On the other hand, the second route pattern flow is a “normal load flow” whose load is equal to or less than the predetermined threshold. The start timing of NIC offload is when a flow changes from a normal load flow to an overload flow, and the end timing of NIC offload is when a flow returns from an overload flow to a normal load flow.
In the second embodiment, the NIC offload setting is triggered by a “predetermined packet”. That is, when the route switching unit 60 receives a “predetermined packet” of a certain flow, it assigns the flow to the first route pattern flow as a NIC offload target. Thereafter, the flow is NIC offloaded, and packets belonging to the flow bypass the virtual switch 200. Also, a period in which no packet of a first route pattern flow is processed may continue for a certain time or longer, that is, a timeout may occur for the first route pattern flow. In that case, the route switching unit 60 may return the flow from the first route pattern flow to the second route pattern flow.
FIG. 23 is a block diagram showing a configuration example of the network adapter 100 and the virtual switch 200 in the third embodiment. The following mainly describes the changes from the second embodiment.
The route switching means is not limited to that described in Section 2 above. Another example of the route switching processing is described below. In this processing example, the route of a transmission packet branches inside the virtual machine 300.
Claims (10)
- A server comprising:
a processing device;
a network adapter connected to the processing device; and
a path switching unit,
wherein the processing device comprises:
a virtual machine; and
a virtual switch that relays packets transmitted and received by the virtual machine to and from the outside,
the network adapter has a transmission function that transmits and receives packets to and from the virtual machine without passing through the virtual switch,
the path switching unit dynamically switches a flow of packets transmitted and received by the virtual machine to either a first-path-pattern flow or a second-path-pattern flow, and
the path switching unit causes the transmission function of the network adapter to process the first-path-pattern flow and causes the virtual switch to process the second-path-pattern flow.
- The server according to claim 1, wherein
the network adapter comprises:
a reception filter that receives a received packet; and
a storage device storing a reception filter table that indicates correspondences between flows and reception actions,
the reception filter refers to the reception filter table and applies, to the received packet, the reception action associated with the flow of the received packet,
the path switching unit sets the reception filter table such that the first-path-pattern flow is associated with a first reception action and the second-path-pattern flow is associated with a second reception action,
the first reception action is to transmit the received packet to the virtual machine by using the transmission function, and
the second reception action is to transmit the received packet to the virtual switch.
- The server according to claim 2, wherein
the storage device further stores a transmission filter table that indicates correspondences between flows and transmission actions,
the network adapter further comprises a transmission filter that receives a transmitted packet from the virtual machine through the transmission function,
the transmission filter refers to the transmission filter table and applies, to the transmitted packet, the transmission action associated with the flow of the transmitted packet,
the path switching unit sets the transmission filter table such that the first-path-pattern flow is associated with a first transmission action and the second-path-pattern flow is associated with a second transmission action,
the first transmission action is to transmit the transmitted packet to the outside, and
the second transmission action is to loop the transmitted packet back to the reception filter as the received packet.
- The server according to claim 2, wherein
the virtual machine comprises:
a first transmission/reception function that transmits and receives packets to and from the network adapter without passing through the virtual switch;
a second transmission/reception function that transmits and receives packets to and from the virtual switch; and
a branch function that refers to a flow table indicating correspondences between flows and packet transfer destinations and transfers a transmitted packet of the virtual machine to the packet transfer destination associated with the flow of the transmitted packet,
the path switching unit sets the flow table such that the first-path-pattern flow is associated with a first packet transfer destination and the second-path-pattern flow is associated with a second packet transfer destination,
the first packet transfer destination is the first transmission/reception function, and
the second packet transfer destination is the second transmission/reception function.
- The server according to any one of claims 1 to 4, wherein
the path switching unit measures a per-flow load based on the packets transmitted and received by the virtual machine,
a flow whose measured load exceeds a predetermined threshold is an overloaded flow, and
the path switching unit switches the overloaded flow to the first-path-pattern flow.
- The server according to any one of claims 1 to 4, wherein
the path switching unit, upon receiving a predetermined packet of a certain flow, switches the certain flow to the first-path-pattern flow.
- The server according to any one of claims 1 to 6, wherein
the path switching unit is incorporated in the virtual switch.
- The server according to any one of claims 1 to 6, wherein
the path switching unit is incorporated in the network adapter.
- A recording medium on which a flow control program executed by a server is recorded, wherein
the server comprises:
a processing device; and
a network adapter connected to the processing device,
the processing device comprises:
a virtual machine; and
a virtual switch that relays packets transmitted and received by the virtual machine to and from the outside,
the network adapter has a transmission function that transmits and receives packets to and from the virtual machine without passing through the virtual switch,
the flow control program causes the server to realize a path switching function,
the path switching function dynamically switches a flow of packets transmitted and received by the virtual machine to either a first-path-pattern flow or a second-path-pattern flow, and
the path switching function causes the transmission function of the network adapter to process the first-path-pattern flow and causes the virtual switch to process the second-path-pattern flow.
- A network adapter connected to a processing device of a server, wherein
the processing device comprises:
a virtual machine; and
a virtual switch that relays packets transmitted and received by the virtual machine to and from the outside,
the network adapter has a transmission function that transmits and receives packets to and from the virtual machine without passing through the virtual switch,
the network adapter comprises a path switching unit,
the path switching unit dynamically switches a flow of packets transmitted and received by the virtual machine to either a first-path-pattern flow or a second-path-pattern flow, and
the path switching unit causes the transmission function to process the first-path-pattern flow and causes the virtual switch to process the second-path-pattern flow.
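The filter-table mechanism of claims 2 and 3 can be sketched as two lookups inside the adapter: the reception filter dispatches a received packet to the VM (first action) or to the virtual switch (second action), and the transmission filter either sends a transmitted packet outside (first action) or loops it back into the reception filter (second action). The class, the action names, and the tuple return values are illustrative assumptions, not the claimed hardware design.

```python
class AdapterSketch:
    """Illustrative model of the reception/transmission filters of claims 2-3."""

    def __init__(self, rx_table, tx_table):
        # rx_table: flow -> "to_vm" (first reception action) | "to_vswitch" (second)
        # tx_table: flow -> "to_external" (first transmission action) | "loopback" (second)
        self.rx_table = rx_table
        self.tx_table = tx_table

    def rx_filter(self, flow_id, packet):
        """Apply the reception action bound to the packet's flow."""
        if self.rx_table[flow_id] == "to_vm":
            return ("vm", packet)        # delivered via the transmission function
        return ("vswitch", packet)       # handed to the virtual switch

    def tx_filter(self, flow_id, packet):
        """Apply the transmission action bound to the packet's flow."""
        if self.tx_table[flow_id] == "to_external":
            return ("external", packet)  # first action: straight out to the network
        # second action: loop back, so the packet re-enters as a received packet
        return self.rx_filter(flow_id, packet)
```

The loopback in the second transmission action is what lets second-path-pattern traffic that was sent through the adapter still reach the virtual switch, keeping both path patterns available behind a single table update.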
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201080055153.6A CN102648455B (zh) | 2009-12-04 | 2010-11-30 | 服务器和流控制程序 |
EP10834541.4A EP2509000A4 (en) | 2009-12-04 | 2010-11-30 | Server and flow control program |
JP2011544249A JP5720577B2 (ja) | 2009-12-04 | 2010-11-30 | サーバ及びフロー制御プログラム |
US13/137,619 US9130867B2 (en) | 2009-12-04 | 2011-08-30 | Flow control for virtualization-based server |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-276679 | 2009-12-04 | ||
JP2009276679 | 2009-12-04 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/137,619 Continuation US9130867B2 (en) | 2009-12-04 | 2011-08-30 | Flow control for virtualization-based server |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011068091A1 true WO2011068091A1 (ja) | 2011-06-09 |
Family
ID=44114938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/071316 WO2011068091A1 (ja) | 2009-12-04 | 2010-11-30 | サーバ及びフロー制御プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9130867B2 (ja) |
EP (1) | EP2509000A4 (ja) |
JP (1) | JP5720577B2 (ja) |
CN (1) | CN102648455B (ja) |
WO (1) | WO2011068091A1 (ja) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013041445A (ja) * | 2011-08-17 | 2013-02-28 | Fujitsu Ltd | 情報処理装置、情報処理方法及び情報処理プログラム |
JP2013126062A (ja) * | 2011-12-14 | 2013-06-24 | Kddi Corp | ネットワークトラヒック制御装置、制御方法、およびプログラム |
JP2013161283A (ja) * | 2012-02-06 | 2013-08-19 | Nec Commun Syst Ltd | サーバ、物理ノード、負荷分散方法およびプログラム |
JP2013197919A (ja) * | 2012-03-21 | 2013-09-30 | Nec Corp | 通信制御装置、通信制御方法および通信制御プログラム |
JP2013239996A (ja) * | 2012-05-17 | 2013-11-28 | Nec Corp | 計算機、データ変換装置、通信方法及びプログラム |
WO2013186825A1 (en) * | 2012-06-12 | 2013-12-19 | Nec Corporation | Computer system, communication control server, communication control method, and program |
JP2014017618A (ja) * | 2012-07-06 | 2014-01-30 | Oki Electric Ind Co Ltd | 通信装置、方法及びプログラム |
JP2014506739A (ja) * | 2011-02-24 | 2014-03-17 | 日本電気株式会社 | ネットワークシステム、コントローラ、及びフロー制御方法 |
JP2014524086A (ja) * | 2011-06-27 | 2014-09-18 | マイクロソフト コーポレーション | ホスト使用可能管理チャネル |
JP2014195178A (ja) * | 2013-03-28 | 2014-10-09 | Fujitsu Ltd | 情報処理装置、送信制御方法および送信制御プログラム |
WO2014192259A1 (ja) * | 2013-05-27 | 2014-12-04 | 日本電気株式会社 | ネットワーク制御装置、ネットワーク制御方法、プログラムおよび通信システム |
KR20140143155A (ko) * | 2012-03-21 | 2014-12-15 | 마이크로소프트 코포레이션 | 네트워킹 장치 가상화를 위한 패킷 처리 오프로딩 기법 |
KR101493933B1 (ko) * | 2014-05-26 | 2015-02-16 | 주식회사 파이오링크 | 하드웨어 스위치 및 소프트웨어 스위치를 사용하여 가상 머신의 통신을 지원하기 위한 방법, 장치, 시스템 및 컴퓨터 판독 가능한 기록 매체 |
KR20150024845A (ko) * | 2012-06-21 | 2015-03-09 | 마이크로소프트 코포레이션 | 물리적 큐들로의 가상 머신 플로우들의 오프로딩 기법 |
JP2015530831A (ja) * | 2012-09-13 | 2015-10-15 | シマンテック コーポレーションSymantec Corporation | 選択的ディープパケットインスペクションを実行するためのシステム及び方法 |
JP2015195466A (ja) * | 2014-03-31 | 2015-11-05 | 東芝ライテック株式会社 | 通信装置、通信方法、および通信システム |
WO2016056210A1 (ja) * | 2014-10-10 | 2016-04-14 | 日本電気株式会社 | サーバ、フロー制御方法および仮想スイッチ用プログラム |
JP2017022767A (ja) * | 2014-06-23 | 2017-01-26 | インテル コーポレイション | ソフトウェア確定ネットワークにおける仮想マシンと仮想化コンテナを用いたローカルサービスチェーン |
JP2017126998A (ja) * | 2011-03-30 | 2017-07-20 | アマゾン・テクノロジーズ、インコーポレイテッド | オフロードデバイスベースのパケット処理のためのフレームワークおよびインターフェース |
JP2018028779A (ja) * | 2016-08-17 | 2018-02-22 | 日本電信電話株式会社 | 移行システム、移行方法および移行プログラム |
JP2018185624A (ja) * | 2017-04-25 | 2018-11-22 | 富士通株式会社 | スイッチプログラム、スイッチング方法及び情報処理装置 |
JP2019161319A (ja) * | 2018-03-08 | 2019-09-19 | 富士通株式会社 | 情報処理装置、情報処理システム及びプログラム |
US10565002B2 (en) | 2011-03-30 | 2020-02-18 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
JP2020526122A (ja) * | 2017-06-30 | 2020-08-27 | 華為技術有限公司Huawei Technologies Co.,Ltd. | データ処理方法、ネットワークインタフェースカード、及びサーバ |
US11416281B2 (en) | 2016-12-31 | 2022-08-16 | Intel Corporation | Systems, methods, and apparatuses for heterogeneous computing |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9110703B2 (en) * | 2011-06-07 | 2015-08-18 | Hewlett-Packard Development Company, L.P. | Virtual machine packet processing |
US9215184B2 (en) * | 2011-10-17 | 2015-12-15 | Hewlett-Packard Development Company, L.P. | Methods of and apparatus for managing non-congestion-controlled message traffic in a datacenter |
WO2013114620A1 (ja) | 2012-02-03 | 2013-08-08 | 富士通株式会社 | 仮想マシン制御プログラム、仮想マシン制御方法および情報処理装置 |
CN107743093B (zh) * | 2012-03-19 | 2020-11-03 | 英特尔公司 | 用于输入/输出虚拟化系统中分组管理的装置、方法和介质 |
WO2013177313A2 (en) * | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Processing structured and unstructured data using offload processors |
US9548920B2 (en) * | 2012-10-15 | 2017-01-17 | Cisco Technology, Inc. | System and method for efficient use of flow table space in a network environment |
US9602334B2 (en) * | 2013-01-22 | 2017-03-21 | International Business Machines Corporation | Independent network interfaces for virtual network environments |
US9317310B2 (en) * | 2013-01-31 | 2016-04-19 | Broadcom Corporation | Systems and methods for handling virtual machine packets |
US8955155B1 (en) | 2013-03-12 | 2015-02-10 | Amazon Technologies, Inc. | Secure information flow |
EP2974180B1 (en) | 2013-03-12 | 2018-11-07 | Nec Corporation | A packet data network, a method for operating a packet data network and a flow-based programmable network device |
TWI520530B (zh) * | 2013-05-17 | 2016-02-01 | 智邦科技股份有限公司 | 封包交換裝置及方法 |
CN103346981B (zh) * | 2013-06-28 | 2016-08-10 | 华为技术有限公司 | 虚拟交换方法、相关装置和计算机系统 |
US9325630B2 (en) | 2013-07-05 | 2016-04-26 | Red Hat, Inc. | Wild card flows for switches and virtual switches based on hints from hypervisors |
US9742666B2 (en) * | 2013-07-09 | 2017-08-22 | Nicira, Inc. | Using headerspace analysis to identify classes of packets |
CN104348740B (zh) * | 2013-07-31 | 2018-04-10 | 国际商业机器公司 | 数据包处理方法和系统 |
US20150055456A1 (en) | 2013-08-26 | 2015-02-26 | Vmware, Inc. | Traffic and load aware dynamic queue management |
US9634948B2 (en) | 2013-11-07 | 2017-04-25 | International Business Machines Corporation | Management of addresses in virtual machines |
US9124536B2 (en) * | 2013-12-12 | 2015-09-01 | International Business Machines Corporation | Managing data flows in overlay networks |
US9288135B2 (en) * | 2013-12-13 | 2016-03-15 | International Business Machines Corporation | Managing data flows in software-defined network using network interface card |
KR102160252B1 (ko) * | 2013-12-18 | 2020-09-25 | 삼성전자주식회사 | 가상 스위칭 방법 및 장치 |
US9887939B2 (en) * | 2015-03-11 | 2018-02-06 | International Business Machines Corporation | Transmitting multi-destination packets in overlay networks |
US9515933B2 (en) * | 2014-05-30 | 2016-12-06 | International Business Machines Corporation | Virtual network data control with network interface card |
US9515931B2 (en) * | 2014-05-30 | 2016-12-06 | International Business Machines Corporation | Virtual network data control with network interface card |
US9667754B2 (en) * | 2014-08-11 | 2017-05-30 | Oracle International Corporation | Data structure and associated management routines for TCP control block (TCB) table in network stacks |
US10116772B2 (en) * | 2014-11-14 | 2018-10-30 | Cavium, Inc. | Network switching with co-resident data-plane and network interface controllers |
US9762457B2 (en) | 2014-11-25 | 2017-09-12 | At&T Intellectual Property I, L.P. | Deep packet inspection virtual function |
US10812632B2 (en) * | 2015-02-09 | 2020-10-20 | Avago Technologies International Sales Pte. Limited | Network interface controller with integrated network flow processing |
US10924381B2 (en) | 2015-02-19 | 2021-02-16 | Arista Networks, Inc. | System and method of processing in-place adjacency updates |
US10044676B2 (en) | 2015-04-03 | 2018-08-07 | Nicira, Inc. | Using headerspace analysis to identify unneeded distributed firewall rules |
US9781209B2 (en) * | 2015-08-20 | 2017-10-03 | Intel Corporation | Techniques for routing packets between virtual machines |
US11362862B2 (en) * | 2015-10-30 | 2022-06-14 | Nec Corporation | Method and system for operating networks with data-plane and control-plane separated network functions |
US20170171298A1 (en) * | 2015-12-09 | 2017-06-15 | Intel Corporation | Enhanced virtual switch for network function virtualization |
EP3366014A4 (en) * | 2015-12-17 | 2019-05-01 | Hewlett-Packard Enterprise Development LP | SELECTION OF A REDUCED SET OF ORTHOGONAL NETWORK GUIDELINES |
US10193968B2 (en) | 2016-10-14 | 2019-01-29 | Google Llc | Virtual router with dynamic flow offload capability |
US20180181421A1 (en) * | 2016-12-27 | 2018-06-28 | Intel Corporation | Transferring packets between virtual machines via a direct memory access device |
US10587479B2 (en) | 2017-04-02 | 2020-03-10 | Nicira, Inc. | GUI for analysis of logical network modifications |
US11469953B2 (en) | 2017-09-27 | 2022-10-11 | Intel Corporation | Interworking of legacy appliances in virtualized networks |
US11750533B2 (en) * | 2017-10-24 | 2023-09-05 | Intel Corporation | Hardware assisted virtual switch |
US10992601B2 (en) * | 2018-10-19 | 2021-04-27 | Gubernet Inc. | Packet processing method and apparatus in multi-layered network environment |
US20210021517A1 (en) * | 2019-07-19 | 2021-01-21 | Arista Networks, Inc. | Avoiding recirculation of data packets in a network device |
US11740919B2 (en) * | 2020-05-18 | 2023-08-29 | Dell Products L.P. | System and method for hardware offloading of nested virtual switches |
JP7164267B2 (ja) * | 2020-12-07 | 2022-11-01 | インテル・コーポレーション | ヘテロジニアスコンピューティングのためのシステム、方法及び装置 |
US11451493B2 (en) * | 2021-01-06 | 2022-09-20 | Mellanox Technologies, Ltd. | Connection management in a network adapter |
JP2022166934A (ja) * | 2021-04-22 | 2022-11-04 | 富士通株式会社 | 情報処理装置、過負荷制御プログラムおよび過負荷制御方法 |
JP2023003987A (ja) * | 2021-06-25 | 2023-01-17 | 富士通株式会社 | 情報処理装置、情報処理プログラム、及び情報処理方法 |
US20230017692A1 (en) * | 2021-06-30 | 2023-01-19 | Juniper Networks, Inc. | Extending switch fabric processing to network interface cards |
US12081395B2 (en) | 2021-08-24 | 2024-09-03 | VMware LLC | Formal verification of network changes |
US11909656B1 (en) * | 2023-01-17 | 2024-02-20 | Nokia Solutions And Networks Oy | In-network decision for end-server-based network function acceleration |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007522583A (ja) | 2004-02-13 | 2007-08-09 | インテル・コーポレーション | 動的に拡張可能な仮想スイッチの装置および方法 |
JP2008102929A (ja) | 2006-10-17 | 2008-05-01 | Internatl Business Mach Corp <Ibm> | データ処理システム内でネットワーク入出力(i/o)アダプタと通信するためにネットワーク・デバイス・ドライバによって使用されるバッファ・データ構造にアクセスするためのアドレス変換を管理するための方法、コンピュータ・プログラム、および装置(キュー・データ構造およびキャッシュされたアドレス変換を使用してネットワーク・アダプタと通信するための装置および方法) |
JP2009506618A (ja) * | 2005-08-23 | 2009-02-12 | ネトロノーム システムズ インク | 伝送情報を処理して、転送するシステムおよび方法 |
JP2009151745A (ja) | 2007-11-28 | 2009-07-09 | Hitachi Ltd | 仮想マシンモニタ及びマルチプロセッサシステム |
JP2009276679A (ja) | 2008-05-16 | 2009-11-26 | Panasonic Corp | 広角レンズ |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7007103B2 (en) * | 2002-04-30 | 2006-02-28 | Microsoft Corporation | Method to offload a network stack |
JP4054714B2 (ja) * | 2003-04-28 | 2008-03-05 | 株式会社リコー | 昇降圧型dc−dcコンバータ |
JP2007525883A (ja) * | 2004-01-14 | 2007-09-06 | リヴァーストーン ネットワークス インコーポレーテッド | ネットワークノードにおける処理利用管理 |
US7797460B2 (en) * | 2005-03-17 | 2010-09-14 | Microsoft Corporation | Enhanced network system through the combination of network objects |
EP1917782A2 (en) * | 2005-07-18 | 2008-05-07 | Broadcom Israel R&D | Method and system for transparent tcp offload |
US7643482B2 (en) * | 2006-06-30 | 2010-01-05 | Sun Microsystems, Inc. | System and method for virtual switching in a host |
US8543808B2 (en) * | 2006-08-24 | 2013-09-24 | Microsoft Corporation | Trusted intermediary for network data processing |
JP2008093316A (ja) | 2006-10-16 | 2008-04-24 | Aruze Corp | スロットマシン及びそのプレイ方法 |
US8819675B2 (en) | 2007-11-28 | 2014-08-26 | Hitachi, Ltd. | Virtual machine monitor and multiprocessor system |
JP4636625B2 (ja) * | 2008-01-25 | 2011-02-23 | 株式会社日立情報システムズ | 仮想ネットワークシステムのnic接続制御方法と仮想ネットワークのnic接続制御システムおよびプログラム |
US8195774B2 (en) * | 2008-05-23 | 2012-06-05 | Vmware, Inc. | Distributed virtual switch for virtualized computer systems |
US7983257B2 (en) * | 2008-07-18 | 2011-07-19 | Emulex Design & Manufacturing Corporation | Hardware switch for hypervisors and blade servers |
2010
- 2010-11-30 WO PCT/JP2010/071316 patent/WO2011068091A1/ja active Application Filing
- 2010-11-30 EP EP10834541.4A patent/EP2509000A4/en not_active Withdrawn
- 2010-11-30 JP JP2011544249A patent/JP5720577B2/ja not_active Expired - Fee Related
- 2010-11-30 CN CN201080055153.6A patent/CN102648455B/zh not_active Expired - Fee Related
2011
- 2011-08-30 US US13/137,619 patent/US9130867B2/en not_active Expired - Fee Related
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014506739A (ja) * | 2011-02-24 | 2014-03-17 | 日本電気株式会社 | ネットワークシステム、コントローラ、及びフロー制御方法 |
US11941427B2 (en) | 2011-03-30 | 2024-03-26 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
US11099885B2 (en) | 2011-03-30 | 2021-08-24 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
JP2017126998A (ja) * | 2011-03-30 | 2017-07-20 | アマゾン・テクノロジーズ、インコーポレイテッド | オフロードデバイスベースのパケット処理のためのフレームワークおよびインターフェース |
US11656900B2 (en) | 2011-03-30 | 2023-05-23 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
US10565002B2 (en) | 2011-03-30 | 2020-02-18 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
US12210896B2 (en) | 2011-03-30 | 2025-01-28 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
JP2014524086A (ja) * | 2011-06-27 | 2014-09-18 | マイクロソフト コーポレーション | ホスト使用可能管理チャネル |
US9807129B2 (en) | 2011-06-27 | 2017-10-31 | Microsoft Technology Licensing, Llc | Host enabled management channel |
JP2013041445A (ja) * | 2011-08-17 | 2013-02-28 | Fujitsu Ltd | 情報処理装置、情報処理方法及び情報処理プログラム |
JP2013126062A (ja) * | 2011-12-14 | 2013-06-24 | Kddi Corp | ネットワークトラヒック制御装置、制御方法、およびプログラム |
JP2013161283A (ja) * | 2012-02-06 | 2013-08-19 | Nec Commun Syst Ltd | サーバ、物理ノード、負荷分散方法およびプログラム |
KR20140143155A (ko) * | 2012-03-21 | 2014-12-15 | 마이크로소프트 코포레이션 | 네트워킹 장치 가상화를 위한 패킷 처리 오프로딩 기법 |
JP2015515798A (ja) * | 2012-03-21 | 2015-05-28 | マイクロソフト コーポレーション | ネットワーキング・デバイスの仮想化のためのパケット処理のオフロード |
JP2013197919A (ja) * | 2012-03-21 | 2013-09-30 | Nec Corp | 通信制御装置、通信制御方法および通信制御プログラム |
KR101969194B1 (ko) | 2012-03-21 | 2019-08-13 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | 네트워킹 장치 가상화를 위한 패킷 처리 오프로딩 기법 |
JP2013239996A (ja) * | 2012-05-17 | 2013-11-28 | Nec Corp | 計算機、データ変換装置、通信方法及びプログラム |
WO2013186825A1 (en) * | 2012-06-12 | 2013-12-19 | Nec Corporation | Computer system, communication control server, communication control method, and program |
JP2015523747A (ja) * | 2012-06-12 | 2015-08-13 | 日本電気株式会社 | コンピュータシステム、通信制御サーバ、通信制御方法およびプログラム |
US9571379B2 (en) | 2012-06-12 | 2017-02-14 | Nec Corporation | Computer system, communication control server, communication control method, and program |
JP2015528231A (ja) * | 2012-06-21 | 2015-09-24 | マイクロソフト テクノロジー ライセンシング,エルエルシー | 仮想マシンのフローの物理的なキューへのオフロード |
KR20150024845A (ko) * | 2012-06-21 | 2015-03-09 | 마이크로소프트 코포레이션 | 물리적 큐들로의 가상 머신 플로우들의 오프로딩 기법 |
KR102008551B1 (ko) | 2012-06-21 | 2019-10-21 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | 물리적 큐들로의 가상 머신 플로우들의 오프로딩 기법 |
JP2014017618A (ja) * | 2012-07-06 | 2014-01-30 | Oki Electric Ind Co Ltd | 通信装置、方法及びプログラム |
JP2015530831A (ja) * | 2012-09-13 | 2015-10-15 | シマンテック コーポレーションSymantec Corporation | 選択的ディープパケットインスペクションを実行するためのシステム及び方法 |
JP2014195178A (ja) * | 2013-03-28 | 2014-10-09 | Fujitsu Ltd | 情報処理装置、送信制御方法および送信制御プログラム |
WO2014192259A1 (ja) * | 2013-05-27 | 2014-12-04 | 日本電気株式会社 | ネットワーク制御装置、ネットワーク制御方法、プログラムおよび通信システム |
JPWO2014192259A1 (ja) * | 2013-05-27 | 2017-02-23 | 日本電気株式会社 | ネットワーク制御装置、ネットワーク制御方法、プログラムおよび通信システム |
JP2015195466A (ja) * | 2014-03-31 | 2015-11-05 | 東芝ライテック株式会社 | 通信装置、通信方法、および通信システム |
KR101493933B1 (ko) * | 2014-05-26 | 2015-02-16 | 주식회사 파이오링크 | 하드웨어 스위치 및 소프트웨어 스위치를 사용하여 가상 머신의 통신을 지원하기 위한 방법, 장치, 시스템 및 컴퓨터 판독 가능한 기록 매체 |
JP2017022767A (ja) * | 2014-06-23 | 2017-01-26 | インテル コーポレイション | ソフトウェア確定ネットワークにおける仮想マシンと仮想化コンテナを用いたローカルサービスチェーン |
US10261814B2 (en) | 2014-06-23 | 2019-04-16 | Intel Corporation | Local service chaining with virtual machines and virtualized containers in software defined networking |
WO2016056210A1 (ja) * | 2014-10-10 | 2016-04-14 | 日本電気株式会社 | サーバ、フロー制御方法および仮想スイッチ用プログラム |
JP2018028779A (ja) * | 2016-08-17 | 2018-02-22 | 日本電信電話株式会社 | 移行システム、移行方法および移行プログラム |
US11416281B2 (en) | 2016-12-31 | 2022-08-16 | Intel Corporation | Systems, methods, and apparatuses for heterogeneous computing |
US11693691B2 (en) | 2016-12-31 | 2023-07-04 | Intel Corporation | Systems, methods, and apparatuses for heterogeneous computing |
US12135981B2 (en) | 2016-12-31 | 2024-11-05 | Intel Corporation | Systems, methods, and apparatuses for heterogeneous computing |
JP2018185624A (ja) * | 2017-04-25 | 2018-11-22 | 富士通株式会社 | スイッチプログラム、スイッチング方法及び情報処理装置 |
US11223579B2 (en) | 2017-06-30 | 2022-01-11 | Huawei Technologies Co., Ltd. | Data processing method, network interface card, and server |
JP7034187B2 (ja) | 2017-06-30 | 2022-03-11 | 華為技術有限公司 | データ処理方法、ネットワークインタフェースカード、及びサーバ |
JP2020526122A (ja) * | 2017-06-30 | 2020-08-27 | 華為技術有限公司Huawei Technologies Co.,Ltd. | データ処理方法、ネットワークインタフェースカード、及びサーバ |
JP2019161319A (ja) * | 2018-03-08 | 2019-09-19 | 富士通株式会社 | 情報処理装置、情報処理システム及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
CN102648455B (zh) | 2015-11-25 |
EP2509000A1 (en) | 2012-10-10 |
EP2509000A4 (en) | 2017-09-20 |
JP5720577B2 (ja) | 2015-05-20 |
US20110320632A1 (en) | 2011-12-29 |
CN102648455A (zh) | 2012-08-22 |
US9130867B2 (en) | 2015-09-08 |
JPWO2011068091A1 (ja) | 2013-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5720577B2 (ja) | サーバ及びフロー制御プログラム | |
JP5839032B2 (ja) | ネットワークシステム、コントローラ、及びフロー制御方法 | |
US11329918B2 (en) | Facilitating flow symmetry for service chains in a computer network | |
US10949379B2 (en) | Network traffic routing in distributed computing systems | |
US8254261B2 (en) | Method and system for intra-host communication | |
KR101747518B1 (ko) | 소프트웨어 정의 네트워크에서의 가상화된 컨테이너 및 가상 머신을 통한 로컬 서비스 체이닝 | |
JP6855906B2 (ja) | スイッチプログラム、スイッチング方法及び情報処理装置 | |
US10178054B2 (en) | Method and apparatus for accelerating VM-to-VM network traffic using CPU cache | |
US9736211B2 (en) | Method and system for enabling multi-core processing of VXLAN traffic | |
KR101969194B1 (ko) | 네트워킹 장치 가상화를 위한 패킷 처리 오프로딩 기법 | |
US8446824B2 (en) | NUMA-aware scaling for network devices | |
US20080005441A1 (en) | Bridging network components | |
US10630587B2 (en) | Shared memory communication in software defined networking | |
US20170214612A1 (en) | Chaining network functions to build complex datapaths | |
US11343187B2 (en) | Quantitative exact match distance in network flows | |
US10108566B2 (en) | Apparatus and method for virtualizing network interface | |
EP4187868A1 (en) | Load balancing and networking policy performance by a packet processing pipeline | |
US20230185624A1 (en) | Adaptive framework to manage workload execution by computing device including one or more accelerators | |
US11855889B2 (en) | Information processing device, information processing method, and computer-readable medium of providing dummy response when memory search is unnecessary | |
JP5359357B2 (ja) | パケット処理装置、該処理装置に用いられるパケット処理順序制御方法及びパケット処理順序制御プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080055153.6 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10834541 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2010834541 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011544249 Country of ref document: JP Ref document number: 2010834541 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |