CN115858102A - Method for deploying virtual machine supporting virtualization hardware acceleration - Google Patents
- Publication number: CN115858102A
- Application number: CN202310159184.5A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Abstract
A method for deploying a virtual machine that supports virtualized hardware acceleration is provided. The method comprises the following steps: creating, by at least one data processor of a first node, at least one virtual device to emulate a back-end processing module in a virtualization architecture of the virtual machine, the at least one virtual device supporting virtualized hardware acceleration, and peripheral interconnect (PCI) information of the at least one virtual device being configured to a port of the back-end processing module such that network traffic of the back-end processing module is offloaded to a representor port of the at least one data processor; and emulating, by a system simulator of the first node, a front-end driver module in the virtualization architecture. The front-end driver module runs in user mode and communicates with the back-end processing module through a data path; the virtualized hardware acceleration supported by the at least one virtual device includes hardware virtualization of that data path; and the network flow of the front-end driver module is offloaded to the at least one data processor through the system simulator. User-mode network acceleration is thereby realized.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for deploying a virtual machine supporting virtualized hardware acceleration.
Background
With the development of next-generation information technologies such as cloud computing, 5G, artificial intelligence, and edge computing, virtual machines now rely on a range of virtualization technologies for networking, storage, and the like. Among them, vDPA (virtio Data Path Acceleration), also called vDPA acceleration, offloads the data path to hardware while the virtio framework is emulated at the data layer. However, existing virtualization architectures that support vDPA require kernel-mode vDPA deployment and restrict which hardware can support vDPA, and the hardware resources of cloud computing nodes are limited, which constrains the extended application of virtual machines supporting vDPA.
In summary, the problem to be solved is how to provide a method for deploying a virtual machine supporting virtualized hardware acceleration that overcomes the limitations imposed by scarce hardware resources and by existing virtualization architectures.
Disclosure of Invention
The embodiment of the application provides a method for deploying a virtual machine supporting virtualized hardware acceleration, which is used for solving the problems in the prior art.
In a first aspect, the present application provides a method for deploying a virtual machine that supports virtualized hardware acceleration. The method comprises the following steps: creating, by at least one data processor of a first node, at least one virtual device to emulate a back-end processing module in a virtualization architecture of the virtual machine, wherein the at least one virtual device supports virtualized hardware acceleration and peripheral interconnect (PCI) information of the at least one virtual device is configured to a port of the back-end processing module, such that network traffic of the back-end processing module is offloaded to a representor port of the at least one data processor; and emulating, by a system simulator of the first node, a front-end driver module in the virtualization architecture, wherein the front-end driver module runs in user mode and communicates with the back-end processing module via a data path, the virtualized hardware acceleration supported by the at least one virtual device includes hardware virtualization of the data path, and a network flow of the front-end driver module is offloaded to the at least one data processor via the system simulator.
The first aspect of the present application alleviates the problem of limited physical-server hardware resources, enables user-mode acceleration of the network, and overcomes the limitations of existing virtualization architectures.
In one possible implementation of the first aspect of the present application, the at least one data processor runs a data plane development kit and a multi-layer virtual switch for accelerating data traffic forwarding associated with the virtual machine.
In one possible implementation of the first aspect of the present application, the at least one data processor includes a system on chip and a field-programmable gate array, wherein the system on chip is configured to run the data plane development kit and the multi-layer virtual switch, and the field-programmable gate array is reconfigurable to construct a configuration space of the at least one virtual device, so as to create the at least one virtual device in cooperation with the system on chip.
In one possible implementation of the first aspect of the present application, the virtual machine is a user-mode virtual machine and is operable to emulate a virtualized hardware acceleration device designed to be deployed in kernel mode.
In a possible implementation manner of the first aspect of the present application, the virtualization architecture is the Virtio architecture, the front-end driver module is a Virtio-net module, and the back-end processing module is a vhost-user module.
In a possible implementation manner of the first aspect of the present application, the first node configures the peripheral interconnect (PCI) information of the at least one virtual device, so that the virtual machine communicates directly with the at least one data processor through the at least one virtual device.
In a possible implementation manner of the first aspect of the present application, the at least one virtual device is a virtualized PCI device connected through a virtual PCI bus, and the PCI information is associated with the virtual PCI bus.
In a possible implementation manner of the first aspect of the present application, the first node is a given compute node among at least one compute node of a cloud platform based on the OpenStack architecture, and the cloud platform further includes at least one control node.
In one possible implementation of the first aspect of the present application, the OpenStack architecture includes kernel-mode virtualized hardware acceleration, and the virtual machine deployed at the first node causes the at least one virtual device created by the at least one data processor to support that kernel-mode virtualized hardware acceleration by adapting code of the OpenStack architecture and configuring its components.
In one possible implementation manner of the first aspect of the present application, the method further includes: managing the PCI resources of the at least one virtual device deployed at the first node through the PCI resource management mechanism of the OpenStack architecture.
In a possible implementation manner of the first aspect of the present application, the PCI resource management mechanism of the OpenStack architecture includes a configuration file of the OpenStack compute fabric controller (nova) of the first node.
In one possible implementation of the first aspect of the present application, the virtual machine is created using a virtualized-hardware-acceleration-type (vDPA-type) port of the first node.
In a possible implementation manner of the first aspect of the present application, a service flow table is configured for the representor port of the at least one data processor by the cloud management platform of the OpenStack-based cloud platform, so as to accelerate the service-plane data path of the cloud platform.
In one possible implementation manner of the first aspect of the present application, the method further includes: adding or deleting a virtual device through the at least one data processor, so as to manage the single-root I/O virtualization (SR-IOV) virtual-function resources and the PCI resources of the at least one virtual device.
In one possible implementation manner of the first aspect of the present application, the method further includes: adding or deleting virtual devices through the at least one data processor, so as to add or delete virtual devices with SR-IOV virtual functions among the at least one virtual device.
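As a toy illustration of the add/delete management idea in the two implementations above (all class and method names here are hypothetical; a real DPU exposes this through its own management plane), the data processor can be modeled as tracking virtual devices against a finite SR-IOV virtual-function budget:

```python
# Hypothetical sketch: a data processor that adds/deletes virtual devices
# while accounting for a finite SR-IOV virtual-function (VF) budget.
class DataProcessorManager:
    def __init__(self, max_vfs: int):
        self.max_vfs = max_vfs
        self.devices: dict[str, int] = {}  # device name -> VFs consumed

    def add_device(self, name: str, vfs: int) -> bool:
        """Create a virtual device if enough VF resources remain."""
        if name in self.devices or sum(self.devices.values()) + vfs > self.max_vfs:
            return False  # duplicate name or VF/PCI resources exhausted
        self.devices[name] = vfs
        return True

    def delete_device(self, name: str) -> bool:
        """Delete a virtual device, returning its VF resources to the pool."""
        return self.devices.pop(name, None) is not None


if __name__ == "__main__":
    mgr = DataProcessorManager(max_vfs=8)
    assert mgr.add_device("vdpa0", 4)
    assert mgr.add_device("vdpa1", 4)
    assert not mgr.add_device("vdpa2", 1)  # budget exhausted
    assert mgr.delete_device("vdpa0")
    assert mgr.add_device("vdpa2", 1)
```

The point of the sketch is only the accounting discipline: adding a device consumes VF and PCI resources, and deleting it must return them, which is what makes dynamic expansion of virtual devices possible.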
In one possible implementation of the first aspect of the present application, the at least one data processor runs a multi-layer virtual switch, and the multi-layer virtual switch is configured to implement a network function of the virtual machine based on an internet communication protocol.
In a possible implementation manner of the first aspect of the present application, the at least one data processor is connected to a physical server of the first node in a pluggable manner, and the virtual machine is deployed on the physical server.
In one possible implementation manner of the first aspect of the present application, the method further includes: after the PCI information of the at least one virtual device is configured to the port of the back-end processing module, determining whether the PCI resources of the at least one virtual device have been reported successfully.
In a second aspect, the present application further provides a computer device, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method according to any one of the implementation manners of the above aspects when executing the computer program.
In a third aspect, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed on a computer device, cause the computer device to perform the method according to any one of the implementation manners of any one of the above aspects.
In a fourth aspect, the present application further provides a computer program product, which includes instructions stored on a computer-readable storage medium, and when the instructions are run on a computer device, the computer device is caused to execute the method according to any one of the implementation manners of any one of the above aspects.
In a fifth aspect, an embodiment of the present application further provides a virtual machine supporting virtualized hardware acceleration. The virtualization architecture of the virtual machine comprises a front-end driver module and a back-end processing module. The virtual machine is deployed at a first node, and the first node comprises a system simulator and at least one data processor. At least one virtual device is created by the at least one data processor to emulate the back-end processing module; the at least one virtual device supports virtualized hardware acceleration, and peripheral interconnect (PCI) information of the at least one virtual device is configured to a port of the back-end processing module, so that network traffic of the back-end processing module is offloaded to a representor port of the at least one data processor. The front-end driver module is emulated by the system simulator, runs in user mode, and communicates with the back-end processing module through a data path; the virtualized hardware acceleration supported by the at least one virtual device comprises hardware virtualization of the data path; and the network flow of the front-end driver module is offloaded to the at least one data processor through the system simulator.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a cloud platform based on the OpenStack architecture according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for deploying a virtual machine supporting virtualized hardware acceleration according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a virtual machine supporting virtualized hardware acceleration according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a method for deploying a virtual machine supporting virtualized hardware acceleration, which is used to solve the problems in the prior art. The method and the device provided by the embodiments of the application are based on the same inventive concept; because the principles by which they solve the problems are similar, the embodiments, implementation manners, and examples of the method and the device may refer to one another, and repeated parts are not described again.
It should be understood that, in the description of the present application, "at least one" means one or more than one, and "a plurality" means two or more than two. Additionally, the terms "first," "second," and the like, unless otherwise noted, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, nor order.
Fig. 1 is a schematic diagram of a cloud platform based on the OpenStack architecture according to an embodiment of the present disclosure. As shown in Fig. 1, the cloud platform includes a cloud management platform 110, a control node 120, and a compute node 122. It should be understood that the cloud platform may include a plurality of compute nodes; compute node 122 in Fig. 1 is merely exemplary. The cloud platform employs a virtualization architecture, such as the OpenStack architecture. The control node 120 is used to establish virtual machines and allocate storage, and the compute node 122 is used to run virtual machines. The cloud platform also includes network nodes (not shown) for communication between external and internal networks, and storage nodes (not shown) that provide additional storage management for the virtual machines. The basic services of the control node 120 include a compute fabric controller (nova) module for deploying virtual servers and for compute management services. As can be seen from Fig. 1, with the spread of cloud computing and virtualization technologies, the virtual machines on compute node 122 must provide ever more network and storage applications, while the hardware resources of the physical server of compute node 122 are limited, so networking and storage greatly constrain the extended application of the virtual machines. To this end, the cloud platform of Fig. 1 also shows a device simulator 130, which is provided by a data processing unit (DPU) of compute node 122.
The data processor addresses the insufficient hardware resources of the physical server of compute node 122. It can serve as an internal engine for the networking and storage of compute node 122, take over infrastructure-layer services such as network virtualization and hardware resource pooling, and release the hardware resources of the physical server and the central processing unit (CPU) of compute node 122 to upper-layer applications. The device simulator 130 is implemented on the data processor; it may be a device emulator module specific to the corresponding DPU, used for management functions such as emulating vDPA devices on the host and creating and deleting vDPA devices. Fig. 1 also shows that the virtual machines of compute node 122 are connected to an external network device 140 through the device simulator 130. The cloud management platform 110 is connected to the control node 120 and manages the overall resources through it. The cloud management platform 110 also interfaces with the device simulator 130 and initiates, for example, a create-virtualized-device request. The control node 120 is connected to compute node 122 and performs operations such as establishing virtual machines and allocating memory.
With continued reference to Fig. 1, the virtualization architecture employed by compute node 122 is typically a paravirtualization architecture. A fully virtualized architecture emulates a complete physical device in software, which leaves the guest operating system unaware of whether it is running on a virtual or a physical machine, but is inefficient. A paravirtualization architecture, such as the Virtio architecture, adds a driver as the front end and provides a back-end driver for the specific device emulation, thereby improving virtualization efficiency. Taking the Virtio architecture as an example, it provides a front-end/back-end design comprising a front-end driver module (such as the Virtio network card driver) and a back-end processing module (such as vhost). Inside the virtual machine runs the front-end driver module Virtio-net, also called the Virtio network card driver. Outside the virtual machine runs the back-end processing module, also called the vhost-user module, which may be in user mode or in kernel mode. The back-end processing module can be implemented in several ways: one is to create the virtual machine under a system simulator such as the Quick Emulator (QEMU) and emulate both the front end and the back end; another is to emulate the front end in the virtual machine's kernel and the back end in the physical machine's kernel; yet another is to emulate the back end in user mode on the physical machine in combination with the Data Plane Development Kit (DPDK). The front-end driver module resides in the guest, receives user-mode requests, encapsulates them, and sends the operations to the back-end processing module. The back-end processing module receives requests from the front-end driver module, operates the physical device, and notifies the front end.
In addition, the Virtio architecture completes communication between the front end and the back end through the Peripheral Component Interconnect (PCI) configuration space, and shares I/O request data through a data-sharing mechanism. For example, the front-end driver module allocates a memory region in the virtual machine's memory and shares it with the back-end process; the data of an I/O request sent by the front-end driver module is placed in the shared region, and the back-end process obtains it directly from there, so a full round trip through hardware emulation is unnecessary and performance overhead is saved.
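The data-sharing idea described above can be sketched in Python as an illustrative analogue (using `multiprocessing.shared_memory`, not the patent's implementation): the front end places request data in a shared region, and the back end attaches to the same region and reads the request in place rather than receiving a copy through an emulated device.

```python
# Illustrative sketch of the Virtio-style data-sharing mechanism: the
# front-end driver publishes an I/O request into shared memory, and the
# back-end process attaches to the same region and reads it in place.
from multiprocessing import shared_memory

def front_end_publish(payload: bytes) -> shared_memory.SharedMemory:
    """Front end: allocate a shared region and place the request data in it."""
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[: len(payload)] = payload  # request lands directly in the shared region
    return shm

def back_end_consume(shm_name: str, length: int) -> bytes:
    """Back end: attach by name and obtain the request without an extra
    copy through an emulated device."""
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        return bytes(shm.buf[:length])
    finally:
        shm.close()

if __name__ == "__main__":
    request = b"virtio-net tx request"
    region = front_end_publish(request)
    received = back_end_consume(region.name, len(request))
    region.close()
    region.unlink()
    assert received == request
```

In the real Virtio architecture the shared region holds virtqueue descriptors rather than raw Python bytes, but the saving is the same: the consumer reads the producer's buffer directly instead of copying it through an emulated device register interface.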
With continued reference to Fig. 1, virtualized hardware acceleration (vDPA) is also called vhost Data Path Acceleration; it offloads the data path to hardware and emulates the virtualization architecture at the data layer. However, virtual machine and virtualization architectures supporting vDPA, such as the OpenStack architecture, require kernel-mode vDPA deployment, while a compute node under the OpenStack architecture, that is, a node running virtual machines, has limited hardware resources, resulting in low resource utilization and hindering virtual machine expansion. In addition, the kernel-mode data-sharing mechanism of vDPA requires allocating a shared memory region from the limited kernel-mode storage resources, which also affects other kernel-mode consumers such as the data path and the kernel protocol stack. Therefore, the hardware supporting vDPA is limited and the hardware resources of cloud computing nodes are limited, which constrains the extended application of virtual machines supporting vDPA. How the method for deploying a virtual machine supporting virtualized hardware acceleration according to an embodiment of the present application overcomes these challenges is described in detail below with reference to Fig. 2.
Fig. 2 is a flowchart of a method for deploying a virtual machine supporting virtualized hardware acceleration according to an embodiment of the present disclosure. As shown in fig. 2, the method includes the following steps.
Step S210: creating, by at least one data processor of a first node, at least one virtual device to emulate a back-end processing module in a virtualization architecture of the virtual machine, wherein the at least one virtual device supports virtualized hardware acceleration and peripheral interconnect (PCI) information of the at least one virtual device is configured to a port of the back-end processing module, such that network traffic of the back-end processing module is offloaded to a representor port of the at least one data processor.
Step S220: emulating, by a system simulator of the first node, a front-end driver module in the virtualization architecture, wherein the front-end driver module runs in user mode and communicates with the back-end processing module via a data path, the virtualized hardware acceleration supported by the at least one virtual device includes hardware virtualization of the data path, and a network flow of the front-end driver module is offloaded to the at least one data processor via the system simulator.
Referring to Fig. 2, the at least one data processor of the first node creates at least one virtual device that supports virtualized hardware acceleration, i.e., vDPA; the created virtual device may be a vDPA device. A virtual device is created by the data processor to emulate the back-end processing module in the virtualization architecture of the first node's virtual machine, the back-end processing module being used to receive requests from the front-end driver module, operate the physical device, and notify the front end. Here, the peripheral interconnect information, i.e., PCI information, of the at least one virtual device is configured to the port of the back-end processing module, such that network traffic of the back-end processing module is offloaded to a representor (rep) port of the at least one data processor. The front-end driver module is then emulated by a system simulator of the first node, e.g., a QEMU simulator. The front-end driver module runs in user mode and communicates with the back-end processing module through a data path. Moreover, the virtualized hardware acceleration supported by the at least one virtual device includes hardware virtualization of that data path, which helps accelerate the transmission and reception of network packets. In addition, the network flow of the front-end driver module is offloaded to the at least one data processor through the system simulator. In this way, vDPA devices are created and virtualized by the data processor, and the PCI resources of the vDPA devices can be managed by configuring the PCI information, so that the network traffic of the back-end processing module can be offloaded to the representor port of the at least one data processor and the network flow of the front-end driver module can be offloaded to the at least one data processor.
Therefore, when the virtual machine is deployed, a service flow table can be configured, e.g., through a plug-in, for the representor port to which traffic is offloaded on the data processor, thereby establishing and accelerating the service-plane data path. Moreover, since the front-end driver module runs in user mode while both the back-end traffic and the front-end network flow are offloaded to the at least one data processor, the virtual machine of the first node can achieve user-mode acceleration of network flows by means of the data processor.
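The two steps above can be modeled conceptually as follows (all names here are hypothetical illustrations, not an actual DPU or QEMU API): the data processor creates a vDPA-capable virtual device, its PCI information is bound to the back-end port so traffic targets the representor port, and the system simulator then attaches the user-mode front end.

```python
# Hypothetical model of steps S210/S220; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualDevice:
    pci_address: str          # PCI information of the vDPA-capable device
    supports_vdpa: bool = True

@dataclass
class FirstNode:
    backend_ports: dict = field(default_factory=dict)    # port name -> PCI info
    representor_offloads: set = field(default_factory=set)

    def create_virtual_device(self, pci_address: str) -> VirtualDevice:
        """Step S210 (part 1): the data processor creates a virtual device."""
        return VirtualDevice(pci_address)

    def bind_backend_port(self, port: str, device: VirtualDevice) -> None:
        """Step S210 (part 2): configure the device's PCI info to the back-end
        port, so its traffic is offloaded to the representor port."""
        self.backend_ports[port] = device.pci_address
        self.representor_offloads.add(port)

    def emulate_front_end(self, port: str) -> str:
        """Step S220: the system simulator emulates the user-mode front-end
        driver, which talks to the back end over the data path."""
        if port not in self.backend_ports:
            raise ValueError("back-end port must be configured first")
        return f"virtio-net front end attached to {port}"

if __name__ == "__main__":
    node = FirstNode()
    dev = node.create_virtual_device("0000:03:00.2")
    node.bind_backend_port("vhost-user-0", dev)
    print(node.emulate_front_end("vhost-user-0"))
```

The ordering constraint in `emulate_front_end` mirrors the method: the back-end port must carry the device's PCI information before the front end is emulated, since that binding is what steers the offloaded traffic to the representor port.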
In summary, the method for deploying a virtual machine supporting virtualized hardware acceleration shown in Fig. 2 alleviates the hardware resource limitations of the node, improves the network performance of the virtual devices inside the virtual machine, and improves the extensibility of the virtual devices through the data processor. The limited hardware resources of the physical server are thus no longer a bottleneck, and user-mode network acceleration can be realized, overcoming the limitations of existing virtualization architectures.
With continued reference to Fig. 2, in some embodiments, the at least one data processor of the first node addresses the insufficient hardware resources of the physical server of the first node. It may serve as an internal engine for the first node's networking and storage, taking over infrastructure-layer services such as network virtualization and hardware resource pooling, and releasing the hardware resources of the physical server as well as the central processor of the first node to upper-layer applications. In some embodiments, the method shown in Fig. 2 employs the OpenStack architecture. A compute node under the OpenStack architecture corresponds to the first node and adopts the Virtio architecture; vDPA-capable hardware, i.e., virtualized vDPA devices, is obtained by virtualizing a DPU or smart NIC, and the back-end processing module of the virtualized vDPA device is offloaded to the DPU NIC for acceleration. By adapting the OpenStack code, vDPA devices obtained through DPU virtualization can be deployed and used to emulate vDPA-capable hardware in order to create a virtual-host (vhost) virtual machine. As mentioned above, the currently prevailing OpenStack architecture and related protocols require kernel-mode vDPA deployment. Here, the vDPA device obtained through DPU virtualization has its back-end processing module offloaded onto the DPU NIC rather than placed in kernel mode, and can optionally realize user-mode acceleration through Open vSwitch (OVS) and DPDK. The combination of OVS and DPDK means the virtual switch does not interrupt the CPU; instead, the DPDK interface lets the application read packets directly from memory, saving CPU interrupts and memory copies.
In some embodiments, the method further comprises: creating a vDPA-type port at an OpenStack control node; confirming that the compute node has compiled and installed OVS in the user-mode OVS-plus-DPDK form and that service components such as the neutron-ovs-agent service are normal; and creating a virtual machine using the vDPA-type port while ensuring that the PCI information of the virtualized vDPA device is correctly set on the vhost-user virtual machine port, so that the vDPA driver uses Virtual Function I/O (VFIO) to map the vDPA device resources and thereby realize the offload. Here, VFIO mapping means mapping the direct-memory-access physical addresses accessed by the device through the I/O bus into user mode, so that a user-mode program can control data transmission by itself and register interrupt handlers by itself, implementing the device driver in user mode.
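For illustration, the Neutron-side request body for such a vDPA-type port might look like the following. This is a hedged sketch: `binding:vnic_type` with the value `vdpa` exists in recent Neutron releases, but the exact `binding:profile` keys consumed by a given DPU driver (here, `pci_slot`) are deployment-specific assumptions.

```python
# Hedged sketch of the Neutron port attributes involved when creating a
# vDPA-type port; the "pci_slot" profile key is illustrative.
def build_vdpa_port_request(network_id: str, pci_address: str) -> dict:
    return {
        "port": {
            "network_id": network_id,
            "binding:vnic_type": "vdpa",                   # vDPA offload port type
            "binding:profile": {"pci_slot": pci_address},  # PCI info of the virtual device
        }
    }

if __name__ == "__main__":
    body = build_vdpa_port_request("net-1234", "0000:03:00.2")
    assert body["port"]["binding:vnic_type"] == "vdpa"
```

A virtual machine created with such a port receives the virtualized vDPA device's PCI information, which is the prerequisite for the VFIO mapping described above.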
In one possible implementation, the at least one data processor runs a data plane development kit and a multi-layer virtual switch for accelerating data traffic forwarding associated with the virtual machine. In some embodiments, the at least one data processor includes a System On Chip (SOC) for running the data plane development kit and the multi-layer virtual switch, and a field-programmable gate array (FPGA) configured to be reconfigurable so as to construct the configuration space of the at least one virtual device and, in cooperation with the SOC, create the at least one virtual device. Here, the data processor implements virtualization of the virtual device using a system-on-chip-plus-FPGA design: the operating system and back-end software run on the SOC to virtualize the PCI device; for example, the PCI device is loaded on the SOC operating system and brought into user space through the DPDK library, the PCI configuration space of the corresponding virtualized device is constructed by the FPGA, and finally the corresponding virtualized PCI device, that is, the at least one virtual device, is presented on the host.
In one possible implementation, the virtual machine is a user-mode virtual machine and may be used to emulate a virtualized hardware acceleration device designed to be deployed in kernel mode, such as the kernel-mode VDPA devices or VDPA hardware required by some virtualization technology architectures. As described above, by adapting the code of the respective virtualization technology architecture, for example the OpenStack architecture, and configuring its components, a corresponding VDPA device or VDPA hardware can be created on the data processor; the virtualized PCI device thus presented on the host can emulate a virtualized hardware acceleration device designed for kernel-mode deployment, enabling the virtual machine of the first node to achieve user-mode acceleration of network flows by means of the data processor.
In one possible implementation, the virtualization framework is a Virtio framework, the front-end driver module is a Virtio-net module, and the back-end processing module is a vhost user module.
In one possible embodiment, the first node configures the peripheral interconnection information of the at least one virtual device such that traffic of the virtual machine passes through the at least one data processor via the at least one virtual device. In some embodiments, the at least one virtual device is a virtualized peripheral interconnect device connected through a virtual peripheral interconnect bus, and the peripheral interconnection information is associated with the virtual peripheral interconnect bus. In this way, a virtualized PCI device is obtained through virtualization on the host machine.
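The peripheral interconnection information configured here centers on PCI addresses of the form domain:bus:slot.function. As a small, self-contained illustration, the following hypothetical helper (not part of the patent) validates and splits such an address before it would be handed to a virtual machine port:

```python
import re

# Hypothetical helper: validate a PCI "domain:bus:slot.function" address
# of the kind passed as peripheral-interconnect information, and split it
# into its numeric fields.
PCI_RE = re.compile(
    r"^(?P<domain>[0-9a-fA-F]{4}):(?P<bus>[0-9a-fA-F]{2}):"
    r"(?P<slot>[0-9a-fA-F]{2})\.(?P<func>[0-7])$"
)

def parse_pci_address(addr: str) -> dict:
    """Return the domain/bus/slot/function fields of a PCI address as ints."""
    m = PCI_RE.match(addr)
    if m is None:
        raise ValueError(f"malformed PCI address: {addr!r}")
    return {k: int(v, 16) for k, v in m.groupdict().items()}
```

For example, `parse_pci_address("0000:03:00.1")` yields bus 3, function 1, while an address missing the domain field is rejected.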
In one possible implementation, the first node is a given compute node of at least one compute node of a cloud platform based on an open stack architecture, the cloud platform further comprising at least one control node. In some embodiments, the open stack architecture includes kernel-mode virtualized hardware acceleration, and by adapting code of the open stack architecture and configuring its components, the virtual machine deployed at the first node causes the at least one virtual device created by the at least one data processor to support that kernel-mode virtualized hardware acceleration. In some embodiments, the method further comprises: managing the peripheral device interconnection resources of the at least one virtual device deployed at the first node through the peripheral device interconnection resource management mechanism of the open stack architecture. In some embodiments, that mechanism comprises a configuration file of the open stack compute service controller of the first node. In some embodiments, the virtual machine is created using a virtualized-hardware-acceleration-type port of the first node. In some embodiments, a service-plane data path of the cloud platform is accelerated by the cloud management platform of the cloud platform configuring a service flow table for the representative peer port (i.e., the representor port) of the at least one data processor. In this way, a virtual machine supporting virtualized hardware acceleration, including hardware supporting VDPA such as VDPA devices, is deployed under the open stack architecture, that is, the OpenStack architecture.
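The PCI resource management mechanism mentioned above can be sketched as a Nova compute configuration fragment. This assumes OpenStack's `passthrough_whitelist` option in nova.conf; the vendor and product IDs and network name below are placeholders for the actual DPU hardware, not values taken from the patent.

```ini
# Hypothetical nova.conf fragment on the compute node: lets the PCI
# resource tracker discover and manage the DPU's VFs so that vDPA ports
# can be scheduled onto this host.
[pci]
passthrough_whitelist = {"vendor_id": "15b3", "product_id": "101e", "physical_network": "provider-net"}
```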
In addition, the requirements on VDPA under the open stack architecture can be met simply by creating corresponding VDPA devices on the data processor, which reduces dependence on hardware from specific vendors and facilitates deploying virtual machines that support virtualized hardware acceleration. Moreover, the PCI resources of the VDPA devices can be managed by the peripheral device interconnection resource management mechanism of the open stack architecture, achieving seamless compatibility. Finally, by offloading the data traffic of the virtual machine to the data processor, resources on the host can be released.
In one possible embodiment, the method further comprises: adding or deleting virtual devices through the at least one data processor, so as to manage the Single Root I/O Virtualization (SRIOV) Virtual Function (VF) resources of the at least one virtual device and to manage the peripheral component interconnect resources of the at least one virtual device. In some embodiments, the method further comprises: adding or deleting, through the at least one data processor, virtual devices having the single-root I/O virtualization function among the at least one virtual device. Here, single-root I/O virtualization, i.e., SRIOV, enables multiple virtual machines to share the same PCIe physical hardware resource, while a virtual function, i.e., a VF, is a lightweight PCIe function on the network adapter used to support SRIOV. By using the peripheral component interconnect resource management mechanism of the open stack architecture, the PCI resources of the VDPA device can be managed, including managing the SRIOV VF resources of the virtual device and adding or deleting devices with SRIOV.
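On Linux hosts, adding and deleting SR-IOV VFs is commonly exposed through sysfs. The following sketch illustrates that standard interface; the PCI address is a placeholder, and whether the data processor uses this exact path is an assumption, since the patent only describes the operation abstractly.

```shell
# Standard Linux sysfs interface for SR-IOV VF management.
PF=/sys/bus/pci/devices/0000:03:00.0
cat $PF/sriov_totalvfs        # maximum number of VFs the hardware supports
echo 4 > $PF/sriov_numvfs     # create four VFs (each can back a vDPA device)
echo 0 > $PF/sriov_numvfs     # delete all VFs again
```

Note that most drivers require writing 0 before changing a nonzero VF count to a different nonzero value.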
In one possible implementation, the at least one data processor runs a multi-layer virtual switch configured to implement the network functions of the virtual machine based on a network communication protocol. Here, the network communication protocol may be, for example, the OpenFlow protocol.
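Where the multi-layer virtual switch is Open vSwitch, OpenFlow rules of the kind described can be installed with the standard ovs-ofctl tool. This is an illustrative sketch; the bridge name and port numbers are placeholders, e.g. port 1 standing in for a representor port and port 2 for the uplink.

```shell
# Steer traffic between two switch ports with simple OpenFlow rules.
ovs-ofctl add-flow br-int "in_port=1,actions=output:2"
ovs-ofctl add-flow br-int "in_port=2,actions=output:1"
ovs-ofctl dump-flows br-int   # inspect the installed flow table
```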
In a possible implementation manner, the at least one data processor is connected with a physical server of the first node in a pluggable manner, and the virtual machine is deployed on the physical server.
In one possible implementation, the method further includes: after the peripheral device interconnection information of the at least one virtual device is configured to the port of the back-end processing module, determining whether the peripheral device interconnection resources of the at least one virtual device are reported successfully.
Fig. 3 is a schematic diagram of a virtual machine supporting virtualized hardware acceleration according to an embodiment of the present disclosure. As shown in fig. 3, the virtualization architecture of the virtual machine includes a front-end driver module 312 and a back-end processing module (not shown). The virtual machine is deployed at a first node that includes a system simulator 310 and at least one data processor 320. At least one virtual device (the virtual device refers to the virtualized hardware acceleration device 332 in fig. 3) is created by the at least one data processor 320 to emulate the back-end processing module. The at least one virtual device supports virtualized hardware acceleration, and the peripheral interconnect information of the at least one virtual device is configured to a port of the back-end processing module, such that network traffic of the back-end processing module is offloaded to a representative peer port 322 of the at least one data processor 320. The front-end driver module 312 is simulated by the system simulator 310; it runs in user mode and communicates with the back-end processing module via a data path. The virtualized hardware acceleration supported by the at least one virtual device includes hardware virtualization of the data path. The network flow of the front-end driver module 312 is offloaded to the at least one data processor 320 through the system simulator 310. The data processor 320 includes a device simulator 330, corresponding to a device emulator module in the data processor 320, which is configured to perform management functions such as emulating a VDPA device on the host and creating and deleting VDPA devices. Thus, the data processor 320 emulates the back-end processing module by creating at least one virtual device through the device simulator 330, where the created virtual device is shown in FIG. 3 as the virtualized hardware acceleration device 332.
Continuing with fig. 3, negotiation between the front-end driver module 312 and the back-end processing module (not specifically identified in fig. 3) is enabled by the data processor 320: the network traffic of the back-end processing module is offloaded to the representative peer port 322 of the at least one data processor 320, while the front-end driver module 312 is simulated by the system simulator 310. In particular, the front-end driver module 312 runs in user mode and communicates with the back-end processing module over a data path, embodied as the connection line between the front-end driver module 312 and the representative peer port 322. The control plane is embodied as the connection line between the front-end driver module 312 and the virtualized hardware acceleration device 332. This is equivalent to using the data processor 320 as a back-end device of the virtual machine, which facilitates accelerated forwarding of data traffic through the OVS and DPDK on the data processor 320.
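On the guest side, the user-mode data path described above typically appears as a vhost-user backed virtio-net interface. The following libvirt domain XML fragment is an illustrative sketch of such an interface; the socket path and MAC address are placeholders, with the assumption that the backend of the socket is served from the data processor.

```xml
<!-- Hypothetical vhost-user interface in a libvirt guest definition:
     the guest sees a virtio-net device whose backend is reached over a
     Unix socket rather than a kernel driver. -->
<interface type='vhostuser'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source type='unix' path='/var/run/openvswitch/vhu-vdpa0' mode='server'/>
  <model type='virtio'/>
</interface>
```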
With continued reference to fig. 3, the data processor 320 also includes a data plane development kit and multi-layer virtual switch 340 and a network card 350. The network card 350 is used for communicating with external devices. The data plane development kit and multi-layer virtual switch 340 are used to accelerate network flows in user mode. For the basic principle of the virtual machine shown in fig. 3, reference may be made to the foregoing embodiments related to fig. 2, and details are not repeated here.
Fig. 4 is a schematic structural diagram of a computing device provided in an embodiment of the present application, where the computing device 400 includes: one or more processors 410, a communication interface 420, and a memory 430. The processor 410, communication interface 420, and memory 430 are interconnected by a bus 440. Optionally, the computing device 400 may further include an input/output interface 450, and the input/output interface 450 is connected with an input/output device for receiving parameters set by a user, and the like. The computing device 400 can be used to implement some or all of the functionality of the device embodiments or system embodiments described above in the present application; the processor 410 can also be used to implement some or all of the operational steps of the method embodiments described above in the embodiments of the present application. For example, specific implementations of the computing device 400 to perform various operations may refer to specific details in the above-described embodiments, such as the processor 410 being configured to perform some or all of the steps or some or all of the operations in the above-described method embodiments. For another example, in this embodiment of the application, the computing device 400 may be used to implement part or all of the functions of one or more components in the above-described apparatus embodiments, and the communication interface 420 may be specifically used to implement the communication functions and the like necessary for the functions of these apparatuses and components, and the processor 410 may be specifically used to implement the processing functions and the like necessary for the functions of these apparatuses and components.
It should be understood that the computing device 400 of fig. 4 may include one or more processors 410, which may cooperatively provide processing capability in parallel, in series, or in any combination; alternatively, the processors 410 may form a processor sequence or a processor array, may be divided into a main processor and auxiliary processors, or may have different architectures, such as a heterogeneous computing architecture. Further, the structural and functional description of the computing device 400 shown in FIG. 4 is exemplary and non-limiting. In some example embodiments, computing device 400 may include more or fewer components than shown in FIG. 4, combine certain components, split certain components, or have a different arrangement of components.
The processor 410 may be implemented in various specific forms, for example, the processor 410 may include one or more combinations of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a neural-Network Processing Unit (NPU), a Tensor Processing Unit (TPU), or a Data Processing Unit (DPU), and the embodiments of the present application are not limited in particular. Processor 410 may also be a single core processor or a multicore processor. The processor 410 may be comprised of a combination of a CPU and hardware chips. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof. The processor 410 may also be implemented solely using logic devices with built-in processing logic, such as an FPGA or a Digital Signal Processor (DSP). The communication interface 420 may be a wired interface, such as an ethernet interface, a Local Interconnect Network (LIN), or the like, or a wireless interface, such as a cellular network interface or a wireless lan interface, for communicating with other modules or devices.
The memory 430 may be a non-volatile memory, such as a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The memory 430 may also be a volatile memory, such as a random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). The memory 430 may also be used to store program code and data, so that the processor 410 can call the program code stored in the memory 430 to perform some or all of the operational steps of the above-described method embodiments, or to perform the corresponding functions in the above-described apparatus embodiments. Moreover, computing device 400 may contain more or fewer components than shown in FIG. 4, or have a different arrangement of components.
The bus 440 may be a peripheral component interconnect express (PCIe) bus, an extended industry standard architecture (EISA) bus, a unified bus (UB), a compute express link (CXL) bus, a cache coherent interconnect for accelerators (CCIX) bus, or the like. The bus 440 may be divided into an address bus, a data bus, a control bus, and so on; in addition to the data bus, it may include a power bus, a control bus, a status signal bus, and the like. However, for clarity, only one thick line is shown in FIG. 4, which does not represent only one bus or one type of bus.
Embodiments of the present application further provide a system, where the system includes a plurality of computing devices, and the structure of each computing device may refer to the structure of the computing device described above. The functions or operations that can be implemented by the system may refer to specific implementation steps in the above method embodiments and/or specific functions described in the above apparatus embodiments, which are not described in detail herein. Embodiments of the present application also provide a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on a computer device (such as one or more processors), the method steps in the above method embodiments may be implemented. The specific implementation of the processor of the computer-readable storage medium in executing the above method steps may refer to the specific operations described in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not described herein again. Embodiments of the present application further provide a computer program product, which includes instructions stored on a computer-readable storage medium, and when the instructions are run on a computer device, the computer device is caused to execute the method steps in the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be realized in whole or in part in the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more collections of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium, or a semiconductor medium.
The semiconductor medium may be a solid state disk, or may be a random access memory, flash memory, read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, registers, or any other form of suitable storage medium.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. Each flow and/or block in the flow charts and/or block diagrams, and combinations of flows and/or blocks in the flow charts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The steps in the method of the embodiment of the application can be sequentially adjusted, combined or deleted according to actual needs; the modules in the system of the embodiment of the application can be divided, combined or deleted according to actual needs. If these modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, then the present application is intended to include these modifications and variations as well.
Claims (21)
1. A method for deploying a virtual machine that supports virtualized hardware acceleration, the method comprising:
creating, by at least one data processor of a first node, at least one virtual appliance to simulate a back-end processing module in a virtualization architecture of the virtual machine, wherein the at least one virtual appliance supports virtualization hardware acceleration and peripheral interconnect information of the at least one virtual appliance is configured to a port of the back-end processing module such that network traffic of the back-end processing module is offloaded to a representative peer port of the at least one data processor; and
simulating, by a system simulator of the first node, a front-end driver module in the virtualization framework, wherein the front-end driver module runs in a user state and communicates with the back-end processing module via a data path, the virtualization hardware acceleration supported by the at least one virtual device includes a hardware virtualization of the data path, and a network flow of the front-end driver module is offloaded to the at least one data processor via the system simulator.
2. The method of claim 1, wherein the at least one data processor runs a data plane development kit and a multi-layer virtual switch for accelerating data traffic forwarding associated with the virtual machine.
3. The method of claim 2, wherein the at least one data processor comprises a system-on-chip for running the data plane development kit and the multi-layer virtual switch and a field-programmable gate array configured to be reconfigurable to construct a configuration space for the at least one virtual device to create the at least one virtual device in cooperation with the system-on-chip.
4. The method of claim 1, wherein the virtual machine is a user-state virtual machine and is operable to simulate a virtualized hardware acceleration device designed to be deployed in a kernel state.
5. The method of claim 1, wherein the virtualization framework is a Virtio framework, the front-end driver module is a Virtio-net module, and the back-end processing module is a vhost user module.
6. The method of claim 1, wherein the first node configures peripheral interconnect information for the at least one virtual device to enable traffic of the virtual machine to pass through the at least one data processor via the at least one virtual device.
7. The method of claim 6, wherein the at least one virtual device is a virtualized peripheral interconnect device, the virtualized peripheral interconnect device connected through a virtual peripheral interconnect bus, the peripheral interconnect information associated with the virtual peripheral interconnect bus.
8. The method of claim 1, wherein the first node is a given computing node of at least one computing node of an open stack architecture based cloud platform, the open stack architecture based cloud platform further comprising at least one control node.
9. The method of claim 8, wherein the open stack architecture comprises kernel-mode virtualized hardware acceleration, and wherein the virtual machine deployed at the first node causes the at least one virtual device created by the at least one data processor to support the kernel-mode virtualized hardware acceleration comprised by the open stack architecture by adapting code of the open stack architecture and configuring components of the open stack architecture.
10. The method of claim 9, further comprising: managing peripheral device interconnection resources of the at least one virtual device deployed at the first node through a peripheral device interconnection resource management mechanism of the open stack architecture.
11. The method of claim 10, wherein the peripheral component interconnect resource management mechanism of the open stack architecture comprises configuring a configuration file of an open stack compute fabric controller of the first node.
12. The method of claim 9, wherein the virtual machine is created using a virtualized hardware acceleration type port of the first node.
13. The method of claim 12, wherein a traffic flow table is configured for the delegate peer port of the at least one data processor by a cloud management platform of the cloud platform based on the open stack architecture to speed up traffic plane data path of the cloud platform.
14. The method of claim 1, further comprising:
adding or deleting virtual devices by the at least one data processor to manage resources for a single root input output virtualization virtual function of the at least one virtual device and to manage peripheral interconnect resources of the at least one virtual device.
15. The method of claim 14, further comprising:
adding or deleting virtual devices through the at least one data processor so as to add or delete virtual devices with a single-root input/output virtualization function in the at least one virtual device.
16. The method of claim 1, wherein the at least one data processor runs a multi-layer virtual switch configured to implement network functions of the virtual machine based on a network communication protocol.
17. The method of claim 1, wherein the at least one data processor is connected to a physical server of the first node in a pluggable manner, the virtual machine being deployed on the physical server.
18. The method of claim 1, further comprising:
after the peripheral device interconnection information of the at least one virtual device is configured to the port of the back-end processing module, determining whether the peripheral device interconnection resources of the at least one virtual device are reported successfully.
19. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 18 when executing the computer program.
20. A computer-readable storage medium storing computer instructions which, when executed on a computer device, cause the computer device to perform the method of any one of claims 1 to 18.
21. A virtual machine supporting virtualized hardware acceleration, the virtual machine comprising a front-end driver module and a back-end processing module, the virtual machine deployed at a first node comprising a system simulator and at least one data processor, wherein at least one virtual appliance is created by the at least one data processor to simulate the back-end processing module, the at least one virtual appliance supports virtualized hardware acceleration and peripheral component interconnect information of the at least one virtual appliance is configured to a port of the back-end processing module such that network traffic of the back-end processing module is offloaded to a representative peer port of the at least one data processor, the front-end driver module is simulated by the system simulator, the front-end driver module runs in a user state and communicates with the back-end processing module over a data path, the virtualized hardware acceleration supported by the at least one virtual appliance includes hardware virtualization of the data path, and network flow of the front-end driver module is offloaded to the at least one data processor by the system simulator.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310159184.5A CN115858102B (en) | 2023-02-24 | 2023-02-24 | Method for deploying virtual machine supporting virtualized hardware acceleration |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115858102A true CN115858102A (en) | 2023-03-28 |
| CN115858102B CN115858102B (en) | 2023-05-16 |
Family
ID=85658802
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310159184.5A Active CN115858102B (en) | 2023-02-24 | 2023-02-24 | Method for deploying virtual machine supporting virtualized hardware acceleration |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115858102B (en) |
Patent Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103677955A (en) * | 2013-12-04 | 2014-03-26 | 深圳清华大学研究院 | Online migration method of memory of virtual machine based on Virtio driver |
| CN104618158A (en) * | 2015-01-28 | 2015-05-13 | 上海交通大学 | Embedded network virtualization environment VirtIO (virtual input and output) network virtualization working method |
| US20160321094A1 (en) * | 2015-04-28 | 2016-11-03 | Altera Corporation | Network functions virtualization platforms with function chaining capabilities |
| US20170177396A1 (en) * | 2015-12-22 | 2017-06-22 | Stephen T. Palermo | Methods and apparatus for multi-stage vm virtual network function and virtual service function chain acceleration for nfv and needs-based hardware acceleration |
| WO2019127476A1 (en) * | 2017-12-29 | 2019-07-04 | 深圳前海达闼云端智能科技有限公司 | Virtual system bluetooth communication method and device, virtual system, storage medium, and electronic apparatus |
| CN112445568A (en) * | 2019-09-02 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Data processing method, device and system based on hardware acceleration |
| CN114691286A (en) * | 2020-12-29 | 2022-07-01 | 华为云计算技术有限公司 | Server system, virtual machine creation method and device |
| WO2022143714A1 (en) * | 2020-12-29 | 2022-07-07 | 华为云计算技术有限公司 | Server system, and virtual machine creation method and apparatus |
| CN113312142A (en) * | 2021-02-26 | 2021-08-27 | 阿里巴巴集团控股有限公司 | Virtualization processing system, method, device and equipment |
| CN113821310A (en) * | 2021-11-19 | 2021-12-21 | 阿里云计算有限公司 | Data processing method, programmable network card device, physical server and storage medium |
| CN114465899A (en) * | 2022-02-09 | 2022-05-10 | 浪潮云信息技术股份公司 | Network acceleration method, system and device under complex cloud computing environment |
| CN114553635A (en) * | 2022-02-18 | 2022-05-27 | 珠海星云智联科技有限公司 | Data processing method, data interaction method and product in DPU network equipment |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116257276A (en) * | 2023-05-09 | 2023-06-13 | 珠海星云智联科技有限公司 | Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration |
| CN116303154A (en) * | 2023-05-19 | 2023-06-23 | 珠海星云智联科技有限公司 | Base address register resource allocation method and medium for data processing unit |
| CN116303154B (en) * | 2023-05-19 | 2023-08-22 | 珠海星云智联科技有限公司 | Base address register resource allocation method and medium for data processing unit |
| WO2025002287A1 (en) * | 2023-06-27 | 2025-01-02 | 杭州阿里云飞天信息技术有限公司 | Method and system for providing computing resource, and electronic device and storage medium |
| CN116795605B (en) * | 2023-08-23 | 2023-12-12 | 珠海星云智联科技有限公司 | Automatic recovery system and method for abnormality of peripheral device interconnection extension equipment |
| CN116795605A (en) * | 2023-08-23 | 2023-09-22 | 珠海星云智联科技有限公司 | Automatic recovery system and method for abnormality of peripheral device interconnection extension equipment |
| CN116800616A (en) * | 2023-08-25 | 2023-09-22 | 珠海星云智联科技有限公司 | Management method and related device of virtualized network equipment |
| CN116800616B (en) * | 2023-08-25 | 2023-11-03 | 珠海星云智联科技有限公司 | Management method and related device of virtualized network equipment |
| CN117395100A (en) * | 2023-10-25 | 2024-01-12 | 中科驭数(北京)科技有限公司 | Network function virtualization gateway realization method, device, equipment and medium |
| CN117395100B (en) * | 2023-10-25 | 2024-08-02 | 中科驭数(北京)科技有限公司 | Network function virtualization gateway realization method, device, equipment and medium |
| CN118363717A (en) * | 2024-06-19 | 2024-07-19 | 北京壁仞科技开发有限公司 | Data processing method, device, medium and program product |
| CN118426913A (en) * | 2024-07-04 | 2024-08-02 | 珠海星云智联科技有限公司 | Method, computer device and medium for vDPA memory mapping |
| CN118426913B (en) * | 2024-07-04 | 2024-09-24 | 珠海星云智联科技有限公司 | Method, computer device and medium for vDPA memory mapping |
| CN119127407A (en) * | 2024-11-12 | 2024-12-13 | 珠海星云智联科技有限公司 | Method, computer device and medium for virtualization hardware acceleration |
| CN119127407B (en) * | 2024-11-12 | 2025-03-11 | 珠海星云智联科技有限公司 | Method, computer device and medium for virtualization hardware acceleration |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115858102B (en) | 2023-05-16 |
Similar Documents
| Publication | Title |
|---|---|
| CN115858102B (en) | Method for deploying virtual machine supporting virtualized hardware acceleration |
| US11526374B2 | Dedicated distribution of computing resources in virtualized environments |
| US10778521B2 | Reconfiguring a server including a reconfigurable adapter device |
| US20230115114A1 | Hardware assisted virtual switch |
| US10360061B2 | Systems and methods for loading a virtual machine monitor during a boot process |
| US9031081B2 | Method and system for switching in a virtualized platform |
| CN115858103B (en) | Method, device and medium for virtual machine live migration on an OpenStack architecture |
| CN106537336B (en) | Cloud firmware |
| CN117519908B (en) | Virtual machine live migration method, computer device and medium |
| CN115857995B (en) | Method, medium and computing device for upgrading interconnection device |
| US11003618B1 | Out-of-band interconnect control and isolation |
| CN113127144A (en) | Processing method, processing device and storage medium |
| CN118819873B (en) | Virtual function management method, computer device, medium and system |
| CN118426913B (en) | Method, computer device and medium for vDPA memory mapping |
| CN118331687B (en) | User-mode paravirtualized data path acceleration method, device, cluster and medium |
| WO2025007852A1 (en) | Virtual-instance deployment method and system based on cloud service |
| CN117675583A (en) | Communication method, communication device and communication system |
| US20260030046A1 | Nested virtualization with enhanced network connectivity and hardware offloading |
| CN118331747B (en) | Forwarding method for data processor, computer device and medium |
| CN118394453A (en) | User-mode paravirtualized device creation and deletion system, device and cluster |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |