Background
Virtualization technology is the basis of cloud computing, and Docker is an open-source application container engine that realizes lightweight virtualization. To realize a strong data isolation mechanism, container technology relies on a complete daemon process; containers fully adopt a sandbox mechanism, and no interface exists between containers.
Disclosure of Invention
The inventor finds that the strong data isolation mechanism of containers means that containers cannot cooperate with each other; when a container has a problem, the applications in the container cannot run, and flexible deployment of services cannot be achieved.
One technical problem to be solved by the present disclosure is to provide a novel virtualization management system that enables flexible deployment of services.
According to some embodiments of the present disclosure, there is provided a virtualization management system, including: a management center, at least one storage node, and a plurality of running nodes. The management center is configured to receive a service request sent by a user for a service in the cloud, allocate a running node for the service of the user from the plurality of running nodes according to the service request, and send a progress query instruction to the storage node. The storage node is configured to query a progress file corresponding to the service of the user according to the progress query instruction and send the progress file to the running node of the service. The running node is configured to, in response to receiving the progress file, load the progress file, construct a virtual environment, and run the service.
In some embodiments, the running node is further configured to intercept input information of the user for the service and send the input information to the corresponding virtual environment, so as to run the service according to the input information.
In some embodiments, in the case where the service includes graphics rendering, the virtual environment invokes the graphics processor in the form of an operating system thread to render the images in the service.
In some embodiments, the running node is further configured to intercept output information generated during the running of the service and send the output information to a client of the user for output.
In some embodiments, the service request includes identity information of the user; the management center is further configured to authenticate the user according to the identity information of the user and, if the authentication is passed, perform the allocation of a running node for the service of the user from the plurality of running nodes according to the service request.
In some embodiments, the management center is further configured to receive a termination request for the service sent by the user and send a termination instruction to the running node of the service; the running node is further configured to terminate the running of the service in response to receiving the termination instruction.
In some embodiments, the running node is further configured to update the progress file of the service, send the progress file to the corresponding storage node for storage, and delete the progress file in the running node.
According to other embodiments of the present disclosure, a virtualization management method is provided, including: the management center receives a service request sent by a user for a service in the cloud, allocates a running node for the service of the user from a plurality of running nodes according to the service request, and sends a progress query instruction to the storage node; the storage node queries a progress file corresponding to the service of the user according to the progress query instruction and sends the progress file to the running node of the service; and the running node, in response to receiving the progress file, loads the progress file, constructs a virtual environment, and runs the service.
In some embodiments, the running node constructing the virtual environment and running the service includes: the running node intercepts input information of the user for the service and sends the input information to the corresponding virtual environment, so as to run the service according to the input information.
In some embodiments, running the service according to the input information includes: in the case where the service includes graphics rendering, the virtual environment invokes the graphics processor in the form of an operating system thread to render the images in the service.
In some embodiments, the method further comprises: the running node intercepts output information generated during the running of the service and sends the output information to a client of the user for output.
In some embodiments, the service request includes identity information of the user; the management center allocating a running node for the service of the user from the plurality of running nodes according to the service request includes: the management center authenticates the user according to the identity information of the user and, if the authentication is passed, performs the allocation of a running node for the service of the user from the plurality of running nodes according to the service request.
In some embodiments, the method further comprises: the management center receives a termination request for the service sent by the user and sends a termination instruction to the running node of the service; and the running node terminates the running of the service in response to receiving the termination instruction.
In some embodiments, the method further comprises: the running node updates the progress file of the service, sends the progress file to the corresponding storage node for storage, and deletes the progress file in the running node.
According to still other embodiments of the present disclosure, a virtualization management system is provided, including: a memory; and a processor coupled to the memory, the processor configured to perform the steps of the virtualization management method according to any of the embodiments described above based on the instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the virtualization management method of any of the preceding embodiments.
The virtualization management system of the present disclosure comprises a management center, at least one storage node, and a plurality of running nodes, adopting a mechanism that separates the storage nodes, the running nodes, and the management center. The management center is responsible for allocating running nodes to users according to their requests, the storage nodes store the progress files of the users, and the running nodes are responsible for constructing virtual environments according to the progress files, so that the service runs and the user's last use state is seamlessly resumed. Different running nodes can construct the virtual environment and provide services for users. Compared with the single-machine mechanism of containers, in which storage, running, and management are all located in one container, the mechanism that separates storage nodes, running nodes, and the management center enables flexible deployment of services and meets the service requirements of users in real time.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The present disclosure proposes a novel virtualization management system that implements a new virtualization management mechanism. Some embodiments of the virtualization management system of the present disclosure are described below with reference to fig. 1.
FIG. 1 is a block diagram of some embodiments of a virtualization management system of the present disclosure. As shown in fig. 1, the virtualization management system 10 of this embodiment includes: a management center 110, at least one storage node 120, and a plurality of running nodes 130.
The management center 110 is configured to receive a service request sent by a user for a service in the cloud, allocate a running node 130 for the service of the user from the plurality of running nodes 130 according to the service request, and send a progress query instruction to the storage node 120.
An interface is provided between the management center 110 and the storage node 120 for information interaction, and an interface is provided between the management center 110 and the running node 130 for information interaction.
In some embodiments, the management center 110 selects the running node of the service according to at least one of the distance of the running node 130 from the user, the network quality, and the support capability for the service of the user.
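The disclosure does not fix a particular selection policy. The following is a minimal sketch, assuming a hypothetical weighted score over the three criteria named above (distance, network quality, service support); the field names and weights are illustrative assumptions, not part of the disclosure.

```cpp
#include <string>
#include <vector>

// Hypothetical per-node metrics; names and weights are illustrative only.
struct RunningNodeInfo {
    std::string id;
    double distance_km;       // distance from the user
    double network_quality;   // 0.0 (poor) .. 1.0 (excellent)
    bool   supports_service;  // can this node host the requested service?
};

// Pick the node with the best score among the nodes that support the service.
const RunningNodeInfo* selectRunningNode(const std::vector<RunningNodeInfo>& nodes) {
    const RunningNodeInfo* best = nullptr;
    double bestScore = -1.0;
    for (const auto& n : nodes) {
        if (!n.supports_service) continue;                          // hard requirement
        double score = n.network_quality - 0.001 * n.distance_km;   // toy weighting
        if (score > bestScore) { bestScore = score; best = &n; }
    }
    return best;  // nullptr if no node supports the service
}
```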
The storage node 120 is configured to query the progress file corresponding to the service of the user according to the progress query instruction and send the progress file to the running node 130 of the service.
In some embodiments, in the case that there are a plurality of storage nodes 120 and more than one of them stores the progress file of the service of the user, the management center 110 may select a storage node 120 according to at least one of the distance between the storage node 120 and the user and the network quality, and send the progress query instruction to the selected storage node 120.
The storage node 120 may find the progress file corresponding to the service of the user according to the two dimensions of the identification information of the user and the identification information of the service.
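As a minimal sketch of this two-dimensional lookup, the storage node can key progress files on the (user ID, service ID) pair; the container type and method names below are assumptions for illustration, not part of the disclosure.

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>

// A progress file is represented here simply as its serialized contents.
using ProgressFile = std::string;

// Progress files indexed by the (user ID, service ID) pair.
class ProgressStore {
public:
    void save(const std::string& userId, const std::string& serviceId, ProgressFile file) {
        files_[{userId, serviceId}] = std::move(file);
    }

    // Returns the progress file if present; an empty optional means first use of the service.
    std::optional<ProgressFile> query(const std::string& userId, const std::string& serviceId) const {
        auto it = files_.find({userId, serviceId});
        if (it == files_.end()) return std::nullopt;
        return it->second;
    }

    void erase(const std::string& userId, const std::string& serviceId) {
        files_.erase({userId, serviceId});
    }

private:
    std::map<std::pair<std::string, std::string>, ProgressFile> files_;
};
```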
The running node 130 is configured to, in response to receiving the progress file, load the progress file, construct a virtual environment, and run the service.
An interface is provided between the storage node 120 and the running node 130 for information interaction. After receiving the progress file and the running instruction of the management center 110, the running node 130 loads the progress file and constructs the virtual environment. If there is no progress file, the running node 130 directly constructs the virtual environment and runs the service in response to the running instruction of the management center 110. The absence of a progress file indicates that the user is using the service for the first time.
The virtualization management system in the above embodiment includes a management center, at least one storage node, and a plurality of running nodes, and adopts a mechanism that separates the storage nodes, the running nodes, and the management center. The management center is responsible for allocating running nodes to users according to their requests, the storage nodes store the progress files of the users, and the running nodes are responsible for constructing virtual environments according to the progress files, so that the service runs and the user's last use state is seamlessly resumed. Different running nodes can construct the virtual environment and provide services for users. Compared with the single-machine, strong-data-isolation mechanism of containers, in which storage, running, and management are all located in the same container, the mechanism of this embodiment that separates storage nodes, running nodes, and the management center enables flexible deployment of services and meets the service requirements of users in real time.
In addition, the mechanism of the above embodiment that separates the storage nodes, the running nodes, and the management center effectively supports linear expansion, so that running nodes and storage nodes can be added flexibly. Multiple running nodes support high concurrency of services, improving the carrying capacity of the cloud. Compared with the single-machine, strong-data-isolation mechanism of containers, the scheme of this embodiment also reduces the occupation of hardware resources, further improving the carrying capacity of the cloud.
The scheme of the present disclosure is particularly suitable for Internet services that require high-concurrency support and have low data isolation requirements.
Further embodiments of the disclosed virtualization management system are described below in conjunction with fig. 2.
FIG. 2 is a block diagram of further embodiments of the virtualization management system of the present disclosure. As shown in fig. 2, the virtualization management system 10 of this embodiment includes: a management center 110, at least one storage node 120, and a plurality of running nodes 130.
The management center 110 implements management of users, management of running nodes and storage nodes, and the like. For user management, the management center 110 implements, for example, authentication of user service requests and allocation of running nodes, providing an overall management function for the user's use of the virtual environment and running of the corresponding service. In some embodiments, the service request includes identity information of the user. The management center 110 is configured to authenticate the user according to the identity information of the user and, if the authentication is passed, perform the allocation of a running node 130 for the service of the user from the plurality of running nodes 130 according to the service request.
In some embodiments, the management center 110 is further configured to receive a termination request for the service sent by the user and send a termination instruction to the running node of the service. The running node 130 is further configured to terminate the running of the service in response to receiving the termination instruction. Further, the termination request may include identity information of the user, and the management center 110 is configured to authenticate the user according to the identity information of the user and send the termination instruction to the running node 130 of the service if the authentication is passed.
For running-node management, the management center 110 implements functions including, for example, running node monitoring, running node scheduling, and association of users with running nodes. The management center 110 may allocate the running node 130 of the service to the user and send a running instruction or a termination instruction to the running node 130 through the interface.
For storage management, the management center 110 implements functions including, for example, storage node monitoring, storage node scheduling, and association of users with storage nodes. In the case that there are a plurality of storage nodes 120, the management center 110 may monitor the plurality of storage nodes 120 and schedule one storage node 120 among them to query the progress file of the user, or send a progress query instruction to the storage node 120 associated with the user and the service.
The storage node 120 implements the function of storing service progress based on a two-dimensional mapping of users and services, and provides a storage service and progress file scheduling. For the storage service, the storage node 120 stores multiple progress files of multiple virtual environments according to the instructions of the management center. For progress file scheduling, the storage node 120 schedules the different progress files according to the association between users and services and transmits the corresponding progress file to the corresponding running node in real time according to the query instruction of the management center.
The running node 130 completes the construction of the virtual running environment, the import of the progress file, the takeover of input/output, and the like according to the instructions of the management center. For progress file import, the running node 130 imports the progress file corresponding to the service of the user into the corresponding virtual running environment, so that the user's last use state is seamlessly resumed. For input takeover, the running node 130 effectively intercepts the user's input operations for the service, including keyboard, mouse, and gamepad instructions, shields the single-input limitation of the running node on external input, and effectively forwards the input to the corresponding virtual environment. For output takeover, the running node 130 intercepts output such as images and audio generated during the running of the user's service, reducing software and hardware consumption, and supports remote output based on video streams or other means.
In some embodiments, the running node 130 is configured to intercept input information of the user for the service and send the input information to the corresponding virtual environment, so as to run the service according to the input information. In some embodiments, the running node 130 is further configured to intercept output information generated during the running of the service and send the output information to the client of the user for output. The running node may implement the takeover of input/output using operating-system-level API Hook (application programming interface hooking) technology. API Hook technology is applicable to a variety of operating systems; compared with the limited operating system applicability of container technology, the scheme of the present disclosure therefore has wider applicability.
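API Hook takeover can be realized in several ways; the following is a minimal Windows sketch, assuming Microsoft Detours as the hooking library and using the standard Sleep function as a stand-in for the graphics, audio, and input APIs that the running node would actually take over. It illustrates the technique only and is not the implementation of the disclosure.

```cpp
#include <windows.h>
#include <detours.h>
#include <cstdio>

// Pointer to the real Sleep, captured before the detour is installed.
static VOID (WINAPI *TrueSleep)(DWORD) = Sleep;

// Replacement function: intercept the call, then forward it to the real API.
// A running node would similarly intercept DirectX/OpenGL output or
// DirectInput/XInput/RawInput input and route it to the virtual environment.
static VOID WINAPI HookedSleep(DWORD ms) {
    std::printf("intercepted Sleep(%lu)\n", ms);
    TrueSleep(ms);
}

int main() {
    // Install the hook.
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)TrueSleep, HookedSleep);
    DetourTransactionCommit();

    Sleep(50);  // now routed through HookedSleep

    // Remove the hook.
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourDetach(&(PVOID&)TrueSleep, HookedSleep);
    DetourTransactionCommit();
    return 0;
}
```

On Linux or Android, comparable interposition can typically be achieved with mechanisms such as LD_PRELOAD rather than Detours, which is what lets the takeover approach span mainstream operating systems.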
In some embodiments, the running node 130 is further configured to update the progress file of the service, send the progress file to the corresponding storage node 120 for storage, and delete the progress file on the running node 130, so as to save storage space and avoid data confusion when other users use the same service.
The running node 130 serves as the main service execution body in the virtualization management system, and its structure is shown in fig. 3. The running node 130 may be configured with a plurality of virtual environments, each mainly including a hardware part and a software part. The hardware part comprises hardware execution environments such as a CPU and a GPU (graphics processing unit). The software part comprises an interception and capture function, a running environment, external interfaces, and the like. The interception and capture function intercepts the user's input information and the corresponding output information, i.e., input/output takeover. The output takeover includes API Hook takeover of graphics and audio output such as DirectX and OpenGL (Open Graphics Library). The input takeover includes API Hook takeover of gamepad input such as DirectInput, XInput, and RawInput, as well as USB input, keyboard input, and the like.
In the running environment, the corresponding application runs in the multithreading mode of the operating system. In some embodiments, in the case where the service includes graphics rendering, the virtual environment in the running node 130 invokes the graphics processor in the form of an operating system thread to render the images in the service. The scheme of the present disclosure supports effective invocation of the GPU and, compared with the deficient GPU-calling capability of the container mechanism, enables rendering and processing of complex images.
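As a minimal sketch of running rendering work as an operating system thread, the snippet below spawns a std::thread whose body stands in for the GPU-backed render loop of one service; the renderFrame function is a placeholder assumption, not an API defined by the disclosure.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Placeholder for the GPU-backed rendering of one frame of the service.
// In a real running node this would issue DirectX/OpenGL calls.
static void renderFrame(int frameIndex) {
    std::printf("rendering frame %d\n", frameIndex);
}

int main() {
    std::atomic<bool> running{true};

    // The virtual environment drives rendering from an ordinary OS thread,
    // so no extra virtualization layer sits between the service and the GPU.
    std::thread renderThread([&running] {
        for (int frame = 0; running.load(); ++frame) {
            renderFrame(frame);
            std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 fps pacing
        }
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running = false;        // termination instruction arrived
    renderThread.join();    // wait for the render loop to exit
    return 0;
}
```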
The external interfaces of the running node 130 include a running interface and a storage interface. The running interface docks with the management center to start the virtual running environment on demand. Through the storage interface, when the service is started, the progress file is imported from the storage node into the running node according to the instruction of the management center; when the service is terminated, the service progress of the user is updated and saved, the corresponding progress file is sent to the storage node, and the local copy is deleted.
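A minimal sketch of the two external interfaces as abstract classes is given below; the class and method names, and the in-memory storage stub used to make the sketch executable, are illustrative assumptions rather than interfaces defined by the disclosure.

```cpp
#include <iostream>
#include <map>
#include <string>

// Storage interface of the running node: import progress on start, save it on termination.
class StorageInterface {
public:
    virtual ~StorageInterface() = default;
    virtual std::string importProgress(const std::string& user, const std::string& service) = 0;
    virtual void saveProgress(const std::string& user, const std::string& service,
                              const std::string& progress) = 0;
};

// Running interface of the running node: invoked by the management center.
class RunInterface {
public:
    virtual ~RunInterface() = default;
    virtual void start(const std::string& user, const std::string& service) = 0;
    virtual void terminate(const std::string& user, const std::string& service) = 0;
};

// Toy in-memory storage node, used only so the sketch can run.
class InMemoryStorage : public StorageInterface {
public:
    std::string importProgress(const std::string& u, const std::string& s) override {
        return files_[u + "/" + s];
    }
    void saveProgress(const std::string& u, const std::string& s, const std::string& p) override {
        files_[u + "/" + s] = p;
    }
private:
    std::map<std::string, std::string> files_;
};

// The running node ties the two interfaces together.
class RunningNode : public RunInterface {
public:
    explicit RunningNode(StorageInterface& storage) : storage_(storage) {}
    void start(const std::string& u, const std::string& s) override {
        std::string progress = storage_.importProgress(u, s);   // empty on first use
        std::cout << "start " << s << ", progress: '" << progress << "'\n";
    }
    void terminate(const std::string& u, const std::string& s) override {
        storage_.saveProgress(u, s, "updated-progress");        // persist before local cleanup
        std::cout << "terminate " << s << ", local progress deleted\n";
    }
private:
    StorageInterface& storage_;
};

int main() {
    InMemoryStorage storage;
    RunningNode node(storage);
    node.start("alice", "game");      // first use: empty progress
    node.terminate("alice", "game");  // progress saved back to the storage node
    node.start("alice", "game");      // second use: progress imported
    return 0;
}
```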
The virtualization management system of the present disclosure is suited to Internet cloud services with high concurrency and low data isolation requirements, and addresses the problem that container technology either cannot run such services or degrades cloud performance while running them. By adopting a mechanism that separates the storage nodes, the running nodes, and the management center, flexible service deployment can be satisfied, running and storage capacity can be expanded effectively according to the service load, the overall carrying capacity is increased, and the per-concurrent-user cost is effectively reduced. Moreover, the virtualization management system of the present disclosure is simple in structure and easy to implement.
The present disclosure constructs a complete and simple cloud virtualization mechanism adaptive to service requirements, addressing both virtual environment management and the operation of individual virtual environments. It resolves the limitations on operating system and GPU (graphics processing unit) computing support under the Docker mechanism, as well as the locality limitation of the native API hook mechanism, achieves relative isolation of environments, and effectively reduces resource occupation.
Because the scheme is built on the API Hook mechanism, it has the broadest operating system support, including mainstream systems such as Linux, Windows, and Android in their various versions. Because the virtual environment runs as system threads, no additional software and hardware resources of the cloud server need to be consumed, so the carrying capacity for applications is effectively improved. Based on the calling mechanism at the bottom layer of the operating system, the limitation of the Docker mechanism on GPU invocation can be effectively overcome and the GPU can be called effectively.
The present disclosure also provides a virtualization management method, some embodiments of which are described below in conjunction with fig. 4.
Fig. 4 is a flow chart of some embodiments of the disclosed method. As shown in fig. 4, the method of this embodiment includes: steps S402 to S406.
In step S402, the management center receives a service request sent by a user for a service in the cloud, allocates a running node for the service of the user from a plurality of running nodes according to the service request, and sends a progress query instruction to the storage node.
The user can remotely submit a request to use the relevant cloud service through, for example, a client or a web page of the service.
In some embodiments, the service request includes identity information of the user; the management center authenticates the user according to the identity information of the user and, if the authentication is passed, performs the allocation of a running node for the service of the user from the plurality of running nodes according to the service request.
In step S404, the storage node queries the progress file corresponding to the service of the user according to the progress query instruction and sends the progress file to the running node of the service.
In step S406, the running node loads the progress file in response to receiving the progress file, constructs a virtual environment, and runs the service.
In some embodiments, the running node intercepts input information of the user for the service and sends the input information to the corresponding virtual environment, so as to run the service according to the input information. Further, in the case where the service includes graphics rendering, the virtual environment invokes the graphics processor in the form of an operating system thread to render the images in the service.
In some embodiments, the running node intercepts output information generated during the running of the service and sends the output information to the client of the user for output.
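As a compact sketch of steps S402 to S406 end to end, the snippet below drives hypothetical ManagementCenter, StorageNode, and RunningNode objects through the allocate → query → load-and-run sequence; all class and method names, and the trivial allocation policy, are illustrative assumptions and not part of the disclosure.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct RunningNode {
    std::string id;
    // S406: load the progress file (possibly empty), build the virtual environment, run the service.
    void loadAndRun(const std::string& service, const std::string& progress) const {
        std::cout << id << ": running " << service
                  << (progress.empty() ? " (first use)" : " from saved progress") << "\n";
    }
};

struct StorageNode {
    std::map<std::pair<std::string, std::string>, std::string> progressFiles;
    // S404: look up the progress file by (user, service) and hand it to the running node.
    std::string query(const std::string& user, const std::string& service) const {
        auto it = progressFiles.find({user, service});
        return it == progressFiles.end() ? std::string{} : it->second;
    }
};

struct ManagementCenter {
    std::vector<RunningNode> runningNodes;
    StorageNode* storageNode = nullptr;
    // S402: receive the request, allocate a running node, send the progress query.
    void handleServiceRequest(const std::string& user, const std::string& service) {
        const RunningNode& node = runningNodes.front();             // trivial allocation policy
        std::string progress = storageNode->query(user, service);   // S404
        node.loadAndRun(service, progress);                         // S406
    }
};

int main() {
    StorageNode storage;
    storage.progressFiles[{"alice", "game"}] = "level-3-checkpoint";
    ManagementCenter center{{{"node-1"}, {"node-2"}}, &storage};
    center.handleServiceRequest("alice", "game");
    return 0;
}
```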
Further embodiments of the disclosed virtualization management methods are described below in conjunction with FIG. 5.
Fig. 5 is a flow chart of further embodiments of the disclosed method. As shown in fig. 5, the method of this embodiment includes: steps S502 to S506.
In step S502, the management center receives a termination request for the service sent by the user and sends a termination instruction to the running node of the service.
In some embodiments, the termination request includes identity information of the user; the management center authenticates the user according to the identity information of the user and sends the termination instruction to the running node of the service if the authentication is passed.
In step S504, the running node terminates the running of the service in response to receiving the termination instruction.
In step S506, the running node updates the progress file of the service, sends the progress file to the corresponding storage node for storage, and deletes the progress file in the running node.
The storage node can complete the storage of the corresponding progress file.
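A minimal sketch of steps S504 to S506 on the running node side is given below, assuming the progress file lives as an ordinary local file that is uploaded to the storage node before the local copy is removed; the uploadToStorageNode function is a placeholder assumption, not an interface defined by the disclosure.

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

// Placeholder for transferring the progress file to the storage node (part of S506).
static void uploadToStorageNode(const fs::path& file) {
    std::cout << "uploading " << file << " to storage node\n";
}

// Steps S504..S506 as seen by the running node after a termination instruction arrives.
static void handleTermination(const fs::path& progressFile) {
    std::cout << "terminating service (S504)\n";

    // S506: update the progress file with the latest service state.
    std::ofstream(progressFile) << "latest-progress-snapshot";

    // S506: persist to the storage node, then delete the local copy.
    uploadToStorageNode(progressFile);
    fs::remove(progressFile);
}

int main() {
    fs::path file = fs::temp_directory_path() / "user_service.progress";
    std::ofstream(file) << "old-progress";  // pretend the service has been running
    handleTermination(file);
    return 0;
}
```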
The management center, running node, and storage node of the virtualization management system in embodiments of the present disclosure may each be implemented by various computing devices or computer systems, as described below in conjunction with figs. 6 and 7.
FIG. 6 is a block diagram of some embodiments of the virtualization management system of the present disclosure. As shown in fig. 6, the virtualization management system 60 of this embodiment includes: a memory 610 and a processor 620 coupled to the memory 610, the processor 620 configured to perform the steps of the virtualization management method in any of the embodiments of the present disclosure based on instructions stored in the memory 610.
Memory 610 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), a database, and other programs.
FIG. 7 is a block diagram of further embodiments of the virtualization management system of the present disclosure. As shown in fig. 7, the virtualization management system 70 of this embodiment includes a memory 710 and a processor 720, which are similar to the memory 610 and the processor 620, respectively. It may also include an input/output interface 730, a network interface 740, a storage interface 750, and the like. These interfaces 730, 740, 750, as well as the memory 710 and the processor 720, may be connected, for example, by a bus 760. The input/output interface 730 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 740 provides a connection interface for various networking devices, for example a database server or a cloud storage server. The storage interface 750 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included in the scope of the present disclosure.