Disclosure of Invention
The disclosure provides a method for controlling virtual light in a virtual studio, an apparatus for controlling virtual light in a virtual studio, a computer-readable storage medium, and an electronic device, so as to solve, at least to some extent, the problem of controlling virtual light in a virtual studio.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for controlling virtual light in a virtual studio, the method comprising: acquiring a real scene picture, shot by a camera, containing a target object; extracting a matting picture of the target object from the real scene picture; determining the position of the target object in the virtual studio based on the matting picture; and determining the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate the beam position.
In an exemplary embodiment of the present disclosure, the acquiring a real scene picture, shot by a camera, containing a target object includes: receiving a real scene picture of the target object transmitted by at least one camera.
In an exemplary embodiment of the present disclosure, the extracting a matting picture of the target object from the real scene picture includes: preprocessing the real scene picture to determine a pixel area of the target object; and extracting feature information of the target object based on the pixel area of the target object, and extracting the matting picture of the target object through the feature information.
In an exemplary embodiment of the present disclosure, the preprocessing the real scene picture to determine a pixel area of the target object includes: performing binarization processing on the real scene picture to obtain a binary picture of the real scene picture; and determining the pixel distribution of the target object in the binary picture, and removing interference points in the binary picture to obtain the pixel area of the target object.
In an exemplary embodiment of the present disclosure, the extracting feature information of the target object based on the pixel area of the target object, and extracting the matting picture of the target object through the feature information includes: generating a pixel matrix of the pixel area; performing dimension reduction processing on the pixel matrix by using a convolutional neural network model to generate the feature information of the target object; and extracting the matting picture of the target object through a pre-trained artificial neural network model and the feature information.
In an exemplary embodiment of the present disclosure, when the dimension reduction processing is performed on the pixel matrix by using the convolutional neural network model, the method further includes: training the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the present disclosure, the determining the position of the target object in the virtual studio based on the matting picture includes: taking the gray value of each pixel in the matting picture as the pixel weight of that pixel; calculating the coordinates of the center of gravity of the target object in the real scene picture according to the pixel weights; and determining the position of the center of gravity of the target object in the virtual studio according to the difference between the coordinates of the center of gravity of the target object in the real scene picture and the center coordinates of the real scene picture.
In an exemplary embodiment of the present disclosure, the determining the beam position of the virtual light according to the position of the target object in the virtual studio includes: adding a preset offset value to the position of the center of gravity of the target object in the virtual studio to determine the light tracking point position of the virtual light; and determining the light source position of the virtual light, and determining the beam direction of the virtual light according to the light source position and the light tracking point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the method further comprises: obtaining a virtual scene picture of the virtual studio; and synthesizing the matting picture of the target object onto the virtual scene picture based on the position of the target object in the virtual studio to generate a target picture of the virtual studio.
According to a second aspect of the present disclosure, there is provided an apparatus for controlling virtual light in a virtual studio, the apparatus comprising: an acquisition module, configured to acquire a real scene picture, shot by a camera, containing a target object; an extraction module, configured to extract a matting picture of the target object from the real scene picture; a determining module, configured to determine the position of the target object in the virtual studio based on the matting picture; and a control module, configured to determine the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate the beam position.
In one exemplary embodiment of the present disclosure, the acquisition module is configured to receive a real scene picture of the target object transmitted by at least one camera.
In an exemplary embodiment of the disclosure, the extraction module is configured to preprocess the real scene picture to determine a pixel area of the target object, extract feature information of the target object based on the pixel area of the target object, and extract the matting picture of the target object through the feature information.
In an exemplary embodiment of the disclosure, the extraction module is further configured to perform binarization processing on the real scene picture to obtain a binary picture of the real scene picture, determine the pixel distribution of the target object in the binary picture, and remove interference points in the binary picture to obtain the pixel area of the target object.
In an exemplary embodiment of the disclosure, the extraction module is further configured to generate a pixel matrix of the pixel area, perform dimension reduction processing on the pixel matrix by using a convolutional neural network model to generate feature information of the target object, and extract the matting picture of the target object through a pre-trained artificial neural network model and the feature information.
In an exemplary embodiment of the disclosure, when the convolutional neural network model is used to perform the dimension reduction processing on the pixel matrix, the extraction module is further configured to train the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the disclosure, the determining module is configured to take the gray value of each pixel in the matting picture as the pixel weight of that pixel, calculate the coordinates of the center of gravity of the target object in the real scene picture according to the pixel weights, and determine the position of the center of gravity of the target object in the virtual studio according to the difference between the coordinates of the center of gravity of the target object in the real scene picture and the center coordinates of the real scene picture.
In an exemplary embodiment of the disclosure, the control module is configured to add a preset offset value to the position of the center of gravity of the target object in the virtual studio to determine the light tracking point position of the virtual light, determine the light source position of the virtual light, and determine the beam direction of the virtual light according to the light source position and the light tracking point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the determining module is further configured to obtain a virtual scene picture of the virtual studio, synthesize the matting picture of the target object onto the virtual scene picture based on the position of the target object in the virtual studio, and generate a target picture of the virtual studio.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the above methods for controlling virtual light in a virtual studio.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the above methods for controlling virtual light in a virtual studio via execution of the executable instructions.
The present disclosure has the following beneficial effects:
According to the method for controlling virtual light in a virtual studio, the apparatus for controlling virtual light in a virtual studio, the computer-readable storage medium, and the electronic device in the present exemplary embodiment, a real scene picture of a target object shot by a camera is acquired, a matting picture of the target object is extracted from the real scene picture, the position of the target object in the virtual studio is determined based on the matting picture, and the beam position of the virtual light is determined according to the position of the target object in the virtual studio, so that the virtual light is controlled to illuminate the beam position. By determining the beam position of the virtual light from the position of the target object in the virtual studio and controlling the virtual light to illuminate that position, the effect of a follow spot can be achieved in the virtual studio, enriching the lighting in a program and meeting various program effect requirements, without requiring an operator to configure a lighting mode for each production, which greatly reduces the operator's workload.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The exemplary embodiments of the present disclosure first provide a method for controlling virtual light in a virtual studio. The method can be applied to an electronic device, so that the electronic device can control the virtual light to illuminate in a corresponding manner in the virtual studio. A virtual studio is a special way of producing television programs: building on traditional chroma-key matting, it uses computer three-dimensional graphics and video synthesis technology to keep the perspective of a three-dimensional virtual scene consistent with the foreground according to the position and parameters of the camera. After chroma-key synthesis, the photographed subject in the foreground is visually immersed in the computer-generated three-dimensional virtual scene, presenting a realistic, highly three-dimensional studio effect.
Fig. 1 shows a flowchart of the method in the present exemplary embodiment, which may include the following steps S110 to S140:
S110: acquiring a real scene picture, shot by a camera, containing a target object.
Here, the target object refers to the subject in the real scene picture, and may be a person, an animal, or the like; the real scene picture refers to a picture of the target object in a real scene. Typically, the real scene picture is a picture of an actor, a host, or the like shot by a camera.
In the present exemplary embodiment, the real scene picture of the target object shot by the camera may be received through a network or a specific data interface, such as a USB (Universal Serial Bus) interface. For example, during shooting, the real scene picture transmitted by the camera may be received in real time over the network. In a real environment, the camera may be located in a separate studio or an outdoor shooting location while the machine processing the real scene picture is located elsewhere, so in some cases an intermediate transmission computer needs to be added to integrate and forward the camera's video signal.
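As an illustration only, the following is a minimal sketch of receiving real scene pictures in Python with OpenCV; the stream address and the buffer limit are assumptions for the example, not part of the disclosure:

```python
import cv2

# Hypothetical network endpoint; a locally attached USB camera would
# instead use a device index such as 0.
STREAM_URL = "rtsp://192.168.1.10/live"

capture = cv2.VideoCapture(STREAM_URL)
frames = []
while capture.isOpened() and len(frames) < 100:
    ok, frame = capture.read()   # one real scene picture, BGR pixel order
    if not ok:
        break
    frames.append(frame)         # buffered for the matting pipeline below
capture.release()
```

With several cameras, one such capture loop would run per camera, and the intermediate transmission computer mentioned above would multiplex the streams.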
To determine the position and posture of the target object in the real environment so that the target object can be presented three-dimensionally in the virtual studio, in an alternative embodiment, step S110 may be implemented by receiving a real scene picture of the target object transmitted by at least one camera. Specifically, when multiple cameras are used, each camera may be placed at a different viewing angle of the target object. Taking three cameras as an example, each may be an ordinary shooting device; referring to Fig. 2, the cameras may be placed directly above, directly in front of, and to the left of the target object to shoot a real scene picture of the target object from each viewing angle. When one or two cameras are used, to facilitate determining the position and posture of the target object in the real environment, the camera may be an image capturing apparatus with a depth-of-field function, so that the camera obtains the real scene picture of the target object by calculating the spatial positional relationship of the target object.
S120: extracting a matting picture of the target object from the real scene picture.
The matting picture refers to a picture containing only the target object; for example, it may be the picture of a person or object located at or near the front of the scene in the real scene picture.
By identifying the target object in the real scene picture, everything other than the target object can be removed from the real scene picture, and the picture in which the target object is located can be extracted.
In practical applications, to facilitate matting out the picture of the target object, the target object, such as an actor or host, can be shot in a green screen environment; when extracting the matting picture of the target object, the background can then be deleted by identifying the color distribution of the green screen. For example, referring to Fig. 3, when an actor or presenter performs in front of a green screen, the matting picture of the target object at each camera view angle can be obtained directly by removing the green screen background.
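A minimal chroma-key sketch in Python with OpenCV is shown below; the HSV bounds are illustrative assumptions and would be tuned to the actual screen material and lighting:

```python
import cv2
import numpy as np

def chroma_key_matte(frame_bgr):
    """Remove a green-screen background, keeping only the target object."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative hue/saturation/value range for studio green.
    lower_green = np.array([35, 60, 60])
    upper_green = np.array([85, 255, 255])
    background = cv2.inRange(hsv, lower_green, upper_green)
    foreground = cv2.bitwise_not(background)           # mask of the target object
    matte = cv2.bitwise_and(frame_bgr, frame_bgr, mask=foreground)
    return matte, foreground
```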
However, a real environment may contain interference points or uneven color distribution, in which case it is difficult to extract the matting picture of the target object by simple color discrimination alone. Thus, in an alternative embodiment, referring to Fig. 4, step S120 may be implemented by the following steps S410 and S420:
and S410, preprocessing the real scene picture to determine a pixel area of the target object.
And S420, extracting characteristic information of the target object based on the pixel region of the target object, and extracting a matting picture of the target object through the characteristic information.
The pixel area of the target object refers to the distribution of the target object's pixels in the real scene picture; the feature information of the target object may include color features, texture features, contour features, and the like of the picture region in which the target object is located.
When the target object cannot be clearly distinguished in the real scene picture, the real scene picture can be preprocessed, for example by deleting the background within a certain range, to determine the approximate extent of the target object and obtain its pixel area. The feature information of the target object is then extracted within that pixel area, and the matting picture of the target object is extracted from the feature information.
In general, the number of pixels in a real scene picture is very large, and extracting the feature information of the target object from all of its pixels would require substantial computing resources. Therefore, to speed up extraction of the matting picture and avoid wasting computing resources, in an alternative embodiment, step S420 may be implemented as follows:
generating a pixel matrix of the pixel area;
performing dimension reduction processing on the pixel matrix by using a convolutional neural network model to generate feature information of the target object;
and extracting the matting picture of the target object through a pre-trained artificial neural network model and the feature information.
Here, the pixel matrix of the pixel area is generated from the pixels of the picture region in which the target object is located; for example, after the real scene picture is preprocessed, a pixel matrix of 0s and 1s can be generated from that region. The pixel matrix is then reduced in dimension by a convolutional neural network model: the convolutional layers process the pixel matrix and extract local features of the region in which the target object is located, the pooling layers reduce the order of magnitude of the parameters to complete the dimension reduction, and a fully connected layer outputs the result, yielding the feature information of the target object. Finally, the matting picture of the target object is extracted through a pre-trained artificial neural network model and the feature information.
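As a sketch of this convolution-pooling-fully-connected pipeline, the following PyTorch module mirrors the structure described above; the layer sizes, the 224x224 input resolution, and the 128-dimensional output are illustrative assumptions, not parameters given by the disclosure:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Convolution extracts local features, pooling reduces the parameter
    order of magnitude, and a fully connected layer outputs the feature
    information, as described above."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 224 -> 112: dimension reduction
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 112 -> 56
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, feature_dim),
        )

    def forward(self, pixel_matrix: torch.Tensor) -> torch.Tensor:
        # pixel_matrix: (batch, 1, 224, 224) matrix of 0s and 1s
        return self.head(self.backbone(pixel_matrix))
```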
Performing dimension reduction on the pixel matrix with a convolutional neural network model turns a picture carrying a large amount of data into a much smaller representation while preserving the features of the region in which the target object is located, so that the feature information of the target object can be extracted in a manner similar to human vision.
Further, to improve the accuracy of the feature information generated when the pixel matrix is reduced in dimension by the convolutional neural network model, in an alternative embodiment, the model may be trained with a back propagation algorithm. For example, the parameters of the convolutional neural network model may be updated after each training step using a gradient descent algorithm or the like, so that the error in the feature information produced by the model is minimized.
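A minimal training step for the FeatureExtractor sketched above might look as follows; the loss function and the source of the target features are assumptions for illustration, since the disclosure does not specify them:

```python
import torch

model = FeatureExtractor()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient descent
loss_fn = torch.nn.MSELoss()  # assumed loss; the disclosure names none

def train_step(pixel_matrix: torch.Tensor, target_features: torch.Tensor) -> float:
    """One back-propagation update of the convolutional neural network."""
    optimizer.zero_grad()
    predicted = model(pixel_matrix)
    loss = loss_fn(predicted, target_features)
    loss.backward()    # back propagation of the error
    optimizer.step()   # parameter update that reduces the error
    return loss.item()
```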
Further, when preprocessing the real scene picture in step S410, to facilitate determining the pixel area of the target object, in an alternative embodiment the real scene picture may be preprocessed as follows:
performing binarization processing on the real scene picture to obtain a binary picture of the real scene picture;
and determining the pixel distribution of the target object in the binary picture, and removing interference points in the binary picture to obtain the pixel area of the target object.
Binarizing the real scene picture gives it a clear black-and-white appearance. Compared with the real scene picture, the binary picture displays the key contour of the target object more prominently and contains less data, which makes it easier to compute with. Specifically, when binarizing the real scene picture, it may first be converted into a grayscale picture; for example, the gray value of each pixel may be calculated by the following formula (1):
Gray=0.299R+0.587G+0.114B (1)
where R, G, and B respectively denote the pixel values of the corresponding pixel on the red, green, and blue color channels.
After the grayscale picture of the real scene picture is obtained, every pixel value other than 0 can be set to 255 according to the gray values, yielding the binary picture of the real scene picture. The pixel distribution of the target object is then determined from the binary picture, and interference points in that distribution are removed to obtain the pixel area of the target object.
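These two steps can be sketched in a few lines of Python with NumPy; this is a direct transcription of formula (1) and the zero/255 rule above, not the full interference-point removal:

```python
import numpy as np

def binarize(frame_bgr: np.ndarray):
    """Convert a BGR picture to grayscale via formula (1), then map every
    non-zero gray value to 255 to obtain the binary picture."""
    b = frame_bgr[..., 0].astype(np.float64)
    g = frame_bgr[..., 1].astype(np.float64)
    r = frame_bgr[..., 2].astype(np.float64)
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)  # formula (1)
    binary = np.where(gray > 0, 255, 0).astype(np.uint8)
    return gray, binary
```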
In addition, to reduce the amount of computation, after the binary picture of the real scene picture is obtained, the picture outside the region in which the target object is located may be cropped away. For example, after obtaining the grayscale picture shown in Fig. 5, everything outside the region of the target object 510 may be deleted, leaving a grayscale picture containing only the target object 510.
Fig. 6 shows a flowchart of extracting the matting picture in the present exemplary embodiment. As shown in the figure, the real scene picture is converted into a grayscale picture, and the grayscale picture is converted into a binary picture; the border regions far from the region in which the target object is located are cropped away, so that the region of the target object is determined. The pixel distribution of the target object is then determined, interference points in that distribution are removed, and the pixel area of the target object is obtained. Based on the pixel area, the feature information of the target object is extracted and matched by a pre-trained artificial neural network model, which determines the extent of the target object in the picture; the background environment and the like are then eliminated, and the matting picture of the target object is extracted.
For the feature matching performed by the artificial neural network model, the model needs to be pre-trained on a previously established database of object features, so that it can accurately determine whether a target object is present in a picture, the region in which the target object is located, and so on.
S130: determining the position of the target object in the virtual studio based on the matting picture.
To give the picture rich special effects, such as adding a monster effect in a film or presenting a storm effect in a weather forecast program, the target object needs to be fused with the picture of the virtual studio. During this fusion, the position of the target object in the virtual studio must be determined, so that the target object and the virtual studio scene present a convincingly realistic visual effect.
To facilitate determining the position of the target object in the virtual studio, in an alternative embodiment, step S130 may be implemented as follows:
taking the gray value of each pixel in the matting picture as the pixel weight of that pixel;
calculating the coordinates of the center of gravity of the target object in the real scene picture according to the pixel weights;
and determining the position of the center of gravity of the target object in the virtual studio according to the difference between the coordinates of the center of gravity of the target object in the real scene picture and the center coordinates of the real scene picture.
Specifically, after the matting picture of the target object is extracted, the gray value of each pixel in the matting picture can be used as that pixel's weight, and the coordinates of the center of gravity of the target object in the real scene picture can be calculated according to formulas (2) and (3):

X = (Σ_n w_n·x_n) / W (2)

Y = (Σ_n w'_n·y_n) / W (3)

where X and Y respectively denote the pixel coordinates of the center of gravity of the target object in the x direction and the y direction of the real scene picture; w_n denotes the weights of the pixels in the n-th column in the x direction, and w'_n denotes the weights of the pixels in the n-th row in the y direction (for example, w_1 denotes the weights of the pixels in the 1st column in the x direction); x_n denotes the pixel coordinates of the pixels in the n-th column in the x direction, and y_n denotes the pixel coordinates of the pixels in the n-th row in the y direction (for example, x_1 denotes the pixel coordinates of the pixels in the 1st column); and W denotes the sum of the weights of all pixels in the real scene picture. In the calculation, w_n, w'_n and x_n, y_n can each be expressed as vectors, and the values of w_n·x_n and w'_n·y_n computed as vector dot products, so as to obtain the values of X and Y.
In addition, the gray values of the matting picture may be determined either by converting the matting picture into a grayscale picture, or by converting the pixel values on each color channel, i.e. the red, green, and blue channels, into gray values; for example, the gray value of each pixel in the matting picture may be calculated according to formula (1).
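A compact NumPy sketch of formulas (2) and (3) follows, treating every pixel's gray value as its weight; it sums per pixel rather than per row or column, which is numerically equivalent:

```python
import numpy as np

def center_of_gravity(matte_gray: np.ndarray):
    """Weighted centroid of the matting picture per formulas (2) and (3)."""
    weights = matte_gray.astype(np.float64)
    W = weights.sum()                      # total weight of all pixels
    if W == 0:
        raise ValueError("empty matte: no target object pixels")
    ys, xs = np.indices(weights.shape)     # per-pixel row/column coordinates
    X = (weights * xs).sum() / W           # formula (2)
    Y = (weights * ys).sum() / W           # formula (3)
    return X, Y
```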
As shown in Fig. 7, the center coordinates of the real scene picture are Center(x, y), and the center of gravity of the target object 510 in the real scene picture is G(x, y), located at the waist of the target object. In the present exemplary embodiment, when multiple cameras are used, the position of the center of gravity at each shooting angle can be determined from the real scene picture shot by each camera and input into the virtual studio, so that the center of gravity of the target object in the virtual studio is obtained synchronously.
S140: determining the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate the beam position.
The beam position of the virtual light refers to the position in the virtual studio that the virtual light illuminates.
To let the virtual light act as a follow spot in the virtual studio, after the position of the target object in the virtual studio is determined, the beam position of the virtual light can be determined and the virtual light made to illuminate that position.
Depending on the type of target object and its location in the virtual studio, a light tracking point for the target object can be determined so as to highlight the target object in the virtual studio. Specifically, in an alternative embodiment, step S140 may be implemented as follows:
adding a preset offset value to the position of the center of gravity of the target object in the virtual studio to determine the light tracking point position of the virtual light;
and determining the light source position of the virtual light, and determining the beam direction of the virtual light according to the light source position and the light tracking point position of the virtual light.
Here, the light tracking point position is the position within the extent of the target object that the virtual light illuminates; the preset offset value is the distance between the light tracking point and the center of gravity of the target object, and can generally be set according to the type of the target object.
Adding a preset offset value to the coordinates of the center of gravity of the target object in the virtual studio yields the light tracking point position of the virtual light; the direction, distance, and so on from the light source to the light tracking point are then determined from the position of the virtual light source in the virtual studio, giving the beam direction of the virtual light. Specifically, a vector representing the beam direction can be calculated from the coordinates of the light source position and the light tracking point position, and the virtual light is controlled to illuminate along that vector, locking the follow-spot position of the virtual light within the extent of the target object as shown in Fig. 8.
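The vector computation reduces to a subtraction and a normalization; the sketch below assumes 3-D studio coordinates, and the sample values are purely illustrative:

```python
import numpy as np

def beam_direction(light_source, gravity_center, preset_offset):
    """Unit vector from the virtual light source to the light tracking point
    (the center of gravity plus a preset offset)."""
    tracking_point = np.asarray(gravity_center, float) + np.asarray(preset_offset, float)
    direction = tracking_point - np.asarray(light_source, float)
    return direction / np.linalg.norm(direction)

# Illustrative coordinates: light above the stage, offset raising the
# tracking point from the waist toward the chest.
d = beam_direction([0.0, 5.0, 3.0], [2.0, 0.0, 1.0], [0.0, 0.5, 0.0])
```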
In some cases, when the target object occupies only a small area in the virtual studio, the position of the target object itself may be taken directly as the beam position, and the virtual light directed at it.
Furthermore, to fuse the target object with the virtual studio, in an alternative embodiment, the matting picture of the target object may be fused into the virtual scene picture as follows:
obtaining a virtual scene picture of the virtual studio;
and fusing the matting picture of the target object into the virtual scene picture based on the position of the target object in the virtual studio to generate a target picture of the virtual studio.
Here, the virtual scene picture can be generated in advance with three-dimensional modeling software; for example, in live broadcast applications, the virtual scene picture can be built in a live broadcast engine.
Fusing the matting picture of the target object into the virtual scene picture at the position of the target object in the virtual studio makes the target object appear visually immersed in the virtual studio.
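A bare-bones compositing sketch with NumPy is given below; the placement anchor and hard-edged mask are simplifying assumptions, since a production pipeline would also handle scaling, soft matte edges, and color matching:

```python
import numpy as np

def composite(virtual_scene: np.ndarray, matte_bgr: np.ndarray,
              matte_mask: np.ndarray, top_left: tuple) -> np.ndarray:
    """Paste the matting picture into the virtual scene picture at the
    position derived from the target object's studio coordinates."""
    out = virtual_scene.copy()
    h, w = matte_bgr.shape[:2]
    r, c = top_left                      # (row, col) placement anchor
    region = out[r:r + h, c:c + w]
    keep = matte_mask[..., None] > 0     # True where the target object is
    out[r:r + h, c:c + w] = np.where(keep, matte_bgr, region)
    return out
```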
In summary, according to the method for controlling virtual light in a virtual studio in the present exemplary embodiment, a real scene picture of a target object shot by a camera is acquired, a matting picture of the target object is extracted from the real scene picture, the position of the target object in the virtual studio is determined based on the matting picture, and the beam position of the virtual light is determined according to the position of the target object in the virtual studio, so that the virtual light is controlled to illuminate the beam position. By determining the beam position of the virtual light from the position of the target object in the virtual studio and controlling the virtual light to illuminate that position, the effect of a follow spot can be achieved in the virtual studio, enriching the lighting in a program and meeting various program effect requirements, without requiring an operator to configure a lighting mode for each production, which greatly reduces the operator's workload.
The present exemplary embodiment further provides an apparatus for controlling virtual light in a virtual studio. Referring to Fig. 9, the virtual light control apparatus 900 includes: an acquisition module 910, which may be configured to acquire a real scene picture, shot by the camera, containing a target object; an extraction module 920, which may be configured to extract a matting picture of the target object from the real scene picture; a determining module 930, configured to determine the position of the target object in the virtual studio based on the matting picture; and a control module 940, which may be configured to determine the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate the beam position.
In one exemplary embodiment of the present disclosure, the acquisition module 910 may be configured to receive a real scene picture of a target object transmitted by at least one camera.
In an exemplary embodiment of the present disclosure, the extraction module 920 may be configured to preprocess the real scene picture to determine a pixel area of the target object, extract feature information of the target object based on the pixel area, and extract the matting picture of the target object through the feature information.
In an exemplary embodiment of the present disclosure, the extraction module 920 may be further configured to perform binarization processing on the real scene picture to obtain a binary picture of the real scene picture, determine the pixel distribution of the target object in the binary picture, and remove interference points in the binary picture to obtain the pixel area of the target object.
In an exemplary embodiment of the present disclosure, the extraction module 920 may be further configured to generate a pixel matrix of the pixel area, perform dimension reduction processing on the pixel matrix by using a convolutional neural network model to generate feature information of the target object, and extract the matting picture of the target object through a pre-trained artificial neural network model and the feature information.
In an exemplary embodiment of the present disclosure, when the convolutional neural network model is used to perform the dimension reduction processing on the pixel matrix, the extraction module 920 may be further configured to train the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the present disclosure, the determining module 930 may be configured to take the gray value of each pixel in the matting picture as the pixel weight of that pixel, calculate the coordinates of the center of gravity of the target object in the real scene picture according to the pixel weights, and determine the position of the center of gravity of the target object in the virtual studio according to the difference between the coordinates of the center of gravity of the target object in the real scene picture and the center coordinates of the real scene picture.
In an exemplary embodiment of the present disclosure, the control module 940 may be configured to add a preset offset value to the position of the center of gravity of the target object in the virtual studio to determine the light tracking point position of the virtual light, determine the light source position of the virtual light, and determine the beam direction of the virtual light according to the light source position and the light tracking point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the determining module 930 may be further configured to obtain a virtual scene picture of the virtual studio, synthesize the matting picture of the target object onto the virtual scene picture based on the position of the target object in the virtual studio, and generate a target picture of the virtual studio.
The specific details of each module in the above apparatus have already been described in the method embodiments; for details not disclosed here, refer to the method embodiments, which will therefore not be repeated.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
The program product may take the form of a portable compact disc read-only memory (CD-ROM) and comprises program code and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The exemplary embodiment of the disclosure also provides an electronic device capable of implementing the method. An electronic device 1000 according to such an exemplary embodiment of the present disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 may be embodied in the form of a general purpose computing device. Components of electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, a bus 1030 connecting the various system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the memory unit 1020 stores program code that can be executed by the processing unit 1010, such that the processing unit 1010 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification. For example, the processing unit 1010 may perform the method steps shown in fig. 1, 4, and 6, etc.
The memory unit 1020 may include readable media in the form of volatile memory units such as Random Access Memory (RAM) 1021 and/or cache memory unit 1022, and may further include Read Only Memory (ROM) 1023.
Storage unit 1020 may also include a program/utility 1024 having a set (at least one) of program modules 1025, such program modules 1025 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1030 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1000 can also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1050. Also, electronic device 1000 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1060. As shown, the network adapter 1060 communicates with other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
From the description of the embodiments above, those skilled in the art will readily appreciate that the exemplary embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the exemplary embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the exemplary embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.