CN114422697B - Virtual shooting method, system and storage medium based on optical capture - Google Patents
- Publication number
- CN114422697B CN114422697B CN202210060188.3A CN202210060188A CN114422697B CN 114422697 B CN114422697 B CN 114422697B CN 202210060188 A CN202210060188 A CN 202210060188A CN 114422697 B CN114422697 B CN 114422697B
- Authority
- CN
- China
- Prior art keywords
- virtual
- face data
- face
- data
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Studio Devices (AREA)
Abstract
The present invention provides a virtual shooting method, system, and storage medium based on optical capture, relating to the technical field of virtual production. First, first position information determined by optically capturing a smart terminal is acquired; virtual shooting is then performed according to second position information generated from the first position information; finally, the picture captured by the virtual camera is pushed to the smart terminal. This solves the technical problem in the prior art that changes to the virtual camera's position and parameters are made through animation curves manually authored by production staff and therefore lack real-time performance. Compared with prior art in which multiple stages are carried out separately and stitched together at the end, the virtual shooting method based on optical capture of the present invention offers the same advantage as traditional shooting of viewing the shot picture in real time: it not only allows the director and cinematographer to see the camera's picture in real time, but also simulates the operation and interaction of a real camera.
Description
Technical Field

The present invention relates to the technical field of virtual production, and in particular to a virtual shooting method, system, and storage medium based on optical capture.

Background

Virtual shooting means that, in film production, all shots required by the director are carried out in a virtual scene inside a computer. All the elements needed for a shot, including the scene, characters, and lighting, are integrated into the computer; the director can then "direct" the characters' performances and actions on the computer according to his own intent and move his camera from any angle.

However, existing virtual shooting methods still have the following problems. On the one hand, because changes to the virtual camera's position and parameters are mostly animation curves manually authored by production staff rather than the result of real-time capture during shooting, they lack real-time performance; many virtual shoots are therefore not shot in real time but are produced in separate stages that are stitched together at the end, so the shot picture cannot be viewed in real time as in traditional shooting. On the other hand, much of the data is produced manually, which severely affects both the progress of the virtual shoot and the accuracy of the simulation.
Summary of the Invention

The purpose of the present invention is to solve the problems mentioned in the background and to propose a virtual shooting method, system, and storage medium based on optical capture.

To achieve the above purpose, the present invention first proposes a virtual shooting method based on optical capture, comprising the following steps: acquiring first position information determined by optically capturing a smart terminal; generating second position information according to the first position information; performing virtual shooting according to the second position information; and pushing the picture captured by the virtual camera to the smart terminal.

Optionally, the method further comprises the following steps: acquiring first face data; generating second face data by performing matching processing on the first face data; and using the second face data as the face target data of a preset character model to generate the facial expressions corresponding to the preset character model.

Optionally, generating the second face data by performing matching processing on the first face data comprises the following steps: taking the first face data in an expressionless state as an initial value and subtracting this initial value from the first face data of every other frame to perform face initialization; and multiplying the face-initialized first face data by a preset coefficient to perform overall expression scaling.

Optionally, the method further comprises the following steps: forming skeleton data according to acquired key-point position data; binding the skeleton data to a skeleton model; and driving the virtual character in real time according to the bound skeleton data.

Optionally, the smart terminal is a tablet computer.

The present invention also proposes a virtual shooting system based on optical capture, comprising: a first position information processing module configured to acquire first position information determined by optically capturing a smart terminal; a second position information processing module configured to generate second position information according to the first position information; a virtual shooting module configured to perform virtual shooting using the second position information as the position of the virtual camera; and a picture push module configured to push the picture captured by the virtual camera to the smart terminal.

Optionally, the system further comprises: a first face data processing module configured to acquire first face data; a second face data processing module configured to generate second face data by performing matching processing on the first face data; and a facial expression generation module configured to use the second face data as the face target data of a preset character model to generate the facial expressions corresponding to the preset character model.

Optionally, the second face data processing module further comprises: a face initialization module configured to take the first face data in an expressionless state as an initial value and subtract this initial value from the first face data of every other frame to perform face initialization; and an overall expression scaling module configured to multiply the face-initialized first face data by a preset coefficient to perform overall expression scaling.

The present invention also proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the above virtual shooting method based on optical capture is implemented.

Beneficial effects of the present invention:

In a virtual shooting method based on optical capture according to an embodiment of the present invention, first position information determined by optically capturing a smart terminal is first acquired; virtual shooting is then performed according to second position information generated from the first position information; finally, the picture captured by the virtual camera is pushed to the smart terminal. This solves the technical problem in the prior art that changes to the virtual camera's position and parameters are made through animation curves manually authored by production staff and therefore lack real-time performance. Compared with prior art in which multiple stages are carried out separately and stitched together at the end, the method offers the same advantage as traditional shooting of viewing the shot picture in real time: it not only allows the director and cinematographer to see the camera's picture in real time, but also simulates the operation and interaction of a real camera.

The features and advantages of the present invention will be described in detail through embodiments with reference to the accompanying drawings.
Brief Description of the Drawings

Fig. 1 is the first schematic flowchart of a virtual shooting method based on optical capture according to an embodiment of the present invention;

Fig. 2 is the second schematic flowchart of a virtual shooting method based on optical capture according to an embodiment of the present invention;

Fig. 3 is the third schematic flowchart of a virtual shooting method based on optical capture according to an embodiment of the present invention;

Fig. 4 is the first structural block diagram of a virtual shooting system based on optical capture according to an embodiment of the present invention;

Fig. 5 is the second structural block diagram of a virtual shooting system based on optical capture according to an embodiment of the present invention;

Fig. 6 is the third structural block diagram of a virtual shooting system based on optical capture according to an embodiment of the present invention.
Detailed Description

To facilitate understanding by those skilled in the art, the present invention is described in further detail below with reference to specific embodiments.

Fig. 1 schematically shows a flowchart of a virtual shooting method based on optical capture according to an embodiment of the present invention.

As shown in Fig. 1, the virtual shooting method based on optical capture comprises steps S10 to S40:

Step S10: acquiring first position information determined by optically capturing a smart terminal;

Step S20: generating second position information according to the first position information;

Step S30: performing virtual shooting according to the second position information;

Step S40: pushing the picture captured by the virtual camera to the smart terminal.

In a virtual shooting method based on optical capture according to an embodiment of the present invention, first position information determined by optically capturing a smart terminal is first acquired; virtual shooting is then performed according to second position information generated from the first position information; finally, the picture captured by the virtual camera is pushed to the smart terminal. This solves the technical problem in the prior art that changes to the virtual camera's position and parameters are made through animation curves manually authored by production staff and therefore lack real-time performance. Compared with prior art in which multiple stages are carried out separately and stitched together at the end, the method offers the same advantage as traditional shooting of viewing the shot picture in real time: it not only allows the director and cinematographer to see the camera's picture in real time, but also simulates the operation and interaction of a real camera.
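The per-frame loop of steps S10 to S40 can be sketched as a single function. This is a minimal illustration only: every class and method name below (FakeTracker, FakeVirtualCamera, FakeTerminal, map_pose) is a hypothetical stand-in, since the patent specifies no API, and the S20 mapping is left as an identity for brevity.

```python
# One frame of the S10-S40 loop, with hypothetical stand-ins for the
# optical capture system, the render engine's virtual camera, and the
# smart terminal. The patent names no concrete API.

class FakeTracker:
    """Stands in for the optical capture system tracking the terminal."""
    def track(self):
        # S10: first position information (terminal pose in the real world)
        return (1.0, 2.0, 0.5)

class FakeVirtualCamera:
    """Stands in for the virtual camera in the render engine."""
    def __init__(self):
        self.pose = None
    def render(self):
        # S30: shoot the virtual scene from the current pose
        return f"frame@{self.pose}"

class FakeTerminal:
    """Stands in for the tablet that displays the pushed picture."""
    def __init__(self):
        self.screen = None
    def display(self, frame):
        # S40: receive and show the pushed picture
        self.screen = frame

def map_pose(first_pos):
    # S20: generate second position information (identity here for brevity;
    # in practice this maps real-world coordinates into the virtual scene)
    return first_pos

def step_once(tracker, camera, terminal):
    camera.pose = map_pose(tracker.track())  # S10 + S20
    terminal.display(camera.render())        # S30 + S40
    return terminal.screen

print(step_once(FakeTracker(), FakeVirtualCamera(), FakeTerminal()))
# -> frame@(1.0, 2.0, 0.5)
```

In a real deployment the loop would run once per tracked frame, so the virtual camera's picture on the terminal follows the operator's hand movement continuously.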
Below, each step of the virtual shooting method based on optical capture in the embodiments of the present invention is described in more detail with reference to the accompanying drawings and embodiments.

Step S10: acquiring first position information determined by optically capturing a smart terminal.

It should be noted that a smart terminal with a shooting function, acting as the device representing the virtual camera in the real world, must satisfy two conditions: first, it can receive the stream of the picture shot by the virtual camera; second, it can provide the corresponding interactive functions for the virtual camera. In a preferred embodiment, a tablet computer is selected as the smart terminal; in other embodiments, other devices such as a mobile phone may also be selected. In addition, the process of optically capturing the smart terminal can be performed with existing optical capture technology and is not described further here.

Step S20: generating second position information according to the first position information.

The purpose of this step is to unify the real and virtual coordinates, so that the movement of the smart terminal (representing the real camera) in the real world is consistent with the movement of the virtual camera in the virtual world.

Specifically, after the unprocessed first position information representing the smart terminal's position in the real world is received, it is mapped into the coordinate system of the virtual light-capture field to generate the second position information.

It should be noted that, in the process of optically capturing the smart terminal, the effective range of optical capture in the real world is called the light-capture field; this area limits the range of movement of objects that need to be optically captured, such as the virtual camera. Because the positive directions of the X and Y axes in the light-capture field are not consistent with those in the virtual environment, the corresponding position information must be converted when it is passed between the light-capture field and the virtual environment in order to guarantee consistent movement. In this embodiment, the first position information is mapped into the coordinate system of the virtual light-capture field to generate the second position information, which avoids the complexity of performing multiple coordinate-system conversions.
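The coordinate conversion discussed above might look like the following one-step mapping. The concrete axis correspondence, sign flip, and meters-to-centimeters scale factor are assumptions for the sake of example; the patent only states that the axes of the two spaces differ and that a single mapping avoids repeated conversions.

```python
# Sketch: map a terminal pose from the real light-capture field into
# virtual-scene coordinates in one step. The axis swap, sign flip, and
# unit scale below are illustrative assumptions only.

def field_to_virtual(pos, scale=100.0):
    """pos: (x, y, z) in the light-capture field (assumed meters).
    Returns (x, y, z) in the virtual scene (assumed centimeters)."""
    x, y, z = pos
    # Assumed correspondence: virtual X <- field Y, virtual Y <- -field X;
    # the vertical axis is shared by both spaces.
    return (y * scale, -x * scale, z * scale)

print(field_to_virtual((1.0, 2.0, 0.5)))  # -> (200.0, -100.0, 50.0)
```

Because the same function is applied to every tracked pose, the terminal's motion and the virtual camera's motion stay consistent without any further per-frame conversion.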
Step S30: performing virtual shooting according to the second position information.

Because the second position information changes synchronously with the first position information, the virtual camera can change its position or orientation in the virtual scene according to changes in the real camera's position, thereby performing real-time virtual shooting.

Step S40: pushing the picture captured by the virtual camera to the smart terminal.

By pushing the picture captured by the virtual camera to the smart terminal, the virtual shot is displayed on the smart terminal in real time. Compared with prior art in which multiple stages are carried out separately and stitched together at the end, this embodiment offers the same advantage as traditional shooting of viewing the shot picture in real time.
Referring to Fig. 2, the virtual shooting method based on optical capture according to an embodiment of the present invention further comprises the following steps:

Step S11: acquiring first face data. The target face is captured by an optical camera and/or a depth camera to obtain the first face data.

Step S21: generating second face data by performing matching processing on the first face data, in order to reduce differences between instances of the same virtual character generated from different people.

Step S31: using the second face data as the face target data of a preset character model to generate the facial expressions corresponding to the preset character model.

Referring to Fig. 3, generating the second face data by performing matching processing on the first face data specifically comprises the following steps:

Step S2110: taking the first face data in an expressionless state as an initial value and subtracting this initial value from the first face data of every other frame to perform face initialization;

Step S2120: multiplying the face-initialized first face data by a preset coefficient to perform overall expression scaling.

According to results from actual tests on the first face data acquired from different actors, the same expression action, such as opening the mouth, may vary in value from 0.3 to 0.7 for one actor and from 0.2 to 0.9 for another. Through the above steps, when the acquired first face data of different actors is overall too large or too small, the similarity with which different actors perform the same virtual character is adjusted and the expression details of the virtual character are supplemented, achieving the technical effect that replacing an actor does not cause the generated virtual character to differ too much.
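Steps S2110 and S2120 amount to subtracting a neutral-frame baseline and rescaling by a preset coefficient. A minimal sketch follows; representing each frame as a mapping from expression channel to value, the channel name "mouth_open", and clamping the result to [0, 1] are illustrative assumptions not stated in the patent.

```python
# Sketch of steps S2110/S2120. Each frame of first face data is assumed
# to be a dict mapping an expression channel (e.g. "mouth_open") to a
# value. S2110 subtracts the expressionless baseline; S2120 multiplies by
# a preset coefficient so actors with narrow or wide value ranges drive
# the preset character model similarly. Clamping is an added assumption.

def normalize_frames(neutral, frames, coeff):
    out = []
    for frame in frames:
        adjusted = {}
        for key, value in frame.items():
            v = value - neutral.get(key, 0.0)              # S2110: initialization
            adjusted[key] = max(0.0, min(1.0, v * coeff))  # S2120: scaling
        out.append(adjusted)
    return out

# An actor whose channel only spans a narrow band is stretched toward the
# full range (values chosen as exact binary fractions to avoid float noise):
frames = [{"mouth_open": 0.25}, {"mouth_open": 0.75}]
print(normalize_frames({"mouth_open": 0.25}, frames, coeff=2.0))
# -> [{'mouth_open': 0.0}, {'mouth_open': 1.0}]
```

The coefficient plays the role of the preset scaling factor in step S2120: a larger value compensates for an actor whose captured range is overall too small, and vice versa.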
In addition, this embodiment also has a function of motion-capturing the human body so that the virtual character moves synchronously with the real human body. Specifically, this function is realized through the following steps: forming skeleton data according to acquired key-point position data; binding the skeleton data to a skeleton model; and driving the virtual character in real time according to the bound skeleton data.
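The body-capture path in this paragraph (key points, then skeleton data, then binding, then driving) might be sketched as follows. The dict-of-joints representation and the joint names are illustrative assumptions; the patent does not specify any data structures.

```python
# Sketch of the body-capture path: key-point positions become per-joint
# skeleton data, are bound to a skeleton model by joint name, and then
# drive the virtual character each frame. All structure here (dicts keyed
# by joint name, the names themselves) is an illustrative assumption.

def form_skeleton_data(key_points):
    """key_points: {joint_name: (x, y, z)} from optical capture."""
    return {name: {"position": pos} for name, pos in key_points.items()}

def bind_to_model(skeleton_data, model_joints):
    """Binding: keep only joints that exist in the skeleton model."""
    return {name: data for name, data in skeleton_data.items()
            if name in model_joints}

def drive_character(character_pose, bound_data):
    """Per frame: copy bound joint positions onto the character's pose."""
    for name, data in bound_data.items():
        character_pose[name] = data["position"]
    return character_pose

model_joints = {"hips", "spine", "head"}
raw = {"hips": (0, 1, 0), "head": (0, 1.7, 0), "prop_marker": (2, 0, 0)}
pose = drive_character({}, bind_to_model(form_skeleton_data(raw), model_joints))
print(sorted(pose))  # -> ['head', 'hips']
```

Note how the binding step filters out captured points (here, "prop_marker") that have no counterpart in the skeleton model, so only valid joints drive the character.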
Based on the above virtual shooting method based on optical capture, an embodiment of the present invention further provides a virtual shooting system based on optical capture. As shown in Fig. 4, the system includes the following modules:

a first position information processing module 100 configured to acquire first position information determined by optically capturing a smart terminal;

a second position information processing module 200 configured to generate second position information according to the first position information;

a virtual shooting module 300 configured to perform virtual shooting using the second position information as the position of the virtual camera;

a picture push module 400 configured to push the picture captured by the virtual camera to the smart terminal.

As shown in Fig. 5, in one embodiment, the virtual shooting system based on optical capture further includes:

a first face data processing module 110 configured to acquire first face data;

a second face data processing module 210 configured to generate second face data by performing matching processing on the first face data;

a facial expression generation module 310 configured to use the second face data as the face target data of a preset character model to generate the facial expressions corresponding to the preset character model.

As shown in Fig. 6, in one embodiment, the second face data processing module further includes:

a face initialization module 21100 configured to take the first face data in an expressionless state as an initial value and subtract this initial value from the first face data of every other frame to perform face initialization;

an overall expression scaling module 21200 configured to multiply the face-initialized first face data by a preset coefficient to perform overall expression scaling.
In summary, the virtual shooting system based on optical capture of the embodiments of the present invention can be implemented in the form of a program that runs on a computer device. The memory of the computer device can store the program modules that make up the system, such as the first position information processing module 100, the second position information processing module 200, the virtual shooting module 300, and the picture push module 400 shown in Fig. 4. The program composed of these modules causes the processor to execute the steps of the virtual shooting method based on optical capture of the embodiments of the present application described in this specification.

An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the virtual shooting method based on optical capture of the embodiments of the present application are implemented.

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.

The above embodiments illustrate the present invention and do not limit it; any solution obtained by a simple transformation of the present invention falls within its scope of protection. The above are merely preferred embodiments of the present invention, and the scope of protection is not limited to the above examples; all technical solutions within the idea of the present invention fall within its scope of protection. It should be noted that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principles of the present invention should also be regarded as falling within the scope of protection of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210060188.3A CN114422697B (en) | 2022-01-19 | 2022-01-19 | Virtual shooting method, system and storage medium based on optical capture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114422697A CN114422697A (en) | 2022-04-29 |
CN114422697B true CN114422697B (en) | 2023-07-18 |
Family
ID=81274482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210060188.3A Active CN114422697B (en) | 2022-01-19 | 2022-01-19 | Virtual shooting method, system and storage medium based on optical capture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114422697B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113781610A (en) * | 2021-06-28 | 2021-12-10 | 武汉大学 | Virtual face generation method |
WO2022005300A1 (en) * | 2020-07-02 | 2022-01-06 | Weta Digital Limited | Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model |
CN113905145A (en) * | 2021-10-11 | 2022-01-07 | 浙江博采传媒有限公司 | LED circular screen virtual-real camera focus matching method and system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2161871C2 (en) * | 1998-03-20 | 2001-01-10 | Латыпов Нурахмед Нурисламович | Method and device for producing video programs |
US9729765B2 (en) * | 2013-06-19 | 2017-08-08 | Drexel University | Mobile virtual cinematography system |
CN106027855B (en) * | 2016-05-16 | 2019-06-25 | 深圳迪乐普数码科技有限公司 | A kind of implementation method and terminal of virtual rocker arm |
CN207234975U (en) * | 2017-02-09 | 2018-04-13 | 量子动力(深圳)计算机科技有限公司 | The system of seizure performer expression based on virtual image technology |
US20200090392A1 (en) * | 2018-09-19 | 2020-03-19 | XRSpace CO., LTD. | Method of Facial Expression Generation with Data Fusion |
US11069135B2 (en) * | 2019-03-07 | 2021-07-20 | Lucasfilm Entertainment Company Ltd. | On-set facial performance capture and transfer to a three-dimensional computer-generated model |
CN109859297B (en) * | 2019-03-07 | 2023-04-18 | 灵然创智(天津)动画科技发展有限公司 | Mark point-free face capturing device and method |
CN112040092B (en) * | 2020-09-08 | 2021-05-07 | 杭州时光坐标影视传媒股份有限公司 | Real-time virtual scene LED shooting system and method |
CN113537056A (en) * | 2021-07-15 | 2021-10-22 | 广州虎牙科技有限公司 | Avatar driving method, apparatus, device, and medium |
Legal Events

Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Virtual shooting method, system, and storage medium based on optical capture; Granted publication date: 20230718; Pledgee: Xiaoshan Branch of Agricultural Bank of China Ltd.; Pledgor: ZHEJIANG VERSATILE MEDIA Co.,Ltd.; Registration number: Y2024980057089