CN115063518A - Trajectory rendering method, device, electronic device and storage medium - Google Patents
- Publication number
- CN115063518A (application number CN202210647498.5A)
- Authority
- CN
- China
- Prior art keywords
- track
- rendered
- trajectory
- model data
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Technical Field
The present application belongs to the technical field of virtual reality, and in particular relates to a trajectory rendering method, apparatus, electronic device, and storage medium.
Background Art
During the operation of a virtual reality/augmented reality (VR/AR) device, objects visible to the user are rendered. However, the images produced by the related rendering methods are all in pure colors, so the rendered images carry no ambient-light feedback and therefore lack realism.
Summary of the Invention
In view of the above problems, the present application proposes a trajectory rendering method, apparatus, electronic device, and storage medium to improve upon them.
In a first aspect, an embodiment of the present application provides a trajectory rendering method, the method including: acquiring a trajectory to be rendered, where the trajectory to be rendered is the sliding trajectory of a touch object in three-dimensional space; generating three-dimensional model data based on the trajectory to be rendered; and generating, based on the three-dimensional model data and a material capture map, a three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
In a second aspect, an embodiment of the present application provides a trajectory rendering apparatus, the apparatus including: a trajectory acquisition unit, configured to acquire a trajectory to be rendered, where the trajectory to be rendered is the sliding trajectory of a touch object in three-dimensional space; a data generation unit, configured to generate three-dimensional model data based on the trajectory to be rendered; and a rendering unit, configured to generate, based on the three-dimensional model data and a material capture map, a three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
In a third aspect, an embodiment of the present application provides an electronic device, including one or more processors, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the above method is executed when the program code runs.
Embodiments of the present application provide a trajectory rendering method, apparatus, electronic device, and storage medium. First, a trajectory to be rendered is acquired, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space; then three-dimensional model data is generated based on the trajectory to be rendered; and finally a three-dimensional graffiti rendering result corresponding to the trajectory to be rendered is generated based on the three-dimensional model data and a material capture map. With this method, images are drawn in three-dimensional space using a material capture map, so the drawn images can simulate real physical lighting effects and thus appear more realistic.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of a trajectory rendering method proposed by an embodiment of the present application;
FIG. 2 is a schematic diagram of a trajectory to be rendered proposed by an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of a trajectory rendering method proposed by an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario of a trajectory rendering method proposed by an embodiment of the present application;
FIG. 5 is a flowchart of a trajectory rendering method proposed by an embodiment of the present application;
FIG. 6 is a schematic diagram of a trajectory to be rendered in an embodiment of the present application;
FIG. 7 is a schematic diagram of the three-dimensional model data generated in an embodiment of the present application;
FIG. 8 is a flowchart of a trajectory rendering method proposed by another embodiment of the present application;
FIG. 9 is a schematic diagram of a Bezier curve in another embodiment of the present application;
FIG. 10 is a schematic diagram of a trajectory to be rendered in another embodiment of the present application;
FIG. 11 is a schematic diagram of obtaining the circle corresponding to a trajectory point in another embodiment of the present application;
FIG. 12 is a schematic diagram of obtaining reference points on the circle corresponding to a trajectory point in another embodiment of the present application;
FIG. 13 is a schematic diagram of the three-dimensional model data generated in another embodiment of the present application;
FIG. 14 is a flowchart of a trajectory rendering method proposed by still another embodiment of the present application;
FIG. 15 is a schematic diagram of the preset three-dimensional model data in still another embodiment of the present application;
FIG. 16 is a schematic diagram of dividing the preset three-dimensional model data in still another embodiment of the present application;
FIG. 17 is a schematic diagram of the three-dimensional model data generated in still another embodiment of the present application;
FIG. 18 is a flowchart of a trajectory rendering method proposed by yet another embodiment of the present application;
FIG. 19 is a schematic diagram of the MatCap map generated in yet another embodiment of the present application;
FIG. 20 is a schematic diagram of the three-dimensional graffiti rendering result generated in yet another embodiment of the present application;
FIG. 21 is a structural block diagram of a trajectory rendering apparatus proposed by an embodiment of the present application;
FIG. 22 is a structural block diagram of a trajectory rendering apparatus proposed by an embodiment of the present application;
FIG. 23 is a structural block diagram of an electronic device or server for executing the trajectory rendering method according to an embodiment of the present application;
FIG. 24 is a storage unit for storing or carrying program code implementing the trajectory rendering method according to an embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Virtual reality/augmented reality (VR/AR) is a virtual simulation system that can be constructed, experienced, and interacted with; it combines multiple disciplines, including computer graphics, real-time image processing, precision sensors, and human-machine interfaces. Realistic scenes and models give users an immersive experience. It is currently applied mostly in games, film and television, animation, the military, and other fields.
In researching related trajectory rendering methods, the inventors found that they rely heavily on handheld sensor hardware to collect the trajectory points drawn by the user, and that the drawn graphics cannot blend well into the virtual or real lighting scenes set up or captured by the VR/AR environment. As a result, the images drawn by related rendering methods are mostly pure colors with no ambient-light feedback, and therefore lack realism.
Therefore, the inventors propose the trajectory rendering method, apparatus, electronic device, and storage medium of the present application. First, a trajectory to be rendered is acquired, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space; then three-dimensional model data is generated based on the trajectory to be rendered; and finally a three-dimensional graffiti rendering result corresponding to the trajectory to be rendered is generated based on the three-dimensional model data and a material capture map. With this method, images are drawn in three-dimensional space using a material capture map, so the drawn images can simulate real physical lighting effects and thus appear more realistic.
Please refer to FIG. 1, which is a schematic diagram of an application scenario of the trajectory rendering method provided by an embodiment of the present application.
As shown in FIG. 1, the trajectory rendering method provided by this embodiment can be applied to a scenario in which a user wears a VR/AR device and, according to the content displayed in the device, draws trajectory points or images in three-dimensional space with a finger or a touch object. Exemplarily, when the user is wearing the VR/AR device and playing a game, if the device indicates that the user needs to move a game character downward, the user can draw a downward trajectory in three-dimensional space with a finger or a touch object, as shown in FIG. 2; the trajectory in FIG. 2 is such a downward trajectory drawn by the user.
Optionally, in the embodiments of the present application, the provided trajectory rendering method may be executed by an electronic device, in which case all steps of the method are performed by the electronic device. For example, as shown in FIG. 3, the trajectory acquisition apparatus of the electronic device 100 can acquire the trajectory to be rendered and transmit it to the processor, so that the processor can generate three-dimensional model data based on the trajectory to be rendered in real time; further, the processor can generate, based on the three-dimensional model data and a material capture map, the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
Furthermore, the trajectory rendering method provided by the embodiments of the present application may also be executed by a server (in the cloud). In this case, the electronic device acquires the image drawn by the user's finger and sends it to the server synchronously; the server then recognizes the finger-drawn image in real time to obtain the trajectory to be rendered, generates three-dimensional model data based on the trajectory to be rendered, and generates, based on the three-dimensional model data and a material capture map, the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
In addition, the method may also be executed cooperatively by the electronic device and the server, in which case some steps of the trajectory rendering method provided by the embodiments of the present application are performed by the electronic device and the remaining steps are performed by the server.
Exemplarily, as shown in FIG. 4, the electronic device 100 may perform the step of acquiring the trajectory to be rendered, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space, while the server 200 performs the steps of generating three-dimensional model data based on the trajectory to be rendered, and generating, based on the three-dimensional model data and a material capture map, the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
It should be noted that in this cooperative mode the steps performed by the electronic device and the server respectively are not limited to those in the above example; in practical applications, the division of steps between them can be adjusted dynamically according to the actual situation.
It should also be noted that, in addition to the smartphone shown in FIG. 1 and FIG. 2, the electronic device 100 may be an in-vehicle device, a wearable device, a tablet computer, a notebook computer, a smart speaker, a VR/AR device, etc. The server 200 may be an independent physical server, or a server cluster or distributed system composed of multiple physical servers.
The embodiments of the present application are described in detail below with reference to the drawings.
Please refer to FIG. 5. An embodiment of the present application provides a trajectory rendering method, applied to the electronic device or server shown in FIG. 3 or FIG. 4, and the method includes:
Step S110: Acquire a trajectory to be rendered, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space.
In this embodiment, the trajectory to be rendered is a sliding trajectory for which three-dimensional graphics need to be drawn. The touch object may be any item the user can use to draw a sliding trajectory in a given application scenario; for example, in a VR game scene it may be the controller held in the user's hand or the user's finger, which is not specifically limited here.
As one approach, the trajectory to be rendered may be a sliding trajectory stored in advance on a cloud server or in a preset storage area; alternatively, it may be a sliding trajectory acquired in real time. In the former case, the sliding trajectories corresponding to different scenarios may be obtained in advance and then stored on the cloud server or in the preset storage area. When storing the sliding trajectories for different application scenarios, different identifiers may first be set for the different scenarios, different numbers set for each scenario's sliding trajectories, and a correspondence established between application scenarios and sliding trajectories. Optionally, if an application scenario corresponds to multiple sliding trajectories, the trajectories may be numbered in a preset numbering order. The preset numbering order may be ascending, descending, or ordered by the time at which each sliding trajectory was acquired (with numbers assigned in ascending order), etc., which is not specifically limited here. For example, suppose a VR game scenario corresponds to three sliding trajectories, where sliding trajectory 1 was acquired on December 1, sliding trajectory 2 on December 5, and sliding trajectory 3 on December 2. If the three trajectories are numbered in ascending order by acquisition time, then sliding trajectory 1 may be numbered 001, sliding trajectory 3 numbered 002, and sliding trajectory 2 numbered 003.
After the sliding trajectories corresponding to different application scenarios have been stored in this way, when a trajectory to be rendered is needed, the current application scenario can be identified first; the sliding trajectory corresponding to that scenario is then fetched from the cloud server or preset storage area and used as the trajectory to be rendered. Optionally, if the scenario corresponds to multiple sliding trajectories, all of them may be taken as trajectories to be rendered; alternatively, any one of them, or the one with the smallest or largest number, may be taken as the trajectory to be rendered, which is not specifically limited here.
Further, each application scenario may be divided into multiple sub-scenarios, with a correspondence established between each sub-scenario and a sliding trajectory, so that each sub-scenario corresponds to one sliding trajectory. The division into sub-scenarios may follow the different stages of the application. For example, a VR game scenario may be divided into two sub-scenarios according to whether a game character needs to be operated: in the sub-scenario where the character must be operated, the corresponding sliding trajectory is the one that controls the character's rotation; in the sub-scenario where it does not, the corresponding sliding trajectory is the one for clicking on equipment.
After each application scenario has been divided into sub-scenarios, when a trajectory to be rendered is needed, the application scenario is found first, the corresponding sub-scenario is then determined, and the corresponding sliding trajectory is located and used as the trajectory to be rendered.
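The bookkeeping described above, keying stored trajectories by scenario and sub-scenario and numbering them by acquisition time, can be sketched as a small lookup structure. The patent does not prescribe any data structure; the class and method names below are illustrative assumptions.

```python
from datetime import date


class TrajectoryStore:
    """Maps (scenario, sub-scenario) keys to numbered sliding trajectories."""

    def __init__(self):
        # (scenario, sub_scenario) -> list of (acquisition_date, trajectory)
        self._store = {}

    def add(self, scenario, sub_scenario, acquired, trajectory):
        entries = self._store.setdefault((scenario, sub_scenario), [])
        entries.append((acquired, trajectory))
        # Keep entries ordered by acquisition time, so that the earliest
        # acquisition receives the smallest number (001, 002, ...).
        entries.sort(key=lambda e: e[0])

    def numbered(self, scenario, sub_scenario):
        """Return [(number, trajectory), ...] in ascending number order."""
        entries = self._store.get((scenario, sub_scenario), [])
        return [(f"{i + 1:03d}", traj) for i, (_, traj) in enumerate(entries)]

    def smallest_numbered(self, scenario, sub_scenario):
        """One of the selection rules mentioned: take the smallest-numbered track."""
        numbered = self.numbered(scenario, sub_scenario)
        return numbered[0] if numbered else None


# Reproducing the December example from the text:
store = TrajectoryStore()
store.add("vr_game", "control_character", date(2021, 12, 1), "track-1")
store.add("vr_game", "control_character", date(2021, 12, 5), "track-2")
store.add("vr_game", "control_character", date(2021, 12, 2), "track-3")
print(store.numbered("vr_game", "control_character"))
# [('001', 'track-1'), ('002', 'track-3'), ('003', 'track-2')]
```

As in the worked example, trajectory 1 receives number 001, trajectory 3 number 002, and trajectory 2 number 003, because numbering follows acquisition time rather than insertion order.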
As the other approach, when the trajectory to be rendered is a sliding trajectory acquired in real time, the user wearing a VR/AR headset can draw the sliding trajectory in three-dimensional space in real time with a touch object. Of course, the user can also draw the sliding trajectory according to the game scene currently running on the VR/AR device.
Step S120: Generate three-dimensional model data based on the trajectory to be rendered.
In this embodiment, three-dimensional model data with the same shape as the trajectory to be rendered can be generated by a preset method, based on the shape of the trajectory and the coordinates of each of its trajectory points. Exemplarily, if the trajectory to be rendered is as shown in FIG. 6, the generated three-dimensional model data is as shown in FIG. 7.
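The text does not spell out the meshing procedure at this point, though the figure list suggests one embodiment samples a circle around each trajectory point and takes reference points on it (FIGS. 11 and 12). A minimal sketch under that assumption, with an assumed radius and segment count and circles simply laid in the XY plane, might look like:

```python
import math


def tube_vertices(track_points, radius=0.05, segments=8):
    """For each 3D trajectory point, place `segments` reference points on a
    circle of `radius` around it. For simplicity every circle lies in the XY
    plane; a real implementation would orient each circle perpendicular to
    the local trajectory direction before stitching the rings into a mesh."""
    vertices = []
    for (px, py, pz) in track_points:
        ring = []
        for i in range(segments):
            theta = 2.0 * math.pi * i / segments
            ring.append((px + radius * math.cos(theta),
                         py + radius * math.sin(theta),
                         pz))
        vertices.append(ring)
    return vertices


# A short downward trajectory of three points:
rings = tube_vertices([(0, 0, 0), (0, -0.1, 0), (0, -0.2, 0)])
print(len(rings), len(rings[0]))  # 3 rings of 8 reference points each
```

Connecting corresponding reference points on adjacent rings with triangles would then yield three-dimensional model data with the same overall shape as the trajectory.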
Step S130: Generate, based on the three-dimensional model data and a material capture map, the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered.
In this embodiment, a material capture map is also called a MatCap (Material Capture) map. A MatCap map is a preset material used to add certain display effects to a model: a pre-generated texture that stores information such as lighting and reflection, sampled at runtime using the normal direction. The three-dimensional graffiti rendering result is three-dimensional model data with the MatCap material effect.
As one approach, the three-dimensional model data can be rendered according to the MatCap map to produce three-dimensional model data with the MatCap material effect.
Different application scenarios can use different material capture maps, so rendering the three-dimensional model data with different maps yields model data with different MatCap material effects. Exemplarily, while a VR game is running, when preset three-dimensional model data needs to be displayed in the game scene, it can be rendered according to a target MatCap map so that the preset model data is displayed in the game scene with the MatCap material effect.
In this embodiment, when generating three-dimensional model data with the MatCap material effect based on the model data and the material capture map, the MatCap shader approach can be used.
A MatCap shader is a rendering technique that, in some respects, can substitute for or even surpass PBR (Physically-Based Rendering). Its basic idea is to use the texture of a particular material sphere as a view-space environment map for the current material, thereby displaying a reflective-material object with uniform surface shading. Since the projections of all of an object's normals fall within x in (-1, 1) and y in (-1, 1), forming a circle, the region of a MatCap map that stores lighting information is circular. A shader based on the MatCap idea needs no lights at all; one or more suitable MatCap maps serve as the "guide" for the lighting result.
The trajectory rendering method provided by the present application first acquires the trajectory to be rendered, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space; then generates three-dimensional model data based on it; and finally generates the corresponding three-dimensional graffiti rendering result based on the three-dimensional model data and a material capture map. In this way, images are drawn in three-dimensional space using a material capture map, so the drawn images can simulate real physical lighting effects and thus appear more realistic.
Please refer to FIG. 8. Another embodiment of the present application provides a trajectory rendering method, applied to the electronic device or server shown in FIG. 3 or FIG. 4, and the method includes:
Step S210: Collect, through an image acquisition device, an image of the touch object drawing in three-dimensional space.
In this embodiment, the image acquisition device may be an electronic device with an image-capturing function, such as a mobile phone, a camera, or an image sensor. Optionally, the touch object is the user's finger, so the collected drawing image is a finger-drawn image, which may contain the trajectory to be rendered, the user's finger, and the background.
As one approach, to reduce hardware cost, the finger together with the image acquisition device can serve as the input device. In response to an image acquisition instruction, the image acquisition device starts to collect finger-drawn images. The image acquisition instruction may be an instruction sent by an external device that has established a communication connection with the image acquisition device, or an instruction triggered when an application in the image acquisition device is detected to start running, which is not specifically limited here.
When collecting finger-drawn images, the image acquisition device may either capture multiple frames continuously or capture frames at a preset time interval. Continuous capture suits the case where the finger-drawn images change quickly over time; interval capture suits the case where they change slowly. In other words, the capture mode can be chosen according to how quickly the finger-drawn images change. Whether the images are changing quickly can be judged by checking whether the similarity between adjacent collected frames is below a preset similarity, and whether the acquisition interval between those frames is below a preset time interval. When the similarity between adjacent finger-drawn frames is below the preset similarity and their acquisition interval is below the preset time interval, the finger-drawn images are determined to be changing quickly; when the similarity is greater than or equal to the preset similarity and the interval is greater than or equal to the preset time interval, they are determined to be changing slowly. Here, the preset similarity is the preconfigured maximum similarity for judging that the finger-drawn images change quickly, and the preset time interval is the preconfigured maximum interval for that judgment.
As an example, the preset similarity may be set to 60% and the preset time interval to 5 s. In that case, if the similarity between adjacent frames captured by the image capture device is 55% and their acquisition time interval is 4.5 s, it can be determined that the drawn image changes rapidly, so when subsequently acquiring the drawn images the device may capture multiple frames continuously.
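The decision above can be sketched as follows. This is a minimal illustration, not the patented implementation: the 60% / 5 s thresholds come from the worked example, and reducing the mode choice to a single boolean check is a simplifying assumption.

```python
# A minimal sketch of the capture-mode decision described above. The
# 60% / 5 s thresholds come from the worked example; the idea that a
# single boolean check selects the mode is a simplifying assumption.

PRESET_SIMILARITY = 0.60   # maximum similarity for "changes rapidly"
PRESET_INTERVAL_S = 5.0    # maximum frame gap for "changes rapidly"

def frames_change_rapidly(similarity: float, interval_s: float) -> bool:
    """True when adjacent frames are both dissimilar and close in time,
    i.e. the drawn image changes rapidly and continuous multi-frame
    capture should be used; otherwise fixed-interval capture suffices."""
    return similarity < PRESET_SIMILARITY and interval_s < PRESET_INTERVAL_S

print(frames_change_rapidly(0.55, 4.5))  # True: the example above
print(frames_change_rapidly(0.70, 6.0))  # False: slow change
```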
Optionally, when capturing the images drawn by the user's finger, the image capture device may use only one of the two capture modes, or may use both. For example, in a VR game scenario, if the drawn image changes slowly in the early stage of the game but rapidly in the middle and late stages, the device may first capture frames at the preset time interval and then, once the middle and late stages are reached, switch to continuously capturing multiple frames.
Step S220: Identify a plurality of trajectory points in the image drawn by the touch object.
In the embodiments of the present application, finger recognition and tracking technology may be used to identify the drawn image and locate the user's finger in three-dimensional space, that is, to identify the coordinates of the trajectory points along which the finger slides in the current frame, and to output a set of trajectory points.
Step S230: Smooth the plurality of trajectory points to obtain the trajectory to be rendered corresponding to them.
In the embodiments of the present application, the trajectory points in the drawn image may be smoothed with a Bezier curve to obtain the trajectory to be rendered. As shown in FIG. 9, the Bezier curve is a basic tool of computer graphics modeling and one of the most widely used basic curves in shape modeling. A cubic Bezier curve creates and edits shapes through four control points on the curve: a start point P0, an end point P3, and two mutually separated intermediate points P1 and P2.
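The smoothing step can be illustrated with the standard cubic Bezier evaluation over the four control points named above. This is a sketch under assumptions: the sample count and segment arrangement are illustrative choices, not values fixed by the method.

```python
# A sketch of the cubic Bezier evaluation behind the smoothing step,
# using the four control points named above (P0, P1, P2, P3). The
# sample count is an illustrative choice, not part of the method.

def cubic_bezier(p0, p1, p2, p3, t):
    """B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3,
    evaluated componentwise for 3-D points."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u**2*t*b + 3*u*t**2*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_segment(p0, p1, p2, p3, samples=8):
    """Sample the curve at equal parameter steps to produce smoothed
    trajectory points for one segment."""
    return [cubic_bezier(p0, p1, p2, p3, i / (samples - 1))
            for i in range(samples)]

pts = smooth_segment((0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0))
print(pts[0], pts[-1])  # curve starts at P0 and ends at P3
```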
Step S240: Establish a trajectory point combination from every two adjacent trajectory points in the trajectory to be rendered, to obtain a plurality of trajectory point combinations.
In the embodiments of the present application, the trajectory points of the trajectory to be rendered are partitioned into combinations, with every two adjacent trajectory points forming one combination, to obtain a plurality of trajectory point combinations. As an example, as shown in FIG. 10, the trajectory to be rendered includes trajectory points a, b, c, d, e, f and g. When the combinations are formed, points a and b form one combination, b and c form one, c and d form one, d and e form one, e and f form one, and f and g form one, giving six trajectory point combinations.
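The pairing rule above can be sketched directly: every two adjacent trajectory points form one combination, so n points yield n-1 combinations, matching the seven points a..g producing six combinations in the example.

```python
# A sketch of step S240: every two adjacent trajectory points form one
# combination, so n points yield n-1 combinations (seven points a..g
# produce the six combinations of the example above).

def adjacent_pairs(points):
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]

track = ["a", "b", "c", "d", "e", "f", "g"]
pairs = adjacent_pairs(track)
print(len(pairs))  # 6
print(pairs[0])    # ('a', 'b')
```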
Step S250: For each trajectory point combination, obtain the circle corresponding to each of its two trajectory points, where each circle is centered on its trajectory point.
In the embodiments of the present application, each trajectory point corresponds to a circle: the line from any point on the circle to the trajectory point is perpendicular to the vector formed by the two adjacent trajectory points, and its length equals a preset radius, as shown in FIG. 11.
Step S260: With the preset smoothness corresponding to each trajectory point combination, obtain a preset number of reference points on the circles of that combination, to obtain the reference points corresponding to each combination.
In the embodiments of the present application, the preset smoothness characterizes how closely the polygon formed by connecting the sampled reference points approximates a circle: the larger the preset smoothness, the more circle-like the shape formed by the reference points; the smaller it is, the less circle-like. The preset smoothness is preconfigured and governs how many reference points are taken on the circle corresponding to each trajectory point.
In one approach, the preset number of reference points is taken at equal intervals, according to the preset smoothness, on the circles of each trajectory point combination. When taking the reference points on the two circles of a combination, they are taken at corresponding positions on both circles. As an example, in FIG. 12, reference points a1 and b1 occupy the same position on their respective circles, as do reference points an and bn.
Here, "equal intervals" means that the number of reference points to acquire is determined from the preset smoothness, the circle of each trajectory point is then divided into that many equal arcs, and one reference point is taken at each division, yielding the required number of reference points.
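The equal-interval sampling can be sketched as placing n points at equal angles on a circle of the preset radius, centered at a trajectory point, in the plane perpendicular to the adjacent-point vector. Using a plain count n in place of the preset smoothness is an assumption (more points approximate a circle more closely).

```python
# A sketch of the equal-interval sampling described above: n reference
# points on a circle of the preset radius around a trajectory point,
# in the plane perpendicular to the adjacent-point vector. Using a
# plain count n in place of the preset smoothness is an assumption.
import math

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def circle_reference_points(center, direction, radius, n):
    """n points at equal angular intervals on the circle around
    `center` whose plane is perpendicular to `direction`."""
    d = _unit(direction)
    helper = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = _unit(_cross(d, helper))   # first in-plane axis
    v = _cross(d, u)               # second in-plane axis (already unit)
    return [tuple(center[i] + radius * (math.cos(2*math.pi*k/n) * u[i]
                                        + math.sin(2*math.pi*k/n) * v[i])
                  for i in range(3))
            for k in range(n)]

ring = circle_reference_points((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0, 8)
print(len(ring))  # 8 reference points, each at the preset radius
```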
Step S270: Generate three-dimensional model data corresponding to the trajectory to be rendered, based on the reference points of each trajectory point combination and the trajectory points it includes.
In the embodiments of the present application, the reference points at corresponding positions on the two circles of each combination are connected to one another, and each trajectory point is connected to the reference points on its own circle, yielding the three-dimensional model data corresponding to the trajectory to be rendered. As shown in FIG. 13, the reference points on the circle of trajectory point a are a1, ..., an, and those on the circle of trajectory point b are b1, ..., bn. Since a1 and b1 occupy corresponding positions, they are connected, as are an and bn; trajectory point a is connected to each of a1, ..., an, and trajectory point b to each of b1, ..., bn.
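The connection rule above can be sketched at the index level: reference points at the same position on the two rings are joined into side faces, wrapping around the circle. Representing the result as quad index tuples is an assumption about the mesh layout; the connections from each trajectory point to its own ring would form the end caps as triangle fans.

```python
# A sketch of the connection rule of step S270 at the index level:
# reference points at the same position on the two rings are joined
# into side faces that wrap around the circle. Quad index tuples are
# an assumed mesh layout; end caps (trajectory point to its own ring)
# would be triangle fans and are omitted here.

def tube_side_faces(ring_size):
    """Quad faces (a_i, a_{i+1}, b_{i+1}, b_i) joining ring A
    (indices 0..n-1) to ring B (indices n..2n-1)."""
    n = ring_size
    return [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]

faces = tube_side_faces(8)
print(len(faces))  # 8 side quads for an 8-point ring
print(faces[0])    # (0, 1, 9, 8)
```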
Step S280: Generate the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered, based on the three-dimensional model data and the material capture map.
In the trajectory rendering method provided by the present application, an image drawn by a touch object in three-dimensional space is first captured by an image capture device; a plurality of trajectory points in the drawn image are identified and smoothed to obtain the trajectory to be rendered. A trajectory point combination is then established from every two adjacent trajectory points to obtain a plurality of combinations; the circle corresponding to each of the two trajectory points of every combination is obtained; and, with the preset smoothness of each combination, a preset number of reference points are taken on the circles of that combination to obtain its reference points. Finally, three-dimensional model data corresponding to the trajectory to be rendered is generated from the reference points and trajectory points of every combination, and the three-dimensional graffiti rendering result is generated from the three-dimensional model data and the material capture map.
By taking a preset number of reference points, governed by the preset smoothness, on the circles of each combination, the method above yields three-dimensional model data that more closely follows the shape of the trajectory to be rendered, so the resulting three-dimensional graffiti rendering appears more realistic.
Referring to FIG. 14, a trajectory rendering method provided by an embodiment of the present application is applied to the electronic device or server shown in FIG. 3 or FIG. 4, and includes:
Step S310: Capture an image drawn by a touch object in three-dimensional space with an image capture device.
Step S320: Identify a plurality of trajectory points in the image drawn by the touch object.
Step S330: Smooth the plurality of trajectory points to obtain the trajectory to be rendered corresponding to them.
Step S340: Acquire preset three-dimensional model data.
In the embodiments of the present application, the preset three-dimensional model data is a preconfigured cylinder model of fixed size and shape. It may be stored in advance on a cloud server or in a preset storage area, and fetched from there when it needs to be acquired. As an example, FIG. 15 shows one kind of preset three-dimensional model data. Optionally, the preset three-dimensional model data may instead be a sphere model or a cube model of fixed size and shape.
Step S350: Obtain the sum of the distances between every two adjacent trajectory points in the trajectory to be rendered.
In the embodiments of the present application, the distance between two adjacent trajectory points is the straight-line distance between them. In one approach, the straight-line distance between each pair of adjacent trajectory points in the trajectory to be rendered is computed first, and all of the computed distances are then added together to obtain the distance sum.
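Step S350 reduces to summing Euclidean distances along the point sequence, which can be sketched as follows (the sample coordinates are illustrative).

```python
# A sketch of step S350: the trajectory length is the sum of the
# straight-line distances between every two adjacent trajectory points.
# The sample coordinates are illustrative.
import math

def polyline_length(points):
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

print(polyline_length([(0, 0, 0), (3, 4, 0), (3, 4, 12)]))  # 5 + 12 = 17.0
```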
Step S360: Scale the length of the preset three-dimensional model data to equal the distance sum, obtaining scaled preset three-dimensional model data.
In the embodiments of the present application, if the length of the preset three-dimensional model data exceeds the computed distance sum, the length is compressed until it equals the sum; if the length is smaller than the computed distance sum, it is enlarged until it equals the sum.
In the embodiments of the present application, if the preset three-dimensional model data is the cylinder model shown in FIG. 15, its length may be the height of the cylinder; if it is a sphere model, its length may be the diameter of the sphere; if it is a cube model, its length may be the edge length of the cube.
Step S370: Divide the scaled preset three-dimensional model data longitudinally, based on the ratio of the distance between every two adjacent trajectory points to the distance sum, to obtain the reference three-dimensional model data corresponding to every two adjacent trajectory points.
In the embodiments of the present application, the ratio of the straight-line distance of each adjacent pair to the distance sum is computed, and the scaled preset model is divided longitudinally in those proportions, yielding reference three-dimensional model data for each adjacent pair. Longitudinal division means dividing the scaled preset model data along the y axis of the established coordinate system. As an example, in FIG. 16 the y axis is perpendicular to the ground and the trajectory to be rendered is abc. From the straight-line distance L1 between trajectory points a and b and the straight-line distance L2 between trajectory points b and c, the distance sum L3 = L1 + L2 is computed, then ratio 1 = L1/L3 and ratio 2 = L2/L3. Along the y axis, the scaled preset model is divided by ratio 1 and ratio 2 into first reference model data (region ab) and second reference model data (region bc), giving the reference three-dimensional model data corresponding to points a and b and to points b and c, from which the vertex coordinates of each reference model can be determined. The vertices of each piece of reference model data are the points on the circumference of its cross-section, the cross-sections being those produced by the proportional longitudinal division.
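At the level of lengths only, steps S360/S370 can be sketched as scaling the preset model to the total trajectory length and then splitting it in proportion to each adjacent pair's share of that length. The actual mesh splitting along the y axis is assumed to happen elsewhere; the numeric values are illustrative.

```python
# A sketch of steps S360/S370 at the level of lengths only: scale the
# preset model to the total trajectory length, then split it in
# proportion to each adjacent pair's share of that length. The actual
# mesh splitting along the y axis is assumed to happen elsewhere.

def scale_factor(model_length, segment_lengths):
    """Factor stretching/compressing the model so its length equals
    the distance sum (step S360)."""
    return sum(segment_lengths) / model_length

def segment_ratios(segment_lengths):
    """Each adjacent pair's share of the total length (step S370)."""
    total = sum(segment_lengths)
    return [l / total for l in segment_lengths]

model_length = 10.0
lengths = [2.0, 3.0]                 # L1, L2 for trajectory a-b-c
f = scale_factor(model_length, lengths)
scaled_length = model_length * f     # 5.0, equal to L1 + L2
print(f, segment_ratios(lengths))    # 0.5 [0.4, 0.6]
print([r * scaled_length for r in segment_ratios(lengths)])  # [2.0, 3.0]
```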
Step S380: Compute, for every two adjacent trajectory points, the vertices of the corresponding reference three-dimensional model data, based on the coordinates of the two points and the slope of the vector they form, to obtain the three-dimensional model data.
In the embodiments of the present application, a vector is obtained from the coordinates of every two adjacent trajectory points, and the corresponding slope is computed from that vector; the vertex coordinates of the corresponding reference model data are then computed from the two points' coordinates and that slope. If a computed coordinate differs from the preset coordinate, the preset coordinate is updated to the computed one, giving updated reference model data; the pieces of updated model data are then spliced together into the three-dimensional model data corresponding to the trajectory to be rendered. After the vertex coordinates of the reference model data shown in FIG. 16 are updated, the three-dimensional model data shown in FIG. 17 is obtained.
Step S390: Generate the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered, based on the three-dimensional model data and the material capture map.
In the trajectory rendering method provided by the present application, an image drawn by a touch object in three-dimensional space is first captured by an image capture device; a plurality of trajectory points in the drawn image are identified and smoothed to obtain the trajectory to be rendered. Preset three-dimensional model data is then acquired, the sum of the distances between every two adjacent trajectory points is obtained, and the length of the preset model data is scaled to equal that sum. Based on the ratio of each adjacent pair's distance to the distance sum, the scaled preset model data is divided longitudinally into the reference three-dimensional model data of each adjacent pair; the vertices of each pair's reference model data are then computed from the coordinates of the two points and the slope of their vector, to obtain the three-dimensional model data. Finally, the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered is generated from the three-dimensional model data and the material capture map.
Through the method above, dividing the preset three-dimensional model data into multiple pieces of reference model data makes it possible to update the vertex coordinates of the reference models from the coordinates of every two trajectory points and the slope of the vector they form, producing three-dimensional model data closer to the shape of the trajectory to be rendered; drawing that model data with the material capture map then lets the drawn image simulate real physical lighting.
Referring to FIG. 18, a trajectory rendering method provided by an embodiment of the present application is applied to the electronic device or server shown in FIG. 3 or FIG. 4, and includes:
Step S410: Acquire a trajectory to be rendered, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space.
Step S420: Generate three-dimensional model data based on the trajectory to be rendered.
Step S430: Acquire a preset brush material, preset lighting information and a preset sphere model, and generate a material capture map.
In the embodiments of the present application, the preset brush material may be generated from a preset texture map and a preset color, where the texture map may be fetched in advance from a texture library and the color may be any preconfigured color. The preset lighting information may be set in advance in the virtual space, or may be acquired by an AR device, and includes the light source coordinates and the light intensity. The preset sphere model is a commonly used material sphere.
From the preset brush material, preset lighting information and preset sphere model, a MatCap (material capture) map carrying the desired material and lighting effects can be generated; the generated MatCap map may be as shown in FIG. 19.
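At render time a MatCap map is typically sampled by remapping the view-space normal of each vertex into texture coordinates, so the pre-rendered sphere image supplies the baked material and lighting. The following sketch uses the conventional remapping formula, stated here as an assumption about the shader rather than taken from this document.

```python
# A sketch of the MatCap lookup that makes the map above usable at
# render time: the view-space normal of a vertex is remapped into
# [0, 1] texture coordinates and used to sample the pre-rendered
# sphere image. The remapping formula is the conventional one, an
# assumption about the shader rather than part of this document.

def matcap_uv(normal_view):
    """Map a unit view-space normal (nx, ny, nz) to MatCap UVs:
    u = nx * 0.5 + 0.5, v = ny * 0.5 + 0.5."""
    nx, ny, _ = normal_view
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5)

print(matcap_uv((0.0, 0.0, 1.0)))  # (0.5, 0.5): center, facing the camera
print(matcap_uv((1.0, 0.0, 0.0)))  # (1.0, 0.5): right edge of the sphere
```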
Step S440: Generate the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered, based on the three-dimensional model data and the material capture map.
In the embodiments of the present application, the generated three-dimensional graffiti rendering result may be as shown in FIG. 20, from which it can be seen that the result reflects physical lighting effects well.
Step S450: Transform the style of the three-dimensional graffiti rendering result.
In the embodiments of the present application, many graffiti beginners cannot draw the lines and curves they want, and their color choices are often unsatisfactory as well. A graffiti assistance function can therefore intelligently smooth and straighten the trajectory entered by the user; when the user doodles digits or letters, real-time communication (RTC) technology can even replace them automatically with stylized fonts and numerals; and after doodling, style transfer matched to the graffiti environment can make the picture more artistic. Optionally, a preset style may also be obtained locally on the terminal, and the style of the three-dimensional graffiti rendering result replaced with that preset style.
Step S460: Share and display the three-dimensional graffiti rendering result over a network.
In the embodiments of the present application, in a multi-user interaction scene in a virtual world, the three-dimensional graffiti rendering result can be shared with others over the network, and the process of generating it can be replayed synchronously in front of them. This offers a new way of socializing in a 3D virtual world, similar to leaving a message on a bulletin board; compared with plain, rigid font display, graffiti is more personal and more stylized.
In the trajectory rendering method provided by the present application, a trajectory to be rendered is first acquired and three-dimensional model data is generated from it; a preset brush material, preset lighting information and a preset sphere model are acquired to generate a material capture map; the three-dimensional graffiti rendering result corresponding to the trajectory is then generated from the three-dimensional model data and the material capture map, after which the style of the result can be transformed via real-time communication technology and the result can be shared. Through the method above, drawing with the material capture map lets the drawn image simulate real physical lighting.
Referring to FIG. 21, a trajectory rendering apparatus 500 provided by an embodiment of the present application includes:
a trajectory acquisition unit 510, configured to acquire a trajectory to be rendered, the trajectory to be rendered being the sliding trajectory of a touch object in three-dimensional space.
In one approach, the trajectory acquisition unit 510 is specifically configured to capture an image drawn by a touch object in three-dimensional space with an image capture device; identify a plurality of trajectory points in the drawn image; and smooth the plurality of trajectory points to obtain the trajectory to be rendered corresponding to them;
a data generation unit 520, configured to generate three-dimensional model data based on the trajectory to be rendered.
In one approach, the data generation unit 520 is specifically configured to establish a trajectory point combination from every two adjacent trajectory points in the trajectory to be rendered, to obtain a plurality of combinations; obtain, for each combination, the circle corresponding to each of its two trajectory points, each circle centered on its trajectory point; obtain, with the preset smoothness of each combination, a preset number of reference points on the circles of that combination, to obtain each combination's reference points; and generate the three-dimensional model data corresponding to the trajectory to be rendered from the reference points and trajectory points of each combination.
In another approach, the data generation unit 520 is specifically configured to acquire preset three-dimensional model data; obtain the sum of the distances between every two adjacent trajectory points in the trajectory to be rendered; scale the length of the preset model data to equal that sum, obtaining scaled preset three-dimensional model data; divide the scaled model data longitudinally, based on the ratio of each adjacent pair's distance to the distance sum, into the reference three-dimensional model data of each adjacent pair; and compute the vertices of each pair's reference model data from the coordinates of the two points and the slope of the vector they form, to obtain the three-dimensional model data;
a rendering unit 530, configured to generate the three-dimensional graffiti rendering result corresponding to the trajectory to be rendered, based on the three-dimensional model data and the material capture map.
Referring to FIG. 22, the apparatus 500 further includes:
a map generation unit 540, configured to acquire a preset brush material, preset lighting information and a preset sphere model, and generate a material capture map; and
a processing unit 550, configured to transform the style of the three-dimensional graffiti rendering result via real-time communication technology.
Optionally, the processing unit 550 is specifically configured to share and display the three-dimensional graffiti rendering result over a network.
It should be noted that the apparatus embodiments in the present application correspond to the foregoing method embodiments; for the specific principles of the apparatus embodiments, reference may be made to the content of the foregoing method embodiments, which is not repeated here.
An electronic device or server provided by the present application is described below with reference to FIG. 23.
Referring to FIG. 23, based on the trajectory rendering method and apparatus above, an embodiment of the present application further provides an electronic device or server 800 capable of executing the foregoing trajectory rendering method. The electronic device or server 800 includes one or more (only one is shown) processors 802, a memory 804 and a network module 806, coupled to one another. The memory 804 stores a program capable of executing the content of the foregoing embodiments, and the processor 802 can execute the program stored in the memory 804.
The processor 802 may include one or more processing cores. The processor 802 connects the parts of the server 800 through various interfaces and lines, and performs the various functions of the server 800 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 804 and invoking the data stored in the memory 804. Optionally, the processor 802 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The processor 802 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like. The CPU mainly handles the operating system, user interface, applications and so on; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 802 and instead be implemented by a separate communication chip.
The memory 804 may include random access memory (RAM) or read-only memory (ROM). The memory 804 may be used to store instructions, programs, code, code sets or instruction sets. The memory 804 may include a program storage area and a data storage area. The program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function or an image playback function), and instructions for implementing the method embodiments described herein. The data storage area may store data created by the electronic device or server 800 in use (such as a phone book, audio and video data, or chat records).
The network module 806 is used to receive and transmit electromagnetic waves, converting between electromagnetic waves and electrical signals so as to communicate with a communication network or other devices, for example with an audio playback device. The network module 806 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory and so on. The network module 806 may communicate with networks such as the Internet, an intranet or a wireless network, or communicate with other devices over a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. For example, the network module 806 may exchange information with a base station.
请参考图24,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读存储介质900中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。Please refer to FIG. 24 , which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application. The computer-readable storage medium 900 stores program codes, and the program codes can be invoked by the processor to execute the methods described in the above method embodiments.
计算机可读存储介质900可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质900包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质900具有执行上述方法中的任何方法步骤的程序代码910的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码910可以例如以适当形式进行压缩。The computer-readable storage medium 900 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM. Optionally, the computer-readable storage medium 900 includes a non-transitory computer-readable storage medium. Computer readable storage medium 900 has storage space for
In the trajectory rendering method, device, electronic device, and storage medium provided by the present application, a trajectory to be rendered is first obtained, the trajectory to be rendered being a sliding trajectory of a touch object in three-dimensional space; 3D model data is then generated based on the trajectory to be rendered; and finally, a 3D graffiti rendering result corresponding to the trajectory to be rendered is generated based on the 3D model data and a material capture (MatCap) map. With this method, images drawn in three-dimensional space are shaded through a material capture map, so that the drawn image can simulate real physical lighting effects and thus appear more realistic.
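The pipeline summarized above (sliding trajectory → 3D model data → MatCap-shaded result) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the flat-ribbon extrusion, the function names, and the CPU-side NumPy texture lookup are all assumptions. The MatCap step itself is the standard technique of projecting a view-space unit normal's x/y components into [0, 1]² and sampling a pre-lit sphere image at those coordinates.

```python
import numpy as np

def trajectory_to_strip(points, width=0.05):
    """Extrude a 3D trajectory polyline into triangle-strip vertices.

    points: (N, 3) positions sampled from the sliding trajectory (N >= 2,
    consecutive samples assumed distinct). Each sample is widened
    perpendicular to the local tangent (in the XY plane, for simplicity),
    yielding 2*N vertices forming simple "ribbon" model data.
    """
    points = np.asarray(points, dtype=float)
    verts = []
    for i, p in enumerate(points):
        # Tangent via forward difference; last point reuses the last segment.
        j = min(i, len(points) - 2)
        t = points[j + 1] - points[j]
        t = t / np.linalg.norm(t)
        side = np.cross(t, [0.0, 0.0, 1.0])  # in-plane perpendicular
        n = np.linalg.norm(side)
        side = side / n if n > 1e-8 else np.array([1.0, 0.0, 0.0])
        verts.append(p - side * (width / 2))
        verts.append(p + side * (width / 2))
    return np.array(verts)

def matcap_uv(normal_view):
    """Standard MatCap lookup: map a view-space normal to [0, 1]^2 UVs."""
    n = np.asarray(normal_view, dtype=float)
    n = n / np.linalg.norm(n)
    return n[:2] * 0.5 + 0.5

def shade_with_matcap(normals_view, matcap):
    """Sample a MatCap image (H, W, 3) with per-vertex view-space normals."""
    h, w = matcap.shape[:2]
    uvs = np.stack([matcap_uv(n) for n in normals_view])
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return matcap[py, px]
```

Because the MatCap image is pre-lit, the lookup needs no runtime lights: a normal facing the camera, e.g. (0, 0, 1), samples the center of the sphere image at UV (0.5, 0.5), which is what lets the drawn stroke mimic physically lit material at texture-fetch cost.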
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, and without departing from the spirit of the present invention or the scope protected by the claims, those of ordinary skill in the art can devise many other forms, all of which fall within the protection of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210647498.5A CN115063518B (en) | 2022-06-08 | 2022-06-08 | Trajectory rendering method, device, electronic device and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210647498.5A CN115063518B (en) | 2022-06-08 | 2022-06-08 | Trajectory rendering method, device, electronic device and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115063518A true CN115063518A (en) | 2022-09-16 |
| CN115063518B CN115063518B (en) | 2025-01-07 |
Family
ID=83199471
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210647498.5A Active CN115063518B (en) | 2022-06-08 | 2022-06-08 | Trajectory rendering method, device, electronic device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115063518B (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115496832A (en) * | 2022-10-12 | 2022-12-20 | 北京天融信网络安全技术有限公司 | Method, device, electronic device, and computer-readable storage medium for drawing trajectory |
| CN116030221A (en) * | 2022-10-28 | 2023-04-28 | 北京字跳网络技术有限公司 | Processing method, device, electronic device and storage medium of augmented reality image |
| CN116824009A (en) * | 2023-06-29 | 2023-09-29 | 广州市大神文化传播有限公司 | An animation rendering method, system, device and storage medium |
| WO2024051756A1 (en) * | 2022-09-08 | 2024-03-14 | 北京字跳网络技术有限公司 | Special effect image drawing method and apparatus, device, and medium |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106898040A (en) * | 2017-03-06 | 2017-06-27 | 网易(杭州)网络有限公司 | Virtual resource object rendering intent and device |
| CN108268257A (en) * | 2016-12-29 | 2018-07-10 | 福建省天奕网络科技有限公司 | Line trajectory drawing method and system applied to VR scenes |
| US20190017838A1 (en) * | 2017-07-14 | 2019-01-17 | Rosemount Aerospace Inc. | Render-based trajectory planning |
| CN109224448A (en) * | 2018-09-25 | 2019-01-18 | 北京天马时空网络技术有限公司 | Method and apparatus for streamer rendering |
| US20200184714A1 (en) * | 2017-08-18 | 2020-06-11 | Tencent Technology (Shenzhen) Company Limited | Method for rendering of simulated illumination and terminal |
| CN111833459A (en) * | 2020-07-10 | 2020-10-27 | 北京字节跳动网络技术有限公司 | An image processing method, device, electronic device and storage medium |
| US20200404179A1 (en) * | 2018-04-09 | 2020-12-24 | SZ DJI Technology Co., Ltd. | Motion trajectory determination and time-lapse photography methods, device, and machine-readable storage medium |
| US20220005273A1 (en) * | 2020-07-01 | 2022-01-06 | Wacom Co., Ltd. | Dynamic three-dimensional surface sketching |
| CN114419099A (en) * | 2022-01-18 | 2022-04-29 | 腾讯科技(深圳)有限公司 | Method for capturing motion trail of virtual object to be rendered |
| CN114419230A (en) * | 2022-01-21 | 2022-04-29 | 北京字跳网络技术有限公司 | An image rendering method, device, electronic device and storage medium |
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108268257A (en) * | 2016-12-29 | 2018-07-10 | 福建省天奕网络科技有限公司 | Line trajectory drawing method and system applied to VR scenes |
| CN106898040A (en) * | 2017-03-06 | 2017-06-27 | 网易(杭州)网络有限公司 | Virtual resource object rendering intent and device |
| US20190017838A1 (en) * | 2017-07-14 | 2019-01-17 | Rosemount Aerospace Inc. | Render-based trajectory planning |
| US20200184714A1 (en) * | 2017-08-18 | 2020-06-11 | Tencent Technology (Shenzhen) Company Limited | Method for rendering of simulated illumination and terminal |
| US20200404179A1 (en) * | 2018-04-09 | 2020-12-24 | SZ DJI Technology Co., Ltd. | Motion trajectory determination and time-lapse photography methods, device, and machine-readable storage medium |
| CN109224448A (en) * | 2018-09-25 | 2019-01-18 | 北京天马时空网络技术有限公司 | Method and apparatus for streamer rendering |
| US20220005273A1 (en) * | 2020-07-01 | 2022-01-06 | Wacom Co., Ltd. | Dynamic three-dimensional surface sketching |
| CN111833459A (en) * | 2020-07-10 | 2020-10-27 | 北京字节跳动网络技术有限公司 | An image processing method, device, electronic device and storage medium |
| CN114419099A (en) * | 2022-01-18 | 2022-04-29 | 腾讯科技(深圳)有限公司 | Method for capturing motion trail of virtual object to be rendered |
| CN114419230A (en) * | 2022-01-21 | 2022-04-29 | 北京字跳网络技术有限公司 | An image rendering method, device, electronic device and storage medium |
Non-Patent Citations (3)
| Title |
|---|
| KOYAMA Y等: "A Procedural MatCap System for Cel-Shaded Japanese Animation Production", 《 SA \'21 POSTERS: SIGGRAPH ASIA 2021 POSTERS》, 17 December 2021 (2021-12-17) * |
| YANG Lijie; XU Tianchen; WU Enhua: "Dynamically Simulating Stroke Drawing in Chinese Ink Painting", Journal of Computer-Aided Design & Computer Graphics, no. 05, 15 May 2016 (2016-05-15) * |
| WANG Fei: "Stroke-Template-Based Terrain Editing", China Master's Theses Full-text Database, Information Science and Technology, 15 April 2020 (2020-04-15) * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2024051756A1 (en) * | 2022-09-08 | 2024-03-14 | 北京字跳网络技术有限公司 | Special effect image drawing method and apparatus, device, and medium |
| CN115496832A (en) * | 2022-10-12 | 2022-12-20 | 北京天融信网络安全技术有限公司 | Method, device, electronic device, and computer-readable storage medium for drawing trajectory |
| CN116030221A (en) * | 2022-10-28 | 2023-04-28 | 北京字跳网络技术有限公司 | Processing method, device, electronic device and storage medium of augmented reality image |
| WO2024088144A1 (en) * | 2022-10-28 | 2024-05-02 | 北京字跳网络技术有限公司 | Augmented reality picture processing method and apparatus, and electronic device and storage medium |
| CN116030221B (en) * | 2022-10-28 | 2025-11-28 | 北京字跳网络技术有限公司 | Processing method and device of augmented reality picture, electronic equipment and storage medium |
| CN116824009A (en) * | 2023-06-29 | 2023-09-29 | 广州市大神文化传播有限公司 | An animation rendering method, system, device and storage medium |
| CN116824009B (en) * | 2023-06-29 | 2024-03-26 | 广州市大神文化传播有限公司 | Animation rendering method, system, equipment and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115063518B (en) | 2025-01-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN115063518B (en) | Trajectory rendering method, device, electronic device and storage medium | |
| CN112037311B (en) | Animation generation method, animation playing method and related devices | |
| US11062494B2 (en) | Electronic messaging utilizing animatable 3D models | |
| KR102491140B1 (en) | Method and apparatus for generating virtual avatar | |
| US12002160B2 (en) | Avatar generation method, apparatus and device, and medium | |
| CN112669417B (en) | Virtual image generation method and device, storage medium and electronic equipment | |
| CN113905251A (en) | Virtual object control method, device, electronic device and readable storage medium | |
| CN111047509B (en) | Image special effect processing method, device and terminal | |
| CN114663560B (en) | Method, device, storage medium and electronic device for realizing animation of target model | |
| CN116958344A (en) | Animation generation method and device for virtual image, computer equipment and storage medium | |
| CN111652794B (en) | Face adjusting and live broadcasting method and device, electronic equipment and storage medium | |
| CN111652807B (en) | Eye adjusting and live broadcasting method and device, electronic equipment and storage medium | |
| US20180276870A1 (en) | System and method for mass-animating characters in animated sequences | |
| CN113487662A (en) | Picture display method and device, electronic equipment and storage medium | |
| WO2021139359A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
| CN109145688A (en) | The processing method and processing device of video image | |
| CN111652791A (en) | Face replacement display method, face replacement display device, live broadcast method, live broadcast device, electronic equipment and storage medium | |
| CN111462205B (en) | Transformation of image data, live broadcast method, device, electronic equipment and storage medium | |
| CN111652024B (en) | Face display and live broadcast method and device, electronic equipment and storage medium | |
| CN114904279B (en) | Data preprocessing method, device, medium and equipment | |
| CN114742970B (en) | Virtual three-dimensional model processing method, non-volatile storage medium and electronic device | |
| WO2020042442A1 (en) | Expression package generating method and device | |
| CN111652025B (en) | Face processing and live broadcasting method and device, electronic equipment and storage medium | |
| CN111652023B (en) | Mouth-type adjustment and live broadcast method and device, electronic equipment and storage medium | |
| CN116112716B (en) | Virtual person live broadcast method, device and system based on single instruction stream and multiple data streams |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |