
CN119680189A - Stroke display method, device, medium, electronic device and program product - Google Patents


Info

Publication number
CN119680189A
CN119680189A
Authority
CN
China
Prior art keywords
image
virtual object
virtual
outer contour
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411814690.4A
Other languages
Chinese (zh)
Inventor
李思萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202411814690.4A priority Critical patent/CN119680189A/en
Publication of CN119680189A publication Critical patent/CN119680189A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract


The present disclosure relates to a stroke display method, device, medium, electronic device and program product in the field of computer technology. The method displays a virtual scene picture; in response to a selection operation on a virtual object in the virtual scene, it strokes the outer contour of the virtual object in an image space corresponding to the virtual scene picture to obtain the virtual object's outer contour stroke, and overlays that stroke on the virtual scene picture for display. As a result, when the virtual scene is zoomed, the width of the outer contour stroke is unaffected and remains constant, ensuring a consistent stroke effect.

Description

Stroke display method, device, medium, electronic device and program product
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a stroke display method, apparatus, medium, electronic device, and program product.
Background
In UGC (User Generated Content) games, players and game developers are free to edit the game scene. While editing, displaying the selected-state effect of a virtual object (Entity) is important for helping the player clearly identify which virtual object is currently being operated on. In general, the selected state of a virtual object in a virtual scene may be presented by stroking the outer contour of the selected object, enhancing its visibility in the game scene.
In the related art, displaying a stroke effect in a game scene requires rendering the model twice. In the first pass, the model's vertex coordinates are extruded outward along the normal direction and the result is rendered in the stroke color; in the second pass, the model's original vertices are rendered normally. The second pass covers the region where it overlaps the first, so the uncovered band left over from the first pass appears as the stroke.
However, the width of a stroke obtained this way varies with the viewing distance to the game scene; when the viewing distance is large, the player may not even be able to see the stroke effect clearly.
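The first pass of that two-pass scheme amounts to plain vertex arithmetic. A hedged sketch, not the disclosure's code — the vertex, normal, and the name `extrude_vertex` are illustrative assumptions:

```python
def extrude_vertex(position, normal, w):
    """First-pass vertex step: push the position outward along its unit normal
    by the outline width w, so the stroke-colored shell is slightly larger
    than the model rendered in the second pass."""
    return tuple(p + w * n for p, n in zip(position, normal))

# A unit-sphere vertex extruded by 0.1 world units moves out to radius 1.1:
print(extrude_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.1))
```

Because `w` here is expressed in world units, the shell's on-screen thickness shrinks as the camera moves away — exactly the inconsistency the disclosure avoids by stroking in image space instead.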
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a stroke display method, including:
Displaying a virtual scene picture;
Responding to a selection operation for a virtual object in the virtual scene, and carrying out edge tracing on the outer outline of the virtual object in an image space corresponding to the virtual scene picture to obtain the outer outline edge tracing of the virtual object;
And superposing and displaying the outline tracing of the virtual object on the virtual scene picture, wherein the width of the outline tracing is kept unchanged when the virtual scene is zoomed.
In a second aspect, the present disclosure provides a stroked display apparatus comprising:
the first display module is configured to display a virtual scene picture;
the tracing module is configured to respond to the selection operation of the virtual object in the virtual scene, and trace the outline of the virtual object in an image space corresponding to the virtual scene picture to obtain the outline tracing of the virtual object;
and a second display module configured to superimpose and display an outer contour tracing of the virtual object on the virtual scene picture, wherein the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
A storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method according to the first aspect.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
Based on the above technical solution, a virtual scene picture is displayed; in response to a selection operation on a virtual object in the virtual scene, the outer contour of the virtual object is traced in the image space corresponding to the virtual scene picture to obtain the outer contour tracing of the virtual object, which is then superimposed and displayed on the virtual scene picture. As a result, the width of the outer contour tracing is not affected when the virtual scene is zoomed and always remains the same, ensuring a consistent tracing effect.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow chart illustrating a stroked display method according to some embodiments.
FIG. 2 is a schematic diagram illustrating a virtual object being at least partially occluded, according to some embodiments.
FIG. 3 is a flow chart illustrating a stroked display method according to further embodiments.
Fig. 4 is a schematic diagram of a virtual scene screen shown in accordance with some embodiments.
Fig. 5 is a schematic diagram of a first image shown according to some embodiments.
Fig. 6 is a schematic diagram of a second image shown according to some embodiments.
FIG. 7 is a schematic diagram illustrating a display of an outline tracing, according to some embodiments.
Fig. 8 is a detailed flowchart of step 330 shown in fig. 3.
Fig. 9 is a schematic diagram of a sampling range shown according to some embodiments.
Fig. 10 is a schematic diagram of a second pixel point shown according to some embodiments.
FIG. 11 is a schematic diagram of a virtual scene screen shown according to some embodiments.
Fig. 12 is a schematic diagram of a first image shown according to some embodiments.
Fig. 13 is a schematic diagram of an outer contoured tracing shown in accordance with some embodiments.
FIG. 14 is a schematic diagram illustrating a display of an outline tracing, according to some embodiments.
Fig. 15 is a schematic diagram illustrating gaussian blur effects according to some embodiments.
FIG. 16 is a schematic diagram illustrating a display of an outlining according to further embodiments.
Fig. 17 is a schematic diagram of a structure of a described display device, shown according to some embodiments.
Fig. 18 is a schematic structural diagram of an electronic device shown according to some embodiments.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; "another embodiment" means "at least one additional embodiment"; and "some embodiments" means "at least some embodiments". Definitions of other relevant terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
FIG. 1 is a flow chart illustrating a stroke display method according to some embodiments. As shown in fig. 1, an embodiment of the present disclosure provides a stroke display method, which may be performed by an electronic device, and in particular by a stroke display apparatus, where the apparatus may be implemented by software and/or hardware and configured in the electronic device. The method may include the following steps.
In step 110, a virtual scene picture is displayed.
Here, the virtual scene picture presents a virtual scene. A virtual scene is a scene displayed or provided by an application program running on the electronic device; it may be a game scene, such as a two-dimensional game scene, a three-dimensional game scene, a virtual reality scene, and so on.
It should be appreciated that the virtual scene picture may be a picture obtained by picture capturing of a virtual scene by a virtual camera.
In step 120, in response to the selection operation for the virtual object in the virtual scene, the outer contour of the virtual object is traced in the image space corresponding to the virtual scene picture, and outer contour tracing of the virtual object is obtained.
Here, the selection operation for the virtual object in the virtual scene may be a click operation, a box selection operation, or the like for the virtual object in the virtual scene. A virtual object may refer to an element used to build an environment in a virtual scene. For example, the virtual object may be a building, tree, etc. model in a virtual scene.
Outlining refers to the additional visual effect added to the outer contour of a virtual object. After a virtual object in the virtual scene is selected, the virtual object is in a selected state, and in order to display the virtual object in the selected state in the virtual scene, an outline description may be added to the virtual object to highlight the selected virtual object in the virtual scene.
The image space of the virtual scene picture is a two-dimensional space defined by the width and the height of the picture. The image space corresponding to the virtual scene picture refers to an image space that coincides with it.
It is noted that the size of the image space of the virtual scene picture may be determined by the size of the screen used to display the virtual scene picture. Therefore, the outer contour of the virtual object is traced in the image space corresponding to the virtual scene picture, which can also be understood as tracing the outer contour of the virtual object in the screen space.
The virtual object may be rendered into an image space corresponding to the virtual scene picture, and then, in the image space, an outer contour of the virtual object is traced based on a preset tracing width, so as to obtain an outer contour tracing of the virtual object.
It should be noted that the tracing of the outline of the virtual object in the image space corresponding to the virtual scene is not to directly trace the outline of the virtual object in the virtual scene, but to trace the outline of the virtual object in the image space in an image space consistent with the image space of the virtual scene.
It should be appreciated that in embodiments of the present disclosure, the obtained outer contour tracing of the virtual object may be an image that includes the outer contour tracing of the virtual object.
In step 130, the outer contour tracing of the virtual object is superimposed on the virtual scene picture, wherein the width of the outer contour tracing remains unchanged as the virtual scene is scaled.
Here, the outer contour tracing of the virtual object in the selected state may be superimposed on the virtual scene picture, thereby displaying the outer contour tracing of the virtual object in the selected state.
As some examples, the obtained outer contour tracing of the virtual object may be superimposed on an upper display of the virtual scene picture.
As yet other examples, the obtained outer contour tracing of the virtual object may be fused at an upper layer of the virtual scene picture to display the outer contour tracing of the virtual object by displaying the fused virtual scene picture.
That is, in the embodiment of the present disclosure, the outer contour tracing of the virtual object may be superimposed and displayed by one image (the virtual scene picture after fusion), or the outer contour tracing of the virtual object may be superimposed and displayed by two images (the virtual scene picture and the outer contour tracing).
It should be noted that, since the outer contour tracing of the virtual object is obtained by tracing the outer contour of the virtual object in the image space corresponding to the virtual scene picture, even if the size of the displayed virtual object is changed, the width of the outer contour tracing of the virtual object before and after the size change is still the preset tracing width, so that the width of the outer contour tracing of the virtual object remains unchanged when the virtual scene is zoomed.
For example, when the virtual scene is enlarged, the virtual object is enlarged accordingly, which corresponds to a smaller line-of-sight distance between the player and the virtual object; when the virtual scene is reduced, the virtual object is reduced accordingly, which corresponds to a larger line-of-sight distance. With the stroking display method provided by the embodiments of the present disclosure, the outer contour tracing is obtained in the image space corresponding to the virtual scene picture and is therefore tied to that image space: even if the line-of-sight distance in the virtual scene changes, the width of the outer contour tracing is not affected. By superimposing the outer contour tracing obtained in image space on the virtual scene picture, the tracing width the player sees on an enlarged virtual object is the same as on a reduced one, so the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
It should be noted that if the outer contour were instead traced in the geometric space of the virtual scene, the tracing would correspond to lines in the scene itself: as the virtual scene is zoomed out or in, the tracing would shrink or grow with it, so its width would change with the line-of-sight distance, and as that distance grows the user might not be able to see the tracing at all. Such a tracing would also be affected by perspective, appearing wider at near locations and narrower at far ones.
According to the tracing display method provided by the embodiment of the disclosure, when a player zooms a virtual scene, the zoomed virtual scene is presented through the virtual scene picture, and because the outer contour tracing is obtained in the image space corresponding to the virtual scene picture, even if the player zooms the virtual scene, the width of the outer contour tracing is kept consistent, so that the consistency of the outer contour tracing is ensured.
Therefore, by displaying the virtual scene picture, tracing the outer contour of the virtual object in the image space corresponding to the virtual scene picture in response to a selection operation on the virtual object, obtaining the outer contour tracing, and superimposing it on the virtual scene picture for display, the width of the outer contour tracing is not affected when the virtual scene is zoomed and always remains the same, ensuring a consistent tracing effect.
In some implementations, the outer contour tracing of the virtual object is a complete outer contour tracing of the virtual object when the virtual object is at least partially occluded.
Here, the virtual object being at least partially occluded may mean that a part or all of the virtual object is occluded. FIG. 2 is a schematic diagram illustrating a virtual object being at least partially occluded, according to some embodiments. As shown in fig. 2, in a virtual scene 201, a first virtual object 202 is occluded by a second virtual object 203 and a third virtual object 204. In this scenario, the outer contour tracing 205 of the first virtual object 202 is the complete outer contour tracing of the first virtual object 202.
It should be noted that, the complete outline tracing of the virtual object refers to an outline tracing obtained by tracing a complete outline line of the virtual object. Even if the virtual object is at least partially occluded in the virtual scene, the obtained outer contour tracing is still the complete outer contour tracing of the virtual object.
Therefore, through the embodiment, even if the virtual object in the selected state in the virtual scene is partially or completely blocked by other virtual objects, the complete outline description of the virtual object in the selected state can be displayed, so that the player can clearly identify the virtual object in the selected state in the virtual scene.
FIG. 3 is a flow chart illustrating a stroked display method according to further embodiments. As shown in fig. 3, the following steps may be included.
In step 310, a virtual scene picture is displayed.
Here, the detailed description of step 310 may refer to the related description of step 110 in the above embodiments, and will not be repeated here.
In step 320, the virtual object is rendered to a first image, where the first image is an image constructed based on an image space corresponding to the virtual scene picture.
Here, the virtual object in the selected state in the virtual scene may be individually rendered to the first image. The first image may be an image constructed based on an image space corresponding to the virtual scene picture. That is, the width and height of the first image are consistent with the width and height of the virtual scene picture.
It should be appreciated that the size of the virtual objects in the first image is consistent with the size, presentation angle, etc. of the virtual objects in the virtual scene.
In Unity, the LayerMask mechanism provided by Unity may be used to render the virtual object in the selected state into the first image on its own: the selected virtual object is assigned a dedicated stroke layer, and the layers to be drawn are filtered via FilteringSettings in the DrawRenderers function.
Alternatively, the CommandBuffer API provided by Unity may be used to specify the virtual object in the selected state and render it into the first image. A CommandBuffer stores a series of rendering commands, such as setting a render target or drawing a particular mesh. Through a CommandBuffer, custom rendering commands can be inserted and executed at specific points in Unity's built-in rendering pipeline, specifying the rendering of particular virtual objects to control the rendering flow at a finer granularity.
It should be noted that, in the embodiment of the present disclosure, the specific implementation of rendering the virtual object to the first image is not limited, and any game engine or any rendering manner that can be implemented may be used to render the virtual object to the first image.
Fig. 4 is a schematic diagram of a virtual scene screen shown in accordance with some embodiments. As shown in fig. 4, in the virtual scene screen 401, a virtual object 402 is a virtual object in a selected state.
Fig. 5 is a schematic diagram of a first image shown according to some embodiments. As shown in fig. 5, a virtual object 402 in a virtual scene is rendered in a first image 403.
It should be noted that, even if the virtual object in the virtual scene is occluded, the virtual object in the virtual scene is still completely rendered in the first image. By rendering the virtual object to the first image, not only can the outer contour tracing of the virtual object be obtained through the first image, but also the complete outer contour tracing of the virtual object can be obtained.
In some embodiments, the virtual object is rendered to the first image with a solid color, the first image having a color different from the solid color.
In the embodiments of the present disclosure, the solid color is used as the material of the virtual object, and the virtual object with the solid-color material is then rendered into the first image. Illustratively, the solid color may be white and the first image may be black. Of course, the solid color may be any other color, set according to the actual situation.
It should be noted that rendering the virtual object to the first image by using the solid color means that the solid color is used as a rendering material of the virtual object, and then the virtual object of the solid color material is rendered to the first image. Since the solid color does not contain complex texture details, the inner contour of the virtual object in the first image may be weakened. Wherein the inner contour refers to an edge line inside the virtual object, which defines the shape and boundaries of the internal structure of the virtual object. For example, the inner contour may be a void, a groove, or a boundary between different surfaces inside the virtual object. Further, by rendering the virtual object to the first image using a solid color, it is possible to facilitate the subsequent use of color channels to distinguish between pixels belonging to the virtual object and pixels not belonging to the virtual object in the first image.
As shown in fig. 5, a virtual object 402 is rendered in a first image 403 that is entirely black using white material.
In other embodiments, when the number of the selected virtual objects is a plurality of, rendering the plurality of virtual objects to the first image through the same solid color, so as to obtain an outline drawing corresponding to an overall outline formed by the plurality of virtual objects through the first image, wherein the color of the first image is different from the solid color.
When multiple virtual objects are selected, there are multiple virtual objects in the selected state in the virtual scene. The multiple virtual objects are rendered into the first image with the same solid color; because they share the same solid color, overlapping parts show no color difference, so the outer contour tracing corresponding to the overall outer contour formed by the multiple virtual objects can be obtained through the first image.
It should be noted that, the overall outline formed by the plurality of virtual objects may refer to an outline curve formed by the plurality of virtual objects when they are combined together. When the plurality of virtual objects comprise overlapped virtual objects, the whole outline corresponding to the overlapped virtual objects is a closed curve formed by connecting all vertexes of the overlapped virtual objects according to a certain sequence. When the plurality of virtual objects includes non-overlapping virtual objects, the overall outer contour of the plurality of non-overlapping virtual objects is made up of the outer contours of the individual virtual objects. Thus, where the plurality of virtual objects includes overlapping virtual objects and non-overlapping virtual objects, the overall outline formed by the plurality of virtual objects may include a closed curve formed by the overlapping virtual objects and an outline formed by each non-overlapping virtual object separately.
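The solid-color rendering above can be sketched with NumPy standing in for the renderer (the 8×8 resolution and the value 255 for the solid color are assumptions for illustration, not the disclosure's code): two overlapping selected objects rasterized with the same solid color into a black first image merge into one seamless mask.

```python
import numpy as np

H, W = 8, 8
first_image = np.zeros((H, W), dtype=np.uint8)  # black background image

# Two overlapping selected objects, both drawn with the same solid color (255).
first_image[1:5, 1:5] = 255  # object A
first_image[3:7, 3:7] = 255  # object B, partially overlapping A

# Because both objects share one solid color, the overlap shows no color
# difference: the image only distinguishes "inside a selected object" (255)
# from "background" (0), which is exactly what the later contour step needs.
```

A downstream pass can then trace the single overall outer contour of the merged region rather than two intersecting per-object contours.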
In step 330, a second image is obtained that includes an outline description of the virtual object from the first image.
Here, the pixel positions that require tracing may be calculated in the first image, and then a second image including the outer contour tracing of the virtual object may be drawn based on those positions.
It should be noted that the second image may be an image constructed based on an image space corresponding to the virtual scene picture. That is, the width and height of the second image may be consistent with the width and height of the virtual scene picture and the first image.
Fig. 6 is a schematic diagram of a second image shown according to some embodiments. As shown in fig. 6, the second image 405 includes an outlining 404.
It is worth noting that the area of the second image other than the outline border may be transparent. Of course, the second image may also comprise only pixels of the outlining.
In step 340, a second image is superimposed on the virtual scene picture.
Here, the second image may be incorporated into the virtual scene screen to be superimposed on the virtual scene screen for display. For example, the outer contour tracing 404 shown in FIG. 6 may be incorporated into the virtual scene picture shown in FIG. 4.
FIG. 7 is a schematic diagram illustrating a display of an outline tracing, according to some embodiments. As shown in fig. 7, an outline border 404 of a virtual object 402 is displayed in a virtual scene screen 401.
Therefore, through the embodiment, the width of the outline tracing is not influenced when the virtual scene is zoomed, and the outline tracing is always consistent, so that the consistency of the tracing effect is ensured. Moreover, since the virtual object is individually rendered to the first image and the outer contour tracing of the virtual object is superimposed on the upper layer of the virtual scene picture, even if the virtual object is blocked in the virtual scene, the complete outer contour tracing of the virtual object can be displayed.
Fig. 8 is a detailed flowchart of step 330 shown in fig. 3. As shown in fig. 8, in some implementations that may be implemented, the following steps may be included.
In step 331, a first pixel point not belonging to the virtual object is determined in the first image based on the color channel to which the virtual object belongs.
Here, as described in the above embodiments, the virtual object is rendered with a solid color into a first image whose color differs from that solid color. Therefore, the first pixel points that do not belong to the virtual object can be determined in the first image through the color channel of the virtual object's solid color and the color channel of the first image's background color.
As shown in fig. 5, the white virtual object 402 and the background color (black) of the first image 403 belong to different color channels, so the first pixel points that do not belong to the virtual object 402 can be identified in the first image 403 through the color channels.
For each pixel in the first image, whether it belongs to the virtual object can be determined from its color channel: if the pixel's color matches the color channel of the virtual object, the pixel belongs to the virtual object; otherwise, it is a first pixel point that does not belong to the virtual object.
It should be noted that, through the color channel, whether each pixel point of the first image is inside or outside the virtual object can be distinguished in the first image, so as to accurately tracing the outline of the virtual object.
In step 332, for each first pixel point, at least one pixel point is sampled in the first image within a sampling range centered on the first pixel point and having a preset stroke width as the radius.
Here, after determining the first pixel points, it may be determined for each first pixel point whether the first pixel point is a second pixel point for which edge tracing is required.
For each determined first pixel point, at least one pixel point is sampled in the first image within a sampling range centered on the first pixel point and having a preset stroke width as the radius. The preset stroke width can be set according to actual needs; for example, it may be 1 pixel, 2 pixels, and so on.
Fig. 9 is a schematic diagram of a sampling range shown according to some embodiments. As shown in fig. 9, a sampling range 902 is constructed with a first pixel 901 as a center and a tracing width as a radius, and a plurality of pixels are sampled from a first image through the sampling range 902.
It should be understood that some or all of the pixels within the sampling range are sampled pixels.
In step 333, when the sampled pixel points include a pixel point belonging to the virtual object, the first pixel point is determined as a second pixel point.
Here, when at least one of the sampled pixel points belongs to the virtual object, this indicates that the first pixel point is an edge point adjacent to the outer contour of the virtual object. In this case, the first pixel point is determined as a second pixel point that requires tracing.
Fig. 10 is a schematic diagram of a second pixel point shown according to some embodiments. As shown in fig. 10, the gray pixel is the second pixel to be traced.
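Steps 332 and 333 together amount to testing, for each first pixel point, whether any sample inside a disc whose radius is the preset stroke width falls on the object. A minimal sketch, assuming a 2-D boolean mask that is True inside the virtual object (the names and the CPU loop form are illustrative; a practical implementation would perform the same disc sampling per fragment in a shader):

```python
def stroke_pixels(mask, stroke_width=2):
    """Classify "second pixel points": background ("first") pixels that have
    at least one object pixel within a disc of radius `stroke_width`.

    `mask` is a 2-D list of booleans, True inside the virtual object.
    """
    h, w = len(mask), len(mask[0])
    r = stroke_width
    # Integer offsets inside the sampling disc (center included).
    disc = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]
    second = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue  # only first pixels (outside the object) qualify
            # The first pixel becomes a second pixel if any sampled
            # neighbour inside the disc belongs to the object.
            second[y][x] = any(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy, dx in disc)
    return second
```

With a stroke width of 1 pixel this yields exactly the one-pixel ring of gray pixels shown in fig. 10.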
In step 334, a second image is generated according to the position information of the second pixel, where the pixel corresponding to the second pixel in the second image includes a stroked color.
Here, after all the second pixel points have been determined in the first image, a second image is generated based on the position information of the second pixel points in the first image.
For example, a second image including the outline tracing of the virtual object may be obtained by taking a transparent image that belongs to the same image space as the first image, and coloring, with a preset tracing color, the pixel points in the transparent image that match the position information of the second pixel points.
In some embodiments, the pixel point corresponding to the second pixel point in the initial image may be colored according to the position information of the second pixel point, and the colored pixel point may be subjected to gaussian blur processing based on a gaussian blur algorithm, so as to obtain the second image.
Wherein the initial image may be an image having a width and a height that are identical to those of the first image. The initial image may refer to a transparent image, for example. After the position information of the second pixel point is obtained, coloring the pixel point corresponding to the second pixel point in the initial image by using a preset tracing color, and performing Gaussian blur processing on the colored pixel point by using a Gaussian blur algorithm.
The Gaussian blur algorithm is used to diffuse the blur effect of the colored pixel points toward the side away from the virtual object. The side away from the virtual object refers to the external space outside the outer contour of the virtual object, which does not belong to the virtual object. Diffusing the blur effect toward the outside of the virtual object optimizes the tracing effect without affecting the rendering of the virtual object itself.
The Gaussian blur algorithm can reduce the jaggy feeling of the outline tracing by diffusing the blurring effect of the colored pixel points to the side far away from the virtual object, so that the outline tracing effect is smoother and more natural.
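The coloring and directional blur of step 334 can be sketched as an alpha layer: second pixel points start fully opaque, a separable Gaussian softens them, and the object's interior is then cleared so the blur only diffuses to the side away from the object. The sigma value and 3-sigma kernel radius are common choices but assumptions here, not values from the disclosure:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blurred_stroke_alpha(second, obj_mask, sigma=1.0):
    """Build the alpha channel of the second image from the second pixel
    points, then zero it inside the virtual object so the blur only spreads
    outward (the directional effect described above).
    """
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    h, w = len(second), len(second[0])
    alpha = [[1.0 if second[y][x] else 0.0 for x in range(w)] for y in range(h)]

    def blur_1d(get, length):
        # Convolve one row/column, clamping samples at the borders.
        return [sum(k[i + radius] * get(min(max(p + i, 0), length - 1))
                    for i in range(-radius, radius + 1)) for p in range(length)]

    alpha = [blur_1d(lambda x, y=y: alpha[y][x], w) for y in range(h)]  # rows
    cols = [blur_1d(lambda y, x=x: alpha[y][x], h) for x in range(w)]   # columns
    alpha = [[cols[x][y] for x in range(w)] for y in range(h)]
    # Directional constraint: no blur bleeds into the object's interior.
    for y in range(h):
        for x in range(w):
            if obj_mask[y][x]:
                alpha[y][x] = 0.0
    return alpha
```

Clearing the interior after blurring is what keeps the rendering of the virtual object unaffected while the stroke fades smoothly outward.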
Therefore, through the embodiment, the virtual object is rendered to the first image, and the outline tracing of the virtual object can be accurately and clearly drawn by distinguishing the pixel points which do not belong to the virtual object, so that the inner outline inside the virtual object is prevented from being drawn.
The following describes the tracing display method according to the embodiments of the present disclosure with reference to figs. 11 to 16.
FIG. 11 is a schematic diagram of a virtual scene screen shown according to some embodiments. As shown in fig. 11, in response to a selection operation for a virtual object in a virtual scene, a corresponding virtual scene picture is acquired.
Fig. 12 is a schematic diagram of a first image shown according to some embodiments. As shown in fig. 12, in response to a selection operation for a virtual object in a virtual scene, a virtual object 1202 in the virtual scene that needs to be stroked is rendered into a first image 1201 using a solid color.
Fig. 13 is a schematic diagram of an outer contoured tracing shown in accordance with some embodiments. As shown in fig. 13, a second image 1301 including an outer contour tracing 1302 corresponding to a virtual object 1202 is obtained by the first image 1201 shown in fig. 12.
FIG. 14 is a schematic diagram illustrating a display of an outline tracing, according to some embodiments. As shown in figs. 11, 13, and 14, the outer contour tracing 1302 shown in fig. 13 is combined into the virtual scene picture shown in fig. 11, so that the outer contour tracing 1302 corresponding to the virtual object 1202 is displayed in the virtual scene picture.
Fig. 15 is a schematic diagram illustrating a Gaussian blur effect according to some embodiments. As shown in fig. 15, when the outer contour tracing is drawn, a directional blur is computed by the Gaussian blur algorithm so that the blur effect of the outer contour tracing spreads to the side away from the virtual object.
FIG. 16 is a schematic diagram illustrating a display of an outlining according to further embodiments. As shown in fig. 16, the outer contour tracing after the gaussian blur processing in fig. 15 is combined in the virtual scene picture, so as to reduce the jaggy feel of the outer contour tracing, and make the outer contour tracing effect smoother and more natural.
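Combining the tracing into the virtual scene picture, as in figs. 14 and 16, is a per-pixel "over" alpha blend in image space, which is why the stroke width is unaffected by scene zoom. A minimal sketch follows; the function name and the orange stroke color are illustrative assumptions:

```python
def composite_over(scene, stroke_alpha, stroke_color=(255, 120, 0)):
    """Superimpose the second image (stroke layer) over the virtual scene
    picture with standard "over" alpha blending.

    `scene` is a 2-D list of RGB tuples; `stroke_alpha` is a 2-D list of
    floats in [0, 1] giving the stroke layer's opacity per pixel.
    """
    out = []
    for srow, arow in zip(scene, stroke_alpha):
        out.append([tuple(int(a * c + (1.0 - a) * s)
                          for c, s in zip(stroke_color, px))
                    for px, a in zip(srow, arow)])
    return out
```

Because the blend is computed per pixel of the final picture, zooming the virtual scene changes only what the scene layer contains, not the stroke layer's width.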
Fig. 17 is a schematic structural diagram of a tracing display apparatus according to some embodiments. As shown in fig. 17, an embodiment of the present disclosure provides a stroked display apparatus 1700, the stroked display apparatus 1700 including:
a first display module 1701 configured to display a virtual scene picture;
A tracing module 1702 configured to, in response to a selection operation for a virtual object in the virtual scene, tracing an outer contour of the virtual object in an image space corresponding to the virtual scene picture, to obtain an outer contour tracing of the virtual object;
A second display module 1703 configured to superimpose and display an outer contour tracing of the virtual object on the virtual scene picture, wherein a width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
Optionally, when the virtual object is at least partially occluded, the outer contour tracing of the virtual object is a complete outer contour tracing of the virtual object.
Optionally, the tracing module 1702 is specifically configured to:
rendering the virtual object to a first image, wherein the first image is an image constructed based on an image space corresponding to the virtual scene picture;
Obtaining a second image comprising an outer contour tracing of the virtual object according to the first image;
The second display module 1703 is specifically configured to:
And superposing and displaying the second image on the virtual scene picture.
Optionally, the tracing module 1702 is specifically configured to:
and rendering the virtual object to a first image through the solid color, wherein the color of the first image is different from that of the solid color.
Optionally, the tracing module 1702 is specifically configured to:
And when the number of the selected virtual objects is multiple, rendering the multiple virtual objects to a first image through the same solid color so as to obtain outer contour tracing corresponding to the whole outer contour formed by the multiple virtual objects through the first image, wherein the color of the first image is different from the solid color.
Optionally, the tracing module 1702 is specifically configured to:
Determining a first pixel point which does not belong to the virtual object in the first image based on a color channel to which the virtual object belongs;
sampling at least one pixel point in the first image in a sampling range taking the first pixel point as a center and taking a preset tracing width as a radius for each first pixel point;
Determining the first pixel point as a second pixel point under the condition that the sampled pixel point has the pixel point belonging to the virtual object;
And generating a second image according to the position information of the second pixel point, wherein the pixel point corresponding to the second pixel point in the second image comprises a tracing color.
Optionally, the tracing module 1702 is specifically configured to:
Coloring a pixel point corresponding to the second pixel point in an initial image according to the position information of the second pixel point, and performing Gaussian blur processing on the colored pixel point based on a Gaussian blur algorithm to obtain the second image, wherein the Gaussian blur algorithm is used for enabling the blur effect of the colored pixel point to be diffused to one side far away from the virtual object.
For the logic of the method executed by each functional module in the above-mentioned stroked display apparatus 1700, reference may be made to the relevant portions of the method in the above-mentioned embodiments, which are not described herein again.
Referring now to fig. 18, a schematic diagram of an electronic device (e.g., terminal device) 1800 suitable for practicing the embodiments of the disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 18 is merely an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 18, the electronic device 1800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1801, which may perform various appropriate actions and processes in accordance with programs stored in a Read Only Memory (ROM) 1802 or programs loaded from a storage 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data required for the operation of the electronic device 1800 are also stored. The processing device 1801, the ROM 1802, and the RAM 1803 are connected to each other via a bus 1804. An input/output (I/O) interface 1805 is also connected to the bus 1804.
In general, the following devices may be connected to the I/O interface 1805: input devices 1806 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 1807 including a liquid crystal display (LCD), speaker, and vibrator; storage devices 1808 including magnetic tape and hard disk; and communication devices 1809. The communication devices 1809 may allow the electronic device 1800 to communicate with other devices, either wirelessly or by wire, to exchange data. While fig. 18 illustrates an electronic device 1800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1809, or from the storage device 1808, or from the ROM 1802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 1801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the electronic device may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs, when the one or more programs are executed by the electronic device, the electronic device is caused to display a virtual scene picture, in response to a selection operation for a virtual object in the virtual scene, carry out edge tracing on an outer outline of the virtual object in an image space corresponding to the virtual scene picture to obtain an outer outline of the virtual object, and display the outer outline of the virtual object in a superimposed manner on the virtual scene picture, wherein the width of the outer outline is kept unchanged when the virtual scene is zoomed.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems-on-Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be described in detail herein.

Claims (11)

1. A stroke display method, characterized by comprising:
displaying a virtual scene screen;
in response to a selection operation on a virtual object in the virtual scene, stroking an outer contour of the virtual object in an image space corresponding to the virtual scene screen, to obtain an outer contour stroke of the virtual object; and
superimposing and displaying the outer contour stroke of the virtual object on the virtual scene screen, wherein a width of the outer contour stroke remains unchanged when the virtual scene is scaled.

2. The method according to claim 1, characterized in that, when the virtual object is at least partially occluded, the outer contour stroke of the virtual object is the complete outer contour stroke of the virtual object.

3. The method according to claim 1, characterized in that stroking the outer contour of the virtual object in the image space corresponding to the virtual scene screen to obtain the outer contour stroke of the virtual object comprises:
rendering the virtual object to a first image, wherein the first image is an image constructed based on the image space corresponding to the virtual scene screen; and
obtaining, according to the first image, a second image comprising the outer contour stroke of the virtual object;
and superimposing and displaying the outer contour stroke of the virtual object on the virtual scene screen comprises:
superimposing and displaying the second image on the virtual scene screen.

4. The method according to claim 3, characterized in that rendering the virtual object to the first image comprises:
rendering the virtual object to the first image in a solid color, wherein the color of the first image is different from the solid color.

5. The method according to claim 3, characterized in that rendering the virtual object to the first image comprises:
when a plurality of virtual objects are selected, rendering the plurality of virtual objects to the first image in the same solid color, so as to obtain, through the first image, an outer contour stroke corresponding to the overall outer contour formed by the plurality of virtual objects, wherein the color of the first image is different from the solid color.

6. The method according to any one of claims 3 to 5, characterized in that obtaining, according to the first image, the second image comprising the outer contour stroke of the virtual object comprises:
determining, based on a color channel to which the virtual object belongs, first pixel points in the first image that do not belong to the virtual object;
for each first pixel point, sampling at least one pixel point in the first image within a sampling range centered on the first pixel point and having a preset stroke width as the radius;
determining the first pixel point as a second pixel point when the sampled pixel points include a pixel point belonging to the virtual object; and
generating the second image according to position information of the second pixel points, wherein pixel points in the second image corresponding to the second pixel points comprise a stroke color.

7. The method according to claim 6, characterized in that generating the second image according to the position information of the second pixel points comprises:
coloring, according to the position information of the second pixel points, pixel points in an initial image corresponding to the second pixel points, and performing Gaussian blur processing on the colored pixel points based on a Gaussian blur algorithm to obtain the second image, wherein the Gaussian blur algorithm is used to diffuse the blur effect of the colored pixel points toward the side away from the virtual object.

8. A stroke display apparatus, characterized by comprising:
a first display module configured to display a virtual scene screen;
a stroking module configured to, in response to a selection operation on a virtual object in the virtual scene, stroke an outer contour of the virtual object in an image space corresponding to the virtual scene screen, to obtain an outer contour stroke of the virtual object; and
a second display module configured to superimpose and display the outer contour stroke of the virtual object on the virtual scene screen, wherein a width of the outer contour stroke remains unchanged when the virtual scene is scaled.

9. A computer-readable medium having a computer program stored thereon, characterized in that the computer program, when executed by a processing device, implements the steps of the method according to any one of claims 1 to 7.

10. An electronic device, characterized by comprising:
a storage device having a computer program stored thereon; and
a processing device configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1 to 7.

11. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202411814690.4A 2024-12-10 2024-12-10 Stroke display method, device, medium, electronic device and program product Pending CN119680189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411814690.4A CN119680189A (en) 2024-12-10 2024-12-10 Stroke display method, device, medium, electronic device and program product

Publications (1)

Publication Number Publication Date
CN119680189A true CN119680189A (en) 2025-03-25



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination