
CN117252974A - Mapping method and device for three-dimensional image, electronic equipment and storage medium - Google Patents

Mapping method and device for three-dimensional image, electronic equipment and storage medium

Info

Publication number
CN117252974A
CN117252974A
Authority
CN
China
Prior art keywords
dimensional
image
dimensional image
selected region
mapping
Prior art date
Legal status
Pending
Application number
CN202311434416.XA
Other languages
Chinese (zh)
Inventor
李燕
张学成
刘琼瑶
秦锋慈
蒋晓明
Current Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202311434416.XA
Publication of CN117252974A
Legal status: Pending

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G06T 15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract


The present disclosure relates to a three-dimensional image mapping method and device, a storage medium, and an electronic device. The method includes: determining a three-dimensional selected region in the three-dimensional image; determining and displaying, based on the three-dimensional selected region, a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image; generating target map data based at least on the size of the two-dimensional selected region; replacing the image data of the two-dimensional selected region in the two-dimensional image with the target map data to generate a mapped two-dimensional image; and converting the mapped two-dimensional image into a mapped three-dimensional image. The disclosure replaces the target map according to the region selected by the user, so the user can place the target map at any desired position through free selection. This enriches the map types available for three-dimensional graphics, improves operability, increases the freedom of mapping, and effectively improves the user experience during the mapping process.

Description

Mapping method and device for three-dimensional image, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of graphics processing, and in particular relates to a mapping method and device of a three-dimensional image, electronic equipment and a storage medium.
Background
In the prior art, an image that needs a map is generally imported into software, the required map is selected from the maps preset in the software, and the selected map is applied over the model to complete the replacement of the image's material map.
However, existing software can generally replace the map only for the whole image; it cannot freely map a partial region of the image (for example, a region freely selected by the user) and cannot achieve the mapping effect the user wants, which degrades the user experience.
Disclosure of Invention
The present disclosure has been made in view of the above problems, and provides a mapping method and device for a three-dimensional image, an electronic device, and a storage medium.
According to one aspect of the present disclosure, there is provided a mapping method of a three-dimensional image, including: determining a three-dimensional selected area in the three-dimensional image; determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region; generating target map data based at least on the size of the two-dimensional selected region; replacing the image data of the two-dimensional selected area in the two-dimensional image with target mapping data to generate a mapped two-dimensional image; and converting the mapped two-dimensional image into a mapped three-dimensional image.
Further, in the mapping method of a three-dimensional image according to an aspect of the present disclosure, determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region includes: mapping the three-dimensional image into a two-dimensional image; acquiring an initial two-dimensional image including a two-dimensional selected region corresponding to the three-dimensional selected region; generating a selected region display image corresponding to the initial two-dimensional image, wherein the texture and/or color of the selected region display image differs from that of the initial two-dimensional image; generating a mask image of the same size as the initial two-dimensional image, in which the region corresponding to the two-dimensional selected region is displayed in a first color and the region outside the two-dimensional selected region in a second color; and displaying the two-dimensional selected region in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image, the selected region display image, and the mask image, wherein the two-dimensional selected region is displayed in the texture and color of the selected region display image.
Further, a mapping method of a three-dimensional image according to an aspect of the present disclosure, generating target mapping data based on at least a size of a two-dimensional selected region, includes: determining a minimum circumscribed rectangular frame of the two-dimensional selected area; and generating target map data having the same aspect ratio as the minimum bounding rectangle.
Further, according to a mapping method of a three-dimensional image of an aspect of the present disclosure, replacing image data of a two-dimensional selected area in a two-dimensional image with target mapping data, generating a mapped two-dimensional image, includes: a mapped two-dimensional image is generated based on the map image, the initial two-dimensional image, and the mask image corresponding to the target map data.
Further, according to a mapping method of a three-dimensional image of an aspect of the present disclosure, determining a three-dimensional selected region in the three-dimensional image includes: in response to a smearing operation on the three-dimensional image, a three-dimensional selected region is determined.
Further, according to another aspect of the present disclosure, there is provided a mapping apparatus for a three-dimensional image, the apparatus comprising: a three-dimensional region acquisition unit that determines a three-dimensional selected region in the three-dimensional image; a two-dimensional region acquisition unit that determines and displays a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region; a data generation unit that generates target map data based at least on the size of the two-dimensional selected region; and an image generation unit that replaces the image data of the two-dimensional selected region in the two-dimensional image with the target map data to generate a mapped two-dimensional image and converts the mapped two-dimensional image into a mapped three-dimensional image.
Further, according to another three-dimensional image mapping apparatus of the present disclosure, the two-dimensional region acquisition unit is further configured to: mapping the three-dimensional image into a two-dimensional image; acquiring an initial two-dimensional image including a two-dimensional selected region corresponding to the three-dimensional selected region; generating a selected region display image corresponding to the initial two-dimensional image, wherein the texture and/or color of the selected region display image is different from the initial two-dimensional image; generating a mask image of the same size as the initial two-dimensional image, wherein an area corresponding to the two-dimensional selected area is displayed in a first color, and an area other than the two-dimensional selected area is displayed in a second color; displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image, the selected region presentation image, and the mask image, wherein the two-dimensional selected region is displayed in textures and colors of the selected region presentation image; determining a minimum circumscribed rectangular frame of the two-dimensional selected area; and generating target map data having the same aspect ratio as the minimum bounding rectangle; the data generating unit is further configured to generate a mapped two-dimensional image based on the mapped image, the initial two-dimensional image, and the mask image corresponding to the target mapped data.
Further, according to another three-dimensional image mapping apparatus of the present disclosure, the mapping apparatus further includes an area transmission unit that determines a three-dimensional selected area in response to a smearing operation on the three-dimensional image.
According to still another aspect of the present disclosure, there is provided an electronic device including: a memory for storing computer readable instructions; and a processor for executing the computer readable instructions to cause the electronic device to perform the mapping method of the three-dimensional image as above.
According to yet another aspect of the disclosure, a non-transitory computer-readable storage medium stores computer-readable instructions that, when executed by a processor, cause the processor to perform a mapping method of three-dimensional images in a virtual space.
As will be described in detail below, according to the mapping method and apparatus, electronic device, and storage medium for a three-dimensional image in a virtual space of the embodiments of the present disclosure, a selected region of the three-dimensional image may be replaced with the target map by selecting the region in the three-dimensional image, thereby completing the image-region replacement. With this technical scheme, a three-dimensional graphic can be mapped: the target map replaces the unfolded map of the user's selected region and is then restored onto the model, so the user can place the target map at any desired position through free selection. This enriches the map types for three-dimensional graphics, improves operability, increases the freedom of mapping, and effectively improves the user experience during the mapping process.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the technology claimed.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments thereof with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a schematic view illustrating a scene in which a mapping method of a three-dimensional image according to an embodiment of the present disclosure is applied;
FIG. 2 is a flow chart illustrating a method of mapping a three-dimensional image according to an embodiment of the present disclosure;
FIG. 3 is a flow chart further illustrating a method of mapping a three-dimensional image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an overall process of applying a mapping method of three-dimensional images according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a mapping method applying a three-dimensional image according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a mapping apparatus according to an embodiment of the present disclosure;
FIG. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the disclosure; and
FIG. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, exemplary embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. For example, the first color and the second color are only used to distinguish two different colors.
Fig. 1 is a schematic view illustrating a scene to which a mapping method of a three-dimensional image according to an embodiment of the present disclosure is applied. As shown in fig. 1, the user may freely select an area to be mapped (e.g., by a smearing operation on the original three-dimensional image), and a corresponding map is generated according to the user's needs; for example, a map may be generated by an artificial intelligence model from a keyword or description entered by the user, such as "I want an xxxx map", so that the generated map satisfying the user's needs is pasted into the map area the user freely selected.
The mapping processing method can be applied to a hardware environment formed by a server and a terminal device. The platform may provide services to the terminal device or to an application installed on it, which may be a video application, an instant-messaging application, a browser, an educational application, a game application, and so on. A database may be provided on the server or separately from it to provide data storage services for the server (for example, a game data storage server). The network connecting the devices may include, but is not limited to, a wired network (a local area network, a metropolitan area network, or a wide area network) or a wireless network (Bluetooth, WiFi, or another network enabling wireless communication). The user equipment mainly comprises a terminal device, a display device, and an input device. The terminal device may be a terminal on which an application is installed and may include, but is not limited to, at least one of: a mobile phone (such as an Android or iOS phone), a notebook computer, a tablet computer, a palmtop computer, a MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, a smart voice-interaction device, a smart household appliance, a vehicle-mounted terminal, or an aircraft. The input device may be a mouse, a VR handle, a scanner, a light pen, etc.; the display device may be an LED display; and the server may be a single server, a server cluster composed of multiple servers, or a cloud server. The specific types of these devices are not limited here and can be selected according to the actual situation.
The terminal device is connected with the display device and the input device; the connection may be a wired or a wireless communication connection.
Fig. 2 is a flowchart illustrating a mapping method of a three-dimensional image according to an embodiment of the present disclosure, as shown in fig. 2, the mapping method of the three-dimensional image may include:
determining a three-dimensional selected region 400 in the three-dimensional image; determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400; generating target map data based at least on the size of the two-dimensional selected region; replacing the image data of the two-dimensional selected region in the two-dimensional image with the target map data to generate a mapped two-dimensional image 407; and converting the mapped two-dimensional image 407 into a mapped three-dimensional image 408.
In step S201, the platform may acquire an image and, according to a user operation, acquire the selected region that the user selects on the image with an input device. The image may be loaded on the terminal device based on the user's selection or uploaded directly by the user, and the dimension of the selected region is the same as that of the image. The image is not limited to two or three dimensions and its format is not limited; the user may choose according to the actual situation. For ease of understanding, the following description takes a three-dimensional image as an example of the acquired image. The selected region may be a regular shape preset by the platform or an irregular region chosen by the user according to the actual situation. Further, the user may define the selected region by smearing or by dragging the input device.
As shown in step S202, since the dimension of the selected region matches that of the image, the selected region is three-dimensional. Considering that the three-dimensional image may have an irregular shape, mapping it directly would misalign the map with the model and prevent it from attaching at the correct position. Therefore, the map representation of the three-dimensional image can be unwrapped: each vertex of the three-dimensional model is projected onto a two-dimensional plane to generate the two-dimensional selected region, and projecting onto a plane reduces the influence of pose and shape changes.
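As a minimal illustration of the projection step, the sketch below drops one coordinate axis as a crude orthographic stand-in for a real UV unwrap; the function name and the axis-dropping approach are illustrative assumptions, not the patent's actual algorithm, and NumPy is assumed available.

```python
import numpy as np

def project_vertices(vertices_3d, drop_axis=2):
    """Orthographically project 3-D mesh vertices onto a 2-D plane by
    dropping one coordinate axis (a crude stand-in for a UV unwrap)."""
    verts = np.asarray(vertices_3d, dtype=float)
    keep = [i for i in range(3) if i != drop_axis]
    return verts[:, keep]

# One face of a unit cube, flattened onto the XY plane.
face = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
uv = project_vertices(face)
```

A production unwrap would instead cut the mesh along seams and minimize distortion, but the projected 2-D coordinates play the same role: they locate the selected region in the flattened image.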
Further, as shown in step S203, target map data may be generated according to the image size of the two-dimensional selected region, where the target map data includes at least the size of the target map 405. To facilitate subsequent generation of the target map 405, its size may be greater than or equal to the image size of the two-dimensional selected region; this prevents a target map 405 that is too small from losing pixels during subsequent enlargement, which would lower the resolution and increase the difficulty of later rendering.
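The size constraint above can be sketched as a simple per-axis clamp; the function name and signature are hypothetical, chosen only to show that the map is never generated smaller than the selected region.

```python
def target_map_size(region_w, region_h, desired_w, desired_h):
    """Clamp a desired map size so it is at least as large as the
    selected region on both axes, avoiding resolution loss when the
    map is later enlarged to fit the region."""
    return max(desired_w, region_w), max(desired_h, region_h)

# A 60x60 request against a 100x50 region is widened to 100x60.
w, h = target_map_size(100, 50, 60, 60)
```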
Further, as shown in step S204, the target map data may further include image data, the two-dimensional selected region image data is replaced with image data of the target map 405, a mapped target image is generated, and the mapped target image is converted into a three-dimensional image.
Fig. 3 is a flowchart further illustrating a mapping method of a three-dimensional image according to an embodiment of the present disclosure, and fig. 4 is a schematic diagram illustrating an overall process of applying the mapping method of a three-dimensional image according to an embodiment of the present disclosure, as shown in fig. 3 and 4, the mapping method may further include:
determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400, comprising: mapping the three-dimensional image into a two-dimensional image, acquiring an initial two-dimensional image 403 including a two-dimensional selected area corresponding to the three-dimensional selected area 400, generating a selected area display image 402 corresponding to the initial two-dimensional image 403, wherein the texture and/or color of the selected area display image 402 is different from the initial two-dimensional image 403, generating a mask image 404 of the same size as the initial two-dimensional image 403, wherein the area corresponding to the two-dimensional selected area is displayed in a first color, the area except the two-dimensional selected area is displayed in a second color, and displaying the two-dimensional selected area in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image 403, the selected area display image 402 and the mask image 404, wherein the two-dimensional selected area is displayed in the texture and color of the selected area display image 402.
After the three-dimensional selected region 400 is acquired, the platform may unwrap the three-dimensional image in order to acquire the two-dimensional selected region, as shown in steps S301 and S302. The user can select patches within the range of the three-dimensional graphic 401; the patches may be triangular or quadrilateral, chosen according to the actual situation without limitation. The image is then unfolded along the seams of the three-dimensional image to obtain the two-dimensional image.
Further, as shown in step S303, an initial two-dimensional image 403 is acquired from the three-dimensional selected region 400. In order to present the user-selected region in real-time, the platform may generate a selected region presentation image 402 corresponding to the initial two-dimensional image 403, as shown in step S304. The selected region presentation image 402 is presented by the user terminal. Wherein the texture and/or color of the selected region presentation image 402 is different from the initial two-dimensional image 403 for convenience of the user to distinguish the selected region presentation image 402 from the initial two-dimensional image 403.
As shown in step S305, a mask image 404 of the same size may be generated from the initial two-dimensional image 403, where the mask image 404 may be used to hide or reveal portions of an image layer. The region corresponding to the two-dimensional selected region may be displayed in the first color and the other regions in the second color; the first color differs from the second color so that different transparencies can be distinguished. The specific colors of the first and second colors are not limited here: the first color may be white or black, selected by the user according to the actual situation.
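A minimal sketch of the mask construction, assuming NumPy and using 255/0 as the first/second colors; the function name is hypothetical, and the selected region is passed here as a rectangular slice purely for brevity (the patent's region may be any shape).

```python
import numpy as np

def make_mask(height, width, selected, first_color=255, second_color=0):
    """Build a single-channel mask the same size as the initial 2-D
    image: selected pixels get the first color, the rest the second."""
    mask = np.full((height, width), second_color, dtype=np.uint8)
    mask[selected] = first_color
    return mask

# A 4x4 mask whose central 2x2 block is the selected region.
mask = make_mask(4, 4, (slice(1, 3), slice(1, 3)))
```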
FIG. 5 is a schematic diagram illustrating the mask generation process of the mapping method of a three-dimensional image according to an embodiment of the present disclosure. Taking the case where the user selects triangular patches within the range of the three-dimensional graphic 401 as an example, the specific steps of generating the mask image 404 may include: traversing the triangular patches to obtain the triangular patches of the three-dimensional selected region 400; for the current triangular patch ABC, finding the corresponding triangular region A1B1C1 in the mask image 404 and obtaining the coordinates of A1, B1, and C1; and filling the region enclosed by A1, B1, and C1 with the first color.
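The per-triangle fill can be sketched with a standard half-plane (sign-of-cross-product) inside test over the pixel grid; this is a common rasterization technique, assumed here as one plausible implementation of filling A1B1C1, with hypothetical names and NumPy assumed.

```python
import numpy as np

def fill_triangle(mask, a, b, c, color=255):
    """Fill the triangle with vertices a, b, c (each an (x, y) pair)
    in-place, marking pixels whose three edge cross-products share a
    sign (i.e., pixels inside or on the triangle)."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    def edge(p, q):
        # Cross product of edge vector (q - p) with (pixel - p).
        return (q[0] - p[0]) * (ys - p[1]) - (q[1] - p[1]) * (xs - p[0])
    d1, d2, d3 = edge(a, b), edge(b, c), edge(c, a)
    inside = ((d1 >= 0) & (d2 >= 0) & (d3 >= 0)) | \
             ((d1 <= 0) & (d2 <= 0) & (d3 <= 0))
    mask[inside] = color
    return mask

# Fill the lower-left triangle of a 10x10 mask with the first color.
tri_mask = np.zeros((10, 10), dtype=np.uint8)
fill_triangle(tri_mask, (0, 0), (9, 0), (0, 9))
```

Repeating this over every projected triangular patch of the selected region yields the complete mask image.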
Further, as shown in step S306, to display the blending effect in real time, the initial two-dimensional image 403, the selected region display image 402, and the mask image 404 may be blended: the mask image 404 and the selected region display image 402 are superimposed, and the superimposed display image is then superimposed onto the initial two-dimensional image 403.
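The superimposition can be read as mask-weighted compositing: where the mask carries the first color, show the display image's texture; elsewhere keep the initial image. A minimal NumPy sketch under that assumption (function name hypothetical):

```python
import numpy as np

def blend(initial, display, mask):
    """Composite: mask==255 selects the display image's pixels,
    mask==0 keeps the initial two-dimensional image's pixels."""
    alpha = (mask.astype(np.float64) / 255.0)[..., None]
    return (alpha * display + (1.0 - alpha) * initial).astype(np.uint8)

initial = np.full((2, 2, 3), 10, dtype=np.uint8)     # base image
display = np.full((2, 2, 3), 200, dtype=np.uint8)    # highlight texture
sel_mask = np.array([[255, 0], [255, 0]], dtype=np.uint8)
shown = blend(initial, display, sel_mask)
```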
Generating target map data based at least on the size of the two-dimensional selected region, comprising: a minimum bounding rectangle 406 of the two-dimensional selected region is determined, and target map data is generated having the same aspect ratio as the minimum bounding rectangle 406.
Further, as shown in step S307, the specific steps of generating the target map data may include: calculating the minimum bounding rectangle of the first-color region of the mask image 404. The minimum bounding rectangle is the smallest rectangle enclosing a set of two-dimensional shapes (such as points, line segments, and polygons) expressed in two-dimensional coordinates, that is, the rectangle whose sides are defined by the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate over the vertices of the given shapes.
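The bounding-rectangle computation over the first-color pixels reduces to coordinate minima and maxima; a minimal NumPy sketch (hypothetical function name, rectangle returned as (min_x, min_y, width, height)):

```python
import numpy as np

def min_bounding_rect(mask, first_color=255):
    """Minimum axis-aligned rectangle enclosing all first-color pixels,
    as (min_x, min_y, width, height); None if the mask is empty."""
    ys, xs = np.nonzero(mask == first_color)
    if xs.size == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1)

# A 3-row by 4-column selected block inside a 10x10 mask.
rect_mask = np.zeros((10, 10), dtype=np.uint8)
rect_mask[2:5, 3:7] = 255
rect = min_bounding_rect(rect_mask)
```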
The target map 405 is obtained based on a user operation: the platform may obtain it through an upload or a search instruction from the user, or generate it according to a requirement the user enters at the interface; the specific form is freely chosen by the user and is not limited here. Target map data with the same aspect ratio as the minimum bounding rectangle 406 is then obtained from the target image based on the size information of the rectangle. The target image is preferably at least as large as the minimum bounding rectangle: if it is too small, the user has too little margin to adjust its position later, and fitting it to the selected region would require resizing it, changing its pixels and increasing the difficulty of later rendering.
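One simple way to obtain map data with the rectangle's aspect ratio is a center crop of the target image; this is an illustrative choice (the patent does not fix the method), with NumPy assumed and hypothetical names.

```python
import numpy as np

def crop_to_aspect(image, rect_w, rect_h):
    """Center-crop the target image to the aspect ratio (rect_w:rect_h)
    of the minimum bounding rectangle, trimming the longer axis."""
    h, w = image.shape[:2]
    target_ratio = rect_w / rect_h
    if w / h > target_ratio:                  # image too wide: trim sides
        new_w = int(round(h * target_ratio))
        x0 = (w - new_w) // 2
        return image[:, x0:x0 + new_w]
    new_h = int(round(w / target_ratio))      # image too tall: trim top/bottom
    y0 = (h - new_h) // 2
    return image[y0:y0 + new_h, :]

# A 300x100 image cropped to a 2:1 rectangle becomes 200x100.
photo = np.zeros((100, 300, 3), dtype=np.uint8)
cropped = crop_to_aspect(photo, rect_w=2, rect_h=1)
```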
Further, the mapping method of the three-dimensional image may further include:
replacing the image data of the two-dimensional selected area in the two-dimensional image with target map data to generate a mapped two-dimensional image 407, including: based on the map image corresponding to the target map data, the initial two-dimensional image 403, and the mask image 404, a mapped two-dimensional image 407 is generated.
The target map data is superimposed on the mask image 404 to generate an intermediate two-dimensional image, and the intermediate two-dimensional image is superimposed on the initial two-dimensional image 403 to generate the mapped two-dimensional image 407. As shown in step S308, the mapped three-dimensional image 408 is produced from the mapped two-dimensional image 407.
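The replacement step can be sketched as copying the target map's pixels into the masked pixels inside the bounding rectangle while leaving the rest of the initial image untouched; names are hypothetical, NumPy assumed, and the target map is assumed already resized to the rectangle.

```python
import numpy as np

def paste_map(initial, target_map, mask, rect):
    """Replace mask==255 pixels inside rect (x, y, w, h) with the
    corresponding pixels of target_map (sized w x h); return a copy."""
    x, y, w, h = rect
    out = initial.copy()
    region = out[y:y + h, x:x + w]            # view into the copy
    sel = mask[y:y + h, x:x + w] == 255
    region[sel] = target_map[sel]
    return out

initial = np.zeros((10, 10, 3), dtype=np.uint8)
paste_mask = np.zeros((10, 10), dtype=np.uint8)
paste_mask[2:6, 2:6] = 255
target = np.full((4, 4, 3), 7, dtype=np.uint8)
pasted = paste_map(initial, target, paste_mask, (2, 2, 4, 4))
```

Because the mask, not the rectangle, gates the copy, irregular (non-rectangular) selected regions are handled for free.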
Further, the mapping method of the three-dimensional image may further include: determining a three-dimensional selected region 400 in a three-dimensional image includes: in response to a smearing operation on the three-dimensional image, a three-dimensional selected region 400 is determined.
Further, after the mapped three-dimensional image 408 is generated, it may be modified according to a modification operation of the user, where the modification operation includes at least translation, scaling, and rotation of the mapped three-dimensional image 408. The specific steps may include: obtaining the width and height of the modified mask image 404, calculating the sampling area of the modified target map 405, and updating the pixels of the initial two-dimensional image 403.
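The translation and scaling part of such a modification can be sketched as updating the map's sampling rectangle; rotation would add an angle to the transform in the same way. The function name and the centre-anchored scaling convention are illustrative assumptions.

```python
def transform_rect(rect, dx=0.0, dy=0.0, scale=1.0):
    """Translate a sampling rectangle (x, y, w, h) by (dx, dy) and
    scale it uniformly about its centre; returns the new rectangle."""
    x, y, w, h = rect
    cx, cy = x + w / 2 + dx, y + h / 2 + dy   # translated centre
    w2, h2 = w * scale, h * scale             # scaled extents
    return cx - w2 / 2, cy - h2 / 2, w2, h2

# Shift a 10x10 rect right by 5 and double its size.
new_rect = transform_rect((0, 0, 10, 10), dx=5, scale=2.0)
```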
In the above, the mapping method of a three-dimensional image according to the embodiments of the present disclosure has been described; the map generation apparatus 600 for implementing this mapping method is further described below.
The mapping processing device can be applied to a hardware environment formed by a server and a terminal device. The platform may be used to provide services to a terminal device or an application installed on a terminal device, which may be a video application, an instant messaging application, a browser application, an educational application, a gaming application, etc. The database may be provided on or separate from the server for providing data storage services for the server, such as a game data storage server, which may include, but is not limited to: a wired network, a wireless network, wherein the wired network comprises: local area networks, metropolitan area networks, and wide area networks, the wireless network comprising: bluetooth, WIFI and other networks for realizing wireless communication, wherein the user equipment mainly comprises terminal equipment, display equipment and input equipment, and the terminal equipment can be a terminal provided with an application program and can comprise at least one of the following components, but not limited to: the mobile phone (such as an Android mobile phone, an iOS mobile phone, etc.), a notebook computer, a tablet computer, a palm computer, an MID (Mobile Internet Devices, mobile internet device), a PAD, a desktop computer, an intelligent television, an intelligent voice interaction device, an intelligent household appliance, a vehicle-mounted terminal, an aircraft, etc., the input device can be a mouse, a VR handle, a scanner, a light pen, etc., the display device can be an LED display screen, and the server can be a single server, a server cluster formed by a plurality of servers, or a cloud server. The specific type of the above device is not limited herein, and the user can select the device according to the actual situation. 
The terminal device is connected with the display device and the input device, and the connection may be a wired communication connection or a wireless communication connection.
The map generation apparatus may include:
the three-dimensional region acquisition unit 601, which determines a three-dimensional selected region 400 in a three-dimensional image; the two-dimensional region acquisition unit 602, which determines and displays a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400; the data generation unit 603, which generates target map data based at least on the size of the two-dimensional selected region; and the image generation unit 604, which replaces the image data of the two-dimensional selected region in the two-dimensional image with the target map data, generates a mapped two-dimensional image 407, and converts the mapped two-dimensional image 407 into a mapped three-dimensional image 408.
Specifically, the map generation apparatus 600 may include a three-dimensional region acquisition unit 601, a two-dimensional region acquisition unit 602, a data generation unit 603, and an image generation unit 604.
The three-dimensional region acquisition unit 601 may be configured to acquire an image, acquire, according to a user operation, the region selected on the image by the user with an input device, and send the three-dimensional selected region 400 to the two-dimensional region acquisition unit 602. The image may be loaded on the terminal device based on the user's selection, or may be uploaded directly by the user, and the dimensionality of the selected region is the same as that of the image. The image is not limited to two or three dimensions, the image format is not limited, and both can be chosen according to the actual situation. For ease of understanding, the following description takes a three-dimensional image acquired by the platform as an example. The selected region may be a regular shape preset by the platform, or an irregular region selected by the user according to the actual situation. Further, the user may define the selected region by painting or dragging with the input device.
The two-dimensional region acquisition unit 602 may receive the three-dimensional selected region 400, generate a two-dimensional selected region from it, and send the two-dimensional selected region to the data generation unit 603. Because the dimensionality of the selected region is the same as that of the image, the selected region here is three-dimensional. Considering that a three-dimensional image is usually irregular in shape, applying a map to it directly would cause the map position to mismatch the image, so that the map cannot be attached at the correct position. Therefore, the map representation of the three-dimensional image can be obtained by mapping: each vertex on the three-dimensional image is projected onto a two-dimensional plane to generate the two-dimensional selected region, and projecting onto the two-dimensional plane reduces the influence of pose and shape changes.
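The disclosure does not give code for this projection. As a minimal sketch, assuming the mesh already carries per-vertex UV coordinates (the usual way vertices of a three-dimensional image are associated with points on a two-dimensional plane), the two-dimensional selected region can be read off from the UVs of the selected patches; the function and variable names below are illustrative, not from the patent:

```python
import numpy as np

def project_selection_to_uv(uv, faces, selected_faces):
    """Project a three-dimensional selected region onto the 2D UV plane.

    uv             : (V, 2) array of per-vertex UV coordinates in [0, 1]
    faces          : (F, 3) array of vertex indices, one row per triangular patch
    selected_faces : indices of the patches the user selected
    Returns an (N, 3, 2) array: the 2D triangles forming the selected region.
    """
    return uv[faces[selected_faces]]

# Toy example: a quad split into two triangular patches, one selected.
uv = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
region_2d = project_selection_to_uv(uv, faces, [0])
print(region_2d.shape)  # (1, 3, 2)
```

Because the projection only indexes precomputed UVs, the same two-dimensional region is obtained regardless of the model's current pose, which is the stated benefit of working on the two-dimensional plane.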
Further, the data generation unit 603 may generate target map data from the two-dimensional selected region. Target map data may be generated from the image size of the two-dimensional selected region, where the target map data includes at least the size of the target map 405. To facilitate subsequent generation of the target map 405, the size of the target map 405 may be greater than or equal to the image size of the two-dimensional selected region; this prevents a target map 405 that is too small from losing pixels during subsequent enlargement, which would reduce the resolution and increase the rendering difficulty.
Further, upon receiving the target map data, the image generation unit 604 may replace the image data of the two-dimensional selected region in the two-dimensional image with the target map data, generate a mapped two-dimensional image 407, and convert the mapped two-dimensional image 407 into a mapped three-dimensional image 408. The target map data may also include image data; the image data of the two-dimensional selected region is replaced with the image data of the target map 405 to generate a mapped target image, and the mapped target image is converted into a three-dimensional image.
The two-dimensional region acquisition unit 602 is further configured to: map the three-dimensional image into a two-dimensional image; acquire an initial two-dimensional image 403 including a two-dimensional selected region corresponding to the three-dimensional selected region 400; generate a selected region display image 402 corresponding to the initial two-dimensional image 403, where the texture and/or color of the selected region display image 402 is different from that of the initial two-dimensional image 403; generate a mask image 404 of the same size as the initial two-dimensional image 403, where the region corresponding to the two-dimensional selected region is displayed in a first color and the region other than the two-dimensional selected region is displayed in a second color; and display the two-dimensional selected region in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image 403, the selected region display image 402, and the mask image 404, where the two-dimensional selected region is displayed in the texture and color of the selected region display image 402.
After acquiring the three-dimensional selected region 400, the platform may map the three-dimensional image in order to acquire the two-dimensional selected region. The user can select patches within the range of the three-dimensional image 401, where the patches can be triangular patches or quadrilateral patches, chosen according to the actual situation without limitation. The image is then unfolded along the seams of the three-dimensional image to obtain the two-dimensional image.
Further, the initial two-dimensional image 403 is acquired from the three-dimensional selected region 400. In order to present the user-selected region in real time, the platform may generate a selected region presentation image 402 corresponding to the initial two-dimensional image 403, as shown in step S304. The selected region presentation image 402 is presented by the user terminal. The texture and/or color of the selected region presentation image 402 is different from that of the initial two-dimensional image 403, so that the user can easily distinguish the selected region presentation image 402 from the initial two-dimensional image 403.
Further, a mask image 404 of the same size may be generated from the initial two-dimensional image 403, where the mask image 404 may be used to hide or display portions of an image layer. The region corresponding to the two-dimensional selected region may be displayed in a first color, and the other regions in a second color, where the first color is different from the second color so as to distinguish different transparencies. The specific first and second colors are not limited herein; for example, the first color may be white and the second color black, and they can be selected according to the actual situation.
Taking the case where the patches the user selects within the three-dimensional image 401 are triangular patches as an example, the specific steps of generating the mask image 404 may include: traversing the triangular patches to obtain the triangular patches of the three-dimensional selected region 400; for the current triangular patch ABC, finding the corresponding triangular region A1B1C1 in the mask image 404 and obtaining the coordinates of A1, B1, and C1; and filling the region enclosed by A1, B1, and C1 with the first color.
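The fill step above can be sketched as a simple rasterization: for each selected triangle A1B1C1 in the mask plane, every pixel inside the triangle is set to the first color. The following is an illustrative numpy implementation using edge-function (signed-area) inside tests; it is a sketch, not the patent's implementation, and the names are assumptions:

```python
import numpy as np

def fill_triangle(mask, tri, color=255):
    """Fill triangle tri = [(x, y), (x, y), (x, y)] in the mask with the first color."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinates: ys = row, xs = column
    # Edge functions: the sign tells on which side of each edge a pixel lies.
    d0 = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
    d1 = (x2 - x1) * (ys - y1) - (y2 - y1) * (xs - x1)
    d2 = (x0 - x2) * (ys - y2) - (y0 - y2) * (xs - x2)
    # A pixel is inside if it lies on the same side of all three edges
    # (both orientations accepted, so vertex winding does not matter).
    inside = ((d0 >= 0) & (d1 >= 0) & (d2 >= 0)) | ((d0 <= 0) & (d1 <= 0) & (d2 <= 0))
    mask[inside] = color
    return mask

mask = np.zeros((8, 8), dtype=np.uint8)        # second color: black (0)
fill_triangle(mask, [(0, 0), (7, 0), (0, 7)])  # first color: white (255)
print(mask[0, 0], mask[7, 7])  # 255 0
```

Repeating this for every selected triangular patch yields the complete mask image 404.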
Further, to display the blending effect in real time, the initial two-dimensional image 403, the selected region presentation image 402, and the mask image 404 may be blended: the mask image 404 and the selected region presentation image 402 are superimposed first, and the resulting presentation image is then superimposed onto the initial two-dimensional image 403.
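This superimposition amounts to a mask-driven select between two images: inside the first-color region the presentation image is shown, elsewhere the initial two-dimensional image. A minimal numpy sketch (the function name and color values are illustrative assumptions):

```python
import numpy as np

def blend_with_mask(initial, presentation, mask):
    """Show the presentation image where the mask holds the first color (255),
    and the initial two-dimensional image everywhere else."""
    sel = (mask == 255)[..., None]            # (H, W, 1) broadcasts over RGB channels
    return np.where(sel, presentation, initial)

initial = np.full((4, 4, 3), 10, dtype=np.uint8)       # base UV image
presentation = np.full((4, 4, 3), 200, dtype=np.uint8) # highlight texture/color
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2] = 255                                         # top half is the selected region
out = blend_with_mask(initial, presentation, mask)
print(out[0, 0, 0], out[3, 3, 0])  # 200 10
```

Because the blend is a pure per-pixel operation, it can be recomputed each frame as the user extends the selection, giving the real-time display described above.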
The data generation unit is further configured to: determine the minimum circumscribed rectangular box of the two-dimensional selected region, and generate target map data having the same aspect ratio as the minimum circumscribed rectangular box.
Further, based on the size of the two-dimensional selected region, the specific steps of generating the target map data may include: calculating the minimum bounding rectangle of the first-color region of the mask image 404. The minimum bounding rectangle refers to the maximum extent of one or more two-dimensional shapes (such as points, straight lines, and polygons) expressed in two-dimensional coordinates, i.e., the rectangle whose boundaries are defined by the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate of the vertices of the given two-dimensional shapes.
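Computed on the mask image, this reduces to finding the extreme coordinates of the first-color pixels. A minimal sketch (the function name and `(x, y, width, height)` return convention are assumptions for illustration):

```python
import numpy as np

def min_bounding_rect(mask, first_color=255):
    """Axis-aligned minimum bounding rectangle of the first-color region,
    returned as (min_x, min_y, width, height)."""
    ys, xs = np.nonzero(mask == first_color)   # coordinates of first-color pixels
    min_x, max_x = xs.min(), xs.max()
    min_y, max_y = ys.min(), ys.max()
    return int(min_x), int(min_y), int(max_x - min_x + 1), int(max_y - min_y + 1)

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:9] = 255                           # a 6-wide, 3-tall selected region
print(min_bounding_rect(mask))  # (3, 2, 6, 3)
```

The width-to-height ratio of this rectangle then fixes the aspect ratio of the target map data.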
The target map 405 is obtained based on a user operation: the platform may obtain the target map 405 through a user upload or a search instruction, or generate the target map 405 according to a requirement the user inputs at the interface; the specific form is freely chosen by the user and is not limited herein. Based on the size information of the minimum bounding rectangle 406, target map data having the same aspect ratio as the minimum bounding rectangle 406 is obtained from the target image. The target image is preferably greater than or equal to the minimum bounding rectangle: if it is too small, the user has too little margin to adjust the position of the target image later, and when the user resizes the target image to fit the selected region image, the pixels of the target image change, increasing the later rendering difficulty.
The image generation unit is further configured to: generate the mapped two-dimensional image 407 based on the map image corresponding to the target map data, the initial two-dimensional image 403, and the mask image 404.
Specifically, the target map data is superimposed with the mask image 404 to generate a process two-dimensional image, and the process two-dimensional image is superimposed with the initial two-dimensional image 403 to generate a mapped two-dimensional image 407.
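These two superimpositions can be sketched as: paste the target map over the bounding rectangle of the selected region (the process two-dimensional image), then use the mask to keep the pasted pixels only inside the selected region. The code below is an illustrative numpy sketch under those assumptions; the names are not from the patent:

```python
import numpy as np

def apply_target_map(initial, target_map, mask, rect):
    """Generate the mapped two-dimensional image: the target map appears
    inside the masked (selected) region, the initial image elsewhere."""
    x, y, w, h = rect                                # minimum bounding rectangle
    process = initial.copy()
    process[y:y + h, x:x + w] = target_map[:h, :w]   # step 1: paste over the rect
    sel = (mask == 255)[..., None]                   # step 2: clip to the selection
    return np.where(sel, process, initial)

initial = np.zeros((10, 10, 3), dtype=np.uint8)
target = np.full((3, 6, 3), 77, dtype=np.uint8)      # same aspect ratio as the rect
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:9] = 255
out = apply_target_map(initial, target, mask, (3, 2, 6, 3))
print(out[2, 3, 0], out[0, 0, 0])  # 77 0
```

Because the mask clips the pasted rectangle, an irregular selected region still receives the map only within its own outline, which is what allows the mapped two-dimensional image 407 to convert cleanly back into the mapped three-dimensional image 408.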
In the mapping device, the three-dimensional region acquisition unit is further configured to determine the three-dimensional selected region 400 in response to a smearing operation on the three-dimensional image.
Fig. 7 is a hardware block diagram illustrating an electronic device 700 according to an embodiment of the present disclosure. The electronic device according to an embodiment of the present disclosure includes at least a processor and a memory for storing computer-readable instructions. When the computer-readable instructions are loaded and executed by the processor, the processor performs the mapping method of the three-dimensional image as described above.
The electronic device 700 shown in fig. 7 specifically includes: a Central Processing Unit (CPU) 701, a Graphics Processing Unit (GPU) 702, and a main memory 703. These units are interconnected by a bus 704. The Central Processing Unit (CPU) 701 and/or the Graphics Processing Unit (GPU) 702 may be used as the above-described processor, and the main memory 703 may be used as the above-described memory storing computer-readable instructions. In addition, the electronic device 700 may further comprise a communication unit 705, a storage unit 706, an output unit 707, an input unit 708, and an external device 709, which are also connected to the bus 704.
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 8, a computer-readable storage medium 800 according to an embodiment of the present disclosure has computer-readable instructions 801 stored thereon. When the computer-readable instructions 801 are executed by a processor, the mapping method of the three-dimensional image according to the embodiments of the present disclosure described with reference to the above figures is performed. The computer-readable storage medium includes, but is not limited to, for example, volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, an optical disk, a magnetic disk, and the like.
In the above, the mapping method and apparatus, storage medium, and electronic device for a three-dimensional image according to the embodiments of the present disclosure have been described with reference to the accompanying drawings. The target map 405 can be replaced according to the region selected by the user, and the user can place the target map 405 at the desired position by freely selecting the region. This enriches the map types of the three-dimensional image 401, offers stronger operability, increases the freedom of mapping, and effectively improves the user experience in the mapping process.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are only illustrative examples, and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", "having", and the like are open words meaning "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
In addition, as used herein, the use of "or" in the recitation of items beginning with "at least one" indicates a separate recitation, such that recitation of "at least one of A, B or C" for example means a or B or C, or AB or AC or BC, or ABC (i.e., a and B and C). Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
Various changes, substitutions, and alterations are possible to the techniques herein without departing from the teachings as defined by the appended claims. Furthermore, the scope of the claims is not limited to the exact aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. The processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not to be limited to the aspects shown herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A method of mapping a three-dimensional image, comprising:
determining a three-dimensional selected area in the three-dimensional image;
determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region;
generating target map data based at least on the size of the two-dimensional selected region;
replacing the image data of the two-dimensional selected area in the two-dimensional image with the target mapping data to generate a mapped two-dimensional image; and
and converting the mapped two-dimensional image into a mapped three-dimensional image.
2. The method of mapping a three-dimensional image according to claim 1, wherein the determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region comprises:
mapping the three-dimensional image to the two-dimensional image;
acquiring an initial two-dimensional image including the two-dimensional selected region corresponding to the three-dimensional selected region;
generating a selected region display image corresponding to the initial two-dimensional image, wherein the texture and/or color of the selected region display image is different from the initial two-dimensional image;
generating a mask image of the same size as the initial two-dimensional image, wherein an area corresponding to the two-dimensional selected area is displayed in a first color, and an area other than the two-dimensional selected area is displayed in a second color;
and displaying the two-dimensional selected area in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image, the selected area display image and the mask image, wherein the two-dimensional selected area is displayed in the texture and the color of the selected area display image.
3. The method of mapping a three-dimensional image of claim 2, wherein the generating target mapping data based at least on the size of the two-dimensional selected region comprises:
determining the minimum circumscribed rectangular frame of the two-dimensional selected area; and
the target map data having the same aspect ratio as the minimum bounding rectangle is generated.
4. A method of mapping a three-dimensional image as defined in any one of claims 2 to 3, wherein the replacing the image data of the two-dimensionally selected region in the two-dimensional image with the target mapping data to generate a mapped two-dimensional image comprises:
a mapped two-dimensional image is generated based on the mapped image corresponding to the target mapped data, the initial two-dimensional image, and the mask image.
5. A method of mapping a three-dimensional image as defined in any one of claims 1 to 4, wherein the determining a three-dimensional selected region in the three-dimensional image comprises:
the three-dimensional selected region is determined in response to a smearing operation on the three-dimensional image.
6. A device for mapping a three-dimensional image, the device comprising:
a three-dimensional region acquisition unit that determines a three-dimensional selected region in the three-dimensional image;
a two-dimensional region acquisition unit that determines and displays a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image, based on the three-dimensional selected region;
a data generation unit that generates target map data based at least on the size of the two-dimensional selected region; and
and an image generation unit that replaces the image data of the two-dimensional selected area in the two-dimensional image with the target mapping data, generates a mapped two-dimensional image, and converts the mapped two-dimensional image into a mapped three-dimensional image.
7. A three-dimensional image mapping apparatus according to claim 6,
the two-dimensional region acquisition unit is further configured to:
mapping the three-dimensional image to the two-dimensional image;
acquiring an initial two-dimensional image including the two-dimensional selected region corresponding to the three-dimensional selected region;
generating a selected region display image corresponding to the initial two-dimensional image, wherein the texture and/or color of the selected region display image is different from the initial two-dimensional image;
generating a mask image of the same size as the initial two-dimensional image, wherein an area corresponding to the two-dimensional selected area is displayed in a first color, and an area other than the two-dimensional selected area is displayed in a second color;
displaying the two-dimensional selected region in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image, the selected region display image, and the mask image, wherein the two-dimensional selected region is displayed in textures and colors of the selected region display image;
the data generating unit is further configured to determine a minimum circumscribed rectangular box of the two-dimensional selected area; and generating the target map data having the same aspect ratio as the minimum bounding rectangle;
the image generation unit is further configured to generate a mapped two-dimensional image based on a map image corresponding to the target map data, the initial two-dimensional image, and the mask image.
8. The mapping apparatus of three-dimensional images according to any one of claims 6 to 7, wherein the three-dimensional region acquisition unit is further configured to:
the three-dimensional selected region is determined in response to a smearing operation on the three-dimensional image.
9. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions to cause the electronic device to perform the method of mapping a three-dimensional image as claimed in any one of claims 1 to 5.
10. A non-transitory computer readable storage medium storing computer readable instructions which, when executed by a processor, cause the processor to perform a method of mapping a three-dimensional image according to any one of claims 1 to 5.
CN202311434416.XA 2023-10-31 2023-10-31 Mapping method and device for three-dimensional image, electronic equipment and storage medium Pending CN117252974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311434416.XA CN117252974A (en) 2023-10-31 2023-10-31 Mapping method and device for three-dimensional image, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117252974A true CN117252974A (en) 2023-12-19

Family

ID=89131433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311434416.XA Pending CN117252974A (en) 2023-10-31 2023-10-31 Mapping method and device for three-dimensional image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117252974A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination