CN113205582A - Baking map generation and use method, apparatus, device and medium - Google Patents
- Publication number: CN113205582A (application number CN202110619628.XA)
- Authority
- CN
- China
- Prior art keywords
- map
- view
- maps
- angle
- baking
- Legal status: Granted
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T15/00—3D [Three Dimensional] image rendering > G06T15/005—General purpose rendering architectures
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F16/00—Information retrieval; Database structures therefor; File system structures therefor > G06F16/50—Information retrieval of still image data > G06F16/51—Indexing; Data structures therefor; Storage structures
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T15/00—3D [Three Dimensional] image rendering > G06T15/04—Texture mapping
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses a method, apparatus, device and medium for generating and using a baking map, belonging to the field of computer technology. The method comprises the following steps: determining viewing angle division information of m first viewing angle ranges and n second viewing angle ranges of a camera model, where a first viewing angle range is not divided into viewing angle sub-ranges and a second viewing angle range is divided into a plurality of viewing angle sub-ranges; acquiring, through the camera model, a first view angle map of the three-dimensional virtual object in each first viewing angle range, to obtain m first view angle maps; acquiring, through the camera model, view angle maps of the three-dimensional virtual object in the viewing angle sub-ranges, to obtain n groups of sub-view angle maps; down-sampling the n groups of sub-view angle maps to obtain n groups of second view angle maps; and drawing the m first view angle maps and the n groups of second view angle maps into different map areas of the baking map according to the viewing angle division information. The method enables the baking map to accommodate more view angle maps at an unchanged size.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for generating and using a baking map.
Background
A camera model is arranged in a three-dimensional virtual space and shoots a three-dimensional virtual object located in that space; the terminal renders the virtual environment picture captured by the camera model onto the terminal screen for display.
To reduce the computation of the rendering process, a baking map of the three-dimensional virtual object is prepared and stored in advance; the baking map comprises map images of the object under a plurality of different viewing angles. The terminal loads the baking map into memory in advance. When the camera model shoots the three-dimensional virtual object (or a single patch disguised as the object), the terminal obtains the current shooting angle of the camera model, selects from the baking map the view angle map matching that angle, and renders it on the terminal screen for display.
However, the baking map also occupies memory, so it cannot be too large. With the size of the baking map fixed, baking maps prepared in advance in the related art contain too few view angle maps; correspondingly, the selectable range of shooting angles of the camera model is too narrow, and an obvious view-switching jump easily occurs while the camera model moves continuously.
Disclosure of Invention
The application provides a method, apparatus, device and medium for generating and using a baking map, which can place more view angle maps on the baking map while its size stays fixed. The technical scheme is as follows:
according to an aspect of the present application, there is provided a method of generating a baking map, the method including:
determining viewing angle division information of m first viewing angle ranges and n second viewing angle ranges of the camera model, wherein a first viewing angle range is not divided into viewing angle sub-ranges, and a second viewing angle range is divided into a plurality of viewing angle sub-ranges;
acquiring, through the camera model, a first view angle map of the three-dimensional virtual object in each first viewing angle range, to obtain m first view angle maps;
acquiring, through the camera model, view angle maps of the three-dimensional virtual object in the viewing angle sub-ranges, to obtain n groups of sub-view angle maps; down-sampling the n groups of sub-view angle maps to obtain n groups of second view angle maps;
drawing the m first view angle maps and the n groups of second view angle maps into different map areas of the baking map according to the viewing angle division information;
wherein m and n are integers greater than 0.
According to one aspect of the present application, there is provided a method of using a baking map, the method comprising:
acquiring a baking map of the three-dimensional virtual object, wherein the baking map is obtained by drawing m first view angle maps and n groups of second view angle maps into different map areas of the baking map according to viewing angle division information; the m first view angle maps are acquired by capturing, through a camera model, a first view angle map of the object in each first viewing angle range; the n groups of second view angle maps are obtained by capturing, through the camera model, view angle maps of the object in the viewing angle sub-ranges to obtain n groups of sub-view angle maps and then down-sampling them; each viewing angle sub-range is obtained by dividing a second viewing angle range; and, according to the viewing angle division information, the m first viewing angle ranges and the n second viewing angle ranges form the acquisition viewing angle range of the object;
determining an acquisition viewing angle of the three-dimensional virtual object, the acquisition viewing angle being the angle at which the camera model faces the object;
determining, from the baking map, the view angle map corresponding to the acquisition viewing angle;
wherein m and n are integers greater than 0.
According to an aspect of the present application, there is provided a baking map generation apparatus including:
a determining module, configured to determine viewing angle division information of m first viewing angle ranges and n second viewing angle ranges of the camera model, wherein a first viewing angle range is not divided into viewing angle sub-ranges, and a second viewing angle range is divided into a plurality of viewing angle sub-ranges;
a processing module, configured to acquire, through the camera model, a first view angle map of the three-dimensional virtual object in each first viewing angle range, to obtain m first view angle maps;
the processing module being further configured to acquire, through the camera model, view angle maps of the three-dimensional virtual object in the viewing angle sub-ranges, to obtain n groups of sub-view angle maps, and to down-sample the n groups of sub-view angle maps to obtain n groups of second view angle maps;
a drawing module, configured to draw the m first view angle maps and the n groups of second view angle maps into different map areas of the baking map according to the viewing angle division information;
wherein m and n are integers greater than 0.
According to one aspect of the present application, there is provided an apparatus for using a baking map, the apparatus comprising:
an acquiring module, configured to acquire a baking map of the three-dimensional virtual object, wherein the baking map is obtained by drawing m first view angle maps and n groups of second view angle maps into different map areas of the baking map according to viewing angle division information; the m first view angle maps are acquired by capturing, through a camera model, a first view angle map of the object in each first viewing angle range; the n groups of second view angle maps are obtained by capturing, through the camera model, view angle maps of the object in the viewing angle sub-ranges to obtain n groups of sub-view angle maps and then down-sampling them; each viewing angle sub-range is obtained by dividing a second viewing angle range; and, according to the viewing angle division information, the m first viewing angle ranges and the n second viewing angle ranges form the acquisition viewing angle range of the object;
a determining module, configured to determine an acquisition viewing angle of the three-dimensional virtual object, the acquisition viewing angle being the angle at which the camera model faces the object;
the determining module being further configured to determine, from the baking map, the view angle map corresponding to the acquisition viewing angle;
wherein m and n are integers greater than 0.
According to an aspect of the present application, there is provided a computer device including: a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the method of generating a baking map or the method of using a baking map as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium storing a computer program which is loaded and executed by a processor to implement the method of generating a baking map or the method of using a baking map as described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the method for generating the baking map or the method for using the baking map.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the baking mapping is finally generated by obtaining m first visual angle mapping corresponding to m first visual angle ranges, n groups of second visual angle mapping corresponding to n second visual angle ranges, and mapping and drawing the m first visual angle mapping and the n groups of second visual angle mapping to different mapping areas of the baking mapping, wherein the image resolution of the second visual angle mapping is lower than that of the first visual angle mapping. Compared with the existing imprester (camouflage) technology, the method has the advantages that by reducing the image resolution of part of the visual angle maps, the baking maps can contain more visual angle maps under the condition of unchanged size, the selectable range of the shooting visual angle of the camera model for shooting the three-dimensional virtual article is greatly widened, and the pictures displayed on the screen of the terminal are switched more smoothly in the continuous moving process of the camera model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of a baking map rendered based on the Impostor technique provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a method for generating a baking map provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a camera model provided by an exemplary embodiment of the present application capturing a three-dimensional virtual object from different perspectives;
FIG. 5 is a schematic illustration of a bake map provided by an exemplary embodiment of the present application;
FIG. 6 is an enlarged schematic view of the mapping region 503 of FIG. 5;
FIG. 7 is a diagram of a first mapping provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic illustration of a bake map provided by an exemplary embodiment of the present application;
fig. 9 is a schematic diagram of a picture taken by a camera model to shoot a three-dimensional virtual object according to an exemplary embodiment of the present application;
fig. 10 is a schematic diagram of a picture taken by a camera model to shoot a three-dimensional virtual object according to another exemplary embodiment of the present application;
FIG. 11 is a flow chart of a method of using a bake map provided by an exemplary embodiment of the present application;
FIG. 12 is a flow chart of a method of generating a baking map provided by another exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of a development environment in which a developer is located, as provided by an exemplary embodiment of the present application;
FIG. 14 is a flow chart of a method of using a bake map provided by another exemplary embodiment of the present application;
FIG. 15 is a schematic diagram of a baking map generation device provided by an exemplary embodiment of the present application;
FIG. 16 is a schematic view of an apparatus for using a baking map provided in accordance with an exemplary embodiment of the present application;
fig. 17 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Impostor (masquerading) technique: a technique that disguises a single patch as a three-dimensional virtual object in real-time rendering. In the resource production stage, maps of the three-dimensional virtual object are sampled at each viewing angle and drawn onto a baking map; at runtime, the view angle map corresponding to the current observation angle is found on the baking map and displayed. With the Impostor technique, the number of patches drawn can be greatly reduced, optimizing performance.
Schematically, FIG. 1 is a baking map based on the Impostor technique. The baking map in FIG. 1 has 10 rows and 10 columns and contains 100 view angle maps, identified by the numbers 1-100; in FIG. 1, every view angle map has the same resolution.
Baking map: a set of map images of a three-dimensional virtual object under various viewing angles, based on the Impostor technique. Optionally, with the scheme shown in FIG. 1, 100 view angle maps are baked onto the baking map, i.e. each three-dimensional virtual object has 100 view angle maps on the baking map. With the baking map not enlarged and its size fixed, shooting each object with only 100 view angle maps makes the selectable range of shooting angles of the camera model too narrow, and an obvious view-switching jump easily occurs while the camera model moves continuously.
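As a concrete illustration of the uniform grid in FIG. 1, the rectangle of a view angle map inside a 10 × 10 baking map can be computed as below. This is a minimal sketch, not part of the patent's scheme; the atlas pixel size is an assumed parameter.

```python
def cell_rect(index: int, rows: int = 10, cols: int = 10, atlas_size: int = 1024):
    """Return (x, y, width, height) in pixels of view angle map `index`
    (numbered 1-100 row by row, as in FIG. 1) inside a uniform baking map.
    `atlas_size` is an assumed pixel size, not taken from the patent."""
    i = index - 1
    cell_w, cell_h = atlas_size // cols, atlas_size // rows
    return (i % cols) * cell_w, (i // cols) * cell_h, cell_w, cell_h
```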
Three-dimensional virtual space: an open space that can be used to simulate a real environment. For example, the virtual scene may include sky, land and sea, and the land may include environmental elements such as deserts and cities. The three-dimensional virtual space may also include three-dimensional virtual objects, such as throwable objects, buildings, vehicles, virtual characters, and props for arming oneself or fighting other virtual characters, and it may also simulate real environments in different weather, such as sunny days, rainy days, foggy days or dark nights. The variety of scene elements enhances the diversity and realism of the three-dimensional virtual space. Optionally, a camera model is arranged in the three-dimensional virtual space for shooting the three-dimensional virtual object.
Viewing angle: in this embodiment, the viewing angles at which the camera model shoots the three-dimensional virtual object may be divided into m first viewing angle ranges and n second viewing angle ranges. Optionally, the m first viewing angle ranges are used to capture original-resolution maps of the object, and the n second viewing angle ranges are used to capture viewing-angle-resolution maps, obtained by down-sampling from the original resolution. Optionally, the i-th of the n second viewing angle ranges may be divided into k² viewing angle sub-ranges, each corresponding to one view angle map.
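The division information itself can be held in a small data structure. The following sketch is illustrative only; the names are assumptions, and the 64/32/4 split mirrors the figures discussed later rather than anything mandated by this definition.

```python
from dataclasses import dataclass

@dataclass
class SecondRange:
    range_id: int
    k: int          # the range is split into k * k viewing angle sub-ranges

@dataclass
class ViewDivisionInfo:
    m: int                   # number of first viewing angle ranges (full resolution)
    second_ranges: list      # n second viewing angle ranges (down-sampled maps)

# Division matching FIG. 4 / FIG. 5: 64 first ranges, 32 second ranges with
# k = 2 (4 sub-ranges each) and 4 second ranges with k = 4 (16 sub-ranges each).
division = ViewDivisionInfo(
    m=64,
    second_ranges=[SecondRange(i, 2) for i in range(1, 33)]
                + [SecondRange(i, 4) for i in range(33, 37)],
)
```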
Fig. 2 is a schematic diagram of a computer system provided in an exemplary embodiment of the present application, and fig. 2 illustrates a baking map generating device 210 and a baking map using device 220, where the baking map generating device 210 generates a baking map and transmits the baking map to the baking map using device 220 as shown in fig. 2.
The baking map generating device 210 and the baking map using device 220 may be computer devices having the ability to generate and use baking maps, for example, the computer devices may be terminals or servers.
Alternatively, the baking map generating device 210 and the baking map using device 220 may be the same computer device, or they may be different computer devices. When they are different devices, they may be of the same type, e.g. both servers; or they may be of different types. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application. In the following embodiments, both the device for generating the baking map and the device for using it are described as terminals.
Having described the terminology and the environment in which the embodiments are related throughout the application, a discussion of specific embodiments follows. To achieve the purpose of setting more viewing angle maps on the baking map under the condition that the size of the baking map is not changed, fig. 3 is a flowchart of a method for generating the baking map, which is provided by an exemplary embodiment of the present application, and the method includes:
wherein m and n are integers greater than 0;
In one embodiment, the m first viewing angle ranges correspond to view angle maps at the original resolution obtained by the camera model shooting the three-dimensional virtual object, and the n second viewing angle ranges correspond to view angle maps at the viewing-angle resolution. The original resolution is the resolution of a view angle map captured by the camera model without down-sampling; the viewing-angle resolution is the resolution of such a map after down-sampling.
Illustratively, if the original resolution of a view angle map is a, a first viewing-angle resolution is 1/4 a and a second viewing-angle resolution is 1/16 a; that is, a viewing-angle resolution of 1/4 a corresponds to down-sampling the captured view angle map to a quarter of the original resolution.
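A minimal down-sampling sketch (a box filter; NumPy is an implementation choice, not from the patent) that turns a map at resolution a into 1/4 a or 1/16 a:

```python
import numpy as np

def downsample(view_map: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter down-sampling: each output pixel averages a factor x factor
    block, so a map at resolution a becomes 1/4 a (factor 2) or 1/16 a
    (factor 4), matching the example above. Expects an (H, W, C) array."""
    h, w, c = view_map.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = view_map[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(view_map.dtype)
```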
It should be noted that the m first viewing angle ranges may also correspond to down-sampled view angle maps of the three-dimensional virtual object; that is, the embodiment of the present application may also generate a baking map entirely from down-sampled view angle maps.
Schematically, fig. 4 shows a camera model provided by an exemplary embodiment of the present application shooting a three-dimensional virtual object from different viewing angles. The camera shoots the object from m first viewing angle ranges 401 and n second viewing angle ranges 402, the cameras at any two viewing angles being at the same distance from the tree (equal distance from the center of the tree). In one embodiment, the i-th of the n second viewing angle ranges is divided into k² viewing angle sub-ranges; optionally, the 1st to 32nd second viewing angle ranges are each divided into 4 viewing angle sub-ranges, and the 33rd to 36th are each divided into 16 viewing angle sub-ranges.
Each of the m first viewing angle ranges is continuous but non-overlapping with the others, each of the n second viewing angle ranges is continuous but non-overlapping, and the viewing angle sub-ranges of the i-th second viewing angle range are continuous but non-overlapping.
With reference to fig. 4, the terminal obtains, through the camera model, the first view angle maps of the three-dimensional virtual object in the m first viewing angle ranges 401, to obtain m first view angle maps.
Referring to fig. 5, fig. 5 is a schematic diagram of a baking map according to an exemplary embodiment of the present application. View angle maps 1-22, 47-50, 75-78, 127-130, 179-182, 207-210 and 235-256 in fig. 5 are the 64 first view angle maps, which are arranged in the 64 map regions 501 of the baking map.
With reference to fig. 4, the terminal obtains, through the camera model, the view angle maps of the three-dimensional virtual object in the viewing angle sub-ranges, obtaining n groups of sub-view angle maps, and performs down-sampling on the n groups of sub-view angle maps to obtain n groups of second view angle maps; the viewing angle sub-ranges are obtained by dividing the n second viewing angle ranges 402.
In an optional embodiment, the original resolution of a view angle map captured by the camera model is a. The terminal down-samples the n groups of sub-view angle maps obtained from the viewing angle sub-ranges to obtain n groups of second view angle maps, such that the n₁-th group of the n groups contains 4 view angle maps, each at resolution 1/4 a, and the n₂-th group contains 16 view angle maps, each at resolution 1/16 a.
It should be noted that the embodiment of the present application does not limit the sampling rate used to down-sample the n groups of sub-view angle maps; optionally, the n₁-th group of second view angle maps contains 4 view angle maps at resolution 1/4 a each, the n₂-th group contains 9 at 1/9 a each, and the n₃-th group contains 16 at 1/16 a each.
In the embodiment of the present application, the map area occupied by each of the m first view angle maps equals the map area occupied by each group of second view angle maps; namely, 4 × (1/4)a = a and 16 × (1/16)a = a.
Referring to fig. 5, view angle maps 23-26, 27-30, 31-34, 35-38, 39-42, 43-46, 51-54, 55-58, 59-62, 63-66, 67-70, 71-74, 79-82, 83-86, 119-122, 123-126, 131-134, 135-138, 171-174, 175-178, 183-186, 187-190, 191-194, 195-198, 199-202, 203-206, 211-214, 215-218, 219-222, 223-226, 227-230 and 231-234 each form a group of second view angle maps, 32 groups in total; the 32 groups are arranged in map region 502 of the baking map, which comprises 128 map sub-regions.
Referring to fig. 6, fig. 6 is an enlarged view of map region 503 of fig. 5, where 4 groups of second view angle maps are located: view angle maps 87-102, 103-118, 139-154 and 155-170 each constitute a group of second view angle maps. These 4 groups are arranged in map region 503 of the baking map, which comprises 64 map sub-regions.
Step 380: draw the m first view angle maps and the n groups of second view angle maps into different map areas of the baking map according to the viewing angle division information.
In one embodiment, the terminal draws the m first view angle maps and the n groups of second view angle maps into different map areas of the baking map according to the viewing angle division information in the following two steps:
S1, drawing the m first view angle maps into m map areas of the baking map according to the viewing angle division information;
In one embodiment, the terminal determines, according to the viewing angle division information and a first mapping relation, the map area corresponding to each first viewing angle range on the baking map, and draws each first view angle map into its map area according to the first mapping relation; the m first view angle maps correspond to the m first viewing angle ranges, and the first mapping relation is the mapping between the three-dimensional viewing angle of the camera model and two-dimensional coordinates on the baking map.
Schematically, fig. 7 is a schematic diagram of a first mapping relationship provided in an exemplary embodiment of the present application.
The first mapping relation converts the m first viewing angle ranges, in which the camera model shoots the three-dimensional virtual object, into the m map areas of the baking map; that is, it converts the three-dimensional viewing angle of the camera model into two-dimensional coordinates on the baking map, and thereby the m first viewing angle ranges into m map areas of the baking map.
Illustratively, the three-dimensional viewing angle passing through point a' is (1, 2, 3) and the coordinates of point a are (1, 1); the first mapping relation converts (1, 2, 3) into (1, 1).
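The patent does not disclose the exact formula of the first mapping relation; a simple latitude/longitude grid is one possible parameterisation. A minimal sketch under that assumption:

```python
import math

def view_to_grid_cell(direction, rows, cols):
    """Map a 3-D viewing direction to a (row, col) cell of the baking map.
    Assumes a latitude/longitude parameterisation; the patent's actual
    first mapping relation may differ."""
    x, y, z = direction
    r = math.sqrt(x * x + y * y + z * z)
    elevation = math.asin(y / r)                  # in [-pi/2, pi/2]
    azimuth = math.atan2(z, x)                    # in [-pi, pi)
    row = min(int((elevation / math.pi + 0.5) * rows), rows - 1)
    col = min(int((azimuth / (2 * math.pi) + 0.5) * cols), cols - 1)
    return row, col

# Example: the viewing angle (1, 2, 3) lands in some cell of a 16 x 16 grid.
print(view_to_grid_cell((1, 2, 3), 16, 16))
```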
S2, drawing the n groups of second view angle maps into n map areas of the baking map according to the viewing angle division information, each group of second view angle maps being arranged in different map sub-areas of the same map area.
In one embodiment, the i-th of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, k being an integer greater than 1. The terminal determines, according to the viewing angle division information and the first mapping relation, the map area corresponding to the i-th second viewing angle range on the baking map, that map area being divided into k² map sub-areas; the terminal then draws each second view angle map of the i-th group into one of the k² map sub-areas according to the first mapping relation.
The ith group of second visual angle maps correspond to the ith second visual angle range, and the first mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the two-dimensional coordinates on the baking map.
Schematically, fig. 7 is a schematic diagram of a first mapping relationship provided in an exemplary embodiment of the present application.
The first mapping relation converts the n second viewing angle ranges, in which the camera model shoots the three-dimensional virtual object, into n map areas of the baking map, and converts the k² viewing angle sub-ranges of the i-th second viewing angle range into the k² map sub-areas of the corresponding map area. That is, the first mapping relation converts a three-dimensional viewing angle into two-dimensional coordinates. In one embodiment, the viewing angle at which the camera model shoots the three-dimensional virtual object passes through a point a' on the object; through the first mapping relation, point a' corresponds to a point a on the baking map; the map area where point a is located is the map area corresponding to the viewing angle through point a, and the view angle map in that area is the view angle map corresponding to that viewing angle. Thus, the first mapping relation realizes the conversion from the n second viewing angle ranges to the k² map sub-areas of each of the n map areas of the baking map.
In summary, the baking map is finally generated by obtaining m first view angle maps corresponding to the m first viewing angle ranges and n groups of second view angle maps corresponding to the n second viewing angle ranges, and drawing them into different map areas of the baking map, the image resolution of a second view angle map being lower than that of a first view angle map. Compared with the prior Impostor art, reducing the image resolution of part of the view angle maps lets the baking map contain more view angle maps at an unchanged size, greatly widens the selectable range of shooting angles of the camera model shooting the three-dimensional virtual object, and makes the picture displayed on the terminal screen switch more smoothly while the camera model moves continuously.
In the above method, the i-th of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, and the k² second view angle maps of the i-th group, corresponding to those sub-ranges, are drawn into the k² map sub-areas of one of the n map areas of the baking map. This provides a way to set the resolutions of the view angle maps on the baking map so that the size of the baking map is exactly compatible with the resolutions of all the view angle maps.
FIG. 5 is a schematic view of a bake map provided in accordance with an exemplary embodiment of the present application, and FIG. 6 is an enlarged view of the map region 503 of FIG. 5.
FIG. 5 shows that the m first view angle maps are arranged in the map regions 501 of the baking map; 128 of the second view angle maps are arranged in the map regions 502, which comprise 128 map sub-regions; and 64 second view angle maps are arranged in the map region 503, which comprises 64 map sub-regions. The regions 501 form the outermost ring of the baking map, the regions 502 the middle ring, and the region 503 the innermost area.
Optionally, the view map in the map region 501 has an original resolution, the view map in the map region 502 has a first view resolution, and the view map in the map region 503 has a second view resolution, where, for example, the resolution of the view map in the map region 501 is a, the resolution of the view map in the map region 502 is 1/4a, and the resolution of the view map in the map region 503 is 1/16 a.
Optionally, the map area occupied by each of the m first view angle maps is the same as the map area occupied by each group of second view angle maps; namely, 4 × (1/4)a = a or 16 × (1/16)a = a.
Illustratively, fig. 8 is a schematic diagram of a bake map provided by an exemplary embodiment of the present application, and fig. 8 is a bake map obtained by shooting a "tree" from different perspectives by a camera model.
In order to obtain the view angle maps of the three-dimensional virtual object through the camera model, the shooting parameters of the camera must also be set. In an embodiment of the method for generating the baking map shown in fig. 3, the following steps precede step 320:
step 312, setting a first shooting parameter of the camera model;
In an alternative embodiment, a camera model is placed at a default position in the three-dimensional virtual space, and the first shooting parameter of the camera is set so that the three-dimensional virtual object is completely contained in the picture shot by the camera model at the default position.
Optionally, the first shooting parameter is a focal length of the camera model from the three-dimensional virtual object, and the shooting picture of the camera model can completely contain the three-dimensional virtual object by adjusting the focal length of the camera model from the three-dimensional virtual object.
In one embodiment, the terminal setting the first photographing parameters of the camera model may include the steps of: the terminal obtains a minimum bounding box of the three-dimensional virtual article; the terminal takes the diagonal length of the minimum bounding box as the length and width of a shot picture of the camera model; the terminal sets a first photographing parameter of the camera model based on the length and width of the photographed picture.
Wherein, the minimum bounding box refers to the minimum cube which can surround the three-dimensional virtual object.
Schematically, fig. 9 shows the picture shot by the camera model based on the first shooting parameter, where the length and width of the shot picture are equal to the diagonal length of the minimum bounding box of the "tree".
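A minimal sketch of step 312 under the bounding-box rule just described; the function and parameter names are illustrative.

```python
import math

def first_frame_size(bbox_min, bbox_max):
    """Step 312 sketch: the diagonal length of the minimum bounding box is
    used as both the length and the width of the camera model's shot
    picture, so the object stays fully in frame from every viewing angle."""
    diagonal = math.dist(bbox_min, bbox_max)
    return diagonal, diagonal

# Example: a "tree" bounded by a 2 x 6 x 2 box is framed by its diagonal.
length, width = first_frame_size((0, 0, 0), (2, 6, 2))   # both about 6.63
```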
Step 314, adjusting to obtain a second shooting parameter of the camera model according to the size of an area occupied by the three-dimensional virtual article in a shooting picture of the camera model based on the first shooting parameter;
In an optional embodiment, based on the first shooting parameter, the terminal adjusts to obtain a second shooting parameter of the camera model according to the size of the area occupied by the three-dimensional virtual object in the shot picture of the camera model.
Optionally, step 314 may further include the steps of:
step 314-1, controlling a camera to shoot the three-dimensional virtual article from different viewing angles based on the first shooting parameters to obtain a plurality of viewing angle maps;
In one embodiment, the terminal controls the camera to shoot the three-dimensional virtual object from the m first viewing angle ranges and the n second viewing angle ranges, obtaining (m + n × k²) view angle maps.
Step 314-2, overlapping pixel points at the same position corresponding to the three-dimensional virtual article in the multiple visual angle maps to obtain an overlapped image;
In one embodiment, the terminal superimposes, across the (m + n × k²) view angle maps, the pixel points corresponding to the same positions of the three-dimensional virtual object, obtaining a superimposed image; fig. 10 is a schematic view of the superimposed image provided in an exemplary embodiment of the present application.
Step 314-3, obtaining a first length ratio by quoting the length of the area with pixels of the superposed image and the length of the superposed image, and obtaining a first width ratio by quoting the width of the area with pixels of the superposed image and the width of the superposed image, wherein the area with pixels is an area occupied by pixels of the three-dimensional virtual article in the superposed image;
in one embodiment, the terminal obtains a first length ratio by quotient of the length of the pixel-carrying region of the superimposed image and the length of the superimposed image, e.g., the length of the pixel-carrying region is x1, the length of the superimposed image is x2, and the first length ratio is x1/x 2; and the terminal obtains a first width ratio by quotient of the width of the band pixel region of the superimposed image and the width of the superimposed image, for example, the width of the band pixel region is y1, the width of the superimposed image is y2, and the first width ratio is y1/y 2.
step 314-4, adjusting to obtain a second shooting parameter of the camera based on the first length ratio and the first width ratio. The second shooting parameter makes a second length ratio equal to the first length ratio, the second length ratio being the quotient of the length of any view angle map shot with the first shooting parameter and the length of any view angle map shot with the second shooting parameter; it likewise makes a second width ratio, defined analogously on widths, equal to the first width ratio.
Illustratively, the length of any view angle map shot with the first shooting parameter is x2 and the length of any view angle map shot with the second shooting parameter is x3; the second length ratio equals the first length ratio, i.e. x2/x3 = x1/x2. The width of any view angle map shot with the first shooting parameter is y2 and with the second shooting parameter is y3; the second width ratio equals the first width ratio, i.e. y2/y3 = y1/y2.
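Steps 314-2 to 314-4 can be sketched as follows. NumPy and the helper names are assumptions; since the translated proportion is ambiguous about direction, the sketch shrinks the frame by the measured under-fill factor, which matches the stated goal of making the object fill the picture.

```python
import numpy as np

def adjust_shooting_parameter(view_maps, frame_len, frame_wid):
    """Sketch of steps 314-2 to 314-4 (names are illustrative).
    view_maps: alpha masks of shape (H, W) for all (m + n * k**2)
    captures taken with the first shooting parameter."""
    overlay = np.max(np.stack(view_maps), axis=0)            # step 314-2: superimpose
    ys, xs = np.nonzero(overlay)                             # pixels of the object
    len_ratio = (xs.max() - xs.min() + 1) / overlay.shape[1]  # x1 / x2
    wid_ratio = (ys.max() - ys.min() + 1) / overlay.shape[0]  # y1 / y2
    # Step 314-4: shrink the frame by the factor the object under-fills it,
    # so the object fills the picture shot with the second parameter.
    return frame_len * len_ratio, frame_wid * wid_ratio
```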
In summary, by adjusting from the first shooting parameter to the second shooting parameter, the length and width of the shot picture come closer to the length and width of the pixel area of the three-dimensional virtual object. This ensures that the picture shot through the camera model is filled by the object to the greatest extent, so that the view angle maps drawn onto the baking map carry as large a proportion of information about the object as possible; that is, the space of the baking map is not wasted.
In order to store the region information of the view angle maps in the baking map, in the embodiment of the method for generating a baking map shown in fig. 3, step 380 is followed by:
step 392, storing the area information of the map area where the m first view maps are located in the baking map according to the view dividing information;
In one embodiment, the terminal determines, according to the viewing angle division information and a second mapping relation, the first storage position corresponding to the i-th first viewing angle range in a primary storage table, and stores at that first storage position the area information of the map area where the i-th first view angle map is located in the baking map, the i-th first view angle map being the view angle map corresponding to the i-th first viewing angle range;
the second mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the storage position on the primary storage table, and the area information comprises the coordinates of the positioning point, the height information and the width information of the map area.
In one embodiment, the area information of the map area where the first view angle map is located in the bake map includes start point coordinates of the map area where the first view angle map is located in the bake map, length information of the first view angle map, and width information of the first view angle map.
The primary storage table is used for storing the area information of the map area where the ith first view map is located in the baking map and the index information of the target storage area of the ith group of second view maps in the secondary storage table.
Optionally, the primary storage table takes the form of a two-dimensional array; for example, the primary storage table is defined as int a[9][9] = {1, 1, 1, 2, 1, 3, 1, 4, 1, 5, ..., 10, 9, 10, 10}.
The second mapping relation is used for converting the three-dimensional visual angle of the camera model into a storage position on the primary storage table, for example, the three-dimensional visual angle of the camera model is [1, 2, 3], the storage position on the primary storage table corresponding to the visual angle is a [3] [4], and the three-dimensional visual angle [1, 2, 3] of the camera model is converted into a two-dimensional integer [3, 4] of the primary storage table.
Specifically, according to the view division information and the second mapping relationship, the terminal determines a first storage position a [3] [4] corresponding to the three-dimensional view [1, 2, 3] in the primary storage table, and the terminal stores area information of a map area where the first view map corresponding to the three-dimensional view [1, 2, 3] is located in the baking map in the first storage position a [3] [4 ].
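A minimal sketch of step 392 follows. The dictionary-based table and the `view_to_slot` helper are illustrative stand-ins for the patent's two-dimensional array and second mapping relation, not its exact scheme.

```python
primary_table = {}   # (row, col) -> entry for that viewing angle range

def store_first_view_region(view_angle, region, view_to_slot):
    """Step 392 sketch: store, at the primary-table slot given by the second
    mapping relation, the region info (anchor coordinates, width, height)
    of the map area holding this first view angle map."""
    row, col = view_to_slot(view_angle)        # e.g. [1, 2, 3] -> (3, 4)
    primary_table[(row, col)] = {"kind": "first", "region": region}

# Example mirroring the text: the map area of the first view angle map for
# viewing angle [1, 2, 3] is stored at slot a[3][4].
store_first_view_region((1, 2, 3), (0, 0, 128, 128), lambda v: (3, 4))
```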
Step 394, storing the region information of the map region where the n groups of second perspective maps are located in the bake map according to the perspective division information.
In one embodiment, the terminal stores the area information of the map sub-area where each second perspective map in the ith group of second perspective maps is located in the baking map in the target storage area of the secondary storage table; the ith group of second visual angle maps corresponds to the ith second visual angle range; the terminal determines a second storage position corresponding to the ith second visual angle range in the primary storage table according to the visual angle division information and the second mapping relation; the terminal stores the index information of the target storage area of the secondary storage table in a second storage position;
and the second mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the storage position on the primary storage table.
The target memory area of the secondary memory table is used for storing the area information of the sub-area of the baking map where each second perspective map in the ith group of second perspective maps is located.
Optionally, the secondary storage table takes the form of a two-dimensional array; for example, the secondary storage table is defined as int b[9][9] = {1, 1, 1, 2, 1, 3, 1, 4, 1, 5, ..., 10, 9, 10, 10}.
The index information comprises the serial number of a secondary storage table where the ith group of second visual angle maps are located, the start point coordinates of the ith group of second visual angle maps in a target storage area of the secondary storage table, and the data length of the secondary storage table occupied by the ith group of second visual angle maps.
Illustratively, the number of the secondary storage table where the ith group of second view maps is located is 1, the starting point of the target storage area of the secondary storage table of the ith group of second view maps is b [0] [0], the data length of the secondary storage table occupied by the ith group of second view maps is 16, i.e. from b [0] [0] to b [1] [5], i.e. the target storage area is from b [0] [0] to b [1] [5 ].
Optionally, the second storage location further stores flag bits corresponding to the n second viewing angle ranges, and the flag bits are used to mark the n second viewing angle ranges corresponding to the second storage location.
Wherein, the terminal stores the area information of the map sub-area where each second viewing angle map in the ith group of second viewing angle maps is located in the baking map in a secondary storage table, and the method comprises the following steps:
According to the viewing angle division information and a third mapping relation, the terminal determines, in the target storage area of the secondary storage table, the third storage position corresponding to the viewing angle range of the j-th second view angle map in the i-th group of second view angle maps; the terminal stores at the third storage position the area information of the map sub-area where the j-th second view angle map is located in the baking map;
and the third mapping relation is the mapping relation between the three-dimensional view angle of the camera model and the storage position on the secondary storage table.
In one embodiment, the area information of the map area where the second perspective map is located in the bake map includes: the coordinates of the starting point of the map area where the second view map is located in the baking map, the length information of the second view map, and the width information of the second view map.
The third mapping relation is used for converting the three-dimensional visual angle of the camera model into a storage position on the secondary storage table, for example, the three-dimensional visual angle of the camera model is [2, 3, 4], and the storage position of a target storage area of the secondary storage table corresponding to the visual angle is b [1] [3], so that the three-dimensional visual angle [2, 3, 4] of the camera model is converted into a two-dimensional integer [1, 3] of the secondary storage table.
In one embodiment, according to the viewing angle division information and the second mapping relation, the terminal determines that the storage position corresponding to the three-dimensional viewing angle [2, 3, 4] in the primary storage table is a[7][8]; from the index information of the target storage area stored at a[7][8], the terminal determines the target storage area to be b[0][0] to b[1][5]; and according to the viewing angle division information and the third mapping relation, the terminal determines b[1][3] in the target storage area to be the third storage position, which stores the area information of the map sub-area where the second view angle map corresponding to [2, 3, 4] is located in the baking map.
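Putting the two tables together, the lookup walked through above can be sketched as follows. This is a minimal sketch: the dictionary-based tables and the `to_primary_slot` / `to_secondary_slot` functions are illustrative stand-ins for the patent's two-dimensional arrays and its second and third mapping relations.

```python
def lookup_region(view_angle, primary_table, secondary_tables,
                  to_primary_slot, to_secondary_slot):
    """Two-level lookup sketch: first viewing angle ranges resolve directly
    in the primary table; second viewing angle ranges follow index info
    into a target storage area of a secondary table."""
    entry = primary_table[to_primary_slot(view_angle)]   # second mapping relation
    if entry["kind"] == "first":
        return entry["region"]                           # area info stored in place
    table = secondary_tables[entry["table_no"]]          # index info -> secondary table
    return table[to_secondary_slot(view_angle)]          # third mapping relation
```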
To sum up, by providing a primary storage table and a secondary storage table, storing in the primary table the area information of the map areas of the m first view angle maps together with the index information of the target storage areas of the n groups of second view angle maps, and storing in the target storage areas of the secondary table the area information of the map sub-areas of the n groups of second view angle maps, the region information of all view angle maps in the baking map is recorded; when the baking map is used, the corresponding view angle map can thus be found on it from the two tables. This standardizes the terminal's procedure for determining a view angle map and improves its efficiency while preserving accuracy.
Having described embodiments of the method for generating a baking map, embodiments of using a baking map are discussed next. To provide more view angle maps on the baking map while its size stays unchanged, fig. 11 is a flowchart of a method for using the baking map according to an exemplary embodiment of the present application; the method includes:
acquiring a baking map obtained by drawing m first view angle maps and n groups of second view angle maps into different map areas of the baking map according to viewing angle division information; the m first view angle maps are acquired by capturing, through a camera model, a first view angle map of the three-dimensional virtual object in each first viewing angle range; the n groups of second view angle maps are obtained by capturing, through the camera model, view angle maps of the object in the viewing angle sub-ranges to obtain n groups of sub-view angle maps and then down-sampling them; each viewing angle sub-range is obtained by dividing a second viewing angle range; and, according to the viewing angle division information, the m first viewing angle ranges and the n second viewing angle ranges form the acquisition viewing angle range of the object;
schematically, fig. 4 shows a schematic diagram of a camera model provided by an exemplary embodiment of the present application capturing a three-dimensional virtual object from different perspectives. FIG. 4 shows a first view of a camera from mThe three-dimensional virtual object is shot by the angle range and the n second visual angle ranges, wherein the shooting positions of the cameras on any two visual angles are equal to the distance from the tree (equal to the distance from the middle point of the tree). Alternatively, fig. 4 shows m first viewing angle ranges 401 and n second viewing angle ranges 402. In one embodiment, the ith of the n second view angle ranges is divided into k2Optionally, the 1 st to 32 th ones of the n second viewing angle ranges are divided into 4 viewing angle sub-ranges, and the 33 th to 36 th ones of the n second viewing angle ranges are divided into 16 viewing angle sub-ranges.
Referring to fig. 5, view angle maps 1-22, 47-50, 75-78, 127-130, 179-182, 207-210 and 235-256 in fig. 5 are the 64 first view angle maps, which are arranged in the 64 map regions 501 of the baking map.
Referring to fig. 5 and 6 in combination, in fig. 5 the viewing angle maps 23-26, 27-30, 31-34, 35-38, 39-42, 43-46, 51-54, 55-58, 59-62, 63-66, 67-70, 71-74, 79-82, 83-86, 119-122, 123-126, 131-134, 135-138, 171-174, 175-178, 183-186, 187-190, 191-194, 195-198, 199-202, 203-206, 211-214, 215-218, 219-222, 223-226, 227-230, and 231-234 each constitute a group of second viewing angle maps, forming 32 groups of second viewing angle maps in total; the 32 groups of second viewing angle maps are arranged on the map areas 502 of the baking map, and the map areas 502 comprise 128 map sub-areas.
Referring to fig. 6, fig. 6 is an enlarged view of the map region 501 in fig. 5 where 4 groups of second viewing angle maps are located in the baking map, wherein the viewing angle maps 87-102 constitute a group of second viewing angle maps, the viewing angle maps 103-118 constitute a group of second viewing angle maps, the viewing angle maps 139-154 constitute a group of second viewing angle maps, and the viewing angle maps 155-170 constitute a group of second viewing angle maps.
Wherein the m first viewing angle ranges are continuous but non-overlapping with one another, the n second viewing angle ranges are continuous but non-overlapping with one another, and the viewing angle sub-ranges of the ith second viewing angle range are continuous but non-overlapping with one another.
In an alternative embodiment, the m first perspective maps are mapped and drawn to m map regions of the baked map according to the perspective division information, the n groups of second perspective maps are mapped and drawn to n map regions of the baked map according to the perspective division information, and each group of second perspective maps in the n groups of second perspective maps are arranged in different map sub-regions in the same map region.
Optionally, the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, k being an integer not less than 2. The n groups of second viewing angle maps are mapped as follows: according to the viewing angle division information and the first mapping relation, the terminal determines the map area corresponding to the ith second viewing angle range on the baking map, and then maps and draws each second viewing angle map in the ith group of second viewing angle maps to one of the k² map sub-areas according to the first mapping relation;
wherein the map area corresponding to the ith second viewing angle range is divided into k² map sub-areas, the ith group of second viewing angle maps corresponds to the ith second viewing angle range, and the first mapping relation is the mapping relation between the three-dimensional viewing angle of the camera model and the two-dimensional coordinates on the baking map.
Schematically, fig. 7 is a schematic diagram of the first mapping relation provided by an exemplary embodiment of the present application. The first mapping relation converts the m first viewing angle ranges from which the camera model shoots the three-dimensional virtual article into the m map areas of the baking map; that is, the first mapping relation converts a three-dimensional viewing angle of the camera model into two-dimensional coordinates on the baking map.
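The patent does not give the formula for this conversion. As a minimal sketch only, assuming the collection viewing angles are parameterized by azimuth and elevation and the map areas are laid out on a uniform square grid (all names below are illustrative, not the patent's exact scheme), such a first mapping relation could look like:

```cpp
#include <algorithm>
#include <cmath>

struct AtlasCoord {
    float u, v;  // top-left corner of the map area, in [0,1] atlas space
};

// gridSize: number of map areas per atlas row/column, e.g. 16 for a
// 16 x 16 layout holding 256 viewing angle maps as in fig. 5.
AtlasCoord ViewAngleToAtlas(float azimuthRad, float elevationRad, int gridSize) {
    const float kPi = 3.14159265358979f;
    float s = std::fmod(azimuthRad / (2.0f * kPi) + 1.0f, 1.0f);  // azimuth -> [0,1)
    float t = elevationRad / kPi;                                 // elevation -> [0,1]
    int col = static_cast<int>(s * gridSize) % gridSize;
    int row = std::min(static_cast<int>(t * gridSize), gridSize - 1);
    float cell = 1.0f / gridSize;
    return AtlasCoord{col * cell, row * cell};
}
```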
Optionally, the area of the map region occupied by each of the m first perspective maps is the same as the area of the map region occupied by each group of second perspective maps.
wherein the collection view angle is a view angle of the camera model towards the three-dimensional virtual object.
In one embodiment, the terminal acquires the collection viewing angle of the camera model towards the three-dimensional virtual article (or towards a patch that stands in for the three-dimensional virtual article), and determines the three-dimensional coordinates of the collection viewing angle. Optionally, when the terminal obtains the collection viewing angle of the camera model towards a patch standing in for the three-dimensional virtual article, the three-dimensional coordinates of the collection viewing angle are obtained based on the billboard technique.
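As a rough sketch of this billboard-based step, assuming the viewing angle is expressed as azimuth and elevation in the patch's model space (the matrix layout and function names below are illustrative assumptions, not the patent's implementation):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Apply a 4x4 row-major world-to-model matrix to a point (w = 1).
static Vec3 TransformPoint(const float m[16], const Vec3& p) {
    return Vec3{
        m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
        m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
        m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
    };
}

// worldToModel is assumed to be the inverse of the patch's model matrix.
void CollectionViewAngle(const float worldToModel[16], const Vec3& cameraWorldPos,
                         float& azimuthRad, float& elevationRad) {
    Vec3 c = TransformPoint(worldToModel, cameraWorldPos);
    float len = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);
    if (len <= 0.0f) { azimuthRad = 0.0f; elevationRad = 0.0f; return; }
    azimuthRad   = std::atan2(c.z, c.x);  // angle around the up (y) axis
    elevationRad = std::acos(c.y / len);  // angle measured from the up axis
}
```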
In one embodiment, the terminal determines a target first view map corresponding to a first collection view from the baking map according to view division information; or the terminal determines a target second visual angle map corresponding to the second acquisition visual angle from the baking map according to the visual angle division information.
Wherein the first collection viewing angle is within one of the m first viewing angle ranges, the second collection viewing angle is within a target viewing angle sub-range of the ith of the n second viewing angle ranges, and i is an integer greater than 0.
In one embodiment, the primary storage table stores area information of a map area where m first viewing angle maps corresponding to m first viewing angle ranges are located in the bake map.
Optionally, the determining, by the terminal, the target first viewing angle map corresponding to the first collection viewing angle from the baking map according to the viewing angle division information includes the following steps (a code sketch follows the steps):
firstly, the terminal determines a first storage position corresponding to a first acquisition visual angle in a primary storage table according to visual angle division information and a second mapping relation;
secondly, the terminal acquires the area information of a map area where the first collection visual angle is located in the baking map from the first storage position;
thirdly, the terminal determines the target first viewing angle map corresponding to the first collection viewing angle from the baking map according to the area information of the map area where the first collection viewing angle is located in the baking map;
the second mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the storage position on the primary storage table, and the area information comprises the coordinates of the positioning point, the height information and the width information of the map area.
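The three steps above amount to a single table lookup. The record layout and the quantization used for the second mapping relation below are assumptions for illustration:

```cpp
#include <algorithm>
#include <vector>

// Area information at each storage position: positioning point
// coordinates plus width and height of the map area.
struct AreaInfo { float x, y, width, height; };

// Sketch of the second mapping relation (assumed form): the 3D viewing
// angle, normalized per axis to [0,1], is quantized to a slot of the
// primary storage table, which yields the area information.
AreaInfo LookupFirstViewMap(const std::vector<AreaInfo>& primaryTable,
                            float azimuth01, float elevation01,
                            int rows, int cols) {
    int r = std::min(static_cast<int>(elevation01 * rows), rows - 1);
    int c = std::min(static_cast<int>(azimuth01 * cols), cols - 1);
    return primaryTable.at(r * cols + c);  // the "first storage position"
}
```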
In one embodiment, the primary storage table stores index information of the target storage areas of the secondary storage table corresponding to the n second viewing angle ranges; the secondary storage table stores the area information of the map areas where the k² second viewing angle maps corresponding to the ith second viewing angle range are located in the baking map.
Optionally, the determining, by the terminal, the target second viewing angle map corresponding to the second collection viewing angle from the baking map according to the viewing angle division information includes the following steps (a code sketch follows the steps):
firstly, according to the view dividing information and the second mapping relation, the terminal determines a second storage position corresponding to the ith second view range in a primary storage table;
secondly, the terminal acquires target index information from a second storage position, wherein the target index information is index information of a target storage area of the ith second visual angle range in the secondary storage table;
thirdly, according to the target index information and a third mapping relation, the terminal determines a third storage position in the target storage area in the secondary storage table, and the third storage position stores area information of a map sub-area where a target second visual angle map corresponding to the target visual angle sub-range is located in the baking map;
fourthly, the terminal acquires sub-region information from the third storage position, wherein the sub-region information is the region information of the map sub-region where the target second visual angle map is located in the baking map;
fifthly, according to the target sub-area information, the terminal determines the target second viewing angle map from the baking map;
and the third mapping relation is the mapping relation between the three-dimensional view angle of the camera model and the storage position on the secondary storage table.
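A minimal sketch of the five-step lookup, with an assumed record layout in which the third mapping relation reduces to a start-plus-offset computation (names are illustrative):

```cpp
#include <vector>

struct AreaInfo { float x, y, width, height; };

// Primary-table record for a divided viewing angle range (layout assumed):
// it holds the index information into the secondary storage table.
struct PrimaryRecord {
    int tableIndex;  // which target storage area of the secondary table
    int start;       // starting offset inside that target storage area
};

// Second storage position -> target index information -> third storage
// position -> sub-area information on the baking map.
AreaInfo LookupSecondViewMap(const std::vector<PrimaryRecord>& primaryTable,
                             const std::vector<std::vector<AreaInfo>>& secondaryTable,
                             int secondRangeSlot, int subRangeIndex) {
    const PrimaryRecord& rec = primaryTable.at(secondRangeSlot);
    const std::vector<AreaInfo>& target = secondaryTable.at(rec.tableIndex);
    return target.at(rec.start + subRangeIndex);
}
```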
In summary, through the primary storage table and the secondary storage table, the terminal determines the viewing angle map corresponding to the collection viewing angle from the baking map, the baking map being obtained by mapping and drawing m first viewing angle maps and n groups of second viewing angle maps, where the resolution of the second viewing angle maps is lower than that of the first viewing angle maps. Compared with a baking map generated using the existing impostor technology, the selectable range of collection viewing angles is wider, and the picture displayed on the terminal screen switches more smoothly during the continuous movement of the camera model.
The method also standardizes the process of determining the visual angle mapping by the terminal, and improves the efficiency of determining the visual angle mapping by the terminal on the premise of ensuring the accuracy.
Fig. 12 is a flowchart of a method for generating a baking map provided in an exemplary embodiment of the present application, where the method for generating the baking map includes:
The terminal sets the division coefficients for impostor baking. Fig. 13 shows an interface on which a developer sets the division coefficients during development: for a baking map whose layout would originally be sampled at 10 × 10, the outermost ring and the second outermost ring retain the original map size, each viewing angle map in the third and fourth rings is divided 2 × 2, and each viewing angle map in the fifth ring is divided 4 × 4.
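One possible encoding of these division coefficients, assuming a simple per-ring table (the structure itself is an illustrative assumption, not the interface of fig. 13):

```cpp
// Division coefficients matching the 10 x 10 example above.
struct RingDivision {
    int ring;    // 1 = outermost ring of the 10 x 10 layout
    int divide;  // per-axis division: 1 = keep size, 2 = 2x2, 4 = 4x4
};

const RingDivision kDivisionCoefficients[] = {
    {1, 1}, {2, 1},  // outermost two rings keep the original map size
    {3, 2}, {4, 2},  // third and fourth rings are each divided 2 x 2
    {5, 4},          // fifth ring is divided 4 x 4
};
```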
the terminal sets all shooting angle information of the camera.
The terminal judges whether the shooting viewing angle needs to be divided; if it needs to be divided, step 1204 is entered, and if it does not need to be divided, step 1203 is entered.
the terminal configures the position of the tree in the scene.
the terminal sets all shooting angle information of the camera.
the terminal adjusts the camera parameters so that the tree can fill the camera screen.
The terminal controls the camera model to shoot, composes the shot viewing angle maps into a baking map, and stores the position information of each viewing angle map on the baking map in a conversion table and a division table.
The conversion table uses 4 floating-point numbers to record the information of each viewing angle (both table layouts are sketched as structs after these lists). If the viewing angle does not need to be further divided, the 4 numbers recorded in the conversion table are:
x is the starting coordinate X of the map for that viewing angle;
y is the starting coordinate Y of the map for that viewing angle;
z is the map width for that viewing angle;
w is the map height for that viewing angle;
if the viewing angle needs to be further divided, the 4 numbers recorded in the conversion table are:
x is the index of the division table to be searched;
y is the starting point in the division table;
z is the length occupied by the viewing angle;
w = 0 (the division identification bit; if this bit is 0, it means the viewing angle is divided);
Each entry of the division table likewise consists of four floating-point numbers. If the viewing angle needs to be further divided, the 4 numbers recorded in the division table are respectively:
x is the starting coordinate X of the map for the divided viewing angle;
y is the starting coordinate Y of the map for the divided viewing angle;
z is the map width of the divided viewing angle;
w is the map height of the divided viewing angle;
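Written out as data structures, the two tables described above might look as follows; the struct names are illustrative, while the field meanings follow the text:

```cpp
struct ConversionEntry {  // one entry per shooting viewing angle
    float x, y;  // undivided: map start X/Y | divided: division-table index and start
    float z;     // undivided: map width     | divided: length occupied by the angle
    float w;     // undivided: map height    | divided: 0 (division identification bit)
};

struct DivisionEntry {    // one entry per divided viewing angle sub-range
    float x, y;  // starting coordinates X/Y of the divided viewing angle's map
    float z, w;  // width and height of the divided viewing angle's map
};
```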
in step 1208, export a map file, a partition table, and a translation table.
The terminal exports the generated baking map, the partition table, and the conversion table.
FIG. 14 is a flow chart of a method of using a bake map provided by an exemplary embodiment of the present application, the method of using the bake map including:
and (4) drawing a patch which always faces the camera model by the terminal, and optionally drawing the patch by adopting a billboard method.
The terminal converts the position of the camera model into the model space of the patch, obtains the current shooting viewing angle of the camera model based on the patch, and judges whether the viewing angle is divided according to whether the w value in the conversion table is greater than 0. If the viewing angle is divided, step 1404 is entered; if it is not divided, step 1403 is entered.
The terminal reads the position of the viewing angle map and samples it.
The terminal finds the corresponding division table position according to the stored division information, reads the position stored for the divided viewing angle map, and samples it.
The terminal renders and displays the sampled viewing angle map on the terminal screen.
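Putting the runtime steps together, a minimal sketch of the branch on the w value, reusing the table layouts sketched earlier (the single flat division table and its offset arithmetic are simplifying assumptions):

```cpp
#include <vector>

struct ConversionEntry { float x, y, z, w; };  // as in the generation sketch
struct DivisionEntry   { float x, y, z, w; };
struct Rect { float x, y, w, h; };             // sampling region on the baking map

// w > 0: the entry directly stores the map position and size;
// w == 0: the viewing angle is divided and the division table is consulted.
Rect SelectViewMap(const std::vector<ConversionEntry>& conversion,
                   const std::vector<DivisionEntry>& division,
                   int viewSlot, int subRangeIndex) {
    const ConversionEntry& e = conversion.at(viewSlot);
    if (e.w > 0.0f) {
        return Rect{e.x, e.y, e.z, e.w};  // undivided viewing angle
    }
    // Divided: y is the starting point in the division table.
    const DivisionEntry& d = division.at(static_cast<int>(e.y) + subRangeIndex);
    return Rect{d.x, d.y, d.z, d.w};
}
```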
Fig. 15 is a schematic diagram of a baking map generation apparatus 1500 provided in an exemplary embodiment of the present application, where the baking map generation apparatus 1500 includes:
the determining module 1501 is configured to determine view division information of m first view ranges and n second view ranges of the camera model, where the first view range does not divide a view sub-range, and the second view range is divided into multiple view sub-ranges;
the processing module 1502 is configured to obtain, through the camera model, first view maps of the three-dimensional virtual article in a first view range, to obtain m first view maps;
the processing module 1502 is further configured to obtain, by using the camera model, a second view map of the three-dimensional virtual article in the view sub-range, to obtain n groups of sub-view maps; performing down-sampling processing on the n groups of sub-viewing angle maps to obtain n groups of second viewing angle maps;
a drawing module 1503, configured to map and draw the m first view maps and the n groups of second view maps to different map regions of the baking map according to the view division information;
wherein m and n are integers more than 0.
In an alternative embodiment, the drawing module 1503 is further configured to map and draw the m first viewing angle maps into m map areas of the baking map according to the viewing angle division information.
In an optional embodiment, the drawing module 1503 is further configured to map n groups of second view maps to n map regions of the baking map according to the view dividing information, where each group of the n groups of second view maps is arranged in a different map sub-region in the same map region.
In an alternative embodiment, the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, and k is an integer not less than 2.
In an optional embodiment, the drawing module 1503 is further configured to determine, according to the viewing angle division information and the first mapping relation, the map area corresponding to the ith second viewing angle range on the baking map, where the map area corresponding to the ith second viewing angle range is divided into k² map sub-areas.
In an alternative embodiment, the drawing module 1503 is further configured to map and draw each second viewing angle map in the ith group of second viewing angle maps to one of the k² map sub-areas according to the first mapping relation; the ith group of second viewing angle maps corresponds to the ith second viewing angle range.
The first mapping relation is the mapping relation between the three-dimensional viewing angle of the camera model and the two-dimensional coordinates on the baking map.
In an alternative embodiment, each of the m first perspective maps occupies the same area of the map region as each set of the second perspective maps.
In an alternative embodiment, the apparatus further includes a setup module 1504.
In an alternative embodiment, the setting module 1504 is used to set the first shooting parameters of the camera model.
In an optional embodiment, the setting module 1504 is further configured to adjust the second shooting parameter of the camera model according to the size of the area occupied by the shooting picture of the three-dimensional virtual article in the camera model based on the first shooting parameter.
In an alternative embodiment, the setup module 1504 is also used to obtain a minimal bounding box of the three-dimensional virtual item.
In an alternative embodiment, the setting module 1504 is further configured to use the diagonal length of the minimum bounding box as the length and width of the captured frame of the camera model.
In an alternative embodiment, the setting module 1504 is further configured to set a first shooting parameter of the camera model based on the length and width of the shooting picture.
In an optional embodiment, the setting module 1504 is further configured to control the camera model to shoot the three-dimensional virtual object from different viewing angles based on the first shooting parameter, so as to obtain a plurality of viewing angle maps.
In an optional embodiment, the setting module 1504 is further configured to superimpose pixel points corresponding to the same position of the three-dimensional virtual article in the multiple view maps to obtain a superimposed image.
In an optional embodiment, the setting module 1504 is further configured to obtain a first length ratio by quotient of the length of the area with pixels of the superimposed image and the length of the superimposed image, and obtain a first width ratio by quotient of the width of the area with pixels of the superimposed image and the width of the superimposed image, where the area with pixels is an area occupied by pixels of the three-dimensional virtual article in the superimposed image.
In an optional embodiment, the setting module 1504 is further configured to adjust the second shooting parameter of the camera model based on the first length ratio and the first width ratio. The second shooting parameter makes a second length ratio equal to the first length ratio, where the second length ratio is the quotient of the length of any viewing angle map shot with the first shooting parameter and the length of the same viewing angle map shot with the second shooting parameter; it likewise makes a second width ratio equal to the first width ratio, where the second width ratio is the corresponding quotient of the widths.
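As an illustrative reading of this adjustment, the measured ratios can be applied directly to the first shooting parameter's picture size; the relation below is a sketch under that assumption, not the patent's exact formula:

```cpp
// The measured occupied-pixel ratios shrink the shot picture so the
// article fills it under the second shooting parameter.
struct ShootingParams { float frameLength, frameWidth; };

ShootingParams AdjustSecondParams(const ShootingParams& first,
                                  float firstLengthRatio,  // pixel-area length / image length
                                  float firstWidthRatio) { // pixel-area width / image width
    return ShootingParams{first.frameLength * firstLengthRatio,
                          first.frameWidth  * firstWidthRatio};
}
```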
In an alternative embodiment, the apparatus further includes a storage module 1505.
In an alternative embodiment, the storage module 1505 is used for storing the area information of the map area where the m first viewing angle maps are located in the bake map according to the viewing angle division information.
In an alternative embodiment, the storage module 1505 is further configured to store the region information of the map region where the n sets of second viewing angle maps are located in the bake map according to the viewing angle division information.
In an alternative embodiment, the storage module 1505 is further configured to determine a first storage location corresponding to the ith first view angle range in the primary storage table according to the view angle partition information and the second mapping relationship.
In an alternative embodiment, the storing module 1505 is further configured to store the area information of the map area where the ith first viewing angle map is located in the baking map in the first storage location, where the ith first viewing angle map is the viewing angle map corresponding to the ith first viewing angle range.
Wherein i is an integer greater than 0, the second mapping relation is a mapping relation between a three-dimensional view angle of the camera model and a storage position on the primary storage table, and the region information includes positioning point coordinates, height information and width information of the map region.
In an alternative embodiment, the storing module 1505 is further configured to store the region information of the map sub-region where each second perspective map in the ith group of second perspective maps is located in the bake map in the target storage region of the secondary storage table; the ith group of second viewing angle maps corresponds to the ith second viewing angle range.
In an alternative embodiment, the storage module 1505 is further configured to determine a second storage location corresponding to the ith second view angle range in the primary storage table according to the view angle partition information and the second mapping relationship.
In an alternative embodiment, the storage module 1505 is further configured to store index information of the target storage area of the secondary storage table in the second storage location.
And the second mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the storage position on the primary storage table.
In an optional embodiment, the storage module 1505 is further configured to determine, according to the view dividing information and the third mapping relationship, a third storage location corresponding to a view range of a jth second view map in the ith group of second view maps in the target storage area of the secondary storage table.
In an alternative embodiment, the storage module 1505 is further configured to store the area information of the sub-area of the map where the jth second perspective map is located in the bake map in the third storage location.
Wherein the third mapping relation is the mapping relation between the three-dimensional viewing angle of the camera model and the storage position on the secondary storage table.
In summary, the baking map is finally generated by obtaining m first viewing angle maps corresponding to the m first viewing angle ranges and n groups of second viewing angle maps corresponding to the n second viewing angle ranges, and mapping and drawing the m first viewing angle maps and the n groups of second viewing angle maps to different map areas of the baking map, where the image resolution of the second viewing angle maps is lower than that of the first viewing angle maps. Compared with the prior impostor technology, by reducing the image resolution of part of the viewing angle maps, the apparatus enables the generated baking map to contain more viewing angle maps at an unchanged size, greatly widens the selectable range of shooting viewing angles from which the camera model shoots the three-dimensional virtual article, and makes the switching of the pictures displayed on the terminal screen smoother during the continuous movement of the camera model.
In the above apparatus, the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, and each of the k² second viewing angle maps in the ith group of second viewing angle maps corresponding to those sub-ranges is mapped to one of the k² map sub-areas of one of the n map areas of the baking map. This provides a way to set the resolution of the viewing angle maps on the baking map such that the size of the baking map is exactly compatible with the resolutions of all the viewing angle maps.
Fig. 16 is a schematic diagram of a baking map using apparatus 1600 provided in an exemplary embodiment of the present application, the apparatus including:
the obtaining module 1601 is configured to obtain a baking map of the three-dimensional virtual article, where the baking map is obtained by mapping and drawing m first viewing angle maps and n groups of second viewing angle maps to different map areas of the baking map according to viewing angle division information; the m first viewing angle maps are obtained by acquiring, through a camera model, a first viewing angle map of the three-dimensional virtual article in each first viewing angle range; the n groups of second viewing angle maps are obtained by acquiring, through the camera model, a second viewing angle map of the three-dimensional virtual article in each viewing angle sub-range to obtain n groups of sub-viewing angle maps, and then performing down-sampling processing on the n groups of sub-viewing angle maps; each viewing angle sub-range is obtained by dividing a second viewing angle range, and the m first viewing angle ranges and the n second viewing angle ranges form, according to the viewing angle division information, the collection viewing angle range of the three-dimensional virtual article;
a determining module 1602, configured to determine an acquisition viewing angle of the three-dimensional virtual article, where the acquisition viewing angle is a viewing angle of the camera model towards the three-dimensional virtual article;
a determining module 1602, further configured to determine a view map corresponding to the collection view from the baking map;
wherein m and n are integers more than 0.
In an alternative embodiment, the m first perspective maps are mapped and drawn to m map regions of the baked map according to the perspective division information, the n groups of second perspective maps are mapped and drawn to n map regions of the baked map according to the perspective division information, and each group of second perspective maps in the n groups of second perspective maps are arranged in different map sub-regions in the same map region.
In an alternative embodiment, the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, and k is an integer not less than 2.
In an alternative embodiment, the n groups of second viewing angle maps are generated by determining the map area corresponding to the ith second viewing angle range on the baking map according to the viewing angle division information and the first mapping relation, and then mapping each second viewing angle map in the ith group of second viewing angle maps to one of the k² map sub-areas according to the first mapping relation.
Wherein the map area corresponding to the ith second viewing angle range is divided into k² map sub-areas, the ith group of second viewing angle maps corresponds to the ith second viewing angle range, and the first mapping relation is the mapping relation between the three-dimensional viewing angle of the camera model and the two-dimensional coordinates on the baking map.
In an alternative embodiment, each of the m first perspective maps occupies the same area of the map region as each set of the second perspective maps.
In an optional embodiment, the determining module 1602 is further configured to determine a target first view angle map corresponding to the first collection view angle from the baking map according to the view angle division information.
In an optional embodiment, the determining module 1602 is further configured to determine a target second view angle map corresponding to the second collection view angle from the baking map according to the view angle division information.
Wherein the first collection viewing angle is within one of the m first viewing angle ranges, the second collection viewing angle is within a target viewing angle sub-range of the ith of the n second viewing angle ranges, and i is an integer greater than 0.
In an optional embodiment, the determining module 1602 is further configured to determine, according to the view dividing information and the second mapping relationship, a first storage location corresponding to the first collection view in the primary storage table.
In an alternative embodiment, the determining module 1602 is further configured to obtain the area information of the map area where the first collection view is located in the baking map from the first storage location.
In an optional embodiment, the determining module 1602 is further configured to determine, according to the area information of the map area where the first collection view is located in the baked map, a target first view map corresponding to the first collection view from the baked map.
The second mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the storage position on the primary storage table, and the area information comprises the coordinates of the positioning point, the height information and the width information of the map area.
In an optional embodiment, the primary storage table stores index information of the target storage areas of the secondary storage table corresponding to the n second viewing angle ranges; the secondary storage table stores the area information of the map areas where the k² second viewing angle maps corresponding to the ith second viewing angle range are located in the baking map.
In an optional embodiment, the determining module 1602 is further configured to determine, according to the view dividing information and the second mapping relationship, a second storage location corresponding to the ith second view range in the primary storage table.
In an optional embodiment, the determining module 1602 is further configured to obtain target index information from the second storage location, where the target index information is index information of a target storage area of the ith second view angle range in the secondary storage table.
In an optional embodiment, the determining module 1602 is further configured to determine, according to the target index information and the third mapping relationship, a third storage location in the target storage area in the secondary storage table, where the third storage location stores area information of a map sub-area where a target second view map corresponding to the target view sub-range is located in the bake map.
In an alternative embodiment, the determining module 1602 is further configured to obtain the sub-region information from the third storage location, where the sub-region information is the region information of the map sub-region where the target second perspective map is located in the bake map.
In an alternative embodiment, the determining module 1602 is further configured to determine the target second viewing angle map from the baking map according to the target sub-area information.
And the third mapping relation is the mapping relation between the three-dimensional view angle of the camera model and the storage position on the secondary storage table.
In summary, through the primary storage table and the secondary storage table, the apparatus determines the viewing angle map corresponding to the collection viewing angle from the baking map, where the baking map is obtained by mapping and drawing m first viewing angle maps and n groups of second viewing angle maps, and the resolution of the second viewing angle maps is lower than that of the first viewing angle maps. Compared with a baking map generated using the existing impostor technology, the selectable range of collection viewing angles is wider, and the picture displayed on the terminal screen switches more smoothly during the continuous movement of the camera model.
The device also standardizes the flow of determining the visual angle mapping, and improves the efficiency of determining the visual angle mapping by the device on the premise of ensuring the accuracy.
Fig. 17 shows a block diagram of a computer device provided in an exemplary embodiment of the present application. The computer device 1700 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Computer device 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, computer device 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as 4-core processors, 8-core processors, and the like. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor, which is a processor for Processing data in an awake state, also called a Central Processing Unit (CPU), and a coprocessor; a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement a method of generating a baking map or a method of using a baking map as provided by the method embodiments herein.
In some embodiments, computer device 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuit 1704, display screen 1705, camera assembly 1706, audio circuit 1707, positioning assembly 1708, and power supply 1709.
The peripheral interface 1703 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1704 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above the surface of the display screen 1705. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display screen 1705 may be one, disposed on a front panel of computer device 1700; in other embodiments, the display screens 1705 may be at least two, each disposed on a different surface of the computer device 1700 or in a folded design; in other embodiments, display 1705 may be a flexible display, disposed on a curved surface or on a folded surface of computer device 1700. Even further, the display screen 1705 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, inputting the electric signals into the processor 1701 for processing, or inputting the electric signals into the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location on the computer device 1700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1707 may also include a headphone jack.
In some embodiments, computer device 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the computer device 1700. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1701 may control the display screen 1705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for the acquisition of game or user motion data.
The gyro sensor 1712 may detect a body direction and a rotation angle of the computer apparatus 1700, and the gyro sensor 1712 may acquire a 3D motion of the user on the computer apparatus 1700 in cooperation with the acceleration sensor 1711. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1712: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1713 may be disposed on a side frame of the computer device 1700 and/or underneath the display screen 1705. When the pressure sensor 1713 is disposed on the side frame of the computer device 1700, a user's grip signal on the computer device 1700 can be detected, and the processor 1701 performs left/right hand recognition or shortcut operations based on the grip signal acquired by the pressure sensor 1713. When the pressure sensor 1713 is disposed below the display screen 1705, the processor 1701 controls an operability control on the UI interface according to the user's pressure operation on the display screen 1705. The operability control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is configured to collect the user's fingerprint, and the processor 1701 identifies the user according to the fingerprint collected by the fingerprint sensor 1714, or the fingerprint sensor 1714 itself identifies the user according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1714 may be disposed on the front, back, or side of the computer device 1700. When a physical key or vendor Logo is provided on the computer device 1700, the fingerprint sensor 1714 may be integrated with the physical key or vendor Logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the display screen 1705 is reduced. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
Those skilled in the art will appreciate that the architecture shown in FIG. 17 is not intended to be limiting of the computer device 1700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The present application further provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the method for generating a baking map or the method for using a baking map provided by the above-mentioned method embodiments.
A computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the method for generating the baking map or the method for using the baking map provided by the method embodiment.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (22)
1. A method of generating a baking map, the method comprising:
determining viewing angle division information of m first viewing angle ranges and n second viewing angle ranges of the camera model, wherein the first viewing angle ranges are not divided into viewing angle sub-ranges, and the second viewing angle ranges are divided into a plurality of viewing angle sub-ranges;
acquiring a first visual angle map of the three-dimensional virtual article in the first visual angle range through the camera model to obtain m first visual angle maps;
acquiring a second view angle mapping of the three-dimensional virtual article under the view angle sub-range through the camera model to obtain n groups of sub-view angle mappings; performing down-sampling processing on the n groups of sub-viewing angle maps to obtain n groups of second viewing angle maps;
mapping and drawing the m first view map and the n groups of second view maps to different map areas of the baking map according to the view dividing information;
wherein m and n are integers more than 0.
2. The method according to claim 1, wherein the mapping the m first perspective maps and the n groups of second perspective maps to different map areas of the baking map according to the perspective division information comprises:
mapping and drawing the m first view map into m map areas of the baking map according to the view dividing information;
and mapping and drawing the n groups of second view angle maps to n map areas of the baking map according to the view angle dividing information, wherein each group of second view angle maps in the n groups of second view angle maps are arranged in different map sub-areas in the same map area.
3. The method according to claim 2, wherein the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, k being an integer not less than 2;
the mapping the n groups of second view maps to n map regions of the baking map according to the view division information includes:
determining a map area corresponding to the ith second viewing angle range on the baking map according to the viewing angle division information and the first mapping relation, wherein the map area corresponding to the ith second viewing angle range is divided into k² map sub-areas;
mapping and drawing each second viewing angle map in the ith group of second viewing angle maps to one of the k² map sub-areas according to the first mapping relation, the ith group of second viewing angle maps corresponding to the ith second viewing angle range;
wherein the first mapping is a mapping of a three-dimensional perspective of the camera model to two-dimensional coordinates on the bake map.
4. The method according to claim 2, wherein each of the m first perspective maps occupies the same area of the map region as each set of second perspective maps.
5. The method of any of claims 1 to 4, further comprising:
setting a first shooting parameter of the camera model;
and adjusting to obtain a second shooting parameter of the camera model according to the size of the area occupied by the three-dimensional virtual article in the shooting picture of the camera model based on the first shooting parameter.
6. The method according to claim 5, wherein the setting the first shooting parameter of the camera model comprises:
acquiring a minimum bounding box of the three-dimensional virtual article;
taking the diagonal length of the minimum bounding box as the length and width of a shooting picture of the camera model;
setting a first shooting parameter of the camera model based on the length and width of the shot picture.
7. The method according to claim 5, wherein the adjusting the second shooting parameter of the camera model according to the size of the area occupied by the three-dimensional virtual article in the shooting picture of the camera model based on the first shooting parameter comprises:
based on the first shooting parameters, controlling the camera model to shoot the three-dimensional virtual article from different viewing angles to obtain a plurality of viewing angle maps;
superposing pixel points corresponding to the same position of the three-dimensional virtual article in the plurality of visual angle maps to obtain superposed images;
obtaining a first length ratio by quotient of the length of the area with pixels of the superposed image and the length of the superposed image, and obtaining a first width ratio by quotient of the width of the area with pixels of the superposed image and the width of the superposed image, wherein the area with pixels is an area occupied by pixels of the three-dimensional virtual article in the superposed image;
and adjusting a second shooting parameter of the camera model based on the first length ratio and the first width ratio, wherein the second shooting parameter is used for enabling a second length ratio to be equal to the first length ratio, the second length ratio is obtained by quotient of the length of any view angle mapping obtained by shooting based on the first shooting parameter and the length of any view angle mapping obtained by shooting based on the second shooting parameter, the second shooting parameter is also used for enabling a second width ratio to be equal to the first width ratio, and the second width ratio is obtained by quotient of the width of any view angle mapping obtained by shooting based on the first shooting parameter and the width of any view angle mapping obtained by shooting based on the second shooting parameter.
8. The method of any of claims 2 to 4, further comprising:
storing area information of map areas where the m first view maps are located in the baking map according to the view dividing information;
and storing the area information of the map area where the n groups of second view maps are located in the baking map according to the view dividing information.
9. The method according to claim 8, wherein the storing area information of the map area where the m first view maps are located in the bake map according to the view dividing information comprises:
determining a first storage position corresponding to the ith first visual angle range in a primary storage table according to the visual angle division information and the second mapping relation;
storing area information of a map area where an ith first viewing angle map is located in the baking map at the first storage location, the ith first viewing angle map being a viewing angle map corresponding to the ith first viewing angle range;
the second mapping relation is a mapping relation between a three-dimensional visual angle of the camera model and a storage position on the primary storage table, and the region information comprises positioning point coordinates, height information and width information of the map region.
10. The method according to claim 8, wherein the storing the area information of the map area where the n groups of second view maps are located in the bake map according to the view dividing information comprises:
storing the area information of the map sub-area where each second visual angle map in the ith group of second visual angle maps is located in the baking map in a target storage area of a secondary storage table; the ith group of second viewing angle maps corresponds to the ith second viewing angle range;
determining a second storage position corresponding to the ith second view angle range in a primary storage table according to the view angle division information and a second mapping relation;
storing the index information of the target storage area of the secondary storage table in the second storage position;
wherein the second mapping relationship is a mapping relationship of a three-dimensional perspective of the camera model and a storage location on the primary storage table.
11. The method of claim 10, wherein storing the region information of the map sub-region in which each second perspective map in the ith set of second perspective maps is located in the bake map in a secondary storage table comprises:
determining a third storage position corresponding to the view angle range of the jth second view angle map in the ith group of second view angle maps in a target storage area of the secondary storage table according to the view angle division information and a third mapping relation;
storing the area information of the map sub-area where the jth second view map is located in the baking map at the third storage position;
wherein the third mapping relationship is a mapping relationship of a three-dimensional perspective of the camera model and a storage location on the secondary storage table.
12. A method of using a baking map, the method comprising:
acquiring a baking map of a three-dimensional virtual article, wherein the baking map is obtained by mapping and drawing m first viewing angle maps and n groups of second viewing angle maps to different map areas of the baking map according to viewing angle division information, the m first viewing angle maps are obtained by acquiring, through a camera model, a first viewing angle map of the three-dimensional virtual article in a first viewing angle range, the n groups of second viewing angle maps are obtained by acquiring, through the camera model, a second viewing angle map of the three-dimensional virtual article in a viewing angle sub-range to obtain n groups of sub-viewing angle maps and performing down-sampling processing on the n groups of sub-viewing angle maps, the viewing angle sub-range is obtained by dividing a second viewing angle range, and the m first viewing angle ranges and the n second viewing angle ranges form a collection viewing angle range of the three-dimensional virtual article according to the viewing angle division information;
determining a collection perspective of the three-dimensional virtual article, the collection perspective being a perspective of a camera model towards the three-dimensional virtual article;
determining a view angle map corresponding to the collection view angle from the baking map;
wherein m and n are integers more than 0.
13. The method of claim 12, wherein the m first perspective maps are mapped to m map regions of the bake map according to the perspective partition information, wherein the n groups of second perspective maps are mapped to n map regions of the bake map according to the perspective partition information, and wherein each group of second perspective maps in the n groups of second perspective maps are arranged in different map sub-regions in a same map region.
14. The method according to claim 13, wherein the ith of the n second viewing angle ranges is divided into k² viewing angle sub-ranges, i is an integer greater than 0, i is less than n, and k is an integer not less than 2;
the n groups of second view angle maps are mapped to n map areas of the baking map according to the view angle division information, and comprise:
the n groups of second view angle maps are mapped to k according to the first mapping relationship after determining a map region corresponding to the ith second view angle range on the baking map according to the view angle division information and the first mapping relationship2One of the map sub-regionsOf a domain;
wherein a map region corresponding to the ith second viewing angle range is divided into the k2And the ith group of second visual angle maps correspond to the ith second visual angle range, and the first mapping relation is the mapping relation between the three-dimensional visual angle of the camera model and the two-dimensional coordinates on the baking map.
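Claim 14's subdivision is easy to picture in code: a group's map region is cut into k² equal map sub-regions, one per view angle sub-range. A sketch under assumed conventions (row-major order, sizes divisible by k); the claim fixes neither.

```python
def split_into_sub_regions(x, y, w, h, k):
    """Divide one map region of the baking map into k * k equal map
    sub-regions (claim 14), returned as (x, y, w, h) in row-major order."""
    sw, sh = w // k, h // k
    return [(x + col * sw, y + row * sh, sw, sh)
            for row in range(k) for col in range(k)]


# A 128x128 region anchored at (256, 0) with k = 2 yields four 64x64 sub-regions.
assert split_into_sub_regions(256, 0, 128, 128, 2)[3] == (320, 64, 64, 64)
```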
15. The method according to claim 13, wherein each of the m first view angle maps occupies a map region of the same area as each of the n groups of second view angle maps.
16. The method according to any one of claims 12 to 15, wherein the determining of the view angle map corresponding to the collection view angle from the baking map comprises:
determining a target first view angle map corresponding to a first collection view angle from the baking map according to the view angle division information;
or,
determining a target second view angle map corresponding to a second collection view angle from the baking map according to the view angle division information;
wherein the first collection view angle falls within one of the m first view angle ranges, the second collection view angle falls within a target view angle sub-range of an ith second view angle range, the ith second view angle range is one of the n second view angle ranges, and i is an integer greater than 0.
17. The method according to claim 16, wherein a primary storage table stores area information of the map regions where the m first view angle maps corresponding to the m first view angle ranges are located in the baking map;
the determining of the target first view angle map corresponding to the first collection view angle from the baking map according to the view angle division information comprises:
determining a first storage position corresponding to the first collection view angle in the primary storage table according to the view angle division information and a second mapping relationship;
acquiring, from the first storage position, area information of the map region where the first collection view angle is located in the baking map;
determining the target first view angle map corresponding to the first collection view angle from the baking map according to the area information of the map region where the first collection view angle is located in the baking map;
wherein the second mapping relationship is a mapping relationship between a three-dimensional view angle of the camera model and a storage position on the primary storage table, and the area information comprises anchor point coordinates, height information and width information of the map region.
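Claim 17's read path is a single table lookup followed by a crop. The sketch below treats the baking map as a nested list of pixels and the area information as (anchor_x, anchor_y, width, height); both representations are assumptions for illustration, not the patent's storage format.

```python
def fetch_first_view_map(bake_map, primary, slot):
    """Claim 17 sketch: read area information from the first storage
    position, then crop the target first view angle map out of the baking
    map. bake_map is rows of pixels; primary[slot] holds the assumed
    (anchor_x, anchor_y, width, height) tuple."""
    x, y, w, h = primary[slot]
    return [row[x:x + w] for row in bake_map[y:y + h]]


# 4x4 toy baking map; the first view angle map is the 2x2 top-right block.
bake_map = [[r * 4 + c for c in range(4)] for r in range(4)]
primary = [(2, 0, 2, 2)]
assert fetch_first_view_map(bake_map, primary, 0) == [[2, 3], [6, 7]]
```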
18. The method according to claim 16, wherein a primary storage table stores index information of target storage areas of a secondary storage table corresponding to the n second view angle ranges, and the secondary storage table stores area information of the map sub-regions where the k² second view angle maps corresponding to the ith second view angle range are located in the baking map;
the determining of the target second view angle map corresponding to the second collection view angle from the baking map according to the view angle division information comprises:
determining a second storage position corresponding to the ith second view angle range in the primary storage table according to the view angle division information and a second mapping relationship;
acquiring target index information from the second storage position, the target index information being index information of the target storage area of the ith second view angle range in the secondary storage table;
determining a third storage position in the target storage area of the secondary storage table according to the target index information and a third mapping relationship, wherein the third storage position stores area information of the map sub-region where the target second view angle map corresponding to the target view angle sub-range is located in the baking map;
acquiring target sub-region information from the third storage position, the target sub-region information being area information of the map sub-region where the target second view angle map is located in the baking map;
determining the target second view angle map from the baking map according to the target sub-region information;
wherein the third mapping relationship is a mapping relationship between a three-dimensional view angle of the camera model and a storage position on the secondary storage table.
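The second-view read path of claim 18 chains the two tables: second storage position, then the stored index of the target storage area, then the third storage position, then the sub-region crop. A self-contained sketch with the same assumed layouts as the earlier sketches; the slot arithmetic again stands in for the second and third mapping relationships.

```python
def fetch_second_view_map(bake_map, primary, secondary, i, m, sub_index):
    """Claim 18 sketch: two-level lookup, then crop the target second view
    angle map out of the baking map."""
    target_index = primary[m + i]                     # target index information
    x, y, w, h = secondary[target_index][sub_index]   # sub-region information
    return [row[x:x + w] for row in bake_map[y:y + h]]


# Toy layout: m = 2 first slots, then one second range whose k**2 = 4
# sub-regions tile a 4x4 baking map into 2x2 blocks.
bake_map = [[r * 4 + c for c in range(4)] for r in range(4)]
primary = [None, None, 0]
secondary = [[(0, 0, 2, 2), (2, 0, 2, 2), (0, 2, 2, 2), (2, 2, 2, 2)]]
assert fetch_second_view_map(bake_map, primary, secondary, 0, 2, 3) == [[10, 11], [14, 15]]
```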
19. An apparatus for generating a baking map, the apparatus comprising:
a determining module, configured to determine view angle division information of m first view angle ranges and n second view angle ranges of a camera model, wherein a first view angle range is not divided into view angle sub-ranges, and a second view angle range is divided into multiple view angle sub-ranges;
a processing module, configured to acquire, through the camera model, a first view angle map of a three-dimensional virtual article in each first view angle range to obtain m first view angle maps;
the processing module being further configured to acquire, through the camera model, second view angle maps of the three-dimensional virtual article in the view angle sub-ranges to obtain n groups of sub-view angle maps, and perform down-sampling processing on the n groups of sub-view angle maps to obtain n groups of second view angle maps;
a drawing module, configured to map and draw the m first view angle maps and the n groups of second view angle maps to different map regions of the baking map according to the view angle division information;
wherein m and n are integers greater than 0.
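Of the modules in claim 19, the down-sampling step has an obvious minimal form: each sub-view angle map is shrunk before packing, so a whole group fits a map region no larger than a single first view angle map. A plain-Python 2x box filter over a grayscale nested list; real pipelines would more likely use GPU filtering or an imaging library, so treat this purely as a sketch of the operation.

```python
def downsample_2x(img):
    """Average each 2x2 pixel block into one pixel (box filter). Assumes a
    grayscale map as equal-length rows with even dimensions; a stand-in for
    whatever filter the processing module actually applies."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]


assert downsample_2x([[0, 2], [4, 6]]) == [[3]]
```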
20. An apparatus for using a baking map, the apparatus comprising:
an obtaining module, configured to obtain a baking map of a three-dimensional virtual article, wherein the baking map is obtained by mapping and drawing m first view angle maps and n groups of second view angle maps to different map regions of the baking map according to view angle division information; the m first view angle maps are obtained by acquiring, through a camera model, a first view angle map of the three-dimensional virtual article in each first view angle range; the n groups of second view angle maps are obtained by acquiring, through the camera model, second view angle maps of the three-dimensional virtual article in the view angle sub-ranges to obtain n groups of sub-view angle maps, and performing down-sampling processing on the n groups of sub-view angle maps; each view angle sub-range is obtained by dividing a second view angle range; and the m first view angle ranges and the n second view angle ranges form a collection view angle range of the three-dimensional virtual article according to the view angle division information;
a determining module, configured to determine a collection view angle of the three-dimensional virtual article, the collection view angle being a view angle of the camera model towards the three-dimensional virtual article;
the determining module being further configured to determine a view angle map corresponding to the collection view angle from the baking map;
wherein m and n are integers greater than 0.
21. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement a method of generating a baking map as claimed in any one of claims 1 to 11, or a method of using a baking map as claimed in any one of claims 12 to 18.
22. A computer-readable storage medium, characterized in that it stores a computer program which is loaded and executed by a processor to implement the method of generating a baking map as claimed in any one of claims 1 to 11 or the method of using a baking map as claimed in any one of claims 12 to 18.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110619628.XA CN113205582B (en) | 2021-06-03 | 2021-06-03 | Method, device, equipment and medium for generating and using baking paste chart |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205582A (en) | 2021-08-03 |
CN113205582B (en) | 2022-12-13 |
Family
ID=77024425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110619628.XA Active CN113205582B (en) | 2021-06-03 | 2021-06-03 | Method, device, equipment and medium for generating and using baking paste chart |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205582B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228509A (en) * | 2016-07-22 | 2016-12-14 | 网易(杭州)网络有限公司 | Performance methods of exhibiting and device |
US20180249076A1 (en) * | 2017-02-27 | 2018-08-30 | Alibaba Group Holding Limited | Image Mapping and Processing Method, Apparatus and Machine-Readable Media |
CN108513119A (en) * | 2017-02-27 | 2018-09-07 | 阿里巴巴集团控股有限公司 | Mapping, processing method, device and the machine readable media of image |
CN109658365A (en) * | 2017-10-11 | 2019-04-19 | 阿里巴巴集团控股有限公司 | Image processing method, device, system and storage medium |
CN108154548A (en) * | 2017-12-06 | 2018-06-12 | 北京像素软件科技股份有限公司 | Image rendering method and device |
CN110291777A (en) * | 2018-04-09 | 2019-09-27 | 深圳市大疆创新科技有限公司 | Image acquisition method, device and machine-readable storage medium |
CN111382591A (en) * | 2018-12-27 | 2020-07-07 | 海信集团有限公司 | Binocular camera ranging correction method and vehicle-mounted equipment |
Non-Patent Citations (2)
Title |
---|
He Q H et al.: "Texture Baking Techniques on Construction of Virtual Reality Interactive Scenes", Applied Mechanics and Materials *
裴玉 (Pei Yu): "Design of a Virtual Reality System for Daqing Wetland Tourism" (大庆湿地旅游虚拟现实系统设计), China Masters' Theses Full-text Database (Information Science and Technology) *
Also Published As
Publication number | Publication date |
---|---|
CN113205582B (en) | 2022-12-13 |
Similar Documents
Publication | Title
---|---
CN110136136B (en) | Scene segmentation method and device, computer equipment and storage medium
CN112870707B (en) | Virtual object display method in virtual scene, computer device and storage medium
CN109614171B (en) | Virtual item transfer method and device, electronic equipment and computer storage medium
CN110064200B (en) | Object construction method and device based on virtual environment and readable storage medium
CN111701238A (en) | Virtual picture volume display method, device, equipment and storage medium
JP2022537614A (en) | Multi-virtual character control method, device, and computer program
CN111464749B (en) | Method, device, equipment and storage medium for image synthesis
CN114170349B (en) | Image generation method, device, electronic device and storage medium
WO2022042425A1 (en) | Video data processing method and apparatus, and computer device and storage medium
CN109859102B (en) | Special effect display method, device, terminal and storage medium
CN109886208B (en) | Object detection method and device, computer equipment and storage medium
CN110839174A (en) | Image processing method and device, computer equipment and storage medium
CN110880204A (en) | Virtual vegetation display method and device, computer equipment and storage medium
CN111784841B (en) | Method, device, electronic equipment and medium for reconstructing three-dimensional image
CN110928464A (en) | User interface display method, device, equipment and medium
CN113592997A (en) | Object drawing method, device and equipment based on virtual scene and storage medium
CN108305262A (en) | File scanning method, device and equipment
CN112396076A(en) | License plate image generation method and device and computer storage medium
CN112907716A (en) | Cloud rendering method, device, equipment and storage medium in virtual environment
CN112308103B (en) | Method and device for generating training samples
CN109771950B (en) | Node map setting method, device and storage medium
CN113240784B (en) | Image processing method, device, terminal and storage medium
CN112116681A (en) | Image generation method and device, computer equipment and storage medium
CN114155336B (en) | Virtual object display method, device, electronic device and storage medium
CN110992268B (en) | Background setting method, device, terminal and storage medium
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40052199; Country of ref document: HK
| GR01 | Patent grant |