CN109461197B - Cloud real-time drawing optimization method based on spherical UV and re-projection
- Publication number
- CN109461197B (Application No. CN201710727498.5A)
- Authority
- CN
- China
- Prior art keywords
- cloud
- spherical
- texture
- image
- noise
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a cloud real-time drawing optimization algorithm based on spherical UV and re-projection. The algorithm computes the cloud layer shape from three-dimensional noise textures and generates a three-dimensional cloud layer texture; parameterizes the whole sky hemisphere with a spherical UV; divides the spherical UV space into blocks of 4x4 pixels and, in each frame, draws the cloud only for the direction corresponding to one pixel of each block; retains the complete spherical UV map produced every 16 frames; and, in every frame, re-projects the two most recently completed spherical UV maps onto the visible area according to the current camera view angle and interpolates between them, so as to obtain the final drawing result. The invention solves the problem that noise appears in existing algorithms when the view angle or the cloud layer changes rapidly.
Description
Technical Field
The invention relates to the field of cloud real-time drawing algorithms, in particular to a cloud real-time drawing optimization algorithm based on spherical UV and re-projection.
Background
Cloud rendering is an important topic in real-time rendering of natural environments. A typical cloud rendering algorithm consists of two steps, shaping and shading: the shaping step generates the basic shape of the cloud from three-dimensional noise textures at several different frequencies, and the shading step computes the color of the cloud by ray tracing through the shaped cloud layer.
Due to the irregularity of cloud shapes and the complexity of the lighting environment in the atmosphere, real-time cloud rendering algorithms generally cannot complete the entire rendering process within one frame (i.e., on the order of tens of milliseconds). A common optimization is to split the rendering across frames, rendering only a portion of the cloud in each frame and re-projecting and interpolating between frames. However, this optimization has a fundamental problem: because the cloud is computed in viewport space, when the camera rotates, part or all of the result computed in the previous frame cannot be re-projected into the current viewport and therefore cannot be reused. In that case only the region assigned to the current frame is updated, which produces noise, especially at the viewport edges. In addition, because the result of each frame's region is re-projected into the next frame, a cloud layer that changes too quickly causes a mismatch between the cloud shapes of adjacent frames, which also produces noise.
Disclosure of Invention
The invention provides a cloud real-time drawing optimization algorithm based on spherical UV and re-projection, which solves the problem in existing algorithms that noise appears when the view angle or the cloud layer changes rapidly.
In order to achieve the above purpose, the present invention provides the following technical solution: a cloud real-time drawing optimization algorithm based on spherical UV and re-projection, comprising the following steps:
step one: the cloud image is generated by combining Perlin noise at different frequencies with ray marching; the cloud image is generated from the three-dimensional noise textures, and the generation process includes a denoising step; the cloud image is then mapped onto the sky of the virtual scene using an image-based rendering technique.
Step two: the entire sky hemisphere is parameterized with a spherical UV, and a unit vector in direction (X, Y, Z) on the hemisphere corresponds to the position (u = 0.5X + 0.5, v = 0.5Y + 0.5) in texture space.
Step three: the spherical UV space is divided into blocks of 4x4 pixels; in each image frame the cloud is drawn only for the direction corresponding to one pixel of each block, so that after every 16 frames the cloud over the whole spherical UV map has been drawn and the complete spherical UV map is retained. Each image frame is acquired by a camera, and the camera comprises a depth video acquisition camera and a color video acquisition camera.
Step four: in the rendering of each image frame, the two most recently completed spherical UV maps are re-projected onto the visible area according to the current camera view angle and interpolated, so as to obtain the final drawing result.
The interpolation is nearest-neighbor linear interpolation. Denoting the two most recently completed spherical UV maps by F_{-1}(u, v) and F_0(u, v), the interpolation function F(u, v) of the current frame after t frames is:
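The formula itself appears only as an image in the published document and is not reproduced here. A plausible reconstruction, offered only as an assumption consistent with the 16-frame cycle (the blend reaches the newest complete map exactly when the next one finishes), is:

```latex
F(u,v) = \left(1-\frac{t}{16}\right) F_{-1}(u,v) + \frac{t}{16}\, F_{0}(u,v), \qquad 0 \le t \le 16
```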
In a preferred embodiment, the three-dimensional cloud layer texture comprises two 3D textures and one 2D texture. The two 3D textures store a combination of Perlin noise and Worley noise for computing the cloud shape and surface detail; the 2D texture stores curl noise at different frequencies for distorting the shape of the cloud.
Preferably, the cloud scene rendering and drawing work is implemented based on a GPU.
The beneficial effects of the invention are as follows:
1. The invention uses spherical UV parameterization to compute the cloud over the whole sky, rather than only in the visible area as in traditional algorithms. This decouples the cloud drawing from the current camera view angle, so the result computed in the previous frame can be reused when the camera rotates, achieving stable drawing efficiency and quality.
2. Unlike traditional algorithms, the invention does not re-project the result of each frame into the next frame; re-projection is computed only from the complete spherical UV maps generated every 16 frames. This guarantees the integrity of the projected data and avoids the noise problem when the cloud layer changes too quickly.
3. With the optimization algorithm of the present invention, high-quality clouds can be drawn stably at a frame rate of 100 frames per second even while the camera is changing.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate the invention and, together with the embodiments of the invention, serve to explain the invention.
In the drawings:
FIG. 1 is a flow chart of an algorithm of the present invention;
FIG. 2 is a schematic view of spherical parameterization according to the present invention;
fig. 3 is a schematic diagram of an interpolation algorithm according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Referring to fig. 1, a cloud real-time rendering optimization algorithm based on spherical UV and re-projection includes the following steps:
step one: the cloud image is generated by combining Perlin noise at different frequencies with ray marching; the cloud image is generated from the three-dimensional noise textures, and the generation process includes a denoising step; the cloud image is then mapped onto the sky of the virtual scene using an image-based rendering technique.
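As an illustration of the shaping idea in step one, the following minimal C++ sketch accumulates a placeholder 3D noise at several frequencies into a cloud density value. The hash-based noise function, the octave count and the falloff are assumptions made for illustration; they stand in for the patent's precomputed noise textures and ray-marching shader rather than reproducing them.

```cpp
#include <algorithm>
#include <cmath>

// Placeholder noise in roughly [-1, 1]: a cheap hash standing in for the
// patent's precomputed Perlin noise textures (an assumption, not the
// original implementation).
static float hashNoise3D(float x, float y, float z)
{
    float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return 2.0f * (n - std::floor(n)) - 1.0f;
}

// Minimal sketch of the "shaping" step: accumulate noise at several
// frequencies (octaves) into a single cloud density value in [0, 1].
float cloudDensity(float x, float y, float z)
{
    float density   = 0.0f;
    float amplitude = 0.5f;   // assumed per-octave amplitude falloff
    float frequency = 1.0f;   // assumed base frequency
    for (int octave = 0; octave < 4; ++octave) {
        density   += amplitude * hashNoise3D(x * frequency, y * frequency, z * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return std::clamp(density * 0.5f + 0.5f, 0.0f, 1.0f);
}
```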
Step two: referring to fig. 2, the entire sky hemisphere is parameterized with a spherical UV, and a unit vector in direction (X, Y, Z) on the hemisphere corresponds to the position (u = 0.5X + 0.5, v = 0.5Y + 0.5) in texture space.
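A minimal C++ sketch of this parameterization follows. The Vec3/UV structs are illustrative, and the inverse mapping assumes the upward hemisphere so that Z can be recovered from the unit-length constraint.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // unit direction on the sky hemisphere
struct UV   { float u, v; };      // spherical UV texture coordinates

// Step two: a unit direction (X, Y, Z) maps to (0.5X + 0.5, 0.5Y + 0.5).
UV directionToUV(const Vec3& d)
{
    return { 0.5f * d.x + 0.5f, 0.5f * d.y + 0.5f };
}

// Inverse mapping, assuming Z >= 0 (upper hemisphere), so Z is recovered
// from the unit-length constraint X^2 + Y^2 + Z^2 = 1.
Vec3 uvToDirection(const UV& t)
{
    float x = 2.0f * t.u - 1.0f;
    float y = 2.0f * t.v - 1.0f;
    float z = std::sqrt(std::fmax(0.0f, 1.0f - x * x - y * y));
    return { x, y, z };
}
```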
Step three: the spherical UV space is divided into blocks of 4x4 pixels; in each image frame the cloud is drawn only for the direction corresponding to one pixel of each block, so that after every 16 frames the cloud over the whole spherical UV map has been drawn and the complete spherical UV map is retained. Each image frame is acquired by a camera, and the camera comprises a depth video acquisition camera and a color video acquisition camera.
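The 16-frame amortization of step three can be expressed as a simple per-pixel schedule. The row-major ordering inside each 4x4 block in the sketch below is an assumption; the text only requires that one pixel per block be drawn per frame.

```cpp
// Returns true if the spherical-UV pixel (px, py) is the one pixel of its
// 4x4 block that should be redrawn in this frame. Over 16 consecutive
// frames every pixel of every block is visited once, so a complete
// spherical UV map is finished every 16 frames.
bool shouldUpdatePixel(int px, int py, int frameIndex)
{
    int phase  = frameIndex % 16;   // position within the 16-frame cycle
    int localX = px % 4;            // pixel position inside its 4x4 block
    int localY = py % 4;
    return (localY * 4 + localX) == phase;   // assumed row-major order
}
```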
Step four: in the rendering of each image frame, the two most recently completed spherical UV maps are re-projected onto the visible area according to the current camera view angle and interpolated, so as to obtain the final drawing result. Because the image is digitized, only the pixel coordinates of the complete spherical UV map carry values while points between pixel coordinates do not, and the influence of pixel size on the actual measurement cannot be ignored; the values of points between pixel coordinates are therefore obtained by interpolating the two complete spherical UV maps, which ensures that every point has a value.
Referring to FIG. 3, the interpolation is nearest-neighbor linear interpolation. Denoting the two most recently completed spherical UV maps by F_{-1}(u, v) and F_0(u, v), the interpolation function F(u, v) of the current frame after t frames is:
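The formula is reproduced only as an image in the published document. Under the linear-blend reading sketched in the disclosure above, step four for a single screen pixel can be illustrated with the following CPU-side C++ sketch; the SphericalMap container, its resolution, and the t/16 weight are assumptions standing in for the GPU texture fetch and for the original formula.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };   // unit view ray for the screen pixel

// Illustrative stand-in for a complete spherical UV map stored as a texture.
struct SphericalMap {
    int size = 256;                        // assumed map resolution
    std::vector<float> texels;             // size*size cloud values
    float sample(float u, float v) const   // nearest-neighbour fetch
    {
        int x = std::clamp(static_cast<int>(u * size), 0, size - 1);
        int y = std::clamp(static_cast<int>(v * size), 0, size - 1);
        return texels[static_cast<size_t>(y) * size + x];
    }
};

// Re-project the two latest complete maps F_{-1} and F_0 onto the view and
// blend them for the current frame, t frames after F_0 was completed.
float shadePixel(const Vec3& rayDir,
                 const SphericalMap& prevMap,   // F_{-1}
                 const SphericalMap& currMap,   // F_0
                 int t)                         // frames since last complete map
{
    float u = 0.5f * rayDir.x + 0.5f;           // spherical UV of this ray
    float v = 0.5f * rayDir.y + 0.5f;
    float w = std::clamp(t / 16.0f, 0.0f, 1.0f);
    return (1.0f - w) * prevMap.sample(u, v) + w * currMap.sample(u, v);
}
```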
In a preferred embodiment, the three-dimensional cloud layer texture comprises two 3D textures and one 2D texture.
The first 3D texture stores a combination of the Perlin noise and Worley noise; the Worley noise is used to achieve the gathering and dispersing effect. After the Worley noise and the Perlin noise are combined, the texture has a resolution of 128x128, and its 4 channels store Perlin and Worley noise at different frequencies, which define the general shape of the cloud.
The second 3D texture, with a resolution of 32x32 and 3 channels, stores Worley noise at different frequencies.
The 2D texture, with a resolution of 128x128, stores curl noise at different frequencies; it is used to distort the shape of the cloud and add a sense of turbulence.
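Purely as a descriptive summary of this embodiment's texture set, the sketch below lays out the three noise textures. The flat byte storage, the assumed depth of the 3D textures (the text gives only 128x128 and 32x32), and the curl-noise channel count are illustrative assumptions, not the patent's GPU texture formats.

```cpp
#include <cstdint>
#include <vector>

// Descriptive layout of the noise textures in this embodiment (sizes and
// channel counts follow the text; depths and storage are assumptions).
struct CloudNoiseTextures {
    // 1st 3D texture: 4 channels of combined Perlin + Worley noise at
    // different frequencies, defining the general shape of the cloud.
    std::vector<uint8_t> shapeNoise  = std::vector<uint8_t>(128 * 128 * 128 * 4);

    // 2nd 3D texture: 3 channels of Worley noise at different frequencies.
    std::vector<uint8_t> detailNoise = std::vector<uint8_t>(32 * 32 * 32 * 3);

    // 2D texture: curl noise at different frequencies, used to distort the
    // cloud shape and add turbulence (channel count assumed to be 3).
    std::vector<uint8_t> curlNoise   = std::vector<uint8_t>(128 * 128 * 3);
};
```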
Preferably, the cloud scene rendering and drawing work is implemented based on a GPU.
Finally, it should be noted that the foregoing is merely a preferred embodiment of the present invention and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (6)
1. The cloud real-time drawing optimization method based on spherical UV and re-projection is characterized by comprising the following steps of:
step one: calculating the cloud layer shape from three-dimensional noise textures and generating a three-dimensional cloud layer texture; generating a cloud image using the three-dimensional noise textures, and then mapping the cloud image onto the sky of the virtual scene using an image-based rendering technique;
step two: parameterizing the whole sky hemisphere with a spherical UV, wherein a unit vector in direction (X, Y, Z) on the hemisphere corresponds to the position (u = 0.5X + 0.5, v = 0.5Y + 0.5) in texture space, X is the transverse axis, Y is the longitudinal axis and Z is the vertical axis of the spatial rectangular coordinate system, u and v respectively represent coordinates in the texture space, and the direction components X and Y are the parameters from which the texture coordinates are computed;
step three: dividing the spherical UV space into blocks of 4x4 pixels, drawing in each image frame the cloud only for the direction corresponding to one pixel of each block, and keeping the complete spherical UV map once the cloud over the whole spherical UV map has been drawn after every 16 frames;
step four: in the rendering of each image frame, re-projecting the two most recently completed spherical UV maps onto the visible area according to the current camera view angle and interpolating them so as to obtain the final drawing result, wherein the interpolation is nearest-neighbor linear interpolation, the two most recently completed spherical UV maps are F_{-1}(u, v) and F_0(u, v), and the interpolation function F(u, v) of the current frame after t frames is:
2. The cloud real-time rendering optimization method based on spherical UV and re-projection according to claim 1, wherein the cloud image is generated by combining Perlin noise at different frequencies with ray marching.
3. The cloud real-time rendering optimization method based on spherical UV and re-projection according to claim 1, wherein the three-dimensional cloud layer texture comprises two 3D textures and one 2D texture,
the two 3D textures storing a combination of Perlin noise and Worley noise used for calculating the shape and surface detail of the cloud, and the 2D texture storing curl noise at different frequencies for distorting the shape of the cloud.
4. The cloud real-time rendering optimization method based on spherical UV and re-projection according to claim 1, wherein the process of generating the cloud image from the three-dimensional noise textures comprises a denoising step.
5. The cloud real-time rendering optimization method based on spherical UV and re-projection according to claim 1, wherein each of the image frames is captured by a camera, the camera comprising a depth video capture camera and a color video capture camera.
6. The cloud real-time rendering optimization method based on spherical UV and re-projection according to claim 1, wherein the cloud scene rendering and drawing work is implemented on a GPU.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710727498.5A CN109461197B (en) | 2017-08-23 | 2017-08-23 | Cloud real-time drawing optimization method based on spherical UV and re-projection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710727498.5A CN109461197B (en) | 2017-08-23 | 2017-08-23 | Cloud real-time drawing optimization method based on spherical UV and re-projection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109461197A CN109461197A (en) | 2019-03-12 |
CN109461197B true CN109461197B (en) | 2023-06-30 |
Family
ID=65605691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710727498.5A Active CN109461197B (en) | 2017-08-23 | 2017-08-23 | Cloud real-time drawing optimization method based on spherical UV and re-projection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109461197B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145326B (en) * | 2019-12-26 | 2023-12-19 | 网易(杭州)网络有限公司 | Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device |
CN111563947B (en) * | 2020-03-25 | 2023-06-13 | 南京舆图科技发展有限公司 | Interactive real-time volume rendering method of global three-dimensional cloud |
CN111951362A (en) * | 2020-07-01 | 2020-11-17 | 北京领为军融科技有限公司 | Three-dimensional volume cloud rendering method and system based on three-dimensional noise map |
CN112190935B (en) * | 2020-10-09 | 2024-08-23 | 网易(杭州)网络有限公司 | Rendering method and device of dynamic volume cloud and electronic equipment |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9342920B1 (en) * | 2011-11-15 | 2016-05-17 | Intrinsic Medical Imaging, LLC | Volume rendering using scalable GPU-based cloud computing |
CN104408770A (en) * | 2014-12-03 | 2015-03-11 | 北京航空航天大学 | Method for modeling cumulus cloud scene based on Landsat8 satellite image |
CN106570929A (en) * | 2016-11-07 | 2017-04-19 | 北京大学(天津滨海)新代信息技术研究院 | Dynamic volume cloud construction and drawing method |
Non-Patent Citations (2)
Title |
---|
Cloud layer rendering method based on noise texture; Yang Dan et al.; Modern Electronics Technique; 2009-10-15 (No. 20); full text *
Single-scattering rendering algorithm based on small-beam photon mapping; Wang Yuanlong et al.; Journal of Computer-Aided Design & Computer Graphics; 2013-12-15 (No. 12); full text *
Also Published As
Publication number | Publication date |
---|---|
CN109461197A (en) | 2019-03-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 307, 3/F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086; Applicant after: Beijing Wuyi Vision digital twin Technology Co.,Ltd.; Address before: Room 307, 3/F, supporting public building, Mantingfangyuan community, Qingyun Li, Haidian District, Beijing; Applicant before: DANGJIA MOBILE GREEN INTERNET TECHNOLOGY GROUP Co.,Ltd. |
| GR01 | Patent grant | |