[go: up one dir, main page]

CN117011446B - Real-time rendering method for dynamic environment illumination - Google Patents

Real-time rendering method for dynamic environment illumination Download PDF

Info

Publication number
CN117011446B
CN117011446B CN202311061199.4A CN202311061199A CN117011446B CN 117011446 B CN117011446 B CN 117011446B CN 202311061199 A CN202311061199 A CN 202311061199A CN 117011446 B CN117011446 B CN 117011446B
Authority
CN
China
Prior art keywords
panoramic
illumination
rendering
denoising
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311061199.4A
Other languages
Chinese (zh)
Other versions
CN117011446A (en
Inventor
赵凌霄
程浩杰
王佳俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Shenjie Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Shenjie Information Technology Co ltd filed Critical Suzhou Shenjie Information Technology Co ltd
Priority to CN202311061199.4A priority Critical patent/CN117011446B/en
Publication of CN117011446A publication Critical patent/CN117011446A/en
Application granted granted Critical
Publication of CN117011446B publication Critical patent/CN117011446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本发明公开了一种实时环境光照的渲染方法,涉及计算机图像处理技术领域。包括:获取场景信息,场景信息包括实时动态环境全景信息和/或静态环境全景信息。动态环境光照重建,根据场景信息生成第一全景图像,并获取场景信息的环境光照参数,根据第一全景图像生成全景辐照图,并将环境光照参数映射至全景辐照图中,全景辐照图包括全方位动态光照信息。获取虚拟模型,通过虚拟三维数据建模,生成虚拟模型;真实感渲染,根据全景辐照图,对虚拟模型进行渲染并生成渲染结果,其中,渲染结果能够传输至MR设备中,并在MR设备内显示。通过采用本发明提供的技术,能够实现高精度MR的渲染质量的同时提供高效计算,满足MR相关应用的实时交互需求。

The invention discloses a real-time environment lighting rendering method, and relates to the technical field of computer image processing. Including: obtaining scene information, which includes real-time dynamic environment panoramic information and/or static environmental panoramic information. Dynamic environmental illumination reconstruction, generates a first panoramic image based on scene information, obtains the environmental illumination parameters of the scene information, generates a panoramic irradiation map based on the first panoramic image, and maps the environmental illumination parameters to the panoramic irradiation map. Panoramic irradiation The graph includes a full range of dynamic lighting information. Obtain the virtual model and generate the virtual model through virtual three-dimensional data modeling; realistic rendering, render the virtual model and generate rendering results based on the panoramic irradiation map, where the rendering results can be transmitted to the MR device and displayed on the MR device displayed within. By adopting the technology provided by the present invention, it is possible to achieve high-precision MR rendering quality while providing efficient calculation and meeting the real-time interaction requirements of MR-related applications.

Description

Real-time rendering method for dynamic environment illumination
Technical Field
The invention relates to the technical field of computer image processing, in particular to a real-time rendering method of dynamic environment illumination.
Background
Mixed Reality (MR) and augmented Reality (Augmented Reality, AR) technologies are increasingly used, wherein the key of the Mixed Reality technology is to implement realistic rendering on a virtual model according to ambient light shadow information in a real scene, and a user can observe the virtual model at any time and place after wearing an MR display device, and generate virtual-real fusion and an immersive visual effect that the virtual model really exists in the real environment. In MR application, the realistic rendering result can achieve the effect of spurious and spurious, thereby improving the visual perception of the virtual model when the user browses.
However, in the prior art, the current hardware devices and software algorithms are limited, so that the rendering quality and the interaction efficiency are often contradictory, and both of them affect the experience of the user in the MR application. In order to avoid the problems of dizziness and the like of users during use, the MR application usually discards high rendering quality to ensure efficient interactive functions, so that the sense of reality of the visual experience is often limited and weakened.
In order to improve the degree of fusion of MR rendering results with the real environment surrounding the user, many studies at home and abroad have been made in recent years. For example, in terms of illumination estimation, some researches focus on deducing omnidirectional illumination from images with limited field of view through a mobile phone, but illumination information parameters acquired in this way are less, rendering based on illumination deduced by less information parameters is difficult to be applied to the case of complex scenes, and visual reality and rationality of display effects are weak.
In addition, in some related researches of illumination estimation based on panoramic images, illumination information of the panoramic images is obtained through analysis of high-dynamic images, and the method is accurate, but has high calculation difficulty and low efficiency, and cannot meet practical MR application development requirements.
Disclosure of Invention
The invention provides a rendering method of illumination in a dynamic illumination environment, which aims to solve the problem that the prior art cannot render illumination according to surrounding scenes in real time when the interaction efficiency is high.
In order to solve the technical problems, the technical scheme adopted by the invention is to provide a rendering method of real-time environmental illumination, which comprises the following steps:
and acquiring scene information, wherein the scene information comprises real-time dynamic environment panoramic information and/or static environment panoramic information. Dynamic ambient illumination reconstruction, generating a first panoramic image according to scene information, and acquiring ambient illumination parameters of the scene information, wherein the ambient illumination parameters comprise the position, the type and the illumination intensity of a light source in a real scene, generating a panoramic irradiation map according to the first panoramic image, and mapping the ambient illumination parameters into the panoramic irradiation map, wherein the panoramic irradiation map comprises omnibearing dynamic illumination information.
Obtaining a virtual model, and generating the virtual model through virtual three-dimensional data modeling; and rendering the virtual model according to the panoramic irradiation map and generating a rendering result, wherein the rendering result can be transmitted to the MR equipment and displayed in the MR equipment.
The technical scheme provided by the invention has the beneficial effects that:
and splicing information in a real environment scene where a user is located through Dynamic environment illumination to obtain a first panoramic image, detecting environment illumination parameters in the first panoramic image, and further generating an HDR (High-Dynamic Range) panoramic irradiation map. Compared with two modes in the prior art, namely, the method for deducing the omnidirectional illumination information through an algorithm and directly acquiring the HDR image, the calculation pressure can be reduced while the omnidirectional dynamic illumination is acquired. The method can be still applicable to various types of natural scene and model data under the condition of low hardware cost, and can maintain high-precision realistic rendering quality.
In addition, the sense of realism rendering renders any virtual model generated in the virtual three-dimensional data modeling by a ray tracing method to obtain a rendering result matched with the current ambient illumination, and interacts with the MR device. Namely, the realism rendering is used for rendering different degrees of illumination surfaces and shadow surfaces for the virtual model, so that when a user views a virtual object or even interacts with the virtual model, the user can synchronize real-time and dynamic environment illumination to generate vivid shadows and bright and dark effects, and the realism of the user is improved. Therefore, high-efficiency calculation is provided while the rendering quality of the high-precision MR is realized, and the real-time interaction requirement of MR related applications is met.
In some embodiments, obtaining scene information includes: at least one camera is arranged, and the camera can continuously shoot scenes around the camera according to a set view angle, wherein the set view angle is a plurality of preset camera view angles; the camera shoots a plurality of continuous images with set angles; the plurality of images are transmitted to a graphics workstation.
By adopting the technical scheme, a plurality of continuous multi-angle LDR (Low Dynamic Range ) images are acquired through camera shooting, are transmitted to a graphic workstation, and are spliced and calculated to generate a first panoramic image. The method can reduce the calculation difficulty of directly acquiring the HDR (High Dynamic Range ) panoramic image for calculation under the condition of acquiring all information of the real scene.
In some embodiments, the dynamic ambient light reconstruction further comprises: panoramic reconstruction, namely acquiring a plurality of images, performing panoramic stitching on the plurality of images at each moment by an image stitching technology, and generating a first panoramic image, wherein the first panoramic image comprises omnidirectional environment information; coordinate system correction for converting a first panoramic image in a reference coordinate system with a camera into a second panoramic image in a world space coordinate system; spatial position correction, which is used to simulate illumination changes in different world space coordinate systems.
By adopting the technical scheme, a plurality of LDR images shot by the camera are spliced into a first scene image, and the first scene image has omnibearing environment information, but is environment information taking the shot camera as a center, and is not a virtual model. Therefore, when the virtual model and the camera are at different positions, it is not accurate enough to directly calculate the illumination information around the virtual model if the first panoramic image centered on the camera. The ambient lighting parameters used for rendering are centered on the virtual model by correction of the coordinate system and spatial position correction.
In some embodiments, the coordinate system correction specific step includes: acquiring UV coordinates of each pixel on the second panoramic imageThe method comprises the steps of carrying out a first treatment on the surface of the UV coordinates of each pixel +.>Conversion to spherical coordinates>And spherical coordinates +.>Conversion to three-dimensional Cartesian coordinates>The method comprises the steps of carrying out a first treatment on the surface of the Three-dimensional Cartesian coordinates->The conversion from the camera coordinate system to the world coordinate system is noted as:
wherein,to set the rotation matrix +.>Is world coordinates; traversing each pixel of the second panoramic image and generating the second panoramic image, wherein UV coordinates of each pixel +.>The corresponding pixel value is world coordinate +.>And the pixel value corresponding to the first panoramic image.
By adopting the technical scheme, through specific step operation, the panoramic image with the two-dimensional plane coordinate centered on the camera is converted into the second panoramic image with the world space coordinate system centered on the virtual model. In addition, the situation that the light shadow rendered on the surface of the virtual model is flickering, shaking and the like can not be caused by converting the reference coordinate system of the MR equipment in the moving process can be effectively ensured.
In some embodiments, spatial position correctionThe positive specific steps include: mapping a second panoramic image under a world space coordinate system onto a unit sphere, wherein the spherical coordinates of a preset virtual model are as followsThe method comprises the steps of carrying out a first treatment on the surface of the Rotating the environment ball until the horizontal angle of the virtual model is consistent with the preset sight line direction, and marking as follows:
wherein,is the angular coordinate of the virtual model; the panoramic image corresponding to the rotated environmental ball is a third panoramic image; the pixels on the environmental sphere that are sampled centered on the virtual model can be expressed as:
representing the variables->Representing a preset distance;The following is also satisfied:
wherein the variables areTo preserve the positive solution, all pixels on the third panoramic image are according to the variable +.>Mapped to the fourth panoramic image.
By adopting the technical scheme, when the user continuously moves, the virtual object can be moved to different world space positions by the user, and in order to avoid changing the coordinates of all other objects in the world space coordinates to compensate the coordinate correction of the user, the world coordinate system which can be reliably mapped by constructing all the coordinates is constructed. And stable transformation is set to define the depth relation between the user and the virtual model in the world, so that the stability of the space coordinates is improved, and the rendering result is more true for the user.
In some embodiments, the dynamic ambient illumination reconstruction further comprises performing illumination analysis on the panoramic irradiation map, the illumination analysis comprising illumination estimation and illumination difference value calculation between adjacent frames; performing illumination estimation based on the first panoramic image to obtain an environment irradiation map;
wherein, the illumination difference value calculation between adjacent frames comprises: calculating sampling weight of the environment irradiation map, and marking as follows:
wherein,high, ∈representing ambient irradiance pattern>Representing the ordinate as +.>Is +.>The method comprises the steps of carrying out a first treatment on the surface of the All pixels on the environmental irradiation map are combined with the sampling weight +.>Obtaining a panoramic irradiation map after multiplication; local illumination similarity calculation, dividing a panoramic irradiation map into +.>A picture block in which->The illumination similarity between adjacent frames of each image block is calculated according to the SSIM index and is recorded as:
wherein,representing the average value of pixels in an image block, +.>The value range of (2) is +.>When->The more the value of (2) is close to 1 +.>The more similar the individual image blocks are between adjacent frames; global illumination difference calculation, namely, according to a result of local illumination similarity calculation, the global illumination difference of the panoramic irradiation map can be recorded as:
wherein,the value range of (2) is +.>When->Smaller, adjacent inter-frame ringsThe smaller the difference in ambient lighting.
By adopting the technical scheme, for practical application, a user can always move continuously and interact with the virtual model when wearing the MR equipment, and if the user only relies on the environment illumination information which is continuously changed around the user to be captured in real time, the fault tolerance rate and the calculation power requirements are higher. Since the illumination variation between adjacent video frames is generally relatively small, the first panoramic image (i.e., the fourth panoramic image) after the coordinate correction is subjected to illumination estimation, thereby obtaining an environmental radiation map. The light source which is closer to the top or the bottom of the panoramic image has larger area, so that the light source is easier to sample, and the problem of uneven sampling is effectively avoided by sampling the environment irradiation map.
In addition, the local illumination similarity and the global illumination difference of the panoramic irradiation map are judged based on whether the rendering result of the previous frame can be used for further optimizing the current frame, and the rendering result of the previous frame can be used for rendering optimization of the current frame by analyzing that when the difference between the current frame and the previous frame is smaller.
In some implementations, the photorealistic rendering includes ray-trace based rendering and real-time denoising; the real-time denoising comprises time domain denoising and space-time denoising, the time domain denoising performs denoising on the current frame rendering through the rendering result of the previous frame, and the space-time denoising is used for denoising among neighborhood pixels and among MR devices, wherein the MR devices comprise a first screen and a second screen; the real-time denoising method comprises the following specific steps of: a re-projection calculation, the re-projection calculation comprising: obtaining pixels of a current frame of a first screen by a projection matrixPixels of the corresponding frame of the corresponding second screen +.>The method comprises the steps of carrying out a first treatment on the surface of the And combines pixels of the previous frame of the first screen +.>And pixels of the previous frame on the second screen +.>Optimization->Pixel value +.>The method comprises the steps of carrying out a first treatment on the surface of the The first step of denoising is recorded as:
wherein,is->Pixel value of the first step denoising, +.>Is->The second step of denoising pixel values,is->Pixel values of the second step of denoising, +.>Including time domain denoising and space-time denoising; and step two, denoising, namely:
wherein,is->Pixel value denoised in the first step, < >>Is->The pixel value of the first step denoising;Is->The corresponding pixel value is +.>And->Is set with a weighting value->Representation->Surface reflectivity of the corresponding virtual model in three-dimensional space.
By adopting the technical scheme, the environment illumination information can be conveniently sampled and the coloring effect (such as adding a light shadow surface) of the current virtual model is more vivid through the rendering based on ray tracing.
In addition, the denoising effect of the MR equipment is realized through the first step and the second step of denoising operation in real time, namely, the time domain denoising performs denoising on the rendering of the current frame through the rendering result of the previous frame; and denoising the image passing through the neighborhood pixels and between the first screen and the second screen in time space.
After the real-time denoising is performed, the graphic workstation stores the rendering result of each frame into a two-dimensional texture image, and the rendering result can be continuously transmitted to the MR device in real time according to the sequence through communication protocols of rendering engines such as Unity or UE and the like, and is displayed in the MR device.
In some embodiments, determining the contribution of the previous frame of rendered image to the current frame of rendered image from the difference in illumination between adjacent frames is:
wherein,global illumination difference for the second panoramic irradiation map,/->Representation->Function (F)>Representing bilateral weight calculation factors between adjacent frames; ray-tracing based rendering includes: sampling and coloring in a three-dimensional virtual space generated by three-dimensional data through ray tracing, and acquiring a rendered two-dimensional texture image; and saving the rendering result after the real-time denoising to the two-dimensional texture image.
By adopting the technical scheme, the contribution degree calculation of the previous frame of the rendering image to the current frame of the image is set, and the duty ratio weight of the previous frame of the rendering image for denoising the current frame can be judged.
In some embodiments, the MR device comprises a plurality of sensors, which are capable of acquiring sensor parameters of the MR device, obtaining sensor parameters that are relatively stable in time series by means of kalman filtering, and enabling smooth transition of image content between adjacent frames by means of an anti-shake algorithm.
By adopting the technical scheme, in the prior art, a plurality of sensors are arranged on the MR equipment, and the parameters of the sensors are sensitive, if the parameters of the sensors are directly used for interacting with a graphic workstation, the unstable jitter problem of a rendering result on a time sequence is easily caused, so that the anti-jitter algorithm is used for filtering the parameters of the sensors in the MR equipment, and the smoothness of images between adjacent frames can be effectively improved.
In some embodiments, the MR device further comprises a panoramic camera, the panoramic camera being fixed to the MR device, and the lens axis of the panoramic camera being coincident with the axis of the MR device.
By adopting the technical scheme, the panoramic camera is fixed on the MR equipment and shares the same set of camera coordinate system with the MR equipment, so that the center of the visual field of a user can be effectively aligned with the centers of a plurality of panoramic pictures after the reconstruction of the environmental illumination.
Drawings
For a clearer description of the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly introduced below, it will be obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a flowchart illustrating a method for rendering real-time ambient illumination according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second embodiment of a method for rendering real-time ambient illumination according to the present invention;
FIG. 3 is a flow chart of an embodiment of a dynamic ambient illumination reconstruction according to the present invention;
fig. 4 is a flowchart illustrating a method for rendering real-time ambient illumination according to an embodiment of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
For convenience of the following description, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Referring to fig. 1 to 2, fig. 1 is a flowchart illustrating an embodiment of a method for rendering real-time ambient illumination according to the present application; fig. 2 is a flow chart illustrating a second embodiment of a method for rendering real-time ambient illumination according to the present application.
In some embodiments, a method of rendering real-time ambient illumination includes:
step S10, scene information is acquired, wherein the scene information comprises real-time dynamic environment panoramic information and/or static environment panoramic information.
At least one camera is provided, for example, a fisheye panoramic camera, which is capable of continuously capturing a scene around the camera according to a set viewing angle, wherein the set viewing angle is a plurality of preset camera viewing angles, for example, the set camera sets three capturing viewing angles at 200 ° intervals along the circumferential direction, and images of the top viewing angle of the camera are captured at the same time.
The camera shoots a plurality of continuous images with set angles; and transmitting the plurality of images to the graphics workstation. The plurality of cameras acquire an HDR panoramic image of a static surrounding scene and panoramic information of the static surrounding scene at the same time, and acquire the HDR panoramic image of a dynamic scene and the panoramic information of the dynamic surrounding scene when the plurality of cameras continuously shoot in a time sequence (for example, continuously shoot the same view angle in 0.03S). Wherein, the scene represented by the application is the surrounding environment where the camera is located
A first panoramic image is generated by performing stitching calculation on a plurality of continuous multi-angle panoramic images of LDR (Low Dynamic Range ) obtained through camera shooting. The calculation difficulty of directly acquiring the HDR image for calculation can be reduced. The scene is a space scene of a real environment where the camera is located, can include a virtual model, and is not limited to a duty ratio of the virtual model, for example, an indoor game scene, an outdoor game scene, and an indoor three-dimensional model scene.
Step S20, dynamic ambient illumination reconstruction is carried out, a first panoramic image is generated according to scene information, ambient illumination parameters of the first panoramic image are obtained, wherein the ambient illumination parameters comprise the position, the type and the illumination intensity of a light source in a real scene, a panoramic illumination map is generated according to the first panoramic image, the ambient illumination parameters are mapped into the panoramic illumination map, and the panoramic illumination map comprises omnibearing dynamic illumination information.
And reconstructing a continuous LDR panoramic image sequence chart around the camera, namely a first panoramic image, from a plurality of images with multiple visual angles shot by the camera in real time through a panoramic stitching algorithm in the prior art. And then the continuous first panoramic image is subjected to an illumination estimation algorithm according to the dynamically changing environment illumination parameters in the real scene (namely, the surrounding scene of the camera) corresponding to the time sequence. Which includes at least the location, type and illumination intensity of the light source within the real scene. For example, when the real scene is an indoor scene, the real scene at least includes the indoor light source position (for example, coordinates of a plurality of lamps), the light source type (for example, lamplight) and the illumination intensity (for example, the light intensity of a plurality of lamplight is acquired), and is mapped into the HDR panoramic irradiation map.
Wherein, panorama irradiation pattern is: an image of the actual scene luminance is calculated using a plurality of common digital images of different exposure levels and has a higher luminance and a higher contrast than the HDR first panoramic image. So that the graphics workstation, when rendered, the panoramic irradiation map can include all-round dynamic illumination information, i.e., dynamic variation values of illumination parameters within a time sequence (e.g., the position, type, and brightness of the light within 0.01S).
Step S30, a virtual model is obtained, and the virtual model is generated through virtual three-dimensional data modeling.
In the prior art, an MR device can be connected to a graphics workstation, and a three-dimensional model measurement and reconstruction technique is used to obtain a three-dimensional grid surface model or a three-dimensional image acquisition technique is used to obtain three-dimensional volume data, so as to reconstruct the three-dimensional model or render the three-dimensional volume data. For example, various modeling software can form a three-dimensional virtual model through three-dimensional model reconstruction or three-dimensional volume data rendering.
And S40, rendering the virtual model according to the panoramic irradiation map and generating a rendering result, wherein the rendering result can be transmitted to the MR equipment and displayed in the MR equipment.
And rendering any virtual model (for example, the generated virtual model is a cube) generated in the virtual three-dimensional data modeling by using a ray tracing method to obtain a rendering result matched with the current environment illumination. Ray tracing is a rendering technique in which an image is generated by simulating rays in a living object and effects of collision with a virtual model. The method comprises the steps of carrying out a rendering on a cube according to all-dimensional dynamic illumination information acquired by a panoramic irradiation map, enabling the cube to present shadows and bright-dark effects synchronous with ambient illumination, enabling a user to present a certain depth of field and bright-dark feeling through shadows with different gray scales in a sketch drawing, and effectively improving immersion feeling of the user.
Step S50 is also included, MR function integration. The realism rendering result in step S40 is transmitted to the MR device in the form of data of consecutive image frames and displayed, wherein the transmission mode is not limited to wireless transmission (e.g. WIFI) or wired transmission mode. The MR device is capable of capturing a variety of interaction parameters including a variety of interaction parameters such as a user's current pose, a virtual model pose, a user's gesture, a user's voice, and the like. These interaction parameters can be transmitted to the graphics workstation in the same manner of transmission. And then, according to the corresponding changed coordinates, morphology and dynamic illumination information, the sense of reality rendering of step S40 is continued again.
For example, when the MR device detects that the user grabs a virtual cube, the graphics workstation can render corresponding shading effects according to the state change and the illumination change of the cube.
In the embodiment of the application, the computation complexity of the step S20 and the step S40 is high, and in order to meet the requirements of rendering quality and interaction speed at the same time, the steps are deployed in a graphics workstation to perform rapid high-quality illumination computation and rendering work. The application is not limited in this respect, but can also be deployed in MR devices, for example when the MR device is of sufficient strength. The rendering result obtained by the graphics workstation in each frame may be continuously transmitted to the screen of the MR head-mounted display in real time through WiFi or other transmission modes according to the sequence through a communication protocol carried by a rendering engine (such as a unit engine) such as a unit or UE (User Experience) or the like (a well-written core component or system capable of implementing image rendering).
In some embodiments, the MR device comprises a plurality of sensors, which are capable of acquiring sensor parameters of the MR device, obtaining sensor parameters that are relatively stable in time series by means of kalman filtering, and enabling smooth transition of image content between adjacent frames by means of an anti-shake algorithm.
In this embodiment of the present application, in the prior art, a plurality of sensors are set on an MR device, and parameters of the sensors are sensitive, if the parameters of the sensors are directly used for interacting with a graphics workstation, unstable jitter problem of a rendering result on a time sequence is easily caused, so that the image smoothness between adjacent frames can be effectively improved by using an anti-jitter algorithm in sensor parameter filtering in the MR device. However, the anti-shake algorithm is not limited in this application, for example, a video anti-shake algorithm commonly used in the prior art.
In some embodiments, the MR device further comprises a panoramic camera, the panoramic camera being fixed to the MR device, and the lens axis of the panoramic camera being coincident with the axis of the MR device.
In the embodiment of the application, the panoramic camera is fixed on the MR equipment and shares the same set of camera coordinate system with the MR equipment, so that the center of the visual field of the user can be effectively aligned with the centers of a plurality of panoramic images after the reconstruction of the environmental illumination. Illustratively, step S500 sensor parameter records interact with step S50 functional integration.
So that
(1) Transmitting a realism rendering result on the graphic workstation and the MR device;
(2) The MR device transmits the user pose and sensor parameters of the MR device to the graphics workstation.
It should be noted that, in order to expand the active range of the user as much as possible, since the different devices perform tasks and refresh rates are different, a data synchronization transmission module needs to be further set to ensure normal browsing of the MR device, and the data update does not conflict with the current frame rendering task. In addition, the transmission mode is not limited in this application, for example, the camera transmits to the graphics workstation through a wireless or wired transmission mode, and the graphics workstation interacts with the MR device through a wireless or wired transmission mode.
Referring to fig. 3, fig. 3 is a flow chart illustrating an embodiment of dynamic ambient illumination reconstruction in step S20 of the present application.
Furthermore, in some embodiments, the dynamic ambient light reconstruction further comprises:
step S200, panoramic reconstruction is carried out, a plurality of images are obtained, panoramic stitching is carried out on the plurality of images at each moment through an image stitching technology, and a first panoramic image is generated, wherein the first panoramic image comprises all-dimensional environment information.
Image stitching is performed on a plurality of multi-angle LDR images shot by a camera to form a first panoramic image, and redundant parts in a plurality of photos are removed by means of feature extraction, image transformation fusion and the like. A first image is formed having all illumination parameters within the scene surrounding the camera.
Since the first panoramic image is a panoramic image with two-dimensional plane coordinates centered on the camera, there is an offset from the virtual model modeled in step S30. In order to improve the authenticity, the coordinate system is rectified for converting the first panoramic image in the reference coordinate system with the camera into the second panoramic image in the world space coordinate system centered on the virtual model, by step S201.
In step S201, the specific steps of coordinate system correction include:
(1) Acquiring UV coordinates of each pixel on the second panoramic imageWherein the current second image is an empty image with pixels (e.g., fixed size).
(2) UV coordinates of each pixelConversion to spherical coordinates>And spherical coordinates +.>Conversion to three-dimensional Cartesian coordinates>
(3) To three-dimensional Cartesian coordinatesThe conversion from the camera coordinate system to the world coordinate system is noted as:The method comprises the steps of carrying out a first treatment on the surface of the Wherein (1)>To set the rotation matrix +.>Is world coordinates; setting the rotation matrix as the rotation matrix of the user's head, e.g. one for performing a simplified calculation according to the direction of the user's line of sight>Is a matrix of (a) in the matrix.
(4) Traversing each pixel of the second panoramic image and generating the second panoramic image, wherein the UV coordinates of each pixelThe corresponding pixel value is world coordinate +.>And the pixel value corresponding to the first panoramic image.
It is also possible to effectively ensure that the shadows on the surface of the built virtual model (e.g., a cube with varying shadows) do not flicker, jam, etc. when the MR apparatus is worn by the user for movement through step S201.
In order to simulate the illumination situation when the virtual model is moved to different world space positions by the user, the method further comprises step S202, the space position correction is used for simulating illumination changes under different world space coordinate systems.
The step S202, the specific steps of spatial position correction include:
mapping a second panoramic image under a world space coordinate system onto a unit sphere, wherein the spherical coordinates of a preset virtual model are as follows
Rotating the environment ball until the horizontal angle of the virtual model is consistent with the preset sight line direction, and marking as follows:the method comprises the steps of carrying out a first treatment on the surface of the Wherein (1)>Is the angular coordinate of the virtual model; the panoramic image corresponding to the rotated environmental ball is a third panoramic image; the preset implementation direction is the current sight direction of the user detected by the MR device when the user wears the MR device.
The pixels on the environmental sphere that are sampled centered on the virtual model can be expressed as:
representing the variables->Representing a preset distance;The following is also satisfied:
the method comprises the steps of carrying out a first treatment on the surface of the Wherein, the variable->To preserve the positive solution, all pixels on the third panoramic image are according to the variable +.>Mapped to the fourth panoramic image.
In embodiments of the present application, an environment Ball (Env Ball) may be represented in an environment (real world or computer generated environment) that may be recreated using highly reflective chrome Ball images. When the user moves continuously, the virtual article can be moved to different world space positions by the user, and in order to avoid changing the coordinates of all other articles in the world space coordinates to compensate the coordinate correction of the user, a world coordinate system which can be reliably mapped by all the coordinates is built. And stable transformation is set to define the depth relation between the user and the virtual model in the world, so that the stability of the space coordinates is improved, and the rendering result is more true for the user. For example, when a virtual model cube is 5 meters from a user, the distance between the cube and the user changes when the user moves to the virtual model cube and picks up the cube, and the pose of the cube changes dynamically. Based on the change, the space scene is spatially divided, and the pose change between the user and the virtual model in the space can be consistent with the current illumination parameter and the sight sense of the user in real time through the correction of the space position.
In some embodiments, the dynamic ambient illumination reconstruction further comprises performing step S210 on the panoramic irradiation map, and the illumination analysis step S210 comprises step S211 illumination estimation and step S212 illumination difference value calculation between adjacent frames.
In step S211, illumination estimation is performed to obtain an environmental radiation map based on the first panoramic image.
Because the fourth panoramic image after coordinate correction is an LDR image, important illumination information is lost, if the fourth panoramic image is taken as a rendering illumination basis, a realistic shadow effect cannot be rendered on the surface of the virtual model, so that the HDR illumination in the real scene, namely, an environment illumination map and a panoramic illumination map, including environment illumination parameters, namely, the position, the type and the illumination intensity of a light source in the scene, are predicted through an illumination estimation algorithm in the prior art. However, the illumination estimation method is not limited, and includes, but is not limited to, depth parameter (Deep parameter) estimation, neural illumination (Neural illumination) estimation, and other modes, so long as a fast illumination estimation method capable of being based on a single LDR panoramic image is selected, that is, illumination estimation is performed on an HDR image converted from an LDR image to directly predict information such as a light source position. It should be noted that, in step S211, the global illumination and the information related to each light source are predicted with respect to the camera.
In addition, in step S212, the calculation of the illumination difference value between adjacent frames includes:
calculating sampling weight of the environment irradiation map, and marking as follows:the method comprises the steps of carrying out a first treatment on the surface of the Wherein (1)>High, ∈representing ambient irradiance pattern>Representing the ordinate as +.>Is +.>
All pixels on the environment irradiation map and sampling weightsMultiplying to obtain the wholeJing Fuzhao; local illumination similarity calculation, dividing a panoramic irradiation map into +.>A picture block in which->The illumination similarity between adjacent frames of each image block is calculated according to the SSIM index and is recorded as:
wherein,representing the average value of pixels in an image block, +.>The value range of (2) is +.>When->The more the value of (2) is close to 1 +.>The more similar the individual image blocks are between adjacent frames; global illumination difference calculation, namely, according to a result of local illumination similarity calculation, the global illumination difference of the panoramic irradiation map can be recorded as:
wherein,the value range of (2) is +.>When->The smaller the ambient lighting difference between adjacent frames is, the smaller.
In the embodiment of the application, for practical application, a user usually moves continuously and interacts with the virtual model when wearing the MR device, and if the user only relies on capturing continuously changing ambient illumination information around the user in real time, the fault tolerance and the computational power requirements are high. Because the illumination variation between adjacent video frames is usually smaller, the judgment on the basis of the sampling weight, the local illumination similarity and the global illumination difference of the panoramic radiation map can be used for further optimizing the current frame based on the rendering result of the previous frame, and the rendering result of the previous frame can be used for the rendering optimization of the current frame by analyzing the fact that the difference between the current frame and the current frame is smaller.
Referring to fig. 4, fig. 4 shows a flowchart of a third embodiment of a method for rendering real-time ambient illumination provided in the present application.
In some embodiments, step S40, the photorealistic rendering includes: step S41, based on the ray traced rendering and step S42, denoising in real time.
Step S41, ray tracing based rendering includes: sampling and coloring in a three-dimensional virtual space generated by three-dimensional data through ray tracing, and acquiring a rendered two-dimensional texture image; and saving the rendering result after the real-time denoising to the two-dimensional texture image.
In step S41, in ray tracing-based rendering, sampling of ambient illumination information and more realistic coloring effects (e.g. adding a light shadow surface) of the current virtual model are facilitated, and a rendered two-dimensional texture is obtained, and a rendered image is generated by using a ray tracing algorithm based on monte carlo (e.g. ray tracing and path tracing of monte carlo) in the prior art, for example.
Wherein, rendering (Render) refers to the process of two-dimensionally projecting a model in a three-dimensional scene into a digital image according to set illumination, environment, material and rendering parameters. The application refers to a process of performing shadow rendering on a three-dimensional object based on ambient scene illumination to form an image, so that the three-dimensional object is presented closer to the real world and is convenient to operate and transform.
Texture (Texel) refers to a particular surface feature of an object, different object surfaces producing different texture images. It should be understood that texture information refers to two-dimensional coordinates in two-dimensional space, i.e., texture coordinates or UV coordinates, where U represents a horizontal direction, V represents a vertical direction, and the range of values of the UV coordinates in both the horizontal direction and the vertical direction is [0,1], independent of the texture size, independent of the texture aspect ratio, and is a relative coordinate.
Further, in step S41, in order to balance the contradiction between the quality and efficiency of the rendering, the rendering based on ray tracing adopts a smaller number of rays during sampling, thereby obtaining a rendering result with more preliminary noise.
The specific operations of the real-time denoising in step S42 include temporal denoising and spatio-temporal denoising, where the temporal denoising performs denoising on the current frame rendering by using the rendering result of the previous frame, and the spatio-temporal denoising is used for denoising between neighboring pixels and between MR devices, where the MR devices include a first screen and a second screen.
Step S42, the specific steps of real-time denoising are as follows:
in step S420, the re-projection calculation includes: obtaining pixels of a current frame of a first screen by a projection matrixPixels of the corresponding frame of the corresponding second screen +.>The method comprises the steps of carrying out a first treatment on the surface of the And combines pixels of the previous frame of the first screen +.>And pixels of the previous frame on the second screen +.>Optimization->Pixel value +.>
And obtaining the triangular positioning of the previous frame pixel of the first screen, the current frame pixel of the first screen and the previous frame of the second screen through feature matching and according to the projection of the current frame. Thereby realizing the calculation of the reprojection error to construct a cost function and minimizing the cost function, thereby realizing the calculation of the reprojection errorIs described.
Step S421, the first step of denoising is recorded as:
the method comprises the steps of carrying out a first treatment on the surface of the Wherein (1)>Is->Pixel value of the first step denoising, +.>Is->Pixel values of the second step of denoising, +.>Is->Pixel values of the second step of denoising, +.>Including time domain denoising and space-time denoising.
And for the first step of denoising of the current frame, rendering and denoising the current frame by the corresponding frame of the mapped second screen of the current frame, the two-step denoising result of the previous frame of the first screen and the two-step denoising result of the previous frame of the second screen.
According to step S212, the contribution degree of the previous frame of the rendered image to the current frame of the rendered image is determined by calculating the illumination difference value between adjacent frames as follows:the method comprises the steps of carrying out a first treatment on the surface of the Wherein (1)>Global illumination difference for the second panoramic irradiation map,/->Representation->Function (F)>Representing bilateral weight calculation factors between adjacent frames. And setting the contribution degree calculation of the previous frame of rendering image to the current frame of image, and judging the duty ratio weight of the previous frame of rendering image for denoising the current frame.
Denoising the current frame rendering by time domain denoising (i.e., image pixel neighborhood denoising) by the rendering result of the previous frame; and denoising in time space through neighborhood pixels and between the first screen and the second screen. Thereby realizing denoising between adjacent frames.
Step S422, the second step of denoising is recorded as:
wherein,is->Pixel value denoised in the first step, < >>Is->The pixel value of the first step denoising;Is->The corresponding pixel value is +.>And->Is set with a weighting value->Representation->Surface reflectivity of the corresponding virtual model in three-dimensional space.
After the real-time denoising (i.e., denoising the image between the first screen and the second screen), the graphics workstation saves the rendering result of each frame to a two-dimensional texture image, and the rendering result can be continuously and real-time transmitted to the MR device according to the sequence through the communication protocol of the rendering engine such as Unity or UE, etc., and the rendering result is displayed in the MR device in step S400.
The foregoing description is only of embodiments of the present invention, and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields using the descriptions and drawings of the present invention should be carried within the scope of the present invention.

Claims (7)

1. A method for real-time rendering of dynamic ambient lighting, comprising:
acquiring scene information, wherein the scene information comprises real-time dynamic environment panoramic information and/or static environment panoramic information;
dynamic ambient illumination reconstruction, a first panoramic image is generated according to the scene information, and ambient illumination parameters of the first panoramic image are obtained, wherein the ambient illumination parameters comprise the position, the type and the illumination intensity of a light source in the scene, a panoramic irradiation map is generated according to the first panoramic image, the ambient illumination parameters are mapped into the panoramic irradiation map, and the panoramic irradiation map comprises omnibearing dynamic illumination information;
obtaining a virtual model, and generating the virtual model through virtual three-dimensional data modeling;
rendering the virtual model according to the panoramic irradiation map and generating a rendering result;
wherein the rendering results can be transmitted into an MR device and displayed within the MR device;
the dynamic ambient light reconstruction further comprises:
at least one camera is arranged, and the camera can continuously shoot scenes around the camera according to a set view angle, wherein the set view angle is a plurality of preset camera view angles;
panoramic reconstruction, namely acquiring a plurality of images, performing panoramic stitching on the images at each moment by an image stitching technology, and generating a first panoramic image, wherein the first panoramic image comprises omnidirectional environment information;
a coordinate system correction for converting the first panoramic image in the camera reference coordinate system into a second panoramic image in world space coordinate system;
spatial position correction for simulating illumination changes in different world space coordinate systems;
the specific coordinate system correction steps comprise:
acquiring UV coordinates of each pixel on the second panoramic image
The UV coordinates of each pixel are calculatedConversion to spherical coordinates>And the spherical coordinates +.>Conversion to three-dimensional Cartesian coordinates>
-combining said three-dimensional cartesian coordinatesThe conversion from the camera coordinate system to the world coordinate system is noted as:
wherein,to set the rotation matrix +.>Is world coordinates;
traversing each pixel of the second panoramic image and generating the second panoramic image, wherein the UV coordinates of each pixelThe corresponding pixel value is the world coordinate +.>Corresponding pixel values in the first panoramic image;
the specific steps of spatial position correction include:
mapping the second panoramic image under the world space coordinate system onto a unit sphere, wherein the spherical coordinates of a preset virtual model are as follows
Rotating the environment ball until the horizontal angle of the virtual model is consistent with the preset sight line direction, and marking as follows:
wherein,an angular coordinate of the virtual model; the panoramic image corresponding to the environment ball after rotation is a third panoramic image;
pixels on the environmental sphere sampled centered on the virtual model may be represented as:
representing the variables->Representing a preset distance;
the saidThe following is also satisfied:
wherein the variables areIt is necessary to preserve a positive solution, all pixels on said third panoramic image being according to said variable +.>Mapped to the fourth panoramic image.
2. The method for real-time rendering of dynamic ambient light according to claim 1, wherein the acquiring scene information comprises:
the camera shoots a plurality of continuous images with set angles;
a plurality of said images are transmitted to a graphics workstation.
3. The method of claim 1, wherein the dynamic ambient illumination reconstruction further comprises illumination analysis of the panoramic irradiation map, the illumination analysis comprising illumination estimation and illumination difference value calculation between adjacent frames;
the illumination estimation is carried out based on the first panoramic image to obtain an environment irradiation map;
wherein, the calculating of the illumination difference value between the adjacent frames comprises:
calculating sampling weight of the environment irradiation map, and marking as follows:
wherein,high, ∈representing ambient irradiance pattern>Representing the ordinate as +.>Is +.>
All pixels on the environment irradiation map are weighted according to the sampling weightMultiplying to obtain the panoramic irradiation map;
local illumination similarity calculation, dividing the panoramic irradiation map intoA picture block in which->The illumination similarity between adjacent frames of each image block is calculated according to the SSIM index and is recorded as:
wherein,representing the average value of pixels in an image block, +.>The value range of (2) is +.>When->The value of (2) is closer to 1, said +.>The more similar the individual image blocks are between adjacent frames;
global illumination difference calculation, according to the result of the local illumination similarity calculation, the global illumination difference of the panoramic irradiation map can be recorded as:
wherein,the value range of (2) is +.>When->The smaller the ambient lighting difference between adjacent frames is, the smaller.
4. The method of claim 1, wherein the realistic rendering comprises ray-tracing based rendering and real-time denoising;
the real-time denoising comprises time domain denoising and space-time denoising, the time domain denoising is used for denoising the rendering of the current frame through the rendering result of the previous frame, and the space-time denoising is used for denoising among neighborhood pixels and among MR devices, wherein the MR devices comprise a first screen and a second screen;
the real-time denoising method specifically comprises the following steps:
a re-projection calculation, the re-projection calculation comprising:
obtaining pixels of a current frame of the first screen through a projection matrixPixels of a corresponding frame of said second screen corresponding +.>
Combining pixels of a previous frame of the first screenAnd pixels of the previous frame on said second screen +.>Optimization->Pixel value +.>
The first step of denoising is recorded as:
wherein,is->Pixel value of the first step denoising, +.>Is->Pixel values of the second step of denoising, +.>Is->Pixel values of the second step of denoising, +.>Including the time domain denoising and the space-time denoising;
and step two, denoising, namely:
wherein,is->Pixel value denoised in the first step, < >>Is->The pixel value of the first step denoising;Is->The corresponding pixel value is +.>And->Is set with a weighting value->Representation->And the surface reflectivity of the virtual model corresponding to the three-dimensional space.
5. The method for real-time rendering of dynamic ambient light according to claim 3 or 4, wherein the determining the contribution degree of the previous frame of rendered image to the current frame of rendered image according to the difference of light between adjacent frames is:
wherein,for global illumination variability of the panoramic irradiation map,/->Representation->Function (F)>Representing bilateral weight calculation factors between adjacent frames;
ray-tracing based rendering includes: sampling and coloring in a three-dimensional virtual space generated by the three-dimensional data through ray tracing, and obtaining a rendered two-dimensional texture image; and saving the rendering result after the real-time denoising to the two-dimensional texture image.
6. The method of real-time rendering of dynamic ambient light according to claim 1, wherein the MR device comprises a plurality of sensors capable of acquiring sensor parameters of the MR device, obtaining the sensor parameters that are more stable in time series by kalman filtering, and enabling smooth transition of image content between adjacent frames by an anti-shake algorithm.
7. The method of claim 1, wherein the MR device further comprises a panoramic camera, the panoramic camera is fixed to the MR device, and a lens axis of the panoramic camera is aligned with an axis of the MR device.
CN202311061199.4A 2023-08-23 2023-08-23 Real-time rendering method for dynamic environment illumination Active CN117011446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311061199.4A CN117011446B (en) 2023-08-23 2023-08-23 Real-time rendering method for dynamic environment illumination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311061199.4A CN117011446B (en) 2023-08-23 2023-08-23 Real-time rendering method for dynamic environment illumination

Publications (2)

Publication Number Publication Date
CN117011446A CN117011446A (en) 2023-11-07
CN117011446B true CN117011446B (en) 2024-03-08

Family

ID=88561792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311061199.4A Active CN117011446B (en) 2023-08-23 2023-08-23 Real-time rendering method for dynamic environment illumination

Country Status (1)

Country Link
CN (1) CN117011446B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119559317B (en) * 2024-11-15 2025-10-28 东北大学 A hybrid screen-space global illumination method based on resampling and probes
CN119989827B (en) * 2025-04-15 2025-07-15 上海引昱数字科技集团有限公司 Digital twin processing system and method for virtual reality of travel

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205853A (en) * 2014-06-20 2015-12-30 西安英诺视通信息技术有限公司 3D image splicing synthesis method for panoramic view management
CN106921824A (en) * 2017-05-03 2017-07-04 丁志宇 Circulating type mixes light field imaging device and method
CN110908510A (en) * 2019-11-08 2020-03-24 四川大学 An application method of oblique photography modeling data in immersive display device
CN111784585A (en) * 2020-09-07 2020-10-16 成都纵横自动化技术股份有限公司 Image splicing method and device, electronic equipment and computer readable storage medium
CN112116530A (en) * 2019-06-19 2020-12-22 杭州海康威视数字技术股份有限公司 Fisheye image distortion correction method, device and virtual display system
CN113269863A (en) * 2021-07-19 2021-08-17 成都索贝视频云计算有限公司 Video image-based foreground object shadow real-time generation method
CN113362232A (en) * 2021-08-09 2021-09-07 湖北亿咖通科技有限公司 Vehicle panoramic all-around image generation method and system
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 Illumination estimation method and device for outdoor natural scene
CN113870161A (en) * 2021-09-13 2021-12-31 杭州鸿泉物联网技术股份有限公司 Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence
CN114996814A (en) * 2022-06-15 2022-09-02 河海大学 Furniture design system based on deep learning and three-dimensional reconstruction
WO2022206380A1 (en) * 2021-04-02 2022-10-06 腾讯科技(深圳)有限公司 Illumination shading method and apparatus, and computer device and storage medium
WO2022222077A1 (en) * 2021-04-21 2022-10-27 浙江大学 Indoor scene virtual roaming method based on reflection decomposition
CN115908755A (en) * 2022-11-15 2023-04-04 深圳市盛仁电子科技有限公司 A kind of AR projection method, system and AR projector
CN115936995A (en) * 2023-01-03 2023-04-07 华南理工大学 A method for panorama stitching of vehicle four-way fisheye camera
CN116485984A (en) * 2023-06-25 2023-07-25 深圳元戎启行科技有限公司 Global illumination simulation method, device, equipment and medium for panoramic image vehicle model

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205853A (en) * 2014-06-20 2015-12-30 西安英诺视通信息技术有限公司 3D image splicing synthesis method for panoramic view management
CN106921824A (en) * 2017-05-03 2017-07-04 丁志宇 Circulating type mixes light field imaging device and method
CN112116530A (en) * 2019-06-19 2020-12-22 杭州海康威视数字技术股份有限公司 Fisheye image distortion correction method, device and virtual display system
CN110908510A (en) * 2019-11-08 2020-03-24 四川大学 An application method of oblique photography modeling data in immersive display device
CN111784585A (en) * 2020-09-07 2020-10-16 成都纵横自动化技术股份有限公司 Image splicing method and device, electronic equipment and computer readable storage medium
WO2022206380A1 (en) * 2021-04-02 2022-10-06 腾讯科技(深圳)有限公司 Illumination shading method and apparatus, and computer device and storage medium
WO2022222077A1 (en) * 2021-04-21 2022-10-27 浙江大学 Indoor scene virtual roaming method based on reflection decomposition
CN113269863A (en) * 2021-07-19 2021-08-17 成都索贝视频云计算有限公司 Video image-based foreground object shadow real-time generation method
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 Illumination estimation method and device for outdoor natural scene
CN113362232A (en) * 2021-08-09 2021-09-07 湖北亿咖通科技有限公司 Vehicle panoramic all-around image generation method and system
CN113870161A (en) * 2021-09-13 2021-12-31 杭州鸿泉物联网技术股份有限公司 Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence
CN114996814A (en) * 2022-06-15 2022-09-02 河海大学 Furniture design system based on deep learning and three-dimensional reconstruction
CN115908755A (en) * 2022-11-15 2023-04-04 深圳市盛仁电子科技有限公司 A kind of AR projection method, system and AR projector
CN115936995A (en) * 2023-01-03 2023-04-07 华南理工大学 A method for panorama stitching of vehicle four-way fisheye camera
CN116485984A (en) * 2023-06-25 2023-07-25 深圳元戎启行科技有限公司 Global illumination simulation method, device, equipment and medium for panoramic image vehicle model

Also Published As

Publication number Publication date
CN117011446A (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN106133796B (en) Method and system for representing virtual objects in a view of a real environment
JP7007348B2 (en) Image processing equipment
Fender et al. Optispace: Automated placement of interactive 3d projection mapping content
CN117011446B (en) Real-time rendering method for dynamic environment illumination
CN107862718B (en) 4D holographic video capture method
JPH11175762A (en) Light environment measuring instrument and device and method for shading virtual image using same
CN113822936A (en) Data processing method and device, computer equipment and storage medium
CN108364292A (en) A kind of illumination estimation method based on several multi-view images
US20230171506A1 (en) Increasing dynamic range of a virtual production display
JP2002117413A (en) Image generating apparatus and image generating method for reflecting changes in light source environment in real time
WO2023109582A1 (en) Light ray data processing method and apparatus, device and storage medium
Santos et al. Display and rendering technologies for virtual and mixed reality design review
CN119342155B (en) Virtual shooting methods, devices, equipment and storage media
WO2023094870A1 (en) Increasing dynamic range of a virtual production display
CN118890442B (en) Twin system video mapping method, device, electronic device and storage medium
CN109309827A (en) Multi-person real-time tracking device and method for 360° suspended light field three-dimensional display system
CN119342155A (en) Virtual shooting method, device, equipment and storage medium
Abad et al. Integrating synthetic objects into real scenes
CN116486046A (en) Method and equipment for displaying virtual object based on illumination intensity
CN119946375A (en) A 3D model placement method and system based on pre-recorded 2D video
CN120496167A (en) Human body posture estimation data generation method and electronic equipment
WO2023094882A1 (en) Increasing dynamic range of a virtual production display
WO2023094872A1 (en) Increasing dynamic range of a virtual production display
WO2023094874A1 (en) Increasing dynamic range of a virtual production display
CN120635228A (en) Smoke effect generation method, device, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20260107

Address after: Kolding road high tech Zone of Suzhou City, Jiangsu Province, No. 88 215000

Patentee after: Suzhou Institute of Biomedical Engineering and Technology Chinese Academy of Sciences

Country or region after: China

Address before: 215000 Jiangsu Province, Suzhou City, Gaoxin District, Zhu Yuan Road 209, Building 4, 26th Floor, Room 2610

Patentee before: Suzhou Shenjie Information Technology Co.,Ltd.

Country or region before: China