
CN114663632B - Method and device for displaying virtual objects based on spatial position illumination - Google Patents

Method and device for displaying virtual objects based on spatial position illumination

Info

Publication number
CN114663632B
CN114663632B
Authority
CN
China
Prior art keywords
virtual object
point cloud
cloud data
sphere
color information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210206906.3A
Other languages
Chinese (zh)
Other versions
CN114663632A (en)
Inventor
孟亚州
黄萌瑶
郝冬宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority claimed from CN202210206906.3A
Publication of CN114663632A
Application granted
Publication of CN114663632B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract


The present application relates to the field of AR technology and provides a method and device for displaying virtual objects based on the illumination at a spatial position. When a virtual object is placed on a real picture, a three-dimensional point cloud dataset corresponding to the real picture is constructed, and the two-dimensional coordinates of the target position where the virtual object is to be placed are converted into three-dimensional coordinates. A target point cloud dataset contained in a cuboid centered on those three-dimensional coordinates, with preset length, width and height as its sides, is extracted from the three-dimensional point cloud dataset, and an illumination information map is determined from the target point cloud dataset. This map accurately reflects the illumination at the target position in the real environment. The virtual object placed at the target position is then rendered according to the illumination information map, so that the lighting on its surface is consistent with the real environment. The virtual object is thus realistically superimposed on the real picture, improving the fidelity of virtual-real fusion and enhancing the AR experience.

Description

Method and device for displaying a virtual object based on spatial-position illumination
Technical Field
The application relates to the technical field of augmented reality (AR), and in particular to a method and device for displaying a virtual object based on the illumination at a spatial position.
Background
AR is a technology, developed on the basis of virtual reality, that augments a user's perception of the real world with information provided by a computer system: virtual objects, virtual scenes, system prompts, or non-geometric information about real objects generated by the computer system are superimposed onto the real scene, thereby "augmenting" the real world.
In an AR experience, users are sensitive to subtle differences in illumination, and illumination consistency is generally taken as an important indicator of virtual-real fusion: the virtual object should exhibit the same illumination effect as the real scene. The goal of illumination consistency is to make the lighting on the virtual object match the lighting conditions of the real scene, i.e., the virtual object should have brightness and shadow effects consistent with real objects, so as to enhance its sense of realism.
At present, most related techniques use a deep-learning model to estimate illumination from an input RGB image. Because an RGB image is two-dimensional, such methods cannot accurately estimate the illumination in 3D space.
Disclosure of Invention
The embodiments of the present application provide a method and device for displaying a virtual object based on the illumination at a spatial position, which improve the illumination consistency between the virtual object and the real environment.
In a first aspect, an embodiment of the present application provides a method for displaying a virtual object by illumination based on a spatial location, which is applied to an AR scene, and includes:
in response to an operation of placing a virtual object on a real picture, constructing a three-dimensional point cloud data set corresponding to the real picture using the simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid centered on the three-dimensional coordinates with preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object superimposed on the real picture.
In a second aspect, an embodiment of the present application provides a display device supporting an augmented reality AR function, including a processor, a memory, a camera, and a display screen, where the display screen, the camera, and the memory are connected to the processor through a bus;
the camera is used for collecting real pictures;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
in response to an operation of placing a virtual object on the real picture displayed by the display screen, constructing a three-dimensional point cloud data set corresponding to the real picture using the simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid centered on the three-dimensional coordinates with preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object superimposed on the real picture through the display screen.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform a method for displaying a virtual object based on illumination of a spatial location.
In the above embodiments of the present application, when a virtual object is placed on a real picture, the three-dimensional coordinates of the target position in three-dimensional space are determined, and a target point cloud data set contained in a cuboid centered on those coordinates, with preset length, width and height as its sides, is acquired from the three-dimensional point cloud data set corresponding to the real picture. Because the real environment is described in point cloud form, each point cloud datum contains the three-dimensional coordinates, color information and intensity information of a corresponding point on a real object. The target point cloud data set is used to generate an illumination information graph for the target position, which accurately reflects the illumination of the real environment. The virtual object placed at the target position is then rendered according to the illumination information graph, so that its surface lighting is consistent with the real environment. The virtual object is thus realistically superimposed on the real picture, improving the fidelity of virtual-real fusion and the AR experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 schematically illustrates a graph of illumination effects at different locations in a real environment provided by an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for displaying a virtual object based on illumination of a spatial location according to an embodiment of the present application;
FIG. 3 schematically illustrates an interface for placing a virtual object according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for determining an illumination information map according to three-dimensional point cloud data according to an embodiment of the present application;
FIG. 5 illustrates a flow chart of a method of rendering a virtual object provided by an embodiment of the application;
FIG. 6 illustrates an AR effect map provided by an embodiment of the present application;
fig. 7 is a diagram schematically illustrating a structure of a display device according to an embodiment of the present application.
Detailed Description
To make the objects, embodiments and advantages of the present application clearer, exemplary embodiments of the application are described below with reference to the accompanying drawings. It should be understood that the exemplary embodiments described here are only some, not all, of the embodiments of the application.
Based on the exemplary embodiments described herein, all other embodiments obtained by one of ordinary skill in the art without inventive effort fall within the scope of the appended claims. Furthermore, while the present disclosure is described in terms of exemplary embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the others.
In AR scenes, extracting the ambient light is particularly important for placing a simulated virtual object realistically on a real picture. The virtual object should have an illumination effect consistent with the real environment, which requires the AR device to sense the ambient illumination: the surface of the virtual object should be darker in a darker environment and brighter in a brighter environment, and when the virtual object has a specular-reflection material, its surface should reflect the picture of the real scene.
In a real environment, the illumination received by an object often varies with its position. As shown in fig. 1, position A is above the desktop and brightly lit, while position B is below the desktop and dimly lit. To give the virtual object in an AR scene a more realistic fusion effect, it should receive different illumination information at different positions.
For example, when the virtual object is placed at position A, its surface should be brighter, and if it has a specular-reflection material it should show the reflections at position A; when placed at position B, its surface should be darker and show the reflections at position B.
The method and device provided by the embodiments of the application change the illumination effect of the virtual object according to where it is placed. Specifically, when a user places a virtual object, the illumination information at the placement position is determined from the point cloud data in three-dimensional space; this information gives the virtual object the correct brightness change, and the correct reflection effect when its surface has a specular-reflection material.
Referring to fig. 2, a flowchart of a method for displaying a virtual object based on the illumination at a spatial position according to an embodiment of the present application is implemented by a display device supporting the AR function, including but not limited to a smart phone, tablet, notebook, smart TV, desktop, vehicle-mounted device or wearable device. The flow mainly includes the following steps:
S201, responding to the operation of placing the virtual object on the real picture, and adopting SLAM technology to construct three-dimensional point cloud data corresponding to the real picture.
In an alternative embodiment, taking a tablet as the display device, as shown in fig. 3, a user opens an AR application, which starts the camera to capture and display a color picture of the real environment. During display, the application detects whether the user touches or clicks the display screen; when such a touch or click is detected, the touched position is taken as the target position for placing a virtual object in the real picture, and a prompt pops up asking the user whether to place the virtual object. When the user clicks "yes", the AR application uses SLAM (Simultaneous Localization and Mapping) to construct a three-dimensional point cloud data set corresponding to the real picture, where each point cloud datum includes the three-dimensional coordinates, color information and intensity information of a corresponding point on a real object in three-dimensional space.
S202, determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space.
In the embodiment of the application, the real picture captured by the camera of the display device is shown on a two-dimensional display screen, and the user selects the target position for the virtual object by touching or clicking the screen; the display device determines the two-dimensional coordinates of the target position on the display screen from this input. In addition, while displaying the real picture, the AR application establishes a virtual screen corresponding to the display screen in three-dimensional space. After the two-dimensional coordinates are determined, a ray is cast from them along the direction perpendicular to the display screen, and the three-dimensional coordinates of the intersection of this ray with the virtual screen are taken as the three-dimensional coordinates of the target position.
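The 2D-to-3D conversion above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the virtual screen is modeled as a plane at a fixed depth in camera space, and that the perpendicular ray from the touch point travels along the camera's +z axis. The function name and the `fov_scale` parameter are hypothetical.

```python
import numpy as np

def touch_to_world(u, v, screen_w, screen_h, plane_depth, fov_scale=1.0):
    """Map a 2D touch point (u, v) in pixels to a 3D target position.

    Assumption: the virtual screen is a plane at z = plane_depth in
    camera space, and the ray is cast perpendicular to the display
    (along +z), as the description above suggests.
    """
    # Normalize pixel coordinates to [-1, 1] with the screen center at 0.
    x_ndc = (2.0 * u / screen_w) - 1.0
    y_ndc = 1.0 - (2.0 * v / screen_h)
    # A ray from (x, y, 0) along +z meets the plane at z = plane_depth,
    # so the intersection's x and y equal the ray origin's x and y.
    return np.array([x_ndc * fov_scale, y_ndc * fov_scale, plane_depth])
```

For a touch at the screen center, the target position lands on the virtual screen's center at the chosen depth.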
S203, acquiring a target point cloud data set which is contained in a cuboid taking a three-dimensional coordinate as a center and preset length, width and height as sides from the three-dimensional point cloud data set.
In general, the three-dimensional point cloud data set covers the whole real picture, while the virtual object occupies only a local area of it, so a cuboid can be constructed centered on the three-dimensional coordinates with preset length, width and height as its sides. The preset dimensions can be set from empirical values or according to the virtual scene; the smaller the cuboid, the shorter the computation time for illumination estimation. Optionally, the preset length, width and height are greater than those of the virtual object. The point cloud data contained in the cuboid is then extracted from the point cloud data set of the real picture, yielding the point cloud of the real objects around the virtual object and improving the accuracy of the illumination estimation.
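The cuboid extraction in S203 amounts to an axis-aligned box filter over the point cloud. A minimal sketch, assuming the point cloud is stored as a NumPy array whose first three columns are x, y, z and whose remaining columns carry color/intensity (a representation chosen here for illustration):

```python
import numpy as np

def crop_cuboid(points, center, size):
    """Return the points inside an axis-aligned cuboid.

    points: (N, 3+) array; columns 0..2 are x, y, z, and any extra
            columns (e.g. RGB color, intensity) are carried along.
    center: (3,) cuboid center (the target position).
    size:   (3,) preset length, width and height of the cuboid.
    """
    half = np.asarray(size, dtype=float) / 2.0
    lo = np.asarray(center, dtype=float) - half
    hi = np.asarray(center, dtype=float) + half
    # A point is kept only if all three coordinates fall inside the box.
    mask = np.all((points[:, :3] >= lo) & (points[:, :3] <= hi), axis=1)
    return points[mask]
```

Enlarging `size` trades computation time for more surrounding context, matching the note above that the smaller the cuboid, the shorter the illumination-estimation time.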
S204, determining an illumination information graph corresponding to the target position according to the target point cloud data set.
In S204, each target point cloud data in the target point cloud data set includes three-dimensional coordinates, color information and intensity information of a corresponding point on a real object around the virtual object to be placed in the three-dimensional space, so that the ambient illumination of the target position can be accurately estimated, and the illumination consistency between the virtual object placed at the target position and the real environment is improved. For a specific estimation process of illumination information, see fig. 4:
s2041, determining at least one surface contained in the target point cloud data set, and obtaining a surface set.
In an alternative embodiment, a random sample consensus (RANSAC) algorithm is used to fit the target point cloud data set, and a surface set is generated from the fitted surface(s). The fitted surfaces may be planes or curved surfaces; different surface types correspond to different fitting equations.
It should be noted that the surface set can also be obtained by other methods, such as implicit-function fitting or triangulation; the application is not limited in this respect.
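The RANSAC fitting named in S2041 can be sketched for the planar case as follows. This is a generic single-plane RANSAC under stated assumptions (fixed iteration count and inlier threshold, both hypothetical), not the patent's specific fitting equations; a full implementation would run it repeatedly to extract several surfaces.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit one dominant plane n . x + d = 0 to an (N, 3) point cloud.

    Returns (normal, d, inlier_mask). Repeated random triples of points
    propose candidate planes; the plane with the most inliers wins.
    """
    rng = rng or np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane through the three sampled points.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask
```

On a point cloud dominated by a tabletop, for example, the returned plane would correspond to the desktop surface, with stray points (e.g. clutter above it) marked as outliers.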
S2042, uniformly dividing a plurality of direction rays in a three-dimensional space by taking the three-dimensional coordinates of the target position as an origin.
In S2042, a reference line is chosen with the three-dimensional coordinates of the target position as the origin, and the 360° three-dimensional space is uniformly divided into a number of direction rays at a fixed angular step. The more direction rays, the richer the illumination information and the more realistic the illumination effect on the virtual object's surface.
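One way to realize the uniform division of S2042 is a latitude/longitude split of the sphere of directions. The patent only specifies a fixed angular step, so the grid layout below is an assumption for illustration:

```python
import numpy as np

def direction_rays(n_theta=8, n_phi=16):
    """Divide 3D space into direction rays from the origin.

    Uses a fixed angular step in polar angle (theta) and azimuth (phi);
    returns (n_theta * n_phi, 3) unit direction vectors. Increasing the
    counts yields finer sampling of the surrounding illumination.
    """
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta   # polar angle
    phis = np.arange(n_phi) * 2.0 * np.pi / n_phi           # azimuth
    dirs = []
    for t in thetas:
        for p in phis:
            # Standard spherical-to-Cartesian conversion on the unit sphere.
            dirs.append([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p),
                         np.cos(t)])
    return np.asarray(dirs)
```

Each returned direction corresponds to one sub-sphere cell of the illumination graph built in the following steps.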
S2043, for each direction ray, determining whether the direction ray intersects with the target point cloud data closest to the origin in the cuboid, if so, executing S2044, otherwise executing S2045.
In the embodiment of the application, the origin is the target position where the virtual object is placed, and each target point cloud datum in the cuboid corresponds to a point on a real object, including its three-dimensional coordinates, color information and intensity information. From the three-dimensional coordinates of the target position and of the target point cloud data in the cuboid, the datum closest to the origin can be determined; it most accurately reflects the illumination at the target position. It is then determined whether the direction ray intersects this closest datum, and the illumination information for the target position is derived from the result.
And S2044, taking the color information of the point cloud data closest to the origin as the color information of the sub-sphere corresponding to the spherical center angle of the directional ray.
In the embodiment of the application, the 360° three-dimensional space is divided by a number of direction rays, and the sphere-center angle corresponding to each ray has the same size, namely 360°/N, where N is the number of direction rays. When a direction ray intersects the target point cloud data closest to the origin, the illumination at the target position can be estimated from that data, so its color information is used as the color information of the sub-sphere corresponding to that ray's sphere-center angle.
S2045, determining whether the directional ray intersects one of the surfaces in the set of surfaces, if so, executing S2046, otherwise, executing S2047.
When the direction ray does not intersect with the target point cloud data closest to the origin, it may be determined whether the direction ray intersects with one of the surfaces in the surface set, thereby estimating illumination information of the target position from the target point cloud data on the surface.
And S2046, interpolating the color information of the target point cloud data in the preset range on the surface and the color information of the intersecting point cloud data, and taking the color information after interpolation as the color information of the sub-sphere corresponding to the spherical center angle of the directional ray.
When the direction ray intersects one surface in the surface set, an optional implementation mode is that color information of target point cloud data adjacent to the intersecting point cloud data on the surface is interpolated with the color information of the intersecting point cloud data, and the interpolated color information is used as color information of a sub-sphere corresponding to a spherical center angle of the direction ray.
And S2047, interpolating the color information of the sub-sphere adjacent to the rest sub-spheres with no color value on the sphere to obtain the color information of the rest sub-spheres.
In the embodiment of the present application, when a direction ray intersects neither the target point cloud data closest to the origin in the cuboid nor any surface in the surface set, the sub-sphere corresponding to its sphere-center angle is not assigned color information; such sub-spheres are called remaining sub-spheres. For each remaining sub-sphere, the color information of its adjacent sub-spheres is interpolated, and the interpolated result is used as its color information.
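The neighbor interpolation of S2047 can be sketched as iterative hole filling on the sphere grid. The grid representation and the 4-neighbor averaging are assumptions for illustration; the patent does not fix a specific interpolation scheme:

```python
import numpy as np

def fill_missing(colors):
    """Fill sub-sphere cells without a color from their neighbors.

    colors: (H, W, 3) float array over the sphere grid (rows = polar
    angle, columns = azimuth); NaN marks a cell no ray assigned (the
    "remaining sub-spheres"). Each pass replaces a NaN cell with the
    mean of its valid 4-neighbors, repeating until all are filled.
    Assumes a cell's channels are either all set or all NaN.
    """
    colors = colors.copy()
    while np.isnan(colors).any():
        missing = np.isnan(colors[..., 0])
        filled = colors.copy()
        for i, j in zip(*np.where(missing)):
            neigh = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, (j + dj) % colors.shape[1]  # azimuth wraps
                if 0 <= ni < colors.shape[0] and not np.isnan(colors[ni, nj, 0]):
                    neigh.append(colors[ni, nj])
            if neigh:
                filled[i, j] = np.mean(neigh, axis=0)
        if np.array_equal(np.isnan(filled), np.isnan(colors)):
            break  # nothing fillable remains (fully isolated region)
        colors = filled
    return colors
```

Wrapping the azimuth index reflects that the leftmost and rightmost columns of the panorama are adjacent on the sphere.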
S2048, obtaining a panoramic image corresponding to the spherical surface with the three-dimensional coordinates as the sphere center and the preset length as the radius according to the color information of each sub-spherical surface determined by the plurality of direction rays, and taking the panoramic image as an illumination information image corresponding to the target position.
In the embodiment of the application, the sub-spheres form a complete sphere centered on the three-dimensional coordinates with the preset length as radius. Once color information has been assigned to the sub-sphere of each direction ray, the panorama corresponding to the sphere is obtained and used as the illumination information graph of the target position, so that the illumination at the target position is estimated in every direction and the illumination effect of the virtual object placed there is improved.
And S205, rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object in a superposition manner on a real picture.
Wherein, the rendering process of the virtual object is shown in fig. 5:
S2051, taking the illumination information graph as a sky box surrounding the virtual object, and setting the sky box to be invisible.
In the embodiment of the application, the Unity engine on the display device is configured in advance, and the sky box is set to invisible by checking the "invisible" option, so that it does not occlude the real picture. The sky box has six faces (up, down, left, right, front, back); mapping the illumination information graph onto the sky box surrounding the virtual object yields the illumination texture in every direction at the target position, so that the shadows and reflections on the virtual object's surface can be determined accurately, improving the realism of the lighting.
S2052, rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
Generally, a virtual object has an irregular shape, and different virtual objects have different surface shapes, whereas the sky box is a regular hexahedron that can enclose the virtual object. Rendering with the sky box therefore applies the illumination textures of its six directions to virtual objects of any shape, improving compatibility.
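The shading lookup that the sky box effectively performs can be illustrated in isolation: a reflection direction on the virtual object's surface indexes into the surrounding illumination graph. The equirectangular layout (rows = polar angle, columns = azimuth) and nearest-cell sampling below are assumptions for illustration, not Unity's actual sampling:

```python
import numpy as np

def sample_panorama(pano, direction):
    """Look up the illumination color stored for a given direction.

    pano:      (H, W, 3) equirectangular illumination graph.
    direction: 3-vector; normalized internally.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    h, w = pano.shape[:2]
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))     # polar angle in [0, pi]
    phi = np.arctan2(d[1], d[0]) % (2.0 * np.pi)    # azimuth in [0, 2*pi)
    # Nearest-cell lookup; clamp to the last row/column at the poles/seam.
    i = min(int(theta / np.pi * h), h - 1)
    j = min(int(phi / (2.0 * np.pi) * w), w - 1)
    return pano[i, j]
```

With a graph whose upper half is bright and lower half dark (as at position A vs. B in fig. 1), an upward reflection direction samples a bright color and a downward one a dark color, producing the position-dependent shading the section describes.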
When a virtual object is rendered and displayed with the method provided by the embodiments of the application, the illumination information graph at the placement position can be estimated accurately from that spatial position. Setting the graph as the sky box in the Unity engine changes the surface brightness of the virtual object, and when the object has a specular-reflection material, it reflects the picture in the illumination information graph, achieving the effect of reflecting the real environment. This improves the illumination consistency between the virtual object and the real environment, makes virtual-real fusion more realistic, and benefits the AR experience.
Referring to fig. 6, an AR effect obtained by the method of an embodiment of the present application is shown: the teapot is the added virtual object, and its surface brightness and shadows are consistent with the illumination of the real environment, improving the realism of the picture.
Based on the same technical concept, the embodiment of the application provides a display device, which supports an AR function, can realize the method steps of displaying a virtual object based on illumination of a spatial position in the embodiment, and can achieve the same technical effect.
Referring to fig. 7, the display device comprises a processor 701, a memory 702, a camera 703 and a display screen 704, wherein the display screen 704, the camera 703 and the memory 702 are connected with the processor 701 through a bus 705;
The camera 703 is used for collecting real pictures;
The memory 702 stores a computer program, and the processor 701 performs the following operations according to the computer program stored in the memory 702:
in response to an operation of placing a virtual object on the real picture displayed on the display screen 704, constructing a three-dimensional point cloud data set corresponding to the real picture using the simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid centered on the three-dimensional coordinates with preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object superimposed on the real picture through the display screen 704.
Optionally, the processor 701 determines, according to the target point cloud data set, an illumination information map corresponding to the target position, and specifically includes:
determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein each surface in the set is a plane or a curved surface;
uniformly dividing a plurality of direction rays in the three-dimensional space with the three-dimensional coordinates as the origin;
for each direction ray, if the ray intersects the target point cloud data closest to the origin in the cuboid, using the color information of that closest data as the color information of the sub-sphere corresponding to the ray's sphere-center angle; or, if the ray intersects one surface in the surface set, interpolating the color information of the target point cloud data within a preset range on that surface with the color information of the intersected point cloud data, and using the interpolated color information as the color information of the sub-sphere corresponding to the ray's sphere-center angle;
according to the color information of each sub-sphere determined by the plurality of direction rays, obtaining a panorama corresponding to the sphere centered on the three-dimensional coordinates with the preset length as radius, and using the panorama as the illumination information graph corresponding to the target position.
Optionally, when there are remaining sub-spheres of the sphere to which color information is not given, the processor 701 further performs the following operations:
Interpolating the color information of the sub-spheres adjacent to the remaining sub-spheres on the sphere to obtain the color information of the remaining sub-spheres.
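A minimal sketch of this neighbor interpolation, treating the panoramic map as a grid of sub-spheres whose azimuth wraps around the sphere (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def fill_missing(pano, filled_mask):
    """Fill sub-spheres that received no color by averaging the colors of
    adjacent sub-spheres on the sphere.

    pano        : (H, W, 3) panoramic illumination map
    filled_mask : (H, W) boolean, True where a direction ray assigned a color
    """
    out = pano.copy()
    h, w = filled_mask.shape
    for row in range(h):
        for col in range(w):
            if filled_mask[row, col]:
                continue
            acc, n = np.zeros(3), 0
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                r = row + dr
                c = (col + dc) % w          # azimuth wraps around the sphere
                if 0 <= r < h and filled_mask[r, c]:
                    acc += pano[r, c]
                    n += 1
            if n:
                out[row, col] = acc / n
    return out
```

In practice this pass could be repeated until every sub-sphere has a color.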
Optionally, the processor 701 renders the virtual object placed at the target position according to the illumination information map, and specifically includes:
Taking the illumination information map as a sky box surrounding the virtual object, and setting the sky box to be invisible so as not to occlude the real picture, wherein the sky box comprises illumination textures in six directions: up, down, left, right, front and back;
Rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
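One common way to obtain the six sky-box textures is to resample the panoramic illumination map into cube faces. The sketch below assumes an equirectangular panorama and a conventional cube-map basis; both the face basis vectors and the nearest-neighbor sampling are our assumptions, not specified by the patent.

```python
import numpy as np

def panorama_to_skybox(pano, face_size=16):
    """Slice a panoramic illumination map into six sky-box face textures
    (up/down/left/right/front/back). Illustrative sketch only."""
    h, w, _ = pano.shape
    # Outward axis and in-face basis vectors for each cube face.
    bases = {
        'front': ((0, 0, -1), (1, 0, 0), (0, -1, 0)),
        'back':  ((0, 0, 1), (-1, 0, 0), (0, -1, 0)),
        'left':  ((-1, 0, 0), (0, 0, -1), (0, -1, 0)),
        'right': ((1, 0, 0), (0, 0, 1), (0, -1, 0)),
        'up':    ((0, 1, 0), (1, 0, 0), (0, 0, 1)),
        'down':  ((0, -1, 0), (1, 0, 0), (0, 0, -1)),
    }
    faces = {}
    for name, (fwd, right, up) in bases.items():
        fwd, right, up = map(np.array, (fwd, right, up))
        face = np.zeros((face_size, face_size, 3))
        for i in range(face_size):
            for j in range(face_size):
                # Pixel center in [-1, 1] on the face plane.
                u = 2 * (j + 0.5) / face_size - 1
                v = 2 * (i + 0.5) / face_size - 1
                d = fwd + u * right + v * up
                d = d / np.linalg.norm(d)
                theta = np.arccos(np.clip(d[1], -1, 1))     # polar angle
                phi = np.arctan2(d[2], d[0]) % (2 * np.pi)  # azimuth
                row = min(int(theta / np.pi * h), h - 1)
                col = min(int(phi / (2 * np.pi) * w), w - 1)
                face[i, j] = pano[row, col]
        faces[name] = face
    return faces
```

A rendering engine would then bind these six textures to an invisible sky box and use them as the image-based lighting environment for the virtual object.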
Optionally, the processor 701 determines the three-dimensional coordinates of the target position where the virtual object is placed in the three-dimensional space, specifically including:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
Emitting a ray from the two-dimensional coordinates in the direction perpendicular to the display screen, and determining the three-dimensional coordinates, in the three-dimensional space, of the intersection point of the ray and the virtual screen corresponding to the display screen as the three-dimensional coordinates of the target position.
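This 2D-to-3D mapping might be sketched as follows, assuming the virtual screen is a plane at a fixed distance in front of the camera with known half-extents in world units; this parameterisation is ours, not the patent's.

```python
import numpy as np

def screen_to_world(px, py, screen_w, screen_h, cam_pos,
                    right, up, forward, plane_dist, half_w, half_h):
    """Map a 2D placement point to 3D: a ray is emitted from the pixel
    perpendicular to the display screen (along the camera's forward axis),
    and its intersection with the virtual screen plane at distance
    `plane_dist` is the target position. Illustrative sketch only."""
    # Normalised device coordinates in [-1, 1], y pointing up on screen.
    ndc_x = 2.0 * px / screen_w - 1.0
    ndc_y = 1.0 - 2.0 * py / screen_h
    r, u, f = (np.asarray(v, dtype=float) for v in (right, up, forward))
    # Offset within the virtual screen plane, plus depth along the forward axis.
    return (np.asarray(cam_pos, dtype=float)
            + ndc_x * half_w * r + ndc_y * half_h * u + plane_dist * f)
```

For example, the screen center maps to the point directly in front of the camera at the virtual screen's distance.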
The processor referred to in fig. 7 of the embodiments of the present application may be a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a graphics processor (Graphics Processing Unit, GPU), a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor. The memory may be integrated into the processor or may be provided separately from the processor.
It should be noted that fig. 7 is only an example, showing the hardware necessary for a display device with an AR function to execute the steps of the method for displaying a virtual object based on illumination of a spatial position provided by the embodiments of the present application. Hardware not shown, such as common human-computer interaction components (a speaker, a microphone, a mouse, a keyboard, and so on), may also be included in the display device.
Embodiments of the present application also provide a computer readable storage medium storing instructions that, when executed, perform the method of the previous embodiments.
The embodiments of the present application also provide a computer program product storing a computer program for executing the method of the foregoing embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A method for displaying a virtual object based on illumination of a spatial position, applied to an augmented reality (AR) scene, comprising:
responding to an operation of placing a virtual object on a real picture, constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technology;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
Acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid that is centered on the three-dimensional coordinates and has a preset length, width and height as its sides;
determining an illumination information map corresponding to the target position according to the target point cloud data set;
Rendering the virtual object placed at the target position according to the illumination information map, and superimposing the virtual object on the real picture for display;
Wherein the determining an illumination information map corresponding to the target position according to the target point cloud data set includes:
Determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein each surface in the surface set is a plane or a curved surface;
Uniformly distributing a plurality of direction rays in the three-dimensional space with the three-dimensional coordinates as the origin;
For each direction ray: if the direction ray intersects target point cloud data in the cuboid, taking the color information of the intersected target point cloud data closest to the origin as the color information of the sub-sphere corresponding to the sphere-center angle of the direction ray; or, if the direction ray intersects a surface in the surface set, interpolating the color information of the target point cloud data within a preset range on the surface with the color information of the intersected point cloud data, and taking the interpolated color information as the color information of the sub-sphere corresponding to the sphere-center angle of the direction ray;
Obtaining, according to the color information of each sub-sphere determined by the plurality of direction rays, a panoramic image corresponding to a sphere centered on the three-dimensional coordinates with a preset length as its radius, and taking the panoramic image as the illumination information map corresponding to the target position.
2. The method of claim 1, wherein when there are remaining sub-spheres in the sphere to which no color information has been assigned, the method further comprises:
Interpolating the color information of the sub-spheres adjacent to the remaining sub-spheres on the sphere to obtain the color information of the remaining sub-spheres.
3. The method of claim 1, wherein the rendering a virtual object placed at the target position according to the illumination information map comprises:
Taking the illumination information map as a sky box surrounding the virtual object, and setting the sky box to be invisible so as not to occlude the real picture, wherein the sky box comprises illumination textures in six directions: up, down, left, right, front and back;
Rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
4. The method of claim 1, wherein the determining three-dimensional coordinates of the target location in three-dimensional space for the virtual object placement comprises:
determining two-dimensional coordinates of a target position where the virtual object is placed on a display screen;
Emitting a ray from the two-dimensional coordinates in the direction perpendicular to the display screen, and determining the three-dimensional coordinates, in the three-dimensional space, of the intersection point of the ray and the virtual screen corresponding to the display screen as the three-dimensional coordinates of the target position.
5. A display device, characterized in that the display device supports an augmented reality (AR) function and comprises a processor, a memory, a camera and a display screen, wherein the display screen, the camera and the memory are connected with the processor through a bus;
the camera is used for collecting real pictures;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
Responding to an operation of placing a virtual object on a real picture displayed by the display screen, constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technology;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
Acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid that is centered on the three-dimensional coordinates and has a preset length, width and height as its sides;
determining an illumination information map corresponding to the target position according to the target point cloud data set;
rendering the virtual object placed at the target position according to the illumination information map, and superimposing the virtual object on the real picture for display through the display screen;
Wherein the processor determines the illumination information map corresponding to the target position according to the target point cloud data set, specifically including:
Determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein each surface in the surface set is a plane or a curved surface;
Uniformly distributing a plurality of direction rays in the three-dimensional space with the three-dimensional coordinates as the origin;
For each direction ray: if the direction ray intersects target point cloud data in the cuboid, taking the color information of the intersected target point cloud data closest to the origin as the color information of the sub-sphere corresponding to the sphere-center angle of the direction ray; or, if the direction ray intersects a surface in the surface set, interpolating the color information of the target point cloud data within a preset range on the surface with the color information of the intersected point cloud data, and taking the interpolated color information as the color information of the sub-sphere corresponding to the sphere-center angle of the direction ray;
Obtaining, according to the color information of each sub-sphere determined by the plurality of direction rays, a panoramic image corresponding to a sphere centered on the three-dimensional coordinates with a preset length as its radius, and taking the panoramic image as the illumination information map corresponding to the target position.
6. The display device of claim 5, wherein when there are remaining sub-spheres in the sphere to which color information is not assigned, the processor further performs the operations of:
Interpolating the color information of the sub-spheres adjacent to the remaining sub-spheres on the sphere to obtain the color information of the remaining sub-spheres.
7. The display device of claim 5, wherein the processor renders the virtual object placed at the target position according to the illumination information map, specifically by:
Taking the illumination information map as a sky box surrounding the virtual object, and setting the sky box to be invisible so as not to occlude the real picture, wherein the sky box comprises illumination textures in six directions: up, down, left, right, front and back;
Rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
8. The display device of claim 5, wherein the processor determines three-dimensional coordinates of a target location in three-dimensional space for placement of the virtual object, in particular by:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
Emitting a ray from the two-dimensional coordinates in the direction perpendicular to the display screen, and determining the three-dimensional coordinates, in the three-dimensional space, of the intersection point of the ray and the virtual screen corresponding to the display screen as the three-dimensional coordinates of the target position.
CN202210206906.3A 2022-03-04 2022-03-04 Method and device for displaying virtual objects based on spatial position illumination Active CN114663632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206906.3A CN114663632B (en) 2022-03-04 2022-03-04 Method and device for displaying virtual objects based on spatial position illumination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206906.3A CN114663632B (en) 2022-03-04 2022-03-04 Method and device for displaying virtual objects based on spatial position illumination

Publications (2)

Publication Number Publication Date
CN114663632A CN114663632A (en) 2022-06-24
CN114663632B true CN114663632B (en) 2025-09-23

Family

ID=82028202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206906.3A Active CN114663632B (en) 2022-03-04 2022-03-04 Method and device for displaying virtual objects based on spatial position illumination

Country Status (1)

Country Link
CN (1) CN114663632B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330805B (en) * 2022-10-17 2023-03-24 江苏贯森新材料科技有限公司 Laser radar-based method for detecting abrasion of high-voltage cable protective layer at metal bracket
CN117953132A (en) 2022-10-19 2024-04-30 北京字跳网络技术有限公司 Display method, device and electronic equipment
CN115631291B * 2022-11-18 2023-03-10 如你所视(北京)科技有限公司 Real-time relighting method, apparatus, device and medium for augmented reality

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110503711A * 2019-08-22 2019-11-26 三星电子(中国)研发中心 Method and device for rendering virtual objects in augmented reality

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN111833423A (en) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 Presentation method, presentation device, presentation equipment and computer-readable storage medium
WO2022040970A1 (en) * 2020-08-26 2022-03-03 南京翱翔信息物理融合创新研究院有限公司 Method, system, and device for synchronously performing three-dimensional reconstruction and ar virtual-real registration
CN113552942B (en) * 2021-07-14 2024-08-30 海信视像科技股份有限公司 Method and equipment for displaying virtual object based on illumination intensity

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN110503711A * 2019-08-22 2019-11-26 三星电子(中国)研发中心 Method and device for rendering virtual objects in augmented reality

Also Published As

Publication number Publication date
CN114663632A (en) 2022-06-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant