Detailed Description
To make the objects, embodiments and advantages of the present application more apparent, exemplary embodiments of the present application are described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the application are shown. It should be understood that the exemplary embodiments described are merely some, but not all, of the embodiments of the application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the appended claims. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the other aspects.
In AR scenes, extracting ambient light is particularly important for realistically placing a simulated virtual object on a real picture. The virtual object should have an illumination effect consistent with the real environment, which requires the AR device to be able to sense the ambient illumination: the surface of the virtual object should be darker in a darker environment and brighter in a brighter environment, and when the virtual object has a specular reflection material, its surface should reflect the picture of the real scene.
In a real environment, the illumination information received by an object often varies with its position. As shown in fig. 1, position A is located above the desktop, where the lighting is bright, while position B is located below the desktop, where the lighting is dark. To give a virtual object in the AR scene a more realistic AR fusion effect, the virtual object is required to have different illumination information at different positions.
For example, when the virtual object is placed at position A, its surface should be brighter, and if the virtual object is made of a specular reflection material it should show the reflection effect at position A; when the virtual object is placed at position B, its surface should be darker, and a specular reflection material should show the reflection effect at position B.
The method and device provided by the present application for displaying a virtual object with illumination based on spatial position enable the illumination effect of the virtual object to change with its placement position. Specifically, when a user places a virtual object, the illumination information of the placement position is determined according to the point cloud data in the three-dimensional space around the virtual object. This illumination information gives the virtual object the correct brightness change, as well as the correct reflection effect when its surface is made of a specular reflection material.
Referring to fig. 2, a flowchart of a method for displaying a virtual object with illumination based on spatial position according to an embodiment of the present application is shown. The method is implemented by a display device supporting the AR function, including but not limited to a smart phone, a tablet, a laptop, a smart TV, a desktop computer, a vehicle-mounted device and a wearable device, and mainly includes the following steps:
S201, in response to an operation of placing a virtual object on the real picture, constructing a three-dimensional point cloud data set corresponding to the real picture by using the SLAM technique.
In an alternative embodiment, taking the display device as a tablet as shown in fig. 3, a user opens an AR application, which turns on the camera to capture and display a color picture of the real environment. During display, the application detects whether the user touches or clicks the display screen; when such a touch or click is detected, the touched or clicked position is used as the target position for placing a virtual object in the real picture, and a prompt pops up asking the user whether to place the virtual object. When the user selects the 'yes' option, the AR application uses the SLAM (Simultaneous Localization and Mapping) technique to construct a three-dimensional point cloud data set corresponding to the real picture, where each point cloud datum in the set comprises the three-dimensional coordinates, color information and intensity information of a corresponding point on a real object in three-dimensional space.
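For illustration only, the following sketch shows one possible in-memory representation of such a point cloud datum, assuming Python with numpy; the names are illustrative, not a fixed API, and the SLAM reconstruction that produces the points is not shown.

```python
# Illustrative sketch: one possible record for a point cloud datum, holding
# the three-dimensional coordinates, color information and intensity
# information described above. Names are assumptions of this sketch.
from dataclasses import dataclass
import numpy as np

@dataclass
class CloudPoint:
    position: np.ndarray   # (3,) three-dimensional coordinates in world space
    color: np.ndarray      # (3,) RGB color information from the camera frame
    intensity: float       # scalar intensity information of the point

def make_point(x, y, z, r, g, b, intensity):
    """Pack one reconstructed point into the record used by later steps."""
    return CloudPoint(np.array([x, y, z], dtype=float),
                      np.array([r, g, b], dtype=float),
                      float(intensity))
```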
S202, determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space.
In the embodiment of the application, the real picture captured by the camera of the display device is displayed on a two-dimensional display screen. The user selects the target position for placing the virtual object on the real picture by touching or clicking the display screen, and the display device determines the two-dimensional coordinates of the target position on the display screen from the touch or click. In addition, while displaying the real picture, the AR application establishes a virtual screen corresponding to the display screen in the three-dimensional space. After the two-dimensional coordinates are determined, a ray is cast from the two-dimensional coordinates along the direction perpendicular to the display screen, and the three-dimensional coordinates of the intersection point of the ray and the virtual screen are taken as the three-dimensional coordinates of the target position.
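As a minimal geometric sketch of this step (illustration only, with both the display screen and the virtual screen modelled as planes; all names and conventions are assumptions of the sketch):

```python
# Sketch of S202 under simplifying assumptions: the display screen is a
# plane with an origin, two in-plane pixel basis vectors and a normal; the
# virtual screen is a second plane given by a point and a normal.
import numpy as np

def touch_to_3d(touch_xy, screen_origin, screen_u, screen_v, screen_normal,
                virtual_point, virtual_normal):
    """Cast a ray from the touched point perpendicular to the display
    screen and return its intersection with the virtual screen plane."""
    # Two-dimensional touch coordinates -> starting point on the screen plane.
    start = screen_origin + touch_xy[0] * screen_u + touch_xy[1] * screen_v
    d = screen_normal / np.linalg.norm(screen_normal)
    denom = np.dot(virtual_normal, d)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the virtual screen: no intersection
    t = np.dot(virtual_normal, virtual_point - start) / denom
    return start + t * d  # three-dimensional coordinates of the target position
```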
S203, acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid centered on the three-dimensional coordinates with preset length, width and height as side lengths.
In general, the three-dimensional point cloud data set covers all areas of the real picture, while the virtual object is placed only in a local area of it; therefore, a cuboid can be constructed centered on the three-dimensional coordinates with the preset length, width and height as side lengths. The preset length, width and height can be set according to empirical values or the virtual scene; the smaller the volume of the cuboid, the shorter the computation time for estimating illumination. Optionally, the preset length, width and height are greater than the length, width and height of the virtual object. The point cloud data contained in the cuboid are then obtained from the point cloud data set of the real picture, yielding the point cloud of the real objects around the virtual object and improving the accuracy of the illumination estimation.
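A minimal sketch of this cropping step, assuming the point coordinates are stacked in an (N, 3) numpy array:

```python
# Sketch of S203: keep only the points inside an axis-aligned cuboid
# centered on the target position, with the preset length, width and
# height as side lengths.
import numpy as np

def crop_cuboid(points, center, length, width, height):
    half = np.array([length, width, height]) / 2.0
    lo, hi = center - half, center + half
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return mask  # index into positions/colors/intensities for the target set
```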
S204, determining an illumination information graph corresponding to the target position according to the target point cloud data set.
In S204, each target point cloud datum in the target point cloud data set comprises the three-dimensional coordinates, color information and intensity information of a corresponding point on a real object around the virtual object to be placed, so the ambient illumination at the target position can be estimated accurately, improving the illumination consistency between the virtual object placed at the target position and the real environment. The specific estimation process of the illumination information is shown in fig. 4:
S2041, determining at least one surface contained in the target point cloud data set to obtain a surface set.
In an alternative embodiment, a random sample consensus (Random Sample Consensus, RANSAC) algorithm is used to fit the target point cloud data set, and a surface set is generated from the at least one fitted surface. The fitted surfaces include planes or curved surfaces, and different types of surfaces correspond to different fitting equations.
It should be noted that the surface set may also be obtained by an implicit function method or a triangulation method, and the application is not limited in this regard.
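As an illustration of the RANSAC option mentioned above, the following self-contained sketch fits a single plane to the target point cloud data set; the iteration count and distance threshold are illustrative, a curved surface would use a different fitting equation, and applying the routine repeatedly after removing inliers yields a surface set with several surfaces.

```python
# Sketch of S2041 for planar surfaces: RANSAC plane fitting over the
# target point cloud. `points` is an (N, 3) array of coordinates.
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, seed=None):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0 with the
    most inliers among randomly sampled 3-point candidates."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample, try again
        n = n / norm
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```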
S2042, uniformly dividing the three-dimensional space into a plurality of direction rays with the three-dimensional coordinates of the target position as the origin.
In S2042, with the three-dimensional coordinates of the target position as the origin, a reference line is selected and the 360° three-dimensional space is uniformly divided into a plurality of direction rays at a fixed angle. The more direction rays there are, the richer the illumination information and the more realistic the illumination effect on the surface of the virtual object.
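One fixed-angle division is sketched below for illustration; the step size, and hence the number of rays N, is a free parameter of the method.

```python
# Sketch of S2042: direction rays at a fixed angular step in both the
# polar (0..180 deg) and azimuth (0..360 deg) directions; each ray indexes
# one sub-sphere of the surrounding sphere.
import numpy as np

def direction_rays(step_deg=10.0):
    dirs = []
    for theta in np.deg2rad(np.arange(0.0, 180.0 + step_deg, step_deg)):  # polar
        for phi in np.deg2rad(np.arange(0.0, 360.0, step_deg)):           # azimuth
            dirs.append([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    # Note: the two poles are duplicated across phi; a production version
    # would deduplicate them.
    return np.array(dirs)  # unit vectors originating at the target position
```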
S2043, for each direction ray, determining whether the direction ray intersects the target point cloud data closest to the origin in the cuboid; if so, executing S2044, otherwise executing S2045.
In the embodiment of the application, the origin is the target position for placing the virtual object, and the target point cloud data in the cuboid are corresponding points on real objects, each comprising three-dimensional coordinates, color information and intensity information. From the three-dimensional coordinates of the target position and those of the target point cloud data in the cuboid, the target point cloud data closest to the origin can be determined; these data accurately reflect the illumination information of the target position. It is then determined whether the direction ray intersects the target point cloud data closest to the origin, and the illumination information corresponding to the target position is determined according to the intersection result.
S2044, taking the color information of the point cloud data closest to the origin as the color information of the sub-sphere corresponding to the spherical center angle of the direction ray.
In the embodiment of the application, the 360° three-dimensional space is divided by a plurality of direction rays, and the spherical center angle corresponding to each direction ray is the same, namely 360°/N, where N is the number of direction rays. When a direction ray intersects the target point cloud data closest to the origin, the illumination information of the target position can be estimated from those data; therefore, the color information of the point cloud data closest to the origin can be used as the color information of the sub-sphere corresponding to the spherical center angle of that direction ray.
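For illustration, the following sketch treats "intersects" as the nearest point lying within the spherical center angle of the ray (half of 360°/N used as an angular tolerance); this reading is an assumption of the sketch, not a limitation of the method.

```python
# Sketch of S2043/S2044: test whether a direction ray passes through the
# target point cloud data closest to the origin, and if so return that
# point's color as the sub-sphere color for this ray.
import numpy as np

def color_from_nearest_point(origin, ray_dir, points, colors, half_angle_rad):
    offsets = points - origin
    dists = np.linalg.norm(offsets, axis=1)
    nearest = np.argmin(dists)
    # Angle between the ray and the direction towards the nearest point.
    cos_a = np.dot(offsets[nearest] / dists[nearest], ray_dir)
    if np.arccos(np.clip(cos_a, -1.0, 1.0)) <= half_angle_rad:
        return colors[nearest]  # color of the sub-sphere for this ray
    return None  # no intersection: fall through to the surface test (S2045)
```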
S2045, determining whether the direction ray intersects one of the surfaces in the surface set; if so, executing S2046, otherwise executing S2047.
When the direction ray does not intersect the target point cloud data closest to the origin, it may be determined whether the direction ray intersects one of the surfaces in the surface set, so that the illumination information of the target position can be estimated from the target point cloud data on that surface.
S2046, interpolating the color information of the target point cloud data within a preset range on the surface with the color information of the intersecting point cloud data, and taking the interpolated color information as the color information of the sub-sphere corresponding to the spherical center angle of the direction ray.
When the direction ray intersects a surface in the surface set, in an optional implementation, the color information of the target point cloud data adjacent to the intersecting point cloud data on the surface is interpolated with the color information of the intersecting point cloud data, and the interpolated color information is used as the color information of the sub-sphere corresponding to the spherical center angle of the direction ray.
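A sketch of this interpolation for a planar surface follows (illustration only; the preset range `radius` and the inverse-distance weighting are assumed choices, not fixed by the method).

```python
# Sketch of S2045/S2046 for a plane n.x + d = 0: intersect the ray with
# the plane, then blend the colors of the surface points within a preset
# radius of the hit, weighted by inverse distance.
import numpy as np

def color_from_surface(origin, ray_dir, n, d, surf_points, surf_colors,
                       radius=0.05):
    denom = np.dot(n, ray_dir)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the surface
    t = -(np.dot(n, origin) + d) / denom
    if t <= 0:
        return None                      # surface lies behind the origin
    hit = origin + t * ray_dir           # intersecting point on the surface
    dist = np.linalg.norm(surf_points - hit, axis=1)
    near = dist < radius                 # target points in the preset range
    if not near.any():
        return None
    w = 1.0 / (dist[near] + 1e-6)        # inverse-distance interpolation
    return (surf_colors[near] * w[:, None]).sum(axis=0) / w.sum()
```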
S2047, interpolating the color information of the sub-spheres adjacent to the remaining sub-spheres that have no color value on the sphere, to obtain the color information of the remaining sub-spheres.
In the embodiment of the present application, when a direction ray neither intersects the target point cloud data closest to the origin in the cuboid nor intersects any surface in the surface set, the sub-sphere corresponding to the spherical center angle of that direction ray is not given color information; a sub-sphere to which no color information has been given is referred to as a remaining sub-sphere. For each remaining sub-sphere, the color information of its adjacent sub-spheres is interpolated, and the interpolated color information is used as the color information of the remaining sub-sphere.
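For illustration, if the sub-sphere colors are kept on a (polar, azimuth) grid matching the fixed-angle division of S2042, the remaining sub-spheres can be filled by neighbor interpolation as sketched below; NaN marks a sub-sphere without color, and the pass-until-stable strategy is an assumption of the sketch.

```python
# Sketch of S2047: each pass replaces a hole with the mean of its already
# colored neighbors (wrapping around in azimuth), until no holes remain
# or no further progress can be made.
import numpy as np

def fill_remaining(grid):
    """grid: (H, W, 3) array of sub-sphere colors; NaN rows are holes."""
    grid = grid.copy()
    while np.isnan(grid).any():
        holes = np.argwhere(np.isnan(grid[..., 0]))
        progressed = False
        for i, j in holes:
            neighbors = [grid[i, (j - 1) % grid.shape[1]],
                         grid[i, (j + 1) % grid.shape[1]]]
            if i > 0:
                neighbors.append(grid[i - 1, j])
            if i < grid.shape[0] - 1:
                neighbors.append(grid[i + 1, j])
            valid = [c for c in neighbors if not np.isnan(c[0])]
            if valid:
                grid[i, j] = np.mean(valid, axis=0)
                progressed = True
        if not progressed:
            break  # no colored neighbor anywhere: leave remaining holes
    return grid
```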
S2048, obtaining, from the color information of each sub-sphere determined by the plurality of direction rays, a panoramic image corresponding to the sphere centered on the three-dimensional coordinates with the preset length as radius, and taking the panoramic image as the illumination information map corresponding to the target position.
In the embodiment of the application, the sub-spheres together form a complete sphere centered on the three-dimensional coordinates with the preset length as radius. Color information is assigned to the sub-sphere corresponding to each direction ray, yielding the panorama corresponding to the sphere, and this panorama is used as the illumination information map corresponding to the target position. In this way the illumination information of the target position in every direction is estimated, improving the illumination effect of a virtual object placed at the target position.
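Since the (polar, azimuth) grid of sub-sphere colors is itself an equirectangular layout, one illustrative way to obtain the panoramic image is simply to resample that grid to the desired resolution (nearest-neighbor sampling is an assumption of this sketch):

```python
# Sketch of S2048: upsample the filled sub-sphere grid to a panorama that
# serves as the illumination information map for the target position.
import numpy as np

def panorama_from_grid(grid, out_h=256, out_w=512):
    H, W, _ = grid.shape
    rows = (np.linspace(0, 1, out_h, endpoint=False) * H).astype(int)
    cols = (np.linspace(0, 1, out_w, endpoint=False) * W).astype(int)
    return grid[rows][:, cols]  # (out_h, out_w, 3) panoramic image
```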
S205, rendering the virtual object placed at the target position according to the illumination information map, and displaying the virtual object superimposed on the real picture.
The rendering process of the virtual object is shown in fig. 5:
S2051, using the illumination information map as a sky box surrounding the virtual object, and setting the sky box to be invisible.
In the embodiment of the application, the Unity engine on the display device is configured in advance, and the sky box is set to be invisible by checking the 'invisible' option, so as to avoid occluding the real picture on the display device. The sky box comprises six faces: up, down, left, right, front and back. Mapping the illumination information map onto the sky box surrounding the virtual object yields the illumination textures of the virtual object in all directions at the target position, so that the shadows and reflection effects on the surface of the virtual object can be determined accurately, improving the realism of the illumination.
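The Unity-side configuration (assigning the six face textures to a skybox material and checking the 'invisible' option) is performed in the engine and is not shown here. As an illustration of the mapping itself, the following sketch samples one sky box face from the equirectangular illumination information map; the axis conventions and names are assumptions of the sketch.

```python
# Sketch: derive one sky-box face from the panorama by mapping every face
# pixel to a direction vector and sampling the panorama at the
# corresponding (polar, azimuth) angle.
import numpy as np

def cubemap_face(pano, face_axes, size=128):
    """face_axes = (right, up, forward) unit vectors of the face."""
    right, up, fwd = (np.asarray(a, dtype=float) for a in face_axes)
    H, W, _ = pano.shape
    face = np.zeros((size, size, 3))
    for i in range(size):
        for j in range(size):
            u = 2.0 * (j + 0.5) / size - 1.0
            v = 1.0 - 2.0 * (i + 0.5) / size
            d = fwd + u * right + v * up
            d /= np.linalg.norm(d)
            theta = np.arccos(np.clip(d[2], -1.0, 1.0))     # polar angle
            phi = np.arctan2(d[1], d[0]) % (2 * np.pi)      # azimuth
            face[i, j] = pano[min(int(theta / np.pi * H), H - 1),
                              min(int(phi / (2 * np.pi) * W), W - 1)]
    return face
```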
S2052, rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
Generally, the shape of a virtual object is irregular and the surface shapes of different virtual objects differ, whereas the sky box is a regular hexahedron that can surround the virtual object. Rendering the virtual object by means of the sky box therefore makes the illumination textures in the six directions of up, down, left, right, front and back applicable to virtual objects of different shapes, improving compatibility.
When the method provided by the embodiment of the application is used to render and display a virtual object, the illumination information map at the spatial position where the virtual object is placed can be estimated accurately. The illumination information map is set as the sky box in the Unity engine to change the surface brightness of the virtual object, and when the virtual object is made of a specular reflection material, the picture in the illumination information map can be reflected, achieving the effect of reflecting the real environment. This improves the illumination consistency between the virtual object and the real environment, makes the virtual-real fusion more realistic, and benefits the AR experience.
Referring to fig. 6, an AR effect diagram obtained by the method according to an embodiment of the present application is shown, in which the teapot is the added virtual object; the surface brightness and shadows of the teapot are consistent with the illumination intensity of the real environment, improving the realism of the picture.
Based on the same technical concept, an embodiment of the application provides a display device that supports the AR function, can implement the method steps of displaying a virtual object with illumination based on spatial position in the above embodiments, and can achieve the same technical effect.
Referring to fig. 7, the display device comprises a processor 701, a memory 702, a camera 703 and a display screen 704, wherein the display screen 704, the camera 703 and the memory 702 are connected with the processor 701 through a bus 705;
The camera 703 is used for collecting real pictures;
The memory 702 stores a computer program, and the processor 701 performs the following operations according to the computer program stored in the memory 702:
in response to an operation of placing a virtual object on a real picture displayed on the display screen 704, constructing a three-dimensional point cloud data set corresponding to the real picture by using the simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid centered on the three-dimensional coordinates with preset length, width and height as side lengths;
determining an illumination information map corresponding to the target position according to the target point cloud data set;
and rendering the virtual object placed at the target position according to the illumination information map, and displaying the virtual object superimposed on the real picture through the display screen 704.
Optionally, the processor 701 determines, according to the target point cloud data set, an illumination information map corresponding to the target position, and specifically includes:
Determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein the surface in the surface set is a plane or a curved surface;
Uniformly dividing a plurality of direction rays in the three-dimensional space by taking the three-dimensional coordinates as an origin;
For each direction ray, if the direction ray intersects with the target point cloud data closest to the origin in the cuboid, color information of the closest target point cloud data is used as color information of a sub-sphere corresponding to a spherical center angle of the direction ray, or if the direction ray intersects with one surface in the surface set, color information of the target point cloud data in a preset range on the surface and color information of the intersecting point cloud data are interpolated, and the color information after interpolation is used as color information of the sub-sphere corresponding to the spherical center angle of the direction ray;
According to the color information of each sub-sphere determined by the plurality of direction rays, obtaining a panoramic image corresponding to a sphere centered on the three-dimensional coordinates with a preset length as radius, and taking the panoramic image as the illumination information map corresponding to the target position.
Optionally, when there are remaining sub-spheres of the sphere to which color information is not given, the processor 701 further performs the following operations:
interpolating the color information of the sub-spheres adjacent to the remaining sub-spheres on the sphere to obtain the color information of the remaining sub-spheres.
Optionally, the processor 701 renders the virtual object placed at the target position according to the illumination information map, and specifically includes:
Using the illumination information map as a sky box surrounding the virtual object, and setting the sky box invisible to avoid occluding the real picture, wherein the sky box comprises illumination textures in the six directions of up, down, left, right, front and back;
And rendering the virtual object placed at the target position according to the illumination textures of the sky box in all directions.
Optionally, the processor 701 determines three-dimensional coordinates of the target position where the virtual object is placed in the three-dimensional space, specifically including:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
And transmitting a ray along the direction perpendicular to the display screen by taking the two-dimensional coordinate as a starting point, and determining the three-dimensional coordinate of the intersection point of the ray and the virtual screen corresponding to the display screen in the three-dimensional space as the three-dimensional coordinate of the target position.
The processor referred to in fig. 7 of the embodiments of the present application may be a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a graphics processor (Graphics Processing Unit, GPU), a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, transistor logic device, hardware component, or any combination thereof, and may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. A processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The memory may be integrated into the processor or provided separately from the processor.
It should be noted that fig. 7 is only an example, showing the hardware necessary for the display device with the AR function to execute the steps of the method for displaying a virtual object with illumination based on spatial position provided by the embodiment of the present application. Although not shown, the display device further includes common human-computer interaction hardware such as a speaker, a microphone, a mouse and a keyboard.
Embodiments of the present application also provide a computer readable storage medium storing instructions that, when executed, perform the method of the previous embodiments.
An embodiment of the application also provides a computer program product storing a computer program for executing the method of the previous embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.