CN111986086B - Three-dimensional image optimization generation method and system - Google Patents
- Publication number
- CN111986086B (application CN202010877012.8A)
- Authority
- CN
- China
- Prior art keywords
- point
- points
- scale
- homonymous
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The embodiment of the invention discloses a three-dimensional image optimization generation method and system, wherein panoramic images of all positions in a room are acquired; homonymous points describing the same space are obtained according to the point locations in the panoramic image at each position, and homonymous point combinations are generated; the pose and scale parameters of the homonymous points in the homonymous point combinations are iteratively optimized to obtain an optimized pose relationship and scale parameters; and a three-dimensional image of the room is formed by splicing according to the optimized pose relationship and scale parameters. In this scheme, the scale of the point clouds of adjacent point locations is optimized by a three-dimensional model scale optimization algorithm based on depth-image matching, so that all panoramic images of the room can be spliced into a three-dimensional image of the whole room.
Description
Technical Field
The invention relates to the technical field of computer three-dimensional space modeling, and in particular to a three-dimensional image optimization generation method and system.
Background
In the data acquisition stage of indoor three-dimensional modeling, data for different shooting point locations (including depth data and RGB image data) are collected using dedicated equipment. After data collection is finished, an RGBD image is obtained from the calibrated camera parameters, the RGB image and the depth image. RGB is an industry color standard in which various colors are obtained by varying the three color channels red (R), green (G) and blue (B) and superimposing them on one another; it covers almost all colors perceivable by human vision and is one of the most widely used color systems at present. A depth map is an image or image channel containing information about the distance from a viewpoint to the surfaces of scene objects. A depth map is similar to a grayscale image, except that each pixel value is the actual distance from the sensor to the object. The RGB image and the depth image are usually registered, so their pixels correspond one to one.
In the prior art, an RGBD (red, green, blue, depth) image can be converted, in combination with the camera pose, into the point cloud of a single camera point location. This requires finding the coordinates of the different shooting point locations in a global coordinate system, so that the point cloud data can be spliced into a point cloud model of the complete house.
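The conversion from a depth image plus camera pose to a world-frame point cloud can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes an equirectangular (panoramic) depth image, and the function name and axis conventions are my own.

```python
import numpy as np

def depth_panorama_to_point_cloud(depth, R, t):
    """Project an equirectangular depth panorama into a world-frame point cloud.

    depth : (H, W) array of metric distances from the camera center
    R, t  : camera pose (3x3 rotation, 3-vector translation) in world coordinates
    """
    h, w = depth.shape
    # Pixel grid -> spherical angles (longitude/latitude of each viewing ray)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    lon = (u / w) * 2.0 * np.pi - np.pi          # [-pi, pi)
    lat = np.pi / 2.0 - (v / h) * np.pi          # [pi/2, -pi/2]
    # Unit ray directions in the camera frame
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    pts_cam = dirs * depth[..., None]            # scale each ray by its depth
    # Rotate/translate into the global coordinate system
    return pts_cam.reshape(-1, 3) @ R.T + t
```

With the pose known for every shooting point location, the returned clouds share one global frame and can be concatenated.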
In the prior art, the structure line positions (wall-wall, wall-floor and wall-ceiling boundary lines) in an image are found by deep learning directly from an RGB color panoramic image, and an extremely simple three-dimensional model of the point location is then reconstructed algorithmically. That is, the depth information (depth image) of the extremely simple model of the current point location can be acquired. In this way, RGBD information can be generated by putting the color RGB image and the depth image inferred by the model into one-to-one correspondence, and extremely simple model point cloud data carrying color information can be recovered.
Unlike the traditional approach of acquiring point-location depth information with a structured-light camera and recovering the point-location point cloud, this method infers the positions of structure lines from an RGB color image alone. Owing to interference from conditions such as illumination, image resolution and object occlusion, errors in the inferred structure lines are difficult to avoid. Most importantly, objects appear large when near and small when far in the image, and the error this causes is not consistent in different directions. In other words, the scale error is anisotropic.
Specifically, suppose the room is large, long and narrow. In an image shot at one end of the room, a far room structure line (denoted A) occupies a small proportion of the image, so its recognition accuracy inevitably decreases; in an image shot in the middle of the room, the same structure line A occupies a relatively large proportion, and recognition accuracy is relatively good. The point cloud models recovered from the two images are therefore inconsistent in scale along the room's long axis (the long-side direction of the narrow room), while scale consistency along the short axis (the short-side direction) is better. When the point clouds are spliced, misalignment caused by the inconsistent scales of the two point clouds consequently occurs.
Disclosure of Invention
The embodiment of the invention aims to solve the technical problem that: the method and the system for generating the three-dimensional image in an optimized mode are provided, and the problem that scales are inconsistent when point cloud reconstruction is carried out on a single-point depth image obtained by panoramic image structure line identification in the prior art is solved.
According to an aspect of the present invention, there is provided a three-dimensional image optimization generation method, including:
acquiring panoramic images of all positions in a room;
according to the point positions in the panoramic image of each position, homonymy points describing the same space are obtained, and homonymy point combinations are generated;
respectively carrying out pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination to obtain an optimized pose relationship and scale parameters;
and splicing to form a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
Optionally, the obtaining, according to the point location in the panoramic image at each position, a homonymy point describing the same space, and generating a homonymy point combination specifically includes:
respectively acquiring all point positions in the panoramic image of each position;
performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described;
and when the two point locations are determined to describe the same space, generating a homonymous point combination according to the two point locations.
Optionally, the performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described includes:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
for each point P_i of the point cloud t-A, projecting it onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);
according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B;
when it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, rotating the point P_j of the point cloud B to the coordinate system of the point cloud A and projecting it onto the panoramic image corresponding to the point cloud A;
when it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i itself is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set.
Optionally, the method further comprises:
when the number of the homonymous points is larger than a preset threshold value, adding point positions in the homonymous point set into homonymous point combinations; and the preset threshold is set according to the ratio of the number of the same-name points to the number of the points in the corresponding point cloud.
Optionally, the performing pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination respectively to obtain an optimized pose relationship and scale parameters specifically includes:
and respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain the optimized pose relationship and scale parameters.
Optionally, respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy point in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy point, including:
adjusting the value of each matrix according to the following formula, and iteratively optimizing the pose relationship and scale parameters of the homonymy point:
wherein P_f and P_s are the two homonymous points in a homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the main directions of the point clouds of P_f and P_s; λ_1, λ_2 are penalty factors; D_⊥ is the distance of P_f from P_s along the normal vector direction; and n is the number of all homonymous points. The transformed terms denote P_f and P_s rotated and translated into the world coordinate system.
Optionally, the method further comprises:
and when the maximum number of iterations is reached, or the difference between two successive iterative optimization results is smaller than a preset threshold, using the pose and scale parameters of the homonymous points in the homonymous point combination as the optimized pose and scale parameters.
According to another aspect of the embodiments of the present invention, there is provided a three-dimensional image optimization generation system including:
the panoramic image acquisition unit is used for acquiring panoramic images of all positions in a room;
a homonymy point combination generating unit, configured to obtain homonymy points describing the same space according to point locations in the panoramic image at each of the positions, and generate a homonymy point combination;
the iterative optimization unit is used for respectively carrying out pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination to obtain an optimized pose relationship and scale parameters;
and the splicing unit is used for splicing to form a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
Optionally, the homonymy point combination generating unit is specifically configured to:
respectively acquiring all point positions in the panoramic image of each position; performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described; and when the two point locations are determined to describe the same space, generating a homonymous point combination according to the two point locations.
Optionally, the homonymy point combination generating unit is specifically configured to:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
for each point P_i of the point cloud t-A, projecting it onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);
according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B;
when it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, rotating the point P_j of the point cloud B to the coordinate system of the point cloud A and projecting it onto the panoramic image corresponding to the point cloud A;
when it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i itself is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set.
Optionally, the iterative optimization unit is specifically configured to:
and respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain the optimized pose relationship and scale parameters.
Optionally, the iterative optimization unit is further configured to:
adjusting the value of each matrix according to the following formula, and iteratively optimizing the pose relationship and scale parameters of the homonymy point:
wherein P_f and P_s are the two homonymous points in a homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the main directions of the point clouds of P_f and P_s; λ_1, λ_2 are penalty factors; D_⊥ is the distance of P_f from P_s along the normal vector direction; and n is the number of all homonymous points. The transformed terms denote P_f and P_s rotated and translated into the world coordinate system.
Optionally, the iterative optimization unit is further configured to:
and when the maximum number of iterations is reached, or the difference between two successive iterative optimization results is smaller than a preset threshold, using the pose and scale parameters of the homonymous points in the homonymous point combination as the optimized pose and scale parameters.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing a computer program for executing the method described above.
According to another aspect of the present invention, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method.
Based on the scheme provided by the embodiment of the invention, the method mainly comprises the following beneficial effects:
acquiring panoramic images of all positions in a room; obtaining homonymous points describing the same space according to the point locations in the panoramic image at each position, and generating homonymous point combinations; iteratively optimizing the pose and scale parameters of the homonymous points in the homonymous point combinations to obtain an optimized pose relationship and scale parameters; and splicing to form a three-dimensional image of the room according to the optimized pose relationship and scale parameters. In this scheme, the scale of the point clouds of adjacent point locations is optimized by a three-dimensional model scale optimization algorithm based on depth-image matching. Because the point cloud scale is inconsistent in the x, y and z directions, the scale optimization proposed here is an independent scale optimization in each of the x, y and z directions. Furthermore, all panoramic images of the room can be spliced to obtain a three-dimensional image of the whole room. With this scheme, room panoramic images are automatically spliced into three-dimensional images: a vectorized, extremely simple three-dimensional model of a room can be inferred from panoramic images of the room's point locations captured with a simple commercial panoramic camera, meeting the need for fast and simple whole-house reconstruction.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a schematic flowchart of a three-dimensional image optimization generation method according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a three-dimensional image optimization generation method according to another embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a three-dimensional image optimization generation system according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
According to the embodiment of the invention, the structural model of the room containing a point location can be reconstructed directly from a single RGB color panoramic image by deep learning: the input is an RGB panoramic image, and the output is the positions of the wall-ceiling, wall-floor and wall-wall intersection lines in that image.
In the existing technical scheme, structure lines are recognized in the panoramic image of a single shooting point location, the point cloud model of that single point location is reconstructed to obtain a depth image, and the point clouds of different point locations are spliced into a whole-house model. This scheme suffers from differing scales across the point clouds of different point locations: scale optimization must be performed on the model obtained at each point location before splicing can yield a model that meets engineering application requirements.
Meanwhile, the scale inconsistency of the model is not a simple point cloud scale inconsistency that can be described by a single parameter: the point cloud scale is independent in the x, y and z directions, so three parameters need to be optimized.
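The three independent parameters amount to a diagonal per-axis scale matrix S = diag(s_x, s_y, s_z). A minimal numpy illustration (the function name and sample values are my own, not from the patent):

```python
import numpy as np

def apply_anisotropic_scale(points, sx, sy, sz):
    """Scale a point cloud independently along x, y and z.

    A single scalar factor cannot correct a cloud that is, say, 10% too
    short along x only; a per-axis diagonal scale matrix can.
    """
    S = np.diag([sx, sy, sz])
    return points @ S.T

cloud = np.array([[2.0, 1.0, 1.0],
                  [4.0, 1.0, 1.0]])
# Stretch only the x axis, leaving y and z untouched
stretched = apply_anisotropic_scale(cloud, 1.1, 1.0, 1.0)
```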
Finally, in the optimization process, pose changes caused by optimization of different direction scales are also considered, so that the pose needs to be continuously re-estimated through a matching relation in the optimization process.
Fig. 1 shows a schematic flowchart of the three-dimensional image optimization generation method provided in this embodiment, wherein:
and step 11, acquiring panoramic images of all positions in the room.
Panoramic images of a room are usually captured by a panoramic camera photographing the room from multiple directions. The acquired panoramic images are typically in RGB format. Because a panoramic camera can only capture planar images, the acquired panoramic images are one or more two-dimensional planar images.
In one embodiment of the invention, a room needs to collect a plurality of panoramic images, and then the panoramic images are spliced to complete the model establishment of the whole room. However, a house usually includes a plurality of rooms, and panoramic images of different rooms also need to be spliced and then uniformly modeled.
And step 12, obtaining homonymy points describing the same space according to the point positions in the panoramic image of each position, and generating homonymy point combinations.
A three-dimensional image neural network is trained on the correspondence between panoramic images of previously known rooms and their corresponding three-dimensional images. Alternatively, it is trained on the correspondence between previously acquired panoramic images and the corresponding structure points and structure lines in the rooms.
In one embodiment of the invention, the trained neural network is used to infer, from the panoramic photos of the rooms, the points and lines related to the room structure. Using prior knowledge (the height of the camera above the floor), the three-dimensional absolute coordinates of the structure points and lines in space can be obtained through the projection relationship, and the point clouds of all walls, floors and ceilings can also be obtained.
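The use of the camera-height prior can be sketched for the simplest case, a floor point: the panorama pixel gives a viewing ray, and intersecting that ray with the floor plane at the known height below the camera yields an absolute 3D coordinate. This is a hedged sketch assuming an equirectangular panorama and a level camera; all names are illustrative.

```python
import numpy as np

def floor_point_from_pixel(u, v, width, height, camera_height):
    """Recover the 3D floor coordinate seen at panorama pixel (u, v).

    Assumes an equirectangular panorama, a level camera, and a known
    camera height above the floor (the prior knowledge mentioned above).
    Returns None for rays that do not descend toward the floor.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    d = np.array([np.cos(lat) * np.sin(lon),
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)])
    if d[1] >= 0:                      # ray points at or above the horizon
        return None
    s = -camera_height / d[1]          # intersect with plane y = -camera_height
    return s * d
```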
In an embodiment of the present invention, it is determined whether two point locations describe the same space, that is, whether two point clouds can be matched is determined by the number of the found homonymous points. Specifically, whether the depth images of the two point locations are homonymous points is judged by performing point-to-point matching on the depth images of the two point locations. The homonym points are combined into homonym point combinations in pairs.
In one embodiment of the invention, multiple panoramic images may be taken of various locations in a room. When the panoramic images are spliced, the overlapped parts of the two adjacent panoramic images need to be optimized. Because the direction, angle, size, etc. of each panoramic image are different, the direct splicing can have dislocation. Therefore, it is necessary to find the coincident points, i.e., the homonymous points, in the two panoramic images, and combine the homonymous points into a group of two-by-two homonymous points.
And step 13, respectively carrying out pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination to obtain an optimized pose relationship and scale parameters.
In one embodiment of the invention, the pose and scale parameters are iteratively optimized using Ceres: the pose parameters R_x, R_y, R_z, T_x, T_y, T_z and the scale parameters S_x, S_y, S_z. One optimization iteration optimizes the pose and the scale once each, and the optimization stops when the change in the scale and pose parameters is smaller than a threshold or the number of iterations reaches a set value.
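The alternate-then-check-convergence control flow described above can be illustrated with a 1D toy problem. This only mirrors the loop structure; the patent's actual problem is a full 3D pose plus per-axis scale solved with Ceres, and the closed-form updates below are my own stand-in.

```python
import numpy as np

def alternating_optimize(src, dst, max_iters=50, tol=1e-6):
    """Align 1D point sets dst ≈ s * src + t by alternating updates.

    Each iteration refines the pose (here just a translation t) and then
    the scale s, stopping when both parameter changes fall below `tol`
    or the iteration budget `max_iters` is exhausted.
    """
    s, t = 1.0, 0.0
    for _ in range(max_iters):
        s_prev, t_prev = s, t
        t = np.mean(dst - s * src)                    # best t for fixed s
        s = np.dot(src, dst - t) / np.dot(src, src)   # best s for fixed t
        if abs(s - s_prev) < tol and abs(t - t_prev) < tol:
            break
    return s, t

s, t = alternating_optimize(np.array([0.0, 1.0, 2.0, 3.0]),
                            np.array([1.0, 3.0, 5.0, 7.0]))
```

For this data (dst = 2·src + 1) the loop converges to s ≈ 2, t ≈ 1 well within the iteration budget.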
And step 14, splicing to form a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
The panoramic images are spliced according to their pose relationships and scale parameters to obtain a panoramic view of the whole room; this is then processed, in combination with the relative positions of the structure points and structure lines in the panoramic images, to obtain a three-dimensional image of the room.
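Once each point location has its optimized per-axis scale and pose, the splicing step reduces to transforming every cloud into the shared frame and concatenating. A minimal sketch (names are illustrative, not from the patent):

```python
import numpy as np

def stitch_clouds(clouds, scales, rotations, translations):
    """Apply each point location's optimized scale matrix S, rotation R and
    translation t, then concatenate all clouds into one room model."""
    out = []
    for pts, S, R, t in zip(clouds, scales, rotations, translations):
        out.append(pts @ (R @ S).T + t)   # world point = R * S * p + t
    return np.vstack(out)

model = stitch_clouds(
    [np.array([[1.0, 0.0, 0.0]]), np.array([[0.0, 1.0, 0.0]])],
    [np.eye(3), np.eye(3)],
    [np.eye(3), np.eye(3)],
    [np.zeros(3), np.array([0.0, 0.0, 1.0])],
)
```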
In an embodiment of the present invention, the obtaining, according to the point location in the panoramic image at each position, a homonymy point describing the same space, and generating a homonymy point combination specifically includes:
respectively acquiring all point positions in the panoramic image of each position;
performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described;
and when the two point locations are determined to describe the same space, generating a homonymous point combination according to the two point locations.
For the panoramic images at each position, it must be judged whether they are adjacent or share coincident points.
In an embodiment of the present invention, the performing point-by-point matching on the depth images corresponding to the two point locations to determine whether to describe the same space includes:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
for each point P_i of the point cloud t-A, projecting it onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);
according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B; that is, for each point P_i of the point cloud t-A, the corresponding point P_j in the point cloud B is obtained. The correspondence may be obtained by determining whether the two point locations are adjacent or whether they coincide.
When it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, the point P_j of the point cloud B is rotated to the coordinate system of the point cloud A and projected onto the panoramic image corresponding to the point cloud A. The preset angle Δα is set empirically and may, for example, be set to 45 degrees. The Euclidean distance ΔD is set according to the ratio of the Euclidean distance between P_i and P_j to the distance to the origin of coordinates, for example 0.1.
When it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i itself is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set. The preset pixel distance ΔL is set according to the ratio of the number of homonymous points to the total number of points in the point cloud, and may, for example, be set in the range of 0.06-0.1 pixel.
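The three acceptance conditions (normal angle below Δα, Euclidean distance below ΔD, reprojection pixel distance below ΔL) can be collected into one predicate. This is a sketch under stated assumptions: the ΔD handling as a ratio of the distance to the origin and all names are my own reading of the example values above.

```python
import numpy as np

DELTA_ALPHA = np.deg2rad(45.0)   # preset normal-vector angle, 45 degrees as in the text
DELTA_D_RATIO = 0.1              # Euclidean threshold as a ratio of range, 0.1 as in the text

def is_homonymous(p_i, n_i, p_j, n_j, reproj_px_dist, delta_l):
    """Accept (P_i, P_j) as homonymous points when all three tests pass:
    normal angle < DELTA_ALPHA, Euclidean distance < DELTA_D_RATIO * range,
    and reprojection pixel distance < delta_l."""
    cos_angle = np.dot(n_i, n_j) / (np.linalg.norm(n_i) * np.linalg.norm(n_j))
    if np.arccos(np.clip(cos_angle, -1.0, 1.0)) >= DELTA_ALPHA:
        return False
    delta_d = DELTA_D_RATIO * np.linalg.norm(p_i)   # threshold scaled by range
    if np.linalg.norm(p_i - p_j) >= delta_d:
        return False
    return reproj_px_dist < delta_l
```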
In an embodiment of the present invention, when the number of homonymous points is greater than a preset threshold, the point locations in the homonymous point set are added to a homonymous point combination; the preset threshold is set according to the ratio of the number of homonymous points to the number of points in the corresponding point cloud.
In one embodiment of the invention, the homonymous point extraction process is repeated in turn for the point clouds of every two point locations until the homonymous points of all point-location pairs are found. Since the optimization is global, the homonymous point combinations between all shot points need to be found and counted together for the subsequent optimization.
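The pairwise traversal described above can be sketched with a generic extraction callback. Here `extract_pairs` and the ratio-based threshold are assumed interfaces standing in for the homonymous-point extraction, not names from the patent.

```python
from itertools import combinations

def collect_homonymous_combinations(point_clouds, extract_pairs, min_ratio=0.05):
    """Repeat homonymous-point extraction for every pair of point
    locations; keep a pair only when it yields enough matches. The
    threshold is expressed as a ratio of matches to the smaller cloud's
    size, following the ratio-based threshold described in the text."""
    combos = {}
    for i, j in combinations(range(len(point_clouds)), 2):
        matches = extract_pairs(point_clouds[i], point_clouds[j])
        limit = min_ratio * min(len(point_clouds[i]), len(point_clouds[j]))
        if len(matches) > limit:
            combos[(i, j)] = matches
    return combos
```

Pairs below the threshold are dropped as lacking a sufficient constraint relationship; the surviving combinations feed the global optimization.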
In an embodiment of the present invention, the performing iterative optimization of pose and scale parameters on the homonymous points in the homonymous point combination respectively to obtain an optimized pose relationship and scale parameters specifically includes:
and respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain the optimized pose relationship and scale parameters.
Specifically, the values of the matrices can be adjusted according to the following formula (1) and formula (2), and the pose relationship and scale parameters of the homonymy point are iteratively optimized:
wherein P_f and P_s are two homonymous points in the homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the principal directions of the point clouds of P_f and P_s; λ1, λ2 are penalty factors; D⊥ is the distance of P_f from P_s along the normal vector direction; n is the number of all homonymous points, and i takes values from 0 to n; the transformed points denote P_f and P_s after rotation and translation to the world coordinate system.
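Formulas (1) and (2) themselves are not reproduced in the text; only the symbol definitions survive. The sketch below is therefore one plausible reading, not the patented formula: it assumes a squared point-to-point term plus a penalized point-to-plane term along the normal of P_s, with each point carrying its own per-axis scale matrix, rotation, and translation into the world frame.

```python
import numpy as np

def pair_residual(P_f, P_s, R_f, S_f, T_f, R_s, S_s, T_s, N_s,
                  lam1=1.0, lam2=1.0):
    """Hedged sketch of one residual for a homonymous pair. Both points
    are scaled (3x3 per-axis scale matrix S), rotated (R), and translated
    (T) into the world frame; the loss combines the squared point-to-point
    gap with a penalized squared point-to-plane distance D_perp along N_s.
    This is an assumption -- formulas (1) and (2) are not given here."""
    w_f = R_f @ (S_f @ P_f) + T_f      # P_f in world coordinates
    w_s = R_s @ (S_s @ P_s) + T_s      # P_s in world coordinates
    d = w_f - w_s
    d_perp = float(np.dot(d, N_s))     # distance along the normal of P_s
    return lam1 * float(np.dot(d, d)) + lam2 * d_perp ** 2
```

The per-axis scale matrix is the key element: it lets the optimizer correct inconsistent scales in the x, y, and z directions independently, as the embodiment emphasizes.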
In an embodiment of the invention, when the maximum iteration number is reached or the difference between two iteration optimization results is smaller than a preset threshold, the pose relationship and the scale parameter of the homonymous point in the homonymous point combination are used as the optimized pose relationship and the optimized scale parameter.
A Ceres optimization term is written according to the target loss function, and the termination conditions for stopping the optimization are set: the maximum number of iterations is reached, or the gap between the Loss terms of two successive optimizations is smaller than a threshold.
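The two termination conditions can be sketched as a generic iteration wrapper. This illustrates the stopping logic only; it does not reproduce the Ceres API, and the names are illustrative.

```python
def iterate_until_converged(step, init_loss, max_iters=100, tol=1e-6):
    """Run an optimization step until either stopping rule fires: the
    maximum iteration count is reached, or the gap between the losses of
    two successive iterations falls below the threshold tol."""
    loss = init_loss
    for it in range(1, max_iters + 1):
        new_loss = step()
        if abs(loss - new_loss) < tol:
            return new_loss, it
        loss = new_loss
    return loss, max_iters
```

In Ceres itself, the equivalent knobs are the solver's maximum-iteration and function-tolerance options set on the solve.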
In an embodiment of the present invention, as shown in fig. 2, an embodiment of the present invention provides an improved three-dimensional image optimization generation method, where the overall flow is as follows:
and step 21, sequentially acquiring the homonymous points of the point clouds corresponding to the point location i and the point location j from the point location i. Wherein i is 0,1,2 … n; j is 0,1,2 … n; n is the total number of points in the point cloud.
And step 22, adding the point cloud pairs of the point location i and the point location j into the set to be optimized when the number of the same-name points is greater than or equal to a preset threshold value. Otherwise, the point location i and the point location j are considered to have insufficient constraint relation.
And step 23, traversing the whole set of i ═ n, and acquiring all the point pairs.
And 24, performing iterative optimization on the component error function at all points by using a Ceres function.
And 25, stopping optimization when the iteration times reach the maximum iteration times or the change values before and after two times of optimization are smaller than a preset threshold value, and obtaining the optimized scale parameter and pose relationship.
The obtained point cloud data are scale-optimized by the optimization algorithm described above, and the point clouds of all point locations are spliced using the optimized pose relationship and scale parameters to obtain the whole-house model.
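The flow of steps 21–25 can be summarized in one sketch. Here `extract_pairs`, `optimize`, and `splice` are assumed callable interfaces standing in for the homonymous-point extraction, the Ceres-style solve, and the point-cloud splicing; none of these names come from the patent.

```python
def reconstruct_house(point_clouds, extract_pairs, optimize, splice,
                      min_count=50):
    """End-to-end sketch of the flow: gather homonymous points for every
    point-location pair and keep only sufficiently constrained pairs
    (steps 21-23), run the iterative optimization (steps 24-25), then
    splice all clouds with the optimized pose and scale parameters."""
    to_optimize = {}
    n = len(point_clouds)
    for i in range(n):
        for j in range(i + 1, n):
            matches = extract_pairs(point_clouds[i], point_clouds[j])
            if len(matches) >= min_count:
                to_optimize[(i, j)] = matches
    poses, scales = optimize(to_optimize)
    return splice(point_clouds, poses, scales)
```

The return value is the spliced whole-house model; the constraint check at step 22 ensures that only point-location pairs sharing enough structure influence the global solve.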
An embodiment of the present invention provides a three-dimensional image optimization generation system, as shown in fig. 3, including:
a panoramic image acquisition unit 31 for acquiring panoramic images of respective positions in a room;
a homonymy point combination generating unit 32, configured to obtain homonymy points describing the same space according to point locations in the panoramic image at each position, and generate a homonymy point combination;
the iterative optimization unit 33 is configured to perform pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination respectively to obtain an optimized pose relationship and scale parameters;
and the splicing unit 34 is used for splicing and forming a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
In an embodiment of the present invention, the homonymy point combination generating unit 32 is specifically configured to:
respectively acquiring all point positions in the panoramic image of each position; performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described; and when the two point locations are determined to describe the same space, generating a homonymous point combination according to the two point locations.
In an embodiment of the present invention, the homonymy point combination generating unit 32 is specifically configured to:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
projecting each point P_i of the point cloud t-A onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);

according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B;

when it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, rotating the point P_j of the point cloud B back to the coordinate system of the point cloud A and projecting it onto the panoramic image corresponding to the point cloud A;

when it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set.
In an embodiment of the present invention, the iterative optimization unit 33 is specifically configured to:
and respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain the optimized pose relationship and scale parameters.
In an embodiment of the present invention, the iterative optimization unit 33 is further configured to:
adjusting the value of each matrix according to the following formula, and iteratively optimizing the pose relationship and scale parameters of the homonymy point:
wherein P_f and P_s are two homonymous points in the homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the principal directions of the point clouds of P_f and P_s; λ1, λ2 are penalty factors; D⊥ is the distance of P_f from P_s along the normal vector direction, and n is the number of all homonymous points; the transformed points denote P_f and P_s after rotation and translation to the world coordinate system.
In an embodiment of the present invention, the iterative optimization unit 33 is further configured to:
and when the maximum number of iterations is reached, or the difference between two successive iterative optimization results is smaller than a preset threshold, using the pose relationship and scale parameters of the homonymous points in the homonymous point combination as the optimized pose relationship and scale parameters.
In an embodiment of the present invention, there is also provided a computer-readable storage medium storing a computer program for executing the above-mentioned method.
In one embodiment of the present invention, there is also provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method.
Next, an electronic apparatus according to an embodiment of the present invention is described with reference to fig. 4.
Fig. 4 is a schematic structural diagram of an embodiment of an electronic device according to the present invention. As shown in fig. 4, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and a processor may execute the program instructions to implement the three-dimensional image optimization generation methods of the various embodiments of the invention described above and/or other desired functions.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device may also include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device that are relevant to the present invention are shown in fig. 4, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the three-dimensional image optimization generation method according to various embodiments of the present invention described in the above part of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Based on the scheme provided by the embodiment of the invention, the method mainly comprises the following beneficial effects:
acquiring panoramic images of each position in a room; obtaining homonymous points describing the same space according to the point locations in the panoramic images of each position, and generating homonymous point combinations; performing iterative optimization of pose and scale parameters on the homonymous points in the homonymous point combinations to obtain an optimized pose relationship and scale parameters; and splicing the point clouds according to the optimized pose relationship and scale parameters to form a three-dimensional image of the room. In this scheme, the scale of the point clouds of adjacent point locations is optimized by a three-dimensional model scale optimization algorithm based on depth-image matching. Because a point cloud may have inconsistent scales in the x, y, and z directions, the scale optimization proposed here optimizes the scale in the x, y, and z directions independently. Furthermore, all panoramic images of the room can be spliced to obtain a three-dimensional image of the whole room. By automatically splicing room panoramic images into three-dimensional images, a vectorized, highly simplified three-dimensional model of the room can be derived from panoramic images captured at room point locations with a simple commercial panoramic camera, meeting the need for fast and simple whole-house reconstruction.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be realized by program instructions running on related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The method and system of the present invention may be implemented in a number of ways. For example, the methods and systems of the present invention may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically indicated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (11)
1. A three-dimensional image optimization generation method is characterized by comprising the following steps:
acquiring panoramic images of all positions in a room;
according to the point positions in the panoramic image of each position, homonymy points describing the same space are obtained, and homonymy point combinations are generated; the homonymous points are coincident points in the two panoramic pictures, and the homonymous point combination is formed by combining every two homonymous points; the obtaining, according to the point location in the panoramic image at each position, a homonymy point describing the same space, and generating a homonymy point combination includes: respectively acquiring all point positions in the panoramic image of each position; performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described; when the two point locations are determined to describe the same space, generating a homonymy point combination according to the two point locations;
respectively carrying out pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination to obtain an optimized pose relationship and scale parameters; performing pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination respectively to obtain an optimized pose relationship and scale parameters, wherein the method comprises the following steps: respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain an optimized pose relationship and scale parameters;
and splicing to form a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
2. The method of claim 1, wherein performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described comprises:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
projecting each point P_i of the point cloud t-A onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);

according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B;

when it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, rotating the point P_j of the point cloud B back to the coordinate system of the point cloud A and projecting it onto the panoramic image corresponding to the point cloud A;

when it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i itself is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set.
3. The method of claim 1 or 2, wherein the method further comprises:
when the number of homonymous points is greater than a preset threshold, adding the point locations in the homonymous point set to a homonymous point combination; and the preset threshold is set according to the ratio of the number of homonymous points to the number of points in the corresponding point cloud.
4. The method of claim 1, wherein the iterative optimization of the pose and scale parameters of the homonym point by adjusting the values of a rotation matrix, a scale matrix, a transformation matrix, and a translation matrix, respectively, according to the pose and scale parameters of the homonym point in the homonym point combination comprises:
adjusting the value of each matrix according to the following formula, and iteratively optimizing the pose relationship and scale parameters of the homonymy point:
wherein P_f and P_s are two homonymous points in the homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the principal directions of the point clouds of P_f and P_s; λ1, λ2 are penalty factors; D⊥ is the distance of P_f from P_s along the normal vector direction, and n is the number of all homonymous points; the transformed points denote P_f and P_s after rotation and translation to the world coordinate system.
5. The method of claim 4, wherein the method further comprises:
and when the maximum number of iterations is reached, or the difference between two successive iterative optimization results is smaller than a preset threshold, using the pose relationship and scale parameters of the homonymous points in the homonymous point combination as the optimized pose relationship and scale parameters.
6. A three-dimensional image optimization generation system, comprising:
the panoramic image acquisition unit is used for acquiring panoramic images of all positions in a room;
a homonymy point combination generating unit, configured to obtain homonymy points describing the same space according to point locations in the panoramic image at each of the positions, and generate a homonymy point combination; the homonymous points are coincident points in the two panoramic pictures, and the homonymous point combination is formed by combining every two homonymous points; the homonymy point combination generation unit is specifically configured to: respectively acquiring all point positions in the panoramic image of each position; performing point-by-point matching on the depth images corresponding to the two point locations to determine whether the same space is described; when the two point locations are determined to describe the same space, generating a homonymy point combination according to the two point locations;
the iterative optimization unit is used for respectively carrying out pose and scale parameter iterative optimization on the homonymous points in the homonymous point combination to obtain an optimized pose relationship and scale parameters; wherein, the iterative optimization unit is specifically configured to: respectively adjusting values of a rotation matrix, a scale matrix, a transformation matrix and a translation matrix according to the pose and scale parameters of the homonymy points in the homonymy point combination, and iteratively optimizing the pose and scale parameters of the homonymy points to obtain an optimized pose relationship and scale parameters;
and the splicing unit is used for splicing to form a three-dimensional image of the room according to the optimized pose relationship and the scale parameters.
7. The system of claim 6, wherein the homonymy point combination generation unit is specifically configured to:
projecting the depth images of the two point locations into a three-dimensional space to generate a point cloud A and a point cloud B;
rotating and translating the point cloud A to a coordinate system where the point cloud B is located to generate a point cloud t-A;
projecting each point P_i of the point cloud t-A onto the panoramic image corresponding to the point cloud B to obtain a projection point (T_ix, T_iy);

according to the projection point (T_ix, T_iy), obtaining the corresponding point P_j of the point cloud B;

when it is determined that the included angle between the normal vectors of the points P_j and P_i is smaller than a preset angle Δα, and the Euclidean distance between P_i and P_j is smaller than a preset Euclidean distance ΔD, rotating the point P_j of the point cloud B back to the coordinate system of the point cloud A and projecting it onto the panoramic image corresponding to the point cloud A;

when it is determined that the pixel distance between the projection pixel (S_jx, S_jy) of the point P_j and the pixel (S_ix, S_iy) at which the point P_i is located is smaller than a preset pixel distance ΔL, the point P_i and the point P_j form a group of homonymous points, which is stored in a homonymous point set.
8. The system of claim 6, wherein the iterative optimization unit is further configured to:
adjusting the value of each matrix according to the following formula, and iteratively optimizing the pose relationship and scale parameters of the homonymy point:
wherein P_f and P_s are two homonymous points in the homonymous point combination; R is a 3 × 3 rotation matrix, and R_f, R_s are the values of P_f and P_s in the rotation matrix R; S is a 3 × 3 scale matrix, and S_f, S_s are the values of P_f and P_s in the scale matrix S; Q is a 4 × 4 transformation matrix, and Q_f, Q_s are the values of P_f and P_s in the transformation matrix Q; T is a 3 × 1 translation matrix, and T_f, T_s are the values of P_f and P_s in the translation matrix T; N_f, N_s are the normal vectors of P_f and P_s; N_Mf, N_Ms are the principal directions of the point clouds of P_f and P_s; λ1, λ2 are penalty factors; D⊥ is the distance of P_f from P_s along the normal vector direction, and n is the number of all homonymous points; the transformed points denote P_f and P_s after rotation and translation to the world coordinate system.
9. The system of claim 8, wherein the iterative optimization unit is further configured to:
and when the maximum number of iterations is reached, or the difference between two successive iterative optimization results is smaller than a preset threshold, using the pose relationship and scale parameters of the homonymous points in the homonymous point combination as the optimized pose relationship and scale parameters.
10. A computer-readable storage medium, in which a computer program is stored, characterized in that the computer program is adapted to perform the method of any of the preceding claims 1-5.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method of any one of the claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010877012.8A CN111986086B (en) | 2020-08-27 | 2020-08-27 | Three-dimensional image optimization generation method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010877012.8A CN111986086B (en) | 2020-08-27 | 2020-08-27 | Three-dimensional image optimization generation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986086A CN111986086A (en) | 2020-11-24 |
CN111986086B true CN111986086B (en) | 2021-11-09 |
Family
ID=73439911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010877012.8A Active CN111986086B (en) | 2020-08-27 | 2020-08-27 | Three-dimensional image optimization generation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986086B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113223137B (en) * | 2021-05-13 | 2023-03-24 | 广州虎牙科技有限公司 | Generation method and device of perspective projection human face point cloud image and electronic equipment |
CN113823001A (en) * | 2021-09-23 | 2021-12-21 | 北京有竹居网络技术有限公司 | Method, device, equipment and medium for generating house type graph |
CN113989376B (en) * | 2021-12-23 | 2022-04-26 | 贝壳技术有限公司 | Method and device for acquiring indoor depth information and readable storage medium |
CN114627191A (en) * | 2022-02-10 | 2022-06-14 | 深圳积木易搭科技技术有限公司 | Texture mapping homonymy point adjusting method and device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9189853B1 (en) * | 2011-11-30 | 2015-11-17 | Google Inc. | Automatic pose estimation from uncalibrated unordered spherical panoramas |
CN105678748A (en) * | 2015-12-30 | 2016-06-15 | 清华大学 | Interactive calibration method and apparatus based on three dimensional reconstruction in three dimensional monitoring system |
CN109523597A (en) * | 2017-09-18 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | The scaling method and device of Camera extrinsic |
CN109544677A (en) * | 2018-10-30 | 2019-03-29 | 山东大学 | Indoor scene main structure method for reconstructing and system based on depth image key frame |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105828034B (en) * | 2016-03-22 | 2018-08-31 | 合肥师范学院 | A kind of pipe reaction stove burner hearth panoramic picture imaging method |
CN106251399B (en) * | 2016-08-30 | 2019-04-16 | 广州市绯影信息科技有限公司 | A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam |
CA3040002C (en) * | 2016-10-18 | 2023-12-12 | Photonic Sensors & Algorithms, S.L. | A device and method for obtaining distance information from views |
US20180295335A1 (en) * | 2017-04-10 | 2018-10-11 | Red Hen Systems Llc | Stereographic Imaging System Employing A Wide Field, Low Resolution Camera And A Narrow Field, High Resolution Camera |
CN108171790B (en) * | 2017-12-25 | 2019-02-15 | 北京航空航天大学 | An Object Reconstruction Method Based on Dictionary Learning |
CN108389157A (en) * | 2018-01-11 | 2018-08-10 | 江苏四点灵机器人有限公司 | A kind of quick joining method of three-dimensional panoramic image |
CN109685838B (en) * | 2018-12-10 | 2023-06-09 | 上海航天控制技术研究所 | Image elastic registration method based on super-pixel segmentation |
CN110675314B (en) * | 2019-04-12 | 2020-08-21 | 北京城市网邻信息技术有限公司 | Image processing method, image processing apparatus, three-dimensional object modeling method, three-dimensional object modeling apparatus, image processing apparatus, and medium |
CN110070571B (en) * | 2019-04-28 | 2020-10-16 | 安徽农业大学 | Phyllostachys pubescens morphological parameter detection method based on depth camera |
CN111080804B (en) * | 2019-10-23 | 2020-11-06 | 贝壳找房(北京)科技有限公司 | Three-dimensional image generation method and device |
CN110842918B (en) * | 2019-10-24 | 2020-12-08 | 华中科技大学 | An autonomous positioning method for robot mobile processing based on point cloud servo |
2020
- 2020-08-27 CN CN202010877012.8A patent/CN111986086B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9189853B1 (en) * | 2011-11-30 | 2015-11-17 | Google Inc. | Automatic pose estimation from uncalibrated unordered spherical panoramas |
CN105678748A (en) * | 2015-12-30 | 2016-06-15 | 清华大学 | Interactive calibration method and apparatus based on three-dimensional reconstruction in a three-dimensional monitoring system |
CN109523597A (en) * | 2017-09-18 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | Calibration method and device for camera extrinsic parameters |
CN109544677A (en) * | 2018-10-30 | 2019-03-29 | 山东大学 | Indoor scene main structure reconstruction method and system based on depth image key frames |
Also Published As
Publication number | Publication date |
---|---|
CN111986086A (en) | 2020-11-24 |
Similar Documents
Publication | Title |
---|---|
CN111986086B (en) | Three-dimensional image optimization generation method and system |
CN111080804B (en) | Three-dimensional image generation method and device |
CN110998659B (en) | Image processing system, image processing method, and program | |
JP5538617B2 (en) | Methods and configurations for multi-camera calibration | |
JP6011102B2 (en) | Object posture estimation method | |
US8452081B2 (en) | Forming 3D models using multiple images | |
CN101785025B (en) | System and method for three-dimensional object reconstruction from two-dimensional images | |
US8433157B2 (en) | System and method for three-dimensional object reconstruction from two-dimensional images | |
Schöning et al. | Evaluation of multi-view 3D reconstruction software | |
KR100793838B1 (en) | Camera motion extraction device, system and method for providing augmented reality of marine scene using the same | |
CN113689578B (en) | Human body data set generation method and device | |
US20160189419A1 (en) | Systems and methods for generating data indicative of a three-dimensional representation of a scene | |
CN111161336B (en) | Three-dimensional reconstruction method, three-dimensional reconstruction apparatus, and computer-readable storage medium | |
EP3547260B1 (en) | System and method for automatic calibration of image devices | |
US20120177283A1 (en) | Forming 3d models using two images | |
CN109521879B (en) | Interactive projection control method and device, storage medium and electronic equipment | |
KR20130138247A (en) | Rapid 3d modeling | |
JP2010109783A (en) | Electronic camera | |
US11074752B2 (en) | Methods, devices and computer program products for gradient based depth reconstructions with robust statistics | |
CN109613974B (en) | AR home-experience method for large scenes |
GB2567245A (en) | Methods and apparatuses for depth rectification processing | |
CN112116714A (en) | Method and device for generating room structure model based on two-dimensional image | |
CN112073640B (en) | Panoramic information acquisition pose acquisition method, device and system | |
JP2006113832A (en) | Stereoscopic image processor and program | |
KR102538685B1 (en) | Method and apparatus for restoring 3d information using multi-view information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2021-04-13
Address after: Rooms 101 and 102-1, Floor 1, Building 35, Compound No. 2, Xierqi West Road, Haidian District, Beijing 100085
Applicant after: Seashell Housing (Beijing) Technology Co., Ltd.
Address before: Unit 05, Room 112, 1st Floor, Office Building, Nangang Industrial Zone, Economic and Technological Development Zone, Binhai New Area, Tianjin 300457
Applicant before: BEIKE TECHNOLOGY Co., Ltd.
GR01 | Patent grant | ||
GR01 | Patent grant |