CN117953147A - Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction equipment and storage medium - Google Patents
- Publication number
- CN117953147A (application CN202311632943.1A)
- Authority
- CN
- China
- Prior art keywords
- plane
- planes
- space
- target
- segmented
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tessellation
- G06T7/11—Region-based segmentation (G06T7/10—Segmentation; Edge detection)
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/10028—Range image; Depth image; 3D point clouds (image acquisition modality)
Abstract
The application discloses a three-dimensional reconstruction method, apparatus, device, and storage medium. The method comprises: performing plane extraction on a three-dimensional point cloud or three-dimensional mesh to obtain a plurality of planes, where the point cloud or mesh relates to a target object or target scene; spatially partitioning a target space based on the area of each plane and the relative positional relationships between the planes to obtain a space-partition result, where the target space contains the spatial region covered by each plane; and performing three-dimensional reconstruction of the target object or target scene using the space-partition result to obtain a vectorized model of the target object or target scene. This scheme improves the accuracy of the three-dimensional reconstruction result.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a three-dimensional reconstruction method and apparatus, a device, and a storage medium.
Background
In recent years, vectorized modeling of 3D point clouds has attracted increasing attention. Thanks to their lightweight, computer-aided design (CAD) style representation, vectorized models offer considerable savings in computational and memory overhead and are favored in applications such as rendering, simulation, navigation, and Building Information Modeling (BIM). The most popular approach to reconstructing a 3D polygonal vectorized model involves two steps: extraction of planar primitives and reconstruction of the polygonal vectorized model. A planar primitive is represented by a set of interior 3D points and a corresponding supporting plane, and all planar primitives are extracted from the input 3D point cloud. After the planar primitives have been extracted, the space containing the planes is conventionally partitioned along a single direction, and three-dimensional reconstruction is then performed on the partitioned space. Partitioning in this manner, however, can be inaccurate, so the three-dimensional model obtained from it is not accurate enough.
Disclosure of Invention
The application provides at least one three-dimensional reconstruction method, a three-dimensional reconstruction device, three-dimensional reconstruction equipment and a storage medium.
The application provides a three-dimensional reconstruction method, which comprises the following steps: performing plane extraction on the three-dimensional point cloud or the three-dimensional grid to obtain a plurality of planes, wherein the three-dimensional point cloud or the three-dimensional grid is the point cloud or the grid related to the target object or the target scene; based on the area of each plane and the relative position relation between the planes, carrying out space division on a target space to obtain a space division result, wherein the target space comprises a space region covered by each plane; and carrying out three-dimensional reconstruction on the target object or the target scene by using the space division result to obtain a vectorization model of the target object or the target scene.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or mesh to obtain a plurality of planes, the target space containing the planes is partitioned according to the area of each plane and the relative positional relationships between the planes. Compared with partitioning along a single direction, this method takes the plane areas and relative positions into account during partitioning, so the partition is more accurate, and the three-dimensional model built from the resulting partition is correspondingly more accurate.
In some embodiments, partitioning the target space based on the area of each plane and the relative positional relationships between the planes to obtain a space-partition result comprises performing the following segmentation operation for each space to be segmented: determining a segmentation score for each plane in the space to be segmented, where the space to be segmented is either the target space or a subspace obtained by partitioning the target space n times (n being greater than or equal to 1), and the segmentation score of a plane is related to its area and to its relative position with respect to the other planes in the space to be segmented; selecting, based on the segmentation scores, at least one plane in the space to be segmented as a target plane; partitioning the space to be segmented with the target plane to obtain the next-level spaces to be segmented corresponding to it; and, in response to the next-level spaces to be segmented satisfying a stopping condition, halting the partition and taking the result of the executed segmentation operations as the space-partition result.
In this scheme, the target space, or an nth-level subspace obtained by partitioning it, serves as the space to be segmented. The segmentation score of each plane is determined from the plane areas and the relative positional relationships among the planes in that space, and the target plane used to split the space is then chosen by that score. By jointly considering plane positions and sizes, the most suitable plane insertion order is selected, so the space is partitioned more efficiently.
In some embodiments, space division is performed on a space to be segmented by using a target plane to obtain a next-stage space to be segmented corresponding to the space to be segmented, including: and dividing other planes except the target plane in the space to be divided into the next-stage space to be divided.
In the above scheme, after the space is divided, other planes except the target plane in the space to be divided are divided into the next-stage spaces to be divided, so that the next-stage spaces to be divided can be conveniently and subsequently divided according to the planes in the next-stage spaces to be divided.
In some embodiments, dividing other planes than the target plane in the space to be divided into the next-stage spaces to be divided includes: dividing a plane at a first side of the target plane into a space to be segmented at a next stage at the first side of the target plane, and dividing a plane at a second side of the target plane into a space to be segmented at a next stage at the second side of the target plane, wherein the first side of the target plane is opposite to the second side of the target plane; and dividing the plane intersected with the target plane into a space to be segmented at the next stage on the first side of the target plane and a space to be segmented at the next stage on the second side of the target plane.
In the above scheme, by dividing each plane into the corresponding next-stage space to be divided according to the relative positional relationship between each plane and the target plane, the planes can be more accurately divided into the next-stage space to be divided.
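The segmentation operation and this side-based routing of planes can be sketched as follows, under the assumption that each plane records its relation ("left", "right", or "cross") to every other plane; the function and field names, the relation encoding, and the depth-based stopping condition are illustrative, not taken from the patent:

```python
def split_space(planes, score_fn, depth=0, max_depth=10):
    """Recursively split a space: choose the plane with the best segmentation
    score, then route each remaining plane to the child space(s) on its side
    of the chosen plane. Planes that cross the target plane are assigned to
    both children, as described above. Names and the stopping condition are
    illustrative assumptions."""
    if not planes or depth >= max_depth:
        return None                        # leaf: no further splitting
    target = max(planes, key=score_fn)     # plane with the highest score
    left, right = [], []
    for p in planes:
        if p is target:
            continue
        rel = p["relation"][target["id"]]  # 'left', 'right', or 'cross'
        if rel in ("left", "cross"):
            left.append(p)
        if rel in ("right", "cross"):
            right.append(p)
    return {"split": target["id"],
            "children": (split_space(left, score_fn, depth + 1, max_depth),
                         split_space(right, score_fn, depth + 1, max_depth))}

# Toy example: three planes; plane C crosses A, plane B lies left of both.
planes = [
    {"id": "A", "area": 3.0, "relation": {"B": "right", "C": "cross"}},
    {"id": "B", "area": 1.0, "relation": {"A": "left", "C": "left"}},
    {"id": "C", "area": 2.0, "relation": {"A": "cross", "B": "cross"}},
]
tree = split_space(planes, lambda p: p["area"])  # area as a stand-in score
```

With area as the stand-in score, plane A splits first, and the crossing plane C is carried into both child spaces, mirroring the assignment rule above.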
In some embodiments, determining a segmentation score for each plane in the space to be segmented includes: for each plane, determining a first number of planes intersecting the plane, a second number of planes at a first side of the plane, and a third number of planes at a second side of the plane in the space to be segmented, the first side of the plane being opposite the second side of the plane; the first plane number, the second plane number and the third plane number are weighted and fused to obtain a position score about the plane; determining the area fraction of the plane based on the area of the plane and the total number of planes corresponding to the three-dimensional point cloud; based on the position score and the area score, a segmentation score of the plane is obtained.
In the above scheme, some planes may lie at the center of the target space while others lie at its edges, and some planes are large while others are small. Counting the planes that intersect a given plane and the planes on each of its two sides reveals whether that plane occupies an important position among all planes. Determining the order in which the planes split the space by considering both this positional importance and the plane areas makes the space partition more efficient and accurate.
In some embodiments, weighting and fusing the first number of planes, the second number of planes, and the third number of planes to obtain a location score for the planes includes: determining a first difference between the second number of planes and the third number of planes, and a first product between the first difference and the first weight, and a second product between the first number of planes and the second weight; the sum of the first product and the second product is taken as the location score.
In the above scheme, by performing weighted fusion on the first plane number, the second plane number and the third plane number, a more accurate position score can be obtained.
In some embodiments, the area fraction is a product of a normalized area of the plane and a total number of planes, and deriving the segmentation fraction of the plane based on the position fraction and the area fraction includes: the difference between the area fraction and the position fraction is used as the division fraction.
In the above-described scheme, by determining the area fraction by considering the area of a single plane and the number of planes contained in the space to be segmented and then combining the position fractions of the planes, a more accurate segmentation fraction can be determined.
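The scoring just described can be sketched as follows. Taking the absolute difference between the two side counts and normalizing the area by the maximum plane area are our assumptions (the claims say only "difference" and "normalized area"), and the weights are illustrative:

```python
def position_score(n_cross, n_first_side, n_second_side,
                   w_balance=1.0, w_cross=1.0):
    """Weighted fusion of the three plane counts from the claims: the
    (absolute) difference between the two side counts times the first
    weight, plus the crossing count times the second weight."""
    return w_balance * abs(n_first_side - n_second_side) + w_cross * n_cross

def segmentation_score(area, max_area, total_planes,
                       n_cross, n_first_side, n_second_side):
    """Area score minus position score, as described: the area score is the
    normalized plane area scaled by the total number of planes."""
    area_s = (area / max_area) * total_planes
    return area_s - position_score(n_cross, n_first_side, n_second_side)

# A large plane that balances the remaining planes evenly scores high:
big_balanced = segmentation_score(5.0, 5.0, 6, n_cross=0,
                                  n_first_side=2, n_second_side=2)   # 6.0
# A small, off-center plane crossed by many others scores low:
small_skewed = segmentation_score(1.0, 5.0, 6, n_cross=3,
                                  n_first_side=0, n_second_side=2)   # -3.8
```

Under this reading, planes that are large and split the remaining planes evenly without being crossed are inserted first.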
In some embodiments, before the space division is performed on the target space based on the area of each plane and the relative positional relationship between the planes, the method further includes: for each plane, determining a second plane at a first side of the plane, a third plane at a second side of the plane and planes other than the first plane and the second plane as first planes intersecting the planes, and obtaining a relative positional relationship between the planes and other planes; the relative positional relationship is stored in memory for later recall.
In the scheme, the relative position relation of the planes is firstly determined and stored in the memory, so that the subsequent calculation of the segmentation scores of the planes is facilitated, the relative position relation among the planes is not required to be repeatedly determined, and the calculation cost is reduced.
In some embodiments, before the space division is performed on the target space based on the area of each plane and the relative positional relationship between the planes, the method further includes: respectively determining adjacent planes of each plane; for each plane, the initial boundary of the plane is expanded towards the adjacent plane direction of the plane, and the intersection point of the plane and each adjacent plane is taken as the new boundary of the plane.
In the above scheme, the planes are extended by referring to the adjacent planes of the planes, and then the space division is performed by using the extended planes, so that the relative position relationship between the extended plane area and the planes is clearer, and the space division result is more accurate.
In some embodiments, determining adjacent planes of each plane separately includes: respectively determining the minimum distance between each plane and the inner points of other planes, wherein the inner points of the planes are any point in the planes; and determining that the two planes are adjacent to each other in response to the minimum distance between the inner points of the two planes being less than or equal to a preset value.
In the above scheme, determining whether two planes are adjacent from the distance between their interior points, rather than from their boundary points, reduces cases in which two adjacent surfaces whose boundary points happen to be far apart are erroneously judged to be non-adjacent.
In some embodiments, expanding the initial boundary of the plane toward an adjacent plane direction of the plane includes: sampling the initial boundary of the plane to obtain a plurality of sample vertexes; each sample vertex is expanded towards an adjacent plane on a target straight line, wherein the target straight line is a straight line passing through the centroid of the initial boundary and the initial position of the sample vertex.
In the above scheme, by extending the boundary points of the planes towards the adjacent planes, the computational complexity of the scheme is lower than when extending the whole boundary of the planes towards the adjacent planes.
In some embodiments, expanding the initial boundary of the plane toward adjacent planes of the plane and taking the intersection of the plane and each adjacent plane as a new boundary of the plane includes: for each sample vertex, determining an intersection point of the sample vertex after expansion with each adjacent plane; taking the intersection point farthest from the initial position of the sample vertex as a new position of the sample vertex; based on the new positions of the vertices of each sample, a new boundary of the plane is obtained.
In the scheme, the intersection point is taken as a new vertex, and the boundary is determined according to the new vertex, so that the subsequent determination of the relative position relationship of each plane is more accurate.
The application provides a three-dimensional reconstruction device, comprising: a plane extraction module, configured to perform plane extraction on a three-dimensional point cloud or three-dimensional mesh to obtain a plurality of planes, the point cloud or mesh relating to a target object or target scene; a space division module, configured to partition a target space based on the area of each plane and the relative positional relationships between the planes to obtain a space-partition result, the target space containing the spatial region covered by each plane; and a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction of the target object or target scene using the space-partition result to obtain a vectorized model of the target object or target scene.
The application provides an electronic device, which comprises a memory and a processor which are mutually coupled, wherein the processor is used for executing program instructions stored in the memory so as to realize the three-dimensional reconstruction method.
The present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the above-described three-dimensional reconstruction method.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or mesh to obtain a plurality of planes, the target space containing the planes is partitioned according to the area of each plane and the relative positional relationships between the planes. Compared with partitioning along a single direction, this method takes the plane sizes and relative positions into account during partitioning, so the resulting partition is more accurate, and the vectorized model built from it is correspondingly more accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of an embodiment of a three-dimensional reconstruction method according to the present application;
FIG. 2 is a schematic flow chart of an embodiment of the three-dimensional reconstruction method of the present application;
FIG. 3 is a flow chart of a segmentation operation shown in an embodiment of the three-dimensional reconstruction method of the present application;
FIG. 4 is a schematic diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 6 is a block diagram of a computer readable storage medium according to an embodiment of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details such as particular system architectures, interfaces, and techniques are set forth in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
The execution body of the three-dimensional reconstruction method may be a three-dimensional reconstruction device, which may be deployed in any terminal device, server, or other processing device capable of executing the method embodiments of the present application. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on. In some possible implementations, the three-dimensional reconstruction method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of a three-dimensional reconstruction method according to the present application.
Specifically, the method may include the steps of:
Step S11: and carrying out plane extraction on the three-dimensional point cloud or the three-dimensional grid to obtain a plurality of planes.
A three-dimensional point cloud is a point cloud of a target object or target scene, and a three-dimensional mesh is a mesh of a target object or target scene. In some application scenarios, the three-dimensional point cloud may be acquired by capturing images or video of the target object or target scene, or it may be created artificially with modeling software; how the point cloud or mesh is obtained is not particularly limited here. The target object may be any object, such as an animal, a vehicle, or a virtual character, and the target scene may be any scene, such as a room, a city, or a desktop. Many methods exist for extracting planes from a three-dimensional point cloud or mesh, for example the RANSAC algorithm (RANdom SAmple Consensus), normal-based methods, image-based point-cloud plane extraction methods (such as projection and contour methods), and deep-learning-based point-cloud plane extraction methods; the extraction method is not particularly limited here.
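As one concrete example of the plane-extraction step, a minimal RANSAC plane fit can be sketched as follows. The distance threshold and iteration count are illustrative defaults, and this fits only a single plane; extracting all planar primitives would repeatedly remove inliers and re-run the fit:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, iters=200, seed=0):
    """Fit one plane n.x + d = 0 to a point cloud with RANSAC.
    Returns (normal, d, inlier_mask). Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy cloud: a noisy z = 0 patch plus a few scattered outliers.
rng = np.random.default_rng(1)
flat = np.c_[rng.uniform(-1, 1, (100, 2)), rng.normal(0, 0.005, 100)]
cloud = np.vstack([flat, rng.uniform(-1, 1, (10, 3))])
n, d, mask = ransac_plane(cloud)
print(mask[:100].sum())  # most of the 100 planar points are recovered
```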
Step S12: and carrying out space division on the target space based on the area of each plane and the relative position relation between the planes to obtain a space division result.
The target space contains the spatial region covered by each plane. Specifically, before the target space is partitioned, a space capable of containing all planes may be determined from the plane boundaries; alternatively, a space containing all three-dimensional points of the point cloud may be determined from the outermost points of the cloud, since a space that contains all the points necessarily contains every plane extracted from them. The area of a plane is understood as the area of the region enclosed by its boundary. For a given plane, another plane may lie on its first side, lie on its second side, or intersect it, where the first and second sides are opposite: if the first side of the plane is its left side, the second side is its right side; if the first side is its upper side, the second side is its lower side. In other embodiments, the relative positional relationship may also include the relative distances between planes, and so on. Partitioning the target space based on the plane areas and relative positional relationships may consist in determining, from these quantities, the order in which the planes split the space; the space-partition result may be the result of the space being divided into a plurality of subspaces.
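Concretely, the target space can be initialized as the axis-aligned bounding box of all input points, one reading of the paragraph above; the optional margin parameter is our addition, not from the patent:

```python
import numpy as np

def target_space(points, margin=0.0):
    """Axis-aligned bounding box enclosing every 3-D point (and hence every
    plane extracted from them), used as the initial space to be divided."""
    lo = points.min(axis=0) - margin
    hi = points.max(axis=0) + margin
    return lo, hi

lo, hi = target_space(np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]))
```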
Step S13: and carrying out three-dimensional reconstruction on the target object or the target scene by using the space division result to obtain a vectorization model of the target object or the target scene.
The three-dimensional reconstruction may be vectorized modeling, yielding a three-dimensional vectorized model. A three-dimensional vectorized model generalizes the two-dimensional point, line, and surface vector model to three dimensions: entities in three-dimensional space are abstracted into four basic elements (points, lines, surfaces, and volumes), and more complex objects are constructed from sets of these four basic geometric elements. A line is delimited by a start point and an end point and shaped by a set of intermediate points; a surface is delimited by an outer boundary and a number of inner boundaries and shaped by a set of profile curves; a volume's boundary and shape are defined by a set of surfaces. A vector model can accurately express the irregular boundaries of three-dimensional linear, planar, and volumetric entities; its data storage format is compact and its data volume small; it intuitively expresses the topological relationships among spatial geometric elements, offers strong spatial, topological, adjacency, and network-analysis capabilities, produces attractive graphic output, and readily supports spatial operations such as geometric transformation. Specifically, three-dimensional reconstruction of the target object or scene using the space-partition result may select, for example by optimization, a suitable set of subspaces and planes to compose the vectorized model.
In the above scheme, planes are first extracted from the three-dimensional point cloud or mesh, and the target space containing them is then partitioned according to the plane areas and the relative positional relationships between the planes. Compared with partitioning along a single direction, this approach accounts for plane sizes and positions during partitioning, so the partition is more accurate, and the vectorized model built from the resulting partition is correspondingly more accurate.
In some embodiments, referring to fig. 2, fig. 2 is another flow chart of an embodiment of the three-dimensional reconstruction method according to the present application. As shown in fig. 2, the following steps may be further performed before performing the above step S12:
step S21: adjacent planes to each plane are determined separately.
Step S21 may be implemented by determining whether planes are adjacent according to the relative distances between them. Illustratively, the minimum distance between each plane and the interior points of the other planes is determined, an interior point being any point within a plane. For example, if plane A contains n three-dimensional points and plane B contains m three-dimensional points, then n×m pairwise distances are computed and the minimum among them is taken. In response to the minimum distance between the interior points of two planes being less than or equal to a preset value, the two planes are determined to be adjacent; that is, if the minimum distance between the interior points of a pair of planes is below a threshold, they are considered adjacent.
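The adjacency test above can be sketched as a brute-force minimum interior-point distance; at scale a KD-tree would replace the n×m distance matrix, and the threshold value is illustrative:

```python
import numpy as np

def are_adjacent(pts_a, pts_b, threshold=0.05):
    """Two planes are adjacent when the minimum distance between any interior
    point of one and any interior point of the other is <= threshold.
    Brute-force O(n*m) over the two point sets; threshold is illustrative."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return bool(d.min() <= threshold)

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
near = np.array([[1.03, 0.0, 0.0]])   # 0.03 away from plane a's points
far = np.array([[5.0, 0.0, 0.0]])
```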
Step S22: for each plane, the initial boundary of the plane is expanded towards the adjacent plane direction of the plane, and the intersection point of the plane and each adjacent plane is taken as the new boundary of the plane.
The initial boundary of a plane may be expanded toward its adjacent planes in two ways: the initial boundary is sampled to obtain a plurality of sample points, which are then expanded to obtain the new boundary; or the initial boundary is fitted to a boundary line, which is then expanded to obtain the new boundary. In some embodiments, the expansion may proceed as follows. The initial boundary of the plane is sampled to obtain a plurality of sample vertices. Each sample vertex is then expanded toward the adjacent planes along a target straight line, where the target straight line is the line passing through the centroid of the initial boundary and the initial position of the sample vertex. For example, the boundary of each plane is initialized to the convex hull of its interior points projected onto its supporting plane. The initial boundary is uniformly sampled with an average spacing S, and each sample vertex is expanded away from the centroid of the initial boundary toward the plane's adjacent planes.
Specifically, expanding each sample vertex toward the adjacent planes along the target straight line may proceed as follows. For each sample vertex, the intersection points of the expanded sample vertex with each adjacent plane are determined. The intersection point farthest from the initial position of the sample vertex is taken as the new position of that sample vertex. The new boundary of the plane is then obtained from the new positions of all sample vertices. Illustratively, all intersections where a sample vertex meets the planes of its neighboring plane primitives are traversed, and the new boundary of the plane is determined as the set of intersection points farthest from the initial sample vertex positions.
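The farthest-intersection rule above can be sketched for a single sample vertex as follows; representing each adjacent plane by a point and a unit normal is an assumption of this illustration:

```python
import numpy as np

def expand_vertex(centroid, vertex, neighbor_planes):
    """Push one sampled boundary vertex outward along the ray from the
    boundary centroid through the vertex, and return the intersection
    with the neighboring planes that lies farthest from the vertex's
    initial position. Each neighbor plane is (point_on_plane, unit_normal).
    If no neighbor plane is hit, the vertex keeps its initial position."""
    c = np.asarray(centroid, dtype=float)
    v = np.asarray(vertex, dtype=float)
    d = v - c
    d = d / np.linalg.norm(d)            # ray direction, away from the centroid
    best, best_dist = v, 0.0
    for p0, n in neighbor_planes:
        p0, n = np.asarray(p0, float), np.asarray(n, float)
        denom = d @ n
        if abs(denom) < 1e-9:            # ray parallel to this plane
            continue
        t = ((p0 - v) @ n) / denom       # ray parameter of the intersection
        if t <= 0:                       # only expand outward, never inward
            continue
        hit = v + t * d
        dist = np.linalg.norm(hit - v)
        if dist > best_dist:             # keep the farthest intersection
            best, best_dist = hit, dist
    return best
```

Applying this to every sampled vertex yields the new plane boundary described in step S22.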
In some embodiments, the following steps may also be performed before step S12 described above: for each plane, the planes at a first side of the plane are determined as second planes, the planes at a second side of the plane are determined as third planes, and the planes other than the second and third planes are determined as first planes, i.e. planes intersecting the plane; the relative positional relationship between the plane and the other planes is thereby obtained. The relative positional relationships are stored in memory for later recall. Because the relative positional relationships between planes are accessed frequently in subsequent computations such as the segmentation score, the positional relationship between every pair of planes is computed and stored in memory in advance for convenient retrieval. Alternatively, taking the first side as the left side and the second side as the right side as an example, another plane may be in one of three positional relationships to a given plane: to its left, to its right, or intersecting it. In the actual computation, for a plane P1, the positional relationship between every vertex Vi of another plane P2 and the plane P1 is computed. If all vertices Vi are to the left (or right) of P1, the plane P2 is said to be to the left (or right) of P1; otherwise, the two planes intersect. The first side and the second side may alternatively be the front and rear sides, or the upper and lower sides, i.e. any pair of opposite sides.
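The vertex-sign test described above (all vertices of P2 on one side of P1, or otherwise intersecting) can be sketched as follows; the string labels are hypothetical names for the three cases:

```python
import numpy as np

def classify(plane_point, plane_normal, other_vertices, eps=1e-9):
    """Classify another plane's vertices against plane P1, given a point
    on P1 and its normal. Returns 'left' if all vertices lie on the
    normal side, 'right' if all lie on the opposite side, else 'cross'."""
    n = np.asarray(plane_normal, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    s = (np.asarray(other_vertices, dtype=float) - p) @ n   # signed distances
    if (s >= -eps).all():
        return 'left'
    if (s <= eps).all():
        return 'right'
    return 'cross'
```

Caching the result for every ordered pair of planes gives the precomputed relationship table kept in memory.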
In some embodiments, step S12 may include the following: the segmentation operation below is performed on each space to be segmented, and in response to the next-stage space to be segmented satisfying the stop-space-division condition, space division is stopped and the division result obtained by the executed segmentation operations is taken as the space division result. Referring to fig. 3, fig. 3 is a schematic flow chart of the segmentation operation in an embodiment of the three-dimensional reconstruction method according to the present application, where the steps of the segmentation operation include:
Step S121: a segmentation score for each plane in the space to be segmented is determined.
For the plane set as a whole, the bounding box of the entire set is computed; this bounding box is the target space to be divided. The space to be segmented is the target space itself, or a subspace obtained by dividing the target space n times, where n is greater than or equal to 1. For example, when n equals 1, the space to be segmented is a subspace obtained by dividing the target space once; when n equals 2, it is a secondary subspace obtained by dividing such a subspace; and so on. The segmentation score of each plane is related to the area of that plane and to the relative positional relationships between that plane and the other planes in the space to be segmented. In this scheme, the target space and all plane primitives form the root node of a binary space partitioning (BSP) tree, and the target space is the initial space S1 to be segmented. The segmentation scores of all planes in the space S1 are computed, the plane with the highest score is selected to split the space into two new subspaces, and the same segmentation operation is continued on each subspace until no subspace can be split further. There are various cases in which splitting cannot continue, for example, a subspace contains no plane, or a plane coincides with the boundary of its subspace.
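The recursive splitting loop described above can be sketched as follows; `score_fn` and `split_fn` are assumed helper callbacks (segmentation-score evaluation, and partitioning the remaining planes into the two child subspaces), not names from the original disclosure:

```python
def build_bsp(space_planes, score_fn, split_fn):
    """Recursive sketch of the BSP construction described above: pick the
    plane with the highest segmentation score, split, and recurse until a
    subspace can no longer be split (here: contains no plane).

    score_fn(plane, planes) -> numeric score
    split_fn(plane, planes) -> (planes_first_side, planes_second_side)
    """
    if not space_planes:                 # stop condition: nothing left to split
        return None
    best = max(space_planes, key=lambda p: score_fn(p, space_planes))
    left, right = split_fn(best, space_planes)
    return {
        'splitter': best,
        'left': build_bsp(left, score_fn, split_fn),
        'right': build_bsp(right, score_fn, split_fn),
    }
```

With toy "planes" represented as numbers, a score function that favors larger values, and a split function that sends smaller values left, the loop produces a right-leaning chain of splitters, illustrating that the insertion order is driven entirely by the score.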
Specifically, the manner of determining the segmentation score of each plane in the space to be segmented may be: for each plane, determining a first number of planes intersecting the plane in the space to be segmented, a second number of planes at a first side of the plane, and a third number of planes at a second side of the plane, the first side of the plane being opposite the second side of the plane. And carrying out weighted fusion on the first plane number, the second plane number and the third plane number to obtain the position score of the plane. Illustratively, the first side is the left side of the plane, the second side is the right side of the plane, the first plane is the plane intersecting the plane in the space to be segmented, the second plane is the plane left of the plane in the space to be segmented, and the third plane is the plane right of the plane in the space to be segmented.
The position score of the plane may be obtained by weighted fusion of the first plane number, the second plane number and the third plane number as follows: a first difference between the second plane number and the third plane number is determined, together with a first product of the first difference and a first weight and a second product of the first plane number and a second weight. The sum of the first product and the second product is then taken as the position score.
The area score of the plane is determined based on the area of the plane and the total number of planes corresponding to the three-dimensional point cloud; the segmentation score of the plane is then obtained based on the position score and the area score.

The area score is the product of the normalized area of the plane and the total number of planes. The normalized area of a plane may be determined by multiplying the area of that plane by the reciprocal of the weighted average area of the largest N planes in the space to be segmented. N may be a fixed number, the total number of planes in the space to be segmented, or a given percentage of that total. The normalized plane area reflects the relative size of the plane, independent of the scene scale and the absolute plane area. In other embodiments, the area score may instead be the product of the actual area of the plane and the total number of planes. The segmentation score of the plane may be obtained from the position score and the area score by taking the difference between the area score and the position score as the segmentation score.
The formula for determining the segmentation score of a plane may refer to formula (1):

W_k(P_i) = A(P_i)·|P| − (w_b·| |P_l| − |P_r| | + w_c·|P_c|)   (1)

where W_k(P_i) denotes the segmentation score of plane P_i, A(P_i) denotes the normalized area of plane P_i, and |P| denotes the total number of planes contained in the space to be segmented. |P_c|, |P_l| and |P_r| denote the first, second and third plane numbers, i.e. the numbers of planes intersecting plane P_i, at its first side, and at its second side, respectively. w_b denotes the first weight and w_c denotes the second weight, both of which are preset values.
Step S122: at least one plane is selected as a target plane from among the planes in the space to be segmented based on the segmentation score of the planes.
Illustratively, a plane whose segmentation score satisfies a condition is selected as the target plane. The condition may be that the score is the highest among the segmentation scores of the planes in the space to be segmented.
Step S123: and carrying out space division on the space to be segmented by utilizing the target plane to obtain a next-stage space to be segmented corresponding to the space to be segmented.
Step S123 may include the following: the planes other than the target plane in the space to be segmented are divided into the next-stage spaces to be segmented. Each space to be segmented can be divided by the target plane into two next-stage spaces to be segmented. Specifically, planes at the first side of the target plane are divided into the next-stage space to be segmented at the first side of the target plane, and planes at the second side of the target plane are divided into the next-stage space to be segmented at the second side, the first side of the target plane being opposite to its second side. Planes intersecting the target plane are divided into both the next-stage space at the first side and the next-stage space at the second side of the target plane.
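A minimal sketch of this distribution rule follows, with a hypothetical `side_of` helper that reports whether a plane lies at the first side of, at the second side of, or intersects the target plane:

```python
def partition_planes(target, planes, side_of):
    """Distribute the remaining planes into the two next-stage spaces
    created by `target`. Planes intersecting the target plane go into
    both child spaces, as described above.

    side_of(plane, target) -> 'first' | 'second' | 'cross'
    """
    first_side, second_side = [], []
    for p in planes:
        if p == target:                  # the splitter itself is consumed
            continue
        s = side_of(p, target)
        if s in ('first', 'cross'):
            first_side.append(p)
        if s in ('second', 'cross'):
            second_side.append(p)
    return first_side, second_side
```

Note that an intersecting plane is duplicated into both children, which is what makes the number of planes crossing the splitter a cost worth penalizing in the segmentation score.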
The stop-space-division condition mentioned above, under which space division is stopped and the division result obtained by the executed segmentation operations is taken as the space division result, may be any case in which segmentation cannot continue, for example, the next-stage space to be segmented contains no plane, or a plane in the next-stage space coincides with its boundary. Finally, suitable spaces and planes can be selected from the partitioned spaces to compose the vectorized model.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or three-dimensional grid to obtain a plurality of planes, the target space containing the planes is spatially divided according to the area of each plane and the relative positional relationships between the planes. Compared with dividing the space along a single direction, this division process takes both the area of each plane and the relative positional relationships between the planes into account, so the resulting spaces are more accurate, and the three-dimensional model built from them is correspondingly more accurate.
In this scheme, planes are first extracted from a point cloud model or a grid model. The planes are then expanded, taking the adjacency between planes into account, to obtain the final plane boundaries. Rather than expanding the boundaries during spatial segmentation, the plane adjacencies are pre-computed and the final boundary intersection points are determined before spatial segmentation, which avoids incorrect spatial partitioning caused by an incorrect plane-primitive insertion order.
Furthermore, pre-computing the plane positional relationships and storing them in memory accelerates the computation and selection of the plane insertion order and reduces the memory and computational cost of the algorithm. Since the more spaces are divided, the more time and memory the division process consumes, the insertion order of the planes is designed to minimize the number of spaces ultimately produced. Therefore, when partitioning the space with the binary space partitioning tree, a score is computed for every plane at each step, and the plane with the highest score is selected to split the space into two subspaces. The same division flow then continues on each subspace in turn until all spaces have been divided.
In some application scenarios, after a reconstructed model of a scene is acquired with a device such as a camera, this scheme can be used to extract plane primitives and generate a space division result, and a vectorized model of the whole model can then be obtained through optimization.
The scheme can be used to obtain a vectorized model of a scene and the objects in it; combined with semantics, such a model can be used for intelligent scene management, building information models (BIM), and the like.

The scheme can also be used to obtain a vectorized model of a scene and the objects in it for scene navigation, where the vectorized model enables efficient collision detection.
The scheme can be applied to the fields of three-dimensional modeling, vectorization modeling, augmented reality, games, animation, film and television, electronic commerce, education, real estate, home decoration, intelligent industry, navigation and the like.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a three-dimensional reconstruction device according to an embodiment of the application. The three-dimensional reconstruction device 40 includes a plane extraction module 41, a spatial division module 42, and a three-dimensional reconstruction module 43. A plane extraction module 41, configured to perform plane extraction on a three-dimensional point cloud or a three-dimensional grid, to obtain a plurality of planes, where the three-dimensional point cloud or the three-dimensional grid is a point cloud or a grid related to a target object or a target scene; the space division module 42 is configured to divide a target space based on an area of each plane and a relative positional relationship between the planes, so as to obtain a space division result, where the target space includes a space region covered by each plane; the three-dimensional reconstruction module 43 is configured to perform three-dimensional reconstruction on the target object or the target scene by using the spatial division result, so as to obtain a vectorized model of the target object or the target scene.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or three-dimensional grid to obtain a plurality of planes, the target space containing the planes is spatially divided according to the area of each plane and the relative positional relationships between the planes. Compared with dividing the space along a single direction, this division process takes both the area of each plane and the relative positional relationships between the planes into account, so the resulting spaces are more accurate, and the three-dimensional model built from them is correspondingly more accurate.
In some embodiments, the space division module 42 performs space division on the target space based on the area of each plane and the relative positional relationship between the planes, to obtain a space division result, including: determining the division fraction of each plane in the space to be divided, wherein the space to be divided is a target space or a subspace obtained by dividing the target space n times, n is greater than or equal to 1, and the division fraction of each plane is related to the area of the plane and the relative position relationship between the plane and other planes in the space to be divided; selecting at least one plane from the planes in the space to be segmented as a target plane based on the segmentation score of the planes; space division is carried out on the space to be segmented by utilizing the target plane, so that a next-stage space to be segmented corresponding to the space to be segmented is obtained; and stopping space division and taking a division result obtained by the executed division operation as a space division result in response to the next-stage space to be divided meeting the space division stopping condition.
In this scheme, the target space, or an nth-level subspace obtained by dividing it, is used as the space to be segmented. The segmentation score of each plane is determined from the areas of the planes in the space to be segmented and the relative positional relationships between them, and the scores are then used to select the target plane for splitting the space. By comprehensively considering both the positional relationships and the sizes of the planes, the most suitable plane insertion order is chosen, so the space is divided more efficiently.
In some embodiments, the space division module 42 performs space division on the space to be divided by using the target plane to obtain a next-stage space to be divided corresponding to the space to be divided, including: and dividing other planes except the target plane in the space to be divided into the next-stage space to be divided.
In the above scheme, after the space is divided, other planes except the target plane in the space to be divided are divided into the next-stage spaces to be divided, so that the next-stage spaces to be divided can be conveniently and subsequently divided according to the planes in the next-stage spaces to be divided.
In some embodiments, the space division module 42 divides other planes than the target plane within the space to be divided into the next-stage spaces to be divided, including: dividing a plane at a first side of the target plane into a space to be segmented at a next stage at the first side of the target plane, and dividing a plane at a second side of the target plane into a space to be segmented at a next stage at the second side of the target plane, wherein the first side of the target plane is opposite to the second side of the target plane; and dividing the plane intersected with the target plane into a space to be segmented at the next stage on the first side of the target plane and a space to be segmented at the next stage on the second side of the target plane.
In the above scheme, by dividing each plane into the corresponding next-stage space to be divided according to the relative positional relationship between each plane and the target plane, the planes can be more accurately divided into the next-stage space to be divided.
In some embodiments, the spatial partitioning module 42 determines a partition score for each plane in the space to be partitioned, including: for each plane, determining a first number of planes intersecting the plane, a second number of planes at a first side of the plane, and a third number of planes at a second side of the plane in the space to be segmented, the first side of the plane being opposite the second side of the plane; the first plane number, the second plane number and the third plane number are weighted and fused to obtain a position score about the plane; determining the area fraction of the plane based on the area of the plane and the total number of planes corresponding to the three-dimensional point cloud; based on the position score and the area score, a segmentation score of the plane is obtained.
In the above scheme, some planes may lie at central positions of the target space while others lie at edge positions, and some planes have large areas while others are small. By observing the number of planes intersecting a plane and the numbers of planes on either side of it, it can be judged whether the plane occupies an important position among all the planes. Determining the order in which planes divide the space by considering both this importance and the plane areas makes the space division more efficient and accurate.
In some embodiments, the spatial partitioning module 42 performs weighted fusion of the first number of planes, the second number of planes, and the third number of planes to obtain a location score for the planes, including: determining a first difference between the second number of planes and the third number of planes, and a first product between the first difference and the first weight, and a second product between the first number of planes and the second weight; the sum of the first product and the second product is taken as the location score.
In the above scheme, by performing weighted fusion on the first plane number, the second plane number and the third plane number, a more accurate position score can be obtained.
In some embodiments, the area fraction is a product of a normalized area of the plane and a total number of planes, and the spatial division module 42 obtains a division fraction of the plane based on the location fraction and the area fraction, including: the difference between the area fraction and the position fraction is used as the division fraction.
In the above-described scheme, by determining the area fraction by considering the area of a single plane and the number of planes contained in the space to be segmented and then combining the position fractions of the planes, a more accurate segmentation fraction can be determined.
In some embodiments, before the target space is spatially divided based on the area of each plane and the relative positional relationships between the planes, the space division module 42 is further configured to: for each plane, determine the planes at a first side of the plane as second planes, the planes at a second side as third planes, and the planes other than the second and third planes as first planes intersecting the plane, thereby obtaining the relative positional relationship between the plane and the other planes; and store the relative positional relationships in memory for later recall.
In the scheme, the relative position relation of the planes is firstly determined and stored in the memory, so that the subsequent calculation of the segmentation scores of the planes is facilitated, the relative position relation among the planes is not required to be repeatedly determined, and the calculation cost is reduced.
In some embodiments, before the space division is performed on the target space based on the area of each plane and the relative positional relationship between the planes, the space division module 42 is further configured to: respectively determining adjacent planes of each plane; for each plane, the initial boundary of the plane is expanded towards the adjacent plane direction of the plane, and the intersection point of the plane and each adjacent plane is taken as the new boundary of the plane.
In the above scheme, each plane is expanded with reference to its adjacent planes, and space division is then performed using the expanded planes, so that the relative positional relationships between the expanded plane regions are clearer and the space division result is more accurate.
In some embodiments, the spatial division module 42 determines adjacent planes of the planes, respectively, including: respectively determining the minimum distance between each plane and the inner points of other planes, wherein the inner points of the planes are any point in the planes; and determining that the two planes are adjacent to each other in response to the minimum distance between the inner points of the two planes being less than or equal to a preset value.
In the above scheme, determining whether two planes are adjacent based on the distance between their interior points, rather than based on their boundary points alone, reduces the chance that two genuinely adjacent planes whose boundary points happen to be far apart are incorrectly judged as non-adjacent.
In some embodiments, the spatial partitioning module 42 expands an initial boundary of a plane toward an adjacent plane direction of the plane, including: sampling the initial boundary of the plane to obtain a plurality of sample vertexes; each sample vertex is expanded towards an adjacent plane on a target straight line, wherein the target straight line is a straight line passing through the centroid of the initial boundary and the initial position of the sample vertex.
In the above scheme, expanding sampled boundary points of a plane toward its adjacent planes has lower computational complexity than expanding the entire boundary of the plane toward the adjacent planes.
In some embodiments, the spatial partitioning module 42 expands an initial boundary of a plane toward an adjacent plane direction of the plane and regards an intersection of the plane with each adjacent plane as a new boundary of the plane, including: for each sample vertex, determining an intersection point of the sample vertex after expansion with each adjacent plane; taking the intersection point farthest from the initial position of the sample vertex as a new position of the sample vertex; based on the new positions of the vertices of each sample, a new boundary of the plane is obtained.
In the scheme, the intersection point is taken as a new vertex, and the boundary is determined according to the new vertex, so that the subsequent determination of the relative position relationship of each plane is more accurate.
Referring to fig. 5, fig. 5 is a schematic framework diagram of an electronic device according to an embodiment of the application. The electronic device 50 comprises a memory 51 and a processor 52 coupled to each other, the processor 52 being configured to execute program instructions stored in the memory 51 to implement the steps of any of the three-dimensional reconstruction method embodiments described above. In one specific implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer and a server; the electronic device 50 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the three-dimensional reconstruction method embodiments described above. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor. In addition, the processor 52 may be implemented collectively by integrated circuit chips.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or three-dimensional grid to obtain a plurality of planes, the target space containing the planes is spatially divided according to the area of each plane and the relative positional relationships between the planes. Compared with dividing the space along a single direction, this division process takes both the area of each plane and the relative positional relationships between the planes into account, so the resulting spaces are more accurate, and the three-dimensional model built from them is correspondingly more accurate.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a frame of an embodiment of a computer readable storage medium according to the present application. The computer readable storage medium 60 stores program instructions 601 that can be executed by a processor, the program instructions 601 being configured to implement the steps of any of the three-dimensional reconstruction method embodiments described above.
In the above scheme, after plane extraction is performed on the three-dimensional point cloud or three-dimensional grid to obtain a plurality of planes, the target space containing the planes is spatially divided according to the area of each plane and the relative positional relationships between the planes. Compared with dividing the space along a single direction, this division process takes both the area of each plane and the relative positional relationships between the planes into account, so the resulting spaces are more accurate, and the three-dimensional model built from them is correspondingly more accurate.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the various embodiments each focus on what differs from the other embodiments; for the parts that are the same or similar, the embodiments may refer to one another, and these parts are not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Claims (15)
1. A three-dimensional reconstruction method, comprising:
performing plane extraction on a three-dimensional point cloud or a three-dimensional mesh to obtain a plurality of planes, wherein the three-dimensional point cloud or the three-dimensional mesh is a point cloud or a mesh of a target object or a target scene;
performing space division on a target space based on the areas of the planes and the relative positional relationships between the planes to obtain a space division result, wherein the target space includes the spatial region covered by the planes; and
performing three-dimensional reconstruction of the target object or the target scene by using the space division result to obtain a vectorized model of the target object or the target scene.
2. The method according to claim 1, wherein performing space division on the target space based on the areas of the planes and the relative positional relationships between the planes to obtain the space division result comprises:
for each space to be segmented, performing the following segmentation operation:
determining a segmentation score of each plane in the space to be segmented, wherein the space to be segmented is the target space or a subspace obtained by dividing the target space n times, n being greater than or equal to 1, and the segmentation score of a plane is related to the area of the plane and to the relative positional relationships between the plane and the other planes in the space to be segmented;
selecting, based on the segmentation scores of the planes, at least one plane from the planes in the space to be segmented as a target plane;
performing space division on the space to be segmented by using the target plane to obtain next-stage spaces to be segmented corresponding to the space to be segmented; and
in response to the next-stage spaces to be segmented meeting a space-division stopping condition, stopping the space division and taking the division result obtained by the executed segmentation operations as the space division result.
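The recursive segmentation loop of claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: the helper names `score_fn`, `pick_split`, and `stop` are assumptions standing in for the scoring, splitting, and stopping-condition steps of the claim.

```python
def split_space(planes, score_fn, pick_split, stop):
    """Sketch of the recursive space division of claim 2.

    score_fn(plane, planes) -> segmentation score of a plane,
    pick_split(planes, target) -> the two child plane sets,
    stop(planes) -> True when the stopping condition is met.
    All three helpers are assumptions of this sketch.
    """
    if stop(planes):
        return planes  # leaf subspace: no further division
    # choose the best-scoring plane as the target (splitting) plane
    target = max(planes, key=lambda p: score_fn(p, planes))
    first, second = pick_split(planes, target)
    # recurse into the two next-stage spaces to be segmented
    return (target, split_space(first, score_fn, pick_split, stop),
            split_space(second, score_fn, pick_split, stop))
```

With toy one-dimensional "planes" (plain numbers scored by their value), the function returns a nested split tree whose root is the best-scoring plane.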
3. The method of claim 2, wherein performing space division on the space to be segmented by using the target plane to obtain the next-stage spaces to be segmented corresponding to the space to be segmented comprises:
dividing the planes other than the target plane in the space to be segmented into the next-stage spaces to be segmented.
4. The method according to claim 3, wherein dividing the planes other than the target plane in the space to be segmented into the next-stage spaces to be segmented comprises:
dividing a plane on a first side of the target plane into the next-stage space to be segmented on the first side of the target plane, and dividing a plane on a second side of the target plane into the next-stage space to be segmented on the second side of the target plane, wherein the first side of the target plane is opposite to the second side of the target plane; and
dividing a plane intersecting the target plane into both the next-stage space to be segmented on the first side of the target plane and the next-stage space to be segmented on the second side of the target plane.
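The plane distribution of claims 3 and 4 can be sketched in Python as below. The `side_of` and `intersects` predicates are assumed helpers, not defined by the claims; the key point is that a plane crossing the target plane goes into both child subspaces.

```python
def distribute_planes(planes, target, side_of, intersects):
    """Sketch of claims 3-4: distribute the remaining planes of a space
    into the two next-stage subspaces created by the target plane.

    side_of(p, target) -> +1 / -1 for the first / second side;
    intersects(p, target) -> True if p crosses the target plane.
    Both predicates are assumptions of this sketch.
    """
    first_side, second_side = [], []
    for p in planes:
        if p is target:
            continue  # the target plane itself is consumed by the split
        if intersects(p, target):
            first_side.append(p)   # a crossing plane goes to BOTH children
            second_side.append(p)
        elif side_of(p, target) > 0:
            first_side.append(p)
        else:
            second_side.append(p)
    return first_side, second_side
```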
5. The method according to any one of claims 2 to 4, wherein determining the segmentation score of each plane in the space to be segmented comprises:
for each plane, determining, in the space to be segmented, a first number of planes intersecting the plane, a second number of planes on a first side of the plane, and a third number of planes on a second side of the plane, the first side of the plane being opposite to the second side of the plane;
performing a weighted fusion of the first number of planes, the second number of planes, and the third number of planes to obtain a position score of the plane; determining an area score of the plane based on the area of the plane and the total number of planes corresponding to the three-dimensional point cloud; and
obtaining the segmentation score of the plane based on the position score and the area score.
6. The method of claim 5, wherein performing the weighted fusion of the first number of planes, the second number of planes, and the third number of planes to obtain the position score of the plane comprises:
determining a first difference between the second number of planes and the third number of planes, a first product of the first difference and a first weight, and a second product of the first number of planes and a second weight; and
taking the sum of the first product and the second product as the position score.
7. The method of claim 5, wherein the area score is the product of a normalized area of the plane and the total number of planes, and obtaining the segmentation score of the plane based on the position score and the area score comprises:
taking the difference between the area score and the position score as the segmentation score.
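Claims 5 through 7 together define a concrete scoring rule, which can be sketched as below. Several details are assumptions of this sketch: the claims do not say whether the "first difference" is signed, so an absolute difference (favoring splits that balance the two sides) is used here, the area is normalized by the total plane area, and the default weights are illustrative.

```python
def segmentation_score(plane_area, total_area, total_planes,
                       n_intersecting, n_first_side, n_second_side,
                       w1=1.0, w2=1.0):
    """Sketch of the segmentation score of claims 5-7.

    position score = w1 * |n_first_side - n_second_side| + w2 * n_intersecting
    area score     = (plane_area / total_area) * total_planes
    segmentation score = area score - position score

    The absolute difference, the normalization by total_area, and the
    weight values w1, w2 are assumptions, not specified by the claims.
    """
    position_score = w1 * abs(n_first_side - n_second_side) + w2 * n_intersecting
    area_score = (plane_area / total_area) * total_planes
    return area_score - position_score
```

Under this rule, a large plane that intersects few others and sits near the middle of the remaining planes gets the highest score, which is the usual heuristic for choosing a splitting plane.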
8. The method according to any one of claims 1 to 7, wherein before the space division of the target space based on the areas of the planes and the relative positional relationships between the planes, the method further comprises:
for each plane, determining the planes located on a first side of the plane as second planes, the planes located on a second side of the plane as third planes, and the remaining planes as first planes intersecting the plane, so as to obtain the relative positional relationships between the plane and the other planes; and
storing the relative positional relationships in a memory for subsequent retrieval.
9. The method according to any one of claims 1 to 8, wherein before the space division of the target space based on the areas of the planes and the relative positional relationships between the planes, the method further comprises:
determining the adjacent planes of each plane; and
for each plane, expanding the initial boundary of the plane toward the adjacent planes of the plane, and taking the intersection points of the plane with its adjacent planes as the new boundary of the plane.
10. The method of claim 9, wherein determining the adjacent planes of each plane comprises:
determining the minimum distance between the interior points of each plane and the interior points of the other planes, wherein an interior point of a plane is any point within the plane; and
determining that two planes are adjacent to each other in response to the minimum distance between their interior points being less than or equal to a preset value.
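The adjacency test of claim 10 amounts to a minimum point-to-point distance check between plane samples. A minimal sketch, assuming each plane is represented by a list of sampled interior points (the sampling itself is not specified by the claim):

```python
from itertools import combinations
from math import dist

def adjacent_pairs(plane_points, threshold):
    """Sketch of claim 10: two planes are adjacent when the minimum
    distance between their interior points is at most the preset value.

    plane_points[i] is a list of 3-D points sampled on plane i; this
    point-sample representation is an assumption of the sketch.
    """
    pairs = []
    for i, j in combinations(range(len(plane_points)), 2):
        # minimum distance over all point pairs of the two planes
        min_d = min(dist(a, b)
                    for a in plane_points[i]
                    for b in plane_points[j])
        if min_d <= threshold:
            pairs.append((i, j))
    return pairs
```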
11. The method of claim 9, wherein expanding the initial boundary of the plane toward the adjacent planes of the plane comprises:
sampling the initial boundary of the plane to obtain a plurality of sample vertices; and
expanding each sample vertex toward the adjacent planes along a target straight line, wherein the target straight line is the straight line passing through the centroid of the initial boundary and the initial position of the sample vertex.
12. The method of claim 11, wherein expanding the initial boundary of the plane toward the adjacent planes of the plane and taking the intersection points of the plane with its adjacent planes as the new boundary of the plane comprises:
for each sample vertex, determining the intersection points of the expanded sample vertex with the adjacent planes;
taking the intersection point farthest from the initial position of the sample vertex as the new position of the sample vertex; and
obtaining the new boundary of the plane based on the new positions of the sample vertices.
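The per-vertex expansion of claims 11 and 12 is a ray-plane intersection: the ray runs from the boundary centroid through the sample vertex, and the farthest intersection with an adjacent plane becomes the vertex's new position. A sketch, assuming each adjacent plane is encoded as a (point on plane, unit normal) pair, which is an encoding chosen for illustration, not taken from the claims:

```python
from math import dist

def expand_vertex(centroid, vertex, adjacent_planes):
    """Sketch of claims 11-12 for one sample vertex.

    The target straight line passes through the centroid and the
    vertex; the vertex sits at parameter t = 1 on that ray.  Each
    adjacent plane is a (point_on_plane, unit_normal) pair, an
    assumption of this sketch.
    """
    d = tuple(v - c for v, c in zip(vertex, centroid))  # ray direction
    best, best_dist = vertex, -1.0
    for p0, n in adjacent_planes:
        denom = sum(ni * di for ni, di in zip(n, d))
        if abs(denom) < 1e-9:   # ray parallel to this plane: no hit
            continue
        t = sum(ni * (pi - ci) for ni, pi, ci in zip(n, p0, centroid)) / denom
        if t < 1.0:             # only expand outward, past the vertex
            continue
        hit = tuple(ci + t * di for ci, di in zip(centroid, d))
        hd = dist(hit, vertex)  # distance from the initial position
        if hd > best_dist:      # claim 12: keep the farthest intersection
            best, best_dist = hit, hd
    return best
```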
13. A three-dimensional reconstruction apparatus, comprising:
a plane extraction module, configured to perform plane extraction on a three-dimensional point cloud or a three-dimensional mesh to obtain a plurality of planes, wherein the three-dimensional point cloud or the three-dimensional mesh is a point cloud or a mesh of a target object or a target scene;
a space division module, configured to perform space division on a target space based on the areas of the planes and the relative positional relationships between the planes to obtain a space division result, wherein the target space includes the spatial region covered by the planes; and
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction of the target object or the target scene by using the space division result to obtain a vectorized model of the target object or the target scene.
14. An electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the three-dimensional reconstruction method of any one of claims 1 to 12.
15. A computer-readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the three-dimensional reconstruction method of any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311632943.1A CN117953147A (en) | 2023-11-30 | 2023-11-30 | Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117953147A true CN117953147A (en) | 2024-04-30 |
Family
ID=90804533
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118334278A (en) * | 2024-06-17 | 2024-07-12 | 之江实验室 | Point cloud data processing method, device, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||