Search Results (1)

Search Parameters:
Keywords = photometric and geometric point cloud segmentation

21 pages, 24584 KiB  
Article
Lightweight Semantic Architecture Modeling by 3D Feature Line Detection
by Shibiao Xu, Jiaxi Sun, Jiguang Zhang, Weiliang Meng and Xiaopeng Zhang
Remote Sens. 2023, 15(8), 1957; https://doi.org/10.3390/rs15081957 - 7 Apr 2023
Cited by 1 | Viewed by 1969
Abstract
Existing architecture semantic modeling methods for complex 3D urban scenes continue to face difficulties such as limited training data, a lack of semantic information, and inflexible model processing. Focusing on extracting accurate semantic information and adopting it in the modeling process, this work presents a framework for lightweight modeling of buildings that joins point cloud semantic segmentation with 3D feature line detection, constrained by geometric and photometric consistency. The main steps are: (1) extraction of single buildings from point clouds using 2D-3D semi-supervised semantic segmentation under photometric and geometric constraints; (2) generation of lightweight building models using 3D plane-constrained multi-view feature line extraction and optimization; (3) introduction of detailed semantics of building elements into the independent 3D building models using fine-grained segmentation of multi-view images, achieving high-accuracy lightweight architecture modeling with fine-grained semantic information. Experimental results demonstrate that the framework can perform independent lightweight modeling of each building in point clouds at various scales and scenes, with accurate geometric appearance details and realistic textures. It also enables independent processing and analysis of each building in the scene, making the models more useful in practical applications.
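The abstract's three stages can be read as a simple pipeline. Below is a minimal structural sketch in Python; every helper named here (segment_buildings, detect_feature_lines, polygonal_model_from_lines, backproject_fine_semantics) is a hypothetical placeholder for one stage of the paper's method, not a function published by the authors.

```python
# Structural sketch of the three-stage pipeline described in the abstract.
# All helper functions are illustrative placeholders, not the authors' API.

def lightweight_semantic_modeling(point_cloud, multi_view_images):
    # Stage 1: 2D-3D semi-supervised segmentation under photometric and
    # co-planar geometric consistency isolates single-building point clouds.
    buildings = segment_buildings(point_cloud, multi_view_images)

    models = []
    for cloud in buildings:
        # Stage 2: plane-constrained multi-view 3D feature lines are detected
        # and jointly optimized into a lightweight polygonal facet contour.
        lines = detect_feature_lines(cloud, multi_view_images)
        model = polygonal_model_from_lines(cloud, lines)

        # Stage 3: fine-grained image segmentation is back-projected onto the
        # model surface and vectorized along the 3D feature lines.
        model["semantics"] = backproject_fine_semantics(model, multi_view_images)
        models.append(model)
    return models
```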
Graphical abstract
Figure 1
Overview of the proposed framework. Heterogeneous Data: unlike traditional methods, which take sparse pure LiDAR point clouds as input, our method leverages heterogeneous data, including dense point clouds and multi-view images, to generate vectorized fine-grained semantic models. Semantic Segmentation: to eliminate background and noise points, a coarse-level segmentation of the point cloud is created from the heterogeneous data in this step. Under multi-view photometric consistency constraints and 3D co-planar geometric consistency constraints, our method is robust to common challenges in point cloud semantic segmentation, such as occlusion, dense clouds, and complex structures. Facet Contour: to represent architecture with a lightweight model instead of a memory-costly dense point cloud, a polygonal facet contour is generated from 3D feature lines, which are detected from multi-view images and optimized jointly with the dense point cloud of the architecture. Vectorized Semantic Modeling: aiming to produce vectorized fine-grained semantics with a lightweight model, fine-grained semantic pixels are detected from multi-view images and back-projected onto the architecture surface. For a lightweight semantic representation, the 3D feature lines with fine-grained pixels are leveraged to optimize the vectorized fine-grained semantic annotation.
Figure 2
Three common situations when filtering line segments. Line segments are shown as green lines; search bounding boxes are red. Facet points outside the bounding box are green. Points inside the bounding box are partitioned into negative ($x_{neg}$) and positive ($x_{pos}$) sets, displayed in red and blue, respectively. The left situation meets the reservation condition. In the middle situation, $\max\left(\frac{x_{neg}}{x_{pos}}, \frac{x_{pos}}{x_{neg}}\right) < 2$. In the right situation, $x_{neg} + x_{pos} < \tau$.
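Read as a filtering rule, the caption suggests a segment is reserved only when it has enough supporting points inside its search box and those points lie predominantly on one side. A minimal runnable sketch, assuming $x_{neg}$ and $x_{pos}$ are the point counts on the two sides and $\tau$ is a support threshold (the helper itself is hypothetical):

```python
def keep_segment(x_neg: int, x_pos: int, tau: float) -> bool:
    """Reservation rule for a candidate line segment (Figure 2)."""
    # Right situation: too little point support inside the box -> drop.
    if x_neg + x_pos < tau:
        return False
    # Middle situation: points balanced on both sides, i.e. the segment
    # crosses the facet interior rather than tracing its edge -> drop.
    if min(x_neg, x_pos) > 0 and max(x_neg / x_pos, x_pos / x_neg) < 2:
        return False
    # Left situation: enough one-sided support -> reserve the segment.
    return True

print(keep_segment(40, 3, tau=20))   # True: strong one-sided support
print(keep_segment(20, 18, tau=20))  # False: balanced sides (middle case)
print(keep_segment(4, 2, tau=20))    # False: too few points (right case)
```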
Figure 3
Line segment direction acquisition. To assign line segments a proper direction, each detected line segment is matched with a directed edge in the facet's rough boundary. As shown above, matched pairs are tagged with the same number. After the matching process, each line segment is assigned the direction of its matched edge in the rough boundary.
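The matching criterion itself is not specified in the caption. A rough sketch under the assumption that each segment is matched to the rough-boundary edge with the nearest midpoint, then flipped to agree with that edge's direction:

```python
import numpy as np

def assign_directions(segments, boundary_edges):
    """segments: list of (p, q) endpoint pairs, each a length-2 np.ndarray.
    boundary_edges: list of directed (a, b) pairs on the rough boundary.
    Midpoint-distance matching is an assumption, not the paper's exact cost."""
    directed = []
    for p, q in segments:
        mid = (p + q) / 2.0
        # Match the segment to the boundary edge whose midpoint is closest.
        a, b = min(boundary_edges,
                   key=lambda e: np.linalg.norm((e[0] + e[1]) / 2.0 - mid))
        # Flip the segment if it points against the matched edge's direction.
        if np.dot(q - p, b - a) < 0:
            p, q = q, p
        directed.append((p, q))
    return directed
```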
Figure 4
Comparison of different semantic segmentation methods on Dataset I. (a) Original point cloud; (b) ground truth; (c) PointNet; (d) PointNet++; (e) RandLA-Net; (f) PointFormer; (g) PointNeXt; (h) ours.
Figure 5
Dataset I, a large urban area, used in our experiments. The urban scene of Dataset I, which includes 52 million points, was reconstructed from 1705 images collected by an unmanned aerial vehicle. The original multi-view images are shown in the top left of the image. From the multi-view images, we reconstruct and segment the scene point cloud shown in the bottom left. The point cloud semantic segmentation results generated by our algorithm are shown in the bottom right. After extracting the building point clouds, the building models can be optimized and vectorized; the produced vectorized semantic models are shown in the top right.
Figure 6
Visualization of fine-grained models. (a) Models generated by our algorithm before fine-grained semantic generation. (b) Models after applying fine-grained semantic parsing.
Figure 7
Visualization of fine-grained semantic segmentation on the dataset of [38]. (a) Selected images from the dataset. (b) Results generated by DeepLabv3+ [34] before Geometric-based Contour Optimization (GCO). (c) Final results after applying GCO. Compared with the pixel-wise segmentation results, the segmentation edges are more lightweight and smoother after optimization.
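The caption does not detail how GCO works internally. As a plainly-labeled stand-in for the "lightweight, smoother edges" effect it describes, the sketch below simplifies a pixel-wise contour with the classic Douglas-Peucker algorithm; this is an illustration of contour lightweighting, not the paper's GCO.

```python
import numpy as np

def douglas_peucker(points: np.ndarray, eps: float) -> np.ndarray:
    """Recursively drop contour vertices closer than eps to the chord.
    points: (N, 2) array of an open polyline along the contour."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = float(np.hypot(dx, dy))
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of each vertex to the start-end chord.
        dists = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(dists))
    if dists[i] <= eps:
        return np.vstack([start, end])  # the chord fits this span well
    left = douglas_peucker(points[: i + 1], eps)
    return np.vstack([left[:-1], douglas_peucker(points[i:], eps)])
```

A lower eps keeps more vertices; a higher eps yields a lighter, smoother polygon, analogous to the effect shown in panel (c).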
Figure 8
Visualization of our model reconstruction on the dataset of [38]. The top of the figure shows the multi-view images; the middle shows the image segmentation produced by GCO optimization of the DeepLabv3+ [34] segmentation results; the bottom shows the vectorized semantics with the pseudo building model.