Search Results (6)

Search Parameters:
Keywords = fine-grained lightweight building modeling

14 pages, 4452 KiB  
Article
Hollow Concrete Block Based on High-Strength Concrete as a Tool for Reducing the Carbon Footprint in Construction
by Mikhail Elistratkin, Alena Salnikova, Nataliya Alfimova, Natalia Kozhukhova and Elena Pospelova
J. Compos. Sci. 2024, 8(9), 358; https://doi.org/10.3390/jcs8090358 - 13 Sep 2024
Viewed by 635
Abstract
The production and servicing of cement-based building materials is a source of large amounts of carbon dioxide emissions globally. One way to reduce this impact is to lower concrete consumption per cubic meter of building structure by introducing hollow concrete products. To maintain the load-bearing capacity of the structure, however, the strength of the concrete used must be increased significantly, and this increase should come not from higher cement consumption but from more efficient use of the cement. This research focuses on developing a technology for producing thin-walled hollow concrete blocks from high-strength, self-compacting, dispersed micro-reinforced, fine-grained concrete. This concrete provides 2–2.5 times higher strength per unit of Portland cement consumed than ordinary concrete. The external contours and partitions of the thin-walled hollow blocks are formed using disposable formwork, or cores used as void formers, produced by FDM 3D printing. This design solution yields high-strength concrete products with better structural and thermal insulation properties than existing lightweight concrete blocks. Another application of the technology is the production of wall structures of free configuration and cross-section by dividing them, at the digital modeling stage, into individual element-blocks manufactured in a factory environment.
(This article belongs to the Special Issue Research on Sustainable Cement-Based Composites)
Figures:
Figure 1. Research design.
Figure 2. Production of void formers using a 3D printer.
Figure 3. Visualization of reducing the material consumption of concrete products due to the use of high-strength concrete.
Figure 4. Structure of high-strength fine-grained concrete with a micro-reinforcing additive.
Figure 5. Polymer void former and metal cube mold 70 × 70 × 70 mm with installed void formers prepared for molding.
Figure 6. Disposable polymer mold for producing blocks of free configuration.
Figure 7. Molding and subsequent consolidation of hollow concrete blocks: (a) pouring concrete mixture into the mold; (b) hardening of samples in water; (c) hardened samples ready for testing.
Figure 8. Hollow concrete blocks connected with a "Lego block-type" principle: (a) products with a connection system based on the "Lego block-type" principle; (b) assembling blocks into a linear structure; (c) assembling blocks into an angular structure.
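To make the abstract's quantitative claim concrete, here is a tiny illustrative calculation of what "2–2.5 times higher strength per unit of Portland cement" implies for material efficiency. All mix proportions and the void fraction below are hypothetical assumptions for illustration, not values reported in the article.

```python
# Illustrative back-of-the-envelope check of the efficiency claim in the abstract.
# All numeric values below are hypothetical assumptions, not figures from the article.
ordinary_mix      = {"strength_mpa": 40.0, "cement_kg_per_m3": 400.0}
high_strength_mix = {"strength_mpa": 80.0, "cement_kg_per_m3": 400.0}  # ~2x strength per the claim

def cement_per_unit_strength(mix: dict) -> float:
    """Kilograms of Portland cement spent per MPa of compressive strength."""
    return mix["cement_kg_per_m3"] / mix["strength_mpa"]

print(cement_per_unit_strength(ordinary_mix))       # 10.0 kg/MPa
print(cement_per_unit_strength(high_strength_mix))  # 5.0 kg/MPa -> half the cement per unit capacity

# Hollowing compounds the saving: a block with a 40% void fraction (hypothetical)
# uses only 60% of the concrete of a solid block with the same external dimensions.
void_fraction = 0.40
print(1.0 - void_fraction)  # 0.6
```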
23 pages, 1334 KiB  
Article
A Secure Data-Sharing Model Resisting Keyword Guessing Attacks in Edge–Cloud Collaboration Scenarios
by Ye Li, Mengen Xiong, Junling Yuan, Qikun Zhang and Hongfei Zhu
Electronics 2024, 13(16), 3236; https://doi.org/10.3390/electronics13163236 - 15 Aug 2024
Viewed by 650
Abstract
In edge–cloud collaboration scenarios, data sharing is a critical technological tool, yet smart devices face significant challenges in ensuring data-sharing security. Attribute-based keyword search (ABKS) is employed in these contexts to facilitate fine-grained access control over shared data, allowing only users with the necessary privileges to retrieve keywords. However, most current ABKS protocols cannot resist keyword guessing attacks (KGAs), which can be launched by an untrusted cloud server and lead to the exposure of sensitive personal information, threatening secure data sharing. In this work, we build a secure data-sharing model that resists KGAs and uses attribute-based encryption (ABE) as its foundation to achieve fine-grained access control over resources in the ciphertext. To prevent malicious keyword guessing by the cloud server, the edge layer computes two encryption session keys based on group key agreement (GKA) technology, which are used to re-encrypt the data user's secret key for the keyword index and the keyword trapdoor. The model is implemented using the JPBC library. According to the security analysis, the model resists KGAs in the random oracle model. The performance evaluation demonstrates its feasibility and lightweight nature, with clear computational advantages and lower storage consumption.
(This article belongs to the Special Issue Artificial Intelligence in Cyberspace Security)
Figures:
Figure 1. Simplified process for our model.
Figure 2. Problems solved by the methodology.
Figure 3. System model.
Figure 4. Interaction process of different SDSM-KGA algorithms.
Figure 5. Computational costs in KeyGen. (The red line of MKS-VABKS overlaps with the purple line of ABKS-SM.)
Figure 6. Storage costs in KeyGen. (The red line of MKS-VABKS overlaps with the purple line of ABKS-SM.)
Figure 7. Computational costs in Enc.
Figure 8. Storage costs in Enc.
Figure 9. Computational costs in TrapdoorGen. (The red line of MKS-VABKS and the blue line of HP-CPABKS overlap with the purple line of ABKS-SM.)
Figure 10. Storage costs in TrapdoorGen. (The red line of MKS-VABKS, the blue line of HP-CPABKS, and the purple line of ABKS-SM overlap with the cyan line of CABKS-CRF.)
Figure 11. Computational costs in Keyword Search. (The red line of MKS-VABKS and the blue line of HP-CPABKS overlap with the purple line of ABKS-SM.)
Figure 12. Storage costs in Keyword Search. (The gray line of our scheme, the red line of MKS-VABKS, and the blue line of HP-CPABKS overlap with the purple line of ABKS-SM.)
Figure 13. Computational costs in Dec.
Figure 14. Storage costs in Dec. (The gray line of our scheme overlaps with the red line of MKS-VABKS.)
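The central countermeasure described in this abstract is re-encrypting the keyword index and keyword trapdoor with edge-computed session keys so an untrusted cloud server cannot test keyword guesses offline. The sketch below is a deliberately simplified, non-pairing illustration of that idea; the paper's actual construction uses ABE, GKA, and bilinear pairings via JPBC, and the keyword, key handling, and HMAC abstraction here are all illustrative assumptions.

```python
# Toy illustration (not the paper's pairing-based construction) of why blinding the
# keyword index and trapdoor with an edge-layer session key defeats offline KGAs.
import hashlib
import hmac
import os

def keyed_index(keyword: str, key: bytes) -> bytes:
    """Keyword index/trapdoor as a keyed PRF: unrecoverable without the session key."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

# The edge layer derives a session key via group key agreement (abstracted here
# as fresh random bytes shared only with authorized parties).
session_key = os.urandom(32)

# The data owner stores a re-encrypted keyword index; an authorized user submits a
# trapdoor re-encrypted under the same session key, so matching still works.
stored_index = keyed_index("heart_rate", session_key)
user_trapdoor = keyed_index("heart_rate", session_key)
print(stored_index == user_trapdoor)   # True: authorized search succeeds

# An honest-but-curious cloud server guessing keywords offline lacks the session
# key, so its guesses never match the stored index (the KGA fails).
offline_guess = keyed_index("heart_rate", b"attacker-has-no-session-key")
print(offline_guess == stored_index)   # False
```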
28 pages, 16553 KiB  
Review
Progress in Additive Manufacturing of Magnesium Alloys: A Review
by Jiayu Chen and Bin Chen
Materials 2024, 17(15), 3851; https://doi.org/10.3390/ma17153851 - 3 Aug 2024
Viewed by 2020
Abstract
Magnesium alloys, renowned for their lightweight yet high-strength characteristics and exceptional mechanical properties, are highly sought after for numerous applications. The emergence of magnesium alloy additive manufacturing (Mg AM) has further propelled their popularity, offering advantages such as high precision, fast production rates, greater design freedom, and optimized material utilization. This technology holds immense potential for fabricating intricate geometries, complex internal structures, and performance-tailored microstructures, enabling groundbreaking applications. In this paper, we examine the core processes and pivotal influencing factors of the techniques currently employed in Mg AM, including selective laser melting (SLM), electron beam melting (EBM), wire arc additive manufacturing (WAAM), binder jetting (BJ), friction stir additive manufacturing (FSAM), and indirect additive manufacturing (I-AM). Laser powder bed fusion (LPBF) excels in precision but is limited by a low deposition rate and chamber size; WAAM offers cost-effectiveness, high efficiency, and scalability for large components; BJ enables precise material deposition for customized parts with environmental benefits; FSAM achieves fine grain sizes, low defect rates, and potential for precision products; and I-AM boasts a high build rate and industrial adaptability but has been studied less in recent years. This paper also explores possibilities and challenges for future research in AM, two of which are how to combine different AM applications and how to integrate Internet technologies, machine learning, and process modeling with AM; both represent potential breakthroughs in AM.
(This article belongs to the Special Issue 3D Printing Technology with Metal Materials)
Figures:
Figure 1. Examples of Mg alloy biomedical implants employed in clinical applications [13].
Figure 2. (a–f) Mg alloy scaffolds manufactured using the LPBF process. (g) Biomedical components of LPBFed Mg alloy and their application [24].
Figure 3. Publication trends of AMed Mg alloys: year-wise comparison of publication trends of LPBF and WAAM of Mg alloys (WAAM & Mg or WAAM & Magnesium or Wire-arc & Mg or Wire-arc & Magnesium; LPBF & Mg or LPBF & Magnesium or SLM & Mg or SLM & Magnesium; Web of Science (SCIE, SSCI, A&HCI, ISI Proceedings) databases, 2024 version).
Figure 4. Schematic diagram of a PBF system [31].
Figure 5. Schematic of the LPBF process [33].
Figure 6. Grain orientation map of the LPBFed WE43 Mg alloy. (a) Fine, equiaxed, and randomly orientated grains in the last melt pool [44]; (b,c) figures revealing the grain structure and the random texture; (d) large, irregular-shape, and basal-orientated grains [47].
Figure 7. (a) Bright-field image of LPBF-processed WE43. The cross section shows a partially melted zone and a lamellar zone (labeled in the figure). The building direction of the specimen and the directions of the lamellae are indicated by white arrows. (b) Chemical mapping of the lamellar zone and partially melted zone reveals Nd-rich particles on the lamellae boundaries. The corresponding line scan shows varying Nd concentration across adjacent lamellae; concentration peaks in the lamellae's centers are indicated by red arrows. (c) BF-TEM image showing a similar crystallographic orientation of adjacent lamellae in the lamellar zone. (d) Heat-affected zone in LPBF-processed WE43, STEM imaging [44].
Figure 8. Evaporation products formed during SLM processing of magnesium alloys. (1)–(4) describe the machining process of SLM [60].
Figure 9. (a) Material deposition for wire arc additive manufacturing [27]; (b) symmetric representation of the GTAW–WAAM process [65]; (c) CMT–WAAM system [12].
Figure 10. EBSD inverse pole figure orientation maps, grain size distribution figures, and pole figures of WE43-Mg cross-sectional specimens deposited by different CMT process modes in the top region: (a) scanning location region A; inverse pole figures with reconstructed grain boundaries of (b) CMT, (c) CMT-P, (d) CMT-ADV, (e) CMT-PADV [12].
Figure 11. Microstructures of AZ31 deposited by different pulse frequencies: (a) 500 Hz, (b) 100 Hz, (c) 10 Hz, (d) 5 Hz, (e) 2 Hz, and (f) 1 Hz [75].
Figure 12. Comparison of mechanical properties between (a, a1, a2) WAAMed Mg alloy and (b, b1, b2) LPBFed Mg alloy [24].
Figure 13. Schematic illustration of the FSAM process [108].
Figure 14. EBSD orientation map of the WE43 alloy built by joining four layers of 1.7 mm thick sheets via the FSAM method, showing the distribution of grains in (a) the top layer (layer 4), (b) the sandwiched microstructure at the interface of layers 3 and 4, (c) the bottom layer (layer 1), and (d) a representative thermomechanically affected zone. Adapted from [102].
Figure 15. (a) Process flow diagram for binder jetting printing [123]; (b) principle of binder-less jetting: (b1) solvent deposition, (b2) development of capillary bridges among wet particles, (b3) pre-addition of the next powder layer, (b4) capillary action forming bridges between particles in new and previous layers, and (b5) fully developed solid structure formed after drying and sintering [29].
Figure 16. A printed thermoplastic mold designed through computer-aided design was first manufactured, and a silk fibroin (SF) scaffold was obtained using the indirect additive manufacturing technique. The top view (bar = 500 µm) and side view (bar = 200 µm) of the circled area in the SF scaffold by scanning electron microscopy are shown. Arrowheads indicate penetrating channels. Solid and dotted lines demonstrate channel and inter-channel regions, respectively [62].
Figure 17. Macrographs of the process flow for the indirect additive manufacturing of a Mg scaffold at different stages, which include fabricating a NaCl mold via direct ink writing, sintering the mold, infiltrating the mold with liquid Mg, and leaching the mold to obtain the Mg scaffold. Adapted from [157].
25 pages, 34697 KiB  
Article
Lightweight Pedestrian Detection Network for UAV Remote Sensing Images Based on Strideless Pooling
by Sanzai Liu, Lihua Cao and Yi Li
Remote Sens. 2024, 16(13), 2331; https://doi.org/10.3390/rs16132331 - 26 Jun 2024
Cited by 1 | Viewed by 1317
Abstract
The need for pedestrian target detection in uncrewed aerial vehicle (UAV) remote sensing images has become increasingly significant as the technology continues to evolve. UAVs equipped with high-resolution cameras can capture detailed imagery of various scenarios, making them ideal for monitoring and surveillance applications. Pedestrian detection is particularly crucial in scenarios such as traffic monitoring, security surveillance, and disaster response, where the safety and well-being of individuals are paramount. However, pedestrian detection in UAV remote sensing images poses several challenges. First, the small size of pedestrians relative to the overall image, especially at higher altitudes, makes them difficult to detect. Second, the varying backgrounds and lighting conditions in remote sensing images further complicate detection. Traditional object detection methods often struggle with these complexities, resulting in decreased detection accuracy and increased false positives. To address these concerns, this paper proposes a lightweight object detection model that integrates GhostNet and YOLOv5s. Building upon this foundation, we further introduce the SPD-Conv module so that fine-grained image features are preserved during downsampling, enhancing the model's ability to recognize small-scale objects. A coordinate attention module is also introduced to further improve recognition accuracy. In the proposed model, the number of parameters is reduced to 4.77 M from 7.01 M in YOLOv5s, a 32% reduction, while the mean average precision (mAP) increases from 0.894 to 0.913, an improvement of 1.9 percentage points. We name the proposed model "GSC-YOLO". This study advances the lightweighting of UAV target detection models and addresses the challenges of object detection in complex scenes.
Figures:
Figure 1. The architecture of the GSC-YOLO model. We replaced all convolutional structures in the original YOLOv5 with Ghost Conv, applied SPD-Conv modules for downsampling operations, and incorporated the Coordinate Attention mechanism to enhance detection accuracy.
Figure 2. The architecture of the original YOLOv5 model. In YOLOv5, CBS stands for Conv-BN-SiLU: a convolutional layer followed by batch normalization (BN) and the SiLU activation function. C3 is a network module in YOLOv5 used to extract features (the specific structures of CBS and C3 are shown in Figure 3). Spatial Pyramid Pooling Fusion (SPPF) captures information at different scales using pooling kernels of different sizes.
Figure 3. Comparison of the original YOLOv5 structural unit and the GhostNet structural unit that replaces it. The upper section shows the original YOLOv5 unit, while the unit below illustrates the replacement unit we used. This modification effectively reduces the model's parameter count and improves inference speed.
Figure 4. Comparison between a convolution layer and the Ghost module: ordinary convolution directly convolves the entire input feature map with filters, while the Ghost module splits the operation into two stages, using a standard convolution followed by a lightweight convolution (depthwise separable convolution).
Figure 5. Ghost Bottleneck. In the structure on the left, the backbone consists of two concatenated Ghost Modules (GM). With a stride of 1, this configuration does not compress the height and width of the input feature layer, thereby deepening the network. In contrast, the structure on the right introduces a depthwise separable convolution with a stride of 2 between the two GMs, which compresses the feature map's height and width to half the input size.
Figure 6. The process of downsampling the feature map using SPD-Conv when scale = 2, where the star represents the 1 × 1 convolution operation. SPD-Conv first divides the input feature map into four sub-feature maps, then concatenates them along the channel direction, and finally employs a 1 × 1 convolution to achieve the desired number of channels.
Figure 7. Replacement of the downsampling module with SPD-Conv. We adopt SPD-Conv within the backbone and neck architectures, respectively, as a substitute for the conventional downsampling operation of standard convolutions, in order to preserve fine-grained characteristics during transition phases.
Figure 8. Comparison of the structures of two attention mechanisms: (a) SE, (b) CA. CA can be considered an extension of SE, as it not only attends to inter-channel correlations but also concentrates attention on spatial locations, thereby enhancing the model's spatial awareness.
Figure 9. Pedestrian images in the dataset corresponding to different scenarios: (a) park; (b) street; (c) forest; (d) snowfield.
Figure 10. Target instances are labeled as "person"; the number after the label is the confidence score assigned to the target by model inference. (a) Annotated pedestrian images in the dataset. Detection results with confidence scores for (b) the original YOLOv5s and (c) our proposed GSC-YOLO.
Figure 11. As the altitude of the UAV increases, pedestrian targets become smaller. (a) Ground truth. Detection results for (b) YOLOv5s; (c) GSC-YOLO.
Figure 12. After the UAV flies to a more complex scene, the object detection task becomes increasingly challenging. (a) Ground truth. Detection results for (b) YOLOv5s; (c) GSC-YOLO.
Figure 13. mAP variation curves during the training process for (a) YOLOv5s; (b) GSC-YOLO.
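The Figure 6 caption describes the space-to-depth downsampling idea precisely: split the feature map into four sub-maps, concatenate them along the channel axis, then apply a 1 × 1 convolution. The following is a minimal PyTorch sketch of that operation, assuming scale = 2; the module and layer names are illustrative and are not the authors' code.

```python
# Minimal sketch of the SPD-Conv downsampling step described in the Figure 6 caption.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided 1x1 convolution (scale = 2).

    Instead of a strided convolution or pooling, the feature map is split into
    2x2 interleaved sub-maps that are concatenated along the channel axis, so no
    pixel-level information is discarded before the convolution mixes channels.
    """
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(4 * in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gather the four interleaved sub-maps (even/odd rows x even/odd columns).
        x = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )
        return self.conv(x)

# Example: halve the spatial resolution of a 64-channel map without a strided conv.
feat = torch.randn(1, 64, 80, 80)
print(SPDConv(64, 128)(feat).shape)  # torch.Size([1, 128, 40, 40])
```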
21 pages, 24584 KiB  
Article
Lightweight Semantic Architecture Modeling by 3D Feature Line Detection
by Shibiao Xu, Jiaxi Sun, Jiguang Zhang, Weiliang Meng and Xiaopeng Zhang
Remote Sens. 2023, 15(8), 1957; https://doi.org/10.3390/rs15081957 - 7 Apr 2023
Cited by 1 | Viewed by 1969
Abstract
Existing architecture semantic modeling methods for 3D complex urban scenes continue to face difficulties such as limited training data, lack of semantic information, and inflexible model processing. Focusing on extracting accurate semantic information and adopting it in the modeling process, this work presents a framework for lightweight modeling of buildings that joins point cloud semantic segmentation with 3D feature line detection constrained by geometric and photometric consistency. The main steps are: (1) extraction of single buildings from point clouds using 2D–3D semi-supervised semantic segmentation under photometric and geometric constraints; (2) generation of lightweight building models using 3D plane-constrained multi-view feature line extraction and optimization; (3) introduction of detailed semantics of building elements into independent 3D building models using fine-grained segmentation of multi-view images, achieving high-accuracy lightweight architecture modeling with fine-grained semantic information. Experimental results demonstrate that the framework can perform independent lightweight modeling of each building in point clouds at various scales and scenes, with accurate geometric appearance details and realistic textures. It also enables independent processing and analysis of each building in the scene, making the models more useful in practical applications.
Graphical abstract
Figures:
Figure 1. Overview of the proposed framework. Heterogeneous Data: unlike traditional methods, which take sparse pure LiDAR point clouds as input, our method leverages heterogeneous data including dense point clouds and multi-view images to generate vectorized fine-grained semantic models. Semantic Segmentation: to eliminate background and noise point clouds, a coarse-level segmentation of the point cloud is created from the heterogeneous data in this step. Under multi-view photometric consistency constraints and 3D co-planar geometric consistency constraints, our method is robust to common challenges in point cloud semantic segmentation, such as occlusion, dense clouds, and complex structures. Facet Contour: to represent architecture as a lightweight model instead of a memory-costly dense point cloud, polygon facet contours are generated from 3D feature lines detected from multi-view images and optimized jointly with the dense point cloud of the architecture. Vectorized Semantic Modeling: aiming to produce vectorized fine-grained semantics with a lightweight model, fine-grained semantic pixels are detected from multi-view images and back-projected onto the architecture surface. For a lightweight semantic representation, 3D feature lines with fine-grained pixels are leveraged to optimize the vectorized fine-grained semantic annotation.
Figure 2. Three common situations when filtering line segments. Line segments are shown as green lines, and search bounding boxes in red. Points of the facet outside the bounding box are green; points inside the bounding box are partitioned into the negative set x_neg and the positive set x_pos, displayed in red and blue, respectively. The left situation meets the reservation condition. In the middle situation, max(x_neg/x_pos, x_pos/x_neg) < 2. In the right situation, x_neg + x_pos < τ.
Figure 3. Line segment direction acquisition. To assign line segments a proper direction, each detected line segment is matched with a directed edge in the facet's rough boundary. As shown above, each matched pair is tagged with the same number. After the matching process, line segments are assigned the direction of the matched edges in the rough boundary.
Figure 4. Comparison of different semantic segmentation methods on Dataset I: (a) original point cloud, (b) ground truth, (c) PointNet, (d) PointNet++, (e) RandLANet, (f) PointFormer, (g) PointNeXt, (h) ours.
Figure 5. Dataset I, a large urban area, used in our experiments. The urban scene of Dataset I, which includes 52 million points, was reconstructed from 1705 images collected by an unmanned aerial vehicle. The original multi-view images are shown in the top left of the image; the scene point cloud reconstructed and segmented with the multi-view images is shown in the bottom left. The point cloud semantic segmentation results generated by our algorithm are shown in the bottom right. After extracting the building point clouds, the building models can be optimized and vectorized; the produced vectorized semantic models are shown in the top right.
Figure 6. Visualization of fine-grained models. The models generated by our algorithm before fine-grained semantic generation are shown in (a); the models after applying fine-grained semantic parsing are displayed in (b).
Figure 7. Visualization of fine-grained semantic segmentation on the dataset of [38]. (a) Selected images from the dataset. (b) Results generated by DeepLabv3+ [34] before Geometric-based Contour Optimization (GCO). After applying GCO, the final results are presented in (c). Compared with the pixel-wise segmentation result, the edges of the segmentation are more lightweight and smoother after optimization.
Figure 8. Visualization of our model reconstruction on the dataset of [38]. The top of the figure shows the multi-view images, the middle shows the image segmentation produced by GCO optimization based on DeepLabv3+ [34] segmentation results, and the bottom shows the vectorized semantics with the pseudo building model.
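The Figure 2 caption specifies when a detected line segment survives filtering: it must be supported by enough facet points inside its search box, and those points must fall predominantly on one side of the segment. Below is a hedged Python sketch of that rule; the ratio threshold of 2 comes from the caption, while the default value of τ and the function interface are assumptions for illustration only.

```python
# Sketch of the line-segment reservation rule summarized in the Figure 2 caption.
def keep_segment(x_neg: int, x_pos: int, tau: int = 20) -> bool:
    """Return True if the segment meets the reservation condition.

    x_neg / x_pos: counts of facet points on the two sides of the segment inside
    its search bounding box. tau is the minimum total support (assumed value).
    """
    if x_neg + x_pos < tau:            # right case: too few supporting points
        return False
    if x_neg == 0 or x_pos == 0:       # all points on one side: clearly a boundary
        return True
    ratio = max(x_neg / x_pos, x_pos / x_neg)
    return ratio >= 2                  # middle case (balanced split, ratio < 2) is rejected

print(keep_segment(30, 5))   # True: strong imbalance with enough points
print(keep_segment(12, 10))  # False: balanced split
print(keep_segment(6, 4))    # False: too few points for the assumed tau
```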
28 pages, 4798 KiB  
Article
Fusion of a Static and Dynamic Convolutional Neural Network for Multiview 3D Point Cloud Classification
by Wenju Wang, Haoran Zhou, Gang Chen and Xiaolin Wang
Remote Sens. 2022, 14(9), 1996; https://doi.org/10.3390/rs14091996 - 21 Apr 2022
Cited by 7 | Viewed by 3135
Abstract
Three-dimensional (3D) point cloud classification methods based on deep learning have good classification performance; however, they adapt poorly to diverse datasets, and their classification accuracy must be improved. Therefore, FSDCNet, a neural network model based on the fusion of static and dynamic convolution, is proposed and applied to multiview 3D point cloud classification in this paper. FSDCNet devises a view selection method with fixed and random viewpoints, which effectively avoids the overfitting caused by traditional fixed viewpoints. A local feature extraction operator with adaptive weight fusion of dynamic and static convolution was designed to improve the model's adaptability to different types of datasets. To address the large parameter counts and high computational complexity of current dynamic convolution methods, a lightweight and adaptive dynamic convolution operator was developed. In addition, FSDCNet builds a global attention pooling that integrates the most crucial information from different view features. Due to these characteristics, FSDCNet is more adaptable, extracts more fine-grained detail, and improves the classification accuracy of point cloud data. The proposed method was applied to the ModelNet40 and Sydney Urban Objects datasets. In these experiments, FSDCNet outperformed its counterparts, achieving state-of-the-art point cloud classification accuracy. For the ModelNet40 dataset, the overall accuracy (OA) and average accuracy (AA) of FSDCNet in a single view reached 93.8% and 91.2%, respectively, superior to those of many other methods using 6 and 12 views. FSDCNet obtained the best results for 6 and 12 views, achieving 94.6%, 93.3%, 95.3%, and 93.6% in the OA and AA metrics, respectively. For the Sydney Urban Objects dataset, FSDCNet achieved an OA of 81.2% and an F1 score of 80.1% in a single view, higher than most of the compared methods. With 6 and 12 views, FSDCNet reached an OA of 85.3% and 83.6% and an F1 score of 85.5% and 83.7%, respectively.
Graphical abstract
Figures:
Figure 1. A series of examples of 3D point cloud acquisition.
Figure 2. The framework of our FSDCNet model used to classify a 3D point cloud.
Figure 3. Multiview selection with a fixed random viewpoint.
Figure 4. FSDC local feature extraction. (a) The architecture of local feature extraction. (b) The structure of the i-th FSDC layer.
Figure 5. Lightweight dynamic convolution. "∗" denotes the dot product.
Figure 6. Adaptive attention pooling. (a) Dynamic weight generation. (b) Global views fusion.
Figure 7. ModelNet40 dataset.
Figure 8. Sydney Urban Objects dataset.
Figure 9. Confusion matrices for all categories of the two datasets. (a) ModelNet40. (b) Sydney Urban Objects.
Figure 10. Average ROC curve for all classes on the two datasets.
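The abstract names two core ideas: a lightweight, adaptive dynamic convolution and an adaptive weighted fusion of static and dynamic convolution branches. The sketch below is a generic PyTorch illustration of those two ideas under assumed shapes and module names; it is not FSDCNet's actual operator, and the expert-mixture formulation of the dynamic branch is a stand-in for whatever lightweight construction the paper uses.

```python
# Generic sketch: dynamic convolution as an input-conditioned mixture of depthwise
# experts, fused with a static convolution branch via a learned adaptive weight.
import torch
import torch.nn as nn

class LightweightDynamicConv(nn.Module):
    """Dynamic convolution: per-sample attention over K small depthwise expert kernels."""
    def __init__(self, channels: int, k: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
            for _ in range(k)
        )
        self.route = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, k), nn.Softmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.route(x)                                   # (N, K) per-sample expert weights
        return sum(attn[:, i].view(-1, 1, 1, 1) * expert(x)    # weighted sum of expert outputs
                   for i, expert in enumerate(self.experts))

class FSDCBlock(nn.Module):
    """Adaptive weighted fusion of a static convolution branch and the dynamic branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.static_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.dynamic_conv = LightweightDynamicConv(channels)
        self.fusion_logit = nn.Parameter(torch.zeros(1))       # learned fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.fusion_logit)                   # adaptive weight in (0, 1)
        return a * self.static_conv(x) + (1 - a) * self.dynamic_conv(x)

view_features = torch.randn(2, 32, 56, 56)                     # features of rendered views (assumed shape)
print(FSDCBlock(32)(view_features).shape)                      # torch.Size([2, 32, 56, 56])
```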