Topic Editors

Dr. Yu-Shen Liu
School of Software, BNRist, Tsinghua University, Beijing 100084, China
Prof. Dr. Xiaoping Zhou
Beijing Key Laboratory of Intelligent Processing for Building Big Data, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
Dr. Jia-Rui Lin
School of Civil Engineering, Tsinghua University, Beijing, China
Dr. Ge Gao
Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China
Dr. Yi Fang
Associate Professor, Electrical Engineering and Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
Prof. Dr. Anthony Tzes
Electrical Engineering and Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates

3D Computer Vision and Smart Building and City

Abstract submission deadline: closed (20 February 2023)
Manuscript submission deadline: closed (30 April 2023)
Viewed by: 33,220

Topic Information

Dear Colleagues,

3D computer vision is an interdisciplinary subject involving computer vision, computer graphics, artificial intelligence and other fields. Its main contents include 3D perception, 3D understanding and 3D modeling. In recent years, 3D computer vision technology has developed rapidly and has been widely used in unmanned aerial vehicles, robotics, autonomous driving, AR, VR and other fields. Smart buildings and cities use various information technologies and innovative concepts to connect their systems and services, so as to improve the efficiency of resource utilization, optimize management and services, and improve the quality of life. Smart buildings and cities can involve frontier techniques such as 3D computer vision for building information models, digital twins, city information models, simultaneous localization and mapping, robotics, etc. The application of 3D computer vision in smart buildings and cities is a valuable research direction, but it still faces many major challenges in theory and technology. This topic focuses on the theory and technology of 3D computer vision for smart buildings and cities. We invite the research community to publish papers and provide innovative technologies, theories or case studies.

Dr. Yu-Shen Liu
Prof. Dr. Xiaoping Zhou
Dr. Jia-Rui Lin
Dr. Ge Gao
Dr. Yi Fang
Prof. Dr. Anthony Tzes
Topic Editors

Keywords

  • smart building and city
  • 3D computer vision
  • SLAM
  • building information model
  • city information model
  • robot

Participating Journals

Journal Name   | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Sensors        | 3.4           | 7.3       | 2001          | 16.8 Days               | CHF 2600
Sustainability | 3.3           | 6.8       | 2009          | 20 Days                 | CHF 2400
Buildings      | 3.1           | 3.4       | 2011          | 17.2 Days               | CHF 2600
Energies       | 3.0           | 6.2       | 2008          | 17.5 Days               | CHF 2600
Drones         | 4.4           | 5.6       | 2017          | 21.7 Days               | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to post a preprint at Preprints.org prior to publication in order to:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (16 papers)

14 pages, 17481 KiB  
Article
Integration with Visual Perception—Research on the Usability of a Data Visualization Interface Layout in Zero-Carbon Parks Based on Eye-Tracking Technology
by Guangxu Li, Lingyu Wang and Jie Hu
Sustainability 2023, 15(14), 11102; https://doi.org/10.3390/su151411102 - 17 Jul 2023
Cited by 2 | Viewed by 1633
Abstract
With the continued application of data visualization technology in sustainable development, the construction of carbon emission monitoring platforms is becoming increasingly popular in industrial parks. However, there are many kinds of such interfaces, the usability of which remains unclear. Therefore, in order to explore the usability of current carbon emission visualization interfaces in parks and put forward humanized optimization strategies for their subsequent design, this study used eye-tracking technology to analyze the data readability of six types of layouts from three aspects of visual perception features: integrity, understandability, and selectivity. Quantitative data from eye movement experiments and visual perception characteristics were evaluated using a Likert scale in an analysis of different layouts, and the correlation data between three visual perception characteristics and the readability of different layout data were obtained using an SPSS tool. The results show that, compared with a layout containing 3D graphics, the pure data type of interface has a shorter task completion time and higher readability; however, it provides fewer choices for users and is less interesting. In addition, there is a significant negative correlation between integrity and task completion time; the more complete the interface layout, the shorter the task completion time. In summary, a certain correlation was found between visual perception characteristics and the readability of interface layout using this method. At the same time, the advantages and disadvantages of different interface layouts were also analyzed, and more humanized optimization directions and strategies were devised. This is vital for aiding subsequent research on the influence of specific layout elements to optimize visualization interfaces that display carbon emission data. Full article
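The reported link between perceived integrity and task completion time comes down to a bivariate correlation. As a rough illustration (not the authors' SPSS workflow), the following Python sketch computes a Pearson correlation on hypothetical Likert ratings and completion times; the variable names and values are invented for the example.

    from scipy.stats import pearsonr

    # Hypothetical Likert-scale integrity ratings (1-5) and task completion
    # times (seconds) for six interface layouts; real values would come from
    # the eye-tracking experiment and questionnaire described in the paper.
    integrity = [4.2, 3.8, 4.6, 2.9, 3.1, 4.9]
    completion_time = [21.4, 25.3, 18.9, 33.0, 30.2, 17.1]

    r, p = pearsonr(integrity, completion_time)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported trend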
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Carbon emission visualization process.
Figure 2. Representative sample.
Figure 3. Design of experimental stimulus materials: (a) type of interface layout after classification and analysis; (b) stimulus material processed via homogenization and de-differentiation.
Figure 4. Sample images: (a) gaze heat map; (b) eye track map.
Figure 5. Results of the collection of two eye movement indicators: (a) gazing hotspot accumulation graph of the six types of samples; (b) cumulative plot of the eye track for the six types of samples.
Figure 6. Visual perception evaluation of interfaces: (a) visual perception evaluation values of the six samples; (b) mean visual perception evaluation of the two types of interfaces.
27 pages, 15219 KiB  
Article
SGGTSO: A Spherical Vector-Based Optimization Algorithm for 3D UAV Path Planning
by Wentao Wang, Chen Ye and Jun Tian
Drones 2023, 7(7), 452; https://doi.org/10.3390/drones7070452 - 7 Jul 2023
Cited by 6 | Viewed by 1937
Abstract
The application of 3D UAV path planning algorithms in smart cities and smart buildings can improve logistics efficiency, enhance emergency response capabilities and provide services such as indoor navigation, thus bringing more convenience and safety to people's lives and work. The core of the 3D UAV path planning problem is to plan an optimal flight path while ensuring that the UAV does not collide with obstacles during flight. This paper transforms the 3D UAV path planning problem into a multi-constrained optimization problem by formulating the path length cost function, the safety cost function, the flight altitude cost function and the smoothness cost function. It encodes each feasible flight path as a set of vectors consisting of magnitude, elevation and azimuth angles and searches for the optimal flight path in the configuration space by means of a metaheuristic algorithm. Subsequently, this paper proposes an improved tuna swarm optimization algorithm, called SGGTSO, based on a sigmoid nonlinear weighting strategy, a multi-subgroup Gaussian mutation operator and an elite individual genetic strategy. Finally, the SGGTSO algorithm is compared with other classical and novel metaheuristics on a 3D UAV path planning problem with nine different terrain scenarios and on the CEC2017 test function set. The comparison results show that the flight paths planned by SGGTSO significantly outperform those of the comparison algorithms in the nine terrain scenarios, and that the optimization performance of SGGTSO outperforms the comparison algorithms on 24 CEC2017 test functions. Full article
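The spherical-vector encoding described above can be made concrete with a short decoding routine: each path segment is stored as a magnitude, an elevation angle and an azimuth angle, and the Cartesian waypoints are recovered by accumulating the corresponding displacement vectors. The Python sketch below is a generic illustration of that encoding only (function and variable names are ours); the cost functions and the SGGTSO optimizer itself are not reproduced.

    import numpy as np

    def decode_path(start, rho, psi, phi):
        """Convert a spherical-vector encoded path (magnitude rho, elevation psi,
        azimuth phi per segment) into Cartesian waypoints, starting from `start`."""
        waypoints = [np.asarray(start, dtype=float)]
        for r, elev, azim in zip(rho, psi, phi):
            step = np.array([r * np.cos(elev) * np.cos(azim),
                             r * np.cos(elev) * np.sin(azim),
                             r * np.sin(elev)])
            waypoints.append(waypoints[-1] + step)
        return np.vstack(waypoints)

    # Example: three segments of equal length climbing gently to the north-east.
    path = decode_path(start=(0.0, 0.0, 50.0),
                       rho=[100.0, 100.0, 100.0],
                       psi=[0.05, 0.05, 0.0],   # elevation angles (rad)
                       phi=[0.6, 0.7, 0.8])     # azimuth angles (rad)
    print(path)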
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Obstacle and security threats.
Figure 2. Flight altitude scope.
Figure 3. UAV flight three-dimensional coordinate diagram.
Figure 4. Comparison of two nonlinear weighting coefficients.
Figure 5. Convergence curve of each algorithm.
Figure 6. Top view of the paths planned by each algorithm.
Figure 7. Convergence curves of each algorithm in nine scenarios.
Figure 8. 3D view of SGGTSO's planned flight path.
Figure 9. Top view of the paths planned by three improved TSO algorithms.
28 pages, 12417 KiB  
Article
Analysis of Polarization Detector Performance Parameters on Polarization 3D Imaging Accuracy
by Pengzhang Dai, Dong Yao, Tianxiang Ma, Honghai Shen, Weiguo Wang and Qingyu Wang
Sensors 2023, 23(11), 5129; https://doi.org/10.3390/s23115129 - 27 May 2023
Cited by 2 | Viewed by 1567
Abstract
Three-dimensional (3D) reconstruction of objects using the polarization properties of diffuse light on the object surface has become a crucial technique. Due to the unique mapping relation between the degree of polarization of diffuse light and the zenith angle of the surface normal vector, polarization 3D reconstruction based on diffuse reflection theoretically has high accuracy. However, in practice, the accuracy of polarization 3D reconstruction is limited by the performance parameters of the polarization detector. Improper selection of performance parameters can result in large errors in the normal vector. In this paper, the mathematical models that relate the polarization 3D reconstruction errors to the detector performance parameters including polarizer extinction ratio, polarizer installation error, full well capacity and analog-to-digital (A2D) bit depth are established. At the same time, polarization detector parameters suitable for polarization 3D reconstruction are provided by the simulation. The performance parameters we recommend include an extinction ratio ≥ 200, an installation error [−1°, 1°], a full-well capacity ≥ 100 Ke, and an A2D bit depth ≥ 12 bits. The models provided in this paper are of great significance for improving the accuracy of polarization 3D reconstruction. Full article
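The mapping the abstract refers to starts from the degree of linear polarization measured with four polarizer orientations. As a hedged illustration, the sketch below computes the Stokes-based degree of polarization from 0°/45°/90°/135° intensities and numerically inverts the commonly used diffuse-reflection model (assumed refractive index n = 1.5) to obtain a zenith angle; this is a textbook formulation, not necessarily the exact model or error analysis used in the paper.

    import numpy as np

    def stokes_dolp(i0, i45, i90, i135):
        """Degree and angle of linear polarization from four polarizer intensities."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1**2 + s2**2) / s0
        aop = 0.5 * np.arctan2(s2, s1)
        return dolp, aop

    def dolp_diffuse(theta, n=1.5):
        """Commonly used diffuse-reflection model: DoLP as a function of zenith angle."""
        s, c = np.sin(theta), np.cos(theta)
        num = (n - 1.0 / n) ** 2 * s**2
        den = 2 + 2 * n**2 - (n + 1.0 / n) ** 2 * s**2 + 4 * c * np.sqrt(n**2 - s**2)
        return num / den

    def zenith_from_dolp(dolp, n=1.5):
        """Invert the diffuse model numerically on a dense grid of zenith angles."""
        grid = np.linspace(0.0, np.pi / 2 - 1e-3, 2000)
        return grid[np.argmin(np.abs(dolp_diffuse(grid, n) - dolp))]

    dolp, aop = stokes_dolp(0.62, 0.55, 0.38, 0.45)
    print(dolp, np.degrees(zenith_from_dolp(dolp)))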
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Object surface normal vector: (a) surface normal vector diagram; (b) normal vector in the coordinate system.
Figure 2. Mapping between ρ and the zenith angle.
Figure 3. Polarizer direction error of the DoFP detector.
Figure 4. Polarizer installation error of the rotating-polarizer imaging system: (a) diagram illustrating the installation error; (b) photograph of the actual installation error.
Figure 5. Influence of the extinction ratio (ER) on the zenith angle for different ρ of the light reflected from the target surface.
Figure 6. Zenith angle errors of a sphere for ER = 200: (a) sphere with a radius of 1; (b) zenith angle distribution; (c) azimuth angle distribution; (d) zenith angle error distribution. x, y, z are the spatial coordinate axes.
Figure 7. Zenith angle errors for [Δα0, Δα45, Δα90, Δα135] ∈ (a) [−1°, 1°], (b) [−2°, 2°], (c) [−3°, 3°], (d) [−4°, 4°].
Figure 8. Azimuth angle errors for [Δα0, Δα45, Δα90, Δα135] ∈ (a) [−1°, 1°], (b) [−2°, 2°], (c) [−3°, 3°], (d) [−4°, 4°].
Figure 9. Polarization 3D imaging accuracy of the sphere with installation error ∈ [−1°, 1°]: (a) zenith angle error distribution; (b) azimuth angle error distribution.
Figure 10. Zenith angle standard deviation for different full-well capacities.
Figure 11. Azimuth angle standard deviation for different full-well capacities.
Figure 12. Polarization 3D imaging accuracy of the sphere with a full-well capacity of 100 Ke−: (a) standard deviation of the zenith angle; (b) standard deviation of the azimuth angle.
Figure 13. Standard deviation of the zenith angle for different A2D bit depths.
Figure 14. Standard deviation of the azimuth angle for different A2D bit depths.
Figure 15. Polarization 3D imaging accuracy of the sphere with an A2D bit depth of 12 bits: (a) standard deviation of the zenith angle; (b) standard deviation of the azimuth angle.
Figure 16. Standard deviation of the zenith angle at different A2D bit depths with a full-well capacity of 1 Me−.
Figure 17. Standard deviation of the azimuth angle at different A2D bit depths with a full-well capacity of 1 Me−.
Figure 18. Photograph of the actual polarization experiment platform.
Figure 19. Schematic diagram of the experimental platform.
Figure 20. Experimentally collected polarization 3D images at (a) 0°, (b) 45°, (c) 90°, and (d) 135°.
Figure 21. Actual (experimental) and theoretical zenith-angle error caused by the ER; pixel number is the column number.
Figure 22. Actual (experimental) and theoretical zenith-angle error caused by the installation error; pixel number is the column number.
Figure 23. Typical points selected in the experiment for analysis.
16 pages, 3853 KiB  
Article
A Lightweight UAV System: Utilizing IMU Data for Coarse Judgment of Loop Closure
by Hongwei Zhu, Guobao Zhang, Zhiqi Ye and Hongyi Zhou
Drones 2023, 7(6), 338; https://doi.org/10.3390/drones7060338 - 23 May 2023
Viewed by 1773
Abstract
Unmanned aerial vehicles (UAVs) can experience significant performance issues during flight due to heavy CPU load, affecting their flight capabilities, communication, and endurance. To address this issue, this paper presents a lightweight stereo-inertial state estimator that tackles the heavy CPU load of ORB-SLAM. It utilizes nonlinear optimization and features to incorporate inertial information throughout the Simultaneous Localization and Mapping (SLAM) pipeline. The first key innovation is a coarse-to-fine optimization method that enhances tracking speed by efficiently handling bias and noise in the IMU parameters. A novel visual–inertial pose graph is proposed as an observer to assess error thresholds and guide the system towards visual-only or visual–inertial maximum a posteriori (MAP) estimation accordingly. Furthermore, this paper incorporates inertial data in the loop closure thread: the IMU data provide the displacement direction relative to world coordinates, which serves as a necessary condition for loop detection. The experimental results demonstrate that our method maintains excellent localization accuracy compared to other state-of-the-art approaches on benchmark datasets, while also significantly reducing CPU load. Full article
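One plausible reading of using the IMU displacement direction as a necessary condition for loop detection is a cheap geometric gate applied before appearance-based matching. The sketch below is our own simplified illustration of such a gate (the thresholds and function names are invented), not the authors' implementation.

    import numpy as np

    def coarse_loop_gate(p_current, p_keyframe, imu_direction,
                         max_dist=3.0, max_angle_deg=30.0):
        """Coarse loop-closure check: the candidate keyframe must lie close to the
        current position, and the world-frame displacement direction given by the
        IMU must roughly point from the keyframe towards the current pose. Only
        candidates passing this cheap test proceed to appearance-based matching."""
        d = np.asarray(p_current, float) - np.asarray(p_keyframe, float)
        dist = np.linalg.norm(d)
        if dist < 1e-6 or dist > max_dist:
            return dist <= max_dist              # co-located or too far away
        cos_angle = np.dot(d / dist, np.asarray(imu_direction, float)
                           / np.linalg.norm(imu_direction))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= max_angle_deg

    print(coarse_loop_gate([1.0, 0.2, 0.0], [0.0, 0.0, 0.0],
                           imu_direction=[1.0, 0.1, 0.0]))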
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. The overall framework of the system.
Figure 2. Three methods of graph optimization used in the system.
Figure 3. Schematic diagram of coarse loop closure detection.
Figure 4. Comparison of our system with ground truth.
Figure 5. Motion track in the X, Y, and Z directions: (a) V102; (b) V202.
Figure 6. ATE comparison with ORB-SLAM3 on EuRoC (top: V102; bottom: V202).
Figure 7. Fusion of IMU data and loop closure. The red line is the camera trajectory, the green frame the current keyframe, the blue frames the keyframes in the map, and the red points the 3D map points.
Figure 8. Number of loops per sequence; each sequence is run five times, with the mode as the statistic.
Figure 9. CPU comparison in loop closure.
Figure 10. CPU comparison in local mapping.
25 pages, 7806 KiB  
Article
A Cross-Source Point Cloud Registration Algorithm Based on Trigonometric Mutation Chaotic Harris Hawk Optimisation for Rockfill Dam Construction
by Bingyu Ren, Hao Zhao and Shuyang Han
Sensors 2023, 23(10), 4942; https://doi.org/10.3390/s23104942 - 21 May 2023
Viewed by 1532
Abstract
A high-precision three-dimensional (3D) model is the premise and vehicle of digitalising hydraulic engineering. Unmanned aerial vehicle (UAV) tilt photography and 3D laser scanning are widely used for 3D model reconstruction. Affected by the complex production environment, traditional 3D reconstruction based on a single surveying and mapping technology struggles to simultaneously achieve rapid acquisition of high-precision 3D information and accurate acquisition of multi-angle feature texture characteristics. To ensure the comprehensive utilisation of multi-source data, a cross-source point cloud registration method is proposed that integrates the trigonometric mutation chaotic Harris hawk optimisation (TMCHHO) coarse registration algorithm with the iterative closest point (ICP) fine registration algorithm. The TMCHHO algorithm generates a piecewise linear chaotic map sequence in the population initialisation stage to improve population diversity. Furthermore, it employs trigonometric mutation to perturb the population in the development stage and thus avoid falling into local optima. Finally, the proposed method was applied to the Lianghekou project. The accuracy and integrity of the fusion model improved compared with those of realistic modelling solutions based on a single mapping system. Full article
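The coarse-to-fine idea, a global metaheuristic alignment refined by ICP, can be illustrated with the open-source Open3D library, assuming the coarse stage has already produced an initial 4 × 4 transformation. This is only a sketch of the fine-registration step; TMCHHO itself is not implemented here, and the file names are placeholders.

    import numpy as np
    import open3d as o3d

    def refine_with_icp(source_path, target_path, coarse_transform, threshold=0.05):
        """Fine registration stage: starting from the transformation produced by a
        coarse (global) aligner -- TMCHHO in the paper -- refine with point-to-point ICP."""
        source = o3d.io.read_point_cloud(source_path)
        target = o3d.io.read_point_cloud(target_path)
        result = o3d.pipelines.registration.registration_icp(
            source, target, threshold, coarse_transform,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation, result.fitness

    # Placeholder coarse alignment (identity); in practice this comes from the
    # metaheuristic coarse-registration step.
    coarse = np.eye(4)
    # T, fitness = refine_with_icp("uav_cloud.pcd", "tls_cloud.pcd", coarse)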
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Research framework diagram.
Figure 2. The process of a Harris hawk chasing its prey.
Figure 3. (a) Histogram of the logistic sequence; (b) distribution of the logistic sequence; (c) histogram of the PWLCM sequence; (d) distribution of the PWLCM sequence.
Figure 4. The process of trigonometric mutation.
Figure 5. Flowchart of the TMCHHO algorithm.
Figure 6. Comparison of algorithm optimisation ability: (a) f1(x); (b) f2(x); (c) f5(x); (d) f6(x).
Figure 7. Standard point cloud datasets: (a) Armadillo; (b) Bunny; (c) Dragon; (d) Happy.
Figure 8. Iteration curves of each algorithm on different datasets: (a) Armadillo; (b) Bunny; (c) Dragon; (d) Happy.
Figure 9. Comparison of algorithm alignment effects: (a) Armadillo; (b) Bunny; (c) Dragon; (d) Happy.
Figure 10. Alignment results using the ICP algorithm only.
Figure 11. Schematic diagram of the data acquisition strategy.
Figure 12. (a) Terrestrial laser scanner; (b) laser scanning point cloud.
Figure 13. (a) UAV tilt photography; (b) point cloud from UAV tilt photography.
Figure 14. (a) Registration accuracy of different algorithms; (b) variation in fitness values in the actual project.
Figure 15. The process of cross-source point cloud fusion.
Figure 16. Colour mapping diagram of the survey area.
Figure 17. Evaluation of fusion model accuracy.
Figure 18. (a) Tilt photography point cloud, top-down view along the dam axis; (b) fusion point cloud, top-down view along the dam axis; (c) tilt photography point cloud, 30-degree overhead view perpendicular to the dam axis; (d) fusion point cloud, 30-degree overhead view perpendicular to the dam axis.
27 pages, 7713 KiB  
Article
Thermal Characterization of Buildings with as-is Thermal-Building Information Modelling
by Víctor Pérez-Andreu, Antonio Adán Oliver, Carolina Aparicio-Fernández and José-Luis Vivancos Bono
Buildings 2023, 13(4), 972; https://doi.org/10.3390/buildings13040972 - 6 Apr 2023
Viewed by 1609
Abstract
Developing methodologies to accurately characterise the energy conditions of existing building stock is a fundamental aspect of energy consumption reduction strategies. To that end, a case study using a thermal information modelling method for existing buildings (as-is T-BIM) is reported. The proposed method is based on the automatic processing of 3D thermal point clouds of the interior zones of a building, generating a semantic proprietary model that contains time series of surface temperatures assigned to its surface elements. The proprietary as-is T-BIM model automatically generates an as-is BEM model in the gbXML standard for energy simulation; this is a multi-zone energy model of the building. In addition, the surface temperature data series of the as-is T-BIM model elements permit the calculation of their thermal transmittances, increasing the calibration options of the obtained as-is BEM model. To test the as-is T-BIM method, a case study compares the as-is BEM model obtained by the as-is T-BIM method with the one obtained by standard methods for the same building. The results show differences in geometry, transmittance, and infiltration values, as well as insignificant differences in annual air conditioning energy consumption and the comfort parameters tested. This seems to indicate shorter modelling times and greater accuracy for the as-is T-BIM model. Full article
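As a loose illustration of how surface-temperature time series can yield a transmittance estimate, the sketch below applies a generic quasi-steady, time-averaged formulation with an assumed internal surface heat-transfer coefficient of about 7.69 W/(m² K). It is a textbook-style approximation for orientation only and is not claimed to be the calculation used in the as-is T-BIM workflow.

    import numpy as np

    H_SI = 7.69  # W/(m^2 K), assumed standard internal surface heat-transfer coefficient

    def u_value(t_air_in, t_surf_in, t_air_out):
        """Simplified in-situ thermal transmittance estimate from time series of
        indoor air, indoor surface and outdoor air temperatures (quasi-steady,
        time-averaged). Generic approximation, not the paper's exact formulation."""
        t_air_in, t_surf_in, t_air_out = map(np.asarray, (t_air_in, t_surf_in, t_air_out))
        q = H_SI * (t_air_in - t_surf_in)        # estimated heat flux through the element
        return np.sum(q) / np.sum(t_air_in - t_air_out)

    # Hypothetical three-sample series (degrees Celsius).
    print(u_value([20.1, 20.0, 19.8], [18.2, 18.1, 18.0], [8.0, 7.5, 7.2]))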
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Diagram of research procedures and results.
Figure 2. As-is T-BIM thermal point cloud: (a) perspective of the block of five thermal zones; (b) indoor view of a thermal zone.
Figure 3. (a) 3D geometric and thermal scanner; (b) perspective of one of the rooms of the case study; (c) detail of the objectives for reading the parameters Tref and ε.
Figure 4. (a) T-BIM model, with surface temperatures in its viewer; (b) gbXML model of adiabatic adjacencies and envelope transmittance calculated with the DesignBuilder viewer.
Figure 5. General plans of the block of five thermal zones under study: (a) ground floor layout; (b) sectional view.
Figure 6. gbXML model geometries in the DesignBuilder V.6 display interface: (a) as-is STD model; (b) as-is T-BIM model.
Figure 7. Series of orthoimages of the western wall, Room W, 19 November 2021.
Figure 8. Thermographic ortho-images obtained from the 3D thermal point cloud with calculated real temperatures: (a) indoor surface of the west wall of zone W, 10:00 h, 19 November 2021; (b) indoor surface of the south wall of zone W, 10:00 h, 19 November 2021. The colour palette steps every 0.4 °C to ease visualisation of surface temperature differences.
Figure 9. Average simulated and monitored surface temperatures of the wall and floor of Room W1, 19 November 2021.
Figure 10. Average simulated and monitored surface temperatures of the wall and floor of Room W1, 18–20 November 2021.
Figure 11. Average daily indoor air temperatures of all zones from the simulations of the as-is STD and as-is T-BIM models, and the average daily outdoor temperature of the typical meteorological year (TMY) 2007–2021 used in both simulations.
Figure 12. Mean monthly energy gains obtained by energy simulation of the as-is STD and as-is T-BIM models over TMY 2007–2021.
Figure 13. Monthly heating and cooling demand of the as-is STD and as-is T-BIM models over TMY 2007–2021.
Figure 14. Hourly Fanger comfort parameters for zone W1 on 18–20 November of TMY 2007–2021: (a) PMV; (b) PPD.
Figure 15. Models of Room W1 with the domain mesh and air intake and outlet vents in red: (a) as-is STD; (b) as-is T-BIM. Vent 1: ventilation air intake; Clim 1–3: conditioned-air intake diffusers; Inf 1–7: air infiltration through the window frames; Ext 1–2: air extraction by the air-conditioning system and through the door.
Figure 16. Surface temperatures of the contour elements used for the CFD simulation at 17:00 h on 19 November in Room W1: (a) as-is STD; (b) as-is T-BIM.
Figure 17. 2D temperature maps on a horizontal plane at Z = 1.50 m obtained by CFD simulation of air flows in Room W1 at 17:00 h on 19 November, TMY 2007–2021: (a) as-is STD model; (b) as-is T-BIM model; (c) point-to-point temperature differences between the two models.
Figure 18. 2D maps of PPD comfort values on a horizontal plane at Z = 1.50 m, same conditions as Figure 17: (a) as-is STD model; (b) as-is T-BIM model; (c) point-to-point PPD differences.
Figure 19. 2D maps of PMV comfort values on a horizontal plane at Z = 1.50 m, same conditions as Figure 17: (a) as-is STD model; (b) as-is T-BIM model; (c) point-to-point PMV differences.
27 pages, 11254 KiB  
Article
A Robust Real-Time Ellipse Detection Method for Robot Applications
by Wenshan He, Gongping Wu, Fei Fan, Zhongyun Liu and Shujie Zhou
Drones 2023, 7(3), 209; https://doi.org/10.3390/drones7030209 - 17 Mar 2023
Cited by 5 | Viewed by 2890
Abstract
Over the years, many ellipse detection algorithms have been studied broadly, yet accurately and effectively detecting ellipses in the real world with robots remains a challenge. In this paper, we propose a practical real-time robot-oriented ellipse detector and a simple tracking algorithm. The method uses low-cost RGB cameras, converting images into HSV space to obtain reddish regions of interest (RROIs) contours, effective arc selection and grouping strategies, and candidate ellipse selection procedures that eliminate invalid edges, together with clustering functions. Extensive experiments were conducted to tune and verify the method's parameters for the best performance. Combined with a simple tracking algorithm, the method executes in approximately 30 ms per video frame in most cases. The results show that the proposed method achieved high-quality performance (precision, recall, F-measure scores) and the lowest execution time compared with the nine most advanced existing methods on three public real-world datasets. Our method detects elliptical markers in real time in practical applications, adapts to natural light, and handles severely occluded and specularly reflecting ellipses whether the elliptical object is far from or close to the robot. The average detection frequency meets real-time requirements (>10 Hz). Full article
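The RROI front end (HSV conversion, red-hue thresholding, contour extraction and ellipse fitting) maps naturally onto standard OpenCV calls. The sketch below illustrates only that front end with assumed HSV thresholds; the paper's arc selection, grouping, candidate validation and tracking stages are omitted.

    import cv2

    def detect_red_ellipses(bgr, min_points=20):
        """Sketch of the RROI front end: convert to HSV, threshold the two red hue
        bands, extract contours and fit ellipses to sufficiently long contours."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        return [cv2.fitEllipse(c) for c in contours if len(c) >= min_points]

    # ellipses = detect_red_ellipses(cv2.imread("frame.png"))  # hypothetical input image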
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Contour types of detected ellipses under different conditions: (a) single ellipse contour divided into four parts labelled by quadrant; (b) partially obscured single ellipse contour.
Figure 2. Extracted contours and fitted ellipses: (a) general Canny method; (b) optimized Canny method for edge extraction and ellipse fitting.
Figure 3. RROI ellipse target tracking algorithm flowchart.
Figure 4. Parametric sensitivity test: (a) experimental setup; (b) UI interface.
Figure 5. Evaluation platform and test results: performance and execution times for the contour overlap threshold C_contour and the ellipse overlap threshold C_ellipse. (a) Precision, recall, and F-measure increase with C_contour up to a point; (b) increasing C_contour raises true negatives and can reduce true positives beyond a certain point; (c) precision, recall, and F-measure increase with C_ellipse up to a point; (d) increasing C_ellipse raises true negatives and can reduce true positives beyond a certain point.
Figure 6. Experimental platform: (a) field flight test; (b) hardware components.
Figure 7. Ellipse results on three datasets: traffic signs, Prasad, and random images.
Figure 8. Performance with different overlap-ratio thresholds: (a–c) traffic sign dataset precision, recall, F-measure; (d–f) Prasad dataset precision, recall, F-measure; (g–i) random image dataset precision, recall, F-measure.
Figure 9. Prasad dataset image 043_0045 (640 × 480): (a) original, (b) ground truth, (c) CRTH, (d) Prasad, (e) YAED, (f) RCNN, (g) Wang, (h) ELSD, (i) CNED, (j) ours.
Figure 10. Experimental environment, indoor dark condition: (a) original; (b) RROI; (c) detection result.
Figure 11. Experimental environment, indoor bright condition: (a) original; (b) RROI; (c) detection result.
Figure 12. Experimental environment, sunny weather at noon: (a) original; (b) RROI; (c) detection result.
Figure 13. Experimental environment, cloudy and partially blocked: (a) original; (b) RROI; (c) detection result.
Figure 14. Experimental environment, propeller shadows at noon: (a) original; (b) RROI; (c) detection result.
Figure 15. Experimental environment, before sunset on a cloudy day: (a) original; (b) RROI; (c) detection result.
Figure 16. Algorithm performance in the practical application: (a) precision, (b) recall, (c) F-measure, (d) time consumption.
14 pages, 5319 KiB  
Article
Physical Structure Expression for Dense Point Clouds of Magnetic Levitation Image Data
by Yuxin Zhang, Lei Zhang, Guochen Shen and Qian Xu
Sensors 2023, 23(5), 2535; https://doi.org/10.3390/s23052535 - 24 Feb 2023
Viewed by 1411
Abstract
The research and development of intelligent magnetic levitation transportation systems has become an important branch of intelligent transportation systems (ITS) and can provide technical support for state-of-the-art fields such as intelligent magnetic levitation digital twins. First, we applied unmanned aerial vehicle oblique photography to acquire magnetic levitation track image data and preprocessed them. Then, we extracted and matched image features based on the incremental structure from motion (SFM) algorithm, recovered the camera pose parameters and the 3D scene structure of key points, and applied bundle adjustment optimization to output sparse 3D point clouds of the magnetic levitation track. Next, we applied multiview stereo (MVS) vision technology to estimate the depth maps and normal maps. Finally, we output dense point clouds that precisely express the physical structure of the magnetic levitation track, such as turnout, turning and linear structures. By comparing the dense point cloud model with a traditional building information model, experiments verified that the magnetic levitation image 3D reconstruction system based on the incremental SFM and MVS algorithms is robust and accurate, and can express a variety of physical structures of the magnetic levitation track with high accuracy. Full article
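The feature extraction and matching step of the incremental SFM front end can be sketched with standard OpenCV primitives (SIFT keypoints, brute-force matching, Lowe's ratio test). This is a generic illustration with hypothetical file names; pose recovery, triangulation, bundle adjustment and the MVS stage are not shown.

    import cv2

    def match_seed_images(path_a, path_b, ratio=0.75):
        """Feature extraction and matching for a seed image pair: SIFT keypoints,
        brute-force matching, and Lowe's ratio test to keep reliable matches."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                if m.distance < ratio * n.distance]
        return kp_a, kp_b, good

    # kp_a, kp_b, matches = match_seed_images("img_001.jpg", "img_002.jpg")  # hypothetical files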
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Two randomly selected magnetic levitation images used as seed images.
Figure 2. Seed image feature point matching.
Figure 3. RANSAC algorithm used to eliminate seed-image feature matching errors.
Figure 4. Bundle adjustment (BA) optimization decomposition.
Figure 5. Structure of the method for generating dense point clouds from magnetic levitation images.
Figure 6. Magnetic levitation transportation test line at Tongji University's Jiading campus.
Figure 7. Oblique photography route plan of the magnetic levitation test line.
Figure 8. UAV images of the physical structure of the magnetic levitation track girder: (a) turnout structure; (b) turning structure; (c) linear structure.
Figure 9. Overall sparse point clouds of the magnetic levitation track: (a) without point colours; (b) with RGB colours.
Figure 10. The original image, depth image, and normal image of one picture.
Figure 11. Overall dense point clouds of the magnetic levitation track.
Figure 12. Dense point clouds of the magnetic levitation turnout structure.
Figure 13. Dense point clouds of the magnetic levitation turning structure.
Figure 14. Dense point clouds of the magnetic levitation linear structure.
Figure 15. Local details of the turnout structure: (a) dense point cloud model; (b) BIM model; (c) standard cross-sectional turnout structure diagram.
16 pages, 6307 KiB  
Article
Event-Guided Image Super-Resolution Reconstruction
by Guangsha Guo, Yang Feng, Hengyi Lv, Yuchen Zhao, Hailong Liu and Guoling Bi
Sensors 2023, 23(4), 2155; https://doi.org/10.3390/s23042155 - 14 Feb 2023
Cited by 5 | Viewed by 2741
Abstract
The event camera efficiently detects scene radiance changes and produces an asynchronous event stream with low latency, high dynamic range (HDR), high temporal resolution, and low power consumption. However, the large output data volume caused by the asynchronous imaging mechanism limits increases in the spatial resolution of event cameras. In this paper, we propose a novel event camera super-resolution (SR) network (EFSR-Net) based on a deep learning approach to address the problems of low spatial resolution and poor visualization of event cameras. The network model is capable of reconstructing high-resolution (HR) intensity images using event streams and active pixel sensor (APS) frame information. We design coupled response blocks (CRB) in the network that are able to fuse the feature information of both data sources to recover detailed textures in the shadows of real images. We demonstrate that our method reconstructs high-resolution intensity images with more detail and less blurring on synthetic and real datasets, respectively. The proposed EFSR-Net improves the peak signal-to-noise ratio (PSNR) metric by 1–2 dB compared with state-of-the-art methods. Full article
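Before an event stream can be fed to a CNN such as EFSR-Net, it is typically accumulated into a fixed-size spatio-temporal stack. The sketch below shows one common way to do this (signed per-bin count images); the exact stacking used by the paper may differ, and the array layout is our assumption.

    import numpy as np

    def events_to_stack(events, height, width, num_bins):
        """Accumulate an asynchronous event stream into a spatio-temporal stack
        (one signed count image per temporal bin). `events` is an (N, 4) array of
        (t, x, y, p) with polarity p in {-1, +1}."""
        stack = np.zeros((num_bins, height, width), dtype=np.float32)
        t = events[:, 0]
        bins = np.clip(((t - t.min()) / (np.ptp(t) + 1e-9) * num_bins).astype(int),
                       0, num_bins - 1)
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        np.add.at(stack, (bins, y, x), events[:, 3])  # signed accumulation per pixel/bin
        return stack

    demo = np.array([[0.00, 3, 2, 1], [0.01, 3, 2, -1], [0.02, 7, 5, 1]])
    print(events_to_stack(demo, height=8, width=10, num_bins=2).sum())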
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Three-layer model of the human retina and the corresponding event camera pixel circuitry. The first layer, similar to retinal cone cells, performs photoelectric conversion; the second layer, similar to bipolar cells, obtains changes in light intensity; the third layer, similar to ganglion cells, outputs the sign of the light intensity change.
Figure 2. The process of generating events in an event camera. Each pixel acts as an independent detector of luminance change, and an event is generated as soon as the log-intensity change at the pixel reaches the threshold C_th; continuously generated events form event streams. When the intensity decreases past the threshold, the camera outputs an OFF event (blue arrow); when it increases past the threshold, the camera outputs an ON event (red arrow).
Figure 3. EFSR-Net network structure. The event data are first preprocessed into a stack and then encoded and decoded by the network. The processed event information and APS frame information are fed into the upper and lower coupled sub-networks. Each sub-network consists of a feature extraction block (FEB), a coupled response block (CRB), and a reconstruction block (REB). The final super-resolution image is reconstructed by the mixer (MIX) convolutional network.
Figure 4. Visual quality comparison of our method with other state-of-the-art methods for 2× SR on synthetic datasets (APS frame and event stack upsampled with bicubic interpolation for reference).
Figure 5. Visual quality comparison for 4× SR on synthetic datasets (APS frame and event stack upsampled with bicubic interpolation for reference).
Figure 6. Visual quality comparison for 2× SR on real datasets (APS frame upsampled and the 4× SR results of eSL-Net downsampled with bicubic interpolation for reference).
Figure 7. Visual quality comparison for 4× SR on real datasets (APS frame and the 2× SR results of E2SRI upsampled with bicubic interpolation for reference).
Figure 8. Visual quality comparison for 2× SR on real datasets (references as in Figure 6).
Figure 9. Qualitative comparison of the effects of different values of N_e on event stacks and reconstructed images (APS frame upsampled with bicubic interpolation for reference).
18 pages, 14274 KiB  
Article
Development of a Construction-Site Work Support System Using BIM-Marker-Based Augmented Reality
by Jae-Wook Yoon and Seung-Hyun Lee
Sustainability 2023, 15(4), 3222; https://doi.org/10.3390/su15043222 - 10 Feb 2023
Cited by 6 | Viewed by 2650
Abstract
Augmented reality (AR) in 3D has been proposed as a way to overcome the shortcomings of 2D drawings. In particular, marker-based AR is known to be more accurate in implementation, but it is not easy to use on construction sites because it requires more time and effort to create the corresponding markers for the information. Therefore, the purpose of this study was to develop a building information modeling (BIM)-based AR construction work support system that can be applied to construction sites by automatically generating markers. The system algorithm consists of three modules. The first module classifies and groups the objects of the BIM-based 3D model by work order. The second reconstructs the 3D model by group and automatically generates the corresponding individual marker for each object. The third specifies the marker position and implements AR by automatically matching 3D model objects to the corresponding markers. To verify the system, a case study was implemented by projecting the BIM-marker-based AR of a 3D model onto an existing building. The results show that the developed system provides 3D models and work-related information in AR at the correct scale, size, and location. Full article
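Module 1 (classifying and grouping BIM objects by work order) is essentially a grouping operation over exported element attributes. The following sketch is a minimal, hypothetical illustration in Python; the attribute names ("work_order", "zone", "id") are invented, and the actual system operates on Revit/Dynamo data rather than plain dictionaries.

    from collections import defaultdict

    def group_by_work_order(elements):
        """Group BIM objects by their work-order attribute (and zone) so that each
        group can later be reconstructed as one AR unit with its own marker."""
        groups = defaultdict(list)
        for e in elements:
            groups[(e["work_order"], e["zone"])].append(e["id"])
        return dict(groups)

    demo = [{"id": "W-101", "work_order": 1, "zone": "A"},
            {"id": "C-014", "work_order": 1, "zone": "A"},
            {"id": "S-201", "work_order": 2, "zone": "A"}]
    print(group_by_work_order(demo))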
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Research procedures and scope.
Figure 2. AR application in the earthwork stage [18].
Figure 3. Graphical scripting for surface classification with Dynamo [29].
Figure 4. Rebar measurement by marker-based AR [8].
Figure 5. Visualization of member information using Unity [38].
Figure 6. Overview of the BIM-marker-based CWSS.
Figure 7. BIM-marker-based AR system algorithm.
Figure 8. Classification method by the work order of each object.
Figure 9. Three-dimensional reconstruction and automatic marker creation system in Dynamo.
Figure 10. Division of the 3D model into zones through Module 1.
Figure 11. (a) Implementing a BIM-based AR application; (b) marker recognition through the application; (c) AR visible in the application.
17 pages, 4799 KiB  
Article
Dynamic Occlusion Modeling and Clearance Control of the Visual Field of Curved Highway Roadside Landscape
by Jian Xiao, Xudong Zha, Liulin Yang and Jie Wei
Sustainability 2023, 15(4), 3200; https://doi.org/10.3390/su15043200 - 9 Feb 2023
Viewed by 1220
Abstract
In order to control the occlusion of the roadside landscape on expressway curves according to drivers' visual characteristics during high-speed driving, a dynamic space model of the visual process for the curved highway roadside landscape was established, and the calculation equation of the roadside landscape visual field was derived. The dynamic occlusion ratio was defined in terms of space coordinates, and a judgment model was proposed for de-occluding the roadside landscape visual field. Using the standard design parameters of the G4 Highway Hunan section, the occlusion laws were analyzed in MATLAB for obstructions of different widths and heights, as well as for obstructions of the same width and height at different positions in the roadside landscape visual field; on this basis, the control value and control content of anti-occlusion clearance for roadside landscapes were proposed. The results show that the anti-occlusion clearance control range of the roadside landscape is 270 m at a design speed of 120 km/h, 220 m at 100 km/h, and 170 m at 80 km/h. The control value of the clearance width is 25 m, and the recommended control value of the clearance height is 20 m. Within the scope of highway land expropriation, it is recommended to expropriate a strip 25 m wide from the road boundary. The research provides model support for shaping the enclosure and openness of the highway roadside landscape. Full article
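The dynamic occlusion ratio is, at heart, the occluded share of the visual-field volume at a given instant. The sketch below estimates such a ratio numerically by Monte Carlo sampling with caller-supplied predicates; it is a generic volumetric illustration, not the closed-form equations derived in the paper.

    import numpy as np

    def occlusion_ratio(in_visual_field, is_occluded, bounds, samples=20_000, seed=0):
        """Estimate the fraction of the visual-field volume hidden by an obstruction.
        `in_visual_field(p)` and `is_occluded(p)` are predicates on 3D points;
        `bounds` is ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
        rng = np.random.default_rng(seed)
        pts = np.column_stack([rng.uniform(lo, hi, samples) for lo, hi in bounds])
        inside = np.array([in_visual_field(p) for p in pts])
        if not inside.any():
            return 0.0
        hidden = np.array([is_occluded(p) for p in pts[inside]])
        return hidden.mean()

    # Example: a box-shaped visual field with a smaller box obstruction inside it
    # (25 m wide, 20 m high), purely to exercise the estimator.
    ratio = occlusion_ratio(lambda p: True,
                            lambda p: (0 <= p[0] <= 25) and (0 <= p[2] <= 20),
                            bounds=((0, 100), (0, 50), (0, 40)))
    print(round(ratio, 3))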
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
Figures:
Figure 1. Visual field of the roadside landscape on a curved highway.
Figure 2. Schematic representation of the visual field volume of the roadside landscape on a curved highway.
Figure 3. Schematic representation of the occluded volume in the visual field of the roadside landscape (point A).
Figure 4. Schematic representation of the occluded volume of the visual field of the roadside landscape on a curved highway (volume at time t_i).
Figure 5. Occlusion judgment of the occlusion width D_j.
Figure 6. Occlusion judgment of the occlusion height H_j.
Figure 7. Typical standard cross-section of the G4 Hunan section (unit: cm).
Figure 8. Simulated change of the occlusion volume and occlusion ratio for obstructions of different heights at the same main-line design speed, position, and width: (a) D = 3 m; (b) D = 10 m; (c) D = 20 m; (d) D = 30 m.
Figure 9. Simulated change of the occlusion volume and occlusion ratio for obstructions of different widths at the same main-line design speed, position, and height: (a) H = 3 m; (b) H = 5 m; (c) H = 10 m; (d) H = 20 m.
Figure 10. Schematic diagram of roadside landscape clearance control for a curved highway.
17 pages, 3290 KiB  
Article
Research on Public Space Micro-Renewal Strategy of Historical and Cultural Blocks in Sanhe Ancient Town under Perception Quantification
by Wenqing Ding, Qinqin Wei, Jing Jin, Juanjuan Nie, Fanfan Zhang, Xiaotian Zhou and Youhua Ma
Sustainability 2023, 15(3), 2790; https://doi.org/10.3390/su15032790 - 3 Feb 2023
Cited by 5 | Viewed by 2599
Abstract
The public space environment of historical and cultural blocks is inseparable from human activities and affects tourists' behavior and perception. Evaluating tourists' environmental and behavioral perception allows the relationship between spatial characteristics and tourists' perception to be fully considered, which plays an important role in the protection and development of public space in historical and cultural blocks. This paper takes the historical and cultural block of Sanhe Ancient Town in Hefei as the research area, focusing on the public space of the block. Using the semantic differential method and quantitative eye-movement analysis, we analysed the block's public space from the perspectives of psychological perception and visual perception and constructed a dual-perception comprehensive evaluation system for interpreting the overall condition and patterns of public space in historical and cultural blocks. The results show that: (1) the visual perception preference for spatial elements is, in order, architectural structure > green landscape > architectural decoration > commercial activities > participants > pavement > street furniture > others. (2) There is a significant correlation, but not complete convergence, between psychological perception and visual perception. (3) Buildings, structures, and space formats play a key role in creating a sense of spatial scale, with the former playing a positive role and the latter a negative role. (4) The visual attraction of a green landscape is strong and can improve the visual quality of the space. The research found evaluation differences between the visual perception and psychological perception of spatial elements, which are significantly correlated but not fully convergent. Through quantitative analysis and the interpretation of tourists' perception from different perspectives, improvement and optimization measures can be taken for the many deficiencies of public space in historical and cultural districts. Full article
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
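The abstract above pairs semantic-differential (psychological) scores with eye-tracking (visual) metrics and reports that the two are significantly correlated but not fully convergent. As a rough illustration of how such a comparison can be quantified, the sketch below computes a Spearman rank correlation between per-element scores; the element list, the numbers, and the use of scipy's spearmanr are illustrative assumptions, not the authors' data or pipeline.

```python
# Minimal sketch (not the authors' pipeline): compare psychological-perception
# scores (semantic differential) with visual-perception scores (eye tracking)
# for the same spatial element categories via Spearman rank correlation.
# All numbers below are made-up placeholders for illustration only.
from scipy.stats import spearmanr

elements = ["architectural structure", "green landscape", "architectural decoration",
            "commercial activities", "participants", "pavement", "street furniture"]

# Hypothetical mean semantic-differential scores per element (psychological perception).
sd_scores = [4.2, 4.0, 3.6, 3.1, 2.9, 2.7, 2.5]

# Hypothetical share of total fixation duration per element (visual perception).
fixation_share = [0.28, 0.22, 0.15, 0.12, 0.09, 0.08, 0.06]

rho, p_value = spearmanr(sd_scores, fixation_share)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A high rho with p < 0.05 would indicate a significant correlation between the two
# perception channels; element-wise rank differences would show where they diverge.
```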
Figures:

Figure 1: Location of Sanhe ancient town in Hefei.
Figure 2: Distribution diagram of research samples.
Figure 3: Overall flow chart of eye movement analysis.
Figure 4: Division of spatial elements.
Figure 5: Double perception comprehensive evaluation system.
Figure 6: Derivation of micro renewal strategy of public space in historical and cultural blocks of Sanhe Ancient Town.
19 pages, 10037 KiB  
Article
Affordable Robotic Mobile Mapping System Based on Lidar with Additional Rotating Planar Reflector
by Janusz Będkowski and Michał Pełka
Sensors 2023, 23(3), 1551; https://doi.org/10.3390/s23031551 - 31 Jan 2023
Cited by 3 | Viewed by 2360
Abstract
This paper describes an affordable robotic mobile 3D mapping system. It is built with a Livox Mid-40 lidar whose conic field of view is extended by a custom rotating planar reflector. This 3D sensor is compared with the more expensive Velodyne VLP 16 lidar. It is shown that the proposed sensor reaches satisfactory accuracy and range while preserving the metric accuracy and non-repetitive scanning pattern of the unmodified sensor. Because the non-repetitive scan pattern is preserved, our system covers a full field of view of 38.4 × 360 degrees, which is an added value of the conducted research. We show the calibration method, mechanical design, and synchronization details that are necessary to replicate our system. This work extends the applicability of solid-state lidars, since the field of view can be reshaped with minimal loss of measurement properties. The solution was part of a system that was evaluated during the 3rd European Robotics Hackathon in the Zwentendorf Nuclear Power Plant. The experimental part of the paper demonstrates that our affordable robotic mobile 3D mapping system provides 3D maps of a nuclear facility that are comparable to those of the more expensive solution.
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
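The core geometric idea above, redirecting the lidar beam with a rotating planar reflector whose pose is calibrated against TLS ground truth, can be illustrated with a Householder reflection across the mirror plane. The sketch below reflects a raw lidar point across a plane whose normal is rotated about the drive axis by the encoder angle; the plane parameters, drive axis, and encoder reading are hypothetical placeholders, not the calibrated values from the paper.

```python
# Minimal sketch, not the authors' calibration: reflect a raw Livox return across a
# rotating planar mirror to obtain the point in the extended field of view.
# Mirror plane: n . x = d, with n rotated about the drive axis by the encoder angle.
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues rotation matrix for a unit axis and an angle in radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def reflect_point(p, n, d):
    """Householder reflection of point p across the plane n . x = d (n is unit length)."""
    return p - 2.0 * (np.dot(n, p) - d) * n

# Hypothetical mirror parameters (in the paper these are estimated against TLS data,
# minimizing the sum of squared point-to-point errors).
n0 = np.array([0.0, np.sqrt(0.5), np.sqrt(0.5)])   # mirror normal at encoder angle 0
d0 = 0.05                                          # plane offset in metres
drive_axis = np.array([0.0, 0.0, 1.0])             # rotation axis through the origin

encoder_angle = np.deg2rad(30.0)                   # hypothetical encoder reading
n = rotation_about_axis(drive_axis, encoder_angle) @ n0

raw_point = np.array([0.0, 0.3, 2.0])              # raw return in the lidar frame
print(reflect_point(raw_point, n, d0))             # point mapped into the extended FoV
```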
Figures:

Figure 1: Scheme of the affordable 3D scanning system composed of a lidar and a rotating planar reflector. The goal of the calibration procedure is to minimize the sum of squared errors between measured points and ground-truth points obtained with a TLS.
Figure 2: Intersection of a reflective plane and the beam $\mathbf{b}^{d}$.
Figure 3: Measurement stations. Stations 2 and 3 were used for calibration; stations 1, 4, and 5 were used for validation.
Figure 4: Example of a planar feature. Gray scale: data from the TLS. The colored point cloud is the observation of the feature from the presented system; the color map shows the distance to the planar feature.
Figure 5: Mechanical design of the mirror drive: (1) Livox Mid-40 lidar, (2) contactless encoder, (3) mirror support, (4) motor housing, (5) bottom plate, (6) top plate, (7) pillars. The resulting field of view is shown in blue. The system can be mounted in any position.
Figure 6: Photo of the mechanical design of the mirror drive.
Figure 7: Time diagram of synchronization signals, register, and timers.
Figure 8: Robotic platform equipped with multiple mapping systems. This robot was used for the evaluation of the proposed contribution and provided two 3D data streams for comparison.
Figure 9: Undistorted, aggregated point clouds built from the Livox Mid-40 with the rotating reflector (blue) and the Velodyne VLP 16 (red) at Zwentendorf NPP. Note the larger field of view of our system and the non-repetitive scanning pattern produced by the rotating reflector.
Figure 10: Undistorted, aggregated point clouds built from the Livox Mid-40 with the rotating reflector (blue) and the Velodyne VLP 16 (red) at Zwentendorf NPP; top-down view of Figure 9.
Figure 11: Undistorted, aggregated point cloud built from the Livox Mid-40 with the rotating reflector at Zwentendorf NPP. The distance to the reference point cloud from the Velodyne VLP 16 is encoded by color.
Figure 12: Histogram of the distances from our system to the reference Velodyne VLP 16; the majority are below 10 cm.
Figure 13: Factor graph used. Factors $f_1$ to $f_6$ are odometry factors, $f_{11}$ to $f_{16}$ are observation factors, and $f_{31}$ to $f_{36}$ are IMU prior factors; $f_{20}$ is a loop closure. The observation factor connecting poses $x_3$ and $x_4$ does not exist due to failed NDT matching. The variables $u_1$ to $u_6$ are robot odometry readings; $s_1$ to $s_6$ are scans taken near the corresponding poses.
Figure 14: Side view of the map of the Zwentendorf NPP obtained with the Livox Mid-40 with the rotating mirror.
Figure 15: Side view of the map of the Zwentendorf NPP obtained with the VLP 16.
Figure 16: Undistorted, aggregated point cloud built from the Livox Mid-40 with the rotating reflector at Zwentendorf NPP, colored by distance to the reference Velodyne VLP 16 point cloud. The larger range of our solution and similar metric measurements compared with the Velodyne VLP 16 can be seen.
Figure 17: Comparison of the intersection of the VLP 16 and the Livox Mid-40 with the rotating mirror. There is no visible discrepancy between blue (Velodyne VLP 16) and red (Livox with rotating reflector).
Figure 18: Histogram for the entire experiment showing the distribution of distances from our system to the reference Velodyne VLP 16; the majority are below 10 cm.
Figure 19: Comparison of a scan obtained with the designed system and the DEM. The DEM is shown in gray scale; the data from our system is shown with a color map encoding the distance between our measurements and the DEM.
Figure 20: Cross section of a scan obtained with the designed system (red) and the DEM (black).
Figure 21: Cross section of a scan obtained with the designed system (red) and the DEM (black).
Figure 22: A possible use case for the proposed solution is filling gaps in a DEM caused by the limited field of view of ALS. RGB-colored points belong to the DEM; the height-colored data comes from the proposed system.
12 pages, 3518 KiB  
Article
BIM Style Restoration Based on Image Retrieval and Object Location Using Convolutional Neural Network
by Yalong Yang, Yuanhang Wang, Xiaoping Zhou, Liangliang Su and Qizhi Hu
Buildings 2022, 12(12), 2047; https://doi.org/10.3390/buildings12122047 - 22 Nov 2022
Cited by 1 | Viewed by 1632
Abstract
BIM is one of the main technical approaches to realizing building informatization, and a model's texture is essential to its style design during BIM construction. However, the texture maps provided by mainstream BIM software are not realistic enough and are too monotonous to meet users' actual needs for model style. Therefore, an interior furniture BIM style restoration method based on image retrieval and object location using a convolutional neural network is proposed. First, two types of furniture images, grayscale contour images from BIM software and real images from the Internet, were collected to train the network. Second, a multi-feature weighted fusion neural network model based on an attention mechanism (AM-rVGG) was proposed, which focuses on the structural information of furniture images to retrieve the most similar real image; furniture image patches from the retrieved image were then generated with object location and random cropping techniques as candidate texture maps for the furniture BIM. Finally, the candidate texture maps were fed back into the BIM software to restore the furniture BIM style. The experimental results showed that the average retrieval accuracy of the proposed network was 83.1%, and the obtained texture maps effectively restored the real style of the furniture BIM. This work provides a new idea for restoring realism in other BIMs.
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
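The retrieval step described above fuses multiple feature branches with attention-derived weights and then ranks real furniture photos by similarity to the BIM contour rendering. The sketch below shows that general pattern with a softmax-weighted fusion of feature vectors followed by cosine-similarity ranking; the feature dimensions, attention logits, and gallery are illustrative assumptions, not the AM-rVGG implementation.

```python
# Minimal sketch (not AM-rVGG itself): fuse several image feature vectors with
# attention weights, then rank a gallery of real furniture photos by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)

def attention_fuse(features, scores):
    """Weight the feature vectors by softmax(scores) and sum them into one descriptor."""
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w[:, None] * features).sum(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical multi-branch features of the BIM grayscale contour image
# (e.g., texture, edge and shape branches), plus placeholder attention logits.
query_branches = rng.normal(size=(3, 512))
attention_scores = np.array([0.2, 1.5, 0.8])
query_descriptor = attention_fuse(query_branches, attention_scores)

# Hypothetical gallery of descriptors for real furniture photos.
gallery = rng.normal(size=(100, 512))
similarities = np.array([cosine(query_descriptor, g) for g in gallery])
best = int(similarities.argmax())
print(f"most similar real image: #{best} (cosine = {similarities[best]:.3f})")
# In the paper, patches cropped around the located object in the retrieved photo
# then serve as candidate texture maps for the furniture BIM.
```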
Figures:

Figure 1: The overall framework of the BIM style restoration method.
Figure 2: AM-rVGG model framework.
Figure 3: Feature weighted fusion method based on the attention mechanism.
Figure 4: Framework diagram of the texture map restoration method.
Figure 5: Some examples of the furniture dataset collected by us.
Figure 6: Comparison curve of the network training accuracy.
Figure 7: Example of the retrieval result of the BIM door model outline drawing.
Figure 8: The BIM style restoration example.
Figure 9: Examples of the furniture BIM style restoration.
26 pages, 10353 KiB  
Article
INV-Flow2PoseNet: Light-Resistant Rigid Object Pose from Optical Flow of RGB-D Images Using Images, Normals and Vertices
by Torben Fetzer, Gerd Reis and Didier Stricker
Sensors 2022, 22(22), 8798; https://doi.org/10.3390/s22228798 - 14 Nov 2022
Viewed by 1685
Abstract
This paper presents a novel architecture for the simultaneous estimation of highly accurate optical flows and rigid scene transformations in difficult scenarios where the brightness assumption is violated by strong shading changes. In the case of rotating objects or moving light sources, such as those encountered when driving cars in the dark, the scene appearance often changes significantly from one view to the next. Unfortunately, standard methods for calculating optical flows or poses are based on the expectation that the appearance of features in the scene remains constant between views, and they frequently fail in the investigated cases. The presented method fuses texture and geometry information by combining image, vertex and normal data to compute an illumination-invariant optical flow. By using a coarse-to-fine strategy, globally anchored optical flows are learned, reducing the impact of erroneous shading-based pseudo-correspondences. Based on the learned optical flows, a second architecture is proposed that predicts robust rigid transformations from the warped vertex and normal maps. Particular attention is paid to situations with strong rotations, which often cause such shading changes; a 3-step procedure is therefore proposed that profitably exploits correlations between the normals and vertices. The method has been evaluated on a newly created dataset containing both synthetic and real data with strong rotations and shading effects. These data represent the typical use case in 3D reconstruction, where the object often rotates in large steps between the partial reconstructions. Additionally, we apply the method to the well-known Kitti Odometry dataset. Although the brightness assumption is fulfilled there, so this is not the typical use case of the method, these experiments establish the applicability to standard situations and the relation to other methods.
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
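The second stage described above predicts a rigid transformation from flow-warped vertex maps. A classical, non-learned counterpart of that step is the Kabsch/orthogonal-Procrustes alignment of corresponding 3D points; the sketch below pairs vertices through a given flow field and solves for R and t in closed form. The vertex maps, flow field, and motion used here are hypothetical, and the paper's network replaces this closed-form step with a learned, multi-step prediction.

```python
# Minimal sketch (a classical stand-in, not the paper's learned pose head):
# pair 3D vertices of view 0 with flow-warped vertices of view 1, then estimate the
# rigid transform (R, t) with the Kabsch / orthogonal Procrustes solution.
import numpy as np

def rigid_from_correspondences(p_src, p_dst):
    """Return R, t minimizing sum ||R @ p + t - q||^2 over the paired points."""
    c_src, c_dst = p_src.mean(axis=0), p_dst.mean(axis=0)
    H = (p_src - c_src).T @ (p_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Hypothetical vertex maps (h x w x 3) of two views and an integer flow field from
# view 0 to view 1; in the paper these come from RGB-D data and the flow network.
rng = np.random.default_rng(1)
h, w = 4, 5
vertices0 = rng.normal(size=(h, w, 3))
theta = np.deg2rad(10.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
true_t = np.array([0.1, -0.2, 0.05])
vertices1 = vertices0 @ true_R.T + true_t          # rigidly moved copy of view 0
flow = np.zeros((h, w, 2), dtype=int)              # identity flow keeps the sketch short

# Pair each pixel of view 0 with the flow-warped pixel of view 1.
ys, xs = np.mgrid[0:h, 0:w]
src = vertices0.reshape(-1, 3)
dst = vertices1[ys + flow[..., 1], xs + flow[..., 0]].reshape(-1, 3)

R_est, t_est = rigid_from_correspondences(src, dst)
print(np.round(R_est - true_R, 6))   # ~0: rotation recovered
print(np.round(t_est - true_t, 6))   # ~0: translation recovered
```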
Figures:

Figure 1: Sketch of the proposed methodology. In a first step, the pixel-wise optical flow is predicted from all available input (images, normals and vertices). In a second step, normal and vertex maps are warped to the reference frame using the predicted flow field. The stacked, warped normal and vertex maps are then processed by another sub-network to predict a rigid transformation that aligns the underlying geometry.
Figure 2: (a,d) show matches based on SIFT features and optical flow. (b) shows the scene, illuminated by a strong spotlight, in a different color space better matched to human perception; this visualizes more clearly the different shadings of the object, which cause the failure of the common SIFT-based method. (c) shows overlapping regions of subsequent scans; even a rotation of approximately 45° yields a large overlap of more than 80%.
Figure 3: Image $I_0$ compared with image $I_1$ warped by the optical flow $F^{01}$. Assuming consistent brightness, these should be identical (ignoring pixels masked due to the semi-dense optical flow from real data). In case of strong rotations of the object, the shading changes dramatically, which violates this assumption.
Figure 4: Sketch of the PWC-Net architecture. The input is convolved through multiple layers, and the optical flow is predicted from the lowest level upwards in a U-Net structure. In each level, the layers of $I_1$ are warped towards the layers of $I_0$ to provide initial flows from the previous lower levels. With this pyramidal approach, large flows can be predicted with quite small filter kernels.
Figure 5: Possible input available for light-resistant optical flow estimation and subsequent pose prediction. In addition to texture images, there are depth maps, vertex maps, point clouds and normal maps. The depth maps and the vertex maps contain geometric information; since the vertex maps are independent of the calibration, they are the preferable choice for the presented method.
Figure 6: Flow prediction architecture in each layer (except the first one). Features of images (texture), normals (shading) and vertices (geometry) are extracted separately and jointly fed to the prediction module.
Figure 7: Normal maps and vertex maps warped by the optical flow $F^{01}$. Assuming rigid scenes, normals should be identical up to a rotation, and vertices up to a rotation and a translation.
Figure 8: Point clouds of the two exemplary views. The resulting transformation $P = (\hat{\mathbf{R}}, \hat{\mathbf{t}})$ aligns the point cloud of the first view to that of the second view. The registered combined point cloud is shown beside them.
Figure 9: Architecture of Flow2PoseNet. The left part of the network predicts accurate optical flow from images, normal maps and vertex maps, using textural features from the images, shading features from the normals and geometric features from the vertices to obtain accurate, light-resistant flow fields. The pose of the rigid scene is computed in three steps from the warped normal and vertex maps: the first step predicts the rotation from the warped normal maps, the second step predicts the translation from the warped and rotated vertex maps, and the third step predicts a correction transformation to refine the predicted rotation and translation incrementally.
Figure 10: 3D models used to create the synthetic and real datasets. (a) models on which the synthetic training scenes are based; (b) models of the synthetic test scenes; (c) models resulting from the captured real data.
Figure 11: Example scene of the synthetic (top row) and real (bottom row) datasets. Each scene contains images, depth maps, normal maps and flow fields of two different camera views. In addition, a data file for each camera stores calibration information, camera position, light source position and the minimal/maximal values of flows and depths to allow memory-efficient storage of the data.
Figure 12: Application of the method to a full sequence of partial reconstructions of a real Buddha object from the BuddhaBirdReal dataset. Such sequences usually result from 3D scanners (here a structured-light scanner). Since a turntable is often used, strong rotations (≈45°) and shading changes disturb the data. After pre-alignment, a few iterations of the ICP algorithm refine the alignment of the point clouds. The image on the bottom right shows the impressive result on the overall aligned full point cloud of the statue.
Figure 13: Qualitative results of the proposed method on training (top three rows) and test (bottom three rows) data of the synthetic consistent-light dataset. Consistent light represents the standard case where, for example, the camera moves through a static scene with static light sources and the brightness assumption is usually not violated. The network generalizes well from known training to unknown test data.
Figure 14: Qualitative results of the proposed method on training (top three rows) and test (bottom three rows) data of the synthetic inconsistent-light dataset as well as real test data. Inconsistent light represents the situation under investigation that motivates this paper, where light sources or objects in the scene move or rotate, yielding strong shading changes and dramatically violating the brightness assumption. The network still generalizes well from known training to unknown test data; even for real data without additional fine-tuning, the results are impressive.
Figure 15: Qualitative results of the proposed method on training and test data of the Kitti Odometry dataset. The method also works in this scenario with fewer rotations and smaller shading changes than in the mainly investigated case, and it handles the noise resulting from the lidar depth measurements in the Kitti data. The network generalizes well from known training to unknown test data.
Figure 16: Qualitative results of the predicted (dense) optical flow. The network computes accurate flows for invisible pixels from the context of visible parts, for both the consistent and the inconsistent data.
21 pages, 4502 KiB  
Article
A Laboratory and Field Universal Estimation Method for Tire–Pavement Interaction Noise (TPIN) Based on 3D Image Technology
by Hui Wang, Xun Zhang and Shengchuan Jiang
Sustainability 2022, 14(19), 12066; https://doi.org/10.3390/su141912066 - 23 Sep 2022
Cited by 71 | Viewed by 2158
Abstract
Tire–pavement interaction noise (TPIN) accounts for most traffic noise, a sensitive parameter affecting the outcome of eco-based maintenance decisions. Consistent methods or metrics for laboratory and field pavement texture evaluation are lacking, and TPIN prediction based on pavement structural and material characteristics is not yet available. This paper used 3D point cloud data scanned from specimens and road pavement to conduct correlation and clustering analyses based on representative 3D texture metrics. We conducted an influence analysis to exclude the effects of macroscopic pavement detection metrics and macro-deformation metrics (international roughness index, IRI, and mean profile depth, MPD). The cluster analysis results verified the feasibility of texture metrics for evaluating laboratory and field pavement wear and differentiating wear states. TPIN prediction accuracy based on texture indicators was high (R² = 0.9958), implying that it is feasible to predict the TPIN level using 3D texture metrics. The effects of pavement texture changes on TPIN can be simulated by laboratory wear.
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
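The prediction step above feeds 3D texture metrics into tree-ensemble regressors (the figures report random forest and GBDT results and MDI importances). The sketch below shows that general workflow with scikit-learn on synthetic placeholder data; the metric names, the data-generating function, and the hyperparameters are assumptions for illustration, not the paper's dataset or tuned models.

```python
# Minimal sketch (synthetic placeholder data, not the paper's measurements):
# predict TPIN from 3D pavement texture metrics with a random forest, report R^2
# on held-out data, and list MDI feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 300
# Hypothetical 3D texture metrics (names are placeholders for areal texture parameters).
feature_names = ["Sa", "Sq", "Ssk", "Sku", "Sdr", "Spd"]
X = rng.normal(size=(n, len(feature_names)))
# Hypothetical TPIN level (dB) as a noisy function of the metrics.
y = 95 + 3.0 * X[:, 0] - 2.0 * X[:, 2] + 1.5 * X[:, 4] + rng.normal(scale=0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
for name, mdi in zip(feature_names, model.feature_importances_):
    print(f"  MDI importance of {name}: {mdi:.3f}")
```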
Figures:

Figure 1: The framework of our study.
Figure 2: The example of the original detection data.
Figure 3: The example of the data after preprocessing.
Figure 4: Distribution box plot of TPIN data.
Figure 5: The correlations of IRI, MPD, and TPIN. (Left): ZS80 represents TPIN at 80 km/h. (Right): ZS70 represents TPIN at 70 km/h.
Figure 6: Spearman correlation results.
Figure 7: Clustering results (two classes).
Figure 8: Clustering results (three classes).
Figure 9: Clustering results (four classes).
Figure 10: The MDIs of the six indicators.
Figure 11: RF prediction results.
Figure 12: GBDT prediction results.