
 
 

Advances in Indoor Positioning and Indoor Navigation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Navigation and Positioning".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 65611

Special Issue Editors


Guest Editor
Faculty of Computer Sciences, Multimedia and Telecommunication at Universitat Oberta de Catalunya (UOC), Barcelona, Spain; Internet Interdisciplinary Institute (IN3) at UOC, Castelldefels, Spain
Interests: physics; physics and science fiction; e-learning; geographic information systems and indoor positioning; context-aware recommender systems; location-based systems

Guest Editor
Institute of New Imaging Technologies (INIT), Jaume I University, Castellón, Spain
Interests: indoor localization and navigation; human and social behavior from sensor data; sport video analysis and surveillance applications

Guest Editor

Special Issue Information

Dear Colleagues,

Locating devices in indoor environments has become a key issue for many emerging location-based applications and intelligent spaces across different fields. However, no single, easy solution exists today. Although no large-scale deployment of such location systems is yet available, this strategic topic is poised to drive technological innovations with great impact on people's daily activities in the coming years, in areas such as healthy and independent living, leisure, and security.

Topics in the area include, among others:

  • 5G positioning
  • algorithms for wireless sensor networks
  • applications of location awareness and context detection
  • benchmarking, assessment, evaluation, standards, interoperability, and research reproducibility
  • data compression, data augmentation, and generative modeling in indoor positioning
  • health and wellness applications
  • human motion monitoring and modeling
  • indoor maps, indoor spatial data models, indoor mobile mapping, and 3D building models
  • indoor positioning, navigation, and tracking methods (AOA-, TOF-, and TDOA-based localization; cooperative and machine learning systems; frameworks for hybrid positioning; hybrid IMU pedestrian navigation and foot-mounted navigation; magnetic-field-based methods; mapping and SLAM; optical systems; RFID; radar; device-free systems; routing in indoor environments; signal-strength-based methods; fingerprinting; ultrasound systems; UWB)
  • privacy and security for indoor location systems
  • robotics and UAVs
  • high-sensitivity GNSS, indoor GNSS, and pseudolites
  • industrial metrology and geodetic systems
  • iGPS and self-contained sensors
  • user requirements for location-based systems
  • visible light positioning
  • wearable and multisensor systems
  • wireless power transfer systems for localization

Prof. Dr. Antoni Perez-Navarro
Dr. Raúl Montoliu
Dr. Joaquín Torres-Sospedra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • indoor location
  • indoor navigation
  • fingerprinting
  • UWB
  • indoor maps
  • AOA
  • TOF
  • TDOA
  • privacy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Editorial

Jump to: Research

4 pages, 3752 KiB  
Editorial
Advances in Indoor Positioning and Indoor Navigation
by Antoni Perez-Navarro, Raúl Montoliu and Joaquín Torres-Sospedra
Sensors 2022, 22(19), 7375; https://doi.org/10.3390/s22197375 - 28 Sep 2022
Cited by 2 | Viewed by 2603
Abstract
Locating devices in indoor environments has become a key issue for many emerging location-based applications and intelligent spaces in different fields [...] Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)

Research

Jump to: Editorial

29 pages, 27963 KiB  
Article
Analysis of Magnetic Field Measurements for Indoor Positioning
by Guanglie Ouyang and Karim Abed-Meraim
Sensors 2022, 22(11), 4014; https://doi.org/10.3390/s22114014 - 25 May 2022
Cited by 11 | Viewed by 3551
Abstract
Infrastructure-free magnetic fields are ubiquitous and have attracted tremendous interest in magnetic field-based indoor positioning. However, magnetic field-based indoor positioning applications face challenges such as low discernibility, heterogeneous devices, and interference from ferromagnetic materials. This paper first analyzes the statistical characteristics of magnetic [...] Read more.
Infrastructure-free magnetic fields are ubiquitous and have attracted tremendous interest in magnetic field-based indoor positioning. However, magnetic field-based indoor positioning applications face challenges such as low discernibility, heterogeneous devices, and interference from ferromagnetic materials. This paper first analyzes the statistical characteristics of magnetic field (MF) measurements from heterogeneous smartphones. It demonstrates that, in the absence of disturbances, the MF measurements in indoor environments follow a Gaussian distribution with temporal stability and spatial discernibility. It shows the fluctuations in magnetic field intensity caused by the rotation of a smartphone around the Z-axis. Secondly, it suggests that the RLOWESS method can be used to eliminate magnetic field anomalies, using magnetometer calibration to ensure consistent MF measurements in heterogeneous smartphones. Thirdly, it tests the magnetic field positioning performance of homogeneous and heterogeneous devices using different machine learning methods. Finally, it summarizes the feasibility/limitations of using only MF measurement for indoor positioning. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
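The KNN-based evaluation mentioned in the abstract (cf. the confusion matrices in Figure 16) can be illustrated with a minimal, hypothetical fingerprinting sketch. The array shapes, the use of plain magnitude features, and the value of k below are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def knn_locate(fingerprints, locations, query, k=3):
    """Average the locations of the k reference points whose stored
    magnetic-field features are closest (Euclidean) to the query."""
    d = np.linalg.norm(fingerprints - query, axis=1)
    nearest = np.argsort(d)[:k]
    return locations[nearest].mean(axis=0)

# Hypothetical 1D corridor: one MF-magnitude feature per reference point.
fps = np.array([[40.0], [45.0], [50.0], [55.0]])   # stored magnitudes (uT)
locs = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
est = knn_locate(fps, locs, np.array([44.0]), k=2)  # nearest refs: 45 and 40
```

In practice, per-axis MF components or short magnitude sequences would replace the single scalar feature, and device calibration (as the paper stresses) is needed before heterogeneous smartphones can share one fingerprint map.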
Figures:
  • Figure 1: Soft and hard iron effects.
  • Figure 2: MF measurement of heterogeneous smartphones (iPhone Xs Max, Huawei P9, and Bluebird), with magnitude histograms.
  • Figure 3: MF measurements of heterogeneous smartphones at different dates on the same path (iPhone Xs Max, Samsung S9, Redmi Note 10 Pro, Huawei P9).
  • Figure 4: Trajectory test: MF measurements of heterogeneous smartphones on the same path, and of the iPhone Xs Max on two different paths.
  • Figure 5: Rotatable and height-adjustable platform.
  • Figure 6: Smartphone rotation test: magnetic field with rotation; magnetic direction.
  • Figure 7: Ellipse plot: xyz, xy, and xz plots.
  • Figure 8: Nine-DoF LSM9DS1 embedded with Arduino Pro Mini.
  • Figure 9: Nine-DoF LSM9DS1 sensor's measurements: original and smoothed magnitudes with histograms.
  • Figure 10: Smartphone calibration test (iPhone Xs Max, Huawei P9, Bluebird).
  • Figure 11: Magnitude of heterogeneous smartphones from 7 February 2020 to 29 June 2020, original and calibrated.
  • Figure 12: Comparison of heterogeneous smartphones: uncalibrated MF measurement of P2; calibration result of P2 and P10.
  • Figure 13: Architecture of the magnetic-based positioning system.
  • Figure 14: Building of Polytech Orléans—Galilée, University of Orléans, with test zones 1, 2, and 3.
  • Figure 15: Smartphone training set in zone 2.
  • Figure 16: Confusion matrix for KNN methods with different smartphones.
19 pages, 5370 KiB  
Article
Real-Time Map Matching with a Backtracking Particle Filter Using Geospatial Analysis
by Dorian Harder, Hossein Shoushtari and Harald Sternberg
Sensors 2022, 22(9), 3289; https://doi.org/10.3390/s22093289 - 25 Apr 2022
Cited by 2 | Viewed by 2175
Abstract
Inertial odometry is a typical localization method that is widely and easily accessible in many devices. Pedestrian positioning can benefit from this approach based on inertial measurement unit (IMU) values embedded in smartphones. Fitting the inertial odometry outputs, namely step length and step [...] Read more.
Inertial odometry is a typical localization method that is widely and easily accessible in many devices. Pedestrian positioning can benefit from this approach based on inertial measurement unit (IMU) values embedded in smartphones. Fitting the inertial odometry outputs, namely step length and step heading of a human for instance, with spatial information is an ubiquitous way to correct for the cumulative noises. This so-called map-matching process can be achieved in several ways. In this paper, a novel real-time map-matching approach was developed, using a backtracking particle filter that benefits from the implemented geospatial analysis, which reduces the complexity of spatial queries and provides flexibility in the use of different kinds of spatial constraints. The goal was to generalize the algorithm to permit the use of any kind of odometry data calculated by different sensors and approaches as the input. Further research, development, and comparisons have been done by the easy implementation of different spatial constraints and use cases due to the modular structure. Additionally, a simple map-based optimization using transition areas between floors has been developed. The developed algorithm could achieve accuracies of up to 3 m at approximately the 90th percentile for two different experiments in a complex building structure. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
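The propagate/filter/backtrack cycle described in the abstract can be sketched in a few lines. The single vertical-wall constraint, noise levels, and refill radius below are invented stand-ins for the paper's geospatial analysis, shown only to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, step_len, heading, sigma_len=0.1, sigma_head=0.05):
    """Move each particle by one noisy step (polar odometry input)."""
    n = len(particles)
    L = step_len + sigma_len * rng.standard_normal(n)
    H = heading + sigma_head * rng.standard_normal(n)
    return particles + np.column_stack((L * np.cos(H), L * np.sin(H)))

def crosses_wall(p_old, p_new, wall_x=5.0):
    """Toy spatial constraint: an infinite vertical wall at x = wall_x."""
    return (p_old[:, 0] - wall_x) * (p_new[:, 0] - wall_x) < 0

def pf_step(particles, step_len, heading):
    """One propagate/filter/backtrack cycle of the particle filter."""
    moved = propagate(particles, step_len, heading)
    survivors = moved[~crosses_wall(particles, moved)]
    # Backtracking: refill the population near randomly chosen survivors.
    deficit = len(particles) - len(survivors)
    if deficit > 0 and len(survivors) > 0:
        idx = rng.integers(len(survivors), size=deficit)
        refill = survivors[idx] + 0.05 * rng.standard_normal((deficit, 2))
        survivors = np.vstack((survivors, refill))
    return survivors
```

A real implementation would test particle moves against building geometry (e.g., via spatial joins as in the paper) rather than a single wall, and would re-check refilled particles against the same constraints.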
Figures:
  • Figure 1: Overview of the structure of the developed algorithm, which uses particle filtering as a map-matching process.
  • Figure 2: Geospatial analyses used in GeoPandas: distance, spatial join, query by attribute, and buffer.
  • Figure 3: Overview of the backtracking process: invalid particles are discarded and recreated near a particle with a valid trajectory.
  • Figure 4: Overview of the particle filter algorithm: initialization from a location fingerprint, propagation by odometry, correction by spatial constraints, and backtracking.
  • Figure 5: Overview of the filtering methods used (cm, cr, and cl).
  • Figure 6: Floor plans of the ground, 1st, and 4th floors, with blue lines representing the routing edges.
  • Figure 7: Ground truth points of the "eight" path, with start and finish positions.
  • Figure 8: Ground truth points of the "zerotofour" path, with start and finish positions.
  • Figure 9: CDF of the positioning error of the "eight" path with a step length correction of 0.1 m, for the backtracking PF and the PDR.
  • Figure 10: Part of the "eight" path trajectory that wrongly proceeds in the "gallery" when using the weighting-by-routing-support method.
  • Figure 11: Example of trajectory correction through the backtracking functionality: particles on a "wrong" path are deleted at the wall while plausible particles are recreated.
  • Figure 12: Effect of weighting by routing support on the deviated trajectory.
  • Figure 13: CDF of the positioning error of the "zerotofour" path with a step length correction of 0.2 m, for the backtracking PF and the PDR.
  • Figure 14: CDF of the positioning error of the "eight" path without step length correction, for the backtracking PF and the PDR.
  • Figure 15: Trajectory from the cl method for the "eight" path without step length correction for the backtracking PF.
  • Figure 16: CDF of the positioning error of the "zerotofour" path with a step length correction of 0.15 m, for the backtracking PF and the PDR.
27 pages, 104112 KiB  
Article
Toward Accurate Indoor Positioning: An RSS-Based Fusion of UWB and Machine-Learning-Enhanced WiFi
by Ghazaleh Kia, Laura Ruotsalainen and Jukka Talvitie
Sensors 2022, 22(9), 3204; https://doi.org/10.3390/s22093204 - 21 Apr 2022
Cited by 10 | Viewed by 4226
Abstract
A wide variety of sensors and devices are used in indoor positioning scenarios to improve localization accuracy and overcome harsh radio propagation conditions. The availability of these individual sensors suggests the idea of sensor fusion to achieve a more accurate solution. This work [...] Read more.
A wide variety of sensors and devices are used in indoor positioning scenarios to improve localization accuracy and overcome harsh radio propagation conditions. The availability of these individual sensors suggests the idea of sensor fusion to achieve a more accurate solution. This work aims to address, with the goal of improving localization accuracy, the fusion of two conventional candidates for indoor positioning scenarios: Ultra Wide Band (UWB) and Wireless Fidelity (WiFi). The proposed method consists of a Machine Learning (ML)-based enhancement of WiFi measurements, environment observation, and sensor fusion. In particular, the proposed algorithm takes advantage of Received Signal Strength (RSS) values to fuse range measurements utilizing a Gaussian Process (GP). The range values are calculated using the WiFi Round Trip Time (RTT) and UWB Two Way Ranging (TWR) methods. To evaluate the performance of the proposed method, trilateration is used for positioning. Furthermore, empirical range measurements are obtained to investigate and validate the proposed approach. The results prove that UWB and WiFi, working together, can compensate for each other’s limitations and, consequently, provide a more accurate position solution. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
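The abstract states that trilateration is used for positioning once the fused range estimates are available. A standard linearized least-squares trilateration step (a textbook sketch, not the authors' implementation) looks like this in 2D, with assumed anchor coordinates:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2D position from anchor coordinates and range
    measurements by subtracting the first circle equation from the
    others, which yields a linear least-squares problem in (x, y)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    r0 = ranges[0]
    # Each row i: 2*(xi - x0)*x + 2*(yi - y0)*y = b_i
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With more than three anchors the least-squares solution averages out ranging noise, which is where the GP-fused UWB/WiFi ranges of the paper would enter.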
Figures:
  • Figure 1: WiFi Fine Timing Measurement (FTM) protocol, illustrating one burst with 3 FTM interchanges.
  • Figure 2: The three phases of the proposed method.
  • Figure 3: Measurement setup in the anechoic chamber; data collection for model estimation.
  • Figure 4: Cumulative Distribution Function (CDF) of range measurement mean error at 21 points in the anechoic chamber.
  • Figure 5: Linear regression applied to all the range measurements.
  • Figure 6: Linear regression for the mean values.
  • Figure 7: CDF of range errors for the measurements collected in the real environment using the WiFi device.
  • Figure 8: Range measurement using the anchors and User Equipment (UE).
  • Figure 9: Reference points for evaluating the results inside the hall and the corridor.
  • Figure 10: Floor plan of the measurement campaign area.
  • Figure 11: Devices used for data collection on the UE side; configuration of devices at the location of one anchor.
  • Figure 12: UE positions estimated using standalone WiFi devices.
  • Figures 13 and 14: Distribution of Received Signal Strength (RSS) values received from WiFi anchors at points 10 and 18.
  • Figure 15: UE positions estimated using standalone Ultra Wide Band (UWB) devices.
  • Figures 16 and 17: Distribution of RSS values received from UWB anchors at points 10 and 18.
  • Figures 18 and 19: Maximum RSS values in one epoch at each reference point for signals received from three WiFi and three UWB anchors.
  • Figure 20: UE positions estimated using the proposed method.
  • Figure 21: Box plots of positioning errors; the proposed hybrid method provides the best accuracy with the smallest error variance.
  • Figure 22: Position error at three Line Of Sight (LOS) and three Non-Line Of Sight (NLOS) points using several range selection methods.
18 pages, 21115 KiB  
Article
Real-Time Sonar Fusion for Layered Navigation Controller
by Wouter Jansen, Dennis Laurijssen and Jan Steckel
Sensors 2022, 22(9), 3109; https://doi.org/10.3390/s22093109 - 19 Apr 2022
Cited by 4 | Viewed by 2185
Abstract
Navigation in varied and dynamic indoor environments remains a complex task for autonomous mobile platforms. Especially when conditions worsen, typical sensor modalities may fail to operate optimally and subsequently provide inapt input for safe navigation control. In this study, we present an approach [...] Read more.
Navigation in varied and dynamic indoor environments remains a complex task for autonomous mobile platforms. Especially when conditions worsen, typical sensor modalities may fail to operate optimally and subsequently provide inapt input for safe navigation control. In this study, we present an approach for the navigation of a dynamic indoor environment with a mobile platform with a single or several sonar sensors using a layered control system. These sensors can operate in conditions such as rain, fog, dust, or dirt. The different control layers, such as collision avoidance and corridor following behavior, are activated based on acoustic flow queues in the fusion of the sonar images. The novelty of this work is allowing these sensors to be freely positioned on the mobile platform and providing the framework for designing the optimal navigational outcome based on a zoning system around the mobile platform. Presented in this paper is the acoustic flow model used, as well as the design of the layered controller. Next to validation in simulation, an implementation is presented and validated in a real office environment using a real mobile platform with one, two, or three sonar sensors in real time with 2D navigation. Multiple sensor layouts were validated in both the simulation and real experiments to demonstrate that the modular approach for the controller and sensor fusion works optimally. The results of this work show stable and safe navigation of indoor environments with dynamic objects. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
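The layered (subsumption-style) controller described in the abstract can be illustrated schematically: the highest-priority layer whose zone contains a sonar reflection takes over the velocity command. The layer names, priorities, and velocity commands below are illustrative placeholders, not the paper's actual parameters:

```python
def layered_control(zone_hits):
    """Subsumption-style arbitration over behavior layers.

    zone_hits maps a layer name to True when the fused sonar images
    report energy inside that layer's zone around the platform.
    Returns the winning layer and a (linear, angular) velocity command.
    """
    layers = [  # ordered from highest to lowest priority
        ("collision_avoidance", (0.0, 0.8)),  # stop and turn away
        ("obstacle_avoidance",  (0.3, 0.4)),  # slow down and steer
        ("corridor_following",  (0.8, 0.0)),  # hold the corridor heading
    ]
    for name, command in layers:
        if zone_hits.get(name, False):
            return name, command
    return "cruise", (1.0, 0.0)               # default: drive straight
```

For example, a reflection in both the collision and corridor zones yields the collision-avoidance command, since lower layers are subsumed by higher-priority ones.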
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) The location of a reflection expressed in the spherical coordinate system (<math display="inline"><semantics> <mrow> <mi>r</mi> <mo>,</mo> <mi>θ</mi> <mo>,</mo> <mi>φ</mi> </mrow> </semantics></math>) associated with the sensor. The embedded real-time imaging sonar (eRTIS) sensor is drawn to show the irregularly positioned array of 32 digital microphones and the emitter. (<b>b</b>) A photo taken from the eRTIS sensor, as used in the real experiments. (<b>c</b>) An example of a 2D energyscape, a 2D sonar image of an 180° horizontal scan. Subfigures (<b>b</b>,<b>c</b>) were added for better visualisation of the sensor and energyscape concepts.</p>
Full article ">Figure 2
<p>Diagram of the digital signal processing steps to go from the raw microphone signals to the energyscapes. The initial signals containing the reflections of the detected objects are first demodulated to the full audio signals of the N channels. A matched filter is used for finding the actual reflections of the emitted signal. Delay-and-sum beamforming creates a spacial filter for every direction of interest. Subsequently, envelope detection is used to clean up the spatial image.</p>
Full article ">Figure 3
<p>Drawing of a mobile platform seen from the top with three eRTIS sensors. Each is described by the distance <span class="html-italic">l</span> from the platform’s rotation center point, the angle <math display="inline"><semantics> <mi>α</mi> </semantics></math> around on the XY-plane relative to the x-axis, and the angle <math display="inline"><semantics> <mi>β</mi> </semantics></math> which describes the local rotation of the sensor around its own center. The only constraint is that all sensors must be located roughly on the same horizontal XY-plane.</p>
Full article ">Figure 4
<p>(<b>a</b>) A mobile platform seen from the top with three sonar sensors. The <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>α</mi> <mo>,</mo> <mi>β</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>)</mo> </mrow> </semantics></math> parameter values are shown as well. Next to it, two reflecting objects exist, defined as A and B with their <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>r</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi>θ</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </semantics></math> coordinates. The placement of these reflectors in the figure is not accurate or to scale. (<b>b</b>) The <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>r</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>,</mo> <mi>θ</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>)</mo> </mrow> </semantics></math> flow-lines of reflections A and B for a positive linear motion <span class="html-italic">V</span> are marked for sonar sensors 1, 2, and 3 of <a href="#sensors-22-03109-f004" class="html-fig">Figure 4</a>a. The arrow indicates the direction the reflection would move with such a motion. The flow-lines for other distances falling in range (<math display="inline"><semantics> <mrow> <mi>r</mi> <mo>∈</mo> <mo>[</mo> <mn>0</mn> <mo>;</mo> <msub> <mi>r</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>]</mo> </mrow> </semantics></math>) are also shown, with <math display="inline"><semantics> <msub> <mi>r</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </semantics></math> being the maximum detected range of the eRTIS sensor.</p>
Full article ">Figure 5
<p>The <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>r</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>,</mo> <mi>θ</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>)</mo> </mrow> </semantics></math> flow-lines for a positive rotation movement <math display="inline"><semantics> <mi>ω</mi> </semantics></math> are marked for sonar sensors 1, 2, and 3 and reflections A and B of <a href="#sensors-22-03109-f004" class="html-fig">Figure 4</a>a. The arrow indicates the direction the reflection would move with such a motion. The flow-lines for other distances falling in range (<math display="inline"><semantics> <mrow> <mi>r</mi> <mo>∈</mo> <mo>[</mo> <mn>0</mn> <mo>;</mo> <msub> <mi>r</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>]</mo> </mrow> </semantics></math>) are also shown, with <math display="inline"><semantics> <msub> <mi>r</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </semantics></math> being the maximum detected range of the eRTIS sensor.</p>
Full article ">Figure 6
<p>Diagram of the subsumption architecture of the controller. The input of each layer is the energyscape and acoustic mask where <span class="html-italic">c</span> stands for the active motion primitive layer and <span class="html-italic">j</span> defines the index of the sensor. Note that the lower the layer is in the list, the higher its priority will be in a subsumption architecture.</p>
Full article ">Figure 7
<p>A region is defined for every control layer around the mobile platform. They are described by lines parallel to the mobile platform’s x-axis, a circular shape with a certain radius from the center of the platform or limited by an angle <math display="inline"><semantics> <mi>θ</mi> </semantics></math> from that same center point. The described shapes provide the input for the acoustic flow models to generate masks within the eneryscapes of each sonar sensor.</p>
Full article ">Figure 8
<p>The generated masks for each eRTIS sensor in <a href="#sensors-22-03109-f004" class="html-fig">Figure 4</a>a are drawn for the control zones of <a href="#sensors-22-03109-f007" class="html-fig">Figure 7</a> for the four different motion behavior layers.</p>
Full article ">Figure 9
<p>(<b>a</b>) A heat map of the trajectory distribution of the simulated scenario. It was taken over ten multi-sonar configurations. Each configuration was run 15 times for a total of 150 trajectories. The start zone, indicated as a rectangle with dashed edges, was the location where the starting position is chosen at random. Afterwards, the mobile platform would start moving between the sequential waypoints (shown as the blue numbered circles 1 to 6) based on the local layered navigation controller and global waypoint navigator. (<b>b</b>) A more detailed highlight of an area where a dynamic object interfered with the mobile platform path. This shows the adjustment the navigation controller made to avoid the dynamic object more clearly.</p>
Full article ">Figure 10
<p>(<b>a</b>) The experimental setup with a Clearpath Husky UGV as mobile platform used for validating the controller behavior. This figure shows the setup with three mounted eRTIS sensors that each have a built-in NVIDIA Jetson TX2 NX module. Furthermore, an Ouster OS0-128 LiDAR sensor is mounted on the top and is used for 2D map generation during the validation experiments. (<b>b</b>) A photo of the UGV taken during one of the experiments. More specifically, the avoidance of a direct wall collision. Subfigure (<b>a</b>) was moved from <a href="#sensors-22-03109-f001" class="html-fig">Figure 1</a> to better fit the paper structure. Subfigure (<b>b</b>) was added to show one of the setups.</p>
Full article ">Figure 11
<p>Experimental results for a real mobile platform navigating a room with an obstacle that needs to be dealt with safely. The room was empty except for the obstacle. The figure shows the trajectories of the six different configurations that were used for validating the adaptability of the layered controller. The trajectories were combined by using a single starting point. The map shown here is a manually cleaned-up version of the occupancy grid results of a LiDAR SLAM algorithm that was run offline. This figure shows the avoidance of a static obstacle, in this instance a chair. Although the behavior layer focused on here is obstacle avoidance, all layers were active during the experiments.</p>
Full article ">Figure 12
<p>The maps shown are manually cleaned-up versions of the occupancy grid results of a LiDAR SLAM algorithm that was run offline. Although the behavior layer focused on in each scenario is specific, all layers were active during the experiments. (<b>a</b>) Results for a real mobile platform navigating a room with an artificially added wall that needs to be dealt with safely. The figure shows the trajectories of the six different configurations that were used for validating the adaptability of the layered controller. The trajectories were combined by using a single starting point. (<b>b</b>) Experimental results for a real mobile platform navigating a T-shaped corridor. The figure shows the trajectories of the six different configurations that were used for validating the adaptability of the layered controller. The trajectories were combined by using a single starting point. (<b>c</b>) This figure shows one of the individual trajectories of (<b>b</b>), indicating for each point on the trajectory what layer of the controller was generating the output velocity.</p>
Full article ">
31 pages, 15792 KiB  
Article
NIKE BLUETRACK: Blue Force Tracking in GNSS-Denied Environments Based on the Fusion of UWB, IMUs and 3D Models
by Karin Mascher, Markus Watzko, Axel Koppert, Julian Eder, Peter Hofer and Manfred Wieser
Sensors 2022, 22(8), 2982; https://doi.org/10.3390/s22082982 - 13 Apr 2022
Cited by 5 | Viewed by 3099
Abstract
Blue force tracking represents an essential task in the field of military applications. A blue force tracking system provides commanders with the locations of their own forces on a map. For the command post, this results in more efficient operation control with increased safety. In underground structures (e.g., tunnels or subways), localisation is challenging due to the lack of GNSS signals. This paper presents a localisation system for military or emergency forces tailored to use in complex underground structures. In a particle filter, position changes from a dual foot-mounted INS are fused with opportunistic UWB ranges and data from a 3D tunnel model to derive position information. A concept for dealing with the absence of UWB infrastructure or 3D tunnel models is illustrated. Recurrent neural network methodologies are applied to cope with different motion types of the operators. The positioning algorithm was evaluated in a street tunnel. With a fully installed infrastructure, positioning errors under one metre were achieved. The results also showed that the INS can bridge UWB outages. A particle-filter-based approach to UWB anchor mapping is presented, and first simulation results showed its viability. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
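The fusion described in the abstract — propagating dead-reckoned position changes from the INS and reweighting particles with opportunistic UWB ranges — can be sketched as a minimal 2D particle filter. This is an illustrative sketch only, not the authors' implementation: the anchor position, noise levels, trajectory, and resampling threshold are all assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, delta_p, uwb_anchor, uwb_range,
            odo_sigma=0.05, range_sigma=0.1):
    """One particle-filter cycle: propagate with an INS position change,
    then reweight with a single UWB range measurement."""
    # Propagate: add the odometry increment plus process noise.
    particles = particles + delta_p + rng.normal(0, odo_sigma, particles.shape)
    # Reweight: Gaussian likelihood of the measured range to the anchor.
    pred = np.linalg.norm(particles - uwb_anchor, axis=1)
    weights = weights * np.exp(-0.5 * ((uwb_range - pred) / range_sigma) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy run: walk along a tunnel axis past one UWB anchor.
n = 500
particles = rng.normal([0.0, 0.0], 0.2, (n, 2))
weights = np.full(n, 1.0 / n)
anchor = np.array([5.0, 1.0])
truth = np.array([0.0, 0.0])
for _ in range(50):
    truth = truth + [0.1, 0.0]
    r = np.linalg.norm(truth - anchor) + rng.normal(0, 0.1)
    particles, weights = pf_step(particles, weights, np.array([0.1, 0.0]), anchor, r)
est = weights @ particles  # weighted-mean position estimate
```

The UWB likelihood constrains the radial drift of the dead-reckoned particles; as the platform moves past the anchor, the constrained direction rotates, so both axes are eventually corrected.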
Figure 1
<p>Subsurface structures pose high challenges to navigation systems due to changing distances, narrow spaces, and multiple levels without visual connection. The illustration shows a part of the Zentrum am Berg experimentation facility, where the field tests were conducted. (Picture: laabmayr).</p>
Full article ">Figure 2
<p>Usage of virtual reality (VR) for mission preparation. (<b>a</b>) VR user. (Picture: OEBH/Seeger). (<b>b</b>) Tactical symbols inside virtual reality (VR). (Picture: laabmayr).</p>
Full article ">Figure 3
<p>The overall system architecture.</p>
Full article ">Figure 4
<p>View in the Fast Tunnel Modelling Tool (FTMT) from a breakdown bay into a tunnel (map polygon (MP) in red, tunnel axis in white).</p>
Full article ">Figure 5
<p>Proposed filter architecture.</p>
Full article ">Figure 6
<p>Concept of spatially constraining a dual foot-mounted inertial navigation system (INS) in 2D using a maximal possible separation <math display="inline"><semantics> <mi>ρ</mi> </semantics></math>. The actual computed distance is denoted as <span class="html-italic">d</span>. (Adapted from [<a href="#B12-sensors-22-02982" class="html-bibr">12</a>]).</p>
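The spatial constraint of Figure 6 — capping the computed distance <i>d</i> between the two foot-mounted INS estimates at the maximal possible separation ρ — can be sketched as a symmetric projection back onto the constraint. This is an illustrative sketch of the idea, not the filter update actually used in the paper.

```python
import numpy as np

def constrain_feet(p_left, p_right, rho):
    """If the distance d between the two foot positions exceeds the
    maximal possible separation rho, pull both estimates symmetrically
    back onto the constraint; otherwise leave them unchanged."""
    p_left, p_right = np.asarray(p_left, float), np.asarray(p_right, float)
    diff = p_right - p_left
    d = np.linalg.norm(diff)
    if d <= rho:
        return p_left, p_right
    correction = 0.5 * (d - rho) * diff / d
    return p_left + correction, p_right - correction

# Feet 2 m apart with rho = 1 m: each estimate moves 0.5 m inwards.
p_left_c, p_right_c = constrain_feet([0.0, 0.0], [2.0, 0.0], 1.0)
```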
Full article ">Figure 7
<p>On the left, the concept of a recurrent-neural-network (RNN)-based zero-velocity detector is shown. <span class="html-italic">M</span> refers to the length of the sequence. On the right, the structure of a gated recurrent unit (GRU) cell (adapted from [<a href="#B43-sensors-22-02982" class="html-bibr">43</a>]) is illustrated, showing the context between the reset gate <span class="html-italic">r</span>, the update gate <span class="html-italic">z</span>, the activation <span class="html-italic">h</span>, and the candidate activation <math display="inline"><semantics> <mover accent="true"> <mi>h</mi> <mo>˜</mo> </mover> </semantics></math>.</p>
Full article ">Figure 8
<p>Flowchart of the particle filter architecture.</p>
Full article ">Figure 9
<p>Schematic of the feedback algorithm.</p>
Full article ">Figure 10
<p>Multiple VR users looking at tactical symbols in the Subsurface Operation Mission Tool (SOMT).</p>
Full article ">Figure 11
<p>Test subject with sensors mounted on helmet and shoes.</p>
Full article ">Figure 12
<p>Filter solutions of Test 5. The top panel, middle panel, and bottom panel show the accumulated position changes from the INS, the derived position from the ultra-wideband (UWB), data and the filter solution, respectively. The UWB anchors are marked as triangles (white filling: on, black filling: off).</p>
Full article ">Figure 13
<p>Different filter effects on the INS using the first round of Test 5 as an example. The top left and top right panels illustrate the individual and the fused solution of the foot-mounted zero-velocity-aided INS, respectively. The bottom left panel represents the accumulated position changes from the INS, which act as the input to the particle filter (PF). The bottom right panel shows the corrected INS positions due to the feedback from the PF. The UWB anchors are marked as triangles (white filling: on, black filling: off).</p>
Full article ">Figure 14
<p>Filter solutions of Tests 3 (top) and 4 (bottom). In Test 3, the operator walks 3 times along the wall and centre line, whereas Test 4 contains running along the centre line. UWB data are additionally plotted to highlight interference areas due to parked vehicles. The UWB anchors are marked as triangles.</p>
Full article ">Figure 15
<p>Time series of errors in the anchor position estimates for Case 1 for each anchor, with and without the wall constraint. In each epoch, one range measurement was processed. The number of epochs differs due to the number of available ranges (simulated ranges &lt;45 m).</p>
Full article ">Figure 16
<p>Time series of errors in the anchor position estimates for Case 2, with and without the wall constraint. In each epoch, one range measurement was processed. The number of epochs differs due to the number of available ranges (simulated ranges &lt;45 m).</p>
Full article ">
20 pages, 4125 KiB  
Article
Accuracy and Precision of Agents Orientation in an Indoor Positioning System Using Multiple Infrastructure Lighting Spotlights and a PSD Sensor
by Álvaro De-La-Llana-Calvo, José Luis Lázaro-Galilea, Aitor Alcázar-Fernández, Alfredo Gardel-Vicente, Ignacio Bravo-Muñoz and Andreea Iamnitchi
Sensors 2022, 22(8), 2882; https://doi.org/10.3390/s22082882 - 9 Apr 2022
Cited by 6 | Viewed by 2642
Abstract
In indoor localization, there are applications in which the orientation of the agent to be located is as important as its position. In this paper, we present the results of orientation estimation with a local positioning system based on position-sensitive device (PSD) sensors and the visible light emitted by the illumination of the room in which the agent is located. Orientation estimation requires that the PSD sensor receive a signal from either two or four light sources simultaneously. As shown in the article, the error in determining the rotation angle of the agent with the on-board sensor is less than 0.2 degrees for two emitters. With four light sources, all three Euler rotation angles are determined, with mean measurement errors smaller than 0.35° for the x- and y-axes and 0.16° for the z-axis. The accuracy of the measurement was evaluated experimentally in a room with a 2.5 m-high ceiling, over an area of 2.2 m², using geodetic measurement tools to establish the reference ground truth values. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
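For the two-emitter case in the abstract, the rotation about the vertical axis can be illustrated geometrically: compare the direction of the line joining the two light-spot projections on the PSD with the known direction joining the two ceiling emitters. This is a hypothetical, tilt-free simplification, not the paper's full projection model.

```python
import math

def yaw_from_two_spots(spot_a, spot_b, emitter_a, emitter_b):
    """Rotation about the vertical axis, from the angle between the line
    joining the two spot projections on the PSD and the known line
    joining the two ceiling emitters (tilt-free simplification)."""
    ang_psd = math.atan2(spot_b[1] - spot_a[1], spot_b[0] - spot_a[0])
    ang_world = math.atan2(emitter_b[1] - emitter_a[1],
                           emitter_b[0] - emitter_a[0])
    diff = ang_world - ang_psd
    return math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]

# Emitters along the world x-axis; spots appear rotated by -30 degrees
# on the PSD, so the sensor itself is rotated by +30 degrees.
spot_b = (math.cos(math.radians(-30.0)), math.sin(math.radians(-30.0)))
yaw = yaw_from_two_spots((0.0, 0.0), spot_b, (0.0, 0.0), (1.0, 0.0))
```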
Figure 1
<p>Scheme of the proposed positioning system.</p>
Full article ">Figure 2
<p>Equivalent circuit of the PSD pin-cushion (image courtesy of Hamamatsu, obtained from the PSD technical information).</p>
Full article ">Figure 3
<p>Triangulation method for obtaining the height <span class="html-italic">H</span> parameter.</p>
Full article ">Figure 4
<p>Method for obtaining the angle <math display="inline"><semantics> <msub> <mi>θ</mi> <mi>PSD</mi> </msub> </semantics></math>.</p>
Full article ">Figure 5
<p>Diagram of the four emitters and their projections on the PSD surface.</p>
Full article ">Figure 6
<p>Test environment.</p>
Full article ">Figure 7
<p>Rotation angle measurements using two Leica TS60 total stations.</p>
Full article ">Figure 8
<p>Emitter positions (1–4), receiver (green), and two total stations TS1/TS2 in the environment.</p>
Full article ">Figure 9
<p>Rotation angle measurements using all combinations of emitter pairs.</p>
Full article ">Figure 10
<p>Rotation error as a function of true values of the rotation angles for all combinations of emitter pairs.</p>
Full article ">Figure 11
<p>CDF of the rotation error for measurements carried out for all combinations of emitter pairs.</p>
Full article ">Figure 12
<p>Mean value of the variance of the rotation angles as a function of the measurement time for each combination of emitter pairs.</p>
Full article ">Figure 13
<p>X-axis angle using DLT and CPC.</p>
Full article ">Figure 14
<p>Y-axis angle using DLT and CPC.</p>
Full article ">Figure 15
<p>Z-axis angle using DLT and CPC.</p>
Full article ">Figure 16
<p>Angle error in the 3 axes using DLT as a function of the true rotation angle (z-axis angle).</p>
Full article ">Figure 17
<p>Angle error in the 3 axes using CPC as a function of the true rotation angle (z-axis angle).</p>
Full article ">Figure 18
<p>CDF of the angle error in the 3 axes using DLT.</p>
Full article ">Figure 19
<p>CDF of the angle error in the 3 axes using CPC.</p>
Full article ">Figure 20
<p>Mean value of the variance of the three-axis orientation angles as a function of the measurement time using DLT.</p>
Full article ">Figure 21
<p>Mean value of the variance of the three-axis orientation angles as a function of the measurement time using CPC.</p>
Full article ">
19 pages, 4605 KiB  
Article
Multipath-Assisted Radio Sensing and State Detection for the Connected Aircraft Cabin
by Jonas Ninnemann, Paul Schwarzbach, Michael Schultz and Oliver Michler
Sensors 2022, 22(8), 2859; https://doi.org/10.3390/s22082859 - 8 Apr 2022
Cited by 11 | Viewed by 2560
Abstract
Efficiency and reliable turnaround time are core features of modern aircraft transportation and key to its future sustainability. Given the connected aircraft cabin, the deployment of digitized and interconnected sensors, devices and passengers provides comprehensive state detection within the cabin. More specifically, passenger localization and occupancy detection can be monitored using location-aware communication systems, also known as wireless sensor networks. These multi-purpose communication systems serve a variety of capabilities, ranging from passenger convenience communication services, through crew member devices, to maintenance planning. In addition, radio-based sensing enables an efficient sensory basis for state monitoring, e.g., passive seat occupancy detection. Within the scope of the connected aircraft cabin, this article presents a multipath-assisted radio sensing (MARS) approach using the propagation information of transmitted signals, which is provided by the channel impulse response (CIR) of the wireless communication channel. By performing a geometrical mapping of the CIR, reflection sources are revealed, and the occupancy state can be derived. For this task, both probabilistic filtering and k-nearest neighbor classification are discussed. In order to evaluate the proposed methods, passenger occupancy detection and state detection for the future automation of passenger safety announcements and checks are addressed. Therefore, experimental measurements are performed using commercially available wideband communication devices, both in close-to-ideal conditions in an RF anechoic chamber and in a cabin seat mockup. In both environments, reliable radio-sensing state detection was achieved. In conclusion, this paper provides a basis for the future integration of energy- and spectrally efficient joint communication and sensing radio systems within the connected aircraft cabin. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
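The k-nearest-neighbor classification of CIRs mentioned in the abstract can be sketched with a minimal majority-vote classifier on CIR feature vectors. The toy "CIR" data below (16 taps, a direct-path peak plus an occupancy reflection) are invented for illustration and do not come from the paper's measurements.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training CIR feature vectors
    (Euclidean distance), as in a kNN state-detection stage."""
    nearest = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy CIR magnitudes: occupancy adds a reflection peak at tap 8.
rng = np.random.default_rng(1)
empty = rng.normal(0.0, 0.05, (20, 16))
empty[:, 3] += 1.0                                   # direct path at tap 3
occupied = empty + rng.normal(0.0, 0.05, (20, 16))
occupied[:, 8] += 0.6                                # reflection from passenger
X = np.vstack([empty, occupied])
y = ["empty"] * 20 + ["occupied"] * 20
probe = np.zeros(16)
probe[3], probe[8] = 1.0, 0.55                       # unseen occupied-like CIR
label = knn_classify(X, y, probe)
```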
Figure 1
<p>Potentials of integrated communication, localization and sensing for connected cabins.</p>
Full article ">Figure 2
<p>Multipath propagation in the aircraft cabin: 3D model of a seat row including a passenger, the theoretic CIR and the geometric interpretation using ellipses. Direct paths are given in red, reflection paths in green and blue.</p>
Full article ">Figure 3
<p>Radio propagation simulation with deterministic ray tracing to explore the multipath propagation of signals in the aircraft cabin: (<b>a</b>) seat 2 occupied with (<b>b</b>) corresponding CIR, (<b>c</b>) reference measurement (no seat occupied) with (<b>d</b>) corresponding CIR. The CIR plots show the propagation paths or multipath components (red) and the bandlimited reconstructed CIR (blue line).</p>
Full article ">Figure 4
<p>Flowchart of data processing and computation steps for radio sensing state detection.</p>
Full article ">Figure 5
<p>Detailed computation steps of CIR processing.</p>
Full article ">Figure 6
<p>Subtraction and reflection path estimation: CIR with the object (blue), static background CIR (green), subtracted CIR (red), detection corridor (orange) and estimated reflection path (red dot).</p>
Full article ">Figure 7
<p>Flowgraph of probability grid mapping.</p>
Full article ">Figure 8
<p>Exemplary probability mapping output for a seat occupancy detection scenario.</p>
Full article ">Figure 9
<p>CIR kNN classification flowgraph.</p>
Full article ">Figure 10
<p>Example time series of the testing dataset (red) with the three closest neighbors from the training dataset (black) considered for the classification: (<b>a</b>) seat 1 occupied by a passenger (IV) and (<b>b</b>) Table 3 in use (I).</p>
Full article ">Figure 11
<p>Measurement setup with one transmitter and two receivers: (<b>a</b>) in the RF anechoic chamber and (<b>b</b>) in the lab mockup. Applied seat/table numbering is also indicated.</p>
Full article ">Figure 12
<p>Grid-based mapping and detection: (<b>a</b>) mapping with Table 3 in use (dataset I) and (<b>b</b>) mapping with seat 1 occupied (dataset IV). Estimated positions for all observation steps (grey crosses), sensor position (grey dots) and detection area (red). In addition, the probability map is displayed, including the Likelihood result of the last observation step.</p>
Full article ">Figure 13
<p>Example of CIR time series for each dataset (I–IV) across the different detection scenes.</p>
Full article ">Figure 14
<p>Confusion matrix for the classification: (<b>a</b>) table detection in the anechoic chamber (I a); (<b>b</b>) table detection in the lab mockup (II a); (<b>c</b>) seat occupancy detection in the anechoic chamber (III).</p>
Full article ">Figure 15
<p>Influence of different parameters on the accuracy of the kNN classification: (<b>a</b>) size of the training dataset relative to overall dataset size; (<b>b</b>) number <span class="html-italic">k</span> of neighbors; (<b>c</b>) distance metrics. Accuracy of the testing data (solid line) and accuracy of the training data (dotted line).</p>
Full article ">Figure 16
<p>Future connected aircraft cabin system for communication, localization and sensing.</p>
Full article ">
19 pages, 3910 KiB  
Article
An Extended Kalman Filter for Magnetic Field SLAM Using Gaussian Process Regression
by Frida Viset, Rudy Helmons and Manon Kok
Sensors 2022, 22(8), 2833; https://doi.org/10.3390/s22082833 - 7 Apr 2022
Cited by 25 | Viewed by 4430
Abstract
We present a computationally efficient algorithm for using variations in the ambient magnetic field to compensate for position drift in integrated odometry measurements (dead-reckoning estimates) through simultaneous localization and mapping (SLAM). When the magnetic field map is represented with a reduced-rank Gaussian process (GP) using Laplace basis functions defined in a cubical domain, analytic expressions of the gradient of the learned magnetic field become available. An existing approach for magnetic field SLAM with reduced-rank GP regression uses a Rao-Blackwellized particle filter (RBPF). For each incoming measurement, training of the magnetic field map using an RBPF has a computational complexity per time step of O(Np·Nm²), where Np is the number of particles and Nm is the number of basis functions used to approximate the Gaussian process. In contrast to the existing particle-filter-based approach, we propose applying an extended Kalman filter based on the gradients of our learned magnetic field map for simultaneous localization and mapping. Our proposed algorithm only requires training a single map and therefore has a computational complexity at each time step of O(Nm²). We demonstrate the workings of the extended Kalman filter for magnetic field SLAM on an open-source data set from a foot-mounted sensor and on magnetic field measurements collected onboard a model ship in an indoor pool. We observe that the drift-compensating abilities of our algorithm are comparable to what has previously been demonstrated for magnetic field SLAM with an RBPF. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
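The reduced-rank GP map in the abstract approximates the field with Nm eigenfunctions of the Laplace operator on a bounded domain, so the per-measurement cost scales with the number of basis functions rather than the number of measurements. A 1-D sketch follows (the paper uses a 3-D cubical domain; the squared-exponential kernel, its hyperparameters, and the toy target function are assumptions for illustration).

```python
import numpy as np

def laplace_basis_1d(x, n_basis, L):
    """Eigenfunctions of the negative Laplacian on [-L, L] with Dirichlet
    boundary conditions, evaluated at the points x (1-D sketch)."""
    j = np.arange(1, n_basis + 1)
    return np.sqrt(1.0 / L) * np.sin(
        np.pi * j * (np.atleast_1d(x)[:, None] + L) / (2 * L))

def se_spectral_density(omega, sigma_f, ell):
    """Spectral density of the squared-exponential kernel in 1-D."""
    return sigma_f**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * omega) ** 2)

# Reduced-rank GP regression: y = Phi(x) w + noise, with w ~ N(0, diag(S)).
L_dom, m, sigma_y = 2.0, 32, 0.05
x_train = np.linspace(-1.5, 1.5, 60)
y_train = np.sin(3 * x_train) + np.random.default_rng(2).normal(0, sigma_y, 60)
Phi = laplace_basis_1d(x_train, m, L_dom)
omega = np.pi * np.arange(1, m + 1) / (2 * L_dom)   # basis frequencies
S = se_spectral_density(omega, sigma_f=1.0, ell=0.3)
# Posterior mean of the m basis weights: the linear algebra involves only
# m x m matrices, which is where the O(Nm^2) per-step cost comes from.
w = np.linalg.solve(Phi.T @ Phi + sigma_y**2 * np.diag(1.0 / S), Phi.T @ y_train)
y_pred = Phi @ w
```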
Figure 1
<p>Learned magnetic field variations in the test pool. The color corresponds to the estimated norm of the magnetic field map, while the opacity is inversely proportional to the variance of the estimate.</p>
Full article ">Figure 2
<p>Comparison of approximations of the filtered position distribution given measurements from a simulated nonlinear field. The color indicates the norm of the simulated magnetic field. The covariance ellipsoids indicate the <math display="inline"><semantics> <mrow> <mn>68</mn> <mo>%</mo> </mrow> </semantics></math> confidence interval of the EKF estimate. (<b>a</b>) Estimates of the filtered distribution based on predictive estimates with error 0.40 m. (<b>b</b>) Estimates of the filtered distribution based on predictive estimates with error 0.05 m.</p>
Full article ">Figure 3
<p>Simulation, investigating drift-compensating abilities given varying predictive position estimation errors. Comparison of position estimation error at the end of the trajectory between Algorithm 1 and a particle filter for localization in a known map with varying predictive position errors at the initialisation of the simulation. The lines connect the average results after 100 Monte Carlo repetitions with different realisations of the odometry noise, and the error bars represent one standard deviation. (<b>a</b>) Estimation accuracies with varying predictive position error. (<b>b</b>) Estimation accuracies with varying length scales <math display="inline"><semantics> <msub> <mi>l</mi> <mi>SE</mi> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>Comparison of the model ship position trajectory estimates for a single realisation of simulated odometry noise from a bird’s-eye view. (<b>a</b>) Comparing Algorithm 1 and the odometry to the ground truth. (<b>b</b>) Comparison of the position estimates from the RBPF with 100, 200, and 500 particles, respectively, to the ground truth.</p>
Full article ">Figure 5
<p>Measured and estimated magnetic field and position trajectories for the model ship. The upper plot marks with circles the locations where magnetic field measurements were successfully collected and matched with a ground truth position in the model ship, and the colors of the circles correspond to the norm of the measured magnetic field. The lower plot displays the trajectory estimate from applying Algorithm 1 in black. It also shows the learned magnetic field map, where the color corresponds to the norm of the estimated magnetic field <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msub> <mo>∇</mo> <mi>p</mi> </msub> <mi mathvariant="normal">Φ</mi> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> <msub> <mover accent="true"> <mi>m</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mi>N</mi> <mo>|</mo> <mi>N</mi> </mrow> </msub> <msub> <mrow> <mo>∥</mo> </mrow> <mn>2</mn> </msub> </mrow> </semantics></math>, and the opacity is inversely proportional with the trace of the covariance matrix of the magnetic field map estimate in each location, <math display="inline"><semantics> <mrow> <mi>Tr</mi> <mo>(</mo> <msub> <mo>∇</mo> <mi>p</mi> </msub> <mi mathvariant="normal">Φ</mi> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> <msub> <mi>P</mi> <mrow> <mi>N</mi> <mo>|</mo> <mi>N</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msub> <mo>∇</mo> <mi>p</mi> </msub> <mi mathvariant="normal">Φ</mi> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mo>⊤</mo> </msup> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) Measured magnetic field norm in ground truth positions. (<b>b</b>) The estimated magnetic field norm is displayed with the semi-transparent color map and the estimated trajectory is displayed with the black line.</p>
Full article ">Figure 6
<p>Comparison of model ship position estimation errors from Algorithm 1, drifting odometry and the PF with 100, 200 and 500 particles respectively for a single realisation of simulated odometry noise.</p>
Full article ">Figure 7
<p>Investigation of the effect of varying odometry noise on the model ship position estimate. The lines connect the average results of the ship position estimation after 100 Monte Carlo repetitions with different realisations of the simulated odometry for varying amounts of odometry noise. (<b>a</b>) Model ship position estimation error at the end of the trajectory for varying amounts of odometry noise. (<b>b</b>) The max norm of the predictive covariance of the estimate from Algorithm 1 depending on varying odometry noise.</p>
Full article ">Figure 8
<p>Trajectory and magnetic field map estimate for the foot-mounted sensor data. The estimated trajectory obtained with Algorithm 1 is compared to odometry from the foot-mounted sensor data obtained via the implementation in [<a href="#B40-sensors-22-02833" class="html-bibr">40</a>] of the ZUPT-aided EKF using a foot-mounted accelerometer and gyroscope. The color of the magnetic field map corresponds to the norm of the estimated magnetic field, and the opacity is inversely proportional to the sum of the marginal variances of the three estimated magnetic field components. (<b>a</b>) The learned magnetic field, displayed with the semi-transparent color map, and the estimated trajectory, displayed with the black line, together with odometry from the foot-mounted sensor, from a bird’s-eye view. (<b>b</b>) Trajectory estimate from Algorithm 1 compared to the odometry, from a bird’s-eye view.</p>
Full article ">
37 pages, 1872 KiB  
Article
Meaningful Test and Evaluation of Indoor Localization Systems in Semi-Controlled Environments
by Jakob Schyga, Johannes Hinckeldeyn and Jochen Kreutzfeldt
Sensors 2022, 22(7), 2797; https://doi.org/10.3390/s22072797 - 6 Apr 2022
Cited by 5 | Viewed by 3506
Abstract
Despite their enormous potential, indoor localization systems (ILS) are still rarely used. One reason is the lack of market transparency and of stakeholders’ trust in the systems’ performance, a consequence of insufficient use of test and evaluation (T&E) methodologies. The heterogeneous nature of ILS, their influences, and their applications pose various challenges for the design of a methodology that provides meaningful results. Methodologies for building-wide testing exist, but their use is mostly limited to associated indoor localization competitions. In this work, the T&E 4iLoc Framework is proposed—a methodology for T&E of indoor localization systems in semi-controlled environments based on a system-level and black-box approach. In contrast to building-wide testing, T&E in semi-controlled environments, such as test halls, is characterized by lower costs, higher reproducibility, and better comparability of the results. The limitation of low transferability to real-world applications is addressed by an application-driven design approach. The empirical validation of the T&E 4iLoc Framework, based on the examination of a contour-based light detection and ranging (LiDAR) ILS, an ultra-wideband ILS, and a camera-based ILS for the application of automated guided vehicles in warehouse operation, demonstrates the benefits of T&E with the T&E 4iLoc Framework. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
Figure 1
<p>Matching requirements and localization systems—a multi-dimensional problem.</p>
Full article ">Figure 2
<p>The V-Model—illustration of the application-driven T&amp;E process with the involved stakeholders, their functions, and requirements.</p>
Full article ">Figure 3
<p>Architecture of the <span class="html-italic">T&amp;E 4iLoc Framework</span>.</p>
Full article ">Figure 4
<p>Functions of the module <span class="html-italic">Application Definition</span>.</p>
Full article ">Figure 5
<p>Functions of the module <span class="html-italic">Requirement Specification</span> (<b>a</b>) and <span class="html-italic">Scenario Definition</span> (<b>b</b>).</p>
Full article ">Figure 6
<p>Functions of the module <span class="html-italic">Experiment Specification</span> (<b>a</b>) and <span class="html-italic">Experiment Execution</span> (<b>b</b>).</p>
Full article ">Figure 7
<p>Random grid-based sampled evaluation poses. The arrows indicate the heading direction of each evaluation pose.</p>
Full article ">Figure 8
<p>Determination of the transformation matrix <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>f</mi> <mo>,</mo> <mi>l</mi> <mi>o</mi> <mi>c</mi> </mrow> </msub> </semantics></math> between <math display="inline"><semantics> <msub> <mi>O</mi> <mrow> <mi>l</mi> <mi>o</mi> <mi>c</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>O</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>f</mi> </mrow> </msub> </semantics></math> with the Umeyama alignment [<a href="#B48-sensors-22-02797" class="html-bibr">48</a>].</p>
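The Umeyama alignment referenced in Figure 8 estimates the transform between the localization frame and the reference frame; a minimal SVD-based sketch of the rigid (rotation-plus-translation) case, without the scale factor, is:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the SVD solution of Umeyama/Kabsch, without the scale factor."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known 30 degree rotation and a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = umeyama_alignment(src, dst)
```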
Full article ">Figure 9
<p>Functions of the module <span class="html-italic">Performance Evaluation</span> (<b>a</b>) and <span class="html-italic">System Evaluation</span> (<b>b</b>).</p>
Full article ">Figure 10
<p>The left side shows a top view of the test area in a semi-controlled test environment, with evaluation poses, interpolated reference data, and aligned localization data. On the right side, a focused view of a test point is shown to illustrate the determination of the <span class="html-italic">Evaluation Data</span>.</p>
Full article ">Figure 11
<p>Overview of the <span class="html-italic">T&amp;E 4iLoc Framework</span>, with its modules, functions, and output data.</p>
Full article ">Figure 12
<p>Dimensions of an AGV in an aisle for the quantification of performance requirements.</p>
Full article ">Figure 13
<p>(<b>a</b>) Turtlebot2 carrying the localization sensors and motion capture reflectors. (<b>b</b>) Schematic overview of the <span class="html-italic">Experiment Spec</span>.</p>
Full article ">Figure 14
<p>(<b>a</b>) Setup of the environment at the Institute for Technical Logistics. (<b>b</b>) Recorded map from the LiDAR ILS. The grid with a grid length of 1 m is aligned with the map coordinate system.</p>
Full article ">Figure 15
<p>(<b>a</b>) Trajectories based on continuous position estimates. (<b>b</b>) Horizontal error over measurement time. (<b>c</b>) Cumulative distribution histogram of the horizontal error. (<b>d</b>) Error scatter. (<b>e</b>) Heading error over measurement time. (<b>f</b>) Cumulative distribution histogram of the absolute heading error.</p>
Full article ">
18 pages, 5971 KiB  
Article
Deep Learning-Based Indoor Localization Using Multi-View BLE Signal
by Aristotelis Koutris, Theodoros Siozos, Yannis Kopsinis, Aggelos Pikrakis, Timon Merk, Matthias Mahlig, Stylianos Papaharalabos and Peter Karlsson
Sensors 2022, 22(7), 2759; https://doi.org/10.3390/s22072759 - 2 Apr 2022
Cited by 26 | Viewed by 6416
Abstract
In this paper, we present a novel Deep Neural Network-based indoor localization method that estimates the position of a Bluetooth Low Energy (BLE) transmitter (tag) by using the received signals’ characteristics at multiple Anchor Points (APs). We use the received signal strength indicator (RSSI) value and the in-phase and quadrature-phase (IQ) components of the received BLE signals at a single time instance to simultaneously estimate the angle of arrival (AoA) at all APs. Through supervised learning on simulated data, various machine learning (ML) architectures are trained to perform AoA estimation using varying subsets of anchor points. In the final stage of the system, the estimated AoA values are fed to a positioning engine which uses the least squares (LS) algorithm to estimate the position of the tag. The proposed architectures are trained and rigorously tested on several simulated room scenarios and are shown to achieve a localization accuracy of 70 cm. Moreover, the proposed systems possess generalization capabilities by being robust to modifications in the room’s content or anchors’ configuration. Additionally, some of the proposed architectures have the ability to distribute the computational load over the APs. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
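The final positioning stage described in the abstract, where AoA estimates from multiple anchor points are combined by a least squares engine, can be sketched in 2-D as follows. This is an illustrative reconstruction under our own assumptions (planar geometry, ideal bearings, our function and variable names), not the authors' implementation:

```python
import numpy as np

def ls_position_from_aoa(anchors, bearings):
    """Least-squares 2-D position from per-anchor AoA bearings.

    Each bearing theta_i defines a line through anchor a_i with
    direction (cos theta_i, sin theta_i); stacking the normal
    constraints n_i . (p - a_i) = 0 gives an overdetermined linear
    system that is solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    thetas = np.asarray(bearings, dtype=float)
    normals = np.stack([-np.sin(thetas), np.cos(thetas)], axis=1)
    rhs = np.einsum("ij,ij->i", normals, anchors)
    position, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return position
```

With noisy AoA estimates (the realistic case studied in the paper), the same call returns the point minimizing the sum of squared distances to all bearing lines.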
Show Figures

Figure 1
<p>(<b>a</b>) Channel fusion module; (<b>b</b>) independent AP architecture. Layer sizes for each MLP are shown in parentheses.</p>
Full article ">Figure 2
<p>(<b>a</b>) Channel and AP fusion module; (<b>b</b>) Fully joint AP architecture.</p>
Full article ">Figure 3
<p>Triplets of APs architecture.</p>
Full article ">Figure 4
<p>Architecture of CNN-based joint APs.</p>
Full article ">Figure 5
<p>Visualization of the four room configurations.</p>
Full article ">Figure 6
<p>(<b>a</b>) RSSI values measured by a single AP for three different room configurations and (<b>b</b>) cosine distance of IQ features between the low-furniture room configuration and the remaining furniture configurations.</p>
Full article ">Figure 7
<p>MEDE of the CNN-based joint model on different rooms. The performance of the PDDA method in each room is included for reference.</p>
Full article ">Figure 8
<p>MEDE of the pairs-of-APs model on different rooms. The performance of the PDDA method in each room is included for reference.</p>
Full article ">Figure 9
<p>Spatial distribution of the Euclidean error on the high-furniture room for (<b>a</b>) PDDA; (<b>b</b>) the CNN-based joint model; (<b>c</b>) the pairs-of-APs model.</p>
Full article ">Figure 10
<p>Performance degradation when the anchor points are rotated by 5° or translated by 10 cm.</p>
Full article ">Figure 11
<p>MEDE of the joint model across all rooms for different numbers of training points.</p>
Full article ">Figure 12
<p>Training (red dots), validation (green dots), and test locations (blue dots) for training sets of size (<b>a</b>) 140 and (<b>b</b>) 25.</p>
Full article ">Figure 13
<p>MEDE of the fully joint model and the CNN-based joint model across all rooms versus the number of parameters.</p>
Full article ">
24 pages, 9920 KiB  
Article
Indoor Positioning of Low-Cost Narrowband IoT Nodes: Evaluation of a TDoA Approach in a Retail Environment
by Daniel Neunteufel, Stefan Grebien and Holger Arthaber
Sensors 2022, 22(7), 2663; https://doi.org/10.3390/s22072663 - 30 Mar 2022
Cited by 8 | Viewed by 2288
Abstract
The localization of internet of things (IoT) nodes in indoor scenarios with strong multipath channel components is challenging. All methods using radio signals, such as received signal strength (RSS) or angle of arrival (AoA), are inherently prone to multipath fading. Especially for time [...] Read more.
The localization of internet of things (IoT) nodes in indoor scenarios with strong multipath channel components is challenging. All methods using radio signals, such as received signal strength (RSS) or angle of arrival (AoA), are inherently prone to multipath fading. Especially for time of flight (ToF) measurements, the low available transmit bandwidth of the used transceiver hardware is problematic. In our previous work on this topic, we showed that wideband signal generation on narrowband low-power transceiver chips is feasible without any changes to existing hardware. Together with a fixed wideband receiving anchor infrastructure, this facilitates time difference of arrival (TDoA) and AoA measurements and allows for the localization of fully asynchronously transmitting nodes. In this paper, we present a measurement campaign using a receiver infrastructure based on software-defined radio (SDR) platforms. This proves the actual usability of the proposed method within the limitations of the bandwidth available in the 2.4 GHz ISM band. We use the results to analyze the effects of possible anchor placement schemes and scenario geometries. We further demonstrate how this node-to-infrastructure localization scheme can be supported by additional node-to-node RSS measurements using a simple clustering approach. In the considered scenario, an overall positioning root-mean-square error (RMSE) of 2.19 m is achieved. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
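The TDoA localization evaluated in this paper, estimating a position from range differences to a common reference anchor, can be illustrated with a small Gauss-Newton solver. This is a generic sketch under our own assumptions (2-D geometry, range differences already scaled from time to metres, a hand-picked initial guess), not the authors' processing chain:

```python
import numpy as np

def tdoa_gauss_newton(anchors, range_diffs, p0, iters=50):
    """Gauss-Newton solver for 2-D TDoA localization.

    range_diffs[i] is the measured ||p - a_{i+1}|| - ||p - a_0||
    in metres (time differences multiplied by the speed of light).
    """
    a = np.asarray(anchors, dtype=float)
    p = np.asarray(p0, dtype=float)
    rd = np.asarray(range_diffs, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(p - a, axis=1)   # distances to all anchors
        u = (p - a) / d[:, None]            # unit vectors anchor -> p
        r = (d[1:] - d[0]) - rd             # residuals vs measurements
        J = u[1:] - u[0]                    # Jacobian of range differences
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

With more than three anchors the system is overdetermined, and the least-squares step naturally averages out measurement noise.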
Show Figures

Figure 1
<p>Basic characteristics of the proposed modulation format in the time-frequency plane. Customized chirped waveform generation makes it possible to cover a bandwidth much larger than the typical transmission bandwidth of about 1 MHz. The typical transmitter hardware characteristics limit the achievable chirp width to about 10 MHz to 12 MHz. By using overlapping sub-chirps, an even larger bandwidth can be covered coherently if the phase relations between the sub-chirps are recovered. Leading and trailing continuous wave (CW) periods allow for the required frequency synchronization between transmitter and receiver. Due to hardware limitations, band margins are required, which limits the usable bandwidth within the available frequency band.</p>
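The chirped waveform idea in the caption above, a linear frequency sweep covering more bandwidth than the nominal channel, can be sketched at complex baseband. The sample rate, sweep limits, and function name below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def baseband_chirp(f0, f1, duration, fs):
    """Complex baseband linear chirp sweeping f0 -> f1 Hz over
    `duration` seconds at sample rate fs. The phase is the integral
    of the instantaneous frequency f(t) = f0 + (f1 - f0) * t / duration.
    """
    t = np.arange(int(duration * fs)) / fs
    phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2 / duration)
    return np.exp(1j * phase)
```

Concatenating several such sub-chirps with overlapping sweep ranges, and recovering the phase offsets between them, is what allows coherent coverage of a bandwidth larger than any single chirp, as the caption describes.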
Full article ">Figure 2
<p>Floorplan of the covered area highlighting different aspects of the scenario, including the aisles labeled 0 to 3. (<b>a</b>) Number of nodes per shelf. There are 71 populated shelves with 1265 nodes in total. (<b>b</b>) position error bound (PEB) for the used anchor placement in the covered area. (<b>c</b>) Locations of the wideband localization infrastructure components and cabling. The three USRP radio platforms are labeled #1 to #3, the eight anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <mn>7</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Grocery store test environment. (<b>a</b>) Aisle 1 with anchor 0 in the foreground. (<b>b</b>) Single shelf at <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>13.5</mn> <mo> </mo> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>10</mn> <mo> </mo> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>. (<b>c</b>) Transmitter node closeup.</p>
Full article ">Figure 4
<p>Measurement setup components mounted in the test environment. (<b>a</b>) USRP #2 mounted on top of a shelf. (<b>b</b>) Overview of the covered area. Visible are the host setup, the nodes on the shelves of aisle 1, and the cable harnesses to USRP #2 (foreground) and #3 (background). (<b>c</b>) Anchor 3. (<b>d</b>) Anchor 5.</p>
Full article ">Figure 5
<p>Block diagram of the full measurement setup. The USB dongles used to communicate with the target nodes are connected via a USB hub. The power supply uses a power cord with 230 V AC. All RF cables are 50 &#937; coaxial cables with SMA connectors. The connections indicated as parallel use distinct cables for each USRP radio platform, connected to distinct ports at the host setup, e.g., three SMA connectors for the 10 MHz reference, three SFP fiber connectors, etc.</p>
Full article ">Figure 6
<p>Position estimates for selected shelves and different receiving anchor configurations. Both TDoA+AoA and AoA-only estimates are shown. The 99% position error bound (PEB) ellipses are indicated for both methods. The spatial resolution is 40 cm. (<b>a</b>) Shelf with 29 nodes in aisle 2 and all anchors active. (<b>b</b>) Shelf with 14 nodes in aisle 0 and all anchors active. (<b>c</b>) The same shelf with only anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> </mrow> </semantics></math> active.</p>
Full article ">Figure 7
<p>Cumulative positioning error distributions in the horizontal plane using both TDoA and AoA information. In (<b>a</b>–<b>c</b>), all anchors are active. (<b>a</b>) Shows the Euclidean error norm for all nodes and broken down by aisle. (<b>b</b>) Compares the Euclidean to its <span class="html-italic">x</span> and <span class="html-italic">y</span> components. (<b>c</b>,<b>d</b>) show the error component orthogonal to the shelf surface for each node. Positive values are distance from the shelf surface towards the middle of the aisle. (<b>c</b>) All anchors active. (<b>d</b>) Anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> </mrow> </semantics></math> active.</p>
Full article ">Figure 8
<p>Shelf-wise median of the absolute positioning error for different anchor configurations. (<b>a</b>) All anchors active. (<b>b</b>) Anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> </mrow> </semantics></math> active. (<b>c</b>) Anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> </mrow> </semantics></math> active. (<b>d</b>) Anchors <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> </mrow> </semantics></math> active. (The one “good” shelf in aisle 2 has only 1 node and must thus be considered an outlier).</p>
Full article ">Figure 9
<p>Cumulative absolute error distributions for the four aisles. The anchor configurations used in <a href="#sensors-22-02663-f008" class="html-fig">Figure 8</a> are shown. Note that in the previous shelf-wise view, shelves with a high number of nodes do not stand out in particular, whereas here every node contributes individually. The curves for all anchors are the same as in <a href="#sensors-22-02663-f007" class="html-fig">Figure 7</a>a. (<b>a</b>) Aisle 0. (<b>b</b>) Aisle 1. (<b>c</b>) Aisle 2. (<b>d</b>) Aisle 3.</p>
Full article ">Figure 10
<p>Effects of clustering on overall positioning root-mean-square error (RMSE) and error distribution for single nodes. A cluster size of <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>cl</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (indicated by a marker) means only the node by itself. All anchors active. (<b>a</b>) RMSE over cluster size for genie-aided and RSS-based clustering. The TDoA+AoA (not explicitly indicated) and the AoA-only cases are compared. (<b>b</b>) Cumulative absolute error distribution for a single node and one fixed cluster size of <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>cl</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Shelf-wise median of the absolute positioning error for the described clustering approaches. In (<b>a</b>) and (<b>b</b>), both the TDoA and AoA information is used, in (<b>c</b>) and (<b>d</b>) only the AoA information. The clustering is based on actual node-to-node RSS measurements. (<b>a</b>) No clustering, TDoA+AoA. (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>cl</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>, TDoA+AoA. (<b>c</b>) No clustering, AoA-only. (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>cl</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>, AoA-only.</p>
Full article ">Figure 12
<p>Cumulative absolute error distributions for the effects of different clustering approaches in the four aisles. A number of <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>cl</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math> nodes is used for clustering. All anchors active. The TDoA+AoA (not explicitly indicated) and the AoA-only cases are compared. The curves for single nodes are the same as in <a href="#sensors-22-02663-f007" class="html-fig">Figure 7</a>a. (<b>a</b>) Aisle 0. (<b>b</b>) Aisle 1. (<b>c</b>) Aisle 2. (<b>d</b>) Aisle 3.</p>
Full article ">
18 pages, 1889 KiB  
Article
Experimental Evaluation of IEEE 802.15.4z UWB Ranging Performance under Interference
by Janis Tiemann, Johannes Friedrich and Christian Wietfeld
Sensors 2022, 22(4), 1643; https://doi.org/10.3390/s22041643 - 19 Feb 2022
Cited by 11 | Viewed by 5361
Abstract
The rise of precise wireless localization for industrial and consumer use continues to drive a significant amount of research. Recently, the new ultra-wideband standard IEEE 802.15.4z was released to increase both the robustness and security of the underlying message exchanges. Due to [...] Read more.
The rise of precise wireless localization for industrial and consumer use continues to drive a significant amount of research. Recently, the new ultra-wideband standard IEEE 802.15.4z was released to increase both the robustness and security of the underlying message exchanges. Due to the lack of accessible transceivers, however, most of the current research on this standard is theoretical in nature. This work provides the first experimental evaluation of its ranging performance in realistic environments and also assesses its robustness to different sources of interference. To evaluate the individual aspects, a set of three experiments is conducted: one with realistic movement and two with targeted interference. The results show that the cryptographic additions of the new standard can provide sufficient information to significantly improve the reliability of the ranging results under multi-user interference. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
Show Figures

Figure 1
<p>Illustration of the major ranging error sources for accurate ultra-wideband time-of-arrival estimation. While the characteristics of the timebase are addressable through higher-quality components and changes in protocol, the channel aspects are mostly immutable. Therefore, this work focuses on the parts of the IEEE 802.15.4z UWB standard that address synchronization to ensure reliable and safe ranging.</p>
Full article ">Figure 2
<p>Illustration of different PHY frame formats as defined in the IEEE 802.15.4z-2020 standard. Modes 1–3 incorporate the STS. Note that different positions of the STS can enable several different functionalities, such as potential IEEE 802.15.4a backwards compatibility for Mode 2, or sending no payload at all for Mode 3.</p>
Full article ">Figure 3
<p>Illustrated photo of the scenario for dynamic ranging accuracy evaluation. A trajectory with a variety of absorption regions is evaluated under a large motion-capture installation for ground truth measurement.</p>
Full article ">Figure 4
<p>Pictures of the two helmet orientations used in the experiments. On the left hand side, the back-mounted tag is depicted. On the right hand side, the top-mounted tag is shown. Here, the tag is lying flat on the top of the helmet. Note that both configurations are equipped with motion capture markers for ground-truth generation.</p>
Full article ">Figure 5
<p>Schematic top-down illustration of the scenario for dynamic ranging accuracy evaluation. A trajectory through different regions of absorption and interference is followed under continuous tracking by a motion capture system.</p>
Full article ">Figure 6
<p>Timeseries of a single dynamic ranging accuracy evaluation trajectory. The different regions of obstruction become visible in the timeseries. With total shading by an absorber wall, the rangings follow an indirect path. With partial shading, the noise increases, but the ranging results remain largely consistent. Note the interference-caused outliers throughout the whole experiment time.</p>
Full article ">Figure 7
<p>CDF of the ranging accuracy during the experiment for different ranging schemes without active interference. The performance of DS and SS TWR is similar for both the IEEE 802.15.4a and the IEEE 802.15.4z based rangings.</p>
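The SS and DS TWR schemes compared in this figure differ in how they cancel clock offsets between the two devices. The standard time-of-flight estimators can be written down directly as a reference; the variable names are ours, and the timestamps are assumed to be already converted to seconds:

```python
def ss_twr_tof(t_round, t_reply):
    """Single-sided TWR: one round trip, one known reply delay.
    Sensitive to the relative clock-frequency offset of the devices."""
    return (t_round - t_reply) / 2.0

def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided TWR: two interleaved round trips.
    The product form largely cancels clock-frequency offsets."""
    return (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )
```

On an ideal clock, t_round = 2 * tof + t_reply, so both estimators recover the true time of flight exactly; multiplying by the speed of light then yields the range.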
Full article ">Figure 8
<p>CDF of the ranging accuracy with spatially distributed interference. The overall performance is influenced by sparse but relatively large outliers that mainly affect DS-z.</p>
Full article ">Figure 9
<p>Schematic illustration of the scenario for controlled interference evaluation. Two experiments were conducted with controlled interference. In experiment <span class="html-italic">B</span>, the interferer is static and not directly obstructed. In experiment <span class="html-italic">C</span>, the interferer is mobile and carried in front of the body, obstructing the LOS to the ranging pair in the first half of the experiment. In the second half, the interferer faces the ranging pair and therefore has direct LOS while coming closer to both.</p>
Full article ">Figure 10
<p>Timeseries of the controlled interference evaluation with a mobile ranging node for a variety of ranging schemes and configurations in <span class="html-italic">experiment B</span>. On the left-hand side, the experiment without interference is depicted. On the right-hand side, IEEE 802.15.4a based interference is introduced. Note that this active in-channel interference significantly influences the ranging results. While basic ranging without STS introduces large errors, STS prevents this effect at the cost of few to no successful rangings.</p>
Full article ">Figure 11
<p>Timeseries of the controlled interference experiment <span class="html-italic">C</span> for the static ranging pair. On the left hand side, no active interference is introduced. On the right hand side active IEEE 802.15.4z interference is introduced. The interferer is carried first away from the ranging pair in NLOS, then back facing towards the ranging pair with LOS. Note the different effects of the interference for different ranging schemes and configurations.</p>
Full article ">Figure 12
<p>Illustrative timeseries of three runs of the controlled interference <span class="html-italic">experiment C</span>. Next to a floating success- and error rate for the rangings the accumulator based CIR is depicted. Note the difference in success- and error rates under interference for plain IEEE 802.15.4a in subfigure (<b>a</b>) and plain IEEE 802.15.4z SS TWR in subfigure (<b>b</b>). Also note the difference to the STS enhanced variant employed by IEEE 802.15.4z depicted in subfigure (<b>c</b>).</p>
Full article ">Figure 13
<p>Bar chart of the ranging loss (the percentage of trials that did not lead to a ranging result) for experiment <span class="html-italic">C</span>. Next to the unsuccessful rangings, the percentage of trials that led to erroneous rangings with errors greater than 0.2 m is shown. Note the strong cancellation effect of STS in contrast to plain TWR.</p>
Full article ">Figure 14
<p>Bar chart of the ranging loss for interference with non-matching preamble codes but within the same channel. Note that with a ranging loss of over 25% the IEEE 802.15.4a based ranging is still strongly influenced by the interferer.</p>
Full article ">Figure 15
<p>Bar chart of ranging loss for interference with non-matching channels while using the same preamble code. Note the absence of erroneous ranging results. Moreover, the SDC-based approach is influenced notably by the interference.</p>
Full article ">Figure 16
<p>Bar chart of ranging loss for interference with non-matching channels and non-matching preamble codes. Note the overall low effect on most approaches except for the SDC-based method.</p>
Full article ">
19 pages, 5744 KiB  
Article
Component-Wise Error Correction Method for UWB-Based Localization in Target-Following Mobile Robot
by Kyungbin Bae, Yooha Son, Young-Eun Song and Hoeryong Jung
Sensors 2022, 22(3), 1180; https://doi.org/10.3390/s22031180 - 4 Feb 2022
Cited by 6 | Viewed by 2957
Abstract
Target-following mobile robots have gained attention in various industrial applications. This study proposes an ultra-wideband-based target localization method that provides highly accurate and robust target tracking performance for a following robot. Based on the least square approximation framework, the proposed method improves localization [...] Read more.
Target-following mobile robots have gained attention in various industrial applications. This study proposes an ultra-wideband-based target localization method that provides highly accurate and robust target tracking performance for a following robot. Based on the least squares approximation framework, the proposed method improves localization accuracy by compensating the localization bias and high-frequency deviations component by component. An initial calibration method is proposed to measure the device-dependent localization bias, which enables compensation of the bias error not only at the calibration points but also at any other point. An iterative complementary filter, which recursively produces an optimal estimate for each timeframe as a weighted sum of the previous and current estimates depending on the reliability of each, is proposed to reduce the deviation of the localization error. The performance of the proposed method is validated using simulations and experiments. Both the magnitude and deviation of the localization error were significantly improved, by up to 77% and 51%, respectively, compared with the previous method. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
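The iterative complementary filter described in the abstract forms each estimate as a weighted sum of the previous and current estimates to suppress high-frequency deviation. A minimal fixed-weight sketch follows; the paper adapts the weight per timeframe based on estimate reliability, so the constant alpha here is our simplification:

```python
def complementary_filter(measurements, alpha=0.5):
    """x_k = alpha * x_{k-1} + (1 - alpha) * z_k: each output blends
    the previous estimate with the current measurement, trading
    responsiveness against noise suppression via the weight alpha."""
    x = measurements[0]          # initialize with the first measurement
    filtered = []
    for z in measurements:
        x = alpha * x + (1 - alpha) * z
        filtered.append(x)
    return filtered
```

A larger alpha trusts the filter history more (smoother but slower to react); a smaller alpha trusts the current measurement more.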
Show Figures

Figure 1
<p>Overall configuration of UWB transceivers for the target localization.</p>
Full article ">Figure 2
<p>Component-wise classification of the estimation error. <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> </semantics></math> denotes the initial estimation using the LS approximation of the tag position. <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> </mrow> </semantics></math> denotes the calibrated estimation by removing the bias error <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">E</mi> <mstyle stretchy="true" mathsize="80%"> <mo>˜</mo> </mstyle> </mover> <mrow> <mi>b</mi> <mi>i</mi> <mi>a</mi> <mi>s</mi> </mrow> </msup> </mrow> </semantics></math> through initial calibration, and <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>F</mi> </msup> </mrow> </semantics></math> denotes the final estimation, which removes <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">E</mi> <mstyle stretchy="true" mathsize="80%"> <mo>˜</mo> </mstyle> </mover> <mrow> <mi>n</mi> <mi>o</mi> <mi>i</mi> <mi>s</mi> <mi>e</mi> </mrow> </msup> </mrow> </semantics></math> from <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Initial calibration method to measure bias error <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">E</mi> <mstyle stretchy="true" mathsize="80%"> <mo>˜</mo> </mstyle> </mover> <mrow> <mi>b</mi> <mi>i</mi> <mi>a</mi> <mi>s</mi> </mrow> </msup> </mrow> </semantics></math>. Initial calibration is conducted prior to the measurement at multiple calibration points to measure <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold-italic">E</mi> <mstyle stretchy="true" mathsize="80%"> <mo>˜</mo> </mstyle> </mover> <mrow> <mi>n</mi> <mi>o</mi> <mi>r</mi> <mi>m</mi> </mrow> <mrow> <mi>b</mi> <mi>i</mi> <mi>a</mi> <mi>s</mi> </mrow> </msubsup> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Alleviating the radial directional noise component of <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold-italic">E</mi> <mstyle stretchy="true" mathsize="80%"> <mo>˜</mo> </mstyle> </mover> <mi>k</mi> <mrow> <mi>n</mi> <mi>o</mi> <mi>i</mi> <mi>s</mi> <mi>e</mi> </mrow> </msubsup> </mrow> </semantics></math>. The magnitude of <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>k</mi> <mi>C</mi> </msubsup> </mrow> </semantics></math> is adjusted by the distance estimation <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>r</mi> <mo>˜</mo> </mover> <mi>k</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Computing estimation candidate <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi mathvariant="bold-italic">Z</mi> <mo>˜</mo> </mover> <mi>k</mi> </msub> </mrow> </semantics></math> from previous estimation <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mrow> <mi>k</mi> <mo>−</mo> <mn>1</mn> </mrow> <mi>F</mi> </msubsup> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Configuration of UWB anchors used in simulations and experiments for localization-accuracy evaluation: (<b>a</b>) UWB anchors attached to the mobile robot, (<b>b</b>) UWB anchor configurations with attached coordinate system and dimensions.</p>
Full article ">Figure 7
<p>Two reference paths used in the simulation: (<b>a</b>) square path with a side length of 5 m, (<b>b</b>) circular path with a diameter of 5 m.</p>
Full article ">Figure 8
<p>Simulation results demonstrating the estimation accuracy of the proposed method for the square path: (<b>a</b>) estimation results of <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> </semantics></math><b>,</b> (<b>b</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> </mrow> </semantics></math>, (<b>c</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>R</mi> </msup> </mrow> </semantics></math>, (<b>d</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>F</mi> </msup> </mrow> </semantics></math>, (<b>e</b>) comparison of estimation error for <math display="inline"><semantics> <mrow> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mo>,</mo> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> <mo>,</mo> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>R</mi> </msup> <mo>,</mo> <mo> </mo> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>F</mi> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Simulation results demonstrating the estimation accuracy of the proposed method for the circular path: (<b>a</b>) estimation results of <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> </semantics></math><b>,</b> (<b>b</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> </mrow> </semantics></math>, (<b>c</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>R</mi> </msup> </mrow> </semantics></math>, (<b>d</b>) estimation results of <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>F</mi> </msup> </mrow> </semantics></math>, (<b>e</b>) comparison of estimation error for <math display="inline"><semantics> <mrow> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mo>,</mo> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>C</mi> </msup> <mo>,</mo> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>R</mi> </msup> <mo>,</mo> <mo> </mo> <mi>and</mi> <mo> </mo> <msup> <mover accent="true"> <mi mathvariant="bold-italic">T</mi> <mstyle stretchy="true" mathsize="120%"> <mo>˜</mo> </mstyle> </mover> <mi>F</mi> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Comparison of estimation accuracy between the proposed method and previous methods for the square path: (<b>a</b>) comparison of estimation results, (<b>b</b>) comparison of estimation error.</p>
Full article ">Figure 11
<p>Comparison of estimation accuracy between the proposed method and previous methods for the circular path: (<b>a</b>) comparison of estimation results, (<b>b</b>) comparison of estimation error.</p>
Full article ">Figure 12
<p>Experimental setup for performance evaluation: (<b>a</b>) experimental setup, (<b>b</b>) reference positions of the UWB tag.</p>
Full article ">Figure 13
<p>Experimental setup for performance evaluation. (<b>a</b>) experimental setup, (<b>b</b>–<b>d</b>) reference locations.</p>
Full article ">
39 pages, 6473 KiB  
Article
Can I Trust This Location Estimate? Reproducibly Benchmarking the Methods of Dynamic Accuracy Estimation of Localization
by Grigorios G. Anagnostopoulos and Alexandros Kalousis
Sensors 2022, 22(3), 1088; https://doi.org/10.3390/s22031088 - 30 Jan 2022
Cited by 5 | Viewed by 3396
Abstract
Despite the great attention that the research community has paid to the creation of novel indoor positioning methods, a rather limited volume of works has focused on the confidence that Indoor Positioning Systems (IPS) assign to the position estimates that they produce. The [...] Read more.
Despite the great attention that the research community has paid to the creation of novel indoor positioning methods, a rather limited volume of work has focused on the confidence that Indoor Positioning Systems (IPS) assign to the position estimates they produce. The concept of dynamically estimating the accuracy of the position estimates provided by an IPS has been studied only sporadically in the literature of the field. Recently, this concept has also started being studied in the context of outdoor positioning systems for the Internet of Things (IoT) based on Low-Power Wide-Area Networks (LPWANs). What is problematic is that consistent comparison of the proposed methods is quasi-nonexistent: new methods rarely use previous ones as baselines; often only a small number of evaluation metrics are reported, and different metrics are reported in different publications; the use of open data is rare; and the publication of open code is absent. In this work, we present an open-source, reproducible benchmarking framework for evaluating and consistently comparing various methods of Dynamic Accuracy Estimation (DAE). This work reviews the relevant literature, presenting commonalities and differences in a consistent terminology and discussing baselines and evaluation metrics. Moreover, it evaluates multiple methods of DAE using open data, open code, and a rich set of relevant evaluation metrics. This is the first work aiming to establish the state of the art of methods of DAE determination in IPS and in LPWAN positioning systems, through an open, transparent, holistic, reproducible, and consistent evaluation of the methods proposed in the relevant literature. Full article
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
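As a concrete illustration of the kind of evaluation the abstract describes, the sketch below computes two of the reported metric families on toy data: the median absolute DAE error and Spearman's rank correlation between a method's DAE estimates and the actual positioning errors. Function names and the sample values are illustrative, not taken from the paper's code.

```python
# Two evaluation metrics for comparing Dynamic Accuracy Estimation (DAE)
# methods: median absolute DAE error and Spearman rank correlation
# between DAE estimates and true positioning errors.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

def ranks(xs):
    # average ranks, with ties sharing their mean rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    # Spearman = Pearson correlation of the rank-transformed data
    return pearson(ranks(x), ranks(y))

# true positioning errors (m) and one method's DAE estimates (m)
true_err = [12.0, 55.0, 30.0, 80.0, 20.0]
dae_est = [15.0, 60.0, 25.0, 90.0, 18.0]

abs_dae_err = [abs(d - t) for d, t in zip(dae_est, true_err)]
print(median(abs_dae_err))          # median absolute DAE error -> 5.0
print(spearman(dae_est, true_err))  # 1.0 = the method ranks errors perfectly
```

A method can thus have a nonzero absolute DAE error while still ranking location estimates perfectly, which is exactly why the paper reports several complementary metrics.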
Show Figures

Figure 1: Visualizing the actual positioning error and the DAE.
Figures 2-7 (Sigfox), 8-13 (LoRaWAN), 14-19 (DSI) and 20-25 (MAN): for the validation set of each dataset, a radar plot of the relevant DAE metrics for all methods (methods in continuous lines, baselines in dashed lines; values as reported in Tables 4-7); boxplots of the absolute DAE error (green triangles mark means, black circles mark outliers); CDFs of the absolute and of the signed DAE error in meters (negative signed values: DAE overestimates the error, ground-truth position inside the DAE circle; positive values: DAE underestimates it); scatter plots of DAE estimates against actual positioning errors, annotated with Pearson's and Spearman's correlation coefficients; and curves of the mean positioning error over the portion of location estimates retained when ranked by their DAE values.
24 pages, 10862 KiB  
Article
Simulation Tool and Online Demonstrator for CDMA-Based Ultrasonic Indoor Localization Systems
by María Carmen Pérez-Rubio, Álvaro Hernández, David Gualda-Gómez, Santiago Murano, Jorge Vicente-Ranera, Francisco Ciudad-Fernández, José Manuel Villadangos and Rubén Nieto
Sensors 2022, 22(3), 1038; https://doi.org/10.3390/s22031038 - 28 Jan 2022
Cited by 7 | Viewed by 3517
Abstract
This work presents the CODEUS platform, which includes a simulation tool together with an online experimental demonstrator to offer analysis and testing flexibility for researchers and developers of Ultrasonic Indoor Positioning Systems (UIPSs). The simulation platform allows the most common encoding techniques and sequences to be tested in a configurable UIPS. It models the signal modulation and processing, the ultrasonic transducers' response, the beacon distribution, the channel propagation effects, the synchronism, and the application of different positioning algorithms. CODEUS provides results and performance analyses for different metrics and at different stages of the signal processing. The UIPS simulation tool is implemented in the MATLAB App Designer environment, which enables a user-friendly interface. It has also been linked to an online demonstrator that can be managed remotely through a website, thus avoiding any hardware or equipment requirement on the part of researchers. This demonstrator allows the selected transmission schemes, modulation, or encoding techniques to be validated in a real UIPS, enabling fast and easy experimental tests in a laboratory environment while avoiding the time-consuming electronic design and prototyping tasks typical of the UIPS field. Both the simulator and the online demonstrator are freely available to researchers and students through the corresponding website.
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
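The matched-filtering stage at the heart of such CDMA receivers can be sketched as follows: a binary code is BPSK-modulated onto an ultrasonic carrier, delayed to simulate the time of flight, buried in noise, and recovered by correlating against the emitted template. The toy code, carrier parameters, and delay below are illustrative stand-ins, not CODEUS defaults.

```python
# Matched-filter detection of a BPSK-modulated code in an ultrasonic channel.
import numpy as np

fs, fc, cycles = 500_000, 41_670, 2  # sampling rate, carrier freq, cycles/bit
code = np.array([1, -1, 1, 1, -1, 1, -1, -1])  # toy binary code (not Kasami)

t = np.arange(int(cycles * fs / fc)) / fs
symbol = np.sin(2 * np.pi * fc * t)            # one BPSK symbol: a carrier burst
template = np.concatenate([b * symbol for b in code])

delay = 700                                    # simulated flight time, in samples
rx = np.zeros(delay + len(template) + 500)
rx[delay:delay + len(template)] += template
rx += 0.3 * np.random.default_rng(0).standard_normal(len(rx))  # channel noise

corr = np.correlate(rx, template, mode="valid")
tof_samples = int(np.argmax(np.abs(corr)))     # estimated time of flight
print(tof_samples)
```

With good code correlation properties (as with the Kasami, CSS, LS, or Zadoff-Chu families studied in the paper), the correlation peak stands well above sidelobes and noise, so several simultaneous emissions can be separated at the same receiver.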
Show Figures

Figure 1: General block diagram of the CODEUS platform (simulator and online demonstrator).
Figures 2-4: an emission example with five 255-bit Kasami codes (BPSK, two carrier cycles at 41.67 kHz, 500 kHz sampling); periodic and aperiodic correlation functions of the supported sequence families (Kasami, CSS, LS, Zadoff-Chu); and the effect of BPSK modulation on the auto-correlation of a 63-bit Kasami code.
Figures 5-7: measured 328ST160 transducer frequency response vs. its FIR model; CDMA/TCDMA emission schemes; and spherical and hyperbolic PDOP over a 6 m x 6 m area with five ceiling beacons.
Figures 8-13: simulator scene and channel configuration; an example of a non-valid measurement after peak detection; and correlation and positioning results for Kasami codes at SNR = 5 dB.
Figures 14-17: the experimental setup and block diagrams of the server, beacon unit, and receiving module of the online demonstrator.
Figures 18-21: simulated and experimental acquisitions at point R1, clouds of position estimates at R1 and R3, and positioning-error CDFs for both simulated and experimental results.
16 pages, 3058 KiB  
Article
Using Perspective-n-Point Algorithms for a Local Positioning System Based on LEDs and a QADA Receiver
by Elena Aparicio-Esteve, Jesús Ureña, Álvaro Hernández, Daniel Pizarro and David Moltó
Sensors 2021, 21(19), 6537; https://doi.org/10.3390/s21196537 - 30 Sep 2021
Cited by 4 | Viewed by 2418
Abstract
Research interest in location-based services has grown in recent years, ever since centimetre-level 3D accuracy inside intelligent environments became attainable. This work proposes an indoor local positioning system based on LED lighting transmitted from a set of beacons to a receiver. The receiver is based on a quadrant photodiode angular diversity aperture (QADA) plus an aperture placed over it. This configuration can be modelled as a perspective camera, where the image positions of the transmitters can be used to recover the receiver's 3D pose. This is the perspective-n-point (PnP) problem, well known in computer vision and photogrammetry. This work investigates the use of different state-of-the-art PnP algorithms to localize the receiver in a large space of 2 × 2 m² based on four co-planar transmitters, with a transmitter-to-receiver distance of up to 3.4 m. Encoding techniques permit the simultaneous emission of all the transmitted signals and their processing in the receiver, and correlation techniques (matched filtering) are used to determine the image points projected from each emitter onto the QADA. Monte Carlo simulations characterize the absolute errors for a grid of test points under noisy measurements, as well as the robustness of the system when the 3D location of one transmitter varies. The IPPE algorithm obtained the best performance in this configuration. The proposal has also been evaluated experimentally in a real setup. Estimating the receiver's position at three particular points for receiver roll angles of γ = {0°, 120°, 210° and 300°} using the IPPE algorithm achieves average absolute errors and standard deviations of 4.33 cm, 3.51 cm and 28.90 cm; and 1.84 cm, 1.17 cm and 19.80 cm in the coordinates x, y and z, respectively. These positioning results are in line with those obtained in previous work using triangulation techniques, with the addition that this proposal recovers the complete pose of the receiver (x, y, z, α, β, γ).
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
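The planar PnP problem the abstract refers to can be illustrated with a minimal homography-based solver in the spirit of IPPE-style planar methods: with identity intrinsics and four co-planar beacons at z = 0, the world-to-image homography H is proportional to [r1 r2 t], from which the pose follows. This is a didactic sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def project(Rm, t, pts2d):
    # pinhole camera with identity intrinsics; world points lie on the z = 0 plane
    out = []
    for x, y in pts2d:
        p = Rm @ np.array([x, y, 0.0]) + t
        out.append(p[:2] / p[2])
    return np.array(out)

def planar_pnp(world, image):
    # DLT homography from 4+ coplanar correspondences, then decompose
    # H ~ [r1 r2 t] (identity intrinsics), as in homography-based planar PnP.
    A = []
    for (X, Y), (u, v) in zip(world, image):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)
    lam = 1.0 / np.linalg.norm(H[:, 0])
    if H[2, 2] < 0:                    # enforce positive depth (t_z > 0)
        lam = -lam
    r1, r2, t = lam * H[:, 0], lam * H[:, 1], lam * H[:, 2]
    Rm = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(Rm)       # project onto the nearest rotation
    return U @ Vt, t

# synthetic check: four coplanar beacons, a tilted camera pose
c, s = np.cos(0.2), np.sin(0.2)
R_true = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
t_true = np.array([0.1, -0.2, 3.0])
world = [(0, 0), (2, 0), (2, 2), (0, 2)]
img = project(R_true, t_true, world)
R_est, t_est = planar_pnp(world, img)
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```

Real planar solvers such as IPPE additionally resolve the two-fold pose ambiguity of a plane and handle noisy image points; the sketch above only shows the noise-free geometric core.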
Show Figures

Figures 1-2: global overview of the proposed system; geometry of the QADA sensor and the aperture over it.
Figures 3-5: mean absolute errors over the floor grid of test points for the EPnP, IPPE and RPnP algorithms, for a rotation about the Z axis of γ = 120°.
Figures 6-10: CDFs of the absolute pose errors for EPnP, IPPE and RPnP with γ = {0°, 120°, 240°}; per-algorithm CDFs at the nine representative points of Table 1; and CDFs when the location of one transmitter varies with Gaussian noise (σ = 1 cm).
Figures 11-13: the experimental setup; experimental position estimates at z = 0 m for γ = {0°, 120°, 210° and 300°} with the IPPE algorithm; and the corresponding absolute-error CDFs.
23 pages, 5185 KiB  
Article
Smartphone-Based Inertial Odometry for Blind Walkers
by Peng Ren, Fatemeh Elyasi and Roberto Manduchi
Sensors 2021, 21(12), 4033; https://doi.org/10.3390/s21124033 - 11 Jun 2021
Cited by 13 | Viewed by 3085
Abstract
Pedestrian tracking systems implemented in regular smartphones may provide a convenient mechanism for wayfinding and backtracking for people who are blind. However, virtually all existing studies only considered sighted participants, whose gait pattern may be different from that of blind walkers using a long cane or a dog guide. In this contribution, we present a comparative assessment of several algorithms using inertial sensors for pedestrian tracking, as applied to data from WeAllWalk, the only published inertial sensor dataset collected indoors from blind walkers. We consider two situations of interest. In the first situation, a map of the building is not available, in which case we assume that users walk in a network of corridors intersecting at 45° or 90°. We propose a new two-stage turn detector that, combined with an LSTM-based step counter, can robustly reconstruct the path traversed. We compare this with RoNIN, a state-of-the-art algorithm based on deep learning. In the second situation, a map is available, which provides a strong prior on the possible trajectories. For these situations, we experiment with particle filtering, with an additional clustering stage based on mean shift. Our results highlight the importance of training and testing inertial odometry systems for assisted navigation with data from blind walkers.
(This article belongs to the Special Issue Advances in Indoor Positioning and Indoor Navigation)
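The core idea of the two-stage turn detector, comparing the tracked orientation between consecutive straight-walking (SW) segments and snapping the difference to the nearest multiple of 45°, can be sketched as below. Segment boundaries are given by hand here; in the paper they come from a learned (GRU-based) SW detector, and the heading from an orientation tracker.

```python
# Turn detection by quantizing the heading change between consecutive
# straight-walking segments to multiples of 45 degrees.
import math

def mean_heading(deg_samples):
    # circular mean, robust to wrap-around at +/-180 degrees
    s = sum(math.sin(math.radians(d)) for d in deg_samples)
    c = sum(math.cos(math.radians(d)) for d in deg_samples)
    return math.degrees(math.atan2(s, c))

def detect_turns(sw_segments, resolution=45):
    turns = []
    for prev, cur in zip(sw_segments, sw_segments[1:]):
        delta = mean_heading(cur) - mean_heading(prev)
        delta = (delta + 180) % 360 - 180       # wrap to (-180, 180]
        turns.append(resolution * round(delta / resolution))
    return turns

# azimuth samples (deg) for three SW segments: straight, then +45, then -90
segments = [[1, -2, 0, 2], [44, 46, 47, 43], [-44, -46, -45, -43]]
print(detect_turns(segments))  # -> [45, -90]
```

Averaging over whole SW segments, rather than reacting to instantaneous azimuth, is what makes the detector robust to the heading oscillations that a long cane or a dog guide induces within a stride.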
Show Figures

Figures 1-4: examples of LSTM-based step detection, including over- and undercounts; a diagrammatic example of the two-stage turn detector across consecutive straight-walking (SW) segments; SW segment detection with the GRU system against WeAllWalk "features" annotations; and two-stage turn detection compared with an MKF orientation tracker at 90° and 45° resolution.
Figures 5-7: the particle set during tracking, split into two clusters, with the posterior mean in an incorrect position and the highest mean-shift mode on the correct one; undercount vs. overcount rate curves as a function of the LSTM threshold; and an illustration of the waypoint- and sampling-based metrics (RMSE at waypoints, Hausdorff and average Hausdorff distances) used to evaluate estimated trajectories.
Figures 8-9: diagrammatic examples of the map-less path-reconstruction algorithms; and path-reconstruction examples for the TA:LC training/test modality in three buildings, comparing map-less (A/S, 90° T/S, FR) and map-assisted (particle filter, mean shift, and their variants) methods with per-metric errors in meters.