
Sensors, Volume 21, Issue 5 (March-1 2021) – 372 articles

Cover Story (view full-size image): Surgical gesture detection can provide targeted surgical skill assessment and feedback during surgical training for robot-assisted surgery (RAS). We extracted features from electroencephalogram (EEG) data using network neuroscience algorithms and fed them to machine learning algorithms to classify robot-assisted surgical gestures. EEG was collected from 5 RAS surgeons performing 34 robot-assisted radical prostatectomies over the course of 3 years. Eight dominant and six non-dominant hand gesture types were extracted and synchronized with the associated EEG data. Our proposed method classified the eight gesture types performed by the dominant hand with 90% accuracy, 90% precision, and 88% sensitivity, and the six gesture types performed by the non-dominant hand with 93% accuracy, 94% precision, and 94% sensitivity. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
15 pages, 2771 KiB  
Article
Supernovae Detection with Fully Convolutional One-Stage Framework
by Kai Yin, Juncheng Jia, Xing Gao, Tianrui Sun and Zhengyin Zhou
Sensors 2021, 21(5), 1926; https://doi.org/10.3390/s21051926 - 9 Mar 2021
Cited by 3 | Viewed by 3244
Abstract
A series of sky surveys were launched in search of supernovae, generating a tremendous amount of data and pushing astronomy into a new era of big data. However, manually identifying and reporting supernovae is a crushing burden, because the data are enormous in volume and sparse in positives. While traditional machine learning methods can handle such data, deep learning methods such as convolutional neural networks demonstrate more powerful adaptability in this area. However, most data in existing works are either simulated or lack generality; how state-of-the-art object detection algorithms perform on real supernova data is largely unknown, which greatly hinders the development of this field. Furthermore, existing works on supernova classification usually assume the input images are properly cropped with a single candidate located in the center, which is not true for our dataset. In addition, the performance of existing detection algorithms can still be improved for the supernova detection task. To address these problems, we collected and organized all the known objects of the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) and the Popular Supernova Project (PSP), resulting in two datasets, and compared several detection algorithms on them. The selected Fully Convolutional One-Stage (FCOS) method is then used as the baseline and further improved with data augmentation, an attention mechanism, and a small-object detection technique. Extensive experiments demonstrate the great performance enhancement of our detection algorithm on the new datasets. Full article
Show Figures

Figure 1. Sample images in the Pan-STARRS1 SuperNova (PS1-SN) dataset.
Figure 2. Defects in the PSP-SN dataset.
Figure 3. Sample images in the Popular Supernova Project augmented (PSP-Aug) dataset.
Figure 4. The structure of the Fully Convolutional One-Stage (FCOS) network with attention blocks.
Figure 5. The structure of the attention blocks: (a) CBAM, (b) channel attention, (c) spatial attention.
Figure 6. Feature map visualization with different attention blocks of FCOS.
Figure 7. Unaligned images cause false positive predictions. The green box indicates the location of the supernova; the classification result and confidence are labeled above it. (a) The model does not mistake unaligned stars for supernovae and can discover their features to some extent. (b) Unaligned stars are mistaken for supernovae. (c) Unaligned edges of galaxies are mistaken for supernovae.
Figure 8. Dim targets cause false negative predictions. The red arrow points to the dim supernova in the image.
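Figure 5 above describes the CBAM-style attention added to FCOS. As a rough illustration of the channel-attention branch only (the spatial branch is omitted, and the weights here are random rather than learned, so this is a sketch of the mechanism, not the paper's model):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Channel-attention branch of a CBAM-style block (simplified sketch).

    x:  feature map, shape (C, H, W)
    w1: (C//r, C) hidden-layer weights of the shared MLP
    w2: (C, C//r) output-layer weights of the shared MLP
    Returns the channel-reweighted feature map, shape (C, H, W).
    """
    avg = x.mean(axis=(1, 2))                      # global average pool -> (C,)
    mx = x.max(axis=(1, 2))                        # global max pool     -> (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared two-layer MLP with ReLU
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate in (0, 1)
    return x * att[:, None, None]                  # rescale each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = channel_attention(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each output channel is a strictly attenuated copy of its input channel.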
18 pages, 715 KiB  
Article
Frequency Domain Analysis of Partial-Tensor Rotating Accelerometer Gravity Gradiometer
by Xuewu Qian, Liye Zhao, Weiming Liu and Jianqiang Sun
Sensors 2021, 21(5), 1925; https://doi.org/10.3390/s21051925 - 9 Mar 2021
Cited by 1 | Viewed by 2431
Abstract
The output model of a rotating accelerometer gravity gradiometer (RAGG) established by the inertial dynamics method cannot reflect the change of signal frequency, yet calibrating the sensitivity and self-gradient compensation of the RAGG is an essential stage of its development that cannot be omitted. In this study, by examining the RAGG output induced by a surrounding mass in the frequency domain, a model based on the outputs of the accelerometers on the RAGG disc is established to calculate the gravity gradient as a function of distance. Taking a particle, a sphere, and a cuboid as examples, input-output models of the gravity gradiometer are established based on the center gradient and on the four accelerometers, respectively. Simulation results show that, if the scale factors of the four accelerometers on the disc are the same, the output signal of the RAGG contains only (4k+2)ω harmonic components (where ω is the spin frequency of the RAGG disc), and their amplitude is related to the orientation of the surrounding mass. The numerical simulations of the three models show that, when the surrounding mass is close to the RAGG, the input-output models based on the four accelerometers are more accurate. Finally, advantages and disadvantages of the cuboid and sphere are compared, and some suggestions related to calibration and self-gradient compensation are given. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Working principle schematic of the rotating accelerometer gravity gradiometer (RAGG).
Figure 2. Schematic diagram of a particle acting on the RAGG.
Figure 3. Output signal of accelerometer A1: (a) time-domain waveform; (b) spectrum.
Figure 4. Gravity gradient of the two models based on the particle: (a) calculated results of the two models versus distance; (b) error between the two models versus distance.
Figure 5. Schematic diagram of a cuboid acting on the RAGG.
Figure 6. Gravity gradient of the two models based on the cuboid: (a) calculated results versus distance; (b) error versus distance.
Figure 7. Schematic diagram of a sphere acting on the RAGG.
Figure 8. Gravity gradient of the two models based on the sphere: (a) calculated results versus distance; (b) error versus distance.
Figure 9. Real gravity gradient at the RAGG caused by a particle, a cuboid, and a sphere of the same mass: (a) gradient caused by the particle and the cuboid versus distance; (b) gradient error for the particle and the cuboid versus distance; (c) gradient error for the particle and the sphere versus distance.
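The abstract's claim that only (4k+2)ω harmonics survive can be checked numerically, assuming the standard alternating-sign combination of the four accelerometer outputs (a detail not spelled out in the abstract): for accelerometers mounted 90° apart, every harmonic except those with n ≡ 2 (mod 4) cancels.

```python
import numpy as np

# A RAGG combines four accelerometers mounted 90 degrees apart on a disc
# spinning at w, with alternating signs: s = a1 - a2 + a3 - a4. For any
# periodic per-accelerometer signal f, this combination cancels every
# harmonic except those at (4k+2)w -- the property stated in the abstract.
N = 1024
theta = 2 * np.pi * np.arange(N) / N
f = lambda th: sum(np.cos(n * th + 0.3 * n) for n in range(1, 9))  # harmonics 1..8

s = f(theta) - f(theta + np.pi / 2) + f(theta + np.pi) - f(theta + 3 * np.pi / 2)

spec = np.abs(np.fft.rfft(s)) / N
surviving = {n for n in range(1, 9) if spec[n] > 1e-6}
print(surviving)  # only harmonics 2 and 6 remain, i.e. (4k+2)w
```

The cancellation follows because the harmonic-n combination coefficient is 1 − iⁿ + (−1)ⁿ − (−i)ⁿ, which is nonzero only for n ≡ 2 (mod 4).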
17 pages, 1040 KiB  
Article
Real-Time Compression for Tactile Internet Data Streams
by Patrick Seeling, Martin Reisslein and Frank H. P. Fitzek
Sensors 2021, 21(5), 1924; https://doi.org/10.3390/s21051924 - 9 Mar 2021
Cited by 3 | Viewed by 2896
Abstract
The Tactile Internet will require ultra-low latencies for combining machines and humans in systems where humans are in the control loop. Real-time and perceptual coding in these systems commonly require content-specific approaches. We present a generic approach based on deliberately reduced number accuracy and evaluate the trade-off between savings achieved and errors introduced with real-world data for kinesthetic movement and tele-surgery. Our combination of bitplane-level accuracy adaptability with perceptual threshold-based limits allows for great flexibility in broad application scenarios. Combining the attainable savings with the relatively small introduced errors enables the optimal selection of a working point for the method in actual implementations. Full article
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1. BINBLISS low-latency compression concept: floating-point values are truncated in granularity, delta-coded with a threshold, and subsequently repackaged by adding binary indicators for the unchanged and modified values. Truncated bits can be appended at the message end to enable straightforward bit-level scalability.
Figure 2. Median results for the TUM Kinesthetic data set, evaluated with forced repeats every measurement and no limit (effect of bit reduction only).
Figure 3. Median relative delta results for the TUM Kinesthetic data set with different limits as change thresholds for perceptual coding.
Figure 4. Relative savings results for the TUM Kinesthetic data set with different limits as change thresholds for perceptual coding.
Figure 5. Median error results for all subject experiments in the Knot, Needle, and Suture groups from the JIGSAWS data set.
Figure 6. Relative delta results for all subject experiments in the Knot, Needle, and Suture groups from the JIGSAWS data set.
Figure 7. Relative savings results for all subject experiments in the Knot, Needle, and Suture groups from the JIGSAWS data set.
Figure 8. Relative errors, savings, and combined metric results for the TUM Kinesthetic data set with different bit limits and change thresholds for perceptual coding.
Figure 9. Relative errors, savings, and combined metric results for the three JIGSAWS data sets with different bit limits and change thresholds for perceptual coding.
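The abstract's combination of bitplane-level accuracy reduction with a perceptual change threshold can be sketched as follows. The function names and parameters are illustrative, and the paper's actual BINBLISS format also packs per-value change-indicator bits, which this sketch omits:

```python
import numpy as np

def truncate_mantissa(x, drop_bits):
    """Zero the lowest `drop_bits` bits of each float64 mantissa (bitplane truncation)."""
    bits = np.asarray(x, dtype=np.float64).view(np.uint64)
    mask = ~np.uint64((1 << drop_bits) - 1)   # keep only the high bits
    return (bits & mask).view(np.float64)

def threshold_delta_code(samples, drop_bits, threshold):
    """Transmit a truncated sample only when it moves more than `threshold`
    away from the last transmitted value; otherwise signal 'unchanged'."""
    sent, last = [], None
    for v in truncate_mantissa(samples, drop_bits):
        if last is None or abs(v - last) > threshold:
            sent.append(v)
            last = v
    return sent, last

rng = np.random.default_rng(1)
stream = np.cumsum(rng.normal(0, 0.01, 1000)) + 1.0  # slow, kinesthetic-like drift
sent, _ = threshold_delta_code(stream, drop_bits=32, threshold=0.02)
savings = 1 - len(sent) / len(stream)                # fraction of samples suppressed
```

The working point trades `drop_bits` and `threshold` (savings) against the reconstruction error they introduce, mirroring the trade-off evaluated in the paper.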
11 pages, 445 KiB  
Communication
Characterization of Changes in P-Wave VCG Loops Following Pulmonary-Vein Isolation
by Nuria Ortigosa, Óscar Cano and Frida Sandberg
Sensors 2021, 21(5), 1923; https://doi.org/10.3390/s21051923 - 9 Mar 2021
Cited by 1 | Viewed by 2209
Abstract
Atrial fibrillation is the most common type of cardiac arrhythmia in clinical practice. Currently, catheter ablation for pulmonary-vein isolation is a well-established treatment for maintaining sinus rhythm when antiarrhythmic drugs do not succeed. Unfortunately, arrhythmia recurrence after catheter ablation remains common, with estimated rates of up to 45%. A better understanding of factors leading to atrial-fibrillation recurrence is needed. Hence, the aim of this study is to characterize changes in the atrial propagation pattern following pulmonary-vein isolation, and investigate the relation between such characteristics and atrial-fibrillation recurrence. Fifty patients with paroxysmal atrial fibrillation who had undergone catheter ablation were included in this study. Time-segment and vectorcardiogram-loop-morphology analyses were applied to characterize P waves extracted from 1 min long 12-lead electrocardiogram segments before and after the procedure, respectively. Results showed that P-wave vectorcardiogram loops were significantly less round and more planar, P waves and PR intervals were significantly shorter, and heart rate was significantly higher after the procedure. Differences were larger for patients who did not have arrhythmia recurrences at 2 years of follow-up; for these patients, the pre- and postprocedure P waves could be identified with 84% accuracy. Full article
(This article belongs to the Special Issue Biomedical Signal Processing for Disease Diagnosis)
Show Figures

Figure 1. Pre- and post-pulmonary-vein isolation (PVI) loops for two different patients. (a) Patient with atrial-fibrillation (AF) recurrence during follow-up: ρ = 0.32 vs. 0.29, φ = 0.11 vs. 0.10, respectively. (b) Patient without AF recurrence during follow-up: ρ = 0.34 vs. 0.28, φ = 0.13 vs. 0.10, respectively.
Figure 2. Box plots of the morphological features of the P-wave loops under study, comparing the pre- and post-PVI distributions for one patient included in the study.
Figure 3. Dispersion diagram of ρ versus φ for all analyzed P waves in the study. The Pearson correlation is 0.47.
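The loop roundness ρ and planarity φ reported above can be defined in several ways; one common eigenvalue-based formulation (which may differ from the paper's exact formulas) computes them from the covariance of the 3-D loop samples:

```python
import numpy as np

def loop_morphology(points):
    """Roundness and planarity of a 3-D VCG loop from the eigenvalues of its
    covariance matrix (one common definition; the paper's exact formulas may
    differ). points: (N, 3) array of loop samples."""
    centered = points - points.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(centered.T))[::-1]  # l1 >= l2 >= l3
    roundness = evals[1] / evals[0]   # 1 for a circle, -> 0 for a line
    planarity = evals[2] / evals[0]   # 0 for a perfectly planar loop
    return roundness, planarity

# A flat ellipse tilted in 3-D: exactly planar, moderately round.
t = np.linspace(0, 2 * np.pi, 200)
loop = np.stack([np.cos(t), 0.5 * np.sin(t), 0.1 * np.cos(t)], axis=1)
rho, phi = loop_morphology(loop)
```

A rounder loop pushes ρ toward 1, and a loop confined to a plane pushes φ toward 0, matching the direction of the changes reported after PVI.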
9 pages, 3329 KiB  
Communication
Metal Oxide Nanorods-Based Sensor Array for Selective Detection of Biomarker Gases
by Gwang Su Kim, Yumin Park, Joonchul Shin, Young Geun Song and Chong-Yun Kang
Sensors 2021, 21(5), 1922; https://doi.org/10.3390/s21051922 - 9 Mar 2021
Cited by 7 | Viewed by 3957
Abstract
Breath gas analysis through gas-phase chemical analysis is drawing attention as a means of non-invasive, real-time monitoring. Array-type sensors are one diagnostic approach offering high sensitivity and selectivity towards target gases. Herein, we present a 2 × 4 sensor array with a micro-heater and a ceramic chip. The device is designed in a small size for portability, including the internal eight-channel sensor array. In2O3 nanorods (NRs) and WO3 NRs manufactured by the glancing angle deposition method in an e-beam evaporator were used as sensing materials, and Pt, Pd, and Au metal catalysts were decorated on each channel to enhance functionality. The sensor array was tested against the exhaled-gas biomarkers CH3COCH3, NO2, and H2S to confirm its respiratory diagnostic performance. The theoretical detection limits were calculated as 1.48 ppb for CH3COCH3, 1.9 ppt for NO2, and 2.47 ppb for H2S. This excellent detection performance indicates that our sensor array can detect CH3COCH3, NO2, and H2S as biomarkers, making it applicable to breath gas analysis. Our results show the high potential of the gas sensor array as a non-invasive diagnostic tool enabling real-time monitoring. Full article
(This article belongs to the Special Issue Gas Sensors for Internet of Things Era)
Show Figures

Figure 1. Schematic illustration of (a) the fabrication procedure for porous nanostructures using GAD; (b) the design of the Pt-IDEs and the metal oxide nanorods grown along the direction of the vapor flux; (c) the positions of the Au, Pt, and Pd catalysts decorated by e-beam evaporator in on-axis mode; (d) the 2 × 4 sensor array with back heater, chip carrier, and Au wires.
Figure 2. Top-view and cross-sectional (inset) FE-SEM images of (a–d) bare, Au-, Pt-, and Pd-decorated In2O3 nanorods and (f–i) bare, Au- (2 nm), Pt- (1 nm), and Pd- (2 nm) decorated WO3 nanorods. X-ray diffraction patterns of (e) In2O3 and (j) WO3 nanorods as a function of decorating catalyst.
Figure 3. (a) Optical image of the gas sensing chamber and (b) the 2 × 4 sensor array mounted on the micro-heater and the chip carrier.
Figure 4. (a) Power consumption of the micro-heater. Infrared images of the 2 × 4 sensor array at different operating temperatures: (b) 200 °C and (c) 300 °C.
Figure 5. Response of the 2 × 4 sensor array to (a) 10 ppm CH3COCH3, (b) 1 ppm NO2, and (c) 1 ppm H2S over operating temperatures from 150 to 350 °C. (d) Transducer function, (e) utility function, and (f) receptor function and spill-over effect, representing the gas sensing mechanisms.
Figure 6. Response of the 2 × 4 sensor array to 100–500 ppb (a) CH3COCH3, (b) NO2, and (c) H2S at 300 °C, 150 °C, and 250 °C, respectively. Theoretical detection limit of the 2 × 4 sensor array for 100–500 ppb (d) CH3COCH3, (e) NO2, and (f) H2S.
Figure 7. Polar plots of the 2 × 4 sensor array responses to (a) 10 ppm CH3COCH3, (b) 1 ppm NO2, and (c) 1 ppm H2S at operating temperatures of 300, 150, and 250 °C.
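Theoretical detection limits in the ppb/ppt range, such as those quoted in the abstract, are conventionally obtained by extrapolating the response-versus-concentration curve to the point where the signal equals three times the baseline noise. A sketch of that standard SNR = 3 procedure with purely illustrative data (the paper's exact method and numbers are not reproduced here):

```python
import numpy as np

# Fit a power law S = a * C^b (linear in log-log space) to measured responses,
# then extrapolate to the concentration where the response equals 3x the
# baseline noise -- the usual SNR = 3 theoretical detection limit.
conc = np.array([100.0, 200.0, 300.0, 400.0, 500.0])  # ppb (illustrative data)
resp = np.array([1.8, 3.1, 4.2, 5.2, 6.1])            # sensor response (illustrative)
noise = 0.004                                          # baseline noise (illustrative)

b, log_a = np.polyfit(np.log(conc), np.log(resp), 1)   # slope b, intercept log(a)
detection_limit = np.exp((np.log(3 * noise) - log_a) / b)  # ppb, well below conc.min()
```

Because the fitted response curve is extrapolated far below the lowest measured concentration, such limits are "theoretical" rather than demonstrated.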
17 pages, 10641 KiB  
Article
Deep Learning Driven Noise Reduction for Reduced Flux Computed Tomography
by Khalid L. Alsamadony, Ertugrul U. Yildirim, Guenther Glatz, Umair Bin Waheed and Sherif M. Hanafy
Sensors 2021, 21(5), 1921; https://doi.org/10.3390/s21051921 - 9 Mar 2021
Cited by 13 | Viewed by 3498
Abstract
Deep neural networks have received considerable attention in clinical imaging, particularly with respect to reducing radiation risk. Lowering the radiation dose by reducing the photon flux inevitably degrades the scanned image quality. Thus, researchers have sought to exploit deep convolutional neural networks (DCNNs) to map low-quality, low-dose images to higher-dose, higher-quality images, thereby minimizing the associated radiation hazard. Conversely, computed tomography (CT) measurements of geomaterials are not limited by the radiation dose. In contrast to the human body, however, geomaterials may comprise high-density constituents causing increased attenuation of the X-rays; consequently, higher-dose images are required to obtain an acceptable scan quality. The problem of prolonged acquisition times is particularly severe for micro-CT based scanning technologies: depending on the sample size and exposure time settings, a single scan may require several hours to complete. This is of particular concern if phenomena with an exponential temperature dependency are to be elucidated, as a process may happen too fast to be adequately captured by CT scanning. To address these issues, we apply DCNNs to improve the quality of rock CT images while simultaneously reducing exposure times by more than 60%. We highlight current results based on micro-CT derived datasets and apply transfer learning to improve DCNN results without increasing training time. The approach is applicable to any computed tomography technology. Furthermore, we contrast the performance of the DCNN trained by minimizing different loss functions, such as mean squared error and the structural similarity index. Full article
(This article belongs to the Special Issue Image Sensing and Processing with Convolutional Neural Networks)
Show Figures

Figure 1. VDSR architecture showing cascaded pairs of layers. The input, a low-resolution image (a noisy image in our case), passes through the layers and is transformed into a high-resolution or denoised image. The convolutional layers use 64 filters each.
Figure 2. Architecture of the proposed DCNN (U-Net), based on a residual encoder–decoder structure. Each black box represents a feature map; the number of channels is denoted at the top of each box. The input and output images have the same size (height and width), indicated at the sides of the first and last boxes. Dark green boxes represent feature maps copied from the encoder block. The arrows denote different operations.
Figure 3. High exposure time (1.4 s) CT image (left) and low exposure time (0.5 s) CT image (right) of a carbonate rock sample, where dark colors indicate pore space. Evidently, a reduced exposure time results in an increased noise level owing to photon starvation at the detector.
Figure 4. The image on the left is an example of a low exposure time slice, with the image in the center being the high exposure time equivalent. The image on the far right is the reconstruction based on the SSIM-optimized DCNN (U-Net). The DCNN performs remarkably well in reconstructing fine-scale features that are barely visible even at the longer exposure time.
Figure 5. Summary plot of the average SSIM values over 400 test images as predicted by the pre-trained VDSR network and a VDSR initialized following the approach of He et al. [50]. From 50 to 200 training images, both networks show similar SSIM gains. Beyond 200 training images, however, the pre-trained network further increases the SSIM value compared to the VDSR; in general, the pre-trained VDSR yields greater SSIM values in all cases.
Figure 6. Summary plot of the average PSNR values over 400 test images as predicted by the pre-trained VDSR network and the VDSR. From 50 to 100 training images, the VDSR improves slightly faster than the pre-trained VDSR; this advantage diminishes as the number of training images increases. As with the SSIM plot in Figure 5, the pre-trained VDSR yields greater values in all cases.
Figure 7. From left to right: high exposure (reference) image, low-exposure image (SSIM = 0.54, PSNR = 23 dB), and denoised image (SSIM = 0.78, PSNR = 34 dB) using the pre-trained VDSR network.
Figure 8. Histogram of the PSNR values for the 400 test images. "Before filtering" refers to the low-exposure scans; "after filtering" refers to the DCNN (U-Net) denoised scans, where the network was optimized with respect to the MSE loss function.
Figure 9. Histogram of the SSIM values for the 400 test scans. "Before filtering" refers to the low-exposure images; "after filtering" refers to the DCNN (U-Net) denoised scans, where the network was optimized with respect to the MSE loss function.
Figure 10. Histogram of the PSNR values for the 400 test images. "Before filtering" refers to the low-exposure scans; "after filtering" refers to the DCNN (U-Net) denoised scans, where the network was optimized with respect to the SSIM loss function.
Figure 11. Histogram of the SSIM values for the 400 test scans. "Before filtering" refers to the low-exposure images; "after filtering" refers to the DCNN (U-Net) denoised scans, where the network was optimized with respect to the SSIM loss function.
Figure 12. DCNN (U-Net) denoising example from the test set: (a) a pair of images from the test dataset, the left a high exposure time (1.4 s) scan and the right the equivalent low exposure time (0.5 s) scan (SSIM = 0.52, PSNR = 22 dB); (b) denoising results exemplifying the performance of the network, with the left image showing the prediction of the MSE-optimized DCNN (SSIM = 0.77, PSNR = 34 dB) and the right image the prediction of the SSIM-optimized network (SSIM = 0.77, PSNR = 34 dB). The MSE-optimized network predicts coarser grain textures (greater variation in grayscale values, indicative of larger variations in grain density) and boundaries, and seems more sensitive to fine-scale pore space (compare the upper left quadrants of both images). Conversely, the SSIM-optimized network suggests smoother textures and sharper grain boundaries, and appears less sensitive to fine-scale pore space. The white lines indicate the location of a horizontal profile used to compare the networks' performance in the subsequent figure.
Figure 13. Intensity profiles taken along the white lines shown in Figure 12, demonstrating the network's remarkable ability to remove noise: (a) profiles along the cross sections in Figure 12a, representing the high- and low-exposure images, respectively; (b) the corresponding profiles for the MSE-reconstructed (left) and SSIM-reconstructed (right) images.
Figure 14. From left to right: reference image (as predicted by the network, representing the ground truth), FDK reconstruction, SIRT reconstruction, and CGLS reconstruction.
Figure 15. From left to right: reference image (as predicted by the network, representing the ground truth); artificial low-exposure image created via FDK, i.e., VDSR input (SSIM = 0.17, PSNR = 14 dB); artificial high-exposure image created via FDK, i.e., VDSR label (SSIM = 0.30, PSNR = 21 dB); and VDSR output (SSIM = 0.89, PSNR = 26 dB).
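The PSNR values reported throughout Figures 5–15 follow the standard definition 10·log10(MAX²/MSE); a minimal implementation, with a synthetic image pair standing in for the CT slices:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB, the metric used (alongside SSIM)
    to score the denoised CT slices: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
clean = rng.random((64, 64))                                   # stand-in "reference" slice
noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)  # additive noise
score = psnr(clean, noisy)
```

Halving the noise standard deviation raises the PSNR by roughly 6 dB, which is why the ~10 dB gains reported in Figures 7 and 12 indicate a substantial noise reduction.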
30 pages, 9202 KiB  
Article
Vision-Based Tactile Sensor Mechanism for the Estimation of Contact Position and Force Distribution Using Deep Learning
by Vijay Kakani, Xuenan Cui, Mingjie Ma and Hakil Kim
Sensors 2021, 21(5), 1920; https://doi.org/10.3390/s21051920 - 9 Mar 2021
Cited by 36 | Viewed by 7160
Abstract
This work describes the development of a vision-based tactile sensor system that uses image-based information from the tactile sensor, together with input loads at various motions, to train a neural network for the estimation of tactile contact position, area, and force distribution. The study also addresses pragmatic aspects, such as the choice of thickness and materials for the tactile fingertips and surface tendency. The overall vision-based tactile sensor equipment interacts with an actuating motion controller, a force gauge, and a control PC (personal computer) running LabVIEW software. Image acquisition was carried out using a compact stereo camera setup mounted inside the elastic body to observe and measure the amount of deformation caused by the motion and input load. The vision-based tactile sensor test bench was employed to collect the output contact position, angle, and force distribution caused by various randomly chosen input loads for motion in the X, Y, Z directions and Rx, Ry rotational motion. The retrieved image information, contact position, area, and force distribution from different input loads with specified 3D position and angle are used for deep learning. A convolutional neural network VGG-16 classification model was modified into a regression network, and transfer learning was applied to suit the regression task of estimating contact position and force distribution. Several experiments were carried out using thick and thin tactile sensors with various shapes, such as circles, squares, and hexagons, to validate the predicted contact position, contact area, and force distribution. Full article
Show Figures

Figure 1: Principle of detection in vision-based tactile sensor technology.
Figure 2: Problem statement of the vision-based tactile sensor mechanism for the estimation of contact location and force distribution using deep learning: data acquisition, training, and inference stages.
Figure 3: Equipment setup and schematic: (a) overall system installation; (b) flow schematic of the visual tactile sensor mechanism.
Figure 4: Making of tactile fingertips: (a) defoaming process with upper-mold and lower-mold structures; (b) fingertips produced by the 3D printing process; (c) fingertips produced by the defoaming injection-mold process; (d) mold injection causes surface light reflection; (e) sanding the mold surface reduced the light reflection; (f) vacuum degassing process; (g) marker painting process.
Figure 5: Selection of tactile fingertip material based on shore (surface) hardness: force-displacement characteristics at shore hardness 40, 60, 70, and 80 for t = 1 mm.
Figure 6: Choosing the thickness of the tactile fingertip: (a) force-displacement characteristics by thickness at 1 N for shore 70; (b) force-displacement characteristics by thickness at 10 N for shore 70.
Figure 7: Stereo camera system: (a) stereo camera with a baseline of 10 mm; (b) compact stereo camera attached to the tactile fingertip.
Figure 8: Flowchart of transfer learning applied to the images acquired from the tactile stereo camera setup.
Figure 9: Pre-processing: (a) cropping the input data through region-of-interest (ROI) setting for the stereo image pair; (b) types of modes.
Figure 10: Network architecture of the VGG16 regression model employed in the study.
Figure 11: Flow schematic of the 2D contact area estimation process.
Figure 12: Data acquisition procedure for training and testing scenarios: (a) instrument used to conduct the experiments; (b) LabVIEW GUI for collecting data under various motions (X, Y, Z, Rx, Ry).
Figure 13: Successful case of network training on thin sensor data (Data01) in the form of Mode1.
Figure 14: Successful case of network training on thick sensor data (Data02) in the form of Mode1.
Figure 15: Testing scenarios and outcomes.
Figure 16: Full-scale output (FSO, %) scores for the force estimation tests.
Figure 17: Average displacement error in X, Y contact position with respect to the corresponding ground truth over 13 points (−6 mm to +6 mm): (a) average X-displacement errors (in mm); (b) average Y-displacement errors (in mm).
Figure 18: Displacement error in Z contact position with respect to diverse force ranges (0.1 N to 1 N).
Figure 19: Angular displacement error about the Rx, Ry axes with respect to diverse forces (0.1 N to 0.9 N).
Figure 20: Estimation of 2D contact area: (a) with a circular tool; (b) with a square tool; (c) with a hexagonal tool.
Figure A1: Tactile sensor-related aspects: (a) stable calibration of the tactile sensor at an angle of 45°; (b) measuring the height of the tactile sensor (72 mm) using a vernier caliper; (c) width of the tactile sensor (44 mm); (d) tactile sensor camera; (e) vision-based tactile sensor used on a robotic arm grasping a raw egg; (f) lifting and placing a bottle; (g) grasping a tennis ball.
18 pages, 33568 KiB  
Article
A Robot Object Recognition Method Based on Scene Text Reading in Home Environments
by Shuhua Liu, Huixin Xu, Qi Li, Fei Zhang and Kun Hou
Sensors 2021, 21(5), 1919; https://doi.org/10.3390/s21051919 - 9 Mar 2021
Cited by 3 | Viewed by 2786
Abstract
To address the issue of robot object recognition in complex scenes, this paper proposes an object recognition method based on scene text reading. The proposed method simulates human-like behavior and accurately identifies objects bearing text through careful reading. First, deep learning models with high accuracy are adopted to detect and recognize text from multiple views. Second, datasets comprising 102,000 Chinese and English scene text images and their inverses are generated. The F-measure of text detection is improved by 0.4% and the recognition accuracy by 1.26% when the model is trained on these two datasets. Finally, a robot object recognition method is proposed based on scene text reading. The robot detects and recognizes texts in the image and stores the recognition results in a text file. When the user gives the robot a fetching instruction, the robot searches the text files for the corresponding keywords and obtains the confidence of multiple objects in the scene image. The object with the maximum confidence is then selected as the target. The results show that the robot can accurately distinguish objects of arbitrary shape and category, and it can effectively solve the problem of object recognition in home environments. Full article
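The keyword-search-and-maximum-confidence step can be sketched in a few lines; the detection entries below are hypothetical stand-ins for the robot's stored recognition results, not data from the paper.

```python
# Hypothetical recognition results: one entry per detected text region,
# as (object_id, recognized_text, confidence), standing in for the
# text files the robot stores after scene text reading.
detections = [
    ("bottle_1", "green tea", 0.72),
    ("box_2", "green tea leaves", 0.91),
    ("can_3", "coffee", 0.88),
]

def find_target(keyword, detections):
    # Keep objects whose recognized text contains the keyword, then
    # pick the one with maximum confidence as the fetch target.
    matches = [d for d in detections if keyword in d[1]]
    return max(matches, key=lambda d: d[2])[0] if matches else None

print(find_target("green tea", detections))  # box_2
```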
Show Figures

Figure 1: Failed recognition cases (yellow fonts are the recognition results).
Figure 2: Object recognition framework of the robot.
Figure 3: Framework of text detection and recognition.
Figure 4: Text detection model based on Cascade Mask Region Convolutional Neural Network (R-CNN).
Figure 5: Comparison of different scene texts.
Figure 6: Text recognition model based on Long Short-Term Memory (LSTM) and an attention mechanism.
Figure 7: Examples of the generated dataset, including Chinese, uppercase, and lowercase letters.
Figure 8: Histogram of experimental results from the RCTW-17 competition platform.
Figure 9: Correct recognition results on RCTW-17.
Figure 10: Failed recognition cases.
Figure 11: Contrast samples of the generated and inverse datasets.
Figure 12: Nao robot.
Figure 13: Part of the test samples.
Figure 14: Confidence threshold comparison. There are 45 test samples; as the confidence threshold increases, the number of correctly recognized objects decreases.
Figure 15: Nao recognizing objects.
17 pages, 2001 KiB  
Article
Visualizing and Evaluating Finger Movement Using Combined Acceleration and Contact-Force Sensors: A Proof-of-Concept Study
by Hitomi Oigawa, Yoshiro Musha, Youhei Ishimine, Sumito Kinjo, Yuya Takesue, Hideyuki Negoro and Tomohiro Umeda
Sensors 2021, 21(5), 1918; https://doi.org/10.3390/s21051918 - 9 Mar 2021
Cited by 6 | Viewed by 2571
Abstract
The 10-s grip and release is a method to evaluate hand dexterity. Current evaluations only visually determine the presence or absence of a disability, but experienced physicians may also make other diagnoses. In this study, we investigated a method for evaluating hand movement function by acquiring and analyzing fingertip data during a 10-s grip and release using a wearable sensor that can measure triaxial acceleration and strain. The subjects were two healthy females. The analysis was performed on the x-, y-, and z-axis data, and on the absolute acceleration and contact force of all fingertips. We calculated the variability of the data, the number of grip-and-release cycles, the frequency response, and the correlation between fingers. Experiments with several grip-and-release patterns yielded different characteristics for each pattern, and it was suggested that these could be expressed in radar charts to convey the state of grip and release intuitively. Contact-force data for each finger were found to be useful for understanding the characteristics of grip and release and for improving the accuracy of counting grip-and-release cycles. Frequency analysis suggests that knowing the periodicity of grip and release can reveal unnatural grip and release and tremor states. The correlations between the fingers allow the grip-and-release characteristics of each finger to be considered in light of the hand's anatomy. By taking these factors into account, the 10-s grip-and-release test could provide new value by objectively assessing motor functions of the hands beyond the number of grip-and-release cycles. Full article
(This article belongs to the Special Issue Body Worn Sensors and Related Applications)
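Counting grip-and-release cycles and checking their periodicity can be sketched with threshold crossings and an FFT; the 100 Hz sampling rate and the synthetic force trace below are illustrative assumptions, not the HapLog data.

```python
import numpy as np

fs = 100.0                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # a 10-s recording
# Synthetic contact-force trace: about 2.5 grip-release cycles/second.
force = 0.5 + 0.5 * np.sin(2 * np.pi * 2.5 * t)

# Count grips as upward crossings of a force threshold.
gripped = force > 0.5
n_grips = int(np.sum(~gripped[:-1] & gripped[1:]))

# Dominant grip-release frequency from the FFT (mean removed so the
# DC component does not dominate the spectrum).
spec = np.abs(np.fft.rfft(force - force.mean()))
freqs = np.fft.rfftfreq(len(force), d=1 / fs)
f_dom = freqs[np.argmax(spec)]

print(n_grips, f_dom)  # ~25 grips in 10 s at ~2.5 Hz
```

A sharp peak at one frequency indicates regular grip and release; energy spread over many bins, or an unexpected peak, would flag the unnatural or tremor states the abstract mentions.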
Show Figures

Figure 1: HapLog structure: (a) sensor; (a′) front of the sensor: left/right accelerations are x-axis, front/back accelerations are y-axis, and up/down accelerations are z-axis accelerations; (b) bangle-type connector; (c) calibration unit.
Figure 2: Mounting of the sensors. Two HapLogs were used, and a sensor of each HapLog was attached to the thumb.
Figure 3: Radar charts of A (top: thumb; clockwise: index, middle, ring, and little fingers).
Figure 4: Radar charts of B (top: thumb; clockwise: index, middle, ring, and little fingers).
Figure 5: Radar charts of raw contact-force data.
Figure 6: Frequency analysis result: the value of the highest-power frequency divided by the total power, and the average value and its ranking by condition.
Figure 7: Mean of the correlation coefficients for each of conditions 1 to 5 (A: subject A; B: subject B).
Figure 8: Correlation coefficient after condition 6.
17 pages, 4094 KiB  
Article
Automatic Identification of Tool Wear Based on Thermography and a Convolutional Neural Network during the Turning Process
by Nika Brili, Mirko Ficko and Simon Klančnik
Sensors 2021, 21(5), 1917; https://doi.org/10.3390/s21051917 - 9 Mar 2021
Cited by 30 | Viewed by 3622
Abstract
This article presents a control system for cutting tool condition supervision, which recognises tool wear automatically during turning. We used an infrared camera for process control, which, unlike common cameras, captures the thermographic state in addition to the visual state of the process. Despite challenging environmental conditions (e.g., hot chips), we protected the camera and placed it right up against the cutting knife, so that machining could be observed closely. During the experiment, constant cutting conditions were set for dry machining of the workpiece (low-alloy carbon steel 1.7225, or 42CrMo4). To build a dataset of over 9000 images, we machined on a lathe with tool inserts at different wear levels. Using a convolutional neural network (CNN), we developed a model for tool wear and tool damage prediction. It determines the state of a cutting tool automatically (none, low, medium, or high wear level) based on thermographic process data. The classification accuracy was 99.55%, which affirms the adequacy of the proposed method. Such a system enables immediate action in the case of cutting tool wear or breakage, regardless of the operator's knowledge and competence. Full article
(This article belongs to the Special Issue Industry 4.0 and Smart Manufacturing)
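The four wear classes can be expressed as a simple thresholding of the measured flank wear VB; the cut-off values below are illustrative assumptions, not the ones used in the paper.

```python
# Hypothetical VB thresholds (mm) mapping measured flank wear to the
# four classes named in the abstract; the actual cut-offs are assumed.
def wear_class(vb_mm):
    if vb_mm < 0.05:
        return "none"
    if vb_mm < 0.2:
        return "low"
    if vb_mm < 0.4:
        return "medium"
    return "high"

# Labelling a few illustrative flank-wear measurements.
print([wear_class(vb) for vb in (0.03, 0.15, 0.30, 0.60)])
# ['none', 'low', 'medium', 'high']
```

During operation, the CNN predicts one of these class labels directly from a thermographic image, so no explicit VB measurement is needed at inference time.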
Show Figures

Figure 1: Inverse proportionality of machining parameters.
Figure 2: Schematic representation of the model for intelligent control of the cutting tool wear level and breakage, using thermography.
Figure 3: A cutting tool with flank wear (VB).
Figure 4: The effect of cutting tool wear on the workpiece diameter deviation (r1 is the actual workpiece radius, r2 is the expected workpiece radius in case of no tool wear).
Figure 5: The camera position in a lathe: (a) protection for the infrared (IR) camera; (b) mounting of the IR camera close against the cutting knife.
Figure 6: The cutting tool: (a) holder of type CKJNL; (b) cutting tool insert KNUX.
Figure 7: Categorisation of the acquired images into wear classes.
Figure 8: Inception V3 architecture.
Figure 9: Last layers of the CNN that are trained for the specific case.
Figure 10: Graphical representation of the accuracy and the calculation time at different numbers of iterations.
Figure 11: Classification results for different image test sets.
17 pages, 2545 KiB  
Article
Evaluation of Thawing and Stress Restoration Method for Artificial Frozen Sandy Soils Using Sensors
by Jongchan Kim, Jong-Sub Lee, Cody Arnold and Sang Yeob Kim
Sensors 2021, 21(5), 1916; https://doi.org/10.3390/s21051916 - 9 Mar 2021
Cited by 1 | Viewed by 2034
Abstract
Undisturbed frozen samples can be efficiently obtained using the artificial ground freezing method. Thereafter, the restoration of in situ conditions, such as stress and density after thawing, is critical for laboratory testing. This study aims to experimentally explore the effects of thawing and the in situ stress restoration process on the geomechanical properties of sandy soils. Specimens were prepared at a relative density of 60% and frozen at −20 °C under the vertical stress of 100 kPa. After freezing, the specimens placed in the triaxial cell underwent thawing and consolidation phases with various drainage and confining stress conditions, followed by the shear phase. The elastic wave signals and axial deformation were measured during the entire protocol; the shear strength was evaluated from the triaxial compression test. Monotonic and cyclic simple shear tests were conducted to determine the packing density effect on liquefaction resistance. The results show that axial deformation, stiffness, and strength are minimized for a specimen undergoing drained thawing, restoring the initial stress during the consolidation phase, and that denser specimens are less susceptible to liquefaction. Results highlight that the thawing and stress restoration process should be considered to prevent the overestimation of stiffness, strength, and liquefaction resistance of sandy soils. Full article
(This article belongs to the Special Issue Emerging Characterization of Geomaterials Using Advanced Geo-Sensors)
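The elastic wave measurements reduce to two standard relations: wave velocity from travel distance over first-arrival time, and the small-strain shear modulus G0 = ρVs². A sketch with illustrative numbers (not the paper's data):

```python
def wave_velocity(travel_distance_m, travel_time_s):
    # Elastic wave velocity from tip-to-tip distance and first-arrival time.
    return travel_distance_m / travel_time_s

def small_strain_shear_modulus(density_kg_m3, vs_m_s):
    # G0 = rho * Vs^2, the standard small-strain relation for S-waves.
    return density_kg_m3 * vs_m_s ** 2

# Illustrative values: a 0.1 m specimen, 0.5 ms S-wave travel time,
# bulk density 1900 kg/m^3 (all assumed, not from the paper).
vs = wave_velocity(0.1, 0.5e-3)            # 200 m/s
g0 = small_strain_shear_modulus(1900, vs)  # 76 MPa
print(vs, g0)
```

Tracking Vs through the thawing, consolidation, and shear phases is what lets the authors quantify how each restoration protocol changes the specimen's stiffness.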
Show Figures

Figure 1: Schematic drawings of the freezing mold. Insulation material covers all but the bottom of the mold to ensure unidirectional (bottom-to-top) freezing; the volume change of pore water caused by freezing can be expelled through the top and bottom drainage. (a) Cross-sectional drawing with dimensions. (b) Three-dimensional drawing before and after assembly.
Figure 2: Evolution of specimen temperature during freezing. Thermocouples are installed at the top, middle, and bottom of the specimen to monitor the direction of cooling/freezing.
Figure 3: Measurement system during the triaxial test. The wave sensors (bender elements, BEs, and piezo disk elements, PDEs) and a thermocouple are mounted on the top and bottom pedestals.
Figure 4: Measured temperature of the tested specimen (DT20-C100) during thawing. The specimen was placed on the pedestal of the triaxial system and subjected to a 20 kPa confining stress. The cell was filled with cold water to avoid sudden/uncontrolled melting of the specimen and was then equilibrated at room temperature.
Figure 5: Results of undrained triaxial compression tests. The shear phase continued until the axial strain reached 25%. (a) Deviatoric stress. (b) Excess pore pressure.
Figure 6: Measured P- and S-wave velocities during the saturation phase. The specimens were subjected to different degrees of confining stress. (a) P-wave velocity versus B-value; the two lines indicate the relationship between the B-value and P-wave velocity [12]. (b) S-wave velocity versus B-value.
Figure 7: Measured P- and S-wave velocities during the consolidation phase (stress rate: 100 kPa/min). (a) P-wave velocity versus time. (b) S-wave velocity versus time. (c) S-wave velocity versus confining stress. Note that the DT20-C150 specimen is subjected to a confining stress of 150 kPa.
Figure 8: Measured P- and S-wave velocities during the shear phase. (a) P-wave velocity versus axial strain. (b) S-wave velocity versus axial strain.
Figure 9: Cyclic simple shear results for the loose sample (Dr ≈ 50%). Stresses are normalized by the initial vertical stress (σv = 100 kPa). (a) Normalized shear stress versus normalized vertical effective stress. (b) Normalized shear stress versus shear strain. (c) Number of cycles versus excess pore pressure ratio. (d) Number of cycles versus shear strain.
Figure 10: Cyclic simple shear results for the dense sample (Dr ≈ 83%). Stresses are normalized by the initial vertical stress (σv = 100 kPa). (a) Normalized shear stress versus normalized vertical effective stress. (b) Normalized shear stress versus shear strain. (c) Number of cycles versus excess pore pressure ratio. (d) Number of cycles versus shear strain.
Figure 11: Cyclic stress ratio versus the number of cycles to liquefaction. The inserted reference bar indicates the 15 cycles corresponding to an earthquake of magnitude 7.5.
20 pages, 11477 KiB  
Article
Tactile Sensors for Parallel Grippers: Design and Characterization
by Andrea Cirillo, Marco Costanzo, Gianluca Laudante and Salvatore Pirozzi
Sensors 2021, 21(5), 1915; https://doi.org/10.3390/s21051915 - 9 Mar 2021
Cited by 15 | Viewed by 3586
Abstract
Tactile data perception is of paramount importance in today's robotics applications. This paper describes the latest design of the tactile sensor developed in our laboratory. Both the hardware and firmware concepts are reported in detail in order to allow the research community to reproduce the sensor, also adapting it to their needs. The sensor is based on optoelectronic technology, and the pad shape can be adapted to various robotics applications: a flat surface, as proposed in this paper, is well suited when the objects are smaller than the pad and/or shape recognition is needed, while a domed pad can be used to manipulate bigger objects. Compared to the previous version, the novel tactile sensor has a larger sensing area and a more robust electronic, mechanical, and software design that yields less noise and higher flexibility. The proposed design exploits standard PCB manufacturing processes and advanced, now commercial, 3D printing processes for the realization of all components. A GitHub repository has been prepared with all the files needed to allow interested readers to reproduce the sensor. The whole sensor has been tested with a maximum load of 15 N, showing a sensitivity of 0.018 V/N. Moreover, a complete and detailed characterization of the single taxel and the whole pad is reported to show the potential of the sensor also in terms of response time, repeatability, hysteresis, and signal-to-noise ratio. Full article
(This article belongs to the Section Sensors and Robotics)
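The reported sensitivity (0.018 V/N, loads up to 15 N) is the slope of the voltage-force characteristic; a sketch of estimating it with a least-squares line fit, on synthetic calibration data generated to have roughly that slope (not the paper's measurements):

```python
import numpy as np

# Illustrative calibration data: forces in N and mean taxel voltage
# variations in V, built around a 0.018 V/N slope with small noise.
force = np.linspace(0.0, 15.0, 16)
rng = np.random.default_rng(1)
voltage = 0.018 * force + 0.002 * rng.normal(size=force.size)

# Sensitivity = slope of the least-squares line through (force, voltage).
sensitivity, offset = np.polyfit(force, voltage, 1)
print(round(float(sensitivity), 3))  # close to 0.018 V/N
```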
Show Figures

Figure 1: CAD drawing of an assembled sensor (left) with all components (middle) and details of the 3D machines used for production (right).
Figure 2: Electronics block diagram.
Figure 3: Manufactured PCBs: sensing board (left) and power supply board (right).
Figure 4: Deformable layer and rigid grid characteristics (a). Mechanical assembly of the mechanical and electronic parts (b). Section of the assembled parts with dimensions: full view (c) and zoomed view (d).
Figure 5: Manufactured grid in black ABS (a), deformable layer in silicone (b), case in nylon (c), and complete assembled finger (d).
Figure 6: Elaboration system software design: software flow chart (a) and protocol sequence diagram (b).
Figure 7: Experimental setup for a single taxel (a) and taxel numbering used in the experiment description (b).
Figure 8: Hysteresis experiment for taxel 17: applied force (a) and voltage variations (b).
Figure 9: Hysteresis graphs for taxel 5 (a), taxel 13 (b), and taxel 17 (c).
Figure 10: Repeatability experiment for taxel 17: applied force (a) and voltage variations (b).
Figure 11: Repeatability error graphs for taxel 5 (a), taxel 13 (b), and taxel 17 (c).
Figure 12: Response time graphs for taxel 5 (a), taxel 13 (b), and taxel 17 (c).
Figure 13: Power spectrum of taxels 5, 13, and 17.
Figure 14: Experimental setup for the whole-pad characterization: components (a) and contact example (b).
Figure 15: Hysteresis experiment for the whole-pad characterization: force profile used to stimulate the pad (a) and corresponding voltage variations (b).
Figure 16: Hysteresis graph for the whole-pad characterization: a single voltage (a) and mean of the voltages (b).
Figure 17: Repeatability graph for the whole-pad characterization: a single voltage (a) and mean of the voltages (b).
Figure 18: Sensitivity graph for the whole-pad characterization.
Figure 19: Examples of tactile maps during the grasp of a cable: linear horizontal (a), quadratic horizontal (b), quadratic vertical (c), and high-curvature (d) cases.
17 pages, 1153 KiB  
Article
Relay Positioning for Load-Balancing and Throughput Enhancement in Dual-Hop Relay Networks
by Byungkwan Kim and Taejoon Kim
Sensors 2021, 21(5), 1914; https://doi.org/10.3390/s21051914 - 9 Mar 2021
Cited by 3 | Viewed by 2130
Abstract
In a cellular communication system, deploying a relay station (RS) is an effective alternative to installing a new base station (BS). A dual-hop network enhances the throughput of mobile stations (MSs) located in shadow areas or at cell edges by installing RSs between BSs and MSs. Because additional radio resources must be allocated to the wireless link between BS and RS, a frame transmitted from the BS is divided into an access zone (AZ) and a relay zone (RZ). BS and RS communicate with each other through the RZ, and each communicates with its registered MSs through the AZ. However, if too many MSs are registered with a certain BS or RS, MS overloading may cause performance degradation. To prevent such degradation, it is very important to find proper positions for the deployed RSs. In this paper, we propose a method for finding sub-optimal RS deployment locations for the purposes of load-balancing and throughput enhancement. The advantages of the proposed method are its efficiency in finding the sub-optimal RS locations and its reliable tradeoff between load-balancing and throughput enhancement. Since the proposed scheme finds the proper position by adjusting the distance and angle of the RSs, its computational complexity is lower than that of global optimization or learning-based approaches. In addition, the proposed scheme consists of two stages, load-balancing and throughput enhancement, which together yield an appropriate tradeoff between the two goals. The simulation results support these advantages of the proposed scheme. Full article
(This article belongs to the Section Communications)
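The idea of tuning an RS position by adjusting its polar distance and angle can be sketched as a greedy coordinate search. The objective function below is a hypothetical smooth stand-in for the load-balancing/throughput metric (which in the paper comes from SINR maps and MS distributions), and the search is not the paper's exact two-stage algorithm, only the adjust-distance-and-angle idea.

```python
import math

def score(r, theta):
    # Hypothetical stand-in utility, peaked at r = 600 m, theta = 60 deg.
    return -(r - 600.0) ** 2 / 1e5 - (theta - math.pi / 3) ** 2

def adjust_rs_position(r, theta, dr=50.0, dtheta=math.pi / 36, iters=100):
    # Greedy coordinate adjustment: try +/- steps in distance and angle,
    # keep any move that improves the score, stop when none does.
    best = score(r, theta)
    for _ in range(iters):
        moved = False
        for nr, nth in ((r + dr, theta), (r - dr, theta),
                        (r, theta + dtheta), (r, theta - dtheta)):
            s = score(nr, nth)
            if s > best:
                r, theta, best, moved = nr, nth, s, True
        if not moved:
            break
    return r, theta

r, th = adjust_rs_position(200.0, 0.0)
print(round(r), round(math.degrees(th)))  # converges near (600, 60)
```

Because each step only evaluates four candidate positions, the cost grows linearly with the number of adjustment steps, which is the complexity advantage the abstract claims over global or learning-based search.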
Show Figures

Figure 1: Dual-hop relay system.
Figure 2: Orthogonal frequency allocation with two RSs per sector.
Figure 3: Downlink frame structure in a dual-hop relay network.
Figure 4: RS location using polar coordinates.
Figure 5: Spectral efficiencies with different step sizes x′.
Figure 6: Generated sample MS location map.
Figure 7: MS distribution sample drawn from the map with user thinning.
Figure 8: Variation of the SINR distribution by RS location: (a) default position, (b) after Algorithm 1, and (c) after Algorithm 2.
Figure 9: Throughput with different algorithms.
Figure 10: Standard deviations of the MS distributions.
13 pages, 30768 KiB  
Article
Modular MA-XRF Scanner Development in the Multi-Analytical Characterisation of a 17th Century Azulejo from Portugal
by Sergio Augusto Barcellos Lins, Marta Manso, Pedro Augusto Barcellos Lins, Antonio Brunetti, Armida Sodo, Giovanni Ettore Gigante, Andrea Fabbri, Paolo Branchini, Luca Tortora and Stefano Ridolfi
Sensors 2021, 21(5), 1913; https://doi.org/10.3390/s21051913 - 9 Mar 2021
Cited by 10 | Viewed by 3504
Abstract
A modular X-ray scanning system was developed to fill the gap between portable instruments (with a limited analytical area) and mobile instruments (with large analytical areas, but sometimes bulky and difficult to transport). The scanner was compared to a commercial tabletop instrument by analysing a Portuguese tile (azulejo) from the 17th century. Complementary techniques were used to achieve a thorough characterisation of the sample in a completely non-destructive approach. The complexity of the acquired X-ray fluorescence (XRF) spectra, due to the inherent sample stratigraphy, was resolved using Monte Carlo simulations and Raman spectroscopy, the most suitable technique to complement the analysis of azulejo colours, yielding satisfactory results. The colouring agents were identified as cobalt blue and a Zn-modified Naples yellow. The stratigraphy of the area under study was partially modelled with Monte Carlo simulations. The scanners' performance was compared by evaluating the image outputs and the global spectrum. Full article
Show Figures

Graphical abstract
Figure 1: Sample and Macro-X-Ray Fluorescence (MA-XRF) scanned area (1). The colours identified are yellow (A), white (B), orange (C), and blue (D).
Figure 2: Translation stages: portable version (A) and mobile version (B).
Figure 3: Elemental distribution maps obtained with the modular scanner and the M4 Tornado. Scale is 10 millimetres.
Figure 4: Sum spectra of the scanned regions.
Figure 5: Recorded μ-Raman spectra for the blue glazed regions: (A) vitreous blue region, (B) darker blue region; excitation wavelength 532 nm.
Figure 6: Recorded Raman spectra for a yellow glazed region (excitation wavelength 532 nm).
Figure 7: Micrographs of the yellow, blue, and orange colours. Scale bar is 2 mm wide.
Figure 8: Comparison of recorded Raman spectra obtained from the orange region: orange grain, yellow matrix, and dark grain. The excitation wavelength for the dark grain spectrum was 785 nm; for the remaining spectra, 532 nm.
18 pages, 5302 KiB  
Article
Design and Performance Evaluation of a “Fixed-Point” Spar Buoy Equipped with a Piezoelectric Energy Harvesting Unit for Floating Near-Shore Applications
by Damiano Alizzio, Marco Bonfanti, Nicola Donato, Carla Faraci, Giovanni Maria Grasso, Fabio Lo Savio, Roberto Montanini and Antonino Quattrocchi
Sensors 2021, 21(5), 1912; https://doi.org/10.3390/s21051912 - 9 Mar 2021
Cited by 6 | Viewed by 2497
Abstract
In the present work, a spar-buoy scaled model was designed and built through a “Lab-on-Sea” unit, equipped with an energy harvesting system. Such a system is based on deformable bands, which are loyal to the unit, to convert wave motion energy into electricity by means of piezo patch transducers. In a preliminary stage, the scaled model, suitable for tests in a controlled ripples-type wave motion channel, was tested in order to verify the “fixed-point” assumption in pitch and roll motions and, consequently, to optimize energy harvesting. A special type of structure was designed, numerically simulated, and experimentally verified. The proposed solution represents an advantageous compromise between the lightness of the used materials and the amount of recoverable energy. The energy, which was obtained from the piezo patch transducers during the simulations in the laboratory, was found to be enough to self-sustain the feasible on-board sensors and the remote data transmission system. Full article
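As an illustrative aside (not part of the article), the pitch/roll response that Figure 3 below plots is the classic frequency response of a mass-spring-damper. A minimal Python sketch, where λ is the ratio of excitation frequency to natural frequency and ζ is a hypothetical damping ratio:

```python
import math

def magnitude_ratio(lam, zeta):
    """Normalized magnitude of a forced mass-spring-damper response
    as a function of the frequency ratio lam = omega / omega_n."""
    return 1.0 / math.sqrt((1.0 - lam ** 2) ** 2 + (2.0 * zeta * lam) ** 2)

def phase_deg(lam, zeta):
    """Phase lag (degrees) of the response relative to the excitation."""
    return math.degrees(math.atan2(2.0 * zeta * lam, 1.0 - lam ** 2))
```

At λ = 0 the ratio is 1, and at resonance (λ = 1) it reduces to 1/(2ζ), which is why a lightly damped buoy amplifies wave motion near its natural frequency.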
Figure 1
<p>Spar buoy scaled model at “fixed-point” test configuration.</p>
Figure 2
<p>Scheme of the energy conversion apparatus for the Lab-on-Sea unit: 1. spar buoy; 2. ballast; 3. piezoelectric patch transducers (PPTs); 4. band (a) deformable part, (b) rigid part; 5. external float; and 6. rod grip. The vertical direction of the wave motion is indicated with <span class="html-italic">x</span>, while <span class="html-italic">y</span> is the vertical displacement of the spar buoy and <span class="html-italic">γ</span> is its angular oscillation.</p>
Figure 3
<p>Magnitude ratio and phase of the response of a typical mass-spring-damper as a function of λ.</p>
Figure 4
<p>Comparison between acceleration and normalized DFT (<b>a</b>) at sea and (<b>b</b>) in an artificial channel.</p>
Figure 5
<p>Buoyancy-sinking curve for the spar buoy.</p>
Figure 6
<p>Spar buoy scaled model during test.</p>
Figure 7
<p>Wave amplitude and normalized DFT in an artificial channel.</p>
Figure 8
<p>Geometrical characteristics of the deformable band.</p>
Figure 9
<p>FEA screens from Solidworks: (<b>a</b>) imposed displacements at the free end over time, (<b>b</b>) Z-normal nodal averaged strain on upper PPT surface over time, (<b>c</b>) displacements along Z-axis at 100 mm from fixed joint cross-section over time, and (<b>d</b>) nodal averaged reaction force along Z-axis at 100 mm from fixed joint cross-section.</p>
Figure 10
<p>Z-axis normal strain corresponding to maximal imposed displacement. The measurement unit is mm/mm.</p>
Figure 11
<p>(<b>a</b>) Setup scheme (signal generator and amplifier (<span class="html-italic">I</span>), data acquisition system (<span class="html-italic">O</span>), linear displacement signal (<span class="html-italic">o</span><sub>1</sub>), load cell signal (<span class="html-italic">o</span><sub>2</sub>), PPT signal (<span class="html-italic">o</span><sub>3</sub>), probe (<span class="html-italic">Z</span>), resistor (<span class="html-italic">R</span>)), and (<b>b</b>) experimental setup.</p>
Figure 12
<p>Signals from (<b>a</b>) accelerometer and (<b>b</b>) gyroscope of iNEMO inertial module.</p>
Figure 13
<p>(<b>a</b>) Harvester and linear displacement sensor signals over time and (<b>b</b>) sinusoidal laws of outputs from PPT and displacement sensor.</p>
Figure 14
<p>Comparison between load cell output and FEA computed reaction force.</p>
Figure 15
<p>(<b>a</b>) Comparison between the vertical displacement measured by the accelerometer along the Z-axis and the wave magnitude; and (<b>b</b>) comparison between the angular oscillation measured by the gyroscope around the X-axis and the oscillation caused by the wave.</p>
Figure 16
<p>Output power available from a single PPT as a function of (<b>a</b>) frequency and (<b>b</b>) shaker travel; (<b>c</b>) energy recoverable from a single PPT.</p>
Figure 17
<p>(<b>a</b>) Power/linear displacement ratio frequency response functions (FRF) and (<b>b</b>) Energy/linear displacement ratio FRF.</p>
12 pages, 3019 KiB  
Communication
Rapid Fabrication of Renewable Carbon Fibres by Plasma Arc Discharge and Their Humidity Sensing Properties
by Yi Chen, Fang Fang, Robert Abbel, Meeta Patel and Kate Parker
Sensors 2021, 21(5), 1911; https://doi.org/10.3390/s21051911 - 9 Mar 2021
Cited by 3 | Viewed by 2704
Abstract
Submicron-sized carbon fibres have been attracting research interest due to their outstanding mechanical and electrical properties. However, the non-renewable resources and their complex fabrication processes limit the scalability and pose difficulties for the utilisation of these materials. Here, we investigate the use of [...] Read more.
Submicron-sized carbon fibres have been attracting research interest due to their outstanding mechanical and electrical properties. However, reliance on non-renewable resources and complex fabrication processes limits the scalability and poses difficulties for the utilisation of these materials. Here, we investigate the use of plasma arc technology to convert renewable electrospun lignin fibres into a new kind of carbon fibre with a globular and porous microstructure. The influence of arc currents (up to 60 A) on the structural and morphological properties of the as-prepared carbon fibres is discussed. Owing to the catalyst-free synthesis, high-purity micro-structured carbon fibres with nanocrystalline graphitic domains are produced. Furthermore, the humidity sensing characteristics of the treated fibres at room temperature (23 °C) are demonstrated. Sensors produced from these carbon fibres exhibit good humidity response and repeatability in the range of 30% to 80% relative humidity (RH) and an excellent sensitivity (0.81/%RH) in the high RH regime (60–80%). These results demonstrate that plasma arc technology has great potential for the development of sustainable, lignin-based carbon fibres for a broad range of applications in electronics, sensors and energy storage. Full article
(This article belongs to the Special Issue Smart Composite and Sensors)
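As a hedged aside (not from the paper), a sensitivity figure such as the reported 0.81/%RH is, in essence, the slope of the sensor response versus relative humidity over a chosen range. A minimal least-squares sketch in Python, with made-up sample data:

```python
def sensitivity(rh, response):
    """Least-squares slope of sensor response vs. relative humidity (%RH)."""
    n = len(rh)
    mean_x = sum(rh) / n
    mean_y = sum(response) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(rh, response))
    den = sum((x - mean_x) ** 2 for x in rh)
    return num / den
```

For example, `sensitivity([60, 70, 80], [0.0, 8.1, 16.2])` yields 0.81, by construction of the hypothetical data.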
Figure 1
<p>(<b>a</b>) Schematic diagram of the plasma arc discharge apparatus used to treat lignin fibres. (<b>b</b>) Schematic of the humidity sensor measurement setup.</p>
Figure 2
<p>Low and high magnification SEM images of the carbon fibres treated with different levels of arc current: (<b>a</b>,<b>b</b>) 0 A (starting material; untreated lignin fibres), (<b>c</b>,<b>d</b>) 10 A, (<b>e</b>,<b>f</b>) 20 A, (<b>g</b>,<b>h</b>) 35 A, and (<b>i</b>,<b>j</b>) 45 A. All scale bars are 1 μm. (<b>k</b>) Average thicknesses of the fibres prepared with different levels of arc current. (<b>l</b>) Sample collected after arc discharge at 45 A.</p>
Figure 3
<p>FTIR spectra of lignin and carbon fibres treated with different arc currents.</p>
Figure 4
<p>(<b>a</b>) Raman spectra of lignin fibres treated with different arc currents. (<b>b</b>) Dependence of I<sub>D</sub>/I<sub>G</sub> ratio and graphite crystallite size (<span class="html-italic">L<sub>a</sub></span>) on arc currents.</p>
Figure 5
<p>(<b>a</b>) Thermogravimetric analysis (TGA) results of lignin fibres and fibres treated with 45 A arc current, (<b>b</b>) XRD pattern of carbon fibres produced with 45 A arc current.</p>
Figure 6
<p>(<b>a</b>) Response of a humidity sensor based on carbon fibres (45 A) as a function of relative humidity; (<b>b</b>) dynamic response and recovery of this sensor during cycling between 30% and 80% relative humidity (RH).</p>
24 pages, 2256 KiB  
Article
Kubernetes Cluster for Automating Software Production Environment
by Aneta Poniszewska-Marańda and Ewa Czechowska
Sensors 2021, 21(5), 1910; https://doi.org/10.3390/s21051910 - 9 Mar 2021
Cited by 18 | Viewed by 7447
Abstract
Microservices, Continuous Integration and Delivery, Docker, DevOps, Infrastructure as Code—these are the current trends and buzzwords in the technological world of 2020. A popular tool which can facilitate the deployment and maintenance of microservices is Kubernetes. Kubernetes is a platform for running containerized [...] Read more.
Microservices, Continuous Integration and Delivery, Docker, DevOps, Infrastructure as Code—these are the current trends and buzzwords in the technological world of 2020. A popular tool which can facilitate the deployment and maintenance of microservices is Kubernetes, a platform for running containerized applications such as microservices. There were two main questions whose answers were important to us: how to deploy Kubernetes itself, and how to ensure that the deployment fulfils the needs of a production environment. Our research concentrates on the analysis and evaluation of a Kubernetes cluster as the software production environment. However, it is first necessary to determine and evaluate the requirements of a production environment. The paper presents the determination and analysis of such requirements and their evaluation in the case of a Kubernetes cluster. Next, the paper compares two methods of deploying a Kubernetes cluster: kops and eksctl. Both methods target the AWS cloud, which was chosen mainly because of its wide popularity and the range of provided services. Besides the two chosen methods of deployment, there are many more, including the DIY method and deploying on-premises. Full article
(This article belongs to the Section Sensor Networks)
Figure 1
<p>Kubernetes dashboard depicting CPU and Memory usage by Kubernetes pods [<a href="#B20-sensors-21-01910" class="html-bibr">20</a>].</p>
Figure 2
<p>Schema presenting the stages of working with AWS EKS.</p>
Figure 3
<p>Stages of working with Kubernetes cluster deployed on AWS with kops.</p>
Figure 4
<p>Comparison of how each production requirement was satisfied using kops and eksctl.</p>
Figure 5
<p>Cost report available on AWS Cost Explorer, grouped by the AWS tag: deployment.</p>
21 pages, 43436 KiB  
Article
Automatic Ankle Angle Detection by Integrated RGB and Depth Camera System
by Guillermo Díaz-San Martín, Luis Reyes-González, Sergio Sainz-Ruiz, Luis Rodríguez-Cobo and José M. López-Higuera
Sensors 2021, 21(5), 1909; https://doi.org/10.3390/s21051909 - 9 Mar 2021
Cited by 4 | Viewed by 3958
Abstract
Depth cameras are developing widely. One of their main virtues is that, based on their data and by applying machine learning algorithms and techniques, it is possible to perform body tracking and make an accurate three-dimensional representation of body movement. Specifically, this paper [...] Read more.
Depth cameras are developing widely. One of their main virtues is that, based on their data and by applying machine learning algorithms and techniques, it is possible to perform body tracking and make an accurate three-dimensional representation of body movement. Specifically, this paper uses the Kinect v2 device, which incorporates a random forest algorithm for the detection of 25 joints in the human body. However, although Kinect v2 is a powerful tool, there are circumstances in which the device’s design does not allow the extraction of such data or the accuracy of the data is low, as is usually the case with foot position. We propose a method of acquiring these data in circumstances where the Kinect v2 device does not recognize the body when only the lower limbs are visible, improving the precision of the ankle angle by employing projection lines. Using a region-based convolutional neural network (Mask RCNN) for body recognition, raw data extraction for automatic ankle angle measurement has been achieved. All angles have been evaluated against inertial measurement units (IMUs) as the gold standard. For the six tests carried out at different fixed distances between 0.5 and 4 m to the Kinect, we have obtained (mean ± SD) a Pearson’s coefficient, r = 0.89 ± 0.04, a Spearman’s coefficient, ρ = 0.83 ± 0.09, a root mean square error, RMSE = 10.7 ± 2.6 deg and a mean absolute error, MAE = 7.5 ± 1.8 deg. For the walking test, or variable distance test, we have obtained a Pearson’s coefficient, r = 0.74, a Spearman’s coefficient, ρ = 0.72, an RMSE = 6.4 deg and an MAE = 4.7 deg. Full article
(This article belongs to the Section Sensing and Imaging)
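As an illustrative aside (not part of the article), the agreement statistics quoted in the abstract (Pearson’s r, RMSE, MAE) can be computed from two paired angle series as follows; the helper names are ours:

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rmse(a, b):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def mae(a, b):
    """Mean absolute error between two equal-length series."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

Feeding the camera-derived and IMU-derived angle series to these functions reproduces the kind of per-test figures the abstract reports.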
Figure 1
<p>Example of frames from a recording of the Kinect v2 measuring simultaneously with the inertial measurement units (IMUs), one on the foot and one on the leg below the knee, for the calculation of the ankle angle. Distance: 0.5 m. Only the region of interest is shown for each frame.</p>
Figure 2
<p>Distances covered during the walking test, always facing the camera.</p>
Figure 3
<p>Scheme of the methods used to calculate the ankle angle using the Kinect v2 device with a region-based convolutional neural network (Mask RCNN) and OpenPose on the one hand, and two Inertial Measurement Units on the other.</p>
Figure 4
<p>Different recording angles depending on the range of distances to be measured. (<b>a</b>) 0.5 and 1 m distance, as well as the walking test. (<b>b</b>) 1.5, 2, 3 and 4 m distances.</p>
Figure 5
<p>Steps to follow from reading the raw data to measuring the angle.</p>
Figure 6
<p>Real and ideal sampling rates of an IMU experiment at 100 Hz with a computer that is simultaneously recording video with the Kinect v2. The real signal must be corrected due to the lag that results from the limited processing power of the computer.</p>
Figure 7
<p>Real synchronization example for the IMUs and the Kinect before (<b>up</b>) and after (<b>bottom</b>) their adjustment.</p>
Figure 8
<p>Frame of a recording to appreciate the correspondence between the human skeleton displayed by OpenPose (<b>left</b>) and the mask calculated by the Mask RCNN network (<b>right</b>).</p>
Figure 9
<p>Diagram of the properties of the RGB and the depth cameras of the Kinect v2. (<b>a</b>) Properties of the horizontal field of view and focal plane distances for RGB and IR cameras. (<b>b</b>) Properties of the vertical field of view and focal plane distances for RGB and IR cameras.</p>
Figure 10
<p>(<b>Up</b>) Correspondence between the mask in the RGB image and its calculated equivalent in the depth image. (<b>Down</b>) Correspondence between the OpenPose skeleton in the RGB image and its calculated equivalent in the depth image.</p>
Figure 11
<p>Relationship between the height of an object in the real world and the pixel of that object in a Kinect v2 depth image.</p>
Figure 12
<p>Example of two images processed to obtain the projection line on the ankle for (<b>a</b>) distance: 1 m and (<b>b</b>) distance: 4 m. For each one, it shows the mask transferred from the RGB image (<b>left</b>), the mask processed to improve the fit of the body by distance discrimination (<b>center</b>), and finally the resulting projection line (<b>right</b>).</p>
Figure 13
<p>Graphic example showing the steps from the data obtained directly by the projection line (blue), through the smoothed curve (orange), to the rotation line (green), which rotates beta degrees about the origin at the foot point, F, to keep the ankle, A’, as the furthest point and facilitate its recognition. Finally, the regression lines (red) form the alpha angle, which is the ankle angle that is finally measured.</p>
Figure 14
<p>Graphic example to measure the angle formed by two IMUs, <span class="html-italic">θ</span>, through their Euler <span class="html-italic">α</span> and <span class="html-italic">β</span> ‘pitch’ angles.</p>
Figure 15
<p>Graph showing the comparison between the ankle angle measured by the Kinect v2 with our projection lines method (red), the ankle angle measured by OpenPose (violet) and the angle measured by the IMUs using Euler’s angles (blue), for different distances in relation to the Kinect, which are: 0.5, 1, 1.5, 2, 3 and 4 m. The red and violet shaded areas are the Kinect angle error following Equation (20). In addition, when it exists, the ankle angle measured with the default skeleton generated by the Kinect (green) is also represented.</p>
Figure 16
<p>Graph showing the linearity between the ankle angle measured by the Kinect v2 with our method of projection lines and the angle measured by the IMUs using Euler’s angles, for different distances in relation to the Kinect, which are: (<b>a</b>) 0.5, (<b>b</b>) 1, (<b>c</b>) 1.5, (<b>d</b>) 2, (<b>e</b>) 3 and (<b>f</b>) 4 m.</p>
Figure 17
<p>Graph showing the comparison between the ankle angle measured by the Kinect v2 with our projection lines method (red), the ankle angle measured by OpenPose (violet) and the angle measured by the IMUs using Euler’s angles (blue), for the walking test. The red and violet shaded areas are the Kinect angle error following Equation (20). The average body distance is shown below to complete the information.</p>
Figure 18
<p>Graph showing the linearity between the ankle angle measured by the Kinect v2 with our method of projection lines and the angle measured by the IMUs using Euler’s angles, for a walking test between 5 and 1 m.</p>
32 pages, 1827 KiB  
Article
Lung Nodule Segmentation with a Region-Based Fast Marching Method
by Marko Savic, Yanhe Ma, Giovanni Ramponi, Weiwei Du and Yahui Peng
Sensors 2021, 21(5), 1908; https://doi.org/10.3390/s21051908 - 9 Mar 2021
Cited by 33 | Viewed by 4899
Abstract
When dealing with computed tomography volume data, the accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, being a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the similarity of [...] Read more.
When dealing with computed tomography volume data, the accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, being a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the similarity of visual characteristics between nodules and their surroundings, robust segmentation of nodules becomes a challenging problem. A segmentation algorithm based on the fast marching method is proposed that separates the image into regions with similar features, which are then merged by combining region growing with k-means. An evaluation was performed with two distinct methods (objective and subjective) that were applied on two different datasets, containing simulation data generated for this study and real patient data, respectively. The objective experimental results show that the proposed technique can accurately segment nodules, especially in solid cases, given the mean Dice scores of 0.933 and 0.901 for round and irregular nodules. For non-solid and cavitary nodules the performance dropped, with mean Dice scores of 0.799 and 0.614, respectively. The proposed method was compared to active contour models and to two modern deep learning networks. It reached better overall accuracy than active contour models, having comparable results to DBResNet but lower accuracy than 3D-UNet. The results show promise for the proposed method in computer-aided diagnosis applications. Full article
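As an aside (not part of the article), the Dice score used in its evaluation measures the overlap between a segmentation and the ground truth; a minimal sketch over flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 values: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / size if size else 1.0
```

A score of 1.0 means perfect overlap; the abstract’s 0.933 for solid-round nodules indicates that, on average, the segmented and reference masks overlap almost completely.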
Figure 1
<p>Examples of region assigning.</p>
Figure 2
<p>Flowchart of the proposed method.</p>
Figure 3
<p>A seed grid generation example. (<b>a</b>) Input image. (<b>b</b>) Initial equidistant seed grid. (<b>c</b>) Seed grid after only deletion of points with high local gradient mean. (<b>d</b>) Seed grid after shifting and deletion.</p>
Figure 4
<p>Evolution of the times matrix (<span class="html-italic">T</span>).</p>
Figure 5
<p>Evolution of the regions matrix (<span class="html-italic">R</span>).</p>
Figure 6
<p>An example of grouping regions into clusters with k-means. (<b>a</b>) Regions. (<b>b</b>) Clusters. (<b>c</b>) Seed grouping shown over input image.</p>
Figure 7
<p>Merging of clusters, with a step counter.</p>
Figure 8
<p>Lung phantom placed in CT scanner and a single axial slice.</p>
Figure 9
<p>Examples of Lung Image Database Consortium (LIDC) nodules from every category and subcategory.</p>
Figure 10
<p>Preprocessing flowchart.</p>
Figure 11
<p>An example of active contours segmentation.</p>
Figure 12
<p>Excerpts from the questionnaire part one.</p>
Figure 13
<p>Solid-round nodules’ objective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 14
<p>Solid-irregular nodules’ objective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 15
<p>Sub-solid nodules’ objective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 16
<p>Cavitary nodules’ objective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 17
<p>Examples with high Dice scores.</p>
Figure 18
<p>Examples with low Dice scores.</p>
Figure 19
<p>Solid-round nodules’ subjective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 20
<p>Solid-irregular nodules’ subjective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 21
<p>Sub-solid nodules’ subjective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 22
<p>Cavitary nodules’ subjective evaluation results as boxplots, overlayed with the values of individual cases divided into subcategories.</p>
Figure 23
<p>Examples with high mean opinion scores (MOS).</p>
Figure 24
<p>Examples with low mean opinion scores (MOS).</p>
20 pages, 2415 KiB  
Review
Review on Carbon Nanomaterials-Based Nano-Mass and Nano-Force Sensors by Theoretical Analysis of Vibration Behavior
by Jin-Xing Shi, Xiao-Wen Lei and Toshiaki Natsuki
Sensors 2021, 21(5), 1907; https://doi.org/10.3390/s21051907 - 9 Mar 2021
Cited by 16 | Viewed by 4475
Abstract
Carbon nanomaterials, such as carbon nanotubes (CNTs), graphene sheets (GSs), and carbyne, are an important new class of technological materials, and have been proposed as nano-mechanical sensors because of their extremely superior mechanical, thermal, and electrical performance. The present work reviews the recent [...] Read more.
Carbon nanomaterials, such as carbon nanotubes (CNTs), graphene sheets (GSs), and carbyne, are an important new class of technological materials, and have been proposed as nano-mechanical sensors because of their extremely superior mechanical, thermal, and electrical performance. The present work reviews recent studies of carbon nanomaterials-based nano-force and nano-mass sensors using mechanical analysis of vibration behavior. The mechanism of the two kinds of frequency-based nano sensors is first introduced with mathematical models and expressions. Afterward, the modeling of carbon nanomaterials using continuum mechanical approaches, as well as the determination of their material properties matching their continuum models, is discussed. Moreover, we summarize the representative works on CNTs/GSs/carbyne-based nano-mass and nano-force sensors and overview the technology for future challenges. It is hoped that the present review can provide an insight into the application of carbon nanomaterials-based nano-mechanical sensors. Carbon nanomaterials-based nano-mass and nano-force sensors show remarkable results, performing with much higher sensitivity than resonators made of traditional materials, such as silicon and ZnO. Thus, more intensive investigations of carbon nanomaterials-based nano sensors are expected. Full article
(This article belongs to the Special Issue Micro and Nanodevices for Sensing Technology)
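As an illustrative aside (not from the review), the frequency-based mass sensing principle reduces, for a lumped spring-mass model, to reading the attached mass off the resonant-frequency downshift. A sketch with hypothetical stiffness and frequency values:

```python
import math

def attached_mass(k, f0, f1):
    """Estimate the attached mass from the resonant-frequency downshift of a
    resonator modeled as a lumped spring-mass system:
        f = (1 / 2*pi) * sqrt(k / m)  =>  m = k / (2*pi*f)**2
    k is the effective stiffness, f0 the frequency without the attached mass,
    and f1 the (lower) frequency with it."""
    m0 = k / (2.0 * math.pi * f0) ** 2
    m1 = k / (2.0 * math.pi * f1) ** 2
    return m1 - m0
```

Because the frequency scales as 1/√m, a tiny added mass produces a measurable frequency shift, which is the working principle behind the attogram-scale sensitivity reported for CNT and GS resonators.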
Figure 1
<p>Kinds of carbon nanomaterials.</p>
Figure 2
<p>Continuum mechanical approach for modeling C–C chemical bond as an equivalent continuum beam from a linkage between molecular mechanics and structural mechanics.</p>
Figure 3
<p>(<b>a</b>) Young’s moduli and (<b>b</b>) shear moduli of carbon nanotubes versus tube diameter. Adapted with permission from [<a href="#B100-sensors-21-01907" class="html-bibr">100</a>], copyright Elsevier, 2003.</p>
Figure 4
<p>(<b>a</b>) Cantilevered and (<b>b</b>) simply supported carbon nanotube resonators with an attached mass. Adapted with permission from [<a href="#B126-sensors-21-01907" class="html-bibr">126</a>], copyright AIP Publishing, 2004.</p>
Figure 5
<p>Fundamental frequency of (<b>a</b>) cantilevered and (<b>b</b>) bridged carbon nanotube resonators with different length L vs. attached mass. Adapted with permission from [<a href="#B126-sensors-21-01907" class="html-bibr">126</a>], copyright AIP Publishing, 2004.</p>
Figure 6
<p>Frequency shift of cantilevered carbon nanotube resonators with (<b>a</b>) different lengths L or (<b>b</b>) different diameters d vs. attached mass. Adapted with permission from [<a href="#B126-sensors-21-01907" class="html-bibr">126</a>], copyright AIP Publishing, 2004.</p>
Figure 7
<p>(<b>a</b>) A proposed carbon nanotubes-based nano-force sensor, and (<b>b</b>) the analytical model of the partial embedded carbon nanotubes resonator [<a href="#B92-sensors-21-01907" class="html-bibr">92</a>].</p>
Figure 8
<p>The first three vibrational modes of the partial embedded carbon nanotubes resonator under an external compressive force [<a href="#B92-sensors-21-01907" class="html-bibr">92</a>].</p>
Figure 9
<p>Relationship between the external compressive force and the frequency of the partial embedded carbon nanotubes resonator [<a href="#B92-sensors-21-01907" class="html-bibr">92</a>].</p>
17 pages, 6051 KiB  
Article
Detection of Myocardial Infarction Using ECG and Multi-Scale Feature Concatenate
by Jia-Zheng Jian, Tzong-Rong Ger, Han-Hua Lai, Chi-Ming Ku, Chiung-An Chen, Patricia Angela R. Abu and Shih-Lun Chen
Sensors 2021, 21(5), 1906; https://doi.org/10.3390/s21051906 - 9 Mar 2021
Cited by 17 | Viewed by 4923
Abstract
Diverse computer-aided diagnosis systems based on convolutional neural networks were applied to automate the detection of myocardial infarction (MI) found in electrocardiogram (ECG) for early diagnosis and prevention. However, issues, particularly overfitting and underfitting, were not being taken into account. In other words, [...] Read more.
Diverse computer-aided diagnosis systems based on convolutional neural networks were applied to automate the detection of myocardial infarction (MI) found in electrocardiogram (ECG) for early diagnosis and prevention. However, issues, particularly overfitting and underfitting, were not being taken into account. In other words, it is unclear whether the network structure is too simple or too complex. Toward this end, the proposed models were developed by starting with the simplest structure: a multi-lead features-concatenate narrow network (N-Net) in which only two convolutional layers were included in each lead branch. Additionally, multi-scale features-concatenate networks (MSN-Net) were also implemented, where larger features were extracted through pooling the signals. The best structure was obtained by tuning both the number of filters in the convolutional layers and the number of input signal scales. As a result, the N-Net reached 95.76% accuracy in the MI detection task, whereas the MSN-Net reached an accuracy of 61.82% in the MI locating task. Both networks give a higher average accuracy and a significant difference of p < 0.001 evaluated by the U test compared with the state-of-the-art. The models are also smaller in size and are thus suitable for wearable devices for offline monitoring. In conclusion, testing throughout the simple and complex network structures is indispensable. However, the way of dealing with the class imbalance problem and the quality of the extracted features are yet to be discussed. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
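As an aside (not the authors' code), the "multi-scale features" idea, feeding the same lead at several resolutions obtained by pooling, can be mimicked in a few lines; `avg_pool` and `multi_scale` are our hypothetical helpers:

```python
def avg_pool(signal, factor):
    """Non-overlapping average pooling, used here to build coarser scales."""
    n = len(signal) // factor
    return [sum(signal[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

def multi_scale(signal, n_scales):
    """Return the signal at n_scales resolutions: scale 1 is the original,
    each further scale is pooled by 2, mimicking the multi-scale inputs
    that an MSN-Net-style model concatenates after per-scale convolutions."""
    scales = [list(signal)]
    for _ in range(n_scales - 1):
        scales.append(avg_pool(scales[-1], 2))
    return scales
```

Each coarser scale halves the resolution, so later convolutions see progressively larger waveform features, which is the intuition behind using multiple scales for the MI locating task.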
Figure 1
<p>The schematic diagram of a healthcare application with integration of Internet of Things (IoT) technology. 12-lead electrocardiogram (ECG) information can be combined with other physiological signal values from different wearable devices, such as blood glucose, body temperature, and blood pressure, for algorithm models. Physicians can investigate or diagnose and undertake further action, such as calling an ambulance or ward, through offline monitoring and/or cloud computing.</p>
Figure 2
<p>Block diagram of the research, including Physikalisch-Technische Bundesanstalt (PTB) 12-lead ECG database gathering, signal preprocessing, data rearrangement, study model network training, and performance comparison with other studies. The final objective was to build models capable of detecting and/or locating the occurrence of MI by analyzing the 12-lead ECG.</p>
Figure 3
<p>Block diagram of the proposed method, including signal preprocessing, dataset rearrangement, and model development.</p>
Figure 4
<p>Samples of healthy control (HC) and inferior myocardial infarction (IMI) 12-lead ECG. IMI can be detected by the abnormal waveform features of the ECG, as indicated by the arrows, which include the ST displacement, T wave inversion, silent Q wave, and so on.</p>
Figure 5
<p>Multi-lead features-concatenate network. Twelve (12) lead-branch convolutional neural networks (CNNs) were used to extract features from each lead independently; the extracted features were then concatenated and classified.</p>
Figure 6
<p>Multi-scale features-concatenate network in every lead branch. K represents kernel size, S represents strides. The N-Net only contains the structure illustrated in scale 1; an MSN-Net that uses two scales contains the structures of both scale 1 and scale 2; an MSN-Net that uses three scales contains the structures of scales 1, 2 and 3, and so on.</p>
Figure 7
<p>Accuracy trends of the proposed myocardial infarction (MI) detection models plotted in shaded error bar. Blue- and gray-shaded areas indicate the range of one standard deviation, and the best average accuracy under the same scale number is marked by an arrow: Accuracy of models using (<b>a</b>) single-scale features; (<b>b</b>) two-scale features; (<b>c</b>) three-scale features, (<b>d</b>) four-scale feature, and (<b>e</b>) five-scale features.</p>
Figure 8
<p>Accuracy trends of our proposed MI locating models plotted in shaded error bar. Blue- and gray-shaded areas indicate the range of one standard deviation, and the best average accuracy under the same scale number is marked by an arrow: Accuracy of models using (<b>a</b>) single-scale features, (<b>b</b>) two-scale features, (<b>c</b>) three-scale features, (<b>d</b>) four-scale features, and (<b>e</b>) five-scale features.</p>
Full article ">Figure 9
<p>Box plot and significance of the accuracy of the proposed network, where 1S indicates models using single-scale features, 9F indicates models using nine filters, and so on. (<b>a</b>) Evidence that accuracy decreases because of overfitting caused by excessive model capacity. The average accuracy decreased from 95.76% to 94.95% (<span class="html-italic">p</span> &lt; 0.05); (<b>b</b>) Evidence that using multi-scale features increases MI locating accuracy. The average accuracy increased from 60.49% to 61.82% (<span class="html-italic">p</span> &lt; 0.05).</p>
Full article ">Figure 10
<p>The performances of the proposed networks were recorded during each epoch, where 1S indicates networks using single-scale features, 9F indicates networks using nine filters, and so on. (<b>a</b>) Training and validation loss of the MI detection model during each epoch; (<b>b</b>) Training and validation accuracy of the MI detection model during each epoch; (<b>c</b>) Training and validation loss of the MI locating model during each epoch; (<b>d</b>) Training and validation accuracy of the MI locating model during each epoch.</p>
Full article ">Figure 11
<p>Comparison between the proposed model and the literature, where 1S indicates models using single-scale features, 9F indicates models using nine filters, and so on. A single star denotes <span class="html-italic">p</span> &lt; 0.05, a double star denotes <span class="html-italic">p</span> &lt; 0.01, and a triple star denotes <span class="html-italic">p</span> &lt; 0.001. (<b>a</b>) MI detection accuracy comparison; (<b>b</b>) MI locating accuracy comparison; (<b>c</b>) MI detection sensitivity comparison; (<b>d</b>) MI detection specificity comparison; (<b>e</b>) MI detection F1-score comparison; and (<b>f</b>) MI detection area under the receiver operating characteristic curve comparison.</p>
Full article ">
17 pages, 3379 KiB  
Article
Advanced Network Sampling with Heterogeneous Multiple Chains
by Jaekoo Lee, MyungKeun Yoon and Song Noh
Sensors 2021, 21(5), 1905; https://doi.org/10.3390/s21051905 - 9 Mar 2021
Viewed by 2591
Abstract
Recently, researchers have paid attention to many types of huge networks such as the Internet of Things, sensor networks, social networks, and traffic networks because of their untapped potential for theoretical and practical outcomes. A major obstacle in studying large-scale networks is that [...] Read more.
Recently, researchers have paid attention to many types of huge networks, such as the Internet of Things, sensor networks, social networks, and traffic networks, because of their untapped potential for theoretical and practical outcomes. A major obstacle in studying large-scale networks is that their size tends to increase exponentially. In addition, access to large network databases is limited for security or physical connection reasons. In this paper, we propose a novel sampling method that works effectively for large-scale networks. The proposed approach builds multiple heterogeneous Markov chains by adjusting random-walk traits on the given network to explore the target space efficiently. Compared with previous random-walk-based sampling approaches, it provides unbiased sampling results with reduced asymptotic variance within a reasonable execution time. We perform various experiments on large network databases, ranging from synthetic graphs to real-world applications. The results demonstrate that the proposed method outperforms existing network sampling methods. Full article
(This article belongs to the Section Sensor Networks)
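The pooling of heterogeneous random-walk chains described in the abstract can be sketched in a few lines. This is an illustrative stand-in, not the authors' algorithm: the `momentum` knob below is a hypothetical proxy for the paper's adjustable random-walk traits, and the harmonic-mean correction is the standard way to de-bias degree estimates from a degree-proportional random walk.

```python
import random

def random_walk_sample(adj, start, steps, momentum=0.0, rng=None):
    """Sample `steps` nodes by a random walk on adjacency dict `adj`.
    With probability `momentum`, revisit the previous node -- a toy
    stand-in for the paper's adjustable random-walk traits."""
    rng = rng or random.Random(0)
    samples, prev, node = [], None, start
    for _ in range(steps):
        nbrs = adj[node]
        if prev in nbrs and rng.random() < momentum:
            nxt = prev
        else:
            nxt = rng.choice(nbrs)
        samples.append(nxt)
        prev, node = node, nxt
    return samples

def mean_degree_estimate(adj, samples):
    # A plain random walk visits nodes in proportion to their degree;
    # the harmonic mean of the sampled degrees corrects that bias.
    inv = [1.0 / len(adj[v]) for v in samples]
    return len(inv) / sum(inv)

# Toy graph (true mean degree = 2.0): a 4-star plus a triangle on node 3.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0, 5, 6],
       4: [0], 5: [3, 6], 6: [5, 3]}
rng = random.Random(42)
pooled = []
for mom in (0.0, 0.2, 0.5):  # three heterogeneous chains
    pooled += random_walk_sample(adj, start=0, steps=2000, momentum=mom, rng=rng)
est = mean_degree_estimate(adj, pooled)
```

On the momentum-free chain the estimator is asymptotically unbiased; chains with different traits trade a small bias for faster exploration, which is the balance the paper tunes.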
Show Figures

Figure 1

Figure 1
<p>Diagrammatic explanation of what inspired the proposed network sampling method with heterogeneous multiple chains (best viewed in color).</p>
Full article ">Figure 2
<p>Overview of proposed method (best viewed in color).</p>
Full article ">Figure 3
<p>Degree distributions of real-world datasets.</p>
Full article ">Figure 4
<p>Results of sampling with various momentum parameters on huge synthetic scale–free networks with <math display="inline"><semantics> <mi>γ</mi> </semantics></math>. (<b>a</b>) Synthetic networks with <math display="inline"><semantics> <mrow> <mo>|</mo> <mi>n</mi> <mo>|</mo> <mo>=</mo> <mn>50</mn> <mo>,</mo> <mn>000</mn> <mo>,</mo> <mn>000</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mo>|</mo> <mi>n</mi> <mo>|</mo> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mn>000</mn> <mo>,</mo> <mn>000</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Sampling results obtained with various momentum parameters on real-world network databases.</p>
Full article ">Figure 6
<p>Results of sampling performance for various burn-in periods.</p>
Full article ">Figure 7
<p>Comparison of proposed and well-known network sampling methods relative to node degree.</p>
Full article ">Figure 8
<p>Comparison of sampling methods on clustering coefficients of networks.</p>
Full article ">Figure 9
<p>Comparison of total run time of sampling methods (colored).</p>
Full article ">
16 pages, 2485 KiB  
Article
Intermuscular Coordination in the Power Clean Exercise: Comparison between Olympic Weightlifters and Untrained Individuals—A Preliminary Study
by Paulo D. G. Santos, João R. Vaz, Paulo F. Correia, Maria J. Valamatos, António P. Veloso and Pedro Pezarat-Correia
Sensors 2021, 21(5), 1904; https://doi.org/10.3390/s21051904 - 9 Mar 2021
Cited by 6 | Viewed by 4190
Abstract
Muscle coordination in human movement has been assessed through muscle synergy analysis. In sports science, this procedure has been mainly applied to the comparison between highly trained and unexperienced participants. However, the lack of knowledge regarding strength training exercises led us to study [...] Read more.
Muscle coordination in human movement has been assessed through muscle synergy analysis. In sports science, this procedure has been mainly applied to the comparison between highly trained and unexperienced participants. However, the lack of knowledge regarding strength training exercises led us to study the differences in neural strategies to perform the power clean between weightlifters and untrained individuals. Synergies were extracted from electromyograms of 16 muscles of ten unexperienced participants and seven weightlifters. To evaluate differences, we determined the pairwise correlations for the synergy components and electromyographic profiles. While the shape of activation patterns presented strong correlations across participants of each group, the weightings of each muscle were more variable. The three extracted synergies were shifted in time with the unexperienced group anticipating synergy #1 (−2.46 ± 18.7%; p < 0.001) and #2 (−4.60 ± 5.71%; p < 0.001) and delaying synergy #3 (1.86 ± 17.39%; p = 0.01). Moreover, muscle vectors presented more inter-group variability, changing the composition of synergy #1 and #3. These results may indicate an adaptation in intermuscular coordination with training, and athletes in an initial phase of training should attempt to delay the hip extension (synergy #1), as well as the upper-limb flexion (synergy #2). Full article
(This article belongs to the Special Issue Sensors and Technologies in Skeletal Muscle Disorder)
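The pairwise correlations of synergy components mentioned in the abstract reduce to Pearson's r between activation profiles. A minimal sketch; the two activation profiles below are made-up example values, not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length activation profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical activation profiles over the normalized power clean cycle:
une_profile = [0.1, 0.4, 0.9, 0.6, 0.2]  # unexperienced group (illustrative)
exp_profile = [0.1, 0.5, 0.8, 0.5, 0.1]  # weightlifter group (illustrative)
r = pearson(une_profile, exp_profile)
```

In practice the profiles would span the full cycle (e.g., 200 time points, as in Figure 5), and time shifts between groups can be found by repeating the correlation at different lags.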
Show Figures

Figure 1

Figure 1
<p>Top panel: mean values of variance accounted for (VAF) relative to the original extraction iteration of muscle synergies for unexperienced participants (UNE) and weightlifters (EXP). Bottom panel: mean values of variance accounted for by each muscle (VAF<sub>muscle</sub>) under a three-synergy model for days one and two.</p>
Full article ">Figure 2
<p>Inter-individual variability of Synergy #3 activation coefficients (UA—arbitrary units). The bottom panel corresponds to the weightlifters’ group, while the top panel corresponds to the unexperienced participants’ group. The thick black line represents the group mean, while the thin lines represent individual synergy activation coefficients.</p>
Full article ">Figure 3
<p>Averaged synergy activation coefficients of the weightlifter (EXP) and unexperienced participant (UNE) groups. The left panel refers to Synergy #1, the central panel to Synergy #2, and the right panel to Synergy #3. The upper hemisphere of the graphs corresponds to the ascending phase (0–100% of the power clean cycle), while the lower hemisphere corresponds to the descending phase (100–0% of the power clean cycle).</p>
Full article ">Figure 4
<p>Averaged muscle synergy vectors of the weightlifter (EXP) and unexperienced participant (UNE) groups (UA–arbitrary units). The top panel corresponds to Synergy #1, the central panel to Synergy #2, and the bottom panel to Synergy #3.</p>
Full article ">Figure 5
<p>Averaged EMG envelopes (UA—arbitrary units) from 16 muscles obtained in weightlifters (EXP) and unexperienced participants (UNE) during the power clean cycle (200 time points).</p>
Full article ">
21 pages, 1252 KiB  
Article
A New Approach to Enhanced Swarm Intelligence Applied to Video Target Tracking
by Edwards Cerqueira de Castro, Evandro Ottoni Teatini Salles and Patrick Marques Ciarelli
Sensors 2021, 21(5), 1903; https://doi.org/10.3390/s21051903 - 9 Mar 2021
Cited by 11 | Viewed by 2009
Abstract
This work proposes a new approach to improve swarm intelligence algorithms for dynamic optimization problems by promoting a balance between the transfer of knowledge and the diversity of particles. The proposed method was designed to be applied to the problem of video tracking [...] Read more.
This work proposes a new approach to improve swarm intelligence algorithms for dynamic optimization problems by promoting a balance between the transfer of knowledge and the diversity of particles. The proposed method was designed to be applied to the problem of video tracking targets in environments with almost constant lighting. This approach also delimits the solution space for a more efficient search. A version of the double exponential smoothing (DES) model that is robust to outliers is used to predict the target position in the frame, delimiting the solution space to a more promising region for target tracking. To assess the quality of the proposed approach, an appropriate tracker for a discrete solution space was implemented using the meta-heuristic Shuffled Frog Leaping Algorithm (SFLA) adapted to dynamic optimization problems, named the Dynamic Shuffled Frog Leaping Algorithm (DSFLA). The DSFLA was compared with other classic and current trackers whose algorithms are based on swarm intelligence. The trackers were compared in terms of the average processing time per frame and the area under the curve of the success rate per Pascal metric. For the experiment, we used a random sample of videos obtained from the public Hanyang visual tracker benchmark. The experimental results suggest that the DSFLA has an efficient processing time and higher tracking quality compared with the other competing trackers analyzed in this work. The success rate of the DSFLA tracker is, on average, about 7.2 to 76.6% higher than that of its competitors. Its average processing time per frame is at least about 10% faster than that of the competing trackers, except for one tracker that was about 26% faster than the DSFLA. The results also show that the predictions of the robust DES model are quite accurate. Full article
(This article belongs to the Section Remote Sensors)
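The DES prediction step used above to delimit the search region can be sketched as follows. This is the classic (non-robust) DES recursion with hypothetical smoothing constants, not the authors' outlier-robust variant:

```python
def des_predict(xs, alpha=0.5, beta=0.5):
    """One-step-ahead forecast via classic double exponential smoothing.
    `xs` is a history of, e.g., target x-coordinates, one per frame."""
    level, trend = xs[0], xs[1] - xs[0]
    for x in xs[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)  # smoothed position
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smoothed velocity
    return level + trend  # predicted position in the next frame

# A target moving right by 2 px/frame: DES tracks linear motion exactly,
# so the forecast for the next frame is 11.
history = [1.0, 3.0, 5.0, 7.0, 9.0]
pred = des_predict(history)
```

The tracker would center a reduced search window on this predicted position; the robust variant additionally down-weights observations that deviate strongly from the forecast.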
Show Figures

Figure 1

Figure 1
<p>An example of delimiting the solution space obtained from the tenth frame of the video BlurBody (this video was selected from the public Hanyang visual tracker benchmark [<a href="#B44-sensors-21-01903" class="html-bibr">44</a>]).</p>
Full article ">Figure 2
<p>The boxplot of the variable processing time per frame for all trackers.</p>
Full article ">Figure 3
<p>The boxplot of the AUC variable for all trackers.</p>
Full article ">Figure 4
<p>The success rate curve per Pascal metric of video 4 (BlurFace) for all trackers.</p>
Full article ">Figure 5
<p>The success rate curve per Pascal metric of video 7 (Couple) for all trackers.</p>
Full article ">
25 pages, 4008 KiB  
Article
Smartwatch-Based Eating Detection: Data Selection for Machine Learning from Imbalanced Data with Imperfect Labels
by Simon Stankoski, Marko Jordan, Hristijan Gjoreski and Mitja Luštrek
Sensors 2021, 21(5), 1902; https://doi.org/10.3390/s21051902 - 9 Mar 2021
Cited by 12 | Viewed by 5899
Abstract
Understanding people’s eating habits plays a crucial role in interventions promoting a healthy lifestyle. This requires objective measurement of the time at which a meal takes place, the duration of the meal, and what the individual eats. Smartwatches and similar wrist-worn devices are [...] Read more.
Understanding people’s eating habits plays a crucial role in interventions promoting a healthy lifestyle. This requires objective measurement of the time at which a meal takes place, the duration of the meal, and what the individual eats. Smartwatches and similar wrist-worn devices are an emerging technology that offers the possibility of practical and real-time eating monitoring in an unobtrusive, accessible, and affordable way. To this end, we present a novel approach for the detection of eating segments with a wrist-worn device and a fusion of deep and classical machine learning. It integrates a novel data selection method to create the training dataset, and a method that incorporates knowledge from raw and virtual sensor modalities for training with highly imbalanced datasets. The proposed method was evaluated using data from 12 subjects recorded in the wild, without any restriction on the type of meals that could be consumed, the cutlery used for the meal, or the location where the meal took place. The recordings consist of data from accelerometer and gyroscope sensors. The experiments show that our method for the detection of eating segments achieves a precision of 0.85, a recall of 0.81, and an F1-score of 0.82 in a person-independent manner. The results obtained in this study indicate that reliable eating detection from data recorded in the wild is possible with the use of wearable sensors on the wrist. Full article
(This article belongs to the Special Issue New Frontiers in Sensor-Based Activity Recognition)
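One common way to handle the heavy class imbalance mentioned above is to undersample the majority (non-eating) class when assembling the training set. A minimal sketch; the function name, ratio, and window counts are illustrative, not the paper's actual data selection method:

```python
import random

def balance_undersample(windows, labels, ratio=1.0, seed=0):
    """Keep every minority (eating, label 1) window and randomly subsample
    the majority (non-eating, label 0) class to `ratio` times its size."""
    rng = random.Random(seed)
    pos = [w for w, y in zip(windows, labels) if y == 1]
    neg = [w for w, y in zip(windows, labels) if y == 0]
    keep = rng.sample(neg, min(len(neg), int(ratio * len(pos))))
    selected = [(w, 1) for w in pos] + [(w, 0) for w in keep]
    rng.shuffle(selected)
    return selected

# 20 eating vs. 200 non-eating windows -- heavily imbalanced, as in the wild.
windows = list(range(220))
labels = [1] * 20 + [0] * 200
balanced = balance_undersample(windows, labels)
```

Random undersampling is only the simplest choice; a smarter selection (as in the paper) can prefer the non-eating windows that are most easily confused with eating.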
Show Figures

Figure 1

Figure 1
<p>Distribution of the cutlery used for the recorded meals.</p>
Full article ">Figure 2
<p>An overall pipeline of the proposed eating detection framework.</p>
Full article ">Figure 3
<p>Raw data to features pipeline.</p>
Full article ">Figure 4
<p>An example of an original and filtered (low-pass and band-pass) 15-s accelerometer x-axis signal.</p>
Full article ">Figure 5
<p>Architecture of Inception Block Type A.</p>
Full article ">Figure 6
<p>Architectures of the proposed models for bite detection. (<b>a</b>) Short architecture. (<b>b</b>) Medium architecture. (<b>c</b>) Long architecture.</p>
Full article ">Figure 7
<p>Composition of the eating and non-eating classes before and after all steps of the data selection procedure.</p>
Full article ">Figure 8
<p>Model training procedure for one subject. The same procedure is repeated for each subject in the dataset.</p>
Full article ">Figure 9
<p>F1-score of personalized and non-personalized models shown for each subject separately. Non-personalized results were achieved using leave-one-subject-out (LOSO) evaluation; personalized results were achieved using leave-one-recording-out (LORO) evaluation.</p>
Full article ">Figure 10
<p>Average recognition for each type of cutlery.</p>
Full article ">
11 pages, 801 KiB  
Communication
Angle-of-Arrival Estimation Using Difference Beams in Localized Hybrid Arrays
by Hang Li and Zhiqun Cheng
Sensors 2021, 21(5), 1901; https://doi.org/10.3390/s21051901 - 9 Mar 2021
Cited by 2 | Viewed by 2573
Abstract
Angle-of-arrival (AoA) estimation in localized hybrid arrays suffers from phase ambiguity owing to its localized structure and vulnerability to noise. In this letter, we propose a novel phase shift design, allowing each subarray to exploit difference beam steering in two potential AoA directions. [...] Read more.
Angle-of-arrival (AoA) estimation in localized hybrid arrays suffers from phase ambiguity owing to the localized structure and its vulnerability to noise. In this letter, we propose a novel phase shift design that allows each subarray to exploit difference beam steering in two potential AoA directions. This enables the calibration of cross-correlations and an enhanced phase offset estimation between adjacent subarrays. We propose two unambiguous AoA estimation schemes based on whether the ratio of the number of antennas per subarray N to the number of different phase shifts per symbol K (i.e., N/K) is even or odd. The simulation results show that the proposed approach greatly improves the estimation accuracy compared to the state of the art when the ratio N/K is even. Full article
(This article belongs to the Special Issue Communications and Sensing Technologies for the Future)
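The sum and difference beams referred to above (and shown in Figure 2) can be illustrated with a uniform linear array: steering phases align the aperture toward a direction u0, and negating the second half of the aperture yields a difference beam with a null exactly at u0. A sketch under the usual half-wavelength-spacing assumption; N and u0 are arbitrary example values:

```python
import cmath
import math

def array_factor(n, u, u0, difference=False):
    """Magnitude of the array factor of an n-element, half-wavelength-spaced
    linear array steered to direction u0 (u = sin(angle)). The difference
    beam negates the second half of the aperture, placing a null at u0."""
    total = 0j
    for k in range(n):
        sign = -1.0 if (difference and k >= n // 2) else 1.0
        total += sign * cmath.exp(1j * math.pi * k * (u - u0))
    return abs(total)

N, u0 = 16, 0.3
sum_peak = array_factor(N, u0, u0)                    # sum beam: maximum at u0
diff_null = array_factor(N, u0, u0, difference=True)  # difference beam: null at u0
```

The sharp null of the difference beam is what makes it useful for discriminating between the two candidate AoA directions left by the phase ambiguity.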
Show Figures

Figure 1

Figure 1
<p>Illustration of a localized array with <span class="html-italic">M</span> subarrays, where the RF and down conversion components are omitted for simplicity.</p>
Full article ">Figure 2
<p>An example of normalized synthesized patterns of difference beams and sum beams.</p>
Full article ">Figure 3
<p>MSE of <math display="inline"> <semantics> <msup> <mi>e</mi> <mrow> <mi>j</mi> <mover accent="true"> <mrow> <mi>N</mi> <mi>u</mi> </mrow> <mo>^</mo> </mover> </mrow> </msup> </semantics> </math> versus <math display="inline"> <semantics> <msub> <mi>γ</mi> <mi>a</mi> </msub> </semantics> </math> (<math display="inline"> <semantics> <mrow> <mi>M</mi> <mo>=</mo> <mi>K</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 4
<p><math display="inline"> <semantics> <msub> <mi>P</mi> <mi>d</mi> </msub> </semantics> </math> versus <math display="inline"> <semantics> <msub> <mi>γ</mi> <mi>a</mi> </msub> </semantics> </math> (<math display="inline"> <semantics> <mrow> <mi>M</mi> <mo>=</mo> <mi>K</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 5
<p>MSE of <math display="inline"> <semantics> <mover accent="true"> <mi>u</mi> <mo>^</mo> </mover> </semantics> </math> versus <span class="html-italic">Q</span>, (<math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>24</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 6
<p>MSE of <math display="inline"> <semantics> <mover accent="true"> <mi>u</mi> <mo>^</mo> </mover> </semantics> </math> versus <math display="inline"> <semantics> <msub> <mi>γ</mi> <mi>a</mi> </msub> </semantics> </math>, (<math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>24</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics> </math>).</p>
Full article ">
35 pages, 16034 KiB  
Article
Analytical Approach to Sampling Estimation of Underwater Tunnels Using Mechanical Profiling Sonars
by Vitor Augusto Machado Jorge, Pedro Daniel de Cerqueira Gava, Juan Ramon Belchior de França Silva, Thais Machado Mancilha, Waldir Vieira, Geraldo José Adabo and Cairo Lúcio Nascimento, Jr.
Sensors 2021, 21(5), 1900; https://doi.org/10.3390/s21051900 - 9 Mar 2021
Cited by 7 | Viewed by 3069
Abstract
Hydroelectric power plants often make use of tunnels to redirect the flow of water to the plant power house. Such tunnels are often flooded and can span considerable distances. Periodical inspections of such tunnels are highly desirable since a tunnel collapse will be [...] Read more.
Hydroelectric power plants often make use of tunnels to redirect the flow of water to the plant power house. Such tunnels are often flooded and can span considerable distances. Periodic inspections of such tunnels are highly desirable, since a tunnel collapse would be catastrophic, disrupting the power plant operation. In many cases, the use of Unmanned Underwater Vehicles (UUVs) equipped with mechanical profiling sonars is a suitable and affordable way to gather data to generate 3D maps of flooded tunnels. In this paper, we study the resolution of 3D tunnel maps generated by one or more mechanical profiling sonars working in tandem, considering synchronization and occlusion problems. The article derives the analytical equations to estimate the sampling of underwater tunnels using mechanical profiling sonars (scanning sonars). Experiments in a simulated environment using up to four sensors simultaneously are presented. We also report experimental results obtained by a UUV inside a large power plant tunnel, together with a first map of this environment using a single sonar sensor. Full article
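The sampling-resolution trade-off analyzed in the paper follows from simple kinematics: one head revolution takes a fixed period while the UUV advances speed × period along the tunnel, and s phase-shifted sonars divide that axial spacing by s. A sketch with made-up speed, revolution-period, and beam-step values (the real numbers depend on the sonar configuration):

```python
import math

def along_tunnel_spacing(speed, rev_period, n_sensors=1):
    """Axial distance between samples taken at the same head angle: the UUV
    advances speed * rev_period per revolution, and n phase-shifted sonars
    divide that spacing by n."""
    return speed * rev_period / n_sensors

def cross_section_spacing(beam_range, step_deg):
    """Approximate arc length between consecutive beam positions at a range."""
    return beam_range * math.radians(step_deg)

# Hypothetical numbers: 0.5 m/s UUV, 4 s per revolution, 0.9-degree steps, 5 m range.
ax_single = along_tunnel_spacing(0.5, 4.0)               # one sonar
ax_tandem = along_tunnel_spacing(0.5, 4.0, n_sensors=4)  # four sonars in tandem
arc = cross_section_spacing(5.0, 0.9)
```

With these example values a single sonar leaves 2 m between cross-sections, while four sonars in tandem reduce this to 0.5 m, which is the motivation for the multi-sensor phase shift and sector schemes compared in the figures below.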
Show Figures

Figure 1

Figure 1
<p>An example of a profiling sensor with a single ray connected to a moving head.</p>
Full article ">Figure 2
<p>An example of a tunnel cross-section with a base of length <span class="html-italic">a</span>, a height of <span class="html-italic">b</span>, and an arc-shaped ceiling of radius <span class="html-italic">c</span>. Vectors <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="italic">u</mi> <mo>^</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="italic">v</mi> <mo>^</mo> </mover> </semantics></math> and <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="italic">w</mi> <mo>^</mo> </mover> </semantics></math> are the normalized basis vectors of the tunnel reference frame, where <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="italic">u</mi> <mo>^</mo> </mover> </semantics></math> is orthogonal to the cross-section of the entrance of the tunnel.</p>
Full article ">Figure 3
<p>Analysis of floor samples in the two extreme cases: (<b>a</b>) when the sensor is centered and perpendicular to the floor; and (<b>b</b>) when the sensor ray hits the corner.</p>
Full article ">Figure 4
<p>The impact of velocity on mapping with a single-point sensor when a robot moves with constant linear velocity <math display="inline"><semantics> <mrow> <mi mathvariant="italic">v</mi> <mo>=</mo> <mo>(</mo> <mi>v</mi> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>The pair of sensor readings closest to the corner, at the <math display="inline"><semantics> <mrow> <mi>m</mi> <mi>θ</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>m</mi> <mo>−</mo> <mn>1</mn> <mo>)</mo> <mi>θ</mi> </mrow> </semantics></math> angular positions, is displayed during the first sensor revolution as blue vertices, while the same samples at the next revolution are shown in red. Note that they form a parallelogram of sides <math display="inline"><semantics> <msub> <mi mathvariant="script">T</mi> <mi>x</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="script">T</mi> <msub> <mi>y</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </msub> </semantics></math>, with the largest distance between samples along the largest diagonal of the parallelogram, shown in red dashed lines.</p>
Full article ">Figure 6
<p>The behavior of four identical sensors <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mn>0</mn> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <msub> <mi>s</mi> <mn>1</mn> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <msub> <mi>s</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>s</mi> <mn>3</mn> </msub> </semantics></math>, with phases <math display="inline"><semantics> <mrow> <mi>φ</mi> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </semantics></math> for each sensor <math display="inline"><semantics> <msub> <mi>s</mi> <mi>k</mi> </msub> </semantics></math>—that is, the phase for each sensor is <math display="inline"><semantics> <msup> <mn>0</mn> <mo>°</mo> </msup> </semantics></math>, <math display="inline"><semantics> <msup> <mn>90</mn> <mo>°</mo> </msup> </semantics></math>, <math display="inline"><semantics> <msup> <mn>180</mn> <mo>°</mo> </msup> </semantics></math>, and <math display="inline"><semantics> <msup> <mn>270</mn> <mo>°</mo> </msup> </semantics></math>, respectively. Samples at the same head position for all sensors are out of phase, and the distance between subsequent samples with the same head position is always <math display="inline"><semantics> <msub> <mi mathvariant="script">T</mi> <mi>x</mi> </msub> </semantics></math>/s, where <span class="html-italic">s</span> is the number of sensors. For all sensors, the distance covered while performing one full sensor revolution is <math display="inline"><semantics> <msub> <mi mathvariant="script">T</mi> <mi>x</mi> </msub> </semantics></math>. A 4x faster sensor is shown in blue dashed lines. As the slow sensors can sample 4 points at the same time, sampling does not happen at exactly the same points as the faster sensor, but the distance between samples is the same—for example, the floor is sampled with the same spatial spacing, but not at the same points.</p>
Full article ">Figure 7
<p>Impact of sampling with two sensors when the robot moves with constant velocity <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">v</mi> <mo>=</mo> <mo>(</mo> <mi>v</mi> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> and the distance between sensors is <span class="html-italic">x</span>. When <math display="inline"><semantics> <msub> <mi>s</mi> <mn>1</mn> </msub> </semantics></math> reaches the position of <math display="inline"><semantics> <msub> <mi>s</mi> <mn>0</mn> </msub> </semantics></math>, the phases must be shifted by <math display="inline"><semantics> <msup> <mn>180</mn> <mo>°</mo> </msup> </semantics></math>.</p>
Full article ">Figure 8
<p>Sector sampling of a flat region. Dashed black lines represent the readings of a sensor, while green dashed lines represent the readings of an adjacent sensor. The distance between readings at the same head position, <math display="inline"><semantics> <mi>β</mi> </semantics></math>, but at subsequent time steps, takes different values depending on the path taken by the sensor. The long path (blue dashed) results in a greater distance between samples than the shorter path (red dashed). Smaller and larger distances alternate, and the sum of the two is <math display="inline"><semantics> <mfrac> <mrow> <mn>2</mn> <msub> <mi mathvariant="script">T</mi> <mi>x</mi> </msub> </mrow> <mi>s</mi> </mfrac> </semantics></math>.</p>
Full article ">Figure 9
<p>Robot connected to one mechanical profiling sonar (MPS) with the sensor rotation axis aligned to the robot heading.</p>
Full article ">Figure 10
<p>The tunnel scenario considered for simulations and real-world tests. The rock trap (the ditch shown in the figure), along with an auxiliary tunnel entrance.</p>
Full article ">Figure 11
<p>Robots positioned in the entrance of the tunnel. In (<b>a</b>), the vector in red depicts the robot heading vector, while the green vector is the ray of the sensor which is always co-planar with the cross-section of the tunnel. Blue lines depict the ping rays of four ping sensors which are used to keep the two robots centered at the cross-section of the tunnel as they move. In (<b>b</b>–<b>d</b>) we see VITA 2 with several 881L configurations.</p>
Full article ">Figure 12
<p>Comparison of the two configuration of sensors with respect to phase alignment.</p>
Full article ">Figure 13
<p>The problem with occlusions in the phase shift approach is shown in (<b>a</b>,<b>b</b>) when using sensors side by side, for the MEDIUM configuration of the 881L sensor, which makes it easier to see the resulting sampling of both approaches. Note the reading gaps at the side walls. The space between sensors also results in gaps in the sector approach, but the gaps on the side walls are smaller than in the phase shift approach; see (<b>c</b>,<b>d</b>). Comparing (<b>a</b>) with (<b>c</b>) and (<b>b</b>) with (<b>d</b>) shows the resolution difference between the two approaches. The resulting meshes for the phase shift and sector approaches for three and four sensors are shown in (<b>e</b>,<b>f</b>) and (<b>g</b>,<b>h</b>), respectively. The phase shift approach for three and four sensors shows aliasing at the side walls near the occlusions, but there is little aliasing at the floor. Note the aliasing in (<b>g</b>) for the sector offset with three sensors, visible in the blue regions next to the floor, while the mesh reconstruction algorithm can decrease the aliasing when using four sensors in (<b>h</b>). Aliasing at the side walls also seems to be smaller for the sector offset than for the phase shift approach.</p>
Full article ">Figure 14
<p>Low visibility using the low-light camera in the tunnel: centered at the cross-section (<b>a</b>) and close to the floor (<b>b</b>).</p>
Full article ">Figure 15
<p>The behavior of projected distances on a surface as the range and frequency change.</p>
Full article ">Figure 16
<p>Cross-section of the tunnel and the red thinning line which represents its skeletonization followed by pruning.</p>
Full article ">Figure 17
<p>The reconstructed tunnel. Regions of interest are marked in red. Note that we can detect height differences in the ceiling and also some details of the end of the rock trap. Due to poor sampling along the tunnel length, we cannot recover many details of the side tunnel seen in previous experiments.</p>
Full article ">Figure A1
<p>The upper part of a tunnel with an arbitrarily convex but symmetric ceiling. Note that <math display="inline"><semantics> <msub> <mi mathvariant="script">T</mi> <mrow> <mi>y</mi> <mn>1</mn> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>b</mi> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> always form a square angle, therefore <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>+</mo> <mi>q</mi> <mo>=</mo> <mi>b</mi> <mo>/</mo> <mo>(</mo> <mn>2</mn> <mo form="prefix">cos</mo> <mi>θ</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">
15 pages, 5003 KiB  
Article
Variable Admittance Control Based on Human–Robot Collaboration Observer Using Frequency Analysis for Sensitive and Safe Interaction
by Hyomin Kim and Woosung Yang
Sensors 2021, 21(5), 1899; https://doi.org/10.3390/s21051899 - 8 Mar 2021
Cited by 8 | Viewed by 3735
Abstract
A collaborative robot should be sensitive to the user intention while maintaining safe interaction during tasks such as hand guiding. Observers based on the discrete Fourier transform have been studied to distinguish between the low-frequency motion elicited by the operator and high-frequency behavior [...] Read more.
A collaborative robot should be sensitive to the user intention while maintaining safe interaction during tasks such as hand guiding. Observers based on the discrete Fourier transform have been studied to distinguish between the low-frequency motion elicited by the operator and high-frequency behavior resulting from system instability and disturbances. However, the discrete Fourier transform requires an excessively long sampling time. We propose a human–robot collaboration observer based on an infinite impulse response filter to increase the intention recognition speed. Using this observer, we also propose a variable admittance controller to ensure safe collaboration. The recognition time of the human–robot collaboration observer is 0.29 s, 3.5 times faster than frequency analysis based on the discrete Fourier transform. The performance of the variable admittance controller and its improved recognition speed are experimentally verified on a two-degrees-of-freedom manipulator. We confirm that the improved recognition speed of the proposed human–robot collaboration observer allows a timely recovery from unsafe to safe collaboration. Full article
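The low/high-frequency split behind the observer can be illustrated with a complementary first-order IIR pair (the paper itself uses 2nd-order Butterworth low- and high-pass filters): dominant low-band energy suggests deliberate operator motion, while dominant high-band energy suggests instability or disturbance. All constants below are illustrative assumptions:

```python
import math

def band_energies(signal, alpha=0.9):
    """Split a signal into low/high bands with a complementary first-order
    IIR pair and return the energy in each band."""
    low = signal[0]
    e_low = e_high = 0.0
    for x in signal:
        low = alpha * low + (1 - alpha) * x  # first-order IIR low-pass
        high = x - low                       # complementary high-pass
        e_low += low * low
        e_high += high * high
    return e_low, e_high

fs = 200.0  # assumed sampling rate, Hz
slow = [math.sin(2 * math.pi * 0.5 * t / fs) for t in range(400)]   # operator-like, 0.5 Hz
fast = [math.sin(2 * math.pi * 30.0 * t / fs) for t in range(400)]  # disturbance-like, 30 Hz
lo_s, hi_s = band_energies(slow)
lo_f, hi_f = band_energies(fast)
```

A practical observer would evaluate the high-to-low energy ratio over a sliding window and soften the admittance parameters when the high band dominates.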
Show Figures
Figure 1
<p>Block diagram of admittance control for stability analysis.</p>
Full article ">Figure 2
<p>Design of the IIR Butterworth filter. (<b>a</b>) Structure of the 2nd-order IIR filter. (<b>b</b>) Magnitude responses of the LPF and HPF. (<b>c</b>) Pole-zero maps of the LPF and HPF.</p>
Full article ">Figure 3
<p>Block diagram of human–robot collaboration observer.</p>
Full article ">Figure 4
<p>Stability analysis of admittance control for desired inertia <math display="inline"> <semantics> <mrow> <msub> <mi>m</mi> <mi mathvariant="normal">d</mi> </msub> </mrow> </semantics> </math> and damping <math display="inline"> <semantics> <mrow> <msub> <mi>d</mi> <mi mathvariant="normal">d</mi> </msub> </mrow> </semantics> </math> at a fixed ratio. (<b>a</b>) Frequency response for human stiffness of 176.39 N/m. (<b>b</b>) Root locus plot for increasing external stiffness.</p>
Full article ">Figure 5
<p>Simulation verification according to various magnitudes and frequencies. (<b>a</b>) Frequency of input force. (<b>b</b>) Magnitude of input force. (<b>c</b>) <math display="inline"> <semantics> <mrow> <msub> <mi>I</mi> <mrow> <mi>HSO</mi> </mrow> </msub> </mrow> </semantics> </math> output (red curve). (<b>d</b>) <math display="inline"> <semantics> <mrow> <msub> <mi>I</mi> <mi mathvariant="normal">O</mi> </msub> </mrow> </semantics> </math> output (gray curve) and <math display="inline"> <semantics> <mrow> <msub> <mi>I</mi> <mrow> <mi>HRCO</mi> </mrow> </msub> </mrow> </semantics> </math> output (blue curve).</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Step input response for input frequencies of 1–5 Hz. (<b>a</b>) <math display="inline"> <semantics> <mrow> <msub> <mi>I</mi> <mrow> <mi>HSO</mi> </mrow> </msub> </mrow> </semantics> </math> output (red curve). (<b>b</b>) <math display="inline"> <semantics> <mrow> <msub> <mi>I</mi> <mrow> <mi>HRCO</mi> </mrow> </msub> </mrow> </semantics> </math> output (blue curve).</p>
Full article ">Figure 7
<p>Block diagram of variable admittance control based on HRCO.</p>
Full article ">Figure 8
<p>Experimental setup for sudden change of operator’s intention. (<b>a</b>) Starting position. (<b>b</b>) Motion with constant speed. (<b>c</b>) Stop with sudden deceleration.</p>
Full article ">Figure 9
<p>Experimental results of controllers under sudden change in operator’s intention. (<b>a</b>) End-effector position along <span class="html-italic">x</span> axis, (<b>b</b>) external force along <span class="html-italic">x</span> axis, (<b>c</b>) HRCO output, and (<b>d</b>) admittance parameters.</p>
Figure 9 Cont.">
Full article ">Figure 10
<p>Position–velocity graphs along the <span class="html-italic">x</span> axis for experiment with sudden change in operator’s intention. Admittance control with (<b>a</b>) low and (<b>b</b>) high admittance parameters and (<b>c</b>) proposed variable admittance control based on HRCO.</p>
Full article ">Figure 11
<p>Experimental setup for virtual object collision. (<b>a</b>) Starting position. (<b>b</b>) Motion with constant speed. (<b>c</b>) Collision with virtual object.</p>
Full article ">Figure 12
<p>Experimental results of controllers for collision with virtual object. (<b>a</b>) End-effector position along <span class="html-italic">x</span> axis, (<b>b</b>) external force along <span class="html-italic">x</span> axis, (<b>c</b>) HRCO output, and (<b>d</b>) admittance parameters.</p>
Full article ">Figure 13
<p>Position–velocity graphs along <span class="html-italic">x</span> axis for collision with virtual object. Admittance control with (<b>a</b>) low and (<b>b</b>) high admittance parameters and (<b>c</b>) proposed variable admittance control based on HRCO.</p>
Full article ">
14 pages, 4221 KiB  
Article
Occupational Noise on Floating Storage and Offloading Vessels (FSO)
by Grzegorz Rutkowski and Jarosław Korzeb
Sensors 2021, 21(5), 1898; https://doi.org/10.3390/s21051898 - 8 Mar 2021
Cited by 6 | Viewed by 2361
Abstract
The purpose and scope of this paper are to provide guidance on the potential impacts of exposure to the high noise levels recorded on 1st-generation (30-year-old) floating storage and offloading (FSO) vessels in the offshore sector. The international community recognizes that [...] Read more.
The purpose and scope of this paper are to provide guidance on the potential impacts of exposure to the high noise levels recorded on 1st-generation (30-year-old) floating storage and offloading (FSO) vessels in the offshore sector. The international community recognizes that vibroacoustic emissions from commercial ships may have negative consequences for both humans (workers) and marine life, especially marine mammals. Regarding the effect of noise on human health, legal requirements impose noise exposure controls for personnel working on ships. The acceptable noise exposure standards are established in European Union Directive 2003/10/EC (2003), the NOPSEMA Regulation (2006), the Maritime Labour Convention (MLC) guidelines (2006), and the recommendations of the International Maritime Organization (IMO) contained, e.g., in IMO MEPC.1/Circ.833 (2014). These regulations inform employers and employees what they must do to effectively protect both the marine environment and the health and safety of workers employed in the offshore maritime industry. This study also presents an analysis of the results of noise measurements carried out on exemplary 1st-generation FSO units. Full article
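The exposure limits referenced above follow an equal-energy rule: every 3 dB(A) increase in the measured level halves the permitted working time. A short sketch of the daily-exposure arithmetic in Directive 2003/10/EC (the 87 dB(A) exposure limit value and the 8-h reference duration are taken from the Directive; the function names are ours):

```python
import math

T0_HOURS = 8.0  # reference working-day duration in Directive 2003/10/EC

def daily_exposure_level(l_aeq_db, hours):
    """Daily noise exposure L_EX,8h from the A-weighted equivalent
    continuous level L_Aeq measured over a working period of `hours`."""
    return l_aeq_db + 10.0 * math.log10(hours / T0_HOURS)

def allowed_hours(l_aeq_db, limit_db=87.0):
    """Working time after which L_EX,8h reaches the exposure limit
    value (87 dB(A) in Directive 2003/10/EC)."""
    return T0_HOURS * 10.0 ** ((limit_db - l_aeq_db) / 10.0)
```

For example, at a measured L_Aeq of 90 dB(A), the permitted working time drops to roughly 4 h before the 8-h exposure limit is reached.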
Show Figures
Figure 1
<p>Types of noise.</p>
Full article ">Figure 2
<p>The need for noise measurements [<a href="#B28-sensors-21-01898" class="html-bibr">28</a>].</p>
Full article ">Figure 3
<p>Classification of noise according to the time of duration [<a href="#B17-sensors-21-01898" class="html-bibr">17</a>].</p>
Full article ">Figure 4
<p>Correction filters characteristics.</p>
Full article ">Figure 5
<p>Daily exposure to noise E<sub>A Te</sub> (noise dose).</p>
Full article ">Figure 6
<p>Acceptable time of work.</p>
Full article ">Figure 7
<p>Overview of the major noise sources contributing to onboard workers’ noise exposure in selected areas of the studied FSO, based on survey data from a typical 1st-generation FSO [<a href="#B1-sensors-21-01898" class="html-bibr">1</a>].</p>
Full article ">
15 pages, 1142 KiB  
Article
Deploying an NFV-Based Experimentation Scenario for 5G Solutions in Underserved Areas
by Victor Sanchez-Aguero, Ivan Vidal, Francisco Valera, Borja Nogales, Luciano Leonel Mendes, Wheberth Damascena Dias and Alexandre Carvalho Ferreira
Sensors 2021, 21(5), 1897; https://doi.org/10.3390/s21051897 - 8 Mar 2021
Cited by 9 | Viewed by 3105
Abstract
Presently, a significant part of the world population does not have Internet access. The fifth-generation cellular network technology evolution (5G) is focused on reducing latency, increasing the available bandwidth, and enhancing network performance. However, researchers and companies have not invested enough effort into [...] Read more.
Presently, a significant part of the world population does not have Internet access. The fifth-generation cellular network technology evolution (5G) is focused on reducing latency, increasing the available bandwidth, and enhancing network performance. However, researchers and companies have not invested enough effort into the deployment of the Internet in remote/rural/undeveloped areas for different techno-economic reasons. This article presents the result of a collaboration between Brazil and the European Union, introducing the steps designed to create a fully operational experimentation scenario with the main purpose of integrating the different achievements of the H2020 5G-RANGE project so that they can be trialed together in a 5G networking use case. The scenario encompasses (i) a novel radio access network that targets a bandwidth of 100 Mb/s in a cell radius of 50 km, and (ii) a network of Small Unmanned Aerial Vehicles (SUAV). This set of SUAVs is NFV-enabled, on top of which Virtual Network Functions (VNF) can be automatically deployed to support occasional network communications beyond the boundaries of the 5G-RANGE radio cells. The whole deployment relies on a virtual private overlay network that enables the preliminary validation of the scenario components from their respective remote locations and simplifies their subsequent integration into a single local demonstrator, followed by the configuration of the required GRE/IPsec tunnels, the integration of the new 5G-RANGE physical, MAC, and network layer components, and the overall validation with voice and data services. Full article
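One practical cost of the GRE/IPsec overlay mentioned above is encapsulation overhead, which shrinks the usable payload MTU and shows up directly in tunnel throughput measurements. A back-of-the-envelope budget follows; the ESP figure is an assumption (its actual size depends on the cipher, IV, padding, and ICV in use), and none of these numbers are taken from the paper:

```python
def gre_ipsec_payload_mtu(link_mtu=1500, esp_overhead=56):
    """Largest inner packet that fits without fragmentation when GRE
    traffic is protected by ESP. `esp_overhead` is an assumed figure:
    it varies with the cipher, IV, padding, and ICV sizes."""
    outer_ip = 20  # outer IPv4 header of the GRE tunnel
    gre_hdr = 4    # basic GRE header, no key/sequence options
    return link_mtu - outer_ip - gre_hdr - esp_overhead

# Plain GRE over IPv4 (no ESP) leaves the well-known 1476-byte payload MTU
plain_gre = gre_ipsec_payload_mtu(esp_overhead=0)
```

In practice, lowering the tunnel interface MTU (or clamping the TCP MSS) to the resulting figure avoids fragmentation of encapsulated packets on the underlying 1500-byte links.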
Show Figures
Figure 1
<p>Overview of the testbed components and the experimentation scenario.</p>
Full article ">Figure 2
<p>High-level overview of the 5G-RANGE architecture.</p>
Full article ">Figure 3
<p>Methodology to define, deploy, integrate and validate the experimentation scenario.</p>
Full article ">Figure 4
<p>Data-plane protocol stack of the residential environment.</p>
Full article ">Figure 5
<p>Performance evaluation of GRE/IPsec tunnel endpoints.</p>
Full article ">Figure 6
<p>Data rates of SIP and Skype calls.</p>
Full article ">Figure 7
<p>Data rates of video-on-demand service.</p>
Full article ">Figure 8
<p>Transoceanic network path performance between 5TONIC and Inatel.</p>
Full article ">