Sensors, Volume 21, Issue 2 (January-2 2021) – 353 articles

Cover Story: Understanding what farm animals tell us is not only important for business but also key to unlocking ways to enhance their welfare. A critical review in this issue provides a framework for developing an architecture for a sensor-based animal emotional-health assessment tool. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open them.
18 pages, 28421 KiB  
Article
The Event Detection System in the NEXT-White Detector
by Raúl Esteve Bosch, José F. Toledo Alarcón, Vicente Herrero Bosch, Ander Simón Estévez, Francesc Monrabal Capilla, Vicente Álvarez Puerta, Javier Rodríguez Samaniego, Marc Querol Segura and Francisco Ballester Merelo
Sensors 2021, 21(2), 673; https://doi.org/10.3390/s21020673 - 19 Jan 2021
Cited by 4 | Viewed by 3415
Abstract
This article describes the event detection system of the NEXT-White detector, a 5 kg high-pressure xenon TPC with electroluminescent amplification located in the Laboratorio Subterráneo de Canfranc (LSC), Spain. The detector is based on a plane of photomultipliers (PMTs) for energy measurements and a silicon photomultiplier (SiPM) tracking plane for offline topological event filtering. The event detection system, based on the SRS-ATCA data acquisition system developed in the framework of the CERN RD51 collaboration, has been designed to detect multiple events based on online PMT signal energy measurements and a coincidence-detection algorithm. Implemented on an FPGA, the system has been running and evolving successfully throughout NEXT-White operation. The event detection system brings several new functionalities to the field. A distributed double event processor has been implemented to detect two different types of events simultaneously, thus allowing simultaneous calibration and physics runs. This special feature provides constant monitoring of the detector conditions, which is especially relevant to the lifetime and geometrical-map computations needed to correct high-energy physics events. Other features, such as primary scintillation event rejection and a double buffer associated with the type of event being searched for, help reduce unnecessary data throughput, thus minimizing dead time and improving trigger efficiency.
(This article belongs to the Special Issue Electronics for Sensors)
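The coincidence-detection idea described in the abstract — accepting an event candidate only when enough PMT channels fire within a short time window — can be sketched in software. This is an illustrative sketch only (the actual system runs on an FPGA); the function name, data layout, and numbers are hypothetical, not the authors' implementation.

```python
# Software analogue of a PMT coincidence trigger (illustrative only).
# An event candidate is accepted when at least `min_pmts` distinct channels
# report a hit inside a sliding coincidence window of `window_ns`.

def coincidence_trigger(hit_times, window_ns, min_pmts):
    """hit_times: list of per-PMT lists of hit timestamps (ns).
    Returns the start times of windows in which >= min_pmts PMTs fire."""
    # Flatten to (time, channel) pairs and scan in time order.
    tagged = sorted((t, ch) for ch, times in enumerate(hit_times) for t in times)
    triggers = []
    for i, (t0, _) in enumerate(tagged):
        # Distinct channels with a hit inside [t0, t0 + window_ns].
        channels = {ch for t, ch in tagged[i:] if t - t0 <= window_ns}
        if len(channels) >= min_pmts:
            triggers.append(t0)
    return triggers
```

For example, three PMTs firing at 100, 110, and 120 ns satisfy a 3-fold coincidence in a 50 ns window, while isolated hits on single channels do not.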
Figures:
Figure 1. Schematic view of the NEXT-White detector. Top right: picture of the PMT sensor plane. Top left: picture of the SiPM sensor plane. In the active volume of the TPC: drawing of the detector's principle of operation.
Figure 2. NEXT-White most common signal searches: (a) online S1 signal search with offline S2 signal search; (b) online S2 signal search with offline S1 signal search. In both cases, a data acquisition window of 1300 µs and a pre-trigger of 650 µs are applied.
Figure 3. NEXT-White data acquisition hardware architecture.
Figure 4. NEXT-DEMO trigger scheme.
Figure 5. Example of signal candidate generation. In red, the complete set of configuration parameters to generate an event candidate from a PMT signal. In blue, data estimated by the event processor over the signal reconstructed by the BLR algorithm.
Figure 6. Set of events with a different range of energies from run 8250. General configuration: circular buffer size of 1600 µs and pre-trigger of 800 µs. EVT1 type, set for low energy: maximum amplitude of 1000 ADC counts, minimum and maximum amplitude thresholds of 5000 and 50,000 (sum of ADC counts), and minimum and maximum time thresholds of 2 and 40 µs. EVT2 type, set for high energy: maximum amplitude of 4095 ADC counts (maximum possible value), minimum and maximum amplitude thresholds of 50,000 and 16,777,215 (maximum possible value, sum of ADC counts), and minimum and maximum time thresholds of 2 and 600 µs.
Figure 7. Run 7979 83mKr energy deposition signal (S2) in PMT0, with possible false S1 signals before and after the S2 signal that could be set as event candidates.
Figure 8. Event-accept example in double searching mode: run 4405 electron-like type A signal followed by a type B signal for PMT0, PMT1 and PMT2. Double search configuration: 625 µs maximum time from event A to B. Type A signal configuration: 50 ns coincidence window (CW_A) and a minimum of 3 PMT hits (N_A). Type B signal configuration: 1250 ns coincidence window (CW_B) and a minimum of 3 PMT hits (N_B).
Figure 9. System event detection and multi-hit memory scheme and functionality example.
19 pages, 4292 KiB  
Article
A Service Discovery Solution for Edge Choreography-Based Distributed Embedded Systems
by Sara Blanc, José-Luis Bayo-Montón, Senén Palanca-Barrio and Néstor X. Arreaga-Alvarado
Sensors 2021, 21(2), 672; https://doi.org/10.3390/s21020672 - 19 Jan 2021
Cited by 4 | Viewed by 2616
Abstract
This paper presents a solution to support service discovery for edge choreography-based distributed embedded systems. The Internet of Things (IoT) edge architectural layer is composed of Raspberry Pi machines, each hosting different services organized according to the choreography collaborative paradigm. The solution adds three message-passing models to the choreography middleware to be coherent and compatible with current IoT messaging protocols. It aims to support blind hot plugging of new machines and to help with service load balancing. The discovery mechanism is implemented as a broker service and supports regular expressions (regex) in the message scope to discern both the publishing patterns offered by data providers and the needs of client services. Results compare central processing unit (CPU) usage in request–response and datacentric configurations, and analyze regex interpreter latency times compared with a traditional message structure, as well as its impact on CPU and memory consumption.
(This article belongs to the Section Internet of Things)
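The regex-scoped discovery the abstract describes can be illustrated with a minimal broker sketch: publishers register a scope pattern, clients ask for a concrete topic, and the broker matches topics against the registered patterns. The class and method names below are hypothetical, not the paper's middleware API; Python's `re` module stands in for whatever regex interpreter the middleware uses.

```python
import re

# Minimal service-discovery broker sketch (names are illustrative).
# Publishers register a scope as a regular expression; discovery returns
# the endpoints whose scope fully matches the requested topic.

class DiscoveryBroker:
    def __init__(self):
        self.services = {}  # compiled scope pattern -> service endpoint

    def register(self, scope_regex, endpoint):
        self.services[re.compile(scope_regex)] = endpoint

    def discover(self, topic):
        """Return endpoints whose registered scope matches the topic."""
        return [ep for pat, ep in self.services.items() if pat.fullmatch(topic)]
```

A publisher registered under `sensor/temp/.*` would then be discovered for any concrete temperature topic, without the broker needing a fixed topic tree.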
Figures:
Figure 1. A layered structure supported by a choreography engine.
Figure 2. Choreography system at the edge level.
Figure 3. Physical service level.
Figure 4. Logical service abstraction level.
Figure 5. Web service discovery: a general view.
Figure 6. Datacentric pattern: selective activation.
Figure 7. Examples of rules.
Figure 8. Request–response vs. datacentric pattern: CPU usage in a Raspberry Pi.
Figure 9. Regular expressions interpreter: example.
Figure 10. Message built with a regular expression in the scope: C# example.
Figure 11. CPU usage and memory consumption under test.
25 pages, 13734 KiB  
Article
Optimal Consensus with Dual Abnormality Mode of Cellular IoT Based on Edge Computing
by Shin-Hung Pan and Shu-Ching Wang
Sensors 2021, 21(2), 671; https://doi.org/10.3390/s21020671 - 19 Jan 2021
Viewed by 2331
Abstract
The continuous development of fifth-generation (5G) networks is the main driving force for the growth of Internet of Things (IoT) applications. The 5G network is expected to greatly expand the applications of the IoT, promoting the operation of cellular networks, exposing the security and network challenges of the IoT, and pushing the future of the Internet to the edge. Because the IoT can connect anything, anywhere, at any time, it can provide ubiquitous services. With the establishment and use of 5G wireless networks, the cellular IoT (CIoT) will be developed and applied. To provide more reliable CIoT applications, a reliable network topology is very important, and reaching a consensus is one of the most important issues in designing a highly reliable CIoT. It is therefore necessary to reach a consensus so that even if some components in the system are abnormal, applications in the system can still execute correctly. In this study, a consensus protocol is discussed for a CIoT with a dual abnormality mode that combines dormant abnormality and malicious abnormality. The protocol proposed in this research not only allows all normal components in the CIoT to reach a consensus with the minimum number of data exchanges, but also tolerates the maximum number of dormant and malicious abnormal components. Meanwhile, the protocol makes all normal components in the CIoT satisfy the constraints of reaching consensus: termination, agreement, and integrity.
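The decision step of such a protocol can be illustrated with a much-simplified sketch (this is not the paper's OCDAM protocol, whose data-gathering rounds and bounds are not reproduced here): after gathering, each normal component holds the values received from its peers and decides by majority, so a bounded number of dormant (silent) or malicious (arbitrary-valued) components cannot change the outcome. All names below are hypothetical.

```python
from collections import Counter

# Simplified consensus decision sketch: majority vote over received values.
# A dormant component sends nothing (ABSENT); a malicious one may send any
# value, but a minority of bad values cannot outvote the normal majority.

ABSENT = None

def decide(received):
    """Majority vote over received values, ignoring dormant (absent) ones."""
    votes = Counter(v for v in received if v is not ABSENT)
    return votes.most_common(1)[0][0] if votes else ABSENT
```

With three normal components sending 1, one malicious component sending 0, and one dormant component, the decision is still 1.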
Figures:
Figure 1. The structure of ECIoT.
Figure 2. The Access-layer of ECIoT.
Figure 3. The progression of the influence of dormant and malicious abnormal processing elements (PEs) removed.
Figure 4. The proposed OCDAM.
Figure 5. The execution steps of the proposed method.
Figure 6. The example environment constructed by ECIoT.
Figure 7. An example of the communication range of a specific BS1.
Figure 8. (a) The initial value of each PE in Edge cloud E1 of the Edge-layer. (b) The dg-graph of each PE in Edge cloud E1 during the first data exchange in the Data Gathering Stage. (c) The final dg-graph of e12 during the second data exchange in the Data Gathering Stage. (d) The final dg-graph of e13 during the second data exchange in the Data Gathering Stage. (e) The consensus value of e12 from the Consensus Decision Stage. (f) The consensus value of e13 from the Consensus Decision Stage.
Figure 9. (a) The initial value of each Cloud PE of the Cloud-layer. (b) The dg-graph of each Cloud PE in the Cloud-layer during the first data exchange in the Data Gathering Stage. (c) The final dg-graph of c2 during the second data exchange in the Data Gathering Stage. (d) The final dg-graph of c3 during the second data exchange in the Data Gathering Stage. (e) The consensus vector of c2 from the Consensus Decision Stage. (f) The consensus vector of c3 from the Consensus Decision Stage.
Figure A1. Example of dg-graph.
Figure A2. The pseudo code of OCDAM.
14 pages, 3418 KiB  
Communication
An Optical Frequency Domain Angle Measurement Method Based on Second Harmonic Generation
by Wijayanti Dwi Astuti, Hiraku Matsukuma, Masaru Nakao, Kuangyi Li, Yuki Shimizu and Wei Gao
Sensors 2021, 21(2), 670; https://doi.org/10.3390/s21020670 - 19 Jan 2021
Cited by 14 | Viewed by 3805
Abstract
This paper proposes a new optical angle measurement method in the optical frequency domain based on second harmonic generation with a mode-locked femtosecond laser source, making use of the high peak power and wide spectral range unique to femtosecond laser pulses. To obtain a wide measurable angle range, a theoretical calculation for several nonlinear optical crystals is performed; as a result, a LiNbO3 crystal is employed in the proposed method. The validity of using a parabolic mirror is also demonstrated experimentally, since in our previous research the chromatic aberration of the focusing beam caused localization of the second harmonic generation. Moreover, an experimental demonstration of the proposed angle measurement method is carried out, achieving a measurable range of 10,000 arc-seconds.
(This article belongs to the Collection Position Sensor)
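The readout step this method relies on — tracking the spectral shift of the second harmonic as the crystal angle changes — is typically done by computing a center-of-gravity (intensity-weighted mean) wavelength, which the authors also use for their noise-level evaluation (Figure 9). The sketch below shows only that generic calculation; the sample numbers are made up and do not come from the paper.

```python
# Center-of-gravity wavelength of a measured spectrum (generic sketch).
# The intensity-weighted mean is more robust against noise than taking
# the single highest-intensity sample.

def center_of_gravity_wavelength(wavelengths, intensities):
    """Intensity-weighted mean wavelength of a sampled spectrum."""
    total = sum(intensities)
    return sum(w * i for w, i in zip(wavelengths, intensities)) / total
```

A symmetric spectrum centered at 410 nm yields a center of gravity of exactly 410 nm regardless of its width.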
Figures:
Figure 1. (a) A schematic of the previous research. (b) A schematic view of this research.
Figure 2. (a) Two-dimensional perspective of the refractive-index surface for a type I negative uniaxial crystal; (b) phase-matching condition; (c) phase-mismatching condition; (d) the change of the refractive-index surface followed by the change of phase-matching angle; (e) the change of the refractive-index surface with the same phase-matching angle in a certain range of wavelengths.
Figure 3. Wavelength-dependent phase-matching angle: (a) BBO; (b) MgO:LiNbO3.
Figure 4. Schematic of the refraction effect due to crystal diffraction; the change in direction of an incident laser beam through a nonlinear optical crystal.
Figure 5. (a) Schematic of the experimental setup to observe FW and SHG phenomena using an off-axis parabolic mirror for beam focusing; (b) a photograph of the experimental setup.
Figure 6. (a) FW spectrum before conversion; (b) second harmonic wave (SHW) spectrum focused by a parabolic mirror; (c) SHW spectrum focused by a lens.
Figure 7. Characteristics of SHW spectra at different incident angles using LiNbO3 crystal.
Figure 8. The sensitivity of the peak wavelength to the angular displacements.
Figure 9. Experimental result of noise level using the standard deviation of the center-of-gravity wavelength calculation for each data point.
12 pages, 851 KiB  
Letter
Ramie Yield Estimation Based on UAV RGB Images
by Hongyu Fu, Chufeng Wang, Guoxian Cui, Wei She and Liang Zhao
Sensors 2021, 21(2), 669; https://doi.org/10.3390/s21020669 - 19 Jan 2021
Cited by 13 | Viewed by 3868
Abstract
Timely and accurate crop growth monitoring and yield estimation are important for field management. The traditional sampling method used for estimating ramie yield is destructive. This study therefore proposes a new method for estimating ramie yield based on field phenotypic data obtained from unmanned aerial vehicle (UAV) images. A UAV platform carrying RGB cameras was employed to collect ramie canopy images over the whole growth period. Vegetation indices (VIs), plant number, and plant height were extracted from the UAV-based images, and these data were then combined to establish a yield estimation model. Among all the UAV-based image data, the structural features (plant number and plant height) reflected the ramie yield better than the spectral features, and among the structural features, plant number was the most useful index for monitoring yield, with a correlation coefficient of 0.6. By fusing multiple characteristic parameters, the yield estimation model based on multiple linear regression was markedly more accurate than the stepwise linear regression model, with a determination coefficient of 0.66 and a relative root mean square error of 1.592 kg. Our study shows that it is feasible to monitor crop growth from UAV images and that fusing phenotypic data can improve the accuracy of yield estimation.
(This article belongs to the Section Remote Sensors)
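The fusion step described above amounts to an ordinary multiple linear regression of yield on several UAV-derived features (e.g. plant number, plant height, a vegetation index). The sketch below shows that generic fit; the feature names, data, and coefficients are synthetic, not the paper's dataset or fitted model.

```python
import numpy as np

# Multiple linear regression sketch: yield ≈ intercept + X @ coefficients.
# Each row of `features` holds one plot's UAV-derived feature values
# (illustrative; the paper's actual features and data are not reproduced).

def fit_yield_model(features, yields):
    """Ordinary least squares fit with an intercept column prepended."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta  # [intercept, coef_1, ..., coef_k]
```

On exactly linear synthetic data the fit recovers the generating coefficients, which is a quick sanity check before applying it to noisy field measurements.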
Figures:
Figure 1. Study area and arrangement of the experimental sites: (a) geographic map of the study area, (b) experiment fields for ramie, (c) nitrogen fertilizer application levels represented by number, and (d) nitrogen fertilizer application levels in each plot.
Figure 2. Variation trend of the digital surface model (DSM)-based plant heights. (a) XiangZhu No. 3 and (b) XiangZhu No. 7.
Figure 3. Effects of different nitrogen levels on the biomass of ramie. (a) XiangZhu No. 3 and (b) XiangZhu No. 7.
Figure 4. Precision analysis of the unmanned aerial vehicle (UAV)-based data. (a) Precision analysis of the UAV-based plant heights and (b) precision analysis of the UAV-based plant numbers.
Figure 5. Estimation results of the ramie yield using the multiple linear regression and stepwise linear regression models. (a) Stepwise linear regression model including only the plant number, (b) stepwise linear regression model including the plant number and plant height, and (c) the multiple linear regression model.
10 pages, 4335 KiB  
Letter
Serial MTJ-Based TMR Sensors in Bridge Configuration for Detection of Fractured Steel Bar in Magnetic Flux Leakage Testing
by Zhenhu Jin, Muhamad Arif Ihsan Mohd Noor Sam, Mikihiko Oogane and Yasuo Ando
Sensors 2021, 21(2), 668; https://doi.org/10.3390/s21020668 - 19 Jan 2021
Cited by 48 | Viewed by 6287
Abstract
Thanks to their high sensitivity, excellent scalability, and low power consumption, magnetic tunnel junction (MTJ)-based tunnel magnetoresistance (TMR) sensors have been widely implemented in various industrial fields. In nondestructive magnetic flux leakage (MFL) testing, the magnetic sensor plays a decisive role in the detection results. As highly sensitive sensors, serially integrated MTJs can suppress frequency-dependent noise and thereby lower detectivity; serial MTJ-based sensors therefore allow the design of high-performance sensors for measuring variations in magnetic fields. In the present work, we fabricated serial MTJ-based TMR sensors and connected them in a full Wheatstone bridge circuit. Because the bridge configuration suppresses noise power, the TMR sensor in the Wheatstone bridge configuration showed low noise spectral density (0.19 μV/Hz^0.5) and excellent detectivity (5.29 × 10^−8 Oe/Hz^0.5) at a frequency of 1 Hz. Furthermore, in MFL testing, the Wheatstone bridge TMR sensors provided a higher signal-to-noise ratio for inspection of a steel bar than a single TMR sensor. The single TMR sensor system could provide a strong defect signal at low lift-off (4 cm) thanks to its high sensitivity; however, as a result of its excellent detectivity, the full Wheatstone bridge-based TMR sensor detected the defect even at high lift-off (20 cm). This suggests that the developed TMR sensor provides excellent detectivity for detecting weak field changes in magnetic flux leakage testing.
(This article belongs to the Special Issue Magnetic Sensing/Functionalized Devices and Applications)
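The figure of merit quoted above, detectivity, is the standard field-equivalent noise of a magnetic sensor: voltage noise spectral density divided by sensitivity. The one-liner below states only that generic relation; the numbers in the test are illustrative round values, not the paper's measured data.

```python
# Field-equivalent noise ("detectivity") of a magnetic sensor:
# voltage noise spectral density (V/sqrt(Hz)) divided by
# sensitivity (V/Oe) gives a noise floor in Oe/sqrt(Hz).
# Lower detectivity means weaker fields can be resolved.

def detectivity(noise_density_v_per_rthz, sensitivity_v_per_oe):
    """Return field-equivalent noise in Oe/sqrt(Hz)."""
    return noise_density_v_per_rthz / sensitivity_v_per_oe
```

This is why a bridge configuration helps: it reduces the numerator (noise power) while keeping sensitivity, lowering the overall noise floor.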
Figures:
Figure 1. (a,b) Microscopy images of a tunnel magnetoresistance (TMR) sensor containing 500 magnetic tunnel junctions (MTJs) in series. The two pinned junctions were etched onto the bottom electrode of the free layer. (c) Stacking structure of the MTJ film.
Figure 2. Schematic diagram of the measurement system.
Figure 3. Schematic diagram of the developed magnetic flux leakage (MFL) testing system.
Figure 4. (a) Outputs for one serial MTJ sensor and (b) four serial MTJ sensors connected in a full Wheatstone bridge circuit at room temperature. Sensitivity is determined by the slope of the ΔV/ΔH term at zero field. The linear range is defined as a dynamic range with nonlinearity of 10% FS for each sensor.
Figure 5. (a) Noise spectral density Sv as a function of frequency for one serial MTJ sensor and the FWB-TMR sensor with a bias current of 0.7 mA at zero external field. (b) Detectivity for one serial MTJ sensor and the FWB-TMR sensor at zero external field.
Figure 6. MFL testing results of flawless and fractured steel bars for one serial MTJ sensor. The peak-to-valley amplitude of the output signal around the fracture position is defined as the defect signal ΔV. The estimated magnet north and south poles are labeled "N" and "S". The estimated defect location is defined as the center of the peak to valley along the x-axis direction. The dotted line represents the actual fracture position on the x-axis.
Figure 7. MFL testing results of fractured steel bars for one serial MTJ sensor and the FWB-TMR sensor. ΔV of 0.93 V and 0.87 V was obtained using one serial MTJ sensor and the FWB-TMR sensor, respectively. The dotted line represents the actual fracture position on the x-axis.
Figure 8. Dependence of the defect signal ΔV and variations in the estimated MFL field ΔBz on various lift-off values.
Figure 9. Dependence of the SNR of defect detection on various lift-off values.
20 pages, 617 KiB  
Article
Measurement-Based Modelling of Material Moisture and Particle Classification for Control of Copper Ore Dry Grinding Process
by Oliwia Krauze, Dariusz Buchczik and Sebastian Budzan
Sensors 2021, 21(2), 667; https://doi.org/10.3390/s21020667 - 19 Jan 2021
Cited by 6 | Viewed by 2804
Abstract
Moisture of bulk material has a significant impact on the energetic efficiency of dry grinding, the resulting particle size distribution and particle shape, and the conditions of powder transport. Consequently, moisture needs to be measured or estimated (modelled) at many points. This research investigates the mutual relations between material moisture and the particle classification process in a grinding installation. The experimental setup involves an inertial-impingement classifier and a cyclone, part of a dry grinding circuit with an electromagnetic mill and recycling of coarse particles. The tested granular material is copper ore of particle size 0–1.25 mm and relative moisture content 0.5–5%, fed into the installation at various rates. Higher moisture of the input material is found to change the operation of the classifier, and computed correlation coefficients show an increased content of fine particles in the lower product of classification. Additionally, the drying of the lower and upper classification products with respect to the moisture of the input material is modelled. Straight line models with and without saturation are estimated with the recursive least squares method, accounting for measurement errors in both the predictor and response variables. These simple models are intended for use in the automatic control system of the grinding installation.
(This article belongs to the Special Issue Humidity Sensors for Industrial and Agricultural Applications)
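The recursive least squares estimation mentioned above can be sketched for the simplest case: a scalar line fit y ≈ a·x + b with forgetting factor 1 and a large initial covariance. This is a minimal textbook RLS sketch; the paper's errors-in-variables treatment and saturation models are omitted, and all numbers are illustrative.

```python
# Standard recursive least squares for a line y ≈ a*x + b (textbook form,
# forgetting factor 1). Each sample updates the estimate without storing
# the full history, which suits an online control system.

def rls_line_fit(xs, ys):
    theta = [0.0, 0.0]              # [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]    # large initial covariance (weak prior)
    for x, y in zip(xs, ys):
        phi = [x, 1.0]              # regressor vector
        # Gain k = P phi / (1 + phi' P phi)
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        k = [Pphi[0]/denom, Pphi[1]/denom]
        # Update estimate with the prediction error
        err = y - (theta[0]*phi[0] + theta[1]*phi[1])
        theta = [theta[0] + k[0]*err, theta[1] + k[1]*err]
        # Covariance update: P <- P - k (phi' P)
        phiP = [phi[0]*P[0][0] + phi[1]*P[1][0], phi[0]*P[0][1] + phi[1]*P[1][1]]
        P = [[P[0][0] - k[0]*phiP[0], P[0][1] - k[0]*phiP[1]],
             [P[1][0] - k[1]*phiP[0], P[1][1] - k[1]*phiP[1]]]
    return theta
```

On noise-free data from a known line the recursive estimate converges to the generating slope and intercept after a handful of samples.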
Figures:
Figure 1. Installation for dry grinding with electromagnetic mill: (a) diagram, (b) photo, with cyclone in the foreground and precise classifier in the background. Credits: (a) by authors, (b) by Szymon Ogonowski.
Figure 2. Moisture model (block diagram) of the classification subsystem.
Figure 3. Experimental setup involving the classification subsystem of the grinding circuit.
Figure 4. Histogram of particle size distribution for input material. Color bar heights indicate mean values for all experiments and error bars extend to ±1 × standard deviation.
Figure 5. Partition curves for the separator fed with material of varying moisture content. The material was supplied at (a) 50%, (b) 100% of the nominal throughput of the screw feeder.
Figure 6. Degrees of separation from each experiment grouped by granularity class, in relation to input material moisture. The material was supplied to the separator at (a) 50%, (b) 100% of the nominal throughput of the screw feeder.
Figure 7. Measured moisture of both classification products related to moisture of input material, separately for different throughputs of the screw feeder: (a) lower product, 50% of nominal throughput; (b) lower product, 100% of nominal throughput; (c) upper product, 50% of nominal throughput; (d) upper product, 100% of nominal throughput. Points indicate three measurement attempts for each quantity in each experiment, error bars extend to ±1 × the sample standard deviation of the three measurements, and cross-sections of horizontal and vertical error bars mark the averages of the three measurements.
Figure 8. Straight line models fitted to measured moisture of classification products in relation to moisture of input material, separately for different products and different throughputs of the screw feeder: (a) lower product, 50% of nominal throughput; (b) lower product, 100% of nominal throughput; (c) upper product, 50% of nominal throughput; (d) upper product, 100% of nominal throughput.
Figure 9. Residual plots for the straight line models from Figure 8: (a) lower product, 50% of nominal throughput; (b) lower product, 100% of nominal throughput; (c) upper product, 50% of nominal throughput; (d) upper product, 100% of nominal throughput.
Figure 10. Straight lines with saturation fitted to measured moisture of the upper classification product in relation to moisture of input material, separately for (a) 50%, (b) 100% of the nominal throughput of the screw feeder. Data for the lower product are not plotted as they are identical to Figure 8a,b.
Figure 11. Residual plots for the straight lines with saturation from Figure 10: (a) upper product, 50% of nominal throughput; (b) upper product, 100% of nominal throughput. Data for the lower product are not plotted as they are identical to Figure 9a,b.
Figure 12. Comparison of models for the lower and upper product of classification, for 50% and 100% nominal feeder throughput: straight line models for the lower product and saturated straight line models for the upper product.
Figure 13. Comparison of measured moisture (average values) for the lower and upper product of classification, for 50% and 100% nominal feeder throughput.
19 pages, 8741 KiB  
Article
Double Ghost Convolution Attention Mechanism Network: A Framework for Hyperspectral Reconstruction of a Single RGB Image
by Wenju Wang and Jiangwei Wang
Sensors 2021, 21(2), 666; https://doi.org/10.3390/s21020666 - 19 Jan 2021
Cited by 11 | Viewed by 4355
Abstract
Current research on the reconstruction of hyperspectral images from RGB images using deep learning mainly focuses on learning complex mappings through deeper and wider convolutional neural networks (CNNs). However, the reconstruction accuracy of the hyperspectral image is not high and, among other issues [...] Read more.
Current research on the reconstruction of hyperspectral images from RGB images using deep learning mainly focuses on learning complex mappings through deeper and wider convolutional neural networks (CNNs). However, the reconstruction accuracy of the hyperspectral image is not high and, among other issues, the model for generating these images takes up too much storage space. In this study, we propose the double ghost convolution attention mechanism network (DGCAMN) framework for the reconstruction of a single RGB image to improve the accuracy of spectral reconstruction and reduce the storage occupied by the model. The proposed DGCAMN consists of a double ghost residual attention block (DGRAB) module and an optimal nonlocal block (ONB). The DGRAB module uses GhostNet and PReLU activation functions to reduce the calculation parameters of the data and the storage size of the generative model. At the same time, the proposed double output feature Convolutional Block Attention Module (DOFCBAM) is used to capture the texture details on the feature map to maximize the content of the reconstructed hyperspectral image. In the proposed ONB, the Argmax activation function is used to obtain the region with the most abundant feature information and maximize the most useful feature parameters. This helps to improve the accuracy of spectral reconstruction. These contributions enable the DGCAMN framework to achieve the highest spectral accuracy with minimal storage consumption. The proposed method has been applied to the NTIRE 2020 dataset. Experimental results show that the proposed DGCAMN method achieves higher spectral reconstruction accuracy than advanced deep learning methods while greatly reducing storage consumption. Full article
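The storage saving that the abstract attributes to GhostNet-style convolutions can be illustrated with a back-of-the-envelope parameter count. This is a sketch of the standard Ghost-module arithmetic, not the authors' exact DGRAB configuration; the channel sizes and kernel sizes below are assumed for illustration:

```python
def conv_params(c_in, c_out, k):
    # Parameters of a standard k x k convolution (bias terms omitted).
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    # Ghost module: a primary conv produces c_out // s "intrinsic" maps, then
    # cheap d x d depthwise ops derive the remaining (s - 1) "ghost" maps
    # per intrinsic map.
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d
    return primary + cheap

standard = conv_params(64, 128, 3)  # 73728 parameters
ghost = ghost_params(64, 128, 3)    # 37440 parameters, roughly a 2x saving
```

With the default ratio s = 2, the module replaces about half of the full convolution's output channels with near-free depthwise operations, which is where the roughly s-fold reduction in model storage comes from.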
(This article belongs to the Special Issue Computational Spectral Imaging)
Show Figures

Figure 1
<p>Double Ghost Convolution Attention Mechanism Network framework.</p>
Full article ">Figure 2
<p>Double Ghost Residual Attention Module.</p>
Full article ">Figure 3
<p>Ghost Network.</p>
Full article ">Figure 4
<p>Convolution Attention Mechanism Module Diagram.</p>
Full article ">Figure 5
<p>Optimal Nonlocal Block.</p>
Full article ">Figure 6
<p>RMSE variation curve with m.</p>
Full article ">Figure 7
<p>RMSE variation curve with batch size.</p>
Full article ">Figure 8
<p>NTIRE 2020 HS verification set for 451 RGB images as determined by YAN, HRN, AWAN, and our method. The reconstructed and real image visualized in the 16th channel map is shown.</p>
Full article ">Figure 9
<p>A spectral reversion of the HSI reversion error image in band 31. The analysis uses a validation set for NTIRE 2020.</p>
Full article ">Figure 10
<p>NTIRE 2020 HS validation for NONE, channel, spatial, CBAM, CBAM + ResNet, and DOFCBAM.</p>
Full article ">Figure 11
<p>NTIRE 2020 HS validation: (<b>a</b>) Visualization for YAN, HRN, AWAN, and our work on the NTIRE 2020 HS validation set. (<b>b</b>) Visualization diagrams of YAN, HRN, AWAN, and our work on the NTIRE 2020 HS validation set.</p>
Full article ">Figure 12
<p>The spectral response curves of multiple spatial points selected from the reconstructed NTIRE 2020 HS verification set. As for <a href="#sensors-21-00666-f011" class="html-fig">Figure 11</a>a: (<b>a</b>) Comparison of results as spectral reflectance curves for the validation set of the different algorithms and our work with the NTIRE 2020 HS verification set; As for <a href="#sensors-21-00666-f011" class="html-fig">Figure 11</a>b: (<b>b</b>) Comparison of results as spectral reflectance curves for the validation set of the different algorithms and our work with the NTIRE 2020 HS verification set.</p>
Full article ">
30 pages, 7999 KiB  
Article
Multi-Zone Authentication and Privacy-Preserving Protocol (MAPP) Based on the Bilinear Pairing Cryptography for 5G-V2X
by Shimaa A. Abdel Hakeem and HyungWon Kim
Sensors 2021, 21(2), 665; https://doi.org/10.3390/s21020665 - 19 Jan 2021
Cited by 18 | Viewed by 2907
Abstract
5G-Vehicle-to-Everything (5G-V2X) supports high-reliability and low-latency autonomous services and applications. Proposing an efficient security solution that supports multi-zone broadcast authentication and satisfies the 5G requirements is a critical challenge. In the 3rd Generation Partnership Project (3GPP) Release 16 standard, for Cellular Vehicle-to-Everything [...] Read more.
5G-Vehicle-to-Everything (5G-V2X) supports high-reliability and low-latency autonomous services and applications. Proposing an efficient security solution that supports multi-zone broadcast authentication and satisfies the 5G requirements is a critical challenge. In the 3rd Generation Partnership Project (3GPP) Release 16 standard, Cellular Vehicle-to-Everything (C-V2X) single-cell communication is suggested to reuse the IEEE 1609.2 security standard, which utilizes Public Key Infrastructure (PKI) cryptography. PKI-based solutions provide a high security level; however, they suffer from high communication and computation overhead, due to the large size of the attached certificate and signature. In this study, we propose a lightweight Multi-Zone Authentication and Privacy-Preserving Protocol (MAPP) based on bilinear pairing cryptography and a short-size signature. The MAPP protocol provides three different authentication methods that enable secure broadcast authentication over multiple zones of large-scale base stations, using a single message and a single short signature. We also propose a centralized dynamic key generation method for multiple zones. We implemented and analyzed the proposed key generation and authentication methods using an authentication simulator and a bilinear pairing library. The proposed methods significantly reduce the signature generation time by 16–80 times compared to the previous methods. Additionally, the proposed methods significantly reduce the signature verification time by 10–16 times compared to the two previous methods. The three proposed authentication methods achieved a substantial speed-up in both signature generation and verification time by using a short bilinear pairing signature. Full article
(This article belongs to the Section Communications)
Show Figures

Figure 1
<p>The proposed protocol architecture.</p>
Full article ">Figure 2
<p>Single-zone and multi-zone communication in the 5G-V2X network model.</p>
Full article ">Figure 3
<p>BSs registration with CA.</p>
Full article ">Figure 4
<p>Vehicle primary authorization.</p>
Full article ">Figure 5
<p>Single-zone communication in the TCA method.</p>
Full article ">Figure 6
<p>Overlapped area communication in TCA method.</p>
Full article ">Figure 7
<p>Multi-zone communication in TCA.</p>
Full article ">Figure 8
<p>Multi-zone authentication using the signature concatenation (SCA) method.</p>
Full article ">Figure 9
<p>Multi-zone authentication using Receiver-Centric Authentication (RCA) method.</p>
Full article ">Figure 10
<p>The proposed protocol message structure. (<b>a</b>) TCA method, (<b>b</b>) SCA method, and (<b>c</b>) RCA method. Where <math display="inline"><semantics> <mrow> <msub> <mi>m</mi> <mi>i</mi> </msub> </mrow> </semantics></math> represents the message payload, <math display="inline"><semantics> <mrow> <mi>p</mi> <mi>i</mi> <msub> <mi>d</mi> <mi>i</mi> </msub> </mrow> </semantics></math> represents the pseudo-identity of <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>i</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>Z</mi> <mi>I</mi> <mi>D</mi> </mrow> </msub> </mrow> </semantics></math> represents a list of zone IDs. <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>s</mi> </msub> </mrow> </semantics></math> represents the time stamp, <math display="inline"><semantics> <mrow> <mi>p</mi> <msub> <mi>k</mi> <mi>i</mi> </msub> </mrow> </semantics></math> represents the public key of <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>i</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>i</mi> </msub> </mrow> </semantics></math> represents the signature over the message. In our implementation, a signature <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>i</mi> </msub> <mo>∈</mo> <msub> <mi>G</mi> <mn>1</mn> </msub> </mrow> </semantics></math> and the public key <math display="inline"><semantics> <mrow> <mi>p</mi> <msub> <mi>k</mi> <mi>i</mi> </msub> </mrow> </semantics></math> is <math display="inline"><semantics> <mrow> <mo>∈</mo> <msub> <mi>G</mi> <mn>2</mn> </msub> </mrow> </semantics></math>. 
The total communication overhead of one message is <math display="inline"><semantics> <mrow> <mo>|</mo> <msub> <mi>L</mi> <mrow> <mi>Z</mi> <mi>I</mi> <mi>D</mi> </mrow> </msub> <mrow> <mo>|</mo> <mrow> <mo>+</mo> <mo>|</mo> <mi>p</mi> <mi>i</mi> <msub> <mi>d</mi> <mi>i</mi> </msub> </mrow> <mo>|</mo> </mrow> <mo>+</mo> <mrow> <mo>|</mo> <mrow> <msub> <mi>T</mi> <mi>s</mi> </msub> </mrow> <mo>|</mo> </mrow> <mo>+</mo> <mrow> <mo>|</mo> <mrow> <mi>p</mi> <msub> <mi>k</mi> <mi>i</mi> </msub> </mrow> <mo>|</mo> </mrow> <mo>+</mo> <mrow> <mo>|</mo> <mrow> <msub> <mi>σ</mi> <mi>i</mi> </msub> </mrow> <mo>|</mo> </mrow> <mo>=</mo> </mrow> </semantics></math> 4 + 4 + 1 + 64 + 32 = 105 bytes.</p>
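The byte count in the caption above can be reproduced directly from the listed field sizes (the field widths are taken from the caption; the dictionary keys are illustrative transliterations of the symbols):

```python
# Field sizes in bytes: zone-ID list, pseudo-identity, timestamp,
# public key pk_i (an element of G2), and short signature sigma_i (in G1).
message_fields = {"L_ZID": 4, "pid_i": 4, "T_s": 1, "pk_i": 64, "sigma_i": 32}
overhead = sum(message_fields.values())
print(overhead)  # 105 bytes per broadcast message
```

Note that the signature and public key dominate the per-message overhead, which is why shortening the signature (rather than the metadata) is the main lever for reducing communication cost.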
Full article ">Figure 11
<p>Comparison of communication cost per message for the three proposed authentication methods and the compared security protocols for single-zone scenario.</p>
Full article ">Figure 12
<p>Comparison of communication cost for the proposed authentication methods and the compared security protocols for the multi-zone scenario.</p>
Full article ">Figure 13
<p>A comparison of signature generation time and verification time for the proposed TCA method and the six previous methods in a single-zone scenario.</p>
Full article ">Figure 14
<p>Comparison of signature generation time for the three proposed authentication methods and the two previous methods for multi-zone scenarios.</p>
Full article ">Figure 15
<p>Comparison of signature verification time for the three proposed authentication methods and the two previous methods for multi-zone scenarios.</p>
Full article ">
17 pages, 7359 KiB  
Article
Optimization of 3D Point Clouds of Oilseed Rape Plants Based on Time-of-Flight Cameras
by Zhihong Ma, Dawei Sun, Haixia Xu, Yueming Zhu, Yong He and Haiyan Cen
Sensors 2021, 21(2), 664; https://doi.org/10.3390/s21020664 - 19 Jan 2021
Cited by 9 | Viewed by 3855
Abstract
Three-dimensional (3D) structure is an important morphological trait of plants for describing their growth and biotic/abiotic stress responses. Various methods have been developed for obtaining 3D plant data, but the data quality and equipment costs are the main factors limiting their development. Here, [...] Read more.
Three-dimensional (3D) structure is an important morphological trait of plants for describing their growth and biotic/abiotic stress responses. Various methods have been developed for obtaining 3D plant data, but data quality and equipment costs are the main factors limiting their development. Here, we propose a method to improve the quality of 3D plant data using the time-of-flight (TOF) camera Kinect V2. A k-dimensional (k-d) tree was applied to the spatial topological relationships to search for points. Background noise points were then removed with a minimum oriented bounding box (MOBB) combined with a pass-through filter, while outliers and flying-pixel points were removed based on viewpoints and surface normals. After being smoothed with a bilateral filter, the 3D plant data were registered and meshed. We adjusted the mesh patches to eliminate layered points. The results showed that the adjusted patches were closer together: the average distance between the patches was 1.88 × 10<sup>−3</sup> m, and the average angle was 17.64°, which were 54.97% and 48.33% of the values before optimization. The proposed method performed better in reducing noise and the local layered-points phenomenon, and it could help to determine 3D structure parameters more accurately from point clouds and mesh models. Full article
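The pass-through stage of the pipeline is the simplest piece to sketch: once the cloud has been rotated into the MOBB frame, points outside an axis-aligned interval are discarded as background. A minimal stand-in in plain Python (the MOBB computation itself is omitted; coordinates and limits are invented):

```python
def pass_through(points, axis, lo, hi):
    """Keep only points whose coordinate on `axis` lies within [lo, hi]."""
    return [p for p in points if lo <= p[axis] <= hi]

# Toy cloud: two points near the plant, one background point far along z.
cloud = [(0.10, 0.20, 0.45), (0.30, 0.15, 0.60), (0.12, 0.22, 2.10)]
plant_only = pass_through(cloud, axis=2, lo=0.0, hi=1.0)  # drops the background point
```

In practice the filter is applied per axis in the bounding-box frame, so that the retained region hugs the plant rather than the sensor's world axes.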
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)
Show Figures

Figure 1
<p>Acquisition system and point cloud acquisition. (<b>a</b>) Acquisition system; (<b>b</b>) the process of obtaining a single-frame point cloud.</p>
Full article ">Figure 2
<p>The flow chart of the proposed method.</p>
Full article ">Figure 3
<p>The relationship between the object, minimum oriented bounding box (MOBB) and coordinate in 2D space, (<b>a</b>) under ideal conditions and (<b>b</b>) under general conditions.</p>
Full article ">Figure 4
<p>Relationship between triangular patches, (<b>a</b>–<b>c</b>) are 3D view images while (<b>d</b>–<b>f</b>) are front view images. (<b>a</b>,<b>d</b>) Intersecting patches. (<b>b</b>,<b>e</b>) Plane intersecting patches. (<b>c</b>,<b>f</b>) Parallel patches.</p>
Full article ">Figure 5
<p>The valid points (red points). (<b>a</b>) Original point cloud. (<b>b</b>) Original point cloud with valid points.</p>
Full article ">Figure 6
<p>Removal of BN points. (<b>a</b>) Original point cloud. (<b>b</b>) Point cloud rotated by MOBB. (<b>c</b>) Original point cloud after filtering with pass-through filter. (<b>d</b>) The rotated point cloud after filtering with pass-through filter.</p>
Full article ">Figure 7
<p>Results of different denoising methods: (<b>a</b>–<b>d</b>) front view images, (<b>e</b>–<b>h</b>) side view images. (<b>a</b>,<b>e</b>) The original point cloud. (<b>b</b>,<b>f</b>) The result after using the radius-based outlier filter. (<b>c</b>,<b>g</b>) The result after using the radius-density-based outlier filter. (<b>d</b>,<b>h</b>) The result after using the proposed method.</p>
Full article ">Figure 8
<p>The distributions of the normals of (<b>a</b>) the original points and (<b>b</b>) the points after smoothing.</p>
Full article ">Figure 9
<p>The results for the point cloud after registration: (<b>a</b>) Meshes of point cloud without optimization. (<b>b</b>) Meshes of point cloud after optimization.</p>
Full article ">Figure 10
<p>(<b>a</b>) The time of each step in the proposed method. (<b>b</b>) The proportion of the time cost for each step.</p>
Full article ">
25 pages, 7511 KiB  
Article
Development of a Real-Time Human-Robot Collaborative System Based on 1 kHz Visual Feedback Control and Its Application to a Peg-in-Hole Task
by Yuji Yamakawa, Yutaro Matsui and Masatoshi Ishikawa
Sensors 2021, 21(2), 663; https://doi.org/10.3390/s21020663 - 19 Jan 2021
Cited by 9 | Viewed by 3517
Abstract
In this research, we focused on Human-Robot collaboration. There were two goals: (1) to develop and evaluate a real-time Human-Robot collaborative system, and (2) to achieve concrete tasks such as collaborative peg-in-hole using the developed system. We proposed an algorithm for visual sensing [...] Read more.
In this research, we focused on Human-Robot collaboration. There were two goals: (1) to develop and evaluate a real-time Human-Robot collaborative system, and (2) to achieve concrete tasks such as collaborative peg-in-hole using the developed system. We proposed an algorithm for visual sensing and robot hand control to perform collaborative motion, and we analyzed the stability of the collaborative system and a so-called collaborative error caused by image processing and latency. We achieved collaborative motion using this developed system and evaluated the collaborative error on the basis of the analysis results. Moreover, we aimed to realize a collaborative peg-in-hole task that required a system with high speed and high accuracy. To achieve this goal, we analyzed the conditions required for performing the collaborative peg-in-hole task from the viewpoints of geometric, force and posture conditions. Finally, in this work, we show the experimental results and data of the collaborative peg-in-hole task, and we examine the effectiveness of our collaborative system. Full article
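The 1 kHz visual feedback loop can be caricatured as a discrete PD controller (the control law named in Figure 7) driving the hand toward the tracked board position. The gains, the one-dimensional velocity-command plant, and the setpoint below are invented for illustration; the real system controls multiple axes from camera measurements:

```python
def pd_output(error, prev_error, kp, kd, dt):
    # Discrete PD law: proportional term plus finite-difference derivative term.
    return kp * error + kd * (error - prev_error) / dt

dt = 0.001             # 1 kHz control period
kp, kd = 100.0, 0.1    # illustrative gains
x, target = 0.0, 1.0   # hand position and board position (arbitrary units)
prev_err = target - x
for _ in range(1000):  # one second of control
    err = target - x
    x += pd_output(err, prev_err, kp, kd, dt) * dt  # toy velocity-command plant
    prev_err = err
# After 1000 steps, x has converged to within a small tolerance of the target.
```

The point of the high frame rate is visible even in this toy: the per-step correction is small, so the tracking error contracts smoothly every millisecond instead of in large, oscillation-prone jumps.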
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1
<p>Goal of this research. HRI: Human-Robot interaction.</p>
Full article ">Figure 2
<p>Purpose of this research [<a href="#B24-sensors-21-00663" class="html-bibr">24</a>].</p>
Full article ">Figure 3
<p>Human-Robot collaborative system [<a href="#B22-sensors-21-00663" class="html-bibr">22</a>].</p>
Full article ">Figure 4
<p>Mechanism of high-speed robot hand [<a href="#B25-sensors-21-00663" class="html-bibr">25</a>].</p>
Full article ">Figure 5
<p>Configuration of axes on the board [<a href="#B22-sensors-21-00663" class="html-bibr">22</a>].</p>
Full article ">Figure 6
<p>Board with hole and peg used in the collaborative peg-in-hole task.</p>
Full article ">Figure 7
<p>Control flow of Human-Robot collaborative system. PD: proportional derivative.</p>
Full article ">Figure 8
<p>Flow of image processing and measurement of board state.</p>
Full article ">Figure 9
<p>Relationship between transformation matrices <math display="inline"><semantics> <mi mathvariant="bold-italic">T</mi> </semantics></math> [<a href="#B22-sensors-21-00663" class="html-bibr">22</a>].</p>
Full article ">Figure 10
<p>Inverse kinematics calculation of robot hand.</p>
Full article ">Figure 11
<p>Sequential photographs of experimental results [<a href="#B22-sensors-21-00663" class="html-bibr">22</a>]. (<b>a</b>–<b>h</b>): the time interval of the sequential photographs is 1 s.</p>
Full article ">Figure 12
<p>Sequential photographs of experimental results (side view). (<b>a</b>–<b>h</b>): the time interval of the sequential photographs is 0.5 s.</p>
Full article ">Figure 13
<p>Data of experimental results.</p>
Full article ">Figure 14
<p>Theoretical collaborative error and actual collaborative error with various frame rates.</p>
Full article ">Figure 15
<p>Comparison between various frame rates.</p>
Full article ">Figure 16
<p>Peg shape and hole.</p>
Full article ">Figure 17
<p>Human-Robot cooperative peg-in-hole task. In addition to the illustrated force, gravitational acceleration <span class="html-italic">g</span> always acts on the board. (<b>a</b>) in the case of insertion (downward motion); (<b>b</b>) in the case of removal (upward motion).</p>
Full article ">Figure 18
<p>Collaborative peg-in-hole task. (<b>a</b>–<b>d</b>): collaborative motion; (<b>e</b>–<b>i</b>): collaborative peg-in-hole.</p>
Full article ">Figure 19
<p>Data of collaborative peg-in-hole task.</p>
Full article ">
16 pages, 836 KiB  
Article
Supervised SVM Transfer Learning for Modality-Specific Artefact Detection in ECG
by Jonathan Moeyersons, John Morales, Amalia Villa, Ivan Castro, Dries Testelmans, Bertien Buyse, Chris Van Hoof, Rik Willems, Sabine Van Huffel and Carolina Varon
Sensors 2021, 21(2), 662; https://doi.org/10.3390/s21020662 - 19 Jan 2021
Cited by 4 | Viewed by 2682
Abstract
The electrocardiogram (ECG) is an important diagnostic tool for identifying cardiac problems. Nowadays, new ways to record ECG signals outside of the hospital are being investigated. A promising technique is capacitively coupled ECG (ccECG), which allows ECG signals to be recorded through insulating [...] Read more.
The electrocardiogram (ECG) is an important diagnostic tool for identifying cardiac problems. Nowadays, new ways to record ECG signals outside of the hospital are being investigated. A promising technique is capacitively coupled ECG (ccECG), which allows ECG signals to be recorded through insulating materials. However, as the ECG is no longer recorded in a controlled environment, this inevitably implies the presence of more artefacts. Artefact detection algorithms are used to detect and remove these. Typically, training a new algorithm requires a lot of ground-truth data, which are costly to obtain. As many labelled contact ECG datasets exist, we can avoid labelling new ccECG signals by making use of previous knowledge. Transfer learning can be used for this purpose. Here, we applied transfer learning to optimise the performance of an artefact detection model, trained on contact ECG, towards ccECG. We used ECG recordings from three different datasets, recorded with three recording devices. We showed that the accuracy of a contact-ECG classifier improved by 5–8% by means of transfer learning when tested on a ccECG dataset. Furthermore, we showed that only 20 segments of the ccECG dataset are sufficient to significantly increase the accuracy. Full article
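The transfer step the abstract describes, reusing a classifier trained on contact ECG and adapting it with only a handful of labelled ccECG segments, can be mimicked with a deliberately tiny one-dimensional stand-in. This is not the paper's SVM; the threshold "classifier" and all feature values are invented purely to show why a domain shift breaks the source model and why a few target labels repair it:

```python
def midpoint_threshold(clean, noisy):
    # "Classifier" = a threshold halfway between the two class means.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(clean) + mean(noisy)) / 2.0

# Source domain (contact ECG): a quality feature separates the classes at ~0.6.
src_thresh = midpoint_threshold([0.85, 0.90, 0.95], [0.25, 0.30, 0.35])

# Target domain (ccECG): the same feature is shifted downward, so the source
# threshold misclassifies every clean target segment...
tgt_clean, tgt_noisy = [0.50, 0.55, 0.58], [0.10, 0.15]
misses = [v for v in tgt_clean if v < src_thresh]

# ...while re-fitting the threshold on the few labelled target segments fixes it.
tgt_thresh = midpoint_threshold(tgt_clean, tgt_noisy)
ok = all(v > tgt_thresh for v in tgt_clean) and all(v < tgt_thresh for v in tgt_noisy)
```

The same intuition carries over to the SVM case: the decision boundary learned on contact ECG is nearly right for ccECG, so a small labelled target subset is enough to shift it into place.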
(This article belongs to the Special Issue Recent Advances in ECG Monitoring)
Show Figures

Figure 1
<p>Comparison between a clean (blue) and noisy (red) segment for both contact (<b>a</b>,<b>b</b>) and non-contact (<b>d</b>,<b>e</b>) signals. A clear difference in autocorrelation function (ACF) (<b>c</b>,<b>f</b>) shape can be observed between the two signals.</p>
Full article ">Figure 2
<p>(<b>a</b>) An electrocardiogram (ECG) segment that contains a large artefact. (<b>b</b>) The respective ACF of the ECG segment (red), together with the ACF of the clean non-contact segment (blue) of <a href="#sensors-21-00662-f001" class="html-fig">Figure 1</a>. The dotted red line in the plot indicates the first local minimum and the dotted blue line indicates the maximum amplitude at 35 ms.</p>
Full article ">Figure 3
<p>Comparison between a clean (blue) and noisy (red) ECG signal (<b>a</b>,<b>c</b>), together with the ACF’s of their respective sliding windows (<b>b</b>,<b>d</b>). The difference between the ACF’s within the search window, as depicted by the black box, is clearly higher for the noisy ECG signal.</p>
Full article ">Figure 4
<p>We tested the base classifiers using 5-fold cross-validation. Additionally, each base classifier was also tested on all folds of the other datasets. This results in a total of 25 performance evaluations.</p>
Full article ">Figure 5
<p>Feature space of the three datasets. Green dots indicate clean samples, red dots indicate noisy samples: (<b>a</b>) PSG dataset. (<b>b</b>) HH dataset. (<b>c</b>) CC dataset.</p>
Full article ">Figure 6
<p>Comparison of the performance on the CC dataset without (blue bars) and with (red bars) transfer learning. (<b>a</b>) Trained on the PSG dataset. (<b>b</b>) Trained on the HH dataset.</p>
Full article ">Figure 7
<p>The effect on performance of different subset sizes and the two sampling techniques. The first row (<b>a</b>–<b>c</b>) corresponds to the performance of the base classifier when trained on the PSG dataset and the second row (<b>d</b>–<b>f</b>) when trained on the HH dataset. The performance at 0 indicates the performance of the base classifier. As this is evaluated only 25 times instead of the 250 times for the transfer learning performances, we used a blue boxplot. The blue circles below the boxplot indicate outlier values. The blue line that starts at the median value at 0 corresponds to the random sampling procedure and the red line to the fixed-size procedure. The dotted lines indicate the interquartile ranges. These provide insight into the variability of the classifier performance. The black boxes indicate a significantly lower performance of the active sampling strategy, compared to random sampling.</p>
Full article ">Figure 8
<p>The difference in entropy for the clean (<b>a</b>) and noisy (<b>b</b>) samples of the CC subsets. The blue line indicates the random sampling and the red line indicates the fixed-size sampling approach. The full line indicates the median values and the dotted lines indicate the interquartile ranges. We can observe that the entropy is consistently higher for the fixed-size sampling.</p>
Full article ">Figure 9
<p>The sensitivity (Se) of the CC base classifier when applied to the HH dataset. The full line corresponds to the median values and the dotted lines indicate the interquartile ranges. We used a different graphical representation for the results of the base classifiers at zero, as these results originate from only 25 folds, compared to the 250 folds of the transfer learning approach. A strong increase could already be observed after including only 20 samples.</p>
Full article ">
15 pages, 3273 KiB  
Article
A Novel Hybrid Approach for Risk Evaluation of Vehicle Failure Modes
by Wencai Zhou, Zhaowen Qiu, Shun Tian, Yongtao Liu, Lang Wei and Reza Langari
Sensors 2021, 21(2), 661; https://doi.org/10.3390/s21020661 - 19 Jan 2021
Cited by 4 | Viewed by 2309
Abstract
This paper addresses the problem of evaluating vehicle failure modes efficiently during the driving process. Generally, the most critical factors for preventing risk in potential failure modes are identified by the experience of experts through the widely used failure mode and effect analysis [...] Read more.
This paper addresses the problem of evaluating vehicle failure modes efficiently during the driving process. Generally, the most critical factors for preventing risk in potential failure modes are identified by the experience of experts through the widely used failure mode and effect analysis (FMEA). However, it has previously been difficult to evaluate the vehicle failure mode with crisp values. In this paper, we propose a novel hybrid scheme based on a cost-based FMEA, fuzzy analytic hierarchy process (FAHP), and extended fuzzy multi-objective optimization by ratio analysis plus full multiplicative form (EFMULTIMOORA) to evaluate vehicle failure modes efficiently. Specifically, vehicle failure modes are first screened out by cost-based FMEA according to maintenance information, and then the weights of the three criteria of maintenance time (T), maintenance cost (C), and maintenance benefit (B) are calculated using FAHP and the rankings of failure modes are determined by EFMULTIMOORA. Different from existing schemes, the EFMULTIMOORA in our proposed hybrid scheme calculates the ranking of vehicle failure modes based on three new risk factors (T, C, and B) through fuzzy linguistic terms for order preference. Furthermore, the applicability of the proposed hybrid scheme is presented by conducting a case study involving vehicle failure modes of one common vehicle type (Hyundai), and a sensitivity analysis and comparisons are conducted to validate the effectiveness of the obtained results. In summary, our numerical analyses indicate that the proposed method can effectively help enterprises and researchers in the risk evaluation and the identification of critical vehicle failure modes. Full article
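The ratio-system step at the heart of the MOORA family that the paper extends can be sketched for crisp values: each criterion column is normalised by its Euclidean norm, then benefit criteria are added and cost criteria subtracted to score each alternative. The numbers below, and the choice of treating T and C as cost criteria and B as a benefit criterion, are illustrative assumptions only; the fuzzy extension (EFMULTIMOORA) and the full multiplicative form are omitted:

```python
import math

def ratio_system(rows, benefit):
    # rows: one list of criterion values per alternative.
    # benefit: True for a benefit criterion, False for a cost criterion.
    # Returns one ratio-system score per alternative (higher = preferred).
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(len(rows[0]))]
    return [
        sum((1 if benefit[j] else -1) * r[j] / norms[j] for j in range(len(r)))
        for r in rows
    ]

# Three hypothetical failure modes scored on (T, C, B).
modes = [[10, 100, 0.5], [5, 300, 0.9], [8, 200, 0.7]]
scores = ratio_system(modes, benefit=[False, False, True])
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

MULTIMOORA then aggregates this ranking with those of the reference-point and full-multiplicative subsystems; the sketch shows only the first of the three.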
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1
<p>Fuzzy triangular number with 10-cut.</p>
Full article ">Figure 2
<p>Flowchart of the proposed hybrid evaluation method.</p>
Full article ">Figure 3
<p>Results of ranking in sensitivity analysis.</p>
Full article ">
18 pages, 3129 KiB  
Review
Resonance Energy Transfer-Based Biosensors for Point-of-Need Diagnosis—Progress and Perspectives
by Felix Weihs, Alisha Anderson, Stephen Trowell and Karine Caron
Sensors 2021, 21(2), 660; https://doi.org/10.3390/s21020660 - 19 Jan 2021
Cited by 18 | Viewed by 4612
Abstract
The demand for point-of-need (PON) diagnostics for clinical and other applications is continuing to grow. Much of this demand is currently serviced by biosensors, which combine a bioanalytical sensing element with a transducing device that reports results to the user. Ideally, such devices [...] Read more.
The demand for point-of-need (PON) diagnostics for clinical and other applications is continuing to grow. Much of this demand is currently serviced by biosensors, which combine a bioanalytical sensing element with a transducing device that reports results to the user. Ideally, such devices are easy to use and do not require special skills of the end user. Depending on the application, PON devices may need to be capable of measuring low levels of analytes very rapidly, and it is often helpful if they are also portable. To date, only two transduction modalities, colorimetric lateral flow immunoassays (LFIs) and electrochemical assays, fully meet these requirements and have been widely adopted at the point-of-need. These modalities are either non-quantitative (LFIs) or highly analyte-specific (electrochemical glucose meters), therefore requiring considerable modification if they are to be co-opted for measuring other biomarkers. Förster Resonance Energy Transfer (RET)-based biosensors incorporate a quantitative and highly versatile transduction modality that has been extensively used in biomedical research laboratories. RET-biosensors have not yet been applied at the point-of-need despite their advantages over other established techniques. In this review, we explore and discuss recent developments in the translation of RET-biosensors for PON diagnoses, including their potential benefits and drawbacks. Full article
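RET transduction is quantitative because transfer efficiency depends steeply on the donor-acceptor distance. The standard Förster relation makes this concrete (R<sub>0</sub> is the pair-specific Förster radius at which efficiency is exactly 50%; the 5 nm value below is only a typical order of magnitude, not a value from the review):

```python
def fret_efficiency(r, r0):
    # Foerster relation: E = 1 / (1 + (r / R0)^6).
    return 1.0 / (1.0 + (r / r0) ** 6)

E_at_r0 = fret_efficiency(5.0, 5.0)  # 0.5 by definition of R0
E_far = fret_efficiency(10.0, 5.0)   # ~0.015: doubling the distance nearly abolishes transfer
```

The sixth-power dependence is what lets a binding or cleavage event that moves donor and acceptor by only a few nanometres produce a large, ratiometric signal change.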
(This article belongs to the Special Issue Biennial State-of-the-Art Sensors Technology in Australia 2019-2020)
Show Figures

Figure 1
<p>(<b>a</b>) Overview of different variations on the Resonance Energy Transfer principle, together with a range of different types of biological recognition element. Red domains indicate recognition elements, and orange domains indicate targeted analytes. (<b>b</b>) Examples of different ways that energy donors and acceptors can be combined with biological recognition elements. All illustrations are simplified and not shown to scale. Images were taken from the following sources: The “Fluorescent proteins” image illustrates the Green Fluorescent Protein (doi:10.2210/rcsb_pdb/mom_2003_6). The “Organic dyes” structure shows cyanine. The quantum dot image was taken from Reference [<a href="#B49-sensors-21-00660" class="html-bibr">49</a>]. The “Luciferase–Luciferin” image is composed of the Firefly luciferase (doi:10.2210/rcsb_pdb/mom_2006_6) and D-Luciferin as its substrate. The structure of horseradish peroxidase was taken from the Protein Data Bank (1W4E, doi:10.2210/pdb1W4W/pdb). The dark quencher is the black hole quencher BHQ1 from atdbio (<a href="https://www.atdbio.com/content/35/FRET-fluorescence-quenchers" target="_blank">https://www.atdbio.com/content/35/FRET-fluorescence-quenchers</a>).</p>
Figure 2
<p>Examples of point-of-need (PON)-suitable applications using compact RET detection devices in combination with microfluidics. In one example, Bioluminescence Resonance Energy Transfer (BRET)-based biosensors are run on a microfluidics chip integrated into a compact device containing micro photomultiplier tubes (µPMTs). In other examples, FRET biosensor signals are recorded by using a fluorescence microscope or laser excitation followed by detection using PMTs. (<b>a</b>) Lymphocytes secreting Matrix metalloproteinase-9 (MMP9) are trapped by antibodies in a micro well located on a microfluidic chip. Peptides labeled with Fluorescein isothiocyanate (FITC) and 4-(dimethylaminoazo)benzene-4-carboxylic acid (DABCYL), containing MMP9-specific cleavage sites, are immobilized close to the micro wells to detect any MMP9 activity released by the cells [<a href="#B52-sensors-21-00660" class="html-bibr">52</a>]. (<b>b</b>) Aptamers labeled with quantum dots (QDs) and specific for cancer-related cells or protein markers are attached to a graphene oxide layer. Binding of the target cells or proteins to their specific aptamer results in a release of the aptamer from graphene oxide, activating the fluorescence signal of the quantum dot/organic dye [<a href="#B53-sensors-21-00660" class="html-bibr">53</a>,<a href="#B54-sensors-21-00660" class="html-bibr">54</a>]. (<b>c</b>) CYBERTONGUE<sup>®</sup> protease biosensors consist of the Renilla luciferase RLuc8 connected to the Green Fluorescent Protein variant GFP<sup>2</sup> through a peptide linker containing specific recognition sites for the target protease. Proteolytic activity exerted on the connecting peptide results in the dissociation of GFP<sup>2</sup> from RLuc8, leading to a profound change in BRET ratio [<a href="#B48-sensors-21-00660" class="html-bibr">48</a>,<a href="#B55-sensors-21-00660" class="html-bibr">55</a>,<a href="#B56-sensors-21-00660" class="html-bibr">56</a>]. 
(<b>d</b>) The CYBERTONGUE<sup>®</sup> lactose biosensor consists of a lactose-binding protein tagged with RLuc8 and GFP<sup>2</sup> that undergoes a conformational change upon binding to lactose [<a href="#B57-sensors-21-00660" class="html-bibr">57</a>]. Binding of lactose results in the distancing of the two BRET components, thereby changing the BRET ratio.</p>
Figure 3
<p>Overview of the Cybertongue<sup>®</sup> BRET analysis device: (<b>a</b>) functional schematic of the measurement device, (<b>b</b>) schematic design of a microfluidic chip used for protease assays and (<b>c</b>) image of compact microfluidics device with closed lid. Figure was taken from Weihs et al. [<a href="#B48-sensors-21-00660" class="html-bibr">48</a>], with permission.</p>
Figure 4
<p>Examples of PON-suitable applications using a digital camera or smartphone in combination with micro plates or paper-based devices. BRET-based biosensors are spotted on paper-based analytical devices (PADs), and signals are recorded with a digital camera or smart phone. FRET biosensors require an additional source of excitation, such as a light-emitting diode (LED) or UV-lamp. (<b>a</b>) LUMABS biosensors (LUMinescent AntiBody Sensor) are comprised of the luciferase NanoLuc and the fluorescent protein mNeonGreen connected through a linker containing linear epitopes for antibodies of interest. In the absence of the antibody of interest, NanoLuc and mNeonGreen dimerize through the connector domains [<a href="#B73-sensors-21-00660" class="html-bibr">73</a>,<a href="#B74-sensors-21-00660" class="html-bibr">74</a>,<a href="#B78-sensors-21-00660" class="html-bibr">78</a>]. (<b>b</b>) LUMABs were modified by replacing linear epitopes with unnatural amino acids acting as a chemical handle to introduce analogues of analytes of interest. An antibody binding to these analogues is introduced, separating NanoLuc and mNeonGreen. In the presence of the analyte, antibodies preferentially bind to the analyte instead of its analogues incorporated in the LUMAB biosensor [<a href="#B77-sensors-21-00660" class="html-bibr">77</a>]. (<b>c</b>) LUCIDs (luciferase-based indicators of drugs) are protein fusions comprising NanoLuc, a receptor protein for the drug of interest and the self-labeling enzyme SNAP. A SNAP-functionalized organic dye Cy3 is attached to an analyte analogue, which is covalently incorporated by the SNAP protein. In the presence of the analyte, the receptor preferentially binds the analyte over its SNAP–Cy3-bound analogue [<a href="#B79-sensors-21-00660" class="html-bibr">79</a>,<a href="#B80-sensors-21-00660" class="html-bibr">80</a>,<a href="#B81-sensors-21-00660" class="html-bibr">81</a>]. 
(<b>d</b>) If target miRNAs are present in a sample, miRNA templates are amplified through a rolling circle amplification (not illustrated). Complementary oligonucleotides form double-stranded DNAs that are recognized by fusions of NanoLuc and mNeonGreen with zinc finger proteins that specifically bind to different but nearby sequences [<a href="#B82-sensors-21-00660" class="html-bibr">82</a>]. (<b>e</b>) Quantum dot (QD)–organic fluorescent dye conjugates joined by a peptide-containing protease-specific recognition site are immobilized on paper. In the absence of proteolytic activity, FRET occurs between the QD and the organic dye, resulting in a yellow/orange emission signal. If the peptide is cleaved due to the proteolytic activity of the protease of interest, the dye diffuses away from the QD, leading to a green emission from the QDs [<a href="#B83-sensors-21-00660" class="html-bibr">83</a>]. (<b>f</b>) Paper-immobilized quantum dot–oligonucleotides and free Cy3–oligonucleotides contain different DNA segments complementary to the target gene fragment. In a sandwich format, the target gene serves as a hybridization bridge for the QD–oligonucleotide and Cy3–oligonucleotide, which in turn enables FRET between QD and Cy3 [<a href="#B84-sensors-21-00660" class="html-bibr">84</a>,<a href="#B85-sensors-21-00660" class="html-bibr">85</a>,<a href="#B86-sensors-21-00660" class="html-bibr">86</a>,<a href="#B87-sensors-21-00660" class="html-bibr">87</a>]; (<b>g</b>) A Cy3-labeled kanamycin-specific aptamer partially hybridizes to an anchor/connector oligonucleotide immobilized on glass. The connector oligonucleotide is conjugated to Cy5. Binding of kanamycin spatially separates Cy3 from Cy5 components, leading to a lower FRET efficiency [<a href="#B88-sensors-21-00660" class="html-bibr">88</a>]. (<b>h</b>) An upconversion nanoparticle (UCNP) consisting of ytterbium (Yb<sup>3+</sup>) and thulium (Tm<sup>3+</sup>) is conjugated to the organic dye rhodol. 
FRET occurs between the UCNP and rhodol, while organophosphonates perform a nucleophilic attack on rhodol, inactivating it as a FRET acceptor [<a href="#B89-sensors-21-00660" class="html-bibr">89</a>].</p>
Figure 5
<p>Examples of Lanthanide-FRET (LRET) applications that are potentially suitable for on-site testing. (<b>a</b>) Commonly applied TR-FRET technologies rely on a sandwich-based homogeneous assay in which two antibodies target different epitopes of the analyte: one is labeled with a lanthanide energy donor, while the other is labeled with an organic dye or fluorescent protein as the energy acceptor. Binding of both antibodies enables FRET between the lanthanide and the acceptor, which is measured through their altered fluorescence lifetimes. (<b>b</b>) Protein L, an antibody light-chain-binding protein [<a href="#B102-sensors-21-00660" class="html-bibr">102</a>], is labeled with Europium. Antigens to an antibody of interest are labeled with the organic dye AlexaFluor647 (LFRET) [<a href="#B103-sensors-21-00660" class="html-bibr">103</a>]. If a sample contains antibodies against the antigen–dye fusion, FRET occurs between Protein-L-Europium bound to the light chain of the antibody and the Antigen–AlexaFluor647 fusion. Image of the ProciseDx device was used with permission from ProciseDx.</p>
18 pages, 11655 KiB  
Article
Improving the Head Pose Variation Problem in Face Recognition for Mobile Robots
by Samuel-Felipe Baltanas, Jose-Raul Ruiz-Sarmiento and Javier Gonzalez-Jimenez
Sensors 2021, 21(2), 659; https://doi.org/10.3390/s21020659 - 19 Jan 2021
Cited by 4 | Viewed by 3480
Abstract
Face recognition is a technology with great potential in the field of robotics, due to its prominent role in human-robot interaction (HRI). This interaction is a keystone for the successful deployment of robots in areas requiring customized assistance, such as education and healthcare, or assisting humans in everyday tasks. These unconstrained environments present additional difficulties for face recognition, extreme head pose variability being one of the most challenging. In this paper, we address this issue and make a fourfold contribution. First, we designed a tool for gathering a uniform distribution of head pose images from a person, which we used to collect a new dataset of faces; both are presented in this work. The dataset then served as a testbed for analyzing the detrimental effects this problem has on a number of state-of-the-art methods, showing their decreased effectiveness outside a limited range of poses. Finally, we propose an optimization method to mitigate these negative effects by including key pose samples in the recognition system’s set of known faces. The conducted experiments demonstrate that this optimized set of poses significantly improves the performance of a state-of-the-art system based on Multitask Cascaded Convolutional Neural Networks (MTCNNs) and ArcFace. Full article
(This article belongs to the Special Issue Social Robots in Healthcare)
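The recognition step this work builds on, comparing face embeddings against a gallery of known identities with a distance threshold, can be sketched as follows. The cosine-distance formulation and the 0.4 threshold follow the paper's Figure 2 caption; the gallery layout and the helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def embedding_distance(a, b):
    """Cosine distance between two face embeddings (1 - cosine
    similarity), as used in ArcFace-style pipelines."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def identify(query, gallery, threshold=0.4):
    """Return the gallery identity whose stored embedding is closest to
    the query, or None when no distance falls below the threshold.
    `gallery` maps identity -> list of embeddings (one per stored pose,
    which is exactly what the pose-selection optimization populates)."""
    best_id, best_d = None, threshold
    for identity, embeddings in gallery.items():
        for e in embeddings:
            d = embedding_distance(query, e)
            if d < best_d:
                best_id, best_d = identity, d
    return best_id
```

Storing several key-pose embeddings per person simply means longer lists in `gallery`, which is how the optimized pose sets enter the matching loop.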
Show Figures

Figure 1
<p>Typical pipeline for face recognition (FR) systems.</p>
Figure 2
<p>Embedding distance comparison between various poses of the same individual using ArcFace with its standard distance threshold of <math display="inline"><semantics> <mrow> <mn>0.4</mn> </mrow> </semantics></math>. The distance between (<b>a</b>,<b>b</b>) is <math display="inline"><semantics> <mrow> <mn>0.412</mn> </mrow> </semantics></math>, which is considered a negative match. In contrast, the distance between (<b>b</b>,<b>c</b>) is <math display="inline"><semantics> <mrow> <mn>0.347</mn> </mrow> </semantics></math>, which is considered a positive match.</p>
Figure 3
<p>Schema of the HPE system stacking three methods, where <math display="inline"><semantics> <mover> <mi>x</mi> <mo>¯</mo> </mover> </semantics></math> stands for the average of the three estimated poses.</p>
Figure 4
<p>Log-in view of the developed application. It is the first window appearing when the tool is launched.</p>
Figure 5
<p>Interface of the interactive application for collecting face images. The interface is composed of 5 parts: the collection state (<b>①</b>), the camera feed (<b>②</b>), the control buttons (<b>③</b>), a black pointer (<b>④</b>), and a colorbar (<b>⑤</b>).</p>
Figure 6
<p>Estimated pitch and yaw results from a single individual in the MAPIR Faces dataset.</p>
Figure 7
<p>Face similarity results of ArcFace distributed across the selected head pose space. The blue samples represent correct identifications, while the red ones represent false rejections. The hue of the color is proportional to the similarity scores. The images on the left are examples of: a selected pose (<b>a</b>), a correct identification (<b>b</b>), and a false rejection (<b>c</b>).</p>
Figure 8
<p>Face similarity results of ArcFace using a non-frontal face image. The gallery image depicts a head pose with an estimated pitch of 20° and a yaw of −37.2°. These samples appear in white at the top-left of the figure. The images on the left are examples of: a selected pose (<b>a</b>), a correct identification (<b>b</b>), and a false rejection (<b>c</b>).</p>
Figure 9
<p>Length of a combination against the number of combinations of said length in trillions.</p>
Figure 10
<p>Metrics for the top-1 accuracy configurations found. (<b>a</b>) Accuracy against the number of poses. (<b>b</b>) Average distance to the nearest true match against the number of poses. (<b>c</b>) Average distance to the nearest false match against the number of poses.</p>
Figure 11
<p>Top-5 accuracy configurations for all lengths of the gallery set. For example, the first row in the 1–10 column reports, as black squares, the 5 best poses to be stored in the gallery set from a <math display="inline"><semantics> <mrow> <mn>7</mn> <mo>×</mo> <mn>7</mn> </mrow> </semantics></math> grid of them.</p>
Figure 12
<p>Example of the top-1 combination with 3 poses. (<b>a</b>) shows the poses in black in a <math display="inline"><semantics> <mrow> <mn>7</mn> <mo>×</mo> <mn>7</mn> </mrow> </semantics></math> grid. (<b>b</b>) shows sample images for each of the 3 poses for an individual.</p>
19 pages, 5112 KiB  
Article
An Accurate Linear Method for 3D Line Reconstruction for Binocular or Multiple View Stereo Vision
by Lijun Zhong, Junyou Qin, Xia Yang, Xiaohu Zhang, Yang Shang, Hongliang Zhang and Qifeng Yu
Sensors 2021, 21(2), 658; https://doi.org/10.3390/s21020658 - 19 Jan 2021
Cited by 12 | Viewed by 3048
Abstract
For the problem of 3D line reconstruction in binocular or multiple view stereo vision, when there are no corresponding points on the line, the method called Direction-then-Point (DtP) can be used, and if there are two pairs of corresponding points on the line, the method called Two Points 3D coordinates (TPS) can be used. However, when there is only one pair of corresponding points on the line, can we achieve better accuracy than DtP for 3D line reconstruction? In this paper, a linear and more accurate method called Point-then-Direction (PtD) is proposed. First, we use the intersection method to obtain the 3D point’s coordinates from its corresponding image points. Then, we use this point as a position on the line and calculate the direction of the line by minimizing the image angle residual. PtD is also suitable for multiple camera systems. The simulation results demonstrate that PtD improves the accuracy of both the direction and the position of the 3D line compared to DtP. At the same time, PtD achieves better accuracy than TPS in the direction of the 3D line, but lower accuracy in its position. Full article
(This article belongs to the Section Physical Sensors)
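The first step of PtD, recovering a 3D point from one pair of corresponding image points, is an instance of two-view triangulation. The sketch below shows the standard linear (DLT) formulation under the assumption of known 3×4 projection matrices; it illustrates the kind of intersection computation involved, not the authors' exact implementation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections
    x1, x2 in two cameras with 3x4 projection matrices P1, P2.
    Each image point contributes two homogeneous linear constraints."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: the right singular vector of A with
    # the smallest singular value minimizes ||A X|| with ||X|| = 1.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

PtD then fixes this point as the line's position and estimates only the direction, which is why the minimized quantity can be the image angle residual rather than point reprojection error.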
Show Figures

Figure 1
<p>Plane intersection for stereo vision. AB is the 3D line <math display="inline"><semantics> <mi>I</mi> </semantics></math>, C<sub>1</sub> and C<sub>2</sub> are the optical centers of the two cameras, A<sub>1</sub>B<sub>1</sub> is the projection of <math display="inline"><semantics> <mi>I</mi> </semantics></math> on camera C<sub>1</sub>, and A<sub>2</sub>B<sub>2</sub> is the projection of <math display="inline"><semantics> <mi>I</mi> </semantics></math> on camera C<sub>2</sub>.</p>
Figure 2
<p>Schematic diagram of the image angle residual (IAR). The C<sub>2</sub>A<sub>2</sub>B<sub>2</sub> plane is the plane composed of the optical center and the axis of the camera; AB is the 3D line; A′B′ is the reconstructed result of the 3D line; B<sub>2</sub>A<sub>3</sub> is the projection of A′B′ on the image plane; and α is the IAR.</p>
Figure 3
<p>The top view of the simulation scenario.</p>
Figure 4
<p>Relationship between the angular and positioning error and the intersection angle. (<b>a</b>) The result when the 3D line was parallel to the imaging plane and perpendicular to the baseline; and (<b>b</b>) the result when the 3D line was not parallel to the imaging plane and was not perpendicular to the baseline.</p>
Figure 5
<p>Relationship between the angular and positioning error and the line offset. (<b>a</b>) The result when the intersection angle was 90°; and (<b>b</b>) the result when the intersection angle was 120°.</p>
Figure 6
<p>Relationship between the line error and error of 2D line.</p>
Figure 7
<p>Relationship between the line error and error of external camera parameters.</p>
Figure 8
<p>Relationship between the line error and the error of intersection parameters.</p>
Figure 9
<p>Relationship between the line error and the camera number.</p>
Figure 10
<p>Relationship between the running time (in seconds) and the number of cameras. TPS, PtD, and DtP are linear methods, but PtD takes longer than DtP.</p>
Figure 11
<p>Physical scene for experiment 1.</p>
Figure 12
<p>The image captured by camera 1. The eight 3D lines to be reconstructed are marked by the red lines on the images; (<b>a</b>) is more perpendicular and in a better intersection condition than (<b>b</b>).</p>
Figure 13
<p>Physical scene for experiment 2. The target was placed on the board in the center, and the diagonal markers on the side were the calibration control points (CCPs). The 3D line to be reconstructed was the target’s central axis.</p>
18 pages, 14707 KiB  
Article
Improvement of Reliability Determination Performance of Real Time Kinematic Solutions Using Height Trajectory
by Aoki Takanose, Yoshiki Atsumi, Kanamu Takikawa and Junichi Meguro
Sensors 2021, 21(2), 657; https://doi.org/10.3390/s21020657 - 19 Jan 2021
Cited by 4 | Viewed by 2622
Abstract
Autonomous driving support systems and self-driving cars require the determination of reliable vehicle positions with high accuracy. The real time kinematic (RTK) algorithm with the global navigation satellite system (GNSS) is generally employed to obtain highly accurate position information. Because RTK can estimate the fix solution, which is a centimeter-level positioning solution, it is also used as an indicator of position reliability. However, in urban areas, the degradation of the GNSS signal environment poses a challenge. Multipath noise caused by surrounding tall buildings degrades the positioning accuracy, which leads to large errors in the fix solution used as a measure of reliability. We propose a novel position reliability estimation method based on two considerations: first, GNSS errors are more likely to occur in the height direction than in the horizontal plane; second, the height variation of the actual vehicle travel path is small compared to the amount of horizontal movement. Based on these considerations, we propose a method to detect a reliable fix solution by estimating the height variation during driving. To verify the effectiveness of the proposed method, an evaluation test was conducted in an urban area of Tokyo. According to the evaluation test, a reliability judgment rate of 99% was achieved in an urban environment, and a plane accuracy of less than 0.3 m RMS was achieved. The results indicate that the accuracy of the proposed method is higher than that of the conventional fix solution, demonstrating its effectiveness. Full article
(This article belongs to the Special Issue GNSS Data Processing and Navigation in Challenging Environments)
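The core idea, accepting a fix epoch only when its height agrees with a smooth height trajectory estimated from vehicle motion, can be sketched as below. This is a deliberate simplification: the offset estimator, the use of a median, and the 0.3 m gate are illustrative assumptions, not the paper's actual algorithm or thresholds.

```python
import numpy as np

def fit_offset(fix_heights, relative_heights):
    """A dead-reckoned height trajectory is only known up to a constant
    offset; estimate it as the median difference, which stays robust
    when a minority of fix epochs carry large multipath errors."""
    return float(np.median(np.asarray(fix_heights) - np.asarray(relative_heights)))

def positive_fix_mask(fix_heights, trajectory_heights, threshold_m=0.3):
    """Flag fix epochs as reliable ('positive fix') when their height
    agrees with the aligned height trajectory within threshold_m."""
    fix_heights = np.asarray(fix_heights, dtype=float)
    trajectory_heights = np.asarray(trajectory_heights, dtype=float)
    return np.abs(fix_heights - trajectory_heights) < threshold_m
```

Because multipath errors show up preferentially in height, a horizontal-looking fix with an implausible height jump is exactly the case this gate rejects.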
Show Figures

Figure 1
<p>Description of the ratio-test and its failure. In the red boxes, <span class="html-italic">th</span> stands for the threshold for the Ratio (Equation (8)).</p>
Figure 2
<p>Schematic of confidence determination method.</p>
Figure 3
<p>Diagram of height trajectory estimation algorithm.</p>
Figure 4
<p>Relationship between laws of motion of a car on a slope.</p>
Figure 5
<p>Difference in fix solution caused by acceleration error with respect to time (or distance).</p>
Figure 6
<p>Flowchart for determining the reliability of the fix solution.</p>
Figure 7
<p>Fitting height trajectory to fix solution to determine confidence level.</p>
Figure 8
<p>Schematic of proposed algorithm for sequential decision making.</p>
Figure 9
<p>Vehicle used for evaluation tests.</p>
Figure 10
<p>Test route A in urban environment.</p>
Figure 11
<p>Results of real time kinematic (RTK) performed on test route A. (<b>a</b>) Fix solution for the entire route, (<b>b</b>) Error distribution of fix solution.</p>
Figure 12
<p>Results of reliable assessments using Route A evaluation method. (<b>a</b>) Positive fix for the entire route. (<b>b</b>) Environment in the red box. (<b>c</b>) Fix solution in the red box. (<b>d</b>) Positive fix in the red box.</p>
Figure 13
<p>Error of positive fix by proposed method for Route A. (<b>a</b>) Positive fix plane and height error, (<b>b</b>) Error distribution of positive fix.</p>
Figure 14
<p>Test route B in a dense urban environment.</p>
Figure 15
<p>Results of RTK performed on test route B. (<b>a</b>) Fix solution for the entire route (<b>b</b>) Error distribution of fix solution.</p>
Figure 16
<p>Results of reliability determination using Route B evaluation method. (<b>a</b>) Positive fix for the entire route, (<b>b</b>) Environment in the red box, (<b>c</b>) Fix solution in the red box, (<b>d</b>) Positive fix in the red box.</p>
Figure 17
<p>Error of positive fix by proposed method for Route B. (<b>a</b>) Positive fix plane and height error, (<b>b</b>) Error distribution of positive fix.</p>
15 pages, 904 KiB  
Article
An IoT-Focused Intrusion Detection System Approach Based on Preprocessing Characterization for Cybersecurity Datasets
by Xavier Larriva-Novo, Víctor A. Villagrá, Mario Vega-Barbas, Diego Rivera and Mario Sanz Rodrigo
Sensors 2021, 21(2), 656; https://doi.org/10.3390/s21020656 - 19 Jan 2021
Cited by 63 | Viewed by 5443
Abstract
Security in IoT networks is currently mandatory, due to the high amount of data that has to be handled. These systems are vulnerable to several cybersecurity attacks, which are increasing in number and sophistication. For this reason, new intrusion detection techniques have to be developed that are as accurate as possible for these scenarios. Intrusion detection systems based on machine learning algorithms have already shown high performance in terms of accuracy. This research proposes the study and evaluation of several preprocessing techniques based on traffic categorization for a machine learning neural network algorithm. For its evaluation, this research uses two benchmark datasets, namely UGR16 and UNSW-NB15, and one of the most widely used datasets, KDD99. The preprocessing techniques were evaluated in accordance with scalar and normalization functions. All of these preprocessing models were applied through different sets of characteristics based on a categorization composed of four groups of features: basic connection features, content characteristics, statistical characteristics and, finally, a group composed of traffic-based features and connection direction-based traffic characteristics. The objective of this research is to evaluate this categorization by using various data preprocessing techniques to obtain the most accurate model. Our proposal shows that, by applying the categorization of network traffic and several preprocessing techniques, the accuracy can be enhanced by up to 45%. The preprocessing of a specific group of characteristics allows for greater accuracy, allowing the machine learning algorithm to correctly classify these parameters related to possible attacks. Full article
(This article belongs to the Special Issue Cybersecurity and Privacy in Smart Cities)
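The per-group preprocessing evaluated in the paper, applying a scaler or normalizer separately to each feature category, can be sketched as follows. The group-to-column mapping and the choice of applying min-max scaling everywhere are illustrative assumptions; the actual feature indices depend on the dataset (KDD99, UGR16 or UNSW-NB15).

```python
import numpy as np

# Hypothetical column indices mirroring the paper's four categories.
FEATURE_GROUPS = {
    "basic_connection": [0, 1],
    "content": [2],
    "statistical": [3, 4],
    "traffic_based": [5],
}

def min_max_scale(x):
    """Normalize each column to [0, 1]; constant columns map to 0."""
    mn, mx = x.min(axis=0), x.max(axis=0)
    return (x - mn) / np.where(mx - mn == 0, 1, mx - mn)

def standardize(x):
    """Zero mean, unit variance per column; constant columns map to 0."""
    std = x.std(axis=0)
    return (x - x.mean(axis=0)) / np.where(std == 0, 1, std)

def preprocess_by_group(data, scalers):
    """Apply a (possibly different) scaler to each feature group,
    matching the per-group evaluation the paper describes."""
    out = data.astype(float).copy()
    for group, cols in FEATURE_GROUPS.items():
        out[:, cols] = scalers[group](out[:, cols])
    return out
```

Swapping `min_max_scale` for `standardize` in the `scalers` dict for a single group is exactly the kind of per-category experiment the evaluation sweeps over.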
Show Figures

Figure 1
<p>Three-tier architecture of IoT and IoT IDS approach.</p>
Figure 2
<p>Categorical data preprocessing results in terms of accuracy, scaled between 0 and 1.</p>
Figure 3
<p>Categorical data preprocessing results in terms of accuracy scaled [0–1].</p>
Figure 4
<p>Standardization and normalization functions applied to the entire group of characteristics versus no preprocessing functions for the UGR16, NSL-KDD and UNSW-NB15 datasets; accuracy scaled between 0 and 1.</p>
13 pages, 2832 KiB  
Article
Laser Cut Interruption Detection from Small Images by Using Convolutional Neural Network
by Benedikt Adelmann, Max Schleier and Ralf Hellmann
Sensors 2021, 21(2), 655; https://doi.org/10.3390/s21020655 - 19 Jan 2021
Cited by 7 | Viewed by 3540
Abstract
In this publication, we use a small convolutional neural network to detect cut interruptions during laser cutting from single images of a high-speed camera. A camera takes images without additional illumination at a resolution of 32 × 64 pixels while cutting steel sheets of varying thicknesses with different laser parameter combinations, and the network classifies them into cuts and cut interruptions. After a short learning period of five epochs on a certain sheet thickness, the images are classified with a low error rate of 0.05%. Using color images yields slightly lower error rates than greyscale images, since the image color shifts towards blue during cut interruptions. Training a single network on all sheet thicknesses results in test error rates below 0.1%. This low error rate and the short calculation time of 120 µs on a standard CPU make the system industrially applicable. Full article
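The color cue behind the color-versus-greyscale result (images shifting towards blue during interruptions) can be made concrete with a toy baseline. This is emphatically not the paper's CNN, only an illustration of the cue it can learn; the blue-fraction threshold is an assumption chosen for the example.

```python
import numpy as np

def blue_fraction(image_rgb):
    """Mean share of the blue channel in total intensity for an
    H x W x 3 image with values in [0, 255]."""
    totals = image_rgb.sum(axis=2).astype(float)
    totals[totals == 0] = 1.0  # avoid division by zero on dark pixels
    return float((image_rgb[..., 2] / totals).mean())

def predict_interruption(image_rgb, threshold=0.40):
    """Toy baseline: call 'cut interruption' when the process glow is
    blue-dominated; a CNN learns such decisions from data instead."""
    return blue_fraction(image_rgb) > threshold
```

A learned classifier outperforms any such hand-set threshold, but the example shows why dropping color information costs the network a discriminative channel.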
Show Figures

Figure 1
<p>Optical setup of the cutting head.</p>
Figure 2
<p>Top view of a metal sheet with a cut interruption (<b>top</b>) and complete cut (<b>bottom</b>).</p>
Figure 3
<p>Original camera image.</p>
Figure 4
<p>Experimental design of image provisions and evaluations.</p>
Figure 5
<p>Design of the used convolutional neural network image.</p>
Figure 6
<p>Error rate as a function of the training image percentage.</p>
Figure 7
<p>Error rate comparison between color images (shown as black columns) and greyscale images (shown as grey columns).</p>
Figure 8
<p>Error rate as a function of the laser power.</p>
14 pages, 2951 KiB  
Article
Moving the Lab into the Mountains: A Pilot Study of Human Activity Recognition in Unstructured Environments
by Brian Russell, Andrew McDaid, William Toscano and Patria Hume
Sensors 2021, 21(2), 654; https://doi.org/10.3390/s21020654 - 19 Jan 2021
Cited by 13 | Viewed by 3141
Abstract
Goal: To develop and validate a field-based data collection and assessment method for human activity recognition in the mountains, with variations in terrain and fatigue, using a single accelerometer and a deep learning model. Methods: The protocol generated an unsupervised labelled dataset of various long-term field-based activities including run, walk, stand, lay and obstacle climb. Activity was voluntary, so transitions could not be determined a priori. Terrain variations included slope, crossing rivers, obstacles and surfaces including road, gravel, clay, mud, long grass and rough track. Fatigue levels were modulated from rested to physical exhaustion. The dataset was used to train a deep learning convolutional neural network (CNN) capable of being deployed on battery-powered devices. The human activity recognition results were compared to a lab-based dataset with 1,098,204 samples and six features, uniform smooth surfaces, non-fatigued supervised participants, and activity labelling defined by the protocol. Results: The trail run dataset had 3,829,759 samples with five features. The repetitive activities and single-instance activities required hyperparameter tuning to reach an overall accuracy of 0.978, with a minimum class precision for the one-off activity (climbing gate) of 0.802. Conclusion: The experimental results showed that the CNN deep learning model performed well with terrain and fatigue variations compared to the lab equivalents (accuracy 97.8% vs. 97.7% for trail vs. lab). Significance: To the authors' knowledge, this study demonstrated the first successful human activity recognition (HAR) in a mountain environment. A robust and repeatable protocol was developed to generate a validated trail running dataset when there were no observers present and activity types changed on a voluntary basis across variations in terrain surface and both cognitive and physical fatigue levels. Full article
(This article belongs to the Special Issue Wearable Sensors for Biomechanical Monitoring in Sport)
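A standard first step for CNN-based HAR on a continuous accelerometer stream, and presumably part of any pipeline like the one described, is segmenting the signal into fixed-size windows before classification. A minimal sketch follows; the window and step lengths are illustrative assumptions, since the paper tunes these (and a majority-vote threshold, MVTH) as hyperparameters.

```python
import numpy as np

def sliding_windows(samples, window, step):
    """Segment an (N, channels) accelerometer stream into overlapping
    fixed-size windows suitable as CNN input. Overlap = window - step."""
    n = (len(samples) - window) // step + 1
    return np.stack([samples[i * step : i * step + window] for i in range(n)])
```

With voluntary activity transitions, window boundaries inevitably straddle activity changes, which is one reason one-off activities like "climb gate" are harder to score than repetitive running or walking.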
Show Figures

Figure 1
<p>Data pipeline for trail calibration and sensor fusion.</p>
Figure 2
<p>(<b>a</b>) Track calibration with segmentation waypoints and (<b>b</b>) ground features validation.</p>
Figure 3
<p>Time series example of the one-off activity “climb gate” versus repetitive data “run” and “walk”.</p>
Figure 4
<p>Protocol description for the physical and cognitive load during the experiment.</p>
Figure 5
<p>CNN structure.</p>
Figure 6
<p>Acceleration waveforms for running up and down slopes over various terrain (road, track), (<b>a</b>) run up road, (<b>b</b>) run down road, (<b>c</b>) run up dirt track, (<b>d</b>) run down dirt track.</p>
Figure 7
<p>Precision by window size results for trail run activities (lay, sit, walk, run, climb) from the CNN with MVTH = 0.8.</p>
17 pages, 3040 KiB  
Article
Highly Efficient Lossless Coding for High Dynamic Range Red, Clear, Clear, Clear Image Sensors
by Paweł Pawłowski, Karol Piniarski and Adam Dąbrowski
Sensors 2021, 21(2), 653; https://doi.org/10.3390/s21020653 - 19 Jan 2021
Cited by 5 | Viewed by 3799
Abstract
In this paper we present a highly efficient coding procedure, specially designed and dedicated to operate with high dynamic range (HDR) RCCC (red, clear, clear, clear) image sensors used mainly in advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS). The coding procedure can be used for lossless reduction of data volume during the development and testing of video processing algorithms, e.g., in software-in-the-loop (SiL) or hardware-in-the-loop (HiL) conditions. Therefore, it was designed to achieve both state-of-the-art compression ratios and real-time compression feasibility. In tests we utilized the FFV1 lossless codec and achieved throughputs of up to 81 fps (frames per second) for compression and 87 fps for decompression on a single Intel i7 CPU. Full article
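The component separations examined in the figures (e.g., the variant with separate R and C1 planes plus a joined, double-width C2/C3 image) amount to strided slicing of the RCCC mosaic. A sketch under the assumption that R occupies the top-left position of each 2×2 cell; the actual layout depends on the sensor:

```python
import numpy as np

def split_rccc(raw):
    """Split an RCCC mosaic into its R, C1, C2, C3 component planes.
    Assumes R is the top-left sample of each 2x2 cell (sensor-dependent)."""
    R  = raw[0::2, 0::2]
    C1 = raw[0::2, 1::2]
    C2 = raw[1::2, 0::2]
    C3 = raw[1::2, 1::2]
    return R, C1, C2, C3

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
R, C1, C2, C3 = split_rccc(raw)
# joined C2|C3 image, horizontally twice as large, as in the third
# decomposition variant fed to the FFV1 codec
C23 = np.concatenate([C2, C3], axis=1)
```

Each plane (or the joined C2/C3 plane) can then be passed to a lossless codec as an independent grayscale stream.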
(This article belongs to the Special Issue CMOS Image Sensors and Related Applications)
Show Figures

Figure 1
<p>Piecewise linear representation in HDR image sensors.</p>
Figure 2
<p>Color filter arrays: (<b>a</b>) monochrome (CCCC), (<b>b</b>) RGGB, (<b>c</b>) RCCC, (<b>d</b>) RCCB, (<b>e</b>) RGBC, (<b>f</b>) RYYC.</p>
Figure 3
<p>Three sample parts of source RCCC image.</p>
Figure 4
<p>Three sample parts of source RCCC image decomposed into two images with R and CCCC components (the second image is created with the CCC components while the missing values are interpolated).</p>
Figure 5
<p>Three sample parts of source RCCC image decomposed into four images with all components R, C1, C2, C3 separated.</p>
Figure 6
<p>Three sample parts of source RCCC image decomposed into three images: two small images with R and C1 components and a horizontally two times larger image comprising C2 and C3 components.</p>
Figure 7
<p>RCCC lossless compression procedures with FFV1 codec for separated R, C1 and joined C2 and C3 components in: (<b>a</b>) intra-frame mode, (<b>b</b>) inter-frame mode.</p>
14 pages, 3078 KiB  
Article
Quantifying Coordination and Variability in the Lower Extremities after Anterior Cruciate Ligament Reconstruction
by Sangheon Park and Sukhoon Yoon
Sensors 2021, 21(2), 652; https://doi.org/10.3390/s21020652 - 19 Jan 2021
Cited by 2 | Viewed by 2327
Abstract
Patients experience various biomechanical changes following reconstruction for anterior cruciate ligament (ACL) injury. However, previous studies have focused on lower extremity joints as a single joint rather than simultaneous lower extremity movements. Therefore, this study aimed to determine the movement changes in the [...] Read more.
Patients experience various biomechanical changes following reconstruction for anterior cruciate ligament (ACL) injury. However, previous studies have focused on lower extremity joints individually rather than on simultaneous lower extremity movements. Therefore, this study aimed to determine the changes in lower limb coordination patterns according to movement type following ACL reconstruction. Twenty-one post-ACL-reconstruction patients (AG) and an equal number of healthy adults (CG) participated in this study. They were asked to perform walking, running, and cutting maneuvers. The continuous relative phase and its variability were calculated to examine the coordination pattern. During running and cutting at 30 and 60°, the AG demonstrated a lower in-phase hip–knee coordination pattern in the sagittal plane. The AG also demonstrated low hip–knee variability in the sagittal plane during cutting at 60°. A low in-phase coordination pattern can burden the knee by generating unnatural movements following muscle contraction in the opposite direction. Based on these results, promptly measuring the coordination variable with various sensors in the sports field would help identify such problems and provide fundamental evidence for the optimal timing of return-to-sport after ACL reconstruction (ACLR) rehabilitation. Full article
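The continuous relative phase used above is conventionally built from each joint's phase-portrait angle (angular displacement vs. angular velocity), with the CRP being the difference between the two joints' phase angles. A minimal sketch; the normalization scheme and sign convention are illustrative, as published CRP pipelines differ in both:

```python
import numpy as np

def continuous_relative_phase(theta1, omega1, theta2, omega2):
    """CRP between two joints: difference of phase angles built from
    normalized angular displacement and angular velocity (phase portrait)."""
    def norm(x):  # rescale to [-1, 1] so displacement and velocity are comparable
        return 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi1 = np.degrees(np.arctan2(norm(omega1), norm(theta1)))
    phi2 = np.degrees(np.arctan2(norm(omega2), norm(theta2)))
    crp = phi1 - phi2
    return (crp + 180) % 360 - 180  # wrap to [-180, 180)

# synthetic hip/knee signals with a constant 0.5 rad phase lag
t = np.linspace(0, 2 * np.pi, 200)
hip,  hip_v  = np.sin(t),       np.cos(t)
knee, knee_v = np.sin(t - 0.5), np.cos(t - 0.5)
crp = continuous_relative_phase(hip, hip_v, knee, knee_v)
```

Values near 0° indicate in-phase coordination; the variability of the CRP curve across strides is what distinguishes the AG and CG groups here.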
(This article belongs to the Special Issue Smart Sensors: Applications and Advances in Human Motion Analysis)
Show Figures

Figure 1
<p>Attachment of markers on lower extremity (the axis of angular displacement and velocity were defined as <a href="#sensors-21-00652-f002" class="html-fig">Figure 2</a>, red line = positive <span class="html-italic">x</span>-axis vector, green line = positive <span class="html-italic">y</span>-axis vector, blue line = positive <span class="html-italic">z</span>-axis vector, white circle = attachment of marker).</p>
Figure 2
<p>Three-dimensional capture area by Qualisys track manager (global coordination system, red line = <span class="html-italic">x</span>-axis, green line = <span class="html-italic">y</span>-axis, blue line = <span class="html-italic">z</span>-axis, shaded area = camera field-of-view of 3D cones).</p>
Figure 3
<p>Procedure of continuous relative phase (CRP) data processing (the 1st row = each joint angular displacement and velocity on polar coordination system, the 2nd row = each joint phase angles, the 3rd row = CRP angle).</p>
Figure 4
<p>Ensemble average of CRP between hip–knee joint according to difficulty of movement.</p>
Figure 5
<p>Ensemble average of CRP between knee–ankle joint according to difficulty of movement.</p>
2 pages, 143 KiB  
Editorial
Special Issue: ECG Monitoring System
by Florent Baty
Sensors 2021, 21(2), 651; https://doi.org/10.3390/s21020651 - 19 Jan 2021
Cited by 1 | Viewed by 1870
Abstract
This editorial of the Special Issue “ECG Monitoring System” provides a short overview of the 13 contributed articles published in this issue [...] Full article
(This article belongs to the Special Issue ECG Monitoring System)
12 pages, 2628 KiB  
Letter
Combination of Aptamer Amplifier and Antigen-Binding Fragment Probe as a Novel Strategy to Improve Detection Limit of Silicon Nanowire Field-Effect Transistor Immunosensors
by Cao-An Vu, Pin-Hsien Pan, Yuh-Shyong Yang, Hardy Wai-Hong Chan, Yoichi Kumada and Wen-Yih Chen
Sensors 2021, 21(2), 650; https://doi.org/10.3390/s21020650 - 19 Jan 2021
Cited by 4 | Viewed by 3456
Abstract
Detecting proteins at low concentrations in high-ionic-strength conditions by silicon nanowire field-effect transistors (SiNWFETs) is severely hindered due to the weakened signal, primarily caused by screening effects. In this study, aptamer as a signal amplifier, which has already been reported by our group, [...] Read more.
Detecting proteins at low concentrations in high-ionic-strength conditions with silicon nanowire field-effect transistors (SiNWFETs) is severely hindered by the weakened signal, primarily caused by screening effects. In this study, an aptamer signal amplifier, previously reported by our group, is for the first time integrated into SiNWFET immunosensors employing antigen-binding fragments (Fab) as receptors to improve the detection limit. The Fab-SiNWFET immunosensors were developed by immobilizing Fab onto Si surfaces modified with either 3-aminopropyltriethoxysilane (APTES) and glutaraldehyde (GA) (Fab/APTES-SiNWFETs), or mixed self-assembled monolayers (mSAMs) of polyethylene glycol (PEG) and GA (Fab/PEG-SiNWFETs), to detect rabbit IgG at different concentrations in a high-ionic-strength environment (150 mM Bis-Tris Propane), followed by incubation with R18, an aptamer which specifically targets rabbit IgG, for signal enhancement. Empirical results revealed that the signal produced by the sensors with Fab probes was greatly enhanced compared to those with whole antibodies (Wab) after detecting similar concentrations of rabbit IgG. The Fab/PEG-SiNWFET immunosensors exhibited an especially improved limit of detection, determining IgG levels down to 1 pg/mL, which has not been achieved by the Wab/PEG-SiNWFET immunosensors. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Taiwan)
Show Figures

Figure 1
<p>Depiction of the fabrication of Wab/APTES-SiNWFET (sample 1) and Fab/APTES-SiNWFET (sample 2) immunosensors in this study. (<b>A</b>) The SiNW channels (light blue bar) were modified with APTES (black short zigzag shapes) and (<b>B</b>) GA (green zigzag shapes) before (<b>C</b>) immobilizing either the Wab (bronze Y shapes in sample 1) or the Fab (yellow-bronze bars in sample 2).</p>
Figure 2
<p>Depiction of the fabrication of Wab/PEG-SiNWFET (sample 3) and Fab/PEG-SiNWFET (sample 4) immunosensors in this study. (<b>A</b>) The SiNW channels (light blue bar) were modified with PEG-mSAMs (silane-PEG-NH<sub>2</sub>: black long zigzag shapes, silane-PEG-OH: grey long zigzag shapes) and (<b>B</b>) GA (green zigzag shapes) before (<b>C</b>) either the Wab (bronze Y shapes in sample 3) or the Fab (yellow-bronze bars in sample 4) were immobilized.</p>
Figure 3
<p>Illustration of the detection of rabbit IgG by (1) Wab/APTES-SiNWFETs, (2) Fab/APTES-SiNWFETs, (3) Wab/PEG-SiNWFETs, and (4) Fab/PEG-SiNWFETs as well as their corresponding signal enhancement by R18 aptamer in this study. Both of the Wab/APTES-SiNWFETs (sample 1) and Fab/APTES-SiNWFETs (sample 2) were used to detect (<b>A</b>) rabbit IgG (blue Y shapes) at concentrations of 100 pg/mL and 1 ng/mL, before binding with (<b>B</b>) 3 μg/mL R18 aptamer (green curves) for signal enhancement. The Wab/PEG-SiNWFETs (sample 3) could only determine (<b>A</b>) rabbit IgG (blue Y shapes) at concentrations of 10 pg/mL, 100 pg/mL and 1 ng/mL, whereas the Fab/PEG-SiNWFETs (sample 4) could recognize (<b>A</b>) rabbit IgG (blue Y shapes) at concentrations of 1 pg/mL, 10 pg/mL, 100 pg/mL, and 1 ng/mL. Both of them were then also incubated in (<b>B</b>) 3 μg/mL R18 aptamer (green curves) for signal enhancement. All the biosensing experiments in this Figure were performed in 150 mM BTP.</p>
Figure 4
<p>(<b>A</b>) Verification of immobilization method with PEG-mSAMs and target-probe binding by indirect ELISA on silica surfaces. Absorbance at 450 nm of pure water (black bar), silica sample modified with PEG-SAMs and GA but without immobilizing IgG (negative control (NC), yellow bar), silica sample prepared with IgG immobilization after modifying PEG-SAMs and GA (blue bar). (<b>B</b>,<b>C</b>) Plot of the concentrations (nM) of either Wab or Fab bind to IgG versus their corresponding fractional occupancy to determine affinity binding of IgG-Wab (blue curve in (<b>B</b>)) and IgG-Fab (red curve in (<b>C</b>)).</p>
Figure 5
<p>(<b>A</b>,<b>B</b>) Representative samples to illustrate the method described in <a href="#sec2dot6-sensors-21-00650" class="html-sec">Section 2.6</a>. (<b>A</b>) Electrical response of the Wab/APTES-SiNWFETs was initially recorded in 150 mM BTP and plotted as the first curve (the black curve). This immunosensor was then employed to detect rabbit IgG at 1 ng/mL, followed by incubation with 3 μg/mL R18 (IgG and R18 were diluted in 150 mM BTP). Finally, its electrical response was measured again and plotted as the blue curve. The signal change generated by formation of the biocomplex (Wab-IgG-R18) was calculated from the formula ΔV = V<sub>d1</sub> − V<sub>d0</sub> (1), with V<sub>d1</sub> as the gate voltage value at I<sub>d</sub> = 10<sup>−9</sup> A (LgI = −9) of the blue curve, whereas V<sub>d0</sub> is the gate voltage value at I<sub>d</sub> = 10<sup>−9</sup> A (LgI = −9) of the black curve. (<b>B</b>) Electrical response of the Fab/APTES-SiNWFET was initially recorded in 150 mM BTP and plotted as the first curve (the black curve). This immunosensor was then employed to detect rabbit IgG at 1 ng/mL, followed by incubation with 3 μg/mL R18 (IgG and R18 were diluted in 150 mM BTP). Finally, its electrical response was measured again and plotted as the blue curve. The signal change generated by formation of the biocomplex (Fab-IgG-R18) was calculated from the formula ΔV = V<sub>d1</sub> − V<sub>d0</sub> (1), with V<sub>d1</sub> as the gate voltage value at I<sub>d</sub> = 10<sup>−9</sup> A (LgI = −9) of the blue curve, whereas V<sub>d0</sub> is the gate voltage value at I<sub>d</sub> = 10<sup>−9</sup> A (LgI = −9) of the black curve. (<b>C</b>) Comparison of the signal amplified by R18 (mV) after determining rabbit IgG at different concentrations (0.1 ng/mL and 1 ng/mL) in 150 mM BTP by Wab/APTES-SiNWFETs (blue bars) and Fab/APTES-SiNWFETs (red bars). The voltage shift (mV) generated by IgG detection of APTES-SiNWFETs without probes (neither Wab nor Fab) (black bar), and by recognizing R18 without IgG (0 ng/mL) of Fab/APTES-SiNWFETs, was also calculated for analysis.</p>
Figure 6
<p>(<b>A</b>) Comparison of the signal amplified by R18 (mV) after sensing rabbit IgG at various levels in 150 mM BTP by Wab/PEG-SiNWFETs (blue bars) and Fab/APTES-SiNWFETs (red bars). (<b>B</b>) Plot of the voltage shift by R18 versus logarithmic concentrations of rabbit IgG and two respective calibration lines obtained by Wab/PEG-SiNWFETs (blue line) and Fab/APTES-SiNWFETs (red line).</p>
16 pages, 21196 KiB  
Article
3D Hand Pose Estimation Based on Five-Layer Ensemble CNN
by Lili Fan, Hong Rao and Wenji Yang
Sensors 2021, 21(2), 649; https://doi.org/10.3390/s21020649 - 19 Jan 2021
Cited by 9 | Viewed by 4160
Abstract
Estimating accurate 3D hand pose from a single RGB image is a highly challenging problem in pose estimation due to self-geometric ambiguities, self-occlusions, and the absence of depth information. To this end, a novel Five-Layer Ensemble CNN (5LENet) is proposed based on hierarchical [...] Read more.
Estimating accurate 3D hand pose from a single RGB image is a highly challenging problem in pose estimation due to self-geometric ambiguities, self-occlusions, and the absence of depth information. To this end, a novel Five-Layer Ensemble CNN (5LENet) is proposed based on hierarchical thinking, designed to decompose the hand pose estimation task into five single-finger pose estimation sub-tasks. The sub-task estimation results are then fused to estimate the full 3D hand pose. The hierarchical method is of great benefit for extracting deeper and better finger feature information, which can effectively improve the estimation accuracy of 3D hand pose. In addition, we build a hand model with the center of the palm (represented as Palm) connected to the middle finger according to the topological structure of the hand, which can further boost the performance of 3D hand pose estimation. Extensive quantitative and qualitative results on two public datasets demonstrate the effectiveness of 5LENet, yielding new state-of-the-art 3D estimation accuracy, superior to most advanced estimation methods. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1
<p>Schematic diagram of proposed 5LENet framework. Our 5LENet receives a cropped RGB image as the input to estimate hand mask. Then, the 2D hand heatmaps are estimated according to the features of RGB image and hand mask. Finally, 3D hand pose is estimated through the hierarchical ensemble network in the dotted box.</p>
Figure 2
<p>The diagram and examples of hand keypoint division. (<b>a</b>) Skeleton graph of 21 hand keypoints, in which triangle, square, and circle represent Palm, the metacarpophalangeal joint, and the phalangeal joint, respectively. The middle finger containing the Palm is marked in red, and others are marked in blue. (<b>b</b>) An example diagram of different finger keypoints from the real dataset Stereo Hand Pose Tracking Benchmark (STB).</p>
Figure 3
<p>The architecture of the hierarchical estimation network. Five 2D single finger heatmaps are estimated by the network, of which <math display="inline"><semantics> <mi>n</mi> </semantics></math>-<math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">F</mi> <mrow> <mi>f</mi> <mi>i</mi> <mi>j</mi> </mrow> </msub> </mrow> </semantics></math> represents the <math display="inline"><semantics> <mi>n</mi> </semantics></math>-channel 2D heatmaps of the <math display="inline"><semantics> <mrow> <mi>i</mi> <mi>th</mi> </mrow> </semantics></math> finger estimated by <math display="inline"><semantics> <mrow> <mi>j</mi> <mi>th</mi> </mrow> </semantics></math> layer.</p>
Figure 4
<p>Comparative experiment of the effectiveness of the five-layer network.</p>
Figure 5
<p>Comparative experiment of the effectiveness of newly added five 3D finger pose constraints.</p>
Figure 6
<p>Comparative experiment of the effectiveness of the Palm connecting with a finger.</p>
Figure 7
<p>Comparative experiment of the effectiveness of the Palm connecting with middle finger.</p>
Figure 8
<p>Comparison with the state-of-the-art methods on Rendered Hand Pose (RHD) dataset [<a href="#B12-sensors-21-00649" class="html-bibr">12</a>].</p>
Figure 9
<p>Comparison with the state-of-the-art methods on STB dataset [<a href="#B33-sensors-21-00649" class="html-bibr">33</a>].</p>
Figure 10
<p>Qualitative results for STB dataset [<a href="#B33-sensors-21-00649" class="html-bibr">33</a>].</p>
Figure 11
<p>Qualitative results for RHD dataset [<a href="#B12-sensors-21-00649" class="html-bibr">12</a>].</p>
16 pages, 5655 KiB  
Article
Design and Validation of a Scalable, Reconfigurable and Low-Cost Structural Health Monitoring System
by Juan J. Villacorta, Lara del-Val, Roberto D. Martínez, José-Antonio Balmori, Álvaro Magdaleno, Gamaliel López, Alberto Izquierdo, Antolín Lorenzana and Luis-Alfonso Basterra
Sensors 2021, 21(2), 648; https://doi.org/10.3390/s21020648 - 19 Jan 2021
Cited by 18 | Viewed by 4394
Abstract
This paper presents the design, development and testing of a low-cost Structural Health Monitoring (SHM) system based on MEMS (Micro Electro-Mechanical Systems) triaxial accelerometers. A new control system composed by a myRIO platform, managed by specific LabVIEW software, has been developed. The LabVIEW [...] Read more.
This paper presents the design, development and testing of a low-cost Structural Health Monitoring (SHM) system based on MEMS (Micro Electro-Mechanical Systems) triaxial accelerometers. A new control system, composed of a myRIO platform managed by specific LabVIEW software, has been developed. The LabVIEW software also computes the frequency response functions for the subsequent modal analysis. The proposed SHM system was validated by comparing the data measured by this set-up with a conventional SHM system based on piezoelectric accelerometers. The validation tests show a high correlation between the behavior of both systems, making it possible to conclude that the proposed system is sufficiently accurate and sensitive for operative purposes, apart from being significantly more affordable than the traditional one. Full article
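Frequency response functions like those mentioned above can be estimated from excitation/response records with the standard H1 estimator (averaged cross-spectrum divided by the input auto-spectrum). A minimal numpy sketch with segment averaging but no windowing or overlap, for brevity; it is not the paper's LabVIEW implementation:

```python
import numpy as np

def frf_h1(x, y, fs, nfft=1024):
    """H1 frequency-response-function estimate between excitation x and
    response y: Welch-style averaged cross-spectrum over auto-spectrum
    (boxcar window, no overlap, kept minimal on purpose)."""
    n = len(x) // nfft
    X = np.fft.rfft(x[:n * nfft].reshape(n, nfft), axis=1)
    Y = np.fft.rfft(y[:n * nfft].reshape(n, nfft), axis=1)
    Sxx = np.mean(np.abs(X) ** 2, axis=0)
    Sxy = np.mean(np.conj(X) * Y, axis=0)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, Sxy / Sxx

# synthetic check: a response that is exactly twice the excitation
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
f, H = frf_h1(x, 2.0 * x, fs=100.0)
```

Resonance peaks in |H| locate the structural modes, as in the FRF plots of Figures 10 to 12.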
(This article belongs to the Special Issue Sensors for Cultural Heritage Monitoring)
Show Figures

Figure 1
<p>Proposed system architecture.</p>
Figure 2
<p>(<b>a</b>) ADXL355 accelerometer (<b>b</b>) adaptor board (<b>c</b>) 3D printed box (<b>d</b>) sensor assembled.</p>
Figure 3
<p>MyRIO device with two adapter boards and an accelerometer.</p>
Figure 4
<p>Distributed system configuration.</p>
Figure 5
<p>Stand-alone system configuration.</p>
Figure 6
<p>Elevation view of the measurement layout for the validation tests.</p>
Figure 7
<p>Measurement layout.</p>
Figure 8
<p>Time signals of accelerometers at the excitation sensors position. (<b>a</b>) Full signal. (<b>b</b>) Zoom between 10 and 11 s. (<b>c</b>) Zoom between 152 and 153 s.</p>
Figure 9
<p>Time signals of accelerometers at position E4. (<b>a</b>) Full signal. (<b>b</b>) Zoom between 140 and 145 s. (<b>c</b>) Zoom between 140 and 140.5 s. (<b>d</b>) Zoom between 142.5 and 143 s. (<b>e</b>) Zoom between 330 and 335 s. (<b>f</b>) Zoom between 333 and 333.5 s.</p>
Figure 10
<p>Frequency Response Function from 4.5 to 50 Hz.</p>
Figure 11
<p>Frequency Response Function centered at the main resonance.</p>
Figure 12
<p>Frequency Response Function with resolution increased.</p>
13 pages, 7243 KiB  
Article
Strength Training Characteristics of Different Loads Based on Acceleration Sensor and Finite Element Simulation
by Bo Pang, Zhongqiu Ji, Zihua Zhang, Yunchuan Sun, Chunmin Ma, Zirong He, Xin Hu and Guiping Jiang
Sensors 2021, 21(2), 647; https://doi.org/10.3390/s21020647 - 19 Jan 2021
Cited by 8 | Viewed by 2723
Abstract
Deep squat, bench press and hard pull are important ways for people to improve their strength. The use of sensors to measure force is rare. Measuring strength with sensors is extremely valuable for people to master the intensity of exercise to scientifically effective [...] Read more.
Deep squat, bench press and hard pull are important exercises for people to improve their strength, yet the use of sensors to measure force during them is rare. Measuring strength with sensors is extremely valuable for mastering exercise intensity and exercising scientifically and effectively. To this end, in this paper, we used a real-time wireless motion capture and mechanical evaluation system based on wearable sensors to measure the dynamic characteristics of 30 young men performing deep squat, bench press and hard pull maneuvers. The tibial data were simulated with AnyBody 5.2 and ANSYS 19.2 to verify their authenticity. The results demonstrated that the appropriate load for the elbow, hip and knee joints in the deep squat is 40% 1RM; for the bench press, 40% 1RM; and for the hard pull, 80% 1RM. External force is the main factor in bone change. The mechanical characteristics of the knee joint can be simulated once the Finite Element Analysis and the AnyBody model simulation are verified. Full article
(This article belongs to the Special Issue Wearable Sensors for Healthcare)
Show Figures

Figure 1
<p>Software analysis interface.</p>
Figure 2
<p>The working interface of the sensor.</p>
Figure 3
<p>Subject information collection chart.</p>
Figure 4
<p>Comparisons of the different load intensity test angles (°). * There was a significant difference between 40% 1RM (one repetition maximum) and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 5
<p>Comparisons of the different load intensity test angular velocities (°/s). * There was a significant difference between 40% 1RM and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 6
<p>Comparisons of the different load intensity test angular accelerations (°/s<sup>2</sup>). * There was a significant difference between 40% 1RM and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 7
<p>Comparisons of the different load intensity test peak stress (N). * There was a significant difference between 40% 1RM and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 8
<p>Comparisons of the different load intensity test peak muscle strength (N). * There was a significant difference between 40% 1RM and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 9
<p>Comparisons of the different force on knee joint (N). * There was a significant difference between 40% 1RM and 80% 1RM (<span class="html-italic">p</span> &lt; 0.05). <sup>★</sup> There was a significant difference between 40% 1RM and 60% 1RM (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 10
<p>Cloud chart of tibia stress distribution of deep squat.</p>
Figure 11
<p>Cloud chart of tibia stress distribution of hard pull.</p>
14 pages, 4515 KiB  
Article
Optical Fiber Pyrometer Designs for Temperature Measurements Depending on Object Size
by Arántzazu Núñez-Cascajero, Alberto Tapetado, Salvador Vargas and Carmen Vázquez
Sensors 2021, 21(2), 646; https://doi.org/10.3390/s21020646 - 19 Jan 2021
Cited by 13 | Viewed by 3368
Abstract
The modelling of temperature measurements using optical fiber pyrometers for different hot object sizes with new generalized integration limits is presented. The closed equations for the calculus of the radiated power that is coupled to the optical fiber for two specific scenarios are [...] Read more.
The modelling of temperature measurements using optical fiber pyrometers for different hot object sizes, with new generalized integration limits, is presented. Closed-form equations for calculating the radiated power coupled into the optical fiber in two specific scenarios are proposed. Accurate predictions of the critical distance for avoiding errors in the optical fiber end location, depending on fiber type and object size, are reported to guide good designs. A detailed model for estimating errors depending on target size and distance is provided. Two-color fiber pyrometers as a general solution are also discussed. Full article
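A two-color pyrometer exploits the fact that the ratio of Planck radiances at two wavelengths depends on temperature but, for a grey body, not on emissivity or on target size and distance. A sketch of that ratio; the 1310/1550 nm channel pair matches the wavelengths appearing in the figures, while the black-body assumption and the inversion by monotonicity are simplifications:

```python
import numpy as np

H  = 6.626e-34   # Planck constant (J s)
C  = 2.998e8     # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wl, T):
    """Black-body spectral radiance at wavelength wl (m) and temperature T (K)."""
    return (2 * H * C**2 / wl**5) / (np.exp(H * C / (wl * KB * T)) - 1)

def color_ratio(T, wl1=1310e-9, wl2=1550e-9):
    """Two-color ratio: emissivity and geometric coupling cancel for a grey body."""
    return planck_radiance(wl1, T) / planck_radiance(wl2, T)

# the ratio grows monotonically with temperature, so it can be inverted
r_500  = color_ratio(773.15)    # 500 °C
r_1000 = color_ratio(1273.15)   # 1000 °C
```

Because the geometric coupling cancels in the ratio, the two-color scheme tolerates small or poorly positioned targets better than a single-channel pyrometer.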
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1
<p>Set up of the target surface, variables and the optical fiber of the pyrometer. Adapted from [<a href="#B37-sensors-21-00646" class="html-bibr">37</a>].</p>
Figure 2
<p>Types of integrations over (<b>a</b>) only arcs, (<b>b</b>) only circumferences, (<b>c</b>) circumferences and arcs.</p>
Figure 3
<p>Case I integration sequence as <span class="html-italic">r</span> increases (<b>a</b>) only circumferences, (<b>b</b>) circumferences and arcs, (<b>c</b>) only arcs.</p>
Figure 4
<p>Case II integration sequence as <span class="html-italic">r</span> increases (<b>a</b>) circumferences and arcs, (<b>b</b>) circumferences and arcs with a different upper u limit, (<b>c</b>) only arcs.</p>
Figure 5
<p>Case III integration (<b>a</b>) circumferences and arcs, (<b>b</b>) only arcs, (<b>c</b>) only arcs with a different upper u limit.</p>
Figure 6
<p>The proposed model simulations with new integration limits. Power measured by the pyrometer for different distances to the target at 2000 °C and using the parameters shown in <a href="#sensors-21-00646-t002" class="html-table">Table 2</a>.</p>
Figure 7
<p>Relations between the numerical aperture radius <span class="html-italic">r<sub>NA</sub></span> and the target radius <span class="html-italic">r<sub>T</sub></span> for different <span class="html-italic">t</span> distances.</p>
Figure 8
<p>Left y-axis and bottom x-axis: spatial resolution as a function of the target-fiber distance for three different optical fibers (core diameter/NA): black: 9 μm/0.14, red: 62.5μm/0.275, blue: 200 μm/0.2. Right y-axis and top x-axis: <span class="html-italic">t<sub>c</sub></span> as a function of the object size for those fibers: black squares: 9 μm/0.14, red triangles: 62.5 μm/0.275, blue pentagons: 200 μm/0.2.</p>
Figure 9
<p>(<b>a</b>) Pyrometer optical power at 1550 nm versus the target-fiber distance for different target sizes at 1000 °C (black squares) 5 μm, (red dots) 10 μm, (blue △ triangles) 50 μm, (pink ▽ triangles) 100 μm. (<b>b</b>) Pyrometer optical power at 1550 nm versus distance (filled markers) at a constant temperature of 1000 °C for different target sizes and versus temperature (unfilled markers) at a constant distance of 0.3 mm for an infinite surface.</p>
Figure 10
<p>Pyrometer optical power versus the target-fiber distance for different target sizes including <span class="html-italic">r<sub>T</sub></span> &gt; 4<span class="html-italic">r<sub>F</sub></span> in the 1550 nm channel at 1000 °C.</p>
Figure 11
<p>Pyrometer optical power versus the target-fiber distance for different target sizes in a single channel around 1550 nm at 1000 °C.</p>
Figure 12
<p>Effect of the target size in the energy recovered at 1000 °C by the 1310 nm channel for different distances. Right y-axis ratio (1310/1550 nm) (unfilled markers) for the different target sizes.</p>
Figure 13
<p>Energy recovered by the optical fiber at both wavelength bands for different target sizes (5 and 10 μm) at (<b>a</b>) 500 °C and (<b>b</b>) 1000 °C. The right y-axis in both cases shows the ratio (unfilled markers) between both wavelength channels for the different target sizes.</p>
10 pages, 6415 KiB  
Letter
Using Geiger Dosimetry EKO-C Device to Detect Ionizing Radiation Emissions from Building Materials
by Maciej Gliniak, Tomasz Dróżdż, Sławomir Kurpaska and Anna Lis
Sensors 2021, 21(2), 645; https://doi.org/10.3390/s21020645 - 18 Jan 2021
Cited by 2 | Viewed by 2360
Abstract
The purpose of the article is to check and assess what radiation is emitted by particular building materials with the passage of time. The analysis was performed with the EKO-C dosimetry device from Polon-Ekolab. The scope of the work included research on sixteen [...] Read more.
The purpose of this article is to check and assess what radiation particular building materials emit with the passage of time. The analysis was performed with the EKO-C dosimetry device from Polon-Ekolab. The scope of the work included research on sixteen selected construction materials, divided into five groups. The analysis of the results showed that samples such as bricks (first group) and hollow blocks (second group) emitted the highest radiation among the tested objects. When comparing these materials, the highest value, 15.76 mSv·yr−1, was recorded for the ceramic block. Among the bricks, the highest radiation value, 11.3 mSv·yr−1, was shown by a full clinker brick. Insulation materials and finishing boards are two other groups of building materials that were measured. They are characterised by a low level of radiation. Among the thermal insulation materials, the highest value, 4.463 mSv·yr−1, was shown by graphite polystyrene, while among finishing boards, the highest value, 3.76 mSv·yr−1, was recorded for gypsum board. Comparing the obtained test results to the requirements of the Regulation of the Council of Ministers on ionizing radiation dose limits applicable in Poland, it can be noted that the samples examined individually do not pose a radiation risk to humans. When working with all types of samples, the radiation doses are added up; according to the guidelines of the regulation, the total radiation dose does not exceed 50 mSv·yr−1 and does not constitute a threat to human health. Full article
Show Figures

Figure 1
<p>Examples of building materials used in the research.</p>
Figure 2
<p>Scheme of the measuring stand.</p>
15 pages, 6095 KiB  
Article
Implementation of Neuro-Memristive Synapse for Long-and Short-Term Bio-Synaptic Plasticity
by Zubaer I. Mannan, Hyongsuk Kim and Leon Chua
Sensors 2021, 21(2), 644; https://doi.org/10.3390/s21020644 - 18 Jan 2021
Cited by 20 | Viewed by 6726
Abstract
In this paper, we propose a complex neuro-memristive synapse that exhibits the physiological acts of synaptic potentiation and depression of the human-brain. Specifically, the proposed neuromorphic synapse efficiently imitates the synaptic plasticity, especially long-term potentiation (LTP) and depression (LTD), and short-term facilitation (STF) [...] Read more.
In this paper, we propose a complex neuro-memristive synapse that exhibits the physiological acts of synaptic potentiation and depression of the human brain. Specifically, the proposed neuromorphic synapse efficiently imitates the synaptic plasticity phenomena of a biological synapse, especially long-term potentiation (LTP) and depression (LTD), and short-term facilitation (STF) and depression (STD). As in a biological synapse, short-term facilitation or depression (STF or STD) and long-term potentiation or depression (LTP or LTD) of the memristive synapse are distinguished on the basis of time or repetition of input cycles. The proposed synapse is also designed to exhibit the effect of the reuptake and neurotransmitter diffusion processes of a bio-synapse. In addition, it exhibits the distinct bio-realistic attributes of a neuron, i.e., strong stimulation, an exponentially decaying conductance trace, and voltage-dependent synaptic responses. The neuro-memristive synapse is designed in SPICE and its bio-realistic functionalities are demonstrated via various simulations. Full article
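The volatile short-term behavior described above can be caricatured with a leaky-integrator toy model: each input pulse increments the synaptic weight, which otherwise decays exponentially back toward its resting value, so closely spaced pulses facilitate while the effect fades once stimulation stops. A sketch; the decay constant and increment are illustrative and not taken from the proposed SPICE circuit:

```python
import numpy as np

def simulate_stf(pulse_times, t_end=1.0, dt=1e-3, tau=0.1, dw=0.2):
    """Toy short-term facilitation: exponential decay with per-spike
    weight increments (tau and dw are illustrative parameters)."""
    t = np.arange(0.0, t_end, dt)
    w = np.zeros_like(t)
    pulses = set(np.round(np.asarray(pulse_times) / dt).astype(int))
    for i in range(1, len(t)):
        w[i] = w[i - 1] * np.exp(-dt / tau)  # conductance trace decays
        if i in pulses:
            w[i] += dw                        # facilitation on a spike
    return t, w

# a burst of four closely spaced pulses facilitates, then the weight
# decays back toward rest (volatile, short-term behavior)
t, w = simulate_stf([0.1, 0.12, 0.14, 0.16])
```

In the long-term regime, by contrast, repeated stimulation would leave a nonvolatile change in the memristive state rather than a trace that relaxes away.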
(This article belongs to the Section Biomedical Sensors)
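The abstract's key mechanism, that short- and long-term plasticity are distinguished by the repetition of input cycles, with a volatile, exponentially decaying component, can be illustrated with a toy model. This is a behavioral sketch only, not the authors' SPICE circuit; all parameters (decay constant, step size, repetition threshold) are hypothetical.

```python
import math

def simulate_synapse(n_pulses, dt=1.0, tau=5.0, step=0.2,
                     ltp_threshold=10, baseline=1.0):
    """Toy synaptic weight model (illustrative, not the paper's SPICE circuit).

    Each input pulse increments the weight; between pulses the weight decays
    exponentially toward a baseline (the volatile 'memory fading' trace).
    Once the pulse count exceeds ltp_threshold, the baseline itself is
    raised, mimicking the repetition-dependent switch from short-term
    facilitation to long-term potentiation.
    """
    w = baseline
    for k in range(1, n_pulses + 1):
        w += step                      # facilitation on each pulse
        if k > ltp_threshold:          # enough repetitions -> consolidate
            baseline += 0.5 * step     # long-term component persists
        w = baseline + (w - baseline) * math.exp(-dt / tau)  # volatile decay
    # after stimulation ends, the short-term component fully decays,
    # leaving only the (possibly raised) baseline
    return baseline

print(simulate_synapse(5))    # few pulses: weight relaxes to baseline (STF only)
print(simulate_synapse(20))   # many pulses: raised baseline persists (LTP)
```

With few pulses the weight relaxes back to its original baseline (short-term, volatile), while sustained stimulation leaves a persistently elevated baseline (long-term, nonvolatile), mirroring the repetition-based distinction described in the abstract.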
Show Figures
Figure 1. (a) Diagram of neuron–neuron communication in a neuronal network, (b) steps of bio-synaptic transmission, and (c) neurotransmitters reuptake process.
Figure 2. Proposed neuro-memristive synapse.
Figure 3. Normal synaptic response (NSR) of the proposed memristive synapse: (a) current stimulus (I_in), (b) memory fading effect (V_mfe) and depression (V_DEP) signals, artificial (c) synaptic strength (M_syn), and (d) synaptic voltage (V_syn).
Figure 4. Long-term potentiation (LTP) and long-term depression (LTD) synaptic response of the neuro-memristive synapse: (a) input stimulus (I_in) and current stimulus passing through the proposed synapse (I_mem), (b) control signals V_mfe and V_DEP, artificial synaptic (c) strength (M_syn), and (d) voltage (V_syn).
Figure 5. Volatile short-term facilitation (STF) and short-term depression (STD), and nonvolatile synaptic responses of the proposed circuit: (a) input stimulus (I_in) and current stimulus passing through the proposed synapse (I_mem), (b) V_mfe and V_DEP signals, artificial synaptic (c) efficacy (M_syn), and (d) voltage (V_syn).
Figure 6. Comparison of synaptic weakening (M_syn) between the different modes of the proposed neuro-memristive synapse.
Figure 7. Strong stimulation response of the neuro-memristive synapse: (a) strong stimulus (I_in), (b) V_mfe and V_DEP signals, artificial (c) synaptic strength (M_syn), and (d) synaptic voltage (V_syn).