Sensors Applications on Emotion Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (10 June 2024) | Viewed by 5197

Special Issue Editors


Guest Editor
Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung City 404348, Taiwan
Interests: IoT; social computing

Guest Editor
Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung City 404348, Taiwan
Interests: big data analysis; social network mining

Special Issue Information

Dear Colleagues,

Emotion recognition is an active research subject in various fields that use human emotional reactions as a signal for marketing, automation, entertainment, technical equipment, and human–robot interaction. Sensors are used to detect human emotions and have driven many recent developments. The recognition and evaluation of emotions are complex tasks due to their interdisciplinary nature; however, sensors provide much of the information needed for this recognition and evaluation. Many scientific disciplines, including psychology, medical sciences, data analysis, and mechatronics, are involved in research into sensor applications for emotion recognition.

This Special Issue aims to bring together researchers and practitioners working on the design, development, and evaluation of sensor-based emotion recognition systems. Its objective is to provide a comprehensive view of the latest research and advances in sensor applications for emotion recognition, and it offers a framework for discussing and studying sensor applications from the perspective of emotion recognition. We invite researchers to contribute comprehensive reviews, case studies, and research articles on theoretical and methodological interdisciplinary sensor applications for emotion recognition. In particular, sensor application technologies specifically devised, adapted, or tailored to address problems in emotion recognition are welcome.

Prof. Dr. Jason C. Hung
Dr. Neil Yuwen Yen
Dr. Hao-Shang Ma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • body posture
  • facial expression
  • gesture analysis
  • electroencephalography (EEG)
  • electrocardiography (ECG)
  • galvanic skin response (GSR)
  • heart rate variability (HRV)
  • sensor technologies for emotion recognition (e.g., physiological sensors, facial recognition, voice analysis)
  • machine learning and artificial intelligence techniques for emotion recognition
  • wearable sensors and Internet of Things (IoT) devices for emotion recognition
  • ethical and privacy issues related to sensor-based emotion recognition systems
  • applications of sensor-based emotion recognition in different domains (e.g., healthcare, education, entertainment, marketing)
  • user studies and evaluations of sensor-based emotion recognition systems

Published Papers (6 papers)


Research

21 pages, 10469 KiB  
Article
RGB Color Model: Effect of Color Change on a User in a VR Art Gallery Using Polygraph
by Irena Drofova, Paul Richard, Martin Fajkus, Pavel Valasek, Stanislav Sehnalek and Milan Adamek
Sensors 2024, 24(15), 4926; https://doi.org/10.3390/s24154926 - 30 Jul 2024
Viewed by 351
Abstract
This paper presents computer and color vision research focusing on human color perception in VR environments. A VR art gallery with digital twins of original artworks is created for this experiment. In this research, the field of colorimetry and the application of the L*a*b* and RGB color models are applied. The inter-relationships of the two color models are applied to create a color modification of the VR art gallery environment using C# Script procedures. This color-edited VR environment works with a smooth change in color tone in a given time interval. At the same time, a sudden change in the color of the RGB environment is defined in this interval. This experiment aims to record a user’s reaction embedded in a VR environment and the effect of color changes on human perception in a VR environment. This research uses lie detector sensors that record the physiological changes of the user embedded in VR. Five sensors are used to record the signal. An experiment on the influence of the user’s color perception in a VR environment using lie detector sensors has never been conducted. This research defines the basic methodology for analyzing and evaluating the recorded signals from the lie detector. The presented text thus provides a basis for further research in the field of colors and human color vision in a VR environment and lays an objective basis for use in many scientific and commercial areas.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: Color model and gamut: (a) RGB and color space sRGB in the Chromaticity Diagram CIE 1931 and (b) standardized color scale and color position in the Chromaticity Diagram CIE 1931 (CIE 1976) [21].
Figure 2: Color model and gamut: (a) VR headset Oculus Quest 2 and color space Rec.2020 in the Chromaticity Diagram CIE 1931 and (b) L*a*b* color model and L*a*b* gamut in the Chromaticity Diagram CIE 1931.
Figure 3: Digitization and creation of a digital twin of a work of art: (a) an original image in an art gallery environment and (b) a digitized art image for a VR environment.
Figure 4: The virtual art gallery environment.
Figure 5: The initial white background of the VR environment: (a) white is the first color to immerse the user in the VR environment before color modification and (b–d) reference images of the smooth color tone change in the VR gallery environment.
Figure 6: Static direct absolute RGB process colors: (a) direct absolute background color R (255), (b) direct absolute background color G (255), and (c) direct absolute background color B (255).
Figure 7: User connected to lie detector sensors in a virtual gallery environment: (1) and (3) computing and display units, (2) VR headset, (4) and (5) Pneumo Chest Assembly, (6) Photoelectric Plethysmograph, and (7) Electrodermal Activity (EDA).
Figure 8: Detail of progress captured by lie detector sensors in a virtual gallery environment: (P1) and (P2) Pneumo Chest Assembly, (PL) Photoelectric Plethysmograph, (GS) Electrodermal Activity (EDA), and (SE) Activity Sensors.
Figure 9: Lie detector signals measured by the sensors over the total measurement time interval; Figure 8 shows the details of the individual sensors.
Figure 10: Graphic representation of signals measured by polygraph sensors.
Figure 11: Graphic representation of the GS sensor signal (EDA).
Figure 12: Graphic representation of the signal from the P1 Abdominal Respiration and P2 Thoracic Respiration sensors.
Figure 13: Graphic representation of the signal from the PL Photoelectric Plethysmograph and SE Activity Sensor.
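
To make the colorimetric relationship the abstract leans on concrete, the short Python sketch below shows the standard sRGB-to-CIE-L*a*b* conversion (D65 white point). It is an illustrative helper only, not the authors' C# procedures, and the constants are the usual sRGB/CIE definitions rather than values taken from the paper.

# Hypothetical helper illustrating the standard sRGB -> CIE L*a*b* conversion (D65 white).
# Constants follow the sRGB and CIE definitions; this is not the paper's C# code.

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values (0-255) to CIE L*a*b* under a D65 white point."""
    def inv_gamma(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)

    # Linear RGB -> CIE XYZ (sRGB primaries, D65 white)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # Normalize by the D65 reference white and apply the CIE f() nonlinearity
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_star = 200 * (fy - fz)
    return L, a, b_star


if __name__ == "__main__":
    # Pure red background, as in the "direct absolute background color R (255)" condition
    print(srgb_to_lab(255, 0, 0))  # roughly (53.2, 80.1, 67.2)
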
12 pages, 1226 KiB  
Article
Speech Emotion Recognition Incorporating Relative Difficulty and Labeling Reliability
by Youngdo Ahn, Sangwook Han, Seonggyu Lee and Jong Won Shin
Sensors 2024, 24(13), 4111; https://doi.org/10.3390/s24134111 - 25 Jun 2024
Viewed by 374
Abstract
Emotions in speech are expressed in various ways, and the speech emotion recognition (SER) model may perform poorly on unseen corpora that contain different emotional factors from those expressed in training databases. To construct an SER model robust to unseen corpora, regularization approaches or metric losses have been studied. In this paper, we propose an SER method that incorporates relative difficulty and labeling reliability of each training sample. Inspired by the Proxy-Anchor loss, we propose a novel loss function which gives higher gradients to the samples for which the emotion labels are more difficult to estimate among those in the given minibatch. Since the annotators may label the emotion based on the emotional expression which resides in the conversational context or other modality but is not apparent in the given speech utterance, some of the emotional labels may not be reliable and these unreliable labels may affect the proposed loss function more severely. In this regard, we propose to apply label smoothing for the samples misclassified by a pre-trained SER model. Experimental results showed that the performance of the SER on unseen corpora was improved by adopting the proposed loss function with label smoothing on the misclassified data.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: Block diagrams of speech emotion recognition models incorporating (a) the Proxy-Anchor loss L_PA and (b) the proposed relative difficulty-aware loss L_RD. L_CE denotes the cross-entropy loss and P represents the set of proxies. The models consist of fully connected (FC) layers. x and y represent the input feature and the target label.
Figure 2: An example in the IEMOCAP dataset for which the emotion is not clear in the current speech utterance but can be inferred from the conversational context.
Figure 3: The procedure of input feature processing for speech emotion recognition with the IS10, wav2vec, and BERT feature set.
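
As a rough illustration of the labeling-reliability idea in the abstract, the NumPy sketch below applies label smoothing only to training samples that a pre-trained SER model misclassifies, leaving reliable labels hard. The function and variable names are hypothetical, and the relative difficulty-aware loss itself is not reproduced here.

# A minimal NumPy sketch of the label-handling idea: smooth only the labels that a
# pre-trained SER model misclassifies, treating them as less reliable annotations.
import numpy as np

def smooth_unreliable_labels(y_true, y_pred_pretrained, num_classes, eps=0.1):
    """Return one-hot targets, smoothed only where the pre-trained model disagrees."""
    one_hot = np.eye(num_classes)[y_true]
    misclassified = y_pred_pretrained != y_true           # boolean mask of "unreliable" labels
    smoothed = one_hot * (1.0 - eps) + eps / num_classes  # standard label smoothing
    one_hot[misclassified] = smoothed[misclassified]      # keep hard labels for reliable samples
    return one_hot

# Example: 4 emotion classes, two of five labels judged unreliable by the pre-trained model
targets = smooth_unreliable_labels(
    y_true=np.array([0, 1, 2, 3, 1]),
    y_pred_pretrained=np.array([0, 2, 2, 3, 0]),
    num_classes=4,
)
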
14 pages, 6945 KiB  
Article
Portable Facial Expression System Based on EMG Sensors and Machine Learning Models
by Paola A. Sanipatín-Díaz, Paul D. Rosero-Montalvo and Wilmar Hernandez
Sensors 2024, 24(11), 3350; https://doi.org/10.3390/s24113350 - 23 May 2024
Viewed by 775
Abstract
One of the biggest challenges of computers is collecting data from human behavior, such as interpreting human emotions. Traditionally, this process is carried out by computer vision or multichannel electroencephalograms. However, they comprise heavy computational resources, far from final users or where the dataset was made. On the other hand, sensors can capture muscle reactions and respond on the spot, preserving information locally without using robust computers. Therefore, the research subject is the recognition of the six primary human emotions using electromyography sensors in a portable device. They are placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that when working with the CortexM0 microcontroller, enough computational capabilities were achieved to store a deep learning model with a classification score of 92%. Furthermore, we demonstrate the necessity of collecting data from natural environments and how they need to be processed by a machine learning pipeline.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: Location of the three EMG sensors used to collect data. Sensor 1: Orbicularis oculi and Corrugator supercilii muscles; Sensor 2: Zygomaticus major; Sensor 3: Depressor anguli oris and Masseter muscles. Green section: top face muscles; blue section: middle face muscles; purple section: bottom face muscles.
Figure 2: Electronic system design. Step (A): sensors gather data and analog filtering is carried out. Step (B): the microcontroller receives the EMG signals and converts them into digital signals to send to the computer. Step (C): the computer stores the EMG signals to train ML models. Step (D): the inference is allocated to the microcontroller.
Figure 3: Samples reshaped into a one-dimensional array.
Figure 4: Anger representation by EMG signals. Y-axis: analog-to-digital converter resolution; X-axis: sample size. Sensor 1: green trace; Sensor 2: red trace; Sensor 3: blue trace.
Figure 5: Multidimensional EMG signals. Y-axis: analog-to-digital converter resolution; X-axis: sample size. Sensor 1: green trace; Sensor 2: red trace; Sensor 3: blue or black trace.
Figure 6: One-dimensional EMG signal. Y-axis: analog-to-digital converter resolution; X-axis: sample size.
Figure 7: Digital electronic system. Blue rectangle: sensors’ board; small blue and red boxes: jumpers; red rectangle: noise filter board placed over the microcontroller.
Figure 8: Emotion recognition electronic case. (a) Sensors’ PCB board. (b) Electronic case with electrodes.
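
The scikit-learn sketch below illustrates the general shape of the machine learning pipeline described in the abstract: windows from three facial EMG channels are flattened into one-dimensional samples (as in Figure 3) and used to train a small classifier for the six primary emotions. The synthetic data, window length, and model choice are placeholders, not the paper's setup or its CortexM0 deployment.

# Illustrative pipeline on stand-in EMG windows; not the authors' dataset or model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows, n_sensors, window_len = 600, 3, 100
X = rng.normal(size=(n_windows, n_sensors, window_len))   # stand-in for EMG windows
y = rng.integers(0, 6, size=n_windows)                    # six emotion labels

X_flat = X.reshape(n_windows, -1)                         # flatten to 1-D feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))        # near chance on random data
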
24 pages, 2189 KiB  
Article
Generating Synthetic Health Sensor Data for Privacy-Preserving Wearable Stress Detection
by Lucas Lange, Nils Wenzlitschke and Erhard Rahm
Sensors 2024, 24(10), 3052; https://doi.org/10.3390/s24103052 - 11 May 2024
Viewed by 882
Abstract
Smartwatch health sensor data are increasingly utilized in smart health applications and patient monitoring, including stress detection. However, such medical data often comprise sensitive personal information and are resource-intensive to acquire for research purposes. In response to this challenge, we introduce the privacy-aware synthetization of multi-sensor smartwatch health readings related to moments of stress, employing Generative Adversarial Networks (GANs) and Differential Privacy (DP) safeguards. Our method not only protects patient information but also enhances data availability for research. To ensure its usefulness, we test synthetic data from multiple GANs and employ different data enhancement strategies on an actual stress detection task. Our GAN-based augmentation methods demonstrate significant improvements in model performance, with private DP training scenarios observing an 11.90–15.48% increase in F1-score, while non-private training scenarios still see a 0.45% boost. These results underline the potential of differentially private synthetic data in optimizing utility–privacy trade-offs, especially with the limited availability of real training samples. Through rigorous quality assessments, we confirm the integrity and plausibility of our synthetic data, which, however, are significantly impacted when increasing privacy requirements.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: A brief description of the basic GAN architecture: the generator G creates an artificial sample x′ using a random noise input z. These artificial samples x′ and the real samples x are fed into the discriminator D, which categorizes each sample as either real or artificial. The classification results are used to compute the loss, which is then used to update both the generator and the discriminator through backpropagation.
Figure 2: Our experimental methods are illustrated by the given workflow. In the first step, we load and pre-process the WESAD dataset. We then train different GAN models for our data augmentation purposes. Each resulting model generates synthetic data, which are evaluated on data quality and, finally, compared on their ability to improve our stress detection models.
Figure 3: The individual signal modalities plotted for Subject ID4 after resampling, relabeling, and normalizing the data. The orange line shows the label, which equals 0 for non-stress and 1 for stress.
Figure 4: The spectrum plots from the FFT calculations of all subwindows in a 60-s window (a), and the plot of the averaged spectrum representation over these subwindows (b).
Figure 5: Visualization of synthetic data from our GANs using PCA and t-SNE to cluster data points against original WESAD data. Generated data are more realistic when they fit the original data points.
Figure 6: The signal contributions to the two PCs of our PCA model fitted on the original WESAD data. A high positive or negative contribution signifies that the feature greatly influences the variance explained by that component.
Figure 7: The matrices showing the Pearson correlation between the available signals. We compare real WESAD data and data from each of our GANs. In each matrix, the diagonal and all values to the right of it represent the correlation between signals; a higher value signifies a stronger correlation. The lower half of the matrices, left of the diagonal, shows the corresponding p-values for the signal correlation; a lower p-value translates to a higher statistical significance.
Figure 8: Histograms showing the distribution density of EDA signal values compared between original and generated data. The y-axis gives the density as y = [0, 12], and on the x-axis, the normalized signal value is x = [0, 1]. The plots for all signal modalities are located in Figure A1 of Appendix A.
Figure 9: The results of our baseline experiment on stress detection using spectral power features. We employ a Logistic Regression (LR) model and test the effectiveness of various signal combinations.
Figure A1: An overview of the histograms giving the distribution density of signal values, comparing generated and original data. This covers the signals omitted from Figure 8, which solely focused on EDA.
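
The PyTorch sketch below shows the generic generator/discriminator training loop summarized in Figure 1, applied to placeholder multi-channel sensor windows. It is not the authors' GAN architectures, and it omits the Differential Privacy (DP-SGD) safeguards and WESAD preprocessing they describe.

# Generic GAN loop on stand-in sensor windows; shapes and networks are assumptions.
import torch
import torch.nn as nn

latent_dim, signal_dim = 32, 6 * 60        # e.g., 6 sensor channels x 60 samples (assumed shape)

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, signal_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(signal_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.rand(512, signal_dim) * 2 - 1            # stand-in for normalized sensor windows

for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: real samples labeled 1, generated samples labeled 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make D label generated samples as real
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(100, latent_dim)).detach()        # synthetic windows for augmentation
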
13 pages, 865 KiB  
Article
Electroencephalogram-Based Facial Gesture Recognition Using Self-Organizing Map
by Takahiro Kawaguchi, Koki Ono and Hiroomi Hikawa
Sensors 2024, 24(9), 2741; https://doi.org/10.3390/s24092741 - 25 Apr 2024
Viewed by 661
Abstract
Brain–computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain’s status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed facial gesture recognition uses α, β, and θ power bands of the EEG signals as the features of the gesture. The SOM-Hebb classifier is utilized to classify the feature vectors. We utilized the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments. The recognition accuracy of the system ranged from 76.90% to 97.57% depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate when compared to other EEG-based recognition systems. The implemented online recognition system was developed using MATLAB, and the system took 5.7 s to complete the recognition flow.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: Configuration of the proposed facial recognition system. The system consists of an EEG headset, Brain Flow, and the SOM-Hebb classifier.
Figure 2: SOM-Hebb classifier comprising an SOM with M × M neurons and a Hebbian learning network.
Figure 3: α band strength for eyes closed and open.
Figure 4: Head plots for (A) eyes wide open, (B) left teeth clenched, and (C) right teeth clenched. The deeper the red in a region, the more brain activity is occurring.
Figure 5: Acquisition of training data and offline training of the SOM-Hebb classifier.
Figure 6: Online facial gesture recognition system. Colored circles are the neurons associated with gesture classes (black: G1, white: G2, blue: G3, red: G4, green: G5, purple: G6, brown: G7, yellow: winner neuron).
Figure 7: Illustrations indicating gestures.
Figure 8: Neuron map of the SOM trained for (A) two facial gestures, (B) four facial gestures, (C) five facial gestures, and (D) seven facial gestures.
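
As a hedged illustration of the feature extraction mentioned in the abstract, the SciPy/NumPy sketch below estimates θ, α, and β band powers per EEG channel with Welch's method. The band limits and sampling rate are common conventions assumed here, not values taken from the paper, and the SOM-Hebb classifier itself is not shown.

# Band-power features per EEG channel; band limits and sampling rate are assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz (assumed limits)

def band_power_features(eeg, fs=250):
    """eeg: array of shape (n_channels, n_samples); returns a flat band-power feature vector."""
    feats = []
    for channel in eeg:
        f, psd = welch(channel, fs=fs, nperseg=fs * 2)           # 2-s Welch segments
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            feats.append(np.trapz(psd[mask], f[mask]))           # integrate PSD over the band
    return np.asarray(feats)

# Example: 8-channel, 4-second synthetic recording -> 24-dimensional feature vector
features = band_power_features(np.random.randn(8, 4 * 250))
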
17 pages, 3003 KiB  
Article
The Difference in the Assessment of Knee Extension/Flexion Angles during Gait between Two Calibration Methods for Wearable Goniometer Sensors
by Tomoya Ishida and Mina Samukawa
Sensors 2024, 24(7), 2092; https://doi.org/10.3390/s24072092 - 25 Mar 2024
Viewed by 950
Abstract
Frontal and axial knee motion can affect the accuracy of the knee extension/flexion motion measurement using a wearable goniometer. The purpose of this study was to test the hypothesis that calibrating the goniometer on an individual’s body would reduce errors in knee flexion angle during gait, compared to bench calibration. Ten young adults (23.2 ± 1.3 years) were enrolled. Knee flexion angles during gait were simultaneously assessed using a wearable goniometer sensor and an optical three-dimensional motion analysis system, and the absolute error (AE) between the two methods was calculated. The mean AE across a gait cycle was 2.4° (0.5°) for the on-body calibration, and the AE was acceptable (<5°) throughout a gait cycle (range: 1.5–3.8°). The mean AE for the on-bench calibration was 4.9° (3.4°) (range: 1.9–13.6°). Statistical parametric mapping (SPM) analysis revealed that the AE of the on-body calibration was significantly smaller than that of the on-bench calibration during 67–82% of the gait cycle. The results indicated that the on-body calibration of a goniometer sensor had acceptable and better validity compared to the on-bench calibration, especially for the swing phase of gait.
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
Figures

Figure 1: Experimental protocol.
Figure 2: On-bench and on-body calibrations. In both calibration methods, knee flexion angles of 0° and 90° were confirmed by a single examiner using a universal goniometer and registered on a mobile application. The application allowed the examiner to manipulate the black, red, and blue circles on the screen and to calibrate at any two angles (the knee in extension and flexion), which in this study were defined as 0° and 90°.
Figure 3: Thigh (left) and shank (right) segment coordinate systems in the optical motion analysis systems. The red, green, and blue axes indicate the X-, Y-, and Z-axes, respectively.
Figure 4: Comparison of knee flexion angles during a gait cycle acquired by three-dimensional motion analysis and the wearable goniometer sensor with on-body calibration (a) and with on-bench calibration (b). The results of a statistical parametric mapping (SPM) paired t-test comparing the two methods with on-body calibration (c) and with on-bench calibration (d). Red dashed lines in (c,d) indicate significant differences at p < 0.05; * indicates thresholds of significant difference.
Figure 5: Comparison of knee flexion angles during a gait cycle acquired by three-dimensional motion analysis and the wearable goniometer sensor with on-body calibration for each participant.
Figure 6: Comparison of knee flexion angles during a gait cycle acquired by three-dimensional motion analysis and the wearable goniometer sensor with on-bench calibration for each participant.
Figure 7: Comparison of the absolute errors (AEs) between results obtained using three-dimensional optical motion analysis and a wearable goniometer sensor with on-body calibration and on-bench calibration. Black dashed lines indicate the interpretation of AEs representing good (≤2°), acceptable (≤5°), tolerable (≤10°), and unacceptable (>10°) accuracy. Statistical parametric mapping (SPM) paired t-test results are shown with red dashed lines indicating statistical significance at p < 0.05; * indicates thresholds of significant difference.
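
The small NumPy sketch below illustrates the error metric the abstract reports: the absolute difference between goniometer and optical motion-capture knee flexion angles at each point of a time-normalized gait cycle, plus its mean. The data and resampling choices are synthetic placeholders, and the SPM paired t-test is not reproduced.

# Absolute error between two angle traces over a time-normalized gait cycle (toy data).
import numpy as np

def gait_cycle_abs_error(goniometer_deg, mocap_deg, n_points=101):
    """Resample both angle traces to 0-100% of the gait cycle and return the AE curve."""
    t_src_g = np.linspace(0, 1, len(goniometer_deg))
    t_src_m = np.linspace(0, 1, len(mocap_deg))
    t = np.linspace(0, 1, n_points)
    gonio = np.interp(t, t_src_g, goniometer_deg)
    mocap = np.interp(t, t_src_m, mocap_deg)
    return np.abs(gonio - mocap)

# Synthetic example: a goniometer trace with noise added to a toy reference curve
cycle = np.linspace(0, 1, 200)
mocap = 30 - 25 * np.cos(2 * np.pi * cycle)            # toy knee flexion curve (degrees)
gonio = mocap + np.random.normal(0, 1.5, size=200)     # add measurement noise
ae = gait_cycle_abs_error(gonio, mocap)
print(f"mean AE = {ae.mean():.1f} deg, max AE = {ae.max():.1f} deg")
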