
 
 

Biomedical Data in Human-Machine Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (25 April 2023) | Viewed by 105849

Special Issue Editors


Dr. Aleksandra Kawala-Sterniuk
Guest Editor
Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
Interests: analysis of biomedical data; human–computer interactions; brain–computer interfaces and the use of modern technologies in the diagnosis of neurodegenerative diseases

Prof. Dr. Grzegorz Marcin Wójcik
Guest Editor
Department of Neuroinformatics and Biomedical Engineering, Maria Curie-Sklodowska University, 20-400 Lublin, Poland
Interests: human-machine interfaces; brain-computer interfaces; signal processing; biomedical data; electroencephalography; psychology

Dr. Waldemar Bauer
Guest Editor
Department of Automatic Control and Robotics, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland
Interests: signal processing; human-computer interaction; control; data analysis; fractional systems

Special Issue Information

Dear Colleagues,

This Special Issue aims to introduce the most recent advances in the interpretation of biomedical data and their potential implementation in human-computer interaction. It covers new methods for the interpretation of various types of biomedical data, including model-based biosignal analysis, data interpretation and integration, and medical decision making, extending existing data processing methods and technologies for effective application in clinical environments. The analysis of biomedical data spans a wide range of topics, many of which are represented in this issue.

The submitted papers should cover the following areas:

  1. Analysis of biomedical data, such as, among others, electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), and magnetic resonance imaging (MRI);
  2. Smart cities;
  3. Smart homes;
  4. Brain–computer interfaces;
  5. Human–machine interaction;
  6. Graphomotorics;
  7. Diagnostics of neurodegenerative disorders;
  8. Movement disorders.

Other topics may also be considered for publication. We also welcome concept and review papers.

Dr. Aleksandra Kawala-Sterniuk
Prof. Dr. Grzegorz Marcin Wójcik
Dr. Waldemar Bauer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical data
  • signal processing
  • data analysis
  • human-computer interfaces
  • human-machine interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (21 papers)


Editorial


5 pages, 184 KiB  
Editorial
Editorial: Biomedical Data in Human–Machine Interaction
by Aleksandra Kawala-Sterniuk, Grzegorz Marcin Wójcik and Waldemar Bauer
Sensors 2023, 23(18), 7983; https://doi.org/10.3390/s23187983 - 20 Sep 2023
Viewed by 1318
Abstract
Analysis of biomedical data can provide useful information regarding the human condition; as a result, the analysis of these signals has become one of the most popular diagnostic methods [...] Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)

Research


19 pages, 486 KiB  
Article
Investigating the Impact of Guided Imagery on Stress, Brain Functions, and Attention: A Randomized Trial
by Katarzyna Zemla, Grzegorz Sedek, Krzysztof Wróbel, Filip Postepski and Grzegorz M. Wojcik
Sensors 2023, 23(13), 6210; https://doi.org/10.3390/s23136210 - 7 Jul 2023
Cited by 2 | Viewed by 11800
Abstract
The aim of this study was to investigate the potential impact of guided imagery (GI) on attentional control and cognitive performance and to explore the relationship between guided imagery, stress reduction, alpha brainwave activity, and attentional control using common cognitive performance tests. Executive function was assessed through the use of attentional control tests, including the anti-saccade, Stroop, and Go/No-go tasks. Participants underwent a guided imagery session while their brainwave activity was measured, followed by attentional control tests. The study’s outcomes provide fresh insights into the influence of guided imagery on brain wave activity, particularly in terms of attentional control. The findings suggest that guided imagery has the potential to enhance attentional control by augmenting the alpha power and reducing stress levels. Given the limited existing research on the specific impact of guided imagery on attention control, the study’s findings carry notable significance. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: Research protocol used for data processing of both types of sessions: GI and mental task workloads.
Figure 2: The 14th min choice justification.
Figure 3: The effect of GI on reducing erroneous reactions in the Stroop test is mediated by the alpha power at 14 min. * p < 0.05, ** p < 0.01.
Figure 4: The effect of GI on reducing erroneous reactions in the anti-saccade test is mediated by the alpha power at 14 min. * p < 0.05, ** p < 0.01.
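The mediation findings above rest on estimating alpha-band (roughly 8–13 Hz) power from the EEG signal. The study's actual pipeline is not reproduced on this page, so the following is only a minimal periodogram-based sketch; the function name, sampling rate, and band limits are illustrative assumptions.

```python
import numpy as np

def alpha_band_power(eeg, fs, band=(8.0, 13.0)):
    """Estimate absolute band power of one EEG channel by summing the
    periodogram bins that fall inside the given frequency band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)  # simple periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum())

# Sanity check on synthetic data: a 10 Hz sinusoid (inside the alpha band)
# should carry far more alpha power than low-amplitude broadband noise.
fs = 256
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
alpha_rich = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
noise_only = 0.1 * rng.standard_normal(t.size)
```

In practice a Welch-type estimate over artifact-free epochs would be preferred; the point here is only the band-integration step.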
13 pages, 342 KiB  
Article
Development of Supervised Speaker Diarization System Based on the PyAnnote Audio Processing Library
by Volodymyr Khoma, Yuriy Khoma, Vitalii Brydinskyi and Alexander Konovalov
Sensors 2023, 23(4), 2082; https://doi.org/10.3390/s23042082 - 13 Feb 2023
Cited by 9 | Viewed by 5152
Abstract
Diarization is an important task when working with audio data, as it solves the problem of dividing one analyzed call recording into several speech recordings, each of which belongs to one speaker. Diarization systems segment audio recordings by defining the time boundaries of utterances, and typically use unsupervised methods to group utterances belonging to individual speakers, but they do not answer the question "who is speaking?" On the other hand, there are biometric systems that identify individuals on the basis of their voices, but such systems are designed with the prerequisite that only one speaker is present in the analyzed audio recording. However, some applications involve the need to identify multiple speakers that interact freely in an audio recording. This paper proposes two architectures of speaker identification systems based on a combination of diarization and identification methods, which operate on the basis of segment-level or group-level classification. The open-source PyAnnote framework was used to develop the system. The performance of the speaker identification system was verified through the application of the AMI Corpus open-source audio database, which contains 100 h of annotated and transcribed audio and video data. The research method consisted of four experiments to select the best-performing supervised diarization algorithms on the basis of PyAnnote. The first experiment investigated how the selection of the distance function between vector embeddings affects the reliability of identification of a speaker's utterance in a segment-level classification architecture. The second experiment examined the architecture of cluster-centroid (group-level) classification, i.e., the selection of the best clustering and classification methods. The third experiment investigated the impact of different segmentation algorithms on the accuracy of identifying speaker utterances, and the fourth examined embedding window sizes. Experimental results demonstrated that the group-level approach offered better identification results compared to the segment-level approach, while the latter had the advantage of real-time processing. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: Basic structure of the diarization system with clustering based on PyAnnote.
Figure 2: Architectures of the general-purpose diarization systems based on PyAnnote: (Architecture A) identification via separate segments; (Architecture B) identification based on the group (cluster) of segments, with unsupervised segment clustering.
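The segment-level (Architecture A) and group-level (Architecture B) decision rules described above can be reduced to a toy sketch. This is not PyAnnote code: the helper names are hypothetical, and cosine similarity against enrolled speaker embeddings stands in for the full identification pipeline.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_segment_level(seg_embs, enrolled):
    """Architecture A: label every segment independently by its most
    similar enrolled speaker embedding."""
    names = list(enrolled)
    return [max(names, key=lambda n: cosine(e, enrolled[n])) for e in seg_embs]

def identify_group_level(seg_embs, cluster_ids, enrolled):
    """Architecture B: average the segments of each diarization cluster
    into a centroid, then label the whole cluster at once."""
    names = list(enrolled)
    labels = {}
    for c in set(cluster_ids):
        centroid = np.mean(
            [e for e, k in zip(seg_embs, cluster_ids) if k == c], axis=0)
        labels[c] = max(names, key=lambda n: cosine(centroid, enrolled[n]))
    return [labels[c] for c in cluster_ids]

# Toy data: two enrolled speakers, three segments, two diarization clusters.
enrolled = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
segs = np.array([[0.9, 0.1], [0.8, 0.3], [0.1, 0.95]])
clusters = [0, 0, 1]
```

Averaging before classification (Architecture B) smooths noisy per-segment embeddings, which is consistent with the better identification results reported for the group-level approach.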
20 pages, 370 KiB  
Article
Comparative Study of Fuzzy Rule-Based Classifiers for Medical Applications
by Anna Czmil
Sensors 2023, 23(2), 992; https://doi.org/10.3390/s23020992 - 15 Jan 2023
Cited by 9 | Viewed by 2519
Abstract
The use of machine learning in medical decision support systems can improve diagnostic accuracy and objectivity for clinical experts. In this study, we conducted a comparison of 16 different fuzzy rule-based algorithms applied to 12 medical datasets and real-world data. The results of this comparison showed that the best-performing algorithms in terms of average Matthews correlation coefficient (MCC), area under the curve (AUC), and accuracy (ACC) were a classifier based on fuzzy logic and gene expression programming (GPR), repeated incremental pruning to produce error reduction (Ripper), and the ordered incremental genetic algorithm (OIGA), respectively. We also analyzed the number and size of the rules generated by each algorithm and provided examples to objectively evaluate the utility of each algorithm in clinical decision support. The shortest and most interpretable rules were generated by 1R, GPR, and C45Rules-C. Our research suggests that GPR is capable of generating concise and interpretable rules while maintaining good classification performance, and it may be a valuable algorithm for generating rules from medical data. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: Distribution of the MCC values for each algorithm in all datasets.
Figure 2: Distribution of the AUC values for each algorithm in all datasets.
Figure 3: Distribution of the ACC values for each algorithm in all datasets.
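For reference, two of the metrics averaged in the comparison above can be computed directly from confusion counts. This is a generic binary-classification sketch (labels 0/1 assumed), not the study's evaluation code; AUC additionally requires ranked scores rather than hard labels, so it is omitted here.

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).
    Unlike plain accuracy, MCC stays informative on imbalanced data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Reporting all three metrics, as the study does, guards against an algorithm that scores well on accuracy simply by favoring the majority class.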
29 pages, 11125 KiB  
Article
Advanced Modeling and Signal Processing Methods in Brain–Computer Interfaces Based on a Vector of Cyclic Rhythmically Connected Random Processes
by Serhii Lupenko, Roman Butsiy and Nataliya Shakhovska
Sensors 2023, 23(2), 760; https://doi.org/10.3390/s23020760 - 9 Jan 2023
Cited by 9 | Viewed by 2418
Abstract
This study substantiates a new mathematical model of a vector of electroencephalographic signals, registered under the conditions of multiple repetitions of the mental control influences of a brain–computer interface operator, in the form of a vector of cyclic rhythmically connected random processes. By taking into account the stochasticity and cyclicity, as well as the variability and commonality of the rhythm of the investigated signals, this model has a number of advantages over the known models. The new model opens the way for the study of multidimensional distribution functions and of initial, central, and mixed moment functions of higher order, both for each electroencephalographic signal separately and for their respective compatible probabilistic characteristics, among which the most informative characteristics can be selected. This provides an increase in accuracy in the detection (classification) of mental control influences of brain–computer interface operators. Based on the developed mathematical model, statistical processing methods for the vector of electroencephalographic signals are substantiated, which consist of statistical evaluation of its probabilistic characteristics and make it possible to conduct an effective joint statistical estimation of the probability characteristics of electroencephalographic signals. This provides the basis for coordinated integration of information from different sensors. The use of moment functions of higher order and their spectral images in the frequency domain as informative characteristics in brain–computer interface systems is substantiated, and their significant sensitivity to the mental controlling influence of the brain–computer interface operator is experimentally established.
The application of Bessel’s inequality to the problems of reducing the dimensions (from 500 to 20 numbers) of the vectors of informative features makes it possible to significantly reduce the computational complexity of the algorithms for the functioning of brain–computer interface systems. Namely, we experimentally established that only the first 20 values of the Fourier transform of the estimation of moment functions of higher-order electroencephalographic signals are sufficient to form the vector of informative features in brain–computer interface systems, because these spectral components make up at least 95% of the total energy of the corresponding statistical estimate of the moment functions of higher-order electroencephalographic signals. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: Main stages of signal processing by a BCI system.
Figure 2: Illustration of a conceptual approach to the study of the characteristics of EEG signals.
Figure 3: Block diagram of the OpenBCI platform.
Figure 4: The open-source brain–computer interface platform.
Figure 5: The OpenBCI GUI.
Figure 6: The signals after the analog-to-digital converter, recorded by the OpenBCI platform (the upper, middle, and lower graphs correspond to the 1st, 2nd, and 3rd EEG channels, respectively).
Figure 7: The signals after the first stage of filtering with a 50 Hz notch filter (channels arranged as in Figure 6).
Figure 8: The signals after the second stage of filtering with a bandpass filter (channels arranged as in Figure 6).
Figure 9: Graph of the estimation of the vector EEG rhythm function.
Figure 10: Realizations of statistical estimates of the mathematical expectations of vector EEG components: (a) zone of passivity (lack of action) of the BCI operator; (b) zone of activity of the BCI operator.
Figure 11: Fourier transforms of realizations of statistical estimates of the mathematical expectations of vector EEG components: (a) passivity zone; (b) activity zone.
Figure 12: Realizations of statistical estimates of the initial moment functions of the second-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 13: Fourier transforms of realizations of statistical estimates of the initial moment functions of the second-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 14: Realizations of statistical estimates of the initial moment functions of the third-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 15: Fourier transforms of realizations of statistical estimates of the initial moment functions of the third-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 16: Realizations of statistical estimates of the initial moment functions of the fourth-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 17: Fourier transforms of realizations of statistical estimates of the initial moment functions of the fourth-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 18: Realizations of statistical estimates of the dispersions of vector EEG components: (a) passivity zone; (b) activity zone.
Figure 19: Fourier transforms of realizations of statistical estimates of the dispersions of vector EEG components: (a) passivity zone; (b) activity zone.
Figure 20: Realizations of statistical estimates of the central moment functions of the fourth-order vector EEG components: (a) passivity zone; (b) activity zone.
Figure 21: Fourier transforms of realizations of statistical estimates of the central moment functions of the fourth-order vector EEG components: (a) passivity zone; (b) activity zone.
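The dimensionality-reduction step described above (retaining only the leading Fourier coefficients that carry at least 95% of the spectral energy, justified via Bessel's inequality and Parseval's theorem) can be sketched as follows; the function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def energy_compact_spectrum(x, threshold=0.95):
    """Return the smallest count k of leading rFFT coefficients whose
    cumulative squared magnitude reaches `threshold` of the total
    spectral energy, together with the truncated coefficient vector."""
    coeffs = np.fft.rfft(np.asarray(x, dtype=float))
    energy = np.abs(coeffs) ** 2
    cum = np.cumsum(energy) / energy.sum()
    k = int(np.searchsorted(cum, threshold)) + 1  # first index reaching the threshold
    return k, coeffs[:k]

# A signal dominated by two low-frequency components compresses to a
# handful of coefficients out of 251 rFFT bins.
n = 500
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
```

Applied to estimates of higher-order moment functions, the same idea yields the reported reduction from 500 samples to roughly 20 informative spectral features.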
10 pages, 1688 KiB  
Article
Correlations between the EMG Structure of Movement Patterns and Activity of Postural Muscles in Able-Bodied and Wheelchair Fencers
by Zbigniew Borysiuk, Monika Blaszczyszyn, Katarzyna Piechota, Mariusz Konieczny and Wojciech J. Cynarski
Sensors 2023, 23(1), 135; https://doi.org/10.3390/s23010135 - 23 Dec 2022
Cited by 5 | Viewed by 2383
Abstract
The study involved Paralympic wheelchair fencers (N = 7) in two disability categories, and able-bodied female epee fencers (N = 7), members of the Polish Paralympic fencing teams. The performance of postural muscles and sword arm muscles in both groups of fencers, and of the front and rear leg muscles in the able-bodied fencers, was examined using surface electromyography with an accelerometer and the OptiTrack motion analysis system, as well as ground reaction force platforms. The activation sequence of individual muscles was determined and the structure of movement patterns in able-bodied and wheelchair fencers was formulated. A statistically significant correlation was found between the complex motor reaction time and latissimus dorsi muscle activation (p = 0.039, Z = −2.062) in wheelchair fencers. High correlations between the vertical force and EMG signal values of the gastrocnemius caput laterale muscle (0.85, p = 0.022) were found in able-bodied fencers. A heuristic analysis indicated the significance of postural muscles in the movement patterns of wheelchair and able-bodied fencers. These muscles play a crucial role in the anticipatory postural adjustment of the trunk during technical fencing actions, including attacks on the opponent's body. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: A wheelchair fencing medalist at the Paralympic Games.
Figure 2: A lunge executed by the saber fencing world champion (Diagnostic Laboratory of the Department of Human Movements, Opole University of Technology).
Figure 3: Waveforms of muscle activity and ground reaction forces with marked moments of the coach's movement (Coach markers): (a) arm muscles (FCU, ECR, BB, TB); (b) rear leg muscles (RF, BF); (c) front leg muscles (RF, BF), with both Coach and Fencer markers.
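The reported association between vertical ground reaction force and the EMG signal is a correlation coefficient; as a minimal, self-contained illustration of the statistic (not the study's analysis code, which also used nonparametric tests), Pearson's r can be computed as:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. an EMG amplitude envelope and a vertical force trace."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

A value near 0.85, as reported for the gastrocnemius, indicates that the two traces rise and fall largely in step.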
18 pages, 1925 KiB  
Article
Repeatability of the Vibroarthrogram in the Temporomandibular Joints
by Adam Łysiak, Tomasz Marciniak and Dawid Bączkowicz
Sensors 2022, 22(23), 9542; https://doi.org/10.3390/s22239542 - 6 Dec 2022
Cited by 2 | Viewed by 2202
Abstract
Current research concerning the repeatability of joint sound examination in the temporomandibular joints (TMJ) is inconclusive; thus, the aim of this study was to investigate the repeatability of specific features of the vibroarthrogram (VAG) in the TMJ using accelerometers. The joint sounds of both TMJs were measured with VAG accelerometers in two groups, study and control, each consisting of 47 participants (n = 94). Two VAG recording sessions consisted of 10 jaw open/close cycles guided by a metronome. The intraclass correlation coefficient (ICC) was calculated for seven VAG signal features. Additionally, a k-nearest-neighbors (KNN) classifier was defined and compared with a state-of-the-art method (joint vibration analysis (JVA) decision tree). ICC indicated excellent (for the integral below 300 Hz feature), good (total integral, integral above 300 Hz, and median frequency features), moderate (integral below to integral above 300 Hz ratio feature), and poor (peak amplitude feature) reliability. The accuracy scores for the KNN classifier (up to 0.81) were higher than those for the JVA decision tree (up to 0.60). The results of this study could open up a new field of research focused on the features of the vibroarthrogram in the context of the TMJ, further improving the diagnostic process. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figures

Figure 1: Exemplary VAG signal for (a) asymptomatic and (b) symptomatic temporomandibular joints.
Figure 2: Sensors and their placement on the subject's joints.
Figure 3: Box plots of raw features: (a) TI, (b) IB3, (c) IA3, (d) IBAR, (e) PA, (f) PF, (g) MF.
Figure 4: Box plots of norm1 features (panels as in Figure 3).
Figure 5: Box plots of norm2 features (panels as in Figure 3).
Figure A1: Box plots of raw features obtained for the first measurement (panels as in Figure 3).
Figure A2: Box plots of raw features obtained for the second measurement (panels as in Figure 3).
Figure A3: Box plots of norm1 features obtained for the first measurement (panels as in Figure 3).
Figure A4: Box plots of norm1 features obtained for the second measurement (panels as in Figure 3).
Figure A5: Box plots of norm2 features obtained for the first measurement (panels as in Figure 3).
Figure A6: Box plots of norm2 features obtained for the second measurement (panels as in Figure 3).
Figure A7: Confusion matrices for raw features used in the JVA decision tree classifier for the (a) first and (b) second signals.
Figure A8: Confusion matrices for norm1 features used in the JVA decision tree classifier for the (a) first and (b) second signals.
Figure A9: Confusion matrices for norm2 features used in the JVA decision tree classifier for the (a) first and (b) second signals.
Figure A10: Confusion matrices for raw features used in the KNN classifier for the (a) first and (b) second signals.
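The KNN classifier compared against the JVA decision tree above follows the standard majority-vote rule over nearest neighbors in feature space. A minimal sketch (the value of k, the Euclidean metric, and the function name are assumptions for illustration, not the study's tuned configuration):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify one feature vector by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_X, float) - np.asarray(query, float), axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(np.asarray(train_y)[nearest]).most_common(1)[0][0]

# Toy data: two well-separated clusters of 2-D "VAG feature" vectors.
train_X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
train_y = [0, 0, 0, 1, 1, 1]
```

Unlike a fixed decision tree with hand-set thresholds, KNN adapts its boundary to the training data, which is one plausible reason for the accuracy gap reported here (0.81 vs. 0.60).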
22 pages, 4357 KiB  
Article
Brain Age Prediction: A Comparison between Machine Learning Models Using Brain Morphometric Data
by Juhyuk Han, Seo Yeong Kim, Junhyeok Lee and Won Hee Lee
Sensors 2022, 22(20), 8077; https://doi.org/10.3390/s22208077 - 21 Oct 2022
Cited by 21 | Viewed by 8274
Abstract
Brain structural morphology varies over the aging trajectory, and the prediction of a person’s age using brain morphological features can help the detection of an abnormal aging process. Neuroimaging-based brain age is widely used to quantify an individual’s brain health as deviation from a normative brain aging trajectory. Machine learning approaches are expanding the potential for accurate brain age prediction, but the great variety of available algorithms makes model selection challenging. Here, we aimed to compare the performance of the machine learning models used to estimate brain age using brain morphological measures derived from structural magnetic resonance imaging scans. We evaluated 27 machine learning models, applied to three independent datasets from the Human Connectome Project (HCP, n = 1113, age range 22–37), the Cambridge Centre for Ageing and Neuroscience (Cam-CAN, n = 601, age range 18–88), and the Information eXtraction from Images (IXI, n = 567, age range 19–86). Performance was assessed within each sample using cross-validation and an unseen test set. The models achieved mean absolute errors of 2.75–3.12, 7.08–10.50, and 8.04–9.86 years, as well as Pearson’s correlation coefficients of 0.11–0.42, 0.64–0.85, and 0.63–0.79 between predicted brain age and chronological age for the HCP, Cam-CAN, and IXI samples, respectively. We found a substantial difference in performance between models trained on the same data type, indicating that the choice of model yields considerable variation in brain-predicted age. Furthermore, in three datasets, regularized linear regression algorithms achieved similar performance to nonlinear and ensemble algorithms. Our results suggest that regularized linear algorithms are as effective as nonlinear and ensemble algorithms for brain age prediction, while significantly reducing computational costs.
Our findings can serve as a starting point and quantitative reference for future efforts at improving brain age prediction using machine learning models applied to brain morphometric data. Full article
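The protocol described above (regularized linear vs. ensemble regressors, scored by cross-validated MAE and Pearson’s r) can be sketched as follows. This is an illustrative reconstruction on synthetic morphometric features, not the authors’ code; the subject count, feature count, and hyperparameters are placeholder assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for a morphometric feature matrix: each "region"
# carries a weak linear age signal plus noise.
rng = np.random.default_rng(0)
n_subjects, n_features = 300, 68
age = rng.uniform(18, 88, n_subjects)
weights = rng.normal(0, 0.01, n_features)
X = np.outer(age, weights) + rng.normal(0, 0.5, (n_subjects, n_features))

def evaluate(model):
    """Cross-validated MAE and Pearson r between predicted and true age."""
    pred = cross_val_predict(model, X, age, cv=5)
    return np.mean(np.abs(pred - age)), pearsonr(pred, age)[0]

for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("gbm", GradientBoostingRegressor(random_state=0))]:
    mae, r = evaluate(model)
    print(f"{name}: MAE = {mae:.2f} years, r = {r:.2f}")
```

On real data, the same loop would simply swap in the FreeSurfer-derived feature matrix and chronological ages, and extend the model list to all 27 algorithms.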
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>Similarity in predicted brain age in the hold-out test sets for the HCP, Cam-CAN, and IXI samples across 27 algorithms. For the HCP sample, (<b>a</b>) similarity matrix representing between-algorithm correlations of individual predicted brain age and (<b>b</b>) distance matrix and dendrogram resulting from hierarchical clustering of the individual brain age results of the 27 algorithms. For the Cam-CAN sample, (<b>c</b>) similarity matrix representing between-algorithm correlations of individual predicted brain age and (<b>d</b>) distance matrix and dendrogram resulting from hierarchical clustering of the individual brain age results of the 27 algorithms. For the IXI sample, (<b>e</b>) similarity matrix representing between-algorithm correlations of individual predicted brain age and (<b>f</b>) distance matrix and dendrogram resulting from hierarchical clustering of the individual brain age results of the 27 algorithms. lasso = Least Absolute Shrinkage and Selection Operator; llar = Lasso Least Angle Regression; svr = Support Vector Regression; lar = Least Angle Regression; en = Elastic Net Regression; br = Bayesian Ridge Regression; ridge = Ridge Regression; ard = Automatic Relevance Determination; rf = Random Forest Regression; par = Passive Aggressive Regression; cat = Category Boosting Regression; rvr = Relevance Vector Regression; lgbm = Light Gradient Boosting Machine; gbm = Gradient Boosting Machine; knn = K-Nearest Neighbors; ada = Adaptive Boosting Regression; et = Extra Trees Regression; xgb = Extreme Gradient Boosting; kr = Kernel Ridge Regression; gp = Gaussian Processes Regression; mlp = Multi-layer Perceptron Regression; omp = Orthogonal Matching Pursuit; lr = Linear Regression; huber = Huber Regression; tr = Theil–Sen Regression; ransac = Random Sample Consensus; dt = Decision Tree Regression.</p>
Figure 2
<p>Corrected brainPAD (corrected predicted brain age–chronological age) in the HCP, Cam-CAN, and IXI samples. Violin plots showing the distributions of individual corrected brainPAD values in the hold-out test sets for the (<b>a</b>) HCP, (<b>b</b>) Cam-CAN, and (<b>c</b>) IXI samples. Box plot within each violin plot shows the first quartile (Q1) and third quartile (Q3) of the corrected brainPAD values. White circle within each boxplot indicates the median corrected brainPAD value. lasso = Least Absolute Shrinkage and Selection Operator; llar = Lasso Least Angle Regression; svr = Support Vector Regression; lar = Least Angle Regression; en = Elastic Net Regression; br = Bayesian Ridge Regression; ridge = Ridge Regression; ard = Automatic Relevance Determination; rf = Random Forest Regression; par = Passive Aggressive Regression; cat = Category Boosting Regression; rvr = Relevance Vector Regression; lgbm = Light Gradient Boosting Machine; gbm = Gradient Boosting Machine; knn = K-Nearest Neighbors; ada = Adaptive Boosting Regression; et = Extra Trees Regression; xgb = Extreme Gradient Boosting; kr = Kernel Ridge Regression; gp = Gaussian Processes Regression; mlp = Multi-layer Perceptron Regression; omp = Orthogonal Matching Pursuit; lr = Linear Regression; huber = Huber Regression; tr = Theil–Sen Regression; ransac = Random Sample Consensus; dt = Decision Tree Regression.</p>
Figure 3
<p>SHAP feature importance quantified as the mean absolute SHAP value for the (<b>a</b>) HCP, (<b>b</b>) Cam-CAN, and (<b>c</b>) IXI samples. Mean absolute feature importance (SHAP value) averaged across all subjects for regional cortical thickness, surface area, and subcortical volume for Least Absolute Shrinkage and Selection Operator (Lasso) Regression, Gaussian Process Regression (GPR), and Gradient Boosting Machine (GBM). Darker colors indicate higher feature importance in the explanation of model prediction error or brainPAD. The relative feature importance values shown are rescaled such that the feature with the maximum average absolute SHAP value in each model is assigned a value of 1. The top 20 regional features for all models are shown in <a href="#app1-sensors-22-08077" class="html-app">Supplementary Tables S4–S6</a>.</p>
11 pages, 1271 KiB  
Article
Age-Related Differences in Intermuscular Coherence EMG-EMG of Ankle Joint Antagonist Muscle Activity during Maximal Leaning
by Mariusz Konieczny, Przemysław Domaszewski, Elżbieta Skorupska, Zbigniew Borysiuk and Kajetan J. Słomka
Sensors 2022, 22(19), 7527; https://doi.org/10.3390/s22197527 - 4 Oct 2022
Cited by 3 | Viewed by 2036
Abstract
Background: Intermuscular synchronization is one of the fundamental aspects of maintaining a stable posture and is of great importance in the aging process. This study aimed to assess muscle synchronization and postural stabilizer asymmetry during quiet standing and at the limits of stability using wavelet analysis. Intermuscular synchrony and antagonistic sEMG-sEMG (surface electromyography) coherence asymmetry were evaluated in the tibialis anterior and soleus muscles. Methods: The study involved 20 elderly (aged 65 ± 3.6) and 20 young (aged 21 ± 1.3) subjects. The task was to perform a maximum forward lean in a standing position. The leaning test was divided into three phases: quiet standing (10 s), dynamic leaning, and maintenance of maximum lean (20 s). Wavelet analysis of coherence was performed in the delta and beta bands. Results: Young subjects modulated intermuscular coherence to a greater extent in the beta band. Analysis of postural stability during standing tasks showed that only the parameter R2b (the distance between the maximal and minimal position of the center of pressure), an indicator for assessing the practical limits of stability, was significantly associated with aging. Conclusion: The results showed differences in the beta and delta band oscillations between young and older subjects in a postural task involving standing quietly and leaning forward. Full article
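The band-limited coherence at the heart of this study can be approximated as follows. The paper uses wavelet coherence; as a simpler stand-in, this sketch computes Welch magnitude-squared coherence between two synthetic antagonist sEMG traces and averages it over the delta (0.5–4 Hz) and beta (13–30 Hz) bands. The signal content, sampling rate, and band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 20 * t)            # shared 20 Hz (beta) drive
tibialis = shared + rng.normal(0, 1, t.size)   # "tibialis anterior" channel
soleus = shared + rng.normal(0, 1, t.size)     # "soleus" channel

f, Cxy = coherence(tibialis, soleus, fs=fs, nperseg=2048)
beta = Cxy[(f >= 13) & (f <= 30)].mean()       # beta-band mean coherence
delta = Cxy[(f >= 0.5) & (f <= 4)].mean()      # delta-band mean coherence
print(f"beta-band coherence {beta:.2f}, delta-band coherence {delta:.2f}")
```

Because the synthetic common drive sits at 20 Hz, the beta-band coherence comes out clearly above the delta-band value.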
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>Illustration of the asymmetry of the logarithmic delta band frequency data in the (qs/lean) tasks in the comparison of the two groups.</p>
Figure 2
<p>Illustration of the asymmetry of the logarithmic beta band frequency data in the (qs/lean) tasks in the comparison of the two groups.</p>
Figure 3
<p>Graphical illustration of the asymmetry (left/right leg) of the delta and beta band coherence oscillations in the tasks (qs/leaning) in both groups.</p>
15 pages, 5844 KiB  
Article
Implementation of a Morphological Filter for Removing Spikes from the Epileptic Brain Signals to Improve Identification of Ripples
by Amir F. Al-Bakri, Radek Martinek, Mariusz Pelc, Jarosław Zygarlicki and Aleksandra Kawala-Sterniuk
Sensors 2022, 22(19), 7522; https://doi.org/10.3390/s22197522 - 4 Oct 2022
Cited by 7 | Viewed by 2175
Abstract
Epilepsy is a very common disease affecting at least 1% of the population, over 50 million people. As many patients suffer from the drug-resistant form, the number of potential treatment methods is very small. However, since not only the treatment of epilepsy, but also its proper diagnosis and the observation of brain signals from recordings are important research areas, in this paper we address this very problem by developing a reliable technique for removing spikes and sharp transients from the baseline of the brain signal using a morphological filter. This allows much more precise identification of the so-called epileptic zone, which can then be resected, which is one of the methods of epilepsy treatment. We used a data set of eight patients sampled at 5 kHz and relied on the Staba 2002 algorithm as a reference for ripple detection. We found that the average sensitivity and false detection rate of our technique are ∼94% and ∼14%, respectively. Full article
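The core of the described technique, morphological opening and closing with a flat structuring element, can be sketched in a few lines. This is a hedged illustration on a synthetic trace, not the authors’ implementation; the structuring-element size and the signal amplitudes are assumed values.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

fs = 5000                                      # 5 kHz, as in the paper's data
t = np.arange(0, 1, 1 / fs)
baseline = 50e-6 * np.sin(2 * np.pi * 8 * t)   # slow background activity
signal = baseline.copy()
signal[1200:1206] += 400e-6                    # a brief epileptiform spike

size = 25                                      # flat structuring element, in samples
opened = grey_opening(signal, size=size)       # spike truncated to the baseline
closed = grey_closing(signal, size=size)       # spike enveloped

print("peak amplitude (uV): raw %.0f, opened %.0f, closed %.0f"
      % (signal.max() * 1e6, opened.max() * 1e6, closed.max() * 1e6))
```

The opening suppresses narrow positive transients (cf. the "truncated spike" figure), while the closing preserves their envelope; the difference between the raw and opened signals isolates the spikes themselves.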
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>Flowchart—spike detection, true and false positive.</p>
Figure 2
<p>Flowchart with the steps of choosing the best threshold and removing spikes.</p>
Figure 3
<p>Example of a spike in the data set detected with the Staba 2002 ([<a href="#B71-sensors-22-07522" class="html-bibr">71</a>]) algorithm.</p>
Figure 4
<p>Rectified first difference spike with respect to the original one.</p>
Figure 5
<p>The closing operation produces an enveloped spike (green signal).</p>
Figure 6
<p>The opening operation produces a truncated spike (red signal).</p>
Figure 7
<p>Another spike in the training set with the maximum value of the truncated level. Note: this value, used as the optimal threshold, separates candidate events from background in the data set.</p>
Figure 8
<p>Receiver operating characteristic curve (ROC) shows how to choose the optimal point based on the shortest distance from (0, 1).</p>
17 pages, 706 KiB  
Article
Stressors Length and the Habituation Effect—An EEG Study
by Izabela Rejer, Daniel Wacewicz, Mateusz Schab, Bartosz Romanowski, Kacper Łukasiewicz and Michał Maciaszczyk
Sensors 2022, 22(18), 6862; https://doi.org/10.3390/s22186862 - 10 Sep 2022
Cited by 2 | Viewed by 1939
Abstract
The research described in this paper aimed to determine whether people respond differently to short and long stimuli and whether stress stimuli repeated over time evoke a habituation effect. To meet this goal, we performed a cognitive experiment with eight subjects. During this experiment, the subjects were presented with two series of stress-inducing stimuli (differing in length) interlaced with the main tasks. The mean beta power calculated from the EEG signal recorded from the two prefrontal electrodes (Fp1 and Fp2) was used as a stress index. The main results are as follows: (i) we confirmed the previous finding that beta power assessed from the EEG signal recorded from prefrontal electrodes is significantly higher for the STRESS condition compared to the NON-STRESS condition; (ii) we found a significant difference in beta power between STRESS conditions that differed in length—the beta power was four times higher for short, compared to long, stress-inducing stimuli; (iii) we did not find enough evidence to confirm (or reject) the hypothesis that stress stimuli repeated over time evoke the habituation effect; although the general trends aggregated over subjects and stressors were negative, their slopes were not statistically significant; moreover, there was no agreement among subjects with respect to the slope of individual trends. Full article
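Computing the beta-band power of a prefrontal channel as a stress index can be sketched as follows. The sampling rate, channel content, and band limits are illustrative assumptions, with Welch's method standing in for whatever spectral estimator the authors used.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def beta_power(x, fs):
    """Integrated PSD over the beta band (13-30 Hz)."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    band = (f >= 13) & (f <= 30)
    return trapezoid(psd[band], f[band])

fs = 256                                          # assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
calm = rng.normal(0, 1, t.size)                   # baseline-like activity
stressed = calm + 2 * np.sin(2 * np.pi * 20 * t)  # added beta-band activity
print(beta_power(stressed, fs) > beta_power(calm, fs))
```

Averaging this index per epoch over the Fp1 and Fp2 channels, and comparing it across STRESS and NON-STRESS epochs, mirrors the analysis described in the abstract.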
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>The order of events in Task A.</p>
Figure 2
<p>The order of events in Task B.</p>
Figure 3
<p>The preprocessing procedure illustrated with the signal recorded from subject S1 performing Task B; left column of plots—Fp1 channel, right column of plots—Fp2 channel; (<b>a</b>) raw signal, (<b>b</b>) signal after temporal filtering, (<b>c</b>) signal after spatial filtering, (<b>d</b>) signal after median filtering, (<b>e</b>) signal divided into epochs, each vertical line corresponds to the epoch onset (the epochs’ order is presented in <a href="#sensors-22-06862-f002" class="html-fig">Figure 2</a>).</p>
Figure 4
<p>STRESS vs. NON-STRESS condition—difference in medians for Task A and Task B.</p>
Figure 5
<p>STRESS vs. NON-STRESS condition—difference in medians across events in Task A (<b>a</b>), and Task B (<b>b</b>); Q: Question.</p>
Figure 6
<p>STRESS vs. NON-STRESS condition—difference in medians across subjects for Task A (<b>a</b>), and Task B (<b>b</b>); S: Subject.</p>
Figure 7
<p>Task A vs. Task B—difference in medians across subjects for the STRESS condition; S: Subject.</p>
Figure 8
<p>The comparison of medians for RELAX and STROOP events arranged in time (<b>a</b>), arranged according to tasks (<b>b</b>).</p>
Figure 9
<p>The stress-inducing events over time; (<b>a</b>) Task A, (<b>b</b>) Task B; Q: Question; blue line: beta power for each question, dotted line: linear trend.</p>
Figure 10
<p>The stress-inducing events from Task A over time; upward trends are marked in red, downward trends are marked in blue, dotted lines present linear trends, asterisks (*) denote linear trends with a significant slope (<span class="html-italic">p</span>-value &lt; 0.05); Q: Question, S: Subject.</p>
Figure 11
<p>The stress-inducing events from Task B over time; upward trends are marked in red, downward trends are marked in blue, dotted lines present linear trends; Q: Question, S: Subject.</p>
14 pages, 1145 KiB  
Article
Digital Stereotypes in HMI—The Influence of Feature Quantity Distribution in Deep Learning Models Training
by Pawel Antonowicz, Michal Podpora and Joanna Rut
Sensors 2022, 22(18), 6739; https://doi.org/10.3390/s22186739 - 6 Sep 2022
Cited by 3 | Viewed by 1986
Abstract
This paper proposes a concept of Digital Stereotypes, observed during research on the quantitative overrepresentation of one class over others, and its impact on the results of training Deep Learning models. Real-life observed data classes are rarely of the same size, and the intuition of presenting multiple examples of one class and then showing a few counterexamples may be very misleading in multimodal classification. Deep Learning models, when taught with overrepresentation, may produce incorrect inference results, similar to stereotypes. The generic idea of stereotypes seems to be helpful for categorisation from the training point of view, but it has a negative influence on the inference results. The authors evaluate a large dataset in various scenarios: overrepresentation of one or two classes, underrepresentation of some classes, and same-size (trimmed) classes. The presented research can be applied to any multi-class classification application, but it may be especially important in AI, where classification, uncertainty and the building of new knowledge overlap. This paper presents specific ’decreases in accuracy’ observed within the multiclassification of unleveled datasets. The ’decreases in accuracy’, named ’stereotypes’ by the authors, can also bring inspiring insight into other fields and applications, not only multimodal sentiment analysis. Full article
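The "trimmed" (leveled) loading strategy compared above, capping every class at the size of the smallest one so no class is overrepresented, can be sketched as below. The labels are synthetic; the paper used AffectNet expression classes.

```python
import numpy as np

rng = np.random.default_rng(3)
# Deliberately imbalanced synthetic labels (70/20/10 split).
labels = rng.choice(["happy", "neutral", "sad"], p=[0.7, 0.2, 0.1], size=10000)

def trim_indices(labels):
    """Indices of a subset in which every class has the minority-class count."""
    classes, counts = np.unique(labels, return_counts=True)
    cap = counts.min()
    keep = np.concatenate([np.where(labels == c)[0][:cap] for c in classes])
    return np.sort(keep)

idx = trim_indices(labels)
_, trimmed_counts = np.unique(labels[idx], return_counts=True)
print(trimmed_counts)   # every class capped at the minority count
```

Training on `labels[idx]` (and the corresponding images) corresponds to the paper's leveled scenario; training on the full `labels` corresponds to the unleveled one.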
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>Data distribution of 80,000 images trimmed to avoid overrepresentation.</p>
Figure 2
<p>Training data distribution within the ’manually annotated images’ subset (A, left, blue) of the AffectNet [<a href="#B20-sensors-22-06739" class="html-bibr">20</a>] Dataset and within its first 80,000 records only (B, right, orange).</p>
Figure 3
<p>Data distribution of training set limited to 30,000 images. At the left side is data distribution using the first method, at the right—using the second method.</p>
Figure 4
<p>Data distribution of training set limited to 60,000 images. At the left side, the data distribution using the first method is presented, at the right—using the second method.</p>
Figure 5
<p>Data distribution of training set limited to 80,000 images. The left-hand side graph depicts the data distribution using the first method, while the second one—using the trimmed loading.</p>
Figure 6
<p>Data distribution of training set composed of 200,000 images. At the left side is data distribution using the first method, at the right using the second method.</p>
Figure 7
<p>Initial training results, with incorrect validation set.</p>
Figure 8
<p>Initial training results, with correct validation set.</p>
Figure 9
<p>Training results using 30,000 images. Charts on the left side of the figure present results using unleveled data distribution, charts on the right side show results using leveled data distribution.</p>
Figure 10
<p>Training results using 60,000 images. Charts on the left side of figure present results using natural data distribution, charts on the right side show results using trimmed data distribution.</p>
Figure 11
<p>Training results using 80,000 images. Charts on the left side of figure present results using unleveled data distribution, charts on the right side show results using leveled data distribution.</p>
Figure 12
<p>Training results using 200,000 images of the unleveled training set. The accuracy chart is presented on the left side, the loss value chart on the right side.</p>
12 pages, 3531 KiB  
Communication
Using Nonlinear Vibroarthrographic Parameters for Age-Related Changes Assessment in Knee Arthrokinematics
by Krzysztof Kręcisz, Dawid Bączkowicz and Aleksandra Kawala-Sterniuk
Sensors 2022, 22(15), 5549; https://doi.org/10.3390/s22155549 - 25 Jul 2022
Cited by 4 | Viewed by 1787
Abstract
Changes in articular surfaces can be associated with the aging process and as such may lead to quantitative and qualitative impairment of joint motion. This study aims to evaluate the age-related quality of knee joint arthrokinematic motion using nonlinear parameters of the vibroarthrographic (VAG) signal. To analyse the age-related quality of patellofemoral joint (PFJ) motion, vibroarthrography was used. The analysed data represent 220 participants divided into five age groups. The VAG signals were acquired during flexion/extension knee motion and described with the following nonlinear parameters: recurrence rate (RR) and multi-scale entropy (MSE). RR and MSE decrease almost linearly with age (main effects of group p&lt;0.001; means (SD): RR from 0.101 (0.057) to 0.020 (0.017); MSE from 20.9 (8.56) to 13.6 (6.24)). The RR post-hoc analysis showed statistically significant differences (p&lt;0.01) in all comparisons with the exception of the 5th–6th life decade. For MSE, statistically significant differences (p&lt;0.01) occurred for the 3rd–7th, 4th–7th, 5th–7th and 6th–7th life decade comparisons. Our results imply that degenerative age-related changes are associated with lower repeatability, greater heterogeneity in state space dynamics, and greater regularity in the time domain of the VAG signal. In comparison with linear VAG measures, our results provide additional information about the nature of the changes in the vibration dynamics of PFJ motion with age. Full article
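The recurrence rate (RR) parameter can be illustrated with a minimal time-delay-embedding implementation: the fraction of pairs of embedded state-space points closer than a threshold. The embedding dimension, delay, and recurrence threshold below are illustrative choices, not the authors’ settings.

```python
import numpy as np

def recurrence_rate(x, dim=3, delay=2, eps=0.2):
    """Fraction of point pairs in the delay-embedded trajectory within eps."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).mean()

t = np.linspace(0, 4 * np.pi, 400)
periodic = np.sin(t)                                  # highly recurrent signal
noise = np.random.default_rng(4).normal(0, 1, 400)    # weakly recurrent signal
print(recurrence_rate(periodic), recurrence_rate(noise))
```

A periodic trajectory revisits the same state-space neighbourhoods and yields a markedly higher RR than noise, matching the paper's interpretation of lower RR as lower repeatability of joint motion.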
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>VAG signal’s recording.</p>
Figure 2
<p>VAG signals’ representative sample wave-forms and their recurrence.</p>
Figure 3
<p>Mean and confidence intervals of VAG signal parameters across successive decades of life—for <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>R</mi> </mrow> </semantics></math>.</p>
Figure 4
<p>The <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>R</mi> </mrow> </semantics></math> Tukey analysis results.</p>
Figure 5
<p>Mean and confidence intervals of VAG signal parameters across successive decades of life—for <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math>.</p>
Figure 6
<p>The <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math> Tukey analysis results.</p>
Figure 7
<p><math display="inline"><semantics> <mrow> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math> analysis results in particular age groups.</p>
17 pages, 2344 KiB  
Article
Towards Dynamic Multi-Modal Intent Sensing Using Probabilistic Sensor Networks
by Joseph Russell, Jeroen H. M. Bergmann and Vikranth H. Nagaraja
Sensors 2022, 22(7), 2603; https://doi.org/10.3390/s22072603 - 29 Mar 2022
Cited by 5 | Viewed by 2810
Abstract
Intent sensing—the ability to sense what a user wants to happen—has many potential technological applications. Assistive medical devices, such as prosthetic limbs, could benefit from intent-based control systems, allowing for faster and more intuitive control. The accuracy of intent sensing could be improved by using multiple sensors sensing multiple environments. As users will typically pass through different sensing environments throughout the day, the system should be dynamic, with sensors dropping in and out as required. An intent-sensing algorithm that allows for this cannot rely on training from only a particular combination of sensors. It should allow any (dynamic) combination of sensors to be used. Therefore, the objective of this study is to develop and test a dynamic intent-sensing system under changing conditions. A method has been proposed that treats each sensor individually and combines them using Bayesian sensor fusion. This approach was tested on laboratory data obtained from subjects wearing Inertial Measurement Units and surface electromyography electrodes. The proposed algorithm was then used to classify functional reach activities and compare the performance to an established classifier (k-nearest-neighbours) in cases of simulated sensor dropouts. Results showed that the Bayesian sensor fusion algorithm was less affected as more sensors dropped out, supporting this intent-sensing approach as viable in dynamic real-world scenarios. Full article
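The key property of the proposed fusion, that any subset of sensors can be combined at run time, follows from treating each sensor's held-out confusion matrix as a likelihood and multiplying the evidence. A minimal two-class, two-sensor sketch (the confusion matrices and sensor names are hypothetical, not the authors’ exact implementation):

```python
import numpy as np

# Hypothetical per-sensor confusion matrices P(predicted | true), learned
# on held-out data: rows = true class, columns = predicted class.
conf = {
    "imu":  np.array([[0.9, 0.1], [0.2, 0.8]]),
    "semg": np.array([[0.6, 0.4], [0.3, 0.7]]),
}

def fuse(predictions, prior=np.array([0.5, 0.5])):
    """Combine per-sensor hard predictions into a posterior over classes."""
    post = prior.astype(float).copy()
    for sensor, pred in predictions.items():
        post *= conf[sensor][:, pred]   # likelihood of observing this prediction
    return post / post.sum()

print(fuse({"imu": 0, "semg": 0}))      # both sensors vote class 0
print(fuse({"imu": 0}))                 # sEMG dropped out; IMU evidence alone
```

Because each sensor contributes an independent likelihood term, a sensor dropping out simply removes one factor from the product rather than invalidating the trained model, which is the robustness property the study reports.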
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>Framework representing the three aspects of intent sensing. Adapted from [<a href="#B1-sensors-22-02603" class="html-bibr">1</a>]. Grey regions indicate stages not utilised in this study. Environmental Stimulus indicates change in the user’s surroundings (such as sound or the arrival of another person) that may trigger a response. Contextual Stimulus includes wider factors such as the time of day, typical routine, previous actions etc. The Inception of the Idea to Act represents the user’s conscious decision to take action and predominantly concerns the brain and nervous system. The Planning phase includes any preparation that may take place before the activity begins (such as a visual inspection of the path ahead when about to start walking or the pre-tensing of muscles before attempting a timed grasp). The Action phase covers the real-time execution of the activity, including any changes observable while the activity is being performed. The Maintenance or Change of Task Goal phase looks ahead to the objective of the activity, considering why it is being performed and whether this objective alters over the course of the activity.</p>
Figure 2
<p>Model of a Probabilistic Sensor Network for event detection (adapted from [<a href="#B11-sensors-22-02603" class="html-bibr">11</a>]). The four steps that lead from the genesis of a given event up to its detection are shown at the top of the figure. (<b>i</b>) shows the simplest design, with only one node at each stage. (<b>ii</b>) shows a more complex network, featuring two sensors sharing the same sensing environment, each with their own conditioning step. (<b>iii</b>) displays the network used in this study, with two sensing environments (EMG and IMU sensing), each with 12 sensors, conditioned individually and combined together in the processing step.</p>
Figure 3
<p>Photographs of the experimental setup used for the acquisition of data used in this study. The configuration pictured is for the reach-grasp activity.</p>
Figure 4
<p>Overall pipeline of training and testing for the two algorithms. The Bayesian Fusion (MM) algorithm divides the training dataset into Classifier Training and Probability Learning subsets. The Classifier Training Set is used to train the classifier for each sensor, and then the Probability Learning set is used to populate the confusion matrix for each classifier. The confusion matrix then provides weightings for each sensor’s contribution to the overall network output, which is used to predict the class of the testing set. The Combined KNN (NMM) does not subdivide the training set, instead using all sensor data to train a single classifier, which then predicts the class of the testing set. Dashed lines indicate testing data.</p>
Figure 5
<p>Graphs to show the accuracy of the intent-classification system against the time allowed to pass after the activity’s inception before classification was performed, up to the 1-second limit and with all 24 sensors active (no dropout). These graphs are shown separately to clearly illustrate the presence of a general trend for each algorithm, but comparisons between the two in this context should be avoided (see <a href="#sec4dot1-sensors-22-02603" class="html-sec">Section 4.1</a> for discussion on this). The Combined KNN method does not have confidence intervals, as all the sensors are included and there is no subdivision of the training data, so its performance is entirely reproducible.</p>
Figure 6
<p>Graph showing the accuracy of the intent-classification system using the Bayesian Fusion method (treating each sensor separately and then combining them) and the combined method (putting all sensor information into a single KNN classifier) as the number of sensors increases. No sensors dropped out—instead, the number of sensors was varied from 1 to 24, and the algorithms were trained on the number of sensors active in each case.</p>
Figure 7
<p>Graph showing the accuracy of the intent-classification system with increasing number of sensors dropping out. The Bayesian Fusion method (treating each sensor separately and then combining them) and the combined method (putting all sensor information into a single KNN classifier) are shown.</p>
Figure A1
<p>Example graphs of randomly selected volunteers to show the accuracy across the number of features selected in the Bayesian Fusion method with all sensors included. The Bayesian Fusion algorithm has confidence intervals, as its performance depends on how the training data is subdivided. The Combined KNN method does not have confidence intervals, as all the sensors are included and there is no subdivision of the training data, so its performance is entirely reproducible.</p>
19 pages, 3256 KiB  
Article
Advanced Computing Methods for Impedance Plethysmography Data Processing
by Volodymyr Khoma, Halyna Kenyo and Aleksandra Kawala-Sterniuk
Sensors 2022, 22(6), 2095; https://doi.org/10.3390/s22062095 - 8 Mar 2022
Cited by 2 | Viewed by 2630
Abstract
In this paper, we introduce innovative solutions in impedance plethysmography concerning the improvement of rheograph characteristics and the increase in efficiency of rheogram processing using computer methods. The described methods have been developed to ensure the stability of parameters and to extend the functionality of a rheographic system based on digital signal processing, which applies to the compensation of the base resistance with a digital potentiometer, the digital synthesis of quadrature excitation signals and the performance of digital synchronous detection. The emphasis was placed on methods for the determination of hemodynamic parameters by computer processing of the rheograms. As a result, three methods for the elimination of respiratory artifacts have been proposed: based on the discrete cosine transform, the discrete wavelet transform and the approximation of the zero line with spline functions. Additionally, computer methods for the determination of physiological indicators, including those based on wavelet decomposition, are also proposed and described in this paper. The efficiency of various rheogram compression algorithms was tested, evaluated and presented in this work. Full article
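The DCT-based respiratory-artifact elimination mentioned above can be sketched as keeping only the lowest-frequency DCT coefficients as a baseline-drift estimate and subtracting it from the rheogram. The frequencies, amplitudes, and cut-off index below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dct, idct

fs = 100
t = np.arange(0, 30, 1 / fs)
pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t)    # cardiac component (~72 bpm)
drift = 0.5 * np.cos(2 * np.pi * 0.25 * t)    # respiratory drift (~15/min)
rheo = pulse + drift

coeffs = dct(rheo, norm="ortho")
k = 20                                        # keep only the slowest components
baseline = idct(np.where(np.arange(coeffs.size) < k, coeffs, 0.0), norm="ortho")
corrected = rheo - baseline
print(np.std(rheo), np.std(corrected))
```

With this cut-off the slow respiratory component falls below index `k` while the cardiac component lies well above it, so the corrected trace retains essentially only the pulse waveform.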
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures

Figure 1
<p>The structure of the digital rheograph (analog blocks in gray).</p>
Figure 2
<p>Rheogram fluctuations recorded in real conditions (blue line) and baseline drift approximation (green line).</p>
Figure 3
<p>Fluctuations of the rheogram recorded in real conditions (blue line) and the baseline drift joint function approximation (black line).</p>
Full article ">Figure 4
<p>Approximation of the respiratory artifacts of a real rheosignal by different methods.</p>
Full article ">Figure 5
<p>Rheocycle (<b>a</b>) and its derivative with characteristic points (<b>b</b>).</p>
Full article ">Figure 6
<p>Waveform of the raw rheogram (<b>top</b>), its first derivative (<b>middle</b>), wavelet scalogram of derivative rheogram (<b>bottom</b>); along the horizontal axis—sample numbers.</p>
Full article ">Figure 7
<p>Recognizing the characteristic points of the rheogramme; along the horizontal axis—sample numbers.</p>
Full article ">Figure 8
<p>Recognition of the rheogram characteristic points along the horizontal axis—sample numbers.</p>
Full article ">
12 pages, 3655 KiB  
Article
Pilot Study on Analysis of Electroencephalography Signals from Children with FASD with the Implementation of Naive Bayesian Classifiers
by Katarzyna Anna Dyląg, Wiktoria Wieczorek, Waldemar Bauer, Piotr Walecki, Bozena Bando, Radek Martinek and Aleksandra Kawala-Sterniuk
Sensors 2022, 22(1), 103; https://doi.org/10.3390/s22010103 - 24 Dec 2021
Cited by 5 | Viewed by 3591
Abstract
In this paper, Naive Bayesian classifiers were applied to differentiate between EEG signals recorded from children with Fetal Alcohol Spectrum Disorders (FASD) and from healthy children. This work also provides a brief introduction to FASD itself, explaining the social, economic, and genetic reasons for its occurrence. The obtained results were good and promising and indicate that EEG recordings can be a helpful tool for the potential diagnostics of children affected with FASD, in particular those without visible physical signs of these spectrum disorders.
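A rough sketch of such a classification pipeline follows: band-power features are extracted from EEG-like epochs and scored with a Gaussian naive Bayes classifier under cross-validation. The data are synthetic and the feature set is an assumption; this is not the authors' code or data.

```python
# Band-power features from synthetic EEG epochs, fed to Gaussian naive Bayes.
# Illustrative only: groups are simulated as differing in alpha power.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
fs = 250  # Hz (assumed sampling rate)

def band_power(epoch, lo, hi):
    """Mean Welch PSD inside [lo, hi) Hz."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f < hi)].mean()

def make_epoch(alpha_gain):
    """2 s of white noise plus a 10 Hz (alpha-band) oscillation."""
    t = np.arange(0, 2, 1 / fs)
    return rng.normal(size=t.size) + alpha_gain * np.sin(2 * np.pi * 10 * t)

# Two synthetic groups (stand-ins for the study and control groups).
epochs = [make_epoch(0.5) for _ in range(40)] + [make_epoch(1.5) for _ in range(40)]
y = np.array([0] * 40 + [1] * 40)
X = np.array([[band_power(e, 4, 8), band_power(e, 8, 13), band_power(e, 13, 30)]
              for e in epochs])              # theta, alpha, beta power

acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
```

Gaussian naive Bayes assumes conditionally independent, normally distributed features, which makes it a cheap baseline for small clinical EEG datasets.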
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: Example raw data from channel 'F4-C4': (a) time series of both study group and control group (1 [s]); (b) normalised time series of study group and control group (1 [s]); (c) 10 [s] sample of FASD (top spectrogram); (d) 10 [s] of healthy control (bottom spectrogram), sample 1.
Figure 2: Example raw data from channel 'C3-P3': (a) time series of both study group and control group (1 [s]); (b) normalised time series of study group and control group (1 [s]); (c) 10 [s] sample of FASD (top spectrogram); (d) 10 [s] of healthy control (bottom spectrogram), sample 1.
Figure 3: Example raw data from channel 'C4-P4': (a) time series of both study group and control group (1 [s]); (b) normalised time series of study group and control group (1 [s]); (c) 10 [s] sample of FASD (top spectrogram); (d) 10 [s] of healthy control (bottom spectrogram), sample 1.
Figure 4: Example raw data from channel 'F3-C3': (a) time series of both study group and control group (1 [s]); (b) normalised time series of study group and control group (1 [s]); (c) 10 [s] sample of FASD (top spectrogram); (d) 10 [s] of healthy control (bottom spectrogram), sample 1.
Figure 5: 10 [s] sample from four channels ('F4-C4', 'C4-P4', 'F3-C3' and 'C3-P3'): study group subject (top) and control group subject (bottom).
Figure 6: Averaged sample, study group (top) and control group (bottom): time series and spectrogram (10 [s]).
Figure 7: Result of using the Naive Bayesian classifier: confusion matrix.
15 pages, 1107 KiB  
Article
Functional Living Skills: A Non-Immersive Virtual Reality Training for Individuals with Major Neurocognitive Disorders
by Simonetta Panerai, Donatella Gelardi, Valentina Catania, Francesco Rundo, Domenica Tasca, Sabrina Musso, Giuseppina Prestianni, Stefano Muratore, Claudio Babiloni and Raffaele Ferri
Sensors 2021, 21(17), 5751; https://doi.org/10.3390/s21175751 - 26 Aug 2021
Cited by 11 | Viewed by 3036
Abstract
The loss of functional living skills (FLS) is an essential feature of major neurocognitive disorders (M-NCD); virtual reality training (VRT) offers many possibilities for improving FLS in people with M-NCD. The aim of our study was to verify the effectiveness of a non-immersive VRT on FLS for patients with M-NCD. VRT was carried out over 10 to 20 sessions, by means of four 3D apps developed in our institute and installed on a large touch screen. The experimental group (EG) and the control group (CG) included 24 and 18 patients with M-NCD, respectively. They were administered the in vivo test (in specific hospital places reproducing the natural environments) at T1 (pre-training) and T3 (post-training); at T2, only the EG was administered VRT. Statistically significant differences between the EG and CG were found in the number of correct responses in all the in vivo tests; during VRT, the number of correct responses increased, while the execution times and the number of clues decreased. The improvement in the in vivo tests appeared to be related to the specific VRT applied. The satisfaction of participants with the VRT was moderate to high.
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: Results obtained in each of the 20 sessions in each task; data are shown as means (squares, circles, diamonds, and triangles) and standard errors (whiskers); asterisks indicate a statistically significant difference (Wilcoxon rank test, Bonferroni-corrected p < 0.05 vs. the value obtained for session 1). (a) Number of correct responses; (b) number of errors; (c) number of missing responses; (d) number of clues provided; (e) average execution time during the task.

Review


16 pages, 2494 KiB  
Review
Hearing and Seeing Nerve/Tendon Snapping: A Systematic Review on Dynamic Ultrasound Examination
by Carmelo Pirri, Nina Pirri, Carla Stecco, Veronica Macchi, Andrea Porzionato, Raffaele De Caro and Levent Özçakar
Sensors 2023, 23(15), 6732; https://doi.org/10.3390/s23156732 - 27 Jul 2023
Cited by 4 | Viewed by 1544
Abstract
Nerve/tendon snapping can occur due to sudden displacement during the movement of an adjacent joint, and the clinical condition can be quite painful. It can be challenging to determine the specific anatomic structure causing the snapping in various body regions. In this sense, ultrasound examination, with all its advantages (especially the provision of dynamic imaging), appears to be quite promising. To date, there are no comprehensive reviews reporting on the use of dynamic ultrasound examination in the diagnosis of nerve/tendon snapping. Accordingly, this article aims to provide a substantial discussion of how ultrasound examination contributes to ‘seeing’ and ‘hearing’ these pathologies during different maneuvers/movements.
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: Study selection flow diagram.
Figure 2: Snapping of the ulnar nerve: (A) neutral position, (B) 45° elbow flexion, (C) 110° elbow flexion, (D) 45° elbow flexion during return to neutral position and (E) return to neutral position. Arrow: ulnar nerve.
Figure 3: Snapping of peroneal tendons: (A) neutral position, (B) first degrees of foot eversion and (C) complete foot eversion. *: fibularis brevis tendon. °: fibularis longus tendon.
35 pages, 918 KiB  
Review
Advanced Bioelectrical Signal Processing Methods: Past, Present and Future Approach—Part II: Brain Signals
by Radek Martinek, Martina Ladrova, Michaela Sidikova, Rene Jaros, Khosrow Behbehani, Radana Kahankova and Aleksandra Kawala-Sterniuk
Sensors 2021, 21(19), 6343; https://doi.org/10.3390/s21196343 - 23 Sep 2021
Cited by 27 | Viewed by 10622
Abstract
As mentioned in the previous part of this work (Part I), advanced signal processing methods constitute one of the quickest and most dynamically developing scientific areas of biomedical engineering, with increasing usage in current clinical practice. In this paper (Part II), various innovative methods for the analysis of brain bioelectrical signals are presented and compared. It also describes both classical and advanced approaches to the removal of noise contamination, such as, among others, digital adaptive and non-adaptive filtering, signal decomposition methods based on blind source separation, and the wavelet transform.
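Of the approaches listed, wavelet-transform denoising is the easiest to illustrate in a few lines. The sketch below uses PyWavelets with universal soft thresholding of the detail coefficients; the signal, decomposition depth, and threshold rule are illustrative choices, not a specific method from the review.

```python
# Wavelet denoising of a synthetic EEG-like signal with PyWavelets:
# decompose, soft-threshold the detail coefficients, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)            # 10 Hz alpha-like component
noisy = clean + 0.5 * rng.normal(size=t.size)

# Level 3 keeps the 10 Hz component inside the 0-16 Hz approximation band;
# the detail bands (16-128 Hz) are mostly noise and get soft-thresholded.
coeffs = pywt.wavedec(noisy, 'db4', level=3)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate, finest level
thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:noisy.size]

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

In practice, the decomposition level and mother wavelet are chosen so that the EEG bands of interest land in coefficients that survive thresholding.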
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: The EEG electrode placement system "10–20".
Figure 2: Various EEG montages.
Figure 3: Scheme of a BCI system [20].
Figure 4: EEG frequency spectrum and its frequency bands [31].
Figure 5: Sample EEG data recorded from the 'F3-C3' location.
Figure 6: Example of the SEP signal.
Figure 7: Example of the AEP signal [177].
Figure 8: Example of the VEP signal [192].
Figure 9: Example of the ECoG signal.
Full article ">
33 pages, 964 KiB  
Review
Advanced Bioelectrical Signal Processing Methods: Past, Present, and Future Approach—Part III: Other Biosignals
by Radek Martinek, Martina Ladrova, Michaela Sidikova, Rene Jaros, Khosrow Behbehani, Radana Kahankova and Aleksandra Kawala-Sterniuk
Sensors 2021, 21(18), 6064; https://doi.org/10.3390/s21186064 - 10 Sep 2021
Cited by 48 | Viewed by 23710
Abstract
Analysis of biomedical signals is a very challenging task involving the implementation of various advanced signal processing methods, and this area is developing rapidly. In this paper (Part III), the most popular and efficient digital signal processing methods are presented. It covers the following bioelectrical signals and their processing methods: electromyography (EMG), electroneurography (ENG), electrogastrography (EGG), electrooculography (EOG), electroretinography (ERG), and electrohysterography (EHG).
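For EMG, the first of the signals covered, a standard conditioning chain is band-pass filtering, full-wave rectification, and low-pass envelope extraction. The sketch below uses a synthetic burst signal; the cutoff frequencies are typical textbook values, not prescriptive settings from the review.

```python
# Surface-EMG conditioning sketch: band-pass, rectify, low-pass envelope.
# Synthetic signal; cutoffs are common choices, offered as an illustration.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.5)               # muscle "active" for 1 s
emg = rng.normal(size=t.size) * (0.1 + 0.9 * burst)

# 20-450 Hz band-pass (typical surface-EMG band), zero-phase via filtfilt.
b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype='band')
filtered = filtfilt(b, a, emg)

# Full-wave rectification, then a 5 Hz low-pass for the linear envelope.
b_lp, a_lp = butter(4, 5 / (fs / 2))
envelope = filtfilt(b_lp, a_lp, np.abs(filtered))

active_level = envelope[(t > 0.7) & (t < 1.3)].mean()
rest_level = envelope[t < 0.4].mean()
```

Zero-phase filtering (filtfilt) avoids shifting the envelope relative to the burst onset, which matters when timing muscle activations.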
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: Example of the surface EMG signal measurement.
Figure 2: Examples of the MUAP potentials.
Figure 3: Example of the surface EMG frequency spectrum (based on the work in [18]).
Figure 4: The ENG mixed nerve.
Figure 5: Sample ENG measurement.
Figure 6: Examples of the surface placement of the EGG electrodes: signal electrodes (1-6), reference electrode (R), and ground electrode (U).
Figure 7: Sample EGG frequency spectrum.
Figure 8: EGG potentials.
Figure 9: Placement of the EOG electrodes.
Figure 10: Sample EOG signals.
Figure 11: Sample EOG spectrum.
Figure 12: The biphasic waveform of a healthy patient: the negative wave (a) and the positive wave (b).
Figure 13: Plot of a recorded IUCP signal.
Figure 14: Plot of a recorded EHG signal.
Figure 15: Power spectrum of the first contraction from Figure 14.
Figure 16: Configuration of the electrode locations for EHG acquisition.
32 pages, 889 KiB  
Review
Advanced Bioelectrical Signal Processing Methods: Past, Present and Future Approach—Part I: Cardiac Signals
by Radek Martinek, Martina Ladrova, Michaela Sidikova, Rene Jaros, Khosrow Behbehani, Radana Kahankova and Aleksandra Kawala-Sterniuk
Sensors 2021, 21(15), 5186; https://doi.org/10.3390/s21155186 - 30 Jul 2021
Cited by 29 | Viewed by 8923
Abstract
Advanced signal processing methods are one of the fastest developing scientific and technical areas of biomedical engineering, with increasing usage in current clinical practice. This paper presents an extensive literature review of the methods for the digital signal processing of cardiac bioelectrical signals that are commonly applied in today’s clinical practice. This work covers the definition of bioelectrical signals and extensively reviews classical and advanced approaches to the alleviation of noise contamination, such as digital adaptive and non-adaptive filtering, signal decomposition methods based on blind source separation, and the wavelet transform.
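One classical cardiac-signal cleaning step covered by such reviews is suppressing 50/60 Hz power-line interference with a narrow IIR notch filter. In the sketch below, the "ECG" is a crude synthetic surrogate (a train of narrow R-peak pulses), used only to show the filter at work.

```python
# Removing 50 Hz mains interference from a synthetic ECG surrogate with an
# IIR notch filter. Illustrative only; not a clinical processing pipeline.
import numpy as np
from scipy.signal import filtfilt, iirnotch

fs = 500.0
t = np.arange(0, 10, 1 / fs)
# Crude ECG surrogate: ~1 Hz train of narrow Gaussian "R peaks".
ecg = sum(np.exp(-((t - k) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 10))
noisy = ecg + 0.3 * np.sin(2 * np.pi * 50 * t)   # add mains interference

b, a = iirnotch(50.0, Q=30.0, fs=fs)             # narrow notch at 50 Hz
cleaned = filtfilt(b, a, noisy)

err_before = np.mean((noisy - ecg) ** 2)
err_after = np.mean((cleaned - ecg) ** 2)
```

A high Q keeps the notch narrow so that the broadband QRS energy near 50 Hz is largely preserved while the interference tone is removed.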
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Figure 1: Biosignals: general scheme.
Figure 2: Einthoven's triangle.
Figure 3: Sample ECG curve.
Figure 4: Sample ECG power spectrum.
Figure 5: The Frank lead system [102,106].
Figure 6: 3-D image of the VCGm [102,106].
Figure 7: An example of signals acquired from the maternal body during non-invasive and invasive fECG monitoring in the time and frequency domains: NI-fECG signals are acquired using abdominal electrodes AE1-AE5; direct invasive fECG signals are recorded by means of a fetal scalp electrode (FSE); the maternal ECG signal can be sensed on the maternal thorax (electrode marked as TE).