Search Results (46)

Search Parameters:
Keywords = non-invasive brain sensors

26 pages, 2111 KiB  
Article
Using Biosensors to Detect and Map Language Areas in the Brain for Individuals with Traumatic Brain Injury
by Ahmed Alduais, Hessah Saad Alarifi and Hind Alfadda
Diagnostics 2024, 14(14), 1535; https://doi.org/10.3390/diagnostics14141535 - 16 Jul 2024
Viewed by 834
Abstract
The application of biosensors in neurolinguistics has significantly advanced the detection and mapping of language areas in the brain, particularly for individuals with brain trauma. This study explores the role of biosensors in this domain and proposes a conceptual model to guide their use in research and clinical practice. The researchers explored the integration of biosensors in language and brain function studies, identified trends in research, and developed a conceptual model based on cluster and thematic analyses. Using a mixed-methods approach, we conducted cluster and thematic analyses on data curated from Web of Science, Scopus, and SciSpace, encompassing 392 articles. This dual analysis facilitated the identification of research trends and thematic insights within the field. The cluster analysis highlighted Functional Magnetic Resonance Imaging (fMRI) dominance and the importance of neuroplasticity in language recovery. Biosensors such as the Magnes 2500 WH whole-head neuromagnetometer and microwire-based sensors are reliable for real-time monitoring, despite methodological challenges. The proposed model synthesizes these findings, emphasizing biosensors’ potential in preoperative assessments and therapeutic customization. Biosensors are vital for non-invasive, precise mapping of language areas, with fMRI and repetitive Transcranial Magnetic Stimulation (rTMS) playing pivotal roles. The conceptual model serves as a strategic framework for employing biosensors and improving neurolinguistic interventions. This research may enhance surgical planning, optimize recovery therapies, and encourage technological advancements in biosensor precision and application protocols.
(This article belongs to the Section Medical Imaging and Theranostics)
Figures:
Figure 1. PRISMA Flow Diagram.
Figure 2. Density Visualization of Co-occurrence from Web of Science.
Figure 3. Density Visualization of Co-occurrence from Scopus and SciSpace.
Figure 4. A Landscape Visualization of the Largest 10 Clusters.
Figure 5. Top 15 Keywords with the Strongest Citation Bursts.
Figure 6. A Model Guiding the Use of Biosensors in Language Area Detection in the Brain.
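The keyword co-occurrence analysis this abstract describes can be illustrated with a toy sketch. The keyword sets below are invented examples, not data from the study (which presumably used bibliometric tools such as VOSviewer or CiteSpace); the sketch only shows how a co-occurrence matrix for cluster analysis is assembled.

```python
import numpy as np

# Hypothetical keyword sets for a handful of articles
articles = [
    {"fMRI", "language", "neuroplasticity"},
    {"fMRI", "biosensor", "language"},
    {"rTMS", "language", "recovery"},
    {"biosensor", "neuroplasticity", "fMRI"},
]
keywords = sorted(set().union(*articles))
index = {k: i for i, k in enumerate(keywords)}

# Co-occurrence matrix: count keyword pairs appearing in the same article
cooc = np.zeros((len(keywords), len(keywords)), dtype=int)
for kws in articles:
    for a in kws:
        for b in kws:
            if a != b:
                cooc[index[a], index[b]] += 1

print(f"fMRI-language co-occurrences: {cooc[index['fMRI'], index['language']]}")
```

A matrix like this is what density visualizations and cluster landscapes (Figures 2-4) are computed from.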
11 pages, 1974 KiB  
Article
Conductive Hydrogel Tapes for Tripolar EEG: A Promising Solution to Paste-Related Challenges
by Cassidy Considine and Walter Besio
Sensors 2024, 24(13), 4222; https://doi.org/10.3390/s24134222 - 29 Jun 2024
Viewed by 697
Abstract
Electroencephalography (EEG) remains pivotal in neuroscience for its non-invasive exploration of brain activity, yet traditional electrodes are plagued with artifacts and the application of conductive paste poses practical challenges. Tripolar concentric ring electrode (TCRE) sensors used for EEG (tEEG) attenuate artifacts automatically, improving the signal quality. Hydrogel tapes offer a promising alternative to conductive paste, providing mess-free application and reliable electrode–skin contact in locations without hair. Since the electrodes of the TCRE sensors are only 1.0 mm apart, the impedance of the skin-to-electrode impedance-matching medium is critical. This study evaluates four hydrogel tapes’ efficacies in EEG electrode application, comparing impedance and alpha wave characteristics. Healthy adult participants underwent tEEG recordings using different tapes. The results highlight varying impedances and successful alpha wave detection despite increased tape-induced impedance. MATLAB’s EEGLab facilitated signal processing. This study underscores hydrogel tapes’ potential as a convenient and effective alternative to traditional paste, enriching tEEG research methodologies. Two of the conductive hydrogel tapes had significantly higher alpha wave power than the other tapes, but were never significantly lower.
(This article belongs to the Section Biomedical Sensors)
Figures:
Figure 1. Tripolar electrode (left) and disc electrode (right).
Figure 2. A TCRE placed on the right mastoid process and held in place with the KM40C hydrogel tape. The inset in the lower right shows the KM40C hydrogel tape on the TCRE.
Figure 3. Schematic of the experimental paradigm timing. This procedure was repeated four times for the four different hydrogel tapes.
Figure 4. Upper row: tEEG measurements from participant C with the TCRE on the KM40C tape, recorded with eyes open (A) and closed (B). Middle row: power spectra, in dB, for eyes open (C, from the signals in A) and eyes closed (D, from the signals in B). Bottom (E): spectrogram of the signals from (A) and (B); frequency increases toward the top, time in seconds increases to the right, and power intensity in dB is indicated by the colour bar (dark red strongest, dark blue weakest).
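The alpha-wave power comparison described in this abstract can be sketched in a few lines. The study used MATLAB's EEGLab; the following is a minimal SciPy equivalent on synthetic data, with the sampling rate, signal amplitudes, and band edges all assumed for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # Hz, assumed sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
# Synthetic "eyes-closed" EEG: a 10 Hz alpha rhythm plus broadband noise
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

# Welch power spectral density, then integrate the 8-12 Hz alpha band
f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (f >= 8) & (f <= 12)
alpha_power = psd[alpha].sum() * (f[1] - f[0])  # V^2 in the alpha band
print(f"alpha-band power: {alpha_power:.3e} V^2")
```

Comparing this quantity across tapes (and against the eyes-open condition) is the kind of contrast the study reports.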
18 pages, 4904 KiB  
Article
An Overall Automated Architecture Based on the Tapping Test Measurement Protocol: Hand Dexterity Assessment through an Innovative Objective Method
by Tommaso Di Libero, Chiara Carissimo, Gianni Cerro, Angela Marie Abbatecola, Alessandro Marino, Gianfranco Miele, Luigi Ferrigno and Angelo Rodio
Sensors 2024, 24(13), 4133; https://doi.org/10.3390/s24134133 - 26 Jun 2024
Cited by 1 | Viewed by 2909
Abstract
The present work focuses on the tapping test, which is a method that is commonly used in the literature to assess dexterity, speed, and motor coordination by repeatedly moving fingers, performing a tapping action on a flat surface. During the test, the activation of specific brain regions enhances fine motor abilities, improving motor control. The research also explores neuromuscular and biomechanical factors related to finger dexterity, revealing neuroplastic adaptation to repetitive movements. To give an objective evaluation of all cited physiological aspects, this work proposes a measurement architecture consisting of the following: (i) a novel measurement protocol to assess the coordinative and conditional capabilities of a population of participants; (ii) a suitable measurement platform, consisting of synchronized and non-invasive inertial sensors to be worn at finger level; (iii) a data analysis processing stage, able to provide the final user (medical doctor or training coach) with a plethora of useful information about the carried-out tests, going far beyond state-of-the-art results from classical tapping test examinations. Particularly, the proposed study underscores the importance of interdigital autonomy for complex finger motions, despite the challenges posed by anatomical connections; this deepens our understanding of upper limb coordination and the impact of neuroplasticity, holding significance for motor abilities assessment, improvement, and therapeutic strategies to enhance finger precision. The proof-of-concept test is performed by considering a population of college students. The obtained results allow us to consider the proposed architecture to be valuable for many application scenarios, such as the ones related to neurodegenerative disease evolution monitoring.
Figures:
Graphical abstract.
Figure 1. The measurement setup: data are acquired through a pair of IMU sensors, driven by the proprietary MOVELLA DOT app, which communicates with a PC where data processing is carried out in a MATLAB R2023b environment.
Figure 2. Two configurations: (a) IMUs placed on the index and middle fingers of the dominant hand; (b) IMUs placed on the index fingers of the right and left hands.
Figure 3. Algorithm block diagram: the main steps for acquiring and selecting the most sensitive axis (INPUT DATA), processing (DATA PROCESSING), and analyzing (DATA ANALYSIS) IMU inertial data.
Figure 4. In configuration (A), the sensor is placed at a fixed distance of 2 cm from the metacarpal joint; in configuration (B), the sensor is placed on the distal phalanx at an unfixed distance.
Figure 5. Top: average number of taps obtained in the two configurations. Bottom: coefficient of variation calculated under conditions A and B.
Figure 6. Example of linear inter-time fitting versus execution time (participant ID 19, UniALT test, index finger). Blue points are the raw inter-time evaluation data; the red line is the linear fitting curve.
Figure 7. Mean excursion of tap-movement acceleration calculated for each finger for the different case studies.
Figure 8. Average trends of simultaneity times during the UniSIM test. The horizontal dashed blue line is the experimental threshold for the simultaneity check.
Figure 9. Average trends of alternation and simultaneity times during the BimSIM test. The dashed blue line is the simultaneity threshold.
Figure 10. Comparison between the mean and standard deviation of the UniSIM and BimSIM tests.
Figure 11. Average trends of simultaneity times during the UniALT test. The dashed blue line is the simultaneity threshold.
Figure 12. Average trends of alternation and simultaneity times during the BimALT test. The dashed blue line is the simultaneity threshold.
Figure 13. Comparison between the means and standard deviations of the UniALT and BimALT tests.
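The core of the tapping-test processing, detecting taps on the most sensitive accelerometer axis and deriving inter-tap times and their coefficient of variation, can be sketched as follows. Everything here is a simplified illustration on synthetic data: the sampling rate, spike amplitude, threshold, and refractory period are assumptions, not the paper's protocol parameters.

```python
import numpy as np

fs = 100  # Hz, assumed IMU sampling rate
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)

# Synthetic acceleration along the most sensitive axis: one tap every ~0.5 s
tap_times = np.arange(0.5, 9.5, 0.5) + rng.normal(0, 0.02, 18)
accel = rng.normal(0, 0.1, t.size)
for tap in tap_times:
    accel[int(tap * fs)] += 5.0  # sharp spike at each tap

# Threshold detection with a refractory period to avoid double counts
threshold = 2.0
refractory = int(0.2 * fs)
taps = []
i = 0
while i < accel.size:
    if accel[i] > threshold:
        taps.append(i / fs)
        i += refractory
    else:
        i += 1

inter_times = np.diff(taps)
cv = inter_times.std() / inter_times.mean()  # coefficient of variation
print(f"taps: {len(taps)}, mean inter-time: {inter_times.mean():.3f} s, CV: {cv:.3f}")
```

A linear fit of the inter-times against execution time (as in Figure 6) would then expose fatigue-related slowing over the test.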
16 pages, 3684 KiB  
Article
Noise Reduction and Localization Accuracy in a Mobile Magnetoencephalography System
by Timothy Bardouille, Vanessa Smith, Elias Vajda, Carson Drake Leslie and Niall Holmes
Sensors 2024, 24(11), 3503; https://doi.org/10.3390/s24113503 - 29 May 2024
Viewed by 633
Abstract
Magnetoencephalography (MEG) non-invasively provides important information about human brain electrophysiology. The growing use of optically pumped magnetometers (OPM) for MEG, as opposed to fixed arrays of cryogenic sensors, has opened the door for innovation in system design and use cases. For example, cryogenic MEG systems are housed in large, shielded rooms to provide sufficient space for the system dewar. Here, we investigate the performance of OPM recordings inside of a cylindrical shield with a 1 × 2 m² footprint. The efficacy of shielding was measured in terms of field attenuation and isotropy, and the value of post hoc noise reduction algorithms was also investigated. Localization accuracy was quantified for 104 OPM sensors mounted on a fixed helmet array based on simulations and recordings from a bespoke current dipole phantom. Passive shielding attenuated the vector field magnitude to 50.0 nT at direct current (DC), to 16.7 pT/√Hz at power line, and to 71 fT/√Hz (median) in the 10–200 Hz range. Post hoc noise reduction provided an additional 5–15 dB attenuation. Substantial field isotropy remained in the volume encompassing the sensor array. The consistency of the isotropy over months suggests that a field nulling solution could be readily applied. A current dipole phantom generating source activity at an appropriate magnitude for the human brain generated field fluctuations on the order of 0.5–1 pT. Phantom signals were localized with 3 mm localization accuracy, and no significant bias in localization was observed, which is in line with performance for cryogenic and OPM MEG systems. This validation of the performance of a small footprint MEG system opens the door for lower-cost MEG installations in terms of raw materials and facility space, as well as mobile imaging systems (e.g., truck-based). Such implementations are relevant for global adoption of MEG outside of highly resourced research and clinical institutions.
(This article belongs to the Special Issue Quantum Sensors and Their Biomedical Applications)
Figures:
Figure 1. System coordinates and phantom design. (a) The two passive and one active shielding cylinders are shown as semi-transparent blue rectangles; dimensions for the cylinders and end caps are given in the cylinder coordinate system. The helmet location and the field-mapping volume are shown as overlapping semi-transparent orange rectangles. Points A and B indicate a cube with 24 cm sides within which the vector magnetic field is mapped; points C and D indicate the volume containing the sensor array (i.e., the helmet). (b) Schematic of the phantom with the relevant coordinate-system angles indicated in blue. (c) Phantom mounted in the OPM helmet. (d) Participant ready to be inserted into the shield. (e) OPM time courses from a single recording; each line is the magnetic field at one of sixteen sensors (one colour per sensor) during a phantom recording, low-pass filtered at 30 Hz (no high pass) to highlight raw signals in the lower-frequency regime.
Figure 2. Magnetic field and shielding factor spectra. Spectra acquired outside the shield (top row) and at the centre of the helmet (middle row), and the associated shielding factor (bottom row), for sensors oriented (a) posterior to anterior, (b) left to right, and (c) superior to inferior; coordinates are with respect to a supine participant with their head in the helmet. (d) Vector magnitude spectra and associated shielding factors.
Figure 3. Vector field within the volume of interest. DC field vector strength along each cardinal axis as a function of location within a volume encompassing the OPM helmet. Amplitudes in the x (top row), y (middle row), and z (bottom row) orientations are shown against position in the (a) x, (b) y, and (c) z directions, for points on a 4 × 4 × 8 (x-y-z) grid; each line in each plot represents the field amplitude along one line.
Figure 4. A 3-D vector representation of the field map. Arrows represent the field vector at each point in space, with orientation accurately represented and length proportional to field strength; the mean field across the volume is subtracted from each vector to highlight the field gradients across the volume.
Figure 5. Evoked field data for phantom sources. (a) Evoked field topographies generated by activation of four current sources on the phantom, with a helmet schematic superimposed to clarify the spatial sensor arrangement (participant left on the left, nose at the top). (b) Evoked field as a function of time for all 104 OPM sites and four current sources on the phantom.
Figure 6. Localization errors (LE) for measured and simulated equivalent current dipoles, shown for each of the 12 sources along each cardinal axis (x, y, z) and as a vector magnitude (R). Phantom measurements in the OPM system are on the left; matched simulations on the right. Each coloured column indicates LE for a different phantom source location.
Figure 7. Noise reduction in phantom recordings. Magnetic field spectra (a) prior to and (b) following reference-array regression and homogeneous field correction for all 104 OPM recording sites; (c) the shielding factor spectra associated with these two processes. Black lines indicate the mean across all sensors; each coloured line indicates the spectrum for one recording site in the helmet.
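The shielding factors this abstract reports are amplitude ratios expressed in dB. A minimal sketch of the arithmetic, where the inside level is the abstract's 16.7 pT/√Hz power-line figure and the outside level is an assumed example value, not a number from the paper:

```python
import numpy as np

# Field amplitude spectral densities at the power-line frequency
b_outside = 120e-9   # T/sqrt(Hz), ASSUMED ambient level outside the shield
b_inside = 16.7e-12  # T/sqrt(Hz), residual level at the helmet centre (from abstract)

# Shielding factor in dB: 20*log10 of the amplitude ratio
sf_db = 20 * np.log10(b_outside / b_inside)
print(f"passive shielding factor: {sf_db:.1f} dB")

# Post hoc noise reduction adds a further 5-15 dB (per the abstract);
# total attenuation with the midpoint of that range:
total_db = sf_db + 10
```

The same 20·log10 convention is what the "shielding factor spectra" in Figures 2 and 7 plot as a function of frequency.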
24 pages, 9326 KiB  
Article
Information-Theoretical Analysis of the Cycle of Creation of Knowledge and Meaning in Brains under Multiple Cognitive Modalities
by Joshua J. J. Davis, Florian Schübeler and Robert Kozma
Sensors 2024, 24(5), 1605; https://doi.org/10.3390/s24051605 - 29 Feb 2024
Viewed by 1432
Abstract
It is of great interest to develop advanced sensory technologies allowing non-invasive monitoring of neural correlates of cognitive processing in people performing everyday tasks. Considerable progress has been reported in recent years in this research area using scalp EEG arrays, but the high level of noise in the electrode signals poses substantial challenges. This study presents results of detailed statistical analysis of experimental data on the cycle of creation of knowledge and meaning in human brains under multiple cognitive modalities. We measure brain dynamics using a HydroCel Geodesic Sensor Net, 128-electrode dense-array electroencephalography (EEG). We compute a pragmatic information (PI) index derived from analytic amplitude and phase, by Hilbert transforming the EEG signals of 20 participants in six modalities, which combine various audiovisual stimuli, leading to different mental states, including relaxed and cognitively engaged conditions. We derive several relevant measures to classify different brain states based on the PI indices. We demonstrate significant differences between engaged brain states that require sensory information processing to create meaning and knowledge for intentional action, and relaxed-meditative brain states with less demand on psychophysiological resources. We also point out that different kinds of meanings may lead to different brain dynamics and behavioral responses.
Figures:
Figure 1. An example of the ambiguous images shown to the participants (My Wife and My Mother-In-Law, W. E. Hill, 1915).
Figure 2. (a) Schematic of the Hilbert-transform-based methodology: a narrow frequency band is applied to the EEG signal X(t), producing the filtered signal Y(t), followed by a Hilbert transformation. This yields the complex-valued signal Z(t); in polar coordinates, the modulus of Z(t) gives the analytic amplitude (AA) and the angle gives the analytic phase (AP). (b) Resulting signals after the Hilbert transform is applied to a sinusoidal time series, showing the real and imaginary parts of the complex signal. (c) Analytic amplitude (AA) and phase (AP) derived from the resulting signals. Examples of the indices computed after Hilbert transforming the signal amplitude for EEG channel 2: (d) AA(t), (e) IF(t), (f) AP(t), (g) SA(t).
Figure 3. Illustration of the cycle of creation of knowledge and meaning. A visual stimulus is presented to the animal at time instant 3 s and is processed and resolved in the 1 s window following stimulus presentation [37].
Figure 4. Examples of the indices computed after Hilbert transforming the signal amplitude for each of the 128 electrodes (plotted in different colours in a, b, and e): (a) analytic amplitude A(t) or AA(t), (b) signal amplitude S(t) or SA(t), (c) spatial ensemble averages ⟨AA(t)⟩ with 3-sigma band, (d) spatial ensemble averages ⟨SA(t)⟩ with 3-sigma band, (e) analytic frequency IF(t).
Figure 5. Pragmatic information illustration: (e) H_e(t) is the ratio ⟨A²(t)⟩/D_e(t), where D_e(t) and AA²(t) are shown in plots (a-c). (d) displays H_e(t)₁ and (e) H_e(t)₂; these are pragmatic information indices in which D_e(t) is based on amplitude and phase, respectively.
Figure 6. (a) Example positioning of the EGI EEG array (128 electrodes) on a participant's scalp. (b) Brain areas colour coded and represented in a matrix. (c) Contour plot of the pragmatic information index H_e(t) during the 3.5 s response time across pre-frontal, frontal, central, and occipital brain areas, as displayed in subplot (b); see also [37]. (d) The H_e(t) signals for the same brain areas and time windows Δt. (e) Both graphs (c,d) display results for participant P7, stimulus S9, in the WORDS modality, Theta band, where stimulus presentation (LVEO) coincides with start time 0 and the button press providing the answer occurs at the end of stimulus processing, at 3.5 s.
Figure 7. Pragmatic information index H_e(t) in (a) the Alpha and (b) the H-Gamma frequency bands, illustrating peaks above the 0.1 threshold, with selection rules considering peak duration and time between peaks.
Figure 8. (a) Number of peaks per second (NPS) across six modalities and six frequency bands for participant 1. The modalities are in the order introduced at the beginning, e.g., Meditation (M) first in dark blue and MathMind (MM) fourth in yellow. (b) Mean peaks/second across modalities for the six frequency bands; (c) mean peaks/second across frequencies for the six modalities.
Figure 9. Results of NPS evaluations: (a) mean NPS for various frequencies, clustered according to the six modalities; (c) mean NPS for various modalities, clustered according to the frequencies. Mean NPS with error bars across participants for (b) modalities and (d) frequency bands.
Figure 10. Illustration of the time between peaks (TBP, green) and the time or duration of a peak (TOP, red). The blue line shows the computed PI index H_e(t)₂.
Figure 11. Upper row: CDF plots of TOP values in (a) the Alpha and (b) the H-Gamma frequency band for all participants in each modality; 3-D bar graphs of mean TOP values per participant (x-axis) and modality (y-axis) for (c) the Alpha and (d) the H-Gamma band. Lower row: CDF plots of TBP values in (e) the Alpha and (f) the H-Gamma band; 3-D bar graphs of mean TBP values for (g) the Alpha and (h) the H-Gamma band. The colours in (c,d,g,h) represent the 20 participants from P1 (dark blue) to P20 (dark red).
Figure 12. Mean TOP (a,b) and mean TBP (c,d) values with corresponding error bars for all participants and each modality in the Alpha (a,c) and H-Gamma (b,d) frequency bands.
Figure 13. Mean NPS with error bars for all participants in each modality for the Alpha and H-Gamma frequency bands.
Figure 14. CDF for PIPT, considering all modalities, in Alpha (a), H-Gamma (b), L-Gamma (e), L-Beta (f), and H-Beta (g), and for PQPT in the Alpha (c) and H-Gamma (d) frequency bands.
Figure 15. PIPT mean values with error bars for all modalities in each frequency band.
Figure 16. Relationships between mean TOP, NPS, and TBP: TOP vs. NPS for (a) Alpha and (d) H-Gamma; TBP vs. NPS for (b) Alpha and (e) H-Gamma; TOP vs. TBP (red) for (c) Alpha and (f) H-Gamma.
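The narrow-band filter plus Hilbert transform pipeline described in the abstract (and Figure 2) can be sketched with SciPy. The sampling rate, band edges, and synthetic theta-band signal below are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic channel: a 6 Hz theta rhythm plus broadband noise
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

# Narrow-band filter to the theta band (4-8 Hz) before the Hilbert step
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
y = filtfilt(b, a, x)

# Analytic signal Z(t) = Y(t) + i*H[Y(t)]
z = hilbert(y)
aa = np.abs(z)                              # analytic amplitude AA(t)
ap = np.unwrap(np.angle(z))                 # unwrapped analytic phase AP(t)
inst_freq = np.diff(ap) * fs / (2 * np.pi)  # instantaneous frequency IF(t)
print(f"median instantaneous frequency: {np.median(inst_freq):.2f} Hz")
```

AA(t) and AP(t) are the ingredients from which amplitude- and phase-based pragmatic information indices such as H_e(t) are then derived.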
28 pages, 1351 KiB  
Systematic Review
Time-Series Modeling and Forecasting of Cerebral Pressure–Flow Physiology: A Scoping Systematic Review of the Human and Animal Literature
by Nuray Vakitbilir, Logan Froese, Alwyn Gomez, Amanjyot Singh Sainbhi, Kevin Y. Stein, Abrar Islam, Tobias J. G. Bergmann, Izabella Marquez, Fiorella Amenta, Younis Ibrahim and Frederick A. Zeiler
Sensors 2024, 24(5), 1453; https://doi.org/10.3390/s24051453 - 23 Feb 2024
Viewed by 1130
Abstract
The modeling and forecasting of cerebral pressure–flow dynamics in the time–frequency domain have promising implications for veterinary and human life sciences research, enhancing clinical care by predicting cerebral blood flow (CBF)/perfusion, nutrient delivery, and intracranial pressure (ICP)/compliance behavior in advance. Despite its potential, the literature lacks coherence regarding the optimal model type, structure, data streams, and performance. This systematic scoping review comprehensively examines the current landscape of cerebral physiological time-series modeling and forecasting. It focuses on temporally resolved cerebral pressure–flow and oxygen delivery data streams obtained from invasive/non-invasive cerebral sensors. A thorough search of databases identified 88 studies for evaluation, covering diverse cerebral physiologic signals from healthy volunteers, patients with various conditions, and animal subjects. Methodologies range from traditional statistical time-series analysis to innovative machine learning algorithms. A total of 30 studies in healthy cohorts and 23 studies in patient cohorts with traumatic brain injury (TBI) concentrated on modeling CBFv and predicting ICP, respectively. Animal studies exclusively analyzed CBF/CBFv. Of the 88 studies, 65 predominantly used traditional statistical time-series analysis, with transfer function analysis (TFA), wavelet analysis, and autoregressive (AR) models being prominent. Among machine learning algorithms, support vector machine (SVM) was widely utilized, and decision trees showed promise, especially in ICP prediction. Nonlinear models and multi-input models were prevalent, emphasizing the significance of multivariate modeling and forecasting. This review clarifies knowledge gaps and sets the stage for future research to advance cerebral physiologic signal analysis, benefiting neurocritical care applications.
(This article belongs to the Special Issue Biomedical Signals, Images and Healthcare Data Analysis)
Show Figures
Figure 1: PRISMA flow diagram of this systematic review.
Figure 2: Distribution of studies corresponding to the methodologies employed, as well as the medical diagnostic tests with respect to the studied pathology.
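The autoregressive (AR) models highlighted in the abstract above can be sketched in a few lines. The following is a minimal illustration, not any reviewed study's implementation: an AR(p) model is fitted by least squares to a synthetic stand-in for a CBFv recording and rolled forward to forecast; the function names and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model: x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    # Each row of the design matrix holds the p most recent samples, newest first
    X = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(history, coeffs, steps):
    """Roll the fitted model forward `steps` samples beyond the observed series."""
    h = list(history)
    p = len(coeffs)
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(coeffs, h[-1:-p - 1:-1]))  # p most recent samples, newest first
        preds.append(nxt)
        h.append(nxt)
    return preds

# Synthetic stand-in for a cerebral blood flow velocity (CBFv) recording:
# a stationary AR(2) process with known coefficients [0.6, 0.3]
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] + 0.3 * x[t - 2] + 0.1 * rng.standard_normal()

a = fit_ar(x, 2)           # estimates close to the true [0.6, 0.3]
preds = forecast(x, a, 5)  # five-sample-ahead forecast
```

The same least-squares machinery generalizes to the multi-input (multivariate) models the review found prevalent, by stacking lags of several signals into the design matrix.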
28 pages, 13458 KiB  
Article
Crescent Antennas as Sensors: Case of Sensing Brain Pathology
by Usman Anwar, Tughrul Arslan and Peter Lomax
Sensors 2024, 24(4), 1305; https://doi.org/10.3390/s24041305 - 18 Feb 2024
Viewed by 1095
Abstract
Microstrip crescent antennas offer compactness, conformability, low profile, high sensitivity, multi-band operability, cost-effectiveness and ease of fabrication in contrast to bulky, rigid horn, helical and Vivaldi antennas. This work presents crescent sensors for monitoring brain pathology associated with stroke and atrophy. Single- and [...] Read more.
Microstrip crescent antennas offer compactness, conformability, low profile, high sensitivity, multi-band operability, cost-effectiveness and ease of fabrication in contrast to bulky, rigid horn, helical and Vivaldi antennas. This work presents crescent sensors for monitoring brain pathology associated with stroke and atrophy. Single- and multi-element crescent sensors are designed and validated by software simulations. The fabricated sensors are integrated with glasses and experimentally evaluated using a realistic brain phantom. The performance of the sensors is compared in terms of peak gain, directivity, radiation performance, flexibility and detection capability. The crescent sensors can detect the pathologies through the monitoring of backscattered electromagnetic signals that are triggered by dielectric variations in the affected tissues. The proposed sensors can effectively detect stroke and brain atrophy targets with a volume of 25 mm³ and 56 mm³, respectively. The safety of the sensors is examined through the evaluation of Specific Absorption Rate (peak SAR < 1.25 W/kg, 100 mW), temperature increase within brain tissues (max: 0.155 °C, min: 0.115 °C) and electric field analysis. The results suggest that the crescent sensors can provide a flexible, portable and non-invasive solution to monitor degenerative brain pathology. Full article
(This article belongs to the Section Wearables)
Show Figures
Figure 1: Remote brain sensing and neurodiagnostics with wearable radio frequency (RF) sensors.
Figure 2: Design stages for single-element crescent sensors.
Figure 3: Sensor-1: single-element crescent sensor with circular slots on a partial ground structure. (a) Fabricated sensor. (b) Geometric configuration.
Figure 4: Sensor-2: single-element crescent sensor with tapered feed line and partial ground plane structure. (a) Fabricated sensor. (b) Geometric configuration.
Figure 5: (a) Reflection (S11) measurements from free-space simulations. (b) Simulated voltage standing wave ratio (VSWR) for sensor-1 and sensor-2.
Figure 6: Simulated far-field radiation patterns for (a) sensor-1 at 1.9 GHz and (b) sensor-2 at 2.6 GHz.
Figure 7: Sensor-3: dual-element crescent sensor. (a) Fabricated sensor. (b) Geometric configuration.
Figure 8: Sensor-4: quad-element crescent sensor array. (a) Fabricated sensor. (b) Geometric configuration.
Figure 9: (a) Reflection (S11) measurements from free-space simulations. (b) Simulated voltage standing wave ratio (VSWR) for sensor-3 and sensor-4.
Figure 10: Simulated far-field radiation patterns for (a) sensor-3 at 2.6 GHz and (b) sensor-4 at 1.75 GHz.
Figure 11: Sensor-5: flexible slotted crescent sensor. (a) Fabricated sensor. (b) Geometric configuration.
Figure 12: (a) Free-space reflection (S11) measurements in flat and bending configurations. (b) Simulated far-field radiation pattern for sensor-5 at 2.25 GHz.
Figure 13: (a) Simulation head voxel model with sensor placement on glasses. (b) Customized voxel with stroke targets placed at points A, B and C. (c) Normal brain voxel without atrophy or shrinkage. (d) Customized brain model representing 15% atrophy of grey matter. (e) Brain model with 35% atrophy of grey matter.
Figure 14: Simulated reflection (S11) results from sensor-1 for (a) brain stroke and (b) brain atrophy, sensor-2 for (c) brain stroke and (d) brain atrophy, sensor-3 for (e) brain stroke and (f) brain atrophy, sensor-4 for (g) brain stroke and (h) brain atrophy and sensor-5 for (i) brain stroke and (j) brain atrophy.
Figure 15: Simulated specific absorption rate (SAR) for (a) sensor-1 at 1.9 GHz, (b) sensor-2 at 2.6 GHz, (c) sensor-3 at 2.6 GHz, (d) sensor-4 at 1.75 GHz and (e) sensor-5 at 2.25 GHz.
Figure 16: (a) Glasses with integrated crescent sensors for experimentation. (b) Fabricated gel-based brain phantom customized with stroke and brain atrophy targets.
Figure 17: Measured reflection (S11) results from sensor-1 for (a) brain stroke and (b) brain atrophy, sensor-2 for (c) brain stroke and (d) brain atrophy, sensor-3 for (e) brain stroke and (f) brain atrophy, sensor-4 for (g) brain stroke and (h) brain atrophy and sensor-5 for (i) brain stroke and (j) brain atrophy.
Figure 18: Simulated reflection (S11) results from the circular sensor for (a) brain stroke and (b) brain atrophy and the multi-ring sensor for (c) brain stroke and (d) brain atrophy.
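The detection principle described in this abstract, pathology-induced dielectric variation shifting the backscattered reflection response, can be illustrated with a toy model. The sketch below assumes a Lorentzian-shaped S11 dip near sensor-5's 2.25 GHz operating band; the 50 MHz shift and the 20 MHz decision threshold are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

def s11_dip(freqs_ghz, f_res, depth_db=20.0, bw_ghz=0.1):
    # Synthetic Lorentzian-shaped reflection dip (in dB) centred at f_res
    return -depth_db / (1.0 + ((freqs_ghz - f_res) / bw_ghz) ** 2)

def resonance_shift(freqs_ghz, baseline, measured):
    # Pathology indicator: displacement of the S11 minimum vs. a healthy baseline
    return freqs_ghz[np.argmin(measured)] - freqs_ghz[np.argmin(baseline)]

freqs = np.linspace(1.5, 3.0, 1501)   # 1 MHz frequency grid, 1.5-3.0 GHz sweep
healthy = s11_dip(freqs, 2.25)        # baseline dip in sensor-5's band
stroke = s11_dip(freqs, 2.20)         # assumed 50 MHz downward shift from dielectric change
delta = resonance_shift(freqs, healthy, stroke)
flagged = abs(delta) > 0.02           # illustrative 20 MHz decision threshold
```

A real system would compare measured S11 traces against a subject baseline rather than synthetic dips, but the decision logic (track the resonance displacement) is the same.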
12 pages, 2774 KiB  
Article
A Surface Imprinted Polymer EIS Sensor for Detecting Alpha-Synuclein, a Parkinson’s Disease Biomarker
by Roslyn Simone Massey, Rishabh Ramesh Appadurai and Ravi Prakash
Micromachines 2024, 15(2), 273; https://doi.org/10.3390/mi15020273 - 15 Feb 2024
Cited by 1 | Viewed by 1419
Abstract
Parkinson’s Disease (PD) is a debilitating neurodegenerative disease, causing loss of motor function and, in some instances, cognitive decline and dementia in those affected. The quality of life can be improved, and disease progression delayed through early interventions. However, current methods of confirming [...] Read more.
Parkinson’s Disease (PD) is a debilitating neurodegenerative disease, causing loss of motor function and, in some instances, cognitive decline and dementia in those affected. The quality of life can be improved, and disease progression delayed through early interventions. However, current methods of confirming a PD diagnosis are extremely invasive. This prevents their use as a screening tool for the early onset stages of PD. We propose a surface imprinted polymer (SIP) electroimpedance spectroscopy (EIS) biosensor for detecting α-Synuclein (αSyn) and its aggregates, a biomarker that appears in saliva and blood during the early stages of PD as the blood-brain barrier degrades. The surface imprinted polymer stamp is fabricated by low-temperature melt stamping polycaprolactone (PCL) on interdigitated EIS electrodes. The result is a low-cost, small-footprint biosensor that is highly suitable for non-invasive monitoring of the disease biomarker. The sensors were tested with αSyn dilutions in deionized water and in constant ionic concentration matrix solutions with decreasing concentrations of αSyn to remove the background effects of concentration. The device response confirmed the specificity of these devices to the target protein of monomeric αSyn. The sensor limit of detection was measured to be 5 pg/L, and its linear detection range was 5 pg/L–5 µg/L. This covers the physiological range of αSyn in saliva and makes this a highly promising method of quantifying αSyn monomers for PD patients in the future. The SIP surface was regenerated, and the sensor was reused to demonstrate its capability for repeat sensing as a potential continuous monitoring tool for the disease biomarker. Full article
(This article belongs to the Section B:Biology and Biomedicine)
Show Figures
Figure 1: (a) αSyn stamp fabrication process. (b) SIP fabrication on IDEs. (c) Microfluidic channel addition method. (d) Sample testing and regeneration process. (e) Photograph showing a PCL SIP device under test (DUT). (f) EIS IDE prior to PCL deposition. (g) A microfluidic channel-integrated SIP EIS DUT.
Figure 2: (a) SEM image of the PCL SIP on an IDE electrode. (b) AFM image of the PCL SIP after testing and regeneration. (c) AFM micrograph showing surface profile characteristics of the SIP in the sensing area. (d) AFM micrograph of the αSyn stamp showing variable sizes of immobilized material.
Figure 3: (a) Randles–Ershler circuit model. (b) Equivalent-circuit data shape and parameter extraction. (c) Real sensor data example and extraction locations for Randles–Ershler behavior. (d) EIS response of real data for a dilution series of αSyn in DI water.
Figure 4: (a) Raw impedance data for a total synuclein protein concentration of 100 µg/L with decreasing ratios of αSyn to βSyn. (b) αSyn concentration dependence in constant ionic concentration solutions. The solid line is the fitted curve with the grey region indicating the 95% confidence interval; circles are the averaged % change in CG data with error bars showing the standard deviation.
Figure 5: (a) Raw impedance data for serial dilutions of αSyn in a 100 µg/L total synuclein protein solution. (b) Linear fit showing the demonstrable linear range response to αSyn for the microfluidic channel-integrated SIP EIS biosensor. The solid line is the fitted linear curve with the grey region indicating the 95% confidence interval; circles are the averaged % change in CG data with error bars showing the standard deviation.
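The Randles–Ershler equivalent-circuit fitting used above for EIS parameter extraction can be illustrated with the simpler Randles cell. The sketch below (Warburg element omitted; all parameter values are illustrative, not fitted sensor data) computes the complex impedance whose low- and high-frequency limits are what parameter extraction relies on.

```python
import numpy as np

def randles_z(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of a simplified Randles cell (Warburg element omitted):
    solution resistance R_s in series with the charge-transfer resistance R_ct
    in parallel with the double-layer capacitance C_dl."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_s + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

# Illustrative parameter values (ohms and farads), swept over a typical EIS range
freqs = np.logspace(-1, 6, 50)   # 0.1 Hz .. 1 MHz
z = randles_z(freqs, r_s=100.0, r_ct=1e4, c_dl=1e-6)

# Limiting behaviour exploited for extraction:
# |Z| -> R_s + R_ct as frequency -> 0, and Re(Z) -> R_s at high frequency
```

In practice the measured spectrum is fitted (e.g. by nonlinear least squares) to this model, and the change in an extracted parameter against analyte concentration gives the calibration curve.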
36 pages, 21226 KiB  
Article
Brain Wearables: Validation Toolkit for Ear-Level EEG Sensors
by Guilherme Correia, Michael J. Crosse and Alejandro Lopez Valdes
Sensors 2024, 24(4), 1226; https://doi.org/10.3390/s24041226 - 15 Feb 2024
Cited by 1 | Viewed by 1974
Abstract
EEG-enabled earbuds represent a promising frontier in brain activity monitoring beyond traditional laboratory testing. Their discrete form factor and proximity to the brain make them the ideal candidate for the first generation of discrete non-invasive brain–computer interfaces (BCIs). However, this new technology will [...] Read more.
EEG-enabled earbuds represent a promising frontier in brain activity monitoring beyond traditional laboratory testing. Their discrete form factor and proximity to the brain make them the ideal candidate for the first generation of discrete non-invasive brain–computer interfaces (BCIs). However, this new technology will require comprehensive characterization before we see widespread consumer and health-related usage. To address this need, we developed a validation toolkit that aims to facilitate and expand the assessment of ear-EEG devices. The first component of this toolkit is a desktop application (“EaR-P Lab”) that controls several EEG validation paradigms. This application uses the Lab Streaming Layer (LSL) protocol, making it compatible with most current EEG systems. The second element of the toolkit introduces an adaptation of the phantom evaluation concept to the domain of ear-EEGs. Specifically, it utilizes 3D scans of the test subjects’ ears to simulate typical EEG activity around and inside the ear, allowing for controlled assessment of different ear-EEG form factors and sensor configurations. Each of the EEG paradigms was validated using wet-electrode ear-EEG recordings and benchmarked against scalp-EEG measurements. The ear-EEG phantom was successful in acquiring performance metrics for hardware characterization, revealing differences in performance based on electrode location. This information was leveraged to optimize the electrode reference configuration, resulting in increased auditory steady-state response (ASSR) power. Through this work, an ear-EEG evaluation toolkit is made available with the intention of facilitating the systematic assessment of novel ear-EEG devices, from hardware to neural signal acquisition. Full article
(This article belongs to the Special Issue Biomedical Electronics and Wearable Systems)
Show Figures
Figure 1: EaR-P Lab: structure and main attributes.
Figure 2: EaR-P Lab: schematic of the functional framework.
Figure 3: EaR-P Lab: main menu.
Figure 4: Latency variation when recording multiple event-related potential (ERP) blocks in the same file, exemplified for auditory stimuli; a similar effect occurs for visual stimuli.
Figure 5: Cascading effect nullified when recording multiple ERP blocks in different files after restarting data streaming, exemplified for auditory stimuli; a similar effect occurs for visual stimuli.
Figure 6: EEG acquisition setup schematic and equipment: (a) TASCAM US-100 USB audio interface; (b) FiiO Alpen 2 digital-to-analog converter (DAC) amplifier; (c) ER2 Etymotic tubal-insert research-grade earphones.
Figure 7: Grand average spectrogram for the alpha block paradigm at Oz (Cz referenced). The bottom horizontal plot shows the mean alpha power (8 Hz) as a function of eye state, while the left vertical plot shows the frequency response for the two conditions.
Figure 8: Grand average ASSR responses (black line) to a 40 Hz AM auditory stimulus at P4 (left) and T8 (right). Statistically significant peaks are highlighted by the green star token, based on an F-test (p < 0.05); grey lines represent individual responses.
Figure 9: Grand average SSVEP responses (black line) to 10 Hz visual stimuli at Oz (left) and T8 (right). Statistically significant peaks are highlighted by the green star token, based on an F-test (p < 0.05); grey lines represent individual responses. Only the first harmonic was statistically evaluated.
Figure 10: Grand average AEP waveform (black line) at T8. Statistically significant segments are highlighted in green, based on t-tests (p < 0.05, not corrected for multiple comparisons); grey lines represent individual responses.
Figure 11: Grand average VEP waveform (black line) at Oz (left) and T8 (right). Statistically significant segments are highlighted in green, based on t-tests (p < 0.05, not corrected for multiple comparisons); grey lines represent individual responses.
Figure 12: Grand average mismatch negativity (MMN) waveform at Pz (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons).
Figure 13: Grand average P300 waveform at P4 (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons).
Figure 14: EOG amplitudes for soft and hard blinks in an example subject recorded at F3 (left) and T8 (right).
Figure 15: Grand average saccade profiles in the four cardinal directions, at T7 (left) and T8 (right).
Figure 16: CAD drawings of the ear-EEG phantom mold casing: closed render of the mold (left); exploded render of the mold (right).
Figure 17: Outer ear scans (blue: left ear, red: right ear) from an example subject shown in elevation (left) and plan (right) view. The scans were obtained by an expert audiologist and digitized as .stl files.
Figure 18: Example of a left ear scan being centered and oriented with the phantom's lid mesh. Different views of the alignment and depth of the ear mesh and the lid mesh combined into a single rendered object are shown below.
Figure 19: Disassembled ear-EEG phantom: bottom half (yellow), top half (white), and two lids with a left and right ear imprint from one of the test subjects.
Figure 20: Ear-EEG phantom assembly; antennas and railing fittings were sealed with tape.
Figure 21: Ear-EEG phantoms made with agar (left) and ballistic gelatin (BG) (right).
Figure 22: CF-doped silicone ear-EEG phantom; the lack of conductive homogeneity is highlighted on the right, with conductive and non-conductive zones visible.
Figure 23: Schematic of the testing setup of the proposed ear-EEG phantom.
Figure 24: Contact impedance measures (kΩ) for the agar ear-EEG phantom in wet- and dry-electrode conditions (taken on Day 2 of testing). * indicates electrodes that surpassed an impedance of 50 kΩ in the dry condition.
Figure 25: Noise floor measures (µVrms) for the agar ear-EEG phantom in wet- and dry-electrode conditions (taken on Day 2 of testing). * indicates electrodes that surpassed a noise floor of 50 µVrms in the dry condition.
Figure 26: Power spectrum (dB) showing the synthetically generated alpha wave (10 Hz input signal) recorded using a custom ear-EEG device (electrode ER8) for the agar and BG phantoms in dry- and wet-electrode conditions.
Figure 27: EEG earbuds developed by Segotia: internal side of the tested earbuds (left); external side (right). Sensors are numbered in order of signal channel acquisition 1–8.
Figure 28: Ear- and scalp-EEG electrode configurations color-coded as in Figure 27. Electrode numbers are provided in the legend, where "x" is replaced by L or R to indicate the left or right ear, respectively.
Figure 29: Ear- and scalp-EEG setup in side view (left) and posterior view (right).
Figure 30: Grand average alpha modulation (wet ear-EEG) at ER8/EL8 for different referencing configurations. Omitted results are not significant based on a t-test (p < 0.05).
Figure 31: Grand average ASSR responses (wet ear-EEG) to a 40 Hz AM auditory stimulus at ER8/EL8 for different referencing configurations. Omitted results are not significant based on an F-test (p < 0.05).
Figure 32: Grand average SSVEP responses (wet ear-EEG) to a 10 Hz visual stimulus at ER8/EL8 for different referencing configurations. Omitted results are not significant based on an F-test (p < 0.05).
Figure 33: Grand average AEP waveform (black line, wet ear-EEG) at ER8 referenced to Cz (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons). Grey lines represent individual responses.
Figure 34: Grand average AEP waveform (black line, wet ear-EEG) at EL8 referenced to ER3. Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons). Grey lines represent individual responses.
Figure 35: Grand average VEP waveform (black line, wet ear-EEG) at ER8 referenced to Cz (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons). Grey lines represent individual responses.
Figure 36: Grand average VEP waveform (black line, wet ear-EEG) at EL8 referenced to ER3. Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons). Grey lines represent individual responses.
Figure 37: Grand average MMN waveform (wet ear-EEG) at ER8 referenced to Cz (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons).
Figure 38: Grand average P300 waveform (wet ear-EEG) at ER8 referenced to Cz (left) and T8 (right). Statistically significant segments are highlighted in green, based on a t-test (p < 0.05, not corrected for multiple comparisons).
Figure 39: EOG amplitude ratio for soft and hard blinks (wet ear-EEG) for different reference configurations.
Figure 40: Grand average saccade profiles (wet ear-EEG) in the four cardinal directions at ER8 referenced to Cz (left) and T8 (right).
Figure 41: Grand average saccade profiles (wet ear-EEG) in the four cardinal directions at ER8 referenced within-ear (left) and between ears (right).
Figure 42: Reassessment of dry-electrode ear-EEG ASSR data for the subject used in the construction of the ear-EEG phantom. Data were re-referenced from ER3 (the original reference) to ER4 (a better electrode proposed by the phantom), resulting in an increase in SNR.
Figure A1: Focus cross utilized by the "Resting State", "ASSR", and "Alpha Block" paradigms.
Figure A2: Target used for the "SSVEP" experiment in EaR-P Lab.
Figure A3: "Alpha Block" paradigm operation, with the markers sent at the start and end of each phase.
Figure A4: "AEP" sequence featured in EaR-P Lab.
Figure A5: "VEP" button paradigm structure and options.
Figure A6: "OddBall"-type paradigm sequence; example of visual oddballs.
Figure A7: "Follow-the-dot" phase of the "EOG" paradigm in EaR-P Lab.
Figure A8: EaR-P Lab: Settings menu.
Figure A9: EaR-P Lab: Markers menu.
Figure A10: Proposed ear-EEG phantom prototype dimensions: side view.
Figure A11: Proposed ear-EEG phantom prototype dimensions: front view.
Figure A12: Proposed ear-EEG phantom prototype dimensions: top view.
Figure A13: Proposed ear-EEG phantom prototype dimensions: lids.
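The ERP paradigms in this toolkit (AEP, VEP, MMN, P300) all reduce to cutting the continuous recording into marker-locked epochs, baseline-correcting, and averaging. Below is a minimal sketch of that core step; the function name, sampling rate, marker positions, and injected "response" are assumptions for demonstration, not the toolkit's code.

```python
import numpy as np

def epoch_average(signal, fs, markers, tmin=-0.2, tmax=0.5):
    """Average marker-locked windows of a continuous EEG channel (the core ERP step)."""
    n_pre = int(round(-tmin * fs))    # samples kept before each marker
    n_post = int(round(tmax * fs))    # samples kept after each marker
    epochs = []
    for m in markers:
        if m - n_pre < 0 or m + n_post > len(signal):
            continue                  # drop epochs that run off the recording
        ep = signal[m - n_pre:m + n_post].astype(float)
        ep -= ep[:n_pre].mean()       # baseline-correct on the pre-stimulus interval
        epochs.append(ep)
    return np.mean(epochs, axis=0)

# Toy recording: a deterministic "response" injected after each event marker
fs = 100
signal = np.zeros(1000)
markers = [300, 600]
for m in markers:
    signal[m:m + 10] += 1.0           # 100 ms square "evoked response"

erp = epoch_average(signal, fs, markers)
```

In a real LSL-based pipeline the marker sample indices come from a dedicated marker stream time-aligned with the EEG stream, which is why the latency behavior in Figures 4 and 5 matters.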
2020 KiB  
Proceeding Paper
Directed Evolution of a Genetically Encoded Bioluminescent Ca2+ Sensor
by Yufeng Zhao, Sungmoo Lee, Robert E. Campbell and Michael Z. Lin
Eng. Proc. 2023, 35(1), 20; https://doi.org/10.3390/IECB2023-14563 - 8 May 2023
Cited by 1 | Viewed by 938
Abstract
The use of genetically encoded fluorescent sensors for the calcium ion (Ca2+) has revolutionized neuroscience research by allowing for the recording of dozens of neurons at the single-cell level in living animals. However, fluorescence imaging has some limitations such as the [...] Read more.
The use of genetically encoded fluorescent sensors for the calcium ion (Ca2+) has revolutionized neuroscience research by allowing for the recording of dozens of neurons at the single-cell level in living animals. However, fluorescence imaging has some limitations such as the need for excitation light, which can result in a highly auto-fluorescent background and phototoxicity. In contrast, bioluminescent sensors using luciferase do not require excitation light, making them ideal for non-invasive deep tissue imaging in mammals. Our lab has previously developed a bioluminescent Ca2+ sensor CaMBI to image Ca2+ activity in the mouse liver, but its responsiveness to Ca2+ changes was suboptimal. To improve the performance of this sensor, we applied directed evolution to screen for genetic variants with increased responsiveness. Through several rounds of evolution, we identified variants with more than five times improved responsiveness in vitro. We characterized the improved sensors in culture cell lines and dissociated rat neurons and confirmed that they exhibited a higher sensitivity to changes in intracellular Ca2+ levels compared to their progenitor. These optimized Ca2+ sensors have the potential for non-invasive imaging of Ca2+ activity in vivo, particularly in the brain. Full article
(This article belongs to the Proceedings of The 3rd International Electronic Conference on Biosensors)
Show Figures
Figure 1: (a) Schematic representation of CaMBI. (b) Relative total luminescence of CaMBI and Antares3 CaMBI in the presence and absence of Ca2+. (c) Luminescence spectra of CaMBI and Antares3 CaMBI. (Error bars represent s.d.)
Figure 2: Directed evolution of CaMBI.
Figure 3: (a) Relative total luminescence of new CaMBI variants and Antares3 CaMBI in the presence and absence of Ca2+. (b) Luminescence spectra of CaMBI variants. (c) Luminescence of CaMBI variants at 0, 146 nM, or 39 μM Ca2+. (Error bars represent s.d.)
Figure 4: (a) Representative fluorescence (upper) and luminescence (lower) microscopic images of L3-P2C9. (b) Representative time-lapsed luminescence signals of individual cells upon histamine stimulation (indicated by the arrow); each trace represents the luminescence signal of a single cell. (c) Responses of CaMBI variants to histamine stimulation. (Error bars represent s.e.m.)
Figure 5: (a) Representative luminescence microscopic image of L3-P2C9. (b) Representative time-lapsed luminescence signals of individual neurons upon KCl-induced depolarization (indicated by the arrow). (c) Responses of CaMBI variants to Ca2+ elevation upon KCl-induced depolarization. (Error bars represent s.e.m.)
18 pages, 3408 KiB  
Article
A Non-Invasive Optical Multimodal Photoplethysmography-Near Infrared Spectroscopy Sensor for Measuring Intracranial Pressure and Cerebral Oxygenation in Traumatic Brain Injury
by Maria Roldan and Panicos A. Kyriacou
Appl. Sci. 2023, 13(8), 5211; https://doi.org/10.3390/app13085211 - 21 Apr 2023
Cited by 5 | Viewed by 2199
Abstract
(1) Background: Traumatic brain injuries (TBI) result in high fatality and lifelong disability rates. Two of the primary biomarkers in assessing TBI are intracranial pressure (ICP) and brain oxygenation. Both are assessed using standalone techniques, out of which ICP can only be assessed [...] Read more.
(1) Background: Traumatic brain injuries (TBI) result in high fatality and lifelong disability rates. Two of the primary biomarkers in assessing TBI are intracranial pressure (ICP) and brain oxygenation. Both are assessed using standalone techniques, out of which ICP can only be assessed utilizing invasive techniques. The motivation of this research is the development of a non-invasive optical multimodal monitoring technology for ICP and brain oxygenation which will enable the effective management of TBI patients. (2) Methods: a multiwavelength optical sensor was designed and manufactured to assess both parameters based on the pulsatile and non-pulsatile signals detected from cerebral backscattered light. The probe consists of four LEDs and three photodetectors that measure photoplethysmography (PPG) and near-infrared spectroscopy (NIRS) signals from cerebral tissue. (3) Results: The instrumentation system designed to acquire these optical signals is described in detail, along with a rigorous technical evaluation of both the sensor and instrumentation. Bench testing confirmed the correct performance of the electronic circuits, while a signal quality assessment showed good indices across all wavelengths, with the signals from the distal photodetector being of the highest quality. The system performed well within specifications, recorded good-quality pulsations from a head phantom, and provided non-pulsatile signals as expected. (4) Conclusions: This development paves the way for a multimodal non-invasive tool for the effective assessment of TBI patients. Full article
Figures:
Figure 1. PCB design of the multimodal probe showing both proximal and distal sub-probes, with their respective optical components.
Figure 2. Detailed block diagram showing the ZenTBI architecture, covering the power supply module, core module, current source module, transimpedance amplifier module, and bus board.
Figure 3. Current source schematic design.
Figure 4. LED multiplexing and photodetector signal demultiplexing. Each colored box represents a wavelength.
Figure 5. The head phantom consists of a brain, skull, cerebrospinal fluid, and blood circulation. The sensor on the forehead shines light that reaches the pulsatile vessels of the brain and receives light that correlates with oxygenation and ICP.
Figure 6. (A) Probe prototype with its four LEDs, three photodiodes and 15D connector. (B) Open view of the ZenTBI instrumentation system.
Figure 7. ZenTBI debug tests on the oscilloscope. (A) Outputs of the microcontroller, CMUX0 in blue and CMUX1 in green; in accordance with the multiplexer's truth table, CMUX0 and CMUX1 generate CNTL0, shown in yellow. (B) Output of the Howland pump circuits that control the LED currents; two LEDs are controlled by the yellow pulses and the other two by the green pulses. (C) Impulse response of a photodetector. (D) Demux controls generated by the microcontroller: CMUX0 (green), CMUX1 (purple) and INHIBIT (yellow).
Figure 8. Raw and filtered PPG signals from two photodiodes using the head phantom at a normal ICP of 15 mmHg.
Figure 9. DC light intensity from all three photodetectors at 810 nm.
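The time-multiplexed LED drive and photodetector demultiplexing shown in Figure 4 can be sketched in a few lines. The round-robin slot order and one-sample-per-slot assumption are illustrative, not taken from the ZenTBI design:

```python
def demultiplex(adc_samples, n_wavelengths):
    """Sort a round-robin interleaved ADC stream into per-wavelength channels.

    Assumes one sample per LED time slot, with slots cycling 0..n-1
    repeatedly, mirroring a time-multiplexed multiwavelength drive.
    """
    channels = [[] for _ in range(n_wavelengths)]
    for i, sample in enumerate(adc_samples):
        channels[i % n_wavelengths].append(sample)
    return channels

# Four LEDs interleaved: slot order 0, 1, 2, 3, 0, 1, 2, 3, ...
stream = [10, 20, 30, 40, 11, 21, 31, 41]
chans = demultiplex(stream, 4)  # -> [[10, 11], [20, 21], [30, 31], [40, 41]]
```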
12 pages, 5973 KiB  
Article
Control of a Production Manipulator with the Use of BCI in Conjunction with an Industrial PLC
by Dmitrii Borkin, Andrea Nemethova, Martin Nemeth and Pavol Tanuska
Sensors 2023, 23(7), 3546; https://doi.org/10.3390/s23073546 - 28 Mar 2023
Cited by 2 | Viewed by 1687
Abstract
Research on gathering and analyzing biological signals is growing. Sensors for such signals are becoming more widely available and less invasive, largely because they can now be built into wearable and portable devices; in the past, acquiring such data was inconvenient. The representation and analysis of EEG (electroencephalogram) signals is now common in various application areas. Applying EEG signals to automation, however, is still largely unexplored and therefore offers opportunities for interesting research. In our research, we focused on process automation, especially the use of EEG signals to bridge communication between a human and the control of individual processes. In this study, real-time communication between a PLC (programmable logic controller) and a BCI (brain-computer interface) was investigated and described. In the future, this approach could help people with physical disabilities control certain machines or devices, and it could therefore find applicability in overcoming physical disabilities. The main contribution of this article is the demonstration that a person can interact with a PLC-controlled manipulator with the help of a BCI. Potentially, with expanded functionality, such solutions will allow people with physical disabilities to participate in the production process.
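As a hedged illustration of how EEG activity might be bridged to discrete PLC commands (this is not the classifier or protocol used in the study; the band limits, channel roles, and thresholds are all assumptions), one can threshold band power computed from motor-cortex channels:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Approximate power of one EEG channel within a frequency band (Hz),
    integrated from a Welch power spectral density estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return float(np.sum(psd[mask]) * df)

def to_plc_command(mu_left, mu_right, threshold=1.0):
    """Map mu-band (8-12 Hz) power from two hemispheres to a discrete
    actuator command; the threshold and mapping are illustrative only."""
    if mu_left > threshold and mu_left > mu_right:
        return "MOVE_RIGHT"  # contralateral control assumption
    if mu_right > threshold and mu_right > mu_left:
        return "MOVE_LEFT"
    return "IDLE"

# Synthetic channel with a strong 10 Hz (mu-band) rhythm plus noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
ch = 5.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
p_mu = band_power(ch, fs, (8, 12))
```

In a deployment, the resulting command string would be written to a PLC register over an industrial protocol; that transport layer is outside this sketch.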
Figures:
Figure 1. Description of the time course of (a) one experiment; (b) all sessions.
Figure 2. (a) Programmable logic controller (S7-314, Siemens); (b) the actuator, for which two positions are controlled, movement to the right and to the left; (c) EEG headset (OpenBCI).
Figure 3. Scheme of the electrode locations: (a) frontal, parietal; (b) frontal, parietal, occipital, temporal. Yellow marks the nodes available in our version of the BCI headset.
Figure 4. Flow chart of the experiment with all of its parts.
Figure 5. Changes in signals over time for one of the subjects for three states (rest, movement to the left, movement to the right). Orange represents the left hand, blue the right hand, and green the neutral state of mind.
Figure 6. Distribution of signal magnitude for one of the subjects for three states (rest, movement to the left, movement to the right).
Figure 7. Signals from sensor P7 for two participants. (a) Signal changes over time. (b) Distribution of values for each of the signals: the values of the P7_IM signal are grouped in the −14,000 microvolt region, while those of the P7_ID signal lie in the −12,250 microvolt region.
17 pages, 1601 KiB  
Review
The Use of Sensors in Blood-Brain Barrier-on-a-Chip Devices: Current Practice and Future Directions
by András Kincses, Judit P. Vigh, Dániel Petrovszki, Sándor Valkai, Anna E. Kocsis, Fruzsina R. Walter, Hung-Yin Lin, Jeng-Shiung Jan, Mária A. Deli and András Dér
Biosensors 2023, 13(3), 357; https://doi.org/10.3390/bios13030357 - 8 Mar 2023
Cited by 10 | Viewed by 3382
Abstract
The application of lab-on-a-chip technologies to in vitro cell culturing has swiftly resulted in improved models of human organs compared to static culture-insert-based ones. These chip devices provide controlled cell culture environments that mimic physiological functions and properties. Models of the blood-brain barrier (BBB) have especially profited from this advanced technological approach. The BBB represents the tightest endothelial barrier within the vasculature, with high electrical resistance and low passive permeability, providing a controlled interface between the circulation and the brain. Multi-cell-type dynamic BBB-on-chip models are in demand in several fields as alternatives to expensive animal studies or static culture-insert methods. Their combination with integrated biosensors provides real-time, noninvasive monitoring of the integrity of the BBB and of the presence and concentration of agents contributing to physiological and metabolic functions and pathologies. In this review, we describe built-in sensors for characterizing BBB models via quasi-direct current and electrical impedance measurements, as well as the different types of biosensors for detecting metabolites, drugs, or toxic agents. We also give an outlook on the future of the field, including potential combinations of existing methods and possible improvements of current techniques.
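TEER, the quasi-DC barrier-integrity measurement discussed in the review, is conventionally normalized by subtracting the cell-free (blank) device resistance and multiplying by the membrane culture area, giving a value in ohm-cm². A minimal sketch with illustrative numbers (not measurements from any cited device):

```python
def teer_ohm_cm2(r_total_ohm, r_blank_ohm, membrane_area_cm2):
    """Transendothelial electrical resistance normalized to membrane area.

    Standard normalization: subtract the blank (cell-free) resistance of
    the device/membrane, then multiply by the culture area in cm^2.
    """
    if r_total_ohm < r_blank_ohm:
        raise ValueError("measured resistance below blank; check electrodes")
    return (r_total_ohm - r_blank_ohm) * membrane_area_cm2

# Example: 1500 ohm measured, 300 ohm blank, 0.1 cm^2 culture area.
teer = teer_ohm_cm2(1500.0, 300.0, 0.1)  # -> 120.0 ohm*cm^2
```

Because the area term dominates, reporting raw ohms from chips with very small membranes would otherwise exaggerate barrier tightness relative to larger culture inserts.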
(This article belongs to the Special Issue Lab-on-a-Chip Devices and Biosensors to Model Biological Barriers)
Figures:
Figure 1. Blood-brain barrier-on-a-chip models. (a) The cellular composition of the blood-brain barrier (BBB). Endothelial cells (EC), which are the functional basis of the BBB, are surrounded by pericytes (PC) and the astrocytes' endfeet (AC). (b) Schematic representation of a BBB-on-a-chip design with two compartments separated by a porous membrane and the co-culture of three cell types. 'Blood' represents the compartment with fluid flow in contact with the luminal plasma membrane of ECs. 'Brain' indicates the abluminal compartment in which the PCs and ACs are cultured. (c) Different designs of BBB-on-a-chip models. Created with BioRender.com.
Figure 2. Schematic illustration of BBB-on-a-chip devices with widely used or promising integrated and/or modular (bio)sensors. For electric signal measurements, chip-integrated sensors are used to measure transendothelial electrical resistance (TEER) and electrical impedance spectroscopy, while electrochemical biosensors can be designed as modular sensing techniques. Regarding optical sensing and monitoring, microscopic observation provides a direct and practical chip-integrated approach. Evanescent-field sensing methods, such as surface plasmon resonance or integrated optical (IO) interferometry, e.g., the Mach-Zehnder interferometer (MZI), can be used as modules attached to chips to detect bioparticles of interest, e.g., proteins and pathogens. Created with BioRender.com.
Figure 3. Current and potential sensing technologies that can be integrated with blood-brain barrier-on-a-chip devices. Abbreviations: TEER, transendothelial electrical resistance; EIS, electrical impedance spectroscopy. Created with BioRender.com.
20 pages, 5942 KiB  
Article
A New Generation of OPM for High Dynamic and Large Bandwidth MEG: The 4He OPMs—First Applications in Healthy Volunteers
by Tjerk P. Gutteling, Mathilde Bonnefond, Tommy Clausner, Sébastien Daligault, Rudy Romain, Sergey Mitryukovskiy, William Fourcault, Vincent Josselin, Matthieu Le Prado, Agustin Palacios-Laloy, Etienne Labyt, Julien Jung and Denis Schwartz
Sensors 2023, 23(5), 2801; https://doi.org/10.3390/s23052801 - 3 Mar 2023
Cited by 14 | Viewed by 4522
Abstract
MagnetoEncephaloGraphy (MEG) provides a measure of the brain's electrical activity at a millisecond time scale, from which the dynamics of brain activity can be derived non-invasively. Conventional MEG systems (SQUID-MEG) require very low temperatures to achieve the necessary sensitivity, which leads to severe experimental and economical limitations. A new generation of MEG sensors is emerging: optically pumped magnetometers (OPMs). In an OPM, an atomic gas enclosed in a glass cell is traversed by a laser beam whose modulation depends on the local magnetic field. MAG4Health is developing OPMs using helium gas (4He-OPMs). These operate at room temperature with a large dynamic range and a large frequency bandwidth, and natively output a 3D vectorial measure of the magnetic field. In this study, five 4He-OPMs were compared to a classical SQUID-MEG system in a group of 18 volunteers to evaluate their experimental performance. Since the 4He-OPMs operate at room temperature and can be placed directly on the head, our assumption was that they would provide a reliable recording of physiological magnetic brain activity. Indeed, the results showed that the 4He-OPMs yielded results very similar to those of the classical SQUID-MEG system, taking advantage of a shorter distance to the brain despite their lower sensitivity.
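Evoked-response comparisons of this kind often use a simple SNR, defined in the article's figure captions as the maximum absolute post-stimulus deflection divided by the standard error of the baseline. A minimal sketch of that computation, with window bounds taken from the captions and purely synthetic data:

```python
import numpy as np

def evoked_snr(epoch, times, baseline=(-0.2, 0.0), post=(0.0, 0.3)):
    """SNR of an averaged evoked response: max |deflection| in the
    post-stimulus window divided by the standard error of the baseline."""
    epoch = np.asarray(epoch, dtype=float)
    times = np.asarray(times, dtype=float)
    base = epoch[(times >= baseline[0]) & (times < baseline[1])]
    resp = epoch[(times >= post[0]) & (times <= post[1])]
    se = np.std(base, ddof=1) / np.sqrt(base.size)  # standard error
    return float(np.max(np.abs(resp)) / se)

# Synthetic averaged epoch: unit-variance noise plus an evoked peak.
fs = 1000.0
times = np.arange(-0.2, 0.3, 1 / fs)
rng = np.random.default_rng(1)
epoch = rng.normal(0.0, 1.0, times.size)
epoch[(times > 0.08) & (times < 0.12)] += 50.0  # synthetic evoked deflection
snr = evoked_snr(epoch, times)
```

Because the metric normalizes by baseline variability, it allows sensors with different absolute sensitivities (SQUID vs. OPM) to be compared on a common footing.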
(This article belongs to the Section Physical Sensors)
Figures:
Figure 1. Experimental setup. (A) SQUID-MEG system used in this study with the subject in a typical seated position. (B) Top left: subject setup with the five 4He-OPMs used in the somatosensory task; one of them, serving as a reference sensor (green label), is placed over the top of the head, and the four others are located on the left side of the subject, with the cables supported by a wooden frame. Top right: same setup on a phantom head without the wooden frame. Bottom right: zoomed view of one of the 4He-OPMs used and zoomed view of the sensors installed in the headset; the sensor has a 2 cm by 2 cm by 5 cm footprint, and the glass cell containing the sensitive helium gas and the associated Helmholtz coils are visible. (C) SQUID-MEG sensor layout, with the sensors closest to the OPM locations in red for the somatosensory task and in blue for the visual task. (D) 4He-OPM sensor layout, in red for the somatosensory task and in blue for the visual task.
Figure 2. Empty room and visual task baseline average PSDs for SQUID-MEG and 4He-OPMs. Mean PSDs obtained after averaging PSDs over sessions and over all the sensors used in this study (SQUID-MEG: MLO31, MLO11, MRO21, MRO11, MLC11, MLC13, MLC33, MLC31; 4He-OPMs: all four sensors except the reference, in the two directions, radial and tangential, used in this study). No notch filters were applied for this figure. Top: empty room full spectrum up to 300 Hz. Middle: empty room spectrum zoomed up to 100 Hz. Bottom: visual task baseline (500 ms) spectrum up to 100 Hz.
Figure 3. Event-related fields for SQUID-MEG (A) and 4He-OPMs in the radial (B) and tangential (C) axes. Gray-filled lines at the bottom of each panel represent the RMS of the combined signal. The gray vertical area denotes the suppressed stimulation artifact. Note that the scales for SQUID-MEG and 4He-OPMs are not the same.
Figure 4. Individual time-courses of the best-SNR sensors following somatosensory stimulation for SQUID-MEG, radial 4He-OPMs and tangential 4He-OPMs. For visualization only, a multiplication factor and polarity alignment are applied to the SQUID-MEG and tangential 4He-OPM sensors with reference to the radial 4He-OPMs. The top three panels depict three representative subjects with varying degrees of correlation between SQUID-MEG and 4He-OPMs; the bottom panel shows the group average (n = 17).
Figure 5. Average signal-to-noise ratio per modality, sensor type and axis. Black horizontal bars denote the group means. Plots span the entire data range.
Figure 6. Group-averaged event-related fields for conventional SQUID-MEG (A) and 4He-OPMs in the radial (B) and tangential (C) directions. Gray-filled lines at the bottom of each panel represent the RMS of the combined signal. Note that the scales for SQUID-MEG and 4He-OPMs are not the same.
Figure 7. Individual time-courses of the best-SNR sensors following visual stimulation for SQUID-MEG, radial 4He-OPM and tangential 4He-OPM sensors. For visualization only, a multiplication factor and polarity alignment are applied to the SQUID-MEG and tangential 4He-OPMs with reference to the radial 4He-OPM sensor. The top three panels depict three representative subjects with varying degrees of correspondence between SQUID-MEG and 4He-OPMs; the bottom panel shows the group average (n = 18).
Figure 8. Average signal-to-noise ratio per modality, sensor type and axis, calculated as the maximum absolute post-stimulus-onset deflection [0 s, 0.3 s] divided by the standard error of the baseline [−0.2 s, 0 s]. Black horizontal bars denote the group means. Plots span the entire data range.
Figure 9. Group-average time-frequency representation of the visual experiment MEG data for the SQUID-MEG and 4He-OPM sensors in the radial and tangential axes (A). Values denote the percent change relative to baseline [−0.4 s, 0 s]; note that the scale differs between SQUID and 4He-OPM sensors. Significant clusters (p < 0.05, two-tailed) are contained within areas marked in black. The onset of the visual stimulus was at t = 0. (B,C) Time-frequency representations in the gamma range for two selected participants, one with a high individual gamma frequency (B) and one with a low-to-average frequency (C), for SQUID-MEG (left) and 4He-OPMs (radial axis, middle). Post-stimulus percent signal change [0.1 s, 0.4 s] is depicted on the right (scaling adjusted for comparison).
Figure A1. Individual averages of the somatosensory stimulation experiment, comparing SQUID-MEG and 4He-OPMs in the radial and tangential directions for the sensors with the best SNR. Pearson product-moment correlations between SQUID-MEG and either radial 4He-OPMs (r_radial) or tangential 4He-OPMs (r_tangential). Amplification factors for SQUID-MEG and the tangential 4He-OPM relative to the radial 4He-OPM are indicated in the individual figure legends.
Figure A2. Individual averages of the visual stimulation experiment, comparing SQUID-MEG and 4He-OPMs in the radial and tangential directions for the sensor with the best SNR. Pearson product-moment correlations between SQUID-MEG and either radial 4He-OPMs (r_radial) or tangential 4He-OPMs (r_tangential). Amplification factors for SQUID-MEG and tangential 4He-OPMs relative to radial 4He-OPMs are indicated in the individual figure legends.
23 pages, 2756 KiB  
Article
Online Learning for Wearable EEG-Based Emotion Classification
by Sidratul Moontaha, Franziska Elisabeth Friederike Schumann and Bert Arnrich
Sensors 2023, 23(5), 2387; https://doi.org/10.3390/s23052387 - 21 Feb 2023
Cited by 8 | Viewed by 4356
Abstract
Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirectly measuring other physiological responses initiated by the brain. We therefore used non-invasive, portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains separate binary classifiers for the Valence and Arousal dimensions from an incoming EEG data stream, achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-Score on the state-of-the-art AMIGOS dataset than previous work. The pipeline was then applied to a dataset curated from 15 participants using two consumer-grade EEG devices while they watched 16 short emotional videos in a controlled environment. Mean F1-Scores of 87% (Arousal) and 82% (Valence) were achieved in an immediate label setting. Additionally, the pipeline proved fast enough to deliver predictions in real time in a live scenario with delayed labels while being continuously updated. The significant drop in classification scores compared to the setting with readily available labels motivates future work that includes more data, after which the pipeline will be ready for real-time emotion classification applications.
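The "test-then-train" (progressive validation) loop at the heart of such an online pipeline can be sketched with any incremental learner; scikit-learn's SGDClassifier stands in here for the adaptive ensemble models used in the paper, and the data stream is synthetic:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def progressive_validation(stream, classes=(0, 1)):
    """Test-then-train over a stream of (feature_vector, label) pairs.

    Each sample is predicted *before* the model sees its label, so the
    running accuracy is an unbiased estimate of online performance.
    """
    model = SGDClassifier(random_state=0)
    correct, seen = 0, 0
    for x, y in stream:
        x = np.asarray(x).reshape(1, -1)
        if seen > 0:  # can only predict once the model has seen data
            correct += int(model.predict(x)[0] == y)
        model.partial_fit(x, [y], classes=np.array(classes))
        seen += 1
    return correct / max(seen - 1, 1)

# Synthetic, linearly separable two-class feature stream.
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
acc = progressive_validation(zip(X, y))
```

In the delayed-label live setting described in the abstract, the `partial_fit` call would simply be deferred until the video's label arrives, while predictions continue in the meantime.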
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
Figures:
Figure 1. Different electrode positions according to the international 10–20 system for the EEG devices used in Dataset I (a) and in Datasets II and III (b,c). Sensor locations are marked in blue, references in orange.
Figure 2. The two consumer-grade EEG devices with integrated electrodes used in the experiments.
Figure 3. Screenshots from the PsychoPy [48] setup of self-assessment questions. (a) Partial PANAS questionnaire with five different levels represented by clickable radio buttons (in red) with the levels' explanation on top; (b) AS for valence displayed on top and the slider for arousal on the bottom.
Figure 4. Experimental setup for curating Dataset II. The participants watched a relaxation video at the beginning and eight videos, two of each dimension category, wearing one of the two devices. Between the eight videos, they answered the AS slider and their familiarity with the video.
Figure 5. Experimental setup for curating Dataset III. In the first session, the participants watched a relaxation video at the beginning and eight videos, two of each dimension category, wearing one of the two devices. Between the eight videos, they answered the AS slider and their familiarity with the video, and saw the actual AS label. In the second session, they watched the same set of videos while the prediction was available to the experimenter before the delayed label arrived.
Figure 6. Overview of pipeline steps for affect classification. The top gray rectangle shows the pipeline steps employed in an immediate label setting with prerecorded data: for each extracted feature vector, the model (1) first classifies its label before (2) being updated with the true label for that sample. In the live setting, the model is not updated after every prediction, as the true label of a video only becomes available after the stimulus has ended; the timestamp of the video is matched to the samples' timestamps to find all samples that fall into the corresponding time frame and update the model with their true labels (shown in dotted lines).
Figure 7. The incoming data stream is processed in tumbling windows (gray rectangles). One window includes all samples x_i, x_(i+1), … arriving during a specified time period, e.g., 1 s. The pipeline extracts one feature vector, F_i, per window. Windows during a stimulus (video) are marked in dark gray. Participants rated each video with one label per affect dimension, Y_j. All feature vectors extracted from windows that fall into the time frame of a video (between t_start and t_end of that video) receive a label y_i corresponding to the reported label, Y_j, of that video. If possible, the windows are aligned with the end of the stimulus; otherwise, all windows that lie completely inside a video's time range are considered.
Figure 8. (a) Progressive validation incorporated into the basic flow of the training process ('test-then-train') of an online classifier in an immediate label setting; (x_i, y_i) represents an input feature vector and its corresponding label. (b) Evaluation incorporated into the basic flow of the training process of an online classifier when labels arrive delayed (i ≥ j).
Figure 9. F1-Score for Valence and Arousal classification achieved by ARF and SRP per subject from Dataset I.
Figure 10. Mean F1-Score achieved by ARF, SRP, and LR over the whole dataset for both affect dimensions with respect to window length.
Figure 11. Confusion matrices for the live affect classification (Dataset III, part 2). Employed model: ARF (four trees), window length = 1 s. Recall was calculated only for the low class for both models.
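The tumbling-window labeling described in the Figure 7 caption (one feature vector per fixed-length window, labeled by the video active at that time) can be sketched as follows; the per-window features (mean and standard deviation per channel) and all numbers are illustrative assumptions, not the paper's feature set:

```python
import numpy as np

def tumbling_windows(timestamps, samples, win_s=1.0):
    """Group a sample stream into non-overlapping (tumbling) windows and
    extract one feature vector (per-channel mean and std) per window."""
    t0 = timestamps[0]
    idx = ((timestamps - t0) // win_s).astype(int)
    features, starts = [], []
    for w in np.unique(idx):
        chunk = samples[idx == w]
        features.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
        starts.append(t0 + w * win_s)
    return np.array(starts), np.vstack(features)

def label_windows(starts, win_s, video_start, video_end, label):
    """Assign the video's label to windows lying fully inside its time range."""
    inside = (starts >= video_start) & (starts + win_s <= video_end)
    return np.where(inside, label, -1)  # -1 = unlabeled (outside stimulus)

# 10 s of a synthetic 4-channel stream at 128 Hz, 1 s tumbling windows.
fs, win_s = 128, 1.0
t = np.arange(0, 10, 1 / fs)
x = np.random.default_rng(0).normal(size=(t.size, 4))
starts, feats = tumbling_windows(t, x, win_s)
labels = label_windows(starts, win_s, video_start=2.0, video_end=6.0, label=1)
```

Each labeled feature vector would then be fed to the online classifier in the test-then-train loop, while the unlabeled (−1) windows are skipped during updates.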