
Sensors, Image, and Signal Processing for Biomedical Applications

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Biomedical Sensors".

Viewed by 64630

Editor


Dr. Christoph Hintermüller
Guest Editor
Institute for Biomedical Mechatronics, Johannes Kepler University, 4020 Linz, Austria
Interests: biosignal processing; cardiac electrophysiology; 3D imaging

Topical Collection Information

Recording information from the human body by measuring signals and taking images is important throughout the entire clinical process, covering anamnesis, diagnosis, therapy, and treatment. In addition to the proper recording, preprocessing, and pre-analysis of signals and information from the patient, the fusion of quantitative data and qualitative information also plays an important role. The field of biomedical imaging and signal processing has been, and still is, open to new developments in other disciplines and fields such as physics and chemistry, independent of how remote these may first appear, as highlighted by the example of the Kinect and other devices originally developed for gaming rather than imaging.

This collection focuses on recent developments in the fields of biomedical, medical, and clinical image and signal processing. These include new sensing methods, approaches to analyzing the recorded images and signals, data fusion methods, and algorithms that provide new and additional insights, as well as how these developments help to improve clinical processes and free clinicians and doctors to spend more time in direct contact with their patients rather than interpreting the recorded data and signals.

Dr. Christoph Hintermüller
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensing principles
  • sensors
  • image and signal processing
  • clinical applications
  • data fusion
  • image and signal analysis
  • image and signal classification

Published Papers (18 papers)

2024

Jump to: 2023, 2022, 2021

32 pages, 8409 KiB  
Article
Evaluation of Diffuse Reflectance Spectroscopy Vegetal Phantoms for Human Pigmented Skin Lesions
by Sonia Buendia-Aviles, Margarita Cunill-Rodríguez, José A. Delgado-Atencio, Enrique González-Gutiérrez, José L. Arce-Diego and Félix Fanjul-Vélez
Sensors 2024, 24(21), 7010; https://doi.org/10.3390/s24217010 - 31 Oct 2024
Viewed by 1037
Abstract
Pigmented skin lesions have increased considerably worldwide in recent years, with melanoma responsible for 75% of deaths and showing low survival rates. The development and refinement of more efficient non-invasive optical techniques such as diffuse reflectance spectroscopy (DRS) is crucial for the diagnosis of melanoma skin cancer. The development of novel diagnostic approaches requires a sufficient number of test samples. Hence, the similarities between banana brown spots (BBSs) and human skin pigmented lesions (HSPLs) could be exploited by employing the former as an optical phantom for validating these techniques. This work analyses the potential similarity of BBSs to HSPLs of volunteers with different skin phototypes by means of several characteristics, such as symmetry, color RGB tonality, and principal component analysis (PCA) of spectra. The findings demonstrate a notable resemblance between the attributes concerning spectrum, area, and color of HSPLs and BBSs at specific ripening stages. Furthermore, the spectral similarity increases when a fiber-optic probe with a shorter source–detector separation (240 µm) is used, in comparison to a probe with a greater separation (2500 µm). A Monte Carlo simulation of the sampling volume was used to clarify the spectral similarities. Full article
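As an illustration of the kind of spectral comparison described in the abstract, the minimal Python sketch below min-max normalizes two reflectance spectra, computes their mean squared error, and projects a set of spectra onto principal components. The wavelength grid and the Gaussian-shaped spectra are synthetic stand-ins, not data from the paper.

    import numpy as np

    def normalize(spectrum):
        """Min-max normalize a reflectance spectrum to [0, 1]."""
        s = np.asarray(spectrum, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def spectral_mse(spec_a, spec_b):
        """Mean squared error between two normalized spectra on the same wavelength grid."""
        return float(np.mean((normalize(spec_a) - normalize(spec_b)) ** 2))

    def pca_scores(spectra, n_components=3):
        """Project a (samples x wavelengths) matrix of spectra onto its first principal components."""
        X = np.asarray(spectra, dtype=float)
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ vt[:n_components].T

    # Synthetic example: Gaussian-shaped "spectra" on a 400-750 nm grid.
    wl = np.linspace(400, 750, 176)
    nevus = np.exp(-((wl - 560) / 80) ** 2)
    spot = np.exp(-((wl - 570) / 85) ** 2)
    print("MSE:", spectral_mse(nevus, spot))

    spectra = np.stack([np.exp(-((wl - c) / w) ** 2) for c, w in [(560, 80), (570, 85), (555, 78), (590, 90)]])
    print("PCA scores (first 2 PCs):\n", pca_scores(spectra, 2))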
19 pages, 4459 KiB  
Review
Development of a Sexological Ontology
by Dariusz S. Radomski, Zuzanna Oscik, Ewa Dmoch-Gajzlerska and Anna Szczotka
Sensors 2024, 24(21), 6968; https://doi.org/10.3390/s24216968 - 30 Oct 2024
Viewed by 1571
Abstract
This study aimed to show what role biomedical engineering can play in sexual health. A new concept of a sexological ontology, an essential tool for building evidence-based models of sexual health, is proposed. This ontology should be based on properly validated mathematical models of sexual reactions identified using reliable measurements of physiological signals. This paper presents a review of the recommended measurement methods. Moreover, a general human sexual reaction model based on dynamic systems built at different levels of time × space × detail is presented, and the modeling approaches currently in use are reviewed with reference to the introduced model. Lastly, examples of devices and computer programs designed for sexual therapy are described, indicating the need for legal regulation of their manufacturing, similar to that for other medical devices. Full article
11 pages, 1876 KiB  
Article
Blood Biomarker Detection Using Integrated Microfluidics with Optical Label-Free Biosensor
by Chiung-Hsi Li, Chen-Yuan Chang, Yan-Ru Chen and Cheng-Sheng Huang
Sensors 2024, 24(20), 6756; https://doi.org/10.3390/s24206756 - 21 Oct 2024
Viewed by 1382
Abstract
In this study, we developed an optofluidic chip consisting of a guided-mode resonance (GMR) sensor incorporated into a microfluidic chip to achieve simultaneous blood plasma separation and label-free albumin detection. A sedimentation chamber is integrated into the microfluidic chip to achieve plasma separation through differences in density. After a blood sample is loaded into the optofluidic chip in two stages with controlled flow rates, the blood cells are kept in the sedimentation chamber, enabling only the plasma to reach the GMR sensor for albumin detection. This GMR sensor, fabricated using plastic replica molding, achieved a bulk sensitivity of 175.66 nm/RIU. With surface-bound antibodies, the GMR sensor exhibited a limit of detection of 0.16 μg/mL for recombinant albumin in buffer solution. Overall, our findings demonstrate the potential of our integrated chip for use in clinical samples for biomarker detection in point-of-care applications. Full article
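The bulk sensitivity and limit of detection quoted above follow standard conventions for resonant optical sensors; the sketch below shows how such figures are commonly derived. All calibration numbers (refractive indices, resonant wavelengths, baseline noise, dose-response slope) are hypothetical values chosen only so the slope lands near the reported 175.66 nm/RIU, and the 3σ/slope rule for the LOD is a common convention that may differ from the authors' exact procedure.

    import numpy as np

    # Hypothetical calibration data: resonant wavelength (nm) at known refractive indices (RIU).
    ri = np.array([1.333, 1.343, 1.353, 1.363])            # e.g. sucrose solutions of increasing concentration
    resonant_wl = np.array([850.0, 851.76, 853.51, 855.27])

    # Bulk sensitivity is the slope of the resonant wavelength vs. refractive index line (nm/RIU).
    sensitivity, intercept = np.polyfit(ri, resonant_wl, 1)
    print(f"bulk sensitivity ≈ {sensitivity:.2f} nm/RIU")

    # A common estimate of the limit of detection: 3x the baseline wavelength noise
    # divided by the slope of the dose-response curve (nm per unit concentration).
    baseline_noise_nm = 0.005       # assumed std. dev. of repeated blank measurements
    dose_response_slope = 0.09      # assumed nm per (ug/mL) in the low-concentration linear region
    lod = 3 * baseline_noise_nm / dose_response_slope
    print(f"LOD ≈ {lod:.2f} ug/mL")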
21 pages, 991 KiB  
Article
A Novel Online Position Estimation Method and Movement Sonification System: The Soniccup
by Thomas H. Nown, Madeleine A. Grealy, Ivan Andonovic, Andrew Kerr and Christos Tachtatzis
Sensors 2024, 24(19), 6279; https://doi.org/10.3390/s24196279 - 28 Sep 2024
Viewed by 3573
Abstract
Existing methods to obtain position from inertial sensors typically use a combination of multiple sensors and orientation modeling; thus, obtaining position from a single inertial sensor is highly desirable given the decreased setup time and reduced complexity. The dead reckoning method is commonly chosen to obtain position from acceleration; however, when applied to upper limb tracking, the accuracy of the position estimates is questionable, which limits feasibility. A new method of obtaining position estimates through the use of zero velocity updates is reported, using a commercial IMU, a push-to-make momentary switch, and a 3D printed object to house the sensors. The generated position estimates can subsequently be converted into sound through sonification to provide audio feedback on reaching movements for rehabilitation applications. An evaluation of the performance of the position estimates generated by the system, labeled the 'Soniccup', is presented through a comparison with the outputs of a Vicon Nexus system. The results indicate that for reaching movements below one second in duration, the Soniccup produces position estimates with high similarity to the same movements captured through the Vicon system, corresponding to comparable audio output from the two systems. However, future work to improve the performance for longer-duration movements and to reduce the system latency to produce real-time audio feedback is required to improve the acceptability of the system. Full article
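A minimal sketch of dead reckoning with zero velocity updates (ZUPT) is given below: acceleration is integrated to velocity and position, and the velocity is reset to zero whenever a stationary flag (standing in for the Soniccup's push-to-make switch) is set. The paper's actual pipeline uses Kalman filters and further error mitigation; this only illustrates the underlying idea, on synthetic data.

    import numpy as np

    def dead_reckon_with_zupt(accel, stationary, dt):
        """Integrate acceleration to velocity and position, resetting velocity to zero
        whenever the stationary flag (e.g. a push-to-make switch being pressed) is True.

        accel      : (N,) acceleration along one axis, gravity already removed (m/s^2)
        stationary : (N,) boolean array, True while the object is known to be at rest
        dt         : sample period in seconds
        """
        velocity = np.zeros_like(accel)
        position = np.zeros_like(accel)
        for k in range(1, len(accel)):
            velocity[k] = 0.0 if stationary[k] else velocity[k - 1] + accel[k] * dt
            position[k] = position[k - 1] + velocity[k] * dt
        return velocity, position

    # Synthetic reach: accelerate, decelerate, then rest; residual bias is clamped while at rest.
    dt = 0.01
    accel = np.concatenate([np.full(50, 1.0), np.full(50, -1.0), np.full(100, 0.02)])
    stationary = np.concatenate([np.zeros(100, dtype=bool), np.ones(100, dtype=bool)])
    v, p = dead_reckon_with_zupt(accel, stationary, dt)
    print(f"final velocity {v[-1]:.3f} m/s, final position {p[-1]:.3f} m")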

2023

Jump to: 2024, 2022, 2021

28 pages, 13276 KiB  
Article
ECG Electrode Localization: 3D DS Camera System for Use in Diverse Clinical Environments
by Jennifer Bayer, Christoph Hintermüller, Hermann Blessberger and Clemens Steinwender
Sensors 2023, 23(12), 5552; https://doi.org/10.3390/s23125552 - 13 Jun 2023
Viewed by 1998
Abstract
Models of the human body representing digital twins of patients have attracted increasing interest in clinical research for the delivery of personalized diagnoses and treatments to patients. For example, noninvasive cardiac imaging models are used to localize the origin of cardiac arrhythmias and myocardial infarctions. The precise knowledge of a few hundred electrocardiogram (ECG) electrode positions is essential for their diagnostic value. Smaller positional errors are obtained when extracting the sensor positions, along with the anatomical information, for example, from X-ray computed tomography (CT) slices. Alternatively, the amount of ionizing radiation the patient is exposed to can be reduced by manually pointing a magnetic digitizer probe one by one to each sensor, but an experienced user requires at least 15 min to perform a precise measurement. Therefore, a 3D depth-sensing camera system was developed that can be operated under the adverse lighting conditions and limited space encountered in clinical settings. The camera was used to record the positions of 67 electrodes attached to a patient's chest. These deviate, on average, by 2.0 mm ± 1.5 mm from manually placed markers on the individual 3D views. This demonstrates that the system provides reasonable positional precision even when operated within clinical environments. Full article
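The reported positional error is a per-electrode statistic; assuming the camera-derived positions and the manually placed markers are already expressed in a common coordinate frame, it can be computed as sketched below (toy data, not the study's measurements).

    import numpy as np

    def electrode_deviation(estimated, reference):
        """Per-electrode Euclidean distance (mm) between camera-derived positions and
        manually placed reference markers, both given as (N, 3) arrays in the same frame."""
        d = np.linalg.norm(np.asarray(estimated) - np.asarray(reference), axis=1)
        return d.mean(), d.std()

    # Toy example with 67 electrodes and a couple of millimeters of random error.
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 300, size=(67, 3))              # mm
    estimated = reference + rng.normal(0, 1.2, size=(67, 3))   # simulated localization error
    mean_err, std_err = electrode_deviation(estimated, reference)
    print(f"deviation: {mean_err:.1f} mm ± {std_err:.1f} mm")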
23 pages, 9605 KiB  
Article
Specular Reflections Detection and Removal for Endoscopic Images Based on Brightness Classification
by Chao Nie, Chao Xu, Zhengping Li, Lingling Chu and Yunxue Hu
Sensors 2023, 23(2), 974; https://doi.org/10.3390/s23020974 - 14 Jan 2023
Cited by 13 | Viewed by 4651
Abstract
Specular reflections often exist in endoscopic images, which not only hurts many computer vision algorithms but also seriously interferes with the observation and judgment of the surgeon. Recovering the information behind specular reflection areas is therefore a necessary pre-processing step in medical image analysis and its applications. Existing highlight detection methods are usually only suitable for medium-brightness images. Existing highlight removal methods are only applicable to images without large specular regions; when dealing with high-resolution medical images with complex texture information, they not only have a poor recovery effect, but their operating efficiency is also low. To overcome these limitations, this paper proposes a specular reflection detection and removal method for endoscopic images based on brightness classification. It can effectively detect the specular regions in endoscopic images of different brightness and can improve the operating efficiency of the algorithm while restoring the texture structure information of high-resolution images. In addition to image brightness classification and enhancement of the brightness component of low-brightness images, this method includes two new steps: In the highlight detection phase, an adaptive threshold function that changes with the brightness of the image is used to detect absolute highlights. During the highlight recovery phase, the priority function of the exemplar-based image inpainting algorithm is modified to ensure reasonable and correct repairs, while local priority computation and an adaptive local search strategy are used to improve algorithm efficiency and reduce erroneous matching. The experimental results show that, compared with other state-of-the-art methods, our method shows better performance in terms of qualitative and quantitative evaluations, and algorithm efficiency is greatly improved when processing high-resolution endoscopy images. Full article
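To illustrate the idea of an absolute-highlight threshold that adapts to image brightness, the sketch below raises a gray-level threshold with the frame's mean brightness before masking near-saturated pixels. The base and gain parameters are assumptions for illustration; the paper's brightness classification and adaptive threshold function are more involved.

    import numpy as np

    def detect_highlights(rgb, base=200.0, gain=0.25):
        """Flag 'absolute' specular highlights with a threshold that rises with overall
        image brightness, so bright endoscopic frames are not grossly over-segmented.

        rgb  : (H, W, 3) uint8 image
        base : minimum gray-level threshold (assumed value)
        gain : how strongly the threshold follows the mean brightness (assumed value)
        """
        gray = rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
        threshold = np.clip(base + gain * gray.mean(), 0, 254)
        return gray > threshold                     # boolean specular mask

    # Toy frame: dark background with a small saturated patch standing in for a highlight.
    frame = np.full((64, 64, 3), 60, dtype=np.uint8)
    frame[10:14, 20:24] = 255
    mask = detect_highlights(frame)
    print("highlight pixels:", int(mask.sum()))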

2022

Jump to: 2024, 2023, 2021

20 pages, 9702 KiB  
Article
Comparing the Clinical Viability of Automated Fundus Image Segmentation Methods
by Gorana Gojić, Veljko B. Petrović, Dinu Dragan, Dušan B. Gajić, Dragiša Mišković, Vladislav Džinić, Zorka Grgić, Jelica Pantelić and Ana Oros
Sensors 2022, 22(23), 9101; https://doi.org/10.3390/s22239101 - 23 Nov 2022
Viewed by 1661
Abstract
Recent methods for automatic blood vessel segmentation from fundus images have been commonly implemented as convolutional neural networks. While these networks report high values for objective metrics, the clinical viability of the recovered segmentation masks remains unexplored. In this paper, we perform a pilot study to assess the clinical viability of automatically generated segmentation masks in the diagnosis of diseases affecting retinal vascularization. Five ophthalmologists with clinical experience were asked to participate in the study. The results demonstrate low classification accuracy, inferring that generated segmentation masks cannot be used as a standalone resource in general clinical practice. The results also hint at possible clinical infeasibility in the experimental design. In a follow-up experiment, we evaluate the clinical quality of the masks by having the ophthalmologists rank the generation methods. The ranking is established with high intra-observer consistency, indicating better subjective performance for a subset of the tested networks. The study also demonstrates that, for the methods involved, objective metrics are not correlated with subjective metrics in retinal segmentation tasks, suggesting that the objective metrics commonly used in scientific papers to measure a method's performance are not plausible criteria for choosing clinically robust solutions. Full article
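The reported lack of correlation between objective and subjective metrics can be quantified with a rank correlation, for example Spearman's rho between a benchmark metric and a clinician-derived ranking, as in the sketch below. The per-network numbers are invented for illustration and are not the study's data.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-network scores: an objective metric (e.g. F1 on a benchmark)
    # and a subjective rank assigned by clinicians (1 = best).
    f1_scores       = np.array([0.82, 0.81, 0.83, 0.80, 0.79, 0.82, 0.78, 0.81])
    clinician_ranks = np.array([3,    7,    5,    1,    6,    2,    8,    4   ])

    rho, p_value = spearmanr(f1_scores, clinician_ranks)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")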
25 pages, 27984 KiB  
Article
Multiple Preprocessing Hybrid Level Set Model for Optic Disc Segmentation in Fundus Images
by Xiaozhong Xue, Linni Wang, Weiwei Du, Yusuke Fujiwara and Yahui Peng
Sensors 2022, 22(18), 6899; https://doi.org/10.3390/s22186899 - 13 Sep 2022
Cited by 5 | Viewed by 2155
Abstract
The accurate segmentation of the optic disc (OD) in fundus images is a crucial step for the analysis of many retinal diseases. However, because of problems such as vascular occlusion, parapapillary atrophy (PPA), and low contrast, accurate OD segmentation is still a challenging task. Therefore, this paper proposes a multiple preprocessing hybrid level set model (HLSM) based on area and shape for OD segmentation. The area-based term represents the difference of average pixel values between the inside and outside of a contour, while the shape-based term measures the distance between a prior shape model and the contour. The average intersection over union (IoU) of the proposed method was 0.9275, and the average four-side evaluation (FSE) was 4.6426 on a public dataset with narrow-angle fundus images. The IoU was 0.8179 and the average FSE was 3.5946 on a wide-angle fundus image dataset compiled from a hospital. The results indicate that the proposed multiple preprocessing HLSM is effective in OD segmentation. Full article
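The IoU figures quoted above follow the standard definition for binary masks; a minimal sketch is shown below with two synthetic disc-shaped masks standing in for a predicted and a ground-truth optic disc region. The paper's FSE metric and the hybrid level set model itself are not reproduced here.

    import numpy as np

    def iou(mask_a, mask_b):
        """Intersection over union of two binary segmentation masks of the same shape."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    # Toy example: two overlapping discs standing in for predicted and ground-truth OD regions.
    yy, xx = np.mgrid[:100, :100]
    pred = (xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2
    gt   = (xx - 54) ** 2 + (yy - 50) ** 2 < 20 ** 2
    print(f"IoU = {iou(pred, gt):.4f}")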
19 pages, 6265 KiB  
Article
EIEN: Endoscopic Image Enhancement Network Based on Retinex Theory
by Ziheng An, Chao Xu, Kai Qian, Jubao Han, Wei Tan, Dou Wang and Qianqian Fang
Sensors 2022, 22(14), 5464; https://doi.org/10.3390/s22145464 - 21 Jul 2022
Cited by 9 | Viewed by 3017
Abstract
In recent years, deep convolutional neural network (CNN)-based image enhancement has shown outstanding performance. However, due to the problems of uneven illumination and low contrast existing in endoscopic images, the implementation of medical endoscopic image enhancement using CNN is still an exploratory and challenging task. An endoscopic image enhancement network (EIEN) based on the Retinex theory is proposed in this paper to solve these problems. The structure consists of three parts: decomposition network, illumination correction network, and reflection component enhancement algorithm. First, the decomposition network model of pre-trained Retinex-Net is retrained on the endoscopic image dataset, and then the images are decomposed into illumination and reflection components by this decomposition network. Second, the illumination components are corrected by the proposed self-attention guided multi-scale pyramid structure. The pyramid structure is used to capture the multi-scale information of the image. The self-attention mechanism is based on the imaging nature of the endoscopic image, and the inverse image of the illumination component is fused with the features of the green and blue channels of the image to be enhanced to generate a weight map that reassigns weights to the spatial dimension of the feature map, to avoid the loss of details in the process of multi-scale feature fusion and image reconstruction by the network. The reflection component enhancement is achieved by sub-channel stretching and weighted fusion, which is used to enhance the vascular information and image contrast. Finally, the enhanced illumination and reflection components are multiplied to obtain the reconstructed image. We compare the results of the proposed method with six other methods on a test set. The experimental results show that EIEN enhances the brightness and contrast of endoscopic images and highlights vascular and tissue information. At the same time, the method in this paper obtained the best results in terms of visual perception and objective evaluation. Full article
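The final reconstruction step described above follows the Retinex model, in which an image is the product of an illumination map and a reflectance map. The sketch below recombines the two components after a simple gamma correction of the illumination; the gamma stand-in and the synthetic components are assumptions for illustration, whereas EIEN corrects illumination with a learned self-attention pyramid network and enhances reflectance by sub-channel stretching.

    import numpy as np

    def retinex_reconstruct(illumination, reflectance, gamma=0.6):
        """Recombine an enhanced image from its Retinex components.

        illumination : (H, W) array in [0, 1], the estimated illumination map
        reflectance  : (H, W, 3) array in [0, 1], the estimated reflectance (detail/color) map
        gamma        : exponent used here as a stand-in illumination correction (assumed value)
        """
        corrected_illum = np.power(illumination, gamma)        # brighten dark regions, keep bright ones
        enhanced = reflectance * corrected_illum[..., None]    # Retinex model: image = reflectance * illumination
        return np.clip(enhanced, 0.0, 1.0)

    # Toy example: a dim, unevenly lit frame.
    rng = np.random.default_rng(1)
    illum = np.linspace(0.1, 0.6, 128)[None, :].repeat(128, axis=0)   # left-to-right illumination gradient
    refl = rng.uniform(0.3, 0.9, size=(128, 128, 3))
    out = retinex_reconstruct(illum, refl)
    print("mean brightness before:", float((refl * illum[..., None]).mean()), "after:", float(out.mean()))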
Show Figures

Figure 1
An example of image enhancement by different algorithms. The first row is the result of image enhancement, and the second row is the illumination component of the enhanced image, where the illumination component is derived from our decomposition network. The figure shows that the enhancement results of other algorithms show over-enhancement or smoothing, while the image enhanced by EIEN has rich details [6,9,11,12,14,16].
Figure 2
The framework of the proposed EIEN. The left half is the decomposition network, and the two images are decomposed with the same values of network weights. f1–f7 are the extracted feature maps, where f1–f6 are the features with 64 channels; f7 are the features with 4 channels, of which the first three channels are the reflection components and the last channel is the illumination component. The right half is divided into three steps: illumination and reflection component enhancement, and reconstruction of the image.
Figure 3
Structure of the designed illumination correction. It mainly consists of a pyramid structure, a self-attention module, a PPM module, and a residual block. The input is I1, and the two downsampled images are I2 and I3; ‘C’ denotes the channel; downsampling is implemented using maximum pooling, where the kernel size is 2 and the stride is 2; the upsampling process uses bilinear interpolation.
Figure 4
The required modules in Figure 3. It contains the self-attention mechanism (A), the PPM module (B), and the residual block (C). Among them, ‘C’ denotes the channel; ‘Mean’ in the blue block in A indicates the operation of taking the feature mean; the output of the PPM and residual block does not change the number of channels.
Figure 5
Designed method of reflective component correction.
Figure 6
The reflection component enhancement results are obtained by setting different values of τ. Images in columns 1 and 3 suffer from visual discomfort or loss of detail, which are highlighted by red rectangles.
Figure 7
The images of the training and test sets. (a) shows part of the training set images; (b) shows the test set images.
Figure 8
Comparison image of different methods. The three sets of images (a–c) include photos of different parts of the human body. The classical image enhancement methods and the enhancement results of the method proposed in this paper are shown [6,9,11,12,14,16].
Figure 9
Comparison image of different methods. The two sets of images (a,b) include photos of different parts of the human body. The classical image enhancement methods and the enhancement results of the method proposed in this paper are shown. The red box marks a case where the local enhancement effect is not good [6,9,11,12,14,16].
Figure 10
Average index comparison chart. The value of PSNR is scaled to 1/10 of the original value, and the value of GMSD to 10 times the original value.
Figure 11
The initial decomposition results and the decomposition results after weight fine-tuning. The second and third columns show the reflectance and illuminance components of the original Retinex-Net decomposition results, respectively. The fourth and fifth columns show the reflectance and illuminance components of the decomposition after fine-tuning the weights.
Figure 12
Results of comparison with and without attention mechanism. Locally over-enhanced regions are shown in the red square.
Figure 13
Results of comparison with and without reflection component enhancement. (a) Original image; (b) result without reflection component enhancement; (c) result of stretching all three channels of the reflective component; (d) G and B channels of the reflectance component are adaptively stretched, and the R channel remains unchanged.
12 pages, 1512 KiB  
Article
Aberrated Multidimensional EEG Characteristics in Patients with Generalized Anxiety Disorder: A Machine-Learning Based Analysis Framework
by Zhongxia Shen, Gang Li, Jiaqi Fang, Hongyang Zhong, Jie Wang, Yu Sun and Xinhua Shen
Sensors 2022, 22(14), 5420; https://doi.org/10.3390/s22145420 - 20 Jul 2022
Cited by 25 | Viewed by 5421
Abstract
Although increasing evidence supports the notion that psychiatric disorders are associated with abnormal communication between brain regions, only scattered studies have investigated brain electrophysiological disconnectivity of patients with generalized anxiety disorder (GAD). To this end, this study intends to develop an analysis framework for automatic GAD detection through incorporating multidimensional EEG feature extraction and machine learning techniques. Specifically, resting-state EEG signals with a duration of 10 min were obtained from 45 patients with GAD and 36 healthy controls (HC). Then, an analysis framework of multidimensional EEG characteristics (including univariate power spectral density (PSD) and fuzzy entropy (FE), and multivariate functional connectivity (FC), which can decode the EEG information from three different dimensions) was introduced for extracting aberrated multidimensional EEG features via statistical inter-group comparisons. These aberrated features were subsequently fused and fed into three previously validated machine learning methods to evaluate classification performance for automatic patient detection. We showed that patients exhibited a significant increase in the beta rhythm and decrease in the alpha1 rhythm of PSD, together with reduced long-range FC between the frontal and other brain areas in all frequency bands. Moreover, these aberrated features contributed to a very good classification performance, with 97.83 ± 0.40% accuracy, 97.55 ± 0.31% sensitivity, 97.78 ± 0.36% specificity, and 97.95 ± 0.17% F1. These findings corroborate the previous hypothesis of disconnectivity in psychiatric disorders and further shed light on the distribution patterns of aberrant spatio-spectral EEG characteristics, which may lead to potential applications in the automatic diagnosis of GAD.
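A minimal sketch of the kind of pipeline the abstract describes, restricted here to the PSD dimension: relative band power per channel is computed with Welch's method and fed to a standard classifier. The sampling rate, band edges, and the SVM choice are assumptions made for illustration; the study additionally uses fuzzy entropy and functional connectivity features.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed band limits in Hz; the paper's exact definitions may differ.
BANDS = {"theta": (4, 8), "alpha1": (8, 10), "alpha2": (10, 13), "beta": (13, 30)}

def relative_band_power(eeg, fs=250):
    """eeg: (n_channels, n_samples). Returns one relative-power value per channel and band."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    total = pxx.sum(axis=-1, keepdims=True)
    feats = [pxx[:, (f >= lo) & (f < hi)].sum(axis=-1, keepdims=True) / total
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=1).ravel()

# Hypothetical usage: X holds one feature vector per subject, y holds 1 = GAD, 0 = HC.
# X = np.stack([relative_band_power(e) for e in subject_eeg]); y = labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y)
```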
Show Figures

Figure 1
Brain topography of the PSD for the four EEG rhythms. Each value is the average of all subjects. The relative powers of HC (a) and GAD (b) have been normalized between 0 and 1 for the theta, alpha1, alpha2, and beta rhythms for the sake of better visualization, so they share the same color bar. The red dots mark EEG channels with significant differences (p < 0.05). The subgraphs of (c) are the relative PSD (RP: |PSD_GAD − PSD_HC|/PSD_HC) of GAD relative to HC for the four rhythms.
Figure 2
Brain topography of the FE for all EEG rhythms. Each value is the average of all subjects. The FE of HC (a) and GAD (b) have been normalized between 0 and 1 for the theta, alpha1, alpha2, and beta rhythms for the sake of better visualization, so they share the same color bar. No rhythm shows significant differences (p > 0.05). The subgraphs of (c) are the relative FE (RFE: |FE_GAD − FE_HC|/FE_HC) of GAD relative to HC for the four rhythms.
Figure 3
Brain functional network of theta, alpha1, alpha2, and beta rhythms. In the brain functional networks, a red edge means the PLI value of GAD is lower than that of HC, while a blue edge means the PLI value of GAD is higher than that of HC.
23 pages, 3404 KiB  
Article
ECG Classification Using Orthogonal Matching Pursuit and Machine Learning
by Sandra Śmigiel
Sensors 2022, 22(13), 4960; https://doi.org/10.3390/s22134960 - 30 Jun 2022
Cited by 13 | Viewed by 3595
Abstract
Health monitoring and related technologies are a rapidly growing area of research. To date, the electrocardiogram (ECG) remains a popular measurement tool in the evaluation and diagnosis of heart disease. The number of solutions involving ECG signal monitoring systems is growing exponentially in the literature. In this article, underestimated Orthogonal Matching Pursuit (OMP) algorithms are used, demonstrating the significant effect of concise representation parameters on improving the performance of the classification process. Cardiovascular disease classification models based on classical Machine Learning classifiers were defined and investigated. The study was undertaken on the recently published PTB-XL database, whose ECG signals were previously subjected to detailed analysis. The classification was realized for 2, 5, and 15 classes of cardiac diseases. A new method of detecting R-waves and, based on them, determining the location of QRS complexes was presented. Novel aggregation methods of ECG signal fragments containing QRS segments, necessary for tests with classical classifiers, were developed. As a result, it was shown that an ECG signal subjected to R-wave detection, QRS complex extraction, and resampling performs very well in classification using Decision Trees. The reason can be found in the structuring of the signal due to the actions mentioned above. The implemented classification achieved the highest accuracy of 90.4% in recognition of 2 classes, compared to less than 78% for 5 classes and 71% for 15 classes.
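A hedged sketch of how a single QRS segment can be sparsely represented with orthogonal matching pursuit, and how such segments might then feed a decision tree, using scikit-learn. The dictionary, the number of non-zero coefficients, and the tree depth are placeholders; the paper's actual dictionaries (DL, KSVD) and aggregation schemes are only summarized in its figures.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.tree import DecisionTreeClassifier

def omp_reconstruct(segment, dictionary, n_nonzero=6):
    """Approximate one QRS segment as a sparse combination of dictionary atoms.
    dictionary: (segment_length, n_atoms); segment: (segment_length,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(dictionary, segment)
    coef = omp.coef_                       # sparse representation (mostly zeros)
    return dictionary @ coef, coef         # reconstruction and coefficients

# Hypothetical usage: X holds one (reconstructed and resampled) segment per beat.
# clf = DecisionTreeClassifier(max_depth=10).fit(X_train, y_train)
# acc = clf.score(X_test, y_test)
```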
Show Figures

Figure 1
General overview diagram of the method.
Figure 2
Distribution of PTB-XL database data by classes.
Figure 3
Distribution of PTB-XL database data by subclasses.
Figure 4
Block diagram of the proposed R-wave detection algorithm.
Figure 5
Example input ECG signal (original), non-zero coefficients, and the signal after reconstruction.
Figure 6
Atoms with non-zero coefficients for an example signal decomposed using 6 non-zero coefficients.
Figure 7
Operation of the OMP algorithm with 30 non-zero coefficients.
Figure 8
Atoms with non-zero coefficients for an example signal decomposed using 30 non-zero coefficients.
Figure 9
Example atoms of the DL dictionary.
Figure 10
Example atoms of the KSVD dictionary.
Figure 11
Histogram of the number of segments comprising the QRS complex.
Figure 12
Aggregation method—Single.
Figure 13
Aggregation method—Mean.
Figure 14
Aggregation method—Max.
Figure 15
Aggregation method—Voting.
Figure 16
Steps of implementation of research related to classification.
Figure 17
Confusion matrix of the best model in classification for 2 classes.
Figure 18
Confusion matrix of the best model in classification for 5 classes.
Figure 19
Confusion matrix of the best model in classification for 15 classes.
21 pages, 4276 KiB  
Article
A Novel Method for Baroreflex Sensitivity Estimation Using Modulated Gaussian Filter
by Tienhsiung Ku, Serge Ismael Zida, Latifa Nabila Harfiya, Yung-Hui Li and Yue-Der Lin
Sensors 2022, 22(12), 4618; https://doi.org/10.3390/s22124618 - 18 Jun 2022
Cited by 1 | Viewed by 2690
Abstract
The evaluation of baroreflex sensitivity (BRS) has proven to be critical for medical applications. The use of α indices obtained by spectral methods has been the most popular approach to BRS estimation. Recently, an algorithm termed Gaussian average filtering decomposition (GAFD) has been proposed to serve the same purpose. GAFD adopts a three-layer tree structure similar to wavelet decomposition but is constructed only from Gaussian windows with different cutoff frequencies. Its computation is more efficient than that of conventional spectral methods, and there is no need to specify any parameter. This research presents a novel approach, referred to as the modulated Gaussian filter (modGauss), for BRS estimation. It has a more simplified structure than GAFD, using only two bandpass filters with dedicated passbands, so that the three-level structure of GAFD is avoided. This strategy makes modGauss computationally more efficient than GAFD, while the advantages of GAFD are preserved. Both GAFD and modGauss are carried out entirely in the time domain, yet can achieve results similar to conventional spectral methods. In computational simulations, the EuroBavar dataset was used to assess the performance of the novel algorithm. The BRS values were calculated by four other methods (three spectral approaches and GAFD) for performance comparison. From a comparison using the Wilcoxon rank sum test, it was found that there was no statistically significant dissimilarity; instead, very good agreement was observed using the intraclass correlation coefficient (ICC). The modGauss algorithm was also found to be the fastest in computation time and suitable for the long-term estimation of BRS. The novel algorithm, as described in this report, can be applied in medical equipment for real-time estimation of BRS in clinical settings.
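The following sketch illustrates the general idea of a modulated Gaussian band-pass filter and a time-domain alpha-style BRS index; it is not the authors' exact filter design. The bandwidth rule tied to κ, the LF band limits (0.04–0.15 Hz), and the 4 Hz resampling rate of the interpolated series are assumptions.

```python
import numpy as np

def modulated_gaussian_kernel(fs, f_low, f_high, kappa=2.0):
    """Band-pass kernel: a Gaussian envelope modulated by a cosine at the band
    centre frequency. The relation between kappa and the bandwidth is an
    assumed choice, not necessarily the authors' definition."""
    fc = 0.5 * (f_low + f_high)
    sigma_t = kappa / (2 * np.pi * (f_high - f_low))       # envelope width in seconds
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    kernel = np.exp(-0.5 * (t / sigma_t) ** 2) * np.cos(2 * np.pi * fc * t)
    return kernel / np.sum(np.abs(kernel))

def alpha_brs(ibi, sbp, fs=4.0, band=(0.04, 0.15)):
    """Spectral-style alpha index computed in the time domain: ratio of the RMS
    amplitudes of the band-passed IBI and SBP series (ms/mmHg if the inputs are
    in ms and mmHg). Both inputs are evenly resampled series at fs Hz."""
    k = modulated_gaussian_kernel(fs, *band)
    ibi_f = np.convolve(ibi - ibi.mean(), k, mode="same")
    sbp_f = np.convolve(sbp - sbp.mean(), k, mode="same")
    return np.sqrt(np.mean(ibi_f ** 2) / np.mean(sbp_f ** 2))
```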
Show Figures

Figure 1
BRS analysis with different values of κ (1.1, 2 and 3). It can be observed that the BRS value is identical for each value of κ. In this study, κ is constantly set to 2. The data points beyond the whiskers are marked by a red + symbol.
Figure 2
Generation procedure for the modGauss filters in the LF and HF bands (from the time-domain perspective).
Figure 3
Generation procedure for the modGauss filters in the LF and HF bands (from the frequency-domain perspective).
Figure 4
Derived results for the interpolated SBP sequence in both the LF and HF bands. Results from the time-domain perspective, with the B002LB data of the EuroBavar dataset as an example.
Figure 5
Derived results for the interpolated SBP sequence in both the LF and HF bands. Results from the frequency-domain perspective, with the B002LB data of the EuroBavar dataset as an example.
Figure 6
Derived results for the interpolated IBI sequence in the LF and HF bands (from the time-domain perspective, with the data record B002LB of the EuroBavar dataset as an example).
Figure 7
Derived results for the interpolated IBI sequence in the LF and HF bands (from the frequency-domain perspective, with the data record B002LB of the EuroBavar dataset as an example).
Figure 8
Estimation of the BRS values by five different methods, including modGauss, GAFD, AR spectrum, Welch’s periodogram and wavelet. The data points beyond the whiskers are marked by a red + symbol.
Figure 9
Long-term BRS analysis by modGauss in comparison with sleep stage (with sleep stages 1–4 denoted by 01–04, subject motion by 0M, rapid eye movement sleep by 0R, and the awake stage by 0W).
Figure 10
Long-term sleep BRS analysis by five methods, including GAFD, AR spectrum, Welch’s periodogram and wavelet.
12 pages, 2335 KiB  
Article
2D Gait Skeleton Data Normalization for Quantitative Assessment of Movement Disorders from Freehand Single Camera Video Recordings
by Wei Tang, Peter M. A. van Ooijen, Deborah A. Sival and Natasha M. Maurits
Sensors 2022, 22(11), 4245; https://doi.org/10.3390/s22114245 - 2 Jun 2022
Cited by 8 | Viewed by 2768
Abstract
Overlapping phenotypic features between Early Onset Ataxia (EOA) and Developmental Coordination Disorder (DCD) can complicate the clinical distinction of these disorders. Clinical rating scales are a common way to quantify movement disorders, but in children these scales also rely on the observer’s assessment and interpretation. Despite the introduction of inertial measurement units for objective and more precise evaluation, special hardware is still required, restricting their widespread application. Gait video recordings of movement disorder patients are frequently captured in routine clinical settings, but there is presently no suitable quantitative analysis method for these recordings. Owing to advancements in computer vision technology, deep learning pose estimation techniques may soon be ready for convenient and low-cost clinical usage. This study presents a framework based on 2D video recording in the coronal plane and pose estimation for the quantitative assessment of gait in movement disorders. To allow the calculation of distance-based features, seven different methods to normalize 2D skeleton keypoint data derived from pose estimation using deep neural networks applied to freehand video recording of gait were evaluated. In our experiments, 15 children (five EOA, five DCD and five healthy controls) were asked to walk naturally while being videotaped by a single camera in 1280 × 720 resolution at 25 frames per second. The high likelihood of the prediction of keypoint locations (mean = 0.889, standard deviation = 0.02) demonstrates the potential for distance-based features derived from routine video recordings to assist in the clinical evaluation of movement in EOA and DCD. By comparison of mean absolute angle error and mean variance of distance, the normalization methods using the Euclidean (2D) distance of left shoulder and right hip, or the average distance from left shoulder to right hip and from right shoulder to left hip, were found to perform better for deriving distance-based features and further quantitative assessment of movement disorders.
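A small sketch of an LS/RH-d style normalization as described above: each frame's keypoints are scaled by the Euclidean distance between the left shoulder and the right hip. The COCO keypoint ordering and the centering on the shoulder-hip midpoint are assumptions made for illustration and may differ from the paper's exact procedure.

```python
import numpy as np

# COCO-order indices assumed for the 17 AlphaPose keypoints.
L_SHOULDER, R_HIP = 5, 12

def normalize_ls_rh(keypoints):
    """keypoints: (n_frames, 17, 2) array of (x, y) pixel coordinates.
    Per frame: centre on the left-shoulder/right-hip midpoint and divide by
    their 2D Euclidean distance, so the scale no longer depends on the
    subject's distance to the camera."""
    ls = keypoints[:, L_SHOULDER, :]
    rh = keypoints[:, R_HIP, :]
    scale = np.linalg.norm(ls - rh, axis=-1, keepdims=True)   # (n_frames, 1)
    origin = 0.5 * (ls + rh)                                   # (n_frames, 2)
    return (keypoints - origin[:, None, :]) / scale[:, None, :]
```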
Show Figures

Figure 1
Overview of the proposed pipeline for the quantitative assessment of gait from freehand 2D camera recordings. From left to right: S1: Configure the camera and its settings. S2: Collect data with the camera. S3: Extract keypoints using Alphapose. S4: Match the skeleton using PoseFlow. S5: Normalize the skeleton data with one of the methods given in Section 2.4 (BoN: box normalization; S: shoulder normalization; H: hip normalization; LR/RH: left-shoulder right-hip normalization; LS/RH-d: left-shoulder right-hip distance normalization; MS/MH-d: mid-shoulder mid-hip distance normalization; ASH: average shoulder hip normalization). S6: Obtain the normalized keypoint skeleton sequences. S7: Analyze the (distance-based) features derived from skeleton data for classification.
Figure 2
Example of several frames of an EOA patient walking towards the camera, illustrating the effect of the different normalization steps when using the ASH method.
Figure 3
Mean absolute angle error results of the proposed methods. (BoN: box normalization; S: shoulder normalization; H: hip normalization; LR/RH: left-shoulder right-hip normalization; LS/RH-d: left-shoulder right-hip distance normalization; MS/MH-d: mid-shoulder mid-hip distance normalization; ASH: average shoulder hip normalization).
Figure 4
Mean variance of distance between shoulder (green), wrist (purple), hip (orange) and ankle (yellow) keypoints. (BoN: box normalization; LR/RH: left-shoulder right-hip normalization; LS/RH-d: left-shoulder right-hip distance normalization; MS/MH-d: mid-shoulder mid-hip distance normalization; ASH: average shoulder hip normalization).
Figure 5
Mean variance of distance between groups. (Ori: original data; BoN: box normalization; LR/RH: left-shoulder right-hip normalization; LS/RH-d: left-shoulder right-hip distance normalization; MS/MH-d: mid-shoulder mid-hip distance normalization; ASH: average shoulder hip normalization).
Figure 6
Distribution of the locations of the 17 keypoints before and after LS/RH-d normalization for one segment of a DCD child walking towards/away from the camera. (Left): before scaling; (right): after scaling.
25 pages, 1534 KiB  
Article
Study of the Few-Shot Learning for ECG Classification Based on the PTB-XL Dataset
by Krzysztof Pałczyński, Sandra Śmigiel, Damian Ledziński and Sławomir Bujnowski
Sensors 2022, 22(3), 904; https://doi.org/10.3390/s22030904 - 25 Jan 2022
Cited by 35 | Viewed by 7971
Abstract
The electrocardiogram (ECG) is considered a fundamental tool of cardiology. The ECG consists of P, QRS, and T waves. Information provided from the signal based on the intervals and amplitudes of these waves is associated with various heart diseases. The first step in isolating the features of an ECG begins with the accurate detection of the R-peaks in the QRS complex. The database was based on the PTB-XL database, and the signals from leads I–XII were analyzed. This research focuses on determining the applicability of Few-Shot Learning (FSL) for ECG signal proximity-based classification. The study was conducted by training Deep Convolutional Neural Networks to recognize 2, 5, and 20 different heart disease classes. The results of the FSL network were compared with the evaluation score of the neural network performing softmax-based classification. The neural network proposed for this task interprets a set of QRS complexes extracted from ECG signals. The FSL network proved to have higher accuracy in classifying healthy/sick patients, ranging from 93.2% to 89.2%, than the softmax-based classification network, which achieved 90.5–89.2% accuracy. The proposed network also achieved better results in classifying five different disease classes than its softmax-based counterpart, with an accuracy of 80.2–77.9% as opposed to 77.1–75.1%. In addition, a method of R-peak labeling and QRS complex extraction has been implemented. This procedure converts a 12-lead signal into a set of R waves by using detection algorithms and the k-means algorithm.
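A minimal sketch of proximity-based classification in the prototypical-network style: class prototypes are computed from a few labelled support embeddings, and queries are assigned to the nearest prototype. The encoder, the Euclidean metric, and the prototype averaging are assumptions for illustration; the paper's exact distance measure may differ.

```python
import numpy as np

def prototypes(embeddings, labels):
    """Mean embedding per class from the few labelled support examples.
    embeddings: (n_support, d); labels: (n_support,)."""
    classes = np.unique(labels)
    return classes, np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

def classify_by_proximity(query_emb, class_ids, protos):
    """Assign each query embedding to the class whose prototype is closest (Euclidean)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return class_ids[np.argmin(d, axis=1)]

# Hypothetical usage, assuming `encoder` is the trained CNN mapping a set of
# QRS complexes to an embedding vector:
# class_ids, protos = prototypes(encoder(support_x), support_y)
# y_pred = classify_by_proximity(encoder(query_x), class_ids, protos)
```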
Show Figures

Figure 1
The illustrative waveform of the ECG signal.
Figure 2
General overview diagram of the method.
Figure 3
Classes and subclasses of used records.
Figure 4
Sample record of the NORM class for lead I, with places for section cuts (red).
Figure 5
Designed Neural Network architecture.
Figure 6
Comparison of average accuracy of evaluated models on 2, 5, and 20 classes detection.
Figure 7
Comparison of average F1 score of evaluated models on 2, 5, and 20 classes detection.
Figure 8
ACC as a function of the size of the original test dataset.
Figure 9
Confusion Matrix for Few-Shot (2 classes) with proximity-based classification.
Figure 10
Confusion Matrix for Few-Shot (5 classes) with proximity-based classification.
Figure 11
Confusion Matrix for Few-Shot (20 classes) with proximity-based classification.
Figure 12
Confusion Matrix for softmax-based classification (2 classes).
Figure 13
Confusion Matrix for softmax-based classification (5 classes).
Figure 14
Confusion Matrix for softmax-based classification (20 classes).
Figure 15
Learning process of the Neural Network for Few-Shot (2 classes) with proximity-based classification.
Figure 16
Learning process of the Neural Network for Few-Shot (5 classes) with proximity-based classification.
Figure 17
Learning process of the Neural Network for Few-Shot (20 classes) with proximity-based classification.
Figure 18
Learning process of the Neural Network for softmax-based classification (2 classes).
Figure 19
Learning process of the Neural Network for softmax-based classification (5 classes).
Figure 20
Learning process of the Neural Network for softmax-based classification (20 classes).

2021


12 pages, 1373 KiB  
Article
Quantification of the Link between Timed Up-and-Go Test Subtasks and Contractile Muscle Properties
by Andreas Ziegl, Dieter Hayn, Peter Kastner, Ester Fabiani, Boštjan Šimunič, Kerstin Löffler, Lisa Weidinger, Bianca Brix, Nandu Goswami and Schreier Günter
Sensors 2021, 21(19), 6539; https://doi.org/10.3390/s21196539 - 30 Sep 2021
Cited by 5 | Viewed by 3075
Abstract
Frailty and falls are a major public health problem in older adults. Muscle weakness of the lower and upper extremities is a risk factor for any, as well as recurrent, falls, including injuries and fractures. While the Timed Up-and-Go (TUG) test is often used to identify frail individuals and fallers, tensiomyography (TMG) can be used as a non-invasive tool to assess the function of skeletal muscles. In a clinical study, we evaluated the correlation between the TMG parameters of the skeletal muscle contraction of 23 elderly participants (22 f, age 86.74 ± 7.88) and distance-based TUG test subtask times. TUG tests were recorded with an ultrasonic-based device. The sit-up and walking phases were significantly correlated to the contraction and delay time of the muscle vastus medialis (ρ = 0.55–0.80, p < 0.01). In addition, the delay time of the muscles vastus medialis (ρ = 0.45, p = 0.03) and gastrocnemius medialis (ρ = −0.44, p = 0.04) correlated to the sit-down phase. The maximal radial displacements of the biceps femoris showed significant correlations with the times to walk forward (ρ = −0.47, p = 0.021) and back (ρ = −0.43, p = 0.04). The association of TUG subtasks to muscle contractile parameters, therefore, could be utilized as a measure to improve the monitoring of elderly people’s physical ability in general and during rehabilitation after a fall in particular. TUG test subtask measurements may be used as a proxy to monitor muscle properties in rehabilitation after long hospital stays and injuries or for fall prevention.
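A short sketch of the correlation analysis implied above, assuming a Spearman rank correlation between one TUG subtask time and one TMG parameter per participant; the variable names and the choice of Spearman's method are hypothetical, as the abstract does not name the correlation type.

```python
from scipy.stats import spearmanr

def correlate_subtask_with_tmg(subtask_times, tmg_parameter):
    """Rank correlation between one TUG subtask time and one TMG parameter.
    Both inputs are sequences with one value per participant (n = 23 here)."""
    rho, p = spearmanr(subtask_times, tmg_parameter)
    return rho, p

# Hypothetical arrays:
# tug_situp[i] = sit-up phase duration from the ultrasonic TUG device (s)
# vm_tc[i]     = contraction time Tc of vastus medialis from TMG (ms)
# rho, p = correlate_subtask_with_tmg(tug_situp, vm_tc)
# print(f"rho = {rho:.2f}, p = {p:.3f}")
```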
Show Figures

Figure 1
The TUG device is attached to the backrest of a chair (left). The measurement setting allowed ultrasonic-based distance measurement. The chair with the attached device was positioned 3.5 m away from a wall. Participants eventually walked 3 m (right).
Figure 2
Example signal with the marked subtasks: sit-up (1), walk-forward (2), turnaround (3), walk-back (4), sit-down (5).
Figure 3
A TMG apparatus with a linear displacement sensor (left) was used to detect contractile properties while two self-adhesive electrodes were positioned distal and proximal to the thickest part of the muscle belly (right).
Figure 4
TMG signal with marked parameters: contraction time (Tc), delay time (Td), and the maximal displacement amplitude (Dm).
Figure 5
Timeline of the study. Participants could attend six Timed Up-and-Go (TUG) test measurements and two tensiomyography (TMG) measurements within 15 weeks.
Figure 6
Distribution of Timed Up-and-Go test subtask times for all 23 participants.
Figure 7
Scatter plots of significant correlations between vastus medialis (VM), biceps femoris (BF), gastrocnemius medialis (GM) tensiomyographic data, and TUG subtask parameters.
22 pages, 5740 KiB  
Article
Interactive Blood Vessel Segmentation from Retinal Fundus Image Based on Canny Edge Detector
by Alexander Ze Hwan Ooi, Zunaina Embong, Aini Ismafairus Abd Hamid, Rafidah Zainon, Shir Li Wang, Theam Foo Ng, Rostam Affendi Hamzah, Soo Siang Teoh and Haidi Ibrahim
Sensors 2021, 21(19), 6380; https://doi.org/10.3390/s21196380 - 24 Sep 2021
Cited by 39 | Viewed by 5183
Abstract
Optometrists, ophthalmologists, orthoptists, and other trained medical professionals use fundus photography to monitor the progression of certain eye conditions or diseases. Segmentation of the vessel tree is an essential process of retinal analysis. In this paper, an interactive blood vessel segmentation from retinal fundus image based on Canny edge detection is proposed. Semi-automated segmentation of specific vessels can be done by simply moving the cursor across a particular vessel. The pre-processing stage includes the green color channel extraction, applying Contrast Limited Adaptive Histogram Equalization (CLAHE), and retinal outline removal. After that, the edge detection techniques, which are based on the Canny algorithm, will be applied. The vessels will be selected interactively on the developed graphical user interface (GUI). The program will draw out the vessel edges. After that, those vessel edges will be segmented to bring focus on its details or detect the abnormal vessel. This proposed approach is useful because different edge detection parameter settings can be applied to the same image to highlight particular vessels for analysis or presentation.
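A compact OpenCV sketch of the pre-processing chain described above (green channel extraction, CLAHE, then Canny); the CLAHE settings, blur kernel, and thresholds are illustrative values rather than the paper's tuned parameters.

```python
import cv2

def preprocess_and_detect(fundus_bgr, low_thr=30, high_thr=105, blur_ksize=5):
    """Green-channel extraction, CLAHE, Gaussian smoothing, and Canny edge detection.
    Thresholds and CLAHE settings here are assumptions for illustration."""
    green = fundus_bgr[:, :, 1]                                  # OpenCV loads images as BGR
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    blurred = cv2.GaussianBlur(enhanced, (blur_ksize, blur_ksize), 0)
    return cv2.Canny(blurred, low_thr, high_thr)

# Hypothetical usage with a DRIVE test image:
# edges = preprocess_and_detect(cv2.imread("01_test.tif"))
```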
Show Figures

Figure 1
Example of a fundus image from the DRIVE dataset.
Figure 2
Flowchart of the program.
Figure 3
Flowchart of the modified edge detection.
Figure 4
Input retinal fundus image and the image after edge detection is applied.
Figure 5
Image of the toolbar.
Figure 6
(a) Automatic edge detection is done on a vessel when a cursor is moved along it. (b) Multiple colors are used to categorize and differentiate the vessels for analysis.
Figure 7
Segmentation of detected edges displayed separately.
Figure 8
(a) Window displaying the combination of all segmented parts. (b) Image created and saved as “segmented_img.jpg”.
Figure 9
High threshold at 55, and low threshold at (a) 10 (the obtained PFOM value is 0.4739), (b) 30 (the obtained PFOM value is 0.5448), and (c) 50 (the obtained PFOM value is 0.5830).
Figure 10
High threshold at 105, and low threshold at (a) 10 (the obtained PFOM value is 0.5886), (b) 30 (the obtained PFOM value is 0.600), and (c) 50 (the obtained PFOM value is 0.5511).
Figure 11
High threshold at 155, and low threshold at (a) 10 (the obtained PFOM value is 0.5941), (b) 30 (the obtained PFOM value is 0.5532), and (c) 50 (the obtained PFOM value is 0.5025).
Figure 12
Edges detected by using a Gaussian filter of size (a) 3 × 3 pixels, (b) 5 × 5 pixels, (c) 7 × 7 pixels, and (d) 9 × 9 pixels.
Figure 13
Green channel extraction and CLAHE on Photo 1. (a) Grayscale image. (b) CLAHE on grayscale image. (c) Extracted green channel. (d) CLAHE on extracted green channel.
Figure 14
Green channel extraction and CLAHE on Photo 2. (a) Grayscale image. (b) CLAHE on grayscale image. (c) Extracted green channel. (d) CLAHE on extracted green channel.
Figure 15
Comparison of segmented images for Photo 1. (a) Input image. (b) Segmented image using the proposed approach (GUI based). (c) Segmented image using the Canny detector with ideal parameters. (d) Ground truth image from the DRIVE dataset.
Figure 16
Comparison of segmented images for Photo 2. (a) Input image. (b) Segmented image using the proposed approach (GUI based). (c) Segmented image using the Canny detector with ideal parameters. (d) Ground truth image from the DRIVE dataset.
Figure 17
Comparison of segmented images for Photo 3. (a) Input image. (b) Segmented image using the proposed approach (GUI based). (c) Segmented image using the Canny detector with ideal parameters. (d) Ground truth image from the DRIVE dataset.
Figure 18
Comparison of PFOM values of Photos 1 to 3 from the DRIVE dataset with different segmentation methods and different processing on the input image.
Figure 19
Comparison of segmented images for image 0001 from the STARE dataset. (a) Input image. (b) Segmented image using the proposed approach (GUI based) (the obtained PFOM value is 0.5208). (c) Segmented image using the Canny detector with ideal parameters (i.e., the low threshold is set at 10, and the high threshold is set at 32) (the obtained PFOM value is 0.4346). (d) Ground truth image from the STARE dataset.
Figure 20
Comparison of segmented images for Image_01R.jpg from the CHASE_DB1 dataset. (a) Input image. (b) Segmented image using the proposed approach (GUI based) (the obtained PFOM value is 0.5208). (c) Segmented image using the Canny detector with ideal parameters (i.e., the low threshold is set at 6, and the high threshold is set at 60) (the obtained PFOM value is 0.3664). (d) Ground truth image from the CHASE_DB1 dataset.
26 pages, 36023 KiB  
Article
Automatic Polyp Segmentation in Colonoscopy Images Using a Modified Deep Convolutional Encoder-Decoder Architecture
by Chin Yii Eu, Tong Boon Tang, Cheng-Hung Lin, Lok Hua Lee and Cheng-Kai Lu
Sensors 2021, 21(16), 5630; https://doi.org/10.3390/s21165630 - 20 Aug 2021
Cited by 9 | Viewed by 4037
Abstract
Colorectal cancer has become the third most commonly diagnosed form of cancer, and has the second highest fatality rate of cancers worldwide. Currently, optical colonoscopy is the preferred tool of choice for the diagnosis of polyps and to avert colorectal cancer. Colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel form. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and it was found that our proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will offer benefits in terms of the future development of CAD tools for polyp segmentation for colorectal cancer diagnosis and management. In the future, we intend to embed our proposed network into a medical capsule robot for practical usage and try it in a hospital setting with clinicians.
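A PyTorch sketch of the "four dilated convolutions applied in parallel and concatenated" modification mentioned above; the channel counts and dilation rates are illustrative assumptions, not the exact configuration of the modified SegNet VGG-19.

```python
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    """Four 3x3 convolutions with different dilation rates applied in parallel
    and concatenated along the channel axis, in the spirit of the encoder-end
    modification described above (channel counts and dilations are assumed)."""
    def __init__(self, in_ch=512, branch_ch=128, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # padding == dilation keeps the spatial size identical in every branch,
        # so the branch outputs can be concatenated channel-wise.
        return torch.cat([b(x) for b in self.branches], dim=1)

# y = ParallelDilatedBlock()(torch.randn(1, 512, 14, 14))   # -> shape (1, 512, 14, 14)
```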
Show Figures

Figure 1
Modifications made to the original SegNet Visual Geometry Group-19 (VGG-19) structure: (a) overview of skip connections (solid arrow) introduced in our modified SegNet; (b) parallel dilated convolutions used at the end of the model encoder network; (c) the 5 × 5 kernel size convolution blocks used in our modified SegNet.
Figure 2
Training accuracy curves for the three networks: (a) 100-layer network; (b) 154-layer network; (c) 208-layer network.
Figure 3
Comparative examples of polyp segmentation results. The first column contains raw images, the second column contains the reference ground truths, the third column shows the segmentation results from the original SegNet VGG-19, and the final column shows the segmentation results from our modified SegNet. The red boxes in the raw images (a–f) indicate the location of polyps. The white patches in the second, third, and final columns indicate the segmented polyps; the black areas indicate the non-polyp regions.
Figure A1
Samples of colonoscopy images with a variety of polyp conditions.
Figure A2
An overview of our three proposed networks: (a) 100-layer network; (b) modified SegNet (154-layer network); (c) 208-layer network. The small numbers represent the number of image channels used in each convolution block.
Figure A3
The image on the left is a raw colonoscopy image used for visualisation, and the image on the right represents the ground truth for the left image.
Figure A4
Intermediate sector between the encoder and decoder: (a) output activation map from the last max pooling layer in SegNet VGG-16; (b) output activation map from the last max pooling layer in SegNet VGG-19; (c) output activation maps from the ReLU layers from four different dilation factor convolution blocks (shown on the left), and the output of the ReLU layer from the convolution block after concatenation (shown on the right) for our modified SegNet network. The red boxes in the output activation maps indicate the location of polyps.
Figure A5
Segmentation results for a colorectal image from SegNet VGG-16, SegNet VGG-19, and our modified SegNet, with the ground truth for the colorectal image as a reference.
32 pages, 1827 KiB  
Article
Lung Nodule Segmentation with a Region-Based Fast Marching Method
by Marko Savic, Yanhe Ma, Giovanni Ramponi, Weiwei Du and Yahui Peng
Sensors 2021, 21(5), 1908; https://doi.org/10.3390/s21051908 - 9 Mar 2021
Cited by 43 | Viewed by 5657
Abstract
When dealing with computed tomography volume data, the accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, being a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the similarity of visual characteristics between nodules and their surroundings, robust segmentation of nodules becomes a challenging problem. A segmentation algorithm based on the fast marching method is proposed that separates the image into regions with similar features, which are then merged by combining region growing with k-means. An evaluation was performed with two distinct methods (objective and subjective) that were applied on two different datasets, containing simulation data generated for this study and real patient data, respectively. The objective experimental results show that the proposed technique can accurately segment nodules, especially in solid cases, given the mean Dice scores of 0.933 and 0.901 for round and irregular nodules. For non-solid and cavitary nodules the performance dropped, with mean Dice scores of 0.799 and 0.614, respectively. The proposed method was compared to active contour models and to two modern deep learning networks. It reached better overall accuracy than active contour models, having comparable results to DBResNet but lower accuracy than 3D-UNet. The results show promise for the proposed method in computer-aided diagnosis applications.
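A schematic Python sketch of the region-based idea described above: arrival times from each seed are computed with the fast marching method (via the scikit-fmm package), every pixel is assigned to the seed that reaches it first, and the resulting regions are grouped by k-means on their mean intensity. The speed function, the cluster count, and the grouping step are simplified assumptions relative to the paper's region-growing merge.

```python
import numpy as np
import skfmm                                   # pip install scikit-fmm
from scipy import ndimage
from sklearn.cluster import KMeans

def fmm_regions(image, seeds, n_clusters=3):
    """image: 2D float array (e.g., one CT slice); seeds: list of (row, col) seed points.
    Returns a per-pixel cluster label map. Speed function and cluster count are
    illustrative assumptions."""
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    speed = 1.0 / (1.0 + grad)                                  # fronts slow down at edges
    times = []
    for (r, c) in seeds:
        phi = np.ones_like(image, dtype=float)
        phi[r, c] = -1.0                                        # zero level set around the seed
        times.append(np.ma.filled(skfmm.travel_time(phi, speed), np.inf))
    times = np.array(times)
    region_of = np.argmin(times, axis=0)                        # seed that reaches each pixel first
    means = np.array([[image[region_of == i].mean()] for i in range(len(seeds))])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(means)
    return labels[region_of]                                    # per-pixel cluster id
```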
Show Figures

Figure 1
Examples of region assigning.
Figure 2
Flowchart of the proposed method.
Figure 3
A seed grid generation example. (a) Input image. (b) Initial equidistant seed grid. (c) Seed grid after only deletion of points with high local gradient mean. (d) Seed grid after shifting and deletion.
Figure 4
Evolution of the times matrix (T).
Figure 5
Evolution of the regions matrix (R).
Figure 6
An example of grouping regions into clusters with k-means. (a) Regions. (b) Clusters. (c) Seed grouping shown over the input image.
Figure 7
Merging of clusters, with a step counter.
Figure 8
Lung phantom placed in a CT scanner and a single axial slice.
Figure 9
Examples of Lung Image Database Consortium (LIDC) nodules from every category and subcategory.
Figure 10
Preprocessing flowchart.
Figure 11
An example of active contours segmentation.
Figure 12
Excerpts from the questionnaire, part one.
Figure 13
Solid-round nodules’ objective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 14
Solid-irregular nodules’ objective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 15
Sub-solid nodules’ objective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 16
Cavitary nodules’ objective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 17
Examples with high Dice scores.
Figure 18
Examples with low Dice scores.
Figure 19
Solid-round nodules’ subjective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 20
Solid-irregular nodules’ subjective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 21
Sub-solid nodules’ subjective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 22
Cavitary nodules’ subjective evaluation results as boxplots, overlaid with the values of individual cases divided into subcategories.
Figure 23
Examples with high mean opinion scores (MOS).
Figure 24
Examples with low mean opinion scores (MOS).