Advances on Data Transmission and Analysis for Wearable Sensors Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 June 2016) | Viewed by 216044

Special Issue Editors


Prof. Dr. Yun Liu
Guest Editor
Communication Engineering Department, Beijing Jiaotong University, Haidian District, Beijing 100044, China
Interests: communication networking; wireless sensor networks and Big Data

Prof. Dr. Han-Chieh Chao
Guest Editor
Department of Electrical Engineering, National Dong Hwa University, Taiwan
Interests: wireless network; mobile computing; IoT; bacteria-inspired network

Dr. Pony Chu
Guest Editor
Cisco Systems, Inc., 170 West Tasman Dr., San Jose, CA 95134, USA
Interests: mobile communications; wireless sensor networks; data mining

Prof. Dr. Wendong Xiao
Guest Editor
School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: wireless localization and tracking; energy harvesting based network resource management; distributed machine learning for big data; wireless sensor networks; internet of things

Special Issue Information

Dear Colleagues,

Thanks to their practicality, flexibility, low cost, and unobtrusiveness, wearable sensor systems have in recent years entered daily life in applications such as activity monitoring, healthcare, rehabilitation, sports training, and entertainment. Continuing to improve the quality of the user experience and of the research built on these systems requires sensor data transmission that is more robust, secure, and energy-efficient, along with more intelligent and reliable data processing techniques. In addition, many emerging applications of wearable sensors raise new and challenging problems, such as group behavior analysis, dynamic routing, and wearable-sensor-based indoor localization, which call for new models and solutions.

For this Special Issue, we solicit high-quality original papers related to data transmission and analysis for wearable sensor systems. Contributions may include, but are not limited to:

  • Pattern recognition and analysis, such as daily activity recognition, gait analysis, behavior analysis, disease diagnosis;
  • Localization and tracking problems for wearable sensors;
  • Data fusion for heterogeneous or multisource wearable sensor data;
  • Mobility support for wearable sensors, such as energy harvesting and dynamic routing or clustering;
  • Group behavior analysis and prediction;
  • Security and privacy issues in data transmission and system protection;
  • Other emerging wearable sensor applications.

Prof. Dr. Yun Liu
Prof. Dr. Wendong Xiao
Prof. Dr. Han-Chieh Chao
Dr. Pony Chu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Wearable sensors
  • Intelligent computing
  • Activity monitoring
  • Behavior analysis
  • Security and privacy
  • Mobility support

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)


Research

9687 KiB  
Article
An Integrated Wireless Wearable Sensor System for Posture Recognition and Indoor Localization
by Jian Huang, Xiaoqiang Yu, Yuan Wang and Xiling Xiao
Sensors 2016, 16(11), 1825; https://doi.org/10.3390/s16111825 - 31 Oct 2016
Cited by 47 | Viewed by 7689
Abstract
In order to provide better monitoring for the elderly or patients, we developed an integrated wireless wearable sensor system that can realize posture recognition and indoor localization in real time. Five designed sensor nodes which are respectively fixed on lower limbs and a standard Kalman filter are used to acquire basic attitude data. After the attitude angles of five body segments (two thighs, two shanks and the waist) are obtained, the pitch angles of the left thigh and waist are used to realize posture recognition. Based on all these attitude angles of body segments, we can also calculate the coordinates of six lower limb joints (two hip joints, two knee joints and two ankle joints). Then, a novel relative localization algorithm based on step length is proposed to realize the indoor localization of the user. Several sparsely distributed active Radio Frequency Identification (RFID) tags are used to correct the accumulative error in the relative localization algorithm and a set-membership filter is applied to realize the data fusion. The experimental results verify the effectiveness of the proposed algorithms. Full article
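The listing gives no code for the localization scheme, so the fragment below is only a rough Python/NumPy sketch of the idea described in the abstract: step-length-based dead reckoning whose accumulated drift is pulled back toward known RFID tag positions. The blend weight, step data, and tag coordinates are invented placeholders, and the paper's Kalman and set-membership filters are not reproduced.

    # Hypothetical dead-reckoning sketch with occasional RFID-based correction.
    import numpy as np

    def dead_reckon(p0, steps, tags=None, blend=0.5):
        """p0: initial 2D position; steps: list of (step_length, heading_rad);
        tags: optional dict {step_index: known_tag_position} used for correction."""
        p = np.asarray(p0, dtype=float)
        track = [p.copy()]
        for i, (length, heading) in enumerate(steps):
            p = p + length * np.array([np.cos(heading), np.sin(heading)])
            if tags and i in tags:                      # an RFID tag was seen near this step
                p = (1 - blend) * p + blend * np.asarray(tags[i], dtype=float)
            track.append(p.copy())
        return np.array(track)

    # Example: four steps heading roughly east, with a tag correction after step 2.
    path = dead_reckon([0.0, 0.0],
                       [(0.6, 0.0), (0.6, 0.05), (0.6, -0.05), (0.6, 0.0)],
                       tags={2: (1.8, 0.0)})
    print(path[-1])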
Show Figures
(Figures 1–22: system architecture and sensor node hardware, rotation transformations, posture-recognition features based on thigh and waist pitch angles, the five recognized postures, a typical gait cycle, indoor-localization coordinate definitions and filter updating, the one-step experiment setup and step length/angle errors, and trajectory and localization-error results for normal and small steps.)
399 KiB  
Article
Ensemble of One-Class Classifiers for Personal Risk Detection Based on Wearable Sensor Data
by Jorge Rodríguez, Ari Y. Barrera-Animas, Luis A. Trejo, Miguel Angel Medina-Pérez and Raúl Monroy
Sensors 2016, 16(10), 1619; https://doi.org/10.3390/s16101619 - 29 Sep 2016
Cited by 14 | Viewed by 5754
Abstract
This study introduces the One-Class K-means with Randomly-projected features Algorithm (OCKRA). OCKRA is an ensemble of one-class classifiers built over multiple projections of a dataset according to random feature subsets. Algorithms found in the literature spread over a wide range of applications where ensembles of one-class classifiers have been satisfactorily applied; however, none is oriented to the area under our study: personal risk detection. OCKRA has been designed with the aim of improving the detection performance in the problem posed by the Personal RIsk DEtection(PRIDE) dataset. PRIDE was built based on 23 test subjects, where the data for each user were captured using a set of sensors embedded in a wearable band. The performance of OCKRA was compared against support vector machine and three versions of the Parzen window classifier. On average, experimental results show that OCKRA outperformed the other classifiers for at least 0.53% of the area under the curve (AUC). In addition, OCKRA achieved an AUC above 90% for more than 57% of the users. Full article
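As a hedged illustration of the ensemble idea (not the authors' OCKRA implementation), the sketch below trains several one-class k-means models on random feature subsets of "normal" data and averages their similarity scores. The number of models, the number of clusters, and the exp(-d/delta) scoring rule are assumptions chosen for brevity.

    # Sketch of an OCKRA-style ensemble of one-class k-means classifiers.
    import numpy as np
    from sklearn.cluster import KMeans

    def train_ensemble(X_normal, n_models=10, n_clusters=3, seed=0):
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            feats = rng.choice(X_normal.shape[1], size=max(1, X_normal.shape[1] // 2),
                               replace=False)
            km = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=int(rng.integers(1 << 30))).fit(X_normal[:, feats])
            delta = np.mean(np.min(km.transform(X_normal[:, feats]), axis=1)) + 1e-9
            models.append((feats, km, delta))
        return models

    def score(models, X):
        """Higher score = more similar to the normal behaviour the ensemble saw in training."""
        sims = [np.exp(-np.min(km.transform(X[:, f]), axis=1) / d) for f, km, d in models]
        return np.mean(sims, axis=0)

    X_train = np.random.randn(200, 8)          # stand-in for "normal" wearable-band readings
    print(score(train_ensemble(X_train), np.random.randn(5, 8)))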
Show Figures
(Figures 1–2: average precision-recall and ROC curves for all users, and pairwise AUC comparisons of the classifiers with Wilcoxon signed-rank significance tests.)
5967 KiB  
Article
Collection and Processing of Data from Wrist Wearable Devices in Heterogeneous and Multiple-User Scenarios
by Francisco De Arriba-Pérez, Manuel Caeiro-Rodríguez and Juan M. Santos-Gago
Sensors 2016, 16(9), 1538; https://doi.org/10.3390/s16091538 - 21 Sep 2016
Cited by 121 | Viewed by 22687
Abstract
Over recent years, we have witnessed the development of mobile and wearable technologies to collect data from human vital signs and activities. Nowadays, wrist wearables including sensors (e.g., heart rate, accelerometer, pedometer) that provide valuable data are common in market. We are working on the analytic exploitation of this kind of data towards the support of learners and teachers in educational contexts. More precisely, sleep and stress indicators are defined to assist teachers and learners on the regulation of their activities. During this development, we have identified interoperability challenges related to the collection and processing of data from wearable devices. Different vendors adopt specific approaches about the way data can be collected from wearables into third-party systems. This hinders such developments as the one that we are carrying out. This paper contributes to identifying key interoperability issues in this kind of scenario and proposes guidelines to solve them. Taking into account these topics, this work is situated in the context of the standardization activities being carried out in the Internet of Things and Machine to Machine domains. Full article
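One practical consequence of the interoperability issues discussed above is the need for a vendor-agnostic data model. The snippet below is a minimal, hypothetical adapter that maps vendor-specific sleep segments onto a common schema; the field names and sleep-state vocabularies are invented and do not correspond to any real vendor API.

    # Normalise vendor-specific sleep segments into one common schema (illustrative only).
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SleepSegment:
        start: datetime
        end: datetime
        state: str          # one of: "awake", "light", "deep"

    def normalise(record: dict, vendor: str) -> SleepSegment:
        state_maps = {
            "vendor_a": {"wake": "awake", "restless": "light", "asleep": "deep"},
            "vendor_b": {"Awake": "awake", "LightSleep": "light", "DeepSleep": "deep"},
        }
        return SleepSegment(
            start=datetime.fromisoformat(record["start"]),
            end=datetime.fromisoformat(record["end"]),
            state=state_maps[vendor][record["state"]],
        )

    print(normalise({"start": "2016-05-01T23:10:00", "end": "2016-05-02T00:05:00",
                     "state": "restless"}, "vendor_a"))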
Show Figures
(Figures 1–17: wrist-wearable market share and OS forecasts by IDC Research, sensors available in wearables, the data-collection scenario, direct and indirect transfer models for wearables and data warehouses, sleep analyses and sleep segments from Fitbit, Microsoft and Jawbone devices, the analytic engine architecture, analytics location models, and an adapted IoT Reference Model.)
2150 KiB  
Article
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks
by Xiaobin Xu, Fang Zhao, Wendong Wang and Hui Tian
Sensors 2016, 16(9), 1385; https://doi.org/10.3390/s16091385 - 31 Aug 2016
Cited by 4 | Viewed by 4531
Abstract
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach. Full article
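A simplified way to picture the information-aware sampling idea is to keep rare sensed values with higher probability than common ones, so the uploaded subset is closer to uniform over value bins while respecting an upload budget. The sketch below does exactly that; the bin count, budget, and heart-rate-like test data are arbitrary, and it does not reproduce the paper's ASIC algorithms.

    # Keep rare values with higher probability so retained data are near-uniform over bins.
    import numpy as np

    def adaptive_sample(values, n_bins=10, budget=0.3, seed=0):
        rng = np.random.default_rng(seed)
        counts, edges = np.histogram(values, bins=n_bins)
        bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
        # Per-bin keep probability ~ 1/count, rescaled so the expected keep rate = budget.
        raw = 1.0 / np.maximum(counts[bin_idx], 1)
        keep_prob = np.minimum(1.0, raw * budget * len(values) / raw.sum())
        return values[rng.random(len(values)) < keep_prob]

    data = np.concatenate([np.random.normal(70, 2, 900),    # mostly resting heart rate
                           np.random.normal(120, 5, 100)])  # rarer high values
    kept = adaptive_sample(data)
    print(len(kept), np.histogram(kept, bins=10)[0])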
Show Figures
(Figures 1–8: sampling results for the original data and for uniform, Bernoulli and ASIC sampling, a typical WBAN sampling scenario, the effect of sampling probability and bin number, and comparisons of coverage, entropy and value distributions on real and synthetic datasets.)
2370 KiB  
Article
Recognition of Daily Gestures with Wearable Inertial Rings and Bracelets
by Alessandra Moschetti, Laura Fiorini, Dario Esposito, Paolo Dario and Filippo Cavallo
Sensors 2016, 16(8), 1341; https://doi.org/10.3390/s16081341 - 22 Aug 2016
Cited by 68 | Viewed by 8871
Abstract
Recognition of activities of daily living plays an important role in monitoring elderly people and helping caregivers in controlling and detecting changes in daily behaviors. Thanks to the miniaturization and low cost of Microelectromechanical systems (MEMs), in particular of Inertial Measurement Units, in recent years body-worn activity recognition has gained popularity. In this context, the proposed work aims to recognize nine different gestures involved in daily activities using hand and wrist wearable sensors. Additionally, the analysis was carried out also considering different combinations of wearable sensors, in order to find the best combination in terms of unobtrusiveness and recognition accuracy. In order to achieve the proposed goals, an extensive experimentation was performed in a realistic environment. Twenty users were asked to perform the selected gestures and then the data were off-line analyzed to extract significant features. In order to corroborate the analysis, the classification problem was treated using two different and commonly used supervised machine learning techniques, namely Decision Tree and Support Vector Machine, analyzing both personal model and Leave-One-Subject-Out cross validation. The results obtained from this analysis show that the proposed system is able to recognize the proposed gestures with an accuracy of 89.01% in the Leave-One-Subject-Out cross validation and are therefore promising for further investigation in real life scenarios. Full article
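For readers unfamiliar with the evaluation protocol, the snippet below shows a minimal leave-one-subject-out run with an SVM using scikit-learn. The random "features", label counts, and group sizes are placeholders for the inertial features, nine gestures, and twenty users described in the abstract.

    # Minimal leave-one-subject-out (impersonal-model) evaluation sketch.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))          # stand-in per-window accelerometer/gyro features
    y = rng.integers(0, 9, size=200)        # nine gesture labels
    subjects = np.repeat(np.arange(20), 10) # 20 users, 10 windows each

    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y,
                             cv=LeaveOneGroupOut(), groups=subjects)
    print(scores.mean())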
Show Figures
(Figures 1–6: sensor placement on the hand and wrist, the grasped objects and example gestures, and precision/recall/specificity of the Decision Tree and SVM classifiers for the personal and impersonal (leave-one-subject-out) analyses, overall and per gesture.)
4879 KiB  
Article
A Fuzzy Logic Prompting Mechanism Based on Pattern Recognition and Accumulated Activity Effective Index Using a Smartphone Embedded Sensor
by Chung-Tse Liu and Chia-Tai Chan
Sensors 2016, 16(8), 1322; https://doi.org/10.3390/s16081322 - 19 Aug 2016
Cited by 8 | Viewed by 8186
Abstract
Sufficient physical activity can reduce many adverse conditions and contribute to a healthy life. Nevertheless, inactivity is prevalent on an international scale, and improving physical activity is an essential concern for public health. Reminders that help people change their health behaviors are widely applied in health care services. However, time-based reminders that deliver periodic prompts suffer from flexibility and dependency issues, which may decrease prompt effectiveness. We propose a fuzzy logic prompting mechanism, the Accumulated Activity Effective Index Reminder (AAEIReminder), based on pattern recognition and activity effective analysis, to manage physical activity. AAEIReminder recognizes activity levels using a smartphone-embedded sensor for pattern recognition and analyzes the amount of physical activity in the activity effective analysis. AAEIReminder can infer activity situations, such as the amount of physical activity and days spent exercising, through fuzzy logic and decides whether a prompt should be delivered to the user. The prompting system was implemented on smartphones and validated in a short-term real-world trial with seventeen participants. The results demonstrate that AAEIReminder is feasible: the fuzzy logic prompting mechanism delivers prompts automatically based on pattern recognition and activity effective analysis, and its flexibility may increase the prompts' effectiveness. Full article
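To make the fuzzy prompting decision concrete, here is a toy sketch with triangular membership functions and a single Mamdani-style rule (low activity AND long idle period -> prompt). The breakpoints and the single rule are invented; the paper uses four inputs (P1-P4) and a full rule base.

    # Toy fuzzy-logic prompting rule in the spirit of AAEIReminder (values are illustrative).
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1))

    def prompt_level(aaei, idle_days):
        low_activity = tri(aaei, -0.5, 0.0, 0.5)        # AAEI near or below zero
        long_idle = tri(idle_days, 1.0, 4.0, 7.0)       # several days without exercise
        # Mamdani-style AND (min); the rule strength in [0, 1] stands in for defuzzification.
        return min(low_activity, long_idle)

    print(prompt_level(aaei=0.1, idle_days=3))   # fairly strong case for delivering a prompt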
Show Figures
(Graphical abstract and Figures 1–12: the smartphone platform, the multi-stage pattern recognition and activity-level estimation, AAEI computation, the prompting-decision principle, the fuzzy framework and membership functions, simulated activity with and without prompting, app screenshots, and recorded AAEI and prompting cases from the real-world trial.)
8477 KiB  
Article
Physical Behavior in Older Persons during Daily Life: Insights from Instrumented Shoes
by Christopher Moufawad el Achkar, Constanze Lenoble-Hoskovec, Anisoara Paraschiv-Ionescu, Kristof Major, Christophe Büla and Kamiar Aminian
Sensors 2016, 16(8), 1225; https://doi.org/10.3390/s16081225 - 3 Aug 2016
Cited by 40 | Viewed by 6696
Abstract
Activity level and gait parameters during daily life are important indicators for clinicians because they can provide critical insights into modifications of mobility and function over time. Wearable activity monitoring has been gaining momentum in daily life health assessment. Consequently, this study seeks to validate an algorithm for the classification of daily life activities and to provide a detailed gait analysis in older adults. A system consisting of an inertial sensor combined with a pressure sensing insole has been developed. Using an algorithm that we previously validated during a semi structured protocol, activities in 10 healthy elderly participants were recorded and compared to a wearable reference system over a 4 h recording period at home. Detailed gait parameters were calculated from inertial sensors. Dynamics of physical behavior were characterized using barcodes that express the measure of behavioral complexity. Activity classification based on the algorithm led to a 93% accuracy in classifying basic activities of daily life, i.e., sitting, standing, and walking. Gait analysis emphasizes the importance of metrics such as foot clearance in daily life assessment. Results also underline that measures of physical behavior and gait performance are complementary, especially since gait parameters were not correlated to complexity. Participants gave positive feedback regarding the use of the instrumented shoes. These results extend previous observations in showing the concurrent validity of the instrumented shoes compared to a body-worn reference system for daily-life physical behavior monitoring in older adults. Full article
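A very reduced version of the classification idea can be written as two thresholds: plantar load separates sitting from being on the feet, and foot angular velocity separates walking from standing. The thresholds and sample values below are illustrative only and do not reproduce the validated algorithm.

    # Crude sitting/standing/walking labelling from plantar force and foot pitch gyro.
    def classify(total_force_N, pitch_gyro_dps, body_weight_N, gyro_thresh=50.0):
        labels = []
        for f, w in zip(total_force_N, pitch_gyro_dps):
            if f < 0.5 * body_weight_N:       # little load on the insole -> sitting
                labels.append("sitting")
            elif abs(w) > gyro_thresh:        # loaded and foot rotating -> walking
                labels.append("walking")
            else:
                labels.append("standing")
        return labels

    print(classify([200, 600, 650], [5, 10, 120], body_weight_N=700))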
Show Figures
(Figures 1–7: the instrumented shoe with Physilog® unit and pressure-sensing insole, foot clearance definitions, a snapshot of the classifier output (total force against the 50% body-weight line and foot pitch angular velocity), cadence and locomotion-period distributions, stride velocity and stride length distributions, and heel/toe clearance versus stride velocity.)
4578 KiB  
Article
Examination of Inertial Sensor-Based Estimation Methods of Lower Limb Joint Moments and Ground Reaction Force: Results for Squat and Sit-to-Stand Movements in the Sagittal Plane
by Jun Kodama and Takashi Watanabe
Sensors 2016, 16(8), 1209; https://doi.org/10.3390/s16081209 - 1 Aug 2016
Cited by 20 | Viewed by 9025
Abstract
Joint moment estimation by a camera-based motion measurement system and a force plate has a limitation of measurement environment and is costly. The purpose of this paper is to evaluate quantitatively inertial sensor-based joint moment estimation methods with five-link, four-link and three-link rigid body models using different trunk segmented models. Joint moments, ground reaction forces (GRF) and center of pressure (CoP) were estimated for squat and sit-to-stand movements in the sagittal plane measured with six healthy subjects. The five-link model and the four-link model that the trunk was divided at the highest point of the iliac crest (four-link-IC model) were appropriate for joint moment estimation with inertial sensors, which showed average RMS values of about 0.1 Nm/kg for all lower limb joints and average correlation coefficients of about 0.98 for hip and knee joints and about 0.80 for ankle joint. Average root mean square (RMS) errors of horizontal and vertical GRFs and CoP were about 10 N, 15 N and 2 cm, respectively. Inertial sensor-based method was suggested to be an option for estimating joint moments of the trunk segments. Inertial sensors were also shown to be useful for the bottom-up estimation method using measured GRFs, in which average RMS values and average correlation coefficients were about 0.06 Nm/kg and larger than about 0.98 for all joints. Full article
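The top-down estimation rests on propagating Newton-Euler equations segment by segment. The sketch below shows a single sagittal-plane step of that recursion with toy segment parameters; it is not the paper's full five-link implementation, and all numbers are invented.

    # One 2D Newton-Euler step: distal joint load known, proximal joint load computed.
    import numpy as np

    def cross2(a, b):
        """z-component of the 2D cross product."""
        return a[0] * b[1] - a[1] * b[0]

    def proximal_load(m, I, a_com, alpha, r_prox, r_dist, F_dist, M_dist, g=9.81):
        """r_prox / r_dist: vectors from the segment COM to the proximal / distal joints."""
        F_prox = m * a_com - F_dist - m * np.array([0.0, -g])
        M_prox = I * alpha - M_dist - cross2(r_dist, F_dist) - cross2(r_prox, F_prox)
        return F_prox, M_prox

    # Toy example: a 7 kg "thigh" segment, quasi-static, loaded at its distal end.
    F, M = proximal_load(m=7.0, I=0.12, a_com=np.zeros(2), alpha=0.0,
                         r_prox=np.array([0.0, 0.2]), r_dist=np.array([0.0, -0.2]),
                         F_dist=np.array([0.0, -300.0]), M_dist=10.0)
    print(F, M)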
Show Figures
(Figures 1–14: the multi-link body models and segment/joint definitions, the experimental setup with inertial sensors, 3D motion capture and force plates, segment inclination-angle errors, estimated joint-moment waveforms and their RMS errors and correlations against the camera-and-force-plate reference for the SI, CI and SA methods, trunk-joint moments, the bottom-up method using measured GRFs, and GRF and CoP estimation errors.)
6096 KiB  
Article
A Method of Data Aggregation for Wearable Sensor Systems
by Bo Shen and Jun-Song Fu
Sensors 2016, 16(7), 954; https://doi.org/10.3390/s16070954 - 23 Jun 2016
Cited by 5 | Viewed by 5461
Abstract
Data aggregation has been considered as an effective way to decrease the data to be transferred in sensor networks. Particularly for wearable sensor systems, smaller battery has less energy, which makes energy conservation in data transmission more important. Nevertheless, wearable sensor systems usually have features like frequently dynamic changes of topologies and data over a large range, of which current aggregating methods can’t adapt to the demand. In this paper, we study the system composed of many wearable devices with sensors, such as the network of a tactical unit, and introduce an energy consumption-balanced method of data aggregation, named LDA-RT. In the proposed method, we develop a query algorithm based on the idea of ‘happened-before’ to construct a dynamic and energy-balancing routing tree. We also present a distributed data aggregating and sorting algorithm to execute top-k query and decrease the data that must be transferred among wearable devices. Combining these algorithms, LDA-RT tries to balance the energy consumptions for prolonging the lifetime of wearable sensor systems. Results of evaluation indicate that LDA-RT performs well in constructing routing trees and energy balances. It also outperforms the filter-based top-k monitoring approach in energy consumption, load balance, and the network’s lifetime, especially for highly dynamic data sources. Full article
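The data-reduction idea behind the distributed top-k query can be sketched in a few lines: every node merges its own readings with its children's partial results and forwards only k values toward the base station. The tree and readings below are invented, and the routing-tree construction and energy accounting of LDA-RT are not shown.

    # In-network top-k aggregation over a (hypothetical) routing tree.
    import heapq

    def top_k_at(node, children, readings, k):
        merged = list(readings[node])
        for child in children.get(node, []):
            merged.extend(top_k_at(child, children, readings, k))
        return heapq.nlargest(k, merged)   # only k values travel up from this node

    children = {"base": ["a", "b"], "a": ["c", "d"]}          # a small routing tree
    readings = {"base": [], "a": [7, 2], "b": [9, 1], "c": [5, 8], "d": [3, 6]}
    print(top_k_at("base", children, readings, k=3))           # -> [9, 8, 7]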
Show Figures
(Figures 1–14: the LDA structure and distributed merge-sort example, routing trees constructed by LDA-RT under different time drifts and processing delays, path-length and children-node statistics, forwarded query-message counts, residual-energy balance, and comparisons of energy consumption and network lifetime against the filter-based top-k approach.)
2709 KiB  
Article
A Novel Field-Circuit FEM Modeling and Channel Gain Estimation for Galvanic Coupling Real IBC Measurements
by Yue-Ming Gao, Zhu-Mei Wu, Sio-Hang Pun, Peng-Un Mak, Mang-I Vai and Min Du
Sensors 2016, 16(4), 471; https://doi.org/10.3390/s16040471 - 2 Apr 2016
Cited by 28 | Viewed by 6698
Abstract
Existing research on human channel modeling of galvanic coupling intra-body communication (IBC) is primarily focused on the human body itself. Although galvanic coupling IBC is less disturbed by external influences during signal transmission, there are inevitable factors in real measurement scenarios such as the parasitic impedance of electrodes, impedance matching of the transceiver, etc. which might lead to deviations between the human model and the in vivo measurements. This paper proposes a field-circuit finite element method (FEM) model of galvanic coupling IBC in a real measurement environment to estimate the human channel gain. First an anisotropic concentric cylinder model of the electric field intra-body communication for human limbs was developed based on the galvanic method. Then the electric field model was combined with several impedance elements, which were equivalent in terms of parasitic impedance of the electrodes, input and output impedance of the transceiver, establishing a field-circuit FEM model. The results indicated that a circuit module equivalent to external factors can be added to the field-circuit model, which makes this model more complete, and the estimations based on the proposed field-circuit are in better agreement with the corresponding measurement results. Full article
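To illustrate why the external circuit elements matter, the toy calculation below treats the measurement chain as a voltage divider built from electrode parasitic impedances, transceiver output/input impedances, and a crude two-impedance body model. All component values and the body model itself are invented; the paper derives the body part from an anisotropic FEM model rather than from lumped elements.

    # Toy lumped-element channel-gain estimate for a galvanic coupling link.
    import numpy as np

    def channel_gain_db(f, Z_body_series, Z_body_shunt, Z_electrode, Z_out, Z_in):
        w = 2 * np.pi * f
        Ze = Z_electrode(w)
        # Receiver branch (two electrodes + input impedance) in parallel with the shunt path.
        load = 1 / (1 / Z_body_shunt(w) + 1 / (2 * Ze + Z_in))
        v_node = load / (Z_out + 2 * Ze + Z_body_series(w) + load)
        v_rx = v_node * Z_in / (2 * Ze + Z_in)
        return 20 * np.log10(abs(v_rx))

    gain = channel_gain_db(
        f=1e6,
        Z_body_series=lambda w: 200 + 1 / (1j * w * 10e-9),   # crude tissue model
        Z_body_shunt=lambda w: 50,
        Z_electrode=lambda w: 100 + 1 / (1j * w * 47e-9),     # parasitic electrode impedance
        Z_out=50, Z_in=1e4)
    print(round(gain, 1), "dB")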
Show Figures
(Figures 1–12: the electric-field and field-circuit models of the galvanic IBC path, the measurement setup with spectrum analyzer and differential probe, measured channel characteristics, the equivalent circuit of the external elements, the estimation flow chart, and comparisons between simulated and in vivo channel gain at different frequencies and electrode distances.)
1733 KiB  
Article
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
by Liangtian Wan, Guangjie Han, Hao Wang, Lei Shu, Nanxing Feng and Bao Peng
Sensors 2016, 16(3), 368; https://doi.org/10.3390/s16030368 - 12 Mar 2016
Cited by 14 | Viewed by 5776
Abstract
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. Full article
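As a point of reference for the ESPRIT-based estimator, the snippet below implements the standard one-dimensional ESPRIT for a uniform linear array on simulated point sources. It does not implement the paper's 2D extension for incoherently or coherently distributed sources, and the array size, SNR, and source angles are arbitrary test values.

    # Standard 1D ESPRIT for a half-wavelength uniform linear array.
    import numpy as np

    def esprit_ula_doa(X, n_sources, d_over_lambda=0.5):
        R = X @ X.conj().T / X.shape[1]                  # sample covariance
        eigval, eigvec = np.linalg.eigh(R)
        Es = eigvec[:, -n_sources:]                      # signal subspace
        Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]           # rotational invariance relation
        phases = np.angle(np.linalg.eigvals(Psi))
        return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

    # Two narrowband sources at -20 and 35 degrees on an 8-element array, 500 snapshots.
    rng = np.random.default_rng(1)
    m, n, angles = 8, 500, np.radians([-20.0, 35.0])
    A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
    S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
    print(np.sort(esprit_ula_doa(X, n_sources=2)))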
Show Figures
(Figures 1–13: the wearable health-monitoring architecture, the VMIMO-based localization scheme, the URA configuration and its sub-arrays, RMSE of azimuth and elevation estimates for ID and CD sources versus SNR and snapshot number, and the final biosensor localization.)
542 KiB  
Article
Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks
by Sidrah Yousaf, Nadeem Javaid, Umar Qasim, Nabil Alrajeh, Zahoor Ali Khan and Mansoor Ahmed
Sensors 2016, 16(3), 284; https://doi.org/10.3390/s16030284 - 24 Feb 2016
Cited by 40 | Viewed by 7346
Abstract
In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol; enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable with less PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption. Full article
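The energy argument for incremental relaying can be captured with a small probability model: relays retransmit only after a failure, so extra transmissions (and extra energy) are spent only in bad channel states. Assuming independent link errors, which is a simplification of the on-body channel model used in the paper, a sketch is:

    # End-to-end PER and expected transmission count for incremental relaying.
    def incremental_per(p_direct, p_relays):
        per, expected_tx = p_direct, 1.0
        for p in p_relays:
            expected_tx += per     # this relay transmits only if everything so far failed
            per *= p               # the packet is lost only if every stage fails
        return per, expected_tx

    # Three-stage scheme: one direct attempt plus two relay stages (error rates are made up).
    print(incremental_per(0.3, [0.2, 0.2]))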
Show Figures
(Figures 1–9: the three-stage incremental cooperation scheme, PER and energy-efficiency analyses for on-body LOS and NLOS links, the network topology and communication flow of Co-CEStat, InCo-CEStat and EInCo-CEStat, and simulation results for stability period, network lifetime, packets received, packets dropped and residual energy.)
3492 KiB  
Article
A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients
by Lei Yu, Daxi Xiong, Liquan Guo and Jiping Wang
Sensors 2016, 16(2), 202; https://doi.org/10.3390/s16020202 - 5 Feb 2016
Cited by 24 | Viewed by 8070
Abstract
Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; and (2) they only have several rating levels and are affected by a ceiling effect, making it impossible to detect further improvement in the movement precisely. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems, since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, a large amount of data has to be sampled and transmitted. This paper proposes a novel wearable sensor network system, based on compressed sensing technology, to monitor and quantitatively assess upper limb motion function. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also yields reconstructed accelerometer signals that can be used for quantitative assessment without any loss of useful information. Full article
Show Figures
Figure 1. System structure of a compressed sensing-based wearable sensor network.
Figure 2. Structure of a single-layer feedforward network.
Figure 3. General view of (a) the accelerometer sensors and (b) the accelerometer sensor locations.
Figure 4. Raw accelerometer signals.
Figure 5. Compressed accelerometer signals.
Figure 6. Reconstructed accelerometer signals and absolute errors (AE).
Figure 7. Effects of the block size on SNR.
Figure 8. Effects of the compression ratio on SNR.
Figure 9. Sparsity of the raw accelerometer signal (axis X1).
Figure 10. (a)–(e) Reconstructed and raw accelerometer signals from Brunnstrom stages II to VI.
Figure 11. Comparison of compressed and raw signals on quantitative assessment model accuracy.
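A minimal sketch of the sampling-side idea in this entry: project an accelerometer block onto a random measurement matrix so that far fewer values are transmitted than raw samples, then recover the block by exploiting sparsity in a DCT basis. The greedy OMP solver, the block length of 256, and the 80-measurement budget below are illustrative assumptions, not the reconstruction algorithm or dimensions used in the paper.

```python
import numpy as np
from scipy.fft import idct

def compress(block, m, seed=0):
    """Project a length-n accelerometer block onto m < n random measurements."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, block.size)) / np.sqrt(m)
    return phi @ block, phi

def omp_reconstruct(y, phi, k):
    """Greedy OMP recovery of a k-sparse DCT representation (illustrative)."""
    n = phi.shape[1]
    psi = idct(np.eye(n), norm='ortho', axis=0)   # columns are DCT synthesis atoms
    A = phi @ psi
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_dct = np.zeros(n)
    x_dct[support] = coef
    return psi @ x_dct                            # back to the time domain

# Example: a block that is exactly 3-sparse in the DCT basis, compressed 256 -> 80,
# i.e., less than 1/3 of the raw length, as quoted in the abstract.
n = 256
psi = idct(np.eye(n), norm='ortho', axis=0)
block = 1.0 * psi[:, 5] + 0.6 * psi[:, 20] - 0.3 * psi[:, 47]
y, phi = compress(block, 80)
recovered = omp_reconstruct(y, phi, k=5)
print(np.max(np.abs(recovered - block)))          # small reconstruction error
```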
Article
Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies
by Hesham Mahrous and Rabab Ward
Sensors 2016, 16(2), 201; https://doi.org/10.3390/s16020201 - 5 Feb 2016
Cited by 20 | Viewed by 6988
Abstract
This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and spatial domains. Our proposed method represents the multi-channel signal data as a vector constructed in a specific way, so that it has a better block-sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it exploits not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method significantly outperforms existing SMV and MMV methods, and shows significantly lower compression errors even at high compression ratios, such as 10:1, on three different datasets. Full article
Show Figures
Figure 1. Block sparsity of the DCT coefficients of EEG channels. (a) The DCT coefficients of vec[X^T]; (b) the DCT coefficients of vec[X]; (c) the DCT coefficients of x_l when x_l is formed of 23 s of data of channel l; (d) the DCT coefficients of x_l when it is formed of one second of data of the same channel.
Figure 2. Block diagram showing the proposed approach for multivariate compression in CS.
Figure 3. Block structure of correlated and uncorrelated signals in the DCT domain. (a) The DCT coefficients of the vectorized form of uncorrelated random signals; (b) the DCT coefficients of the vectorized form of correlated 6-channel signals; (c) the DCT coefficients of the vectorized form of correlated 10-channel signals; (d) the DCT coefficients of the vectorized form of correlated 14-channel signals.
Figure 4. Mean correlation and PLV within and between the blocks. (a) Average correlation within the blocks; (b) average correlation between the blocks; (c) average PLV within the blocks; (d) average PLV between the blocks.
Figure 5. NMSE vs. number of channels for the proposed method at different compression rates (%).
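The core preprocessing idea summarised in the abstract is to rearrange the multichannel measurements into a single vector whose transform coefficients cluster into blocks, which a block-sparse solver such as BSBL-BO can then exploit. The snippet below contrasts a channel-stacked vectorisation with a sample-interleaved one on synthetic correlated channels; the data generator, block size and energy metric are assumptions for illustration only, and BSBL-BO itself is not implemented here.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)

# Synthetic correlated "EEG": 8 channels sharing a slow component (illustrative only).
common = np.cumsum(rng.standard_normal(128))
X = np.vstack([common + 0.1 * rng.standard_normal(128) for _ in range(8)])

stacked = X.reshape(-1)        # channel-after-channel stacking
interleaved = X.T.reshape(-1)  # sample-after-sample interleaving across channels

def block_energy_concentration(v, block=8):
    """Fraction of total DCT energy carried by the 10% most energetic blocks."""
    c = dct(v, norm='ortho')
    e = (c.reshape(-1, block) ** 2).sum(axis=1)
    top = np.sort(e)[::-1][: max(1, len(e) // 10)]
    return top.sum() / e.sum()

print("stacked     :", block_energy_concentration(stacked))
print("interleaved :", block_energy_concentration(interleaved))
```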
Article
Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors
by Lukun Wang
Sensors 2016, 16(2), 189; https://doi.org/10.3390/s16020189 - 4 Feb 2016
Cited by 73 | Viewed by 8057
Abstract
This paper provides an approach for recognizing human activities with wearable sensors. The continuous autoencoder (CAE) is proposed as a novel stochastic neural network model that improves the ability to model continuous data. The CAE adds Gaussian random units into an improved sigmoid activation function to extract the features of nonlinear data. In order to shorten the training time, we propose a new fast stochastic gradient descent (FSGD) algorithm to update the gradients of the CAE. A reconstruction experiment on a swiss-roll dataset demonstrates that the CAE can fit continuous data better than the basic autoencoder, and that the training time can be reduced by the FSGD algorithm. In the human activity recognition experiment, a time- and frequency-domain feature extraction (TFFE) method is introduced to extract features from the original sensor data. Then, the principal component analysis (PCA) method is applied for feature reduction, reducing the dimension of each data segment from 5625 to 42. The feature vectors extracted from the original signals are used as the input of a deep belief network (DBN) composed of multiple CAEs. The training results show that a correct differentiation rate of 99.3% is achieved. Contrast experiments, such as different sensor combinations, sensor units at different positions, and training with different numbers of epochs, are designed to validate the approach. Full article
Show Figures
Figure 1. Basic autoencoder model. x_i, i ∈ {1, …, n} represents the input of the autoencoder, h_j, j ∈ {1, …, k} is the value of the hidden units, x̂_i, i ∈ {1, …, n} is the approximate output, W^(i), i ∈ {1, 2} denotes the weight matrices, and b is the bias term.
Figure 2. Reconstruction of the swiss-roll dataset. (a) Original swiss-roll dataset; the 2000 points are normalized to [0, 1]; (b) autoencoder reconstruction; (c) CAE reconstruction.
Figure 3. The error curve of FSGD. The green line represents the training error of each epoch.
Figure 4. Sensor signals: (a) z-axis acceleration signals of the right arm for jumping and walking; (b) z-axis gyroscope signals of the right arm for jumping and walking.
Figure 5. FFT and cepstrum: (a) FFT of the signals for walking in a parking lot; (b) FFT of the signals for jumping (the five largest FFT peaks are marked with "O"); (c) cepstrum of the signals for walking in a parking lot; (d) cepstrum of the signals for jumping (the five largest cepstrum peaks are marked with "O").
Figure 6. Eigenvalues: (a) the percentage of eigenvalues, calculated by accumulation; (b) the eigenvalues of the contribution matrix ("·" marks each eigenvalue).
Figure 7. Scatter plots of PCA. There are 173,280 (= 9120 × 19) points in total, each labeled according to its activity among the 19 activities. (a) Scatter plot of features 1 and 2; (b) scatter plot of features 2 and 3; (c) 3-D scatter plot of features 1–3.
Figure 8. The structure of the DBN. V and T denote the input and output layers, H_i, i ∈ {0, …, 3} are the hidden layers, and W_i, i ∈ {0, …, 4} represent the weight matrices.
Figure 9. DBN error curve. The blue line represents the training error of each epoch.
Figure 10. Correct differentiation rates of time-domain features.
Figure 11. Correct differentiation rates of frequency-domain features.
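One reading of the "Gaussian random units inside the sigmoid activation" idea from this abstract is sketched below as a toy stochastic autoencoder forward pass in NumPy. The noise scale, layer sizes and the use of a plain squared-error check are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sigmoid(z, sigma=0.1):
    """Sigmoid with additive Gaussian units injected before the nonlinearity
    (sigma is a hypothetical noise scale, not a value from the paper)."""
    return 1.0 / (1.0 + np.exp(-(z + sigma * rng.standard_normal(z.shape))))

def cae_forward(x, W1, b1, W2, b2):
    """Forward pass of a toy one-hidden-layer stochastic autoencoder."""
    h = noisy_sigmoid(W1 @ x + b1)                # stochastic encoding
    x_hat = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # deterministic decoding
    return h, x_hat

# Example: a 42-dimensional PCA feature vector (the dimensionality quoted in the
# abstract) encoded into 20 hidden units.
d_in, d_hid = 42, 20
W1, b1 = 0.1 * rng.standard_normal((d_hid, d_in)), np.zeros(d_hid)
W2, b2 = 0.1 * rng.standard_normal((d_in, d_hid)), np.zeros(d_in)
x = rng.uniform(size=d_in)
h, x_hat = cae_forward(x, W1, b1, W2, b2)
print("reconstruction MSE:", float(np.mean((x - x_hat) ** 2)))
```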
Article
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
by Francisco Javier Ordóñez and Daniel Roggen
Sensors 2016, 16(1), 115; https://doi.org/10.3390/s16010115 - 18 Jan 2016
Cited by 2053 | Viewed by 92419
Abstract
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. The results also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights into their optimisation. Full article
Show Figures
Figure 1. Different types of units in neural networks. (a) MLP with three dense layers; (b) recurrent neural network (RNN) with two dense layers; the activation and hidden value of the unit in layer (l+1) are computed in the same time step t; (c) the recurrent LSTM cell is an extension of the RNN, whose internal memory can be updated, erased or read out.
Figure 2. Representation of a temporal convolution over a single sensor channel in a three-layer convolutional neural network (CNN). Layer (l-1) holds the sensor data at the input. The next layer (l) is composed of two feature maps (a_1^l(τ) and a_2^l(τ)) extracted by two different kernels (K_11^(l-1) and K_21^(l-1)). The deepest layer (l+1) is composed of a single feature map resulting from a temporal convolution in layer l with a two-dimensional kernel K_1^l. The time axis (which is convolved over) is horizontal.
Figure 3. Architecture of the DeepConvLSTM (Conv, convolutional) framework for activity recognition. From the left, the signals coming from the wearable sensors are processed by four convolutional layers, which learn features from the data. Two dense layers then perform a non-linear transformation, which yields the classification outcome with a softmax logistic regression output layer on the right. The input at Layer 1 corresponds to sensor data of size D × S^1, where D denotes the number of sensor channels and S^l the length of the feature maps in layer l. Layers 2–5 are convolutional layers; K^l denotes the kernels in layer l (depicted as red squares) and F^l the number of feature maps in layer l. In convolutional layers, a_i^l denotes the activation that defines feature map i in layer l. Layers 6 and 7 are dense layers; a_{t,i}^l denotes the activation of unit i in hidden layer l at time t. The time axis is vertical.
Figure 4. Placement of on-body sensors used in the OPPORTUNITY dataset (left: inertial measurement units; right: 3-axis accelerometers) [7].
Figure 5. Sequence labelling after segmenting the data with a sliding window. The sensor signals are segmented by a jumping window; the activity class within each sequence is taken to be the ground-truth label annotated at sample T of that window.
Figure 6. Output class probabilities for a ~25 s-long fragment of sensor signals in the test set of the OPPORTUNITY dataset, which comprises 10 annotated gestures. Each point in the plot represents the class probabilities obtained from processing the data within a 500 ms sequence obtained from a sliding window ending at that point. The dashed line represents the Null class. DeepConvLSTM better identifies the start and end of gestures.
Figure 7. F1-score performance of DeepConvLSTM on the OPPORTUNITY dataset. Classification performance is displayed per gesture for different lengths of the input sensor data segments (400 ms, 500 ms, 1400 ms and 2750 ms). The horizontal axis represents the ratio between the gesture length and the sequence length (ratios below one correspond to gestures shorter than the sequence duration).
Figure 8. Performance on the Skoda and OPPORTUNITY (gesture recognition, with the Null class) datasets with different numbers of convolutional layers.
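The architecture outlined in the abstract and the Figure 3 caption (four convolutional layers feeding recurrent layers and a softmax output) can be sketched roughly in PyTorch as below. The channel count, filter count, kernel size, LSTM width and window length are assumptions chosen for illustration, not the hyperparameters reported by the authors.

```python
import torch
import torch.nn as nn

class DeepConvLSTMSketch(nn.Module):
    """Rough sketch of a Conv + LSTM activity recogniser (illustrative sizes)."""
    def __init__(self, n_channels=113, n_classes=18,
                 n_filters=64, kernel_size=5, lstm_units=128):
        super().__init__()
        # Four temporal convolutions over the sensor channels learn local features.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size), nn.ReLU(),
        )
        # Two recurrent layers model the temporal dynamics of the feature maps.
        self.lstm = nn.LSTM(n_filters, lstm_units, num_layers=2, batch_first=True)
        self.out = nn.Linear(lstm_units, n_classes)   # softmax applied in the loss

    def forward(self, x):              # x: (batch, sensor_channels, time)
        f = self.conv(x)               # (batch, n_filters, time')
        f = f.permute(0, 2, 1)         # LSTM expects (batch, time', features)
        h, _ = self.lstm(f)
        return self.out(h[:, -1, :])   # classify the window from the last step

# Example: a batch of 4 windows of 24 samples from 113 (assumed) sensor channels.
model = DeepConvLSTMSketch()
logits = model(torch.randn(4, 113, 24))
print(logits.shape)                    # torch.Size([4, 18])
```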