Leveraging Wearable Sensors for Human Daily Activity Recognition with Stacked Denoising Autoencoders
Figure 1. The overall framework of stacked denoising autoencoder (SDAE)-based activity recognition.
Figure 2. Accuracy of each activity class with random oversampling, SMOTE, and without resampling (A0 standing, A1 sleeping, A2 watching TV, A3 walking, A4 running, A5 sweeping, A6 stand-to-sit, A7 sit-to-stand, A8 stand-to-walk, A9 walk-to-stand, A10 lie-to-sit, A11 sit-to-lie).
Figure 3. The confusion matrix obtained by applying random oversampling.
Figure 4. Recognition performance of each activity for different numbers of iterations (pretraining learning rate 1 × 10⁻⁷; two hidden layers; fine-tuning learning rate 0.01).
Figure 5. Recognition performance of each activity for different pretraining learning rates (200 iterations; two hidden layers; fine-tuning learning rate 0.01).
Figure 6. Recognition performance of each activity for different fine-tuning learning rates (200 iterations; two hidden layers; pretraining learning rate 1 × 10⁻⁷).
Figure 7. Recognition performance of each activity for different numbers of hidden layers (200 iterations; pretraining learning rate 1 × 10⁻⁷; fine-tuning learning rate 0.01).
Figure 8. Recognition performance changes for each activity when adopting different sensors.
Abstract
1. Introduction
- In this paper, besides recognizing static and dynamic activities, we use deep learning models to recognize transitional activities, which are more difficult and complicated than the other two cases.
- We mitigate the class-imbalance problem caused by the relatively short duration of transitional activities by applying resampling, and we compare several resampling methods to find the optimal one.
- A novel framework based on a stacked denoising autoencoder (SDAE) is used to recognize all three types of activities. It achieves strong performance and is compared with classical methods to verify the effectiveness of the SDAE model for activity recognition, especially for transitional activities.
2. Related Work
2.1. Conventional Machine Learning Methods
2.2. Deep Learning Methods
3. Materials and Methods
3.1. Data Preprocessing
3.1.1. Segmentation
3.1.2. Resampling
3.2. Stacked Denoising Autoencoder
3.2.1. Pretraining
3.2.2. Fine-Tuning
Algorithm 1 Human activity recognition with SDAE.
Input: Raw dataset D
Output: Activity types of the test dataset
1: Data preprocessing:
2: Segment the dataset according to the sampling frequency
3: Apply random oversampling
4: Standardize the dataset to obtain the input vectors
5: Divide the dataset into training, validation, and test subsets
6: Pretraining:
7: while hidden layers remain do
8: Corrupt the input of the l-th layer by applying the denoising factor, then train the l-th denoising autoencoder to reconstruct the clean input
9: The output of the l-th layer becomes the input of the (l + 1)-th layer
10: l += 1
11: end while
12: Fine-tuning:
13: Fine-tune the whole network with backpropagation; use the labeled data to train the softmax layer
14: Test:
15: Train the model on the training subset and validate its performance on the validation subset; recognize the activity types of the test subset
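The greedy layer-wise pretraining loop can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer sizes, learning rate, masking-noise corruption, and tied-weight decoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, denoising_factor=0.5):
    """Randomly zero a fraction of inputs (masking noise)."""
    mask = rng.random(x.shape) > denoising_factor
    return x * mask

class DenoisingAutoencoder:
    """One layer of the stack: reconstruct the *clean* x from corrupted x."""
    def __init__(self, n_in, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.1, (n_in, n_hidden))  # tied encoder/decoder weights
        self.b = np.zeros(n_hidden)
        self.b_out = np.zeros(n_in)
        self.lr = lr

    def encode(self, x):
        return np.tanh(x @ self.W + self.b)

    def train_step(self, x, denoising_factor=0.5):
        x_tilde = corrupt(x, denoising_factor)
        h = self.encode(x_tilde)
        x_hat = h @ self.W.T + self.b_out        # linear decoder, tied weights
        err = x_hat - x                          # target is the clean input
        # Backprop of the mean-squared reconstruction error
        dh = (err @ self.W) * (1 - h**2)         # tanh derivative
        self.W -= self.lr * (x_tilde.T @ dh + err.T @ h) / len(x)
        self.b -= self.lr * dh.mean(axis=0)
        self.b_out -= self.lr * err.mean(axis=0)
        return float((err**2).mean())

# Greedy layer-wise pretraining: the output of layer l feeds layer l + 1
X = rng.normal(size=(256, 60))                   # hypothetical windowed sensor features
layers = [DenoisingAutoencoder(60, 32), DenoisingAutoencoder(32, 16)]
inp = X
for layer in layers:
    for _ in range(200):
        layer.train_step(inp)
    inp = layer.encode(inp)
# `inp` is now the learned representation fed to the softmax classifier
```

After pretraining, fine-tuning would attach a softmax layer to `inp` and backpropagate through the whole stack with the labeled data, as in steps 12–13 above.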
3.3. Experimental Design
4. Results
4.1. Experimental Result Without Resampling
4.2. Performance Enhancement with Resampling
4.3. Hyperparameter Analysis
- (a) Iterations of training progress
- (b) Pretraining learning rate
- (c) Fine-tuning learning rate
- (d) Number of hidden layers
4.4. The Influence of Single and Several Sensors
4.5. Comparison with Other Conventional Methods
4.6. The Performance of SDAE Model on Three Public Datasets
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Chen, Y.; Yu, L.; Ota, K.; Dong, M. Robust Activity Recognition for Aging Society. IEEE J. Biomed. Health Inform. 2018, 22, 1754–1764.
- Yang, X.; Tian, Y. Super Normal Vector for Human Activity Recognition with Depth Cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1028–1039.
- Ward, J.A.; Lukowicz, P.; Troster, G.; Starner, T.E. Activity Recognition of Assembly Tasks Using Body-Worn Microphones and Accelerometers. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1553–1567.
- Zheng, Y.L.; Ding, X.R.; Poon, C.C.; Lo, B.P.; Zhang, H.; Zhou, X.L.; Yang, G.Z.; Zhao, N.; Zhang, Y.T. Unobtrusive Sensing and Wearable Devices for Health Informatics. IEEE Trans. Biomed. Eng. 2014, 61, 1538–1554.
- Chen, L.; Hoey, J.; Nugent, C.D.; Cook, D.J.; Yu, Z. Sensor-Based Activity Recognition. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2012, 42, 790–808.
- Sanchez-Comas, A.; Synnes, K.; Hallberg, J. Hardware for Recognition of Human Activities: A Review of Smart Home and AAL Related Technologies. Sensors 2020, 20, 4227.
- Chen, Y.; Shen, C. Performance Analysis of Smartphone-Sensor Behavior for Human Activity Recognition. IEEE Access 2017, 5, 3095–3110.
- Gu, F.; Khoshelham, K.; Valaee, S.; Shang, J.; Zhang, R. Locomotion Activity Recognition Using Stacked Denoising Autoencoders. IEEE Internet Things J. 2018, 5, 2085–2093.
- Chen, Y.; Xue, Y. A Deep Learning Approach to Human Activity Recognition Based on Single Accelerometer. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015.
- Zeng, M.; Nguyen, L.T.; Yu, B.; Mengshoel, O.J.; Zhu, J.; Wu, P.; Zhang, J. Convolutional Neural Networks for human activity recognition using mobile sensors. In Proceedings of the 6th International Conference on Mobile Computing, Applications and Services, Austin, TX, USA, 6–7 November 2014.
- Hsu, Y.L.; Yang, S.C.; Chang, H.C.; Lai, H.C. Human Daily and Sport Activity Recognition Using a Wearable Inertial Sensor Network. IEEE Access 2018, 6, 31715–31728.
- Paraschiakos, S.; Cachucho, R.; Moed, M.; van Heemst, D.; Mooijaart, S.; Slagboom, E.P.; Knobbe, A.; Beekman, M. Activity recognition using wearable sensors for tracking the elderly. User Model. User Adapt. Interact. 2020, 30, 567–605.
- Elsts, A.; Twomey, N.; McConville, R.; Craddock, I. Energy-efficient activity recognition framework using wearable accelerometers. J. Netw. Comput. Appl. 2020, 168, 102770.
- Lawal, I.A.; Bano, S. Deep Human Activity Recognition With Localisation of Wearable Sensors. IEEE Access 2020, 8, 155060–155070.
- Xie, L.; Tian, J.; Ding, G.; Zhao, Q. Human activity recognition method based on inertial sensor and barometer. In Proceedings of the 2018 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Moltrasio, Italy, 26–29 March 2018.
- Moufawad el Achkar, C.; Lenoble-Hoskovec, C.; Paraschiv-Ionescu, A.; Major, K.; Büla, C.; Aminian, K. Classification and characterization of postural transitions using instrumented shoes. Med. Biol. Eng. Comput. 2018, 56, 1403–1412.
- Ali, R.; Atallah, L.; Lo, B.; Yang, G.-Z. Transitional Activity Recognition with Manifold Embedding. In Proceedings of the 2009 Sixth International Workshop on Wearable and Implantable Body Sensor Networks, Berkeley, CA, USA, 3–5 June 2009.
- Melo, T.; Duarte, A.C.; Bezerra, T.S.; França, F.; Soares, N.S.; Brito, D. The Five Times Sit-to-Stand Test: Safety and reliability with older intensive care unit patients at discharge. Rev. Bras. Ter. Intensiv. 2019, 31, 27–33.
- Otebolaku, A.; Enamamu, T.; Alfouldi, A.; Ikpehai, A.; Marchang, J. Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition Using Deep Convolutional Neural Networks. Sensors 2020, 20, 3803.
- Bolic, M.; Djuric, P.M.; Hong, S. Resampling algorithms and architectures for distributed particle filters. IEEE Trans. Signal Process. 2005, 53, 2442–2450.
- Wannenburg, J.; Malekian, R. Physical Activity Recognition From Smartphone Accelerometer Data for User Context Awareness Sensing. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 3142–3149.
- Gupta, P.; Dallas, T. Feature Selection and Activity Recognition System Using a Single Triaxial Accelerometer. IEEE Trans. Biomed. Eng. 2014, 61, 1780–1786.
- Chen, Z.; Zhu, Q.; Soh, Y.C.; Zhang, L. Robust Human Activity Recognition Using Smartphone Sensors via CT-PCA and Online SVM. IEEE Trans. Ind. Inform. 2017, 13, 3070–3080.
- Xu, H.; Pan, Y.; Li, J.; Nie, L.; Xu, X. Activity Recognition Method for Home-Based Elderly Care Service Based on Random Forest and Activity Similarity. IEEE Access 2019, 7, 16217–16225.
- Gaglio, S.; Re, G.L.; Morana, M. Human Activity Recognition Process Using 3-D Posture Data. IEEE Trans. Hum. Mach. Syst. 2015, 45, 586–597.
- Plötz, T.; Guan, Y. Deep Learning for Human Activity Recognition in Mobile Computing. Computer 2018, 51, 50–59.
- Wang, J.; Zhang, X.; Gao, Q.; Yue, H.; Wang, H. Device-Free Wireless Localization and Activity Recognition: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2017, 66, 6258–6267.
- Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A.; Bottou, L. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
- Tao, D.; Jin, L.; Yuan, Y.; Xue, Y. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1392–1404.
- Khan, A.M.; Lee, Y.K.; Lee, S.Y.; Kim, T.S. A Triaxial Accelerometer-Based Physical-Activity Recognition via Augmented-Signal Features and a Hierarchical Recognizer. IEEE Trans. Inform. Technol. Biomed. 2010, 14, 1166–1172.
- Dernbach, S.; Das, B.; Krishnan, N.C.; Thomas, B.L.; Cook, D.J. Simple and complex activity recognition through smart phones. In Proceedings of the Eighth International Conference on Intelligent Environments, Guanajuato, Mexico, 26–29 June 2012.
- Reyes-Ortiz, J.L.; Oneto, L.; Samà, A.; Parra, X.; Anguita, D. Transition-Aware Human Activity Recognition Using Smartphones. Neurocomputing 2015, 171, 754–767.
- Li, J.H.; Tian, L.; Wang, H.; An, Y.; Wang, K.; Yu, L. Segmentation and Recognition of Basic and Transitional Activities for Continuous Physical Human Activity. IEEE Access 2019, 7, 42565–42576.
- Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261.
- He, Z.; Jin, L. Activity recognition from acceleration data based on discrete cosine transform and SVM. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009.
- McCarthy, M.W.; James, D.A.; Lee, J.B.; Rowlands, D.D. Decision-tree-based human activity classification algorithm using single-channel foot-mounted gyroscope. Electron. Lett. 2015, 51, 675–676.
- Rogers, E.; Kelleher, J.D.; Ross, R.J. Towards a Deep Learning-Based Activity Discovery System; Dublin Institute of Technology: Dublin, Ireland, 2016.
- Fang, H.; He, L.; Si, H.; Liu, P.; Xie, X. Human activity recognition based on feature selection in smart home using back-propagation algorithm. ISA Trans. 2014, 53, 1629–1638.
- Safi, K.; Mohammed, S.; Attal, F.; Khalil, M.; Amirat, Y. Recognition of different daily living activities using hidden Markov model regression. In Proceedings of the 2016 3rd Middle East Conference on Biomedical Engineering (MECBME), Beirut, Lebanon, 6–7 October 2016.
- Jaf, S.; Calder, C. Deep Learning for Natural Language Parsing. IEEE Access 2019, 7, 131363–131373.
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Khalil, R.A.; Jones, E.; Babar, M.I.; Jan, T.; Zafar, M.H.; Alhussain, T. Speech Emotion Recognition Using Deep Learning Techniques: A Review. IEEE Access 2019, 7, 117327–117345.
- Ronao, C.A.; Cho, S. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244.
- Lee, S.; Yoon, S.M.; Cho, H. Human Activity Recognition From Accelerometer Data Using Convolutional Neural Network. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Korea, 13–16 February 2017.
- Mario, M. Human Activity Recognition Based on Single Sensor Square HV Acceleration Images and Convolutional Neural Networks. IEEE Sens. J. 2019, 19, 1487–1498.
- Wang, A.; Chen, G.; Shang, C.; Zhang, M.; Liu, L. Human Activity Recognition in a Smart Home Environment with Stacked Denoising Autoencoders. In Proceedings of the 17th International Conference on Web-Age Information Management, Nanchang, China, 3–5 June 2016; pp. 29–40.
- Gao, J.; Yang, J.; Wang, G.; Li, M. A novel feature extraction method for scene recognition based on Centered Convolutional Restricted Boltzmann Machines. Neurocomputing 2015, 214, 708–717.
- Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103.
- Zhou, X.; Guo, J.; Wang, S. Motion Recognition by Using a Stacked Autoencoder-Based Deep Learning Algorithm with Smart Phones. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications, Qufu, China, 10–12 August 2015; pp. 778–787.
- Inoue, M.; Inoue, S.; Nishida, T. Deep Recurrent Neural Network for Mobile Human Activity Recognition with High Throughput. Artif. Life Robot. 2018, 23, 173–185.
- Yao, S.; Hu, S.; Zhao, Y.; Zhang, A.; Abdelzaher, T. DeepSense: A Unified Deep Learning Framework for Time-Series Mobile Sensing Data Processing. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017.
- Yu, S.; Qin, L. Human Activity Recognition with Smartphone Inertial Sensors Using Bidir-LSTM Networks. In Proceedings of the 2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE), Hohhot, China, 14–16 September 2018.
- Zhang, L.; Wu, X.; Luo, D. Real-Time Activity Recognition on Smartphones Using Deep Neural Networks. In Proceedings of the 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing, Autonomic and Trusted Computing, and Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015.
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357.
- Galar, M.; Fernandez, A.; Barrenechea, E.; Bustince, H.; Herrera, F. A Review on Ensembles for the Class Imbalance Problem: Bagging-, Boosting-, and Hybrid-Based Approaches. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2012, 42, 463–484.
- Abdi, L.; Hashemi, S. To Combat Multi-Class Imbalanced Problems by Means of Over-Sampling Techniques. IEEE Trans. Knowl. Data Eng. 2016, 28, 238–251.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. Human Activity Recognition on Smartphones using a Multiclass Hardware-Friendly Support Vector Machine. In Ambient Assisted Living and Home Care; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 216–223.
- Casale, P.; Pujol, O.; Radeva, P. Human activity recognition from accelerometer data using a wearable device. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Las Palmas de Gran Canaria, Spain, 8–10 June 2011.
- Altun, K.; Barshan, B.; Tunçel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 2010, 43, 3605–3620.
Reference | Method | Sensor | Activity Classes | Accuracy
---|---|---|---|---
Gu et al. [8] | SDAE | Acc + Gyr + Mag + Bar | still, running, walking, upstairs, downstairs, upElevator, downElevator, false motion | 94.34%
Charissa et al. [43] | CNN + tFFT | Acc + Gyr | walking, sitting, upstairs, downstairs, standing, laying | 95.75%
Song-Mi et al. [44] | 1D-CNN | Acc | run, walk, still | 92.71%
Mario [45] | CNN | Acc | walking, sitting, jumping, lying, climbing_up, standing, running, climbing_down | 94%
Masaya Inoue et al. [50] | RNN | Acc | standing, sitting, downstairs, laying, walking, upstairs | 95.42%
Yao et al. [51] | CNN + RNN | Acc + Gyr | standing, climbStair-down, biking, walking, sitting, climbStair-up | 94.20%
Yu et al. [52] | LSTM | Acc + Gyr | walking, upstairs, standing, sitting, downstairs, laying down | 93.79%
Zhang et al. [53] | DBN | Acc | walking, running, standing, sitting, upstairs, downstairs, lying | 98.60%
Class | Activity | Description
---|---|---
Stationary Activities | standing | The subject stands still for 5 min
 | sleeping | The subject sleeps on the sofa for 5 min; small movements, such as changing the lying posture, are allowed
 | watching TV | The subject watches TV for 5 min while sitting on the sofa in a comfortable position; changing the sitting posture is allowed
Dynamic Activities | walking | The subject walks on a treadmill at constant speed for 5 min
 | running | The subject runs on a treadmill for 5 min
 | sweeping | The subject sweeps the room with a vacuum cleaner for 5 min
Transitional Activities | stand-to-sit | Standing for 15 s, then sitting on the sofa; repeated 15 times
 | sit-to-stand | Sitting on the sofa for 10 s, then standing up; repeated 15 times
 | stand-to-walk | Standing for 15 s, then walking for 15 s; repeated 15 times
 | walk-to-stand | Walking for 15 s, then standing for 15 s; repeated 15 times
 | lie-to-sit | Lying on the sofa for 15 s, then sitting up; repeated 15 times
 | sit-to-lie | Sitting on the sofa, then lying down; repeated 15 times
Activity | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---|---
standing | 97.03 | 94.72 | 97.03 | 95.86
sleeping | 97.32 | 98.37 | 97.32 | 97.84
watching TV | 96.82 | 92.54 | 96.82 | 94.63
walking | 86.75 | 85.71 | 86.75 | 86.25
running | 95.73 | 91.81 | 95.73 | 93.73
sweeping | 88.92 | 82.52 | 88.92 | 85.60
stand-to-sit | 62.16 | 63.01 | 62.16 | 62.59
sit-to-stand | 51.19 | 70.49 | 51.19 | 59.31
stand-to-walk | 35.14 | 39.39 | 35.14 | 37.14
walk-to-stand | 26.51 | 51.16 | 26.51 | 34.92
lie-to-sit | 73.33 | 68.75 | 73.33 | 70.97
sit-to-lie | 58.73 | 58.73 | 58.73 | 58.73
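Note that the per-class Accuracy and Recall columns coincide, because per-class accuracy here is the fraction of a class's true samples that are correctly labeled, which is exactly recall. These metrics can all be derived from the confusion matrix; a minimal sketch with made-up 3-class counts:

```python
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of samples with true class i predicted as class j.
    Returns per-class precision, recall, and F1 (assumes every class is
    predicted at least once, so no division by zero)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sum: everything predicted as class j
    recall = tp / cm.sum(axis=1)      # row sum: everything truly in class i
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 3-class confusion matrix (illustrative numbers only)
cm = [[50, 5, 0],
      [4, 40, 6],
      [1, 5, 44]]
p, r, f1 = per_class_metrics(cm)
```

Here `r` reproduces the per-class accuracy values that a row-normalized confusion matrix (as in Figure 3) displays on its diagonal.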
Activity | Initial Number | After Segmentation | Undersampling | SMOTE | Oversampling
---|---|---|---|---|---
standing | 307,061 | 1198 | 238 | 1200 | 1200
sleeping | 307,109 | 1200 | 238 | 1200 | 1200
watching TV | 306,228 | 1196 | 238 | 1200 | 1200
walking | 300,457 | 1174 | 238 | 1200 | 1200
running | 294,676 | 1151 | 238 | 1200 | 1200
sweeping | 302,052 | 1179 | 238 | 1200 | 1200
stand-to-sit | 61,173 | 239 | 238 | 1200 | 1200
sit-to-stand | 61,035 | 238 | 238 | 1200 | 1200
stand-to-walk | 61,881 | 242 | 238 | 1200 | 1200
walk-to-stand | 61,640 | 242 | 238 | 1200 | 1200
lie-to-sit | 61,454 | 240 | 238 | 1200 | 1200
sit-to-lie | 62,089 | 242 | 238 | 1200 | 1200
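The random-oversampling column above duplicates minority-class windows (with replacement) until every class matches the majority count. A minimal sketch, where the feature dimension and class labels are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y, target=None):
    """Duplicate minority-class samples (with replacement) until every class
    reaches `target` samples (default: the size of the largest class)."""
    X, y = np.asarray(X), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    if target is None:
        target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        idx.extend(members)                                  # keep all originals
        if len(members) < target:
            idx.extend(rng.choice(members, target - len(members)))  # duplicates
    idx = np.array(idx)
    return X[idx], y[idx]

# e.g. 1200 windows of a dynamic activity vs. 240 windows of a transition
X = np.zeros((1440, 8))
y = np.array([0] * 1200 + [1] * 240)
X_res, y_res = random_oversample(X, y)
```

SMOTE differs only in the minority-sample generation step: instead of duplicating, it interpolates between a minority sample and one of its nearest minority neighbors [54].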
Activity | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---|---
standing | 96.75 | 95.61 | 96.75 | 95.73
sleeping | 96.74 | 98.79 | 96.74 | 97.75
watching TV | 95.77 | 98.37 | 95.77 | 96.85
walking | 87.34 | 89.46 | 87.34 | 88.39
running | 93.70 | 97.61 | 93.70 | 95.62
sweeping | 84.81 | 89.97 | 84.81 | 87.31
stand-to-sit | 98.92 | 95.80 | 98.92 | 97.34
sit-to-stand | 95.53 | 96.07 | 95.53 | 95.80
stand-to-walk | 95.92 | 93.39 | 95.92 | 94.64
walk-to-stand | 97.53 | 93.92 | 97.53 | 95.69
lie-to-sit | 97.34 | 96.32 | 97.34 | 96.83
sit-to-lie | 98.31 | 93.57 | 98.31 | 95.88
Hyperparameter | Value
---|---
number of hidden layers | 2
number of units per layer | 500
pretraining learning rate | 1 × 10⁻⁷
fine-tuning learning rate | 0.01
iterations | 200
denoising factor | 0.5
data segment size (in seconds) | 5
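The 5 s segment size combines with the sampling rate to fix the window length used in preprocessing. A sketch of non-overlapping windowing, assuming a hypothetical 50 Hz sampling rate and 3-axis stream (the actual rate is not stated in this excerpt):

```python
import numpy as np

def segment(signal, fs, window_s=5):
    """Split a (T, channels) signal into non-overlapping windows of
    window_s seconds; any trailing partial window is discarded."""
    signal = np.asarray(signal)
    win = int(fs * window_s)                 # samples per window
    n = len(signal) // win                   # complete windows
    return signal[: n * win].reshape(n, win, -1)

fs = 50                                      # hypothetical sampling rate (Hz)
sig = np.zeros((307_061, 3))                 # stream length from the table above
windows = segment(sig, fs)                   # 1228 windows of 250 samples each
```

Each window would then be standardized and flattened into the input vector of the SDAE.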
Sensors | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---|---
Acc | 93.36 | 93.35 | 93.36 | 93.27
Gyro | 75.76 | 74.77 | 75.76 | 72.60
Acc + Gyro | 94.88 | 94.88 | 94.88 | 94.86
Methods | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---|---
SVM | 90.95 | 90.81 | 90.95 | 90.60
DT | 88.15 | 87.53 | 88.15 | 87.54
KNN | 84.84 | 84.38 | 84.84 | 84.29
CNN | 81.33 | 79.85 | 81.33 | 80.27
LSTM | 81.63 | 83.56 | 81.63 | 81.62
BiLSTM | 84.75 | 85.23 | 84.75 | 84.63
SDAE | 94.88 | 94.88 | 94.88 | 94.86
Datasets | People | Classes | Sensors | Transitions
---|---|---|---|---
Smartphone | 30 | 6 | Acc + Gyro | No
Chest-mounted | 15 | 7 | Acc | No
UCI | 8 | 19 | Acc + Gyro + Mag | No
Datasets | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---|---
Smartphone | 97.15 | 97.19 | 97.15 | 97.15
Chest-mounted | 89.99 | 89.96 | 89.99 | 89.83
UCI | 95.26 | 95.42 | 95.26 | 95.15
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Ni, Q.; Fan, Z.; Zhang, L.; Nugent, C.D.; Cleland, I.; Zhang, Y.; Zhou, N. Leveraging Wearable Sensors for Human Daily Activity Recognition with Stacked Denoising Autoencoders. Sensors 2020, 20, 5114. https://doi.org/10.3390/s20185114