Human Activity Recognition Using Deep Learning

Abstract—The study of human activities has always intrigued researchers. Human actions are acquired as raw time-series signals using the integrated sensors of smartphones and wearable devices. Human Activity Recognition (HAR) has wide applications in rehabilitation centres, geriatric care homes, orphanages, and public places with large crowds. The essential stages of a HAR architecture are feature detection, feature selection, and feature extraction. The majority of existing HAR algorithms implement conventional machine learning techniques for detection and classification; however, these do not produce satisfactory results for complex human activities. Thus, deep learning techniques have attracted the attention of researchers. Deep learning techniques are widely used in applications involving feature extraction, classification, and detection, since they extract features automatically and reduce computational complexity. This paper discusses the existing benchmark datasets used to evaluate the performance of HAR models. A comprehensive review of sensor-based human activity recognition using convolutional neural network-based models, long short-term memory-based models, and hybrid models is presented. Finally, the challenges and significance of human activity recognition are discussed.

Keywords—Convolutional neural network, human activity recognition, long short-term memory, smartphones, wearable sensors.

I. INTRODUCTION

Wearable smart electronic devices operating on the principle of the Internet of Things (IoT) have grown exponentially in the market over the years. The healthcare industry has improved markedly because the risks involved can now be detected early. The signals are collected from different body parts using the integrated sensors of smart wearable devices and smartphones, and the computed results are categorized into different classes of actions. The complete procedure of selecting and detecting actions is called Human Activity Recognition (HAR). HAR is the study of different human actions, categorized as simple, complex, and postural activities. Its applications include elders' homes, sports injury detection, and medical diagnosis [1]. Figure 1 depicts a generalized methodology of HAR using deep learning: the signals are acquired from the human body using sensors and processed using long short-term memory-based architectures.

Various machine learning and deep learning-based architectures exist in the literature that exploit the signals received from different human body parts such as the heart, brain, chest, hand, wrist, and leg. Environmental sensors and vision systems are less widely used because of their large space occupancy and high cost compared with reliable, cost-effective sensors such as the accelerometer, ambient sensors, electrocardiogram, gyroscope, inertial measurement units, magnetometer, object sensor, and temperature sensor [2]. These sensors are used to detect human actions in real-time applications [3]. Deep learning-based architectures are adopted for human action recognition because they reduce the computational complexity of pre-processing and feature extraction, which makes the models more robust. Deep learning techniques can also capture temporal and spatial information simultaneously, and the computed results achieve high accuracy even with smaller datasets.

This paper presents a comprehensive review of HAR-based computer vision problems. Various HAR-based models exist in the literature. The main contributions of this paper are as follows. Section II presents the attributes of existing benchmark datasets on HAR, covering datasets collected in indoor environments, such as PAMAP2, UCI-HAR, and OPPORTUNITY, and in outdoor environments, such as HHAR, MHEALTH, and position-aware activity recognition. A detailed discussion of the convolutional neural network-based (CNN-based), long short-term memory-based (LSTM-based), and hybrid models used for HAR is presented in Section III, along with the pros and cons of the existing HAR techniques. Section IV presents the various challenges faced while designing a HAR model, along with its significance. The concluding remarks are presented in Section V.

II. BENCHMARK DATASETS

Various benchmark datasets exist in the literature to evaluate the performance of machine learning-based and deep learning-based HAR architectures [4,5]. Numerous smart devices' integrated sensors are available to collect signals from different human body parts such as the chest, head, arms, legs, and thighs. For instance, electrodes are used to collect electrical signals from the heart, brain, and muscles to obtain the electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG), respectively. With advancing technology, smartphones host various applications that use the Accelerometer (A) to estimate acceleration, Ambient sensors (AM) to provide lighting details, the Gyroscope (G) to find angular displacement about various axes, the Magnetometer (M) to measure geomagnetic field intensity, the Object sensor (Obj) to detect the presence of other objects, and the Temperature sensor (T) to provide information on the ambient temperature level. The datasets consist of recordings of various activities such as walking, standing, sitting, lying, jumping, and running.
Authorized licensed use limited to: Manipal University. Downloaded on May 23,2024 at 08:44:08 UTC from IEEE Xplore. Restrictions apply.
[Figure: multiple sensors connected to the human body → signals obtained from the human body → LSTM-based neural network architecture]

Fig. 1. A generalized methodology of human activity recognition using a deep learning algorithm.
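Before reaching a network such as the one in Fig. 1, raw sensor streams are typically segmented into fixed-length, overlapping windows. The following sketch illustrates this step; the 50 Hz rate and 128-sample (2.56 s) window with 50% overlap follow UCI-HAR's convention, while the function name and synthetic data are ours.

```python
import numpy as np

def sliding_windows(signal: np.ndarray, window: int, step: int) -> np.ndarray:
    """Segment a (time, channels) signal into overlapping fixed-length windows.

    Returns an array of shape (n_windows, window, channels).
    """
    n = (signal.shape[0] - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

# Example: 10 s of synthetic 3-axis accelerometer data at 50 Hz,
# cut into 128-sample windows with 50% overlap (step = 64 samples).
fs = 50
acc = np.random.randn(10 * fs, 3)
windows = sliding_windows(acc, window=128, step=64)
print(windows.shape)  # (6, 128, 3)
```

Each window is then treated as one training example, labelled with the activity performed during that interval.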
TABLE I. SUMMARY OF BENCHMARK DATASETS FOR HAR (A-ACCELEROMETER, AM-AMBIENT SENSORS, ECG-ELECTROCARDIOGRAM, G-GYROSCOPE, IMU-INERTIAL MEASUREMENT UNITS, M-MAGNETOMETER, OBJ-OBJECT SENSOR, T-TEMPERATURE SENSOR) [6]

| Dataset | Device(s) | Position | Sensors | Activities | Simple | Complex | Postural | Subjects | Sampling rate |
|---|---|---|---|---|---|---|---|---|---|
| **Indoor environment** | | | | | | | | | |
| PAMAP2 [7] | 3 IMU units, 1 heart rate monitor | Wrist, chest, ankle | A, G, O, T | 12 | ✔ | ✔ | ✘ | 9 | 100 Hz |
| UCI-HAR [8] | Smartphone | Left belt, no position specified | A, G | 6 | ✔ | ✘ | ✘ | 30 | 50 Hz |
| OPPORTUNITY [9] | Body-worn | Upper body, hip, leg, shoes | A, AM, Obj, G, M | 6 | ✔ | ✔ | ✘ | 4 | - |
| WISDM [10] | Smartphone, integrated smartwatch | Trousers' pocket, smartwatch on wrist | A, G | 18 | ✔ | ✔ | ✘ | 51 | 20 Hz |
| UniMiB SHAR [11] | Smartphone | Left & right trousers' pockets | A | 9 | ✔ | ✘ | ✘ | 30 | 50 Hz |
| MobiAct [12] | Smartphone | Trouser pockets | A, G | 12 | ✔ | ✘ | ✘ | 66 | 100 Hz |
| HAPT [13] | Smartphone | Waist | A | 21 | ✔ | ✘ | ✔ | 30 | 50 Hz |
| Cooking dataset [14] | Wearable device | No position specified | Five IMU | 16 | ✔ | ✔ | ✘ | 7 | 110 Hz |
| Dataset created for Chen et al. [15] | Smartphone | Trousers' pocket, waist | A | 5 | ✔ | ✘ | ✘ | 100 | 100 Hz |
| Dataset created for Zhu et al. [16] | Smartphone | Handheld, trousers' pocket, and backpack | A, G, M | 7 | ✔ | ✘ | ✘ | 100 | 50 Hz |
| Dataset created for Khan et al., 2018 [17] | 2 smartphones, 1 smartwatch | Pouch around the waist with tight grip, smartwatch | A | 8 | ✔ | ✘ | ✘ | 15 | 50 Hz |
| Sequential Weakly Labeled Multi-Activity Dataset (SWLM) [18] | iPhone | Right wrist | A | 3 | ✔ | ✘ | ✘ | 10 | 50 Hz |
| w-HAR [19] | Wearable | Right ankle | A | 7 | ✔ | ✘ | ✔ | 22 | 250 Hz |
| **Outdoor environment** | | | | | | | | | |
| HHAR [20] | 4 smartwatches & 8 smartphones | Waist, pouch | A, G | 6 | ✔ | ✘ | ✘ | 9 | Highest |
| MHEALTH [21,22] | Shimmer wearable sensors | Opposite wrist and ankle, chest | A, G, M, ECG signals | 12 | ✔ | ✘ | ✘ | 10 | 50 Hz |
| Position-aware activity recognition (HAR) [23] | Wearable device | Chest, head, thigh, upper arm, waist | A | 8 | ✔ | ✘ | ✘ | 15 | 50 Hz |
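The datasets in Table I are recorded at rates ranging from 20 Hz (WISDM) to 250 Hz (w-HAR), so experiments that combine or compare them usually resample all streams to a common rate first. A simple linear-interpolation sketch follows; the function name is ours, and linear interpolation is assumed adequate for slowly varying body-motion signals.

```python
import numpy as np

def resample_linear(signal: np.ndarray, fs_in: float, fs_out: float) -> np.ndarray:
    """Linearly resample a (time, channels) signal from fs_in Hz to fs_out Hz."""
    duration = signal.shape[0] / fs_in
    t_in = np.arange(signal.shape[0]) / fs_in
    t_out = np.arange(int(round(duration * fs_out))) / fs_out
    return np.stack([np.interp(t_out, t_in, signal[:, c])
                     for c in range(signal.shape[1])], axis=1)

# Example: bring one minute of 20 Hz WISDM-style data up to 50 Hz (UCI-HAR rate).
x20 = np.random.randn(20 * 60, 3)
x50 = resample_linear(x20, fs_in=20, fs_out=50)
print(x50.shape)  # (3000, 3)
```

For large rate changes, a proper anti-aliasing filter (e.g., a polyphase resampler) would be preferable, but linear interpolation keeps the example self-contained.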
Still, predicting the activity is a challenging task, because everyday life contains many different human activities with transitions every few seconds. There are also activities that cannot be recorded in a closed laboratory environment, such as mopping the floor, taking a bath, and washing clothes, and only a few datasets are collected in outdoor environments. The physical and health-related attributes of the subjects, such as height, weight, age, strength, stamina, and endurance, must be considered to avoid discrepancies. Table I discusses the attributes of the benchmark datasets for human activity recognition.

III. HAR USING DEEP LEARNING

Deep learning techniques have the advantage of automatic feature extraction, which also eliminates the need to pre-process the data. The deep learning techniques for HAR applications are categorized as CNN-based models, LSTM-based models, and hybrid models.

The greatest advantages of using CNNs for HAR are their local dependency and scale invariance; they detect and extract non-linear features in complex activities [24]. CNN models are used for the classification of image sequences. LSTM models operate on raw time-series signal sequences instead of images; their feedback connections are used to classify sequential time-series data. The hybrid models use a combination of CNN and LSTM architectures; their advantages include lower computation time and the ability to locate the positions of different sensors. Table II presents a review of the various deep learning techniques adopted for HAR applications.

IV. CHALLENGES AND SIGNIFICANCE OF HAR

Many applications in the field of artificial intelligence involve the study of human actions and gestures, such as smart electronic devices, injury detection, and ambient assisted living. Although numerous techniques exist in the literature to achieve this task, such as convolutional neural networks, long short-term memory architectures, and hybrid models, they do not produce the desired outcome in real-time applications, because the existing models are trained on databases comprising simple activities. This section discusses the various challenges faced in the development and implementation of these techniques.
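As a rough illustration of the hybrid models discussed in Section III, the following numpy sketch runs one accelerometer window through a 1-D convolution (local feature extraction) and a single LSTM layer (temporal summary), then scores six activity classes. All layer sizes and the random, untrained weights are purely illustrative, not taken from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution + ReLU: x is (time, ch_in), kernels is (ch_out, k, ch_in)."""
    t_out = x.shape[0] - kernels.shape[1] + 1
    out = np.empty((t_out, kernels.shape[0]))
    for f, w in enumerate(kernels):
        for t in range(t_out):
            out[t, f] = np.sum(x[t:t + w.shape[0]] * w)
    return np.maximum(out, 0.0)

def lstm_last_hidden(x, W, U, b, h_dim):
    """Run a single-layer LSTM over (time, features); return the final hidden state."""
    h = np.zeros(h_dim); c = np.zeros(h_dim)
    for x_t in x:
        z = W @ x_t + U @ h + b              # stacked gates: input, forget, cell, output
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Toy hybrid forward pass over one 128-sample, 3-axis accelerometer window.
window = rng.normal(size=(128, 3))
feats = conv1d(window, rng.normal(size=(8, 5, 3)) * 0.1)       # (124, 8) local features
h = lstm_last_hidden(feats, rng.normal(size=(64, 8)) * 0.1,
                     rng.normal(size=(64, 16)) * 0.1, np.zeros(64), 16)
logits = rng.normal(size=(6, 16)) @ h                          # scores for 6 activities
print(int(np.argmax(logits)))   # predicted class index (untrained, so arbitrary)
```

In practice such a model would be built and trained with a deep learning framework; the point here is only the data flow: convolution over time, recurrence over the convolved features, then a linear classifier.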
TABLE II. A SUMMARY OF DEEP LEARNING BASED HAR TECHNIQUES
| Reference | Deep learning model | Dataset(s) used | Accuracy of network | Advantages | Disadvantages |
|---|---|---|---|---|---|
| Avilés-Cruz et al., 2019 [25] | CNN | UCI-HAR | 100% | Computed subject-independent dataset | Complex activities are not recognized |
| Wan et al., 2019 [26] | CNN | UCI-HAR; PAMAP2 | 92.71%; 91.00% | Minimal cost of hardware and energy consumption | Hyperparameter tuning is not performed |
| Cheng et al., 2020 [30] | CNN | PAMAP2; UniMiB SHAR; OPPORTUNITY | 94.01%; 77.31%; 81.18% | Less computational cost and complexity | Hyperparameter tuning is not performed |
| Cruciani et al., 2020 [31] | CNN | UCI-HAR; DCASE (audio-based HAR) | 91.98%; 92.30% | Concept of transfer learning is implemented | Lack of optimization |
| Xiao et al., 2020 [32] | CNN | AMASS; DIP; AMASS & DIP | 87.46%; 89.08%; 91.15% | Online training and offline testing of complex activities is performed | Lack of optimization |
| Shojaedini et al., 2020 [33] | CNN | WISDM | Improves accuracy by 5% over traditional methods | Residual connections eliminate the vanishing gradient issue | Complex activities are not considered |
| Agarwal et al., 2020 [35] | LSTM | WISDM | 95.78% | Efficient for edge computing | Information from multiple sensors cannot be collected |
| Zhou et al., 2020 [36] | LSTM | UniMiB SHAR; Position-aware activity recognition with wearable devices | F1-score 79%; F1-score 97% with devices positioned on chest and shin | Weakly labelled sensor data is identified | Size of dataset is small |
| Xia et al., 2020 [37] | Hybrid | UCI-HAR; WISDM; OPPORTUNITY | F1-score 95.78%; 95.85%; 92.63% | High accuracy and robustness using fewer parameters | Complex and postural activities are not considered |
| Qi et al., 2020 [38] | Hybrid | Smartphone-based adaptive HAR dataset | Waist 92.93 ± 3.32%; Pocket 88.37 ± 4.87% | Efficient for motion detection in dynamic situations | High computation time |
| Mukherjee et al., 2020 [39] | Hybrid | WISDM; UniMiB SHAR; MobiAct | 97.1%; ADL & Fall 92.3%, ADL 98.7%, Fall 84.8%, 2 ADL & 2 Fall 99.4%; 95.1% | Detection of sequential activities on data | Transfer learning technique can be incorporated for further improvement in accuracy |
| Wang et al., 2020a [40] | Hybrid | HAPT dataset | 95.87% | Efficient results for postural transitions of activities | Increase the number of complex activities for training |
| Kiran et al., 2021 [41] | CNN | IXMAS; UCF Sports; YouTube; UT-Interaction; KTH | 89.6%; 99.7%; 100%; 96.7%; 96.6% | High accuracy with less computation time for a large dataset | Complex and postural activities are not considered |
| Ramanujan et al., 2021 [43] | LSTM | Leave-one-subject-out validation | Avg. sensitivity 83%; avg. specificity 91.7% | Gyroscope is applied for template matching | Distortion in IMU signals due to body movement |
| Chen et al., 2021 [44] | LSTM | Self-collected dataset | - | Effective results for face detection in virtual meetings and conferences | Detection of neck movement is difficult due to facial hair |
| Tang et al., 2020 [45] | CNN | OPPORTUNITY; PAMAP2; UCI-HAR; WISDM | 88.09%; 93.50%; 96.90%; 98.82% | Light neural network designed using Lego layers with high speed and accuracy | Results are not computed for complex activities |
| Hanif et al., 2022 [46] | LSTM | Self-formulated dataset | 99.34% | Applicable to complex human activities | Postural activities can be considered for further improvement |
| Wei et al., 2022 [47] | Hybrid | Synthia; MSRAction3D | 87.03%; 89.22% | Temporal and spatial information are used to enhance resolution | Neural networks should have a greater number of layers |
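Several entries in Table II list missing hyperparameter tuning as a drawback. A minimal exhaustive grid search over the hyperparameters discussed in Section IV (kernel size, batch size, epochs, optimizer) might be sketched as follows; the search space and the `evaluate` scoring function are placeholders standing in for training and validating a real HAR model, not taken from any cited work.

```python
import itertools

# Hypothetical search space.
grid = {
    "kernel_size": [3, 5, 7],
    "batch_size": [32, 64, 128],
    "epochs": [20, 50],
    "optimizer": ["adam", "sgd"],
}

def evaluate(config):
    """Placeholder: a real implementation would train the model with this
    configuration and return its validation accuracy."""
    return -abs(config["kernel_size"] - 5) - abs(config["batch_size"] - 64) / 64

best_score, best_config = float("-inf"), None
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    score = evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config)
# {'kernel_size': 5, 'batch_size': 64, 'epochs': 20, 'optimizer': 'adam'}
```

For larger spaces, random search or Bayesian optimization scales better than this exhaustive loop, but the structure, i.e. one training-and-validation run per candidate configuration, is the same.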
The primary requirement for designing any deep learning model is the availability of a large dataset for training and testing. Collecting and annotating the data obtained from sensory activities is an expensive and time-consuming task. The accuracy of the designed models drops in real-time scenarios because the data is collected in a laboratory under a controlled environment. Sometimes a dataset comprises 100k images or data values but the sample size is small. Training, validation, and testing must be performed on a large sample size, i.e., a greater number of people over a longer duration; this also reduces the dependency of the results on the behaviour of a few people.

The performance of HAR architectures is degraded by a lack of hyperparameter tuning, a major issue observed in the majority of existing works. Thus, an appropriate kernel size, number of epochs, batch size, and optimizer must be selected. Sometimes there is an ambiguity in the judgement of class that decreases the accuracy of the system. For instance, if an individual's heart rate or blood pressure is high, it is difficult to tell whether the person is exercising or under stress.

UCI-HAPT is the only dataset that addresses the transition from one human activity to another, called postural transitions. The majority of recorded human activities are sleeping, walking, and sitting, but there are continuous transitions between activities, such as walking-to-standing and standing-to-sitting. There are also a large number of complex activities, such as bathing, mopping the floor, and washing clothes, that are not included in the benchmark datasets. Moreover, humans tend to perform multiple activities at the same time, so a multimodal sensor detection architecture is needed. One possible solution is to train the architecture with the signals obtained by fusing multimodal sensors.

The biggest challenge in HAR is the implementation of the designed architecture in smart electronic devices such as
smartwatches and smartphones, because of memory storage and compatibility issues. However, there are multiple sensor-based and motion-based applications in smartphones, and health-related smart electronic devices that display heart rate and pulse rate are available in the market.

V. FUTURE SCOPE

This paper will help researchers design better human activity recognition algorithms. A novel dataset can be designed that includes activities difficult to record in a closed laboratory environment, such as mopping the floor, taking a bath, and washing clothes. A recurrent neural network or recurrent convolutional neural network architecture can be designed to minimize the challenges discussed in the previous section.

VI. CONCLUSION

Deep learning techniques have proven efficient in the field of detection and classification. An efficient human activity recognition algorithm can be designed by collecting signals using the integrated sensors of smartphones and wearable devices. In this paper, an exhaustive review of various deep learning-based human activity recognition algorithms, namely convolutional neural networks and long short-term memory networks, is presented. Various state-of-the-art datasets used for evaluating the performance of HAR models are also discussed. Finally, the significance of and issues involved in the processing of human actions are discussed.

REFERENCES

[1] H.F. Nweke, Y.W. Teh, G. Mujtaba, and M.A. Al-Garadi, “Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions,” Information Fusion, vol. 46, pp. 147-170, 2019.
[2] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A survey,” Pattern Recognition Letters, vol. 119, pp. 3-11, 2019.
[3] M. Masoud, Y. Jaradat, A. Manasrah, and I. Jannoud, “Sensors of smart devices in the internet of everything (IoE) era: big opportunities and massive doubts,” Journal of Sensors, 2019.
[4] Anon., UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/index.php, Date of Access: December 12, 2022.
[5] Anon., Kaggle datasets. Available online: https://www.kaggle.com/datasets, Date of Access: December 12, 2022.
[6] E. Ramanujam, T. Perumal, and S. Padmavathi, “Human activity recognition with smartphone and wearable sensors using deep learning techniques: A review,” IEEE Sensors Journal, vol. 21, no. 12, pp. 13029-13040, 2021.
[7] A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for activity monitoring,” in 2012 16th International Symposium on Wearable Computers, pp. 108-109, IEEE, June 2012.
[8] D. Anguita, A. Ghio, L. Oneto, X. Parra Perez, and J.L. Reyes Ortiz, “A public domain dataset for human activity recognition using smartphones,” in Proceedings of the 21st International European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pp. 437-442, 2013.
[9] R. Chavarriaga, H. Sagha, A. Calatroni, S.T. Digumarti, G. Tröster, J.D.R. Millán, and D. Roggen, “The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition,” Pattern Recognition Letters, vol. 34, no. 15, pp. 2033-2042, 2013.
[10] G.M. Weiss, K. Yoneda, and T. Hayajneh, “Smartphone and smartwatch-based biometrics using activities of daily living,” IEEE Access, vol. 7, pp. 133190-133202, 2019.
[11] D. Micucci, M. Mobilio, and P. Napoletano, “UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones,” Applied Sciences, vol. 7, no. 10, p. 1101, 2017.
[12] C. Chatzaki, M. Pediaditis, G. Vavoulas, and M. Tsiknakis, “Human daily activity and fall recognition using a smartphone’s acceleration sensor,” in International Conference on Information and Communication Technologies for Ageing Well and e-Health, pp. 100-118, Springer, Cham, April 2016.
[13] J.L. Reyes-Ortiz, L. Oneto, A. Ghio, A. Samá, D. Anguita, and X. Parra, “Human activity recognition on smartphones with awareness of basic activities and postural transitions,” in International Conference on Artificial Neural Networks, pp. 177-184, Springer, Cham, September 2014.
[14] F. Krüger, A. Hein, K. Yordanova, and T. Kirste, “Recognising user actions during cooking task (cooking task dataset) IMU data,” University Library, University of Rostock, 2017.
[15] Y. Chen and Y. Xue, “A deep learning approach to human activity recognition based on single accelerometer,” in 2015 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1488-1492, IEEE, October 2015.
[16] R. Zhu, Z. Xiao, Y. Li, M. Yang, Y. Tan, L. Zhou, S. Lin, and H. Wen, “Efficient human activity recognition solving the confusing activities via deep ensemble learning,” IEEE Access, vol. 7, pp. 75490-75499, 2019.
[17] M.A.A.H. Khan, N. Roy, and A. Misra, “Scaling human activity recognition via deep learning-based domain adaptation,” in 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1-9, IEEE, March 2018.
[18] K. Wang, J. He, and L. Zhang, “Sequential weakly labeled multiactivity localization and recognition on wearable sensors using recurrent attention networks,” IEEE Transactions on Human-Machine Systems, vol. 51, no. 4, pp. 355-364, 2021.
[19] G. Bhat, N. Tran, H. Shill, and U.Y. Ogras, “w-HAR: An activity recognition dataset and framework using low-power wearable devices,” Sensors, vol. 20, no. 18, p. 5356, 2020.
[20] A. Stisen, H. Blunck, S. Bhattacharya, T.S. Prentow, M.B. Kjærgaard, A. Dey, T. Sonne, and M.M. Jensen, “Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition,” in Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, pp. 127-140, November 2015.
[21] O. Banos, R. Garcia, J.A. Holgado-Terriza, M. Damas, H. Pomares, I. Rojas, A. Saez, and C. Villalonga, “mHealthDroid: a novel framework for agile development of mobile health applications,” in International Workshop on Ambient Assisted Living, pp. 91-98, Springer, Cham, December 2014.
[22] O. Banos, C. Villalonga, R. Garcia, A. Saez, M. Damas, J.A. Holgado-Terriza, S. Lee, H. Pomares, and I. Rojas, “Design, implementation and validation of a novel open framework for agile development of mobile health applications,” Biomedical Engineering Online, vol. 14, no. 2, pp. 1-20, 2015.
[23] T. Sztyler, H. Stuckenschmidt, and W. Petrich, “Position-aware activity recognition with wearable devices,” Pervasive and Mobile Computing, vol. 38, pp. 281-295, 2017.
[24] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A survey,” Pattern Recognition Letters, vol. 119, pp. 3-11, 2019.
[25] C. Avilés-Cruz, A. Ferreyra-Ramírez, A. Zúñiga-López, and J. Villegas-Cortéz, “Coarse-fine convolutional deep-learning strategy for human activity recognition,” Sensors, vol. 19, no. 7, p. 1556, 2019.
[26] S. Wan, L. Qi, X. Xu, C. Tong, and Z. Gu, “Deep learning models for real-time human activity recognition with smartphones,” Mobile Networks and Applications, vol. 25, no. 2, pp. 743-755, 2020.
[27] Y. Tang, Q. Teng, L. Zhang, F. Min, and J. He, “Layer-wise training convolutional neural networks with smaller filters for human activity recognition using wearable sensors,” IEEE Sensors Journal, vol. 21, no. 1, pp. 581-592, 2020.
[28] T. Su, H. Sun, C. Ma, L. Jiang, and T. Xu, “HDL: Hierarchical deep learning model based human activity recognition using smartphone sensors,” in 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE, July 2019.
[29] A. Gumaei, M.M. Hassan, A. Alelaiwi, and H. Alsalman, “A hybrid deep learning model for human activity recognition using multimodal body sensing data,” IEEE Access, vol. 7, pp. 99152-99160, 2019.
[30] X. Cheng, L. Zhang, Y. Tang, Y. Liu, H. Wu, and J. He, “Real-time human activity recognition using conditionally parametrized convolutions on mobile and wearable devices,” IEEE Sensors Journal, vol. 22, no. 6, pp. 5889-5901, 2022.
[31] F. Cruciani, A. Vafeiadis, C. Nugent, I. Cleland, P. McCullagh, K. Votis, D. Giakoumis, D. Tzovaras, L. Chen, and R. Hamzaoui, “Feature learning for human activity recognition using convolutional neural networks,” CCF Transactions on Pervasive Computing and Interaction, vol. 2, no. 1, pp. 18-32, 2020.
[32] F. Xiao, L. Pei, L. Chu, D. Zou, W. Yu, Y. Zhu, and T. Li, “A deep learning method for complex human activity recognition using virtual wearable sensors,” in International Conference on Spatial Data and Intelligence, pp. 261-270, Springer, Cham, May 2020.
[33] S.V. Shojaedini and M.J. Beirami, “Mobile sensor based human activity recognition: distinguishing of challenging activities by applying long short-term memory deep learning modified by residual network concept,” Biomedical Engineering Letters, vol. 10, no. 3, pp. 419-430, 2020.
[34] L. Wang and R. Liu, “Human activity recognition based on wearable sensor using hierarchical deep LSTM networks,” Circuits, Systems, and Signal Processing, vol. 39, no. 2, pp. 837-856, 2020.
[35] P. Agarwal and M. Alam, “A lightweight deep learning model for human activity recognition on edge devices,” Procedia Computer Science, vol. 167, pp. 2364-2373, 2020.
[36] X. Zhou, W. Liang, I. Kevin, K. Wang, H. Wang, L.T. Yang, and Q. Jin, “Deep-learning-enhanced human activity recognition for Internet of healthcare things,” IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6429-6438, 2020.
[37] K. Xia, J. Huang, and H. Wang, “LSTM-CNN architecture for human activity recognition,” IEEE Access, vol. 8, pp. 56855-56866, 2020.
[38] W. Qi, H. Su, and A. Aliverti, “A smartphone-based adaptive recognition and real-time monitoring system for human activities,” IEEE Transactions on Human-Machine Systems, vol. 50, no. 5, pp. 414-423, 2020.
[39] D. Mukherjee, R. Mondal, P.K. Singh, R. Sarkar, and D. Bhattacharjee, “EnsemConvNet: a deep learning approach for human activity recognition using smartphone sensors for healthcare applications,” Multimedia Tools and Applications, vol. 79, no. 41, pp. 31663-31690, 2020.
[40] H. Wang, J. Zhao, J. Li, L. Tian, P. Tu, T. Cao, Y. An, K. Wang, and S. Li, “Wearable sensor-based human activity recognition using hybrid deep learning techniques,” Security and Communication Networks, 2020.
[41] S. Kiran, M.A. Khan, M.Y. Javed, M. Alhaisoni, U. Tariq, Y. Nam, R. Damasevicius, and M. Sharif, “Multi-layered deep learning features fusion for human action recognition,” 2021.
[42] N. Rashid, B.U. Demirel, and M.A. Al Faruque, “AHAR: Adaptive CNN for energy-efficient human activity recognition in low-power edge devices,” IEEE Internet of Things Journal, 2022.
[43] E. Nemati, S. Zhang, T. Ahmed, M.M. Rahman, J. Kuang, and A. Gao, “Coughbuddy: Multi-modal cough event detection using earbuds platform,” in 2021 IEEE 17th International Conference on Wearable and Implantable Body Sensor Networks (BSN), pp. 1-4, IEEE, July 2021.
[44] T. Chen, Y. Li, S. Tao, H. Lim, M. Sakashita, R. Zhang, F. Guimbretiere, and C. Zhang, “NeckFace: Continuously tracking full facial expressions on neck-mounted wearables,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 5, no. 2, pp. 1-31, 2021.
[45] Y. Tang, Q. Teng, L. Zhang, F. Min, and J. He, “Layer-wise training convolutional neural networks with smaller filters for human activity recognition using wearable sensors,” IEEE Sensors Journal, vol. 21, no. 1, pp. 581-592, 2020.
[46] M.A. Hanif, T. Akram, A. Shahzad, M.A. Khan, U. Tariq, J.I. Choi, Y. Nam, and Z. Zulfiqar, “Smart devices based multisensory approach for complex human activity recognition,” 2022.
[47] Y. Wei, H. Liu, T. Xie, Q. Ke, and Y. Guo, “Spatial-temporal transformer for 3D point cloud sequences,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1171-1180, 2022.