Rafiq et al., 2025 - Google Patents
A deep learning framework for healthy lifestyle monitoring and outdoor localization
- Document ID
- 17380795427560535210
- Author
- Rafiq M
- Alshammari N
- Alhasson H
- AlHammadi D
- Alshehri M
- Jalal A
- Liu H
- Publication year
- 2025
- Publication venue
- IEEE Access
Snippet
The green research field of ubiquitous computing has been able to draw and hold academics' interest for a while. Recognition and localization of human locomotion have also been widely developed as ubiquitous computing applications. Personal safety, behavior …
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6267—Classification techniques
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
- G06K9/46—Extraction of features or characteristics of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00624—Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in preceding groups
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Similar Documents
| Publication | Title |
|---|---|
| Khan et al. | Robust human locomotion and localization activity recognition over multisensory |
| Cruciani et al. | Feature learning for human activity recognition using convolutional neural networks: A case study for inertial measurement unit and audio data |
| Porzi et al. | A smart watch-based gesture recognition system for assisting people with visual impairments |
| Yao et al. | Deepsense: A unified deep learning framework for time-series mobile sensing data processing |
| Kaghyan et al. | Activity recognition using k-nearest neighbor algorithm on smartphone with tri-axial accelerometer |
| Su et al. | Activity recognition with smartphone sensors |
| Rafiq et al. | A deep learning framework for healthy lifestyle monitoring and outdoor localization |
| Kaur et al. | Human activity recognition: A comprehensive review |
| Rafiq et al. | Wearable sensors-based human locomotion and indoor localization with smartphone |
| Varshney et al. | Human activity recognition by combining external features with accelerometer sensor data using deep learning network model |
| CN116465412 (A) | An improved PDR indoor localization method based on LSTM and attention mechanism |
| Sideridis et al. | Gesturekeeper: Gesture recognition for controlling devices in IoT environments |
| Saeedi | Context-aware personal navigation services using multi-level sensor fusion algorithms |
| Amrani et al. | Leveraging dataset integration and continual learning for human activity recognition |
| Park et al. | Pedestrian stride length estimation based on bidirectional LSTM and CNN architecture |
| Wang | Data feature extraction method of wearable sensor based on convolutional neural network |
| Jiang et al. | Fast, accurate event classification on resource-lean embedded sensors |
| Atashi et al. | Online Dynamic Window (ODW) assisted two-stage LSTM frameworks for indoor localization |
| Alhammadi et al. | A deep learning framework for healthy lifestyle monitoring and outdoor localization |
| Kaghyan et al. | Human movement activity classification approaches that use wearable sensors and mobile devices |
| Gomaa | Comparative analysis of different approaches to human activity recognition based on accelerometer signals |
| Raj et al. | Qualitative analysis of techniques for device-free human activity recognition |
| Saeedi et al. | Context aware mobile personal navigation services using multi-level sensor fusion |
| Rubin Bose et al. | Precise recognition of vision based multi-hand signs using deep single stage convolutional neural network |
| Sheishaa et al. | A context-aware motion mode recognition system using embedded inertial sensors in portable smart devices |