Heravi et al., 2020 - Google Patents
Learning an action-conditional model for haptic texture generation
- Document ID
- 2313876407320408070
- Author
- Heravi N
- Yuan W
- Okamura A
- Bohg J
- Publication year
- 2020
- Publication venue
- 2020 IEEE International Conference on Robotics and Automation (ICRA)
Snippet
Rich haptic sensory feedback in response to user interactions is desirable for an effective, immersive virtual reality or teleoperation system. However, this feedback depends on material properties and user interactions in a complex, non-linear manner. Therefore, it is …
Concepts
- material (sections: abstract, description; count: 62)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
- G06K9/6247—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6267—Classification techniques
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Similar Documents
Publication | Title
---|---
Heravi et al. | Learning an action-conditional model for haptic texture generation
Li et al. | Dynamic gesture recognition in the internet of things
Boulahia et al. | Early, intermediate and late fusion strategies for robust deep learning-based multimodal action recognition
Taheri et al. | GRAB: A dataset of whole-body human grasping of objects
Zhao et al. | Compositional human-scene interaction synthesis with semantic control
Aslan et al. | Shape feature encoding via fisher vector for efficient fall detection in depth-videos
Faria et al. | Extracting data from human manipulation of objects towards improving autonomous robotic grasping
Yang et al. | Binding touch to everything: Learning unified multimodal tactile representations
Meena | A study on hand gesture recognition technique
CN110073369A (en) | The unsupervised learning technology of time difference model
Zhang et al. | Handsense: smart multimodal hand gesture recognition based on deep neural networks
Zapata-Impata et al. | Generation of tactile data from 3D vision and target robotic grasps
Abid et al. | Dynamic hand gesture recognition for human-robot and inter-robot communication
Patel et al. | Deep tactile experience: Estimating tactile sensor output from depth sensor data
Jenkins et al. | Interactive human pose and action recognition using dynamical motion primitives
Dhandapani et al. | Body language recognition using machine learning
Hasan et al. | Gesture feature extraction for static gesture recognition
Zerrouki et al. | Exploiting deep learning-based LSTM classification for improving hand gesture recognition to enhance visitors’ museum experiences
Sudhakar et al. | Controlling the world by sleight of hand
Lopes et al. | Towards grounded human-robot communication
Rahman et al. | Intelligent system for Islamic prayer (salat) posture monitoring
Udendhran et al. | Enhancing representational learning for cloud robotic vision through explainable fuzzy convolutional autoencoder framework
CN113176822A (en) | Virtual user detection
Rahman et al. | Monitoring and alarming activity of islamic prayer (salat) posture using image processing
CN116185192B (en) | Eye movement recognition VR interaction method based on denoising variational encoder