Atienza et al., 2005 - Google Patents
Intuitive human-robot interaction through active 3D gaze tracking

- Document ID: 5376426600689207052
- Authors: Atienza R; Zelinsky A
- Publication year: 2005
- Publication venue: Robotics Research. The Eleventh International Symposium: With 303 Figures
Snippet
One of the biggest obstacles facing humans and robots is the lack of means for natural and meaningful interaction. Robots find it difficult to understand human intentions since our way of communication is different from the way machines exchange their information. Our aim is …
Classifications

- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
          - G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
          - G06K9/00268—Feature extraction; Face representation
            - G06K9/00281—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
        - G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
        - G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
          - G06K9/46—Extraction of features or characteristics of the image
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/30—Subject of image; Context of image processing
          - G06T2207/30196—Human being; Person
  - G05—CONTROLLING; REGULATING
    - G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
      - G05B2219/00—Program-control systems
        - G05B2219/30—Nc systems
          - G05B2219/40—Robotics, robotics mapping to robotics vision
Similar Documents

| Publication | Title |
|---|---|
| US12131529B2 (en) | Virtual teach and repeat mobile manipulation system |
| Mazhar et al. | Towards real-time physical human-robot interaction using skeleton information and hand gestures |
| US10068135B2 (en) | Face detection, identification, and tracking system for robotic devices |
| Atienza et al. | Intuitive human-robot interaction through active 3D gaze tracking |
| Lee et al. | Visual perception framework for an intelligent mobile robot |
| Kozamernik et al. | Visual quality and safety monitoring system for human-robot cooperation |
| Fernández et al. | A Kinect-based system to enable interaction by pointing in smart spaces |
| Lin et al. | The implementation of augmented reality in a robotic teleoperation system |
| Yonemoto et al. | Egocentric articulated pose tracking for action recognition |
| Huang et al. | Human-to-robot handover control of an autonomous mobile robot based on hand-masked object pose estimation |
| Freddi et al. | Development and experimental validation of algorithms for human-robot interaction in simulated and real scenarios |
| Bdiwi et al. | Handing-over model-free objects to human hand with the help of vision/force robot control |
| Sigalas et al. | Visual estimation of attentive cues in HRI: the case of torso and head pose |
| Hwang et al. | Neural-network-based 3-D localization and inverse kinematics for target grasping of a humanoid robot by an active stereo vision system |
| Knoop et al. | Sensor fusion for model based 3D tracking |
| Morales et al. | An approach to estimate the orientation and movement trend of a person in the vicinity of an industrial robot |
| Kahily et al. | Real-time human detection and tracking from a mobile armed robot using RGB-D sensor |
| Durdu et al. | Morphing estimated human intention via human-robot interactions |
| Bdiwi et al. | Segmentation of model-free objects carried by human hand: Intended for human-robot interaction applications |
| Aguilar et al. | A simple yet smart head module for mobile manipulators |
| Kim et al. | Pointing gesture-based unknown object extraction for learning objects with robot |
| Cheng et al. | A vision-based remote assistance method and its application in object transfer |
| Eayrs et al. | An intelligent autonomous robot with recognition, depth-aware perception, and manipulation |
| Walter et al. | Appearance-based object reacquisition for mobile manipulation |
| Shen et al. | A trifocal tensor based camera-projector system for robot-human interaction |