Bae et al., 2023 - Google Patents
SiT dataset: socially interactive pedestrian trajectory dataset for social navigation robots
- Document ID: 239799703651133768
- Authors: Bae J, Kim J, Yun J, Kang C, Choi J, Kim C, Lee J, Choi J, Choi J
- Publication year: 2023
- Publication venue: Advances in Neural Information Processing Systems
- Snippet: To ensure secure and dependable mobility in environments shared by humans and robots, social navigation robots should possess the capability to accurately perceive and predict the trajectories of nearby pedestrians. In this paper, we present a novel dataset of pedestrian …
Classifications
- G06T7/20—Analysis of motion (G06T7/00—Image analysis)
- G06T2207/30—Subject of image; Context of image processing (G06T2207/00—Indexing scheme for image analysis or image enhancement)
- G06K9/00791—Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes (G06K9/00624—Recognising scenes)
- G06T17/05—Geographic models (G06T17/00—Three dimensional [3D] modelling)
- G06T2207/10—Image acquisition modality (G06T2207/00—Indexing scheme for image analysis or image enhancement)
- G06K9/00369—Recognition of whole body, e.g. static pedestrian or occupant recognition (G06K9/00362—Recognising human body or animal bodies)
- G06T15/20—Perspective computation (G06T15/10—Geometric effects; G06T15/00—3D image rendering)
- G08G1/16—Anti-collision systems (G08G1/00—Traffic control systems for road vehicles)
Similar Documents

Publication | Title
---|---
Martin-Martin et al. | JRDB: A dataset and benchmark of egocentric robot visual perception of humans in built environments
US11900536B2 | Visual-inertial positional awareness for autonomous and non-autonomous tracking
US10366508B1 | Visual-inertial positional awareness for autonomous and non-autonomous device
Shin et al. | RoarNet: A robust 3D object detection based on region approximation refinement
Haseeb et al. | DisNet: A novel method for distance estimation from monocular camera
US10410328B1 | Visual-inertial positional awareness for autonomous and non-autonomous device
Meng et al. | Real-time automatic crack detection method based on drone
Jebamikyous et al. | Autonomous vehicles perception (AVP) using deep learning: Modeling, assessment, and challenges
US10437252B1 | High-precision multi-layer visual and semantic map for autonomous driving
US10794710B1 | High-precision multi-layer visual and semantic map by autonomous units
US10192113B1 | Quadocular sensor design in autonomous platforms
US10496104B1 | Positional awareness with quadocular sensor in autonomous platforms
Tsai et al. | Real-time indoor scene understanding using Bayesian filtering with motion cues
Leung et al. | Visual navigation aid for the blind in dynamic environments
Tapu et al. | A computer vision system that ensures the autonomous navigation of blind people
JP5023186B2 | Object motion detection system based on a combination of 3D warping and proper object motion (POM) detection
CN109214986A | Generating a high-resolution 3D point cloud from a downsampled low-resolution LiDAR 3D point cloud and a camera image
CN109214987A | Generating a high-resolution 3D point cloud from an upsampled low-resolution LiDAR 3D point cloud and a camera image
Mo et al. | Terra: A smart and sensible digital twin framework for robust robot deployment in challenging environments
Bae et al. | SiT dataset: socially interactive pedestrian trajectory dataset for social navigation robots
WO2017139516A1 | System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
KR20200075727A | Method and apparatus for calculating depth map
Schwarze et al. | An intuitive mobility aid for visually impaired people based on stereo vision
Agafonov et al. | 3D object detection in an autonomous car driving problem
Alam et al. | Staircase detection systems for the visually impaired people: a review