PUI 2001: Orlando, FL, USA
- Proceedings of the 2001 workshop on Perceptive user interfaces, PUI '01, Orlando, Florida, USA, November 15-16, 2001. ACM 2001
Paper session #1
- Emilio Schapira, Rajeev Sharma:
Experimental evaluation of vision and speech based multimodal interfaces. 1:1-1:9
- Mitsutoshi Yoshizaki, Yoshinori Kuno, Akio Nakamura:
Human-robot interface based on the mutual assistance between speech and vision. 2:1-2:4
- David R. McGee, Misha Pavel, Adriana M. Adami, Guoping Wang, Philip R. Cohen:
A visual modality for the augmentation of paper. 3:1-3:7
- John W. Fisher III, Trevor Darrell:
Signal level fusion for multimodal perceptual user interface. 4:1-4:7
Panel on augmented cognition
- Dylan Schmorrow, Jim Patrey:
Perceptive user interfaces workshop. 1:1-1:2
Posters & demos
- Tevfik Metin Sezgin, Thomas F. Stahovich, Randall Davis:
Sketch based interfaces: early processing for sketch understanding. 1:1-1:8
- Praveen K. Kakumanu, Ricardo Gutierrez-Osuna, Anna Esposito, Robert K. Bryll, Ardeshir Goshtasby, Oscar N. Garcia:
Speech driven facial animation. 2:1-2:5
- Kenji Matsui, Yumi Wakita, Tomohiro Konuma, Kenji Mizutani, Mitsuru Endo, Masashi Murata:
An experimental multilingual speech translation system. 3:1-3:4
- Christian Elting, Georg Michelitsch:
A multimodal presentation planner for a home entertainment environment. 4:1-4:5
- Martha E. Crosby, Brent Auernheimer, Christoph Aschwanden, Curtis S. Ikehara:
Physiological data feedback for application in distance education. 5:1-5:5
- Yuan Qi, Carson Reynolds, Rosalind W. Picard:
The Bayes Point Machine for computer-user frustration detection via pressuremouse. 6:1-6:5
- Ellen Campana, Jason Baldridge, John Dowding, Beth Ann Hockey, Roger W. Remington, Leland S. Stone:
Using eye movements to determine referents in a spoken dialogue system. 7:1-7:5
- Jie Yang, Jiang Gao, Ying Zhang, Xilin Chen, Alex Waibel:
An automatic sign recognition and translation system. 8:1-8:8
- John Harper, Donal Sweeney:
Multimodal optimizations: can legacy systems defeat them? 9:1-9:8
- Frank Althoff, Gregor McGlaun, Björn W. Schuller, Peter Morguet, Manfred K. Lang:
Using multimodal interaction to navigate in arbitrary virtual VRML worlds. 10:1-10:8
- Robert G. Capra III, Manuel A. Pérez-Quiñones, Naren Ramakrishnan:
WebContext: remote access to shared context. 11:1-11:9
Paper session #2
- Scott Stillman, Irfan Essa:
Towards reliable multimodal sensing in aware environments. 1:1-1:6
- Anoop K. Sinha, James A. Landay:
Visually prototyping perceptual user interfaces through multimodal storyboarding. 2:1-2:4
- Michael Oltmans, Randall Davis:
Naturally conveyed explanations of device behavior. 3:1-3:8
- Kevin W. Wilson, Neal Checka, David Demirdjian, Trevor Darrell:
Audio-video array source separation for perceptual user interfaces. 4:1-4:7
Paper session #3
- Rainer Stiefelhagen, Jie Yang, Alex Waibel:
Estimating focus of attention based on gaze and sound. 1:1-1:9
- Mario Enriquez, Oleg Afonin, Brent Yager, Karon E. MacLean:
A pneumatic tactile alerting system for the driving environment. 2:1-2:7
- Christopher S. Campbell, Paul P. Maglio:
A robust algorithm for reading detection. 3:1-3:7
- James W. Davis, Serge Vaks:
A perceptual user interface for recognizing head gesture acknowledgements. 4:1-4:7
- Faustina Hwang, Simeon Keates, Patrick Langdon, P. John Clarkson, Peter Robinson:
Perception and haptics: towards more accessible computers for motion-impaired users. 5:1-5:9
Posters & demos
- Ashish Kapoor, Rosalind W. Picard:
A real-time head nod and shake detector. 1:1-1:5
- A. Chris Long, James A. Landay, Lawrence A. Rowe:
"Those look similar!" issues in automating gesture design advice. 2:1-2:5
- Rick Kjeldsen, Jacob Hartman:
Design issues for vision-based computer interaction systems. 3:1-3:8
- Giancarlo Iannizzotto, Massimo Villari, Lorenzo Vita:
Hand tracking for human-computer interaction with Graylevel VisualGlove: turning back to the simple way. 4:1-4:7
- Sylvia M. Dominguez, Trish Keaton, Ali H. Sayed:
Robust finger tracking for wearable computer interfacing. 5:1-5:5
- Suriyon Tansuriyavong, Shin-ichi Hanaki:
Privacy protection by concealing persons in circumstantial video image. 6:1-6:4
- Christian von Hardenberg, François Bérard:
Bare-hand human-computer interaction. 7:1-7:8
- Yoshinori Kuno, Yoshifumi Murakami, Nobutaka Shimada:
User and social interfaces by observing human faces for intelligent wheelchairs. 8:1-8:4
- Bjorn Braathen, Marian Stewart Bartlett, Gwen Littlewort, Javier R. Movellan:
First steps towards automatic recognition of spontaneous facial action units. 9:1-9:5
- Gary R. Bradski, Victor Eruhimov, Sergey Molinov, Valery Mosyagin, Vadim Pisarevsky:
A video joystick from a toy. 10:1-10:4
Paper session #4
- Robert Headon, Rupert Curwen:
Recognizing movements from the ground reaction force. 1:1-1:8
- Desney S. Tan, Jeanine K. Stefanucci, Dennis R. Proffitt, Randy Pausch:
The Infocockpit: providing location and place to aid human memory. 2:1-2:4
- Zhengyou Zhang, Ying Wu, Ying Shan, Steven Shafer:
Visual panel: virtual mouse, keyboard and 3D controller with an ordinary piece of paper. 3:1-3:8