  • Patrick Baudisch is a professor in Computer Science at Hasso Plattner Institute at Potsdam University and chair of the …
  • Dieter Boecker, Joe Konstan
Soap is a pointing device based on hardware found in a mouse, yet works in mid-air. Soap consists of an optical sensor device moving freely inside a hull made of fabric. As the user applies pressure from the outside, the optical sensor moves independently of the hull. The optical sensor perceives this relative motion and reports it as position input. Soap offers many of the benefits of optical mice, such as high-accuracy sensing. We describe the design of a soap prototype and report our experiences with four application scenarios, including a wall display, Windows Media Center, slide presentation, and interactive video games.
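A minimal sketch of the input loop this abstract describes, not the authors' firmware: the optical sensor reports relative motion against the inside of the hull, exactly like a regular mouse, and the host integrates those deltas into a cursor position. The function names, gain, and screen size are assumptions for illustration.

```python
# Sketch only: turning soap's relative optical-sensor motion into pointer coordinates.
# read_sensor_delta() is a hypothetical stand-in for whatever driver reports (dx, dy).

SCREEN_W, SCREEN_H = 1920, 1080
GAIN = 2.0  # counts-to-pixels scaling; would be tuned per device


def clamp(value, low, high):
    return max(low, min(high, value))


def soap_pointer_loop(read_sensor_delta, emit_position):
    """Integrate relative motion of the sensor against the hull into a cursor position."""
    x, y = SCREEN_W / 2, SCREEN_H / 2  # start at screen center
    while True:
        dx, dy = read_sensor_delta()   # relative motion, as from an optical mouse
        x = clamp(x + GAIN * dx, 0, SCREEN_W - 1)
        y = clamp(y + GAIN * dy, 0, SCREEN_H - 1)
        emit_position(int(x), int(y))
```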
Computer mice do not work in mid-air. The reason is that a mouse is really only half an input device; the other half is the surface the mouse is operated on, such as a mouse pad. In this demo, we demonstrate how to combine a mouse and a mouse pad into soap, a device that can be operated in mid-air with a single hand. We have used soap to control video games, interact with wall displays and Windows Media Center, and give slide presentations.
We explore how to track people and furniture based on a high-resolution pressure-sensitive floor. Gravity pushes people and objects against the floor, causing them to leave imprints of pressure distributions across the surface. While the sensor is limited to sensing direct contact with the surface, we can sometimes conclude what takes place above the surface, such as users’ poses or collisions with virtual objects. We demonstrate how to extend the range of this approach by sensing through passive furniture that propagates pressure to the floor. To explore our approach, we have created an 8 m² back-projected floor prototype, termed GravitySpace, a set of passive touch-sensitive furniture, as well as algorithms for identifying users, furniture, and poses. Pressure-based sensing on the floor offers four potential benefits over camera-based solutions: (1) it provides consistent coverage of rooms wall-to-wall, (2) is less susceptible to occlusion between users, (3) allows for the use of simpler recognition algorithms, and (4) intrudes less on users’ privacy.
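To make the "imprints of pressure distributions" concrete, here is a hedged sketch, not the GravitySpace pipeline itself, of how one frame of such a floor could be segmented into contact regions (soles, chair legs, furniture feet) by thresholding and connected-component labeling. The threshold value and pressure units are assumptions.

```python
# Sketch: find pressure imprints in one frame of a pressure-sensitive floor.
import numpy as np
from scipy import ndimage

PRESSURE_THRESHOLD = 5.0  # assumed sensor units; anything below is treated as noise


def find_imprints(pressure_frame: np.ndarray):
    """Return (centroid, total_pressure) for each contiguous imprint in the frame."""
    mask = pressure_frame > PRESSURE_THRESHOLD
    labels, count = ndimage.label(mask)  # contiguous high-pressure regions
    imprints = []
    for i in range(1, count + 1):
        region = labels == i
        total = float(pressure_frame[region].sum())  # proportional to weight on that imprint
        centroid = ndimage.center_of_mass(pressure_frame, labels, i)
        imprints.append((centroid, total))
    return imprints
```

Identifying which imprint belongs to which user or piece of furniture would then operate on these regions, e.g., by matching their shapes and pressure profiles against enrolled templates.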
Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets. This suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption.
In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user's fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6mm, compared to 4mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input.
The primary contribution of this paper is to help understand touch—one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control based on the top of the finger and sensing based on the bottom side of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact area-based sensing.
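One way to picture the "parallax" framing above is as a difference between two 2D points: the location the sensor reports (the contact-area centroid under the finger) and the location the user aims with (a visual feature on top of the finger, projected onto the surface). The sketch below is only an illustration of that reading, not the paper's exact projected center model; the orthographic projection and the tracker providing a 3D top-of-finger point are assumptions.

```python
# Sketch: the offset between the sensed contact centroid and a projected top-of-finger point.
from typing import Tuple


def project_finger_top(finger_top_xyz: Tuple[float, float, float]) -> Tuple[float, float]:
    """Project a 3D top-of-finger point straight down onto the touch surface (z = 0)."""
    x, y, _z = finger_top_xyz
    return (x, y)


def parallax_offset(contact_centroid: Tuple[float, float],
                    finger_top_xyz: Tuple[float, float, float]) -> Tuple[float, float]:
    """Offset between what contact-area sensing reports and what the user aligned."""
    px, py = project_finger_top(finger_top_xyz)
    cx, cy = contact_centroid
    return (px - cx, py - cy)
```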
It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study, in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem.
In the second half of this paper, we present two devices that exploit the new model to improve touch accuracy. Both model touch on a per-posture and per-user basis and increase accuracy by applying the respective offsets. Our RidgePad prototype extracts posture and user ID from the user’s fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved even higher accuracy: 3.3 times the baseline. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3² > 10 times more controls into the same surface, or to bring touch input to very small mobile devices.
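The core of the generalized model described above is a lookup of an error offset per (user, posture) pair, applied to each reported touch. The following is a minimal sketch of that idea; how user ID and posture are obtained (fingerprint, optical tracking) is outside the sketch, and the calibration values shown are made up.

```python
# Sketch: per-user, per-posture offset correction of raw touch positions.
from typing import Dict, Tuple

# (user_id, posture) -> measured offset in mm, collected in a calibration phase
OffsetTable = Dict[Tuple[str, str], Tuple[float, float]]


def correct_touch(raw_xy: Tuple[float, float],
                  user_id: str,
                  posture: str,
                  offsets: OffsetTable) -> Tuple[float, float]:
    """Subtract the stored offset for this user and finger posture from a raw touch."""
    dx, dy = offsets.get((user_id, posture), (0.0, 0.0))
    return (raw_xy[0] - dx, raw_xy[1] - dy)


# Example usage with hypothetical calibration data:
offsets: OffsetTable = {("alice", "45_degree_pitch"): (1.2, -2.8)}
print(correct_touch((100.0, 200.0), "alice", "45_degree_pitch", offsets))
```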
Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multi-touch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation.
We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree-of-freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.
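As a rough illustration of "identify users based on their shoes", here is a hedged sketch, not the paper's recognizer: compare a normalized sole pressure image against a small database of enrolled soles and return the closest match. Real imprints would first need alignment (rotation and translation), which the sketch skips.

```python
# Sketch: nearest-template identification of a sole imprint on an FTIR floor.
import numpy as np


def normalize(sole: np.ndarray) -> np.ndarray:
    sole = sole.astype(float)
    norm = np.linalg.norm(sole)
    return sole / norm if norm > 0 else sole


def identify_sole(sole_image: np.ndarray, enrolled: dict) -> str:
    """Return the user whose enrolled sole correlates best with the observed imprint.

    `enrolled` maps user name -> sole image of the same shape as `sole_image`.
    """
    query = normalize(sole_image)
    scores = {user: float(np.sum(query * normalize(template)))
              for user, template in enrolled.items()}
    return max(scores, key=scores.get)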
We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call “transfer learning”. By using a physical device (e.g., an iPhone), a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustrate this concept with our Imaginary Phone prototype. With it, users interact by mimicking the use of a physical iPhone, tapping and sliding on their empty non-dominant hand without visual feedback. Pointing on the hand is tracked using a depth camera, and touch events are sent wirelessly to an actual iPhone, where they invoke the corresponding actions. Our prototype allows the user to perform everyday tasks, such as answering a phone call or launching the timer app and setting an alarm. Imaginary Phone thereby serves as a shortcut that frees users from the necessity of retrieving the actual physical device.

We present two user studies that validate the three assumptions underlying the transfer learning method. (1) Users build up spatial memory automatically while using a physical device: participants knew the correct location of 68% of their own iPhone home screen apps by heart. (2) Spatial memory transfers from a physical to an imaginary interface: participants recalled 61% of their home screen apps when recalling app locations on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone: participants could reliably acquire 0.95 cm-wide iPhone targets on their palm, which is sufficiently large to operate any standard iPhone widget.
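A small sketch of the mapping step implied above, under assumed geometry rather than the prototype's actual code: the depth camera yields a touch position on the palm, normalized to the unit square, which is mapped onto a home-screen grid so the touch can be forwarded to the phone as "activate the app at (row, col)". The 4 × 4 grid is an assumption.

```python
# Sketch: mapping a normalized palm touch position to a home-screen grid cell.
GRID_COLS, GRID_ROWS = 4, 4


def palm_to_grid(u: float, v: float):
    """Map normalized palm coordinates (u, v) in [0, 1] x [0, 1] to a (row, col) cell."""
    col = min(int(u * GRID_COLS), GRID_COLS - 1)
    row = min(int(v * GRID_ROWS), GRID_ROWS - 1)
    return row, col


# e.g. a touch two-thirds across and near the top of the palm area:
print(palm_to_grid(0.7, 0.1))  # -> (0, 2)
```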
In order to enable personalized functionality, such as logging tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay in assigned seats and cannot recognize users across sessions. We propose a different approach based on distinguishing users’ shoes. While users are interacting with the table, our system Bootstrapper observes their shoes using one or more depth cameras mounted to the edge of the table. It then identifies users by matching camera images with a database of known shoe images. When multiple users interact, Bootstrapper associates touches with shoes based on hand orientation. The approach can be implemented using consumer depth cameras because (1) shoes offer large, distinct features such as color, and (2) shoes naturally align themselves with the ground, giving the system a well-defined perspective and thus reduced ambiguity. We report two simple studies in which Bootstrapper recognized participants from a database of 18 users with 95.8% accuracy.
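The matching step described above ("matching camera images with a database of known shoe images") could, for example, be approximated as follows; this is a hedged sketch, not Bootstrapper's actual algorithm. Each shoe image is reduced to a color histogram and identified by nearest neighbor against enrolled histograms.

```python
# Sketch: nearest-neighbor shoe identification via color histograms.
import numpy as np


def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """image: H x W x 3 uint8 RGB. Returns a normalized joint color histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.flatten()
    return hist / hist.sum() if hist.sum() > 0 else hist


def identify_shoe(image: np.ndarray, database: dict) -> str:
    """database: user -> list of enrolled histograms. Returns the closest user."""
    query = color_histogram(image)
    best_user, best_dist = None, float("inf")
    for user, templates in database.items():
        for template in templates:
            dist = float(np.linalg.norm(query - template))
            if dist < best_dist:
                best_user, best_dist = user, dist
    return best_user
```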
We explore the future of fabrication, in particular the vision of mobile fabrication, which we define as “personal fabrication on the go”. We explore this vision with two surveys, two simple hardware prototypes, matching custom apps that provide users with access to a solution database, custom fabrication processes we designed specifically for these devices, and a user study conducted in situ on metro trains. Our findings suggest that mobile fabrication is a compelling next direction for personal fabrication. From our experience with the prototypes, we derive the hardware requirements to make mobile fabrication technically feasible.
Drag-and-pop is an interaction technique designed to accelerate drag-and-drop on large screens. By animating potential targets and bringing them to the dragged object, drag-and-pop reduces the user effort required for dragging an object across the screen to a desired target. To preserve users' spatial memory, targets are not moved away from their original location, but are instead …
