
    Photchara Ratsamee

    This paper presents an alternative approach to identifying and classifying small flying drones versus birds in the near field by examining the pattern of their flight paths and trajectories. The trajectories of the drones and birds were extracted from multiple video clips drawn from various natural and synthetic databases. Small drones, whether piloted automatically or manually, usually fly in a stable manner: their flight paths are typically straight and smooth, with sharp-angle turns. Birds, being natural flyers, have flight paths that are intrinsically periodic due to their flapping motion, with occasional straight glides and soaring sections. Five trajectory characteristics are observed and extracted from the object’s flight paths: turning angle, periodicity (frequency), curvature, and object pace (velocity and acceleration). Subsequently, principal component analysis was applied to reduce these five trajectory features to two parameters. Classification by support vecto...
    Accurate camera localization is an essential part of tracking systems. However, localization results are greatly affected by illumination. Including data collected under various lighting conditions can improve the robustness of the localization algorithm to lighting variation. However, this is very tedious and time consuming. By using synthetic images, it is possible to easily accumulate a large variety of views under varying illumination and weather conditions. Despite continuously improving processing power and rendering algorithms, synthetic images do not perfectly match real images of the same scene, i.e., there exists a gap between real and synthetic images that also affects the accuracy of camera localization. To reduce the impact of this gap, we introduce “REal-to-Synthetic Transform (REST).” REST is an autoencoder-like network that converts real features to their synthetic counterpart. The converted features can then be matched against the accumulated database for robust cam...
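    As a rough conceptual sketch of the feature-conversion idea (not REST itself, whose autoencoder architecture and training data are described in the paper), one can train a small regression network on paired real/synthetic descriptors and apply it to query features before database matching. All names, dimensions, and the toy "domain gap" below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
real = rng.normal(size=(200, 8))       # descriptors extracted from real images
synthetic = real * 0.9 + 0.1           # placeholder stand-in for the real-to-synthetic gap

# Learn a mapping from real-image features to their synthetic counterparts
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
net.fit(real, synthetic)

# Convert a query descriptor before matching it against the synthetic database
query = rng.normal(size=(1, 8))
converted = net.predict(query)
print(converted.shape)  # → (1, 8)
```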
    Evacuation drills are carried out to reduce the injury or death caused by earthquakes. However, the content of evacuation drills is generally predetermined for specific evacuation routes and actions. This inflexibility can reduce user motivation and sincerity. In this paper, we propose an Augmented Reality (AR) based evacuation drill system. We use an optical see-through head-mounted display (HMD) for mapping a room and recognizing its interior. Our system constructs an AR drill environment on top of the real environment and reproduces the after-effects of an earthquake by applying vibrations to the objects. We evaluated our system in an experiment with 10 participants. Comparing cases with and without AR obstacles, we found that AR training affected participant motivation and the diversity of traversed evacuation routes during practice.
    In practical use of optical see-through head-mounted displays, users often have to adjust the brightness of virtual content to ensure that it is at the optimal level. Automatic adjustment is still a challenging problem, largely due to the bidirectional nature of the structure of the human eye, complexity of real world lighting, and user perception. Allowing the right amount of light to pass through to the retina requires a constant balance of incoming light from the real world, additional light from the virtual image, pupil contraction, and feedback from the user. While some automatic light adjustment methods exist, none have completely tackled this complex input-output system. As a step towards overcoming this issue, we introduce IntelliPupil, an approach that uses eye tracking to properly modulate augmentation lighting for a variety of lighting conditions and real scenes. We first take the data from a small form factor light sensor and changes in pupil diameter from an eye trackin...
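    A toy illustration of the closed-loop idea, under the assumption of a simple proportional rule (IntelliPupil's actual model is driven by learned sensor and pupil data and is more involved): when the pupil is wider than a comfortable target, the augmentation is too dim and brightness is nudged up.

```python
def adjust_brightness(brightness, pupil_mm, target_mm=4.0, gain=0.05):
    """One step of a hypothetical proportional controller on pupil diameter.

    A pupil larger than the target implies the scene is too dark, so the
    virtual-content brightness (clamped to [0, 1]) is raised, and vice versa.
    """
    error = pupil_mm - target_mm
    return min(1.0, max(0.0, brightness + gain * error))

b = 0.5
for pupil in [5.2, 4.8, 4.3, 4.0]:  # pupil constricting as brightness converges
    b = adjust_brightness(b, pupil)
print(round(b, 3))  # → 0.615
```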
    A significant issue associated with the use of video see-through head-mounted displays (VST-HMD) for augmented reality is the presence of latency between real-world images and the images displayed to the HMD. For a static scene, this latency poses no real problem; however, for dynamic scenes, which arise when the HMD user moves their head, when real-world objects move, or a combination of the two, the accompanying delay may result in significant registration error. To address this issue, we present DotWarp, a novel latency reduction technique for VST-HMDs that does not rely on head motion and compensates for the delay arising from real-world object motion. The algorithm requires a two-camera setup and matches dynamic objects in both images by tracking on the faster image and warping the pixels of the slower image, with the fast and slow components being RGB and IR components, respectively, for our system. First, moving objects are extracted from the faster camera scene using a mot...
    This paper presents an alternative approach to identifying and classifying a group of small flying objects, especially drones as distinct from others, notably birds and kites (inclusive of kite-flying), in the near field by examining the pattern of their flight paths and trajectories. The trajectories of the drones and other flying objects were extracted from multiple video clips drawn from various natural and synthetic databases. Four trajectory characteristics are observed and extracted from the object’s flight paths, i.e., heading or turning angle, curvature, pace velocity, and pace acceleration. Subsequently, principal component analysis was applied to reduce these four trajectory features to two parameters. Multi-class classification by support vector machine (SVM) with a non-linear transformation kernel was used. Multiple classification models were developed by several algorithms with various transformation kernels. The hyperparameters were optimized using Bayesian optimization. T...
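    The PCA-plus-SVM pipeline described above might be sketched as follows; the feature values, class labels, and scikit-learn stand-ins are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns: heading/turning angle, curvature, pace velocity, pace acceleration
X = rng.normal(size=(60, 4))
y = np.arange(60) % 3  # 0 = drone, 1 = bird, 2 = kite (placeholder labels)

# Standardize, reduce 4 features to 2 principal components, then apply an
# SVM with a non-linear (RBF) kernel for multi-class classification
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X[:5])
print(len(pred))  # → 5 per-trajectory class labels
```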
    In this paper, we present a spherical magnetic joint for the inverted locomotion of a multi-legged robot. The permanent magnet’s spherical shape allows the robot to attach its foot to a steel surface without energy consumption. However, the robot’s inverted locomotion requires foot flexibility for placement and gait construction. Therefore, the spherical magnetic joint mechanism was designed and implemented for the robot’s feet to deal with angular placement. For decoupling the foot from the steel surface, the attractive force is adjusted by tilting the adjustable sleeve mechanism to an adequate angle between the surface and the foot tip. Experimental results show that the spherical magnetic joint can maintain the attractive force at any angle, and that the sleeve mechanism can reduce the reaction force for pulling the legs from the steel surface by 20%. Furthermore, the designed gait for inverted locomotion with a spherical magnetic joint was tested and compared to prove the concept of the spherical magnetic joint and sleeve mechanism.
    Low light situations pose a significant challenge to individuals working in a variety of different fields such as firefighting, rescue, maintenance and medicine. Tools like flashlights and infrared (IR) cameras have been used to augment light in the past, but they must often be operated manually, provide a field of view that is decoupled from the operator's own view, and utilize color schemes that can occlude content from the original scene. To help address these issues, we present VisMerge, a framework that combines a thermal imaging head mounted display (HMD) and algorithms that temporally and spectrally merge video streams of different light bands into the same field of view. For temporal synchronization, we first develop a variant of the time warping algorithm used in virtual reality (VR), but redesign it to merge video see-through (VST) cameras with different latencies. Next, using computer vision and image compositing we develop five new algorithms designed to merge non-uniform video streams from a standard RGB camera and small form-factor infrared (IR) camera. We then implement six other existing fusion methods, and conduct a series of comparative experiments, including a system level analysis of the augmented reality (AR) time warping algorithm, a pilot experiment to test perceptual consistency across all eleven merging algorithms, and an in-depth experiment on performance testing the top algorithms in a VR (simulated AR) search task. Results showed that we can reduce temporal registration error due to inter-camera latency by an average of 87.04%, that the wavelet and inverse stipple algorithms were perceptually rated the highest, that noise modulation performed best, and that freedom of user movement is significantly increased with visualizations engaged.
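    The temporal-synchronization step can be illustrated with a minimal sketch, assuming per-frame timestamps and a known inter-camera latency (a simplification of the time-warping variant described above, not the VisMerge implementation): each fast-camera frame is paired with the slow-camera frame whose latency-compensated capture time is nearest.

```python
def align_frames(fast_ts, slow_ts, latency):
    """Pair each fast-camera frame with its best-matching slow-camera frame.

    slow_ts are arrival timestamps that include the slow stream's extra
    latency, so that latency is subtracted to recover capture times before
    finding the nearest match.
    """
    pairs = []
    for t in fast_ts:
        idx = min(range(len(slow_ts)),
                  key=lambda i: abs((slow_ts[i] - latency) - t))
        pairs.append(idx)
    return pairs

# Two 30 fps streams; the slow (e.g. IR) stream arrives 50 ms late
fast = [i / 30 for i in range(10)]
slow = [i / 30 + 0.05 for i in range(10)]
print(align_frames(fast, slow, 0.05))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```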
    Gestures are used naturally in communication, and their use with modern computer systems is becoming increasingly feasible and applicable for interaction. Analyzing how gesture interfaces are perceived in different parts of the world can help in understanding possible weak and strong points, allowing for different improvements to the interfaces. This study aims to analyze how different levels of technological familiarization impact the user experience of gestural interfaces. Our work describes the findings of an experiment that was replicated in two countries: Brazil and Japan. In each experiment, 20 subjects tested two applications; one had a mouse-based interface and the other a gesture-based interface. User experience was measured using AttrakDiff questionnaires. Subjective and abstract concepts largely differed, but there was broad agreement that the pragmatic quality of the gestural interface has room for improvement before it can be embraced by the average user.
    Water is an essential substance for humans in their daily lives. There are many opportunities for us to come in contact with water, such as cooking, bathing, and swimming. However, few studies have reproduced the sensation of water touching the skin. This study aims to propose a novel midair haptic device, named FlowHaptics, that reproduces the feeling of the force of flowing water over human fingers using multiple air jets. We first estimated the temporal pressure distribution change of water in two-dimensional space using machine-learning-accelerated fluid simulation. We controlled the airflow based on the pressure distribution change obtained from the fluid simulation to reproduce the feeling of flowing water over the fingers using our proposed device, which can control multiple air jets in real time. We performed a psycho-physical evaluation of different flow velocities and a subjective evaluation of different velocity profiles. We found that FlowHaptics reliably created the ill...
    Recent research has focused on how to facilitate interaction between humans and robots, giving rise to the field of human-robot interaction. A related research area is human-drone interaction (HDI), investigating how interaction between humans and drones can be expanded in novel and meaningful ways. In this work, we explore the use of drones as companions in a home environment. We present three consecutive studies addressing the user requirements and design space of companion drones. Following a user-centered approach, the three stages comprise an online questionnaire, design workshops, and a simulated virtual reality (VR) home environment. Our results show that participants preferred the idea of a drone companion at home, particularly for tasks such as fetching items and cleaning. The participants were also positive towards a drone companion that featured anthropomorphic features.
    In this paper, we propose the design and implementation of spherical magnet joint (SMJ)-based gait generation for the inverted locomotion of multi-legged robots. A spherical permanent magnet is selected to generate a consistent attractive force for the robot to perform inverted locomotion under steel structures. Additionally, the tip of the robot's foot is designed as a ball-joint mechanism to give the foot placement flexibility at any angle between the tip and the surface. We also propose an adjustable sleeve mechanism to detach the tip of the foot during locomotion by creating a fulcrum point during the tilt-and-pull step. As a result, the reaction force can be reduced according to the sleeve diameter. Experimental results show that the pulling load decreased by 46% compared with direct pulling when the adjustable sleeve mechanism was used. For inverted locomotion, a quadruped robot and a hexapod robot were constructed to represent the predominant types of multi-legged robot. We integrated the SMJ ...
    We present FrictionHaptics, an encountered-type haptic device that emulates friction when a user touches a virtual object, using a rotating sphere as the end effector of a 3-DOF robot arm. Our proposed device has two advantages. First, the device creates a tangential friction sensation for a virtual object even when the size of the real object is limited. Second, compared to wearable-type or grip-type devices, ours does not restrict which part of the body does the sensing. We conducted a perceptual experiment to determine how humans perceive the friction generated by our proposed device. As a result, we found that our device renders a reliable illusion of tangential friction even when humans perceive it with different body parts.
    We present a human-centered-designed social drone aimed at use in human crowd environments. Based on design studies and focus groups, we created a prototype of a social drone with a social shape, face, and voice for human interaction. We used the prototype for a proxemic study, comparing the distance from the drone that humans could comfortably accept with what they would require for a nonsocial drone. The social-shaped design with an added greeting voice markedly decreased the acceptable distance, as did present or previous pet ownership and being male. We also explored the proximity sphere around humans with a social-shaped drone in a validation study varying lateral distance and height. Both greater lateral distance and the higher height of 1.8 m, compared to the lower height of 1.2 m, decreased the comfortable distance required as the drone approached.
    Air conditioners enable a comfortable environment for people in a variety of scenarios. However, in the case of a room with multiple people, the specific comfort of a particular person depends highly on their clothes, metabolism, preferences, and so on, and the ideal conditions for each person in a room can conflict with one another. An ideal way to resolve these kinds of conflicts is an intelligent air conditioning system that can independently control air temperature and flow in different areas of a room and thereby produce thermal comfort for multiple users, which we define as the personal preference of air flow and temperature. In this paper, we propose Personal Atmosphere, a machine learning based method to obtain the parameters of air conditioners that generate non-uniform distributions of air temperature and flow in a room. In this method, two-dimensional air-temperature and air-flow distributions in a room are used as input to a machine learning model. These inputs can be conside...
    Firefighters need to gain information from both inside and outside of buildings in first-response emergency scenarios. For this purpose, drones are beneficial. This paper presents an elicitation study that showed the firefighters’ desire to collaborate with autonomous drones. We developed a Human-Drone Interaction (HDI) method for indicating a target to a drone using 3D pointing gestures estimated solely from a monocular camera. The participant first points to a window without using any wearable or body-attached device. Through its front-facing camera, the drone detects the gesture and computes the target window. This work includes a description of the process for choosing the gesture, detecting and localizing objects, and carrying out the transformations between coordinate systems. Our proposed 3D pointing gesture interface improves on a 2D pointing gesture interface by integrating depth information with SLAM, resolving the ambiguity of multiple objects aligned on the same plane, in a ...
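    The geometric core of such a pointing interface can be sketched as a ray-plane intersection; the body points, wall plane, and helper function below are hypothetical illustrations, not the paper's implementation. A ray is cast from the shoulder through the fingertip, and the detected window nearest the intersection with the building facade would be taken as the target.

```python
import numpy as np

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Return the intersection of a ray with a plane, or None if they miss."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = ((np.asarray(plane_point, float) - o) @ n) / denom
    return None if t < 0 else o + t * d  # None if the plane is behind the ray

# Hypothetical shoulder and fingertip positions (metres, camera frame)
shoulder = np.array([0.0, 1.4, 0.0])
finger = np.array([0.3, 1.5, 1.0])
hit = ray_plane_hit(shoulder, finger - shoulder, [0, 0, 5], [0, 0, -1])
print(hit)  # intersection point on the wall plane z = 5
```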
    Drone navigation in complex environments poses many problems to teleoperators. Especially in three dimensional (3D) structures such as buildings or tunnels, viewpoints are often limited to the drone’s current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel interface for teleoperation that provides a user with environment-adaptive viewpoints that are automatically configured to improve safety and provide smooth operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point-cloud information into account to modify the user’s viewpoint to maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models as well as simulations in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
    This paper proposes an “Isotropic” discrete coordinate system for regular tessellations in the Euclidean plane E2. Its conceivable robotics applications include workspace description, mobile robot positioning, and configuration rendering for swarm robots. The new coordinate system, named HC/P, is generated via a projection process of a three-dimensional (3D) honeycomb onto a two-dimensional (2D) plane. The HC/P coordinate system is composed of three axes for indicating 2D position. This redundancy enables HC/P to possess the isotropy that is the most significant advantage of the system. The characteristic features of HC/P, such as distance computation and rotational operation, are described in this paper with an application example.
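    As an illustration of how a redundant third axis can make planar distance computation simple and symmetric, here is a sketch using cube coordinates for hexagonal grids, where the three axes sum to zero (an assumption for illustration; HC/P's actual construction and distance formula may differ).

```python
def hex_distance(a, b):
    """Grid distance between two cells given as (x, y, z) with x + y + z == 0.

    With the zero-sum constraint, the grid distance is simply the largest
    per-axis difference, and all three axes are treated identically.
    """
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]), abs(a[2] - b[2]))

print(hex_distance((0, 0, 0), (2, -1, -1)))  # → 2
```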
    In this research, we present an object search framework using robot-gaze interaction that supports patients with motor paralysis conditions. A patient gives commands by gazing at the target object, and the robot then searches autonomously. Unlike approaches requiring many gaze interactions, ours uses only a few gaze interactions to specify a location clue and an object clue, and integrates RGB-D sensing to segment unknown objects from the environment. Based on hypotheses from the gaze information, we utilize the multi-region graph cuts method along with an analysis of depth information. Furthermore, our search algorithm allows the robot to find a main observation point, which is the point from which the user can clearly observe the target object. If the user is not satisfied with the first segmentation, the robot can adapt its pose to find different views of the object. The approach has been implemented and tested on the humanoid robot ENON. With only a few gaze interactions, the success rate of segmentation of unknown...