Search Results (1,302)

Search Parameters: Keywords = human–robot interaction

23 pages, 40746 KiB  
Article
An Admittance Parameter Optimization Method Based on Reinforcement Learning for Robot Force Control
by Xiaoyi Hu, Gongping Liu, Peipei Ren, Bing Jia, Yiwen Liang, Longxi Li and Shilin Duan
Actuators 2024, 13(9), 354; https://doi.org/10.3390/act13090354 - 12 Sep 2024
Abstract
When a robot performs tasks such as assembly or human–robot interaction, collisions with an unknown environment are inevitable and create potential safety hazards. To improve robot compliance in unknown environments and enhance robot intelligence in contact force-sensitive tasks, this paper proposes an improved admittance force control method that combines classical adaptive control with machine learning so that each contributes its respective strengths at different stages of training, ultimately achieving better performance. In addition, the paper proposes an improved Deep Deterministic Policy Gradient (DDPG)-based optimizer, combined with a Gaussian process (GP) model, to optimize the admittance parameters. To verify the feasibility of the algorithm, simulations and experiments were carried out in MATLAB and on a UR10e robot, respectively. The results show that the algorithm converges 33% faster than a general model-free learning method and offers better control performance and robustness. Finally, the adjustment time required by the algorithm is 44% shorter than that of classical adaptive admittance control.
(This article belongs to the Section Actuators for Robotics)
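As context for the abstract above, here is a minimal sketch of the classical admittance law that such methods build on, m·ẍ + b·ẋ + k·x = f_ext − f_ref, where the damping b is the kind of parameter a learner such as GP-DDPG would tune. This is an illustrative reconstruction with made-up values, not the authors' implementation:

```python
def admittance_step(x, dx, f_ext, f_ref, m, b, k, dt):
    """One Euler step of the admittance law m*xdd + b*xd + k*x = f_ext - f_ref,
    mapping the contact-force error into a compliant position correction."""
    ddx = (f_ext - f_ref - b * dx - k * x) / m
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx

# Toy rollout with a constant 2 N force error; the damping b is the kind of
# parameter the paper's GP-DDPG optimizer adapts online.
x, dx = 0.0, 0.0
for _ in range(5000):
    x, dx = admittance_step(x, dx, f_ext=12.0, f_ref=10.0,
                            m=1.0, b=50.0, k=100.0, dt=0.001)
print(f"steady-state position correction: {x:.4f} m")  # ~ (f_ext - f_ref)/k = 0.02
```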
Figures:
Graphical abstract
Figure 1: Classical admittance control system structure.
Figure 2: Robot and environment contact process model.
Figure 3: Intelligent admittance control system structure.
Figure 4: The schematic diagram of the GP-DDPG algorithm.
Figure 5: Robot task environment. The environment is a vertical wall on the right side of the picture. It is assumed that there is no noise in the control process.
Figure 6: Trajectory correction performance of the GP-DDPG algorithm with initial policy.
Figure 7: Force-tracking performance of the GP-DDPG algorithm with initial policy: (a) the change in contact force over learnings 1–10; (b) N = 1–5; (c) N = 5–10.
Figure 8: Damping variation performance of the GP-DDPG algorithm with initial policy: (a) the change in damping over learnings 1–10; (b) N = 1–5; (c) N = 5–10.
Figure 9: Force-tracking performance of GP-DDPG without initial policy: (a) the change in force over learnings 1–15; (b) N = 1–5; (c) N = 5–10; (d) N = 10–15.
Figure 10: Damping variation performance of GP-DDPG without initial policy: (a) the change in damping over learnings 1–15; (b) N = 1–5; (c) N = 5–10; (d) N = 10–15.
Figure 11: The performance of each algorithm on the variable-stiffness plane: (a) position-tracking performance; (b) position-tracking error; (c) force-tracking performance; (d) evolution of the damping parameter.
Figure 12: The performance of each algorithm in the complex environment: (a) position-tracking performance; (b) position-tracking error; (c) force-tracking performance; (d) evolution of the damping parameter.
Figure 13: Hardware configuration of the experiment.
Figure 14: The real-world experiment. (a) Experimental installation. The tool coordinate system is O_T, the base coordinate system is O_B, and the mapping between the two can be calculated from the robot kinematics. At this point, the specific location of the slope is unknown to the control system. (b) Different environments in the third experiment. (c) Experimental process. The red dotted line represents the trajectory of the robot.
Figure 15: Constant-admittance control, steep slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 16: Classical adaptive-admittance control, steep slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 17: Intelligent-admittance control, steep slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 18: Constant-admittance control, gentle slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 19: Classical adaptive-admittance control, gentle slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 20: Intelligent-admittance control, gentle slope tracking performance: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 21: Constant-admittance control, tracking performance across environments with different characteristics: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 22: Classical adaptive-admittance control, tracking performance across environments with different characteristics: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
Figure 23: Intelligent-admittance control, tracking performance across environments with different characteristics: (a) force-tracking performance; (b) position-tracking performance; (c) evolution of the damping parameter.
16 pages, 3585 KiB  
Article
Upper-Limb and Low-Back Load Analysis in Workers Performing an Actual Industrial Use-Case with and without a Dual-Arm Collaborative Robot
by Alessio Silvetti, Tiwana Varrecchia, Giorgia Chini, Sonny Tarbouriech, Benjamin Navarro, Andrea Cherubini, Francesco Draicchio and Alberto Ranavolo
Safety 2024, 10(3), 78; https://doi.org/10.3390/safety10030078 - 11 Sep 2024
Abstract
In the Industry 4.0 scenario, human–robot collaboration (HRC) plays a key role in factories to reduce costs, increase production, and help aged and/or sick workers keep their jobs. The approaches of the ISO 11228 series commonly used for biomechanical risk assessment cannot be applied in Industry 4.0 because they do not cover interactions between workers and HRC technologies. Wearable sensor networks and software for biomechanical risk assessment could provide a more reliable picture of how effectively collaborative robots (coBots) reduce the biomechanical load on workers. The aim of the present study was to investigate several biomechanical parameters with the 3D Static Strength Prediction Program (3DSSPP) software v.7.1.3 on workers executing a practical manual material-handling task, comparing a dual-arm coBot-assisted scenario with a no-coBot scenario. We calculated the mean and standard deviation (SD) values from eleven participants for the following 3DSSPP parameters: the percentage of maximum voluntary contraction (%MVC), the maximum allowed static exertion time (MaxST), the low-back spine compression force at the L4/L5 level (L4Ort), and the strength percent capable value (SPC). According to our statistics, the advantages of introducing the coBot concerned trunk flexion (SPC from 85.8% without the coBot to 95.2%; %MVC from 63.5% to 43.4%; MaxST from 33.9 s to 86.2 s), left shoulder abdo-adduction (%MVC from 46.1% to 32.6%; MaxST from 32.7 s to 65 s), and right shoulder abdo-adduction (%MVC from 43.9% to 30.0%; MaxST from 37.2 s to 70.7 s) in Phase 1, and right shoulder humeral rotation (%MVC from 68.4% to 7.4%; MaxST from 873.0 s to 125.2 s), right shoulder abdo-adduction (%MVC from 31.0% to 18.3%; MaxST from 60.3 s to 183.6 s), and right wrist flexion/extension (%MVC from 50.2% to 3.0%; MaxST from 58.8 s to 1200.0 s) in Phase 2. Moreover, Phase 3, which consisted of another manual handling task, would be removed entirely by using a coBot. In summary, using a coBot in this industrial scenario would reduce the biomechanical risk for workers, particularly for the trunk, both shoulders, and the right wrist. Finally, the 3DSSPP software could be an easy, fast, and costless tool for biomechanical risk assessment in an Industry 4.0 scenario where the ISO 11228 series cannot be applied; it could be used by occupational medicine physicians and health and safety technicians, and could also help employers justify a long-term investment.
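The phase-wise comparisons above are paired (each worker performs the task both with and without the coBot). A minimal sketch of such a paired comparison, with purely illustrative numbers rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical %MVC values for 11 workers (trunk flexion), with (wB) and
# without (woB) the coBot; numbers are illustrative only.
mvc_woB = np.array([63, 61, 66, 64, 62, 65, 63, 64, 62, 66, 63], float)
mvc_wB  = np.array([44, 42, 45, 43, 44, 42, 44, 43, 45, 42, 44], float)

# Paired, non-parametric comparison (small sample, no normality assumed).
stat, p = stats.wilcoxon(mvc_woB, mvc_wB)
print(f"median reduction: {np.median(mvc_woB - mvc_wB):.1f} %MVC, p = {p:.4f}")
```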
Figures:
Figure 1: Some 3DSSPP reconstructions of the three subtasks analyzed: Phase 1 with (a1) and without (b1) the coBot; Phase 2 with (a2) and without (b2) the coBot; and Phase 3 with (a3) and without (b3) the coBot.
Figure 2: Mean and SD values for Phase 1, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.
Figure 3: Mean and SD values for Phase 2, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.
Figure 4: Mean and SD values for Phase 3 without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). When using the Bazar coBot, this phase would be totally automatized, so there are no values with the Bazar (wB).
15 pages, 2585 KiB  
Article
Exploring the Effects of Multi-Factors on User Emotions in Scenarios of Interaction Errors in Human–Robot Interaction
by Wa Gao, Yuan Tian, Shiyi Shen, Yang Ji, Ning Sun, Wei Song and Wanli Zhai
Appl. Sci. 2024, 14(18), 8164; https://doi.org/10.3390/app14188164 - 11 Sep 2024
Abstract
Interaction errors are hard to avoid in human–robot interaction (HRI). Users' emotions toward interaction errors can further affect their attitudes to robots and their experience of HRI. The present study therefore explores the effects of different factors on user emotions when interaction errors occur in HRI, a perspective that has received little direct study. Three factors were considered: robot feedback, passive versus active contexts, and previous user emotions. Two stages of online surveys with 465 participants were implemented to explore attitudes to robots and self-reported emotions in active and passive HRI. A Yanshee robot was then selected as the experimental platform, and 61 participants were recruited for a real human–robot empirical study based on the two surveys. From the statistical analysis we derive design guidelines for coping with interaction-error scenarios: feedback and previous emotions affect user emotions after an interaction error, whereas context does not, and there are no interaction effects among the three factors. Approaches for reducing negative emotions after interaction errors in HRI, such as providing irrelevant feedback, are also discussed.
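The claim of main effects without interactions corresponds to a factorial analysis across the three factors. A minimal sketch of such a test using statsmodels, with hypothetical data and factor labels (not the study's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one emotion score per trial, crossed over
# the three factors studied (all values and labels illustrative).
df = pd.DataFrame({
    "emotion":  [3, 4, 2, 5, 3, 4, 2, 5, 4, 3, 5, 2, 3, 4, 2, 5],
    "feedback": ["relevant", "irrelevant"] * 8,
    "context":  ["active"] * 8 + ["passive"] * 8,
    "prev":     (["positive"] * 4 + ["negative"] * 4) * 2,
})

# Full-factorial model; the interaction terms are what "no interaction
# effects among the three factors" refers to.
model = smf.ols("emotion ~ C(feedback) * C(context) * C(prev)", data=df).fit()
print(anova_lm(model, typ=2))
```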
Figures:
Figure 1: Main flow diagram for the second survey.
Figure 2: (a) Yanshee robot. (b) The experimental framework in real environments.
Figure 3: (a) The flow of participants in the experiments. (b) Example scenario for HRI with interaction errors.
Figure 4: (a) The data on the interaction cues of concern. (b) The score of different interaction cues considering the level of importance.
Figure 5: The percentages of users' emotion types when users encounter CF repeatedly.
Figure 6: (a) The percentages of emotions after interaction errors in CPIR. (b) The percentages of emotions after interaction errors in CAIR.
17 pages, 16821 KiB  
Article
Guessing Human Intentions to Avoid Dangerous Situations in Caregiving Robots
by Noé Zapata, Gerardo Pérez, Lucas Bonilla, Pedro Núñez, Pilar Bachiller and Pablo Bustos
Appl. Sci. 2024, 14(17), 8057; https://doi.org/10.3390/app14178057 - 9 Sep 2024
Abstract
The integration of robots into social environments requires them to interpret human intentions and anticipate potential outcomes accurately. This capability is particularly crucial for social robots designed for human care, which may encounter situations that pose significant risks to individuals, such as undetected obstacles in their path. These hazards must be identified and mitigated promptly to ensure human safety. This paper delves into the artificial theory of mind (ATM) approach to inferring and interpreting human intentions within human–robot interaction. We propose a novel algorithm that detects potentially hazardous situations for humans and selects appropriate robotic actions to eliminate these dangers in real time. Our methodology employs a simulation-based approach to ATM, incorporating a "like-me" policy to assign intentions and actions to human subjects. This strategy enables the robot to detect risks and act with a high success rate, even under time-constrained circumstances. The algorithm was integrated into an existing robotics cognitive architecture, enhancing its social interaction and risk mitigation capabilities. To evaluate the robustness, precision, and real-time responsiveness of our implementation, we conducted a series of three experiments: (i) a fully simulated scenario to assess the algorithm's performance in a controlled environment; (ii) a human-in-the-loop hybrid configuration to test the system's adaptability to real-time human input; and (iii) a real-world scenario to validate the algorithm's effectiveness in practical applications. These experiments provided comprehensive insights into the algorithm's performance across various conditions, demonstrating its potential for improving the safety and efficacy of social robots in human care settings. Our findings contribute to the growing body of research on social robotics and artificial intelligence, offering a promising approach to enhancing human–robot interaction in potentially hazardous environments. Future work may explore the scalability of this algorithm to more complex scenarios and its integration with other advanced robotic systems.
(This article belongs to the Special Issue Advances in Cognitive Robotics and Control)
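A minimal sketch of the kind of "like-me" hazard check the abstract describes: plan the human's route to each candidate target as the human would (ignoring obstacles outside their field of view) and flag routes that cross an unseen obstacle. Geometry, thresholds, and names are hypothetical simplifications, not the authors' algorithm:

```python
import numpy as np

def path_hits(path, obstacle, radius):
    """True if any sample of the straight-line path comes within radius of the obstacle."""
    return bool((np.linalg.norm(path - obstacle, axis=1) < radius).any())

def in_fov(person, heading, point, half_angle):
    """True if point lies within the person's field-of-view cone."""
    v = point - person
    cos_a = v @ heading / (np.linalg.norm(v) * np.linalg.norm(heading))
    return np.arccos(np.clip(cos_a, -1.0, 1.0)) < half_angle

person = np.array([0.0, 0.0])
heading = np.array([3.0, 3.0])          # the person is looking toward the couch
targets = {"door": np.array([4.0, 0.0]), "couch": np.array([3.0, 3.0])}
ball = np.array([2.0, 0.0])             # obstacle lying outside the field of view
half_angle = np.radians(30)

for name, goal in targets.items():
    path = np.linspace(person, goal, 50)
    # "Like-me": the human plans as if unseen obstacles did not exist, so a
    # path crossing an out-of-view obstacle is a hazard the robot must remove.
    if not in_fov(person, heading, ball, half_angle) and path_hits(path, ball, 0.3):
        print(f"hazard: predicted trajectory to the {name} crosses an unseen obstacle")
```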
Figures:
Figure 1: The robot identifies two potential targets for the human: a door and a couch. For each target, it generates a trajectory, excluding from the planning process any obstacles that fall outside the human's field of view. Upon analysis of the generated trajectories, a potential collision with an obstacle is identified in the trajectory corresponding to the door. To avoid this collision and maintain the human's safety, the robot considers a range of potential actions that could eliminate the collision in the human's trajectory towards the door. (a) Estimation of trajectories from the person to the objects of interest. (b) Possible collision-avoidance action: displacement of the robot to the obstacle.
Figure 2: Schematic representation of the CORTEX architecture. The architecture is divided into two levels: cognitive and sub-cognitive. The cognitive level hosts W and the internal simulator (PyBullet), while the sub-cognitive level hosts the software that manages the low-level processes.
Figure 3: Simulated scenario used to evaluate the robot's ability to anticipate and mitigate risks. The room contains several key elements: an autonomous robot positioned near the door, a person located near the wall opposite the door, a door and a couch representing potential targets for the person, and a soccer ball as a dangerous object during human movement. The robot must use its internal model to predict possible trajectories of the person, acting proactively to ensure safety.
Figure 4: Combined view showing the contents of W (up left); a 2D graphical representation of W with two paths going from the person to the couch (up right), where the yellow path represents the route that would take the person directly to the couch without seeing the obstacle, and the red path shows the alternative route when the robot has moved; a zenithal view of the scene as rendered by Webots (down left); and a 3D view of the internal simulator, PyBullet, with simple geometric forms representing the elements in the scene (down right).
Figure 5: Human-in-the-loop experiment (top left to bottom right). The upper half of each frame is the zenithal view rendered by the Webots simulator, where the red, blue, and green axes mark the reference system of the obstacle between the person and the couch. The lower half of each frame is the first-person perspective observed by the human subject undertaking the experimental procedure. As can be seen, the obstacle was outside the field of view. When the robot approached the obstacle (frames 2–3), the subject turned left, avoiding it and safely reaching the couch.
Figure 6: Real-world experiment (left to right). The upper half of the frame shows the view from the robot's camera. The lower half shows a schematic view of W, with the person represented as a yellow circle, the backpack on the floor as a red square, and the target chair as a green square. The robot is colored dark red. The subject walks distractedly towards the chair (frame 1) and reacts when the robot starts moving (frame 2), changing direction and continuing.
16 pages, 1760 KiB  
Article
Robot Control Platform for Multimodal Interactions with Humans Based on ChatGPT
by Jingtao Qu, Mateusz Jarosz and Bartlomiej Sniezynski
Appl. Sci. 2024, 14(17), 8011; https://doi.org/10.3390/app14178011 - 7 Sep 2024
Abstract
This paper presents the architecture of a multimodal human–robot interaction control platform that leverages the advanced language capabilities of ChatGPT to facilitate more natural and engaging conversations between humans and robots. Implemented on the Pepper humanoid robot, the platform aims to enhance communication by providing a richer and more intuitive interface. The motivation behind this study is to improve robot performance in human interaction through cutting-edge natural language processing technology, thereby improving public attitudes toward robots, fostering the development and application of robotic technology, and reducing the negative attitudes often associated with human–robot interactions. To validate the system, we conducted experiments measuring participants' Negative Attitudes toward Robots Scale (NARS) and Robot Anxiety Scale (RAS) scores before and after they interacted with the robot. Statistical analysis of the data revealed a significant improvement in the participants' attitudes and a notable reduction in anxiety following the interaction, indicating that the system holds promise for fostering more positive human–robot relationships.
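A minimal sketch of the dialogue core of such a platform, assuming the OpenAI Python SDK (model name illustrative); the listen/say helpers stand in for the robot's speech recognition and text-to-speech services and are hypothetical placeholders, not Pepper's actual API:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system",
            "content": "You are a friendly humanoid robot. Answer briefly."}]

def listen() -> str:
    """Placeholder for the robot's speech-to-text service."""
    return input("user> ")

def say(text: str) -> None:
    """Placeholder for the robot's text-to-speech service."""
    print("robot>", text)

while True:
    user_text = listen()
    if user_text.lower() in {"quit", "bye"}:
        break
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    say(reply)
```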
Figures:
Figure 1: Robot control platform for multimodal interactions with humans based on ChatGPT.
Figure 2: Sequence of interactions in the proposed architecture, highlighting envisioned actions.
Figure 3: Application working on the Pepper robot; user view of the robot during conversation.
Figure 4: Interaction flow used in experiments.
Figure 5: Results before and after the experiment with the NARS survey.
Figure 6: Results before and after the experiment with the NARS survey, grouped into three factors: S1—negative attitude towards interaction with robots, S2—negative attitude towards the social influence of robots, and S3—negative attitude towards emotions in interaction with robots.
Figure 7: Results before and after the experiment with the RAS survey.
Figure 8: Results before and after the experiment with the RAS survey, grouped into three factors: S1—anxiety towards the communication capability of robots, S2—anxiety towards behavioral characteristics of robots, S3—anxiety towards discourse with robots.
24 pages, 4205 KiB  
Article
Using Mixed Reality for Control and Monitoring of Robot Model Based on Robot Operating System 2
by Dominik Janecký, Erik Kučera, Oto Haffner, Erika Výchlopeňová and Danica Rosinová
Electronics 2024, 13(17), 3554; https://doi.org/10.3390/electronics13173554 - 6 Sep 2024
Abstract
This article presents the design and implementation of an innovative human–machine interface (HMI) in mixed reality for a robot model operating within Robot Operating System 2 (ROS 2). The interface is developed for Microsoft HoloLens 2 hardware and leverages the Unity game engine alongside the Mixed Reality Toolkit (MRTK) to create an immersive mixed reality application. The project uses the Turtlebot 3 Burger robot, simulated within the Gazebo virtual environment, as a representative mechatronic system for demonstration purposes. Communication between the mixed reality application and ROS 2 is facilitated through a publish–subscribe mechanism, using ROS TCP Connector for message serialization between nodes. This interface not only enhances the user experience by allowing real-time monitoring and control of the robotic system but also aligns with the principles of Industry 5.0, emphasizing human-centric and inclusive technological advancements. The practical outcome of this research is a fully functional mixed reality application that integrates seamlessly with ROS 2, showcasing the potential of mixed reality technologies in advancing industrial automation and human–machine interaction.
(This article belongs to the Special Issue Advanced Industry 4.0/5.0: Intelligence and Automation)
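On the ROS 2 side, the publish–subscribe pattern the abstract mentions reduces to nodes exchanging typed messages on topics. A minimal rclpy sketch publishing velocity commands to a simulated Turtlebot; the topic name follows the common convention and all values are illustrative:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class TeleopBridge(Node):
    """Publishes velocity commands to the simulated robot; in the paper's
    architecture the commands would originate from the HoloLens UI and reach
    ROS 2 via ROS TCP Connector."""
    def __init__(self):
        super().__init__("teleop_bridge")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz command stream

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.1   # slow forward motion (illustrative values)
        msg.angular.z = 0.2
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = TeleopBridge()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```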
Figures:
Figure 1: Reality–virtuality continuum.
Figure 2: Illustration of mixed reality in an industrial environment.
Figure 3: Differences between ROS 1 and ROS 2 architectures.
Figure 4: Differences between ROS 1 and ROS 2.
Figure 5: Communication using topics.
Figure 6: Communication using services.
Figure 7: Communication using actions.
Figure 8: Virtual model of Turtlebot 3 in the Gazebo environment.
Figure 9: Illustration of room mapping using Microsoft HoloLens 2.
Figure 10: ROS 2 integration design with the HoloLens 2 application.
Figure 11: Design of the graphical user interface.
Figure 12: Design of communication among nodes.
Figure 13: Air tap gesture demo for HoloLens 2.
Figure 14: Section 1 on UI—input fields.
Figure 15: Section 1 on UI—connection information.
Figure 16: Section 2 on UI—buttons and sliders to move the robot.
Figure 17: Section 3 on UI—text printouts of information about the robot.
Figure 18: Section 4 on UI—sending requests and targets/goals.
Figure 19: Section 5 on UI—UI design (color).
Figure 20: Sending messages by using ROS TCP Connector.
Figure 21: Scheme of the proposed solution—upcoming research.
20 pages, 4733 KiB  
Article
Movement-Based Prosthesis Control with Angular Trajectory Is Getting Closer to Natural Arm Coordination
by Effie Segas, Vincent Leconte, Emilie Doat, Daniel Cattaert and Aymar de Rugy
Biomimetics 2024, 9(9), 532; https://doi.org/10.3390/biomimetics9090532 - 4 Sep 2024
Abstract
Traditional myoelectric controls of trans-humeral prostheses fail to provide intuitive coordination of the necessary degrees of freedom. We previously showed that, by using artificial neural network predictions to reconstruct distal joints based on the shoulder posture and movement goals (i.e., the position and orientation of the targeted object), participants were able to position and orient an avatar hand to grasp objects with natural arm performance. However, this control involved rapid and unintended prosthesis movements at each modification of the movement goal, which is impractical for real-life scenarios. Here, we eliminate these abrupt changes using novel methods based on an angular trajectory, determined from the speed of stump movement and the gap between the current and the "goal" distal configurations. These new controls are tested offline and online (i.e., with participants in the loop) and compared to performance obtained with a natural control. Despite a slight increase in movement time, the new controls allowed twelve able-bodied participants and six participants with trans-humeral limb loss to reach objects at various positions and orientations without prior training. Furthermore, no usability or workload degradation was perceived by participants with upper limb disabilities. The good performance achieved highlights the potential acceptability and effectiveness of these controls for our target population.
(This article belongs to the Special Issue Biomimetic Aspects of Human–Computer Interactions)
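A minimal sketch of the angular-trajectory idea described above: instead of jumping to the ANN-predicted distal posture when the goal changes, close the remaining angular gap over a movement time estimated from the hand speed and the distance left to the target. Names and values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def distal_update(q, q_goal, hand_speed, dist_to_target, dt, eps=1e-6):
    """Close a fraction of the gap between the current distal angles q and the
    ANN-predicted goal posture q_goal. The fraction tracks the estimated time
    remaining in the reach, so a change of movement goal produces a smooth
    angular trajectory rather than an abrupt jump."""
    t_remaining = max(dist_to_target, eps) / max(hand_speed, eps)
    alpha = min(dt / t_remaining, 1.0)   # share of the gap closed this step
    return q + alpha * (q_goal - q)

q = np.zeros(4)                                 # current distal joint angles (rad)
q_goal = np.radians([30.0, -15.0, 20.0, 10.0])  # illustrative goal posture
for step in range(100):
    dist = max(0.5 - 0.004 * step, 0.05)        # shrinking hand-to-target distance
    q = distal_update(q, q_goal, hand_speed=0.4, dist_to_target=dist, dt=0.01)
print(np.degrees(q).round(1))                   # approaches the goal smoothly
```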
Figures:
Figure 1: Methods. (a) 7-DoF kinematic arm chain and Denavit–Hartenberg parameters. (b) Control principle based on angular interpolation. The movement time is estimated from the speed of the hand, which corresponds to the ongoing shoulder velocity, and the remaining distance between the hand and the target. The gap between the current and the "goal" ANN-predicted distal posture is filled based on this estimated movement time.
Figure 2: Protocol. Each box contains the phase name and the name of the control used, with Fam. and Init. Acq. standing for the familiarization and initial acquisition phases, respectively. In Exp1, the order of the PC+ and C+ test phases was counterbalanced among participants. In Exp2, participants performed all the test phases with their stump, except for the one with the natural control, which they performed with their sound limb.
Figure 3: Exp1 and Exp2 performance metrics. The (a) success rate, (b) movement time, (c) shoulder spread volume, and (d) validation time are reported for each test phase. Each grey line corresponds to a participant, with dashed and plain lines indicating participants who began with the C+ and PC+ ANN controls, respectively. Box limits show the first and third quartiles, whereas the inside line shows the median value. Whiskers show min and max values excluding outliers. PC+, C+, PC−, and Nat represent the control used during the phases. Significant differences are indicated by stars, with * for p < 0.05, ** for p < 0.01, and *** for p < 0.001.
Figure 4: Exp1 and Exp2 subjective metrics. (a) Workload assessed with Pros-TLX scores and (b) usability assessed with SUS scores are presented for each test phase. Each grey line represents a participant, with dashed and solid lines indicating participants who began with the C+ and PC+ ANN test phases, respectively. Box boundaries indicate the first and third quartiles, while the inner line represents the median value. Whiskers depict minimum and maximum values excluding outliers. PC+, C+, PC−, and Nat denote the controls used during the phases. Significant differences are indicated by stars, with * for p < 0.05, ** for p < 0.01, and *** for p < 0.001. The minimum possible Pros-TLX score is indicated by a line at 3.5. The colored dashed lines and corresponding adjectives contextualize the SUS score associated with each control, according to [27].
Figure 5: Exp1 online and offline trajectory study. (Upper line) The results for (a) the spectral arc length (SAL), (b) the distal index, and (c) the curvature of the trajectory are derived from data recorded during the experiment. PC+, C+, PC−, and Nat represent the controls used during the test phases. (Lower line) The results for (d) the median mean absolute error, (e) the median hand position distance, and (f) the median hand orientation distance are calculated from simulated trajectories generated offline by the different control types. PC+, C+, and PC− represent the controls used to recreate the trajectories. Each grey line corresponds to a participant of the present study, with dashed and solid lines denoting participants who began with the C+ and PC+ ANN controls, respectively. Box plots display the first and third quartiles, with the inner line indicating the median value. Whiskers represent the minimum and maximum values excluding outliers. Red dotted lines in the SAL graph, with values sourced from [29], are included for comparison. The −2.1 and −1.97 lines indicate the SAL of a participant without arm disability performing a 2D reaching task with a force field before and after learning the task, respectively. The −3.5 line represents the SAL of a hemiparetic patient performing the task after 30 rehabilitation sessions. Significant differences are denoted by stars, with * for p < 0.05, ** for p < 0.01, and *** for p < 0.001.
Figure 6: Exp1 offline trajectory study. The hand's position (a–c) and orientation (d–f) are depicted in the x, y, and z planes over time. The hand movement to reach two consecutive targets is illustrated, and the targets' positions in x, y, and z, as well as their orientations in x and z, are shown in the corresponding graph for comparison. The targets' y orientation is omitted, as no constraints were applied to this dimension (i.e., the user can reach the target in any orientation around this axis).
63 pages, 37620 KiB  
Article
BLUE SABINO: Development of a BiLateral Upper-Limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output
by Christopher K. Bitikofer, Sebastian Rueda Parra, Rene Maura, Eric T. Wolbrecht and Joel C. Perry
Machines 2024, 12(9), 617; https://doi.org/10.3390/machines12090617 - 3 Sep 2024
Abstract
Arm and hand function play a critical role in the successful completion of everyday tasks. Lost function due to neurological impairment impacts millions of lives worldwide. Despite improvements in the ability to assess and rehabilitate arm deficits, knowledge about the underlying sources of impairment and related sequelae remains limited. The comprehensive assessment of function requires the measurement of both biomechanical and neuromuscular contributors to performance during tasks that often use multiple joints and span three-dimensional workspaces. To our knowledge, the complexity of movement and diversity of measures required are beyond the capabilities of existing assessment systems. To bridge current gaps in assessment capability, a new exoskeleton instrument was developed with comprehensive bilateral assessment in mind. The BiLateral Upper-limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output (BLUE SABINO) expands on prior iterations toward full-arm assessment during reach-and-grasp tasks through a dual-arm and dual-hand system with 9 active degrees of freedom per arm and 12 degrees of freedom (six active, six passive) per hand. Joints are powered by electric motors driven by a real-time control system with input from force and force/torque sensors located at all attachment points between the user and the exoskeleton. Biosignals from electromyography and electroencephalography can be measured simultaneously to provide insight into neurological performance during unimanual or bimanual tasks involving arm reach and grasp. Design trade-offs achieve near-human performance in exoskeleton speed and strength, with positional measurement at the wrist having an error of less than 2 mm and a range of motion approximately equivalent to that of the 50th-percentile human. The system's adjustability in seat height, shoulder width, arm length, and orthosis width accommodates subjects from approximately the 5th-percentile female to the 95th-percentile male. Integration between precision actuation, human–robot interaction force-torque sensing, and biosignal acquisition successfully provides simultaneous measurement of human movement and neurological function. The bilateral design enables use with left- or right-side impairments as well as intra-subject performance comparisons. With the resulting instrument, the authors plan to investigate the underlying neural and physiological correlates of arm function, impairment, learning, and recovery.
(This article belongs to the Special Issue Advances in Assistive Robotics)
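A minimal sketch of the joint-level admittance scheme summarized in the abstract (and in the caption of Figure 13 below): the measured human torque drives a virtual inertia-damper whose integrated state becomes the inner-loop target for a PD tracking controller. Parameter names follow the figure caption (m_a, b_v), but the structure and values are illustrative, not the BLUE SABINO controller:

```python
def admittance_target(q_t, qd_t, tau_h, m_a, b_v, dt):
    """Joint-level admittance model: the measured human torque tau_h drives a
    virtual inertia-damper, m_a * qdd = tau_h - b_v * qd, whose integrated
    state (q_t, qd_t) becomes the inner-loop trajectory target."""
    qdd = (tau_h - b_v * qd_t) / m_a
    qd_t += qdd * dt
    q_t += qd_t * dt
    return q_t, qd_t

def pd_torque(q, qd, q_t, qd_t, kp, kd):
    """Inner-loop PD tracking of the admittance state; gravity and friction
    compensation would be added on top, as in the paper's scheme."""
    return kp * (q_t - q) + kd * (qd_t - qd)

# Toy single-joint run: a steady 2 N*m human torque pulls the target along.
q = qd = q_t = qd_t = 0.0
for _ in range(1000):
    q_t, qd_t = admittance_target(q_t, qd_t, tau_h=2.0, m_a=0.5, b_v=4.0, dt=0.001)
    tau_u = pd_torque(q, qd, q_t, qd_t, kp=200.0, kd=20.0)  # sent to the actuator
print(f"target velocity after 1 s: {qd_t:.3f} rad/s")  # -> tau_h / b_v = 0.5
```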
Figures:
Figure 1: Bilateral exoskeleton predecessors to the BLUE SABINO: (left) the EXO-UL7 and (right) the EXO-UL8.
Figure 2: The BLUE SABINO instrument design is composed of a width-adjustable base, a height-adjustable chair, length-adjustable upper-arm and forearm segments, size-adjustable HRA attachments, remote-center four-bar mechanisms, two-DOF shoulder modules (PRISM), and optional 12-DOF hand modules.
Figure 3: The kinematics of the human arm from the shoulder to the wrist can be represented by nine degrees of freedom. (A) Joints J1–J9 and their corresponding anatomical axes (red dashed arrows oriented along the axes of rotation). (B) The kinematics of BLUE SABINO accommodate these nine degrees of freedom per arm (red arrows indicate the selected positive orientation of each joint's rotation axis).
Figure 4: BLUE SABINO rigid body links. The rigid links of the right BLUE SABINO arm are shown in an exploded view (top left and center). Three of the links form movable assemblies composed of various parallel link mechanisms. The upper-arm (top right) and forearm (bottom left) remote-center mechanisms are composed of five primary links and an additional link for arm-length adjustment. PRISM (bottom right) is constructed with ten links, nine moving and one stationary base.
Figure 5: Anthropomorphic arm modeling for human–robot attachments (HRAs). Elliptical profiles for the proximal and distal ends of the (A) upper arm and (B) forearm, and (C) lofted bend-U-shaped profile for the hand.
Figure 6: Adjustable HRA orthotic designs and exploded assembly views. (A) The upper-arm orthosis. (B) The forearm orthosis. (C) The hand orthosis.
Figure 7: Definition of manipulator points (q), axes (ω), and force sensor body frames.
Figure 8: BLUE SABINO 18-DOF bilateral electromechanical system.
Figure 9: Anticipated torque distributions per joint during ADL tasks (adapted from [75]) used to select motors and gears for Joints 1–6.
Figure 10: Layout for BLUE SABINO power and communication distribution.
Figure 11: BLUE SABINO system startup sequence.
Figure 12: Automatic and manual safety systems are integrated into the BLUE SABINO control architecture. Automatic systems provide fast and dependable safety responses, while the manual system allows the user and operator to stop the system manually if needed.
Figure 13: Admittance control scheme for BLUE SABINO. (1) User-applied forces are converted to human joint torques, τ_h. (2) The admittance-control loop uses τ_h to set target states. (3) Joint-level admittance models, including inertia m_a, velocity damping b_v, and velocity-error damping b_ve, set the inner-loop trajectory targets. (4) The trajectory-control loop computes proportional-derivative (PD) admittance-state tracking control torques, τ_PD. (5) Model-based compensation for friction and gravity is added to the control torque, resulting in τ_u. (6) Safety limits are enforced on human–robot interaction forces and joint range of motion. Control-state monitoring disables control torque throughput if any safety limits are exceeded or network/device faults are detected.
Figure 14: System integration phases: (A) an initial two-DOF version supported elbow flexion/extension and forearm pronosupination; (B) the five-DOF version added three orthogonal joints at the shoulder; (C) the future 18-DOF bilateral version adds two joints at the wrist and two joints at the base of the shoulder.
Figure 15: Motion-capture setup and predefined robot trajectories. (Top) The motion of the right-side seven-DOF BLUE SABINO arm is recorded using a set of five Flex 13 IR motion-capture cameras. The cameras track the spatial positions of retroreflective markers on a wrist-mounted motion-capture end-effector part. OptiTrack software fits a rigid body in real time to the marker set to define the position and orientation of P_c. (Bottom) Three-dimensional views of the upper-arm section of BLUE SABINO (orthosis components removed for clarity) are shown in relation to the predefined trajectories traced out for motion-capture experiments. The path is traced by the end-effector point P_c, whose initial position is indicated by the purple sphere and represents the centroid of the rigid body tracked by the motion-capture system.
Figure 16: Sinusoid tracking inputs. The input position (red) and velocity (blue) state target signals are shown for an experiment using joint J5. The PD control torque τ_PD (purple) and the gravity/friction compensation torque τ̂_g + τ̂_f (orange) are combined to generate the control input torque τ_u.
Figure 17: Logarithmic-ramping chirp-state inputs. The input position (red) and velocity (blue) state target signals, and the instantaneous command frequency (black), are shown for the first half of the experiment, with the chirp ramping up between 0.1 and 2 Hz.
Figure 18: Biosignal acquisition validation task. The user begins the task with the fingertips touching the start target (tennis ball) located in the lower front part of the workspace. After hearing an audio cue, the user reaches to the second target (tennis ball) in the upper right part of the workspace, touches it, and returns the hand to the start target. An example trajectory for a single motion is illustrated in pink (left) in the context of a virtual robot model. Pink arrows overlaid on the experimental setup (center) represent the movement from one target to the next and back. The transparent grey scatter (right) illustrates the area traveled in all repetitions on the same virtual robot charts.
Figure 19: Topographical layout of the EEG montage with the reference electrode at A2 (blue) and ground at AFz (red).
Figure 20: EMG montage and target muscles of the upper limb. Five EMG locations were placed on the skin over the shown target muscles. Bipolar electrodes were placed in pairs to enable differential measurement for improved noise rejection.
Figure 21: BLUE SABINO exoskeleton: (A) seven-DOF bilateral arm configuration with task display screen, operator console, control tower, and shoulder-width adjustment mechanism; (B) experimental chair with footrest and control-enable footswitch; (C) overhead and (D) front views with a subject wearing the right-hand three-fingered OTHER Hand module.
Figure 22: OTHER Hand on the BLUE SABINO system.
Figure 23: Animation of BLUE SABINO joints via a kinematics-driven MATLAB script.
Figure 24: Results of motion capture for a segment of the UofI trajectory illustrate the high agreement between the end-effector position measured by motion capture (blue) and the encoder position (red). The absolute difference (purple) remains low, indicating that the robot accurately measures its true position within 0.4 mm on average for the task.
Figure 25: Mean tracking error per shape. The means of the distance error between the end-effector point p_c and the target position are shown in the left and center charts. The left chart shows errors reported by forward kinematics according to joint position measurements. The center chart shows the error according to motion capture. The rightmost chart displays the mean absolute difference between the forward-kinematic and motion-capture measurements. Error bars display 95% confidence intervals of the means. Blue bars indicate statistics computed over individual actions (five repetitions each), while orange bars show statistics over all motions and repetitions (20 repetitions total).
Figure 26: The adjustments supporting 5th- to 95th-percentile users in BLUE SABINO are included in its custom chair, base structure, length-adjustable arms, and the size-adjustable orthotic components forming the human–machine attachments (HMAs) and adjustment mechanisms.
Figure 27: Range-of-motion comparison between healthy male and female ROM reported in [62,63,103,104,105], healthy movements measured by motion capture during ADL tasks [64,106,107], and BLUE SABINO's achieved ROM. BLUE SABINO's ROM encompasses approximately 95% of the ADL motion range for all joints combined. It also covers between 83% and 89% of healthy 50th-percentile ROM on all joints.
Figure 28: Sinusoid tracking-state accuracy. (Top) Position- and velocity-state tracking accuracy is shown for each joint of BLUE SABINO 5-DOF-RIGHT. Progression between states moves clockwise around each circle, with the position and velocity states shown in red and the error shown in blue. Only the portion of the input with the full 10-degree amplitude (between 20 and 80 s in Figure 16) is shown. (Bottom) An enlarged view of the J7 state chart shows the cyclical tracking error in detail. Sixty wave cycles are shown, with measured state line colors progressing from red to yellow to blue to highlight the variation in tracking accuracy between cycles.
Figure 29: Sinusoid tracking phase-magnitude characterization. A single sine-wave input and position/velocity response is shown for J7. The velocity-state measurement is filtered using a noncausal low-pass filter with a 10 Hz cutoff that smooths the signal, making it easier to identify the response peak time. The state time delay β is extracted as the time distance between the peaks of the target and measured position states. The state-magnitude ratios are computed from the measurement and target state values at the identified peak times.
Figure 30: RMS chirp-trajectory state-tracking error vs. frequency.
Figure 31: Chirp-trajectory state-tracking-error variance vs. frequency.
Figure 32: Logarithmic-ramping chirp-tracking time-series results. The input position (red) and velocity (blue) state target signals and the instantaneous command frequency (dashed black) are shown in the first four columns. Command torque signals are shown in the fifth and sixth columns, overlaid with each actuator's continuous and peak torque output band.
Figure 33: Ensemble-averaging robotic measures. (Top) Hand displacement in x (red), y (blue), z (green), and absolute displacement (purple). (Middle) Hand-velocity ensemble average (red) of all measurements (grey). (Bottom) Required shoulder torque to complete each reach trajectory and the ensemble-average torque profile (purple). Individual trajectories from all reach movements and the shoulder torques required to drive the exoskeleton are displayed in grey.
Figure 34: Trial-average EEG, EMG, and robot kinematics. The ensemble-averaged biosignals, including contralateral low-beta EEG at C3, EMG from three shoulder muscles, and robotic measurements including the displacement and velocity of the right hand and the absolute summed magnitude of interaction torque between the user and the robot's shoulder joints.
Figure 35: Topographical progression of EEG power. The top portion of the plot shows Lβ power and μ power at C3, as well as shoulder torque from robot joints (RJ) 3–5, from 1000 ms before the cue presentation to 4000 ms after the cue. On the same timescale along the bottom are topographical heat maps showing the percent change in Lβ power with respect to baseline.
Figure 36: Adjustable HRA based on the ellipse-fit forearm model. (Left) Three-piece HRA design. (Right) Ellipse size range with an enlarged view of the potential range of alignment errors. Colored dots represent the center location of each ellipse when the adjustment is fully contracted (cyan), centered (yellow), and fully expanded (green). Red axes represent the model coordinate system, which was built around the 50th-percentile arm.
Figure A1: A triple-pivot four-bar mechanism is similar to a standard four-bar mechanism (A), with two intermediate links operated in parallel (B), and with both intermediate links extended beyond the output-link pivot to a remote-center (RC) output link (C).
Figure A2: Remote-center mechanisms. (A) Remote-center mechanisms at the upper arm and forearm allow the placement of actuators for internal/external rotation and pronation/supination away from the anatomical centers of rotation. (B) The mechanisms use ball-bearing pairs that are spaced apart to reduce the angular play resulting from each bearing experiencing radial play in opposite directions. Precision shims ensure a snug fit at each bearing interface. The shims and the compression preload applied by the precision shoulder bolts reduce both axial and radial play.
Figure A3: Experimental mean and standard deviation measurements vs. optimal-fit and relaxed-Coulombic-fit models show the torque required to overcome friction in each system motor. The sigmoid model of Equation (18), with parameters fit using FMINCON optimization, best fits the experimentally measured torque–velocity profiles. However, the relaxed model reduces activation in the low-velocity region, improving chatter rejection.
Figure A4: The effect of the friction model using relaxed-fit vs. optimal-fit friction parameters is illustrated. Friction torque compensation for a white-noise-corrupted 1 Hz velocity signal is computed using the proposed sigmoid friction model with both sets of parameters. The relaxed-fit model effectively reduces the effect of discontinuous chatter as velocity passes through 0.
17 pages, 6508 KiB  
Article
Design and Characterisation of a 3D-Printed Pneumatic Rotary Actuator Exploiting Enhanced Elastic Properties of Auxetic Metamaterials
by Francesca Federica Donadio, Donatella Dragone, Anna Procopio, Francesco Amato, Carlo Cosentino and Alessio Merola
Actuators 2024, 13(9), 329; https://doi.org/10.3390/act13090329 - 30 Aug 2024
Abstract
This paper describes the design and characterisation of a novel hybrid pneumatic rotational actuator that aims to overcome the limitations of both rigid and soft actuators while combining their advantages; indeed, the designed actuator consists of a soft air chamber having an auxetic [...] Read more.
This paper describes the design and characterisation of a novel hybrid pneumatic rotational actuator that aims to overcome the limitations of both rigid and soft actuators while combining their advantages; the actuator consists of a soft air chamber with an auxetic structure, constrained between two rigid frames connected by a soft hinge joint inspired by the musculoskeletal structure of a lobster leg. The main goal is to integrate the advantages of soft actuation, such as inherent compliance and safe human–robot interaction, with those of rigid components, i.e., the robustness and structural stability that limit ineffective expansion of the soft counterpart of the actuator. The air chamber and its auxetic structure leverage the hyper-elastic properties of the soft fabrication material, thereby optimising the response and extending the operational range of the rotational actuator. Each component of the hybrid actuator is fabricated by 3D printing based on Fused Deposition Modeling technology; the soft components are made of thermoplastic polyurethane and the rigid components of polylactic acid. The design phases were followed by experimental tests that characterise the hybrid actuation by reproducing the actuator's typical operating conditions. In particular, the actuator response was evaluated in unconstrained expansion and under isometric and isobaric conditions. The experimental results show linearity, good repeatability, and sensitivity of the actuator response to the pneumatic pressure input, together with a small percentage hysteresis, roughly ten times lower than that observed in commercial soft pneumatic actuators (a sketch of how such a hysteresis figure can be computed follows this entry). Full article
(This article belongs to the Special Issue Advanced Technologies in Soft Pneumatic Actuators)
Show Figures

Figure 1: Re-entrant honeycomb (REH): classical (left) and with rounded corners (right).
Figure 2: REH auxetic path with rounded corners.
Figure 3: Comparison between the nominal (a) and inflated (b) states (at a pressure of 200 kPa).
Figure 4: Pneumatic soft actuator (a) and air chamber with auxetic structure (b).
Figure 5: Printing orientation.
Figure 6: Photo of the real setup of the hybrid actuator: soft air chamber (TPU, black) and rigid frames (PLA, white) in the nominal (a) and inflated (b) states.
Figure 7: CAD representation of the setups for unconstrained-expansion (a), isometric (b), and isobaric (c) characterisation.
Figure 8: Unconstrained expansion: characterisation curves. Expansion (blue solid line) and compression (red solid line) curves; regression lines (dashed).
Figure 9: Isometric test at 200 kPa. Expansion (blue solid line) and compression (red solid line) curves; regression line (dashed).
Figure 10: Isobaric tests. Expansion (blue) and compression (red) for the soft actuator at 0 kPa (a), 50 kPa (b), 100 kPa (c), 150 kPa (d), and 200 kPa (e).
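">
The abstract above reports a small percentage hysteresis extracted from the expansion/compression characterisation curves (Figures 8-10). The paper's exact definition is not given in this listing; a common convention, assumed here, is the maximum loading/unloading gap as a percentage of full-scale output. The sweep data below are illustrative, not the paper's measurements.

```python
import numpy as np

def percent_hysteresis(p, y_expand, y_compress):
    """Max loading/unloading gap as a percentage of full-scale output.
    p: pressure samples shared by both sweeps (e.g., 0..200 kPa);
    y_*: actuator response (e.g., rotation angle) on each sweep."""
    y_up = np.asarray(y_expand, dtype=float)
    y_down = np.asarray(y_compress, dtype=float)
    gap = np.abs(y_up - y_down)
    full_scale = y_up.max() - y_up.min()
    return 100.0 * gap.max() / full_scale

# Illustrative sweep: linear response with a small loading/unloading offset
p = np.linspace(0.0, 200.0, 50)                          # kPa
angle_up = 0.15 * p                                      # deg, expansion sweep
angle_down = 0.15 * p + 1.2 * np.sin(np.pi * p / 200.0)  # deg, compression sweep
print(f"hysteresis = {percent_hysteresis(p, angle_up, angle_down):.1f}%")
```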
22 pages, 10563 KiB  
Article
Low-Cost Cable-Driven Robot Arm with Low-Inertia Movement and Long-Term Cable Durability
by Van Pho Nguyen, Wai Tuck Chow, Sunil Bohra Dhyan, Bohan Zhang, Boon Siew Han and Hong Yee Alvin Wong
Robotics 2024, 13(9), 128; https://doi.org/10.3390/robotics13090128 - 27 Aug 2024
Viewed by 806
Abstract
Our study presents a novel design for a cable-driven robotic arm, emphasizing low cost, low-inertia movement, and long-term cable durability. The robotic arm shares similar specifications with the UR5 robotic arm, featuring six degrees of freedom (DOF) distributed in a 1:1:1:3 ratio at the arm base, shoulder, elbow, and wrist, respectively. The three DOF at the wrist are driven by a cable system, with the heavy motors relocated from the end-effector to the shoulder base. This repositioning results in a lighter cable-actuated wrist (weighing 0.8 kg), which enhances safety during human interaction and reduces the torque requirements of the elbow and shoulder motors. Consequently, the overall cost and weight of the robotic arm are reduced, achieving a payload-to-body-weight ratio of 5:8.4 kg. To ensure good positional repeatability, the shoulder and elbow joints, which act through longer moment arms, are designed with a direct-drive structure. To evaluate the design's performance, tests were conducted on loading capability, cable durability, position repeatability, and manipulation. The tests demonstrated that the arm could manipulate a 5 kg payload with a positional repeatability error below 0.1 mm. Additionally, a novel cable-tightener design was introduced, serving two functions: conveniently tightening the cable and reducing the high stress concentration near the cable locking end to minimize cable loosening. When subjected to an initial cable tension of 100 kg, this design retained approximately 80% of the load after 10 years at a room temperature of 24 °C (the capstan-style relation between holding and load tension behind these cable tests is sketched after this entry). Full article
(This article belongs to the Section Industrial Robots and Automation)
Show Figures

Figure 1: (a) Drawbacks of industrial collaborative robot arms with direct-drive joints in handling heavy loads and generating low-inertia interaction. (b) Our solution: a cable-driven robot arm. In the inset picture, the encoder and torque sensor may be located before or after the gearbox.
Figure 2: 3D design of the cable-driven robot arm in (a) perspective view and (b) top view. The dash-line boxes mark the main cluster structures in the robot arm. S_E, S_FU, S_FF, S_W, S_e, and S_P are the center lines of, respectively, the elbow joint, the tube holders at the forearm, the tube holders at the upper arm, the wrist, the end-effector, and the pinion wrist. S_B, S_S, S_F, S_W1, S_W2, and S_W3 are, in turn, the center lines of the shafts of the base motor M_B, shoulder motor M_S, forearm motor M_F, and wrist motors M_W1, M_W2, and M_W3.
Figure 3: Schematic of the 3-DOF differential-gear wrist in (a) isometric, (b) front, and (c) side view. The six cables are labeled c_1-1, c_1-2, c_2-1, c_2-2, c_3-1, and c_3-2.
Figure 4: Inside structure and decoupling mechanism of the 3-DOF differential-gear wrist.
Figure 5: Schematic of the cable-tightening mechanism (a) and the principle of adjusting the cable tension (b). The yellow dashed circle marks the hub surface of the pulley, and S_W1,2,3 denotes S_W1, S_W2, or S_W3. The models in this figure apply to both the wrist and pinion pulleys. The cable is assumed to be locked in the wrist pulley and the male/female pulley, while A and B are the first contact points between the cable and the pulleys.
Figure 6: Experimental relation between T_hold and T_load (a) and the ratio T_hold/T_load on the pulley as a function of the friction coefficient μ and the number of wound rounds (b). In graph (a), the pulley (32 mm diameter), the load cell, and the force sensor are clamped, and the cable, made of Dyneema, has a diameter of 2 mm; the cable is wound several rounds on the pulley hub surface, T_load is set at the load cell, and T_hold is measured at the force sensor. In (b), solid lines show the experimental T_hold/T_load ratio and dotted lines its interpolation over different values of μ.
Figure 7: Front view of the cable layout in the robot arm. M_S, M_F, M_W1, M_W2, and M_W3 are, respectively, the motors driving the shoulder joint, forearm, and wrist. P_E, P_M, P_B, P_P, P_W1, P_W2, P_W3, P_D1, and P_D2 are the elbow, minor, base, planar, wrist, direct-1, and direct-2 pulleys, with center lines S_E, S_M, S_B, S_P, S_W1, S_W2, S_W3, S_D1, and S_D2. Inset images outlined in green and red dashed lines show the front view of the cable layout at the elbow and the decoupling mechanism, respectively.
Figure 8: Kinematic analysis of the 6-DOF cable-driven robot arm. Inset images show the wrist rotations about the three axes z_3, z_4, and z_5, indicated by red dashed lines. The range of motion of the joints is [θ_1, θ_2, θ_3, θ_4, θ_5, θ_6] = [360, 111, 106, 160, 360, 180] (°). θ_5 can reach a full revolution if the rate of θ_6 is zero, and only half a revolution otherwise.
Figure 9: Fabrication and assembly processes for making the robot arm.
Figure 10: Electrical design and control system of the cable-driven robot arm.
Figure 11: Experimental setups for testing cable loosening under a 100 kg static payload (a) and a 2 kg dynamic payload (b), with (b-1), (b-2), and (b-3) for rotation about z_3, z_4, and z_5, respectively.
Figure 12: Experimental setups for testing the repeatability of the arm (a) and the wrist, with (b), (c), and (d) for rotation about z_3, z_4, and z_5, respectively. The grasped object is a box held and manipulated by a suction-cup gripper.
Figure 13: Evaluation of cable loosening under a 2 kg dynamic payload. Each dot marks a measured tension value; the dotted and solid lines interpolate the test data over time.
Figure 14: Cable durability test on the cable tightener with 10 rounds wound on its hub surface under the 100 kg static-payload setup. The blue curve interpolates the experimental data (red markers).
Figure 15: Demonstration of the cable-driven robot arm manipulating objects: (a) filament box, (b) heavy box, (c) foam sheet, (d) pneumatic-joint box, (e) storage box, and (f) component bag. (1), (2), and (3) are the experiment phases, following the order of sucking, lifting, moving, and releasing.
Figure 16: Weight distribution (a) and cost distribution (b) of our robot arm after fabrication.
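">
Figure 6 above characterises the holding-to-load tension ratio as a function of the friction coefficient μ and the number of wound rounds. This is the classic capstan relation; the sketch below applies it with an illustrative μ (the paper's measured values are not reproduced in this listing).

```python
import math

def holding_tension(t_load, mu, n_rounds):
    """Capstan relation: tension decays exponentially with wrap angle,
    so a few rounds on the hub let a small holding force resist a large load."""
    theta = 2.0 * math.pi * n_rounds   # total wrap angle in radians
    return t_load * math.exp(-mu * theta)

t_load = 100.0  # load-side cable tension (kg-force equivalent)
for n in (1, 2, 3, 4):
    t_hold = holding_tension(t_load, mu=0.15, n_rounds=n)  # mu is illustrative
    print(f"{n} rounds: T_hold/T_load = {t_hold / t_load:.3f}")
```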
47 pages, 818 KiB  
Systematic Review
Workplace Well-Being in Industry 5.0: A Worker-Centered Systematic Review
by Francesca Giada Antonaci, Elena Carlotta Olivetti, Federica Marcolin, Ivonne Angelica Castiblanco Jimenez, Benoît Eynard, Enrico Vezzetti and Sandro Moos
Sensors 2024, 24(17), 5473; https://doi.org/10.3390/s24175473 - 23 Aug 2024
Viewed by 573
Abstract
The paradigm of Industry 5.0 pushes the transition from the traditional to a novel, smart, digital, and connected industry, where well-being is key to enhancing productivity, optimizing man–machine interaction, and guaranteeing workers' safety. This work conducts a systematic review of current methodologies for monitoring and analyzing physical and cognitive ergonomics. Three research questions are addressed: (1) which technologies are used to assess the physical and cognitive well-being of workers in the workplace, (2) how the acquired data are processed, and (3) for what purpose this well-being is evaluated. In this way, individual factors within the holistic assessment of worker well-being are highlighted, and information is provided synthetically. The analysis was conducted following the PRISMA 2020 statement guidelines. From the sixty-five articles collected, the most adopted (1) technological solutions, (2) parameters, and (3) data analysis and processing methods were identified. Wearable inertial measurement units and RGB-D cameras are the most prevalent devices for physical monitoring; in cognitive ergonomics, cardiac activity is the most adopted physiological parameter (a sketch of a standard heart-rate-variability index follows this entry). Furthermore, insights on practical issues and future developments are provided. Future research should focus on developing multi-modal systems that combine these aspects, with particular emphasis on their practical application in real industrial settings. Full article
(This article belongs to the Section Industrial Sensors)
Show Figures

Figure 1: PRISMA flowchart.
Figure 2: Number of occurrences of each physical ergonomic index.
Figure 3: Number of surveyed papers per year.
Figure 4: Adoption of the different devices over the time span covered by the review.
Figure 5: Frequency of adoption of each physiological parameter.
Figure 6: Adoption of the different technologies for acquiring physiological data over the time span considered.
Figure 7: Frequency of adoption of the data acquisition technologies.
Figure 8: Frequency of adoption of single vs. multiple physiological measurements.
Figure 9: Frequency of adoption of statistical vs. machine/deep learning approaches.
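">
Since cardiac activity emerges as the most adopted physiological parameter, a concrete example may help: RMSSD is one standard short-term heart-rate-variability index computed from successive beat-to-beat (RR) intervals. A minimal sketch with illustrative interval data follows; it is not tied to any specific study in the review.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (RMSSD),
    a common short-term HRV index; higher values suggest stronger vagal tone."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = [812, 795, 830, 790, 845, 808, 799]  # ms, illustrative beat-to-beat intervals
print(f"RMSSD = {rmssd(rr):.1f} ms")
```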
22 pages, 3190 KiB  
Article
Sustainable Impact of Stance Attribution Design Cues for Robots on Human–Robot Relationships—Evidence from the ERSP
by Dong Lv, Rui Sun, Qiuhua Zhu, Jiajia Zuo and Shukun Qin
Sustainability 2024, 16(17), 7252; https://doi.org/10.3390/su16177252 - 23 Aug 2024
Viewed by 428
Abstract
With the development of large language model technologies, the capability of social robots to interact emotionally with users has been steadily increasing. However, existing research insufficiently examines how robot stance attribution design cues influence the construction of users' mental models and affect human–robot interaction (HRI). This study combines mental models with the associative–propositional evaluation (APE) model, using EEG experiments and survey investigations to unveil the impact of stance attribution explanations on users' mental model construction and on the interaction between the two types of mental models. The results show that under intentional stance explanations (compared to design stance explanations), participants displayed higher error rates, higher θ- and β-band event-related spectral perturbations (ERSPs), and higher phase-locking values (PLVs; a sketch of a PLV computation follows this entry). Intentional stance explanations trigger a primarily associatively based mental model of robots in users, which conflicts with individuals' propositionally based mental models; users may adjust or "correct" their immediate reactions to stance attribution explanations after logical analysis. This study reveals that stance attribution interpretation can significantly affect how users construct mental models of robots, providing a new theoretical framework for exploring human interaction with non-human agents, theoretical support for the sustainable development of human–robot relations, and new ideas for designing robots that are more humane and can better interact with human users. Full article
Show Figures

Figure 1: The intentional stance explanation initiated a conflict between the user's association-based mental model of robots (robots are associated with emotions) and people's logical, analytic, proposition-based mental model (robots do not have emotions), resulting in a heightened cognitive-conflict component for the participants.
Figure 2: Each trial presented a fixation-point picture for 500 ms, followed by a stance attribution explanation picture for the robot for 2000 ms, whose content was either an intentional attribution explanation or a design attribution explanation. Finally, the screen presented, for 2000 ms, a robot avatar and words, divided into emotional words (go stimulus) and neutral words (nogo stimulus). Participants judged whether each word related to emotional feelings: if so, they pressed the "F" key (go); if not, they withheld the response (nogo). Afterwards, they filled out a questionnaire.
Figure 3: Participant error rates in the go/nogo tasks.
Figure 4: ERSP spectrograms and topographic maps evoked by conditioned stimulation at the Cz electrode.
Figure 5: Functional connectivity diagrams.
Figure 6: Results of the questionnaire on participants' attitudes toward robots having emotions.
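">
The abstract reports band-limited ERSP and phase-locking value (PLV) effects. PLV has a standard definition, the magnitude of the mean phase-difference vector between two signals; the sketch below computes it via band-pass filtering and the Hilbert transform on synthetic data. The sampling rate, band edges, and signals are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value between two equal-length signals in a band:
    PLV = |mean(exp(j*(phi_x - phi_y)))|, ranging from 0 to 1."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phi_x = np.angle(hilbert(filtfilt(b, a, x)))
    phi_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

fs = 500  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)        # theta-band source
y = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)  # phase-shifted copy
print(f"theta-band PLV = {plv(x, y, fs, (4, 8)):.2f}")
```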
14 pages, 5143 KiB  
Article
A Self-Powered, Skin Adhesive, and Flexible Human–Machine Interface Based on Triboelectric Nanogenerator
by Xujie Wu, Ziyi Yang, Yu Dong, Lijing Teng, Dan Li, Hang Han, Simian Zhu, Xiaomin Sun, Zhu Zeng, Xiangyu Zeng and Qiang Zheng
Nanomaterials 2024, 14(16), 1365; https://doi.org/10.3390/nano14161365 - 20 Aug 2024
Viewed by 680
Abstract
Human–machine interactions (HMIs) have penetrated various academic and industrial fields, such as robotics, virtual reality, and wearable electronics. However, the practical application of most human–machine interfaces faces notable obstacles due to their complex structures and materials, high power consumption, limited effective skin adhesion, and high cost. Herein, we report a self-powered, skin-adhesive, and flexible human–machine interface based on a triboelectric nanogenerator (SSFHMI). Characterized by a simple structure and low cost, the SSFHMI easily converts touch stimuli into a stable electrical signal at the trigger pressure of a finger touch, without requiring an external power supply. A skeleton spacer was specially designed to increase the stability and homogeneity of the output signals of each TENG unit and to prevent crosstalk between them. Moreover, we constructed a hydrogel adhesive interface with skin-adhesive properties to adapt to easy wear on complex human body surfaces. By integrating the SSFHMI with a microcontroller, a programmable touch operation platform was constructed that is capable of multiple interactions, including medical calling, music media playback, security unlocking, and electronic piano playing (a sketch of how key presses can be decoded from the TENG array follows this entry). This self-powered, cost-effective SSFHMI holds potential relevance for the next generation of highly integrated and sustainable portable smart electronic products and applications. Full article
(This article belongs to the Special Issue Self-Powered Flexible Sensors Based on Triboelectric Nanogenerators)
Show Figures

Figure 1: Overview of the proposed SSFHMI. (a) The proposed SSFHMI and its applications in intelligent interaction, medical calls, media player control, password unlocking, and electronic keyboard playing. (b) Construction of the SSFHMI. (c) Atomic-scale and macroscopic charge-transfer mechanisms during friction between PDMS and PTFE: (i) separated, (ii) compressing, (iii) compressed, and (iv) separating stages. (d) Mechanism of the adhesion effect after contact between the adhesive layer and human tissue. (e) Photographs of the proposed SSFHMI: (i) MCU connected to the SSFHMI; (ii) SSFHMI attached to the back of the hand.
Figure 2: Preparation and properties of the hydrogel-based adhesive layer. (a) Materials used to prepare the adhesive layer. (b) Steps and methods for preparing the adhesive layer. (c) Tensile stress-strain and dynamic strain-amplitude curves of the nanocomposite hydrogel adhesives: (i) tensile stress-strain curves for different LAP nanosheet concentrations; (ii) self-healing characteristic curve. (d) Photograph of the adhesive layer bonding the SSFHMI to biological tissue, and the viscosity curve of the adhesive layer.
Figure 3: Electrical characterization of the TENG. (a) Macro-level working mechanism of a single-electrode TENG: (i) compressing, (ii) compressed, (iii) separating, (iv) separated. (b-d) Open-circuit voltage, short-circuit current, and short-circuit charge at a working frequency of 3 Hz. (e) Output voltage, current, and power density under different external load resistances. (f) Output voltage of the TENG over 18,000 working cycles.
Figure 4: Output characterization of the TENG array on the proposed SSFHMI. (a) Output waveforms of the nine TENGs. (b) Output voltages of the nine TENGs. (c) Results of two-sample t-tests between TENGs. (d) 3D heat map corresponding to different pressing positions on the TENG array.
Figure 5: Signal coding of the proposed SSFHMI for intelligent control. (a) Schematic of human–computer interaction using the SSFHMI. (b) Password-lock function of the T9 keyboard and the 3D heat map of the corresponding operations. (c) Application to medical calls: (i) practical demonstration; (ii) comparison with similar work in structural complexity and anti-interference [45,46,47,48,49,50,51,52].
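">
The entry describes mapping touch on a 3x3 TENG array (Figure 4) to programmable actions. As an illustration of the decoding step only, the sketch below thresholds nine single-electrode channel voltages and returns the pressed key; the key layout, threshold, and channel ordering are hypothetical, not taken from the paper.

```python
import numpy as np

# Assumed 3x3 key layout matching the nine TENG units (hypothetical mapping)
KEYS = np.array([["1", "2", "3"],
                 ["4", "5", "6"],
                 ["7", "8", "9"]])

def decode_press(voltages, threshold=1.0):
    """Return the key whose TENG channel exceeds the threshold, if any.
    voltages: 9 single-electrode outputs, row-major over the 3x3 array."""
    v = np.asarray(voltages, dtype=float)
    idx = int(np.argmax(v))
    if v[idx] < threshold:
        return None            # no touch: all channels below trigger level
    return KEYS[idx // 3, idx % 3]

print(decode_press([0.1, 0.0, 0.2, 0.05, 2.4, 0.1, 0.0, 0.1, 0.0]))  # -> "5"
```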
12 pages, 7357 KiB  
Article
User-Centered Evaluation of the Wearable Walker Lower Limb Exoskeleton: Preliminary Assessment Based on the EXPERIENCE Protocol
by Cristian Camardella, Vittorio Lippi, Francesco Porcini, Giulia Bassani, Lucia Lencioni, Christoph Mauer, Christian Haverkamp, Carlo Alberto Avizzano, Antonio Frisoli and Alessandro Filippeschi
Sensors 2024, 24(16), 5358; https://doi.org/10.3390/s24165358 - 19 Aug 2024
Viewed by 644
Abstract
Using lower limb exoskeletons provides potential advantages in terms of productivity and safety, associated with reduced stress. However, complex issues in human–robot interaction are still open, such as the physiological effects of exoskeletons and their impact on the user's subjective experience. In this work, an innovative exoskeleton, the Wearable Walker, is assessed using the EXPERIENCE benchmarking protocol from the EUROBENCH project. The Wearable Walker is a lower-limb exoskeleton that enhances human abilities, such as carrying loads. The device uses a control approach called Blend Control that provides smooth assistance torques: it runs two models simultaneously, one for when the left foot is grounded and one for when the right foot is grounded, and combines the assistive torques they generate into continuous, smooth overall assistance, preventing abrupt torque changes due to model switching (a minimal sketch of such torque blending follows this entry). The EXPERIENCE protocol consists of walking on flat ground while gathering physiological signals, such as heart rate, heart-rate variability, respiration rate, and galvanic skin response, and completing a questionnaire. The test was performed with five healthy subjects. The scope of the present study is twofold: to evaluate this specific exoskeleton and its current control system to gain insight into possible improvements, and to present a case study of formal, replicable benchmarking of wearable robots. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
Show Figures

Figure 1: The Wearable Walker lower limb exoskeleton with the sensing devices used in the experiment. The CAD models show the kinematic variables and detail the two degrees of freedom of the thigh harness with respect to the exoskeleton's thigh link.
Figure 2: Electronics, computing units, and assistance-computation architecture of the Wearable Walker lower limb exoskeleton. In the bottom-right scheme, red blocks highlight the components of the assistive torques reported in Equation (2).
Figure 3: Scores of the physiological PIs of the EXPERIENCE protocol for each volunteer as a function of time. Each color corresponds to one volunteer. The three protocol phases, i.e., SIT, SIT EXO, and WALK, are plotted together and marked in the figures.
Figure 4: Scores of the psychophysiological performance indicators (PIs) of the EXPERIENCE protocol for each volunteer. Each point on the x-axis represents the average of a 1-minute recording; the curves show the evolution of these PIs over time.
Figure 5: Distribution of answers to the questionnaire items. Bars report the average score; whiskers, the standard deviation.
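">
The abstract outlines Blend Control: two single-stance models run in parallel, and their assistive torques are combined smoothly rather than switched. The paper's actual blending law is not given in this listing; the sketch below illustrates the general idea with a load-share weight, which is an assumption for illustration only.

```python
import numpy as np

def blend_torque(tau_left_model, tau_right_model, w_left):
    """Blend Control sketch: combine the two single-stance model torques with
    a continuous weight instead of switching models, avoiding torque jumps."""
    return w_left * tau_left_model + (1.0 - w_left) * tau_right_model

def stance_weight(f_left, f_right, eps=1e-6):
    """Continuous left-stance weight from measured foot loads (assumption:
    load share is a reasonable proxy for which model should dominate)."""
    return f_left / (f_left + f_right + eps)

# Example: mid double-stance, left foot carrying 70% of the load
tau_left = np.array([12.0, -3.0])   # Nm, hip/knee torques from left-grounded model
tau_right = np.array([8.0, -1.5])   # Nm, from right-grounded model
w = stance_weight(f_left=0.7, f_right=0.3)
print(blend_torque(tau_left, tau_right, w))  # smooth combination of the two
```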
19 pages, 2723 KiB  
Review
Study of Human–Robot Interactions for Assistive Robots Using Machine Learning and Sensor Fusion Technologies
by Ravi Raj and Andrzej Kos
Electronics 2024, 13(16), 3285; https://doi.org/10.3390/electronics13163285 - 19 Aug 2024
Viewed by 738
Abstract
In recent decades, robots' capacities for understanding, perception, learning, and action have expanded widely as artificial intelligence (AI) has been integrated into almost every system. Cooperation between AI and human beings will be central to the future of AI technology. Moreover, even a well-designed manually or automatically controlled machine or device must work together with a human across multiple levels of automation and assistance. Humans and robots cooperate and interact in various ways, and as robots become able to perform more work autonomously, we need to consider human–robot cooperation, the required software architectures, and the design of user interfaces. This paper describes the most important strategies for human–robot interaction and the relationships between several control and cooperation techniques using sensor fusion and machine learning (ML) (a minimal sensor-fusion sketch follows this entry). Based on human behavior and thinking, a human–robot interaction (HRI) framework is studied and explored to make systems attractive, safe, and efficient. Additionally, research on intention recognition, compliance control, and environment perception by elderly-assistive robots for the optimization of HRI is investigated. Furthermore, we describe the theory of HRI and explain the different kinds of interactions and the details required for both humans and robots to perform them, including the circumstances-based evaluation technique, the most important criterion for assistive robots. Full article
Show Figures

Figure 1: Process of human–robot interaction.
Figure 2: Collaborative control for human–robot cooperation.
Figure 3: The robotic perception system.
Figure 4: Different sensors and their corresponding applications in robotics [34].
Figure 5: Arm-mounted IR sensor and data processor [39].
Figure 6: Complete IR-sensor-based navigation system for the blind: (a) the sensor module; (b) the notice module [39].
Figure 7: An assistive cobot [73].
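">
The review surveys sensor fusion for assistive HRI. As one concrete, minimal instance of the idea, the sketch below fuses gyroscope and accelerometer data with a complementary filter to estimate an orientation angle, a building block for tracking a limb or device in assistive settings. The sampling rate, noise levels, and gain alpha are illustrative assumptions, not drawn from the review.

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt, alpha=0.98):
    """Fuse gyroscope integration (smooth but drifting) with accelerometer
    tilt (noisy but drift-free): angle = alpha*(angle + gyro*dt) + (1-alpha)*acc."""
    angle = acc_angle[0]
    out = []
    for a, g in zip(acc_angle, gyro_rate):
        angle = alpha * (angle + g * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

dt = 0.01                                            # 100 Hz IMU, assumed
t = np.arange(0, 5, dt)
true_angle = 20.0 * np.sin(0.5 * np.pi * t)          # deg, reference motion
acc = true_angle + 3.0 * np.random.randn(t.size)     # noisy accelerometer tilt
gyro = np.gradient(true_angle, dt) + 0.5             # rate with a constant bias
est = complementary_filter(acc, gyro, dt)
print(f"RMS error: {np.sqrt(np.mean((est - true_angle) ** 2)):.2f} deg")
```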