
Robotic Contact with the Human Body in Physical Human–Robot Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 July 2023) | Viewed by 37534

Special Issue Editors


Guest Editor
Robotics and Mechatronics Group, Escuela de Ingenierías Industriales, Universidad de Málaga, 29071 Málaga, Spain
Interests: human–robot interaction

Guest Editor
Robotics and Mechatronics Lab, Systems Engineering and Automation Department, University of Málaga, Calle Dr. Ortiz Ramos, 29010 Málaga, Spain
Interests: physical human–robot interaction; human–robot collaboration; haptics

Special Issue Information

Dear Colleagues, 

Despite the progress in robotics in recent decades, physical interaction between humans and robots remains an underdeveloped field, mainly because of safety requirements and the complexity of the task. A human is not a typical target for robot manipulation, yet robots already make contact with humans in existing applications such as rehabilitation, prosthetics, or feeding assistance. In these applications, contact is typically initiated or prepared by a human, and the remaining task is performed autonomously.

 

As robots become more intelligent, they are assigned tasks that carry greater responsibility. There are many circumstances in which a robot must physically interact with a human in a fully autonomous way, including approach and contact operations, in applications such as rescue, nursing, and elderly or child assistance.

 

This Special Issue focuses on the main challenges for successful autonomous physical interaction with humans: pre-contact human detection and perception, the development of sensorized human-friendly grippers and manipulators, and methods to estimate and identify the parameters of the human model during task execution.

 

We invite authors to submit original research, new developments, experimental works, and surveys within the field of physical human–robot interaction (pHRI). The topics of interest of this Special Issue include, but are not limited to:

  • Robot-to-human manipulation
  • Physical human–robot collaboration (HRC)
  • Assistive and rehabilitation robotics
  • Haptic perception for pHRI
  • Physical devices for pHRI
  • Human-friendly grippers
  • Soft robotics for pHRI
  • Human modeling
  • Human kinodynamics estimation
  • Motion and trajectory planning in pHRI applications
  • Wearable robotics
  • Exoskeletons
  • Robotic prostheses
  • Biomedical sensors
  • Robot learning for pHRI
  • Human-in-the-loop
  • Sensor fusion in pHRI applications
  • Computer vision for pHRI
  • Robot-assisted ergonomics
  • Mobile manipulation for HRC
  • Floating-base robots for HRC

Dr. Jesús Manuel Gómez de Gabriel

Dr. Juan Manuel Gandarias
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • vision-based human pose estimation
  • tactile sensing
  • haptic perception
  • grippers for physical human–robot interaction
  • biomedical sensors
  • motion planning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (12 papers)


Research


14 pages, 5564 KiB  
Article
Validation of a Robotic Testbench for Evaluating Biomechanical Effects of Implant Rotation in Total Knee Arthroplasty on a Cadaveric Specimen
by Nikolas Wilhelm, Constantin von Deimling, Sami Haddadin, Claudio Glowalla and Rainer Burgkart
Sensors 2023, 23(17), 7459; https://doi.org/10.3390/s23177459 - 27 Aug 2023
Cited by 1 | Viewed by 1411
Abstract
In this study, we developed and validated a robotic testbench to investigate the biomechanical compatibility of three total knee arthroplasty (TKA) configurations under different loading conditions, including varus–valgus and internal–external loading across defined flexion angles. The testbench captured force–torque data, position, and quaternion information of the knee joint. A cadaver study was conducted, encompassing a native knee joint assessment and successive TKA testing, featuring femoral component rotations at −5°, 0°, and +5° relative to the transepicondylar axis of the femur. The native knee showed enhanced stability in varus–valgus loading, with the +5° external rotation TKA displaying the smallest deviation, indicating biomechanical compatibility. The robotic testbench consistently demonstrated high precision across all loading conditions. The findings demonstrated that the TKA configuration with a +5° external rotation displayed the minimal mean deviation under internal–external loading, indicating superior joint stability. These results contribute meaningful understanding regarding the influence of different TKA configurations on knee joint biomechanics, potentially influencing surgical planning and implant positioning. We are making the collected dataset available for further biomechanical model development and plan to explore the 6 Degrees of Freedom (DOF) robotic platform for additional biomechanical analysis. This study highlights the versatility and usefulness of the robotic testbench as an instrumental tool for expanding our understanding of knee joint biomechanics. Full article
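The varus–valgus and internal–external deviation angles above are derived from the recorded orientation data. As a rough, generic illustration (not the authors' analysis code), the relative rotation between a reference and a measured knee orientation can be reduced to a single deviation angle directly from the quaternions; the quaternion convention and the example values below are assumptions.

```python
import numpy as np

def deviation_angle_deg(q_ref, q_meas):
    """Angle (in degrees) between two unit quaternions given as [w, x, y, z].

    Generic helper: reduces the relative rotation between a reference and a
    measured orientation to a single deviation angle.
    """
    q_ref = np.asarray(q_ref, dtype=float) / np.linalg.norm(q_ref)
    q_meas = np.asarray(q_meas, dtype=float) / np.linalg.norm(q_meas)
    dot = abs(np.dot(q_ref, q_meas))           # abs() handles the quaternion double cover
    return 2.0 * np.degrees(np.arccos(np.clip(dot, 0.0, 1.0)))

# Example: a 5-degree rotation about the x axis relative to the identity orientation.
half = np.radians(5.0) / 2.0
print(deviation_angle_deg([1, 0, 0, 0], [np.cos(half), np.sin(half), 0, 0]))  # ~5.0
```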
Figures: (1) testbench architecture with optical measuring system, 6-DOF Stäubli RX90B robot with force–torque sensor, CT imaging, and central evaluation system; (2) steps of the modified TKA procedure on the cadaveric knee joint; (3) 3D Slicer segmentation of the knee joint and tailored femoral components at −5°, 0°, and +5° rotation; (4) complete test setup, femur-tracker/end-effector synchronization, and acquired force and torque data for the native knee; (5) maximum deviation angles for varus, valgus, internal, and external loading at 5 Nm across flexion angles of 10–90°; (6) Kiviat diagram of the Total Deviation Angle for the native knee and the TKA variants.
17 pages, 10382 KiB  
Article
Hold My Hand: Development of a Force Controller and System Architecture for Joint Walking with a Companion Robot
by Enrique Coronado, Toshifumi Shinya and Gentiane Venture
Sensors 2023, 23(12), 5692; https://doi.org/10.3390/s23125692 - 18 Jun 2023
Viewed by 1773
Abstract
In recent years, there has been a growing interest in the development of robotic systems for improving the quality of life of individuals of all ages. Specifically, humanoid robots offer advantages in terms of friendliness and ease of use in such applications. This article proposes a novel system architecture that enables a commercial humanoid robot, specifically the Pepper robot, to walk side-by-side while holding hands, and communicating by responding to the surrounding environment. To achieve this control, an observer is required to estimate the force applied to the robot. This was accomplished by comparing joint torques calculated from the dynamics model to actual current measurements. Additionally, object recognition was performed using Pepper’s camera to facilitate communication in response to surrounding objects. By integrating these components, the system has demonstrated its capability to achieve its intended purpose. Full article
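The force observer described in the abstract compares joint torques predicted by the dynamics model with torques inferred from motor current measurements. The sketch below is a minimal illustration of that idea; the function names, the current-to-torque constants, and the pseudo-inverse mapping from joint torques to a hand wrench are illustrative assumptions, not Pepper's API or the authors' implementation.

```python
import numpy as np

def estimate_hand_force(q, dq, ddq, currents, torque_constants,
                        inverse_dynamics, jacobian):
    """Illustrative external-force observer for an arm like Pepper's.

    q, dq, ddq       : joint positions, velocities, accelerations
    currents         : measured motor currents per joint
    torque_constants : assumed current-to-torque gains (Nm/A)
    inverse_dynamics : callable returning model torques for (q, dq, ddq)
    jacobian         : callable returning the 6 x n hand Jacobian at q
    """
    tau_measured = torque_constants * currents   # torque inferred from motor currents
    tau_model = inverse_dynamics(q, dq, ddq)     # torque predicted by the dynamics model
    tau_external = tau_measured - tau_model      # residual attributed to physical contact
    J = jacobian(q)
    # Least-squares mapping from joint torques to a wrench at the hand:
    # tau_ext = J^T @ F  =>  F = pinv(J^T) @ tau_ext
    wrench = np.linalg.pinv(J.T) @ tau_external
    return wrench[:3]                            # force component only
```

In a hand-holding controller, the estimated force would then drive the walking velocity through an admittance or compliance law.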
Figures: (1) proposed system architecture; (2) general control algorithm; (3) camera acquisition interface for the Pepper camera; (4) YOLO-based object recognition application; (5) Pepper's 9-link geometric model from foot to left hand; (6) coordinate system and applied-force directions on the left hand; (7) gesture of looking and pointing at a recognized object; (8) object recognition while walking with Pepper in a public space; (9) force estimation results while walking; (10) object recognition at 2 m and 4 m; (11) low-resolution object recognition at 2 m and 4 m; (12) force estimation experiment results.
17 pages, 2040 KiB  
Article
Behavioural Models of Risk-Taking in Human–Robot Tactile Interactions
by Qiaoqiao Ren, Yuanbo Hou, Dick Botteldooren and Tony Belpaeme
Sensors 2023, 23(10), 4786; https://doi.org/10.3390/s23104786 - 16 May 2023
Viewed by 1623
Abstract
Touch can have a strong effect on interactions between people, and as such, it is expected to be important to the interactions people have with robots. In an earlier work, we showed that the intensity of tactile interaction with a robot can change how much people are willing to take risks. This study further develops our understanding of the relationship between human risk-taking behaviour, the physiological responses by the user, and the intensity of the tactile interaction with a social robot. We used data collected with physiological sensors during the playing of a risk-taking game (the Balloon Analogue Risk Task, or BART). The results of a mixed-effects model were used as a baseline to predict risk-taking propensity from physiological measures, and these results were further improved through the use of two machine learning techniques—support vector regression (SVR) and multi-input convolutional multihead attention (MCMA)—to achieve low-latency risk-taking behaviour prediction during human–robot tactile interaction. The performance of the models was evaluated based on mean absolute error (MAE), root mean squared error (RMSE), and R squared score (R2), which obtained the optimal result with MCMA yielding an MAE of 3.17, an RMSE of 4.38, and an R2 of 0.93 compared with the baseline of 10.97 MAE, 14.73 RMSE, and 0.30 R2. The results of this study offer new insights into the interplay between physiological data and the intensity of risk-taking behaviour in predicting human risk-taking behaviour during human–robot tactile interactions. This work illustrates that physiological activation and the intensity of tactile interaction play a prominent role in risk processing during human–robot tactile interaction and demonstrates that it is feasible to use human physiological data and behavioural data to predict risk-taking behaviour in human–robot tactile interaction. Full article
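As a rough illustration of the regression-and-metrics setup described above (not the authors' pipeline), the sketch below fits a support vector regressor to placeholder physiological features and reports the same MAE, RMSE, and R² scores; the synthetic data, feature count, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Placeholder data: one row of physiological features per BART balloon trial,
# with the number of pumps (risk-taking propensity) as the regression target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # e.g., GSR, heart-rate, EEG summary features
y = X[:, 0] * 5 + rng.normal(size=500)    # synthetic target for demonstration only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = mean_squared_error(y_test, pred) ** 0.5
r2 = r2_score(y_test, pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.2f}")
```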
Figures: (1) the four interaction conditions; (2) model diagnostic plots (residuals vs. fitted, normal Q-Q, scale-location, residuals vs. leverage); (3) lasso coefficient path with cross-validated lambda values; (4) data flow of the mixed-effects model; (5) mixed-effects model actual vs. predicted risk-taking behaviour; (6) data flow of the SVR model; (7) SVR actual vs. predicted risk-taking behaviour; (8) the proposed MCMA model; (9) the convolutional block of the MCMA model.
14 pages, 3209 KiB  
Article
A Predictable Obstacle Avoidance Model Based on Geometric Configuration of Redundant Manipulators for Motion Planning
by Fengjia Ju, Hongzhe Jin, Binluan Wang and Jie Zhao
Sensors 2023, 23(10), 4642; https://doi.org/10.3390/s23104642 - 10 May 2023
Cited by 2 | Viewed by 1520
Abstract
When a manipulator works in dynamic environments, it may be affected by obstacles and may cause danger to people around. This requires the manipulator to be able to plan the obstacle avoidance motion in real time. Therefore, the problem solved in this paper is dynamic obstacle avoidance with the whole body of the redundant manipulator. The difficulty of this problem is how to model the manipulator to reflect the motion relationship between the manipulator and the obstacle. In order to describe accurately the occurrence conditions of the collision, we propose the triangular collision plane, a predictable obstacle avoidance model based on the geometric configuration of the manipulator. Based on this model, three cost functions, including the cost of the motion state, the cost of a head-on collision, and the cost of the approach time, are established and regarded as optimization objectives in the inverse kinematics solution of the redundant manipulator combined with the gradient projection method. The simulations and experiments on the redundant manipulator and the comparison with the distance-based obstacle avoidance point method show that our method improves the response speed of the manipulator and the safety of the system. Full article
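The gradient projection method mentioned in the abstract has a standard velocity-level form: the Jacobian pseudo-inverse tracks the end-effector command while the null-space projector descends an obstacle-avoidance cost with the redundant degrees of freedom. The sketch below states only that generic scheme; the triangular-collision-plane costs (motion state, head-on collision, approach time) are not reproduced and enter through a user-supplied gradient.

```python
import numpy as np

def gradient_projection_step(J, x_dot_desired, grad_cost, k_null=1.0):
    """One velocity-level IK step for a redundant manipulator.

    J             : m x n task Jacobian (n > m for redundancy)
    x_dot_desired : desired end-effector velocity (length m)
    grad_cost     : gradient of the obstacle-avoidance cost w.r.t. joint angles (length n)
    k_null        : gain on the null-space (self-motion) term
    """
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_projector = np.eye(n) - J_pinv @ J      # projects into the Jacobian null space
    # Track the task while descending the cost with the redundant degrees of freedom.
    q_dot = J_pinv @ x_dot_desired - k_null * null_projector @ grad_cost
    return q_dot
```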
Figures: (1) simplified manipulator and obstacle models in the triangular collision plane method; (2) the triangular collision plane model; (3) the collision cost neuron and control diagram of the manipulator system; (4) the collision line segment model; (5) comparison of TCPM and OAPM on the ABB YuMi in the one-triangle case (screenshots, joint angles/velocities/accelerations, costs, and critical distance); (6) the same comparison in the two-triangle case.
20 pages, 4030 KiB  
Article
Spatial Calibration of Humanoid Robot Flexible Tactile Skin for Human–Robot Interaction
by Sélim Chefchaouni Moussaoui, Rafael Cisneros-Limón, Hiroshi Kaminaga, Mehdi Benallegue, Taiki Nobeshima, Shusuke Kanazawa and Fumio Kanehiro
Sensors 2023, 23(9), 4569; https://doi.org/10.3390/s23094569 - 8 May 2023
Cited by 2 | Viewed by 3105
Abstract
Recent developments in robotics have enabled humanoid robots to be used in tasks where they have to physically interact with humans, including robot-supported caregiving. This interaction—referred to as physical human–robot interaction (pHRI)—requires physical contact between the robot and the human body; one way to improve this is to use efficient sensing methods for the physical contact. In this paper, we use a flexible tactile sensing array and integrate it as a tactile skin for the humanoid robot HRP-4C. As the sensor can take any shape due to its flexible property, a particular focus is given on its spatial calibration, i.e., the determination of the locations of the sensor cells and their normals when attached to the robot. For this purpose, a novel method of spatial calibration using B-spline surfaces has been developed. We demonstrate with two methods that this calibration method gives a good approximation of the sensor position and show that our flexible tactile sensor can be fully integrated on a robot and used as input for robot control tasks. These contributions are a first step toward the use of flexible tactile sensors in pHRI applications. Full article
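To illustrate what spatial calibration of a flexible sensing array provides in practice, the sketch below evaluates a fitted spline surface to recover a cell's position and outward normal, which is the kind of information the calibration must deliver to the controller. It uses SciPy's rectangular bivariate spline over an invented height map; the paper's B-spline calibration over the robot's mesh is more general, so the grid, surface, and query coordinates here are placeholders.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Placeholder height map of the skin patch over a regular (u, v) parameter grid.
u = np.linspace(0.0, 1.0, 20)
v = np.linspace(0.0, 1.0, 20)
U, V = np.meshgrid(u, v, indexing="ij")
Z = 0.02 * np.sin(np.pi * U) * np.sin(np.pi * V)    # gently curved surface, in metres

surface = RectBivariateSpline(u, v, Z, kx=3, ky=3)  # cubic spline surface z = f(u, v)

def cell_pose(ui, vi):
    """Position and outward normal of a sensor cell at parameters (ui, vi)."""
    z = surface(ui, vi)[0, 0]
    dz_du = surface(ui, vi, dx=1)[0, 0]
    dz_dv = surface(ui, vi, dy=1)[0, 0]
    position = np.array([ui, vi, z])
    normal = np.array([-dz_du, -dz_dv, 1.0])         # normal of the graph surface
    return position, normal / np.linalg.norm(normal)

print(cell_pose(0.3, 0.7))
```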
Figures: (1) HRP-4C with the flexible tactile sensor on its right wrist and the flattened sensing array with Bluetooth acquisition module; (2) simplified control diagram from tactile information to the mc_rtc-based (admittance) controller; (3) construction process of the B-spline surface from a mesh; (4) flowchart of the experimental process for computing applied forces and contact points; (5) estimation of the pressure points via equivalent forces and via motion capture; (6) visualization of the obtained positions and directions; (7) absolute error between direction vectors from the force/torque sensor and the calibration method; (8) mean position error on intersection points.
16 pages, 6539 KiB  
Article
A Trade-Off between Complexity and Interaction Quality for Upper Limb Exoskeleton Interfaces
by Dorian Verdel, Guillaume Sahm, Olivier Bruneau, Bastien Berret and Nicolas Vignais
Sensors 2023, 23(8), 4122; https://doi.org/10.3390/s23084122 - 20 Apr 2023
Cited by 2 | Viewed by 2531
Abstract
Exoskeletons are among the most promising devices dedicated to assisting human movement during reeducation protocols and preventing musculoskeletal disorders at work. However, their potential is currently limited, partially because of a fundamental contradiction impacting their design. Indeed, increasing the interaction quality often requires the inclusion of passive degrees of freedom in the design of human-exoskeleton interfaces, which increases the exoskeleton’s inertia and complexity. Thus, its control also becomes more complex, and unwanted interaction efforts can become important. In the present paper, we investigate the influence of two passive rotations in the forearm interface on sagittal plane reaching movements while keeping the arm interface unchanged (i.e., without passive degrees of freedom). Such a proposal represents a possible compromise between conflicting design constraints. The in-depth investigations carried out here in terms of interaction efforts, kinematics, electromyographic signals, and subjective feedback of participants all underscored the benefits of such a design. Therefore, the proposed compromise appears to be suitable for rehabilitation sessions, specific tasks at work, and future investigations into human movement using exoskeletons. Full article
Figures: (1) task illustration with the participant's posture in the exoskeleton, force-sensor axes, passive rotations, and target locations; (2) control structure of the exoskeleton; (3) maximum and averaged interaction force and torque components at the arm and forearm for the Rot and noRot conditions; (4) average index trajectories, velocity, and acceleration profiles for both targets; (5) average RMS of the muscle groups per condition; (6) average answers to the ergonomic feedback questionnaire.
20 pages, 3775 KiB  
Article
Building a Low-Cost Wireless Biofeedback Solution: Applying Design Science Research Methodology
by Chih-Feng Cheng and Chiuhsiang Joe Lin
Sensors 2023, 23(6), 2920; https://doi.org/10.3390/s23062920 - 8 Mar 2023
Cited by 1 | Viewed by 4100
Abstract
In recent years, affective computing has emerged as a promising approach to studying user experience, replacing subjective methods that rely on participants’ self-evaluation. Affective computing uses biometrics to recognize people’s emotional states as they interact with a product. However, the cost of medical-grade biofeedback systems is prohibitive for researchers with limited budgets. An alternative solution is to use consumer-grade devices, which are more affordable. However, these devices require proprietary software to collect data, complicating data processing, synchronization, and integration. Additionally, researchers need multiple computers to control the biofeedback system, increasing equipment costs and complexity. To address these challenges, we developed a low-cost biofeedback platform using inexpensive hardware and open-source libraries. Our software can serve as a system development kit for future studies. We conducted a simple experiment with one participant to validate the platform’s effectiveness, using one baseline and two tasks that elicited distinct responses. Our low-cost biofeedback platform provides a reference architecture for researchers with limited budgets who wish to incorporate biometrics into their studies. This platform can be used to develop affective computing models in various domains, including ergonomics, human factors engineering, user experience, human behavioral studies, and human–robot interaction. Full article
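One of the integration problems such a platform has to solve is collecting samples from several consumer devices against a common clock. The sketch below shows one simple way to do that with threads and a shared timestamped queue; the device read functions are placeholders for whatever SDK or serial interface each sensor exposes, and this is not the authors' software.

```python
import csv
import time
from queue import Queue
from threading import Thread

def read_sensor(name, read_fn, queue, period_s):
    """Poll one device and push timestamped samples onto a shared queue."""
    while True:
        queue.put((time.time(), name, read_fn()))
        time.sleep(period_s)

def log_streams(sensors, outfile="biofeedback_log.csv", duration_s=60.0):
    """sensors: dict of name -> (read_fn, period_s); read_fn returns one sample."""
    queue = Queue()
    for name, (read_fn, period_s) in sensors.items():
        Thread(target=read_sensor, args=(name, read_fn, queue, period_s),
               daemon=True).start()
    end = time.time() + duration_s
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "sensor", "value"])
        while time.time() < end:
            writer.writerow(queue.get())   # one shared clock keeps the streams aligned

# Usage with placeholder read functions standing in for real device SDK calls:
# log_streams({"eeg": (read_eeg_sample, 0.002),
#              "gsr": (read_gsr_sample, 0.01),
#              "hr":  (read_heart_rate, 1.0)})
```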
Figures: (1) development procedure of the platform using the design science research methodology; (2) participant wearing HoloLens 2 and BrainLink Lite; (3) biometric sensor, devices, and development board (total cost about USD 800); (4) GUI of the low-cost biofeedback platform; (5) screenshot of the first-person-shooter 2D game; (6) screenshot of the 3D Google Slides through HoloLens 2; (7) raw EEG readings; (8) heart-rate readings; (9) GSR readings in the baseline, 2D game, and 3D slide conditions; (10) participant's facial expressions at the start of and during the 2D game.
13 pages, 1678 KiB  
Article
Quantification of Comfort for the Development of Binding Parts in a Standing Rehabilitation Robot
by Yejin Nam, Sumin Yang, Jongman Kim, Bummo Koo, Sunghyuk Song and Youngho Kim
Sensors 2023, 23(4), 2206; https://doi.org/10.3390/s23042206 - 16 Feb 2023
Cited by 1 | Viewed by 2978
Abstract
Human-machine interfaces (HMI) refer to the physical interaction between a user and rehabilitation robots. A persisting excessive load leads to soft tissue damage, such as pressure ulcers. Therefore, it is necessary to define a comfortable binding part for a rehabilitation robot with the subject in a standing posture. The purpose of this study was to quantify the comfort at the binding parts of the standing rehabilitation robot. In Experiment 1, cuff pressures of 10–40 kPa were applied to the thigh, shank, and knee of standing subjects, and the interface pressure and pain scale were obtained. In Experiment 2, cuff pressures of 10–20 kPa were applied to the thigh, and the tissue oxygen saturation and the skin temperature were measured. Questionnaire responses regarding comfort during compression were obtained from the subjects using the visual analog scale and the Likert scale. The greatest pain was perceived in the thigh. The musculoskeletal configuration affected the pressure distribution. The interface pressure distribution by the binding part showed higher pressure at the intermuscular septum. Tissue oxygen saturation (StO2) increased to 111.9 ± 6.7% when a cuff pressure of 10 kPa was applied and decreased to 92.2 ± 16.9% for a cuff pressure of 20 kPa. A skin temperature variation greater than 0.2 °C occurred in the compressed leg. These findings would help evaluate and improve the comfort of rehabilitation robots. Full article
Figures: (1) equipment setting with the NIRS probe position and the instrumentation attached to the thigh; (2) pain scales for different cuff pressures at the binding parts; (3) musculoskeletal configurations of the binding parts and their interface pressure distributions; (4) StO2 versus cuff pressure per subject (15 subjects).
13 pages, 1589 KiB  
Article
Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning
by Francisco Pastor, Da-hui Lin-Yang, Jesús M. Gómez-de-Gabriel and Alfonso J. García-Cerezo
Sensors 2022, 22(22), 8752; https://doi.org/10.3390/s22228752 - 12 Nov 2022
Cited by 2 | Viewed by 2211
Abstract
There are physical Human–Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial to perform a safe manipulation of the human. Computer vision methods provide pre-grasp information with strong constraints imposed by the field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better know the features of the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application for the use of the database is included. In particular, a fusion approach is used to estimate the actual grasped forearm section using both kinesthetic and tactile information on a regression deep-learning neural network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, considering the data are sequential. Then, the outputs are fed to a Fusion neural network to enhance the estimation. The experiments conducted show good results in training both sources separately, with superior performance when the fusion approach is considered. Full article
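The two-branch architecture summarized above (an LSTM over kinesthetic joint-angle sequences, a convolutional LSTM over tactile image sequences, and a dense fusion head regressing the grasped forearm section) could be sketched roughly as follows in Keras. The layer sizes, sequence length, and tactile image resolution are assumptions rather than the published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, N_JOINTS = 20, 4          # assumed sequence length and joint count
TAC_H, TAC_W = 16, 16              # assumed tactile image resolution

# Kinesthetic branch: stacked LSTMs over the joint-angle sequences.
kin_in = layers.Input(shape=(SEQ_LEN, N_JOINTS))
kin = layers.LSTM(64, return_sequences=True)(kin_in)
kin = layers.LSTM(64)(kin)
kin = layers.Dense(32, activation="relu")(kin)

# Tactile branch: ConvLSTM over the sequence of pressure images.
tac_in = layers.Input(shape=(SEQ_LEN, TAC_H, TAC_W, 1))
tac = layers.ConvLSTM2D(16, kernel_size=3, padding="same")(tac_in)
tac = layers.Conv2D(32, kernel_size=3, activation="relu")(tac)
tac = layers.Flatten()(tac)
tac = layers.Dense(32, activation="relu")(tac)

# Fusion head: concatenate both encodings and regress the grasped section (0-100 %).
fused = layers.Concatenate()([kin, tac])
fused = layers.Dense(32, activation="relu")(fused)
out = layers.Dense(1)(fused)

model = Model(inputs=[kin_in, tac_in], outputs=out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```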
Figures: (1) the robotic manipulator with the three-fingered gripper grasping a forearm, with the underlying musculoskeletal model; (2) experimental setup and forearm measurement device; (3) kinematic design of the gripper with the two underactuated fingers and the fixed finger; (4) flowchart for recording the haptic dataset; (5) time sequences of tactile images and underactuated joint angles during grasping; (6) structure of the regression neural network (kinesthetic LSTM branch, tactile convolutional LSTM branch, and fusion layer); (7) estimation outputs per grasped section for the tactile, kinesthetic, and fusion networks; (8) RMSE and MAE per tested section for the three networks.

Review


33 pages, 2003 KiB  
Review
Human Factors Considerations for Quantifiable Human States in Physical Human–Robot Interaction: A Literature Review
by Nourhan Abdulazeem and Yue Hu
Sensors 2023, 23(17), 7381; https://doi.org/10.3390/s23177381 - 24 Aug 2023
Cited by 4 | Viewed by 2894
Abstract
As the global population rapidly ages with longer life expectancy and declining birth rates, the need for healthcare services and caregivers for older adults is increasing. Current research envisions addressing this shortage by introducing domestic service robots to assist with daily activities. The successful integration of robots as domestic service providers in our lives requires them to possess efficient manipulation capabilities, provide effective physical assistance, and have adaptive control frameworks that enable them to develop social understanding during human–robot interaction. In this context, human factors, especially quantifiable ones, represent a necessary component. The objective of this paper is to conduct an unbiased review encompassing the studies on human factors studied in research involving physical interactions and strong manipulation capabilities. We identified the prevalent human factors in physical human–robot interaction (pHRI), noted the factors typically addressed together, and determined the frequently utilized assessment approaches. Additionally, we gathered and categorized proposed quantification approaches based on the measurable data for each human factor. We also formed a map of the common contexts and applications addressed in pHRI for a comprehensive understanding and easier navigation of the field. We found out that most of the studies in direct pHRI (when there is direct physical contact) focus on social behaviors with belief being the most commonly addressed human factor type. Task collaboration is moderately investigated, while physical assistance is rarely studied. In contrast, indirect pHRI studies (when the physical contact is mediated via a third item) often involve industrial settings, with physical ergonomics being the most frequently investigated human factor. More research is needed on the human factors in direct and indirect physical assistance applications, including studies that combine physical social behaviors with physical assistance tasks. We also found that while the predominant approach in most studies involves the use of questionnaires as the main method of quantification, there is a recent trend that seeks to address the quantification approaches based on measurable data. Full article
Figures: (1) example of an indirect pHRI application (a collaborative robotic arm mediating a book handover); (2) example of a direct pHRI application (a user exercising against a robotic arm under joint impedance control); (3) most representative measurable dimensions in the pHRI literature; (4) number of studies per human factor type; (5) number of studies per measurable dimension; (6) studies per human factor type in each direct pHRI category; (7) studies per human factor type in each indirect pHRI category; (8) number of studies per quantification approach; (9) quantification approaches developed in the literature (mathematical models, machine learning, questionnaires); (10) studies per assessment approach for each human factor type.
25 pages, 377 KiB  
Review
A Review on Human Comfort Factors, Measurements, and Improvements in Human–Robot Collaboration
by Yuchen Yan and Yunyi Jia
Sensors 2022, 22(19), 7431; https://doi.org/10.3390/s22197431 - 30 Sep 2022
Cited by 19 | Viewed by 4794
Abstract
As the development of robotics technologies for collaborative robots (COBOTs), the applications of human–robot collaboration (HRC) have been growing in the past decade. Despite the tremendous efforts from both academia and industry, the overall usage and acceptance of COBOTs are still not so high as expected. One of the major affecting factors is the comfort of humans in HRC, which is usually less emphasized in COBOT development; however, it is critical to the user acceptance during HRC. Therefore, this paper gives a review of human comfort in HRC including the influential factors of human comfort, measurement of human comfort in terms of subjective and objective manners, and human comfort improvement approaches in the context of HRC. Discussions on each topic are also conducted based on the review and analysis. Full article
Figure: uncanny valley plot.

Other


26 pages, 1454 KiB  
Perspective
SoftSAR: The New Softer Side of Socially Assistive Robots—Soft Robotics with Social Human–Robot Interaction Skills
by Yu-Chen Sun, Meysam Effati, Hani E. Naguib and Goldie Nejat
Sensors 2023, 23(1), 432; https://doi.org/10.3390/s23010432 - 30 Dec 2022
Cited by 2 | Viewed by 5015
Abstract
When we think of “soft” in terms of socially assistive robots (SARs), it is mainly in reference to the soft outer shells of these robots, ranging from robotic teddy bears to furry robot pets. However, soft robotics is a promising field that has not yet been leveraged by SAR design. Soft robotics is the incorporation of smart materials to achieve biomimetic motions, active deformations, and responsive sensing. By utilizing these distinctive characteristics, a new type of SAR can be developed that has the potential to be safer to interact with, more flexible, and uniquely uses novel interaction modes (colors/shapes) to engage in a heighted human–robot interaction. In this perspective article, we coin this new collaborative research area as SoftSAR. We provide extensive discussions on just how soft robotics can be utilized to positively impact SARs, from their actuation mechanisms to the sensory designs, and how valuable they will be in informing future SAR design and applications. With extensive discussions on the fundamental mechanisms of soft robotic technologies, we outline a number of key SAR research areas that can benefit from using unique soft robotic mechanisms, which will result in the creation of the new field of SoftSAR. Full article
Figures: (1) the CASTOR robot's impact-absorbing soft shell and SEA neck joint, and the Probo robot's removable soft jacket and expressive trunk with its worm-gear actuation system; (2) a pneumatic color-changing soft gripper, an octopus-inspired tendon-driven arm with SMA springs, and ionic EAP actuator motion under different DC voltages; (3) the Pepper robot displaying different emotions through head, torso, and arm movements and changing eye colors.