Search Results (479)

Search Parameters:
Keywords = human-robot collaboration

15 pages, 5261 KiB  
Article
A Methodology for the Mechanical Design of Pneumatic Joints Using Artificial Neural Networks
by Michele Gabrio Antonelli, Pierluigi Beomonte Zobel, Enrico Mattei and Nicola Stampone
Appl. Sci. 2024, 14(18), 8324; https://doi.org/10.3390/app14188324 - 15 Sep 2024
Viewed by 409
Abstract
The advent of collaborative and soft robotics has reduced the mandatory adoption of safety barriers, pushing human–robot interaction to previously unreachable levels. Due to their reciprocal advantages, integrating these technologies can maximize a device’s performance. However, because of non-linear factors, identifying analytical models for the design of soft pneumatic actuators for collaborative and soft robotics often requires simplifying assumptions or elementary geometries. Over time, various approaches have been employed to overcome these issues, including finite element analysis, response surface methodology (RSM), and machine learning (ML) algorithms. Building on the latter, this study characterized the bending behavior of an externally reinforced soft pneumatic actuator as a function of its geometric and functional parameters, producing a Bend dataset. This dataset was used to train 14 regression algorithms, of which the bilayered neural network (BNN) performed best. Three external reinforcements that were excluded from the dataset were then tested by comparing predicted and experimental bending angles. The BNN showed significantly lower error than RSM, validating the methodology and highlighting how ML techniques can advance the prediction and mechanical design of soft pneumatic actuators.
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)
Figure 1. The adopted SPA: (a) external reinforcement with cuts to create geometric asymmetries; (b) example of SPA bending under pressurization toward the direction of the inextensible layer; (c) details of geometric parameters of the reinforcement, with green indicating the inextensible layer and red the closing angle.
Figure 2. Prototypes with the geometric parameters in Table 2 after printing and support removal.
Figure 3. Camera calibration results: (a) the adopted checkerboard in one of the positions used for the camera calibration; (b) real-world 3D reconstruction with the camera positioned at the point (0,0,0); (c) mean errors of the fifteen images used in the calibration, with the maximum error shown by the red solid line and the mean error by the orange dotted line.
Figure 4. Example of test number 6, at P = 1.2 bar, L = 10 mm, R = 0.5, Θ = 40°: (a) raw image; (b) image after undistorted filter application; (c) image after imlocalbright filter application and detail of the open sector bending angle α. Histogram of pixel intensities for: (d) raw image; (e) undistorted image; (f) image with improved contrast.
Figure 5. Training results for the BNN: (a) comparison between predicted and experimental bending angles; (b) residual vs. true response.
Figure 6. Validation results for the BNN: (a) comparison between predicted and experimental bending angles; (b) residual vs. true response.
Figure 7. Deformation configurations for reinforcement: (a) I at 0.4 bar; (b) I at 0.8 bar; (c) I at 1.2 bar; (d) II at 0.4 bar; (e) II at 0.8 bar; (f) II at 1.2 bar; (g) III at 0.4 bar; (h) III at 0.8 bar; (i) III at 1.2 bar.
Figure 8. Absolute errors between experimental and predicted bending angles for BNN and RSM as a function of feeding pressure for (a) reinforcement I, (b) reinforcement II, and (c) reinforcement III.
19 pages, 312 KiB  
Review
Digital and Virtual Technologies for Work-Related Biomechanical Risk Assessment: A Scoping Review
by Paulo C. Anacleto Filho, Ana Colim, Cristiano Jesus, Sérgio Ivan Lopes and Paula Carneiro
Safety 2024, 10(3), 79; https://doi.org/10.3390/safety10030079 - 12 Sep 2024
Viewed by 405
Abstract
The field of ergonomics has been significantly shaped by the advent of evolving technologies linked to new industrial paradigms, often referred to as Industry 4.0 (I4.0) and, more recently, Industry 5.0 (I5.0). Consequently, several studies have reviewed the integration of advanced technologies for improved ergonomics in different industry sectors. However, studies often evaluate specific technologies, such as extended reality (XR), wearables, artificial intelligence (AI), and collaborative robots (cobots), together with their advantages and problems. In this sense, there is a lack of research exploring the state of the art of I4.0 and I5.0 virtual and digital technologies for evaluating work-related biomechanical risks. Addressing this research gap, this study presents a comprehensive review of 24 commercial tools and 10 academic studies focusing on work-related biomechanical risk assessment using digital and virtual technologies. The analysis reveals that AI and digital human modelling (DHM) are the most commonly utilised technologies in commercial tools, followed by motion capture (MoCap) and virtual reality (VR). Discrepancies were found between commercial tools and academic studies. However, the study acknowledges limitations, including potential biases in sample selection and search methodology. Future research directions include enhancing transparency in commercial tool validation processes, examining the broader impact of emerging technologies on ergonomics, and considering human-centred design principles in technology integration. These findings contribute to a deeper understanding of the evolving landscape of biomechanical risk assessment.
(This article belongs to the Special Issue Advances in Ergonomics and Safety)
16 pages, 3585 KiB  
Article
Upper-Limb and Low-Back Load Analysis in Workers Performing an Actual Industrial Use-Case with and without a Dual-Arm Collaborative Robot
by Alessio Silvetti, Tiwana Varrecchia, Giorgia Chini, Sonny Tarbouriech, Benjamin Navarro, Andrea Cherubini, Francesco Draicchio and Alberto Ranavolo
Safety 2024, 10(3), 78; https://doi.org/10.3390/safety10030078 - 11 Sep 2024
Viewed by 219
Abstract
In the Industry 4.0 scenario, human–robot collaboration (HRC) plays a key role in factories to reduce costs, increase production, and help aged and/or sick workers maintain their job. The approaches of the ISO 11228 series commonly used for biomechanical risk assessments cannot be applied in Industry 4.0, as they do not involve interactions between workers and HRC technologies. The use of wearable sensor networks and software for biomechanical risk assessments could help us develop a more reliable idea about the effectiveness of collaborative robots (coBots) in reducing the biomechanical load for workers. The aim of the present study was to investigate some biomechanical parameters with the 3D Static Strength Prediction Program (3DSSPP) software v.7.1.3, on workers executing a practical manual material-handling task, by comparing a dual-arm coBot-assisted scenario with a no-coBot scenario. In this study, we calculated the mean and the standard deviation (SD) values from eleven participants for some 3DSSPP parameters. We considered the following parameters: the percentage of maximum voluntary contraction (%MVC), the maximum allowed static exertion time (MaxST), the low-back spine compression forces at the L4/L5 level (L4Ort), and the strength percent capable value (SPC). The advantages of introducing the coBot, according to our statistics, concerned trunk flexion (SPC from 85.8% without coBot to 95.2%; %MVC from 63.5% without coBot to 43.4%; MaxST from 33.9 s without coBot to 86.2 s), left shoulder abdo-adduction (%MVC from 46.1% without coBot to 32.6%; MaxST from 32.7 s without coBot to 65 s), and right shoulder abdo-adduction (%MVC from 43.9% without coBot to 30.0%; MaxST from 37.2 s without coBot to 70.7 s) in Phase 1, and right shoulder humeral rotation (%MVC from 68.4% without coBot to 7.4%; MaxST from 873.0 s without coBot to 125.2 s), right shoulder abdo-adduction (%MVC from 31.0% without coBot to 18.3%; MaxST from 60.3 s without coBot to 183.6 s), and right wrist flexion/extension rotation (%MVC from 50.2% without coBot to 3.0%; MaxST from 58.8 s without coBot to 1200.0 s) in Phase 2. Moreover, Phase 3, which consisted of another manual handling task, would be removed by using a coBot. In summary, using a coBot in this industrial scenario would reduce the biomechanical risk for workers, particularly for the trunk, both shoulders, and the right wrist. Finally, the 3DSSPP software could be an easy, fast, and costless tool for biomechanical risk assessments in an Industry 4.0 scenario where ISO 11228 series cannot be applied; it could be used by occupational medicine physicians and health and safety technicians, and could also help employers to justify a long-term investment.
Figure 1. Some 3DSSPP reconstructions of the three subtasks analyzed: Phase 1 with (a1) and without (b1) the coBot; Phase 2 with (a2) and without (b2) the coBot; and Phase 3 with (a3) and without (b3) the coBot.
Figure 2. Mean and SD values for Phase 1, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.
Figure 3. Mean and SD values for Phase 2, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.
Figure 4. Mean and SD values for Phase 3 without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). When using the Bazar coBot, this phase would be totally automatized, so we do not have values with the Bazar (wB).
24 pages, 11469 KiB  
Article
A Multidisciplinary Learning Model Using AGV and AMR for Industry 4.0/5.0 Laboratory Courses: A Study
by Ákos Cservenák and Jozef Husár
Appl. Sci. 2024, 14(17), 7965; https://doi.org/10.3390/app14177965 - 6 Sep 2024
Viewed by 389
Abstract
This paper presents the development of a multidisciplinary learning model using automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) for laboratory courses, focusing on Industry 4.0 and 5.0 paradigms. Industry 4.0 and 5.0 emphasize advanced industrial automation and human–robot collaboration, which requires innovative educational strategies. Motivated by the need to align educational practices with these industry trends, the goal of this research is to design and implement an effective educational model integrating AGV and AMR. The methodology section details the complex development process, including technology selection, curriculum design, and laboratory exercise design. Data collection and analysis were conducted to assess the effectiveness of the model. The design phase outlines the structure of the educational model, integrating AGV and AMR into the laboratory modules and enriching them with industry collaboration and practical case studies. The results of a pilot implementation are presented, showing the impact of the model on students’ learning outcomes compared to traditional strategies. The evaluation reveals significant improvements in student engagement and understanding of industrial automation. Finally, the implications of these findings are discussed, challenges and potential improvements are identified, and alignment with current educational trends is considered.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
Figure 1. AGV types taught in the subject “Intelligent Material Handling Machines and Systems”: (a) transport vehicle; (b) tow truck; (c) forklift.
Figure 2. AGV navigation methods (physical path): line following and tags.
Figure 3. AGV navigation methods (virtual path): laser triangulation and vision guidance.
Figure 4. Navigation methods used for AGV and AMR [50].
Figure 5. AGV used in the Logistics 4.0 laboratory [51].
Figure 6. Wheels of the AGV: driven and spherical (self-made photos).
Figure 7. Parts of the AGV: LIDAR sensor, PC, and PLC (self-made photos).
Figure 8. Software of the LIDAR sensor: entering the position of mirrors.
Figure 9. Software of the LIDAR sensor: creating a room contour (self-made screenshot).
Figure 10. AMRs used in the Logistics 4.0 laboratory.
Figure 11. Different network devices used within the Festo Robotinos.
Figure 12. Successfully established live control of the Festo Robotinos (self-made screenshots): (a) both Robotinos from the same desktop computer; (b) from a smartphone.
Figure 13. New program for controlling the Festo Robotino.
Figure 14. Information system for communication between the AGV and AMR.
Figure 15. The physical system from another viewpoint, including the AGV, AMR, and PC.
Figure 16. Reading AGV data via an OPC system on a PC.
Figure 17. Graphical program created for communication with a PC: (a) the joystick handle, battery voltage sensor, and image processing parts of the program; (b) the infrared distance, navigation, and odometry sensor parts of the program.
Figure 18. Full learning model using AGV and AMR during two semesters.
Figure 19. Learning model using AGV and AMR for the first semester.
Figure 20. Learning model using AGV and AMR for the second semester.
Figure 21. Programming part of the learning model using AGV and AMR.
Figure 22. Reduced learning models using AGV and AMR for a single semester: (a) theory-oriented part; (b) practice-oriented part.
18 pages, 8130 KiB  
Article
Design and Prototyping of a Collaborative Station for Machine Parts Assembly
by Federico Emiliani, Albin Bajrami, Daniele Costa, Giacomo Palmieri, Daniele Polucci, Chiara Leoni and Massimo Callegari
Machines 2024, 12(8), 572; https://doi.org/10.3390/machines12080572 - 19 Aug 2024
Viewed by 412
Abstract
Collaboration between humans and machines is the core of the Industry 5.0 paradigm, and collaborative robotics is one of the most impactful enabling technologies for small and medium enterprises (SMEs). In fact, small batch production and high levels of product customization make parts assembly one of the most challenging operations to be automated, and it often still depends on the versatility of human labor. Collaborative robots, for their part, can be easily integrated in this productive paradigm, as they have been specifically developed for coexistence with human beings. This work investigates the performance of collaborative robots in machine parts assembly. Design and research activities were carried out as a case study of industrial relevance at the i-Labs industry laboratory, a pole of innovation that is briefly introduced at the beginning of the paper. A fully functional prototype of the cobotized station was realized at the end of the project, and several experimental tests were performed to validate the robustness of the assembly process as well as the collaborative nature of the application.
(This article belongs to the Special Issue Advancing Human-Robot Collaboration in Industry 4.0)
Figure 1. Flow diagram of the three-step methodology presented in this work.
Figure 2. Subfigure (a) shows an exploded view of component 1 with its four labeled parts: the Seeger ring, metal eyelet, cap, and conical interface used for manual assembly, which is displayed in the mounting position. Subfigure (b) displays an exploded view of component 2 with its multiple labeled parts: radial ball bearing, Seeger ring JV30, radial needle bearing, Seeger ring JV28, steel case, and steel ring.
Figure 3. Preliminary tests for component 1: gripping fingers for insertion of the Seeger ring in the conical interface.
Figure 4. Automatic Seeger ring feeding mechanism required to increase the repeatability of ring insertion over the conical interface for component 1.
Figure 5. Assembly base for component 1. This base allows the cap to be assembled and is designed to accommodate the various parts of the component and hold them in the correct positions.
Figure 6. Auxiliary systems designed to assemble component 2. The first press (a,b) inserts the JV28 Seeger ring into a metal case using a pneumatic piston with a 3D-printed cap and conical guide. The second press (c) places bearings and other parts, ensuring correct alignment with a 3D-printed base.
Figure 7. Custom 3D-printed gripper fingers designed to grip the different parts of the two components using only the ‘fully open’ and ‘fully closed’ positions. Subfigure (a) shows the prototype fingers designed to handle the parts of component 1. Subfigure (b) shows the prototype fingers designed to handle the parts of component 2. Both prototypes were 3D-printed by means of high-precision stereolithography (SLA).
Figure 8. Custom 3D-printed gripper fingers designed to grip all the different parts of the two components using only the ‘fully open’ and ‘fully closed’ positions. Subfigure (a) shows the prototype fingers, while subfigure (b) shows the different gripping areas. Green indicates gripping areas shared by the parts of both components, blue indicates gripping areas for component 1 parts, and red indicates gripping areas for component 2 parts.
Figure 9. Station layout, arranged according to the client’s requirements. The colored areas on the working table are dedicated to robotized assembly of the components. Areas 1 and 2 can be accessed by the operator to perform final inspection on the products, which are unloaded by the cobot onto the yellow table.
Figure 10. GTE Cobosafe CBSF contact sensors, showing the different body zones that can be simulated by the sensor, the contact sensors themselves, and the damping elements.
Figure 11. Robotic station setup during the collision tests, with the GTE Cobosafe sensor positioned between the robot and the unloading area to simulate an impact with the operator’s back.
Figure 12. Force diagram of the collision test. The graph refers to the test simulating a collision with the human operator’s hand, where the speed of the cobot’s end effector was 300 mm/s.
Figure 13. A 3D diagram showing the pressures (P) present during the collision test. This graph was obtained for the impact with the operator’s shoulder, with a rotation speed along the robot’s first axis of 120°/s. The colour scale quantifies the different pressure values.
34 pages, 6437 KiB  
Article
Detection of Novel Objects without Fine-Tuning in Assembly Scenarios by Class-Agnostic Object Detection and Object Re-Identification
by Markus Eisenbach, Henning Franke, Erik Franze, Mona Köhler, Dustin Aganian, Daniel Seichter and Horst-Michael Gross
Automation 2024, 5(3), 373-406; https://doi.org/10.3390/automation5030023 - 19 Aug 2024
Viewed by 517
Abstract
Object detection is a crucial capability of autonomous agents for human–robot collaboration, as it facilitates the identification of the current processing state. In industrial scenarios, it is uncommon to have comprehensive knowledge of all the objects involved in a given task. Furthermore, training during deployment is not a viable option. Consequently, there is a need for a detector that is able to adapt to novel objects during deployment without the necessity of retraining or fine-tuning on novel data. To achieve this, we propose to exploit the ability of discriminative embeddings learned by an object re-identification model to generalize to unknown categories described by a few shots. To do so, we extract object crops with a class-agnostic detector and then compare the object features with the prototypes of the novel objects. Moreover, we demonstrate that the embedding is also effective for predicting regions of interest, which narrows the search space of the class-agnostic detector and, consequently, increases processing speed. The effectiveness of our approach is evaluated in an assembly scenario, wherein the majority of objects belong to categories distinct from those present in the training datasets. Our experiments demonstrate that, in this scenario, our approach outperforms the current best few-shot object-detection approach DE-ViT, which also does not perform fine-tuning on novel data, in terms of both detection capability and inference speed.
Figure 1. Detection result of our approach on a crop of an HD image of the ATTACH dataset depicting the workplace. This dataset adequately represents the target scenario and contains mainly novel categories that were not included in the training data. The category information is taken from 20 shots per category. The detector is not trained on these shots. However, it is able to adapt to novel categories by employing a class-agnostic detector and an object re-identification model as proposed.
Figure 2. Overall processing pipeline. (I) First, regions of interest (RoIs) are extracted (purple rects). (II) Second, a class-agnostic detector is employed to extract crops for all objects in the scene (green rects). (III) Finally, a re-identification (ReID) model compares the novel object shots with the object proposals to identify matching objects. Figure 3 provides a more detailed view of the stages (I)–(III).
Figure 3. Detailed view of our processing pipeline consisting of three stages, namely (I) regions of interest (RoI) proposal, (II) class-agnostic object detection and (III) object re-identification (ReID). For illustration purposes, we show results for a single novel category (C = 1, here: board) represented by K = 3 shots. In practice, multiple novel categories can be processed simultaneously. The numbered stages are continuously referenced in the text.
Figure 4. Example for combining multiple small RoIs into a single one, with R1 = 256 × 256.
Figure 5. Sample images for class-agnostic object detector benchmarks. These images are part of the ATTACH dataset [6] for human action recognition. For benchmarking the object detectors, we annotated the objects in the scene associated with the assembly.
Figure 6. Model comparison for class-agnostic detection on the table benchmark. Shown is the novel category-detection capability in terms of recall vs. the inference speed [10,23,70,75,76,77,78].
Figure 7. Exemplary rankings of the proposed object re-identification model applied to out-of-domain data.
Figure 8. Distribution of cosine distances of queries to gallery of CO3D validation split. A distance of 1.0 indicates an angle of 90°, which is the expected angle between two random vectors in high-dimensional space.
Figure 9. Results of hyperparameter sweeps and comparison with DE-ViT. The processing time represents the inference time in seconds per image on an A100 GPU. The continuous plots represent the Pareto front of data points from the hyperparameter sweeps when considering mAP and inference time. Specific hyperparameters that lead to the operating points marked with A and B are listed in the text.
20 pages, 2598 KiB  
Article
Adapting to the Agricultural Labor Market Shaped by Robotization
by Vasso Marinoudi, Lefteris Benos, Carolina Camacho Villa, Maria Lampridi, Dimitrios Kateris, Remigio Berruto, Simon Pearson, Claus Grøn Sørensen and Dionysis Bochtis
Sustainability 2024, 16(16), 7061; https://doi.org/10.3390/su16167061 - 17 Aug 2024
Viewed by 518
Abstract
Agriculture is being transformed through automation and robotics to improve efficiency and reduce production costs. However, this transformation poses risks of job loss, particularly for low-skilled workers, as automation decreases the need for human labor. To adapt, the workforce must acquire new qualifications to collaborate with automated systems or shift to roles that leverage their unique human abilities. In this study, 15 agricultural occupations were methodically mapped in a cognitive/manual versus routine/non-routine two-dimensional space. Subsequently, each occupation’s susceptibility to robotization was assessed based on the readiness level of existing technologies that can automate specific tasks and the relative importance of these tasks in the occupation’s execution. The qualifications required for occupations less impacted by robotization were summarized, detailing the specific knowledge, skills, and work styles required to effectively integrate the emerging technologies. It was deduced that occupations involving primarily manual routine tasks exhibited the highest susceptibility rate, whereas occupations with non-routine tasks showed lower susceptibility. To thrive in this evolving landscape, a strategic combination of STEM (science, technology, engineering, and mathematics) skills with essential management, soft skills, and interdisciplinary competences is imperative. Finally, this research stresses the importance of strategic preparation by policymakers and educational systems to cultivate key competencies, including digital literacy, that foster resilience, inclusivity, and sustainability in the sector.
Figure 1. Schematic presentation of O*NET changes in occupation definitions and number of tasks; previous O*NET version (orange, used in [29]) versus updated version (green, this study). Circles denote the number of tasks related to each occupation, with added tasks indicated above the corresponding arrows.
Figure 2. Distribution of the 15 reviewed occupations based on the major group classification of [31].
Figure 3. Mapping of the estimated cognitive/manual versus routine/non-routine levels along with the susceptibility rate to robotization of the reviewed occupations.
Figure 4. Spider charts showing the susceptibility rate to robotization in relation to (a) the task nature of an occupation and (b) major groups of occupations.
Figure 5. Contour plots of the susceptibility rate to robotization of the reviewed occupations in the cognitive (C)/manual (M) versus routine (R)/non-routine (nR) space by means of equal intervals and three classes (a,b) and quantiles with four classes (c,d); the left plots correspond to the present study and the right plots to [29].
Figure 6. The critical knowledge domains associated with low susceptibility to robotization, alongside the corresponding importance and proficiency level for each analyzed aspect.
Figure 7. The key skills associated with low susceptibility to robotization, alongside the corresponding importance and proficiency level for each investigated skill.
Figure 8. The principal work styles associated with low susceptibility to robotization, alongside the corresponding importance for each studied aspect.
18 pages, 8632 KiB  
Article
RobotSDF: Implicit Morphology Modeling for the Robotic Arm
by Yusheng Yang, Jiajia Liu, Hongpeng Zhou, Afimbo Reuben Kwabena, Yuqiao Zhong and Yangmin Xie
Sensors 2024, 24(16), 5248; https://doi.org/10.3390/s24165248 - 14 Aug 2024
Viewed by 456
Abstract
Representing robot arm morphology is a critical foundation for effective motion planning and collision avoidance in robotic systems. Traditional geometry-based approaches usually face a trade-off: fine-grained representations demand substantial computing resources, while efficiency-driven simplifications lose detail. The signed distance function addresses these drawbacks thanks to its ability to handle complex, arbitrary shapes at lower computational cost. However, conventional morphology methods based on the signed distance function often struggle when the robot moves dynamically, since each posture is modeled as an independent individual while the number of possible postures is infinite. In this paper, we introduce RobotSDF, an implicit morphology modeling approach that can precisely express the robot shape in arbitrary postures. Instead of depicting the robot arm as a single whole model, RobotSDF models the robot morphology as integrated implicit joint models driven by the joint configuration. In this approach, the dynamic shape change of the robot is converted into coordinate transformations of query points into each joint’s coordinate system. Experimental results with the Elfin robot demonstrate that RobotSDF can depict robot shapes across different postures down to the millimeter level, a 38.65% and 66.24% improvement over the Neural-JSDF and configuration space distance field algorithms, respectively, in representing robot morphology. We further verified the efficiency of RobotSDF through collision avoidance in both simulation and actual human–robot collaboration experiments.
(This article belongs to the Section Sensors and Robotics)
Figure 1. An example of the proposed SDF in one posture. The SDF values in different slices are given, and the colors from blue to yellow represent the robot’s SDF distance from near to far.
Figure 2. The robot model is separated into several independent joint links, and each link is implicitly represented as a neural network, which is regarded as the JointSDF. The input of RobotSDF is the position of the spatial point in the joint’s local coordinate system, and the output is the corresponding SDF value to the joint.
Figure 3. The structure of JointSDF. The inputs are the position P_{J_i} in the local coordinate system and the latent vector z_i of the joint. The output is the corresponding SDF value d_{J_i}.
Figure 4. The calculation procedure of RobotSDF.
Figure 5. The network structure of RobotSDF.
Figure 6. During the sampling procedure, the 3D model of the robot joint is simplified by omitting the interconnection structure and replacing it with a plane.
Figure 7. The sampling space for each joint is divided into five regions. The first region is the inside space of the MBBXJ. The next four regions diffuse outward from the MBBXJ surface by distances of 0.05 m, 0.1 m, 0.3 m, and 0.5 m, respectively.
Figure 8. The distribution of CD error in the RobotSDF reconstruction experiment.
Figure 9. The comparison of robot morphology expression capabilities among three algorithms (VSM, Neural-JSDF, and RobotSDF). The pictures of VSM [21] and Neural-JSDF [22] are from the corresponding papers. Since the ground truth picture of Neural-JSDF is not given in their paper, only the prediction result is shown here. The proposed RobotSDF shows the smoothest and most detailed robot surface.
Figure 10. The comparison between RobotSDF and the ground truth. The visualization of the RobotSDF model is drawn in orange, and the corresponding ground truths of the robot are colored green.
Figure 11. Twenty scenarios are generated to evaluate the collision detection accuracy of RobotSDF. The ground truths of S1 to S10 are not in collision, and the ground truths of S11 to S20 are in collision.
Figure 12. The collision detection time performance of SAT, GJK, and RobotSDF in the experimental scenarios. The collision detection time for SAT and GJK includes the model segmentation time, convex generation time, and convex collision detection time. RobotSDF can process the collision detection by taking the vertices of the human body as RobotSDF network inputs, and it yields the collision detection results directly.
Figure 13. The structure of the experimental system.
Figure 14. The procedure of collision detection, Experiment 1. The target object for the robot’s grasping task is a cube. During the manipulation process, the human arm acts as an obstacle. When the robot senses the obstacle based on the RobotSDF model, the robot re-plans the trajectory.
Figure 15. The procedure of collision detection, Experiment 2. The procedure of Experiment 2 is similar to that of Experiment 1; however, the initial postures of the robot change, and the target object is replaced with a bottle.
16 pages, 5238 KiB  
Article
Personalizing Human–Robot Workplace Parameters in Human-Centered Manufacturing
by Robert Ojsteršek, Borut Buchmeister and Aljaž Javernik
Machines 2024, 12(8), 546; https://doi.org/10.3390/machines12080546 - 11 Aug 2024
Viewed by 500
Abstract
This study investigates the relationship between collaborative robot (CR) parameters and worker utilization and system performance in human–robot collaboration (HRC) environments. We investigated whether optimized parameters increase workplace efficiency and whether adapting these parameters to the individual worker improves workplace outcomes. Three experimental scenarios with different CR parameters were analyzed in terms of the setup time, assembly time, finished products, work in process, and worker utilization. The main results show that personalized CR parameters significantly improve efficiency and productivity. The scenario in which CR parameters were tailored to individual workers balanced the workload and minimized worker stress, resulting in higher productivity compared to non-people-centric settings. The study shows that personalization reduces cognitive and physical stress, promotes worker well-being, and is consistent with the principles of human-centered manufacturing. Overall, our research supports the adoption of personalized, collaborative workplace parameters, supported by the mathematical model, to optimize employee efficiency and health, contributing to human-centered and efficient HRC environments.
Figure 1. Evaluated HRC workplace.
Figure 2. Experiment phases structure.
Figure 3. Data evaluation in Kubios HRV software.
Figure 4. FESTO CP LAB 400 layout and workplaces description.
Figure 5. Assembly time evaluation results.
Figure 6. Finished products and worker utilization results.
Figure 7. Stress index results.
Figure 8. Workplace utilization results.
Figure 9. Simulation model and presentation of waiting queues for different scenarios.
Figure 10. Finished and WIP products results.
Figure 11. Order flow time results.
15 pages, 3315 KiB  
Article
MR-Based Human–Robot Collaboration for Iterative Design and Construction Method
by Yang-Ting Shen
Buildings 2024, 14(8), 2436; https://doi.org/10.3390/buildings14082436 - 7 Aug 2024
Viewed by 559
Abstract
The current building industry is facing challenges of labor shortages and labor-intensive practices. Effectively collaborating with robots will be crucial for industry upgrading. This research introduces an MR-based iterative design and robot-assisted construction mode based on human–robot collaboration, facilitating an integrated process innovation from design to construction. The development of the ROCOS (Robot Collaboration System) comprises three key aspects: (1) Layout Stage: using MR technology to lay out the site, forming a full-scale integrated virtual and physical digital twin design environment. (2) Design Stage: conducting virtual iterative design in the digital twin environment and automatically simulating assembly processes. (3) Assembly Stage: translating simulated results into assembly path commands and driving a robotic arm to perform the actual assembly. Finally, this research set up two experiments to examine the feasibility of this iterative design–construction loop script. The results showed that although the presence of obstacles reduced the designer’s freedom and increased the number of steps, the designer could still finish both tasks. This indicates that the ROCOS has value as a prototype for human–robot collaboration. In addition, some valuable findings from users’ feedback showed that potential improvements can be addressed in operability, customization, and real construction scenarios.
(This article belongs to the Section Construction Management, and Computers & Digitization)
Figure 1. The flowchart of the three stages of the ROCOS.
Figure 2. MR handle used to create the digital twin design environment.
Figure 3. Use of the Rectangle 3Pt function in Grasshopper to draw a rectangle.
Figure 4. The Trim with Brep function used to detect conflicts.
Figure 5. Left: simulation of the assembly based on two coordinate points with vectors. Right: the ROCOS highlights the conflict in red as an alert.
Figure 6. The KUKA|PRC function components programmed to fit the HIWIN RA620-1739 control logic.
Figure 7. Left: the RA620-1739 function component. Right: the ROCOS automatically converts the visually simulated paths into HRL files.
Figure 8. Left: the designer designs and simulates the component in the MR environment. Right: the robotic arm assembles the components based on the pre-simulated path.
Figure 9. Left: the setup of the experimental area. Right: the setup of the control and experimental groups.
Figure 10. Control group: no virtual obstacle in the operational area (red circle: the start area; green circle: the middle area; blue circle: the end area).
Figure 11. Experimental group: four pre-set virtual obstacles in the operational area (red circle: the start area; green circle: the middle area; blue circle: the end area).
35 pages, 36874 KiB  
Review
A Survey of Augmented Reality for Human–Robot Collaboration
by Christine T. Chang and Bradley Hayes
Machines 2024, 12(8), 540; https://doi.org/10.3390/machines12080540 - 7 Aug 2024
Viewed by 563
Abstract
For nearly three decades, researchers have explored the use of augmented reality for facilitating collaboration between humans and robots. In this survey paper, we review the prominent, relevant literature published since 2008, the last date that a similar review article was published. We begin with a look at the various forms of the augmented reality (AR) technology itself, as utilized for human–robot collaboration (HRC). We then highlight specific application areas of AR for HRC, as well as the main technological contributions of the literature. Next, we present commonly used methods of evaluation with suggestions for implementation. We end with a look towards future research directions for this burgeoning field. This review serves as a primer and comprehensive reference for those whose work involves the combination of augmented reality with any kind of human–robot collaboration.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Figure 1. Isometric (a) and top (b) views of the Microsoft HoloLens 2, a commonly used head-mounted display for augmented reality.
Figure 2. An example of communicating constraints to a robotic system using augmented reality [73].
Figure 3. An example of adding visual divisions within a workspace via AR to improve human–robot collaboration in a manufacturing environment [80].
Figure 4. An example of using AR to improve the shared mental model in collaboration between a human and a robot during a search and rescue scenario [89].
Figure 5. An example of using augmented reality to communicate spatial ownership in a shared space environment between a human and an airborne robot [97].
17 pages, 1894 KiB  
Article
Comparative Study of Methods for Robot Control with Flexible Joints
by Ranko Zotovic-Stanisic, Rodrigo Perez-Ubeda and Angel Perles
Actuators 2024, 13(8), 299; https://doi.org/10.3390/act13080299 - 6 Aug 2024
Viewed by 610
Abstract
Robots with flexible joints are gaining importance in areas such as collaborative robots (cobots), exoskeletons, and prostheses. They are meant to directly interact with humans, and the emphasis in their construction is not on precision but rather on weight reduction and soft interaction with humans. Well-known rigid robot control strategies are not valid in this area, so new control methods have been proposed to deal with the complexity introduced by elasticity. Some of these methods are seldom used and are unknown to most of the academic community. After selecting the methods, we carried out a comprehensive comparative study of algorithms: simple gravity compensation (Sgc), the singular perturbation method (Spm), the passivity-based approach (Pba), backstepping control design (Bcd), and exact gravity cancellation (Egc). We modeled these algorithms using MATLAB and simulated them for different stiffness levels. Furthermore, their practical implementation was analyzed from the perspective of the magnitudes to be measured and the computational costs of their implementation. In conclusion, the Sgc method is a fast and affordable solution if joint stiffness is relatively high. If good performance is necessary, the Pba is the best option.
(This article belongs to the Special Issue Actuators in Robotic Control: Volume II)
Figure 1. Schema of an elastic joint. θ is the position of the motor rotor, q is the position of the link, and τ_elastic = K(θ − q) is the elastic torque. K is the stiffness of the joint in this figure.
Figure 2. The positions of the links when both stiffnesses are K1 = K2 = 10^4. The blue (first joint) and red (second joint) lines represent the reference positions (first link in blue and second link in red), while the yellow (first joint) and purple (second joint) lines represent the real positions.
Figure 3. The positions of the links when both stiffnesses are K1 = K2 = 200. The blue (first joint) and red (second joint) lines represent the reference positions, while the other lines represent the real positions for Sgc, Spm, Pba, Bcd, and Egc.
Figure 4. The positions of the links when both stiffnesses are K1 = K2 = 10^3. The blue (first joint) and red (second joint) lines represent the reference positions (first link in blue and second link in red), while the other lines represent the real positions for Sgc, Spm, Pba, Bcd, and Egc.
Figure 5. The positions of the links when both stiffnesses are K1 = K2 = 10^4. The blue (first joint) and red (second joint) lines represent the reference positions (first link in blue and second link in red), while the other lines represent the real positions for Sgc, Spm, Pba, Bcd, and Egc.
Figure 6. The mean quadratic error of joint 1 when the stiffnesses are 200, 1000, and 10,000.
Figure 7. The mean quadratic error of joint 2 when the stiffnesses are 200, 1000, and 10,000.
Figure 8. The mean quadratic error of joint 1 with simple gravity compensation with various stiffness values and control gains.
Figure 9. The mean quadratic error of joint 2 with simple gravity compensation with various stiffness values and control gains.
19 pages, 6180 KiB  
Article
Human–Robot Interaction through Dynamic Movement Recognition for Agricultural Environments
by Vasileios Moysiadis, Lefteris Benos, George Karras, Dimitrios Kateris, Andrea Peruzzi, Remigio Berruto, Elpiniki Papageorgiou and Dionysis Bochtis
AgriEngineering 2024, 6(3), 2494-2512; https://doi.org/10.3390/agriengineering6030146 - 1 Aug 2024
Viewed by 729
Abstract
In open-field agricultural environments, the inherently unpredictable situations pose significant challenges for effective human–robot interaction. This study aims to enhance natural communication between humans and robots in such challenging conditions by converting the detection of a range of dynamic human movements into specific robot actions. Various machine learning models were evaluated to classify these movements, with Long Short-Term Memory (LSTM) demonstrating the highest performance. Furthermore, the Robot Operating System (ROS) software (Melodic Version) capabilities were employed to interpret the movements into certain actions to be performed by the unmanned ground vehicle (UGV). The novel interaction framework exploiting vision-based human activity recognition was successfully tested through three scenarios taking place in an orchard, including (a) a UGV following the authorized participant; (b) GPS-based navigation to a specified site of the orchard; and (c) a combined harvesting scenario with the UGV following participants and aiding by transporting crates from the harvest site to designated sites. The main challenge was the precise detection of the dynamic hand gesture “come” alongside navigating through intricate environments with complexities in background surroundings and obstacle avoidance. Overall, this study lays a foundation for future advancements in human–robot collaboration in agriculture, offering insights into how integrating dynamic human movements can enhance natural communication, trust, and safety.
Figure 1. Data flow diagram of the proposed framework.
Figure 2. Depiction of the examined dynamic hand gestures: (a) “left hand waving”, (b) “right hand waving”, (c) “both hands waving”, (d) “left hand come”, (e) “left hand fist”, (f) “right hand fist”, and (g) “left hand raise”.
Figure 3. Chart depicting the data flow from the captured RGB-D images to the developed node, which is responsible for aggregating the training/testing data.
Figure 4. Depiction of the examined anthropometric parameters, namely the relative angles of both hands in relation to (a) the shoulders (κ̂ left, λ̂ right), the elbows (θ̂ left, φ̂ right), the chest (μ̂ left, ν̂ right) and (b) the torso (ξ̂ left, ο̂ right).
Figure 5. Bar chart illustrating the number of instances of each class, with each bar corresponding to a specific dynamic hand gesture.
Figure 6. The unmanned ground vehicle utilized in the present study along with the embedded sensors.
Figure 7. Images from the experimental demonstration of the “following” scenario in an orchard showing (a) the UGV following the locked participant while maintaining a safe speed and distance and (b) the participant asking the UGV to pause the following mode.
Figure 8. Image of the UGV autonomously navigating within the orchard toward a predefined site indicated by the map location sign marker.
Figure 9. Images from the experimental demonstration of the integrated harvesting scenario in an orchard: (a) the participant, who is currently locked, requests the UGV to stop all ongoing activities promptly; (b) the participant requests to be unlocked; (c) the second participant requests to be locked; (d) the participant signals the UGV to follow him; (e) the UGV follows the participant; (f) the participant signals the UGV to stop following him; (g) a participant fills a custom-built trolley with crates; (h) the participant indicates to the UGV to initiate its autonomous navigation to the preset destination.
17 pages, 857 KiB  
Article
Enhancing Recognition of Human–Object Interaction from Visual Data Using Egocentric Wearable Camera
by Danish Hamid, Muhammad Ehatisham Ul Haq, Amanullah Yasin, Fiza Murtaza and Muhammad Awais Azam
Future Internet 2024, 16(8), 269; https://doi.org/10.3390/fi16080269 - 29 Jul 2024
Viewed by 756
Abstract
Object detection and human action recognition have great significance in many real-world applications. Understanding how a human being interacts with different objects, i.e., human–object interaction, is also crucial in this regard since it enables diverse applications related to security, surveillance, and immersive reality. Thus, this study explored the potential of using a wearable camera for object detection and human–object interaction recognition, a key technology for the future Internet and ubiquitous computing. We propose a system that uses an egocentric camera view to recognize objects and human–object interactions by analyzing the wearer’s hand pose. Our novel idea leverages the hand joint data of the user, extracted from the egocentric camera view, for recognizing different objects and the related interactions. Traditional methods for human–object interaction recognition rely on a third-person, i.e., exocentric, camera view and extract morphological and color/texture-related features; thus, they often fall short when faced with occlusion, camera variations, and background clutter. Moreover, deep learning-based approaches in this regard necessitate substantial training data, leading to significant computational overhead. Our proposed approach capitalizes on hand joint data captured from an egocentric perspective, offering a robust solution to the limitations of traditional methods. We propose an innovative machine learning-based technique for feature extraction and description from 3D hand joint data, presenting two distinct approaches: object-dependent and object-independent interaction recognition. The proposed method offered advantages in computational efficiency compared with deep learning methods and was validated using the publicly available HOI4D dataset, where it achieved a best-case average F1-score of 74%. The proposed system paves the way for intuitive human–computer collaboration within the future Internet, enabling applications such as seamless object manipulation and natural user interfaces for smart devices, human–robot interaction, virtual reality, and augmented reality. Full article
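To make the general idea concrete, the sketch below shows one possible (purely illustrative) way to turn a sequence of 3D hand-joint positions into a fixed-length descriptor for a classical classifier. The wrist-relative normalisation, the choice of statistics, and the random-forest classifier are assumptions for demonstration; this is not the paper's exact feature extraction or its 3D-to-7D transformation.

```python
# Illustrative sketch only (not the paper's pipeline): convert a clip of 3D
# hand-joint positions (e.g., 21 MANO-style joints per frame) into a fixed-
# length descriptor of simple per-joint statistics, then feed it to a
# classical classifier. All choices below are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def describe_clip(joints: np.ndarray) -> np.ndarray:
    """joints: (n_frames, n_joints, 3) array of 3D hand-joint coordinates."""
    # Normalise each frame relative to the wrist (joint 0) to reduce the
    # influence of global hand position in the egocentric view.
    rel = joints - joints[:, :1, :]
    feats = [
        rel.mean(axis=0).ravel(),                      # average joint position
        rel.std(axis=0).ravel(),                       # motion spread per joint
        (rel.max(axis=0) - rel.min(axis=0)).ravel(),   # range of motion
    ]
    return np.concatenate(feats)

# Hypothetical usage with a list of clips and their interaction labels:
clips = [np.random.rand(40, 21, 3) for _ in range(10)]   # placeholder data
labels = np.random.randint(0, 3, size=10)                # placeholder labels
X = np.stack([describe_clip(c) for c in clips])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
predictions = clf.predict(X)
```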
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision)
Show Figures

Figure 1
Object-dependent interaction recognition (ODIR).

Figure 2
Object-independent interaction recognition (OIIR).

Figure 3
Representation of the MANO 3D hand joint model.

Figure 4
Feature extraction and description process for the proposed framework (where θ represents the hand joint extraction process, τ represents the 3D-to-7D transformation, and f represents the feature extraction process).

Figure 5
Confusion matrix for object recognition.

Figure 6
Object-wise F1-score for interaction recognition.

Figure 7
Time and performance analysis: ODIR vs. OIIR.
22 pages, 11126 KiB  
Article
Analytical Investigation of Vertical Force Control in In-Wheel Motors for Enhanced Ride Comfort
by Chanoknan Bunlapyanan, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
Appl. Sci. 2024, 14(15), 6582; https://doi.org/10.3390/app14156582 - 27 Jul 2024
Viewed by 580
Abstract
This study explores the effectiveness of vertical force control in in-wheel motors (IWMs) to enhance ride comfort in electric vehicles (EVs). A dynamic vehicle model and a proportional ride-blending controller were used to reduce vertical vibrations of the sprung mass. By converting the state-space model into a transfer function, the system’s frequency response was evaluated using road profiles generated according to ISO 8608 standards and converted into Power Spectral Density (PSD) inputs. The frequency-weighted acceleration (a_w) was calculated based on ISO 2631 standards to measure ride comfort improvements. The results showed that increasing the proportional gain (K_p) effectively reduced the frequency-weighted acceleration and the RMS of the vertical acceleration of the sprung mass. However, the proportional gain could not be increased indefinitely due to the torque limitations of the IWMs. Optimal proportional gains for various road profiles demonstrated significant improvements in ride comfort. This study concludes that advanced suspension technologies, including the proportional ride-blending controller, can effectively mitigate the challenges of increased unsprung mass in IWM vehicles, thereby enhancing ride quality and vehicle dynamics. Full article
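For readers unfamiliar with the ride-comfort metric, the sketch below illustrates the general computation a_w = sqrt(∫ W(f)² S_a(f) df) from an acceleration PSD. The piecewise weighting curve is a crude placeholder shaped roughly like the vertical (W_k) weighting; ISO 2631-1 defines the actual filter, and this is not the paper's implementation.

```python
# Minimal sketch, not the paper's implementation: given a one-sided PSD of
# sprung-mass vertical acceleration S_a(f) in (m/s^2)^2/Hz, the frequency-
# weighted RMS acceleration is a_w = sqrt( integral of W(f)^2 * S_a(f) df ).
# The piecewise weighting below is a crude stand-in for the ISO 2631-1 W_k
# curve (flat around 4-12.5 Hz, falling off on either side).
import numpy as np

def weighted_rms_acceleration(freq_hz: np.ndarray, psd_acc: np.ndarray) -> float:
    w = np.where(freq_hz < 4.0, freq_hz / 4.0,
                 np.where(freq_hz <= 12.5, 1.0, 12.5 / freq_hz))
    return float(np.sqrt(np.trapz((w ** 2) * psd_acc, freq_hz)))

# Example with a synthetic, arbitrary PSD (for illustration only):
f = np.linspace(0.1, 80.0, 800)          # frequency grid in Hz
psd = 1e-3 / (1.0 + (f / 10.0) ** 2)     # decaying placeholder spectrum
a_w = weighted_rms_acceleration(f, psd)  # lower a_w -> better ride comfort
print(a_w)
```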
(This article belongs to the Special Issue Advances in Vehicle System Dynamics and Control)
Show Figures

Figure 1
A half-vehicle dynamics model.

Figure 2
Free body diagram of the half-vehicle model: (a) free body diagram of the front and rear wheels; (b) free body diagram of the sprung mass.

Figure 3
Schematic diagram of the input to the vehicle system.

Figure 4
Block diagram of the vehicle system: (a) block diagram based on the MISO system principle; (b) block diagram based on the linear combination principle.

Figure 5
Frequency response of the open-loop system obtained with the exact delay and with the Pade approximation.

Figure 6
Frequency response of the open-loop and closed-loop systems: (a) from 0.1 to 1000 Hz, (b) from 5 to 20 Hz, and (c) from 0.1 to 5 Hz.

Figure 7
The calculation of $P_i$.

Figure 8
An example of PSD conversion via a Bode plot.

Figure 9
Road profile classification according to ISO 8608 [36,37].

Figure 10
Examples of road profiles for classes A to D: (a) class A, (b) class B, (c) class C, and (d) class D.

Figure 11
The road profiles created by Equation (44): (a) road profile elevation versus distance (x); (b) road profile elevation versus time.

Figure 12
PSD of the road profiles: (a) PSD in the spatial frequency domain; (b) PSD in the temporal frequency domain.

Figure 13
PSD of the sprung-mass vertical acceleration: (a) PSD of the open-loop system, (b) PSD of the closed-loop system for road profile classes A to B, (c) PSD of the closed-loop system for road profile classes B to C, and (d) PSD of the closed-loop system for road profile classes C to D.

Figure 14
Parameters of the IWM L1500 [1].

Figure 15
Time response of the front wheel force for road profile classes A to B.

Figure 16
Time response of the front wheel force for road profile classes B to C.

Figure 17
Time response of the front wheel force for road profile classes C to D.