
Chapter 1

INTRODUCTION
The demands and trends of the current market require enhanced manufacturing
systems with reduced delivery times, mass production, and product customization, which
impose a greater need for system flexibility and adaptability. Collaboration between
humans and robots is considered a promising technique to increase productivity and
decrease the cost of production by combining the robot's fast repetition and high
production capabilities with a human operator's ability to judge, react, and plan. Collaborative
robots (co-bots) represent an evolution that can resolve several challenges presented in
manufacturing and assembly environments. Co-bots allow physical interaction with humans
within the work-space. Matheson and his team [1] described four ways a robot and an
operator can work together:

(1) Co-existence: the operator and robot are in the same work-space, but there is no interaction,

(2) Synchronized: the operator and robots work within the same work-space, but at different
times,

(3) Cooperation: the operator and robots work together in the same work-space but have
independent tasks,

(4) Collaboration: the operator and robots work together to complete an assigned task.
In a collaborative environment, it is important to note that any action by one entity will have
immediate consequences for the other.

According to the International Standard ISO 10218 (Parts 1 and 2), and more extensively the
Technical Specification ISO/TS 15066:2016, four classes of safety requirements apply to
collaborative robots:

• Supervised stop: the movement of the robot is stopped before an operator enters the
collaborative work-space to interact with the robot and complete the desired task.

• Manual guide: the operator uses a manually operated device located on or near the robot's
end-effector to transmit movement commands to the robot's system.

• Speed and separation monitoring: the robot and operator can move within the
collaborative work-space simultaneously. The reduction of risk is achieved by always
maintaining a sufficient separation distance between the operator and the robot.

• Power and force limitation: the system must be designed to adequately reduce the
risk for an operator by not exceeding the threshold defined by the risk assessment.

Additionally, it is important to note that collaborative methods can be adopted even when
using traditional robots. However, this requires the use of several expensive safety
devices such as laser sensors or visual systems. For these reasons, the team started to work
on evaluating and developing affordable and accurate sensory systems that can measure the
distance between the operator and the robot. This study utilizes a low-cost RGB-D camera to
measure the position of an operator with respect to the robotic manipulator. While this
specific measurement configuration has been used to track human beings [6], to our
knowledge and based on the conducted literature review, it had not previously been studied in
the context of human-robot interaction and collaboration. Several researchers [7–15] surveyed
the literature and found that most RGB-D use was meant for human
identification and tracking, human activity recognition, human behavior analysis for
shopping and security purposes, intelligent health care systems, detecting defects in produce,
and animal recognition; a database has also been developed to summarize these uses
and algorithms. It was proven that top-view RGB-D cameras can be utilized
successfully in several applications where behaviors and interactions can be analyzed, and
they are very attractive due to their affordability and the sufficient information extracted
from the provided pictures or live feed.

The report is organized as follows: Chapter 2 is a literature review covering robotics and their
application in industrial systems, robotics safety regulations and standards, and collaboration
and interaction between humans and robots; Chapter 3 describes the robot, the sensory system,
and the human-robot interaction methods developed in this research; Chapter 4 describes the
followed methodology, including the geometric model of the robot-sensor system, the process
of calibration, and the detection of the operator; in Chapter 5, we evaluate the two scenarios of
interaction between human and robot and report our findings; and Chapter 6 concludes the
report and describes the future plan.

Chapter 2
LITERATURE REVIEW
The world has reached a point of many technological innovations where the presence
and use of robotics are growing. Robots are present in manufacturing, hospitals,
personal-use applications, service applications, and more. These robots aid the productivity
of several tasks depending on their surrounding environment. In general, robots can be used
in many different settings where their intended purpose is to aid in achieving a specific goal,
complete a set of tasks that is difficult or tedious for a human to achieve, or simply make
processes faster. Expediting services in systems such as industrial/manufacturing, health, or
personal use is a great enhancement, as the efficiency of these systems will increase.
Therefore, safety standards are essential and must be implemented to achieve safe operation
of robots in certain areas and near human beings. Traditional robots have been
separated from humans in workplaces to avoid any risk, injuries, or fatal incidents.
This separation was implemented in the form of physical barricades or by shutting robots off
whenever a human was present. However, technological improvements have shown great
results: robots no longer need to be separated and can work collaboratively and closely with
humans, provided that new safety standards are developed to design collaborative robots that
ensure human safety.

The existence of robots in industrial settings enhances production to meet the required
demands while keeping costs low. A robot is considered a flexible cell within a
manufacturing line, as it can be programmed to conduct different processes when needed.
Safety is of utmost priority when designing robots and placing them in such environments,
and because of the rapid rise of robotics, safety standards must be frequently
developed and improved to meet new technology trends.

Several researchers and their teams discussed different industrial environments, the safety
approaches that should be followed, and some real-life case studies. It was shown that lead
designers must develop and evaluate safe, human-centered, ergonomic, and efficient
collaborative assembly workstations, where the operator's feedback was provided in regard
to occupational health and safety. Additionally, the Human Industrial Robot Collaboration
(HIRC) workstation design process was evaluated through computer-based simulations
based on performance and safety characteristics such as ergonomics, operation time,
operational costs, maximum contact forces, and maximum energy density; this research
illustrated how difficult it is to evaluate safety and performance characteristics due to the lack
of physical workstations.

Parigi-Polverini developed a new safety assessment tool, the "Kinetostatic Safety Field",
which identifies sources of danger such as an obstacle, a human body part, or another
robot link. The main advantage of this tool is its real-time applications and real-time
collision avoidance with the use of a reactive control strategy. Another researcher suggested
that robots no longer need to be separated from humans, as robots can enforce safety through
a proposed kinematic control strategy that maintains the robot's maximum level of
productivity, reducing it only when humans are present in the working area.

Incorporating industrial regulations such as the International Standard ISO 10218, the
Technical Specification ISO/TS 15066:2016, the American ANSI/RIA R15.06, the European
EN 775 ISO 10218, and the national standards of the Spanish Association of Normalization
and Certification is the main procedure followed by manufacturing systems. These
standards are outdated and have not been improved in the last five years; therefore, some
researchers introduced new concepts covering techniques for the estimation and evaluation of
injuries, focusing on various areas of the human body, the importance of developing new
devices to detect impact, and minimizing the human-robot impact.

Risk assessment is a crucial tool that must be used to enhance safety for both humans and
robot systems. The literature review discussed the history of operators and robots and how
industrial robots have evolved, the differences between collaborative and non-collaborative
robot cell safeguarding, voluntary industry consensus standards, and risk assessment. Risk
assessment should include a quantitative head injury index for service robots, as mechanical
risks and incidents such as robot throws or drops, trapping, and crushing are more likely to
happen with such robots. Another proposed method to address safety in the human-robot
collaboration setting is Cooperative Collision Avoidance in dynamic environments. This
method computes a collision-free local motion for a short time horizon, which restricts the
actuator motion but allows smooth and safe control. Modeling human behavior and errors is
another proposed method: a formal verification methodology was developed to analyze the
safety of collaborative robotic applications with a rich, non-deterministic formal model of
operator behaviors that captures hazardous situations, which allows safety engineers to
refine their designs until all plausible erroneous behaviors are considered and mitigated.

Other researchers [29–31] discussed different aspects of robot design and their
relationship to safety ranking. Robot design principles should include robustness,
fast reaction time, context awareness, and energy and power limitations. These principles
facilitate features such as speech processing, vision processing, and robot control
that follow guidelines allowing the robot to recognize speech, gestures, and
correlations, eventually learning in the long run while also keeping humans safe.
Predicting human behaviors, collision avoidance, collision reduction by data analysis,
collision reduction by design, perceptions affecting design, boundaries, sensors, adaptability
to the surrounding environment, path planning, statistical probability, and robotic decision
making are some of the safeguards that can be implemented in industrial settings with high
speeds and payload levels.

Chapter 3
SYSTEM DETAILS AND SETUP
The system is developed based on available educational and off-the-shelf components to
model real-life robotics tasks, which are explained below.
3.1 Robotic Manipulator

The robotic manipulator selected for this project was the Scorbot ER-V Plus, shown in
Fig. 3.1. This robot has five degrees of freedom; Fig. 3.2 shows the lengths of the links
and the degrees of rotation and operation ranges that determine the work-space of the robot.
The direct kinematics of this robot, which resolve the pose of the tool {T} with respect to
the base {B}, are computed as a chain of homogeneous transformations based on Fig. 3.3;
a minimal sketch is given below. The base of the robot is at a fixed position on a workbench.
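
As an illustration only, the following MATLAB sketch composes Denavit-Hartenberg (DH)
link transforms to obtain the tool pose; the DH table entries are placeholders, not the
Scorbot ER-V Plus's published parameters.

% Direct kinematics sketch for a 5-DOF arm via Denavit-Hartenberg (DH)
% parameters. The DH table is illustrative only; substitute the actual
% Scorbot ER-V Plus link lengths and offsets.
q = deg2rad([0 45 -30 15 0]);   % example joint angles [rad]
T = directKinematics(q)         % 4x4 pose of tool {T} in base {B}

function T = directKinematics(q)
    % Rows: [theta_offset, d, a, alpha] (placeholder values, in m and rad)
    DH = [ 0,    0.35, 0.05,  pi/2;
           0,    0,    0.30,  0;
           0,    0,    0.35,  0;
           pi/2, 0,    0,     pi/2;
           0,    0.25, 0,     0 ];
    T = eye(4);
    for i = 1:5
        T = T * dhLink(q(i) + DH(i,1), DH(i,2), DH(i,3), DH(i,4));
    end
end

function A = dhLink(theta, d, a, alpha)
    % Standard DH homogeneous transform for a single link.
    A = [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha) a*cos(theta);
         sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha) a*sin(theta);
         0           sin(alpha)             cos(alpha)            d;
         0           0                      0                     1];
end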

Fig. 3.1 Scorbot ER-V Plus Robot Details of Angles and Conventions. Note: the arm
occupies a plane coinciding with the z-axis of its base

The robot is controlled using ACL, a language that can be used as a multitask
robotic programming environment.

Fig. 3.2. The Operation Range that Defines the Space and Parameters of the
Robot used in the Kinematics.

MATLAB functions were created to establish bidirectional serial communication
with the Scorbot controller. Both systems (the robot control and the computer vision system)
run in MATLAB and issue ACL commands, which allow the robot to execute specific tasks,
read and load pose data into the controller, and modify the manipulator's movement speed.
Fig. 3.3 also shows the flow of information exchanged between system components.
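
A minimal sketch of this serial link, with an assumed port name, baud rate, and ACL
command syntax (only illustrative; adapt to the actual controller settings), might look as
follows:

% Sketch of bidirectional serial communication with the Scorbot controller.
% Port name, baud rate, and command syntax are assumptions.
s = serialport("COM3", 9600);     % open the serial link to the controller
configureTerminator(s, "CR");     % terminate each ACL line with a carriage return
writeline(s, "SPEED 50");         % set movement speed (assumed syntax)
writeline(s, "MOVE P1");          % move to a previously taught pose P1 (assumed)
reply = readline(s);              % read the controller acknowledgment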

3.2 Vision Sensory System

The Kinect V2 sensor (RGB-D sensor) is composed of two cameras: an RGB camera and an
infrared (IR) camera. The IR camera can be utilized to obtain depth maps, with a field of
view of 70° horizontal by 60° vertical. The Kinect camera is capable of running at a rate
of 30 fps at a resolution of 512 × 424 pixels, and the operational range of the IR
camera is between 0.5 m and 4.5 m. The sensor operates based on the time-of-flight
principle [35]. The depth data obtained in each pixel corresponds to the Z coordinate
measured on the optical axis of the IR camera, as illustrated in Fig. 3.3.
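
Under the pinhole model, each depth pixel can be back-projected into a 3-D point in the
IR-camera frame; the sketch below uses nominal Kinect V2 intrinsics as an assumption,
since the true values come from calibration.

% Back-project a depth pixel (u, v) into 3-D coordinates in the IR-camera
% frame {K} under the pinhole model. Intrinsics are nominal Kinect V2
% values (assumption; use the calibrated ones in practice).
depthMap = 1000*ones(424, 512, 'uint16');  % placeholder depth image [mm]
fx = 365.0; fy = 365.0; cx = 256.0; cy = 212.0;
u = 300; v = 200;                          % pixel in the 512 x 424 depth map
Z = double(depthMap(v, u));                % depth along the optical axis [mm]
X = (u - cx) * Z / fx;                     % lateral offset [mm]
Y = (v - cy) * Z / fy;                     % vertical offset [mm]
P_K = [X; Y; Z];                           % point expressed in frame {K}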

Fig. 3.3. The Frames Associated with the System Problem Modeling: {B} is the
Base of the Robot, {K} Represents the Kinect Sensor, and {T} Represents the
Robot's Tool. The Dotted Line Indicates the Information Flow that is being
Exchanged in the System.

3.3 Human-Robot Interaction

For face detection, a method originally developed by Viola and Jones for object
detection is adopted. Their approach uses a cascade of simple rectangular features that allows
a very efficient binary classification of image windows into either the face or non-face class.
This classification step is executed for different window positions and different scales to scan
the complete image for faces. We apply the idea of a classification pyramid, starting with very
fast but weak classifiers to reject image parts that are certainly not faces. With increasing
complexity of classifiers, the number of remaining image parts decreases. The training of the
classifiers is based on the AdaBoost algorithm: the weak classifiers are iteratively combined
into stronger ones until the desired level of quality is achieved.
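
For illustration, the sketch below runs a pretrained Viola-Jones cascade using MATLAB's
Computer Vision Toolbox; it uses the stock frontal-face model rather than a cascade trained
with the AdaBoost procedure described above.

% Viola-Jones face detection with a pretrained cascade
% (Computer Vision Toolbox; stock model, not the custom-trained one).
detector = vision.CascadeObjectDetector;   % default frontal-face model
img = imread('visionteam.jpg');            % demo image shipped with the toolbox
bboxes = step(detector, img);              % one [x y w h] row per detected face
out = insertShape(img, 'Rectangle', bboxes, 'LineWidth', 3);
imshow(out); title(sprintf('%d faces detected', size(bboxes, 1)));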

As an extension to the frontal-view detection proposed by Viola and Jones, we
additionally classify the horizontal gazing direction of faces, as shown in Fig. 3.4, by using
four instances of the classifier pyramids described earlier, trained for faces rotated by 20°,
40°, 60°, and 80°. For classifying left- and right-turned faces, the image is mirrored at its
vertical axis, and the same four classifiers are applied again. The gazing direction is evaluated
for activating or deactivating the speech processing, since the robot should not react to people
talking to each other in front of the robot, but only to communication partners facing the
robot. Subsequent to the face detection, face identification is applied to the detected image
region using the eigenface method to compare the detected face with a set of trained faces.
For each detected face, the size, center coordinates, horizontal rotation, and results of the face
identification are provided at a real-time capable frequency of about 7 Hz on an Athlon64
2 GHz desktop PC with 1 GB RAM.

As mentioned before, the limited field of view of the cameras demands alternative
detection and tracking methods. Motivated by human perception, sound localization is applied
to direct the robot's attention. The integrated speaker localization (SPLOC) realizes both the
detection of possible communication partners outside the field of view of the camera and the
estimation of whether a person found by face detection is currently speaking. The program
continuously captures the audio data from the two microphones. To estimate the relative
direction of one or more sound sources in front of the robot, the direction of sound toward the
microphones is considered. Dependent on the position of a sound source in front of the robot,
the run-time difference ∆t results from the run times tr and tl at the right and left microphones.
SPLOC compares the recorded audio signals of the left and right microphones using a
fixed number of samples for a cross power spectrum phase (CSP) to calculate the temporal
shift between the signals. Taking the distance of the microphones dmic and a minimum range
of 30 cm to a sound source into account, it is possible to estimate the direction of a signal in
2-D space. For multiple sound source detection, not only the main energy value of the CSP
result is taken, but also all values exceeding an adjustable threshold.
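
A minimal sketch of the CSP delay estimate (also known as GCC-PHAT) and the resulting
direction angle, with an assumed sampling rate and microphone spacing and placeholder
signals, follows:

% Cross power spectrum phase (CSP / GCC-PHAT) sketch for estimating the
% direction of a sound source from two microphone signals xl and xr.
fs   = 16000;                   % sampling rate [Hz] (assumed)
dmic = 0.20;                    % microphone spacing [m] (assumed)
c    = 343;                     % speed of sound [m/s]
N    = 1024;                    % samples per block
xl = randn(N, 1); xr = randn(N, 1);           % placeholder signals
Xl = fft(xl); Xr = fft(xr);
G  = Xl .* conj(Xr);                          % cross power spectrum
csp = fftshift(real(ifft(G ./ (abs(G) + eps))));  % phase-only correlation
[~, idx] = max(csp);                          % main energy peak
tau = (idx - N/2 - 1) / fs;                   % run-time difference ∆t [s]
theta = asind(max(min(c*tau/dmic, 1), -1));   % direction angle [deg]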

Fig. 3.4 Human-Robot Interaction

Chapter 4
METHODOLOGY

4.1 Kinect-Robot Modeling and Calibration

The objective of this work is to provide a system that allows the robot to sense its
surroundings and act accordingly. To do so, it is necessary to represent the three-dimensional
space around the manipulator in a common reference frame, as sketched below.
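
As a minimal sketch of this representation, a point measured in the Kinect frame {K} can be
mapped into the robot base frame {B} through a calibrated homogeneous transform; the
rotation and translation values below are placeholder assumptions, not calibration results.

% Map a point from the Kinect frame {K} into the robot base frame {B}
% using the calibrated homogeneous transform B_T_K (placeholder values).
R = [1 0 0; 0 -1 0; 0 0 -1];     % assumed overhead camera looking down
t = [0.40; 0.00; 1.32];          % assumed mount position [m]
B_T_K = [R t; 0 0 0 1];
p_K = [0.10; 0.05; 1.20; 1];     % homogeneous point in {K} [m]
p_B = B_T_K * p_K;               % the same point expressed in {B}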
4.2 Human Detection using Background-Foreground Technique
For human or foreign-object detection in the scene, a Background-Foreground (B-F)
technique was used. One hundred depth frames were captured within 10 s and used to
form an image of the background, making sure the scene stayed static; a sketch follows
below. As previously mentioned, rotating and fixed rectangular binary masks were generated
to prevent the robot's own movement from being detected as foreground. A captured image
was displayed on the screen, and the mouse was used to select the vertices of the two
rectangles and a fixed point around which one of them rotates. The non-rotating rectangle
was used to hide the base of the robot from the foreground; the rotating rectangle did the
same for the maximum possible extension of the arm.
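
A minimal MATLAB sketch of building the background model and the binary masks, with
placeholder frames and rectangle vertices standing in for the mouse-selected ones, is given
below:

% Build the static depth background and the fixed rectangular mask.
% depthStack stands in for the 100 precaptured depth frames; the rectangle
% vertices are placeholders for the mouse-selected ones.
depthStack = repmat(1000*ones(424, 512), 1, 1, 100);  % placeholder frames [mm]
background = median(depthStack, 3);   % per-pixel background (median assumed)
fixedMask  = poly2mask([200 310 310 200], [150 150 260 260], 424, 512);
% The rotating mask covering the arm is regenerated every iteration from
% the base encoder angle; here it is initialized to the fixed mask only.
robotMask  = fixedMask;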

Fig. 4.1. Collision Prevention Scenario.

At the beginning of each iteration, a depth image is captured, and the value of the
robot's base encoder is read. The captured image is subtracted from the background, and a
binary mask is applied to hide the robot. An opening is performed on the resulting image with
a 5-pixel-radius disk kernel to remove noise. Finally, a 50 mm depth threshold is used to
binarize the image. The B-F results are combined with the areas of interest to determine the
behavior of the robot's speed: if the foreground binary area within the red zone exceeds 100
pixels, the state turns red; if the red-zone area does not exceed 100 pixels but the yellow zone
reaches at least 500 pixels, the state turns yellow; if neither condition is met, the state updates
to green. A sketch of this loop follows.
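
The zone logic above reduces to a few pixel-count tests. A minimal MATLAB sketch, with
placeholder images and assumed zone masks, follows:

% One iteration of the collision-prevention loop (sketch; placeholder inputs).
depthImg   = 1000*ones(424, 512);            % captured depth frame [mm]
background = 1000*ones(424, 512);            % background model [mm]
robotMask  = false(424, 512);                % mask hiding the robot
redZone    = false(424, 512); redZone(150:260, 200:310) = true;  % assumed
yellowZone = ~redZone;                       % assumed
diffImg = background - depthImg;             % subtract capture from background
diffImg(robotMask) = 0;                      % hide the robot
diffImg = imopen(diffImg, strel('disk', 5)); % 5-px-radius opening, removes noise
fg = diffImg > 50;                           % 50 mm threshold binarization
if nnz(fg & redZone) > 100
    state = "red";                           % slow the robot to almost stopped
elseif nnz(fg & yellowZone) >= 500
    state = "yellow";                        % run at half speed
else
    state = "green";                         % full speed
end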
Collaborative scenario: each iteration started with the robot picking an object located at a
pre-established location, while a request appeared on the user's screen guiding the operator
to position his/her hand where he/she wanted to receive the object from the robot.
Subsequently, a binary mask was generated for the foreground: the background image was
subtracted from the captured image, and a 15 mm depth threshold was applied to make it
binary. Only a 200-pixel-radius region was analyzed to avoid dealing with peripheral noise;
given the calibration, this radius was equivalent to 1.32 m at the height of the workbench.
The foreground was cleaned by imposing an opening with a 4-pixel-radius disk kernel,
after which a closing with a 3-pixel-radius disk kernel was imposed to remove any
imperfections remaining in the blobs. Blobs with an area smaller than 800 pixels were
discarded, and a binary mask was generated with the remaining blobs. Two height zones
were then separated in the resulting blobs using Otsu's method; a sketch follows.
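
A minimal MATLAB sketch of this hand-detection pipeline, with placeholder inputs and the
thresholds stated above, is given below:

% Hand detection for the collaborative scenario (sketch; placeholder inputs).
background = 1000*ones(424, 512);            % background model [mm]
depthImg   = 1000*ones(424, 512);            % captured depth frame [mm]
diffImg = background - depthImg;
fg = diffImg > 15;                                   % 15 mm depth threshold
[cols, rows] = meshgrid(1:512, 1:424);               % keep only a 200-px-radius
fg = fg & ((cols-256).^2 + (rows-212).^2 <= 200^2);  % region of interest
fg = imopen(fg,  strel('disk', 4));                  % clean the foreground
fg = imclose(fg, strel('disk', 3));                  % remove blob imperfections
fg = bwareaopen(fg, 800);                            % drop blobs under 800 px
if any(fg(:))
    d   = diffImg(fg);                               % heights in the blobs
    t   = graythresh(mat2gray(d));                   % Otsu on normalized heights
    thr = min(d) + t*(max(d) - min(d));              % back to depth units
    handZone = fg & (diffImg > thr);                 % upper height zone (hand)
end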

Fig. 4.2. Collaboration with an Operator Scenario

Chapter 5

RESULTS
This work is meant to develop an affordable prototype that can be added to
industrial robots to increase robot safety, decrease the barriers between human operators and
robots, and facilitate a collaboration system between them. The designed system addressed
these goals as follows:

5.1 Collision Prevention System

The detection of an operator in the pre-established zones is exemplified in Fig. 4.1. To test
the operation of the system, 10 tests were made in the areas of interest, where a human
operator introduced his/her hand into the robot's surroundings. The system first detected
the operator entering the yellow zone (operator's leg), where it forced the robot to move at
half of its original operation speed. Then the operator introduced his/her hands within the
red zone, forcing the robot to slow significantly, to almost not moving. These actions were
captured by the camera and highlighted by the associated colors shown in Fig. 4.1, which
displays the areas of interest corresponding to the detection of the pixels as part of the
foreground; the outline of the Scorbot is shown inside the binary mask. In all test cases, the
system behaved as intended, identifying the existence of the human operator and changing
the robot speed according to the distance between the human and the robot. The robot's
response time to change the end-effector speed was recorded; the mean system update time
was 0.45 s with a standard deviation of 0.30 s, which is a significantly fast response.

Modifying the speed of the robot was accomplished through an ACL command called
'CLRBUF', which was introduced as an instant stop to the robot; immediately afterwards, a
new movement speed was set, and a new trajectory was generated from the current pose to
the next corresponding task, resuming the job. The team implemented other methods to
change the speed, such as changing the task priorities on the robot or sending speed-change
commands during a test, but all failed, since these commands could only be utilized after
completing the previous tasks.
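
A sketch of this stop-and-resume sequence over the serial link is shown below; CLRBUF is
the command used in this work, while the SPEED and MOVE syntax is an assumption about
the ACL dialect, and the pose name is hypothetical.

% Speed change via instant stop and resume (sketch).
% CLRBUF is confirmed by this work; SPEED/MOVE syntax is assumed.
s = serialport("COM3", 9600);      % assumed port and baud rate
configureTerminator(s, "CR");
writeline(s, "CLRBUF");            % clear the motion buffer: instant stop
writeline(s, "SPEED 20");          % set the new (reduced) movement speed
writeline(s, "MOVE PNEXT");        % regenerate trajectory to the next task pose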

5.2 Collaborative Scenario

Collaboration between the robot and a human operator was simulated by having the
automated system detect the operator's hand, estimate the spatial coordinates of the center
of the hand, and then command the robot to pick up an object from a predefined
location and place it on the operator's hand, as illustrated in Fig. 4.2. The blue region
represents the Scorbot work-space, the orange lines show the skeletonization of the
operator's arm, and the yellow area shows the mask's maximum circumference where the
robot should place the object.

Experiments with 20 different hand positions within the robot's work-space were
conducted. The system gave satisfactory results, where the job was done correctly, and
the mean error of the placement coordinates was 2.4 cm.

Chapter 6
CONCLUSION
This work showed that an overhead low-cost RGB-D camera can measure the position of
an operator with respect to a robotic manipulator, and thus improve human-robot interaction
safety and increase collaboration opportunities through 3D sensing of the robot's
surrounding environment. The proposed system will allow manufacturing and industrial
companies to update their existing robotics and automation systems by adding an affordable
add-on safety and collaboration device, without influencing their manufacturing lines and
with a lower cost of investment. In the collision prevention scenario, the captured video
analysis showed that the reaction time of the system was 500 ms; the system's bottleneck
was the PC-robot inter-communication, which required relatively longer times and added
pauses and checkpoints to ensure reliability. In the collaborative scenario, detecting the
operator's hand and having the robot place an object was achieved; similar to the other
scenario, the internal variables and the data transmission speed between the robot controller
and the main computer were the main factors defining the speed of the system. The team is
working on a few improvements to the proposed system, including enhancing the B-F
algorithm's internal variables and data, and exploring the application of dynamic methods
that can assimilate changes in the scene on slower time scales. Also, an RGB camera system
is being developed to detect a particular color or clothing as an activator for
robot tasks. Additionally, more sophisticated moving-object classification techniques, such
as convolutional neural networks, will be explored.

Chapter 7
REFERENCES

[1] Matheson, E., Minto, R., Zampieri, E. G., Faccio, M., & Rosati, G. (2019). Human-
robot collaboration in manufacturing applications: A review. Robotics, 8(4), 100.
[2] Ferraguti, F., Bertuletti, M., Landi, C. T., Bonfe, M., Fantuzzi, C., & Secchi, C. (2020).
A control barrier function approach for maximizing performance while fulfilling to
ISO/TS 15066 regulations. IEEE Robotics and Automation Letters, 5(4).
https://doi.org/10.1109/LRA.2020.3010494
[3] Robla-Gomez, S., Becerra, V. M., Llata, J. R., Gonzalez-Sarabia, E., Torre-Ferrero, C.,
& Perez-Oria, J. (2017). Working together: A review on safe human-robot collaboration
in industrial environments. IEEE Access, 5.
[4] Harper, C., & Virk, G. (2010). Towards the development of international safety
standards for human-robot interaction. International Journal of Social Robotics, 2(3),
229-234.
[5] Anandan, T. M. (2013). Safety and control in collaborative robotics. Published on:
Aug, 6, 1-4.

