
Unit 4: Robot Programming and Sensors

Robot Programming

• The sophistication of the user interface is becoming extremely important as manipulators and other programmable automation are applied to more and more demanding industrial applications.

• In considering the programming of manipulators, it is important to remember that they are typically only a minor part of an automated process.

• The term work cell is used to describe a local collection of equipment, which may include
one or more manipulators, conveyor systems, parts feeders, and fixtures.

• At the next higher level, work cells might be interconnected in factory-wide networks so that a central control computer can manage the overall factory flow.

• Current technology robots can accept input from sensors and other devices.

• Robots can send signals to pieces of equipment operating with them in the cell.

• Robots can make decisions on their own, and they can communicate with computers to receive instructions.

• All of these capabilities require programming.

• Robot programming is the process of inputting instructions to a robot's control system to automate tasks. The instructions are interpreted and executed by the robot's microcontroller, which then moves the robot's actuators to perform the desired actions.

METHODS OF ROBOT PROGRAMMING


• Programming methods are divided into two basic types:

On-line: Leadthrough method (teach by showing)

Off-line: Textual robot languages (explicit robot programming languages)

LEADTHROUGH PROGRAMMING METHODS:


• Early robots were all programmed by a method that we will call teach by showing, which
involved moving the robot to a desired goal point and recording its position in a memory
that the sequencer would read during playback.

• During the teach phase, the user would guide the robot either by hand or through
interaction with a teach pendant.

• Teach pendants are hand-held button boxes that allow control of each manipulator joint or of each Cartesian degree of freedom.

• Some such controllers allow testing and branching, so that simple programs involving logic
can be entered.
• Some teach pendants have alphanumeric displays and are approaching hand-held
terminals in complexity.

• There are two ways of accomplishing leadthrough programming:

1. Powered Leadthrough 2. Manual Leadthrough

1. Powered Leadthrough
• The powered leadthrough method makes use of a teach pendant to control the various joint motors and to power-drive the robot arm and wrist through a series of points in space.

• Each point is recorded into memory for subsequent playback during the work cycle.

• The teach pendant is usually a small hand-held control box with a combination of toggle switches, dials, and buttons to regulate the robot's physical movements and programming capabilities.

• The powered leadthrough method is limited to point-to-point motions rather than continuous movements because of the difficulty in using the teach pendant to regulate complex geometries.

• Applications of the powered leadthrough method: part transfer tasks, machine loading and unloading, and spot welding.

2. Manual Leadthrough
The manual leadthrough method, also called the "walk-through" method, is used for continuous-path programming where the motion cycle involves smooth, complex curvilinear movements. Applications: spray painting, continuous arc welding.

• In this method, the programmer physically grasps the robot arm and moves it through the
desired motion cycle.

• If the robot is large and awkward to physically move, a special programming apparatus is
used.

• A teach button is often located near the wrist of the robot; it is depressed during those movements of the manipulator that will become part of the programmed cycle.

• The motion cycle is divided into hundreds or even thousands of individual, closely spaced points along the path, and these points are recorded into the controller memory.

• The control system for both leadthrough procedures operates in either of two modes: teach mode or run mode. Teach mode is used to program the robot, and run mode is used to execute the program, as in the sketch below.
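
A minimal, self-contained sketch of that teach/run split, written in Python with the controller simulated: points captured during the teach phase are simply stored and then played back in order during the run phase. The class and method names are illustrative assumptions, not a real robot controller API.

# A minimal, simulated sketch of leadthrough teach/playback (not a real controller API).
# Joint positions are plain tuples; "moving" is simulated with a print statement.

class LeadthroughController:
    def __init__(self):
        self.recorded_path = []   # points captured during teach mode

    def record_point(self, joint_positions):
        """Teach mode: store one closely spaced point of the motion cycle."""
        self.recorded_path.append(tuple(joint_positions))

    def playback(self):
        """Run mode: execute the recorded points in order."""
        for point in self.recorded_path:
            print(f"moving joints to {point}")   # a real controller would drive the motors

if __name__ == "__main__":
    ctrl = LeadthroughController()
    # Points that would come from physically guiding the arm (walk-through):
    for point in [(0.0, 10.0, 20.0), (1.0, 10.5, 20.2), (2.0, 11.0, 20.4)]:
        ctrl.record_point(point)
    ctrl.playback()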

Leadthrough Programming: Advantages and Disadvantages

• Advantages:

• Easily learned by shop personnel


• Logical way to teach a robot

• No computer programming

• Disadvantages:
• Downtime during programming

• Limited programming logic capability

• Not compatible with supervisory control

ROBOT PROGRAMMING WITH TEXTUAL LANGUAGES

• Ever since the arrival of inexpensive and powerful computers, the trend has been
increasingly toward programming robots via programs written in computer programming
languages.

• Usually, these computer programming languages have special features that apply to the
problems of programming manipulators and so are called robot programming languages
(RPLs).

• The first textual language was WAVE, developed in 1973 as an experimental language for
research at the Stanford Artificial Intelligence Laboratory.

• Research involving a robot interfaced to a machine vision system was accomplished using
the WAVE language.

• The research demonstrated the feasibility of robot hand-eye coordination.

• Subsequently came AL, VAL (Variable Assembly Language), VAL II, AUTOPASS, AML (A Manufacturing Language) by IBM, RAIL, MCL, and others.

Generations of Robot Programming Languages: these possess a variety of structures and capabilities.

First Generation Languages:


• They use a combination of command statements and teach pendant procedures for developing robot programs.

• They were developed largely to implement motion control, so they are sometimes referred to as motion-level languages.

• These languages are machine-dependent. Machine-language statements are written in binary code (0/1 form) because the computer can understand only binary.

• Typical features include the ability to define manipulator motions using statements, straight-line interpolation, branching, and elementary sensor commands involving binary (on-off) signals.

• Used for motion sequences of the manipulator (MOVE), input/output capabilities (WAIT, SIGNAL), and writing subroutines (BRANCH); see the sketch at the end of this subsection.

• VAL is an example of a first-generation language.

Limitations:
• Inability to specify complex arithmetic computations for use during program execution.

• Inability to make use of complex sensors and sensor data

• Limited capacity to communicate with other computers

• Cannot be readily extended for future enhancements
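
The sketch below illustrates the motion-level command style named above (MOVE, SIGNAL, WAIT), written as simulated Python functions so it runs stand-alone. The point names and signal numbers are invented; the functions mimic the flavor of first-generation statements rather than reproducing actual VAL syntax.

# Illustrative sketch of the motion-level command style (MOVE, SIGNAL, WAIT)
# expressed in Python. The calls are simulated with print statements and do not
# correspond to any real robot library.

def MOVE(point):            # motion statement: move to a taught point
    print(f"MOVE {point}")

def SIGNAL(line, value):    # output: set a binary output line
    print(f"SIGNAL output {line} = {value}")

def WAIT(line):             # input: wait until a binary input line turns on
    print(f"WAIT for input {line} to go high")

def pick_and_place():
    MOVE("P1")          # approach point above the part
    MOVE("P2")          # grasp position
    SIGNAL(1, True)     # close gripper
    WAIT(3)             # wait for gripper-closed confirmation
    MOVE("P1")          # retreat
    MOVE("P3")          # place position
    SIGNAL(1, False)    # open gripper

pick_and_place()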

Second Generation Languages:


• They involve more complex tasks.

• The robot appears more intelligent.

• These languages are called structured programming languages, because they possess
structured control constructs used in computer programming languages.

• Commercially available second-generation languages include AML (IBM), RAIL (developed by Automatix Inc.), MCL (Manufacturing Control Language), and VAL II.

• Programming in these languages is very similar to computer programming.

• They make use of a teach pendant to define locations in the workspace.

Features of Second Generation Languages:


• Motion Control: Same as first generation

• Advanced Sensor Capabilities: not only simple binary (on-off) signals but also analog signals, giving the capability to control devices by means of sensory data (e.g., a first-generation language can only open and close the gripper, whereas a second-generation language can measure gripping forces); see the sketch after this list.

• Intelligence: the ability to utilize information received about the work environment to modify system behavior in a programmed manner (e.g., a part detected to be loose in its fixture is reinserted firmly).

• Communications and data processing: they generally have provisions to interact with computers and their databases for maintaining records and controlling the activities of the work cell.
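
A hedged sketch of the advanced-sensor and intelligence features above, in Python with the force sensor simulated: an analog gripping-force reading is compared against a threshold, and the grasp is re-seated and retried instead of merely toggling a binary output. The function name, threshold, and retry count are illustrative assumptions.

# Sketch of second-generation behaviour: an analog force reading drives a
# decision and a corrective action. read_grip_force() is simulated here; a real
# language would read a sensor channel on the controller.
import random

def read_grip_force():
    return random.uniform(0.0, 20.0)   # newtons, stand-in for an analog input

def grasp_with_check(min_force=5.0, max_retries=3):
    for _ in range(max_retries):
        force = read_grip_force()
        if force >= min_force:
            print(f"part held firmly ({force:.1f} N)")
            return True
        print(f"grip too loose ({force:.1f} N), re-seating part")  # modify behaviour
    return False

grasp_with_check()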

Third Generation Languages:


• Third-generation languages, also known as procedural languages, are high-level
programming languages designed to be more user-friendly by utilizing syntax similar to
human language.

• This makes it simpler for programmers to create and understand code.

• Some well-known examples are C, C++, Java, FORTRAN, and PASCAL.


• Before execution, these languages must be converted into machine code using a compiler
or interpreter.

Fourth Generation Languages:


• Fourth-generation languages are created to simplify programming by using syntax that is
more similar to everyday human language.

• These languages allow users to focus on specifying what tasks need to be done, without
needing to worry about the specific implementation details.

• They are widely used for tasks such as database handling, generating reports, and building
graphical user interfaces (GUIs).

• Some well-known examples are SQL (Structured Query Language), Python, Perl, Ruby, and MATLAB.

Fifth Generation Languages:


• Fifth-generation languages represent the latest stage in programming evolution and focus
on visual programming and artificial intelligence.

• These languages use visual tools and constraint-based logic to create programs.

• Instead of writing step-by-step instructions, the programmer defines goals, and the system
generates the code to achieve them.

• Examples of fifth-generation languages are Prolog and OPS5.

• The first two generations are called low-level languages. The next three generations are
called high-level languages.

A SAMPLE APPLICATION
• An automated work cell that completes a small subassembly in a hypothetical manufacturing process.

• The work cell consists of a conveyor under computer control that delivers a work piece; a camera connected to a vision system, used to locate the work piece on the conveyor; an industrial robot (a PUMA 560 is pictured) equipped with a force-sensing wrist;

• a small feeder located on the work surface that supplies another part to the manipulator; a computer-controlled press that can be loaded and unloaded by the robot;

• and a pallet upon which the robot places finished assemblies.

• The entire process is controlled by the manipulator's controller in the following sequence:


1. The conveyor is signaled to start; it is stopped when the vision system reports that a bracket has been detected on the conveyor.

2. The vision system judges the bracket's position and orientation on the conveyor and inspects the bracket for defects, such as the wrong number of holes.

3. Using the output of the vision system, the manipulator grasps the bracket with a specified force. The distance between the fingertips is checked to ensure that the bracket has been properly grasped. If it has not, the robot moves out of the way and the vision task is repeated.

4. The bracket is placed in the fixture on the work surface. At this point, the conveyor can be
signaled to start again for the next bracket—that is, steps 1 and 2 can begin in parallel with
the following steps.

5. A pin is picked from the feeder and inserted partway into a tapered hole in the bracket.
Force control is used to perform this insertion and to perform simple checks on its
completion. (If the pin feeder is empty, an operator is notified and the manipulator waits
until commanded to resume by the operator.)

6. The press is commanded to actuate, and it presses the pin the rest of the way into the
bracket. The press signals that it has completed, and the bracket is placed back into the
fixture for a final inspection.

7. By force sensing, the assembly is checked for proper insertion of the pin. The manipulator
senses the reaction force when it presses sideways on the pin and can do several checks to
discover how far the pin protrudes from the bracket.

8. If the assembly is judged to be good, the robot places the finished part into the next
available pallet location. If the pallet is full, the operator is signaled. If the assembly is bad, it
is dropped into the trash bin.

9. Once Step 2 (started earlier in parallel) is complete, go to Step 3.
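
The Python sketch below compresses the sequence above into a single routine, with every device call simulated by a print statement and the inspection results hard-coded. Only the control flow is shown; the parallel restart of the conveyor (steps 1 and 2) and the operator notifications are omitted.

# A compressed, simulated sketch of the work-cell sequence. It only shows the
# control flow; a real cell would drive a conveyor, vision system, robot, and press.

def run_cycle():
    print("1. start conveyor, stop when vision reports a bracket")
    bracket_ok = True                       # 2. vision inspection result (simulated)
    if not bracket_ok:
        print("   reject bracket, restart conveyor")
        return
    print("3. grasp bracket, verify fingertip distance")
    print("4. place bracket in fixture; conveyor may restart in parallel")
    print("5. pick pin from feeder, insert partway under force control")
    print("6. actuate press to seat the pin")
    print("7. force-sense sideways on the pin to check insertion depth")
    assembly_good = True                    # 8. result of the force check (simulated)
    if assembly_good:
        print("8. place finished part on the pallet")
    else:
        print("8. drop bad assembly into the trash bin")

run_cycle()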

Sensors
• A sensor is a device that produces an output signal for the purpose of detecting a physical
phenomenon.

• A transducer is a device that converts one type of physical variable into another form.

• A sensor is a transducer that is used to make a measurement of a physical variable of interest.

• Internal Sensors: These are responsible for the internal working of the robot and are mainly used for closing the loop in feedback control (e.g., position, velocity, and acceleration sensors such as potentiometers, encoders, LVDTs, tachometers, and accelerometers). A robot cannot function properly without these if it is using a closed-loop feedback control system.

• External Sensors: These are responsible for interaction with the environment. A robot can use external sensors such as touch sensors for interacting with the environment. If any of these sensors fail, the robot can still function, but its ability to interact with the external world is reduced. Examples: force, vision, touch, pressure, slip, and proximity sensing via on/off switches, ultrasonic sensors, force sensors, Hall-effect sensors, inductive sensors, and piezo sensors.

Classification of sensors and their functions


Position Sensor: Potentiometer
• It is the common sensor for position measurement.

• It converts a change in position (linear or rotary) into a change in resistance.

• It consists of a resistive element, a sliding contact (wiper) that moves along the element, a mechanical shaft linked to the wiper, and two electrical terminals, one at each end of the element.

• The wiper contact is linked to a mechanical shaft, either rotary or sliding, and its motion causes the resistance value between the wiper and the two end connections to change.

• The resistance change is then converted to a proportional voltage change in the electrical
circuit of the sensor.

• In other words, resistance is proportional to position, as in the sketch below.
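
A minimal Python sketch of that proportionality, assuming an ideal linear rotary potentiometer wired as a voltage divider; the supply voltage and electrical travel are example values, not taken from the text.

# Converting a potentiometer voltage reading into a position, assuming an ideal
# linear element used as a voltage divider.

def pot_position(v_out, v_supply=5.0, full_travel_deg=300.0):
    """Return the wiper angle in degrees for a rotary pot with the given travel."""
    fraction = v_out / v_supply          # voltage divider: Vout = Vsupply * (R_wiper / R_total)
    return fraction * full_travel_deg

print(pot_position(2.5))   # mid-scale reading -> 150.0 degrees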


Position Sensor: Encoders
As microprocessors have become cheaper, and with the move toward digital electronics, the encoder is used virtually everywhere for position measurement.

• Encoder – Digital device that produces pulses based on rotational position.

• Encoders are used to measure the position and velocity of motion.

Encoder Types:
• Incremental Encoder: (Magnetic, Optical, Inductive, Capacitive, Laser)

• Absolute Encoder: (Magnetic, Optical, Inductive, Capacitive, Laser)

• Resolver: provides sine and cosine waves, giving both velocity and position feedback.

Incremental Encoder (Optical)
• An incremental encoder consists of a disk
marked with alternating transparent and
opaque stripes aligned radially.

• A photo transmitter (a light source) is located on one side of the disk and a photo receiver is on the other.

• As the disk rotates, the light beam is alternately completed and broken.

• The output from the photo receiver is a pulse train whose frequency is proportional to the speed of rotation of the disk.

• In a typical encoder, there are two sets of photo transmitters and receivers aligned 90° out of phase.
• This phasing provides direction information: if signal A leads signal B by 90°, the encoder disk is rotating in one direction; if B leads A, it is rotating in the other direction.

• By counting the pulses and adding or subtracting based on the direction, it is possible to use the encoder to provide position information with respect to a known starting location (see the counting sketch at the end of this subsection).

• Normally, two incremental encoders are used in parallel so that the resolution of
measurements is increased.

• The rate at which pulses are generated can also be counted to obtain an estimate of the velocity of the rotating shaft; hence, the encoder can also be used as a velocity sensor.
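
The Python sketch below shows the counting idea for the two quadrature channels: each edge on channel A increments or decrements the count depending on the A/B phase relationship. The sample waveform is invented for illustration, and a real system would normally do this in dedicated counter hardware rather than in Python.

# Sketch of quadrature decoding for an incremental encoder: the count goes up
# or down depending on which channel (A or B) leads.

def decode_quadrature(samples):
    """samples: sequence of (A, B) logic levels. Returns the signed edge count."""
    count = 0
    prev_a = samples[0][0]
    for a, b in samples[1:]:
        if a != prev_a:                    # edge on channel A
            count += 1 if a != b else -1   # direction from the A/B phase relationship
        prev_a = a
    return count

# One electrical cycle in the "forward" direction: A leads B by 90 degrees.
forward = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(decode_quadrature(forward))        # positive count
print(decode_quadrature(forward[::-1]))  # same cycle reversed -> negative count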

Absolute Encoder (Optical)

• In some cases, it is desirable to know the position of an object in absolute terms, that is, not with respect to a starting position. For this, an absolute encoder can be used.

• The construction is the same as that of the incremental encoder, except that there are more tracks of stripes and a corresponding number of receivers and transmitters. Usually the stripes are arranged to provide a binary number proportional to the shaft angle.

• The first track might have two stripes, the second four, the third eight and so on. In this
way the angle can be read directly from the encoder without any counting being necessary.

• The resolution of an absolute encoder depends on the number of tracks and is given by: Resolution = 2^n positions (where n is the number of tracks on the disk).
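
A quick numerical check of that formula in Python; the 8-track example is arbitrary.

# An n-track absolute encoder distinguishes 2**n shaft positions,
# i.e. 360/2**n degrees per position.

def absolute_encoder_resolution(n_tracks):
    positions = 2 ** n_tracks
    return positions, 360.0 / positions

print(absolute_encoder_resolution(8))   # (256, 1.40625) -> 256 positions, ~1.4 deg each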

Resolver
• A resolver is an electromagnetic transducer that can be used in a wide variety of position and velocity feedback applications, including light-duty/servo, light industrial, and heavy-duty applications.

• Resolvers, known as motor resolvers, are commonly used in servo motor feedback
applications due to their good performance in high temperature environments.

• Because of its simple transformer design and lack of any on-board electronics, the resolver is a much more rugged device than most other feedback devices. It is the best choice for applications where reliable performance is required in high-temperature, high-shock-and-vibration, radiation, or contamination environments, which makes the resolver a sensible design alternative for shaft-angle encoding.

• The resolver is a special type of rotary transformer that consists of a cylindrical rotor and stator. An AC signal must be used for excitation; if a DC signal were used, there would be no output signal. The primary coil is mounted on the rotating shaft and carries the alternating excitation current.

• More specifically, a resolver has a single winding on the rotor and a pair of windings on the stator.

• The stator windings are 90˚ apart.

• A resolver produces its output signals when the input phase is energized with an AC voltage (VAC), which induces voltage into each of the output windings.

• The resolver amplitude-modulates the VAC input in proportion to the sine and the cosine of the angle of mechanical rotation.

• The signals may be used directly, or they can be converted into a digital representation using a device known as a "resolver-to-digital" converter (sketched below).
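
A hedged Python sketch of the resolver-to-digital idea: once the two stator outputs have been demodulated to sine and cosine amplitudes, the shaft angle follows from atan2. The demodulation step itself is not shown, and the test amplitudes are synthetic.

# Recovering the shaft angle from demodulated sine/cosine winding amplitudes.
import math

def resolver_angle_deg(v_sin, v_cos):
    """Shaft angle in degrees from demodulated sine and cosine amplitudes."""
    return math.degrees(math.atan2(v_sin, v_cos)) % 360.0

true_angle = 210.0
v_sin = math.sin(math.radians(true_angle))   # synthetic amplitudes for a known angle
v_cos = math.cos(math.radians(true_angle))
print(resolver_angle_deg(v_sin, v_cos))      # ~210.0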

Velocity Sensors

• Velocity information is required for closed-loop feedback control using a PD or PID controller.

• One of the most commonly used devices for feedback of velocity information is the DC tachometer.

• The DC tachometer is essentially a DC generator providing an output voltage proportional to the angular velocity of the armature.

• A tachometer can be described by the relation Vo(t) = Kt ω(t), where

• Vo(t) = output voltage of the tachometer in volts

• Kt = tachometer constant in V/(rad/s)

• ω(t) = angular velocity in radians per second

• DC tachometers provide a voltage output proportional to the armature's rotational velocity; hence, they are analog devices.

• There is a digital equivalent of the DC tachometer, which provides a pulse-train output with a frequency proportional to the angular velocity.
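
A tiny Python sketch that rearranges the tachometer relation to recover angular velocity from a measured voltage; the tachometer constant used is an example value.

# Vo(t) = Kt * w(t) rearranged to recover angular velocity from the measured voltage.

def angular_velocity(v_out, k_t=0.05):
    """v_out in volts, k_t in V/(rad/s); returns angular velocity in rad/s."""
    return v_out / k_t

print(angular_velocity(2.5))   # 2.5 V with Kt = 0.05 -> 50 rad/s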

Machine vision
• Machine vision is the substitution of the human visual sense and judgment capabilities with a video camera and computer to perform an inspection task.

• It is the automatic acquisition and analysis of images to obtain desired data for controlling or evaluating a specific part or activity.

• The operation of the vision system consists of three functions:

1. Sensing and digitizing image data 2. Image processing and analysis 3. Application

• The sensing and digitizing functions involve the input of vision data by means of a camera focused on the scene of interest.

• Special lighting techniques are frequently used to obtain an image of sufficient contrast
for later processing.

• The image captured is digitized and stored in computer memory.

• The digital image is called a frame of vision data and is frequently captured by a hardware device called a frame grabber. These devices are capable of digitizing images at rates of 15 to 120 frames per second (FPS).

• The frames consist of a matrix of data representing projections of the scene sensed by the camera. The elements of the matrix are called picture elements, or pixels.

• The number of pixels is determined by a sampling process performed on each image frame.

• A single pixel is the projection of a small portion of the scene, reducing that portion to a single value. The value is a measure of the light intensity for that element of the scene.

• Each pixel intensity is converted into a digital value.


• The digitized image matrix for each frame is stored and then subjected to image
processing and analysis functions for data reduction and interpretation of the image.

• This data reduction can change the representation of a frame from several hundred thousand bytes of raw image data to several hundred bytes of feature-value data (a sketch of this idea follows below).

• The resultant feature data can be analyzed in the available time for action by the robot
system.
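
A hedged Python sketch of that data-reduction step, using a synthetic frame and NumPy (assumed available): the pixel matrix is thresholded and reduced to a couple of feature values (area and centroid), which is far less data than the raw image. The frame contents and threshold are invented for illustration.

# A synthetic grayscale frame is thresholded and reduced to a handful of
# feature values (area and centroid) instead of the full pixel matrix.
import numpy as np

frame = np.zeros((120, 160), dtype=np.uint8)   # synthetic frame: dark background
frame[40:80, 60:110] = 200                     # one bright rectangular "part"

binary = frame > 128                           # threshold each pixel intensity
area = int(binary.sum())                       # feature 1: object area in pixels
rows, cols = np.nonzero(binary)
centroid = (rows.mean(), cols.mean())          # feature 2: object centroid

print(f"raw frame: {frame.size} pixels -> features: area={area}, centroid={centroid}")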

• Machine vision application categories:

• Defect detection

• Gauging

• Guidance and part tracking

• Part Identification

• Packaging inspection

• Pattern Recognition

• Product Inspection

• Surface Inspection

• Web Inspection
