
Regulation – 2015 (CBCS Scheme) AUTOMATION AND ROBOTICS – 15ME565

Syllabus:
MODULE – 1: AUTOMATION
History of Automation, Reasons for automation, Disadvantages of automation, Automation
systems, Types of automation – Fixed, Programmable and Flexible automation, Automation
strategies. Automated Manufacturing Systems: Components, classification and overview of
manufacturing Systems, Flexible Manufacturing Systems (FMS), Types of FMS, Applications
and benefits of FMS.

MODULE – 2: ROBOTICS
Definition of Robot: History Of Robotics, Robotics Market & Future Prospects, Robot
Anatomy, Robot Configurations: Polar, Cartesian, Cylindrical & Jointed-Arm Configuration.
Robot Motions, Joints, Work Volume, Robot Drive Systems, Precision Of Movement, Spatial
Resolution, Accuracy, Repeatability, End Effectors, Tools, Grippers

MODULE – 3: CONTROLLERS AND ACTUATORS


Basic Control System concepts and Models, Transfer functions, Block diagrams,
characteristic equation, Types of Controllers: on-off, Proportional, Integral, Differential, P-I,
P-D, P-I-D controllers. Control system and analysis. Robot actuation and feedback
components Position sensors – Potentiometers, resolvers, encoders, velocity sensors.
Actuators - Pneumatic and Hydraulic Actuators, Electric Motors, Stepper motors,
Servomotors, Power Transmission systems.

MODULE – 4: ROBOT SENSORS AND MACHINE VISION SYSTEM


Sensors in Robotics - Tactile sensors, Proximity and Range sensors, use of sensors in
Robotics. Machine Vision System: Introduction to Machine vision, the sensing and digitizing
function in Machine vision, Image processing and analysis, Training and Vision systems.

MODULE – 5: ROBOTS TECHNOLOGY OF THE FUTURE


Robot Intelligence, Advanced Sensor capabilities, Telepresence and related technologies,
Mechanical design features, Mobility, locomotion and navigation, the universal hand, system
integration and networking. Artificial Intelligence: Goals of AI research, AI techniques –
Knowledge representation, Problem representation and problem solving, LISP
programming, AI and Robotics, LISP in the factory.

MODULE – 4: ROBOT SENSORS AND MACHINE VISION SYSTEM


Sensors in Robotics - Tactile sensors, Proximity and Range sensors, use of sensors in
Robotics. Machine Vision System: Introduction to Machine vision, the sensing and digitizing
function in Machine vision, Image processing and analysis, Training and Vision systems.

4.1 INTRODUCTION: TRANSDUCERS AND SENSORS:


A transducer is a device that converts one type of physical variable (e.g. force, pressure, temperature, velocity, flow rate) into another form. A common conversion is to electrical voltage, and the reason for making the conversion is that the converted signal is more convenient to use and evaluate with a digital computer.
A sensor is a transducer that is used to make a measurement of a physical variable of interest. Some of the common sensors and transducers include strain gauges (used to measure force and pressure), thermocouples (temperature), speedometers (velocity) and pitot tubes (flow rate).
Transducers and sensors can also be classified into two basic types depending on the form of the converted signal: 1. Analog transducers 2. Digital transducers.
ANALOG transducers provide a continuous analog signal such as an electrical voltage or current. This signal can then be interpreted as the value of the physical variable that is being measured.
DIGITAL transducers produce a digital output signal, either in the form of a set of parallel status bits or a series of pulses that can be counted.
In either form, the digital signal represents the value of the measured variable. Digital transducers are becoming more popular because of the ease with which they can be read as separate measuring instruments.

4.1.1 FEATURES OF SENSORS:


1. ACCURACY:
The accuracy of the measurement should be as high as possible. Accuracy is interpreted to mean that the true value of the variable can be sensed with no systematic positive or negative errors in the measurement. Over many measurements of the variable, the average difference between the actual value and the sensed value will tend to be zero.
2. PRECISION:
The precision of the measurement should be as high as possible. Precision means that there is little or no random variability in the measured variable. The dispersion in the values of a series of measurements will be small over the entire range.
3. OPERATING RANGE:
The sensor should possess a wide operating range and should be accurate and precise over the entire range.

4. SPEED OF RESPONSE:
The transducer should be capable of responding to changes in the sensed variable in minimum time. Ideally, the response would be instantaneous.
5. CALIBRATION:
The sensor should be easy to calibrate. The time and trouble required to accomplish the calibration procedure should be minimal. Further, the sensor should not require frequent recalibration. The term "drift" is commonly applied to denote the gradual loss in accuracy of the sensor with time and use, which would necessitate recalibration.
6. RELIABILITY:
The sensor should possess a high reliability. It should not be subject to frequent failures during operation.
7. COST AND EASE OF OPERATION:
The cost to purchase, install and operate the sensor should be as low as possible. Further, the ideal circumstance would be that the installation and operation of the device would not require a specially trained, highly skilled operator.
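
To make the distinction between accuracy (feature 1) and precision (feature 2) concrete, the following short Python sketch characterizes a sensor from repeated readings of a known reference value. The function name and the sample readings are illustrative, not from any standard library or datasheet.

import statistics

def accuracy_and_precision(readings, true_value):
    # Accuracy relates to the mean (systematic) error, which should tend to zero
    bias = statistics.mean(readings) - true_value
    # Precision relates to the random dispersion of the readings
    spread = statistics.stdev(readings)
    return bias, spread

# Example: ten readings of a 100.0 mm reference dimension
readings = [100.2, 99.8, 100.1, 99.9, 100.0, 100.3, 99.7, 100.1, 99.9, 100.0]
bias, spread = accuracy_and_precision(readings, 100.0)
print(f"bias = {bias:+.3f} mm, dispersion = {spread:.3f} mm")

A sensor with near-zero bias is accurate; one with small dispersion is precise. The two properties are independent, which is why they are listed as separate desirable features.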

4.2 SENSORS IN ROBOTICS:


The sensors used in robotics, mainly for interaction with the environment, include a wide range of devices which can be divided into the following general categories:
1. TACTILE SENSORS
2. PROXIMITY AND RANGE SENSORS
3. MISCELLANEOUS SENSORS AND SENSOR- BASED SYSTEMS
4. MACHINE VISION SYSTEMS
4.2.1 SENSOR DEVICES USED IN ROBOT WORK CELLS:
AMMETER:-
Electrical meter used to measure electrical current.
EDDY CURRENT DETECTOR:- (PROXIMITY SENSOR)
Device that emits an alternating magnetic field at the tip of a probe, which induces eddy currents in any conductive object within range of the device. Can be used to indicate the presence or absence of a conductive object.
INFRARED SENSOR:- (PROXIMITY SENSOR)
Transducer which measures temperature by the infrared light emitted from the surface of an object. Can be used to indicate the presence or absence of a hot object.

LIMIT SWITCH:- (TOUCH SENSOR)
Electrical on-off switch actuated by depressing a mechanical lever or button on the device. Can be used to indicate the presence or absence of an object.
LINEAR VARIABLE DIFFERENTIAL TRANSFORMER:-
Electromechanical transducer used to measure linear or angular displacement.
MICROSWITCH:-
Small electrical limit switch. Can be used to indicate the presence or absence of an object.
OHMMETER:-
Meter used to measure electrical resistance.
OPTICAL PYROMETER:-
Device used to measure high temperature by sensing the brightness of an object's surface. Can be used to indicate the presence or absence of a hot object.
PHOTOMETRIC SENSORS:-
Various transducers used to sense light. Category includes photocells, photoelectric
transducers, phototubes, photodiodes, phototransistors, and photoconductors. Can be used to
indicate presence or absence of an object.
PIEZOELECTRIC ACCELEROMETER:-
Sensor used to indicate or measure vibration.
POTENTIOMETER:-
Electrical meter used to measure voltage.
PRESSURE TRANSDUCER:-
Various transducers used to indicate air pressure and other fluid pressures.
RADIATION PYROMETER:- (PROXIMITY SENSOR)
Device used to measure high temperature by sensing the thermal radiation emitted from the surface of an object. Can be used to indicate the presence or absence of a hot object.
STRAIN GAUGE:- (FORCE SENSOR)
Common transducer used to measure force, torque, pressure and other related variables. Can be used to indicate the force applied to grasp an object.
THERMISTOR:-
Device based on electrical resistance used to measure temperature.
THERMOCOUPLE:-
Commonly used device to measure temperature. Based on the physical principle that a junction of two dissimilar metals will generate an emf which can be related to temperature.

VACUUM SWITCH:-
Device used to indicate negative air pressure. Can be used with a vacuum gripper to indicate the presence or absence of an object.
VISION SENSOR:-
Advanced sensor system used in conjunction with pattern recognition and other techniques to view and interpret events occurring in the robot workplace.
VOICE SENSOR:-
Advanced sensor system used to communicate commands or information orally to the robot.

4.3 TACTILE SENSORS:


Tactile sensors are devices which indicate contact between themselves and some other solid object. A tactile sensor is a collection of touch sensors embedded between two polymer layers separated by an isolator mesh.
Two layers of PVDF film are used, separated by a soft film which transmits vibration. The lower PVDF film has an alternating voltage applied to it, which results in mechanical oscillations of the film. The intermediate film transmits these vibrations to the upper PVDF film; when pressure is applied to the upper PVDF film, these vibrations cause an alternating voltage to be produced.
A tactile sensor is a pressure sensor used on the fingertips of robotic hands to determine when a hand has come into contact with an object. This sensor is activated by a single touch and is used in applications that pose positional problems. A tactile sensor is capable of detecting:
 Presence of objects
 Shape, location, orientation of the object
 Contact area and the pressure at that point
 Magnitude, location and direction of the force
 Moments magnitude, plane and the direction
Tactile sensors can be divided into two classes: 1) TOUCH sensors 2) FORCE sensors. Touch sensors provide a binary output signal which indicates whether or not contact has been made with the object. Force sensors indicate not only that contact has been made with the object but also the magnitude of the contact force between the two objects.
4.3.1 TOUCH SENSORS:
A touch sensor is used to indicate that contact has been made between two objects, without regard to the magnitude of the contacting force, e.g. limit switches and microswitches. They can be used to indicate the presence or absence of parts in a fixture or at the pickup point along a conveyor. A touch sensor works like a light switch in a house: when the button is pressed, an electrical circuit is closed inside the sensor and electricity flows; when the button is released, the circuit is broken and no electricity flows. The RCX controller can sense this flow of electricity, so it knows whether the touch sensor is pressed or released. Touch sensors send a signal when physical contact is made.
A robot with 6 degrees of freedom would be capable of accessing surfaces on a part that would be difficult for a 3-axis coordinate measuring machine, the inspection system normally considered for such an inspection task. Unfortunately, the robot's accuracy would be the limiting factor in contact inspection work.
4.3.2 FORCE SENSORS:
The capability to measure forces permits the robot to perform a number of tasks. These include the capability to grasp parts of different sizes in material handling, machine loading and assembly work, applying the appropriate level of force for the given part.
In assembly applications, force sensing could be used to determine if the screws have become cross-threaded or if the parts are jammed. A force sensor is used to measure the force applied on the object by the robot's arm during operation. The force can be worked out by using a force-sensing wrist or by measuring the torque exerted at the joints of the arm.
Force sensing can be accomplished in several ways. A commonly used technique is the force-sensing wrist, which consists of a special load cell mounted between the gripper and the wrist. Another technique is to measure the torque being exerted by each joint; this is usually accomplished by sensing the motor current for each of the joint motors. A third technique is to form an array of force-sensing elements so that the shape and other information about the contact surface can be determined.
4.3.2.1 FORCE-SENSING WRIST:
The purpose is to provide information about the 3 components of force (Fx, Fy & Fz) & the 3 moments (Mx, My & Mz) being applied at the end effector. The device consists of a metal bracket fastened to a rigid frame. The frame is mounted to the wrist of the robot & the tool is mounted to the center of the bracket.
Since the forces are usually applied to the wrist in combinations, it is necessary to first resolve the forces & moments into their six components. This kind of computation can be carried out by the robot controller or by a specialized amplifier designed for this purpose. Based on these calculations, the robot controller can obtain the required information about the forces & moments being applied at the wrist. This information can be used for a number of applications.

Prepared by: G.V.RAJA Sri Sairam College of Engineering Anekal. Page | 7


Regulation – 2015 (CBCS Scheme) AUTOMATION AND ROBOTICS – 15ME565

As an example, an insertion operation requires that there be no side forces applied to the peg. In another example, the robot's end effector is required to follow along an edge or contour of an irregular surface. This is called force accommodation. With this technique, certain forces are set to zero while others are set to specific values. Using force accommodation, one could command the robot to follow the edge or contour by maintaining a fixed velocity in one direction & fixed forces in the other directions.

A robot equipped with a force-sensing wrist and the proper computing capacity could be programmed to accomplish these kinds of applications. The procedure would begin by deciding on the desired force to be applied in each axis direction.

The controller would perform the following sequence of operations, with the resulting offset force calculated:
1. Measure the forces at the wrist in each axis direction.
2. Calculate the force offsets required. The force offset in each direction is determined by subtracting the desired force from the measured force.
3. Calculate the torques to be applied by each axis to generate the desired force offsets at the wrist. These are moment calculations which take into account the combined effects of the various joints & links of the robot.
4. The robot must then provide the torques calculated in step 3 so that the desired forces are applied in each direction.
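
The four-step sequence above can be sketched as one cycle of a control loop. In this hedged Python sketch, measure_wrist_forces, compute_joint_torques and apply_joint_torques are hypothetical placeholders standing in for the controller's internal routines; they are not part of any specific robot programming interface.

def force_accommodation_cycle(desired_forces,
                              measure_wrist_forces,
                              compute_joint_torques,
                              apply_joint_torques):
    # Step 1: measure the forces at the wrist in each axis direction
    measured = measure_wrist_forces()
    # Step 2: force offset = measured force minus desired force, per axis
    offsets = {axis: measured[axis] - desired_forces[axis]
               for axis in desired_forces}
    # Step 3: moment calculations mapping wrist-frame offsets to the
    # torques each joint must contribute
    torques = compute_joint_torques(offsets)
    # Step 4: command the joint motors to apply the computed torques
    apply_joint_torques(torques)
    return offsets

For force accommodation, desired_forces would hold zeros for the axes along which no side force is allowed and specific values for the axes along which a contact force is to be maintained.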
Force-sensing wrists are usually very rigid devices so that they will not deflect undesirably while under load. When designing a force-sensing wrist there are several problems that may be encountered. The end-of-the-arm is often in a relatively hostile environment.
This means that the device must be sufficiently rugged to withstand the environment, and must be capable of tolerating an occasional crash of the robot arm. At the same time the device must be sensitive enough to detect small forces. This design problem is usually solved by using overtravel limits. An overtravel limit is a physical stop designed to prevent the force sensor from deflecting so far that it would be damaged.
The calculations required to utilize a force-sensing wrist are complex & require considerable computation time. Also, for an arm traveling at moderate to high speeds, the level of control over the arm just as it makes contact with an object is limited by the dynamic performance of the arm.
The momentum of the arm makes it difficult to stop its forward motion quickly enough to prevent a crash. The design of force sensors is a very complex process due to the redundancies present in the force-sensing process itself. Force sensors are made using strain gauges that measure the strain along particular axes.

It is to be noted that in 3D space we can have a total of 3 forces & 3 moments. Hence, a single cantilever beam cannot measure all six components. A force sensor is usually made with 4 arms, each of which responds to two forces.

4.3.2.2 JOINT SENSING:

If the robot uses dc servomotors, the torque being exerted by a motor is proportional to the current flowing through its armature. A simple way to measure this current is to measure the voltage drop across a small precision resistor in series with the motor and power amplifier. This simplicity makes the technique attractive; however, measuring the joint torque has several disadvantages. First, measurements are made in joint space, while the forces of interest are applied by the tool and would be more useful if made in tool space. Second, the measurements reflect not only the forces being applied at the tool, but also the forces and torques required to accelerate the links of the arm and to overcome the friction and transmission losses of the joints. In fact, if the joint friction is relatively high, it will mask the small forces being applied at the tool tip. Direct-drive robots are a relatively new innovation in which the drive motors are located at the joints of the manipulator. For torque sensing, this configuration reduces the friction and transmission losses, and the problems of torque measurement are reduced.
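
The current-sensing scheme lends itself to a very small calculation, sketched below. The resistor value, torque constant and friction term are illustrative assumptions, not values for any particular motor.

def joint_torque_from_current(v_sense, r_sense=0.1, k_t=0.5, friction=0.05):
    # Ohm's law gives the armature current from the sense-resistor voltage drop
    current = v_sense / r_sense
    # For a dc servomotor, torque is proportional to armature current
    torque = k_t * current
    # Crude correction for joint friction; in practice friction is the main
    # error source for this technique, as noted above
    return torque - friction

# 0.25 V across a 0.1 ohm resistor -> 2.5 A -> 1.25 N*m, less friction
print(joint_torque_from_current(0.25))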

4.3.2.3 TACTILE ARRAY SENSORS:

A tactile array sensor is a special type of force sensor composed of a matrix of force-sensing elements. The force data provided by this type of device may be combined with pattern recognition techniques to describe a number of characteristics: 1. the presence of an object, 2. the object's contact area, shape, location & orientation, 3. the pressure & pressure distribution & 4. the force magnitude & location.
Tactile array sensors can be mounted in the fingers of the robot gripper or attached to a work table as a flat touch surface. The device is typically composed of an array of conductive elastomer pads. As each pad is squeezed, its electrical resistance changes in response to the amount of deflection in the pad, which is proportional to the applied force.

By measuring the resistance of each pad, information about the shape of the object pressed against the array of sensing elements can be determined. Consider the operation of a tactile array sensor with an 8X8 matrix of pressure-sensitive pads, with a CRT monitor displaying the tactile impression made by the object placed on the surface of the sensor device. As the number of pads in the array is increased, the resolution of the displayed information improves.
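
A sketch of how such an 8X8 array might be read and interpreted in Python follows. The assumed relationship (pad conductance rising linearly with force) and the calibration constants are hypothetical; a real device would be calibrated empirically.

def tactile_forces(resistances, r_nominal=10000.0, k=50.0):
    # Convert each pad's resistance (ohms) into an estimated force,
    # assuming conductance increases linearly with applied force
    return [[max(0.0, k * (1.0 / r - 1.0 / r_nominal)) for r in row]
            for row in resistances]

def contact_centroid(forces):
    # Centre of pressure on the array, in (row, column) pad coordinates
    total = sum(sum(row) for row in forces)
    if total == 0:
        return None                      # no contact detected
    row_c = sum(i * sum(row) for i, row in enumerate(forces)) / total
    col_c = sum(j * forces[i][j]
                for i in range(len(forces))
                for j in range(len(forces[i]))) / total
    return row_c, col_c

The force matrix corresponds to the tactile impression described above: thresholding it gives the contact area, and the centroid gives the contact location.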

4.4 PROXIMITY & RANGE SENSORS:

Proximity sensors are devices that indicate when one object is close to another object. How close the object must be in order to activate the sensor depends on the particular device; the distance can be anywhere between several millimeters & several feet. Sensors that can also be used to measure the distance between the object & the sensor are called range sensors. Proximity & range sensors are typically located on the wrist or end effector, since these are the moving parts of the robot.
One practical use is to detect the presence or absence of a workpart or other object. Another important application is sensing human beings in the robot workcell. A range sensor would be useful for determining the location of an object. A variety of technologies are available for designing these sensors, including optical devices, acoustical devices & electrical field techniques.

4.4.1 OPTICAL PROXIMITY SENSORS:

Optical proximity sensors can be designed using either visible or invisible (infrared) light sources. Infrared sensors may be active or passive. Active sensors send out an infrared beam & respond to the reflection of the beam against a target. An active infrared sensor can be used to indicate not only whether or not a part is present, but also the position of the part. By timing the interval between when the signal is sent & when the echo is received, a measurement of the distance between the object & the sensor can be made. Passive infrared sensors are simple devices which detect the presence of infrared radiation in the environment. They are often utilized in security systems to detect the presence of bodies giving off heat within the range of the sensor.
Another optical approach for proximity sensing involves the use of a collimated light beam & a linear array of light sensors. By reflecting the light beam off the surface of the object, the location of the object can be determined from the position of its reflected beam on the sensor array. The distance between the object & the sensor is given by

X = 0.5 y tan A

where X is the distance of the object from the sensor, A is the angle between the object & the sensor array, and y is the lateral distance between the light source & the reflected light beam striking the linear array. This distance corresponds to the number of elements contained within the reflected beam in the sensor array.
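
The formula translates directly into code. A minimal sketch, assuming A is supplied in degrees and y is in the same length unit as the result:

import math

def triangulation_range(y, angle_a_deg):
    # X = 0.5 * y * tan(A), from the relation given above
    return 0.5 * y * math.tan(math.radians(angle_a_deg))

# Example: reflected beam strikes the array 20 mm from the source at A = 60 deg
print(triangulation_range(20.0, 60.0))   # about 17.3 mm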

4.4.2 ACOUSTICAL DEVICES:

Acoustical devices can be used as proximity sensors. Ultrasonic frequencies above 20,000 Hz are often used, since such sound is beyond the range of human hearing. One type of acoustical proximity sensor uses a cylindrical open-ended chamber with an acoustic emitter at the closed end of the chamber. The emitter sets up a pattern of standing waves in the cavity which is altered by the presence of an object near the open end. A microphone located in the wall of the chamber is used to sense the change in the sound pattern. This kind of device can also be used as a range sensor.

4.4.3 ELECTRICAL FIELD SENSORS:

These fall into two categories: 1) eddy current sensors & 2) magnetic field sensors.
EDDY CURRENT SENSOR:
An eddy current sensor creates a primary alternating magnetic field in the small region near the probe. This field induces eddy currents in an object placed in the region, so long as the object is made of a conductive material. These eddy currents produce their own magnetic field, which interacts with the primary field to change the flux density. The probe detects the change in the flux density & this indicates the presence of the object.

MAGNETIC FIELD SENSOR:

A magnetic field sensor is simple & can be made using a reed switch & a permanent magnet. The magnet can be made a part of the object being detected or it can be part of the sensor device. In either case, the device can be designed so that the presence of the object in the region of the sensor completes the magnetic circuit & activates the reed switch. This type of design is attractive because of its relative simplicity & because no external power supply is required for its operation.

4.5 USE OF SENSORS:

The major uses of sensors in industrial robotics and other automated manufacturing systems can be divided into 4 basic categories:
1. Safety monitoring
2. Interlocks in workcell control
3. Part inspection for quality control
4. Determining positions and related information about objects in the robot cell
1. SAFETY MONITORING:
This concerns the protection of human workers who work close to the robot or other equipment. There are 3 occasions when humans are close enough to the machine to be exposed to danger:
a) During programming of the robot
b) During operation of the robot cell, when humans work in the cell
c) During maintenance of the robot
The types of risk encountered during these times include physical injury from collision between the human and the robot, electrical shock, objects dropped from the gripper, and loose power cables or hydraulic lines on the floor. Proper grounding prevents electrical shocks, and raised floor platforms can be used to cover power cables and hydraulic lines.
2. INTERLOCKS IN WORKCELL CONTROL:
Interlocks are used to coordinate the sequence of activities of the different pieces of equipment in the work cell. In the execution of the program, there are certain elements of the work cycle whose completion must be verified before proceeding with the next element in the cycle. Sensors are often utilized to provide this kind of verification.

3. PART INSPECTION FOR QUALITY CONTROL:


Sensors can be used to determine a variety of part quality characteristics. Traditionally, QC has been performed using manual inspection techniques on a statistical sampling basis. Use of sensors permits the inspection operation to be performed automatically on a 100% basis. The limitation is that the sensor system can only inspect for a limited range of part characteristics and
defects. For example, a sensor probe designed to measure part length cannot detect flaws in the
part surface.

4. DETERMINING POSITIONS AND RELATED INFORMATION ABOUT OBJECTS IN THE ROBOT CELL:

A major use of sensors in robotics is to determine the position and other information about various objects in the work cell (workparts, fixtures, people, equipment, etc). In addition to positional data about the particular object, other information required to properly execute the work cycle might include the object's orientation, color, size and other characteristics.
An example is workpart identification. Consider a work cell in which the robot processes several types of workparts, each requiring a different sequence of actions by the robot. Each part would have to be properly identified so that the correct subroutine could be called for execution. This type of identification problem arises in automobile body spot-welding lines where the line is designed to weld several different body styles. Each welding robot along the line must sense the presence or absence of specific body style features in order to make the proper identification and execute the correct welding cycle.
All 4 categories are instances where the sensor constitutes a component of the control system used in the robot work cell to accomplish specific control functions. All of the control functions which take place in the work cell are coordinated and regulated by this larger system.
-------------------------------------------------------------------------------------------------------------------------

4.6 MACHINE VISION:


Machine vision is the automatic extraction of information from digital images for process control and manufacturing systems. Machine vision systems are programmed to perform narrowly defined tasks such as counting objects on a conveyor, decoding serial numbers and searching for surface defects. Machine vision is concerned with the sensing of vision data and its interpretation by a computer. A typical system consists of a camera and digitizing hardware, a digital computer, and a preprocessor which interfaces the hardware and software. The operation of a machine vision system consists of 3 functions:
i) Sensing and digitizing image data
ii) Image processing and analysis
iii) Application

i) SENSING & DIGITIZING IMAGE DATA:

This involves the input of vision data by means of a camera focused on the scene of interest. Special lighting techniques are used to obtain an image with sufficient contrast for later processing. The image is digitized and stored in computer memory. The digital image is called a FRAME of VISION DATA, and is frequently captured by a hardware device called a FRAME GRABBER. These devices are capable of digitizing images at 30 frames per second.
Each frame consists of a matrix of data representing projections of the scene sensed by the camera. The elements of the matrix are called PIXELS, or picture elements. The number of pixels is determined by a sampling process performed on each image frame. A single pixel is the projection of a small portion of the scene, which reduces that portion to a single value. The value is a measure of the light intensity for that element. Each pixel intensity is converted into a digital value.
ii) IMAGE PROCESSING AND ANALYSIS:
The digitized image for each frame is stored and then subjected to image processing and analysis functions for data reduction and interpretation of the image. These steps are required in order to permit the real-time application of vision analysis required in robotic applications. Typically an image frame will be thresholded to produce a binary image, and then various feature measurements will further reduce the data representation of the image. This data reduction can change the representation of a frame from several hundred thousand bytes of raw image data to several hundred bytes of feature value data. The resultant feature data can be analyzed in the available time for action by the robot system.
Various techniques to compute the feature values can be programmed into the computer to obtain feature descriptors of the image, which are matched with previously computed values stored in the computer. These descriptors include shape and size characteristics that can be readily calculated from the thresholded image matrix. To accomplish image processing and analysis, the vision system frequently must be trained. In training, information is obtained on known objects and stored as computer models. The information consists of features such as the area of the object, its perimeter length, major and minor diameters and similar features.
During subsequent operation, feature values computed on unknown objects viewed by the camera are compared with the models to determine if a match has occurred.
iii) ROBOT APPLICATIONS:
The current applications of machine vision in robotics include inspection, part identification, location and orientation. Research is ongoing on advanced applications for use in complex inspection, guidance and navigation. Vision systems can be classified in a number of ways, for example as two-dimensional or three-dimensional systems. 2D applications include checking the dimensions of a part or verifying the presence of components on a subassembly. Many 2D vision systems can operate on a binary image which is the result of a simple thresholding technique. This is based on an assumed high contrast between the object and the background, achieved with a controlled lighting system.
3D vision systems may require special lighting techniques and more sophisticated image-processing algorithms to analyze the image. Some systems require 2 cameras in order to achieve a stereoscopic view, while other 3D systems rely on the use of structured light and optical triangulation techniques with a single camera. Another way of classifying vision systems is according to the number of gray levels used to characterize the image. In a binary image, the gray levels are divided into either of 2 categories, black or white.

4.6.1 THE SENSING AND DIGITIZING FUNCTION IN MACHINE VISION:

Image sensing requires some type of image formation device such as a camera, and a digitizer which stores a video frame in the computer memory. The sensing and digitizing functions can be divided into several steps. The initial step involves capturing the image of the scene with the vision camera. The image consists of relative light intensities corresponding to the various portions of the scene. These light intensities are continuous analog values which must be sampled and converted into digital form. In the second step, digitizing is achieved by an analog-to-digital converter (ADC). The ADC is either part of a digital video camera or the front end of the frame grabber; the choice depends on the type of hardware in the system. The third step involves the frame grabber, an image storage and computation device which stores a given pixel array. Frame grabbers vary in capability from those which simply store an image to those with significant computation capability. In the more powerful frame grabbers, thresholding, windowing, and histogram modification calculations can be carried out under computer control. The stored image is then subsequently processed and analyzed by the combination of the frame grabber and the vision controller.
4.6.2 IMAGING DEVICES:
Camera technologies available include the older black-and-white vidicon camera and the newer, second-generation, solid-state cameras. Solid-state cameras used for robot vision include CCD, CID and silicon bipolar sensor cameras.
VIDICON CAMERA:

In the operation of this system, the lens forms an image on the glass faceplate of the camera. The faceplate has an inner surface which is coated with two layers of material. The first layer consists of a transparent signal electrode film deposited on the inner surface of the faceplate. The second layer is a thin photosensitive material deposited over the conducting film. The photosensitive layer consists of a high density of small areas, similar to pixels.
Each area generates a decreasing electrical resistance in response to increasing illumination. A charge is created in each small area upon illumination, so an electrical charge pattern is generated corresponding to the image formed on the faceplate. The charge accumulated for an area is a function of the intensity of the impinging light over a specified time.

Once a light-sensitive charge is built up, this charge is read out to produce a video signal. This is accomplished by scanning the layer with an electron beam. The scanning is controlled by a deflection coil mounted along the length of the tube.
For an accumulated positive charge, the electron beam deposits enough electrons to neutralize the charge. An equal number of electrons flow to cause a current at the video signal electrode. The magnitude of the signal is proportional to the light intensity and the amount of time with which an area is scanned.
The current is then directed through a load resistor which develops a signal voltage, which is further amplified and analyzed. Raster scanning eliminates the need to consider the time at each area by making the scan time the same for all areas; only the intensity of the impinging light is considered.
Raster scanning is typically done by scanning the electron beam from left to right and top to bottom. The process is designed to start the integration with zero accumulated charge. For a fixed scan time, the charge accumulated is proportional to the intensity of that portion of the image being considered. The output of the camera is a continuous voltage signal for each line scanned. The voltage signal for each scan line is subsequently sampled and quantized, resulting in a series of sampled voltages being stored in digital memory.
This ADC process for the complete screen results in a two-dimensional array of pixels. Typically, a single pixel is quantized to between six and eight bits by the ADC.
4.6.3 CHARGE COUPLED DEVICES:
In this technology, the image is projected by the video camera onto the CCD, which detects, stores, and reads out the accumulated charge generated by the light on each portion of the image. Light detection occurs through the absorption of light on a photoconductive substrate. Charges accumulate under positive control electrodes in isolated wells due to voltages applied to the control electrodes. Each isolated well represents one pixel & can be transferred to output storage registers by varying the voltages on the metal control electrodes.

In another type of CCD imager, the accumulated charges are transferred into a separate storage register. Charges are accumulated for the time it takes to complete a single image, after which they are transferred line by line into the storage register. For example, register A accumulates the pixel charge produced by the light image. Once accumulated for a single picture, the charges are transferred line by line to register B. The pixel charges are then read out line by line through a horizontal register C to an output amplifier. During readout, register A is accumulating new pixel elements. The complete cycle is repeated approximately every 1/60th of a second.

4.6.4 LIGHTING TECHNIQUES:

Good illumination of the scene is important because of its effect on the level of complexity of the image-processing algorithms required. Poor lighting makes the task of interpreting the scene more difficult. A proper lighting technique should provide high contrast & minimize specular reflections & shadows, unless these are specifically designed into the system. The purpose of a lighting technique is to direct the path of light from the lighting device to the camera. The basic types of lighting devices used in machine vision may be grouped into the following categories:
1. Diffuse surface devices: examples of diffuse surface illuminators are the typical fluorescent lamps & light tables.
2. Condenser projectors: a condenser projector transforms an expanding light source into a condensing light source. This is useful in imaging optics.
3. Flood or spot projectors: flood lights & spot lights are used to illuminate surface areas.
4. Collimators: used to provide a parallel beam of light on the subject.
5. Imagers: imagers such as slide projectors & optical enlargers form an image of the target at the object plane.
There are two basic illumination techniques used in machine vision: front lighting & back lighting. In FRONT lighting, the light source is on the same side of the scene as the camera, and reflected light is used to create the image viewed by the camera. In BACK lighting, the light source is directed at the camera & is located behind the objects; the image seen by the camera is a shadow of the object. Back lighting is suitable for applications in which a silhouette of the object is sufficient for recognition or where there is a need to obtain relevant measurements.
4.6.5 VARIOUS ILLUMINATION TECHNIQUES:

A. FRONT LIGHT SOURCE
1. FRONT ILLUMINATION: Area flooded such that the surface is the defining feature of the image.
2. SPECULAR ILLUMINATION (DARK FIELD): Used for surface defect recognition (background dark).
3. SPECULAR ILLUMINATION (LIGHT FIELD): Used for surface defect recognition; camera in line with reflected rays (background light).
4. FRONT IMAGER: Structured light applications; an image of light is superimposed on the object surface, the light beam being displaced as a function of thickness.

B. BACK LIGHT SOURCE
1. REAR ILLUMINATION (LIGHTED FIELD): Uses a surface diffuser to silhouette features; used in parts inspection & basic measurements.
2. REAR ILLUMINATION (CONDENSER): Produces high-contrast images; useful for high-magnification applications.
3. REAR ILLUMINATION (COLLIMATOR): Produces a parallel light ray source such that features of the object do not lie in the same plane.
4. REAR OFFSET ILLUMINATION: Useful to produce feature highlights when the feature is in a transparent medium.

C. OTHER MISCELLANEOUS DEVICES
1. BEAM SPLITTER: Transmits light along the same optical axis as the sensor; the advantage is that it can illuminate difficult-to-view objects.
2. SPLIT MIRROR: Similar to a beam splitter but more efficient, with lower intensity requirements.
3. NONSELECTIVE REDIRECTORS: The light source is redirected to provide proper illumination.
4. RETROREFLECTOR: A device that redirects incident rays back to the sensor; the incident angle is capable of being varied; provides high contrast for an object between source & reflector.
5. DOUBLE DENSITY: A technique used to increase illumination intensity at the sensor; used with transparent media & a retroreflector.

4.7 ANALOG-TO-DIGITAL SIGNAL CONVERSION:

This process involves taking an analog input voltage signal & producing an output that represents the voltage signal in the digital memory of a computer. ADC consists of three phases: sampling, quantization & encoding.
4.7.1 SAMPLING:
A given analog signal is sampled periodically to obtain a series of discrete-time analog signals. By setting a specified sampling rate, the analog signal can be approximated by the sampled digital outputs. The fidelity of the approximation is determined by the sampling rate of the ADC. The sampling rate should be at least twice the highest frequency in the video signal if we wish to reconstruct that signal exactly.

4.7.2 QUANTIZATION:
Each sampled discrete-time voltage level is assigned to one of a finite number of defined amplitude levels. These amplitude levels correspond to the gray scale used in the system. The predefined amplitude levels are characteristic of a particular ADC & consist of a set of discrete voltage values. The number of quantization levels is 2^n, where n is the number of bits of the ADC. A large number of bits enables a signal to be represented more precisely. For example, an 8-bit converter would allow us to quantize at 2^8 = 256 different values, whereas 4 bits would allow only 2^4 = 16 different levels.
4.7.3 ENCODING:
Changing the quantized amplitude levels into digital code is called ENCODING. This involves representing an amplitude level by a binary digit sequence. The ability of the encoding process to distinguish between various amplitude levels is a function of the spacing of each quantization level.
Quantizing level spacing = full-scale range / 2^n

Quantization error = ±(1/2)[quantization level spacing]
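
The three ADC phases can be summarized in a short Python sketch that quantizes a sampled voltage and encodes the level as a binary string; the default values match the 8-bit, 5 V case worked in the problems below.

def quantize_and_encode(voltage, full_scale=5.0, n_bits=8):
    levels = 2 ** n_bits                       # number of quantization levels
    spacing = full_scale / levels              # quantization level spacing
    # Assign the sampled voltage to its amplitude level (clamp the top value)
    level = min(int(voltage / spacing), levels - 1)
    # Encoding: represent the level as an n-bit binary digit sequence
    return level, format(level, f"0{n_bits}b")

print(quantize_and_encode(0.05))    # (2, '00000010')   -> gray level 2
print(quantize_and_encode(5.0))     # (255, '11111111') -> gray level 255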


PROBLEMS:
Problem 1:
Consider a vision system using a vidicon tube. An analog video signal is generated for each of the 512 lines comprising the faceplate. The sampling capability of the ADC is 100 ns; this is the cycle time required to complete the ADC process for one pixel. Using the American standard of 33.33 ms to scan the entire faceplate of 512 lines, determine the scan time per line & the number of pixels that can be processed per line.
Solution: Given data:
1. cycle time per pixel = 100 ns
2. scan time for the 512 lines of the faceplate = 33.33 ms
Answer:
The scan time for each line = 33.33 ms / 512 lines = 65.1 microseconds/line
The number of pixels that can be processed per line
= (65.1 microseconds/line) / (100 nanoseconds/pixel)
= 651 pixels/line
Problem 2:
A continuous video voltage signal is to be converted into a discrete signal. The range of the signal after amplification is 0 to 5 V. The ADC has an 8-bit capacity. Determine the number of quantization levels, the quantization level spacing, the resolution & the quantization error.
For an 8-bit capacity, the number of quantization levels = 2^8 = 256.
The ADC resolution is 1/256 = 0.0039, or 0.39%.
For the 5 V range, quantization level spacing = 5 V / 2^8 = 0.0195 V
Quantization error = +/-(1/2)[quantization level spacing]
= +/-(1/2)(0.0195 V) = +/-0.00975 V
To represent the voltage signal in binary form involves the process of encoding. This is accomplished by assigning a sequence of binary digits to represent increasing quantization levels.
Problem 3:
For the ADC of the previous problem, indicate how the voltage signal might be represented in binary form. With the 8 bits available, we can use the bit sequences to represent increasing quantization levels as follows:

VOLTAGE RANGE (V)     BINARY NUMBER     GRAY SCALE

0 - 0.0195            0000 0000         0 (black)
0.0195 - 0.0390       0000 0001         1 (dark gray)
0.0390 - 0.0585       0000 0010         2
0.0585 - 0.0780       0000 0011         3
0.0780 - 0.0975       0000 0100         4
...                   ...               ...
4.9805 - 5.0          1111 1111         255 (white)

4.7.4 IMAGE STORAGE:

After ADC, the image is stored in computer memory, typically called the frame buffer. This buffer may be part of the frame grabber or in the computer itself. Various techniques have been developed to acquire & access digital images. The frame grabber is one example of a video data acquisition device that will store a digitized picture & acquire it in 1/30 s. Digital frames are typically quantized to 8 bits per pixel. However, a 6-bit buffer is often adequate, since the average camera system cannot produce 8 bits of noise-free data.
Thus the lower-order bits are dropped as a means of noise cleaning. In addition, the human eye can only resolve about 2^6 = 64 gray levels. A combination of row & column counters is used in the frame grabber, synchronized with the scanning of the electron beam in the camera. Thus each position on the screen can be uniquely addressed.
To read the information stored in the frame buffer, the data is grabbed via a signal sent from the computer to the address corresponding to a row-column combination. Such frame grabber techniques have become extremely popular & are used frequently in vision systems.

4.8 IMAGE PROCESSING & ANALYSIS:

To make use of the stored image in industrial applications, the computer must be programmed to operate on the digitally stored image. This is a substantial task considering the large amount of data that must be analyzed: consider an industrial vision system having a pixel density of 350 pixels per line & 280 lines (a total of 98,000 picture elements), with a 6-bit register for each picture element to represent the various gray levels. The techniques used to reduce and analyze this data include:
1. Image data reduction
2. Segmentation
3. Feature extraction
4. Object recognition
4.8.1 IMAGE DATA REDUCTION:
The objective is to reduce the volume of data as a primary step in the data analysis. The following two schemes have found common use: 1. DIGITAL CONVERSION 2. WINDOWING. The function of both schemes is to eliminate the bottleneck that can occur from the large volume of data in image processing.
Digital conversion reduces the number of gray levels used by the machine vision system. For example, an 8-bit register used for each pixel would have 2^8 = 256 gray levels. Depending on the requirements of the application, digital conversion can be used to reduce the number of gray levels by using fewer bits to represent the pixel light intensity. 4 bits would reduce the number of gray levels to 16. This kind of conversion significantly reduces the magnitude of the image-processing problem.
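
Because the gray levels are binary codes, the digital conversion can be done simply by discarding low-order bits. A one-line sketch:

def reduce_gray_levels(pixel_8bit, n_bits=4):
    # Keep only the n_bits most significant bits: 256 levels -> 2**n_bits levels
    return pixel_8bit >> (8 - n_bits)

print(reduce_gray_levels(200))   # 200 on a 0-255 scale -> 12 on a 0-15 scale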
Problem 4:
For an image digitized at 128 points per line & 128 lines, determine a) the total number of bits needed to represent the gray level values if an 8-bit ADC is used to indicate various shades of gray & b) the reduction in data volume if only black & white values are digitized.
Solution:
a) For gray scale imaging with 2^8 = 256 levels of gray, 8 bits are needed per pixel:
number of bits = 128 X 128 X 8 = 131,072 bits
b) For black & white (binary bit conversion), 1 bit is needed per pixel:
number of bits = 128 X 128 X 1 = 16,384 bits
Reduction in data volume = 131,072 - 16,384 = 114,688 bits
WINDOWING involves using only a portion of the total image stored in the frame buffer for image processing & analysis. This portion is called the window.
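
A minimal sketch of windowing, assuming the frame is held as a list of pixel rows:

def window(frame, top, left, height, width):
    # Pass on only the pixels inside the rectangular window; everything
    # outside it is excluded from further processing
    return [row[left:left + width] for row in frame[top:top + height]]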

4.8.2 SEGMENTATION:
The objective is to group areas of an image having similar characteristics or features into distinct entities representing parts of the image. For example, boundaries (edges) or regions (areas) represent two natural segments of an image. There are many ways to segment an image. Three important techniques are: 1. THRESHOLDING, 2. REGION GROWING, 3. EDGE DETECTION.
THRESHOLDING is a binary conversion technique in which each pixel is converted into a binary value, either black or white. This is accomplished by utilizing a frequency histogram of the image & establishing the gray level that is to be the border between black & white. In the original image, each pixel has a specific gray tone out of the 256 possible gray levels. The histogram plots the frequency versus the gray level for the image. For histograms that are bimodal in shape, each peak of the histogram represents either the object itself or the background upon which the object rests. To differentiate between the object & the background, the procedure is to establish a threshold between the two peaks & assign each pixel to black or white according to which side of the threshold its gray level falls on.
It should be pointed out that the above method of using a histogram to determine a threshold is only one of a large number of ways to threshold an image. It is, however, the method used by many of the commercially available robot vision systems today. Such a method is said to use a global threshold for the entire image.
When it is not possible to find a single threshold for the entire image, one approach is to partition the total image into smaller rectangular areas & determine the threshold for each window being analyzed.
Thresholding is the most widely used technique for segmentation in industrial vision applications. The reasons are that it is fast & easily implemented & that the lighting is usually controllable in an industrial setting.
Once thresholding is established for a particular image, the next step is to identify particular areas associated with objects within the image. Such regions usually possess uniform pixel properties computed over the area. The pixel properties may be multidimensional; that is, there may be more than a single attribute that can be used to characterize the pixel (e.g. color & intensity).
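
A minimal sketch of global thresholding in Python: build the frequency histogram, pick a threshold gray level (here supplied by the caller, e.g. the valley of a bimodal histogram), and convert the image to binary.

def histogram(image, levels=256):
    h = [0] * levels
    for row in image:
        for p in row:
            h[p] += 1                    # frequency of each gray level
    return h

def threshold_image(image, t):
    # Binary conversion: pixels at or above the border gray level t
    # become white (1); all others become black (0)
    return [[1 if p >= t else 0 for p in row] for row in image]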

REGION GROWING is a collection of segmentation techniques in which pixels are grouped into regions, called grid elements, based on attribute similarities. Defined regions can then be examined as to whether they are independent or can be merged with other regions by means of an analysis of the difference in their average properties & spatial connectedness.
To differentiate between objects & the background, assign 1 to any grid element occupied by an object & 0 to background elements. It is common practice to use a square sampling grid with pixels spaced equally along each side of the grid.
For the 2D image of a key shown in the accompanying figure, this would give the pattern indicated. This technique of creating runs of 1s & 0s is often used as a first-pass analysis to partition the image into identifiable segments or blobs.
For a simple image such as a dark blob on a light background, a runs technique can provide useful information. For more complex images, this technique may not provide an adequate partition of the image into a set of meaningful regions. Such regions might contain pixels that are connected to each other & have similar attributes, for example, gray level. A typical region-growing technique for complex images could have the following procedure:
1. Select a pixel that meets a criterion for inclusion in a region. In the simplest case, this could mean selecting a white pixel & assigning it a value of 1.
2. Compare the pixel selected with all adjacent pixels. Assign an equivalent value to adjacent pixels if an attribute match occurs.
3. Go to an equivalent adjacent pixel & repeat the process until no equivalent pixels can be added to the region.

This simple procedure of growing regions around a pixel would be repeated until no new
regions can be added for the image.
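
A sketch of the procedure as a 4-connected region grower follows. It assumes the image is a 2D list of attribute values (for a binary image, 1s and 0s) and grows one region from a seed pixel:

def grow_region(image, seed_row, seed_col):
    target = image[seed_row][seed_col]       # attribute to match (step 1)
    region = {(seed_row, seed_col)}
    frontier = [(seed_row, seed_col)]
    while frontier:                          # repeat until no pixel can be added
        r, c = frontier.pop()
        # Step 2: compare with all adjacent pixels (4-connected neighbours)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(image) and 0 <= nc < len(image[0])
                    and (nr, nc) not in region and image[nr][nc] == target):
                region.add((nr, nc))         # attribute match: same region
                frontier.append((nr, nc))    # step 3: continue from this pixel
    return region

Repeating this from a fresh seed outside all existing regions partitions the whole image, as described above.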
The region-growing segmentation technique described here is applicable when images are not distinguishable from each other by straight thresholding or edge detection techniques. This sometimes occurs when the lighting of the scene cannot be adequately controlled.
In industrial robot vision systems, it is common practice to consider only edge detection or simple thresholding. This is due to the fact that lighting can be a controllable factor in an industrial setting & the hardware/computational implementation is simpler.

EDGE DETECTION considers the intensity change that occurs in the pixels at the boundary or edges of a part. Given that a region of similar attributes has been found but the boundary shape is unknown, the boundary can be determined by a simple edge-following procedure, illustrated here for a binary image.
For the binary image, the procedure is to scan the image until a pixel within the region is encountered. Then, for a pixel within the region, turn left & step; otherwise, turn right & step. The procedure is stopped when the boundary has been traversed & the path has returned to the starting pixel. The contour-following procedure described can be extended to gray-level images.
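
The turn-left/turn-right rule can be sketched as follows. One caveat: stopping the moment the start pixel is revisited can end the trace early on some shapes, so this sketch uses the slightly stronger condition of re-entering the start pixel with the initial heading. It assumes a single blob of 1s that does not touch the image border.

def follow_boundary(image, max_steps=10000):
    rows, cols = len(image), len(image[0])
    # Scan the image until a pixel within the region is encountered
    start = next((r, c) for r in range(rows)
                 for c in range(cols) if image[r][c])
    start_heading = heading = (-1, 0)            # initial heading: "up"
    pos, boundary = start, []
    for _ in range(max_steps):
        r, c = pos
        if 0 <= r < rows and 0 <= c < cols and image[r][c]:
            if pos not in boundary:
                boundary.append(pos)             # record each boundary pixel once
            heading = (-heading[1], heading[0])  # inside the region: turn left
        else:
            heading = (heading[1], -heading[0])  # outside: turn right
        pos = (pos[0] + heading[0], pos[1] + heading[1])   # step forward
        if pos == start and heading == start_heading:
            break                                # boundary fully traversed
    return boundary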

4.8.3 FEATURE EXTRACTION:

In machine vision applications, it is often necessary to distinguish one object from another. This is usually accomplished by means of features that uniquely characterize the object. Some features of objects that can be used in machine vision include area, diameter & perimeter.
A feature is a single parameter that permits ease of comparison & identification. The techniques available for the 2D case can be divided into those that deal with boundary features & those that deal with area features. The various features can be used to identify the object or part & determine the part's location &/or orientation.
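
Below is a sketch of extracting the basic area features named above from a binary image of a single object; the perimeter here is the crude count of object pixels touching the background, and the centroid serves as the part location.

def blob_features(binary):
    rows, cols = len(binary), len(binary[0])
    area = perimeter = 0
    sum_r = sum_c = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c]:
                area += 1                        # area: count of object pixels
                sum_r, sum_c = sum_r + r, sum_c + c
                # Perimeter: object pixel with at least one background
                # (or out-of-image) 4-neighbour
                if any(not (0 <= r + dr < rows and 0 <= c + dc < cols
                            and binary[r + dr][c + dc])
                       for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                    perimeter += 1
    centroid = (sum_r / area, sum_c / area) if area else None
    return {"area": area, "perimeter": perimeter, "centroid": centroid}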

4.8.4 OBJECT RECOGNITION:

The next step in image data processing is to identify the object that the image represents. The recognition algorithm must be powerful enough to uniquely identify the object. Object recognition techniques used in industry today may be classified into 2 major categories:
1. TEMPLATE-MATCHING techniques
2. STRUCTURAL techniques
TEMPLATE-MATCHING techniques are a subset of the more general statistical pattern recognition techniques that serve to classify objects in an image into predetermined categories. The basic problem in template matching is to match the object with a stored pattern feature set defined as a model template.
The procedure is based on the use of a sufficient number of features to minimize the frequency of errors in the classification process. The features of the object in the image (its area, diameter, aspect ratio, etc) are compared with the corresponding stored values. These values constitute the stored template. When a match is found, allowing for certain statistical variations in the comparison process, the object has been properly classified.
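
A sketch of the comparison step, assuming features are held as name/value pairs and the allowed statistical variation is expressed as a relative tolerance per feature (the 10% figure is illustrative):

def match_template(features, templates, tolerance=0.10):
    for name, model in templates.items():
        # The object is classified as 'name' if every stored feature value
        # is matched within the allowed variation
        if all(abs(features[k] - v) <= tolerance * abs(v)
               for k, v in model.items()):
            return name
    return None                                  # no template matched

templates = {"washer": {"area": 1000, "aspect_ratio": 1.0},
             "bolt":   {"area": 400,  "aspect_ratio": 4.0}}
print(match_template({"area": 980, "aspect_ratio": 1.02}, templates))  # washer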
STRUCTURAL TECHNIQUES of pattern recognition consider relationships between features or edges of an object. For example, if the image of an object can be subdivided into 4 straight lines connected at their end points, & the connected lines are at right angles, then the object is a rectangle. This kind of technique, known as syntactic pattern recognition, is the most widely used structural technique. Structural techniques differ from decision-theoretic techniques in that the latter deal with a pattern on a quantitative basis & for the most part ignore interrelationships among object primitives.
Complete pattern recognition can be computationally time consuming. Accordingly, it is often more appropriate to search for simpler regions or edges within an image. These simpler regions can then be used to extract the required features. The majority of commercial robot vision systems make use of this approach to the recognition of two-dimensional objects. The recognition algorithms are used to identify each segmented object in an image & assign it to a classification.

4.9 TRAINING THE VISION SYSTEM:

The purpose of training is to program the vision system with known objects. The system stores these objects in the form of extracted feature values, which can be subsequently compared against the corresponding feature values from images of unknown objects.
The training of the vision system should be carried out under conditions as close to operating conditions as possible. Physical parameters such as camera placement, aperture setting, part position & lighting are the critical conditions that should be simulated as closely as possible during the training session.
Vision system manufacturers have developed application software for each individual system marketed. The software is typically based on a high-level programming language; for example, Object Recognition Systems Inc. uses the C language & Automatix Inc. uses their internally developed language called RAIL. There are two versions of RAIL, one for automated vision systems & the other for robot programming.

IMPORTANT QUESTIONS:

1. Describe the desirable features of sensors. (5)

2. List the sensor devices used in a robot workcell & discuss any three types. (5)

3. Explain, with the principle of operation, the use of a force-sensing wrist for force sensing. (8)
4. Explain proximity and range sensors in detail; discuss any two types. (10)
5. What are tactile sensors? Discuss tactile sensors with examples. (5)
6. Write short notes on: i) eddy current sensor ii) magnetic field sensor. (5)
7. Write short notes on: i) optical sensor ii) acoustic sensor. (10)
8. Write short notes on: i) touch sensors ii) force sensors. (5)
9. Write short notes on: i) joint sensing ii) tactile array sensors. (5)
10. Explain briefly the uses of sensors in robotics. (8)
11. What do you understand by the term machine vision as used in robotics? Explain with a suitable block diagram the functions of a machine vision system. (10)
12. Discuss the important image processing & analysis techniques. (10)
13. Describe the sensing & digitizing function in machine vision. (10)
14. Explain with a neat sketch the principle & operation of the vidicon tube. (8)
15. With relevant diagrams, explain charge coupled devices. (8)
16. What is the necessity of a lighting system in a machine vision system? (5)
17. Explain the various lighting techniques. (10)
18. Discuss the various techniques used in the ADC process. (8)
19. Explain briefly the segmentation process in image processing & analysis. (10)
20. Discuss object recognition in detail. (5)

TEXT BOOKS:
1. Automation, Production Systems and Computer Integrated Manufacturing, M. P. Groover, Pearson Education, 5th edition, 2009.
2. Industrial Robotics: Technology, Programming and Applications, M. P. Groover, Weiss, Nagel, McGraw Hill International edition, 2012.
REFERENCE BOOKS:
1. Industrial Robotics: Technology, Programming and Applications, M. P. Groover, Weiss, Nagel, McGraw Hill International, 1st edition, 1986.
2. Automation, Production Systems and Computer Integrated Manufacturing, M. P. Groover, Pearson Education, 2nd edition, 2001.
3. Robotics: Control, Sensing, Vision and Intelligence, Fu, Lee and Gonzalez, McGraw Hill International, 2nd edition, 2007.

4. Robotics Engineering: An Integrated Approach, Klafter, Chmielewski and Negin, PHI, 1st edition, 2009.
5. Robotics (Industrial Robotics), P. Jaganathan, Lakshmi Publications, Chennai.

Prepared by: G.V. Raja, Sri Sairam College of Engineering, Anekal.
