Module 4
Syllabus:
MODULE – 1: AUTOMATION
History of Automation, Reasons for automation, Disadvantages of automation, Automation
systems, Types of automation – Fixed, Programmable and Flexible automation, Automation
strategies. Automated Manufacturing Systems: Components, classification and overview of
manufacturing Systems, Flexible Manufacturing Systems (FMS), Types of FMS, Applications
and benefits of FMS.
MODULE – 2: ROBOTICS
Definition of Robot: History Of Robotics, Robotics Market & Future Prospects, Robot
Anatomy, Robot Configurations: Polar, Cartesian, Cylindrical & Jointed-Arm Configuration.
Robot Motions, Joints, Work Volume, Robot Drive Systems, Precision Of Movement, Spatial
Resolution, Accuracy, Repeatability, End Effectors, Tools, Grippers
4. SPEED OF RESPONSE:
The transducer should be capable of responding to changes in the sensed variable in
minimum time. Ideally, the response would be instantaneous.
5. CALIBRATION:
The sensor should be easy to calibrate. The time and trouble required to accomplish the
calibration procedure should be minimum. Further, the sensor should not require frequent
recalibration. The term "drift" is commonly applied to denote the gradual loss in accuracy of
the sensor with time and use, which would necessitate recalibration.
6. RELIABILITY:
The sensor should possess a high reliability. It should not be subject to frequent failures
during operation.
7. COST AND EASE OF OPERATION:
The cost to purchase, install and operate the sensor should be as low as possible. Further,
ideally the installation and operation of the device would not require a specially trained, highly
skilled operator.
VACUUM SWITCHES:
A device used to indicate negative air pressure. It can be used with a vacuum gripper to
indicate the presence or absence of an object.
VISION SENSOR:
Advanced sensor system used in conjunction with pattern recognition and other techniques
to view and interpret events occurring in the robot workplace.
VOICE SENSORS:
Advanced sensor system used to communicate commands or information orally to the robot.
TOUCH SENSORS:
A simple touch sensor can indicate the presence or absence of parts in a fixture or at the
pickup point along a conveyor. It works like a light switch in a house. When the button is
pressed, an electrical circuit is closed inside the sensor and electricity flows. When the button
is released, the circuit is broken and no electricity flows. The controller (the RCX, in the
educational example here) can sense this flow of electricity, so it knows whether the touch
sensor is pressed or released. Touch sensors send a signal when physical contact is made.
A robot with six degrees of freedom would be capable of accessing surfaces on the part that
would be difficult for a three-axis coordinate measuring machine, the inspection system normally
considered for such an inspection task. Unfortunately, the robot's accuracy would be a limiting
factor in contact inspection work.
4.3.2 FORCE SENSORS:
The capability to measure forces permits the robot to perform a number of tasks. These
include the capability to grasp parts of different sizes in material handling, machine loading and
assembly work, applying the appropriate level of force for the given part.
In assembly applications, force sensing could be used to determine if the screws have
become cross-threaded or if the parts are jammed. A force sensor is used to measure the force
applied on the object by the robot's arm during operation. The force can be worked out by using a
force-sensing wrist or by measuring the torque exerted at the joints of the arm.
Force sensing can be accomplished in several ways. A commonly used technique is the force-
sensing wrist. This consists of a special load cell mounted between the gripper and the wrist.
Another technique is to measure the torque being exerted by each joint. This is usually
accomplished by sensing the motor current for each of the joint motors. A third technique is to
form an array of force-sensing elements so that the shape and other information about the
contact surface can be determined.
4.3.2.1 FORCE-SENSING WRIST:
The purpose is to provide information about the 3 components of force (Fx, Fy & Fz) & the 3
moments (Mx, My, Mz) being applied at the end effector. The device consists of a metal bracket
fastened to a rigid frame. The frame is mounted to the wrist of the robot & the tool is mounted to
the center of the bracket.
Since the forces are usually applied to the wrist in combinations, it is necessary to first
resolve the forces & moments into their six components. This kind of computation can be carried
out by the robot controller or by a specialized amplifier designed for this purpose. Based on these
calculations, the robot controller can obtain the required information on the forces & moments
being applied at the wrist. This information could be used for a number of applications.
As an example, an insertion operation requires that no side forces are applied to the peg. In
another example, the robot's end effector is required to follow along an edge or contour of an
irregular surface. This is called force accommodation. With this technique, certain forces are set
to zero while others are set to specific values. Using force accommodation, one could command the
robot to follow the edge or contour by maintaining a fixed velocity in one direction & fixed forces
in the other directions.
A robot equipped with a force-sensing wrist plus the proper computing capacity could be
programmed to accomplish these kinds of applications. The procedure would begin by deciding on
the desired force to be applied in each axis direction.
The controller would perform the following sequence of operations, with the resulting offset
force calculated (a code sketch follows the list):
1. Measure the forces at the wrist in each axis direction
2. Calculate the force offsets required. The force offset in each direction is determined by
subtracting the desired force from the measured force
3. Calculate the torques to be applied by each axis to generate the desired force offsets at
the wrist. These are moment calculations which take into account the combined
effects of the various joints & links of robot.
4. Then the robot must provide the torques calculated in step 3 so that the desired forces
are applied in each direction
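As an illustration only, the following Python sketch walks through steps 1-4 for a single control cycle. It assumes the standard static Jacobian-transpose relation between an end-effector wrench and joint torques; the Jacobian J and all example numbers are assumptions, not from the text.

```python
import numpy as np

def force_offset_step(J, measured_wrench, desired_wrench):
    """One cycle of the four-step force-offset sequence above (a sketch).

    J               -- 6xN manipulator Jacobian at the current pose (assumed)
    measured_wrench -- [Fx, Fy, Fz, Mx, My, Mz] read from the wrist sensor
    desired_wrench  -- wrench we want applied at the end effector
    """
    # Step 1: the forces/moments at the wrist are the measured wrench.
    # Step 2: force offset = measured force - desired force, per axis.
    offset = np.asarray(measured_wrench, float) - np.asarray(desired_wrench, float)
    # Step 3: joint torques that generate the offset wrench at the wrist;
    # under a static model this is the Jacobian-transpose mapping (assumption).
    tau = J.T @ offset
    # Step 4: the controller would command these torques to the joint motors.
    return tau

# Hypothetical 2-joint planar arm, illustrative numbers only.
J = np.array([[1.0, 0.5], [0.0, 1.0], [0, 0], [0, 0], [0, 0], [0.2, 0.1]])
print(force_offset_step(J, [5, 0, 0, 0, 0, 0.5], [2, 0, 0, 0, 0, 0]))  # [3.1 1.55]
```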
Force-sensing wrists are usually very rigid devices so that they will not deflect undesirably
while under load. When designing a force-sensing wrist there are several problems that may be
encountered. The end-of-the-arm is often in a relatively hostile environment.
This means that the device must be sufficiently rugged to withstand the environment, and
that it must be capable of tolerating an occasional crash of the robot arm. At the same time the
device must be sensitive enough to detect small forces. This design problem is usually solved by
using overtravel limits. An overtravel limit is a physical stop designed to prevent the force sensor
from deflecting so far that it would be damaged.
The calculations required to utilize a force-sensing wrist are complex & require considerable
computation time. Also, for an arm traveling at moderate to high speeds, the level of control over
the arm just as it makes contact with an object is limited by the dynamic performance of the arm.
The momentum of the arm makes it difficult to stop its forward motion quickly enough to
prevent a crash. The design of force sensors is a very complex process due to the redundancies
present in the force-sensing process itself. Force sensors are made using strain gauges that
measure the strain along particular axes.
It is to be noted that in 3D space we can have a total of 3 forces & 3 moments. Hence, a single
cantilever beam cannot measure all six components. A force sensor is usually made with 4 arms,
each of which responds to two forces.
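Compactly, the four arms yield eight strain readings, and a standard formulation (assumed here; the text does not spell it out) recovers the six wrench components through an experimentally determined calibration matrix:

```latex
\mathbf{W}
= \begin{bmatrix} F_x & F_y & F_z & M_x & M_y & M_z \end{bmatrix}^{T}
= \mathbf{C}\,\mathbf{s},
\qquad \mathbf{C} \in \mathbb{R}^{6 \times 8},\quad \mathbf{s} \in \mathbb{R}^{8}
```

Here s stacks the two strain-gauge readings from each of the four arms, and C is fixed once by calibration.

TACTILE ARRAY SENSORS: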
By measuring the resistance of each pad, information about the shape of the object against
the array of sensing elements can be determined. The figure illustrates the operation of a tactile
array sensor with an 8 X 8 matrix of pressure-sensitive pads. In the background is the CRT monitor
display of the tactile impression made by the object placed on the surface of the sensor device; as
the number of pads in the array is increased, the resolution of the displayed information improves.
In the triangulation range-sensing arrangement, Y is the lateral distance between the light
source & the reflected light beam incident on the linear array. This distance corresponds to the
number of elements contained within the reflected beam in the sensor array.
defects. For example, a sensor probe designed to measure part length cannot detect flaws in the
part surface.
i) IMAGE SENSING AND DIGITIZING:
It involves the input of vision data by means of a camera focused on the scene of interest.
Special lighting techniques are used to obtain an image with sufficient contrast for subsequent
processing. The image is digitized and stored in computer memory. The digital image is called a
FRAME of VISION DATA and is frequently captured by a hardware device called a FRAME GRABBER.
These devices are capable of digitizing images at 30 frames per second.
Each frame consists of a matrix of data representing projections of the scene sensed by the
camera. The elements of the matrix are called PIXELS (picture elements). The number of pixels is
determined by a sampling process performed on each image frame. A single pixel is the projection
of a small portion of the scene, which reduces that portion to a single value. The value is a measure
of the light intensity for that element. Each pixel intensity is converted into a digital value.
ii) IMAGE PROCESSING AND ANALYSIS:
The digitized image for each frame is stored and then subjected to image processing and
analysis functions for data reduction and interpretation of the image. These steps are required in
order to permit the real-time application of vision analysis required in robotic applications.
Typically an image frame will be thresholded to produce a binary image, and then various feature
measurements will further reduce the data representation of the image. This data reduction can
change the representation of a frame from several hundred thousand bytes of raw image data to
several hundred bytes of feature-value data. The resultant feature data can be analyzed in the
available time for action by the robot system.
Various techniques to compute the feature values can be programmed into the computer to
obtain feature descriptors of the image, which are matched with previously computed values
stored in the computer. These descriptors include shape and size characteristics that can be readily
calculated from the thresholded image matrix. To accomplish image processing and analysis, the
vision system frequently must be trained. In training, information is obtained on objects and stored
as computer models. The information consists of features such as the area of the object, its
perimeter length, major and minor diameters, and similar features.
During subsequent operation, feature values computed on unknown objects viewed by the
camera are compared with the models to determine if a match has occurred.
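A minimal sketch of this train-and-match idea, assuming a simple per-feature tolerance rule. The object names, feature values and the 10% tolerance are illustrative assumptions, not from the text.

```python
# Trained models: feature values recorded for known objects (illustrative numbers).
MODELS = {
    "bracket": {"area": 5200.0, "perimeter": 410.0, "major_dia": 120.0, "minor_dia": 60.0},
    "washer":  {"area": 1800.0, "perimeter": 200.0, "major_dia": 55.0,  "minor_dia": 55.0},
}

def classify(features, tolerance=0.10):
    """Compare features of an unknown object against the stored models.

    A match is declared when every feature lies within `tolerance`
    (fractional deviation -- an assumed matching rule) of the model value.
    """
    for name, model in MODELS.items():
        if all(abs(features[k] - v) <= tolerance * v for k, v in model.items()):
            return name
    return None  # no match within the allowed statistical variation

print(classify({"area": 5100.0, "perimeter": 405.0,
                "major_dia": 118.0, "minor_dia": 62.0}))   # -> bracket
```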
iii) ROBOT APPLICATIONS:
The current applications of machine vision in robotics include inspection, part identification,
location and orientation. Research is ongoing into advanced applications for use in complex
inspection, guidance and navigation. Vision systems can be classified in a number of ways, for
example into two-dimensional or three-dimensional systems. 2D tasks include checking the
dimensions of a part or verifying the presence of components on a subassembly. Many 2D vision
systems can operate on a binary image, which is the result of a simple thresholding technique. This
is based on an assumed high contrast between the object and the background, achieved by a
suitably controlled lighting system.
3D vision systems may require special lighting techniques and more sophisticated image-
processing algorithms to analyze the image. Some systems require 2 cameras in order to achieve a
stereoscopic view, while other 3D systems rely on the use of structured light and optical
triangulation techniques with a single camera. Another way of classifying vision systems is
according to the number of gray levels used to characterize the image. In a binary image, the gray
levels are divided into either of 2 categories, black or white.
The first step is the capture of the image by the camera, which produces an analog video
signal that must be converted into a digital form. The second step, digitizing, is achieved by an
ADC; the ADC is either part of a digital video camera or the front end of the frame grabber.
The choice is dependent on the type of hardware in the system. The frame grabber,
representing the third step, is an image storage and computation device which stores a given pixel
array. Frame grabbers vary in capability from those which simply store an image to those with
significant computation capability. In the more powerful frame grabbers, thresholding, windowing,
and histogram modification calculations can be carried out under computer control. The stored
image is then subsequently processed and analyzed by the combination of the frame grabber and
the vision controller.
4.6.1 IMAGING DEVICES:
Camera technologies available include the older black-and-white vidicon camera and the
newer, second-generation, solid-state cameras. Solid-state cameras used for robot vision include
CCD, CID and silicon bipolar sensor cameras.
VIDICON CAMERA:
In the operation of this system, the lens forms an image on the glass faceplate of the
camera. The faceplate has an inner surface which is coated with two layers of material. The first
layer consists of a transparent signal electrode film deposited on the inner surface of the
faceplate. The second layer is a thin photosensitive material deposited over the conducting film.
The photosensitive layer consists of a high density of small areas. These areas are similar to pixels.
Each area generates a decreasing electrical resistance in response to increasing
illumination. A charge is created in each small area upon illumination. An electrical charge pattern
is generated corresponding to the image formed on the faceplate. The charge accumulated for an
area is a function of the intensity of the impinging light over a specified time.
Once a light-sensitive charge is built up, this charge is read out to produce a video signal.
This is accomplished by scanning the layer with an electron beam. The scanning is controlled by a
deflection coil mounted along the length of the tube.
For an accumulated positive charge, the electron beam deposits enough electrons to
neutralize the charge. An equal number of electrons flow to cause a current at the video signal
electrode. The magnitude of the signal is proportional to the light intensity and the amount of time
with which an area is scanned.
The current is then directed through a load resistor which develops a signal voltage, which is
further amplified and analyzed. Raster scanning eliminates the need to consider the time at each
area by making the scan time the same for all areas; only the intensity of the impinging light is
considered.
Raster scanning is typically done by scanning the electron beam from left to right and top to
bottom. The process is designed to start the integration with zero accumulated charge. For a fixed
scan time, the charge accumulated is proportional to the intensity of that portion of the image being
considered. The output of the camera is a continuous voltage signal for each line scanned. The
voltage signal for each scan line is subsequently sampled and quantized, resulting in a series of
sampled voltages being stored in digital memory.
This ADC process for the complete screen results in a two-dimensional array of pixels.
Typically, a single pixel is quantized to between six and eight bits by the ADC.
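To make the sample-and-quantize step concrete, here is a small Python sketch. The scan-line voltage is modeled as an arbitrary callable, and the sample count, bit depth and full-scale voltage are assumed illustrative defaults, not values from the text.

```python
import numpy as np

def digitize_scan_line(voltage, n_samples=128, n_bits=8, v_max=1.0):
    """Sample a continuous scan-line voltage and quantize it into pixels.

    voltage   -- callable giving the camera output voltage at time t in [0, 1)
                 (a stand-in for one scanned line; assumed, not from the text)
    n_samples -- pixels per line produced by the sampling process
    n_bits    -- ADC resolution; six to eight bits is typical per the text
    """
    t = np.arange(n_samples) / n_samples                 # uniform sample instants
    v = np.clip(np.array([voltage(x) for x in t]), 0.0, v_max)
    levels = 2 ** n_bits                                 # 2^n quantization levels
    return np.minimum((v / v_max * levels).astype(int), levels - 1)

# Example: a made-up scan line that brightens linearly from left to right.
print(digitize_scan_line(lambda t: t, n_samples=8))      # [  0  32  64  96 128 160 192 224]
```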
4.6.3 CHARGE-COUPLED DEVICE:
In this technology, the image is projected by a video camera onto the CCD, which detects,
stores, and reads out the accumulated charge generated by the light on each portion of the image.
Light detection occurs through the absorption of light on a photoconductive substrate. Charges
accumulate under positive control electrodes in isolated wells due to voltages applied to the
control electrodes. Each isolated well represents one pixel & can be transferred to output storage
registers by varying the voltages on the metal control electrodes.
4.7.2 QUANTIZATION:
Each sampled discrete-time voltage level is assigned to one of a finite number of defined
amplitude levels. These amplitude levels correspond to the gray scale used in the system. The
predefined amplitude levels are characteristic of a particular ADC & consist of a set of discrete
values of the voltage levels. The number of quantization levels is 2^n, where n is the number of
bits of the ADC. A larger number of bits enables a signal to be represented more precisely.
For example, an 8-bit converter would allow us to quantize at 2^8 = 256 different values,
whereas 4 bits would allow only 2^4 = 16 different levels.
4.7.3 ENCODING:
The process by which the quantized amplitude levels are changed into digital code is called
ENCODING. This involves representing an amplitude level by a binary digit sequence. The ability of
the encoding process to distinguish between various amplitude levels is a function of the spacing of
each quantization level.
Quantizing level spacing = full-scale range / 2^n
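A short sketch tying quantization and encoding together. The 10 V full-scale range and the 4.8 V sample are assumed example values, not from the text.

```python
def encode_sample(v, full_scale=10.0, n_bits=8):
    """Quantize one sampled voltage and encode it as a binary digit sequence.

    full_scale -- ADC input range in volts (10 V is an assumed example value)
    n_bits     -- ADC resolution; spacing = full_scale / 2**n_bits
    """
    spacing = full_scale / (2 ** n_bits)                 # quantizing level spacing
    level = min(int(v / spacing), 2 ** n_bits - 1)       # amplitude level assigned
    code = format(level, f"0{n_bits}b")                  # encoding: binary sequence
    return level, code, spacing

level, code, spacing = encode_sample(4.8)
print(level, code, f"{spacing * 1000:.2f} mV")           # 122 01111010 39.06 mV
```

For the assumed 10 V range and an 8-bit ADC, the quantizing level spacing works out to 10/256, roughly 39 mV, as the printed output shows.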
The image processing and analysis techniques covered here are: 1. image data reduction, 2. segmentation, 3. feature extraction, 4. object recognition.
4.8.1 IMAGE DATA REDUCTION:
The objective is to reduce the volume of data as a primary step in the data analysis. The
following two schemes have found common use: 1. DIGITAL CONVERSION, 2. WINDOWING. The
function of both schemes is to eliminate the bottleneck that can occur from the large volume of
data in image processing.
Digital conversion reduces the number of gray levels used by the machine vision system. For
example, an 8-bit register used for each pixel would allow 2^8 = 256 gray levels. Depending on the
requirements of the application, digital conversion can be used to reduce the number of gray levels
by using fewer bits to represent the pixel light intensity. Using 4 bits would reduce the number of
gray levels to 2^4 = 16. This kind of conversion would significantly reduce the magnitude of the
image-processing problem.
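A minimal numpy sketch of this digital conversion, keeping only the most significant bits of each 8-bit pixel; the sample pixel values are made up.

```python
import numpy as np

def reduce_gray_levels(image, n_bits=4):
    """Digital conversion: keep only the n_bits most significant bits per pixel.

    An 8-bit image has 2^8 = 256 gray levels; n_bits=4 reduces this to 16,
    shrinking the image-processing problem as described above.
    """
    shift = 8 - n_bits
    return (image >> shift).astype(np.uint8)             # 0..255 -> 0..2^n_bits - 1

frame = np.array([[0, 37, 128, 255]], dtype=np.uint8)    # made-up pixel values
print(reduce_gray_levels(frame))                         # [[ 0  2  8 15]]
```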
PROBLEM 4:
For an image digitized at 128 points per line & 128 lines, determine a) the total number
of bits to represent the gray-level values required if an 8-bit ADC is used to indicate various
shades of gray & b) the reduction in data volume if only black & white values are digitized.
Solution
a) For gray-scale imaging with 2^8 = 256 levels of gray, each pixel requires 8 bits:
number of bits = 128 X 128 X 8 = 131,072 bits
b) For black & white (binary bit conversion), each pixel requires only 1 bit:
number of bits = 128 X 128 X 1 = 16,384 bits
reduction in data volume = 131,072 - 16,384 = 114,688 bits
Windowing involves using only a portion of the total image stored in the frame buffer for
image processing & analysis. This portion is called the window.
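A numpy sketch of windowing; the frame size and window coordinates are illustrative assumptions.

```python
import numpy as np

def window(frame, row, col, height, width):
    """Windowing: process only a rectangular portion of the stored frame.

    Returns a view of the frame buffer, so no pixel data is copied --
    subsequent analysis touches far fewer bytes than the full image.
    """
    return frame[row:row + height, col:col + width]

frame = np.zeros((512, 512), dtype=np.uint8)             # full image in the frame buffer
roi = window(frame, 200, 140, 64, 64)                    # e.g. the area around one part
print(roi.shape)                                         # (64, 64)
```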
4.8.2 SEGMENTATION:
The objective is to group areas of an image having similar characteristics or features into
distinct entities representing parts of the image. For example, boundaries (edges) or regions (areas)
represent two natural segments of an image. There are many ways to segment an image. Three
important techniques are 1. THRESHOLDING, 2. REGION GROWING, 3. EDGE DETECTION.
THRESHOLDING is a binary conversion technique in which each pixel is converted into a
binary value, either black or white. This is accomplished by utilizing a frequency histogram of the
image & establishing the gray level that is to be the border between black & white. The figure
shows a regular image with each pixel having a specific gray tone out of 256 possible gray levels.
The histogram plots the frequency versus the gray level for the image. For histograms that are
bimodal in shape, each peak of the histogram represents either the object itself or the background
upon which the object rests. To differentiate between the object & the background, the procedure
is to establish a threshold & assign each pixel on one side of it to one category (say, white for the
object) and each pixel on the other side to the other category (black for the background).
It should be pointed out that the above method of using a histogram to determine a
threshold is only one of a large number of ways to threshold an image. It is, however, the method
used by many of the commercially available robot vision systems today. Such a method is said to
use a global threshold for the entire image.
When it is not possible to find a single threshold for the entire image, one approach is to
partition the total image into smaller rectangular areas & determine the threshold for each window
being analyzed.
Thresholding is the most widely used technique for segmentation in industrial vision
applications. The reasons are that it is fast & easily implemented & that the lighting is usually
controllable in an industrial setting.
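As an illustration, a minimal numpy sketch of global thresholding. Estimating the valley of the bimodal histogram by the intensity-weighted mean gray level is a simplifying assumption (commercial systems locate the valley explicitly), and the pixel values are made up.

```python
import numpy as np

def global_threshold(image):
    """Binary conversion using the image's gray-level frequency histogram.

    The threshold approximates the valley between the two peaks of a
    bimodal histogram by the intensity-weighted mean (an assumption).
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    t = int(np.average(np.arange(256), weights=hist))    # assumed valley estimate
    return (image >= t).astype(np.uint8)                 # 1 = object/white, 0 = black

img = np.array([[10, 12, 200], [11, 205, 210]], dtype=np.uint8)
print(global_threshold(img))                             # [[0 0 1] [0 1 1]]
```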
Once thresholding is established for a particular image, the next step is to identify particular
areas associated with objects within the image. Such regions usually possess uniform pixel
properties computed over the area. The pixel properties may be multidimensional; that is, there
may be more than a single attribute that can be used to characterize the pixel (e.g., color & intensity).
A simple region-growing procedure is to compare each unassigned pixel with its neighbors
and add it to a neighbor's region when their attributes match; this growing of regions around a
pixel would be repeated until no new regions can be added for the image (see the sketch after this
paragraph).
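A compact Python sketch of region growing, assuming 4-neighbour connectivity and an intensity-difference tolerance as the matching rule (both assumptions; the text leaves the rule open).

```python
import numpy as np
from collections import deque

def grow_regions(image, tol=10):
    """Label connected regions of similar intensity (assumed 4-neighbour rule).

    A pixel joins its neighbour's region when the intensity difference is
    within `tol`; growing repeats until no new pixels can be added.
    """
    labels = np.zeros(image.shape, dtype=int)
    next_label = 0
    for seed in np.ndindex(image.shape):
        if labels[seed]:
            continue                       # pixel already belongs to a region
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                       # grow the region around the seed
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and not labels[nr, nc]
                        and abs(int(image[nr, nc]) - int(image[r, c])) <= tol):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
    return labels
```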
The region-growing segmentation technique described here is applicable when images are
not distinguishable from each other by straight thresholding or edge detection techniques. This
sometimes occurs when the lighting of the scene cannot be adequately controlled.
In industrial robot vision systems, it is common practice to consider only edge detection or
simple thresholding. This is due to the fact that lighting can be a controllable factor in an industrial
setting & the hardware/computational implementation is simpler.
EDGE DETECTION considers the intensity change that occurs in the pixels at the boundary or
edges of a part. Given that a region of similar attributes has been found but the boundary shape is
unknown, the boundary can be determined by a simple edge-following procedure. This can be
illustrated by the schematic of a binary image as shown in the diagram.
For the binary image, the procedure is to scan the image until a pixel within the region is
encountered. For a pixel within the region, turn left & step; otherwise, turn right & step. The
procedure is stopped when the boundary has been traversed & the path has returned to the
starting pixel (a code sketch follows). The contour-following procedure described can be extended
to gray-level images.
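The turn rule translates almost directly into code. Here is a Python sketch under assumptions the text leaves open: rows grow downward, the trace starts on a region pixel found by scanning, and the initial heading points along the scan direction.

```python
def follow_boundary(image, start, heading=(0, 1)):
    """Trace the boundary of a binary region with the turn rule above.

    Inside a region pixel: turn left & step; otherwise turn right & step.
    Stops when the path returns to the starting pixel.
    """
    def left(d):  return (-d[1], d[0])     # 90-degree left turn (rows grow down)
    def right(d): return (d[1], -d[0])     # 90-degree right turn
    def inside(p):
        r, c = p
        return 0 <= r < len(image) and 0 <= c < len(image[0]) and image[r][c]

    path, pos = [], start
    while True:
        heading = left(heading) if inside(pos) else right(heading)
        pos = (pos[0] + heading[0], pos[1] + heading[1])
        path.append(pos)
        if pos == start:                   # boundary fully traversed
            return path

# Made-up 4x4 binary image with a 2x2 region; the trace circles its boundary.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(follow_boundary(img, (1, 1)))
```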
In template-matching techniques, the feature values computed for the unknown object (e.g., area,
diameter, aspect ratio) are compared to the corresponding stored values. These values constitute
the stored template. When a match is found, allowing for certain statistical variations in the
comparison process, the object has been properly classified.
STRUCTURAL TECHNIQUES of pattern recognition consider relationships between features or
edges of an object. For example, if the image of an object can be subdivided into 4 straight lines
connected at their end points & the connected lines are at right angles, then the object is a
rectangle. This kind of technique, known as syntactic pattern recognition, is the most widely used
structural technique.
Structural techniques differ from decision-theoretic techniques in that the latter deal with a
pattern on a quantitative basis & ignore for the most part the interrelationships among object
primitives.
Complete pattern recognition can be computationally time-consuming. Accordingly, it is
often more appropriate to search for simpler regions or edges within an image. These simpler
regions can then be used to extract the required features. The majority of commercial robot vision
systems make use of this approach to the recognition of two-dimensional objects. The recognition
algorithms are used to identify each segmented object in an image & assign it to a classification.
IMPORTANT QUESTIONS:
3. Explain, with its principle of operation, the use of a force-sensing wrist for force sensing. [8]
4. Explain proximity and range sensors in detail, and discuss any two types. [10]
5. What are tactile sensors? Discuss tactile sensors with an example. [5]
6. Write short notes on i) eddy-current sensors ii) magnetic field sensors. [5]
7. Write short notes on i) optical sensors ii) acoustic sensors. [10]
8. Write short notes on i) touch sensors ii) force sensors. [5]
9. Write short notes on i) joint sensing ii) tactile array sensors. [5]
10. Explain briefly the uses of sensors in robotics. [8]
11. What do you understand by the term machine vision as used in robotics? Explain with a
suitable block diagram the functions of a machine vision system. [10]
12. Discuss the important image processing & analysis techniques. [10]
13. Describe the sensing & digitizing functions in machine vision. [10]
14. Explain with a neat sketch the principle & operation of the vidicon tube. [8]
15. With relevant diagrams, explain charge-coupled devices. [8]
16. What is the necessity of a lighting system in a machine vision system? [5]
17. Explain the various lighting techniques. [10]
18. Discuss the various techniques used in the ADC process. [8]
19. Explain briefly the segmentation process in image processing & analysis. [10]
20. Discuss object recognition in detail. [5]
TEXT BOOKS:
1. Automation, Production Systems and Computer Integrated Manufacturing, M. P. Groover,
Pearson Education, 5th edition, 2009.
2. Industrial Robotics: Technology, Programming and Applications, M. P. Groover, Weiss and
Nagel, McGraw Hill, international edition, 2012.
REFERENCE BOOKS:
1. Industrial Robotics: Technology, Programming and Applications, M. P. Groover, Weiss and
Nagel, McGraw Hill International, 1st edition, 1986.
2. Automation, Production Systems and Computer Integrated Manufacturing, M. P. Groover,
Pearson Education, 2nd edition, 2001.
3. Robotics: Control, Sensing, Vision and Intelligence, Fu, Lee and Gonzalez, McGraw Hill
International, 2nd edition, 2007.
4. Robotics Engineering: An Integrated Approach, Klafter, Chmielewski and Negin, PHI, 1st
edition, 2009.
5. Robotics (Industrial Robotics), P. Jaganathan, Lakshmi Publications, Chennai.