Final Module 2
Module – 2 Robotics and Automation
4. SPEED OF RESPONSE:
The transducer should be capable of responding to changes in the sensed variable in
minimum time. Ideally, the response would be instantaneous.
5. CALIBRATION:
The sensor should be easy to calibrate. The time and trouble required to accomplish the
calibration procedure should be minimal. Further, the sensor should not require frequent
recalibration. The term "drift" is commonly applied to denote the gradual loss in accuracy of
the sensor with time and use, which would necessitate recalibration.
6. RELIABILITY:
The sensor should possess a high reliability. It should not be subject to frequent failures
during operation.
7. COST AND EASE OF OPERATION:
The cost to purchase, install and operate the sensor should be as low as possible. Further, the
ideal circumstance would be that installation and operation of the device do not require a
specially trained, highly skilled operator.
SENSORS IN ROBOTICS:
The sensors used in robotics mainly for interaction with the environment include a wide
range of devices which can be divided into the following general categories:
1. TACTILE SENSORS
2. PROXIMITY AND RANGE SENSORS
3. MISCELLANEOUS SENSORS AND SENSOR- BASED SYSTEMS
4. MACHINE VISION SYSTEMS
SENSOR DEVICES USED IN ROBOT WORK CELLS:
AMMETER:-
Electrical meter used to measure electrical current
EDDY CURRENT DETECTORS:- (PROXIMITY SENSOR)
Device that emits an alternating magnetic field at the tip of the probe, which induces eddy
currents in any conductive object in the range of the device. It can be used to indicate presence or
absence of a conductive object.
INFRARED SENSOR:- (PROXIMITY SENSOR)
Transducer which measures temperature from the infrared light emitted by the surface of
an object. It can be used to indicate the presence or absence of a hot object.
VACUUM SWITCHES:-
Device used to indicate negative air pressure. Can be used with a vacuum gripper to indicate
the presence or absence of an object.
VISION SENSOR:
Advanced sensor system used in conjunction with pattern recognition and other techniques
to view and interpret events occurring in the robot workplace.
VOICE SENSORS:
Advanced sensor system used to communicate commands or information orally to the robot.
TACTILE SENSORS:
Devices which indicate contact between themselves and some other solid object. A tactile
sensor is a collection of touch sensors embedded between two polymer layers separated by an
isolator mesh.
Two layers of the film are used and are separated by a soft film which transmits vibration.
The lower PVDF film has an alternating voltage applied to it and this results in mechanical
oscillations of the film. The intermediate film transmits these vibrations to the upper PVDF film.
When pressure is applied to the upper PVDF film, these vibrations cause an alternating voltage
to be produced in it.
A tactile sensor is a pressure sensor used on the fingertips of robotic hands to determine
when a hand has come into contact with an object. The sensor is activated by a single touch and is
used in areas that pose positioning problems. A tactile sensor is capable of detecting:
Presence of objects
Shape, location and orientation of the object
Contact area and the pressure at that point
Magnitude, location and direction of the force
Magnitude, plane and direction of moments
Tactile sensors can be divided into two classes: 1) TOUCH sensors, 2) FORCE sensors. Touch sensors provide
a binary output signal which indicates whether or not contact has been made with the object. Force
sensors indicate not only that contact has been made with the object but also the magnitude of the
contact force between the two objects.
TOUCH SENSORS:-
Touch sensors are used to indicate that contact has been made between two objects without regard to the
magnitude of the contacting force, e.g. limit switches and micro switches. They can be used to indicate
the presence or absence of parts in a fixture or at the pickup point along a conveyor. A touch
sensor works like a light switch in a house. When the button is pressed, an electrical circuit is
closed inside the sensor and electricity flows; when the button is released, the circuit is broken
and no electricity flows. The RCX can sense this flow of electricity, so it knows if the touch
sensor is pressed or released. Touch sensors send a signal when physical contact is made.
A robot with 6 degrees of freedom would be capable of accessing surfaces on a part that
would be difficult for a 3-axis coordinate measuring machine, the inspection system normally
considered for such an inspection task. Unfortunately, the robot's accuracy would be the limiting
factor in contact inspection work.
FORCE SENSORS:
The capability to measure forces permits the robot to perform a number of tasks. These
include the capability to grasp parts of different sizes in material handling, machine loading and
assembly work, applying the appropriate level of force for the given part.
In assembly applications, force sensing could be used to determine if screws have
become cross-threaded or if parts are jammed. A force sensor is used to measure the force applied
on an object by the robot's arm during operation. The force can be worked out by using a
force-sensing wrist or by measuring the torque exerted at the joints of the arm.
Force sensing can be accomplished in several ways. A commonly used technique is the
force-sensing wrist, which consists of a special load cell mounted between the gripper and the
wrist. Another technique is to measure the torque being exerted by each joint; this is usually
accomplished by sensing the motor current for each of the joint motors. A third technique is to
form an array of force-sensing elements so that the shape and other information about the
contact surface can be determined.
FORCE-SENSING WRIST:
The purpose is to provide information about the 3 components of force (Fx, Fy, Fz) and the 3
moments (Mx, My, Mz) being applied at the end effector. The device consists of a metal bracket
fastened to a rigid frame. The frame is mounted to the wrist of the robot and the tool is mounted
to the center of the bracket.
Since the forces are usually applied to the wrist in combinations, it is necessary to first
resolve them into their six force and moment components. This kind of computation can be carried
out by the robot controller or by a specialized amplifier designed for the purpose. Based on these
calculations, the robot controller obtains the required information about the forces and moments
being applied at the wrist. This information can be used in a number of applications.
As an example, an insertion operation requires that no side forces be applied to the peg.
In another example, the robot's end effector is required to follow along an edge or contour of
an irregular surface. This is called force accommodation. With this technique, certain forces are
set to zero while others are set to specific values. Using force accommodation, one could command
the robot to follow the edge or contour by maintaining a fixed velocity in one direction and fixed
forces in the other directions.
A robot equipped with a force-sensing wrist and the proper computing capacity could be
programmed to accomplish these kinds of applications. The procedure begins by deciding on
the desired force to be applied in each axis direction. The controller then performs the following
sequence of operations, with the resulting offset forces calculated:
1. Measure the forces at the wrist in each axis direction
2. Calculate the force offsets required. The force offset in each direction is determined by
subtracting the desired force from the measured force.
3. Calculate the torques to be applied by each axis to generate the desired force offsets at
the wrist. These are moment calculations which take into account the combined
effects of the various joints and links of the robot.
4. The robot must then provide the torques calculated in step 3 so that the desired forces
are applied in each direction.
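The sequence above can be sketched in code. The two-joint planar Jacobian, the force values and the function names below are illustrative assumptions, not taken from these notes; the torque step uses the standard relation tau = J^T F.

```python
# Sketch of the force-offset control loop described above (steps 1-3);
# step 4 would apply the resulting joint torques. All numbers invented.
import numpy as np

def force_offsets(measured, desired):
    """Step 2: offset in each axis = measured force - desired force."""
    return np.asarray(measured, float) - np.asarray(desired, float)

def joint_torques(jacobian, wrist_forces):
    """Step 3: joint torques producing the given wrist forces (tau = J^T F)."""
    return jacobian.T @ np.asarray(wrist_forces, float)

# Example: a planar 2-joint arm with a simple illustrative Jacobian
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
measured = [12.0, 3.0]   # N, forces sensed at the wrist (step 1)
desired = [10.0, 0.0]    # N, forces we want to apply
offsets = force_offsets(measured, desired)
tau = joint_torques(J, offsets)   # torques the robot must supply (step 4)
print(offsets, tau)
```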
Force-sensing wrists are usually very rigid devices, so that they do not deflect undesirably
while under load. Several problems may be encountered when designing a force-sensing wrist. The
end-of-arm is often in a relatively hostile environment, which means that the device must be
sufficiently rugged to withstand that environment and be capable of tolerating an occasional crash
of the robot arm. At the same time, the device must be sensitive enough to detect small forces.
This design problem is usually solved by using overtravel limits. An overtravel limit is a
physical stop designed to prevent the force sensor from deflecting so far that it would be damaged.
The calculations required to utilize a force-sensing wrist are complex and require considerable
computation time. Also, for an arm traveling at moderate to high speeds, the level of control over
the arm just as it makes contact with an object is limited by the dynamic performance of the arm.
The momentum of the arm makes it difficult to stop its forward motion quickly enough to
prevent a crash. The design of force sensors is itself a complex process due to the redundancies
present in the force-sensing process. Force sensors are made using strain gauges that measure
the strain along particular axes.
It is to be noted that in 3D space there are a total of 3 forces and 3 moments. Hence, a single
cantilever beam cannot measure all six components. A force sensor is usually made with 4
arms, each of which responds to two forces.
By measuring the resistance of each pad, information about the shape of the object pressed
against the array of sensing elements can be determined. A typical tactile array sensor operates
with an 8x8 matrix of pressure-sensitive pads; a CRT monitor can display the tactile impression
made by an object placed on the surface of the sensor device. As the number of pads in the array
is increased, the resolution of the displayed information improves.
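A minimal sketch of how such an array might be read, assuming invented resistance values and an arbitrary contact threshold:

```python
# Reading an 8x8 tactile array: threshold pad resistances into a binary
# contact image, then extract crude shape information. All values invented.
resistances = [[100e3] * 8 for _ in range(8)]   # ohms; high = no pressure
# Pretend an object presses on a 3x3 patch, lowering resistance there
for r in range(2, 5):
    for c in range(3, 6):
        resistances[r][c] = 5e3

THRESHOLD = 50e3   # ohms; below this we call the pad "in contact"
contact = [[1 if resistances[r][c] < THRESHOLD else 0 for c in range(8)]
           for r in range(8)]

# Crude shape information: pads in contact and their bounding box
pads = [(r, c) for r in range(8) for c in range(8) if contact[r][c]]
area = len(pads)
rows = [r for r, _ in pads]
cols = [c for _, c in pads]
bbox = (min(rows), min(cols), max(rows), max(cols))
print(area, bbox)
```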
Y is the lateral distance between the light source and the reflected light beam measured
against the linear array. This distance corresponds to the number of elements contained within
the reflected beam in the sensor array.
ACOUSTICAL DEVICES:
Acoustical devices can be used as proximity sensors. Ultrasonic frequencies above 20,000 Hz
are often used, since such sound is beyond the range of human hearing. One type of acoustical
proximity sensor uses a cylindrical open-ended chamber with an acoustic emitter at the closed end
of the chamber. The emitter sets up a pattern of standing waves in the cavity which is altered by
the presence of an object near the open end. A microphone located in the wall of the chamber is
used to sense the change in the sound pattern. This kind of device can also be used as a range sensor.
ELECTRICAL FIELD:
These sensors fall into two categories: 1) eddy current sensors and 2) magnetic sensors.
EDDY CURRENT SENSOR:
An eddy current sensor creates a primary alternating magnetic field in a small region near the
probe. This field induces eddy currents in an object placed in the region, so long as the object is
made of a conductive material. These eddy currents produce their own magnetic field, which
interacts with the primary field to change the flux density. The probe detects the change in flux
density, and this indicates the presence of the object.
USE OF SENSOR:
Major uses of sensors in industrial robotics and other automated manufacturing systems can be
divided into 4 basic categories:
1. Safety monitoring
2. Interlocks in work cell control
3. Part inspection for quality control
4. Determining positions and related information about objects in the robot cell
1. SAFETY MONITORING:
This concerns the protection of human workers who work close to the robot or other
equipment. There are 3 occasions when humans are close enough to the machine to be exposed to danger:
a) During programming of the robot
b) During the operation of the robot cell when human work in the cell
c) During maintenance of the robot
The types of risk encountered during these times include physical injury from collision
between the human and the robot, electrical shock, objects dropped from the gripper, and loose
power cables or hydraulic lines on the floor. Proper grounding prevents shocks, and raised floor
platforms can be used to cover power cables and hydraulic lines.
2. INTERLOCKS IN WORKCELL CONTROL:
Interlocks are used to coordinate the sequence of activities of the different pieces of
equipment in the work cell. In the execution of the program, there are certain elements of the work
cycle whose completion must be verified before proceeding with the next element in the cycle.
Sensors are often utilized to provide this kind of verification.
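The idea can be sketched as follows: each work-cycle element runs only after the sensor verifying the previous element reports completion. The sensor names and the polling loop are illustrative assumptions.

```python
# Interlock sketch: verify each work-cycle element before the next one.
import time

def wait_for(sensor, timeout=5.0, poll=0.01):
    """Block until the sensor reads True, or raise after a timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if sensor():
            return True
        time.sleep(poll)
    raise TimeoutError("interlock condition not met")

# Simulated sensors (real ones would read limit switches, proximity
# sensors, and so on)
part_in_fixture = lambda: True
gripper_closed = lambda: True

wait_for(part_in_fixture)   # verify the part has arrived before machining
wait_for(gripper_closed)    # verify the grasp before the arm moves
print("work cycle may proceed")
```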
3. PART INSPECTION FOR QUALITY CONTROL:
Sensors used for inspection can check only for a limited range of part defects. For example, a
sensor probe designed to measure part length cannot detect flaws in the part surface.
MACHINE VISION:
Machine vision is the automatic extraction of information from digital images for process
control in manufacturing systems. Machine vision systems are programmed to perform narrowly
defined tasks such as counting objects on a conveyor, decoding serial numbers and searching for
surface defects. Machine vision is concerned with the sensing of vision data and its interpretation
by a computer. A typical system consists of a camera and digitizing hardware, a digital computer,
and a preprocessor which interfaces the hardware and software. The operation of a machine vision
system consists of 3 functions:
i) SENSING AND DIGITIZING IMAGE DATA:
This function involves the input of vision data by means of a camera focused on the scene of
interest. Special lighting techniques are used to obtain an image with sufficient contrast for
later processing. The image is digitized and stored in computer memory. The digital image is
called a FRAME of vision data, and is frequently captured by a hardware device called a FRAME
GRABBER. These devices are capable of digitizing images at 30 frames per second.
Each frame consists of a matrix of data representing projections of the scene sensed by the
camera. The elements of the matrix are called PIXELS, or picture elements. The number of pixels
is determined by a sampling process performed on each image frame. A single pixel is the
projection of a small portion of the scene, which reduces that portion to a single value. The value
is a measure of the light intensity for that element, and each pixel's intensity is converted into a
digital value.
ii) IMAGE PROCESSING AND ANALYSIS:
The digitized image for each frame is stored and then subjected to image processing and
analysis functions for data reduction and interpretation of the image. These steps are required in
order to permit the real time application of vision analysis required in robotic applications.
Typically, an image frame will be thresholded to produce a binary image, and then various feature
measurements will further reduce the data representation of the image. This data reduction can
change the representation of a frame from several hundred thousand bytes of raw image data to
several hundred bytes of feature value data. The resultant feature data can be analyzed in the
time available for action by the robot system.
Various techniques to compute the feature values can be programmed into the computer to
obtain feature descriptors of the image, which are matched with previously computed values stored
in the computer. These descriptors include shape and size characteristics that can be readily
calculated from the thresholded image matrix. To accomplish image processing and analysis, the
vision system frequently must be trained. In training, information is obtained on known objects
and stored as computer models. The information consists of features such as the area of the
object, its perimeter length, major and minor diameters, and similar features.
During subsequent operation, feature values computed on unknown objects viewed by the
camera are compared with the models to determine if a match has occurred.
iii) ROBOT APPLICATIONS:
Current applications of machine vision in robotics include inspection, part identification,
location and orientation. Research is ongoing on advanced applications such as complex
inspection, guidance and navigation. Vision systems can be classified in a number of ways, for
example as two-dimensional or three-dimensional systems. 2D applications include checking the
dimensions of a part or
verifying the presence of components on a subassembly. Many 2D vision systems can operate on a
binary image, which is the result of a simple thresholding technique. This relies on an assumed
high contrast between the objects and the background, achieved with a suitably controlled
lighting system.
A 3D vision system may require special lighting techniques and more sophisticated image-
processing algorithms to analyze the image. Some systems require 2 cameras in order to achieve a
stereoscopic view, while other 3D systems rely on the use of structured light and optical
triangulation techniques with a single camera. Another way of classifying vision systems is
according to the number of gray levels used to characterize the image. In a binary image, the gray
levels are divided into one of 2 categories, black or white.
into a digital form. In the second step, digitizing is achieved by an ADC; the ADC is either part
of a digital video camera or the front end of the frame grabber. The choice depends on the type of
hardware in the system. The third step involves the frame grabber, an image storage and
computation device which stores a given pixel array. Frame grabbers vary in capability from those
which simply store an image to those with significant computation capability. In the more
powerful frame grabbers, thresholding, windowing and histogram modification calculations can be
carried out under computer control. The stored image is then subsequently processed and analyzed
by the combination of the frame grabber and the vision controller.
4.6.1 IMAGING DEVICES:
Camera technologies available include the older black-and-white vidicon camera and the
newer, second-generation solid-state cameras. Solid-state cameras used for robot vision include
CCD, CID and silicon bipolar sensor cameras.
VIDICON CAMERA:
In the operation of this system, the lens forms an image on the glass faceplate of the
camera. The inner surface of the faceplate is coated with two layers of material. The first
layer consists of a transparent signal electrode film deposited on the inner surface of the
faceplate. The second layer is a thin photosensitive material deposited over the conducting film.
The photosensitive layer consists of a high density of small areas, similar to pixels.
Each area generates a decreasing electrical resistance in response to increasing
illumination. A charge is created in each small area upon illumination, so an electrical charge
pattern is generated corresponding to the image formed on the faceplate. The charge accumulated
in an area is a function of the intensity of the impinging light over a specified time.
Once a light-sensitive charge is built up, this charge is read out to produce a video signal.
This is accomplished by scanning the layer with an electron beam, controlled by a deflection coil
mounted along the length of the tube.
For an accumulated positive charge, the electron beam deposits enough electrons to neutralize
the charge. An equal number of electrons flow to cause a current at the video signal electrode.
The magnitude of the signal is proportional to the light intensity and the length of time over
which an area is scanned.
The current is then directed through a load resistor, which develops a signal voltage that is
further amplified and analyzed. Raster scanning eliminates the need to consider the time at each
area by making the scan time the same for all areas, so that only the intensity of the impinging
light is considered.
Raster scanning is typically done by scanning the electron beam from left to right and top to
bottom. The process is designed to start the integration with zero accumulated charge. For a fixed
scan time, the charge accumulated is proportional to the intensity of that portion of the image
being considered. The output of the camera is a continuous voltage signal for each line scanned.
The voltage signal for each scan line is subsequently sampled and quantized, resulting in a series
of sampled voltages being stored in digital memory.
This ADC process for the complete screen results in a two-dimensional array of pixels.
Typically, a single pixel is quantized to between six and eight bits by the ADC.
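The sample-and-quantize step described above can be sketched as follows; the 5 V full-scale range and the sampled voltages are illustrative assumptions:

```python
# Sampling and quantizing one scan line's analog voltage into 8-bit
# pixel values, as described above. Numbers are invented for illustration.
def quantize(voltage, v_max=5.0, bits=8):
    """Map a 0..v_max analog voltage to an n-bit digital value."""
    levels = 2 ** bits                 # 256 levels for 8 bits
    code = int(voltage / v_max * levels)
    return min(code, levels - 1)       # clamp a full-scale input

scan_line = [0.0, 1.25, 2.5, 3.75, 5.0]   # sampled voltages along one line
pixels = [quantize(v) for v in scan_line]
print(pixels)   # [0, 64, 128, 192, 255]
```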
4.6.3 CHARGE COUPLED DEVICE:
In this technology, the image is projected by a video camera on to the CCD which detects,
stores, and reads out the accumulated charge generated by the light on each portion of the image.
Light detection occurs through the absorption of light on a photoconductive substrate. Charges
accumulate in isolated wells under positive control electrodes, due to the voltages applied to
those electrodes. Each isolated well represents one pixel and can be transferred to output storage
registers by varying the voltages on the metal control electrodes.
LIGHTING TECHNIQUES:
Good illumination of the scene is important because of its effect on the level of complexity of
image-processing algorithms required. Poor lighting makes the task of interpreting the scene more
difficult. Proper lighting technique should provide high contrast & minimize specular reflections &
shadows unless specifically designed into the system. The purpose of lighting techniques is to direct
the path of light from the lighting device to the camera. The basic types of lighting devices used in
machine vision may be grouped into the following categories.
1. Diffuse surface devices: Examples of diffuse surface illuminators are the typical
fluorescent lamps & light tables
2. Condenser projections: A condenser projector transforms an expanding light source
into a condensing light source. This is useful in imaging optics.
3. Flood or spot projectors: Flood lights and spot lights are used to illuminate surface areas.
QUANTIZATION:
Each sampled discrete time voltage level is assigned to a finite number of defined amplitude
levels. These amplitude levels correspond to the gray scale used in the system. The predefined
amplitude levels are characteristic to a particular ADC & consist of a set of discrete values of the
voltage levels. The number of quantization levels is 2^n, where n is the number of bits of the
ADC. A larger number of bits enables a signal to be represented more precisely.
For example, an 8-bit converter allows quantization into 2^8 = 256 different values,
whereas a 4-bit converter would allow only 2^4 = 16 different levels.
ENCODING:
The process of changing the quantized amplitude levels into a digital code is called
ENCODING. This involves representing an amplitude level by a binary digit sequence. The ability of
the encoding process to distinguish between the various amplitude levels is a function of the
spacing of each quantization level:
Quantization level spacing = full-scale range / 2^n
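The spacing formula and the encoding step can be illustrated with a small sketch (the 0-5 V range is an assumed example):

```python
# Quantization level spacing for an n-bit ADC, and the binary code
# assigned to a voltage. The 0-5 V range is illustrative.
def level_spacing(full_scale, bits):
    return full_scale / 2 ** bits      # volts per quantization level

def encode(voltage, full_scale, bits):
    """Binary code string for a voltage (assumes 0 <= voltage < full_scale)."""
    code = int(voltage / level_spacing(full_scale, bits))
    return format(min(code, 2 ** bits - 1), f"0{bits}b")

# 8-bit ADC over a 0-5 V range: spacing = 5/256, about 19.5 mV per level
spacing = level_spacing(5.0, 8)
print(round(spacing * 1000, 2), encode(2.5, 5.0, 8))
```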
The image processing and analysis techniques discussed below are 1. image data reduction,
2. segmentation, 3. feature extraction and 4. object recognition.
IMAGE DATA REDUCTION:
The objective of image data reduction is to reduce the volume of data as a primary step in the
data analysis. The following two schemes are commonly used: 1. DIGITAL CONVERSION, 2. WINDOWING.
The function of both schemes is to eliminate the bottleneck that can occur from the large volume
of data in image processing.
Digital conversion reduces the number of gray levels used by the machine vision system. For
example, an 8-bit register used for each pixel would allow 2^8 = 256 gray levels. Depending on the
requirements of the application, digital conversion can be used to reduce the number of gray
levels by using fewer bits to represent the pixel light intensity; 4 bits would reduce the number
of gray levels to 16. This kind of conversion significantly reduces the magnitude of the
image-processing problem.
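A minimal sketch of this digital conversion, dropping the low-order bits of each 8-bit pixel to obtain 16 gray levels:

```python
# Digital conversion: reduce 8-bit (256-level) pixels to 4-bit
# (16-level) pixels by discarding the low-order bits.
def reduce_gray_levels(pixels, from_bits=8, to_bits=4):
    shift = from_bits - to_bits
    return [p >> shift for p in pixels]

row = [0, 15, 16, 127, 128, 255]   # one row of 8-bit intensities
print(reduce_gray_levels(row))     # [0, 0, 1, 7, 8, 15]
```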
PROBLEM 4:
For an image digitized at 128 points per line and 128 lines, determine a) the total number
of bits needed to represent the gray level values if an 8-bit ADC is used to indicate the various
shades of gray, and b) the reduction in data volume if only black and white values are digitized.
Solution
a) For gray scale imaging with 2^8 = 256 levels of gray, 8 bits are required per pixel:
number of bits = 128 x 128 x 8 = 131,072 bits
b) For black and white (binary conversion), 1 bit is required per pixel:
number of bits = 128 x 128 x 1 = 16,384 bits
reduction in data volume = 131,072 - 16,384 = 114,688 bits
Windowing involves using only a portion of the total image stored in the frame buffer for
image processing and analysis. This portion is called the window.
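Windowing can be sketched as a simple sub-array extraction; the frame contents and window coordinates below are invented for illustration:

```python
# Windowing: process only a rectangular portion of the stored frame.
# The frame values and window coordinates are arbitrary test data.
frame = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]

def window(frame, top, left, height, width):
    return [row[left:left + width] for row in frame[top:top + height]]

w = window(frame, top=4, left=6, height=3, width=5)
print(len(w), len(w[0]), w[0][0])   # 3 5 70
```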
SEGMENTATION:
The objective is to group areas of an image having similar characteristics or features into
distinct entities representing parts of the image. For example boundaries (edges) or regions (areas)
represent two natural segments of an image. There are many ways to segment an image. Three
important techniques are 1. THRESHOLDING, 2. REGION GROWING, 3. EDGE DETECTION.
THRESHOLDING is a binary conversion technique in which each pixel is converted into a
binary value either black or white. This is accomplished by utilizing a frequency histogram of the
image and establishing the gray level that is to be the border between black and white. Consider
a regular image in which each pixel has a specific gray tone out of 256 possible gray levels. The
histogram plots frequency versus gray level for the image. For histograms that are bimodal in
shape, each peak represents either the object itself or the background upon which the object
rests. To differentiate between the object and the background, the procedure is to establish a
threshold gray level between the two peaks and assign each pixel on either side of it to white or
black accordingly.
It should be pointed out that the above method of using a histogram to determine a
threshold is only one of a large number of ways to threshold an image. It is, however, the method
used by many of the commercially available robot vision systems today. Such a method is said to
use a global threshold for the entire image.
When it is not possible to find a single threshold for the entire image, one approach is to
partition the total image into smaller rectangular areas and determine the threshold for each
window being analyzed.
Thresholding is the most widely used technique for segmentation in industrial vision
applications. The reasons are that it is fast & easily implemented & that the lighting is usually
controllable in an industrial setting.
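A sketch of global thresholding on a tiny, invented 4x4 image. Real systems locate the valley of the bimodal histogram more carefully; the midpoint rule used here is only a crude stand-in:

```python
# Global thresholding: build a gray-level histogram, pick a threshold
# between the two modes of a bimodal image, and convert every pixel to
# black (0) or white (1). Image values and the midpoint rule are
# illustrative, not a production method.
image = [[ 20,  25,  22, 200],
         [ 18, 210, 205,  21],
         [ 23, 198,  19,  24],
         [202,  26, 195, 201]]

histogram = [0] * 256
for row in image:
    for p in row:
        histogram[p] += 1

# Midpoint between darkest and brightest occupied levels, a crude
# stand-in for locating the histogram valley.
occupied = [g for g, n in enumerate(histogram) if n]
threshold = (occupied[0] + occupied[-1]) // 2   # (18 + 210) // 2 = 114

binary = [[1 if p > threshold else 0 for p in row] for row in image]
print(threshold, binary[0])   # 114 [0, 0, 0, 1]
```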
Once a threshold is established for a particular image, the next step is to identify particular
areas associated with objects within the image. Such regions usually possess uniform pixel
properties computed over the area. The pixel properties may be multidimensional; that is, there
may be more than a single attribute that can be used to characterize each pixel (e.g. color and
intensity).
This simple procedure of growing regions around a pixel is repeated until no new
regions can be added for the image.
The region-growing segmentation technique described here is applicable when images are
not distinguishable from each other by straight thresholding or edge detection techniques. This
sometimes occurs when the lighting of the scene cannot be adequately controlled.
In industrial robot vision systems, it is common practice to consider only edge detection or
simple thresholding. This is because lighting can be a controllable factor in an industrial
setting, and the hardware and computational implementation is simpler.
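One common way to implement region growing (not necessarily the exact procedure these notes had in mind) is a seeded flood fill over 4-connected neighbours whose intensity is close to the seed's:

```python
# Region growing as a seeded flood fill: starting from a seed pixel,
# add 4-connected neighbours whose intensity is within a tolerance of
# the seed. The image and the tolerance are invented test data.
from collections import deque

image = [[10, 11, 50, 52],
         [12, 10, 51, 50],
         [11, 12, 10, 11]]

def grow_region(img, seed, tol=5):
    h, w = len(img), len(img[0])
    target = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - target) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

region = grow_region(image, (0, 0))
print(len(region))   # 8 connected low-intensity pixels
```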
EDGE DETECTION considers the intensity change that occurs in the pixels at the boundary or edges
of a part. Given that a region of similar attributes has been found but the boundary shape is
unknown, the boundary can be determined by a simple edge-following procedure, which can be
illustrated with a binary image.
For the binary image, the procedure is to scan the image until a pixel within the region is
encountered. From a pixel within the region, turn left and step; otherwise, turn right and step.
The procedure is stopped when the boundary has been traversed and the path has returned to the
starting pixel. The contour-following procedure described here can be extended to gray-level
images.
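The turn-left/turn-right procedure can be sketched as a square-tracing routine on a binary image; the 5x5 test image, the stopping rule and the scan order are illustrative choices:

```python
# Edge following on a binary image: scan for a region pixel, then
# repeatedly "turn left and step" inside the region and "turn right and
# step" outside it, until the path returns to the start. This is a
# square-tracing variant with invented test data.
image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 0, 0],
         [0, 1, 1, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]

def inside(img, r, c):
    return 0 <= r < len(img) and 0 <= c < len(img[0]) and img[r][c] == 1

def trace_boundary(img):
    # Scan row by row until a pixel within the region is encountered
    start = next((r, c) for r in range(len(img))
                 for c in range(len(img[0])) if img[r][c] == 1)
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # up, right, down, left
    pos, d, boundary = start, 0, []
    for _ in range(4 * len(img) * len(img[0])):  # safety bound
        if inside(img, *pos):
            if pos not in boundary:
                boundary.append(pos)
            d = (d - 1) % 4                      # inside: turn left
        else:
            d = (d + 1) % 4                      # outside: turn right
        pos = (pos[0] + dirs[d][0], pos[1] + dirs[d][1])
        if pos == start and d == 0:              # back where we began
            break
    return boundary

print(trace_boundary(image))   # the pixels bounding the 2x2 region
```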
FEATURE EXTRACTION:
In machine vision applications, it is often necessary to distinguish one object from another.
This is usually accomplished by means of features that uniquely characterize the object. Some
features of objects that can be used in machine vision include area, diameter and perimeter.
A feature is a single parameter that permits ease of comparison and identification. The
techniques available for the two-dimensional case can be divided into those that deal with
boundary features and those that deal with area features. The various features can be used to
identify the object or part and determine the part's location and/or orientation.
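Two of the features mentioned (area and perimeter) can be computed from a binary image as a sketch; the perimeter convention here, counting pixel edges that border the background, is one common choice among several:

```python
# Feature extraction from a binary image: area (pixel count) and
# perimeter (exposed pixel edges). Test image is invented.
image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]

def area(img):
    return sum(sum(row) for row in img)

def perimeter(img):
    h, w, p = len(img), len(img[0]), 0
    for r in range(h):
        for c in range(w):
            if img[r][c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    # count edges facing background or the image border
                    if not (0 <= nr < h and 0 <= nc < w and img[nr][nc]):
                        p += 1
    return p

print(area(image), perimeter(image))   # 4 8
```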
OBJECT RECOGNITION:
The next step in image data processing is to identify the object that the image represents. The
recognition algorithm must be powerful enough to uniquely identify the object. Object recognition
techniques used in industry today may be classified into 2 major categories:
1. TEMPLATE-MATCHING techniques
2. STRUCTURAL techniques
TEMPLATE-MATCHING techniques are a subset of the more general statistical pattern recognition
techniques that serve to classify objects in an image into predetermined categories. The
basic problem in template matching is to match the object with a stored pattern feature set
defined as a model template.
The procedure is based on the use of a sufficient number of features to minimize the
frequency of errors in the classification process. The features of the object in the image (its area,
diameter, aspect ratio) are compared to the corresponding stored values. These values constitute
the stored template. When a match is found, allowing for certain statistical variations in the
comparison process, the object has been properly classified.
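A sketch of this matching step, with invented feature values and a simple 10% tolerance band standing in for the statistical variation allowed in the comparison:

```python
# Template matching on feature values: an unknown object matches a
# stored template when every feature falls within a tolerance band.
# Feature names, values and the tolerance are illustrative assumptions.
models = {
    "washer":  {"area": 450.0, "perimeter": 95.0},
    "bracket": {"area": 1200.0, "perimeter": 180.0},
}

def classify(features, models, tol=0.10):
    """Return the first model whose every feature is within tol (10%)."""
    for name, model in models.items():
        if all(abs(features[k] - v) <= tol * v for k, v in model.items()):
            return name
    return None   # no match found

unknown = {"area": 465.0, "perimeter": 93.0}
print(classify(unknown, models))   # matches "washer"
```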
STRUCTURAL TECHNIQUES of pattern recognition consider relationships between features or edges of
an object. For example, if the image of an object can be subdivided into 4 straight lines
connected at their end points, and the connected lines are at right angles, then the object is a
rectangle. This kind of technique, known as syntactic pattern recognition, is the most widely used
structural technique. Structural techniques differ from decision-theoretic techniques in that the
latter deal with a pattern on a quantitative basis and ignore, for the most part, the
interrelationships among object primitives.
Complete pattern recognition can be computationally time consuming. Accordingly, it
is often more appropriate to search for simpler regions or edges within an image; these simpler
regions can then be used to extract the required features. The majority of commercial robot vision
systems take this approach to the recognition of two-dimensional objects. The recognition
algorithms are used to identify each segmented object in an image and assign it to a
classification.
IMPORTANT QUESTIONS:
3. Explain, with the principle of operation, the use of a force-sensing wrist for force sensing. (8)
4. Explain proximity and range sensors in detail. Discuss any two types. (10)
5. What are tactile sensors? Discuss tactile sensors with examples. (5)
6. Write short notes on: i) Eddy current sensor ii) Magnetic field sensor. (5)
7. Write short notes on: i) Optical sensor ii) Acoustic sensor. (10)
8. Write short notes on: i) Touch sensors ii) Force sensors. (5)
9. Write short notes on: i) Joint sensing ii) Tactile array sensors. (5)
10. Explain briefly the uses of sensors in robotics. (8)
11. What do you understand by the term machine vision as used in robotics? Explain with a
suitable block diagram the functions of a machine vision system. (10)
12. Discuss the important image processing and analysis techniques. (10)
13. Describe the sensing and digitizing functions in machine vision. (10)
14. Explain with a neat sketch the principle and operation of the vidicon tube. (8)
15. With relevant diagrams, explain charge coupled devices. (8)
16. What is the necessity of a lighting system in a machine vision system? (5)
17. Explain the various lighting techniques. (10)
18. Discuss the various techniques used in the ADC process. (8)
19. Explain briefly the segmentation process in image processing and analysis. (10)
20. Discuss object recognition in detail. (5)
TEXT BOOK:
1. Automation, Production Systems and Computer Integrated Manufacturing, M.P. Groover,
Pearson Education, 5th edition, 2009.
2. Industrial Robotics: Technology, Programming and Applications, M.P. Groover, Weiss,
Nagel, McGraw Hill International edition, 2012.
REFERENCE BOOKS:
1. Industrial Robotics: Technology, Programming and Applications, M.P. Groover, Weiss,
Nagel, McGraw Hill International, 1st edition, 1986.
2. Automation, Production Systems and Computer Integrated Manufacturing, M.P. Groover,
Pearson Education, 2nd edition, 2001.
3. Robotics: Control, Sensing, Vision, and Intelligence, Fu, Lee and Gonzalez, McGraw Hill
International, 2nd edition, 2007.
4. Robotics Engineering: An Integrated Approach, Klafter, Chmielewski and Negin, PHI,
1st edition, 2009.
5. Robotics (Industrial Robotics), P. Jaganathan, Lakshmi Publications, Chennai.