Notes On Sensor Unit 02

What is a Thermistor?

A thermistor (or thermal resistor) is defined as a type of resistor whose electrical resistance varies with
changes in temperature. Although all resistors’ resistance will fluctuate slightly with temperature, a
thermistor is particularly sensitive to temperature changes.

Uses of Thermistors
Thermistors have a variety of applications. They are widely used as a way to measure temperature as a
thermistor thermometer in many different liquid and ambient air environments. Some of the most
common uses of thermistors include:
 Digital thermometers and thermostats
 Automotive applications (to measure oil and coolant temperatures in cars & trucks)
 Household appliances (like microwaves, fridges, and ovens)
 Circuit protection (i.e. surge protection)
 Rechargeable batteries (ensure the correct battery temperature is maintained)
 To measure the thermal conductivity of electrical materials
 Temperature compensation (i.e. maintaining resistance to compensate for effects caused by temperature changes in another part of the circuit)
 Wheatstone bridge circuits
How Does a Thermistor Work?
The working principle of a thermistor is that its resistance is dependent on its temperature. We can
measure the resistance of a thermistor using an ohmmeter. If we know the exact relationship between
how changes in the temperature will affect the resistance of the thermistor – then by measuring the
thermistor’s resistance we can derive its temperature.
How much the resistance changes depends on the type of material used in the thermistor. The
relationship between a thermistor’s temperature and resistance is non-linear. A typical thermistor graph
is shown below:

If we had a thermistor with the above temperature graph, we could simply line up the resistance
measured by the ohmmeter with the temperature indicated on the graph. By drawing a horizontal line
across from the resistance on the y-axis, and drawing a vertical line down from where this horizontal line
intersects with the graph, we can hence derive the temperature of the thermistor.
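Reading a temperature off the curve in this way amounts to linear interpolation in a resistance–temperature table. A minimal sketch in Python (the table values below are invented for illustration, not taken from any real thermistor datasheet):

```python
# (resistance_ohms, temperature_C) pairs for a hypothetical NTC part,
# listed with resistance decreasing as temperature increases.
RT_TABLE = [(32650.0, 0.0), (10000.0, 25.0), (3603.0, 50.0),
            (1481.0, 75.0), (678.0, 100.0)]

def temperature_from_resistance(r_ohms):
    """The graphical look-up in code: linear interpolation of the R-T table."""
    for (r_hi, t_lo), (r_lo, t_hi) in zip(RT_TABLE, RT_TABLE[1:]):
        if r_lo <= r_ohms <= r_hi:
            frac = (r_hi - r_ohms) / (r_hi - r_lo)  # 0 at t_lo, 1 at t_hi
            return t_lo + frac * (t_hi - t_lo)
    raise ValueError("resistance outside table range")
```

In practice the table would come from the manufacturer's datasheet, with more points where the curve bends most steeply.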
NTC Thermistor
In an NTC thermistor, when the temperature increases, resistance decreases; when the temperature decreases, resistance increases. Hence in an NTC thermistor, temperature and resistance are inversely related. These are the most common type of thermistor.

The relationship between resistance and temperature in an NTC thermistor is governed by the following expression:

RT = R0 exp[β(1/T − 1/T0)]    …(1)

Where:
 RT is the resistance at temperature T (K)
 R0 is the resistance at the reference temperature T0 (K)
 T0 is the reference temperature (normally 25 °C, i.e. 298 K)
 β is a constant whose value depends on the characteristics of the material; its nominal value is taken as 4000 K
If the value of β is high, the resistance–temperature relationship is steep: a higher value of β means a larger change in resistance for the same rise in temperature, and hence greater sensitivity (and accuracy) of the thermistor.

Differentiating expression (1), we obtain the resistance temperature coefficient, which is the expression for the sensitivity of the thermistor:

αT = (1/RT)(dRT/dT) = −β/T²

We can clearly see that αT has a negative sign, which indicates the negative resistance–temperature characteristic of the NTC thermistor.

If β = 4000 K and T = 298 K, then αT = −0.045 per kelvin, i.e. about −4.5% per °C. This is roughly ten times the sensitivity of a platinum RTD (about +0.4% per °C), so a thermistor can resolve very small changes in temperature.
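The β model above, its inverse, and its sensitivity can be sketched in a few lines of Python (assuming, for illustration, a 10 kΩ thermistor at 25 °C with β = 4000 K; these are typical values, not taken from a specific datasheet):

```python
import math

def ntc_resistance(t_kelvin, r0=10_000.0, t0=298.15, beta=4000.0):
    """Expression (1): RT = R0 * exp(beta * (1/T - 1/T0))."""
    return r0 * math.exp(beta * (1.0 / t_kelvin - 1.0 / t0))

def ntc_temperature(r_ohms, r0=10_000.0, t0=298.15, beta=4000.0):
    """Invert expression (1): temperature in kelvin from measured resistance."""
    return 1.0 / (1.0 / t0 + math.log(r_ohms / r0) / beta)

def ntc_sensitivity(t_kelvin, beta=4000.0):
    """Temperature coefficient alpha_T = -beta / T**2, per kelvin."""
    return -beta / t_kelvin ** 2
```

Evaluating ntc_sensitivity(298.0) gives about −0.045 per kelvin, i.e. −β/T² with β = 4000 K.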

Thermistor Construction
To make a thermistor, two or more semiconducting metallic-oxide powders are mixed with a binder to form a slurry. Small drops of this slurry are deposited on the lead wires, and the assembly is then dried and fired in a sintering furnace. During sintering, the slurry shrinks onto the lead wires, making the electrical connection. The processed metallic oxide is then sealed with a glass coating, which makes the thermistor waterproof and helps improve its stability.
There are different shapes and sizes of thermistors available in the market. Smaller thermistors are in the
form of beads of diameter from 0.15 millimeters to 1.5 millimeters. Thermistors may also be in the form of
disks and washers made by pressing the thermistor material under high pressure into flat cylindrical
shapes with diameter from 3 millimeters to 25 millimeters.


The typical size of a thermistor is 0.125 mm to 1.5 mm. Commercially available thermistors have nominal values of 1 kΩ, 2 kΩ, 10 kΩ, 20 kΩ, 100 kΩ, etc.; this value indicates the resistance at 25 °C. Thermistors are available in different forms: bead type, rod type, disc type, etc. The major advantages of thermistors are their small size and relatively low cost.

This size advantage means that the time constant of thermistors operated in sheaths is small, although the
size reduction also decreases its heat dissipation capability and so makes the self-heating effect greater.
This effect can permanently damage the thermistor.

To prevent this, thermistors have to be operated at lower levels of electric current than resistance thermometers, resulting in lower measurement sensitivity.

Thermistor vs Thermocouple
The main differences between a thermistor and a thermocouple are:
Thermistors:

 A more narrow range of sensing (−55 to +150 °C, although this varies depending on the brand)
 Sensing parameter = Resistance
 Nonlinear relationship between the sensing parameter (resistance) and temperature
 NTC thermistors have a roughly exponential decrease in resistance with increasing temperature
 Good for sensing small changes in temperature (it is hard to use a thermistor accurately and with high resolution over more than a 50 °C range)
 The sensing circuit is simple and does not need amplification
 Accuracy better than 1 °C is hard to achieve without calibration
Thermocouples:

 Have a wide range of temperature sensing (Type T = −200 to 350 °C; Type J = 95 to 760 °C; Type K = 95 to 1260 °C; other types go to even higher temperatures)
 Can be very accurate
 Sensing parameter = voltage generated by junctions at different temperatures
 Thermocouple voltage is relatively low
 Approximately linear relationship between the sensing parameter (voltage) and temperature
Thermistor vs RTD
Resistance Temperature Detectors (also known as RTD sensors) are very similar to thermistors. Both
RTDs and thermistors have varying resistance dependent on the temperature.
The main difference between the two is the type of material that they are made of. Thermistors are
commonly made with ceramic or polymer materials while RTDs are made of pure metals. In terms of
performance, thermistors win in almost all aspects.

Thermistors are more accurate, cheaper, and have faster response times than RTDs. The only real
disadvantage of a thermistor vs an RTD is when it comes to temperature range. RTDs can measure
temperature over a wider range than a thermistor.

Aside from this, there is little reason to choose an RTD over a thermistor.

Resistance Temperature Detector (RTD):-

A Resistance Thermometer or Resistance Temperature Detector is a device used to determine temperature by measuring the resistance of a pure metal wire; this wire is referred to as the temperature sensor. When temperature must be measured with high accuracy, the RTD is the usual industrial choice. It has good linear characteristics over a wide range of temperatures.
The variation of the resistance of the metal with temperature is given as:

Rt = R0(1 + αt + βt² + …)

where Rt and R0 are the resistance values at t °C and 0 °C respectively, and α and β are constants that depend on the metal.
This expression is for a large range of temperature. For a small range of temperature, the expression reduces to:

Rt = R0(1 + αt)
In RTD devices, Copper, Nickel and Platinum are the widely used metals. These three metals have different resistance variations with respect to temperature; this is called the resistance–temperature characteristic. Platinum has a temperature range up to 650 °C, while Copper and Nickel have ranges up to 120 °C and 300 °C respectively. Figure 1 shows the resistance–temperature characteristic curves of the three metals. For Platinum, the resistance changes by approximately 0.4 ohms per degree Celsius.
The purity of the platinum is checked by measuring R100/R0, because whatever material is used for making the RTD should be pure: if it is not pure, the curve will deviate from the standard resistance–temperature graph. The α and β values thus change depending on the metal.
Construction of Resistance Temperature Detector or RTD

The construction is typically such that the wire is wound in a coil on a notched mica cross frame to achieve small size, improving the thermal conductivity to decrease the response time; a high rate of heat transfer is thereby obtained. In industrial RTDs, the coil is protected by a stainless steel sheath or a protective tube, so that the physical strain is negligible as the wire expands and its length increases with temperature change. If the strain on the wire increases, the tension increases, and due to that the resistance of the wire changes, which is undesirable: the resistance of the wire should not change through any unwanted effect other than temperature. The sheath also allows RTD maintenance while the plant is in operation. Mica is placed between the steel sheath and the resistance wire for better electrical insulation, and to keep strain in the resistance wire low it is carefully wound over the mica sheet. Fig. 2 shows the structural view of an industrial RTD.

Limitations of RTD

The RTD resistance dissipates I²R power in the device itself, causing a slight heating effect known as self-heating in the RTD. This may cause an erroneous reading, so the electric current through the RTD must be kept sufficiently low and constant to avoid self-heating.
2.2.1.3 Thermocouples
The thermocouple is a widely used temperature sensor in industry. Whenever two different metals are connected together, a thermoelectric potential (sometimes called a thermoelectric EMF) is generated across the two free ends of the metals according to the temperature of the junction. This is known as the thermoelectric effect.

Basically, a thermocouple consists of two different metals placed in contact with each other, as shown in the diagram.
Let the temperature of the heater element be Ta and the temperature of the cold metal be Tb. It is found that the EMF generated at the junction is related to the temperature difference:

E ∝ (Ta − Tb)
This thermoelectric effect was discovered by Thomas Johann Seebeck in 1821. The thermoelectric EMF is generated by the combination of the Peltier effect and the Thomson effect, and can be approximately expressed by the relationship:

E = a1(T1 − T2) + a2(T1² − T2²) + a3(T1³ − T2³) + …

The values of the constants a1, a2, a3, etc. depend on the metals A and B, as shown in fig.1.
In fig.1, T1 and T2 are the temperatures at the junction points of metals A and B: T1 is the hot junction and T2 is the cold junction, so T1 should be greater than T2.

The Seebeck effect describes the phenomenon whereby a temperature gradient in a metal gives rise to an
accompanying electric field. The magnitude of the accompanying electric field is always proportional to the
temperature gradient but depends on a material-dependent and temperature-dependent Seebeck Coefficient.

Thus a wire of pure material in a temperature gradient spontaneously acquires a voltage across its ends, the
magnitude of which is equal to the integrated Seebeck voltage along the wire. A thermocouple is made from two
wires of different materials with differing Seebeck coefficients joined at one end – the so-called ‘hot’ junction. The
open ends of the wires are connected to a sensitive voltage measuring device in one of a variety of configurations.

Typically in operation the open ends of the junctions are connected to the terminals of a high-resolution voltmeter.

Thermocouples are made by welding or soldering together wires of the metals concerned. These junctions can be
made very small and with negligible heat capacity.

When used to measure temperature, a measurement is taken of the electromotive force set up when one junction is
maintained at a standard known temperature – usually the ice point - and the other junction is allowed to take the
temperature whose value is required. This electromotive force can be directly related to the difference in temperature
between the two junctions by previous calibration of the system, and thus the unknown temperature is found by
adding this difference algebraically to the known standard temperature.

In practice it is inconvenient to maintain an ice point and instead the measurements are referenced to the temperature
of the terminals of the digital voltmeter. This technique – known as cold-junction compensation – requires a
measurement of the temperature of the voltmeter terminals using a thermistor or platinum resistance thermometer.
The additional thermocouple voltage that would have been expected if an ice-point had been used is then calculated
and added to the measured voltage. The sum is then used to determine the temperature using interpolation of
standard tables. Where cold-junction compensation is used, special care must be taken close to the voltmeter terminals, where small temperature differences between them can generate spurious voltages.
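The compensation arithmetic described above can be sketched as follows (a deliberately simplified illustration that assumes a linear EMF–temperature relation with a coefficient of 41 µV/°C, roughly that of a type K couple near room temperature; real instruments interpolate the standard reference tables instead):

```python
SEEBECK_UV_PER_C = 41.0  # assumed linear coefficient in microvolts per degC

def emf_uv(t_celsius):
    """EMF of the couple with its reference junction held at 0 degC (linear approx.)."""
    return SEEBECK_UV_PER_C * t_celsius

def temperature_from_emf_uv(e_uv):
    """Inverse of emf_uv."""
    return e_uv / SEEBECK_UV_PER_C

def compensated_temperature(measured_uv, terminal_temp_c):
    """Cold-junction compensation: add the EMF that an ice-point reference
    would have contributed at the terminal temperature, then invert."""
    return temperature_from_emf_uv(measured_uv + emf_uv(terminal_temp_c))
```

With the voltmeter terminals at 25 °C, a hot junction at 100 °C produces a measured EMF corresponding to only 75 °C; adding the correction for the 25 °C terminals recovers the true temperature.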

In meteorology, thermocouples are mostly used when a thermometer of very small time-constant, of the order of 1
or 2 s, and capable of remote reading and recording is required, usually for special research tasks. A disadvantage, if
the absolute temperature is required, is the necessity for a constant-temperature enclosure for both the cold junction
and ancillary apparatus for the measurements of the electromotive force that has been set up; thermocouples are best
suited for the measurement of differential temperatures, since this complication does not arise. Very high accuracy
can be achieved with suitably sensitive apparatus, but frequent calibration is necessary. Copper-constantan or iron-
constantan combinations are suitable for meteorological work, as the electromotive force produced per degree
Celsius is higher than with rarer and more expensive metals, which are normally used at high temperatures.

Thermocouple Materials

There are many types of thermocouples, each with its own characteristics in terms of temperature range, durability, vibration resistance, chemical resistance, and application compatibility. Types J, K, T and E are “base metal” thermocouples, the most common types. Types R, S and B are “noble metal” thermocouples, which are used in high-temperature applications.

Thermocouples are manufactured from various combinations of the base metals copper and iron; the base-metal alloys Alumel (Ni/Mn/Al/Si), Chromel (Ni/Cr), Constantan (Cu/Ni), Nicrosil (Ni/Cr/Si) and Nisil (Ni/Si/Mn); the metals platinum and tungsten; and the alloys platinum/rhodium and tungsten/rhenium.
Only certain combinations of these are used as thermocouples, and each standard combination is known by an internationally recognized type letter; for instance, type K is Chromel–Alumel. The table below shows some of the material types and their characteristics. The EMF–temperature characteristics for some of these standard thermocouples are shown in Fig. 3; the curves show reasonable linearity over at least part of their temperature-measuring ranges.
Following are the advantages of thermocouple-type instruments:
1. They accurately indicate the root-mean-square value of currents and voltages irrespective of the waveform, and a wide range of thermocouple instruments is available in the market.
2. They give very accurate readings even at high frequency, and are thus completely free from frequency errors.
3. The measured quantity is not affected by stray magnetic fields.
4. These instruments are known for their high sensitivity.
5. For measuring low currents, roughly 0.5 A to 20 A, a bridge-type arrangement is used, while for measuring higher currents a heater element is required to retain accuracy.
Disadvantages of Thermocouple Type Instruments

Despite their many advantages, these instruments have one disadvantage: their overload capacity is small. Even a fuse cannot protect the heater wire, because the heater wire may burn out before the fuse blows.
THERMAL IMAGING

Here's how thermal imaging works:


1. A special lens focuses the infrared light emitted by all of the objects in view.
2. The focused light is scanned by a phased array of infrared-detector elements. The detector elements create a
very detailed temperature pattern called a thermogram. It only takes about one-thirtieth of a second for the
detector array to obtain the temperature information to make the thermogram. This information is obtained
from several thousand points in the field of view of the detector array.
3. The thermogram created by the detector elements is translated into electric impulses.
4. The impulses are sent to a signal-processing unit, a circuit board with a dedicated chip that translates the
information from the elements into data for the display.
5. The signal-processing unit sends the information to the display, where it appears as various colors
depending on the intensity of the infrared emission. The combination of all the impulses from all of the
elements creates the image.

Thermal imaging

Types of Thermal Imaging Devices


Most thermal-imaging devices scan at a rate of 30 times per second. They can sense temperatures ranging from -4
degrees Fahrenheit (-20 degrees Celsius) to 3,600 F (2,000 C), and can normally detect changes in temperature of
about 0.4 F (0.2 C).
There are two common types of thermal-imaging devices:
 Un-cooled - This is the most common type of thermal-imaging device. The infrared-detector elements are
contained in a unit that operates at room temperature. This type of system is completely quiet, activates
immediately and has the battery built right in.
 Cryogenically cooled - More expensive and more susceptible to damage from rugged use, these systems
have the elements sealed inside a container that cools them to below 32 F (zero C). The advantage of such a
system is the incredible resolution and sensitivity that result from cooling the elements. Cryogenically-
cooled systems can "see" a difference as small as 0.2 F (0.1 C) from more than 1,000 ft (300 m) away,
which is enough to tell if a person is holding a gun at that distance!
While thermal imaging is great for detecting people or working in near-absolute darkness, most night-vision equipment uses image-enhancement technology.
Hall Effect Sensors are devices which are activated by an external magnetic field. A magnetic field has two important characteristics: flux density (B) and polarity (north and south poles). The output signal from a Hall effect sensor is a function of the magnetic flux density around the device. When the magnetic flux density around the sensor exceeds a certain pre-set threshold, the sensor detects it and generates an output voltage called the Hall voltage, VH. Consider the diagram below.

Hall Effect Sensor Principles

Hall Effect Sensors consist basically of a thin piece of rectangular p-type semiconductor material such as gallium
arsenide (GaAs), indium antimonide (InSb) or indium arsenide (InAs) passing a continuous current through itself.
When the device is placed within a magnetic field, the magnetic flux lines exert a force on the semiconductor
material which deflects the charge carriers, electrons and holes, to either side of the semiconductor slab. This
movement of charge carriers is a result of the magnetic force they experience passing through the semiconductor
material.
As these electrons and holes move sideways, a potential difference is produced between the two sides of the semiconductor material by the build-up of these charge carriers. The movement of electrons through the semiconductor material is thus affected by the presence of an external magnetic field at right angles to it, and this effect is greater in a flat rectangular material.
The effect of generating a measurable voltage by using a magnetic field is called the Hall Effect after Edwin Hall
who discovered it back in the 1870’s with the basic physical principle underlying the Hall effect being Lorentz
force. To generate a potential difference across the device the magnetic flux lines must be perpendicular, (90o) to the
flow of current and be of the correct polarity, generally a south pole.
The Hall effect provides information regarding the type of magnetic pole and magnitude of the magnetic field. For
example, a south pole would cause the device to produce a voltage output while a north pole would have no effect.
Generally, Hall Effect sensors and switches are designed to be in the “OFF”, (open circuit condition) when there is
no magnetic field present. They only turn “ON”, (closed circuit condition) when subjected to a magnetic field of
sufficient strength and polarity.

Hall Effect Magnetic Sensor


The output voltage, called the Hall voltage (VH), of the basic Hall element is directly proportional to the strength of the magnetic field passing through the semiconductor material (output ∝ B). This output voltage can be quite small, only a few microvolts even in strong magnetic fields, so most commercially available Hall effect devices are manufactured with built-in DC amplifiers, logic switching circuits and voltage regulators to improve the sensor's sensitivity, hysteresis and output voltage. This also allows the Hall effect sensor to operate over a wider range of power supplies and magnetic field conditions.

The Hall Effect Sensor

Hall Effect Sensors are available with either linear or digital outputs. The output signal for linear (analogue)
sensors is taken directly from the output of the operational amplifier with the output voltage being directly
proportional to the magnetic field passing through the Hall sensor. This output Hall voltage is given as:

VH = (RH × I × B) / t

Where:
 VH is the Hall voltage in volts
 RH is the Hall effect coefficient
 I is the current through the sensor in amps
 t is the thickness of the sensor in mm
 B is the magnetic flux density in teslas
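The Hall voltage expression can be evaluated directly (a sketch with arbitrary illustrative values; real devices hide this relation behind their internal amplifier):

```python
def hall_voltage(r_h, current_a, flux_density_t, thickness):
    """Basic Hall element output: VH = (RH * I * B) / t."""
    return r_h * current_a * flux_density_t / thickness
```

Doubling either the bias current or the flux density doubles VH, while a thinner element gives a larger output, which is why Hall elements are made as thin slabs.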

Linear or analogue sensors give a continuous voltage output that increases with a strong magnetic field and
decreases with a weak magnetic field. In linear output Hall effect sensors, as the strength of the magnetic field increases, the output signal from the amplifier also increases until it begins to saturate at the limits imposed by the power supply. Any additional increase in the magnetic field will not increase the output, but only drive it further into saturation.
Digital output sensors, on the other hand, have a Schmitt trigger with built-in hysteresis connected to the op-amp. When the magnetic flux passing through the Hall sensor exceeds a pre-set value, the output from the device switches quickly from its “OFF” condition to an “ON” condition without any contact bounce. This built-in hysteresis eliminates oscillation of the output signal as the sensor moves in and out of the magnetic field. Digital output sensors therefore have just two states, “ON” and “OFF”.
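The Schmitt-trigger behaviour, with its separate operate and release points, can be modelled in a few lines (the threshold values in millitesla are hypothetical, not taken from any datasheet):

```python
class DigitalHallSensor:
    """Digital Hall output with built-in hysteresis (two-state, no bounce)."""

    def __init__(self, operate_mt=30.0, release_mt=20.0):
        self.operate_mt = operate_mt  # flux density that switches the output ON
        self.release_mt = release_mt  # lower flux density that switches it OFF
        self.on = False

    def update(self, flux_mt):
        if not self.on and flux_mt >= self.operate_mt:
            self.on = True
        elif self.on and flux_mt <= self.release_mt:
            self.on = False
        return self.on
```

Because the release point sits below the operate point, a flux level between the two leaves the output unchanged, which is exactly what suppresses oscillation at the switching boundary.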
There are two basic types of digital Hall effect sensor, Bipolar and Unipolar. Bipolar sensors require a positive
magnetic field (south pole) to operate them and a negative field (north pole) to release them while unipolar sensors
require only a single magnetic south pole to both operate and release them as they move in and out of the magnetic
field.
Most Hall effect devices cannot directly switch large electrical loads, as their output drive capability is very small, around 10 to 20 mA. For large current loads, an open-collector (current-sinking) NPN transistor is added to the output.
This transistor operates in its saturated region as an NPN sink switch which shorts the output terminal to ground whenever the applied flux density is higher than the “ON” pre-set point.
The output switching transistor can be an open-emitter configuration, an open-collector configuration, or both, providing a push-pull output that can sink enough current to directly drive many loads, including relays, motors, LEDs, and lamps.

Hall Effect Applications


Hall effect sensors are activated by a magnetic field, and in many applications the device can be operated by a single permanent magnet attached to a moving shaft or device. There are many different types of magnet movement, such as “head-on”, “sideways”, “push-pull” or “push-push” sensing movements. Whichever configuration is used, to ensure maximum sensitivity the magnetic lines of flux must always be perpendicular to the sensing area of the device and must be of the correct polarity.

Positional Detector

This head-on positional detector will be “OFF” when there is no magnetic field present (0 gauss). When the permanent magnet's south pole (positive gauss) is moved perpendicularly towards the active area of the Hall effect sensor, the device turns “ON” and lights the LED. Once switched “ON”, the Hall effect sensor stays “ON”.
To turn the device and therefore the LED “OFF” the magnetic field must be reduced to below the release point for
unipolar sensors or exposed to a magnetic north pole (negative gauss) for bipolar sensors. The LED can be replaced
with a larger power transistor if the output of the Hall Effect Sensor is required to switch larger current loads.
Proximity sensors:
A proximity sensor detects an object when the object approaches within the detection boundary of the sensor. Proximity sensors are used in various facets of manufacturing for detecting the approach of metal objects. Here we discuss the inductive and capacitive proximity sensors as object detectors.
Common types of non-contact proximity sensors include inductive proximity sensors, capacitive proximity sensors, ultrasonic proximity sensors, and photoelectric sensors. Hall-effect sensors detect a change in the polarity of a magnetic field.

Inductive & Capacitive Proximity Sensor:


Inductive Proximity Sensor:

An inductive sensor is an electronic proximity sensor which detects metallic objects without touching them. When the detecting distance is small (about 5 mm to 1 inch) and the application calls for metal sensing, the inductive proximity (IP) sensor provides the needed solution.

Working Principle of Inductive Proximity Sensors:

Inductive proximity sensors operate on the electrical principle of inductance: the phenomenon where a fluctuating current, which by definition has a magnetic component, induces an electromotive force (EMF) in a target object. In circuit design, inductance is measured in henrys (H). To amplify the inductance effect, the sensor winds wire into a tight coil and runs a current through it.


Components and working of the Inductive Proximity Sensor: An inductive proximity sensor has four elements: a coil, an oscillator, a trigger circuit, and an output. The oscillator is an inductive-capacitive tuned circuit that creates a radio frequency. The electromagnetic field produced by the oscillator is emitted from the coil away from the face of the sensor. The circuit has just enough feedback from the field to keep the oscillator going. When a metal target enters the field, eddy currents circulate within the target. This places a load on the sensor, decreasing the amplitude of the electromagnetic field. As the target approaches the sensor, the eddy currents increase, increasing the load on the oscillator and further decreasing the amplitude of the field. The trigger circuit monitors the oscillator's amplitude and, at a predetermined level, switches the output state of the sensor from its normal condition (on or off). As the target moves away from the sensor, the oscillator's amplitude increases; at a predetermined level the trigger switches the output state of the sensor back to its normal condition.
Advantages:
01. They are very accurate compared to other technologies.
02. They have a high switching rate.
03. They can work in harsh environmental conditions.

Disadvantages:
01. They can detect only metallic targets.
02. The operating range may be limited.


Capacitive Proximity Sensors:

Capacitive proximity sensors can detect metallic and also non-metallic targets such as paper, wood, plastic, glass, powder and liquid without physical contact. Capacitive proximity sensors sense “target” objects due to the target's ability to be electrically charged. Since even non-conductors can hold charge, just about any object can be detected with this type of sensor.

Working Principle of Capacitive Proximity Sensors:

The capacitive proximity sensor works on the capacitor principle: it detects changes in the capacitance between the sensing object and the sensor. The amount of capacitance varies depending on the size and distance of the sensing object.

Components and working of the Capacitive Proximity Sensor: The main components of the capacitive proximity sensor are a plate, an oscillator, a threshold detector and the output circuit.
The plate inside the sensor acts as one plate of a capacitor, the target acts as the other plate, and the air acts as the dielectric between them. As the object comes close to the sensor plate, the capacitance increases, and as the object moves away, the capacitance decreases. The detector circuit checks the amplitude of the output from the oscillator and switches the output accordingly. The capacitive sensor can detect any target whose dielectric constant is greater than that of air.
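The distance dependence can be illustrated with the idealised parallel-plate formula C = ε0·εr·A/d (an idealisation only; real sensor geometry is more complicated, and the plate area below is an arbitrary example value):

```python
EPS0 = 8.854e-12  # permittivity of free space, in farads per metre

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Idealised parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m
```

Halving the gap doubles the capacitance, and a target with a higher dielectric constant than air (eps_r > 1) raises it further, which is the change the threshold detector responds to.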
Advantages:
01. They can detect both metallic and non-metallic targets.
02. Good stability.
03. High speed.
04. Good resolution.
05. Low power usage.
06. Low cost.

Disadvantages:
01. They are affected by temperature and humidity.
02. They can be triggered by dust, moisture, etc.
03. They are sensitive to noise.
04. They are difficult to design.
05. Linearity is not good.
06. They are not as accurate as inductive sensors.

WHY MONITOR VIBRATION?

Global competition and pressure on corporate performance make productivity a primary concern for any business in the 90's. Machinery vibration monitoring programs are effective in reducing the overall operating costs of industrial plants. Vibrations produced by industrial machinery are vital indicators of
machinery health. Machinery monitoring programs record a machine's vibration history. Monitoring
vibration levels over time allows the plant engineer to predict problems before serious damage
occurs. Machinery damage and costly production delays caused by unforeseen machinery failure
can be prevented. When impending problems are discovered early, the plant engineer has the
opportunity to schedule maintenance and reduce downtime in a cost effective manner. Vibration
analysis is used as a tool to determine machine condition and the specific cause and location of
machinery problems. This expedites repairs and minimizes costs.

COMMON VIBRATION SENSORS

Critical to vibration monitoring and analysis is the machine mounted sensor. Three parameters
representing motion detected by vibration monitors are displacement, velocity, and acceleration.
These parameters are mathematically related and can be derived from a variety of motion sensors.
Selection of a sensor proportional to displacement, velocity or acceleration depends on the
frequencies of interest and the signal levels involved. Figure 1 shows the relationship between
velocity and displacement to constant acceleration. Sensor selection and installation is often the
determining factor in accurate diagnoses of machinery condition.
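The mathematical relationship among the three motion parameters explains the sensor-selection rule above: for a sinusoidal vibration x(t) = X·sin(2πf·t), the velocity amplitude is V = 2πf·X and the acceleration amplitude is A = (2πf)²·X. The short sketch below (with illustrative, assumed numbers) shows why a fixed displacement produces tiny accelerations at low frequency but large ones at high frequency, which is why displacement probes suit low-frequency sleeve-bearing measurements while accelerometers suit higher frequencies.

```python
# Relationship between displacement, velocity, and acceleration
# amplitudes for sinusoidal vibration x(t) = X*sin(2*pi*f*t).
import math


def vibration_amplitudes(displacement_m, freq_hz):
    """Return (velocity, acceleration) amplitudes for a given
    displacement amplitude and vibration frequency."""
    omega = 2 * math.pi * freq_hz          # angular frequency, rad/s
    velocity = omega * displacement_m       # V = omega * X
    acceleration = omega ** 2 * displacement_m  # A = omega^2 * X
    return velocity, acceleration


# Same 10 um displacement amplitude at a low and a high frequency
# (purely illustrative values):
for f in (10, 1000):
    v, a = vibration_amplitudes(10e-6, f)
    print(f"{f:5d} Hz: v = {v * 1000:7.3f} mm/s, a = {a / 9.81:8.3f} g")
```

At 10 Hz the 10 µm motion produces only a few thousandths of a g, easily measured as displacement but buried in accelerometer noise; at 1 kHz the same displacement corresponds to tens of g, ideal for an accelerometer.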

Displacement Sensors
Displacement sensors are used to measure shaft motion and internal clearances. Monitors have
used non-contact proximity sensors such as eddy probes to sense shaft vibration relative to bearings
or some other support structure. These sensors are best suited for measuring low frequency and low
amplitude displacements typically found in sleeve bearing machine designs. Piezoelectric
displacement transducers (doubly integrated accelerometers) have been developed to overcome
problems associated with mounting non-contact probes, and are more suitable for rolling element
bearing machine designs. Piezoelectric sensors yield an output proportional to the absolute motion
of a structure, rather than relative motion between the proximity sensor mounting point and target
surface, such as a shaft.

Laser Flowmeter (Optical Flowmeter) Sensors


Optical flowmeters use light to determine flow rate. Small particles which accompany natural and industrial gases pass through two laser beams focused a short distance apart in the flow path in a pipe by illuminating optics. Laser light is scattered when a particle crosses the first beam. The detecting optics collect the scattered light on a photodetector, which then generates a pulse signal. As the same particle crosses the second beam, the detecting optics collect the scattered light on a second photodetector, which converts the incoming light into a second electrical pulse. By measuring the time interval between these pulses, the gas velocity is calculated as v = d/t, where d is the distance between the laser beams and t is the time interval.
Laser-based optical flowmeters measure the actual speed of particles, a property which is not dependent on thermal
conductivity of gases, variations in gas flow or composition of gases. The operating principle enables optical laser
technology to deliver highly accurate flow data, even in challenging environments which may include high
temperature, low flow rates, high pressure, high humidity, pipe vibration and acoustic noise.
Optical flowmeters are very stable with no moving parts and deliver a highly repeatable measurement over the life
of the product. Because distance between the two laser sheets does not change, optical flowmeters do not require
periodic calibration after their initial commissioning. Optical flowmeters require only one installation point, instead
of the two installation points typically required by other types of meters. A single installation point is simpler,
requires less maintenance and is less prone to errors.
Commercially available optical flowmeters are capable of measuring flow from 0.1 m/s to faster than 100 m/s
(1000:1 turn down ratio) and have been demonstrated to be effective for the measurement of flare gases from oil
wells and refineries, a contributor to atmospheric pollution. [15]
The two beams detect the passage of any light-scattering particles carried along by the moving fluid:

v = d / t

Where,
v = velocity of the particle
d = distance separating the laser beams
t = time difference between sensor pulses

As a particle passes through the first laser beam, it redirects the light away from its normal straight-line path in such a way that an optical sensor (one per beam) detects the scattered light and generates a pulse signal.

As that same particle passes through the second beam, the scattered light excites a second optical sensor to generate a
corresponding pulse signal.

The time delay between two successive pulses is inversely proportional to the velocity of that particle.
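The time-of-flight calculation above can be sketched in a few lines. This is an illustrative example only, not a real flowmeter's firmware; the beam spacing and pulse timestamps are assumed values.

```python
# Sketch of the optical flowmeter time-of-flight calculation, v = d / t,
# from the pulse timestamps of the two photodetectors (assumed values).


def flow_velocity(beam_spacing_m, t_pulse1_s, t_pulse2_s):
    """Velocity of a particle crossing two beams separated by beam_spacing_m."""
    dt = t_pulse2_s - t_pulse1_s
    if dt <= 0:
        raise ValueError("second pulse must arrive after the first")
    return beam_spacing_m / dt


# Beams 0.5 mm apart; pulses arrive 25 us apart (hypothetical readings):
v = flow_velocity(0.5e-3, 0.0, 25e-6)
print(f"gas velocity = {v:.1f} m/s")  # 0.0005 m / 25e-6 s = 20 m/s
```

Note that for a fixed beam spacing the measured time delay shrinks as the gas speeds up, which is the inverse proportionality described above.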
