Instruments and Measurement Systems - 1

This document defines key terms used in electrical measuring instruments. It discusses concepts like true value, accuracy, precision, error, uncertainty, sensitivity, resolution, loading effect, hysteresis, stability, bias, noise, and types of errors. Accuracy describes how close a measurement is to the true value, while precision refers to the consistency of repeated measurements. Error is the difference between the measured and true values. Random errors vary unpredictably, while systematic errors are consistent and can be due to instrument issues. Dynamic characteristics describe how an instrument responds over time to changing inputs.

DEFINITIONS USED IN ELECTRICAL MEASURING INSTRUMENTS:

True Value:
It is the exact value, or the perfectly correct value, in any measuring scheme.
It is defined as the average of an infinite number of measured values taken when the average deviation due to the various contributing factors approaches zero.
The true value is one which we cannot reach by experiment. In actual practice, the true value is usually taken from a laboratory standard or obtained with all possible error-cancelling provisions.
Accuracy:
We cannot say that any measurement will be exactly correct.
The term accuracy is used to express how near the measured value is to the true value.
When we say the readings obtained are very accurate, it means the readings are true for all practical purposes.
It is defined as the degree of closeness with which the instrument reading approaches the true value of the quantity being measured. Accuracy can be expressed in three ways:
1. Point accuracy
2. Accuracy as the percentage of scale of range
3. Accuracy as percentage of true value.
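As an illustration of the difference between the last two forms, here is a minimal sketch in Python; the instrument range, accuracy figure and applied value are hypothetical, chosen only for the example:

# Hypothetical example: a voltmeter with a 0-100 V full-scale range
# is specified as accurate to +/-1 % of full scale.
full_scale = 100.0          # V
accuracy_fs = 0.01          # +/-1 % of the full-scale value

true_value = 20.0           # V, value actually applied
max_error = accuracy_fs * full_scale            # +/-1 V regardless of the reading

# Expressed as a percentage of the true value, the possible error is much larger
# at the low end of the scale than the full-scale figure suggests.
error_pct_of_true = 100.0 * max_error / true_value
print(f"Possible error: +/-{max_error:.1f} V "
      f"= +/-{error_pct_of_true:.1f} % of the true value")   # +/-5.0 % of 20 V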
Precision / Reproducibility:
Precision refers to the ability of an instrument to give consistent readings. When we say an instrument is a precise instrument, it means that the instrument will give uniformly equal readings, repeatedly, for a given quantity measurement.
It is defined as the degree of closeness with which a given quantity may be repeatedly measured. A high value of reproducibility means a low value of drift. No drift means that, with a given input, the measured values do not vary with time.
Drift:
Drift is an undesired gradual departure of the instrument output over a period of time that is unrelated to changes in input, operating conditions or load;
or
it is the gradual shift in the indication or record of the instrument over an extended period of time, during which the true value of the variable does not change. Drift is an undesirable quality in instruments, so instruments are properly guarded against drift.
Drift is of three types:
1. Zero drift
2. Span drift
3. Zonal drift
Error:
The value indicated by an electrical instrument is termed the measured (actual) value, while the perfect reading is the true value. Error is therefore defined as the difference between the measured value and the true value.
Error = Measured Value - True Value.
Uncertainty:
Uncertainty denotes the range of error, i.e. the region in which one guesses the error to be.
Sensitivity:
It means the ability to sense readily and accurately a slight change in the input quantity.
When an instrument reacts to even a slight difference in the input quantity, we can say that the instrument is very sensitive.
It is defined as the ratio of the magnitude of the change in the output signal to the magnitude of the change in the input signal.
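A minimal numerical sketch of this ratio, using hypothetical values for a temperature sensor:

# Hypothetical example: the output of a temperature sensor changes by 0.4 mV
# when the input temperature changes by 10 degrees C.
delta_output_mv = 0.4      # change in output signal (mV)
delta_input_degC = 10.0    # change in input quantity (degrees C)

sensitivity = delta_output_mv / delta_input_degC
print(f"Sensitivity = {sensitivity} mV/degC")   # 0.04 mV per degree C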
Scale readability:
Indicates the closeness with which the scale can be read.
Repeatability:
It is defined as the variation of scale reading; it is a measure of closeness with which a given
input can be measured over and over again.

Stiction (static friction):


It is the force or torque that is necessary just to initiate motion from rest.
Resolution:
In instrumentation, resolution means the smallest change in the input signal that can be detected by the instrument. Resolution is usually expressed as a fraction or percentage of the full-scale value.
Instrument Efficiency:
It is defined as the ratio of measured quantity to the power taken by the instrument at full scale
deflection.
Threshold:
The minimum value of the input at which the instrument begins to provide a reading.
Linearity:
The ability of the instrument to reproduce the input quantity linearly at the instrument output.
Dead zone:
It is the range within which the variable can vary without being detected by the instrument.
Speed of response:
The quickness with which an instrument responds to a change in the measured variable.
Dead Time:
The time required for the measurement system to begin to respond to a change in input. Dead time is the time lag in the response of the instrument to an input, i.e., it is the time before the instrument begins to respond to any change in the input quantity.
Loading effect:
The inability of the system used for the measurement to faithfully measure, record or control the
quantity being measured.
When any measuring instrument is connected to the system, it consumes some energy from the
system thereby disturbing (reducing, distorting the waveform) the quantity being measured. This
way the system becomes incapable of making correct measurement.
Tolerance:
It is the range of inaccuracy which can be tolerated in measurement
Hysteresis effect
Due to frictional effects in an instrument, the readings obtained for increasing values of the measured quantity may differ somewhat from those obtained for the corresponding decreasing values. Thus, two different values are obtained for the same input quantity under increasing and decreasing conditions.
The hysteresis effect is the difference between the readings obtained for the same input when it is approached first with increasing and then with decreasing values.
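A small sketch of how the hysteresis of an instrument might be quantified from test data; the readings and the full-scale value below are hypothetical:

# Hypothetical calibration readings for the same set of inputs,
# taken first with the input increasing and then decreasing.
inputs       = [0, 25, 50, 75, 100]           # true applied values
reading_up   = [0.0, 24.6, 49.5, 74.4, 99.8]  # indicated, increasing input
reading_down = [0.4, 25.3, 50.4, 75.2, 99.8]  # indicated, decreasing input
full_scale   = 100.0

# Maximum difference between the two curves, expressed as % of full scale.
max_hysteresis = max(abs(u - d) for u, d in zip(reading_up, reading_down))
print(f"Hysteresis = {100.0 * max_hysteresis / full_scale:.2f} % of full scale")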
Stability:
The ability of the instrument to maintain its quality of measurement over a long period of time.
Bias:
Bias is a small amount of constant error which exists in the instrument over its full range of
measurement.
Backlash:
It is defined as the maximum distance or angle through which any part of a mechanical system
may be moved in one direction without applying appreciable force or motion to the next part in a
mechanical system.
Noise:
It is an extraneous disturbance generated in a measuring system which conveys no meaningful
information with respect to desired signal.
Dynamic Characteristics
When an instrument is required to measure a time-varying process variable, one has to be concerned with its "dynamic characteristics"; these characteristics quantify the dynamic relation between the input and the output.
Dynamic inputs are of two types: steady-state periodic and transient.
A steady-state periodic quantity is one whose magnitude has a definite repeating time cycle, whereas the time variation of a transient quantity does not repeat.
The response of a measurement system subjected to a time varying input can be divided into
steady state response and transient response.
Steady state response is simply the response when time reaches infinity; Transient response in
measurement system is defined as the part of response which goes to zero as time becomes large.
The Dynamic characteristic of a measurement system are:
- speed of response …… desirable
- measuring lag …… undesirable
- fidelity …… desirable
- dynamic error …… undesirable
1. The speed of response or responsiveness is defined as the rapidity with which a
measurement system responds to change in the measured quantity.
2. Measuring lag refers to retardation or delay in the response of a measurement system to
changes in measured quantity. The lag is caused by conditions such as capacitance,
inertia or resistance. The measuring lags are of the following two types:
i. Retardation type lag: in this type of measuring lag the response of the
measurement system begins immediately after a change in measured quantity has
occurred.
ii. Time delay type lag: in this case the response of the measurement system begins
after a dead time after the application of the input.
3. Fidelity: this is the degree to which a measurement system indicates changes in the
measured quantity without any dynamic error. It refers to the ability of the system to
reproduce the output in the same form as the input.
4. Dynamic error: the dynamic error, also called “measurement error” is the difference
between the true value of the quantity changing with time and the value indicated by the
measurement system if no static error is assumed.
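The measuring lag and dynamic error of a simple instrument are often illustrated with a first-order model; the sketch below assumes such a model (the time constant and step input are hypothetical, and the first-order behaviour is an assumption, not something stated above):

import math

# Assume (hypothetically) a first-order instrument with time constant tau,
# subjected to a step change in the measured quantity from 0 to 100 units.
tau = 2.0           # s, a measure of the instrument's measuring lag
step_input = 100.0  # true value after the step

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    indicated = step_input * (1.0 - math.exp(-t / tau))  # instrument reading
    dynamic_error = step_input - indicated               # true value minus indicated value
    print(f"t = {t:4.1f} s  reading = {indicated:6.2f}  dynamic error = {dynamic_error:6.2f}")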
Overshoot
The maximum amount by which the pointer moves beyond the steady state.
Gross errors
These errors occur due to human mistakes in reading instruments and recording and
calculating results of measurement.
These errors can be avoided by adopting two means:
i. Immense care should be taken while taking the reading and recording the data
ii. Two, three, or even more readings should be taken for the quantity being measured.
Systematic errors
The systematic errors are repeated consistently with the repetition of the experiment and are
caused by such effects as:
- sensitivity shift;
- zero offset;
- known non-linearity.
These errors may be instrumental, environmental or observational.

Random errors
These are the errors that remain even after the systematic errors are more or less eliminated.
These errors are due to a multitude of small effects which together give rise to variation in the readings of the instruments. They are also referred to as residual errors because they remain after all known sources of error have been taken care of.
The random errors are accidental, small and independent.
 They vary in an unpredictable manner
 The magnitude and direction of these errors cannot be predicted.
The most common sources of these errors are:
- friction in instrument movement
- backlash in the instrument
- parallax errors between pointer and scale
- finite dimensions of the pointer and scale divisions
- hysteresis in elastic members
- mechanical vibrations
NOTE: sources of error include noise, response time, design limitations, energy exchanged by interaction, transmission, deterioration of the measuring system, ambient influences on the measuring system, and errors of observation and interpretation.
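Because random errors vary in an unpredictable manner, they are normally treated statistically. A minimal sketch of such a treatment, using hypothetical repeated readings of the same quantity:

import statistics

# Hypothetical repeated readings of the same quantity (e.g. a voltage in volts).
readings = [10.03, 9.98, 10.01, 10.05, 9.97, 10.02, 9.99, 10.00]

mean = statistics.mean(readings)    # best estimate of the value
stdev = statistics.stdev(readings)  # spread caused by random errors

print(f"Mean reading      : {mean:.3f}")
print(f"Standard deviation: {stdev:.3f}")  # smaller spread -> higher precision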

1. FUNCTIONS OF INSTRUMENTS AND MEASUREMENT SYSTEM

1.1 Introduction

Dairy processing unit operations mainly involve heating, cooling, separating, drying or
freezing of the products. These unit operations are carried out under varying conditions
of temperatures, pressures, flows and physical compositions. The measurement and
control of these variable factors at the various stages of processing call for accurate
and efficient instruments, in addition to dependence upon human skills. With the
advent of large-scale milk handling plants, automatic operation and control through
efficient instrumentation and automation have become even more necessary. Utilities
such as steam, water, electricity, air, fuel, etc. have to be measured and controlled at
appropriate points in the plant. Automatic control instruments are employed to measure
and control the temperature, pressure, flow and level of these utilities. The overall aim
of the instrumentation/ automation is to improve the product quality and enhance the
plant efficiency for better economic returns.

1.2 Variable

A characteristic number or quantity that increases or decreases over time, or takes
different values in different situations is known as a variable. It is a factor that can be
assigned a measurable dimension of some kind that varies, e.g., length, diameter, area,
flow, weight, cost or life-span etc. A dependent variable is any measurable factor whose
behavior is controlled by another variable. An independent variable is any measurable
factor that produces change or reaction in another variable. A variable is something that
is changed or altered in an experiment. In processing of food products the variables
involved could be temperature and pressure of steam, processing time, flow rate of
various streams, etc. For example, determining the effect of temperature and humidity
on storage of a food product will provide evidence on the shelf life of the product under
different storage conditions. A variable is liable to change, may have a range of possible
values and is liable to deviate from an established extension type.

1.3 Measurement

When we decide to study a variable we need to devise some way to measure it. Some
variables are easy to measure and others are very difficult. The values of variables are
made meaningful by quantifying them into specific units. For example, instead of saying
that a particular fluid is hot, we can make a measurement and state that the fluid has
a temperature of 80°C.

Measurement is the collection of quantitative data. A measurement is made by comparing a
quantity with a standard unit. An example of measurement is the use of a ruler to
determine the length of a piece of paper.

Measurement is thus essentially an act or the result of comparison between the
quantity (whose magnitude is unknown) and a predefined standard. Since both the
quantities are compared, the result is expressed in numerical values. In the physical
sciences, quality assurance, and engineering, measurement is the activity of obtaining
and comparing physical quantities of real-world objects and events. Established
standard objects and events are used as units, and the process of measurement gives a
number relating the item under study and the referenced unit of measurement.
There are two essential requirements of the measurements, in order to make the
results meaningful;
(i) The standard used for comparison purposes must be accurately defined and
should be commonly accepted.
(ii) The apparatus used and the method adopted must be provable.

1.4 Unit of Measurement


A unit of measurement is a definite magnitude of a physical quantity, defined and
adopted by convention and or by law, that is used as a standard for measurement of
the same physical quantity. Any other value of the physical quantity can be expressed
as a simple multiple of the unit of measurement. For example, length is a physical
quantity. The meter is a unit of length that represents a definite predetermined length.
When we say 10 meters (or 10 m), we actually mean 10 times the definite
predetermined length called "meter".

The definition, agreement, and practical use of units of measurement have played a
crucial role in human endeavor from early ages up to this day. Different systems of
units used to be very common. Now there is a global standard, the International
System of Units (SI), the modern form of the metric system.

The International System of Units (abbreviated as SI from the French-language
name Système International d'Unités) is the modern revision of the metric system. It is
the world's most widely used system of units, both in everyday commerce and in
science. The SI was developed in 1960 from the meter-kilogram-second (MKS) system,
rather than the centimeter-gram-second (CGS) system, which, in turn, had many
variants. During its development the SI also introduced several newly named units that
were previously not a part of the metric system. The original SI units for the six basic
physical quantities were:

i. meter (m) : SI unit of length
ii. second (s) : SI unit of time
iii. kilogram (kg) : SI unit of mass
iv. ampere (A) : SI unit of electric current
v. degree Kelvin (K) : SI unit of thermodynamic temperature
vi. candela (cd) : SI unit of luminous intensity
The mole was subsequently added to this list and the degree Kelvin renamed the kelvin.

There are two types of SI units, base units and derived units. Base units are the simple
measurements for time, length, mass, temperature, amount of substance, electric
current and light intensity. Derived units are constructed from the base units; for
example, the watt, i.e. the unit for power, is defined from the base units as kg·m²·s⁻³.
Other physical properties may be measured in compound units, such as material
density, measured in kg/m³.
DEFINITIONS OF STANDARD UNITS

1.5 Significance of Measurements

Science is based on objective observation of the changes in variables. The greater our
precision of measurement the greater can be our confidence in our observations. Also,
measurements are always less than perfect, i.e., there are errors in them. The more we
know about the sources of errors in our measurements the less likely we will be to draw
erroneous conclusions. With the progress in science and technology, new phenomena
and relationships are constantly being discovered and these advancements require
newer developments in measurement systems. Any invention is not of any practical
utility unless it is backed by actual measurements. The measurements thus confirm the
validity of a given hypothesis and also add to its understanding. This is a continuous
chain that leads to new discoveries with new and more sophisticated measurement
techniques. While elementary measurements require only ordinary methods of
measurement, the advanced measurements are associated with sophisticated methods
of measurement. The advancement of Science and Technology is therefore dependent
upon a parallel progress in measurement techniques. It can safely be said that the
progress in science and technology of any country can be assessed by the way in
which data are acquired by measurement and processed.

In R&D applications, the design of equipment and processes requires basic
engineering design data on the properties of the input raw materials and processed
products. The operation and maintenance of equipment at optimal processing
variables, to achieve the best quality product and energy-efficient equipment utilization,
require the monitoring and control of several process variables. Both these functions
require measurements. The economical design, operation and maintenance require a
feedback of information. This information is supplied by appropriate measurement
systems.

1.6 Function of Instruments and Measurement Systems

The measurement systems and the instruments may be classified based upon the
functions they perform. There are four main functions performed by them: indicating,
signal processing, recording and control.
i. Indicating Function: This function includes supplying information
concerning the variable quantity under measurement. Several types of
methods could be employed in the instruments and systems for this purpose.
Most of the time, this information is obtained as the deflection of a pointer of
a measuring instrument.
ii. Recording Function: In many cases the instrument makes a written record,
usually on paper, of the value of the quantity under measurement against
time or against some other variable. This is a recording function performed
by the instrument. For example, a temperature indicator / recorder in the
HTST pasteurizer gives the instantaneous temperatures on a strip chart
recorder.
iii. Signal Processing: This function is performed to process and modify the
measured signal to facilitate recording / control.
iv. Controlling Function: This is one of the most important functions,
especially in the food processing industries where the processing operations
are required to be precisely controlled. In this case, the information is used
by the instrument or the systems to control the original measured variable or
quantity.

Thus, based on the above functions, there are three main groups of instruments. The
largest group has the indicating function. Next in line is the group of instruments which
have both indicating and recording functions. The last group falls into a special
category and performs all three functions, i.e., indicating, recording and controlling.

In this lesson only those instruments would be discussed whose functions are mainly
indicating and recording, especially those instruments which are used for engineering
analysis purposes.

1.7 Basic Requirements of a Measurement System / Instrument

The following are the basic requirements of a good quality measurement system /
instrument:
a) Ruggedness
b) Linearity
c) No hysteresis
d) Repeatability
e) High output signal quality
f) High reliability and stability
g) Good dynamic response

1.8 Applications of Measurement Systems

Before discussing the instrument characteristics, construction and working, it is
pertinent to understand the various ways in which the measuring instruments are put in
use. Different applications of the instruments and measurement systems are:
i. Monitoring a process/operation
ii. Control a process/operation
iii. Experimental engineering analysis
i. Monitoring a Process/Operation
There are several applications of measuring instruments that mainly have a
function of monitoring a process parameter. They simply indicate the value or
condition of parameter under study and these readings do not provide any control
operation. For example, a speedometer in a car indicates the speed of the car at a
given moment, an ammeter or a voltmeter indicates the value of current or voltage
being monitored at a particular instant. Similarly, water and electric energy meters
installed in homes and industries provide the information on the commodity used so
that its cost could be computed and realized from the user.

ii. Control a Process/Operation


Another application of instruments is in automatic control systems.
Measurement of a variable and its control are closely associated. To control a
process variable, e.g., temperature, pressure or humidity etc., the prerequisite
is that it is accurately measured at any given instant and at the desired location.
Same is true for all other process parameters such as position, level, velocity
and flow, etc. and the servo-systems for these parameters. Let us assume that
the output variable to be controlled is non-electrical quantity and the control
action is through electrical means. Since the output variable is a non-electrical
quantity, it is converted into a corresponding electrical form by a transducer
connected in the feedback loop. The input to the controller is reference which
corresponds to the desired value of the process parameter. The output process
variable is compared with the reference or desired value with the help of a
comparator. In case the desired value and the process variable differ, there is a
resultant error signal. This error signal is amplified and then fed to an actuator,
which produces power to drive the controlled circuitry. The corrective action
goes on till the output is at the same level as the input which corresponds to the
desired output. At this stage, there is no error signal and hence there is no input
to the actuator and the control action stops. Common examples of this
application are the domestic appliances, such as, refrigerator, air conditioner or
a hot air oven. All of these employ a thermostatic control. A temperature
measuring device (often a bimetallic element) measures the temperature in the
room, refrigerated chamber or in the oven and provides the information
necessary for appropriate functioning of the control system in these appliances.
iii. Experimental Engineering Analysis
Experimental engineering analysis is carried out to find solutions to
engineering problems. These problems may involve theoretical design or practical
analysis. The exact experimental method for engineering analysis will depend upon
the nature of the problem. The analysis can be grouped into the following categories:
1. Obtaining solutions of mathematical relationships with the help of
analogies.
2. Formulating the generalized empirical relationships in the cases where no
proper theoretical backing exists.
3. Testing the validity of theoretical predictions.
4. Generating the basic engineering design data on the properties of the
input raw materials and processed products for R&D application.
5. Design of process equipment for specific applications.
6. Optimization of machine / system parameters, variables and performance
indices.

2. CALIBRATION

2.1 DEFINITIONS

1. Calibration is the act of comparing a device under test (DUT) of an unknown
value with a reference standard of a known value.
2. Calibration is a comparison between a known measurement (the standard) and
the measurement using your instrument.
3. Calibration is the process of making an adjustment or marking a scale so that the
readings of an instrument agree with the accepted & the certified standard.
In other words, it is the procedure for determining the correct values of measurand by
comparison with the measured or standard ones. The calibration offers a guarantee to
the device or instrument that it is operating with required accuracy, under stipulated
environmental conditions. The calibration procedure involves the steps like visual
inspection for various defects; installation according to the specifications, zero
adjustment etc., and the calibration is the procedure for determining the correct values
of the measurand by comparison with standard ones. The device with which the
comparison is made is called the standard instrument. The instrument whose value is
unknown & which is to be calibrated is called the test instrument. Thus, in calibration, the test
instrument is compared with the standard instrument.

Typically, the accuracy of the standard should be ten times the accuracy of the
measuring device being tested. However, an accuracy ratio of 3:1 is acceptable by most
standards organizations. Calibration of your measuring instruments has two objectives:

i. It checks the accuracy of the instrument.
ii. It determines the traceability of the measurement.
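A small sketch of the accuracy-ratio check described above; the instrument and standard accuracies used here are hypothetical:

# Hypothetical example: checking the test accuracy ratio (TAR) between a
# reference standard and the device being calibrated.
device_accuracy = 0.5      # % of reading, device under test
standard_accuracy = 0.05   # % of reading, reference standard

tar = device_accuracy / standard_accuracy
if tar >= 10:
    print(f"TAR = {tar:.0f}:1 - meets the preferred 10:1 ratio")
elif tar >= 3:
    print(f"TAR = {tar:.0f}:1 - acceptable to most standards organizations (3:1)")
else:
    print(f"TAR = {tar:.1f}:1 - standard is not accurate enough for this calibration")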

In practice, calibration also includes repair of the device if it is out of calibration. A
report is provided by the calibration expert, which shows the error in measurements
with the measuring device before and after the calibration.

A person typically performs a calibration to determine the error or verify the accuracy of
the device under test’s (DUT) unknown value.

As a basic example, you could perform a calibration by measuring the temperature of a
device under test (DUT) thermometer in water at the known boiling point (100 degrees
Celsius) to learn the error of the thermometer. Because visually determining the exact
moment that the boiling point is reached can be imprecise, you could achieve a more
accurate result by placing a calibrated reference thermometer, of a precisely known
value, into the water to verify the device under test's (DUT) thermometer reading.
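A minimal sketch of the error calculation in this kind of comparison; the readings shown are hypothetical:

# Hypothetical readings taken during the boiling-water comparison described above.
reference_reading = 99.8   # degrees C, calibrated reference thermometer
dut_reading = 101.1        # degrees C, thermometer under test (DUT)

error = dut_reading - reference_reading
print(f"DUT error = {error:+.1f} degrees C")   # +1.3 degC; apply as a correction if desired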

2.2 Types of calibration methodologies:

There are two methodologies for obtaining the comparison between test instrument &
standard instrument. These methodologies are;

i. Direct comparisons
ii. Indirect comparisons
Direct comparisons:

In a direct comparison, a source or generator applies a known input to the meter under
test. The ratio of what the meter indicates to the known generator value gives the
meter's error. In this case the meter is the test instrument while the generator is the
standard instrument. The deviation of the meter from the standard value is compared with
the allowable performance limit. With the help of direct comparison, a generator or source can also be
calibrated.
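A short sketch of this pass/fail check against an allowable performance limit, with hypothetical values:

# Hypothetical direct comparison: a calibrated source applies a known value
# and the meter under test indicates a slightly different value.
applied_value = 50.000     # known value from the standard source/generator
meter_indication = 50.12   # value indicated by the meter under test
allowable_limit = 0.10     # allowable deviation for this meter

deviation = meter_indication - applied_value
status = "PASS" if abs(deviation) <= allowable_limit else "FAIL"
print(f"Deviation = {deviation:+.3f}  ->  {status}")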
Indirect comparisons:

In an indirect comparison, the test instrument is compared with the response of a standard
instrument of the same type, i.e., if the test instrument is a meter, the standard instrument is also a
meter; if the test instrument is a generator, the standard instrument is also a generator, and so
on. If the test instrument is a meter, then the same input is applied to the test meter as
well as to a standard meter.

Importance of instrument calibration

The accuracy of all measuring devices degrades over time. This is typically caused by
normal wear and tear. However, changes in accuracy can also be caused by electric or
mechanical shock or a hazardous manufacturing environment. Depending on the type
of instrument and the environment in which it is being used, it may degrade very
quickly or over a long period of time. The bottom line is that calibration improves the
accuracy of the measuring device. Accurate measuring devices improve product quality.

2.3 When to calibrate measuring instruments

A measuring device should be calibrated:

i. According to the recommendation of the manufacturer.
ii. After any mechanical or electrical shock.
iii. Periodically (annually, quarterly, monthly).

The hidden costs and risks associated with un-calibrated measuring instruments could
be much higher than the cost of calibration. Therefore, it is recommended that the
measuring instruments are calibrated regularly by a reputable company to ensure that
errors associated with the measurements are in the acceptable range. People who
perform calibration in laboratories include:

i. Metrologists
ii. Lab managers
iii. Calibration engineers
iv. Calibration technicians

People who perform calibration work in the field include:


i. Manufacturing engineers
ii. Instrument technicians

2.4 CALIBRATION TOOLS AND TESTING EQUIPMENT


Process calibration workload might include test and measurement equipment such as
multimeters or portable field calibrators. It might also include process instruments and
sensors, such as pressure or temperature transmitters. Electrical, temperature,
pressure, or a combination of parameters might need to be measured and adjusted.

Calibrators

A device that calibrates other equipment is sometimes referred to as a calibrator. A
calibrator is different from other types of calibration standards because it has a built-in
calibration standard as well as useful features that make it easier to calibrate
instruments. For example, the electrical calibrator shown here has connectors to allow a
user to connect a device under test easily and safely, and buttons and menu options to
help the user efficiently perform a calibration.

Example of a Calibrator

Calibration Disciplines
There are many calibration disciplines, each having different types of calibrators and
calibration references. Common calibration disciplines include but are not limited to:

i. Electrical instrumentation
ii. Radio frequency (RF)
iii. Temperature
iv. Humidity
v. Pressure
vi. Flow
2.5 ELECTRICAL INSTRUMENTS CALIBRATION PROCEDURE

There are several ways to calibrate an instrument depending on the type of instrument
and the chosen calibration scheme. There are two general calibration schemes:

1. Calibration by comparison with a source of known value. An example of
a source calibration scheme is measuring an ohmmeter using a calibrated
reference standard resistor. The reference resistor provides (sources) a known
value of the ohm, the desired calibration parameter. A more sophisticated
calibration source than the resistor is a multifunction calibrator that can source
known values of resistance, voltage, current, and possibly other electrical
parameters. A resistance calibration can also be performed by measuring a
resistor of unknown value (not calibrated) with both the DUT instrument and a
reference ohm meter. The two measurements are compared to determine the
error of the DUT.
2. Calibration by comparison of the DUT measurement with the
measurement from a calibrated reference standard. A variant of the
source-based calibration is calibrating the DUT against a source of known natural
value such as a chemical melt or freeze temperature of a material like pure
water.

From this basic set of calibration schemes, the calibration options expand with each
measurement discipline.

Calibration Steps
A calibration process starts with the basic step of comparing a known with an unknown
to determine the error or value of the unknown quantity. However, in practice, a
calibration process may consist of "as found" verification, adjustment, and "as left"
verification. Many measurement devices are adjusted physically (turning an adjustment
screw on a pressure gauge), electrically (turning a potentiometer in a voltmeter), or
through internal firmware settings in a digital instrument.

For example, for some devices, the data attained in calibration is maintained on the
device as correction factors, where the user may choose to compensate for the known
correction for the device. An example of this is RF attenuators, where their attenuation
values are measured across a frequency range. The data is kept with the instrument in
the form of correction factors, which the end-user applies to improve the quality of their
measurements. It is generally assumed that the device in question will not drift
significantly, so the corrections will remain within the measurement uncertainty
provided during the calibration for the calibration interval. It is a common mistake for
people to assume that all calibration data can be used as correction factors, because
the short and long term variation of the device may be greater than the measurement
uncertainty during the calibration interval.

Non-adjustable instruments, sometimes referred to as "artifacts", such as temperature
RTDs, resistors, and Zener diodes, are often calibrated by characterization. Calibration
by characterization usually involves some type of mathematical relationship that allows
the user to use the instrument to get calibrated values. The mathematical relationships
vary from simple error offsets calculated at different levels of the required
measurement, like different temperature points for a thermocouple thermometer, to a
slope and intercept correction algorithm in a digital voltmeter, to very complicated
polynomials such as those used for characterizing reference standard radiation
thermometers.
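As a minimal sketch of characterization with a slope-and-intercept correction (the readings and reference values below are hypothetical, and the least-squares fit is just one way such a correction could be derived):

# Hypothetical characterization data: instrument readings taken at points
# whose true values are established by a reference standard.
reference = [0.0, 25.0, 50.0, 75.0, 100.0]   # true values
readings  = [0.3, 25.1, 50.2, 75.6, 100.9]   # indicated values

# Least-squares fit of reading = slope * reference + intercept.
n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(readings) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, readings)) / \
        sum((x - mean_x) ** 2 for x in reference)
intercept = mean_y - slope * mean_x

# The user applies the inverse relationship to correct future readings.
def corrected(reading):
    return (reading - intercept) / slope

print(f"slope = {slope:.4f}, intercept = {intercept:.3f}")
print(f"Corrected value for a reading of 60.4: {corrected(60.4):.2f}")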

The “as left” verification step is required any time an instrument is adjusted to ensure
the adjustment works correctly. Artifact instruments are measured “as-is” since they
can’t be adjusted, so “as found” and “as left” steps don’t apply.

A calibration professional performs calibration by using a calibrated reference standard
of known uncertainty (by virtue of the calibration traceability pyramid) to compare with
a device under test. He or she records the readings from the device under test and
compares them to the readings from the reference source. He or she may then make
adjustments to correct the device under test.

2.6 STANDARDS IN INSTRUMENT CALIBRATION

All instruments are calibrated at the time of manufacture against measurement
standards. A standard of measurement is a physical representation of a unit of
measurement. A standard means a known, accurate measure of a physical quantity.

The different classes of standards of measurement are:

i. International standards
ii. Primary standards
iii. Secondary standards
iv. Working standards

International standards

International standards are defined by international agreement. These standards, as
mentioned above, are maintained at the International Bureau of Weights and Measures
and are periodically evaluated and checked by absolute measurements in terms of the
fundamental units of physics. These international standards are not available to
ordinary users for calibration purposes. To improve the accuracy of
absolute measurements, the international units were replaced by the absolute units in
1948. Absolute units are more accurate than the international units.

Primary standards

These are highly accurate absolute standards, which can be used as ultimate reference
standards. These primary standards are maintained at national standard laboratories in
different countries. These standards representing fundamental units as well as some
electrical and mechanical derived units are calibrated independently by absolute
measurements at each of the national laboratories. These are not available for use,
outside the national laboratories. The main function of the primary standards is the
calibration and verification of secondary standards.
Secondary standards

As mentioned above, the primary standards are not available for use outside the
national laboratories. The various industries need some reference standards. So, to
protect highly accurate primary standards the secondary standards are maintained,
which are designed and constructed from the absolute standards. These are used by
the measurement and calibration laboratories in industries and are maintained by the
particular industry to which they belong. Each industry has its own standards.

Working standards

These are the basic tools of a measurement laboratory and are used to check and
calibrate the instruments used in laboratory for accuracy and the performance.

3. SPECIFICATIONS
3.1 Definition of the term Specifications
This is a written or printed description of work describing the qualities of materials and workmanship to be used.
Method of writing specifications
1. Method System
 The specifier describes in detail the materials, workmanship, installation,
and erection procedures to be used by the installer in the conduct of his
work operations in order to achieve the results expected.
 The method system can best be described as a descriptive specification.
 The specifications code sets forth specific materials and methods that are
permitted under the law.
2. Result system
 When the specifier instead elects to specify results, he places on the
installer the responsibility for securing the desired results by whatever
methods the installer chooses to use.
 The result system is best described as performance specification
 Under the performance code, materials and methods are left to the
installer and engineer, provided that performance criteria for fire
protection, structural adequacy, and sanitation are met.
 As a matter of fact, both the descriptive specification and the performance
specification can be used together in the same project specification, each
in its proper place, in order to achieve the prime objective.

3.2 Types of specifications


1. Descriptive Specification
 A detailed written description of the required properties of a product,
materials, or piece of equipment, and the workmanship required for its
proper installation.
 The products and processes are specified, but results are not specified.
 Defines exact properties of materials and methods of installation without
using proprietary names.
 When descriptive specifications are used, the burden of performance is
assumed by the preparer.
 Often difficult to write, very detailed

When to use Descriptive Specifications?


 When the Performance specification is not adequate.
 Adequate reference standards do not exist.
 Brand name is forbidden.
 The specifier has gained a wealth of information and experience from the use of
known materials and methods.

Preparing a Descriptive Specifications


 Research available products.
 Research critical features needed.
 Analyze and compare requirements with available products.
 Determine which features are best specified and which are best shown
on the drawings.
 Describe critical features.
 State the minimum acceptable requirements and be certain they can
be met.
 Provide specific information about submittals, testing and other
procedures necessary to ensure acceptable products will be provided
2. Performance Specifications
 A performance specification is a statement of required results, with criteria for
verifying compliance, but without unnecessary limitations on the methods for
achieving the results. It can also be defined as specifying an end result and
formulating the criteria for its accomplishment.
 Part 1 (required results): all desired end results must be spelled out.
 Part 2 (with criteria for verifying compliance): the criteria must be capable of
measurement, test evaluation, or other acceptable assurance.
 Part 3 (without unnecessary limitations): the methods of achieving the required
results are not restricted.

Application of Performance Specifications


 Simple performance criteria can be incorporated into any
specifications.
 A method of eliciting improved products and methods by stating the
desired results and leaving the rest to the innovative industrial
producer.
 It is possible to successfully combine performance specifying and
descriptive specifying for the same project.
 Using both for a single requirement should be avoided.
 If it is known how a specific item performs, it is not difficult to work
backwards and describe its performance in sufficient detail so that bidders
will know exactly which product is desired.

3.3 Reference Specifications


 Reference standards are requirements set by authority, custom, or general
consensus and are established as accepted criteria.
 They are published by trade associations, government, and institutional
organization.
 Reference standards are incorporated in specification by reference to a number,
a letter, or other designation. The provisions of standards so referenced become
a part of the project document.
Advantage of Reference Specifications
i. Reduction in the size of construction specifications.

Disadvantages of Reference Specifications


i. Bad reference standards coexist with good ones.
ii. Reference standards can create duplication and conflicts within the contract
documents.
iii. Standards can contain hidden choices.
iv. Standards generally refer to minimum requirements

To eliminate the problems in Reference Specifications


i. Know the standards.
ii. Incorporate the standard properly
iii. Enforce the requirement of the standard.

3.4 Proprietary Specifications


 Proprietary specifications identify the desired products by the
manufacturer's name, brand name, model number, type designation, or other
unique characteristics.
 A specification is considered a proprietary specification when the product
specified is only available from one source.

Open Proprietary Specifications – permit substitution


Closed Proprietary Specifications – prohibit substitution

Advantages
i. Close control of product selection.
ii. Preparation of more detailed and complete drawings based on precise
information obtained from manufacturers' data.
iii. Decreases the size of the specification and reduces production time.
iv. Simplification of bidding by narrowing competition and removing product
pricing as a major variable.

Disadvantages
i. Elimination or narrowing of competition.
ii. Requiring products with which the contractor has perhaps little or bad
experience.
iii. Favouring of certain products and manufacturers over others.

Closed Proprietary Specifications


 Only one product is named, or
 Several products may be named as options (a multi-product specification); with a
wide range of products it is difficult to list them all.

Advantages of Closed Specs


i. Permits design to be completed down to the smallest detail (promotes
accurate bidding).

Disadvantages of Closed Specs


i. High cost.
ii. Contractor experience.
iii. Conflict between the specifier and the supplier.

Open Proprietary Specifications


 Only one product is named.
 Several products may be named as options.

INSTRUMENT SPECIFICATIONS
There are four common specifications typically prepared for instrumentation and control
1. Instrument specification sheets
2. Control system specifications
3. Control panel (or control cabinet) specifications
4. Installation specifications.
Among the four, preparation of the instrument specification sheets is the most difficult
and time consuming. Although some operations still prepare these specifications
manually, many now use computer-based systems that can select the best instrument
to fit the process conditions, and then generate a specification sheet.
Such software packages also contain master specifications for control systems, control
panels, installation activities, and other specifications. These tools save time and help
produce consistent quality design.
Instrument specification sheets
The purpose of the instrument specification sheet is to list pertinent details for use by
engineers and vendors. The information is also used by installation and maintenance
personnel.
This specification sheet describes the instrument and provides a record of its function.
The information should be uniform in content, presentation, and terminology. And, of
course, the selection must consider all plant and process requirements and comply with
any code requirements in effect at the site.
The most common specification sheets used in instrumentation and control are for:
i. Flow measurement
ii. Level measurement
iii. Pressure measurement
iv. Temperature measurement
v. Analysers (including pH and conductivity)
vi. Control valves and regulators
vii. Pressure relief devices.
Typically, preparation of the instrument specification sheet involves several steps. If
software is used, some of the procedure can be automated. First, the process data are
completed, generally by a process or a mechanical engineer. Then, the best instrument
for the job is chosen.
The specification sheet is completed to cover such points as type of enclosure, type of
signal required, material in contact with the process, connection size, and the like.
Vendors are selected, prices solicited, and finally an order is placed.
Control system specifications
The control system document outlines the parameters for the computer-based control
system. It typically contains the requirements for code compliance, overview of the
system, and detailed requirements.
The information generally begins with a master specification in a word processor
document that can be tailored to the needs of each application. This document remains
in use and is typically needed long after the system is up and running.
The content of a typical control system specification covers
i. Field conditions (including temperature, humidity, and environmental)
ii. Hardware requirements (such as cabinets, communications devices, inputs
and outputs, controllers, and operator consoles)
iii. Software requirements (including system configuration capabilities, graphics,
alarms, trends, and reports)
iv. Service and support.
Control panel/cabinet specifications
The control panel document provides the guidelines for the design, construction,
assembly, testing, and shipping of control panels and cabinets. As with the control
system specification, the control panel specification generally originates with a master
word processor specification to allow the requirements of each application to be easily
customized.
A typical control panel specification is divided into sections covering design,
construction, testing, and shipping. The document also should address certain details,
such as nameplates, electrical and pneumatic requirements, and purging requirements,
if necessary.
All electrically operated instruments, or electrical components incorporated in a panel or
cabinet, must comply with the requirements of the current edition of the electrical code
in effect at the site. All such equipment should be approved (by UL or CSA) and bear
the approval label. ISA’s “Standards and Recommended Practices” also provide a
valuable source of information and guidelines for instrumentation.
Panel drawings may be generated with CAD tools, but the need for control panel
specifications and drawings has diminished with the proliferation of computer-based
control systems and the use of off-the-shelf cabinets. CAD drawings are still used,
however, to show wiring and component locations in the cabinets.
Installation specifications
The installation specification provides the requirements for installing instruments,
control systems, and their accessories. The contractor uses this document to estimate
the cost of the installation. Once again, the information in the specification should be
developed from a master specification document prepared in word-processor format to
allow for convenient customization.
The installation specification marks the transition point between engineering and
maintenance, which typically installs the equipment. The installation specification has
many parts, each covering a section of the installation. Typically, these sections consist
of an overview of the scope of the work.
It is followed by a description of how the instruments are to be mounted and installed,
including the connections between the process and the instruments. The specification
should also cover wiring and tubing requirements. Finally, checkout procedures should
be defined to ensure that the control system as a whole is ready for operation.
All installation work should be based on the installation specification and reference
documentation provided by the engineering phase. This reference documentation,
which forms part of the contract, clearly identifies the scope of work, thereby
minimizing misunderstandings, completion delays, and additional costs.
Drawings
The most commonly prepared drawings for instrumentation and controls are logic
diagrams, instrumentation index, loop diagrams, and interlock diagrams (or electrical
schematics). Although many companies still design drawings manually before
implementing them on a CAD system, some have moved to computer-based systems
that produce a large portion of the design automatically. Such software packages save
time and help produce a consistent design.
Logic diagrams
Logic diagrams are needed to define discrete (on/off) controls. These controls cover all
time-based and state-based logic used in process control, including PLC sequences and
hard-wired trip systems.
If the logic is simple, a written description in the control system definition or a
description on the P&ID is generally adequate. However, whenever intricate logic is
used, logic diagrams, typically drawn to conform with ANSI/ISA Standard S5.2, are
required.
Instrument index
An instrument index lists all items of instrumentation for a specific project or for a
particular plant. Its purpose is to act as a cross-reference document for each item of
instrumentation and for all documents and drawings related to the particular item. An
instrument index is typically generated and maintained on a PC using a database
program. A computer-based approach facilitates updating and retrieving data.
The instrument index is normally presented in tabular form, is generated at the start of
a project, and stays active throughout the life of the facility. The following items are
typically shown on an instrument index:
i. Tag number
ii. Description
iii. P&ID number
iv. Line/equipment number
v. Instrument specification sheet number
vi. Manufacturer’s drawing numbers
vii. Loop drawing number
viii. Interlock diagram number
ix. Location diagram number
x. Miscellaneous notes.
Some users add other information they consider important, such as the equipment
supplier and model number, installation details, purchase order number, and the like.
Loop diagrams
A loop diagram should be prepared for each instrument loop in the project that contains
more than one instrument. The only instruments not requiring loop diagrams are
interlock systems (these instruments are shown on the interlock diagrams) and local
devices such as relief valves (an instrument index entry should suffice for these
devices).
Loop diagrams are generated to show the detailed arrangement for instrumentation
components in all loops. All pneumatic and electronic devices with the same loop
number are generally shown on the same loop diagram. The content and format of loop
diagrams should conform to ANSI/ISA Standard S5.4.

Interlock diagrams
Interlock diagrams (electrical schematics) show the detailed wiring arrangement for
discrete (on/off) control. However, with the introduction and extensive use of
programmable electronic systems to perform logic functions, the use of interlock
diagrams has diminished over the years.
4. Instrumentation and Control Design
The purpose of the Instrumentation and Control (I&C) Design document is to cover
the project-specific technical requirements which are to be followed throughout the FEED
or Detailed Engineering phase during the preparation of engineering deliverables. The
Design Basis is considered the mother document for all the engineering activities or
deliverables to be carried out in a particular project.

Inputs required in the preparation of the instrument design document

The following are the required inputs for the preparation of the Instrumentation & Control Design Basis document:

 Client’s Specification
 Electrical Hazardous Area Layout
 Basic Engineering Design Data
Structure of Design Basis of Instrumentation & Control
The following items listed below should be covered in the Instrumentation & Control
Design Basis.

1. List of Codes, Standards, and Regulatory Requirements

2. Units of Measurements

3. Control System

4. Package Control System

5. Power Supply & Instrument Air Supply

6. Hazardous Area Classification Requirements

7. Basic Requirements Related to Field Instruments and Cables

8. Basic Requirements Related to Installation & Related Items

9. Spares

Instrumentation documentation
Instrumentation documentation consists of drawings, diagrams and schedules. The
documentation is used by various people for different purposes. Of all the disciplines in
a project, instrumentation is the most interlinked and therefore the most difficult to
control. The best way to understand the purpose and function of each document is to
look at the complete project flow from design through to commissioning.

 Design criteria, standards, specifications, vendor lists
 Construction
 Quantity surveying, disputes, installation contractor, price per meter, per
installation
 Operations
 Maintenance and commissioning

Instrument list
This is a list of all the instruments on the plant, presented in list format. All the
instruments of the same type (tag) are listed together; for example, all the pressure
transmitters ‘PT’ are grouped together. (A short sketch after the numbered items below
illustrates the difference between tag-ordered and loop-ordered listings.)

1. Instrument index – Lists associated documentation such as loop drawing number,
datasheets, and installation details.
2. Loop List – The same information as the instrument list but ordered by loop
number instead of tag number. This ordering groups all elements of the same
loop number together.
3. Function – Gives a list of all the instrumentation on the plant and may include
‘virtual’ instruments such as controllers in a PLC.
4. Tag No – The instrument tag number as defined by the specification.
5. Description – A description of the instrument as denoted by the tag number.
6. Service Description – A description of the process-related parameter.
7. Functional Description – The role of the device.
8. Manufacturer – Details of the manufacturer of the device.
9. Model – Details of the model type and number.
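As a minimal sketch of the difference between the two orderings, the following Python
fragment sorts a few hypothetical records first by tag type (instrument list) and then by
loop number (loop list); all tag numbers and field values are illustrative assumptions.

from itertools import groupby

# A few hypothetical instrument records (tag, loop number, service description).
records = [
    {"tag": "PT-101", "loop": 101, "service": "Column top pressure"},
    {"tag": "TT-102", "loop": 102, "service": "Reboiler outlet temperature"},
    {"tag": "PT-205", "loop": 205, "service": "Pump discharge pressure"},
    {"tag": "TV-102", "loop": 102, "service": "Reboiler steam valve"},
]

def tag_type(rec):
    # 'PT-101' -> 'PT': the instrument-type part of the tag number.
    return rec["tag"].split("-")[0]

# Instrument list: all instruments of the same type (tag) grouped together.
print("Instrument list (grouped by tag type):")
for prefix, group in groupby(sorted(records, key=tag_type), key=tag_type):
    print(" ", prefix, [r["tag"] for r in group])

# Loop list: the same records ordered by loop number, so that all elements of
# the same loop (e.g. loop 102) appear together.
print("Loop list (grouped by loop number):")
for loop, group in groupby(sorted(records, key=lambda r: r["loop"]),
                           key=lambda r: r["loop"]):
    print(" ", loop, [r["tag"] for r in group])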

Instrument location plans
The instrument location drawing is used to indicate an approximate location of the
instruments and junction boxes. This drawing is then used to determine the cable
lengths from the instrument to the junction box or control room. This drawing is also
used to give the installation contractor an idea as to where the instrument should be
installed.

Cable racking layout
This is a drawing that shows the physical layout and sizes of the cable rack as it runs
through the plant.

Cable routing layout
The routing layout uses a single line to indicate the rack direction as well as the routing
and sizes, and is also known as a ‘Racking & Routing layout’.

Block diagrams – signal, cable and power block diagrams
Cable block diagrams can be divided into two categories: Power and Signal block
diagrams. The block diagram is used to give an overall graphical representation of the
cabling philosophy for the plant.

Field connections / Wiring diagrams
Function: To instruct the wireman on how to wire the field cables at the junction box.
Used by: The installation contractor. When the cable is installed on the cable rack, it is
left lying loose at both the instrument and junction box ends. The installation contractor
stands at the junction box and strips each cable and wires it into the box according to
the drawing.
Power distribution diagram
Function: There are various methods of supplying power to field instruments; the
various formats of the power distribution diagrams show these different wiring systems.
Used by: Various people depending on the wiring philosophy, such as the panel
wireman, field wiring contractor.
Earthing diagram
Function: Used to indicate how the earthing should be done. Although this is often
undertaken by the electrical discipline, there are occasions when the instrument
designer may have to generate his or her own scheme, e.g. for the earthing of Zener
barriers in a hazardous area environment.
Used by: Earthing contractor for the installation of the earthing. This drawing should
also be kept for future modifications and reference.
Loop diagrams
Function: A diagram that comprehensively details the wiring of the loop, showing
every connection from the field to the instrument or to the I/O point of a DCS/PLC.

Used by: Maintenance staff during the operation of the plant and by commissioning
staff at start up.

Measurement system/Instrumentation Systems
The purpose of an instrumentation system used for making measurements is to give
the user a numerical value corresponding to the variable being measured. Thus a
thermometer may be used to give a numerical value for the temperature of a liquid. We
must, however, recognize that, for a variety of reasons, this numerical value may not
actually be the true value of the variable. Thus, in the case of the thermometer, there
may be errors due to the limited accuracy in the scale calibration, or reading errors due
to the reading falling between two scale markings, or perhaps errors due to the
insertion of a cold thermometer into a hot liquid, lowering the temperature of the liquid
and so altering the temperature being measured. We thus consider a measurement
system to have an input of the true value of the variable being measured and an output
of the measured value of that variable. The figures below show some examples of
instrument measuring systems.
An instrumentation system for making measurements has an input of the true value of
the variable being measured and an output of the measured value. This output might
then be used in a control system to control the variable to some set value.

The Constituent Elements of an Instrumentation System

An instrumentation system for making measurements consists of several elements
which are used to carry out particular functions. These functional elements are:
1. Sensor
This is the element of the system which is effectively in contact with the process
for which a variable is being measured and gives an output which depends in
some way on the value of the variable and which can be used by the rest of the
measurement system to give a value to it. For example, a thermocouple is a
sensor which has an input of temperature and an output of a small e.m.f which
in the rest of the measurement system might be amplified to give a reading on a
meter. Another example of a sensor is a resistance thermometer element which
has an input of temperature and an output of a resistance change.

2. Signal processor
This element takes the output from the sensor and converts it into a form which
is suitable for display or onward transmission in some control system. In the case
of the thermocouple this may be an amplifier to make the e.m.f. big enough to
register on a meter (Figure 1.8B). There often may be more than an item,
perhaps an element which puts the output from the sensor into a suitable
condition for further processing and then an element which processes the signal
so that it can be displayed. The term signal conditioner is used for an element
which converts the output of a sensor into a suitable form for further processing.
Thus in the case of the resistance thermometer there might be a signal
conditioner, such as a Wheatstone bridge, which transforms the resistance
change into a voltage change, then an amplifier to make the voltage big enough
for display (Figure 1.8B) or for use in a system used to control the temperature.
3. Data presentation
This presents the measured value in a form which enables an observer to
recognize it. This may be via a display, e.g. a pointer moving across the scale of
a meter or perhaps information on a visual display unit (VDU). Alternatively, or
additionally, the signal may be recorded, e.g. in a computer memory, or
transmitted to some other system such as a control system.

The figure below shows how these basic functional elements form a measurement
system.

The term transducer is often used in relation to measurement systems. A transducer is
defined as an element that converts a change in some physical variable into a related
change in some other physical variable. The term is generally used for an element that
converts a change in some physical variable into an electrical signal change. Thus
sensors can be transducers. However, a measurement system may use transducers, in
addition to the sensor, in other parts of the system to convert signals in one form into
another form.
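As a minimal sketch of how the functional elements described above fit together, the
following Python fragment models the thermocouple case as a chain of sensor, signal
processor and data presentation stages; the sensitivity and gain figures are rough
illustrative assumptions, not manufacturer data.

# Sensor: thermocouple giving a small e.m.f. that depends on temperature.
# Roughly 41 microvolts per degree C is typical of a type K thermocouple,
# but the exact figure here is only an illustrative assumption.
def thermocouple(temperature_c):
    return 41e-6 * temperature_c              # output in volts

# Signal processor: an amplifier that makes the e.m.f. big enough to display.
def amplifier(emf_volts, gain=100.0):
    return emf_volts * gain                   # output in volts

# Data presentation: show the measured value to the observer.
def display(volts, volts_per_degree=41e-6 * 100.0):
    # Convert the amplified voltage back to degrees C so the readout is in
    # the units the observer expects.
    print(f"Measured temperature: {volts / volts_per_degree:.1f} degrees C")

# True value in -> measured value out, passing through each element in turn.
true_temperature = 85.0                       # degrees C (illustrative input)
display(amplifier(thermocouple(true_temperature)))

Each element takes the previous element’s output as its input, mirroring the sensor,
signal processor and data presentation blocks described above.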

Example
With a resistance thermometer, element A takes the temperature signal and transforms
it into resistance signal, element B transforms the resistance signal into a current signal,
element C transforms the current signal into a display of a movement of a pointer
across a scale.
Which of these elements is (a) the sensor, (b) the signal processor, (c) the data
presentation?
The sensor is element A, the signal processor is element B, and the data presentation
element is C.
The system can be represented by Figure below
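To complement this example, the fragment below sketches how element B’s conversion
might look if a Wheatstone bridge signal conditioner, as described earlier, turns the
resistance change into a voltage change (the example above speaks of a current signal;
a voltage output is shown here for simplicity). The supply voltage, the bridge resistor
values and the simple linear Pt100 relation R = R0(1 + 0.00385·T) are illustrative
assumptions.

# Element A (sensor): a Pt100-type resistance thermometer element.
def rtd_resistance(temperature_c, r0=100.0, alpha=0.00385):
    # Simple linear approximation: resistance rises with temperature.
    return r0 * (1.0 + alpha * temperature_c)

# Element B (signal conditioner): a Wheatstone bridge turning the resistance
# change into a voltage change; r1, r2, r3 and the supply are assumed values.
def bridge_output(r_sensor, supply_v=10.0, r1=100.0, r2=100.0, r3=100.0):
    # Potential difference between the two mid-points of the bridge.
    return supply_v * (r_sensor / (r3 + r_sensor) - r2 / (r1 + r2))

# Element C (data presentation): print the result of the chain.
for t in (0.0, 50.0, 100.0):
    r = rtd_resistance(t)
    v = bridge_output(r)
    print(f"{t:5.1f} degrees C -> {r:7.2f} ohm -> {1000 * v:6.1f} mV")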
