
Chapter-2

Metrology in Chemical Analysis


2.1 Measurement
2.2 Traceability and reference materials in Chemical Analysis
2.3 Errors in chemical analysis; Random and systematic error
2.4 Estimating uncertainty in chemical analysis
2.5 Calibration and validation
2.1 Measurement
What is metrology?
- The science of measurement
- Metrology includes all theoretical and practical aspects of measurement, whatever the measurement uncertainty and field of application
In metrology, the measurand is the quantity you are trying to measure.

What is a Measurement?
The process of experimentally obtaining one or more quantity values.
A quantity is a property which has a magnitude that can be expressed as a number and a unit.
E.g. Quantity: cadmium (mass) concentration; Quantity value: 12 mg/L Cd;
Measurement result: 12 ± 2 mg/L Cd

A measurement usually consists of a unit and a number that expresses the quantity of
that unit. We can express the same physical measurement with different units, which
creates confusion if we are not careful to specify the unit.

To ensure consistency, and to avoid problems, scientists use a common set of fundamental base units (the SI base units).
What is measurement quality?
Results should be fit for purpose with regard to several parameters, e.g. uncertainty, price and comparability.

Comparability – measurement results need to be comparable:
- Over time
- Between different laboratories
- Between different countries
"Measured once – accepted everywhere"

Lack of measurement quality can lead to:
- Duplication of measurements
- Use of extra resources
- Lack of trust
- Negative economic impact
- Disasters/accidents
Examples of lack of measurement quality
Mars Climate Orbiter
...confusion about units leads to crash...
On 23 September 1999 the Mars Climate Orbiter, one of the missions in a long-term program of Mars exploration, burned up and was lost.

The accident was not due to a technical problem, but was the result of the different measurement units used by the NASA teams.

One team used the metric system for spacecraft operation, while the other used English units. This information was critical to the maneuvers required to position the spacecraft in the proper Mars orbit, and the mismatch led to the loss of the orbiter.
(Need for standardization of units)
Cholesterol Measurements

• A high measurement uncertainty for cholesterol can lead to unnecessary costly treatment or a higher health risk.

• Reducing the measurement uncertainty from 23.7% in 1949 to 5% in 1995 saves the United States alone $100 million every year in health care costs.
(Standard reference materials played an important role in lowering the measurement uncertainty.)
Certified reference material (CRM)
"Reference material (RM) characterized by a metrologically valid procedure for one or more specified properties, accompanied by an RM certificate that provides the value of the specified property, its associated uncertainty, and a statement of metrological traceability."
ISO/Guide 30:2015; Reference Materials — Selected Terms and Definitions

- Complete range with all parameters for analytical quality control of wastewater, drinking water and process water
- Detailed Certificate of Analysis for each CRM simplifies accreditation
- Ready-to-use, diluted CRMs save time and prevent dilution errors
- Long shelf life of 24 months

ISO (International Organization for Standardization) is a worldwide federation of national standards bodies. ISO is a nongovernmental organization that comprises standards bodies from more than 160 countries, with one standards body representing each member country.
Primary measurement standard
"Measurement standard that is designated or widely acknowledged as having the highest metrological qualities and whose property value is accepted without reference to other standards of the same property or quantity, within a specified context."
ISO/Guide 30:2015; Reference Materials — Selected Terms and Definitions

Secondary measurement standard
"Measurement standard whose property value is assigned by comparison with a primary measurement standard of the same property or quantity."
ISO/Guide 30:2015; Reference Materials — Selected Terms and Definitions
Measurement uncertainty
"Non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used."
ISO Guide 99:2007; International Vocabulary of Metrology - Basic and General Concepts and Associated Terms (VIM)

Expanded measurement uncertainty
"Product of a combined standard measurement uncertainty and a factor larger than the number one."
NOTE: The term "factor" in this definition refers to a coverage factor.
ISO Guide 99:2007; International Vocabulary of Metrology - Basic and General Concepts and Associated Terms (VIM)
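As a minimal illustration (the uncertainty components and their values below are hypothetical), a combined standard uncertainty can be formed as the root sum of squares of independent components and then multiplied by a coverage factor k to give the expanded uncertainty:

```python
import math

# Hypothetical standard uncertainty components for a cadmium concentration
# measurement (mg/L); the names and values are illustrative only.
u_components = {"calibration standard": 0.8, "repeatability": 0.6, "volumetric": 0.3}

# Combined standard uncertainty: root sum of squares of independent components
u_c = math.sqrt(sum(u**2 for u in u_components.values()))

# Expanded uncertainty: combined uncertainty times a coverage factor k > 1
k = 2          # k = 2 corresponds to roughly 95 % coverage for a normal distribution
U = k * u_c

print(f"u_c = {u_c:.2f} mg/L, U (k=2) = {U:.2f} mg/L")
# The result would then be reported as, e.g., 12 ± 2 mg/L Cd (value ± U)
```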
Uncertainty in Measurements
A measurement provides information about both its magnitude and its uncertainty.

Consider, for example, the three photos in Figure 2.1.1 , taken at intervals of
approximately 1 sec after placing a sample on the balance. Assuming the balance is
properly calibrated, we are certain that the sample’s mass is more than 0.5729 g
and less than 0.5731 g.

We are uncertain, however, about the sample’s mass in the last decimal place since
the final two decimal places fluctuate between 29, 30, and 31. The best we can do
is to report the sample’s mass as 0.5730 g ± 0.0001 g, indicating both its magnitude
and its absolute uncertainty.

Figure 2.1.1: When weighing a sample on a balance, the measurement fluctuates in the final decimal place. We record this sample’s mass as 0.5730 g ± 0.0001 g.
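As a minimal sketch of how such a result could be summarized (the readings below are hypothetical, chosen to match the figure), the mean of the fluctuating readings gives the magnitude and half the observed spread gives the absolute uncertainty:

```python
import statistics

# Hypothetical replicate balance readings (g) fluctuating in the last decimal place
readings = [0.5729, 0.5730, 0.5731, 0.5730, 0.5730]

mean = statistics.mean(readings)
spread = max(readings) - min(readings)

# Report magnitude and absolute uncertainty from the observed fluctuation
print(f"mass = {mean:.4f} g ± {spread / 2:.4f} g")   # -> mass = 0.5730 g ± 0.0001 g
```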
Stated Uncertainty
An interval around the measurement result.
The uncertainty budget
-Uncertainties carried by the references
-Uncertainties introduced by the measurement process

Exercise 1
How many significant figures are in each of the following measurements?
Convert each measurement to its equivalent scientific notation or decimal form.

A. 0.0120 mol HCl
B. 605.3 mg CaCO3
C. 1.043×10⁻⁴ mol Ag⁺
D. 9.3×10⁴ mg NaOH

Using the correct number of significant figures is important because it tells other
scientists about the uncertainty of your measurements.
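A small Python sketch (illustrative only; the sig_figs helper below is our own, not a library function) that counts significant figures for the measurements above and rewrites them in scientific notation:

```python
def sig_figs(value: str) -> int:
    """Count significant figures in a numeric string such as '0.0120' or '9.3e4'.

    Assumes trailing zeros after a decimal point are significant; integers
    with trailing zeros (e.g. '100') are ambiguous and not handled here.
    """
    mantissa = value.lower().split("e")[0].lstrip("+-")   # exponent adds no sig figs
    digits = mantissa.replace(".", "").lstrip("0")        # leading zeros are not significant
    return len(digits)

# The exercise values, written in e-notation
for s in ["0.0120", "605.3", "1.043e-4", "9.3e4"]:
    print(f"{s:>10}  sig figs: {sig_figs(s)}  scientific: {float(s):.{sig_figs(s) - 1}e}")
```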
Traceability
"Property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty."
ISO Guide 99:2007; International Vocabulary of Metrology — Basic and General Concepts and Associated Terms (VIM)
- Values carried by reference materials should be traceable to other references.
- The same features which are valid for the analytical laboratories are also valid for the reference material producers.
Calibration and validation
• A simple definition of a quantitative analytical method is that it is a mechanism
for converting a measurement, the signal, into the amount of analyte in a sample.
Assuming we can correct for interferents, a quantitative analysis is nothing more
than solving equation 2.1 or equation 2.2 for nA or for CA.

SA = kA nA    (Eq. 2.1)

where nA is the moles or grams of analyte in the sample, SA is the signal due to the analyte, and kA is a proportionality constant.

• To solve these equations we need the value of kA. For a total analysis method
usually we know the value of kA because it is defined by the stoichiometry of the
chemical reactions responsible for the signal.
• For a concentration method, however, the value of kA usually is a complex
function of experimental conditions. A Calibration is the process of
experimentally determining the value of kA by measuring the signal for one or
more standard samples, each of which contains a known concentration of
analyte.
• With a single standard we can calculate the value of kA using equation 2.1 or
equation 2.2. When using several standards with different concentrations of
analyte, the result is best viewed visually by plotting SA versus the concentration
of analyte in the standards. Such a plot is known as a calibration curve, an
example of which is shown in Figure 2.
• A second class of analytical techniques consists of those that respond to the analyte’s concentration, CA:
SA = kA CA    (Eq. 2.2)
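A minimal sketch of single-standard calibration using Eq. 2.2, with hypothetical numbers: one standard of known concentration fixes kA, which is then used to convert the sample signal into a concentration.

```python
# Single-standard (single-point) calibration, illustrative values only
S_std = 0.474        # signal measured for the standard (arbitrary units)
C_std = 10.0         # known analyte concentration in the standard (mg/L)

k_A = S_std / C_std  # proportionality constant from Eq. 2.2: S_A = k_A * C_A

S_sample = 0.361            # signal measured for the sample
C_sample = S_sample / k_A   # solve Eq. 2.2 for C_A

print(f"k_A = {k_A:.4f} L/mg, C_A = {C_sample:.2f} mg/L")
```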
Why do you calibrate instruments?
• Any instrument used in scientific research needs to be properly calibrated before it is
used
• This is done through adjustment of the precision and accuracy of the instrument. You therefore need to know what precision and accuracy mean.
• By adjusting these values, instrument calibration can reduce (or completely eliminate) bias in readings.

When do you calibrate instruments?


There are a number of scenarios in which an instrument used for research needs to be
calibrated. Just a few are listed below:
• After an 'event' – this could be if the instrument is knocked, bumped, or moved. Any of
these things can impact the accuracy of an instrument’s measurements.
• When measurements don’t seem right – if you are conducting research, but the
measurements being produced don’t seem right, then the instrument may need to be
calibrated again.
• When instructed by the manufacturer – some manufacturers require an instrument to be
checked every so often to make sure it is working properly. If so, they will tell you how
often this needs to be done.
How do you calibrate an instrument?
• There are two main ways of calibrating an instrument – these are the working curve
method and the standard addition method.
Working Curve Calibration
• In the working curve method, a set of standards must be prepared. They
will each contain a known amount of the analyte being measured. These
standards are then measured using the instrument in question, and a
calibration curve will be plotted. This curve will show the relationship
between the response of the instrument and the concentration of the
analyte. An example of a calibration curve can be found in Figure 2.

• When using the working curve method, it is important that each standard is prepared individually and not all from the same stock solution. Any errors in the stock solution will carry through the entire calibration process, and thus the instrument will not be calibrated correctly.
• The calibration curve should also be checked for any outliers – this
is a measurement which is significantly different from the other
measurements. Put simply, these results will shift the regression line
(line of best fit) and give inaccurate results, and should, therefore,
be removed.

• There are a number of steps which should be followed when performing a working curve calibration, which are outlined below:
• The calibration standards should cover the range of interest – this ensures that, during your actual experiment, you get the most accurate results from your curve.
• A ‘blank’ should be included in your calibration – this is a standard which contains no analyte.
• Don’t automatically set your regression line intercept to zero! Only do this if you have enough evidence to show that the intercept is not statistically different from zero.
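A minimal working-curve sketch (the standards and signals below are hypothetical): a least-squares fit of signal versus concentration gives the calibration line, which is then inverted to obtain the concentration of an unknown.

```python
import numpy as np
from scipy import stats

# Hypothetical external standards: analyte concentration (mg/L) vs. instrument signal
conc   = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
signal = np.array([0.003, 0.098, 0.204, 0.297, 0.406, 0.495])

# Least-squares regression gives the calibration curve S_A = slope * C_A + intercept
fit = stats.linregress(conc, signal)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}, r = {fit.rvalue:.4f}")

# Concentration of an unknown from its measured signal
S_unknown = 0.251
C_unknown = (S_unknown - fit.intercept) / fit.slope
print(f"C_unknown = {C_unknown:.2f} mg/L")
```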
Standard-Addition Calibration
• The standard-addition method of calibration helps to remove bias that may arise
from a number of factors, including the temperature and composition of the actual
matrix.
• For the standard-addition method of calibration, two requirements must be met:
• The calibration curve has to be linear
• The regression line must pass through zero

In this method, the signal intensity of the sample solution is measured.

Then, the analyte is added to this solution at known concentrations – the signal
intensity is measured after each addition of the analyte.

This, therefore, gives a calibration curve which is linear and shows signal intensity vs.
added concentration.

The concentration of the analyte is determined from the point where the extrapolated regression line crosses the concentration axis (signal = 0); the magnitude of this x-intercept is the analyte concentration in the sample.
In this method, the matrix itself remains completely unchanged – it is
for this reason that this method is useful in cases where the matrix is
either very complicated or hard to reproduce.
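A minimal standard-addition sketch with hypothetical data: the signal is regressed against the added concentration, and the magnitude of the x-intercept (intercept divided by slope) gives the analyte concentration in the measured solution.

```python
import numpy as np
from scipy import stats

# Hypothetical standard-addition data: signal vs. concentration of analyte ADDED
# to aliquots of the same sample (the first point is the unspiked sample).
added  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # mg/L added
signal = np.array([0.220, 0.330, 0.439, 0.552, 0.661])

fit = stats.linregress(added, signal)

# Extrapolating the line to zero signal gives the x-intercept; its magnitude
# (intercept / slope) is the analyte concentration in the measured solution.
C_sample = fit.intercept / fit.slope
print(f"C_sample = {C_sample:.2f} mg/L (in the measured solution)")
```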
Validation
• If we are to have confidence in our procedure we must demonstrate that it can
provide acceptable results, a process we call validation. Perhaps the most
important part of validating a procedure is establishing that its precision and
accuracy are appropriate for the problem we are trying to solve.
• We also ensure that the written procedure has sufficient detail so that different
analysts or laboratories will obtain comparable results. Ideally, validation uses a
standard sample whose composition closely matches the samples we will
analyze. In the absence of appropriate standards, we can evaluate accuracy by
comparing results to those obtained using a method of known accuracy.
Typical analytical method validation characteristics
• Specificity
Specificity is the ability of the method to measure only the analyte of interest
without interference from other components in the sample that are likely to be
present, such as impurities, degradants, matrix components, etc. It can be evaluated
by analyzing samples containing the analyte of interest and potential interfering
substances.
• The tests used to demonstrate specificity will depend on the intended use of
the analytical procedure; suitable identification tests should be able to
discriminate between compounds of closely related structures, whereas suitable
assay or purity tests should be able to discriminate between the analyte,
impurities, degradation products and/or matrix components.
Figure 3: An example of method specificity
Linearity
• The analytical method should be linear, i.e., there should be a direct relationship
between the concentration of the analyte(s) and the signal produced.
• Linearity is usually evaluated by analyzing samples containing the analyte at 5 different
concentration levels in triplicate. A plot of signals as a function of analyte concentration
or content is then created, usually by means of a regression line. The correlation
coefficient, y-intercept, slope of the regression line, and residual sum of squares should
be calculated.
• If an analytical method does not demonstrate linearity, even after transformation, the analytical response should be described by an appropriate function of the concentration (amount) of the analyte in the sample.

Figure 4: An example of a regression line

Figure 5: Residuals graph
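A minimal sketch of a linearity evaluation with hypothetical triplicate data at 5 concentration levels, reporting the slope, y-intercept, correlation coefficient and residual sum of squares mentioned above:

```python
import numpy as np
from scipy import stats

# Hypothetical linearity data: 5 concentration levels (mg/L) measured in triplicate
conc   = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 3)
signal = np.array([0.101, 0.099, 0.103, 0.198, 0.202, 0.201,
                   0.302, 0.298, 0.300, 0.401, 0.399, 0.403,
                   0.498, 0.502, 0.500])

fit = stats.linregress(conc, signal)
residuals = signal - (fit.slope * conc + fit.intercept)

print(f"slope = {fit.slope:.4f}, y-intercept = {fit.intercept:.4f}")
print(f"correlation coefficient r = {fit.rvalue:.5f}")
print(f"residual sum of squares = {np.sum(residuals**2):.2e}")
```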


Precision
Precision is the degree of agreement (degree of scatter) between replicate
measurements of the same sample under the same conditions.
It is measured by calculating the (relative) standard deviation and confidence interval of replicate measurements of multiple samplings of the same homogeneous sample. Three levels of precision are defined: repeatability, intermediate precision, and reproducibility.

Repeatability: precision established under the same operating conditions over a short period of time, using a minimum of 6 determinations at 100% of the test concentration or 3 replicates at each of 3 concentrations.

Intermediate precision: establishes the effects of random events on the precision of the analytical procedure, calculated using a minimum of 6 determinations at 100% of the test concentration. Typical within-laboratory variations include days, analysts, equipment, etc.

Reproducibility: different analysts in different laboratories should be able to obtain similar results. This is usually tested in the case of method transfer.
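A minimal repeatability sketch with 6 hypothetical determinations at 100% of the test concentration, computing the relative standard deviation and a 95% confidence interval for the mean:

```python
import numpy as np
from scipy import stats

# Hypothetical repeatability data: 6 determinations at 100 % of the test concentration
results = np.array([99.1, 100.4, 99.8, 100.9, 99.5, 100.2])   # % of nominal

mean = results.mean()
sd = results.std(ddof=1)                 # sample standard deviation
rsd = 100 * sd / mean                    # relative standard deviation (%)

# 95 % confidence interval for the mean using the t-distribution (n - 1 = 5 df)
t = stats.t.ppf(0.975, df=len(results) - 1)
ci = t * sd / np.sqrt(len(results))

print(f"mean = {mean:.2f} %, RSD = {rsd:.2f} %, 95 % CI = ±{ci:.2f}")
```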
Accuracy
• Accuracy is the closeness of the test results to the true or theoretical value.

• Accuracy is assessed using a minimum of 9 determinations over a minimum of 3 concentration levels covering the specified range (e.g. 3 replicates at each of 3 concentrations over the range).
• Accuracy is then reported as a percentage recovery of the theoretical amount of
analyte in the sample, together with the confidence intervals.
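A minimal recovery calculation with hypothetical data (3 replicates at each of 3 levels), reporting the percentage recovery of the theoretical amount:

```python
import numpy as np

# Hypothetical accuracy (recovery) data: measured vs. theoretical (spiked) amounts
theoretical = np.array([5.0, 5.0, 5.0, 10.0, 10.0, 10.0, 15.0, 15.0, 15.0])
measured    = np.array([4.9, 5.1, 5.0,  9.8, 10.1, 10.0, 14.8, 15.2, 15.1])

recovery = 100 * measured / theoretical      # % recovery per determination
print(f"mean recovery = {recovery.mean():.1f} % "
      f"(min {recovery.min():.1f} %, max {recovery.max():.1f} %)")
```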

Range
The specified range is normally derived from linearity studies and depends on
the intended application of the procedure.
It is established by confirming that the analytical procedure provides an acceptable
degree of linearity, accuracy, and precision when applied to samples containing
amounts of analyte within or at the extremes of the specified range of the analytical
procedure.

Figure 6: Range
Limit of detection (LOD) and limit of quantitation (LOQ)
• The analytical method must be able to detect and quantify low levels of
analytes accurately:
• The limit of quantitation (LOQ) is the lowest amount of an analyte that
can be quantitated with suitable accuracy and precision.
• The limit of detection (LOD) is the lowest amount of an analyte that can be detected but not necessarily quantitated.

Figure 7: LOQ vs LOD


LOQ and LOD may be calculated using different methods:
- based on visual evaluation,
- based on the signal-to-noise ratio,
- or based on the standard deviation of the response and the slope.
The LOQ must be validated by analysis of a suitable number of samples with
known concentrations at or near the quantitation limit.
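A minimal sketch of the standard-deviation-and-slope approach, using the commonly quoted factors LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S, where σ is the standard deviation of the response (e.g. of blank replicates or regression residuals) and S is the slope of the calibration curve; the numbers are illustrative only.

```python
# Illustrative LOD/LOQ estimate from the standard deviation of the response
# and the calibration slope (commonly used factors 3.3 and 10).
sigma = 0.004      # standard deviation of the response (e.g. blank replicates)
slope = 0.050      # slope of the calibration curve (signal per mg/L)

LOD = 3.3 * sigma / slope
LOQ = 10 * sigma / slope

print(f"LOD = {LOD:.2f} mg/L, LOQ = {LOQ:.2f} mg/L")
```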
Robustness
The evaluation of robustness should be considered during the development
phase. It should show the reliability of an analysis with respect to deliberate
variations in method parameters.
An important parameter is the stability of the sample solution.
System suitability testing
System suitability testing is an integral part of many analytical procedures.

Tests are based on the concept that the equipment, electronics, analytical
operations and samples constitute an integral system that must be evaluated as
such.
All the (calculated) parameters must be within acceptable limits, indicating good performance of the method and system.
