Random and Systematic Error

Uploaded by

Muneeba Saeed

MEASUREMENT ERROR

In scientific research, measurement error is the difference between an observed value and the
true value of something. It’s also called observation error or experimental error.

There are two main types of measurement error:

RANDOM AND SYSTEMATIC ERRORS

 RANDOM ERROR

Random error is a chance difference between the observed and true values of something (e.g., a
researcher misreading a weighing scale records an incorrect measurement).

In the graph below, the black line represents a perfect match between the true scores and observed
scores of a scale. In an ideal world, all of your data would fall on exactly that line. The green dots
represent the actual observed scores for each measurement with random error added.

Random error is often referred to as “noise” because it blurs the true value (or the “signal”) of what’s
being measured. Keeping random error low helps you collect precise data.
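The signal-plus-noise idea can be sketched with a short simulation. The true value and the noise level below are purely illustrative:

```python
import random

random.seed(42)  # reproducible illustration

TRUE_VALUE = 70.0  # hypothetical "signal": a participant's true weight in kg

# Each observation is the signal plus a chance fluctuation ("noise")
# drawn from a zero-mean distribution.
observations = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(10)]

# The errors scatter around zero rather than pushing in one direction,
# which is what distinguishes random error from systematic error.
errors = [obs - TRUE_VALUE for obs in observations]
```

Because the noise has a mean of zero, individual readings land above and below the true value by chance, not in one predictable direction.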
 Sources of random errors

Some common sources of random error include:

o natural variations in real-world or experimental contexts.
o imprecise or unreliable measurement instruments.
o individual differences between participants or units.
o poorly controlled experimental procedures.

 Reducing Random Error


Random error is almost always present in research, even in highly controlled settings. While you
can’t eradicate it completely, you can reduce random error using the following methods.
o Take repeated measurements
A simple way to increase precision is by taking repeated measurements and using their average.
For example, you might measure the wrist circumference of a participant three times and get
slightly different lengths each time. Taking the mean of the three measurements, instead of using
just one, brings you much closer to the true value.
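A minimal sketch of this in Python, using simulated wrist measurements (the true circumference and the noise level are made up for illustration):

```python
import random

random.seed(0)  # reproducible illustration

TRUE_WRIST_CM = 16.0  # hypothetical true wrist circumference

def measure():
    # Simulated tape reading: true value plus random error (sd = 0.3 cm).
    return TRUE_WRIST_CM + random.gauss(0, 0.3)

# Three slightly different readings of the same wrist.
readings = [measure() for _ in range(3)]

# Averaging lets the chance errors partially cancel out.
mean_reading = sum(readings) / len(readings)
```

The individual readings differ from each other, but their mean is typically closer to the true value than a single reading would be.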
o Increase your sample size
Large samples have less random error than small samples. That’s because the errors in different
directions cancel each other out more efficiently when you have more data points. Collecting data
from a large sample increases precision and statistical power.
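This cancellation can be seen directly by simulating many repeated studies at two sample sizes: the standard error of the mean shrinks as sd/√n. All values here are invented for illustration:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def sample_mean(n, true_value=50.0, sd=5.0):
    # Mean of one simulated sample of n noisy measurements.
    return statistics.mean(true_value + random.gauss(0, sd) for _ in range(n))

# Spread of sample means across 500 repeated "studies" at each size.
small = statistics.stdev(sample_mean(10) for _ in range(500))   # ~ 5 / sqrt(10)
large = statistics.stdev(sample_mean(100) for _ in range(500))  # ~ 5 / sqrt(100)
```

The means of the larger samples cluster much more tightly around the true value, which is exactly the gain in precision described above.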
o Control variables
In controlled experiments, you should carefully control any extraneous variables that could impact
your measurements. These should be controlled for all participants so that you remove key sources
of random error across the board.

 SYSTEMATIC ERROR

Systematic error means that your measurements of the same thing will vary in predictable ways:
every measurement will differ from the true measurement in the same direction, and even by the
same amount in some cases.

Systematic error is also referred to as bias because your data is skewed in standardized ways that
hide the true values. This may lead to inaccurate conclusions.

 Types of systematic errors


Offset errors and scale factor errors are two quantifiable types of systematic error.

o An offset error occurs when a scale isn’t calibrated to a correct zero point. It’s also called
an additive error or a zero-setting error. Example: while measuring participants’ wrist
circumferences, you misread the “2” on the measuring tape as the zero point. All of your
measurements have an extra 2 centimeters added to them.
o A scale factor error is when measurements consistently differ from the true value
proportionally (e.g., by 10%). It’s also referred to as a correlational systematic error or a
multiplier error. Example: a weighing scale consistently adds 10% to each weight, so a
true weight of 10 kg is recorded as 11 kg, while a true weight of 40 kg is recorded as
44 kg.
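The two error types can be contrasted in a few lines of Python, using the example values from the text:

```python
true_weights = [10.0, 20.0, 40.0]  # true values from the examples above

# Offset error: a fixed amount is added to every reading
# (e.g., a mis-set zero point adds 2 units to everything).
offset_readings = [w + 2.0 for w in true_weights]

# Scale factor error: every reading is off by the same proportion
# (the scale from the example adds 10% to each weight).
scale_readings = [w * 1.10 for w in true_weights]
```

The offset error is constant in absolute terms (always +2), while the scale factor error grows with the true value (+1, +2, +4 here), which is why the two produce differently shaped lines when plotted against the true values.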

You can plot offset errors and scale factor errors in graphs to identify their differences. In the
graphs below, the black line shows when your observed value is the exact true value, and there is
no random error.

The blue line is an offset error: it shifts all of your observed values upwards or downwards by a
fixed amount (here, it’s one additional unit).

The purple line is a scale factor error: all of your observed values are multiplied by a factor—all
values are shifted in the same direction by the same proportion, but by different absolute amounts.

 Sources of systematic errors

The sources of systematic error can range from your research materials to your data
collection procedures and your analysis techniques. This isn’t an exhaustive list of
systematic error sources, because they can come from all aspects of research.
o Response bias occurs when your research materials (e.g., questionnaires) prompt
participants to answer or act in inauthentic ways, for instance through leading questions.
For example, social desirability bias can lead participants to try to conform to societal
norms, even if that’s not how they truly feel.
o Experimenter drift occurs when observers become fatigued, bored, or less motivated after
long periods of data collection or coding, and they slowly depart from using standardized
procedures in identifiable ways.
o Sampling bias occurs when some members of a population are more likely to be included
in your study than others. It reduces the generalizability of your findings, because your
sample isn’t representative of the whole population.

 Reducing systematic error

You can reduce systematic errors by implementing these methods in your study.

o Triangulation

Triangulation means using multiple techniques to record observations so that you’re not relying
on only one instrument or method.

For example, if you’re measuring stress levels, you can use survey responses, physiological
recordings, and reaction times as indicators. You can check whether all three of these
measurements converge or overlap to make sure that your results don’t depend on the exact
instrument used.

o Regular calibration

Calibrating an instrument means comparing what the instrument records with the true value of a
known, standard quantity. Regularly calibrating your instrument with an accurate reference helps
reduce the likelihood of systematic errors affecting your study.

You can also calibrate observers or researchers in terms of how they code or record data. Use
standard protocols and routine checks to avoid experimenter drift.
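A calibration correction can be sketched as follows. The reference weight and readings are hypothetical, and this sketch corrects only a constant offset, not a scale factor:

```python
# Weigh a certified reference several times to estimate the instrument's offset.
reference_true = 10.0                     # known standard quantity (kg)
reference_readings = [10.21, 10.19, 10.20]  # the instrument reads high

# Estimated offset = mean reading minus the known true value.
offset = sum(reference_readings) / len(reference_readings) - reference_true

def corrected(reading):
    # Subtract the estimated offset from every subsequent measurement.
    return reading - offset
```

After calibration, a raw reading of 72.20 kg would be corrected back to roughly 72.00 kg; repeating the reference check over time guards against the offset drifting.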
o Randomization

Probability sampling methods help ensure that your sample doesn’t systematically differ from the
population.

In addition, if you’re doing an experiment, use random assignment to place participants into
different treatment conditions. This helps counter bias by balancing participant characteristics
across groups.
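Random assignment can be implemented with a shuffle-and-split. The participant labels and group sizes below are illustrative:

```python
import random

random.seed(7)  # reproducible illustration

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

# Shuffle, then split in half: each participant has an equal chance of
# landing in either condition, balancing characteristics on average.
random.shuffle(participants)
treatment = participants[:10]
control = participants[10:]
```

Because assignment depends only on the shuffle, participant characteristics cannot systematically favor one condition, which is the bias-countering property described above.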

o Masking

Wherever possible, you should hide the condition assignment from participants and researchers
through masking (blinding).
