
Abra State Institute of Science and Technology

GRADUATE SCHOOL
Bangued Campus, Bangued, Abra

RAYSA LYNE AGBILAY
GLAISA BAUTISTA
MARIA YBONNE GANDEZA
MARIFE B. LLAVADO
MAGS Students

Dr. JOEY Dela Cruz
Professor
___________________________________________________________________________

In constructing the research instrument of a study, there are many factors to be considered. The type of instrument, the reasons for choosing that type, and the description and conceptual definition of its parts are some of the decisions that need to be made before constructing a research instrument. Furthermore, it is also very important to understand the scales used in research instruments and how to establish the validity and reliability of instruments.

CHARACTERISTICS OF A GOOD RESEARCH INSTRUMENT

Concise. Have you ever tried answering a very long test and, because of its length, simply picked answers without even reading the items? A good research instrument is concise in length yet can still elicit the needed data.

Sequential. Questions or items must be arranged well, preferably from the simplest to the most complex. Arranged this way, the instrument is easier for the respondents to answer.

Valid and reliable. The instrument should pass the tests of validity and reliability so that it yields appropriate and accurate information.

Easily tabulated. Since you will be constructing an instrument for quantitative research, the responses should be easy to tabulate. Hence, before crafting the instrument, the researcher makes sure that the variables and research questions are established; these will be an important basis for making the items in the research instrument.

WAYS IN DEVELOPING RESEARCH INSTRUMENT


There are three ways you can consider in developing the research instrument for your study. The first is adopting an instrument already utilized in previous related studies. The second is modifying an existing instrument when the available instruments do not yield the exact data that will answer the research problem. The third is constructing your own instrument that corresponds to the variables and scope of the current study.

Another important consideration in constructing a research instrument is how to establish its validity and reliability.
VALIDITY
Validity, in the research realm, pertains to how well the research instrument measures the elements that need to be measured. That is, an instrument is valid if it measures what it intends to measure.

Validity is how researchers talk about the extent to which results represent reality. Research methods, quantitative or qualitative, are methods of studying real phenomena; validity refers to how much of the phenomenon an instrument measures versus how much "noise," or unrelated information, is captured by the results.

How can validity be established?

• Quantitative studies: through measurements, scores, the instruments used, and the research design.
• Qualitative studies: through the ways researchers have devised to establish credibility, such as member checking, triangulation, thick description, peer reviews, and external audits.

Here are the 7 key types of validity in research:


1. Face validity
Face validity is how valid your results seem based on what they look like. It is the least scientific form of validity, as it is not quantified using statistical methods, and it is not validity in the technical sense of the term. It is concerned only with whether the instrument seems to measure what we claim it measures.
For example, imagine you give a survey that appears valid to the respondents, with questions selected because they look appropriate to administer. The researcher then asks a group of random people, untrained observers, whether the questions appear valid to them.
Example:
A mathematical test consisting of problems in which the test taker has to add and subtract numbers may be considered to have strong face validity: the test items appear, at face value, to measure what one is seeking to measure.

2. Content validity
Content validity is whether or not the measure used in the research covers all
of the content in the underlying construct (the thing you are trying to measure).
This is also a subjective measure, but unlike face validity, here we ask whether the content of the measure covers the full domain of that content. If a researcher wanted to measure introversion, they would first have to decide what constitutes a relevant domain of content for that trait.
Content validity is considered a subjective form of measurement because it
still relies on people’s perception for measuring constructs that would otherwise be
difficult to measure.
Where content validity distinguishes itself (and becomes useful) is through its use of experts in the field or individuals belonging to a target population. Such a study can be made more objective through the use of rigorous statistical tests. For example, you could have a content validity study that informs researchers how well the items used in a survey represent their content domain, how clear they are, and the extent to which they maintain the theoretical factor structure assessed by factor analysis.
Example:
If a test is designed to survey arithmetic skills at a third-grade level, content
validity indicates how well it represents the range of arithmetic operations possible at
that level. Modern approaches to determining content validity involve the use
of exploratory factor analysis and other multivariate statistical procedures.
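
Since the passage above points to exploratory factor analysis as a modern aid for judging content coverage, a minimal sketch is given below. The simulated Likert responses, the item count, and the choice of two factors are illustrative assumptions, not part of the original text; in practice the items and the expected factor structure come from the researcher's defined content domain.

```python
# Illustrative sketch (assumed data): exploring whether survey items group
# into the factors the content domain suggests.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 8
# Simulated 1-5 Likert responses; a real study would use collected data.
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=2)   # assume the domain has two content areas
fa.fit(responses)

# Loadings show how strongly each item relates to each assumed factor; items
# loading on an unexpected factor may not represent the intended content.
print(np.round(fa.components_.T, 2))
```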

3. Construct validity
Construct validity is the degree to which your research measures the construct (as compared to things outside the construct). It is the degree to which inferences can be made from the operationalizations in your study (connecting concepts to observations) to the constructs on which those operationalizations are based. To establish construct validity, you must first provide evidence that your data support the theoretical structure.
A construct represents a collection of behaviors that are associated in a meaningful way to create an image or an idea invented for a research purpose. For example, depression is a construct that represents a personality trait which manifests itself in behaviors such as oversleeping, loss of appetite, and difficulty concentrating. The existence of a construct is manifested by observing the collection of related indicators. Any one indicator may be associated with several constructs; a person with difficulty concentrating may have A.D.D. but not depression.
Example:
An example could be a doctor testing the effectiveness of painkillers on sufferers of chronic back pain. Every day, he asks the test subjects to rate their pain level on a scale of one to ten. Pain certainly exists, but it has to be measured subjectively. In this case, construct validity would test whether the doctor was actually measuring pain and not numbness, discomfort, anxiety, or any other factor.
Therefore, with the construct properly defined, we can examine construct validity, a measure of how well the test measures the construct. It is a tool that allows researchers to perform a systematic analysis of how well designed their research is.

4. Internal validity
Internal validity refers to the extent to which the independent variable can accurately be stated to produce the observed effect. If the observed effect on the dependent variable is due only to the independent variable(s), then internal validity is achieved. In other words, it is the degree to which the result can be attributed to the manipulation of the independent variable rather than to other factors.
Example:
You want to test the hypothesis that drinking a cup of coffee improves memory. You schedule an equal number of college-aged participants for morning and evening sessions at the laboratory. For convenience, you assign all morning-session participants to the treatment group and all evening-session participants to the control group. Once they arrive at the laboratory, the treatment group participants are given a cup of coffee to drink, while control group participants are given water. You also give both groups memory tests. After analyzing the results, you find that the treatment group performed better than the control group on the memory test. However, because time of day differs systematically between the groups, it, rather than the coffee, could explain the result, so the internal validity of this design is threatened.

5. External validity
External validity refers to the extent to which the results of a study can be generalized beyond the sample; that is, whether you can apply your findings to other people and settings.
Example:
You want to test the hypothesis that people tend to perceive themselves as smarter than others in terms of academic abilities. Your target population is the 10,000 undergraduate students at your university. You recruit over 200 participants. They are science and engineering majors; most of them are American, male, 18–20 years old, and from a high socioeconomic background. In a laboratory setting, you administer a mathematics and science test and then ask them to rate how well they think they performed. You find that the average participant believes they are smarter than 66% of their peers. Because the sample represents only a narrow slice of the target population, the external validity of this finding, that is, how far it generalizes to all 10,000 students, is limited.

6. Statistical conclusion validity


Statistical conclusion validity is a determination of whether a relationship or
co-variation exists between cause and effect variables.
This type of validity requires:
• Ensuring adequate sampling procedures
• Appropriate statistical tests
• Reliable measurement procedures
This is the degree to which a conclusion is credible or believable.
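
As an illustration only (the scenario and scores below are invented, not taken from this report), establishing statistical conclusion validity typically involves applying an appropriate statistical test to judge whether the cause-and-effect co-variation is real rather than due to chance. A minimal sketch:

```python
# Illustrative sketch (invented scores): an appropriate statistical test helps
# decide whether the treatment (cause) co-varies with the outcome (effect).
from scipy import stats

treatment_scores = [78, 85, 82, 90, 88, 84, 79, 91]  # hypothetical outcome scores
control_scores = [75, 80, 77, 83, 79, 81, 74, 82]

t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the co-variation is unlikely to be chance alone;
# adequate sampling and reliable measurement are still required for the
# conclusion to be credible.
```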

7. Criterion-related validity

Criterion-related validity (also called instrumental validity) is a measure of the quality of your measurement methods. The accuracy of a measure is demonstrated by comparing it with a measure that is already known to be valid. In other words, a measure has criterion-related validity if it correlates highly with other measures that previous research has shown to be valid.
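
A hedged sketch of the comparison described above: correlating scores from a new measure with scores from a measure already known to be valid. All numbers are invented for illustration.

```python
# Illustrative sketch (invented scores): criterion-related validity as the
# correlation between a new instrument and an established, validated measure.
from scipy.stats import pearsonr

new_measure = [12, 15, 14, 20, 18, 22, 17, 25]        # scores on the new instrument
criterion_measure = [30, 34, 33, 42, 40, 47, 38, 50]  # scores on the validated measure

r, p = pearsonr(new_measure, criterion_measure)
print(f"criterion validity coefficient: r = {r:.2f} (p = {p:.3f})")
# A high positive correlation indicates the new measure tracks the criterion.
```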

RELIABILITY
Reliability pertains to the consistency of the results of the instrument. If the instrument is administered to the same respondents a number of times and produces the same results, the instrument is regarded as reliable. For instance, if the same test is given to the same respondents twice, it is reasonable to expect consistency between the two scores.

Establishing Reliability of the Instrument


Besides ensuring that your instrument is valid, you need to evaluate the
consistency of results which your research instrument provides. Aspects of reliability
include homogeneity/ internal consistency, stability, and equivalence.

• Homogeneity/Internal Consistency – It refers to the extent to which a research instrument consistently measures all aspects of the construct under study (a computational sketch follows this list).
• Stability – It is a measure of the repeatability of the research instrument over time; that is, the instrument should give the same results whenever it is used.
• Equivalence – It accounts for the consistency of responses among multiple users of an instrument, or among alternate forms of an instrument.
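
For the homogeneity/internal consistency aspect in the list above, one commonly used index is Cronbach's alpha. It is not named in this report, so treat the sketch below, with its invented item scores, as an illustrative addition rather than the report's own procedure.

```python
# Illustrative sketch (invented data): Cronbach's alpha as one common index of
# internal consistency (homogeneity) for a set of items.
import numpy as np

# Rows = respondents, columns = items of the instrument.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # values near 1 suggest homogeneous items
```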

Factors Affecting Reliability


a. The wording of questions
b. The physical setting
c. The respondent's mood and the interviewer's mood
d. The regression effects
e. The nature of the interaction

Methods of determining the Reliability


A. Internal Consistency procedures – items or questions measuring the same phenomenon should produce similar results irrespective of their number in an instrument. These include the split-half technique.
a. Split-half Technique
   • Half of the items in a research instrument are correlated with the other half.
   • The questions are divided in half in such a way that any two questions intended to measure the same aspect fall into different halves.
   • The scores obtained by administering the two halves are correlated.
   • Reliability is calculated using the product moment correlation between the scores of the two halves (a computational sketch appears after this outline).
B. External Consistency procedures – compare findings from two independent processes of data collection with each other as a means of verifying the reliability of the measure. These include the test/retest method and parallel forms of the same test.
a. Test/Retest (repeatability test)
   • The instrument is administered once, and then again under the same or similar conditions.
   • The agreement between the test and retest scores indicates the reliability of the instrument: the greater the agreement, the higher the reliability (a correlation-based sketch appears after this outline).
   • Its advantage is that it allows the instrument to be compared with itself; its disadvantage is that respondents may recall the responses they gave in the first round.
b. Parallel Forms of the same test
   • Two instruments intended to measure the same phenomenon are constructed and administered to two similar populations.
   • The results obtained from one form are compared with those from the other; if they are similar, the instrument is considered reliable.
   • Its advantage is that it does not suffer from the recall problem and does not require a time lapse between the two tests; its disadvantage is that two instruments must be constructed instead of one.
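
The split-half procedure in part A of the outline above can be summarized in a short sketch. The odd/even split, the invented item responses, and the final Spearman-Brown step (the usual correction applied to the half-test correlation, although the outline does not name it) are illustrative assumptions.

```python
# Illustrative sketch (invented data) of the split-half technique: correlate
# scores on one half of the items with scores on the other half.
import numpy as np

# Rows = respondents, columns = items (1 = correct/agree, 0 = not).
items = np.array([
    [1, 0, 1, 1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 1],
])

odd_half = items[:, 0::2].sum(axis=1)    # scores on items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)   # scores on items 2, 4, 6, 8

r_halves = np.corrcoef(odd_half, even_half)[0, 1]  # product moment correlation
full_test = 2 * r_halves / (1 + r_halves)          # Spearman-Brown estimate (assumed step)
print(f"half-test r = {r_halves:.2f}, full-test reliability estimate = {full_test:.2f}")
```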
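
Part B of the outline, the external consistency procedures, can be sketched the same way: two independent sets of scores, from a test/retest or from two parallel forms, are compared by correlating them. The scores below are invented.

```python
# Illustrative sketch (invented scores): comparing two independent
# administrations (test/retest) or two parallel forms of an instrument.
from scipy.stats import pearsonr

first_scores = [55, 61, 47, 72, 66, 58, 69]   # test (or form A) scores
second_scores = [57, 60, 50, 70, 68, 55, 71]  # retest (or form B) scores

r, _ = pearsonr(first_scores, second_scores)
print(f"agreement between the two sets of scores: r = {r:.2f}")
# Strong agreement between the two administrations supports reliability.
```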
