Psych 243 Notes
Note:
- Often some characteristics are unknown
- Can only predict an event when ALL conditions and circumstances are known
- In social sciences it is rare to possess all information
Probability statement:
- if some conditions are satisfied the event will occur more often than if some of these conditions were
not met
Probabilistic explanation:
- Explanations that concede some uncertainty
A fundamental relationship between facts about the world, defined as empirically verifiable
observations, and theory, as an explanatory framework.
• Perceptual knowledge:
o Information about the world perceived through our senses
• Rational knowledge:
o Based on judgment and logical thought
o Provides explanations or reasons for the relationships between observations
• Fundamental relationship between:
1) Facts about the world – verifiable observations
2) Theory – explanatory framework
The scientific method
- Synthesis of:
1. Rationalistic method
2. Empirical methods
- Uses rationalism to develop theories
- Uses empiricism to test theories
- Science – continuous interplay of rational thought and empirical observation
“Experience without theory is blind, but theory without experience is mere intellectual play.”
– Immanuel Kant (1724 - 1804)
The combination of observation and the process of logical thought provide a deeper understanding of the
world
Facts and theory
• Facts give rise to theory since they raise a need for explanations of the observed phenomena. A
fact is an indisputable statement about reality.
o e.g., the sun rises in the East and sets in the West
o Facts, the cornerstone of knowledge, are not the product of random observation; observation
should be both selective and meaningful.
• Theory serves as a basis for the gathering of facts since it specifies the facts to be systematically
observed.
o e.g., learning theory may be used to explain anti-social and maladaptive behaviour at school
• Facts allow a researcher to find support for, improve, disprove or formulate
a theory.
This means in practice that a theory allows a researcher to predict some facts.
Any theory should provide an explanation of how relevant facts relate to one another.
• The process of theory reformulation is extremely important to the advancement of science, and
therefore of new knowledge.
If theories were not advanced, deeper understandings of social phenomena would not be achieved
and knowledge would become stagnant.
Theories are typically not proved; a theory is, at best, tenable.
There is always the possibility that some observations or data might contradict the theory.
• Theory allows for a classification and conceptualisation of facts.
It summarises and provides explanations of how relevant facts relate to one another.
• Theory predicts facts.
A useful theory then allows social scientists to make predictions about future phenomena in the
form of probability statements.
LAW OF SUPPLY & DEMAND => Existence of an illegal market
LAWS OF ASTRONOMY => eclipse of the sun
Relationship between facts and theory
Quantitative, qualitative and the mixed-method approaches
- Quantitative methods emerged from the philosophical belief that the world runs according to natural
laws and that the role of the scientist is to uncover or discover these pre-existing laws.
- Qualitative methods emerged out of more recent philosophical beliefs that truth is relative and that
knowledge is constructed by human beings.
- The quantitative approach relies extensively on numbers and statistics in the analysis and
interpretation of findings
o Data collected according to very particular set of steps
o Attempt to remain as objective and neutral as possible
o Analyzed using statistical procedures so that findings can be generalized from a relatively small
sample to the entire population
o Favor a more controlled environment
- The qualitative approach is often used when the problem has not been investigated before.
o This method uses smaller samples, the findings from which produce a better understanding of the
phenomenon being investigated.
o Research plan is more flexible and circular
o Investigates the problem from the respondents’ point of view
o Reality is interpreted from the respondent’s frame of reference
o Attempt to understand phenomena in their natural context
- Some investigations demand that both approaches be used in the same study in order to confirm or
elaborate each other; this is the mixed-methods approach.
o Might combine:
• structured questionnaires (quantitative) and open ended interviews (qualitative)
• surveys (quantitative) and case studies (qualitative), or
• questionnaires (quantitative) and observations (qualitative).
- In the quantitative approach, the researcher tends to follow what natural scientists do: collect data
(measurements or frequencies) according to a very specific set of steps, and in so doing attempt to
remain as objective and neutral as possible.
- Data is analysed using statistical procedures so that findings can be generalised from a relatively small
sample, to the entire population.
- Researchers are more inclined to qualitative approaches in which the plan of the research is more
flexible and circular.
- Investigates a problem from the respondents’ point of view.
- The focus of such a study is to determine what respondents think and feel about a particular
phenomenon or issue.
- Attempt to understand phenomena in their natural context, rather than the more controlled
environment favoured in quantitative approach.
- Despite the distinction between the quantitative and qualitative approaches, both are authentic
scientific approaches to developing knowledge in the social sciences.
A comparison between quantitative and qualitative approaches can be made based on the following
criteria:
Key Points:
1. Social sciences investigate social phenomena, whereas the natural sciences study laws and
properties of natural phenomena.
2. A fact is an indisputable statement about reality and a theory is set of ideas that explain a social
phenomenon.
3. Quantitative research emanates from the philosophy that the world operates on natural laws,
whereas qualitative studies are informed by the philosophy that the world and the truth is not
objective as meaning is socially constructed. Quantitative research is planned well ahead, and is
rigid and objective whereas qualitative research requires a flexible plan, which allows for
movement back and forth.
4. The quantitative approach uses scales and instruments in the collection of data whereas qualitative
approach uses observation, interviews and focus group discussions.
5. Data analysis in quantitative studies happens at the end of the data collection stage whereas in
qualitative research this occurs at the same time with data collection.
Notes created by Christéle Meyer
- Descriptive research
〜 Researcher may be interested in finding out the opinion of a group of people towards a particular
issue at a particular time.
〜 Research might involve a survey (for example, a population census), a one-shot case study, or in-
depth qualitative descriptions of the phenomenon.
〜 Various types of information can be collected in many different ways.
〜 Greater part of the data collected in these ways will be qualitative and more or less reliable,
depending on the source of the data and the skill of the researcher.
〜 The choice of one or the other technique is rarely arbitrary and depends on the aim of the research
and on the type of data available.
〜 Can be conducted using quantitative, qualitative or mixed-methods approaches.
- Correlational research
〜 Variables may change in the same or opposite directions
〜 Provides an indication of the strength and direction of the relationship
〜 Researcher is able to make a statement or hypothesis predicting the relationship between at least
two variables
〜 In some cases the relationship between variables can be stated precisely, while in others it cannot.
〜 Two variables co-vary, that is, that they change simultaneously.
〜 Correlational research does not give an explanation of how variables are related. It merely indicates
that they are related, and provides an indication of the strength and direction of the relationship.
Levels of significance
Significant:
- p < .05 (5% probability of making an error)
- p < .01 (1% probability of making an error)
- p < .001 (0.1% probability of making an error)
Not significant:
- p > .05
- H0 : There are no significant associations between hours studied and test mark obtained
- H1 : There are significant associations between hours studied and test mark obtained
- H0 : There are no significant associations between number of beers consumed and test mark
obtained
- H1 : There are significant associations between number of beers consumed and test mark
obtained
- Explanatory research
〜 When a causal relationship between variables can be stated, and an explanation can be found for
the variation of at least one variable, the research is explanatory.
〜 A deeper understanding of the relationships between variables
〜 Causality of this kind is difficult to demonstrate, a topic that is discussed in more depth in the next
section.
Example:
Which one of the 3 alternatives is correct?
- If A is the cause of B, then the occurrence of the cause, (A), is a necessary and sufficient condition for B
(the effect) to occur
- This means:
o Necessary – B can never occur unless A occurs first
o Sufficient – wherever A occurs, B will occur
- A is the cause of B only if the occurrence of A is a necessary and sufficient condition for B to occur, then
B is the effect of A and there is a causal relationship between B and A
Three conditions that are satisfied in a causal relationship:
Only if all three conditions are satisfied is a causal relationship between A and B assured. These conditions
are the following:
1. Proof of the co-variance of A and B: the researcher must demonstrate that a relationship exists
between the variation of A and B.
2. Proof of the non-spuriousness of the co-variance: the researcher must exclude a third variable which
alone can determine the variations of both A and B.
3. Proof of a stable time-order: the researcher must demonstrate that the cause always precedes the
effect.
〜 The plan of how to proceed in determining the nature of the relationship between variables is
called a research design.
〜 The purpose of correlational research is often only to detect the existence of a relationship
between variables (co-variance) that suggests a possible base for causality.
〜 Correlation does not necessarily imply causation, but causation always implies correlation
〜 Often explanatory research is not feasible. This is the case when it is not possible to manipulate the
variables or to assess the time-order.
〜 A correlational study will allow for an evaluation of the importance (or strength) of each
relationship, or the contribution of each factor to productivity.
〜 This is of great practical relevance.
〜 It will also indicate whether each of these factors promotes productivity (positive correlation) or
inhibits productivity (negative correlation).
〜 Therefore, a correlational study is not only useful when no clear causal relationship exists, but it
also allows for an estimation of the strength of the relationship between two variables even when
one variable is influenced by many others.
〜 The four types of research are applied to different aspects of the same research topic.
Example
- Bad research questions
- Not empirically testable, non-scientific questions
o Should prostitution be legalised?
o Should capital punishment be implemented in South Africa?
- General topics, not research questions
o Treatment of alcohol and drug abuse.
o Sexuality and ageing.
o Capital punishment and racial discrimination.
o Urban decay and gangs.
- Questions that are too vague and ambiguous
o Do police affect delinquency?
o What can be done to prevent child abuse?
- Good research questions
- Exploratory
o How do young Zambian mothers experience health services in their country?
- Descriptive
o How has the infant mortality rate in Zambia changed over the past 10 years?
o What factors do doctors feel explain changes in the infant mortality rate over the past ten
years?
- Correlational
o Is the infant mortality rate in particular hospitals related to the average income of people living
in the surrounding area?
o Do mothers with better education have greater access to healthcare?
- Explanatory
o Have changes in the training of medical personnel caused a drop in the infant mortality rate?
Research and the internet
The internet provides many advantages and opportunities.
1. Data collection can be quicker and more affordable. For example, questionnaires can be emailed
and interviews can be conducted using voice over internet protocols.
2. The researcher can reach respondents over a very broad geographic area, such as the whole of
Africa, or even the whole world.
3. Respondents can participate in data collection from the privacy and comfort of their own homes.
Key Points:
1. There are several different ways to divide the field of social research.
2. Quantitative, qualitative and mixed-methods research approaches are distinguished primarily by
the form of data collected.
3. Basic and applied research are distinguished by the desired outcome: pure knowledge or social
intervention.
4. Primary and secondary research are differentiated by whether the data was collected specifically to
answer the current research question, or for some other reason.
5. Exploratory, descriptive, correlational and explanatory research are distinguished by the aims of the
researcher and thus the methodologies employed.
6. A demonstration of causality demands (a) demonstration of co-variance; (b) refutation of
alternative explanations; and (c) demonstration of a stable time order.
7. The growth of the internet and increasing access to online resources offer both opportunities and
dangers to the social researcher.
[Diagram: antecedent, moderator, control and intervening variables positioned around the IV → DV relationship]
Variables to take into account
Moderator variables:
- Additional variables affecting the DV
- Factors measured, manipulated, or selected by the researcher to discover whether or not they modify
the relationship between the IV & DV
- Can be thought of as secondary independent variables, because moderator variables share some of the
characteristics of IVs
Control variables:
- Factors that are controlled by the researcher to cancel out or neutralize any effect they may otherwise
have on the observed phenomenon
Example
o When measuring height of children as a function of their age, one should control for their health
condition since malnutrition has a negative effect on growth
o control variable = same nutritional background
o IV = different ages
o DV = height
Antecedent variables
- Appears before the IV
- Determines the IV
o e.g., in the maize output example, the “financial situation of the farmer” could vary, and this
could influence the IV; it can be held constant
Intervening variable:
- a consequence of IV
- determines the variation of the DV
- falls between the IV and the DV
- can be held constant
Note:
o Antecedent, independent, moderator, intervening and dependent variables can be connected in
a causal chain or network
o Researchers should also be watchful for accidental connections and extraneous variables
Extraneous variable
- A variable that influences both the independent and dependent variables, giving the false impression of
a relationship between them
Note:
- Variables take on various roles in relation to different research questions
- The IV could actually depend on a different variable
- Or, the DV could serve as an IV predicting the variation in yet another variable
- Variables must be defined in a way that allows a systematic and accurate description or measurement
of their variation
Concepts:
Main functions:
1. to facilitate communication among human beings
2. to aid in the classification of the elements of reality and their generalization
3. building blocks of theories
For concepts to be useful:
- they must be defined in a clear, precise, non-ambiguous and agreed-upon way
- “speak the same language”
Conceptual and operational definitions
- A conceptual definition is the definition of a concept in terms of a set of other concepts.
o e.g. ‘a courageous person’ - is a ‘brave person’, ‘a person able to control fear in the face of
danger’, ‘a person who remains calm in the face of threatening events’…
- A conceptual definition cannot be true or false, but it may or may not be useful for communication.
Below are some properties that conceptual definitions should have in order to be useful.
1. A conceptual definition must denote the distinctive characteristics of that which is defined.
2. A conceptual definition should not be circular.
3. A conceptual definition should be stated positively.
4. A conceptual definition should be stated in clear and unequivocal terms.
- Even a very well-formulated conceptual definition does not fulfil the need of a researcher who wants to
assess the existence of some empirical phenomenon.
- This is the function of an operational definition.
- Thus an operational definition not only gives precise indications as to what the fundamental
characteristics of a concept are, but it also gives precise indications about how to observe, describe or
measure the characteristics under study.
- Stated in another way: an operational definition is based on the observable characteristics of an object
or phenomenon and indicates what to do or what to observe in order to identify these characteristics.
Example:
- “a hungry person”
- Conceptual def.: a person needing food
- Operational definition:
o a person deprived of food for 24 hrs
o a person who can eat a loaf of bread in 10 mins
o a person whose blood-sugar level is lower than a specified level
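The difference between the two kinds of definition can be made concrete in code: a conceptual definition only names other concepts, whereas an operational definition specifies an observable check. The functions below are an illustrative sketch; the 24-hour criterion comes from the example, while the 4.0 mmol/L blood-sugar cut-off is an assumed value used purely for illustration:

```python
# Conceptual definition: "a hungry person is a person needing food".
# This names another concept ("needing food") but specifies no measurement.

# Operational definitions state what to observe or measure.

def is_hungry_by_deprivation(hours_since_last_meal: float) -> bool:
    """A person deprived of food for 24 hours or more."""
    return hours_since_last_meal >= 24

def is_hungry_by_blood_sugar(blood_sugar_mmol_l: float,
                             threshold: float = 4.0) -> bool:
    """A person whose blood-sugar level is below a specified level
    (the 4.0 mmol/L threshold is a hypothetical choice)."""
    return blood_sugar_mmol_l < threshold

print(is_hungry_by_deprivation(30))   # True
print(is_hungry_by_blood_sugar(3.5))  # True
```

Note that each operational definition gives precise, repeatable instructions, which is exactly what a conceptual definition cannot do on its own.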
Hypothesis formulation
1. Tentative, concrete and testable answers to research questions
2. Suggested answer to a problem
3. a way to predict a relationship
4. has to be tested empirically before it can be accepted and incorporated into a theory
5. if not supported by empirical evidence, must be rejected
The following are the main characteristics of usable hypotheses.
1. A hypothesis must be conceptually clear.
2. A hypothesis should have empirical referents.
3. A hypothesis must be specific.
4. A hypothesis must be testable with available techniques.
Hypotheses concerning the cause of marriage instability could be formulated in various ways, as follows:
1. The young age of the partners at the time of marriage (under 23 years) has an effect on the stability
of their marriage.
2. The young age of the partners at the time of marriage (under 23 years) has adverse influences on
the stability of the marriage.
3. The young age of the partners at the time of marriage (under 23 years) has positive influences on
the stability of the marriage.
The direction of a suspected causal relationship between variables:
- The first formulation is non-directional, since it does not indicate the direction in which the dependent
variable will be influenced by the independent one.
o ‘young age of partners (under 23yrs) has an effect on the stability of their marriage’
- The second and third formulations indicate a negative and positive influence respectively and they are
thus called directional hypotheses. (preferred)
o ‘the young age of partners at time of marriage (under 23 yrs) has adverse influences on the
stability of the marriage’= negative infl.
o ‘the young age of partners at time of marriage (under 23 yrs) has positive influences on the
stability of the marriage’ = positive infl.
Replication
- The principle of replication says that a hypothesis needs several tests with consistent and repeated
support to gain broad acceptance.
Disconfirmation
- This logic suggests that the more alternatives we test a hypothesis against, the greater our confidence
in it becomes.
- A curious aspect of hypothesis testing is that researchers treat evidence that supports a hypothesis
differently from evidence that opposes it.
- Disconfirmation is associated with the idea of falsification and with the use of null hypotheses. The null
hypothesis is a hypothesis that states that the variables in question are not in fact related.
- Negative evidence is more significant because the hypothesis becomes ‘tarnished’ if the evidence
contradicts it.
- Positive or confirming evidence for a hypothesis is less critical because plausible rival hypotheses may
make the same prediction.
- A hypothesis is never proved, but it can be disproved
Types of hypotheses:
- Hypothesis: a way to predict a relationship
o vs.
- Null hypothesis: does the opposite > no relationship between the IV and DV
- Null hypothesis thinking:
o rests on assumption that researchers try to discover a relationship
o the researcher assumes that the null hypothesis is correct, unless otherwise shown through
alternative evidence.
Null hypothesis vs Alternative Hypothesis
H0 : There are no significant associations between hours studied and test mark obtained
H1 : There are significant associations between hours studied and test mark obtained
H0 : There are no significant associations between number of beers consumed and test mark obtained
H1 : There are significant associations between number of beers consumed and test mark obtained
Key points
1. Broad research topics must always be narrowed down to clearly defined research questions.
2. Research questions typically refer to a relationship between two or more concepts or variables.
3. Variables change from one observation to another whereas constants remain the same.
4. The research question concerns the influence of the independent variable(s) upon the dependent
variable(s).
5. The relationship between the independent and dependent variable(s) is also influenced by moderator,
control, antecedent and intervening variables.
6. Researchers should be careful to ensure that the relationship being studied is not the result of some
other extraneous variable that has not been included in the study.
7. All concepts and variables must be carefully defined with both conceptual and operational definitions.
2. Cohort designs
- Cohorts are likely to be similar to each other since their environment is the same except for the
treatment variable.
o e.g., grade 6 class at a school in 2005 is likely to have similar demographic characteristics as
the grade 6 class at the same school in 2006
- Testing occurs at different points in time as cohorts follow each other through the system (in the
design diagram, '=' represents the use of cohorts)
- Both cohorts are pre-tested, receive the intervention, and are post-tested, at different times
- Advantage of this design: confidence in the similarity of the groups prior to treatment (at pre-test)
- Cohort designs are strengthened when the researcher can argue that the two cohorts share similar
environments, except for the interventions that they receive.
Pre-test/post-test cohort design
- A more sophisticated cohort design is the pre-test/post-test cohort design.
- The main advantage of this design is that the researcher is more confident that the groups were similar
at pre-test, that is, prior to treatment.
3. Time-series design
- Time-series designs represent an improvement over the one group pre-test/post-test design of the pre-
experimental group.
- Several measurements taken before (pre1, pre2 and pre3) and after (post1, post2 and post3) the
event/treatment are made
- The effects of history, maturation, test effects and regression towards the mean are assessed by
comparing each measurement with the measurements taken before and after it:
o Differences between measurements taken before the event (for example, pre3 - pre2), or
between measurements taken after the event (for example, post2 - post1), cannot be caused by
the event and so must be due to variables such as history, maturation, regression towards the
mean and test effects
o A change between the before and after measurements that exceeds this background variation
therefore points to an effect of the event
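These comparisons can be sketched numerically. The measurements below are hypothetical, chosen only to show how within-series differences are separated from the pre-to-post jump:

```python
from statistics import mean

# Hypothetical time-series measurements around an intervention.
pre  = [20, 21, 22]    # pre1, pre2, pre3
post = [30, 31, 33]    # post1, post2, post3

# Differences WITHIN the pre series and WITHIN the post series reflect
# history, maturation, test effects and regression towards the mean...
background_drift = mean([pre[i + 1] - pre[i] for i in range(2)] +
                        [post[i + 1] - post[i] for i in range(2)])

# ...whereas the jump BETWEEN the two series suggests a treatment effect.
jump = post[0] - pre[-1]
print(f"background drift per step: {background_drift:.2f}")
print(f"pre-to-post jump: {jump:.2f}")
```

A jump well above the background drift is what the time-series design treats as evidence of the event's effect.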
Experimental designs
- Most experimental designs that meet the objectives of explanatory research are characterised by their
use of randomisation to create two or more equivalent groups.
- The use of randomisation negates the difficulties of ensuring that the groups are identical.
- ‘R’ used to identify groups created on the basis of random procedures
Randomisation
- Randomisation requires that every participant involved in the study has an equal chance of being
assigned to any of the groups of the study.
- This can be achieved by:
1. Identifying the entire group of participants,
2. Randomly dividing this group into two or more subgroups (depending on the chosen design)
through the use of random number tables, coin flipping or various other randomisation
techniques.
- It is important here to distinguish between random selection and random assignment.
Random selection vs. random assignment:
- Random selection means that each member of the population has an equal chance of being selected
into the sample.
- Random assignment (randomisation) means that each member of the sample has an equal chance of
being assigned to any of the conditions or groups in the study.
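The distinction can be sketched in a few lines using the standard library's `random` module; the population of 100 numbered people and the group sizes are hypothetical:

```python
import random

random.seed(42)

# Hypothetical population of 100 people, identified by number.
population = list(range(100))

# Random SELECTION: every member of the POPULATION has an equal
# chance of being drawn into the sample.
sample = random.sample(population, k=20)

# Random ASSIGNMENT: every member of the SAMPLE has an equal chance
# of landing in either condition of the study.
shuffled = random.sample(sample, k=len(sample))
experimental_group = shuffled[:10]
control_group = shuffled[10:]

print(len(experimental_group), len(control_group))  # 10 10
```

Selection decides who is studied at all; assignment decides which condition each studied person receives.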
The advantage of randomly generated groups:
o the researcher starts the experiment with two or more equivalent groups
o if Group A is subjected to the treatment, the researcher can be reasonably sure that any difference
between the groups (A and B) thereafter is due to the effects of the treatment alone.
- If only one group is subjected to the treatment, the researcher can be reasonably sure that any
difference between the groups thereafter is due to the effects of the treatment and nothing else.
Groups
- The group that does not receive the treatment is called the control group
o The purpose of a control group is to compare treated participants with non-treated
participants.
o In this way the effect of treatment versus no treatment can be examined.
- While the group that receives the treatment is called the experimental group.
Placebo control group
- The placebo control group is another type of control group.
- Respondents or participants in a placebo control group are led to believe that they are receiving a
viable treatment, although in reality what is given to them is supposed to be ineffective.
- The Hawthorne effect is relevant here in accounting for the differences between pre- and post-test
scores.
o Generally accepted in research that just by receiving attention from a researcher, participants’
post-test scores might change
Matching
- Of course, it is impossible to randomly assign participants to groups when the independent variable
cannot be manipulated.
o An obvious example of a variable that cannot be manipulated is sex.
o The same is true of religious affiliation, cultural background, income, age, place of residence
and many other characteristics.
- Other techniques must be used to establish equivalent groups
o e.g. matching the elements of the group
- Pairs of members having identical characteristics considered relevant for the research are formed
o e.g. pairs matched on age, sex, health; each member of each pair is then randomly assigned to
a different group (ensures equivalent properties in the two groups)
- When one of these factors is the independent variable, the researcher must use other techniques to
establish equivalent groups.
- One procedure is matching the elements of the group.
o In this case, the control for the equivalence of the two groups is based on having knowledge of
the main characteristics of the elements, persons, events or objects to be investigated.
o In this way, the two groups so constituted will have equivalent properties and they are called
dependent groups.
- Another method of matching, which has the advantage of being more efficient especially in the case of
large groups, is aimed at forming groups that have the same global characteristics.
o This matching is not as precise as in the first case
o Matching must be done for all relevant factors
- The choice between randomisation and matching often does not exist.
- Matching procedures can only be used if enough variables relevant to the research are known.
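The pair-matching procedure can be sketched as follows. The participants and ages are hypothetical, and for simplicity age is assumed to be the only relevant characteristic:

```python
import random

random.seed(7)

# Hypothetical participants with one relevant characteristic (age).
participants = [("P1", 18), ("P2", 19), ("P3", 25), ("P4", 24),
                ("P5", 31), ("P6", 30), ("P7", 40), ("P8", 41)]

# Matching: sort on the relevant variable, pair adjacent participants,
# then randomly assign one member of each pair to each group.
participants.sort(key=lambda p: p[1])

group_a, group_b = [], []
for i in range(0, len(participants), 2):
    pair = [participants[i], participants[i + 1]]
    random.shuffle(pair)
    group_a.append(pair[0])
    group_b.append(pair[1])

# The two groups now have near-identical age profiles: dependent groups.
print([age for _, age in group_a])
print([age for _, age in group_b])
```

In real research the matching would have to cover all relevant factors, not just one, which is why enough must be known about the participants before this technique can be used.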
Experimental design
1. Pre-test/post-test control group design
- The pre-test/post-test control group design requires participants to be randomly assigned to two
groups:
1. the experimental group
2. the control group
- Note that a control group is similar to the comparison group of the intact group’s design, except that it
is arrived at through random assignment.
- Both groups are measured at the beginning of the study.
- Thereafter, the experimental group is subjected to the event or treatment.
- Thereafter, both groups are again measured
- The pre- and post-test scores of both groups can now be compared in order to assess the effect of the
event
Strengths of this design:
- randomization: expected that the two groups will be equivalent at pre-test
- The effects of history, maturation and regression towards the mean can be measured
Weaknesses of this design:
- Giving the test twice – a pre-test and a post-test – might sensitise participants to the material
- This may make post-test scores different to what they might have been without a pre-test
- This may also affect external validity: the fact that participants had a pre-test may have influenced
the way they respond to the post-test
- However, if the two groups are truly randomly assigned, this sensitisation affects the control group
equally, and is therefore no real threat to internal validity
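Comparing the pre- and post-test scores of both groups amounts to a difference-in-differences calculation. A minimal sketch with hypothetical scores:

```python
from statistics import mean

# Hypothetical pre- and post-test scores for randomly assigned groups.
experimental_pre  = [50, 52, 48, 51]
experimental_post = [60, 63, 58, 61]   # received the treatment
control_pre  = [49, 51, 50, 52]
control_post = [51, 52, 50, 53]        # no treatment

# Change in each group; the control group's change captures history,
# maturation and test effects, which randomisation makes comparable
# across the two groups.
exp_change = mean(experimental_post) - mean(experimental_pre)
ctl_change = mean(control_post) - mean(control_pre)

treatment_effect = exp_change - ctl_change
print(f"estimated treatment effect: {treatment_effect:.2f}")
```

Subtracting the control group's change removes the shared non-treatment influences, leaving an estimate of the treatment's effect alone.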
2. Post-test-only control group design
- An experimental design that has virtually all the experimental rigour of the pre-test/ post-test control
group design is the post-test-only control group design.
- Randomisation aims at ensuring that the experimental and control groups are identical
- Except for the fact that only the experimental group receives the treatment or event.
- Therefore the pre-test/post-test comparison incorporated into the previous design may be
superfluous.
- All the data is collected at one time
o Problems of maturation, history, test effects and regression towards the mean do not arise
- Because there is no pre-test, it is impossible to be sure that the two groups are equivalent to start with
- The simpler design represented by Figure 10.8 does away with the pre-testing.
3. Factorial designs
- Factorial designs can be thought of as generalisations of either the pre-test/post- test control group
design or the post-test-only control group design.
- The important difference, however, is that factorial designs can incorporate two or more independent
variables, whereas the previous designs only allow for a single independent variable.
o E.g., a factorial design with two IV’s (each having two levels)
o e.g., IV 1= sex (males & females)
o IV 2 = age (18-23yrs and 24-30yrs)
- The following figure illustrates how an analysis of the relationship between these two independent
variables demands four experimental groups in the design.
- To analyse the relationship between these two variables we need four experimental groups in the
design
- The groups are exposed to different combinations of the two (or more) IVs and then post-tested
- Differences between the levels of the IVs are looked for > main effects
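The four groups required by a 2×2 factorial design can be enumerated mechanically; a sketch using the sex and age levels from the example above:

```python
from itertools import product

# 2x2 factorial design from the example:
# IV1 = sex (two levels), IV2 = age band (two levels).
sex = ["male", "female"]
age = ["18-23", "24-30"]

# Every combination of levels needs its own experimental group.
groups = list(product(sex, age))
for i, combo in enumerate(groups, start=1):
    print(f"Group {i}: {combo}")

print(len(groups))  # 4 groups for a 2x2 design
```

Adding a third two-level IV would double the number of groups to eight, which is why factorial designs grow expensive quickly.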
Developing a research design
- Very often research is hampered by constraints of resources, participants and time.
- Rarely can any of the designs discussed in this chapter be imposed directly onto actual research
projects.
- Rather, these designs explain the logic of social research and should serve as a foundation for good
research design.
- The researcher’s work is complicated by many sources of bias and error which must each be dealt with
as effectively as possible in order to ensure the highest quality of research.
- In developing a design for a particular research problem, there are several questions to be considered.
1. Is a cross-sectional or longitudinal design more appropriate?
Studies involving change over time virtually always use longitudinal designs.
Cross-sectional designs are most useful for describing populations, and differences between
populations, at a particular moment in time.
2. If a longitudinal design is used, how many repeated observations will be made?
How will the problems of history, maturation, test effect and regression to the mean be
addressed?
Is there any danger of experimental mortality affecting the usefulness of the design?
3. Is it possible to create equivalent groups either through randomisation or matching?
If this is possible, then experimental designs are the most appropriate.
If not, the researcher is forced to use a quasi-experimental design.
It is important for social scientists to be very critical of their own and their colleagues’ research designs.
Summary of sources of bias in research design
- Because every research project has its own particular set of problems, a complete list of the possible
sources of bias in research does not exist.
- The quality of both quantitative and qualitative research can be affected to different extents and in
different ways by these factors.
- There are, of course, many other sources of bias that relate more specifically to methods of data
collection, subject selection, data presentation and analysis.
- These are discussed in other chapters and only those relating to research planning and design are dealt
with here.
History and maturation
- Problematic for designs that require data to be collected at more than one time
- Solution:
1. Collect data at a single point in time
2. Pre-test/post-test control group and time-series design, where repeated measurements allow the
researcher to estimate the effects
- History refers to changes which occur in the world, other than those intended by the researcher, and
which might affect the results.
- Maturation refers to changes that occur within participants and thus confound the researcher’s design.
Regression towards the mean
- Regression towards the mean arises when researchers base their conclusions upon single
measurements.
- If a pre-test score is unusually high due to chance factors, the post-test score is likely to be lower,
regardless of any changes due to the experimental variables.
- The most effective solution to this problem is:
1. The repeated testing adopted by time-series designs.
2. In the case of qualitative research, the freedom of selecting the participants as the research
develops, gives the opportunity to compensate for this tendency.
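The tendency can be demonstrated with a small simulation (a sketch with made-up numbers, not from the notes): participants are selected because of unusually high pre-test scores, and with no intervention at all their post-test mean drifts back toward the population mean.

```python
import random

random.seed(1)

TRUE_SCORE = 50   # every participant's stable underlying score

def observe():
    # each measurement = true score + chance factors
    return TRUE_SCORE + random.gauss(0, 10)

pre = [observe() for _ in range(10_000)]
post = [observe() for _ in range(10_000)]

# select only those whose pre-test was unusually high (top 10%)
cutoff = sorted(pre)[9_000]
chosen = [i for i, score in enumerate(pre) if score >= cutoff]

mean_pre = sum(pre[i] for i in chosen) / len(chosen)
mean_post = sum(post[i] for i in chosen) / len(chosen)
# with no treatment whatsoever, mean_post falls back toward 50
```

Because selection was based on a single measurement inflated by chance, the repeated measurement of the same group regresses toward the mean, which is exactly why time-series designs with repeated testing guard against this error.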
Test effect
- In quantitative research, prior exposure to a test or measurement technique can bias a person’s
responses.
- This is particularly problematic when participants are tested and retested, as in all pre-test/post-test
and time-series designs.
- Effects: boredom, practice and fatigue
- Solution:
1. Reduce number of times participants complete the test
2. Vary the test slightly
Instrumentation
- Some of the many difficulties of developing appropriate instruments are discussed in a later chapter.
- However, instruments present a problem for design as well, particularly when different instruments are
used to test the same concept.
- Solution:
1. The researcher must be sure that the instruments are equally sensitive and accurate, otherwise
changes between the two measurements might be due to differences in the instruments and not
due to any differences within the participants.
2. Instruments should be pre-tested on a pilot group
3. In qualitative studies, instruments should be in accordance with cultural and other environmental factors
Experimental mortality
- Often participants drop out of a research project during the data-collection procedures.
- This is particularly true of research that takes place over an extended period of time.
- Time-series designs in particular are vulnerable to this source of error.
- Possible that participants who drop out of a study are systematically different from those who remain
with the study until the end
- Solution:
1. Design the research so that it is convenient for the participants to participate until the end
2. Impress upon the participants the importance of their continuing co-operation
Reactive effects
- When participants are aware of being observed, they behave in unnatural ways, that is, they ‘react’ to
being observed.
o E.g., Hawthorne effect
- One example of a reactive effect is test anxiety.
- The measuring instrument may increase the arousal levels of some participants and thus influence their
scores.
- Similarly, some participants try to please the experimenter and provide those results that they believe
are desired.
- Others will do just the opposite and try to confound the study in order to find out how the researcher
will react.
- Solution:
1. Use unobtrusive techniques of data collection
2. Not always possible – attempt to collect data in a way that causes the least disturbance to the
participants' lives
Selection bias
- Since many studies incorporate more than one group of participants, it is important that the
quantitative researcher be sure that these groups are equivalent to each other in all respects except
for the independent variable of the study.
- Little can be done if the IV is beyond the researcher’s control and a quasi-experimental design has to be
used.
- Solution:
1. When the researcher is in control of which participants fall into which group:
o The best technique for a large group of participants is random assignment
o When there are only a few participants and there is reason to believe that randomisation might
not result in equivalent groups, the option of matching is available
- Qualitative researcher is less affected – select participants in the course of the research and thus more
flexible
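The random-assignment solution above can be sketched in a few lines (participant labels are hypothetical):

```python
import random

random.seed(3)
participants = [f"p{i:02d}" for i in range(40)]   # hypothetical pool of 40

# random assignment: shuffle the pool, then split it in half, so every
# participant has the same chance of ending up in either group
random.shuffle(participants)
experimental = participants[:20]
control = participants[20:]
```

With a large enough pool, chance alone tends to balance both known and unknown participant characteristics across the two groups, which is what makes the groups equivalent at the start.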
Relationship between internal and external validity
- The potential of the research to achieve its aim (answering the research question) is referred to as the validity of the design
- Measured in two separate but related dimensions: internal and external validity
- Good research design leads to high internal validity, where the researcher is able to answer the
research question convincingly.
- External validity is concerned with the question, ‘To what extent do the results obtained in this study
apply to the population being studied and to contexts different to those of this specific study?’
- Examine the extent to which the results of the study can be generalised
- Consider two factors:
1. Sample must reflect the experiences of the population as fully as possible
2. Researcher must make sure that the study simulates reality as closely as possible
- In qualitative research, external validity is often referred to as transferability.
- Sometimes, in intervention research, the distinction is made between efficacy and effectiveness
research.
- Efficacy studies test the success of one treatment over another under tightly controlled lab-like
conditions.
- Thus, in an efficacy study, the researcher tries to control and isolate the variable of interest.
- In the real world it is often not possible to give people transport, childcare and pay them to come for
treatment.
- In fact, it is supposed to be the other way round. In the real world we conduct effectiveness studies, to
see if there is actually uptake of the service and what its effect is if people have competing demands on
their time.
- Tend to be inversely related
- studies with high internal validity often have low external validity and vice versa
- Studies in a particular social context > high external validity and low internal validity
- Studies in more controlled environments > high internal validity and low external validity
Key points
1. A design in quantitative research is a detailed outline, spelled out in clear and certain terms, for the
testing of hypotheses. It is a specification of the operations to be performed in order to test a
specific hypothesis under given conditions
2. In the qualitative case, as the many steps of literature review, theory building, sampling and data
collection do not constitute separate activities but are continuously interrelated, the planning and
design of the entire process remain more flexible.
3. The purpose of research design is to ensure high internal validity. In quantitative research, internal
validity is concerned with whether the observed changes in
the dependent variable actually relate to changes in the independent variable. In qualitative
research, internal validity is concerned with whether the researcher’s method of data collection
and analysis adequately addresses the research question.
4. When dealing with research design, three aspects must be considered: the focus of research, the
unit of analysis and the time dimension.
5. There are three types of research designs: pre-experimental (exploratory, descriptive), the quasi-
experimental, and the experimental designs.
6. Experimental designs, in particular, use some specific methods such as randomisation, placebo
control group and matching.
Notes created by Christéle Meyer
Chapter 11 – Sampling
The purpose and types of sampling
- The entire set of objects or people that is the focus of a research project and about which the
researcher wants to determine some characteristics is called the population.
o e.g., all primary school teachers
- The subset of the whole population, which is investigated by a researcher and, in the case of
quantitative research, whose characteristics will be generalized to the entire population, is called the
sample.
o e.g. selecting 100 primary school teachers from a full list of primary school teachers
- In each sample, a car, a house or one primary school teacher constitutes an element or unit of analysis.
o The person or object from whom the social researcher collects data
o e.g., one primary school teacher
- Specific values or quantities that relate to the population as a whole, are called population
parameters.
o e.g., the average age of all primary school teachers
- When the corresponding values or quantities are drawn from the sample, they are called sample
statistics (or simply statistics).
o e.g., the average age of the 100 teachers in the sample
- Statistics are therefore estimates of population parameters.
- The process of generalizing from findings based on the sample to the population is called statistical
inference.
o we try to draw inferences about the unknown population parameters from the known sample
statistics
- Sampling = certainty is abandoned in favor of probability
Sampling theory
Quantitative research:
- Aims at testing a hypothesis on a representative sample, so that the results can be generalized to the
whole population.
- It is thus deductive and tends to simplify the complexity of reality.
- On the other hand, qualitative research aims at acquiring a deeper insight into a complex phenomenon
that might be quite specific and unique.
- The study of the relationship between a population and the samples drawn from it
- Objective - to draw inferences about the unknown population parameters from the known sample
statistics
- Statistical inferences – process of generalizing from findings based on the sample to the population
Qualitative research:
- Aims at acquiring a deeper insight into complex phenomenon that might be quite specific and unique
- Is concerned with studying people in their natural context.
- It is thus inductive and holistic.
- It does not aim at generalizing the results to the population but aspires towards some aspects of the
new knowledge and understanding being transferrable to other units of the population.
- One of the objectives of sampling is to draw inferences about the unknown population parameters
from the known sample statistics.
- These are obtained by collecting data from the sample.
- Sampling means abandoning certainty in favor of probability.
- Probability is understood as the likelihood of an event happening.
The main advantages of sampling, as compared to the collection of data on the whole population, are the
following:
1. Gathering data on a sample is less time consuming
2. Less costly since the costs of research are proportional to the number of hours spent on data
collection. (e.g., travel expenses, cost of reproducing data)
3. Sampling may be the only practical method of data collection.
This is the case in studies where the property under investigation necessitates the destruction of
the object.
When testing the resistance of an object to wear and tear, such as testing the lifespan of an electric
light bulb, one must wait till the bulb is ‘dead’.
It would be inadvisable to test an entire population.
This is called destructive sampling and, in this case, testing can only be realistically conducted on a
sample.
4. Sampling is a practical way of collecting data when the population is infinite or extremely large,
thus making a study of all its elements impossible.
Main sampling concepts
Good sampling implies:
- a well-defined population
- an adequately chosen sample
Good sampling must satisfy different criteria for quantitative and qualitative research.
1. Quantitative research.
o A sample is adequate if it enables the obtained results to be generalized to the whole
population.
o The sample is selected before starting data collection and analysis.
o Based on an estimate of how representative the sample is of the whole population
2. Qualitative research.
o A sample is adequate if it allows all possibilities or aspects of the researched phenomenon
to be identified.
o In other words, when the researcher reaches the conclusion that collecting more data and
increasing the sample does not bring any new information, data saturation has been
achieved.
§ Data saturation: collecting more data or increasing the sample does not bring any
new information
o In this case, the sample is partly selected in an ongoing way during the process of data
collection and analysis.
o One must have a well-defined population and an adequate sample in both cases.
A well-defined population
- A population, sometimes referred to as a ‘target population’, is the set of elements that the research
focuses upon.
- It is essential to describe the target population accurately.
- This can be done most effectively by clearly defining the properties to be analyzed. (establishes
boundary definitions)
- Compile a list of all elements of this population
- In quantitative research it is done by providing an operational definition.
- Once this is done it should be possible to compile a list of all elements of the population under
investigation or at least to determine whether an element belongs to that population.
- Once an operational definition is given, boundary conditions can be established which make it easy to
ascertain whether an element belongs to that population.
The sample
- Quantitative research - must have properties which make it representative of the whole population.
- Representative sample – to determine samples that best represent a population so as to ensure
accurate generalisation of results
Sampling frame:
- List of all units from which the sample is to be drawn
- Exclude no element of the population under investigation
- All elements of the population must have the same chance of being drawn into the sample
Sampling for Quantitative Research
- Quantitative sampling: Although a sample is, by definition, a subset of the population, to be useful it
must have all the properties of the population to be representative of the whole.
- Thus, one of the major issues in sampling is determining samples that best represent a population to
allow for an accurate generalization of results.
o Such a group is called a representative sample.
- The first means of ensuring a representative sample is the use of a complete and correct sampling
frame.
o This is the list of all units from which the sample is to be drawn.
- An inadequate sampling frame that discards parts of the target population has been the cause of many
poor research results.
- An even stricter requirement would be that all elements of the population should have the same
chance of being drawn into the sample or at least that the probability of this happening can be
specified.
Probability or random sampling
- When the probability of including each element of the population can be determined.
- Possible to estimate the extent to which the findings based on the sample are likely to differ from what
would have been found by studying the population
Non-probability sampling
- When the probability of including each element of the population in a sample is unknown.
- Advantages:
o Practical advantages
o e.g. cheaper, faster and quite adequate for homogenous populations
- Disadvantages:
o Difficult to estimate how well the sample represents the population
o Generalization therefore highly questionable
o This disadvantage can be reduced by enlarging the sample
- Some elements might even have no chance of being included.
- Although it is difficult to determine the extent to which non-probability samples are representative of
the population, they have some practical advantages.
- When the necessary population lists are not available, non-probability sampling remains the only
possibility for the researcher.
- Non-probability sampling is also almost always cheaper, faster and often adequate for homogeneous
populations.
- Finally it should be noted that to some extent the disadvantages of non-probability sampling could be
reduced by enlarging the sample.
Probability sampling for quantitative research
Most common sampling procedures in probability sampling:
1. Simple random sampling
2. Interval or systematic sampling
3. Stratified sampling
4. Multi-stage probability sampling
Simple random sampling:
- “Random”, in the scientific sense, expresses the idea of chance being the only criterion for selection.
- The selection of an element from a population is called random when the chance, likelihood or
probability of being included in the sample can be calculated for each element of the population.
- Accordingly, simple random sampling is a sampling procedure that provides equal opportunity of
selection for each element in a population.
- There are various techniques for selecting randomly.
- The most common are the lottery techniques where a symbol for each unit of the population is placed
in a container, mixed well and then the ‘lucky numbers’ drawn that constitute the sample.
- A more sophisticated method, particularly useful for large populations, is the use of random number
tables or electronic random number generators.
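An electronic random number generator does the lottery technique's job directly; a minimal sketch (the teacher list is hypothetical):

```python
import random

# hypothetical sampling frame: a complete list of N = 500 teachers
population = [f"teacher_{i:03d}" for i in range(1, 501)]

# every element has the same chance of being drawn, and no element
# can be drawn twice (sampling without replacement)
sample = random.sample(population, k=50)
```

This satisfies the definition above: the probability of inclusion is the same, and calculable, for every element of the population (here 50/500 = 0.1).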
Random number tables:
Interval or systematic sampling:
- In accordance with simple random sampling
- This technique, instead of relying on random numbers, is based on the selection of elements at equal
intervals, starting with a randomly selected element on the population list.
Advantage:
- This sampling procedure is simpler and quicker than the use of random numbers.
Unfortunately, this method also has constraints.
- Like simple random sampling it relies on the availability of a complete unbiased population list.
- Moreover, this list must not have any cyclical or periodic characteristics.
e.g. we require a sample of 50 units from a population of 500
> k = length of interval
> k = N/n
> k = 500/50 = 10
> thus every 10th unit should be selected for the sample, with the starting element being any number
between 1 and k (here, between 1 and 10)
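The worked example above (k = N/n = 500/50 = 10) translates directly into code:

```python
import random

population = list(range(1, 501))   # population list, N = 500
n = 50                             # desired sample size
k = len(population) // n           # interval length: k = N/n = 10

start = random.randint(1, k)       # random starting element between 1 and k
sample = population[start - 1::k]  # that element, then every k-th one after it
```

Note how only one random choice (the starting element) is needed, which is why the procedure is quicker than drawing 50 random numbers.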
Stratified sampling:
- Although many samplings are performed, each is done for a relatively small population only.
- This increases availability of adequate lists and facilitates selection of a simple random sample without
decreasing the quality of the sample in any way.
- Stratified sampling, on the other hand, by preserving proportions even of very small samples, will allow
for any small minority to be properly represented.
- Of course, if the population is very homogenous with no marked differences of opinion between sexes,
or if both sexes are fairly equally represented, simple random and stratified sampling will lead to
similar results.
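The point about preserving proportions can be illustrated with a sketch (the strata and sizes are hypothetical): a simple random sample is drawn inside each stratum, with places allocated in proportion to the stratum's share of the population.

```python
import random

random.seed(0)
# hypothetical population of 500: 450 women and a small minority of 50 men
population = [("F", i) for i in range(450)] + [("M", i) for i in range(50)]

def stratified_sample(pop, n):
    # group the population into strata (here, by sex)
    strata = {}
    for unit in pop:
        strata.setdefault(unit[0], []).append(unit)
    # draw a simple random sample within each stratum, in proportion
    # to the stratum's share of the whole population
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(pop))
        sample.extend(random.sample(members, share))
    return sample

sample = stratified_sample(population, n=100)
# the 10% minority gets exactly 10% of the sample, which simple
# random sampling would only achieve on average
```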
Multistage probability sampling
- The principle underlying multistage sampling is to start by sampling a population which is much more
general than the final one.
- In a second stage, based on the first sample, a new population is considered, one that is less general
than the first one, and a new sample is subsequently determined.
- The procedure is continued until the population to be investigated is reached and a final sample is
drawn.
- At each stage sampling is done in a random way (using one of the 3 previously mentioned sampling
methods)
- While not as accurate as simple random sampling, the advantages of multistage sampling become
apparent when other sampling methods fail due to a lack of complete lists of elements for a population
under investigation. > reduces costs.
- Multistage sampling can cut down expenses by reducing the cost of compiling long lists and by reducing
the travel expenditure necessary when respondents are spread over a large area.
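As a sketch (all names hypothetical), note that only the lists for the regions and schools actually selected ever need to be compiled:

```python
import random

random.seed(2)
# hypothetical three-stage frame: 5 regions, 10 schools each, 20 teachers each
frame = {
    f"region_{r}": {
        f"school_{r}_{s}": [f"teacher_{r}_{s}_{t}" for t in range(20)]
        for s in range(10)
    }
    for r in range(5)
}

# stage 1: randomly select 2 of the 5 regions
regions = random.sample(sorted(frame), 2)
# stage 2: within each selected region, randomly select 3 schools
schools = [(r, s) for r in regions for s in random.sample(sorted(frame[r]), 3)]
# stage 3: within each selected school, randomly select 5 teachers
sample = [t for r, s in schools for t in random.sample(frame[r][s], 5)]
```

Each stage uses random selection, so the final sample of 2 × 3 × 5 = 30 teachers is probabilistic even though no single list of all 1 000 teachers was ever required.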
Non-probability sampling for quantitative research
Most common sampling procedures in non-probability sampling:
1. Convenience or availability sampling
2. Purposive sampling
3. Quota sampling
4. Cluster sampling
Convenience or availability sampling
- Easily accessible, available and convenient sample is drawn.
o e.g., a convenient place is chosen where one is assured of finding many people, e.g. a bar, bus-
stop
- This sampling method, the most rudimentary one, consists of taking all cases on hand until the sample
reaches the desired size.
- Generalization based on a sample, not representative of the whole population, is extremely risky.
Advantage:
- Saves time and money
Disadvantages:
- Can introduce serious biases (e.g., men will be over-represented in bars)
- Generalization therefore risky (not representative of the whole population)
Purposive or judgmental sampling
- Selection on the basis of what the researcher considers to be typical units (researcher’s judgement)
- Units judged to be most common in the population are selected.
o E.g. If a school pupil is defined as being ‘12yrs-old, male, Catholic, with parents in clerical
profession’
- Only units meeting these characteristics are selected.
Disadvantages
- Relies heavily on the researcher’s subjective considerations (and not on objective criteria)
- Often leads to non-representative samples
- This sampling method is based on the judgement of a researcher regarding the characteristics of a
representative sample.
- The strategy is to select units that are judged to be the most common in the population under
investigation.
- The great danger in this type of sampling is that it relies more heavily on the subjective considerations
of the researcher than on objective criteria.
Quota sampling:
- Non-probability equivalent of stratified sampling
- The purpose here is to draw a sample that has the same proportions of characteristics as the
population.
- However, the sampling procedure, instead of relying on random selection, relies on convenience.
Advantages
- Although much less accurate than stratified sampling this method is often more convenient and
economical.
- No lists need be compiled, and all data can be collected at an arbitrary location.
Disadvantage:
- the sampling procedure relies on accidental choice (rather than random selection).
Cluster sampling:
- Equivalent to multi-stage quantitative probability sampling
- The process is the same as described above
- Except that the use of simple random sampling is replaced by a non-probabilistic selection method,
such as the availability of the elements or the convenience of reaching them.
Other sampling possibilities
Independent versus related/dependent samples
- Independent samples – each unit is drawn randomly from the population and is also randomly assigned
to one or the other group.
- Related or dependent samples – groups or samples are related, usually when their elements have been
matched by specific properties.
o Every member of the sample has a partner somewhere else in the study.
Sample size: How large should a sample be?
Major criterion used to determine the sample size:
o The extent to which the sample is representative of the population
o Expect a 95% chance that the sample is distributed in the same way as the population
o Larger samples required if one wants high accuracy
§ large variability
§ heterogeneity
o Smaller samples suffice when less accuracy is required
§ homogeneity
§ fewer variables examined
- The size of the sample is an important determinant of the statistical power of the research.
o This refers to the likelihood that inferential statistical tests will be able to test the research
hypotheses adequately.
- Extent to which the sample is representative of the population.
o This can be expressed in terms of probability.
- The more heterogeneous a population is, the larger the sample must be to represent the
characteristics of the population correctly.
- At least 5% of the population.
A researcher’s decision about the best sample size depends on three things:
1. Degree of accuracy required
2. Degree of variability or diversity in the population
3. Number of different variables to be examined simultaneously in the data analysis.
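The notes do not give a formula, but a widely used one for the first two considerations, when estimating a population proportion at 95% confidence, is n = z²p(1−p)/e². The sketch below implements that standard calculation; it is an addition, not part of the notes.

```python
import math

def sample_size(e, p=0.5, z=1.96):
    """Minimum n to estimate a population proportion within a margin of
    error e at 95% confidence (z = 1.96). p = 0.5 assumes maximum
    variability in the population, giving the most conservative n."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

sample_size(0.05)   # 5% margin of error -> 385 participants
sample_size(0.03)   # higher accuracy demands a larger sample -> 1068
```

The pattern matches the notes: demanding higher accuracy (a smaller e) or assuming greater variability (p closer to 0.5) drives the required sample size up.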
Sampling errors and related problems
Chance factors:
- This type of error is the inevitable result of sampling and can never be eliminated, but the sampling
error can be calculated statistically.
Bias in selection:
- Arising primarily from faulty technique.
- Can be avoided.
o e.g., a researcher fails to take into account one criterion, e.g., age of respondents; or the
respondents themselves may give incorrect information about their age; over- or under-
representation of a population
Non-response error:
- Comes about when an element of the sample does not respond to a measurement instrument or is not
available for some unknown reason.
- Alters the representativeness of the sample
- As a consequence, such elements are excluded from the group, which changes the constitution, and
thus the representativeness, of the sample.
- It may not be possible to interview or test a person because of illness, language differences or other
factors.
- The chosen respondent cannot be found because of changes in residence or name, or because of
death.
- The selected person may be absent whenever the interviewer calls, purely by chance.
- The person can refuse to collaborate and not answer questions or give information.
Sampling for qualitative research
- Since qualitative research does not concentrate on the ‘average’ person but rather on the diversity of
cases, non-probability sampling is used.
- The three types of non-probability sampling introduced under quantitative sampling, convenience or
accidental, purposive or judgement, and quota sampling are the most frequently used.
Convenience or accidental sampling:
- Based on the availability of the units of the target group
Snowball sampling (also called chain or referral sampling)
- Existing participants are asked to refer the researcher to other people who share the relevant
characteristics, so the sample grows like a snowball.
- Particularly useful for identifying people not listed or difficult to find.
o E.g., this method could be used to locate and interview pregnant women not attending pre-natal
clinics.
- This technique is very helpful to gain access to participants when the framing of the target population
is not possible.
- Can increase the size of the sample or develop a network
Qualitative approach to purposive or judgmental sampling
- Rests on the assumption that the researcher knows what type of participant is needed
- The element that is most complex and rich in information is the most valuable
- The researcher will thus purposefully choose participants based on some specific criteria that are
judged to be essential.
Extreme or deviant case sampling
- Concentrates on obtaining information from most extreme cases
- Highly unusual manifestations of phenomena may be most revealing
- Comparing extreme cases may lead to better in-depth understanding of underlying factors of a
problem
- Getting information from the most extreme cases, the ‘exceptions to the rule’.
- Here the assumption is that comparing these extreme cases might lead to a better in-depth
understanding of the underlying factors of a problem.
Critical case sampling
- In some instances, one can assume that some cases hold a key position.
- E.g., “if student A can pass this test, then all others can”
- Advantage – reduces the sample to a few cases most likely to give a lot of essential, relevant
information
Maximum variation sampling
- Aspires to having the most heterogeneous sample.
- E.g., studying the causes for divorce
- The importance is to have the widest possible range of possibilities
- This will yield a rich variety of variation for the phenomenon under study.
- However, for such samples to be all-inclusive, they will also be relatively large and costly.
Quota sampling
- Principles are the same as proportional sampling described under quantitative methods
- To this method one can add a less restrictive one, non-proportional quota sampling.
- The researcher does not make a concerted effort to reach the numbers or quota corresponding to the
proportions in the population.
- All categories are represented in the sample.
- One is satisfied if the information given by the participants of a certain quota has reached saturation.
Combination or mixed purposeful sampling
- Often a combination of various sampling methods is chosen to arrive at the most appropriate sample.
- Flexibility is particularly helpful when dealing with triangulation
- Important – sampling strategy focuses on and fits the aim of the research, the method, and
instruments used, as well as the available resources
Sampling process and sample size: when is enough, enough?
- In the ideal case, data collection will continue as long as new elements or facts are found and will stop
when no new details emerge; this is also called sampling to redundancy.
- Thus, the size of the sample is not determined in advance but becomes a consequence of the
exhaustion of new discovery or data saturation, as defined earlier.
- Unfortunately, in reality time and money sometimes demand some compromise.
- The quality of the research is directly related to the sampling procedures, the adequacy of the
techniques chosen, the professionalism of implementation and the appropriateness of the sample size
Key points:
1. The purpose of sampling is to provide various types of information of a qualitative or quantitative
nature about a population by examining a few selected units.
2. Quantitative sampling emphasizes the representativeness of the sample, allowing for generalization of
the results to the target population.
3. Qualitative sampling aims at the identification and understanding of a complex phenomenon
experienced differently by the various units of the population.
4. The main distinction between probability (or random) sampling and non-probability sampling is
related to the type and aim of the research, with probability sampling used nearly exclusively in
quantitative research and non-probability sampling being particularly well suited to the aims of
qualitative research, although it may also be used for quantitative research.
5. Sampling is based on a well-defined population and must be determined by the aim of the research.
6. Errors introduced into the sampling process (such as chance factors, bias in selection, non-response
error, incorrectly identified criterion, etc.) play an important role in the end product.
7. The optimal size of the sample is determined, in the case of quantitative research, by the need for
the sample to be representative of the whole population, and in the case of qualitative research, by
the need for all the possible answers or information to have been collected.
Notes created by Christéle Meyer
Validity
- Although validity and reliability are both important to the evaluation of an instrument, they are actually
entirely different concepts.
- The term validity was used in Chapter 10 where the internal and external validity of research designs
was discussed.
- This chapter is concerned with the validity of data collection procedures and instruments and, in this
context, the term ‘validity’ has a somewhat different meaning.
- Where reliability asked the question ‘how accurate and consistent is this instrument?’, validity asks
questions such as ‘what does this instrument actually measure?’ and ‘what do the results actually
mean?’
Types of quantitative validity
1. Content validity
- The extent to which an instrument measures all the various components of the variable in question
- E.g., If all the questions in your Psych 243 exam come from Chapter 1, the exam would have low
content validity.
- In many cases, the topics that social scientists are most interested in are very complex and have
many different components.
- In order to measure such complex topics properly, the researcher must find a technique which will
provide some information on all of its different components.
- Unfortunately, ensuring good content validity is usually far more complex than it sounds.
2. Criterion-related validity
- One way to test whether an instrument measures what it is expected to measure is to compare it to
another measure that is known to be valid.
(The extent to which a measure is able to predict scores on another measure (the criterion measure)
that is known to be valid)
- E.g., Scores on a new measure of depression are able to predict scores on the BDI.
- This other measure is then called the criterion measure.
- When the instrument being tested and the criterion measure are administered at the same time,
the term concurrent validity is used.
- When the instrument being tested is used to predict a future criterion, it is called predictive
validity.
3. Construct validity
- Construct validity is the extent to which scores on an instrument reflect the desired construct
rather than some other construct.
- E.g., A measure of depression would have low construct validity if it had items assessing income.
- Unreliable scores cannot have any construct validity because they are mostly due to random error.
- Construct validity is the most important and most frequently used of the forms of validity testing
discussed in this chapter.
- Construct validity requires a strong link between the measure at hand and the related theory.
The following three steps are necessary in order to establish construct validity.
1. Identify all of the variables that are strongly related to the variable that the test is designed to
measure. This is done on the basis of theory, past research and logical deduction.
2. Measure all the variables involved and determine the relationships between them through the
use of statistical tests.
3. Interpret and explain these relationships and develop an argument to demonstrate the
construct validity (or lack thereof) of the instrument. The more variables, other than the one
under study that can be shown to interrelate meaningfully, the better the construct validity.
4. Convergent validity
- One way to establish construct validity is to determine whether the test has any convergent validity
with other measures.
- The extent to which a measure is positively correlated with another measure that is known to be
valid.
- E.g., A measure of depression that is positively correlated with the BDI displays convergent validity.
- Convergent validity is thus the relationship between the scale used and other scales that are
intended to measure the same construct.
5. Face validity
- Face validity is somewhat different from the other four forms of validity discussed so far.
- Face validity is concerned with the way the instrument appears to the participant.
- The extent to which an instrument appears to be measuring the construct at hand.
Balancing reliability with validity
- Both reliability and validity are important for every instrument.
- It does not help the researcher to use a highly reliable instrument that has no validity.
IMPORTANT:
- It is possible to have a measure that is reliable but not valid, but it is not possible to have a measure
that is valid but not reliable.
Qualitative equivalents of Reliability and Validity
Qualitative research: Ensuring trustworthiness
Trustworthiness
Qualitative research does not use the quantitative criteria of reliability and validity, but this does not
absolve the qualitative researcher of the need to evaluate the quality of the research; in qualitative
research this is done in terms of how much trust can be given to the research process and the findings.
Thus we speak of trustworthiness.
Trustworthiness is evaluated on the basis of 4 concepts:
Credibility
- High credibility = studies in which the researcher has convincingly demonstrated the appropriateness
and overall internal logic of the research questions, the study design, the data collection method, and
the approach to data analysis
- Credibility corresponds to the concept of internal validity, since it seeks to convince that the findings
depict the truth of the reality under study, or, in other words, that they make sense.
Dependability
- Dependability as a concept is similar to, but not the same as, reliability.
- Dependability demands that the researcher thoroughly describes and precisely follows a clear and
thoughtful research strategy.
Transferability
- Transferability can be compared to external validity since it refers to the extent to which results apply
to other, similar, situations.
- It requires the researcher to provide detailed descriptions of the context in which the data was
collected, about the researcher as a person, and about her relationships with the participants, etc.
Confirmability
- Confirmability, which is similar to replicability, requires that other researchers or observers be able to
obtain similar findings by following a similar research process in a similar context.
- The researcher is also expected to present a critical evaluation of the methodology used.
- When credibility, dependability, transferability and confirmability are all high we can speak of the
research being highly trustworthy, or having high quality.
Tools for increasing research trustworthiness:
1. Adequate description of context
Because qualitative research emphasises the context in which the study takes place, detailed
descriptions of the researcher, the participants, their relationship, and the context in which they
find themselves are essential.
2. Adequate description of the sample and the sampling procedure
As in quantitative research it is imperative that the researcher adequately describes the sampling
procedure used and the sample that actually participated in the research.
This includes describing the criteria of inclusion and exclusion and the sampling strategy used.
3. Concurrent data collection and analysis
By analysing data as it is being collected, the researcher can refine the data collection approach in
relation to the emerging results.
4. Triangulation
Triangulation is the method most frequently used to verify and increase the trustworthiness of
qualitative research.
It involves combining several different research methodologies to investigate the same
phenomenon.
The purpose of doing this is to show that the results obtained are independent of the methodology
used.