Psych 243 Notes

This document discusses different methods of acquiring knowledge, including scientific and non-scientific approaches. It outlines various non-scientific methods like authority, intuition and conventional wisdom. It then describes the scientific method, involving rational and empirical approaches. The scientific method uses observation, reasoning, testing and revision of explanations. The document provides details on the scientific method and its key assumptions in acquiring knowledge.

Notes created by Christéle Meyer

Chapter 1 - Different methods of acquiring knowledge


- Research
o To look again
- Social
o people/ human beings
- Sciences
o Process of systematic inquiry (gain knowledge) which…..
§ makes sense (logical),
§ has a reference (observed evidence) and gives
§ an explanation (theory).
Knowledge
- How does society progress?
o by constantly generating new knowledge
o formulating specific questions and finding answers to them
o Understanding ourselves and our environment
- Epistemology = The study of ways of knowing about the world (i.e., the question: ‘what is knowledge?’)
o Non-scientific, and
o Scientific methods
Nonscientific methods of acquiring knowledge:
- This ‘method of authority’ is when we rely on the knowledge and wisdom of prominent and significant
people who are recognized as having a better grasp of their environment than ordinary people.
- These people are rarely questioned or challenged.
- Once individuals are placed in such a position of authority, they often rely on particular strategies to
justify and preserve their position.
- This may take the form of masking their own ignorance with impressive rituals, using a very specialized
way of expression (professional jargon), or emphasizing the uniqueness of their position. Very often
this allows individuals to hide the superficiality of their knowledge, its underlying ideology, as
well as other weaknesses in their argument.
- A variation of ‘method of authority’ is the mystical method, where the correctness of the knowledge is
assumed to reside in a supernatural source, e.g., traditional healers.
- Their credibility is strongly related to the level of education and general knowledge of the audience.
- The authority and mystical modes of acquiring knowledge lose influence when better, alternative
explanations can be found.
- The intuitive method is when people make judgments about the world based on what ‘feels’ right for
them. For example, a doctor may arrive at a diagnosis for a patient because she has an intuitive feeling
about the ailment.
- Others may not come to similar conclusions in the same situation, as their intuitive feeling may be
different. This method depends on the individual and his or her personal understanding of the issue.
- The intuitive method is not transparent and cannot be communicated easily to others.
- The intuitive method is not easily replicable.
- A variation of the intuitive method is conventional wisdom.
Conventional wisdom reflects so-called common-sense understandings of the world that are commonly
accepted as being true.
- Conventional wisdom statements can be very general, without details on when they are applicable, and
they often contradict one another.
- For example, there is no systematic way of determining which of these statements is accurate:
“opposites attract” or “birds of a feather flock together”.
In contrast:
- The rationalistic method is based on human reason.
- According to this approach human beings can think logically (reason), and thus discover laws
through purely intellectual processes.
- The basis of knowledge is correct reasoning, which enables one to know what must be true in principle.
For example, pure mathematics, where laws and principles are discovered without relying on observed
reality, but on axioms.
- Observation of reality, collection of facts and using the five human senses are unnecessary.
- This approach has some success in the natural sciences but has made little progress in the social
sciences.
- The opposite of the rationalistic method is the empirical method, where facts observed in nature are
the foundation of knowledge.
- Objectivity of observation is emphasized and only what is observable, what can be perceived by our
senses, constitutes knowledge.
- For an extreme empiricist, knowledge stops at observation, since the reason for an observed difference
cannot itself be observed.
- Interpretations of observations and speculation about relationships between facts introduce
subjectivity and are therefore seen as distortions of the data.
- The scientific method is the synthesis of the rationalistic method and the empirical method.
- This method uses rationalism to develop theories
- Uses empiricism to test theories
- Science – continuous interplay of rational thought and empirical observation.
- The scientific method – the process of knowing
The scientific method and its properties:
- In the case of quantitative research
o The first step to knowing is a description of the object, relationship or situation.
§ The object of the study must be accurately depicted. Here, evidently, the empirical
method of objective observation must be used.
o Thereafter, an explanation or statement of the relationship between the described facts
should be expressed.
§ The explanation is thus the result of a reasoning process using the rationalistic
method, and leads to the formulation of a natural or social law.
o The stated explanation should permit a prediction of future events under well-defined
conditions.
§ To ensure that this explanation or law will enable prediction, the correctness of the
explanation must be tested.
§ This is achieved by confronting it with reality as perceived by the five human senses,
using the empirical approach.
o Intelligent intervention, based on correct explanations leading to the ability to predict events.
- In the case of qualitative research
- Most frequently the researcher is concentrating on the observation and recording of events under
study.
- Since very little is known about the situation under observation, no a priori explanations are given and
thus no expectations or predictions can be tested.
- However, it may be expected that some a posteriori explanations will be found at the end, based on
the data collection and analysis.
- The scientific method of acquiring knowledge, also called scientific research, is a systematic
investigation of a question, phenomenon or problem using certain principles.
A Priori vs. A Posteriori Knowledge
A Priori Knowledge
("from the earlier")
- A priori knowledge is knowledge that we can have "prior to experience".
- A priori knowledge or justification is independent of experience.
A Posteriori Knowledge
("from the later")
- A posteriori knowledge is knowledge that we can have only after we have certain experiences.
- A posteriori knowledge or justification is dependent on experience or empirical evidence
Science assumes the following:
1. The existence of natural and social laws.
Science presumes order and regularity in natural and social events.
These laws are assumed to exist independently of the observer, and they describe the way
phenomena interact or social events occur.
2. Laws can be discovered by human beings.
Scientific research assumes that, although human beings are part of nature and are themselves
subject to its laws, they can discover those laws.
3. Natural phenomena have natural causes.
No supernatural powers are needed to grasp the cause of events, and no unexplainable
supernatural forces are needed to explain the way nature functions. Scientific understandings are
based on parsimony of ideas.
Parsimony requires that explanations of phenomena be based on as few assumptions as possible:
that is, that they be as simple as possible.
If both a complex and a simple explanation exist for an observation, then the simple one is superior.
4. New knowledge is accumulated gradually and sequentially.
Numerous examples show how invention of new instruments of investigation and new approaches
to studying a problem can lead to new advances in science.
5. Knowledge and truth are founded on evidence.
Much of the time observations that are made on the basis of the senses constitute evidence.
In the absence of evidence, the claim remains at the level of conjecture or hypothesis.
6. Scientific statements must be distinguished from common sense statements. Common sense
statements are the result of non-scientific observations in that they do not take into consideration
the different variables at stake.
7. Scientific observation is objective.
The more accurate a description is, stating measurable properties, for instance, the greater the
objectivity of the observation.
8. Scientific observation is systematic
All possibilities are considered one at a time, in a logical order.
- It is not always possible to observe every phenomenon or all of its aspects systematically. Most often
some characteristics remain unknown so that the explanation and, as a result, the prediction may not
be accurate.
- In fact, one can only predict that an event will take place if one knows all the conditions and
circumstances, which caused the event.
- Probability statement: if some conditions are satisfied the event will occur more often than if some of
these conditions were not met
- Explanations that concede some uncertainties are called probabilistic explanations.
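The idea of a probability statement can be made concrete with a small simulation. The probabilities below (0.7 versus 0.3) are purely hypothetical numbers chosen for illustration, not figures from the notes: when the condition is satisfied, the event occurs more often, but its occurrence is never certain.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def event_occurs(condition_met: bool) -> bool:
    # Hypothetical probabilities: the event is more likely when
    # the condition is satisfied, but it remains uncertain either way.
    p = 0.7 if condition_met else 0.3
    return random.random() < p

trials = 10_000
with_condition = sum(event_occurs(True) for _ in range(trials)) / trials
without_condition = sum(event_occurs(False) for _ in range(trials)) / trials

# The event occurs more often when the condition is met, yet the
# explanation stays probabilistic: we cannot predict any single case.
assert with_condition > without_condition
print(f"frequency with condition met:     {with_condition:.2f}")
print(f"frequency with condition not met: {without_condition:.2f}")
```

This is exactly the form of a probabilistic explanation: a statement about relative frequencies under stated conditions, not a certain prediction of an individual event.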
Properties of scientific research:
1. Scientific research is empirical.
〜 Each step is based on observation.
〜 Scientists attempt to understand the world beyond their personal biases. Quantitative
methods use a variety of tools to help scientists be more detached and dispassionate about
what they research.
〜 Qualitative methods focus on acknowledging, understanding and allowing for the biases in a
transparent manner.
2. Scientific research is systematic and logical.
〜 A certain logical order must be followed.
〜 Logical predictions cannot be made before a description has been given and an explanation of
the observed phenomenon found.
〜 An analysis of the different variables involved must be undertaken prior to the formulation of
the questions.
3. Scientific research is replicable and transmittable
〜 Anyone placed in exactly the same circumstances can observe the same event.
〜 It is possible to communicate each step of the research and to transmit the acquired
knowledge.
〜 This is to present the research and its results in a report or an article published in a journal.
4. Scientific research is reductive.
〜 By grasping the main relationships between laws, the complexity of reality is reduced. All details
which are not essential, or which have little influence on the process under investigation, are
omitted.
〜 Reductionism or the method of considering only the essential and necessary properties,
variables, or aspects of a problem, has to be used with great care as it can lead to significant
biases in the research process.
〜 This can be negative, and you can end up not seeing the bigger picture.
5. A scientific claim statement or theory must be falsifiable.
〜 This means that a scientific claim must be stated in such a way that can be demonstrated to be
false.
〜 The claim must be testable.
〜 If a ‘scientific statement’ affirms that the capability to drive a race car at very high speed
is related to some genetic male characteristic, the event of one single woman driving such
a car at the given speed will falsify that statement.
〜 A scientific theory should not only tell us what should happen but what should not happen.
〜 If these things that should not happen do in fact happen, then we have a clear indication that
the theory is untenable (wrong).
The Relationship between theory and research:
- Deduction
o Going from the general to the specific
o Using general principles to suggest specific outcomes
- Induction
o Going from the specific to the general
o Using a number of specific observations to formulate general principles
Keypoints:
1. The rationalistic method of acquiring knowledge is based on the reasoning power of the human
mind.
2. The empirical method is based on ‘facts’ obtained through the five senses.
3. Scientific research is a process that combines the principles of rationalism with the process of
empiricism.
4. A quantitative research process develops from the description of the object under study, to the
explanation of the relationship between the described facts, to the inferred prediction and finally to
the identification of intelligent intervention.
5. Science is based on the following assumptions: natural and social laws exist and can be discovered
by human beings; natural phenomena have natural causes; knowledge is founded on evidence; the
advance of knowledge increases gradually and sequentially; and scientific observation is systematic
and does not depend on the observer.
6. Distinctions have to be made between situations where a prediction can be made with certainty
and situations where only a probabilistic explanation is possible.
7. The essential properties of scientific research are that it be empirical, systematic, replicable, and
transmittable.

The scientific method applied to social reality


The distinction between natural and social sciences:
- Research usually arises from some need, so that it has a particular purpose to fulfil.
~ Natural sciences investigate the properties and laws of natural phenomena.
~ E.g., astronomy, geometry, mechanics
~ Development at times determined by the needs of production, commerce, and industry
On the other hand,
~ Social sciences investigate the properties and laws of social phenomena.
~ E.g. psychology, sociology
~ The development of social sciences has its roots in the need for understanding, management and
manipulation of social affairs.
- The development of both types of science depends on the needs, values, aspirations and other
characteristics of the particular society in which it takes place.
- Social sciences, however, are much more affected by some aspects of the society than are the natural
sciences.
- Social sciences introduce methodological challenges:
1) For ethical reasons, some experiments cannot be done on humans.
2) Some variables cannot be controlled.
3) Most results are expressed as probability statements.
Some variables cannot be controlled
- There are so many uncontrolled factors due to the complexity of social reality that exact laws can rarely
be found
- A physicist can express the laws of electricity with a formula and predict with certainty when a light
bulb will glow
BUT:
- A sociologist can only predict that, under certain circumstances, unemployment leads to crime

Note:
- Often some characteristics are unknown
- Can only predict an event when ALL conditions and circumstances are known
- In social sciences it is rare to possess all information
Probability statement:
- if some conditions are satisfied the event will occur more often than if some of these conditions were
not met
Probabilistic explanation:
- Explanations that concede some uncertainty
A fundamental relationship between facts about the world, defined as empirically verifiable
observations, and theory, as an explanatory framework.
• Perceptual knowledge:
o Information about the world perceived through our senses
• Rational knowledge:
o Based on judgment and logical thought
o Provides explanations or reasons for the relationships between observations
• Fundamental relationship between:
1) Facts about the world – verifiable observations
2) Theory – explanatory framework
The scientific method
- Synthesis of:
1. Rationalistic method
2. Empirical methods
- Uses rationalism to develop theories
- Uses empiricism to test theories
- Science – continuous interplay of rational thought and empirical observation
Experience without theory is blind, but theory without experience is mere intellectual play
– Immanuel Kant (1724 - 1804)
The combination of observation and the process of logical thought provide a deeper understanding of the
world
Facts and theory
• Facts give rise to theory since they raise a need for explanations of the observed phenomena. A fact
is an indisputable statement about reality,
o e.g., the sun rises in the East and sets in the West
o Facts, the cornerstone of knowledge, are not the product of random observation, but should
be both selective and meaningful.
• Theory serves as a basis for the gathering of facts since it specifies the facts to be systematically
observed.
o e.g., learning theory may be used to explain anti-social and maladaptive behaviour at school
• Facts allow a researcher to find support for, improve, disprove or formulate
a theory.
This means in practice that a theory allows a researcher to predict some facts.
Any theory should provide an explanation of how relevant facts relate to one another.
• The process of theory reformulation is extremely important to the advancement of science, and
therefore of new knowledge.
If theories were not advanced, deeper understandings of social phenomena would not be achieved
and knowledge would become stagnant.
Theories are typically not proved; at best a theory is tenable.
There is always the possibility that some observations or data might contradict the theory.
• Theory allows for a classification and conceptualisation of facts.
It summarises and provides explanations of how relevant facts relate to one another.
• Theory predicts facts.
A useful theory then allows social scientists to make predictions about future phenomena in the
form of probability statements.
LAW OF SUPPLY & DEMAND => Existence of an illegal market
LAWS OF ASTRONOMY => eclipse of the sun
Relationship between facts and theory
Quantitative, qualitative and the mixed-method approaches
- Quantitative methods emerged from the philosophical belief that the world runs according to natural
laws and that the role of the scientist is to uncover or discover these pre-existing laws.
- Qualitative methods emerged out of more recent philosophical beliefs that truth is relative and that
knowledge is constructed by human beings.
- The quantitative approach relies extensively on numbers and statistics in the analysis and
interpretation of findings
o Data collected according to very particular set of steps
o Attempt to remain as objective and neutral as possible
o Analyzed using statistical procedures so that findings can be generalized from a relatively small
sample to the entire population
o Favor a more controlled environment
- The qualitative approach is often used when the problem has not been investigated before.
o His method uses smaller samples, from which findings a better understanding of the
phenomenon being investigated is produced.
o Research plan is more flexible and circular
o Investigates the problem from the respondents’ point of view
o Reality is interpreted from the respondent’s frame of reference
o Attempt to understand phenomena in their natural context
- Some investigations demand that both approaches be used in the same study in order to confirm or
elaborate each other; this is the mixed-methods approach.
o Might combine:
• structured questionnaires (quantitative) and open ended interviews (qualitative)
• surveys (quantitative) and case studies (qualitative), or
• questionnaires (quantitative) and observations (qualitative).
- In the quantitative approach, the researcher tends to follow what natural scientists do: that is collect
data (that is measurements or frequencies) according to a very specific set of steps, and in so doing
attempt to remain as objective and neutral as possible.
- Data is analysed using statistical procedures so that findings can be generalised from a relatively small
sample, to the entire population.
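The step from descriptive statistics on a sample to a generalisation about the population can be sketched with Python's standard library. The twenty scores below are made up for illustration, and the rough 95% confidence interval (using the normal critical value 1.96) stands in for the inferential procedures a real study would justify more carefully:

```python
import math
import statistics

# Hypothetical sample: test scores from 20 respondents.
scores = [62, 71, 68, 75, 80, 66, 73, 69, 77, 64,
          70, 72, 65, 78, 74, 67, 71, 76, 63, 79]

n = len(scores)
mean = statistics.mean(scores)   # descriptive: central tendency
sd = statistics.stdev(scores)    # descriptive: spread (sample SD)

# Inferential step: approximate 95% confidence interval for the
# POPULATION mean, generalising from this small sample.
margin = 1.96 * sd / math.sqrt(n)
low, high = mean - margin, mean + margin

print(f"sample mean = {mean:.1f}, SD = {sd:.1f}")
print(f"approx. 95% CI for the population mean: ({low:.1f}, {high:.1f})")
```

The descriptive numbers summarise only the sample; the interval is the generalising move — a probability statement about where the population mean plausibly lies.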
- Researchers are more inclined to qualitative approaches in which the plan of the research is more
flexible and circular.
- Investigates a problem from the respondents’ point of view.
- The focus of such a study is to determine what respondents think and feel about a particular
phenomenon or issue.
- Attempt to understand phenomena in their natural context, rather than the more controlled
environment favoured in quantitative approach.
- Despite the distinction between the quantitative and the qualitative approach, both are authentic
scientific approaches to developing knowledge in the social sciences.
A comparison between quantitative and qualitative approaches can be made based on the following
criteria:

Developing theory from facts


- FROM FACTS TO THEORY: PROBLEM, HYPOTHESIS AND MODEL
- Facts > raise questions > condensed into a problem > temporary solution (hypothesis) to be tested >
related to the underlying theory.
- These questions can in turn be condensed into a problem that is then given a temporary solution, yet
to be tested, which is called the hypothesis.
- If a hypothesis is supported by sufficient verifiable evidence it will become part of the theory
Theory of Reasoned Action (adapted from Fishbein & Ajzen 1975)
Theory of Planned Behaviour (adapted from Ajzen 1991)

Analogy and model-building


- Analogy is a correspondence between a phenomenon or event that has been studied already and
another phenomenon or event that resembles the first but has not yet been studied.
- Permits one to draw conclusions based on the similarities between objects and certain of their
properties.
- Allows one to infer some properties of the less well-known objects. For example, the first aeroplanes
were built to resemble flying birds.
- In understanding human behaviour, an analogy to animal behaviour is often used.
- In model-building, one object or phenomenon, the well-known one, serves as the model.
- Here certain properties of the object have been singled out, represented in their pure, simplified form
and then studied in the absence of the actual object.
- Ideal models are formed with the help of particular symbols.
- For instance, a commonly used model is the representation of certain properties of the earth’s surface
by geographical maps.
The relationship between facts, problems, hypotheses, models and theory
- Analogy and model-building are therefore quite similar ways of discovering some properties of an
object or phenomenon by utilising the existing knowledge of another object or phenomenon.
- The advantage of model-building is its reductive property, that is, only the main characteristics are
considered.
- The characteristics that do not affect the process under study are ignored and the properties of the
model can then be studied in the absence of the original object of study.
- A geographer can study the properties of a particular region through the use of adequate maps without
actually travelling to that region.
- The utility or danger of analogy and model-building depends on the adequacy of the analogy or model.
- The similarity of two chosen objects or events might be too superficial.
- The model might be an over-simplification of reality.
- Both flaws will lead to fallacious results.
(see picture on next page)
The process of knowledge acquisition

Steps in scientific research:


The quantitative research process generally involves the following steps:
1. Selection and formulation of the research problem
the researcher identifies an area of study and specifies one or more clear research questions.
2. Reviewing literature
acquire knowledge of current theory:
reviewing the existing literature on the subject
necessary variables will be identified
conceptual and operational definitions will be developed
formulate testable hypotheses
3. Developing a research method
choosing a particular research design and sampling method.
4. Developing a data collection instrument
an instrument is developed
appropriateness and quality (validity and reliability) are tested through a pilot study.
5. Sampling
sample is drawn from the general population
6. Data collection
instrument is then administered to the sample and data is gathered or collected.
7. Data Analysis
data is collected it must be organised, and checked for accuracy and completeness.
arithmetic and statistical tests to describe the sample data
generalise from this data set to the population from which the sample was drawn.
8. Interpretation of results
considers what the results of the statistical analysis can teach us about the research questions
9. Conclusions and recommendations
draws certain conclusions and makes considered recommendations.
this may have to do with social policy or action, or the recommendations may point to further
research.
10. Dissemination of results
results are presented in different forms such as research reports, seminars and conference papers,
books and, of course, peer-reviewed journal articles.
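The organising-and-checking part of step 7 — screening collected data for accuracy and completeness before any statistics are run — can be sketched as follows. The records, field layout and validity ranges are purely hypothetical:

```python
# Hypothetical raw records: (respondent_id, age, score).
# None marks a missing value; out-of-range values are entry errors.
raw = [
    (1, 34, 7),
    (2, 41, None),   # incomplete: missing score
    (3, 230, 5),     # inaccurate: impossible age
    (4, 29, 9),
    (5, 52, 6),
]

def is_complete(record):
    # Completeness check: no field may be missing.
    return all(field is not None for field in record)

def is_accurate(record):
    # Accuracy check: values must fall in plausible ranges.
    _, age, score = record
    return 0 < age < 120 and 0 <= score <= 10

clean = [r for r in raw if is_complete(r) and is_accurate(r)]

# Only records passing both checks move on to statistical analysis.
print(f"{len(clean)} of {len(raw)} records retained")
```

Respondents 2 and 3 are dropped, so three records survive to the analysis stage; in practice a researcher would also document why each record was excluded.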
The qualitative research process is more difficult to describe since the steps are generally less linear:
1. Selection and formulation of the research problem
typically more descriptive
2. Reviewing literature
learn first-hand what has been studied on the specific question
increase the researcher’s understanding of the concept under investigation
takes place at the same time as data collection.
3. Developing a research method
the most appropriate way of gathering data that will shed light on the research questions.
4. Developing a data collection instrument
interviews and focus group discussions
collection of spoken and written words
an interview, design an interview guide.
5. Sampling
participants that carries most of the characteristics associated with the phenomenon under
investigation.
6. Data collection and analysis
data collection and analysis often take place at the same time or as alternating processes
data includes words (written and spoken), artifacts, pictures and video.
7. Interpretation of results
studies the results of the data analysis in order to gain some overview
to what extent have the research questions been addressed and answered?
8. Conclusions and recommendations
researcher draws certain conclusions and makes considered recommendations.
social policy or action, or the recommendations may relate to further research.
9. Dissemination of results
The findings of the research must be shared with others.
form of a report, a seminar or conference presentation, a book or a peer-reviewed journal article.
- Finally, mixed-methods research is likely to follow either the quantitative or qualitative research
process outlined above, depending upon which kind of data collection is dominant in the research
design.
- Of importance to mixed-methods designs is the point at which quantitative and qualitative data are
mixed.
- Could happen during data collection, during the analysis of data,
- End of the research during the interpretation of the results.
- Must describe in their reports why, how and when they combined the different kinds of data collected.

Key Points:
1. Social sciences investigate social phenomena, whereas the natural sciences study laws and
properties of natural phenomena.
2. A fact is an indisputable statement about reality and a theory is set of ideas that explain a social
phenomenon.
3. Quantitative research emanates from the philosophy that the world operates on natural laws,
whereas qualitative studies are informed by the philosophy that the world and the truth is not
objective as meaning is socially constructed. Quantitative research is planned well ahead, and is
rigid and objective whereas qualitative research requires a flexible plan, which allows for
movement back and forth.
4. The quantitative approach uses scales and instruments in the collection of data whereas qualitative
approach uses observation, interviews and focus group discussions.
5. Data analysis in quantitative studies happens at the end of the data collection stage whereas in
qualitative research this occurs at the same time with data collection.

Chapter 5 - The types of research


The relationship between the research problem and the type of research selected to investigate it.
Ways of classifying research
- Quantitative research relies on measurement to compare and analyse different variables.
- By contrast, qualitative research uses words or descriptions to record aspects of the world.
- Mixed-methods research uses both measurements and descriptions in a complementary fashion to
deepen the researcher’s understanding of the research topic.
- Studies that primarily seek to increase human understanding of a particular aspect of society are often
referred to as basic social research.
- By contrast, studies that primarily aim to solve a particular problem confronting a group of people are
often referred to as applied social research.
- Participatory research, action research, community needs assessments, as well as monitoring and
evaluation studies tend to be more applied in nature and are discussed further in later chapters.
- Primary research occurs when the researcher collects data with the specific intent of answering a
particular research question.
- Secondary research occurs when a researcher uses data that has been collected for some other reason
to answer a new research question.
- In cases where very little is known about the research topic, we speak of exploratory research.
- Where the researcher is interested in describing a phenomenon, the research is called descriptive
research.
- When the research question requires an understanding of the relationship between variables, the
research is called correlational research.
- Finally, when the research question demands that the researcher explain the relationship between
variables and demonstrate that change in one variable causes change in another variable, the research
is called explanatory research.
Quantitative, qualitative and mixed-methods research
- Quantitative research methodology relies upon measurement, counting, and the use of various scales
to compare and analyze different variables.
o Numbers have the advantage of being exact.
o Important advantage of numbers is that they can be analysed using descriptive and
inferential statistics
o Limitation: some kinds of information that cannot be recorded adequately using
quantitative data.
- Qualitative research uses words or descriptions to record aspects of the world
o Language provides a far more sensitive and meaningful way of recording human experience.
o In many cases the line between quantitative and qualitative methods is blurred.
o At the same time, there are also ways in which text can be coded to produce numbers, and
mixed-methods research attempts to combine the advantages of quantitative and
qualitative methods and to avoid their disadvantages.
Basic and applied research
- Basic Research:
o Primary motivation – contribute to human knowledge and understanding about a particular
phenomenon.
o By gathering more facts and information that may challenge existing theories and allow new ones
to be developed.
o The actual utility or application of this newly acquired knowledge is of little concern to the
researcher.
- Applied Research:
o Achieved by applying basic research findings to a particular community’s challenges
o In this way applied research may assist that community to overcome their problem or at least
design interventions to address it.
o Almost no social research is completely without application.
Exploratory, descriptive, correlational and explanatory research
- Exploratory research
〜 Exploratory research is called for where limited knowledge or information exists about a particular
subject and the purpose of the research is to gain a broad understanding.
〜 Appropriate when very little is known about the topic.
〜 Because so little is known, qualitative methods are often used.

- Descriptive research
〜 Researcher may be interested in finding out the opinion of a group of people towards a particular
issue at a particular time.
〜 Research might involve a survey (for example, a population census), a one-shot case study, or in
depth qualitative descriptions of the phenomenon. Various types of information can be collected in
many different ways.
〜 Greater part of the data collected in these ways will be qualitative and more or less reliable,
depending on the source of the data and the skill of the researcher.
〜 The choice of one or the other technique is rarely arbitrary and depends on the aim of the research
and on the type of data available.
〜 Can be conducted using quantitative, qualitative or mixed-methods approaches.
- Correlational research
〜 The researcher is able to make a statement or hypothesis predicting the relationship between at least
two variables
〜 Two variables co-vary, that is, they change simultaneously (in the same or opposite directions)
〜 In some cases the relationship between variables can be stated precisely, while in others it cannot
〜 Correlational research does not give an explanation of how variables are related. It merely indicates
that they are related, and provides an indication of the strength and direction of the relationship.
Levels of significance
Significant:
- p < .05 (5% probability of making an error)
- p < .01 (1% probability of making an error)
- p < .001 (0.1% probability of making an error)
Not significant:
- p > .05
Example hypothesis pairs:
- H0 : There is no significant association between hours studied and test mark obtained
- H1 : There is a significant association between hours studied and test mark obtained
- H0 : There is no significant association between number of beers consumed and test mark obtained
- H1 : There is a significant association between number of beers consumed and test mark obtained
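The decision rule above can be sketched in code. This is a minimal pure-Python illustration with made-up data for the hours-studied example: it computes Pearson's r, converts it to a t statistic, and compares against the two-tailed critical value for α = .05 with df = n − 2:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

hours = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]           # hypothetical hours studied
marks = [52, 55, 61, 64, 70, 73, 79, 82, 88, 91]  # hypothetical test marks

r = pearson_r(hours, marks)
# t statistic for testing H0 (no association); df = n - 2 = 8
t = r * math.sqrt((len(hours) - 2) / (1 - r ** 2))
# two-tailed critical t for df = 8 at alpha = .05 is about 2.306
reject_h0 = abs(t) > 2.306
```

If `reject_h0` is True, H0 is rejected at the 5% level (p < .05); in practice a statistics library would report the exact p-value rather than a critical-value comparison.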
- Explanatory research
〜 When a causal relationship between variables can be stated, and an explanation can be found for
the variation of at least one variable, the research is explanatory.
〜 A deeper understanding of the relationships between variables
〜 Causality of this kind is difficult to demonstrate, a topic that is discussed in more depth in the next
section.
- Criteria for selection of research type
〜 The choice of the type of research cannot be arbitrary. It depends on the following factors:
1. The object of the research.
2. The aim of the research.
3. The nature of the data to be collected.
Demonstrating causality
- Just because two variables co-vary does not mean that one causes the other
- Plausible rival explanation = An alternative explanation for the relationship
- There are generally accepted criteria that need to be met in order to demonstrate that one variable is
causally related to another:
1. Temporal order. A cause must always precede an effect.
2. Association. There must be a strong relationship or association between the variables.
3. Eliminating plausible rival explanations. The supposed effect must be shown to be due to the causal
variable and not to something else.
- Potential errors in causal explanation
A good causal explanation for theory requires avoiding common logical errors:
1. Tautology: a form of circular reasoning. 'Poverty is caused by having very little money' is an example
of a tautology.
2. Spuriousness: to call a relationship between variables spurious is to say that it is false, an illusion.
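A spurious relationship can be made concrete with a small simulation (all variables hypothetical): A and B are both driven by a third variable Z, so they correlate strongly with each other even though neither causes the other, and the partial correlation controlling for Z collapses toward zero:

```python
import math
import random

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

random.seed(0)
z = [random.gauss(0, 1) for _ in range(500)]       # e.g. socio-economic deprivation
a = [v + random.gauss(0, 0.5) for v in z]          # e.g. malnutrition score
b = [v + random.gauss(0, 0.5) for v in z]          # e.g. illiteracy score

r_ab = pearson_r(a, b)                             # strong, but spurious
r_az, r_bz = pearson_r(a, z), pearson_r(b, z)
# partial correlation of A and B, controlling for Z
r_ab_given_z = (r_ab - r_az * r_bz) / math.sqrt((1 - r_az ** 2) * (1 - r_bz ** 2))
```

Here `r_ab` is large while `r_ab_given_z` is near zero, which is exactly the signature of a spurious relationship produced by an extraneous third variable.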
- Testing a causal hypothesis
〜 The five characteristics of a causal hypothesis are:
1. It has at least two variables.
2. It expresses a causal or cause/effect relationship between the variables.
3. It can be expressed as a prediction or future outcome.
4. It is logically linked to a research question and a theory.
5. It is falsifiable, that is, it is capable of being tested against empirical evidence and shown to be true
or false.
- How can a researcher test a causal hypothesis? Suppose that two variables, A and B, have been
identified, and that a relationship is observed between them.
- Their simultaneous existence can be interpreted in three possible ways.
Example:
- Suppose a relationship is observed between two variables:
1. Variable A = ‘malnutrition’
2. Variable B = ‘illiteracy’
- Their co-existence can be interpreted in three ways:
1. A is the cause of B
2. B is the cause of A
3. Both A and B depend on a third variable (e.g. ‘socio-economic factors’)
- What is the problem?
Example:
Which one of the 3 alternatives is correct?
- If A is the cause of B, then the occurrence of the cause, (A), is a necessary and sufficient condition for B
(the effect) to occur
- This means:
o Necessary – B can never occur unless A occurs first
o Sufficient – wherever A occurs, B will occur
- Only if the occurrence of A is a necessary and sufficient condition for B to occur can we say that B is
the effect of A and that there is a causal relationship between A and B
Three conditions that must be satisfied in a causal relationship:
Only if all three conditions are satisfied is a causal relationship between A and B assured. These conditions
are the following:
1. Proof of the co-variance of A and B. The researcher must demonstrate that a relationship exists
between the variation of A and B.
2. Proof of the non-spuriousness of the co-variance. The researcher must exclude a third variable which
alone can determine the variations of both A and B.
3. Proof of a stable time-order. The researcher must demonstrate that the cause always precedes the
effect.
〜 The plan of how to proceed in determining the nature of the relationship between variables is
called a research design.
〜 The purpose of correlational research is often only to detect the existence of a relationship
between variables (co-variance) that suggests a possible base for causality.
〜 Correlation does not necessarily imply causation, but causation always implies correlation
〜 Often explanatory research is not feasible. This is the case when it is not possible to manipulate the
variables or to assess the time-order.
〜 A correlational study will allow for an evaluation of the importance (or strength) of each
relationship, or the contribution of each factor to productivity.
〜 This is of great practical relevance.
〜 It will also indicate whether each of these factors promotes productivity (positive correlation) or
inhibits productivity (negative correlation).
〜 Therefore, a correlational study is not only useful when no clear causal relationship exists, but it
also allows for an estimation of the strength of the relationship between two variables even when
one variable is influenced by many others.
〜 The four types of research are applied to different aspects of the same research topic
Example
- Bad research questions
o Not empirically testable, non-scientific questions
§ Should prostitution be legalised?
§ Should capital punishment be implemented in South Africa?
o General topics, not research questions
§ Treatment of alcohol and drug abuse.
§ Sexuality and ageing.
§ Capital punishment and racial discrimination.
§ Urban decay and gangs.
o Questions that are too vague and ambiguous
§ Do police affect delinquency?
§ What can be done to prevent child abuse?
- Good research questions
- Exploratory
o How do young Zambian mothers experience health services in their country?
- Descriptive
o How has the infant mortality rate in Zambia changed over the past 10 years?
o What factors do doctors feel explain changes in the infant mortality rate over the past ten
years?
- Correlational
o Is the infant mortality rate in particular hospitals related to the average income of people living
in the surrounding area?
o Do mothers with better education have greater access to healthcare?
- Explanatory
o Have changes in the training of medical personnel caused a drop in the infant mortality rate?
Research and the internet
The internet provides many advantages and opportunities.
1. Data collection can be quicker and more affordable. For example, questionnaires can be emailed
and interviews can be conducted using voice over internet protocols.
2. The researcher can reach respondents over a very broad geographic area, such as the whole of
Africa, or even the whole world.
3. Respondents can participate in data collection from the privacy and comfort of their own homes.
Key Points:
1. There are several different ways to divide the field of social research.
2. Quantitative, qualitative and mixed-methods research approaches are distinguished primarily by
the form of data collected.
3. Basic and applied research are distinguished by the desired outcome: pure knowledge or social
intervention.
4. Primary and secondary research are differentiated by whether the data was collected specifically to
answer the current research question, or for some other reason.
5. Exploratory, descriptive, correlational and explanatory research are distinguished by the aims of the
researcher and thus the methodologies employed.
6. A demonstration of causality demands (a) demonstration of co-variance; (b) refutation of
alternative explanations; and (c) demonstration of a stable time order.
7. The growth of the internet and increasing access to online resources offer both opportunities and
dangers to the social researcher.
Chapter 6 – Research questions and variables
Formulation of the research question
- A research question is usually expressed as a general question about the relationship between two or
more concepts or variables.
Concepts, variables and constants
Research problem should:
o be specific rather than general
o be expressed as a general question about the relationship between two or more variables
- The word ‘concept’ refers to a particular idea that can be described in words.
- The word ‘variable’ has a similar function in quantitative research.
- ‘Variable’ means that something varies and that its variation can be measured.
- Since a variable is something that varies from one observation to the next, it needs to be contrasted
with a constant, which does not vary.
o e.g. > recruit a sample of people > will vary in age, socio-economic status or gender. Therefore
these = variables.
o BUT: if only men recruited > gender = a constant
- A variable: is capable of taking two or more values
o can change in value or kind e.g. “gender” has 2 levels or values: (1) male and (2) female.
- A constant: cannot take more than one value
o e.g. “citizens of the same country”
Identification of the variables
- Some variables influence other variables, determining the values of those affected variables.
- These are the independent variables (indicated by IV).
- Other variables are subject to these influences, so that their values depend on the values of
other variables.
- These are the dependent variables (indicated by DV).
Independent and dependent variables are defined more precisely as follows:
- The independent variable is that factor which is measured, manipulated or selected by the researcher.
o may influence other variables
o may determine the values of the affected variables
- The dependent variable is that factor which is observed and measured to determine the effect on it
of the independent variable.
o subject to other causes
o their values are influenced by the values of other variable
Example: The kindness of nurses in treating anxious patients will depend on the working conditions of the hospital.
- The quality of the working conditions (independent variable) may affect the nurses' bedside manner,
i.e. their behaviour towards patients (dependent variable).
- The IV and DV are the two most important types of variables
- They constitute the core of the research problem
- They are tied to each other by a certain relationship:
changes in the IV are associated with changes in the DV
Example
o The height of a child depends on its age.
o height = DV & age = IV
o To prove this assertion > Take many children of different ages > Measure their respective heights
o Thus IV (age) is manipulated
o DV (height) will be compared to IV (age)
o A certain regularity becomes evident
A certain regularity becomes evident:
• The height of the child (DV) should vary according to the age (IV) = younger children should be
shorter than the older ones
• BUT: It is not because a child measures 98cm that she is five years old, but rather that she is 98cm
(small) because of being only five years old
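This regularity can be quantified by fitting a least-squares line to (age, height) data. A sketch with hypothetical measurements, where the slope estimates growth in cm per year; note the direction of prediction matters: height (DV) is regressed on age (IV), not the other way around:

```python
# hypothetical ages (years, IV) and heights (cm, DV) of ten children
ages    = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
heights = [86, 95, 102, 109, 115, 121, 128, 133, 139, 144]

n = len(ages)
mean_age = sum(ages) / n
mean_height = sum(heights) / n

# least-squares slope and intercept of height regressed on age
slope = (sum((a - mean_age) * (h - mean_height) for a, h in zip(ages, heights))
         / sum((a - mean_age) ** 2 for a in ages))
intercept = mean_height - slope * mean_age

def predict(age):
    # predicted height (cm) for a given age (years)
    return intercept + slope * age
```

With this made-up data the slope comes out at roughly 6 cm per year, so younger children are predicted to be shorter, just as the note above describes.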
Note:
- Most phenomena CANNOT be explained by the effect of only one IV:
- Variation in the IV usually accounts for only part of the variation of the DV
- variation in the DV is also accounted for by secondary IVs and moderator variables
Variables to take into account:
1. Moderator variable
- Another kind of variable is the moderator variable. In the simple example of the relationship
between the age and height of children, a researcher could argue that other factors also influence
the height of a child.
- In other words, the moderator variable is that factor which is measured, manipulated or selected
by the researcher to discover whether or not it modifies the relationship between the independent
and dependent variables.
- The variation in the independent variable can usually only partly account for the variation in the
dependent variable.
2. Control Variable
- Control variables are those factors that are controlled by the researcher to cancel out or neutralise
any effect they may otherwise have on the observed phenomenon.
3. Antecedent Variables
- The antecedent variable, as indicated by its name, appears before the independent variable and
determines it.
4. Intervening variable
- The intervening variable is in some ways the opposite.
- It is a consequence of the independent variable and it determines the variation of the dependent
variable.
- It falls between the independent and dependent variables, so that, if it is held constant, the latter
will not vary.
5. Extraneous
- An extraneous variable is a variable that influences both the independent and dependent variables,
giving the false impression of a relationship between them.
Relationship between variables
[Diagram: the antecedent variable precedes the IV; the IV influences the DV via the intervening variable; moderator, control and extraneous variables also affect the IV–DV relationship]
Moderator variables:
- Additional variables affecting the DV
- Factors measured, manipulated, or selected by the researcher to discover whether or not they modify
the relationship between the IV & DV
- Can be thought of as secondary independent variables because moderator variables share some of the
characteristics of IVs
Control variables:
- Factors that are controlled by the researcher to cancel out or neutralize any effect they may otherwise
have on the observed phenomenon
Example
o When measuring height of children as a function of their age, one should control for their health
condition since malnutrition has a negative effect on growth
o control variable = same nutritional background
o IV = different ages
o DV = height
Antecedent variables
- Appears before the IV
- Determines the IV
o e.g. referring to the example of maize output, the 'financial situation of the farmer' could vary, and
this could influence the IV; it can therefore be held constant
Intervening variable:
- a consequence of IV
- determines the variation of the DV
- falls between the IV and the DV
- can be held constant
Note:
o Antecedent, independent, moderator, intervening and dependent variables can be connected in
a causal chain or network
o Researchers should also be watchful for accidental connections and extraneous variables
Extraneous variable
- A variable that influences both the independent and dependent variables, giving the false impression of
a relationship between them
Note:
- Variables take on various roles in relation to different research questions
- The IV could actually depend on a different variable
- Or, the DV could serve as an IV predicting the variation in yet another variable
- Variables must be defined in a way that allows a systematic and accurate description or measurement
of their variation
Concepts:
Main functions:
1. to facilitate communication among human beings
2. to aid in the classification of the elements of reality and their generalization
3. building blocks of theories
For concepts to be useful:
- they must be defined in a clear, precise, non-ambiguous and agreed-upon way
- “speak the same language”
Conceptual and operational definitions
- A conceptual definition is the definition of a concept in terms of a set of other concepts.
o e.g. ‘a courageous person’ - is a ‘brave person’, ‘a person able to control fear in the face of
danger’, ‘a person who remains calm in the face of threatening events’…
- A conceptual definition cannot be true or false, but it may or may not be useful for communication.
Below are some properties that conceptual definitions should have in order to be useful.
1. A conceptual definition must denote the distinctive characteristics of that which is defined.
2. A conceptual definition should not be circular.
3. A conceptual definition should be stated positively.
4. A conceptual definition should be stated in clear and unequivocal terms.
- Even a very well-formulated conceptual definition does not fulfil the need of a researcher who wants to
assess the existence of some empirical phenomenon.
- This is the function of an operational definition.
- Thus an operational definition not only gives precise indications as to what the fundamental
characteristics of a concept are, but it also gives precise indications about how to observe, describe or
measure the characteristics under study.
- Stated in another way: an operational definition is based on the observable characteristics of an object
or phenomenon and indicates what to do or what to observe in order to identify these characteristics.
Example:
- “a hungry person”
- Conceptual def.: a person needing food
- Operational definition:
o a person deprived of food for 24 hrs
o a person who can eat a loaf of bread in 10 mins
o a person whose blood-sugar level is lower than a specified level
Hypothesis formulation
1. Tentative, concrete and testable answers to research questions
2. Suggested answer to a problem
3. a way to predict a relationship
4. has to be tested empirically before it can be accepted and incorporated into a theory
5. if not supported by empirical evidence, must be rejected
The following are the main characteristics of usable hypotheses.
1. A hypothesis must be conceptually clear.
2. A hypothesis should have empirical referents.
3. A hypothesis must be specific.
4. A hypothesis must be testable with available techniques.
Hypotheses concerning the cause of marriage instability could be formulated in various ways, as follows:
1. The young age of the partners at the time of marriage (under 23 years) has an effect on the stability
of their marriage.
2. The young age of the partners at the time of marriage (under 23 years) has adverse influences on
the stability of the marriage.
3. The young age of the partners at the time of marriage (under 23 years) has positive influences on
the stability of the marriage.
The direction of a suspected causal relationship between variables:
- The first formulation is non-directional, since it does not indicate the direction in which the dependent
variable will be influenced by the independent one.
o ‘young age of partners (under 23yrs) has an effect on the stability of their marriage’
- The second and third formulations indicate a negative and positive influence respectively and they are
thus called directional hypotheses. (preferred)
o ‘the young age of partners at time of marriage (under 23 yrs) has adverse influences on the
stability of the marriage’= negative infl.
o ‘the young age of partners at time of marriage (under 23 yrs) has positive influences on the
stability of the marriage’ = positive infl.
Replication
- The principle of replication says that a hypothesis needs several tests with consistent and repeated
support to gain broad acceptance.
Disconfirmation
- This logic suggests that the more alternatives we test a hypothesis against, the greater our confidence
in it becomes.
- A curious aspect of hypothesis testing is that researchers treat evidence that supports a hypothesis
differently from evidence that opposes it.
- Disconfirmation is associated with the idea of falsification and with the use of null hypotheses. The null
hypothesis is a hypothesis that states that the variables in question are not in fact related.
- Negative evidence is more significant because the hypothesis becomes ‘tarnished’ if the evidence
contradicts it.
- Positive or confirming evidence for a hypothesis is less critical because plausible rival hypotheses may
make the same prediction.
- A hypothesis is never proved, but it can be disproved
Types of hypotheses:
- Hypothesis: a way to predict a relationship
o vs.
- Null hypothesis: does the opposite > no relationship between the IV and DV
- Null hypothesis thinking:
o rests on assumption that researchers try to discover a relationship
o the researcher assumes that the null hypothesis is correct, unless otherwise shown through
alternative evidence.
Null hypothesis vs Alternative Hypothesis
H0 : There is no significant association between hours studied and test mark obtained
H1 : There is a significant association between hours studied and test mark obtained
H0 : There is no significant association between number of beers consumed and test mark obtained
H1 : There is a significant association between number of beers consumed and test mark obtained
Key points
1. Broad research topics must always be narrowed down to clearly defined research questions.
2. Research questions typically refer to a relationship between two or more concepts or variables.
3. Variables change from one observation to another whereas constants remain the same.
4. The research question concerns the influence of the independent variable(s) upon the dependent
variable(s).
5. The relationship between the independent and dependent variable(s) is also influenced by moderator,
control, antecedent and intervening variables.
6. Researchers should be careful to ensure that the relationship being studied is not the result of some
other extraneous variable that has not been included in the study.
7. All concepts and variables must be carefully defined with both conceptual and operational definitions.
Chapter 10 – Research planning and design
What is a research design?
- A research design relates directly to the answering of a research question and the testing of
hypotheses.
- Because research is a project that takes place over an extended period of time, it is unthinkable to
embark on such an exercise without a clear plan or design, a sort of blueprint.
- In quantitative research, the design specifies the operations to be performed to test a specific
hypothesis under given conditions.
- This design is not, however, to be confused with research management which is a plan to guide the
researcher through the research process.
- The purpose of research design is to ensure high internal validity.
- In quantitative research, internal validity is concerned with the question, ‘Do the observed changes in
the dependent variable actually relate to changes in the independent variable?’
Fundamental questions in order to construct a good research design:
- Questions relate to:
1. The focus of research
2. The unit of analysis
3. The time dimension of the problem at hand
1. The focus of research
The focus of research may be understood in terms of three different categories: conditions, orientations
and actions.
1. Conditions are studied when the researcher wishes to explore the current state of the respondents
of the research.
2. Orientations are concerned with participants’ attitudes, beliefs and lifestyles.
3. Actions are also very often the focus of research. These actions may be observed directly or may be
reported by the actor or others who observed the actor.
These three categories are not mutually exclusive.
2. The unit of analysis
- The person or object from whom the researcher collects data.
- The data from such a unit only describes that unit, but when combined with similar data collected from
a group of similar units then the data provides an accurate picture of the group to which the unit
belongs.
- The second important factor that the researcher must consider when planning an appropriate research
design is that of the unit of analysis.
- There are several possible units of analysis which fall into broad categories.
o Individuals are the most common unit of analysis. In this case, the researcher investigates the
conditions, orientations, actions, attitudes and beliefs of a group of individual people.
o Groups of people are also sometimes studied. Some examples of research where the unit of
analysis is more than one person are studies of siblings and identical twins, marital
relationships, family functioning and small-group functioning.
o Organisations with formal structures constitute a particular kind of group that is often used as
the unit of analysis for social research.
o Occasionally, the unit of analysis is a period of time.
o Finally, a common unit of analysis is a particular social artifact. Social artifacts are the products
of social beings and can be anything from poems and letters to automobiles and farming
implements.
- It is very important that the researcher keeps the units of analysis clearly in mind throughout the
research process.
- It is often very tempting to draw conclusions about one unit of analysis when, in actual fact, the
research is focused on a different one.
o This is called the ecological fallacy.
- The tendency of researchers to focus on particular units of analysis is another potential flaw in social
science research.
- This is a form of reductionism and was mentioned in Chapter 1.
- Psychologists tend to focus on the individual, but sociologists investigate groups of people, as do
economists.
3. The time dimension
- A third fundamental aspect of any research is the manner in which it deals with time.
- Observations may all take place at a particular time or may be deliberately stretched over a long
period.
Cross-sectional design
- When all data is collected at the same time, the research design is cross-sectional.
(The discussion of correlational research in Chapter 5 is relevant here.)
- The researcher using this design attempts to understand a topic by collecting a cross-section of
information relevant to that topic.
- Inherent difficulty with this design:
o It does not allow the researcher to measure change over time
o Thus, it is difficult to demonstrate causality
- Advantages of this design:
o Immediate nature
o Relative ease of data collection
Longitudinal design
- Longitudinal designs spread data collection out over time.
o E.g., ‘anxiety levels of 318 students during exam time between 2001-2013’
- Data collected several times, each year, at the same time, from 2001 to 2011
- Once data is collected, a predictable trend may be demonstrated
- Evaluation research may be cross-sectional or longitudinal in nature.
- Some longitudinal designs are introduced below.
o Cohort studies use a type of longitudinal design that tracks particular age group/s over time.
o Tracer studies are used to ‘trace’ people, or to follow their lives over a period of time.
- Data usually only collected at one time, so not strictly longitudinal, but produces data that simulates
longitudinal design
o e.g., tracing graduates 3 years after graduation for a number of years
Types of research design
- Every project requires a research design that is carefully tailored to the exact needs of the problem.
- Research designs have two essential components.
- The first is observation.
- The second essential component of research design is the analysis of the relationships between the
variables.
- Three categories of research design can be distinguished for quantitative research according to the
level of scientific rigour involved in proving the causal relationship, or the aims of the qualitative
research.
o Pre-experimental (3 types)
o Quasi-experimental (4 types)
o Experimental (3 types)
Pre-experimental (exploratory and descriptive) designs are essentially the methods of qualitative
research.
- They satisfy the aim of the researcher to describe and understand a phenomenon.
- Least adequate i.t.o. scientific rigor
- Least likely to establish a clear causal relationship between the IV and DV
- Far fewer requirements than the other designs
o Three pre-experimental designs are presented below:
• one-shot case study
• pre-test/post-test design
• intact group comparison design.
Quasi-experimental designs are designs which do not meet the exacting criteria of experimental designs,
but which manage to approximate experimental conditions.
- They approximate experimental designs (but don’t meet exact criteria)
- Have fewer requirements than experimental designs
- Can achieve a similar level of scientific rigour
- Higher level of control and rigour is possible than in pre-experimental
- Major identifying characteristic: lack of random assignment of subjects to conditions
o Four quasi-experimental designs are presented:
• contrasted groups design
• post-test-only cohort design
• pre-test/post-test cohort design
• time-series design.
Experimental designs are the most rigorous of all the designs and have strict requirements.
- They provide explanatory relationships between variables.
o Three experimental designs are presented:
• pre-test/post-test control group design
• post-test-only control group design
• factorial design.
Pre-experimental designs
- Pre-experimental designs are largely qualitative and often use small, non-probability samples.
- Descriptive and exploratory research respond to the need of qualitative researchers to gain a deeper
understanding of a particular social phenomenon.
o One-shot case study
o In quantitative research, a one-shot case study is most often used to determine whether an
event (or intervention) has any effect upon a group of participants.
o The dependent variable is measured after the event (post-test) and conclusions are drawn.
• Qualitative research uses this method extensively: the most common use being case
studies and focus groups.
o In quantitative research, the lack of an initial measure of functioning, often referred to as the
baseline, makes it very difficult to convincingly demonstrate change resulting from the event, and
therefore does not exclude plausible alternative explanations as mentioned in Chapter 5.
o This design is generally regarded as uninterpretable
o The pre-test/post-test design overcomes this problem
Pre-test/post-test design
- In the pre-test/post-test design, the quantitative researcher measures the dependent variable before
(pre-test or baseline) and after (post-test) the treatment or event that is expected to bring about a
change.
- As a result, the scores for the dependent variable can be compared at two points in time, and the
difference between the before and after scores may be due to the event that occurred between them.
- In quantitative research, the pre-test/post-test design is the most common one for impact studies or
summative evaluations (discussed in Chapter 9). In this case the event is the intervention that is being
evaluated.
Concerns with this design:
- Other changes may be responsible for the change in the DV
o e.g. when a long time has elapsed between pre- and post-tests
- These changes = threats to validity
o they threaten the validity of the study’s conclusions
- One of the problems with a single group design lies in the impossibility of determining what might have
happened if the intervention had not been made.
o This is called a counterfactual condition.
- These other changes that might be confounding the study are of two different types: those that occur
within the environment and those that occur within the participants.
- Events that arise within the environment are referred to as history.
o In the absenteeism example it is possible that the workers’ attendance was influenced by a fear
of losing their jobs due to increasing unemployment, changes in weather conditions which
made it easier to get to work, or the end to a flu outbreak in the area.
- Events that arise within the participants are referred to as maturation.
o It is also possible that the workers’ attitude to their job has changed and, as a result, they are
staying away from work less often.
- In cases where participants must be tested (such as tests of memory or driving skill) a bias called the
test effect may arise. Participants may:
o become bored with the test procedure
o improve through practice
o become fatigued
- Instrumentation refers to changes in the measuring device or procedure over the course of the study.
- When participants drop out of a study, the scores that remain at post-test may not be representative of
the original sample that was recruited for the study.
o This is referred to as attrition or mortality.
- A final source of error in the pre-test/post-test design is that of regression towards the mean.
o All variables fluctuate somewhat due to chance.
Pre-experimental designs
1. Intact group comparison design
- Instead of using only a single group, the intact group comparison design uses two groups, one of which
is affected by the event or treatment, while the other is not.
- ‘Intact group’: any group that exists prior to a research study
- Instead of comparing the performance of one group before and after the event or treatment, this
design compares the scores of two groups, only one of which was affected by the event or treatment.
Quasi-experimental designs
- Quasi-experimental designs allow the researcher to maintain a higher level of control and rigour than is
possible in pre-experimental designs, even when the demands of experimental designs cannot be met.
1. Contrasted group design
o One solution (but not the best) to the problems of the intact groups design is to use groups that
clearly contrast.
o In other words, the researcher’s goal has changed from finding similar groups to finding groups that
are essentially dissimilar, or contrasting in their main characteristic.
o If it is known that the groups differ in terms of the IV, and a difference between the groups is
recorded in terms of the DV, it can be concluded that the difference in the DV is due to the
differences between the two groups
o Does not allow for an independent event or treatment, but is based on differences that already
exist between the two groups.
2. Cohort designs
- Cohorts are likely to be similar to each other since their environment is the same except for the
treatment variable.
o e.g., grade 6 class at a school in 2005 is likely to have similar demographic characteristics as
the grade 6 class at the same school in 2006
- Testing occurs at different points in time as cohorts follow each other through the system; this is
what characterises the use of cohorts
- Both cohorts are pre-tested, receive the intervention, and are post-tested, at different times
- Advantage of this design: confidence in the similarity of the groups prior to treatment (at pre-test)
- Cohort designs are strengthened when the researcher can argue that the two cohorts share similar
environments, except for the interventions that they receive.
Pre-test/post-test cohort design
- A more sophisticated cohort design is the pre-test/post-test cohort design.
- The main advantage of this design is that the researcher is more confident that the groups were similar
at pre-test, that is, prior to treatment.
3. Time-series design
- Time-series designs represent an improvement over the one group pre-test/post-test design of the pre-
experimental group.
- Several measurements taken before (pre1, pre2 and pre3) and after (post1, post2 and post3) the
event/treatment are made
- The effects of history, maturation, test effects and regression towards the mean are observed by:
o Comparing each measurement with the measurements that were taken before and after it
o Differences between those measurements taken before the event (for example, pre3-pre2), and
between those taken after the event (for example, post2-post1), but not between the before
and after measurements, must be due to variables such as history, maturation, regression
towards the mean and test effect.
Experimental designs
- Most experimental designs that meet the objectives of explanatory research are characterised by their
use of randomisation to create two or more equivalent groups.
- The use of randomisation negates the difficulties of ensuring that the groups are identical.
- ‘R’ used to identify groups created on the basis of random procedures
Randomisation
- Randomisation requires that every participant involved in the study has an equal chance of being
assigned to any of the groups of the study.
- This can be achieved by:
1. Identifying the entire group of participants,
2. Randomly dividing this group into two or more subgroups (depending on the chosen design)
through the use of random number tables, coin flipping or various other randomisation
techniques.
- It is important here to distinguish between random selection and random assignment.
Random selection vs. random assignment:
- Random selection means that each member of the population has an equal chance of being selected
into the sample.
- Random assignment (randomisation) means that each member of the sample has an equal chance of
being assigned to any of the conditions or groups in the study.
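The distinction can be sketched in code. The following is a minimal, hypothetical Python illustration of random assignment (the function and variable names are invented for the example and are not from the source):

```python
import random

def random_assignment(sample, n_groups=2, seed=0):
    """Shuffle the sample, then deal members round-robin into groups,
    so every member has an equal chance of landing in any group."""
    rng = random.Random(seed)
    shuffled = list(sample)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical sample of 40 participant IDs, split into two conditions.
participants = [f"P{i:02d}" for i in range(40)]
experimental, control = random_assignment(participants)
```

The fixed seed only makes the sketch reproducible; in an actual study the assignment would of course not be replayed this way.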
The advantage of randomly generated groups:
o the researcher starts the experiment with two or more equivalent groups
o if only one group (say Group A) is subjected to the treatment, the researcher can be reasonably
sure that any difference between the groups thereafter is due to the effects of the treatment and
nothing else.
Groups
- The group that does not receive the treatment is called the control group
o The purpose of a control group is to compare treated participants with non-treated
participants.
o In this way the effect of treatment versus no treatment can be examined.
- While the group that receives the treatment is called the experimental group.
Placebo control group
- The placebo control group is another type of control group.
- Respondents or participants in a placebo control group are led to believe that they are receiving a
viable treatment, although in reality what is given to them is inert and expected to have no effect.
- The Hawthorne effect is relevant here in accounting for the differences between pre- and post-test
scores.
o Generally accepted in research that just by receiving attention from a researcher, participants’
post-test scores might change
Matching
- Of course, it is impossible to randomly assign participants to groups when the independent variable
cannot be manipulated.
o An obvious example of a variable that cannot be manipulated is sex.
o The same is true of religious affiliation, cultural background, income, age, place of residence
and many other characteristics.
- Other techniques must be used to establish equivalent groups
o e.g. matching the elements of the group
- Pairs of members having identical characteristics considered relevant for the research are formed
o e.g. pairs matched on age, sex and health; each member of each pair is then randomly assigned to
a different group (this ensures equivalent properties in the two groups)
- When one of these factors is the independent variable, the researcher must use other techniques to
establish equivalent groups.
- One procedure is matching the elements of the group.
o In this case, the control for the equivalence of the two groups is based on having knowledge of
the main characteristics of the elements, persons, events or objects to be investigated.
o In this way, the two groups so constituted will have equivalent properties; they are called
dependent groups.
- Another method of matching, which has the advantage of being more efficient especially in the case of
large groups, is aimed at forming groups that have the same global characteristics.
o This matching is not as precise as in the first case
o Matching must be done for all relevant factors
- The choice between randomisation and matching often does not exist.
- Matching procedures can only be used if enough variables relevant to the research are known.
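One common matching procedure, forming pairs on a relevant characteristic and then randomly splitting each pair between the groups, could be sketched as follows (a hypothetical illustration; the data and names are invented):

```python
import random

def matched_assignment(participants, key, seed=0):
    """Sort participants on the matching variable, form consecutive
    pairs of near-identical members, then randomly assign one member
    of each pair to each group."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=key)
    group_a, group_b = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)           # randomise within the matched pair
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical participants matched on age.
people = [{"id": i, "age": 20 + (i * 7) % 40} for i in range(20)]
a, b = matched_assignment(people, key=lambda p: p["age"])
```

In practice the matching key would combine all relevant factors (age, sex, health, and so on), not a single variable as in this sketch.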
Experimental design
1. Pre-test/post-test control group design
- The pre-test/post-test control group design requires participants to be randomly assigned to two
groups:
1. the experimental group
2. the control group
- Note that a control group is similar to the comparison group of the intact group comparison design,
except that it is arrived at through random assignment.
- Both groups are measured at the beginning of the study.
- Thereafter, the experimental group is subjected to the event or treatment.
- Thereafter, both groups are again measured
- The pre-and post-test scores of both groups can now be compared in order to assess the effect of the
event
Strengths of this design:
- Randomisation: it is expected that the two groups will be equivalent at pre-test
- The effects of history, maturation and regression towards the mean can be measured
Weaknesses of this design:
- Giving a test twice (pre-test and post-test) may sensitise participants to the material
- This may make post-test scores different to what they might have been without a pre-test
- This may affect external validity: the fact that participants had a pre-test may have affected the
way they respond to the post-test, so results may not generalise to people who were never pre-tested
- However, if the two groups are truly randomly assigned, this effect will be the same for the
control group, and it is therefore no real threat to internal validity.
2. Post-test-only control group design
- An experimental design that has virtually all the experimental rigour of the pre-test/ post-test control
group design is the post-test-only control group design.
- Randomisation aims at ensuring that the experimental and control groups are identical
- Except for the fact that only the experimental group receives the treatment or event.
- Therefore the pre-test/post-test comparison incorporated into the previous design may be
superfluous.
- All the data is collected at one time
o Problems of maturation, history, test effects and regression towards the mean do not arise
- Because there is no pre-test, it is impossible to be sure that the two groups are equivalent to start with
- The simpler design represented by Figure 10.8 does away with the pre-testing.
3. Factorial designs
- Factorial designs can be thought of as generalisations of either the pre-test/post- test control group
design or the post-test-only control group design.
- The important difference, however, is that factorial designs can incorporate two or more independent
variables, whereas the previous designs only allow for a single independent variable.
o E.g., a factorial design with two IVs (each having two levels)
o e.g., IV 1 = sex (males & females)
o IV 2 = age (18-23 yrs and 24-30 yrs)
- The following figure illustrates how an analysis of the relationship between these two independent
variables demands four experimental groups in the design.
- To analyse the relationship between these two variables we need four experimental groups in the
design
- The groups are exposed to different combinations of the two (or more) IVs and then post-tested
- Differences between the levels of the IVs are looked for > main effects
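The arithmetic of the example above, two IVs with two levels each requiring four experimental groups, can be sketched as (hypothetical variable names):

```python
from itertools import product

# Hypothetical 2x2 factorial design: two IVs, each with two levels.
iv_levels = {
    "sex": ["male", "female"],
    "age_band": ["18-23", "24-30"],
}

# Each combination of levels defines one experimental group (cell).
cells = list(product(*iv_levels.values()))
```

Adding a third two-level IV would double the number of cells to eight, which is why factorial designs grow expensive quickly.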
Developing a research design
- Very often research is hampered by constraints of resources, participants and time.
- Rarely can any of the designs discussed in this chapter be imposed directly onto actual research
projects.
- Rather, these designs explain the logic of social research and should serve as a foundation for good
research design.
- The researcher’s work is complicated by many sources of bias and error which must each be dealt with
as effectively as possible in order to ensure the highest quality of research.
- In developing a design for a particular research problem, there are several questions to be considered.
1. Is a cross-sectional or longitudinal design more appropriate?
Studies involving change over time virtually always use longitudinal designs.
Cross-sectional designs are most useful for describing populations, and differences between
populations, at a particular moment in time.
2. If a longitudinal design is used, how many repeated observations will be made?
How will the problems of history, maturation, test effect and regression to the mean be
addressed?
Is there any danger of experimental mortality affecting the usefulness of the design?
3. Is it possible to create equivalent groups either through randomisation or matching?
If this is possible, then experimental designs are the most appropriate.
If not, the researcher is forced to use a quasi-experimental design.
It is important for social scientists to be very critical of their own and their colleagues’ research designs.
Summary of sources of bias in research design
- Because every research project has its own particular set of problems, a complete list of the possible
sources of bias in research does not exist.
- The quality of both quantitative and qualitative research can be affected to different extents and in
different ways by these factors.
- There are, of course, many other sources of bias that relate more specifically to methods of data
collection, subject selection, data presentation and analysis.
- These are discussed in other chapters and only those relating to research planning and design are dealt
with here.
History and maturation
- Problematic to designs that require that data be collected at more than one time
- Solution:
1. Collect data at a single point in time
2. Pre-test/post-test control group and time-series design, where repeated measurements allow the
researcher to estimate the effects
- History refers to changes which occur in the world, other than those intended by the researcher, and
which might affect the results.
- Maturation refers to changes that occur within participants and thus confound the researcher’s design.
Regression towards the mean
- Regression towards the mean arises when researchers base their conclusions upon single
measurements.
- If a pre-test score is unusually high due to chance factors, the post-test score is likely to be lower,
regardless of any changes due to the experimental variables.
- The most effective solution to this problem is:
1. The repeated testing adopted by time-series designs.
2. In the case of qualitative research, the freedom of selecting the participants as the research
develops, gives the opportunity to compensate for this tendency.
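A small simulation illustrates the phenomenon (hypothetical numbers, assuming each observed score is a stable true score plus random noise):

```python
import random

# Each person's observed score = true score + random noise.
# People selected for an extreme pre-test score tend to score closer
# to the mean at post-test, even with no treatment at all.
rng = random.Random(1)
true_scores = [rng.gauss(100, 10) for _ in range(5000)]
pre = [t + rng.gauss(0, 10) for t in true_scores]
post = [t + rng.gauss(0, 10) for t in true_scores]

# Select the people with the highest 10% of pre-test scores.
cutoff = sorted(pre)[-500]
idx = [i for i, p in enumerate(pre) if p >= cutoff]
mean_pre = sum(pre[i] for i in idx) / len(idx)
mean_post = sum(post[i] for i in idx) / len(idx)
# mean_post falls back towards 100 relative to mean_pre.
```

The selected group's post-test mean drops towards the population mean purely by chance, which is exactly the error a single pre/post measurement cannot rule out.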
Test effect
- In quantitative research, prior exposure to a test or measurement technique can bias a person’s
responses.
- This is particularly problematic when participants are tested and retested, as in all pre-test/post-test
and time-series designs.
- Effects: boredom, practice and fatigue
- Solution:
1. Reduce number of times participants complete the test
2. Vary the test slightly
Instrumentation
- Some of the many difficulties of developing appropriate instruments are discussed in a later chapter.
- However, instruments present a problem for design as well, particularly when different instruments are
used to test the same concept.
- Solution:
1. The researcher must be sure that the instruments are equally sensitive and accurate, otherwise
changes between the two measurements might be due to differences in the instruments and not
due to any differences within the participants.
2. Instruments should be pre-tested on a pilot group
3. Qualitative studies should be in accordance with cultural and other environmental factors
Experimental mortality
- Often participants drop out of a research project during the data-collection procedures.
- This is particularly true of research that takes place over an extended period of time.
- Time-series designs in particular are vulnerable to this source of error.
- Possible that participants who drop out of a study are systematically different from those who remain
with the study until the end
- Solution:
1. design the research so that it is convenient for the participants to participate until the end
2. Should impress upon the participants the importance of their continuing co-operation
Reactive effects
- When participants are aware of being observed, they behave in unnatural ways, that is, they ‘react’ to
being observed.
o E.g., Hawthorne effect
- One example of a reactive effect is test anxiety.
- The measuring instrument may increase the arousal levels of some participants and thus influence their
scores.
- Similarly, some participants try to please the experimenter and provide those results that they believe
are desired.
- Others will do just the opposite and try to confound the study in order to find out how the researcher
will react.
- Solution:
1. Use unobtrusive techniques of data collection
2. Not always possible – attempt to collect data in a way that causes the least disturbance to the
participants’ lives
Selection bias
- Since many studies incorporate more than one group of participants, it is important that the
quantitative researcher be sure that these groups are equivalent to each other in all respects except
for the independent variable of the study.
- Little can be done if the IV is beyond the researcher’s control and a quasi-experimental design has to be
used.
- Solution:
1. When the researcher is in control of which participants fall into which group
o Best technique for a large group of participants is random assignment
o When only a few participants and has reason to believe that randomisation might not result in
equivalent groups, the option of matching is available
- The qualitative researcher is less affected, since participants are selected in the course of the
research, which allows more flexibility
Relationship between internal and external validity
- The potential of a research design to achieve the aims of the study is referred to as the validity of the design
- Measured in two separate but related dimensions: internal and external validity
- Good research design leads to high internal validity, where the researcher is able to answer the
research question convincingly.
- External validity is concerned with the question, ‘To what extent do the results obtained in this study
apply to the population being studied and to contexts different to those of this specific study?’
- Examine the extent to which the results of the study can be generalised
- Consider two factors:
1. Sample must reflect the experiences of the population as fully as possible
2. Researcher must make sure that the study simulates reality as closely as possible
- In qualitative research, external validity is often referred to as transferability.
- Sometimes, in intervention research, the distinction is made between efficacy and effectiveness
research.
- Efficacy studies test the success of one treatment over another under tightly controlled lab-like
conditions.
- Thus, in an efficacy study, the researcher tries to control and isolate the variable of interest.
- In the real world it is often not possible to give people transport, childcare and pay them to come for
treatment.
- In fact, it is supposed to be the other way round. In the real world we conduct effectiveness studies, to
see if there is actually uptake of the service and what its effect is if people have competing demands on
their time.
- Internal and external validity tend to be inversely related:
- studies with high internal validity often have low external validity, and vice versa
- Studies in a particular social context > high external validity and low internal validity
- Studies in more controlled environments > high internal validity and low external validity
Key points
1. A design in quantitative research is a detailed outline, spelled out in clear and certain terms, for
the testing of hypotheses. It is a specification of the operations to be performed in order to test a
specific hypothesis under given conditions
2. In the qualitative case, as the many steps of literature review, theory building, sampling and data
collection do not constitute separate activities but are continuously interrelated, the planning and
design of the entire process remain more flexible.
3. The purpose of research design is to ensure high internal validity. In quantitative research, internal
validity is concerned with whether the observed changes in
the dependent variable actually relate to changes in the independent variable. In qualitative
research, internal validity is concerned with whether the researcher’s method of data collection
and analysis adequately addresses the research question.
4. When dealing with research design, three aspects must be considered: the focus of research, the
unit of analysis and the time dimension.
5. There are three types of research designs: pre-experimental (exploratory, descriptive), the quasi-
experimental, and the experimental designs.
6. Experimental designs, in particular, use some specific methods such as randomisation, placebo
control group and matching.
Chapter 11 – Sampling
The purpose and types of sampling
- The entire set of objects or people that is the focus of a research project and about which the
researcher wants to determine some characteristics is called the population.
o e.g., all primary school teachers
- The subset of the whole population which is investigated by a researcher, and whose characteristics
(in the case of quantitative research) will be generalized to the entire population, is called the
sample.
o e.g. selecting 100 primary school teachers from a full list of primary school teachers
- In each sample, a car, a house or one primary school teacher constitutes an element or unit of analysis.
o The person or object from whom the social researcher collects data
o e.g., one primary school teacher
- Specific values or quantities that relate to the population as a whole, are called population
parameters.
o e.g., the average age of all primary school teachers
- When the corresponding values or quantities are drawn from the sample, they are called sample
statistics (or simply statistics).
o e.g., the average age of the 100 teachers in the sample
- Statistics are therefore estimates of population parameters.
- The process of generalizing from findings based on the sample to the population is called statistical
inference.
o we try to draw inferences about the unknown population parameters from the known sample
statistics
- Sampling = certainty is abandoned in favor of probability
Sampling theory
- Sampling theory is the study of the relationship between a population and the samples drawn from it.
- Its objective is to draw inferences about the unknown population parameters from the known sample
statistics.
- Statistical inference is the process of generalizing from findings based on the sample to the
population.
Quantitative research:
- Aims at testing a hypothesis on a representative sample, so that the results can be generalized to the
whole population.
- It is thus deductive and tends to simplify the complexity of reality.
Qualitative research:
- Aims at acquiring a deeper insight into complex phenomenon that might be quite specific and unique
- Is concerned with studying people in their natural context.
- It is thus inductive and holistic.
- It does not aim at generalizing the results to the population but aspires towards some aspects of the
new knowledge and understanding being transferrable to other units of the population.
- Sampling means abandoning certainty in favor of probability.
- Probability is understood as the likelihood of an event happening.
The main advantages of sampling, as compared to the collection of data on the whole population, are the
following:
1. Gathering data on a sample is less time consuming
2. Less costly since the costs of research are proportional to the number of hours spent on data
collection. (e.g., travel expenses, cost of reproducing data)
3. Sampling may be the only practical method of data collection.
This is the case in studies where the property under investigation necessitates the destruction of
the object.
When testing the resistance of an object to wear and tear, such as testing the lifespan of an electric
light bulb, one must wait till the bulb is ‘dead’.
It would be inadvisable to test an entire population.
This is called destructive sampling and, in this case, testing can only be realistically conducted on a
sample.
4. Sampling is a practical way of collecting data when the population is infinite or extremely large,
thus making a study of all its elements impossible.
Main sampling concepts
Good sampling implies:
- a well-defined population
- an adequately chosen sample
Good sampling must satisfy different criteria for quantitative and qualitative research.
1. Quantitative research.
o A sample is adequate if it enables the obtained results to be generalized to the whole
population.
o The sample is selected before starting data collection and analysis.
o Based on an estimate of how representative the sample is of the whole population
2. Qualitative research.
o A sample is adequate if it allows all possibilities or aspects of the researched phenomenon
to be identified.
o In other words, when the researcher reaches the conclusion that collecting more data and
increasing the sample does not bring any new information, data saturation has been
achieved.
§ Data saturation: collecting more data or increasing the sample does not bring any
new information
o In this case, the sample is partly selected in an ongoing way during the process of data
collection and analysis.
o One must have a well-defined population and an adequate sample in both cases.
A well-defined population
- A population, sometimes referred to as a ‘target population’, is the set of elements that the research
focuses upon.
- It is essential to describe the target population accurately.
- This can be done most effectively by clearly defining the properties to be analyzed (this establishes
boundary definitions); in quantitative research it is done by providing an operational definition.
- Once an operational definition is given, boundary conditions can be established which make it easy to
ascertain whether an element belongs to that population.
- It should then be possible to compile a list of all elements of the population under investigation.
The sample
- Quantitative research - must have properties which make it representative of the whole population.
- Representative sample – to determine samples that best represent a population so as to ensure
accurate generalisation of results
Sampling frame:
- List of all units from which the sample is to be drawn
- Exclude no element of the population under investigation
- All elements of the population must have the same chance of being drawn into the sample
Sampling for Quantitative Research
- Quantitative sampling: although a sample is, by definition, a subset of the population, to be useful it
must have all the properties of the population so as to be representative of the whole.
- Thus, one of the major issues in sampling is determining the samples that best represent a population,
so as to allow for an accurate generalization of results.
o Such a group is called a representative sample.
- The first means of ensuring a representative sample is the use of a complete and correct sampling
frame.
o This is the list of all units from which the sample is to be drawn.
- An inadequate sampling frame that discards parts of the target population has been the cause of many
poor research results.
- An even stricter requirement would be that all elements of the population should have the same
chance of being drawn into the sample or at least that the probability of this happening can be
specified.
Probability or random sampling
- When the probability of including each element of the population can be determined.
- Possible to estimate the extent to which the findings based on the sample are likely to differ from what
would have been found by studying the population
Non-probability sampling
- When the probability of including each element of the population in a sample is unknown.
- Advantages:
o Practical advantages
o e.g. cheaper, faster and quite adequate for homogenous populations
- Disadvantages:
o Difficult to estimate how well the sample represents the population
o Generalization therefore highly questionable
o This disadvantage can be reduced by enlarging the sample
- Some elements might even have no chance of being included.
- Although it is difficult to determine the extent to which non-probability samples are representative of
the population, they have some practical advantages.
- When the necessary population lists are not available, non-probability sampling remains the only
possibility for the researcher.
- Non-probability sampling is also almost always cheaper, faster and often adequate for homogeneous
populations.
- Finally it should be noted that to some extent the disadvantages of non-probability sampling could be
reduced by enlarging the sample.
Probability sampling for quantitative research
Most common sampling procedures in probability sampling:
1. Simple random sampling
2. Interval or systematic sampling
3. Stratified sampling
4. Multi-stage probability sampling
Simple random sampling:
- “Random”, in the scientific sense, expresses the idea of chance being the only criterion for selection.
- The selection of an element from a population is called random when the chance, likelihood or
probability of being included in the sample can be calculated for each element of the population.
- Accordingly, simple random sampling is a sampling procedure that provides equal opportunity of
selection for each element in a population.
- There are various techniques for selecting randomly.
- The most common are the lottery techniques where a symbol for each unit of the population is placed
in a container, mixed well and then the ‘lucky numbers’ drawn that constitute the sample.
- A more sophisticated method, particularly useful for large populations, is the use of random number
tables or electronic random number generators.
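The lottery and random-number approaches described above can be sketched in a few lines of Python. This is an illustrative sketch, not from the notes: the population of 500 numbered units and the sample size of 50 are hypothetical.

```python
import random

# Hypothetical population: 500 numbered units.
population = list(range(1, 501))

random.seed(42)  # fixed seed so the draw can be reproduced

# random.sample() draws without replacement, giving each unit an
# equal and calculable chance of selection -- a simple random sample.
sample = random.sample(population, k=50)

print(len(sample))       # 50 units drawn
print(len(set(sample)))  # 50 -- no unit appears twice
```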
Interval or systematic sampling:
- Similar to simple random sampling, but instead of relying on random numbers, this technique is
based on the selection of elements at equal intervals, starting with a randomly selected element on
the population list.
Advantage:
- This sampling procedure is simpler and quicker than the use of random numbers.
Unfortunately, this method also has constraints.
- Like simple random sampling it relies on the availability of a complete unbiased population list.
- Moreover, this list must not have any cyclical or periodic characteristics.
e.g. we require a sample of 50 units from a population of 500
> k = length of interval
> k = N/n
> k = 500/50 = 10
> thus every 10th unit should be selected for the sample, with the starting element being any no.
between 1 and 10, or between 1 and k
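The worked example above (N = 500, n = 50, so k = 10) can be expressed as a short sketch; the numbered population list is hypothetical.

```python
import random

def systematic_sample(population, n):
    """Select every k-th element after a random start, where k = N // n."""
    N = len(population)
    k = N // n                        # interval length, k = N/n
    start = random.randint(0, k - 1)  # random starting element in the first interval
    return [population[i] for i in range(start, N, k)][:n]

random.seed(1)
population = list(range(1, 501))            # N = 500
sample = systematic_sample(population, 50)  # k = 500/50 = 10
print(len(sample))  # 50
```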
Stratified random sampling:
- The principle of stratified random sampling is to divide a population into different groups, called strata,
so that each element of the population belongs to one, and only one, stratum.
- Then, within each stratum random sampling is performed using either the simple random sampling or
the interval sampling method.
- Although many samplings are performed, each is done for a relatively small population only.
- This increases availability of adequate lists and facilitates selection of a simple random sample without
decreasing the quality of the sample in any way.
- Stratified sampling, on the other hand, by preserving proportions even of very small samples, will allow
for any small minority to be properly represented.
- Of course, if the population is very homogenous with no marked differences of opinion between sexes,
or if both sexes are fairly equally represented, simple random and stratified sampling will lead to
similar results.
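As a sketch of the principle, the function below performs a proportional simple random sample within each stratum; the two-stratum population (300 women, 200 men) is hypothetical.

```python
import random

def stratified_sample(strata, n_total):
    """Draw a simple random sample from each stratum, preserving proportions."""
    N = sum(len(units) for units in strata.values())
    sample = []
    for name, units in strata.items():
        n_stratum = round(n_total * len(units) / N)  # proportional allocation
        sample.extend(random.sample(units, n_stratum))
    return sample

random.seed(0)
# Hypothetical population: 300 women and 200 men.
strata = {"women": [f"w{i}" for i in range(300)],
          "men":   [f"m{i}" for i in range(200)]}
sample = stratified_sample(strata, 50)  # expect 30 women and 20 men
print(len(sample))  # 50
```

Even if "men" were a small minority of the population, the proportional allocation would guarantee them a corresponding share of the sample.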
Multistage probability sampling
- The principle underlying multistage sampling is to start by sampling a population which is much more
general than the final one.
- In a second stage, based on the first sample, a new population is considered, one that is less general
than the first one, and a new sample is subsequently determined.
- The procedure is continued until the population to be investigated is reached and a final sample is
drawn.
- At each stage sampling is done in a random way (using one of the 3 previously mentioned sampling
methods)
- While not as accurate as simple random sampling, the advantage of multistage sampling become
apparent when other sampling methods fail due to a lack of complete lists of elements for a population
under investigation. > reduces costs.
- Multistage sampling can cut down expenses by reducing the cost of compiling long lists and by reducing
the travel expenditure necessary when respondents are spread over a large area.
Non-probability sampling for quantitative research
Most common sampling procedures in non-probability sampling:
1. Convenience or availability sampling
2. Purposive sampling
3. Quota sampling
4. Cluster sampling
Convenience or availability sampling
- Easily accessible, available and convenient sample is drawn.
o e.g., a convenient place is chosen where one is assured of finding many people, e.g. a bar, bus-
stop
- This sampling method, the most rudimentary one, consists of taking all cases on hand until the sample
reaches the desired size.
- Generalization based on a sample, not representative of the whole population, is extremely risky.
Advantage:
- Saves time and money
Disadvantages:
- Can introduce serious biases (e.g., men will be over-represented in bars)
- Generalization therefore risky (not representative of the whole population)
Purposive or judgmental sampling
- Selection on the basis of what the researcher considers to be typical units (researcher’s judgement)
- Units judged to be most common in the population are selected.
o E.g. If a school pupil is defined as being ‘12yrs-old, male, Catholic, with parents in clerical
profession’
- Only units meeting these characteristics are selected.
Disadvantages
- Relies heavily on the researcher’s subjective considerations (and not on objective criteria)
- Often leads to non-representative samples
- This sampling method is based on the judgement of a researcher regarding the characteristics of a
representative sample.
- The strategy is to select units that are judged to be the most common in the population under
investigation.
- The great danger in this type of sampling is that it relies more heavily on the subjective considerations
of the researcher than on objective criteria.
Quota sampling:
- Non-probability equivalent of stratified sampling
- The purpose here is to draw a sample that has the same proportions of characteristics as the
population.
- However, the sampling procedure, instead of relying on random selection, relies on convenience.
Advantages
- Although much less accurate than stratified sampling this method is often more convenient and
economical.
- No lists need be compiled, and all data can be collected at an arbitrary location.
Disadvantage:
- the sampling procedure relies on accidental choice (rather than random selection).
Cluster sampling:
- Equivalent to multi-stage quantitative probability sampling
- The process is the same as described above
- Except that the use of simple random sampling is replaced by a non-probabilistic selection method,
such as the availability of the elements or the convenience of reaching them.
Other sampling possibilities
Independent versus related/dependent samples
Independent – each unit is drawn randomly from the population and is also randomly assigned to one
or the other group.
Dependent (related) – the elements of the groups or samples have been matched by specific
properties, so that every member of one sample has a partner somewhere else in the study.
Sample size: How large should a sample be?
Major criterion used to determine the sample size:
o The extent to which the sample is representative of the population
o Expect a 95% chance that the sample is distributed in the same way as the population
o Larger samples required if one wants high accuracy
§ large variability
§ heterogeneity
o Smaller samples required when less accuracy is required
§ homogeneity
§ less variables examined
- The size of the sample is an important determinant of the statistical power of the research.
o This refers to the likelihood that inferential statistical tests will be able to test the research
hypotheses adequately.
- Extent to which the sample is representative of the population.
o This can be expressed in terms of probability.
- The more heterogeneous a population is, the larger the sample must be to represent the
characteristics of the population correctly.
- At least 5% of the population.
A researcher’s decision about the best sample size depends on three things:
1. Degree of accuracy required
2. Degree of variability or diversity in the population
3. Number of different variables to be examined simultaneously in the data analysis.
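The notes treat sample size only qualitatively. A standard formula often used to make the accuracy/variability trade-off concrete (it is not given in the notes) is Cochran's formula, sketched here.

```python
import math

def cochran_n(z=1.96, p=0.5, e=0.05):
    """Required sample size for confidence level z (1.96 ~ 95%),
    expected population variability p, and margin of error e."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# Maximum variability (p = 0.5) with a 5% margin of error:
print(cochran_n())        # 385
# Relaxing the accuracy requirement to a 10% margin shrinks the sample:
print(cochran_n(e=0.10))  # 97
```

As the notes say, more heterogeneity (p closer to 0.5) and a demand for higher accuracy (smaller e) both push the required sample size up.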
Sampling errors and related problems
Chance factors:
- This type of error is the inevitable result of sampling and can never be eliminated, but the sampling
error can be calculated statistically.
Bias in selection:
- Arising primarily from faulty technique.
- Can be avoided.
o e.g., a researcher fails to take into account one criterion, e.g., age of respondents; or the
respondents themselves may give incorrect information about their age; over- or under-
representation of a population
Non-response error:
- Comes about when an element of the sample does not respond to a measurement instrument or is not
available for some unknown reason.
- Alters the representativeness of the sample
- As a consequence, such elements are excluded from the group, which changes the constitution, and
thus the representativeness, of the sample.
- It may not be possible to interview or test a person because of illness, language differences or other
factors.
- The chosen respondent cannot be found because of changes in residence or name, or because of
death.
- The selected person may be absent whenever the interviewer calls, purely by chance.
- The person can refuse to collaborate and not answer questions or give information.
Sampling for qualitative research
- Therefore, since qualitative research does not concentrate on the ‘average’ person but rather on the
diversity of cases, non-probability sampling is used.
- The three types of non-probability sampling introduced under quantitative sampling, convenience or
accidental, purposive or judgement, and quota sampling are the most frequently used.
Convenience or accidental sampling:
- Based on the availability of the units of the target group
Snowball sampling (also called chain or referral sampling)
- Particularly useful for identifying people not listed or difficult to find.
- This method could also be used to locate and interview pregnant women not attending pre-natal
clinics.
- This technique is very helpful to gain access to participants when the framing of the target population
is not possible.
- Can increase the size of the sample or develop a network
Qualitative approach to purposive or judgmental sampling
- Rests on the assumption that the researcher knows what type of participant is needed
- The element that is most complex and rich in information is the most valuable
- The researcher will thus purposefully choose participants based on some specific criteria that are
judged to be essential.
Extreme or deviant case sampling
- Concentrates on obtaining information from most extreme cases
- Highly unusual manifestations of phenomena may be most revealing
- Comparing extreme cases may lead to better in-depth understanding of underlying factors of a
problem
- Getting information from the most extreme cases, the ‘exceptions to the rule’.
- Here the assumption is that comparing these extreme cases might lead to a better in-depth
understanding of the underlying factors of a problem.
Critical case sampling
- In some instances, one can assume that some cases have key position.
- E.g., “if student A can pass this test, then all others can”
- Advantage – reduces the sample to a few cases most likely to give a lot of essential, relevant
information
Maximum variations sampling
- Aspires to having the most heterogeneous sample.
- E.g., studying the causes for divorce
- The importance is to have the widest possible range of possibilities
- This will yield a rich variety of variation for the phenomenon under study.
- However, for such samples to be all-inclusive, they will also be relatively large and costly.
Quota sampling
- Principles are the same as proportional sampling described under quantitative methods
- To this method one can add a less restrictive one, non-proportional quota sampling.
- The researcher does not make a concerted effort to reach the numbers or quota corresponding to the
proportions in the population.
- All categories represented in the sample.
- One is satisfied if the information given by the participants of a certain quota has reached saturation.
Combination or mixed purposeful sampling
- Often a combination of various sampling methods is chosen to arrive at the most appropriate sample.
- Flexibility is particularly helpful when dealing with triangulation
- Important – sampling strategy focuses on and fits the aim of the research, the method, and
instruments used, as well as the available resources
Sampling process and sample size: when is enough, enough?
- In the ideal case the data collection will continue if new elements or facts are found and will stop when
no new details are available, this is also called sampling to redundancy.
- Thus, the size of the sample is not determined in advance but becomes a consequence of the
exhaustion of new discovery or data saturation, as defined earlier.
- Unfortunately, in reality time and money sometimes demand some compromise.
- Quality of the research is directly related to the sampling procedures, the adequacy of the techniques
chosen, the professionalism of implementation, and the appropriateness of the sample size
Key points:
1. The purpose of sampling is to provide various types of information of a qualitative or quantitative
nature about a population by examining a few selected units.
2. Quantitative sampling emphasizes the representativeness of the sample, allowing for generalization
of the results to the target population.
3. Qualitative sampling aims at the identification and understanding of a complex phenomenon
experienced differently by the various units of the population.
4. The main distinction between probability (or random) sampling and non-probability sampling is
related to the type and aim of the research, with probability sampling used nearly exclusively in
quantitative research and non-probability sampling being particularly well suited to the aims of
qualitative research, although it may also be used for quantitative research.
5. Sampling is based on a well-defined population and must be determined by the aim of the research.
6. Errors introduced into the sampling process (such as chance factors, bias in selection, non-response
error, incorrectly identified criterion, etc.) play an important role in the end product.
7. The optimal size of the sample is determined, in the case of quantitative research, by the need for
the sample to be representative of the whole population, and in the case of qualitative research, by
the need for all the possible answers or information to have been collected.
Chapter 13 – Ensuring the quality of data
Quantitative research: Reliability and validity of measurements
- Once a researcher has identified the constructs or variables that are important to a particular study, it
is necessary for them to be operationalised so that they can be measured.
- The evaluation of such measurement is done in terms of the principles of reliability and validity.
- Reliability is the extent to which the observable (or empirical) measures that represent a theoretical
concept are accurate and stable over repeated observations.
- Validity is concerned with just how accurately the observable measures actually represent the concept
in question or whether, in fact, they represent something else.
- Unfortunately, virtually no measurement technique in social science is perfect.
- It is therefore important that researchers always evaluate the measures that they use.
- This is the purpose of reliability and validity.
Reliability
- Reliability is concerned with the consistency of measures.
- An instrument that produces different scores every time it is used to measure an unchanging value has
low reliability.
- It cannot be depended upon to produce an accurate measurement.
- In most cases, the reliability of measurement is the degree to which that instrument produces
equivalent results for repeated trials.
- However, there may also be error in the measurement of constructs.
- A measure is similar to a ruler, but because the concepts are a bit more nebulous than length, breadth
or height, the measurement is less precise.
- The reliability of a set of scores is the degree to which the variance in scores is due to systematic rather
than chance factors.
- Reliability measures the proportion of variance that is due to true differences between subjects’ scores
rather than differences due to chance.
- But the reasons for lack of reliability can be quite diverse and complex
- It is worth emphasising that in the social sciences there is always concern about establishing
regularities of perceptions, opinions, behaviours, etc.
- Without going into the mathematical procedures needed to estimate reliability, the following sections
provide a description of the various techniques commonly used.
- These are: test–retest reliability, equivalent form reliability, interrater reliability, and internal
consistency of measures.
Types of Quantitative Reliability:
1. Test–retest reliability
- To assess test–retest reliability, the same measurement procedure is applied to the same group of
people on two or more occasions.
- The results of a procedure with low test–retest reliability will vary widely.
- One factor that may have influenced the participants between the two testings is the effect of
history.
- A measure with high test-retest reliability will produce very similar results at each testing, whereas a
measure with low test-retest reliability will produce results that vary widely.
- A similar problem may arise due to maturation.
- Another potential problem that must be considered by researchers using the test–retest method of
determining reliability is reactivity.
Problems with test-retest reliability:
- If there are long intervals between testing:
o History
o Maturation
- If there are short intervals between testing:
- Reactivity occurs when exposure to the first testing influences responses to the second testing.
- Counterfactual condition – other conditions in the environment and those that occur in the individual
that may confound the study
- History – occur in the environment
- Maturation – occur in the individual
- Test effect – in cases where participants are tested, the test itself may affect them:
o Bored with the test procedure
o “Practice” and thus improve
o Fatigue
2. Equivalent-form reliability
- This method of assessing reliability (sometimes called parallel-form reliability) is very similar to the
test–retest method but tries to address the problem of reactivity by changing the original test slightly
at the second testing.
- Although slightly different, the second test is equivalent to the first.
Problems with equivalent-forms reliability
- If there are long intervals between testing:
o History
o Maturation
o Can we be sure that the two forms of the measure are equivalent?
o Time consuming and expensive.
3. Inter-rater reliability
- Error in research – rater’s judgment is influenced by extrinsic factors
- Avoid by using more than one rater:
o Their respective scores are compared and the consensus of the raters, measured through
the agreement or concordance of the scores, is called inter-rater reliability.
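One common statistic for quantifying the agreement between two raters (not spelled out in the notes) is Cohen's kappa, which corrects raw agreement for agreement expected by chance; the ratings below are hypothetical.

```python
def cohen_kappa(ratings1, ratings2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    # Observed proportion of agreement.
    p_obs = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Proportion of agreement expected by chance, from each rater's base rates.
    p_exp = sum((ratings1.count(c) / n) * (ratings2.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical yes/no codings of ten interview segments by two raters.
rater1 = ["y", "y", "n", "y", "n", "n", "y", "y", "n", "y"]
rater2 = ["y", "y", "n", "y", "n", "y", "y", "n", "n", "y"]
print(round(cohen_kappa(rater1, rater2), 2))  # 0.58
```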
4. Internal consistency
- The various items of an instrument measure the same construct even though there will be some
variation between item scores.
- A measure of the homogeneity of a measure’s items.
- A measure’s items should all be measuring the same construct and as such there should be a high level
of correspondence between the items.
- However, they should vary in the same direction.
Poor internal consistency:
a. I enjoy watching cricket.
b. I feel sad.
c. I get angry quickly.
d. My income is greater than R2000 per month.
e. I live with my parents.
Good internal consistency
a. I feel sad.
b. I cry often.
c. My appetite is not as good as it used to be.
d. I often feel worthless.
5. Split-halves reliability
- Split-halves reliability is somewhat different to the test–retest and equivalent-forms reliability.
- Rather than testing the consistency of instruments over multiple testings, the split-halves reliability
method is concerned with the internal consistency of instruments.
- A measure is split into halves and the extent of reliability between the halves is examined.
- When there is high correspondence between the halves, a measure demonstrates high split-halves
reliability.
- The problems of history, maturation, reactivity, and equivalent forms do not plague this method.
- Items may be split into a first half and a second half, according to odds and evens, or randomly.
Item analysis
- A more detailed method for estimating the internal consistency of an instrument is found in item
analysis.
- Often the researcher will measure the degree of match between each item and every other item in the
instrument.
- Those items that give results contradictory to the others must be discarded.
- An overall test of internal consistency is also carried out often.
- This is usually done through the use of a statistic called the coefficient of reliability.
- Helps the researcher to identify those items within an instrument which are not providing useful
information
- Following questions illustrate how some items may be useful measures of prejudice and others not:
1. How would you feel about having a woman to report to at work? - more reliable measure of
gender prejudice
2. How would you feel about working with women? – ambiguous about the nature of the
suggested relationship
- A test of internal consistency = coefficient of reliability (Cronbach’s alpha)
o Falls between 0 and 1
o Instrument with no reliability will score 0 and an instrument with very high reliability will score
close to 1
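Cronbach's alpha can be computed from the item variances and the variance of the respondents' total scores. A minimal sketch with hypothetical scores from a four-item scale:

```python
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical scores of five respondents on a four-item scale.
items = [[3, 1, 5, 2, 4],
         [4, 2, 4, 2, 5],
         [3, 1, 5, 3, 4],
         [4, 1, 4, 2, 5]]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.96 -- close to 1, i.e. high internal consistency
```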
Validity
- Although validity and reliability are both important to the evaluation of an instrument, they are actually
entirely different concepts.
- The term validity was used in Chapter 10 where the internal and external validity of research designs
was discussed.
- This chapter is concerned with the validity of data collection procedures and instruments and, in this
context, the term ‘validity’ has a somewhat different meaning.
- Where reliability asked the question ‘how accurate and consistent is this instrument?’, validity asks
questions such as ‘what does this instrument actually measure?’ and ‘what do the results actually
mean?’
Types of quantitative Validity
1. Content validity
- The extent to which an instrument measures all the various components of the variable in question
- E.g., If all the questions in your Psych 243 exam come from Chapter 1, the exam would have low
content validity.
- In many cases, the topics that social scientists are most interested in are very complex and have
many different components.
- In order to measure such complex topics properly, the researcher must find a technique which will
provide some information on all of its different components.
- Unfortunately, ensuring good content validity is usually far more complex than it sounds.
2. Criterion-related validity
- One way to test whether an instrument measures what it is expected to measure is to compare it to
another measure that is known to be valid.
(The extent to which a measure is able to predict scores on another measure (the criterion measure)
that is known to be valid)
- E.g., Scores on a new measure of depression are able to predict scores on the BDI.
- This other measure is then called the criterion measure.
- When the instrument being tested and the criterion measure are administered at the same time,
the term concurrent validity is used.
- When the instrument being tested is used to predict a future criterion, it is called predictive
validity.
3. Construct validity
- Construct validity is the extent to which scores on an instrument reflect the desired construct
rather than some other construct.
- E.g., A measure of depression would have low construct validity if it had items assessing income.
- Unreliable scores cannot have any construct validity because they are mostly due to random error.
- Construct validity is the most important and most often used of the various forms of validity test
discussed in this chapter.
- Construct validity requires a strong link between the measure at hand and the related theory.
The following three steps are necessary in order to establish construct validity.
1. Identify all of the variables that are strongly related to the variable that the test is designed to
measure. This is done on the basis of theory, past research and logical deduction.
2. Measure all the variables involved and determine the relationships between them through the
use of statistical tests.
3. Interpret and explain these relationships and develop an argument to demonstrate the
construct validity (or lack thereof) of the instrument. The more variables, other than the one
under study that can be shown to interrelate meaningfully, the better the construct validity.
4. Convergent validity
- One way to establish construct validity is to determine whether the test has any convergent validity
with other measures.
- The extent to which a measure is positively correlated with another measure that is known to be
valid.
- E.g., A measure of depression that is positively correlated with the BDI displays convergent validity.
- Convergent validity is the relationship between the scale used and other scales that are intended
to measure the same construct.
5. Face validity
- Face validity is somewhat different from the other four forms of validity discussed so far.
- Face validity is concerned with the way the instrument appears to the participant.
- The extent to which an instrument appears to be measuring the construct at hand.
Balancing reliability with validity
- Both reliability and validity are important for every instrument.
- It does not help the researcher to use a highly reliable instrument that has no validity.
IMPORTANT:
- It is possible to have a measure that is reliable but not valid, but it is not possible to have a measure
that is valid but not reliable.
Qualitative equivalents of Reliability and Validity
Qualitative research: Ensuring trustworthiness
Trustworthiness
Obviously, this does not absolve the qualitative researcher of the need to evaluate the quality of the
research, but in qualitative research this is done in terms of how much trust can be given to the research
process and the findings.
Thus we speak of trustworthiness.
The concept of trustworthiness rests on four concepts:
Credibility
- High credibility = studies in which the researcher has convincingly demonstrated the appropriateness
and overall internal logic of the research questions, the study design, the data collection method, and
the approach to data analysis
- Credibility corresponds to the concept of internal validity, since it seeks to convince that the findings
depict the truth of the reality under study, or, in other words, that they make sense.
Dependability
- Dependability as a concept is similar to, but not the same as, reliability.
- Dependability demands that the researcher thoroughly describes and precisely follows a clear and
thoughtful research strategy.
Transferability
- Transferability can be compared to external validity since it refers to the extent to which results apply
to other, similar, situations.
- It requires the researcher to provide detailed descriptions of the context in which the data was
collected, about the researcher as a person, and about her relationships with the participants, etc.
Confirmability
- Confirmability, which is similar to replicability, requires that other researchers or observers be able to
obtain similar findings by following a similar research process in a similar context.
- The researcher is also expected to present a critical evaluation of the methodology used.
- When credibility, dependability, transferability and confirmability are all high we can speak of the
research being highly trustworthy, or having high quality.
Tools for increasing research trustworthiness:
1. Adequate description of context
Because qualitative research emphasises the context in which the study takes place, detailed
descriptions of the researcher, the participants, their relationship, and the context in which they
find themselves are essential.
2. Adequate description of the sample and the sampling procedure
As in quantitative research it is imperative that the researcher adequately describes the sampling
procedure used and the sample that actually participated in the research.
This includes describing the criteria of inclusion and exclusion and the sampling strategy used.
3. Concurrent data collection and analysis
By analysing data as it is being collected, the researcher can refine the data collection approach in
relation to the emerging results.
4. Triangulation
Triangulation is the method most frequently used to verify and increase the trustworthiness of
qualitative research.
It involves combining several different research methodologies to investigate the same
phenomenon.
The purpose of doing this is to show that the results obtained are independent of the methodology
used.
There are several different types of triangulation.
- Methodological triangulation requires that different methods of data collection are used.
o For example, data from in-depth interviews might be compared with data from direct
observation.
- Theoretical triangulation implies the use of different theoretical perspectives in the
interpretation of data.
o For example, how do the results of a thematic content analysis differ from the results of
a narrative analysis.
- Data triangulation is used when the researcher applies the same method of data collection and
analysis to different research participants.
o For example, the researcher might collect most of the data from university students but
show that the results are transferable to young people in the working world by
collecting data from a small group of young working people and showing that the
pattern of findings is essentially the same.
- Finally, using a variety of researchers leads to investigator triangulation, which brings great
diversity in the gathering as well as the interpreting of data.
5. Methodological verification:
This is the process of having other experienced researchers verifying the logic and implementation
of each step of the methodology.
Because qualitative research is more flexible, it often happens that important changes take place
after the study has begun.
Having other experienced researchers verifying the logic and implementation of each step of the
methodology
6. Ensuring data saturation:
While qualitative research is not concerned with representative samples in the statistical sense, the
researcher must be able to show that enough data has been collected to reflect the full range and
depth of the topic of the research.
7. Respondent validation:
Respondent validation (sometimes called member checking or informant feedback) is a process
whereby the researcher presents the results of a study to the people who provided the original
data and asks for their feedback.
8. Use of sufficient verbatim quotations:
By including many direct quotations from the original data in research reports, the researcher
allows the reader to hear exactly what respondents said and how the researcher interpreted that
information.
Scientific rigour in mixed-methods research
- The mixed-methods researcher employs both quantitative and qualitative research methods in the
same study.
- As such, methodological triangulation is built into every mixed-method study by definition.
- This is an important strength of mixed-methods research.
- Might combine:
o Structured questionnaires (quantitative) and open ended interviews (qualitative);
o Surveys (quantitative) and case studies (qualitative); or
o Questionnaires (quantitative) and observations (qualitative)
- The essential advantage of the mixed-method here is the enhancement of credibility of the research
where convergence of the findings is evident.
- Because a mixed-method study employs both quantitative and qualitative research methods,
methodological triangulation is built in.
- Required to demonstrate the rigour of both the quantitative and qualitative aspects of the research
- Essential advantage is the enhancement of credibility of the research where convergence of the
findings is evident
- Able to bring the strengths and advantages of both the rigour of quantitative research and the
exploratory power of qualitative research to the project at hand.
- In conclusion, when investigating and aiming at understanding reality, the necessity of ensuring the
quality of data often results in the researcher adopting a mixed-method type of research.
Key points
1. Due to the nature and aims of quantitative and qualitative research, the definitions, methods and
instruments related to the quality of the data are different.
2. Whereas in quantitative research the establishment and measurement of the reliability and validity
of measurement occur at an early stage, ensuring the trustworthiness and quality of the data is an
on-going process when using qualitative methodology.
3. In both cases many criteria and instruments are available to ensure quality as well as to identify
dangers and pitfalls.
4. One powerful method of increasing the quality of data that can be utilised within both quantitative
research and qualitative research, as well as when using a combination of the two methods, is
called triangulation. It consists of studying a phenomenon by adopting different angles or
perspectives and applying various methods of investigation to the process.
5. One of the major differences between quantitative and qualitative research is the role of the
researcher: in the former, objectivity and detachment are essential since results must be
replicable, that is, independent of the researcher; in the latter, the researcher is 'the
instrument', and the depth and quality of the findings depend on the skills and sensitivity of the
interviewer.
6. Moreover, whereas in quantitative research it is possible to assess the quality of the entire process,
and in particular that of the instrument to be used for data collection, at the beginning of the study,
the pursuit of quality in qualitative research is a continuous process.
7. In quantitative research, the several types of reliability include test–retest reliability, equivalent
form reliability, inter-rater reliability and internal consistency. The several types of validity
considered are content validity, criterion-related validity, construct validity, convergent validity and
face validity. Balancing reliability with validity is essential.
8. In qualitative research, trustworthiness is based on credibility, dependability, transferability and
confirmability, and there are specific tools to ensure the trustworthiness of research.
9. In mixed-methods research, the methods and instruments of both quantitative and qualitative
research are used.
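Key point 7's reliability indices can be made concrete with a short sketch (not from the text; the data below is hypothetical): test-retest reliability is typically computed as a Pearson correlation between scores on two administrations of the same instrument, and internal consistency is commonly reported as Cronbach's alpha.

```python
# Sketch of two reliability indices: test-retest reliability (Pearson r between
# two administrations) and internal consistency (Cronbach's alpha).
from statistics import pvariance, mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pvariance(x) ** 0.5 * pvariance(y) ** 0.5)

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])                      # number of items on the scale
    items = list(zip(*scores))              # transpose to items-by-respondents
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: five respondents answering a three-item scale twice.
time1 = [13, 7, 14, 8, 4]                   # total scores, first administration
time2 = [12, 8, 14, 7, 5]                   # total scores, second administration
print(round(pearson_r(time1, time2), 2))    # test-retest reliability

matrix = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2], [1, 2, 1]]
print(round(cronbach_alpha(matrix), 2))     # internal consistency
```

Both values here come out close to 1, indicating a highly reliable (if artificial) scale; in practice, values above roughly 0.7 are usually considered acceptable.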
Chapter 16 – Research findings and dissemination
Interpreting the findings: Detection of possible errors
- In quantitative studies, research results typically comprise tables of descriptive statistics representing
the sample, and the results of inferential statistics demonstrating differences between different parts
of the data.
- Qualitative data typically comprises descriptions of key themes and accounts of differences between
those themes or between groups of respondents.
- Mixed methods include a combination of the two.
Types of error and mistakes
Many sources of error exist. First of all, there are measurement and classification errors.
Measurement errors
- Refers to data that is wrong or inaccurate, whether it pertains to the income of families, the age of
children or respondents’ true feelings about a particular political candidate.
- Such errors occur when the research design or data collection does not produce an accurate
measurement or description of the phenomenon being investigated.
- Similarly a qualitative interview conducted in the respondents’ second language might result in
them not being able to express certain concepts, thereby changing the data they might otherwise
have provided.
Classification errors
- When data is wrongly identified and put in an inappropriate class.
- E.g. poor coding in qualitative research.
- Measurement and classification errors can be reduced by:
o Careful & precise handling of the data
o Asking more precise questions
- A second group of errors consists of constant and random errors.
Constant errors
- Systematic, repeated errors throughout the research which can introduce significant biases.
- An evaluator might systematically underestimate the number of people attending events, or an
interviewer might repeatedly avoid a particular aspect of the interview schedule.
Random error
- Occurs on some occasions but not on others; it is thus non-systematic and unpredictable.
- Random errors are not considered to be very serious, while constant errors do create serious bias
and must therefore be identified & eliminated
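The distinction can be illustrated with a small simulation (a sketch, not from the text): a constant error shifts every measurement by the same amount and so biases the estimate, while purely random errors tend to cancel out as the number of observations grows.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
true_value = 50.0
n = 10_000

# Random error only: each measurement is off by a different, unsystematic amount.
random_only = [true_value + random.gauss(0, 5) for _ in range(n)]

# Constant error: e.g. a mis-calibrated instrument that always reads 3 units
# too low, on top of the same random noise.
with_constant = [true_value - 3 + random.gauss(0, 5) for _ in range(n)]

mean_random = sum(random_only) / n
mean_constant = sum(with_constant) / n

# Random error averages out, so the first estimate stays close to the true
# value; the constant error survives averaging and biases the second estimate
# by roughly the constant shift.
print(f"bias with random error only: {mean_random - true_value:+.2f}")
print(f"bias with constant error:    {mean_constant - true_value:+.2f}")
```

This is why random errors are considered less serious: collecting more data dilutes them, whereas a constant error persists no matter how large the sample becomes.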
- Finally, we must also distinguish between errors and mistakes.
- Although considered as equivalents in everyday language, these two words have different
meanings in research.
Errors
- Introduce bias and inaccuracies in measurement, but their sources can be detected, their
seriousness evaluated and their effects ameliorated.
- Example:
o Biased formulation of research problem
o Difficulties in the way the researcher approaches the respondents
o When analysis is skewed by the researcher’s own assumptions & opinions
Mistakes or blunders
- Generally neither predictable, nor detectable in a way which permits their systematic analysis and
correction.
- Since these are very often related to the inexperience or incompetence of the researcher, they can
and should be avoided through sound training and support from more experienced researchers.
Sources of error
- Errors can be introduced at all levels of the research process.
- Thus, it suffices to analyse each step in order to identify possible sources of bias.
- The list presented below is not exhaustive, and is in many aspects a summary of what has been
presented in previous chapters.
1. Vagueness of definitions and inaccuracy of hypotheses or research questions.
If the research question or problem is vaguely stated, this may introduce a situation where the
researcher conducts a study to confirm his own biases.
Lack of adequate operational definitions can lead to an inaccurate description of the population and
of the types of information to be collected.
2. Inadequacy of design and planning of research.
This source of error arises from a failure to identify all important variables, from having too many
uncontrolled variables, or from having too many sources of error that escape detection.
3. Sampling errors and other errors.
In quantitative research, sampling errors affect the representativeness of the sample.
A discrepancy may exist between the actual population from which the sample was drawn and the
target population to which inferences are to be applied.
In qualitative research, sampling errors may also affect the representativeness of the sample.
If the principle of saturation is ignored, the researcher may fail to collect information about some
aspects of the phenomenon.
4. Imperfection in the research instrument.
In experiments, laboratory equipment may be deficient or not properly adjusted to particular
conditions.
In the case of interviews and questionnaires, many more errors are often introduced relating
to the length of the questionnaire, the venue, the order of presentation of the questions, the type
of questions, and the content.
Data collected under unfavourable conditions
Interview guides that bear little resemblance to the research question
5. Bias.
Bias is the systematic introduction of extraneous variables that may distort or disguise the
relationships among the experimental variables.
Someone reads a book written in Spanish.
This person has a fair knowledge of Spanish but may struggle to understand the exact meaning of
some words.
She can prevent the errors of misinterpretation by consulting a dictionary, because she is aware of
specific problematic words.
She might also prefer to read an English translation of that book if English is her mother tongue.
However, in that case, she would be at the mercy of any bias introduced by the translator, who
would have had his own understanding of the original text.
There are several sources of bias.
1. Interviewer bias
The interviewer can affect the answers of the respondents through the way in which he
relates to them (e.g.: being too lenient, aggressive or impatient)
2. Respondent bias
Bias introduced by unresponsive participants, or uncooperative ones who only answer at
random, as well as by respondents who purposefully give false information because of
mistrust, fear, desire to conform or because of social pressures
3. Analyst bias
Errors introduced at the level of processing, coding & analysing the data
4. Researcher bias
Researchers can never be completely objective, particularly when dealing with social
phenomena
An exuberant researcher may subtly (or not) promote the effectiveness of one intervention
over others
Some solutions to the problems of bias include:
- Make sure that all research staff are aware of, and trained, in the issues of bias.
- Where possible in quantitative research, keep the research staff who are responsible for
data collection unaware (that is, blind) of the research hypotheses.
- Where possible in experimental research, keep the research staff who are responsible for
data collection unaware (that is, blind) of the identity of each group (experimental,
comparison or control).
- Design your data collection tools carefully, using simple and accessible language, avoiding
double-barrelled questions, etc.
- Test data collection tools in a pilot study.
- Provide adequate training to research assistants responsible for collecting, entering and
coding data.
- Keep issues of bias in mind at all times when conducting research.
6. Poor research procedures.
- Procedural imprecision – researcher treats respondents differently because there are no
specific instructions
- Researcher fatigue – participants may feel that the researcher is not interested in them or
the researcher behaves differently from one participant to the next
- Procedural variation – research results may have been influenced by factors related to the research
procedure; this is a serious threat to internal validity
- There are many procedures involved in the conduct
of any research study: recruiting participants, greeting participants, obtaining informed
consent, providing a rationale for the study, allocating respondents to groups, giving
instructions, administering interventions, recording data, reminding respondents about
returning questionnaires, interviewing respondents, entering data into computers, cleaning
and coding data, and so on.
Some solutions to the problems of procedural imprecision
- The most important remedy for procedural imprecision is to make every procedure explicit in the
finest detail.
- The more complex a study, the more procedures there are, and the more important procedural
precision becomes.
- Interventions should be carefully standardised
- NB: researcher should train every individual involved in research in the implementation of those
procedures
- Researcher should stay close to all research activities
- The researcher should be able to explain any inconsistencies in the research results & between the
research results and findings from other studies
Drawing conclusions
- After interpreting the findings it is useful to summarise the aims of the research and to compare them
with the findings, thereby drawing conclusions as to how much and in what manner the aims of the
research have been achieved.
- In quantitative research the researcher should pay attention to the extent to which hypotheses have
been confirmed, and whether it has been possible to infer causal or correlational relationships
between variables.
- In qualitative research the researcher should pay attention to the extent to which new information
about the research question has been gathered and presented, and also examine possible causal or
correlational relationships between key concepts.
Generalising research findings
- Estimate how far findings can be generalised to a larger population
- Suggest modifications that should be taken into consideration when making generalisations
- E.g., estimate how far the results obtained on a target population of university students could be
generalised to the broader population of students in tertiary educational institutions
- Compare the findings from the present research with those of investigators considered in the
literature review
- This helps to discover important flaws, misinterpretations, and relevant variables that have been
overlooked by either the present or previous researchers
- Unexpected similarities might also help to uncover an important common factor
- Opposite findings lead to deeper analysis
- Integrate final conclusions within a theoretical framework
- Even more important is the integration of research results with previous research findings and with the
theoretical framework used in the research.
Suggestions and recommendations
- Research is mainly relevant due to its implications for the improvement of the human condition.
- As such, the practical aspects of the findings must be analysed as well.
- On the basis of the results, suggestions may be made for further investigations to clarify certain
aspects, to generalise some findings, or to check the importance of some variables and, perhaps, to
avoid various pitfalls.
- Mention the possible application of the results to other fields
- Recommendations should be made on how programs & policies might be altered
- Action research –
o Recommendations & suggestions are central to the research process
o Concrete plans for the implementation of suggestions should be formulated by the participants
o These become the actions of the action research
Ways of disseminating findings
- The dissemination of research findings takes many forms, depending on the researcher’s aims and the
intended audience.
- Several dissemination platforms:
1. Reports at research sites and to research participants
Much of the research discussed in this book involves applied and community-based studies where
the community itself is the most important consumer.
Present research to community representatives & participants in a way that is easily accessible &
stimulates questions and discussion.
2. Presentations to close colleagues
It is sometimes very helpful for inexperienced researchers to begin the process of dissemination by
presenting their results to a group of colleagues or peer researchers.
3. Organisational or departmental seminars
More formal seminars within one’s department or organisation can serve the same process.
Researchers can practise presenting research in a clear & concise manner, and
discussions with colleagues often lead to ideas being refined & elaborated
4. Professional conferences
Professional conferences provide another platform at which researchers can present their work.
Professional associations and conferences differ greatly in their aims and the kind of presentations
that are expected.
5. Reports and presentations to clients
Much research is contracted and paid for by governments, private companies, and both local and
international non-governmental organisations and funding agencies.
Very often researchers are called upon to present the results of the work to their clients.
Emphasis is on the results & implications, with a short summary of the method
6. Research in the mass media
One of the more difficult challenges for a scientist is to present research through the mass media in
a responsible way.
Newspapers & magazines want short explanations of findings and often tend to over-interpret or
sensationalise results.
7. Scientific journals
Publication of research in a peer-reviewed scientific journal is the most demanding of the various
platforms available to the researcher.
This is because the process of peer review, by which the scientific community ensures a high
minimum standard of research, can be extremely challenging.
Peer review is conducted anonymously, and articles submitted for review are typically reviewed
independently by at least two suitably experienced and knowledgeable reviewers.
Organisation of a research report
The most detailed, complete and scientific report for research-funding institutions and archives should
present all the different steps of the research in detail, but it is also possible to have less complete reports
for specific purposes.
A report written for publication in a scientific journal will have to show a high level of scientific quality
condensed into a few pages; one written for an agency particularly interested in conclusions and practical
consequences might cut short the technical aspects of the research and emphasise the discussion of
findings and their implications; while one that needs to be understood by the average educated readership
of a magazine may present findings in more general terms, avoiding scientific vocabulary.
1. Title
o The title of the study or article should describe the essence of the research.
2. Abstract or executive summary
o Abstracts are required for journal articles and also for some other types of reports.
o Abstracts are usually between 100 and 200 words long, briefly describing the four main
sections, namely: research question, method, results, and conclusions.
o Emphasis should be placed on the results and their significance, but information on the design
and the characteristics and size of the sample is also important.
3. Acknowledgements
o It is appropriate (and an ethical requirement for researchers) to acknowledge the people and
organisations who have contributed in different ways to the research.
o Contributors might include funding agencies, other experienced researchers who have given
advice and support, academic institutions, government departments, private companies and
civil society organisations that supported the research in different ways.
4. Introduction
o Identification of the research question. The question to be dealt with is introduced and the area
within which it is situated is identified.
o Significance of the study. The relevance of the problem is highlighted in terms of both its
theoretical and practical implications.
5. Literature review
o Literature review. Only the relevant articles are cited and commented upon.
6. Method
1. Statement of the problem. The statement of the problem refers to the first two points referred
to in the introduction, but contains a more precise approach to the issue and clarifies the
purpose of the study.
2. Statement and rationale of the hypotheses or research question. In quantitative research the
hypotheses should be clearly and concisely stated in terms of the conceptual definitions of the
variables.
Qualitative research – more particular research questions are often included at this point
3. Operational definitions of the variables and hypotheses. Of greater importance to quantitative
research is the question of providing operational definitions of all variables and, where
necessary, restating the hypotheses in these terms.
4. Research type and design. In this subsection the researcher must describe the kind of research
that is being written about and the research design being used.
5. The sample. Here all issues and information concerning the research participants are examined.
6. Data collection instruments. Any instruments or systems for collecting data should be described
in this section.
How instruments were developed or selected, whether or not they have been previously tested
with the population in question, information about reliability & validity
7. Procedures. The researcher should explain in a step-by-step fashion all the steps of the
research.
8. Data analysis. The process of data analysis must be described in detail. These processes will
depend on the nature of the research question and the type of data.
9. Ethical provisions. The final subsection of the methods part of a research report should explain
the ethical provisions of the research method, as well as the process whereby the researcher
gained ethical approval for the project from an independent body.
7. Results
o Typically, this section involves two steps.
1. Summary of the findings. The main findings following from the data analysis are presented
here.
Quantitative research - main findings are presented in tables, graphs & diagrams is used to
help the reader understand the data
Qualitative research – summary of results should include descriptions of the key themes
illustrated with verbatim quotations from respondents
2. Analysis of the hypotheses or research question.
Quantitative research: each hypothesis is analysed in the light of the research results &
decisions to reject or retain each hypothesis are made
Qualitative research questions are discussed in the light of qualitative findings
8. Discussion and conclusions
o This section is in many ways the most important part of the report where the researcher
provides:
1. an interpretation of the findings
2. a discussion of the significance of the findings for knowledge of the research topic
3. a discussion of the significance of the findings for the betterment of society
4. a discussion of the limitations and weaknesses of the current study
5. recommendations for policy and practice arising from the study findings
6. recommendations for further research in the field.
o Metaphorically, research can be compared to a relay race where each participating athlete is
expected to pass the baton to the next member of the team in order to complete the race.
9. References and bibliography
o A reference list is a list of all the materials that have been cited in the report or paper.
o A bibliography refers to all the works or sources that have been consulted during the research,
some of which have been cited in the text.
o Typically this includes the name of the author(s), the year of publication, the title of the book or
article in italics, the place of publication, the publisher’s name and the relevant page numbers.
Examples are:
§ Mpofu, E. (Ed.) 2011. Counseling people of African Ancestry, New York, NY: Cambridge
University Press.
§ Creswell, J W. 2010. Mapping the developing landscape of mixed methods research, in
A. Tashakkori and C. Teddlie (eds). SAGE Handbook of Mixed Methods in Social and
Behavioral Research, 2nd ed. Thousand Oaks: Sage.
o Journal articles are referenced by indicating the name of the journal in italics after the title of
the article, as well as by indicating the volume and the number of the issue before the page
number:
§ Sideris, T. 2003. War, gender and culture: Mozambican women refugees. Social Science
and Medicine, 56 (4), 713–724.
10. Appendices
o Appendices usually contain summarised data and the results of the data analysis, both often
presented as tables.
o It is essential to give a number and title to each appendix in line with how they are referred to
in the text.
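The journal-article pattern shown above (author, year, title, journal name, volume, issue, pages) can be sketched as a small formatting helper. The function and field names below are hypothetical illustrations, not part of any referencing standard, and plain strings cannot render the italics required for journal titles.

```python
# A hypothetical helper that assembles a journal-article reference in the
# author-date style used in the examples above.

def format_journal_reference(authors, year, title, journal, volume, issue, pages):
    """Return a reference string in the pattern:
    Author(s) Year. Title. Journal, Volume (Issue), Pages."""
    return f"{authors} {year}. {title}. {journal}, {volume} ({issue}), {pages}."

ref = format_journal_reference(
    authors="Sideris, T.",
    year=2003,
    title="War, gender and culture: Mozambican women refugees",
    journal="Social Science and Medicine",
    volume=56,
    issue=4,
    pages="713-724",
)
print(ref)
```

Keeping the reference fields structured like this, rather than as free text, makes it easy to re-emit the same list in whatever style a particular journal or institution requires.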
Guidelines for writing
- Scientific writing, much like creative writing, is an idiosyncratic exercise. Each person presents
information and data according to their own style.
- However, academic writing is governed by more guidelines, conventions and rules.
- Know your audience. The results of research reports are read by many different kinds of people. It is
important that you keep in mind a clear picture of your audience when you are writing.
- In scientific writing, institutions and journals often have their own style and formatting guides, and
these are almost always found online.
- Know your theme. Too often novice researchers begin writing before they know exactly what it is that
they want to say.
- Use simple language. Modern scientific writing emphasises the importance of clarity.
- Some simple grammatical and stylistic rules to keep in mind are:
o Write as much as possible in the active voice.
o Write as much as possible in the first person.
o Write mainly in the past tense.
o Avoid being verbose.
o Use simple words.
o Avoid any kind of discriminatory or pejorative language.
o Do not plagiarise.
o Edit and check your work.
Key points
1. The interpretation of results is the culmination of the research process and must be done
thoroughly.
2. Researchers should be mindful of all possible sources of bias and error that might have influenced
the results.
3. Dissemination of findings may take many forms, each of which makes different demands on the
researcher in terms of style, content and scientific rigour in presentation.
4. The researcher must always pay careful attention to the particular characteristics of the target
audience.
5. Many institutions and publications have very particular style and format requirements to which the
researcher must adhere.
6. Scientific writing should be clear, concise, detailed and respectful.
7. Scientific writing is a skill that can only be mastered with time and practice.