METHODS OF DATA COLLECTION

 Which data to collect?
 How to collect the data?
 Who will collect the data? And
 When to collect the data?
FOCUS GROUP DISCUSSION (FGD)
 It is a group discussion conducted with the participation of 7 to 12 people to capture their experiences and views regarding specific issues closely related to the research question.

 The FGD data collection method is most suitable for studies where multiple perspectives need to be obtained on the same problem.

 FGDs are led by a moderator who is responsible for ensuring that the group discussion remains focused on the research area.

 Advantages of FGD include the possibility of obtaining primary data through non-verbal as well as verbal channels, and of approaching the research area from various perspectives.
FOCUS GROUP CONTD.
 The disadvantages include:
Group discussions may be heavily influenced by one or two dominant individuals in the group, and some members of the focus group may be discouraged from participating in discussions due to a lack of confidence or poor communication skills.
Moreover, the primary data obtained through FGD are greatly influenced by environmental factors such as the design of the room, room temperature, time of day, etc.
 It is important to understand that data collection and data analysis through FGD are much more difficult compared to questionnaires.
CASE STUDIES
 A case study aims to analyze specific issues within the boundaries of a specific environment, situation or organization.

 The case study is a popular research method in education and business.

 According to its design, the case study research method can be divided into three categories:
 Explanatory,
 Descriptive and
 Exploratory.
CASE STUDIES CONTD.
 Explanatory case studies aim to answer ‘how’ or ‘why’ questions with little control on behalf of the researcher over the occurrence of events.

 This type of case study focuses on phenomena within the contexts of real-life situations. E.g. “An investigation into the reasons of the global financial and economic crisis of 2007 – 2009”.

 Descriptive case studies aim to analyze the sequence of interpersonal events after a certain amount of time has passed.

 They usually describe a culture or sub-culture and attempt to discover the key phenomena.
 E.g. “Impact of increasing levels of multiculturalism on marketing practices: A case study of Jimma, Ethiopia”.
CASE STUDIES CONTD.
 Exploratory case studies aim to find answers to the questions of ‘what’ or ‘who’.

 This method is often accompanied by additional data collection method(s) such as interviews, questionnaires, experiments, etc.
 E.g. “A study into differences of leadership practices between private and public sector organizations in Atlanta, USA”.
ADVANTAGES AND DISADVANTAGES OF CASE STUDY
 Advantages of the case study method include data collection and analysis within the context of the phenomenon, the integration of qualitative and quantitative data in data analysis, and the ability to capture the complexities of real-life situations so that the phenomenon can be studied in greater depth.

 Case studies do have certain disadvantages, which may include a lack of rigor, challenges associated with data analysis, and very little basis for generalization of findings and conclusions.
RELIABILITY
 Refers to “whether or not you get the same answer by using an instrument to measure something more than once” (Bernard, 2011, p.42).

 In simple terms, research reliability is the degree to which a research method produces stable and consistent results.

 A specific measure is considered to be reliable if its application to the same object of measurement a number of times produces the same results.

 Research reliability can be divided into four categories:
1. Test-retest reliability
2. Parallel forms reliability
3. Inter-rater reliability
4. Internal consistency reliability
1. TEST-RETEST RELIABILITY
 It relates to the measure of reliability obtained by conducting the same test more than once over a period of time with the participation of the same sample group.

 Example: Employees of ABC Company may be asked to complete the same questionnaire about employee job satisfaction twice with an interval of one week, so that test results can be compared to assess the stability of scores.
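 In practice, test-retest reliability is usually summarised as the correlation between the two sets of scores. The sketch below assumes hypothetical satisfaction scores (1–5 scale) for ten employees measured one week apart; the figures and the 0.7 rule of thumb are illustrative, not taken from the example above.

```python
# A minimal sketch of test-retest reliability on hypothetical data.
from statistics import correlation  # Python 3.10+

week_1 = [4, 3, 5, 2, 4, 3, 5, 1, 4, 2]  # first administration (1-5 scale)
week_2 = [4, 3, 4, 2, 5, 3, 5, 2, 4, 2]  # same questionnaire, one week later

# Pearson's r between the two administrations estimates score stability.
r = correlation(week_1, week_2)
print(f"Test-retest reliability (Pearson's r): {r:.2f}")

# A common, context-dependent convention treats r >= 0.7 as acceptable
# stability for attitude scales.
print("Scores look stable" if r >= 0.7 else "Scores drifted between administrations")
```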
2. PARALLEL FORMS RELIABILITY
 It relates to a measure that is obtained by conducting an assessment of the same phenomenon with the participation of the same sample group via more than one assessment method.

 Example: The levels of employee satisfaction at ABC Company may be assessed with questionnaires, in-depth interviews and focus groups, and the results can be compared.
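 Parallel forms reliability is likewise typically summarised as a correlation between the two instruments. The sketch below assumes hypothetical questionnaire scores and interview ratings for the same eight employees; because the two forms use different scales, it correlates ranks (Spearman's rho). All names and figures are illustrative.

```python
# A minimal sketch of parallel forms reliability on hypothetical data.
from statistics import correlation  # Python 3.10+

questionnaire = [72, 85, 60, 90, 55, 78, 66, 81]                # form A (0-100)
interview     = [7.0, 8.0, 6.5, 9.0, 5.5, 7.5, 6.0, 8.5]        # form B (1-10)

def ranks(values):
    """Rank values from 1..n (ties ignored in this illustration)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Correlating the ranks of the two forms gives Spearman's rho,
# which is insensitive to the different measurement scales.
rho = correlation(ranks(questionnaire), ranks(interview))
print(f"Parallel forms reliability (Spearman's rho): {rho:.2f}")
```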
3. INTER-RATER RELIABILITY
 As the name indicates, it relates to the measure of sets of results obtained by different assessors using the same methods. The benefits and importance of assessing inter-rater reliability can be explained by referring to the subjectivity of assessments.

 Example: Levels of employee motivation at ABC Company can be assessed using the observation method by two different assessors, and inter-rater reliability relates to the extent of difference between the two assessments.
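 One common way to quantify inter-rater reliability for categorical judgements is Cohen's kappa, which adjusts raw agreement for the agreement expected by chance. Kappa is not named in the slides; the sketch below assumes two hypothetical observers classifying ten employees' motivation, with purely illustrative data.

```python
# A minimal sketch of inter-rater reliability via raw agreement and Cohen's kappa.
from collections import Counter

rater_a = ["high", "medium", "high", "low", "medium", "high", "low", "medium", "high", "low"]
rater_b = ["high", "medium", "medium", "low", "medium", "high", "low", "high", "high", "low"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Agreement expected by chance, from each rater's category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"Raw agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```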
4. INTERNAL CONSISTENCY RELIABILITY
 It is applied to assess the extent to which test items that explore the same construct produce similar results.
 It can be represented in two main formats:
a) Average inter-item correlation is a specific form of internal consistency obtained by correlating each pair of test items that measure the same construct and averaging these correlations.
b) Split-half reliability, another type of internal consistency reliability, involves splitting all items of a test into two halves and correlating the scores on the two halves.
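 The sketch below illustrates both formats on a small hypothetical four-item scale, and also reports Cronbach's alpha, the most widely used internal consistency index (not named above, but closely related to both formats). The responses are illustrative.

```python
# A minimal sketch of internal consistency reliability on hypothetical data.
from itertools import combinations
from statistics import correlation, variance  # Python 3.10+

# rows = respondents, columns = items of one construct (responses 1-5)
items = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
    [4, 5, 4, 4],
]
cols = list(zip(*items))   # item-wise columns
k = len(cols)

# (a) Average inter-item correlation: mean Pearson r over all item pairs.
pair_rs = [correlation(cols[i], cols[j]) for i, j in combinations(range(k), 2)]
avg_r = sum(pair_rs) / len(pair_rs)

# (b) Split-half reliability: correlate the two half-test scores, then apply
# the Spearman-Brown correction to estimate full-length reliability.
half_1 = [sum(row[: k // 2]) for row in items]
half_2 = [sum(row[k // 2 :]) for row in items]
r_half = correlation(half_1, half_2)
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha as a single summary of internal consistency.
total = [sum(row) for row in items]
alpha = (k / (k - 1)) * (1 - sum(variance(c) for c in cols) / variance(total))

print(f"Average inter-item r: {avg_r:.2f}")
print(f"Split-half (Spearman-Brown): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")
```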
VALIDITY

 Research validity in surveys relates to the extent to which the survey measures the right elements that need to be measured.

 In simple terms, validity refers to how well an instrument measures what it is intended to measure.

 Reliability alone is not enough; measures need to be reliable as well as valid.
 E.g. if a weight measuring scale is wrong by 4 kg (it deducts 4 kg from the actual weight), it can be described as reliable, because the scale displays the same weight every time we measure a specific item. However, the scale is not valid because it does not display the actual weight of the item.
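 The scale example can be made concrete with a few lines of code: the readings never vary (reliable) yet are always 4 kg below the truth (not valid). The true weight used here is an assumption for illustration.

```python
# A minimal sketch of a measure that is reliable but not valid.
true_weight = 70.0                                  # kg, hypothetical item
readings = [true_weight - 4.0 for _ in range(5)]    # scale deducts 4 kg every time

spread = max(readings) - min(readings)  # 0.0 -> perfectly consistent (reliable)
bias = readings[0] - true_weight        # -4.0 -> systematically wrong (not valid)

print(f"Readings: {readings}")
print(f"Reliable (spread = {spread} kg) but not valid (bias = {bias} kg)")
```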
TYPES OF VALIDITY
 Research validity can be divided into two groups: internal and external.

 It can be specified that “internal validity refers to how the research findings match reality, while external validity refers to the extent to which the research findings can be replicated to other environments” (Pelissier, 2008, p.12).

 Moreover, validity can also be divided into five types:
1. Face Validity,
2. Construct Validity,
3. Criterion-Related Validity,
4. Formative Validity and
5. Sampling Validity
1. FACE VALIDITY
 Is the most basic type of validity and is associated with the highest level of subjectivity because it is not based on any scientific approach.

 In other words, in this case a test may be specified as valid by a researcher because it seems valid, without an in-depth scientific justification.

 Example: a questionnaire designed for a study that analyses the issues of employee performance can be assessed as valid because each individual question may seem to be addressing specific and relevant aspects of employee performance.
2. CONSTRUCT VALIDITY
 Relates to the assessment of the suitability of a measurement tool to measure the phenomenon being studied.

 Application of construct validity can be effectively facilitated with the involvement of a panel of ‘experts’ closely familiar with the measure and the phenomenon.

 Example: with the application of construct validity, the levels of leadership competency in any given organisation can be assessed by devising a questionnaire to be answered by operational-level employees and asking questions about the levels of their motivation to do their duties on a daily basis.
3. CRITERION-RELATED VALIDITY
 Involves the comparison of test results with an outcome.

 This specific type of validity correlates the results of one assessment with another criterion of assessment.

 Example: the nature of customer perception of the brand image of a specific company can be assessed by organising a focus group. The same issue can also be assessed by devising a questionnaire to be answered by current and potential customers of the brand. The higher the level of correlation between the focus group and questionnaire findings, the higher the level of criterion-related validity.
4. FORMATIVE VALIDITY
 Refers to the assessment of the effectiveness of the measure in terms of providing information that can be used to improve specific aspects of the phenomenon.

 Example: when developing initiatives to increase the effectiveness of organisational culture, if the measure is able to identify specific weaknesses of organisational culture, such as employee-manager communication barriers, then the level of formative validity of the measure can be assessed as adequate.
5. SAMPLING VALIDITY
 (Similar to content validity) ensures that the measure covers a broad area of the research area.

 No measure is able to cover all items and elements within the phenomenon; therefore, important items and elements are selected using a specific sampling method, depending on the aims and objectives of the study.

 Example: when assessing the leadership style exercised in a specific organisation, an assessment of decision-making style alone would not suffice; other issues related to leadership style, such as organisational culture, the personality of leaders, the nature of the industry, etc., need to be taken into account as well.
