PART FOUR – Staffing Activities: Selection

Chapter 7:
Measurement
Chapter Outline

1. Importance and Use of Measures
2. Key Concepts
   ◦ Measurement
   ◦ Scores
   ◦ Correlation Between Scores
3. Quality of Measures
   ◦ Reliability of Measures
   ◦ Validity of Measures
   ◦ Validation of Measures in Staffing
   ◦ Validity Generalization
   ◦ Staffing Metrics and Benchmarks
4. Collection of Assessment Data
   ◦ Testing Procedures
   ◦ Acquisition of Tests and Test Manuals
   ◦ Professional Standards
LEARNING OBJECTIVES!
What are your expected outcomes?
◦ Define measurement and understand its use and importance in staffing decisions
◦ Understand the concept of reliability and review the different ways reliability of
measures can be assessed
◦ Define validity and consider the relationship between reliability and validity
◦ Compare and contrast the two types of validation studies typically conducted
◦ Consider how validity generalization affects and informs validation of measures
in staffing
◦ Review the primary ways assessment data can be collected
Importance and Use of Measures

Definition
• Process used to gather and express information about people and jobs in numerical form.

Examples
• Tests of applicant KSAOs.
• Job performance ratings of employees.
• Applicants’ ratings of their preferences for various types of job rewards.

Measurement
• The process of assigning numbers to objects to represent quantities of an attribute of the objects.
Use of Measures in Staffing
Standardization

◦ Involves
  ◦ Controlling the influence of extraneous factors on scores generated by a measure
  ◦ Ensuring the scores obtained reflect the attribute measured
◦ Properties of a standardized measure
  ◦ Content is identical for all objects measured
  ◦ Administration of the measure is identical for all objects
  ◦ Rules for assigning numbers are clearly specified and agreed on in advance
LEVELS OF MEASUREMENT

» Nominal – a given attribute is categorized and numbers are assigned to the categories; no order or level is implied among the categories
» Ordinal – objects are rank-ordered according to how much of the attribute they possess; represents relative differences among objects
» Interval – objects are rank-ordered, and differences between adjacent points on the measurement scale are equal in terms of the attribute being measured
» Ratio – similar to interval scales (equal differences between scale points on the attribute), but with a logical or absolute zero point
Differences in Objective and Subjective Measures

◦ Objective measures
  ◦ Rules used to assign numbers to the attribute are predetermined, communicated, and applied through a system
◦ Subjective measures
  ◦ The scoring system is more elusive, often involving a rater who assigns the numbers
◦ Research shows the two may not be strongly related, but purely objective measures can miss important parts of job performance
SCORES
◦ Definition
  ◦ Measures provide scores to represent the amount of the attribute being assessed
  ◦ Scores are the numerical indicator of the attribute
◦ Central tendency and variability
◦ Percentiles
  ◦ Percentage of people scoring below an individual in a distribution of scores
◦ Standard scores
  ◦ Express a raw score in terms of standard deviation units from the mean (e.g., z-scores)
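A rough numerical illustration (hypothetical scores, not from the text) of how a percentile rank and a standard (z) score would be computed in Python:

# Minimal sketch (hypothetical data): percentile rank and standard score
# for one applicant relative to a distribution of applicant scores.
scores = [55, 62, 68, 71, 74, 77, 80, 83, 88, 92]   # score distribution
applicant = 80

# Percentile: percentage of people scoring below this applicant.
percentile = 100 * sum(s < applicant for s in scores) / len(scores)

# Standard score: distance from the mean in standard deviation units.
mean = sum(scores) / len(scores)
sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
z = (applicant - mean) / sd

print(percentile, round(z, 2))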
Correlation Between Scores

◦ Calculated when two or more measures are available for the same group of individuals.
◦ Helps the organization decide whether to incorporate a test into the selection process for job applicants.
◦ Example: individuals’ scores on a particular KSAO measure (e.g., a knowledge test) and on measures of job performance.

Ways to investigate the relationship between two sets of scores:
1. Scatter diagram – a plot of the joint distribution of the two sets of scores
2. Correlation coefficient – a numerical summary of the strength and direction of the relationship
Exh. 7.3: Scatter Diagrams and Corresponding Correlations
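As a rough sketch of the correlation coefficient calculation (hypothetical predictor and criterion scores, not taken from the text), in Python:

# Minimal sketch (hypothetical data): Pearson correlation between a
# knowledge-test score (predictor) and a job-performance rating (criterion).
test_scores = [72, 85, 64, 90, 78, 69, 88, 75]
perf_ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.1, 3.4]

n = len(test_scores)
mean_x = sum(test_scores) / n
mean_y = sum(perf_ratings) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(test_scores, perf_ratings))
sd_x = sum((x - mean_x) ** 2 for x in test_scores) ** 0.5
sd_y = sum((y - mean_y) ** 2 for y in perf_ratings) ** 0.5
r = cov / (sd_x * sd_y)   # ranges from -1.00 to +1.00
print(round(r, 2))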
Significance of the Correlation Coefficient

Practical significance
◦ Refers to the size of the correlation coefficient
◦ The greater the degree of common variation between two variables, the more one variable can be used to understand the other.

Statistical significance
◦ Refers to the likelihood that a correlation exists in the population, based on knowledge of the actual value of r in a sample from that population.
◦ The significance level is expressed as a p value.
◦ Interpretation: if p < .05, there are fewer than 5 chances in 100 of concluding there is a relationship in the population when, in fact, there is not.
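For the statistical-significance check, a minimal sketch (assuming SciPy is available; the scores are the same hypothetical ones used above):

# Minimal sketch (hypothetical data): pearsonr returns both r and its p value.
from scipy.stats import pearsonr

test_scores = [72, 85, 64, 90, 78, 69, 88, 75]
perf_ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.1, 3.4]

r, p = pearsonr(test_scores, perf_ratings)
# If p < .05, the chance of seeing a correlation this large in the sample
# when none exists in the population is less than 5 in 100.
print(round(r, 2), round(p, 4))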


Quality of Measures

◦ Reliability of measures

◦ Validity of measures

◦ Validity of measures in staffing

◦ Validity generalization
Quality of Measures: Reliability

◦ Definition: Consistency of measurement of an attribute


◦ A measure is reliable to the extent it provides a consistent set of scores to represent an attribute

◦ Reliability within a single time period (internal consistency)
  ◦ The measure we use to test job applicants should yield a consistent set of scores (high similarity) across its items
◦ Reliability between time periods (stability)
  ◦ The same individuals are evaluated (e.g., on personality and attitudes) in two different time periods and their scores compared
Measurement Error

◦ Rarely will any of the comparisons among scores discussed previously yield perfect similarity and reliability.
◦ The lack of agreement among the scores may be due to the occurrence of measurement error.
◦ This type of error represents the ‘noise’ in the measure and the measurement process.
◦ The scores actually obtained from a measure thus have two components: a true score and measurement error
Actual score = true score + error
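In standard classical test theory notation (a conventional formulation, not written out in the slides), this decomposition and the resulting definition of reliability are:

$$X = T + E, \qquad r_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = 1 - \frac{\sigma_E^2}{\sigma_X^2}$$

where X is the actual (observed) score, T is the true score, E is random measurement error, and r_XX' is the reliability of the measure.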
Types of Measurement Error

◦ Deficiency Error
  ◦ Failure to measure some portion or aspect of the attribute being assessed
  ◦ Can occur in several ways:
    ◦ The attribute may have been inadequately defined in the first place
    ◦ The measures used to assess the attribute may have been poorly constructed
    ◦ The organization opts to use whatever measures are available because of ease, cost considerations, sales pitches and promotional claims, and so forth

◦ Contamination Error
  ◦ Represents the occurrence of unwanted or undesirable influences on the measure and on the individuals for whom the measure is being used
Exh. 7.7 Sources of Contamination Error and Suggestions for Control
Procedures to calculate reliability estimates

◦ Coefficient alpha
  ◦ Should be at least .80 for a measure to have an acceptable degree of reliability
◦ Inter-rater agreement
  ◦ If members of an interview panel independently rate a set of job applicants on a 1–5 scale, inter-rater agreement asks how closely the raters agree with one another
◦ Test-retest reliability
  ◦ Concerned with the stability of measurement
  ◦ Scores from two different time periods are correlated
  ◦ The level of r should generally fall between r = .50 and r = .90
◦ Intra-rater agreement
  ◦ Scores that a rater assigns to the same people in two different time periods are compared
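A minimal sketch of the coefficient alpha calculation (hypothetical ratings for a 4-item measure; this is the standard formula, not code from the text):

# Minimal sketch (hypothetical data): coefficient alpha for a 4-item measure.
# Rows = applicants, columns = items of the measure.
ratings = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

k = len(ratings[0])                                              # number of items
item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
total_var = variance([sum(row) for row in ratings])              # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))   # should be at least .80 for acceptable reliability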
Implications of Reliability
Standard error of measurement
◦ Since only one score is obtained from an applicant, the critical issue is how accurate the score
is as an indicator of an applicant’s true level of knowledge

Relationship to validity
◦ Reliability of a measure places an upper limit on the possible validity of a measure
◦ A highly reliable measure is not necessarily valid
◦ Reliability does not guarantee validity - it only makes it possible
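The standard psychometric formulas behind these two points (stated here for reference; they are not spelled out in the slides):

$$SEM = SD_x \sqrt{1 - r_{xx}}, \qquad r_{xy} \le \sqrt{r_{xx}\, r_{yy}}$$

where SD_x is the standard deviation of observed predictor scores, r_xx and r_yy are the reliabilities of the predictor and the criterion, and r_xy is the observed validity coefficient. The second inequality is why reliability places an upper limit on validity.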
Validity of Measure

◦ Definition: The degree to which a measure assesses the attribute it is intended to measure.


◦ Importance of validity to staffing process
◦ Predictors must be accurate representations of KSAOs to be measured
◦ Predictors must be accurate in predicting job success
◦ Two types of validation studies
◦ Criterion-related validation
◦ Construct validation
Construct Validation

◦ Involves collecting information that shows a test or assessment measures what it was intended to measure.
◦ First, reliability evidence assesses the stability and consistency of a measure of a psychological construct.
◦ Second, evidence comes from patterns of correlations between the measure and other measures of constructs it should (and should not) be related to.
Accuracy of Measurement

◦ The circle on the left represents the construct “knowledge for the job.”
◦ The circle on the right represents the actual test of knowledge of the job.
◦ The overlap of the circles represents the degree of accuracy of measurement for the test.
◦ The greater the overlap, the greater the accuracy of measurement.
Validity of Measures in Staffing

◦ Importance of validity to staffing process


◦ Predictors must be accurate representations of KSAOs to be measured
◦ Predictors must be accurate in predicting job success

◦ Two types of validation studies


◦ Criterion-related validation
◦ Content validation
Criterion-Related Validation

• Criterion Measures: measures of performance on tasks and task dimensions


- These may already be available as part of an ongoing performance appraisal system, or they may
have to be developed.
• Predictor Measure: it taps into one or more of the KSAOs identified in job analysis

- It should be the type of measure most suitable to assess the KSAOs.


• Predictor–Criterion Scores: must be gathered from a sample of current employees or job applicants.
• Concurrent validation – predictor and criterion scores are obtained from current employees at the same time; it is convenient and can often be done quickly.
• Predictive validation – predictor scores are obtained from job applicants, with criterion scores gathered later; this overcomes the potential limitations of concurrent validation.

◦ Predictor–Criterion Relationship: the correlation must be calculated.


Content Validation

◦ Content validation involves


◦ Demonstrating the questions/problems are a representative sample of the kinds of situations
occurring on the job
◦ Criterion measures are not used
◦ A judgment is made about the probable correlation between predictors and criterion measures
◦ Used in two situations
◦ When there are too few people to form a sample for criterion-related validation
◦ When criterion measures are not available
Validity generalization
◦ The discussion of validity and validation so far has said nothing about whether validity evidence generalizes across settings.
◦ For example, if a predictor is valid for a particular job in organization A, would it be valid for the same type of job in organization B?

Hypothetical Validity Generalization Example
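A rough illustration of the basic idea (hypothetical studies, not from the text): validity generalization analyses pool validity coefficients across settings, typically weighting each study's r by its sample size (the further corrections for statistical artifacts are omitted here):

# Minimal sketch (hypothetical studies): sample-size-weighted mean validity,
# the starting point of a validity generalization (meta-analytic) summary.
studies = [(120, 0.22), (85, 0.31), (200, 0.27), (60, 0.18)]  # (n, observed r)
total_n = sum(n for n, _ in studies)
mean_r = sum(n * r for n, r in studies) / total_n
print(round(mean_r, 2))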
Staffing Metrics and Benchmarks

◦ Many HR managers in industry today have limited knowledge about job analysis, validation, and measurement.
◦ Benchmarking is a process whereby organizations evaluate their practices against those used by industry leaders.
◦ Examples of benchmarks include cost per hire, forecasted hiring needs, and vacancies filled.
◦ Benchmarks can be a useful means of measuring important aspects of staffing methods or the entire staffing process.
◦ Reliability, validity, utility, and measurement principles are more enduring, and more fundamental, metrics of staffing effectiveness.
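A minimal sketch of one such benchmark, cost per hire (hypothetical figures; the usual definition is total internal plus external recruiting costs divided by the number of hires):

# Minimal sketch (hypothetical figures): cost per hire as a staffing benchmark.
external_costs = 45000.0   # e.g., agency fees, advertising, assessments
internal_costs = 30000.0   # e.g., recruiter and hiring-manager time
hires = 25
cost_per_hire = (external_costs + internal_costs) / hires
print(cost_per_hire)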
COLLECTION OF ASSESSMENT DATA
◦ The primary ways assessment data can be collected:

Testing Procedures
◦ Who is qualified to test applicants?
◦ What information should be disclosed to applicants?
◦ How should applicants be assessed using standardized procedures?

Acquisition of Tests and Test Manuals
◦ Requires some start-up costs in terms of the time and effort needed to contact test publishers.
◦ Publishers of selection tests have information on their websites describing the products available for purchase.
◦ Publishers provide sample copies of tests and a user’s manual, which selection decision makers should consult before purchasing the test.
◦ The manual should describe the development and validation of the test.
◦ The test manual should include the information needed to ensure the test is appropriate and is used in an appropriate manner.
Professional Standards
◦ Principles for the Validation and Use of Personnel Selection Procedures – a guidebook that provides testing standards for use in selection decisions.
◦ Covers the choice, development, evaluation, and use of personnel selection procedures in employment settings.
◦ The Principles was developed by many of the world’s leading experts on selection, so any selection decision maker would be well advised to consult this important document.
Let's Watch a Video!!

Made by Danial Muhayat
