
Chapter 13 Combining Multiple Assessments


Chapter 13 Individual and Group Assessment
- Complex Candidate Judgments
- Individual Assessments
- Assessment Centers
Decision Models
- Additive: well known and useful ... and ...
- Compensatory. New concept: compensatory batteries
- What is "either/or"? Or "if/then"?
- What does the ADA have to do with it?
- Judgmental (for individual assessments) vs. statistical
- Cf. judgmental vs. statistical: which is better? (From Highhouse's PTC presentation)
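The contrast between an additive (compensatory) model and an "if/then" (multiple-hurdle) rule can be sketched in code. The predictors, weights, and cutoffs below are hypothetical, chosen only to show how the two models can disagree about the same candidate:

```python
# Sketch: two ways to combine predictor scores into a selection decision.
# Weights and cutoffs are illustrative, not from the chapter.

def compensatory(scores, weights):
    """Additive (compensatory) model: a high score on one predictor
    can offset a low score on another."""
    return sum(w * s for w, s in zip(weights, scores))

def multiple_hurdle(scores, cutoffs):
    """'If/then' (conjunctive) model: the candidate must clear every
    cutoff; surplus elsewhere does not compensate."""
    return all(s >= c for s, c in zip(scores, cutoffs))

# Hypothetical candidate: strong cognitive test, weak interview (z-scores)
scores = [1.5, -0.5]          # [cognitive, interview]
weights = [0.6, 0.4]
cutoffs = [0.0, 0.0]

composite = compensatory(scores, weights)   # 0.7 -> may still rank high
passes = multiple_hurdle(scores, cutoffs)   # False -> screened out
```

The same candidate is competitive under the compensatory composite but rejected under the hurdle rule, which is the practical stake in choosing a decision model.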
Issues in Combining Predictors
Tough Choices
- Large applicant pools and top-down selection: no problem
- What about small pools of candidates for one position?
- What factors influence the decision?
Individual Assessment
- Usually for executives or special positions
- Performance is hard to define, and few occupy the roles
- Why do these assessment opportunities attract charlatans?
- Holistic Approach (Henry Murray). How good is this approach? (P. Meehl, '54)
- Analytic Emphasis (history). Approach:
  - Consultant visited clients to learn the job/org/context
  - Two psychologists interviewed and rated candidates without access to data on them
  - Projective tests were analyzed by a clinician, blind to other information
  - Test battery developed to include two personality / interest inventory / abilities tests
  - One psychologist, the interviewer, wrote the report
- Two other programs: EXXON and Sears
  - Batteries included critical thinking / personality; MRC .70-.75!
  - Exec success = forcefulness, dominance, assertiveness, confidence
- Although valid, legal concerns from the '50s-'60s damped down research
Individual Assessment
Criticisms of Individual Assessment
- Overconfidence in clinical judgments. True or false? (Camerer & Johnson, '91; Highhouse, '02)
- Psychometrics don't apply to this type of assessment
- Assessors wouldn't be in business if they weren't valid
Individual Assessments
Other criticisms:
- Individual assessment is rarely subjected to validation
- Conclusions are often unreliable (Ryan & Sackett, '89)
- Summaries are often influenced by one or two parts, which could be done alone (they are judgments! And judgments often focus on negative and early cues!)
- Great emphasis is usually placed on personality, when cognitive tests are usually more valid
- Actual interpersonal interaction needs to be assessed, but with more than one person evaluating (assessment centers are useful)
- May be ethically or legally questionable to seek information not explicitly relevant to the job ("Mr. Obama, can you tell us a little about your wife, Michelle?")
To address these issues:
- Combine evidence of relevant traits with evidence from construct validities
- Use well-developed predictive hypotheses to dictate and justify the assessment content
- Use more work samples (or in-baskets, SJTs) to assess interpersonal behavior
- Use personnel records, biodata, and interview structure
- Others?
Assessment Centers
Purposes
- Often organizationally specific, to reflect specific values and practices
- For managerial assessment (Thornton & Byham, '82):
  - Early identification of potential
  - Promotion
  - Development
- For personnel decisions: OAR (overall assessment rating)
Assessment Centers (organization specific)
Purposes
- Promotion (identify potential managers): succession planning
- Management development
Assessment Center Components (need a JA)
- Multiattribute, and should be multimethod (more than one method per attribute)
- Tests and inventories
- Exercises (performance tests / work samples)
  - In-basket
  - Leaderless group discussions. Do these have problems? Confounds?
- Interviews. Should a stress interview be used? When? Give an example.
Assessors
- Functions of assessors (Zedeck, '86): observer and recorder, role player, predictor
- Assessor qualifications: SMEs, HR, psychologists
- Number of assessors: about 2 candidates to 1 assessor
Dimensions to be Assessed (see Table 13.2)
- Dimensions (usually not defined, but should be)
  - Should be defined in behavioral terms (in a particular situation)
- Ratings
  - Replication: ratings on different predictors for the same dimension should generalize from one exercise to another. Would you predict that happens much?
- Overall Assessment Ratings (OAR)
  - Should be a definable attribute. Or is it a composite of unrelated but valid predictors?
  - Is consensus the way to go? Can you think of an example?
Construct Validities of Dimension Assessments
- Dimensional consistency (across exercises): rs should not be high, but substantial. Why?
- Results of factor analyses: are the factors (in Table 13.3) defined by the exercises or by the dimensions? Is this consistent with Sackett & Dreher's ('82) findings?
- Reasons for inconsistency in dimension ratings. Two viewpoints: are the dimensions relatively enduring, or situationally specific? Or contingent?
- Solutions? The OAR? Maybe the dimensions are just a small number of cognitive and personality factors. A behavioral checklist, perhaps?
Criterion-Related Validities (review of meta-analytic studies)
- Predictive validity is higher with multiple measures
- Validities are higher when peer evaluations are included
- Background and training moderate validity
- Four dimensions account for most of the variance
- Validities are higher for managerial progress than for future performance
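The dimensional-consistency question above amounts to a convergent/discriminant check: the same dimension rated in two exercises should correlate more highly than two dimensions rated within one exercise. A toy sketch, with invented ratings and dimension names chosen purely for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-5 ratings for six candidates
leadership_inbasket = [4, 3, 5, 2, 4, 3]
leadership_lgd      = [4, 2, 5, 3, 4, 2]   # same dimension, different exercise
planning_inbasket   = [5, 4, 4, 2, 3, 3]   # different dimension, same exercise

# Convergent: the same dimension should generalize across exercises
convergent = pearson(leadership_inbasket, leadership_lgd)
# Discriminant: different dimensions in one exercise should correlate less
discriminant = pearson(leadership_inbasket, planning_inbasket)
```

In these made-up numbers convergent exceeds discriminant, the pattern a well-functioning center hopes for; Sackett & Dreher ('82) found that in practice ratings often cluster by exercise instead.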
Point of View
What is the authors' point of view on Assessment Center validity? What do they recommend? Behaviorally based ratings, using checklists.