Psych 521
Question | Answer |
---|---|
How job analysis and performance appraisal may be related | job analysis is the foundation that a performance appraisal should be built on. The job analysis informs the criteria for the performance appraisal. For the PA to be legally defensible, the criteria must be based on the job analysis. |
basic considerations for establishing performance appraisal standards | Realistic. Measurable. Clearly understood. Clearly defined traits with anchors to base decisions on. Bad standards use unclear criteria such as "excellent," "poor," or "good" – what does "excellent quantity of work" actually mean? |
How selection and promotional exams may be conceptually similar to performance appraisal | The same precedents set for selection and promotion procedures also apply to performance appraisal – if actions are going to be based on a performance appraisal, the appraisal needs to meet the same standards as an employment test. |
purposes of performance appraisal | Administrative – pay for performance, promotions, dismissal, downsizing, layoffs. Developmental – identify strengths/opportunities, development planning, coaching, and career planning. |
What Strategic Alignment and Strategic Relevance mean and how performance appraisal can support strategic alignment | Strategic alignment – the PA should be based on, be in line with, and support the goals of the company. Strategic relevance – how relevant are the activities/criteria? Are there deficient, contaminated, or overemphasized measures? |
Forced choice rating method | The rater is forced to choose among behavioral descriptors without knowing their weights or which ones actually count toward the score. Reduces bias or favoritism. (Smiley-face scales.) Drawbacks: doesn't tell what the person actually did, which makes feedback hard, and many traits are reduced to just one score. |
Forced distribution rating method | Forcing employees into every category of the PA rating scale even if they don't actually belong there. Helps to reduce the halo effect and leniency. Ford example. (A forced-distribution sketch follows this table.) |
advantages and disadvantages of forced distribution | Helps with rating inflation. Compensation increases reflect performance differences among individuals. Managers don't use the extreme categories. Explaining an employee's placement can be difficult. Performance may not follow a normal distribution. |
how reliability and validity relates to PA | The bottom line is that performance appraisal systems must be able to meet the same standards of reliability and validity that employment tests meet. All decisions made based on the PA must be legally defensible: job-related, with clear definitions. |
how reliability & validity relates to PA, continued | Behavioral examples increase reliability; reliability isn't necessarily increased if raters still have to interpret the ratings themselves. If the scales/definitions are related specifically to the job, this increases validity. |
concepts of criterion contamination and deficiency and how this is related to performance appraisal | All performance criteria suffer from deficiency or contamination to some degree; a flawed process is still better than nothing. 4% roulette wheel example. |
different sources of performance appraisal information and the optimal uses for these sources | Supervisor – doing more with less. Peers/coworkers – know the work, but ratings may be inflated due to leniency. Self-rating – people tend to rate themselves either higher or more critically. Outside raters – useful when the job is customer-facing. 360-degree – gives a rounded view of the employee, but may not be accurate. |
Policy Capturing | Employees focus their attention and behavior on the things that are most likely to be rewarded or that carry higher weighted values – a contaminant that can derail the strategic alignment the PA should be reinforcing, and reflective of criterion deficiency. |
different types of rater errors and how to reduce or eliminate these errors | contrast effects, similar to me, first impressions, halo effect, central tendency, leniency or generosity, strictness or severity, spillover effect |
different types of rating approach classifications | behavioral rating approach, results rating approach |
different types of rating formats | forced choice, BARS, BOS |
legal requirement for performance appraisal and how these requirements compare to the requirements for testing | The legal requirements for performance appraisal systems are the same as for selection systems. Title VII of the Civil Rights Act forbids discrimination on the basis of Race, Color, Religion, Sex, or National Origin. |
explain how performance appraisal may become increasingly difficult as you ascend the organizational hierarchy | The behaviors that define success become more abstract; leadership, motivation, and organizational strategy can be hard to measure. There is also an inherent problem with who should be rating/evaluating the performance of organizational leaders. |
things that can cause performance appraisal to fail | •An appraisal system that isn't valid or reliable •Systems that measure things unrelated to successful job performance •Subjectivity in appraisal systems •Bias in PA systems •Rater errors |
What the linkage matrix is and how that can facilitate the creation of performance standards | Connects KSAs to work tasks (class assignment). Rate the KSAs, rate the daily work tasks, then connect them to see which KSAs are required for the most frequent tasks. Shows which tasks require which KSAs and whether a KSA needs to be known from day 1 or can be trained after hire. (A linkage-matrix sketch follows this table.) |
What Management by Objective (MBO) is and how performance appraisal can support MBO | Set specific, measurable goals with each employee and periodically review the progress made. 1. Set the organization's goals. 2. Set departmental goals. 3. Discuss departmental goals. 4. Define expected results (set individual goals). 5. Performance reviews. 6. Provide feedback. |
What Personnel Measures are and how these measures can best be used in performance appraisal | Measures such as days late/absent, number of grievances or disciplinary actions, and other personnel factors (as opposed to actual job performance). Best used in the PA process in conjunction with other measures, such as behavioral or results-based approaches to evaluation. |
How O*NET can be used in creating performance appraisal scales or content | A tool for creating performance appraisal scales because it provides task statements for Behavioral Observation Scales. Task statements and KSAs from O*NET can be reworked into anchors for Behaviorally Anchored Rating Scales, and can also be used for Forced Choice appraisals. |
how to reduce or eliminate rater errors | Use objective measures anchored by the specific behaviors needed for successful job performance. Have raters focus on work-related behavior and evaluate only performance within the current rating period. Reward managers for accurate appraisals. Train raters to make them aware of errors. |
contrast effects rating error | A better or worse rating given as a result of immediately previous exposure to a stimulus of lesser or greater value (exaggerated ratings when experiencing one extreme followed by the other, e.g., how the temperature feels when leaving an air-conditioned building and walking into summer heat). |
similar to me rating error | A rating error that gives an overly good rating to those with characteristics similar to the rater's. |
first impressions rating error | When ratings are affected positively or negatively based purely on first impressions. |
halo effect rating error | The rater’s tendency to rate all aspects of performance based on the observation of one trait. (very common) Horn Effect is the opposite of Halo Effect. |
central tendency rating error | Most common and most serious type of error. When a rater clusters all ratings in the middle or average point. Fear of rating too high or too low. |
Leniency or Generosity rating error | The tendency for a rater to give all employees high ratings. Fear of seeming too critical |
severity or strictness rating error | The tendency for a rater to give all employees low ratings. Opposite of Leniency Effect. Fear of seeming too easy/soft with evaluations. |
spillover effect rating error | When past evaluations are allowed to affect a current rating, either positively or negatively, even though current performance has changed. |
Slides say impact of bias and rater error can be reduced if | •Employees' work is more visible •The work expectations and objectives are clear •The rating criteria are more specific and behaviorally based (rather than using generic characteristics or traits) |
behavioral rating approach | Assesses behaviors rather than other characteristics. Used when certain behaviors are required for successful work performance and when the sheer number of work outcomes isn't the only thing that makes the best employees good. Scales are built from important job dimensions, with statements of desired/undesirable behavior serving as anchors. |
results rating approach | Employee performance is evaluated based upon job outcomes (ex. Sales, orders filled, etc.) This approach is best applied when other work behaviors are not essential to job success and the concrete numbers of job outcomes are really all that matters. |
BARS | There is no clear distinction between BARS and Behavioral Expectancy Scales. Raters differ in their interpretations of "poor," "average," and "exceptional"; "anchoring" scale points with behavioral examples helps "standardize" ratings and increase reliability across raters. |
Behavioral Observation Scales | Focus on the frequency or relative frequency of behaviors occurring, anchored with "almost always" and "almost never." Items or dimensions are treated and summed as being of equal value, unless weights can be empirically derived. Ratings are on a Thurstone scale. (A BOS scoring sketch follows this table.) |
Legally Defensible Performance Appraisal System: | •Criteria based on job analysis •No disparate impact •Evidence of validity •Evaluation criteria that limit interpretation •PA linked to duties •Rater has knowledge of/contact with the ratee •Train supervisors •A review process to prevent undue control over careers and to help performers improve |
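The forced-distribution mechanics above can be made concrete with a short sketch. This is only an illustration with made-up names, scores, and quota percentages (none of them from the course material): employees are ranked by an overall score and then assigned to fixed-percentage categories, whether or not the score gaps justify the category boundaries.

```python
# Hypothetical forced-distribution allocation: rank employees by score,
# then fill fixed-percentage buckets regardless of absolute performance.
scores = {"Ana": 92, "Ben": 88, "Cai": 85, "Dee": 79, "Eli": 78,
          "Fay": 74, "Gus": 70, "Hal": 65, "Ivy": 60, "Jo": 55}

# Assumed quota shares (illustrative only).
quotas = [("exceptional", 0.10), ("above average", 0.20), ("average", 0.40),
          ("below average", 0.20), ("unsatisfactory", 0.10)]

ranked = sorted(scores, key=scores.get, reverse=True)
assignments, start = {}, 0
for i, (category, share) in enumerate(quotas):
    count = round(share * len(ranked))
    # The last bucket takes whatever remains so rounding never leaves anyone out.
    chunk = ranked[start:] if i == len(quotas) - 1 else ranked[start:start + count]
    for name in chunk:
        assignments[name] = category
    start += len(chunk)

for name in ranked:
    print(f"{name}: {scores[name]} -> {assignments[name]}")
```

Note how Cai (85) and Dee (79) land in different categories only because the "above average" quota is full – the "performance may not follow a normal distribution" objection in action.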
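The linkage matrix from the class assignment can be sketched the same way. The tasks, frequency ratings, KSAs, and linkage values below are invented for illustration; the idea is simply to cross-tabulate KSAs against tasks and weight each KSA by the frequency of the tasks that require it.

```python
# Hypothetical linkage matrix: tasks carry a frequency rating (1 = rare, 5 = daily);
# a 1 in the matrix means the KSA is required to perform that task.
tasks = {
    "answer customer calls": 5,
    "write monthly reports": 2,
    "resolve billing disputes": 4,
}
ksas = ["communication", "product knowledge", "report writing"]
linkage = {
    "answer customer calls":    {"communication": 1, "product knowledge": 1, "report writing": 0},
    "write monthly reports":    {"communication": 0, "product knowledge": 1, "report writing": 1},
    "resolve billing disputes": {"communication": 1, "product knowledge": 1, "report writing": 0},
}

# Weight each KSA by the frequency of the tasks that require it.
for ksa in ksas:
    total = sum(freq * linkage[task][ksa] for task, freq in tasks.items())
    print(f"{ksa}: {total}")
```

A high total (product knowledge here) flags a KSA that likely needs to be there from day 1; a low total (report writing) is a candidate for training after hire.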
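Finally, a rough sketch of how a Behavioral Observation Scale might be scored, assuming invented behavior items and a 1–5 frequency scale running from "almost never" to "almost always": items are summed with equal weight unless empirically derived weights are supplied.

```python
# Hypothetical BOS scoring: each item is a frequency rating from 1 ("almost never")
# to 5 ("almost always").
ratings = {
    "greets customers promptly": 5,
    "documents each transaction": 3,
    "follows safety procedures": 4,
}

def bos_score(ratings, weights=None):
    # Equal weights by default; empirically derived weights can be passed in.
    if weights is None:
        weights = {item: 1.0 for item in ratings}
    return sum(weights[item] * rating for item, rating in ratings.items())

print(bos_score(ratings))  # unweighted sum of frequencies: 12.0
print(bos_score(ratings, {"greets customers promptly": 2.0,
                          "documents each transaction": 1.0,
                          "follows safety procedures": 1.5}))  # weighted: 19.0
```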