000 01471pab a2200193 4500
008 180718b2000 xxu||||| |||| 00| 0 eng d
100 _aMurphy, Kevin R.
245 _aInterrater correlations do not estimate the reliability of job performance ratings
260 _c2000
300 _ap.873-900
362 _aWinter
520 _aInterrater correlations are widely interpreted as estimates of the reliability of supervisory performance ratings, and are frequently used to correct the correlations between ratings and other measures (e.g., test scores) for attenuation. These interrater correlations do provide some useful information, but they are not reliability coefficients. There is clear evidence of systematic rater effects in performance appraisal, and variance associated with raters is not a source of random measurement error. We use generalizability theory to show why rater variance is not properly interpreted as measurement error, and show how such systematic rater effects can influence both reliability estimates and validity coefficients. We show conditions under which interrater correlations can either overestimate or underestimate reliability coefficients, and discuss reasons other than random measurement error for low interrater correlations. - Reproduced
650 _aEmployees - Rating of
650 _aJob satisfaction
650 _aJob evaluation
700 _aDeShon, Richard
773 _aPersonnel Psychology
909 _a47317
999 _c47317
_d47317