The Journal of Applied Psychology
-
The authors present an analytical method to assess the average criterion performance of the selected candidates, as well as the adverse impact and the cost, of general multistage selection decisions. The method extends previous work on the analytical estimation of multistage selection outcomes to the case in which the applicant pool is a mixture of applicant populations that differ in their average performance on the selection predictors. Next, the method is used to conduct 3 studies of important issues that practitioners and researchers face with multistage selection processes. Finally, the authors indicate how the method can be integrated into a broader analytical framework to design multistage selection decisions that achieve intended levels of selection cost, workforce quality, and workforce diversity.
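The article's method is analytical, but the quantities it estimates can be illustrated with a Monte Carlo stand-in. The sketch below simulates a two-stage selection over a mixture of two applicant subpopulations and reports the mean criterion score of selectees and the adverse-impact ratio; every parameter value (subgroup proportion, subgroup mean difference d, predictor validities, stage selection ratios) is invented for illustration and is not taken from the article.

```python
import random

random.seed(0)

def simulate_two_stage(n=200_000, p_minority=0.30, d=0.5, rho=0.4,
                       sr1=0.5, sr2=0.4):
    """Monte Carlo sketch (all parameters illustrative) of a two-stage
    selection over a mixture of two applicant populations.
    Stage 1 keeps the top sr1 fraction on predictor x1; stage 2 keeps the
    top sr2 fraction of survivors on x2. Criterion y correlates ~rho
    with each predictor."""
    applicants = []
    for _ in range(n):
        minority = random.random() < p_minority
        shift = -d if minority else 0.0          # subgroup mean difference
        x1 = random.gauss(shift, 1.0)
        x2 = random.gauss(shift, 1.0)
        # criterion built so that var(y) = 1 and corr(y, x_i) ~ rho
        y = rho * x1 + rho * x2 + random.gauss(0, (1 - 2 * rho**2) ** 0.5)
        applicants.append((minority, x1, x2, y))

    applicants.sort(key=lambda a: a[1], reverse=True)   # stage 1 screen
    stage1 = applicants[: int(n * sr1)]
    stage1.sort(key=lambda a: a[2], reverse=True)       # stage 2 screen
    selected = stage1[: int(len(stage1) * sr2)]

    mean_y = sum(a[3] for a in selected) / len(selected)
    def sel_rate(grp):
        return (sum(1 for a in selected if a[0] == grp)
                / sum(1 for a in applicants if a[0] == grp))
    ai_ratio = sel_rate(True) / sel_rate(False)  # adverse-impact ratio
    return mean_y, ai_ratio

mean_y, ai = simulate_two_stage()
print(f"mean criterion of selectees: {mean_y:.2f}, AI ratio: {ai:.2f}")
```

With a positive subgroup difference on the predictors, the simulated adverse-impact ratio falls below 1, which is exactly the trade-off against workforce quality and cost that the authors' analytical framework is designed to manage.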
-
Building on recent work in occupational safety and climate, the authors examined 2 organizational foundation climates thought to be antecedents of specific safety climate, and the relationships among these climates and occupational accidents. The authors hypothesized that both foundation climates (i.e., management-employee relations and organizational support) would predict safety climate, which would in turn mediate the relationship between these 2 distal foundation climates and occupational accidents. ⋯ Results supported all hypotheses. Overall, it appears that different climates have direct and indirect effects on occupational accidents.
-
The impact of corrections for faking on the validity of noncognitive measures in selection settings.
In selection research and practice, there have been many attempts to correct scores on noncognitive measures for applicants who may have faked their responses. A related, potentially more consequential approach is to identify faking applicants and remove them from consideration for employment entirely, replacing them with high-scoring alternatives. The current study demonstrates that, under typical conditions found in selection, even this latter approach has minimal impact on mean performance levels. ⋯ When trait scores were corrected only for suspected faking, and applicants were not removed or replaced, the already minimal impact on mean performance shrank even further. By comparison, the effects of selection ratio and test validity are much larger across a range of realistic values. If selection researchers are interested only in maximizing predicted performance or validity, using faking measures to correct scores or to remove applicants from further employment consideration will produce minimal effects.
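The comparison the study makes can be illustrated with a toy simulation (faking rate, score inflation, validity, and selection ratios below are all assumed values, not the study's data): removing flagged fakers and promoting the next-highest scorers barely moves the mean performance of those hired, while tightening the selection ratio moves it far more.

```python
import random

random.seed(1)

def mean_performance(n=100_000, validity=0.3, sr=0.15,
                     faker_rate=0.15, fake_boost=0.5, remove_fakers=False):
    """Illustrative sketch (parameters assumed): some applicants inflate
    their observed trait score by fake_boost; optionally all flagged
    fakers are dropped, so the next-highest scorers are hired instead."""
    pool = []
    for _ in range(n):
        true_score = random.gauss(0, 1)
        faker = random.random() < faker_rate
        observed = true_score + (fake_boost if faker else 0.0)
        # job performance depends on the true trait score, not the faked one
        perf = validity * true_score + random.gauss(0, (1 - validity**2) ** 0.5)
        pool.append((observed, faker, perf))
    pool.sort(reverse=True)                    # rank applicants on observed score
    if remove_fakers:
        pool = [a for a in pool if not a[1]]   # drop flagged fakers entirely
    hired = pool[: int(n * sr)]
    return sum(a[2] for a in hired) / len(hired)

base = mean_performance()
removed = mean_performance(remove_fakers=True)
stricter = mean_performance(sr=0.05)
print(f"baseline {base:.3f}, fakers removed {removed:.3f}, sr=0.05 {stricter:.3f}")
```

In this sketch the removal-and-replacement condition changes mean hired performance by only a few hundredths of a standard deviation, whereas cutting the selection ratio from .15 to .05 produces a several-fold larger gain, mirroring the study's conclusion about the relative importance of selection ratio and validity.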
-
Meta-Analysis
A reexamination of Black-White mean differences in work performance: more data, more moderators.
This study is the largest meta-analysis to date of Black-White mean differences in work performance. The authors examined several moderators not addressed in previous research. ⋯ Greater mean differences were found for highly cognitively loaded criteria, for data reported in unpublished sources, and for performance measures consisting of multiple-item scales. On the basis of these findings, the authors hypothesize several potential determinants of mean racial differences in job performance.
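The core aggregation step in a meta-analysis of mean differences is a sample-size-weighted average of study-level standardized differences (d). A minimal sketch, with study sample sizes and d values invented purely for illustration:

```python
def weighted_mean_d(studies):
    """Sample-size-weighted mean standardized difference (d) and the
    n-weighted variance of observed d's, the basic bare-bones aggregation
    step of a meta-analysis. `studies` is a list of (n, d) pairs."""
    total_n = sum(n for n, _ in studies)
    mean_d = sum(n * d for n, d in studies) / total_n
    var_d = sum(n * (d - mean_d) ** 2 for n, d in studies) / total_n
    return mean_d, var_d

# Invented example studies, NOT data from the article
studies = [(120, 0.45), (300, 0.30), (80, 0.60), (500, 0.25)]
mean_d, var_d = weighted_mean_d(studies)
print(f"weighted mean d = {mean_d:.3f}")   # weighted mean d = 0.317
```

Moderator analyses like those in the article then amount to computing this weighted mean separately within subsets of studies (e.g., published vs. unpublished sources) and comparing the subgroup means against the between-study variance.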
-
This study examined the relationship between the similarity and the accuracy of team mental models and compared the extent to which each predicted team performance. The relationship between team ability composition and team mental models was also investigated. ⋯ Results indicated that although similarity and accuracy of team mental models were significantly related, accuracy was a stronger predictor of team performance. In addition, team ability was more strongly related to the accuracy than to the similarity of team mental models, and accuracy partially mediated the relationship between team ability and team performance, whereas similarity did not.