The Journal of Applied Psychology
-
Van Iddekinge, Roth, Raymark, and Odle-Dusseau's (2012) meta-analysis of pre-employment integrity test results confirmed that such tests are meaningfully related to counterproductive work behavior. The article also offered some cautionary conclusions, which appear to stem from the limited scope of the authors' focus and the specific research procedures used. Issues discussed in this commentary include the following: (a) test publishers' provision of studies for meta-analytic consideration; (b) errors and questions in the coding of statistics from past studies; (c) debatable corrections for unreliable criterion measures; (d) exclusion of laboratory, contrasted-groups, unit-level, and time-series studies of counterproductive behavior; (e) under-emphasis on the prediction of counterproductive workplace behaviors compared with job performance, training outcomes, and turnover; (f) overlooking the industry practice of deploying integrity scales with other valid predictors of employee outcomes; (g) implication that integrity test publishers produce biased research results; (h) incomplete presentation of integrity tests' resistance to faking; and (i) omission of data indicating applicants' favorable response to integrity tests, the tests' lack of adverse impact, and the positive business impact of integrity testing. This commentary, therefore, offers an alternate perspective, addresses omissions and apparent inaccuracies, and urges a return to the use of diverse methodologies to evaluate the validity of integrity tests and other psychometric instruments.
-
We clear up a number of misconceptions from the critiques of our meta-analysis (Van Iddekinge, Roth, Raymark, & Odle-Dusseau, 2012). We reiterate that our research question focused on the criterion-related validity of integrity tests for predicting individual work behavior and that our inclusion criteria flowed from this question. We also reviewed the primary studies we could access from Ones, Viswesvaran, and Schmidt's (1993) meta-analysis of integrity tests and found that only about 30% of the studies met our inclusion criteria. ⋯ In addition, we address concerns raised about certain decisions we made and values we used, and we demonstrate how such concerns would have little or no effect on our results or conclusions. Finally, we discuss some other misconceptions about our meta-analysis, as well as some divergent views about the integrity test literature in general. Overall, we stand by our research question, methods, and results, which suggest that the validity of integrity tests for criteria such as job performance and counterproductive work behavior is weaker than the authors of the critiques appear to believe.
-
We react to the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012a) meta-analysis of the relationship between integrity test scores and work-related criteria, the earlier Ones, Viswesvaran, and Schmidt (1993) meta-analysis of those relationships, the Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012) responses, and the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012b) rebuttal. We highlight differences between the findings of the 2 meta-analyses by focusing on studies that used predictive designs, applicant samples, and non-self-report criteria. ⋯ The lack of detailed documentation of all effect size estimates used in either meta-analysis makes it impossible to ascertain the bases for the differences in findings. We call for increased detail in meta-analytic reporting and for better information sharing among the parties producing and meta-analytically integrating validity evidence.