The Journal of Applied Psychology
-
We clear up a number of misconceptions from the critiques of our meta-analysis (Van Iddekinge, Roth, Raymark, & Odle-Dusseau, 2012). We reiterate that our research question focused on the criterion-related validity of integrity tests for predicting individual work behavior and that our inclusion criteria flowed from this question. We also reviewed the primary studies we could access from Ones, Viswesvaran, and Schmidt's (1993) meta-analysis of integrity tests and found that only about 30% of the studies met our inclusion criteria. ⋯ In addition, we address concerns raised about certain decisions we made and values we used, and we demonstrate that such concerns would have little or no effect on our results or conclusions. Finally, we discuss some other misconceptions about our meta-analysis, as well as some divergent views about the integrity test literature in general. Overall, we stand by our research question, methods, and results, which suggest that the validity of integrity tests for criteria such as job performance and counterproductive work behavior is weaker than the authors of the critiques appear to believe.
-
Examination of the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012) meta-analysis reveals a number of problems. They meta-analyzed a partial database of integrity test validities. An examination of their coded database revealed that measures coded and meta-analyzed as integrity tests often included scales that are not in fact integrity tests. ⋯ We found the absence of fully hierarchical moderator analyses to be a serious weakness. We also explain why empirical comparisons between test publishers and non-publishers cannot unambiguously lead to inferences of bias, as alternate explanations are possible, even likely. In light of the problems identified, it appears that the conclusions about integrity test validity drawn by Van Iddekinge et al. cannot be considered accurate or reliable.
-
We react to the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012a) meta-analysis of the relationship between integrity test scores and work-related criteria, the earlier Ones, Viswesvaran, and Schmidt (1993) meta-analysis of those relationships, the Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012) responses, and the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012b) rebuttal. We highlight differences between the findings of the 2 meta-analyses by focusing on studies that used predictive designs, applicant samples, and non-self-report criteria. ⋯ The lack of detailed documentation of all effect size estimates used in either meta-analysis makes it impossible to ascertain the bases for the differences in findings. We call for increased detail in meta-analytic reporting and for better information sharing among the parties producing and meta-analytically integrating validity evidence.