The Journal of Applied Psychology
-
Review / Meta-Analysis
Do other-reports of counterproductive work behavior provide an incremental contribution over self-reports? A meta-analytic comparison.
Much of the recent research on counterproductive work behavior (CWB) has used multi-item self-report measures of CWB. Because of concerns over self-report measurement, there have been recent calls to collect ratings of employees' CWB from their supervisors or coworkers (i.e., other-raters) as alternatives or supplements to self-ratings. However, little is yet known about the degree to which other-ratings of CWB capture unique and valid incremental variance beyond self-report CWB. ⋯ Third, self-raters reported engaging in more CWB than other-raters attributed to them, suggesting that other-ratings capture a narrower subset of CWBs. Fourth, other-report CWB generally accounted for little incremental variance in the common correlates beyond self-report CWB. Although many have viewed self-reports of CWB with skepticism, the results of this meta-analysis support their use in most CWB research as a viable alternative to other-reports.
-
Integrity tests have become a prominent predictor in the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests, citing a perceived lack of methodological rigor in this literature as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples) that were authored by a similar proportion of test publishers and non-publishers, were conducted in a manner consistent with professional standards for test validation, and reported results relevant to the validity of integrity-specific scales for predicting individual work behavior. ⋯ Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).
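A note on the corrected validities reported above: in this literature, observed validities are typically disattenuated for criterion unreliability using the classical correction for attenuation (the practice questioned in point (c) of the commentary that follows). A minimal sketch with hypothetical numbers, not values taken from the meta-analysis:

% Classical correction for attenuation due to criterion unreliability.
% r_xy is the observed test-criterion correlation; r_yy is the criterion
% reliability estimate. Symbols and example values are illustrative only.
\[
  \hat{\rho}_{xy} = \frac{r_{xy}}{\sqrt{r_{yy}}}
\]
% Hypothetical example: an observed validity of r_xy = .20 with a criterion
% reliability of r_yy = .55 gives .20 / sqrt(.55) ≈ .27.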
-
Van Iddekinge, Roth, Raymark, and Odle-Dusseau's (2012) meta-analysis of pre-employment integrity test results confirmed that such tests are meaningfully related to counterproductive work behavior. The article also offered some cautionary conclusions, which appear to stem from the limited scope of the authors' focus and the specific research procedures used. Issues discussed in this commentary include the following: (a) test publishers' provision of studies for meta-analytic consideration; (b) errors and questions in the coding of statistics from past studies; (c) debatable corrections for unreliable criterion measures; (d) exclusion of laboratory, contrasted-groups, unit-level, and time-series studies of counterproductive behavior; (e) under-emphasis on the prediction of counterproductive workplace behaviors compared with job performance, training outcomes, and turnover; (f) overlooking the industry practice of deploying integrity scales with other valid predictors of employee outcomes; (g) implication that integrity test publishers produce biased research results; (h) incomplete presentation of integrity tests' resistance to faking; and (i) omission of data indicating applicants' favorable response to integrity tests, the tests' lack of adverse impact, and the positive business impact of integrity testing. This commentary, therefore, offers an alternate perspective, addresses omissions and apparent inaccuracies, and urges a return to the use of diverse methodologies to evaluate the validity of integrity tests and other psychometric instruments.