Evaluation and Program Planning
-
Over the past decade, substance abuse treatment professionals have begun to implement evidence-based practices (EBPs) in the treatment of substance use disorders. There is a growing body of research on the diffusion of EBPs in addiction treatment; however, less is known about individual state initiatives to implement EBPs among community providers. ⋯ In addition, the study examines potential barriers to implementing these practices. To accomplish this goal, we report the findings of two surveys of Mississippi addiction professionals conducted in 2010 and in 2013.
-
Increasingly, public sector programmes respond to complex social problems that intersect specific fields and individual disciplines. Such responses result in multi-site initiatives that can span nations, jurisdictions, sectors and organisations. The rigorous evaluation of public sector programmes is now a baseline expectation. ⋯ This paper provides a case study of the evaluation of a national, inter-jurisdictional, cross-sector, aged care health initiative and its encounters with Australian centralised ethics review processes. Specifically, the paper considers progress against the key themes of a previous five-year, five-nation study (Fitzgerald and Phillips, 2006), which found that centralised ethics review processes would save time, money and effort, as well as contribute to more equitable workloads for researchers and evaluators. The paper concludes with insights for those charged with refining centralised ethics review processes, as well as recommendations for future evaluators of complex multi-site programme initiatives.
-
In this paper, we demonstrate the importance of conducting well-thought-out sensitivity analyses for handling clustered data (data in which individuals are grouped into higher-order units, such as students in schools) that arise from cluster randomized controlled trials (RCTs). This is particularly relevant given the rise in rigorous impact evaluations that use cluster randomized designs across various fields, including education, public health and social welfare. ⋯ These methods include: (1) hierarchical linear modeling (HLM); (2) feasible generalized least squares (FGLS); (3) generalized estimating equations (GEE); and (4) ordinary least squares (OLS) regression with cluster-robust (Huber-White) standard errors. We compare our findings across the four methods, showing how results that are inconsistent in terms of both effect sizes and statistical significance emerged across them, and we describe our analytic approach to resolving such inconsistencies.
-
Complexity theory has increasingly been discussed and applied within the evaluation literature over the past decade. This article reviews the discussion and use of complexity theory within academic journal literature. The aim is to identify the issues to be considered when applying complexity theory to evaluation. ⋯ The first group considers the implications of applying complexity theory concepts for defining evaluation purpose, scope and units of analysis. The second group of themes considers methodology and method. The results provide a starting point for configuring an evaluation approach consistent with complexity theory, whilst also identifying a number of design considerations to be resolved within evaluation planning.
-
Evolutionary theory, developmental systems theory, and evolutionary epistemology provide deep theoretical foundations for understanding programs, their development over time, and the role of evaluation. This paper relates core concepts from these powerful bodies of theory to program evaluation. Evolutionary Evaluation is operationalized in terms of program and evaluation evolutionary phases, which are in turn aligned with multiple types of validity. ⋯ The paper illustrates how an Evolutionary Evaluation perspective can illuminate important controversies in evaluation, using the example of the appropriate role of randomized controlled trials, which encourages a rethinking of "evidence-based programs". From an Evolutionary Evaluation perspective, prevailing interpretations of rigor and mandates for evidence-based programs pose significant challenges to program evolution. This perspective also illuminates the consequences of misalignment between program and evaluation phases; the importance of supporting both researcher-derived and practitioner-derived programs; and the need for variation and evolutionary phase diversity within portfolios of programs.