RKDF Evaluation Strength Assessment Scale
The Evaluation Strength Assessment Scale has been developed to help the NCPC assess the strength of the evaluation plans for demonstration projects submitted to the Research and Knowledge Development Fund (RKDF).

Guiding principles

This scale will be used to assess evaluation strength only (in terms of methodology). The strength of the evaluation then influences what can be concluded about project effectiveness. A Project Effectiveness Scale has also been developed that draws upon the results of this scale to indicate the degree to which a project can be deemed effective or ineffective.

The categories are (from highest to lowest):
- Exemplary
- Excellent
- Notable
- Basic
- Exploratory
It is recognized that some degree of discretion will be used when assigning an evaluation to a specific category. A particular evaluation may not have all of the required or recommended elements but may instead have most of them. In these cases, the assessor will have to use his or her judgement as to which category best fits, and the reasoning will need to be justified.

It is acknowledged that basic and exploratory evaluations can provide valuable information, especially related to the challenges encountered in implementing and evaluating certain projects. For the most part, however, evaluations of demonstration projects are intended to provide information on what works, for whom, and under what conditions, and will thus require rigorous evaluation designs. RKDF evaluations should strive to achieve the exemplary level, and go no lower than the notable level.

Qualitative evaluation designs will sometimes be the most appropriate way to answer specific research questions in the area of crime prevention through social development. The NCPC will use the framework for assessing qualitative evaluations produced by the Government Chief Social Researcher's Office in the United Kingdom to assess the quality of (and provide guidance regarding expectations for) qualitative designs.

It is important that all evaluations (regardless of level) strive to include the elements described below in order to strengthen their overall designs.
Categories of scale

Exemplary evaluations

- Randomized control group
- Attrition is low or differences as a result of attrition are fully analyzed
- Includes pre-test and post-test on both groups
- Adequate sample size -- evaluators explain why it is considered adequate; in other words, large enough to provide statistical power to detect effects
- Clearly defined constructs
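The "adequate sample size" criterion above can be made concrete with a standard power calculation. The sketch below — purely illustrative, not part of the scale itself — uses the common normal-approximation formula n = 2 × ((z₁₋α/₂ + z₁₋β) / d)² to estimate the per-group sample size needed to detect a standardized effect of size d; the function name and parameter values are assumptions chosen for the example.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sided, two-sample comparison of means.

    Uses the normal-approximation formula n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2,
    where d is the standardized (Cohen's d) effect size the evaluation
    should be able to detect.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the conventional alpha = 0.05 and 80% power
# requires roughly 63 participants per group under this approximation.
print(n_per_group(0.5))  # 63
```

An evaluator justifying sample size, as the criterion requires, would state the smallest effect the project is expected to produce and show the recruited groups meet or exceed the corresponding n.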
Excellent evaluations

- Comparison group utilized -- participants in the experimental and comparison groups are matched on relevant characteristics. The factors that are relevant for matching are discussed, and evaluators explain why the matching is adequate and why a randomized design was not possible. The matching process itself is described fully, and differences between groups are discussed and taken into account in the evaluation (e.g. by way of statistical controls or by examining different sub-groups). If matching is not used, the evaluators must clearly explain why the differences between groups were not significant enough to affect results
- Attrition is low or differences as a result of attrition are fully analyzed
- Includes pre-test and post-test on both groups
- Adequate sample size -- evaluators explain why it is considered adequate; in other words, large enough to provide statistical power to detect effects
- Clearly defined constructs
- A discussion of history, instrumentation and testing effects is provided
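The matching requirement above can be illustrated with a minimal sketch of nearest-neighbour matching without replacement — one common way (among several) of pairing experimental and comparison participants on relevant characteristics. The function, identifiers, and covariates below are hypothetical examples, not a prescribed method.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def greedy_match(treated, pool):
    """Greedy 1:1 nearest-neighbour matching on covariate vectors.

    treated, pool: lists of (id, covariates) pairs, where covariates is a
    tuple of already-standardized numeric characteristics (e.g. age,
    prior-risk score). Returns (treated_id, comparison_id) pairs; each
    comparison unit is used at most once (matching without replacement).
    """
    available = list(pool)
    matches = []
    for tid, tcov in treated:
        best = min(available, key=lambda c: dist(tcov, c[1]))
        available.remove(best)
        matches.append((tid, best[0]))
    return matches

# Hypothetical standardized covariates: (age_z, risk_score_z)
treated = [("T1", (0.1, 0.9)), ("T2", (-1.0, 0.0))]
pool = [("C1", (0.0, 1.0)), ("C2", (-1.1, 0.1)), ("C3", (2.0, 2.0))]
print(greedy_match(treated, pool))  # [('T1', 'C1'), ('T2', 'C2')]
```

The criterion's real weight is in the write-up: which covariates were chosen and why, how close the resulting pairs are, and how any remaining imbalance is handled analytically.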
Notable evaluations

- Comparison group utilized -- not matched
- Clear description of how participants were chosen and put into groups -- differences between groups and implications are discussed
- Explanation of why randomization and/or a matched comparison group was not possible
- Attrition is low or differences as a result of attrition are fully analyzed
- Includes pre-test and post-test on both groups
- Adequate sample size -- evaluators explain why it is considered adequate; in other words, large enough to provide statistical power to detect effects
- Clearly defined constructs
- A discussion of history, regression, maturation, instrumentation and testing effects is provided
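With an unmatched comparison group plus pre- and post-tests, one simple way to partially address history and maturation effects is a difference-in-differences calculation: subtracting the comparison group's pre-to-post change from the experimental group's change removes trends that affect both groups alike. The sketch below uses hypothetical scores for illustration only.

```python
def diff_in_diff(exp_pre, exp_post, comp_pre, comp_post):
    """Difference-in-differences estimate of the program effect.

    Subtracting the comparison group's pre-to-post change from the
    experimental group's change adjusts for shared trends (history,
    maturation) that influence both groups equally.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(exp_post) - mean(exp_pre)) - (mean(comp_post) - mean(comp_pre))

# Hypothetical outcome scores (higher = better), illustration only:
# experimental group improved by 5 points, comparison group by 2,
# leaving an estimated program effect of 3.0.
exp_pre, exp_post = [10, 12, 11], [15, 17, 16]
comp_pre, comp_post = [10, 11, 12], [12, 13, 14]
print(diff_in_diff(exp_pre, exp_post, comp_pre, comp_post))  # 3.0
```

This adjusts only for trends common to both groups; it is precisely because unmatched groups may differ in how they change over time that the discussion of group differences required above remains essential.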
Basic evaluations

- Pre-test and post-test conducted
- Attrition is low or differences as a result of attrition are fully analyzed
- Adequate sample size -- evaluators explain why it is considered adequate; in other words, large enough to provide statistical power to detect effects
- Clearly defined constructs
- Implications of maturation of participants on program effects are discussed
- Discussion of history, regression, instrumentation and testing effects is provided
- Adequate pre-test data are provided to assist in understanding the extent to which effects may be attributable to regression
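The attrition criterion repeated across the categories above amounts to a concrete check: report the drop-out rate and compare the baseline scores of those who completed against those who left, since a large gap suggests attrition bias rather than random loss. A minimal sketch, using hypothetical participant data and an assumed function name:

```python
def attrition_summary(baseline, completed):
    """Compare baseline scores of completers versus drop-outs.

    baseline: dict mapping participant id -> pre-test score
    completed: set of ids that also provided a post-test
    Returns (attrition_rate, completer_mean, dropout_mean). A large gap
    between the two means indicates that those lost to follow-up differed
    systematically from those retained, which must then be analyzed.
    """
    stayers = [s for pid, s in baseline.items() if pid in completed]
    leavers = [s for pid, s in baseline.items() if pid not in completed]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    rate = len(leavers) / len(baseline)
    return rate, mean(stayers), mean(leavers)

# Hypothetical pre-test scores; P4 and P5 dropped out before the post-test.
baseline = {"P1": 20, "P2": 22, "P3": 21, "P4": 30, "P5": 32}
print(attrition_summary(baseline, {"P1", "P2", "P3"}))  # (0.4, 21.0, 31.0)
```

Here 40% attrition combined with drop-outs scoring markedly higher at baseline is exactly the pattern the scale asks evaluators to analyze fully rather than ignore.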
Because evaluations at the basic level leave many questions unanswered as to their ability to rule out alternative explanations for project effects, the NCPC would generally not fund a demonstration project that planned to employ this type of evaluation design.

Exploratory evaluations

- Designs where there is no comparison group and where only a post-test is conducted
- Process evaluations with a strongly demonstrated link to the existing literature/theory. Process evaluations do not report on outcomes. Instead, they focus on determining things such as (but not limited to) whether the project was implemented as planned, whether the target group was reached, the types of activities offered, the satisfaction of participants, and what works well and does not work well when working with specific groups. This type of information is important and will aid in the development of future projects that are more amenable to stronger evaluation designs.

Because of the methodological limitations of post-test only designs, these will not be funded for outcome evaluations of RKDF demonstration projects. Nor will process evaluations be funded unless they are carried out in conjunction with outcome evaluations.