Quasi-Experimental Evaluation


Program manager: Why do we need to evaluate our program? We have a good handle on what's going on with our program and our clients, and we know we are very successful.

Evaluator: Because you never know if it was your program or something else that produced the success you are claiming.

Program manager: Of course we know it's our program. What else would cause all these people to find jobs so quickly?

Evaluator: Maybe the rapid economic expansion that we are now enjoying.

Program manager: That's nonsense. Anyway, we know we have to have our program evaluated. But why do we need to go through the trouble of finding a comparison group to do an evaluation?

Evaluator: Let's say that six months after finishing your training program, 70% of the trainees are working. Would you consider that proof your program is a success?

Program manager: I'd say so, yes. We'd like to do better than 70% (in fact, we believe we are doing better than that), but I'd say that having 70% of our trainees working would make our program look very good, especially given the barriers many of our clients face when they come to us.

Evaluator: What proportion of these individuals might be working now if they hadn't gone through your training program?

Program manager: I'm not sure, but it wouldn't be as high as 70%, I can tell you that.

Evaluator: Well, you don't really know that, though. For all you know, 80% might be working now if they hadn't taken the training.

Program manager: No way. You don't know how many obstacles our clients face when they come to us. We are providing a valuable service and are really helping our clients.

Evaluator: That may be so, but you haven't proven it. And the sponsors of the program need to know with certainty how successful you have been. They need to know they are getting bang for their buck.

Program manager: They are, I assure you.

Evaluator: Okay, let's say you are doing some good: individuals who go through your program are indeed more likely to find a job than if they hadn't been trained. How much of an effect are you having? Would half of them have found jobs anyway? One-third? Two-thirds? You can't know that unless you do an evaluation that includes a comparable group of people who haven't taken your training.

Program manager: Even if half of them would have found a job without the training, isn't raising that proportion to 70% worth it?

Evaluator: I don't know. What did it cost to achieve that incremental 20 percentage points? And how long will the effects of training last?

Program manager: Our program is well worth the money . . .
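The arithmetic behind the evaluator's closing questions is worth making explicit. The short sketch below uses the figures from the exchange (70% of trainees working, with half assumed to have found work anyway); the cohort size and per-trainee cost are purely hypothetical assumptions added for illustration. The point is that only the incremental jobs can be credited to the program, and the relevant cost figure is the cost per incremental job, not the cost per trainee.

```python
# Worked example of incremental impact, using the dialogue's figures.
# The cohort size, counterfactual rate, and per-trainee cost are
# illustrative assumptions, not figures from the paper.

num_trainees = 1000          # hypothetical cohort size
observed_rate = 0.70         # 70% of trainees working six months later
counterfactual_rate = 0.50   # share assumed to have found work anyway
cost_per_trainee = 5_000     # hypothetical program cost per trainee

incremental_rate = observed_rate - counterfactual_rate    # 0.20
incremental_jobs = incremental_rate * num_trainees        # 200 extra jobs
total_cost = cost_per_trainee * num_trainees              # $5,000,000
cost_per_incremental_job = total_cost / incremental_jobs  # $25,000

print(f"Incremental impact: {incremental_rate:.0%} "
      f"({incremental_jobs:.0f} extra jobs)")
print(f"Cost per incremental job: ${cost_per_incremental_job:,.0f}")
```

Under these invented assumptions, a program that appears to cost $5,000 per trainee actually costs $25,000 per incremental job, and that is the kind of figure sponsors need in order to judge value for money.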

This contrived conversation illustrates the differing perspectives of program managers and evaluators when it comes to assessing the merits of a program. Managers live and work with the program every day: they care about their program and work hard to make it a success; they strongly believe they are doing a good job; and they understandably resent any implication that they are not.

Evaluators usually have no connection with the program and, more importantly, no stake in its survival (which sometimes leads them to underestimate the threat that an evaluation can pose to program management and staff). They know that managers are heavily invested in their program, and that a manager's assessment of the program, even one aided by reliable monitoring data, will not be accepted by program sponsors as a valid test of whether the program is meeting its objectives and is worth what it costs. And they know that many factors unrelated to the design of the program can affect the outcomes of any social program and can easily lead to unwarranted conclusions about it.

This paper is based on the premise that only a good program evaluation can produce convincing evidence of a program's effectiveness in reaching its objectives, and that only an evaluation that includes a control or comparison group can provide estimates of program impact uncorrupted by the influence of other factors that may also affect outcomes. It summarizes the basics of evaluation research, focusing on what is usually the best practical approach: the "quasi-experimental design." It is written in non-technical language for managers who have little or no exposure to the field of evaluation, but includes more advanced treatments in appendices for those interested in the more technical aspects.

The paper begins with a brief introduction to evaluation, including a broad definition, and a summary of the two basic types of evaluation. It then moves to a discussion of evaluation design, starting with the reasons evaluators need to worry about appropriate designs. Common evaluation designs are introduced, with the pros and cons of each discussed. Designs without comparison or control groups are shown to fall well short of ideal in terms of ruling out extraneous causes. Since any good evaluation is fundamentally a comparison between what happened to program clients and what would have happened had they not been in the program, one-group designs virtually preclude any serious summative evaluation.
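As a rough illustration of why that comparison matters, the sketch below (using invented numbers, not results from any actual evaluation) contrasts a one-group before-and-after calculation with one common comparison-group calculation, a simple difference-in-differences, in a period when a booming economy is lifting everyone's employment rate. The one-group design credits the program with the entire improvement; the comparison-group design nets out the change that would have happened anyway.

```python
# Invented employment rates before and after the program period.
# "participants" took the training; "comparison" is a similar group that did not.

participants = {"before": 0.30, "after": 0.70}
comparison = {"before": 0.30, "after": 0.55}   # the economy improved for them too

# One-group (before/after) design: attributes the whole change to the program.
one_group_estimate = participants["after"] - participants["before"]   # 0.40

# Comparison-group design (difference-in-differences): subtracts the change
# the comparison group experienced without the program.
did_estimate = (participants["after"] - participants["before"]) - (
    comparison["after"] - comparison["before"]
)                                                                      # 0.15

print(f"One-group estimate of program impact: {one_group_estimate:.0%}")
print(f"Comparison-group (DiD) estimate:      {did_estimate:.0%}")
```

With these invented figures, the one-group design would claim a 40-point impact, while the comparison-group design attributes 25 of those points to the improving economy and only 15 to the program itself.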

Next, the paper explains the details of quasi-experimental evaluation design: its theoretical underpinnings and practical considerations. The concept of selection bias is explained, and econometric procedures to control for it are introduced. The confidence that can be placed in econometric estimates is also explored.

How to select comparison groups is the next consideration. The different techniques for matching are summarized, with an assessment of their relative strengths and weaknesses. Potential variables to use in conducting the matching are reviewed at this stage. Finally, with the general topic of quasi-experimental evaluation well explicated, we apply the lessons to the more specific tasks of determining the best sampling techniques for drawing regional Employment Benefits and Support Measures (EBSM) comparison groups and the best variables to use in the sampling.
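To give a preliminary sense of what "matching" involves before the detailed treatment later in the paper, the sketch below pairs each invented participant with the non-participant whose observed characteristics, here just age and prior earnings, are closest. This is only a minimal nearest-neighbour illustration under assumed data; the actual techniques and variables for the EBSM comparison groups are discussed in the sections that follow.

```python
# Minimal nearest-neighbour matching on two observed characteristics.
# All records are invented; real matching would use many more variables
# and a more careful distance (or propensity score) measure.

participants = [
    {"id": "P1", "age": 24, "prior_earnings": 12_000},
    {"id": "P2", "age": 47, "prior_earnings": 31_000},
]
non_participants = [
    {"id": "N1", "age": 23, "prior_earnings": 13_500},
    {"id": "N2", "age": 35, "prior_earnings": 22_000},
    {"id": "N3", "age": 49, "prior_earnings": 29_000},
]

def distance(a, b):
    # Scale each variable roughly so neither dominates the comparison.
    return (abs(a["age"] - b["age"]) / 10
            + abs(a["prior_earnings"] - b["prior_earnings"]) / 10_000)

# For each participant, pick the closest non-participant.
matches = {
    p["id"]: min(non_participants, key=lambda n: distance(p, n))["id"]
    for p in participants
}
print(matches)   # {'P1': 'N1', 'P2': 'N3'}
```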

