D. Outcome Specification and Measurement


Four criteria guided the Panel's thoughts on outcome specification and measurement:
  1. It seems likely that for most evaluations some of the key measures [38] will focus on employment in the post-program [39] period.
  2. Given this focus, sufficient employment information should be collected so that a variety of specific measures can be calculated. This will aid in tailoring outcome measures to provide an accurate reflection of specific program purposes [40].
  3. Data on a number of other key socio-economic variables should be collected, primarily for use as control variables in studying employment outcomes.
  4. Survey data collection techniques should strive for direct comparability across different regional evaluations.

Seven specific actions that might serve to meet these criteria are:

  1. Barring special considerations, the follow-up survey should occur at least two years after Action Plan start dates. The intent of this recommendation on timing is to offer a high probability that interventions are completed well before the interview date. Because the first evaluations are contemplating Fall 2001 interview dates, this would require that action plans with start dates during FY99 (April 1, 1998 to March 31, 1999) be used. Evaluations with later interview dates might focus on FY00 instead. (A simple check of this timing rule is sketched after this list.)
  2. When possible, similar employment history questions should be used in all of the evaluations. Because post-start and post-program employment will probably be the focus of most of the EBSM evaluations, it seems clear that significant survey resources should be devoted to its measurement. Prior studies have documented that the use of different data collection instruments can lead to quite different estimated results (Heckman, LaLonde, and Smith 1999). To avoid this source of variation, evaluators should be encouraged to use the same question batteries. Similarly, data cleaning and variable construction routines should be coordinated across the evaluations. Evaluators should also consider how documentation in the possession of respondents (for example, tax documents) can best be used in the collection of these data.
  3. A number of employment-related measures should be constructed. The goal of this recommendation is to ensure that the outcome measures being used in the evaluations are in fact appropriate to the intended program goals. Although all evaluations would be expected to construct the same basic measures (such as weeks employed during the past year, total earnings during that period, number of jobs held, and so forth; a sketch of constructing such measures appears after this list), there would also be some flexibility for specific regions to focus on the measures they considered most appropriate to the package of benefits being offered. For example, outcomes for clients in Skills Development interventions might focus on wage increases in the post-program period or on changes in the types of jobs held. Outcomes for participants in Targeted Wage Subsidy programs might focus on success in transitioning to unsubsidized employment. And it may prove very difficult to design ways of measuring the long-term viability of the self-employment options pursued by some clients. As mentioned previously, there is a clear need for further research on precisely how measured outcomes and interventions will be linked. At the more conceptual level, there is a need to show explicitly how the outcomes to be measured in the evaluations are tied to the general societal goals of the EBSM program (as stated, for example, in its enabling legislation).
  4. It is desirable that a core module for collecting data on other socio-economic variables be developed. Such a module could be used across most of the evaluations. The goal of such a module would be to foster some agreement about which intervening variables should be measured and to ensure that these would be available in all of the evaluations. In the absence of such an agreement, it may be very difficult to compare analytical results across regions because the analyses from different evaluations would be controlling for different factors. Pooling of data for cross-region analysis would also be inhibited if such a module were not used. Clearly, Human Resources Development Canada (HRDC) has a direct interest in such cross-region analyses, both because they may make some estimates more precise and because they may identify important determinants of differential success across regions [41]. Hence, it should consider ways in which all evaluators could be encouraged to use similar core modules, perhaps by developing them under separate contract. (An illustrative module specification appears after this list.)
  5. Additional follow-up interviews should be considered, at least in some locations. Although most evaluations will probably utilize a one-shot survey approach, the Panel believed that evaluators should be encouraged to appraise what might be learned from a subsequent follow-up (perhaps 24 months after the initial survey). Such additional data collection would be especially warranted in cases where interventions promise only relatively long-term payoffs. Additional follow-up interviews, if deemed crucial to an evaluation, would likely be independently contracted. Regardless of whether a follow-up interview is included as part of an evaluation design, the Panel believed that HRDC should make arrangements that would enable evaluation participants to be followed over time using administrative data on EI participation and (ideally) earnings (see the next point).
  6. Administrative data should be employed to measure some outcomes in all of the evaluations. Timing factors may prevent the use of administrative earnings data (from T4 slips) to measure outcomes in the evaluations as currently contracted (though this could be part of a follow-up contract), but EI administrative data should be utilized to the fullest extent practicable. These data can provide the most accurate measures of EI receipt and can also shed some light on the validity of the survey results on employment. Administrative data can also be used in the evaluations to construct measures similar to those in the medium term indicators (MTI) pilot project, thereby facilitating comparisons between the two studies (see Section E below). Using administrative data to measure outcomes also has benefits that would extend far beyond individual evaluation contracts. In principle, it should be possible to follow members of the participant and comparison groups for many years using such data, which would aid in determining whether program impacts observed in the evaluations persisted or were subject to rapid decay (a sketch of such a persistence check follows this list). It is also possible that assembling the longer longitudinal data sets made possible by administrative data could shed some light on the validity of the original impact estimates, by making fuller use of measured variations in the time-series properties of earnings for participant and comparison groups.
  7. Cost-benefit and cost-effectiveness analyses should be considered, but they are likely to play a secondary role in the evaluations. Impact estimates derived in the evaluations could provide the bases for cost-benefit and cost-effectiveness analyses. Development of a relatively simple cost-effectiveness analysis would be straightforward, assuming data on incremental intervention costs are available (the arithmetic is sketched after this list). The utility of such an analysis depends importantly on the ability to estimate impacts of specific interventions accurately, both in terms of the quasi-experimental designs employed and the adequacy of sample sizes to provide sufficiently detailed estimates. Still, it may be possible to make some rough cross-intervention comparisons.

    Conducting extensive cost-benefit analyses under the evaluations might present more significant difficulties, however, especially given the sizes of the budgets anticipated. The primary obstacles to a comprehensive cost-benefit analysis include the possibility that many of the social benefits of the EBSM program may be difficult to measure, that estimating the long-run impacts of the programs may be difficult, and that the overall size of the program may make displacement effects significant in some locations. Methodologies for addressing this last issue in any simple way are especially problematic. For all of these reasons, the Panel believed that the planned modest budgets of the various evaluations would not support the kind of research effort required to mount a viable cost-benefit analysis. However, the Panel strongly believed that some sort of cost-benefit analysis should be the ultimate goal of the evaluations because stakeholders will want to know whether the programs are "worth" what they cost. Hence, it may be prudent for HRDC to consider how a broader cost-benefit approach might be conducted by using the combined data [42] from several of the evaluations as part of a separate research effort.
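The sketches below, all in Python, are illustrative only; function and field names are invented for exposition and are not part of any survey instrument or HRDC specification. First, the timing rule from point 1, with the two-year minimum approximated as 730 days:

    from datetime import date

    def eligible_for_followup(action_plan_start, interview_date):
        # At least two years (approximately 730 days) must elapse between
        # the Action Plan start date and the follow-up interview.
        return (interview_date - action_plan_start).days >= 730

    # FY99 start dates (April 1, 1998 to March 31, 1999) all satisfy the
    # rule for a Fall 2001 interview date:
    print(eligible_for_followup(date(1999, 3, 31), date(2001, 10, 1)))  # True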
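Next, the basic employment measures from point 3. This sketch assumes respondents report employment spells with start and end dates and an average weekly earnings figure; an actual evaluation would derive these from its own question batteries.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class EmploymentSpell:
        start: date             # first day of the job
        end: date               # day after the last day (exclusive)
        weekly_earnings: float  # reported average weekly earnings

    def weeks_employed(spells, window_start, window_end):
        # Sum the days each spell overlaps the reference window
        # (e.g., the year preceding the interview).
        days = 0
        for s in spells:
            lo, hi = max(s.start, window_start), min(s.end, window_end)
            if hi > lo:
                days += (hi - lo).days
        return days / 7.0

    def total_earnings(spells, window_start, window_end):
        # Approximate earnings in the window from weekly rates.
        return sum(weeks_employed([s], window_start, window_end) * s.weekly_earnings
                   for s in spells)

    def jobs_held(spells, window_start, window_end):
        # Count distinct jobs overlapping the window.
        return sum(1 for s in spells
                   if s.end > window_start and s.start < window_end)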
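The core module from point 4 amounts to an agreed list of variables with common definitions and common consistency checks. The variable list below is an invented example of what such an agreement might cover:

    # Hypothetical core-module variables; actual content would be agreed
    # across evaluations so that pooled, cross-region analyses control
    # for the same factors.
    CORE_MODULE_FIELDS = {
        "region_code": str,           # evaluation region identifier
        "age": int,
        "sex": str,
        "education_level": str,       # highest level completed
        "prior_ei_weeks": int,        # EI weeks collected before the Action Plan
        "prior_year_earnings": float,
    }

    def validate_record(record):
        # Apply identical checks in every evaluation before records are
        # pooled; return a list of problems found.
        problems = []
        for field, ftype in CORE_MODULE_FIELDS.items():
            if field not in record:
                problems.append("missing: " + field)
            elif not isinstance(record[field], ftype):
                problems.append("wrong type: " + field)
        return problems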
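For the persistence question raised in point 6, the sketch below computes a raw participant-comparison difference in mean earnings for each post-program year from a hypothetical longitudinal panel. A real evaluation would substitute a matched or regression-adjusted estimator for this unadjusted difference.

    from statistics import mean

    def impact_by_year(panel):
        # panel: list of dicts with keys "group" ("participant" or
        # "comparison"), "year", and "earnings" (e.g., from T4 records).
        impacts = {}
        for y in sorted({r["year"] for r in panel}):
            p = [r["earnings"] for r in panel
                 if r["year"] == y and r["group"] == "participant"]
            c = [r["earnings"] for r in panel
                 if r["year"] == y and r["group"] == "comparison"]
            if p and c:
                # A difference that shrinks across years suggests decay;
                # a stable difference suggests a persistent impact.
                impacts[y] = mean(p) - mean(c)
        return impacts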
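Finally, the simple cost-effectiveness analysis contemplated in point 7 reduces to cost per unit of impact. The figures below are invented purely to show the arithmetic:

    def cost_effectiveness(incremental_cost, impact):
        # Dollars of incremental program cost per unit of estimated
        # impact (e.g., per additional week of employment).
        if impact <= 0:
            raise ValueError("undefined for non-positive impacts")
        return incremental_cost / impact

    # An intervention costing $3,000 per client that yields 4 additional
    # weeks of employment costs $750 per employment-week gained:
    print(cost_effectiveness(3000.0, 4.0))  # 750.0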


Footnotes

38 Other measures might include program completion, Employment Insurance (EI) collections, and subsequent enrollment in additional Employment Benefits and Support Measures (EBSM) interventions.
39 In actuality, employment should usually be measured from the entry date into the program (and from a similar date for comparison cases) because this permits the most accurate appraisal of foregone employment opportunities caused by program participation.
40 In Section F we discuss the need to develop a research program to study how outcomes should be related to EBSM and more general societal program goals.
41 The desirability of pooling data from different evaluations is discussed further in Section F.
42 At a minimum, contractors should be encouraged to provide public-use data sets that might be combined by other researchers in the development of cost-benefit or other types of analysis.

