Government of Canada

2. Evaluation Methods


Six sources of information were used to evaluate Employment Benefits and Support Measures (EBSM): a review and analysis of administrative data; interviews with program officials; a survey of participants and a matched sample of non-participants; focus groups with Human Resources Centres of Canada (HRCC) staff; case studies at three HRCCs; and an econometric analysis to determine preliminary program impact.

2.1 Administrative Data Review

The overall purpose of the document review was to enable the evaluators to learn about the program and its context. This is imperative for the conduct of a formative evaluation, wherein a thorough understanding of the program as designed is the foundation for all subsequent work. The evaluation committee provided various documents during the initial meeting; these were read before the draft research instruments were prepared.

HRDC provided electronic files and file descriptions of administrative data concerning EBSM. The main purposes of the analysis of the administrative files were: to produce a profile of the program and its clients; to assess the monitoring and performance measurement system; and to validate the primary employment results indicator. In addition, the data were used to select samples for the participant and non-participant surveys, to address several evaluation issues, and to examine the data sets for completeness and accuracy.

2.2 In-depth Interviews

The purposes of the interviews were: to assess program implementation, management and operation; to determine informants' understanding of the goals and objectives of the program; to identify any major obstacles to achieving program objectives; to examine the federal-provincial partnership; and to gather suggestions for making the program more successful.

The first step was to obtain a list of interview subjects from the EBSM evaluation committee. Concurrently, interview guides were designed to govern the interviews.

Somewhat different guides were needed to reflect the different perspectives of HRCC managers, zone managers, regional managers, and provincial officials. Interviews with 25 HRDC and provincial government managers were completed.

2.3 Participant and Non-participant Surveys

Two separate survey instruments were created, one for participants and one for non-participants. Both questionnaires shared a lengthy core of questions on post-program activities, which set the stage for an econometric analysis of preliminary impact.

The questionnaires were reviewed by the evaluation committee, then pre-tested with about 40 respondents. Respondents had virtually no problems with the questions or response categories, with the length of the questionnaire, or with recalling details of interest.

While the questionnaires were being crafted, random samples of participants and non-participants were selected from administrative databases supplied by HRDC. Participants who started their intervention on or after January 1, 1997 and ended it on or before June 30, 1998 were eligible for selection. Selection was stratified by EBSM component to yield approximately a 6 percent margin of error for each component. The final number of cases by component is as follows:

Component N
Employment Assistance Services 209
Job Creation Partnerships 196
Purchase of training/Feepayer 284
Self-Employment 116
Targeted Wage Subsidies 228
TOTAL 1,033
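The roughly 6 percent target can be related to sample size with the standard formula for the margin of error of a proportion. The sketch below is illustrative only: it assumes a worst-case proportion of 0.5, a 95 percent confidence level, and no finite population correction, none of which are spelled out in the source.

```python
import math

def required_n(margin, p=0.5, z=1.96):
    """Sample size needed so that z * sqrt(p * (1 - p) / n) <= margin."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# Under these assumptions, a +/-6% margin at 95% confidence implies
# roughly 267 completed cases.
print(required_n(0.06))  # 267
```

Under these assumptions, the larger components in the table come close to that figure, while smaller components such as Self-Employment will carry somewhat wider margins.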

As for non-participants, a random sample was selected to resemble the participant sample as closely as possible. The literature points to a few key traits for matching, including age, sex, education, program eligibility, and employment/unemployment history. Program eligibility was a given, since the non-participant file came from Employment Insurance (EI) files. Because administrative data on education are not available for all clients, this variable was not used for matching. Samples were therefore matched on age, sex, and unemployment history.
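The matching logic can be sketched as a simple one-to-one nearest-neighbour pairing on the three variables the evaluators report using. The record layout, field names, and distance measure below are invented for illustration; the source does not describe the actual matching procedure.

```python
# Hypothetical one-to-one matching on age, sex, and unemployment
# history (weeks unemployed). All names and records are invented.

def match(participants, pool):
    """Pair each participant with the closest unused non-participant of
    the same sex, by combined distance on age and weeks unemployed."""
    pairs, used = {}, set()
    for p in participants:
        candidates = [c for c in pool
                      if c["sex"] == p["sex"] and c["id"] not in used]
        best = min(candidates,
                   key=lambda c: (abs(c["age"] - p["age"])
                                  + abs(c["weeks_unemp"] - p["weeks_unemp"])))
        used.add(best["id"])
        pairs[p["id"]] = best["id"]
    return pairs

participants = [{"id": 1, "sex": "F", "age": 30, "weeks_unemp": 10}]
pool = [{"id": "a", "sex": "F", "age": 29, "weeks_unemp": 12},
        {"id": "b", "sex": "M", "age": 30, "weeks_unemp": 10}]
print(match(participants, pool))  # {1: 'a'}
```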

A computer-assisted telephone interview (CATI) system was used to facilitate the phone surveys. Because of difficulties finding people at home and invalid telephone numbers, up to 20 attempts were made to reach each person in the sample before replacement. Most telephone interviews took place in the evenings or on weekends. On average, they lasted about 18 minutes for participants and 13 minutes for non-participants. Response rates were respectable for this target group: 62 percent for participants and 61 percent for non-participants. An analysis of non-response concluded there should be no substantial biases.

CATI generated a ready-made computerized file. It was carefully edited and imported into SPSS for statistical analysis.

The standard error is the key measure of the accuracy of the results; it was calculated using the gender variable. The standard errors and associated margins of error1 at a 95 percent confidence level are:

  SE Margin of error
EBSM 0.0145 ± 2.9%
Employment Assistance Services 0.0314 ± 6.2%
Job Creation Partnerships 0.0305 ± 6.0%
Purchase of training 0.0274 ± 5.4%
Self-Employment 0.0325 ± 6.4%
Targeted Wage Subsidies 0.0296 ± 5.8%
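Each margin of error in the table is the standard error scaled by the 95 percent critical value (z ≈ 1.96). A minimal check of that relationship, using two rows from the table above (small rounding differences are possible, since the reported standard errors are themselves rounded):

```python
def margin_of_error(se, z=1.96):
    """Half-width of a 95% confidence interval, as a percentage."""
    return round(100 * z * se, 1)

print(margin_of_error(0.0314))  # 6.2 (Employment Assistance Services)
print(margin_of_error(0.0296))  # 5.8 (Targeted Wage Subsidies)
```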

2.4 Focus Groups with HRCC Staff

The purposes of the focus groups were: to assess program implementation; to gain a better appreciation of how EBSM activities are carried out; to examine how the program is monitored; and to gather suggestions for improving the program.

Focus groups were held with all HRCC staff who could attend at the case study sites as well as in Halifax. A protocol to cover the issues was submitted to the committee for approval. Sessions lasted for two to three hours. The discussions were transcribed, then analyzed.

2.5 Case Studies

Case studies were conducted at HRCCs in Antigonish, Sydney and Yarmouth. The case studies were meant to explore how EBSM — a program characterized by a great degree of local flexibility — was implemented in different areas of the province. The case studies consisted of an administrative data review, interviews with the manager, employers, outreach groups and third party providers, a focus group with staff, a group meeting with a regional committee, and observations. Discussions were transcribed and analyzed.

2.6 Preliminary Econometric Analysis of Impact

Although it is too early at the formative evaluation stage to render a definitive verdict on program impact, it is important to look for early signs of program success. The operative question is whether the program appears to be accomplishing its objectives at this early stage.

This evaluation used a quasi-experimental design to estimate program effectiveness. This approach was required because participants and non-participants were not randomly assigned as would be the case in a true experiment. A variety of econometric and statistical techniques were used to assess whether the program had an incremental impact on work activity, earnings, receipt of employment insurance or social assistance, the allocation of time to work and/or school, and attitudes toward work and social assistance.

To measure the program's impact, pre-program data were obtained beginning two or more years prior to referral to the program. This information was important because it permitted a determination of the incremental impact of the program by controlling for biases caused by unobserved individual differences.
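The source does not spell out the estimators used, but the logic of using pre-program data to difference out stable individual characteristics can be illustrated with a simple difference-in-differences calculation. All figures below are invented for illustration.

```python
def diff_in_diff(pre_part, post_part, pre_non, post_non):
    """Change for participants minus change for non-participants:
    fixed individual differences cancel out of each within-group change."""
    return (post_part - pre_part) - (post_non - pre_non)

# Hypothetical mean annual earnings, before vs. after the program period.
impact = diff_in_diff(pre_part=20_000, post_part=24_000,
                      pre_non=21_000, post_non=22_500)
print(impact)  # 2500
```

The actual analysis would use richer econometric models, but the same intuition applies: any time-invariant unobserved difference between the groups drops out when each group is compared with its own pre-program baseline.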


Footnotes

1 In any one sample, the mean will usually differ from the population mean; the standard error measures the typical size of this sampling variation. To estimate how accurate the findings are, one calculates a "confidence interval" for the population mean. Confidence intervals, reported as the "margin of error" in everyday parlance, are adjustments that account for potential differences between the sample and the population.

