Introduction


1.1 Background

In 1994, the Government of Canada announced its intention to renew and revitalise the country's social security system to create an environment that better rewards effort and offers incentives to work. To this end, Human Resources Development Canada (HRDC) launched the Strategic Initiatives (SI) program to provide a funding mechanism for the federal government to work in partnership with provincial and territorial governments to test new and innovative approaches in high priority areas of employment, education and income security. Projects supported by SI were funded on a 50/50 basis between the federal and provincial/territorial governments.

Negotiations took place between HRDC and the Alberta government departments of Family & Social Services (F&SS) and Advanced Education & Career Development (AE&CD) to identify projects that would be eligible for SI funding. Negotiations led to an agreement to fund the Integrated Training Centres for Youth (ITCY) pilot project.

Tenders were called in early 1995 for agencies to establish ITCYs. Three proposals were accepted, and contracts were negotiated with the following agencies:

  1. Career High — a program of the Chinook School Division to provide training to youth at two sites in Red Deer and Innisfail;
  2. Destinations — a partnership between Hennig Research and Consulting, Skye Consulting and the Northern Alberta Institute of Technology (NAIT) to provide training to youth in Edmonton;
  3. Fifth on Fifth Youth Services — a program of the Lethbridge Youth Foundation to provide integrated training to youth in Lethbridge and area.

The focus of the ITCY pilot project was on youth who had dropped out of school and were having difficulty achieving significant labour force attachment. Youth interested in attending an ITCY had to meet basic Strategic Initiatives eligibility requirements:

  • age 16 to 20;
  • have less than grade 12 education;
  • out of school for a minimum of 3 months with no intention of returning;
  • unemployed or underemployed;
  • motivated to work.

The pilot project would test the value of customised training and work site interventions for youth who were at risk of long-term dependence on government support, in order to help them make a successful transition to employment.

The ITCYs began accepting clients in the spring and summer of 1995. A process evaluation commenced in May of 1995 under the supervision of an Evaluation Steering Committee made up of representatives from the three sponsoring departments. The final report was submitted in January 1996. HRDC published the final report entitled: Integrated Training Centres for Youth: A Process Evaluation in June of 1996.

As part of the contract for the process evaluation, the consultants were responsible for designing an outcome evaluation framework (Workplan for an Outcome Evaluation of the Integrated Training Centres for Youth. January 1996) complete with procedures for the selection/assignment of a comparison group, along with the forms and procedures needed to collect outcome data.

The consultants began work on the outcome evaluation in October 1996. An Interim Report was submitted to the department of AE&CD in May 1997. The report consists of qualitative findings from interviews with a variety of ITCY stakeholders, including agency staff and clients, employers, government representatives and community agencies involved with youth. Key results from the Interim Report have been brought forward into this report to assist in drawing final conclusions (see Chapter 4.0).

1.2 Program description

AE&CD wanted the ITCY pilot project to incorporate certain features of integrated training based on a model developed by the Center for Employment Training in the United States, for example:

  • open entry/exit;
  • individualised, self-paced training;
  • intensive, hands-on, competency-based, job-specific skill training;
  • academic skill training linked directly to occupational skills;
  • industry and counselling-oriented instructors;
  • integrated support services (financial incentives, mentoring, counselling, job placement, job maintenance).

The program emphasis was on integrating practical job and life management skills with ongoing coaching and support services tailored and sequenced to the individual needs of each participant.

Figure 1 below provides an overview of how clients typically access services at an ITCY. Various components of ITCY programming are described in Appendix A.2

1.3 Methodology

The outcome evaluation methodology combines two different designs:

  • Applicant-Based Design — the comparison of variables between clients who received the Integrated Training (IT) intervention and other youth who did not. Clients were designated members of the Program Group (PG) if they received 2 or more weeks of training. Clients who applied to the program but did not complete at least 2 weeks of training were designated members of the Comparison Group (CG).

  • Pre-Post Design — the measurement of changes in a set of "Baseline" attitudinal variables for both PG and CG clients.
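The two design rules above amount to a simple classification and a difference score. A minimal sketch (the two-week threshold and the PG/CG labels come from the design above; the function and variable names are purely illustrative):

```python
def assign_group(weeks_of_training):
    """Applicant-based design: applicants who received 2 or more weeks
    of the IT intervention form the Program Group (PG); applicants who
    did not complete at least 2 weeks form the Comparison Group (CG)."""
    return "PG" if weeks_of_training >= 2 else "CG"

def pre_post_change(baseline_score, followup_score):
    """Pre-post design: change in a baseline attitudinal measure."""
    return followup_score - baseline_score

print(assign_group(6))                          # client with 6 weeks of training -> PG
print(assign_group(0))                          # applicant who never started -> CG
print(round(pre_post_change(3.2, 4.1), 1))      # attitude change of 0.9
```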

Data for the outcome evaluation was collected primarily through a series of survey instruments (Appendix B) 3 delivered at different stages of the intervention:

  • Baseline Survey — administered to PG and CG members at intake to capture family/employment history, demographic data and attitudes that might be affected by the intervention.

  • Exit Surveys — administered to PG members upon completion of key stages of the intervention to identify services received, client satisfaction and changes in baseline attitudinal variables. Exit A was to be administered after completion of training, and Exit B after a 4-month job maintenance period.4

  • Follow-up Surveys — administered to PG and CG members at several points after intake to document employment status (Follow-up A) and changes in baseline attitudinal variables (Follow-up B).

Comparison Group

The original intention, as documented in the Outcome Evaluation Workplan, was to create an experimental "control group" in Edmonton whose members would be very similar to those in the PG; for example, it was originally proposed that eligible clients be assigned randomly to the two groups. A number of practical limitations arose which prevented implementing the experimental design as proposed. Project sponsors agreed to an alternative approach wherein the CG would be composed of a range of youth, all "at risk"5 and otherwise eligible for the IT intervention, but not necessarily equivalent to those who eventually formed the PG. (See Appendix C for further discussion of changes to the original experimental design.)

The conceptual flow of youth to the Program and Comparison Groups in Edmonton is outlined in Figure 2 below. The schematic also fits the Red Deer program, although procedures for administering the Baseline Survey were somewhat different. Also, Red Deer did not have a waiting list.

Data Collection

Client intake and job training commenced in July 1995. Clients whose intake was later than the October 1996 cut-off date were excluded from the outcome study. AE&CD hired screeners/trackers (1 each for Edmonton and Red Deer) to administer all the data collection instruments.6 Although the workplan for the outcome evaluation outlined a detailed schedule for data collection, the trackers were not able to adhere to the schedule and, therefore, a significant amount of data was collected retrospectively.7

Because of the delay in tracking CG members after their initial contact with the tracker, and in contacting PG clients after they had left the training program, many could not be located for follow-up interviews and response rates for individual months were low. In order to increase the representativeness of the measurement periods, a consolidation process was used to maximise the effective response rate. The data was consolidated at points 3, 6, 9, 12, 15, and 18 months from baseline. If data for a given month (e.g., month 3) was not available, the consolidation procedure used the data for the month prior (e.g., month 2). If data for that month was also not available, then data for the month following (e.g., month 4) was used.
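The consolidation rule described above can be sketched as follows (the function and data names are illustrative, not drawn from the study's tracking system):

```python
def consolidate(observations, month):
    """Pick the observation closest to a target measurement point.

    Preference order, per the consolidation rule: the target month
    itself, then the month prior, then the month following.
    `observations` maps months-from-baseline -> recorded value.
    """
    for m in (month, month - 1, month + 1):
        if m in observations:
            return observations[m]
    return None  # client could not be located in this window

# e.g., a client actually interviewed in months 2 and 7:
client = {2: "employed", 7: "unemployed"}
print(consolidate(client, 3))   # month 3 missing, falls back to month 2 -> "employed"
print(consolidate(client, 6))   # months 6 and 5 missing, uses month 7 -> "unemployed"
print(consolidate(client, 12))  # no data in months 11-13 -> None
```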

Attitude measures were similarly grouped into 2 post-intervention periods:

  1. Months 3 - 8 from baseline, which would measure changes in attitudes that may be attributed to the intervention;
  2. Months 11 - 13 from baseline, which would measure changes in, and the sustainability of, attitudes 1 year from baseline.

Table 1 documents the response rates for each of the measurement instruments and their associated data periods.

Table 1 - Response Rates

                                                     Edmonton         Red Deer
                                                   PG      CG       PG      CG
  Total Population                                 n=290   n=180    n=145   n=88
  Exit A (client satisfaction with training)       48%     -        61%     -
  Exit B (client satisfaction with job
    maintenance)                                   42%     -        60%     -
  Follow-up A (employment status)
    3 months from baseline                         11%     54%      25%     82%
    6 months from baseline                         16%     28%      34%     67%
    9 months from baseline                         56%     37%      52%     68%
    12 months from baseline                        67%     38%      63%     52%
    15 months from baseline                        53%     28%      65%     45%
    18 months from baseline                        37%     9%       51%     33%
  Follow-up B (attitudinal measures)
    3 - 8 months from baseline                     31%     39%      54%     60%
    11 - 13 months from baseline                   39%     22%      21%     14%

Bias Between Responders and Non-Responders

Responders and non-responders were found to be very similar in both Edmonton and Red Deer. They were significantly different at the .01 level 8 on only 2 attitudinal variables. For example, in Edmonton responders were more likely than non-responders to indicate they had a lot of support around them. In Red Deer, responders were more likely to feel they had the skills to get a job. No correction for bias between responders and non-responders was considered necessary, and the results reported for the PG sample are considered representative of those who took the training.

Bias Between Program Group and Comparison Group

When random assignment of subjects to program and comparison groups is not feasible, as in this study, selection bias (due both to self-selection and program selection factors) presents a considerable challenge. More specifically, the problem of selection bias occurs when some determinant of earnings is correlated with whether or not a person received training.

Two classes of variables must be considered: measured and unmeasured. Measured variables present the least difficult problem, in that standard statistical procedures (e.g., analysis of covariance) can account for their impact through multivariate regression techniques. Unmeasured variables pose a more difficult problem, and have been the subject of much discussion and analysis, particularly among econometricians investigating the impact of training programs.9

Two clusters of variables known to influence earnings, and which therefore present a potential source of selection bias, are demographic (e.g., age, race, gender, education, and prior work experience and wages) and motivational or attitudinal variables. Bell et al. have shown that the unmeasured components of these clusters can be reasonably dealt with through the use of non-participating program applicants (e.g., screen-outs and no-shows) as comparison subjects. These authors argue and successfully demonstrate that non-participating program applicants provide a reasonable alternative to random assignment in controlling for unmeasured components of selection bias in the evaluation of training programs, particularly when coupled with standard regression techniques to control for measured demographic variables. It is argued that unmeasured motivational/attitudinal variables are, a priori, controlled for in large part by using subjects who applied to the program but did not participate.

The argument by Bell et al. is made stronger if comparison subjects neither self-select out of the program nor are selected out by program staff. Those in the present study who were on the waiting list but were not invited to participate due to lack of space (Waiting List-Not Invited) fit this category. These subjects represent about 40% of the Edmonton CG, but none, unfortunately, of the Red Deer CG. In Edmonton, very few clients were selected out by program staff, mitigating possible bias from program selection factors.

The argument is also strengthened when motivational and attitudinal variables are explicitly measured at baseline, as was done in the present study. This enables the incorporation of these variables (along with relevant demographic variables, which were also measured in this study) into the vector of covariates used to provide statistical control of variables associated with earnings.

The covariates consistently used to control for bias include:

  • age;
  • gender;
  • education;
  • ever had paid employment;
  • mobility in 5 years prior to Baseline;
  • living arrangement at Baseline;
  • visible minority or aboriginal;
  • criminal record;
  • single parent;
  • household history of SFI before client age 16;
  • client history with SFI in 3 years prior to Baseline;
  • employment history in year prior to Baseline;
  • number of factors (out of 10) where a client would be considered "at risk" of needing Government assistance;
  • mean attitude scores (work, self and life).
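As a rough illustration of the covariate-adjusted comparison described above (analysis of covariance via ordinary least squares), the sketch below regresses a simulated earnings outcome on PG/CG status plus a few covariates. The data are simulated and the variable names are not taken from the study's files; only the technique matches the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n).astype(float)    # PG = 1, CG = 0
age = rng.integers(16, 21, n).astype(float)      # covariate: age
attitude = rng.normal(0.0, 1.0, n)               # covariate: mean attitude score

# Simulated monthly earnings: a true program effect of 200, plus
# covariate effects and noise (all magnitudes invented for the demo).
earnings = 200 * treated + 50 * age + 100 * attitude + rng.normal(0, 100, n)

# Analysis of covariance via OLS: regress earnings on treatment status
# and the measured covariates; the design matrix gets an intercept column.
X = np.column_stack([np.ones(n), treated, age, attitude])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)

# beta[1] is the covariate-adjusted estimate of the program effect;
# it should recover roughly the true value of 200 used in the simulation.
program_effect = beta[1]
```

Unmeasured confounders are, of course, exactly what such a regression cannot adjust for, which is why the Bell et al. applicant-based comparison group is used alongside it.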


Footnotes

2 A description of the Lethbridge ITCY has been omitted as no outcome data for the program was collected owing to difficulties in obtaining the resources necessary to track clients. The program is described in the Process Evaluation report.
3 The instruments were designed to address issues in a Monitoring & Evaluation Framework prepared by project sponsors dealing with both process and outcome dimensions.
4 The consultants also designed a brief "Early Exit" survey to obtain information from clients who did not complete their training. This information relates more to process, and is therefore of lesser relevance to the outcome evaluation.
5 Youth filling out the Baseline Survey were all screened for risk factors that would have rendered them eligible for IT.
6 Data collection in Edmonton was taken over by a private consulting firm in April 1996.
7 The majority of follow-up employment measures were collected in a timely manner in Red Deer.
8 The .01 level was used because of the large number of baseline variables available for comparison.
9 See, for example: Heckman, J. J. "Sample Selection Bias As A Specification Error". Econometrica, 47 (1979): 163-174; and Bell, S. H., Orr, L. L., Blomquist, J. D. and Cain, G. G. Program Applicants as a Comparison Group in Evaluating Training Programs. Michigan: W. E. Upjohn, 1995.

