1.1 Background

In 1994, the Government of Canada announced its intention to renew and revitalise the country's social security system to create an environment that better rewards effort and offers incentives to work. To this end, Human Resources Development Canada (HRDC) launched the Strategic Initiatives (SI) program to provide a funding mechanism for the federal government to work in partnership with provincial and territorial governments to test new and innovative approaches in high-priority areas of employment, education and income security. Projects supported by SI were funded on a 50/50 basis between the federal and provincial/territorial governments. Negotiations took place between HRDC and the Alberta government departments of Family & Social Services (F&SS) and Advanced Education & Career Development (AE&CD) to identify projects that would be eligible for SI funding. These negotiations led to an agreement to fund the Integrated Training Centres for Youth (ITCY) pilot project. Tenders were called in early 1995 for agencies to establish ITCYs. Three proposals were accepted, and contracts were negotiated with the following agencies:
The focus of the ITCY pilot project was on youth who had dropped out of school and were having difficulty achieving significant labour force attachment. Youth interested in attending an ITCY had to meet basic Strategic Initiatives eligibility requirements:
The pilot project would test the value of customised training and work site interventions for youth who were at risk of long-term dependence on government support, in order to help them make a successful transition to employment. The ITCYs began accepting clients in the spring and summer of 1995. A process evaluation commenced in May of 1995 under the supervision of an Evaluation Steering Committee made up of representatives from the three sponsoring departments. The final report was submitted in January 1996. HRDC published the final report, entitled Integrated Training Centres for Youth: A Process Evaluation, in June of 1996. As part of the contract for the process evaluation, the consultants were responsible for designing an outcome evaluation framework (Workplan for an Outcome Evaluation of the Integrated Training Centres for Youth, January 1996), complete with procedures for the selection/assignment of a comparison group, along with the forms and procedures needed to collect outcome data. The consultants began work on the outcome evaluation in October 1996. An Interim Report was submitted to the department of AE&CD in May 1997. The report consists of qualitative findings from interviews with a variety of ITCY stakeholders, including agency staff and clients, employers, government representatives and community agencies involved with youth. Key results from the Interim Report have been brought forward into this report to assist in drawing final conclusions (see Chapter 4.0).

1.2 Program description

AE&CD wanted the ITCY pilot project to incorporate certain features of integrated training based on a model developed by the Center for Employment Training in the United States, for example:
The program emphasis was on integrating practical job and life management skills with ongoing coaching and support services tailored and sequenced to the individual needs of each participant. Figure 1 below provides an overview of how clients typically access services at an ITCY. Various components of ITCY programming are described in Appendix A.2

[Figure 1: How clients typically access services at an ITCY]

1.3 Methodology

The outcome evaluation methodology combines two different designs:
Data for the outcome evaluation was collected primarily through a series of survey instruments (Appendix B)3 delivered at different stages of the intervention:
Comparison Group

The original intention, as documented in the Outcome Evaluation Workplan, was to create an experimental "control group" in Edmonton whose members would be very similar to those in the PG. For example, it was originally proposed that eligible clients be assigned randomly to the two groups. A number of practical limitations arose which prevented implementing the experimental design as proposed. Project sponsors agreed to an alternative approach wherein the CG would be comprised of a range of youth, all "at risk"5 and otherwise eligible for the IT intervention, but not necessarily equivalent to those who eventually formed the PG. (See Appendix C for further discussion of changes to the original experimental design.) The conceptual flow of youth to the Program and Comparison Groups in Edmonton is outlined in Figure 2 below. The schematic also fits the Red Deer program, although procedures for administering the Baseline Survey were somewhat different. Also, Red Deer did not have a waiting list.

[Figure 2: Conceptual flow of youth to the Program and Comparison Groups]

Data Collection

Client intake and job training commenced in July 1995. Clients whose intake was later than the October 1996 cut-off date were excluded from the outcome study. AE&CD hired screeners/trackers (one each for Edmonton and Red Deer) to administer all the data collection instruments.6 Although the workplan for the outcome evaluation outlined a detailed schedule for data collection, the trackers were not able to adhere to the schedule and, therefore, a significant amount of data was collected retrospectively.7 Because of the delay in tracking CG members after their initial contact with the tracker, and in contacting PG clients after they had left the training program, many could not be located for follow-up interviews and response rates for individual months were low. In order to increase the representativeness of the measurement periods, a consolidation process was used to maximise the effective response rate.
The data was consolidated at points 3, 6, 9, 12, 15, and 18 months from baseline. If data for a given month (e.g., month 3) was not available, the consolidation procedure used data from the month prior (e.g., month 2); if data for that month was also not available, then data from the month following (e.g., month 4) was used. Attitude measures were similarly grouped into two post-intervention periods:
Table 1 documents the response rates for each of the measurement instruments and their associated data periods.

Table 1 - Response Rates
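The month-consolidation rule described above can be sketched in code. The data structures and observation values below are illustrative assumptions, not the study's actual records; only the fallback order (target month, then the month prior, then the month following) comes from the text.

```python
# Sketch of the consolidation rule: for each target month (3, 6, 9, ...),
# use that month's observation if available; otherwise fall back to the
# month prior, then to the month following. Data here is illustrative.

TARGET_MONTHS = [3, 6, 9, 12, 15, 18]

def consolidate(monthly_data, target):
    """Return (month_used, value) for a target month, or None.

    monthly_data: dict mapping month number -> observed value;
    months with no observation are simply absent from the dict.
    """
    for month in (target, target - 1, target + 1):  # target, prior, following
        if month in monthly_data:
            return month, monthly_data[month]
    return None  # no usable observation near this target month

# Hypothetical follow-up record for one client:
observations = {2: "employed", 6: "employed", 13: "in school"}
print(consolidate(observations, 3))   # month 3 missing, falls back to month 2
print(consolidate(observations, 12))  # months 12 and 11 missing, uses month 13
print(consolidate(observations, 9))   # nothing in months 8-10
```

A consequence of this design is that each target point draws on a three-month window, which is what raises the effective response rate relative to single-month reporting.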
Bias Between Responders and Non-Responders

Responders and non-responders were found to be very similar in both Edmonton and Red Deer. They were significantly different at the .01 level8 on only two attitudinal variables. For example, in Edmonton responders were more likely than non-responders to indicate they had a lot of support around them. In Red Deer, responders were more likely to feel they had the skills to get a job. No correction for bias between responders and non-responders was considered necessary, and the results reported for the PG sample are considered representative of those who took the training.

Bias Between Program Group and Comparison Group

When random assignment of subjects to program and comparison groups is not feasible, as in this study, selection bias (due both to self-selection and program selection factors) presents a considerable challenge. More specifically, the problem of selection bias occurs when some determinant of earnings is correlated with one or more variables associated with whether a person received training. Two classes of variables must be considered: measured and unmeasured. Measured variables present the least difficult problem, in that standard statistical procedures (e.g., analysis of covariance) can account for their impact through multivariate regression techniques. Unmeasured variables pose a more difficult problem, and have been the subject of much discussion and analysis, particularly among econometricians investigating the impact of training programs.9 Two clusters of variables known to influence earnings, and which therefore present a potential source of selection bias, are demographic variables (e.g., age, race, gender, education, and prior work experience and wages) and motivational or attitudinal variables. Bell et al.
have shown that the unmeasured components of these clusters can be reasonably dealt with through the use of non-participating program applicants (e.g., screen-outs and no-shows) as comparison subjects. These authors argue and successfully demonstrate that non-participating program applicants provide a reasonable alternative to random assignment in controlling for unmeasured components of selection bias in the evaluation of training programs, particularly when coupled with standard regression techniques to control for measured demographic variables. It is argued that unmeasured motivational/attitudinal variables are, a priori, controlled for in large part by using subjects who applied to the program but did not participate. The argument by Bell et al. is made stronger if comparison subjects neither self-select out of the program nor are selected out by program staff. Those in the present study who were on the waiting list but were not invited to participate due to lack of space (Waiting List-Not Invited) fit this category. These subjects represent about 40% of the Edmonton CG, but none, unfortunately, of the Red Deer CG. In Edmonton, very few clients were selected out by program staff, mitigating possible bias from program selection factors. The argument is also strengthened when motivational and attitudinal variables are explicitly measured at baseline, as was done in the present study. This enables the incorporation of these variables (along with relevant demographic variables, which were also measured in this study) into the vector of covariates used to provide statistical control of variables associated with earnings. The covariates consistently used to control for bias include:
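Whatever the exact covariate vector, the regression-adjustment logic described above can be sketched as follows. Everything in this example is fabricated for illustration: the sample, the covariate names, the selection rule, and the "true" training effect of 100 are assumptions, not the study's data or model. The sketch only shows why a naive comparison of group means overstates the impact when selection operates on a variable (here, a baseline attitude score) that also influences earnings, and how including measured covariates in the regression removes that component of the bias.

```python
# Illustrative sketch of statistical control for measured covariates
# via multivariate regression. All data is simulated; this is not the
# study's actual model or results.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical measured covariates: age, years of education, and a
# baseline motivational/attitudinal score.
age = rng.uniform(18, 25, n)
educ = rng.uniform(8, 12, n)
attitude = rng.uniform(0, 10, n)

# Non-random selection into training: more motivated youth are more
# likely to end up in the program group.
trained = (attitude + rng.normal(0, 2, n) > 5).astype(float)

# Simulated earnings: covariates matter, and the assumed true training
# effect is 100.
earnings = (50 * age + 30 * educ + 40 * attitude
            + 100 * trained + rng.normal(0, 50, n))

# Naive comparison of group means confounds selection with impact.
naive = earnings[trained == 1].mean() - earnings[trained == 0].mean()

# Regression adjustment: put the measured covariates in the design
# matrix so the coefficient on `trained` estimates the effect net of
# measured selection bias.
X = np.column_stack([np.ones(n), trained, age, educ, attitude])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
adjusted = beta[1]

print(f"naive difference:  {naive:.1f}")    # inflated by selection on attitude
print(f"adjusted estimate: {adjusted:.1f}") # close to the assumed effect of 100
```

Note that this only handles measured variables; the Bell et al. argument about non-participating applicants addresses the unmeasured components that no regression on observed covariates can reach.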