Appendix C: Descriptions of the Methodologies


C.1 Key Informant Interviews

A total of 30 in-person interviews were conducted with key informants. These interviews were held from March 8 to March 12, 1999, when three members of the study team travelled to Prince Edward Island. As most of the interviews involved two respondents, a total of 52 key informants were consulted. The average interview lasted one and a half hours. Interviews were conducted in Charlottetown (24 interviews), Summerside (four interviews) and Montague (two interviews) with informants from these cities as well as some respondents from other areas of the Island. The interview locations were chosen to be convenient for most respondents.

The key informant interviews fell into the following three categories:

  • Members of Labour Market Development Agreement (LMDA) Committees and Working Groups (10 interviews; 16 informants);
  • Human Resource Centres of Canada managers; Human Resources Development Canada (HRDC) program consultants, program supervisors; and provincial administrators and project officers (seven interviews; 12 informants); and
  • Stakeholders (13 interviews; 24 informants), including representatives of industry associations, development associations, chambers of commerce, public and private educational/training institutions and colleges, community learning centres, the francophone community, the federal public service union and a youth association.

Three interview guides — one designed for each of the three groups — were used in the key informant interviews. All respondents were sent a copy of the appropriate interview guide in advance of their scheduled interview appointment. In addition, they were sent an introductory letter explaining the purpose of the interview, the fact that participation was voluntary, and the fact that their comments are protected under the federal Privacy Act. Key informants were interviewed in their preferred official language.

C.2 Focus Groups

a) Overview

A total of 12 focus groups were conducted from March 8 to March 12, 1999 when members of the study team travelled to PEI. The discussions were held with four types of participants: HRCC and provincial delivery/front-line staff (one group); stakeholders45 (two groups); clients (six groups); and employers (three groups; one of the stakeholder groups also included some employers). The combined stakeholder/employer group was targeted to the francophone community and was conducted in French, and all other discussions were held in English.46 At least one group with clients was held in each of the five Human Resource Centre of Canada (HRCC) regions of the Island — in Charlottetown, Summerside, O'Leary, Montague and Souris. The sixth client focus group was targeted to francophone clients and was offered in French. These various client focus groups were conducted in order to assess the views of clients in the urban, central rural and more remote rural areas of the province, and the different employment situations in each area. Each discussion was two hours in duration, and was held in the evening (with the exception of the one staff focus group which was conducted in the afternoon). The focus groups were held at a focus group facility in Charlottetown and in hotel meeting rooms in the other four centres.

b) Recruitment

Using lists of potential participants developed with the assistance of the Evaluation Committee, interviewers recruited the participants by telephone. For employers and stakeholders, the lists consisted of persons who had accessed Employment Insurance (EI) programs and who represented specific demographic and geographic areas of PEI. Clients were recruited at random from a client sample list provided by the Evaluation Committee. In these telephone contacts, the interviewers explained to prospective participants the purpose of the discussion and the study sponsor; the time and location of the focus group; the fact that participation was voluntary; the fact that the discussion would be audio-taped but that their comments would be kept strictly confidential; and the fact that an honorarium of $50.00 as well as travel expenses would be provided to all participants (except the front-line government staff). Persons agreeing to participate were given a reminder phone call the day before their scheduled discussion.

In order to ensure the participation of eight to ten people in each discussion, we endeavoured to recruit 12 confirmed participants for each focus group. Unfortunately, poor weather on some of the evenings when groups were scheduled lowered participation for some of the focus groups (see Exhibit C.1). To compensate, the focus group questions were sent by facsimile to 31 persons who were unable to attend discussions during our visit to PEI. Only two people — both employers from Charlottetown — returned their responses; these comments are incorporated in the findings.

c) Distribution of Focus Groups

The distribution of the focus groups, including the number of participants per group, is summarized in Exhibit C.1.

View Exhibit C.1

d) Discussion Guides

Four focus group guides — one guide for each of the four types of participants — were developed for the group discussions. Following the appropriate guide, the moderator asked the group the questions in a non-directive way, probing for clarification and more detail when necessary, and intervening as appropriate to involve all participants and keep the discussion on topic.

C.3 Survey of Participants and Comparison Group

Participant Survey

The participant data files were originally developed to include all clients who participated in LMDA employment programs and services at any time between April 26, 1997 and October 31, 1998. These were compiled from a participant file (n=5,409) and five administrative data files from HRDC in Hull, Quebec: the names and addresses file and T1 file (n=45,513); the Status Vector file (n=45,963); the Status Vector file with Benefit Period Commencement (BPC) and BVT data (n=45,963); and the Record of Employment (ROE) file (n=45,861). These files were aggregated, yielding a single data file containing information for 5,409 participant cases, with the individual client as the unit of analysis. This total is not equal to the sum of the cases in the administrative data files because clients who had taken part in more than one intervention could appear in more than one file. Following the removal of all cases without valid phone numbers, start and end dates for EI benefits, and start and end dates for most recent interventions, the final data file consisted of 3,744 individuals.
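The aggregation and filtering steps described above can be sketched as follows. All field names here (client_id, phone, and so on) are hypothetical stand-ins for the actual administrative fields; the real processing was done on the full files.

```python
def aggregate_participants(files):
    """Merge several lists of administrative records into one dict keyed
    by client id. A client appearing in several files (e.g., because of
    multiple interventions) still yields a single case, mirroring the
    report's use of the individual client as the unit of analysis."""
    merged = {}
    for records in files:
        for rec in records:
            merged.setdefault(rec["client_id"], {}).update(rec)
    return merged


def filter_valid(cases, required=("phone", "ei_start", "ei_end",
                                  "intervention_start", "intervention_end")):
    """Keep only cases with all key fields present, as in the report's
    reduction from 5,409 participant cases to 3,744 usable cases."""
    return {cid: rec for cid, rec in cases.items()
            if all(rec.get(f) for f in required)}
```

In this sketch, a client missing any of the required fields (a valid phone number, EI benefit dates, or intervention dates) is dropped, just as described above.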

The survey sample was randomly drawn from the final data file using a three to one "sample to survey completion" ratio for each different participant group (i.e., three times as many participants were sampled as were expected to complete the survey). For all groups except Employment Assistance Services (EAS) and Enhanced Feepayers, there were not enough cases available to obtain this three to one ratio, thus for an expected total of 1,164 survey completions, a total final sample of 2,483 cases was drawn from the data file of 3,744 program participants.
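A minimal sketch of the three-to-one draw follows; the group labels and counts are hypothetical. As described above, a group without enough cases to meet the ratio simply contributes everything it has.

```python
import random


def draw_sample(cases_by_group, expected_completions, ratio=3):
    """Draw up to ratio x expected completions per participant group;
    groups with too few cases contribute all available cases."""
    sample = []
    for group, cases in cases_by_group.items():
        target = ratio * expected_completions[group]
        k = min(target, len(cases))            # cap at available cases
        sample.extend(random.sample(cases, k))  # simple random draw
    return sample
```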

Based on the matrix of issues and indicators developed for the evaluation, a survey was designed for clients who had participated in LMDA-funded programs in Prince Edward Island. Following the initial review of the instrument by the Joint Evaluation Committee (JEC) in January 1999, a number of changes were made, including wording changes; the addition of questions related to respondents' work profiles, LMDA programs, attitudes, and use of income assistance; and modifications to response categories.

The pre-test was carried out in order to simulate the conditions to be encountered during the actual survey. The objective was to test the survey instrument in terms of the length of time required for the interview, as well as to check the sequencing and clarity of the questions and the appropriateness of the wording and flow. On February 23, 1999, a total of six interviews were completed, with an average length of 29.5 minutes. The pre-test results prompted several revisions of the instrument, such as changes to skip logic and wording. Notably, the pre-test demonstrated that the instrument was longer than planned; a list of suggested changes to shorten the survey was therefore developed and submitted to the JEC for approval. Efforts to shorten the survey involved identifying questions to be deleted and/or merged with other questions (i.e., in order to collect the same type of information using fewer questions).

Following approval of the suggested changes and modification of the survey instrument to reflect these changes, another pre-test was conducted on March 11, 1999. The results of eight completed interviews showed that the average length of the survey was now 28.2 minutes. Additional efforts to shorten the survey were made and another pre-test was conducted with three survey completions the following day, on March 12. This pre-test yielded an average estimated time of 24 minutes for the survey. Although the final pre-test revealed that the instrument was somewhat longer than the time allotted for the survey, additional resources were supplied by the client to offset the costs of the longer survey.

Fieldwork for the survey began on April 5, 1999 and was completed on June 10, 1999. A major delay occurred on April 14, when it was discovered that HRDC in PEI and National Headquarters in Ottawa had used different protocols to extract the population of participants from the administrative data files. The consequent discrepancy in the population characteristics of the participants pulled using these two extraction protocols required that all fieldwork stop until the problem was resolved. The differences between the two selection strategies were as follows:

  • National Headquarters (NHQ) defined a participant as anyone who had an action plan, whereas HRDC/PEI used the definition of a participant as anyone who has accessed HRDC programs and services with or without an action plan. The result was that the PEI population was much larger;
  • NHQ extracted more codes than PEI/HRDC, resulting in a more liberal sampling strategy with respect to these variables, even though the overall population as defined by NHQ was smaller than that pulled by PEI/HRDC.

On May 17, 1999, EKOS received the new participant data files from HRDC, including the population of all participants and all participant administrative data files. The new participant data file was rebuilt and matched to the old participant data file so that only new cases were pulled from the new file. This new file was also matched to the list of comparison group cases that had already completed the comparison group interview. Respondents who had responded to the comparison survey and were listed as participants in the new participant data files (n=48) were also ineligible for selection as participants in the new wave.

In order to limit the amount of time that elapsed between the first period of data collection and the continuation of the fieldwork, fieldwork for the participant survey resumed on May 12, 1999, prior to receiving the new participant data files. This early return to the field was also prompted by the fact that the research team felt it would be prudent to collect extra participant cases concurrent to the comparison group fieldwork, which began in the field at the same time. In this way, there would be a sample of completed participant surveys that could be compared to the earlier participant cases, as well as to the comparison group cases. These comparisons would provide information about any effect that the time delay might have had on participant responses.

An additional difficulty arising from the problems in defining the population of participants was the loss of time in the field. Specifically, fieldwork was quickly moving into the May long weekend, which typically marks the beginning of the tourist season on the Island and a return to work for many people, including perhaps significant proportions of the participant survey sample. Given this potential source of bias, all efforts were made to complete the survey fieldwork before the long weekend. When it became apparent that this would not occur, the wording of both surveys was modified so that questions concerning post-intervention employment status and activities asked clients to report on their employment history only up to April 10, 1999 (i.e., the mid-point of the original fieldwork for the participant survey). In this way, the confounding effect of a mass return to work heralded by the beginning of the tourist season was avoided.

Following the completion of fieldwork, it was also discovered that no reliable means existed to distinguish reachback clients from active EI claimants on the basis of the available administrative data. The administrative data lacked a reliable flag for reachbacks and claimants; reachback status was therefore computed from the BVT and BPT variables derived from the Status Vector files. The BVT variable records the receipt of EI claimants' most recent report cards, while the BPT is the week code of the theoretical end of EI eligibility. At the actual end of a claim, the BVT and BPT codes are reconciled and will be equal. When the proportion of reachbacks participating in each EBSM was determined using the BVT variable alone, the true proportion of participating reachbacks was over-estimated, because claimants for whom a most recent (but not last) report card had been received were categorized as reachbacks. The only way to make a positive determination of claimant status is to wait, after the claim period begins, for the BVT and BPT variables to be reconciled and for the subsequent quarterly data extraction to make the reconciled data available to researchers. As such, at the time of this report the claimant status of roughly one in 10 participants was still in question.
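The reconciliation logic above can be illustrated with a small sketch. The week-code encoding is assumed for illustration; the key point is that only equality of the two codes supports a positive determination, and an unreconciled pair cannot safely be called a reachback.

```python
def claimant_status(bvt, bpt):
    """Classify a case from its (hypothetical) week codes.

    BVT: week code of the most recent report card received.
    BPT: week code of the theoretical end of EI eligibility.
    At the actual end of a claim the two codes are reconciled and become
    equal; until then, the case may be an active claimant whose latest
    (but not last) report card merely resembles a reachback's record.
    """
    if bvt == bpt:
        return "reachback"       # claim has ended; former claimant
    return "undetermined"        # codes not yet reconciled
```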

The response rates and refusal rates for participants by program type are presented in Exhibit C.2. The response rate is the proportion of cases from the functional sample who responded to the survey, while the refusal rate represents the proportion of cases from the functional sample who declined to participate in the survey. The functional sample factors out the attrition in the survey, leaving only the sample which resulted in completions, refusals, and those numbers attempted but not reached before the completion of fieldwork (e.g., retired phone numbers, respondents who were unavailable for the duration of the survey, respondents who were unable to participate due to illness or some other factor, etc.). Attrition includes numbers not in service, duplicate phone numbers, respondents who do not speak either English or French and respondents who indicated no knowledge of the topic.

The overall margin of error for the survey is ±2.6 percent. That is, the overall survey results are accurate within ±2.6 percentage points, 19 times out of 20. It should be noted that the response rate for the survey was fairly good, ranging from 77.4 percent among Purchase of Training participants to 33.7 percent for Enhanced Feepayers, with an overall response rate of 59.5 percent. The overall refusal rate was also quite satisfactory (4.2 percent) and ranged from 8.6 percent for Enhanced Feepayers to 0.7 percent for Purchase of Training participants.

View Exhibit C.2

Comparison Group Survey

The comparison group sample was drawn from a file, provided by Human Resources Development Canada, of Employment Insurance (EI) claims that were active in 1998 and dormant EI claims (claims that were no longer being processed) from 1994 to 1998. This selection method produced a file of 41,549 claimants from which to draw the comparison group sample.

The comparison group data file was matched to the participant data file based on the time periods for which members of the comparison group were receiving EI. To accomplish this matching, three time periods were defined according to observed values for program end dates in the population of program participants. That is, within the population of program participants, the end dates of participants' most recent interventions ranged from April 26, 1997 to October 31, 1998, and this range was divided into three time periods. For comparison group sampling purposes, the following time periods were derived: April 26, 1997 to September 30, 1997; October 1, 1997 to April 30, 1998; and May 1, 1998 to October 31, 1998. Reference date flags were then computed using the mid-point of each time period (June 1, 1997, December 1, 1997, and July 1, 1998): if an individual in the comparison group was EI eligible at a reference date, that individual fell into the corresponding time period cohort. These cohorts were not necessarily mutually exclusive, however, because an individual could have been EI eligible at more than one of the reference dates.
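The reference-date flagging can be sketched as follows; the cohort names are hypothetical labels for the three time periods, and the reference dates are the mid-points given above.

```python
from datetime import date

# Mid-points of the three time period cohorts (from the text above)
REFERENCE_DATES = {
    "cohort_1": date(1997, 6, 1),
    "cohort_2": date(1997, 12, 1),
    "cohort_3": date(1998, 7, 1),
}


def cohort_flags(ei_start, ei_end):
    """Flag every cohort whose reference date falls inside the claimant's
    EI eligibility spell. A claimant eligible at more than one reference
    date is flagged for more than one cohort, which is why the cohorts
    are not mutually exclusive at this stage."""
    return {name: ei_start <= ref <= ei_end
            for name, ref in REFERENCE_DATES.items()}
```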

Based on the participant population characteristics, for each time period cohort a listing was produced of the time in weeks between the end of the latest intervention and the start date of the most recent EI eligibility period. These time frames were further broken down into five categories based on the amount of time into the EI eligibility period at which the participant's intervention came to an end: 13 weeks or less, 14 to 26 weeks, 27 to 39 weeks, 40 to 52 weeks, and 53 weeks or more. Each comparison group cohort was then broken down into the same five categories based on the time in weeks between the reference date (the mid-point of one of the three time period cohorts) and the start date of the most recent EI eligibility period (see Exhibit C.3).

The comparison sample was drawn in the same proportions as were observed for each of the three time period cohorts in the participant population. For example, if 13.3 percent of the population of participants fell into the first time cohort (i.e., the most recent intervention end dates were between April 26, 1997 and September 30, 1997), the comparison group was sampled to ensure that 13.3 percent of cases were EI eligible at the mid-point of this time period (i.e., June 1, 1997). The comparison group sample was further stratified by the number of weeks between the end date of the latest intervention and the start date of the most recent EI eligibility period. For example, if 10.1 percent of the participant April to September time cohort had 14 to 26 weeks between the end date of their latest intervention and the start date of their most recent EI period, then 10.1 percent of the comparison group cases in the April to September time cohort (those EI eligible at the mid-point of this time period, June 1, 1997) were sampled from those with 14 to 26 weeks between the reference date and the beginning of their most recent EI eligibility period.

To correct for the fact that the comparison time cohorts are not mutually exclusive, each time period cohort was sampled separately and a flag was computed to identify sampled cases. As such, it was possible to track these cases and not include them when sampling from subsequent time periods. Thus the final comparison group sample consisted of 2,637 cases in three mutually exclusive time period cohorts from a population of 41,549.
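The stratified, mutually exclusive draw described in the last two paragraphs can be sketched as follows. The case structure and stratum labels are hypothetical; the essential moves are sampling each stratum in the participant proportions and carrying a flag set forward so cases drawn in one cohort are skipped in later cohorts.

```python
import random


def sample_cohort(population, proportions, n_total, already_sampled):
    """Sample one time-period cohort stratum by stratum, in the same
    proportions observed for participants, skipping any case flagged in
    an earlier cohort so the final cohorts are mutually exclusive."""
    drawn = []
    for stratum, share in proportions.items():
        # Pool: cohort members in this stratum not yet sampled elsewhere
        pool = [c for c in population
                if c["stratum"] == stratum and c["id"] not in already_sampled]
        k = min(round(share * n_total), len(pool))
        for case in random.sample(pool, k):
            drawn.append(case)
            already_sampled.add(case["id"])   # flag for later cohorts
    return drawn
```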

Exhibit C.3 - PEI LMDA Comparison Group Sample Frame

Time (Weeks) into EI            Participants in Population   Sample from Comparison Group
Eligibility that Program Ended      #      % of Subtotal         #      % of Subtotal

April 26, 1997 to September 30, 1997
  13 or less                       26           8.9              20           8.9
  14-26                            46          15.2              35          15.2
  27-39                            84          27.8              63          27.8
  40-52                            67          22.2              50          22.2
  53 and over                      79          26.2              59          26.2
  Subtotal                        302   (8.6% of total)         227   (8.6% of total)

October 1, 1997 to April 30, 1998
  13 or less                      239          15.2             179          15.2
  14-26                           437          27.8             327          27.8
  27-39                           214          13.6             160          13.6
  40-52                           202          12.8             150          12.8
  53 and over                     482          30.6             360          30.6
  Subtotal                      1,574  (44.6% of total)       1,176  (44.6% of total)

May 1, 1998 to October 31, 1998
  13 or less                       52           3.1              38           3.1
  14-26                           155           9.4             116           9.4
  27-39                           446          27.0             334          27.0
  40-52                           319          19.3             239          19.3
  53 and over                     679          41.1             507          41.1
  Subtotal                      1,651  (46.8% of total)       1,234  (46.8% of total)

Overall total                   3,527*                        2,637

* This total represents the sample frame total rather than the population total. The sample frame consists of those members of the population with complete data, including telephone numbers, to permit sampling for the survey.

A comparison group survey instrument was developed in the early winter of 1998, and reviewed by the Joint Evaluation Committee (JEC) in January 1999. Based on the JEC's review, a number of changes were made to both the participant and comparison group surveys, including wording changes; the addition of a number of questions related to respondents' work profile, Labour Market Development Agreement programs, attitudes, and use of income assistance; and modifications to response categories.

Throughout February and March, as pretests were being conducted with the participant survey, the comparison group instrument was modified to reflect ongoing changes being made to the corresponding participant survey. Because the instrument was virtually identical to the participant survey, with the exception of several questions that were not asked of non-participants and slightly different wording of some questions, all pretest information from the participant survey was equally applicable to the comparison group survey. Thus, modifications to the sequencing and clarity of questions, as well as checks on wording and flow, were made to the comparison group survey instrument on the basis of participant survey results. Furthermore, the length of the comparison group survey was easily deduced from results of the participant survey pretest because we were able to record the difference in the number and type of questions between the two survey instruments.

The response rate for the comparison group survey47 is presented in Exhibit C.4. The response rate is the proportion of individuals from the functional sample who responded to the survey. Conversely, the refusal rate represents the proportion of cases from the functional sample who declined to participate in the survey. The functional sample factors out the attrition in the survey, leaving only the sample comprised of completions, refusals, and those numbers attempted but not reached by the completion of fieldwork (e.g., appointments for interviews that were not kept, retired phone numbers, respondents who were unavailable for the duration of the survey). Attrition includes numbers not in service, duplicate phone numbers, respondents who did not speak either French or English, and respondents who indicated no knowledge of the topic or were ineligible to take part (e.g., LMDA participants).

The overall margin of error is ±4.4 percent. That is, the overall survey results are accurate within ±4.4 percentage points, 19 times out of 20. The response rate for the survey was 29.2 percent and the refusal rate was 11.9 percent. For the purpose of analysis, the comparison group survey data were weighted according to age, sex and time cohort.

Exhibit C.4 - Response Rate for the Comparison Group Survey

                                                       Total
Initial sample                                         2,637
(less) Unused sample                                     648
(less) Attrition
  Number not in service/invalid number                   222
  Duplicate number                                        13
  No knowledge of topic/ineligible                        77
  Language barrier (spoke neither English nor French)     17
Functional sample                                      1,660
Other numbers retired (not due to attrition)
  No answer/busy                                         710
  Unavailable for duration of survey                     214
  Other/illness                                           54
Non-response
  Refusal                                                193
  Incomplete refusal                                       4
  Total non-response                                     197
Total completed                                          485
Refusal rate                                           11.9%
Response rate                                          29.2%
Margin of error                                        ±4.4%
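The rates in Exhibit C.4 can be reproduced directly from its figures. The margin-of-error formula used here (a normal approximation at p = 0.5, 95 percent confidence, with a finite-population correction for the 41,549 claimants) is our assumption about the calculation, but it matches the reported ±4.4 percent.

```python
import math

# Figures from Exhibit C.4
initial, unused = 2637, 648
attrition = 222 + 13 + 77 + 17              # invalid, duplicate, ineligible, language
functional = initial - unused - attrition   # the functional sample: 1,660
completed = 485
refusals = 193 + 4                          # refusals plus incomplete refusals

response_rate = completed / functional      # about 29.2 percent
refusal_rate = refusals / functional        # about 11.9 percent

# Margin of error, 19 times out of 20 (z = 1.96), worst case p = 0.5,
# with a finite-population correction for N = 41,549 claimants
N, n = 41549, completed
moe = 1.96 * math.sqrt(0.25 / n * (N - n) / (N - 1))   # about 4.4 points
```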

C.4 Multivariate Analysis

Eleven dependent variables representing key employment, earnings, and income support use outcomes were tested in the multivariate models. These variables are based for the most part on survey data. The models were run for all participants and separately for certain socio-demographic, rural/urban and claimant status segments. The outcomes correspond to the key objectives of the Canada/PEI LMDA and the Employment Benefits and Support Measures, which are to lead to sustained employment and a reduction in dependency on income support. Logistic (logit) regression was used for binary dependent variables (yes/no) and Ordinary Least Squares (OLS) regression was used for continuous dependent variables (numeric values). These variables follow:

  • employed/self-employed (or not) at time of survey;
  • full-time employed (or not) at time of the survey;
  • employed in a seasonal job (or not) at the time of the survey;
  • worked 12 consecutive weeks (or not) since end of intervention/reference date;
  • weeks working as percentage of weeks since intervention/reference date;
  • weeks looking for work as a percentage of weeks since end of intervention/reference date;
  • weekly earnings of current or most recent job (at the time of the survey);
  • absolute change in weekly earnings (compared to one year prior to intervention/reference date);
  • percent change in weekly earnings (compared to one year prior to intervention/reference date);
  • weeks on EI in a new-spell EI48 as a percentage of weeks since end of intervention/reference date; and
  • received Social Assistance since intervention/reference date.
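The split between logistic regression for the yes/no outcomes and OLS for the numeric outcomes can be sketched as a simple dispatch. The variable names below are hypothetical shorthand for the eleven outcomes listed above; the model classes themselves would come from a statistical package.

```python
# Binary (yes/no) outcomes: estimated with logistic (logit) regression
BINARY_OUTCOMES = {
    "employed", "full_time", "seasonal_job", "worked_12_weeks",
    "received_social_assistance",
}

# Continuous (numeric) outcomes: estimated with Ordinary Least Squares
CONTINUOUS_OUTCOMES = {
    "pct_weeks_working", "pct_weeks_looking", "weekly_earnings",
    "earnings_change_abs", "earnings_change_pct", "pct_weeks_new_ei",
}


def model_family(outcome):
    """Return the regression family the report describes for an outcome."""
    if outcome in BINARY_OUTCOMES:
        return "logit"
    if outcome in CONTINUOUS_OUTCOMES:
        return "ols"
    raise ValueError(f"unknown outcome: {outcome}")
```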

Along with the intervention (EBSM) binary variables (e.g., participant in Employment Assistance Services or not), a common set of explanatory (control) variables was introduced into the models for each dependent variable (the outcomes). The purpose was to assess (or control for) the influence of other factors on the intervention's impact on the outcomes. These other factors included the time since the intervention and antecedent socio-demographic and employment-history variables. These variables were selected because they were thought, a priori, to have an impact on outcomes and because participants and non-participants differed with respect to these variables.49 Note that "intervention" here refers to the end of the intervention for participants in Canada/PEI LMDA EBSMs and to the reference date for comparison group members. The variables entered into the models follow:

  • intervention status: five variables to indicate either the individual's participation in one or more of five EBSMs, or their non-participation in any of the interventions (comparison group);
  • length of time since the intervention;
  • socio-demographic variables: age, sex, education, mother tongue, minority status, marital status, and existence of dependants;
  • prior labour force experience: employment status (employed, unemployed) in the month before the intervention (versus not in the labour force), whether or not employed one year before the intervention/reference date (entered in stepwise fashion because of concerns about collinearity with the previous variable), interest in entering training/self-employment/the labour force prior to the intervention, number of separations 1992-1997, weeks of EI benefits received 1992-1997, and total gross earnings in the year prior to the intervention;
  • service-delivery variables: whether individuals had used self-serve products, received counselling, participated in job-search assistance activities or developed an action plan, or services other than those associated directly with the EBSM; and
  • the Heckman correction term, or inverse Mills ratio: a control variable computed from regressions used to model participation in the intervention, included to reduce self-selection bias. This factor corrects for bias created by the fact that the same unobservable participant characteristics that determine entry into programs may also be a factor in the observed impacts.
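For a participant, the inverse Mills ratio is the standard normal density over the cumulative distribution, φ(z)/Φ(z), evaluated at the fitted index from the first-stage participation model. A minimal implementation follows; the first-stage model itself, and the mirror-image term used for non-participants in a full Heckman two-step, are omitted from this sketch.

```python
import math


def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z): standard normal density over CDF.

    Evaluated at the fitted index z from a first-stage participation
    model, this term is added as a regressor in the outcome equation to
    absorb self-selection bias (Heckman's two-step correction).
    """
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # Phi(z)
    return pdf / cdf
```

The ratio is largest for individuals whose characteristics made participation least likely, which is exactly where unobserved self-selection is most informative.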


Footnotes

45 Stakeholders included representatives of industry associations, community development associations, and non-governmental organizations (NGOs) representing interest groups.
46 Originally, the intent was to also conduct one of the client focus groups in French. This client group was actually conducted in English, however, because one of the participants at this discussion was more comfortable speaking in English, and the other bilingual francophone participants were willing and able to speak English.
47 The fact that the response rate for the comparison group survey (29.2 percent) is somewhat lower than that of the participant survey (59.5 percent) is not surprising if we consider that comparison group respondents have little direct connection to the topic of interest (employment programs and services). As such, it is more appropriate to compare this response rate to rates obtained from surveys of the general public, where a response rate of 30 percent is considered satisfactory.
48 This variable was based on administrative data for survey respondents rather than their responses to the respective survey question, because it was felt that survey respondents would find it difficult to know whether their current EI spell was a new one or one "left over" from the intervention.
49 Compared to the comparison group, participants had fewer weeks since the intervention, were more likely to be employed one month before, were younger, were less likely to be married and to have dependants, had a somewhat greater interest in being trained, earned less, and were more likely to have used employment assistance services independent of the EBSMs.

