C.1 Key Informant Interviews

A total of 30 in-person interviews were conducted with key informants. These interviews were held from March 8 to March 12, 1999, when three members of the study team travelled to Prince Edward Island. As most of the interviews involved two respondents, a total of 52 key informants were consulted. The average interview lasted one and a half hours. Interviews were conducted in Charlottetown (24 interviews), Summerside (four interviews) and Montague (two interviews) with informants from these cities as well as some respondents from other areas of the Island. The interview locations were chosen to be convenient for most respondents. The key informant interviews fell into the following three categories:
Three interview guides — one designed for each of the three groups — were used in the key informant interviews. All respondents were sent a copy of the appropriate interview guide in advance of their scheduled interview appointment. In addition, they were sent an introductory letter explaining the purpose of the interview, the fact that the interview was voluntary, and the fact that their comments were protected under the Federal Privacy Act. Key informants were interviewed in their preferred official language.

C.2 Focus Groups

a) Overview

A total of 12 focus groups were conducted from March 8 to March 12, 1999, when members of the study team travelled to PEI. The discussions were held with four types of participants: HRCC and provincial delivery/front-line staff (one group); stakeholders45 (two groups); clients (six groups); and employers (three groups; one of the stakeholder groups also included some employers). The combined stakeholder/employer group was targeted to the francophone community and was conducted in French; all other discussions were held in English.46 At least one client group was held in each of the five Human Resource Centre of Canada (HRCC) regions of the Island — in Charlottetown, Summerside, O'Leary, Montague and Souris. The sixth client focus group was targeted to francophone clients and was offered in French. These client focus groups were conducted in order to assess the views of clients in the urban, central rural and more remote rural areas of the province, and the different employment situations in each area. Each discussion was two hours in duration and was held in the evening (with the exception of the staff focus group, which was conducted in the afternoon). The focus groups were held at a focus group facility in Charlottetown and in hotel meeting rooms in the other four centres.
b) Recruitment

Using lists of potential participants developed with the assistance of the Evaluation Committee, interviewers recruited the participants by telephone. For employers and stakeholders, the lists comprised persons who had accessed Employment Insurance (EI) programs and who represented specific demographic and geographic areas of PEI. Clients were recruited at random from a client sample list provided by the Evaluation Committee. In these telephone contacts, the interviewers explained to prospective participants the purpose of the discussion and the study sponsor; details on the time and location of the focus group; the fact that participation was voluntary; the fact that the discussion would be audio-tape recorded but that their comments would be kept strictly confidential; and the fact that an honorarium of $50.00 as well as travel expenses would be provided to all participants (except the front-line government staff). Persons agreeing to participate were given a reminder phone call a day before their scheduled discussion. In order to ensure the participation of eight to ten people in each discussion, we endeavoured to recruit 12 confirmed participants for each focus group. Unfortunately, there were bad weather conditions in PEI on some of the evenings when groups were scheduled, so participation was low for some of the focus groups (see Exhibit C.1). In an attempt to make up for this, the focus group questions were sent (by facsimile) to 31 persons who were unable to attend discussions during our visit to PEI. Only two people — both employers from Charlottetown — returned their responses, and these comments are incorporated in the findings.

c) Distribution of Focus Groups

The distribution of the focus groups, including the number of participants per group, is summarized in Exhibit C.1.
View Exhibit C.1

d) Discussion Guides

Four focus group guides — one for each of the four types of participants — were developed for the group discussions. Following the appropriate guide, the moderator asked the group the questions in a non-directive way, probing for clarification and more detail when necessary, and intervening as appropriate to involve all participants and keep the discussion on topic.

C.3 Survey of Participants and Comparison Group

Participant Survey

The participant data files were originally developed to include clients who participated in LMDA employment programs and services at any time between April 26, 1997 and October 31, 1998. These were compiled from a participant file (n=5,409) and five administrative data files (the HRDC, Hull, Quebec, names and addresses file and T1 file (n=45,513); the Status Vector file (n=45,963); the Status Vector file with Benefit Period Commencement (BPC) and BVT data (n=45,963); and the Record of Employment (ROE) file (n=45,861)). These files were aggregated, yielding a single data file containing information for 5,409 participant cases, with the individual client as the unit of analysis. This total was not equal to the sum of the cases in the administrative data files because clients who had taken part in more than one intervention could appear in more than one file. Following the removal of all cases without valid phone numbers, start and end dates for EI benefits, and start and end dates for most recent interventions, the final data file consisted of 3,744 individuals. The survey sample was randomly drawn from the final data file using a three-to-one "sample to survey completion" ratio for each participant group (i.e., three times as many participants were sampled as were expected to complete the survey).
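The three-to-one draw can be sketched in a few lines. The group names and per-group counts below are hypothetical illustrations, not the study's actual figures:

```python
# Sketch of the three-to-one "sample to survey completion" draw.
# Group names and per-group numbers are hypothetical examples.
def sample_size(expected_completions, available_cases, ratio=3):
    """Draw ratio x the expected completions, capped by available cases."""
    return min(ratio * expected_completions, available_cases)

groups = {
    "EAS": (400, 1500),           # enough cases for the full 3:1 draw
    "Other program": (250, 600),  # too few cases; draw is capped
}
draws = {name: sample_size(e, a) for name, (e, a) in groups.items()}
print(draws)
```

The cap on available cases matters because, as noted above, not every program group had enough cases to support the full ratio.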
For all groups except Employment Assistance Services (EAS) and Enhanced Feepayers, there were not enough cases available to obtain this three-to-one ratio; thus, for an expected total of 1,164 survey completions, a total final sample of 2,483 cases was drawn from the data file of 3,744 program participants. Based on the matrix of issues and indicators developed for the evaluation, a survey was designed for clients who had participated in LMDA-funded programs in Prince Edward Island. Following the initial review of the instrument by the Joint Evaluation Committee (JEC) in January 1999, a number of changes were made, including wording changes; the addition of a number of questions related to respondents' work profiles, LMDA programs, attitudes, and use of income assistance; and modifications to response categories. The pre-test was carried out in order to simulate the conditions to be encountered during the actual survey. The objective was to test the survey instrument in terms of the length of time required for the interview, as well as to check the sequencing and clarity of the questions and the appropriateness of the wording and flow. On February 23, 1999, a total of six interviews were completed with an average length of 29.5 minutes. The pre-test results prompted several revisions of the instrument, such as changes to skip logic and wording changes. Notably, the pre-test results demonstrated that the instrument was longer than planned; thus a list of suggested changes to shorten the survey was developed and submitted to the JEC for their approval. Efforts to shorten the survey involved developing a list of questions to be deleted and/or merged with other questions (i.e., in order to collect the same type of information using fewer questions). Following approval of the suggested changes and modification of the survey instrument to reflect these changes, another pre-test was conducted on March 11, 1999.
The results of eight completed interviews showed that the average length of the survey was now 28.2 minutes. Additional efforts to shorten the survey were made and another pre-test was conducted with three survey completions the following day, on March 12. This pre-test yielded an average estimated time of 24 minutes for the survey. Although the final pre-test revealed that the instrument was still somewhat longer than the time allotted for the survey, additional resources were supplied by the client to offset the costs of the longer survey. Fieldwork for the survey began on April 5, 1999 and was completed on June 10, 1999. A major delay in the fieldwork occurred on April 14, when it was discovered that HRDC in PEI and National Headquarters in Ottawa had used different protocols to extract the population of participants from the administrative data files. The consequent discrepancy in the population characteristics of the participants pulled using these two extraction protocols required that all fieldwork stop until the problem had been resolved. The differences between the two selection strategies were as follows:
On May 17, 1999, EKOS received the new participant data files from HRDC, including the population of all participants and all participant administrative data files. The new participant data file was rebuilt and matched to the old participant data file so that only new cases were pulled from the new file. This new file was also matched to the list of comparison group cases that had already completed the comparison group interview. Respondents who had responded to the comparison survey and were listed as participants in the new participant data files (n=48) were also ineligible for selection as participants in the new wave. In order to limit the amount of time that elapsed between the first period of data collection and the continuation of the fieldwork, fieldwork for the participant survey resumed on May 12, 1999, prior to receipt of the new participant data files. This early return to the field was also prompted by the research team's view that it would be prudent to collect extra participant cases concurrent with the comparison group fieldwork, which began at the same time. In this way, there would be a sample of completed participant surveys that could be compared to the earlier participant cases, as well as to the comparison group cases. These comparisons would provide information about any effect that the time delay might have had on participant responses. A further consequence of the problems in defining the population of participants was lost time in the field. Specifically, fieldwork was quickly approaching the May long weekend, which typically marks the beginning of the tourist season on the Island and a return to work for many people, including perhaps significant proportions of the participant survey sample. Given this potential source of bias, all efforts were made to complete the survey fieldwork before the long weekend.
When it became apparent that this was not going to occur, the wording for both surveys was modified so that questions concerning post-intervention employment status and activities asked clients to report on their employment history only up until April 10, 1999 (i.e., the mid-point of the original fieldwork for the participant survey). In this way, the confounding effect of a mass return to work heralded by the beginning of the tourist season was avoided. Following the completion of fieldwork, it was also discovered that no reliable means existed to distinguish reachback clients from active EI claimants on the basis of the available administrative data. The administrative data lacked a reliable flag for reachbacks and claimants; thus reachback status was computed on the basis of the BVT and BPT variables derived from the Status Vector files. The BVT variable records the receipt of EI claimants' most recent report cards, while the BPT is the week code of the theoretical end of EI eligibility. At the actual end of a claim, the BVT and BPT codes are reconciled and will be equal. When the proportion of reachbacks participating in each EBSM was determined using the BVT variable, the true proportion of participating reachbacks was over-estimated because claimants for whom a most recent (but not last) report card had been received were categorized as reachbacks. The only way to make a positive determination of claimant status is to wait a sufficient period of time after the claim period begins for the BVT and BPT variables to be reconciled, and for the quarterly data extraction following this reconciliation to take place so that the data become available to researchers. As such, at the time of this report the claimant status of roughly one in 10 participants was still in question. The response rates and refusal rates for participants by program type are presented in Exhibit C.2.
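The BVT/BPT reconciliation logic described above can be sketched as follows. The week codes and the simple equality test are a simplified assumption; the real Status Vector records are more involved:

```python
# Simplified sketch of the reachback determination problem described
# above. Week-code values are hypothetical.
def claimant_status(bvt_week, bpt_week):
    """BVT: week code of the most recent report card received.
    BPT: week code of the theoretical end of EI eligibility.
    The two are reconciled (equal) only once a claim has truly ended,
    so only equality permits a positive determination."""
    if bvt_week == bpt_week:
        return "claim ended"   # can safely be treated as a reachback
    return "undetermined"      # may be active, or simply not yet reconciled

print(claimant_status(9915, 9915))
print(claimant_status(9910, 9915))
```

Treating every case with a received report card as a reachback, rather than requiring reconciliation, is what produced the over-estimate noted in the text.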
The response rate is the proportion of cases from the functional sample who responded to the survey, while the refusal rate represents the proportion of cases from the functional sample who declined to participate in the survey. The functional sample factors out the attrition in the survey, leaving only the sample which resulted in completions, refusals, and those numbers attempted but not reached before the completion of fieldwork (e.g., retired phone numbers, respondents who were unavailable for the duration of the survey, respondents who were unable to participate due to illness or some other factor, etc.). Attrition includes numbers not in service, duplicate phone numbers, respondents who spoke neither English nor French, and respondents who indicated no knowledge of the topic. The overall margin of error for the survey is

View Exhibit C.2

Comparison Group Survey

The comparison group sample was drawn from a file, provided by Human Resources Development Canada, of Employment Insurance claims that were active in 1998 and dormant EI claims (claims that were not being processed) from 1994 to 1998. This selection method produced a file of 41,549 claimants from which to draw the comparison group sample. The comparison group data file was matched to the participant data file based on the time periods for which members of the comparison group were receiving EI. To accomplish this matching, three time periods were defined according to observed values for program end dates in the population of program participants. That is, within the population of program participants, the end dates of participants' most recent interventions ranged from April 26, 1997 to October 31, 1998, and this span was divided into three time periods. For comparison group sampling purposes, the following time periods were derived: April 26, 1997 to September 30, 1997; October 1, 1997 to April 30, 1998; and May 1, 1998 to October 31, 1998.
Reference date flags were then computed using the mid-point of each of these time periods (June 1, 1997, December 1, 1997, and July 1, 1998), so that an individual in the comparison group who was EI eligible at the reference date (the mid-point of the time period) would fall into that time period cohort. This meant, however, that the cohorts were not necessarily mutually exclusive, because an individual could have been EI eligible at more than one of the reference dates. Based on the participant population characteristics, for each time period cohort a listing was produced of the time in weeks between the end of the latest intervention and the start date of the most recent EI eligibility period. These time frames were further broken down into five categories based on the amount of time into the EI eligibility period that the participant's intervention came to an end. These categories were 13 weeks or less, 14 to 26 weeks, 27 to 39 weeks, 40 to 52 weeks, and 53 weeks or more. Each comparison group cohort was then similarly broken down into the same five categories based on the time in weeks between the reference date (the mid-point of one of the three time period cohorts) and the start date of the most recent EI eligibility period (see Exhibit C.3). The comparison sample was drawn in the same proportions as were observed for each of the three time period cohorts in the participant population. For example, if 13.3 percent of the population of participants fell into the first time cohort (i.e., the most recent intervention end dates were between April 26, 1997 and September 30, 1997), the comparison group was sampled to ensure that 13.3 percent of cases were EI eligible at the mid-point of this time period (i.e., June 1, 1997). The comparison group sample was further stratified by the number of weeks between the end date of the latest intervention and the start date of the most recent EI eligibility period.
For example, if 10.1 percent of the participant April-to-September time cohort had 14 to 26 weeks between the end date of their latest intervention and the start date of their most recent EI period, then 10.1 percent of the comparison group population in the April-to-September time cohort (those EI eligible at the mid-point of this time period, June 1, 1997) was sampled from those with 14 to 26 weeks between the reference date (the mid-point of the time period) and the beginning of their most recent EI eligibility period. To correct for the fact that the comparison time cohorts are not mutually exclusive, each time period cohort was sampled separately and a flag was computed to identify sampled cases. As such, it was possible to track these cases and exclude them when sampling from subsequent time periods. Thus the final comparison group sample consisted of 2,637 cases in three mutually exclusive time period cohorts drawn from a population of 41,549.
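The stratification described above can be sketched as follows. The five week-bands are taken from the text; the example proportions reuse the illustrative 13.3 percent and 10.1 percent figures, and the cell-count calculation is an assumption about how the proportional draw would be implemented:

```python
# Sketch of the comparison-group stratification described above.
def week_category(weeks):
    """Bucket the gap (in weeks) between intervention end (or reference
    date) and the start of the most recent EI eligibility period."""
    if weeks <= 13:
        return "13 weeks or less"
    elif weeks <= 26:
        return "14 to 26 weeks"
    elif weeks <= 39:
        return "27 to 39 weeks"
    elif weeks <= 52:
        return "40 to 52 weeks"
    return "53 weeks or more"

def stratum_draw(cohort_share, band_share, total_sample):
    """Cases to sample for one cohort x week-band cell, matching the
    proportions observed in the participant population."""
    return round(cohort_share * band_share * total_sample)

# e.g., 13.3% of participants in the first cohort, 10.1% of that cohort
# in the 14-26 week band, applied to the 2,637-case comparison sample:
print(week_category(20))
print(stratum_draw(0.133, 0.101, 2637))
```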
A comparison group survey instrument was developed in the early winter of 1998 and reviewed by the Joint Evaluation Committee (JEC) in January 1999. Based on the JEC's review, a number of changes were made to both the participant and comparison group surveys, including wording changes; the addition of a number of questions related to respondents' work profiles, Labour Market Development Agreement programs, attitudes, and use of income assistance; and modifications to response categories. Throughout February and March, as pre-tests were being conducted with the participant survey, the comparison group instrument was modified to reflect the ongoing changes being made to the corresponding participant survey. Because the instrument was virtually identical to the participant survey, with the exception of several questions that were not asked of non-participants and slightly different wording of some questions, all pre-test information from the participant survey was equally applicable to the comparison group survey. Thus, modifications to the sequencing and clarity of questions, as well as checks on wording and flow, were made to the comparison group survey instrument on the basis of participant survey results. Furthermore, the length of the comparison group survey was easily deduced from the results of the participant survey pre-test because we were able to record the difference in the number and type of questions between the two survey instruments. The response rate for the comparison group survey47 is presented in Exhibit C.4. The response rate is the proportion of individuals from the functional sample who responded to the survey. Conversely, the refusal rate represents the proportion of cases from the functional sample who declined to participate in the survey. The functional sample factors out the attrition in the survey, leaving only the sample comprising completions, refusals, and those numbers attempted but not reached by the completion of fieldwork (e.g., appointments for interviews that were not kept, retired phone numbers, respondents who were unavailable for the duration of the survey). Attrition includes numbers not in service, duplicate phone numbers, respondents who spoke neither French nor English, and respondents who indicated no knowledge of the topic or were ineligible to take part (e.g., LMDA participants). The overall margin of error is
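The functional-sample rate calculations defined above can be sketched as follows. All counts in the example are hypothetical, not figures from Exhibits C.2 or C.4:

```python
# Sketch of the response- and refusal-rate definitions used above.
# All counts are hypothetical.
def survey_rates(completions, refusals, not_reached):
    """The functional sample excludes attrition (numbers not in service,
    duplicates, language barriers, no knowledge of the topic) and keeps
    only completions, refusals, and numbers attempted but not reached."""
    functional = completions + refusals + not_reached
    return {
        "response_rate": completions / functional,
        "refusal_rate": refusals / functional,
    }

rates = survey_rates(completions=600, refusals=150, not_reached=250)
print(rates)
```

Note that attrition never enters the denominator: a disconnected number lowers neither the response rate nor the refusal rate.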
C.4 Multivariate Analysis

Eleven dependent variables representing key employment, earnings, and income support use outcomes were tested in the multivariate models. These variables are based for the most part on survey data. The models were run for all participants and separately for certain socio-demographic, rural/urban and claimant status segments. The outcomes correspond to the key objectives of the Canada/PEI LMDA and the Employment Benefits and Support Measures, which are to lead to sustained employment and a reduction in dependency on income support. Logistic (logit) regression was used for binary dependent variables (yes/no) and Ordinary Least Squares (OLS) regression was used for continuous dependent variables (numeric values). These variables follow:
Along with the intervention (EBSM) binary variables (e.g., participant in Employment Assistance Services or not), a common set of explanatory (control) variables was introduced into the models for each dependent variable (the outcomes). The purpose was to assess (or control for) the influence of other factors on the intervention's impact on the outcomes. These other factors included the time since the intervention and antecedent socio-demographic and employment-history variables. These variables were selected because they were thought, a priori, to have an impact on outcomes and because participants and non-participants differed with respect to these variables.49 Note that "intervention" here refers to the end of the intervention for participants in Canada/PEI LMDA EBSMs and to the reference date for comparison group members. The variables entered into the models follow:
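The model-family rule described above (logit for binary outcomes, OLS for continuous ones) can be sketched as follows. The outcome names are illustrative placeholders, not the study's actual eleven dependent variables:

```python
# Sketch of the model-family choice described above: logistic (logit)
# regression for binary outcomes, OLS for continuous ones.
def model_family(outcome_type):
    """Return the regression family for a dependent variable type."""
    return "logit" if outcome_type == "binary" else "OLS"

# Hypothetical outcomes of each type:
outcomes = {
    "employed after the intervention (yes/no)": "binary",
    "weekly earnings ($)": "continuous",
    "weeks of income support use": "continuous",
}
for name, kind in outcomes.items():
    print(name, "->", model_family(kind))
```

In each model, the outcome is regressed on the EBSM participation dummies plus the common controls, so the intervention coefficients are estimated net of the control variables.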