4. Methodological Issues for the Summative Evaluations


This section discusses a range of methodological issues for future evaluations of the IAM Initiative. The first part of the section highlights the following themes:

  • creating a valid comparison group for the IAM is unlikely;
  • outcomes must be more clearly defined;
  • attribution is unlikely; and
  • logistics must be managed.

With these themes as a backdrop, the latter part of this section outlines a potential design for summative evaluations of the IAM programs.

4.1 Creating a Valid Comparison Group for the IAM is Unlikely

In evaluation, as in science, comparing outcomes for those who receive a program with those who do not is the accepted method for inferring incremental impacts of a program. However, it is very unlikely that a valid comparison group could be obtained for summative evaluations of the IAM Initiative. The approaches reviewed below are either not possible in the case of IAM or would require costly coordination and the participation of many institutions.

Randomization

The ideal method for creating a valid comparison is randomization. Randomly assigning subjects to a treatment group or a control group ensures that, provided the samples are large enough and drawn from a large population, the two groups are statistically equivalent in both observed and unobserved characteristics. Any observed differences in outcomes can then be attributed to the program intervention. Randomization is rarely possible in policy or socio-economic program settings, however, because ethical and political concerns require that programs be available to all.
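
Although infeasible here, random assignment is simple to express. The following Python sketch uses entirely hypothetical applicant records to illustrate how randomization balances attributes between groups; nothing in it reflects actual IAM data or design.

```python
# A minimal sketch of random assignment with hypothetical applicant records;
# no actual IAM data or design is reflected here.
import random

random.seed(0)
applicants = [{"id": i, "age": random.randint(18, 25)} for i in range(200)]

random.shuffle(applicants)
half = len(applicants) // 2
treatment, control = applicants[:half], applicants[half:]

def mean_age(group):
    return sum(a["age"] for a in group) / len(group)

# With large samples drawn from a large population, the group averages
# converge, so later outcome differences can be attributed to the program.
print(f"treatment: {mean_age(treatment):.1f}  control: {mean_age(control):.1f}")
```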

The central problem with using actual participants and non-participants, instead of randomly assigned subjects, is that those who apply and are accepted into programs (participants) may differ from those who do not apply or are not accepted (non-participants). For example, participating students may have superior grades or a stronger desire to travel abroad. Therefore, observed differences between IAM participants and non-participants could be due to pre-program differences between these two groups.

Using Secondary Data Sources

Another approach is to construct a comparison between program participants and non-participants using secondary data sources. For example, if both participants and non-participants hold drivers' licenses, consistent measures such as age and gender can be obtained for both groups. Health insurance databases and school records could also be used to generate information on the two groups. These "background" variables can then be used in multivariate analysis to control for differences between the two groups, with a "dummy" variable inserted into the statistical model to capture any effects arising from the program (see the sketch following the list below). However, there are several limitations to this approach:

  • A "complete" set of background variables must be collected to control for all possible impacts on outcomes caused by individual attributes. Few databases offer a complete set of data, requiring researchers to try to assemble information from several sources.
  • It is difficult to know what constitutes a complete set of background variables, and one always wonders whether a crucial variable has been omitted that could account for the observed differences in outcomes.
  • Adding control variables to a multivariate specification introduces a range of statistical difficulties.26
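
To make the dummy-variable approach concrete, the following Python sketch fits an ordinary least squares regression with statsmodels on simulated data. Every variable name (earnings, age, gpa, participated) is a hypothetical stand-in for background variables drawn from secondary sources.

```python
# A sketch of the dummy-variable approach using statsmodels OLS on simulated
# data; the variables are hypothetical stand-ins for background measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 25, n).astype(float)
gpa = rng.uniform(2.0, 4.0, n)
participated = rng.integers(0, 2, n).astype(float)  # 1 = IAM participant

# Simulated outcome: earnings driven by background variables plus noise.
earnings = (20000 + 800 * age + 3000 * gpa + 1500 * participated
            + rng.normal(0, 2000, n))

X = sm.add_constant(np.column_stack([age, gpa, participated]))
fit = sm.OLS(earnings, X).fit()

# fit.params[3] estimates the program effect -- valid only if the background
# variables form a complete set, which is precisely the caveat above.
print(fit.params)
```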

"Matching" Participants to Non-Participants

Yet another approach to the construction of a comparison group is to "match" participants to non-participants on attributes believed to determine the outcomes. For example, a participant who is a single, 19-year-old female engineering student from Alberta would be matched to a non-participant with all of these same attributes. Clearly, matching on a wider range of variables increases the power of the comparison process, but it raises the data demands because exact matching requires the same information on both participants and non-participants. A less exact approach is to match on ranges, such as an age range of 18 to 21, rather than on the exact age of 19. The match-on-ranges approach lessens the data demands but reduces the power of the test.
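
A minimal sketch of the two matching strategies, in Python, using the hypothetical attributes from the example above; all records are invented.

```python
# Illustrative exact and range-based matching; the attribute set follows the
# example in the text, and all records here are invented.
participants = [{"id": 1, "age": 19, "sex": "F", "field": "engineering",
                 "province": "AB", "status": "single student"}]
pool = [
    {"id": 101, "age": 19, "sex": "F", "field": "engineering",
     "province": "AB", "status": "single student"},
    {"id": 102, "age": 21, "sex": "F", "field": "engineering",
     "province": "AB", "status": "single student"},
]

KEYS = ("age", "sex", "field", "province", "status")

def exact_match(p, pool):
    """Match on every attribute: the most powerful, most data-hungry test."""
    return [c for c in pool if all(c[k] == p[k] for k in KEYS)]

def range_match(p, pool, age_band=(18, 21)):
    """Relax exact age to a band: lighter data demands, weaker test."""
    other = [k for k in KEYS if k != "age"]
    return [c for c in pool
            if age_band[0] <= c["age"] <= age_band[1]
            and all(c[k] == p[k] for k in other)]

print(exact_match(participants[0], pool))  # matches id 101 only
print(range_match(participants[0], pool))  # matches ids 101 and 102
```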

Using a Group Norm Approach

Finally, one can compare the average experience of program participants with a group norm by using, for example, the National Graduates Survey (NGS). This method requires selecting a relevant outcome variable for the comparison, and it completely ignores the problem of self-selection. To increase the precision of the test, HRDC could arrange with Statistics Canada to extract a sub-sample from the NGS that resembles the IAM participants more closely than the total sample does, but this would be time-consuming. Another option is a national survey of undergraduate students administered through universities; it may be possible to include several questions on such a survey to provide a reference point for the IAM.
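
As an illustration of the group-norm comparison, the following sketch tests a hypothetical sample of participant earnings against an assumed NGS norm using SciPy; both the sample and the norm value are invented.

```python
# A sketch of the group-norm test: compare participants' mean outcome with a
# published norm. The sample and the norm value are invented; a real test
# would use an NGS-derived figure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
participant_earnings = rng.normal(42000, 6000, 120)  # hypothetical survey data
ngs_norm = 40000  # hypothetical NGS average for comparable graduates

res = stats.ttest_1samp(participant_earnings, ngs_norm)
# A small p-value indicates the means differ, but says nothing about
# self-selection into the program -- the caveat noted above.
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```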

Implications for IAM Evaluations

All of the above approaches are either not possible or would require costly co-ordination and the participation of many institutions. For example, creating a comparison group from secondary data sources for the IAM would require co-operation from the participating schools. Students typically provide a wealth of family and background data on their applications for admission and financial assistance. As an IAM project is approved, HRDC would need to arrange with school administrations to secure information that would support the identification of a comparison group. Registrars' offices would need to be approached, and college or university records consulted, to identify non-participating students for the comparison group. Further, all students (participants and non-participants) would have to consent to the collection of their personal data and would need to be offered incentives to encourage their inclusion in follow-up surveys.

Also, arrangements for comparison groups would need to be made with each participating school, which would be too expensive to be feasible. Furthermore, few universities and colleges would be willing to share student information to create an effective comparison group.

Other reference groups, such as the National Graduates Survey, contain few questions relevant to testing the goals of the IAM.

4.2 Outcome Variables Must be More Clearly Defined

The IAM Initiative has intended outcomes for institutions and students. In the case of institutions, project directors and educational partners could report on outcomes for their institutions and their own careers in follow-up surveys that occur within a year or two of the project. By then, most of the institutional and professional outcomes should be apparent to participating faculty.

The evaluation of student outcomes is more problematic, however. Student outcomes, such as career and other gains, will be difficult to detect because of the problems involved in creating valid comparison groups and the complexity of defining sensible, long-term outcome measures. Most university/college graduates will find employment and, on average, will experience career progress in the form of higher wages and increased responsibility. The question is whether IAM participants show a statistically significant advantage in these areas. Therefore, establishing a norm from the comparison group is essential. If one confines outcomes to income, however, the net differences may be small. Selecting an outcome that reflects a goal of the IAM Initiative, such as success in pursuing a career related to international trade or other "global" careers, requires that non-participants who select these careers are also identified and monitored. Identifying these non-participants while they are at college or university will be challenging.

Also, most students participate in IAM projects during the third or fourth year of their post-secondary education. Some go on to pursue graduate or professional training, which can last five years or more beyond their undergraduate degrees. Although their experience with the IAM Initiative may affect their choice of post-graduate training and may eventually pay dividends in their careers, these effects can occur many years after program participation.

4.3 Attribution is Unlikely

The IAM Initiative, like most international exchange programs for post-secondary students, is intended to increase the chance that a student will pursue a career demanding awareness of, and experience with, foreign cultures. However, it is unlikely that initiatives such as IAM will have a measurable impact until a student has been in the workforce for several years. A wide range of other factors also influence career choice and progress, while the influence of a student's IAM experience is likely to be modest and of declining importance as the individual matures and moves up the career ladder.

Collecting baseline data and comparing participants to the "average" student at an institution is the most promising route to controlling selection bias. This means the program may wish to collect detailed information on students at the outset of their participation.

4.4 Logistics Must be Managed

Securing survey responses from students and faculty members will always present major challenges. Students are highly mobile and, for a large fraction of participants, address or phone information will become out of date as early as three months after the project. Because students participating in IAM projects are usually in their third or fourth year, few will have the same address and phone number after two years. This has four implications for the research:

  • Follow-up surveys need to commence very soon after program completion.
  • Collateral information on the student's permanent address, family contact or a friend who will know his/her whereabouts is critical because, without it, any long-term follow-up will surely fail.
  • Most importantly, the address and collateral location information should be collected at the time of application for all students and faculty. This will improve the prospects for conducting follow-up surveys.
  • Informed consent should likewise be secured at the time of student application. Failing to do so would require a convoluted process involving an initial mailing and then waiting for the consent card to be returned; experience shows the futility of that approach.

Project Directors present a different set of challenges. Several refused outright to participate in the formative evaluation, citing the administrative burden of previous reports and the fact that they had already completed a lengthy questionnaire associated with an audit of the program. The length of the evaluation questionnaire discouraged others from participating.

The academic cycle produces optimum windows for academic surveys: April and December, at exam time, are the best periods for surveying students. October to November, January to March, and May are reasonable times for surveys of faculty. Summer is very poor.

4.5 Suggestions for the Summative Evaluation

With these four themes as background, it is possible to outline a methodology for the summative evaluation of the IAM Initiative. Given the problems associated with comparison groups and the difficulty in inferring attribution, the summative evaluation may wish to focus on collecting information on short-term outcomes and on how each project has helped the post-secondary institution increase its capacity to participate in international exchange programs.

Also, project applications, student applications, and project reports could be used to anchor the evaluation. This approach would reduce the level of effort needed for follow-up surveys by making better use of the application process and by collecting key information from Project Directors in follow-up project reports. For example, the survey of Project Directors and Educational Partners could be eliminated entirely.

The proposed lines of evidence for the summative evaluations are discussed below.

Administrative Data

The administrative data line of evidence consists of the following data collection and analysis activities:

  • Project application (submitted by Project Director);
  • Student/participant application;
  • Project report submitted by Project Director at conclusion of the project; and
  • Program database development and annual reporting.

The project application could provide information about the proposed project as well as the participating faculty. It is important to collect attributes of the Project Director and Educational Partners such as:

  • Other research grants (general and those associated with foreign research);
  • Number of graduate students supervised;
  • Academic rank, and years of employment at the present institution and at all academic institutions; and
  • Highest degree and year of graduation.

The standard applications for all students could be designed to collect the following information (a sketch of such a record follows the list):

  • Name, current address and phone;
  • Secondary contact information (name and address of a parent or other person who will know where the student will be);
  • Current academic program and grade point average;
  • Work history (last 3 positions);
  • Experience with international travel;
  • Capacity in additional languages;
  • Expectations for the program;
  • Personal and family socio-economic status (occupation of parents, family income, etc.); and
  • Informed consent release.
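
For illustration only, these application fields might be modelled as a simple record such as the following Python dataclass; the field names and types are hypothetical, not a prescribed IAM data format.

```python
# A hypothetical record mirroring the application fields above; the names
# and types are illustrative only, not a prescribed IAM data format.
from dataclasses import dataclass, field

@dataclass
class StudentApplication:
    name: str
    current_address: str
    phone: str
    secondary_contact: str      # parent or other who will know the student's whereabouts
    academic_program: str
    grade_point_average: float
    work_history: list[str] = field(default_factory=list)  # last 3 positions
    international_travel_experience: str = ""
    additional_languages: list[str] = field(default_factory=list)
    program_expectations: str = ""
    family_socioeconomic_status: str = ""
    informed_consent: bool = False  # release must be on file before follow-up
```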

The onus for ensuring that all student and Educational Partner applications are complete could be placed on the Project Directors. In effect, the application process would provide some of the information normally collected by follow-up surveys.

Reports by Project Directors submitted at the conclusion of the project could supply a range of information on activities and short-term outcomes from the perspective of the participating faculty members. This includes the number of students, duration of stay, total costs incurred in the exchange, all sources of funding not covered by the IAM (e.g., costs cross-supported by other grant funds), project activities, and so on. Project Directors could collect input from Educational Partners using a standard questionnaire.27 This would ensure that the experience of all educational institutions is included in the report.

The IAM program could develop a standard reporting template with an instruction guide to increase uniformity of response. The annual report could also include feedback from Educational Partners on the contribution the IAM project has made to their students and institutions. The Project Director can submit all feedback using the same template.

A program database could be developed using the project and student applications and the Project Directors' reports. This should be a proper relational database that can support queries and annual reporting for the program, as sketched below.
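
A minimal sketch of the relational structure this implies, using SQLite from Python; the table and column names are illustrative, not a prescribed schema.

```python
# A minimal sketch of the relational structure implied above, using SQLite;
# table and column names are illustrative, not a prescribed schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE project (
    project_id  INTEGER PRIMARY KEY,
    director    TEXT,
    institution TEXT
);
CREATE TABLE student (
    student_id        INTEGER PRIMARY KEY,
    project_id        INTEGER REFERENCES project(project_id),
    name              TEXT,
    permanent_address TEXT,
    consent           INTEGER  -- informed-consent release on file
);
CREATE TABLE project_report (
    project_id    INTEGER REFERENCES project(project_id),
    students_sent INTEGER,
    total_cost    REAL
);
""")

con.execute("INSERT INTO project VALUES (1, 'A. Director', 'Example University')")
con.execute("INSERT INTO project_report VALUES (1, 12, 48000.0)")

# Annual reporting then reduces to queries, e.g. students and costs by institution.
print(con.execute("""
    SELECT p.institution, SUM(r.students_sent), SUM(r.total_cost)
    FROM project p JOIN project_report r USING (project_id)
    GROUP BY p.institution
""").fetchall())
```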

The proposed approach relies both on the creation of a standard project report template and on Project Directors supplying additional information on the Educational Partners at the conclusion of the project. This raises the burden on Project Directors during the project, but eliminates the need to participate in a follow-up survey. IAM could consider allowing a small portion of project funds (e.g., $2,000) to be used to offset this administrative burden.

Surveys of Students

IAM may want to consider using application forms to eliminate the need for an entry survey. The follow-up survey of students could then be configured in three waves:

First Wave: Students would complete an exit survey within a month of their return or completion of the program (and before classes end in April). The questionnaire would collect much of the information included in the survey used in this formative evaluation.

This survey must confirm the details for the secondary contact (permanent address) to support follow-up surveys. The questionnaire could contain a release page that the student signs, indicating agreement that HRDC may solicit the student's co-ordinates from the secondary contact.

Second Wave: A second follow-up survey would be completed a year after students return. HRDC would initiate the process by sending a letter to the secondary contact (with a copy of the release signed by the student) requesting that the student's current phone number be provided via a toll-free line. A follow-up call may be needed to collect this information. The student survey would then proceed as a telephone interview and would collect information on current educational and labour market activity.

Third Wave: Subsequent surveys are possible, with periodic re-contacting of the secondary contact, although attrition will likely increase over time.

The survey data can be added to the program database to support an integrated information system for the summative evaluation. Participants can be cross-referenced to their projects, and vice versa, to support analysis of outcomes by project type, and even by Project Director attributes and institution (see the sketch below). The follow-up questions should be held constant across waves, because changing them would impede longitudinal analysis and limit the ability of the program database to support the summative evaluation.
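
For illustration, this cross-referencing amounts to joining survey records to student and project records. The sketch below uses pandas with invented data and column names.

```python
# A sketch of cross-referencing survey records to projects with pandas;
# all data and column names are invented for illustration.
import pandas as pd

surveys = pd.DataFrame({"student_id": [1, 2, 3, 4],
                        "employed": [1, 1, 0, 1]})       # wave-two responses
students = pd.DataFrame({"student_id": [1, 2, 3, 4],
                         "project_id": [10, 10, 11, 11]})
projects = pd.DataFrame({"project_id": [10, 11],
                         "project_type": ["mobile", "non-mobile"]})

merged = surveys.merge(students, on="student_id").merge(projects, on="project_id")
print(merged.groupby("project_type")["employed"].mean())
```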

The formative evaluation also provided a few important lessons for the execution of student surveys. For example, prize draws are a useful incentive to encourage students to participate in the research. Questionnaires must be concise: all the questionnaires used in the evaluation were too long and attempted to collect much "nice to have" rather than "need to have" information. Also, if the academic cycle is not taken into account when scheduling the surveys, response rates will be very poor. This applies to all phases of data collection for the summative evaluation.

Qualitative Data

Key Informant Interviews: Key informant interviews with program managers, university/college representatives (such as Association of Universities and Colleges of Canada and Association of Canadian Community Colleges), and senior management within HRDC provide important program details and context. As an option, key informant interviews may also be conducted with selected Project Directors, but it is likely that few will be able to offer additional information beyond that provided in the project report.

Case Studies and Focus Groups: Case studies and focus groups with students can offer useful insights into program operation. Case studies would also be useful for obtaining greater insight into projects with non-mobile students. For example, interviews with Project Directors, focus groups with participating students, and a review of curricula would constitute a useful case study.

Case studies and focus groups must be conducted within a narrow time frame (i.e., immediately after students have returned). In particular, focus groups must be convened before April 1 because students disperse as soon as classes have concluded. Missing this narrow scheduling window would erode the value of focus groups as a line of evidence.


Footnotes

26 These include multicollinearity and simultaneous equation bias. The latter occurs when a variable on the right-hand side of the regression model is also determined by the dependent (left-hand side) variable. Both problems reduce the precision of the statistical estimation and can invalidate the model.
27 In general, there is more information value in collecting information from Project Directors than from Educational Partners. Project Directors usually have the most information and typically have more contact with the students and the institutions in other countries.

