
2.0 The Formative Evaluation Approach


2.1 The ACR Program Evaluation Process


The Cooperation Agreement that established the ACR Strategic Initiative required that a Management Committee be formed to manage the implementation of the Initiative. As well, a federal/provincial Evaluation Steering Committee was to monitor, review and evaluate the effectiveness of the Initiative. An evaluation framework was to be developed to determine the Initiative's effectiveness, efficiency and potential to contribute to social security reform.

Pursuant to the evaluation requirement described above, the ACR Evaluation Steering Committee developed terms of reference for the program evaluation and issued them through a Request for Proposal process.

The program evaluation requested by the ACR Evaluation Steering Committee was to comprise a "formative evaluation", a "baseline data study" and, in a separate engagement to be conducted at a later date, a "summative evaluation".

The "formative evaluation" was to address a series of questions determined by the ACR Evaluation Steering Committee. These are set out in Appendix a to this report. This report focuses on responding to these questions and reporting other formative evaluation issues identified in the course of the evaluation.

 


2.2 Formative and Summative Evaluation

A formal, comprehensive program evaluation is conducted by carrying out two conceptually distinct yet interrelated evaluation exercises, known as the "formative" and "summative" evaluations.

The formative evaluation focuses on the conceptual, organizational and operational aspects of a program and, through detailed research, data gathering, study and evaluation, assesses the strengths and weaknesses of those aspects. In this way, an assessment is made of how well the program has been designed and managed, and of what factors are likely to affect the efficiency, economy and effectiveness of the program.

The second evaluation exercise is the summative evaluation, in which specific, appropriate research methods are used to measure the actual outcomes or impacts of a program in relation to the originally intended impacts, goals and objectives. The summative evaluation does not focus on the operations or processes carried out within a program; rather, it focuses on the end products, the ultimate effects of the program. It addresses the question of effectiveness directly by attempting to quantify the effects of a program and to draw conclusions about the relative value of those effects.

To evaluate a program fully, it is necessary to conduct both a formative and a summative evaluation. It is usually advantageous to conduct the formative evaluation first, because the knowledge gained not only produces the insights described above but also plays a key role in the design of appropriate summative evaluation methodologies.

In the case of the ACR Strategic Initiative, a detailed formative evaluation was particularly important because the actual "program" consisted of 11 pilot projects that were designed locally and therefore differed from project to project. Enough information about the individual pilot projects had to be obtained to consider the formative issues and to design a proper summative methodology.

 


2.3 Methodology

The research and data collection methodology for the formative evaluation consisted of a literature review, a document review, on-site interviews, follow-up telephone interviews, and the administration of a formative questionnaire to members of the 11 local ACR Committees and to the service providers involved in the pilot projects.

The formative evaluation questionnaire was developed by combining questions designed to obtain fundamental descriptive information about the ACR pilot projects with:

  • questions of the ACR Evaluation Steering Committee that could be addressed in a formative evaluation; and
  • questions needed to address broader aspects of a formative evaluation.

The questionnaire was tailored to fit the characteristics of each individual pilot project and was faxed to the interviewees in advance of the on-site visits.

In each pilot project, the evaluation team met with as many members of the local ACR Committee as possible, as well as with representatives of the service provider organizations under contract. The questionnaire was completed either during these sessions or afterwards by project representatives, who then returned it to the evaluation team. Follow-up telephone interviews, ranging from one-half hour to one and one-half hours, were also conducted.

Based on the results of the data gathering and research described above, detailed case study reports were written for all the pilot projects and are presented in a separate volume.

 


2.4 Methodological Limitations

Several significant limitations to the research and data gathering methodology should be understood to fully appreciate what was discovered and what could not be assessed during the formative evaluation.

First, there were limits to the amount of time and effort that local ACR committee members and others at the pilot project level could devote to the evaluation. Caseloads and overall workloads at the field level of the three government partners to the Cooperation Agreement were extremely high, leaving very little additional capacity to participate in a program evaluation.

Second, "evaluation exhaustion" may have been a factor at the field level because a number of reviews and evaluations had been conducted and some of the respondents to the formal ACR evaluation had already responded to similar kinds of questions asked for different purposes by other stakeholders.

Third, fundamental changes to the basic business, organizational structures, and roles and responsibilities of the government partners were being implemented while the pilot projects were being developed and during the period of the evaluation. The impacts of these changes were unfolding at the field level almost daily, in a manner that made pilot project implementation more challenging and evaluation efforts more difficult.

As a result of these factors, the evaluation team was unable to administer the questionnaire described above in a completely rigorous manner. First, responses could not be obtained from all pilot project representatives. Second, complete responses could not be obtained in every instance. Third, because the pilot projects differed, the meaning and relevance of the questions varied among respondents, making it difficult to develop generalizations about the issues. Finally, it was not possible to follow up the responses in detail, or to obtain more detailed explanations or varying views on the same issues.

