
4. RBA Accountability


The purpose of this chapter is to assess the accountability framework for the RBAs. The research team gained considerable knowledge of the RBA initiative and the individual RBAs through the review of RBA documentation and the Mid-Term review reports, and through interviews with HRDC and RBA staff. This knowledge of the RBA initiative, combined with the team's experience in the development and review of accountability frameworks, was used to assess the current RBA accountability framework. The team was also asked to provide an overview of a review strategy for the new five-year strategy (the Aboriginal Human Resources Development Strategy [AHRDS]).18

To set a context for the assessment of the accountability framework, this chapter begins with a brief review of the general accountability requirements set forth by the Government of Canada. Next, it presents a model of the RBA's performance so as to provide a foundation for assessing the RBA accountability framework. It then assesses the accountability framework and suggests ways in which it might be improved. Finally, it suggests a review strategy that synthesizes the collection of performance information for the day-to-day management of the RBA with that for the accountability requirements of HRDC.

4.1 Planning, Reporting and Accountability Structure

According to the Treasury Board of Canada, an accountability framework defines the nature and scope of a department's responsibilities, its performance expectations, and the monitoring and reporting requirements through which the department will answer for the authority vested in it. The Planning, Reporting and Accountability Structure (PRAS) is the federal government's accountability framework, and all departmental programming is expected to conform to it. For each business line or program, the PRAS requires:

  1. an objective;
  2. a description;
  3. key results that will be reported in planning and performance documents;
  4. a performance measurement strategy; and
  5. the position in the organization accountable for achieving results.

An accountability framework provides the basis for developing an accountability accord. An accountability accord is an agreement between organizational levels that sets out performance commitments and performance targets. Also referred to as a 'performance agreement' or 'performance contract,' an accord covers a specified period and usually reflects strategic priorities.

4.1.1 Accountability and Evaluation Frameworks

Many of the first four elements of an accountability framework are found in an evaluation framework. Therefore, in general, accountability and evaluation frameworks are mutually supportive documents and must be completely consistent. The major difference between them is their purpose. An accountability framework focuses on the key results that will be reported in planning and performance documents, along with the position in the organization accountable for achieving these results. An evaluation framework identifies which performance data should be collected on an ongoing basis and which should be collected at the time of the evaluation. In this way, it seeks to optimize evaluation data collection in terms of cost, timing and reliability. An evaluation framework also outlines the methodology for estimating impacts and cost/benefits for the program.

More sophisticated evaluation frameworks identify a flow of performance information, including (a) the day-to-day information needed to manage the program; (b) implementation evaluations to ensure that the performance of a new initiative is not compromised by design changes made during its implementation; (c) special studies (such as management reviews and audits) that deal with specific problems or information requirements; (d) interim evaluations (usually carried out in the early days of a program's operation) designed to look at a program's design and preliminary impacts; and (e) summative or final evaluations (usually carried out after a program has matured) designed to assess how well the program is achieving its objectives. Theoretically, this flow of information builds over time, reinforcing successive needs and minimizing response burden on program participants and staff.

4.1.2 Program Model

At the heart of both accountability and evaluation frameworks is performance. Performance is not just about knowing what you want to achieve. It is also about knowing how you will achieve it. This is because current accountability models do more than promise results and explain shortfalls. They now include the need to show that reasonable and appropriate precautions have been taken to avoid shortfalls. And this requires knowing and making explicit the 'cause and effect' relationships of performance so that they can be managed and validated. For example, RBA staff skills affect their ability to counsel. In turn, this counselling ability affects RBA staff's ability to help clients develop appropriate personal action plans, which affect the bottom line — whether RBA clients are able to find meaningful employment after they have completed their RBA programming.

Performance may be based on a program model.19 The model shows the intended cause and effect relationships among key activities, their outputs and their impacts. It reflects program design from a performance perspective. The evaluation framework makes the model explicit, while the accountability framework usually does not.

The program model lays a foundation for assessing whether an organization has the capacity to manage its performance. Capacity is determined by:

  1. assessing the infrastructure (i.e., employee morale, skills, organizational structure, policies, procedures, systems, standards, etc.) underlying the program;
  2. identifying shortcomings in business processes and personnel; and
  3. aligning the organization's infrastructure with program objectives, especially those impacts/results for which it has made a commitment.

In a rapidly changing environment, such as that of the RBA initiative, capacity assessment can provide an early indication of whether a program or initiative is being implemented successfully. It provides the means not only to identify and redress performance shortfalls, but also to identify and redress threats to performance before they occur.

Both the program model and capacity assessment provide a foundation for developing a performance measurement system. The program model focuses on outcome measures, while the capacity assessment focuses on process measures. Both are compatible with the PRAS and are necessary to identify key accountability results.

4.2 RBA Program Models

HRDC negotiates and signs RBAs with various Aboriginal organizations to give them the opportunity to design and deliver employment programs that reflect and serve Aboriginal needs at the local labour market and community level. These agreements enable Aboriginal organizations, through a number of employment programs, to help Aboriginal people (including the employed and those not in the labour force, as well as the unemployed) prepare for, obtain, and maintain employment.

The performance of the RBA initiative is best depicted by two program models. The first shows the cause and effect relationship among the activities, outputs and impacts of the RBA-holder — the Aboriginal organizations that design and deliver the programming. The second shows the cause and effect relationship among the activities, outputs and impacts of HRDC in its role as program administrator, at the level of the RBA initiative. Together these models show the initiative's design from a performance perspective. They also focus on identifying outcome measures that lay a foundation for the PRAS.

It should be noted that the proposed models presented in this section are based on RBA documents and interviews with selected RBA-holders and HRDC personnel. While the proposed models have been reviewed by staff of the ARO at HRDC, they have not been reviewed in any formal way by the various RBA-holders. Such a review is suggested, not only to ensure the models' accuracy but also to ensure their acceptance among the RBA-holders. Any changes to the initiative would lead to material changes in the program models.

4.2.1 RBA-Holders

The performance of RBA-holders rests on eight basic activities. First, RBA managers negotiate budgets and associated results. Then, they assess their capacity to realize these performance expectations and, where necessary, build or strengthen their capacity to achieve these results. Next, they promote the RBA programming. At this point, they are ready to identify and determine the eligibility of potential clients. Next, they counsel them and help them to determine their employment programming needs. Eligible clients whose employment action plans are approved are then matched with appropriate delivery mechanisms. As clients are engaged in their employment programming, their progress is monitored and assessed, and the eventual results are reported to HRDC. As a final step, RBA managers evaluate how well the RBA performed and how well its performance was managed. The following explains each of these eight activities in more detail, along with their primary impacts.

Negotiate Budgets/Targets

Regional Bilateral Agreements (RBAs) are similar but not identical. In general, they cover the term of the agreement; local appeal process; funding, reporting and accountability mechanisms; services to be provided; data sharing; geographic area to be served; evaluation and monitoring of agreements; transitional measures; regional budget allocations; communities of interest; and joint projects. For each of the three years of the RBA's term, holders negotiate budgets and performance targets. These are established at the local level to reflect local conditions, such as the unemployment rate and the number of job-ready clients.

Success is measured using indicators. Primary success indicators are the number of clients who become employed or self-employed; the number of clients who successfully completed labour market interventions; and savings to the EI Account or Income Support programs (such as social assistance). Other success indicators include the proportion of youth participants; proportion of participants who self-identify as having a disability; number of trainees successfully completing interventions in high-skill occupational areas; number of on- and off-reserve participants; and duration of employment.

Negotiating budgets and targets should result in a better understanding of performance commitments and expected results. It may also improve the contribution of unit delivery mechanisms to RBA's performance commitments.

Build/Strengthen Capacity

Some RBAs need to build or strengthen their organizational capacity to successfully deliver their labour market programming. In particular, there is a recognized need for RBAs to increase their ability to (a) set objectives and priorities; (b) perform functions, solve problems and achieve their objectives; and (c) understand and deal with their labour market needs in a broad, strategic and sustainable manner.

Both internal and external improvements in capacity are needed. Externally, the RBAs encourage and support employers, employee and/or employer associations and communities to improve their capacity for dealing with human resources requirements and implementing labour force adjustments. There is also a need to better develop and use partnerships with the private sector (as well as others, such as training institutions) and to work out programming relationships with the provinces (e.g., social assistance and training allowances). Internally, there is a need to integrate the RBA into the community and, where necessary, establish or strengthen local labour market delivery mechanisms. RBAs may allocate funds to these mechanisms and negotiate/assign targets. Also at the internal level, there is a need to develop tools that lead to improvements in program design, delivery and results. These tools are grouped in five broad areas: (a) planning, such as labour market analysis, data analysis, accountability and monitoring/evaluation; (b) systems, such as data collection, accounting and EI screens; (c) people, such as counsellor training and project officer training; (d) self-assessment, such as counsellor tools; and (e) delivery, such as Aboriginal training institutions.

Immediate priorities lie in the areas of employment counselling, accountability (tracking, following up and measuring success), board training (team building), staff training (communication skills) and needs assessment (client needs).

Building capacity will increase the knowledge and skills present in the RBA, as well as its ability to efficiently and effectively deliver programming and results.

Promote Programming

RBAs may undertake a number of activities to increase awareness of their programming and its benefits. This should make it easier to attract participants. RBA managers may use a number of means to promote their programming, such as publications, flyers, brochures, meetings, presentations, seminars and 'word-of-mouth.' Promoting RBA programming should increase community support for, and interest in, the RBAs and their programs, including attracting clients.

Identify Clients/Determine Eligibility

Some RBA clients self-identify. Others are identified through local knowledge or in connection with local community business plans or projects. RBAs need to ensure that potential clients are eligible for RBA programming. The eligibility of participants is primarily based on their unemployment status, but there may also be a need to verify their Aboriginal status. Where potential clients are not eligible for RBA programming, the RBA may refer them to alternative federal or provincial programming. These activities ensure that RBA programming is directed at appropriate clients.

Counsel Clients/Determine Needs

RBAs provide a wide range of labour market programming to help their unemployed clients find employment. Their programs may include:

  1. targeted wage subsidies that encourage employers to hire individuals whom they would not normally hire in the absence of the subsidy;
  2. self-employment assistance (which helps individuals create jobs for themselves by starting a business);
  3. job creation partnerships (which provide individuals with employment opportunities through which they can gain work experience that leads to ongoing employment); and
  4. training purchases (which help individuals to obtain skills, ranging from basic to advanced, which should improve their immediate and long-term employment prospects). In some cases, tuition is paid, as opposed to seats being purchased.

In support of this programming, RBAs counsel their clients and help them create job finding clubs and develop job search strategies. RBAs may also support activities that identify better ways of helping individuals prepare for, or keep, employment and be productive participants in the labour force.

RBAs need to assess the skills and qualifications of eligible clients and determine with them what RBA programming best suits their individual needs. The suitability of eligible clients may be assessed in terms of their interest, commitment to achieve career goals, educational background, and readiness for participation in RBA programming. RBAs may prepare a personal action plan for each client that may be based on a case management approach. Determining client needs in consultation with the client increases client commitment and leads to better targeted, more efficient programming.

Some RBAs use committees to review and approve proposed personal action plans. Decisions may be appealed. This process ensures that the needs of the community and the individual are met and that unrealistic plans are not funded.

Based on clients' needs, RBAs provide appropriate employment services, including matching clients with identified delivery mechanisms, such as training institutes, employers or associations. Depending on the service(s) provided, clients may increase their knowledge, skills and experience as a step towards increasing their employment. Once they are employed, SAR/EI savings (defined as unpaid benefits) may result. Evaluations may want to look at other indicators of success in relation to government assistance. The program may also increase well-being, not only for the individual, but also for the community.

Monitor, Assess and Report Results

While RBA clients are undergoing their programming, their progress will be monitored and payments will be made. Where necessary, adjustments may be made. In addition, RBAs survey their clients after they have completed their programming to determine their employment status. Not only does monitoring help ensure that clients are progressing according to their personal action plans, it allows RBA managers to report to HRDC on the status and results of their programming. Using monitoring information helps RBA managers and staff to improve their performance and meet their commitments/results.

Review RBA

Besides the ongoing monitoring and day-to-day management of the RBA's program performance, RBA managers and staff rely on periodic internal/external evaluations and reviews to assess the effectiveness of its performance and control. In addition, they assess its performance management and measurement by appraising the value and contribution of various performance measures, eliminating unneeded measures/data and refining performance measures, targets and commitments. The RBA-holder shares these experiences throughout its organization in an effort not only to improve its performance management and measurement but also to make its programming more efficient and more effective. The RBA-holder also reports its performance to its clients and stakeholders.

Logic Chart

The logic chart, or performance map, that follows depicts these eight key activities, along with their main outputs and the impacts they are expected to produce. This Activity → Output → Impact chain shows what results the RBA-holders can achieve. As such, the logic chart makes explicit the cause and effect relationships of performance so that they can be managed and validated.

In the Activity → Output → Impact chain, activities are major (in terms of resources) or significant (in terms of importance) chunks of work. These are what resources are used for. Outputs are the goods or services provided by the RBA-holder to its clients. These are key because they are the basis on which clients judge program performance. Impacts are the consequences of the RBA-holder's outputs. They are often called outcomes.

In looking at the logic chart, it is important to note that the Activity → Output → Impact chain (that is, the vertical arrows) focuses on primary causality. For example, promoting RBA programming is primarily designed to increase the community's interest in, and support for, the program, including attracting clients. It may lead to increased employment, but other activities, such as counselling sessions, job finding clubs and RBA programming, are primarily designed for that purpose.

The horizontal arrows connecting the activities show the 'logical' sequence in which the program is delivered. They also capture the notion that earlier activities support the impacts achieved by later activities but that their 'influence' fades the further they are from the impact.
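
To make the logic chart easier to work with, an Activity → Output → Impact entry could be recorded in a simple data structure. The following is a minimal sketch in Python; the class and example entries are illustrative assumptions drawn from the 'Promote Programming' activity described above, not an official HRDC structure (the authoritative charts are the graphics referenced below).

    # Minimal sketch of an Activity -> Output -> Impact chain entry (illustrative only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Activity:
        name: str                                         # a major or significant chunk of work
        outputs: List[str] = field(default_factory=list)  # goods/services provided to clients
        impacts: List[str] = field(default_factory=list)  # consequences of those outputs (outcomes)

    # Example drawn from the 'Promote Programming' activity described earlier.
    promote = Activity(
        name="Promote programming",
        outputs=["Publications, flyers, brochures", "Meetings, presentations, seminars"],
        impacts=["Increased community support for and interest in the RBA", "Clients attracted"],
    )

    # The horizontal arrows (delivery sequence) can be captured as an ordered list of activities.
    rba_holder_chain = [promote]  # illustrative; the full chain would hold all eight activities in delivery order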

View Graphic 1: Logic chart for the RBA-holders

4.2.2 RBA — HRDC

The performance of the RBA initiative from the perspective of HRDC in its role as program administrator rests on six basic activities. First, HRDC managers establish the initiative's parameters. Next, they identify holders and negotiate agreements. Once agreements are in place they negotiate budgets and associated results. Then, they help RBA-holders build or strengthen their capacity to deliver RBA programming. As the RBA-holders deliver their programming, they send monitoring reports to HRDC for review. In the final step, HRDC managers review evaluations prepared by RBA-holders and integrate these into an evaluation of the program as a whole.

The following explains the first two of these activities in more detail, along with their primary impacts. The four remaining activities mirror those of the RBA-holders and need no further explanation.

Establish Program Parameters

HRDC managers are responsible for establishing the parameters of the RBA initiative. They design the initiative, including the respective roles and responsibilities of the Aboriginal organizations and HRDC. In dealing with the Treasury Board, they prepare funding proposals and establish the terms and conditions under which the program will operate. All of these activities lead to better designed and targeted programming.

As part of this activity, HRDC managers also develop a review strategy. This strategy not only leads to more timely, relevant information on the initiative's and the RBA's performance but also supports a results-based management regime.

Identify Holders/Negotiate Agreements

While the principal instrument of delivery is the RBA, HRDC negotiates these under the auspices of National Framework Agreements (NFAs), which are negotiated with the three national Aboriginal groups — First Nations, Métis and Inuit. The NFAs account for more than 50 RBAs. However, some Aboriginal groups fall outside the NFAs. HRDC negotiated agreements with 9 such groups. The breadth of such agreements ensures that HRDC helps Aboriginal organizations develop appropriate local labour market delivery approaches.

The logic chart on the next page depicts the six key activities of the RBA initiative from the HRDC perspective, along with their main outputs and the impacts they are expected to produce.

View Graphic 2: Logic chart for the RBA initiative from the HRDC perspective

4.3 Assessment of the Accountability Framework

The RBA initiative meets the requirements of the government's PRAS accountability framework. It has an objective, a description, key results to be reported in planning and performance documents, a performance measurement strategy, and the positions in the various organizations (e.g., RBA-holders and HRDC) accountable for achieving results. The purpose of this section is to examine the key results in the accountability framework for the RBAs and suggest improvements, if any.

This assessment looks at the key results from both a conceptual and a definitional point of view (i.e., what results should be in the accountability framework and how they are or should be defined). It also takes into account observations from the Mid-Term reviews and feedback from discussions with HRDC regional office and RBA staff who participated in the Mid-Term review. The assessment is also based on the understanding of the initiative reflected in the RBA program models.

The assessment assumes a simple client management model for the RBAs. This model holds that clients are identified, their specific programming needs (i.e., interventions) are determined/provided, and the outcome of this programming is monitored to find out what impact, if any, it had in helping the clients find employment. This simple client → intervention → outcome model suggests a focus for determining key results for the RBA initiative.
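
As a rough illustration of this model, the sketch below represents a client, the interventions received and the eventual outcome as simple records; the field names are assumptions made for illustration and are not the RBA data dictionary.

    # Minimal client -> intervention -> outcome sketch (field names are illustrative).
    from dataclasses import dataclass, field
    from typing import List, Optional
    from datetime import date

    @dataclass
    class Intervention:
        kind: str                   # e.g., "training", "wage subsidy", "counselling"
        start: date
        end: Optional[date] = None
        completed: bool = False

    @dataclass
    class Outcome:
        employed: bool              # client-reported employment at follow-up
        full_time: bool = False     # quality-of-employment indicators
        permanent: bool = False
        weekly_wage: float = 0.0

    @dataclass
    class Client:
        client_id: str              # e.g., SIN or a local identifier
        interventions: List[Intervention] = field(default_factory=list)
        outcome: Optional[Outcome] = None   # filled in from a follow-up survey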

4.3.1 Existing Key Results

The accountability framework for the RBAs focuses on two primary success indicators for EI-funded activities:

  1. the number of clients who became employed or self-employed; and
  2. savings to the EI Account;

and three primary success indicators for activities funded by the Consolidated Revenue Fund (CRF):

  1. the number of clients who became employed or self-employed;
  2. the number of clients who successfully completed labour market interventions; and
  3. savings to the Income Support programs (primarily social assistance — short and medium term).

For both EI- and CRF-funded activities, other success indicators may be chosen, such as the:

  1. number of youth participants;
  2. number of participants who self-identify as having a disability;
  3. number of trainees successfully completing interventions in high-skill occupational areas;
  4. number of on- and off-reserve participants; and
  5. duration of employment.

4.3.2 Assessment of Existing Key Result Indicators

The assessment that follows will look at each of the three key success indicators individually — employment, savings and interventions.

Employment

The objective for the RBA initiative suggests that it is appropriate that emphasis be placed on employment as the single-most critical measure of the success for RBA programming. However, the accountability framework defines "employment" as the "number of clients who became employed or self-employed." While the quantity of employment generated by the RBAs is important, account must also be taken of the quality of the employment generated. If the quality dimension is ignored, RBAs may bias programming towards part-time, short-term/seasonal, low-paying employment. In other words, the quality of employment may be sacrificed for quantity.

Information provided in the Mid-Term review suggested that this is happening in some RBAs. However, RBA-holders expressed a preference for focusing on full-time, high-paying, long-term/permanent employment, as opposed to part-time, low-paying, short-term/seasonal employment. Thus, to meet the needs of both partners, the accountability framework needs to recognize the quality of employment as well as the number of jobs arising from RBA programming. Key quality indicators of employment include its length, the wage/income paid, and whether it was full- or part-time, permanent or seasonal. Both HRDC and the RBA-holders need to decide what dimensions of quality should be included in the accountability framework and how they should be measured.

The accountability framework measures the quantity of employment in a number of ways. For some RBA clients, employment is based on the duration of the EI claim and the amount drawn. For others, employment is based on whether the client reports being employed when contacted during the 12-week follow-up survey. The presence of multiple definitions of employment makes results less comparable across RBA-holders and their interventions. In addition, it increases confusion on the part of RBA-holders. HRDC and the RBA-holders should agree on a single definition of the quantity of employment, and this should be applied to all clients. The simplest and most straightforward definition would appear to be client-reported employment (through an administrative follow-up) at the time of the 12-week follow-up survey.

Savings

The accountability framework identifies savings to the EI Account and to income-support programs (primarily social assistance) as another primary success indicator for RBA-holders. This reflects the initiative's emphasis on EI and social-assistance recipients as clients. While 'savings' is a key result for HRDC to report in its accountability to Parliament, RBA-holders do not actually calculate the savings that result from their interventions. Rather, the RBA-holders identify clients who are currently in receipt of EI or some other income-support program, and HRDC calculates the estimates of 'savings.' The concept of 'savings' is not a factor that drives implementation of the RBA; service deliverers do not focus on targeting activities to maximize savings. The focus is more on employment, with activities targeted to those who currently rely on government assistance, rather than employment, for their income. As a result, consideration should be given to making the employment of these clients the key indicator of performance, rather than savings.

Savings to EI Benefits or some other form of income-support program are determined by an RBA client's employment status. As noted earlier, there are a variety of employment definitions, and this may affect the consistency and integrity of this calculation.

Savings to EI are defined as the difference between what a person was entitled to receive in benefits and what is estimated to be paid out in benefits to that person. Similarly, savings to income-support programs are defined as the difference between what a person was entitled to receive in social-assistance benefits and what is estimated to be paid out in benefits to that person. The latter is based on the assumption 'that these clients would have remained on SA for the duration of the year and when they find employment, that they would remain employed for the duration of that year.' This assumption can bias the estimated SA savings upward. While the assumption may be valid, it should be verified.
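
As a rough arithmetic illustration of these definitions, the sketch below computes EI and SA savings as entitled benefits minus estimated paid benefits. The function names and the 52-week horizon are assumptions for illustration only; the SA calculation embeds the full-year assumption quoted above.

    # Illustrative sketch of the savings definitions; not HRDC's actual calculation.

    def ei_savings(entitled_benefits: float, estimated_paid_benefits: float) -> float:
        """Savings to EI: benefits the person was entitled to receive minus the
        benefits estimated to be paid out to that person."""
        return entitled_benefits - estimated_paid_benefits

    def sa_savings(weekly_sa_benefit: float, weeks_employed_in_year: int) -> float:
        """Savings to income support under the stated assumption that the client
        would otherwise have remained on SA for the full year and, once employed,
        remains employed for the rest of that year. It is this assumption that can
        push the estimate upward if the employment does not in fact last."""
        entitled = weekly_sa_benefit * 52
        estimated_paid = weekly_sa_benefit * (52 - weeks_employed_in_year)
        return entitled - estimated_paid  # equals weekly_sa_benefit * weeks_employed_in_year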

Interventions

The performance indicator measuring 'the number of clients who successfully completed labour market interventions' is another critical measure of how the RBAs perform. However, it should be broken down by the type of intervention to reflect the continuum of services. This is important because interventions carry different costs and impacts. In addition, there may need to be some agreement on definitions. A case in point is counselling. If a client has, say, three counselling sessions with RBA staff, do these count as one counselling intervention or three? Early feedback from some RBA-holders indicates that the average number of interventions per client was a little over one. However, one RBA-holder reported an average of about eight. This suggests that RBA-holders may be defining their interventions differently. Thus, consideration should be given to developing common definitions of interventions. In the absence of this, it will be important to develop a good understanding of what constitutes each intervention type so that results can be aggregated across RBAs.

Also, as each intervention carries different costs and may have different employment impacts, consideration should be given to including measures of the cost-effectiveness of the various interventions.

4.3.3 Additional Key Results

Besides the accountability framework's three primary success indicators (employment, savings and interventions) discussed in the previous section, consideration should be given to adding success indicators that reflect the longer term impacts of the RBA initiative and the need for capacity building. These are discussed below.

Term

The accountability framework appears to focus on the short as opposed to the long term. However, the longer term impacts may be much more important than the shorter term ones. For example, what are the employment impacts of RBA programming one, two or three years after the intervention? How long were clients employed? How many clients return to EI or SA? The accountability framework may need to recognize the employment history of clients following their RBA program participation. This would include looking at the number of clients who have returned to EI or SA and for how long.

Similarly, the accountability framework does not recognize that some RBA clients who face serious employment barriers need to complete personal action plans containing several RBA interventions before they can obtain employment, even low-paying, short-term/seasonal employment. Such plans may take several years to complete, especially if the goal is high-paying, long-term/permanent employment. The accountability framework should recognize long-term personal action plans, which are expected to be more than one year in duration and may last as long as three years.

Capacity Building

Finally, the Mid-Term review indicates that many RBA-holders had limited, if not insufficient, capacity to deliver the results expected in the accountability framework. As capacity precedes results, the accountability framework needs to recognize capacity-building efforts and the maturity or self-sufficiency of the RBA-holder.

Capacity building has many dimensions and is, to some extent, a continuum along which RBA-holders may progress on the way to becoming self-sufficient. In this regard, the accountability framework may need to include indicators with respect to the operation of the RBAs, such as their ability to:

  1. report results;
  2. ensure their financial integrity;
  3. assess local labour market needs, set priorities, and prepare appropriate strategies and plans;
  4. integrate themselves into their communities;
  5. deliver programming, i.e., skills needed to identify/counsel clients and monitor their performance with a view to identifying opportunities for improvement, if needed;
  6. create supportive labour market conditions (through partnerships, linkages to professional organization, etc.) and take advantage of emerging opportunities;
  7. assess their performance, learn from that experience, and redesign their programming if needed; and
  8. sustain themselves in terms of having achieved all of the foregoing and being able to maintain them, including being able to build and maintain capacity of local labour market deliverers.

Target vs. Commitment

In closing this section, consideration should be given to distinguishing between performance commitments and performance targets in the accountability framework. A performance commitment is a "promised" result, while a performance target is an "aimed-for" result. The difference between the two rests largely with the degree of control that the organization has over achieving the result. The former is within the control of the organization delivering or achieving it, while the latter is usually not (i.e., it may be affected to a significant extent by external factors). In the case of the success indicators discussed above, only capacity building would be considered a performance commitment. All of the rest would be considered performance targets.

4.4 Review Strategy

The RBA initiative will be succeeded by the AHRDS. The purpose of this section is to propose a review strategy that will suggest performance indicators and methods to provide performance information required: (1) by AHRDS agreement holders for the day-to-day management of their agreements; (2) by HRDC to meet its accountability requirements; (3) for an implementation evaluation of new AHRDS agreements; (4) for special studies, if needed; (5) for interim evaluations of the AHRDS agreements and the program; and (6) for summative or final evaluations of the AHRDS agreements and the program. This strategy results in a flow of performance information that builds over time, reinforcing successive needs and minimizing response burden on AHRDS participants and staff.

4.4.1 AHRDS Agreement Holders' Day-to-Day Management

The information needed by AHRDS agreement holders for the day-to-day management of their clients reflects the client → intervention → outcome model introduced earlier.

AHRDS-holders require information on their clients (e.g., SIN, name, address) and their interventions (e.g., type of intervention, such as training or a wage subsidy, and start/end dates) to successfully manage their programming. Such information can be captured in various electronic formats and sent to HRDC on a daily or weekly basis, if needed; monthly or quarterly reporting may be more efficient. These administrative (i.e., client and intervention) data support the monitoring of clients' progress through their personal action plans and allow AHRDS staff to take corrective action, as needed, to help their clients succeed.
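
As one possible illustration, the sketch below batches such client and intervention records into a periodic (for example, monthly) extract for HRDC; the CSV layout, file name and field names are assumptions, not a prescribed HRDC format.

    # Illustrative monthly extract of client/intervention records for HRDC.
    import csv
    from typing import Dict, List

    def write_monthly_extract(path: str, records: List[Dict[str, str]]) -> None:
        """Write client/intervention rows to a CSV extract (illustrative layout)."""
        fields = ["sin", "name", "address", "intervention_type", "start_date", "end_date"]
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(records)

    # Example usage with a single, fictitious record.
    write_monthly_extract("ahrds_extract_example.csv", [{
        "sin": "000 000 000", "name": "Example Client", "address": "Example address",
        "intervention_type": "training", "start_date": "1999-09-01", "end_date": "",
    }])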

Once an AHRDS client has completed (or withdrawn from) his or her programming, the AHRDS-holder requires information on what has happened to the client, i.e., the outcome. Outcome data may be collected in follow-up surveys conducted, for example, 3 and 12 months after the client has completed his or her programming. These surveys provide short-term and longer term impact measures, particularly with respect to the nature (e.g., occupation, duration, wage/income, full-time/part-time, permanent/temporary) of the employment found. However, finding clients for follow-up surveys may prove difficult, and special attention would have to be paid to this activity.

Some AHRDS agreement holders may wish to consider exit interviews with clients as a way of collecting rapid feedback on the preliminary effectiveness of a client's programming. Exit surveys could be used to collect a considerable amount of client information on continuing need, program design and delivery, satisfaction with the program, employment prospects, and whether clients would recommend the program to others. This information is not likely to be very time dependent, and it could be used to help manage the program. The use of exit surveys means that follow-up surveys and an interim evaluation could concentrate on estimating program impacts.

Ideally, the exit surveys should be conducted on the last day of the intervention. However, more practical timing would be the week before or the week after the intervention ends. The advantages of this approach are that: (a) the data are collected consistently (i.e., at the end of the intervention); (b) the surveys are easy to administer (compared with collecting the information some time after the interventions are over); (c) the surveys are cheaper (they are completed by the clients); (d) it is easier to locate clients; (e) the experiences are fresher; (f) it allows for greater follow-up for non-response; (g) it provides a way to verify client addresses and telephone numbers that could be used in the 3- and 12-month follow-up surveys; (h) it provides a quick way to identify and concentrate on 'success stories'; and (i) it starts to address evaluation questions earlier in the development process.

4.4.2 HRDC Accountability Requirements

The data collected as part of the day-to-day management of AHRDS agreements should answer virtually all of HRDC's accountability requirements. The AHRDS administrative data sent to HRDC allow it to readily generate reports containing such performance information as the number of clients and interventions, year-to-date budget expenditures, average cost per client, and employment found. The employment data allow HRDC to estimate unpaid EI and social-assistance benefits.
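
As a rough sketch of the kind of roll-up HRDC could produce from these administrative data, the following computes a few of the figures mentioned above (number of clients, number of interventions, average cost per client, clients employed). The record layout is an assumption carried over from the earlier sketches.

    # Illustrative roll-up of administrative data into accountability figures.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ClientRecord:               # assumed layout, for illustration only
        client_id: str
        interventions: int            # number of interventions received
        cost: float                   # programming cost attributed to the client
        employed_at_followup: bool    # client-reported employment at follow-up

    def summarize(records: List[ClientRecord]) -> Dict[str, float]:
        n_clients = len(records)
        total_cost = sum(r.cost for r in records)
        return {
            "clients": n_clients,
            "interventions": sum(r.interventions for r in records),
            "expenditure_to_date": total_cost,
            "average_cost_per_client": total_cost / n_clients if n_clients else 0.0,
            "clients_employed": sum(r.employed_at_followup for r in records),
        }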

To supplement this, AHRDS agreement holders should provide HRDC with periodic summaries of progress in meeting capacity-building commitments and success stories. Such information could be provided in the AHRDS agreement holder's annual report and would accompany annual financial and management reviews.

4.4.3 Implementation Evaluation (optional)

Putting in place a new initiative is tricky. Many things can go wrong during the implementation phase. That is why holders of new AHRDS agreements may wish to consider conducting implementation evaluations. An implementation evaluation is designed to determine whether the agreement's programming is being carried out as designed. If it is not, the evaluation identifies what problems are being encountered and suggests what needs to be done to solve them. Implementation evaluations are useful not only for new initiatives but also for substantially changed ones. They are also useful in preventing design flaws or changes that may compromise the initiative's effectiveness.

4.4.4 Special Study (optional)

Similarly, special studies focus on one or two specific issues that may arise after an initiative has been implemented. For example, why is the take-up on a program less than expected? Or, how can we increase client satisfaction with our service? They tend to be problem driven and solution oriented.

4.4.5 Interim Evaluation

Many initiatives take considerable time before showing whether and to what degree they are having any impacts and effects. In such cases, AHRDS managers may not be able to wait for a final evaluation. They may require more immediate feedback on a range of shorter term or intermediate impacts and effects, including how well the initiative's design is working and how satisfied its clients are. These are generally addressed in interim evaluations.

Interim evaluations are important for new AHRDS agreement holders. They are also important for existing agreement holders should AHRDS include programming that was not contained in the RBA initiative.

The use of exit surveys would considerably reduce the scope and depth of an interim evaluation. An interim evaluation would likely include:

  1. follow-up surveys of clients;
  2. interviews with the key AHRDS agreement holder staff and community leaders; and
  3. an analysis and assessment of client/intervention/outcomes data.

The use of exit surveys means that follow-up surveys of clients can concentrate on the program's employment (particularly, the quality) impacts. Otherwise, the survey may have to include other areas, such as client satisfaction.

Facilitated self-assessments might be an appropriate approach for evaluation of the AHRDS. This would involve staff of the RBA-holder organizations (and potentially their local delivery agents) in an assessment (guided by a facilitator) of their own program, its processes and impacts. This approach to evaluation is consistent with the findings of the Mid-Term reviews and with the guiding principles for the RBA initiative. It has the potential to increase the involvement of various stakeholders and strengthen partnerships under the initiative.

As will be seen in Section 5, this approach is proposed for the final evaluation of the current RBA initiative, for those RBA-holders that have not already conducted an interim evaluation. The approach uses the experiences of RBA-holders who participated in the Mid-Term review as a way of identifying design and delivery problems and solutions. Agreement holder staff would review and discuss various design and delivery issues in a workshop under the guidance of a facilitator. The purpose would be to identify and prioritize any potential delivery problems with a view to investigating and resolving them.

The probable starting date for AHRDS agreement interim evaluations would be determined by the maturity of the agreement and how far along it is in its five-year life. Mature AHRDS agreements probably do not require interim evaluations. Newer AHRDS agreements may require an interim evaluation two or three years after starting.

4.4.6 Final Summative Evaluation

Final evaluations look at a program's impacts and effects, including such issues as how well programs have achieved their objective(s), whether they have continuing relevance, and whether there are more effective alternatives to their design and delivery.

Final evaluations of the AHRDS agreements and of the program as a whole would probably take place during the last year of the AHRDS's five-year mandate. If conducted, final evaluations of AHRDS agreements would include:

  1. 12-month (or longer) follow-up surveys of clients (to obtain longer term data on the program's employment and savings impacts);
  2. interviews with the key AHRDS agreement holder staff and community leaders;
  3. interviews with non-participants (in this case, people who applied to participate in the program but were not found suitable or eligible and were referred elsewhere);
  4. a file review; and
  5. administrative data analysis — e.g., longer term estimates of earnings and/or use of government assistance (EI or SA).

Given the nature of the interim evaluations, final evaluations would likely concentrate on issues of reach (i.e., profile of clients served, interventions used successfully/unsuccessfully), accountability (i.e., how well performance targets were met), effectiveness (i.e., what results various interventions produced, community impacts, rural/urban differences) and design (i.e., what changes in programming and its delivery, if any, are needed to make the agreement more effective or efficient).

These final evaluations, at the agreement level, would be synthesized into an overall evaluation of the impacts of AHRDS and would be added to a final evaluation of the overall AHRDS, which would address issues related to program rationale (i.e., continuing need for the program, contribution to devolution) and effectiveness (i.e., design and delivery). In addition to the information supplied by the synthesis of the final evaluations of the AHRDS agreements, these issues would largely be addressed by interviews with key staff and stakeholders.


Footnotes

18 It must be noted, however, that this overview is based on the programming as it was implemented in the current RBA initiative. It does not take into account any program changes that are being incorporated into the AHRDS.
19 'Program' is used in the generic sense for any type of public-sector intervention. For example, a program model can be constructed for such interventions as a project, a service, a piece of legislation, a set of regulations, an initiative and a policy. The key ingredients to constructing a program model are a set of activities designed to produce outputs that seek to achieve specific impacts.

