Chapter 6. Approaches for Evaluating Organizational Capacity Development

This chapter is intended to help the reader prepare for and successfully carry out an evaluation of a capacity development effort. Rather than propose a fixed set of steps to follow, we outline a number of issues that managers and evaluators should consider from the outset of an evaluation process. The chapter begins by setting out key issues to consider when preparing for an evaluation. We raise a number of methodological questions that should be considered when designing and undertaking an evaluation that is sound and produces useful results. We highlight several challenges that were encountered in our evaluation studies and suggest how managers and evaluators might address similar challenges in their own organizations.

Key Issues to Consider

The previous chapters have dealt mainly with issues of capacity development in research and development organizations. We have discussed the meaning of capacity and organizational capacity development, the types of capacities that organizations need and how to go about developing them, and the roles of different organizations in capacity development processes. We now turn to approaches and methods for evaluating capacity development initiatives.

Evaluations are frequently carried out by external experts to provide information for funding agencies and to meet external accountability requirements. Our purpose here is to familiarize managers and evaluators with issues, approaches, and methods that will help them prepare, execute, and improve evaluations of capacity development efforts in their own and partner organizations.

As discussed more extensively in Chapter 7, we believe that involvement in evaluation processes can produce great benefits for an organization and its members. The benefits of direct involvement in an evaluation often exceed those arising from the use of results contained in evaluation reports. For this reason, we emphasize the use of participatory self-assessment methods that involve the organizations’ members and stakeholders. Based on our studies carried out in the ECD Project, we believe that evaluations of capacity development efforts are ideally carried out in a collaborative mode, by teams composed of members from the different participating organizations.

One goal of the ECD Project was to test frameworks and methods in the field and to draw conclusions about their use. Based on our experiences and reflections, we offer approaches for evaluating capacity development initiatives that involve potential users in all aspects of the evaluation process.

This chapter addresses three broad questions:

  • How to prepare for the evaluation?
  • Which evaluation principles can be used to guide the evaluation?
  • How to carry out the evaluation?

We suggest some general answers to these questions, based on our experiences with evaluating capacity development in our own organizations. We also guide you to further reading on evaluation methods.

We do not present a ‘cookbook’ recipe that can be followed step-by-step. Instead we attempt to stimulate thinking about how to plan and carry out an evaluation of capacity development efforts. This is because no simple recipes or blueprints are suitable for evaluating the broad range of organizational capacity development efforts that take place in different organizations. In one case, the organization may wish to evaluate a capacity development initiative that is just getting under way, to sharpen its goals and consolidate its approaches. In another case, it may want to evaluate the results of a ‘mature’ or completed capacity development initiative, to report on impacts and benefits to its key stakeholders. Due to budget and time limitations, the organization might need to complete the entire evaluation in a few weeks. In yet another case, an organization might have sufficient resources to systematically collect information over several months, or even years, before drawing conclusions.

Preparing for the Evaluation

If an evaluation team jumps straight into the collection of data without preparing adequately, it may soon find that it has a mountain of information that is difficult to handle and questions that are difficult to answer. Our studies suggest that six activities are essential in preparing for an evaluation:

  • clarify why and for whom the evaluation is being done;
  • involve intended users throughout the evaluation process;
  • cultivate necessary support for the evaluation;
  • mobilize adequate resources to carry out the evaluation;
  • discuss possible results of the evaluation;
  • agree on basic principles to guide the evaluation.

Methods and tools for evaluating capacity development in a rural development institute in Viet Nam

In Viet Nam, the evaluation focused on Can Tho University’s Mekong Delta Farming Systems R&D Institute and the two networks it coordinates—FSRNET and NAREMNET. The IDRC-CBNRM program has offered various types of support to all three organizations.

The study aimed to evaluate individual and organizational capacity development efforts that took place over a ten-year period among the participating organizations and to improve their use of monitoring and evaluation tools for capacity development. Building on examples of previous organizational assessment studies and capacity development methods, the Viet Nam study team primarily used a set of qualitative and participatory monitoring and evaluation tools adapted to their evaluation’s specific theme and focus. These tools were chosen to engage all staff in a frank and constructive discussion about past, current, and future capacity development efforts. At the same time, the variety of tools served as a methodological learning experience for both the evaluation team and the staff.

Initially, a two-day self-assessment workshop was organized, carried out and facilitated by the evaluation team with the involvement of 34 Institute staff members. The workshop served as a vehicle for presenting the ECD Project and the evaluation study to staff and for receiving feedback on a variety of questions concerning capacity development. The workshop helped develop a shared understanding about the evaluation study within the Institute and a strong commitment from staff to cooperate. The workshop also provided preliminary insights into the evaluation’s key questions.

Institute managers, lecturers, technicians, and administration staff were asked to complete questionnaires and participate in interviews aimed at preparing ‘work stories’. The questionnaires were used to gauge the impact of capacity development efforts at both the individual and project level. The ‘work stories’ explored, through personal and detailed accounts, how staff perceived their contribution to the Institute’s core activities, if and how their work had changed over time, if and how their own capacities had evolved, and how these capacities related to the organizational capacity development efforts of the Institute. A small sub-case study was also added to the main evaluation, looking at the impact of the two networks on SIAS, one of the network members. A participatory workshop with SIAS and the Institute was organized to present the ECD Project and to explore how the two organizations would collaborate in the evaluation study. A month later, SIAS organized a one-day focus group meeting with their own research partners (including farmers and government staff working at the local level) to obtain more detailed answers to the study questions.

Finally, key informant interviews were held with the Director of the Institute and with IDRC-CBNRM staff who had been responsible for overseeing support to the Institute and the networks. The interviews explored how the IDRC-CBNRM program had contributed to the Institute’s capacity development, the impact of joint research projects, the changes that occurred in Viet Nam during the period under review and what effect they had on research and development in the country, and, finally, the challenges that lie ahead. Throughout the evaluation process, an array of documents was reviewed by the evaluation team to obtain relevant quantitative and qualitative data and information.

Clarify why and for whom the evaluation is being done

Evaluations are conducted for many different reasons and to meet the needs of many different audiences. Lack of clarity on the purpose and audience of an evaluation can lead to confusion, frustration, and dissatisfaction. When the evaluation began in FARENA in Nicaragua, many professors assumed that it was being done to evaluate them and to apply individual sanctions for poor performance. An important first step in the evaluation process was to clarify that the focus was on the capacity of the Faculty as a whole, and that it was intended to provide information for the Faculty itself to improve its capacity and performance. Clarifying the purpose and main audience(s) of the evaluation is also essential to identify key stakeholders who should be involved in the evaluation.

“The main purpose of evaluation is to improve, reflect, and to transform what has been done in the past. It is like a snowball effect: the more you do, the more you need to reflect, and the more you understand, the more you need to do.”

Albina Maestrey Boza

Involve intended users throughout the evaluation process

Over the years, evaluators have learned that the single most effective way to ensure that an evaluation produces useful results that are actually used is to involve the intended users throughout the evaluation process. In what has come to be known as ‘utilization-focused evaluation’, potential users are involved in discussions on the possible use and benefits of the evaluation and in collectively agreeing on the evaluation’s purpose and methods, taking into account the resources and time that are available. Stakeholders should also be involved in discussions about the possible results and implications of the evaluation and potential follow-up actions that might be appropriate under different scenarios.

In each study, the evaluation team needs to decide which individuals to involve, depending on the purpose of the evaluation and the relationships that exist within the organization and with outsiders. In Cuba, since the evaluation was looking at the development of capacity for food chain analysis, stakeholders from various points in the swine production chain were involved, ranging from the Minister of Agriculture to researchers, extension workers, pig farmers, and meat processors.

In Viet Nam, an aspect of the evaluation focused on the capacity development efforts of two natural resource management networks coordinated by the Can Tho University Institute, and farmers, local extension workers, government officials, and university researchers were involved. In Nicaragua, the main stakeholders were essentially FARENA staff since the Faculty was the main focus of the evaluation. External stakeholders from other branches of the university and from partner organizations were only involved when their views on the Faculty’s work were needed.

Cultivate necessary support for the evaluation

Given the sensitivity of evaluation processes and results, key people need to be committed to the evaluation as early as possible. The support of managers is crucial, but others, including staff members and government officials in the case of public agencies, can also make or break an evaluation. The support of senior managers is especially important since they have the power to decide who will be part of the evaluation team. They also must authorize the use of time and resources for the evaluation. Perhaps most important, they can promote or hamper the use of the evaluation’s results by deciding on follow-up actions and changes after the evaluation. We encourage evaluators to gain the commitment of other senior managers to carry out the evaluation and to act on its results, before beginning to collect information. One way to do this is to ask managers what information they would like to gain from the evaluation, and to involve them in discussing ways to collect, analyze, and interpret this information.

In each of the organizations participating in the ECD Project, internal and external support for the evaluation had to be cultivated before the work could begin in earnest. Over time, as issues arose and individuals changed positions, further negotiations were also needed.

In Ghana, the support of the Plant Genetic Center’s Director was gained by involving him as co-study leader. He, in turn, gained support for the evaluation from the Director of the Center’s parent organization, the Council for Scientific and Industrial Research (CSIR). Later on, when the CSIR Director retired, it was essential to negotiate the new Director’s support for the evaluation.

In Cuba, the original study design was prepared by members of the Directorate of Science and Technology and the New Paradigm Project. Because aspects of the study included an assessment of organizational change within the Cuban national system, SINCITA, its design needed to be negotiated with the Director of Science and Technology, the Vice Minister of Agriculture, and the Director of one of the institutes under review, as well as the leader of the external support organization, the New Paradigm Project.

In addition to the support of managers, we also need the active support of staff members throughout the organization(s) involved, including support staff. Our studies show that staff members are often eager to assess their own capacity and performance and that of their organizations, as long as the purpose of the evaluation is to learn and improve, not to judge and sanction. The approach of our evaluation studies helped motivate and commit staff to participate in building their organizations’ futures. In the case of Viet Nam, a two-day self-assessment workshop was carried out and facilitated by the evaluation team with the participation of 34 staff members. The workshop was a vehicle for discussing and presenting the ECD Project and evaluation study to staff and receiving feedback on a variety of questions concerning capacity development. This workshop helped gain a strong commitment from staff to cooperate in the project and provided insights into the evaluation’s key questions.

Mobilize adequate resources to carry out the evaluation

Time, skilled and motivated individuals, and financial resources will be necessary for the evaluation, and it is best to negotiate their availability before jumping into the work. As already noted, a utilization-focused evaluation involves intended users, and this means that substantial time and effort will be required from them. Funds for travel, for example, might also be required if the organization in question is decentralized or if two or more organizations that are geographically distant from one another are involved.

Evaluation specialists from outside of the organization can provide some guidance in designing the study and facilitating the collection or analysis of information. But, if an organization and its staff members are to learn and benefit from the evaluation, they must be deeply involved in it and feel responsible for the results. One of the main benefits of our evaluation studies proved to be the learning that took place during the implementation of the evaluation. Some previous evaluation experience will be a plus, but even without it, staff members can ‘learn by doing’, and those who take part in the process will obtain new insights and skills as the study progresses.

The ECD Project provided each evaluation study team with a modest amount of funding (approximately US$10,000 for each study). However, the principal cost of carrying out a participatory evaluation is the time that managers, staff members, and external stakeholders dedicate to the evaluation process. This cost was borne by the participating organizations and their stakeholders.

In all of the studies, staff members of the organizations involved did the bulk of the work. In FARENA in Nicaragua, the evaluation leader—also the Dean of the Faculty—was so busy that a consultant was hired to facilitate the evaluation process. Nevertheless, even in this case, Faculty members did most of the work, and found that they benefited from the evaluation because of their direct involvement in the evaluation process. In other cases, external consultants to the ECD Project met with local teams for a few days to provide guidance on evaluation design and methodology.

Discuss possible results of the evaluation

Before beginning to collect information, it is useful to discuss the possible results with key potential users. This helps the potential users of the results to prepare and consider actions that might be needed. It also helps evaluators to sharpen the evaluation questions and the methods used.

Agree on basic principles to guide the evaluation

As evaluation is a very complex and potentially sensitive process, it is useful to have some basic principles to guide the work and to assist in resolving differences of opinion that may arise. This is the subject of the next section.

To conclude, our experiences have shown that the time and effort invested in preparing for an evaluation are very well spent. Rushing into data collection without securing stakeholder commitment, mobilizing the necessary resources, cultivating support, or agreeing on some basic principles, can lead to confusion and frustration later on.

Principles for Assuring the Quality and Use of the Evaluation

Various professional evaluation groups have established standards and principles for conducting evaluations. These generally emphasize the need for evaluations to be useful, feasible, fair, and accurate. Such principles can be useful for planning and implementing an evaluation as well as for assessing the evaluation after it is completed.

Evaluation standards

Utility: The evaluation should serve the information needs of intended users.
Feasibility: An evaluation should be realistic, prudent, diplomatic, and cost-effective.
Propriety: An evaluation should be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.
Accuracy: An evaluation should provide sound information (i.e. defensible sources, valid and reliable information, justified conclusions, etc.) on the object of the evaluation.

Source: Joint Committee on Standards for Educational Evaluation (1994)

Based on our experience of the six evaluation studies we carried out, we propose seven guiding principles for evaluating organizational capacity development efforts. These principles reflect a utilization-focused philosophy and approach for evaluating organizational capacity development initiatives.

Utility

The evaluation should be designed and implemented so as to be useful to, and actually used by, the intended users, whether they are the managers and staff of the organizations concerned or key external stakeholders.

Sensitivity to context

As each organization operates in a particular and changing political and socioeconomic setting, external conditions should be taken into account in designing and carrying out the evaluation. Similarly, the organization’s internal environment needs to be considered. Where the organizational culture promotes open and frank discussions and organizational learning and improvement, a highly participatory and openly self-critical evaluation approach can be adopted. In contrast, where the culture rewards competition and individual achievements over teamwork, an approach that protects the anonymity of individuals may be more appropriate.

Participation and negotiation

As has already been stated, both internal and external intended users of the evaluation results should be involved in the whole cycle of the evaluation—from the design to the implementation to the review of the evaluation process—to promote their use of the evaluation results. For the intended users to develop a sense of ownership for the evaluation and its results, agreements on the various steps of the evaluation should be negotiated with them, rather than imposed from above or outside.

Learning by doing

The main benefits of an evaluation of a capacity development effort can be the individual and organizational learning that takes place while undertaking it. For this reason, it is important to involve people in their own evaluation process, rather than leave it up to ‘experts’. During the evaluation process, participants can learn a great deal, not only about capacity development, but about evaluation methods as well.

Iterative approach

Cycles of reflection and analysis are at the heart of the evaluation process. The main benefits of an evaluation often come from the insights obtained during the evaluation process, rather than from the results presented in a report. Frequently, important questions and issues come to the surface during an evaluation that require adjustments to planned data collection or analysis. For this reason, it is important to have a flexible, iterative approach to implementing the evaluation.

Systematic documentation

It is important to document the main decisions taken during the evaluation, the questions asked, the sources used, and the information obtained. This will allow reflection on the evaluation process and the results. It will also allow findings and suggestions to be more easily substantiated.

Integrity and transparency

To ensure fairness and acceptance of the evaluation’s procedures and results, the process needs to be open and honest and not intended to harm specific individuals or the organization as a whole. In virtually all of the studies, initial workshops involving managers and staff were used to explain the purposes of the evaluation and to develop an open approach to the study. A delicate balance needs to be established between openness and propriety, and individuals who provide sensitive information should be protected. During the evaluations, we found it important to keep individual information sources confidential. In group sessions, it was useful to establish the norm that potentially sensitive personal views and opinions would not be divulged outside the group.

The point in presenting these principles is to encourage any evaluation team to establish its own set of guiding principles for the evaluation. The Guide to Further Reading at the end of this chapter suggests other sets of evaluation standards and principles that may stimulate your thinking about how to design your organization’s evaluation.

Example of guiding principles for evaluating a capacity development effort

Utility: Design and implement your evaluation so that it will be useful to its intended users.

Sensitivity to context: Take into account the environment in which your evaluation is being designed and carried out.

Participation and negotiation: Include internal and external intended users in the entire evaluation cycle.

Learning by doing: Promote learning from the evaluation process by involving people in the evaluation process.

Iterative approach: Build ongoing cycles of action and reflection into your evaluation process.

Systematic documentation: Document your findings and suggestions so they are substantiated and can be reflected on later.

Integrity and transparency: Encourage an open and honest evaluation process to ensure fairness and acceptance of the evaluation’s procedures and results.

Getting into Action: Doing the Evaluation

Once the evaluation team is prepared and equipped with guiding principles, it needs to decide how to carry out the evaluation. We propose a dynamic learning-oriented evaluation approach that addresses the complexity of organizational capacity development efforts and their relationship to organizational performance.

We do not offer a blueprint or a recipe for designing and conducting an evaluation. Rather, based on our experiences, we propose a flexible approach that combines qualitative and quantitative methods. We suggest using multiple methods and cross-checking or ‘triangulating’ the results. Triangulation refers to the use of different information sources, methods, types of data, or evaluators to study an issue from different perspectives and thereby arrive at more reliable findings.

Organizational capacity development is a highly complex and little understood process, the results of which are difficult to measure. For this reason, cross-checking, triangulation, and validation of evaluation results with stakeholders are especially useful.

The box below presents a list of methodological questions that should be answered if an evaluation of a capacity development effort is to be sound.

What questions will the evaluation seek to address?

It is important to focus an evaluation on specific questions that you will seek to answer through systematic collection of information, analysis, and interpretation. When planning an evaluation it is important to ask the right questions and to get the questions right. In other words, evaluation questions should be both relevant and well formulated. Unfortunately, there is a tendency to formulate evaluation questions hurriedly or to avoid formulating them altogether. Many of us are familiar with evaluations that had vague terms of reference with no questions at all. In other cases evaluators were expected to answer a long list of questions in an unreasonably short period of time. Both of these approaches tend to result in frustration and a lack of focus.

In our studies we found it difficult, but important, to agree on a short list of evaluation questions. This phase of our work proved to be extremely important, as the questions guided us later on in the collection and analysis of information and in the interpretation and presentation of our results. In many cases, our evaluation questions evolved over time and became more precise as our understanding of our own capacity development efforts and of evaluation methods matured.

Methodological questions that need to be answered in designing and carrying out an evaluation
  • What questions will the evaluation seek to address?
  • Who will use the results?
  • How can a ‘logic model’ be used to focus the evaluation?
  • What will be the unit of analysis and the scope of the evaluation?
  • How can shared understanding and commitment to the evaluation be developed?
  • How should the evaluation process be managed?
  • What information needs to be collected?
  • What tools should be used to collect and analyze information?
  • How should the results be cross-checked, triangulated, and validated?
  • How should the evaluation results be presented?
  • How can use of the evaluation results be encouraged?

Over time, the evaluation team responsible for the Viet Nam study arrived at the following evaluation questions, which the study was designed to answer:

  • What key organizational capacities has the Mekong Delta Farming Systems R&D Institute developed?
  • How have the organizational capacities of the Institute changed over time (since its creation)?
  • How and to what extent have individual staff at the Institute contributed to the development of the organizational capacities?
  • What are the future challenges for the Institute in terms of organizational capacity development?
  • What has been the contribution of the IDRC-CBNRM program to the individual and organizational capacity development efforts of the Institute?

Who will use the results?

The need to consider who will use the results is just as important as formulating appropriate evaluation questions. In fact, in an evaluation, deciding on appropriate questions is directly linked to defining the audiences the evaluation will serve. Decisions on the priority audience(s) will also influence the type of analysis that is conducted and how the results should be presented. For example, if the audience of an evaluation is internal to the organization in question, it may be most effective to present the results verbally in closed-door sessions where sensitive issues can be openly discussed. In contrast, if the primary audience is an external body, it is usually necessary to present a formal report, and some of the more sensitive points might be presented separately in a confidential report or in face-to-face sessions.

How can a ‘logic model’ be used to focus the evaluation?

Professional evaluators recommend the development of a ‘logic model’ for the projects and programs they evaluate. A logic model is a simplified chain of relationships that portrays the logic and assumptions underlying a program or intervention and how it intends to achieve its expected results. It states the logic of the program, identifies the assumptions on which it is based, and outlines the logical connections between:

  • the activities undertaken;
  • the outputs to be produced;
  • the intermediate or short-term outcomes that are expected;
  • the ultimate or long-term impacts the program is designed to achieve.
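
To make the chain concrete, here is a minimal sketch, in Python, of what a logic model might look like for a hypothetical staff-training initiative. The levels follow the list above; all of the entries are invented for illustration only and are not drawn from any of the ECD Project studies.

# A purely illustrative logic model for a hypothetical staff-training
# initiative. The entries are invented examples, not ECD Project data.
logic_model = {
    "activities": ["design a training curriculum",
                   "run workshops on participatory research methods"],
    "outputs": ["30 staff trained in participatory methods",
                "a training manual produced"],
    "outcomes": ["staff apply participatory methods in ongoing field projects"],
    "impacts": ["the research agenda better reflects farmers' priorities"],
    "assumptions": ["trained staff remain with the organization",
                    "management supports use of the new methods"],
}

# Reading the model from activities through to impacts makes the underlying
# 'if-then' logic, and the assumptions behind it, explicit and discussable.
for level in ("activities", "outputs", "outcomes", "impacts", "assumptions"):
    print(level.upper())
    for item in logic_model[level]:
        print(f"  - {item}")

Writing the model down in even this simple form tends to surface the differences in objectives and assumptions among partners that the following paragraphs describe.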

Many projects and programs present some sort of logic model in their proposals or work plans. These are often in the form of a ‘logical framework’, required by many development organizations. In the ECD Project, we attempted to develop logic models for our capacity development initiatives, but were only partially successful. Reflecting on this, we concluded that it is difficult to develop a logic model for a capacity development intervention because the national and international partners frequently have different objectives and assumptions that have not been openly discussed and agreed on. Reaching agreement on the logic of a capacity development initiative requires considerable discussion and agreement on a plan of action. As noted in the previous chapter on partnerships for organizational capacity development, this is seldom done.

One of the contributions of an evaluation to a capacity development initiative is to encourage the participants to clarify their objectives and assumptions and document them in a logic model. As stressed in the previous chapter, we now appreciate the need to negotiate the goals, assumptions, and strategies, as well as the contractual terms of our collaborative initiatives with our partners. In future, this will facilitate the development of logic models that can be used to guide our evaluations.

What will be the unit of analysis and scope of the evaluation?

In any evaluation, it is important to define the basic unit of analysis and the scope of study. In Cuba it was originally planned to evaluate the development of capacities throughout the entire national system for agricultural science, innovation, and technology. Later on, due to limitations on time and resources, it was decided to reduce the scope of the study to an examination of the development of one type of capacity in a single research institute. In Nicaragua, on the other hand, it was originally planned to examine the development of one type of capacity resulting from collaborative work with a single international organization (CIAT), but later it was decided to broaden the study to assess the development of capacity throughout FARENA. The scope of an evaluation refers not only to the organizations and topics covered but also to the time horizon. In Ghana the evaluation covered a 20-year period, in Viet Nam it covered 10 years, and in the Philippines 12 years.

In each case, the coverage of the evaluation—either whole organization, unit within an organization, or system of organizations—the topics addressed, and the time period needed to be clearly determined to guide subsequent collection and analysis of information. Most evaluation teams had difficulty defining clear boundaries and units of analysis. Instead of focusing on departments, centers, or programs, we often coped with this difficulty by defining more comprehensible units such as individuals, teams, partnerships, projects, events, or outputs. This allowed us to gather just the right amount of information to answer the evaluation questions adequately.

How can shared understanding and commitment to the evaluation be developed?

In our organizations, evaluating capacity development proved to be a highly sensitive activity, and those leading the evaluation needed to deal with personal sensitivities and organizational politics throughout the process. Among other things, we often needed to overcome negative feelings about evaluation per se. In several cases, staff members noted that evaluations are usually carried out to judge individuals or to justify restructuring and staff cuts. Few of us had been familiar with the use of evaluation for learning and improvement in our organizations. In our studies we have found it valuable to use the following approaches to deal with sensitivities, to promote common understanding, and to gain commitment to the evaluation process.

 

Involve the organization’s managers and staff as well as key external stakeholders in the evaluation process from the outset. We have already stressed why it is important in early interactions with evaluation participants to discuss the fundamental purpose of the evaluation, emphasizing its use for individual and organizational learning and improvement. When the evaluation study teams began their studies, they often thought that a specialized team would carry out most of the work, and then present its results at the end of the process. However, in all our cases we came to realize the importance of involving many people in the evaluation process and of periodically informing other stakeholders about the purpose, methods, and emerging results of the work. This had the advantage of gaining broad commitment to the evaluation process while it was going on.

 

Openly discuss issues of organizational capacity development and its evaluation. Managers and staff tend to be wrapped up in day-to-day activities and they seldom have the opportunity to discuss broad organizational issues. Simply starting to talk about organizational capacities can be an eye-opener. Much confusion arose initially concerning basic concepts and terms in all of the participating organizations. What do we actually mean by capacity development? What kind of evaluation do we have in mind? Why should we bother with these things? People who do not understand and appreciate the usefulness of an evaluation cannot be expected to contribute productively to it. In the studies, we found it useful to organize initial workshops with managers, staff members, and external stakeholders, to discuss the purpose of the proposed study and its potential uses. In FARENA in Nicaragua, for example, 31 staff members and several students attended such an initial workshop. In the Philippines, 17 members of the Root Crops Center and BSU attended a similar initial workshop. These workshops stimulated interest in the studies and motivated individuals to invest their time and energy in them. They also allowed participants’ views on organizational capacity development to surface, to be discussed, and to be documented.

 

Validate findings and recommendations with key stakeholders. Involving and informing people of the emerging results during the course of the evaluation helps avoid unpleasant surprises at the end. It is also important to discuss and validate the study’s conclusions and recommendations with key stakeholders. A weakness of many evaluations is the gap between the information collected and analyzed and the study’s conclusions and recommendations. In many cases, conclusions and recommendations are hastily tacked on to the end of an evaluation report, with little consideration of their validity or their feasibility. Involving interested parties and potential decision-makers in the formulation or validation of conclusions and recommendations can increase the extent to which these are understood and accepted, which, in turn, promotes subsequent follow-up and action. Interested parties often disagree with the conclusions and recommendations of an evaluation. If people are involved in reviewing the evidence and drawing conclusions, they are more likely to reach consensus and to accept and act on the results.

“Evaluation always had a negative connotation, something like policing. In this project, we explored the educative impact that evaluation can have. It should be an opportunity for people to learn. Only then are they able to change the way they make decisions and how they act.”

José de Souza Silva

In the Mekong Delta Farming Systems R&D Institute, workshops were organized to discuss the evaluation’s conclusions and recommendations with the Institute’s managers and staff and with key external stakeholders. Then a Vietnamese version of the evaluation report was prepared. Two important decisions were to initiate a strategic planning exercise, based on results of the evaluation, and to revise the Institute’s procedures for staff performance evaluation. In Cuba, the evaluation report was discussed and validated in a series of meetings at the level of the Ministry of Agriculture, the Directorate of Science and Technology, the Swine Research Institute, and the agrifood chain team. One result was the decision, at the level of the Ministry, to carry out a system-wide assessment of capacity development in SINCITA.

How should the evaluation process be managed?

Evaluations need to be managed, and the participatory evaluation processes we advocate in this book need to be facilitated. Managing an evaluation involves defining the goals of the exercise, the roles and responsibilities of those involved, the time and resources available, and the products to be delivered. Some individual or group has to take charge of the evaluation, make the necessary decisions, and supervise the work to its successful completion.

All six studies relied heavily on facilitation, by which we mean stimulating, motivating, and guiding the evaluation process, usually through group activities. Sound facilitation is essential to ensure fairness to all participants involved, to capture the different ideas, views, and interests of the range of people that make up the organization(s) involved, to generate collective knowledge, and to allow negotiation of common understandings and agreed-upon actions.

While the evaluation teams all depended upon group work for the studies, they tended to underestimate the importance of sound facilitation. The New Paradigm Project and IIP in Cuba have probably recognized the importance of facilitation and progressed furthest in developing capacity in this area. This is because the New Paradigm Project and Cuba’s Directorate of Science and Technology have a long tradition of joint work on participatory adult education. The facilitation approach and skills they developed in the mid-1990s were later successfully applied in their evaluation work.

We suggest that organizations embarking on the evaluation of capacity development dedicate time and resources to finding or developing capable facilitators who can be actively involved throughout the evaluation process. In many cases it will be necessary to invest in specialized training in facilitation skills for your staff.

What information needs to be collected?

It is sometimes assumed that in an evaluation you should collect the largest amount of information possible with the time and resources available. However, it is generally better to collect the smallest amount of information needed to answer the evaluation questions.

Formulating precise evaluation questions and determining the scope of the evaluation and the unit of analysis (in terms of organizational coverage and time horizon) is essential to cutting down the volume of information that needs to be collected. Evaluators who begin collecting information before defining their evaluation’s questions and coverage often collect information that is never used. In general, the fewer and more carefully formulated the evaluation questions, the less information needs to be collected.

In broad terms, two types of information may be used in an evaluation of an organizational capacity development initiative:

  • primary information that needs to be collected specifically for the evaluation;
  • secondary information such as information that already exists in written organizational records, files, reports, or publications.

In organizational studies, there is a tendency to overlook secondary information and rush into collection of primary information. Our cases were no different. In retrospect, a more careful review of existing documents would have been useful.

Compiling and assessing existing information on capacity development can serve both to enter into a discussion of the topic and to gather information for the evaluation. Before beginning to collect new information, for example through interviews or surveys, it is important to collect the information that already exists in files, reports, and publications, which can help to answer the evaluation questions. The evaluation teams were often surprised to find how much information was, in fact, available. Collecting and analyzing the available information reduced the amount of new, primary information that we had to collect.

In the Plant Genetic Center in Ghana, for example, information on germplasm collection and use had already been collected. In Cuba, both IIP and the Directorate for Science and Technology had good records on the workshops and training events carried out to develop capacity for food-chain analysis. The Root Crops Center in the Philippines also had good records on the technological innovations developed that involved participatory research.

Collection of primary information tends to be more costly and time consuming than compiling and assessing existing information. In organizational assessment it is common to think first of collecting information through formal questionnaire surveys. But as shown in the next section, there are many other important ways to collect useful information for evaluating organizational capacity development initiatives.

What tools should be used to collect and analyze information?

Many tools are available for collecting and analyzing information and for interpreting the results. Some useful sources are included in the Guide to Further Reading at the end of this chapter. Tools that proved useful in the evaluation studies are briefly described in this section.

 

Self-assessment workshops. Self-assessment workshops were used in all of the studies, and proved to be very useful for gathering and analyzing information, for interpreting results, for building awareness and commitment for the evaluation, and for validating and enriching information, conclusions, and recommendations. Given the importance of such workshops, facilitation skills and related tools for group analysis, synthesis of findings, and reporting of results have proven essential for evaluating organizational capacity development.

 

Review of documents. Documents, including archives, annual reports, budgets, and minutes of meetings, were reviewed in all of our studies. In some cases, documents were only moderately useful due to incomplete records on capacity development efforts. Nevertheless, information contained in documents often proved very useful as a starting point for discussion of capacity development issues and to focus further collection of information. In the Philippines’ Root Crops Center, the study team found that efforts to develop capacity in participatory research were generally embedded in broader research and development interventions, and the elements pertaining to participatory research were seldom well documented. On the other hand, documentary evidence could be found on new technologies released by the Center, including new varieties, which had resulted from participatory research activities. Despite its limitations, the study team found the information available in documents to be very useful in stimulating workshop discussions and in cross-checking their own perceptions of capacity development processes and the results.

 

Key informant interviews. Key informant interviewing involves in-depth discussions with individuals who are selected because they represent certain groups of interest, or they are thought to be particularly experienced, insightful, or informative. Such interviews were carried out in all of the studies, usually face-to-face. However, in some cases, key informant interviews were conducted over the telephone or by e-mail. These interviews allowed the evaluation teams to capture the views and expectations of stakeholders (e.g. staff members, managers, outsiders) concerning capacity development efforts and changes in capacity and performance over time.

 

Group interviews. In some cases, information was collected through interviews with groups rather than individuals. This technique falls somewhere between a key informant interview (with an individual) and a self-assessment workshop. Group interviews structured with the help of a facilitator proved to be especially useful in capturing the consensus views of relatively homogeneous groups. They are less appropriate where groups are heterogeneous or where certain individuals dominate the conversation.

Personal histories. In a few cases, detailed personal histories were compiled from individuals who had deep and long-term knowledge of capacity development processes. In Ghana’s Plant Genetic Center, the perceptions and personal history of the Director were especially useful, since the evaluation covered a 20-year period and very little documentation was available on earlier years. The study team interviewed the Director to capture his perspectives on the history of his organization and his personal development as a scientist and manager, and to identify factors that helped or hindered the development of the Center’s capacity. The team transcribed and analyzed the complete interview.

 

Case studies. A case study is a structured and detailed investigation of an organization or group, designed to analyze the context and processes involved in capacity development as well as the results. Each of the evaluation studies can be considered a case study. However, since the questions asked and the methods used differed from case to case, the studies are not strictly comparable. Some of the teams were more systematic than others in developing a case study framework for their studies.

The Ghana team developed a systematic case study approach in which multiple methods and information sources were used to address the study questions. It had three components, one focused on each of the three organizations involved in the study. The component corresponding to the Plant Genetic Center included three self-assessment workshops to assess the Center’s strengths and weaknesses, a series of interviews to capture the perceptions of high-level officials, a personal history of the Director, and a review of archives and records to assess staff changes, publications produced, infrastructure developed, and other factors that could be assessed quantitatively. The component corresponding to IPGRI included a survey of IPGRI staff involved in capacity development, interviews with five key managers, and a review of records to assess IPGRI’s contributions to training, infrastructure, and research methods in Ghana. The component corresponding to GRENEWECA included a workshop to capture the perspectives of nine network members and a review of archives to assess the network’s contributions to capacity development through training, collaborative research, and the supply of equipment.

 

Direct observations. An evaluation of capacity development can benefit from observation of the organization’s activities and facilities and their use. However, management and staff may be so familiar with the organization that they no longer observe things that an outsider would see immediately. The most novel and useful observations are often made by outsiders who have sufficient knowledge of similar organizations to allow them to make insightful comparisons. This highlights the potential value of combining internal self-assessment with external expertise.

 

Questionnaire surveys. The questionnaire survey is probably the most frequently suggested tool for collecting information for an organizational study or evaluation. When the evaluation teams first developed their evaluation plans, they included questionnaire surveys in all of them. However, when they returned home to their organizations, they all decided to use tools that would demand less of their time and other resources. Use of a questionnaire survey requires skills for preparing the survey form, sampling, administration of the survey, management of databases for quantitative and qualitative information, statistical analysis, research, and other tasks. Survey forms need to be administered in local languages, which may require translation of forms and processing of qualitative information in more than one language.

The evaluation team from RDRS in Bangladesh, for example, originally planned to carry out a survey in rural Bangladesh and process the information at IIRR in the Philippines. However, when they realized that the survey responses would be in Bengali, which would not be understood in the Philippines, this plan was abandoned. A scaled-down survey was carried out to obtain the views of RDRS staff members, and its results were processed and reported on in Bangladesh. The surveys helped the team identify capacities that staff had obtained from courses offered by IIRR. The study team also gained systematic information on what new skills alumni had or had not been able to apply on the job.

How should the results be cross-checked, triangulated, and validated?

Triangulation is a means to increase confidence in the results of an evaluation by assessing and cross-checking findings from multiple points of view, including using different sources of data, different methods for data collection and analysis, different evaluators, or different theoretical perspectives. Given the complex nature of capacity development efforts, the difficulty of applying experimental methods to evaluate them, the limited information on them (particularly baseline data), and the often-conflicting views on them, triangulation is particularly important in the evaluation of organizational capacity development efforts.

In this context, one important way to cross-check and build confidence in results is to use more than one information source to confirm findings. This allows the consistency of results across methods to be checked. Another important way to build confidence in an evaluation’s results is to review findings with stakeholders during the evaluation process. Where participants seriously question results, the analysts can recheck the information sources as well as the methods used for analysis and interpretation. In the case of Viet Nam, the evaluation team used three different tools—a self-assessment workshop, a case study, and a feedback workshop—to provide information to answer one of its evaluation questions.

Cross-checking is not always easy and it requires time and resources. However, given the potentially controversial nature of evaluation findings, the ECD Project participants urge evaluators to build in means to cross-check their information and results wherever possible.
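
Where evaluation teams keep their findings and sources in electronic form, the cross-checking logic can be made explicit in a few lines of code. The sketch below, written in Python, treats a finding as corroborated only when it is supported by at least two independent sources or methods; the findings, the sources, and the two-source threshold are hypothetical choices made purely for illustration, not a procedure prescribed by the ECD Project.

from collections import defaultdict

# Each record pairs a (hypothetical) finding with the method or source
# that produced it.
evidence = [
    ("staff capacity in participatory methods has grown", "self-assessment workshop"),
    ("staff capacity in participatory methods has grown", "key informant interviews"),
    ("staff capacity in participatory methods has grown", "document review"),
    ("network support improved laboratory infrastructure", "document review"),
]

# Group the sources behind each finding.
sources_per_finding = defaultdict(set)
for finding, source in evidence:
    sources_per_finding[finding].add(source)

# Flag findings that rest on a single source for further checking.
for finding, sources in sources_per_finding.items():
    status = "corroborated" if len(sources) >= 2 else "needs further cross-checking"
    print(f"{finding}: {status} ({', '.join(sorted(sources))})")

Even a simple tally of this kind helps a team see at a glance which conclusions rest on a single source and therefore deserve further validation with stakeholders.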

How should the evaluation results be presented?

Well-planned and well-executed evaluations sometimes fail to produce the expected results because they are not presented in a format that would be useful to users. Traditionally, the final product of an evaluation is a lengthy report that is only made available to a few people. Our work indicates the value of making frequent verbal presentations of the evaluation’s goals, progress, results, and conclusions to interested stakeholders. In each of the evaluation studies, these sorts of presentations have been the main vehicle for people to learn about the evaluation and its results, and to gain a shared understanding of and commitment to them. In presenting an evaluation’s results, it is important to keep in mind how different groups may be affected by the results. Critical findings need to be handled discreetly to avoid public embarrassment and possible backlashes, which may reduce the constructive use of the results.

“Now that I am convinced of the relevance of evaluation for capacity development, my challenge is to create that conviction across the hierarchy of my organization. I need to disseminate insights learned from this project throughout my organization and its stakeholders.”

Imrul Kayes Muniruzzaman

How can use of the evaluation results be encouraged?

Throughout this chapter we have introduced techniques to promote the utilization of evaluation results, by focusing the evaluation on the key interests of intended users and by involving them throughout the evaluation process. We expand further on these arguments in the next chapter.

Take-Home Messages

Adequate preparation is needed for an evaluation of capacity development before embarking on data collection and analysis. Inadequate preparation is the greatest weakness of most evaluations. An evaluation of a capacity development effort should be guided by a set of principles that ensures it will be useful, accurate, feasible, and sensitive to its context and to the needs of its stakeholders.

There are several key methodological considerations when designing and carrying out an evaluation, as outlined in the following points:

 

Evaluation questions. The evaluation should seek to answer a few key questions. These may evolve over time and become more precise as our understanding of capacity development and evaluation methods matures.

 

Logic model. A logic model should be developed to focus the evaluation. A logic model is a simplified chain of relationships that portrays the logic and assumptions underlying a program or intervention and how it intends to achieve its expected results. Developing a logic model encourages participants to clarify their objectives, assumptions, and overall understanding of their capacity development effort.

 

Scope of the evaluation and unit of analysis. The unit of analysis, the topics to be addressed, and the time period to be covered within the evaluation need to be determined to guide subsequent information collection and analysis.

 

Developing shared understanding and commitment to an evaluation. Involving internal and external stakeholders in the evaluation process from the outset, openly discussing issues of organizational development and evaluation to clarify concepts, and validating findings and recommendations with key stakeholders throughout the process are just some of the ways to build confidence in an evaluation.

 

Managing the evaluation process. The types of participatory evaluation processes that are advocated in this book require sound facilitation. This may require some investment in specialized training for staff.

 

Information to be collected. It is better to collect the smallest amount of information needed to answer the evaluation questions than a mass of information ‘just in case’.

 

Tools to collect and analyze information. Tools that proved useful in our studies included self-assessment workshops, document review, key informant interviews, group interviews, personal histories, case studies, direct observations, and questionnaire surveys.

Triangulation. Triangulation is a means to increase confidence in results by assessing and cross-checking findings from multiple points of view, including various sources, methods, evaluators, or theoretical perspectives.

 

Communication. It is important to communicate frequently with interested parties. Such communication should include frequent verbal presentations of the evaluation goals, progress, results, and conclusions. Effective communication involves careful listening.

 

Focus on use. Methodological decisions should be taken in ways that promote use of the evaluation, while ensuring its feasibility, accuracy, and propriety.

Guide to Further Reading

Scores of textbooks and guidelines present methods for evaluating programs and projects. Two that we have found especially useful are Utilization-Focused Evaluation, by Michael Quinn Patton (1997), and From the Roots Up, by Gubbels and Koss (2000). Patton’s book, probably the most widely read and most influential evaluation text in print, covers all major aspects of planning and carrying out an evaluation that will actually be used by the intended users. From the Roots Up is particularly strong on principles and techniques for self-assessment exercises that aim to strengthen organizational capacity.

Useful approaches and tools for assessing and enhancing organizational performance are presented by Lusthaus and colleagues in Enhancing Organizational Performance (1999) and in Organizational Assessment (2002). Evaluating the Impact of Training and Institutional Development Programs, by Taschereau (1998), presents a useful collaborative approach for evaluating training and institutional development programs.

The seven guiding principles for evaluating capacity development initiatives that have emerged from our studies are compatible with widely accepted evaluation principles and standards developed by professional evaluation organizations around the world. The American Evaluation Association (www.eval.org) has defined five evaluation principles: systematic inquiry, evaluator competence, integrity/honesty, respect for people, and responsibilities for general and public welfare. The Joint Committee on Standards for Educational Evaluation (1994) has identified four basic standards for sound evaluations: utility, feasibility, propriety, and accuracy. The German Evaluation Society (www.degeval.de) has agreed on a similar set of basic attributes of a sound evaluation.

For a detailed explanation of the use of a program logic model (or ‘program theory’) to focus an evaluation, readers are referred to Chapter 10 of Patton’s Utilization-Focused Evaluation and the Logic Model Development Guide issued by the W.K. Kellogg Foundation (2001) (www.wkkf.org). The website www.reflect-learn.org/EN/ provides useful tools and resources for organizational self-reflection.

On monitoring of capacity development, readers are referred to a useful paper by Morgan, An Update on the Performance Monitoring of Capacity Development Programs (1999), which is available at www.capacity.org.

The Guide to Monitoring and Evaluation of Capacity-Building Interventions in the Health Sector in Developing Countries (2003), by LaFond and Brown, provides a useful framework and tools that can be applied in research and development organizations. The Letter to a Project Manager, by Mook (2001), provides a series of guidelines, checklists, and practical suggestions for evaluation generally.

The book Construyendo Capacidades Colectivas, by Carroll (2002), presents results of detailed studies of organizational capacity development in peasant federations in highland Ecuador.






