Foreword
Michael Quinn Patton, author of Utilization-Focused Evaluation

The perspective that informs this important book is that every evaluation of a capacity development effort should itself contribute to the capacity development effort and ultimately to the organization’s performance. This is a revolutionary idea in evaluation. With the idea have come the questions: Can it be done? And, if it is done, what will be the consequences?

This book elucidates and deepens the idea, shows it can be done, and examines the consequences, both intended and unintended, of engaging in capacity development evaluation.

The Culture of Evaluation

Let’s start with the idea. Historically and academically, evaluation adapted social science research methods to examine questions of program and organizational effectiveness. The product of an evaluation was a report judging the merit or worth of the program. The impact of the evaluation, if it had an impact, came from the use of the evaluation’s findings.

But in studying evaluation use, we began to observe that the processes involved in certain kinds of evaluations had an impact quite apart from the findings. In approaches to evaluation that involve participatory processes, those involved often experience changes in thought and behavior as a result of learning that occurs during the evaluation process. Changes in program or organizational procedures and culture can also be manifestations of an evaluation’s impacts. These observations about the ‘process use’ of evaluation (a term defined and discussed in Patton, 1997) led to a more direct focus on the potential of evaluation to contribute to organizational capacity development.

One way of thinking about process use is to recognize that evaluation constitutes a culture, of sorts. We, as evaluators, have our own values, our own ways of thinking, our own language, our own hierarchy, and our own reward system. When we engage other people in the evaluation process, we are providing them with a cross-cultural experience. The interactions between evaluators and people in programs and organizations involve yet another layer of cross-cultural interactions. In the international and cross-cultural contexts within which the work in this book takes place, an appreciation of the cross-cultural dimensions of evaluation interactions can shed light on the complexities and challenges of this enterprise.

This culture of evaluation, which we as evaluators take for granted in our own way of thinking, is quite alien to many of the people with whom we work in organizations, and those new to it may need help and facilitation in coming to view the experience as valuable. Examples of the values of evaluation include clarity, specificity, and focusing; being systematic and making assumptions explicit; operationalizing program concepts, ideas, and goals; distinguishing inputs and processes from outcomes; valuing empirical evidence; and separating statements of fact from interpretations and judgments.

These values constitute ways of thinking that are not natural to some people and that are quite alien to many. When we take people through a process of evaluation—at least in any kind of stakeholder involvement or participatory process—they are, in fact, learning things about evaluation culture and often learning how to think in these ways. The learning that occurs as a result of these processes is twofold:

  1. the evaluation can yield specific insights and findings that can change practices and be used to build capacity, and
  2. those who participate in the inquiry learn to think more systematically about their capacity for further learning and improvement.

Learning to Think Evaluatively

‘Process use’ refers to using evaluation logic and processes to help people in programs and organizations learn to think evaluatively. This is distinct from using the substantive findings in an evaluation report: it is the difference between learning how to learn and learning substantive knowledge about something. Learning how to think evaluatively is learning how to learn. As this book shows, developing an organization’s capacity to think evaluatively opens up new possibilities for how evaluations can contribute and be used. It is an experience that the leadership in organizations is coming to value, because the capacity to engage in evaluative thinking has more enduring value than a delimited set of findings, especially for organizations interested in ongoing learning and improvement. Findings have a very short ‘half-life’, to use a physical science metaphor: they deteriorate quickly as the world changes, and specific findings typically have a small window of relevance. In contrast, learning to think and act evaluatively can have an ongoing impact, especially where evaluation is built into ongoing organizational development.

The experience of being involved in an evaluation, then, for those stakeholders actually involved, can have a lasting impact on how they think, on their openness to reality-testing, and on how they view the things they do. For example, I’ve worked with a number of programs and organizations where the very process of taking people through goals clarification is a change-inducing experience for those involved; as a result, the program is changed. Values are the foundations of goals, so by providing a mechanism and process for clarifying values and goals, evaluation has an impact even before data are collected. Likewise, the process of designing an evaluation often raises questions that have an immediate impact on program implementation. Such effects can be quite pronounced, as when the process of clarifying the program’s logic model or theory-of-action leads to changes in delivery well before any evaluative data are collected.

This book has that kind of impact by forcing serious examination of what it means to develop organizational capacity and providing concrete examples of variations, possibilities, and results.

Evaluation as an Intervention

Evaluation as an intentional, capacity-developing intervention in support of increased organizational effectiveness is controversial among some evaluation theorists, because it challenges the research principle that the measurement of something should be independent of the thing measured. Of course, researchers have long observed that measuring a phenomenon can affect the phenomenon; the classic example is the way that taking a pre-test can affect performance on a post-test. Viewing evaluation as an intervention turns the tables on this classic threat to validity and looks at how the collection of data can be built into program processes in ways that enhance program and organizational outcomes, making evaluation significantly more cost-beneficial. For example, an evaluation interview or survey that asks about a program’s various objectives can raise awareness of what the program’s objectives or intended outcomes are. In that sense, the evaluation is an intervention: it can reinforce what the program is trying to do.

Another kind of evaluation impact involves introducing the discipline of evaluation as a mechanism for helping to keep a program or organization on track by maintaining attention to priorities, often under the banner of accountability. The mantra of performance measurement—‘What gets measured gets done’—encapsulates one aspect of evaluation’s process impact. What we choose to measure has an impact on how people behave. If staff or programs, for example, get rewarded (or punished) for those things that are measured, then those things take on added importance. This focusing effect of evaluation adds responsibility to the evaluation process because measuring the wrong thing, measuring it inappropriately, or using what is measured inappropriately increases the likelihood that the ‘wrong’ thing will get done.

Organizational Capacity Development

The ideas and examples in this book move the evaluation field forward significantly. As noted in opening this foreword, the contributors have taken seriously the idea that every evaluation of a capacity development effort should itself contribute to the capacity development effort and ultimately to the organization’s performance. That’s a high standard to meet, but especially in the developing world, where resources are so scarce, aiming at multiple levels and kinds of impact is crucial. Evaluation is too valuable and scarce a resource to be wasted just producing reports. This book shows that a greater impact and a broader vision are both needed in theory and possible in practice.






