
In Conversation: Michael Quinn Patton



Related sidebar:

Michael Quinn Patton on IDRC and Evaluative Thinking



Links to explore...

IDRC's Evaluation Unit

Evaluation Unit brochure: Outcome Mapping

IDRC Booktique: Outcome Mapping — Building Learning and Reflection into Development Programs




Michael Quinn Patton at IDRC leading a workshop on evaluative thinking. (IDRC Photo: Kevin Conway)
2002-02-08
Lisa Waldick

Many development programs are evaluated to determine how effective and useful they are. But how effective and useful are the evaluations themselves? Internationally renowned evaluator Michael Quinn Patton recently came to IDRC to discuss his approach for making sure evaluations are useful for decision-makers. Dr Patton heads an organizational development consulting business, Utilization-Focused Information and Training. Known for five influential books on evaluation, including Qualitative Evaluation and Research Methods, he was the 1984 recipient of the Alva and Gunnar Myrdal Award from the Evaluation Research Society for "outstanding contributions to evaluation use and practice".

Why is it important to evaluate development programs? Don't people on the ground just intuitively understand what is going on in programs?

Our very processes of taking in information distort reality — all the evidence of social science indicates this. We have selective perception — some of us have rose-coloured glasses, some of us are gloom-and-doomers. We are not neutral; there is an emotional content to information. We need disciplined techniques to be able to stand back from that day-to-day world and really be able to see what is going on. We need approaches to help us stand back from our tendency to have biases, prejudices, and preconceptions.

Can you give an example of how a preconception can influence a project?

There are examples in development that are legendary. One agriculture project grew a bean that cooked faster, so it would use less fuel and thereby help reduce deforestation. But there was a lot of resistance to adopting it. Part of the resistance arose because cooking was one of the few times women in this culture were able to socialize with each other. They didn’t want a fast-cooking bean. Those are the kinds of things evaluators see when they go in — they see the things that people can’t see because they are too close to it. Evaluation is about standing back and being able to see things through somebody else’s eyes.

What distinguishes your approach to evaluation?

One of the ways you can distinguish different evaluation approaches is by what they take as their bottom line for the evaluation. For me, it is the pragmatic use of evaluation findings and the evaluation process. In other words, the evaluation is designed and implemented in a way that really makes a difference to improving programs and improving decisions about programs. So the bottom line in my approach is use — that’s the reason my approach is called utilization-focused evaluation.

How do you think the usefulness of evaluations can be increased?

In the timing of the evaluation, for example, it means that you time the findings to match when decisions are really going to be made. A lot of evaluations take place at the end of a project. An evaluation report gets written and it’s a very good piece of work. But all the decisions have already been made about the future of the project by the time the evaluation gets done. On paper, it appears to make sense to do an evaluation right at the end of the project to try to capture everything that’s gone on. But it turns out not to be useful to do that. Everything that is going to be decided about the future of a program gets decided before the end of the program.

How can evaluations inform decision-making?

By knowing what questions the decision-makers bring to a project. So, for example, you have to know: is consideration being given to expanding the project from one part of the world to another — to adapt the intervention to a new ecosystem or a new group of people? Or do decision-makers already know that resources are declining, and the real question is: Can we do more with less? Knowing the decision context lets you gather data that is relevant. A lot of evaluations get designed generically. When decision-makers get them, the response is: "Well, that’s interesting, but it doesn’t help me with my decision. It doesn’t answer my question."

What do you see as the difference between research and evaluation?

There’s a whole continuum of different kinds of evaluation and different kinds of research. However, on the whole, the purpose of evaluation is to produce useful information for program improvements and decision making. And the purpose of research is to produce knowledge about how the world works. Because research is driven by the agenda of knowledge production, the standards for evidence are higher, and the time lines for generating knowledge can be longer. In evaluation, there are very concrete deadlines for when decisions have to get made, for when program action has to be taken. It often means that the levels of evidence involve less certainty than they would under a research approach and that the time lines are much shorter.

If you don’t have the highest possible levels of evidence in the evaluation, isn’t there a risk of making bad decisions?

In the real world, you don’t have perfect knowledge and decisions are going to get made anyway. When a program is coming to an end and a decision has to get made about it, the decision is going to get made whether or not you have perfect knowledge. If you are saying: "No, don’t decide now. Wait until I have perfect knowledge", the train is going to pass. The reality is that it’s better to have some information in a timely fashion than to have perfect information too late to get used.

What is participatory evaluation? What are its advantages?

Participatory evaluation means involving people in the evaluation — not only to make the findings more relevant and more meaningful to them through their participation, but also to build their capacity for engaging in future evaluations and to deepen their capacity for evaluative thinking.

So, let's say you want to do a serious evaluation and you are trying to decide whether to have an external person do it or to do it internally. The external person may do a very good job of generating findings for you. But all the things that they learn about how to do evaluation, they take away with them. If you do the evaluation with counterparts in the countries where you are working, then they get the opportunity not only to generate findings and know where those findings come from, but also to learn about evaluative thinking.

What is evaluative thinking?

Evaluative thinking includes a willingness to do reality testing, to ask the question: how do we know what we think we know? To use data to inform decisions — not to make data the only basis of decisions, but to bring data to bear on decisions. Evaluative thinking is not limited to evaluation projects; it’s not even limited to formal evaluation. It’s an analytical way of thinking that infuses everything that goes on. [See related sidebar: Michael Quinn Patton on IDRC and Evaluative Thinking]

What is the hardest thing to teach about evaluation?

The hardest thing I find to teach is how to go from data to recommendations. When you are doing an evaluation, you are looking at what has gone on — a history. But when you write recommendations, you are a futurist. Evaluations can help you make forecasts, but future decisions are not just a function of data. Making good, contextually grounded, politically savvy and do-able recommendations is a sophisticated skill. A great evaluator can really show the strengths and weaknesses in a program and can gather good, credible data about what is working and not working. But that doesn’t mean that they know how to turn that information into recommendations.

I actually prefer to involve the primary decision-makers who are going to use the evaluation in generating their own recommendations through a process of facilitation and collaboration. I encourage them to look at the data, consider the options, and then come up with their own recommendations in a context that includes their values, experience, and resources.

When is it good not to evaluate a project or program?

You can overdo evaluation to the point where people get sick of it. There are also times of crisis when you need to take action rather than study the questions. I’ve seen projects go down the tubes while people were studying and evaluating when in fact they needed to take action.

What do you think is the most important key to evaluation?

It is being serious, diligent, and disciplined about asking the questions, over and over: "What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?" This typically gets answered casually: "We are going to use the evaluation to improve the program" — without asking the more detailed questions: "What do we mean by improve the program? What aspects of the program are we trying to improve?" So a focus develops, driven by use.



For more information:


Terry Smutylo, Director, Evaluation Unit, IDRC, 250 Albert Street, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9; Phone: (613) 236-6163 ext 2345; Email: tsmutylo@idrc.ca



Sidebar


Michael Quinn Patton on IDRC and Evaluative Thinking


The fact that IDRC has made evaluative thinking a part of their core culture indicates a corporate belief that program design, planning, and implementation are all improved by bringing evaluative thinking to them.

I do see a lot of organizations working on similar kinds of things. I don’t know of any other organization that is taking the approach that IDRC has taken by making evaluative thinking part of their core corporate assessment framework. That’s challenging; I think it’s very exciting. I think it’s very cutting edge. It’s going to be interesting to see the way in which IDRC has raised the bar for what it means to be a learning organization by making evaluative thinking one of their core themes. IDRC, in that sense, will become one of the cases in evaluation literature of how this does or doesn’t happen.


