Jul 17 2013
From BetterEvaluation.org’s great weekly blog comes a post that has value for facilitators, not just evaluators! Week 28: Framing an evaluation: the importance of asking the right questions.
First let me share the tips and the examples from the article (you’ll need to read the whole article for full context), and then after each tip I’ll add my own facilitator contextual comments!
Eight tips for good evaluation questions:
Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions but these should be directly relevant for answering the main question under which they fall. When facilitating, think of each question as a stepping stone along a path that may or may not diverge. Questions in a fluid interaction need to reflect the emerging context. So plan, but plan to improvise the next question.
Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements. Same in facilitation!
Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions to get more at the program’s contributions to peace and stability compared to longer-term economic development goals. Ditto! I have to be careful not to keep asking questions for my OWN interest!
Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national). YES! I need to learn this one better. I always have too many.
Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable, and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge in their report that they did not have sufficient data to fully answer certain questions. This is more subtle in facilitation, as we have the opportunity to try to surface/tease out answers that may not be clear to anyone at the start.
Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation and that the evaluation would serve more to validate/tweak this design rather than truly shape it from scratch. The team thus tailored their questions to get more at peace, security, and governance issues given the focus on the follow-on activity. AMEN! YES!
Don’t use questions which contain two or more questions in one. See for example question #6 in the attached—“out of the different types of infrastructure projects supported (solar dryers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to answer which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and numbers of beneficiaries reached. Thus, while box culverts (small bridges) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao was minimal. Same for facilitation. Keep it simple!
Use questions which focus on what was achieved, how and to what extent, rather than simple yes/no questions. In the GEM example, simply asking if an activity had or had not met its intended targets was much less informative than asking how those targets were set, whether those targets were appropriate, and how progress towards meeting them was tracked. Agree on avoiding simple yes/no—unless, of course, it is deciding if it is time to go to lunch.
I’m currently pulling together some materials on evaluating communities of practice, and I think this list will be a useful addition. I hope to be posting more on that soon.
By the way, BetterEvaluation.org is a great resource. Full disclosure, I’ve been providing some advice on the community aspects! But I’m really proud of what Patricia Rogers and her amazing team have done.