Archive for the 'evaluation' Category

Jul 20 2016

Learning While Building eLearning: Part 2 – Learning Objectives & Assessment

This is the second (slow in coming!) of four pieces reflecting on the experiences of Emilio, a subject matter expert who was tasked with converting his successful F2F training into an elearning offering. Emilio has let me interview him during the process.

This piece focuses on the thorny issue of learning objectives at the front end of an elearning project and assessment at the other end. You can find the context in part 1 here. (Disclaimer: I was an adviser to the project and my condition of participation was the ability to do this series of blog posts, because there is really useful knowledge to share, both within the colleague’s organization and more widely. So I said I’d add the blog reflections – without pay – if I could share them.)

Nancy: Looking back, let’s talk about learning objectives. You started with all of your F2F material, then had to hone it down for online. You received feedback from the implementation team along the way. What lessons came out of that process? How do you make content more precise when you have fewer options to assess learner needs and interests “in the moment” as you do face to face, and when online attention spans are limited?

Emilio: I realize now that I had not thought about being disciplined with learning objectives. I had created them with care when I first developed my F2F offering. Once I had tested the course several times, I recognized that I forgot my own initial learning objectives because in a F2F setting I adapted to students’ interests and knowledge gaps on the spot, and I was also able to clarify any doubts about the content. Therefore, over time, these learning objectives became malleable depending on the group of students, and thus lost presence in my mind.

This became apparent as I was doing the quizzes for the online work and got comments back from Cheryl (the lead consultant). She noted which of my quiz questions were and were NOT grounded in the learning objectives and content. I realized I was asking a bunch of questions that were not crucial to the learning objectives.

With that feedback, I narrowed down to the most important questions for achieving and measuring the learning objectives. It was an aha moment. This is something that is not necessarily obvious or easy. You have to put your mind to it, especially when you are developing an e-learning course. It applies to the F2F context as well, but in an e-learning setup you are forced to be more careful because you cannot clarify things on the spot. There is less opportunity for that online. That was very critical. (Note: most of the course was asynchronous. There were weekly “office hours” where clarifications happened. Those learners who participated in the office hours had higher completion rates as well.)

It was clear I had to simplify the content for the elearning setup – and that was super useful. While my F2F materials were expansive to enable me to adapt to local context, that became overload online.

Nancy: What was your impression of the learners’ experiences?

Emilio: It was hard to really tell because online we were dealing with a whole different context. Your indicators change drastically. When I’m F2F I can probe and sense whether the learners are understanding the material. It is harder online to get the interim feedback and know how people are doing. For the final assessment, we relied on a final exam with an essay question. The exam was very helpful in assessing the learners’ experience, but since it is taken at the end of the course, there are no corrective measures one can take.

Nancy: Yes, I remember talking about that as we reviewed pageviews and the unit quizzes during the course. The data gives you some insight, but it isn’t always clear how to interpret it. I was glad you were able to get some feedback from the learners during your open “office hours.”

We used the learning objectives as the basis for some learner assessment (non-graded quizzes for each unit and a graded final exam which drew from the quizzes). How did the results compare with your expectations of the learners’ acquisition of knowledge and insights? How well did we hit the objectives?

Emilio: We had 17 registered learners and 7 completed. That may sound disappointing. Before we started, I asked you about participation rates and you warned me that they might be low – and that is why I am not crying. The 7 that completed scored really well on the final exam and you could see their engagement. They went through the material, did the quizzes and participated in the Office Hours. One guy got 100% in all of the quizzes, and then 97% in the exam.

We had 8 people take the final exam. One learner failed to reach the 70% required benchmark, but digging into it, Terri (one of our consultants) discovered that the way Moodle was scoring the multiple choice answers was not configured precisely: it was giving full credit for partially correct answers. We need to fix that. Still, only one learner fell short of the 70% benchmark even with the error.
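(Nancy’s note: to make the scoring issue concrete, here is a minimal sketch of the kind of check involved – it is not the project’s actual fix, nor Moodle’s internal logic, and the attempt format and field names are invented for illustration. The idea is simply to re-score the multi-answer questions with a strict all-or-nothing rule and flag any attempt where the recorded grade disagrees.)

```python
# Hedged sketch: re-score multi-answer quiz attempts with an all-or-nothing rule
# and flag attempts where the LMS recorded a different (e.g. partial-credit) grade.
# The attempt format and field names below are assumptions for illustration only.

def all_or_nothing_score(selected, correct):
    """Return 1.0 only if the selected options exactly match the answer key."""
    return 1.0 if set(selected) == set(correct) else 0.0

def flag_discrepancies(attempts):
    """attempts: list of dicts with 'learner', 'selected', 'correct', 'lms_score'."""
    flagged = []
    for a in attempts:
        intended = all_or_nothing_score(a["selected"], a["correct"])
        if abs(intended - a["lms_score"]) > 1e-9:
            flagged.append((a["learner"], a["lms_score"], intended))
    return flagged

# Example: learner A missed option "c" but the LMS still recorded full credit.
attempts = [
    {"learner": "A", "selected": ["a", "b"], "correct": ["a", "b", "c"], "lms_score": 1.0},
    {"learner": "B", "selected": ["a", "b", "c"], "correct": ["a", "b", "c"], "lms_score": 1.0},
]
print(flag_discrepancies(attempts))  # -> [('A', 1.0, 0.0)]
```

In Moodle itself a correction like this is typically a question-grading setting rather than code, but a comparison along these lines makes the discrepancy visible before grades are re-issued.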

The essay we included in the exam had really good responses. It achieved my objective of getting an in-depth look at the context the learners were coming from. Most of them described an institutional context. Then they noted what they thought was most promising from all the modules, and what was most applicable or relevant to their work. There were very diverse answers but I saw some trends that were useful. However, it would have been useful to know more of this before and during the course.

Nancy: How difficult was it to grade the essays? This is something people often wonder about online…

Emilio: I did not find it complicated, although there is always some degree of subjectivity. The basic criteria I used were their focus on the question asked, and their application of all the principles taught during the course that relate to the context described in the question.

Nancy: One of the tricky things online is meaningful learner participation. How did the assessment reflect participation in the course?

Emilio: We decided not to give credit for participation in activities because we were not fully confident of how appropriately we had designed such activities for an e-learning environment in this first beta test. I think this decision was the right one.

First, I feel that I did not do a good job of creating an atmosphere – a sense of community – that would encourage participation. Even though I responded to every single comment that got posted, I don’t really feel that people responded that much in some of the exercises. So I would have penalized students for something that is not their fault.

Second, we had one learner who did every exercise but did not comment on any of the posts. He is a very good student and I would have penalized him if completion relied on participation. Another learner who failed did participate, went to the office hours and still did not pass the final exam.

We failed miserably with the group exercise for the second module. I now realize the group exercise requires a lot of work to build the community beforehand. I sense this is an art. You told me that it is completely doable in the elearning atmosphere, but after going through the experience I really feel challenged to make it work. Not only with respect to time, but how do you create that sense of community? I feel I don’t have a guaranteed method for making it work. It is an art to charm people in. I may or may not have it!

Nancy: The challenges of being very clear about what content you want to share with learners, how you share it, and how you assess it should not be underestimated. So often people think it is easy: here is the content! Learning design in general is far more than content, and learning design online can be trickier because of your distance from your learners – not just geographic distance, but the social distance where there is less time and space for the very important relational aspects of learning.

Up Next: Facilitating Online

No responses yet

Dec 02 2013

An Interview With Aaron Leonard on Online Communities

I had a chance to interview Aaron Leonard late last September, just before he took a leave from his online community management work at the World Bank, to talk about that work. This is part of a client project I’m working on to evaluate a regional collaboration pattern and to start understanding processes for more strategic design, implementation and evaluation of collaboration platforms, particularly in the international development context.

Aaron’s World Bank blog is http://www.southsouth.info/profiles/blog/list?user=1uxarewp1npnk

How long have you been working on “this online community/networks stuff” at the Bank?  How did your team’s practice emerge?

I’ve been at the Bank 4 years and working on the Bank’s communities of practice (CoP) front for 3 of those. I started as a community manager building a CoP for external/non-Bank people focused on South-South exchange. Throughout this process, I struggled with navigating the World Bank rules governing these types of “social websites”. At the time, there were no actual rules in place – they were under formulation. So what you could and could not do, use, or pay for depended on who you talked to. I started working with other community managers to find answers to these questions, along with getting tips and tricks on how to engage members, build HTML widgets, etc. I realized that my background working with networks (pre-Bank experience) and my experience launching an online community of practice within the Bank were useful to others. As more and more people joined our discussions, we started formalizing our conversation – scheduling meetings in advance, setting agendas, etc. – but not too formal :).

We were eventually able to make ourselves useful enough that I applied for and received a small budget to bring in some outside help from a very capable firm called Root Change and to hire a brilliant guy, Kurt Morriesen, to help us develop a few tools for community managers and project teams and to help them think through their work with networks. We started with 15 groups – mostly within WBI, but some from the regions as well. All were asking and needing answers to some common questions: “How do we get what we want out of our network? How do we measure and communicate our success? How do we set up a secretariat and good governance structure?” This line of questioning seemed wrong in many ways. It represented a “management mindset” (credit Evan Bloom!) versus a “network mindset”. The project teams were trying to get their membership to do work that fit their programmatic goals versus seeing the membership as the goal and working out a common direction for members to own and act on themselves. We started asking instead, “Why are you engaging?” “Who are you really trying to work with?” “What do you hope to get out of this engagement?” “What value does this network provide its members?” This exercise was really eye opening for all of us and eventually blossomed into an actual program. I brought in Ese Emerhi last year as a full time team member. She has an amazing background as a digital activist, and knows more than I do about how to make communities really work well.

Ese and I set up a work program around CoPs and built it into a practice area for the World Bank Institute (WBI) together with program community managers like Kurt, Norma Garza (Open Contracting), and Raphael Shepard (GYAC) among others. With Ese on board, we were able to expand beyond WBI (to the World Bank in general). This was possible in part because our team works on knowledge exchange, South-South knowledge exchange specifically (SSKE). We help project teams in the World Bank design and deliver effective knowledge exchange. CoPs are a growing part of this business, in part because the technology to connect people in a meaningful conversation is getting better, and in part because we know how to coach people on when and how to use communities.

How did you approach the community building?

With Root Change we started with basic stocktaking and crowdsourcing to try to define an agenda for ourselves. We had 4-5 months for this activity. We settled on a couple of things.

  1. Looking at different governance arrangements. How do we structure the networks?

  2. What tools or instruments to use in the design and planning of more effective networks.

We noticed that we were talking more about networks than communities. Some were blends of CoPs, coalitions, and broader programs. The goals aren’t always just the members’. So we talked about the differences between these things, and how they can be thought of along a spectrum of commitment or formality – a social network vs. an association, and how they are and are not similar beasts.

We gave assignments to project teams and met on a monthly basis to work with these instruments. At the impetus of the consultants at Root Change, we started doing one-to-one consultations with teams. We reserved a room, brought in cookies and coffee and then brought the teams in for 90-minute free consulting sessions. These were almost more useful for the teams than the project work. Instead of exploring the tools, they were APPLYING the tools themselves. It was also a matter of taking the time to focus, sit down and be intentional with their work with their networks. Just shut the door and collectively think about what it was they were trying to do. A lot of this started out in a more organic way around what was thought to be an easy win: “We’ll start a CoP, get a website, get 1,000 people to sign up” – without understanding what it meant for membership, resourcing, team, commitment and longer term goals and objectives.

We helped them peel back some of the layers of the onion to better understand what they were trying to do. We didn’t get as far as we wanted. We wanted to get into measurement and evaluation and social network analysis, but that was a little advanced for these teams and their stage of development. They did not have someone they could rely on to do this work. Some had a community manager, but most of these were short term consultants, for 150 days or less, and often really junior people who saw the job as an entry level gig. They were often more interested in the subject matter than in being community managers. They tended to get pulled in different directions and may or may not have liked the work. They tended to be hired right out of an International Development master’s program where they had a thematic bent, so they were usually interested in projects, vs. organizing 1,000 people and lending some sense of community. Different skill sets!

We worked with these teams and came up with a few ideas. Root Change wrote a small report which helped justify a budget for the subsequent fiscal year, and my boss let me hire someone who would have community building as part of their job. Half of their time went to the Art of Knowledge Exchange toolkit we were working on together, and the other half to community. At this point we opened up our offering to the World Bank Group to help people start a CoP and understand how to work with membership, engage, measure and report on it. We helped them figure out how they could use data and make sense of their community’s story. We brought in a few speakers and did social things to profile community managers. Over the course of the year we talked to and worked with over 300 people. (Aaron reports they have exact numbers, but I did not succeed in connecting with him before he left to get those numbers!) We did 100 one-on-one counseling sessions. We reached very broadly across the institution and increased the awareness of the skillset we have in WBI regarding communities and networks. We helped people see that this is a different way of working. Our work coincided with the build up of the Bank’s internal community platform based on Jive (originally called Scoop and now called Sparks – a collaboration for development and CoP oriented platform). The technology was getting really easy for people to access. There was more talk about knowledge work, about being able to connect clients, and awareness of what had been working well on the S-S platform.

We did a good job and that gave us the support for another round of budget this year. Now we have been able to shift some of the conversation to the convening and brokering role of the Bank. This coincided with the Bank’s decreased emphasis on lending and increased emphasis on access to experts, which complemented the direction we were going in. We reached out and have become a reference point for a lot of this work. There have been parallel institutional efforts that flare and fade, flare and fade. But it is difficult to move “the machine.” It can even be a painful process to witness. I admire the people doing this, but (the top down institutional change process) was something we tried to avoid. We did our work on the side, supporting people’s efforts where possible. Those things are finally bearing fruit. We have content. They have a management system. We have a process for teams to open a new CoP space and a way to find what is available to them as community leaders. They have a community finder associated with an expert finder. These are great things to have and invest in, but it is not where we were aiming. We want to know the community leaders, the people like Ese, like Norma Garza, running these communities, who struggle and have new ideas to share. What are the ways to navigate the institutional bureaucracy that governs our use of social media tools? How do you find good people to bring on board? You can’t just hire the next new grad and expect it to last. There is an actual skill set – unique, not always well defined, but getting more recognition as something that is of value and unique to building a successful CoP. There is new literature out there and people like Richard Millington (FeverBee) – a kid genius who has been doing this since he was 13. He takes ideas from people like you, Wenger and Denning. There is now more of a practice around this.

While the Bank is still not super intentional about how it works internally with respect to knowledge and process, more attention is being paid and more people are being brought in. It can be a touch and go effort. We’re just a small piece, but we are feeling a much needed demand and our numbers prove that. We have monthly workshops (sometimes two a month) that are promoted through a learning registration system, and we’d sell the spaces out within minutes. People are stalking our stuff. It is exciting. At the same time, while it felt like the process of expansion touched a lot of people, convinced people and shaped the dialog, I also feel we lost touch with the Normas. Relationships changed. We were supporting them by profiling them, helping them communicate to their bosses so the bosses understood their work, but not directly supporting them with new ideas, techniques, approaches.

We reassessed at the end of last year. We want to focus on building an actual community again. We started but lost that last year while busy pushing outwards. But we still kept them close and we can rely on each other. It has not been the intimate setting of 15-20, or 3 of us doing this work, sitting around and talking about what we are struggling with. Like “How did you do your web setup? How do you do a Twitter Jam?” So our goals this year are a combination. Management likes that we reached so many people last year. They have been pretty hands off and we can set our own pace. Because we did well last year, they have given us that room, that trust.

So now we want to focus more on championing the higher level community managers. The idea is to take a two fold approach. First, we want to use technology to reach out, to use our internal online space to communicate and form a more active online community. Second, we want to focus a few of our offerings on these higher level community managers, with the idea that if we can give them things to help with the deeper challenges of their job, they will be able to help us field the more general requests for the more introductory offerings: “Can you review my concept note?” “Help me set up my technology.”

It is still just the two of us. We are grooming another person, but working with the more senior community managers will also allow us to handle more requests by relying on their experience. We give them training and in return they help with basic requests. This is not a mandate. We don’t have to do this. It is what we see as a way of building a holistic and sustainable community within the Bank to meet the needs of community managers and people who use networks to deliver products and services with their clients.

How do you set strategic intentions when setting up a platform?

One of the things I love most about advising people on CoPs is telling them not to do it. I love being able to say this. The incentives are wrong, the purpose is wrong. So many people think CoPs are something that is “on the checklist,” a magic bullet, or a sexy tech solution. Whatever it is, those purposes are wrong. They are thinking about the tech and not the people they are engaging. If you want to build a fence, you don’t just go buy a hammer and be done with it. You need to actually plan it out, think about why you are building it, why it’s going in, how high… bad analogy. Too often CoPs are done for all the wrong reasons. The whole intent around involving people in a conversation is lost, or not even considered, or is simply an afterthought. The fallacy of “build it and they will come.” One of my favorite pieces on this is from the guy who wrote the 10 things about how to increase engagement on your blog. It speaks to the general advice of understanding who you are targeting. Anyone can build a blog, set up a cool website or space. But can you build community? The actual dialog or conversation? How do you do that?

One key is reaching people where they already are – one of the best pieces of advice I’ve heard, and one I always pass on. Don’t build the fancy million dollar custom website if no one is really going to go there. One of the things I have is a little speech for people. Here’s my analogy. If you are going to throw a party, you have to think about who you are going to invite, where to do it, what to feed them, the music: you are hosting that party. You can’t just leave it up to them. They might trash your place, not get on board, never even open the door. You have to manage the crowd and facilitate the conversation unless they already know each other. And why are you throwing the party if they already get together in another space?

Coming from the NGO world and then coming to the Bank, I saw how easy it is to waste development dollars. It is frustrating. I have spoken openly about this. The amount of money wasted on fancy websites that no one uses is sad. There are a lot of great design firms that help you waste that money. It is an easy thing for someone to take credit for a website once it launches. It looks good, and someone makes a few clicks, then no one asks to look at it again. The boss looks at it once and that is it. No one thinks about or sees the long term investment. They see it as a short term win.

One of the things I try to communicate is to ask: if you are going to invest in a platform, do you really want to hear back from the people you are pushing info to? If not, build a simple website. If you do want to engage with that community, to what extent and for what purpose? How will you use what you learn to inform your product or work? If you can’t answer that, go back to the first question. If they actually have a plan – and their mandate is to “share knowledge” – how do they anticipate sharing knowledge? They often tell me a long laundry list of target audiences. So you are targeting the world? This is the conversation I’ve experienced, with no clear, direct targeting, or understanding of who specifically they are trying to connect with. We suggest they focus on one user group. Name real names. If you can’t name an individual, write out a description. Talk about their fears, desires, challenges, and work environment. Really understand them in their daily work life. Then think about how this proposed platform/experience/community really adds value, and in what specific way. It is not just about knowledge sharing. People can Google for information. You are competing with Google, email, Facebook, their boss, their partner. That’s your competition. How do you beat all those for attention? That is what you are competing with when someone sits down at the computer. This is the conversation we like to walk people through before they start. The hard part is that a lot of these people are younger or temporary staff hired to do this. It is hard for them to go back to the boss and say “we don’t know what we are doing” and possibly lose their jobs. There can be an inherent conflict of interest.

How do you monitor and evaluate the platforms? What indicators do you use? How are they useful?

One of the things we don’t do – and this might be a sticking point – is actually run or manage any of these communities. We just advise teams. I haven’t run one for 2 years. Ese has her own community outside the Bank, but inside we don’t personally run any besides the community managers’ community, and that has been mainly a repository.

We have built some templates for starting up communities, especially for online networks with external or mixed external and internal audiences. We have online metrics (# of posts, pageviews, etc.) and survey data that we use to tell the story of a community. Often the target of those metrics is the managers who hold the decision making role in that community. We try to communicate intentionally the value the community gives to members and to a program. We have developed some more sophisticated tools with Root Change, but we didn’t get enough people to use them. Perhaps they are too sophisticated for the current stage of community development. And we can’t force people to use them.

It would be fantastic to have a common rubric, but we don’t have the energy or will to make those decisions. We are still in the early “toddler” stage. Common measurement approaches and quality indicators are far down the line. Same with social network analysis. Root Change has really pushed the envelope in that area, but we aren’t advanced enough to benefit from that level of analysis. The (Root Change) tool is fun to play around with and provides a way of communicating complex systems to community owners and members. What Root Change has done is develop an online social network analysis platform that can continuously be updated by members and grow over time. Unlike most SNA, which is a snapshot, this is more organic and builds on an initial survey that is sent to the initial group, who then forward it to their networks.
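(Nancy’s note: to make the “growing map” idea concrete, here is a minimal sketch of the general approach – not Root Change’s actual platform or data model. Each survey wave, including the waves created when members forward the survey to their own networks, is simply merged into one accumulating graph, so the picture thickens over time instead of being a one-off snapshot. The names and wave contents are made up for illustration.)

```python
# Hedged sketch: an SNA map that grows as survey waves arrive, rather than a one-off snapshot.
import networkx as nx

def add_survey_wave(graph, responses):
    """responses: iterable of (respondent, [people they named]) pairs."""
    for respondent, named in responses:
        for contact in named:
            graph.add_edge(respondent, contact)  # ties accumulate across waves
    return graph

network = nx.DiGraph()

# Wave 1: the initial survey group.
add_survey_wave(network, [("Ana", ["Ben", "Chen"]), ("Ben", ["Chen"])])

# Wave 2: people the first group forwarded the survey to.
add_survey_wave(network, [("Chen", ["Ana", "Dede"]), ("Dede", ["Ben"])])

print(network.number_of_nodes(), network.number_of_edges())
print(nx.degree_centrality(network))  # a simple "who is central" readout
```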

If you had a magic wand, what are three things you’d want every time you have to implement a collaboration platform?

If I had a magic wand and I could actually DO it, I would first eliminate email. Part of the reason, the main reason, we can’t get people to collaborate is that they aren’t familiar with working in a new way. I think of my cousins who are 10 years younger, and they don’t use email. They use Facebook. They are dialoging in a different way. They use Facebook’s private messaging, Twitter, and WhatsApp. They use a combination of things that are a lot more direct. They keep a running stream of IM messages open. Right now email is the reigning champion in the Bank, and if we have any hope of getting people to work differently and collaboratively we have to first get rid of email.

Next, to implement any kind of project or activity in a collaboration space right, I’d want a really simple user interface, something so intuitive that it needs no explanation.

Thirdly, I’d want that thing available wherever those people are, whether on their cell phone, iPad, or any touchable, interactive interface. Here you have to sit at your computer. We don’t even get laptops. You have to sit at a desk to engage in the online space. It is hard to do it through your phone – not easy. People still bring paper and pencil to meetings. More are bringing iPads, but they are still a minority. A while back I did a study tour to IDEO. They have an internal Facebook-like system called The Tube, which shares project updates, findings and all their internal communications. No one was using it at the beginning. One of the smartest things they did was install – in 50 different offices – a big flat screen at each entrance, which randomly displays the latest status updates pulled from The Tube from across their global team. Once they did that, the rate of people updating their profiles and using it as a way of communicating jumped to something like a 99% adoption rate in a short time. From a small minority to a vast majority. No one wanted to be seen with a project status update from many months past. It put a little social pressure in the common areas and entranceway – right in front of your bosses and teammates. It was an added incentive to use that space.

You want something simple, something that replaces traditional communications, and something with a strong, and present, incentive. When you think about building knowledge sharing into your review – how do you really measure that? You can use point systems, all sorts of ways to identify champions. Yelp does a great job at encouraging champions. I have talked to one of their community managers. They have a smart approach to building and engaging community. They incentivize people through special offerings, such as first openings of new restaurants, that they can organize. They get reviews out of that. That’s their business model.

We don’t really have a digital culture now. If we want to engage digitally and globally, we have to be more agile with how we use communication technology and where we use it. The Tube in front of the urinals and stall doors. You’ve got a minute or two to look at something. That’s the way!

 

One response so far

Sep 09 2013

How do we evaluate the strategic use of collaboration platforms?

Hey smart people, especially my KM and collaboration peeps, I need your help!

I’ve been trawling around to find examples of monitoring and assessment rubrics to evaluate how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changed or deleted functionality?
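To make the “adoption and use patterns” question concrete, here is a minimal, entirely hypothetical sketch. It assumes a platform that can export an activity log of (user, date, action) rows; the column names and numbers are invented, not drawn from any particular product.

```python
# Hedged sketch: basic adoption and use-pattern checks from an assumed activity-log export.
import pandas as pd

log = pd.DataFrame({
    "user":   ["ana", "ben", "ana", "chen", "ben", "ana"],
    "date":   pd.to_datetime(["2013-08-01", "2013-08-01", "2013-08-08",
                              "2013-08-15", "2013-08-20", "2013-08-22"]),
    "action": ["post", "view", "view", "post", "post", "comment"],
})

# Adoption: how many distinct people are active each month?
monthly_active = log.groupby(log["date"].dt.to_period("M"))["user"].nunique()

# Use patterns: are people contributing, or mostly just reading?
action_mix = log["action"].value_counts(normalize=True)

print(monthly_active)
print(action_mix)
```

Numbers like these only tell you WHAT is happening, of course – whether that matches the platform’s intended strategic purpose still needs the kind of rubric I’m asking about.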

I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.

But come on, why are there few things that help us understand if our existing platforms and tool configurations are or are not working?

Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!

  • How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
  • What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
  • Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
  • How often do you do it?
  • How do we involve users in assessments?
  • How have the results prompted changes (or not and if not, why not)?

Please, share this widely!

THANKS!

15 responses so far

Jul 17 2013

BetterEvaluation: 8 Tips for Good Evaluation Questions

From BetterEvaluation.org’s great weekly blog comes a post that has value for facilitators, not just evaluators! Week 28: Framing an evaluation: the importance of asking the right questions.

First let me share the tips and the examples from the article (you’ll need to read the whole article for full context), and then in blue I’ll add my facilitator contextual comments!

Eight tips for good evaluation questions:

  1. Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions but these should be directly relevant for answering the main question under which they fall. When facilitating, think of each question as a stepping stone along a path that may or may not diverge. Questions in a fluid interaction need to reflect the emerging context. So plan, but plan to improvise the next question.

  2. Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements. Same in facilitation!

  3. Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions to get more at the program’s contributions to peace and stability compared to longer-term economic development goals. Ditto! I have to be careful not to keep asking questions for my OWN interest!

  4. Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national). YES! I need to learn this one better. I always have too many. 

  5. Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge that they did not have sufficient data to fully answer certain questions in their report. This is more subtle in facilitation as we have the opportunity to try and surface/tease out answers that may not be clear to anyone at the start. 

  6. Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation and that the evaluation would serve more to validate/tweak this design rather than truly shape it from scratch. The team thus tailored their questions to get more at peace, security, and governance issues given the focus on the follow-on activity. AMEN! YES!

  7. Don’t use questions which contain two or more questions in one. See for example question #6 in the attached—“out of the different types of infrastructure projects supported (solar dyers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to answer which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and number of beneficiaries reached. Thus, while box culverts (small bridge) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao were minimal. Same for facilitation. Keep it simple!

  8. Use questions which focus on what was achieved, how and to what extent, and not simple yes/no questions. In the GEM example, simply asking if an activity had or had not met its intended targets was much less informative than asking how those targets were set, whether those targets were appropriate, and how progress towards meeting those targets were tracked. Agree on avoiding simple yes/no unless of course, it is deciding if it is time to go to lunch. 

I’m currently pulling together some materials on evaluating communities of practice, and I think this list will be a useful addition. I hope to be posting more on that soon.

By the way, BetterEvaluation.org is a great resource. Full disclosure, I’ve been providing some advice on the community aspects! But I’m really proud of what Patricia Rogers and her amazing team have done.

One response so far

Feb 12 2013

Data, Transparency & Impact Panel –> a portfolio mindset?

Yesterday I was grateful to attend a panel presentation by Beth Kanter (Packard Foundation Fellow), Paul Shoemaker (Social Venture Partners), Jane Meseck (Microsoft Giving) and Eric Stowe (Splash.org), moderated by Erica Mills (Claxon). First of all, from a confessed short attention spanner, the hour went FAST. Eric tossed great questions for the first hour, then the audience added theirs in the second half. As usual, Beth got a Storify of the Tweets and a blog post up before we could blink. (Uncurated Tweets here.)

There was much good basic insight on monitoring for non profits and NGOs. Some of my favorite soundbites include:

  • What is your impact model? (Paul Shoemaker I think. I need to learn more about impact models)
  • Are you measuring to prove, or to improve? (Beth Kanter)
  • Evaluation as a comparative practice (I think that was Beth)
  • Benchmark across your organization (I think Eric)
  • Transparency = Failing Out Loud (Eric)
  • “Joyful Funeral” to learn from and stop doing things that didn’t work out (from Mom’s Rising via Beth)
  • Mission statement does not equal IMPACT NOW. What outcomes are really happening RIGHT NOW (Eric)
  • Ditch the “just in case” data (Beth)
  • We need to redefine capacity (audience)
  • How do we create access to and use all the data (big data) being produced out of all the M&E happening in the sector (Nathaniel James at Philanthrogeek)

But I want to pick out a few themes that were emerging for me as I listened. These were not the themes of the terrific panelists — but I’d sure like to know what they have to say about them.

A Portfolio Mindset on Monitoring and Evaluation

There were a number of threads about the impact of funders and their monitoring and evaluation (M&E) expectations. Beyond the challenge of what a funder does or doesn’t understand about M&E, they clearly need to think beyond evaluation at the individual grant or project level. This suggests making sense across data from multiple grantees –> something I have not seen a lot of from funders. I am reminded of the significant difference between managing a project and managing a portfolio of projects (learned from my clients at the Project Management Institute. Yeah, you Doc!) IF I understand correctly, portfolio project management is about the business case –> the impacts (in NGO language), not the operational management issues. Here is the Wikipedia definition:

Project Portfolio Management (PPM) is the centralized management of processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage a group of current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals ― while honouring constraints imposed by customers, strategic objectives, or external real-world factors.

There is a little bell ringing in my head that there is an important distinction between how we do project M&E — which is often process heavy and too short term to look at impact in a complex environment — and being able to look strategically at our M&E across our projects. This is where we use the “fail forward” opportunities, the iterating towards improvements AND investing in a longer view of how we measure the change we hope to see in the world. I can’t quite articulate it. Maybe one of you has your finger on this pulse and can pull out more clarity. But the bell is ringing and I didn’t want to ignore it.

This idea also rubs up against something Eric said which I both internally applauded and recoiled from. It was something along the lines of “if you can’t prove you are creating impact, no one should fund you.” I love the accountability. I worry about how to actually do this meaningfully in a) very complex non profit and international development contexts, and b) for the next reason…

Who Owns Measurement and Data?

Chart from Effective Philanthropy 2/2013

There is a very challenging paradigm in non profits and NGOs — the “helping syndrome”: the idea that we who “have” know what the “have nots” need or want. This model has failed over and over again and yet we still do it. I worry that this applies to M&E as well. So first of all, any effort towards transparency (including owning and learning from failures) is stellar. I love what I see, for example, on Splash.org, particularly their Proving.it technology. (In the run up to the event, Paul Shoemaker pointed to this article on the disconnect in information needs between funders and grantees.) Mostly I hear about the disconnect between funders’ information needs and those of the NPOs. But what about the stakeholders’ information needs and interests?

Some of the projects I’m learning from in agriculture (mostly in Africa and SE/S Asia) are looking towards finding the right mix of grant funding, public (government and international) investment and local ownership (vs. an extractive model). Some of the more common examples are marketing networks for farmers to get the best prices for their crops, lending clubs and using local entrepreneurs to fill new business niches associated with basics such as water, food, housing, etc. The key is the ownership at the level of stakeholders/people being served/impacted/etc. (I’m trying to avoid the word users as it has so many unintended other meanings for me!)

So if we are including these folks as drivers of the work, are they also the drivers of M&E and, in the end, the “owners” of the data produced? This is important not only because for years we have measured stakeholders and rarely been accountable for sharing that data, or actually USING it productively, but also because change is often motivated by being able to measure change and see improvement. 10 more kids got clean water in our neighborhood this week. 52 wells are now being regularly serviced and local business people are increasing their livelihoods by fulfilling those service contracts. The data is part of the on-the-ground workings of a project, not a retrospective to be shoveled into YARTNR (yet another report that no one reads).

In working with communities of practice, M&E is a form of community learning. In working with scouts, badges are incentives, learning measures and just plain fun. The ownership is not just at the sponsor level. It is embedded with those most intimately involved in the work.

So stepping back to Eric’s staunch support of accountability, I say yes, AND full ownership of that accountability should sit with all involved, not just the NGO/NPO/Funder.

The Unintended Consequences of How We Measure

The question of who owns M&E and the resulting data brings me back to the complexity lens. I’m a fan of the Cynefin Framework to help me suss out where I am working – in the simple, complicated, complex or chaotic domains. Using the framework may be a good diagnostic for M&E efforts, because when we are working in a complex domain, predicting cause and effect may not be possible (now, or into the future). If we expect M&E to determine whether we are having impact, this implies we can predict cause and effect and focus our efforts there. But things such as local context may mean that everything won’t play out the same way everywhere. What we are measuring may end up having unintended negative consequences (this HAS happened!). Learning from failures is one useful intervention, but I sense we have a lot more to learn here. Some of the threads about big data yesterday related to this — again, a portfolio mentality looking across projects and data sets (calling Nathaniel James). We need to do more of the iterative monitoring until we know what we SHOULD be measuring. I’m getting out of my depth again here (Help! Patricia Rogers! Dave Snowden!). The point is, there is a risk of being simplistic in our M&E and a risk of missing unintended consequences. I think that is one reason I enjoyed the panel so much yesterday, as you could see the wheels turning in people’s heads as they listened to each other! 🙂

Arghhh, so much to think about and consider. Delicious possibilities…

 Wednesday Edit: See this interesting article on causal chains… so much to learn about M&E! I think it reflects something Eric said (which is not captured above) about measuring what really happens NOW, not just this presumption of “we touched one person therefore it transformed their life!!”

Second edit: Here is a link with some questions about who owns the data… may be related http://www.downes.ca/cgi-bin/page.cgi?post=59975

Third edit: An interesting article on participation with some comments on data and evaluation http://philanthropy.blogspot.com/2013/02/the-people-affected-by-problem-have-to.html

Fourth Edit (I keep finding cool stuff)

The public health project is part of a larger pilgrimage by Harvard scholars to study the Kumbh Mela. You can follow their progress on Twitter, using the hashtag #HarvardKumbh.

 

3 responses so far


Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States.