Archive for the 'evaluation' Category

Dec 02 2013

An Interview With Aaron Leonard on Online Communities

I had a chance to interview Aaron Leonard late last September, just before he took a leave from his online community management work at the World Bank, to talk about that work. This is part of a client project I’m working on to evaluate a regional collaboration pattern and to start understanding processes for more strategic design, implementation and evaluation of collaboration platforms, particularly in the international development context.

Aaron’s World Bank blog is http://www.southsouth.info/profiles/blog/list?user=1uxarewp1npnk

How long have you been working on “this online community/networks stuff” at the Bank?  How did your team’s practice emerge?

I’ve been at the Bank 4 years and working on the communities of practice (CoP) front for 3 of those. I started as a community manager building a CoP for external/non-Bank people focused on South-South exchange. Throughout this process, I struggled with navigating the World Bank rules governing these types of “social websites”. At the time, there were no actual rules in place – they were under formulation. So what you could or could not do, use, or pay for depended on who you talked to. I started working with other community managers to find answers to these questions, along with getting tips and tricks on how to engage members, build HTML widgets, etc. I realized that my background working with networks (pre-Bank experience) and my experience launching an online community of practice within the Bank were useful to others. As more and more people joined our discussions, we started formalizing our conversation (scheduling meetings in advance, setting agendas, etc. – but not too formal :).

We were eventually able to make ourselves useful enough that I applied for and received a small budget to bring in some outside help from a very capable firm called Root Change and to hire a brilliant guy, Kurt Morriesen, to help us develop a few tools for community managers and project teams and to help them think through their work with networks. We started with 15 groups – mostly within WBI, but some from the regions as well. All were asking and needing answers to some common questions: “How do we get what we want out of our network? How do we measure and communicate our success? How do we set up a secretariat and good governance structure?” This line of questioning seemed wrong in many ways. It represented a “management mindset” (credit Evan Bloom!) versus a “network mindset”. The project teams were trying to get their membership to do work that fit their programmatic goals versus seeing the membership as the goal and working out a common direction for members to own and act on themselves. We started asking instead: “Why are you engaging?” “Who really are you trying to work with?” “What do you hope to get out of this engagement?” “What value does this network provide its members?” This exercise was really eye opening for all of us and eventually blossomed into an actual program. I brought in Ese Emerhi last year as a full time team member. She has an amazing background as a digital activist, and knows more than I do about how to make communities really work well.

Ese and I set up a work program around CoPs and built it into a practice area for the World Bank Institute (WBI) together with program community managers like Kurt, Norma Garza (Open Contracting), and Raphael Shepard (GYAC) among others. With Ese on board, we were able to expand beyond WBI (to the World Bank in general). This was possible in part because our team works on knowledge exchange, South-South knowledge exchange specifically (SSKE). We help project teams in the World Bank design and deliver effective knowledge exchange. CoPs are a growing part of this business, in part because the technology to connect people in a meaningful conversation is getting better, and in part because we know how to coach people on when and how to use communities.

How did you approach the community building?

With Root Change, we started with basic stocktaking and crowdsourcing to define an agenda for ourselves. We had 4-5 months for this activity. We settled on a couple of things.

  1. Looking at different governance arrangements. How do we structure the networks?

  2. Identifying tools or instruments to use in designing and planning more effective networks.

We noticed that we were talking more about networks than communities. Some were blends of CoPs, coalitions, and broader programs. The goals aren’t always just the members’. So we talked about the differences between these things, and how they can be thought of along a spectrum of commitment or formality – a social network vs. an association, and how they are and are not similar beasts.

We gave assignments to project teams and met on a monthly basis to work with these instruments. At the impetus of the consultants at Root Change, we started doing one-to-one consultations with teams. We reserved a room, brought in cookies and coffee, and then brought the teams in for 90 minutes each of free consulting sessions. These were almost more useful for the teams than the project work. Instead of exploring the tools, they were APPLYING the tools themselves. It was also a matter of taking the time to focus, sit down and be intentional in their work with their networks. Just shut the door and collectively think about what it was they were trying to do. A lot of this started out in a more organic way around what was thought to be an easy win: “We’ll start a CoP, get a website, get 1000 people to sign up” – without understanding what it meant for membership, resourcing, team, commitment and longer term goals and objectives.

We helped them peel back some of the layers of the onion to better understand what they were trying to do. We didn’t get as far as we wanted. We wanted to get into measurement and evaluation and social network analysis, but that was a little advanced for these teams and their stage of development. They did not have someone they could rely on to do this work. Some had a community manager, but most of these were short term consultants, for 150 days or less, and often really junior people who saw the job as an entry level gig. They were often more interested in the subject matter than in being community managers. They tended to get pulled in different directions and may or may not have liked the work. They tended to be hired right out of an International Development masters program with a thematic bent, so they were usually interested in the projects, versus organizing 1000 people and lending some sense of community. Different skill sets!

We worked with these teams and came up with a few ideas. Root Change wrote a small report which helped justify a budget for the subsequent fiscal year, and my boss let me hire someone who would have community building as part of their job: half their time on the Art of Knowledge Exchange toolkit we were developing together, and the other half on community. At this point we opened up our offering to the World Bank Group to help people start a CoP and understand how to work with membership, engage, measure, and report on it. We helped them figure out how they could use data and make sense of their community’s story. We brought in a few speakers and did social things to profile community managers. Over the course of the year we had talked to and worked with over 300 people. (Aaron reports they have exact numbers, but I did not succeed in connecting with him before he left to get those numbers!) We did 100 one-on-one counseling sessions. We reached very broadly across the institution and increased awareness of the skill set we have in WBI regarding communities and networks. We helped people see that this is a different way of working. Our work coincided with the build up of the Bank’s internal community platform based on Jive (originally called Scoop and now called Sparks – a collaboration-for-development and CoP oriented platform). The technology was getting really easy for people to access. There was more talk about knowledge work, about being able to connect clients, and awareness of what had been working well on the South-South platform.

We did a good job and that gave us the support for another round of budget this year. Now we have been able to shift some of the conversation to the convening and brokering role of the Bank. This coincided with the Bank’s decreased emphasis on lending and increased emphasis on access to experts, which complemented the direction we were going in. We reached out and have become a reference point for a lot of this work. There have been parallel institutional efforts that flare and fade, flare and fade. But it is difficult to move “the machine.” It can even be a painful process to witness. I admire the people doing this, but (the top down institutional change process) was something we tried to avoid. We did our work on the side, supporting people’s efforts where possible. Those things are finally bearing fruit. We have content. They have a management system. We have a process for teams to open a new CoP space, and a way for community leaders to find what is available to them. They have a community finder associated with an expert finder. These are great things to have and invest in, but it is not where we were aiming. We want to know the community leaders – the people like Ese, like Norma Garza – running these communities, who struggle and have new ideas to share. What are the ways to navigate the institutional bureaucracy that governs our use of social media tools? How do you find good people to bring on board? You can’t just hire the next new grad and expect it to last. There is an actual skill set – unique, not always well defined, but getting more recognition as something that is of value and unique to building a successful CoP. There is new literature out there and people like Richard Millington (FeverBee) – a kid genius doing this since he was 13. He takes ideas from people like you, Wenger and Denning. There is now more of a practice around this.

While the Bank is still not super intentional about how it works internally with respect to knowledge and process, more attention is being paid and more people are being brought in. It can be a touch and go effort. We’re just a small piece, but we are feeling a much needed demand and our numbers prove that. We have monthly workshops (sometimes two a month) that are promoted through a learning registration system, and we sell the spaces out within minutes. People are stalking our stuff. It is exciting. At the same time, while it felt like the process of expansion touched a lot of people and convinced/shaped the dialog, I also feel we lost touch with the Normas. Relationships changed. We were supporting them by profiling them, helping them communicate to their bosses so the bosses understood their work, but not directly supporting them with new ideas, techniques, approaches.

We reassessed at the end of last year. We want to focus on building an actual community again. We started that but lost it last year while busy pushing outwards. But we still kept them close and we can rely on each other. It has not been the intimate setting of 15-20, or 3 of us doing this work, sitting around and talking about what we are struggling with – like “How did you do your web setup? How do you do a Twitter Jam?” So our goals this year are a combination. Management likes that we hit so many people last year. They have been pretty hands off and we can set our own pace. Because we did well last year, they have given us that room, that trust.

So now we want to focus more on championing the higher level community managers. The idea is to take a twofold approach. First, we want to use technology to reach out, using our internal online space to communicate and form a more active online community. Second, we want to focus a few of our offerings on these higher level community managers, with the idea that if we can give them things to help with the deeper challenges of their job, they will be able to help us field the more general, introductory requests – the “Can you review my concept note?” and “Can you help me set up my technology?” kind of requests.

It is still just the two of us. We are grooming another person, but working with the more senior community managers will also allow us to handle more requests by relying on their experience. We give them training and in return they help with basic requests. This is not a mandate. We don’t have to do this. It is what we see as a way of building a holistic and sustainable community within the Bank to meet the needs of community managers and people who use networks to deliver products and services with their clients.

How do you set strategic intentions when setting up a platform?

One of the things I love most when advising about CoPs is telling people not to do it. I love being able to say this. The incentives are wrong, the purpose is wrong. So many people think CoPs are something that is “on the checklist,” a magic bullet, or a sexy tech solution. Whatever it is, those purposes are wrong. They are thinking about the tech and not the people they are engaging. If you want to build a fence, you don’t just go buy a hammer and be done with it. You need to actually plan it out, think about why you are building it, why it’s going in, how high… bad analogy. Too often CoPs are done for all the wrong reasons. The whole intent around involving people in a conversation is lost, not even considered, or simply an afterthought. The fallacy of “build it and they will come.” One of my favorite pieces is from the guy who wrote the 10 things about how to increase engagement on your blog. It speaks to the general advice of understanding who you are targeting. Anyone can build a blog, set up a cool website or space. But can you build community? The actual dialog or conversation? How do you do that?

One key is reaching people where they already are – one of the best pieces of advice I’ve heard, and I always pass it on. Don’t build the fancy million dollar custom website if no one is really going to go there. One of the things I have is a little speech for people. Here’s my analogy. If you are going to throw a party, you have to think about who you are going to invite, where to do it, what to feed them, the music: you are hosting that party. You can’t just leave it up to them. They might trash your place, not get on board, never even open the door. You have to manage the crowd and facilitate the conversation unless they already know each other. And why are you throwing the party if they already get together in another space?

Coming from the NGO world and then coming to the Bank, I saw how easy it is to waste development dollars. It is frustrating. I have spoken openly about this. The amount of money wasted on fancy websites that no one uses is sad. There are a lot of great design firms that help you waste that money. It is an easy thing for someone to take credit for a website once it launches. It looks good, someone makes a few clicks, then no one asks to look at it again. The boss looks at it once and that is it. No one thinks about or sees the long term investment. They see it as a short term win.

One of the things I try to communicate is to ask: if you are going to invest in a platform, do you really want to hear back from the people you are pushing info to? If not, build a simple website. If you do want to engage with that community, to what extent and for what purpose? How will you use what you learn to inform your product or work? If you can’t answer that, go back to the first question. If they actually have a plan – and their mandate is to “share knowledge” – how do they anticipate sharing knowledge? They often tell me a long laundry list of target audiences. So you are targeting the world? This is the conversation I’ve experienced, with no clear, direct targeting, or understanding of who specifically they are trying to connect with. We suggest they focus on one user group. Name real names. If you can’t name an individual, write out a description. Talk about their fears, desires, challenges, and work environment. Really understand them in their daily work life. Then think about how this proposed platform/experience/community really adds value, and in what specific way. It is not just about knowledge sharing. People can Google for information. You are competing with Google, email, Facebook, their boss, their partner. That’s your competition. How do you beat all those for attention? That is what you are competing with when someone sits down at the computer. This is the conversation we like to walk people through before they start. The hard part is that a lot of these people are younger or temporary staff hired to do this. It is hard for them to go back to the boss and say “we don’t know what we are doing” and possibly lose their jobs. There can be an inherent conflict of interest.

How do you monitor and evaluate the platforms? What indicators do you use? How are they useful?

One of the things we don’t do – and this might be a sticking point – is actually run or manage any of these communities. We just advise teams. I haven’t run one for 2 years. Ese has her own community outside the Bank, but inside there is nothing we personally run besides the community managers’ community, and that has been mainly a repository.

We have built some templates for starting up communities, especially for online networks with external or mixed external and internal audiences. We have online metrics (# posts, pageviews, etc.) and survey data that we use to tell the story of a community. Often the target of those metrics is the managers who had the decision making role in that community. We try to intentionally communicate the value the community gives to members and to a program. We have developed some more sophisticated tools with Root Change, but we didn’t get enough people to use them. Perhaps they are too sophisticated for the current stage of community development. And we can’t force people to use them.
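The kind of story-telling metrics Aaron describes need very little machinery. Here is a minimal sketch in Python, assuming you can export an activity log from the platform; the event schema, member names, and action labels are all invented for illustration, not the Bank’s actual data:

```python
from collections import Counter
from datetime import date

# Hypothetical activity log: (member, action, date) tuples.
events = [
    ("amina", "post", date(2013, 9, 2)),
    ("amina", "reply", date(2013, 9, 5)),
    ("joao", "pageview", date(2013, 9, 5)),
    ("joao", "reply", date(2013, 9, 8)),
    ("li", "pageview", date(2013, 9, 9)),
]

def community_snapshot(events):
    """Roll raw events up into the handful of numbers used to tell the story."""
    actions = Counter(action for _, action, _ in events)
    contributors = {m for m, action, _ in events if action in ("post", "reply")}
    lurkers = {m for m, _, _ in events} - contributors
    return {
        "posts": actions["post"] + actions["reply"],
        "pageviews": actions["pageview"],
        "contributors": len(contributors),
        "lurkers": len(lurkers),
    }

print(community_snapshot(events))
# 3 posts/replies, 2 pageviews, 2 contributors, 1 member who only reads
```

The contributor/lurker split is the kind of number that speaks to the managers who hold the decision making role, since it shows whether a community is a conversation or a broadcast.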

It would be fantastic to have a common rubric, but we don’t have the energy or will to drive those decisions. We are still in the early “toddler” stage. Common measurement approaches and quality indicators are far down the line. Same with social network analysis. Root Change has really pushed the envelope in that area, but we aren’t advanced enough to benefit from that level of analysis. The Root Change tool is fun to play around with and provides a way of communicating complex systems to community owners and members. What Root Change has done is develop an online social network analysis platform that can continuously be updated by members and grow over time. Unlike most SNA, which is a snapshot, this is more organic: it builds on an initial survey sent to the initial group, who forward it on to their networks.
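For readers curious what even a snapshot-style SNA involves: one of the most basic measures, degree centrality (the share of other members each person is directly tied to), can be computed from survey answers in a few lines. This is a generic sketch with invented names and ties, not Root Change’s platform:

```python
# Toy "who do you go to for advice?" survey ties; all names are invented.
ties = [("ana", "ben"), ("ana", "carla"), ("ben", "carla"), ("carla", "dev")]

def degree_centrality(ties):
    """Fraction of the other members each person is directly connected to."""
    neighbors = {}
    for a, b in ties:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    n = len(neighbors)  # total members seen in the survey
    return {member: len(links) / (n - 1) for member, links in neighbors.items()}

print(degree_centrality(ties))
# carla, tied to all three others, scores 1.0; dev, with one tie, scores ~0.33
```

Even this crude measure can surface the hubs a community depends on; the continuously updated, member-forwarded survey Aaron describes is what turns such a snapshot into something organic.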

If you had a magic wand, what are three things you’d want every time you have to implement a collaboration platform?

If I had a magic wand and I could actually DO it, I would first eliminate email. Part of the reason – the main reason – we can’t get people to collaborate is that they aren’t familiar with working in a new way. I think of my cousins who are 10 years younger: they don’t use email. They use Facebook. They are dialoging in a different way. They use Facebook’s private messaging, Twitter, and WhatsApp. They use a combination of things that are a lot more direct. They keep a running stream of IM messages open. Right now email is the reigning champion in the Bank, and if we have any hope of getting people to work differently and collaboratively we have to first get rid of email.

Next, to implement any kind of project or activity in a collaboration space right, I’d want a really simple user interface, something so intuitive that it needs no explanation.

Thirdly, I’d want that thing available where those people are, whether on their cell phone, iPad, or any touchable, interactive interface. Here you have to sit at your computer. We don’t even get laptops. You have to sit at a desk to engage in the online space – it’s hard to do through your phone. People still bring paper and pencil to meetings. More are bringing iPads, but they are still a minority. A while back I did a study tour to IDEO. They have an internal Facebook-like system called The Tube, which shares project updates, findings, and all their internal communications. No one was using it at the beginning. One of the smartest things they did was install – in 50 different offices – a big flat screen at each entrance, which randomly displays the latest status updates pulled from The Tube from across their global team. Once they did that, the rate of people updating their profiles and using it as a way of communicating jumped to something like a 99% adoption rate in a short time. From a small minority to a vast majority. No one wanted to be seen with a project status update from many months past. It put a little social pressure in the common areas and entrance way – right in front of your bosses and teammates. It was an added incentive to use that space.

You want something simple, something that replaces traditional communications, and something with a strong, and present, incentive. When you think about building knowledge sharing into your review – how do you really measure that? You can use point systems, all sorts of ways to identify champions. Yelp does a great job at encouraging champions. I have talked to one of their community managers. They have a smart approach to building and engaging community. They incentivize people through special offerings, such as first openings of new restaurants, that they can organize. They get reviews out of that. That’s their business model.

We don’t really have a digital culture now. If we want to engage digitally and globally, we have to be more agile with how we use communication technology and where we use it. The Tube screens in front of the urinals and stall doors – you’ve got a minute or two to look at something. That’s the way!

 


Sep 09 2013

How do we evaluate the strategic use of collaboration platforms?

Hey smart people, especially my KM and collaboration peeps, I need your help!

I’ve been trawling around to find examples of monitoring and assessment rubrics to evaluate how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changed or deleted functionality?
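On the adoption-and-use-patterns question, one simple starting point — assuming you can export login or activity records from the platform — is a distinct-users-per-month count, where a falling trend is an early flag that tweaks are needed. A minimal sketch with made-up records:

```python
from collections import defaultdict
from datetime import date

# Hypothetical login records exported from a collaboration platform.
logins = [
    ("amina", date(2013, 6, 3)), ("joao", date(2013, 6, 10)),
    ("amina", date(2013, 7, 1)), ("li", date(2013, 7, 15)),
    ("amina", date(2013, 8, 2)),
]

def monthly_active_users(logins):
    """Count distinct users per (year, month) -- one basic adoption indicator."""
    by_month = defaultdict(set)
    for user, day in logins:
        by_month[(day.year, day.month)].add(user)
    return {month: len(users) for month, users in sorted(by_month.items())}

print(monthly_active_users(logins))
# {(2013, 6): 2, (2013, 7): 2, (2013, 8): 1}
```

Counting distinct people rather than raw logins matters here: one enthusiast can inflate a raw activity count while the actual reach of the platform shrinks.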

I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.

But come on, why are there so few things that help us understand whether our existing platforms and tool configurations are or are not working?

Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!

  • How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
  • What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
  • Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
  • How often do you do it?
  • How do we involve users in assessments?
  • How have the results prompted changes (or not and if not, why not)?

Please, share this widely!

THANKS!


Jul 17 2013

BetterEvaluation: 8 Tips for Good Evaluation Questions

From BetterEvaluation.org’s great weekly blog comes a post that has value for facilitators, not just evaluators! Week 28: Framing an evaluation: the importance of asking the right questions.

First let me share the tips and the examples from the article (you’ll need to read the whole article for full context), and then in blue I’ll add my facilitator contextual comments!

Eight tips for good evaluation questions:

  1. Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions but these should be directly relevant for answering the main question under which they fall. When facilitating, think of each question as a stepping stone along a path that may or may not diverge. Questions in a fluid interaction need to reflect the emerging context. So plan, but plan to improvise the next question.

  2. Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements. Same in facilitation!

  3. Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions to get more at the program’s contributions to peace and stability compared to longer-term economic development goals. Ditto! I have to be careful not to keep asking questions for my OWN interest!

  4. Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national). YES! I need to learn this one better. I always have too many. 

  5. Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge that they did not have sufficient data to fully answer certain questions in their report. This is more subtle in facilitation as we have the opportunity to try and surface/tease out answers that may not be clear to anyone at the start. 

  6. Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation and that the evaluation would serve more to validate/tweak this design rather than truly shape it from scratch. The team thus tailored their questions to get more at peace, security, and governance issues given the focus on the follow-on activity. AMEN! YES!

  7. Don’t use questions which contain two or more questions in one. See for example question #6 in the attached—“out of the different types of infrastructure projects supported (solar dyers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to answer which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and number of beneficiaries reached. Thus, while box culverts (small bridge) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao were minimal. Same for facilitation. Keep it simple!

  8. Use questions which focus on what was achieved, how and to what extent, and not simple yes/no questions. In the GEM example, simply asking if an activity had or had not met its intended targets was much less informative than asking how those targets were set, whether those targets were appropriate, and how progress towards meeting those targets were tracked. Agree on avoiding simple yes/no unless of course, it is deciding if it is time to go to lunch. 

I’m currently pulling together some materials on evaluating communities of practice, and I think this list will be a useful addition. I hope to be posting more on that soon.

By the way, BetterEvaluation.org is a great resource. Full disclosure, I’ve been providing some advice on the community aspects! But I’m really proud of what Patricia Rogers and her amazing team have done.


Feb 12 2013

Data, Transparency & Impact Panel –> a portfolio mindset?

Yesterday I was grateful to attend a panel presentation by Beth Kanter (Packard Foundation Fellow), Paul Shoemaker (Social Venture Partners), Jane Meseck (Microsoft Giving) and Eric Stowe (Splash.org), moderated by Erica Mills (Claxon). First of all, from a confessed short attention spanner, the hour went FAST. Erica tossed great questions for the first half, then the audience added theirs in the second half. As usual, Beth got a Storify of the Tweets and a blog post up before we could blink. (Uncurated Tweets here.)

There was much good basic insight on monitoring for nonprofits and NGOs. Some of my favorite soundbites include:

  • What is your impact model? (Paul Shoemaker I think. I need to learn more about impact models)
  • Are you measuring to prove, or to improve (Beth Kanter)
  • Evaluation as a comparative practice (I think that was Beth)
  • Benchmark across your organization (I think Eric)
  • Transparency = Failing Out Loud (Eric)
  • “Joyful Funeral” to learn from and stop doing things that didn’t work out (from Mom’s Rising via Beth)
  • Mission statement does not equal IMPACT NOW. What outcomes are really happening RIGHT NOW (Eric)
  • Ditch the “just in case” data (Beth)
  • We need to redefine capacity (audience)
  • How do we create access to and use all the data (big data) being produced out of all the M&E happening in the sector (Nathaniel James at Philanthrogeek)

But I want to pick out a few themes that were emerging for me as I listened. These were not the themes of the terrific panelists — but I’d sure wonder what they have to say about them.

A Portfolio Mindset on Monitoring and Evaluation

There were a number of threads about the impact of funders and their monitoring and evaluation (M&E) expectations. Beyond the challenge of what a funder does or doesn’t understand about M&E, they clearly need to think beyond evaluation at the individual grant or project level. This suggests making sense of data across multiple grantees –> something I have not seen a lot of from funders. I am reminded of the significant difference between managing a project and managing a portfolio of projects (learned from my clients at the Project Management Institute. Yeah, you Doc!) If I understand correctly, portfolio project management is about the business case –> the impacts (in NGO language), not the operational management issues. Here is the Wikipedia definition:

Project Portfolio Management (PPM) is the centralized management of processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage a group of current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals ― while honouring constraints imposed by customers, strategic objectives, or external real-world factors.

There is a little bell ringing in my head that there is an important distinction between how we do project M&E — which is often process heavy and too short term to look at impact in a complex environment — and being able to look strategically at our M&E across our projects. This is where we use the “fail forward” opportunities, the iterating towards improvements AND investing in a longer view of how we measure the change we hope to see in the world. I can’t quite articulate it. Maybe one of you has your finger on this pulse and can pull out more clarity. But the bell is ringing and I didn’t want to ignore it.
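To make that portfolio idea a bit more concrete, here is a minimal sketch, in Python, of what “making sense across data from multiple grantees” could look like in data terms: rolling per-grant indicators up to a portfolio-level view while tracking how many grantees actually report each indicator. The grantee names and values are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical per-grant M&E records: (grantee, indicator, value).
# In practice these would come from many separate grant reports.
records = [
    ("grantee_a", "wells_serviced", 52),
    ("grantee_a", "people_reached", 1200),
    ("grantee_b", "wells_serviced", 17),
    ("grantee_b", "people_reached", 430),
    ("grantee_c", "people_reached", 310),
]

def portfolio_rollup(records):
    """Aggregate grant-level indicators to the portfolio level,
    noting how many grantees report each indicator (a gap signal)."""
    totals = defaultdict(int)
    reporters = defaultdict(set)
    for grantee, indicator, value in records:
        totals[indicator] += value
        reporters[indicator].add(grantee)
    return {ind: {"total": totals[ind],
                  "grantees_reporting": len(reporters[ind])}
            for ind in totals}

summary = portfolio_rollup(records)
print(summary["wells_serviced"])  # {'total': 69, 'grantees_reporting': 2}
```

The “grantees_reporting” count is the portfolio-level piece: it surfaces which indicators are measured consistently across a portfolio and which only a few grantees track, which is exactly the kind of question a single project evaluation never asks.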

This idea also rubs up against something Eric said which I both internally applauded and recoiled from. It was something along the lines of “if you can’t prove you are creating impact, no one should fund you.” I love the accountability. I worry about how to actually do this meaningfully in very complex nonprofit and international development contexts, and for the next reason…

Who Owns Measurement and Data?

Chart from Effective Philanthropy 2/2013


There is a very challenging paradigm in nonprofits and NGOs: the “helping syndrome,” the idea that we who “have” know what the “have nots” need or want. This model has failed over and over again and yet we still do it. I worry that this applies to M&E as well. So first of all, any efforts towards transparency (including owning and learning from failures) are stellar. I love what I see, for example, on Splash.org, particularly their Proving.it technology. (In the run-up to the event, Paul Shoemaker pointed to this article on the disconnect in information needs between funders and grantees.) Mostly I hear about the disconnect between funders’ information needs and those of the NPOs. But what about the stakeholders’ information needs and interests?

Some of the projects I’m learning from in agriculture (mostly in Africa and SE/S Asia) are looking towards finding the right mix of grant funding, public (government and international) investment and local ownership (vs. an extractive model). Some of the more common examples are marketing networks for farmers to get the best prices for their crops, lending clubs and using local entrepreneurs to fill new business niches associated with basics such as water, food, housing, etc. The key is the ownership at the level of stakeholders/people being served/impacted/etc. (I’m trying to avoid the word users as it has so many unintended other meanings for me!)

So if we are including these folks as drivers of the work, are they also the drivers of M&E and, in the end, the “owners” of the data produced? This is important not only because for years we have measured stakeholders and rarely been accountable to share that data, or actually USE it productively, but also because change is often motivated by being able to measure change and see improvement. 10 more kids got clean water in our neighborhood this week. 52 wells are now being regularly serviced and local business people are increasing their livelihoods by fulfilling those service contracts. The data is part of the on-the-ground workings of a project, not a retrospective to be shoveled into YARTNR (yet another report that no one reads).

In working with communities of practice, M&E is a form of community learning. In working with scouts, badges are incentives, learning measures and just plain fun. The ownership is not just at the sponsor level. It is embedded with those most intimately involved in the work.

So stepping back to Eric’s staunch support of accountability, I say yes AND the full ownership of that accountability with all involved, not just the NGO/NPO/Funder.

The Unintended Consequences of How We Measure

Ownership of M&E and the resulting data brings me back to the complexity lens. I’m a fan of the Cynefin Framework to help me suss out where I am working – simple, complicated, complex or chaotic domains. Using the framework may be a good diagnostic for M&E efforts because when we are working in a complex domain, predicting cause and effect may not be possible (now, or into the future). If we expect M&E to determine if we are having impact, this implies we can predict cause and effect and focus our efforts there. But things such as local context may mean that everything won’t play out the same way everywhere. What we are measuring may end up having unintended negative consequences (this HAS happened!). Learning from failures is one useful intervention, but I sense we have a lot more to learn here. Some of the threads about big data yesterday related to this – again, a portfolio mentality looking across projects and data sets (calling Nathaniel James!). We need to do more iterative monitoring until we know what we SHOULD be measuring. I’m getting out of my depth again here (Help! Patricia Rogers! Dave Snowden!). The point is, there is a risk of being simplistic in our M&E and a risk of missing unintended consequences. I think that is one reason I enjoyed the panel so much yesterday, as you could see the wheels turning in people’s heads as they listened to each other! :-)

Arghhh, so much to think about and consider. Delicious possibilities…

 Wednesday Edit: See this interesting article on causal chains… so much to learn about M&E! I think it reflects something Eric said (which is not captured above) about measuring what really happens NOW, not just this presumption of “we touched one person therefore it transformed their life!!”

Second edit: Here is a link with some questions about who owns the data… may be related http://www.downes.ca/cgi-bin/page.cgi?post=59975

Third edit: An interesting article on participation with some comments on data and evaluation http://philanthropy.blogspot.com/2013/02/the-people-affected-by-problem-have-to.html

Fourth Edit (I keep finding cool stuff)

The public health project is part of a larger pilgrimage by Harvard scholars to study the Kumbh Mela. You can follow their progress on Twitter, using the hashtag #HarvardKumbh.

 


Jan 09 2013

Looking Back on the Project Community Course

Long Post Warning!

I was reminded by a post from Alan Levine reflecting on a course he taught this past autumn (Looking Back on ds106 – CogDogBlog) that I had promised a reflective post on the Project Community course I co-taught September to November at The Hague University of Applied Sciences with Maarten Thissen, Janneke Sluijs, Shahab Zehtabchi, Laura Stevens and technology stewardship by Alan himself. It is easy to let the time pass, but all those ideas and observations tend to fade away. So after a few bites of fine holiday chocolates, it is time to dive in. (This will be cross-posted on my course Tumblr blog, which feeds into the overall course site.)

What was it?

Course Goal: Here is the text from the course description:

The intersection of technology and social processes has changed what it means to “be together.” No longer confined to an engineering team, a company, a market segment or country, we have the opportunity to tap into different groups of people using online tools and processes. While we initially recognized this as “online communities,” the ubiquity and diversity of technology and access has widened our possibilities. When we want to “organize our passion” into something, we have interesting choices.  It is time to think about a more diverse ecosystem of interaction possibilities which embrace things such as different group configurations, online + offline, short and long term interactions, etc. In this course we will consider the range of options that can be utilized in the design, testing, marketing and use of engineering products.

My shorthand is that the course was an exploration of how online communities and networks can be part of a designer’s practice. When and how can these forms be of strategic use? You can review the whole syllabus here – and note that we tweaked it as we went! The students were all international students and this was one of their first courses in the Design Engineering Program. Some did not have strong English language skills, and the course was in English.

The Design: Let me start by saying this was designed as an OPEN experience, but it wasn’t a MOOC or anything like that. Maarten had asked me to design the course, building on a set of learning goals previously used for this course, but to translate the ideas into practice by DOING much of the course online. While the class met F2F once a week and had access to the Netherlands based faculty, we engaged, worked and explored together online. This stuff needs more than theory. It requires practice. And by practicing and learning “in public” rather than on an institutionally protected platform, students could tap into real communities and networks. If there is one thing I harp on when I talk to folks in Universities, it is the critical importance of learners connecting with real communities and networks of practitioners in their fields of learning BEFORE they leave school. These connections are fundamental to both learning and developing one’s practice out in the world.

I also wanted to focus on some sector to help us think practically about using networks and communities along the design process and avoid grand generalizations, so I suggested we use design in the international development context. This fit with my background, network (to draw upon) and experience. I was leery of stepping into the more distinct world of commercial product design, about which I know NOTHING! What quickly became a huge lesson for me was that many of the students had little knowledge about international development, the Millennium Development Goals, etc. So we all had a lot to learn!

The other aspect of the design was to bring three elements together: sense making discussions about the subject matter (synchronously in class and asynchronously on the class website), insights from weekly “guests” shared via 5-10 minute videos (to bring a variety of voices), and action learning through small group experiences and team projects. I know there are strong feelings about team projects, but building collaboration skills was part of the course learning objectives, so this was a “must do.” And we spent time talking about the how – and reflecting on what was and wasn’t working – as a vector for learning these skills.

The Resources

We knew we wanted real examples, a variety of sources, and we wanted multimedia. Many of the students were speaking English (the class language) as a second, third or fourth language, so the use of visually rich media was important. What we did not count on was the lack of time to USE the resources. ;-) A typical pitfall!

  • Readings and examples. We collected a wide range of resources in a Google doc – more than we could ever use. We then picked a few each week as assigned readings, but it became clear that most people did not make time to read all of them. So when I felt something was particularly important, I harped on it and the on-the-ground team asked people to read it during the weekly class meeting. We used the examples in a more ad-hoc manner as teams began to develop their projects.
  • Videos – from faculty and guests. For example, here is my introductory video, and the other guest videos can be seen in each weekly update. All the interviews I did (via Google Hangout) can be found here. The students’ final project videos are here. I have not done an analysis of the number of views per video, but since they are public, I can’t sort out student vs. external views. That said, some of the videos have fewer views than the number of enrolled students. Go figure!
  • Visitors – I had hoped to bring people in live, but we quickly discerned that the tech infrastructure for our online/F2F hybrid meetings was not good enough, so we brought people in via recorded videos and encouraged students to ask the guests questions on the guests’ own blogs and websites. There was just a wee bit of that…

Technology stuff…

The Course WordPress site: It is online, so of course there is technology. There was no appropriate platform available from the hosting university (we did not consider Blackboard appropriate because it was not open enough, and we did not have the programming resources to really customize it). So I called my pals who know a lot about open, collaborative learning configurations – Jim Groom and Alan Levine, some of the amazing ds106 team. Alan was ready and willing, so he was roped in! Alan built us a WordPress base with all kinds of cool plug-ins. You will have to ask Alan for details! He has been doing this for a variety of courses, and blogs about it quite a bit, so check out da blog! The main functions of the course site included: providing a home for the weekly syllabus/instructions, a place to aggregate student blogs, and a place to link to course resources. Alan set up pages for each week and taught the team how to populate them. (Edit: Alan wrote a post with more details on the set-up here. Thanks, Alan!)

Tumblr blogs: Instead of a multiple-user WordPress installation, Alan suggested that we use the very easy to set up Tumblr blogging platform and then aggregate the posts into the site. Again, I’ll leave the details to Alan, but the pros were that some students already had Tumblr blogs (yay!), Tumblr could integrate many types of media (strong w/ photos), and it was easy for people to set up. The key is to get them to set up their blogs the first week and share the URL. Alan set up a form to plop that data right into a Google spreadsheet, which was also our student roster, as well as a great Tumblr guide. The main con was that the comments via WordPress were dissociated from the original posts on Tumblr, so if you wanted to read a post in its original context, you missed the comments. There were tweaks Alan implemented based on our team and student feedback, mainly to make it easier to comment on the blogs (in the WP site — Tumblr is not so much about commenting), and to help make new comments and posts more visible on the main site through the use of some sidebar widgets. I liked the conversational views, but I also found I needed to use the admin features to really notice new posts and comments. Plus we had to do a lot of initial comment approval to get past our spam barrier in the first weeks.
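For the curious, the heart of this kind of aggregation is feed syndication. I don’t know the details of the WordPress plug-ins Alan used, but the general idea can be sketched in Python with just the standard library: pull the items out of each student blog’s RSS feed and merge them into one stream, newest first. The feed snippets below are made up; a real aggregator would fetch each student’s feed URL over HTTP.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Hypothetical RSS snippets standing in for two student Tumblr feeds.
FEED_A = """<rss><channel>
  <item><title>Week 1 reflection</title>
    <pubDate>Mon, 01 Oct 2012 10:00:00 +0000</pubDate></item>
</channel></rss>"""
FEED_B = """<rss><channel>
  <item><title>On team work</title>
    <pubDate>Tue, 02 Oct 2012 09:30:00 +0000</pubDate></item>
</channel></rss>"""

def aggregate(feeds):
    """Merge items from several RSS feeds into one stream, newest first."""
    items = []
    for xml_text in feeds:
        for item in ET.fromstring(xml_text).iter("item"):
            title = item.findtext("title")
            when = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((when, title))
    return [title for _, title in sorted(items, reverse=True)]

print(aggregate([FEED_A, FEED_B]))  # ['On team work', 'Week 1 reflection']
```

Notice that the merged stream only carries what the feeds carry, which is exactly why the WordPress-side comments ended up dissociated from the original Tumblr posts: comments live on the aggregating site, not in the syndicated feed.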

Each faculty member had a Tumblr blog, but in truth, I think I was the main one actively blogging… I also used tags to separate my general reflective blogging from “announcement” posts, which provided student direction.

I tried to comment on every student’s blog at the beginning and end of the course. Each of the other team members had a group of students to follow closely. I chimed in here and there, but wanted to make sure I did not dominate conversations, nor set up the expectation that the blog posts – mostly reflective writing assignments – were a dialog with me. Students were also asked to read and comment upon a selection of other students’ blogs. At first these were a bit stilted, but they got their text-based “conversation legs” after a few weeks and there were some exchanges that I thought were really exemplary.

Google Docs: We used Google Docs and spreadsheets to do all our curriculum drafting, planning and coordinating as a faculty team. I need to ask the team if they would be willing to make those documents public (except for  the roster/grading) as a way to share our learning. Would you be interested in seeing them?

Meetingwords.com: Synchronous online meetings for large groups create a context where it is easy to “tune out” and multitask. My approach to this is to set up a shared note taking site and engage people there to take notes, do “breakout” work in smaller groups and generally offer another modality for engagement and interaction. We used Meetingwords.com and Google docs for this, later sharing cleaned-up notes from these tools. I like that Meetingwords has the shared note taking (wiki) on the left, and a chat on the right. It is based on Etherpad, which was eventually folded into Google docs. So we were using “cousin” technologies! As one of the team noticed, chat is also a great place to practice written English!

Blackboard: Blackboard was used for enrollment and grading as I understand it. I never saw it nor did I have access to it.

Live Meetings: Skype, Google+/Google Hangouts: We considered a variety of web meeting platforms for our weekly meetings. We did not have access to a paid service. We tried a few free ones early on and had some challenges, so we started the course with me Skyping in to a single account projected on a screen and with a set of speakers. Unfortunately, the meeting room was not ideal for the sound set-up and many students had difficulty hearing me clearly. This and the fact that I talk too fast…

We then decided we wanted to do more with Google Hangouts, which the faculty team had used in early planning meetings. At the time, only 10 active connections were available, so we first used it as we had Skype, with me connecting to a single account, and later used it for smaller team meetings and breakouts; with each team in a separate room, we could have one account per team connected with me. Sometimes this worked really well. Other times we had problems with dropped connections, noise, people not muting their computers, etc. In the end, we need to develop better live meeting technology and meeting space for future iterations. That was the standout technical challenge! You can read some Hangout feedback from the first group experiment here.

Team Spaces – Facebook and…: Each project team was asked to pick their own collaboration platform. Quite a few chose Facebook, and an overall course group was also set up on Facebook. One team chose Basecamp, which they liked, but after the 30-day free trial they let it lapse. Other team spaces remained a mystery to me. I think their tutors knew! When you have multiple platforms, it would be good to have a central list of all the sites. It got pretty messy!

Twitter: I set up a Twitter list and we had a tag (#commproj12, or as I mistyped it #projcomm12!) and asked people to share their Twitter names, but only a few in the class were active on Twitter. In terms of social media networks, Facebook was clearly dominant, yet some of the students had not been previously active on any social networks. It is crucial not to buy into assumptions about what age cohort uses which tools! I did use Twitter to send queries to my network(s) on behalf of the class and we did have a few fruitful bursts of interactions.

Email – yeah, plain old email: Finally, we used email. Not a lot, but when we needed to do private, “back channel” communications with the team or with students, email was useful. But it was remarkable how this course did not significantly add to my email load. Times have changed!

Overall, I think the students had a good exposure to a wider set of tools than many of them had used before. Our team was agile in noticing needed tweaks and improvements and Alan made them in the blink of an eye. That was terrific. I wonder if we could get a couple of students involved in that process next time? We also knew and expected challenges and used each glitch as a learning opportunity and I was grateful the students accepted this approach with humor and graciousness — even when it was very challenging. That is learning!

What happened? What did I learn?

Beyond what was noted above, I came away feeling I had been part of a good learning experience. As usual, I beat myself up a bit on a few things (noted below) and worried that I did not “do right” for all of the students. Some seem to have really taken wing and learned things that they can use going forward. Others struggled and some failed. I have a hard time letting go of that. There is still data to crunch on page views etc. Let’s look at a few key issues.

Team Preparation & Coordination (Assumptions!): I designed the course but I did not orient the team to it at the start. We had little time together to coordinate (all online) before the course began. You don’t even know how many students there are until a few days before the start, and THEN tutors are allocated (as I understand it – I may have that wrong!). Maarten was my contact, but I did not really know the rest of the team. My advice: get to know the team and make sure you are all on the same page. We’ll do that next time! That said, I am deeply grateful for how they jumped in, kept a 100% positive and constructive attitude and worked HARD. I could not wish for a more wonderful, smart, engaged team. THANK YOU! And I promise I will never again assume that the team is up to speed without checking. PROMISE!

The Loud (and very informal) American: As noted above, our live meeting tech set-up was not ideal. So when I was beamed into the weekly meetings, I was coming across as loud, incomprehensible and fast talking. I was grateful when the teaching team clued me in more deeply to the challenges based on their observations in the room. That was when we shifted from large groups to small groups. I think I was much more able to be of use when we met at the project team level. I could get to know individual students and we could talk about relevant issues. And I could then weave across the conversations, noting when something one group was doing was related to another group’s work. Weaving, to me, is a critical function of the teaching team, both verbally in these meetings and across blog posts. This ended up being a better way to leverage my contributions to the students. That said, I did not connect with all of them, nor successfully with all of the groups. We need to think through this process for next time.

On top of it, I’m very informal and this group of international students mostly came from much more formal contexts. Talk about a shift as we negotiated the informality barrier. During the course we also had to address the difference between informality and respect. At one point we had one learner anonymously insert an inappropriate comment in the chat and our learning community intervened.

Language, Language, Language: Writing backgrounders and instructions in the simplest, clearest language is critical. I can always improve in this area. We do need a strategy for those students who still have to strengthen their English language skills. I worry that they get left behind. So understanding language skills from the start and building appropriate scaffolding would be helpful.

Rhythm of Online and Face-to-Face: Unsurprisingly, we need more contact and interaction early on and should have scheduled perhaps two shorter meetings per week the first three weeks, then build a blend of small and large group sessions. I’d really love to see us figure a way that the small group sessions are demand driven. That requires us to demonstrate high value early on. I think a few of the early small group meetings did that for SOME of the students (see this recording from our hangout), but not all. The F2F faculty team has suggested that we do more online and they do less F2F which I think, given the topic, is both realistic and useful.

Student Self-Direction and Agency: There is a lot of conditioning we experience to get us to work towards satisfying the requirements for a grade. This seems to be the enemy of learning these days, and helping students step out of “how do I get a good mark” into “how do I thrive as a learner and learn something that takes me forward in my education” is my quest. At the start of the course, we tossed a ton of ideas and information at the students and they kept seeking clarity. We declared that “confusiasm” was indeed a learning strategy, and that generating their own questions and learning agenda was, in the end, a more useful strategy than hewing to a carefully (over-)constructed, teacher-driven syllabus. That is a leap of faith. With good humor, some missteps on all sides and a great deal of energy, most of the group found ways to start owning their learning. This was manifest in the changes in their reflective blog posts. I was blown away by some of the insights, but more important was how their writing deepened and matured. I hypothesize that it was important for them to get comments and know they were being “heard.” It is always an interesting balance for me. No feedback, or not enough, dampens things. Too much and the learner’s own agency is subverted to pleasing the commenters vs. working on their own learning agenda.

I was intrigued to watch students get used to the new experience of writing in public. Few of the students had this experience. I’d love to interview them and hear what they thought about it, especially those who had comments from people outside the course (mostly folks I linked to from my network – and I’d like to do more of that). It is my experience that an open learning environment fosters learning reciprocity, both within the class cohort and with professionals out in the world. I’d like to deepen this practice in future iterations.

There is also the problem of making too many offers of activities. Each week there was a video, a discussion around a key topic, 2-3 activities, reflective blogging and, after the first few weeks, significant group work. The design intent was that these things all worked together, but some weeks that was not so clear. So again – simplify! Keep the bits integrated so the learning agenda is served, moving forward.

We also had some ad hoc offers like helping co-construct a glossary and adding to the resource page. That had just about ZERO uptake! ;-) Abundance has its costs! We did get some good questions, and some of the students were note-taking rock stars at our live meetings. Speaking of that, a few of our students were rock star FACILITATORS and TECHNOLOGY STEWARDS. Seeing them in action provided perhaps the most satisfying moments of the whole course for me!

Student Group Projects: The project teams were designed around the five parts of design that the program uses. With 9 groups of 5-6 students (one group was alumni who only marginally participated) that meant some topics had two teams while others had just one. Alan set up the tags so it was easy for teams with shared topics to see each other’s blog stream, but I’m not sure the students picked up on/used that. A clear learning was that we needed to help people see the whole as well as the parts, and the projects could have been designed to be interlinked. That would add more coordination, but if we picked a clearer focus than “helping an NGO” and maybe even worked with an actual NGO need identified up front, the projects might have had a bit more grounding in reality.

I’m not sure we set up the five design areas well enough. That warrants a whole other blog post. To both understand the concept, put it in the context of a real NGO need and then create a short video is a tall order. It took the teams a number of weeks to really dig into their topics and establish their own collaborative process. And of course that put a lot of pressure on video production at the end. I think the single most useful design change I’d institute is to have a required storyboard review step before they went into production. Then we could have checked on some key points of understanding before they produced.

A second production element came to light — literacy about what is acceptable use of copyrighted material. This relates to good practices about citing sources and giving supporting evidence for conclusions. There is always a space for one’s opinion, but there is also useful data out there to inform and support our opinions. I think I’d set the bar higher on this next time, and do it early – with good examples.

Student Response: I have not seen the student evaluations and really look forward to seeing them. I expect some sharp critique as well as some satisfaction. I personally know we learned a lot and can really improve a subsequent iteration. I am also interested to understand how this experience lands within the institution as they explore if and how they do more online elements in their learning structure. I smiled often when I read comments from the more social-media literate/experienced students and wondered how we could leverage their knowledge more as tech stewards in the future. Here is a comment we loved: Geoffrey – “the world is freakin bigger than facebook.”

Alan wrote something in his ds106 reflection that resonated for me in Project Community.

This is not about revolutionizing education or scaling some thing to world changing proportions, it is not even about us changing students, its showing them how to change themselves. I see in their writings new awarenesses of media, of the web, of their place in it, I see unleashed creativity, I see an acceptance of a learning environment that pushes them to reach out and grab their own learning.

 Next time?

First of all, I hope I get invited back to participate next year. We challenged ourselves and learned a lot. I think we can build on what worked and certainly improve many things. And from this, make it less work for the team. We learned a lot about the online/offline rhythm and from our team debrief, I sensed a strong inclination to do MORE online. But we also have to simplify things so that we can spend most of our time co-learning and facilitating rather than “explaining” what the course, the assignments and the projects were about. Clarity, simplicity — two key words for another round!

If you made it all the way through this, do you a) have any questions, b) insights or c) find something you can use the next time you design a course? Please share in the comments!

Artifacts:

Later Added Interesting Connections:

As I find some cool things related to this post, I’ll add them here. So expect more add/edits!



Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States.