Archive for the 'evaluation' Category

Jul 27 2016

Learning While Building eLearning: #4 Lessons from the Pilot

This is the last of four pieces reflecting on the experiences of Emilio, a subject matter expert who was tasked with converting his successful F2F training into an elearning offering. This one focuses on the lessons learned from the pilot, and we are pulling in Cheryl Frankiewicz, the project manager. You can find the context in part 1, part 2 and part 3. (Disclaimer: I was an adviser to the project, and my condition of participation was the ability to do this series of blog posts, because there is really useful knowledge to share, both within the colleague’s organization and more widely. So I said I’d add the blog reflections – without pay – if I could share them.)

Nancy: Emilio and Cheryl, what is your advice for someone else embarking on this process?

Emilio: Get the prep work for the launch right. Pay a lot of attention to the one very important thing you do in every single project, no matter what it is: the process of getting the people there. The enrollment, selecting a good partner, and staying on top of your partner so that nothing goes wrong in the introduction process. The key thing is to get the people there at the start of your course. That has to go flawlessly. If it starts flawlessly, it is almost a piece of cake to run a good learning course. Then everything flows easily.

Cheryl: I would encourage others who embark on this process to start by revisiting their objectives and making sure that they measure the most important learning outcomes. Once the objectives are clear, focused and measurable, it’s much easier to make wise choices about which content and activities to include in the course design. Interaction is just as important in elearning as in F2F learning, but that doesn’t mean that all the interaction that takes place in the face-to-face environment should be transferred to the online environment. Attention spans are more limited and the demands on learners’ time are greater in an online environment, so you have to be careful not to include so much interaction that it becomes overwhelming to learners.

If you haven’t facilitated online before, take an elearning facilitation course before you deliver for the first time. I took one before I delivered my first online training and it was worth every penny I paid. Not only did I get useful tips on how to manage participation in a virtual environment, but I also had the opportunity to practice them before going “live”. The big surprise for me was how much I depended on participants’ body language for feedback in a F2F environment, and how lost I felt online without it. The course helped me identify other strategies for gathering and giving feedback online. Emilio wanted to take one of these courses but his travel schedule didn’t allow it.

One other recommendation I’d make is to plan for regular communication with learners. In a F2F setting, facilitators don’t have to think about how this will happen because they are in constant contact with learners, but in an elearning environment, extra effort has to be made to design and time communication in a way that helps keep participants on track and motivated to participate. Regular bulletins from the facilitator that remind participants what is happening in a given week or unit are a valuable tool for accomplishing this. These bulletins can also highlight key lessons learned or insightful contributions from participants during the previous week. The review can help re-engage those that have fallen behind, and the recognition can help motivate quality participation in the future.

Nancy: Emilio, I have done quite a bit of work with your organization around learning, facilitating and elearning. As you think about your experiences and the experiences you’ve learned from other colleagues doing elearning in FAO, what capacity is needed to do this sort of work in an organization like yours?

Emilio: We have our own elearning team at FAO doing their own projects for specific groups. Their services are relatively expensive. If I had done with them the same thing I did with MEDA, I would likely have paid more. And they have a limited number of people; they don’t have enough capacity to be service providers to the rest of the organization. We have so many different units. Our organization is structured so that we have to provide services to each other, and we have to pay for them.

Nancy: I know there is a lot of talent spread through the organization, but it is not clear that they are aware of each other, talk to each other, learn and support each other.

Emilio: You are right. I have a colleague doing a training. She decided to work with Unitar, and she is thrilled with the experience. But when she started talking about her needs and experiences, they were very different. From what she tells me, I would not be inclined to use that model; I would need something different. It is hard, at the end of the day, to come up with a corporate, well-coordinated approach to elearning, to cultivate that knowledge among all of FAO’s staff, or at least to spread it as much as possible.

But you are right: the result is that we don’t leverage or learn from each other. A colleague may be having a very valuable experience, and I still have to go through the painful process of learning it myself.

Nancy: Cheryl, how about you? What is your advice?

Cheryl: Don’t aim for the moon in your beta test. Aim to learn. As Emilio mentioned, only 41% of those who registered for the course actually completed it. But 100% of those who completed it said they would recommend it to their colleagues. Learning happened, and more learning will happen the next time around because Emilio and his team are observant, open to learning, and patient with themselves and the process.

Make sure you bring together a good team of people who can cover all the bases that need to be covered when converting a F2F training into an elearning offering. Don’t expect that any one person is going to be your subject matter expert, instructional designer, programmer, learning strategist, platform troubleshooter and project manager all in one. Ultimately, a team of six people contributed to this conversion, none of us working on it full time, but all of us contributing expertise in a particular area. Make sure that someone on the team takes responsibility for organizing the work and keeping your timeline on track. And avoid the temptation to outsource everything because you’ll miss the opportunity to learn how to do it yourself. Emilio’s probably not ready to develop his next course entirely in-house, but he and Milica have built the capacity to maintain and adapt the courses that now exist.

Speaking of adaptation, one last piece of advice is to take advantage of the opportunities that elearning provides to monitor how participants are learning as they are learning and make adjustments to the course design as you go along. Emilio mentioned earlier that the feedback he received in the office hours helped him adjust the course materials, but our analysis of the quiz, final exam and evaluation results also helped us identify which concepts could be better explained, and which objectives could be better supported. We monitored how, when and where learners engaged (and did not engage) and this is helping Emilio to improve his next offering of the course. For example, we learned that participants who did not complete the course tended to follow one of two patterns: approximately one-third logged in only once or twice and did not finish even the first module; the other two-thirds participated fairly regularly and completed module 2, but then dropped out. With this information, and with feedback from participants who completed the course, Emilio is revising the design of the Module 2 group work, and he and Milica are planning to follow up more quickly with inactive participants during the first module of the course to identify if there are any barriers to participation that they might help learners address.
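(Note: for readers who want to try this kind of pattern-spotting on their own course data, here is a minimal sketch in Python. The activity log, module count and thresholds are made up for illustration, not the pilot’s actual data; a real LMS export would need mapping into this shape.)

```python
# A minimal sketch of bucketing learners by dropout pattern. The activity
# log, module count and thresholds here are hypothetical.

from collections import Counter

TOTAL_MODULES = 4  # assumed course length

activity_log = [
    # (learner_id, login_count, last_module_completed)
    ("A", 1, 0), ("B", 2, 0), ("C", 9, 2),
    ("D", 11, 2), ("E", 14, 4), ("F", 15, 4),
]

def classify(login_count, last_module):
    """Bucket a learner into one of the patterns described above."""
    if last_module >= TOTAL_MODULES:
        return "completed"
    if login_count <= 2:
        return "logged in once or twice, never got going"
    if last_module >= 2:
        return "participated regularly, dropped after module 2"
    return "other"

patterns = Counter(classify(logins, module) for _, logins, module in activity_log)
for pattern, count in patterns.items():
    print(f"{pattern}: {count}")
```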

Here’s mine (Nancy)…

I’m really glad the decision was made to have a beta test, which helped us sharpen the content, process, assessment and technology. The example of understanding how the exam was graded shows that there are always technical things to learn, and the careful attention to assessment as it relates to learning objectives taught us a lot.

We learned some things about the process of having a marketing partner, the importance of lead time and a very real need to do some pre-course orientation for the learners about the technology and course expectations. We have talked about developing some short videos and having a short “week 0” prior to the actual start of the course to ensure the tech is working for learners before we dive so quickly into content and community building.

We need to get the participation rate higher because I’m convinced that is key to successful completion. Look at the people who participated in the office hours: they stayed engaged and completed! I think this starts with a clearer ramp-up and explicit expectations (including pre-course communications), regular emails during the course, and refinement of our pre-course learner survey so the facilitator can understand the learners a bit before the course.

That said, there were SO many things to pay attention to that it was easy to spend less time on the social aspects of learning: initial engagement with the learners, building a learning community (which is difficult in three weeks with a limited expectation of learner hours), and helping learners contextualize the content to their own settings. I had warned Emilio beforehand that facilitating online learning is a bit different from teaching face to face. The learning management system delivers a lot of the content. The real role is connecting learners to the content and to each other.

Thanks to Emilio, Cheryl, FAO and MEDA for supporting these four blog reflections!


Jul 20 2016

Learning While Building eLearning: Part 2 – Learning Objectives & Assessment

This is the second (slow!) of four pieces reflecting on the experiences of Emilio, a subject matter expert who was tasked with converting his successful F2F training into an elearning offering. Emilio has let me interview him during the process.

This piece focuses on the thorny issue of learning objectives at the front end of an elearning project and assessment at the other end. You can find the context in part 1 here. (Disclaimer: I was an adviser to the project and my condition of participation was the ability to do this series of blog posts, because there is really useful knowledge to share, both within the colleague’s organization and more widely. So I said I’d add the blog reflections – without pay – if I could share them.)

Nancy: Looking back, let’s talk about learning objectives. You started with all of your F2F material, then had to hone it down for online. You received feedback from the implementation team along the way. What lessons came out of that process? How do we make content more precise when you have fewer options to assess learner needs and interests “in the moment” as you do face to face, and with the limited attention span online?

Emilio: I realize now that I had not thought about being disciplined with learning objectives. I had created them with care when I first developed my F2F offering. But once I had tested the course several times, I forgot my own initial learning objectives, because in a F2F setting I adapted to students’ interests and knowledge gaps on the spot, and I was also able to clarify any doubts about the content. Over time, the learning objectives became malleable depending on the group of students, and thus lost presence in my mind.

This became apparent as I was drafting the quizzes for the online course and got comments back from Cheryl (the lead consultant). She noted which of my quiz questions were and were NOT grounded in the learning objectives and content. I realized I was asking a bunch of questions that were not crucial to the learning objectives.

With that feedback, I narrowed down to the questions most important for achieving and measuring the learning objectives. It was an aha moment. This is something that is not necessarily obvious or easy. You have to put your mind to it, especially when you are developing an e-learning course. It applies to the F2F context as well, but in an e-learning setup you are forced to be more careful because you cannot clarify things on the spot. There is less opportunity for that online. That was very critical. (Note: most of the course was asynchronous. There were weekly “office hours” where clarifications happened. Learners who participated in the office hours had higher completion rates as well.)

It was clear I had to simplify the content for the elearning setup – and that was super useful. While my F2F materials were expansive to enable me to adapt to local context, that became overload online.

Nancy: What was your impression of the learners’ experiences?

Emilio: It was hard to really tell, because online we were dealing with a whole different context. Your indicators change drastically. When I’m in a F2F setting, I can probe and sense whether the learners are understanding the material. It is harder online to get that interim feedback and know how people are doing. For the final assessment, we relied on a final exam with an essay question. The exam was very helpful in assessing the learners’ experience, but since it is taken at the end of the course, there are no corrective measures one can take.

Nancy: Yes, I remember talking about that as we reviewed pageviews and the unit quizzes during the course. The data gives you some insight, but it isn’t always clear how to interpret it. I was glad you were able to get some feedback from the learners during your open “office hours.”

We used the learning objectives as the basis for some learner assessment (non-graded quizzes for each unit and a graded final exam which drew from the quizzes). How did the results compare with your expectations of the learners’ acquisition of knowledge and insights? How well did we hit the objectives?

Emilio: We had 17 registered learners and 7 completed. That may sound disappointing. Before we started, I asked you about participation rates and you warned me that they might be low, and that is why I am not crying. The 7 that completed scored really well in the final exam and you could see their engagement. They went through the material, did the quizzes and participated in the Office Hours. One guy got 100% in all of the quizzes, and then 97% in the exam.

We had 8 people take the final exam. One learner failed to reach the 70% required benchmark, but digging into it, Terri (one of our consultants) discovered that the way Moodle was scoring the multiple-choice answers was not configured precisely: it was awarding full credit for partially correct answers. We need to fix that. Still, only one person failed to reach the 70% benchmark, even with the error.
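(Note: the fix here was likely a matter of quiz configuration rather than Moodle’s code. One plausible version of the problem, sketched below in Python with made-up option weights, is that wrong options weighted at zero let a partially correct response sum to full marks; negative weights or all-or-nothing scoring close that gap.)

```python
# Illustrative scoring sketch (assumed behavior, not Moodle's actual code):
# if wrong options are weighted 0 instead of negative, summing per-option
# fractions awards full credit for a partially correct response.

def fraction_score(selected, weights):
    """Sum the weights of the selected options, clamped to [0, 1]."""
    return max(0.0, min(1.0, sum(weights.get(opt, 0.0) for opt in selected)))

def all_or_nothing(selected, weights):
    """Full credit only for selecting exactly the correct options."""
    correct = {opt for opt, w in weights.items() if w > 0}
    return 1.0 if set(selected) == correct else 0.0

# Two correct options worth 50% each; distractors weighted 0 (lenient)
# versus -50% (penalized).
lenient = {"a": 0.5, "b": 0.5, "c": 0.0, "d": 0.0}
penalized = {"a": 0.5, "b": 0.5, "c": -0.5, "d": -0.5}

picked = ["a", "b", "c"]  # both correct options plus one distractor
print(fraction_score(picked, lenient))    # 1.0 - full marks despite the error
print(fraction_score(picked, penalized))  # 0.5 - distractor costs half
print(all_or_nothing(picked, lenient))    # 0.0 - strictest alternative
```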

The essay we included in the exam drew really good responses. It achieved my objective of getting an in-depth look at the contexts the learners were coming from. Most of them described an institutional context. Then they noted what they thought was most promising from all the modules – what was most applicable or relevant to their work. There were very diverse answers, but I saw some trends that were useful. However, it would have been useful to know more of this before and during the course.

Nancy: How difficult was it to grade the essays? This is something people often wonder about online…

Emilio: I did not find it complicated, although there is always some degree of subjectivity. The basic criterion I used was to value their focus on the question asked, and their application of all the principles taught during the course that relate to the context described in the question.

Nancy: One of the tricky things online is meaningful learner participation. How did the assessment reflect participation in the course?

Emilio: We decided not to give credit for participation in activities because we were not fully confident of how appropriately we had designed such activities for an e-learning environment in this first beta test. I think this decision was the right one.

First, I feel that I did not do a good job of creating an atmosphere – this sense of community – that would encourage participation. Even though I responded to every single comment that got posted, I don’t really feel that people responded that much in some of the exercises. So I would have penalized students for something that is not their fault.

Second, we had one learner who did every exercise but did not comment on any of the posts. He is a very good student and I would have penalized him if completion relied on participation. Another learner who failed did participate, went to the office hours and still did not pass the final exam.

We failed miserably with the group exercise for the second module. I now realize the group exercise requires a lot of work to build the community beforehand. I sense this is an art. You told me that it is completely doable in the elearning atmosphere, but after going through the experience I really feel challenged to make it work. Not only with respect to time, but how do you create that sense of community? I feel I don’t have a guaranteed method for it to work. It is an art to charm people in. I may or may not have it!

Nancy: The challenges of being very clear about what content you want to share with learners, how you share it, and how you assess it should not be underestimated. So often people think it is easy: here is the content! Learning design in general is far more than content, and learning design online can be trickier because of your distance from your learners – not just geographic distance, but the social distance, where there is less time and space for the very important relational aspects of learning.

Up Next: Facilitating Online


Dec 02 2013

An Interview With Aaron Leonard on Online Communities

I had a chance to interview Aaron Leonard late last September about his online community management work at the World Bank, just before he took a leave from that work. This is part of a client project I’m working on to evaluate a regional collaboration pattern and to start understanding processes for more strategic design, implementation and evaluation of collaboration platforms, particularly in the international development context.

Aaron’s World Bank blog is http://www.southsouth.info/profiles/blog/list?user=1uxarewp1npnk

How long have you been working on “this online community/networks stuff” at the Bank?  How did your team’s practice emerge?

I’ve been at the Bank 4 years and working on the communities of practice (CoP) front for 3 of those. I started as a community manager building a CoP for external/non-Bank people focused on South-South exchange. Throughout this process, I struggled with navigating the World Bank rules governing these types of “social websites”. At the time, there were no actual rules in place – they were under formulation. So what you could and could not do, use, or pay for depended on who you talked to. I started working with other community managers to find answers to these questions, along with getting tips and tricks on how to engage members, build HTML widgets, and so on. I realized that my background working with networks (pre-Bank experience) and my experience launching an online community of practice within the Bank were useful to others. As more and more people joined our discussions, we started formalizing our conversation (scheduling meetings in advance, setting agendas, etc. – but not too formal :).

We were eventually able to make ourselves useful enough that I applied for and received a small budget to bring in some outside help from a very capable firm called Root Change and to hire a brilliant guy, Kurt Morriesen, to help us develop a few tools for community managers and project teams and to help them think through their work with networks. We started with 15 groups – mostly within WBI, but some from the regions as well. All were asking and needing answers to some common questions: “How do we get what we want out of our network? How do we measure and communicate our success? How do we set up a secretariat and good governance structure?” This line of questioning seemed wrong in many ways. It represented a “management mindset” (credit Evan Bloom!) versus a “network mindset”. The project teams were trying to get their membership to do work that fit their programmatic goals, versus seeing the membership as the goal and working out a common direction for members to own and act on themselves. We started asking instead: “Why are you engaging?” “Who really are you trying to work with?” “What do you hope to get out of this engagement?” “What value does this network provide its members?” This exercise was really eye-opening for all of us and eventually blossomed into an actual program. I brought in Ese Emerhi last year as a full-time team member. She has an amazing background as a digital activist, and knows more than I do about how to make communities really work well.

Ese and I set up a work program around CoPs and built it into a practice area for the World Bank Institute (WBI) together with program community managers like Kurt, Norma Garza (Open Contracting), and Raphael Shepard (GYAC) among others. With Ese on board, we were able to expand beyond WBI (to the World Bank in general). This was possible in part because our team works on knowledge exchange, South-South knowledge exchange specifically (SSKE). We help project teams in the World Bank design and deliver effective knowledge exchange. CoPs are a growing part of this business, in part because the technology to connect people in a meaningful conversation is getting better, and in part because we know how to coach people on when and how to use communities.

How did you approach the community building?

With Root Change we started with basic stocktaking and crowdsourcing to define an agenda for ourselves. We had 4-5 months for this activity. We settled on a couple of things:

  1. Looking at different governance arrangements. How do we structure the networks?

  2. What tools or instruments to use in the design and planning of more effective networks.

We noticed that we were talking more about networks than communities. Some were blends of CoPs, coalitions, and broader programs. The goals aren’t always just the members’. So we talked about the differences between these things and how they can be thought of along a spectrum of commitment or formality – a social network versus an association, and how they are and are not similar beasts.

We gave assignments to project teams and met on a monthly basis to work with these instruments. At the prompting of the consultants at Root Change, we started doing one-to-one consultations with teams. We reserved a room, brought in cookies and coffee, and then brought the teams in for 90 minutes each of free consulting sessions. These were almost more useful for the teams than the project work. Instead of exploring the tools, they were APPLYING the tools themselves. It was also a matter of taking the time to focus, sit down and be intentional about their work with their networks. Just shut the door and collectively think about what it was they were trying to do. A lot of this started out in a more organic way around what was thought to be an easy win: “We’ll start a CoP, get a website, get 1,000 people to sign up” – without understanding what it meant for membership, resourcing, team, commitment and longer-term goals and objectives.

We helped them peel back some of the layers of the onion to better understand what they were trying to do. We didn’t get as far as we wanted. We wanted to get into measurement and evaluation and social network analysis, but that was a little advanced for these teams at their stage of development. They did not have someone they could rely on to do this work. Some had a community manager, but most of these were short-term consultants, for 150 days or less, and often really junior people who saw the job as an entry-level gig. They were often more interested in the subject matter than in being community managers. They tended to get pulled in different directions and may or may not have liked the work. They tended to be hired right out of an International Development master’s program with a thematic bent, so they were usually interested in projects, versus organizing 1,000 people and lending some sense of community. Different skill sets!

We worked with these teams and came up with a few ideas. Root Change wrote a small report which helped justify a budget for the subsequent fiscal year, and my boss let me hire someone who would have community building as part of their job: half their time on the Art of Knowledge Exchange toolkit we were developing together, and the other half for community. At this point we opened up our offering to the World Bank Group to help people start a CoP and understand how to work with membership, engage, measure and report on it. We helped them figure out how they could use data and make sense of their community’s story. We brought in a few speakers and did social things to profile community managers. Over the course of the year we talked to and worked with over 300 people. (Aaron reports they have exact numbers, but I did not succeed in connecting with him before he left to get those numbers!) We did 100 one-on-one counseling sessions. We reached very broadly across the institution and increased awareness of the skill set we have in WBI regarding communities and networks. We helped people see that this is a different way of working. Our work coincided with the build-up of the Bank’s internal community platform based on Jive (originally called Scoop and now called Sparks – a collaboration-for-development and CoP-oriented platform). The technology was getting really easy for people to access. There was more talk about knowledge work, about being able to connect clients, and awareness of what had been working well on the South-South platform.

We did a good job, and that gave us the support for another round of budget this year. Now we have been able to shift some of the conversation to the convening and brokering role of the Bank. This coincided with the Bank’s decreased emphasis on lending and increased emphasis on access to experts, which complemented the direction we were going in. We reached out and have become a reference point for a lot of this work. There have been parallel institutional efforts that flare and fade, flare and fade. But it is difficult to move “the machine.” It can even be a painful process to witness. I admire the people doing this, but (the top-down institutional change process) was something we tried to avoid. We did our work on the side, supporting people’s efforts where possible. Those things are finally bearing fruit. We have content. They have a management system. We have a process for teams to open a new CoP space and a way for community leaders to find what is available to them. They have a community finder associated with an expert finder. These are great things to have and invest in, but it is not where we were aiming. We want to know the community leaders – the people like Ese, like Norma Garza – running these communities, who struggle and have new ideas to share. What are the ways to navigate the institutional bureaucracy that governs our use of social media tools? How do you find good people to bring on board? You can’t just hire the next new grad and expect it to last. There is an actual skill set – unique, not always well defined, but getting more recognition as something that is of value and unique to building a successful CoP. There is new literature out there and people like Richard Millington (FeverBee) – a kid genius doing this since he was 13. He takes ideas from people like you, Wenger and Denning. There is now more of a practice around this.

While the Bank is still not super intentional about how it works internally with respect to knowledge and process, more attention is being paid and more people are being brought in. It can be a touch-and-go effort. We’re just a small piece, but we are feeling a much-needed demand, and our numbers prove that. We have monthly workshops (sometimes two a month) that are promoted through a learning registration system, and we sell the spaces out within minutes. People are stalking our stuff. It is exciting. At the same time, while it felt like the process of expansion touched a lot of people, convinced people and shaped the dialog, I also feel we lost touch with the Normas. Relationships changed. We were supporting them by profiling them, helping them communicate to their bosses so the bosses understood their work, but not directly supporting them with new ideas, techniques, approaches.

We reassessed at the end of last year. We want to focus on building an actual community again. We started that, but lost it last year while busy pushing outwards. Still, we kept people close and we can rely on each other. It has not had the intimate setting of 15-20 of us, or 3 of us doing this work, sitting around and talking about what we are struggling with – like “How did you do your web setup?” or “How do you do a Twitter Jam?” So our goals this year are a combination. Management likes that we reached so many people last year. They have been pretty hands-off and we can set our own pace. Because we did well last year, they have given us that room, that trust.

So now we want to focus more on championing the higher-level community managers. The idea is to take a twofold approach. First, we want to use technology to reach out – to use our internal online space to communicate and form a more active online community. Second, we want to focus a few of our offerings on these higher-level community managers, with the idea that if we can give them things to help with the deeper challenges of their jobs, they will be able to help us field the more general requests for the more introductory offerings: “Can you review my concept note?” “Help me set up my technology.”

It is still just the two of us. We are grooming another person, but working with the more senior community managers will also allow us to handle more requests by relying on their experience. We give them training, and in return they help with basic requests. This is not a mandate; we don’t have to do this. It is what we see as a way of building a holistic and sustainable community within the Bank to meet the needs of community managers and people who use networks to deliver products and services with their clients.

How do you set strategic intentions when setting up a platform?

One of the things I love most about advising on CoPs is telling people not to do it. I love being able to say this. The incentives are wrong, the purpose is wrong. So many people think CoPs are something on the checklist, a magic bullet, or a sexy tech solution. Whatever it is, those purposes are wrong. They are thinking about the tech and not the people they are engaging. If you want to build a fence, you don’t just go buy a hammer and be done with it. You need to actually plan it out, think about why you are building it, why it’s going in, how high… bad analogy. Too often CoPs are done for all the wrong reasons. The whole intent around involving people in a conversation is lost, not even considered, or simply an afterthought. The fallacy of “build it and they will come.” One of my favorite pieces is from the guy who wrote the 10 things about how to increase engagement on your blog. It speaks to the general advice of understanding who you are targeting. Anyone can build a blog, set up a cool website or space. But can you build community? The actual dialog or conversation? How do you do that?

One key is reaching people where they already are – one of the best pieces of advice I’ve heard, and I always pass it on. Don’t build the fancy million-dollar custom website if no one is really going to go there. One of the things I have is a little speech for people. Here’s my analogy. If you are going to throw a party, you have to think about who you are going to invite, where to do it, what to feed them, the music: you are hosting that party. You can’t just leave it up to the guests. They might trash your place, not get on board, or never even open the door. You have to manage the crowd and facilitate the conversation unless they already know each other. And why are you throwing the party if they already get together in another space?

Coming from the NGO world and then coming to the Bank, I saw how easy it is to waste development dollars. It is frustrating. I have spoken openly about this. The amount of money wasted on fancy websites that no one uses is sad. There are a lot of great design firms that will help you waste that money. It is an easy thing for someone to take credit for a website once it launches. It looks good, someone makes a few clicks, and then no one asks to look at it again. The boss looks at it once and that is it. No one thinks about or sees the long-term investment. They see it as a short-term win.

One of the things I try to communicate is to ask: if you are going to invest in a platform, do you really want to hear back from the people you are pushing info to? If not, build a simple website. If you do want to engage with that community, to what extent and for what purpose? How will you use what you learn to inform your product or work? If you can’t answer that, go back to the first question. If they actually have a plan – and their mandate is to “share knowledge” – how do they anticipate sharing it? They often give me a long laundry list of target audiences. So you are targeting the world? This is the conversation I’ve experienced: no clear, direct targeting, no understanding of who specifically they are trying to connect with. We suggest they focus on one user group. Name real names. If you can’t name an individual, write out a description. Talk about their fears, desires, challenges, and work environment. Really understand them in their daily work life. Then think about how the proposed platform/experience/community really adds value, and in what specific way. It is not just about knowledge sharing. People can Google for information. You are competing with Google, email, Facebook, their boss, their partner. That’s your competition. How do you beat all of those for attention? That is what you are competing with when someone sits down at the computer. This is the conversation we like to walk people through before they start. The hard part is that a lot of these people are younger or temporary staff hired to do this. It is hard for them to go back to the boss and say “we don’t know what we are doing” and possibly lose their jobs. There can be an inherent conflict of interest.

How do you monitor and evaluate the platforms? What indicators do you use? How are they useful?

One of the things we don’t do – and this might be a sticking point – is actually run or manage any of these communities. We just advise teams. I haven’t run one for 2 years. Ese has her own community outside the Bank, but there is none inside that we personally run, aside from the community managers’ community, and that has been mainly a repository.

We have built some templates for starting up communities, especially for online networks with external or mixed external and internal audiences. We have online metrics (number of posts, pageviews, etc.) and survey data that we use to tell the story of a community. Often the audience for those metrics is the managers who had the decision-making role in that community. We try to communicate intentionally the value the community gives to members and to a program. We have developed some more sophisticated tools with Root Change, but we didn’t get enough people to use them. Perhaps they are too sophisticated for the current stage of community development. And we can’t force people to use them.
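(Note: as an illustration of the kind of simple metrics Aaron describes, here is a minimal Python sketch over a hypothetical activity export. The event records and field names are made up, not from any Bank platform.)

```python
# A minimal sketch of basic community engagement metrics. The event
# records here are hypothetical; a real platform export (posts, comments,
# pageviews with timestamps) would be mapped into this shape.

from collections import Counter
from datetime import date

events = [
    # (member, kind, day)
    ("amy", "post", date(2013, 9, 2)),
    ("ben", "comment", date(2013, 9, 2)),
    ("amy", "pageview", date(2013, 9, 9)),
    ("cara", "post", date(2013, 9, 16)),
]

posts = sum(1 for _, kind, _ in events if kind == "post")
# Contributors are members who did more than passively view pages.
contributors = {member for member, kind, _ in events if kind != "pageview"}
by_month = Counter(f"{d.year}-{d.month:02d}" for _, _, d in events)

print(f"posts: {posts}, active contributors: {len(contributors)}")
print("activity by month:", dict(by_month))
```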

It would be fantastic to have a common rubric, but we don’t have the energy or will to get those decisions made. We are still in the early “toddler” stage. Common measurement approaches and quality indicators are far down the line. Same with social network analysis. Root Change has really pushed the envelope in that area, but we aren’t advanced enough to benefit from that level of analysis. The Root Change tool is fun to play around with and provides a way of communicating complex systems to community owners and members. What Root Change has done is develop an online social network analysis platform that can continuously be updated by members and grow over time. Unlike most SNA, which is a snapshot, this is more organic: it builds on an initial survey that is sent to the initial group, and they forward it to their networks.
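(Note: to illustrate the “organic, continuously updated” idea, here is a toy Python sketch of a snowball-style network build, where each survey wave adds edges and newly named contacts become the next wave’s respondents. It illustrates the general technique only; it is not Root Change’s actual platform.)

```python
# Toy snowball-style network build: each wave of survey responses adds
# edges, and newly named contacts get the survey next. Purely illustrative.

from collections import defaultdict

graph = defaultdict(set)   # adjacency: who named whom as a collaborator
surveyed = set()

def record_responses(responses):
    """Merge one wave of responses (respondent -> contacts named) into
    the growing network, returning the contacts to survey next."""
    next_wave = set()
    surveyed.update(responses)          # everyone who answered this wave
    for respondent, contacts in responses.items():
        for contact in contacts:
            graph[respondent].add(contact)
            if contact not in surveyed:
                next_wave.add(contact)  # forward the survey to them
    return next_wave

wave1 = record_responses({"ana": ["bo", "cy"], "bo": ["cy", "dee"]})
wave2 = record_responses({name: [] for name in wave1})  # placeholder replies
print(sorted(graph["bo"]), "| people reached so far:", len(surveyed | set(graph)))
```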

If you had a magic wand, what are three things you’d want every time you have to implement a collaboration platform?

If I had a magic wand and could actually DO it, I would first eliminate email. Part of the reason – the main reason – we can’t get people to collaborate is that they aren’t familiar with working in a new way. I think of my cousins who are 10 years younger; they don’t use email. They use Facebook. They are dialoging in a different way. They use Facebook’s private messaging, Twitter, and WhatsApp. They use a combination of things that are a lot more direct. They keep a running stream of IM messages open. Right now email is the reigning champion at the Bank, and if we have any hope of getting people to work differently and collaboratively, we have to first get rid of email.

Next, to implement any kind of project or activity in a collaboration space right, I’d want a really simple user interface – something so intuitive that it needs no explanation.

Thirdly, I’d want that thing available wherever those people are, whether on their cell phone, iPad, or any touchable interface. Here you have to sit at your computer. We don’t even get laptops. You have to sit at a desk to engage in the online space; it is hard to do through your phone. People still bring paper and pencil to meetings. More are bringing iPads, but they are still a minority. A while back I did a study tour to IDEO. They have an internal Facebook-like system, called The Tube, which shares project updates, findings and all their internal communications. No one was using it at the beginning. One of the smartest things they did was to install, in 50 different offices, a big flat screen at each entrance that randomly displays the latest status updates pulled from The Tube from across their global team. Once they did that, the rate of people updating their profiles and using it as a way of communicating jumped to something like a 99% adoption rate in a short time. From a small minority to the vast majority. No one wanted to be seen with a project status update from many months past. It put a little social pressure in the common areas and entranceways – right in front of your bosses and teammates. It was an added incentive to use that space.

You want something simple, something that replaces traditional communications, and something with a strong and present incentive. When you think about building knowledge sharing into your review – how do you really measure that? You can use point systems, all sorts of ways to identify champions. Yelp does a great job at encouraging champions. I have talked to one of their community managers. They have a smart approach to building and engaging community. They incentivize people through special offerings, such as first openings of new restaurants, that they can organize. They get reviews out of that. That’s their business model.

We don’t really have a digital culture now. If we want to engage digitally and globally, we have to be more agile with how we use communication technology and where we use it. Put The Tube in front of the urinals and stall doors. You’ve got a minute or two to look at something. That’s the way!

 


Sep 09 2013

How do we evaluate the strategic use of collaboration platforms?

Hey smart people, especially my KM and collaboration peeps, I need your help!

I’ve been trawling around to find examples of monitoring and assessment rubrics for evaluating how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals being fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changed or deleted functionality?

I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.

But come on, why are there so few resources that help us understand whether our existing platforms and tool configurations are or are not working?

Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!

  • How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
  • What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
  • Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
  • How often do you do it?
  • How do we involve users in assessments?
  • How have the results prompted changes (or not and if not, why not)?

Please, share this widely!

THANKS!


Jul 17 2013

BetterEvaluation: 8 Tips for Good Evaluation Questions

From BetterEvaluation.org’s great weekly blog comes a post that has value for facilitators, not just evaluators! Week 28: Framing an evaluation: the importance of asking the right questions.

First let me share the tips and the examples from the article (you’ll need to read the whole article for full context), and then after each I’ll add my facilitator contextual comments!

Eight tips for good evaluation questions:

  1. Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions but these should be directly relevant for answering the main question under which they fall. When facilitating, think of each question as a stepping stone along a path that may or may not diverge. Questions in a fluid interaction need to reflect the emerging context. So plan, but plan to improvise the next question.

  2. Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements. Same in facilitation!

  3. Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions to get more at the program’s contributions to peace and stability compared to longer-term economic development goals. Ditto! I have to be careful not to keep asking questions for my OWN interest!

  4. Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national). YES! I need to learn this one better. I always have too many. 

  5. Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge that they did not have sufficient data to fully answer certain questions in their report. This is more subtle in facilitation as we have the opportunity to try and surface/tease out answers that may not be clear to anyone at the start. 

  6. Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation and that the evaluation would serve more to validate/tweak this design rather than truly shape it from scratch. The team thus tailored their questions to get more at peace, security, and governance issues given the focus on the follow-on activity. AMEN! YES!

  7. Don’t use questions which contain two or more questions in one. See, for example, question #6 in the attached: “out of the different types of infrastructure projects supported (solar dryers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to answer which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and numbers of beneficiaries reached. Thus, while box culverts (small bridges) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao was minimal. Same for facilitation. Keep it simple!

  8. Use questions which focus on what was achieved, how and to what extent, and not simple yes/no questions. In the GEM example, simply asking if an activity had or had not met its intended targets was much less informative than asking how those targets were set, whether those targets were appropriate, and how progress towards meeting those targets was tracked. Agree on avoiding simple yes/no – unless, of course, it is deciding if it is time to go to lunch.

I’m currently pulling together some materials on evaluating communities of practice, and I think this list will be a useful addition. I hope to be posting more on that soon.

By the way, BetterEvaluation.org is a great resource. Full disclosure, I’ve been providing some advice on the community aspects! But I’m really proud of what Patricia Rogers and her amazing team have done.

