How do we evaluate the strategic use of collaboration platforms?

Hey smart people, especially my KM and collaboration peeps, I need your help!

I’ve been trawling around to find examples of monitoring and assessment rubrics that evaluate how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals being fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changing or removing functionality?
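To make “adoption and use patterns” a bit more concrete, here is a toy sketch of the kind of indicators I have in mind, assuming your platform can export an activity log. The file name, the column names (user, timestamp, action) and the 30-day window are all made up for illustration – they are not from any particular tool:

```python
# Toy sketch: a few adoption/use indicators from a hypothetical activity export.
# The CSV name, column names and the 30-day window are placeholders, not any
# particular platform's schema.
import csv
from collections import Counter
from datetime import datetime, timedelta

def adoption_indicators(path, days=30):
    cutoff = datetime.now() - timedelta(days=days)
    all_users, active_users, actions = set(), set(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            all_users.add(row["user"])
            when = datetime.fromisoformat(row["timestamp"])
            if when >= cutoff:
                active_users.add(row["user"])
                actions[row["action"]] += 1  # e.g. post, reply, read, upload
    return {
        "registered_users": len(all_users),
        "active_users_last_%d_days" % days: len(active_users),
        "activity_mix": dict(actions),  # are people creating, or only consuming?
    }

print(adoption_indicators("platform_activity_export.csv"))
```

Of course, counts like these only describe activity. They don’t tell you whether the strategic goals are actually being served – which is exactly the gap I’m trying to fill.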

I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.

But come on, why is there so little out there that helps us understand whether our existing platforms and tool configurations are or are not working?

Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!

  • How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
  • What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
  • Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
  • How often do you do it?
  • How do we involve users in assessments?
  • How have the results prompted changes (or not and if not, why not)?

Please, share this widely!

THANKS!

BetterEvaluation: 8 Tips for Good Evaluation Questions

From BetterEvaluation.org’s great weekly blog comes a post that has value for facilitators, not just evaluators! Week 28: Framing an evaluation: the importance of asking the right questions.

First let me share the tips and the examples from the article (you’ll need to read the whole article for full context), and then, after each one, I’ll add my facilitator contextual comments!

Eight tips for good evaluation questions:

  1. Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions, but these should be directly relevant to answering the main question under which they fall. When facilitating, think of each question as a stepping stone along a path that may or may not diverge. Questions in a fluid interaction need to reflect the emerging context. So plan, but plan to improvise the next question.

  2. Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements. Same in facilitation!

  3. Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions to get more at the program’s contributions to peace and stability compared to longer-term economic development goals. Ditto! I have to be careful not to keep asking questions for my OWN interest!

  4. Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national). YES! I need to learn this one better. I always have too many. 

  5. Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge that they did not have sufficient data to fully answer certain questions in their report. This is more subtle in facilitation as we have the opportunity to try and surface/tease out answers that may not be clear to anyone at the start. 

  6. Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation and that the evaluation would serve more to validate/tweak this design rather than truly shape it from scratch. The team thus tailored their questions to get more at peace, security, and governance issues given the focus on the follow-on activity. AMEN! YES!

  7. Don’t use questions which contain two or more questions in one. See for example question #6 in the attached—“out of the different types of infrastructure projects supported (solar dryers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to answer which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and number of beneficiaries reached. Thus, while box culverts (small bridges) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao was minimal. Same for facilitation. Keep it simple!

  8. Use questions which focus on what was achieved, how and to what extent, and not simple yes/no questions. In the GEM example, simply asking if an activity had or had not met its intended targets was much less informative than asking how those targets were set, whether those targets were appropriate, and how progress towards meeting those targets was tracked. Agree on avoiding simple yes/no unless of course, it is deciding if it is time to go to lunch.

I’m currently pulling together some materials on evaluating communities of practice, and I think this list will be a useful addition. I hope to be posting more on that soon.

By the way, BetterEvaluation.org is a great resource. Full disclosure, I’ve been providing some advice on the community aspects! But I’m really proud of what Patricia Rogers and her amazing team have done.

Data, Transparency & Impact Panel –> a portfolio mindset?

Yesterday I was grateful to attend a panel presentation by Beth Kanter (Packard Foundation Fellow), Paul Shoemaker (Social Venture Partners), Jane Meseck (Microsoft Giving) and Eric Stowe (Splash.org), moderated by Erica Mills (Claxon). First of all, from a confessed short attention spanner, the hour went FAST. Eric tossed great questions for the first half, then the audience added theirs in the second half. As usual, Beth got a Storify of the Tweets and a blog post up before we could blink. (Uncurated Tweets here.)

There was much good basic insight on monitoring for non profits and NGOs. Some of my favorite soundbites include:

  • What is your impact model? (Paul Shoemaker I think. I need to learn more about impact models)
  • Are you measuring to prove, or to improve (Beth Kanter)
  • Evaluation as a comparative practice (I think that was Beth)
  • Benchmark across your organization (I think Eric)
  • Transparency = Failing Out Loud (Eric)
  • “Joyful Funeral” to learn from and stop doing things that didn’t work out (from Mom’s Rising via Beth)
  • Mission statement does not equal IMPACT NOW. What outcomes are really happening RIGHT NOW (Eric)
  • Ditch the “just in case” data (Beth)
  • We need to redefine capacity (audience)
  • How do we create access to and use all the data (big data) being produced out of all the M&E happening in the sector (Nathaniel James at Philanthrogeek)

But I want to pick out a few themes that were emerging for me as I listened. These were not the themes of the terrific panelists — but I’d sure love to hear what they have to say about them.

A Portfolio Mindset on Monitoring and Evaluation

There were a number of threads about the impact of funders and their monitoring and evaluation (M&E) expectations. Beyond the challenge of what a funder does or doesn’t understand about M&E, they clearly need to think beyond evaluation at the individual grant or project level. This suggests making sense across data from multiple grantees –> something I have not seen a lot of from funders. I am reminded of the significant difference between managing a project and managing a portfolio of projects (learned from my clients at the Project Management Institute. Yeah, you Doc!) IF I understand correctly, portfolio project management is about the business case –> the impacts (in NGO language), not the operational management issues. Here is the Wikipedia definition:

Project Portfolio Management (PPM) is the centralized management of processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage a group of current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals ― while honouring constraints imposed by customers, strategic objectives, or external real-world factors.

There is a little bell ringing in my head that there is an important distinction between how we do project M&E — which is often process heavy and too short term to look at impact in a complex environment — and being able to look strategically at our M&E across our projects. This is where we use the “fail forward” opportunities, the iterating towards improvements AND investing in a longer view of how we measure the change we hope to see in the world. I can’t quite articulate it. Maybe one of you has your finger on this pulse and can pull out more clarity. But the bell is ringing and I didn’t want to ignore it.

This idea also rubs up against something Eric said which I both internally applauded and recoiled from. It was something along the lines of “if you can’t prove you are creating impact, no one should fund you.” I love the accountability. I worry about how to actually do this meaningfully, both a) in very complex non profit and international development contexts, and b) for the reason I turn to next…

Who Owns Measurement and Data?

Chart from Effective Philanthropy 2/2013

There is a very challenging paradigm in non profits and NGOs — the “helping syndrome”: the idea that we who “have” know what the “have nots” need or want. This model has failed over and over again and yet we still do it. I worry that this applies to M&E as well. So first of all, any efforts towards transparency (including owning and learning from failures) are stellar. I love what I see, for example, on Splash.org, particularly their Proving.it technology. (In the run up to the event, Paul Shoemaker pointed to this article on the disconnect in information needs between funders and grantees.) Mostly I hear about the disconnect between funders’ information needs and those of the NPOs. But what about the stakeholders’ information needs and interests?

Some of the projects I’m learning from in agriculture (mostly in Africa and SE/S Asia) are looking towards finding the right mix of grant funding, public (government and international) investment and local ownership (vs. an extractive model). Some of the more common examples are marketing networks for farmers to get the best prices for their crops, lending clubs and using local entrepreneurs to fill new business niches associated with basics such as water, food, housing, etc. The key is the ownership at the level of stakeholders/people being served/impacted/etc. (I’m trying to avoid the word users as it has so many unintended other meanings for me!)

So if we are including these folks as drivers of the work, are they also the drivers of M&E and, in the end, the “owners” of the data produced? This is important not only because for years we have measured stakeholders and rarely been accountable to share that data, or to actually USE it productively, but also because change is often motivated by being able to measure change and see improvement. 10 more kids got clean water in our neighborhood this week. 52 wells are now being regularly serviced and local business people are increasing their livelihoods by fulfilling those service contracts. The data is part of the on-the-ground workings of a project, not a retrospective to be shoveled into YARTNR (yet another report that no one reads).

In working with communities of practice, M&E is a form of community learning. In working with scouts, badges are incentives, learning measures and just plain fun. The ownership is not just at the sponsor level. It is embedded with those most intimately involved in the work.

So stepping back to Eric’s staunch support of accountability, I say yes, AND I want full ownership of that accountability by all involved, not just the NGO/NPO/Funder.

The Unintended Consequences of How We Measure

The question of who owns M&E and the resulting data brings me back to the complexity lens. I’m a fan of the Cynefin Framework to help me suss out where I am working – simple, complicated, complex or chaotic domains. Using the framework may be a good diagnostic for M&E efforts, because when we are working in a complex domain, predicting cause and effect may not be possible (now, or into the future). If we expect M&E to determine whether we are having impact, this implies we can predict cause and effect and focus our efforts there. But things such as local context may mean that everything won’t play out the same way everywhere. What we are measuring may end up having unintended negative consequences (this HAS happened!). Learning from failures is one useful intervention, but I sense we have a lot more to learn here. Some of the threads about big data yesterday related to this — again, a portfolio mentality looking across projects and data sets (calling Nathaniel James!). We need to do more of the iterative monitoring until we know what we SHOULD be measuring. I’m getting out of my depth again here (Help! Patricia Rogers! Dave Snowden!). The point is, there is a risk of being simplistic in our M&E and a risk of missing unintended consequences. I think that is one reason I enjoyed the panel so much yesterday, as you could see the wheels turning in people’s heads as they listened to each other! 🙂

Arghhh, so much to think about and consider. Delicious possibilities…

 Wednesday Edit: See this interesting article on causal chains… so much to learn about M&E! I think it reflects something Eric said (which is not captured above) about measuring what really happens NOW, not just this presumption of “we touched one person therefore it transformed their life!!”

Second edit: Here is a link with some questions about who owns the data… may be related http://www.downes.ca/cgi-bin/page.cgi?post=59975

Third edit: An interesting article on participation with some comments on data and evaluation http://philanthropy.blogspot.com/2013/02/the-people-affected-by-problem-have-to.html

Fourth Edit (I keep finding cool stuff)

The public health project is part of a larger pilgrimage by Harvard scholars to study the Kumbh Mela. You can follow their progress on Twitter, using the hashtag #HarvardKumbh.

 

Looking Back on the Project Community Course

Long Post Warning!

I was reminded by a post from Alan Levine reflecting on a course he taught this past Autumn (Looking Back on ds106 – CogDogBlog) that I had promised a reflective post on the Project Community course I co-taught September through November at The Hague University of Applied Sciences with Maarten Thissen, Janneke Sluijs, Shahab Zehtabchi and Laura Stevens, with technology stewardship by Alan himself. It is easy to let the time pass, but all those ideas and observations tend to fade away. So after a few bites of fine holiday chocolates, it is time to dive in. (This will be cross-posted on my course Tumblr blog, which feeds into the overall course site.)

What was it?

Course Goal: Here is the text from the course description:

The intersection of technology and social processes has changed what it means to “be together.” No longer confined to an engineering team, a company, a market segment or country, we have the opportunity to tap into different groups of people using online tools and processes. While we initially recognized this as “online communities,” the ubiquity and diversity of technology and access has widened our possibilities. When we want to “organize our passion” into something, we have interesting choices.  It is time to think about a more diverse ecosystem of interaction possibilities which embrace things such as different group configurations, online + offline, short and long term interactions, etc. In this course we will consider the range of options that can be utilized in the design, testing, marketing and use of engineering products.

My shorthand is that the course was an exploration of how online communities and networks can be part of a designer’s practice. When and how can these forms be of strategic use? You can review the whole syllabus here – and note that we tweaked it as we went! The students were all international students, and this was one of their first courses in the Design Engineering Program. Some did not have strong English language skills, and the course was in English.

The Design: Let me start by saying this was designed as an OPEN experience, but it wasn’t a MOOC or anything like that. Maarten had asked me to design the course, building on a set of learning goals previously used for this course, but to translate the ideas into practice by DOING much of the course online. While the class met F2F once a week and had access to the Netherlands based faculty, we engaged, worked and explored together online. This stuff needs more than theory. It requires practice. And by practicing and learning “in public” rather than on an institutionally protected platform, students could tap into real communities and networks. If there is one thing I harp on when I talk to folks in Universities, it is the critical importance of learners connecting with real communities and networks of practitioners in their fields of learning BEFORE they leave school. These connections are fundamental to both learning and developing one’s practice out in the world.

I also wanted to focus on a particular sector to help us think practically about using networks and communities along the design process and avoid grand generalizations, so I suggested we use design in the international development context. This fit with my background, network (to draw upon) and experience. I was leery of stepping into the more distinct world of commercial product design, about which I know NOTHING! What quickly became a huge lesson for me was that many of the students had little knowledge about international development, the Millennium Development Goals, etc. So we all had a lot to learn!

The other aspect of the design was to bring three elements together: sense making discussions about the subject matter (synchronously in class and asynchronously on the class website), insights from weekly “guests” shared via 5-10 minute videos (to bring in a variety of voices), and action learning through small group experiences and team projects. I know there are strong feelings about team projects, but building collaboration skills was part of the course learning objectives, so this was a “must do.” And we spent time talking about the how – and reflecting on what was and wasn’t working – as a vector for learning these skills.

The Resources

We knew we wanted real examples, a variety of sources, and multimedia. Many of the students speak English (the class language) as a second, third or fourth language, so the use of visually rich media was important. What we did not count on was the lack of time to USE the resources. 😉 A typical pitfall!

  • Readings and examples. We collected a wide range of resources in a Google doc – more than we could ever use. We then picked a few each week as assigned readings, but it became clear that most people did not make time to read all of them. So when I felt something was particularly important, I harped on it and the on-the-ground team asked people to read it during the weekly class meeting. We used the examples in a more ad hoc manner as teams began to develop their projects.
  • Videos – from faculty and guests. For example, here is my Introductory video, and the other guest videos can be seen in each weekly update. All the interviews I did (via Google Hangout) can be found here. The students’ final project videos are here. I have not done an analysis of the number of views per video, but since they are public, I can’t sort out student vs. external views. That said, some of the videos have fewer views than the number of enrolled students. Go figure!
  • Visitors – I had hoped to bring people in live, but we quickly discerned that the tech infrastructure for our online/F2F hybrid meetings was not good enough, so we brought people in via recorded videos and encouraged students to ask the guests questions on the guests’ own blogs and websites. There was just a wee bit of that…

Technology stuff…

The Course WordPress site: It is online, so of course, there is technology. There was no appropriate platform available from the hosting university (we did not consider Blackboard appropriate because it was not open enough, and we did not have programming resources to really customize it). So I called my pals who know a lot about open, collaborative learning configurations – Jim Groom and Alan Levine, some of the amazing ds106 team. Alan was ready and willing, so he was roped in! Alan built us a WordPress base with all kinds of cool plug-ins. You will have to ask Alan for details! He has been doing this for a variety of courses, and blogs about it quite a bit, so check out da blog! The main functions of the course site included: providing a home for the weekly syllabus/instructions, a place to aggregate student blogs, and a place to link to course resources. Alan set up pages for each week and taught the team how to populate them. (Edit: Alan wrote a post with more details on the set up here. Thanks, Alan!)

Tumblr blogs: Instead of a multiple-user WordPress installation, Alan suggested that we use the very easy to set up Tumblr blogging platform and then aggregate into the site. Again, I’ll leave the detail to Alan, but the pros were that some students already had Tumblr blogs (yay!), Tumblr could integrate many types of media (strong w/ photos), and it was easy for people to set up. The key is to get them to set the blogs up the first week and share the URL. Alan set up a form to plop that data right into a Google spreadsheet, which was also our student roster, as well as a great Tumblr guide. The main con was that the comments via WordPress were dissociated from the original posts on Tumblr, so if you wanted to read a post in its original context, you missed the comments. There were tweaks Alan implemented based on our team and student feedback, mainly to make it easier to comment on the blogs (in the WP site — Tumblr is not so much about commenting), and to help make new comments and posts more visible on the main site through the use of some sidebar widgets. I liked the Conversational views, but I also found I needed to use the admin features to really notice new posts and comments. Plus we had to do a lot of initial comment approval to get past our spam barrier in the first weeks.
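For the technically curious, the underlying idea of the aggregation is simple enough to sketch: each Tumblr blog publishes an RSS feed, and the course site pulls every student’s feed into one stream. Alan’s actual setup used WordPress plug-ins (ask him for specifics!); what follows is only a rough illustration of the concept, assuming a roster.csv exported from our Google spreadsheet with a hypothetical blog_url column:

```python
# Rough sketch of the aggregation idea (NOT Alan's actual WordPress setup):
# read each student's blog URL from the roster and pull recent posts via RSS.
import csv

import feedparser  # third-party library: pip install feedparser

def recent_posts(roster_path, per_blog=3):
    posts = []
    with open(roster_path, newline="") as f:
        for row in csv.DictReader(f):
            # Tumblr blogs generally expose an RSS feed at <blog URL>/rss
            feed = feedparser.parse(row["blog_url"].rstrip("/") + "/rss")
            for entry in feed.entries[:per_blog]:
                posts.append((entry.get("published", ""),
                              entry.get("title", ""),
                              entry.get("link", "")))
    # Rough "newest first" ordering; good enough for a sketch.
    return sorted(posts, reverse=True)

for published, title, link in recent_posts("roster.csv"):
    print(published, "|", title, "|", link)
```

The same feed-pulling idea is what let the WordPress site act as a hub while students kept ownership of their own Tumblr blogs.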

Each faculty member had a Tumblr blog, but in truth, I think I was the main one actively blogging… I also used tags to distinguish my general reflective blogging from “announcement” posts, which provided student direction.

I tried to comment on every student’s blog at the beginning and end of the course. Each of the other team members had a group of students to follow closely. I chimed in here and there, but wanted to make sure I did not dominate conversations, nor set up the expectation that the blog posts – mostly reflective writing assignments – were a dialog with me. Students were also asked to read and comment upon a selection of other students’ blogs. At first these were a bit stilted, but they got their text-based “conversation legs” after a few weeks, and there were some exchanges that I thought were really exemplary.

Google Docs: We used Google Docs and spreadsheets to do all our curriculum drafting, planning and coordinating as a faculty team. I need to ask the team if they would be willing to make those documents public (except for  the roster/grading) as a way to share our learning. Would you be interested in seeing them?

Meetingwords.com: Synchronous online meetings for large groups create a context where it is easy to “tune out” and multitask. My approach to this is to set up a shared note taking site and engage people there to take notes, do “breakout” work in smaller groups and generally offer another modality for engagement and interaction. We used Meetingwords.com and Google docs for this, later sharing cleaned up notes from these tools. I like that Meetingwords has the shared note taking (wiki) on the left, and a chat on the right. It is based on Etherpad, which Google eventually acquired. So we were using “cousin” technologies! As one of the team noticed, chat is also a great place to practice written English!

Blackboard: Blackboard was used for enrollment and grading as I understand it. I never saw it nor did I have access to it.

Live Meetings: Skype, Google+/Google Hangouts: We considered a variety of web meeting platforms for our weekly meetings. We did not have access to a paid service. We tried a few free ones early on and had some challenges, so we started the course with me Skyping in to a single account projected on a screen with a set of speakers. Unfortunately, the meeting room was not ideal for the sound set up, and many students had difficulty hearing me clearly. This, and the fact that I talk too fast…

We then decided we wanted to do more with Google Hangouts, which the faculty team had used in early planning meetings. At the time, only 10 active connections were available, so at first we used it as we had used Skype, with me connecting to one account, and later we used it for smaller team meetings and breakouts; with each team in a separate room, we could have one account per team plus me. Sometimes this worked really well. Other times we had problems with dropped connections, noise, people not muting their computers, etc. In the end, we need to develop better live meeting technology and meeting space for future iterations. That was the standout technical challenge! You can read some Hangout feedback from the first group experiment here.

Team Spaces – Facebook and…: Each project team was asked to pick their own collaboration platform. Quite a few chose Facebook, and an overall course group was also set up on Facebook. One team chose Basecamp, which they liked, but after the 30 day free trial they let it lapse. Other team spaces remained a mystery to me. I think their tutors knew! When you have multiple platforms, it would be good to have a central list of all the sites. It got pretty messy!

Twitter: I set up a Twitter list and we had a tag (#commproj12, or as I mistyped it #projcomm12!) and asked people to share their Twitter names, but only a few in the class were active on Twitter. In terms of social media networks, Facebook was clearly dominant, yet some of the students had not been previously active on any social networks. It is crucial not to buy into assumptions about what age cohort uses which tools! I did use Twitter to send queries to my network(s) on behalf of the class and we did have a few fruitful bursts of interactions.

Email – yeah, plain old email: Finally, we used email. Not a lot, but when we needed to do private, “back channel” communications with the team or with students, email was useful. But it was remarkable how this course did not significantly add to my email load. Times have changed!

Overall, I think the students had a good exposure to a wider set of tools than many of them had used before. Our team was agile in noticing needed tweaks and improvements and Alan made them in the blink of an eye. That was terrific. I wonder if we could get a couple of students involved in that process next time? We also knew and expected challenges and used each glitch as a learning opportunity and I was grateful the students accepted this approach with humor and graciousness — even when it was very challenging. That is learning!

What happened? What did I learn?

Beyond what was noted above, I came away feeling I had been part of a good learning experience. As usual, I beat myself up a bit on a few things (noted below) and worried that I did not “do right” for all of the students. Some seem to have really taken wing and learned things that they can use going forward. Others struggled and some failed. I have a hard time letting go of that. There is still data to crunch on page views etc. Let’s look at a few key issues.

Team Preparation & Coordination (Assumptions!): I designed the course, but I did not orient the team to it at the start. We had little time together to coordinate (all online) before the course began. You don’t even know how many students there are until a few days before the start, and THEN tutors are allocated (as I understand it; I may have that wrong!). Maarten was my contact, but I did not really know the rest of the team. My advice: get to know the team and make sure you are all on the same page. We’ll do that next time! That said, I am deeply grateful for how they jumped in, kept a 100% positive and constructive attitude and worked HARD. I could not wish for a more wonderful, smart, engaged team. THANK YOU! And I promise I will never again assume that the team is up to speed without checking. PROMISE!

The Loud (and very informal) American: As noted above, our live meeting tech set up was not ideal. So when I was beamed into the weekly meetings, I was coming across as loud, incomprehensible and fast-talking. I was grateful when the teaching team clued me in more deeply to the challenges based on their observations in the room. That was when we shifted from large groups to small groups. I think I was much more able to be of use when we met at the project team level. I could get to know individual students, and we could talk about relevant issues. And I could then weave across the conversations, noting when something one group was doing was related to another group’s work. Weaving, to me, is a critical function of the teaching team, both verbally in these meetings, and across blog posts. This ended up being a better way to leverage my contributions to the students. That said, I did not connect with all of them, nor successfully with all of the groups. We need to think through this process for next time.

On top of it, I’m very informal and this group of international students mostly came from much more formal contexts. Talk about a shift as we negotiated the informality barrier. During the course we also had to address the difference between informality and respect. At one point we had one learner anonymously insert an inappropriate comment in the chat and our learning community intervened.

Language, Language, Language: Writing backgrounders and instructions in the simplest, clearest language is critical. I can always improve in this area. We do need a strategy for those students who still have to strengthen their English language skills. I worry that they get left behind. So understanding language skills from the start and building appropriate scaffolding would be helpful.

Rhythm of Online and Face-to-Face: Unsurprisingly, we needed more contact and interaction early on and should have scheduled perhaps two shorter meetings per week for the first three weeks, then built a blend of small and large group sessions. I’d really love to see us figure out a way to make the small group sessions demand driven. That requires us to demonstrate high value early on. I think a few of the early small group meetings did that for SOME of the students (see this recording from our hangout), but not all. The F2F faculty team has suggested that we do more online and they do less F2F, which I think, given the topic, is both realistic and useful.

Student Self-Direction and Agency: There is a lot of conditioning we experience to get us to work towards satisfying the requirements for a grade. This seems to be the enemy of learning these days, and helping students step out of “how do I get a good mark” into “how do I thrive as a learner and learn something that takes me forward in my education” is my quest. At the start of the course, we tossed a ton of ideas and information at the students and they kept seeking clarity. We declared that “confusiasm” was indeed a learning strategy, and that generating their own questions and learning agenda was, in the end, a more useful strategy than hewing to a carefully (over-)constructed, teacher-driven syllabus. That is a leap of faith. With good humor, some missteps on all sides and a great deal of energy, most of the group found ways to start owning their learning. This was manifest in the changes in their reflective blog posts. I was blown away by some of the insights, but more important was how their writing deepened and matured. I hypothesize that it was important for them to get comments and know they were being “heard.” It is always an interesting balance for me. No feedback, or not enough, dampens things. Too much, and the learner’s own agency is subverted to pleasing the commenters vs. working on their own learning agenda.

I was intrigued to watch students get used to the new experience of writing in public. Few of the students had this experience before. I’d love to interview them and hear what they thought about this, especially those who had comments from people outside the course (mostly folks I linked to from my network — and I’d like to do more of that). It is my experience that an open learning environment fosters learning reciprocity, both within the class cohort and with professionals out in the world. I’d like to deepen this practice in future iterations.

There is also the problem of making too many offers of activities. Each week there was a video, a discussion around a key topic, 2-3 activities, reflective blogging and, after the first few weeks, significant group work. The design intent was that these things all worked together, but some weeks that was not so clear. So again – simplify! Keep the bits integrated so the learning agenda is served, moving forward.

We also had some ad hoc offers like helping co-construct a glossary and adding to the resource page. Those had just about ZERO uptake! 😉 Abundance has its costs! We did get some good questions, and some of the students were note-taking rock stars at our live meetings. Speaking of that, a few of our students were rock star FACILITATORS and TECHNOLOGY STEWARDS. Seeing them in action provided perhaps the most satisfying moments of the whole course for me!

Student Group Projects: The project teams were designed around the five parts of design that the program uses. With 9 groups of 5-6 students (one group was alumni who only marginally participated) that meant some topics had two teams while others had just one. Alan set up the tags so it was easy for teams with shared topics to see each other’s blog stream, but I’m not sure the students picked up on/used that. A clear learning was that we needed to help people see the whole as well as the parts, and the projects could have been designed to be interlinked. That would add more coordination, but if we picked a clearer focus than “helping an NGO” and maybe even worked with an actual NGO need identified up front, the projects might have had a bit more grounding in reality.

I’m not sure we set up the five design areas well enough. That warrants a whole other blog post. To both understand the concept, put it in the context of a real NGO need and then create a short video is a tall order. It took the teams a number of weeks to really dig in to their topics and establish their own collaborative process. And of course that put a lot of pressure on video production at the end. I think the single most useful design change I’d institute is to have a required storyboard review step before they went into production. Then we could have checked on some key points of understanding before they produced.

A second production element came to light — literacy about what is acceptable use of copyrighted material. This relates to good practices about citing sources and giving supporting evidence for conclusions. There is always a space for one’s opinion, but there is also useful data out there to inform and support our opinions. I think I’d set the bar higher on this next time, and do it early – with good examples.

Student Response: I have not seen the student evaluations and really look forward to seeing them. I expect some sharp critique as well as some satisfaction. I personally know we learned a lot and can really improve a subsequent iteration. I am also interested to understand how this experience lands within the institution as they explore if and how they do more online elements in their learning structure. I smiled often when I read comments from the more social-media literate/experienced students and wondered how we could leverage their knowledge more as tech stewards in the future. Here is a comment we loved: Geoffrey – “the world is freakin bigger than facebook.”

Alan wrote something in his ds106 reflection that resonated for me in Project Community.

This is not about revolutionizing education or scaling some thing to world changing proportions, it is not even about us changing students, its showing them how to change themselves. I see in their writings new awarenesses of media, of the web, of their place in it, I see unleashed creativity, I see an acceptance of a learning environment that pushes them to reach out and grab their own learning.

 Next time?

First of all, I hope I get invited back to participate next year. We challenged ourselves and learned a lot. I think we can build on what worked and certainly improve many things. And from this, make it less work for the team. We learned a lot about the online/offline rhythm and from our team debrief, I sensed a strong inclination to do MORE online. But we also have to simplify things so that we can spend most of our time co-learning and facilitating rather than “explaining” what the course, the assignments and the projects were about. Clarity, simplicity — two key words for another round!

If you made it all the way through this, do you a) have any questions, b) have insights to share, or c) find something you can use the next time you design a course? Please share in the comments!

Artifacts:

Later Added Interesting Connections:

As I find some cool things related to this post, I’ll add them here. So expect more add/edits!

Planning and Evaluating Your CoP (amplifying and other good stuff)

Today Jeff Jackson tweeted a link to this YouTube video from the USAID KM Impact Challenge that I did with them earlier this year. I know – I posted this already, but Jeff’s note gave it a twist. And having hung out with Alan (aka cogdog) Levine in Tasmania last week, I am attending to his principle that in social media, we ADD value. We don’t just retweet, rebroadcast, etc. Jeff added value.

Here’s his note: “Check out @nancywhite’s Full Circle Associates on Communities of Practice (CoPs). Great way to review and consider how you develop your #PLN and #AltProDev.” Wow, I had not thought about this from the perspective of a PLN (personal learning network) but it sure does work. Thanks Jeff, for helping me see another perspective. I used the same communities of practice framework the last three weeks in my workshops in Australia about teaching and learning online. We weren’t talking about CoPs, but the framework is useful. (More on that in a subsequent post, currently in edit stage!)

For a refresher, here is the video.

Nancy White discusses various aspects of strengthening CoPs, mechanisms to measure their effectiveness and improve our understanding of how people are participating in CoPs.

To access the two items referred to in the video, please visit:

Promoting and Assessing Value Creation in Communities and Networks: a Conceptual Framework:
http://wenger-trayner.com/resources/publications/evaluation-framework/

The Activity Spidergram:
http://fullcirc.com/wp/wp-content/uploads/2011/06/SpidergramWorksheet2011.pdf