Monday Video: 4 Perspectives on CoP Evaluation

I often have a “deer in the headlights” look when someone asks me about evaluating communities of practice. I think that is because I have some stereotype in my head about evaluation. But in fact, when I let my common sense kick in, I know of and use many evaluation approaches. I guess I never called them “evaluation approaches.” Recently my friends Etienne Wenger, Bev Trayner and Maarten de Laat wrote a lovely paper, “Promoting and Assessing Value Creation in Communities and Networks.” It is lovely because in many ways it gives voice to the “common sense” practices I’ve used and seen around me. And it gave me the confidence to say yes to an interview with the KMImpact Challenge earlier this year. The video came out today. Besides sounding like I’m on speed… what do you think? How do you evaluate your communities?

via Nancy White of Full Circle Associates on CoPs – YouTube.

Reviving Community Indicators – Learning

Long-time readers of this blog know I’ve been obsessed with “signs of life” from communities, which I call “community indicators.” I haven’t posted any recently, but something spurred me yesterday…

This past week I was very grateful to be a supporter of Dreamfish’s online retreat for their inaugural group of Dreamfish Fellows. The fellows will be taking leadership/stewardship roles in the Dreamfish network and communities over the next six months. As the first group, they were not only exploring a new group, but also the roles they will play. All online, because cost and distance made a face-to-face gathering a less “sustainable” option.

One of the Fellows, Kate McAlpine, shared some of her work with the Caucus for Children’s Rights in Tanzania.

She shared a draft paper which I have yet to read, but this graphic just “rang my bells.” You’ll have to click into it to read it, and I’ve included the PDF for ease.

This sure is a community indicator in my eyes, capturing (or “reifying” – definition below!) the learning of a community of practice over time. In this case, the indicator is learning over time, and a way to VISUALIZE and SHARE that learning. That is the bit that really stands out for me.

Attribution: Kate McAlpine (2009) Caucus for Children’s Rights, Tanzania.

CCR Graphics_15Dec09

Any community indicators showing up in your life? Should we start thinking about network indicators?

Definition Time… Reification, from Etienne Wenger (Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.), gleaned from a paper by Hildreth (2002):

…to refer to the process of giving form to our experience by producing objects that congeal this experience into ‘thingness’ … With the term reification I mean to cover a wide range of processes that include making, designing, representing, naming, encoding and describing as well as perceiving, interpreting, using, reusing, decoding and recasting. (Wenger, 1998: 58-59)

EFQUEL – European Foundation for Quality in E-Learning – 2010

A few years ago my friend Ulf-Daniel Ehlers invited me to speak at the European Foundation for Quality in E-Learning. When he asked, I said incredulously, WHAT? Me, speak about E-Learning Quality? Thus began my education into what this can mean – beyond certifications, hide-bound rules and what often ends up being a limitation instead of a search for and valuing of quality. Ulf and his colleagues opened up some new vistas for me. In thanks, I’m sitting on this year’s program committee. We haven’t had our virtual meeting yet, so I wanted to find out: what would YOU want to see on the agenda for such a meeting? Take a look at the current outline, which is more action oriented than presentation focused. Do you have any suggestions you want me to take to the meeting?

EFQUEL – European Foundation for Quality in E-Learning – 2010: Lisbon.

Evaluation in Complex Settings and Leadership

I was browsing around the site of a very interesting conference to be held in Australia, Show Me The Change, and I noticed a number of people I respect and follow were involved. No wonder – the site was engaging, inviting. If I were in Australia, I’d go. Here is a bit about the gathering:

A National Conference on ‘Evaluation of Behaviour Change’ for Sustainability

We all know that behaviour change is complex. How do we show what’s working and how do we evaluate it? You are invited to participate in Show Me The Change and explore what matters most to you. You can take part in the ongoing conversations here, on our Show Me The Change blog. We’d love to hear your ideas and your comments. If you’re a Twitter user, please use the hashtag #smtc for your posts there.

I can’t believe I’m actually enjoying wandering through a conference site. Thanks Johnnie, Viv, Bob, Anne, Andrew, Geoff, and Chris.

Then I remembered I had a blog draft noting something that Chris (as in Corrigan) had written waaaay back in September of ’09. Time to dig it up. Why not mash up evaluation and leadership? In truth, I think they have a lot to do with each other – at least participatory leadership does.

If you are interested in leadership, go take a look at the post — too much good stuff to just tease you with a quote!

Chris Corrigan » Describing participatory leadership
How do you explain participatory leadership in one sentence?

A Gem from KM4Dev on Impact and Outcomes

There has been a great discussion on the KM4Dev mailing list over the last 10 days or so about evaluation, impact and measurement. In the context of international development, this is critical. Why do something if it doesn’t make a difference? However, often we don’t do a very good job of figuring out what does make a difference, let alone why (causality). Dave Snowden posted something that just rang the bell for me. I hastily copied it down to share here. The link to the web archive of the email discussion is at the bottom, if you want to mine the rest of the thread. Emphasis is mine.

The linear concept of input, leading to outputs, leading to outcomes which in turn leads to impact is, I think, at the heart of the problem. It implies (and I can see why people would want this) a causal chain that can be replicated.

However, if the system is complex (in the sense of complex adaptive), then any input is a stimulus or modulator which influences but does not determine impact. That means we need to start measuring the sensitivity of a system to different stimuli, and the way in which some stimuli produce a disproportionate effect in that they catalyze other inputs. This is a newly developing area which has not hit the development sector yet, but we are working on it in related fields, loosely termed modulator mapping. It also leads us to evolutionary representations (such as fitness landscapes) and measures based on the stability of landscapes. In all those cases the mathematics are simplified by representation and linked micro-narratives. There is no point in measuring anything if the results do not convince both donors and recipients alike to take action.

All of that moves the “impact” agenda on. I didn’t confuse outputs and outcomes, I conflated them as the model means there is no real difference in what is measured in practice.

via Discussions.

I am now going to start paying attention to this idea of “measuring sensitivity of a system to different stimuli.” This relates closely to two projects I’m working on where I have been sensing this, but hadn’t had the words for it. Now I have a toehold. Onward!
