How do we evaluate the strategic use of collaboration platforms?

Hey smart people, especially my KM and collaboration peeps, I need your help!

I’ve been trawling around to find examples of monitoring and assessment rubrics to evaluate how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changed or removed functionality?

I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.

But come on, why is there so little out there to help us understand whether our existing platforms and tool configurations are or are not working?

Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!

  • How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
  • What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
  • Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
  • How often do you do it?
  • How do we involve users in assessments?
  • How have the results prompted changes (or not and if not, why not)?

Please, share this widely!


15 thoughts on “How do we evaluate the strategic use of collaboration platforms?”

    1. Thanks, Harold. Do you know of anyone who has used these lenses to actually do an assessment of use? Have you? What I’m finding is I can find quite a few theoretical lenses, lots of potential indicators, but not a lot about how to actually use them to measure. (Now, back to reading your PDF. Thanks again!)

  1. I confess, after years as a so-called ‘CKO’, that I am deeply suspicious of most of the measures of ‘success’, ‘progress’ etc. in complex systems. As curmudgeon Snowden says, there’s a propensity in humans to cognitive biases that see patterns where there are none, find convergence and consensus where there is none, and see causality between actions and results where there is none.

    My sense is that the best way to evaluate any program, tool, platform or process is to set aside specific time for assessment, have an open and candid conversation about it, and ensure the rewards for ‘success’ don’t bias us too much in our assessment. An appreciative approach, I think, works best, i.e.: what could we do to make this better, what could we do instead, what could we stop doing without major consequences to free up time for other work, etc.

    1. Dave, so far this approach sounds the most rational of what I’ve seen. What I hope to explore is how to frame such conversations. I notice a) a lack of self-awareness about how people use a platform (which isn’t a bad thing in itself, but makes reflection a bit tough) and b) often very little clarity about “why bother.” And we wonder why we have challenges with “adoption.”

      I have also been toying with an idea w/ Rachel Cardone to look at activities and then see if there is some sort of useful view from the Cynefin perspective. So many possibilities! Thanks

  2. Might be some synergies here with what Christopher Allen, Tim Bonnemann et al are thinking about w/r/t facilitation patterns in virtual environments/meetings. On the recent call someone said (to many nods of agreement) they just HATE virtual meetings, period. They just find them intrinsically dysfunctional compared to F2F, and have despaired that any kind of non-F2F collaboration/decision-making can ever be particularly effective. That’s a deep hole to climb out of, even if it’s a false perception.

    Agree completely on your two ‘noticings’. When we’re largely unaware of how we work as individuals, it’s pretty hard to become aware and helpful w/r/t work with others.

  3. Hi Nancy,
    Very interesting and sounds like fun 🙂

    You may find something of use in this paper on starting points for assessing collaboration:
    You may already know the answers but some initial questions to help focus (that will change the types of questions you ask) would include:
    – is the platform internal to an org or external/cross-organization/networked?
    – who is the evaluation for?
    I do think that Cynefin could be used as another way to organize the framework. I imagine that some things on the platform should be simple and straightforward, like uploading files or searching for people, while others could be more complicated, such as discussion forums, and perhaps even complex in terms of what emerges from the platform. Once that is delineated, you could organize the review/questions to ask accordingly.

    Good luck and keep us posted.


    1. Thanks, I have that one on the reading list already! Kismet! And we have at least 5 people evaluating platforms in their org, so we have internal, cross, and external. We want to learn across as well as within a single type of use.

  4. Just saying hi so I can subscribe to new comments on this thread. 😉

    And yes, really hope you’ll be able to join one of our upcoming “online patterns” calls/meetings, Nancy!

  5. Hiya Tim. Do you know Aldo deMoor? I’m working with his collaboration patterns. Dave, do you know him?

    Re virtual meetings… it is interesting. I’m starting to find people who are doing them well, but they are small in group size. The ones that seem to have worked with larger (not HUGE… i.e. 50 or less) seem to work when the meeting purpose itself was transformative and the chat was powerful. So many of them are awful. But so many F2F meetings are awful.

  7. Hi Nancy,
    I did this report for UNDP Vietnam a few years ago, focused on effective practices of KS and org learning for umbrella NGOs. There are a few sections where I looked specifically at collaboration platforms, infrastructure and technology which you might find useful. If I recall correctly, one of the clearest success indicators was sustainable use over time, and obviously evidence that people were actually better equipped to do their job/carry out a specific task, through use of a platform. You are probably looking for more granular indicators, but this research came to mind when I read your post — (report is here)
