Sep 09 2013
I’ve been trawling around to find examples of monitoring and assessment rubrics to evaluate how well a collaboration platform is actually working. In other words, are the intended strategic activities and goals being fulfilled? Are people using it for unintended purposes? What are the adoption and use patterns? How do you assess the need for tweaks, or for changing or removing functionality?
I can find piles of white papers and reports on how to pick a platform in terms of vendors and features. Vendors seem to produce them in droves. I certainly can fall back on the Digital Habitats materials in that area as well.
But come on, why are there so few resources that help us understand whether our existing platforms and tool configurations are or are not working?
Here are some of my burning questions. Pointers and answers DEEPLY appreciated. And if you are super passionate about this, ask me directly about the action research some of us are embarking upon (nancyw at fullcirc dot com)!
- How do you evaluate the strategic use of your collaboration platform(s) and tools in your organization?
- What indicators are you looking for? (There can be a lot, so my assumption is we are looking for ones that really get to the strategic sweet spot)
- Does the assessment need to be totally context specific, or are there shared patterns for similar organizations or domains?
- How often do you do it?
- How do we involve users in assessments?
- How have the results prompted changes (or not, and if not, why not)?
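To make the indicator question a little more concrete: here is a minimal sketch of what computing a few basic adoption indicators from a platform's activity log might look like. Everything in it is an assumption for illustration (the event format, which actions count as "contributions," the indicator names); it is not how any particular platform reports things, and the real strategic indicators would need to be chosen in context.

```python
from collections import defaultdict
from datetime import date

# Hypothetical activity log: (user_id, day, action) tuples, as might be
# exported from a collaboration platform. Names and actions are made up.
events = [
    ("ana",  date(2013, 9, 2), "post"),
    ("ana",  date(2013, 9, 3), "comment"),
    ("ben",  date(2013, 9, 2), "read"),
    ("cory", date(2013, 9, 5), "post"),
    ("ben",  date(2013, 9, 6), "read"),
]

def adoption_indicators(events, total_members):
    """Compute a few simple indicators: the share of members who are
    active at all, the share who contribute (post/comment rather than
    just read), and the average number of distinct active days."""
    active = set()
    contributors = set()
    days_active = defaultdict(set)
    for user, day, action in events:
        active.add(user)
        days_active[user].add(day)
        if action in ("post", "comment"):  # assumed 'contribution' actions
            contributors.add(user)
    return {
        "active_ratio": len(active) / total_members,
        "contributor_ratio": len(contributors) / total_members,
        "avg_days_active": sum(len(d) for d in days_active.values()) / len(active),
    }

print(adoption_indicators(events, total_members=10))
# For this toy log of 10 members: 3 active, 2 contributing.
```

Numbers like these only answer the "adoption and use patterns" question; whether the platform is serving its strategic purpose still needs the qualitative assessment the questions above are after.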
Please, share this widely!