Last week I mentioned the importance of good “metrics” when managing Knowledge Workers. This week that came home to roost, as I had to provide input for yearly performance reviews on individuals, managers, and groups.
Since the beginning of our KM journey, we’ve been talking about the behaviors we want to encourage: sharing knowledge (creating solutions), reusing others’ solutions, and updating existing solutions. In a Western, statistics-driven culture, the question becomes how best to encourage those behaviors without doing more harm than good.
If you reward sharing by counting the number of solutions created, then people will simply create lots of them, and we’ve found time and time again that this results in “a big pile of useless junk” and a knowledgebase full of duplicate solutions. It didn’t matter that the answer was already there; folks felt they had to create their own copy. Needless to say, this makes updating information an impossible task.
If you just measure the number of times that employees reuse an existing solution (provided that your tools allow you to do so), they’ll be tempted to just click the button to get points… even if that solution didn’t help them. If they know the answer but don’t see it quickly enough in the knowledgebase, they’ll click anyway.
If editing of existing solutions becomes the primary metric, then folks will be tempted to edit just for editing’s sake, to get into “a style contest.” We’ve seen cases where someone claimed that everything in the knowledge base was wrong, absolutely wrong. What he really meant, when we got down to it, was that the solutions weren’t written the way he would have written them.
We’ve settled on a percentage that attempts to measure how often our knowledge workers use knowledge effectively: we divide the number of times any of the above activities is performed by the number of problems they solve. In a perfect world, every problem would either already be in the knowledge base (reuse it), be there but not quite complete (update it), or be brand new (create a solution).
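The arithmetic behind that percentage can be sketched in a few lines. This is purely illustrative; the function and field names are my own, not the actual reporting tool, and it assumes each knowledge action (create, reuse, update) has already been counted per person.

```python
# Illustrative sketch of the metric described above: knowledge actions
# divided by problems solved. Names here are hypothetical, not the
# actual tooling or schema.

def knowledge_use_rate(creates, reuses, updates, problems_solved):
    """Fraction of solved problems accompanied by a knowledge action.

    In the ideal case described above, every solved problem produces
    exactly one create, reuse, or update, so the rate approaches 1.0.
    """
    if problems_solved == 0:
        return 0.0
    return (creates + reuses + updates) / problems_solved

# Example: an agent who solved 50 problems, reusing 30 solutions,
# updating 5, and creating 10 new ones: 45 actions / 50 problems.
rate = knowledge_use_rate(creates=10, reuses=30, updates=5, problems_solved=50)
print(f"{rate:.0%}")
```

Note that the rate can exceed 100% if someone performs more than one knowledge action per solved problem, which is one reason no single activity is emphasized over the others.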
This week, as I said, we “ran the numbers” for the past few weeks and months on all the individuals. We also looked at groups as a whole and at their managers and coaches. How did they fare? As with anything done by humans, performance varied. I was pleased to see, though, that very few individuals overdid any one aspect of the job; in other words, our decision not to emphasize any one of the three areas over the others worked.
By the way, please don’t think I’m saying that these numbers are the be-all and end-all of the yearly evaluation. They are just one part of our input to the managers; we also discuss attitude, non-measurable contributions, and other topics.
As I mentioned, we don’t just measure individuals; we also look at groups as a whole and at the managers and coaches. I’ll talk more about that in my next article.