I began with the assumption that you cannot improve that which you do not measure. My early career included six years as a retail sales manager in a series of department stores where I learned to read and appreciate the Statement of Operations each month to evaluate my progress against goals.
I’m a numbers guy, and I like hard data, stats, and graphs. Trendlines are my friend. It’s a bit geeky, but I am what I am.
I created an IT Architecture program about ten years ago and began looking for the elusive stats to demonstrate both value to the corporation and progress over time. Activity-based metrics were easy: number of projects reviewed, number of hours engaged, number of standards established.
We put everything on our internal web site and tracked usage to show, for instance, that a pool of 300 architects, Lead Developers, and executives hit the web material 60,000 times per year. This served as sort of a value metric.
Over time I became comfortable with the fact that we could never find the direct cause-and-effect numbers that would prove our architecture program was delivering more than the cost of the investment. So we established this response:
The value of the EA program is not expressed as a number.
Isn’t that cool? It even sounds a little deep. If only it weren’t wrong.
Still, we continued to look. I rejected notions like adding up the number of times a project team employed a reusable asset rather than building from scratch, for two reasons. First, rarely did anyone build from scratch, so the concept was silly. Second, how long do you count using existing Ethernet cables as “reuse”? In other words, when do you stop counting reuse as a savings and start saying that’s just how we do business?
Then I read Douglas Hubbard’s book How to Measure Anything, where he makes the point that the REASON you measure something is to reduce uncertainty. That’s all: you don’t have to eliminate all ambiguity, just some. Architects (myself included) try to find the perfect measures: unassailable, unambiguous, direct cause-and-effect numbers. But all we really need is any measure that reduces uncertainty.
With that as a definition, it is easy to come up with metrics, and we now post several on our executive dashboard. Everything can be measured once you free yourself from the notion that the measure must be absolute; it just has to reduce uncertainty.
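One of Hubbard’s own illustrations of how cheaply uncertainty can be reduced is his “Rule of Five”: take just five random samples from any population, and there is a 93.75% chance the population median lies between the smallest and largest of them. A minimal simulation sketch below checks that claim; the population of “hours per project review” is entirely invented for illustration, not data from my program.

```python
# Sketch of Hubbard's "Rule of Five" from How to Measure Anything.
# Claim: the median of a population falls between the min and max of
# 5 random samples with probability 1 - 2 * (1/2)^5 = 0.9375, because
# the only failure modes are all 5 samples below the median or all 5 above.
import random

random.seed(42)

# Hypothetical population: hours each of 1000 project reviews took.
population = sorted(random.uniform(1, 40) for _ in range(1000))
true_median = (population[499] + population[500]) / 2

# Analytic capture probability.
p_capture = 1 - 2 * (0.5 ** 5)  # 0.9375

# Empirical check: how often does the min..max of 5 samples bracket the median?
trials = 10_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"analytic capture probability: {p_capture:.4f}")
print(f"empirical capture rate:       {hits / trials:.4f}")
```

The point is not the specific numbers but the shape of the argument: five cheap observations take you from “no idea” to a range you can defend with better than 93% confidence, which is exactly the kind of uncertainty reduction a dashboard metric needs to deliver.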