Tuesday, February 5, 2008

Reuse is not an EA metric

Enterprise Architects are always on the prowl for metrics which can be used to validate their existence.  Given that EA is more similar to Strategic Planning and/or Audit than, say, Sales, IT Operations, or even Project Management, this can be a considerable challenge.  Frequently, we hear that "reuse" is a measure of EA effectiveness. 

We all know that reuse saves time and money, and some can even show that a strategy of reuse will improve quality and time to market.  Reuse is a good thing; most of us would hold this as an axiom, but it's not a measure of architecture.  Don't get me wrong: measuring reuse, assuming you could actually do it, is worth doing, but it speaks more to the effectiveness of a Shared Services Team.  If the Shared Services Team is part of the EA Group, well then, I suppose it could be a sub-measure of EA.  But on the whole, reuse doesn't measure the value of architecture.

Architecture is more about quality attributes such as reliability, performance, throughput, responsiveness, to some degree functionality, and cost (which implies time), all of which could be achieved without a scrap of reuse.  Again, again, again, reuse is a good thing and is rightfully desired.  Reusing existing components, however, doesn't mean those components are loosely coupled, properly granular, or even well-designed.  More importantly, reuse doesn't speak to the effectiveness of the EA program.

Reuse doesn't, by itself, produce organizational flexibility, adaptability, or resiliency.  Reuse doesn't indicate alignment between the business and the technology segments of the organization.  So, measure reuse because it's a good thing to do for many quantifiable and anecdotal reasons, but it doesn't express the value of a good architecture or architecture program.

Monday, February 4, 2008

Who really controls architecture?

So here is a scary thought: architecture is controlled by whoever last writes the code.  In the physical world, an architect designs a solution, and as the product (tool, car, house, nuclear power plant) is being constructed, any number of audits are completed to ensure that what was designed is what is actually being built.

Financial institutions often provide funding for new office buildings, sometimes costing hundreds of millions of dollars.  The developer does not receive a check for the full amount; rather, he gets an amount sufficient for digging the hole.  The bank then sends smart people to determine whether the hole is suitable for the proposed building, and only if so is the next round of funding/approval given. 

This is not so true in the digital world, at least not in the majority of corporate development centers.  A well-architected solution is given to a developer (a topic in and of itself!) who then proceeds to write code.  Now, if the digital world mimicked the physical world, we'd ask the developers to construct an object model which could be compared to the original architecture for validation.  Then the developer(s) would have to construct, in the case of Java, interfaces for all of the to-be-derived classes, and again there'd be an audit (a rough sketch of that idea follows below).  Digitally, we let the developer go until they have a functional system, albeit incomplete.  We then test the functionality to determine progress, and rarely if ever re-examine the actual architecture as coded.  Therefore, the architecture is under the control of the person who last wrote the code.
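To make that concrete, here is a minimal, hypothetical sketch of what such an audit point could look like in Java.  The names (PaymentService, GatewayPaymentService) are invented purely for illustration; the point is only that the architect publishes the contract first, and the implementation can later be compared against it.

    // Hypothetical contract published by the architect before coding begins.
    // The interface, not the implementation, is what an audit compares against.
    interface PaymentService {
        String authorize(String accountId, long amountInCents);
        void reverse(String authorizationId);
    }

    // A developer's implementation must declare which contract it fulfills,
    // so a later review can compare the code's shape to the intended architecture.
    class GatewayPaymentService implements PaymentService {
        public String authorize(String accountId, long amountInCents) {
            // ... call out to the external gateway and return its authorization id
            return "AUTH-" + accountId + "-" + amountInCents;
        }

        public void reverse(String authorizationId) {
            // ... compensating call to the gateway
        }
    }

With the contract in place, the audit becomes a structural question (do the classes still honor the published interfaces?) rather than a purely functional one.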

What techniques do you employ to ensure that developers are coding as the architect intended?  Code reviews are seldom used consistently, and corporate timelines are so tight that adding additional delay into a project doesn't seem acceptable.  How do you validate and verify?
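One lightweight possibility, offered only as a sketch and reusing the hypothetical PaymentService and GatewayPaymentService types from above, is to fold a structural check into the build itself, so conformance is verified mechanically instead of waiting on a manual review.

    // A hypothetical build-time audit: verify that the classes the team produced
    // still implement the contracts the architect published.  In practice the
    // class list would be discovered by scanning the classpath, not hard-coded.
    public class ArchitectureAudit {
        public static void main(String[] args) {
            Class<?>[] implementations = { GatewayPaymentService.class };

            for (Class<?> impl : implementations) {
                boolean conforms = PaymentService.class.isAssignableFrom(impl);
                System.out.println(impl.getSimpleName()
                        + (conforms ? " conforms to " : " VIOLATES ")
                        + PaymentService.class.getSimpleName());
            }
        }
    }

Nothing about this is specific to interfaces; the same kind of check could flag forbidden package dependencies or layering violations, which amounts to a crude, automated version of the bank's inspector visiting the hole.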

