Wednesday, December 30, 2009

Got Architecture?

Jokes about milkmen have been around forever, even if the practice of home delivery has been dying off.

Joanne read in Vogue magazine that a milk bath does wonders for your skin. So she wrote a note asking the milkman to leave 100 bottles of milk for her next delivery. Eddie, the milkman, saw the note and thought there must be an error in the number of zeros, so he knocked on the door and asked Joanne to clarify the order. Joanne confirmed that she wanted 100 bottles to fill her bath. The milkman then asked, "Do you want it pasteurized?" Joanne replied, "No, just up to my neck."

I don't think my family ever used a milk delivery service, and yes, I look just like my father, thank you very much. Frankly, I wasn't sure they even existed anymore until I stumbled across a couple of news articles indicating that after a peak in the early 1960s, the home milk delivery business plummeted until about 2000, and then started to pick up again. The estimate is that the combination of convenience for two-income families and the desire for fresh local milk has spawned a re-emergence of this industry.

If you worked for such a company and were an Enterprise Architect you'd design systems to maintain the tenuous just-in-time stream of fresh milk from local suppliers, schedule drivers, configure delivery routes, drive down costs, maintain infrastructure, lease vehicles, manage inspections and repairs, and of course, conform to government regulations.

Imagine pitching a new GPS tracking system to improve on-time delivery and enable real-time tracking of milkmen. For the first time ever you would be able to monitor milkmen who stay suspiciously long at certain addresses!

There are only a few key elements to your Milk Home Delivery business. Get them right - you stay alive. Get them wrong - hello, Monster.com. Customers of home milk delivery want convenience and reliability. Frankly, they can buy milk just as good from the local A&P/Giant Eagle/Food Lion - and for less money. Everything you do must be focused on convenience and reliability. Period. Nothing else matters.

If you think you want the new cool GPS, you have to equate it to convenience and reliability. I don't mean you have to find a way to cleverly convince your boss that a GPS will improve reliability - you have to be able to prove it to yourself; and you should be a skeptic.

But, but, but... you say, "Our competitors have a GPS in their trucks!" So what? Can you prove that the presence of the GPS makes them more reliable? I didn't ask if you could make a case for more reliability, I asked, can you prove it?

As technologists, we can be overly enamored with the prospect of adding new solutions, tools, and capabilities to our arsenal.

Architects must understand the key drivers of our businesses and skeptically evaluate every solution in terms of its impact on those key drivers. Are you concerned that your company doesn't have the latest technology? Prove that you need it. If you can't prove it, then spend your time and intellectual energies on improving your use of existing technologies.

Here's a test -- whatever the cost of your next project, assume it is the last 'N' dollars you have for the next two years. What project would you spend it on? Before you implement that SOA solution, the latest JDK, the newest database engine, or a cool new network appliance, prove there is no better way to invest the money.

Architecture Backpacks

I just saw the movie "Up in the Air" with George Clooney. I don't mean to suggest that George was with me when I saw it, only that George, well... you know... was in the, nevermind. He plays the part of a professional "Displacement Manager," i.e., he is hired to fire people - and he's very good at it. It's the kind of show where you love the movie but hate most of the message.

As a side job, Clooney's character gives the world's worst motivational presentations. How he ever lands a speaking engagement is one of those Hollywood riddles that you just have to accept. Kind of like the new Star Trek movie when young Kirk is marooned on an alien world which is close enough to witness the total destruction of a neighboring planet, but somehow is unaffected by the gravitational distortion. I'm such a geek.

In Up in the Air, Clooney's character uses a backpack as a metaphor for all of the things in your life that are holding you back - for example: friends, relatives, spouses, co-workers, and such. He says these are things you throw in your backpack that just weigh you down, hold you back, make you unsuccessful. (This is his "motivational" message!) Please George, don't become a counselor.

Clooney's motivational lack of elegance notwithstanding, the backpack metaphor can be applied to our work as architects, developers, and project managers. Every project comes with certain challenges, such as costs or timelines, that cause us to forgo perfection. We learn to accept low-quality code, obfuscated APIs, fragile connections, magic numbers, and stupid coding tricks just to make our time / cost estimates.

As Clooney suggests, these weigh us down, reduce our satisfaction, and infuse errors and performance problems into our systems. This baggage holds us back, keeping us tied to systems we'd like to leave behind. This year, find something in your backpack to get rid of. This might mean revisiting an existing solution on the down low and replacing a hack with a hooray!

Maybe you can lighten your backpack by committing to never using a magic number in your code. For instance:

for( int x = 0; x <= 9; x++ ) {
    // some code
}

In this instance, '9' is a magic number. Why 9? Why not 8 or 10? Yes, yes, I get it, you want to loop 10 times, but why? How about:

static final int MAX_ALLOWABLE_ACCOUNTS = 10;

for( int x = 0; x < MAX_ALLOWABLE_ACCOUNTS; x++ ) {
    // some code
}

The next developer to touch this code (assuming an IQ over 12) is at the very least going to ask, "Why do we only allow 10 accounts?" This becomes a business question that will yield a better understanding of both the business model and the program code, as well as a lighter backpack for both you and the next coder.

This change takes no additional time, costs zero additional dollars, improves the understanding of the business, reduces future maintenance time, and lightens the backpack for everybody. If that isn't motivation enough, maybe we could arrange a one-on-one with George Clooney.

Sunday, November 22, 2009

The Hardest IT Job to Fill

For the fourth year in a row, Gartner is reporting that the hardest IT job to fill is that of enterprise architect. I'll leave the specifics of the numbers to the fine folks who did the survey, but suffice to say this is either really bad news or really great news - depending on which side of the interview table you sit.

For me, it's a little of both; a little bit bad news and a little good. I have a responsibility to find enterprise architects (that makes the news bad) and I also receive an annual performance review (which makes the Gartner study fabulous).

I attended a university-sponsored discussion recently on the topic of enterprise architecture, and during one of the breakout sessions we discussed what collegiate-level qualifications we would want to see for an inexperienced enterprise architect. The consensus was that a recent grad would have to have a BS in Computer Science and a Master's in either Business Management or Information Science. And that was after we all had a great chuckle over the oxymoron of "inexperienced enterprise architect!"

One of the reasons, I think, for the difficulty in finding enterprise architects is the inability to define enterprise architecture. If there is one area of unanimity in the field of EA, it is that there is no unanimity in defining EA. In fact, the opening remarks at the aforementioned university-sponsored EA discussion included the comment that our tasks for the event specifically excluded trying to define the term. Right - let's have a conversation about enterprise architecture predicated on the proposition that one cannot define it.

Different organizations ask very different things from their EA teams and programs. For some, EA is a pure efficiency play - drive down the cost of IT. For others, it's all about achieving some level of consistency, reducing the number of disparate systems whether or not that is cheaper (though it ought to be). Still others see it as a business play - truly advancing the business through the analytical processes usually attributed to technology. And of course, every permutation in between.

The practitioners inside each of these differently-defined EA teams are all called Enterprise Architects. So in addition to wanting 10 - 15 years of experience, possibly advanced degrees, and a plethora of soft skills such as communications, presentation, and facilitation, we want enterprise architects who fulfill our particular definition of EA.

If you are an enterprise architect, there is good news here. Assuming you have the qualifications, you are one of the most heavily sought IT talents worldwide. If you are a hiring manager / recruiter looking to hire an enterprise architect, start with articulating your vision for EA - counter-intuitively, you may find the search easier if you narrow it down to the needs of your business.

Sunday, November 15, 2009

Could you Repeat the Question?

In the past month our Architecture Review Board meetings have blown out the capacity of our WebEx servers and exceeded our audio bridges. Twice. ARB meetings serve a number of purposes, the most obvious being to ensure alignment between proposed technology solutions and business priorities, which oftentimes means extending technical capabilities. There are, however, a number of other value propositions, such as communicating impending activity, decision criteria, and cultural thought processes, and maintaining contact between technology, infrastructure, and governance teams. For a number of reasons, chief among them the value being delivered and the respectful manner of the conversations, our ARB is a very popular venue.

As I said, we use desktop sharing and audio conferencing to enable participation from a dozen different locations, across four states. Asking all of the participants to leave their buildings to convene in a single venue is just not reasonable, especially when you consider that the same key resources we need to make the ARB effective are in high demand everywhere else. They simply cannot afford travel time on top of their busy meeting schedule.

To the plus side, the ARB benefits from tremendous participation, and the architects benefit from reduced travel / loss of work time. But... they don't always pay attention. I need to find a way to make money off of the statement, "Can you repeat the question?", which is of course shorthand for "I was not actually paying attention to the proceedings, figuring you only invited me as a courtesy, and I thought I'd catch up on XKCD, Digg, and my overdue time sheets; so could you repeat the question?" How many times I've wanted to reply, "I'm sorry, could you repeat your question?" just to see how deep the recursive loop could spin.

It's not always just a momentary distraction, either. I had a 20-minute detailed design conversation around VMWare operating on blade servers, only to have some attention-deficit distractee ask 10 minutes later, "Are these on blade servers?" I wanted to say, "We reviewed the VMWare configuration while you were filing your nails, and we covered blade servers when you were submitting expenses."



It's not like we don't use all the social cues to avoid this situation. For instance we never end a question with a name, "I am concerned that the chromium plated muffler bearings won't handle the stress of the reverse thrust axle bumps, what about you.... Todd?" I mean, that's just unfair. So we also state the name first, "Todd, assuming you've been awake for the past fifteen minutes and I don't have to completely reintroduce the context of this question, who's your favorite pilgrim?"

Look, on-line meetings are great, and the more we work in geographically dispersed areas the more we're all going to have to get used to less face time and more virtual togetherness. But like the ads for Pepsi Max used to scream, Pay Attention!

Saturday, October 31, 2009

Architecture Review Board Value Proposition

If you think of an Architecture Review with the same mindset as an IRS Audit, then something is terribly, terribly wrong. Properly constructed and properly managed, the Architecture Review Board ought to be the one place any project team would want to go, for two specific reasons: to get the right, best answers to any question, and to have the confidence that a proposed solution is endorsed by the most experienced, trusted business-driven technologists the organization has to offer.

The board should be composed of individuals who are the thought leaders for the architectural domains of the enterprise. That would typically include database, security, web hosting, information management, and whatever other domains your organization feels are key to driving business value. Who in your organization is the most accountable person relative to database designs / architecture (for example)? It doesn't matter if that person is an executive, an analyst, or a manager - that person should be on the board ... and no one else who might represent a competing "Database" interest.

Once you have identified and assembled the list of individuals that represent the key architectural domains (every institution will be different), you then have to get them to articulate, in written or diagrammatic form, their architectural standards. Call these reference architectures, operating practices, standards, or guidelines - call them what you want, just make sure of two things: first, that they don't conflict with each other, and second, that they are published and easily accessible. The rules for which designs are preferred or acceptable versus those that are unacceptable shouldn't be known only to mystics, the enlightened, or off-worlders.

You now have the ingredients for an Architecture Review Board that is sought after rather than avoided. You now have the one and only venue in the organization where a project team can bring their proposed solution and have it vetted by the people who are the most trusted problem solvers in the company. Remember that by publishing and advertising the aforementioned 'standards,' you are likely to find that most of the proposed solutions already fit into your desired architectures.

As you conduct the meetings, you must find a balance between confronting issues and allowing confrontation. Respectful, professional behavior and integrity must rule the day. Do not confuse experience with intelligence - the teams bringing proposed solutions to the board may not have the years of experience achieved by the ARB members, but most of them will be smart, educated, and proud of their solutions. Always talk in terms of desired company practices rather than right or wrong.

Following these simple guidelines will enable any organization to build or reconstruct an ARB that provides value and impact.

Thursday, October 22, 2009

Your Architecture Rots! - Says Who?


"I see little point in replacing a poor architecture with a worse one." Ouch! A pretty biting assessment to be sure, made worse only by the fact that yours truly is the credited author of the poor one. I recently found myself in a three-way conversation regarding a proposed change to an existing system. I had designed the original solution several years ago, but the current support team had proposed a change. A third architect and I were discussing the proposal when he provided his opinion of both my original and the new designs. Ouch!

My poor architecture uses a well-known, well-supported product for corporate content management - augmented with some in-house development to implement a limited set of features found in much higher-end WCM products. This back end is accessed via web services by a J2EE front end that can be easily adapted for style and function through JSPs and CSS. The front end was also developed to take advantage of clustering and caching to accommodate performance and scale considerations. As you might guess, I'm a little perplexed as to the exact nature of this architecture's poorness.

To be fair to the original critique - the newest proposed architecture is not as good as the current one, so the notion that the solution is stepping backwards is shared by both of us. I'll also stipulate that the assessment of "poor" was not done after a careful review of the facts, rather it was an intuitive comment given in response to the names of products used in the original design.

The conversation, however biting, raised an interesting question - how do seasoned architects with varying backgrounds and experience find objective agreement on sound architecture? Having been in conversations with other IT professionals for over 25 years (as well as conversations with, um, er, a few non-professionals), I can relay unequivocally that architecture assessments of good, better, and best are often in the eye of the beholder.

I ask three questions when reviewing an architecture, and while the answers can be admittedly subjective; these have held up over the years.

Are the components used as intended?
This could be a book in and of itself, but the gist is that relational databases should be used for relational data. Queue-based solutions ought to be used asynchronously. This is the fit-to-function element of architecture. While it may be possible to screen scrape your web-based email for a real-time, transactional update, it should be considered (by definition) a poor architecture.

Are the components appropriately cohesive (tight) and coupled (loose)?
Long-standing tenets of object-oriented design: each component of a design should only perform functions related to each other (cohesion) and be sufficiently stupid about the internal workings of other components (loose coupling). This becomes the foundation for modular design.
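To put that in code rather than prose, here's a minimal sketch in Java. The names (Notifier, OrderService) are mine, invented purely for illustration:

// OrderService depends only on this interface - it knows nothing about
// how a notification actually gets delivered.
interface Notifier {
    void send(String recipient, String message);
}

// One delivery mechanism among many possible (email, SMS, message queue...).
class EmailNotifier implements Notifier {
    public void send(String recipient, String message) {
        // SMTP details live here, and only here
    }
}

// Cohesive: OrderService does order work and nothing else.
// Loosely coupled: swap in a different Notifier without touching this class.
class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void confirm(String customer) {
        notifier.send(customer, "Your order has shipped.");
    }
}

The screen-scraping example above fails both tests at once: the scraper is coupled to every pixel of someone else's UI, and it spends most of its energy on work that has nothing to do with its actual job.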

Does the form (design) follow the function (requirements)?
Of course, a design that does not afford its intended use and / or fails to meet the requirements should be considered a poor architecture.

Lastly, the proof is in the pudding, right? So when I come across something that curls my teeth, I ask if the solution has been found to be stable, lightweight, and flexible. It's hard to argue with success.

Tuesday, October 13, 2009

High School Students - Your Career Starts Here!

I recently had the opportunity to participate in a discussion with a well-known university that is looking to build a curriculum around enterprise architecture. They had assembled about a hundred representatives from industry, education, and government / military to share ideas about how a collegiate program might look. Think of it as 100 people in a club, talking about how to join the club, but starting with the premise that no one can define the club's purpose (I'm not making this up).

If there is one element of unanimity around Enterprise Architects, it is that there is no unanimity in defining Enterprise Architecture.

CIOs, CTOs, and other enterprise architects pretty much agree that there is a difference between an architect, even a very good one, and an enterprise architect. The prefix "enterprise" changes the definition of the word "architect" in very substantial ways. It's not like adding the adjective "very" to the word "big," or the prefix "super" to "duper." The Enterprise Architect role in some companies is not even an extension of the IT job ladder. Enterprise architect is to architect what astronaut is to pilot. It is a different world, set of skills, and expectations.

During one of the breakout sessions, a small group of us pondered the question: given a stack of resumes from recent college grads, what would you have to see on a resume to make you jump at an interview? What makes a person a suitable candidate to become an EA? At my company we've been using a list of attributes that includes negotiator, consensus builder, visionary, and other hard-to-measure skills such as abstract thinking and an ability to quickly move from high-level / top-down thinking to detailed, bottom-up processing. Technical acumen doesn't even make the top five criteria.

After some conversation we agreed that a recent college graduate who had a Bachelor's Degree in Computer Science AND a Master's in Information Science or a similar field, AND a Master of Business Administration (an MBA) - that is a candidate we'd want to interview for an enterprise architect's role. We also agreed that if hired, the candidate would still need to complete some significant internal EA training program.

Of course, none of us at the table had near that amount of formal education, so what gives? Remember that the context of the conversation was the development of a university program that could approximate the experience one would seek in an enterprise architect. Frankly, I'm not sure how many high school seniors are asking their guidance counselors about the exciting field of Enterprise Architecture.

Tuesday, September 8, 2009

Developers shouldn't also be architects

Much of what you're about to read applies only to organizations of some size. In smaller companies, each individual must take on multiple roles. As the entity grows in size, specialization necessarily kicks in, and the roles and boundaries need to become more defined and rigid. In a one-person shop, for example, the one person must be sales, operations, IT, and even janitorial. When the second person comes on board, the roles begin splitting; and so it goes.

Picture a rectangle and assume that its area represents the total knowledge, energy, and capability of an individual. If this one person were to run their own company, the total workload and output would be expressed by the area of the box.

Now the one-person company hires a second worker - the total amount of workload and output of the first person shouldn't change much (there is some loss in the person-to-person interface), even though the breadth of their work is only half what it used to be. With the addition of a second worker, the first person can get deeper - read: better - in the half-work they now have to do.

Let's say the company doubles again to four people. Theoretically, each person will handle a narrower set of duties and, by doing so, should be able to excel at their assigned tasks. Now, a few caveats. There is never quite the clean separation of duties this model suggests. The model is illustrative of the fact that as a company grows in size, the individual jobs tend to become more specialized.

So, as a development team grows, it becomes necessary to separate the jobs, if for no other reason than competitive pressures to produce higher-quality products. There are other reasons for job separation; for instance, the QA testers should not be the same individuals that performed application coding. Programmers should be separate from the deployment staff. I offer only Jeffersonian proof - we hold these truths to be self-evident.

Asking a single individual to execute both architecture / design duties as well as programmer / developer work is asking a lot, and asking for a lot... of trouble. It's not about intelligence; there are plenty of professionals who are smart enough. No, the reasons for separating the architect and development activities have more to do with tendencies and pragmatics. Developers will tend to choose solutions that fall into one of two categories: technologies they already know, or technologies they want to know.

I know some pretty strong Lotus Domino developers. These folks are sharp. That being said, every business problem brought to them will be solved (i.e. designed) with a Notes hierarchical database design. Really? Seriously? Hey, before you snicker too loudly, Mr. J2EE Professional - how many workflow objects have you written? The answer should be close to zero, especially if you have Notes in your shop.

Good architects translate the business problems into a design that is technology agnostic - but which conforms to sound architecture principles such as loose coupling, abstraction, encapsulation, etc. Now it still takes a strong application developer to put those principles into practice. But the architecture of the solution should not be constrained by what the developers happen to know, or worse yet, what they want to add to their resume.

Again, I am not suggesting that architects are smarter, or that developers can't design, only that (A) the tendency is for developers to implicitly base their designs on their current development skill set, and (B) as an organization increases in size, the need for specialization should drive the design and development activities into separate domains.

Tuesday, August 25, 2009

Eschew Obfuscation

Wired Magazine wrote of Hans Monderman: "[He] is one of the leaders of a new breed of traffic engineer - equal parts urban designer, social scientist, civil engineer, and psychologist. The approach is radically counterintuitive: Build roads that seem dangerous, and they'll be safer." Donald Norman of "The Design of Everyday Things" coined the phrase, "a sign is a sign of a bad design."

A roadway, like any engineered artifact (say, a computer system), should be intuitive to the user and afford obvious indications of its proper use. When key elements of the design have to be explained, then obviously, less is obvious. There are two different ways of expressing this concept; the first is that people shouldn't need signs / directions to properly navigate, understand, or use your solutions. The more signs or documentation you need, the less well-designed the product.

Another way (and the way I prefer to express this) says that people have brains, and the more you engage the brain correctly, the less external information is needed to convey the proper use of a solution. In Matthew May's book, In Pursuit of Elegance, he states that there is a counterintuitive element of nature at work in many systems. As we add documentation such as "Slippery When Wet," "Children at Play," "Deer Crossing," and so forth, we actually cause the brain to disengage from risk analysis.

In December of 1995, the State of Montana removed the speed limit signs from its state highways and instead posted notifications to drive at speeds that were "reasonable and prudent." The State Police were not particularly enthralled with this move and continued to cite drivers who exceeded 80-90 miles per hour. For the next five years Montana recorded its lowest traffic-related fatality rates in twenty-five years. Coincidence? When one of the speeding tickets was challenged, the Supreme Court of Montana ruled that "reasonable and prudent" was unconstitutional, and in 2000 the state went back to posted speed limits. In the next year, road fatalities in Montana rose 111%, and hit all-time highs the following two years.

Why would this be? Could it be that when you are solely responsible for the speed at which you are traveling, you take extra precautions, stay more engaged, and use your brain more fully? In the case of the Montana highway speed control, offering less instruction not only enabled faster travel, but safer travel as well.

Hans Monderman takes this same approach to designing intersections. He removes traffic signals and many of the cautionary signs, and replaces them with well-designed roundabouts. I have traveled through New England where they use rotary intersections (Americanized roundabouts), and have always approached them with trepidation. Only recently did I begin to realize that my apprehension was a direct result of having to engage the brain more fully - I was now completely responsible for navigating the intersection. Here is a time-lapse video of a busy roundabout. Note the elegance of the traffic flow.



Let me emphasize that neither Monderman nor I am advocating the indiscriminate removal of signs, directions, or documentation. Rather, the suggestion is that well-designed solutions don't need them. If a solution is in need of placards, signs, and documentation, then maybe the solution isn't so well designed.

Friday, August 14, 2009

Horticulture and Loose Coupling

My wife and I bought a beautiful new home late last year. It wasn't my idea, for I had been paying attention to the news, I work for a bank, and my dad is a Realtor. The input from these three sources instilled one very rock-solid data point into my knowing places - now is not a good time to sell your home! As it turned out, it was an exceedingly good time to buy a new home, but we were not in a position to carry multiple mortgages (college-age kids, wouldn't you know). Sara was undaunted - and I'm pretty much committed to following her wherever she goes. What can I say, she was right, and we exchanged an 80-year-old house for a new one; downsized but upgraded.

So we have our shiny new home, with shiny new landscaping, and shiny new deer to infest our lawn and digest the lower half of the taller organic decorations. One particular tree, a Hemlock (not the poison bush kind), was especially attractive to the indigenous animal traffic, and before the first winter was a third over, the tree was half gone. As a child I loved Bambi, and all of her relatives. As an adult I am no longer so afflicted. I have thoughts, desires, and aspirations which would (I'm sure) satisfy the retirement fund of at least one Psychologist.

After consulting with a number of plant and tree professionals, we came to the conclusion that the tree was not salvageable as anything but a deer trap. It had to go. The good news is that it was only four years old and so (here comes the insight from the experts) the roots won't have had time to escape the root ball. For the horticulturally challenged, let me explain that when one plants a tree, the roots are neatly packaged in a burlap sack (a ball), all tied up neatly so as to easily fit into an easily dug hole (another topic for another rant). Over time, a long, long time so I'm told, the roots will poke through the burlap and take hold in the ground. Since mine was a mere toddler in tree-years, barely into Tree-dergarten, popping it out of the ground would be child's play.

In the vernacular of an architect, the root ball should have been properly coupled and elegantly cohesive. In other words, the object I was trying to remove should have been able to exchange the necessary moisture and nutrients through the burlap root ball (its interface), without regard to the surrounding environment (i.e. brown dirt, clay, overpriced mulch). Had it been loosely coupled, removing it and then subsequently replacing it would not have required my neighbor's Dodge Ram 2500, an aircraft carrier anchor chain, and four hours of alcohol-induced colorful metaphors.

Not only did the tentacles of the hemlock escape the perimeter of the root ball, thus becoming tightly coupled to the surrounding objects, but the metal cage that encased the root ball (when the tree was originally planted) had never been removed. In the context of a planted tree, as opposed to a tree in transit, the metal cage has no purpose - a clear violation of proper cohesion.

Whoever did the landscaping for my shiny new house may have been excellent groundskeepers, but they didn't know jack about Enterprise Architecture!

Sunday, June 28, 2009

Wikipedia, Gartner, Snopes, and World Book - Who do you Trust?

Many of my friends and family have taken me off their email distribution lists. It's not that they don't like me (per se), it's that I have an annoying habit of pointing out that their latest chain letter is inaccurate. Some recent examples:


These things are just too easy to refute. We no longer live in a world where we can just parrot the works of others. Do you know that 43.7% of all statistics are just made up?

In 1897, Virginia O'Hanlon asked her father, Dr. Philip O'Hanlon, if there was a Santa Claus. He suggested that she write to the editor of the local newspaper, the New York Sun. According to Dr. O'Hanlon, "if you see it in the Sun, it's so." She did. Francis Church replied, and we are left with an enduring story enjoyed by many.

"If you see it in the Sun, it's so." What a wonderful construct. If only we could apply that same sentiment to our world today. If you see it on Wikipedia, it's so. If you read it from Gartner, it's so. If you see it on Snopes.com, it's so. Or my personal favorite, if you see it in the Encyclopedia Britannica, it's so.

Wouldn't it be great if we could just suspend our responsibilities, go to a trusted oracle for any and all answers, and mindlessly repeat their words of wisdom? Gartner says we should set up a Center of Excellence - therefore (with thunder and angels), so it has been said, so it shall be done. So say we all.

I cite Gartner in almost every persuasive paper and presentation I prepare, not because Gartner is Gospel, but because I also cite Forrester, Time Magazine, IBM, Microsoft, Sun, and any number of other on-line information sources. It is the collection of sources that matter, rather than any one. If I cite W3C, it is likely because they have articulated a perspective shared by many in a particularly effective way.

Does IBM have a motivation to present a profit-driving perspective? Yes! Does that mean their perspectives are necessarily askew? No! Not if their analysis, recommendations, and thoughts can be validated against the population of like-minded and competitive interests. Citing Forrester is not wrong. Using only Forrester, or Wikipedia, or the Encyclopedia Britannica as a source is fundamentally flawed - even if their information is correct.

Wait, did I just dis the Encyclopedia Britannica? Well, no, not really, but since you asked:

I've had colleagues suggest that a Gartner recommendation is only worth the paper their invoice is written on. I would suggest that it is as valuable as the World Book Encyclopedia, Wikipedia, IBM, The New York Times, and Mom, combined. Sorry Mom. The recommendation of Gartner is as valuable as the due diligence one puts into validating it - comparing it with other aligned and contrasted recommendations.

Trust Gartner, Wikipedia, Encyclopedia Britannica, and the Wall Street Journal. Cite them, but cite them because you verified. (Actually, I always trust Mom.)

Now, if you send this on to seven people you trust before the Big Dipper tilts downward, you'll get $50 from Bill Gates. Failing to do so will infect your family gene pool with a government-sponsored incurable virus originally developed by the ACLU! It's true, I saw it on Google.

Tuesday, June 23, 2009

Hack My Password

If I told you that this (l1v3l0n9&pr05p3r) was one of my passwords, would you be able to figure out how I remembered it? You might ask: if you already know the password, why would you care how I remember it? Well, knowing the logic key that unlocks that password might enable you to figure out my passwords to other systems.

I am an unabashed fan of Star Trek. Its positive vision for the future of humans resonates with me and has for 40 years. I like other SciFi genres as well, but for me nothing compares with Star Trek. I am also a fan of Wired Magazine. It is the only magazine subscription for which I pay. Stay with me a moment, the tie in will be obvious.

Security through obscurity is likely the biggest challenge corporate programmers have, not so much because our developers aren't smart, but rather because we can't think in the time scales of hackers. We often believe that if something is complicated enough, it is secure, because no one would have the time to puzzle it out. We tend to confuse layers of security with added complexity - but these are not the same.

Corporate developers have to produce results on a deadline, within fixed constraints of time, money, and other programmers. The concept of unlimited time in which to work through a complicated system is as foreign to us as glee to a Vulcan (trust me, it's foreign). But to a hacker, time is an endless quantity. Consider some of these systems which have been hacked:
Hacking these systems took time; time to figure out the security and work around it. Wired Magazine recently published an edition with the help of J. J. Abrams - the director of the recently released Star Trek film. The magazine is filled with puzzles and games. In some cases, there are no clues that a page contains a puzzle - you just have to look, and think, and ponder, and hack.

I had a lot of fun with this issue of Wired, and was surprised when a month later, in the Letters to the Editor, someone pointed out that the spine label of the magazine (usually a series of uniformly spaced blocks) was itself a coded message. Can you figure it out? I'll give you this one, if you promise to think about how someone who wasn't even told that the spine contained a puzzle figured out the FIVE-BIT BINARY CODE that spells "Trekkie."
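For the curious, the decoding itself is mechanical once you spot the pattern. Here's a sketch in Java, assuming the straightforward A=1 through Z=26 mapping (I can't vouch for the exact convention Wired used on the spine):

public class SpineDecoder {
    public static void main(String[] args) {
        // Block / no-block on the spine becomes 1 / 0; five bits per letter,
        // assuming A=00001 ... Z=11010 (the A=1 through Z=26 mapping).
        String bits = "10100" + "10010" + "00101" + "01011"
                    + "01011" + "01001" + "00101";
        StringBuilder word = new StringBuilder();
        for (int i = 0; i < bits.length(); i += 5) {
            int letter = Integer.parseInt(bits.substring(i, i + 5), 2);
            word.append((char) ('A' + letter - 1));
        }
        System.out.println(word); // prints TREKKIE
    }
}

The point isn't the dozen lines of code - it's that someone stared at a magazine spine long enough to try this without ever being told there was a puzzle.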

If you are concerned about the security of your system, don't rely on obfuscation and complexity. If you can figure it out, someone else will too. The only things you can rely on being secret are passwords and certificates (because they can be changed after the solution is deployed). Assume everything else is knowable. I know a lot of developers who have trouble with that, because they assume that it is impossible to build systems that the original programmer can't break into. Not true. If this sounds like you - here's a good book; Writing Secure Code by Howard and LeBlanc.

Lastly, there are three puzzles in this post for you to solve:
  • How to hack my password (what's the key that helps me remember it)?
  • What's Commander Data's password?
  • What is the name of the sculpture which contains a code the CIA cannot crack?
A hacker with experience and unlimited time will find this task to be child's play. A corporate developer under deadline may find it a little more challenging. I've pretty much laid out the answers for you. Good luck, and let me know how it goes.

Monday, June 15, 2009

Entropy and my iced-tea

I drink tea. I don't especially like tea, but it has the value of not containing many of the chemicals that my body either enjoys too much (sugar) or rejects too much (artificial sweetener). Oh, to be young again and partake of sustenance solely on the basis of taste! Pepsi by the drum. Nachos! I've arrived at that place where each meal is a cost / benefit analysis to balance any short-term pleasure with the downstream aftereffects.

In the cooler months I drink hot-tea. Not Earl Grey, not cinnamon, not green tea, just plain old hot-tea. Lipton - comes in a yellow box. In the hotter seasons, I convince myself that Iced-Tea is just as refreshing as that other cold liquid that my brain is not allowed to consider. What is interesting is that it takes a certain amount of effort to keep my hot-tea hot, and my iced-tea cold.

Entropy, it is said, is the attribute of nature that causes all systems to move toward chaos. It's not true, but that is a common belief. No, entropy is the natural process that causes a system to move toward a state of equilibrium - that is different from chaos. For instance, if we put 100 pennies in a shoe box, and then shake the box up and down, some number of pennies will flip over. Let's assume you started out with all of the pennies lying heads up. After shaking the box up and down for five minutes, you wouldn't be surprised if half of the pennies were tails.

That's entropy. From a 'nature' perspective, neither heads nor tails is better. Having 50 of your 100 pennies facing in the opposite direction to the other 50 is neither good nor bad; it is no more chaotic than 100 heads or 100 tails. Entropy is not about disorder, it is about equilibrium. It's why my hot-tea becomes lukewarm and my iced-tea becomes... lukewarm.
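If you'd rather watch entropy than take my word for it, here's a toy simulation in Java. The 20% flip chance per shake is an arbitrary number I picked; any nonzero chance ends the same way:

import java.util.Arrays;
import java.util.Random;

public class PennyBox {
    public static void main(String[] args) {
        Random rand = new Random();
        boolean[] heads = new boolean[100];
        Arrays.fill(heads, true);               // start: all 100 pennies heads up

        for (int shake = 1; shake <= 50; shake++) {
            for (int i = 0; i < heads.length; i++) {
                if (rand.nextDouble() < 0.2) {  // each penny has a 20% chance
                    heads[i] = !heads[i];       // of flipping on each shake
                }
            }
            if (shake % 10 == 0) {
                int count = 0;
                for (boolean h : heads) if (h) count++;
                System.out.println("After " + shake + " shakes: " + count + " heads");
            }
        }
        // The head count drifts toward 50 and hovers there - equilibrium, not disorder.
    }
}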

Consider a large corporation segmented into several lines of business, hypothetically. Imagine that each business is aggressively pursuing their market space and in some cases employs information technology as a differentiator. Several entropy effects can be observed. First, unless acted upon by some additional force each line of business will adopt technologies to solve their challenges and furthermore, it is unlikely (unless acted upon by some other force) that they will coordinate their technology implementations. This hypothetical scenario might lead to four lines of business buying four separate Business Process Management Suites. Hypothetically.

There is another, less obvious, symptom of entropy; again, it's not about disorder, it's about equilibrium. In the corporate world of IT, equilibrium can be observed when the complexity of the existing systems exceeds the entity's ability to adopt new solutions. When the effort required to "keep the lights on" gets too high, there are no more resources left to add business-driving functionality, install new solutions, or even reduce the complexity of the existing solutions. Like my lukewarm iced- or hot-tea, the entropy in these corporations has yielded an unrewarding equilibrium.

Being that this is a blog dedicated to Enterprise Architecture, the more astute reader may surmise that I have a point. According to Gartner, corporations with mature Enterprise Architecture programs spend 20% less on keeping the lights on and 28% more on transformational projects.

Effective EA programs look across the vertical silos of a corporation to identify the unnecessary complexity that naturally results from the effects of business-driven IT entropy, and drive recurring solutions out of the line-of-business silos to where common technology assets can be scaled, leveraged, and managed. This has the added benefit of reducing the workload on line-of-business development teams so that they have less to support and can become more responsive, nimble, and cost-effective. Think of Enterprise Architecture as always having hot hot-tea and really cold iced-tea.

Sunday, June 7, 2009

While we're at it

The bathroom sink was draining slowly. It was still draining, mind you, it was just slower than I would have liked. I had already tried chemicals, which on an 80-year-old house was, to be completely honest, beyond my risk tolerance. Introducing modern molecular reactions into an environment with unknown metals, untold combinations of foreign ingredients (I live with two women), and the passage of time is hardly a scientific experiment with controlled variables. And yet, it was draining a little too slowly for my tastes. Something had to be done.

Cue the wrenches, bucket, and leftover towels. This is a job for "Mr. Home Owner!" As our hero crawled under the vanity, he first noticed that the hot water feed was dripping ever so slightly. Well, as long as we've got the tools out and we're under the sink anyways, we might as well tighten the loose joint up a bit. To get to the joint we'll have to remove the vanity drawer, whereupon we notice that the little rubber bumpers that quiet down the drawer when it closes have fallen off. As long as we've got the drawer out, we might as well attach new ones.

Wait. Our mild-mannered home repair guy (and owner who never throws any leftover parts away) thinks he may have some rubber drawer bumper thingies (the official name of such... thingies) in his official coffee can of leftover doodads. After navigating the stairways and the concentration-interrupting "what are you doing, I thought you were going to fix the drain?" challenges, we find our hero at the coffee can of endless odd parts; he dumps it out and begins rooting for the, um, er... "here's that screw I was looking for to fix the door to the linen closet." Wait, what was I here for?

While we're at it. These are the four most dangerous words in the Engrish language. Technically, it is five words that have been contracted into four. The phrase doesn't seem so bad when used in the informal context. Wait, where was I? Oh yeah, at some level, doing Project 'B' while in the midst of Project 'A' just makes sense. Consider the surgeon who informs you that while removing the tumor in your belly, he happened to notice that your appendix was about to rupture. He left the appendix in, because his surgical plan was tumor extraction and an appendectomy would have been scope creep.

Then again; "Mrs. Jones, I'm sorry to inform you that your husband did not survive. Oh, the tumor extraction went well, but we figure he could use a tummy tuck which led to a beautification makeover. During a mole removal on his left foot, his blood pressure spiked and it was downhill from there." Hmm, maybe we should stick to one thing at a time.

Conversations of architecture often encroach on, embrace, engulf, and embody the "while we're at it" philosophy. "As long as we're talking about writing our log to a relational database, we might as well place the application configuration data there as well. And while we're at it, we'd better make sure that the table structure employs the fourth normal form. No use going to all this trouble for a database configuration on a single server, so let's add an active / active high availability configuration, which in our shop will mean changing database vendors." Um, is this really necessary for a non-critical application that will be used as a stopgap solution for three weeks?

If you're architecting from scratch, implement the best ideas, best thinking, and best practices without losing perspective on the scope of the project. That may mean 'over-architecting' for version 1.0, knowing that versions 2.0 and 3.0 are coming. Or it may mean that a temporary solution (OK, don't get me started) needed in a tight time frame with a limited budget may not be the cat's pajamas.

If you're designing an upgrade to an existing system to address an issue (performance, reliability, functionality), keep the goal ever present and try not to diverge. For instance, if performance is the needed area of improvement, stay away from functionality at the same time. Avoid making "while we're at it" changes outside of the motivating scope.

I have to go now; my wife noticed the kitchen faucet is dripping - this will just take a minute.

Monday, June 1, 2009

How often does the ARB say 'No'

Albert Einstein was once asked what would happen if an irresistible force met an immovable object. He replied that, by definition, in a universe where one exists the other cannot. I was once asked how often the Architecture Review Board (ARB) says "no" to a proposed solution. I think the interviewer was surprised to hear me say, very, very seldom. After all, if we seldom say no, doesn't that by definition mean that the ARB is just a rubber stamp for any project that comes for review?

Since this is my entry, you won't be surprised when I say, no, the ARB is not a rubber stamp. You wouldn't expect me to say anything less, but let me go further and explain how a seldom-say-no review process can, and does, yield the kinds of forward-looking, desired-state architectures you would want to see in a large organization.

To begin with, our company supports a diverse set of technologies and the reason we support such a diverse set is because we are driven to meet business expectations. For example, we have staff, equipment, licenses, and support contracts to enable enterprise level implementations of Oracle, DB2, and SQL Server database solutions. If a proposed application needs a relational database, one would be hard pressed to find a reason why one of these three products won't meet the needs. This is only one example, but suffice to say, we are prepared as an enterprise to support more than one product / solution for any given technology.

Next, we don't just buy and deploy, we take the time to understand, integrate, and document our preferred implementations. You can call these standards, but we prefer the term operating practices because that's what they are. Anyone wanting to know our operating practices relative to database, network, web hosting, or other common technology asset need only consult our internal wiki to learn of the processes, technologies, and desired implementations. There are no secrets and the information is readily available.

Anyone can attend an Architecture Review Board meeting, but the core members are the most trusted individuals in the technology organization. Have a question about application security? The most trusted individuals in the company attend the ARB. Not sure about a database configuration - come to the ARB. And so it goes with network, messaging, middleware, desktop, and on and on. Disagreements just don't happen very often because everyone knows that the primary ARB speaker for any given topic is "The Voice" for that topic for the company.

Also attending the ARB are the lead developers and architects from all of the corporation's lines of business. They hear, week in and week out, how the operating practices (standards) are being consistently applied, application after application. Quite frankly, we very seldom see a proposal that is out of sync with our desired operating practices, because ... well, who would bring it? The same individuals who are trusted with new solutions are the ones who regularly attend the ARB - and these folks generally know how to accomplish their business objectives within the well-communicated operating practices. Now, we do hear requests and requirements that cause us to grow our operating practices - that is a perfectly valid expression of innovation. The ARB is receptive to innovative solutions that advance our business capabilities.

Lastly, we have adopted an agenda that begins every conversation with a description of the business problem or opportunity with an eye and ear towards 'how' to accomplish the goal rather than looking for reasons to say 'no.' So, our philosophy is to make sure that any proposals that might not be in line with our desired-state architecture are properly examined, vetted, and made visible to all key stakeholders. It's not about right and wrong, it's about choice and consequence. We're not the 'No' folks, we're the 'Know' folks (<- I made that up just now).

Monday, May 11, 2009

Use the Right Tool for the Job

I recently needed to build a series of storage closets in my garage and was able to borrow my son-in-law's nail gun for the task. It was so cool. Having pounded a billion nails into 2x4s over the years, I fell in love with my new tool. I can't imagine doing another construction project without it. It was so cool. Just load it up with a stack of nails, turn on the air compressor, press a little button, and THWACK! So cool. It was only later that I learned that according to the U.S. Centers for Disease Control and Prevention there are about 37,000 nail gun injuries a year.

37,000? I didn't know that the industry had sold 37,000 of these puppies! Really, 37,000? I mean... I know that accidents happen and all - but... 37,000? That's 740 per state, 2,500 per major city, 101 a day, or 6 nail gun accidents every year for every Wal-Mart. In the book Why We Make Mistakes, Joseph Hallinan tells of a man who accidentally shot himself with a nail gun... in the head... twice. Ah, the power of new tools.

Of course nail guns are useful tools when used in the right way, at the right time, and for the right reasons. Programming languages are useful as well, but choosing the wrong one for the wrong job at the wrong time can be just as smart as shooting yourself in the foot (or head) with a nail gun. I'll leave the C or Java, FORTRAN or COBOL debate for another time. I am frequently approached for an opinion on the use of scripting languages such as Perl, PHP, Python, Jython, or Godknowswhat. Typically the conversation goes like this:

proponent: I can write all of the code to launch a nuclear missile counterattack in less than 12 lines of code - wohoo!
me: Does that include comments to make the code decipherable?
proponent: What? Anyone with a spare neuron can easily understand the code!
me: Anyone?
proponent: Shut up!

I once chaired an architecture discussion to determine if an outdated client/server application written in dBase III (ok, well, it was written with FoxPro, but six of one ...) should be replaced by a shiny new Internet/web-based application written in PHP. For those that don't know, PHP is an abbreviation of Personal Home Page - the original purpose for the language. With all of the analytical calmness we could muster, we respectfully indicated that PHP would not be our first choice. I think the order of language preference would have been:

Java with HTML/CSS/JavaScript
C# with HTML/CSS/JavaScript
.
.
.
PHP
Assembler
Baling wire
Spit

I'm not suggesting that PHP has no place in a development shop's tool kit - it just shouldn't be used as the primary engine for creating commercial grade, secure, functional code that (and here's the key) will be modified by others for years to come. Scripting languages, as a whole, provide rapid development at a cost of readability. For a moment, divorce yourself from your own preferences, the investment of time, and the cool things you know about your favorite developer tools.

If you think about it, rapid coding and readability are necessarily opposite ends of the scale. COBOL is very readable. Why? Because it is as chatty as a pre-teen with a cell phone. Perl and others of its ilk lend themselves to rapid coding because you can create a lot of function with a limited number of characters. To be sure, you can create readable code with Perl, PHP, Python, ANT, and others, but most coders just don't. We're in so much of a hurry to get the code to work, and then in so much of a hurry to get to the next thing, that we never take the time to make our scripts readable.
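The same tradeoff exists inside a single language. Both halves of this contrived example (mine, not from any real code base) do the same job; guess which one the next coder will thank you for:

public class FieldCounter {
    public static void main(String[] args) {
        String line = "milk,eggs,bread,butter";

        // Rapid: one line, zero explanation
        int t = line.replaceAll("[^,]", "").length() + 1;

        // Taking the time: the same logic, spelled out.
        // A delimited line has one more field than it has commas.
        int commaCount = line.replaceAll("[^,]", "").length();
        int fieldCount = commaCount + 1;

        System.out.println(t + " = " + fieldCount); // prints 4 = 4
    }
}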

To be fair, I've seen some pretty unreadable Java code as well, but typically, we tend to take the time. Scripting languages come and go, with a new one entering the scene every eighteen months. It would be impossible for a leading-edge development shop to adopt new languages, train enough staff, and deliver new function while maintaining the code base when the underlying language changes that often. Use scripting languages like Perl and ANT and Python when ongoing maintenance is likely to be minimal. They are perfect for one-time data conversion utilities. They are also ideal when you need rapid development of small, easy-to-digest functions (i.e. the original idea behind JavaScript). Think utility. Stay away from scripting languages for any application you set in front of someone else.

Now, I'll wait for the onslaught of objections from the why-not crowd.

Sunday, May 3, 2009

Hacking the HOV Lanes

The city of Pittsburgh has high occupancy vehicle (HOV) lanes to expedite the flow of traffic during the morning and evening rush hours. Sounds good, huh? I don't know whether it was a result of poor planning, condensed space due to the terrain, or a restricted budget, but the architects of the HOV solution decided to use the same two lanes in the morning and in the evening. In the morning the two lanes carry traffic into the city, and in the evening, the traffic flows outward.

Are the merest glimpses of a problem beginning to emerge? There is a set of gates at each end of the HOV lanes, structured in a way that allows or prohibits traffic from entering the lanes. Typically, the gates into the city are open in the morning, and the gates leaving the city are open at the end of the day. A human operator is responsible for opening and closing the gates. It should be noted that it is impossible to see both ends of the HOV lanes from any single location. Therefore the operator has to close one set of gates at one end of the HOV lanes before proceeding to the other end and opening those gates.

It is therefore possible to have both sets of gates closed, or both sets of gates open, at the same time. Having both ends closed is a bit of an inconvenience. Having both ends open is tempting disaster. Here is an excerpt from the website pittsburgh.pahighways.com:

The worst accident to occur on the HOV lanes happened in 1995 between two cars which hit each other head-on and cost the lives of six people. A PennDOT employee did not close the gates to the outbound entrances, and was later convicted of improperly changing the lanes while under the influence of cocaine. After the accident, the number of HOV users dropped by more than 1,000 per day. Wrong-way accidents are unheard of today; however, this hasn't helped to increase the ridership.

The latest improvements to the HOV lanes were unveiled on May 18, 2006 in the form of a $770,000 automated "fast-acting" gate system which are the latest in a series of improvements such as CCTV cameras, automated interlocks on permanent gates, and improved signage since the 1995 accident. The new gates will be down during morning rush hours with overhead sensors to detect approaching inbound vehicles. If one is detected, the gate will raise to allow it to pass. During afternoon rush hours and weekends when the HOV lanes are open in the outbound direction, the gates will be up.


Did you catch the phrase, "convicted of improperly changing the lanes while under the influence of cocaine"? I will make no excuses for someone in dereliction of their duties as a result of self-inflicted, judgment-impairing activities. There's simply no excuse. That being said, was the architect of the solution also convicted? Tried? Admonished? Told to sit in a corner without crayons? Consider the cost of "improvements" which were made after the HOV lanes were operational.

In his book "Why We Make Mistakes," Joseph Hallinan argues that when mistakes happen we tend to look down - not so much with our eyes as in our attempt to understand where to place the blame. We look down the chain of events to the people closest to the accident rather than looking upward to determine how the situation was allowed to exist and (more importantly) how to avoid a recurrence. I don't mean avoid as in band-aids and patches; I mean avoid as in designing solutions that cannot fail.

Even if we could wrap our heads around the insanity of a system which relied on human memory to avoid a catastrophic loss of life, how does one implement such a solution in the absence of closed-circuit television cameras, automated fast-acting gate systems, and other fail-safe devices? Make no mistake about it, these band-aids and hacks (there are no more appropriate terms) are only necessary because of the fundamental design flaw in the system. Creating a high-speed roadway where traffic flows in both directions in a shared set of lanes is fundamentally flawed. We have learned that we need concrete barriers between high-speed lanes of opposing traffic, or maybe a wide grassy knoll; at the very least an ugly three-foot steel guide rail.

In his book "The Design of Everyday Things," Donald Norman writes his credo about errors:

If an error is possible, someone will make it. The designer must assume that all possible errors will occur and design so as to minimize the chance of the error in the first place, or its effect once it gets made. Errors should be easy to detect, they should have minimal consequences, and, if possible, their effects should be reversible.

Imagine if the architect, project managers, budget holders, and users of the HOV lanes had embraced Norman's credo; even if only the first sentence, "If an error is possible, someone will make it."
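In software terms, embracing that first sentence means making the unsafe state unreachable instead of trusting an operator's memory. Here is a minimal sketch of the idea in Java - the class and method names are my own invention, not PennDOT's design - in which the controller simply refuses to open one end until the other end is confirmed closed:

    // Interlock sketch: the unsafe both-open state cannot be reached through this API.
    public class HovGateInterlock {

        public enum End { INBOUND, OUTBOUND }
        private enum GateState { OPEN, CLOSED }

        private GateState inbound = GateState.CLOSED;
        private GateState outbound = GateState.CLOSED;

        // Opening one end requires the opposite end to be confirmed CLOSED.
        public synchronized void open(End end) {
            if (end == End.INBOUND) {
                if (outbound != GateState.CLOSED)
                    throw new IllegalStateException("Outbound gates not confirmed closed");
                inbound = GateState.OPEN;
            } else {
                if (inbound != GateState.CLOSED)
                    throw new IllegalStateException("Inbound gates not confirmed closed");
                outbound = GateState.OPEN;
            }
        }

        public synchronized void close(End end) {
            if (end == End.INBOUND) inbound = GateState.CLOSED;
            else outbound = GateState.CLOSED;
        }
    }

Notice that this design doesn't make the operator smarter or more diligent; it makes the error impossible to commit.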

In my industry, banking, no one dies from the mistakes we make. Any fundamental design flaws are, in the grand scheme of life, relatively minor. But that is no reason to be careless. Consider the solution you are working on right now. Forget malicious users for a moment (although you should never stop considering malicious users) and ask yourself: does this solution assume that users will always perform their jobs correctly? Will batch job 'B' always be executed after batch job 'A'? What, you say? Users don't execute batch jobs, only highly trained professionals do that!

If an error is possible, someone will make it. Batch jobs are usually initiated in sequence as a result of some script kicked off by a job scheduler. Batch job 'B' should contain logic to ensure that its input files are correct rather than assume that 'A' must have executed. User interface gurus will tell you, never trust the input. Actually, that's good advice for any API.
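Here is a minimal sketch of that kind of defensive check in Java (the file name and header tag are invented for illustration; the point is the refusal to proceed on bad input):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    public class BatchJobB {

        private static final String EXPECTED_HEADER = "JOB_A_OUTPUT_V1"; // hypothetical hand-off marker

        public static void main(String[] args) throws IOException {
            Path input = Path.of("job_a_output.dat"); // hypothetical file produced by job 'A'

            // Validate the input instead of assuming job 'A' ran successfully.
            if (!Files.exists(input)) {
                fail("Input file missing - did job 'A' run?");
            }
            String header;
            try (Stream<String> lines = Files.lines(input)) {
                header = lines.findFirst().orElse("");
            }
            if (!EXPECTED_HEADER.equals(header)) {
                fail("Unexpected input header - refusing to process stale or partial data");
            }

            // ... only now is it safe to process the file ...
        }

        private static void fail(String reason) {
            System.err.println("ABORT: " + reason);
            System.exit(1);
        }
    }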

If an error is possible, someone will make it. If it is possible to shoot yourself with a nail gun, someone will do it, or rather, 37,000 people will do it. Every year. Have you looked at your current project with a Donald Norman perspective and eliminated (as opposed to hacking over) all of the HOV design flaws?

Tuesday, April 21, 2009

Star Trek, the Borg, and Peer Reviews

Once again we engage in a topic that might not seem, on the surface, to be of architectural significance. Good architecture rarely emanates from a single brain; rather, it is from the pre-Borg hive-mind that we achieve the elevated levels of elegance and purity found in systems that cannot fail, effortlessly scale, and extend with ease.

In a galaxy far, far away, there exists a quasi-human species that has embraced technology so completely that every thought of every being is shared, considered, and to the degree reasonable, executed flawlessly. I know this because I watch Star Trek. Everything I ever needed to know I learned from Star Trek. For instance, I know that engineers are never supposed to provide accurate estimates when asked how long a repair will take. Our answers must always be padded by 30% so as to appear to be miracle workers who can deliver under pressure.

I also learned never to wear a red shirt to work, don't insult the tall guy with the ugly forehead, and on the planet Vulcan, the seven year itch means something ENTIRELY different than on Earth. But I digress.

Humans on Terra Firma have not yet perfected the hive-mind of a Borg cube, and so we must share our thoughts via the provincial process of written and verbal communications. This makes the whole collectively-creating-an-architecture thingy a bit problematic. We architects have opinions about what is right, and learning that the left side of the northern hemisphere thinks our ice caps are melting just torpedoes our photons!

I had the benefit, years ago as a cadet, of being on a development team where I was clearly the weakest member. Oh, I pulled my own weight and found a niche as the user-interface guy, but I also had to pitch in and help with code throughout the application. This meant trudging through database calls, thread management, caches of caches, arrays of arrays, event-driven processes, transporter re-assemblies, and a host of other cool and terrifying app/dev scenarios. OK, I made up the part about transporter re-assemblies.

The point is that I had to learn to accept criticism from a variety of people, some of them younger than me, and I had to engage in frequent peer reviews to consider my own code with a clinical eye and personal detachment. It was tough at first because I had been a paid consultant for almost a decade by that time, so I didn't think I was a slouch. But I adopted the 'tude that being right and being the author of great code was not as important as implementing the best ideas the team could generate. Sometimes those ideas were mine, but the law of proportionality (I was one of six) mandated that most of the time someone else's ideas would prevail.

It's like the time the Organians prevented Kirk from starting a war with the Klingons - he thought he was justified until the Organians made all of the weapons super hot so no one could fight. OK, so it's not exactly like that, but the point is, even Kirk had to admit when a better idea was presented.

Do you have a clinical eye and a personal detachment toward your own efforts? Monty Python defines an argument as "a connected series of statements intended to establish a proposition." Clearly, I didn't learn that from Star Trek. Do you offer your code, or your documentation, or your proposals, or even long emails up for peer review? Do you accept criticism, commentary, or complaints about that which you produce, or is your first instinct to defend your creation because, of course, you're smart?

If your job involves the creation of a product (applications, systems, servers, white papers, architecture, ...) that is the result of experience, creativity, and judgment then you should actively seek out the opinions of others. Find two other people with whom you can build a challenge relationship based on the aspirations of higher quality regardless of authorship. Agree to consider everyone's ideas, including your own, with a clinical eye and a personal detachment.

Go where you've not gone before. Accept only the best ideas, do not resist because resistance is futile. Live long and prosper by engaging in fact-based, non-emotional peer reviews. Warp factor 10 - to infinity and beyond (wait... wrong meme).

Tuesday, April 14, 2009

When Good Enough is Good Enough

I just came off a project to select a new enterprise-level solution for the organization. The exact product and purpose are inconsequential in this case, so instead I'll just focus on one of the more interesting aspects of the decision process.

In other posts I've discussed the value of having IT professionals who look beyond the first answer they find. Some of our siblings in this field accept the first workable solution at which they arrive without considering alternatives, without pushing for an elegant answer that solves all of the known requirements and positions the business for the future.

In the face of that line of thought (and the tenacity with which I advocate the position), it may surprise some of my readers to hear me argue that sometimes, good enough is good enough. Sometimes we knock ourselves silly trying to find the perfect product or perfect code when acceptable solutions are right at hand. Imagine arguing over two sports cars, one of which will reach 135 miles per hour, the other able to reach only 130. Both have leather interiors and XM Satellite Radio; both are sleek and seat five comfortably.

One faction of evaluators believes that the extra five miles per hour could be important in a long race, on the straightaways. The other faction finds the slower car to have a more refined style. The selected car will be driven on US highways, where the top speed is about 85 mph. The car that will hit 130 is, frankly, good enough. OK, that was an easy one.

You're asked to pick between two market-leading products that will provide... Stop! Did you catch the key phrase in the preceding sentence? Both are "market leading." Exactly how cutting-edge is your need such that the "B" level product is a bad choice? We get so wrapped up in the differences between technology products that we often miss that the business doesn't care, doesn't need to care, and doesn't want to care. Picture a bar chart comparing the two products, and notice the apparent disparity between them based on the delta in bar height.

Based on this comparison, one could see why choosing product "B" might cause some concern; it is clearly inferior. This is the danger in focusing too closely on details, and how improper analysis can mislead the analyzer. Note that the vertical axis of such a chart often does not start at zero, which it should. Plot the same numbers on an axis that starts at zero and the disparity all but vanishes. I actually saw a U.S. President use this misleading technique to make a case for legislation. I yelled at the television for 20 minutes and he never backed down.
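To put hypothetical numbers on it: suppose product "A" scores 95 on some benchmark and product "B" scores 98. On an axis that runs from 0 to 100, the two bars look nearly identical, because they nearly are. Start the axis at 94, however, and B's bar is four times the height of A's ((98-94)/(95-94) = 4); a 3% difference dressed up as a four-to-one rout.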

We tend to exaggerate the significance of minor differences, and that tends to delay decisions, actions, and productivity. Here's a key - if you are inventing a solution, say writing code or building a configuration, take some time to work through a couple of solutions. Compare and contrast, especially if you are not following an established pattern. If you need a Factory Pattern, or a Singleton, just build it using established templates and move on. If what you're creating is new or novel, then take some time to make it elegantly simple.
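"Established template" really does mean copy-and-go. For example, here is the well-worn enum idiom for a Singleton in Java - ConfigRegistry is an invented name, and the field is just illustrative state:

    // The enum idiom gives you a thread-safe, serialization-proof Singleton for free.
    public enum ConfigRegistry {
        INSTANCE;

        private String region = "us-east"; // illustrative state

        public String region() { return region; }
        public void setRegion(String region) { this.region = region; }
    }

    // Usage: ConfigRegistry.INSTANCE.setRegion("eu-west");

There is nothing to agonize over here; spend your deliberation budget on the novel parts of the system.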

If, however, you are choosing between the top two or three existing solutions, don't sweat the petty things and don't pet the sweating things. This is especially true if you are comparing the top solutions in a given market space. The term "diminishing returns" easily applies to elongating the decision process. In most cases, where the top solution providers are concerned, the details don't really make that much difference. Furthermore, the leader in the market today will be leapfrogged by the challenger, and round and round we go. Good enough is good enough. Make a choice. Move on.

Wednesday, April 8, 2009

Are you normal?

In 1972 Dr. E. F. Codd published a paper titled, "Further Normalization of the Data Base Relational Model", which has become a best seller for insomniacs worldwide. Dr. Codd was brilliant to be sure, but neither of my kids ever requested a second reading of his material at bedtime. This begs the question, "did you really read them technical manuals at night?" The answer is, when your child has been up for 18 straight hours... and you're exhausted... and you just want them to sleep... you find yourself "thinking outside the box."

Strangely, neither of my offspring ever considered a technology career path. I blame Goodnight Moon, Dr. Seuss, and Shel Silverstein.

Dr. Codd proposed a series of rules to normalize the design of databases. I can state with some authority that his skills with arithmetic far and away exceed his talent as a storyteller. I would go so far as to suggest that Further Normalization of the Data Base Relational Model is the mathematical equivalent of Muzak.

The first three of his rules can be collapsed into this: if a piece of data exists in two places, then it is wrong in one of them. Of course, we can find countless examples where this is not true, which in turn lulls us into a false sense of security. So, restated: if the same piece of data exists in two different places, then it is possible for it to be wrong in one of them, and it is impossible to predict where or when. It is in understanding this over-simplified construct that issues relating to database integrity and scalability can be managed.

Codd's formulas are often expressed as the Rules for Normalization; rules which we are far too willing to violate because "it just makes sense in this case", "normalization is too theoretical", and the ever popular "THERE'S NO TIME." As a recap, here are the first three Normalization Rules:

First Normal Form (1NF): No Repeating Columns or Groups of Columns. (Wrong: AREA_CODE_1, PHONE_1, AREA_CODE_2, PHONE_2, ...).

Second Normal Form (2NF): No Partial Dependencies on a Combination Key. This only applies to tables with a primary key composed of multiple columns (a combination key). It means that we have to make sure that none of the non-key columns of the table are dependent on some, but not all, of the key columns. Tables with a one-column primary key (assuming they pass First Normal Form) are automatically compliant with Second Normal Form.

Third Normal Form (3NF): No Dependencies on Non-Key Columns. Sometimes we'll have a group of columns in a table that really should be a foreign key pointing to a row of another table. A classic example is zip code, city, and state. Given the zip code, the city and state could be looked up. The downside to carrying all three (BTW - everybody does this) is that if you change any one of the three data items, you create a referential integrity issue for the other two.
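For the code-minded, here is what the 3NF fix looks like as a minimal Java sketch (the type and field names are invented for illustration): city and state live once, keyed by zip, instead of being repeated on every customer row.

    // 3NF sketch: ZipCode owns city/state; Customer carries only the key.
    record ZipCode(String zip, String city, String state) {}

    record Customer(long id, String name, String zip) {} // zip references a ZipCode row

Correct a misspelled city name in one ZipCode row and every customer "inherits" the fix; there is no second copy to drift out of sync.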


Now wasn't that refreshing? No, not the rules - the little nap you took right after "First Nor...zzzzz." Did it again, huh? Yeah, they're a real sleeper set. But if you can just get through them one time, you'll find that strict adherence to these rules will save you a lot of time, money, and aggravation when maintaining transactional database tables. For read-only reporting tables you can have at it - make them as flat and wide as you want. Who cares, no one believes that data anyway.

Normalization is one of those silly little best-practice thingies we are delighted to skip because we are too smart to be trapped by the fallout later on. So, here is some bad news. You're not that bright. Don't get me wrong, I'm not that bright either; nobody is. No one can remember all of the little rules we break en route to a new application or function. So, following the rules of normalization, even when you think the database is too small, or too big, is an important declaration that you accept the human condition. Referential integrity and scalability (normalization) are functions of physics - they are based on mathematics, not some arbitrary set of majority-wins guidelines established by ivory-tower, monolithic governance bodies bent on imposing intellectual will over the unwashed.

A database design should not be based on the needs of an application; rather, it should reflect the realities of the domain. The data design should mimic, to the highest degree possible, the true relationships that exist in the domain being modeled. All applications, and therefore the databases, start small and grow. Begin with the right (extensible) design and your life, and that of your successor, will be easier. Don't be lulled into the trap of "this is a little app so I can break the rules, no one will ever notice."

Sunday, March 29, 2009

The Secret to 21st Century Productivity: Gamers

What do IBM and Harvard know that you should?

Allow me to apologize for the length of this narrative, as I need to provide a direct quote before launching into the actual subject. Imagine a world in which video games were invented before books. Books would come along at some point, and because they were the new medium, they would be assailed. You could imagine that a critic of books might write:

Reading books chronically under-stimulates the senses. Unlike the longstanding tradition of game-playing - which engages the child in a vivid three-dimensional world filled with moving images and musical soundscapes, navigated and controlled with complex muscular movements - books are simply a barren string of words on the page. Only a small portion of the brain is devoted to processing written language which is activated during reading, while games engage the full range of the sensory and motor cortices.

Books are also tragically isolating. While games have for many years engaged the young in complex social relationships with their peers, building and exploring worlds together, books force the child to sequester him or herself in a quiet space, shut off from interaction with other children.

But perhaps the most dangerous property of books is that they follow a fixed linear path. You can't control their narrative in any fashion - you simply sit back and have the story dictated to you. Why would anyone want to embark on an adventure utterly choreographed by another person.

Reading is not an active, participatory process; it's a submissive one. The book readers of the younger generation are learning to "follow the plot" instead of learning to lead.


That is an excerpt from the book (how ironic) "Everything Bad is Good For You" by Steven Johnson, which takes a humorous but legitimate look at the notion that pop culture is all bad. Now, anyone who frequents this blog knows I love to read, so I'm not anti-book, but I am also not anti-pop-culture, and specifically, I am not anti-video-games. In fact, if anything, I'm pro-gaming.

Johnson makes a pretty strong argument that much of what we call bad (nighttime soaps like ER, Hill Street Blues, and Law & Order) actually requires our brains to become more engaged than classic, "quality" television like Andy Griffith, I Love Lucy, or Gunsmoke. Conventional wisdom also holds that gaming rots the mind.

Video games have become an integral part of our recreation, and some folks wonder if there are any benefits to all of this thumb-wielding, eye-rotting, mind-numbing activity. According to the Entertainment Software Association, 65% of American households play some form of video games, and the average player is 35 years old and has been playing for 13 years. Keep that in mind - the average gamer has been at it for over a decade. With that much practice in most martial arts you would be a 3rd-degree black belt (just below Master). In 13 years of college you could earn two separate four-year bachelor's degrees, a master's, and a PhD.

The next time you are interviewing a candidate for a job opening, ask if they play any video games. For if they do, here are some of the skills they have built up during their "off hours."
  • Problem solving - Most games begin with a few base assumptions and a specific goal (save the queen, acquire wealth, destroy the aliens, ...) but not much else. The player must figure out, from context cues, trial and error, and experience, how to navigate and how to analyze threats and opportunities to achieve stated or self-determined goals.
  • Patience - Modern games take a long time to play. A game of Monopoly might take four hours. Today's video games require four hours just to get familiar with the keyboard.
  • Accepting setbacks - No one, and I mean no one, traverses a video game without failure. Most gamers never achieve the final victory, and yet they persevere night after night, trying one dead-end path after another, learning, becoming more skillful, gaining more ground. Adversity just doesn't have the negative impact it did on earlier generations.
  • Teamwork - The notion that game playing is a solitary activity ignores the existence of thousands of online forums, social circles, and of course online interactions during game play. True, some games can be enjoyed alone, but even some of those involve social interactions between the players.
  • Leadership - Multiplayer games often involve leadership roles which change from person to person as the game activity changes. Players are respected for whatever role they play as all are needed for success.
This barely scratches the surface of gamer value. IBM published research on gamers, "Leadership in Games and at Work," and found that while there are some differences in which skills are most valued, all of the skills needed for effective leadership are present in multiplayer role-playing games.

Harvard Business Publishing, a wholly-owned subsidiary of Harvard University, recently said gamers are bottom-line oriented, understand the power of diversity, thrive on change, and see learning as fun.

If you are in a position to hire, look with glee upon the candidates who openly express their prowess on the console. They may not have the best tan, but they very well may have an exceptional mind.
