EA teams like to know how mature their EA practice is, and there are plenty of EA maturity models out there; you’ll find several of them discussed in a 2009 Forrester report. Many EA teams share the idea that there is a single “ultimate EA model” and that EA leaders should strive to climb the ladder toward it. It’s like a video game – you try to get to the next level.

For the past three months, EA team researcher Tim DeGennaro has been examining these models, along with Forrester’s research on EA best practices, to create a framework for assessing EA programs. The task looked deceptively simple: develop criteria based on the best practices we see in leading EA organizations, create an objective scale to rate an organization’s progress, offer reporting that illuminates next steps, and wrap it all in an easy-to-use assessment package. What we’ve found so far is that not only is it impossible to escape subjectivity and missing context, but the various assessment styles disagree on the most crucial question: what exactly is EA supposed to be aiming for?

Discussion within Forrester highlighted this confusion. Some approaches tell you to document and strengthen your EA processes and make sure they’re followed – and the degree to which you’ve done so determines your score. What they fail to uncover is how good your work actually is, whether it’s what the business needs you to be doing, and whether you’re squashing progress for the sake of upholding the rules. Other approaches measure how much “stuff” you can do but never ask whether that “stuff” is useful to anyone. Either way, you walk away with some kind of magic number that usually tells you you’re OK but there’s room for improvement.

The result of our discussion: Keep it simple, and focus on EA’s ability to drive critical activities. There is no end state you’re supposed to reach and no score you’re supposed to attain; instead, compare what you do against what the most effective EA programs do to drive these critical activities – for example, IT investment governance. Gap analysis will uncover the strengths and weaknesses of your practice: who you are, what you can and can’t do, where your value comes from, and exactly what should change as EA shifts its value orientation.

For investment governance, we think it’s important to look at:

Contributions – what you’re able to offer

  • Artifacts – the numbers, rules, models, objectives, etc., that EA can produce
  • Expertise – the guidance EA can offer beyond referencing an artifact
  • Accessibility – the availability of expertise and documented knowledge

Quality – how helpful your offerings actually are

  • Utility – the ability of artifacts to answer decision-makers’ questions
  • Future orientation – artifacts’ ability to look ahead and offer predictability
  • Repeatability/consistency – the repeatability and consistency of your process for creating artifacts
  • Reliability – the integrity of information behind these artifacts

Relationships – your ability to convert these offerings into valuable practice

  • Socialization – your ability to drive acceptance of EA outputs
  • Reputation – the perception of EA’s services
  • Involvement – the depth of your input with stakeholders

What do you think about these dimensions? Are they the right ones? Are there others? Is it possible to put a “number” on maturity? Is it even useful?