In a recent blog post that I wrote right before the Agile 2012 conference in Dallas, I expressed my hope that the conference sessions would be full of specifics, not generalities. No, thank you, I don't need another presentation explaining what test-driven development is. I'd like to hear someone talk about how they did test-driven development in a large project, or how they made it a standard practice across projects, or whether anyone has had success with it when working with outsourcing partners.

The presentations at this year's conference were a mixed bag, consisting of the following:

  • Very high-level discussions, often from consultants, about Agile and Lean concepts. These presentations consisted of a lot of name dropping ("Mumblemumble Daniel Pink mumblemumble Geoffrey Moore mumblemumble Taiichi Ohno") and colorful polygons ("You want to be in the two-by-two Grid Of Delight, not the Overlapping Ovals Of Doom"). Unfortunately, the keynote bore a bit too much resemblance to this category.
  • Emphatic statements of Agile precepts. Some of these arguments were convincing, as in a good workshop-ish session I attended on Agile planning. Others were less persuasive, such as the contention that Scrum should be the governing methodology in every project. (That's exactly the sort of over-the-top pronouncement that gave birth to the phrase "Agile zealot.")
  • In-depth analyses of Agile and related practices. A good example was the presentation on risk, starting with the way in which other industries define it, and then applying it to software development and delivery in a practical and compelling way. Forrester colleagues who attended some presentations and mini-workshops about testing had good things to say about those sessions.

Annoyingly, many of the sessions I wanted to attend were already standing room only, with no further admission, by the time I arrived. Maybe those were the sessions in that third category that I had hoped to find. If that conjecture is true, it only makes my point: people are hungry for specifics, not generalities, about Agile and Lean transformation.

But maybe it's unfair to expect too many specifics out of the Agile conference. After all, now that Agile has gone mainstream, there's less need for a meeting of software insurgents eager to make the case for upheaval. Still, there's a lot more revolutionary praxis to figure out.

So, if the Agile conference is less the place for hammering out the specifics of Agile and related practices (Lean, DevOps, UX, etc.), where is this work happening? 

As I'm finishing the application life-cycle management (ALM) Forrester Wave, I've seen firsthand how vendors have addressed these specifics. If you're building a project management tool that supports Agile, you have to make some critical choices about those devilish details. How strictly do you enforce the prioritization of the backlog? How many different sizing metrics do you support? Do you suggest that teams should scale back the work items in a sprint if it looks as though, with the best of intentions, they're packing way too much into the next iteration?
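To make that last question concrete, here's a minimal sketch, in Python, of how a tool might warn about an overcommitted sprint. The Sprint and WorkItem classes and the velocity-based threshold are my own illustrative assumptions, not a description of any vendor's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    title: str
    points: int  # story points, just one of many possible sizing metrics

@dataclass
class Sprint:
    items: list[WorkItem] = field(default_factory=list)
    historical_velocity: float = 0.0  # average points finished in past sprints

    def planned_points(self) -> int:
        return sum(item.points for item in self.items)

    def overcommitment_warning(self, tolerance: float = 1.1) -> str | None:
        """Suggest, rather than enforce, trimming an overloaded sprint."""
        planned = self.planned_points()
        if planned > self.historical_velocity * tolerance:
            return (f"Planned {planned} points against a velocity of "
                    f"{self.historical_velocity:.0f}; consider deferring items.")
        return None

sprint = Sprint(
    items=[WorkItem("Checkout flow", 13), WorkItem("Search filters", 8)],
    historical_velocity=15,
)
print(sprint.overcommitment_warning())
```

Note that this check returns a suggestion rather than blocking the plan. Whether to merely warn or actively enforce is exactly the kind of choice a vendor has to make and then defend.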

And project management is just one small piece of the larger ALM machine. Is it a good idea to let teams load up the data model for a user story with lots of fields for nonfunctional requirements? Should the release schedule look more like a project timeline or a product road map? When building integrations, which tools are Agile teams more likely to use (Hudson and Jenkins are easy guesses), and which might turn out to be far less important? Is a Kanban board a "good enough" tool for collaborating with non-Agile teams, or is something else required? What's the relative value of a tool for converting manual tests into automated ones versus automating the responses (file this bug, notify these people, etc.) when a test fails?
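As one illustration of that first data-model question, here's a small sketch (again in Python, with hypothetical field names of my own) of a story schema that lets teams bolt on nonfunctional-requirement fields without mandating them:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str
    description: str
    points: int
    # A free-form dict lets teams attach nonfunctional requirements
    # (latency targets, security notes) without forcing every team
    # to fill in fields they never use.
    nonfunctional: dict[str, str] = field(default_factory=dict)

story = UserStory(
    title="Search results page",
    description="As a shopper, I want to filter results by price.",
    points=5,
    nonfunctional={"latency": "p95 under 300 ms", "accessibility": "WCAG AA"},
)
```

The trade-off is real: a loose bag of fields is flexible but hard to query and report on consistently, while strongly typed fields are easy to report on but clutter the form for teams that don't need them.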

ALM vendors have to make specific choices and live with the results. Even the decision to let people stray somewhat from Agile practices, versus strictly enforcing those practices within the tool, is itself an important choice. With every such decision, a tool's designers might delight, confuse, or offend their customers.

Consequently, there is a lot of deep thinking, and there are a lot of good ideas, among the ALM vendors. We write all the time about how Agile has forced software professionals to face some hard truths about the flaws in their approaches. (Every sprint, you discover how good or bad your planning and estimation skills are. Every sprint, you learn how well you are keeping defects under control. Every sprint, you appreciate how important mundane activities, such as build management, can be for a team.) You can make the same statement about ALM vendors, who have faced some hard truths about the tools they've developed to support these teams.

They don't always succeed. But, given the ubiquity of Agile, the ALM vendors have to keep trying. Just by looking at one set of features in ALM tools, we can see why.

Everyone wants meaningful reports and dashboards. Development teams want them for continuous improvement. Other groups in the software value chain (testers, business analysts, architects, and so on) want them to gain transparency into what the dev teams are doing, to better collaborate with them. Management wants them to see how effectively the organization is delivering software and to gauge whether the investment in Agile is paying off. The data that goes into these reports and dashboards has to come from somewhere. The mechanisms for creating the reports and dashboards have to provide prompt, meaningful answers to demanding and often changing questions.
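To make that plumbing concrete, here's a rough sketch of how a burndown series might be aggregated from raw status events. The event shape and the carry-forward rule are my own assumptions for illustration, not any vendor's actual schema:

```python
from datetime import date, timedelta

# Hypothetical status events a tool might record:
# (item_id, day, points_remaining)
events = [
    ("S-101", date(2012, 8, 13), 8),
    ("S-102", date(2012, 8, 13), 5),
    ("S-101", date(2012, 8, 15), 3),
    ("S-102", date(2012, 8, 16), 0),
]

def burndown(events, start, end):
    """Sum each item's last-known remaining points for every day in range."""
    remaining = {}  # item_id -> last-known points_remaining
    by_day = sorted(events, key=lambda e: e[1])
    i, series, day = 0, [], start
    while day <= end:
        # Fold in every event up to and including this day.
        while i < len(by_day) and by_day[i][1] <= day:
            remaining[by_day[i][0]] = by_day[i][2]
            i += 1
        series.append((day, sum(remaining.values())))
        day += timedelta(days=1)
    return series

for day, pts in burndown(events, date(2012, 8, 13), date(2012, 8, 16)):
    print(day, pts)
```

Even this toy version forces design decisions: how to carry stale readings forward, what granularity of events to store, and how to keep the queries fast as the data grows.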

If ALM vendors can't keep up with these very specific requirements, and can't anticipate many of the important questions their customers will ask through reports and dashboards, they'll find their tools consigned to shelfware and their customers defecting very quickly. That's a powerful incentive to get right into the specifics of Agile transformation in a way that bland presenters may not feel compelled to do.