A Wall Street infrastructure & operations day

I recently had an opportunity to spend a day in three separate meetings with infrastructure & operations professionals from three of the top six financial services firms in the country, discussing topics ranging from long-term business and infrastructure strategy to specific likes and dislikes regarding their Tier-1 vendors and their challengers. The day’s meetings were neither classic consulting nor classic briefings, but rather free-form discussions, guided only loosely by an agenda and, despite possible Federal regulations to the contrary, completely devoid of PowerPoint presentations. As in the past, these in-depth meetings provided a wealth of food for thought, along with interesting and sometimes contradictory signals from the three groups. There was a lot of material to ponder, but I’ll try to summarize some of the high-level takeaways in this post.

Servers and Vendors

These companies between them own in the neighborhood of 180,000 servers, and probably purchase 30,000 – 50,000 servers per year across various procurement cycles. In short, these are heavyweight users. One thing that struck me in the course of the conversations was their Machiavellian view of their Tier-1 server vendors. While they regard these vendors as key partners, the majority of the group also devotes substantial effort to keeping them at arm’s length through aggressive vendor management techniques such as deliberately splitting procurements between competitors. They understand their suppliers’ margins and cost structures well, and they are committed to driving hardware supplier margins to “as close to zero as we can,” in the words of one participant.

Quick advice to incumbent vendors in these accounts – don’t count on your incumbency to protect you. These customers all understand the implications, pro and con, of the advanced infrastructure management and service offerings that you are trying to add on to shore up your margins (see below), particularly the trade-off between lock-in and additional value to their I&O groups.

Core Wars and CPUs

No surprise, Intel dominates this space. AMD has actually lost market share since its peak a couple of years ago, and the general consensus among the groups was that Intel looks to have the lead for the near future. But nobody is wedded to a CPU vendor, and at each procurement boundary they are willing to consider an AMD offering if it meets their requirements. Loyalty is not a strong suit for this group of customers; AMD has an uphill road in front of it, but not an impassable one. If AMD produces a product that significantly shifts the throughput-per-watt equation in its favor, it has a chance to regain some traction with these large users.

Integrated Infrastructure

The most interesting split between the groups was in their attitudes toward the advanced converged infrastructure stacks that most of the top-tier vendors are beginning to offer. Across the combined groups, they were aware of, and in most cases had trial experience with, HP Matrix, IBM Virtual Fabric, Cisco UCS, VCE VBlocks, and several early-stage offerings from other players that will be announced over the next two or three months. That they should have early knowledge and access was no surprise – at HP these were our most valuable customers, and our team spent a lot of time briefing and polling these users. What did surprise me was the polarization of the reactions. Some of the groups wanted to avoid engineering these solutions into their core infrastructure and making them a strategic infrastructure element because of the increased lock-in to the vendor. Others were willing to accept the lock-in in exchange for the perceived value, but I would submit that the majority attitude was one of suspicion. I should note that some of these companies can afford to pass on these offerings because, as a group, they spend years and tens of millions of dollars duplicating portions of these offerings themselves, but that’s a topic for another venue. Other data indicates that the opposite is true further down the size pyramid, with smaller enterprises and midsize companies much more receptive to the value message and less concerned about lock-in.

Fringe Computing

All reported using and/or developing for GPU targets, with NVIDIA as the de facto standard platform. One had actually implemented a high-performance application using FPGAs (Field Programmable Gate Arrays), dropping the time needed to perform a critical risk analysis calculation from 8 hours to 10 minutes. There was plenty of discussion about how GPUs will never become a general-purpose platform, but unanimity on their suitability for large parallel compute applications, mostly risk analysis, often coupled with real-time trading.
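None of the participants shared code, but for readers who haven’t touched GPU computing, a rough sketch helps explain why risk analysis maps so naturally onto thousands of GPU threads: each Monte Carlo scenario is independent of the others. The example below is a minimal CUDA illustration of that idea, not anything these firms run; the kernel name, market parameters, and simple one-asset value-at-risk calculation are all hypothetical and chosen only for clarity.

```cuda
// Minimal sketch: Monte Carlo value-at-risk style simulation on a GPU.
// All parameters (spot, drift, volatility, horizon) are illustrative only.
#include <cstdio>
#include <vector>
#include <algorithm>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Each thread simulates one independent one-day P&L scenario for a
// single-asset position using geometric Brownian motion.
__global__ void simulate_pnl(float *pnl, int n_paths, unsigned long long seed,
                             float spot, float mu, float sigma, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_paths) return;

    curandState state;
    curand_init(seed, i, 0, &state);          // independent random stream per path

    float z = curand_normal(&state);          // standard normal draw
    float s_next = spot * expf((mu - 0.5f * sigma * sigma) * dt
                               + sigma * sqrtf(dt) * z);
    pnl[i] = s_next - spot;                   // profit/loss for this scenario
}

int main()
{
    const int n_paths = 1 << 20;              // ~1M independent scenarios
    const int threads = 256;
    const int blocks  = (n_paths + threads - 1) / threads;

    float *d_pnl;
    cudaMalloc(&d_pnl, n_paths * sizeof(float));

    // Illustrative market inputs: spot 100, zero drift, 2% daily volatility.
    simulate_pnl<<<blocks, threads>>>(d_pnl, n_paths, 1234ULL,
                                      100.0f, 0.0f, 0.02f, 1.0f);
    cudaDeviceSynchronize();

    std::vector<float> pnl(n_paths);
    cudaMemcpy(pnl.data(), d_pnl, n_paths * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(d_pnl);

    // 99% value-at-risk = loss at the 1st percentile of the P&L distribution.
    std::sort(pnl.begin(), pnl.end());
    float var99 = -pnl[n_paths / 100];
    printf("99%% one-day VaR per unit of exposure: %.4f\n", var99);
    return 0;
}
```

Real risk systems simulate full multi-asset portfolios over many time steps, but the embarrassingly parallel structure – one independent scenario per thread – is the property that makes the GPU (and, for fixed calculation pipelines, the FPGA) so attractive for this workload.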

Stay Tuned

I’ll be publishing additional research from this visit as a document for the infrastructure & operations community in the near future.