December 1, 2015
I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”, i.e. integrated server, storage and network infrastructure that allows its users to “compose” (that is, to configure) a physical server out of a collection of pooled server nodes, storage devices and shared network connections.[i]
The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, they were early efforts, and lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a major step-function in capabilities for core enterprise infrastructure as it delivers cloud-like semantics to core physical infrastructure. Among its key capabilities:
- Synergy can construct servers from a pool of processing nodes (up to 12 in a single chassis) and disk drives (multiples of 40 per enclosure), with an arbitrary number of drives mapped to the server nodes. The servers appear to software as if the disks are directly attached.
- The servers in turn are connected to the enterprise network via a shared set of uplinks.
- This composition is controlled by a complex layer of embedded software, accessed either through a GUI presented by the HPE OneView management console or through an API set. Servers can be configured individually or via templates; a template can define an entire runtime boot image, which is streamed and customized at boot time from the internal controller nodes. HPE claims that a runtime image can be created, customized and made ready to boot in 15 seconds.
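To make the template model concrete, here is a minimal sketch of what template-driven composition might look like to an automation tool. Everything here is an illustrative assumption of mine — the function, the field names, and the profile layout are hypothetical, not HPE’s actual OneView/Synergy API:

```python
# Hypothetical sketch of template-driven server composition.
# Data model and field names are illustrative assumptions,
# NOT the actual HPE OneView/Synergy API.

def compose_server(template, node_id, drive_ids):
    """Bind one compute node and a set of pooled drives into a
    logical server described by a reusable template."""
    if len(drive_ids) < template["min_drives"]:
        raise ValueError(
            f"template requires at least {template['min_drives']} drives")
    return {
        "name": f"{template['name_prefix']}-{node_id}",
        "compute_node": node_id,               # one of up to 12 nodes per chassis
        "drives": list(drive_ids),             # mapped so the OS sees them as local
        "boot_image": template["boot_image"],  # streamed/customized at boot time
        "network_uplinks": template["uplinks"],  # shared chassis uplinks
    }

# A template captures the reusable parts of a server's identity.
web_tier = {
    "name_prefix": "web",
    "min_drives": 2,
    "boot_image": "rhel7-web-v3",
    "uplinks": ["uplink-a", "uplink-b"],
}

server = compose_server(web_tier, node_id=4, drive_ids=[17, 18])
```

The point of the sketch is the separation of concerns the announcement describes: the template holds the reusable configuration, while the pooled physical resources (node, drives) are bound to it only at composition time.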
Synergy is designed to work at scale in medium to large enterprises. A Synergy management domain can include up to 20 chassis, external storage arrays can be included in the Synergy resource domain, and the Composer modules include comprehensive discovery and relationship mapping capabilities. Physically, a Synergy module is a 10U enclosure with 12 modular slots in the front for a combination of server nodes (up to 12) and disk modules (40 disks in a 3-slot module), with six shared interconnect modules, power supplies and whatnot in the rear. While this is not a conventional blade server, it is clear that the team that built one of the industry’s most successful blade architectures contributed: Synergy displays many small design details that suggest considerable cross-pollination between the groups.
Why is it Important?
Agility, responsiveness, operational efficiency – these demands on core infrastructure groups show up constantly in any survey of challenges and future requirements. These pressures are among the drivers behind at least some of the move to both virtualized IaaS and the newer bare-metal cloud offerings. Synergy enables cloud-like semantics on top of core physical infrastructure, extending the notion of software-defined environments to the physical layer, a capability that I (no false humility here) strongly suggested was a critical missing capability in software-defined data center offerings when I first wrote about them in 2012. In the interim there have been incremental offerings, along with emerging bare-metal cloud offerings, but Synergy finally delivers the complete set of capabilities required to truly be classed as enterprise-grade software-defined physical infrastructure.
My prediction is that Synergy will be an extremely successful product, a strong defender of HPE’s installed enterprise base, and a capable platform with which to try to win back some of the ground lost to competitors over the past few years. The former is almost a given. The latter may be more problematic.
HPE Will Not be The Only Game in Town – 2016 and 2017 Will be the “Years of Composable Infrastructure”
Composable infrastructure will be both a hot topic for marketing in 2016 and a center of major technology investment across the industry. As I noted above, both Dell and Cisco have credible niche entrants that can claim some level of composability, and since their products were introduced over a year ago (Dell in November 2014, Cisco in September 2014) it is hard to convince myself that they have been asleep at the wheel and not thinking about successor products. Cisco in particular has been writing about composable infrastructure for some time and has a rich management stack that already supplies many of the high-level functions required by composable infrastructure, and it would be truly surprising if it were not thinking about extending the M-Series technology. Dell has had some success with the first iteration of the FX, and since the FX already has much of the architectural framework required, it would likewise be surprising if it was not planning successor products.
With an intermediate transition coming next year with a mid-term refresh of Intel’s server product line and a major architectural transition looming for 2017, I expect a bunch of composable infrastructure announcements from multiple sources. HPE Synergy will have a strong first-mover advantage, but will face a crowded field by the end of 2016.
For users, the chore of evaluating and picking between multiple qualified alternatives will repay itself with a level of fluidity and efficiency in core infrastructure that was unthinkable even a couple of years ago.
[i] Think configuration, building or constructing if the word “compose” gives you trouble.
[ii] The Dell FX took an approach that is similar in concept to Synergy, that of binding one or more disks from a common pool to specific servers. Cisco, using some ingenious ASIC technology, actually divided a set of four large disks into virtual disks and assigned them to the collection of small single-socket servers in the M-Series chassis. In both cases the servers shared a set of physical network resources.