Got Kubernetes In Production? Let’s Talk
Kubernetes is going mainstream — at least, that’s what we were told at the KubeCon conferences, in the Cloud Native Computing Foundation (CNCF)’s case studies, and via various vendor success stories. But those high-profile use cases are dominated by cloud hyperscalers, software and software-as-a-service providers, tech equipment makers, and telcos. Big banks and other financial services providers with way-above-average tech budgets also figure prominently in the CNCF case studies. So what about everybody else?
A newly released VMware survey, The State of Kubernetes 2021, noted that 65% of respondents reported use of Kubernetes in production, up from 59% in 2020, a meaningful increase given the impact of the pandemic. About half of all respondents reported using Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), significant increases over the previous year’s survey, with Red Hat OpenShift and Google Kubernetes Engine (GKE) both dropping slightly to 23% (VMware’s Tanzu was next at 20%). The same survey found that Kubernetes deployments based directly on CNCF projects dropped from 29% in 2020 to 18% in 2021. The results suggest that many users who tried to get Kubernetes up and running on their own encountered difficulties and concluded that they’d be better off using managed services from a public cloud provider.
But managed Kubernetes services are a bare-bones approach. While such services bring Kubernetes into infrastructure as a service (IaaS) and provide APIs, managed Kubernetes still requires substantial effort on the part of users.
Consider Amazon Web Services’ (AWS) shared responsibility model for EKS. As an AWS GitHub post explains, EKS will provide the Kubernetes control plane, but the rest — identity and access management, security, and compliance, to name a few — is up to the user. Google provides a more security-focused interpretation of the shared responsibility model for GKE, but the message is similar: GKE customers will have plenty to do on their own to get Kubernetes up and running and must rely on voluminous guides. Microsoft’s approach to shared responsibility for AKS follows the same pattern but includes a “you break it, you own it” disclaimer for Kubernetes cluster support: “Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable,” Microsoft warns, adding, “changing any of the system-created metadata will [also] render the cluster unsupported.”
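To make the identity and access management piece concrete: on EKS, mapping AWS IAM identities to Kubernetes users is handled through the aws-auth ConfigMap, which customers create and maintain themselves. A minimal sketch follows; the account ID and role name are placeholders, not real values.

```yaml
# aws-auth ConfigMap: maps AWS IAM identities to Kubernetes users and groups.
# EKS provisions the control plane, but keeping this mapping correct is the
# customer's job under the shared responsibility model.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder account ID and role name -- substitute your own.
    - rolearn: arn:aws:iam::111122223333:role/my-eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Get this mapping wrong — or let it drift — and worker nodes can’t join the cluster or administrators lose access, which is exactly the kind of ongoing operational work the “managed” label doesn’t cover.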
The upshot: Users who wish to offload the Kubernetes control plane to a services provider will still have plenty of work to do. Even so, some organizations may see this approach as a happy (or not!) medium between raw Kubernetes from the open source community and full-blown application platforms based on Kubernetes. Managed Kubernetes services can take some of the pressure off infrastructure teams as they undertake infrastructure modernization efforts. Moreover, teams responsible for application development and delivery may welcome such an approach if they want more flexibility in choosing development platforms. That still leaves a great deal of integration work in the hands of infrastructure teams, and it may make the path to production with Kubernetes longer and more challenging in exchange for better performance and ROI. If the VMware survey is any indication, a significant number of users are taking this approach as they move into production with Kubernetes or prepare to do so.
Organizations that find Kubernetes managed services insufficient may choose a Kubernetes distribution that’s already baked into a multicloud container application platform. As noted in the most recent Forrester Wave™ covering those platforms, the leading vendors in that market — Canonical, D2iQ, Google, Mirantis, Platform9 Systems, Rancher, Red Hat-IBM, and VMware — incorporate Kubernetes while focusing their efforts on developer experience and application modernization. While that packaging appeals to some users, it comes with trade-offs: OpenShift is available only with Red Hat Enterprise Linux, for example, and each platform’s application catalog has its own limitations.
These multicloud container development platform providers are betting that the difficulties of DIY Kubernetes and frustrations with no-frills managed Kubernetes services will drive customers in their direction. They may be right. But it’s too soon to conclude that Kubernetes is too complex for widespread adoption outside such platforms. My colleagues Brent Ellis, Andras Cser, and I noted in a previous blog that the elements of enterprise-grade Kubernetes are increasingly available through open source efforts and vendor offerings. I’ve analyzed one such requirement for Kubernetes — business continuity and disaster recovery — in a report to be published later this year.
Nevertheless, Kubernetes may be overkill for a wide range of use cases. That’s HashiCorp’s proposition with the release of Nomad 1.1, which is pitched as a “simple and flexible orchestrator” for containers. HashiCorp is betting that, at least for some organizations, the best practice for Kubernetes is not to use it in the first place.
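The simplicity pitch is easiest to see in Nomad’s job format: a workload is a single declarative file with no separate manifests for deployments, services, and config. A minimal sketch, with illustrative job and image names of my own choosing:

```hcl
# A minimal Nomad job: run two Docker containers of nginx.
# The job name, datacenter, and image are illustrative placeholders.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2 # number of instances to schedule

    task "server" {
      driver = "docker"

      config {
        image = "nginx:1.21"
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

Submitted with `nomad job run web.nomad`, this is roughly the whole surface area an operator touches — which is the contrast with Kubernetes that HashiCorp is counting on.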
What’s your take? Do you have Kubernetes in production in an industry outside tech? If so, I’d like to hear from you for my ongoing research on Kubernetes adoption — where you’ve deployed it, why, and how. It’s an opportunity to share lessons learned — anonymously, if you prefer — with your peers as they take on the transformation of IT infrastructure. Let’s talk.