Kubernetes Trials & Tribulations Q&A: Cloud, Data Center, Edge

Kubernetes cloud orchestration platforms offer the flexibility, elasticity, and ease of use of the cloud wherever they run: on premises, in a private or public cloud, even at the edge. The ability to turn services on when you want them and off when you don't is an enticing prospect for developers and application deployment teams alike, but it has not come without challenges.

At our recent SNIA Cloud Storage Technologies Initiative webcast, "Kubernetes Trials & Tribulations: Cloud, Data Center, Edge," our experts, Michael St-Jean and Pete Brey, debated both the challenges and the advantages of Kubernetes. If you missed the session, it is available on demand along with the presentation slides. The live audience raised several interesting questions; here are our presenters' answers.

Q: Are all these trends coming together? Where will Kubernetes be in the next 1-3 years?

A: Adoption rates for workloads like databases, artificial intelligence & machine learning, and data analytics in a container environment are on the rise. These applications are stateful and diverse, so a multi-protocol persistent storage layer built with Kubernetes services is essential.
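
To make that concrete, here is a minimal sketch of how a stateful workload typically claims persistent storage in Kubernetes. The claim name, StorageClass, and capacity below are hypothetical; the actual StorageClass depends on the storage layer backing your cluster.

```yaml
# Minimal PersistentVolumeClaim sketch; names and sizes are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data            # hypothetical claim for a database pod
spec:
  accessModes:
    - ReadWriteOnce              # single-node block access; ReadWriteMany suits shared file workloads
  storageClassName: fast-block   # assumed StorageClass exposed by the storage provider
  resources:
    requests:
      storage: 50Gi
```

A multi-protocol storage layer would typically expose several StorageClasses (block, file, object) so each of these diverse workloads can request the access mode it needs.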

Additionally, Kubernetes-based platforms pave the way for application modernization, but when, and which, applications should you move… and how do you do it? Some companies still have virtual machines in their environment, and maybe they're deploying Kubernetes on top of VMs, while others are trying to move to a bare-metal implementation to avoid VMs altogether. Virtual machines are still a good fit in many cases, for example, for running your existing applications. And there is a Kubernetes add-on called KubeVirt that lets you run those applications in VMs on top of the container platform, instead of the other way around. This offers a lot of flexibility to those who are adopting a modern application development approach while still maintaining existing apps. First, you can rehost traditional apps within VMs on top of Kubernetes. You can even refactor existing applications; for example, you can run Windows applications on Windows VMs within the environment, taking advantage of the container infrastructure. Then, while you are building new apps and microservices, you can begin to rearchitect the integration points across your application workflows. When the time is right, you can rebuild that functionality and retire the old application. Taking this approach is a lot less painful than rearchitecting entire workloads as cloud-native in one step.
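
As a rough illustration of the rehosting step, here is a minimal KubeVirt sketch, assuming the KubeVirt operator is installed in the cluster; the VM name and disk image are hypothetical.

```yaml
# Sketch of a VM running under Kubernetes via KubeVirt; names are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm                # hypothetical VM hosting an existing application
spec:
  running: true                      # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/example/legacy-app-disk:latest  # hypothetical disk image
```

The VM is scheduled, networked, and monitored with the same Kubernetes machinery used for containers, which is what makes the gradual rehost-then-rearchitect path practical.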

Q: Is cloud repatriation really a thing?

A: There are a lot of perspectives on repatriation from the cloud. Some hardware value-added resellers are of the opinion that it is happening quite a bit. Many of their customers had an initiative to move everything to the cloud. Then the company was merged or acquired and someone looked at the costs, and sure, they moved expenses from CapEx to OpEx, but there were runaway projects with little accountability and expanding costs. So, they started moving everything back from the cloud to the core datacenter.

I think those situations do exist, but I also think that perspective is a bit skewed. The reality is that where applications run is largely workload dependent. We continue to see workloads moving to public clouds, and at the same time, some workloads are being repatriated. Take, for example, a workload that needs processor accelerators like GPUs or deep learning accelerators for a short period of time. It makes perfect sense to offload some of that work to a public cloud deployment, because the analyst or data scientist can run the majority of their model on less expensive hardware and then burst to the cloud for the resources they need, when they need them. In this way, the organization saves money by not making capital purchases for resources that would largely sit idle.
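
As a sketch of what such a burst job might look like, assuming the cloud-provisioned nodes run the NVIDIA device plugin and carry a hypothetical label marking them as burst capacity:

```yaml
# Illustrative pod that requests a GPU and lands only on cloud burst nodes.
apiVersion: v1
kind: Pod
metadata:
  name: train-burst                # hypothetical one-off training job
spec:
  restartPolicy: Never
  nodeSelector:
    cloud-burst: "true"            # assumed label on cloud-provisioned GPU nodes
  containers:
    - name: trainer
      image: quay.io/example/model-trainer:latest  # hypothetical training image
      resources:
        limits:
          nvidia.com/gpu: 1        # GPU resource exposed by the NVIDIA device plugin
```

When the job finishes, the pod goes away, and with a cluster autoscaler the expensive GPU node can go away too, which is the economic point of bursting.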

At the same time, a lot of data is restricted or governed and cannot live outside a corporate firewall. Many countries even restrict companies within their borders from housing data on servers outside the country. Workloads like these are clear candidates for repatriation to the datacenter. Many other factors, such as cost and data gravity, will also lead to some workloads being repatriated.

Another big trend we see is the proliferation of workloads to the edge. In some cases, these edge deployments are connected and can interact with cloud resources, and in others they are disconnected, either because they don’t have access to a network, or due to security restrictions. The positive thing to note with this ongoing transformation, which includes hybrid and multi-cloud deployments as well as edge computing, is that Kubernetes can offer a common experience across all of these underlying infrastructures.

Q: How are traditional hardware vendors reinventing themselves to compete?

A: This is something we will continue to see unfold over time, but certainly, as Kubernetes platforms start to take the place of virtual machines, there is a lot of interest in building architectures to support them. That said, right now hardware vendors are starting to place their bets on which segments to go after. For example, there are compact-mode deployments built on servers targeted at the public sector. There is also an AI accelerator product built with GPUs. And there are specific designs for telco and multi-access edge computing, as well as validated platforms and designs that incorporate AI and deep learning accelerators, all running on Kubernetes.

While the platform architectures and the target workloads or market segments are interesting to follow, another emerging trend is for hardware companies to offer customers a fully managed service built on Kubernetes. Full-scale hardware providers have also amassed quite a bit of expertise with Kubernetes, and they have complete services arms that can provide managed services, not just for the infrastructure, but for the Kubernetes-based platform as well. What's more, the sophisticated hardware manufacturers have redesigned their financing options so that customers can purchase the service as a utility, regardless of where the hardware is deployed.

I don't remember where I heard it, but someone once said, "Cloud is not a 'where,' cloud is a 'how.'" Now, with these service offerings and the cloud-like experience afforded by Kubernetes, organizations can operationalize their expenses regardless of whether the infrastructure is in a public cloud, on-site, at a remote location, or even at the edge.

Q: Where does the data live and how is the data accessed? Could you help parse the meaning of “hybrid cloud” versus “distributed cloud” particularly as it relates to current industry trends?

A: Organizations have applications running everywhere today: in the cloud, on premises, on bare-metal servers, and in virtual machines. Many are already using multiple clouds in addition to a private cloud or datacenter. Also, a lot of folks are used to running VMs, and they are trying to figure out whether they should just run containers on top of existing virtual machines or move to bare metal. They wonder if they can move more of their processing to the edge. Really, there's rarely an either-or scenario. There's just this huge mix and match of technologies and methodologies taking place, which is why we term this the hybrid cloud. It is truly hybrid in many ways, and the goal is to get to a development and delivery mechanism that provides a cloud-like experience. The term "distributed cloud computing" generally just encompasses the typical cloud infrastructure categories of public, private, hybrid, and multi-cloud.

Q: What workloads are emerging? How are edge computing architectures taking advantage of data in Kubernetes?

A: For many organizations, being able to gather and process data closer to data sources in combination with new technologies like Artificial Intelligence/Machine Learning or new immersive applications can help build differentiation.

By doing so, organizations can react faster, connect everything anywhere, and deliver better experiences and business outcomes. They are able to use data derived from sensors, video, and other edge devices to make faster data-driven decisions; deploy latency-sensitive applications with the experience users expect, no matter where they are; and keep data within geographical boundaries to meet regulatory requirements on data storage and processing.

Alongside these business drivers, many organizations also benefit from edge computing because it limits the data that needs to be sent to the cloud for processing, decreasing bandwidth usage and costs. It creates resilient sites that can continue to operate even if the connection to the core datacenter or cloud is lost. And it lets you optimize resource usage and costs, since only the services and functionality necessary to address a use case or problem are deployed.

Q: How and why will Kubernetes succeed? What challenges still need to be addressed?

A: Looking at the application modernization options, you can venture a guess at the breakdown of what organizations are doing, i.e., how many are rehosting, refactoring, rearchitecting, etc., and what drives those decisions. When we look at the current state of application delivery, most enterprises today have a mix of modern cloud-native apps and legacy apps. A lot of large enterprises have a huge portfolio of existing apps built with traditional architectures and traditional languages (Java, .NET, maybe C++), or even mainframe apps. These support both stateful and stateless workloads.

In addition, many are building new apps, or modernizing some of those existing apps, on new architectures (microservices, APIs) with newer languages and frameworks (Spring, Quarkus, Node.js, etc.). We're also seeing more interest in building in added intelligence through analytics and AI/ML, and even in automating workflows through distributed event-driven architectures, serverless, and functions.

So, as folks modernize their applications, a lot of questions come up: when and how should they transition existing applications, how do those applications integrate with their business processes, and what development processes and methodologies are they adopting? Are they using an agile or waterfall methodology? Are they ready to adopt CI/CD pipelines and GitOps to operationalize their workflows and create a continuous application lifecycle?
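
On the GitOps point specifically, one common pattern (offered here as a sketch, not a recommendation) is to have a tool such as Argo CD continuously sync a cluster from a Git repository; every name and URL below is hypothetical.

```yaml
# Illustrative Argo CD Application: desired cluster state is pulled from Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service           # hypothetical application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/acme/payments.git  # hypothetical repository
    targetRevision: main
    path: deploy/overlays/prod     # hypothetical manifests directory
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                  # remove resources that were deleted from Git
      selfHeal: true               # revert manual drift back to the Git state
```

With this in place, merging to the main branch is what deploys, which is one way to realize the continuous application lifecycle mentioned above.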

Q: Based on slide #12 from this presentation, should we assume that 76% for databases and data cache are larger, stateful container use cases?

A: In most cases, it is safe to assume these will be stateful applications that use databases, but they don't necessarily have to be large applications. The beauty of cloud-native deployments is that the code doesn't have to be one huge monolithic application. It can be a set of microservices that work together, each piece of code addressing a certain part of the overall workflow for a particular use case. As such, many pieces of code can be small but still use an underlying database to store relational data. Even services like a container registry or logging and metrics will use an underlying database. For example, a registry service may keep an object store of container images, but then have a database that maintains an index and catalog of those images.
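
For instance, a small database like that could run as a StatefulSet whose volume claim template requests its persistent storage. Everything below is an illustrative sketch rather than a reference deployment.

```yaml
# Illustrative StatefulSet: a modest database behind a set of microservices.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: catalog-db                 # hypothetical index/catalog database
spec:
  serviceName: catalog-db
  replicas: 1
  selector:
    matchLabels:
      app: catalog-db
  template:
    metadata:
      labels:
        app: catalog-db
    spec:
      containers:
        - name: postgres
          image: postgres:16       # any small relational database would do
          env:
            - name: POSTGRES_PASSWORD
              value: example       # hypothetical; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The microservices themselves stay small and stateless; the state lives in the claimed volume, which is the kind of stateful container use case the slide's percentages describe.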

If you’re looking for more educational information on Kubernetes, please check out the other webcasts we’ve done on this topic in the SNIA Educational Library.
