Kubernetes Business Resiliency FAQ

The SNIA Cloud Storage Technologies Initiative continued our webcast series on Kubernetes last month with an interesting look at the business resiliency of Kubernetes. If you missed “A Multi-tenant, Multi-cluster Kubernetes Datapocalypse is Coming,” it’s available, along with the slide deck, in the SNIA Educational Library here. In this Q&A blog, our Kubernetes expert, Paul Burt, answers some frequently asked questions on this topic.

Q: Multi-cloud: Departments might have their own containers; would they have their own cloud (i.e. Hybrid Cloud)?  Is that how multi-cloud might start in a company?

A: Multi-cloud or hybrid cloud is absolutely a result of different departments scaling containers in a deployment. Multi-cloud means multiple clusters, but those can be of various configurations. Different clusters and clouds need to be tuned for the needs of the organization.

Netflix is perhaps one of the most popular adopters of the cloud. Following their success, most organizations prefer a gradual shift towards cloud. That practice naturally results in different environments during the growth and exploration phase.

Q: Service Mesh and Kube Federation: From the perspective of tools, how is the development of the various federation tools progressing?

A: There are some tools (Istio and Linkerd) that are quite far along. The official community solution, KubeFed, is just on the cusp of moving into Beta. We’re starting to see some standards or standard practices developing in this space. Companies like Snapchat and Uber are already sharing some of their benefits and challenges with multi-tenancy and multi-cluster at scale. More concrete recommendations should emerge as more organizations gain experience and share their findings.

Q: Defining cluster characteristics: Within a given container, does the service needed define or predict the statefulness of the application?

A: In Kubernetes, a Service defines how an app can be discovered by the other apps and services that rely on it. Services themselves are stateless routing abstractions; they push the state down into the workload. That means where the data is stored will be defined in the YAML manifest that gets pushed to Kubernetes, and stateful apps are pretty well defined in that environment.

Stateful apps will naturally be harder to manage because of their uptime requirements. The good news is that they are at least easy to identify when running on Kubernetes.
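As a sketch of what “pushing state down” looks like in practice, here is a minimal manifest pairing a headless Service with a StatefulSet. The names, image, and storage size are all hypothetical, but the shape is the point: the stateful nature of the app is visible right in the YAML.

```yaml
# Headless Service that gives each StatefulSet Pod a stable DNS name.
apiVersion: v1
kind: Service
metadata:
  name: demo-db          # hypothetical name
spec:
  clusterIP: None        # headless: used for Pod discovery, not load balancing
  selector:
    app: demo-db
  ports:
    - port: 5432
---
# The StatefulSet "pushes state down" via volumeClaimTemplates, so where
# the data lives is declared explicitly in the manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: postgres
          image: postgres:13          # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Anything declared this way (a StatefulSet with volume claims) is immediately identifiable as stateful, which is what makes these apps easy to spot on Kubernetes.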

Q: On how knowledge grows: It’s possible to start simply with containers and move to more complex environments. Does the knowledge gained from containers transfer to a multi-cluster, multi-tenant space?

A: Yes, it’s possible to start and grow. A lot of the knowledge will transfer. Think of it like math, with each course building on the last. We might learn trigonometry, and find that many of those trig problems simplify into algebra problems. Similarly, we might learn about Pods in Kubernetes, and find that they simplify down to many of the things we learned about containers.

In the end, your problems will likely resolve down to something familiar. Start simply, and let the complexity of your environment grow along with your understanding.

Q: Version control: Do we need coordinated version pinning when deploying homogenous clusters?  Does that allow us to write once and run everywhere?

A: Kubernetes is capable of that, but there are certainly some caveats. With a homogenous infrastructure and tightly maintained versions, you are more likely to be successful and will have fewer versioning and bug issues to track over time.

On the other hand, a machine learning team that creates a recommendation system has vastly different needs than a team building an e-commerce website.

Our goal should be to start with a well-defined base platform. With a well-defined base, it’s easier to test the compatibility of those common components. When specific teams have specific needs, we’ll inevitably need to adapt our platform. That base should mean it’s easier to add new components with confidence. Troubleshooting, and maintaining the resulting distinct version of the platform, should also be easier because the base platform ensures a lot of familiarity and common knowledge transfers.
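One concrete form of coordinated version pinning is referencing container images by an exact tag (or, better, an immutable digest) rather than a floating tag like `latest`, so every cluster in the fleet runs the same bits. A hypothetical sketch, with illustrative names and registry:

```yaml
# Deployment fragment illustrating version pinning (names are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          # Pin to an exact version so homogenous clusters stay homogenous;
          # a floating tag like "latest" can silently diverge between clusters.
          image: registry.example.com/web:1.4.2
```

Checking manifests like this into version control makes the pinned versions auditable, which is what keeps a “write once, run everywhere” fleet honest.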

Q: Interface and management of security: There needs to be a balance between customer demand for cluster storage and security. Is there an appropriate interface that can manage security problems and give users enough space to easily consume storage on clusters? What’s the right design in this case?

A: Yes. Many of the current tools have really strong interfaces for managing security. Rook and Astra are two good examples. Most of these solutions are open source, but that does not necessarily mean they approach problems like security in the same way.

For any cluster storage solution, we’re probably looking for a few common features: encryption of data at rest, RBAC/permissions, snapshots, and backups. For components that are defined by Kubernetes (like RBAC), we’re more likely to see a consistent way of doing things amongst tools. For other items, like encrypting data or backing up, it’s more likely we’ll see each solution tackle the problem in a slightly different way.
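To make the RBAC and snapshot pieces concrete, here is a hedged sketch of both: a namespaced Role that gives a team just enough permission to consume storage, and a CSI VolumeSnapshot of one of that team’s claims. The names, namespace, and snapshot class are all assumptions for illustration.

```yaml
# A namespaced Role letting one team create and manage its own
# PersistentVolumeClaims, without broader cluster-storage permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-user            # hypothetical name
  namespace: team-a         # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "create", "delete"]
---
# A CSI VolumeSnapshot of one of that team's claims. The snapshot class
# is where storage vendors differ, matching the point above that
# snapshot/backup behavior varies more between solutions than RBAC does.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-backup
  namespace: team-a
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative; vendor-specific
  source:
    persistentVolumeClaimName: data        # illustrative claim name
```

The RBAC half looks the same regardless of storage vendor; the VolumeSnapshot half depends on a CSI driver and snapshot class, which is exactly where tools diverge.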

The implication is that the cluster storage solution you start with will likely be with you for a while, so make sure you’re picking the right tool for the overall corporate needs.

Q: So that means it’s really important to do the research early? Is it possible to move between tools?

A: It is possible to move between tools, but it’s likely that your first choices will be the ones you carry forward. Beginning to research this early can help reduce the panic and stress that might come later on, when your organization discovers a hard need and resulting deadline for cluster storage.

Remember, we said this webcast was part of a series? Watch all of the CSTI’s Kubernetes in the Cloud presentations to learn more.
