Accelerate Cloud-Native Continuous Deployment Using Helm for Kubernetes
Kubernetes has established itself as the de facto container orchestration platform for deploying cloud-native applications. Its growth shows no signs of slowing, as demonstrated by the fact that every major cloud provider offers a managed Kubernetes service (GKE, AKS, EKS). While Kubernetes has proved a critical tool for simplifying cloud-native continuous deployment, it has also increased container management complexity, which can hold up delivery. However, it is possible to significantly reduce feature delivery lead time on Kubernetes by making the deployment mechanism robust, consistent, and reusable. Digital engineers can use a variety of tools to deploy an application, its configuration, and Kubernetes-specific objects, including kubectl, Helm, Kustomize, Skaffold, and Draft. Today, we’ll be focusing on Helm, a valuable solution for managing Kubernetes apps.
Managing Kubernetes – Common Challenges
The Kubernetes-native approach for deploying applications and other Kubernetes objects (persistent volumes, ConfigMaps, ingresses, network policies, etc.) involves writing YAML-based manifests. These manifests cannot evaluate variables or parameters, so teams end up writing almost identical manifests for different clusters or environments. Needless to say, this goes against the DRY (Don’t Repeat Yourself) software design principle.
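As a hypothetical illustration (the service name, image, and values below are invented), the staging and production copies of a Deployment manifest often differ in only one or two fields, yet each environment still needs its own full copy of the file:

```yaml
# dev/deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1                  # the prod copy differs only here (e.g. 5)...
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.4.2   # ...and in the image tag
```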
Furthermore, applications are continuously evolving due to business, executive, and technical requirements. The result? Further changes to application configurations and other Kubernetes objects, leading to configuration management challenges around keeping a Kubernetes manifest in sync with the application version it deploys. The knock-on effect of not being able to maintain this versioning is that it introduces additional challenges, for instance with rolling back an application in the event of deployment failure(s).
Additionally, modern applications and Kubernetes objects might depend on other applications and other Kubernetes objects, which further increases deployment complexity because each component and manifest needs to be deployed in the correct order.
The Solution: Helm
Helm is an official project of the Cloud Native Computing Foundation focused on packaging, sharing, and deploying cloud-native applications on Kubernetes. We are going to take a closer look at Helm v3 and how it can address the challenges around versioning, enable repeatable builds, and result in faster release cycles. Apexon has worked with multiple clients to slash their lead time for changes and increase the deployment frequency of their cloud-native applications by leveraging Helm, CD platforms, and advanced deployment patterns like blue-green and canary deployments. This blog will look at the core components that make up the Helm solution and then examine why digital engineering teams should consider using Helm.
Helm’s Core Offering
Charts
Helm uses a packaging format called charts. A chart is a collection of files that packages an application, its configuration, and the Kubernetes objects it needs. Charts provide (see the sketch after this list):
- Templatized Kubernetes manifests
- Dependent Helm charts
- Chart version
- Default configuration values
- Chart tests
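As a sketch, a chart for a hypothetical my-service application follows Helm’s standard layout, with templatized manifests that read environment-specific values at install time (file and directory names below are Helm conventions; the content is illustrative):

```yaml
# Standard chart layout:
#   my-service/
#     Chart.yaml          # chart metadata, version, and dependencies
#     values.yaml         # default configuration values
#     charts/             # dependent charts
#     templates/          # templatized Kubernetes manifests
#     templates/tests/    # chart tests

# templates/deployment.yaml (excerpt) -- values are injected at install/upgrade time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-service
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```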
Chart Repository
A Helm chart repository is an HTTP server that hosts and serves packaged charts. Its main functions are to:
- Store charts and each of their published versions
- Maintain metadata about the stored charts (the repository index)
Because a chart repository is just an HTTP server, it can easily be hosted in the cloud on object storage (AWS S3, GCS, etc.) or on-premises behind a web server (Apache HTTPD, NGINX).
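The repository metadata lives in an index.yaml file at the root of the repository, generated and updated with `helm repo index`. A sketch of a single entry (chart name, versions, timestamps, and URLs are all illustrative) looks like this:

```yaml
# index.yaml (illustrative excerpt)
apiVersion: v1
entries:
  my-service:
    - name: my-service
      version: 1.4.2              # chart version
      appVersion: "1.4.2"         # application version packaged by the chart
      created: "2024-01-15T10:22:31Z"
      digest: 3f6c1a...           # sha256 of the chart package (truncated here)
      urls:
        - https://charts.example.com/my-service-1.4.2.tgz
generated: "2024-01-15T10:22:31Z"
```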
Why Helm?
- Helm chart templates significantly reduce code redundancy, since the same chart can be used for multiple environments with different environment-specific values
- Helm’s chart dependency management enables reusability by sharing common charts across applications
- Out of the box, Helm deploys Kubernetes objects in a predefined order (for example, ConfigMaps and Secrets before Deployments)
- Helm can validate a release after installation through chart tests
- Helm has native functionality to upgrade and roll back chart releases
- Helm is driven by a CLI, so it can be integrated with any CI/CD toolchain
- Helm charts support lifecycle hooks, which allow custom actions during the pre/post phases of install, upgrade, delete, and rollback (see the hook sketch after this list)
- CD platforms like Spinnaker or Harness.io integrate natively with Helm
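For example, a lifecycle hook is just a regular templated manifest carrying Helm’s hook annotations. The following sketch (the database migration job and the values key it reads are hypothetical) runs before every install and upgrade:

```yaml
# templates/db-migrate-hook.yaml -- hypothetical pre-install/pre-upgrade hook
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"                    # ordering among hooks of the same phase
    "helm.sh/hook-delete-policy": hook-succeeded  # clean up the job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: "{{ .Values.migrations.image }}"   # assumed values key
```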
Continuous Deployment with Helm
Versioning strategy
Before jumping into deployment with Helm, it is important to define your team’s Helm chart versioning strategy. Helm charts follow semantic versioning, and teams have two options. If a single application is released as part of a single Helm chart without any dependencies, the most straightforward approach is to use the same semantic version for the Helm chart as for the application. However, separate semantic versions may be more suitable when multiple applications are released as part of a single Helm chart. This strategy is more complex to implement, but it preserves version correlation and traceability between the Helm chart and the applications deployed as part of it. It does, however, require team coordination on when to bump the chart version.
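In Chart.yaml, the chart version and the packaged application version are separate fields, which is what makes either strategy possible. A sketch for the single-application case (the chart name and version numbers are illustrative) could be:

```yaml
# Chart.yaml (excerpt) -- single-application chart, chart version tracks the app
apiVersion: v2
name: my-service
type: application
version: 1.4.2        # chart version (SemVer), bumped on every chart change
appVersion: "1.4.2"   # version of the application the chart deploys
```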
Deployment strategy
CI/CD best practices recommend generating an artifact once and promoting that same artifact through each environment. The two options for engineering teams to weigh up are whether to promote charts across environments using a single shared chart repository or a separate chart repository per environment.
If using the same chart repository across different environments, the Helm chart promotion strategy should involve the following steps for CI/CD:
- Publish the Helm chart to the chart repository as part of the CI pipeline
- The CD pipeline will deploy the application via the Helm chart on the dev environment
- Upon integration and sanity testing, the CD pipeline will promote the Helm chart to the staging environment where it will be deployed with staging environment-specific values
- The same process is then repeated for higher environments
It is a simple approach to implement, but it requires guard rails to prevent the accidental promotion of unwanted Helm chart releases.
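A minimal sketch of that flow, written as a hypothetical GitLab CI-style pipeline (the repository URL variables, the helm-s3 plugin, the CHART_VERSION variable, and the chart/release names are all assumptions), might look like this:

```yaml
stages: [publish, deploy-dev, deploy-staging]

before_script:
  - helm repo add shared-repo "${CHART_REPO_URL}"   # e.g. s3://example-charts/stable
  - helm repo update

publish-chart:             # CI: package and publish the chart once
  stage: publish
  script:
    - helm lint charts/my-service
    - helm package charts/my-service --version "${CHART_VERSION}"
    - helm s3 push "my-service-${CHART_VERSION}.tgz" shared-repo   # helm-s3 plugin assumed

deploy-dev:                # CD: deploy the published chart with dev values
  stage: deploy-dev
  script:
    - helm upgrade --install my-service shared-repo/my-service
        --version "${CHART_VERSION}" -f values/dev.yaml --namespace dev

deploy-staging:            # promotion = same chart version, staging values
  stage: deploy-staging
  when: manual             # guard rail: promotion requires an explicit approval
  script:
    - helm upgrade --install my-service shared-repo/my-service
        --version "${CHART_VERSION}" -f values/staging.yaml --namespace staging
```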
If promoting Helm charts across different environments and different chart repositories, the steps should be as follows:
- Push the Helm chart to the development chart repository as part of the CI pipeline
- The CD pipeline will deploy the application using the development repository Helm chart on the dev environment
- Upon integration and sanity testing, the CD pipeline will promote the Helm chart to the staging chart repository and then deploy the application via the staging repository Helm chart with staging-specific values
- The same process is then repeated for higher environments
This approach is flexible and robust: the production chart repository never contains uncertified chart versions, so teams cannot accidentally promote unwanted releases to production. The trade-off is a little extra complexity in the workflow.
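The extra step is the chart copy itself. A hypothetical promotion job, in the same GitLab CI-style sketch as above (repository names, URL variables, and the helm-s3 plugin are again assumptions), could pull the certified chart from the development repository and push it to the staging repository before deploying:

```yaml
promote-and-deploy-staging:
  stage: deploy-staging
  when: manual                       # promotion gated on an explicit approval
  script:
    - helm repo add dev-repo "${DEV_CHART_REPO_URL}"
    - helm repo add staging-repo "${STAGING_CHART_REPO_URL}"   # e.g. s3://example-staging-charts
    - helm pull dev-repo/my-service --version "${CHART_VERSION}"
    - helm s3 push "my-service-${CHART_VERSION}.tgz" staging-repo   # helm-s3 plugin assumed
    - helm repo update
    - helm upgrade --install my-service staging-repo/my-service
        --version "${CHART_VERSION}" -f values/staging.yaml --namespace staging
```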
If you are looking for ways to accelerate your organization’s cloud-native continuous deployment, get in touch with Apexon today using the form below.