Multiple Environments with Flux and Kluctl


Most projects with server-side components need to be deployed multiple times, at least if you don’t want to break things for your users all the time. This means there is not just a single “prod” environment, but also something like a “test” or “staging” environment. So there is usually a minimum of two different environments.

Each of these environments has a defined role to play. The “prod” environment is obviously the one used in production. A “test” or “staging” environment is usually where new versions are deployed so that they can be tested before being promoted to “prod”. A “dev” environment might be one where even high-risk deployments are allowed and breaking everything is daily business. You could even have multiple dev environments, e.g. one per developer.

In the Kubernetes world, it is perfectly viable and good practice to run multiple environments on a single cluster. Depending on the requirements of your project, you might want to separate prod from non-prod environments, but you could still have “prod” and “staging” on the same cluster while “test”, “dev”, “uat” exist on a non-prod cluster.

Environment Parity

If you want to make this work in a way that does not cause too much pain, you must practice environment parity as much as possible. The best way to achieve this is to have full automation and everything as code. The same “code” that was used to deploy “prod” should also be used to deploy “staging”, “test” and all other environments. The only difference in the deployments should be a defined set of configurations.

Such configurations can for example be the target cluster and/or namespace, the resource allocations (e.g. 1 replica in “dev”, 3 replicas in “prod”), external systems (e.g. databases) and ingress configuration (e.g. DNS and certificates). Configuration can also enable/disable conditional parts of the deployment, for example to disable advanced monitoring on “dev” environments and enable mocking services as replacements for real external systems.
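To make this concrete, such per-environment configuration is often captured in small values files that the deployment tooling consumes. A hypothetical sketch (file names and keys are invented for illustration; the “dev” counterpart would set replicas: 1, disable monitoring and point at a mock database):

# vars/prod.yml (hypothetical)
deployment_config:
  replicas: 3
  monitoring_enabled: true
  db_host: prod-db.example.com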

Tooling

There are multiple tools available that allow you to implement a multi-env/multi-cluster deployment that is completely automated and completely “as code”. Helm and Kustomize are currently the first tools that pop up when you look for such tooling. As written in my previous blog post, I believe that these tools are the best option for the things that they do very well, but a sub-optimal choice when it comes to configuration management.

Kluctl is currently my preferred solution. Not only because I built it, but also because I have so far not found a solution that is as easy to learn and use while being just as flexible.

Fully working multi-env example

I suggest opening the microservices demo in another tab and at least briefly looking into it (especially the third part). From now on, I will pick examples from this tutorial for this blog post.

Targets in Kluctl

Kluctl works with the concept of “targets”. A target is a named configuration that acts as the entry point for every further configuration required for your environment. As an example, look at the targets from .kluctl.yaml of the microservices demo:

targets:
  - name: local
    context: kind-kind
    args:
      env_type: local
  - name: test
    context: kind-kind
    args:
      env_type: real
  - name: prod
    context: kind-kind
    args:
      env_type: real

Based on these targets, the same deployment can be configured differently depending on the target, or more precisely, on the args passed via the target. In the microservices demo, env_type (the name is up to you) is used to include further configuration inside the deployment.yaml:

...
vars:
  - file: ./vars/{{ args.env_type }}.yml
...
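The referenced vars files contain arbitrary structured YAML that then becomes available to all templates. A plausible sketch of ./vars/local.yml (the actual content in the demo may differ), matching the services.*.enabled flags used further below:

services:
  emailservice:
    enabled: false
  loadgenerator:
    enabled: false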

At the same time, Kluctl makes the target configuration itself available to the deployment, making things like this possible:

apiVersion: v1
kind: Namespace
metadata:
  name: ms-demo-{{ target.name }}

You can also conditionally enable/disable parts of your deployment depending on the configuration loaded via the vars block above:

deployments:
  - path: adservice
  - path: cartservice
  - path: checkoutservice
  - path: currencyservice
  {% if services.emailservice.enabled %}
  - path: emailservice
  {% endif %}
  - path: frontend
  {% if services.loadgenerator.enabled %}
  - path: loadgenerator
  {% endif %}
  - path: paymentservice
  - path: productcatalogservice
  - path: recommendationservice
  - path: shippingservice

I hope the above snippets give you a feeling for how multi-env deployments can be solved with Kluctl. As already mentioned, I suggest reading through the microservices demo tutorial to get an even better understanding. The first two parts describe some Kluctl basics, while the third part enters multi-env deployments.

GitOps and Flux

Kluctl is designed to allow seamless co-existence of CLI-based workflows, classical CI/CD and GitOps. This means that even if you decide to perform prod deployments only via GitOps, you can still perform exactly the same deployment to other environments through your CLI. All this without any struggle (you really only need access to the cluster) and 100% compatible with how the same deployment would be performed via GitOps or CI/CD.

This allows you to adapt your workflow to your current goal. For example, a developer testing out risky and bleeding-edge changes in their personal dev environment can deploy from their local machine, avoiding the painful “modify -> push -> wait -> error -> repeat” cycles seen too often in pure GitOps and CI/CD setups. When the developer is done with the changes, GitOps can take over on the other (e.g. “test” and later “prod”) environments.

Even “prod”, which in the above scenario is GitOps-managed, can benefit from the possibility to run Kluctl from your local machine. Running a “kluctl diff -t prod” before promoting to “prod” can prevent some scary surprises.

Kluctl implements GitOps via the flux-kluctl-controller. It allows you to create KluctlDeployment objects which refer to your Kluctl project (which resides in Git) and the target to be deployed.

Installing flux-kluctl-controller

Before you can create KluctlDeployment objects, the flux-kluctl-controller needs to be installed. Please navigate to the installation documentation and follow the instructions found there.

Microservices Demo and Flux

Deploying the microservices demo via Flux is quite easy. First, we’ll need a GitRepository object that refers to the Kluctl project:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: microservices-demo
spec:
  interval: 1m
  url: https://github.com/kluctl/kluctl-examples.git
  ref:
    branch: main

This will cause Flux to pull the source whenever it changes. A KluctlDeployment can then refer to the GitRepository:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-test
spec:
  interval: 5m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: test
  prune: true
  # kluctl targets specify the expected context name, which does not necessarily match the context name
  # found while it is deployed via the controller. This means we must pass a kubeconfig to kluctl that has the
  # context renamed to the one that it expects.
  renameContexts:
    - oldContext: default
      newContext: kind-kind

The above example will cause the controller to deploy the “test” target/environment to the same cluster that the controller runs on. The same deployment can also be deployed to “prod” with a slightly different KluctlDeployment:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-prod
spec:
  interval: 5m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: prod
  prune: true
  # kluctl targets specify the expected context name, which does not necessarily match the context name
  # found while it is deployed via the controller. This means we must pass a kubeconfig to kluctl that has the
  # context renamed to the one that it expects.
  renameContexts:
    - oldContext: default
      newContext: kind-kind

Multiple Clusters

To make things easy for now, the above examples stick with a single cluster. The microservices demo project is deploying all targets to different namespaces already, so this is enough to showcase the Flux support. To make it work with multiple clusters, simply install the controller on another cluster and create the appropriate KluctlDeployment objects per cluster.

As an alternative, you can have a central Flux (+ flux-kluctl-controller) installation that deploys to multiple clusters. This can be achieved with the help of the spec.kubeconfig and spec.serviceAccountName fields of the KluctlDeployment object.
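As a sketch (assuming the controller follows the common Flux convention of reading the kubeconfig from a Secret; check the controller documentation for the exact field names), a KluctlDeployment targeting a remote cluster could look roughly like this:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-prod-remote
spec:
  interval: 5m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  target: prod
  prune: true
  # hypothetical Secret containing the kubeconfig of the remote cluster
  kubeConfig:
    secretRef:
      name: prod-cluster-kubeconfig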

Also, as the examples stem from the microservices demo, they use the kind-kind context names. In a more realistic setup, you would use the real cluster/context names here. This also assumes that all developers will then use the same context names to refer to the same clusters. If this is honored, you gain 100% compatibility between the GitOps based deployments and CLI based deployments.

What’s next?

The flux-kluctl-controller and Kluctl itself already support dynamic feature/review environments, meaning that the controller can create new targets/deployments dynamically. The next blog article will go into the details of these feature/review environments.

Rethinking Kubernetes Configuration Management

One of the big advantages of Kubernetes is its declarative nature when it comes to deployed resources. You define what state you expect, apply it to Kubernetes and then let it figure out how to get to that state.

At first, this sounds like a big win, and it actually is. It doesn’t take long, however, for beginners to realize that simply writing YAML manifests and applying them via “kubectl apply” is not enough.

The beginner realizes pretty early that configuration between different manifests must be kept in sync, e.g. a port defined in a Service must match the port used when accessing the service from another Deployment. Hardcoding these configuration values works to some degree, but gets messy when your deployment projects grow larger.

It gets completely unmanageable when the same resources need to be deployed multiple times, e.g. for multiple environments (local, test, uat, prod, …) and/or clusters. Copying and modifying all resources is probably the worst solution one could come up with at this stage.

A poor man’s solution that works a little better is to introduce glue Bash code that performs substitutions and decides which manifests to actually apply. As with most Bash scripting, this tends to start with “just a few lines” and ends up as unmaintainable spaghetti code.

Spoiler: This blog post is meant to introduce and promote Kluctl, which is a new tool and approach to configuration management in Kubernetes. Skip the next sections about existing tools if you don’t want to read about stuff you probably already know well enough.

Multiple tools to the rescue

Luckily, there are multiple solutions available that try to solve the situation described above. Each of these tools takes a different approach, with its own pros and cons. Many of the pros/cons are a matter of personal preference, e.g. text-based templating is preferred by some and avoided by others.

With that in mind, I’ll list a few tools that I know of, together with my personal opinion/preference/experience with each.

Kustomize

Kustomize advertises itself as a “template-free” solution for configuration management. Instead of templating, it relies on “bases and overlays” for composition and configuration of different environments.

Overlays include bases while performing some basic modifications to these (e.g. add a name prefix). More complex modifications to bases require the use of strategic or json6902 patches.
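As an illustration (directory and resource names invented), an overlay typically pulls in the base via resources and applies patches on top:

# overlays/prod/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: prod-
resources:
  - ../../base
patches:
  # json6902 patch bumping the replica count for prod
  - target:
      kind: Deployment
      name: frontend
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3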

This approach usually starts clean and is easy to understand and maintain in the beginning. It can however easily get complicated and messy due to the unnatural project structure. It can become hard to follow the manifests and how/where modifications/patches got introduced.

The biggest selling point of Kustomize is its ease of use and flat learning curve. There is almost nothing new to learn, as everything is plain YAML and looks very familiar if you already know how to write and read Kubernetes manifests. Kustomize also offers some very convenient tools, like Secret and ConfigMap generators. It is also integrated into kubectl and thus does not need any extra installation.

An interesting and revealing observation I made: even though Kustomize does not offer templating on its own, other tools that leverage Kustomize (e.g. Flux or ArgoCD) offer features like substitutions to fulfill a recurring need of their users. This loosens the “template-free” nature of Kustomize and signals a clear demand for templating with it.

Helm

The first thing you see on helm.sh is: “The package manager for Kubernetes”. This already reveals what it’s very good for: Package management. If you want to pick a ready-to-use, pre-packaged and well-configurable application from a huge library of “Charts”, simply go to hub.helm.sh and look for what you need.

Installation is as simple as two command line calls, “helm repo add …” and “helm install …”. Other commands then allow you to manage the installation, e.g. upgrade or delete it.

What it’s not good at is composition and configuration management for your infrastructure and applications. Helm Charts require quite some boilerplate templating if you want to get to the flexibility that is offered by many of the publicly available charts. At the same time, you usually don’t need that flexibility and end up with templates that only have very few configuration options.

If you still decide to use Helm for composition and configuration management, you might end up with all the overhead of Helm without any of the advantages of it. For example, you would have to provide and maintain a Helm repository and actually perform releases into it, even if such release management doesn’t make sense for you.

Composition of Helm Charts requires the use of “Umbrella Charts”. These are charts that don’t define resources themselves but instead refer to multiple dependent charts. Configuration of these sub-charts is passed down from the umbrella chart. The umbrella chart, like all other charts, also requires you to perform release management and maintain a repository.
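Sketched with invented names, an umbrella chart’s Chart.yaml just declares its sub-charts as dependencies; values for each sub-chart are then nested under the dependency’s name in the umbrella chart’s values.yaml:

# Chart.yaml of a hypothetical umbrella chart
apiVersion: v2
name: my-app-umbrella
version: 0.1.0
dependencies:
  - name: backend
    version: 1.2.3
    repository: https://charts.example.com
  - name: redis
    version: 16.8.5
    repository: https://charts.bitnami.com/bitnami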

In my personal opinion, umbrella charts do not feel “natural”. It feels like a solution that only exists because it is technically possible to do it this way.

Helm and Kustomize cannot be used together without some outer glue, e.g. scripting or special tooling. Many solutions that glue Helm and Kustomize together will however lose some crucial Helm features along the way, e.g. Helm hooks and release management.

Using both tools in combination can still be very powerful, as it allows you to customize existing public Helm Charts in ways not offered by the chart maintainers. For example, if a chart does not offer configuration to add nodeSelectors, you can use a Kustomize patch to add the desired modifications to the resources in question.
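Assuming the chart output has been rendered into plain manifests that Kustomize picks up (resource names are invented here), such a patch could look like this:

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rendered-chart.yaml  # e.g. the output of "helm template"
patches:
  # strategic merge patch adding a nodeSelector the chart does not expose
  - target:
      kind: Deployment
      name: chart-component
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: chart-component
      spec:
        template:
          spec:
            nodeSelector:
              disktype: ssd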

(Kustomize has some experimental Helm integration, but the Kustomize developers strongly discourage use of it in production).

Flux CD

Flux allows you to implement GitOps style continuous delivery of your infrastructure and applications. It internally heavily relies on Kustomize and Helm to provide the actual resources.

Flux can also be used for composition and configuration management. It is for example possible to have a single Kustomization deployment that actually glues together multiple other Kustomizations and HelmReleases.

Flux also “enhances” Kustomize and Helm by allowing different kinds of preprocessing operations, e.g. substitutions for Kustomize and patches for Helm Charts. These enhancements are what make configuration management much easier with Flux.
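For example, Flux’s Kustomization supports variable substitution via postBuild, something plain Kustomize does not offer (repository and variable names here are invented):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo
  # manifests can reference ${cluster_name}, which Flux substitutes after the kustomize build
  postBuild:
    substitute:
      cluster_name: prod-1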

The biggest advantage of Flux (and GitOps in general) is however also one of its biggest disadvantages, at least in my humble opinion. You become completely dependent on the Flux infrastructure, even for testing of simple changes to your deployments. I assume every DevOps engineer knows the pain of recurring “modify -> push -> wait -> error -> repeat” cycles. Such trial and error sessions can become quite time-consuming and frustrating.

It would be much easier to test out your changes locally (e.g. via kind or minikube) and push to Git only when you feel confident enough that your changes are ready. This is not easily possible to do in a reliable way with Flux based configuration management. Even a dedicated environment/playground stays painful as it does not remove the described cycle.

Plain Kustomize and Helm would allow you to run the tools from your local machine, targeted against a local cluster or remote playground environment, but this would require you to replicate some of the features offered by Flux so that your testing stays compatible with what is later deployed via Flux.

Argo CD

Argo CD is another GitOps implementation that is similar to Flux. The most visible difference is that it also offers some very good UI to visualize your deployments and the state of these. I do not have enough experience with ArgoCD to go into detail, but I assume that many of the things I wrote about Flux also apply for ArgoCD.

Rethinking Kubernetes Configuration Management

I propose to rethink how the available tooling is used in daily business. Basically, I suggest that the previously described tools keep doing the parts that they are best at, while new tooling is introduced to solve the configuration management problem.

We need a tool that acts as “glue” between Kustomize, Helm and GitOps: something that allows declarative composition of self-written and third-party Kustomize deployments and third-party Helm Charts, with a powerful but easy-to-learn mechanism around it to synchronize configuration between all components.

It would get even better if a unified CLI managed all deployments the exact same way, no matter how large or complex: a CLI that does not interfere with GitOps-style continuous delivery and allows friendly co-existence.

Introducing Kluctl

My proposed solution for all this is Kluctl. It’s “the missing glue” (hence the name: glue with a k and without an e). Kluctl was born out of the need for a unified solution that removes the need for all other glue around Kubernetes deployments. It was developed against large real-life production deployments in a corporate setting, with practicability as one of the highest priorities.

It provides a powerful but easy to learn project structure and templating system that feels natural when used daily. The project structure follows a simple inclusion hierarchy and gives you full freedom on how to organize your project.

The templating engine allows you to load configuration from multiple sources (usually arbitrary structured yaml files) and use the configuration in all components of your project.
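For illustration (file names invented; see the Kluctl documentation for all supported source types), a deployment.yaml can chain several such sources, with later ones able to use values from earlier ones:

# deployment.yaml (sketch)
vars:
  - file: ./vars/common.yml               # shared configuration
  - file: ./vars/{{ args.env_type }}.yml  # per-environment configuration
  - values:                               # small inline overrides
      app:
        log_level: info

deployments:
  - path: my-app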

The project is then deployed to “targets”, which can represent an environment and/or cluster. Multiple targets can exist on the same cluster or on different clusters, it’s all up to you. The “target” acts as the entry point for your configuration, which then implicitly defines what a target means.

The Kluctl CLI lets you diff, deploy, prune and delete targets, all in a unified way. If you remember how to do a “deploy”, you will easily figure out how to diff or prune a target. If you remember how to do it for project X, you will also know how to do it for project Y. It’s always the same from the CLI perspective, no matter how simple or complex a deployment actually is.

Kluctl performs a dry run and shows a diff before a deployment is actually performed. Only if you agree with what would be changed are the changes actually applied. After the deployment, Kluctl shows another diff that visualizes what has been changed.

This means you always know what will happen and what has happened. It allows you to trust your deployments, even if you somehow lost track of the state of a target environment.

In addition, Kluctl works according to the premise: everything can, nothing must. That’s why it works equally well for small and large deployment projects, without the overhead some other tools impose.

Flux integration is possible via the flux-kluctl-controller and allows friendly co-existence of classical DevOps and GitOps. This means you can deploy to a playground environment from your local machine and let GitOps handle all other deployments, adhering to the GitOps principles.

Learning Kluctl

If you want to learn about Kluctl, go to kluctl.io and read the documentation. The best starting point is the Microservices Demo Tutorial, as it introduces the Kluctl concepts step by step.

After you’ve finished the tutorial, you should understand how projects are structured, how Kustomize and Helm are integrated, and how targets are used to implement multi-env and multi-cluster deployments.

Based on that, you should be able to imagine how Kluctl could play a role in your own projects and maybe you decide to give it a try.

More tutorials, documentation and blog posts will follow soon.

Kluctl and Flux

We’re very happy to announce that Kluctl can from now on be used together with Flux. This allows you to combine the workflows and features offered by Kluctl with GitOps-style continuous delivery.

GitOps vs Kluctl

One of the first questions we usually get when introducing Kluctl is something like: “Why not GitOps?” or “Why not Flux?”. There seems to be a common misunderstanding when people try to understand Kluctl at first sight: the belief that Kluctl is an alternative or competitor to GitOps and Flux (or even ArgoCD).

This is not the case. If one wants to compare Kluctl with something else, then it’s more appropriate to compare it to Helm, Kustomize or Helmfile. It should be clear that Kustomize for example is not an alternative/competitor for Flux, but instead an essential tool and building block to make it work.

Kluctl can be looked at from the same perspective when it comes to Flux. Flux implements Helm and Kustomize support via different controllers, namely the kustomize-controller and the helm-controller. With Kluctl, the same style of controller can be implemented and integrated into the Flux ecosystem.

Introducing the Kluctl Flux Controller

An alpha version of the Kluctl Flux Controller has just been released. It allows you to create KluctlDeployment objects which are reconciled in a similar fashion to Kustomizations.

Each KluctlDeployment specifies a source object (e.g. a GitRepository), the target to be deployed and some information on how to handle kubeconfigs. The controller then regularly reconciles the deployment, meaning that it will invoke kluctl deploy whenever a change is detected in the deployment.

Sounds great? Then take a look at this very simple example.

Kustomize/Helm vs Kluctl

If you’ve already read through the Kluctl documentation, you’ve probably noticed that Kluctl internally uses Kustomize and Helm extensively.

This might raise the question: why not use plain Kustomize and/or Helm if Flux is already involved? Good question! Let’s take a look:

Kluctl Projects/Deployments

If you prefer the way Kluctl organizes and structures projects and deployments, then using the Flux Kluctl Controller is obviously the best choice. Kluctl allows you to easily glue together what belongs together. If, for example, a Redis database is required to make your application work, you can manage the Redis Helm release and your application in the same deployment, including the necessary configuration to let them talk to each other, as sketched below.
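A sketch of such a project (paths, chart version and namespace handling invented; see the Kluctl Helm integration docs for the exact helm-chart.yaml format):

# deployment.yaml (sketch)
deployments:
  - path: redis    # contains the helm-chart.yaml below
  - path: my-app   # plain Kustomize deployment of your application

# redis/helm-chart.yaml (sketch)
helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 16.8.5
  releaseName: redis
  namespace: my-app-{{ target.name }}
  output: deploy.yml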

To see how different a Kluctl deployment is compared to classic Kustomize/Helm + Flux, compare the flux2-kustomize-helm-example with the Kluctl Microservices Demo (here is the tutorial for the demo).

Native multi-env support

Kluctl allows you to natively create deployment projects that can be deployed multiple times to different environments/targets. You can for example have one target that is solely meant for local (e.g. Kind based) deployments, one that targets the test environment and one for prod. You can then use templating to influence deployments in whatever way you like. For example, you could change the local target to set all replicas to 1 and skip resource hungry support applications (e.g. monitoring infrastructure).
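As a sketch (the variable name is invented and would be loaded from per-target vars files), a templated manifest could scale per target like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  # e.g. 1 in the "local" vars file, 3 in "prod"
  replicas: {{ deployment_config.replicas }}
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:latest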

This is possible in plain Kustomize as well, but there you have to solve it without the concept of targets and without templating. In Kustomize, multi-env deployments must be solved with overlays, which does not necessarily align with how you prefer to structure your project.

Mix DevOps and GitOps

The core idea of GitOps is that Git becomes the single source of truth for the desired cluster state. This is something that is extremely valuable with many advantages compared to other approaches. There are however still situations where diverging from GitOps is very valuable as well.

For example, when you start a new deployment project, you’re usually in a state of frequent changes inside the deployment project. These frequent changes need frequent deployments and testing until you get to a point where things are stable enough. If you’re forced to adhere to GitOps in that situation, you end up with very noisy Git histories and plenty of trial-and-error deployment cycles. This is a major productivity killer and we believe there has to be a better way.

With Kluctl, you can start developing locally and deploying from your local machine, with the guarantee that what you see is what will also happen later when GitOps is introduced for the same deployment. When you’re ready, push to Git, create the appropriate KluctlDeployment resource and let GitOps/Flux do its magic.

You can also use dedicated targets for development purposes and only deploy to them from your local machine, while other targets are deployed via GitOps/Flux.

See it in action

If you want to see the Flux Kluctl Controller in action, check out template-cluster-k3s-kluctl, which is a fork of k8s@home template-cluster-k3s with all HelmReleases and Kustomizations ported to KluctlDeployments.

What now?

More documentation, guides, tutorials and examples will follow in the next few days and weeks.

Hello kluctl fans!

It’s alive!

As you can see, we’ve launched https://kluctl.io, which will be the future home of all kluctl-related documentation, tutorials, blog articles and much more.

We’re looking forward to what is coming for kluctl!