Kluctl Documentation

The missing glue to put together large Kubernetes deployments.

Kluctl is the missing glue that puts together your (and any third-party) deployments into one large declarative Kubernetes deployment, while making it fully manageable (deploy, diff, prune, delete, …) via one unified command line interface.

Kluctl tries to be as flexible as possible, while remaining as simple as possible. It reuses established tools (e.g. Kustomize and Helm), making it possible to re-use a large set of available third-party deployments.

Kluctl is centered around “targets”, which can be a cluster or a specific environment (e.g. test, dev, prod, …) on one or multiple clusters. Targets can be deployed, diffed, pruned, deleted, and so on. The idea is to have the same set of operations for every target, no matter how simple or complex the deployment and/or target is.

Kluctl does not depend on external operators/controllers and allows you to use the same deployment wherever you want, as long as access to the kluctl project and clusters is available. This means that you can use it from your local machine, from your CI/CD pipelines, or from any automation platform/system that allows calling custom tools.

Flux support is in alpha state and available via the flux-kluctl-controller.

Kluctl in Short

💪 Kluctl handles all your deployments You can manage all your deployments with Kluctl, including infrastructure-related ones and your applications.
🪶 Complex or simple, all the same You can manage complex and simple deployments with Kluctl. Simple deployments are lightweight, while complex deployments remain easily manageable.
🤖 Native git support Kluctl has native Git support integrated, meaning that it can easily deploy remote Kluctl projects or externalize parts (e.g. configuration) of your Kluctl project.
🪐 Multiple environments Deploy the same deployment to multiple environments (dev, test, prod, …), with flexible differences in configuration.
🌌 Multiple clusters Manage multiple target clusters (in multiple clouds or bare-metal if you want).
🔩 Configuration and Templating Kluctl allows you to use templating in nearly all places, making it easy to have dynamic configuration.
⎈ Helm and Kustomize The Helm and Kustomize integrations allow you to reuse plenty of third-party charts and kustomizations.
🔍 See what’s different Always know what the state of your deployments is by being able to run diffs on the whole deployment.
🔎 See what happened Always know what you actually changed after performing a deployment.
💥 Know what went wrong Kluctl will show you what part of your deployment failed and why.
👐 Live and let live Kluctl tries not to interfere with any other tools or operators. This is possible due to its use of server-side apply.
🧹 Keep it clean Keep your clusters clean by issuing regular prune calls.
🔐 Encrypted Secrets Manage encrypted secrets for multiple target environments and clusters.

What can I do with Kluctl?

Kluctl allows you to define a Kluctl project, which in turn defines Kluctl deployments and sub-deployments. Each Kluctl deployment defines Kustomize deployments.

A Kluctl project also defines targets, which represent your target environments and/or clusters.

The Kluctl CLI then allows you to deploy, diff, prune, delete, … your deployments.

Where do I start?

1 - Core Concepts

Core Concepts of Kluctl.

These are some core concepts in Kluctl.

Kluctl project

The kluctl project defines targets, secret sources and external git projects. It is defined via the .kluctl.yaml configuration file.

The kluctl project can also optionally define where the deployment project and cluster configs are located (external git projects).

Targets

A target defines a target cluster and a set of deployment arguments. Multiple targets can use the same cluster. Targets allow implementing multi-cluster, multi-environment, multi-customer, … deployments.
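
A minimal sketch of how targets are defined in the .kluctl.yaml (the target and context names are illustrative):

targets:
  - name: prod
    context: prod.example.com
    args:
      environment_name: prod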

Deployments

A deployment defines which Kustomize deployments and which sub-deployments to deploy. It also controls the order of deployments.

Deployments may be configured through deployment arguments, which are typically provided via the targets but might also be provided through the CLI.

Variables

Variables are the main source of configuration. They are either loaded from YAML files or defined directly inside deployments. Each variables file that is loaded has access to all variables defined before it, allowing complex composition of configuration.

After being loaded, variables are usable through the templating engine in nearly all places.

Templating

All configuration files (including .kluctl.yaml and deployment.yaml) and all Kubernetes manifests involved are processed through a templating engine. The templating engine allows simple variable substitution and also complex control structures (if/else, for loops, …).
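
As a small illustration (the resource and argument names here are hypothetical), a manifest could look like this before rendering:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-app-{{ target.name }}
data:
  environment: "{{ args.environment_name }}"
{% if args.environment_name == "prod" %}
  logLevel: "warn"
{% else %}
  logLevel: "debug"
{% endif %}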

Secrets

Secrets are loaded from external sources and are only available while sealing. After the sealing process, only the public-key encrypted sealed secrets are available.

Sealed Secrets

Sealed Secrets are based on Bitnami’s sealed-secrets controller. Kluctl offers integration of sealed secrets through the seal command. Kluctl allows managing multiple sets of sealed secrets for multiple targets.

Unified CLI

The CLI of kluctl is designed to be unified/consistent as much as possible. Most commands are centered around targets and thus require you to specify the target name (via -t <target>). If you remember how one command works, it’s easy to figure out how the others work. Output from all targets based commands is also unified, allowing you to easily see what will and what did happen.
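
For example, assuming a target named prod exists in your project, the most important commands all follow the same pattern:

kluctl diff -t prod
kluctl deploy -t prod
kluctl prune -t prod
kluctl delete -t prod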

2 - Get Started with Kluctl

Get Started with Kluctl.

This tutorial shows you how to get started with Kluctl by deploying one of the example projects to a local Kubernetes cluster.

Before you begin

A few things must be prepared before you actually begin.

Get a Kubernetes cluster

The first step is of course: you need a Kubernetes cluster. It doesn’t really matter where this cluster is hosted, whether it’s a local (e.g. kind) cluster, a managed cluster, or a self-hosted cluster (kops or kubespray based; AWS, GCE, Azure, and so on). kluctl is completely independent of how Kubernetes is deployed and where it is hosted.

There is however a minimum Kubernetes version that must be met: 1.20.0. This is due to the heavy use of server-side apply which was not stable enough in older versions of Kubernetes.

Prepare your kubeconfig

Your local kubeconfig should be configured to have access to the target Kubernetes cluster via a dedicated context. The context name should match with the name that you want to use for the cluster from now on. Let’s assume the name is test.example.com, then you’d have to ensure that the kubeconfig context test.example.com is correctly pointing and authorized for this cluster.

See Configure Access to Multiple Clusters for documentation on how to manage multiple clusters with a single kubeconfig. Depending on the Kubernetes provisioning/deployment tooling you used, you might also be able to directly export the context into your local kubeconfig. For example, kops is able to export and merge the kubeconfig for a given cluster.
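
For example, you can inspect the available contexts and switch to the desired one with kubectl:

kubectl config get-contexts
kubectl config use-context test.example.com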

Objectives

  • Check out one of the example Kluctl projects
  • Deploy to your local cluster
  • Change something and re-deploy

Install Kluctl

The kluctl command-line interface (CLI) is required to perform deployments.

To install the CLI with Homebrew run:

brew install kluctl/tap/kluctl

For other installation methods, see the install documentation.

Clone the kluctl examples

Clone the example project found at https://github.com/kluctl/kluctl-examples

git clone https://github.com/kluctl/kluctl-examples.git

Choose one of the examples

You can choose whatever example you like from the cloned repository. We will however continue this guide by referring to the simple-helm example found in that repository. Change the current directory:

cd kluctl-examples/simple-helm

Create your local cluster

Create a local cluster with kind:

kind create cluster

This will update your kubeconfig to contain a context with the name kind-kind. By default, all examples will use the currently active context.

Deploy the example

Now run the following command to deploy the example:

kluctl deploy -t simple-helm

Kluctl will perform a diff first and then ask for your confirmation to deploy. In this case, you should only see some objects being newly deployed. Afterwards, verify that the pods are running:

kubectl -nsimple-helm get pod

Change something and re-deploy

Now change something inside the deployment project. You could for example add replicaCount: 2 to deployment/nginx/helm-values.yml. After you have saved your changes, run the deploy command again:

kluctl deploy -t simple-helm

This time it should show your modifications in the diff. Confirm that you want to perform the deployment and then verify it:

kubectl -nsimple-helm get pod

You should see 2 instances of the nginx pod running now.

Where to continue?

Continue by reading through the tutorials and by consulting the reference documentation.

3 - Installation

Installing kluctl.

Install kluctl

The kluctl CLI is available as a binary executable for all major platforms. The binaries can be downloaded from the GitHub releases page.

With Homebrew for macOS and Linux:

brew install kluctl/tap/kluctl

With Bash for macOS and Linux:

curl -s https://kluctl.io/install.sh | bash

Container images

A container image with kluctl is available on GitHub:

  • ghcr.io/kluctl/kluctl:<version>

4 - Philosophy

The philosophy behind kluctl.

Kluctl tries to follow a few basic ideas and a philosophy. Project and deployment structure, as well as all commands, are centered around these.

Be practical

Everything found in kluctl is based on years of experience in daily business, from the perspective of a DevOps Engineer. Kluctl prefers practicability when possible, trying to make the daily life of a DevOps Engineer as comfortable as possible.

Consistent CLI

Commands try to be as consistent as possible, making it easy to remember how they are used. For example, a diff is used the same way as a deploy. This applies to all sizes and complexities of projects. A simple/single-application deployment is used the same way as a complex one, so that it is easy to switch between projects.

Mostly declarative

Kluctl tries to be declarative whenever possible, but loosens this in some cases to stay practical. For example, hooks, barriers and waitReadiness allow you to control the order of deployments in a way that a purely declarative approach would not allow.
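
To illustrate (a sketch, with illustrative paths), a barrier item in a deployment.yml ensures that all items before it have been fully applied before kluctl continues with the items after it:

deployments:
  - path: namespaces
  - barrier: true
  - include: applications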

Predictable and traceable

Always know what will happen (diff or --dry-run) and always know what happened (output changes done by a command). There is nothing worse than not knowing what’s going to happen when you deploy the current state to prod. Not knowing what happened is on the same level.

Live and let live

Kluctl tries to not interfere with any other tools or operators. It achieves this by honoring managed fields in an intelligent way. Kluctl will never force-apply anything without being told to do so, and it will always inform you about fields that you lost ownership of.

CLI/Client first

Kluctl is centered around a unified command line interface and will always prioritize this. This guarantees that the DevOps Engineer never loses control, even if automation and/or GitOps style operators are being used.

No scripting

Kluctl tries its best to remove the need for scripts (e.g. Bash) around deployments. It tries to remove the need for external orchestration of deployment order and/or dependencies.

5 - History

The history of kluctl.

Kluctl was created after multiple incarnations of complex multi-environment (e.g. dev, test, prod) deployments, including everything from monitoring, persistence and the actual custom services. The philosophy of these deployments was always “what belongs together, should be put together”, meaning that only as many Git repositories as necessary were involved.

The problems to solve turned out to be always the same:

  • Dozens of Helm Charts, kustomize deployments and standalone Kubernetes manifests needed to be orchestrated in a way that they work together (services need to connect to the correct databases, and so on)
  • (Encrypted) Secrets needed to be managed and orchestrated for multiple environments and clusters
  • Updates of components were always risky and required keeping track of what actually changed since the last deployment
  • Available tools (Helm, Kustomize) were not suitable to solve this on their own in an easy/natural way
  • A lot of Bash scripting was required to put things together

When this got more and more complex, and the bash scripts started to become a mess (as “simple” Bash scripts always tend to become), kluctl was started from scratch. It now tries to solve the mentioned problems and provide a useful set of features (commands) in a sane and unified way.

The first versions of kluctl were written in Python, hence the use of Jinja2 templating in kluctl. With version 2.0.0, kluctl was rewritten in Go.

6 - Guides

6.1 - Tutorials

6.1.1 - Microservices Demo

6.1.1.1 - 1. Basic Project Setup

Introduction

This is the first tutorial in a series of tutorials around the GCP Microservices Demo and the use of kluctl to deploy and manage the demo.

We will start with a simple kluctl project setup (this tutorial) and then advance to a multi-environment and multi-cluster setup (upcoming tutorial). Afterwards, we will also show what daily business (updates, housekeeping, …) with such a deployment looks like.

GCP Microservices Demo

From the README.md of GCP Microservices Demo:

Online Boutique is a cloud-native microservices demo application. Online Boutique consists of a 10-tier microservices application. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

This demo application seems to be a good example for a more or less typical application seen on Kubernetes. It has multiple self-developed microservices while also requiring third-party applications/services (e.g. redis) to be deployed and configured properly.

Ways to deploy the demo

The simplest and most naive way to deploy the demo is by using kubectl apply with the provided release manifests:

$ kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml

This is also what is shown in the README.md of the microservices demo.

The shortcomings of this approach are however easy to spot, and probably no one would ever follow this approach up to production. As an example, updates to the application and its dependencies will be hard to maintain. Housekeeping (deleting orphan resources) will also be hard to achieve. At some point in time, when you start deploying the application multiple times to different clusters and/or different environments, configuration will also become hard to maintain, as every target might need different configuration. Long story short…without proper tooling, you’ll easily run into painful limitations.

There are multiple solutions available that each solve parts of the limitations and problems. As an example, Helm and Kustomize are well known. Introducing these tools will easily bring you much further, but you will very likely end up with something complicated/complex around these tools to make it usable in daily business. In the worst case, you’d start using Bash scripts that orchestrate your deployments.

GitOps oriented solutions like ArgoCD and Flux are able to relieve you from parts of the deployment orchestration tasks, but bring in new complexities that need to be solved as well.

Deploying with kluctl

In this tutorial, we’ll show how the microservices demo can be deployed and managed with kluctl. We will start with a simple and naive deployment to a local kind cluster. The next tutorial in this series will then focus on making the deployment multi-environment and multi-cluster capable.

The goal is to make a deployment as simple as typing:

$ kluctl deploy -t local

Setting up the kluctl project

The first thing you need is an empty project directory and the .kluctl.yml project configuration:

$ mkdir -p microservices-demo/1-basic-setup
$ cd microservices-demo/1-basic-setup

Inside this new directory, create the file .kluctl.yml with the following content:

targets:
  - name: local
    context: kind-kind

This is a very simple example with only a single target, being a local kind cluster.

You might have noticed that the target configuration refers to a kubectl context that does not exist yet. It’s time to create a local kind cluster now. To do so, first ensure that you have kind installed and then run:

$ kind create cluster

After this, you should have a local cluster set up and your kubeconfig prepared with a new context named kind-kind.

Setting up a minimal deployment project

Inside the kluctl project, you will now have to create a minimal deployment project. The deployment project starts with the root deployment.yml.

The location of this deployment.yml is the same as the .kluctl.yml. Create the file with the following content:

deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "microservices-demo"

This minimal deployment project contains two elements:

  1. The list of deployment items, which currently only consists of the upcoming redis deployment. The next chapter will explain this deployment.
  2. The commonLabels, which is a map of common labels and values. These labels are applied to all deployed resources and are later used by kluctl to identify resources that belong to this kluctl deployment.

Setting up the redis deployment

As seen in the previous chapter, the root deployment.yml refers to a redis deployment item. This deployment item must be located inside the sub-folder redis (hence the path: redis). kluctl expects each deployment item to be a kustomize deployment. Such a kustomize deployment can be as simple as a kustomization.yml with a single resources entry or a fully fledged kustomize deployment with overlays, generators, and so on.

For our example, first create the sub-directory redis:

$ mkdir redis

Then create the file redis/kustomization.yml with the following content:

resources:
  - deployment.yml
  - service.yml

Then create the file redis/deployment.yml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
        readinessProbe:
          periodSeconds: 5
          tcpSocket:
            port: 6379
        livenessProbe:
          periodSeconds: 5
          tcpSocket:
            port: 6379
        volumeMounts:
        - mountPath: /data
          name: redis-data
        resources:
          limits:
            memory: 256Mi
            cpu: 125m
          requests:
            cpu: 70m
            memory: 200Mi
      volumes:
      - name: redis-data
        emptyDir: {}

And the file redis/service.yml:

apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
  - name: redis
    port: 6379
    targetPort: 6379

The above files (deployment.yml and service.yml) are based on the content of redis.yaml from the original GCP Microservices Demo.

As you can see, there is nothing special about the contents of these files so far. It’s plain and simple Kubernetes YAML. The full potential of kluctl will become clear later, when we start to use templating inside these files. Only with templating will it become possible to support multi-environment and multi-cluster deployments.

Setting up the first microservice

Now it’s time to set up the first microservice. It is done the same way as we already set up the redis deployment.

First, create the sub-directory cartservice at the same level as you created the redis sub-directory. Then create the following files.

Another kustomization.yml

resources:
  - deployment.yml
  - service.yml

Another deployment.yml, with the content found here

Another service.yml, with the content found here

Finally add the new deployment item to the root deployment.yml

...
deployments:
  ...
  # add this line
  - path: cartservice
...

Setting up all other microservices

The GCP Microservices Demo is composed of multiple other services, which can be set up the same way as the microservice shown before. You can do this by yourself, or alternatively switch to the completed example found here.

From now on, we will assume that all microservices have been added (or that you switched to the example project).

Deploy it!

We now have a minimal kluctl project with two simple kustomize deployments. It’s time to deploy it. From inside the kluctl project directory, call:

$ kluctl deploy -t local
INFO[0000] Rendering templates and Helm charts          
INFO[0000] Building kustomize objects                   
Do you really want to deploy to the context/cluster kind-kind? (y/N) y
INFO[0001] Getting remote objects by commonLabels       
INFO[0001] Getting 24 additional remote objects         
INFO[0001] Running server-side apply for all objects    
INFO[0001] shippingservice: Applying 2 objects          
INFO[0001] paymentservice: Applying 2 objects           
INFO[0001] currencyservice: Applying 2 objects          
INFO[0001] frontend: Applying 3 objects                 
INFO[0001] loadgenerator: Applying 1 objects            
INFO[0001] recommendationservice: Applying 2 objects    
INFO[0001] productcatalogservice: Applying 2 objects    
INFO[0001] adservice: Applying 2 objects                
INFO[0001] cartservice: Applying 2 objects              
INFO[0001] emailservice: Applying 2 objects             
INFO[0001] checkoutservice: Applying 2 objects          
INFO[0001] redis: Applying 2 objects                    

New objects:
  default/Deployment/adservice
  default/Deployment/cartservice
  default/Deployment/checkoutservice
  default/Deployment/currencyservice
  default/Deployment/emailservice
  default/Deployment/frontend
  default/Deployment/loadgenerator
  default/Deployment/paymentservice
  default/Deployment/productcatalogservice
  default/Deployment/recommendationservice
  default/Deployment/redis-cart
  default/Deployment/shippingservice
  default/Service/adservice
  default/Service/cartservice
  default/Service/checkoutservice
  default/Service/currencyservice
  default/Service/emailservice
  default/Service/frontend
  default/Service/frontend-external
  default/Service/paymentservice
  default/Service/productcatalogservice
  default/Service/recommendationservice
  default/Service/redis-cart
  default/Service/shippingservice

The -t local selects the local target which was previously defined in the .kluctl.yml. Right now we only have this one target, but we will add more targets in upcoming tutorials from this series.

Answer with y to the question if you really want to deploy. The command will output what is happening and then show what has been changed on the target.

Playing around

You have now deployed redis and the cartservice microservice. You can now start to play around with some other kluctl commands. For example, try to change something inside cartservice/deployment.yml (e.g. set terminationGracePeriodSeconds to 10) and then run kluctl diff -t local:

$ kluctl diff -t local
INFO[0000] Rendering templates and Helm charts          
...

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+--------------------------------------------------+---------------------------+
| Path                                             | Diff                      |
+--------------------------------------------------+---------------------------+
| spec.template.spec.terminationGracePeriodSeconds | -5                        |
|                                                  | +10                       |
+--------------------------------------------------+---------------------------+

As you can see, kluctl now shows you what will happen. If you’d now perform a kluctl deploy -t local, kluctl would output what has happened (which would be the same as in the diff as long as you don’t change anything else).

If you try to remove (or at least comment out) a microservice, e.g. the cartservice and then run kluctl diff -t local again, you will get:

$ kluctl diff -t local
INFO[0000] Rendering templates and Helm charts          
...

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+--------------------------------------------------+---------------------------+
| Path                                             | Diff                      |
+--------------------------------------------------+---------------------------+
| spec.template.spec.terminationGracePeriodSeconds | -5                        |
|                                                  | +10                       |
+--------------------------------------------------+---------------------------+

Orphan objects:
  default/Service/cartservice
  default/Deployment/cartservice

As you can see, the resources belonging to cartservice are listed as “Orphan objects” now, meaning that they are not found locally anymore. A kluctl prune -t local would then give:

$ kluctl prune -t local
INFO[0000] Rendering templates and Helm charts          
...
Do you really want to delete 2 objects? (y/N) y

Deleted objects:
  default/Service/cartservice
  default/Deployment/cartservice

How to continue

The result of this tutorial is a naive version of the microservices demo deployment. There are a few things that you would solve differently in the real world, e.g. use Helm Charts for things like redis instead of providing self-crafted manifests. The next tutorials in this series will focus on a few improvements and refactorings that will make this kluctl project more “realistic” and more useful. They will also introduce concepts like multi-environment and multi-cluster deployments.

6.1.1.2 - 2. Helm Integration

Introduction

The first tutorial in this series demonstrated how to setup a simple kluctl project that is able to deploy the GCP Microservices Demo to a local kind cluster.

This initial kluctl project was however quite naive and too simple to be in any way realistic. For example, the project structure is too flat and will likely result in chaos when the project grows. Also, the project used self-crafted manifests where it might have been better to reuse feature-rich Helm Charts. We will fix both of these issues in this tutorial.

How to start

This tutorial is based on the results of the first tutorial. As an alternative, you can take the 1-basic-project example project found here and use it as the base to continue with this tutorial.

You can also deploy the base project and then incrementally perform deployments after each step in this tutorial. This way you will also gain some experience and a feeling for how to use kluctl.

A simple refactoring

Let’s start with a simple refactoring. Having all deployment items at the root level will quickly become unmaintainable.

kluctl allows you to structure your project in all kinds of fashions by leveraging sub-deployments. The deployment items found in deployment projects allow specifying includes, which point to a sub-directory containing another deployment.yml.

Let’s split the deployment into third-party applications (currently only redis) and the project specific microservices. To do this, create the sub-directories third-party and services. Then move the redis directory into third-party and all microservice sub-directories into services:

$ mkdir third-party
$ mkdir services
$ mv redis third-party/
$ mv adservice cartservice checkoutservice currencyservice emailservice \
    frontend loadgenerator paymentservice \
    productcatalogservice recommendationservice shippingservice services/

Now change the deployments list inside the root deployment.yml to:

deployments:
  - include: third-party
  - include: services

Add a deployment.yml with the following content into the third-party sub-directory:

deployments:
  - path: redis

And finally a deployment.yml with the following content into the services sub-directory:

deployments:
  - path: adservice
  - path: cartservice
  - path: checkoutservice
  - path: currencyservice
  - path: emailservice
  - path: frontend
  - path: loadgenerator
  - path: paymentservice
  - path: productcatalogservice
  - path: recommendationservice
  - path: shippingservice

To get an overview of these changes, look into this commit inside the example project belonging to this tutorial.

If you deploy the new state of the project, you’ll notice that only labels will change. These labels are automatically added to all resources and represent the tags of the corresponding deployment items.

Some notes on project structure

The refactoring from above is meant as an example that demonstrates how sub-deployments can be used to structure your project. Such sub-deployments can also include deeper sub-deployments, allowing you to structure your project in any way and complexity that fits your needs.

Introducing the first Helm Chart

There are many examples where self-crafting of Kubernetes manifests is not the best solution, simply because there is already a large ecosystem of pre-created Kubernetes packages in the form of Helm Charts.

The redis deployment found in the microservices demo is a good example for this, especially as many available Helm Charts offer quite a lot of functionality, for example high availability.

kluctl allows the integration of Helm Charts, which we will do now to replace the self-crafted redis deployment with the Bitnami Redis Chart.

First, create the file third-party/redis/helm-chart.yml with the following content:

helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 16.8.0
  releaseName: cart
  namespace: default
  output: deploy.yml

Most of the above configuration can be directly mapped to Helm invocations (pull, install, …). The output value has a special meaning and must be reflected inside the kustomization.yml resources list. The reason is that kluctl implements the Helm integration by running helm template and writing the result to the file configured via output. After this, kluctl expects kustomize to take over, which requires that the generated file is referenced in kustomization.yml.

To do so, simply replace the content of third-party/redis/kustomization.yml with:

resources:
  - deploy.yml

We now need some configuration for the redis chart, which is provided via third-party/redis/helm-values.yml:

architecture: replication

auth:
  enabled: false

sentinel:
  enabled: true
  quorum: 2

replica:
  replicaCount: 3
  persistence:
    enabled: true

master:
  persistence:
    enabled: true

The above configuration will configure redis to run in replication mode with sentinel and 3 replicas, giving us some high availability (at least in theory, as we’d still need a HA Kubernetes cluster and proper affinity configuration).

The Redis Chart will also deploy a Service resource, but with a different name than the self-crafted version. This means we have to fix the service name in services/cartservice/deployment.yml (look for the environment variable REDIS_ADDR) to point to cart-redis:6379 instead of redis-cart:6379.

You can now remove the old redis related manifests (third-party/redis/deployment.yml and third-party/redis/service.yml).

All the above changes can be found in this commit from the example project.

Pulling Helm Charts

We have now added a Helm Chart to our deployment, but to make it deployable it must be pre-pulled first. kluctl requires Helm Charts to be pre-pulled for multiple reasons, the most important being performance and reproducibility. Performance would suffer significantly if Helm Charts had to be pulled on-demand at deployment time. Also, Helm has no functionality to ensure that a chart you pulled yesterday is equivalent to the chart pulled today, even if the version is unchanged.

To pre-pull the redis Helm Chart, simply call:

$ kluctl helm-pull
INFO[0000] Pulling for third-party/redis/helm-chart.yml

This will pre-pull the chart into the sub-directory third-party/redis/charts. This directory is meant to be added to version control, so that it is always available when deploying.

If you ever change the chart version in helm-chart.yml, don’t forget to re-run the above command and commit the resulting changes.

Deploying the current state

It’s time to deploy the current state again:

$ kluctl deploy -t local
INFO[0000] Rendering templates and Helm charts          
...          

New objects:
  default/ConfigMap/cart-redis-configuration
  default/ConfigMap/cart-redis-health
  default/ConfigMap/cart-redis-scripts
  default/Service/cart-redis
  default/Service/cart-redis-headless
  default/ServiceAccount/cart-redis
  default/StatefulSet/cart-redis-node

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+-------------------------------------------------------+------------------------------+
| Path                                                  | Diff                         |
+-------------------------------------------------------+------------------------------+
| spec.template.spec.containers[0].env.REDIS_ADDR.value | -redis-cart:6379             |
|                                                       | +cart-redis:6379             |
+-------------------------------------------------------+------------------------------+

Orphan objects:
  default/Deployment/redis-cart
  default/Service/redis-cart

As you can see, the changes that we did to the kluctl project are reflected in the output of the deploy call, meaning that we can perfectly see what happened. We can see a few new resources which are all redis related, the change of the service name and the old redis resources being marked as orphan. Let’s get rid of the orphan resources:

$ kluctl prune -t local
INFO[0000] Rendering templates and Helm charts          
INFO[0000] Building kustomize objects                   
INFO[0000] Getting remote objects by commonLabels       
The following objects will be deleted:
  default/Service/redis-cart
  default/Deployment/redis-cart
Do you really want to delete 2 objects? (y/N) y

Deleted objects:
  default/Service/redis-cart
  default/Deployment/redis-cart

You have just performed your first house-keeping, which you’ll probably do quite often from now on in your daily DevOps business.

More house-keeping

When time passes, new versions of the Helm Charts that you integrated are going to be released. You might have to keep your deployments up-to-date in such cases. The most naive way is to increase the chart version inside helm-chart.yml and then re-run kluctl helm-pull.

As the number of used charts can easily grow to a point where it becomes hard to keep everything up-to-date, kluctl offers a command to support you in this:

$ kluctl helm-update
INFO[0005] Chart third-party/redis/helm-chart.yml has new version 16.8.2 available. Old version is 16.8.0. 

As you can see, it will display charts with new versions. You can also use the same command to actually update the helm-chart.yml files and ultimately commit these to git:

$ kluctl helm-update --upgrade --commit
INFO[0005] Chart third-party/redis/helm-chart.yml has new version 16.8.2 available. Old version is 16.8.0. 
INFO[0005] Pulling for third-party/redis/helm-chart.yml 
INFO[0010] Committing: Updated helm chart third-party/redis from 16.8.0 to 16.8.2

How to continue

After this tutorial, you have hopefully learned how to better structure your projects and how to integrate third-party Helm Charts into your project, including some basic house-keeping tasks.

The next tutorials in this series will show you how to use this kluctl project as a base to implement a multi-environment and multi-cluster deployment.

6.1.1.3 - 3. Templating and multi-env deployments

Introduction

The second tutorial in this series demonstrated how to integrate Helm into your deployment project and how to keep things structured.

The project is however still not flexible enough to be deployed multiple times and/or in different flavors. As an example, it doesn’t make much sense to deploy redis with replication on a local cluster, as there can’t be any high availability with a single node. Also, the resource requests currently used are quite demanding for a single-node cluster.

How to start

This tutorial is based on the results of the second tutorial. As an alternative, you can take the 2-helm-integration example project found here and use it as the base to be able to continue with this tutorial.

This time, you should start with a fresh kind cluster. If you are sure that you won’t lose any critical data by deleting the existing cluster, simply run:

$ kind delete cluster
$ kind create cluster

If you’re unsure or if you want to re-use the existing cluster for some reason, you can also simply delete the old deployment:

$ kluctl delete -t local
INFO[0000] Rendering templates and Helm charts
INFO[0000] Building kustomize objects
INFO[0000] Getting remote objects by commonLabels
The following objects will be deleted:
  default/Service/emailservice
  ...
  default/ConfigMap/cart-redis-scripts
Do you really want to delete 29 objects? (y/N) y

Deleted objects:
  default/ConfigMap/cart-redis-scripts
  ...
  default/StatefulSet/cart-redis-node

The reason to start with a fresh deployment is that we will later switch to different namespaces and stop using the default namespace.

Targets

If we want to allow the deployment to be deployed multiple times, we first need multiple targets in our project. Let’s add 2 targets called test and prod. To do so, modify the content of .kluctl.yml to contain:

targets:
  - name: local
    context: kind-kind
    args:
      env_type: local
  - name: test
    context: kind-kind
    args:
      env_type: real
  - name: prod
    context: kind-kind
    args:
      env_type: real

You might notice that all targets point to the kind cluster at the moment. This is of course not how you would do it in a real project as you’d probably have at least one real production-ready cluster to target your deployments against.

We’ve also introduced args for each target, with each target having an env_type argument configured. This argument will later be used to change details of the deployment depending on its value. For example, setting it to local might change the redis deployment into a single-node/standalone deployment.

Dynamic namespaces

One of the most obvious and also useful applications of templating is making namespaces dynamic, depending on the target that you want to deploy. This allows deploying the same set of manifests multiple times, even to the same cluster.

There are a few predefined variables which are always available in all deployments. One of these variables is the target dictionary, which is a copy of the currently processed target. This means we can use {{ target.name }} to insert the current target name through templating.

There are multiple ways to change the namespaces of involved resources. The most naive way is to go directly into the manifests and add the metadata.namespace field. For example, you could edit services/adservice/deployment.yml this way:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: adservice
  namespace: ms-demo-{{ target.name }}
...

This can however easily lead to resources being missed, or to resources that are not under your control, e.g. rendered Helm Charts. Another way to set the namespace on multiple resources is by using the namespace property of kustomize. For example, instead of changing the adservice deployment directly, you could modify the content of services/adservice/kustomization.yml to:

resources:
  - deployment.yml
  - service.yml

namespace: ms-demo-{{ target.name }}

This is better than the naive solution, but still limited in a comparable (though not as severe) way. The most powerful and preferred solution is to use overrideNamespace in the root deployment.yml:

...
overrideNamespace: ms-demo-{{ target.name }}
...

As an alternative, you could also use overrideNamespace separately in third-party/deployment.yml and services/deployment.yml. In this case, you’re also free to use different prefixes for the namespaces, as long as you include {{ target.name }} in them.
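
As a sketch (the prefix is arbitrary), third-party/deployment.yml could then look like this:

deployments:
  - path: redis

overrideNamespace: ms-demo-third-party-{{ target.name }}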

Helm Charts and namespaces

The previously described way of making namespaces dynamic in all resources works well for most cases. There are however situations where this is not enough, mostly when the name of the namespace is used in other places than metadata.namespace.

Helm Charts very often do this internally, which makes it necessary to also include the dynamic namespace into the helm-chart.yml’s namespace property. You will have to do this for the redis chart as well, so let’s modify third-party/redis/helm-chart.yml to:

helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 16.8.2
  releaseName: cart
  namespace: ms-demo-{{ target.name }}
  output: deploy.yml

Without this change, redis is going to be deployed successfully but will then fail to start due to wrong internal references to the default namespace.

Making commonLabels unique per target

commonLabels in your root deployment.yml has a very special meaning which is important to understand and work with. The combination of all commonLabels MUST be unique between all supported targets on a cluster, including targets that don’t exist yet and targets from other kluctl projects.

This is because kluctl uses these to identify resources belonging to the currently processed deployment/target, which becomes especially important when deleting or pruning.

To fulfill this requirement, change the root deployment.yml to:

...
commonLabels:
  examples.kluctl.io/deployment-project: "microservices-demo"
  examples.kluctl.io/deployment-target: "{{ target.name }}"
...

examples.kluctl.io/deployment-project ensures that we don’t get in conflict with any other kluctl project that might get deployed to the same cluster. examples.kluctl.io/deployment-target ensures that the same deployment can be deployed once per target. The names of the labels are arbitrary, and you can choose whatever you like.

Creating necessary namespaces

If you’d try to deploy the current state of the project, you’d notice that it will result in many errors where kluctl says that the desired namespace is not found. This is because kluctl does not create namespaces on its own. It also does not do this for Helm Charts, even if helm install for the same charts would do this. In kluctl you have to create namespaces by yourself, which ensures that you have full control over them.

This implies that we must create the necessary namespace resource by ourselves. Let’s put it into its own kustomize deployment below the root directory. First, create the namespaces directory and place a simple kustomization.yml into it:

resources:
  - namespace.yml

In the same directory, create the manifest namespace.yml:

apiVersion: v1
kind: Namespace
metadata:
  name: ms-demo-{{ target.name }}

Now add the new kustomize deployment to the root deployment.yml:

deployments:
  - path: namespaces
  - include: third-party
  - include: services
...

Deploying multiple targets

You’re now able to deploy the current deployment multiple times to the same kind cluster. Simply run:

$ kluctl deploy -t local
$ kluctl deploy -t prod

After this, you’ll have two namespaces with the same set of microservices and two instances of redis (both replicated with 3 replicas) deployed.

All changes together

For a complete overview of the necessary changes to get to this point, look into this commit.

Make the local target more lightweight

Having the microservices demo deployed twice might easily overload your local cluster. The solution would obviously be to not deploy the prod target to your local cluster and instead use a real cluster.

However, for the sake of this tutorial, we’ll instead try to introduce a few differences between targets so that they fit better onto the local cluster.

To do so, let’s introduce variables files that contain different sets of configuration for different environment types. These variables files are simply yaml files with arbitrary content, which is then available in future templating contexts.

First, create the sub-directory vars in the root project directory. The name of this directory is arbitrary and up to you; it must however match what is later used in the deployment.yml.

Inside this directory, create the file local.yml with the following content:

redis:
  architecture: standalone
  # the standalone architecture exposes redis via a different service than the replication architecture (which uses sentinel)
  svcName: cart-redis-master

And the file real.yml with the following content:

redis:
  architecture: replication
  # the standalone architecture exposes redis via a different service than the replication architecture (which uses sentinel)
  svcName: cart-redis

To load these variables files into the templating context, modify the root deployment.yml and add the following to the top:

vars:
  - file: ./vars/{{ args.env_type }}.yml
...

As you can see, we can even use templating inside the deployment.yml. Generally, templating can be used everywhere, with a few limitations outlined in the documentation.

The above changes will now load a different variables file, depending on which env_type was specified in the currently processed target. This allows us to customize all kinds of configuration via templating. You’re completely free in how you use this feature, including loading multiple variables files, where each one can use the variables loaded by the previous one.
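
As a sketch (the common.yml file is hypothetical), such a chain of variables files could look like this, with the second file able to use variables loaded from the first:

vars:
  - file: ./vars/common.yml
  - file: ./vars/{{ args.env_type }}.yml
...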

To use the newly introduced variables, first modify the content of third-party/redis/helm-values.yml to:

architecture: {{ redis.architecture }}

auth:
  enabled: false

{% if redis.architecture == "replication" %}
sentinel:
  enabled: true
  quorum: 2

replica:
  replicaCount: 3
  persistence:
    enabled: true
{% endif %}

master:
  persistence:
    enabled: true

The templating engine used by kluctl is currently Jinja2. We suggest reading through the documentation of Jinja2 to understand what is possible. In the example above, we use simple variable expressions and if/else statements.
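
For example, a for loop (the list of names here is purely illustrative) could be used to generate multiple deployment items:

deployments:
{% for name in ["adservice", "cartservice"] %}
  - path: {{ name }}
{% endfor %}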

We will also have to replace the occurrence of cart-redis:6379 with {{ redis.svcName }}:6379 inside services/cartservice/deployment.yml.

For an overview of the above changes, look into this commit.

Deploying the current state

You can now try to deploy the local and test targets. You’ll notice that the local deployment results in quite a few changes (seen in the diff), while the test target shows no changes at all. You might also want to run a prune for the local target to get rid of the old redis deployment.

Disable a few services for local

Some services are not needed locally or might not even be able to run properly. Let’s assume this applies to the services loadgenerator and emailservice. We can conditionally remove these from the deployment with simple boolean variables in vars/local.yml and vars/real.yml and if/else statements in services/deployment.yml.

Add the following variables to vars/local.yml:

...
services:
  emailservice:
    enabled: false
  loadgenerator:
    enabled: false

And the following variables to vars/real.yml:

...
services:
  emailservice:
    enabled: true
  loadgenerator:
    enabled: true

Now change the content of services/deployment.yml to:

deployments:
  - path: adservice
  - path: cartservice
  - path: checkoutservice
  - path: currencyservice
  {% if services.emailservice.enabled %}
  - path: emailservice
  {% endif %}
  - path: frontend
  {% if services.loadgenerator.enabled %}
  - path: loadgenerator
  {% endif %}
  - path: paymentservice
  - path: productcatalogservice
  - path: recommendationservice
  - path: shippingservice

A deployment to test should not change anything now. Deploying to local however should reveal multiple orphan resources, which you can then prune.

For an overview of the above changes, look into this commit.

How to continue

After this tutorial, you should have a basic understanding of how templating in kluctl works and how a multi-environment deployment can be implemented.

We however only deployed to a single cluster so far and are unable to properly manage the image versions of our microservices at the moment. In the next tutorial of this series, we’ll learn how to deploy to multiple clusters and split third-party image management and (self developed) application image management.

6.2 - Examples

6.2.1 - Simple

Very simple example with cluster and deployment in a single repository.

Description

This example is a very simple one that shows how to define a target cluster context, create a namespace and deploy nginx. You can configure the name of the namespace by changing the arg environment in .kluctl.yml.

Prerequisites

  1. A running kind cluster with a context named kind-kind.
  2. Of course, you need to install kluctl. Please take a look at the installation guide, in case you need further information.

How to deploy

git clone git@github.com:kluctl/kluctl-examples.git
cd kluctl-examples/simple
kluctl diff --target simple
kluctl deploy --target simple

6.2.2 - Simple with external repositories

Very simple example with cluster and deployment in external repositories.

Description

This example is very similar to simple except that the target cluster and the deployment are defined externally. You can configure the repositories and the ref in .kluctl.yml.

Prerequisites

  1. A running kind cluster with a context named kind-kind.
  2. Of course, you need to install kluctl. Please take a look at the installation guide, in case you need further information.

How to deploy

git clone git@github.com:kluctl/kluctl-examples.git
cd kluctl-examples/simple-with-external-repos
kluctl diff --target simple-with-external-repos
kluctl deploy --target simple-with-external-repos

6.2.3 - Simple Helm

Very simple example of a helm-based deployment.

Description

This example is very similar to simple but it deploys a Helm-based nginx to give a first impression of how kluctl and Helm work together.

Prerequisites

  1. A running kind cluster with a context named kind-kind.
  2. Of course, you need to install kluctl. Please take a look at the installation guide, if you need further information.
  3. You also need to install Helm. Please take a look at the Helm installation guide for further information.

How to deploy

git clone git@github.com:kluctl/kluctl-examples.git
cd kluctl-examples/simple-helm
kluctl helm-pull
kluctl diff --target simple-helm
kluctl deploy --target simple-helm

6.2.4 - Microservices demo

Complex example inspired by the Google Online Boutique Demo.

Description

This example is a more complex one and contains the files for the microservices tutorial inspired by the Google Online Boutique Demo.

Prerequisites

Please take a look at Tutorials for prerequisites.

How to deploy

Please take a look at Tutorials for deployment instructions.

7 - Reference

Description of configuration files and commands

7.1 - Kluctl project (.kluctl.yaml)

Kluctl project configuration, found in the .kluctl.yaml file.

The .kluctl.yaml is the central configuration and entry point for your deployments. It defines where the actual deployment project is located, where sealed secrets and unencrypted secrets are located, and which targets are available to invoke commands on.

Example

An example .kluctl.yaml looks like this:

targets:
  # test cluster, dev env
  - name: dev
    context: test.example.com
    args:
      environment_name: dev
    sealingConfig:
      secretSets:
        - non-prod
  # test cluster, test env
  - name: test
    context: test.example.com
    args:
      environment_name: test
    sealingConfig:
      secretSets:
        - non-prod
  # prod cluster, prod env
  - name: prod
    context: prod.example.com
    args:
      environment_name: prod
    sealingConfig:
      secretSets:
        - prod

# This is only required if you actually need sealed secrets
secretsConfig:
  secretSets:
    - name: prod
      vars:
        # This file should not be part of version control!
        - file: .secrets-prod.yaml
    - name: non-prod
      vars:
        # This file should not be part of version control!
        - file: .secrets-non-prod.yaml

Allowed fields

Please check the sub-sections of this section to see which fields are allowed at the root level of .kluctl.yaml.

7.1.1 - secretsConfig

Optional, defines where to load secrets from.

This configures how secrets are retrieved while sealing. It is basically a list of named secret sets which can be referenced from targets.

It has the following form:

...
secretsConfig:
  secretSets:
    - name: <name>
      vars:
        - ...
  sealedSecrets: ...
...

secretSets

Each secretSets entry has the following fields.

name

This field specifies the name of the secret set. The name can be used in targets to refer to this secret set.

vars

A list of variables sources. Check the documentation of variables sources for details.

Each variables source must have a root dictionary with the name secrets and all the actual secret values below that dictionary. Every other root key will be ignored.

Example variables file:

secrets:
  secret: value1
  nested:
    secret: value2
    list:
      - a
      - b
...

sealedSecrets

This field specifies the configuration for sealing. It has the following form:

...
secretsConfig:
  secretSets: ...
  sealedSecrets:
    bootstrap: true
    namespace: kube-system
    controllerName: sealed-secrets-controller
...

bootstrap

Controls whether kluctl should bootstrap the initial private key in case the controller is not yet installed on the target cluster. Defaults to true.

namespace

Specifies the namespace where the sealed-secrets controller is installed. Defaults to “kube-system”.

controllerName

Specifies the name of the sealed-secrets controller. Defaults to “sealed-secrets-controller”.

7.1.2 - targets

Required, defines targets for this kluctl project.

Specifies a list of targets for which commands can be invoked. A target puts together environment/target-specific configuration and the target cluster. Multiple targets can exist which target the same cluster but with differing configuration (via args). Target entries also specify which secrets to use while sealing.

Each value found in the target definition is rendered with a simple Jinja2 context that only contains the target itself. The rendering process is retried 10 times until it finally succeeds, allowing you to reference the target itself in complex ways. This is especially useful when using dynamic targets.
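
For example (a sketch), a target can reference its own fields inside its args:

targets:
  - name: dev
    context: kind-kind
    args:
      environment_name: "{{ target.name }}"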

Target entries have the following form:

targets:
...
  - name: <target_name>
    context: <context_name>
    args:
      arg1: <value1>
      arg2: <value2>
      ...
    dynamicArgs:
      - name: <arg_name>
      ...
    images:
      - image: my-image
        resultImage: my-image:1.2.3
    sealingConfig:
      secretSets:
        - <name_of_secrets_set>
...

The following fields are allowed per target:

name

This field specifies the name of the target. The name must be unique. It is referred to in all commands via the -t option.

context

This field specifies the kubectl context of the target cluster. The context must exist in the currently active kubeconfig. If this field is omitted, Kluctl will always use the currently active context.

args

This field specifies a map of arguments to be passed to the deployment project when it is rendered. Allowed argument names are configured via deployment args.

The arguments specified in the dynamic target config have higher priority.

images

This field specifies a list of fixed images to be used by images.get_image(...). The format is identical to the fixed images file.

The fixed images specified in the dynamic target config have higher priority.

dynamicArgs

This field specifies a list of CLI arguments that can be passed to kluctl when performing any commands on the target. These arguments are passed with -a arg_name=arg_value when for example calling kluctl deploy -t target_name.

Each entry has the following fields:

name

The name of the argument.

sealingConfig

This field configures how sealing is performed when the seal command (https://kluctl.io/docs/reference/commands/seal/) is invoked for this target. It has the following form:

targets:
...
- name: <target_name>
  ...
  sealingConfig:
    args:
      arg1: <override_for_arg1>
    certFile: <path-to-cert-file>
    dynamicSealing: <true_or_false>
    secretSets:
      - <name_of_secrets_set>

args

This field allows adding extra arguments to the target args. These are only used while sealing and may override arguments which are already configured for the target.

certFile

Optional path to a local (inside your project) public certificate used for sealing. Such a certificate can be fetched from the sealed-secrets controller using kubeseal --fetch-cert.
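
For illustration (the file name and target name are arbitrary, and kubeseal must be configured to talk to your cluster), the certificate could be fetched and referenced like this:

kubeseal --fetch-cert > pub-cert.pem

targets:
  - name: prod
    sealingConfig:
      certFile: pub-cert.pem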

dynamicSealing

This field specifies whether sealing should happen per dynamic target or only once. This field is optional and defaults to true.

secretSets

This field specifies a list of secret set names, which all must exist in the secretsConfig.

7.1.2.1 - Dynamic Targets

Dynamically defined targets.

Targets can also be “dynamic”, meaning that additional configuration can be sourced from another git repository. This can be based on a single target repository and branch, or on a target repository and branch/ref pattern, resulting in multiple dynamic targets being created from one target definition.

Please note that a single entry in targets might end up with multiple dynamic targets, meaning that the name must be made unique between these dynamic targets. This can be achieved by using templating in the name field. As an example, {{ target.targetConfig.ref }} can be used to set the target name to the branch name of the dynamic target.

Dynamic targets have the following form:

targets:
...
  - name: <dynamic_target_name>
    context: <context_name>
    args:
      arg1: <value1>
      arg2: <value2>
      ...
    targetConfig:
      project:
        url: <git-url>
      ref: <ref-name>
      refPattern: <regex-pattern>
      file: <config-file>
    sealingConfig:
      dynamicSealing: <false_or_true>
      secretSets:
        - <name_of_secrets_set>
...

All fields known from normal targets are allowed. In addition, the targetConfig field with the following sub-fields is available.

targetConfig

The presence of this field causes the target to become a dynamic target. It specifies where to look for dynamic targets and their additional configuration. It has the following form:

...
targets:
...
- name: <dynamic_target_name>
  ...
  targetConfig:
    project:
      url: <git-url>
    ref: <ref-name>
    refPattern: <regex-pattern>
...

project.url

This field specifies the git clone url of the target configuration project.

ref

This field specifies the branch or tag to use. If this field is specified, using refPattern is forbidden. This will result in one single dynamic target.

refPattern

This field specifies a regex pattern to use when looking for candidate branches and tags. If this is specified, using ref is forbidden. This will result in multiple dynamic targets. Each dynamic target will have ref set to the actual branch name it belongs to. This allows the use of {{ target.targetConfig.ref }} in all other target fields.
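
As a sketch (the repository URL and pattern are hypothetical), the following would create one dynamic target per branch matching preview-.*, each named after its branch:

targets:
  - name: "{{ target.targetConfig.ref }}"
    targetConfig:
      project:
        url: git@github.com/example/my-configs.git
      refPattern: preview-.*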

file

This field specifies the config file name to read externalized target config from.

Format of the target config

The target config file referenced in targetConfig must be of the following format:

args:
  arg1: value1
  arg2: value2
images:
  - image: registry.gitlab.com/my-group/my-project
    resultImage: registry.gitlab.com/my-group/my-project:1.1.0

args

An optional map of arguments, in the same format as in the normal target args.

The arguments specified here have higher priority.

images

An optional list of fixed images, in the same format as in the normal target images.

Simple dynamic targets

A simplified form of dynamic targets is to store the target config inside the same directory/project as the .kluctl.yaml. This can be done by omitting project, ref and refPattern from targetConfig and specifying only file, as shown in the sketch below.
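
A minimal sketch, assuming a config file named target-config.yaml next to the .kluctl.yaml:

targets:
  - name: local
    targetConfig:
      file: target-config.yaml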

A note on sealing

When sealing dynamic targets, it is very likely that it is not known yet which dynamic targets will actually exist in the future. This requires some special care when sealing secrets for these targets. Sealed secrets are usually namespace scoped, which might need to be changed to cluster-wide scoping so that the same sealed secret can be deployed into multiple targets (assuming you deploy to different namespaces for each target). When you do this, watch out to not compromise security, e.g. by sealing production level secrets with a cluster-wide scope!

It is also very likely required to set target.sealingConfig.dynamicSealing to false, so that sealing is only performed once and not for all dynamic targets.

7.2 - Deployments

Deployments and sub-deployments.

A deployment project is a collection of deployment items and sub-deployments. Deployment items are usually Kustomize deployments, but can also integrate Helm Charts.

Basic structure

The following visualization shows the basic structure of a deployment project. The entry point of every deployment project is the deployment.yaml file, which then includes further sub-deployments and kustomize deployments. It also provides some additional configuration required for multiple kluctl features to work as expected.

As can be seen, sub-deployments can include other sub-deployments, allowing you to structure the deployment project as you need.

Each level in this structure recursively adds tags to each deployed resource, allowing you to precisely control what is deployed in the future.

Some visualized files/directories have links attached, follow them to get more information.

-- project-dir/
   |-- deployment.yaml
   |-- .gitignore
   |-- kustomize-deployment1/
   |   |-- kustomization.yaml
   |   `-- resource.yaml
   |-- sub-deployment/
   |   |-- deployment.yaml
   |   |-- kustomize-deployment2/
   |   |   |-- kustomization.yaml
   |   |   |-- resource1.yaml
   |   |   `-- ...
   |   |-- kustomize-deployment3/
   |   |   |-- kustomization.yaml
   |   |   |-- resource1.yaml
   |   |   |-- resource2.yaml
   |   |   |-- patch1.yaml
   |   |   `-- ...
   |   |-- kustomize-with-helm-deployment/
   |   |   |-- charts/
   |   |   |   `-- ...
   |   |   |-- kustomization.yaml
   |   |   |-- helm-chart.yaml
   |   |   `-- helm-values.yaml
   |   `-- subsub-deployment/
   |       |-- deployment.yaml
   |       |-- ... kustomize deployments
   |       `-- ... subsubsub deployments
   `-- sub-deployment/
       `-- ...

Order of deployments

Deployments are done in parallel, meaning that there are usually no order guarantees. The only way to control order is by placing barriers between kustomize deployments. You should however not overuse barriers, as they negatively impact the speed of kluctl.

7.2.1 - deployment.yaml

Structure of deployment.yaml.

The deployment.yaml file is the entrypoint for the deployment project. Included sub-deployments also provide a deployment.yaml file with the same structure as the initial one.

An example deployment.yaml looks like this:

sealedSecrets:
  outputPattern: "{{ cluster.name }}/{{ args.environment }}"

deployments:
- path: nginx
- path: my-app
- include: monitoring

commonLabels:
  my.prefix/target: "{{ target.name }}"
  my.prefix/deployment-project: my-deployment-project

args:
- name: environment

The following sub-chapters describe the available fields in the deployment.yaml.

sealedSecrets

sealedSecrets configures how sealed secrets are stored while sealing and located while rendering. See Sealed Secrets for details.

deployments

deployments is a list of deployment items. Multiple deployment types are supported, as documented further down. Individual deployments are performed in parallel, unless a barrier is encountered, which causes kluctl to wait for all previous deployments to finish.

Kustomize deployments

Specifies a kustomize deployment. Please see Kustomize integration for more details.

Example:

deployments:
- path: path/to/deployment1
- path: path/to/deployment2
  waitReadiness: true

The path must point to a directory relative to the directory containing the deployment.yaml. Only directories that are part of the kluctl project are allowed. The directory must contain a valid kustomization.yaml.

waitReadiness is optional and if set to true instructs kluctl to wait for readiness of each individual object of the kustomize deployment. Readiness is defined in readiness.

Includes

Specifies a sub-deployment project to be included. The included sub-deployment project will inherit many properties of the parent project, e.g. tags, commonLabels and so on.

Example:

deployments:
- include: path/to/sub-deployment

The path must point to a directory relative to the directory containing the deployment.yaml. Only directories that are part of the kluctl project are allowed. The directory must contain a valid deployment.yaml.

Git includes

Specifies an external git project to be included. The project is included the same way as regular includes, except that the included project cannot use/load templates from the parent project. An included project might also include further git projects.

Simple example:

deployments:
- git: git@github.com/example/example.git

This will clone the git repository at git@github.com/example/example.git, check out the default branch and include it into the current project.

Advanced Example:

deployments:
- git:
    url: git@github.com/example/example.git
    ref: my-branch
    subDir: some/sub/dir

The url specifies the Git url to be cloned and checked out. ref is optional and specifies the branch or tag to be used. If ref is omitted, the default branch will be checked out. subDir is optional and specifies the sub directory inside the git repository to include.

Barriers

Causes kluctl to wait until all previous kustomize deployments have been applied. This is useful when upcoming deployments need the current or previous deployments to be finished beforehand. Previous deployments also include all sub-deployments from included deployments.

Example:

deployments:
- path: kustomizeDeployment1
- path: kustomizeDeployment2
- include: subDeployment1
- barrier: true
# At this point, it's ensured that kustomizeDeployment1, kustomizeDeployment2 and all sub-deployments from
# subDeployment1 are fully deployed.
- path: kustomizeDeployment3

deployments common properties

All entries in deployments can have the following common properties:

vars (deployment item)

A list of variable sets to be loaded into the templating context, which is then available in all deployment items and sub-deployments.

See templating for more details.

Example:

deployments:
- path: kustomizeDeployment1
  vars:
    - file: vars1.yaml
    - values:
        var1: value1
- path: kustomizeDeployment2
# all sub-deployments of this include will have the given variables available in their Jinja2 context.
- include: subDeployment1
  vars:
    - file: vars2.yaml

tags (deployment item)

A list of tags the deployment should have. See tags for more details. For includes, this means that all sub-deployments will get these tags applied. If not specified, the default tags logic as described in tags is applied.

Example:

deployments:
- path: kustomizeDeployment1
  tags:
    - tag1
    - tag2
- path: kustomizeDeployment2
  tags:
    - tag3
# all sub-deployments of this include will get tag4 applied
- include: subDeployment1
  tags:
    - tag4

alwaysDeploy

Forces a deployment to be included every time, ignoring inclusion/exclusion sets from the command line. See Deploying with tag inclusion/exclusion for details.

deployments:
- path: kustomizeDeployment1
  alwaysDeploy: true
- path: kustomizeDeployment2

skipDeleteIfTags

Forces exclusion of a deployment whenever inclusion/exclusion tags are specified via command line. See Deleting with tag inclusion/exclusion for details.

deployments:
- path: kustomizeDeployment1
  skipDeleteIfTags: true
- path: kustomizeDeployment2

vars (deployment project)

A list of variable sets to be loaded into the templating context, which is then available in all deployment items and sub-deployments.

See templating for more details.

commonLabels

A dictionary of labels and values to be added to all resources deployed by any of the kustomize deployments in this deployment project.

This feature is mainly meant to make it possible to identify all objects in a kubernetes cluster that were once deployed through a specific deployment project.

Consider the following example deployment.yaml:

deployments:
  - path: nginx
  - include: sub-deployment1

commonLabels:
  my.prefix/target: {{ target.name }}
  my.prefix/deployment-name: my-deployment-project-name
  my.prefix/label-1: value-1
  my.prefix/label-2: value-2

Every resource deployed by the kustomize deployment nginx will now get the provided labels attached. All included sub-deployment projects (e.g. sub-deployment1) will also recursively inherit these labels and pass them further down.

In case an included sub-deployment project also contains commonLabels, both dictionaries of common labels are merged inside the included sub-deployment project. In case of conflicts, the included common labels override the inherited ones.

The root deployment’s commonLabels is also used to identify objects to be deleted when performing kluctl delete or kluctl prune operations.

Please note that these commonLabels are not related to the commonLabels supported in kustomization.yaml files. It was decided not to rely on this kustomize feature but instead attach labels manually to resources right before sending them to kubernetes. This is due to an implementation detail in kustomize which causes commonLabels to also be applied to label selectors, making otherwise editable resources read-only when it comes to commonLabels.

overrideNamespace

A string that is used as the default namespace for all kustomize deployments which don’t have a namespace set in their kustomization.yaml.
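
For example (names are illustrative), the following would deploy all kustomize deployments that don't specify a namespace themselves into my-namespace:

overrideNamespace: my-namespace

deployments:
  - path: nginx
  - path: my-app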

tags (deployment project)

A list of common tags which are applied to all kustomize deployments and sub-deployment includes.

See tags for more details.

args

A list of arguments that can or must be passed to most kluctl operations. Each of these arguments is then available in templating via the global args object. Only the root deployment.yaml can contain such argument definitions.

An example looks like this:

deployments:
  - path: nginx

args:
  - name: environment
  - name: enable_debug
    default: false
  - name: complex_arg
    default:
      my:
        nested1: arg1
        nested2: arg2

These arguments can then be used in templating, e.g. by using {{ args.environment }}.

When calling kluctl, most of the commands will then require you to specify at least -a environment=xxx and optionally -a enable_debug=true.

The following sub-chapters describe the fields for argument entries.

name

The name of the argument.

default

If specified, the argument becomes optional and will use the given value as default when not specified.

The default value can be an arbitrary yaml value, meaning that it can also be a nested dictionary. In that case, passing args in nested form will only set the nested value. With the above example of complex_arg, running:

kluctl deploy -t my-target -a complex_arg.my.nested1=override

will only modify the value below my.nested1 and keep the value of my.nested2.

ignoreForDiff

A list of objects and fields to ignore while performing diffs. Consider the following example:

deployments:
  - ...

ignoreForDiff:
  - group: apps
    kind: Deployment
    namespace: my-namespace
    name: my-deployment
    fieldPath: spec.replicas

This will remove the spec.replicas field from every resource that matches the object. group, kind, namespace and name can be omitted, which results in all objects matching. fieldPath must be a valid JSON Path. fieldPath may also be a list of JSON paths.

The JSON Path implementation used in kluctl has extended support for wildcards in field names, allowing you to also specify paths like metadata.labels.my-prefix-*.

As an alternative, annotations can be used to control diff behavior of individual resources.

7.2.2 - Kustomize Integration

How Kustomize is integrated into Kluctl

kluctl uses kustomize to render the final resources. This means that the finest/lowest level in kluctl is represented by kustomize deployments. These kustomize deployments can then perform further customization, e.g. patching and more. You can also use kustomize to easily generate ConfigMaps or Secrets from files.

Generally, everything that is possible via kustomization.yaml is thus also possible in kluctl.

We advise reading the kustomize reference. You can also look into the official kustomize example.
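
For example, generating a ConfigMap from a file uses plain kustomize syntax inside any kustomize deployment (file and resource names are illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml

configMapGenerator:
  - name: my-config
    files:
      - config.properties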

7.2.3 - Container Images

Dynamic configuration of container images.

There are usually 2 different scenarios where Container Images need to be specified:

  1. When deploying third-party applications like nginx, redis, … (e.g. via the Helm integration).
    • In this case, image versions/tags rarely change, and if they do, it is an explicit change to the deployment.
  2. When deploying your own applications.
    • In this case, image versions/tags might change very rapidly, sometimes multiple times per hour. It would be too much effort and overhead if this were managed explicitly via your deployment. Even with Jinja2 templating, this would be hard to maintain.

kluctl offers a better solution for the second case.

Dynamic versions/tags

kluctl is able to ask the used container registry for a list of tags/versions available for an image. It can then sort the list of images via a configurable order and use the latest image for your deployment.

It however only does this when the involved resource (e.g. a Deployment or StatefulSet) is not yet deployed. In case it is already deployed, the already deployed image will be reused to avoid undesired re-deployment/re-starting of otherwise unchanged resources.

images.get_image()

This is solved via a templating function that is available in all templates/resources. The function is part of the global images object and expects the following arguments:

images.get_image(image, latest_version)

  • image
    • The image to query the available tags/versions for.
  • latest_version
    • Configures how tags/versions are sorted and thus how the latest image is determined. Can be:
      • version.semver()
        Filters and sorts by loose semantic versioning. Versions must start with a number. An unlimited number of .-separated components is allowed. Versions with a suffix are treated as less than versions without a suffix (e.g. 1.0-rc1 < 1.0). Two versions which only differ by suffix are sorted semantically.
      • version.prefix(prefix)
        Only allows tags with the given prefix and then applies the same logic as version.semver() to whatever follows right after the prefix. You can override the handling of the right part by providing suffix=xxx, where xxx is another version filter, e.g. version.prefix("master-", suffix=version.number()).
      • version.number()
        Only allows plain numbers as versions and sorts them accordingly.
      • version.regex(regex)
        Only allows versions/tags that match the given regex. Sorting is done the same way as in version.semver(), except that versions do not necessarily need to start with a number.

The mentioned version filters must be specified as strings. For example:

images.get_image("my-image", "prefix('master-', suffix=number())")

If no latest_version is specified, it defaults to "semver()".

Example deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
      - name: c1
        image: "{{ images.get_image('registry.gitlab.com/my-group/my-project') }}"

Always using the latest images

If you want to use the latest image no matter if an older version is already deployed, pass the -u flag to your deploy, diff or list-images command.

You can restrict updating to individual images by using -I/--include-tag. This is useful when using CI/CD for example, where you often want to perform a deployment that only updates a single application/service from your large deployment.

Fixed images via CLI

The described images.get_image logic however leads to a loosely defined state on your target cluster/environment. This might be fine in a CI/CD environment, but might be undesired when deploying to production. In that case, it might be desirable to explicitly define which versions need to be deployed.

To achieve this, you can use the -F FIXED_IMAGE argument. FIXED_IMAGE must be in the form of -F image<:namespace:deployment:container>=result. For example, to pin the image registry.gitlab.com/my-group/my-project to the tag 1.1.2 you’d have to specify -F registry.gitlab.com/my-group/my-project=registry.gitlab.com/my-group/my-project:1.1.2.

Fixed images via a yaml file

As an alternative to specifying each fixed image via CLI, you can specify a single yaml file via --fixed-images-file=<file>, which then contains a list of entries that define image/deployment -> imageResult mappings.

An example fixed-images files looks like this:

images:
  - image: registry.gitlab.com/my-group/my-project
    resultImage: registry.gitlab.com/my-group/my-project:1.1.0
  - image: registry.gitlab.com/my-group/my-project2
    resultImage: registry.gitlab.com/my-group/my-project2:2.0.0
  - deployment: StatefulSet/my-sts
    resultImage: registry.gitlab.com/my-group/my-project3:1.0.0

You can also take an existing deployment and export the already deployed image versions into a fixed-images file by using the list-images command. It will produce a compatible fixed-images file based on the calls to images.get_image that would happen if a deployment were performed with the given arguments. The result of that call is quite expressive, as it contains all the information gathered while images were collected. Use --simple to only return a list with image -> resultImage mappings.

Supported image registries and authentication

All v2 API based image registries are supported, including Docker Hub, GitLab, and many more. Private registries need credentials to be set up correctly. This can be done by locally logging in via docker login <registry> or by passing credentials to your private registry via the following environment variables:

  1. KLUCTL_REGISTRY_HOST=registry.example.com
  2. KLUCTL_REGISTRY_USERNAME=username
  3. KLUCTL_REGISTRY_PASSWORD=password

You can also pass credentials for more registries by adding an index to the environment variables, e.g. KLUCTL_REGISTRY_1_HOST=registry.gitlab.com.
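
For example, to provide credentials for two private registries in a CI job, you might export (all values are placeholders):

export KLUCTL_REGISTRY_HOST=registry.example.com
export KLUCTL_REGISTRY_USERNAME=username
export KLUCTL_REGISTRY_PASSWORD=password
export KLUCTL_REGISTRY_1_HOST=registry.gitlab.com
export KLUCTL_REGISTRY_1_USERNAME=gitlab-user
export KLUCTL_REGISTRY_1_PASSWORD=gitlab-token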

In case your registry uses self-signed TLS certificates, it is currently required to disable TLS verification for these. You can do this via KLUCTL_REGISTRY_TLSVERIFY=1/KLUCTL_REGISTRY_<idx>_TLSVERIFY=1 for the corresponding KLUCTL_REGISTRY_HOST/KLUCTL_REGISTRY_<idx>_HOST or by globally disabling it via KLUCTL_REGISTRY_DEFAULT_TLSVERIFY=1.

7.2.4 - Helm Integration

How Helm is integrated into Kluctl.

kluctl offers a simple-to-use Helm integration, which allows you to reuse many common third-party Helm Charts.

The integration is split into 2 parts/steps/layers. The first is the management and pulling of the Helm Charts, while the second part handles configuration/customization and deployment of the chart.

Pulled Helm Charts are meant to be added to version control to ensure proper speed and consistency.

How it works

Helm charts are not directly deployed via Helm. Instead, kluctl renders the Helm Chart into a single file and then hands the rendered yaml over to kustomize. Rendering is done in combination with a provided helm-values.yaml, which contains the necessary values to configure the Helm Chart.

The resulting rendered yaml is then referenced by your kustomization.yaml, from which point on the kustomize integration takes over. This means that you can perform all desired customization (patches, namespace override, …) as if you had provided your own resources via yaml files.

Helm hooks

Helm Hooks are implemented by mapping them to kluctl hooks, based on the following mapping table:

Helm hook     | kluctl hook
--------------|--------------------
pre-install   | pre-deploy-initial
post-install  | post-deploy-initial
pre-delete    | Not supported
post-delete   | Not supported
pre-upgrade   | pre-deploy-upgrade
post-upgrade  | post-deploy-upgrade
pre-rollback  | Not supported
post-rollback | Not supported
test          | Not supported

Please note that this is a best-effort approach and not 100% compatible with how Helm would run hooks.

helm-chart.yaml

The helm-chart.yaml defines where to get the chart from, which version should be pulled, the rendered output file name, and a few more Helm options. After this file is added to your project, you need to invoke the helm-pull command to pull the Helm Chart into your local project. It is advised to put the pulled Helm Chart into version control, so that deployments will always be based on the exact same Chart (Helm does not guarantee this when pulling).

Example helm-chart.yaml:

helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 12.1.1
  skipUpdate: false
  releaseName: redis-cache
  namespace: "{{ my.jinja2.var }}"
  output: helm-rendered.yaml # this is optional

When running the helm-pull command, it will search for all helm-chart.yaml files in your project and then pull the charts from the specified repositories with the specified versions. The pulled chart will then be located in the sub-directory charts below the same directory as the helm-chart.yaml.

The same filename that was specified in output must then be referenced in a kustomization.yaml as a normal local resource. If output is omitted, the default value helm-rendered.yaml is used and must also be referenced in kustomization.yaml.
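
A minimal kustomization.yaml next to such a helm-chart.yaml would then simply reference the rendered file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - helm-rendered.yaml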

helmChart inside helm-chart.yaml supports the following fields:

repo

The url to the Helm repository where the Helm Chart is located. You can use hub.helm.sh to search for repositories and charts and then use the repos found there.

oci based repositories are also supported, for example:

helmChart:
  repo: oci://r.myreg.io/mycharts/pepper
  chartName: pepper
  chartVersion: 1.2.3
  releaseName: pepper
  namespace: pepper

chartName

The name of the chart that can be found in the repository.

chartVersion

The version of the chart.

skipUpdate

Skip this Helm Chart when the helm-update command is called. If omitted, defaults to false.

releaseName

The name of the Helm Release.

namespace

The namespace that this Helm Chart is going to be deployed to. Please note that this should match the namespace that you’re actually deploying the kustomize deployment to. This means that either namespace in kustomization.yaml or overrideNamespace in deployment.yaml should match the namespace given here. The namespace should also already exist at the point in time when the kustomize deployment is deployed.

output

The file name into which the Helm Chart is rendered. Your kustomization.yaml should include this same file. The file should not exist in your project, as it is created on the fly while deploying.

skipCRDs

If set to true, kluctl will pass --skip-crds to Helm when rendering the deployment. If set to false (which is the default), kluctl will pass --include-crds to Helm.

helm-values.yaml

This file should be present when you need to pass custom Helm Values to Helm while rendering the deployment. Please read the documentation of the used Helm Charts for details on what is supported.
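
As a sketch, a helm-values.yaml for the redis chart from the earlier example might look like this; the keys are entirely chart-specific and the ones shown here are hypothetical:

# consult the chart's documentation for the actually supported values
master:
  persistence:
    enabled: false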

Updates to helm-charts

In case a Helm Chart needs to be updated, you can either do this manually by replacing the chartVersion value in helm-chart.yaml and then calling the helm-pull command, or by simply invoking helm-update with --upgrade and/or --commit set.

Private Chart Repositories

It is also possible to use private chart repositories. There are currently two options to provide Helm Repository credentials to Kluctl.

Use helm repo add --username xxx --password xxx before

Kluctl will try to find known repositories that are managed by the Helm CLI and then try to reuse the credentials of these. The repositories are identified by the URL of the repository, so it doesn’t matter what name you used when you added the repository to Helm. The same method can be used for client certificate based authentication (--key-file in helm repo add).

Use the --username/--password arguments in kluctl helm-pull

See the helm-pull command. You can control repository credentials via --username, --password and --key-file. Each argument must be in the form credentialsId:value, where the credentialsId must match the id specified in the helm-chart.yaml. Example:

helmChart:
  repo: https://raw.githubusercontent.com/example/private-helm-repo/main/
  credentialsId: private-helm-repo
  chartName: my-chart
  chartVersion: 1.2.3
  releaseName: my-chart
  namespace: default

When credentialsId is specified, Kluctl will require you to specify --username=private-helm-repo:my-username and --password=private-helm-repo:my-password. You can also specify a client-side certificate instead via --key-file=private-helm-repo:/path/to/cert.

Multiple Helm Charts can use the same credentialsId.

Environment variables can also be used instead of arguments. See Environment Variables for details.

Templating

Both helm-chart.yaml and helm-values.yaml are rendered by the templating engine before they are actually used. This means, that you can use all available Jinja2 variables at that point, which can for example be seen in the above helm-chart.yaml example for the namespace.

There is however one exception that leads to a small limitation. When helm-pull reads the helm-chart.yaml, it does NOT render the file via the templating engine. This is because it cannot know how to properly render the template, as it has no information about targets (no -t argument is set) at that point.

This exception leads to the limitation that the helm-chart.yaml MUST be valid yaml even in case it is not rendered via the templating engine. This makes using control statements (if/for/…) impossible in this file. It also makes it a requirement to use quotes around values that contain templates (e.g. the namespace in the above example).

helm-values.yaml is not subject to these limitations as it is only interpreted while deploying.

7.2.5 - Hooks

Kluctl hooks.

Kluctl supports hooks in a similar fashion as known from Helm Charts. Hooks are executed/deployed before and/or after the actual deployment of a kustomize deployment.

To mark a resource as a hook, add the kluctl.io/hook annotation to the resource. The value of the annotation must be a comma-separated list of hook names. Possible values are described in the next chapter.

Hook types

Hook type           | Description
--------------------|---------------------------------------------------------------
pre-deploy-initial  | Executed right before the initial deployment is performed.
post-deploy-initial | Executed right after the initial deployment is performed.
pre-deploy-upgrade  | Executed right before a non-initial deployment is performed.
post-deploy-upgrade | Executed right after a non-initial deployment is performed.
pre-deploy          | Executed right before any (initial and non-initial) deployment is performed.
post-deploy         | Executed right after any (initial and non-initial) deployment is performed.

A deployment is considered to be an “initial” deployment if none of the resources related to the current kustomize deployment are found on the cluster at the time of deployment.

If you need to execute hooks for every deployment, independent of its “initial” state, use pre-deploy-initial,pre-deploy to indicate that it should be executed all the time.

Hook deletion

Hook resources are deleted right before creation by default (if they already existed before). This behavior can be changed by setting the kluctl.io/hook-delete-policy annotation to a comma-separated list of the following values:

Policy               | Description
---------------------|--------------------------------------------------------------------------------
before-hook-creation | The default behavior, which means that the hook resource is deleted right before (re-)creation.
hook-succeeded       | Delete the hook resource directly after it got “ready”.
hook-failed          | Delete the hook resource when it failed to get “ready”.
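
A minimal sketch of a hook resource combining the annotations described above (the Job itself is hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    # run before every deployment, initial or not
    kluctl.io/hook: pre-deploy-initial,pre-deploy
    # clean up the Job as soon as it got ready
    kluctl.io/hook-delete-policy: hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: my-migration-image
        command: ["sh", "-c", "echo migrating"]
      restartPolicy: Never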

Hook readiness

After each deployment/execution of the hooks that belong to a deployment stage (before/after deployment), kluctl waits for the hook resources to become “ready”. Readiness is defined here.

It is possible to disable waiting for hook readiness by setting the annotation kluctl.io/hook-wait to “false”.

7.2.6 - Readiness

Definition of readiness.

There are multiple places where kluctl can wait for “readiness” of resources, e.g. for hooks or when waitReadiness is specified on a deployment item. Readiness depends on the resource kind, e.g. for a Job, kluctl would wait until it finishes successfully.


7.2.7 - Tags

Every kustomize deployment has a set of tags assigned to it. These tags are defined in multiple places, which is documented in deployment.yaml. Look for the tags field, which is available in multiple places per deployment project.

Tags are useful when only one or more specific kustomize deployments need to be deployed or deleted.

Default tags

deployment items in deployment projects can have an optional list of tags assigned.

If this list is completely omitted, one single entry is added by default. This single entry equals the last element of the path in the deployments entry.

Consider the following example:

deployments:
  - path: nginx
  - path: some/subdir

In this example, two kustomize deployments are defined. The first would get the tag nginx while the second would get the tag subdir.

In most cases this heuristic is enough to get proper tags with which you can work. It might however lead to strange or even conflicting tags (e.g. subdir is really a bad tag), in which case you’d have to explicitly set tags.

Tag inheritance

Deployment projects and deployment items inherit the tags of their parents. For example, if a deployment project has a tags property defined, all deployments entries would inherit all these tags. Also, the sub-deployment projects included via deployment items of type include inherit the tags of the deployment project. These included sub-deployments also inherit the tags specified by the deployment item itself.

Consider the following example deployment.yaml:

deployments:
  - include: sub-deployment1
    tags:
      - tag1
      - tag2
  - include: sub-deployment2
    tags:
      - tag3
      - tag4
  - include: subdir/subsub

Any kustomize deployment found in sub-deployment1 would now inherit tag1 and tag2. If sub-deployment1 performs any further includes, these would also inherit these two tags. Inheriting is additive and recursive.

The last sub-deployment project in the example is subject to the same default-tags logic as described in Default tags, meaning that it will get the default tag subsub.

Deploying with tag inclusion/exclusion

Special care needs to be taken when trying to deploy only a specific part of your deployment which requires some base resources to be deployed as well.

Imagine a large deployment is able to deploy 10 applications, but you only want to deploy one of them. When using tags to achieve this, there might be some base resources (e.g. Namespaces) which are needed no matter if everything or just this single application is deployed. In that case, you’d need to set alwaysDeploy to true.
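
For example, assuming one of the applications carries the tag app1, deploying only it (plus everything marked with alwaysDeploy) could look like this:

kluctl deploy -t my-target -I app1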

Deleting with tag inclusion/exclusion

Also, in most cases, even more special care has to be taken for the same types of resources as described before.

Imagine a kustomize deployment that is responsible for deploying namespaces. If you now want to delete everything except deployments that have the persistency tag assigned, the exclusion logic would NOT exclude deletion of the namespaces. This would ultimately lead to everything being deleted, with the exclusion tag having no effect.

In such a case, you’d need to set skipDeleteIfTags to true as well.

In most cases, setting alwaysDeploy to true also requires setting skipDeleteIfTags to true.

7.2.8 - Annotations

Annotations usable in Kubernetes resources.

7.2.8.1 - All resources

Annotations on all resources

The following annotations control the behavior of the deploy and related commands.

Control deploy behavior

The following annotations control deploy behavior, especially in regard to conflict resolution.

kluctl.io/delete

If set to “true”, the resource will be deleted at deployment time. Kluctl will not emit an error in case the resource does not exist. A resource with this annotation does not have to be complete/valid as it is never sent to the Kubernetes api server.

kluctl.io/force-apply

If set to “true”, the whole resource will be force-applied, meaning that all fields will be overwritten in case of field manager conflicts.

kluctl.io/force-apply-field

Specifies a JSON Path for fields that should be force-applied. Matching fields will be overwritten in case of field manager conflicts.

If more than one field needs to be specified, add -xxx to the annotation key, where xxx is an arbitrary number.
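
For example (field paths are illustrative), two force-applied fields could be specified like this:

metadata:
  annotations:
    kluctl.io/force-apply-field: spec.replicas
    kluctl.io/force-apply-field-2: spec.template.metadata.labels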

Control deletion/pruning

The following annotations control how delete/prune is behaving.

kluctl.io/skip-delete

If set to “true”, the annotated resource will not be deleted when delete or prune is called.

kluctl.io/skip-delete-if-tags

If set to “true”, the annotated resource will not be deleted when delete or prune is called and inclusion/exclusion tags are used at the same time.

This annotation is especially useful and required on resources that would otherwise cause cascaded deletions of resources that do not match the specified inclusion/exclusion tags. Namespaces are the most prominent example of such resources, as they most likely don’t match exclusion tags, but cascaded deletion would still cause deletion of the excluded resources.

Control diff behavior

The following annotations control how diffs are performed.

kluctl.io/diff-name

This annotation will override the name of the object when looking for the in-cluster version of an object used for diffs. This is useful when you are forced to use new names for the same objects whenever the content changes, e.g. for all kinds of immutable resource types.

Example (filename job.yaml):

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-{{ load_sha256("job.yaml", 6) }}
  annotations:
    kluctl.io/diff-name: myjob
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh",  "-c", "echo hello"]
      restartPolicy: Never

Without the kluctl.io/diff-name annotation, any change to the job.yaml would be treated as a new object in resulting diffs from various commands. This is due to the inclusion of the file hash in the job name. This would make it very hard to figure out what exactly changed in an object.

With the kluctl.io/diff-name annotation, kluctl will pick an existing job from the cluster with the same diff-name and use it for the diff, making it a lot easier to analyze changes. If multiple objects match, the one with the youngest creationTimestamp is chosen.

Please note that this will not cause old objects (with the same diff-name) to be pruned. You still have to regularly prune the deployment.

kluctl.io/ignore-diff

If set to “true”, the whole resource will be ignored while calculating diffs.

kluctl.io/ignore-diff-field

Specifies a JSON Path for fields that should be ignored while calculating diffs.

If more than one field needs to be specified, add -xxx to the annotation key, where xxx is an arbitrary number.
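
The same numbering scheme as for kluctl.io/force-apply-field applies, for example (field paths are illustrative):

metadata:
  annotations:
    kluctl.io/ignore-diff-field: spec.replicas
    kluctl.io/ignore-diff-field-2: spec.template.metadata.labels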

7.2.8.2 - Hooks

Annotations on hooks

The following annotations control hook execution

See hooks for more details.

kluctl.io/hook

Declares a resource to be a hook, which is deployed/executed as described in hooks. The value of the annotation determines when the hook is deployed/executed.

kluctl.io/hook-weight

Specifies a weight for the hook, used to determine deployment/execution order.

kluctl.io/hook-delete-policy

Defines when to delete the hook resource.

kluctl.io/hook-wait

Defines whether kluctl should wait for hook-completion.

7.2.8.3 - Validation

Annotations to control validation

The following annotations influence the validate command.

validate-result.kluctl.io/xxx

If this annotation is found on a resource that is checked during validation, the key and the value of the annotation are added to the validation result, which is then returned by the validate command.

The annotation key is dynamic, meaning that all annotations that begin with validate-result.kluctl.io/ are taken into account.
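
For example, the following (hypothetical) annotation would surface a note in the result of the validate command:

metadata:
  annotations:
    validate-result.kluctl.io/my-check: "some informative result"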

7.2.8.4 - Kustomize

Annotations on the kustomization.yaml resource

Even though the kustomization.yaml files from Kustomize deployments are not really Kubernetes resources (as they are not deployed), they have the same structure as Kubernetes resources. This also means that a kustomization.yaml can define metadata and annotations. Through these annotations, additional behavior on the deployment can be controlled.

Example:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

metadata:
  annotations:
    kluctl.io/barrier: "true"
    kluctl.io/wait-readiness: "true"

resources:
  - deployment.yaml

kluctl.io/barrier

If set to true, kluctl will wait for all previous objects to be applied (but not necessarily ready). This has the same effect as barrier from deployment projects.

kluctl.io/wait-readiness

If set to true, kluctl will wait for readiness of all objects from this kustomization project. Readiness is defined the same as in hook readiness.

7.3 - Sealed Secrets

Sealed Secrets integration

kluctl has an integration for sealed secrets, allowing you to securely store secrets for multiple target clusters and/or environments inside version control.

The integration consists of two parts:

  1. Sealing of secrets
  2. Automatically choosing and deploying the correct sealed secrets for a target

Requirements

The Sealed Secrets integration relies on the sealed-secrets operator being installed. Installing the operator is the responsibility of you or whoever manages/operates the cluster.

Kluctl can however perform sealing of secrets without an existing sealed-secrets operator installation. This is solved by automatically pre-provisioning a key onto the cluster that is compatible with the operator or by providing the public certificate via certFile in the targets sealingConfig.

Sealing of .sealme files

Sealing is done via the seal command. It must be done before the actual deployment is performed.

The seal command recursively searches for files that end with .sealme and renders them with the templating engine. The rendered secret resource is then converted/encrypted into a sealed secret.

The .sealme files themselves have to be Kubernetes Secrets, but without any actual secret data inside. The secret data is referenced via templating variables and is expected to be provided only at the time of sealing. This means that the sensitive secret data only has to exist in clear text while sealing. Afterwards, the sealed secrets can be added to version control.

Example file (the name could be for example db-secrets.yaml.sealme):

kind: Secret
apiVersion: v1
metadata:
  name: db-secrets
  namespace: {{ my.namespace.variable }}
stringData:
  DB_URL: {{ secrets.database.url }}
  DB_USERNAME: {{ secrets.database.username }}
  DB_PASSWORD: {{ secrets.database.password }}

While sealing, the full templating context (same as in templating) is available. Additionally, the global secrets object/variable is available which contains the sensitive secrets.
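
A typical invocation seals all .sealme files for a single target (the target name is illustrative):

kluctl seal -t my-target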

Secret Sources

Secrets are only loaded while sealing. Available secret sets and sources are configured via .kluctl.yaml. The secrets used per target are configured via the secrets config of the targets.

Using sealed secrets

After sealing a secret, it can be used inside kustomize deployments. While deploying, kluctl will look for resources included from kustomization.yaml which do not exist but for which a file with a .sealme extension exists. If such a file is found, the appropriate sealed secret is located based on the outputPattern.

An example kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
# please note that we do not specify the .sealme suffix here
- db-secrets.yaml
- my-deployments.yaml

outputPattern and location of stored sealed secrets

It is possible to override the output pattern in the root deployment project. The output pattern must be a template string that is rendered with the full templating context available for the deployment.yaml.

When manually specifying the outputPattern, ensure that it works well with multiple clusters and targets. You can for example use the {{ target.name }} and {{ cluster.name }} inside the outputPattern.

# deployment.yaml in root directory
sealedSecrets:
  outputPattern: "{{ cluster.name }}/{{ target.name }}"

The default outputPattern is simply {{ target.name }}, which should work well in most cases.

The final storage location for the sealed secret is:

<base_dir>/<rendered_output_pattern>/<relative_sealme_file_dir>/<file_name>

with:

  • base_dir: The base directory for sealed secrets, which defaults to the subdirectory .sealed-secrets in the kluctl project root directory.
  • rendered_output_pattern: The rendered outputPattern as described above.
  • relative_sealme_file_dir: The relative path from the deployment root directory.
  • file_name: The filename of the sealed secret, excluding the .sealme extension.

Content Hashes and re-sealing

Sealed secrets are stored together with hashes of all individual secret entries. These hashes are then used to avoid unnecessary re-sealing in future seal invocations. If you want to force re-sealing, use the --force-reseal option.

Hashing of secrets is done with bcrypt, using the cluster id as salt. The cluster id is currently defined as the sha256 hash of the cluster CA certificate. This will cause re-sealing of all secrets in case a cluster is set up from scratch (which causes the key of the sealed-secrets operator to be wiped as well).

Clusters and namespaces

Sealed secrets are usually only decryptable by one cluster, simply because each cluster has its own set of randomly generated public/private key pairs. This means, that a secret that was sealed for your production cluster can’t be unsealed on your test cluster.

In addition, sealed secrets can be bound to a single namespace, making them undecryptable for any other namespace. To limit a sealed secret to a namespace, simply fill the metadata.namespace field of the input secret (which is in the .sealme file). This way, the sealed secret can only be deployed to a single namespace.

You can also use Scopes to lift/limit restrictions.

Using reflectors/replicators

In case a sealed secret needs to be deployed to more than one namespace, some form of replication must be used. You’d then seal the secret for a single namespace and use a reflection/replication controller to reflect the unsealed secret into one or multiple other namespaces. Example controllers that can accomplish this are the Mittwald kubernetes-replicator and the Emberstack Kubernetes Reflector.

Consider the following example (using the Mittwald replicator):

kind: Secret
apiVersion: v1
metadata:
  name: db-secrets
  namespace: {{ my.namespace.variable }}
  annotations:
    replicator.v1.mittwald.de/replicate-to: '{{ my.namespace.variable }}-.*'
stringData:
  DB_URL: {{ secrets.database.url }}
  DB_USERNAME: {{ secrets.database.username }}
  DB_PASSWORD: {{ secrets.database.password }}

The above example would cause automatic replication into every namespace that matches the replicate-to pattern.

Please watch out for security implications. In the above example, everyone who has the right to create a namespace that matches the pattern will get access to the secret.

7.4 - Templating

Templating Engine.

kluctl uses a Jinja2 Templating engine to pre-process/render every involved configuration file and resource before actually interpreting it. Only files that are explicitly excluded via .templateignore files are not rendered via Jinja2.

Generally, everything that is possible with Jinja2 is possible in kluctl configuration/resources. Please read into the Jinja2 documentation to understand what exactly is possible and how to use it.

.templateignore

In some cases it is required to exclude specific files from templating, for example when the contents conflict with the used template engine (e.g. Go templates conflict with Jinja2 and cause errors). In such cases, you can place a .templateignore beside the excluded files or into a parent folder of it. The contents/format of the .templateignore file is the same as you would use in a .gitignore file.
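
As a sketch (paths are hypothetical), a .templateignore excluding a directory of Go-template based dashboards and a single file could look like this:

grafana-dashboards/
raw-manifest.yaml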

Includes and imports

Standard Jinja2 includes and imports can be used in all templates.

The path given to include/import is searched for in the directory of the root template and all its parent directories up until the project root. Please note that the search path is not altered in included templates, meaning that it will always search in the same directories even if an include happens inside a file that was itself included.

To include/import a file relative to the currently rendered file (which is not necessarily the root template), prefix the path with ./, e.g. use {% include "./my-relative-file.j2" %}.

Macros

Jinja2 macros are fully supported. When writing macros that produce yaml resources, you must use the --- yaml separator in case you want to produce multiple resources in one go.

Why no Go Templating

kluctl started as a python project and was then migrated to be a Go project. In the python world, Jinja2 is the obvious choice when it comes to templating. In the Go world, of course Go Templates would be the first choice.

When the migration to Go was performed, it was a conscious and opinionated decision to stick with Jinja2 templating. The reason is that I (@codablock) believe that Go Templates are hard to read and write and at the same time quite limited in their features (without extensive work). It never felt natural to write Go Templates.

This “feeling” was confirmed by multiple users when kluctl started, who described it as “relieving” to not be forced to use Go Templates.

The above is my personal experience and opinion. I’m still quite open for contributions in regard to Go Templating support, as long as Jinja2 support is kept.

7.4.1 - Predefined Variables

Available predefined variables.

There are multiple variables available which are pre-defined by kluctl. These are:

args

This is a dictionary of arguments given via command line. It contains every argument defined in deployment args.

target

This is the target definition of the currently processed target. It contains all values found in the target definition, for example target.name.

images

This global object provides the dynamic images features described in images.

version

This global object defines latest version filters for images.get_image(...). See images for details.

secrets

This global object is only available while sealing and contains the loaded secrets defined via the currently sealed target.

7.4.2 - Variable Sources

Available variable sources.

There are multiple places in deployment projects (deployment.yaml) where additional variables can be loaded into future Jinja2 contexts.

The first place where vars can be specified is the deployment root, as documented here. These vars are visible for all deployments inside the deployment project, including sub-deployments from includes.

The second place to specify variables is in the deployment items, as documented here.

The variables loaded for each entry in vars are not available inside the deployment.yaml file itself. However, each entry in vars can use all variables defined before that specific entry is processed. Consider the following example.

vars:
- file: vars1.yaml
- file: vars2.yaml

vars2.yaml can now use variables that are defined in vars1.yaml. At all times, variables defined by parents of the current sub-deployment project can be used in the current vars file.

Different types of vars entries are possible:

file

This loads variables from a yaml file. Assume the following yaml file with the name vars1.yaml:

my_vars:
  a: 1
  b: "b"
  c:
    - l1
    - l2

This file can be loaded via:

vars:
  - file: vars1.yaml

After which all included deployments and sub-deployments can use the jinja2 variables from vars1.yaml.

values

An inline definition of variables. Example:

vars:
  - values:
      a: 1
      b: c

These variables can then be used in all deployments and sub-deployments.

git

This loads variables from a git repository. Example:

vars:
  - git:
      url: ssh://git@github.com/example/repo.git
      ref: my-branch
      path: path/to/vars.yaml

clusterConfigMap

Loads a configmap from the target’s cluster and loads the specified key’s value as a yaml file into the jinja2 variables context.

Assume the following configmap to be deployed to the target cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-vars
  namespace: my-namespace
data:
  vars: |
    a: 1
    b: "b"
    c:
      - l1
      - l2    

This configmap can be loaded via:

vars:
  - clusterConfigMap:
      name: my-vars
      namespace: my-namespace
      key: vars

It assumes that the configmap is already deployed before the kluctl deployment happens. This might for example be useful to store meta information about the cluster itself and then make it available to kluctl deployments.

clusterSecret

Same as clusterConfigMap, but for secrets.
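
A sketch, assuming the same fields as clusterConfigMap:

vars:
  - clusterSecret:
      name: my-secret-vars
      namespace: my-namespace
      key: vars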

http

The http variables source allows to load variables from an arbitrary HTTP resource by performing a GET (or any other configured HTTP method) on the URL. Example:

vars:
  - http:
      url: https://example.com/path/to/my/vars

The above source will load a variables file from the given URL. The file is expected to be in yaml or json format.

The following additional properties are supported for http sources:

method

Specifies the HTTP method to be used when requesting the given resource. Defaults to GET.

body

The body to send along with the request. If not specified, nothing is sent.

headers

A map of key/values pairs representing the header entries to be added to the request. If not specified, nothing is added.

jsonPath

Can be used to select a nested element from the yaml/json document returned by the HTTP request. This is useful in case some REST api is used which does not directly return the variables file. Example:

vars:
  - http:
      url: https://example.com/path/to/my/vars
      jsonPath: $[0].data

The above example would successfully use the following json document as variables source:

[{"data": {"vars": {"var1": "value1"}}}]

Authentication

Kluctl currently supports BASIC and NTLM authentication. It will prompt for credentials when needed.

awsSecretsManager

AWS Secrets Manager integration. Loads a variables YAML from an AWS Secrets Manager secret. The secret can either be specified via an ARN or via a secretName and region combination. An AWS config profile can also be specified (which must exist while sealing).

The secrets stored in AWS Secrets manager must contain a valid yaml or json file.

Example using an ARN:

vars:
  - awsSecretsManager:
      secretName: arn:aws:secretsmanager:eu-central-1:12345678:secret:secret-name-XYZ
      profile: my-prod-profile

Example using a secret name and region:

vars:
  - awsSecretsManager:
      secretName: secret-name
      region: eu-central-1
      profile: my-prod-profile

The advantage of the latter is that the auto-generated suffix in the ARN (which might not be known at the time of writing the configuration) doesn’t have to be specified.

vault

Integration with Vault by HashiCorp, using token-based authentication. The address and the path to the secret can be configured. The implementation was tested with the KV Secrets Engine.

Example using vault:

vars:
  - vault:
      address: http://localhost:8200
      path: secret/data/simple

Before deploying or sealing please make sure that you have access to vault. You can do this for example by setting the environment variable VAULT_TOKEN.

systemEnvVars

Load variables from environment variables. Children of systemEnvVars can be arbitrary yaml, e.g. dictionaries or lists. The leaf values are used to get a value from the system environment.

Example:

vars:
- systemEnvVars:
    var1: ENV_VAR_NAME1
    someDict:
      var2: ENV_VAR_NAME2
    someList:
      - var3: ENV_VAR_NAME3

The above example will make 3 variables available: var1, someDict.var2 and someList[0].var3, each having the values of the environment variables specified by the leaf values.

7.4.3 - Filters

Available filters.

In addition to the builtin Jinja2 filters, kluctl provides a few additional filters:

b64encode

Encodes the input value as base64. Example: {{ "test" | b64encode }} will result in dGVzdA==.

b64decode

Decodes a base64-encoded input string. Example: {{ my.source.var | b64decode }}.
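
As a sketch of how these filters are typically used, a Secret manifest could encode a value on the fly (my_password is an assumed variable coming from one of your vars sources):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  password: "{{ my_password | b64encode }}"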

from_yaml

Parses a yaml string and returns an object. Please note that json is valid yaml, meaning that you can also use this filter to parse json.
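
A minimal sketch (my_yaml_string is an assumed variable containing a yaml document as a string):

{% set obj = my_yaml_string | from_yaml %}
{{ obj.some_field }}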

to_yaml

Converts a variable/object into its yaml representation. Please note that in most cases the resulting string will not be properly indented, which will require you to also use the indent filter. Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.yaml: |
    {{ my_config | to_yaml | indent(4) }}

to_json

Same as to_yaml, but with json as output. Please note that json is always valid yaml, meaning that you can also use to_json in yaml files. Consider the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
      - name: c1
        image: my-image
        env: {{ my_list_of_env_entries | to_json }}

This would render json into a yaml file, which is still a valid yaml file. Compare this to how this would have to be solved with to_yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
      - name: c1
        image: my-image
        env:
          {{ my_list_of_env_entries | to_yaml | indent(10) }}

The required indent filter is the part that makes this error-prone and hard to maintain. Consider using to_json whenever you can.

render

Renders the input string with the current Jinja2 context. Example:

{% set a="{{ my_var }}" %}
{{ a | render }}

7.4.4 - Functions

Available functions.

In addition to the provided builtin global functions, kluctl also provides a few global functions:

load_template(file)

Loads the given file into memory, renders it with the current Jinja2 context and then returns it as a string. Example:

{% set a=load_template('file.yaml') %}
{{ a }}

load_template uses the same path searching rules as described in includes/imports.

load_sha256(file, digest_len)

Loads the given file into memory, renders it and calculates the sha256 hash of the result.

The filename given to load_sha256 is treated the same as in load_template. Recursive loading/hash calculation is allowed: invocations of load_sha256 on templates that are currently being loaded are replaced with dummy strings, breaking the recursion. This also allows calculating the hash of the currently rendered template, for example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-{{ load_sha256("configmap.yaml") }}
data:

digest_len is an optional parameter that allows to limit the length of the returned hex digest.

get_var(field_path, default)

Convenience method to navigate through the current context variables via a JSON Path. Let’s assume you currently have these variables defined (e.g. via vars):

my:
  deep:
    var: value

Then {{ get_var('my.deep.var', 'my-default') }} would return value. When any of the elements inside the field path are non-existent, the given default value is returned instead.

The field_path parameter can also be a list of paths, which are then tried one after another, returning the first result that is not None. For example, {{ get_var(['non.existing.var', 'my.deep.var'], 'my-default') }} would also return value.

merge_dict(d1, d2)

Clones d1 and then recursively merges d2 into it and returns the result. Values inside d2 will override values in d1.

update_dict(d1, d2)

Same as merge_dict, but merging is performed in-place into d1.
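
A minimal sketch (my_defaults and my_overrides are assumed to be dictionaries defined via your vars sources):

{% set merged = merge_dict(my_defaults, my_overrides) %}
{{ merged | to_yaml }}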

raise(msg)

Raises a Python exception with the given message. This causes the current command to abort.
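
This can be combined with the get_var function described above to guard against missing configuration; a minimal sketch:

{% if get_var('my.required.var', none) is none %}
{{ raise("my.required.var is not set") }}
{% endif %}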

debug_print(msg)

Prints a line to stderr.

7.5 - Commands

Description of available commands.

kluctl offers a unified command line interface that allows you to standardize all your deployments. Every project, no matter how different it is from other projects, is managed the same way.

You can always call kluctl --help or kluctl <command> --help for a help prompt.

Individual commands are documented in sub-sections.

7.5.1 - Common Arguments

Common arguments

A few sets of arguments are common between multiple commands. These arguments are still part of the command itself and must be placed after the command name.

Global arguments

These arguments are available for all commands.

Global arguments:
      --cpu-profile string   Enable CPU profiling and write the result to the given path
      --debug                Enable debug logging
      --no-color             Disable colored output
      --no-update-check      Disable update check on startup

Project arguments

These arguments are available for all commands that are based on a Kluctl project. They control where and how to load the kluctl project and deployment project.

Project arguments:
  Define where and how to load the kluctl project and its components from.

  -a, --arg stringArray                      Template argument in the form name=value
      --cluster string                       DEPRECATED. Specify/Override cluster
      --git-cache-update-interval duration   Specify the time to wait between git cache updates. Defaults to not
                                             wait at all and always updating caches.
      --local-clusters existingdir           DEPRECATED. Local clusters directory. Overrides the project from
                                             .kluctl.yaml
      --local-deployment existingdir         DEPRECATED. Local deployment directory. Overrides the project from
                                             .kluctl.yaml
      --local-sealed-secrets existingdir     DEPRECATED. Local sealed-secrets directory. Overrides the project
                                             from .kluctl.yaml
      --output-metadata string               Specify the output path for the project metadata to be written to.
  -c, --project-config existingfile          Location of the .kluctl.yaml config file. Defaults to
                                             $PROJECT/.kluctl.yaml
  -b, --project-ref string                   Git ref of the kluctl project. Only used when --project-url was given.
  -p, --project-url string                   Git url of the kluctl project. If not specified, the current
                                             directory will be used instead of a remote Git project
  -t, --target string                        Target name to run command for. Target must exist in .kluctl.yaml.
      --timeout duration                     Specify timeout for all operations, including loading of the project,
                                             all external api calls and waiting for readiness. (default 10m0s)

Image arguments

These arguments are available on some target based commands. They control image versions requested by images.get_image(...) calls.

Image arguments:
  Control fixed images and update behaviour.

  -F, --fixed-image stringArray          Pin an image to a given version. Expects
                                         '--fixed-image=image<:namespace:deployment:container>=result'
      --fixed-images-file existingfile   Use .yaml file to pin image versions. See output of list-images
                                         sub-command or read the documentation for details about the output format
      --offline-images                   Omit contacting image registries and do not query for latest image tags.
  -u, --update-images                    This causes kluctl to prefer the latest image found in registries, based
                                         on the 'latest_image' filters provided to 'images.get_image(...)' calls.
                                         Use this flag if you want to update to the latest versions/tags of all
                                         images. '-u' takes precedence over '--fixed-image/--fixed-images-file',
                                         meaning that the latest images are used even if an older image is given
                                         via fixed images.

Inclusion/Exclusion arguments

These arguments are available for some target based commands. They control inclusion/exclusion based on tags and deployment item paths.

Inclusion/Exclusion arguments:
  Control inclusion/exclusion.

      --exclude-deployment-dir stringArray   Exclude deployment dir. The path must be relative to the root
                                             deployment project. Exclusion has precedence over inclusion, same as
                                             in --exclude-tag
  -E, --exclude-tag stringArray              Exclude deployments with given tag. Exclusion has precedence over
                                             inclusion, meaning that explicitly excluded deployments will always
                                             be excluded even if an inclusion rule would match the same deployment.
      --include-deployment-dir stringArray   Include deployment dir. The path must be relative to the root
                                             deployment project.
  -I, --include-tag stringArray              Include deployments with given tag.
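
For example, a deployment restricted to one tag while excluding a specific directory could look like this (tag and directory names are placeholders):

kluctl deploy -t prod --include-tag monitoring --exclude-deployment-dir services/legacy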

7.5.2 - Environment Variables

Controlling Kluctl via environment variables

In addition to arguments, Kluctl can be controlled via a set of environment variables.

Environment variables as arguments

All options/arguments accepted by kluctl can also be specified via environment variables. The names of the environment variables always start with KLUCTL_ and end with the option/argument in uppercase, with dashes replaced by underscores. As an example, --project-url=my-project can also be specified with the environment variable KLUCTL_PROJECT_URL=my-project.
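
Following this naming scheme, the target could for example be selected via the environment instead of passing -t (a sketch; any other option can be mapped the same way):

export KLUCTL_TARGET=prod
kluctl deploy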

Additional environment variables

A few additional environment variables are supported which do not belong to an option/argument. These are:

  1. KLUCTL_REGISTRY_<idx>_HOST, KLUCTL_REGISTRY_<idx>_USERNAME, and so on. See registries for details; a short sketch follows below this list.
  2. KLUCTL_SSH_DISABLE_STRICT_HOST_KEY_CHECKING. Disable ssh host key checking when accessing git repositories.
  3. KLUCTL_NO_THREADS. Do not use multithreading while performing work. This is only useful for debugging purposes.
  4. KLUCTL_IGNORE_DEBUGGER. Pretend that there is no debugger attached when automatically deciding if multi-threading should be enabled or not.
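
For example, the registry variables from item 1 could be provided like this (a sketch assuming a zero-based index; host and credentials are placeholders):

export KLUCTL_REGISTRY_0_HOST=registry.example.com
export KLUCTL_REGISTRY_0_USERNAME=my-user
export KLUCTL_REGISTRY_0_PASSWORD=my-password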

7.5.3 - delete

delete command

Command

Usage: kluctl delete [flags]

Delete a target (or parts of it) from the corresponding cluster. Objects are located based on ‘commonLabels’, configured in ‘deployment.yaml’.

WARNING: This command will also delete objects which are not part of your deployment project (anymore). It really only decides based on the ‘deleteByLabel’ labels and does NOT take the local target/state into account!

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

  -l, --delete-by-label stringArray   Override the labels used to find objects for deletion.
      --dry-run                       Performs all kubernetes API calls in dry-run mode.
  -o, --output-format stringArray     Specify output format and target file, in the format 'format=path'. Format
                                      can either be 'text' or 'yaml'. Can be specified multiple times. The actual
                                      format for yaml is currently not documented and subject to change.
      --render-output-dir string      Specifies the target directory to render the project into. If omitted, a
                                      temporary directory is used.
  -y, --yes                           Suppresses 'Are you sure?' questions and proceeds as if you would answer 'yes'.

They have the same meaning as described in deploy.

7.5.4 - deploy

deploy command

Command

Usage: kluctl deploy [flags]

Deploys a target to the corresponding cluster. This command will also output a diff between the initial state and the state after deployment. The format of this diff is the same as for the ‘diff’ command. It will also output a list of prunable objects (without actually deleting them).

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --abort-on-error               Abort deploying when an error occurs instead of trying the remaining deployments
      --dry-run                      Performs all kubernetes API calls in dry-run mode.
      --force-apply                  Force conflict resolution when applying. See documentation for details
      --force-replace-on-error       Same as --replace-on-error, but also try to delete and re-create objects. See
                                     documentation for more details.
      --no-wait                      Don't wait for object readiness
  -o, --output-format stringArray    Specify output format and target file, in the format 'format=path'. Format
                                     can either be 'text' or 'yaml'. Can be specified multiple times. The actual
                                     format for yaml is currently not documented and subject to change.
      --readiness-timeout duration   Maximum time to wait for object readiness. The timeout is meant per-object.
                                     Timeouts are in the duration format (1s, 1m, 1h, ...). If not specified, a
                                     default timeout of 5m is used. (default 5m0s)
      --render-output-dir string     Specifies the target directory to render the project into. If omitted, a
                                     temporary directory is used.
      --replace-on-error             When patching an object fails, try to replace it. See documentation for more
                                     details.
  -y, --yes                          Suppresses 'Are you sure?' questions and proceeds as if you would answer 'yes'.

--force-apply

kluctl implements deployments via server-side apply and a custom automatic conflict resolution algorithm. This algorithm is an automatic implementation of the “Don’t overwrite value, give up management claim” method. It should work in most cases, but might still fail. In case of such failure, you can use --force-apply to use the “Overwrite value, become sole manager” strategy instead.

Please note that this is a risky operation which might overwrite fields which were initially managed by kluctl but were then overtaken by other managers (e.g. by operators). Always use this option with caution and perform a dry-run before to ensure nothing unexpected gets overwritten.

--replace-on-error

In some situations, patching Kubernetes objects might fail for different reasons. In such cases, you can try --replace-on-error to instruct kluctl to retry with an update operation.

Please note that this will cause all fields to be overwritten, even if owned by other field managers.

--force-replace-on-error

This flag will cause the same replacement attempt on failure as with --replace-on-error. In addition, it will fall back to a delete+recreate operation in case the replace also fails.

Please note that this is a potentially risky operation, especially when an object carries some kind of important state.

--abort-on-error

kluctl does not abort a command when an individual object can not be updated. It collects all errors and warnings and outputs them instead. This option modifies the behaviour to immediately abort the command.

7.5.5 - diff

diff command

Command

Usage: kluctl diff [flags]

Perform a diff between the locally rendered target and the already deployed target. The output is by default in human readable form (a table combined with unified diffs). The output can also be changed to output a yaml file. Please note however that the format is currently not documented and prone to changes. After the diff is performed, the command will also search for prunable objects and list them.

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --force-apply                 Force conflict resolution when applying. See documentation for details
      --force-replace-on-error      Same as --replace-on-error, but also try to delete and re-create objects. See
                                    documentation for more details.
      --ignore-annotations          Ignores changes in annotations when diffing
      --ignore-labels               Ignores changes in labels when diffing
      --ignore-tags                 Ignores changes in tags when diffing
  -o, --output-format stringArray   Specify output format and target file, in the format 'format=path'. Format can
                                    either be 'text' or 'yaml'. Can be specified multiple times. The actual format
                                    for yaml is currently not documented and subject to change.
      --render-output-dir string    Specifies the target directory to render the project into. If omitted, a
                                    temporary directory is used.
      --replace-on-error            When patching an object fails, try to replace it. See documentation for more
                                    details.

--force-apply and --replace-on-error have the same meaning as in deploy.

7.5.6 - helm-pull

helm-pull command

Command

Usage: kluctl helm-pull [flags]

Recursively searches for ‘helm-chart.yaml’ files and pulls the specified Helm charts. The Helm charts are stored under the sub-directory ‘charts/’ next to the ‘helm-chart.yaml’. These Helm charts are meant to be added to version control so that pulling is only needed when really required (e.g. when the chart version changes).

See helm-integration for more details.

Arguments

The following sets of arguments are available:

  1. project arguments (except -a)

7.5.7 - helm-update

helm-update command

Command

Usage: kluctl helm-update [flags]

Recursively searches for ‘helm-chart.yaml’ files and checks for new available versions. Optionally performs the actual upgrade and/or adds a commit to version control.

Arguments

The following sets of arguments are available:

  1. project arguments (except -a)

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --commit                                 Create a git commit for every updated chart
      --insecure-skip-tls-verify stringArray   Controls skipping of TLS verification. Must be in the form
                                               --insecure-skip-tls-verify=<credentialsId>, where <credentialsId>
                                               must match the id specified in the helm-chart.yaml.
      --key-file stringArray                   Specify client certificate to use for Helm Repository
                                               authentication. Must be in the form
                                               --key-file=<credentialsId>:<path>, where <credentialsId> must match
                                               the id specified in the helm-chart.yaml.
      --password stringArray                   Specify password to use for Helm Repository authentication. Must be
                                               in the form --password=<credentialsId>:<password>, where
                                               <credentialsId> must match the id specified in the helm-chart.yaml.
      --upgrade                                Write new versions into helm-chart.yaml and perform helm-pull afterwards
      --username stringArray                   Specify username to use for Helm Repository authentication. Must be
                                               in the form --username=<credentialsId>:<username>, where
                                               <credentialsId> must match the id specified in the helm-chart.yaml.

7.5.8 - list-images

list-images command

Command

Usage: kluctl list-images [flags]

Renders the target and outputs all images used via ‘images.get_image(…)’. The result is compatible with the yaml files expected by --fixed-images-file.

If fixed images (‘-F/--fixed-image’) are provided, these are also taken into account, as described for the deploy command.

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

  -o, --output stringArray         Specify output target file. Can be specified multiple times
      --render-output-dir string   Specifies the target directory to render the project into. If omitted, a
                                   temporary directory is used.
      --simple                     Output a simplified version of the images list

7.5.9 - list-targets

list-targets command

Command

Usage: kluctl list-targets [flags]

Outputs a yaml list with all targets, including dynamic targets.

Arguments

The following arguments are available:

Misc arguments:
  Command specific arguments.

  -o, --output stringArray   Specify output target file. Can be specified multiple times

7.5.10 - poke-images

poke-images command

Command

Usage: kluctl poke-images [flags]

Replace all images in target. This command will fully render the target and then only replace images instead of fully deploying the target. Only images used in combination with ‘images.get_image(…)’ are replaced.

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --dry-run                     Performs all kubernetes API calls in dry-run mode.
  -o, --output-format stringArray   Specify output format and target file, in the format 'format=path'. Format can
                                    either be 'text' or 'yaml'. Can be specified multiple times. The actual format
                                    for yaml is currently not documented and subject to change.
      --render-output-dir string    Specifies the target directory to render the project into. If omitted, a
                                    temporary directory is used.
  -y, --yes                         Suppresses 'Are you sure?' questions and proceeds as if you would answer 'yes'.

7.5.11 - prune

prune command

Command

Usage: kluctl prune [flags]

Searches the target cluster for prunable objects and deletes them

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments
  3. inclusion/exclusion arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --dry-run                     Performs all kubernetes API calls in dry-run mode.
  -o, --output-format stringArray   Specify output format and target file, in the format 'format=path'. Format can
                                    either be 'text' or 'yaml'. Can be specified multiple times. The actual format
                                    for yaml is currently not documented and subject to change.
      --render-output-dir string    Specifies the target directory to render the project into. If omitted, a
                                    temporary directory is used.
  -y, --yes                         Suppresses 'Are you sure?' questions and proceeds as if you would answer 'yes'.

They have the same meaning as described in deploy.

7.5.12 - render

render command

Command

Usage: kluctl render [flags]

Renders all resources and configuration files and stores the result in either a temporary directory or a specified directory.

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --offline-kubernetes         Run render in offline mode, meaning that it will not try to connect to the
                                   target cluster
      --print-all                  Write all rendered manifests to stdout
      --render-output-dir string   Specifies the target directory to render the project into. If omitted, a
                                   temporary directory is used.

7.5.13 - seal

seal command

Command

Usage: kluctl seal [flags]

Seal secrets based on target’s sealingConfig. Loads all secrets from the specified secrets sets from the target’s sealingConfig and then renders the target, including all files with the ‘.sealme’ extension. Then runs kubeseal on each ‘.sealme’ file and stores secrets in the directory specified by ‘--local-sealed-secrets’, using the outputPattern from your deployment project.

If no ‘–target’ is specified, sealing is performed for all targets.

See sealed-secrets for more details.

Arguments

The following sets of arguments are available:

  1. project arguments (except -a)

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

      --cert-file string     Use the given certificate for sealing instead of requesting it from the
                             sealed-secrets controller
      --force-reseal         Lets kluctl ignore secret hashes found in already sealed secrets and thus forces
                             resealing of those.
      --offline-kubernetes   Run seal in offline mode, meaning that it will not try to connect to the target cluster

7.5.14 - validate

validate command

Command

Usage: kluctl validate [flags]

Validates the already deployed deployment. This means that all objects are retrieved from the cluster and checked for readiness.

TODO: This needs to be better documented!

Arguments

The following sets of arguments are available:

  1. project arguments
  2. image arguments

In addition, the following arguments are available:

Misc arguments:
  Command specific arguments.

  -o, --output stringArray         Specify output target file. Can be specified multiple times
      --render-output-dir string   Specifies the target directory to render the project into. If omitted, a
                                   temporary directory is used.
      --sleep duration             Sleep duration between validation attempts (default 5s)
      --wait duration              Wait for the given amount of time until the deployment validates
      --warnings-as-errors         Consider warnings as failures

8 - Flux Support

Flux Kluctl Controller documentation.

The documentation found here is synced from https://github.com/kluctl/flux-kluctl-controller/tree/main/docs.

8.1 - Flux Kluctl Controller

Flux Kluctl Controller documentation.

The Flux Kluctl Controller is a Kubernetes operator, specialized in running continuous delivery pipelines for infrastructure defined with kluctl.

Motivation

kluctl is a tool that allows you to declare and manage small, large, simple and/or complex multi-env and multi-cluster deployments. It is designed in a way that allows seamless co-existence of CLI centered DevOps and automation, for example in the form of GitOps/flux.

This means that you can continue doing local development of your deployments and test them from your local machine, for example by regularly running kluctl diff. When you believe you’re done with your work, you can then commit your changes to Git and let the Flux Kluctl Controller do the actual deployment.

You could also have a dedicated target that you solely use for local development and deployment testing and then let the Flux Kluctl Controller handle the deployments to the real (e.g. pre-prod or prod) targets.

This way you can have both:

  1. Easy and reliable development and testing of your deployments (no more change+commit+push+wait+error+retry cycles).
  2. Consistent GitOps style automation.

The Flux Kluctl Controller supports all types of Kluctl projects, including simple ones where a single Git repository contains all necessary data and complex ones where for example clusters or target configurations are in other Git repositories.

Installation

Installation instructions can be found in the Installation section below.

Design

The reconciliation process can be defined with a Kubernetes custom resource that describes a pipeline such as:

  • fetch root kluctl project from source-controller (Git repository or S3 bucket)
  • compare the current deployment with the last deployed one and bail out if nothing changed
  • deploy the specified target via kluctl deploy
  • prune orphaned objects via kluctl prune
  • validate the deployment status via kluctl validate
  • alert if something went wrong
  • notify if the cluster state changed

The controller that runs these pipelines relies on source-controller for providing the root Kluctl project from Git repositories or any other source that source-controller could support in the future. If the root Kluctl project is located in a GitRepository, the Flux Kluctl Controller will reuse the Git credentials for all dependent Git repositories referenced by the project.

A pipeline runs on a schedule and can be triggered manually by a cluster admin or automatically by a source event such as a Git revision change.

When a pipeline is removed from the cluster, the controller’s GC terminates all the objects previously created by that pipeline.

A pipeline can be suspended, while in suspension the controller stops the scheduler and ignores any source events. Deleting a suspended pipeline does not trigger garbage collection.

Alerting can be configured with a Kubernetes custom resource that specifies a webhook address, and a group of pipelines to be monitored.

The API design of the controller can be found at kluctldeployment.flux.kluctl.io/v1beta1.

Example

After installing flux-kluctl-controller alongside a normal flux installation, we can create a Kluctl deployment that automatically deploys the Microservices Demo.

Create a source that points to the demo project.

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: microservices-demo
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/kluctl/kluctl-examples.git
  ref:
    branch: main

And a KluctlDeployment that uses the demo project source to deploy the test target to the same cluster that flux runs on.

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-test
  namespace: flux-system
spec:
  interval: 10m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: test
  prune: true
  # kluctl cluster configs specify the expected context name, which does not necessarily match the context name
  # found while it is deployed via the controller. This means we must pass a kubeconfig to kluctl that has the
  # context renamed to the one that it expects.
  renameContexts:
    - oldContext: default
      newContext: kind-kind

This example will deploy a fully-fledged microservices application with multiple backend services, frontends and databases, all via one single KluctlDeployment.

To deploy the same Kluctl project to another target (e.g. prod), simply create the following resource.

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-prod
  namespace: flux-system
spec:
  interval: 10m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: prod
  prune: true
  renameContexts:
    - oldContext: default
      newContext: kind-kind

8.2 - Installation

Flux Kluctl Controller documentation.

The Flux Kluctl Controller requires an existing Flux installation on the same cluster that you plan to install the Flux Kluctl Controller to.

After Flux has been installed, you can install the Flux Kluctl Controller by running the following command:

kustomize build "https://github.com/kluctl/flux-kluctl-controller/config/install?ref=v0.6.2" | kubectl apply -f-

NOTE: To set up Flux Alerts from KluctlDeployments you will need to patch the enum in the Alerts CRD. There is a patch included in this repository that can do this for you. You can apply it directly or include the yaml version in gotk-patch.yaml with your flux bootstrap. You can also add something like the following to your cluster’s kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gotk-components.yaml
- gotk-sync.yaml
patchesJson6902:
- target:
    group: apiextensions.k8s.io
    version: v1
    kind: CustomResourceDefinition
    name: alerts.notification.toolkit.fluxcd.io
  path: 'alerts-crd-patch.yaml' # The downloaded patch in your flux repository

8.3 - KluctlDeployment

Flux Kluctl Controller documentation.

The KluctlDeployment API defines a deployment of a target from a Kluctl Project.

Example

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: microservices-demo
spec:
  interval: 1m
  url: https://github.com/kluctl/kluctl-examples.git
  ref:
    branch: main
---
apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-prod
spec:
  interval: 5m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: prod
  prune: true
  # kluctl targets specify the expected context name, which does not necessarily match the context name
  # found while it is deployed via the controller. This means we must pass a kubeconfig to kluctl that has the
  # context renamed to the one that it expects.
  renameContexts:
    - oldContext: default
      newContext: kind-kind

In the above example, two objects are being created, a GitRepository that points to the Kluctl project and KluctlDeployment that defines the deployment based on the Kluctl project.

The deployment is performed every 5 minutes or whenever the source changes. It will deploy the prod target and then prune orphaned objects afterwards.

It uses the default context provided by the default Flux service account and renames it to kind-kind so that it is compatible with the context specified in the example’s prod target.

Source reference

The KluctlDeployment spec.sourceRef is a reference to an object managed by source-controller. When the source revision changes, it generates a Kubernetes event that triggers a reconciliation attempt.

Supported source types are GitRepository and Bucket (S3).

The Kluctl project found in the referenced source might also internally reference other Git repositories, for example by loading variables from Git repositories or including other Git repositories in your deployments. In this case, the controller will re-use the credentials from the root project’s GitRepository for further authentication.

spec.path specifies the subdirectory inside the referenced source to be used as the project root.

Target

spec.target specifies the target to be deployed. It must exist in the Kluctl project’s .kluctl.yaml targets list.

Reconciliation

The KluctlDeployment spec.interval tells the controller at which interval to try reconciliations. The interval time units are s, m and h e.g. interval: 5m, the minimum value should be over 60 seconds.

At each reconciliation run, the controller will check if any rendered objects have changed since the last deployment and then perform a new deployment if changes are detected. Changes are tracked via a hash consisting of all rendered objects.

To enforce periodic full deployments even if nothing has changed, spec.deployInterval can be used to specify an interval at which forced deployments must be performed by the controller.
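
A minimal sketch combining both intervals (all names and values are examples): objects are reconciled every 5 minutes, and a full deployment is forced every hour even if nothing changed.

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: example
spec:
  interval: 5m
  deployInterval: 1h
  sourceRef:
    kind: GitRepository
    name: example
  target: prod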

The KluctlDeployment reconciliation can be suspended by setting spec.suspend to true.

The controller can be told to reconcile the KluctlDeployment outside of the specified interval by annotating the KluctlDeployment object with fluxcd.io/reconcileAt.

On-demand execution example:

kubectl annotate --overwrite kluctldeployment/microservices-demo-prod fluxcd.io/reconcileAt="$(date +%s)"

Deploy Mode

By default, the operator will perform a full deployment, which is equivalent to using the kluctl deploy command. As an alternative, the controller can be instructed to only perform a kluctl poke-images command. Please see https://kluctl.io/docs/reference/commands/poke-images/ for details on the command. To do so, set spec.deployMode field to poke-images.

Example:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: microservices-demo-prod
spec:
  interval: 5m
  path: "./microservices-demo/3-templating-and-multi-env/"
  sourceRef:
    kind: GitRepository
    name: microservices-demo
  timeout: 2m
  target: prod
  deployMode: poke-images

Pruning

To enable pruning, set spec.prune to true. This will cause the controller to run kluctl prune after each successful deployment.

Kluctl Options

The kluctl deploy command has multiple arguments that influence how the deployment is performed. KluctlDeployments can set most of these arguments as well:

args

spec.args is a map of strings representing arguments passed to the deployment. Example:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: example
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: example
  timeout: 2m
  target: prod
  args:
    arg1: value1
    arg2: value2

The above example is equivalent to calling kluctl deploy -t prod -a arg1=value1 -a arg2=value2.

updateImages

spec.updateImages is a boolean that specifies whether images used via images.get_image(...) should use the latest image found in the registry.

This is equivalent to calling kluctl deploy -t prod -u.

images

spec.images specifies a list of fixed images to be used by images.get_image(...). Example:

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: example
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: example
  timeout: 2m
  target: prod
  images:
    - image: nginx
      resultImage: nginx:1.21.6
      namespace: example-namespace
      deployment: Deployment/example
    - image: registry.gitlab.com/my-org/my-repo/image
      resultImage: registry.gitlab.com/my-org/my-repo/image:1.2.3

The above example will cause the images.get_image("nginx") invocations of the example Deployment to return nginx:1.21.6. It will also cause all images.get_image("registry.gitlab.com/my-org/my-repo/image") invocations to return registry.gitlab.com/my-org/my-repo/image:1.2.3.

The fixed images provided here take precedence over the ones provided in the target definition.

spec.images is equivalent to calling kluctl deploy -t prod --fixed-image=nginx:example-namespace:Deployment/example=nginx:1.21.6 ... and to kluctl deploy -t prod --fixed-images-file=fixed-images.yaml with fixed-images.yaml containing:

images:
- image: nginx
  resultImage: nginx:1.21.6
  namespace: example-namespace
  deployment: Deployment/example
- image: registry.gitlab.com/my-org/my-repo/image
  resultImage: registry.gitlab.com/my-org/my-repo/image:1.2.3

It is advised to use dynamic targets instead of providing images directly in the KluctlDeployment object.

dryRun

spec.dryRun is a boolean value that turns the deployment into a dry-run deployment. This is equivalent to calling kluctl deploy -t prod --dry-run.

noWait

spec.noWait is a boolean value that disables all internal waiting (hooks and readiness). This is equivalent to calling kluctl deploy -t prod --no-wait.

forceApply

spec.forceApply is a boolean value that causes kluctl to solve conflicts via force apply. This is equivalent to calling kluctl deploy -t prod --force-apply.

replaceOnError and forceReplaceOnError

spec.replaceOnError and spec.forceReplaceOnError are both boolean values that cause kluctl to perform a replace after a failed apply. forceReplaceOnError goes a step further and deletes and recreates the object in question. These are equivalent to calling kluctl deploy -t prod --replace-on-error and kluctl deploy -t prod --force-replace-on-error.

abortOnError

spec.abortOnError is a boolean value that causes kluctl to abort as fast as possible in case of errors. This is equivalent to calling kluctl deploy -t prod --abort-on-error.

includeTags, excludeTags, includeDeploymentDirs and excludeDeploymentDirs

spec.includeTags and spec.excludeTags are lists of tags to be used in inclusion/exclusion logic while deploying. These are equivalent to calling kluctl deploy -t prod --include-tag <tag1> and kluctl deploy -t prod --exclude-tag <tag2>.

spec.includeDeploymentDirs and spec.excludeDeploymentDirs are lists of relative deployment directories to be used in inclusion/exclusion logic while deploying. These are equivalent to calling kluctl deploy -t prod --include-deployment-dir <dir1> and kluctl deploy -t prod --exclude-deployment-dir <dir2>.

Kubeconfigs and RBAC

As Kluctl is meant to be a CLI-first tool, it expects a kubeconfig to be present while deployments are performed. The controller will generate such kubeconfigs on-the-fly before performing the actual deployment.

The kubeconfig can be generated from 3 different sources:

  1. The default impersonation service account specified at controller startup (via --default-service-account)
  2. The service account specified via spec.serviceAccountName in the KluctlDeployment
  3. The secret specified via spec.kubeConfig in the KluctlDeployment.

The behavior/functionality of 1. and 2. is comparable to how the kustomize-controller handles impersonation, with the difference that a kubeconfig with a “default” context is created in-between.

spec.kubeConfig will simply load the kubeconfig from data.value of the specified secret.
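
A sketch of how such a secret could be created and then referenced (all names are placeholders; the kubeconfig is stored under the default ‘value’ key):

kubectl create secret generic prod-kubeconfig \
  -n flux-system --from-file=value=./prod-kubeconfig.yaml

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: example
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: example
  target: prod
  kubeConfig:
    secretRef:
      name: prod-kubeconfig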

Kluctl targets specify a context name that is expected to be present in the kubeconfig while deploying. As the context found in the generated kubeconfig does not necessarily have the correct name, spec.renameContexts allows to rename contexts to the desired names. This is especially useful when using service account based kubeconfigs, as these always have the same context with the name “default”.

Here is an example of a deployment that uses the service account “prod-service-account” and renames the context appropriately (assuming the Kluctl cluster config for the given target expects a “prod” context):

apiVersion: flux.kluctl.io/v1alpha1
kind: KluctlDeployment
metadata:
  name: example
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: example
  target: prod
  serviceAccountName: prod-service-account
  renameContexts:
    - oldContext: default
      newContext: prod

Status

When the controller completes a deployment, it reports the result in the status sub-resource.

A successful reconciliation sets the ready condition to true and updates the revision field:

status:
  commonLabels:
    examples.kluctl.io/deployment-project: microservices-demo
    examples.kluctl.io/deployment-target: prod
  conditions:
  - lastTransitionTime: "2022-07-07T11:48:14Z"
    message: Deployed revision: master/2129450c9fc867f5a9b25760bb512054d7df6c43
    reason: ReconciliationSucceeded
    status: "True"
    type: Ready
  lastDeployResult:
    objectsHash: bc4d2b9f717088a395655b8d8d28fa66a9a91015f244bdba3c755cd87361f9e2
    result:
      hookObjects:
      - ...
      orphanObjects:
      - ...
      seenImages:
      - ...
      warnings:
      - ...
    revision: master/2129450c9fc867f5a9b25760bb512054d7df6c43
    targetName: prod
    time: "2022-07-07T11:49:29Z"
  lastPruneResult:
    objectsHash: bc4d2b9f717088a395655b8d8d28fa66a9a91015f244bdba3c755cd87361f9e2
    result:
      deletedObjects:
      - ...
    revision: master/2129450c9fc867f5a9b25760bb512054d7df6c43
    targetName: prod
    time: "2022-07-07T11:49:48Z"
  lastValidateResult:
    error: ""
    objectsHash: bc4d2b9f717088a395655b8d8d28fa66a9a91015f244bdba3c755cd87361f9e2
    result:
      errors:
      - ...
      ready: false
      results:
      - ...
    revision: master/2129450c9fc867f5a9b25760bb512054d7df6c43
    targetName: prod
    time: "2022-07-07T12:05:53Z"
  observedGeneration: 1

You can wait for the controller to complete a reconciliation with:

kubectl wait kluctldeployment/backend --for=condition=ready

A failed reconciliation sets the ready condition to false:

status:
  conditions:
  - lastTransitionTime: "2022-05-04T10:18:11Z"
    message: target invalid-name not found in kluctl project
    reason: PrepareFailed
    status: "False"
    type: Ready
  lastDeployResult:
    ...
  lastPruneResult:
    ...
  lastValidateResult:
    ...

Note that the lastDeployResult, lastPruneResult and lastValidateResult are only updated on a successful reconciliation.

8.4 - KluctlDeployment API reference

Flux Kluctl Controller documentation.

Packages:

flux.kluctl.io/v1alpha1

Package v1alpha1 contains API Schema definitions for the flux.kluctl.io v1alpha1 API group.

Resource Types:

    FixedImage

    (Appears on: KluctlDeploymentTemplateSpec)

    Fields: image (string), resultImage (string), deployedImage (string),
    registryImage (string), namespace (string), object (ObjectRef),
    deployment (string), container (string), versionFilter (string),
    deployTags ([]string), deploymentDir (string)

    KluctlDeployment

    KluctlDeployment is the Schema for the kluctldeployments API

    Field Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    KluctlDeploymentSpec


    KluctlProjectSpec
    KluctlProjectSpec

    (Members of KluctlProjectSpec are embedded into this type.)

    KluctlDeploymentTemplateSpec
    KluctlDeploymentTemplateSpec

    (Members of KluctlDeploymentTemplateSpec are embedded into this type.)

    target
    string

    Target specifies the kluctl target to deploy

    status
    KluctlDeploymentStatus

    KluctlDeploymentSpec

    (Appears on: KluctlDeployment)

    KluctlDeploymentSpec defines the desired state of KluctlDeployment

    Field Description
    KluctlProjectSpec
    KluctlProjectSpec

    (Members of KluctlProjectSpec are embedded into this type.)

    KluctlDeploymentTemplateSpec
    KluctlDeploymentTemplateSpec

    (Members of KluctlDeploymentTemplateSpec are embedded into this type.)

    target
    string

    Target specifies the kluctl target to deploy

    KluctlDeploymentStatus

    (Appears on: KluctlDeployment)

    KluctlDeploymentStatus defines the observed state of KluctlDeployment

    Field Description
    KluctlProjectStatus
    KluctlProjectStatus

    (Members of KluctlProjectStatus are embedded into this type.)

    lastDeployResult
    LastCommandResult
    (Optional)

    LastDeployResult is the result of the last deploy command

    lastPruneResult
    LastCommandResult
    (Optional)

    LastPruneResult is the result of the last prune command

    lastValidateResult
    LastValidateResult
    (Optional)

    LastValidateResult is the result of the last validate command

    commonLabels
    map[string]string
    (Optional)

    CommonLabels are the commonLabels found in the deployment project when the last deployment was done. This is used to perform cleanup/deletion in case the KluctlDeployment project is deleted

    rawTarget
    string
    (Optional)

    KluctlDeploymentTemplateSpec

    (Appears on: KluctlDeploymentSpec)

    Field Description
    KluctlTimingSpec
    KluctlTimingSpec

    (Members of KluctlTimingSpec are embedded into this type.)

    registrySecrets
    []github.com/fluxcd/pkg/apis/meta.LocalObjectReference
    (Optional)

    RegistrySecrets is a list of secret references to be used for image registry authentication. The secrets must either have “.dockerconfigjson” included or “registry”, “username” and “password”. Additionally, “caFile” and “insecure” can be specified.

    serviceAccountName
    string
    (Optional)

    The name of the Kubernetes service account to use while deploying. If not specified, the default service account is used.

    kubeConfig
    KubeConfig
    (Optional)

    The KubeConfig for deploying to the target cluster. Specifies the kubeconfig to be used when invoking kluctl. Contexts in this kubeconfig must match the context found in the kluctl target. As an alternative, RenameContexts can be used to fix non-matching context names.

    renameContexts
    []RenameContext
    (Optional)

    RenameContexts specifies a list of context rename operations. This is useful when the kluctl target’s context does not match with the contexts found in the kubeconfig while deploying. This is the case when using kubeconfigs generated from service accounts, in which case the context name is always “default”.

    args
    map[string]string
    (Optional)

    Args specifies dynamic target args. Only arguments defined by ‘dynamicArgs’ of the target are allowed.

    updateImages
    bool
    (Optional)

    UpdateImages instructs kluctl to update dynamic images. Equivalent to using ‘-u’ when calling kluctl.

    images
    []FixedImage
    (Optional)

    Images contains a list of fixed image overrides. Equivalent to using ‘--fixed-images-file’ when calling kluctl.

    dryRun
    bool
    (Optional)

    DryRun instructs kluctl to run everything in dry-run mode. Equivalent to using ‘--dry-run’ when calling kluctl.

    noWait
    bool
    (Optional)

    NoWait instructs kluctl to not wait for any resources to become ready, including hooks. Equivalent to using ‘--no-wait’ when calling kluctl.

    forceApply
    bool
    (Optional)

    ForceApply instructs kluctl to force-apply in case of SSA conflicts. Equivalent to using ‘--force-apply’ when calling kluctl.

    replaceOnError
    bool
    (Optional)

    ReplaceOnError instructs kluctl to replace resources on error. Equivalent to using ‘--replace-on-error’ when calling kluctl.

    forceReplaceOnError
    bool
    (Optional)

    ForceReplaceOnError instructs kluctl to force-replace resources in case a normal replace fails. Equivalent to using ‘--force-replace-on-error’ when calling kluctl.

    abortOnError
    bool
    (Optional)

    AbortOnError instructs kluctl to abort deployments immediately when something fails. Equivalent to using ‘--abort-on-error’ when calling kluctl.

    includeTags
    []string
    (Optional)

    IncludeTags instructs kluctl to only include deployments with given tags. Equivalent to using ‘--include-tag’ when calling kluctl.

    excludeTags
    []string
    (Optional)

    ExcludeTags instructs kluctl to exclude deployments with given tags. Equivalent to using ‘--exclude-tag’ when calling kluctl.

    includeDeploymentDirs
    []string
    (Optional)

    IncludeDeploymentDirs instructs kluctl to only include deployments with the given dir. Equivalent to using ‘--include-deployment-dir’ when calling kluctl.

    excludeDeploymentDirs
    []string
    (Optional)

    ExcludeDeploymentDirs instructs kluctl to exclude deployments with the given dir. Equivalent to using ‘--exclude-deployment-dir’ when calling kluctl.

    deployMode
    string
    (Optional)

    DeployMode specifies what deploy mode should be used

    prune
    bool
    (Optional)

    Prune enables pruning after deploying.

    deployInterval
    Kubernetes meta/v1.Duration
    (Optional)

    DeployInterval specifies the interval at which to deploy the KluctlDeployment. This is independent of the ‘Interval’ value, which only causes deployments if some deployment objects have changed.

    validateInterval
    Kubernetes meta/v1.Duration
    (Optional)

    ValidateInterval specifies the interval at which to validate the KluctlDeployment. Validation is performed the same way as with ‘kluctl validate -t <target>’. Defaults to 1m.

    KluctlProjectSpec

    (Appears on: KluctlDeploymentSpec)

    Field Description
    path
    string
    (Optional)

    Path to the directory containing the .kluctl.yaml file. Defaults to ‘None’, which translates to the root path of the SourceRef.

    sourceRef
    github.com/fluxcd/pkg/apis/meta.NamespacedObjectKindReference

    Reference of the source where the kluctl project is. The authentication secrets from the source are also used to authenticate dependent git repositories which are cloned while deploying the kluctl project.

    KluctlProjectStatus

    (Appears on: KluctlDeploymentStatus)

    KluctlProjectStatus defines the observed state of KluctlProjectStatus

    Field Description
    ReconcileRequestStatus
    github.com/fluxcd/pkg/apis/meta.ReconcileRequestStatus

    (Members of ReconcileRequestStatus are embedded into this type.)

    observedGeneration
    int64
    (Optional)

    ObservedGeneration is the last reconciled generation.

    conditions
    []Kubernetes meta/v1.Condition
    (Optional)
    lastAttemptedRevision
    string
    (Optional)

    LastAttemptedRevision is the revision of the last reconciliation attempt.

    KluctlTimingSpec

    (Appears on: KluctlDeploymentTemplateSpec)

    Field Description
    interval
    Kubernetes meta/v1.Duration

    The interval at which to reconcile the KluctlDeployment.

    retryInterval
    Kubernetes meta/v1.Duration
    (Optional)

    The interval at which to retry a previously failed reconciliation. When not specified, the controller uses the KluctlDeploymentSpec.Interval value to retry failures.

    timeout
    Kubernetes meta/v1.Duration
    (Optional)

    Timeout for all operations. Defaults to ‘Interval’ duration.

    suspend
    bool
    (Optional)

    This flag tells the controller to suspend subsequent kluctl executions, it does not apply to already started executions. Defaults to false.

    KubeConfig

    (Appears on: KluctlDeploymentTemplateSpec)

    KubeConfig references a Kubernetes secret that contains a kubeconfig file.

    Field Description
    secretRef
    github.com/fluxcd/pkg/apis/meta.SecretKeyReference

    SecretRef holds the name of a secret that contains a key with the kubeconfig file as the value. If no key is set, the key will default to ‘value’. The secret must be in the same namespace as the KluctlDeployment. It is recommended that the kubeconfig is self-contained, and the secret is regularly updated if credentials such as a cloud-access-token expire. Cloud specific cmd-path auth helpers will not function without adding binaries and credentials to the Pod that is responsible for reconciling the KluctlDeployment.

    LastCommandResult

    (Appears on: KluctlDeploymentStatus)

    Field Description
    ReconcileResultBase
    ReconcileResultBase

    (Members of ReconcileResultBase are embedded into this type.)

    rawResult
    string
    (Optional)
    error
    string
    (Optional)

    LastValidateResult

    (Appears on: KluctlDeploymentStatus)

    Field Description
    ReconcileResultBase
    ReconcileResultBase

    (Members of ReconcileResultBase are embedded into this type.)

    rawResult
    string
    (Optional)
    error
    string
    (Optional)

    ObjectRef

    (Appears on: FixedImage)

    ObjectRef contains the information necessary to locate a resource within a cluster.

    Fields: group (string), version (string), kind (string), name (string), namespace (string)

    ReconcileResultBase

    (Appears on: LastCommandResult, LastValidateResult)

    Field Description
    time
    Kubernetes meta/v1.Time

    AttemptedAt is the time when the attempt was performed

    revision
    string
    (Optional)

    Revision is the source revision. Please note that kluctl projects have dependent git repositories which are not considered in the source revision

    targetName
    string

    TargetName is the name of the target

    objectsHash
    string
    (Optional)

    ObjectsHash is the hash of all rendered objects

    RenameContext

    (Appears on: KluctlDeploymentTemplateSpec)

    RenameContext specifies a single rename of a context

    Field Description
    oldContext
    string

    OldContext is the name of the context to be renamed

    newContext
    string

    NewContext is the new name of the context

    This page was automatically generated with gen-crd-api-reference-docs