This is a collection of recipes, targeted at developers, devops engineers, SREs and everyone else who wants to use Kluctl to deploy their workloads to Kubernetes.
These recipes try to describe how to implement common use cases and tasks.
This recipe will guide you on how to deploy the same deployment multiple times, either to the same cluster (via namespaces) or to different clusters.
The easiest way to achieve this is to define targets in your `.kluctl.yaml`. Each target should then use `args` to define a small set of configuration values for the specific target.
Each target should relate to the target environment and/or cluster that it needs to be deployed to. For example, one could be named `prod` while another is named `test`, meaning that you can either deploy to the `prod` or to the `test` environment. It's also useful to set the `context` field on each target, so that you can't accidentally deploy the `prod` target to the `test` cluster.
`args` should be minimalistic to avoid bloating up the `.kluctl.yaml`. It should be used as the "entrypoint" into the actual configuration, which is then loaded from inside the root `deployment.yaml` via `vars`. See advanced configuration for details on this.
Example targets definition:
```yaml
targets:
  - name: prod
    context: prod.example.com
    args:
      environment_name: prod
  - name: test
    context: test.example.com
    args:
      environment_name: test

# Warning: this discriminator is only ok if targets are only deployed once per cluster. See the next chapter for details.
discriminator: "my-project-{{ target.name }}"

args:
  - name: environment_name
```
Example CLI invocations:
```sh
$ kluctl deploy -t prod
$ kluctl deploy -t test
```
As an alternative to very specific targets, you could also define targets that are more dynamic, so that each target can be deployed multiple times, but to different Kubernetes contexts or even namespaces. You can also mix such targets, for example have one `prod` target that is just like described in the previous chapter, and one `non-prod` target that can be used to deploy to multiple non-production clusters.
The dynamic targets then need a way to be differentiated. The easiest way is to use different contexts, which means you deploy to different clusters. Another way is to introduce `args` that serve to differentiate, e.g. an arg named `environment_name`, which can then be used to deploy the same workloads to different namespaces, add prefixes to global resources, create unique ingresses, and so on. If such an argument is introduced, you would then invoke the CLI with the argument being set.
Another thing to take into account is the required uniqueness of discriminators to make delete and prune work properly. If you miss this crucial part or make a mistake, you might end up deleting resources that were not meant to be deleted. The uniqueness must be ensured inside the boundaries of individual clusters.
Example targets definition:
```yaml
targets:
  - name: prod
    context: prod.example.com
    args:
      environment_type: prod
      environment_name: prod
  - name: non-prod
    args:
      environment_type: non-prod
      # environment_name must be passed via CLI

# This will ensure that the discriminator is unique, even if the same target is deployed multiple times
discriminator: my-project-{{ target.name }}-{{ args.environment_type }}-{{ args.environment_name }}
# This is a bad example of a discriminator. It will lead to the discriminator being equal for every environment deployed to the same cluster.
# discriminator: "my-project-{{ target.name }}"

args:
  - name: environment_type
  - name: environment_name
```
Example CLI invocations:
```sh
$ kluctl deploy -t prod # deploys to prod context
$ kluctl deploy -t non-prod -a environment_name=test-env1 # deploys to currently active context
$ kluctl deploy -t non-prod -a environment_name=test-env2 # deploys to currently active context
$ kluctl deploy -t non-prod -a environment_name=test-env3 --context test2.example.com
```
Right now, Kluctl internally uses a single label to store discriminators in Kubernetes. This imposes a serious limitation on the length of discriminators, which is capped at 63 characters. This means that the discriminator template shown in the above example can easily lead to errors. This issue will be fixed when https://github.com/kluctl/kluctl/issues/468 is implemented. Until then, you might need to use some form of shortening, e.g. a shortened hash of some string. Example:
```yaml
discriminator: my-project-{{ target.name }}-{{ args.environment_type }}-{{ (args.environment_name | sha256)[:8] }}
```
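To illustrate what the shortened hash buys you, here is a small Python sketch (the project and target names are made up; Kluctl's `sha256` template filter produces a hex digest, which `hashlib` mimics here):

```python
import hashlib

def short_discriminator(target_name: str, environment_type: str, environment_name: str) -> str:
    """Mimic the discriminator template above: hash the environment name
    and keep only the first 8 hex characters to stay short."""
    env_hash = hashlib.sha256(environment_name.encode()).hexdigest()[:8]
    return f"my-project-{target_name}-{environment_type}-{env_hash}"

disc = short_discriminator(
    "non-prod", "non-prod", "a-very-long-environment-name-that-would-otherwise-overflow"
)
print(disc)
# the result stays well within the 63-character label value limit
assert len(disc) <= 63
```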
So far, we have only shown how to define and use the `targets` feature to implement multiple target environments. This works out-of-the-box when you target different clusters per target, but will need some additional work when deploying to the same cluster. In that case, you are required to use different namespaces for each environment. This can be easily achieved by using the mentioned `environment_name` inside resources. Combined with templating, it can be used to create dynamic namespaces, prefix resource names and create unique ingresses.
Example project:
```
my-project/
├── .kluctl.yaml
├── deployment.yaml
├── namespaces/
│   └── namespace.yaml
└── apps/
    ├── deployment.yaml
    ├── app1/
    │   ├── resource1.yaml
    │   └── resource2.yaml
    └── app2/
        ├── resource1.yaml
        └── resource2.yaml
```
`.kluctl.yaml`: see the example targets definition above.

```yaml
# deployment.yaml (root)
deployments:
  - path: namespaces
  - barrier: true # ensure namespaces are applied before we continue
  - include: apps
```
```yaml
# namespaces/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ args.environment_name }}
```
```yaml
# apps/deployment.yaml
deployments:
  - path: app1
  - path: app2
# This instructs Kluctl to set the specified namespace on all resources, including resources from `app1` and `app2`,
# that do not have a namespace set explicitly.
overrideNamespace: {{ args.environment_name }}
```
```yaml
# apps/app1/resource1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm
  # no namespace needed here, as it is set via the `overrideNamespace` from `apps/deployment.yaml`
data:
  # just an example to show that you can also use the `args` here
  environment_name: {{ args.environment_name }}
```
This recipe will try to give best practices on how to achieve advanced configuration that stays maintainable.
Kluctl offers multiple ways to introduce configuration args into your deployment. These are all accessible via templating by referencing the global `args` variable, e.g. `{{ args.my_arg }}`.
Args can be passed via command line arguments, target definitions and GitOps KluctlDeployment spec.
It might however be tempting to provide all necessary configuration via args, which can easily end up clogging things up in a very unmaintainable way.
The better and much more maintainable approach is to combine `args` with variable sources. You could, for example, introduce an arg that is later used to load further configuration from YAML files or even external vars sources (e.g. Git).
Consider the following example:
```yaml
# .kluctl.yaml
targets:
  - name: prod
    context: prod.example.com
    args:
      environment_type: prod
      environment_name: prod
  - name: test
    context: test.example.com
    args:
      environment_type: non-prod
      environment_name: test
  - name: dev
    context: test.example.com
    args:
      environment_type: non-prod
      environment_name: dev
```
```yaml
# root deployment.yaml
vars:
  - file: config/{{ args.environment_type }}.yaml
deployments:
  - include: my-include
  - path: my-deployment
```
The above `deployment.yaml` will load different configuration, depending on the passed `environment_type` argument. This means you'll also need the following configuration files:
```yaml
# config/prod.yaml
myApp:
  replicas: 3
```

```yaml
# config/non-prod.yaml
myApp:
  replicas: 1
```
This way, you don't have to bloat up the `.kluctl.yaml` with an ever-growing amount of configuration but instead can move such configuration into dedicated configuration files. The resulting configuration can then be used via templating, e.g. `{{ myApp.replicas }}`.
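For example, a workload manifest inside `my-deployment` could consume the loaded configuration like this (a sketch; the Deployment name, labels and image are made up for illustration):

```yaml
# my-deployment/deployment.yaml (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # replicas comes from config/prod.yaml or config/non-prod.yaml,
  # depending on the environment_type arg
  replicas: {{ myApp.replicas }}
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```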
Kluctl merges already-loaded configuration with freshly loaded configuration. It does this for every item in `vars`. At the same time, Kluctl allows the use of templating with the previously loaded configuration context in each loaded vars source. This means that configuration loaded by a vars item before the current one can already be used in the current one. All deployment items will then be provided with the final merged configuration. If deployment items also define vars, these are merged as well, but only for the context of the specific deployment item.
Consider the following example:
```yaml
# root deployment.yaml
vars:
  - file: config/common.yaml
  - file: config/{{ args.environment_type }}.yaml
  - file: config/monitoring.yaml
```

```yaml
# config/common.yaml
myApp:
  monitoring:
    enabled: false
```

```yaml
# config/prod.yaml
myApp:
  replicas: 3
  monitoring:
    enabled: true
```

```yaml
# config/non-prod.yaml
myApp:
  replicas: 1
```
The merged configuration for `prod` environments will have `myApp.monitoring.enabled` set to `true`, while all other environments will have it set to `false`.
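To make the merge semantics concrete, here is a rough Python sketch of a recursive map merge where later sources win (an illustration of the behavior described above, not Kluctl's actual implementation):

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay values win on conflicts."""
    result = dict(base)
    for key, value in overlay.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# mirrors config/common.yaml and config/prod.yaml from the example
common = {"myApp": {"monitoring": {"enabled": False}}}
prod = {"myApp": {"replicas": 3, "monitoring": {"enabled": True}}}

merged = deep_merge(common, prod)
print(merged)  # {'myApp': {'monitoring': {'enabled': True}, 'replicas': 3}}
```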
Kluctl supports many different variable sources, which means you are not forced to store all configuration in files which are part of the project.
You can also store configuration inside the target cluster and access it via the `clusterConfigMap` or `clusterSecret` variable sources. This configuration could, for example, be created as part of the cluster provisioning stage and contain networking, cloud and DNS information, so that it can be re-used wherever needed (e.g. in ingresses).
Consider the following example ConfigMap, which was already deployed to your target cluster:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  vars: |
    clusterInfo:
      baseDns: test.example.com
      aws:
        accountId: 12345
        irsaPrefix: test-example-com
```
Your deployment:
```yaml
# root deployment.yaml
vars:
  - clusterConfigMap:
      name: cluster-info
      namespace: kube-system
      key: vars
  - file: ... # some other configuration, as usual
deployments:
  # as usual
  - ...
```
```yaml
# some/example/ingress.yaml
# look at the DNS name
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
spec:
  rules:
    - host: my-ingress.{{ clusterInfo.baseDns }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
  tls:
    - hosts:
        - 'my-ingress.{{ clusterInfo.baseDns }}'
      secretName: 'ssl-cert'
```
```yaml
# some/example/irsa-service-account.yaml
# Assuming you're using IRSA (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)
# for external-dns
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::{{ clusterInfo.aws.accountId }}:role/{{ clusterInfo.aws.irsaPrefix }}-external-dns
```
This recipe will try to give best practices on how to leverage the kluctl controller to implement Kluctl GitOps. Before exploring Kluctl GitOps, it is suggested to first learn how Kluctl works without GitOps being involved.
You should also try to understand how to deploy to multiple targets/environments first to get a basic understanding of how the same deployment project can be deployed multiple times.
The source shown in this recipe can also be found on GitHub in the kluctl-examples repository.
Kluctl follows a command-line-first approach, which means that all features implemented into Kluctl will always be added in a way that lets you keep using the CLI. This means that Kluctl does not depend on the controller to implement all its features.
Letting the controller take over is optional and can even be done in a way so that you can mix CLI based (push-based GitOps) approaches and controller based approaches (pull-based GitOps).
Kluctl considers GitOps as just another interface for your deployments. This means that everything that can be performed and configured via the CLI can also be configured through the Kluctl CRDs (`KluctlDeployment`).
Consider a deployment project that you usually deploy via these commands:
```sh
$ git clone https://github.com/kluctl/kluctl-examples.git
$ cd kluctl-examples/simple
$ kluctl deploy -t simple -a environment=test
```
The above lines perform a deployment in the “push” style, meaning that you (or your CI) pushes the deployment to the target cluster. That same deployment project can also be deployed in “pull” style, which involves the kluctl-controller running on the target cluster that “pulls” the deployment into the cluster.
If you have the controller already installed, you can apply the following `KluctlDeployment` to your target cluster:
```yaml
# file example-deployment.yaml
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: example-deployment
  namespace: kluctl-system
spec:
  interval: 5m
  source:
    git:
      url: https://github.com/kluctl/kluctl-examples.git
      path: simple
  target: simple
  args:
    environment: test
  context: default
```
The above manifest can be applied via plain `kubectl apply -f example-deployment.yaml` or via a Kluctl deployment project. Later sections will go into more detail about some possible options.
After the `KluctlDeployment` got applied, the controller will periodically (5m interval) clone the repository and check if the result of the rendering process differs since the last deployment. If it differs, the controller will deploy the deployment project with the given options (which are equal to the options of the CLI example from above).
After a `KluctlDeployment` is applied to the cluster, the kluctl-controller will immediately pick up that deployment and start to periodically reconcile it. Reconciliation basically performs the following steps:

- Clone the source and render the deployment project
- If the rendered result changed since the last deployment, deploy the project with the configured options
- Perform drift detection between the rendered result and the cluster state
- Wait for `interval` and then repeat the reconciliation loop

If you already know GitOps from other solutions (e.g. Flux), you might notice that Kluctl does not deploy on every reconciliation iteration but instead only when the source changes. This deviation from other GitOps solutions is intended, as it enables more flexible intervention and processes (e.g. mixing GitOps with push-based processes).
To mitigate drift between the source and the cluster state, drift detection is performed on every reconciliation iteration. If necessary, the drift can be viewed and fixed via the Kluctl Webui or via the GitOps commands.
You can also override this behavior to match the behavior of other GitOps solutions by using `deployInterval`, which will cause the reconciliation loop to periodically perform a deployment even if the source does not change.
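A minimal sketch of such a spec (only the relevant fields are shown; the values are examples):

```yaml
spec:
  # check the source for changes every 5 minutes
  interval: 5m
  # additionally re-deploy every hour, even if the rendered source did not change
  deployInterval: 1h
```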
To start using Kluctl GitOps, install it into your cluster first.
Optionally, if you want to use the Kluctl Webui to monitor and control your GitOps deployments, either run it locally or install it into the cluster.
`KluctlDeployment` resources need to be applied and managed the same way as any other Kubernetes resource. You might easily end up managing dozens or even hundreds of `KluctlDeployment`s per cluster. The recommended way to do this is to introduce a dedicated GitOps deployment project which is only responsible for the management of other deployments. Other options exist as well; it's for example also possible to include the `KluctlDeployment` resource in the deployment itself, so that when you perform the initial deployment, you automatically let GitOps take over. The following sections will go into more detail.
In this setup, you'll have one dedicated directory (a simple deployment item) for each cluster. These deployment items will contain one or more `KluctlDeployment` resources. The deployment works by using a simple templated entry in `deployments` which uses the argument `cluster_name`, so that a different directory is loaded for each cluster.
A `clusters/all` deployment item is loaded for each cluster as well. The `clusters/all` deployment item is meant to add common deployments that are needed on all clusters. One of these deployments is the GitOps deployment itself, so that it is also managed via GitOps.
The `namespaces` deployment item is used to create the `kluctl-gitops` namespace, which we then use to deploy the `KluctlDeployment` resources into. It's generally best practice to use a dedicated namespace for GitOps.
Consider the following project structure:
```
gitops-deployment/
├── namespaces/
│   └── kluctl-gitops.yaml
├── clusters/
│   ├── test.example.com/
│   │   ├── app1.yaml
│   │   └── app2.yaml
│   ├── prod.example.com/
│   │   ├── app1.yaml
│   │   └── app2.yaml
│   ├── all/
│   │   └── gitops.yaml
│   └── deployment.yaml
├── .kluctl.yaml
└── deployment.yaml
```
And the following YAML files and manifests:
```yaml
# .kluctl.yaml
args:
  # This allows us to deploy the GitOps deployment to different clusters. It is used to include dedicated deployment
  # items for the selected cluster.
  - name: cluster_name
targets:
  - name: gitops
    # Without a discriminator, pruning won't work. Make sure the rendered result is unique on the target cluster
    discriminator: gitops-{{ args.cluster_name | slugify }}
```
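The `slugify` filter makes the cluster name safe for use in a discriminator. A rough Python approximation of what it does (an illustration, not Kluctl's exact implementation):

```python
import re

def slugify(value: str) -> str:
    """Rough approximation of the slugify template filter:
    lowercase, replace runs of non-alphanumeric characters with '-'."""
    value = value.lower()
    return re.sub(r"[^a-z0-9]+", "-", value).strip("-")

print(slugify("test.example.com"))             # test-example-com
print(f"gitops-{slugify('test.example.com')}") # gitops-test-example-com
```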
```yaml
# deployment.yaml
deployments:
  - path: namespaces
  - barrier: true
  - include: clusters
```
```yaml
# clusters/deployment.yaml
deployments:
  # Include things that are required on all clusters (e.g., the KluctlDeployment for the GitOps deployment itself)
  - path: all
  # We use simple templating to choose a dedicated deployment item per cluster
  - path: {{ args.cluster_name }}
```
```yaml
# namespaces/kluctl-gitops.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kluctl-gitops
```
```yaml
# clusters/test.example.com/app1.yaml
# and clusters/prod.example.com/app1.yaml
# but with adjusted specs (e.g., environment names differ)
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: app1
  namespace: kluctl-gitops
spec:
  interval: 5m
  source:
    git:
      url: https://github.com/kluctl/kluctl-examples.git
      path: simple
  target: simple
  args:
    environment: test
  context: default
  # Let it automatically clean up orphan resources and delete all resources when the KluctlDeployment itself gets
  # deleted. You might consider setting these to false for prod and instead do manual pruning and deletion when the
  # need arises.
  prune: true
  delete: true
```
```yaml
# clusters/test.example.com/app2.yaml
# and clusters/prod.example.com/app2.yaml
# but with adjusted specs (e.g., environment names differ)
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: app2
  namespace: kluctl-gitops
spec:
  interval: 5m
  source:
    git:
      url: https://github.com/kluctl/kluctl-examples.git
      path: simple-helm
  target: simple-helm
  args:
    environment: test
  context: default
  # Let it automatically clean up orphan resources and delete all resources when the KluctlDeployment itself gets
  # deleted. You might consider setting these to false for prod and instead do manual pruning and deletion when the
  # need arises.
  prune: true
  delete: true
```
```yaml
# clusters/all/gitops.yaml
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: gitops
  namespace: kluctl-gitops
spec:
  interval: 5m
  source:
    git:
      url: https://github.com/kluctl/kluctl-examples.git
      path: gitops-deployment # You could also use a dedicated repository without a sub-directory
  target: gitops
  args:
    # this passes the cluster_name initially passed via `kluctl deploy -a cluster_name=xxx.example.com` into the KluctlDeployment
    cluster_name: {{ args.cluster_name }}
  context: default
  # let it automatically clean up orphan KluctlDeployment resources
  prune: true
  delete: true
```
Please note that the above example deployments do not require authentication. It’s very likely that you’d need authentication for Git repositories, Helm repositories or OCI registries in your own setup, simply because not everything is public and/or Open Source.
To add authentication for the `KluctlDeployment`s, fill the credentials field in the spec of the `KluctlDeployment`s. These credentials refer to `Secret`s which also need to be deployed to the cluster. You can either provide these secrets manually (which should be avoided), via SOPS-encrypted `Secret`s (which can then be part of the GitOps deployment project itself), or via External Secrets.
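A rough sketch of how such credentials could look on a `KluctlDeployment` (the host and the secret name are made up for illustration; check the credentials field documentation for the exact structure):

```yaml
spec:
  credentials:
    git:
      - host: github.com
        secretRef:
          # a Secret in the same namespace, e.g. containing `username` and `password` keys
          name: git-credentials
```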
Please ensure that you have committed and pushed all required files before you bootstrap the GitOps deployment. Otherwise, you’ll end up deploying different states from your local version while the controller will apply the Git version.
To bootstrap the GitOps deployment project, simply perform a `kluctl deploy`:
```sh
$ cd gitops-deployment
$ kluctl deploy -a cluster_name=test.example.com
```
This will deploy the GitOps deployment to the current context cluster. After this deployment, the kluctl-controller will immediately start reconciling all deployed `KluctlDeployment` resources, including the one for the GitOps deployment itself.
This means, to change any of the deployments, perform the changes in Git via your already established processes (e.g., pull-requests or direct pushes to the main branch).
Each individual `KluctlDeployment` can be controlled and inspected via the Kluctl CLI (check the `kluctl gitops xxx` sub-commands). Each command takes the `KluctlDeployment` name and its namespace as arguments.
In addition, if `--name` and `--namespace` are omitted, the CLI will try to auto-detect the `KluctlDeployment` if your current directory is inside a Kluctl deployment project. It does so by using the URL of the Git `origin` remote and the subdirectory inside the Git repository to find one or more `KluctlDeployment`s that refer to this project.
The CLI can suspend and resume individual `KluctlDeployment`s. This is useful if you need to perform work that would otherwise be hard to do with constant reconciliation being active. This includes refactorings, migrations and other more complex tasks. While suspended, manual reconciliation via the CLI and the Webui is still possible.
To suspend the `app1` deployment, run the following CLI command:

```sh
$ kluctl gitops suspend --namespace kluctl-gitops --name app1
```
While suspended, you can perform whatever actions you need without the kluctl-controller intervening. Then, to resume the deployment, run:

```sh
$ kluctl gitops resume --namespace kluctl-gitops --name app1
```
You can trigger different manual requests via the CLI. Please note that these requests are executed by the controller even though the usage of the CLI feels like things are executed locally.
Every manual request command is able to override many of the spec fields found in the `KluctlDeployment`. The CLI tries its best to mimic the interface already found in the non-GitOps based commands (e.g. `kluctl deploy`). As an example, with `kluctl gitops deploy --namespace=xxx --name=yyy` you can pass deployment arguments via `-a my_arg=my_value` the same way as you already can with `kluctl deploy`.
Consider running `kluctl gitops diff ...` before running any potentially disruptive commands. This behavior might change in the future.

The CLI will also try to detect if the Git repository you're currently in is related to the Git repository used in the referenced `KluctlDeployment`. In that case, the CLI will upload the local source code to the controller for a one-time override. This means that the kluctl-controller will actually work with your local version of the project. This is mostly useful when you want to verify that changes are valid before actually pushing/merging your changes.
The following invocation will request a single reconciliation iteration. This means it will do the same as described in The reconciliation loop.

```sh
$ kluctl gitops reconcile --namespace kluctl-gitops --name app1
```
The following invocation will perform a diff and print the result. This is especially useful if your local version of the source code contains modifications which you’d like to verify.
```sh
$ kluctl gitops diff --namespace kluctl-gitops --name app1
```
The following invocation will cause a manual prune (delete orphan objects).
```sh
$ kluctl gitops prune --namespace kluctl-gitops --name app1
```
The following CLI command can be used to view controller logs related to a given `KluctlDeployment`:

```sh
$ kluctl gitops logs --namespace kluctl-gitops --name app1 -f
```
In addition to the Kluctl GitOps commands, the Kluctl Webui can be used to monitor and control the `KluctlDeployment`s.
The Webui is still very experimental, meaning that many features are still missing. But generally, performing manual requests and viewing state, diffs and logs should already work well enough as of now.
Kluctl allows you to mix pull-based GitOps with push-based CLI workflows. You can use GitOps for some targets/environments (e.g. prod) and revert to using push-based CLI workflows in other targets/environments (e.g. dev environments). This is useful if you want the security and stability of GitOps on prod while still having the flexibility and speed of development on non-prod environments.
You can also use GitOps for a target/environment to perform the actual deployments while using `kluctl diff` in the push fashion to test/verify changes before actually pushing/merging to the main branch.