Kluctl Documentation
- 1: Core Concepts
- 2: Installation
- 3: Get Started
- 4: Philosophy
- 5: History
- 6: Reference
- 6.1: Kluctl project
- 6.1.1: targets
- 6.2: Deployments
- 6.2.1: Deployments
- 6.2.2: Kustomize Integration
- 6.2.3: Container Images
- 6.2.4: Helm Integration
- 6.2.5: SOPS Integration
- 6.2.6: Hooks
- 6.2.7: Readiness
- 6.2.8: Tags
- 6.2.9: Annotations
- 6.2.9.1: All resources
- 6.2.9.2: Hooks
- 6.2.9.3: Validation
- 6.2.9.4: Kustomize
- 6.3: Templating
- 6.3.1: Predefined Variables
- 6.3.2: Variable Sources
- 6.3.3: Filters
- 6.3.4: Functions
- 6.4: GitOps
- 6.4.1: Metrics
- 6.4.1.1: v1beta1 metrics
- 6.4.1.1.1: Metrics of the KluctlDeployment Controller
- 6.4.2: Specs
- 6.4.2.1: v1beta1 specs
- 6.4.2.1.1: KluctlDeployment
- 6.4.3: Legacy Controller Migration
- 6.4.4: Kluctl Controller API reference
- 6.5: Commands
- 6.5.1: Common Arguments
- 6.5.2: Environment Variables
- 6.5.3: controller install
- 6.5.4: controller run
- 6.5.5: delete
- 6.5.6: deploy
- 6.5.7: poke-images
- 6.5.8: prune
- 6.5.9: validate
- 6.5.10: diff
- 6.5.11: list-targets
- 6.5.12: helm-pull
- 6.5.13: render
- 6.5.14: list-images
- 6.5.15: helm-update
1 - Core Concepts
These are some core concepts in Kluctl.
Kluctl project
The kluctl project defines targets. It is defined via the .kluctl.yaml configuration file.
Targets
A target defines a target cluster and a set of deployment arguments. Multiple targets can use the same cluster. Targets allow implementing multi-cluster, multi-environment, multi-customer, … deployments.
Deployments
A deployment defines which Kustomize deployments and which sub-deployments to deploy. It also controls the order of deployments.
Deployments may be configured through deployment arguments, which are typically provided via the targets but might also be provided through the CLI.
Variables
Variables are the main source of configuration. They are either loaded from YAML files or defined directly inside deployments. Each variables file that is loaded has access to all the variables which were defined before, allowing complex composition of configuration.
After being loaded, variables are usable through the templating engine at nearly all places.
Templating
All configuration files (including .kluctl.yaml and deployment.yaml) and all Kubernetes manifests involved are processed through a templating engine. The templating engine allows simple variable substitution and also complex control structures (if/else, for loops, …).
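For example, a manifest could combine simple substitution with a condition; the variable names used here are purely illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  environment: "{{ args.environment }}"
  logLevel: "{% if args.environment == 'prod' %}warn{% else %}debug{% endif %}"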
Unified CLI
The CLI of kluctl is designed to be unified/consistent as much as possible. Most commands are centered around targets
and thus require you to specify the target name (via -t <target>
). If you remember how one command works, it’s easy
to figure out how the others work. Output from all target-based commands is also unified, allowing you to easily see
what will and what did happen.
2 - Installation
Kluctl is available as a CLI and as a GitOps controller.
Installing the CLI
Binaries
The kluctl CLI is available as a binary executable for all major platforms. The binaries can be downloaded from the GitHub releases page .
Installation with Homebrew
With Homebrew for macOS and Linux:
brew install kluctl/tap/kluctl
Installation with Bash
With Bash for macOS and Linux:
curl -s https://kluctl.io/install.sh | bash
The install script does the following:
- attempts to detect your OS
- downloads and unpacks the release tar file in a temporary directory
- copies the kluctl binary to
/usr/local/bin
- removes the temporary directory
Build from source
Clone the repository:
git clone https://github.com/kluctl/kluctl
cd kluctl
Build the kluctl
binary (requires go >= 1.19):
make build
Run the binary:
./bin/kluctl -h
Container images
A container image with kluctl
is available on GitHub:
ghcr.io/kluctl/kluctl:<version>
Installing the GitOps Controller
The controller can be installed in one of two ways.
Using the “install” sub-command
The
kluctl controller install
command can be used to install the
controller. It will use an embedded version of the Controller Kluctl deployment project
found
here
.
Using a Kluctl deployment
To manage and install the controller via Kluctl, you can use a Git include in your own deployment:
deployments:
- git:
    url: https://github.com/kluctl/kluctl.git
    subDir: install/controller
    ref: v2.20.3
3 - Get Started
This tutorial shows you how to start using kluctl.
Before you begin
A few things must be prepared before you actually begin.
Get a Kubernetes cluster
The first step is of course: You need a Kubernetes cluster. It doesn't really matter where this cluster is hosted: a local (e.g. kind ) cluster, a managed cluster, or a self-hosted cluster (kops or kubespray based, on AWS, GCE, Azure, and so on). Kluctl is completely independent of how Kubernetes is deployed and where it is hosted.
There is however a minimum Kubernetes version that must be met: 1.20.0. This is due to the heavy use of server-side apply which was not stable enough in older versions of Kubernetes.
Prepare your kubeconfig
Your local kubeconfig should be configured to have access to the target Kubernetes cluster via a dedicated context. The context
name should match with the name that you want to use for the cluster from now on. Let’s assume the name is test.example.com
,
then you’d have to ensure that the kubeconfig context test.example.com
is correctly pointing to and authorized for this
cluster.
See Configure Access to Multiple Clusters for documentation on how to manage multiple clusters with a single kubeconfig. Depending on the Kubernetes provisioning/deployment tooling you used, you might also be able to directly export the context into your local kubeconfig. For example, kops is able to export and merge the kubeconfig for a given cluster.
Objectives
- Checkout one of the example Kluctl projects
- Deploy to your local cluster
- Change something and re-deploy
Install Kluctl
The kluctl
command-line interface (CLI) is required to perform deployments. Read the
installation instructions
to figure out how to install it.
Use Kluctl with a plain Kustomize deployment
The simplest way to test out Kluctl is to use an existing Kustomize deployment and just test out the CLI. For example, try it with the podtato-head project :
$ git clone https://github.com/podtato-head/podtato-head.git
$ cd podtato-head/delivery/kustomize/base
$ kluctl deploy
Then try to modify something inside the Kustomize deployment and retry the kluctl deploy
call.
Try out the Kluctl examples
For more advanced examples, check out the Kluctl example projects. Clone the example project found at https://github.com/kluctl/kluctl-examples
git clone https://github.com/kluctl/kluctl-examples.git
Choose one of the examples
You can choose whatever example you like from the cloned repository. We will however continue this guide by referring
to the simple-helm
example found in that repository. Change the current directory:
cd kluctl-examples/simple-helm
Create your local cluster
Create a local cluster with kind :
kind create cluster
This will update your kubeconfig to contain a context with the name kind-kind
. By default, all examples will use
the currently active context.
Deploy the example
Now run the following command to deploy the example:
kluctl deploy -t simple-helm
Kluctl will perform a diff first and then ask for your confirmation to deploy it. In this case, you should only see some objects being newly deployed. After confirming, verify the deployment:
kubectl -n simple-helm get pod
Change something and re-deploy
Now change something inside the deployment project. You could for example add replicaCount: 2
to deployment/nginx/helm-values.yml
.
After you have saved your changes, run the deploy command again:
kluctl deploy -t simple-helm
This time it should show your modifications in the diff. Confirm that you want to perform the deployment and then verify it:
kubectl -n simple-helm get pod
You should now see 2 instances of the nginx Pod running.
Where to continue?
Continue by reading through the tutorials and by consulting the reference documentation .
4 - Philosophy
Kluctl tries to follow a few basic ideas and a philosophy. Project and deployments structure, as well as all commands are centered on these.
Be practical
Everything found in kluctl is based on years of experience in daily business, from the perspective of a DevOps Engineer. Kluctl prefers practicability when possible, trying to make the daily life of a DevOps Engineer as comfortable as possible.
Consistent CLI
Commands try to be as consistent as possible, making it easy to remember how they are used. For example, a diff
is used the same way as a deploy
. This applies to all sizes and complexities of projects. A simple/single-application deployment is used the same way as a complex one, so that it is easy to switch between projects.
Mostly declarative
Kluctl tries to be declarative whenever possible, but loosens this in some cases to stay practical. For example, hooks, barriers and waitReadiness allow you to control the order of deployments in a way that a pure declarative approach would not allow.
Predictable and traceable
Always know what will happen (diff
or --dry-run
) and always know what happened (output changes done by a command).
There is nothing worse than not knowing what’s going to happen when you deploy the current state to prod. Not knowing what happened is on the same level.
Live and let live
Kluctl tries to not interfere with any other tools or operators. It achieves this by honoring managed fields in an intelligent way. Kluctl will never force-apply anything without being told to do so, and it will always inform you about fields that you lost ownership of.
CLI/Client first
Kluctl is centered around a unified command line interface and will always prioritize this. This guarantees that the DevOps Engineer never loses control, even if automation and/or GitOps style operators are being used.
No scripting
Kluctl tries its best to remove the need for scripts (e.g. Bash) around deployments. It tries to remove the need for external orchestration of deployment order and/or dependencies.
5 - History
Kluctl was created after multiple incarnations of complex multi-environment (e.g. dev, test, prod) deployments, including everything from monitoring, persistency and the actual custom services. The philosophy of these deployments was always “what belongs together, should be put together”, meaning that only as many Git repositories as necessary were involved.
The problems to solve turned out to be always the same:
- Dozens of Helm Charts, kustomize deployments and standalone Kubernetes manifests needed to be orchestrated in a way that they work together (services need to connect to the correct databases, and so on)
- (Encrypted) Secrets needed to be managed and orchestrated for multiple environments and clusters
- Updates of components were always risky and required keeping track of what actually changed since the last deployment
- Available tools (Helm, Kustomize) were not suitable to solve this on their own in an easy/natural way
- A lot of bash scripting was required to put things together
When this got more and more complex, and the bash scripts started to become a mess (as “simple” Bash scripts always tend to become), kluctl was started from scratch. It now tries to solve the mentioned problems and provide a useful set of features (commands) in a sane and unified way.
The first versions of kluctl were written in Python, hence the use of Jinja2 templating in kluctl. With version 2.0.0, kluctl was rewritten in Go.
6 - Reference
6.1 - Kluctl project
The .kluctl.yaml
is the central configuration and entry point for your deployments. It defines which targets are
available to invoke
commands
on.
Example
An example .kluctl.yaml looks like this:
discriminator: "my-project-{{ target.name }}"
targets:
# test cluster, dev env
- name: dev
  context: dev.example.com
  args:
    environment: dev
# test cluster, test env
- name: test
  context: test.example.com
  args:
    environment: test
# prod cluster, prod env
- name: prod
  context: prod.example.com
  args:
    environment: prod
args:
- name: environment
Allowed fields
discriminator
Specifies a default discriminator template to be used for targets that don’t have their own discriminator specified.
See target discriminator for details.
targets
Please check the targets sub-section for details.
args
A list of arguments that can or must be passed to most kluctl operations. Each of these arguments is then available
in templating via the global args
object.
An example looks like this:
targets:
...
args:
- name: environment
- name: enable_debug
  default: false
- name: complex_arg
  default:
    my:
      nested1: arg1
      nested2: arg2
These arguments can then be used in templating, e.g. by using {{ args.environment }}
.
When calling kluctl, most of the commands will then require you to specify at least -a environment=xxx
and optionally
-a enable_debug=true
The following sub chapters describe the fields for argument entries.
name
The name of the argument.
default
If specified, the argument becomes optional and will use the given value as default when not specified.
The default value can be an arbitrary yaml value, meaning that it can also be a nested dictionary. In that case, passing
args in nested form will only set the nested value. With the above example of complex_arg
, running:
kluctl deploy -t my-target -a complex_arg.my.nested1=override
will only modify the value below my.nested1
and keep the value of my.nested2
.
Using Kluctl without .kluctl.yaml
It’s possible to use Kluctl without any .kluctl.yaml
. In that case, all commands must be used without specifying the
target.
6.1.1 - targets
Specifies a list of targets for which commands can be invoked. A target puts together environment/target specific
configuration and the target cluster. Multiple targets can exist which target the same cluster but with differing
configuration (via args
).
Each value found in the target definition is rendered with a simple Jinja2 context that only contains the target and args. The rendering process is retried 10 times until it finally succeeds, allowing you to reference the target itself in complex ways.
Target entries have the following form:
targets:
...
- name: <target_name>
  context: <context_name>
  args:
    arg1: <value1>
    arg2: <value2>
    ...
  images:
    - image: my-image
      resultImage: my-image:1.2.3
  discriminator: "my-project-{{ target.name }}"
...
The following fields are allowed per target:
name
This field specifies the name of the target. The name must be unique. It is referred to in all commands via the -t option.
context
This field specifies the kubectl context of the target cluster. The context must exist in the currently active kubeconfig. If this field is omitted, Kluctl will always use the currently active context.
args
This field specifies a map of arguments to be passed to the deployment project when it is rendered. Allowed argument names are configured via deployment args .
images
This field specifies a list of fixed images to be used by
images.get_image(...)
.
The format is identical to the
fixed images file
.
discriminator
Specifies a discriminator which is used to uniquely identify all deployed objects on the cluster. It is added to all
objects as the value of the kluctl.io/discriminator
label. This label is then later used to identify all objects
belonging to the deployment project and target, so that Kluctl can determine which objects got orphaned and need to
be pruned. The discriminator is also used to identify all objects that need to be deleted when
kluctl delete
is called.
If no discriminator is set for a target, kluctl prune and kluctl delete are not supported.
The discriminator can be a
template
which is rendered at project loading time. While
rendering, only the target
and args
are available as global variables in the templating context.
The rendered discriminator should be unique on the target cluster to avoid mis-identification of objects from other
deployments or targets. It’s good practice to prefix the discriminator with a project name and at least use the target
name to make it unique. Example discriminator to achieve this: my-project-name-{{ target.name }}
.
If a target is meant to be deployed multiple times, e.g. by using external
arguments
, the external
arguments should be taken into account as well. Example: my-project-name-{{ target.name }}-{{ args.environment_name }}
.
A default discriminator can also be specified which is used whenever a target has no discriminator configured.
6.2 - Deployments
A deployment project is a collection of deployment items and sub-deployments. Deployment items are usually Kustomize deployments, but can also integrate Helm Charts .
Basic structure
The following visualization shows the basic structure of a deployment project. The entry point of every deployment
project is the deployment.yaml
file, which then includes further sub-deployments and kustomize deployments. It also
provides some additional configuration required for multiple kluctl features to work as expected.
As can be seen, sub-deployments can include other sub-deployments, allowing you to structure the deployment project as you need.
Each level in this structure recursively adds tags to each deployed resource, allowing you to control precisely what is deployed in the future.
Some visualized files/directories have links attached, follow them to get more information.
-- project-dir/
   |-- deployment.yaml
   |-- .gitignore
   |-- kustomize-deployment1/
   |   |-- kustomization.yaml
   |   `-- resource.yaml
   |-- sub-deployment/
   |   |-- deployment.yaml
   |   |-- kustomize-deployment2/
   |   |   |-- resource1.yaml
   |   |   `-- ...
   |   |-- kustomize-deployment3/
   |   |   |-- kustomization.yaml
   |   |   |-- resource1.yaml
   |   |   |-- resource2.yaml
   |   |   |-- patch1.yaml
   |   |   `-- ...
   |   |-- kustomize-with-helm-deployment/
   |   |   |-- charts/
   |   |   |   `-- ...
   |   |   |-- kustomization.yaml
   |   |   |-- helm-chart.yaml
   |   |   `-- helm-values.yaml
   |   `-- subsub-deployment/
   |       |-- deployment.yaml
   |       |-- ... kustomize deployments
   |       `-- ... subsubsub deployments
   `-- sub-deployment/
       `-- ...
Order of deployments
Deployments are done in parallel, meaning that there are usually no order guarantees. The only way to somehow control order, is by placing barriers between kustomize deployments. You should however not overuse barriers, as they negatively impact the speed of kluctl.
Plain Kustomize
It’s also possible to use Kluctl on plain Kustomize deployments. Simply run kluctl deploy
from inside the
folder of your kustomization.yaml
. If you also don’t have a .kluctl.yaml
, you can also work without targets.
Please note that pruning and deletion are not supported in this mode.
6.2.1 - Deployments
The deployment.yaml
file is the entrypoint for the deployment project. Included sub-deployments also provide a
deployment.yaml
file with the same structure as the initial one.
An example deployment.yaml
looks like this:
deployments:
- path: nginx
- path: my-app
- include: monitoring
commonLabels:
  my.prefix/target: "{{ target.name }}"
  my.prefix/deployment-project: my-deployment-project
The following sub-chapters describe the available fields in the deployment.yaml
deployments
deployments
is a list of deployment items. Multiple deployment types are supported, which is documented further down.
Individual deployments are performed in parallel, unless a
barrier
is encountered which causes kluctl to
wait for all previous deployments to finish.
Deployments can also be conditional by using the when field.
Simple deployments
Simple deployments are specified via path
and are expected to be directories with Kubernetes manifests inside.
Kluctl will internally generate a kustomization.yaml from these manifests and treat the deployment item the same way
as it would treat a
Kustomize deployment
.
Example:
deployments:
- path: path/to/manifests
Kustomize deployments
When the deployment item directory specified via path
contains a kustomization.yaml
, Kluctl will use this file
instead of generating one.
Please see Kustomize integration for more details.
Example:
deployments:
- path: path/to/deployment1
- path: path/to/deployment2
  waitReadiness: true
The path
must point to a directory relative to the directory containing the deployment.yaml
. Only directories
that are part of the kluctl project are allowed. The directory must contain a valid kustomization.yaml
.
waitReadiness
is optional and if set to true
instructs kluctl to wait for readiness of each individual object
of the kustomize deployment. Readiness is defined in
readiness
.
Includes
Specifies a sub-deployment project to be included. The included sub-deployment project will inherit many properties of the parent project, e.g. tags, commonLabels and so on.
Example:
deployments:
- include: path/to/sub-deployment
The path
must point to a directory relative to the directory containing the deployment.yaml
. Only directories
that are part of the kluctl project are allowed. The directory must contain a valid deployment.yaml
.
Git includes
Specifies an external git project to be included. The project is included the same way as regular includes, except that the included project can not use/load templates from the parent project. An included project might also include further git projects.
Simple example:
deployments:
- git: git@github.com/example/example.git
This will clone the git repository at git@github.com/example/example.git
, checkout the default branch and include it
into the current project.
Advanced Example:
deployments:
- git:
    url: git@github.com/example/example.git
    ref: my-branch
    subDir: some/sub/dir
The url specifies the Git url to be cloned and checked out. ref
is optional and specifies the branch or tag to be used.
If ref
is omitted, the default branch will be checked out. subDir
is optional and specifies the sub directory inside
the git repository to include.
Barriers
Causes kluctl to wait until all previous kustomize deployments have been applied. This is useful when upcoming deployments need the current or previous deployments to be finished beforehand. Previous deployments also include all sub-deployments from included deployments.
Example:
deployments:
- path: kustomizeDeployment1
- path: kustomizeDeployment2
- include: subDeployment1
- barrier: true
# At this point, it's ensured that kustomizeDeployment1, kustomizeDeployment2 and all sub-deployments from
# subDeployment1 are fully deployed.
- path: kustomizeDeployment3
To create a barrier with a custom message, include the message parameter when creating the barrier. The message parameter accepts a string value that represents the custom message.
Example:
deployments:
- path: kustomizeDeployment1
- path: kustomizeDeployment2
- include: subDeployment1
- barrier: true
message: "Waiting for subDeployment1 to be finished"
# At this point, it's ensured that kustomizeDeployment1, kustomizeDeployment2 and all sub-deployments from
# subDeployment1 are fully deployed.
- path: kustomizeDeployment3
If no custom message is provided, the barrier will be created without a specific message, and the default behavior will be applied.
When viewing the kluctl deploy
status, the custom message, if provided, will be displayed along with default barrier information.
deleteObjects
Causes kluctl to delete matching objects, specified by a list of group/kind/name/namespace dictionaries. The order/parallelization of deletion is identical to the order and parallelization of normal deployment items, meaning that it happens in parallel by default until a barrier is encountered.
Example:
deployments:
- deleteObjects:
    - group: apps
      kind: DaemonSet
      namespace: kube-system
      name: kube-proxy
- barrier: true
- path: my-cni
The above example shows how to delete the kube-proxy DaemonSet before installing a CNI (e.g. Cilium in proxy-replacement mode).
deployments common properties
All entries in deployments
can have the following common properties:
vars (deployment item)
A list of variable sets to be loaded into the templating context, which is then available in all deployment items and sub-deployments .
See templating for more details.
Example:
deployments:
- path: kustomizeDeployment1
  vars:
    - file: vars1.yaml
    - values:
        var1: value1
- path: kustomizeDeployment2
# all sub-deployments of this include will have the given variables available in their Jinja2 context.
- include: subDeployment1
  vars:
    - file: vars2.yaml
when
Each deployment item can be conditional with the help of the when
field. It must be set to a
Jinja2 based expression
that evaluates to a boolean.
Example:
deployments:
- path: item1
- path: item2
  when: my.var == "my-value"
tags (deployment item)
A list of tags the deployment should have. See tags for more details. For includes, this means that all sub-deployments will get these tags applied to. If not specified, the default tags logic as described in tags is applied.
Example:
deployments:
- path: kustomizeDeployment1
  tags:
    - tag1
    - tag2
- path: kustomizeDeployment2
  tags:
    - tag3
# all sub-deployments of this include will get tag4 applied
- include: subDeployment1
  tags:
    - tag4
alwaysDeploy
Forces a deployment to be included every time, ignoring inclusion/exclusion sets from the command line. See Deploying with tag inclusion/exclusion for details.
deployments:
- path: kustomizeDeployment1
  alwaysDeploy: true
- path: kustomizeDeployment2
Please note that alwaysDeploy
will also cause
kluctl render
to always render the resources.
skipDeleteIfTags
Forces exclusion of a deployment whenever inclusion/exclusion tags are specified via command line. See Deleting with tag inclusion/exclusion for details.
deployments:
- path: kustomizeDeployment1
  skipDeleteIfTags: true
- path: kustomizeDeployment2
onlyRender
Causes a path to be rendered only but not treated as a deployment item. This can be useful if you, for example, want to use Kustomize components which you'd refer to from other deployment items.
deployments:
- path: component
  onlyRender: true
- path: kustomizeDeployment2
vars (deployment project)
A list of variable sets to be loaded into the templating context, which is then available in all deployment items and sub-deployments .
See templating for more details.
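A minimal sketch of project-level variables in a deployment.yaml (file and variable names are illustrative); everything loaded here is available to all deployment items and includes below:
vars:
- file: common-config.yaml
- values:
    cluster_type: test
deployments:
- path: nginx
- include: sub-deployment1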
commonLabels
A dictionary of labels and values to be added to all resources deployed by any of the deployment items in this deployment project.
Consider the following example deployment.yaml
:
deployments:
- path: nginx
- include: sub-deployment1
commonLabels:
  my.prefix/target: {{ target.name }}
  my.prefix/deployment-name: my-deployment-project-name
  my.prefix/label-1: value-1
  my.prefix/label-2: value-2
Every resource deployed by the kustomize deployment nginx
will now get the four provided labels attached. All included
sub-deployment projects (e.g. sub-deployment1
) will also recursively inherit these labels and pass them further
down.
In case an included sub-deployment project also contains commonLabels
, both dictionaries of commonLabels are merged
inside the included sub-deployment project. In case of conflicts, the included common labels override the inherited.
Please note that these commonLabels
are not related to commonLabels
supported in kustomization.yaml
files. It was
decided to not rely on this feature but instead attach labels manually to resources right before sending them to
kubernetes. This is due to an
implementation detail
in
kustomize which causes commonLabels
to also be applied to label selectors, which makes otherwise editable resources
read-only when it comes to commonLabels
.
commonAnnotations
A dictionary of annotations and values to be added to all resources deployed by any of the deployment items in this deployment project.
commonAnnotations
are handled the same as
commonLabels
in regard to inheriting, merging and overriding.
overrideNamespace
A string that is used as the default namespace for all kustomize deployments which don’t have a namespace
set in their
kustomization.yaml
.
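A short sketch (the namespace name is illustrative); both deployment items below would end up in the my-app namespace unless their own kustomization.yaml sets a namespace:
deployments:
- path: backend
- path: frontend
overrideNamespace: my-app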
tags (deployment project)
A list of common tags which are applied to all kustomize deployments and sub-deployment includes.
See tags for more details.
ignoreForDiff
A list of objects and fields to ignore while performing diffs. Consider the following example:
deployments:
- ...
ignoreForDiff:
- group: apps
  kind: Deployment
  namespace: my-namespace
  name: my-deployment
  fieldPath: spec.replicas
This will remove the spec.replicas
field from every resource that matches the object.
group
, kind
, namespace
and name
can be omitted, which results in all objects matching. fieldPath
must be a
valid
JSON Path
. fieldPath
may also be a list of JSON paths.
Using regex expressions instead of JSON Paths is also supported:
deployments:
- ...
ignoreForDiff:
- group: apps
  kind: Deployment
  namespace: my-namespace
  name: my-deployment
  fieldPathRegex: metadata.labels.my-label-.*
As an alternative, annotations can be used to control diff behavior of individual resources.
6.2.2 - Kustomize Integration
kluctl uses kustomize to render final resources. This means, that the finest/lowest level in kluctl is represented with kustomize deployments. These kustomize deployments can then perform further customization, e.g. patching and more. You can also use kustomize to easily generate ConfigMaps or secrets from files.
Generally, everything that is possible via kustomization.yaml
is thus possible in kluctl.
We advise to read the kustomize reference . You can also look into the official kustomize example .
One way you might use this is to Kustomize a set of manifests from an external project.
For example:
# deployment.yml
deployments:
- git: git@github.com/example/example.git
  onlyRender: true
- path: kustomize_example
# kustomize_example/kustomization.yml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../example
patches:
- # your patches here
6.2.3 - Container Images
There are usually 2 different scenarios where Container Images need to be specified:
- When deploying third party applications like nginx, redis, … (e.g. via the
Helm integration
).
- In this case, image versions/tags rarely change, and if they do, this is an explicit change to the deployment. This means it’s fine to have the image versions/tags directly in the deployment manifests.
- When deploying your own applications.
- In this case, image versions/tags might change very rapidly, sometimes multiple times per hour. Having these versions/tags directly in the deployment manifests can easily lead to commit spam and hard to manage multi-environment deployments.
kluctl offers a better solution for the second case.
images.get_image()
This is solved via a templating function that is available in all templates/resources. The function is part of the global
images
object and expects the following arguments:
images.get_image(image)
- image
- The image name/repository. It is looked up in the list of fixed images.
The function will lookup the given image in the list of fixed images and return the last match.
Example deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
        - name: c1
          image: "{{ images.get_image('registry.gitlab.com/my-group/my-project') }}"
Fixed images
Fixed images can be configured via multiple methods:
- Command line argument
--fixed-image
- Command line argument
--fixed-images-file
- Target definition
- Global ‘images’ variable
Command line argument --fixed-image
You can pass fixed images configuration via the --fixed-image
argument
.
Due to
environment variables support
in the CLI, you can also use the
environment variable KLUCTL_FIXED_IMAGE_XXX
to configure fixed images.
The format of the --fixed-image
argument is --fixed-image image<:namespace:deployment:container>=result
. The simplest
example is --fixed-image registry.gitlab.com/my-group/my-project=registry.gitlab.com/my-group/my-project:1.1.2
.
Command line argument --fixed-images-file
You can also configure fixed images via a yaml file by using --fixed-images-file /path/to/fixed-images.yaml
.
Example file:
images:
- image: registry.gitlab.com/my-group/my-project
  resultImage: registry.gitlab.com/my-group/my-project:1.1.2
The file must contain a single root list named images
with each entry having the following form:
images:
- image: <image_name>
  resultImage: <result_image>
  # optional fields
  namespace: <namespace>
  deployment: <kind>/<name>
  container: <name>
image
and resultImage
are required. All the other fields are optional and allow to specify in detail for which
object the fixed image applies.
Target definition
The
target
definition can optionally specify an images
field that can
contain the same fixed images configuration as found in the --fixed-images-file
file.
Global ‘images’ variable
You can also define a global variable named images
via one of the
variable sources
.
This variable must be a list of the same format as the images list in the --fixed-images-file
file.
This option allows you to externalize fixed images configuration, meaning that you can maintain image versions outside the deployment project, e.g. in another Git repository .
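As a sketch, such a global images variable could be provided through an inline values variable source in a deployment.yaml (image names and tags are illustrative):
vars:
- values:
    images:
      - image: registry.gitlab.com/my-group/my-project
        resultImage: registry.gitlab.com/my-group/my-project:1.1.2
deployments:
- ...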
6.2.4 - Helm Integration
kluctl offers a simple-to-use Helm integration, which allows you to reuse many common third-party Helm Charts.
The integration is split into 2 parts/steps/layers. The first is the management and pulling of the Helm Charts, while the second part handles configuration/customization and deployment of the chart.
It is recommended to pre-pull Helm Charts with
kluctl helm-pull
, which will store the
pulled charts inside .helm-charts
of the project directory. It is however also possible (but not
recommended) to skip the pre-pulling phase and let kluctl pull Charts on-demand.
When pre-pulling Helm Charts, you can also add the resulting Chart contents into version control. This is actually recommended as it ensures that the deployment will always behave the same. It also allows pull-request based reviews on third-party Helm Charts.
How it works
Helm charts are not directly installed via Helm. Instead, kluctl renders the Helm Chart into a single file and then
hands over the rendered yaml to
kustomize
. Rendering is done in combination with a provided
helm-values.yaml
, which contains the necessary values to configure the Helm Chart.
The resulting rendered yaml is then referred by your kustomization.yaml
, from which point on the
kustomize integration
takes over. This means, that you can perform all desired customization (patches, namespace override, …) as if you
provided your own resources via yaml files.
Helm hooks
Helm Hooks are implemented by mapping them to kluctl hooks , based on the following mapping table:
Helm hook | kluctl hook |
---|---|
pre-install | pre-deploy-initial |
post-install | post-deploy-initial |
pre-delete | Not supported |
post-delete | Not supported |
pre-upgrade | pre-deploy-upgrade |
post-upgrade | post-deploy-upgrade |
pre-rollback | Not supported |
post-rollback | Not supported |
test | Not supported |
Please note that this is a best effort approach and not 100% compatible with how Helm would run hooks.
helm-chart.yaml
The helm-chart.yaml
defines where to get the chart from, which version should be pulled, the rendered output file name,
and a few more Helm options. After this file is added to your project, you need to invoke the helm-pull
command
to pull the Helm Chart into your local project. It is advised to put the pulled Helm Chart into version control, so
that deployments will always be based on the exact same Chart (Helm does not guarantee this when pulling).
Example helm-chart.yaml
:
helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 12.1.1
  updateConstraints: ~12.1.0
  skipUpdate: false
  skipPrePull: false
  releaseName: redis-cache
  namespace: "{{ my.jinja2.var }}"
  output: helm-rendered.yaml # this is optional
When running the helm-pull
command, it will search for all helm-chart.yaml
files in your project and then pull the
chart from the specified repository with the specified version. The pulled chart will then be located in the sub-directory
charts
below the same directory as the helm-chart.yaml.
The same filename that was specified in output
must then be referred to in a kustomization.yaml
as a normal local
resource. If output
is omitted, the default value helm-rendered.yaml
is used and must also be referenced in
kustomization.yaml
.
helmChart
inside helm-chart.yaml
supports the following fields:
repo
The url to the Helm repository where the Helm Chart is located. You can use hub.helm.sh to search for repositories and charts and then use the repos found there.
OCI based repositories are also supported, for example:
helmChart:
  repo: oci://r.myreg.io/mycharts/pepper
  chartName: pepper
  chartVersion: 1.2.3
  releaseName: pepper
  namespace: pepper
path
As alternative to repo
, you can also specify path
. The path must point to a local Helm Chart that is relative to the
helm-chart.yaml
. The local Chart must reside in your Kluctl project.
When path
is specified, repo
, chartName
, chartVersion
and updateConstraints
are not allowed.
chartName
The name of the chart that can be found in the repository.
chartVersion
The version of the chart. Must be a valid semantic version.
updateConstraints
Specifies version constraints to be used when running helm-update . See Checking Version Constraints for details on the supported syntax.
If omitted, Kluctl will filter out pre-releases by default. Use an updateConstraints
like ~1.2.3-0
to enable
pre-releases.
skipUpdate
If set to true
, skip this Helm Chart when the
helm-update
command is called.
If omitted, defaults to false
.
skipPrePull
If set to true
, skip pre-pulling of this Helm Chart when running
helm-pull
. This will
also enable pulling on-demand when the deployment project is rendered/deployed.
releaseName
The name of the Helm Release.
namespace
The namespace that this Helm Chart is going to be deployed to. Please note that this should match the namespace
that you’re actually deploying the kustomize deployment to. This means, that either namespace
in kustomization.yaml
or overrideNamespace
in deployment.yaml
should match the namespace given here. The namespace should also be existing
already at the point in time when the kustomize deployment is deployed.
output
This is the file name into which the Helm Chart is rendered into. Your kustomization.yaml
should include this same
file. The file should not exist in your project, as it is created on-the-fly while deploying.
skipCRDs
If set to true
, kluctl will pass --skip-crds
to Helm when rendering the deployment. If set to false
(which is
the default), kluctl will pass --include-crds
to Helm.
helm-values.yaml
This file should be present when you need to pass custom Helm values to Helm while rendering the deployment. Please read the documentation of the used Helm Charts for details on what is supported.
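For the redis chart from the example above, a helm-values.yaml could look like the following. The available keys depend entirely on the chart and its version, so these values are purely illustrative:
replica:
  replicaCount: 2
auth:
  enabled: true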
Updates to helm-charts
In case a Helm Chart needs to be updated, you can either do this manually by replacing the
chartVersion
value in helm-chart.yaml
and then calling the
helm-pull
command or by simply invoking
helm-update
with --upgrade
and/or --commit
being set.
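For example, to let Kluctl bump all helm-chart.yaml files to the newest versions allowed by their updateConstraints, re-pull the charts and commit each upgrade, an invocation along these lines can be used (see the helm-update command documentation for the exact flags):
kluctl helm-update --upgrade --commit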
Private Chart Repositories
It is also possible to use private chart repositories. There are currently two options to provide Helm Repository credentials to Kluctl.
Use helm repo add --username xxx --password xxx
before
Kluctl will try to find known repositories that are managed by the Helm CLI and then try to reuse the credentials of
these. The repositories are identified by the URL of the repository, so it doesn’t matter what name you used when you
added the repository to Helm. The same method can be used for client certificate based authentication (--key-file
in helm repo add
).
Use the --username/--password arguments in kluctl helm-pull
See the
helm-pull command
. You can control repository credentials
via --username
, --password
and --key-file
. Each argument must be in the form credentialsId:value
, where
the credentialsId
must match the id specified in the helm-chart.yaml
. Example:
helmChart:
  repo: https://raw.githubusercontent.com/example/private-helm-repo/main/
  credentialsId: private-helm-repo
  chartName: my-chart
  chartVersion: 1.2.3
  releaseName: my-chart
  namespace: default
When credentialsId is specified, Kluctl will require you to specify --username=private-helm-repo:my-username
and
--password=private-helm-repo:my-password
. You can also specify a client-side certificate instead via
--key-file=private-helm-repo:/path/to/cert
.
Multiple Helm Charts can use the same credentialsId
.
Environment variables can also be used instead of arguments. See Environment Variables for details.
Templating
Both helm-chart.yaml
and helm-values.yaml
are rendered by the
templating engine
before they
are actually used. This means, that you can use all available Jinja2 variables at that point, which can for example be
seen in the above helm-chart.yaml
example for the namespace.
There is however one exception that leads to a small limitation. When helm-pull
reads the helm-chart.yaml
, it does
NOT render the file via the templating engine. This is because it can not know how to properly render the template as it
has no information about targets (there are no
arguments set) at that point.
This exception leads to the limitation that the helm-chart.yaml
MUST be valid yaml even in case it is not rendered
via the templating engine. This makes using control statements (if/for/…) impossible in this file. It also makes it
a requirement to use quotes around values that contain templates (e.g. the namespace in the above example).
helm-values.yaml
is not subject to these limitations as it is only interpreted while deploying.
6.2.5 - SOPS Integration
Kluctl integrates natively with SOPS . Kluctl is able to decrypt all resources referenced by Kustomize deployment items (including simple deployments ). In addition, Kluctl will also decrypt all variable sources of the types file and git .
Kluctl assumes that you have setup sops as usual so that it knows how to decrypt these files.
Only encrypting Secrets’s data
To only encrypt the data
and stringData
fields of Kubernetes secrets, use a .sops.yaml
configuration file that uses
encrypted_regex
to filter encrypted fields:
creation_rules:
- path_regex: .*.yaml
  encrypted_regex: ^(data|stringData)$
Combining templating and SOPS
As an alternative, you can split secret values and the resulting Kubernetes resources into two different places and then use templating to use the secret values wherever needed. Example:
Write the following content into secrets/my-secrets.yaml
:
secrets:
  mySecret: secret-value
And encrypt it with SOPS:
$ sops -e -i secrets/my-secrets.yaml
Add this variables source to one of your deployments :
vars:
- file: secrets/my-secrets.yaml
deployments:
- ...
Then, in one of your deployment items define the following Secret
:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
stringData:
  secret: "{{ secrets.mySecret }}"
6.2.6 - Hooks
Kluctl supports hooks in a similar fashion as known from Helm Charts. Hooks are executed/deployed before and/or after the actual deployment of a kustomize deployment.
To mark a resource as a hook, add the kluctl.io/hook
annotation to a resource. The value of the annotation must be
a comma separated list of hook names. Possible values are described in the next chapter.
Hook types
Hook Type | Description |
---|---|
pre-deploy-initial | Executed right before the initial deployment is performed. |
post-deploy-initial | Executed right after the initial deployment is performed. |
pre-deploy-upgrade | Executed right before a non-initial deployment is performed. |
post-deploy-upgrade | Executed right after a non-initial deployment is performed. |
pre-deploy | Executed right before any (initial and non-initial) deployment is performed. |
post-deploy | Executed right after any (initial and non-initial) deployment is performed. |
A deployment is considered to be an “initial” deployment if none of the resources related to the current kustomize deployment are found on the cluster at the time of deployment.
If you need to execute hooks for every deployment, independent of its “initial” state, use
pre-deploy-initial,pre-deploy
to indicate that it should be executed all the time.
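A minimal sketch of a hook resource, here a Job that runs before every deployment and is deleted again once it succeeded (name, image and command are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrations
  annotations:
    kluctl.io/hook: pre-deploy
    kluctl.io/hook-delete-policy: hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: registry.example.com/my-app-migrations:latest
          command: ["./migrate"]
      restartPolicy: Never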
Hook deletion
Hook resources are by default deleted right before creation (if they already existed before). This behavior can be
changed by setting the kluctl.io/hook-delete-policy
to a comma separated list of the following values:
Policy | Description |
---|---|
before-hook-creation | The default behavior, which means that the hook resource is deleted right before (re-)creation. |
hook-succeeded | Delete the hook resource directly after it got “ready” |
hook-failed | Delete the hook resource when it failed to get “ready” |
Hook readiness
After each deployment/execution of the hooks that belong to a deployment stage (before/after deployment), kluctl waits for the hook resources to become “ready”. Readiness is defined here .
It is possible to disable waiting for hook readiness by setting the annotation kluctl.io/hook-wait
to “false”.
6.2.7 - Readiness
There are multiple places where kluctl can wait for “readiness” of resources, e.g. for hooks or when waitReadiness
is
specified on a deployment item. Readiness depends on the resource kind, e.g. for a Job, kluctl would wait until it
finishes successfully.
6.2.8 - Tags
Every kustomize deployment has a set of tags assigned to it. These tags are defined in multiple places, which is
documented in
deployment.yaml
. Look for the tags
field, which is available in multiple places per
deployment project.
Tags are useful when only one or more specific kustomize deployments need to be deployed or deleted.
Default tags
deployment items in deployment projects can have an optional list of tags assigned.
If this list is completely omitted, one single entry is added by default. This single entry equals to the last element
of the path
in the deployments
entry.
Consider the following example:
deployments:
- path: nginx
- path: some/subdir
In this example, two kustomize deployments are defined. The first would get the tag nginx
while the second
would get the tag subdir
.
In most cases this heuristic is enough to get proper tags with which you can work. It might however lead to strange
or even conflicting tags (e.g. subdir
is really a bad tag), in which case you’d have to explicitly set tags.
Tag inheritance
Deployment projects and deployments items inherit the tags of their parents. For example, if a deployment project
has a
tags
property defined, all deployments
entries would
inherit all these tags. Also, the sub-deployment projects included via deployment items of type
include
inherit the tags of the deployment project. These included sub-deployments also
inherit the
tags
specified by the deployment item itself.
Consider the following example deployment.yaml
:
deployments:
- include: sub-deployment1
tags:
- tag1
- tag2
- include: sub-deployment2
tags:
- tag3
- tag4
- include: subdir/subsub
Any kustomize deployment found in sub-deployment1
would now inherit tag1
and tag2
. If sub-deployment1
performs
any further includes, these would also inherit these two tags. Inheriting is additive and recursive.
The last sub-deployment project in the example is subject to the same default-tags logic as described
in
Default tags
, meaning that it will get the default tag subsub
.
Deploying with tag inclusion/exclusion
Special care needs to be taken when trying to deploy only a specific part of your deployment which requires some base resources to be deployed as well.
Imagine a large deployment is able to deploy 10 applications, but you only want to deploy one of them. When using tags
to achieve this, there might be some base resources (e.g. Namespaces) which are needed no matter if everything or just
this single application is deployed. In that case, you’d need to set
alwaysDeploy
to true
.
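As a sketch, a deployment.yaml could mark such base resources accordingly (paths and tags are illustrative), so that a tag-restricted deploy still includes them:
deployments:
- path: namespaces
  alwaysDeploy: true
- path: app1
- path: app2
Deploying with only the app1 tag included would then still deploy the namespaces item.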
Deleting with tag inclusion/exclusion
Also, in most cases, even more special care has to be taken for the same types of resources as described before.
Imagine a kustomize deployment being responsible for namespace deployments. If you now want to delete everything except
deployments that have the persistency
tag assigned, the exclusion logic would NOT exclude deletion of the namespace.
This would ultimately lead to everything being deleted, and the exclusion tag having no effect.
In such a case, you’d need to set
skipDeleteIfTags
to true
as well.
In most cases, setting alwaysDeploy
to true
also requires setting skipDeleteIfTags
to true
.
6.2.9 - Annotations
6.2.9.1 - All resources
The following annotations control the behavior of the deploy
and related commands.
Control deploy behavior
The following annotations control deploy behavior, especially in regard to conflict resolution.
kluctl.io/delete
If set to “true”, the resource will be deleted at deployment time. Kluctl will not emit an error in case the resource does not exist. A resource with this annotation does not have to be complete/valid as it is never sent to the Kubernetes api server.
kluctl.io/force-apply
If set to “true”, the whole resource will be force-applied, meaning that all fields will be overwritten in case of field manager conflicts.
kluctl.io/force-apply-field
Specifies a JSON Path for fields that should be force-applied. Matching fields will be overwritten in case of field manager conflicts.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
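For example, to force-apply two individual fields of a Deployment whose ownership was taken over by another controller (the field paths are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  annotations:
    kluctl.io/force-apply-field: spec.replicas
    kluctl.io/force-apply-field-2: spec.template.metadata.annotations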
kluctl.io/ignore-conflicts
If set to “true”, all fields of the object are going to be ignored when conflicts arise. This effectively disables the warnings that are shown when field ownership is lost.
kluctl.io/ignore-conflicts-field
Specifies a JSON Path for fields that should be ignored when conflicts arise. This effectively disables the warnings that are shown when field ownership is lost.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
Control deletion/pruning
The following annotations control how delete/prune is behaving.
kluctl.io/skip-delete
If set to “true”, the annotated resource will not be deleted when delete or prune is called.
kluctl.io/skip-delete-if-tags
If set to “true”, the annotated resource will not be deleted when delete or prune is called and inclusion/exclusion tags are used at the same time.
This tag is especially useful and required on resources that would otherwise cause cascaded deletions of resources that do not match the specified inclusion/exclusion tags. Namespaces are the most prominent example of such resources, as they most likely don’t match exclusion tags, but cascaded deletion would still cause deletion of the excluded resources.
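A typical use is on a Namespace, so that a tag-restricted delete or prune does not cascade into excluded resources (the namespace name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    kluctl.io/skip-delete-if-tags: "true"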
Control diff behavior
The following annotations control how diffs are performed.
kluctl.io/diff-name
This annotation will override the name of the object when looking for the in-cluster version of an object used for diffs. This is useful when you are forced to use new names for the same objects whenever the content changes, e.g. for all kinds of immutable resource types.
Example (filename job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-{{ load_sha256("job.yaml", 6) }}
  annotations:
    kluctl.io/diff-name: myjob
spec:
  template:
    spec:
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c", "echo hello"]
      restartPolicy: Never
Without the kluctl.io/diff-name
annotation, any change to the job.yaml
would be treated as a new object in resulting
diffs from various commands. This is due to the inclusion of the file hash in the job name. This would make it very hard
to figure out what exactly changed in an object.
With the kluctl.io/diff-name
annotation, kluctl will pick an existing job from the cluster with the same diff-name
and use it for the diff, making it a lot easier to analyze changes. If multiple objects match, the one with the youngest
creationTimestamp
is chosen.
Please note that this will not cause old objects (with the same diff-name) to be pruned. You still have to regularly prune the deployment.
kluctl.io/ignore-diff
If set to “true”, the whole resource will be ignored while calculating diffs.
kluctl.io/ignore-diff-field
Specifies a JSON Path for fields that should be ignored while calculating diffs.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
kluctl.io/ignore-diff-field-regex
Same as kluctl.io/ignore-diff-field but specifying a regular expression instead of a JSON Path.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
6.2.9.2 - Hooks
The following annotations control hook execution.
See hooks for more details.
kluctl.io/hook
Declares a resource to be a hook, which is deployed/executed as described in hooks . The value of the annotation determines when the hook is deployed/executed.
kluctl.io/hook-weight
Specifies a weight for the hook, used to determine deployment/execution order.
kluctl.io/hook-delete-policy
Defines when to delete the hook resource.
kluctl.io/hook-wait
Defines whether kluctl should wait for hook-completion.
6.2.9.3 - Validation
The following annotations influence the validate command.
validate-result.kluctl.io/xxx
If this annotation is found on a resource that is checked during validation, the key and the value of the annotation are added to the validation result, which is then returned by the validate command.
The annotation key is dynamic, meaning that all annotations that begin with validate-result.kluctl.io/
are taken
into account.
kluctl.io/validate-ignore
If this annotation is set to true
, the object will be ignored while kluctl validate
is run.
6.2.9.4 - Kustomize
Even though the kustomization.yaml
from Kustomize deployments are not really Kubernetes resources (as they are not
really deployed), they have the same structure as Kubernetes resources. This also means that the kustomization.yaml
can define metadata and annotations. Through these annotations, additional behavior on the deployment can be controlled.
Example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  annotations:
    kluctl.io/barrier: "true"
    kluctl.io/wait-readiness: "true"
resources:
- deployment.yaml
kluctl.io/barrier
If set to true
, kluctl will wait for all previous objects to be applied (but not necessarily ready). This has the
same effect as
barrier
from deployment projects.
kluctl.io/wait-readiness
If set to true
, kluctl will wait for readiness of all objects from this kustomization project. Readiness is defined
the same as in
hook readiness
.
6.3 - Templating
kluctl uses a Jinja2 Templating engine to pre-process/render every involved configuration file and resource before actually interpreting it. Only files that are explicitly excluded via .templateignore files are not rendered via Jinja2.
Generally, everything that is possible with Jinja2 is possible in kluctl configuration/resources. Please read into the Jinja2 documentation to understand what exactly is possible and how to use it.
.templateignore
In some cases it is required to exclude specific files from templating, for example when the contents conflict with
the used template engine (e.g. Go templates conflict with Jinja2 and cause errors). In such cases, you can place
a .templateignore
beside the excluded files or into a parent folder of it. The contents/format of the .templateignore
file is the same as you would use in a .gitignore
file.
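A small example .templateignore (the paths are illustrative); entries use the same syntax as a .gitignore:
# exclude Go-template based dashboards from Jinja2 rendering
grafana-dashboards/
# exclude a single file
legacy/raw-manifest.yaml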
Includes and imports
Standard Jinja2 includes and imports can be used in all templates.
The path given to include/import is searched in the directory of the root template and all its parent directories up until the project root. Please note that the search path is not altered in included templates, meaning that it will always search in the same directories even if an include happens inside a file that was included as well.
To include/import a file relative to the currently rendered file (which is not necessarily the root template), prefix
the path with ./
, e.g. use {% include "./my-relative-file.j2" %}
.
Macros
Jinja2 macros
are fully supported. When writing
macros that produce yaml resources, you must use the ---
yaml separator in case you want to produce multiple resources
in one go.
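A small sketch of a macro that emits one ConfigMap per invocation; the leading --- separator keeps the generated resources apart when the macro is called multiple times (names are illustrative):
{% macro config_map(name, value) %}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ name }}
data:
  value: "{{ value }}"
{% endmacro %}
{{ config_map("config-a", "1") }}
{{ config_map("config-b", "2") }}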
Why no Go Templating
kluctl started as a python project and was then migrated to be a Go project. In the python world, Jinja2 is the obvious choice when it comes to templating. In the Go world, of course Go Templates would be the first choice.
When the migration to Go was performed, it was a conscious and opinionated decision to stick with Jinja2 templating. The reason is that I (@codablock) believe that Go Templates are hard to read and write and at the same time quite limited in their features (without extensive work). It never felt natural to write Go Templates.
This “feeling” was confirmed by multiple users of kluctl when it started, and users described it as “relieving” to not be forced to use Go Templates.
The above is my personal experience and opinion. I’m still quite open for contributions in regard to Go Templating support, as long as Jinja2 support is kept.
6.3.1 - Predefined Variables
There are multiple variables available which are pre-defined by kluctl. These are:
args
This is a dictionary of arguments given via command line. It contains every argument defined in deployment args .
target
This is the target definition of the currently processed target. It contains all values found in the target definition, for example target.name.
images
This global object provides the dynamic images features described in images .
6.3.2 - Variable Sources
There are multiple places in deployment projects (deployment.yaml) where additional variables can be loaded into future Jinja2 contexts.
The first place where vars can be specified is the deployment root, as documented here . These vars are visible for all deployments inside the deployment project, including sub-deployments from includes.
The second place to specify variables is in the deployment items, as documented here .
The variables loaded for each entry in vars are not available inside the deployment.yaml file itself. However, each entry in vars can use all variables defined before that specific entry is processed. Consider the following example.
vars:
- file: vars1.yaml
- file: vars2.yaml
- file: optional-vars.yaml
ignoreMissing: true
- file: default-vars.yaml
noOverride: true
- file: vars3.yaml
when: some.var == "value"
vars2.yaml can now use variables that are defined in vars1.yaml. At all times, variables defined by parents of the current sub-deployment project can be used in the current vars file.
Each variable source can have the optional field ignoreMissing set to true, causing Kluctl to ignore the variable source if it cannot be found.
When specifying noOverride: true, Kluctl will not override previously loaded variables. This is useful if you want to load default values for variables.
Variables can also be loaded conditionally by specifying a condition via when: <condition>. The condition must be in the same format as described in conditional deployment items.
Different types of vars entries are possible:
file
This loads variables from a yaml file. Assume the following yaml file with the name vars1.yaml:
my_vars:
a: 1
b: "b"
c:
- l1
- l2
This file can be loaded via:
vars:
- file: vars1.yaml
After which all included deployments and sub-deployments can use the jinja2 variables from vars1.yaml.
Kluctl also supports variable files encrypted with SOPS. See the SOPS integration for more details.
values
An inline definition of variables. Example:
vars:
- values:
a: 1
b: c
These variables can then be used in all deployments and sub-deployments.
git
This loads variables from a git repository. Example:
vars:
- git:
url: ssh://git@github.com/example/repo.git
ref: my-branch
path: path/to/vars.yaml
Kluctl also supports variable files encrypted with SOPS. See the SOPS integration for more details.
clusterConfigMap
Loads a configmap from the target’s cluster and loads the specified key’s value into the templating context. The value is treated and loaded as YAML and thus can either be a simple value or a complex nested structure. In case of a simple value (e.g. a number), you must also specify targetPath.
The referred ConfigMap must already exist while the Kluctl project is loaded, meaning that it is not possible to use a ConfigMap that is deployed as part of the Kluctl project itself.
Assume the following ConfigMap to be already deployed to the target cluster:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-vars
namespace: my-namespace
data:
vars: |
a: 1
b: "b"
c:
- l1
- l2
This ConfigMap can be loaded via:
vars:
- clusterConfigMap:
name: my-vars
namespace: my-namespace
key: vars
The following example uses a simple value:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-vars
namespace: my-namespace
data:
value: 123
This ConfigMap can be loaded via:
vars:
- clusterConfigMap:
name: my-vars
namespace: my-namespace
key: value
targetPath: deep.nested.path
clusterSecret
Same as clusterConfigMap, but for secrets.
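Assuming the same fields as clusterConfigMap, a minimal example could look like this (names are illustrative):
vars:
  - clusterSecret:
      name: my-secret-vars
      namespace: my-namespace
      key: vars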
http
The http variables source allows loading variables from an arbitrary HTTP resource by performing a GET (or any other configured HTTP method) on the URL. Example:
vars:
- http:
url: https://example.com/path/to/my/vars
The above source will load a variables file from the given URL. The file is expected to be in yaml or json format.
The following additional properties are supported for http sources:
method
Specifies the HTTP method to be used when requesting the given resource. Defaults to GET.
body
The body to send along with the request. If not specified, nothing is sent.
headers
A map of key/value pairs representing the header entries to be added to the request. If not specified, nothing is added.
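Combining these properties, a hypothetical POST-based source could look like this; the URL, body and header values are purely illustrative:
vars:
  - http:
      url: https://example.com/path/to/my/vars
      method: POST
      body: '{"scope": "deployment-vars"}'
      headers:
        Content-Type: application/json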
jsonPath
Can be used to select a nested element from the yaml/json document returned by the HTTP request. This is useful in case some REST api is used which does not directly return the variables file. Example:
vars:
- http:
url: https://example.com/path/to/my/vars
jsonPath: $[0].data
The above example would successfully use the following json document as variables source:
[{"data": {"vars": {"var1": "value1"}}}]
Authentication
Kluctl currently supports BASIC and NTLM authentication. It will prompt for credentials when needed.
awsSecretsManager
AWS Secrets Manager integration. Loads a variables YAML from an AWS Secrets Manager secret. The secret can either be specified via an ARN or via a secretName and region combination. An existing AWS config profile can also be specified.
The secrets stored in AWS Secrets manager must contain a valid yaml or json file.
Example using an ARN:
vars:
- awsSecretsManager:
secretName: arn:aws:secretsmanager:eu-central-1:12345678:secret:secret-name-XYZ
profile: my-prod-profile
Example using a secret name and region:
vars:
- awsSecretsManager:
secretName: secret-name
region: eu-central-1
profile: my-prod-profile
The advantage of the latter is that the auto-generated suffix in the ARN (which might not be known at the time of writing the configuration) doesn’t have to be specified.
vault
Integration for Vault by HashiCorp, using token-based authentication. The address and the path to the secret can be configured. The implementation was tested with the KV Secrets Engine.
Example using vault:
vars:
- vault:
address: http://localhost:8200
path: secret/data/simple
Before deploying please make sure that you have access to vault. You can do this for example by setting the environment variable VAULT_TOKEN.
systemEnvVars
Load variables from environment variables. Children of systemEnvVars
can be arbitrary yaml, e.g. dictionaries or lists.
The leaf values are used to get a value from the system environment.
Example:
vars:
- systemEnvVars:
var1: ENV_VAR_NAME1
someDict:
var2: ENV_VAR_NAME2
someList:
- var3: ENV_VAR_NAME3
The above example will make 3 variables available: var1, someDict.var2 and someList[0].var3, each having the values of the environment variables specified by the leaf values.
All specified environment variables must be set before calling kluctl unless a default value is set. Default values can be set by using the ENV_VAR_NAME:default-value form.
Example:
vars:
- systemEnvVars:
var1: ENV_VAR_NAME4:defaultValue
The above example will set the variable var1 to defaultValue in case ENV_VAR_NAME4 is not set.
All values retrieved from environment variables (or specified as default values) will be treated as YAML, meaning that integers and booleans will be treated as integers/booleans. If you want to enforce strings, encapsulate the values in quotes.
Example:
vars:
- systemEnvVars:
var1: ENV_VAR_NAME5:'true'
The above example will treat true as a string instead of a boolean. When the environment variable is set outside kluctl, it should also contain the quotes. Please note that your shell might require escaping to properly pass quotes.
6.3.3 - Filters
In addition to the builtin Jinja2 filters , kluctl provides a few additional filters:
b64encode
Encodes the input value as base64. Example: {{ "test" | b64encode }} will result in dGVzdA==.
b64decode
Decodes an input base64 encoded string. Example: {{ my.source.var | b64decode }}.
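A typical use case for b64encode is filling the data of a Secret from an existing variable; my_vars.password is an assumed variable here:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  password: {{ my_vars.password | b64encode }}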
from_yaml
Parses a yaml string and returns an object. Please note that json is valid yaml, meaning that you can also use this filter to parse json.
to_yaml
Converts a variable/object into its yaml representation. Please note that in most cases the resulting string will not
be properly indented, which will require you to also use the indent
filter. Example:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
config.yaml: |
{{ my_config | to_yaml | indent(4) }}
to_json
Same as to_yaml, but with json as output. Please note that json is always valid yaml, meaning that you can also use to_json in yaml files. Consider the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
spec:
containers:
- name: c1
image: my-image
env: {{ my_list_of_env_entries | to_json }}
This would render json into a yaml file, which is still a valid yaml file. Compare this to how this would have to be solved with to_yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
spec:
containers:
- name: c1
image: my-image
env:
{{ my_list_of_env_entries | to_yaml | indent(10) }}
The required indent filter is the part that makes this error-prone and hard to maintain. Consider using to_json whenever you can.
render
Renders the input string with the current Jinja2 context. Example:
{% set a="{{ my_var }}" %}
{{ a | render }}
sha256(digest_len)
Calculates the sha256 digest of the input string. Example:
{{ "some-string" | sha256 }}
digest_len is an optional parameter that allows limiting the length of the returned hex digest. Example:
{{ "some-string" | sha256(6) }}
slugify
Slugify a string based on python-slugify .
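For example, {{ "Hello, World!" | slugify }} should render as hello-world.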
6.3.4 - Functions
In addition to the provided builtin global functions , kluctl also provides a few global functions:
load_template(file)
Loads the given file into memory, renders it with the current Jinja2 context and then returns it as a string. Example:
{% set a=load_template('file.yaml') %}
{{ a }}
load_template uses the same path searching rules as described in includes/imports.
load_sha256(file, digest_len)
Loads the given file into memory, renders it and calculates the sha256 hash of the result.
The filename given to load_sha256 is treated the same as in load_template. Recursive loading/calculating of hashes is allowed and is solved by replacing load_sha256 invocations with currently loaded templates with dummy strings.
This also allows to calculate the hash of the currently rendered template, for example:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config-{{ load_sha256("configmap.yaml") }}
data:
digest_len is an optional parameter that allows limiting the length of the returned hex digest.
get_var(field_path, default)
Convenience method to navigate through the current context variables via a JSON Path . Let’s assume you currently have these variables defined (e.g. via vars ):
my:
deep:
var: value
Then {{ get_var('my.deep.var', 'my-default') }} would return value.
When any of the elements inside the field path are non-existent, the given default value is returned instead.
The field_path parameter can also be a list of paths, which are then tried one after another, returning the first result that gives a value that is not None. For example, {{ get_var(['non.existing.var', 'my.deep.var'], 'my-default') }} would also return value.
merge_dict(d1, d2)
Clones d1 and then recursively merges d2 into it and returns the result. Values inside d2 will override values in d1.
update_dict(d1, d2)
Same as merge_dict, but merging is performed in-place into d1.
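A minimal sketch of how merge_dict can be used inside a template (variable names are illustrative):
{% set defaults = {"replicas": 1, "image": "nginx"} %}
{% set overrides = {"replicas": 3} %}
{% set merged = merge_dict(defaults, overrides) %}
replicas: {{ merged.replicas }}
image: {{ merged.image }}
This renders replicas: 3 and image: nginx, as values from the second dictionary override the first.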
raise(msg)
Raises a python exception with the given message. This causes the current command to abort.
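This can, for example, be used to guard against unsupported configuration; args.environment is an assumed argument here:
{% if args.environment not in ["test", "prod"] %}
{{ raise("unsupported environment: " + args.environment) }}
{% endif %}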
debug_print(msg)
Prints a line to stderr.
time.now()
Returns the current time. The returned object has the following members:
member | description |
---|---|
t.as_timezone(tz) | Converts and returns the time t in the given timezone. Example: {{ time.now().as_timezone("Europe/Berlin") }} |
t.weekday() | Returns the time’s weekday. 0 means Monday and 6 means Sunday. |
t.hour() | Returns the time’s hour from 0-23. |
t.minute() | Returns the time’s minute from 0-59. |
t.second() | Returns the time’s second from 0-59. |
t.nanosecond() | Returns the time’s nanosecond from 0-999999999. |
t + delta | Adds a delta to t . Example: {{ time.now() + time.second * 10 }} |
t - delta | Subtracts a delta from t . Example: {{ time.now() - time.second * 10 }} |
t1 < t2, t1 >= t2, … | Time objects can be compared to other time objects. Example: {% if time.now() < time.parse_iso("2022-10-01T10:00") %}...{% endif %} All logical operators are supported. |
time.utcnow()
Returns the current time in UTC. The object has the same members as described in time.now() .
time.parse_iso(iso_time_str)
Parse the given string and return a time object. The string must be in ISO time. The object has the same members as described in time.now() .
time.second, time.minute, time.hour
Represents a time delta to be used with t + delta and t - delta. Example:
{{ time.now() + time.minute * 10 }}
6.4 - GitOps
GitOps in Kluctl is implemented through the Kluctl Controller, which must be installed to your target cluster.
The Kluctl Controller is a Kubernetes operator which implements the KluctlDeployment custom resource. This resource allows defining a Kluctl deployment that should be constantly reconciled (re-deployed) whenever the deployment changes.
Motivation and Philosophy
Kluctl tries its best to implement all its features via Kluctl projects , meaning that the deployments are, at least theoretically, deployable from the CLI at all times. The Kluctl Controller does not add functionality on top of that and thus does not couple your deployments to a running controller.
Instead, the KluctlDeployment
custom resource acts as an interface to the deployment. It tries to offer the same
functionality and options as offered by the CLI, but through a custom resource instead of a CLI invocation.
As an example, arguments passed via -a arg=value can be passed to the custom resource via the spec.args field. The same applies to options like --dry-run, which corresponds to spec.dryRun: true in the custom resource. Check the documentation of KluctlDeployment for more such options.
Installation
Installation instructions can be found here
Design
The reconciliation process consists of multiple steps which are constantly repeated:
- clone the root Kluctl project via Git
- prepare the Kluctl deployment by rendering the whole deployment
- deploy the specified target via kluctl deploy if the rendered resources changed
- prune orphaned objects via kluctl prune
- validate the deployment status via kluctl validate
Reconciliation is performed on a configurable interval. A single reconciliation iteration will first clone and prepare the project. Only when the rendered resources indicate a change (tracked internally via a hash) will the controller initiate a deployment. After the deployment, the controller will also perform pruning (only if prune: true is set).
When the KluctlDeployment is removed from the cluster, the controller can also delete all resources belonging to that deployment. This will only happen if delete: true is set.
Deletion and pruning is based on the discriminator of the given target.
A KluctlDeployment can be suspended. While suspended, the controller will skip reconciliation, including deletion and pruning.
The API design of the controller can be found at kluctldeployment.gitops.kluctl.io/v1beta1 .
Example
After installing the Kluctl Controller, we can create a KluctlDeployment that automatically deploys the Microservices Demo.
Create a KluctlDeployment that uses the demo project source to deploy the test
target to the same cluster that the
controller runs on.
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: microservices-demo-test
namespace: kluctl-system
spec:
interval: 10m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
timeout: 2m
target: test
context: default
prune: true
This example will deploy a fully-fledged microservices application with multiple backend services, frontends and databases, all via one single KluctlDeployment.
To deploy the same Kluctl project to another target (e.g. prod), simply create the following resource.
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: microservices-demo-prod
namespace: kluctl-system
spec:
interval: 10m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
timeout: 2m
target: prod
context: default
prune: true
6.4.1 - Metrics
6.4.1.1 - v1beta1 metrics
Prometheus Metrics
The controller exports several metrics in the OpenMetrics compatible format . They can be scraped by all sorts of monitoring solutions (e.g. Prometheus) or stored in a database. Because the controller is based on controller-runtime , all the default metrics as well as the following controller-specific custom metrics are exported:
6.4.1.1.1 - Metrics of the KluctlDeployment Controller
Exported Metrics References
Metrics name | Type | Description |
---|---|---|
deployment_duration_seconds | Histogram | How long a single deployment takes in seconds. |
number_of_changed_objects | Gauge | How many objects have been changed by a single deployment. |
number_of_deleted_objects | Gauge | How many objects have been deleted by a single deployment. |
number_of_errors | Gauge | How many errors are related to a single deployment. |
number_of_images | Gauge | Number of images of a single deployment. |
number_of_orphan_objects | Gauge | How many orphans are related to a single deployment. |
number_of_warnings | Gauge | How many warnings are related to a single deployment. |
prune_duration_seconds | Histogram | How long a single prune takes in seconds. |
validate_duration_seconds | Histogram | How long a single validate takes in seconds. |
deployment_interval_seconds | Gauge | The configured deployment interval of a single deployment. |
dry_run_enabled | Gauge | Is dry-run enabled for a single deployment. |
last_object_status | Gauge | Last object status of a single deployment. Zero means failure and one means success. |
prune_enabled | Gauge | Is pruning enabled for a single deployment. |
delete_enabled | Gauge | Is deletion enabled for a single deployment. |
source_spec | Gauge | The configured source spec of a single deployment exported via labels. |
6.4.2 - Specs
6.4.2.1 - v1beta1 specs
gitops.kluctl.io/v1beta1
This is the v1beta1 API specification for defining continuous delivery pipelines of Kluctl Deployments.
Specification
6.4.2.1.1 - KluctlDeployment
The KluctlDeployment API defines a deployment of a target from a Kluctl Project.
Example
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: microservices-demo-prod
spec:
interval: 5m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
timeout: 2m
target: prod
context: default
prune: true
delete: true
In the above example a KluctlDeployment is being created that defines the deployment based on the Kluctl project.
The deployment is performed every 5 minutes. It will deploy the prod target and then prune orphaned objects afterward.
When the KluctlDeployment gets deleted, delete: true will cause the controller to actually delete the target resources.
It uses the default context provided by the default service account and thus overrides the context specified in the target definition.
Spec fields
source
The KluctlDeployment spec.source specifies the source repository to be used. Example:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
spec:
source:
url: https://github.com/kluctl/kluctl-examples.git
path: path/to/project
secretRef:
name: git-credentials
ref:
branch: my-branch
...
The url specifies the git clone url. It can either be an https or a git/ssh url. A git/ssh url requires a secret to be provided with credentials.
The path specifies the sub-directory where the Kluctl project is located.
The ref provides the Git reference to be used. It can either be a branch or a tag.
See Git authentication for details on authentication.
interval
See Reconciliation .
suspend
See Reconciliation .
target
spec.target specifies the target to be deployed. It must exist in the Kluctl project's kluctl.yaml targets list.
This field is optional and can be omitted if the referenced Kluctl project allows deployments without targets.
targetNameOverride
spec.targetNameOverride will set or override the name of the target. This is equivalent to passing --target-name-override to kluctl deploy.
context
spec.context will override the context used while deploying. This is equivalent to passing --context to kluctl deploy.
deployMode
By default, the operator will perform a full deployment, which is equivalent to using the kluctl deploy command. As an alternative, the controller can be instructed to only perform a kluctl poke-images command. Please see poke-images for details on the command. To do so, set the spec.deployMode field to poke-images.
Example:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: microservices-demo-prod
spec:
interval: 5m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
timeout: 2m
target: prod
context: default
deployMode: poke-images
prune
To enable pruning, set spec.prune to true. This will cause the controller to run kluctl prune after each successful deployment.
delete
To enable deletion, set spec.delete to true. This will cause the controller to run kluctl delete when the KluctlDeployment gets deleted.
args
spec.args is an object representing arguments passed to the deployment. Example:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
spec:
interval: 5m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
timeout: 2m
target: prod
context: default
args:
arg1: value1
arg2: value2
arg3:
k1: v1
k2: v2
The above example is equivalent to calling kluctl deploy -t prod -a arg1=value1 -a arg2=value2.
images
spec.images specifies a list of fixed images to be used by images.get_image(...). Example:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
spec:
interval: 5m
source:
url: https://example.com
timeout: 2m
target: prod
images:
- image: nginx
resultImage: nginx:1.21.6
namespace: example-namespace
deployment: Deployment/example
- image: registry.gitlab.com/my-org/my-repo/image
resultImage: registry.gitlab.com/my-org/my-repo/image:1.2.3
The above example will cause the images.get_image("nginx") invocations of the example Deployment to return nginx:1.21.6. It will also cause all images.get_image("registry.gitlab.com/my-org/my-repo/image") invocations to return registry.gitlab.com/my-org/my-repo/image:1.2.3.
The fixed images provided here take precedence over the ones provided in the target definition .
spec.images is equivalent to calling kluctl deploy -t prod --fixed-image=nginx:example-namespace:Deployment/example=nginx:1.21.6 ... and to kluctl deploy -t prod --fixed-images-file=fixed-images.yaml with fixed-images.yaml containing:
images:
- image: nginx
resultImage: nginx:1.21.6
namespace: example-namespace
deployment: Deployment/example
- image: registry.gitlab.com/my-org/my-repo/image
resultImage: registry.gitlab.com/my-org/my-repo/image:1.2.3
dryRun
spec.dryRun is a boolean value that turns the deployment into a dry-run deployment. This is equivalent to calling kluctl deploy -t prod --dry-run.
noWait
spec.noWait is a boolean value that disables all internal waiting (hooks and readiness). This is equivalent to calling kluctl deploy -t prod --no-wait.
forceApply
spec.forceApply is a boolean value that causes kluctl to solve conflicts via force apply. This is equivalent to calling kluctl deploy -t prod --force-apply.
replaceOnError and forceReplaceOnError
spec.replaceOnError and spec.forceReplaceOnError are both boolean values that cause kluctl to perform a replace after a failed apply. forceReplaceOnError goes a step further and deletes and recreates the object in question. These are equivalent to calling kluctl deploy -t prod --replace-on-error and kluctl deploy -t prod --force-replace-on-error.
abortOnError
spec.abortOnError is a boolean value that causes kluctl to abort as fast as possible in case of errors. This is equivalent to calling kluctl deploy -t prod --abort-on-error.
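For illustration, a KluctlDeployment that combines several of these boolean options might look like this; the chosen values are arbitrary:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: example
  namespace: kluctl-system
spec:
  interval: 5m
  source:
    url: https://github.com/kluctl/kluctl-examples.git
    path: "./microservices-demo/3-templating-and-multi-env/"
  target: prod
  context: default
  forceApply: true
  replaceOnError: true
  abortOnError: true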
includeTags, excludeTags, includeDeploymentDirs and excludeDeploymentDirs
spec.includeTags and spec.excludeTags are lists of tags to be used in inclusion/exclusion logic while deploying. These are equivalent to calling kluctl deploy -t prod --include-tag <tag1> and kluctl deploy -t prod --exclude-tag <tag2>.
spec.includeDeploymentDirs and spec.excludeDeploymentDirs are lists of relative deployment directories to be used in inclusion/exclusion logic while deploying. These are equivalent to calling kluctl deploy -t prod --include-deployment-dir <dir1> and kluctl deploy -t prod --exclude-deployment-dir <dir2>.
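As an illustration, a KluctlDeployment restricted by tags and deployment directories might look like this; the tag and directory names are hypothetical:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: example
  namespace: kluctl-system
spec:
  interval: 5m
  source:
    url: https://github.com/kluctl/kluctl-examples.git
    path: "./microservices-demo/3-templating-and-multi-env/"
  target: prod
  context: default
  includeTags:
    - frontend
  excludeDeploymentDirs:
    - services/legacy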
Reconciliation
The KluctlDeployment spec.interval tells the controller at which interval to try reconciliations.
The interval time units are s, m and h, e.g. interval: 5m. The minimum value should be over 60 seconds.
At each reconciliation run, the controller will check if any rendered objects have changed since the last deployment and then perform a new deployment if changes are detected. Changes are tracked via a hash consisting of all rendered objects.
To enforce periodic full deployments even if nothing has changed, spec.deployInterval can be used to specify an interval at which forced deployments must be performed by the controller.
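For example, the following (illustrative) spec reconciles every 5 minutes but additionally forces a full deployment at least once per hour:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: example
  namespace: kluctl-system
spec:
  interval: 5m
  deployInterval: 1h
  source:
    url: https://github.com/kluctl/kluctl-examples.git
    path: "./microservices-demo/3-templating-and-multi-env/"
  target: prod
  context: default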
The KluctlDeployment reconciliation can be suspended by setting spec.suspend to true.
The controller can be told to reconcile the KluctlDeployment outside of the specified interval by annotating the KluctlDeployment object with kluctl.io/request-reconcile.
On-demand reconciliation example:
kubectl annotate --overwrite kluctldeployment/microservices-demo-prod kluctl.io/request-reconcile="$(date +%s)"
Similarly, a deployment can be forced even if the source has not changed by using the kluctl.io/request-deploy annotation:
kubectl annotate --overwrite kluctldeployment/microservices-demo-prod kluctl.io/request-deploy="$(date +%s)"
Kubeconfigs and RBAC
As Kluctl is meant to be a CLI-first tool, it expects a kubeconfig to be present while deployments are performed. The controller will generate such kubeconfigs on-the-fly before performing the actual deployment.
The kubeconfig can be generated from 3 different sources:
1. The default impersonation service account specified at controller startup (via --default-service-account)
2. The service account specified via spec.serviceAccountName in the KluctlDeployment
3. The secret specified via spec.kubeConfig in the KluctlDeployment.
The behavior/functionality of 1. and 2. is comparable to how the kustomize-controller handles impersonation, with the difference that a kubeconfig with a “default” context is created in-between.
spec.kubeConfig will simply load the kubeconfig from data.value of the specified secret.
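A minimal sketch of this could look like the following; the secret name and kubeconfig content are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: prod-kubeconfig
  namespace: kluctl-system
stringData:
  value: |
    # full kubeconfig content for the target cluster goes here
---
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
  name: example
  namespace: kluctl-system
spec:
  interval: 10m
  source:
    url: https://github.com/kluctl/kluctl-examples.git
    path: "./microservices-demo/3-templating-and-multi-env/"
  target: prod
  context: default
  kubeConfig:
    secretRef:
      name: prod-kubeconfig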
Kluctl targets specify a context name that is expected to be present in the kubeconfig while deploying. As the context found in the generated kubeconfig does not necessarily have the correct name, spec.context can be used to override the context used while deploying. This is especially useful when using service account based kubeconfigs, as these always have the same context with the name “default”.
Here is an example of a deployment that uses the service account “prod-service-account” and overrides the context appropriately (assuming the Kluctl cluster config for the given target expects a “prod” context):
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
namespace: kluctl-system
spec:
interval: 10m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
target: prod
serviceAccountName: prod-service-account
context: default
Git authentication
The spec.source can optionally specify a spec.source.secretRef (see here) which must point to an existing secret (in the same namespace) containing Git credentials.
Basic access authentication
To authenticate towards a Git repository over HTTPS using basic access
authentication (in other words: using a username and password), the referenced
Secret is expected to contain .data.username
and .data.password
values.
---
apiVersion: v1
kind: Secret
metadata:
name: basic-access-auth
type: Opaque
data:
username: <BASE64>
password: <BASE64>
HTTPS Certificate Authority
To provide a Certificate Authority to trust while connecting with a Git
repository over HTTPS, the referenced Secret can contain a .data.caFile
value.
---
apiVersion: v1
kind: Secret
metadata:
name: https-ca-credentials
namespace: default
type: Opaque
data:
caFile: <BASE64>
SSH authentication
To authenticate towards a Git repository over SSH, the referenced Secret is expected to contain identity and known_hosts fields, holding the respective private key of the SSH key pair and the host keys of the Git repository.
---
apiVersion: v1
kind: Secret
metadata:
name: ssh-credentials
type: Opaque
stringData:
identity: |
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
known_hosts: |
github.com ecdsa-sha2-nistp256 AAAA...
Helm Repository authentication
Kluctl allows integrating Helm Charts in two different ways. One is to pre-pull charts and put them into version control, making it unnecessary to pull them at deploy time. This option also means that you don’t have to take any special care on the controller side.
The other way is to let Kluctl pull Helm Charts at deploy time. In that case, you have to ensure that the controller
has the necessary access to the Helm repositories. To add credentials for authentication, set the spec.helmCredentials
field to a list of secret references:
Basic access authentication
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
namespace: kluctl-system
spec:
interval: 10m
source:
url: https://github.com/kluctl/kluctl-examples.git
path: "./microservices-demo/3-templating-and-multi-env/"
target: prod
serviceAccountName: prod-service-account
context: default
helmCredentials:
- secretRef:
name: helm-creds
---
apiVersion: v1
kind: Secret
metadata:
name: helm-creds
namespace: kluctl-system
stringData:
url: https://example-repo.com
username: my-user
password: my-password
TLS authentication
For TLS authentication, see the following example secret:
apiVersion: v1
kind: Secret
metadata:
name: helm-creds
namespace: kluctl-system
data:
certFile: <BASE64>
keyFile: <BASE64>
# NOTE: Can be supplied without the above values
caFile: <BASE64>
Disabling TLS verification
In case you need to disable TLS verification (not recommended!), add the key insecureSkipTlsVerify with the value "true" (make sure it’s a string, so surround it with quotes).
Pass credentials
To enable passing of credentials to all requests, add the key passCredentialsAll with the value "true". This will pass the credentials to all requests, even if the hostname changes.
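Putting both keys together, a credentials Secret might look like this; the values are illustrative:
apiVersion: v1
kind: Secret
metadata:
  name: helm-creds
  namespace: kluctl-system
stringData:
  url: https://example-repo.com
  username: my-user
  password: my-password
  insecureSkipTlsVerify: "true"
  passCredentialsAll: "true"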
Secrets Decryption
Kluctl offers a SOPS Integration that allows using encrypted manifests and variable sources in Kluctl deployments. Decryption by the controller is also supported and currently mirrors the Secrets Decryption configuration of the Flux Kustomize Controller. To configure it in the KluctlDeployment, simply set the decryption field in the spec:
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
namespace: kluctl-system
spec:
decryption:
provider: sops
secretRef:
name: sops-keys
...
The sops-keys Secret has the same format as in the Flux Kustomize Controller.
AWS KMS with IRSA
In addition to the AWS KMS Secret Entry in the secret and the global AWS KMS authentication via the controller’s service account, the Kluctl controller also supports using the IRSA role of the impersonated service account of the KluctlDeployment (specified via serviceAccountName in the spec or --default-service-account):
apiVersion: v1
kind: ServiceAccount
metadata:
name: kluctl-deployment
namespace: kluctl-system
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::123456:role/my-irsa-enabled-role
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kluctl-deployment
namespace: kluctl-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
# watch out, don't use cluster-admin if you don't trust the deployment
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kluctl-deployment
namespace: kluctl-system
---
apiVersion: gitops.kluctl.io/v1beta1
kind: KluctlDeployment
metadata:
name: example
namespace: kluctl-system
spec:
serviceAccountName: kluctl-deployment
decryption:
provider: sops
    # you can also leave out the secretRef if you don't provide additional keys
secretRef:
name: sops-keys
...
Status
When the controller completes a deployment, it reports the result in the status sub-resource.
A successful reconciliation sets the ready condition to true.
status:
conditions:
- lastTransitionTime: "2022-07-07T11:48:14Z"
message: "deploy: ok"
reason: ReconciliationSucceeded
status: "True"
type: Ready
lastDeployResult:
...
lastPruneResult:
...
lastValidateResult:
...
You can wait for the controller to complete a reconciliation with:
kubectl wait kluctldeployment/backend --for=condition=ready
A failed reconciliation sets the ready condition to false:
status:
conditions:
- lastTransitionTime: "2022-05-04T10:18:11Z"
message: target invalid-name not found in kluctl project
reason: PrepareFailed
status: "False"
type: Ready
lastDeployResult:
...
lastPruneResult:
...
lastValidateResult:
...
Note that the lastDeployResult, lastPruneResult and lastValidateResult are only updated on a successful reconciliation.
6.4.3 - Legacy Controller Migration
Older versions of Kluctl (pre v2.20.0) relied on a legacy version of the Kluctl controller, named flux-kluctl-controller. If you upgraded from such an older version and were already using KluctlDeployments from the flux.kluctl.io API group, you must migrate these deployments to the new gitops.kluctl.io group.
To do this, follow these steps:
1. Upgrade the legacy flux-kluctl-controller to at least v0.16.0. This version will introduce a special marker field into the legacy KluctlDeployment status and set it to true. This marker field is used to inform the new Kluctl Controller that the legacy controller is now aware of the existence of the new controller.
2. If not already done, install the new Kluctl Controller.
3. To be on the safe side, disable pruning and deletion for all legacy KluctlDeployment objects. Don’t forget to deploy/apply these changes before continuing with the next step.
4. Modify your KluctlDeployment manifests to use gitops.kluctl.io/v1beta1 as apiVersion. It’s important to use the same name and namespace as used in the legacy resources. Also read the breaking changes section.
5. Deploy/Apply the modified KluctlDeployment resources.
6. At this point, the legacy controller will detect that the KluctlDeployment exists twice, once for the legacy API group/version and once for the new group/version. Based on that knowledge, the legacy controller will stop reconciling the legacy KluctlDeployment.
7. At the same time, the new controller will detect that the legacy KluctlDeployment has the marker field set, which means that the legacy controller is known to honor the new controller’s existence.
8. This will lead to the new controller taking over and reconciling the new KluctlDeployment.
9. If you disabled deletion/pruning in step 3, you should undo this on the new KluctlDeployments now.
After these steps, the legacy KluctlDeployment resources will be excluded from reconciliation by the legacy controller. This means you can safely remove/prune the legacy resources.
Breaking changes
There exist some breaking changes between the legacy flux.kluctl.io/v1alpha1 and gitops.kluctl.io/v1beta1 custom resources and controllers. These are:
Only deploy when resources change
The legacy controller did a full deploy on each reconciliation, following the Flux way of reconciliations. This behaviour was configurable by allowing you to set spec.deployInterval: never, which disabled full deployments and caused the controller to only deploy when the resulting rendered resources actually changed.
The new controller will behave this way by default, unless you explicitly set spec.deployInterval to some interval value.
This means you will have to introduce spec.deployInterval in case you expect the controller to behave as before, or remove spec.deployInterval: never if you already used the Kluctl specific behavior.
renameContexts has been removed
The spec.renameContexts field is not available anymore. Use spec.context instead.
status will not contain full result anymore
The legacy controller wrote the full command result (with objects, diffs, …) into the status field. The new controller will instead only write a summary of the result.
Why no fully automated migration?
I have decided against a fully automated migration as the move of the API group causes resources to have a different identity. This can easily lead to unexpected behaviour and does not play well with GitOps.
6.4.4 - Kluctl Controller API reference
Packages:
gitops.kluctl.io/v1beta1
Package v1beta1 contains API Schema definitions for the gitops.kluctl.io v1beta1 API group.
Resource Types:
Decryption
(Appears on: KluctlDeploymentSpec)
Decryption defines how decryption is handled for Kubernetes manifests.
Field | Description |
---|---|
provider string |
Provider is the name of the decryption engine. |
secretRef LocalObjectReference |
(Optional)
The secret name containing the private OpenPGP keys used for decryption. |
serviceAccount string |
(Optional)
ServiceAccount specifies the service account used to authenticate against cloud providers. This is currently only usable for AWS KMS keys. The specified service account will be used to authenticate to AWS by signing a token in an IRSA compliant way. |
GitRef
(Appears on: ProjectSource)
Field | Description |
---|---|
branch string |
(Optional)
Branch to filter for. Can also be a regex. |
tag string |
(Optional)
Tag to filter for. Can also be a regex. |
HelmCredentials
(Appears on: KluctlDeploymentSpec)
Field | Description |
---|---|
secretRef LocalObjectReference |
SecretRef holds the name of a secret that contains the Helm credentials.
The secret must either contain the fields |
KluctlDeployment
KluctlDeployment is the Schema for the kluctldeployments API
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec KluctlDeploymentSpec | |
status KluctlDeploymentStatus | |
KluctlDeploymentSpec
(Appears on: KluctlDeployment)
Field | Description |
---|---|
source ProjectSource |
Specifies the project source location |
decryption Decryption |
(Optional)
Decrypt Kubernetes secrets before applying them on the cluster. |
interval Kubernetes meta/v1.Duration |
The interval at which to reconcile the KluctlDeployment. Reconciliation means that the deployment is fully rendered and only deployed when the result changes compared to the last deployment. To override this behavior, set the DeployInterval value. |
retryInterval Kubernetes meta/v1.Duration |
(Optional)
The interval at which to retry a previously failed reconciliation. When not specified, the controller uses the Interval value to retry failures. |
deployInterval SafeDuration |
(Optional)
DeployInterval specifies the interval at which to deploy the KluctlDeployment, even in cases the rendered result does not change. |
validateInterval SafeDuration |
(Optional)
ValidateInterval specifies the interval at which to validate the KluctlDeployment.
Validation is performed the same way as with ‘kluctl validate -t <target>’. |
timeout Kubernetes meta/v1.Duration |
(Optional)
Timeout for all operations. Defaults to ‘Interval’ duration. |
suspend bool |
(Optional)
This flag tells the controller to suspend subsequent kluctl executions, it does not apply to already started executions. Defaults to false. |
helmCredentials []HelmCredentials |
(Optional)
HelmCredentials is a list of Helm credentials used when non pre-pulled Helm Charts are used inside a Kluctl deployment. |
serviceAccountName string |
(Optional)
The name of the Kubernetes service account to use while deploying. If not specified, the default service account is used. |
kubeConfig KubeConfig |
(Optional)
The KubeConfig for deploying to the target cluster. Specifies the kubeconfig to be used when invoking kluctl. Contexts in this kubeconfig must match the context found in the kluctl target. As an alternative, specify the context to be used via ‘context’ |
target string |
(Optional)
Target specifies the kluctl target to deploy. If not specified, an empty target is used that has no name and no context. Use ‘TargetName’ and ‘Context’ to specify the name and context in that case. |
targetNameOverride string |
(Optional)
TargetNameOverride sets or overrides the target name. This is especially useful when deploying without a target. |
context string |
(Optional)
If specified, overrides the context to be used. This will effectively make kluctl ignore the context specified in the target. |
args k8s.io/apimachinery/pkg/runtime.RawExtension |
(Optional)
Args specifies dynamic target args. |
images []github.com/kluctl/kluctl/v2/pkg/types.FixedImage |
(Optional)
Images contains a list of fixed image overrides. Equivalent to using ‘–fixed-images-file’ when calling kluctl. |
dryRun bool |
(Optional)
DryRun instructs kluctl to run everything in dry-run mode. Equivalent to using ‘–dry-run’ when calling kluctl. |
noWait bool |
(Optional)
NoWait instructs kluctl to not wait for any resources to become ready, including hooks. Equivalent to using ‘–no-wait’ when calling kluctl. |
forceApply bool |
(Optional)
ForceApply instructs kluctl to force-apply in case of SSA conflicts. Equivalent to using ‘–force-apply’ when calling kluctl. |
replaceOnError bool |
(Optional)
ReplaceOnError instructs kluctl to replace resources on error. Equivalent to using ‘–replace-on-error’ when calling kluctl. |
forceReplaceOnError bool |
(Optional)
ForceReplaceOnError instructs kluctl to force-replace resources in case a normal replace fails. Equivalent to using ‘–force-replace-on-error’ when calling kluctl. |
abortOnError bool |
(Optional)
AbortOnError instructs kluctl to abort deployments immediately when something fails. Equivalent to using ‘–abort-on-error’ when calling kluctl. |
includeTags []string |
(Optional)
IncludeTags instructs kluctl to only include deployments with given tags. Equivalent to using ‘–include-tag’ when calling kluctl. |
excludeTags []string |
(Optional)
ExcludeTags instructs kluctl to exclude deployments with given tags. Equivalent to using ‘–exclude-tag’ when calling kluctl. |
includeDeploymentDirs []string |
(Optional)
IncludeDeploymentDirs instructs kluctl to only include deployments with the given dir. Equivalent to using ‘–include-deployment-dir’ when calling kluctl. |
excludeDeploymentDirs []string |
(Optional)
ExcludeDeploymentDirs instructs kluctl to exclude deployments with the given dir. Equivalent to using ‘–exclude-deployment-dir’ when calling kluctl. |
deployMode string |
(Optional)
DeployMode specifies what deploy mode should be used. The options ‘full-deploy’ and ‘poke-images’ are supported. With the ‘poke-images’ option, only images are patched into the target without performing a full deployment. |
validate bool |
(Optional)
Validate enables validation after deploying |
prune bool |
(Optional)
Prune enables pruning after deploying. |
delete bool |
(Optional)
Delete enables deletion of the specified target when the KluctlDeployment object gets deleted. |
KluctlDeploymentStatus
(Appears on: KluctlDeployment)
KluctlDeploymentStatus defines the observed state of KluctlDeployment
Field | Description |
---|---|
lastHandledReconcileAt string |
(Optional)
LastHandledReconcileAt holds the value of the most recent reconcile request value, so a change of the annotation value can be detected. |
LastHandledDeployAt string |
(Optional) |
observedGeneration int64 |
(Optional)
ObservedGeneration is the last reconciled generation. |
observedCommit string |
ObservedCommit is the last commit observed |
conditions []Kubernetes meta/v1.Condition |
(Optional) |
projectKey github.com/kluctl/kluctl/v2/pkg/types/result.ProjectKey |
(Optional) |
targetKey github.com/kluctl/kluctl/v2/pkg/types/result.TargetKey |
(Optional) |
lastObjectsHash string |
(Optional) |
lastDeployError string |
(Optional) |
lastPruneError string |
(Optional) |
lastValidateError string |
(Optional) |
lastDeployResult github.com/kluctl/kluctl/v2/pkg/types/result.CommandResultSummary |
(Optional)
LastDeployResult is the result of the last deploy command |
lastPruneResult github.com/kluctl/kluctl/v2/pkg/types/result.CommandResultSummary |
(Optional)
LastPruneResult is the result of the last prune command |
lastValidateResult github.com/kluctl/kluctl/v2/pkg/types/result.ValidateResult |
(Optional)
LastValidateResult is the result of the last validate command |
KubeConfig
(Appears on: KluctlDeploymentSpec)
KubeConfig references a Kubernetes secret that contains a kubeconfig file.
Field | Description |
---|---|
secretRef SecretKeyReference |
SecretRef holds the name of a secret that contains a key with
the kubeconfig file as the value. If no key is set, the key will default
to ‘value’. The secret must be in the same namespace as
the KluctlDeployment.
It is recommended that the kubeconfig is self-contained, and the secret
is regularly updated if credentials such as a cloud-access-token expire.
Cloud specific |
LocalObjectReference
(Appears on: Decryption, HelmCredentials, ProjectSource)
Field | Description |
---|---|
name string |
Name of the referent. |
ProjectSource
(Appears on: KluctlDeploymentSpec)
Field | Description |
---|---|
url github.com/kluctl/kluctl/v2/pkg/types.GitUrl |
Url specifies the Git url where the project source is located |
ref GitRef |
(Optional)
Ref specifies the branch, tag or commit that should be used. If omitted, the default branch of the repo is used. |
path string |
(Optional)
Path specifies the sub-directory to be used as project directory |
secretRef LocalObjectReference |
(Optional)
SecretRef specifies the Secret containing authentication credentials for the git repository. For HTTPS repositories the Secret must contain ‘username’ and ‘password’ fields. For SSH repositories the Secret must contain ‘identity’ and ‘known_hosts’ fields. |
SafeDuration
(Appears on: KluctlDeploymentSpec)
Field | Description |
---|---|
Duration Kubernetes meta/v1.Duration |
SecretKeyReference
(Appears on: KubeConfig)
SecretKeyReference contains enough information to locate the referenced Kubernetes Secret object in the same namespace. Optionally a key can be specified. Use this type instead of core/v1 SecretKeySelector when the Key is optional and the Optional field is not applicable.
Field | Description |
---|---|
name string |
Name of the Secret. |
key string |
(Optional)
Key in the Secret, when not specified an implementation-specific default key is used. |
This page was automatically generated with gen-crd-api-reference-docs
6.5 - Commands
kluctl offers a unified command line interface that allows standardizing all your deployments. Every project, no matter how different it is from other projects, is managed the same way.
You can always call kluctl --help or kluctl <command> --help for a help prompt.
Individual commands are documented in sub-sections.
6.5.1 - Common Arguments
A few sets of arguments are common between multiple commands. These arguments are still part of the command itself and must be placed after the command name.
Global arguments
These arguments are available for all commands.
Global arguments:
--cpu-profile string Enable CPU profiling and write the result to the given path
--debug Enable debug logging
--no-color Disable colored output
--no-update-check Disable update check on startup
Project arguments
These arguments are available for all commands that are based on a Kluctl project. They control where and how to load the kluctl project and deployment project.
Project arguments:
Define where and how to load the kluctl project and its components from.
-a, --arg stringArray Passes a template argument in the form of name=value. Nested args
can be set with the '-a my.nested.arg=value' syntax. Values are
interpreted as yaml values, meaning that 'true' and 'false' will
lead to boolean values and numbers will be treated as numbers. Use
quotes if you want these to be treated as strings. If the value
starts with @, it is treated as a file, meaning that the contents
of the file will be loaded and treated as yaml.
--args-from-file stringArray Loads a yaml file and makes it available as arguments, meaning that
they will be available through the global 'args' variable.
--context string Overrides the context name specified in the target. If the selected
target does not specify a context or the no-name target is used,
--context will override the currently active context.
--git-cache-update-interval duration Specify the time to wait between git cache updates. Defaults to not
wait at all and always updating caches.
--local-git-group-override stringArray Same as --local-git-override, but for a whole group prefix instead
of a single repository. All repositories that have the given prefix
will be overridden with the given local path and the repository
suffix appended. For example,
'gitlab.com:some-org/sub-org=/local/path/to/my-forks' will override
all repositories below 'gitlab.com:some-org/sub-org/' with the
repositories found in '/local/path/to/my-forks'. It will however
only perform an override if the given repository actually exists
locally and otherwise revert to the actual (non-overridden) repository.
--local-git-override stringArray Specify a single repository local git override in the form of
'github.com:my-org/my-repo=/local/path/to/override'. This will
cause kluctl to not use git to clone for the specified repository
but instead use the local directory. This is useful in case you
need to test out changes in external git repositories without
pushing them.
-c, --project-config existingfile Location of the .kluctl.yaml config file. Defaults to
$PROJECT/.kluctl.yaml
--project-dir existingdir Specify the project directory. Defaults to the current working
directory.
-t, --target string Target name to run command for. Target must exist in .kluctl.yaml.
-T, --target-name-override string Overrides the target name. If -t is used at the same time, then the
target will be looked up based on -t <name> and then renamed to the
value of -T. If no target is specified via -t, then the no-name
target is renamed to the value of -T.
--timeout duration Specify timeout for all operations, including loading of the
project, all external api calls and waiting for readiness. (default
10m0s)
Image arguments
These arguments are available on some target based commands.
They control image versions requested by images.get_image(...) calls.
Image arguments:
Control fixed images and update behaviour.
-F, --fixed-image stringArray Pin an image to a given version. Expects
'--fixed-image=image<:namespace:deployment:container>=result'
--fixed-images-file existingfile Use .yaml file to pin image versions. See output of list-images
sub-command or read the documentation for details about the output format
Inclusion/Exclusion arguments
These arguments are available for some target based commands. They control inclusion/exclusion based on tags and deployment item paths.
Inclusion/Exclusion arguments:
Control inclusion/exclusion.
--exclude-deployment-dir stringArray Exclude deployment dir. The path must be relative to the root
deployment project. Exclusion has precedence over inclusion, same as
in --exclude-tag
-E, --exclude-tag stringArray Exclude deployments with given tag. Exclusion has precedence over
inclusion, meaning that explicitly excluded deployments will always
be excluded even if an inclusion rule would match the same deployment.
--include-deployment-dir stringArray Include deployment dir. The path must be relative to the root
deployment project.
-I, --include-tag stringArray Include deployments with given tag.
Command Results arguments
These arguments control how command results are stored.
Command Results:
Configure how command results are stored.
--command-result-namespace string Override the namespace to be used when writing command results. (default
"kluctl-results")
--force-write-command-result Force writing of command results, even if the command is run in dry-run mode.
--keep-command-results-count int Configure how many old command results to keep. (default 10)
--write-command-result Enable writing of command results into the cluster.
6.5.2 - Environment Variables
In addition to arguments, Kluctl can be controlled via a set of environment variables.
Environment variables as arguments
All options/arguments accepted by kluctl can also be specified via environment variables. The names of the environment variables always start with KLUCTL_ and end with the option/argument in uppercase and dashes replaced with underscores. As an example, --dry-run can also be specified with the environment variable KLUCTL_DRY_RUN=true.
If an argument needs to be specified multiple times through environment variables, an index can be appended to the names of the environment variables, e.g. KLUCTL_ARG_0=name1=value1 and KLUCTL_ARG_1=name2=value2.
Additional environment variables
A few additional environment variables are supported which do not belong to an option/argument. These are:
- KLUCTL_REGISTRY_<idx>_HOST, KLUCTL_REGISTRY_<idx>_USERNAME, and so on. See registries for details.
- KLUCTL_GIT_<idx>_HOST, KLUCTL_GIT_<idx>_USERNAME, and so on.
- KLUCTL_SSH_DISABLE_STRICT_HOST_KEY_CHECKING. Disable ssh host key checking when accessing git repositories.
6.5.3 - controller install
Command
Usage: kluctl controller install [flags]
Install the Kluctl controller.
This command will install the kluctl-controller to the current Kubernetes cluster.
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--context string Override the context to use.
--controller-version string Specify the controller version to install.
--dry-run Performs all kubernetes API calls in dry-run mode.
-y, --yes Suppresses 'Are you sure?' questions and proceeds as if you would answer 'yes'.
6.5.4 - controller run
Command
Usage: kluctl controller run [flags]
Run the Kluctl controller.
This command will run the Kluctl Controller. This is usually meant to be run inside a cluster and not from your local machine.
Arguments
The following arguments are available:
Misc arguments:
Command specific arguments.
--context string Override the context to use.
--default-service-account string Default service account used for impersonation.
--dry-run Run all deployments in dryRun=true mode.
--health-probe-bind-address string The address the probe endpoint binds to. (default ":8081")
--kubeconfig string Override the kubeconfig to use.
--leader-elect Enable leader election for controller manager. Enabling this will
ensure there is only one active controller manager.
--metrics-bind-address string The address the metric endpoint binds to. (default ":8080")
6.5.5 - delete
Command
Usage: kluctl delete [flags]
Delete a target (or parts of it) from the corresponding cluster.
Objects are located based on the target discriminator.
WARNING: This command will also delete objects which are not part of your deployment project (anymore). It really only decides based on the discriminator and does NOT take the local target/state into account!
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--discriminator string Override the discriminator used to find objects for deletion.
--dry-run Performs all kubernetes API calls in dry-run mode.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--no-obfuscate Disable obfuscation of sensitive/secret data
-o, --output-format stringArray Specify output format and target file, in the format
'format=path'. Format can either be 'text' or 'yaml'. Can be
specified multiple times. The actual format for yaml is
currently not documented and subject to change.
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--short-output When using the 'text' output format (which is the default),
only names of changed objects are shown instead of showing all
changes.
-y, --yes Suppresses 'Are you sure?' questions and proceeds as if you
would answer 'yes'.
These arguments have the same meaning as described in deploy.
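Given the warning above, it is advisable to review a dry-run first. A minimal sketch, with prod standing in for one of your targets:
kluctl delete -t prod --dry-run
kluctl delete -t prod -y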
6.5.6 - deploy
Command
Usage: kluctl deploy [flags]
Deploys a target to the corresponding cluster. This command will also output a diff between the initial state and the state after deployment. The format of this diff is the same as for the ‘diff’ command. It will also output a list of prunable objects (without actually deleting them).
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--abort-on-error Abort deploying when an error occurs instead of trying the
remaining deployments
--dry-run Performs all kubernetes API calls in dry-run mode.
--force-apply Force conflict resolution when applying. See documentation for
details
--force-replace-on-error Same as --replace-on-error, but also try to delete and
re-create objects. See documentation for more details.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--no-obfuscate Disable obfuscation of sensitive/secret data
--no-wait                                    Don't wait for objects' readiness.
-o, --output-format stringArray Specify output format and target file, in the format
'format=path'. Format can either be 'text' or 'yaml'. Can be
specified multiple times. The actual format for yaml is
currently not documented and subject to change.
--readiness-timeout duration Maximum time to wait for object readiness. The timeout is
meant per-object. Timeouts are in the duration format (1s, 1m,
1h, ...). If not specified, a default timeout of 5m is used.
(default 5m0s)
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--replace-on-error When patching an object fails, try to replace it. See
documentation for more details.
--short-output When using the 'text' output format (which is the default),
only names of changed objects are shown instead of showing all
changes.
-y, --yes Suppresses 'Are you sure?' questions and proceeds as if you
would answer 'yes'.
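A typical flow is to review the planned changes in dry-run mode first, then deploy and optionally store the result as yaml (prod is a placeholder target name):
kluctl deploy -t prod --dry-run
kluctl deploy -t prod -o yaml=deploy-result.yaml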
--force-apply
kluctl implements deployments via server-side apply and a custom automatic conflict resolution algorithm. This algorithm is an automatic implementation of the “Don’t overwrite value, give up management claim” method. It should work in most cases, but might still fail. In case of such failure, you can use --force-apply to use the “Overwrite value, become sole manager” strategy instead.
Please note that this is a risky operation which might overwrite fields that were initially managed by kluctl but were then taken over by other managers (e.g. by operators). Always use this option with caution and perform a dry-run beforehand to ensure nothing unexpected gets overwritten.
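Following that advice, a cautious sequence might look like this (prod is a placeholder target name):
kluctl deploy -t prod --force-apply --dry-run
kluctl deploy -t prod --force-apply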
--replace-on-error
In some situations, patching Kubernetes objects might fail for different reasons. In such cases, you can try --replace-on-error to instruct kluctl to retry with an update operation.
Please note that this will cause all fields to be overwritten, even if owned by other field managers.
--force-replace-on-error
This flag will cause the same replacement attempt on failure as with --replace-on-error. In addition, it will fall back to a delete+recreate operation in case the replace also fails.
Please note that this is a potentially risky operation, especially when an object carries some kind of important state.
--abort-on-error
kluctl does not abort a command when an individual object cannot be updated. Instead, it collects all errors and warnings and outputs them at the end. This option modifies the behaviour to immediately abort the command.
6.5.7 - poke-images
Command
Usage: kluctl poke-images [flags]
Replace all images in the target. This command will fully render the target and then only replace images instead of fully deploying the target. Only images used in combination with ‘images.get_image(…)’ are replaced.
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--dry-run Performs all kubernetes API calls in dry-run mode.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--no-obfuscate Disable obfuscation of sensitive/secret data
-o, --output-format stringArray Specify output format and target file, in the format
'format=path'. Format can either be 'text' or 'yaml'. Can be
specified multiple times. The actual format for yaml is
currently not documented and subject to change.
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--short-output When using the 'text' output format (which is the default),
only names of changed objects are shown instead of showing all
changes.
-y, --yes Suppresses 'Are you sure?' questions and proceeds as if you
would answer 'yes'.
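A minimal sketch, assuming a target named prod and a fixed-images file that was previously produced, e.g. by list-images:
kluctl poke-images -t prod --fixed-images-file images.yaml --dry-run
kluctl poke-images -t prod --fixed-images-file images.yaml -y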
6.5.8 - prune
Command
Usage: kluctl prune [flags]
Searches the target cluster for prunable objects and deletes them
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--dry-run Performs all kubernetes API calls in dry-run mode.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--no-obfuscate Disable obfuscation of sensitive/secret data
-o, --output-format stringArray Specify output format and target file, in the format
'format=path'. Format can either be 'text' or 'yaml'. Can be
specified multiple times. The actual format for yaml is
currently not documented and subject to change.
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--short-output When using the 'text' output format (which is the default),
only names of changed objects are shown instead of showing all
changes.
-y, --yes Suppresses 'Are you sure?' questions and proceeds as if you
would answer 'yes'.
These arguments have the same meaning as described in deploy.
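For example, to preview which objects would be pruned before actually deleting them (prod is a placeholder target name):
kluctl prune -t prod --dry-run
kluctl prune -t prod -y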
6.5.9 - validate
Command
Usage: kluctl validate [flags]
Validates the already deployed deployment. This means that all objects are retrieved from the cluster and checked for readiness.
TODO: This needs to be better documented!
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--command-result existingfile Specify a command result to use instead of loading a project.
This will also perform drift detection.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
-o, --output stringArray Specify output target file. Can be specified multiple times
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--sleep duration Sleep duration between validation attempts (default 5s)
--wait duration Wait for the given amount of time until the deployment validates
--warnings-as-errors Consider warnings as failures
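As an illustrative invocation (prod is a placeholder target name), waiting up to 10 minutes for the deployment to become ready, re-checking every 10 seconds and treating warnings as failures:
kluctl validate -t prod --wait 10m --sleep 10s --warnings-as-errors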
6.5.10 - diff
Command
Usage: kluctl diff [flags]
Perform a diff between the locally rendered target and the already deployed target. The output is by default in human readable form (a table combined with unified diffs). The output can also be changed to a yaml file. Please note however that the format is currently not documented and prone to changes. After the diff is performed, the command will also search for prunable objects and list them.
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--force-apply Force conflict resolution when applying. See documentation for
details
--force-replace-on-error Same as --replace-on-error, but also try to delete and
re-create objects. See documentation for more details.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--ignore-annotations Ignores changes in annotations when diffing
--ignore-labels Ignores changes in labels when diffing
--ignore-tags Ignores changes in tags when diffing
--no-obfuscate Disable obfuscation of sensitive/secret data
-o, --output-format stringArray Specify output format and target file, in the format
'format=path'. Format can either be 'text' or 'yaml'. Can be
specified multiple times. The actual format for yaml is
currently not documented and subject to change.
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--replace-on-error When patching an object fails, try to replace it. See
documentation for more details.
--short-output When using the 'text' output format (which is the default),
only names of changed objects are shown instead of showing all
changes.
--force-apply and --replace-on-error have the same meaning as in deploy.
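For example, to print the human readable diff and additionally write the machine readable result to a file (prod is a placeholder target name):
kluctl diff -t prod -o yaml=diff-result.yaml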
6.5.11 - list-targets
Command
Usage: kluctl list-targets [flags]
Outputs a yaml list with all targets.
Arguments
The following arguments are available:
Misc arguments:
Command specific arguments.
-o, --output stringArray Specify output target file. Can be specified multiple times
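For example, to print the list to stdout or additionally write it to a file:
kluctl list-targets
kluctl list-targets -o targets.yaml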
6.5.12 - helm-pull
Command
Usage: kluctl helm-pull [flags]
Recursively searches for ‘helm-chart.yaml’ files and pre-pulls the specified Helm charts. Kluctl requires Helm Charts to be pre-pulled by default, which is handled by this command. It will collect all required Charts and versions and pre-pull them into .helm-charts. To disable pre-pulling for individual charts, set ‘skipPrePull: true’ in helm-chart.yaml.
See helm-integration for more details.
Arguments
The following sets of arguments are available:
- project arguments (except -a)
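The command is typically run from the root of the Kluctl project, after which the pre-pulled charts in .helm-charts can be committed to version control:
kluctl helm-pull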
6.5.13 - render
Command
Usage: kluctl render [flags]
Renders all resources and configuration files and stores the result in either a temporary directory or a specified directory.
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--kubernetes-version string Specify the Kubernetes version that will be assumed. This will
also override the kubeVersion used when rendering Helm Charts.
--offline-kubernetes Run command in offline mode, meaning that it will not try to
connect to the target cluster
--print-all Write all rendered manifests to stdout
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
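For example, to render a target without talking to the cluster and dump all resulting manifests to stdout, or to render into a fixed directory for inspection (prod and ./rendered are placeholders):
kluctl render -t prod --offline-kubernetes --print-all
kluctl render -t prod --render-output-dir ./rendered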
6.5.14 - list-images
Command
Usage: kluctl list-images [flags]
Renders the target and outputs all images used via ‘images.get_image(…)’. The result is compatible with the yaml files expected by --fixed-images-file.
If fixed images (‘-f/--fixed-image’) are provided, these are also taken into account, as described in the deploy command.
Arguments
The following sets of arguments are available:
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--kubernetes-version string Specify the Kubernetes version that will be assumed. This will
also override the kubeVersion used when rendering Helm Charts.
--offline-kubernetes Run command in offline mode, meaning that it will not try to
connect to the target cluster
-o, --output stringArray Specify output target file. Can be specified multiple times
--render-output-dir string Specifies the target directory to render the project into. If
omitted, a temporary directory is used.
--simple Output a simplified version of the images list
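For example, to write the collected images into a file that can later be passed back via --fixed-images-file, or to print a simplified list (prod is a placeholder target name):
kluctl list-images -t prod -o images.yaml
kluctl list-images -t prod --simple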
6.5.15 - helm-update
Command
Usage: kluctl helm-update [flags]
Recursively searches for ‘helm-chart.yaml’ files and checks for new available versions. Optionally performs the actual upgrade and/or adds a commit to version control.
Arguments
The following sets of arguments are available:
- project arguments (except -a)
In addition, the following arguments are available:
Misc arguments:
Command specific arguments.
--commit Create a git commit for every updated chart
--helm-insecure-skip-tls-verify stringArray Controls skipping of TLS verification. Must be in the form
--helm-insecure-skip-tls-verify=<credentialsId>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-key-file stringArray Specify client certificate to use for Helm Repository
authentication. Must be in the form
--helm-key-file=<credentialsId>:<path>, where <credentialsId>
must match the id specified in the helm-chart.yaml.
--helm-password stringArray Specify password to use for Helm Repository authentication.
Must be in the form
--helm-password=<credentialsId>:<password>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
--helm-username stringArray Specify username to use for Helm Repository authentication.
Must be in the form
--helm-username=<credentialsId>:<username>, where
<credentialsId> must match the id specified in the helm-chart.yaml.
-i, --interactive Ask for every Helm Chart if it should be upgraded.
--upgrade Write new versions into helm-chart.yaml and perform helm-pull
afterwards
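For example, to interactively review available chart updates, write the new versions into helm-chart.yaml, re-pull the charts and create one git commit per updated chart:
kluctl helm-update --upgrade --commit -i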