Deployments
Deployments and sub-deployments.
A deployment project is a collection of deployment items and sub-deployments. Deployment items are usually
Kustomize deployments, but can also integrate Helm Charts.
Basic structure
The following visualization shows the basic structure of a deployment project. The entry point of every deployment
project is the deployment.yaml
file, which then includes further sub-deployments and kustomize deployments. It also
provides some additional configuration required for multiple kluctl features to work as expected.
As can be seen, sub-deployments can include other sub-deployments, allowing you to structure the deployment project
as you need.
Each level in this structure recursively adds tags to each deployed resource, allowing you to precisely control
what is deployed in the future.
-- project-dir/
|-- deployment.yaml
|-- .gitignore
|-- kustomize-deployment1/
| |-- kustomization.yaml
| `-- resource.yaml
|-- sub-deployment/
| |-- deployment.yaml
| |-- kustomize-deployment2/
| | |-- resource1.yaml
| | `-- ...
| |-- kustomize-deployment3/
| | |-- kustomization.yaml
| | |-- resource1.yaml
| | |-- resource2.yaml
| | |-- patch1.yaml
| | `-- ...
| |-- kustomize-with-helm-deployment
| | |-- charts/
| | | `-- ...
| | |-- kustomization.yaml
| | |-- helm-chart.yaml
| | `-- helm-values.yaml
| `-- subsub-deployment/
| |-- deployment.yaml
| |-- ... kustomize deployments
| `-- ... subsubsub deployments
`-- sub-deployment/
`-- ...
Order of deployments
Deployments are done in parallel, meaning that there are usually no order guarantees. The only way to control
order is to place barriers between kustomize deployments.
You should however not overuse barriers, as they negatively impact the speed of kluctl.
Plain Kustomize
It’s also possible to use Kluctl on plain Kustomize deployments. Simply run kluctl deploy
from inside the
folder containing your kustomization.yaml
. If you don’t have a .kluctl.yaml
, you can also work without targets.
Please note that pruning and deletion is not supported in this mode.
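As an illustration, a minimal kustomization.yaml like the following (the referenced manifest names are placeholders)
can be deployed by running kluctl deploy from the directory that contains it:
resources:
  - deployment.yaml
  - service.yaml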
1 - Deployments
Structure of deployment.yaml.
The deployment.yaml
file is the entrypoint for the deployment project. Included sub-deployments also provide a
deployment.yaml
file with the same structure as the initial one.
An example deployment.yaml
looks like this:
deployments:
- path: nginx
- path: my-app
- include: monitoring
- git:
url: git@github.com/example/example.git
- oci:
url: oci://ghcr.io/kluctl/kluctl-examples/simple
commonLabels:
my.prefix/target: "{{ target.name }}"
my.prefix/deployment-project: my-deployment-project
The following sub-chapters describe the available fields in the deployment.yaml file.
deployments
deployments
is a list of deployment items. Multiple deployment types are supported, which is documented further down.
Individual deployments are performed in parallel, unless a barrier is encountered which causes kluctl to
wait for all previous deployments to finish.
Deployments can also be conditional by using the when field.
Simple deployments
Simple deployments are specified via path
and are expected to be directories with Kubernetes manifests inside.
Kluctl will internally generate a kustomization.yaml from these manifests and treat the deployment item the same way
as it would treat a Kustomize deployment.
Example:
deployments:
- path: path/to/manifests
Kustomize deployments
When the deployment item directory specified via path
contains a kustomization.yaml
, Kluctl will use this file
instead of generating one.
Please see Kustomize integration for more details.
Example:
deployments:
- path: path/to/deployment1
- path: path/to/deployment2
The path
must point to a directory relative to the directory containing the deployment.yaml
. Only directories
that are part of the kluctl project are allowed. The directory must contain a valid kustomization.yaml
.
Includes
Specifies a sub-deployment project to be included. The included sub-deployment project will inherit many properties
of the parent project, e.g. tags, commonLabels and so on.
Example:
deployments:
- include: path/to/sub-deployment
The path
must point to a directory relative to the directory containing the deployment.yaml
. Only directories
that are part of the kluctl project are allowed. The directory must contain a valid deployment.yaml
.
Git includes
Specifies an external git project to be included. The project is included the same way as with regular includes, except
that the included project can not use/load templates from the parent project. An included project might also include
further git projects.
If the included project is a Kluctl Library Project, current variables are NOT passed
automatically into the included project. Only when passVars is set to true, all current variables are passed.
For library projects, args is the preferred way to pass configuration.
Simple example:
deployments:
- git: git@github.com/example/example.git
This will clone the git repository at git@github.com/example/example.git
, checkout the default branch and include it
into the current project.
Advanced Example:
deployments:
- git:
url: git@github.com/example/example.git
ref:
branch: my-branch
subDir: some/sub/dir
The url specifies the Git url to be cloned and checked out.
ref
is optional and specifies the branch or tag to be used. To specify a branch, set the sub-field branch
as seen
in the above example. To pass a tag, set the tag
field instead. To pass a commit, set the commit
field instead.
If ref
is omitted, the default branch will be checked out.
subDir
is optional and specifies the sub directory inside the git repository to include.
OCI includes
Specifies an OCI based artifact to include. The artifact must be pushed to your OCI repository via the
kluctl oci push
command. The artifact is extracted and then included the same way a
git include is included.
If the included project is a Kluctl Library Project, current variables are NOT passed
automatically into the included project. Only when passVars is set to true, all current variables are passed.
For library projects, args is the preferred way to pass configuration.
Simple example:
deployments:
- oci:
url: oci://ghcr.io/kluctl/kluctl-examples/simple
The url
specifies the OCI repository url. It must use the oci://
scheme. It is not allowed to add tags or digests to
the url. Instead, use the dedicated ref
field:
deployments:
- oci:
url: oci://ghcr.io/kluctl/kluctl-examples/simple
ref:
tag: latest
For digests, use:
deployments:
- oci:
url: oci://ghcr.io/kluctl/kluctl-examples/simple
ref:
digest: sha256:9ac3ba762c373ebccecb9dd3ac1d8ca091e4bd4a101701ce99e6058c0c74eedc
Subdirectories of the pushed artifact can be specified via subDir
:
deployments:
- oci:
url: oci://ghcr.io/kluctl/kluctl-examples/simple
subDir: my-subdir
See OCI support for more details, especially in regard to authentication for private registries.
Barriers
Causes kluctl to wait until all previous kustomize deployments have been applied. This is useful when
upcoming deployments need the current or previous deployments to be finished beforehand. Previous deployments also
include all sub-deployments from included deployments.
Please note that barriers do not wait for readiness of individual resources. This means that it will not wait for
readiness of services, deployments, daemon sets, and so on. To actually wait for readiness, use waitReadiness: true
or
waitReadinessObjects
.
Example:
deployments:
- path: kustomizeDeployment1
- path: kustomizeDeployment2
- include: subDeployment1
- barrier: true
# At this point, it's ensured that kustomizeDeployment1, kustomizeDeployment2 and all sub-deployments from
# subDeployment1 are fully deployed.
- path: kustomizeDeployment3
A barrier can also carry a custom message, set via the message field, which is shown while the barrier is waiting.
Example:
deployments:
- path: kustomizeDeployment1
- path: kustomizeDeployment2
- include: subDeployment1
- barrier: true
message: "Waiting for subDeployment1 to be finished"
# At this point, it's ensured that kustomizeDeployment1, kustomizeDeployment2 and all sub-deployments from
# subDeployment1 are fully applied.
- path: kustomizeDeployment3
If no custom message is provided, the barrier falls back to the default behavior without a specific message.
When viewing the kluctl deploy
status, the custom message, if provided, is displayed along with the default barrier information.
waitReadiness
waitReadiness
can be set on all deployment items. If set to true
, Kluctl will wait for readiness of each individual object
of the current deployment item. Readiness is defined in readiness.
Please note that Kluctl will not wait for readiness of previous deployment items.
This can also be combined with barriers, which will instruct Kluctl to stop processing the next deployment
items until everything before the barrier is applied and the current deployment item’s objects are all ready.
Examples:
deployments:
- path: kustomizeDeployment1
waitReadiness: true
- path: kustomizeDeployment2
# this will wait for kustomizeDeployment1 to be applied+ready and kustomizeDeployment2 to be applied
# kustomizeDeployment2 is not guaranteed to be ready at this point, but might be due to the parallel nature of Kluctl
- barrier: true
- path: kustomizeDeployment3
waitReadinessObjects
This is comparable to waitReadiness
, but instead of waiting for all objects of the current deployment item, it allows
you to explicitly specify objects which are not necessarily part of the current (or any) deployment item.
This is for example useful if you used an external Helm Chart and want to wait for readiness of some individual objects,
e.g. CRDs that are being deployed by some in-cluster operator instead of the Helm chart itself.
Examples:
deployments:
# The cilium Helm chart does not deploy CRDs anymore. Instead, the cilium-operator does this on startup. This means,
# we can't apply CiliumNetworkPolicies before the CRDs get applied by the operator.
- path: cilium
- barrier: true
waitReadinessObjects:
- kind: Deployment
name: cilium-operator
namespace: kube-system
- kind: CustomResourceDefinition
name: ciliumnetworkpolicies.cilium.io
# This deployment can now safely use the CRDs applied by the operator
- path: kustomizeDeployment1
deleteObjects
Causes kluctl to delete matching objects, specified by a list of group/kind/name/namespace dictionaries.
The order/parallelization of deletion is identical to the order and parallelization of normal deployment items,
meaning that it happens in parallel by default until a barrier is encountered.
Example:
deployments:
- deleteObjects:
- group: apps
kind: DaemonSet
namespace: kube-system
name: kube-proxy
- barrier: true
- path: my-cni
The above example shows how to delete the kube-proxy DaemonSet before installing a CNI (e.g. Cilium in
proxy-replacement mode).
deployments common properties
All entries in deployments
can have the following common properties:
vars (deployment item)
A list of variable sets to be loaded into the templating context, which is then available in all deployment items
and sub-deployments.
See templating for more details.
Example:
deployments:
- path: kustomizeDeployment1
vars:
- file: vars1.yaml
- values:
var1: value1
- path: kustomizeDeployment2
# all sub-deployments of this include will have the given variables available in their Jinja2 context.
- include: subDeployment1
vars:
- file: vars2.yaml
passVars
Can only be used on include, git include and oci include. If set to true
,
all variables will be passed down to the included project even if the project is an explicitly marked
Kluctl Library Project.
If the included project is not a library project, variables are always fully passed into the included deployment.
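Example (my-library is a placeholder for an included library project that should receive all current variables):
deployments:
  - include: my-library
    passVars: true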
args
Can only be used on include, git include and oci include. Passes the given
arguments into Kluctl Library Projects.
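Example (the argument names are illustrative and must match what the included library project expects):
deployments:
  - include: my-library
    args:
      environment: prod
      replicas: 3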
when
Each deployment item can be conditional with the help of the when
field. It must be set to a
Jinja2 based expression
that evaluates to a boolean.
Example:
deployments:
- path: item1
- path: item2
when: my.var == "my-value"
tags (deployment item)
A list of tags the deployment should have. See tags for more details. For includes, this means that all
sub-deployments will get these tags applied to. If not specified, the default tags logic as described in tags
is applied.
Example:
deployments:
- path: kustomizeDeployment1
tags:
- tag1
- tag2
- path: kustomizeDeployment2
tags:
- tag3
# all sub-deployments of this include will get tag4 applied
- include: subDeployment1
tags:
- tag4
alwaysDeploy
Forces a deployment to be included every time, ignoring inclusion/exclusion sets from the command line.
See Deploying with tag inclusion/exclusion for details.
deployments:
- path: kustomizeDeployment1
alwaysDeploy: true
- path: kustomizeDeployment2
Please note that alwaysDeploy
will also cause kluctl render to always render the resources.
skipDeleteIfTags
Forces exclusion of a deployment whenever inclusion/exclusion tags are specified via command line.
See Deleting with tag inclusion/exclusion for details.
deployments:
- path: kustomizeDeployment1
skipDeleteIfTags: true
- path: kustomizeDeployment2
onlyRender
Causes a path to be rendered only but not treated as a deployment item. This can be useful if you, for example, want to
use Kustomize components which you’d reference from other deployment items.
deployments:
- path: component
onlyRender: true
- path: kustomizeDeployment2
vars (deployment project)
A list of variable sets to be loaded into the templating context, which is then available in all deployment items
and sub-deployments.
See templating for more details.
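Example (vars1.yaml is a placeholder for any variables file inside your project):
vars:
  - file: vars1.yaml
  - values:
      var1: value1
deployments:
  - path: nginx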
commonLabels
A dictionary of labels and values to be
added to all resources deployed by any of the deployment items in this deployment project.
Consider the following example deployment.yaml
:
deployments:
- path: nginx
- include: sub-deployment1
commonLabels:
my.prefix/target: {{ target.name }}
my.prefix/deployment-name: my-deployment-project-name
my.prefix/label-1: value-1
my.prefix/label-2: value-2
Every resource deployed by the kustomize deployment nginx
will now get the four provided labels attached. All included
sub-deployment projects (e.g. sub-deployment1
) will also recursively inherit these labels and pass them further
down.
In case an included sub-deployment project also contains commonLabels
, both dictionaries of commonLabels are merged
inside the included sub-deployment project. In case of conflicts, the included commonLabels override the inherited ones.
Please note that these commonLabels
are not related to commonLabels
supported in kustomization.yaml
files. It was
decided not to rely on this feature but instead attach labels manually to resources right before sending them to
Kubernetes. This is due to an implementation detail in
kustomize which causes commonLabels
to also be applied to label selectors, which makes otherwise editable resources
read-only when it comes to commonLabels
.
commonAnnotations
A dictionary of annotations and values to be
added to all resources deployed by any of the deployment items in this deployment project.
commonAnnotations
are handled the same as commonLabels in regard to inheriting, merging and overriding.
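Example:
deployments:
  - path: nginx
commonAnnotations:
  my.prefix/deployment-name: my-deployment-project-name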
overrideNamespace
A string that is used as the default namespace for all kustomize deployments which don’t have a namespace
set in their
kustomization.yaml
.
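Example:
deployments:
  - path: nginx
overrideNamespace: my-namespace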
tags (deployment project)
A list of common tags which are applied to all kustomize deployments and sub-deployment includes.
See tags for more details.
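Example (all deployment items below inherit both tags):
deployments:
  - path: nginx
  - include: sub-deployment1
tags:
  - team-a
  - base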
ignoreForDiff
A list of rules used to determine which differences should be ignored in diff outputs.
As an alternative, annotations can be used to control
diff behavior of individual resources.
Consider the following example:
deployments:
- ...
ignoreForDiff:
- kind: Deployment
name: my-deployment
fieldPath: spec.replicas
This will ignore differences for the spec.replicas
field in the Deployment
with the name my-deployment
.
Using regular expressions instead of JSON Paths is also supported:
deployments:
- ...
ignoreForDiff:
- kind: Deployment
name: my-deployment
fieldPathRegex: metadata.labels.my-label-.*
The following properties are supported in ignoreForDiff
items.
fieldPath
If specified, must be a valid JSON Path. Kluctl will ignore differences for
all matching fields of all matching objects (see the other properties).
Either fieldPath
or fieldPathRegex
must be provided.
fieldPathRegex
If specified, must be a valid regex. Kluctl will ignore differences for all matching fields of all matching objects
(see the other properties).
Either fieldPath
or fieldPathRegex
must be provided.
group
This property is optional. If specified, only objects with a matching api group will be considered. Please note that this
field should NOT include the version of the api group.
kind
This property is optional. If specified, only objects with a matching kind
will be considered.
namespace
This property is optional. If specified, only objects with a matching namespace
will be considered.
name
This property is optional. If specified, only objects with a matching name
will be considered.
conflictResolution
A list of rules used to determine how to handle conflict resolution.
As an alternative, annotations can be used to control
conflict resolution of individual resources.
Consider the following example:
deployments:
- ...
conflictResolution:
- kind: ValidatingWebhookConfiguration
fieldPath: webhooks.*.*
action: ignore
This will cause Kluctl to ignore conflicts on all matching fields of all ValidatingWebhookConfiguration
objects.
Using regular expressions instead of JSON Paths is also supported:
deployments:
- ...
conflictResolution:
- kind: ValidatingWebhookConfiguration
fieldPathRegex: webhooks\..
action: ignore
In some cases, it’s easier to match fields by manager name:
deployments:
- ...
conflictResolution:
- manager: clusterrole-aggregation-controller
action: ignore
- manager: cert-manager-cainjector
action: ignore
The following properties are supported in conflictResolution
items.
fieldPath
If specified, must be a valid JSON Path. Kluctl will ignore conflicts for
all matching fields of all matching objects (see the other properties).
Either fieldPath
, fieldPathRegex
or manager
must be provided.
fieldPathRegex
If specified, must be a valid regex. Kluctl will ignore conflicts for all matching fields of all matching objects
(see the other properties).
Either fieldPath
, fieldPathRegex
or manager
must be provided.
manager
If specified, must be a valid regex. Kluctl will ignore conflicts for all fields that currently have a matching field
manager assigned. This is useful if a mutating webhook or controller is known to modify fields after they have been
applied.
Either fieldPath
, fieldPathRegex
or manager
must be provided.
action
This field is required and must be either ignore
or force-apply
.
group
This property is optional. If specified, only objects with a matching api group will be considered. Please note that this
field should NOT include the version of the api group.
kind
This property is optional. If specified, only objects with a matching kind
will be considered.
namespace
This property is optional. If specified, only objects with a matching namespace
will be considered.
name
This property is optional. If specified, only objects with a matching name
will be considered.
2 - Kustomize Integration
How Kustomize is integrated into Kluctl
kluctl uses kustomize to render final resources. This means, that the finest/lowest
level in kluctl is represented with kustomize deployments. These kustomize deployments can then perform further
customization, e.g. patching and more. You can also use kustomize to easily generate ConfigMaps or secrets from files.
Generally, everything that is possible via kustomization.yaml
is thus also possible in kluctl.
We advise to read the kustomize
reference. You can also look into the official kustomize
example.
Using the Kustomize Integration
Please refer to the Kustomize Deployment Item documentation for details.
3 - Container Images
Dynamic configuration of container images.
There are usually 2 different scenarios where Container Images need to be specified:
- When deploying third party applications like nginx, redis, … (e.g. via the Helm integration).
- In this case, image versions/tags rarely change, and if they do, this is an explicit change to the deployment. This
means it’s fine to have the image versions/tags directly in the deployment manifests.
- When deploying your own applications.
- In this case, image versions/tags might change very rapidly, sometimes multiple times per hour. Having these
versions/tags directly in the deployment manifests can easily lead to commit spam and hard to manage
multi-environment deployments.
kluctl offers a better solution for the second case.
images.get_image()
This is solved via a templating function that is available in all templates/resources. The function is part of the global
images
object and expects the following arguments:
images.get_image(image)
- image
- The image name/repository. It is looked up in the list of fixed images.
The function will look up the given image in the list of fixed images and return the last match.
Example deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
spec:
containers:
- name: c1
image: "{{ images.get_image('registry.gitlab.com/my-group/my-project') }}"
Fixed images
Fixed images can be configured via multiple methods:
- Command line argument
--fixed-image
- Command line argument
--fixed-images-file
- Target definition
- Global ‘images’ variable
Command line argument --fixed-image
You can pass fixed images configuration via the --fixed-image
argument.
Due to environment variables support in the CLI, you can also use the
environment variable KLUCTL_FIXED_IMAGE_XXX
to configure fixed images.
The format of the --fixed-image
argument is --fixed-image image<:namespace:deployment:container>=result
. The simplest
example is --fixed-image registry.gitlab.com/my-group/my-project=registry.gitlab.com/my-group/my-project:1.1.2
.
Command line argument --fixed-images-file
You can also configure fixed images via a yaml file by using --fixed-images-file /path/to/fixed-images.yaml.
Example file:
images:
- image: registry.gitlab.com/my-group/my-project
resultImage: registry.gitlab.com/my-group/my-project:1.1.2
The file must contain a single root list named images
with each entry having the following form:
images:
- image: <image_name>
resultImage: <result_image>
# optional fields
namespace: <namespace>
deployment: <kind>/<name>
container: <name>
image
(or imageRegex
) and resultImage
are required. All the other fields are optional and allow you to specify in detail for which
objects the fixed image applies.
You can also specify a regex for the image name:
images:
- imageRegex: registry\.gitlab\.com/my-group/.*
resultImage: <result_image>
# optional fields
namespace: <namespace>
deployment: <kind>/<name>
container: <name>
Target definition
The target definition can optionally specify an images
field that can
contain the same fixed images configuration as found in the --fixed-images-file
file.
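A sketch of how this can look inside the .kluctl.yaml target definition (the target name and context are placeholders):
targets:
  - name: prod
    context: prod-cluster
    images:
      - image: registry.gitlab.com/my-group/my-project
        resultImage: registry.gitlab.com/my-group/my-project:1.1.2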
Global ‘images’ variable
You can also define a global variable named images
via one of the variable sources.
This variable must be a list of the same format as the images list in the --fixed-images-file
file.
This option allows you to externalize fixed images configuration, meaning that you can maintain image versions outside
the deployment project, e.g. in another Git repository.
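For example, a variables file loaded via a file or git variable source could define:
images:
  - image: registry.gitlab.com/my-group/my-project
    resultImage: registry.gitlab.com/my-group/my-project:1.1.2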
4 - Helm Integration
How Helm is integrated into Kluctl.
kluctl offers a simple-to-use Helm integration, which allows you to reuse many common third-party Helm Charts.
The integration is split into 2 parts/steps/layers. The first is the management and pulling of the Helm Charts, while
the second part handles configuration/customization and deployment of the chart.
It is recommended to pre-pull Helm Charts with kluctl helm-pull
, which will store the
pulled charts inside .helm-charts
of the project directory. It is however also possible (but not
recommended) to skip the pre-pulling phase and let kluctl pull Charts on-demand.
When pre-pulling Helm Charts, you can also add the resulting Chart contents into version control. This is actually
recommended as it ensures that the deployment will always behave the same. It also allows pull-request based reviews
on third-party Helm Charts.
How it works
Helm charts are not directly installed via Helm. Instead, kluctl renders the Helm Chart into a single file and then
hands over the rendered yaml to kustomize. Rendering is done in combination with a provided
helm-values.yaml
, which contains the necessary values to configure the Helm Chart.
The resulting rendered yaml is then referred by your kustomization.yaml
, from which point on the
kustomize integration
takes over. This means, that you can perform all desired customization (patches, namespace override, …) as if you
provided your own resources via yaml files.
Helm hooks
Helm Hooks are implemented by mapping them
to kluctl hooks, based on the following mapping table:
Helm hook | kluctl hook
---|---
pre-install | pre-deploy-initial
post-install | post-deploy-initial
pre-delete | Not supported
post-delete | Not supported
pre-upgrade | pre-deploy-upgrade
post-upgrade | post-deploy-upgrade
pre-rollback | Not supported
post-rollback | Not supported
test | Not supported
Please note that this is a best effort approach and not 100% compatible with how Helm would run hooks.
helm-chart.yaml
The helm-chart.yaml
defines where to get the chart from, which version should be pulled, the rendered output file name,
and a few more Helm options. After this file is added to your project, you need to invoke the helm-pull
command
to pull the Helm Chart into your local project. It is advised to put the pulled Helm Chart into version control, so
that deployments will always be based on the exact same Chart (Helm does not guarantee this when pulling).
Example helm-chart.yaml
:
helmChart:
repo: https://charts.bitnami.com/bitnami
chartName: redis
chartVersion: 12.1.1
updateConstraints: ~12.1.0
skipUpdate: false
skipPrePull: false
releaseName: redis-cache
namespace: "{{ my.jinja2.var }}"
output: helm-rendered.yaml # this is optional
When running the helm-pull
command, it will search for all helm-chart.yaml
files in your project and then pull the
chart from the specified repository with the specified version. The pulled chart will then be located in the sub-directory
charts
below the same directory as the helm-chart.yaml file.
must then be referred in a kustomization.yaml
as a normal local
resource. If output
is omitted, the default value helm-rendered.yaml
is used and must also be referenced in
kustomization.yaml
.
helmChart
inside helm-chart.yaml
supports the following fields:
repo
The url to the Helm repository where the Helm Chart is located. You can use hub.helm.sh to search for repositories and
charts and then use the repos found there.
OCI based repositories are also supported, for example:
helmChart:
repo: oci://r.myreg.io/mycharts/pepper
chartVersion: 1.2.3
releaseName: pepper
namespace: pepper
path
As alternative to repo
, you can also specify path
. The path must point to a local Helm Chart that is relative to the
helm-chart.yaml
. The local Chart must reside in your Kluctl project.
When path
is specified, repo
, chartName
, chartVersion
and updateConstraints
are not allowed.
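Example (the chart path is a placeholder and must point to a local chart inside your Kluctl project):
helmChart:
  path: ../charts/my-local-chart
  releaseName: my-release
  namespace: my-namespace
  output: helm-rendered.yaml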
chartName
The name of the chart that can be found in the repository.
chartVersion
The version of the chart. Must be a valid semantic version.
updateConstraints
Specifies version constraints to be used when running helm-update. See
Checking Version Constraints for details on the
supported syntax.
If omitted, Kluctl will filter out pre-releases by default. Use an updateConstraints
like ~1.2.3-0
to enable
pre-releases.
skipUpdate
If set to true
, skip this Helm Chart when the helm-update command is called.
If omitted, defaults to false
.
skipPrePull
If set to true
, skip pre-pulling of this Helm Chart when running helm-pull. This will
also enable pulling on-demand when the deployment project is rendered/deployed.
releaseName
The name of the Helm Release.
namespace
The namespace that this Helm Chart is going to be deployed to. Please note that this should match the namespace
that you’re actually deploying the kustomize deployment to. This means, that either namespace
in kustomization.yaml
or overrideNamespace
in deployment.yaml
should match the namespace given here. The namespace should also be existing
already at the point in time when the kustomize deployment is deployed.
output
This is the file name into which the Helm Chart is rendered. Your kustomization.yaml
should include this same
file. The file should not exist in your project, as it is created on-the-fly while deploying.
skipCRDs
If set to true
, kluctl will pass --skip-crds
to Helm when rendering the deployment. If set to false
(which is
the default), kluctl will pass --include-crds
to Helm.
helm-values.yaml
This file should be present when you need to pass custom Helm values to Helm while rendering the deployment. Please
read the documentation of the used Helm Charts for details on what is supported.
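For the redis example above, a helm-values.yaml could look like this (the keys shown are purely illustrative; which
values are valid is defined by the chart itself):
master:
  persistence:
    enabled: false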
Updates to helm-charts
In case a Helm Chart needs to be updated, you can either do this manually by replacing the chartVersion
value in helm-chart.yaml
and the calling the helm-pull command or by simply invoking
helm-update with --upgrade
and/or --commit
being set.
Private Repositories
It is also possible to use private chart repositories and private OCI registries. There are multiple options to
provide credentials to Kluctl.
Use helm repo add --username xxx --password xxx
before
Kluctl will try to find known repositories that are managed by the Helm CLI and then try to reuse the credentials of
these. The repositories are identified by the URL of the repository, so it doesn’t matter what name you used when you
added the repository to Helm. The same method can be used for client certificate based authentication (--key-file
in helm repo add
).
Use helm registry login --username xxx --password xxx
for OCI registries
The same as for helm repo add
applies here, except that authentication entries are matched by hostname.
Use docker login
for OCI registries
Kluctl tries to use credentials stored in $HOME/.docker/config.json
as well, so
docker login
will also allow Kluctl to authenticate
against OCI registries.
Use the --helm-xxx and --registry-xxx arguments of Kluctl sub-commands
All commands that interact with Helm Chart repositories and OCI registries support the
helm arguments and registry arguments
to specify authentication per repository and/or OCI registry.
⚠️DEPRECATION WARNING ⚠️
Previous versions (prior to v2.22.0) of Kluctl supported managing Helm credentials via credentialsId
in helm-chart.yaml
.
This is deprecated now and will be removed in the future. Please switch to hostname/registry-name based authentication
instead. See helm arguments for details.
Use environment variables to specify authentication
You can also use environment variables to specify Helm Chart repository authentication. For OCI based registries, see
OCI authentication for details.
The following environment variables are supported:
- KLUCTL_HELM_HOST: Specifies the host name of the repository to match before the specified credentials are considered.
- KLUCTL_HELM_PATH: Specifies the path to match before the specified credentials are considered. If omitted, credentials are applied to all matching hosts. Can contain wildcards.
- KLUCTL_HELM_USERNAME: Specifies the username.
- KLUCTL_HELM_PASSWORD: Specifies the password.
- KLUCTL_HELM_INSECURE_SKIP_TLS_VERIFY: If set to true, Kluctl will skip TLS verification for matching repositories.
- KLUCTL_HELM_PASS_CREDENTIALS_ALL: If set to true, Kluctl will instruct Helm to pass credentials to all domains. See https://helm.sh/docs/helm/helm_repo_add/ for details.
- KLUCTL_HELM_CERT_FILE: Specifies the client certificate to use while connecting to the matching repository.
- KLUCTL_HELM_KEY_FILE: Specifies the client key to use while connecting to the matching repository.
- KLUCTL_HELM_CA_FILE: Specifies the CA bundle to use for TLS/https verification.
Multiple credential sets can be specified by including an index in the environment variable names, e.g.
KLUCTL_HELM_1_HOST=host.org
, KLUCTL_HELM_1_USERNAME=my-user
and KLUCTL_HELM_1_PASSWORD=my-password
will apply
the given credential to all repositories with the host host.org
, while KLUCTL_HELM_2_HOST=other.org
,
KLUCTL_HELM_2_USERNAME=my-other-user
and KLUCTL_HELM_2_PASSWORD=my-other-password
will apply the other credentials
to the other.org
repository.
Credentials when using the kluctl-controller
In case you want to use the same Kluctl deployment via the kluctl-controller, you have to
configure Helm and OCI credentials via spec.credentials
.
Templating
Both helm-chart.yaml
and helm-values.yaml
are rendered by the templating engine before they
are actually used. This means, that you can use all available Jinja2 variables at that point, which can for example be
seen in the above helm-chart.yaml
example for the namespace.
There is however one exception that leads to a small limitation. When helm-pull
reads the helm-chart.yaml
, it does
NOT render the file via the templating engine. This is because it cannot know how to properly render the template, as it
has no information about targets (there are no -t
arguments set) at that point.
This exception leads to the limitation that the helm-chart.yaml
MUST be valid yaml even in case it is not rendered
via the templating engine. This makes using control statements (if/for/…) impossible in this file. It also makes it
a requirement to use quotes around values that contain templates (e.g. the namespace in the above example).
helm-values.yaml
is not subject to these limitations as it is only interpreted while deploying.
5 - OCI Support
OCI Support in Kluctl
Kluctl provides OCI support in multiple places. See the following sections for details.
Helm OCI based registries
Kluctl fully supports OCI based Helm registries in
the Helm integration.
OCI includes
Kluctl can include sub-deployments from OCI artifacts via OCI includes.
These artifacts can be pushed via the kluctl oci push sub-command.
Authentication
Private registries are supported as well. To authenticate to these, use one of the following methods.
Authenticate via --registry-xxx
arguments
All commands that interact with OCI registries support the
registry arguments to specify authentication per OCI registry.
Authenticate via docker login
Kluctl tries to use credentials stored in $HOME/.docker/config.json
as well, so
docker login
will also allow Kluctl to authenticate
against OCI registries.
Use environment variables to specify authentication
You can also use environment variables to specify OCI authentication.
The following environment variables are supported:
- KLUCTL_REGISTRY_HOST: Specifies the registry host name to match before the specified credentials are considered.
- KLUCTL_REGISTRY_REPOSITORY: Specifies the repository name to match before the specified credentials are considered. The repository name can contain the organization name, which defaults to library if omitted. Can contain wildcards.
- KLUCTL_REGISTRY_USERNAME: Specifies the username.
- KLUCTL_REGISTRY_PASSWORD: Specifies the password.
- KLUCTL_REGISTRY_IDENTITY_TOKEN: Specifies the identity token used for authentication.
- KLUCTL_REGISTRY_TOKEN: Specifies the bearer token used for authentication.
- KLUCTL_REGISTRY_INSECURE_SKIP_TLS_VERIFY: If set to true, Kluctl will skip TLS verification for matching registries.
- KLUCTL_REGISTRY_PLAIN_HTTP: If set to true, forces the use of http (no TLS).
- KLUCTL_REGISTRY_CERT_FILE: Specifies the client certificate to use while connecting to the matching repository.
- KLUCTL_REGISTRY_KEY_FILE: Specifies the client key to use while connecting to the matching repository.
- KLUCTL_REGISTRY_CA_FILE: Specifies the CA bundle to use for TLS/https verification.
Multiple credential sets can be specified by including an index in the environment variable names, e.g.
KLUCTL_REGISTRY_1_HOST=host.org
, KLUCTL_REGISTRY_1_USERNAME=my-user
and KLUCTL_REGISTRY_1_PASSWORD=my-password
will apply
the given credential to all registries with the host host.org
, while KLUCTL_REGISTRY_2_HOST=other.org
,
KLUCTL_REGISTRY_2_USERNAME=my-other-user
and KLUCTL_REGISTRY_2_PASSWORD=my-other-password
will apply the other credentials
to the other.org
registry.
Credentials when using the kluctl-controller
In case you want to use the same Kluctl deployment via the kluctl-controller, you have to
configure OCI credentials via spec.credentials
.
6 - SOPS Integration
How SOPS is integrated into Kluctl
Kluctl integrates natively with SOPS. Kluctl is able to decrypt all resources
referenced by Kustomize deployment items (including simple deployments).
In addition, Kluctl will also decrypt all variable sources of the types file
and git.
Kluctl assumes that you have set up sops as usual so that it knows how to decrypt these files.
Only encrypting Secrets’ data
To only encrypt the data
and stringData
fields of Kubernetes secrets, use a .sops.yaml
configuration file that uses
encrypted_regex
to filter encrypted fields:
creation_rules:
- path_regex: .*.yaml
encrypted_regex: ^(data|stringData)$
Combining templating and SOPS
As an alternative, you can split secret values and the resulting Kubernetes resources into two different places and then
use templating to use the secret values wherever needed. Example:
Write the following content into secrets/my-secrets.yaml
:
secrets:
mySecret: secret-value
And encrypt it with SOPS:
$ sops -e -i secrets/my-secrets.yaml
Add this variables source to one of your deployments:
vars:
- file: secrets/my-secrets.yaml
deployments:
- ...
Then, in one of your deployment items define the following Secret
:
apiVersion: v1
kind: Secret
metadata:
name: my-secret
namespace: default
stringData:
secret: "{{ secrets.mySecret }}"
7 - Hooks
Kluctl hooks.
Kluctl supports hooks in a similar fashion as known from Helm Charts. Hooks are executed/deployed before and/or after the
actual deployment of a kustomize deployment.
To mark a resource as a hook, add the kluctl.io/hook
annotation to a resource. The value of the annotation must be
a comma separated list of hook names. Possible values are described in the next chapter.
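Example (a hypothetical migration Job that runs before every deployment; the hook names used here are described in the table below):
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrations
  annotations:
    kluctl.io/hook: pre-deploy-initial,pre-deploy-upgrade
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-registry/my-migrations:latest # hypothetical image
          command: ["./migrate"]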
Hook types
Hook Type | Description
---|---
pre-deploy-initial | Executed right before the initial deployment is performed.
post-deploy-initial | Executed right after the initial deployment is performed.
pre-deploy-upgrade | Executed right before a non-initial deployment is performed.
post-deploy-upgrade | Executed right after a non-initial deployment is performed.
pre-deploy | Executed right before any (initial and non-initial) deployment is performed.
post-deploy | Executed right after any (initial and non-initial) deployment is performed.
A deployment is considered to be an “initial” deployment if none of the resources related to the current kustomize
deployment are found on the cluster at the time of deployment.
If you need to execute hooks for every deployment, independent of its “initial” state, use
pre-deploy-initial,pre-deploy
to indicate that it should be executed all the time.
Hook deletion
Hook resources are by default deleted right before creation (if they already existed before). This behavior can be
changed by setting the kluctl.io/hook-delete-policy annotation
to a comma separated list of the following values:
Policy | Description
---|---
before-hook-creation | The default behavior, which means that the hook resource is deleted right before (re-)creation.
hook-succeeded | Delete the hook resource directly after it got “ready”.
hook-failed | Delete the hook resource when it failed to get “ready”.
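Example (delete the hook right before re-creation and additionally as soon as it became ready):
metadata:
  annotations:
    kluctl.io/hook: post-deploy
    kluctl.io/hook-delete-policy: before-hook-creation,hook-succeeded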
Hook readiness
After each deployment/execution of the hooks that belong to a deployment stage (before/after deployment), kluctl
waits for the hook resources to become “ready”. Readiness is defined here.
It is possible to disable waiting for hook readiness by setting the annotation kluctl.io/hook-wait
to “false”.
Hook Annotations
More control over hook behavior can be configured using additional annotations as described in annotations/hooks
8 - Readiness
Definition of readiness.
There are multiple places where kluctl can wait for “readiness” of resources, e.g. for hooks or when waitReadiness
is
specified on a deployment item. Readiness depends on the resource kind, e.g. for a Job, kluctl would wait until it
finishes successfully.
Control via Annotations
Multiple annotations control the behaviour when waiting for readiness of resources, e.g. kluctl.io/wait-readiness
and kluctl.io/is-ready. These are described in the annotations documentation.
9 - Tags
Every kustomize deployment has a set of tags assigned to it. These tags are defined in multiple places, which is
documented in deployment.yaml. Look for the tags
field, which is available in multiple places per
deployment project.
Tags are useful when only one or more specific kustomize deployments need to be deployed or deleted.
deployment items in deployment projects can have an optional list of tags assigned.
If this list is completely omitted, one single entry is added by default. This single entry equals the last element
of the path
in the deployments
entry.
Consider the following example:
deployments:
- path: nginx
- path: some/subdir
In this example, two kustomize deployments are defined. The first would get the tag nginx
while the second
would get the tag subdir
.
In most cases this heuristic is enough to get proper tags with which you can work. It might however lead to strange
or even conflicting tags (e.g. subdir
is really a bad tag), in which case you’d have to explicitly set tags.
Tag inheritance
Deployment projects and deployment items inherit the tags of their parents. For example, if a deployment project
has a tags property defined, all deployments
entries would
inherit all these tags. Also, the sub-deployment projects included via deployment items of type
include inherit the tags of the deployment project. These included sub-deployments also
inherit the tags specified by the deployment item itself.
Consider the following example deployment.yaml
:
deployments:
- include: sub-deployment1
tags:
- tag1
- tag2
- include: sub-deployment2
tags:
- tag3
- tag4
- include: subdir/subsub
Any kustomize deployment found in sub-deployment1
would now inherit tag1
and tag2
. If sub-deployment1
performs
any further includes, these would also inherit these two tags. Inheriting is additive and recursive.
The last sub-deployment project in the example is subject to the same default-tags logic as described
in Default tags, meaning that it will get the default tag subsub
.
Deploying with tag inclusion/exclusion
Special care needs to be taken when trying to deploy only a specific part of your deployment which requires some base
resources to be deployed as well.
Imagine a large deployment is able to deploy 10 applications, but you only want to deploy one of them. When using tags
to achieve this, there might be some base resources (e.g. Namespaces) which are needed no matter if everything or just
this single application is deployed. In that case, you’d need to set alwaysDeploy
to true
.
Deleting with tag inclusion/exclusion
Also, in most cases, even more special care has to be taken for the same types of resources as described before.
Imagine a kustomize deployment being responsible for deploying namespaces. If you now want to delete everything except
deployments that have the persistency
tag assigned, the exclusion logic would NOT exclude deletion of the namespace.
This would ultimately lead to everything being deleted, and the exclusion tag having no effect.
In such a case, you’d need to set skipDeleteIfTags to true
as well.
In most cases, setting alwaysDeploy
to true
also requires setting skipDeleteIfTags
to true
.
10 - Annotations
Annotations usable in Kubernetes resources.
10.1 - All resources
Annotations on all resources
The following annotations control the behavior of the deploy
and related commands.
Control deploy behavior
The following annotations control deploy behavior, especially in regard to conflict resolution.
kluctl.io/delete
If set to “true”, the resource will be deleted at deployment time. Kluctl will not emit an error in case the resource
does not exist. A resource with this annotation does not have to be complete/valid as it is never sent to the Kubernetes
api server.
kluctl.io/force-apply
If set to “true”, the whole resource will be force-applied, meaning that all fields will be overwritten in case of
field manager conflicts.
As an alternative, conflict resolution can be controlled via conflictResolution.
kluctl.io/force-apply-field
Specifies a JSON Path for fields that should be force-applied. Matching
fields will be overwritten in case of field manager conflicts.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
As an alternative, conflict resolution can be controlled via conflictResolution.
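Example (the field paths are illustrative; add -xxx suffixes when more than one path is needed):
metadata:
  annotations:
    kluctl.io/force-apply-field: spec.replicas
    kluctl.io/force-apply-field-2: spec.template.spec.containers.*.image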
kluctl.io/force-apply-manager
Specifies a regex for managers that should be force-applied. Fields with matching managers will be overwritten in
case of field manager conflicts.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
As an alternative, conflict resolution can be controlled via conflictResolution.
kluctl.io/ignore-conflicts
If set to “true”, all fields of the object are going to be ignored when conflicts arise.
This effectively disables the warnings that are shown when field ownership is lost.
As an alternative, conflict resolution can be controlled via conflictResolution.
kluctl.io/ignore-conflicts-field
Specifies a JSON Path for fields that should be ignored when conflicts arise.
This effectively disables the warnings that are shown when field ownership is lost.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
As an alternative, conflict resolution can be controlled via conflictResolution.
kluctl.io/ignore-conflicts-manager
Specifies a regex for field managers that should be ignored when conflicts arise.
This effectively disables the warnings that are shown when field ownership is lost.
If more than one manager needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
As an alternative, conflict resolution can be controlled via conflictResolution.
kluctl.io/wait-readiness
If set to true
, kluctl will wait for readiness of this object. Readiness is defined
the same as in hook readiness. Waiting happens after all resources from the parent
deployment item have been applied.
kluctl.io/is-ready
If set to true
, kluctl will always consider this object as ready. If set to false
,
kluctl will always consider this object as not ready. If omitted, kluctl will perform normal readiness checks.
This annotation is useful if you need to introduce externalized readiness determination, e.g. a non-hook Pod
that annotates an object once something became ready.
Control deletion/pruning
The following annotations control how delete/prune is behaving.
kluctl.io/skip-delete
If set to “true”, the annotated resource will not be deleted when delete or
prune is called.
kluctl.io/skip-delete-if-tags
If set to “true”, the annotated resource will not be deleted when delete or
prune is called and inclusion/exclusion tags are used at the same time.
This annotation is especially useful and required on resources that would otherwise cause cascaded deletions of resources that
do not match the specified inclusion/exclusion tags. Namespaces are the most prominent example of such resources, as
they most likely don’t match exclusion tags, but cascaded deletion would still cause deletion of the excluded resources.
kluctl.io/force-managed
If set to “true”, Kluctl will always treat the annotated resource as being managed by Kluctl, meaning that it will
consider it for deletion and pruning even if a foreign field manager resets/removes the Kluctl field manager or if
foreign controllers add ownerReferences
even though they do not really own the resources.
Control diff behavior
The following annotations control how diffs are performed.
kluctl.io/diff-name
This annotation will override the name of the object when looking for the in-cluster version of an object used for
diffs. This is useful when you are forced to use new names for the same objects whenever the content changes, e.g.
for all kinds of immutable resource types.
Example (filename job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
name: myjob-{{ load_sha256("job.yaml", 6) }}
annotations:
kluctl.io/diff-name: myjob
spec:
template:
spec:
containers:
- name: hello
image: busybox
command: ["sh", "-c", "echo hello"]
restartPolicy: Never
Without the kluctl.io/diff-name
annotation, any change to the job.yaml
would be treated as a new object in resulting
diffs from various commands. This is due to the inclusion of the file hash in the job name. This would make it very hard
to figure out what exactly changed in an object.
With the kluctl.io/diff-name
annotation, kluctl will pick an existing job from the cluster with the same diff-name
and use it for the diff, making it a lot easier to analyze changes. If multiple objects match, the one with the youngest
creationTimestamp
is chosen.
Please note that this will not cause old objects (with the same diff-name) to be pruned. You still have to regularly
prune the deployment.
kluctl.io/ignore-diff
If set to “true”, the whole resource will be ignored while calculating diffs.
kluctl.io/ignore-diff-field
Specifies a JSON Path for fields that should be ignored while calculating
diffs.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
kluctl.io/ignore-diff-field-regex
Same as kluctl.io/ignore-diff-field but specifying a regular expression instead of a
JSON Path.
If more than one field needs to be specified, add -xxx
to the annotation key, where xxx
is an arbitrary number.
10.2 - Hooks
Annotations on hooks
The following annotations control hook execution.
See hooks for more details.
kluctl.io/hook
Declares a resource to be a hook, which is deployed/executed as described in hooks. The value of the
annotation determines when the hook is deployed/executed.
kluctl.io/hook-weight
Specifies a weight for the hook, used to determine deployment/execution order. For resources with the same kluctl.io/hook
annotation, hooks are executed in ascending order based on hook-weight.
kluctl.io/hook-delete-policy
Defines when to delete the hook resource.
kluctl.io/hook-wait
Defines whether kluctl should wait for hook-completion. It defaults to true
and can be manually set to false
.
10.3 - Validation
Annotations to control validation
The following annotations influence the validate command.
validate-result.kluctl.io/xxx
If this annotation is found on a resource that is checked during validation, the key and the value of the annotation
are added to the validation result, which is then returned by the validate command.
The annotation key is dynamic, meaning that all annotations that begin with validate-result.kluctl.io/
are taken
into account.
kluctl.io/validate-ignore
If this annotation is set to true
, the object will be ignored while kluctl validate
is run.
10.4 - Kustomize
Annotations on the kustomization.yaml resource
Even though the kustomization.yaml
from Kustomize deployments are not really Kubernetes resources (as they are not
really deployed), they have the same structure as Kubernetes resources. This also means that the kustomization.yaml
can define metadata and annotations. Through these annotations, additional behavior on the deployment can be controlled.
Example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
annotations:
kluctl.io/barrier: "true"
kluctl.io/wait-readiness: "true"
resources:
- deployment.yaml
kluctl.io/barrier
If set to true
, kluctl will wait for all previous objects to be applied (but not necessarily ready). This has the
same effect as barrier from deployment projects.
kluctl.io/wait-readiness
If set to true
, kluctl will wait for readiness of all objects from this kustomization project. Readiness is defined
the same as in hook readiness. Waiting happens after all resources from the current
deployment item have been applied.