1 - Microservices Demo

1.1 - 1. Basic Project Setup

Introduction

This is the first tutorial in a series around the GCP Microservices Demo and the use of kluctl to deploy and manage the demo.

We will start with a simple kluctl project setup (this tutorial) and then advance to a multi-environment and multi-cluster setup (upcoming tutorial). Afterwards, we will also show what daily business (updates, housekeeping, …) with such a deployment looks like.

GCP Microservices Demo

From the README.md of GCP Microservices Demo:

Online Boutique is a cloud-native microservices demo application. Online Boutique consists of a 10-tier microservices application. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

This demo application seems to be a good example for a more or less typical application seen on Kubernetes. It has multiple self-developed microservices while also requiring third-party applications/services (e.g. redis) to be deployed and configured properly.

Ways to deploy the demo

The simplest and most naive way to deploy the demo is by using kubectl apply with the provided release manifests:

$ kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml

This is also what is shown in the README.md of the microservices demo.

The shortcomings of this approach are however easy to spot, and probably no one would follow it all the way to production. As an example, updates to the application and its dependencies will be hard to maintain. Housekeeping (deleting orphan resources) will also be hard to achieve. At some point, when you start deploying the application multiple times to different clusters and/or different environments, configuration will also become hard to maintain, as every target might need a different configuration. Long story short: without proper tooling, you’ll easily run into painful limitations.

There are multiple solutions available that each solve parts of these limitations and problems. Helm and Kustomize, for example, are well known. Introducing these tools will easily bring you much further, but you will very likely end up building something complicated/complex around them to make them usable in daily business. In the worst case, you’d start using Bash scripts that orchestrate your deployments.

GitOps oriented solutions like ArgoCD and Flux are able to relieve you from parts of the deployment orchestration tasks, but bring in new complexities that need to be solved as well.

Deploying with kluctl

In this tutorial, we’ll show how the microservices demo can be deployed and managed with kluctl. We will start with a simple and naive deployment to a local kind cluster. The next tutorial in this series will then focus on making the deployment multi-environment and multi-cluster capable.

The goal is to make a deployment as simple as typing:

$ kluctl deploy -t local

Setting up the kluctl project

The first thing you need is an empty project directory and the .kluctl.yml project configuration:

$ mkdir -p microservices-demo/1-basic-setup
$ cd microservices-demo/1-basic-setup

Inside this new directory, create the file .kluctl.yml with the following content:

targets:
  - name: local
    context: kind-kind

This is a very simple example with only a single target, being a local kind cluster.

You might have noticed that the target configuration refers to a kubectl context that does not exist yet. It’s time to create a local kind cluster now. To do so, first ensure that you have kind installed, then run:

$ kind create cluster

After this, you should have a local cluster setup and your kubeconfig prepared with a new context named kind-kind.
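
If you want to double-check this, list the available contexts; kind-kind should show up:

$ kubectl config get-contexts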

Setting up a minimal deployment project

Inside the kluctl project, you will now have to create a minimal deployment project. The deployment project starts with the root deployment.yml.

The location of this deployment.yml is the same as that of the .kluctl.yml. Create the file with the following content:

deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "microservices-demo"

This minimal deployment project contains two elements:

  1. The list of deployment items, which currently only consists of the upcoming redis deployment. The next chapter will explain this deployment.
  2. The commonLabels, which is a map of common labels and values. These labels are applied to all deployed resources and are later used by kluctl to identify resources that belong to this kluctl deployment.
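
Once the deployment has been performed (see below), these commonLabels also let you query everything belonging to this project with plain kubectl, for example:

$ kubectl get deployments,services -l examples.kluctl.io/deployment-project=microservices-demo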

Setting up the redis deployment

As seen in the previous chapter, the root deployment.yml refers to a redis deployment item. This deployment item must be located inside the sub-folder redis (hence the path: redis). kluctl expects each deployment item to be a kustomize deployment. Such a kustomize deployment can be as simple as a kustomization.yml with a single resources entry or a fully fledged kustomize deployment with overlays, generators, and so on.

For our example, first create the sub-directory redis:

$ mkdir redis

Then create the file redis/kustomization.yml with the following content:

resources:
  - deployment.yml
  - service.yml

Then create the file redis/deployment.yml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
        readinessProbe:
          periodSeconds: 5
          tcpSocket:
            port: 6379
        livenessProbe:
          periodSeconds: 5
          tcpSocket:
            port: 6379
        volumeMounts:
        - mountPath: /data
          name: redis-data
        resources:
          limits:
            memory: 256Mi
            cpu: 125m
          requests:
            cpu: 70m
            memory: 200Mi
      volumes:
      - name: redis-data
        emptyDir: {}

And the file redis/service.yml:

apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
  - name: redis
    port: 6379
    targetPort: 6379

The above files (deployment.yml and service.yml) are based on the content of redis.yaml from the original GCP Microservices Demo.
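
At this point, the project directory should look like this:

microservices-demo/1-basic-setup/
├── .kluctl.yml
├── deployment.yml
└── redis/
    ├── deployment.yml
    ├── kustomization.yml
    └── service.yml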

As you can see, there is nothing special about the contents of these files so far: they are plain Kubernetes resources in YAML. The full potential of kluctl will become clear later, when we start to use templating inside these files. Only with templating will it become possible to support multi-environment and multi-cluster deployments.

Setting up the first microservice

Now it’s time to set up the first microservice. This is done the same way as we already set up the redis deployment.

First, create the sub-directory cartservice at the same level as you created the redis sub-directory. Then create the following files.

Another kustomization.yml

resources:
  - deployment.yml
  - service.yml

Another deployment.yml, with the content found here

Another service.yml, with the content found here

Finally add the new deployment item to the root deployment.yml

...
deployments:
  ...
  # add this line
  - path: cartservice
...

Setting up all other microservices

The GCP Microservices Demo is composed of multiple other services, which can be set up the same way as the microservice shown before. You can do this yourself, or alternatively switch to the completed example found here.

From now on, we will assume that all microservices have been added (or that you switched to the example project).

Deploy it!

We now have a minimal kluctl project with a set of simple kustomize deployments. It’s time to deploy it. From inside the kluctl project directory, call:

$ kluctl deploy -t local
INFO[0000] Rendering templates and Helm charts          
INFO[0000] Building kustomize objects                   
Do you really want to deploy to the context/cluster kind-kind? (y/N) y
INFO[0001] Getting remote objects by commonLabels       
INFO[0001] Getting 24 additional remote objects         
INFO[0001] Running server-side apply for all objects    
INFO[0001] shippingservice: Applying 2 objects          
INFO[0001] paymentservice: Applying 2 objects           
INFO[0001] currencyservice: Applying 2 objects          
INFO[0001] frontend: Applying 3 objects                 
INFO[0001] loadgenerator: Applying 1 objects            
INFO[0001] recommendationservice: Applying 2 objects    
INFO[0001] productcatalogservice: Applying 2 objects    
INFO[0001] adservice: Applying 2 objects                
INFO[0001] cartservice: Applying 2 objects              
INFO[0001] emailservice: Applying 2 objects             
INFO[0001] checkoutservice: Applying 2 objects          
INFO[0001] redis: Applying 2 objects                    

New objects:
  default/Deployment/adservice
  default/Deployment/cartservice
  default/Deployment/checkoutservice
  default/Deployment/currencyservice
  default/Deployment/emailservice
  default/Deployment/frontend
  default/Deployment/loadgenerator
  default/Deployment/paymentservice
  default/Deployment/productcatalogservice
  default/Deployment/recommendationservice
  default/Deployment/redis-cart
  default/Deployment/shippingservice
  default/Service/adservice
  default/Service/cartservice
  default/Service/checkoutservice
  default/Service/currencyservice
  default/Service/emailservice
  default/Service/frontend
  default/Service/frontend-external
  default/Service/paymentservice
  default/Service/productcatalogservice
  default/Service/recommendationservice
  default/Service/redis-cart
  default/Service/shippingservice

The -t local selects the local target which was previously defined in the .kluctl.yml. Right now we only have this one target, but we will add more targets in upcoming tutorials from this series.

Answer with y when asked whether you really want to deploy. The command will output what is happening and then show what has been changed on the target.

Playing around

You have now deployed redis and all the microservices. You can now start to play around with some other kluctl commands. For example, try to change something inside cartservice/deployment.yml (e.g. set terminationGracePeriodSeconds to 10) and then run kluctl diff -t local:

$ kluctl diff -t local
INFO[0000] Rendering templates and Helm charts          
...

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+--------------------------------------------------+---------------------------+
| Path                                             | Diff                      |
+--------------------------------------------------+---------------------------+
| spec.template.spec.terminationGracePeriodSeconds | -5                        |
|                                                  | +10                       |
+--------------------------------------------------+---------------------------+

As you can see, kluctl now shows you what will happen. If you’d now perform a kluctl deploy -t local, kluctl would output what has happened (which would be the same as in the diff as long as you don’t change anything else).

If you now remove (or at least comment out) a microservice, e.g. the cartservice, and then run kluctl diff -t local again, you will get:

$ kluctl diff -t local
INFO[0000] Rendering templates and Helm charts          
...

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+--------------------------------------------------+---------------------------+
| Path                                             | Diff                      |
+--------------------------------------------------+---------------------------+
| spec.template.spec.terminationGracePeriodSeconds | -5                        |
|                                                  | +10                       |
+--------------------------------------------------+---------------------------+

Orphan objects:
  default/Service/cartservice
  default/Deployment/cartservice

As you can see, the resources belonging to cartservice are now listed as “Orphan objects”, meaning that they are not found locally anymore. A kluctl prune -t local would then give:

$ kluctl prune -t local
INFO[0000] Rendering templates and Helm charts          
...
Do you really want to delete 2 objects? (y/N) y

Deleted objects:
  default/Service/cartservice
  default/Deployment/cartservice

How to continue

The result of this tutorial is a naive version of the microservices demo deployment. There are a few things that you would solve differently in the real world, e.g. using Helm Charts for things like redis instead of providing self-crafted manifests. The next tutorials in this series will focus on a few improvements and refactorings that will make this kluctl project more “realistic” and more useful. They will also introduce concepts like multi-environment and multi-cluster deployments.

1.2 - 2. Helm Integration

Introduction

The first tutorial in this series demonstrated how to setup a simple kluctl project that is able to deploy the GCP Microservices Demo to a local kind cluster.

This initial kluctl project was however quite naive and too simple to be in any way realistic. For example, the project structure is too flat and will likely result in chaos when the project grows. Also, the project used self-crafted manifests where it might have been better to reuse feature-rich Helm Charts. We will fix both issues in this tutorial.

How to start

This tutorial is based on the results of the first tutorial. As an alternative, you can take the 1-basic-project example project found here and use it as the base to continue with this tutorial.

You can also deploy the base project and then incrementally perform deployments after each step in this tutorial. This way you will also gain some experience and a feeling for how to use kluctl.

A simple refactoring

Let’s start with a simple refactoring. Having all deployment items on the root level will quickly become unmaintainable.

kluctl allows you to structure your project in all kinds of fashions by leveraging sub-deployments. Deployment items in a deployment project can specify includes, which point to a sub-directory containing another deployment.yml.

Let’s split the deployment into third-party applications (currently only redis) and the project-specific microservices. To do this, create the sub-directories third-party and services. Then move the redis directory into third-party and all microservice sub-directories into services:

$ mkdir third-party
$ mkdir services
$ mv redis third-party/
$ mv adservice cartservice checkoutservice currencyservice emailservice \
    frontend loadgenerator paymentservice \
    productcatalogservice recommendationservice shippingservice services/

Now change the deployments list inside the root deployment.yml to:

deployments:
  - include: third-party
  - include: services

Add a deployment.yml with the following content into the third-party sub-directory:

deployments:
  - path: redis

And finally a deployment.yml with the following content into the services sub-directory:

deployments:
  - path: adservice
  - path: cartservice
  - path: checkoutservice
  - path: currencyservice
  - path: emailservice
  - path: frontend
  - path: loadgenerator
  - path: paymentservice
  - path: productcatalogservice
  - path: recommendationservice
  - path: shippingservice

To get an overview of these changes, look into this commit inside the example project belonging to this tutorial.

If you deploy the new state of the project, you’ll notice that only labels will change. These labels are automatically added to all resources and represent the tags of the corresponding deployment items.
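
You can verify this with plain kubectl by printing the labels of one of the deployed resources, for example:

$ kubectl get deployment adservice -o jsonpath='{.metadata.labels}'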

Some notes on project structure

The refactoring from above is meant as an example that demonstrates how sub-deployments can be used to structure your project. Such sub-deployments can also include deeper sub-deployments, allowing you to structure your project in any way and complexity that fits your needs.

Introducing the first Helm Chart

There are many examples where self-crafting of Kubernetes manifests is not the best solution, simply because there is already a large ecosystem of pre-created Kubernetes packages in the form of Helm Charts.

The redis deployment found in the microservices demo is a good example for this, especially as many available Helm Charts offer quite some functionality, for example high availability.

kluctl allows the integration of Helm Charts, which we will use now to replace the self-crafted redis deployment with the Bitnami Redis Chart.

First, create the file third-party/redis/helm-chart.yml with the following content:

helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 16.8.0
  releaseName: cart
  namespace: default
  output: deploy.yml

Most of the above configuration can directly be mapped to Helm invocations (pull, install, …). The output value has a special meaning and must be reflected inside the kustomization.yml resources list. The reason is that kluctl implements the Helm integration by running helm template and writing the result to the file configured via output. After this, kluctl expects kustomize to take over, which requires that the generated file is referenced in kustomization.yml.

To do so, simply replace the content of third-party/redis/kustomization.yml with:

resources:
  - deploy.yml
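
Conceptually, and only as a rough sketch (kluctl performs these steps internally; the exact invocations may differ), the integration behaves similarly to this manual Helm sequence, with kustomize then picking up the rendered deploy.yml:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm template cart bitnami/redis --version 16.8.0 --namespace default > third-party/redis/deploy.yml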

We now need some configuration for the redis chart, which is provided via third-party/redis/helm-values.yml (see https://kluctl.io/docs/kluctl/deployments/helm/#helm-valuesyml):

architecture: replication

auth:
  enabled: false

sentinel:
  enabled: true
  quorum: 2

replica:
  replicaCount: 3
  persistence:
    enabled: true

master:
  persistence:
    enabled: true

The above values configure redis to run in replication mode with sentinel and 3 replicas, giving us some high availability (at least in theory, as we’d still need an HA Kubernetes cluster and proper affinity configuration).
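
The Bitnami chart exposes presets that would help with the affinity part. As an illustrative sketch (an assumption to verify against the chart’s values.yaml for your chart version, not something this tutorial requires), helm-values.yml could be extended with:

replica:
  podAntiAffinityPreset: soft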

The Redis Chart will also deploy a Service resource, but with a different name than the self-crafted version. This means we have to fix the service name in services/cartservice/deployment.yml (look for the environment variable REDIS_ADDR) to point to cart-redis:6379 instead of redis-cart:6379.

You can now remove the old redis related manifests (third-party/redis/deployment.yml and third-party/redis/service.yml).

All the above changes can be found in this commit from the example project.

Pulling Helm Charts

We have now added a Helm Chart to our deployment, but to make it deployable it must be pre-pulled first. kluctl requires Helm Charts to be pre-pulled for multiple reasons, the most important being performance and reproducibility. Performance would suffer significantly if charts had to be pulled on-demand at deployment time. Also, Helm has no mechanism to ensure that a chart pulled yesterday is identical to the chart pulled today, even if the version is unchanged.

To pre-pull the redis Helm Chart, simply call:

$ kluctl helm-pull
INFO[0000] Pulling for third-party/redis/helm-chart.yml

This will pre-pull the chart into the sub-directory third-party/redis/charts. This directory is meant to be added to version control, so that it is always available when deploying.
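
Assuming the project is managed in git, this boils down to something like (the commit message is up to you):

$ git add third-party/redis/charts
$ git commit -m "add pre-pulled redis chart"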

If you ever change the chart version in helm-chart.yml, don’t forget to re-run the above command and commit the resulting changes.

Deploying the current state

It’s time to deploy the current state again:

$ kluctl deploy -t local
INFO[0000] Rendering templates and Helm charts          
...          

New objects:
  default/ConfigMap/cart-redis-configuration
  default/ConfigMap/cart-redis-health
  default/ConfigMap/cart-redis-scripts
  default/Service/cart-redis
  default/Service/cart-redis-headless
  default/ServiceAccount/cart-redis
  default/StatefulSet/cart-redis-node

Changed objects:
  default/Deployment/cartservice

Diff for object default/Deployment/cartservice
+-------------------------------------------------------+------------------------------+
| Path                                                  | Diff                         |
+-------------------------------------------------------+------------------------------+
| spec.template.spec.containers[0].env.REDIS_ADDR.value | -redis-cart:6379             |
|                                                       | +cart-redis:6379             |
+-------------------------------------------------------+------------------------------+

Orphan objects:
  default/Deployment/redis-cart
  default/Service/redis-cart

As you can see, the changes we made to the kluctl project are reflected in the output of the deploy call, meaning that we can see exactly what happened. We see a few new resources which are all redis related, the change of the service name, and the old redis resources being marked as orphan. Let’s get rid of the orphan resources:

$ kluctl prune -t local
INFO[0000] Rendering templates and Helm charts          
INFO[0000] Building kustomize objects                   
INFO[0000] Getting remote objects by commonLabels       
The following objects will be deleted:
  default/Service/redis-cart
  default/Deployment/redis-cart
Do you really want to delete 2 objects? (y/N) y

Deleted objects:
  default/Service/redis-cart
  default/Deployment/redis-cart

You have just performed your first housekeeping, which you’ll probably do quite often from now on in your daily DevOps business.

More housekeeping

As time passes, new versions of the Helm Charts you have integrated will be released, and you might have to keep your deployments up-to-date. The most naive way is to increase the chart version inside helm-chart.yml and then re-run kluctl helm-pull.

As the number of used charts can easily grow to a point where it becomes hard to keep everything up-to-date, kluctl offers a command to support you in this:

$ kluctl helm-update
INFO[0005] Chart third-party/redis/helm-chart.yml has new version 16.8.2 available. Old version is 16.8.0. 

As you can see, it will display charts with new versions. You can also use the same command to actually update the helm-chart.yml files and ultimately commit these to git:

$ kluctl helm-update --upgrade --commit
INFO[0005] Chart third-party/redis/helm-chart.yml has new version 16.8.2 available. Old version is 16.8.0. 
INFO[0005] Pulling for third-party/redis/helm-chart.yml 
INFO[0010] Committing: Updated helm chart third-party/redis from 16.8.0 to 16.8.2

How to continue

After this tutorial, you have hopefully learned how to better structure your projects and how to integrate third-party Helm Charts into your project, including some basic house-keeping tasks.

The next tutorials in this series will show you how to use this kluctl project as a base to implement a multi-environment and multi-cluster deployment.

1.3 - 3. Templating and multi-env deployments

Introduction

The second tutorial in this series demonstrated how to integrate Helm into your deployment project and how to keep things structured.

The project is however still not flexible enough to be deployed multiple times and/or in different flavors. As an example, it doesn’t make much sense to deploy redis with replication on a local cluster, as there can’t be any high availability with a single node. Also, the resource requests currently used are quite demanding for a single-node cluster.

How to start

This tutorial is based on the results of the second tutorial. As an alternative, you can take the 2-helm-integration example project found here and use it as the base to continue with this tutorial.

This time, you should start with a fresh kind cluster. If you are sure that you won’t lose any critical data by deleting the existing cluster, simply run:

$ kind delete cluster
$ kind create cluster

If you’re unsure or if you want to re-use the existing cluster for some reason, you can also simply delete the old deployment:

$ kluctl delete -t local
INFO[0000] Rendering templates and Helm charts
INFO[0000] Building kustomize objects
INFO[0000] Getting remote objects by commonLabels
The following objects will be deleted:
  default/Service/emailservice
  ...
  default/ConfigMap/cart-redis-scripts
Do you really want to delete 29 objects? (y/N) y

Deleted objects:
  default/ConfigMap/cart-redis-scripts
  ...
  default/StatefulSet/cart-redis-node

The reason to start with a fresh deployment is that we will later switch to different namespaces and stop using the default namespace.

Targets

If we want to be able to deploy the project multiple times, we first need multiple targets. Let’s add two targets called test and prod. To do so, modify the content of .kluctl.yml to contain:

targets:
  - name: local
    context: kind-kind
    args:
      env_type: local
  - name: test
    context: kind-kind
    args:
      env_type: real
  - name: prod
    context: kind-kind
    args:
      env_type: real

You might notice that all targets currently point to the kind cluster. This is of course not how you would do it in a real project, as you’d probably have at least one real production-ready cluster to target your deployments against.

We’ve also introduced args for each target, with each target having an env_type argument configured. This argument will later be used to change details of the deployment depending on its value. For example, setting it to local might change the redis deployment into a single-node/standalone deployment.

Dynamic namespaces

One of the most obvious and most useful applications of templating is making namespaces dynamic, depending on the target being deployed. This allows deploying the same set of deployments/manifests multiple times, even to the same cluster.

There are a few predefined variables which are always available in all deployments. One of these variables is the target dictionary, which is a copy of the currently processed target. This means we can use {{ target.name }} to insert the current target name through templating.

There are multiple ways to change the namespaces of involved resources. The most naive way is to go directly into the manifests and add the metadata.namespace field. For example, you could edit services/adservice/deployment.yml this way:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: adservice
  namespace: ms-demo-{{ target.name }}
...

This can however easily lead to resources being missed, or to resources that you don’t control, e.g. rendered Helm Charts. Another way to set the namespace on multiple resources is the namespace property of kustomize. For example, instead of changing the adservice deployment directly, you could modify the content of services/adservice/kustomization.yml to:

resources:
  - deployment.yml
  - service.yml

namespace: ms-demo-{{ target.name }}

This is better than the naive solution, but still limited in a comparable (though not as bad) way. The most powerful and preferred solution is to use overrideNamespace in the root deployment.yml:

...
overrideNamespace: ms-demo-{{ target.name }}
...

As an alternative, you could also use overrideNamespace separately in third-party/deployment.yml and services/deployment.yml. In this case, you’re also free to use different prefixes for the namespaces, as long as you include {{ target.name }} in them.

Helm Charts and namespaces

The previously described way of making namespaces dynamic works well in most cases. There are however situations where it is not enough, mostly when the name of the namespace is used in places other than metadata.namespace.

Helm Charts very often do this internally, which makes it necessary to also include the dynamic namespace in the helm-chart.yml’s namespace property. You will have to do this for the redis chart as well, so let’s modify third-party/redis/helm-chart.yml to:

helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: redis
  chartVersion: 16.8.2
  releaseName: cart
  namespace: ms-demo-{{ target.name }}
  output: deploy.yml

Without this change, redis would be deployed successfully but would then fail to start due to wrong internal references to the default namespace.

Making commonLabels unique per target

commonLabels in your root deployment.yml has a very special meaning that is important to understand and work with. The combination of all commonLabels MUST be unique between all supported targets on a cluster, including targets that don’t exist yet and targets from other kluctl projects.

This is because kluctl uses these to identify resources belonging to the currently processed deployment/target, which becomes especially important when deleting or pruning.

To fulfill this requirement, change the root deployment.yml to:

...
commonLabels:
  examples.kluctl.io/deployment-project: "microservices-demo"
  examples.kluctl.io/deployment-target: "{{ target.name }}"
...

examples.kluctl.io/deployment-project ensures that we don’t get into conflict with any other kluctl project that might be deployed to the same cluster. examples.kluctl.io/deployment-target ensures that the same deployment can be deployed once per target. The names of the labels are arbitrary, and you can choose whatever you like.
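
Thanks to these labels, you can also identify all resources of a single target with plain kubectl, for example:

$ kubectl get deployments -A -l examples.kluctl.io/deployment-target=test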

Creating necessary namespaces

If you tried to deploy the current state of the project, you’d notice many errors where kluctl says that the desired namespace was not found. This is because kluctl does not create namespaces on its own. It also does not do this for Helm Charts, even though helm install for the same charts would do so. In kluctl you have to create namespaces yourself, which ensures that you have full control over them.

This implies that we must create the necessary Namespace resource ourselves. Let’s put it into its own kustomize deployment below the root directory. First, create the namespaces directory and place a simple kustomization.yml into it:

resources:
  - namespace.yml

In the same directory, create the manifest namespace.yml:

apiVersion: v1
kind: Namespace
metadata:
  name: ms-demo-{{ target.name }}

Now add the new kustomize deployment to the root deployment.yml:

deployments:
  - path: namespaces
  - include: third-party
  - include: services
...

Deploying multiple targets

You’re now able to deploy the current deployment multiple times to the same kind cluster. Simply run:

$ kluctl deploy -t local
$ kluctl deploy -t prod

After this, you’ll have two namespaces with the same set of microservices and two instances of redis (both replicated with 3 replicas) deployed.
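
You can verify this with plain kubectl:

$ kubectl get namespaces
$ kubectl -n ms-demo-prod get pods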

All changes together

For a complete overview of the necessary changes to get to this point, look into this commit.

Make the local target more lightweight

Having the microservices demo deployed twice might easily lead to your local cluster being completely overloaded. The solution would obviously be to not deploy the prod target to your local cluster and instead use a real cluster.

However, for the sake of this tutorial, we’ll instead try to introduce a few differences between targets so that they fit better onto the local cluster.

To do so, let’s introduce variables files that contain different sets of configuration for different environment types. These variables files are simple YAML files with arbitrary content, which is then available in all subsequent templating contexts.

First, create the sub-directory vars in the root project directory. The name of this directory is arbitrary and up to you; it must however match what is later used in the deployment.yml.

Inside this directory, create the file local.yml with the following content:

redis:
  architecture: standalone
  # the standalone architecture exposes redis via a different service than the replication architecture (which uses sentinel)
  svcName: cart-redis-master

And the file real.yml with the following content:

redis:
  architecture: replication
  # the standalone architecture exposes redis via a different service than the replication architecture (which uses sentinel)
  svcName: cart-redis

To load these variables files into the templating context, modify the root deployment.yml and add the following to the top:

vars:
  - file: ./vars/{{ args.env_type }}.yml
...

As you can see, we can even use templating inside the deployment.yml. Generally, templating can be used everywhere, with a few limitations outlined in the documentation.

The above change will now load a different variables file, depending on which env_type was specified in the currently processed target. This allows us to customize all kinds of configuration via templating. You’re completely free in how you use this feature, including loading multiple variables files where each one can use the variables loaded by the previous one.
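
As a sketch of such chaining (the common.yml shown here is hypothetical and not part of this tutorial), the vars list could look like this, with the later file free to reference variables defined in the earlier one:

vars:
  - file: ./vars/common.yml
  - file: ./vars/{{ args.env_type }}.yml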

To use the newly introduced variables, first modify the content of third-party/redis/helm-values.yml to:

architecture: {{ redis.architecture }}

auth:
  enabled: false

{% if redis.architecture == "replication" %}
sentinel:
  enabled: true
  quorum: 2

replica:
  replicaCount: 3
  persistence:
    enabled: true
{% endif %}

master:
  persistence:
    enabled: true

The templating engine used by kluctl is currently Jinja2. We suggest reading through the documentation of Jinja2 to understand what is possible. In the example above, we use simple variable expressions and if/else statements.
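
Jinja2 filters can also come in handy; purely as an illustration (not needed in this tutorial), a fallback value for a variable could be expressed as:

{{ redis.svcName | default("cart-redis") }}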

We will also have to replace the occurrence of cart-redis:6379 with {{ redis.svcName }}:6379 inside services/cartservice/deployment.yml.

For an overview of the above changes, look into this commit.

Deploying the current state

You can now try to deploy the local and test targets. You’ll notice that the local deployment results in quite a few changes (seen in the diff), while the test target has no changes at all. You might also want to run a prune for the local target to get rid of the old redis deployment.

Disable a few services for local

Some services are not needed locally or might not even be able to run properly there. Let’s assume this applies to the loadgenerator and emailservice services. We can conditionally remove these from the deployment with simple boolean variables in vars/local.yml and vars/real.yml and if/else statements in services/deployment.yml.

Add the following variables to vars/local.yml:

...
services:
  emailservice:
    enabled: false
  loadgenerator:
    enabled: false

And the following variables to vars/real.yml:

...
services:
  emailservice:
    enabled: true
  loadgenerator:
    enabled: true

Now change the content of services/deployment.yml to:

deployments:
  - path: adservice
  - path: cartservice
  - path: checkoutservice
  - path: currencyservice
  {% if services.emailservice.enabled %}
  - path: emailservice
  {% endif %}
  - path: frontend
  {% if services.loadgenerator.enabled %}
  - path: loadgenerator
  {% endif %}
  - path: paymentservice
  - path: productcatalogservice
  - path: recommendationservice
  - path: shippingservice

A deployment to test should not change anything now. Deploying to local however should reveal multiple orphan resources, which you can then prune.

For an overview of the above changes, look into this commit.

How to continue

After this tutorial, you should have a basic understanding of how templating in kluctl works and how a multi-environment deployment can be implemented.

We have however only deployed to a single cluster so far, and we are currently unable to properly manage the image versions of our microservices. In the next tutorial of this series, we’ll learn how to deploy to multiple clusters and how to split third-party image management from (self-developed) application image management.