Tutorial: Writing your First Commodore Component
This tutorial will guide you through the required steps to create your first Commodore Component.
Commodore Components allow you to customize and extend the number and variety of tasks that you can perform with Project Syn. Using Commodore Components, you can customize and apply changes to all of your clusters, ensuring conformity, coherence, alignment, and drastically reducing the workload for DevOps engineers.
There are many Commodore Components already published; they perform various tasks, all related to the maintenance of Kubernetes clusters of many different kinds, including OpenShift clusters.
This tutorial has been prepared with version v0.2.0 of Commodore. Project Syn is moving fast, and we will update this tutorial accordingly as new features and capabilities are made available by the development team.
In this tutorial, we are going to do the following:
- First, we are going to launch two lightweight Kubernetes clusters, one Minikube and one Rancher K3s, and then we're going to federate them under the same tenant.
- Then, we are going to create a very simple component, one that will deploy a workload in both clusters at the same time.
- Finally, we are going to wrap up, not without first cleaning up all the different pieces of this experiment.
This tutorial should give you a good idea of how all the different pieces of Project Syn work together, and how Commodore Components fit in the picture.
Requirements
As for the required knowledge bits and pieces:
- You should understand what Project Syn is, and its architecture.
- You should have already followed the Getting Started with Project Syn tutorial, and be familiar with common Project Syn tasks.
This tutorial was built on a Linux system, and tested in both Linux and macOS; it should, however, be easily ported to other Unices, since all tools referenced here are cross-platform.
To follow this tutorial you will need quite an array of software in your system:
- ngrok version 2.3
- Docker version 19
- Minikube version 1.12
- K3d version 3.0
- A GitLab account with your SSH key configured. You can use a private GitLab instance, if you have one nearby.
- A GitHub account with your SSH key configured.
- SSH-agent running with your SSH key added.
- The following commands must be installed and available in your PATH: curl, jq, ssh-keyscan, and base64.
- We will be editing files with Visual Studio Code, together with the vscode-jsonnet and "YAML to JSON" extensions used later in this tutorial.
This tutorial project contains a 0_requirements.sh script which outputs the currently installed versions of all the aforementioned software packages.
The specified version numbers are the minimum required. Also, please make sure that both your GitLab and GitHub accounts have SSH keys configured.
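If you would rather check your versions by hand, a quick loop along these lines does the trick (a rough sketch only, not the actual contents of 0_requirements.sh):
# Rough sketch: check that the required tools are present in the PATH
for tool in ngrok docker minikube k3d curl jq ssh-keyscan base64; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool found at $(command -v "$tool")"
  else
    echo "MISSING: $tool is not in your PATH"
  fi
done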
Preparations
Follow these four steps before starting the tutorial:
- Clone the repository of the tutorial: git clone git@github.com:projectsyn/commodore-component-tutorial.git and cd commodore-component-tutorial.
- Check all required tools by running the ./0_requirements.sh script.
- In GitHub, fork the commodore-defaults project to your own personal account.
- In GitLab, create a Personal Access Token with API access.
Remember to save your token somewhere, since it will only be visible once!

Install Lieutenant Operator and API on Minikube
The Lieutenant Operator and its API are the major elements of a Project Syn installation. They provide a complete catalog of information about all of your clusters, and allow developers to operate on those clusters through a single API.
In this tutorial we are going to install Lieutenant in the same Minikube cluster that we will use later on; in a real-world setup, Lieutenant and its API usually run separately. Installing everything together will, however, simplify our setup greatly.
First you need to set a few environment variables:
$ export GITLAB_TOKEN="TOKEN_CREATED_IN_PREVIOUS_STEP"
$ export GITLAB_ENDPOINT=gitlab.com
$ export GITLAB_USERNAME=your_gitlab_username
$ export GITHUB_USERNAME=your_github_username
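Before continuing, you can quickly verify that the token and endpoint are valid by querying the GitLab API; this check is optional and not part of the tutorial scripts:
$ curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://$GITLAB_ENDPOINT/api/v4/user" | jq -r .username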
To install the Lieutenant Operator and API on Minikube, we are going to execute the ./1_lieutenant_on_minikube.sh script. This script mimics the steps explained in the "Getting Started" tutorial, so we will not go through them here. The script is quite self-explanatory, though:
# ...
echo "===> API deployment"
kubectl -n lieutenant apply -k "github.com/projectsyn/lieutenant-api/deploy?ref=v0.2.0"
echo "===> For Minikube we must delete the default service and re-create it"
kubectl -n lieutenant delete svc lieutenant-api
kubectl -n lieutenant expose deployment lieutenant-api --type=NodePort --port=8080
echo "===> Launch ngrok in the background tunneling towards the Lieutenant API"
setsid ./ngrok.sh >/dev/null 2>&1 < /dev/null &
sleep 2s
echo "===> Find external Lieutenant URL through the ngrok API"
LIEUTENANT_URL=$(curl http://localhost:4040/api/tunnels --silent | jq -r '.["tunnels"][0]["public_url"]')
echo "===> Lieutenant API: $LIEUTENANT_URL"
# ...
This script will launch a new Minikube instance, apply all the required kubectl commands, and wait for the cluster to be in the expected state before continuing.
It will also expose the Lieutenant API using the ngrok reverse proxy tool. You do not need a commercial (paid) ngrok account to use it; just download the version corresponding to your system and install it somewhere in your PATH.
Once the script has run, you will find two new private repositories in your GitLab account: one named tutorial-cluster-minikube and another named tutorial-tenant. Project Syn tools use these repositories to perform "GitOps" operations on all the clusters.
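You can also check that the Lieutenant API is reachable from outside the cluster. The ngrok query below is the same one used inside the script; the /healthz path is an assumption, so adjust it if your Lieutenant version exposes a different health endpoint:
$ LIEUTENANT_URL=$(curl http://localhost:4040/api/tunnels --silent | jq -r '.["tunnels"][0]["public_url"]')
$ curl "$LIEUTENANT_URL/healthz"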

Bootstrapping Commodore on Minikube
The second step in this tutorial is to bootstrap Commodore in our Minikube cluster. For that we can just run the ./2_commodore_on_minikube.sh script. This script requires an environment variable to run: COMMODORE_SSH_PRIVATE_KEY, which should contain the path to your personal SSH private key. For example:
$ export COMMODORE_SSH_PRIVATE_KEY=~/.ssh/id_rsa
Do not press the Enter key when prompted with the "If you don't see a command prompt, try pressing enter" message! Instead, type the passphrase of your SSH key.
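Since the requirements above also call for a running ssh-agent with your key added, this is a good moment to double-check that the key referenced by COMMODORE_SSH_PRIVATE_KEY is actually loaded:
$ ssh-add -l                          # lists the keys currently held by the agent
$ ssh-add $COMMODORE_SSH_PRIVATE_KEY  # adds the key if it is missing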
Install Steward on Minikube
Now that Lieutenant is installed, we can install Steward on our cluster. This tool is in charge of bootstrapping Argo CD, which watches the GitLab and GitHub repositories and triggers the GitOps operations required to keep each cluster up-to-date.
We can easily install Steward on Minikube using the ./3_steward_on_minikube.sh script.
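Steward and the Argo CD components it bootstraps run in the syn namespace, the same namespace we will use later to reach the Argo CD console. A quick way to confirm the installation is to list the pods there:
$ kubectl --context minikube -n syn get pods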
Synthesize a K3s Cluster
Now it is time to spin up our second cluster, this time using K3s, itself managed with K3d. Again, a couple of scripts will make our life easier.
First we need to get some variables from our environment:
$ source ./env.sh
Now we can run a script that will not only spin up a new K3s cluster, but will also install Steward in it and register it as a cluster belonging to our tenant. That is, we will "synthesize" our cluster.
The term "synthesize" means performing all the required steps so that a cluster can be managed as part of a Project Syn metacluster.
$ ./4_synthesize_on_k3s.sh
And now we are ready: we have two working Kubernetes clusters, one Minikube and one K3s, both assigned to the same tenant. To verify this, we can now query the Lieutenant Custom Resources available in our Minikube cluster:
$ kubectl --context minikube get clusters -n lieutenant
NAME                  DISPLAY NAME       TENANT                AGE
c-damp-dew-2385       K3s cluster        t-muddy-sunset-5530   78s
c-small-cherry-9211   Minikube cluster   t-muddy-sunset-5530   48m
$ kubectl --context minikube get tenants -n lieutenant
NAME                  DISPLAY NAME      AGE
t-muddy-sunset-5530   Tutorial Tenant   49m
Of course, in your case the NAME column will show different, random values. But we can see both clusters and the tenant object, all happily residing inside the realm of our Project Syn installation.
For more information, check out the diagram in the Project Syn website.
Inspecting the Lieutenant API traffic
Since we're using ngrok to expose our Lieutenant API, we can inspect all the requests exchanged between the clusters and Lieutenant. For that, just open a browser window and point it to localhost:4040/inspect/http. The following image shows a screenshot of a typical Lieutenant API session.

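If you prefer the command line, the same captured traffic is available through ngrok's local agent API; the /api/requests/http endpoint below is an assumption based on the ngrok 2.x documentation, and the response layout may differ between versions:
$ curl --silent http://localhost:4040/api/requests/http | jq .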
Creating a Commodore Component
Now we’re getting to the actual subject of this tutorial! Let us dive into the real task of creating a new Commodore Component.
Using the Commodore Docker Image
We are now going to use the commodore tool to generate and compile our new component. This tool is written in Python and requires quite a few dependencies; to make our lives easier, we are instead going to use the corresponding container image available on Docker Hub. This is why it is strongly recommended that you define a commodore wrapper function in your environment, as described in Running Commodore, which will make using the tool a much simpler task.
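As a rough sketch, such a wrapper runs the projectsyn/commodore image, passing along the .env file and the folders created in the next step plus your SSH key. The flags, mount points, and image tag below are assumptions; always prefer the version given in the Running Commodore documentation:
# Sketch of a commodore wrapper around the Docker image; details are assumptions,
# see "Running Commodore" for the authoritative version.
commodore() {
  docker run --interactive --tty --rm \
    --env-file .env \
    --volume "$HOME/.ssh:/app/.ssh:ro" \
    --volume "$PWD/catalog:/app/catalog" \
    --volume "$PWD/inventory:/app/inventory" \
    --volume "$PWD/dependencies:/app/dependencies" \
    --volume "$PWD/compiled:/app/compiled" \
    projectsyn/commodore:v0.2.0 "$@"
}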
Checking out the Commodore Project
In order to create a new component, we first need to clone the official Commodore project from GitHub. Inside that clone of the Commodore source code we will create a few folders:
$ git clone git@github.com:projectsyn/commodore.git
$ cd commodore
$ mkdir -p catalog inventory dependencies compiled
We also need to define the contents of the .env file, used by Docker to set environment variables required by the commodore command. We can do this in a snap with the following command; the resulting file should look similar to this one:
$ source ../env.sh -docker > .env
$ cat .env
# URL of Lieutenant API
COMMODORE_API_URL=http://lieutenant.172.17.0.3.nip.io:32506
# Lieutenant API token
COMMODORE_API_TOKEN=………
# Base URL for global Git repositories
COMMODORE_GLOBAL_GIT_BASE=ssh://git@github.com/$GITHUB_USERNAME
Compiling the Commodore Catalog
The first step to create a Commodore Component is what is usually referred to as "compiling the catalog." This catalog provides a unique reference point of information about all the configuration and workloads affecting a single cluster.
Now we are ready to compile our catalog. Every time you call the commodore tool you will have to enter the passphrase of your SSH key.
$ kubectl --context minikube -n lieutenant get cluster
NAME                    DISPLAY NAME       TENANT                  AGE
c-aged-cloud-4988       Kind cluster       t-weathered-fire-7929   3m58s
c-late-cloud-248        K3s cluster        t-weathered-fire-7929   8m24s
c-solitary-voice-5640   Minikube cluster   t-weathered-fire-7929   10m
$ commodore catalog compile c-aged-cloud-4988
...
$ commodore catalog compile c-late-cloud-248
...
$ commodore catalog compile c-solitary-voice-5640
...
We must repeat this operation for each cluster; of course, you should use the cluster IDs you got from the last kubectl command above.
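If you do not feel like typing each ID by hand, you can also loop over the cluster names straight from the Lieutenant Custom Resources (each iteration will still ask for your SSH passphrase):
for cluster in $(kubectl --context minikube -n lieutenant get clusters -o jsonpath='{.items[*].metadata.name}'); do
  commodore catalog compile "$cluster"
done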
Creating a New Commodore Component
With our catalog ready to use, we can create a new component. In this case, we want to create a component that automatically deploys a very simple application called the "Fortune Cookie Service" to our clusters. This application, written in Python, returns a funny "fortune cookie" message in the console.
Let us create the component first:
$ commodore component new fortune
Agent pid 9
Enter passphrase for /app/.ssh/id_rsa:
Identity added: /app/.ssh/id_rsa (user@domain.com)
Adding component fortune...
> Installing component
Component fortune successfully added 🎉
The new component is now located in the dependencies/fortune folder. We can edit it using Visual Studio Code:
$ code dependencies/fortune

Commodore Components are usually written using Jsonnet (pronounced "jay-sonnet"), a data templating language for app and tool developers. If you have never written Jsonnet before, do not worry; just know that all valid JSON files are valid Jsonnet files, and that Jsonnet provides useful extensions, such as variables and functions, and various export formats, which make it easy to write very complex JSON applications with it.
In Visual Studio Code, open the component/main.jsonnet file and replace its text with the following:
local kap = import 'lib/kapitan.libjsonnet';
local kube = import 'lib/kube.libjsonnet';
local inv = kap.inventory();
local params = inv.parameters.fortune;
local namespace = 'syn-fortune';
local appName = 'fortune-app';
local portName = 'fortune-port';
local containerName = 'vshn/fortune-cookie-service:1.0';
local labelSelector = {
  app: appName,
};

{
  namespace: kube.Namespace(namespace) {
    metadata: {
      name: namespace,
      labels: {
        name: namespace,
      },
    },
  },
  service: kube.Service('fortune-service') {
    metadata: {
      name: 'fortune-service',
      labels: labelSelector,
      namespace: namespace,
    },
    spec: {
      ports: [
        {
          port: 3000,
          targetPort: portName,
        },
      ],
      selector: labelSelector,
      type: 'LoadBalancer',
    },
  },
  deployment: kube.Deployment('fortune-deployment') {
    metadata: {
      name: 'fortune-deployment',
      labels: labelSelector,
      namespace: namespace,
    },
    spec: {
      replicas: 2,
      template: {
        spec: {
          containers: [
            {
              image: containerName,
              name: 'fortune-container',
              ports: [
                {
                  containerPort: 9090,
                  name: portName,
                },
              ],
            },
          ],
        },
        metadata: {
          labels: labelSelector,
        },
      },
      selector: {
        matchLabels: labelSelector,
      },
      strategy: {
        type: 'Recreate',
      },
    },
  },
}
This Jsonnet file provides the definition of a complete Kubernetes deployment; however, instead of being written in YAML, it is expressed in Jsonnet. After the import statements we define a few variables. The entries that follow define various Kubernetes objects: a deployment, a service, and a namespace to hold everything together.
The vscode-jsonnet extension for Visual Studio Code has a nifty "live preview" feature; the picture below shows how Jsonnet outputs the classic YAML format that any Kubernetes cluster expects.

Configuring the Jsonnet Preview Pane in Visual Studio Code
The Jsonnet extension for Visual Studio Code requires some values in your settings.json in order for the preview pane to work properly:
"jsonnet.executablePath": "/home/username/.local/bin/jsonnet",
"jsonnet.libPaths": [
"/home/username/commodore/commodore/",
"/home/username/commodore/dependencies/",
"/home/username/.cache/pypoetry/virtualenvs/commodore-jtwx3-r8-py3.7/lib/python3.7/site-packages/kapitan"
]
The value of the third entry of the jsonnet.libPaths property can be found using the poetry env info command (run poetry install first if you have not done it yet).
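For example, from within your clone of the commodore repository, the --path flag prints just the virtualenv location (the path will of course differ on your machine):
$ poetry install          # only needed the first time
$ poetry env info --path
/home/username/.cache/pypoetry/virtualenvs/commodore-jtwx3-r8-py3.7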
Creating Jsonnet files from YAML
To create your own Jsonnet files, you can start from plain YAML and translate it using either the yq tool or the "YAML to JSON" extension for Visual Studio Code. This is exactly how we created the Jsonnet examples for this tutorial!
Open the component/app.jsonnet file and replace its contents with the following:
local kap = import 'lib/kapitan.libjsonnet';
local inv = kap.inventory();
local params = inv.parameters.fortune;
local argocd = import 'lib/argocd.libjsonnet';
local app = argocd.App('fortune', params.namespace, secrets=false);
{
  'fortune': app,
}
The secrets=false parameter is required: since we do not use any secrets in this tutorial, we can safely disable secret management for this component.
Compiling the Component
Now that we have written our component, let’s compile it to see if it’s generating what we need:
$ commodore component compile dependencies/fortune
Agent pid 9
Enter passphrase for /app/.ssh/id_rsa:
Identity added: /app/.ssh/id_rsa (user@domain.com)
Compile component fortune...
Updating Jsonnet libraries...
Compiling catalog...
> Component compiled to /app/compiled/test
$ ls compiled/test/fortune
. .. deployment.yaml namespace.yaml service.yaml
$ cat compiled/test/fortune/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: syn-fortune
  name: syn-fortune
Our component compiles! Let us create a new project in GitHub to push our code into:

$ cd dependencies/fortune
$ git add .
$ git commit -m "Added code to component"
$ git remote remove origin
$ git remote add origin git@github.com:$GITHUB_USERNAME/component-fortune.git
$ git push --set-upstream origin master
$ cd ../../
Deploying the Component to our Clusters
Now that we have a component that compiles, and that we have pushed it to GitHub, we want to deploy it to our clusters.
We need to edit two files for that:
$ cd inventory/classes/global
The global folder is a clone of your fork of the commodore-defaults project.
Edit commodore.yml and add this at the end:
- name: fortune
  url: https://github.com/[YOUR_GITHUB_USERNAME]/component-fortune.git
Commit and push your changes, and cd to another folder:
$ git commit -a -m "Added component"
$ git push
$ cd ../[$TENANT_ID]
This repository is a clone of the tutorial-tenant project in your GitLab account.
Edit common.yml and add this text:
and add this text:
classes:
- components.fortune
And once again, commit and push your changes.
$ git commit -a -m "Added component"
$ git push
$ cd ../../../
Since we edited the common.yml file, the changes will propagate to all clusters. If you want your changes to apply to just one cluster, edit the corresponding file in the same folder.
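For instance, the tenant folder typically contains one file per cluster next to common.yml, named after the cluster ID; the names below are placeholders taken from the earlier output, and yours will differ:
$ ls inventory/classes/$TENANT_ID/
common.yml  c-damp-dew-2385.yml  c-small-cherry-9211.yml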
Deployment
We're ready! Now we just have to recompile the catalogs and push the changes, this time using the --push parameter so that our changes are sent to their respective GitLab repositories.
$ kubectl --context minikube -n lieutenant get cluster
NAME                    DISPLAY NAME       TENANT                  AGE
c-aged-cloud-4988       Kind cluster       t-weathered-fire-7929   3m58s
c-late-cloud-248        K3s cluster        t-weathered-fire-7929   8m24s
c-solitary-voice-5640   Minikube cluster   t-weathered-fire-7929   10m
$ commodore catalog compile --push c-solitary-voice-5640
Agent pid 9
Enter passphrase for /app/.ssh/id_rsa:
Identity added: /app/.ssh/id_rsa (user@domain.com)
Cleaning working tree
Updating global config...
Updating customer config...
Discovering components...
Fetching components...
Updating Kapitan target...
Updating cluster catalog...
Updating Jsonnet libraries...
Cleaning catalog repository...
Updating Kapitan secret references...
Compiling catalog...
Postprocessing...
> compiled/cluster/metrics-server/01_helmchart/metrics-server/templates/00_namespace.yaml does not exist, creating...
Updating catalog repository...
> Changes:
Added file manifests/apps/fortune.yaml
Added file manifests/fortune/deployment.yaml
Added file manifests/fortune/namespace.yaml
Added file manifests/fortune/service.yaml
> Commiting changes...
> Pushing catalog to remote...
Catalog compiled! 🎉
We have to do this for all clusters!
$ commodore catalog compile --push c-aged-cloud-4988
Agent pid 9
Enter passphrase for /app/.ssh/id_rsa:
Identity added: /app/.ssh/id_rsa (user@domain.com)
...
$ commodore catalog compile --push c-late-cloud-248
...
Since Argo CD, itself managed by Steward, is watching these repositories, it will pick up the new state and seamlessly deploy it across our two clusters in a few minutes.
With K9s we can watch our clusters as they pick up the changes and apply them transparently. The syn-fortune namespace will contain the required deployment, pods, and service.
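If you prefer plain kubectl over K9s, you can verify the same thing on both clusters:
$ kubectl --context minikube -n syn-fortune get deployments,pods,services
$ kubectl --context k3d-projectsyn -n syn-fortune get deployments,pods,services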

The syn-fortune namespace in K3s
The syn-fortune and lieutenant pods in Minikube
Accessing the Argo CD Console
Argo CD is a declarative GitOps continuous deployment tool for Kubernetes. It is managed by Steward, and takes care of synchronizing and automatically updating the state of our clusters following changes in the configuration. You can access the Argo CD console in both clusters through these steps:
First, retrieve the Argo CD admin password with this command:
$ kubectl --context minikube -n syn get secret/steward -o jsonpath='{.data.token}' | base64 --decode | pbcopy
The pbcopy command is usually found on macOS. On Linux, you can use alias pbcopy="xclip -selection clipboard" instead.
Second, redirect traffic to Argo CD:
$ kubectl --context minikube -n syn port-forward svc/argocd-server 8080:80
And third, open localhost:8080 with the username admin and the password you copied in the previous step.

You can repeat these steps to access the console in the K3s cluster by changing the value of the kubectl --context parameter.
Executing the fortune-service
Of course you can test the fortune-service running in your clusters using the following commands:
For k3d:
$ kubectl --context k3d-projectsyn -n syn-fortune port-forward svc/fortune-service 8081:3000
$ curl http://localhost:8081 # in a different terminal
In K9s you can add port forwarding to a pod simply by selecting the pod and hitting the Shift+F key combination.
For Minikube:
$ curl $(minikube service fortune-service --namespace syn-fortune --url)
Cleanup
We are done! You can now remove the clusters from your computer, and also remove the projects you've created in GitHub and GitLab:
- Run the ./5_delete.sh script to deregister and stop all local clusters.
- Remove your fork of commodore-defaults on GitHub.
The GitLab projects tutorial-tenant, tutorial-cluster-minikube, and tutorial-cluster-k3s are automatically deleted by the Lieutenant Operator when the script removes the clusters and the tenant.
$ ./5_delete.sh
===> Find Tenant ID
===> OK: TENANT_ID variable set
===> Removing all clusters
cluster.syn.tools "c-aged-sea-7813" deleted
cluster.syn.tools "c-snowy-sunset-5873" deleted
===> Removing tenant
tenant.syn.tools "t-solitary-field-4017" deleted
===> Waiting 20 seconds for the removal of GitLab repositories
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/username/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
INFO[0000] Deleting cluster 'projectsyn'
INFO[0002] Deleted k3d-projectsyn-server-0
INFO[0002] Deleting image volume 'k3d-projectsyn-images'
INFO[0002] Removing cluster details from default kubeconfig...
INFO[0002] Removing standalone kubeconfig file (if there is one)...
INFO[0002] Successfully deleted cluster projectsyn!
Conclusion
I hope this tutorial has given you a good idea of what Commodore Components are, and how they can help you in your DevOps workflow.
Commodore Components can be used to deploy monitoring tools, to change the state of lots of clusters at once, to perform backup operations, to modify network policies, and many other tasks that otherwise would have to be performed manually.
Creating Commodore Components is at the moment a lengthy task, but the rewards are high in terms of the economies of scale that DevOps teams achieve when managing multi-cluster architectures. As a next step, we recommend that you read the Commodore Component Developer Guidelines.
If you create a new open source Commodore Component, add the commodore-component tag to your GitHub project and let us know about it! We will be thrilled to check it out and, who knows, maybe even contribute back to your effort.