
Extending Kubernetes with CRDs – The Hard Way

This is a post I was planning to write a while ago, when I worked on the Kamus CRD feature. CRD, or Custom Resource Definition, is a way to extend Kubernetes with a new resource type. In my case, I wanted to add a new resource, KamusSecret, which is very similar to a regular Secret, just encrypted. Let’s see how easily this can be done – using my beloved language, C# 🙂

Getting Started

The first step, of course, is reading the documentation:
The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.

Confused? So was I when I first read this. I mean, I get what a CRD is – but what do I need to do in order to implement one? Apparently, there are only two things you need to do:
  • Define the new resource using a CRD object
  • Write code that reacts to CRUD (create, read, update and delete) events for this resource
Did you notice that there is no step for actually handling the CRUD operations themselves? Kubernetes does all of this for you. Just by defining the resource, you can use it like any other Kubernetes resource – create, edit, delete – and everything will work without any additional code. This is really awesome. It also means that if you (like Corey Quinn) like to abuse things, you can use Kubernetes as a NoSQL database: just create a CRD for the object you’d like to store, and that’s it. You can now store it in Kubernetes just like in any other database. Anyway, please don’t try that in production 🙂

Creating the CRD

A CRD is just another Kubernetes resource, which you can create using a manifest. This, for example, is the CRD for KamusSecret:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kamussecrets.soluto.com
spec:
  group: soluto.com
  versions:
    - name: v1alpha2
      served: true
      storage: true
  validation:
    openAPIV3Schema:
      type: object
      properties:
        data:
          type: object
          additionalProperties: true
        stringData:
          type: object
          additionalProperties: true
        type:
          type: string
        serviceAccount:
          type: string
  scope: Namespaced
  names:
    plural: kamussecrets
    singular: kamussecret
    kind: KamusSecret
    shortNames:
      - ks
What do we have here?
  • The openAPIV3Schema section defines our resource’s object schema, using an OpenAPI schema. It’s an object that has three properties – data, stringData and type (exactly like the properties a regular Secret has) – plus an additional property called serviceAccount, which is used for decryption.
  • Like any resource, it has an API group and a version (v1alpha2).
  • It also has names (a plural, a singular, the kind KamusSecret and the short name ks) and a scope (Namespaced).
After creating this resource, our new KamusSecret resource is ready to use! We can now use regular kubectl commands – like kubectl get ks --all-namespaces or kubectl apply -f kamus-secret.yaml – and it will work just like any other resource. This is very nice if we want to use Kubernetes as a database, but usually we need to do something when our resource is created/edited/deleted. For example, when a KamusSecret is created, I want to decrypt all the items and create a new Secret with the decrypted values. How can I do that? This is why we need a controller!
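To make this concrete, here is a hypothetical KamusSecret manifest. The metadata, service account name and encrypted value are placeholders for illustration, and the API group is assumed from the CRD above:

```yaml
apiVersion: soluto.com/v1alpha2
kind: KamusSecret
metadata:
  name: my-kamus-secret      # illustrative name
  namespace: default
type: Opaque
serviceAccount: some-service-account   # the service account used for decryption
stringData:
  apiKey: AAAbbb...          # placeholder for an encrypted value
```

Applying it with kubectl apply -f kamus-secret.yaml works out of the box, since Kubernetes already knows how to store and serve KamusSecret objects.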

Writing our first controller

As I said earlier, Kubernetes handles all the hard parts for us – persistence, handling user requests, and everything we need for CRUD operations. Kubernetes also supports watching resources: you can watch any resource, and Kubernetes will send events about it (ADDED/MODIFIED/DELETED). This is how we can implement the logic for our CRD – by writing code that watches for changes to all the CRD instances in the cluster and responds to the various events. This is all the code we need (using the Kubernetes C# SDK):
var path = $"apis/{group}/{version}/watch/{plural}";
var watcher = await kubernetes.WatchObjectAsync<TCRD>(path,
    onEvent: (@type, @event) => subject.OnNext((@type, @event)),
    onError: e => subject.OnError(e),
    onClosed: () => subject.OnCompleted());
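For KamusSecret, with the group, version and plural name defined in the CRD manifest (the group here is an assumption based on the CRD above), the watch path resolves to something like:

```
apis/soluto.com/v1alpha2/watch/kamussecrets
```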
WatchObjectAsync receives three callbacks: onEvent is called with the relevant event, onError when there is an error, and onClosed when the connection is closed. Now, all we need to do is implement a simple state machine and react to each of the events:
switch (@event)
{
    case WatchEventType.Added:
        await HandleAdd(kamusSecret);
        break;

    case WatchEventType.Deleted:
        // Ignore delete events - they're handled by the Kubernetes GC
        break;

    case WatchEventType.Modified:
        await HandleModify(kamusSecret);
        break;

    default:
        mLogger.Warning(
            "Event of type {type} is not supported. KamusSecret {name} in namespace {namespace}",
            @event,
            kamusSecret.Metadata.Name,
            kamusSecret.Metadata.NamespaceProperty ?? "default");
        break;
}

Simple, right? You can find the entire code of the controller here. You might have noticed that I’m ignoring the Deleted event. This is because we can let Kubernetes handle it for us – by marking an object as “owned” by another object. For example, this is the code that creates a Secret from a KamusSecret:
var ownerReference = !this.mSetOwnerReference ? new V1OwnerReference[0] : new[]
{
    new V1OwnerReference
    {
        ApiVersion = kamusSecret.ApiVersion,
        Kind = kamusSecret.Kind,
        Name = kamusSecret.Metadata.Name,
        Uid = kamusSecret.Metadata.Uid,
        Controller = true,
        BlockOwnerDeletion = true,
    }
};

return new V1Secret
{
    Metadata = new V1ObjectMeta
    {
        Name = kamusSecret.Metadata.Name,
        NamespaceProperty = @namespace,
        OwnerReferences = ownerReference
    },
    Type = kamusSecret.Type,
    StringData = decryptedStringData,
    Data = decryptedData
};
The owner reference tells Kubernetes that this resource is owned by a different resource. When the owner resource (the KamusSecret) is deleted, the Kubernetes garbage collector will also clean up the owned resource – in my case, the Secret. Nice, right?
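To see what this looks like from the cluster’s point of view, here is a sketch of the Secret the controller creates. The names, API group, UID and values are illustrative placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-kamus-secret
  namespace: default
  ownerReferences:              # points back at the owning KamusSecret
    - apiVersion: soluto.com/v1alpha2
      kind: KamusSecret
      name: my-kamus-secret
      uid: 2c91b7a1-...         # placeholder for the owner's UID
      controller: true
      blockOwnerDeletion: true
type: Opaque
stringData:
  apiKey: the-decrypted-value   # placeholder for a decrypted item
```

Deleting the KamusSecret now causes the garbage collector to delete this Secret as well, with no Deleted handler in our controller.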

Are we done?

On one hand, yes. By writing just a few lines of code, I can now create a KamusSecret object, and a decrypted Secret will be created. On the other hand, this is just the tip of the iceberg. For example, testing is not easy at all (see the tests I wrote here). There are also very complex issues that I didn’t handle, like high availability (using a leader election algorithm), using the resource version to filter events that were already handled, and versioning (and I can share from experience that this is not easy at all to implement). Each of these is another thing we need to implement and test. Luckily, there are easier ways to write controllers – which is why this post is called “The Hard Way”. One way is to use the Operator SDK, which generates and handles most of the heavy lifting for you, so you can focus only on the logic. Using the Operator SDK, it took me less than an hour to write a Zap Operator, with far less code and a much more robust controller. On the other hand, if you can’t write the operator in Go (as in the case of Kamus, where most of the code base is in C#), you cannot use this SDK. Another option is kubebuilder, the framework that is also used by the Operator SDK. Kubebuilder is more low-level but gives you more flexibility – and it still requires you to write your code in Go.

Wrapping Up

Extending Kubernetes with CRDs is very simple and requires just a few lines of code, especially if you use the Operator SDK or kubebuilder. As we saw, Kubernetes handles all the hard parts for you and lets you focus on the logic. I hope you now have a better understanding of the relationship between a controller and a CRD, and you’re ready to start writing your own CRDs! Did you end up writing something? Do share! I’ll be happy to hear about it!
