Investigating Kubernetes Node Disk Usage

Today, I looked at our production Kubernetes cluster dashboard and I noticed something weird:

disk usage is high - almost 80%!
(sum(node_filesystem_size) - sum(node_filesystem_free)) / sum(node_filesystem_size) * 100

Well, this looks pretty bad. This is the average disk usage across the nodes in the cluster: on average, only 20% of the disk on each node is free. This is probably not a good sign.
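An average like this can also hide a single nearly-full node, so it's worth breaking the query down per node. Assuming the same metric names as above (note that newer node_exporter versions rename them to node_filesystem_size_bytes and node_filesystem_free_bytes), a per-node variant would look roughly like:

```
100 * (1 - sum by (instance) (node_filesystem_free)
         / sum by (instance) (node_filesystem_size))
```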

What could the issue be? There are probably some big files on the disk; the only question is which ones. Finding the biggest files on Linux is simple (see this post for more details). All we need to do is run the following command:

du -a /dir/ | sort -n -r | head -n 20
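As a quick sanity check, here is the same pipeline run against a throwaway directory (the directory and file sizes are made up for the demo; on a node you would point it at a real path):

```shell
# Demo on a scratch directory -- on a real node you'd target / or /var
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/big"    # 1 MiB file
head -c 1024    /dev/zero > "$dir/small"  # 1 KiB file

# -a lists files as well as directories; sort -n -r puts the largest
# entries (sizes in 1 KiB blocks) first; head keeps the top 20
du -a "$dir" | sort -n -r | head -n 20
```

The directory itself comes out on top (it contains everything), followed by the biggest file.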

So, to find out what is eating all the space on our nodes, all I need to do is run this command on every node and find the problematic files. SSH is one way to do that, but I'm lazy and don't like manually connecting to each node. Let's do it the Kubernetes way!

Leveraging the power of DaemonSets

What I want to do is run a command on all the nodes in the cluster. The easiest way to achieve that is with a DaemonSet: a Kubernetes object that lets you run a pod on all (or some) of the nodes in the cluster, which is exactly what I need. What's left is writing the pod spec so that it does the following:

  • Mount the node root file system (/) into the container
  • Run the pod as privileged so the container can access all the files from the host (careful here!)
  • Run a command that lists the 20 largest files in the root file system
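Putting those three requirements together, a sketch of such a DaemonSet could look like this (the image and mount path are my assumptions; the name and label match the kubectl commands below, and the authoritative version is the linked gist):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      containers:
      - name: disk-checker
        image: busybox          # assumption: any image with du/sort/head works
        securityContext:
          privileged: true      # lets the container read all host files -- careful!
        command:
        - sh
        - -c
        - du -a /host | sort -n -r | head -n 20
        volumeMounts:
        - name: host-root
          mountPath: /host      # the node's root file system, mounted read-only below
          readOnly: true
      volumes:
      - name: host-root
        hostPath:
          path: /
```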

The full declaration of the daemonset is here. To run it, use the following kubectl command:

kubectl apply -f https://gist.githubusercontent.com/omerlh/cc5724ffeea17917eb06843dbff987b7/raw/8da2dfa37046ae04cd602a44b70ade5b747fcf1b/daemonset.yaml

And now watch the pods running using the following kubectl command:

kubectl get pods -w -l app=disk-checker

You should see an output similar to this, with more pods (depending on the number of the nodes running on your cluster):

The pods from the daemonset are running
The disk checker pod in action

Now it’s time to wait – it will take a while for each pod to finish going over all the files in its node's filesystem. When all the pods have completed, it’s time to inspect the logs to find out which files are the biggest:

kubectl logs -l app=disk-checker -p

The -p flag is due to the daemonset's behavior – once a pod completes, it is restarted. I want the logs from the previous pod, not the current one. The output should look like this:

The output of the container

Easy, right? Just don’t forget to clean up the daemonset when you’re done:

kubectl delete daemonset disk-checker

Conclusion

After going through this process, I was able to find out which files were consuming most of the space. The next step was to investigate: why do these files take up so much space?

After discussing the findings with my colleagues (thank you Shay Katz and Yaron Idan!), I found out that I was barking up the wrong tree. It turns out kubelet starts cleaning up images only when disk usage goes over 85% (see the official documentation here). So actually, any disk usage below 85% is normal.
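For reference, these thresholds are configurable on the kubelet. In a KubeletConfiguration they look roughly like this (the values shown are the documented defaults):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet starts image garbage collection when disk usage crosses the
# high threshold, and deletes images until usage is back below the low one
imageGCHighThresholdPercent: 85   # documented default
imageGCLowThresholdPercent: 80    # documented default
```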

To conclude, today I learned how to find out which files consume the most space on Kubernetes nodes, and also how the kubelet image cleanup policy works. But most importantly, I learned that I should always ask “is this actually a problem?” before starting to investigate it.
