Today, I looked at our production Kubernetes cluster dashboard and I noticed something weird:
[Dashboard screenshot: average disk usage per node]
Well, this looks pretty bad. This is the average disk usage of the nodes running in the cluster. On average, only 20% of the disk on each node is available. This is probably not a good sign.
What could the issue be? There are probably some big files on the disk; the only question is which ones. Finding the biggest files on Linux is simple (see this post for more details). All we need to do is run the following command:
du -a /dir/ | sort -n -r | head -n 20
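Broken down, here is what each part of that pipeline does (the /dir/ path is just a placeholder for whichever directory you want to scan):
# du -a       report the size of every file, not only directories
# sort -n -r  sort the sizes numerically, biggest first
# head -n 20  keep only the top 20 entries
du -a /dir/ | sort -n -r | head -n 20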
So, to find out what eats all the space on our nodes, all I need to do is run this command on every node and find the problematic files. Using SSH is one way to do that, but I'm lazy and don't like connecting to each node manually. Let's do it the Kubernetes way!
Leveraging the power of DaemonSets
What I want to do is run a command on all the nodes in the cluster. The easiest way to achieve that is with a DaemonSet. A DaemonSet is a Kubernetes object that lets you run a pod on all (or some) of the nodes in the cluster, which is exactly what I want. What's left is writing the pod spec so that it does the following (sketched below):
- Mount the node's root file system (/) into the container
- Run the pod as privileged so the container can access all the files on the host (careful here!)
- Run the command to list the 20 biggest files in the root file system
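Putting those requirements together, the DaemonSet spec looks roughly like the following. This is only a minimal sketch: the container image, mount path, and exact command are my assumptions, and the real manifest lives in the gist linked below.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    app: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      containers:
        - name: disk-checker
          image: busybox                  # assumption: any small image with du/sort/head will do
          securityContext:
            privileged: true              # required to read all the host's files (careful here!)
          command:
            - sh
            - -c
            - du -a /host | sort -n -r | head -n 20
          volumeMounts:
            - name: host-root
              mountPath: /host            # the node's root file system, mounted read-only
              readOnly: true
      volumes:
        - name: host-root
          hostPath:
            path: /                       # mount the node root file system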
The full declaration of the DaemonSet is here. To run it, use the following kubectl command:
kubectl apply -f https://gist.githubusercontent.com/omerlh/cc5724ffeea17917eb06843dbff987b7/raw/1e58c8850aeeb6d22d8061338f09e5e1534ab638/daemonset.yaml
Now you can watch the pods running, using the following kubectl command:
kubectl get pods -w -l app=disk-checker
You should see output similar to this, with more pods (depending on the number of nodes running in your cluster):
[Screenshot: kubectl output listing one disk-checker pod per node]
Now it's time to wait – it will take a while for each pod to finish going over all the files on the node's filesystem. When all the pods have completed, it's time to inspect the logs to find out which files are the biggest:
kubectl logs -l app=disk-checker -p
The -p flag is needed because of DaemonSet behavior – once the pod completes, it is restarted. I want the logs from the previous run, not the current one. The output should look like this:
[Screenshot: du output showing the 20 biggest files on each node]
Easy, right? Just don't forget to clean up the DaemonSet when you're done:
kubectl delete daemonset disk-checker
Conclusion
After going over this process, I was able to find out which files were consuming most of the space. The next step was to investigate – why do these files take up so much space?
After discussing the findings with my colleagues (thank you Shay Katz and Yaron Idan!), I found out that I was barking up the wrong tree. It looks like the kubelet starts cleaning up images only when disk usage goes over 85% (see the official documentation here). So actually, any disk usage below 85% is normal.
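For reference, these thresholds are configurable on the kubelet. Here is a rough sketch of the relevant KubeletConfiguration fields, with what I understand to be the documented defaults:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Image garbage collection starts when disk usage crosses the high threshold,
# and deletes unused images until usage drops back below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80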
To conclude, today I learned how to find out which files consume the most space on Kubernetes nodes, and also how the kubelet's image cleanup policy works. But most importantly, I learned that I should always ask "is this actually a problem?" before starting to investigate.
It did not work for me. Is the link still valid? Please confirm.
Sreekanths-MacBook-Air:Practicals sreekanthadari$ kubectl apply -f https://gist.githubusercontent.com/omerlh/cc5724ffeea17917eb06843dbff987b7/raw/8dfa37046ae04cd602a44b70ade5b747fcf1b/daemonset.yaml
error: unable to recognize "https://gist.githubusercontent.com/omerlh/cc5724ffeea17917eb06843dbff987b7/raw/8da2dfa37046ae04cd602a44b70ade5b747fcf1b/daemonset.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Sorry! The link was outdated. I fixed the gist to support the newest Kubernetes versions, but did not update the blog post.
Thanks for letting me know!