How To (Securely) Monitor Multiple Kubernetes Clusters?

We are able to create a monitor/rule for a single Kubernetes cluster following the guide here: https://checkmk.com/cms_monitoring_kubernetes.html . We use the Raw Edition of Checkmk, version 1.6.0p16.

Problem: We cannot securely (via HTTPS) monitor more than one Kubernetes cluster on a single instance of Checkmk.

We have two k8s clusters, cluster-1 and cluster-2. We have created the check-mk ServiceAccount and namespace in both clusters, and we obtain the certificate and token from the secret in the respective clusters. Following the documentation linked above, we can configure a single cluster to be monitored by Checkmk, but whenever we go to configure the next cluster, it fails due to SSL errors. It’s almost as if Checkmk is using the first cert it finds under Global Settings > Site Management. We have two certs defined there. (I had a screenshot of this, but I got an error saying new users can only upload a single media item per post.)
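
For reference, we extract the cert and token with something like the following (the check-mk namespace and ServiceAccount names come from the RBAC YAML; the exact commands are just an illustration and may differ slightly for your cluster version):

```
# Assumes the ServiceAccount and namespace are both named "check-mk",
# as created by the RBAC YAML, and that the cluster still auto-creates
# a token secret for the ServiceAccount (Kubernetes < 1.24).
SECRET=$(kubectl -n check-mk get serviceaccount check-mk \
  -o jsonpath='{.secrets[0].name}')

# CA certificate: this is what we paste under Global Settings > Site Management
kubectl -n check-mk get secret "$SECRET" \
  -o jsonpath='{.data.ca\.crt}' | base64 --decode > cluster-1-ca.crt

# Bearer token: this is the "password" we put into the Kubernetes special agent rule
kubectl -n check-mk get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 --decode > cluster-1.token
```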

Again, if I configure cluster-1 monitoring first, cluster-1 will be monitored. Once I add cluster-2 monitoring it fails due to SSL errors. However, if I reverse the order in which I create the monitor, cluster-2 will be monitored, but cluster-1 will fail due to SSL errors.

Our theory is that when creating the Kubernetes special agent rule, we only specify the token (password) to use, but have no option to specify which cert to use.

It is not acceptable for us to use the same SSL certificate across multiple clusters for production. That’s an absolute non-starter.

Any advice or troubleshooting steps that can be provided would be greatly appreciated.

I am including a screenshot of how our certs are configured in Checkmk.

Do both certs have the same Common Name?
If yes, then that is the problem.

It appears both certs have the same Common Name. I extract the credentials from the Kubernetes clusters the same way in both cases and then copy/paste the output (certs and/or tokens) directly into the Checkmk UI.
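
For anyone who wants to verify this themselves, the subject (including the Common Name) of each extracted CA cert can be checked with openssl, something like this (file names are just examples):

```
# Print the subject (including the Common Name) of each extracted CA cert.
# If both print the same subject, an SSL client cannot tell them apart.
openssl x509 -in cluster-1-ca.crt -noout -subject
openssl x509 -in cluster-2-ca.crt -noout -subject
```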

I don’t recall specifying the Common Name. I think the Common Names are defined within the certs generated when deploying the RBAC yaml to the target k8s cluster.

Is there a way to control Common Name somewhere within the RBAC yaml file? https://github.com/tribe29/checkmk/blob/master/doc/treasures/kubernetes/check_mk_rbac.yaml

kube-ca should be the internal master CA of your cluster.
As I’m no Kubernetes specialist, I cannot give you any hints on what to configure to get unique names there for your two clusters.

These should not be the server certificates but the CA certificates that sign the server certs.
And as @andreas-doehler pointed out, the certificate subjects need to differ, as otherwise an SSL client is not able to distinguish between them.

@r.sander and @andreas-doehler

Thank you both so much! I now understand our issue, and I’ll see what I can do about modifying the names of the internal master CAs for each cluster. Now I know those names need to be unique.
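
In case it helps others: one way to end up with distinct subjects is to create each cluster’s CA with its own Common Name up front. The snippet below is only an openssl illustration; how the CA actually gets installed or rotated as the cluster CA depends on your Kubernetes distribution.

```
# Illustration only: create a CA with a cluster-specific Common Name.
# Installing it as the cluster CA is distribution-specific; kubeadm, for
# example, will use a pre-existing ca.crt/ca.key in /etc/kubernetes/pki.
openssl req -x509 -new -nodes -newkey rsa:4096 -days 3650 \
  -keyout cluster-1-ca.key -out cluster-1-ca.crt -subj "/CN=cluster-1-kube-ca"

openssl req -x509 -new -nodes -newkey rsa:4096 -days 3650 \
  -keyout cluster-2-ca.key -out cluster-2-ca.crt -subj "/CN=cluster-2-kube-ca"
```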
