Native Kubernetes Monitoring - Checkmk Raw Edition 2.1.0p18

Hi,

We are trying to configure native Kubernetes monitoring in Checkmk Raw Edition 2.1.0p18 and are following the document below.

But we are stuck at "Providing communication via Ingress". Can we use our existing ingress service for the Checkmk collector configuration, and how do we configure that in the YAML file?
We do have an ingress in our environment, but we don’t understand how to configure it in the YAML file…

Secondly, for configuring the cluster collector for HTTPS there are three certificate values, namely clusterCollectorKey, clusterCollectorCert, and checkmkCaCert. The first two belong to the K8s cluster, whereas checkmkCaCert belongs to our Checkmk server… am I correct in my understanding?

Please help, we are quite curious to explore native K8s monitoring in Checkmk.

BR
//Prachi

Hi,

Any suggestion on the ingress configuration in the YAML? We have the name of the Ingress along with the IP addresses and ports of two load balancers. As per the screenshot in the Checkmk documentation below, do we need to add only the name of the ingress under hosts?
And what needs to be updated under paths? Or does only enabled have to be set to true, with the rest of the configuration left as it is?

We will start with the configuration in an hour, any ideas please :)

BR
//Prachi

You can define the existing ingress in the “host:” field. Just replace checkmk-cluster-collector.local with that value and use the corresponding annotations and className depending on the ingress controller you use.
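To illustrate, a minimal sketch of the ingress section, assuming the layout of the collector Helm chart’s values.yaml and an nginx ingress controller (the host name is a placeholder; verify field names against your chart version):

```yaml
ingress:
  enabled: true
  className: "nginx"            # set to your ingress controller's class
  annotations:
    # nginx-specific; adjust or remove for other controllers
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - host: my-existing-ingress.example.com   # placeholder: your ingress FQDN
      paths:
        - path: /
          pathType: Prefix
```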

This is used for the cluster collector’s SSL certificate verification.
The checkmkCaCert is basically the CA certificate that will be mounted into the node collector to verify the cluster collector’s certificate.
Also documented here: Monitoring Kubernetes
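To make the mapping concrete, this is roughly how the three values sit in the chart’s values.yaml (a sketch; the exact key names and nesting depend on the chart version, so check your own values.yaml):

```yaml
tlsCommunication:
  enabled: true
  verifySsl: true
  # key + certificate pair served by the cluster collector itself
  clusterCollectorKey: |-
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  clusterCollectorCert: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # CA certificate mounted into the node collector to verify the cluster collector cert
  checkmkCaCert: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```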

Thank you so much!!
Very sorry for bothering you :(

So all three certificates need to be generated on the K8s cluster? We have SLES SP4 as the OS on the K8s nodes.
We have a certificate portal in our organization where we submit the CSR.

  1. We create a conf file like the one below, where we specify all the details such as CN, O, OU, etc.:

```
[ req ]
default_bits = 2048
distinguished_name = dn
req_extensions = req_ext
[ dn ]
CN = <service_name>. .cluster.local   # What should be the service name here?
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS = <service_name>. .cluster.local   # What should be the service name?
```

  2. Then we run the command:

```
openssl req -new -out sslcert.csr -keyout private.key -config config.cnf
```

Here we will get the key, which would be the cluster collector key, and after submitting the CSR we will get the cluster collector server certificate from my organization. Now, where will I get the checkmkCaCert from?
Do I have to create another conf file on the K8s cluster for the checkmkCaCert, with the FQDN being the name of the ingress?

BR
//Prachi

Hi,

We used the Ingress but didn’t define the annotations. After installing the collector we got the token and the CA cert, but when we ran the last command to verify the setup, it could not resolve the host. We suspect this may be because we didn’t define the annotations. Could you give us an example of the input and output of how you configure the Ingress and what output you get? Do we need to remove the line below the annotations?

BR
//Prachi

If you don’t use the nginx controller, then just comment out the whole annotations block.
Are you using an on-prem or a managed cluster?
Also, which version of Kubernetes?
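For illustration, the ingress block with the nginx-specific annotations commented out might look like this (a sketch; the class name and host are placeholders, and key names follow a typical Helm chart layout):

```yaml
ingress:
  enabled: true
  className: "traefik"   # placeholder: whatever class your controller uses
  # annotations:
  #   nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - host: my-existing-ingress.example.com
      paths:
        - path: /
          pathType: Prefix
```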

Hi,

Thanks, our Ingress part is sorted today. Our Checkmk server is able to access K8s through the ingress :)

Now, while configuring the special agent on our Checkmk server: as we now have an Ingress, why do we need an API server connection endpoint? Can’t we give the same Ingress URL as the API server connection as well?

Secondly, ours is Checkmk Raw Edition 2.1.0p18, so I have created a host with no IP and ran the script below on the Checkmk site:
OMD[kubernetes]:~$ share/doc/check_mk/treasures/find_piggy_orphans
But then where will I see the pods, worker nodes, and so on? Do I need to create another host apart from the one created above with no IP? If not, do we need to enable the piggyback option below on the host created above with no IP?

BR
//Prachi

There are two API endpoints which have to be defined in the rule. One is the standard Kubernetes API endpoint, and the other one is the cluster collector API endpoint (which will be available after you have deployed the Helm chart).

First, you have to make sure that your Kubernetes special agent is able to fetch data from your Kubernetes cluster.

Here, you need to define the Kubernetes cluster API endpoint, which you can determine via “kubectl config view”. This is also explained here: Monitoring Kubernetes
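For reference, the endpoint is the server field in the kubeconfig; abridged, hypothetical kubectl config view output (cluster name and address are placeholders):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://xx.xx.xx.xx:6443   # this is the API server connection endpoint
  name: my-cluster
```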

Secondly, ours is Checkmk Raw Edition 2.1.0p18, so I have created a host with no IP and ran the script below on the Checkmk site:
OMD[kubernetes]:~$ share/doc/check_mk/treasures/find_piggy_orphans
But then where will I see the pods, worker nodes, and so on? Do I need to create another host apart from the one created above with no IP? If not, do we need to enable the piggyback option below on the host created above with no IP?

The find_piggy_orphans script will only work when your special agent is able to talk to your cluster (whether you use just the standard API endpoint or both the standard and the cluster collector endpoints).

Hi,

When we do kubectl config view on our K8s cluster, we get https://xx.xx.xx.xx:6443, but when we put this URL in the Kubernetes rule of Checkmk, it’s not able to connect.

In fact, I included this IP in /etc/hosts on the Checkmk server, but when I telnet to this IP on port 6443, it’s not connecting.

BR
//Prachi

This looks like a classic firewall issue.

Hi,

Just now I was able to resolve this as well, and I am getting all the Kube cluster services :)

Now, how will I get to see all the pods, worker nodes, and services running? Because ours is the CRE 2.1.0p18 version, as per the documentation below.

I ran the command `share/doc/check_mk/treasures/find_piggy_orphans` and could see all my Kubernetes clusters, pods, and everything on my Checkmk server, but how will I see that info in the Checkmk GUI?

BR
//Prachi

Since it’s the CRE, you have to add them one by one manually, or use the REST API to add them all in a single shot.

With the CEE it is quite easy: just use the dynamic host configuration, which adds the piggyback hosts automatically.

How can I use the REST API to add them in one shot?

To bulk-create hosts:

See the “Bulk create hosts” example in The Checkmk REST API
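A minimal sketch of building such a bulk-create request, assuming the REST API’s bulk-create action on the host_config endpoint. The server, site, credentials, and host names are placeholders, and the tag_address_family: no-ip attribute (meant to mark the hosts as having no IP, since piggyback data arrives via the cluster host) should be verified against your Checkmk version’s REST API documentation:

```python
import json

# Placeholders -- adjust to your environment.
SERVER = "checkmk.example.com"
SITE = "kubernetes"
USERNAME = "automation"
SECRET = "automation-secret"

API_URL = f"https://{SERVER}/{SITE}/check_mk/api/1.0"

# Piggyback host names as reported by find_piggy_orphans (examples only).
piggyback_hosts = ["mypod-checkmk-cluster-collector", "worker-node-1"]

# One entry per host; piggyback hosts need no IP address of their own.
payload = {
    "entries": [
        {
            "host_name": name,
            "folder": "/",
            "attributes": {"tag_address_family": "no-ip"},
        }
        for name in piggyback_hosts
    ]
}

headers = {
    "Authorization": f"Bearer {USERNAME} {SECRET}",
    "Content-Type": "application/json",
}

# POST json.dumps(payload) with these headers to
#   f"{API_URL}/domain-types/host_config/actions/bulk-create/invoke"
# and then activate the pending changes (activation_run endpoint).
print(json.dumps(payload, indent=2))
```

After the POST, the hosts show up in Setup as hosts without an IP, and the piggyback data from the cluster host is assigned to them on the next service discovery.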

Hi,

Okay, yeah, then CEE is better.
Also, we can see the number of pods running in the services, but is it possible to see the names of the pods and containers as well?

BR
//Prachi

Yes, this should be possible out of the box.

For the whole list:
Check Plug-Ins Catalog

Hi,

Kubernetes monitoring works fantastically in the new Checkmk Cloud version, and it’s free for a month :)
All the K8s cluster hosts are added automatically, which is not the case in the Raw Edition. I also liked the HW/SW inventory feature.

The only thing we are now looking at: suppose there is a host which has 4 pods running, as shown in the services in Checkmk. Can we see the names of those pods?
In the example below we can see that 4 pods are running, but where can we see their names?


I tried using the kube_pod_containers plugin mentioned above, but it doesn’t give the names of the pods running on that host.

BR
//Prachi

Good to hear that it works fine for you.

The only thing we are now looking at: suppose there is a host which has 4 pods running, as shown in the services in Checkmk. Can we see the names of those pods?
In the example below we can see that 4 pods are running, but where can we see their names?

If you want to find the pods with a particular status, then simply put the following in the Quicksearch:

h: pod.* s: Status

and then choose the filter to find which pods are Running, Pending, Succeeded, Failed, or Unknown.