Thank you for your quick response. I see that the cluster should be monitored as a host, but I’m having trouble with the steps.
I’ve deleted the hosts and tried these combinations:
New Cluster -> Host 64.102.189.120 -> save -> Error: The cluster must have at least one node.
New Cluster -> Host 64.102.189.120 -> Node 64.102.189.120 -> save -> Error: The cluster can not be a node of its own.
New Cluster -> Host kube1 -> Node 64.102.189.120 -> save -> Error: The node 64.102.189.120 does not exist (must be a host that is configured with WATO).
Then I add 64.102.189.120 as a host of its own. This has two problems: it can’t be pinged because of OpenStack, and it doesn’t have the Check_MK agent installed.
However, I think that’s OK. As the instructions state in section 2.5:
…So that the nodes are also monitored, you must also create them as hosts in WATO…
…Unless you have a Check_MK agent installed on the nodes themselves (which would generally be rather unusual), you will need to set the Check_MK Agent to No agent.
Once I create that host and set it to No agent, I can see 64.102.189.120 in Check_MK. It has one service (PING), which fails.
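If I’m reading the WATO-generated config correctly, that agentless node ends up in a hosts.mk fragment roughly like this (the file path and the tag names for “No agent” are my guess, so treat it purely as a sketch):

```python
# Rough sketch of the WATO-generated entry for the agentless node,
# e.g. in etc/check_mk/conf.d/wato/<folder>/hosts.mk.
# The "no-agent" / "ping" tag names are a guess, not authoritative.
all_hosts += [
    "64.102.189.120|no-agent|ping|wato",
]

# Explicit IP address, so Check_MK does not need DNS for the node.
ipaddresses.update({
    "64.102.189.120": "64.102.189.120",
})
```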
This allows me to successfully do
New Cluster -> Host kube1 -> Node 64.102.189.120 -> save
Now we have a cluster, but no services.
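For reference, my understanding is that the cluster itself is written out by WATO as something like the following (tag decorations omitted; again just a sketch from memory, the exact file varies by version):

```python
# Sketch of the cluster definition WATO generates (clusters.mk / hosts.mk):
# the cluster "kube1" with its single node.
clusters.update({
    "kube1": ["64.102.189.120"],
})
```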
At this point, I follow the instructions for creating the Datasource Programs -> Kubernetes rule. This time around I explicitly add the kube1 host to the rule’s conditions, so the rule knows where to apply; I had left the hosts blank before, which was probably my error.
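As I understand the mechanics, a datasource program rule just tells Check_MK to run a command for that host instead of contacting a normal agent, and to treat the command’s stdout as agent output. A minimal sketch of that idea (the agent_kubernetes command name and its flags here are assumptions on my part, not the real invocation):

```python
import subprocess

def fetch_agent_output(host: str) -> str:
    """Conceptual sketch only: with a datasource program rule, Check_MK
    runs a configured command for the host (here a hypothetical
    agent_kubernetes invocation) instead of querying an agent, and treats
    the command's stdout as the agent output it discovers services from."""
    cmd = ["agent_kubernetes", "--token-from-file", "/path/to/token", host]  # hypothetical flags
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Hypothetical usage: show the first lines of the "agent" output for kube1.
    print("\n".join(fetch_agent_output("kube1").splitlines()[:20]))
```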
My cluster now has two services: PING, which is CRITICAL, and Check_MK Discovery, which is green with the message “OK - no unmonitored services found, no vanished services found”.
I’ll admit I’m still learning Check_MK, so if there’s something obvious I’m not doing, let me know. I can connect via kubectl and get nodes and pods, so I’m pretty sure the cluster itself is up.
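(For completeness, this is the Python-client equivalent of the kubectl check I’m doing, purely as an illustration, assuming the usual ~/.kube/config:)

```python
from kubernetes import client, config

# Same information as `kubectl get nodes` and `kubectl get pods -A`,
# using the same kubeconfig that kubectl uses.
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name)
```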
Any ideas?
Joe
“There are only two industries that refer to their customers as ‘users’.” - Edward Tufte