Monitor DB instances cluster changes

We have a problem with our new Checkmk setup: we can no longer monitor the machines that host SQL DB instances; at the moment we only monitor them via ping. We would like an automatic check to detect when a resource moves from one cluster node to another. Does anyone have any advice?

A little bit more information about the problem would be nice.
The normal agent should work on the Windows cluster as before.
Please also tell us which check plugins you use.
If your cluster is a normal active-passive cluster, and the check plugin only works on the active node or produces output for the monitored resource on one node only, then all should be fine.

Hi Andreas,
Let’s try to be more specific.
In our environment we have a SQL cluster made of 4 nodes.
The resources of this cluster are actually virtual machines hosting databases.
What we want to do is monitor each resource as a virtual machine using the Check_MK agent, which is already installed on them.
But we want the clustered services to be monitored at the cluster level.
We would also like the system to be able to tell us where each resource is currently running.
i.e.:
NODE1
NODE2
ResourceA running on NODE1

If we move ResourceA from NODE1 to NODE2, we would like the monitoring system to pick up this move automatically.

At the moment all resources are monitored with the “No Agent” check → just ping.
This is because otherwise on each resource we would see information about all the other ones, and we do not want that.

I hope this is clear enough; otherwise please let me know.

Thank you

Marco

I will describe what you should do.

Set up all your nodes inside CMK as normal hosts with the normal agent configuration.
This must work without problems.
The next step is to create a cluster host inside CMK for every service IP inside your MS cluster.
Then use the rule “Clustered services for overlapping clusters” to assign services to your configured clusters. This should be done for all cluster-controlled services (the Windows service should have start type “manual”) and for cluster filesystems. If you have Cluster Shared Volumes, these volumes must be visible on all nodes, and no cluster assignment should be done for them.
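To make the idea above concrete, here is a sketch in the classic main.mk configuration style (older Checkmk 1.x syntax; in current versions you would do the same thing through the Setup GUI rules). All host and service names here (node1…node4, sqlcl-resA, the service and filesystem names) are made-up placeholders, not anything from your environment:

```python
# main.mk -- illustrative sketch only; names are placeholders.

# The four physical nodes are configured as normal agent hosts.
all_hosts = [
    "node1",
    "node2",
    "node3",
    "node4",
]

# One cluster host per service IP / cluster resource, listing the
# nodes that resource can run on. Checkmk then reports the clustered
# services on "sqlcl-resA", taking the data from whichever node is
# currently active -- so a failover is picked up automatically.
clusters = {
    "sqlcl-resA": ["node1", "node2", "node3", "node4"],
}

# "Clustered services for overlapping clusters": assign the
# cluster-controlled services (service name patterns) to a specific
# cluster host instead of to the nodes.
clustered_services_of = {
    "sqlcl-resA": [
        "MSSQL RESA",       # the cluster-controlled SQL service
        "Filesystem R:/",   # a cluster filesystem owned by this resource
    ],
}
```

The point of the per-resource cluster host is exactly the requirement from the question: when ResourceA moves from NODE1 to NODE2, the services on the cluster host simply start getting their data from NODE2, with no reconfiguration needed.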

For your database it is a little bit more complicated.

This is a strange setup. If you see information about all running databases on every node, then you do not have an active-passive cluster but an SQL cluster with high availability.

That means that for more advice, we would need the exact configuration of your database setup and some output samples from your Windows agent.


Thank you, I was looking for this. It’s unfortunate that the ruleset for overlapping clusters isn’t shown when you edit clustered services for a cluster host (or at least, I couldn’t find it there); I realise it’s a bit of an edge case, but it really shouldn’t be hidden so much 🙂