Livestatus questions

Situation: three decentral sites, one central site, connected via Livestatus.

  1. What is the poll interval of Livestatus to the decentral sites if persistent connections are not enabled? I could not find anything in the settings, e.g. to request every second or every 10 minutes.
  2. Can I filter what is sent via Livestatus to the central server (e.g. exclude specific hosts or services)? Use case example: site decentral-1 monitors hostABC, but hostABC is not sent via Livestatus and is thus not available on site central.
  3. How can I ensure on the central site that only monitoring information is transferred? I am thinking of someone exposing non-Livestatus data via this channel or sending malicious code to the central site for RCE or DoS.

best regards, Dennis

There is no poll interval, as data is only requested when needed. E.g. when you open a view in Checkmk’s GUI, the web application will request the data for that view from all connected sites via Livestatus.
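If you want to see that on-demand behaviour yourself, you can send a single query to a site’s Livestatus socket by hand. Below is a minimal Python sketch of such a query, the same kind of request the GUI issues when rendering a view. The site name and socket path are assumptions for a default OMD setup, so adjust them to yours.

```python
#!/usr/bin/env python3
"""Minimal sketch of a one-shot, on-demand Livestatus query."""
import json
import socket

# Assumption: a local OMD site named "central" with the default socket path.
SOCKET_PATH = "/omd/sites/central/tmp/run/live"

# Livestatus Query Language: header lines, terminated by a blank line.
QUERY = (
    "GET hosts\n"
    "Columns: name state\n"
    "OutputFormat: json\n"
    "\n"
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)
    sock.sendall(QUERY.encode("utf-8"))
    sock.shutdown(socket.SHUT_WR)  # signal end of query; Livestatus then answers
    response = b"".join(iter(lambda: sock.recv(4096), b""))

print(json.loads(response))  # e.g. [["hostABC", 0], ...]
```

Nothing is pushed from the remote side here; traffic only happens while a query like this (or a GUI view that issues one) is active.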

AFAIK this is not possible.

Livestatus is encrypted nowadays. IMHO it is not an attack vector, but your risk assessment may come to another conclusion.


There is no poll interval, as data is only requested when needed. E.g. when you open a view in Checkmk’s GUI, the web application will request the data for that view from all connected sites via Livestatus.

OK, I started Wireshark to verify your statement. As long as there is no active browser tab on the central site, there is no data traffic between central and decentral. With an open browser tab there is a 30-second interval, caused by the refresh interval of the page. So the interval is client-dependent, but driven by the central site. I see.

AFAIK this is not possible.

Thanks, that is a pity.

Livestatus is encrypted nowadays. IMHO it is not an attack vector, but your risk assessment may come to another conclusion.

Encryption is no security per se; TLS encryption protects the transport only. If the data is laced with malware before it gets encrypted, the receiving side retrieves encrypted malware, decrypts it and executes it.

Use case: your remote Checkmk server is located in the network of your customer. Someone gains access to this remote server (e.g. via a buffer overflow in sshd or Checkmk’s new host registration service), manipulates this Checkmk server, and then you pull the potentially malicious stream onto your own server.

All in all, I’m with you: the chances are small, but there are indeed loopholes.

@Maximilian do you want to weigh in here from an official Checkmk security perspective? :)


Hey,
so the Livestatus data is not executed on the central site. If you “just” monitor a remote site without config sync, you should be fine.
That said, the results are unreliable if the remote site was compromised…
BR, Max

Why is this a problem?

I think it is important to look at how Livestatus data is processed and whether manipulated data could be used to gain a foothold on the central instance.

Yes and no. The data is not executed in the traditional sense, but it will (probably) be parsed by a lot of libraries, e.g. regex engines or the like. While passing through that pipeline, the data can be misinterpreted, and malicious code could consequently be executed, for example via a buffer overflow.

A point in CMK’s favour is that every site runs as a separate non-privileged user, so an additional privilege escalation flaw would be needed to gain root access.

Why is this a problem?

Let’s imagine three subsidiaries and a main company, connected but legally separate.
The main company provides the IT in this group structure and also monitors it; the subsidiaries want to use Checkmk as a uniform tool to monitor their own devices as well. Some things therefore do not have to and should not be transmitted to central IT.

Wouldn’t monitoring those things from a separate site (not connected via Livestatus to the parent company’s central site) solve that problem? (this would of course only work on the host level, not on the service level)

Wouldn’t monitoring those things from a separate site (not connected via Livestatus to the parent company’s central site) solve that problem? (this would of course only work on the host level, not on the service level)

Of course it would. But given the complexity, this leads to redundant work.

I’m sorry that we technical people have to solve problems and concerns that are created by non-technical people.

This is more of a legal problem.
These devices (the subsidiaries’ own) and those (the main company’s) cannot be monitored on the same instance. If you do that, you no longer have a legally separated setup. But for a correct answer you should consult your own legal department.
It is always a question of who owns what and how the organisation is structured.

PS: I don’t mean legally in terms of the Checkmk license, but legally from the point of view of separate companies.

To close this side path: the (maybe not well chosen) example was just meant to illustrate the technical problem. Don’t think about the legal stuff. The question is: can a decentral site decide which status information is receivable by the central instance? The answer seems to be: no, it cannot.

One approach may be to utilize cmcdump (or similar?), filter out what should not be sent, and transfer the result to the central server via a shadow site. But the latency from the down event to its display on the central site would be high, and so would the complexity.
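Purely to illustrate the idea, not as a tested recipe: a line-based filter could sit between cmcdump and the shadow site’s socket. The hypothetical cmcdump_filter.py below assumes (based on this thread, not on documented guarantees) that cmcdump emits line-oriented Livestatus commands in which the host name appears as a semicolon-separated field; the host list and the field layout are assumptions you would have to verify against real cmcdump output.

```python
#!/usr/bin/env python3
"""Hypothetical filter for a cmcdump stream (sketch, not production code).

Reads cmcdump output on stdin and suppresses every line that references
one of the excluded hosts, passing the rest through to stdout unchanged.
"""
import re
import sys

EXCLUDED_HOSTS = {"hostABC"}  # hosts that must not reach the central site

# Assumption: host names show up as standalone ;- or whitespace-delimited fields.
FIELD_SPLIT = re.compile(r"[;\s]+")

for line in sys.stdin:
    fields = set(FIELD_SPLIT.split(line.strip()))
    if fields & EXCLUDED_HOSTS:
        continue  # drop anything mentioning an excluded host
    sys.stdout.write(line)
```

On the shadow site the plumbing could then look roughly like `cmcdump | cmcdump_filter.py | unixcat tmp/run/live`; treat that pipeline as an assumption as well, and keep in mind that a naive line filter inherits all the parsing concerns raised earlier in this thread.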

That is exactly what I do with some customer sites. Latency is normally no problem here, as the notifications/alert handlers are triggered on the remote site. I fetch the cmcdump data every minute.

Maybe I’m overlooking something, but why would a cmcdump of a compromised site be more trustworthy than the Livestatus data? How would you filter it? Some regex magic, or do you plan on implementing your own cmcdump parser?
Livestatus is an official API while cmcdump IMHO is not, so the parsers/interpreters of Livestatus are way better in regards to security.

cmcdump is just Livestatus commands. In the end, the output of cmcdump is piped into the Livestatus socket.