Having issues connecting to Livestatus from an external host. I assume the solution is to add an entry to the hosts.allow file, as you need to for check_mk_agent, but I don't know what to add. I am using Nagios Core and have tried adding nagios: ALL, but no luck. Anyone have a solution?
Edit: I have a workaround: I have allowed ALL: servername so my remote server can access Livestatus. Not ideal, but it should still be secure, since the local firewall only allows access to port 6557 from that server.
The Livestatus port has to be configured first; then you can verify that it is reachable from the external server using telnet or nmap, e.g. telnet <hostname> 6557.
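For example, run something like this from the external server (the hostname here is a placeholder for your monitoring host):

```
# check whether the Livestatus port is reachable from this machine
telnet monitoring.example.com 6557

# or probe the port state with nmap
nmap -p 6557 monitoring.example.com
```

If telnet connects (and Livestatus answers a query like "GET status"), the network path is fine and any remaining problem is on the host itself.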
If the port is not reachable, it is probably blocked by a firewall.
You can configure in the firewall which hosts are allowed to make Livestatus queries.
Follow this link for more details:
I think you need to do the following on your monitoring server:
From version 1.5.0 you can also restrict access to specific IP addresses with the omd command:
omd config set LIVESTATUS_TCP_ONLY_FROM '127.0.0.1 <IPs you want to allow, separated by spaces>'
Or you can do it via the interactive omd config dialog. Log in as the site user,
run omd config,
then go to Distributed Monitoring > LIVESTATUS_TCP_ONLY_FROM.
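Putting the steps above together, a minimal sketch as the site user might look like this (192.0.2.10 is a placeholder for your external server's IP; omd config set generally requires the site to be stopped first):

```
omd stop
omd config set LIVESTATUS_TCP on
omd config set LIVESTATUS_TCP_ONLY_FROM '127.0.0.1 192.0.2.10'
omd start
```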
Hi, sorry, I should have elaborated. Port 6557 is open on the firewall, I can connect to Livestatus fine on the VM running Checkmk by connecting to 127.0.0.1:6557 (ALL: localhost is set in hosts.allow), and LIVESTATUS_TCP_ONLY_FROM in the omd config is set to 0.0.0.0. The external machine is not able to connect: it passes the firewall, but tcp_wrappers blocks the connection. The only way it works is to put ALL: externalVM into hosts.allow. Ideally this needs to be daemonname: externalVM, but I don't know what the daemon name is for Livestatus. This is a pain-in-my-ass feature of RHEL: you have to allow it in the firewall and in hosts.allow; you don't have to do this on CentOS.
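My best guess so far, untested: when Livestatus runs under xinetd, tcp_wrappers usually matches on the xinetd service name (which is "livestatus" in the shipped config), so an entry like this might be the one I'm looking for (externalVM is of course the placeholder for my remote server):

```
# /etc/hosts.allow -- "livestatus" assumed to be the xinetd
# service name that libwrap checks against (unverified)
livestatus: externalVM
```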
In my setup I am not using tcp_wrappers, but I have an OEL6-based monitoring server on which I have two sites: one master and one slave (with Livestatus configured on port 6557).
As you can see from the configuration below, Livestatus here is managed by xinetd.
This is how $OMD_ROOT/etc/xinetd.d/mk-livestatus looks on the site configured with Livestatus:
service livestatus
{
        type            = UNLISTED
        socket_type     = stream
        protocol        = tcp
        wait            = no
        # limit to 100 connections per second. Disable for 3 secs if above.
        cps             = 100 3
        # set the maximum number of allowed parallel instances of unixcat.
        # Please make sure that this value is at least as high as
        # the number of threads defined with num_client_threads in
        # etc/mk-livestatus/nagios.cfg
        instances       = 500
        # limit the maximum number of simultaneous connections from
        # one source IP address
        per_source      = 250
        # Disable TCP delay; makes the connection more responsive
        flags           = NODELAY
        # configure the IP address(es) of your Nagios server here:
        only_from       = 0.0.0.0
        # ----------------------------------------------------------
        # These parameters are handled and affected by OMD.
        # Do not change anything beyond this point.
        # Disabling is done via omd config set LIVESTATUS_TCP [on/off].
        # Do not change this:
        disable         = no
        # TCP port number. Can be configured via LIVESTATUS_TCP_PORT.
        port            = 6557
        # Paths and users. Manual changes here will break some omd
        # commands such as 'cp', 'mv' and 'update'. Do not touch!
        user            = noris
        server          = /omd/sites/test/bin/unixcat
        server_args     = /omd/sites/test/tmp/run/live
}
So you can restrict the only_from directive in this xinetd service definition to the particular IPs that are allowed to make Livestatus queries.
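For example, to allow only localhost and a single Nagios server (192.0.2.10 is a placeholder address), the only_from line would look like this:

```
# space-separated list of addresses allowed to connect
only_from = 127.0.0.1 192.0.2.10
```

After editing, xinetd has to re-read its configuration (e.g. service xinetd reload, or restarting it) for the change to take effect.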
Hi marco, thanks for the reply, but my server is configured the same as yours: listening on 6557 and open to all in the xinetd configuration (this is the default when you enable Livestatus). You need to be on RHEL 7+ to hit the tcp_wrappers issue I am dealing with.