Hello,
I am trying to set up distributed monitoring with 4 new remote sites. I have created and configured them as described in the documentation, but I only get a connection on one of them; on 2 of them I get no connection at all.
All 3 remote sites and the 1 main site are on the same Debian server. Every single remote site is configured with the same settings (only the ports differ).
Well, from what I tried, you can't have a main site and multiple remote sites on the same device/server (not 100% sure; someone more experienced with distributed monitoring could tell if that is true).
I tried this and only got the main site and the hosts from one of the remote sites. But no hosts from the second one.
Different remote sites using the same address and port (both 6557 in your case) will not work, of course.
In the remote site tes2 (possibly a typo for “test2”?), you need to configure a different livestatus port. Then use this different port to connect to tes2 from your central site.
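As a sketch, the livestatus port can be changed with `omd config` on the remote host. The site name “test2” and port 6558 here are placeholders; pick a port no other site on the box uses, and note that the site must be stopped while changing livestatus settings:

```shell
# On the remote host (site name "test2" and port 6558 are examples).
omd stop test2
omd config test2 set LIVESTATUS_TCP on
omd config test2 set LIVESTATUS_TCP_PORT 6558
omd start test2
```

Afterwards, the distributed monitoring connection on the central site has to point at this new port instead of 6557.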
So the remote site itself is using 6550/tcp for livestatus, OK.
The distributed monitoring connection has to be configured to use this port.
In your previous screenshot, the central site was configured to connect to 6557/tcp instead.
In your last screenshot, this information is hidden (click the small triangle to show it).
No. As long as different ports are being used, a central site can include multiple remote sites on the same host, be it localhost or another remote host.
Is the test2 site running? Is the livestatus port being listened on? Check with netstat/ss, and try to connect to it manually using telnet or netcat.
Are all sites using the same version?
Perhaps there is still something wrong with the encryption between your central site and test2.
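The two checks above can be sketched as a small self-contained script. The port 16558 and the throwaway Python listener are stand-ins so the demo runs anywhere; on a real remote site you would check the site's actual livestatus port (e.g. 6557) against the real socket:

```shell
#!/usr/bin/env bash
# Demo of the two checks: is the port listened on, and can we connect?
set -e
PORT=16558  # placeholder; use your site's real livestatus port

# Throwaway listener standing in for the site's livestatus socket.
python3 -c "
import socket, time
s = socket.socket()
s.bind(('127.0.0.1', $PORT))
s.listen(1)
time.sleep(30)
" &
LISTENER=$!
sleep 1

# 1) Is anything listening on the port? (ss, with netstat as a fallback)
(ss -tln 2>/dev/null || netstat -tln) | grep ":$PORT"

# 2) Can we actually connect? bash's /dev/tcp works even without telnet/netcat.
exec 3<>"/dev/tcp/127.0.0.1/$PORT" && CONNECT_STATUS=ok
echo "connect ${CONNECT_STATUS}"
exec 3<&- 3>&-
kill $LISTENER
```

If step 1 shows nothing, livestatus is not enabled or the site is down; if step 1 succeeds but step 2 fails, look at firewalls between the hosts.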
I used port 6558. It is not blocked by the firewall, and I also configured it in the omd config of my 2nd site. I can also connect to it via telnet, but it is still shown as unknown in my distributed monitoring.
I have a similar ‘Unknown’ issue.
I have a Checkmk distributed monitoring setup with 1 master and 9 remote nodes.
When I wanted to add one more remote node, the master couldn’t recognize the new node and didn’t trigger the activate changes process.
If I modify any other remote node, the master node triggers the activate changes process, but the new remote node’s status is still ‘Unknown’.
All remote nodes are using port 6557. The master node can access the new node over telnet.
I used tcpdump to monitor the packets. If I modify any other remote node and click the ‘Activate changes’ button, all nodes communicate with the master on port 6557, but the new node has no traffic on port 6557.
Also, the new node doesn’t have a distributed_wato.mk configuration file. Probably the master node couldn’t push the required configs to the new node.
Do you have any recommendation?
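For reference, the capture described above can be run with a filter like this; the interface name and node address are placeholders for your own values:

```shell
# On the master's host, as root. eth0 and 203.0.113.10 are placeholders for
# your interface and the new node's IP; -n skips DNS resolution in the output.
tcpdump -ni eth0 "tcp port 6557 and host 203.0.113.10"
```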
OMD[newnode]:~$ omd version
OMD - Open Monitoring Distribution Version 2.0.0p16.cre
OMD[newnode]:~$ omd start
Temporary filesystem already mounted
Starting mkeventd...OK
Starting rrdcached...OK
Starting npcd...OK
Starting nagios...OK
Starting apache...OK
Starting redis...OK
Starting stunnel...OK
Starting xinetd...OK
Initializing Crontab...OK
OMD[newnode]:~$ netstat -lnp | grep 655
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:6556 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6557 0.0.0.0:* LISTEN 116596/xinetd
OMD[master]:~$ telnet <newnode-IP-address> 6557
Trying <newnode-IP-address>...
Connected to <newnode-IP-address>.
OMD[newnode]:~/etc/check_mk/conf.d$ ls -l
total 8
-rw-r--r-- 1 newnode newnode 72 Jun 30 15:08 mkeventd.mk
-rw-r--r-- 1 newnode newnode 76 Jun 30 15:08 pnp4nagios.mk
drwxrwxr-x 2 newnode newnode 114 Jun 30 15:08 wato/
I’ve fixed the ‘Unknown’ host issue. The problem was related to the user’s ‘Authorized sites’ configuration.
I changed the value to ‘All sites’, and then everything worked as expected.