Anyone have any insight on SVI settings? The priority a switch gives to ping requests aimed at its own SVI is usually about as low as it gets; traffic prioritization for ICMP requests is low. As a result, you see latency or outright ping drops from switches and SVIs, while the servers plugged into those same switches look great, still sitting at acceptable response times. It's quite normal behavior, but does anyone account for it in some strategic fashion that works for them? As in, do you use a specific set of rules for switch latency? I understand the default WAN rules and the like, which could also be applied to these interfaces/devices, but I was wondering if anyone has a good method for this scenario. Always assume the host to be up? Monitor a service instead? I'm just curious what strategies you all may be using rather than relying on ping, since it's really not a reliable indicator, especially when the switches are pushing intense backup traffic for the hosts behind them.
I know that's a forum post, though their tech support has also verified it. The issue is that if you monitor performance and availability based on ICMP, like you would with an SLA, it makes it seem like the switch has plenty of problems when in reality it does not. The devices connected to it are fine, but the management interfaces / SVIs aren't treated quite the same.
I even have an HP switch (not high performance) where all the connected devices come in at roughly 2 ms or less, but the SVI itself regularly sits around 200-400 ms. This behavior holds regardless of whether there is an L2 or L3 boundary/hop between the devices. Some of it is equipment-related, yes (as far as the SVIs go), but the pattern is the same everywhere: the servers plugged into the switches all respond favorably, while the parent switches show hiccups, timeouts, and so on.
I deal with many different hardware types for switches; some SVIs just respond slowly, but the connected devices are always within the low latencies I would expect. There have been plenty of network discussions on forums arguing that performance should be graded not by pinging SVIs, but by pinging the devices connected to them. That's why I wanted to ask how others handle this, or whether they even find it to be a problem. In many cases I find it problematic for the SVIs, but not for the connected devices.
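To put that grading approach in concrete terms, here is a rough sketch (purely illustrative, with made-up addresses) that pings both the SVI and a server behind it and compares the average round-trip times, assuming a Linux-style ping is available:

import re
import subprocess

# Hypothetical addresses for illustration: the SVI and a server on the same VLAN.
TARGETS = {
    "SVI (switch IP)": "10.0.10.1",
    "server behind it": "10.0.10.50",
}

def avg_rtt_ms(host, count=5):
    """Run a Linux-style ping and parse the average round-trip time in ms."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", "2", host],
        capture_output=True,
        text=True,
    )
    # Summary line looks like: rtt min/avg/max/mdev = 0.512/0.734/1.102/0.201 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", result.stdout)
    return float(match.group(1)) if match else None

for label, ip in TARGETS.items():
    rtt = avg_rtt_ms(ip)
    shown = "no reply" if rtt is None else f"{rtt:.1f} ms"
    print(f"{label:20s} {ip:15s} avg rtt: {shown}")

In my experience the server line comes back in the low single digits while the SVI line bounces around or times out, which is exactly why I don't want the SVI number driving alerts.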
One thing you can try is the following setting.
Ping checks that go through the switch/router to the next end device should not be affected by this problem, as it only occurs when you target the IP of the switch/router itself.
A second idea is to use not a normal ping check but a check of whether port 22 (SSH) is reachable on the device.
For the host check command inside check_mk you can easily define other commands.
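To illustrate the port-22 idea outside of any particular tool, here is a minimal sketch of a TCP reachability probe (my own illustration, with a hypothetical management IP; the standard check_tcp plugin, or whatever TCP-connect host check your monitoring tool offers, is the real-world equivalent). The point is simply to verify the device with something other than ICMP round-trip time, so a slow-but-answering SVI doesn't get flagged as a down host:

import socket
import sys

def tcp_port_reachable(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical switch management IP; replace with the SVI you care about.
    target = sys.argv[1] if len(sys.argv) > 1 else "10.0.10.1"
    if tcp_port_reachable(target):
        print(f"OK - port 22 on {target} is reachable")
        sys.exit(0)   # 0/2 mirror the usual OK/CRITICAL exit-code convention
    print(f"CRITICAL - port 22 on {target} is not reachable")
    sys.exit(2)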