Graphs in reporting seem to be based on the Maximum aggregation. This is fine for things like RAM or filesystem size, where peaks are the real concern, but for things like CPU or interface traffic, it would make more sense to report on the Average aggregation. I can’t seem to find an option for this. The difference can be significant, especially for long-term reports where the resolution is hours or even days.
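To make the difference concrete, here is a toy sketch in plain Python (hypothetical numbers, not from any real report): one hour of per-second interface throughput that is idle apart from a single one-second burst.

```python
# Toy illustration (hypothetical numbers): one hour of per-second
# interface throughput in Mbit/s, idle except for a one-second burst.
samples = [0.1] * 3600   # baseline load of ~0.1 Mbit/s
samples[1800] = 950.0    # a single one-second spike

print(f"MAX: {max(samples):.1f} Mbit/s")                # 950.0 - what a MAX-based report shows
print(f"AVG: {sum(samples) / len(samples):.3f} Mbit/s") # ~0.364 - the actual utilisation
```

A MAX-based graph represents this hour by the spike alone, which is exactly the distortion I mean.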
Hi @bmst
I suppose you mean the built-in graphs, not the custom ones, right?
Yeah, like a report that uses the “Multiple graphs Datasource: All services” element with some filters. I haven’t explored custom graphs much, but it looks like you create individual graphs there rather than a “template” that can be generated for any list of hosts or services, the way reports can.
Bump. Do we bump here? I still haven’t found a solution for this.
To be clear, my issue is that graphs in reports will take a one-second traffic spike as the data point representing a whole hour. That behaviour is rarely useful to me. The interactive web graphs let you select the aggregation in the legend - how do I access the other aggregations in PDF reports?
Bump2
Same here: the longer the time range you view, the more distortion you get when monitoring CPU, interfaces, etc.
I cannot understand why this is of nobody’s concern…
First, I don’t have an answer for you… sorry!
But I have a similar concern. I had set up custom graphs to be included in a monthly report sent to IT leadership, to be reviewed for trends pointing to upcoming resource shortages. When I set up the custom graphs, I chose “average” because that seemed most reasonable. We soon figured out that the built-in graphs are based on “maximum”, as you state.
My question is this: what time period are these min, max, and average values based on? For these particular metrics we run the check every minute, so I didn’t expect any more granularity than that. Does this mean I really have data for every second within that minute, with maximum being the maximum of those and average being the average of every second over the course of the minute? That seems unlikely to me.
Thanks!
It would be great if we could get a proper response on this.
The graph statistics are indeed unclear, especially for data that goes way back in time.
Hi guys,
The configuration of the RRDs can be found via the ruleset for services or hosts; search for “rrd” in the Setup search box.
I do not have a site available, but off the top of my head it works like this (see the consolidation sketch below):
- For 2 days the 1-minute values are saved as-is (2880 slots of 60 s). In this case MIN/MAX/AVG are all the same, since each slot holds a single value.
- For 10 days the MIN/MAX/AVG of every 5 values of the first storage is computed and saved in one of another 2880 slots.
- For 90 days the MIN/MAX/AVG of every 30 values of the first storage is computed and saved in another number of slots.
- For approx. 4 years the MIN/MAX/AVG of every 360 values of the first storage is computed and saved in another number of slots.
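To make that concrete, here is a rough sketch of the consolidation in plain Python (my own illustration with made-up CPU values, not Checkmk or RRDtool code):

```python
# Sketch of RRD-style consolidation as described above: groups of
# raw 1-minute samples are reduced to one MIN/MAX/AVG slot each.
def consolidate(raw, group_size):
    """Reduce 1-minute samples into (min, max, avg) tuples,
    one per group of `group_size` consecutive samples."""
    out = []
    for i in range(0, len(raw), group_size):
        chunk = raw[i:i + group_size]
        out.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return out

# Hypothetical example: ten 1-minute CPU values with one spike.
minute_values = [5, 4, 6, 5, 80, 5, 4, 6, 5, 5]
for lo, hi, avg in consolidate(minute_values, 5):
    print(f"MIN={lo} MAX={hi} AVG={avg:.1f}")
# First 5-minute slot: MIN=4 MAX=80 AVG=20.0 - a report built on
# MAX keeps the spike, while AVG reflects the typical load.
```

This also answers the earlier question about granularity: with 1-minute checks there is no per-second data; MIN/MAX/AVG only start to differ once several 1-minute values are consolidated into one slot.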
As far as I know you’re only able to temporarily change the consolidation function from the default MAX to MIN or AVG in the GUI, by clicking a graph’s legend header when the graph is not showing 1-minute values (the aggregation interval is shown in each graph’s upper right corner).
This is meant for debugging and therefore not available for static reports, sorry for that.
I hope this helps.
BR,
Marsellus W.