WARNING: Parsing of section df failed (2.0.0.p17)

CMK version: 2.0.0.p17
OS version: docker

Error message: WARNING: Parsing of section df failed

The GUI throws an error in service inventory, while the check output (same version) looks OK:

Starting job...
WARNING: Parsing of section df failed - please submit a crash report! (Crash-ID: c96708fc-6d46-11ec-b262-0242ac1e0002)
Completed.
<<<df>>>
tmpfs          tmpfs      403056     1296    401760       1% /run
/dev/vda5      ext4    102168536 21958108  74977532      23% /
tmpfs          tmpfs     2015268        0   2015268       0% /dev/shm
tmpfs          tmpfs        5120        0      5120       0% /run/lock
tmpfs          tmpfs     2015268        0   2015268       0% /sys/fs/cgroup
/dev/vda1      vfat       523248        4    523244       1% /boot/efi
tmpfs          tmpfs      403052        0    403052       0% /run/user/1000
tmpfs          tmpfs      403052        0    403052       0% /run/user/1005
<<<df>>>
[df_inodes_start]
tmpfs          tmpfs  503817    823  502994    1% /run
/dev/vda5      ext4  6520832 335290 6185542    6% /
tmpfs          tmpfs  503817      1  503816    1% /dev/shm
tmpfs          tmpfs  503817      2  503815    1% /run/lock
tmpfs          tmpfs  503817     18  503799    1% /sys/fs/cgroup
/dev/vda1      vfat        0      0       0     - /boot/efi
tmpfs          tmpfs  503817     20  503797    1% /run/user/1000
tmpfs          tmpfs  503817     20  503797    1% /run/user/1005
[df_inodes_end]

Hi @acfnews,

can you please post the crash report too? It contains useful information for finding the problem.

It seems to be related to a newer check_mk_agent.linux on the server (dev version), while the CMK server is 2.0.0.p17.

crash.txt (2.4 KB)

(I needed to unpack the crash file and then rename it to .txt before I could upload it here…)

In general, Checkmk supports older agents, but not newer ones.

Your agent provides a subsection of df ([df_lsblk_start]) which isn’t handled by the check. From your crash report:

"section_content": 
 [["tmpfs", "tmpfs", "404612", "45340", "359272", "12%", "/run"], 
 ["/dev/vda1", "ext4", "150558956", "76364964", "66523004", "54%", "/"], 
 ["none", "tmpfs", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
 ["none", "tmpfs", "5120", "0", "5120", "0%", "/run/lock"], 
 ["none", "tmpfs", "2023056", "0", "2023056", "0%", "/run/shm"], 
 ["none", "tmpfs", "102400", "0", "102400", "0%", "/run/user"], 
 ["[df_inodes_start]"], 
 ["tmpfs", "tmpfs", "505764", "1288", "504476", "1%", "/run"], 
 ["/dev/vda1", "ext4", "9568256", "440509", "9127747", "5%", "/"], 
 ["none", "tmpfs", "505764", "15", "505749", "1%", "/sys/fs/cgroup"], 
 ["none", "tmpfs", "505764", "12", "505752", "1%", "/run/lock"], 
 ["none", "tmpfs", "505764", "1", "505763", "1%", "/run/shm"], 
 ["none", "tmpfs", "505764", "6", "505758", "1%", "/run/user"], 
 ["[df_inodes_end]"], 
 --> ["[df_lsblk_start]"], 
 --> ["[df_lsblk_end]"]]}}
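For illustration only, the failure mode can be sketched with a minimal parser (hypothetical; this is not Checkmk's actual code): a version that only knows the [df_inodes_*] markers rejects the new [df_lsblk_*] markers, which is consistent with a newer agent crashing an older server-side check.

```python
# Hypothetical sketch of a df-section parser that splits agent lines into
# the main block and named subsections. Marker names and the KNOWN_SUBSECTIONS
# set are assumptions for illustration, not Checkmk internals.
KNOWN_SUBSECTIONS = {"df_inodes"}  # a 2.0-era parser knows only this one

def parse_df(lines):
    main, subsections = [], {}
    current = None
    for line in lines:
        word = line[0]
        if word.startswith("[") and word.endswith("_start]"):
            name = word[1:-len("_start]")]  # "[df_inodes_start]" -> "df_inodes"
            if name not in KNOWN_SUBSECTIONS:
                # An unknown marker such as [df_lsblk_start] fails here.
                raise ValueError(f"unknown df subsection: {name}")
            current = name
            subsections[current] = []
        elif word.startswith("[") and word.endswith("_end]"):
            current = None
        elif current is not None:
            subsections[current].append(line)
        else:
            main.append(line)
    return main, subsections
```

Feeding this sketch the [df_lsblk_start] marker from the crash report raises immediately, while the same input minus that subsection parses cleanly.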

Can you check your agent output again to see whether that subsection appears after the block you already provided?


@tosch is right, this is a dev agent. The 2.0 agent does not have this section.
@acfnews, is it possible that this agent came from your test with the daily build Docker container?


Oh, it seems you two have some history? :slight_smile:

So, back to my original point: don’t use agents newer than your monitoring server.


Yes, the issue was solved after downgrading the test server.
