Checkmk: force logwatch to read a file from the beginning

Hello,

I am trying to force logwatch to read a file from the beginning. I use fromstart=True (mk_logwatch: process new files from the beginning) like this:

/path/to/*.log fromstart=True
 W error
 C FAILED

This is the content of my log:

start restore server1
backup in progress
restore in progress
OK

start restore server2
job hang up : FAILED to restore

I restarted the agent, but I always get an OK status in the Checkmk UI.

I only get a CRITICAL state if I edit the log file manually.

Do you have any idea, please?

Hello community, any idea please?

Because I don't have a CRE installation I can't check it, but could you try setting the option on a separate row and not after the path?

It still doesn't work.
I want to force logwatch to parse all logs (old and new) from the beginning on every check. It doesn't matter if I get many alerts, because that is exactly what I need.

Just for testing, did you try to add “.*” (wildcard) operators at the start and end of your keywords?

I can't show you a proper example, since escaping did not work for me in the forum:
W . * error . * without the spaces between “.”, “*”, and “error” :wink:
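The idea behind the “.*” wrappers can be illustrated with plain Python regexes. This is just a sketch, assuming the agent anchors each pattern at the start of the line (as `re.match` does); with an unanchored `re.search` the wrappers would make no difference:

```python
import re

line = "job hang up : FAILED to restore"

# Anchored matching (re.match) only succeeds if the pattern
# matches from the very first character of the line.
assert re.match("FAILED", line) is None

# Wrapping the keyword in ".*" lets an anchored match succeed
# even when the keyword appears in the middle of the line.
assert re.match(".*FAILED.*", line) is not None

# An unanchored search finds the keyword either way.
assert re.search("FAILED", line) is not None
```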

This is not possible without modifying the mk_logwatch script.
The option “fromstart” is only relevant for new files seen the first time by the script.

If you want to modify mk_logwatch, you need to set the offset for every file to “None” and also set the “fromstart” option.

Here are the relevant lines from the script.

    try:
        header = "[[[%s]]]\n" % section.name_write

        file_id, size = get_file_info(section.name_fs)
        prev_file_id = filestate.get("inode", -1)
        filestate["inode"] = file_id

        # Look at which file offset we have finished scanning the logfile last time.
        offset = filestate.get("offset")
        # Set the current pointer to the file end
        filestate["offset"] = size

        # If we have never seen this file before, we do not want
        # to make a fuss about ancient log messages... (unless configured to)
        if offset is None and not (section.options.fromstart or debug):
            return header, []

If you comment out the line offset = filestate.get("offset") and write offset = None instead, it should work as you expect.
But then you have this behavior on this machine every time. It would be better to implement a new option that ignores any stored offset.
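Such a new option could be modeled on the existing fromstart handling. The following is only a sketch of the idea, not real mk_logwatch code; the option name `rewind` and the helper `resolve_offset` are made up for illustration:

```python
# Hypothetical sketch: a per-file option that ignores any stored
# offset, so the file is re-read from the beginning on every check.
def resolve_offset(filestate, options):
    """Return the offset to start reading from.

    filestate -- dict persisted between agent runs (may hold "offset")
    options   -- parsed per-file options, e.g. {"rewind": True}
    """
    if options.get("rewind"):
        # Ignore whatever was stored; None means "from the beginning"
        # in the script's offset logic.
        return None
    return filestate.get("offset")

# With rewind set, the stored offset is ignored every time.
assert resolve_offset({"offset": 1234}, {"rewind": True}) is None
# Without it, the stored offset is honored as before.
assert resolve_offset({"offset": 1234}, {}) == 1234
```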


@andreas-doehler

It doesn't work:

 try:
        header = u"[[[%s]]]\n" % section.name_write

         stat = os.stat(section.name_fs)
        inode = stat.st_ino if is_inode_capable(section.name_fs) else 1
         # If we have never seen this file before, we set the inode to -1
        prev_inode = filestate.get('inode', -1)
         filestate['inode'] = file_id
 
         # Look at which file offset we have finished scanning the logfile last time.
         #offset = filestate.get('offset')
 
         offset = None

         # Set the current pointer to the file end
         filestate['offset'] = size

Are you sure the indentation is correct? The code looks strange to me.
The output looks more like a stack trace and not valid output from the script.

Yes, I'm sure. This is the plugin downloaded from Checkmk 2.0.0p23 (CRE).

I restarted the agent and I see my logs:

<<<logwatch>>>
[[[/path/job/0800.log]]]
[[[/path/job/1200.log]]]
[[[/path/job/1400.log]]]
[[[/path/job/1600.log]]]

but Checkmk still cannot parse these files from the beginning.

I see the problem - I was only looking at the 2.1 version of mk_logwatch.
At the moment I don't know what is different in the older one.


Thanks, I'll wait for your help.

Inside the 2.0 mk_logwatch.py you only need to set the “offset” value to None.
I would do this in line 629.

        if offset is not None and offset > size:
            offset = None
        offset = None
        # now seek to offset where interesting data begins
        log_iter.set_position(offset)

With this the script should read the file every time from the start.
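The effect of that change on the file position can be sketched with plain Python file handling. `log_iter.set_position` is from the script; everything else below is a standalone illustration using an in-memory file:

```python
import io

# Simulated log content plus a stored offset from a previous run.
log = io.BytesIO(b"start restore server2\njob hang up : FAILED to restore\n")
stored_offset = len(b"start restore server2\n")  # previous scan stopped here

# Original behavior: seek to the stored offset, so old lines are skipped.
log.seek(stored_offset)
assert b"FAILED" in log.read()
log.seek(stored_offset)
assert b"start restore" not in log.read()

# With offset forced to None the script seeks to the start instead,
# so every line is scanned again on every check.
offset = None
log.seek(offset or 0)
assert log.read().startswith(b"start restore server2")
```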


Yes, it works. I was missing a #.

Thanks Sir
