Mk_docker.py: _pickle.PicklingError: Can't pickle local object

CMK version: 2.4.0p20
OS version: arch

Error message:

Output of “/usr/lib/check_mk_agent/plugins/300/mk_docker.py --debug -vvv”:

...
 INFO: (line 378) 'section_node_network' took 0.0011203289031982422s
Traceback (most recent call last):
  File "/usr/lib/check_mk_agent/plugins/300/mk_docker.py", line 758, in <module>
    main()
    ~~~~^^
  File "/usr/lib/check_mk_agent/plugins/300/mk_docker.py", line 754, in main
    call_container_sections(client, config)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/usr/lib/check_mk_agent/plugins/300/mk_docker.py", line 684, in call_container_sections
    job.start()
    ~~~~~~~~~^^
  File "/usr/lib/python3.14/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
                  ~~~~~~~~~~~^^^^^^
  File "/usr/lib/python3.14/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/usr/lib/python3.14/multiprocessing/context.py", line 300, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.14/multiprocessing/popen_forkserver.py", line 35, in __init__
    super().__init__(process_obj)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/usr/lib/python3.14/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
    ~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/usr/lib/python3.14/multiprocessing/popen_forkserver.py", line 47, in _launch
    reduction.dump(process_obj, buf)
    ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
_pickle.PicklingError: Can't pickle local object <function UnixHTTPAdapter.__init__.<locals>.<lambda> at 0x788479da7320>
when serializing dict item 'dispose_func'
when serializing urllib3._collections.RecentlyUsedContainer state
when serializing urllib3._collections.RecentlyUsedContainer object
when serializing dict item 'pools'
when serializing docker.transport.unixconn.UnixHTTPAdapter state
when serializing docker.transport.unixconn.UnixHTTPAdapter object
when serializing collections.OrderedDict item 'http+docker://'
when serializing collections.OrderedDict object
when serializing dict item 'adapters'
when serializing docker.api.client.APIClient state
when serializing docker.api.client.APIClient object
when serializing dict item 'api'
when serializing MKDockerClient state
when serializing MKDockerClient object
when serializing tuple item 0
when serializing dict item '_args'
when serializing multiprocessing.context.Process state
when serializing multiprocessing.context.Process object

Docker version 29.2.1, build a5c7197d72
Python 3.14.2
python docker version: 7.1.0
Kernel 6.12.68-1-lts

This issue started after a system update, but there were a lot of updates, so I can’t tell which package might have changed something. Googling the error mostly turns up posts about using multiprocessing incorrectly, but the script didn’t change, so I doubt that’s the issue. I don’t know enough about multiprocessing to debug this myself, so any help would be appreciated.

I thought it might be Docker-related, but downgrading to 29.1.3 doesn’t change anything. Any other ideas?

Hi Christian

We have the same problem. It seems to be related to the Python version: on our systems the issue has appeared since we updated Python from 3.12 to 3.14. We use Fedora CoreOS and upgraded from version 42 to 43. I think Python 3.14 is not yet officially supported.

Kind regards
Elmar


Ugh, thanks, why didn’t I think of that?

Found at least a band-aid in a CPython issue. Add the following at line 680 (after the jobs = [] in the call_container_sections function):

multiprocessing.set_start_method('fork', force=True)

The function should look like this now:

def call_container_sections(client, config):
    jobs = []
    # Python 3.14 changed the default start method on Linux from "fork"
    # to "forkserver", which requires the Process arguments to be
    # picklable -- and the docker client is not. Forcing "fork" restores
    # the old behaviour.
    multiprocessing.set_start_method('fork', force=True)
    for container_id in client.all_containers:
        job = multiprocessing.Process(
            target=_call_single_containers_sections, args=(client, config, container_id)
        )
        job.start()
        jobs.append(job)

    for job in jobs:
        job.join()

The change (default start method on Linux switched from fork to forkserver in 3.14) is documented here: multiprocessing — Process-based parallelism — Python 3.14.3 documentation

As I’ve said, I don’t know enough to make it better or pretty, but at least I get data again!

Thank you very much! I can confirm that adding this line solves the problem.