Kubernetes Ingress

CMK version: 2.2.0p22
OS version: irrelevant

Error message: controller.go:1108] Error obtaining Endpoints for Service "default/myrelease-checkmk-cluster-collector": no object matching key "default/myrelease-checkmk-cluster-collector" in local store

I’m trying to monitor a K8s cluster, a subject I’m no expert on, over an already existing ingress rather than a separate public IP. And after several days of practically banging my head against the wall, I still don’t f$%&ing get it. Everything I’ve found in terms of guides or even forum posts (I saw one where someone got it to work but didn’t write down how) has been extremely vague ("just customize the file, bro" - but how?), so I’m left guessing, and so far guessing wrongly. :frowning_face:

Starting from a clean values.yml, the obvious changes seem to be setting the network policy to enabled: true, plus these two changes (host and path) in this segment:

ingress:
  enabled: true
  className: ""
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  hosts:
    - host: myalreadyexistinghost.example.com
      paths:
        - path: /checkmk-cluster-collector
          pathType: Prefix
  tls:
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

and correspondingly, for the ingress itself:
- path: /checkmk-cluster-collector
  pathType: Prefix
  backend:
    service:
      name: myrelease-checkmk-cluster-collector
      port:
        number: 8080

If it were possible, I’d rather map it to its own separate port than to some subpath, but I haven’t found a way to do that.
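For reference, ingress-nginx does support mapping a service to a dedicated TCP port through its tcp-services ConfigMap, provided the controller is started with the --tcp-services-configmap flag and the chosen port is also exposed on the controller’s Service. A minimal sketch, with the namespace/service taken from the error message above and the external port chosen arbitrarily:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # assumption: wherever the controller runs
data:
  # format: "<external port>": "<namespace>/<service>:<service port>"
  "9090": "default/myrelease-checkmk-cluster-collector:8080"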

It looks like it ends up searching in the wrong namespace, if the "default" refers to a namespace here. But if I copy the whole header into a second entry and change the namespace on that one so it can find the cluster collector, the file no longer parses at all. The error points at a line that consists solely of an unchanged comment, because it isn’t reporting the YAML file’s real line numbers but some mid-parsing count, so it’s completely unclear to me what it’s actually referring to.

error: error parsing ingress.yaml: error converting YAML to JSON: yaml: line 19: mapping values are not allowed in this context
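One guess at the cause, since the reported line number is unusable: a second Ingress in the same file has to be a complete, separate YAML document, split off with ---. Appending a second metadata/namespace block under the first document produces exactly this kind of "mapping values" error. Roughly (names hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: existing-ingress
  namespace: default
spec:
  # ... existing rules ...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkmk-collector
  namespace: checkmk   # hypothetical: wherever the collector service lives
spec:
  # ... collector rule ...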

I don’t have the answer, but my experience is that you more or less need to be an expert on K8s networking to be able to monitor K8s…

I also think, based on your question, that in the cloud this might be different; we don’t have any K8s with "public IPs".

I’d ask your K8s folks to help you, the ones who set up K8s for you.

Thank you. The situation makes doing that a bit difficult, otherwise I wouldn’t have made such a frustrated post to begin with. :upside_down_face: If I do find a method that works to integrate CMK with the ingress, I will write it down…

As per the documentation, the path is "/".

Also, does it work when you use the NodePort ?

What kind of K8s setup is this? Managed or on-prem?
Also, which version of K8s are you using?

Hi, it’s a managed setup, version 1.28.5

The path being "/" would be a problem if it were mapped to the same port, since it would conflict with the actual application running on the K8s cluster; that’s why I changed it from the sample. If it could be mapped to a port of its own, there would be no reason to touch the path at all.
I haven’t actually applied the NodePort config in the test environment where I’ve been experimenting with the ingress, but it is working in a different instance.

You mean Ingress or NodePort?

Hi, it’s a managed setup, version 1.28.5
Sometimes these managed setups have extra configuration of their own, especially annotations you need to use with the ingress. Is it AWS or something else?

Can you describe the ingress via the kubectl command?
Also, kubectl get ingress -n "your namespace" will be helpful here.

  1. NodePort (the ingress is working too)

  2. Right now there are two: one generated from values.yml, and one native to the cluster, to which I tried to add the cluster collector service.

Name:             ingress-nginx
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/part-of=ingress-nginx
                  io.kompose.service=ingress-nginx
Namespace:        default
Address:          1.2.3.4
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  dummy.example.com
                           /svc1          svc1:1234 (10.10.10.11:1234)
                           /svc1/path2    svc1:1234 (10.10.10.11:1234)
                           /svc1-path3/   svc1:1234 (10.10.10.11:1234)
                           /svc2          svc2:2345 (10.10.10.12:2345)
                           /              svc3:3456 (10.10.10.13:3456)
Annotations:               nginx.ingress.kubernetes.io/client-body-buffer-size: 100G
                           nginx.ingress.kubernetes.io/proxy-body-size: 100G
Events:                    <none>

Name:             m-checkmk-cluster-collector
Labels:           app.kubernetes.io/instance=m
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=checkmk
                  app.kubernetes.io/version=1.5.1
                  helm.sh/chart=checkmk-1.5.1
Namespace:        cmk
Address:          
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  dummy.example.com  
                           /checkmk-cluster-collector   m-checkmk-cluster-collector:8080 (10.20.30.40:10050)
Annotations:               meta.helm.sh/release-name: m
                           meta.helm.sh/release-namespace: cmk
                           nginx.ingress.kubernetes.io/rewrite-target: /
Events:                    <none>

I used non-standard names for the release and namespace because I tried to get under the 63-character limit so I could reference "m-checkmk-cluster-collector.cmk.svc.cluster.local" in the original ingress as the one off-namespace service (a cgpt suggestion :D). But this doesn’t work, because the validation for backend.service.name is a regex that doesn’t allow any dots, just a single name, which therefore has to be a service in the ingress’ own namespace.

It’s clear to me that the current variant can’t work: it creates its own ingress rather than integrating with the existing one, and this new ingress is not linked to the existing, working nginx ingress controller, so it has no public IP assigned. (Using the existing one is the goal.)
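That also matches the describe output above: the chart-generated Ingress shows Ingress Class: <none>, so the existing nginx controller never adopts it. Presumably (an assumption, but consistent with the working values posted further down) the missing link is the ingress class in values.yml, via either the className field or the legacy annotation:

ingress:
  enabled: true
  className: "nginx"
  annotations:
    # alternatively, the older annotation form some setups still use:
    kubernetes.io/ingress.class: "nginx"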

It seems to me it’s probably impossible without adding a second ingress, so having one doesn’t seem to be a "bad thing".

Ultimately, it seems to come down to this:

It only seems worth it if the use case is large enough to justify fiddling with these components. Perhaps plain NodePort really is the better solution for mine after all. :confused:
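For completeness, the NodePort variant amounts to a few lines in values.yml. The keys below are as given in the Checkmk Kubernetes docs for the 1.5.x chart; verify them against your own values.yml:

clusterCollector:
  service:
    type: NodePort
    nodePort: 30035   # the example port used in the Checkmk docs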

Indeed, this won’t work: the backend service name in an Ingress must be a valid DNS label, so it cannot contain dots, and it is always resolved in the Ingress’ own namespace.
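If a cross-namespace backend is ever genuinely needed, one generic Kubernetes workaround (untested here) is an ExternalName Service in the ingress’ own namespace that aliases the collector’s in-cluster DNS name; the Ingress backend then references the dot-free local name:

apiVersion: v1
kind: Service
metadata:
  name: checkmk-cluster-collector   # local, dot-free alias
  namespace: default                # the ingress’ namespace
spec:
  type: ExternalName
  externalName: m-checkmk-cluster-collector.cmk.svc.cluster.local
  ports:
    - port: 8080

Whether the controller resolves ExternalName backends depends on its version and configuration.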

Here is the ingress config I use for the chart on vanilla K8s, behind the same ingress controller I use for 20 or so other services. Your mileage may vary, but this is a working configuration in my environment. (I also added cert-manager to get a Let’s Encrypt cert in my example, if you’re interested in that.)

  ingress:
    enabled: true
    className: ""
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
      cert-manager.io/cluster-issuer: "letsencrypt"
      kubernetes.io/ingress.class: "nginx"
    hosts:
      - host: yourcool.dns.name
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: yourcoolsecret
        hosts:
          - yourcool.dns.name

Hey, thanks!
So in principle, do you have to create a distinct ingress for connecting to the checkmk service? Because one of my key problems was that I couldn’t set the rewrite rule service-by-service, only one for all, and using "rewrite-target: /" would break some of the other services.

But I’m not sure how to converge multiple ingresses onto one controller. I tried, but it would never eat my syntax.

I only use one ingress controller (ingress-nginx + cert-manager) for my entire one-node cluster, and it handles the 20+ web services as well as the checkmk cluster collector. They do have their own ingresses, but those all route back to the same controller. (If you look at my YAMLs, they are practically identical 1:1; I only change ports/name/secret/DNS on them.)

One such example (nextcloud):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"
    nginx.ingress.kubernetes.io/proxy-body-size: "10000m"
    nginx.org/client-max-body-size: "10000m"
spec:
  tls:
    - hosts:
      - cool.nextcloud.server
      secretName: coolsecret
  ingressClassName: nginx
  rules:
    - host: cool.nextcloud.server
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: nextcloud
              port:
                number: 80

As you can tell, there really isn’t much difference between what I set for my checkmk ingress and my nextcloud ingress, aside from the large body size I set for nginx so that Nextcloud can accept large uploads.
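To close the loop on the earlier rewrite-target question: annotations are scoped to the Ingress object they are set on, so a dedicated collector Ingress cannot break the other services. And if the collector has to stay under a subpath instead of its own host, ingress-nginx’s documented capture-group rewrite avoids rewriting everything to /. A sketch using the names from earlier in the thread:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkmk-collector
  namespace: cmk
  annotations:
    # $2 is whatever follows /checkmk-cluster-collector in the request URL;
    # this rewrite applies only to this Ingress object.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: dummy.example.com
      http:
        paths:
          - path: /checkmk-cluster-collector(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: m-checkmk-cluster-collector
                port:
                  number: 8080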