
Admission Controller not functional in default baremetal deployment #12336

Open
little-helper-001 opened this issue Nov 9, 2024 · 8 comments
Labels
kind/support Categorizes issue or PR as a support question. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@little-helper-001

What happened:

I set up a fresh Kubernetes cluster using kubeadm on bare-metal hosts, with Calico as the networking solution. I downloaded the baremetal manifest and made only two small modifications to it: exposing the two ports of the ingress-nginx-controller service as unprivileged node ports on the nodes.

Deploying an ingress now fails with the following error message:
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": context deadline exceeded

What you expected to happen:

The ingress file should have been validated and deployed.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

NGINX Ingress controller
Release: v1.12.0-beta.0
Build: 80154a3
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5

Kubernetes version (use kubectl version):

Client Version: v1.29.9
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.2

Environment:

  • Cloud provider or hardware configuration: Baremetal
  • OS (e.g. from /etc/os-release): Fedora Server 41
  • Kernel (e.g. uname -a): 6.11.6-300.fc41.x86_64
  • Install tools:
    • kubeadm
  • Basic cluster related info:
    • kubectl get nodes -o wide
 NAME                         STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION           CONTAINER-RUNTIME
kubernetes-controlplane-01   Ready    control-plane   27m   v1.31.2   192.168.20.101   <none>        Fedora Linux 41 (Server Edition)   6.11.6-300.fc41.x86_64   containerd://1.7.23
kubernetes-worker-01         Ready    <none>          22m   v1.31.2   192.168.20.104   <none>        Fedora Linux 41 (Server Edition)   6.11.6-300.fc41.x86_64   containerd://1.7.23
  • How was the ingress-nginx-controller installed:

I downloaded the baremetal manifest and made only two small modifications to it: exposing the two ports of the ingress-nginx-controller service as unprivileged node ports on the nodes.

  • Current State of the controller:
    kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.12.0-beta.0
Annotations:  <none>
Controller:   k8s.io/ingress-nginx
Events:       <none>

kubectl -n <ingresscontrollernamespace> get all -A -o wide

NAMESPACE          NAME                                                     READY   STATUS      RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
calico-apiserver   pod/calico-apiserver-857b99ff6f-8hmgd                    1/1     Running     0          11m     10.0.124.134     kubernetes-worker-01         <none>           <none>
calico-apiserver   pod/calico-apiserver-857b99ff6f-bs8k9                    1/1     Running     0          11m     10.0.124.130     kubernetes-worker-01         <none>           <none>
calico-system      pod/calico-kube-controllers-669744f967-cs2hv             1/1     Running     0          11m     10.0.124.133     kubernetes-worker-01         <none>           <none>
calico-system      pod/calico-node-2xzvc                                    0/1     Running     0          11m     192.168.20.104   kubernetes-worker-01         <none>           <none>
calico-system      pod/calico-node-s64c9                                    0/1     Running     0          11m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
calico-system      pod/calico-typha-5475547669-rxnsw                        1/1     Running     0          11m     192.168.20.104   kubernetes-worker-01         <none>           <none>
calico-system      pod/csi-node-driver-ns9sb                                2/2     Running     0          11m     10.0.3.1         kubernetes-controlplane-01   <none>           <none>
calico-system      pod/csi-node-driver-szk7x                                2/2     Running     0          11m     10.0.124.132     kubernetes-worker-01         <none>           <none>
ingress-nginx      pod/ingress-nginx-admission-create-7fdmc                 0/1     Completed   0          9m24s   10.0.124.135     kubernetes-worker-01         <none>           <none>
ingress-nginx      pod/ingress-nginx-admission-patch-xd6zw                  0/1     Completed   0          9m24s   10.0.124.136     kubernetes-worker-01         <none>           <none>
ingress-nginx      pod/ingress-nginx-controller-8688dd9bfc-rbq46            1/1     Running     0          9m24s   10.0.124.137     kubernetes-worker-01         <none>           <none>
kube-system        pod/coredns-7c65d6cfc9-82mzt                             1/1     Running     0          16m     10.0.124.131     kubernetes-worker-01         <none>           <none>
kube-system        pod/coredns-7c65d6cfc9-kmnrv                             1/1     Running     0          16m     10.0.124.129     kubernetes-worker-01         <none>           <none>
kube-system        pod/etcd-kubernetes-controlplane-01                      1/1     Running     0          16m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
kube-system        pod/kube-apiserver-kubernetes-controlplane-01            1/1     Running     0          16m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
kube-system        pod/kube-controller-manager-kubernetes-controlplane-01   1/1     Running     0          16m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
kube-system        pod/kube-proxy-jd56t                                     1/1     Running     0          12m     192.168.20.104   kubernetes-worker-01         <none>           <none>
kube-system        pod/kube-proxy-wmg79                                     1/1     Running     0          16m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
kube-system        pod/kube-scheduler-kubernetes-controlplane-01            1/1     Running     0          16m     192.168.20.101   kubernetes-controlplane-01   <none>           <none>
tigera-operator    pod/tigera-operator-f8bc97d4c-whsbk                      1/1     Running     0          11m     192.168.20.104   kubernetes-worker-01         <none>           <none>

NAMESPACE          NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
calico-apiserver   service/calico-api                           ClusterIP   10.102.240.47   <none>        443/TCP                      11m     apiserver=true
calico-system      service/calico-kube-controllers-metrics      ClusterIP   None            <none>        9094/TCP                     10m     k8s-app=calico-kube-controllers
calico-system      service/calico-typha                         ClusterIP   10.102.149.84   <none>        5473/TCP                     11m     k8s-app=calico-typha
default            service/kubernetes                           ClusterIP   10.96.0.1       <none>        443/TCP                      16m     <none>
ingress-nginx      service/ingress-nginx-controller             NodePort    10.109.157.69   <none>        80:30100/TCP,443:30101/TCP   9m24s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx      service/ingress-nginx-controller-admission   ClusterIP   10.99.90.106    <none>        443/TCP                      9m24s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system        service/kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       16m     k8s-app=kube-dns

NAMESPACE       NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS                             IMAGES                                                                        SELECTOR
calico-system   daemonset.apps/calico-node       2         2         0       2            0           kubernetes.io/os=linux   11m   calico-node                            docker.io/calico/node:v3.29.0                                                 k8s-app=calico-node
calico-system   daemonset.apps/csi-node-driver   2         2         2       2            2           kubernetes.io/os=linux   11m   calico-csi,csi-node-driver-registrar   docker.io/calico/csi:v3.29.0,docker.io/calico/node-driver-registrar:v3.29.0   k8s-app=csi-node-driver
kube-system     daemonset.apps/kube-proxy        2         2         2       2            2           kubernetes.io/os=linux   16m   kube-proxy                             registry.k8s.io/kube-proxy:v1.31.2                                            k8s-app=kube-proxy

NAMESPACE          NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                IMAGES                                                                                                                            SELECTOR
calico-apiserver   deployment.apps/calico-apiserver           2/2     2            2           11m     calico-apiserver          docker.io/calico/apiserver:v3.29.0                                                                                                apiserver=true
calico-system      deployment.apps/calico-kube-controllers    1/1     1            1           11m     calico-kube-controllers   docker.io/calico/kube-controllers:v3.29.0                                                                                         k8s-app=calico-kube-controllers
calico-system      deployment.apps/calico-typha               1/1     1            1           11m     calico-typha              docker.io/calico/typha:v3.29.0                                                                                                    k8s-app=calico-typha
ingress-nginx      deployment.apps/ingress-nginx-controller   1/1     1            1           9m24s   controller                registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system        deployment.apps/coredns                    2/2     2            2           16m     coredns                   registry.k8s.io/coredns/coredns:v1.11.3                                                                                           k8s-app=kube-dns
tigera-operator    deployment.apps/tigera-operator            1/1     1            1           11m     tigera-operator           quay.io/tigera/operator:v1.36.0                                                                                                   name=tigera-operator

NAMESPACE          NAME                                                  DESIRED   CURRENT   READY   AGE     CONTAINERS                IMAGES                                                                                                                            SELECTOR
calico-apiserver   replicaset.apps/calico-apiserver-857b99ff6f           2         2         2       11m     calico-apiserver          docker.io/calico/apiserver:v3.29.0                                                                                                apiserver=true,pod-template-hash=857b99ff6f
calico-system      replicaset.apps/calico-kube-controllers-669744f967    1         1         1       11m     calico-kube-controllers   docker.io/calico/kube-controllers:v3.29.0                                                                                         k8s-app=calico-kube-controllers,pod-template-hash=669744f967
calico-system      replicaset.apps/calico-typha-5475547669               1         1         1       11m     calico-typha              docker.io/calico/typha:v3.29.0                                                                                                    k8s-app=calico-typha,pod-template-hash=5475547669
ingress-nginx      replicaset.apps/ingress-nginx-controller-8688dd9bfc   1         1         1       9m24s   controller                registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=8688dd9bfc
kube-system        replicaset.apps/coredns-7c65d6cfc9                    2         2         2       16m     coredns                   registry.k8s.io/coredns/coredns:v1.11.3                                                                                           k8s-app=kube-dns,pod-template-hash=7c65d6cfc9
tigera-operator    replicaset.apps/tigera-operator-f8bc97d4c             1         1         1       11m     tigera-operator           quay.io/tigera/operator:v1.36.0                                                                                                   name=tigera-operator,pod-template-hash=f8bc97d4c

NAMESPACE       NAME                                       STATUS     COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES                                                                                                                              SELECTOR
ingress-nginx   job.batch/ingress-nginx-admission-create   Complete   1/1           9s         9m24s   create       registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   batch.kubernetes.io/controller-uid=401d8eb4-7db8-4ebd-ad6f-564bc6fa6886
ingress-nginx   job.batch/ingress-nginx-admission-patch    Complete   1/1           9s         9m24s   patch        registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   batch.kubernetes.io/controller-uid=8ed01573-d251-4cd4-9f7f-62e4ee100bbc

kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>

Name:             ingress-nginx-controller-8688dd9bfc-rbq46
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             kubernetes-worker-01/192.168.20.104
Start Time:       Sat, 09 Nov 2024 22:42:24 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.12.0-beta.0
                  pod-template-hash=8688dd9bfc
Annotations:      cni.projectcalico.org/containerID: f10884197b0d84039765cc2bdbaa83126b647a0491c64695dd1e2c7974148ecd
                  cni.projectcalico.org/podIP: 10.0.124.137/32
                  cni.projectcalico.org/podIPs: 10.0.124.137/32
Status:           Running
IP:               10.0.124.137
IPs:
  IP:           10.0.124.137
Controlled By:  ReplicaSet/ingress-nginx-controller-8688dd9bfc
Containers:
  controller:
    Container ID:    containerd://faf40d2ac246bc3349a1b21ba74770055da6743b934f5d8ba8eec3df98163e76
    Image:           registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5
    Image ID:        registry.k8s.io/ingress-nginx/controller@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5
    Ports:           80/TCP, 443/TCP, 8443/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Sat, 09 Nov 2024 22:42:40 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-8688dd9bfc-rbq46 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2s47 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-l2s47:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                From                      Message
  ----     ------       ----               ----                      -------
  Normal   Scheduled    10m                default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-8688dd9bfc-rbq46 to kubernetes-worker-01
  Warning  FailedMount  10m (x4 over 10m)  kubelet                   MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Normal   Pulling      10m                kubelet                   Pulling image "registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5"
  Normal   Pulled       10m                kubelet                   Successfully pulled image "registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5" in 7.278s (7.278s including waiting). Image size: 104630026 bytes.
  Normal   Created      10m                kubelet                   Created container controller
  Normal   Started      10m                kubelet                   Started container controller
  Normal   RELOAD       10m                nginx-ingress-controller  NGINX reload triggered due to a change in configuration

kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>

Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.12.0-beta.0
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.157.69
IPs:                      10.109.157.69
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30100/TCP
Endpoints:                10.0.124.137:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30101/TCP
Endpoints:                10.0.124.137:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
  • Current state of ingress object, if applicable:
    Not applicable

  • Others:

I can deploy ingresses by manually removing the admission controller, but instead of hacking this issue I would prefer to understand and fix it the proper way.

How to reproduce this issue:

Install Base OS

Install a RHEL- or Fedora-based system. Disable SELinux and open the firewall. Install containerd and the Kubernetes tools.

Create cluster with kubeadm

sudo kubeadm init --upload-certs --control-plane-endpoint "kubernetes-loadbalancer.my-domain.com" --pod-network-cidr 10.0.0.0/16

kubernetes-loadbalancer.my-domain.com points to an HAProxy instance that forwards port 80 to 30100 on the worker, port 443 to 30101 on the worker, and port 6443 to port 6443 on the control-plane node.

Join a worker node with the command provided in the output.

Install Calico

Install the ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml, with a modification to expose the HTTP and HTTPS ports as unprivileged node ports.
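
The modification described above is not shown in the issue; one possible way it could look is the following sketch, which pins the service's NodePorts to the 30100/30101 values visible in the `kubectl get all` output earlier (the patch approach is an assumption; editing the `nodePort` fields in deploy.yaml before applying it would work equally well):

```shell
# Sketch: pin the controller Service's NodePorts to 30100 (HTTP) and 30101 (HTTPS).
# Assumes the baremetal manifest has already been applied and the service exists.
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  --type='json' \
  -p='[
    {"op": "add", "path": "/spec/ports/0/nodePort", "value": 30100},
    {"op": "add", "path": "/spec/ports/1/nodePort", "value": 30101}
  ]'
```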

Try to create an ingress, e.g.:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - host: test.com
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: service-test
                port:
                  number: 80

@little-helper-001 little-helper-001 added the kind/bug Categorizes issue or PR as related to a bug. label Nov 9, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Nov 9, 2024
@longwuyuan
Contributor

/remove-kind bug

Can you check and confirm that the required ports are open between the nodes inside the cluster? Grep for the ports in the pod manifest.

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Nov 10, 2024
@longwuyuan
Contributor

/kind support

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Nov 10, 2024
@little-helper-001
Author

Can you check and confirm that the required ports are open between the nodes inside the cluster? Grep for the ports in the pod manifest.

Ports 80/TCP, 443/TCP and 8443/TCP are open between all nodes in the cluster.

@little-helper-001
Author

I need to revise my comment. For the sake of simplicity I turned the firewall (firewalld, in this instance) off completely and deactivated it, and now I can deploy the ingress. Can you help me understand what went wrong here: why was the connection not possible with ports 80, 443, and 8443 open, but possible with the firewall turned off? Did I open the wrong ports, or is there something wrong with the firewall itself? Perhaps people have run into problems with firewalld in the past, but so far I have been unable to find information about this online.
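
For reference, a hedged sketch of what opening the relevant traffic with firewalld might look like on each node. The pod CIDR is taken from the `kubeadm init` command in the reproduction steps; trusting the pod network is a guess at the root cause (firewalld can drop pod-to-pod overlay traffic even when the individual service ports are open), not a confirmed fix:

```shell
# Assumption: run on every node; 10.0.0.0/16 is the pod CIDR from kubeadm init.
# Open the webhook and health-check ports between nodes:
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --permanent --add-port=10254/tcp
# Trust traffic originating from the pod network, which may otherwise be
# dropped even though the individual service ports above are open:
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/16
sudo firewall-cmd --reload
```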

@longwuyuan
Contributor

There is no code in the ingress-nginx controller for firewalld, so please ask in firewalld-related forums.
The controller serves its validating webhook on port 8443, and the kube-apiserver needs to connect to it when the webhook is called.

Please close the issue if there are no questions on ingress-nginx controller.

Personally, I would experiment with the different firewalld options and do a simple netcat or telnet test between two pods (using the image nginx:alpine) on two different nodes.
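
That suggested test could be sketched as follows; the pod names, the node pinning via `--overrides`, and the tested port are illustrative assumptions:

```shell
# Run one nginx pod pinned to each node (nginx:alpine includes busybox nc):
kubectl run nc-server --image=nginx:alpine \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"kubernetes-worker-01"}}'
kubectl run nc-client --image=nginx:alpine \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"kubernetes-controlplane-01"}}'
kubectl wait --for=condition=Ready pod/nc-server pod/nc-client --timeout=60s

# From the client pod, test TCP reachability of the server pod's IP on port 80:
SERVER_IP=$(kubectl get pod nc-server -o jsonpath='{.status.podIP}')
kubectl exec nc-client -- nc -zv -w 3 "$SERVER_IP" 80
```

If this cross-node test fails with the firewall enabled but succeeds with it disabled, the firewall is interfering with pod-to-pod traffic rather than with the NodePort services themselves.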

@longwuyuan
Contributor

The health-check port is 10254.

@little-helper-001
Author

I will close this issue in a couple of days. I am still investigating, since I ran into the same issue when using Ubuntu 24.04 as the base OS and UFW as the firewall solution. If I can figure this out, I want to post a comprehensive solution for people in the same situation.
