My environment: Kubernetes 1.24.17 (3 masters and 1 worker).
etcd and kube-apiserver run as static pods in the cluster.
Everything runs well when all 3 masters are on kernel 3.10.0-1160.99.1.el7.x86_64.
I then upgraded one master's kernel to 5.4.277-1.el7.elrepo.x86_64 (delete the node -> upgrade the kernel -> rejoin the cluster).
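Roughly the sequence I followed (a sketch only; this assumes a kubeadm-managed cluster, and the node name, API endpoint, token, CA cert hash and certificate key below are placeholders):

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>

# then, on the node itself:
kubeadm reset
yum --enablerepo=elrepo-kernel install kernel-lt   # assumed elrepo package for the 5.4 LT kernel; reboot into it afterwards
kubeadm join <control-plane-endpoint>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>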
This master's kube-apiserver runs well for roughly the first hour, then it begins restarting and causes etcd to restart as well. kube-apiserver also makes the node hang; after moving kube-apiserver.yaml out of /etc/kubernetes/manifests to somewhere else, the node becomes normal again (see the sketch below).
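A minimal sketch of that workaround, assuming the default kubeadm static pod manifest directory and /tmp as the temporary location:

# kubelet watches /etc/kubernetes/manifests; moving the manifest out stops the static kube-apiserver pod
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml
# moving it back re-creates the static pod
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml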
I can't figure out what happened to my apiserver; any help would be appreciated.
The following are the related kube-apiserver logs.
E0904 07:37:58.681591 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.1.19.38:443/apis/metrics.k8s.io/v1beta1: Get "https://10.1.19.38:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E0904 07:38:01.593119 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0904 07:38:01.593177 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0904 07:38:01.913053 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: dial tcp 10.1.19.38:443: connect: connection timed out
E0904 07:38:36.200130 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E0904 08:43:43.765735 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E0904 08:43:43.765855 1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
E0904 08:43:43.767076 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0904 08:43:43.768363 1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0904 08:43:43.769740 1 trace.go:205] Trace[1304417108]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.24.17 (linux/amd64) kubernetes/22a9682/leader-election,audit-id:d139ac48-6f60-4895-b634-5260be768ba8,client:10.1.69.88,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (04-Sep-2024 08:43:38.780) (total time: 4989ms):
Trace[1304417108]: [4.989288367s] [4.989288367s] END
E0904 08:43:43.779268 1 timeout.go:141] post-timeout activity - time-elapsed: 13.468579ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result: <nil>
{"level":"warn","ts":"2024-09-04T08:43:43.965Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000a15340/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"warn","ts":"2024-09-04T08:43:45.338Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002013880/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
E0904 08:43:45.338314 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0904 08:43:45.338398 1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
E0904 08:43:45.339620 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0904 08:43:45.340857 1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0904 08:43:45.342200 1 trace.go:205] Trace[1213158620]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.24.17 (linux/amd64) kubernetes/22a9682/leader-election,audit-id:e034f986-8083-4c20-b66c-029720afa2ba,client:10.1.69.88,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (04-Sep-2024 08:43:40.347) (total time: 4994ms):
Trace[1213158620]: [4.994932702s] [4.994932702s] END
E0904 08:43:45.347125 1 timeout.go:141] post-timeout activity - time-elapsed: 8.904939ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result: <nil>
E0904 08:45:01.625216 1 controller.go:220] unable to create required kubernetes system namespace kube-public: Internal error occurred: resource quota evaluation timed out
E0904 08:45:01.626394 1 controller.go:220] unable to create required kubernetes system namespace kube-node-lease: Post "https://[::1]:6443/api/v1/namespaces": dial tcp [::1]:6443: connect: connection refused
{"level":"warn","ts":"2024-09-04T08:45:01.679Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000a15340/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}