The master nodes installed fine (RHEL 8.6), but the join command on the worker nodes fails with the following errors:
```
⚙ Join Kubernetes master node
+ kubeadm join --config /opt/replicated/kubeadm.conf --ignore-preflight-errors=all
W0627 12:58:21.669395 5789 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.

[failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

To see the stack trace of this error execute with --v=5 or higher
Failed to join the kubernetes cluster.
```
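As an aside, the deprecated-API warning at the top is separate from the certificate failure, but it can be cleared by migrating the config file that kURL generated. A minimal sketch, assuming `kubeadm` is on the PATH; the `-migrated` output filename is just an illustration:

```shell
# Rewrite the deprecated kubeadm.k8s.io/v1beta2 config as the current API
# version. /opt/replicated/kubeadm.conf is the path from the log above;
# the output path is a hypothetical name.
kubeadm config migrate \
  --old-config /opt/replicated/kubeadm.conf \
  --new-config /opt/replicated/kubeadm-migrated.conf
```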
So apparently some certificates are not being generated? Do I need to copy them manually from the master node(s)?
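For what it's worth, the "hosting a new control plane instance" wording means kubeadm is treating this as a control-plane join, and for that it does expect those files under `/etc/kubernetes/pki` before it will proceed, so copying them from an existing master is one legitimate fix. A hedged sketch, where `MASTER` is a hypothetical hostname and root SSH access between nodes is assumed:

```shell
# The shared control-plane certificates named in the preflight error above.
MASTER=master-1.example.com   # hypothetical; replace with a real master node
CERTS="ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key"

sudo mkdir -p /etc/kubernetes/pki/etcd
for f in $CERTS; do
  # Copy each certificate into the same path kubeadm's preflight checks.
  sudo scp "root@${MASTER}:/etc/kubernetes/pki/${f}" "/etc/kubernetes/pki/${f}"
done
```

Alternatively, running `kubeadm init phase upload-certs --upload-certs` on an existing master stores these certificates in a cluster Secret and prints a certificate key; a join run with `--control-plane --certificate-key <key>` then downloads them automatically (the key expires after roughly two hours). If the node was meant to be a plain worker rather than another master, this failure mode suggests the wrong join command was generated.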
What also made me scratch my head was the `kubernetes-master-address` parameter in the generated join command for the worker nodes: it is `localhost:6444`, which seems wrong to me when the script runs on the worker nodes.
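That said, `localhost:6444` is not necessarily a bug: my understanding (not confirmed in this thread) is that kURL's HA mode runs an internal load balancer on each node that listens on port 6444 and forwards to the real API servers, so a local address can be intentional. Two quick checks, sketched below, would confirm whether that proxy exists and what the cluster itself records as its endpoint:

```shell
# Is anything listening on the internal load-balancer port on this node?
ss -ltn | grep 6444 || echo "nothing listening on 6444"

# What does the cluster record as controlPlaneEndpoint?
# (run on a working master node with a configured kubectl)
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint
```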
For reference, the HA installation was set up with `curl -LO https://k8s.kurl.sh/bundle/f89c0f2.tar.gz`.