Ansible playbook fails to join Kubernetes cluster: “Failed to connect to the host via ssh”
I am using Ansible to automate the setup of a Kubernetes cluster. The master node setup works fine, but I’m having issues when trying to join worker nodes to the cluster. This is the only part where SSH fails. I can connect manually, and I have installed all dependencies using my playbook, but joining the cluster fails every time.
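For context, the join step in my playbook boils down to roughly the following (a minimal sketch, not my exact tasks; the inventory group "workers" and the inventory file name are placeholders):

```
# Check that Ansible itself can reach the workers over SSH,
# using the same inventory the playbook uses
ansible workers -i inventory.ini -m ping

# On the control-plane node: print a fresh join command (token + CA cert hash)
sudo kubeadm token create --print-join-command

# On each worker: run the printed command, e.g.
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```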
Kubernetes Upgrade from version 1.27.x to 1.30.x
I’d like to upgrade our Kubernetes clusters from version 1.27.x to 1.30.x using kubeadm.
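Since kubeadm only supports upgrading one minor version at a time, my understanding is that the path has to be 1.27 → 1.28 → 1.29 → 1.30. A rough sketch of a single hop on the first control-plane node (assuming Ubuntu with the pkgs.k8s.io apt repos; the patch versions are placeholders):

```
# Point the apt repo at the next minor version first (pkgs.k8s.io has one repo per minor release),
# then upgrade the kubeadm package
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm='1.28.x-*'   # placeholder patch version
sudo apt-mark hold kubeadm

sudo kubeadm upgrade plan            # shows the versions you can upgrade to
sudo kubeadm upgrade apply v1.28.x   # placeholder; use the exact version from the plan output

# Then, for each node: drain, upgrade kubelet/kubectl, restart kubelet, uncordon.
# Repeat the whole cycle for 1.28 -> 1.29 and 1.29 -> 1.30.
```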
kubeadm installation to run Kubernetes cluster goes into deadlock state
I am trying to create a Kubernetes cluster on my Ubuntu virtual machines with the following system configuration:
Ubuntu: 22
RAM: 4096 MB
CPU: 4 cores
Hard disk: 300 GB
When I do kubectl get nodes, I see that both master and worker are up and running, but after some time both machines stop abruptly, as if in a deadlock state, and everything stops running. I am not even able to exit from the machine. Below I am attaching the syslog from my worker machine.
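In case it is relevant, these are the checks I can run on the nodes before they freeze (just a sketch; the worker node name is a placeholder):

```
# kubeadm expects swap to be disabled; confirm it really is off
swapon --show
free -h

# Watch memory and CPU pressure while the node is still responsive
vmstat 5
top -b -n 1 | head -n 20

# Cluster view from the control plane
kubectl get nodes -o wide
kubectl describe node <worker-node-name>   # placeholder node name
kubectl get pods -A
```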
port-forward uses the private IP, how to make it use the public IP?
I have a K8s cluster made with kubeadm on AWS EC2. I have one worker node; I used
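What I run looks roughly like this; as far as I understand, kubectl port-forward binds to 127.0.0.1 by default, so to reach it via the EC2 public IP it would have to bind on all interfaces (and the security group would need to allow the port). The service name and ports below are placeholders:

```
# Default: bound to 127.0.0.1, only reachable from the machine running kubectl
kubectl port-forward svc/my-service 8080:80

# Bind on all interfaces so the node's public IP can be used as well
# (inbound traffic on 8080 must also be allowed in the EC2 security group)
kubectl port-forward --address 0.0.0.0 svc/my-service 8080:80
```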
kubeadm upgrade apply – why is kubeadm forcing image downloads from an internal registry?
I am currently trying to patch a k8s cluster, and after several changes I am trying to pull the control-plane images from “registry.k8s.io”. When I create the pods manually or change the image within the manifest file, everything is fine and the images are pulled correctly.
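As far as I understand, kubeadm upgrade apply takes the registry from the imageRepository field of its ClusterConfiguration (stored in the kubeadm-config ConfigMap in kube-system), not from what the static pod manifests currently reference. A sketch of how I am checking this:

```
# Show the ClusterConfiguration that kubeadm will use for the upgrade
kubectl -n kube-system get configmap kubeadm-config -o yaml

# List the control-plane images kubeadm wants for the target version
kubeadm config images list

# Pre-pull them to see which registry is actually contacted
sudo kubeadm config images pull
```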