GKE ingress nginx with TLS handshake errors
I have a GKE cluster with the Ingress Nginx Controller attached to an Internal Load Balancer in GCP. TLS termination happens on the Ingress side.
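For context, a minimal sketch of this setup (names and hosts are placeholders, not from the question): the controller Service is annotated for a GCP internal load balancer, and the Ingress carries a TLS section referencing a certificate Secret. One common source of "TLS handshake error" lines in the controller logs is the load balancer's health check probing the TLS port with a plain TCP/HTTP check, which is usually harmless noise rather than a client-facing failure.

```yaml
# Service exposing the ingress-nginx controller via a GCP internal LB
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: https
---
# Ingress terminating TLS with a certificate stored in a Secret
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts: ["my-app.internal.example.com"]
      secretName: my-app-tls
  rules:
    - host: my-app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8080
```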
GKE : Unable to see “logs” or “exec” in pod for autopilot private cluster
Cluster Architecture:
I’m looking to deploy a Docker image from Google Artifact Registry on GKE using the REST API
I am trying to deploy a Docker image stored in Google Artifact Registry to my Google Kubernetes Engine (GKE) cluster using the REST API.
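Worth noting: the GKE REST API manages clusters and node pools, not workloads; Deployments are created through the cluster's own Kubernetes API server (`POST /apis/apps/v1/namespaces/{ns}/deployments`). A minimal sketch of building that request body, with a hypothetical project, repository, and app name:

```python
import json

# Hypothetical Artifact Registry image path; substitute your own.
IMAGE = "us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest"

def deployment_body(name: str, image: str, replicas: int = 1) -> dict:
    """Build the JSON body for POST /apis/apps/v1/namespaces/{ns}/deployments."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

body = json.dumps(deployment_body("my-app", IMAGE))
# POST `body` to https://<cluster-endpoint>/apis/apps/v1/namespaces/default/deployments
# with Content-Type: application/json and an OAuth2 bearer token
# (e.g. obtained via `gcloud auth print-access-token`).
```

If the cluster's nodes lack permission to pull from Artifact Registry, the pods will fail with ImagePullBackOff even though the Deployment is created successfully.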
Unable to Identify Source of Deprecated autoscaling/v2beta2 API Usage in GKE Cluster
I’m currently managing a Kubernetes cluster on Google Cloud (GKE) and have received a notice that I’m using deprecated APIs which will affect my upgrade to version 1.26. Specifically, the API in question is autoscaling/v2beta2 for HorizontalPodAutoscalers.
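One complication here: the API server serves stored objects at whatever version you request, so `kubectl get hpa -o yaml` may not reveal which client originally submitted `autoscaling/v2beta2` — GKE's deprecation insights are derived from audit logs instead. A quick complementary check is to scan your own manifest repository for the deprecated apiVersion; a minimal sketch (the helper name is made up for illustration):

```python
import pathlib
import re

DEPRECATED = "autoscaling/v2beta2"

def find_deprecated(root: str) -> list[str]:
    """Return manifest files under `root` that still declare autoscaling/v2beta2."""
    hits = []
    for path in pathlib.Path(root).rglob("*.y*ml"):  # .yaml and .yml
        text = path.read_text(errors="ignore")
        if re.search(rf"apiVersion:\s*{re.escape(DEPRECATED)}", text):
            hits.append(str(path))
    return sorted(hits)
```

Any files it reports should be updated to `apiVersion: autoscaling/v2` before the 1.26 upgrade; CI pipelines and Helm charts that template the apiVersion are worth checking the same way.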
GKE unable to connect to the server: dial tcp server_ip: i/o timeout
Every time I run a command against the Kubernetes cluster I get the following error message: unable to connect to the server: dial tcp server_ip: i/o timeout
Kubernetes: connect to external MariaDB server
I’m running a GKE cluster, and my API pods have env vars pointing to my MariaDB server, which is on the same network but runs on a standalone instance outside the cluster. My API is throwing an error saying
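For reference, one common pattern for reaching a database outside the cluster is a selector-less Service paired with a manual Endpoints object, so pods can use a stable in-cluster DNS name instead of a raw IP. A minimal sketch, with a hypothetical IP (a VPC firewall rule must also allow traffic from the GKE pod range to port 3306):

```yaml
# Selector-less Service + Endpoints pointing at the external MariaDB instance
apiVersion: v1
kind: Service
metadata:
  name: mariadb-external
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mariadb-external   # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.128.0.5     # hypothetical internal IP of the MariaDB VM
    ports:
      - port: 3306
```

Pods would then set their env var to `mariadb-external:3306` (or the fully qualified `mariadb-external.<namespace>.svc.cluster.local`).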
HA workloads on regional GKE clusters
My understanding of the docs (and a quick conversation with Gemini) is that in regional clusters, the pods are replicated across the node pools, and a load balancer then spreads traffic across the pods. This would mean that if I deploy a hello-world pod, I essentially deploy multiple replicas, removing the need to create the replicas myself.
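A clarifying sketch (image is Google's public sample; labels are placeholders): a regional cluster replicates the control plane and spreads nodes across zones, but the replica count still comes from the workload's own spec — nothing multiplies a single pod automatically. For HA you declare replicas yourself and, optionally, ask the scheduler to spread them across zones:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3              # replicas are declared here, not created by GKE
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: hello-world
      containers:
        - name: hello-world
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
```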
GKE kubectl configuration issue
I have created a GKE cluster, version 1.26.15, with 2 nodes. The cluster was created successfully and is running.
How to force pods on same node when using GKE Autopilot?
I’m trying to run two single-replica Deployments on GKE Autopilot. I would like both pods to be scheduled on the same node.
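The standard mechanism for this is inter-pod affinity with `topologyKey: kubernetes.io/hostname`; Autopilot supports pod affinity, though it restricts which topology keys are allowed, so this is a sketch under that assumption (all names are placeholders). The second Deployment requires a node already running a pod labeled `app: app-a`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-b
  template:
    metadata:
      labels:
        app: app-b
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: app-a   # label on the first Deployment's pod
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app-b
          image: us-docker.pkg.dev/my-project/my-repo/app-b:latest
```

Note that with `required...` affinity, app-b stays Pending until an app-a pod exists; `preferredDuringSchedulingIgnoredDuringExecution` is the softer alternative.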