Handling forceful K8s kill of Celery pods before running tasks complete
I am deploying a Django app with Celery workers on AWS EKS. Everything runs as expected, except that K8s stops the Celery worker replicas before they finish their ongoing tasks. The same thing happens whenever I roll out a new deployment or push new code to the master branch.
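For context, my understanding is that on pod termination Kubernetes sends SIGTERM to the container and then SIGKILL once `terminationGracePeriodSeconds` (30s by default) expires, and that Celery treats SIGTERM as a warm-shutdown request (stop accepting new tasks, finish current ones). A stripped-down sketch of the kind of worker Deployment settings I assume are relevant (all names, images, and values below are placeholders, not my actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker          # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: celery-worker
  template:
    metadata:
      labels:
        app: celery-worker
    spec:
      # Time Kubernetes waits after SIGTERM before sending SIGKILL;
      # presumably this needs to cover my longest-running task.
      terminationGracePeriodSeconds: 600
      containers:
        - name: worker
          image: my-django-app:latest          # placeholder image
          # Celery receives SIGTERM directly only if it is PID 1
          # (i.e. not wrapped in a shell that swallows the signal).
          command: ["celery", "-A", "myproject", "worker", "--loglevel=info"]
```

Is extending the grace period like this the right approach, or is there a more idiomatic way to make K8s wait for Celery to drain its tasks?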