How to terminate a side-car container in Kubernetes Job
There is a sidecar pattern in Kubernetes: a Pod contains a main container and one or more sub containers, for example a DB client alongside the Cloud SQL Proxy, or a Ruby on Rails container running Puma behind an Nginx proxy container. It's useful in most cases, but it can get you into trouble when you use it in a Kubernetes Job. A Job is a one-shot task: every container has to terminate at the end of the execution, but proxy containers keep running like daemon processes, so the Job never finishes. In the Kubernetes issue thread on GitHub, many people ask to be able to declare the sidecar's shutdown trigger in the manifest file, but for now we can't do that. This article is about how I avoid piling up zombie Jobs.
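To illustrate the problem, here is a minimal sketch (the images and commands are placeholders, not the real migration): the main container exits right away, but the sidecar keeps serving, so the Pod stays Running and the Job is never marked as complete.

apiVersion: batch/v1
kind: Job
metadata:
  name: stuck-job-example
spec:
  template:
    spec:
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "echo 'work done'"]  # finishes immediately
      - name: sidecar
        image: nginx  # keeps listening forever, so the Pod never completes
      restartPolicy: Never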
I wanted a Job that executes an ALTER SQL statement. I used a MySQL Docker container to run the SQL through a Cloud SQL Proxy Docker container against a Google Cloud SQL MySQL instance, and I wrote the shell script directly in the Job manifest file like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: client
        image: mysql:5.7
        command: ["/bin/sh", "-c"]
        args:
        - |
          sleep 2s
          trap "touch /tmp/pod/terminated" EXIT
          cat /migration/create_db.sql | mysql --defaults-file=/conf/my.cnf --host=$(MYSQL_HOST) $(DB_NAME)
        env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: username
        - name: MYSQL_HOST
          value: 127.0.0.1
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: password
        - name: DB_NAME
          value: database
        volumeMounts:
        - mountPath: /migration
          name: migration-files
          readOnly: true
        - mountPath: /conf
          name: db-cnf
          readOnly: true
        - mountPath: /tmp/pod
          name: tmp-pod
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.11
        name: cloudsql-proxy
        command: ["/bin/sh", "-c"]
        args:
        - |
          /cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 -credential_file=/secrets/cloudsql/credentials.json & CHILD_PID=$!
          (while true; do if [[ -f "/tmp/pod/terminated" ]]; then kill $CHILD_PID; echo "Killed $CHILD_PID because the main container terminated."; fi; sleep 1; done) &
          wait $CHILD_PID
          if [[ -f "/tmp/pod/terminated" ]]; then echo "Job completed. Exiting..."; exit 0; fi
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - mountPath: /tmp/pod
          name: tmp-pod
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: migration-files
        configMap:
          name: migration
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: db-cnf
        secret:
          secretName: db-cnf
      - name: tmp-pod
        emptyDir: {}
  backoffLimit: 1
  parallelism: 1
  completions: 1
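Assuming the manifest is saved as job.yaml (the file name is just my choice here), the Job is created the usual way:

kubectl apply -f job.yaml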
To say the least, it's complex. Let's go through it section by section.
spec:
  template:
    spec:
      containers:
      - name: client
        image: mysql:5.7
        command: ["/bin/sh", "-c"]
        args:
        - |
          sleep 2s
          trap "touch /tmp/pod/terminated" EXIT
          cat /migration/create_db.sql | mysql --defaults-file=/conf/my.cnf --host=$(MYSQL_HOST) $(DB_NAME)
It sleeps first to wait for the Cloud SQL Proxy to start. In Kubernetes you can't control the container start order, so there is no other way. Instead of a fixed sleep, you could also use a retry loop:
for i in $(seq 20)
do
  mysql --defaults-file=/conf/my.cnf --host=$(MYSQL_HOST) $(DB_NAME) -e 'select 1' || (sleep 1; false) && break
done
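A slightly tidier check (my variation, not part of the original manifest) is mysqladmin ping, which keeps failing until the server actually answers:

for i in $(seq 20)
do
  mysqladmin --defaults-file=/conf/my.cnf --host=$(MYSQL_HOST) ping && break
  sleep 1
done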
After that, it executes the SQL, and the trap creates the trigger file /tmp/pod/terminated when the shell exits.
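Note that an EXIT trap fires whether the command before it succeeds or fails, so the proxy gets signalled even when the migration errors out. A one-line sketch you can try in any shell:

/bin/sh -c 'trap "echo cleanup runs" EXIT; false'  # prints "cleanup runs" even though the last command failed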
Next, let’s see sql proxy’s section.
spec.template.spec.containers:
- image: b.gcr.io/cloudsql-docker/gce-proxy:1.11
  name: cloudsql-proxy
  command: ["/bin/sh", "-c"]
  args:
  - |
    /cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 -credential_file=/secrets/cloudsql/credentials.json & CHILD_PID=$!
    (while true; do if [[ -f "/tmp/pod/terminated" ]]; then kill $CHILD_PID; echo "Killed $CHILD_PID because the main container terminated."; fi; sleep 1; done) &
    wait $CHILD_PID
    if [[ -f "/tmp/pod/terminated" ]]; then echo "Job completed. Exiting..."; exit 0; fi
The first line starts the Cloud SQL Proxy in the background and remembers its process ID. The second line starts a background loop that watches for the trigger file and kills the proxy as soon as the file appears. wait blocks until the proxy process dies, and then the script exits successfully.
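You can play with the same handshake outside Kubernetes. This is a simplified local sketch of the idea, with sleep 3600 standing in for the proxy, the watcher loop reduced to its essence, and /tmp/pod standing in for the shared emptyDir volume:

# terminal 1: the "sidecar"
mkdir -p /tmp/pod && rm -f /tmp/pod/terminated
sleep 3600 & CHILD_PID=$!
(while [ ! -f /tmp/pod/terminated ]; do sleep 1; done; kill $CHILD_PID) &
wait $CHILD_PID
echo "sidecar stopped"

# terminal 2: the "main container"
/bin/sh -c 'trap "touch /tmp/pod/terminated" EXIT; echo "work done"'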
Honestly, I think it would be easier to run this from your own console rather than through Kubernetes, but it's important to leave a footprint so your co-workers know that you executed some ALTER statements.
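For that footprint, assuming the Job name from the manifest above, the result and the client's output can be checked with plain kubectl:

kubectl get job db-migration
kubectl logs job/db-migration -c client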