One key feature of Kubernetes is that unhealthy pods are restarted automatically. How can this be tested?
First you should deploy KUARD (Kubernetes Up And Running Demo). With this Docker image you can check the restart feature easily:
(To deploy kuard read this posting, but there are some small differences.)
# kubectl create namespace kuard
namespace/kuard created
But then you cannot use kubectl run, because there is no command-line parameter to add the livenessProbe configuration. So you have to write a YAML file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
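With these settings the kubelet probes /healthy every 10 seconds and restarts the container only after 3 consecutive failures, so the worst case from the first failed probe to the restart is about 30 seconds. A quick back-of-the-envelope check in shell:

```shell
# Probe settings taken from the manifest above
INITIAL_DELAY=5
PERIOD_SECONDS=10
FAILURE_THRESHOLD=3
# The container is restarted only after failureThreshold
# consecutive failed probes, each periodSeconds apart.
echo "worst case until restart: $(( FAILURE_THRESHOLD * PERIOD_SECONDS ))s after the first failed probe"
```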
# kubectl apply -f kuard.yaml -n kuard
The exposed port stays untouched (see this posting), so you can still reach your kuard over HTTP.
So go to the tab "Liveness Probe" and you will see:
Now click on "Fail" and the livenessProbe will get an HTTP 500:
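What happens between the 500 responses and the restart is that the kubelet counts consecutive probe failures. A rough sketch of that logic, where `probe` is a hypothetical stand-in for the HTTP GET on /healthy (after clicking "Fail" it always fails):

```shell
#!/bin/sh
# Sketch of the kubelet's liveness handling: the container is
# restarted after FAILURE_THRESHOLD consecutive failed probes.
FAILURE_THRESHOLD=3
failures=0
# Hypothetical stand-in for the HTTP GET on /healthy;
# returning non-zero means the probe failed (HTTP 500).
probe() { return 1; }
while true; do
  if probe; then
    failures=0            # any success resets the counter
  else
    failures=$(( failures + 1 ))
  fi
  if [ "$failures" -ge "$FAILURE_THRESHOLD" ]; then
    echo "liveness probe failed $failures times - restarting container"
    break
  fi
done
```

A single successful probe resets the counter, which is why short hiccups do not trigger a restart.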
And after 3 retries you will see:
and the command line will show 1 restart:
# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   1          118s

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d21h

Really cool - but really annoying that this cannot be configured via CLI, only via YAML.
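If you want the restart count in a script rather than by eye, you can filter the kubectl output. A sketch using the sample output from above (in a real cluster you would pipe `kubectl get pods -n kuard` directly into awk):

```shell
# Sample output copied from the `kubectl get` call above;
# against a live cluster:
#   kubectl get pods -n kuard | awk 'NR>1 { print $4 }'
kubectl_output='NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   1          118s'
restarts=$(echo "$kubectl_output" | awk 'NR>1 { print $4 }')
echo "kuard restarts: $restarts"
```

A more direct alternative is jsonpath output, e.g. kubectl get pod kuard -n kuard -o jsonpath='{.status.containerStatuses[0].restartCount}'.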