Create a deployment called my-webapp with image: nginx, label tier: frontend, and 2 replicas. Expose the deployment as a NodePort service named front-end-service with port: 80 and nodePort: 30083
kubectl create deployment my-webapp --image=nginx --replicas=2 --dry-run=client -o yaml > dep.yaml
(then edit dep.yaml to add the tier: frontend label to both the Deployment metadata and the Pod template)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-webapp
    tier: frontend
  name: my-webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-webapp
  strategy: {}
  template:
    metadata:
      labels:
        app: my-webapp
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
Create Service
apiVersion: v1
kind: Service
metadata:
  labels:
    tier: frontend
  name: front-end-service
spec:
  selector:
    tier: frontend
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30083
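As a sketch of an alternative route: kubectl expose can generate most of this Service, but it cannot set a specific nodePort, so you still have to edit the generated YAML afterwards. The file name svc.yaml is arbitrary.

```shell
# Generate the Service skeleton; kubectl expose has no nodePort flag,
# so add nodePort: 30083 to the generated file before creating it.
kubectl expose deployment my-webapp --name=front-end-service \
  --type=NodePort --port=80 --target-port=80 \
  --dry-run=client -o yaml > svc.yaml
```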
Add a taint to the node node01 of the cluster. Use the specification below:
key:app_type, value:alpha and effect:NoSchedule
Create a pod called alpha, image:redis with toleration to node01
kubectl taint nodes node01 app_type=alpha:NoSchedule
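To confirm the taint landed (standard kubectl, using the node name from the task):

```shell
# Should show: app_type=alpha:NoSchedule
kubectl describe node node01 | grep -i taints
```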
tolerations:
- key: "app_type"
  operator: "Equal"
  value: "alpha"
  effect: "NoSchedule"
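Putting it together, a minimal manifest for the alpha pod (a sketch; the toleration values come from the taint above). Note that a toleration only *allows* the pod onto node01; it does not force it there — pinning would additionally need a nodeSelector or node affinity.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpha
spec:
  containers:
  - name: alpha
    image: redis
  tolerations:
  - key: "app_type"
    operator: "Equal"
    value: "alpha"
    effect: "NoSchedule"
```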
Apply a label app_type=beta to node node02. Create a new deployment called beta-apps with image:nginx and replicas:3. Set Node Affinity to the deployment to place the PODs on node02 only
NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
k label nodes node02 app_type=beta
k create deployment beta-apps --image=nginx --replicas=3 --dry-run=client -o yaml > dep.yaml
vi dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: beta-apps
  name: beta-apps
spec:
  replicas: 3
  selector:
    matchLabels:
      app: beta-apps
  strategy: {}
  template:
    metadata:
      labels:
        app: beta-apps
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app_type
                operator: In
                values:
                - beta
      containers:
      - image: nginx
        name: nginx
k create -f dep.yaml
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
beta-apps-6f69666d6-fwpts 1/1 Running 0 115s 10.244.5.8 node02 <none> <none>
beta-apps-6f69666d6-h8vjh 1/1 Running 0 115s 10.244.5.7 node02 <none> <none>
beta-apps-6f69666d6-hx9xc 1/1 Running 0 115s 10.244.5.6 node02 <none> <none>
Create a new Ingress Resource for the service: my-video-service to be made available at the URL: http://ckad-mock-exam-solution.com:30093/video.
Create an ingress resource with host: ckad-mock-exam-solution.com
path:/video
Once set up, a curl test of the URL from the nodes should succeed with HTTP 200:
is http://ckad-mock-exam-solution.com:30093/video accessible?
First look up the service to see which port it is listening on:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end-service NodePort 10.104.240.144 <none> 80:30083/TCP 16m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39m
my-video-service ClusterIP 10.98.8.131 <none> 8080/TCP 6m18s
my-video-service is listening on port 8080. That is the backend port to set up in the ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-video-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: ckad-mock-exam-solution.com
    http:
      paths:
      - path: /video
        pathType: Prefix
        backend:
          service:
            name: my-video-service
            port:
              number: 8080
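Once created, verify the resource and its backend wiring before running the curl test (standard kubectl):

```shell
kubectl get ingress my-video-ingress
kubectl describe ingress my-video-ingress   # check the host, path, and backend service:port
```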
root@controlplane:~# curl ckad-mock-exam-solution.com:30093/video
<!doctype html>
<title>Hello from Flask</title>
<body style="background: #30336b;">
<div style="color: #e4e4e4;
text-align: center;
height: 90px;
vertical-align: middle;">
<img src="https://res.cloudinary.com/cloudusthad/image/upload/v1547053817/error_404.png">
</div>
</body>
We have deployed a new pod called pod-with-rprobe. This Pod has an initial delay before it is Ready. Update the newly created pod pod-with-rprobe with a readinessProbe using the given spec
httpGet path: /ready
httpGet port: 8080
k get po pod-with-rprobe -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: pod-with-rprobe
  name: pod-with-rprobe
  namespace: default
spec:
  containers:
  - env:
    - name: APP_START_DELAY
      value: "180"
    image: kodekloud/webapp-delayed-start
    imagePullPolicy: Always
    name: pod-with-rprobe
    ports:
    - containerPort: 8080
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
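Probes cannot be added to a running pod in place, so delete the pod and recreate it from the edited file (--force --grace-period=0 is optional; it just skips the graceful shutdown wait, which saves exam time):

```shell
kubectl delete pod pod-with-rprobe --force --grace-period=0
kubectl create -f pod.yaml
```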
Create a new pod called nginx1401 in the default namespace with the image nginx.
Add a livenessProbe to the container to restart it if the command 'ls /var/www/html/probe' fails.
This check should start after a delay of 10 seconds and run every 60 seconds.
You may delete and recreate the object. Ignore the warnings from the probe.
k run nginx1401 --image=nginx --restart=Never --dry-run=client -o yaml > pod3.yaml
vi pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx1401
  name: nginx1401
spec:
  containers:
  - image: nginx
    name: nginx1401
    livenessProbe:
      exec:
        command:
        - ls
        - /var/www/html/probe
      initialDelaySeconds: 10
      periodSeconds: 60
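As a sanity check after creating the pod (the restart behaviour follows directly from the probe spec — /var/www/html/probe does not exist in the stock nginx image, so the ls command fails; ignore the probe warnings as the task says):

```shell
kubectl create -f pod3.yaml
# RESTARTS should start climbing once the probe begins failing,
# roughly once per periodSeconds after the initial 10s delay
kubectl get pod nginx1401 -w
```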
Create a job called whalesay with image docker/whalesay and command "cowsay I am going to ace CKAD!".
completions: 10
backoffLimit: 6
restartPolicy: Never
This simple job runs the popular cowsay game that was modified by docker…
apiVersion: batch/v1
kind: Job
metadata:
  name: whalesay
spec:
  completions: 10
  backoffLimit: 6
  template:
    spec:
      containers:
      - name: whalesay
        image: docker/whalesay
        command: ["/bin/sh", "-c", "cowsay I am going to ace CKAD!"]
      restartPolicy: Never
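To watch the job run to completion (assuming the manifest above was saved as job.yaml — the file name is arbitrary):

```shell
kubectl create -f job.yaml
# COMPLETIONS should climb from 0/10 to 10/10 as the pods finish
kubectl get jobs whalesay -w
```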
k logs job/whalesay
Create a pod called multi-pod with two containers.
Container 1: name: jupiter, image: nginx
Container 2: europa, image: busybox
command: sleep 4800
Environment Variables: Container 1: type: planet
Container 2: type: moon
k run multi-pod --image=nginx --dry-run=client -o yaml > multi-pod.yaml
controlplane $ vi multi-pod.yaml
controlplane $ cat multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: jupiter
    env:
    - name: type
      value: planet
  - name: europa
    image: busybox
    command: ["/bin/sh","-c","sleep 4800"]
    env:
    - name: type
      value: moon
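After creating the pod, the per-container environment variables can be verified by exec'ing into each container by name (standard kubectl; `type` is the variable set above):

```shell
kubectl create -f multi-pod.yaml
kubectl exec multi-pod -c jupiter -- printenv type   # expect: planet
kubectl exec multi-pod -c europa -- printenv type    # expect: moon
```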
controlplane $ vi pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: custom-volume
spec:
  capacity:
    storage: 50Mi
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /opt/data
controlplane $ k create -f pv1.yaml
persistentvolume/custom-volume created
controlplane $ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
custom-volume 50Mi RWX Retain Available 4s
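STATUS Available means nothing has claimed the volume yet. As an illustration only (not part of the task; the claim name is hypothetical), a PVC with a matching access mode and a request within the 50Mi capacity would bind to this PV, assuming no default StorageClass intercepts the claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: custom-claim   # hypothetical name, for illustration
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
```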