Create a PersistentVolume called log-volume. It should use the storage class named manual, the RWX access mode, and a size of 1Gi. The volume should use the hostPath /opt/volume/nginx.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: log-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  storageClassName: manual
  hostPath:
    path: /opt/volume/nginx
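Assuming the manifest is saved as pv.yaml (the filename is an assumption), it can be applied and checked before creating the claim:

```shell
kubectl apply -f pv.yaml
kubectl get pv log-volume    # STATUS shows Available until a matching claim binds it
```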
Next, create a PVC called log-claim requesting a minimum of 200Mi of storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: manual
This PVC should bind to log-volume: the access mode and storageClassName match the PV, and the PV's 1Gi capacity satisfies the 200Mi request.
controlplane $ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
log-claim Bound log-volume 1Gi RWX manual 4s
controlplane $ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
log-volume 1Gi RWX Retain Bound default/log-claim manual 2m14s
Mount this in a pod called logger at the location /var/www/nginx. This pod should use the image nginx:alpine.
kubectl run logger --image=nginx:alpine --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: logger
  name: logger
spec:
  containers:
  - image: nginx:alpine
    name: logger
    volumeMounts:
    - name: log
      mountPath: /var/www/nginx
  volumes:
  - name: log
    persistentVolumeClaim:
      claimName: log-claim
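With the PVC referenced in the pod spec, the pod can be created and the mount verified (pod.yaml is the file generated above):

```shell
kubectl apply -f pod.yaml
kubectl exec logger -- ls /var/www/nginx   # lists the contents of hostPath /opt/volume/nginx
```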
We have deployed a new pod called secure-pod and a service called secure-service. Incoming and outgoing connections to this pod are not working.
Troubleshoot why this is happening.
controlplane $ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26m
secure-service ClusterIP 10.111.109.34 <none> 80/TCP 6m57s
controlplane $ k get po
NAME READY STATUS RESTARTS AGE
logger 1/1 Running 0 5m13s
secure-pod 1/1 Running 0 4m58s
webapp-color 1/1 Running 0 22m
controlplane $ k exec -it webapp-color -- /bin/sh
> nc -z -v -w 1 secure-service 80
nc: secure-service (10.111.109.34:80): Operation timed out
controlplane $ k get netpol
NAME POD-SELECTOR AGE
default-deny <none> 8m49s
controlplane $ k get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
logger 1/1 Running 0 15m run=logger
secure-pod 1/1 Running 0 15m run=secure-pod
webapp-color 1/1 Running 0 32m name=webapp-color
# Create a new network policy that allows ingress from webapp-color to secure-pod on port 80
kubectl get netpol default-deny -o yaml > netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: secure-netpol
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: secure-pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: webapp-color
    ports:
    - protocol: TCP
      port: 80
controlplane $ k create -f netpol.yaml
networkpolicy.networking.k8s.io/secure-netpol created
Make sure that incoming connections from the pod webapp-color are successful.
controlplane $ k exec -it webapp-color -- /bin/sh
/opt # nc -z -v -w 1 secure-service 80
secure-service (10.111.109.34:80) open
/opt # exit
Create a pod called time-check in the dvl1987 namespace. This pod should run a container called time-check that uses the busybox image.
controlplane $ k get ns
NAME STATUS AGE
default Active 37m
e-commerce Active 35m
kube-node-lease Active 37m
kube-public Active 37m
kube-system Active 37m
marketing Active 35m
controlplane $ k create ns dvl1987
namespace/dvl1987 created
Create a config map called time-config with the data TIME_FREQ=10 in the same namespace.
controlplane $ k config set-context --current --namespace=dvl1987
k create configmap time-config --from-literal=TIME_FREQ=10
The time-check container should run the command: while true; do date; sleep $TIME_FREQ;done and write the result to the location /opt/time/time-check.log.
The path /opt/time on the pod should mount a volume that lasts the lifetime of this pod.
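Note that the redirection applies to the whole while loop, so every date line lands in the log. A local sketch of the same shell pattern (the bounded two-iteration loop and /tmp path are illustrative, not part of the task):

```shell
# Bounded stand-in for the container's infinite loop; same env-var and redirection shape.
TIME_FREQ=1 sh -c 'for i in 1 2; do date; sleep $TIME_FREQ; done > /tmp/time-check.log'
wc -l < /tmp/time-check.log   # one timestamped line per iteration
```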
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: time-check
  name: time-check
  namespace: dvl1987
spec:
  containers:
  - image: busybox
    name: time-check
    env:
    - name: TIME_FREQ
      valueFrom:
        configMapKeyRef:
          name: time-config
          key: TIME_FREQ
    command: ["/bin/sh", "-c", "while true; do date; sleep $TIME_FREQ; done > /opt/time/time-check.log"]
    volumeMounts:
    - mountPath: /opt/time
      name: a-volume
  volumes:
  - name: a-volume
    emptyDir: {}
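Assuming the manifest is saved as time-check.yaml (the filename is an assumption), apply it and confirm the log is being written:

```shell
kubectl apply -f time-check.yaml -n dvl1987
kubectl exec -n dvl1987 time-check -- tail /opt/time/time-check.log
```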
controlplane $ kubectl exec time-check -- env|grep TIME
TIME_FREQ=10
Create a new deployment called nginx-deploy, with a single container called nginx, image nginx:1.16 and 4 replicas. The deployment should use a RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.
kubectl create deployment nginx-deploy --image=nginx:1.16 --dry-run=client -o yaml > depl.yaml
vi depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
k create -f depl.yaml
deployment.apps/nginx-deploy created
> k get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 4/4 4 4 16s
Next upgrade the deployment to version 1.17 using rolling update.
> kubectl set image deployment/nginx-deploy nginx=nginx:1.17
> kubectl get deployments.apps nginx-deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 4/4 4 4 3m8s
> kubectl describe deployments.apps nginx-deploy|grep Image
Image: nginx:1.17
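Before undoing the update, the rollout can be watched until all four replicas are on the new image:

```shell
kubectl rollout status deployment/nginx-deploy   # blocks until the rollout completes
```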
Finally, once all pods are updated, undo the update and go back to the previous version.
k rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
1 <none>
2 <none>
> kubectl rollout undo deployment nginx-deploy
deployment.apps/nginx-deploy rolled back
>kubectl describe deployments.apps nginx-deploy|grep Image
Image: nginx:1.16
Create a redis deployment with the following parameters:
Name of the deployment should be redis, using the redis:alpine image. It should have exactly 1 replica.
The container should request .2 CPU and use the label app=redis.
It should mount exactly 2 volumes:
a. An emptyDir volume called data at path /redis-master-data.
b. A configMap volume called redis-config at path /redis-master.
c. The container should expose port 6379.
Make sure that the pod is scheduled on the master/controlplane node.
The configmap has already been created.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      nodeName: controlplane   # use the node name shown by kubectl get nodes
      containers:
      - image: redis:alpine
        name: redis
        ports:
        - containerPort: 6379
        resources:
          requests:
            cpu: "0.2"
        volumeMounts:
        - name: data
          mountPath: /redis-master-data
        - name: redis-config
          mountPath: /redis-master
      volumes:
      - name: data
        emptyDir: {}
      - name: redis-config
        configMap:
          name: redis-config
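Assuming the manifest is saved as redis.yaml (the filename is an assumption), apply it and confirm the pod landed on the controlplane node:

```shell
kubectl apply -f redis.yaml
kubectl get pods -l app=redis -o wide   # the NODE column should show the controlplane node
```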