Please answer the quiz and click the "Test" button at the bottom right. This quiz is part of the DevOpsTheHardWay course.
Kubernetes - Pod design - multiple-choice questions
Given a cluster composed of 3 nodes, as follows:
$ kubectl describe nodes node1
Name: node1
[ ... lines removed for clarity ...]
  Resource  Requests     Limits
  --------  --------     ------
  cpu       1820m (91%)  2925m (146%)
$ kubectl describe nodes node2
Name: node2
[ ... lines removed for clarity ...]
  Resource  Requests       Limits
  --------  --------       ------
  cpu       1620m (40.5%)  6925m (346%)
$ kubectl describe nodes node3
Name: node3
[ ... lines removed for clarity ...]
  Resource  Requests     Limits
  --------  --------     ------
  cpu       2506m (84%)  3925m (213%)
Answer the three questions below based on this cluster.
Question 1
You apply the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: server
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "1000m"
      limits:
        cpu: "1000m"
Which node would the Pod be scheduled on?
- node1
- node2
- node3
- node2 or node3
Question 2
You get the following FailedScheduling error when trying to deploy a Pod:
0/3 nodes are available: 1 Insufficient cpu.
Choose the correct sentence:
- The Pod has a CPU resource request larger than 2 CPUs (2000m).
- The Pod doesn't specify a CPU request.
- The Pod has a CPU resource limit larger than 2 CPUs (2000m).
- The cluster is overloaded at that moment; you should try again later.
Question 3
You are now told that every node in the cluster has the following taint: app=contoso:NoSchedule.
You apply the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: server
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
Which error could you get?
- 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
- 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
- 0/1 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
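For reference (not part of the question): a Pod tolerates a taint when its toleration matches the taint's key, value, and effect. A minimal toleration matching the taint above would look like the following sketch:

```yaml
spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "contoso"
    effect: "NoSchedule"
```

Note that a toleration permits scheduling onto the tainted node; it does not require it.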
Question 4
If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its request for that resource specifies.
- True
- False
Question 5
Given the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: container-1
    image: nginx:latest
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        cpu: "200m"
  - name: container-2
    image: busybox:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "50m"
      limits:
        memory: "256Mi"
        cpu: "100m"
  - name: container-3
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:
        memory: "64Mi"
        cpu: "25m"
      limits:
        memory: "128Mi"
        cpu: "50m"
Choose the correct sentence(s):
- The Pod will be scheduled on a node with at least 448Mi of unallocated memory and 175m of unallocated CPU.
- The Pod will be scheduled on a node with at least 256Mi of unallocated memory and 200m of unallocated CPU.
- The Pod will be scheduled on any available node, regardless of unallocated resources.
- In case of insufficient memory, only container-3 would be scheduled, since it requests the minimum.
Question 6
Given the following snippet from the kubectl describe pod command:
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling  23s  default-scheduler  0/42 nodes available: insufficient cpu
What can you do (choose all possible answers)?
- Add more nodes to the cluster.
- Terminate unneeded Pods to make room for pending Pods.
- Check that the Pod resource requirements are not larger than the capacity of any node. For example, if all the nodes have a capacity of cpu: 1, then a Pod with a request of cpu: 1.1 will never be scheduled.
- Check for node taints. If most of your nodes are tainted, and the new Pod does not tolerate that taint, the scheduler only considers placements onto the remaining nodes that don't have that taint.
Question 7
Given the below command and the output:
$ kubectl describe pod test-pod-123
Name:           test-pod-123
Namespace:      default
Image(s):       alpine
Node:           kubernetes-node-tf0f/10.240.216.66
Labels:         name=test-pod-123
Status:         Running
Reason:
Message:
IP:             10.244.2.75
Containers:
  alpine:
    Image:          alpine:latest
    Limits:
      cpu:          100m
      memory:       50Mi
    State:          Running
      Started:      Tue, 07 Jul 2019 12:54:41 -0700
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Tue, 07 Jul 2019 12:54:30 -0700
      Finished:     Tue, 07 Jul 2019 12:54:33 -0700
    Ready:          False
    Restart Count:  5
Conditions:
  Type   Status
  Ready  False
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  42s  default-scheduler  Successfully assigned test-pod-123 to kubernetes-node-tf0f
  Normal  Pulled     41s  kubelet            Container image "alpine:latest" already present on machine
  Normal  Created    41s  kubelet            Created container alpine
  Normal  Started    40s  kubelet            Started container alpine
  Normal  Killing    32s  kubelet            Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod
Choose the correct sentence:
- The alpine container was killed because it hit a resource limit.
- The alpine container is running with no issues (Status: Running).
- The alpine container is restarting again and again because it is hitting a resource limit.
- The output is not authentic, since the last Pod state is Terminated but the Pod status is Running.
Question 8
Given:
$ kubectl describe service myservice
Name: myservice
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.97.201.254
IPs: 10.97.201.254
Port: grpc 9555/TCP
TargetPort: 9555/TCP
Endpoints: 10.244.0.46:9555,10.244.0.15:9555
Session Affinity: None
Events: <none>
Resolving myservice from one of the cluster's Pods returns:
- 10.96.0.10
- 10.244.0.46
- 10.244.0.15
- 10.244.0.46 or 10.244.0.15
- 10.97.201.254
Question 9
What happens if resource requests and limits are not specified?
- The Pod will use the default resource values defined at the cluster level.
- The Pod will be scheduled on any available node regardless of resource availability.
- Kubernetes will automatically assign random resource values to the Pod.
- The Pod will fail to schedule, and Kubernetes will return a scheduling error.
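As background for this question: namespace-level defaults, when they exist, come from a LimitRange object. A minimal sketch (the object name and values here are illustrative, not taken from the quiz):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits    # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:       # applied when a container omits its requests
      cpu: "100m"
      memory: "128Mi"
    default:              # applied when a container omits its limits
      cpu: "500m"
      memory: "256Mi"
```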
Question 10
Given the below Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: container
    image: nginx:latest
    resources:
      limits:
        memory: "64Mi"
What would the memory resource request be?
- 64Mi
- The memory request was not specified, thus it would not be defined.
- The minimum viable request: 1Mi.
- The manifest could not be applied, since in that case specifying a resource request is mandatory.
Question 11
Choose the correct sentence(s) regarding container resources request and limit:
- Total resource limits in a node can be more than 100%.
- Total resource requests in a node can be more than 100%.
- A node where every Pod's resource requests equal its resource limits shouldn't experience node pressure.
- A node where every Pod's resource requests equal its resource limits might experience node pressure.
Question 12
From official k8s HPA docs:
Since the resource usages of all the containers are summed up the total pod utilization may not accurately represent the individual container resource usage. This could lead to situations where a single container might be running with high usage and the HPA will not scale out because the overall pod usage is still within acceptable limits.
How could that issue be addressed efficiently?
- Reduce the resource limits of the other containers.
- Set the resources requests to be equal to limits.
- Split the containers to different Pods.
- None of the above
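For context, depending on your cluster version, the autoscaling/v2 API also offers a ContainerResource metric type that scales on a single container's usage rather than the Pod total. A hedged sketch (object and container names are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: server      # scale on this container's usage only
      target:
        type: Utilization
        averageUtilization: 60
```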
Question 13
Given the below Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: nginx:latest
You are told that there is an HPA configured for this Deployment. What is the potential issue when changing the image tag and applying the Deployment again?
- This will instruct Kubernetes to scale the current number of Pods to the value of replicas.
- It's not recommended to perform a rolling update during a scale-up event.
- The image tag should be changed in the HorizontalPodAutoscaler object first.
- There is no issue.
Question 14
Given the below Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: nginx:latest
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
In the event of unavailability, how long does it take for kube-proxy to stop routing traffic to a Pod (the time denoted by x)?
- x = 30
- 10 < x < 20
- x = 25
- 20 < x < 30
Question 15
You deploy an application by applying a Deployment object.
You notice that Pods are in CrashLoopBackOff state.
Which of the below step(s) can help?
- Investigate Pod events.
- Delete the problematic Pods.
- Read through the Pod logs.
- Re-apply the Deployment object.
- Drain the node on which the Pods are running.
Question 16
Given a node with a taint group=blue, and a Pod with a toleration for group=blue:
- The Pod will always be placed on that node.
- The Pod will not be placed on that node.
- The Pod may be placed on that node.
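For reference (not part of the question): a toleration only allows the Pod onto a tainted node. To also require a particular node, the Pod needs node selection as well, for example a nodeSelector. A sketch, assuming the node carries a matching group=blue label (that label is an assumption, not stated above):

```yaml
spec:
  tolerations:
  - key: "group"
    operator: "Equal"
    value: "blue"          # no effect specified: matches all taint effects
  nodeSelector:
    group: blue            # assumes the node is also labeled group=blue
```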
Question 17
Given the below Pod:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: container1
    image: alpine
    command: ["/bin/sh", "-c", "sleep 5"]
You apply the Pod in your cluster. What is the Pod status after 1 minute?
- Succeeded
- Running
- Pending
- Failed