Given the following scenario:
As an operator of a Kubernetes cluster used by multiple users, I want to have tight control over who can schedule privileged containers.
Kubernetes containers can be run in privileged mode by providing a well-crafted SecurityContext.
Cluster administrators can prevent regular users from creating privileged containers by using a Kubernetes built-in feature called Pod Security Policies.
However, Pod Security Policies are going to be deprecated in the near future.
Pod Security Policies can be replaced by policies provided by an external Admission Controller, such as Kubewarden.
This policy inspects the AdmissionReview objects generated by the Kubernetes API server and either accepts or rejects them.
The policy can be used to inspect CREATE and UPDATE requests of Pod resources.
It will reject any pod with containers, init containers, or ephemeral containers configured as privileged in their SecurityContext.
The policy has two configuration settings:

- `skip_init_containers`: if set to `true`, instructs the policy to ignore init containers configured as privileged. Default value is `false`.
- `skip_ephemeral_containers`: if set to `true`, instructs the policy to ignore ephemeral containers configured as privileged. Default value is `false`.

The main containers of the pod will always be validated.
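The behavior described above can be sketched in a few lines. Note this is only an illustrative sketch of the documented rules, not the policy's actual implementation (the real policy is a WebAssembly module); the function and dictionary shapes below mirror the Pod spec fields and the two settings named on this page.

```python
def is_privileged(container: dict) -> bool:
    """True when a container's securityContext sets privileged: true."""
    return container.get("securityContext", {}).get("privileged", False)


def validate_pod(pod_spec: dict, settings: dict) -> tuple[bool, str]:
    """Accept or reject a Pod spec, mimicking the policy's documented rules."""
    # Main containers are always validated.
    if any(is_privileged(c) for c in pod_spec.get("containers", [])):
        return False, "Privileged container is not allowed"
    # Init containers are checked unless skip_init_containers is true.
    if not settings.get("skip_init_containers", False):
        if any(is_privileged(c) for c in pod_spec.get("initContainers", [])):
            return False, "Privileged init container is not allowed"
    # Ephemeral containers are checked unless skip_ephemeral_containers is true.
    if not settings.get("skip_ephemeral_containers", False):
        if any(is_privileged(c) for c in pod_spec.get("ephemeralContainers", [])):
            return False, "Privileged ephemeral container is not allowed"
    return True, ""
```

With an empty settings object, a pod with a privileged init container is rejected; flipping `skip_init_containers` to `true` lets it through, while a privileged main container is always rejected.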
The user is responsible for configuring the policy and defining the resources it targets; otherwise, the policy will not be able to run. The currently supported resources are listed in the metadata.yml file. See the Kubewarden documentation for more information about how to configure a policy.
Let's define the policy and see how the validation works:
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  annotations:
    io.kubewarden.policy.category: PSP
    io.kubewarden.policy.severity: medium
  name: pod-privileged-policy
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.1
  settings: {}
  rules:
    - apiGroups:
        - ''
      apiVersions:
        - v1
      resources:
        - pods
      operations:
        - CREATE
    - apiGroups:
        - ''
      apiVersions:
        - v1
      resources:
        - replicationcontrollers
      operations:
        - CREATE
        - UPDATE
    - apiGroups:
        - apps
      apiVersions:
        - v1
      resources:
        - deployments
        - replicasets
        - statefulsets
        - daemonsets
      operations:
        - CREATE
        - UPDATE
    - apiGroups:
        - batch
      apiVersions:
        - v1
      resources:
        - jobs
        - cronjobs
      operations:
        - CREATE
        - UPDATE
  mutating: false
EOF
Once the policy is running and active, we apply the following Pod specification, which doesn't have any security context defined. Therefore, it should be accepted by the policy and can be scheduled by the users of the cluster:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
EOF
pod/nginx created
However, the next Pod specification has one of its containers running in privileged mode, so it will be rejected by the policy:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: true
    - name: sleeping-sidecar
      image: alpine
      command: ["sleep", "1h"]
EOF
Error from server: error when creating "STDIN": admission webhook "clusterwide-pod-privileged-policy.kubewarden.admission" denied the request: Privileged container is not allowed
The next pod does not have a privileged container, but one of its init containers requests privileged access. Therefore, it will be rejected by the policy as well:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  initContainers:
    - name: nginx-init
      image: nginx
      securityContext:
        privileged: true
    - name: sleeping-sidecar-init
      image: alpine
      command: ["sleep", "1h"]
  containers:
    - name: sleeping-sidecar
      image: alpine
      command: ["sleep", "1h"]
EOF
Error from server: error when creating "STDIN": admission webhook "clusterwide-pod-privileged-policy.kubewarden.admission" denied the request: Privileged init container is not allowed
However, if this privileged init container is expected and must run with privileged access, you can instruct the policy to ignore init containers:
kubectl patch clusteradmissionpolicies pod-privileged-policy -p '{"spec":{"settings":{"skip_init_containers":true}}}' --type "merge"
clusteradmissionpolicy.policies.kubewarden.io/pod-privileged-policy patched
Now the workload with privileged init container should be accepted:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  initContainers:
    - name: nginx-init
      image: nginx
      securityContext:
        privileged: true
    - name: sleeping-sidecar-init
      image: alpine
      command: ["sleep", "1h"]
  containers:
    - name: sleeping-sidecar
      image: alpine
      command: ["sleep", "1h"]
EOF
pod/nginx created