# Kyverno Policies

## Overview

Kyverno is a policy engine designed for Kubernetes that validates, mutates, and generates configurations using policies that are themselves Kubernetes resources. Key features include:

- Policy validation and enforcement
- Resource mutation and generation
- Image verification and security controls
- Audit logging and reporting
- Admission control webhooks

The following policies are shipped by default in this platform to enforce security best practices, resource management, and operational standards.

For detailed information about Kyverno's capabilities, refer to the [official documentation](https://kyverno.io/docs/) or the [policy library](https://kyverno.io/policies/).

---

## :material-shield-lock: Rule: deny-empty-ingress-host

**Category:** Best Practices | **Severity:** medium | **Scope:** Cluster-wide

An Ingress resource must define an actual host name in order to be valid. This policy ensures that every rule in an Ingress specifies a non-empty hostname.

**Rules**

- **disallow-empty-ingress-host** (Validation)

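A minimal sketch of what such a policy could look like, modeled on the upstream Kyverno policy library (the exact policy shipped with the platform may differ):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-empty-ingress-host
spec:
  validationFailureAction: Enforce
  rules:
    - name: disallow-empty-ingress-host
      match:
        any:
          - resources:
              kinds:
                - Ingress
      validate:
        message: "The Ingress host name must be defined, not empty."
        deny:
          conditions:
            any:
              # Deny when the count of rules that define a host differs
              # from the total rule count, i.e. some rule is missing a host.
              - key: "{{ request.object.spec.rules[].host || `[]` | length(@) }}"
                operator: NotEquals
                value: "{{ request.object.spec.rules[] | length(@) }}"
```
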
---

## :material-shield-lock: Rule: require-labels

**Category:** Best Practices | **Severity:** medium | **Scope:** Cluster-wide

Define and use labels that identify semantic attributes of your application or Deployment. A common set of labels allows tools to work collaboratively, describing objects in a common manner that all tools can understand. The recommended labels describe applications in a way that can be queried. This policy validates that the labels `app.kubernetes.io/name`, `app.kubernetes.io/version`, and `app.kubernetes.io/part-of` are specified with some value.

**Rules**

- **check-for-labels** (Validation)

- **check-deployment-template-labels** (Validation)

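A hedged sketch of the Pod-level rule as it could appear inside the ClusterPolicy's `rules` list, using Kyverno's `pattern` style where `"?*"` requires any non-empty value (illustrative, not the exact shipped rule):

```yaml
- name: check-for-labels
  match:
    any:
      - resources:
          kinds:
            - Pod
  validate:
    message: >-
      The labels app.kubernetes.io/name, app.kubernetes.io/version,
      and app.kubernetes.io/part-of are required.
    pattern:
      metadata:
        labels:
          # "?*" matches any value with at least one character.
          app.kubernetes.io/name: "?*"
          app.kubernetes.io/version: "?*"
          app.kubernetes.io/part-of: "?*"
```

The companion `check-deployment-template-labels` rule would apply the same pattern under `spec.template.metadata.labels` of a Deployment.
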
---

## :material-shield-lock: Rule: deny-no-limits

**Category:** Best Practices, EKS Best Practices | **Severity:** medium | **Scope:** Cluster-wide

As application workloads share cluster resources, it is important to limit resources requested and consumed by each Pod. It is recommended to require resource requests and limits per Pod, especially for memory and CPU. If a Namespace level request or limit is specified, defaults will automatically be applied to each Pod based on the LimitRange configuration. This policy validates that all containers have something specified for memory and CPU requests and memory limits.

**Rules**

- **validate-resources** (Validation)

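A sketch of the `validate-resources` rule, following the common library pattern for this check (names and message are illustrative):

```yaml
- name: validate-resources
  match:
    any:
      - resources:
          kinds:
            - Pod
  validate:
    message: "CPU and memory resource requests and memory limits are required."
    pattern:
      spec:
        containers:
          # Every container must declare memory/CPU requests and a memory limit.
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"
```

A CPU limit is deliberately not required here; forcing CPU limits can cause unnecessary throttling, which is why the prose above mentions only memory limits.
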
---

## :material-shield-lock: Rule: deny-external-secrets

**Category:** Security | **Severity:** medium | **Scope:** Cluster-wide

When provisioning an ExternalSecret, the key must be prefixed with the namespace name to ensure proper isolation and prevent unauthorized access.

**Rules**

- **namespace-prefix** (Validation)
  - Applies to: ExternalSecret

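This is a platform-specific rule, so the following is only a rough sketch of how it could be expressed. It assumes keys are referenced at `spec.data[].remoteRef.key` and relies on Kyverno's wildcard matching in deny conditions; the real rule may be structured differently:

```yaml
- name: namespace-prefix
  match:
    any:
      - resources:
          kinds:
            - ExternalSecret
  validate:
    message: "ExternalSecret keys must be prefixed with the namespace name."
    foreach:
      # Assumption: keys live under spec.data[].remoteRef.key.
      - list: request.object.spec.data[]
        deny:
          conditions:
            all:
              # NotEquals supports wildcards, so this denies any key
              # that does not start with the requesting namespace name.
              - key: "{{ element.remoteRef.key }}"
                operator: NotEquals
                value: "{{ request.namespace }}*"
```
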
---

## :material-shield-lock: Rule: deny-nodeport-service

**Category:** Best Practices | **Severity:** medium | **Scope:** Cluster-wide

A Kubernetes Service of type NodePort uses a host port to receive traffic from any source. A NetworkPolicy cannot be used to control traffic to host ports. Although NodePort Services can be useful, their use must be limited to Services with additional upstream security checks. This policy validates that any new Services do not use the `NodePort` type.

**Rules**

- **validate-nodeport** (Validation)

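A sketch of the `validate-nodeport` rule, following the upstream library's `disallow-nodeport` policy (illustrative):

```yaml
- name: validate-nodeport
  match:
    any:
      - resources:
          kinds:
            - Service
  validate:
    message: "Services of type NodePort are not allowed."
    pattern:
      spec:
        # =(type) is a conditional anchor: the check applies only when
        # spec.type is present (the default Service type is ClusterIP).
        =(type): "!NodePort"
```
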
---

## :material-shield-lock: Rule: deny-default-namespace

**Category:** Multi-Tenancy | **Severity:** medium | **Scope:** Cluster-wide

Kubernetes Namespaces are an optional feature that provide a way to segment and isolate cluster resources across multiple applications and users. As a best practice, workloads should be isolated with Namespaces. Namespaces should be required and the default (empty) Namespace should not be used. This policy validates that Pods specify a Namespace name other than `default`. Rule auto-generation is disabled for this policy because Pod controllers must specify the `namespace` field under the top-level `metadata` object, not at the Pod template level.

**Rules**

- **validate-namespace** (Validation)

- **validate-podcontroller-namespace** (Validation)

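A sketch of both rules, modeled on the library's `disallow-default-namespace` policy (illustrative):

```yaml
rules:
  - name: validate-namespace
    match:
      any:
        - resources:
            kinds:
              - Pod
    validate:
      message: "Using 'default' namespace is not allowed."
      pattern:
        metadata:
          namespace: "!default"
  # Written explicitly (not auto-generated) because the namespace lives
  # in the controller's top-level metadata, not in the Pod template.
  - name: validate-podcontroller-namespace
    match:
      any:
        - resources:
            kinds:
              - DaemonSet
              - Deployment
              - Job
              - StatefulSet
    validate:
      message: "Using 'default' namespace is not allowed for pod controllers."
      pattern:
        metadata:
          namespace: "!default"
```
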
---

## :material-shield-lock: Rule: deny-latest-image

**Category:** Best Practices | **Severity:** medium | **Scope:** Cluster-wide

The `:latest` tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. This policy validates that the image specifies a tag and that it is not called `latest`.

**Rules**

- **require-image-tag** (Validation)

- **validate-image-tag** (Validation)

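A sketch of both rules, following the library's `disallow-latest-tag` policy (illustrative):

```yaml
rules:
  - name: require-image-tag
    match:
      any:
        - resources:
            kinds:
              - Pod
    validate:
      message: "An image tag is required."
      pattern:
        spec:
          containers:
            # The image reference must contain a tag.
            - image: "*:*"
  - name: validate-image-tag
    match:
      any:
        - resources:
            kinds:
              - Pod
    validate:
      message: "Using a mutable image tag e.g. 'latest' is not allowed."
      pattern:
        spec:
          containers:
            # "!" negates the pattern: any tag except :latest.
            - image: "!*:latest"
```
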
---

## :material-shield-lock: Rule: deny-no-pod-probes

**Category:** Best Practices, EKS Best Practices | **Severity:** medium | **Scope:** Cluster-wide

Liveness and readiness probes need to be configured to correctly manage a Pod's lifecycle during deployments, restarts, and upgrades. For each Pod, a periodic `livenessProbe` is performed by the kubelet to determine if the Pod's containers are running or need to be restarted. A `readinessProbe` is used by Services and Deployments to determine if the Pod is ready to receive network traffic. This policy validates that all containers have one of `livenessProbe`, `readinessProbe`, or `startupProbe` defined.

**Rules**

- **deny-no-pod-probes** (Validation)

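A sketch of the rule's validate stanza, modeled on the library's `require-pod-probes` policy, which iterates over containers and denies any container that defines none of the three probes (illustrative):

```yaml
validate:
  message: "Liveness, readiness, or startup probes are required for all containers."
  foreach:
    - list: request.object.spec.containers[]
      deny:
        conditions:
          # Deny only when all three probe keys are absent from the container.
          all:
            - key: livenessProbe
              operator: AllNotIn
              value: "{{ element.keys(@) }}"
            - key: readinessProbe
              operator: AllNotIn
              value: "{{ element.keys(@) }}"
            - key: startupProbe
              operator: AllNotIn
              value: "{{ element.keys(@) }}"
```
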
---

## :material-shield-lock: Rule: deny-cap-net-raw

**Category:** Best Practices | **Severity:** medium | **Scope:** Cluster-wide

Capabilities permit privileged actions without granting full root access. The CAP_NET_RAW capability, enabled by default, allows processes in a container to forge packets and bind to any interface, potentially leading to man-in-the-middle (MitM) attacks. This policy ensures that all containers explicitly drop the CAP_NET_RAW capability. Note that this policy also illustrates how to cover drop entries in any case, although this may not strictly conform to the Pod Security Standards.

**Rules**

- **require-drop-cap-net-raw** (Validation)

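A sketch of the rule's validate stanza, following the library's `require-drop-cap-net-raw` policy; the `to_upper()` call is what lets the rule cover drop entries written in any case (illustrative):

```yaml
validate:
  message: "Containers must drop the `CAP_NET_RAW` capability."
  foreach:
    # Check init, ephemeral, and regular containers alike.
    - list: request.object.spec.[ephemeralContainers, initContainers, containers][]
      deny:
        conditions:
          all:
            # Deny when CAP_NET_RAW is not among the (upper-cased)
            # dropped capabilities of the container.
            - key: CAP_NET_RAW
              operator: AnyNotIn
              value: "{{ element.securityContext.capabilities.drop[].to_upper(@) || `[]` }}"
```
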
---

**Total Policies: 9**