upstream tailscale #835
base: main
Conversation
Bridgecrew has found errors in this PR ⬇️
resource "kubernetes_deployment" "operator" {
Ensure CPU limits are set
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_10 | Checkov ID: CKV_K8S_11
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <name>
spec:
  containers:
  - name: <container name>
    image: <image>
    resources:
      limits:
+       cpu: <cpu limit>
Description
Kubernetes allows administrators to set CPU quotas in namespaces as hard limits for resource usage. Containers cannot use more CPU than the configured limit. Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests. CPU quotas ensure adequate utilization of shared resources; a system without managed quotas could eventually collapse due to inadequate resources for the tasks it bears.
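Since the flagged resource is a Terraform `kubernetes_deployment`, the YAML fix above maps to a `resources` block on the container in the pod template. A minimal sketch, with the container name, image, and limit values chosen for illustration rather than taken from this PR:

```hcl
# Fragment of the pod template inside kubernetes_deployment.operator
container {
  name  = "operator"                      # illustrative
  image = "tailscale/k8s-operator:1.0.0"  # illustrative
  resources {
    limits = {
      cpu    = "500m"   # hard cap; pick a value based on observed usage
      memory = "256Mi"
    }
    requests = {
      cpu    = "100m"   # what the scheduler guarantees
      memory = "128Mi"
    }
  }
}
```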
resource "kubernetes_deployment" "operator" {
Ensure liveness probe is configured
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_7 | Checkov ID: CKV_K8S_8
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <name>
spec:
  containers:
  - name: <container name>
    image: <image>
+   livenessProbe:
      <Probe arguments>
Description
The kubelet uses liveness probes to know when to schedule restarts for containers. If a container is unresponsive, whether due to a deadlocked application or a multi-threading defect, restarting it can make the application more available despite the bug.
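In the Terraform kubernetes provider this corresponds to a `liveness_probe` block inside the container. A sketch under the assumption that the operator exposes an HTTP health endpoint (the path and port here are illustrative, not taken from this PR):

```hcl
# Fragment of the container block inside kubernetes_deployment.operator
liveness_probe {
  http_get {
    path = "/healthz"   # illustrative; the container must actually serve this
    port = 8080         # illustrative
  }
  initial_delay_seconds = 10
  period_seconds        = 10
  failure_threshold     = 3   # restart after 3 consecutive failures
}
```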
resource "kubernetes_deployment" "operator" {
Ensure readiness probe is configured
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_8 | Checkov ID: CKV_K8S_9
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <name>
spec:
  containers:
  - name: <container name>
    image: <image>
+   readinessProbe:
      <Probe configurations>
Description
A readiness probe is a Kubernetes capability that enables teams to make their applications more reliable and robust. The probe regulates under which circumstances a pod should be taken out of the list of service endpoints so that it no longer responds to requests. Using readiness probes ensures teams define what actions need to be taken to prevent failure and ensure recovery in case of unexpected errors.
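The Terraform equivalent is a `readiness_probe` block on the container, analogous to the liveness probe. A sketch with illustrative endpoint and timings (not taken from this PR):

```hcl
# Fragment of the container block inside kubernetes_deployment.operator
readiness_probe {
  http_get {
    path = "/readyz"   # illustrative
    port = 8080        # illustrative
  }
  period_seconds    = 5
  failure_threshold = 3   # pod is removed from service endpoints after 3 failures
}
```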
resource "kubernetes_deployment" "operator" {
Ensure securityContext is applied to pods and containers in container context
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_28 | Checkov ID: CKV_K8S_30
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: <image>
+   securityContext:
Description
**securityContext** defines privilege and access control settings for your pod or container and holds security configurations that will be applied to a container. Some fields are present in both **securityContext** and **PodSecurityContext**; when both are set, **securityContext** takes precedence. Well-defined privilege and access control settings enhance assurance that your pod is running with the properties it requires to function.
Benchmarks
- CIS EKS V1.1 4.6.2
- CIS GKE V1.1 4.6.3
- CIS KUBERNETES V1.5 1.6.5
- CIS KUBERNETES V1.6 5.7.3
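In Terraform, the container-level equivalent is a `security_context` block inside the `container` block. A minimal sketch; which hardening settings are appropriate depends on what the operator process actually needs:

```hcl
# Fragment of the container block inside kubernetes_deployment.operator
container {
  # ...
  security_context {
    run_as_non_root            = true
    allow_privilege_escalation = false
    read_only_root_filesystem  = true   # only if the process needs no writable rootfs
  }
}
```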
resource "kubernetes_deployment" "operator" {
Ensure images are selected using a digest
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_39 | Checkov ID: CKV_K8S_43
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Description
In some cases you may prefer to use a fixed version of an image rather than update to newer versions. Docker lets you pull an image by its digest, specifying exactly which version of an image to pull. Pulling by digest allows you to “pin” an image to that version and guarantees that the image you’re using is always the same. Digests also prevent race conditions: if a new image is pushed while a deploy is in progress, different nodes may pull the image at different times, so some nodes end up with the new image and some with the old one. Services automatically resolve tags to digests, so you don't need to specify a digest manually.
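In the Terraform resource this is just the `image` attribute of the container. A sketch reusing the example digest from the check above (it is the check's sample digest, not a real operator image digest):

```hcl
# Fragment of the container block inside kubernetes_deployment.operator
container {
  name = "operator"   # illustrative
  # Pin the image by digest instead of a mutable tag.
  image = "tailscale/k8s-operator@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2"
}
```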
resource "kubernetes_deployment" "operator" {
Ensure securityContext is applied to pods and containers in pod context
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_43 | Checkov ID: CKV_K8S_29
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: <image>
+   securityContext:
Description
**securityContext** defines privilege and access control settings for your pod or container and holds security configurations that will be applied to a container. Some fields are present in both **securityContext** and **PodSecurityContext**; when both are set, **securityContext** takes precedence. Well-defined privilege and access control settings enhance assurance that your pod is running with the properties it requires to function.
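For the pod-context variant of this check, the Terraform `security_context` block goes on the pod template `spec`, not on the individual container. A sketch with illustrative UID/GID values:

```hcl
# Fragment of the pod template inside kubernetes_deployment.operator
spec {
  security_context {          # pod-level settings, inherited by containers
    run_as_non_root = true
    run_as_user     = 1000    # illustrative non-root UID
    fs_group        = 1000    # illustrative
  }
  container {
    # ...
  }
}
```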
resource "kubernetes_cluster_role" "tailscale_operator" {
Minimize wildcard use in Roles and ClusterRoles
Resource: kubernetes_cluster_role.tailscale_operator | Bridgecrew ID: BC_K8S_107 | Checkov ID: CKV_K8S_49
How to Fix
resource "kubernetes_cluster_role" "pass" {
  metadata {
    name = "terraform-example"
  }
  rule {
    api_groups = [""]
    resources  = ["namespaces", "pods"]
    verbs      = ["get", "list", "watch"]
  }
}
Description
In Kubernetes, Roles and ClusterRoles are used to define the permissions granted to users, service accounts, and other entities in the cluster. Roles are namespaced and apply to a specific namespace, while ClusterRoles are cluster-wide and apply to the entire cluster. When you define a Role or ClusterRole, you can use wildcards to specify the resources and verbs the role applies to. For example, you might define a role that allows users to perform all actions on all resources in a namespace by using the wildcard "*" for both resources and verbs.
However, using wildcards can be a security risk because it grants broad permissions that may not be necessary for a specific role. If a role has too many permissions, it could potentially be abused by an attacker or compromised user to gain unauthorized access to resources in the cluster.
resource "kubernetes_deployment" "operator" {
Ensure admission of containers with NET_RAW capability is minimized
Resource: kubernetes_deployment.operator | Bridgecrew ID: BC_K8S_27 | Checkov ID: CKV_K8S_28
How to Fix
apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: <image>
    securityContext:
      capabilities:
        drop:
+         - NET_RAW
+         - ALL
Description
The NET_RAW capability allows a binary to use RAW and PACKET sockets, as well as to bind to any address for transparent proxying. (In capability flags such as *ep*, *e* stands for “effective” (active) and *p* for “permitted” (allowed to be used).) With Docker as the container runtime, NET_RAW is enabled by default and may be misused by malicious containers. We recommend you define at least one PodSecurityPolicy (PSP) to prevent containers with NET_RAW capability from launching.
Benchmarks
- CIS EKS V1.1 4.2.7
- CIS GKE V1.1 4.2.7
- CIS KUBERNETES V1.5 1.7.7
- CIS KUBERNETES V1.6 5.2.7
- SOC2 CC6.3.4
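In Terraform this is a `capabilities` block inside the container's `security_context`. A sketch of the drop-everything approach; whether any capability needs to be added back depends on what the operator actually does:

```hcl
# Fragment of the container block inside kubernetes_deployment.operator
container {
  # ...
  security_context {
    capabilities {
      drop = ["ALL"]   # drops NET_RAW along with everything else
      # add = ["NET_BIND_SERVICE"]   # example: add back only what is needed
    }
  }
}
```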
resource "kubernetes_role" "operator" {
Minimize wildcard use in Roles and ClusterRoles
Resource: kubernetes_role.operator | Bridgecrew ID: BC_K8S_107 | Checkov ID: CKV_K8S_49
How to Fix
resource "kubernetes_cluster_role" "pass" {
  metadata {
    name = "terraform-example"
  }
  rule {
    api_groups = [""]
    resources  = ["namespaces", "pods"]
    verbs      = ["get", "list", "watch"]
  }
}
Description
In Kubernetes, Roles and ClusterRoles are used to define the permissions granted to users, service accounts, and other entities in the cluster. Roles are namespaced and apply to a specific namespace, while ClusterRoles are cluster-wide and apply to the entire cluster. When you define a Role or ClusterRole, you can use wildcards to specify the resources and verbs the role applies to. For example, you might define a role that allows users to perform all actions on all resources in a namespace by using the wildcard "*" for both resources and verbs.
However, using wildcards can be a security risk because it grants broad permissions that may not be necessary for a specific role. If a role has too many permissions, it could potentially be abused by an attacker or compromised user to gain unauthorized access to resources in the cluster.
resource "kubernetes_role" "proxies" {
Minimize wildcard use in Roles and ClusterRoles
Resource: kubernetes_role.proxies | Bridgecrew ID: BC_K8S_107 | Checkov ID: CKV_K8S_49
How to Fix
resource "kubernetes_cluster_role" "pass" {
  metadata {
    name = "terraform-example"
  }
  rule {
    api_groups = [""]
    resources  = ["namespaces", "pods"]
    verbs      = ["get", "list", "watch"]
  }
}
Description
In Kubernetes, Roles and ClusterRoles are used to define the permissions granted to users, service accounts, and other entities in the cluster. Roles are namespaced and apply to a specific namespace, while ClusterRoles are cluster-wide and apply to the entire cluster. When you define a Role or ClusterRole, you can use wildcards to specify the resources and verbs the role applies to. For example, you might define a role that allows users to perform all actions on all resources in a namespace by using the wildcard "*" for both resources and verbs.
However, using wildcards can be a security risk because it grants broad permissions that may not be necessary for a specific role. If a role has too many permissions, it could potentially be abused by an attacker or compromised user to gain unauthorized access to resources in the cluster.