Cloud Native Security in 2021: Terrascan Policy Update #2
Note: Part 2 of a series
Fresh on the heels of our first batch of new Terrascan policies in 2021, and the Terrascan v1.3.1 release, we have a second batch of policies designed to improve cloud native security — specifically, for Kubernetes. These new policies are included in Terrascan v1.3.
To stay on top of the latest developments, consider watching or starring the Terrascan repo and following this blog. If you have any suggestions, comments, or special requests please let us know by opening an issue (or PR!) or posting a note on our community site.
This time, we focused on two other common sources of misconfiguration that can lead to security problems. Most of the policies in this batch focus on hardening the Kubernetes configuration from a few different perspectives:
- Avoiding insecure defaults, components, or versions
- Favoring explicit over implicit configurations to avoid misunderstandings
- Implementing best practices to improve security and repeatability
In the descriptions below, I use the term “Pod” loosely: Terrascan policies typically apply to multiple relevant scopes, such as DaemonSet, Deployment, ReplicaSet, etc.
Cloud Native Security and Hardened Configurations
- Ensure Kubernetes Dashboard Is Not Deployed The Kubernetes Dashboard has its place, but it really shouldn’t be present in a production deployment. As Tesla discovered a few years ago, there are a variety of ways that the dashboard can represent a security risk, and security often boils down to minimizing opportunities for error.
- Ensure Tiller (Helm V2) Is Not Deployed
- Ensure that the Tiller Service (Helm v2) is deleted These policies alert you if your cluster uses Helm v2. Helm v2 should be avoided because it is old, unsupported, and no longer receives security fixes; Helm v3 provides a better alternative without these concerns (and without Tiller).
- Ensure Default Service Accounts are Not Actively Used We all know we should be designing for the principle of least privilege. Right? Relying on default service accounts is not that. This policy identifies roles and bindings that use these default service accounts so you can design something better tailored to each workload.
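As an illustrative sketch (the account, namespace, and image names below are hypothetical), a workload can run under a dedicated, least-privilege service account instead of the default one:

```yaml
# Dedicated service account instead of "default" (hypothetical names throughout).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: billing-api
  namespace: billing
---
apiVersion: v1
kind: Pod
metadata:
  name: billing-api
  namespace: billing
spec:
  serviceAccountName: billing-api   # explicit, tailored account instead of "default"
  containers:
    - name: app
      image: registry.example.com/billing-api:1.4.2
```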
- Image Tag Should be Fixed – Not Latest or Blank
- Ensure images are selected using a digest Best practices include ensuring repeatable behavior when containers are spun up. Specifying a tag or, better, a digest in the image spec ensures that you always get exactly the expected image.
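As a sketch of what these policies encourage (image name is illustrative, and the digest is a placeholder, not a real value):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      # Avoid: image: nginx          (implicit :latest)
      # Avoid: image: nginx:latest
      # Good:  image: nginx:1.21.6   (fixed tag)
      # Best: pin by digest; <digest> is a placeholder for the real sha256 value
      image: nginx@sha256:<digest>
```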
- Ensure imagePullPolicy set to Always Using the “Always” pull policy ensures that the kubelet verifies the image digest before using a locally cached copy. This can be important, for example, if you replace a tagged image in the repository: without the “Always” pull policy, the kubelet may use an old cached version of the tag rather than the updated image in the repo.
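A minimal sketch of the setting this policy checks for (image and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.21.6
      imagePullPolicy: Always   # kubelet re-validates against the registry rather than trusting its local cache
```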
- Default Namespace Should Not be Used Much as we should avoid using default service accounts, we should avoid use of the default namespace. This helps ensure that consideration is given to appropriate segmentation of the application, and simplifies roles and permissions. Meaningful namespaces can also reduce operational friction.
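A quick sketch of the practice (the namespace name is hypothetical): state the namespace explicitly rather than letting resources land in "default".

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: web-frontend   # hypothetical; explicit instead of the implicit "default"
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0.0
```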
- All namespaces must have an ‘owner’ label The “owner” label does not have an immediate security impact, but it can prove very useful in helping operational teams understand who is responsible for a particular resource, or who to contact in the event that something seems amiss with the resource. This policy helps you identify resources that do not have an owner label.
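As a small illustration (namespace and team names are hypothetical), the label this policy looks for can be attached like so:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: billing
  labels:
    owner: payments-team   # who to contact when something seems amiss
```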
- Apply Security Context to Pods and Containers The SecurityContext defines numerous important security parameters for your objects. They should be assigned according to the principle of least privilege and explicitly defined. This policy identifies objects which do not specify a SecurityContext.
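As a hedged sketch of an explicit, least-privilege SecurityContext (names, image, and UID are illustrative; the right values depend on your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:            # pod-level defaults, inherited by containers
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      securityContext:        # container-level settings
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```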
- Ensure proper ProcMount security context policy is used The /proc filesystem can enable containers to escape their sandbox and gain access to the host node. This policy identifies containers whose configuration allows unsafe access to the /proc filesystem.
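As a sketch of the safe setting (names are illustrative): keep the default, masked /proc mount rather than an unmasked one.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      securityContext:
        procMount: Default   # keep sensitive /proc paths masked; avoid "Unmasked"
```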
- Block Nodeport Services NodePort configurations expose ports on all nodes in the cluster and can lead to confusion around which ports are open on which nodes. Moreover, they provide very little control over access to the service and they are problematic from an operational perspective. It is safer, often easier, and more controllable to explicitly configure an Ingress controller.
- Restrict the use of externalIPs Similar to NodePort, in the best case externalIPs create dependencies on externally managed resources and cause operational confusion. More importantly, they can be leveraged in attack scenarios like CVE-2020-8554. As a result, you should avoid using externalIPs.
- Ensure ingress is configured to HTTPS only It’s no surprise that HTTPS is more secure than HTTP. Encrypting incoming traffic, whether it comes from end users or other services, protects the contents of that traffic. This policy helps you identify services that use HTTP ingress.
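A sketch of an HTTPS-only Ingress (host, secret, and service names are hypothetical; the allow-http annotation shown is the GKE-style way to disable plain HTTP, and other ingress controllers may use different settings):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.allow-http: "false"   # disable plain HTTP (GKE-style annotation)
spec:
  tls:
    - hosts: [web.example.com]
      secretName: web-tls          # hypothetical TLS certificate secret
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 443
```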
- Prefer Using Secrets As Files Over Secrets As Environment Variables Files provide more access controls than environment variables, and provide a safer, more controllable and auditable way to make secrets available to your pods.
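As an illustrative sketch (secret and mount names are hypothetical), a secret mounted as a read-only file volume rather than injected as environment variables:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets   # secret appears as files under this path
          readOnly: true
      # Avoid env + secretKeyRef: anything that can read the container's
      # environment can read the secret value.
  volumes:
    - name: db-creds
      secret:
        secretName: db-credentials   # hypothetical secret
```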
- Ensure Service Account Tokens are Mounted only where Necessary The automountServiceAccountToken option gives a container a service account token for accessing the API server. This option should be used only when necessary, and this policy helps you identify containers that use it.
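A minimal sketch of opting out of token mounting for a workload that never needs the API server (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  automountServiceAccountToken: false   # no API server token mounted into the pod
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
```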
- Check for the host file system paths that are not allowed Mounting paths from the host filesystem inside pods can enable workloads to escape the container sandbox and access resources on the nodes. This can lead to all sorts of problems including complete compromise of the node. This policy identifies configurations which allow unsafe access to host paths.
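To illustrate the kind of configuration this policy flags (names and paths are illustrative; this is an example of what to avoid, not a recommendation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
    - name: shell
      image: busybox:1.35
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /   # mounts the node's root filesystem; flagged as unsafe
```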
Policy as Code Can Help Improve Observability
The last two policies in this batch are designed to improve the operational readiness of your Pods:
- Ensure liveness probe defined for pod
- Ensure readiness probe defined for pod
Best practices include specifying liveness and readiness probes for your pods so that Kubernetes can correctly manage your instances. These policies ensure that you are doing so.
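As a sketch of both probes together (paths, port, and timings are illustrative and should match your application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0.0
      livenessProbe:              # restart the container if this fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:             # withhold traffic until this succeeds
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```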
More To Come
This second batch of policies helps you implement both proactive and reactive security controls in your development process. We’re excited about our plans to introduce hundreds more policies in the coming weeks, and helping teams to better secure their systems. Be sure to watch the Terrascan repo and this blog for updates. If you like what we’re doing, feel free to let us know by starring our repo or posting a note on our community site.
Note: this is post #2 in a series discussing Terrascan policy updates in 2021. You can find the first post here, and the complete series is available in our Terrascan category. The policies discussed in this document are available in Terrascan 1.3 and later. To ensure you are using the latest policies, you can delete your local policy configuration (typically in $HOME/.terrascan) and run terrascan init. That will pull down the latest policies from the Terrascan repository.