If you’re a security practitioner, you are probably familiar with virtualization. In much the same way that virtualization exploded a decade ago, application containerization technologies are becoming more prevalent today. These include container runtimes, such as Docker and rkt, as well as the orchestration technologies that surround them, such as Kubernetes and Rancher. But can they replace VMs?
As security practitioners, our job is to facilitate risk decisions. Most of those decisions nowadays involve either virtualization or containerization technologies. When evaluating a container vs. VM use case, it is therefore natural to ask which is more secure. Are there security advantages to using containers instead of VMs or vice versa?
These tools are not equivalent, so a direct security comparison isn’t apples to apples. Evaluating the security of how either is being used requires a different tool set, an understanding of vastly different security models and familiarity with different orchestration ecosystems.
The salient questions instead become: “What security properties do each have?” and “How are they being used in furtherance of security goals?” Some explanation — and a deeper dive under the hood — is advantageous to help practitioners consider how these tools fit into their organization’s risk profile.
Application containers have different properties from VMs, some of which bolster security and some of which, depending on usage, can undermine it. A key property of containers is a historically more porous segmentation boundary than a hypervisor provides.
What does a more porous segmentation boundary entail in this context? Under OS virtualization, segmentation is enforced by the hypervisor, below the level of the guest OS. A container engine’s segmentation works differently: containers run on the same OS instance as the container engine and as every other container, even though, under normal operation, they cannot interact with each other or with the engine.
Containers use OS features, called namespaces, to create logical environments in which each container’s view of processes, files and network state is invisible to the others. This means, however, that segmentation enforcement falls to the OS. As an example of why this can be tricky, consider the Linux time namespace. Until Linux 5.6 (released March 2020), time wasn’t namespaced. On an earlier kernel, such as the 4.19 kernel used by Debian 10, if a security team disabled the default protections that most container engines use to prevent a container from changing the time, that container could change the time across the entire host, other containers included. Segmentation enforcement thus varies between OS versions, and it gets complex quickly once the whole of Linux namespaces and cgroups functionality is considered.
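The namespace model described above can be inspected directly on any Linux host. A minimal sketch, assuming a Linux shell with procfs mounted (no container engine required):

```shell
# Each process's namespace memberships appear as symlinks under /proc/<pid>/ns.
# Two processes in the same namespace see the same inode number in brackets.
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/net   # e.g. net:[4026531992]

# On kernels 5.6 and later a 'time' entry appears here too; on older kernels
# (such as Debian 10's 4.19) it is absent, so the clock is shared host-wide.
ls /proc/self/ns
```

A container engine builds its isolation from exactly these kernel facilities; when a resource has no corresponding namespace on the running kernel, that resource is shared across every container on the host.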
In addition, artifacts of the OS operate the same way they always did: The root user is still the root user, sensitive files are where they always are and so forth. This means it is up to the administrator to ensure the host running the container engine is secured and hardened appropriately.
That said, there can be security advantages depending on what you’re trying to do; just as with OS virtualization, there are pros as well as cons. Containers foster a mentality of destroying old or stale containers and deploying fresh ones. This, in turn, reduces configuration drift over time, as individual containers don’t live long enough to develop “personality.” In the OS virtualization world, by contrast, instances can persist for a long time. They get stale, collect backup files and old software artifacts, and undergo configuration tweaks that, in a containerized environment, would be reset on redeployment.
Beyond this, don’t underestimate the security value of being able to separate application components or services that would otherwise share the same virtual or physical device. This supports a broader, security-aware, microservices-based deployment model.
Container security pros
- Configuration drift is minimized because containers are routinely destroyed and redeployed.
- Containers are lightweight and portable; they can be rapidly fielded to new environments to facilitate development and be used for specialized testing.
- Creation artifacts — for example, Dockerfile/YAML artifacts — provide a manifest of software and configuration.
Container security cons
- The segmentation model is more complicated and enforced at the OS level.
- Underlying segmentation enforcement features are version-dependent.
- Security of the underlying host running the container engine is paramount.
Container security best practices
To help secure your container environments, there are a few important principles to keep in mind (note that this is not intended to be an exhaustive list):
- Consider orchestration. As with VMs, containers can get difficult and unwieldy to manage at scale. Consider the use of orchestration tools — such as the ubiquitous Kubernetes — to help with deployment, scaling and security services, including secrets management.
- Don’t ignore Dockerfile or YAML. One of the biggest but often overlooked security benefits is the manifest. By looking at the build artifacts for a container, you can know with certainty exactly what’s in that container and how it’s configured. This can be a huge benefit from a security review standpoint.
- Consider container scanning. Traditional vulnerability scanners often work well in a VM context; as long as an instance is up, you can scan it the same way you would a physical host. Containers pose a different challenge, which is why there are special-purpose tools — e.g., the open source Anchore Engine and Clair — that scan images and alert admins to vulnerable software. These can be integrated directly into your development and release pipeline.
- Consider sidecar proxies. One of the ways you can add security value is to decouple a container’s application flow from the underlying network. To help do this, proxies such as Envoy, typically deployed as a sidecar or separate container running on the same engine, and Istio — service mesh orchestration — can help monitor traffic and provide a foothold to add or tweak underlying security functionality.
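The manifest benefit described above is easiest to see in a concrete build artifact. A minimal Dockerfile sketch (the base image, package and file names are illustrative):

```dockerfile
# The base image and version are pinned, so the starting point is known.
FROM debian:10-slim

# Every installed package is declared here and reviewable in version control.
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx \
    && rm -rf /var/lib/apt/lists/*

# Configuration is copied in explicitly rather than tweaked by hand later.
COPY nginx.conf /etc/nginx/nginx.conf

# Drop root: the container runs as an unprivileged user.
USER www-data
CMD ["nginx", "-g", "daemon off;"]
```

A reviewer can read this file and know exactly what software and configuration the resulting container holds, which is the security-review benefit the manifest provides.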
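The sidecar pattern above can be sketched in Kubernetes terms. All names, image tags and ports here are illustrative, and the Envoy configuration itself is omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar          # illustrative name
spec:
  containers:
  - name: app                     # the application container
    image: example/app:1.0        # illustrative image
    ports:
    - containerPort: 8080
  - name: envoy-proxy             # sidecar proxy in the same pod
    image: envoyproxy/envoy:v1.22.0
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    secret:
      secretName: envoy-config    # config delivered via a Secret, not baked in
```

Because both containers share the pod’s network namespace, the proxy can observe and shape the application’s traffic without any change to the application itself, and orchestration-managed secrets keep sensitive configuration out of the image.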
Across the virtual data center, hybrid cloud and IaaS, most technology and security practitioners have experience with VMs and hypervisors. A key security feature here is a strong segmentation boundary, both between virtual hosts on the same hypervisor and between virtual hosts and the hypervisor itself.
Segmentation attacks in the virtualization world can occur in hypervisors or in the hardware those hypervisors run on. Hardware-level issues, such as TRRespass (CVE-2020-10255), have implications in a virtual environment because they can cause state changes across the segmentation boundary. Likewise, software flaws in hypervisors can cause denial of service (e.g., CVE-2020-3999) or expose information across that boundary (e.g., CVE-2020-3971).
Despite these potential issues, the segmentation between VMs and the hypervisor, and between VMs themselves, is designed to be as strong as possible. VMs therefore help where a powerful segmentation boundary is critical: for example, where security teams don’t control the entire environment, or in a multi-tenant setting. They can also be advantageous when teams need to change the underlying network configuration quickly through software, or when they want to migrate a physical device into a virtual environment.
There are some potential security downsides, though. Management issues may arise, particularly at scale, without systematic planning. Issues such as undesired proliferation of images and sprawl, stale or outdated images in serialized form on disk, and inventorying challenges can cause headaches for security teams. Orchestration can lessen these issues but cannot eliminate them entirely.
For organizations with extensive physical assets, physical-to-virtual migration is relatively easy. This cuts both ways from a security perspective. Virtualizing a physical host can provide an opportunity to harden the device, and features such as snapshots can assist with patching. But migration can also lead to staleness, as when a device that should have been decommissioned stays live, or carry forward the sins of the past, such as the configuration deviation that accumulates when a server persists for many years.
From a development point of view, a virtual environment gives developers a playground to work within. They can use the cloud to rapidly provision new instances, and snapshots let them reset a VM to a known stable configuration. However, VM images are large and therefore less portable than developers might like.
VM security pros
- There is a strong segmentation boundary between workloads and between guest and hypervisor.
- Physical-to-virtual migration is easy in situations where physical hardware is being phased out.
- Snapshots can assist with testing of patching efforts and development.
VM security cons
- While rare, segmentation attacks still can and do happen.
- Virtual workloads are large and therefore less portable.
VM security best practices
To help secure your virtual environments, there are a few important principles to keep in mind (note that this is not intended to be an exhaustive list):
- Get educated. Learn about your hypervisor platform’s capabilities and security features. If you’re using an IaaS provider, understand its available security features and configuration options. Stay alert to vulnerabilities affecting the hypervisor platform, as well as any that affect the underlying hardware.
- Use snapshots purposefully. Snapshots can be a fantastic security tool: they make patching easier, preserve a known-good state, assist with development and so on. However, they must be stored securely and managed carefully, because snapshots can contain server secrets, including passwords and cryptographic keys.
- Manage your environment. Consider orchestration tools or other software to help manage your virtual environment. Even if you don’t go the orchestration route, keep an eye on VM usage to avoid sprawl, and keep an eye on the health and staleness of workloads to ensure they’re kept up to date.
- Select compatible tools. Ensure you are using the right tools for security tasks, and understand that the right tool depends on how you deploy. For example, a virtual tap or port mirroring works differently on a hypervisor you own than in an IaaS environment.
Comparing container vs. VM security
To compare container vs. VM and ask, “Which is more secure?” is a bit like asking: “Which is more useful: a hammer or a banana?” It depends entirely on the usage context. The banana makes for a better breakfast, but don’t try to hammer in a nail with it.
For a legacy application — maybe even one that currently resides on a physical host — a physical-to-virtual migration to a protected hypervisor might fit the bill, whereas a new application built from hundreds of microservices is perhaps a better fit for a containerized environment. Some organizations are still wary of multi-tenant container platforms due to the historical namespacing complexity of containers, while others favor the advantages of minimized configuration drift. Ultimately, the choice comes down to where your company’s challenge areas are, your specific usage and your business context.