What is Container Deployment?
Container deployment involves using container technologies to deploy real-world applications. Containers make it possible to build and deploy applications quickly, efficiently, and at large scale.
Container deployment increases development velocity, because it eliminates the need to manually deploy software components and their dependencies at every stage of the development lifecycle. It also improves quality by allowing teams to use repeatable, consistent components across development, testing, and production deployments. Finally, it facilitates microservices architectures by enabling teams to deploy each microservice as one or more containers, which can be deployed and updated independently of the rest of the application.
Containerized applications can be deployed manually, using script-based automation, or via container platforms. To deploy containers at large scale, organizations use orchestrators, which handle concerns like resource provisioning, container networking, security, and scalability.
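For example, a simple script-based deployment might pull a new image version and replace the running container. In the sketch below, the registry URL, container name, and port are placeholders:

```bash
# Pull the new image version (registry and tag are placeholders)
docker pull registry.example.com/myapp:1.2.0

# Stop and remove the old container, then start the new version
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.2.0
```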
In this article, you will learn:
- Advantages of Container Deployment
- Making the Move to a Containerized Deployment Model
- Top Container Engines and Platforms
- Securing Containers in Production with Aqua Security
Advantages of Container Deployment
Containers are designed for distributed environments, including microservices architectures, multi-cloud and hybrid cloud.
Many organizations adopt containers to ‘decompose’ monolithic applications into microservices, or to migrate software development and deployment to the cloud. Containerized software deployment can improve agility, promote efficiency, and reduce overhead. Containerization also lends itself to automated continuous integration and continuous delivery (CI/CD) pipelines, which are a foundation of DevOps development processes.
Containers offer the following advantages for teams developing and deploying software:
Flexibility
Containerized components can be spun up and shut down quickly, making it possible to perform rapid, rolling updates of software to ensure business continuity, without having to manage multiple dependencies. Because containers run software components and their dependencies in a contained, or isolated, process, each component of the application can be tested, deployed and updated separately, streamlining development workflows.
Speed
Containers accelerate development through the reuse of common components, reducing the time developers spend on configuration and resource provisioning. With more time to focus on coding, developers are more productive and benefit from a faster feedback loop. Combined with automation and container orchestration tools, containers let you simplify operations related to provisioning infrastructure, testing, and shipping code to production.
Portability
Containers can run reliably in any environment because they are abstracted away from the underlying infrastructure and OS. Regardless of where it is deployed—a public, private or hybrid cloud; a hosted server; on-premises; or even your laptop—a container will run consistently and the code will execute in the same way.
Resource Optimization
Because containers support greater density, you can run multiple containers on a single host. Because they abstract away the guest OS, containers are lightweight and require fewer CPU and memory resources than virtual machines, which need a full guest OS for each application. Container deployments need only a single OS for multiple applications, enabling better utilization of resources on the same machine and a higher density of containers per physical host.
Making the Move to a Containerized Deployment Model
Successful application migrations take into account both application requirements and the characteristics of container environments.
Approaches for Migrating Applications to Containers
There are three common approaches for migrating an application to a containerized model:
- “Lift and shift” migration involves porting an exact copy of an application, typically from on-premises to a cloud environment, to reduce the need for physical servers and take advantage of automated resource provisioning. This approach allows companies to incrementally move their development model to cloud native processes for new releases.
- Making small changes to an application to take advantage of the new distributed architecture, for example by switching from tightly integrated databases to cloud-based database services.
- Refactoring the application, typically rebuilding it using a microservices architecture. Each microservice can then be deployed in its own container.
Architecture Considerations
To make the best use of a containerized environment, applications need to be broken down into separate services so that they can be scaled and deployed individually. Determine whether your existing application can be split into multiple containers. Ideally, you should assign a single functional process, known as a “service”, to a single container.
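For example, a hypothetical shop application split into an API service and a background worker can run each service as its own container on a shared network, so the worker can be scaled without touching the API. Image and container names below are illustrative:

```bash
# Create a network so the services can reach each other by name
docker network create app-net

# One container per service; each can be deployed and updated independently
docker run -d --name api    --network app-net registry.example.com/shop-api:1.0
docker run -d --name worker --network app-net registry.example.com/shop-worker:1.0

# Scale only the worker service by starting another instance
docker run -d --name worker-2 --network app-net registry.example.com/shop-worker:1.0
```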
Performance
Determine whether your application has specific hardware requirements. Containers use Linux kernel features, such as namespaces and cgroups, to isolate processes and partition resources on a shared kernel. You may need to customize container parameters, and provision appropriate host machines, to ensure each container has enough hardware resources.
On the other hand, make sure containers have resource limits that prevent them from consuming too many system resources, which can degrade the performance of other containers on the same host.
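With the Docker CLI, for instance, you can cap a container's CPU and memory at launch; the limits below are arbitrary examples:

```bash
# Cap the container at 1.5 CPUs and 512 MB of memory so it cannot
# starve other containers on the same host
docker run -d --name myapp \
  --cpus=1.5 \
  --memory=512m \
  registry.example.com/myapp:1.2.0
```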
Security
Because containers share the host OS kernel, they are less isolated than VMs. Containers do not establish a strong security boundary on their own; without the right controls in place, they can actually increase security risk.
When moving to container deployment, a key consideration should be to follow security best practices and determine what additional security controls and steps are needed for your application.
Rather than simply accepting default settings for their container environment, developers and administrators should follow a set of baseline security practices (see the example that follows this list):
- Never grant containers root privileges on the host.
- Set strict limits on user and service accounts.
- Limit container access to only the resources and network connections needed for their function.
- Ensure that no passwords, credentials, or access keys are hard-coded or saved in container images.
- Follow the principle of least privilege for all access.
- Maintain defense in depth with image vulnerability scanning, network segmentation, role-based access controls, and runtime security.
- Protect Kubernetes clusters by separating secrets and passwords from container images.
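As a minimal sketch, several of these practices map directly to docker run flags; the user ID and image name below are placeholders:

```bash
# Run as a non-root user, drop all Linux capabilities, prevent privilege
# escalation, and mount the root filesystem read-only
docker run -d --name myapp \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  registry.example.com/myapp:1.2.0
```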
Security for containers is not something you can set and forget. Because new versions of container images are constantly released, and hundreds or thousands of new containers may be deployed to production environments every day, security should be an integral component of the development and deployment process.
It is essential to scan container images for vulnerabilities at build time, ensure that containers are deployed with the appropriate security configurations, and monitor containers at runtime to detect and respond to threats.
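One way to do this at build time is with an open source scanner such as Trivy, maintained by Aqua. The sketch below fails a CI build when serious vulnerabilities are found; the image name is a placeholder:

```bash
# Scan the image and return a non-zero exit code (failing the CI job)
# if any HIGH or CRITICAL vulnerabilities are detected
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:1.2.0
```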
Persistent Storage
In a containerized architecture, persistent data is typically not stored in the container’s writable layer. This is because containers are treated as immutable and ephemeral: they are frequently shut down and recreated from a container image. Instead, persistent data is stored on external, persistent volumes. This ensures persistent data exists independently of the container’s lifecycle, and keeps containers lightweight.
If your application currently writes files to multiple file system paths or file shares, consider modifying it so all persistent data is written to one file system path. This will make it easier to run the application in a container, by modifying the file path to point to an external persistent volume.
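For example, with Docker you can create a named volume and mount it at the application's single data path; the path and image name below are illustrative:

```bash
# Create a named volume that outlives any individual container
docker volume create myapp-data

# Mount the volume at the application's single persistent-data path
docker run -d --name myapp \
  -v myapp-data:/var/lib/myapp/data \
  registry.example.com/myapp:1.2.0
```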
Prepare Images for Multiple Environments
One of the major benefits of containers is that they can be used consistently in development, testing, and production environments. However, this requires planning images to accommodate all stages of the development process.
You can use environment-specific configuration variables in your images. For example, if you pass a build argument set to “production”, the image can be built without development dependencies and with security parameters appropriate for a production environment.
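As an illustrative sketch, assuming a Node.js application, a build argument can control whether development dependencies are installed; the same Dockerfile then produces images for each environment:

```bash
# Write an illustrative Dockerfile that switches behavior on a build argument
cat > Dockerfile <<'EOF'
FROM node:20-slim
ARG APP_ENV=production
ENV NODE_ENV=$APP_ENV
WORKDIR /app
COPY . .
# Skip development dependencies when building for production
RUN if [ "$APP_ENV" = "production" ]; then npm install --omit=dev; else npm install; fi
EOF

# Build one image per environment from the same Dockerfile
docker build --build-arg APP_ENV=production -t myapp:prod .
docker build --build-arg APP_ENV=development -t myapp:dev .
```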
Top Container Engines and Platforms
Here are some of the software tools most commonly used to run containers, manage them, and orchestrate them in production environments.
Docker
Docker is the world’s most popular container engine, and is the basis for a majority of containerized application deployments. It is open-source and supported by a large developer community. Docker is used in demanding production environments, but to run Docker containers effectively at scale, the Docker container engine must be combined with a container orchestrator like Kubernetes.
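A typical Docker workflow builds an image, tags it for a registry, and pushes it so that hosts or an orchestrator can pull it; the registry URL below is a placeholder:

```bash
# Build, tag, and publish an image to a registry
docker build -t myapp:1.2.0 .
docker tag myapp:1.2.0 registry.example.com/myapp:1.2.0
docker push registry.example.com/myapp:1.2.0
```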
Kubernetes
Kubernetes is a container orchestration system first released in 2014 by Google. It is an open source project managed by the Cloud Native Computing Foundation. You can use Kubernetes to automate the deployment, management, operations and scaling of large-scale containerized applications across clusters of host machines, both on-premises and in any public cloud.
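As a minimal example, the kubectl CLI can create a deployment, scale it out, and expose it as a service; the image name and ports are placeholders:

```bash
# Create a deployment, scale it to three replicas, and expose it as a service
kubectl create deployment myapp --image=registry.example.com/myapp:1.2.0
kubectl scale deployment myapp --replicas=3
kubectl expose deployment myapp --port=80 --target-port=8080
```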
As Kubernetes becomes a common infrastructure component used by organizations of all sizes, including for mission-critical workloads, Kubernetes security is becoming a major concern. Kubernetes has multiple layers – the control plane, worker nodes, pods, and containers – each of which must have secure configuration, and must be monitored and protected at runtime. Kubernetes environments are not compatible with most traditional security tools, and so many organizations are using dedicated, cloud native security tools.
Google Kubernetes Engine (GKE)
Google Kubernetes Engine (GKE) allows you to deploy, scale and manage containerized applications in a managed environment. It utilizes Google infrastructure, running Kubernetes on clusters of Compute Engine instances.
GKE clusters are based on a public version of open source Kubernetes, so you can take advantage of Kubernetes tools and commands to run your applications, and existing Kubernetes applications can easily be migrated to the service. GKE allows you to deploy software, adjust policies and monitor the health of your workloads.
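As a brief sketch, you can create a GKE cluster with the gcloud CLI, fetch credentials, and then use standard kubectl commands; the cluster name, zone, and image are examples:

```bash
# Create a three-node cluster and point kubectl at it
gcloud container clusters create my-cluster --num-nodes=3 --zone=us-central1-a
gcloud container clusters get-credentials my-cluster --zone=us-central1-a

# Deploy with ordinary Kubernetes tooling
kubectl create deployment myapp --image=registry.example.com/myapp:1.2.0
```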
Amazon Elastic Container Service (ECS)
Amazon Elastic Container Service (ECS) is a container management platform that allows you to run containerized applications on a cluster of Amazon EC2 instances. It provides full orchestration capabilities, and is an easy-to-use alternative to Kubernetes. However, unlike Kubernetes, it is not cloud agnostic, and runs only on AWS infrastructure.
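As a rough sketch with the AWS CLI, assuming a task definition named myapp-task has already been registered, you can create a cluster and run a service on it:

```bash
# Create an ECS cluster, then run three copies of the task as a service
aws ecs create-cluster --cluster-name my-cluster
aws ecs create-service \
  --cluster my-cluster \
  --service-name myapp \
  --task-definition myapp-task \
  --desired-count 3
```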
Related content: read our guide to AWS containers ›
Azure Container Instances
Azure Container Instances (ACI) is a service that allows developers to deploy containers directly to the Microsoft Azure public cloud without the need to provision virtual machines (VMs).
This service supports both Linux and Windows containers. There is no need to manage virtual machines or container orchestration platforms (e.g. Kubernetes). You can launch new containers through the Azure portal or Azure CLI, and Azure automatically configures and scales the underlying compute resources.
ACI supports the use of images from public container registries like Docker Hub, as well as the Azure Container Registry.
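For example, the Azure CLI can launch a container instance in a single command; the resource group, image, and sizing values below are placeholders:

```bash
# Launch a container directly on Azure, with no VMs or cluster to manage
az container create \
  --resource-group my-rg \
  --name myapp \
  --image registry.example.com/myapp:1.2.0 \
  --ports 8080 \
  --cpu 1 \
  --memory 1.5
```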
OpenShift Container Platform
OpenShift Container Platform is a platform-as-a-service (PaaS) solution mainly intended to run on private infrastructure, but can also be used in the public cloud. It runs on the Red Hat Enterprise Linux (RHEL) operating system. OpenShift uses Kubernetes orchestration to manage an array of Docker containers, so you can deploy and manage your containerized applications.
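As a brief example, the oc CLI can deploy a container image and expose it via a route; the cluster URL, project name, and image are placeholders:

```bash
# Log in, create a project, deploy an image, and expose it with a route
oc login https://api.cluster.example.com:6443
oc new-project demo
oc new-app registry.example.com/myapp:1.2.0 --name=myapp
oc expose service/myapp
```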
Securing Containers in Production with Aqua Security
Aqua’s advanced container security capabilities enable customers to protect container-based cloud native applications from development to production.
Container image vulnerability scanning
Aqua scans container images using a constantly updated stream of vulnerability data aggregated from multiple sources (CVEs, vendor advisories, and proprietary research), which ensures up-to-date, broad coverage while minimizing false positives.
Container assurance policies
Aqua lets you create flexible image assurance policies that determine which images are allowed to progress through your pipeline and run in your clusters or hosts, based on criteria such as vulnerability score or severity, malware severity, the presence of sensitive data, and the use of root privileges or super-user permissions.
Drift Prevention to enforce container immutability
Based on an image’s digital signature, Aqua enforces the immutability of containers by preventing changes to a running container relative to its originating image.
Prevention of run time exploits
Aqua vShield acts as a virtual patch to prevent the exploitation of a specific vulnerability and provides visibility into such exploitation attempts when containers are already running in production.
Enforce policies based on behavioral profiles
Aqua’s behavioral profiling uses advanced machine learning techniques to analyze a container’s behavior, creating a model that permits only the behaviors and capabilities observed during profiling.
Maintain workload firewalling
Aqua limits the “blast radius” of attacks by limiting container networking to defined nano-segments based on application identity and context.
Secrets Injection at run time
Aqua securely delivers secrets to containers at runtime, encrypted in transit and at rest, loading them in memory with no persistence on disk, where they are only visible to the container that needs them.