
What is Application Containerization? What is a Container? What are the benefits of using Containers?


Containerization is an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image. The containerized application can be tested as a unit and deployed as a container image instance to the host operating system (OS).

Just as shipping containers allow goods to be transported by ship, train, or truck regardless of the cargo inside, software containers act as a standard unit of software that can contain different code and dependencies. Containerizing software this way enables developers and IT professionals to deploy them across environments with little or no modification.

Containers also isolate applications from each other on a shared OS. Containerized applications run on top of a container host that in turn runs on the OS (Linux or Windows). Because containers share the host's kernel instead of bundling a full guest OS, they have a significantly smaller footprint than virtual machine (VM) images. Each container can run a whole web application or a service.

Containers also provide scalability: new containers can be created quickly for short-term tasks.

In short, containers offer the benefits of isolation, portability, agility, scalability, and control across the whole application lifecycle workflow. The most important benefit is the environment isolation provided between development (Dev) and operations (Ops).

In general, containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. Here’s what you need to know about this popular technology.

How does an Application Container work?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Application containers include the runtime components needed to do work; files, environment variables and libraries are packaged along with them. Application containers consume fewer resources than a comparable deployment on virtual machines, because containers share resources rather than requiring a full operating system to underpin each app. The complete set of information needed to execute in a container is called the image. The container engine deploys these images on hosts.
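
As a rough sketch of this idea, a hypothetical Dockerfile such as the following describes an image, packaging the runtime, the dependencies and the application code together (the file names, base image and tag are made up for illustration):

    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY app.py .
    CMD ["python", "app.py"]

Building this file produces the image, and running it instantiates a container:

    docker build -t myorg/myapp:1.0 .               # bake code, dependencies and configuration into one image
    docker run --rm -p 8080:8080 myorg/myapp:1.0    # run that image as an isolated container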

The most common app containerization technology is Docker, specifically the open source Docker Engine and containers based on universal runtime runC. Docker Swarm is a clustering and scheduling tool with which IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.
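
For example, with the Docker CLI a handful of commands turn a set of Docker hosts into a swarm and schedule a replicated service across it (the service name and image are illustrative):

    docker swarm init                               # make the current Docker host a swarm manager
    docker swarm join-token worker                  # prints the join command to run on additional nodes
    docker service create --name web --replicas 3 -p 80:80 nginx    # schedule three replicas across the cluster
    docker service ls                               # list services and their replica counts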

Application containerization works well with microservices and distributed applications, since each container operates independently of the others and uses minimal resources from the host. Each microservice communicates with the others through application programming interfaces (APIs), and the container virtualization layer can scale up microservices to meet rising demand for an application component and distribute the load. Much as server virtualization presents a set of physical resources as disposable virtual machines, containerization makes application environments disposable, which also encourages flexibility: for example, if a developer wants a variation from the standard image, he or she can create a container that holds only the new library.
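
A minimal sketch of this with Docker: each microservice runs in its own container, and the services discover each other by name over a user-defined network (the service names and images below are illustrative).

    docker network create shop-net                            # user-defined network with built-in DNS
    docker run -d --name catalog --network shop-net redis     # one microservice (a Redis-backed catalog)
    docker run -d --name frontend --network shop-net -p 8080:80 nginx    # another service, exposed to users
    # Inside "frontend", the other container is reachable simply by its name, e.g. catalog:6379.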

What is Docker?

Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud or on-premises. Docker is also a company that promotes and evolves this technology, working in collaboration with cloud, Linux, and Windows vendors, including Microsoft.

Docker image containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts and Linux images can run only on Linux hosts, where a host means a server or a VM.

Developers can use development environments on Windows, Linux, or macOS. On the development computer, the developer runs a Docker host where Docker images are deployed, including the app and its dependencies. Developers who work on Linux or on the Mac use a Docker host that is Linux based, and they can create images only for Linux containers. (Developers working on the Mac can edit code or run the Docker CLI from macOS, but as of the time of this writing, containers do not run directly on macOS.) Developers who work on Windows can create images for either Linux or Windows Containers.
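
If in doubt about which kind of containers a given Docker host runs, the daemon can be queried directly; the Go-template field names below are taken from the Docker CLI and should be verified against the installed version.

    docker info --format '{{.OSType}}'                            # prints "linux" or "windows" for the current Docker host
    docker version --format '{{.Server.Os}}/{{.Server.Arch}}'     # OS and CPU architecture of the daemon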

To host containers in development environments and provide additional developer tools, Docker ships Docker Community Edition (CE) for Windows or for macOS. These products install the necessary VM (the Docker host) to host the containers. Docker also makes available Docker Enterprise Edition (EE), which is designed for enterprise development and is used by IT teams who build, ship, and run large business-critical applications in production.

Application containerization benefits

– Efficiency. Containers use less memory, CPU and storage than traditional virtualization and physical application hosting.

– Portability. As long as the OS is the same across systems, an application container can run on any system and in any cloud without requiring code changes.

– Reproducibility. This is one of the main reasons why container adoption coincides with the use of DevOps methodology. Throughout the application lifecycle from code build through test and production, the file systems, binaries and other information stay the same — all the development artifacts become one image. Version control at the image level replaces configuration management at the system level.

– Isolation. Each app can be developed, run and tested as a separate entity, which means it can be created by different teams and different people, with different practices and methodologies, and the end result will still work with the other applications in the ecosystem.

– Scalability. You can scale out quickly by creating new containers for short-term tasks. From an application point of view, instantiating an image (creating a container) is similar to instantiating a process like a service or web app. For reliability, however, when you run multiple instances of the same image across multiple host servers, you typically want each container (image instance) to run in a different host server or VM, in different fault domains.
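
With Docker Swarm, for instance, that pattern looks roughly like this (the service name and node label are illustrative, and the nodes are assumed to carry a "zone" label):

    docker service create --name web --replicas 2 \
        --placement-pref 'spread=node.labels.zone' nginx    # spread replicas across nodes in different zones
    docker service scale web=10                             # scale out quickly for a burst of short-term work
    docker service scale web=2                              # scale back in when demand drops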

Container Alternatives to Docker

According to some data, Docker accounts for 83% of the containers in use, but that number was 99% just three years ago.

Other teams have created alternative container runtime environments, which are steadily gaining ground as the market continues to evolve and diversify.

Docker is still the best known container technology out there, but it’s not the only option. The following are some alternative container runtimes.

CoreOS rkt

In 2018, 12 percent of production containers were rkt (pronounced “Rocket”).

rkt is an application container engine developed for modern production cloud-native environments. It features a pod-native approach, a pluggable execution environment, and a well-defined surface area that makes it ideal for integration with other systems.

The core execution unit of rkt is the pod, a collection of one or more applications executing in a shared context (rkt’s pods are synonymous with the concept in the Kubernetes orchestration system). rkt allows users to apply different configurations (like isolation parameters) at both pod-level and at the more granular per-application level. rkt’s architecture means that each pod executes directly in the classic Unix process model (i.e. there is no central daemon), in a self-contained, isolated environment. rkt implements a modern, open, standard container format, the App Container (appc) spec, but can also execute other container images, like those created with Docker.
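
A brief sketch of that workflow, assuming rkt is installed and reusing a stock image from Docker Hub (the --insecure-options flag skips signature checks for Docker images and is shown only for illustration):

    rkt fetch --insecure-options=image docker://nginx      # convert a Docker image and cache it locally
    sudo rkt run --insecure-options=image docker://nginx   # launch a pod; rkt itself runs as a plain process, no daemon
    rkt list                                               # show pods and the applications inside them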

Since its introduction by CoreOS in December 2014, the rkt project has greatly matured and is widely used. It is available for most major Linux distributions and every rkt release builds self-contained rpm/deb packages that users can install. These packages are also available as part of the Kubernetes repository to enable testing of the rkt + Kubernetes integration. rkt also plays a central role in how Google Container Image and CoreOS Container Linux run Kubernetes.

Mesos Containerizer

The Mesos Containerizer provides lightweight containerization and resource isolation of executors using Linux-specific functionality such as control groups (cgroups) and namespaces. It is composable, so operators can selectively enable different isolators.

It also provides basic support for POSIX systems (e.g., OSX) but without any actual isolation, only resource usage reporting.

A potential downside is that you can’t run these containers standalone; in other words, you need the Mesos framework to run them.
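
In practice, operators pick the isolators when starting an agent; the set below is only illustrative, and the available isolators depend on the Mesos version and host kernel.

    # Start a Mesos agent using the Mesos containerizer with selected Linux isolators.
    mesos-agent --master=zk://zk.example.com:2181/mesos \
                --containerizers=mesos \
                --isolation=cgroups/cpu,cgroups/mem,namespaces/pid,filesystem/linux \
                --work_dir=/var/lib/mesos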

LXC Linux Containers

LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Current LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user)
  • AppArmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • CGroups (control groups)

LXC containers are often considered something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.
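
With the lxc-* command-line tools, for example, creating and entering a system container takes only a few steps (the distribution, release and container name below are illustrative):

    lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64   # create a container from a downloaded image
    lxc-start -n demo                                               # start it; it shares the host's kernel
    lxc-attach -n demo                                              # open a shell inside the container
    lxc-ls --fancy                                                  # list containers and their state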

OpenVZ

OpenVZ is an OS-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers or virtual private servers (VPS). Each OpenVZ container holds a complete operating system userland, so several different systems can be installed side by side, each in its own container. OpenVZ containers do not run their own kernels; they all share a common patched host kernel (the vzkernel), which means OpenVZ cannot run on a stock mainline Linux kernel.
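
A rough sketch with OpenVZ's vzctl tooling (the container ID, OS template and IP address are illustrative, and the available templates depend on what is installed on the host):

    vzctl create 101 --ostemplate centos-7-x86_64   # create a container from a whole-OS template
    vzctl set 101 --ipadd 192.168.0.101 --save      # assign an IP address and persist the configuration
    vzctl start 101                                 # start the container
    vzctl exec 101 cat /etc/os-release              # run a command inside it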

With OpenVZ’s focus on containers for whole operating systems, a disadvantage is that it is not ideal for single applications. There is no CRI or Kubernetes integration yet. Word is that OpenVZ 7, the latest version, is not yet as stable as its predecessor, OpenVZ 6.

Containerd

Containerd is described as “an industry-standard container runtime with an emphasis on simplicity, robustness and portability.” As of February 28, 2019, containerd is officially a graduated project within the Cloud Native Computing Foundation, following Kubernetes, Prometheus, Envoy, and CoreDNS.

Containerd supports OCI images, exposes its management API over gRPC, and comes with many container lifecycle management features.
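
containerd ships with a low-level client, ctr, that exercises these lifecycle operations directly (the image and container names below are illustrative; higher-level tools such as Docker or Kubernetes normally drive containerd instead):

    sudo ctr images pull docker.io/library/redis:alpine         # pull an OCI image into containerd's store
    sudo ctr run -d docker.io/library/redis:alpine redis-demo   # create a container and start its task
    sudo ctr tasks ls                                            # list running tasks
    sudo ctr tasks kill redis-demo                               # stop the task
    sudo ctr containers delete redis-demo                        # remove the container record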

Considerations when choosing a platform for containerization

Developers should consider the following:

  • Application architecture: focus on the application architecture decisions that need to be made, such as whether the applications are monolithic or microservices, and whether they are stateless or stateful.
  • Workflow and collaboration: consider the changes to the workflows and whether the platform will enable them to easily collaborate with other stakeholders.
  • DevOps: consider the requirements for using the platform’s self-service interface to deploy apps through the DevOps pipeline.
  • Packaging: consider the formats and tools used to package the application code, its dependencies, and the containers themselves.
  • Monitoring and logging: ensure that the available monitoring and logging options meet their requirements and work well with their development workflows.

IT operations should consider:

  • Architectural needs of applications: ensure that the platform meets the architectural needs of the application as well as the storage needs for stateful applications.
  • Legacy application migration: the platform and tooling around the platform must support any legacy applications that have to be migrated.
  • Application updates and rollback strategies: work with the developers to define application update and rollback strategies that meet the service level agreements.
  • Monitoring and logging: put plans in place for the right infrastructure and application monitoring and logging tools to collect a variety of metrics.
  • Storage and network: ensure that the necessary storage clusters, network identities and automation to handle the needs of any stateful applications are in place.

Conclusion

Docker is certainly a popular runtime for today’s containers and is probably not going anywhere for some time. That said, its dominance may be dwindling as other containerization methods are refined for specific environments. If the use of non-Docker containers surges, it could have a ripple effect on the tooling industry built around the Docker platform.

Now that the Open Container Initiative (OCI) has entered the field, we will likely see this body lead the standardization and evolution of container technology. When choosing the right container tool, engineers should consider OCI compliance, along with portability, community activity and adoption numbers, as indicators of robustness and future stability.

Links:

https://www.cio.com/article/2924995/what-are-containers-and-why-do-you-need-them.html

https://searchitoperations.techtarget.com/definition/application-containerization-app-containerization

https://www.docker.com/resources/what-container

https://cloud.google.com/containers/

https://www.sumologic.com/insight/microservices-architecture-docker-containers/

https://containerjournal.com/topics/container-ecosystems/5-container-alternatives-to-docker/

https://www.opencontainers.org/

https://containerd.io/

http://mesos.apache.org/documentation/latest/mesos-containerizer/

https://linuxcontainers.org/lxc/introduction/

https://openvz.org/

https://coreos.com/rkt/

https://www.interserver.net/tips/kb/beginners-guide-openvz/

Bibliography

De la Torre, César; Wagner, Bill; Rousos, Mike. .NET Microservices: Architecture for Containerized .NET Applications. v2.1 Edition. Redmond, Washington, USA. 2018.
