
Thinking Containerization? Discover its Potential & Know Why it Matters

Three decades ago, when the HTTP protocol was being developed, no one believed it would bring such a massive revolution to the world of computer science and technology. With the development of HTTP and web frameworks, the entire world entered an era of true globalization.

Developers of web-platform languages such as PHP, JavaScript, Java, ASP and ASP.NET were in great demand. Since then, the web platform and browser-based technologies have never looked back. We have witnessed a massive evolution of languages and web frameworks, and the developer community of the web ecosystem has grown exponentially. However, the deployment and infrastructure side of the web ecosystem received far less attention and largely remained the same over that period. Developers and IT professionals had to spend sleepless nights figuring out software installation, OS versions, patches, framework dependencies and security issues.

Journey to virtualization

Everything in application development revolves around the infrastructure. State-of-the-art, enterprise-grade hardware is extremely expensive, and procuring such an extensive setup requires a lot of planning and approvals.

Industry challenges with bare metal hosting

Though bare metal deployment provides blazingly fast performance, as business applications mature they attract a significant customer base, generate a lot of data and demand ever more resources such as computing power, memory and disk space. On the other hand, some applications carry out routine back-office tasks, and the provisioned hardware remains mostly underutilized.

To sum up, below are the key challenges faced by the industry.

· Highly expensive (Capex)

· Upgrade limitations (at a certain point, hardware upscaling becomes stagnant)

· Underutilized hardware

· Isolation of application is expensive

· Unfortunate events such as fire or hardware failure result in huge business losses

· Provisioning and hardware setup is time-consuming and may take several days

· Needs a dedicated power supply, cooling and security

Virtualization became the saviour of the IT industry, not only saving millions of dollars but also helping businesses maintain their applications and drive growth. With hardware virtualization, businesses became more robust and predictable. Here are some significant benefits of virtualization over bare metal hosting:

· VMs (virtual machines) created using virtualization software (e.g. Oracle VirtualBox) saved companies millions of dollars in infrastructure spending by allowing several virtual machines to be deployed on one server.

· Application isolation becomes easy and cost-effective.

· With the multi-tenant model, you can deploy several applications and hence effectively utilize the hardware resources.

· With VM snapshots, you can quickly redeploy VMs on a different server even if the hardware fails.

· Provisioning and booting up new VMs is a matter of minutes to hours rather than days.

A brief history of application deployment

Until a decade and a half ago, when Amazon Web Services introduced cloud computing to the world, the journey of a web application from a developer's machine to the production server followed a fairly common pattern. It was mostly clunky, full of emotions, eventful, chaotic, time-consuming and quite unpredictable.

When a developer sets up an application, many infrastructural and external actors play an equally important role in running it: CPU, memory, hard disks, networking, OS, runtimes, framework libraries, package dependencies, and so on. Knowingly or unknowingly, these key dependencies are mostly taken for granted during the development phase.

As the application journeys towards production, these dependencies become critical and sometimes create bottlenecks. The entire focus and energy shifts from the application itself to managing these external dependencies, to ensure there is no impact on other running applications.

An era of cloud-native applications

Modern applications today are built with a cloud-native mindset. Cloud-native is all about changing the way you think about constructing critical business systems. These systems are designed to embrace rapid change, scale quickly, and be resilient. To meet the ever-changing customer demands, businesses need both speed and agility to respond swiftly to complex needs without impacting the performance and overall health of the application.

The main challenge with legacy monolith applications is that they have become extremely complex and bulky. Making even a small change or adding a feature to these applications requires an intensive approach, exhaustive planning, measuring the impact on other parts of the application, and a deployment window to promote the change to the live environment. To counter such challenges, cloud computing brings new paradigms, patterns and practices, along with various productivity tools, to assess applications and their dependencies and make them cloud-portable. It is up to each business to plan and start working on these aspects before they become a pressing issue.

What is Containerization and why it matters

If your business is looking to modernise a legacy monolith application into a scalable and highly available distributed system by refactoring it into microservices, it means you are going to break the application into smaller modules.

Let’s assume you have broken your monolith application into 5 smaller modules. Now you have 5 apps to develop, test and deploy, and it doesn’t stop there: you will need to provision more environments to deploy and test these applications. Even with a conservative approach, let’s do the capacity planning for this modern application.

· Integration Environment (QA servers): 1 VM × 5 apps = 5 VMs

· UAT Environment: 2 VMs × 5 apps = 10 VMs

· PROD Environment with High Availability and Disaster Recovery (HADR): 4 VMs × 5 apps = 20 VMs

So, roughly 35 general-purpose VMs will be required for this application; the short sketch below tallies the same numbers.
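
If you prefer code to bullet points, here is a tiny Python sketch of the same back-of-the-envelope calculation. The environment names and per-app multipliers are simply the assumptions from the example above, not a general sizing rule:

```python
# Capacity plan for a monolith split into 5 services.
apps = 5
vms_per_app = {
    "Integration (QA)": 1,  # 1 VM per app
    "UAT": 2,               # 2 VMs per app
    "PROD with HADR": 4,    # 4 VMs per app
}

total = 0
for env, per_app in vms_per_app.items():
    count = per_app * apps
    total += count
    print(f"{env}: {count} VMs")

print(f"Total: {total} VMs")  # -> Total: 35 VMs
```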

Issues with virtual machines to isolate workloads

· A virtual machine contains a full-blown OS, which takes up considerable disk space.

· A full-blown OS needs more time to boot and get applications up and running.

· Non-equitable or poor distribution of CPU resources.

· Many companies offer VM virtualization, but unfortunately the snapshot formats are not standardized. Though tools are available to convert between formats, the process is tedious and intensive.

· With an enterprise OS, licensing costs are significant.

· Most of the time an application doesn’t need the resources of a full-blown OS, meaning we pay for CPU allocation when there is no need.

Understanding containers

Just as the shipping industry uses physical containers to isolate different cargo for transport on ships and trains, software development increasingly uses an approach called containerization.

· A standard package of software known as a ‘container’ bundles an application’s code together with the related configuration files, libraries and dependencies required for the app to run.

· dotCloud (the company behind Docker) was in the business of hosting apps and was looking for an efficient way to package and deploy workloads. As a hosting business, you can offer a more competitive product if you can pack more apps onto a single server than your competitors can.

Imagine an app that reads from and writes to the file system, and suppose you start three instances of it in the background (launching each with a trailing `&` in the shell), so all three run concurrently. Without proper isolation, the apps may overwrite each other’s files.
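
To make the clash concrete, here is a minimal Python sketch that simulates it; the shared file name and loop counts are hypothetical, and the sleep is only there to widen the race window:

```python
import multiprocessing
import time

SHARED_FILE = "counter.txt"  # hypothetical file that all instances share

def app() -> None:
    # Each instance reads the shared file, "works" for a moment, then
    # writes the result back, blindly overwriting whatever the other
    # instances wrote in the meantime (a classic lost update).
    for _ in range(100):
        try:
            with open(SHARED_FILE) as f:
                value = int(f.read())
        except (FileNotFoundError, ValueError):
            value = 0
        time.sleep(0.001)  # simulate work; widens the race window
        with open(SHARED_FILE, "w") as f:
            f.write(str(value + 1))

if __name__ == "__main__":
    # Launch three instances "in the background", like running `app &` three times.
    procs = [multiprocessing.Process(target=app) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(SHARED_FILE) as f:
        # With perfect isolation each instance would count to 100 in its
        # own file (300 total); sharing one file loses most of the updates.
        print("final value:", f.read())
```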

So, is there any way to let processes pretend they are dealing with a dedicated file system? This is where dotCloud shone. The same trick applies to other resources: even if you have a single network interface, a process could use part of it while pretending it has a full network interface to itself.
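
As a rough illustration of this “restricted view” idea, the sketch below uses chroot (an older, simpler Linux primitive than the mount namespaces containers actually use) to give a child process its own private view of the file system. It assumes Linux and root privileges, and the jail path is hypothetical:

```python
import os

jail = "/tmp/appjail"
os.makedirs(os.path.join(jail, "data"), exist_ok=True)

pid = os.fork()
if pid == 0:
    # Child: from its point of view, "/" is now /tmp/appjail, so it can
    # read and write files without ever touching the host's real root.
    os.chroot(jail)
    os.chdir("/")
    print("confined app sees:", os.listdir("/"))  # only ['data']
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("host still sees:", os.listdir("/"))    # the real root file system
```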

Containers existed in the Linux world well before Docker came into existence. However, Docker offered a better abstraction over the low-level kernel primitives, namespaces and control groups (cgroups), making it convenient to package processes into bundles called container images. dotCloud developed a mechanism to define how many resources a process can use and how to partition them.
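
To give a feel for the control-groups half of that mechanism, here is a minimal sketch of the cgroup v2 file interface on Linux. It assumes root privileges and cgroup v2 mounted at /sys/fs/cgroup; the group name is hypothetical, and on some systems the memory controller must first be enabled via the parent group’s cgroup.subtree_control:

```python
import os

# Create a new control group (just a directory in the cgroup file system).
group = "/sys/fs/cgroup/demo-app"
os.makedirs(group, exist_ok=True)

# Cap every process in the group at 100 MiB of memory...
with open(os.path.join(group, "memory.max"), "w") as f:
    f.write(str(100 * 1024 * 1024))

# ...then place the current process (and its future children) in the group.
# From now on the kernel enforces the limit on everything in the group.
with open(os.path.join(group, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```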

To conclude, running containers doesn’t require as many resources as virtual machines, since there is no hardware to virtualise. Containers are processes with a restricted view of the operating system, so they have low overheads. All containers share the same kernel, unlike VMs, where each VM has its own kernel. Container startup time is in the order of milliseconds, which makes containers a fast, secure, scalable and agile way forward.

References:

https://azure.microsoft.com/en-us/overview/what-is-a-container/

https://learnk8s.io/

Authored by Mohit Dharmadhikari, Technical Architect, dentsu World Services.
