Container Portability – What is so great about it?


July 24, 2019 - by Spruha Pandya

There is a lot of talk around containers and the benefits they bring to the table. Lightweight, agile, portable, fast, efficient…the list goes on and on. Portability is one of the most prominent traits of containers, and it is a necessary feature in today’s software world. Applications that can run on only one type of host server or software environment no longer meet business needs: they lack agility and are harder to maintain because of the constraints they place on upgrading software and hardware.

Containers offer a new approach to building, shipping, and running applications by isolating them at the operating-system level. A container packages an entire runtime environment into one unit: the application along with all its dependencies, libraries and other binaries, and the configuration files needed to run it. This brings forth the ability to “build once and run anywhere.” With containers, the code is compiled and placed into a container image, and that image can then be deployed on any type of host environment. As all the application artifacts are stored within the container, the environment outside the container shouldn’t impact the application. This way, portability comes naturally to containerized applications.
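
As a minimal sketch of that workflow (the image name and registry here are hypothetical), the “build once and run anywhere” cycle fits into a few Docker commands:

```sh
# Build the image once from the app source and its Dockerfile
docker build -t registry.example.com/myapp:1.0 .

# Push it to a shared registry
docker push registry.example.com/myapp:1.0

# On any other Docker host: pull and run the identical image
docker pull registry.example.com/myapp:1.0
docker run -d -p 8080:8080 registry.example.com/myapp:1.0
```

The image, not the host, carries the application and its dependencies, which is why the last two commands work unchanged on any machine running Docker.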

With the level of portability containers bring to the table, the application becomes independent of the host environment and can be transferred to another host and executed without compatibility issues. This decouples app development from app deployment. Developers can focus on the business logic of applications without worrying about the infrastructure needed to run them, while the IT operations team can craft suitable container images that let the application run smoothly in various environments. Eventually, this results in faster time to market.

In short, by enabling portability, containers not only ease app development and app deployment but also bring several benefits from a business perspective. However, container portability has its own roadblocks. The significant ones relate to Linux distribution incompatibilities, commercial licenses, security, and networking. Porting containers from one family of OS to another is also complicated. For enterprises with complex IT ecosystems that leverage software from multiple external vendors, this can be a major roadblock. VMs, on the other hand, do not fall short on this front: a Linux-based VM can typically run on a Windows host, and vice versa. For such cases, a hybrid app architecture, where containers run on VMs, is used. To overcome this shortcoming, Docker launched LinuxKit, which allows developers to create a “Linux subsystem” within containers. This subsystem provides a tiny Linux-based OS that runs within the container itself and enables the container to run on an OS different from its own, thus achieving a greater level of portability.

Considering all the factors related to container portability, it goes without saying that Docker containers do an amazing job.

Self Service — The Key to Unlock DevOps


July 11, 2019 - by Spruha Pandya | appeared on HackerNoon

 

Enterprises today are dedicating a lot of time and resources to ensuring that the apps they build are the best version of themselves in every way. Meanwhile, developers and IT Ops together are trying to establish a continuous app delivery pipeline that enables effortless app deployment on demand. Yet, not all teams are able to achieve continuous delivery.

Today, there is more pressure than ever for apps to move faster and be more agile without compromising on security or reliability, which makes enterprises look for alternative approaches to software development. Apart from adopting modern app architectures, it is essential to change the workflow as well, because the old methods of app development cannot be relied upon to speed up app delivery. That is why enterprises all around the world are trying to adopt the DevOps approach. DevOps is the blending of the development and IT operations teams in order to enhance the app delivery process and keep a continuous delivery pipeline in place.

DevOps adoption is not a one-size-fits-all exercise. A DevOps workflow cannot be implemented by assigning the responsibility to just one professional; DevOps includes everyone who has a stake in the app development and delivery process, and everyone gets involved early in the collaborative process. Achieving success with DevOps starts with understanding the key requirements of both the application and the enterprise. Thus, DevOps is not just a role — every individual in the team needs to be involved in the process, with clear responsibilities, for it to work effectively.

DevOps is a culture, a movement, a philosophy — A way to bring together the best of software development and IT operations.
The DevOps workflow

Challenges of DevOps at Scale

Organizations today are dealing with a large number of applications, devices, and data introduced by cloud, mobile, big data, and the Internet of Things (IoT). With the advent of automation and continuous everything, the situation is only likely to get more challenging. Enterprises therefore stress having a clear DevOps strategy so that the workflow moves smoothly. But, as DevOps is the result of efficient synchronization between developers and the IT team, making it work at scale can be met with many glitches. Here are a few common issues generally faced by teams in the DevOps workflow:

  • Too many interruptions
    While working with two or more teams, there are bound to be interruptions. In IT operations, planned and unplanned work coexist by design. Engineering tasks like analyzing new technologies to deliver new platforms, building suitable environments for applications, and working on scaling and performance are ongoing. But alongside them come constant interruptions from incidents that require immediate attention: security events, customer requests, and sudden outages. All such interruptions are time-sensitive and demand the team’s immediate attention. It may be impossible to have a separate team just for addressing such interruptions, given the uneven distribution of knowledge and skills as well as limited resources.
    Thus, every team ends up juggling all types of work at one point or another. This, in turn, delays the tasks related to app delivery and extends deadlines.

  • Silos destroying workflow
    Most organizations trying to implement a DevOps workflow have teams working in silos. Silos create segregations based on work specialization, and these divisions eventually become barriers. In theory, silos appear to be a well-organized system where each team handles tasks according to its capabilities and passes on the rest to other teams, i.e., creating request queues. But in practice, these request queues can at times be the death of steady workflows. As discussed before, interruptions are unavoidable in every team, and the delay effect of interruptions is cumulative in linear workflows.

  • Longer waiting time as tasks are codependent
    Each team in a silo depends on other teams to complete tasks, and every team has its own task priorities. As a result, each team loses time waiting for another team to do something for it, and vice versa. Unless a team is an expert at switching context and taking on tasks assigned to other teams to keep workflows moving, a lot of time is inevitably lost waiting around. Delays slow down the feedback loop, and a slow feedback loop leads to more errors. Delays also increase the likelihood of a change in the context of the work, causing further issues. In short, even small delays can compound into big problems.

Enabling Smooth DevOps at Scale

These glitches can be difficult to resolve, as there is no straightforward solution to any of them. That is why organizations are leaning towards self-service DevOps. But to enable manual self-service DevOps, one either needs to be well-versed in REST, ESB, SOA, SOAP and all the context switching involved, or else rely completely on IT teams to build each integration beforehand. Either option is tough to achieve but can work wonders for enabling continuous delivery. Considering the gaps in the workflow, there has to be a system in place that can enable self-service app delivery.

Imagine if there could be a self-service kiosk for the purpose, just like a bank’s Automated Teller Machine. There are two contributing parties in the operation of an ATM: the bank and the customer. Customers can perform simple tasks like withdrawing or depositing money all by themselves, without contacting bank executives. Similarly, if there were a kiosk-like system where IT operators could predefine and build all the required app delivery artifacts, such as preconfigured environment profiles, containerized app stacks, and the choice of target infrastructure, developers could add their specifications to the suitable integration and deploy apps on their own, without having to contact the IT teams.

Consider a modern application with a microservices architecture comprising X services. Chances are there will be more than one microservice behind each service. A separate team of developers works on each microservice, and each team naturally has a different work pipeline. The output of each development pipeline is dumped on IT’s table, leading to a classic siloed workflow. But if there were a way to give the IT team an independent work pipeline, wouldn’t that simplify it all?

 

DevOps now vs self-service DevOps

 

This is what self-service DevOps can offer. Self-service DevOps is a way to enable developers to deploy applications on demand all by themselves, because the IT team has prepared everything beforehand at its own pace. This way, there are minimal interruptions and delays in app delivery, thus achieving a state of continuous delivery.

In short, enabling self-service DevOps brings forth the following benefits:

  • Drastically cut cycle times
  • Accelerate releases
  • Reduce application backlogs and other bottlenecks
  • Reduce operational costs and increase operational efficiency

The idea of having a virtual self-service kiosk for software delivery can actually be implemented by either having a process in place or having an app delivery platform that can offer such functionalities. A well-built app delivery platform can streamline the CI/CD pipeline for an application of any scale. The idea is to have a predefined app delivery process in place that can enable any enterprise to unlock the benefits of DevOps with ease.

4 ways containers help enterprises save money


June 20, 2019 - by Spruha Pandya

Software containers, with all the benefits they have to offer, have become quite popular with technical folks. Container technology is said to be revolutionizing the deployment and management of applications in multi-node distributed environments at scale. Apart from offering a lightweight infrastructure for applications to run on a cloud, containers help make the development process easier and more agile. Additional benefits include the ability to push out updates faster, immense and smooth scaling of applications, improved developer productivity, and the ability to build apps just once and run them on any platform.


One of the major advantages is cost saving, and that has lured many enterprises into shifting their infrastructure to Docker containers. Containers offer cost benefits on various fronts when compared to a legacy environment.

How do containers save costs?
  • Unlike VMs, containers are free and open source: The core code of Docker containers is open source and freely available for everyone to use, whereas VMs cost a lot of money to run at large scale. There are commercial implementations of containers available, just as there are for VMs. But considering the overall costs of running VMs versus running containers, containers cost less, as one does not have to pay anything to use them.
  • Low licensing costs: VMs require duplication of the OS, and thus separate licensing for each, while containers share a common OS and need a single OS license for all of them. The container environment does not waste system resources duplicating operating-system functions, so all resources can go towards running the actual app. Ultimately, this means more apps can run on fewer physical servers when using containers.
  • Lower maintenance costs: Containers facilitate environment parity, which ensures consistency across all environments – development, testing, and production. Such consistent environments are easier to maintain with a small team. Containers also act as the best wrapping for microservices, offering better isolation of services. This makes it easier for developers and testers to spot bugs and isolate faults, preventing major bugs that would be costly to fix.
  • Paying only for what you use: Containers can scale in response to fluctuations in application demand. So, when hosting containers in a cloud environment, one pays only for the resources the app consumes. Instead of maintaining a cloud infrastructure with virtual servers and paying for server capacity that may or may not be used at all times, containers help build a cloud environment where expenses scale with service needs (see the sketch after this list).
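
As a hedged illustration of scaling with demand, here is a minimal Kubernetes HorizontalPodAutoscaler (the deployment name myapp is hypothetical) that grows and shrinks the number of running containers with CPU load, so resource consumption tracks what the app actually needs:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2        # baseline capacity paid for at all times
  maxReplicas: 10       # ceiling reached only under peak demand
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU crosses 70%
```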

This is how containers help enterprises save costs of app development and delivery. For enterprises with large scale applications, these factors play a major role in cutting down costs in app development and maintenance.

Speeding up software delivery with containers


June 20, 2019 - by Spruha Pandya

Containers have been the talk of the software development world for a few years now. At present, over 26% of environments use containerized applications, and the number is expected to double by 2021. The key appeal of containers is delivering quality software within a comparatively short time span. Containers have played a major role in transforming the app delivery process.

How containers impact app delivery
  • Containers offer secure and reliable environments: Containers isolate applications from each other within the operating system, with isolated access to the relevant data and network. Multiple containers can run on the same server, sharing its resources or distributed across resources. This enables developers to work on separate segments of the application at once and deliver updates faster. Additionally, as every microservice has an isolated environment, technical issues can be located and resolved without causing downtime across the whole application.
  • Simplified system operations: Placing an application within a container seals it along with its environment. Nobody can access or change anything inside the container, so one does not need to write individual installation instructions for different system operators. No changes are needed to run the application in a test or a production environment, and there are no application-specific differences. System operations become uniform and simple – thus, less time is spent on delivery and maintenance.
  • Improved DevOps processes: DevOps is meant to ease friction points in the software delivery cycle, and containers add flexibility that makes it all even simpler. As containers run independently and do not rely on the underlying host OS, teams can use different frameworks or platforms. The choice can depend entirely on what the application needs or what the team is most comfortable using, since containers can host any type of application irrespective of platform. When teams work with the platform and processes they are comfortable with, app delivery is faster and more effortless.
  • Solid version control and reliable deployments: Once the application is wrapped in containers, each deployment is identical – operating system, software packages, versions, and configuration. One can also version-tag every deployment, making it easy to differentiate between versions of the app without maintaining a configuration management database. No matter how often a deployment is done, or by whom, reproducibility is 100%. Environment-specific and version-specific errors are no longer a concern.
  • Containers are disaster-proof: When an application runs on containers, container images form the basic building blocks of the entire application, and all these images are stored in a repository. In the event of a major disaster on the cloud, one can always pull the images from the repository and re-run the application anywhere. Containers thus offer a built-in disaster recovery function, which saves a lot of recovery time in the face of a major failure (see the sketch after this list).
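
A hedged sketch of the last two points using plain Docker commands (the registry and image names are hypothetical):

```sh
# Tag every build with an immutable version so each deployment is identifiable
docker build -t registry.example.com/shop/api:2.4.1 .
docker push registry.example.com/shop/api:2.4.1

# After a failure, recover by pulling the exact same image on any host
docker pull registry.example.com/shop/api:2.4.1
docker run -d --name api -p 80:8080 registry.example.com/shop/api:2.4.1
```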

Despite container technology being only a few years old, it is remarkably stable and reliable. Many enterprises have been able to scale and expand exponentially by shifting to containers. In short, containers simplify every aspect of software delivery and thus enable enterprises to deliver quality applications at a faster pace. The delays that legacy systems caused have been eliminated by container infrastructure.

Turning five in tech is no joke and you’ve done it in style, Kubernetes!


June 6, 2019 - by admin

Happy birthday to you!

We remember early 2014 as an interesting time at HyScale, as we were trying to solve the deployment and management of hundreds of containers on top of VMs. We took up the complex task of building a container orchestrator with several proprietary optimizations such as hibernation and passivation. By June we were off to a promising start, but it came with the realization that we didn’t have to solve orchestration anymore. Kubernetes had entered the space.

At that point, we went back to the drawing board, embraced Kubernetes with all its uncertainties, took our chances, and, fast forward to 2019, built an application delivery platform for k8s. As for Kubernetes, just as we predicted, it has emerged as the de facto orchestrator and container hosting platform.

For us, the benefits of Kubernetes are many, and we find two prominent pursuits within it: Dev/DevOps teams looking to push new features of modern applications to market faster through CI/CD built for Kubernetes, and enterprise IT teams that want to extend container and Kubernetes benefits to traditional applications. We believe the latter is still untapped and holds great promise.

For us at HyScale, 5 years of Kubernetes marks our own journey so far in helping enterprises successfully achieve both these pursuits. And that’s how Kubernetes has enabled an entire community of players like us to benefit enterprises.

It has been a hell of a ride so far, and here is a snapshot of Kubernetes adoption in 2018-19. We are excited to see where this road takes us!

Eliminate Friction in App Delivery to Kubernetes


May 7, 2019 - by Anoop Balakuntalam | appeared on thenewstack

The last two decades have seen a sea change in the way software is written and delivered. Waterfall to iterative to agile, native to hybrid to responsive interfaces, monoliths to microservices, installed to pay-as-you-go SaaS, data centers to private and hybrid clouds, and virtual machines to containers. As the market constantly evolves, enterprises are facing a ton of choices with increasing complexity.

When microservices came along to break up huge monoliths into manageable parts, it meant teams still had to manage those individual parts, each with their own lifecycle. Containers (you know where that metaphor came from) solved some of that complexity by packaging each part and all dependent information into self-contained, repeatable units. This is where our story begins.

With the advent of containers as an economical, portable solution, software began to be packaged along with its stack dependencies as an immutable “container image.” Once you have an image, you can launch any number of identical containers. This guarantees outcomes that are predictable and consistent, overcoming the familiar software challenge — the “works for me” syndrome.

The Rising Need for Automation

Microservices and containers typically go hand in glove. With container images being “immutable,” any change to the service code or its dependencies entails building a new image: refreshing images built from the new code and replacing the relevant containers. In an enterprise with hundreds of apps, each containing multiple services, this means hundreds of new images & container replacements in every cycle of change. This demands a massive collaborative effort between developers and IT/DevOps. The situation is in desperate need of automation and process standardization.

Automation would help enterprises auto-containerize hundreds of services without having to write manual scripts, custom code or lots of configuration files. It would also lead to process standardization in order to prevent a proliferation of custom methodologies across teams and to reduce dependence on specific people for know-how.

With Kubernetes (k8s) emerging as the de facto container orchestration platform for hosting containers, good process and automation would also provide a clean hand-off interface between those managing the k8s cluster/infrastructure and the app teams delivering to the cluster. Only then can app teams develop, test & deliver apps with the least amount of friction, enabling rapid updates of enterprise applications while also helping the enterprise realize the benefits of containers and k8s.

The Cause of Friction in Enterprises

Container DevOps is becoming all about ensuring frictionless delivery, and a few overarching goals must be met for this to become a reality. Foremost is speed. If a dev team’s need for a new test environment involves filing a ticket and waiting a day or two, that’s friction. The lack of a well-defined hand-off interface between application & IT teams, through which information can be shared and collated in an error-free way, also leads to friction. If making changes to a couple of services and deploying them to a staging or production environment involves manual intervention and takes hours or a day, making it an event, that’s friction too.

Eliminating Friction for a Seamless Delivery

An enterprise’s best bet at eliminating friction is to adopt an automated self-service experience that enables app development teams to input service-related information and trigger delivery on demand. Such a delivery platform must combine pieces of information from the various owners in a reliable way. At the same time, black-box automation may not win the confidence of app development teams, who want to track changes made to service configurations and dependencies, and to visualize & identify configuration changes between different environments of an app. Most teams are already using CI scripting mechanisms and tools, so it would be ideal if the automation leverages as many existing investments around CI as possible, with minimal disruption to the roles and workflows in the organization.

Enterprises today would, therefore, be well served by moving away from a script-based & people dependent approach to delivery, and towards a platform-based approach. Such a delivery automation platform would provide a frictionless and continuous delivery experience to Kubernetes allowing them to tap into all the intended benefits of containers: improved cost economics, speed, and reliability of deployments, and app portability across multiple clouds.

Delays in App Delivery to Kubernetes


April 23, 2019 - by Spruha Pandya | appeared on HackerNoon

Overcoming roadblocks and achieving continuous delivery

Delivering enterprise applications to Kubernetes

Enterprises around the world are waking up to the containers and Kubernetes trend. There are numerous benefits of delivering an application as container packages to Kubernetes but at the same time, the process of app containerization and the subsequent app deployment to Kubernetes can hit many roadblocks. Since the idea of using Kubernetes and containers for app delivery is fairly recent, the transition from traditional delivery systems to these modern delivery systems is a bumpy ride.

Major roadblocks while achieving continuous delivery

To modernize, breaking down large applications into smaller microservices is just a start. The main challenge is continuously delivering these microservices as containers to Kubernetes. Most enterprise teams sink a lot of time and effort into this and still end up with a delayed delivery process.

So, what exactly are these glitches that are causing this delay?

Causes of delay in app delivery

In Containerization

With container technology being a relative newcomer in the enterprise software stack, not many teams have found a way to tame the complexity it brings. The containerization process starts with writing a Dockerfile, which gives Docker the instructions for building an image. A build command then uses the Dockerfile to generate a Docker image, and running that image gives you a Docker container. This process is complicated and time-consuming when it has to be repeated for a large number of containers. For a small-scale application with just a couple of services, containerization is easy. But for a bulky enterprise application with potentially hundreds of microservices, creating and maintaining containers can become very daunting.
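
For illustration, here is a minimal sketch of that sequence for a hypothetical Java service (the image name, JAR path, and port are all assumptions):

```dockerfile
# Dockerfile: the instructions Docker reads to assemble the image
FROM openjdk:8-jre-alpine
COPY target/orders-service.jar /app/orders-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/orders-service.jar"]
```

```sh
docker build -t orders-service:1.0 .            # Dockerfile -> Docker image
docker run -d -p 8080:8080 orders-service:1.0   # Docker image -> running container
```

Writing and maintaining one of these per service is manageable; doing it for hundreds of services is where the process starts to drag.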

Add container orchestration to the process, and you see one more level of complexity that involves setting up the orchestrator and deploying to Kubernetes.

In Delivery to Kubernetes
  • Manually deploying containers at scale
    Almost all user-facing components in Kubernetes are defined in YAML files. So, for any enterprise application with many interlinked services, there are potentially multiple YAML files, which requires one to be well-accustomed to whitespace-sensitive syntax; failing that, the outcome can be frustratingly error-prone, with little feedback on when and where things went wrong (see the manifest sketch after this list).
    If one has that part all worked out, there are several different resource types in Kubernetes applications to be handled. Some of those may need upgrading, and depending on the type of app you are working on, knowing what to update can further complicate deployment to Kubernetes.
    Doing this for a one-time delivery is still feasible but managing the delivery of a large number of containers to Kubernetes and updating them from time to time is an entirely different scenario. With more iterations, the app delivery tends to become error prone.
    There are also tasks like keeping track of the diff history and managing rollbacks of new and old deployments, adding more layers of complications to the pile.
  • Too many alternatives to enable deployment
    Of course, there are some automation tools to reduce the work required in deploying an application to Kubernetes. There are many options that offer running Kubernetes as a managed or a hosted platform. That again has several complexities.
    Firstly, getting updates or new features rolled out to a running cluster is not as simple as it seems. As there is no universally agreed way to do this, choosing one of several methods while ensuring it is suitable for the team at hand is difficult.
    If you consider involving a deployment automation tool to speed up the process, the team is faced with several choices. With an abundance of tools to automate deployment, the wide array of options leaves enterprises stuck in analysis paralysis, eventually delaying a standardized way to deliver apps to Kubernetes. This negates the purpose of using a deployment automation tool in the first place.
    Even after an automation tool is in place, there is still the major task of figuring out the components of the CD pipeline needed to get the tool up and running: the version control system, CI system, Docker registry, and Kubernetes cluster.
    Once these components are in place, an enterprise is ready with a continuous delivery pipeline for Kubernetes.
  • The need for context switching for developers while enabling smooth delivery
    Containers bring a lot of complexity to the table for the IT operations team as well as developers. As the technology is comparatively new, there is often a lack of adequate understanding of Kubernetes as well as Docker containers.
    In order to ensure a smooth app delivery process, the DevOps team has to be efficient; if not, there is bound to be a constant struggle to streamline the process. When developers are required to get involved in container management and delivery, they take time to get accustomed to it, and that time would be more productively spent coding the application services instead.
    Talking about improving the developer experience for a smooth app deployment process, Sangeetha Narayanan, Director of Engineering at Netflix, says, “Any time spent fighting the system or tools is time not spent delivering value to the business.” This applies to most enterprise app development teams as they try their hand at app delivery to Kubernetes in the process of embracing modernization.
  • Microservices architecture considered a hard prerequisite for Kubernetes
    As microservices work well with containers, it is assumed that the same is the case with Kubernetes.
    Many enterprises keep delaying the app deployment to Kubernetes because their application is still a legacy one, and miss out on reaping the benefits that Kubernetes has to offer like portability, IT cost savings, and scalability.
    Though microservices-based applications are ideal for Kubernetes in most cases, that is not true for all apps. Wrapping a monolith in a container and deploying it through Kubernetes can work wonders too. For monoliths that cannot be refactored into microservices because the costs outweigh the benefits, this approach can jumpstart app modernization efforts and, at the same time, speed up application deployment.
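
To ground the YAML point above, here is a minimal (hypothetical) Deployment manifest for a single service. A real enterprise app multiplies files like this, plus Services, ConfigMaps, and other resource types, across every microservice and environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service        # must match the selector above, or the deploy fails
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0
          ports:
            - containerPort: 8080
```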

What is the Solution?

The advantages of deploying an enterprise app to Kubernetes outweigh the shortcomings your team faces, leaving you with no alternative other than figuring out a way to overcome those shortcomings. The key is to automate your complete CI/CD pipeline. But simply saying you need an automated CI/CD pipeline does not make all your app delivery delays vanish. For smoother and faster delivery to Kubernetes, you need an approach that streamlines the process, keeping in mind these specific considerations.

Solution Considerations:
  1. Automation: Automation of the containerization process is the first step towards a smooth app delivery. The need to manually create Docker images for every single app service should be eliminated.
  2. Standardization of process: In order to reduce delays caused while trying to accommodate each custom process and tool, there needs to be one process standardized across teams for delivering applications to Kubernetes.
  3. Self-service delivery experience for developers: The delays caused by DevOps friction can be eliminated by enabling service-centric delivery, letting developers handle the DevOps configuration all by themselves.

Having all these considerations fulfilled will offer the level of automation in app delivery that most enterprises hope to achieve.

An enterprise can either have a very efficient DevOps team to enable this level of automation or seek the help of an app delivery platform that is purpose-built to offer these features for a smooth delivery process to Kubernetes. There are several app delivery platforms designed just for the purpose. Platforms like HyScale, Codefresh, and OpenShift are some of the options worth exploring.

Depending on the landscape of the enterprise application and the capabilities of the team, you need to act fast and decide how you want to deliver containerized applications to Kubernetes smoothly.

VMs vs. Containers for Microservices


March 19, 2019 - by Spruha Pandya | appeared on HackerNoon


What makes more sense for an enterprise?

In my previous blog, I talked about how enterprises can achieve continuous delivery of applications using microservices and containers. Here, I delve deeper to compare containers and VMs from a microservices architecture viewpoint.

In this software era of constant evolution, we hear a lot of talk about using containers for microservices and the need to modernize monolithic applications. But there is always an inevitable follow-up question for an enterprise, one that is rarely addressed — why not use VMs instead of containers?

Virtual machines offer virtualization of the hardware as well as the OS, creating an efficient, isolated duplicate of a real machine. In the case of containers, only the OS is virtualized, not the hardware, creating a lightweight environment that houses the application and its dependent assets. Containers have been gaining popularity since 2014, virtual machines have existed for a very long time, and the difference between the two is well known. As the world keeps comparing the two, it is imperative to remember that both are here to serve different purposes.

"I like to think virtual Machines are to containers just as trains are to aeroplanes. While aeroplanes offer much faster mobility, we are always going to have trains around for some journeys that aeroplanes are not equipped to handle. Likewise, in certain business situations, containerization may be the prefered choice and in some, virtualization."

But when microservices come into the picture, the question — “containerize or virtualize?” seems to be frequent.

Benefits of using containers over VMs for microservices

Lowered Cost

VMs make it easy to partition execution environments among microservices by virtualizing the hardware as well as the operating system. But running every microservice in a separate VM requires replicating the OS as well, which increases licensing costs. It is possible to execute multiple microservices in a single VM, but that would negate the single biggest advantage of breaking a monolithic application into small, easily executable microservices: the issue of conflicting libraries and application components would remain unsolved.

Containers offer isolation at the OS level. Thus, a single operating system can support multiple containers, each running within its own separate execution environment. By running multiple components on a single operating system, you reduce licensing costs.

Higher Efficiency

VMs often impose a large performance penalty: every virtual machine runs its own execution environment on a separate copy of the operating system, exhausting server processing cycles that could otherwise be used to run the applications themselves.

On the other hand, containers perform environment isolation at the operating-system level. This way, a single operating system can support multiple containers, each running within its own separate, isolated environment. By running multiple components on a single operating system, containers reduce the overhead on server processing cycles, freeing up processing power for other application components.

Flexible Storage

VMs have multiple options for storage: local or network-based. Whichever option you choose, there will always be physical disk space allocated to every VM, isolated from the VM itself. At all times, the operating system, program files, and data of a VM occupy space on the dedicated storage disk: stateful storage.

In contrast, containers offer the choice of being stateful or stateless. Storage space is created when the container is created and discarded when it is deleted. A sandbox layer is created when a container image is edited, and that layer stores all the data; it is active only as part of the container. Thus, each service of your microservices app can have its own storage, managed differently from the storage of other services, providing more flexibility and control.
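
As a small hedged example (the image names are hypothetical), each service can get its own named volume, managed independently of every other service’s storage:

```sh
# Dedicated, persistent volume for one stateful service
docker volume create orders-data
docker run -d --name orders -v orders-data:/var/lib/orders orders-service:1.0

# A stateless service runs with no volume; its writable layer vanishes on deletion
docker run -d --name search search-service:1.0
```
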
Better isolation

As mentioned above, containers offer independent execution environments to microservices while enabling cohabitation on a single operating system. The databases, as well as the environments of microservices, are completely independent, despite being deployed on a single OS. If one tries to run multiple microservices on a single VM, there can be overlapping environments which may end up clogging up the server.

With containers, one can make the most out of a given piece of hardware by smart application processing. For example, some microservices require a lot of processing power, while others may generate a lot of network traffic. With smart workload allocation within containers, app developers can efficiently utilize server resources, ensuring network capacity is not left unused.

Reduced Size

VMs, as we know, take up a lot more storage space than containers. A container is typically as small as 10MB whereas a VM occupies at least a few GBs of storage space. While segregating a complex application using the microservices architecture, where each functionality is a separate microservice, the isolation will require a large number of VMs or containers. VMs are bound to require a lot of space while containers will not. The same physical server can hold many more containers than VMs. So, given the need to ensure the enterprise app does not become too bulky, containerization is often the preferred choice.

Faster execution

VMs are created by a hypervisor that needs many configuration decisions up front: the guest OS for running the application, the amount of storage space needed, network preferences, and many other settings. Though VMware offers preset defaults in its VM creation wizard, the process remains fairly complex, and a VM takes a couple of minutes to come up. Containers, on the other hand, are created faster than VMs because there is no hypervisor involved. Container images are stored in a repository, from where they can be pulled as required with a few quick commands. The startup time for Docker containers ranges from a few milliseconds to a couple of seconds, which makes them much swifter than VMs.
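
A hedged way to see this yourself, using any small public image such as alpine:

```sh
docker pull alpine:3.9                 # one-time fetch from the registry
time docker run --rm alpine:3.9 true   # starts in milliseconds; no hypervisor or guest OS to boot
```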

Executing containerized microservices at scale can be further simplified by using container orchestrators like Kubernetes and Docker Swarm. Docker containers can be quickly launched, retired, and replaced at scale with such orchestrators. To further enhance working with microservices and containers, one can use app delivery platforms designed exclusively to overcome modern delivery challenges for an enterprise. An app delivery platform can help accelerate, standardize, and sustain the complete process of containerization and subsequent delivery to orchestrators.

In reality, every enterprise comes with its own needs, requirements, and legacy systems. In the choice between VMs and containers, the latter is better suited for packaging microservices. So, if one is looking at scalable & flexible modern architecture for their teams, containers are their best bet to ensure continuous delivery.

Of Microservices and Containers


February 22, 2019 - by Spruha Pandya | appeared on HackerNoon

Modern-day enterprises are largely dependent on software applications to facilitate numerous business requirements. In most enterprises, a software application offers hundreds of functionalities, all piled into one single monolithic application. For instance, ERP and CRM platforms have a monolithic architecture and serve hundreds of functionalities efficiently. But with multiple dependencies overlapping and creating a cluster, troubleshooting, scaling, and upgrading them becomes a nightmare. At times, enterprises tweak such monolithic applications for their convenience to the point that the apps get stuck in time and cease to serve any real purpose. This is when enterprises start to look for ways to modernize applications and adopt an architecture that offers flexibility.

The rise of Microservices

There is a growing demand for microservices architecture amongst enterprises to make the transition to modern delivery. In this architecture, functionalities are designed as independent microservices that are loosely coupled to create one application that can multitask. This method facilitates building applications at scale, where making changes at the component level becomes easy without disturbing other parts of the application.

source: https://dzone.com/articles/the-shift-to-microservices-and-continuous-delivery

Netflix is one of the biggest and most interesting success stories of transitioning from a monolithic to a microservices-based architecture. The media services provider will never forget the day in 2008 when a single missing semicolon led to major database corruption and brought down the entire platform for several hours. Netflix realized they had to change their architectural approach, which led them to consider shifting from a monolith to microservices.

Although Netflix began its shift towards microservices in 2009 and was successfully running on a cloud-based microservices architecture by 2011, the term “microservices” was not coined until 2012. It started gaining popularity only around 2014, when Martin Fowler and other leaders in the industry started talking about it.

Adrian Cockcroft, the cloud architect at Netflix and a visionary who played a major role in changing the architecture landscape, describes microservices as a “loosely coupled service-oriented architecture with bounded contexts.”

With this bold decision to shift to microservices, Netflix was able to take quantum leaps forward in scalability, and in early 2016 it announced the expansion of its service to over 130 new countries.

How Microservices benefit an Enterprise Application

The transition to microservices from a monolithic architecture can open up a world of possibilities for enterprises such as:

  • The ability to create service-enabled and independently running components
    This way, each component is independent in itself, but all of them are coupled through APIs to work in a unified manner as an application.

  • Independently testing and running components
    One can easily run tests and make changes to one component without having to alter any other components.

  • Interconnected components working in sync
    Components use simple communication channels and protocols to co-exist and work together as a single unit.

  • A decentralized application
    Each component is independent and can be developed and deployed exclusively. So, the risk of the complete application crashing because of a minor flaw is eliminated.

  • Decentralized data management
    Each component has its own separate database, preventing a data breach from taking over the entire application and limiting it to a single component. This enhances the security of the application.

  • A flexible and scalable application
    An application that can have any part of it upgraded or expanded without having to make any change to the existing components.


With all its advantages, the microservices architecture also comes with its own limitations. One of the biggest challenges remains delivering microservices at scale. The continuous integration and delivery of such a segmented application becomes complicated, as it requires a lot of coordination to integrate and deploy a group of microservices in sync; only a very efficient DevOps team can achieve this feat. The key is to have seamless channels of communication between microservices and the assets they depend on to function. To fully exploit the value of microservices, it is essential to deliver them as self-sustained and portable units, which is where containers enter the equation.

Why containers for microservices

"Containers simplify the continuous deployment of microservices" - a statement that has been so often been repeated by tech experts. But, what exactly are software containers and how do they simplify the delivery of microservices?

IT containers do exactly what physical containers do, but digitally. In short, containers let you put your microservices in dedicated boxes. The idea is to package ‘like’ services and their required assets into a single unit. A container offers an isolated workload environment in a virtualized operating system, so by running your microservices in separate containers, they can all be deployed independently. As containers operate in isolated environments, they can be used to deploy microservices regardless of the code language each one is written in. Containerization thus removes the risk of friction or conflict between languages, libraries, or frameworks.

As containers are extremely lightweight and portable, they can be used to deploy microservices quickly. Typically, such an application comprises small, self-contained microservices, each acting as a single-function application, working together through language-agnostic APIs. Containers offer the required isolation here, enabling component cohabitation.
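
A minimal sketch (the image names are hypothetical): two microservices written in different languages run side by side, each isolated with its own stack, communicating only over the network:

```sh
# Shared network over which the services expose their APIs
docker network create shop-net

# Each service ships its own language runtime inside its image
docker run -d --network shop-net --name billing billing-service:1.0   # e.g. a Java-based image
docker run -d --network shop-net --name reports reports-service:1.0   # e.g. a Python-based image
```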

Backing up the benefits of using containers for microservices, Docker reported a 46% increase in the frequency of software releases when using Docker containers.

These containers can be orchestrated through container orchestration platforms like Kubernetes, Docker Swarm, Helios, etc. These platforms help in the creation of multiple containers as required, and make them readily available for smooth deployment of the application. Orchestration also controls how containers are connected to build sophisticated applications from multiple microservices.
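
For instance (a hedged sketch, assuming a Deployment named orders-service already exists on the cluster), scaling a service up to more container replicas is a single orchestrator command:

```sh
kubectl scale deployment orders-service --replicas=5   # Kubernetes launches or retires containers to match
```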

The road ahead

While containers and orchestrators are part of the buzz today, the larger question is how and when can enterprises start using them in production? Both these technologies set a new baseline for speed, scale, and frequency of app delivery, that is going to be difficult to achieve without automation and process standardization. This can be accomplished by choosing an efficient app delivery platform that is capable of automating the process of app delivery by offering containerization for existing apps as well as future cloud-native apps and piping them seamlessly into Kubernetes. Through this, one can simply standardize the process of app delivery and accelerate the key aspects of container native delivery and thus, achieve continuous delivery of microservices.

Continuous delivery in the world of containers


February 22, 2019 - by admin

In the previous blogs of the Modern DevOps with Containers series, we spoke about the importance of Containers and Kubernetes in app delivery, their challenges and their growing adoption. In this post, we take a look at how a platform-based approach can help enterprises simplify, accelerate and scale their modern DevOps journey with containers.

HyScale: purpose-built for modern DevOps with containers

The container delivery process, within the overall workflow of app delivery using containers, has changed compared to the VM-based app delivery model.

Using scripts in a container-based application delivery model brings challenges such as disruption, lack of standardization, poor scalability and visibility, and a demanding learning curve. A container-based application delivery platform can help overcome these challenges by automating, standardizing, and providing the right visibility into the CD (Continuous Delivery) process.

HyScale, a modern DevOps platform, is purpose-built to help enterprises deliver applications seamlessly using containers. The illustration below shows how HyScale fits into an existing enterprise ecosystem and provides key capabilities that make it easy to onboard & containerize applications and deliver them to runtime hosting platforms.

As explained in the illustration above, in a typical enterprise app the source code is pushed to the VCS repo and run through a build machine like Jenkins, which compiles the app binaries & WAR files into an artifact repository (JFrog or any other) using shell scripts. HyScale onboards these existing monolithic apps and reuses the existing scripts, thereby eliminating the various manual scripts required for different services. HyScale automatically containerizes these apps with the existing preset OS environments and configurations, generating deployment-ready artifacts that can be deployed to a hosting provider like Kubernetes or AWS.
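
As a hedged, generic sketch of that scripted flow (the repository URL, credentials, and build tool are assumptions for illustration, not HyScale specifics):

```sh
git push origin main        # developer pushes source to the VCS repo
mvn -B package              # Jenkins-style build compiles the app binaries / WAR files
curl -u "$USER:$TOKEN" -T target/app.war \
     "https://artifacts.example.com/libs-release/app/app-1.0.war"   # shell script uploads to the artifact repo
```

It is this per-service scripted glue that the post describes HyScale onboarding and replacing.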

Benefits with HyScale

HyScale is a unique platform built with an application-centric approach to help enterprises accelerate their application delivery. The platform is purpose-built with key benefits that help enterprises make the shift towards container adoption and fast-track their applications to Kubernetes and other app hosting providers.

  • HyScale eliminates the disruption caused by manual scripting efforts. It directly onboards all the app artifacts and services required for container app delivery.
  • The platform offers standardization of best practices for your existing app packages, thereby getting rid of the custom scripts required for each service.
  • Container automation is a proprietary feature of HyScale, whereby your existing apps are automatically converted into containerized images with existing OS environments and configurations, ready for app delivery.
  • HyScale also offers a one-of-a-kind UI that provides visibility, change tracking, and progress monitoring of your app delivery pipeline.
  • There is no need for Docker expertise, as the platform is designed to accelerate containerized app delivery regardless of your app’s readiness for Docker.
  • Quick and hassle-free integration with your existing pipeline saves setup time.
  • Finally, HyScale provides a single pane of glass to deliver applications to multiple Kubernetes clusters. This way, enterprises can seamlessly deliver applications to public, private, or hybrid infrastructure running the Kubernetes runtime.

With HyScale, IT operations can now truly become automated, self-serviced, standardized, and minimally disruptive for enterprises looking to embrace container-based app delivery.