Container Portability - What is so great about it?
July 24, 2019 - by Spruha Pandya
Organizations today deal with a large number of applications, devices, and data introduced by cloud, mobile, big data, and the Internet of Things (IoT). With the advent of automation and continuous everything, the situation is only likely to get more challenging. Enterprises therefore stress having a clear DevOps strategy so that the workflow moves smoothly. But because DevOps depends on efficient synchronisation between developers and the IT team, making it work at scale can run into many glitches, and teams commonly hit them throughout the DevOps workflow.
These glitches can be difficult to resolve, as none of them has a straightforward solution. That is why organizations are leaning towards self-service DevOps. But to enable self-service DevOps manually, one either needs to be well-versed with REST, ESB, SOA, SOAP, and all the context switching involved, or else rely completely on IT teams to build each integration beforehand. Either option is tough to achieve, yet can work wonders for enabling continuous delivery. Considering the gaps in the workflow, there has to be a system in place that enables self-service app delivery.
Imagine a self-service kiosk for the purpose, much like a bank's Automated Teller Machine (ATM). Two parties contribute to an ATM's operation: the bank and the customer. Customers perform simple tasks like withdrawing or depositing money entirely by themselves, without contacting bank executives. Similarly, if there were a kiosk-like system where IT operators could predefine and build all the required app delivery artifacts, such as preconfigured environment profiles, containerized app stacks, and a choice of target infrastructure, developers could add their specifications to a suitable integration and deploy apps on their own, without having to contact the IT team.
Consider a modern application with a microservices architecture comprising X services; chances are there will be more than one microservice behind each service. Each microservice is built by a separate team of developers, and each team naturally has a different work pipeline. The output of every development pipeline is dumped on IT's table, leading to a classic siloed workflow. If there were a way to give the IT team an independent work pipeline of its own, wouldn't that simplify it all?
This is what self-service DevOps offers: a way for developers to deploy applications on demand, all by themselves, because the IT team has prepared everything beforehand at its own pace. Interruptions and delays in app delivery are minimal, bringing the team to a state of continuous delivery.
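To make the kiosk idea concrete, the artifacts IT predefines could take the shape of an environment profile that a developer simply selects and fills in. The YAML below is a hypothetical sketch, not any real platform's schema; every field name is illustrative.

```yaml
# Hypothetical self-service profile published by IT operations.
# IT curates the base image, target, and resource limits once;
# developers supply only their app-specific values and deploy.
profile: java-web-staging
base_image: openjdk:11-jre        # preset, hardened by IT
target: kubernetes-staging        # choice of target infrastructure
resources:
  cpu: "500m"
  memory: "512Mi"
developer_inputs:                 # the only fields a developer fills in
  artifact: payments-service-1.4.2.war
  replicas: 2
```

The split mirrors the ATM analogy: everything above `developer_inputs` is the bank's machinery, everything below it is the customer's transaction.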
In short, self-service DevOps lets developers deploy on demand, removes handoff delays between development and IT, and clears the path to continuous delivery.
The idea of a virtual self-service kiosk for software delivery can actually be implemented, either by putting a process in place or by adopting an app delivery platform that offers such functionality. A well-built app delivery platform can streamline the CI/CD pipeline for an application of any scale. The idea is to have a predefined app delivery process that enables any enterprise to unlock the benefits of DevOps with ease.
June 20, 2019 - by Spruha Pandya
June 6, 2019 - by admin
The advantages of deploying an enterprise app to Kubernetes outweigh the shortcomings your team faces, leaving you with no alternative but to figure out a way to overcome those shortcomings. The key is to automate your complete CI/CD pipeline. But merely saying that you need an automated CI/CD pipeline does not make your app delivery delays vanish. For smoother and faster delivery to Kubernetes, you need an approach that streamlines the process around a few specific considerations.
Fulfilling these considerations delivers the level of automation in app delivery that most enterprises hope to achieve.
An enterprise can either rely on a very efficient DevOps team to enable this level of automation or turn to an app delivery platform purpose-built for a smooth delivery process to Kubernetes. Several platforms are designed for exactly this; HyScale, Codefresh, and OpenShift are options worth exploring.
Depending on the landscape of your enterprise application and the capabilities of your team, act fast and decide how you want to deliver containerized applications to Kubernetes smoothly.
In my previous blog, I talked about how enterprises can achieve continuous delivery of applications using microservices and containers. Here, I delve deeper to compare containers and VMs from a microservices architecture viewpoint.
Modern-day enterprises are largely dependent on software applications to facilitate numerous business requirements. In most enterprises, a software application offers hundreds of functionalities, all piled into one single monolithic application. For instance, ERP and CRM platforms have a monolithic architecture and serve hundreds of functionalities efficiently. But with multiple dependencies overlapping and tangling together, troubleshooting, scaling, and upgrading such applications becomes a nightmare. At times, enterprises tweak monolithic applications for their convenience to the point that the applications get stuck in time and cease to serve any real purpose. This is when enterprises start looking for ways to modernize applications and adopt an architecture that offers flexibility.
There is a growing demand among enterprises for microservices architecture as a way to transition to modern delivery. In this architecture, functionalities are designed as independent, loosely coupled microservices that together form one application. This approach facilitates building applications at scale, where changes at the component level are easy to make without disturbing other parts of the application.
Netflix is one of the biggest and most interesting success stories of transitioning from a monolithic to a microservices-based architecture. The media services provider will never forget the day in 2008 when a single missing semicolon led to major database corruption and brought down the entire platform for several hours. Netflix realized its approach to architecture had to change, which led it to consider shifting from a monolith to microservices.
Although Netflix began its shift towards microservices in 2009 and was running successfully on a cloud-based microservices architecture by 2011, the term "microservices" was not coined until around 2012. It started gaining popularity only around 2014, when Martin Fowler and other industry leaders began talking about it.
Adrian Cockcroft, cloud architect at Netflix and a visionary who played a major role in reshaping its architecture landscape, describes microservices as a "loosely coupled service-oriented architecture with bounded contexts".
With this bold decision to shift to microservices, Netflix took quantum leaps forward in scalability, and in early 2016 it announced the expansion of its services to 130 new countries.
The transition to microservices from a monolithic architecture can open up a world of possibilities for enterprises such as:
The ability to create service-enabled and independently running components
This way, each component is independent in itself, but all of them are coupled through APIs to work in a unified manner as an application.
Independently testing and running components
One can easily run tests and make changes to one component without having to alter any other components.
Interconnected components working in sync
Components use simple communication channels and protocols to co-exist and work together as a single unit.
A decentralized application
Each component is independent and can be developed and deployed exclusively. So, the risk of the complete application crashing because of a minor flaw is eliminated.
Decentralized data management
Each component has its own separate database, preventing a data breach from taking over the entire application and limiting its impact to a single component. This enhances the security of the application.
A flexible and scalable application
An application that can have any part of it upgraded or expanded without having to make any change to the existing components.
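The properties above can be sketched in code. Below is a minimal, hypothetical example of two independently running "catalog" and "pricing" components, each owning its own data store and exposing a simple HTTP/JSON API; a consumer composes them without knowing their internals. The service names, ports, and data are all illustrative, and a real deployment would run each service as a separate process or container rather than a thread.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Decentralized data management: each service owns its own store.
CATALOG_DB = {"sku-1": {"name": "Widget"}}
PRICING_DB = {"sku-1": {"price": 9.99}}

def make_handler(db):
    """Build a request handler that serves one service's data as JSON."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            item = db.get(self.path.strip("/"), {})
            body = json.dumps(item).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass
    return Handler

def serve(db):
    """Start an independent service on an ephemeral port."""
    server = HTTPServer(("127.0.0.1", 0), make_handler(db))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Two components, started (and upgradable) independently of each other.
catalog = serve(CATALOG_DB)
pricing = serve(PRICING_DB)

# A consumer couples them only through their APIs.
item = json.load(urlopen(f"http://127.0.0.1:{catalog.server_address[1]}/sku-1"))
price = json.load(urlopen(f"http://127.0.0.1:{pricing.server_address[1]}/sku-1"))
print(item["name"], price["price"])
```

Because the coupling is only the HTTP contract, either service could be rewritten in another language or scaled separately without touching the other, which is exactly the flexibility described above.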
With all its advantages, the microservices architecture also comes with its own limitations. One of the biggest challenges with microservices remains delivering them at scale. Continuous integration and delivery of such a segmented application becomes complicated, as it requires a lot of coordination to integrate and deploy a group of microservices in sync; only a very efficient DevOps team can achieve this feat. The key is to have seamless channels of communication between microservices and the assets they depend on. To fully exploit the value of microservices, it is essential to deliver them as self-sustained and portable units, which is where containers come into the equation.
"Containers simplify the continuous deployment of microservices" - a statement that has been so often been repeated by tech experts. But, what exactly are software containers and how do they simplify the delivery of microservices?
Software containers do digitally what physical containers do in shipping: they let you put your microservices in dedicated boxes. The idea is to package 'like' services and their required assets into a single unit. A container offers an isolated workload environment in a virtualized operating system. By running microservices in separate containers, each can be deployed independently, and because containers operate in isolated environments, they can host microservices regardless of the language used to create each one. Containerization thus removes the risk of friction or conflict between languages, libraries, or frameworks, making the services compatible.
Because containers are extremely lightweight and portable, they can be used to deploy microservices quickly. Typically, such an application comprises small, self-contained microservices, each acting as a single-function application, working together through language-agnostic APIs. Containers offer the required isolation, enabling these components to cohabit without interfering with one another.
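Packaging one microservice with its runtime and libraries can be as simple as a short Dockerfile. The one below is illustrative only; the base image, file names, and port are assumptions for a hypothetical Python-based service.

```dockerfile
# Illustrative Dockerfile for one single-function microservice.
FROM python:3.11-slim                  # language runtime isolated inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # service-specific libraries, no host conflicts
COPY . .
EXPOSE 8000
CMD ["python", "service.py"]           # one service per container
```

Each microservice gets its own Dockerfile like this, so a Java service and a Python service can ship side by side without any shared-dependency friction.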
Backing up the benefits of using containers for microservices, Docker reported a 46% increase in the frequency of software releases among teams using Docker containers.
These containers can be orchestrated through container orchestration platforms like Kubernetes, Docker Swarm, and Helios. These platforms create as many containers as required and keep them readily available for smooth deployment of the application. Orchestration also controls how containers are connected to compose sophisticated applications from multiple microservices.
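As a sketch of what orchestration looks like in practice, a Kubernetes Deployment declares how many replicas of a containerized microservice should run, and Kubernetes keeps that many available, restarting or rescheduling containers as needed. All names and the image reference below are illustrative.

```yaml
# Hypothetical Kubernetes Deployment for one containerized microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing-service
spec:
  replicas: 3                     # orchestrator keeps three copies running
  selector:
    matchLabels:
      app: pricing
  template:
    metadata:
      labels:
        app: pricing
    spec:
      containers:
      - name: pricing
        image: registry.example.com/pricing:1.0   # illustrative image reference
        ports:
        - containerPort: 8000
```

The declarative style is the point: teams describe the desired state per microservice, and the orchestrator reconciles reality to match it, which is what makes coordinated delivery of many services tractable.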
While containers and orchestrators are part of the buzz today, the larger question is how and when enterprises can start using them in production. Both technologies set a new baseline for speed, scale, and frequency of app delivery that is difficult to reach without automation and process standardization. This can be accomplished by choosing an efficient app delivery platform capable of automating app delivery: containerizing existing apps as well as future cloud-native apps and piping them seamlessly into Kubernetes. This standardizes the app delivery process, accelerates the key aspects of container-native delivery, and thus achieves continuous delivery of microservices.
February 22, 2019 - by admin
In the previous blogs of the Modern DevOps with Containers series, we spoke about the importance of Containers and Kubernetes in app delivery, their challenges and their growing adoption. In this post, we take a look at how a platform-based approach can help enterprises simplify, accelerate and scale their modern DevOps journey with containers.
Within the overall workflow of app delivery, the container delivery process has changed considerably compared to the VM-based delivery model.
Using scripts in a container-based application delivery model brings challenges such as disruption, lack of standardization, limited scalability and visibility, and a demanding learning curve. A container-based application delivery platform can overcome these challenges by automating and standardizing the Continuous Delivery (CD) process and providing the right visibility while it runs.
HyScale, a modern DevOps platform, is purpose-built to help enterprises deliver applications seamlessly using containers. The illustration below shows how HyScale fits into an existing enterprise ecosystem and provides key capabilities that make it easy to onboard and containerize applications and deliver them to runtime hosting platforms.
As the illustration above shows, in a typical enterprise app the source code is pushed to a VCS repo and run through a build machine such as Jenkins, which compiles the app binaries and WAR files and publishes them to an artifact repository (JFrog or any other) via shell scripts. HyScale onboards these existing monolithic apps and reuses the existing scripts, thereby eliminating the various manual scripts required for different services. HyScale then automatically containerizes the apps with the existing preset OS environments and configurations, generating deployment-ready artifacts that can be deployed to a hosting provider like Kubernetes or AWS.
HyScale is a unique platform built with an application-centric approach to help enterprises accelerate application delivery, make the shift towards container adoption, and fast-track their applications to Kubernetes and other app hosting providers.
With HyScale, IT operations can now truly become automated, self-serviced, standardized, and minimally disruptive for enterprises looking to embrace container-based app delivery.