DevSecOps best practices for enterprises leveraging Kubernetes

April 17, 2020 - by Niranjan Ramesh | appeared on Jaxenter

Security can no longer afford to be at the end of the DevOps process. It needs to be integrated into every step of both development and operations to eliminate vulnerabilities before the application is shipped. In essence, DevOps needs to evolve into DevSecOps.

Would you be surprised if we told you that a whopping 92% of DevOps teams don’t catch all security vulnerabilities before moving code to production? We are sure most of them are glad they found the vulnerabilities before someone else did, because good security teams know the magnitude of risk opened up by even a small, nagging loophole in their application.

This is especially true when enterprise application teams deploy more times and at faster rates than ever before. Legend has it that Amazon deploys once every second, while Netflix, Google and others deploy thousands of times each day. In fact, CapitalOne, a financial services corporation that you might see as a non-tech company, deploys 50 times a day.

One of the key trends in technology that enables such rapid development is containerization. Kubernetes is powering the move of several enterprise applications to the cloud. Combined with robust DevOps, enterprise application development is delivering greater speed and agility. This speed, however, hasn’t come without compromise.

Security has long been considered a speed breaker in the process — you’ll find several conversations around ‘sacrificing’ speed for security in the rapid application development space. As a result, we see that “vulnerabilities in container software have increased by 46% in the first half of 2019 compared to the same period in 2018, and by 240% compared to the figures from two years ago.”

At this rate, security can no longer afford to be at the end of the DevOps process. It needs to be integrated into every step of both development and operations to eliminate vulnerabilities before the application is shipped. In essence, DevOps needs to evolve into DevSecOps.

What is DevSecOps?

If DevOps is about breaking the silos between development and operations, DevSecOps is doing the same for DevOps and security. DevSecOps brings security into the software development lifecycle, to eliminate vulnerabilities as soon as possible.

DevSecOps is underlined by four key factors:

  • Thinking about security right from the start.
  • Integrating security procedures within the DevOps processes.
  • Building observability and auditability into your development lifecycle.
  • Automating security tasks everywhere possible.

Let’s look at them one by one.

Security from the start

How many infosec engineers do you have in your application team? Now, what’s the ratio of infosec engineers to developers and operators? Chances are, you don’t have enough infosec engineers. And this is very common. By shifting security left, DevSecOps makes security the collective responsibility of your application teams, instead of leaving it to the lonely infosec professional at the deployment bottleneck. Here are a few ways in which you can include security from the start.

  • Bring the security teams to the DevOps table. Encourage them to share insights and feedback on known threats freely.
  • Train your development teams in secure coding, to eliminate common and repetitive mistakes, even before they appear.
  • Address security concerns as they appear, not after you’ve been attacked or your application compromised.
  • Repeatedly review all devices and tools to ensure they are compliant with your security policies.

Security with process-driven DevOps

Even the best-intentioned developers might lose track of the security needs of the application, in the hustle of developing working software at great speeds. To make security non-negotiable, you need clear processes, standardization and regular checkpoints. Here are some ways to begin setting them up.

  • Conduct a thorough risk-benefit analysis to gauge your risk tolerance and understand your security posture.
  • Build unambiguous processes for security functions like access management, secrets management, firewall configuration, vulnerability scanning etc. and put them into practice (see the sketch after this list).
  • Implement version control processes for your application code as well as any infra as code or platform as code in your system.
  • Consider moving to immutable infrastructure if you’re currently running a mutable system.
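
To make the secrets-management and access-management processes concrete, here is a minimal sketch of what such a standard might look like in Kubernetes: a Secret holding credentials, and a narrowly scoped RBAC Role that grants read access to it alone. All names and values here are hypothetical.

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials          # hypothetical secret name
      namespace: prod
    type: Opaque
    stringData:
      DB_PASSWORD: change-me        # placeholder; inject real values from your secrets store
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-db-credentials
      namespace: prod
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["db-credentials"]
        verbs: ["get"]              # read-only, and only for this one secret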

Security through observability and auditability

We already discussed that to successfully adopt DevSecOps, security must shift left. But taking responsibility goes beyond writing security-aware code. Development and operations teams must build a system of monitoring and auditability throughout the application development lifecycle, across multi-cloud deployments. Here are some ways you can achieve that.

  • Build a network of 'security champions' — who are security-minded members of non-security teams — throughout your enterprise. In essence, get more infosec engineers without actually adding more people to your teams.
  • Bring monitoring and observability to the service level. Set up monitoring for containers, clusters, and pods.
  • Set up artifact-level metadata to identify container images, libraries used, repository information, commit info etc. (see the sketch after this list).
  • Have real-time visibility over config changes.
  • Automate alerts for compliance and security-related issues.
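
One lightweight way to carry such artifact-level metadata is through Kubernetes labels and annotations on the deployed workload, so monitoring and audit tooling can trace a running pod back to its source. A minimal sketch, with hypothetical values:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: payments
      labels:
        app: payments
        version: "1.7.3"                          # hypothetical release version
      annotations:
        example.com/commit: "9f3c2ab"             # hypothetical commit SHA
        example.com/repository: "git.example.com/payments"
        example.com/base-image: "eclipse-temurin:11-jre"
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: payments
      template:
        metadata:
          labels:
            app: payments
            version: "1.7.3"
        spec:
          containers:
            - name: payments
              image: registry.example.com/payments:1.7.3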

Security by automation

If we have to pick one key benefit of Kubernetes, it has to be scale. K8s has enabled application deployment at a never-seen-before scale, even across complex multi-cloud or hybrid environments. But implementing security checkpoints manually at this scale is practically impossible, which is why adding automation to your DevSecOps process is critical.

  • Automate infrastructure provisioning.
  • Integrate scanning into your CI/CD pipeline and ensure all container images are secure before the production push (a pipeline sketch follows this list).
  • Isolate container registries and control access to them.
  • Embed automated code-review and security testing with security static and dynamic analysis tools.
  • Update security patches automatically.
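
As an illustration of scanning in the pipeline, here is a minimal sketch of a CI job that fails the build when an image carries known high-severity vulnerabilities. It assumes a GitLab CI pipeline and the open-source Trivy scanner; the same idea applies to whichever CI tool and scanner you already use.

    # .gitlab-ci.yml (fragment): block the production push on scanner findings
    stages:
      - scan

    image_scan:
      stage: scan
      image:
        name: aquasec/trivy:latest
        entrypoint: [""]            # override the image entrypoint so `script` runs as-is
      script:
        # a non-zero exit code on HIGH/CRITICAL findings fails the pipeline
        - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"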

In 2020, it would be utterly foolhardy to ignore security in your enterprise application development. The stakes are high and the consequences — financial, reputational and compliance-related — could be significant. It would be just as risky to leave security in the hands of a few experts. What enterprises need is a combination of strategic inclusion of security into DevOps processes, end-to-end monitoring and audits, and thoughtful automation of security tasks. Another efficient approach would be to find a deployment automation platform that can also offer security management capabilities.

Infrastructure as code vs. platform as code

April 15, 2020 - by Dhivya Venkatesan | appeared on Jaxenter

With infrastructure as code (IaC), you write declarative instructions about compute, storage and network requirements for the infra and execute them. How does this compare to platform as code (PaC), and what did these two concepts develop in response to?

In its simplest form, the tech stack of any application has three layers — the infra layer containing bare metal instances, virtual machines, networking, firewall, security etc.; the platform layer with the OS, runtime environment, development tools etc.; and the application layer which, of course, contains your application code and data. A typical operations team works on the provisioning, monitoring and management of the infra and platform layers, in addition to enabling the deployment of code.

The rise of cloud computing first abstracted the infra layer. The infrastructure as a service (IaaS) model allowed IT/Operations teams to instantly provision cloud infrastructure at the click of a button. AWS EC2, Azure VMs and Google Compute Engine are the most popular IaaS services today. Then came the platform as a service (PaaS) model, which abstracted the next layer. Infrastructure providers themselves began offering the platform layer — operating system, development tools, database management and the like. PaaS services like AWS Elastic Beanstalk, Azure App Service and Google App Engine gained popularity.

In fact, Ops teams also began building their own PaaS, bringing together a selected subset of features to remain compatible with their existing infra or to support custom workflows. If you’re using containerization or microservices paradigms, this can get tedious and unwieldy.

The need for scale, consistency, repeatability, shareability and auditability in building microservices-based applications forces operations teams to consider radically new approaches to the workings of the infra and platform layers. It is in response to these concerns that the concepts of infrastructure as code (IaC) and platform as code (PaC) emerged.

Infrastructure as code

Infrastructure as code is exactly what it says on the tin: managing and provisioning infrastructure through software, instead of physical hardware configuration or other tools. With IaC, you write declarative instructions about compute, storage and network requirements for the infra and execute them. The automation engine — tools like AWS CloudFormation and Terraform — will then provision it for you, translating your declarations into calls to the underlying IaaS APIs.
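
For instance, here is a minimal sketch of an IaC declaration in AWS CloudFormation’s YAML format; the AMI ID is a placeholder. Executing the template provisions the instance, and re-running it after a change updates the stack in a controlled, version-trackable way.

    # cloudformation-template.yaml: declare the infra, let the engine provision it
    AWSTemplateFormatVersion: "2010-09-09"
    Description: A single web server instance, declared as code
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t3.micro
          ImageId: ami-0123456789abcdef0    # placeholder AMI ID
          Tags:
            - Key: environment
              Value: staging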

As a result, provisioning infra becomes significantly faster, whether it’s a natural part of your delivery pipeline or meant to auto-scale in response to specific events. If you use multiple environments like dev, QA, staging, prod etc., firing up infra using the same code base ensures consistency, saving you much time and possible heartache by mitigating risks of errors, misconfigurations, downtime etc. Change management too becomes much simpler — you can write code to update your infra, and have complete version control.

This is especially impactful for containerized applications on the cloud.

  • Containerization and microservices launch hundreds of small applications, instead of the handful of large instances used in the previous development paradigm. At this scale, manual provisioning would introduce time lags into the development journey, affecting agility significantly.
  • In multi-cloud deployments, repeatability across the hundreds or thousands of applications is crucial for delivering a consistent customer experience.
  • The money mechanics of cloud make it prudent to up-scale and down-scale infra dynamically based on needs — something that would be nearly impossible to manage manually at that scale.

With infrastructure as code, cloud native applications can have consistent, reliable and version-controlled infrastructure at scale. But IaC alone doesn’t offer the best application lifecycle management experience. The platform still needs to be provisioned and managed by the Ops teams. And because IaC is implemented by writing abstractions as wrappers over the infra layer’s APIs, developers would need a new CLI for each abstraction.

For a smoother developer experience, IaC wasn’t enough. We needed platform as code.

Platform as code

Platform as code (PaC) is a similar abstraction of the platform layer. PaC allows you to express the platform layer — including the OS and other tools needed for development and operations of your application — as declarative code and execute it.

In essence, PaC allows the developers to define their own platform: that is, have a customized execution environment for your application. This can be a different environment for each application, however many there are. If Kubernetes is your platform, you’d be able to write YAML declarations for your platform elements just the way you would for your application code.

Unlike IaC, PaC is implemented through abstractions built as Kubernetes API extensions rather than wrappers over the k8s APIs. PaC abstractions thus become first-class entities, allowing developers to use kubectl and YAML to provide declarative instructions.
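
As a sketch of what such an API extension looks like, here is a hypothetical CustomResourceDefinition that makes an ‘AppStack’ platform element a first-class Kubernetes object; all names are illustrative. Once it is registered, a developer can kubectl-apply AppStack YAML just like any built-in resource, and a controller materializes the declared platform layer.

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: appstacks.platform.example.com
    spec:
      group: platform.example.com
      scope: Namespaced
      names:
        kind: AppStack
        plural: appstacks
        singular: appstack
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    runtime:            # e.g. "java11", a platform-layer choice
                      type: string
                    monitoring:
                      type: boolean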

The savings in time and energy resulting from automation go without saying. But the real value of PaC on Kubernetes is that there will be repeatability and control even as developers create custom platform stacks for their K8s cluster. It will ensure dev/prod parity for your application. All platform elements like YAML files, operator manifests etc. will be shareable. With Kubernetes Operators, you will also be able to deploy consistently on multi-cloud environments.

Platform as code as a paradigm has enabled efficient, consistent, repeatable delivery of enterprise applications at scale. It has brought Dev and Ops closer, collaborating through a common language. Most importantly, it has paved the way for the next generation of development lifecycle tools — which offer iterative development, optimized workflows, light-weight client-side tools, production-ready CI/CD pipelines and app-centric deployment automation.

HyScale and Platform9 combine offerings to accelerate Kubernetes adoption

Providing an end-to-end solution for teams migrating to K8s

March 12, 2020 - by admin | appeared on Faun

Migration to K8s (Kubernetes) involves multiple aspects: setting up a K8s cluster, management and operational concerns, deployment considerations, and moving applications to that cluster with continuous delivery, so as to derive the intended benefits of cost-efficiency, scalability, and portability. The whole project typically involves diverse pieces of effort requiring different skill-sets and a lot of time.

The complexities around K8s can be resolved by bringing together two critical pieces of the puzzle - 1) Simplification of K8s cluster setup & management, and 2) Simplification of app migration/delivery to K8s. This is exactly what the powerful combination of Platform9 and HyScale brings to the table.

For companies that want the freedom to use their existing infrastructure and the flexibility to start small and scale to production on their own terms, Platform9 has launched new, free and flexible SaaS-managed Kubernetes plans. The partnership brings together HyScale’s app-centric abstraction for deploying apps to K8s with Platform9’s capabilities to eliminate day-2 operational complexities.

In simpler words, teams can now spin up a cluster on any infrastructure in under 5 minutes and take advantage of Platform9’s SaaS management which provides automated upgrades, remote monitoring, and security patching. Teams can then leverage HyScale to automatically containerize & deploy apps to Kubernetes without K8s jargon, or writing/maintaining long K8s manifests.

The Platform9 + HyScale partnership brings forth these benefits:

  • Easy-to-setup K8s infrastructure for application deployments enabling modern continuous delivery
  • App-centric abstraction and well-defined automation for clear handoff between the dev teams and IT
  • Fully automated deployment and Day-2 operations in any environment, on-premises or public cloud
  • Single pane of glass visibility across all environments & 100% pure upstream Kubernetes
  • Standardized delivery process, thereby avoiding ad-hoc processes, tribal knowledge and repetitive learning curves across teams

This partnership is designed to be of value to enterprise teams who are tasked with providing Kubernetes to their app teams and having those teams quickly migrate workloads onto those clusters, but do not have the skills, time, or budget. The use cases range from sandbox development/testing environments, staging clusters, CI/CD and test clusters, to scale testing, feature testing of your microservices apps, production apps, etc.

HyScale is provided by Pramati Prism, Inc. Please find step-by-step instructions on how to set up Platform9 and HyScale together for a managed Kubernetes setup with app-centric delivery. Write to us at connect@hyscale.io if you have any questions.

For details on the new Platform9 Managed Kubernetes plans, read their launch blog.

Kubernetes in production: Five challenges you’re likely to face and how to approach them

March 11, 2020 - by admin | appeared on Faun

Kubernetes grew out of the technology that ran production workloads at Google for over fifteen years. Since being open-sourced, hundreds of members of the community have come together to make it better. There is no doubt that Kubernetes is a production-grade container orchestration system.

Yet, one of the most common misconceptions about Kubernetes adoption is that if it works on dev / QA / staging, it’ll work on prod. This is hardly ever true. Firing up a k8s environment and deploying your microservice to it for dev / QA / staging is simple and relatively developer-friendly. Migrating it to an enterprise-grade production environment brings with it several complexities of performance, security, interoperability, and governance.

In this blog post, we’ll discuss the differences in using Kubernetes across environments and the challenges you are likely to face.

#1 Making Kubernetes deployments work at scale on demand


One of the first concerns for IT/Ops teams while deploying to Kubernetes in production is setting up dynamic scalability. Don’t get me wrong, Kubernetes is built for scalability. There are several in-built tools that address infrastructure- and application-level scaling needs and load balancing. But enabling demand-based auto-scaling in production requires Ops teams to work harder on setting things up correctly.


  • You might have to configure a load balancer like HAProxy or NGINX if you’re deploying k8s anywhere other than a managed service like Google Kubernetes Engine.
  • You cannot afford to skip specifications like resource requests and limits (see the sketch after this list).
  • You must implement graceful pod termination to downscale safely.
  • You must design your autoscaling so that the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) don’t act on the same resource metrics.
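
As a sketch of the resource-limit and autoscaling points above, the fragment below sets CPU and memory requests/limits on a container and attaches a Horizontal Pod Autoscaler to its Deployment. It assumes the autoscaling/v2beta2 API that was current at the time of writing; names are illustrative.

    # fragment of a Deployment's pod template
    containers:
      - name: web
        image: registry.example.com/web:1.0
        resources:
          requests:
            cpu: 250m               # what the scheduler reserves for the pod
            memory: 256Mi
          limits:
            cpu: 500m               # hard ceiling for the container
            memory: 512Mi
    ---
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70    # scale out when average CPU crosses 70%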

#2 Ensuring reliability


When a developer fires up a development environment and begins to code, their primary concern is function and agility. As long as the code works, interacts with other services, and tests right, they’re happy. But in production, enterprise-grade applications, and the k8s pods that run them, are expected to meet significantly higher standards for performance, high availability, disaster recovery etc.

This requires your IT Ops teams to plan the architecture and k8s deployment definitions accordingly.


  • You need multi-master setups for high-availability. And then build redundancies at application and infra-level for good measure.
  • You must plan zero-downtime environment upgrades (see the sketch after this list). You must also keep patching applications and upgrading Kubernetes to its latest version, while carefully maintaining compatibility between the components and k8s.
  • You must set up a CI/CD toolchain that not only expedites releases, but also ensures their quality, without additional efforts from your DevOps teams.
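
One building block for zero-downtime operations is a PodDisruptionBudget combined with a rolling-update strategy, so voluntary disruptions such as node drains during upgrades never take down all replicas at once. A minimal sketch, using the policy/v1beta1 API current at the time of writing:

    apiVersion: policy/v1beta1      # policy/v1 in newer clusters
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb
    spec:
      minAvailable: 2               # never evict below two running pods
      selector:
        matchLabels:
          app: web
    ---
    # rolling-update fragment for the matching Deployment spec
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1           # replace pods one at a time
        maxSurge: 1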

#3 Providing enterprise-grade security in production


In development or staging environments, security is often not a primary concern. But security and resilience against external attacks are fundamental to the app in production. To ensure robust security, IT/Ops teams must set up processes and failsafes across the board.


  • Infrastructure design, development, and deployment processes need to keep security in mind.
  • You must control kubesprawl, which can result in a larger attack surface for your application.
  • You must have clear visibility over access control, multi-factor authentication, anonymous authentication etc.
  • All unnecessary open network connections need to be closed (a sketch follows this list).
  • You need to change custom images used in dev environments into trusted images for production.
  • You must run patches and upgrades on time.
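
For the network-connections point, a common starting posture is a default-deny NetworkPolicy per namespace, with explicit allowances layered on top. A minimal sketch:

    # deny all ingress traffic to pods in this namespace by default
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: prod
    spec:
      podSelector: {}               # selects every pod in the namespace
      policyTypes:
        - Ingress
    ---
    # then explicitly allow only what the app needs, e.g. frontend to backend
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: prod
    spec:
      podSelector:
        matchLabels:
          app: backend
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend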

#4 Enabling in-depth and end-to-end governance


For any enterprise-grade application, governance is essential — at pod, cluster, application and infrastructure levels. Unlike in dev or test environments, containers in prod need to be monitored all the time. Enabling this requires IT/Ops teams to make focussed and persistent efforts.


  • You need to set up an automated audit trail for your production deployments.
  • You must monitor infra elements like CPU, RAM etc. as well as abstractions like pods, replica sets etc.
  • Version control for configurations, policies, containers and even infrastructure is crucial.
  • You must have systems to generate reports for resource usage, utilization and saturation metrics to ensure cost management (one building block is sketched below).
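
A simple building block for usage governance is a namespace-level ResourceQuota, which caps what a team can consume and gives you a per-team number to report against. A minimal sketch; running kubectl describe resourcequota team-a-quota -n team-a then reports used versus allowed capacity.

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "20"          # total CPU the namespace may request
        requests.memory: 64Gi
        limits.cpu: "40"
        pods: "100"                 # cap on the number of pods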

#5 Bringing consistency and visibility across multi-cloud environments


Even though Kubernetes itself provides consistent environments, there can be differences across cloud vendors. Depending on how some of your services like load balancers and firewalls — which are not a native capability in open source k8s — are structured, your containers might work differently in different cloud environments.

DevOps teams must take care that these application services run effectively across multi-cloud deployments, which involves:


  • Finding a distribution that supports multi-cloud deployments. You must also make sure it accommodates the needs of each cloud platform.
  • Ensuring you configure your cluster topology not just for the needs of your application, but also for multi-cloud environments.
  • Minimizing inconsistencies among environments, often achieved through a declarative approach, for smooth CI/CD.

What we’ve presented here is hardly an exhaustive checklist; it is just a few of the many things that you must keep in mind while using k8s in production. In our experience, application teams need two things to ensure smooth deployment of apps to k8s in prod environments. Firstly, automation that makes DevOps simple, secure, reliable, manageable and consistent at scale: as your enterprise application grows in scale and complexity, you will need a platform that provides an app-centric experience for specific tasks like CI/CD, tracking, monitoring, multi-cloud visibility etc.

Secondly, and perhaps just as important if you’re only now adopting Kubernetes for production, you also need a solutioning expert, who will eliminate the bumps in your path. They will assess your readiness, identify requirements, set up your K8s operations, build your application deployment strategy and work with you on migrating your applications.

With the right platform and a capable partner, there will be nothing stopping you from taking your #app2k8s.

Delegating with code: declarative vs. imperative approach for k8s deployments

January 29, 2020 - by admin | appeared on better programming

The biggest difference between the declarative and imperative approaches to programming is the answer to the question: "Who does it serve?"

The imperative approach, which outlines how a machine needs to do something, serves the machine. It aligns itself with the operational model of the machine and tells it how it needs to do something.

For instance, let’s say you are writing a program to calculate the sum of the elements in an array. Imperatively, you’d write this:

// Imperative: spell out each step of the computation.
int sum = 0;  // start from zero, not Integer.MIN_VALUE
for (int i = 0; i < arr.length; i++) {
    sum += arr[i];
}

A declarative approach, which declares what you want, serves you, the developer. You tell the machine what you need and the rest gets done.

In this, you tell the computer the logic of computation, without detailing the control flow, or low-level constructs like loops, if-statements, assignments, and the like. Like so:

    // Declarative: assumes import java.util.stream.IntStream
    int sum = IntStream.of(arr).sum();

In essence, it's a question of the "how" vs. "what" paradigm. But this doesn't mean one is, in any way, better than the other.

Declarative programming is easier to understand in that you can read the code and make sense of it. It is succinct, reusable, and enables referential transparency.

But declarative code can be slower, taking longer for the process to finish. In mission-critical, real-time applications like oil rig monitoring or radar processing, this can be too late to be useful.

The imperative approach, on the other hand, gives you more control. But, it is not very reusable, given that it’ll be closely tied to the context it is written for.

As a result, it's difficult to scale with imperative code. Depending on how focussed and cued-in the programmer was, this approach is also prone to errors.

Based on the need, programming language, and the comfort-levels of the developer, they might choose one or another to do their job. There is only one way to write your application — your way.

But deployment is a completely different ballgame. Deployment to Kubernetes, more so.

Declarative Approach to Deployment Automation

In the containerized world, a developer needs to be more than just a developer. In addition to programming, they need to understand the infrastructure their code is going to run on, and make it run efficiently.

Deploying to Kubernetes forces developers to climb a steep learning curve into the complex environment that it is. And this is not the kind of learning developers seek.

A potential solution is a deployment automation tool using an app-centric model that lets you declare your requirements in your own lingo, while it generates the necessary manifests for you.

Instead of forcing you to write for the operational models of K8s, this tool must adapt to the developers’ mental model and work with that.

Sample Service Deployment to Kubernetes

Here's everything you need to deploy a simple service to K8s, with a declarative approach, using an automation tool.

Give your service a name

It'll act as the DNS name for service discovery and communication with other services.

Point it to service images

  • Pull from the image registry.
  • Generate and add images from source/binaries.
  • Use existing Docker images, if any.

Set up other top-level directives

Ports, volumes, environment properties, secrets, etc.
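
Putting the steps above together, a declarative service spec for such a tool might look something like this. The format and field names are hypothetical, purely to illustrate the app-centric level of abstraction:

    # hypothetical app-centric service spec; no k8s jargon required
    name: order-service                     # doubles as the DNS name for discovery
    image:
      registry: registry.example.com        # hypothetical registry URL
      name: order-service
      tag: "1.0"
    ports:
      - "8080/tcp"
    volumes:
      - name: order-data
        path: /var/lib/orders
    propertiesFile: config/app.properties   # environment properties
    secrets:
      - DB_PASSWORD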

Deploy to Kubernetes

Deploy your spec: specify the path to it, the K8s namespace, and an identifier for your app name.

The tool will then automatically create your Kubernetes manifests, storage classes, daemon sets, replica sets, stateful-sets, PVCs, config-maps, liveness-probes, NetworkPolicy, etc.

This way, you’ll just tell the tool what you need, and it takes care of deployment, saving you a significant amount of time and energy.

This is what we aim to do with our open-source project, HyScale. We see that development teams need app-centric Kubernetes abstraction and automation for frictionless handoff between dev and ops.

With a specific focus on user experience, we built a tool that requires minimal effort from the developer, while doing all the heavy lifting under the hood. We’ve made it available for development teams — you can try it out here.

Going to the Cloud? Go Containers

January 22, 2020 - by Niranjan Ramesh | appeared on containerjournal

Deploying a modern application to the cloud without errors is, more often than not, a Herculean task. To undertake that journey one release cycle after another, and reach your goal every single time, you need something that is high-performing, dependable and consistently successful. While VMs continue to be the poster child, containers and microVMs are looking to lead the coming era.

The Ever-So-Reliable Virtual Machines

A virtual machine, as VMware famously defined, is a software computer. Abstracting the hardware, VMs give you the computing power and environment your applications need while optimizing the underlying physical computer or a server to run multiple VMs at the same time.

For enterprises moving from on-prem data centers to the cloud, this brought previously unseen advantages: centralized network management; optimized server usage; multiple OS environments on the same machine, yet isolated from one another; consolidation of applications into a single system; saving money; optimizing DR; and more.

For new adopters of cloud, such as enterprises with deep-rooted legacy systems or monolithic applications, VMs offered significant benefits. But, as the years passed, it was clear that VMs aren’t always ideal. Each VM having its own OS made them larger and slower to boot and added to their RAM and CPU cycles.

Meet the Lightweight, Fast, Dependable Containers

Containers—orchestrated by Kubernetes, Docker Swarm and their ilk—abstract the OS, providing a way to run applications on multiple isolated systems, while sharing an OS, often binaries and libraries too. This makes containers lightweight; they are often only megabytes in size and take a few milliseconds to start. In fact, you might be able to put twice or thrice as many applications on a single server with containers, as compared to VMs.

Much like VMs abstracted hardware, containers abstract software. Much like VMs took away the burden of server management, containers significantly reduce software overheads—bug fixing, patch updates, etc.—as they need to happen for one OS instance rather than for each instance in case of VMs. In fact, today, Kubernetes-orchestrated containers often run on top of VM-based infrastructure, as much of enterprise IT is still VM-based.

But, more recently, among progressive application engineering teams, containers have become the most preferred way to deploy applications in a multi-cloud environment. Especially for microservices-based applications, containers had distinct advantages over virtual machines across cost, efficiency, flexibility and speed of execution. Containers also made possible the ability to create an efficient and portable environment for development, testing and deployment.

Yet, they lack the unassailable security of VMs. Containers have had to bear the consequences of process-level isolation, unlike boundaries of hardware virtualization that VMs had. I don’t mean to imply that containers are not secure—they have container-level, cluster-level and kernel-level security.

Meet the MicroVM: Looks Like a Container, Acts Like a VM

MicroVMs are hardware-isolated lightweight virtual machines with their own mini-kernel. They offer security from hardware virtualization as with VMs, with the agility of containers. The main difference between containers as we know them today, and microVMs is that the latter offer hardware-backed isolation within a Kubernetes container pod.

MicroVMs automatically hardware-isolate vulnerable/untrustworthy tasks to protect the rest of your environment. They are isolated from both other microVMs and the operating system—making sure any attack is contained within the microVM and does not affect any other part of the application. Even in attacks that surpass host and network-based security—as sophisticated attackers of today are often able to do—microVMs make sure that the endpoints are secure. By the same token, microVMs can also protect sensitive applications and prevent data loss by only providing as much access to other systems or data as necessary. So, you can run both trusted and untrusted tasks in a single system without the worry of the latter destroying the former.

Yet, microVMs are unlike traditional VMs in that they are not full machines but “just enough” machines. They leverage the hardware virtualization of VMs within the context of application containers. They only access a small part of OS resources and other processes, ensuring there is no loss in speed and performance as a result of increased security.

Even though Bromium started the conversation around microVMs in 2012, it’s only this year that their momentum has picked up. Tools such as AWS Firecracker and Google’s gVisor have slowly joined the enterprise application engineer’s toolkit, yet microVMs are still unorthodox—showing great potential, yet untested.

Find Your Sweet Spot

The cloud, until very recently, was dominated by virtual machines. As more and more applications are deployed to VMs, their shortcomings become apparent. But jumping ahead to the microVM is too much of a risk. The microVM is a maverick: untested, untrusted, with a long way to go before mainstream acceptance.

Containers are your sweet spot. They’re significantly ahead of the traditional VM. They’ve found acceptance from the who’s who of tech—Netflix, Airbnb and the like swear by Kubernetes. Cloud providers are stepping on each other’s toes to make containerized deployment efficient, to say nothing of the dozens of advanced tools available in the market!

In 2020, if you’re not on the container bandwagon, you’re already well behind.

What Do Developers Want?

January 22, 2020 - by admin | appeared on better-programming

Enhancing developer experiences with Kubernetes

More often than not, developers just want to write clean, secure, maintainable code for their applications. No developer ever said, “the best part about my job is writing the k8s manifest for deployment”. In fact, you’ve heard the opposite. Developers often complain about time lost in deployment — writing stateful-sets, PVCs, config-maps, and other things. Developer experience concerns around Kubernetes are not new. In a recent New Stack survey, over 17% of the community said that developer experience should be the top area that the core Kubernetes project needs to address.

But what do developers want?

Independence: Self-Service Deployment

If there’s anything that bugs a developer, especially one in an agile team, it’s waiting for a DevOps engineer to provision the infra and set up the environment. Having to wait for this is not just a hit on productivity, but also on overall motivation and effectiveness. Let’s say a developer is debugging something and wants to test in a different environment. If the DevOps team has an SLA of two days to enable that, they’re unlikely to persist. They might try to debug within the resources available to them. This can be constraining.
Developers want to quickly and independently deploy to any k8s cluster on any cloud for dev, test, or delivery purposes, without back-and-forth with the delivery team.

Speed: Idea to Code to Outcome

DevOps tends to conjure up an image of the development and operations halves of the cycle as not just equal, but also symmetrical — remember the infinity visual of the DevOps cycle? The reality is a little more complex. Application teams want as little time and energy spent on deployment and operations as possible. They want to quickly propagate app changes and focus on understanding customer feedback, planning feature improvements, writing better code and optimising application performance. Time spent by a developer learning the peculiarities of Kubernetes is time snatched away from focusing on customer-centric application development.

As a result, developers want to speed up the journey from idea, to code, to outcome. They want to declare their end-game and have the process be taken care of.

Accuracy: Eliminating Errors and Version Problems

Idiosyncrasies of delivery processes are directly proportional to the size of the team and the freedom they have. A large enterprise might have guide rails for deployment practices, but smaller teams at startups may not. Either way, a developer writing a custom script for deployment is bound to be inconsistent with the next person. This is most likely to result in go-live errors, version mismatches, and delivery delays.

Developers don’t want to write custom scripts either. Give them a tool that standardizes it and they’ll be glad.

Performance: With CI/CD

A robust CI/CD pipeline goes a long way to making a developer more productive. It automates testing for each codebase, identifies errors quickly, prevents code from breaking, enables you to ship quickly, and so much more. Without a good CI/CD pipeline, development teams will be waiting in queue for deployment, investigation, and bug-fixing. They’ll spend time monitoring the various versions of the system, maintaining a log of code changes and so on.

Developers want a flexible yet robust CI/CD pipeline, one that also integrates well with other Kubernetes tools for deployment optimization.

Flexibility: Developing and Testing Locally

Developers like to develop and test locally. To test their code, they might need to call other microservices, which may be on local or remote clusters. This can get cumbersome — for example, when there are myriad dependencies across microservices and applications.

Developers need a system that can integrate local development and testing with remote Kubernetes clusters without hassle.

Experimentation: Production-Grade Beta Testing

For every new application version/feature, customer validation is extremely important. Developers want more than what beta testing can offer — they want their code tested in a production environment, by real customers, without the whole system breaking if something isn’t right.

Developers want the ability to seamlessly perform canary deployments.
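
In Kubernetes terms, one simple way to approximate this is to run a small canary Deployment alongside the stable one, behind a Service whose selector matches both, so a fraction of real traffic reaches the new version. A minimal sketch with hypothetical names:

    # canary deployment: one replica of the release candidate
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-canary
    spec:
      replicas: 1                   # vs., say, 9 stable replicas: roughly 10% of traffic
      selector:
        matchLabels:
          app: web
          track: canary
      template:
        metadata:
          labels:
            app: web
            track: canary
        spec:
          containers:
            - name: web
              image: registry.example.com/web:2.0-rc1    # hypothetical RC image
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web                    # matches both stable and canary pods
      ports:
        - port: 80
          targetPort: 8080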

Visibility: Single-Pane View of Changes

Multiple developers making changes to microservices in any application is rather common. This is made more complex when the same application is deployed across multiple clouds. Developers fear their code breaking or their feature not working as expected because of this.

Therefore, developers want visibility into the changes made across microservices — so that updates are clean and accountable.

Leveraging Tools for Enhancing the Developer Experience

Even if these look like a rather unattainable wish list, in the end, developers just want to write code that works. They want automation that can take care of platform issues. They want tools that can integrate effortlessly with their existing systems. They want to keep track of the work they’ve done and the progress they’ve made.

Working directly with Kubernetes can seem convoluted and daunting, but there are several products on the market that can help you. You might find one product for each of your wants, or one that can handle them all. For example, we’ve been working on an open-source app delivery tool that can generate Dockerfiles, build Docker images, generate Kubernetes manifests and deploy to any k8s cluster — significantly reducing the manual work developers have to do.

It’s widely noted that the k8s ecosystem isn’t the most user-friendly. Adam Jacob of Chef called it “hot garbage, from a user experience perspective”. But it doesn’t have to be. If you know what you want, there will definitely be a tool for it!

More than 'just a developer': Kubernetes deployment challenges and choosing the tools to address them

November 28, 2019 - by admin | appeared on faun

The journey from app2k8s

In the containerized world, a developer needs to be more than ‘just a developer’. In order for the cloud-native application to run smoothly and efficiently, DevOps needs to function right — which often means that developers will need to do more than just programming. They need to understand the infra and make the code work on that infra — for instance, in addition to code-level dependencies, they now need to worry about environment-level and system-level dependencies too as part of their day job.

The uphill task of upskilling

Developers — especially seasoned ones — often find this the most challenging part of adopting Kubernetes. Before Kubernetes, a good developer needed to know application-level coding and how to package applications. But a developer today needs to understand k8s architecture, ReplicaSet, ConfigMaps; and be able to build a container image, do rolling upgrades, monitor the application in k8s, etc. Adopting Kubernetes necessitates a developer to climb a rather steep learning curve.

Even when developers are happy to learn, they are under tremendous pressure to do so quicker — so as to meet the ever-shortening deployment cycles. Add the complexities of k8s to the mix and it becomes a significant investment of time and effort, all while chasing a release deadline.

How to YAML?

Across environments and resources, accounting for storage classes, secrets, deployments, ingress, network policies, admission controls, and more, a medium-sized microservices-based app would need thousands of lines of k8s manifest files to be written manually by the developer. For every new deployment, there might be a need to edit/modify/update these files.

As the application and deployment cycles grow, it becomes proportionately complex to manage these YAML files and monitor them in motion.
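
To see why the volume adds up, consider that even a single stateless service typically needs at least a Deployment and a Service manifest along these lines (a minimal sketch with hypothetical names); multiply that by dozens of services, environments and resource types and the YAML burden becomes clear.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cart-service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: cart-service
      template:
        metadata:
          labels:
            app: cart-service
        spec:
          containers:
            - name: cart-service
              image: registry.example.com/cart-service:1.4.2
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: cart-service
    spec:
      selector:
        app: cart-service
      ports:
        - port: 80
          targetPort: 8080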

Adapting to the new development culture

Containerization isn’t a minor upgrade on traditional ways of application delivery; it is a new way of doing things. So, along with it comes the uncertainty of how the new way works, how it maps to the old way, and how to transition seamlessly. For example, the developer needs to get into the habit of building on the platform, instead of building locally. Like I said earlier, developers will have to take responsibility not just for their code, but for the code to run as expected in production — which involves infra elements.

On the other hand, even minor changes to a configuration can have ripple effects across containers. For instance, if you absent-mindedly push an auto-upgrading k8s version setting (as in your dev config) to production, it can impact several nodes. As a result, ops teams tend to be wary of giving developers unfettered access to the cluster. This often ends up being a pointless tug-of-war.

Pandora’s box of tools

And then there is the story of Pandora’s box of tools. The rapid evolution of containerization technologies has brought a wide range of tools to market. Image-writing tools, portability tools, debug servers, environment emulation tools, automation tools, replication tools — there are hundreds of tools in the market. Ironically, this chokes developers before they even start!

Even with tools, developers find themselves ill-equipped. For example, developer CI/CD tools such as Jenkins, Travis CI, Bamboo, etc. aren’t integrated with the container runtime environment, forcing developers to write scripts to make them work together.

For developers who are just getting acquainted with the world of containerization and Kubernetes, this sprawling set of tools can be overwhelming, preventing them from even beginning to try. And for rookie developers, distinguishing a Kubernetes feature from a community project contributed by another developer might also be difficult. Mistaking an immature community project for a battle-tested k8s feature can have demotivating consequences.

But Kubernetes, which might look like a behemoth of new things to an uninitiated developer, is, in fact, a gold mine of opportunity for increasing developer efficiency — 66% of those surveyed by Forrester reported increased levels of efficiency. As I see container adoption among enterprises grow, I believe k8s will be a key skill for the future that developers can’t ignore. The balance is in knowing what needs to be learned and what needs to be automated.

Understanding the foundation — such as the fundamentals of containerization, the philosophy, and structure of Kubernetes, basic concepts that shape application delivery — will make you, dear developer, a force to be reckoned with.

For everything else, choose a tool.

  • Find a tool that is aimed at the developer — especially to minimize the learning curve.
  • Find one that can integrate with your existing CI/CD pipeline, which will keep you in a familiar environment, even as your delivery process changes.
  • Find something that can automate non-core developer activities like k8s manifest or Docker image generation, and let you focus on writing your code.
  • Look for one that enables self-service; you need an environment where you can learn without having to depend on your IT teams or raise IT tickets.

Ideally, look for one that can do all this.

The Long and Winding Road to Kubernetes

November 16, 2019 - by Spruha Pandya | appeared on HackerNoon

A look at why enterprises are slow in moving to Kubernetes.

With the advent of the microservices architecture, containers have emerged as the tool of choice for building and deploying applications. Enterprises are rapidly adopting containers, as these enable them to scale on demand. But when it comes to deploying applications in bulk, container orchestration and management becomes a tough nut to crack. Enterprises are increasingly embracing a cloud-first strategy for their applications, and for all these requirements, Kubernetes seems to be the go-to solution. It has become the de facto standard for enabling enterprises to build massive digital architectures that successfully deliver business goals. Yet, the adoption rates of Kubernetes are not as high as one would expect: only 40% of companies have completely moved their applications to Kubernetes.

The reason that Kubernetes is successful is because people look at it and they don’t understand why they need it until they see it do stuff. Then they say “Oh my God, I need that!”
– Tim Hockin

One of the reasons for this gap in adoption of Kubernetes could be the amount of complexity that it brings to the table. Most organizations lack the expertise required to run and manage multiple Kubernetes clusters. While they acknowledge the benefits that Kubernetes has to offer, deploying large systems at scale in the Kubernetes cloud introduces a lot of inconsistencies due to the varied interfaces and different topologies of multiple cloud providers.

Developers and enterprise IT engineers have to learn the nuances of the cloud and deal with all its dependencies. As a result of such a steep learning curve, obstruction of the workflow due to complexities becomes common, developer productivity decreases, and cost increases.

There are multiple factors that deter an enterprise from moving to Kubernetes and here are some factors faced by organizations across all business units:

  • The complete change in approach: Kubernetes provides new abstractions and mechanisms for a lot of infra-specific stuff that developers previously didn’t have much visibility into. The real shift, though, is that pre-k8s, developers told servers how to run their applications, whereas post-k8s they declare what they want to run and let the platform work out the how. For Ops teams, complex deployment systems aren’t totally new; they have long built systems wiring up Puppet/Chef/Salt/Ansible, AWS, Terraform, etc. But with Kubernetes, they face a complexity that they did not organically build themselves. That’s tough and takes time.
  • The initial inertia: The IT operations of any enterprise do not shift direction overnight. When it comes to Kubernetes adoption, there is a certain amount of inertia that developers and IT leaders face due to the drastic change in direction, the increase in budgets and the impending possibility of major downtime. Digital transformation initiatives are generally implemented and driven by developers and IT leaders, whose aim is to take advantage of the next big shift in enterprise IT and thereby position their organizations in the market. All these initiatives need to happen without disrupting the rest of the organization’s activities. It takes a lot of effort on the part of app development and IT ops leaders to convince the organization that the benefits of Kubernetes are worth the trouble.
  • DevOps or silos? Because of the steep learning curve, developers and IT operations teams stay within their zones of expertise, with their roles and tasks clearly defined. The goal of the development team is agility in developing and testing new code and apps, along with generating all the YAML files required for the applications to run on Kubernetes. Meanwhile, IT Ops is supposed to develop all the necessary supporting artefacts needed for deployment to Kubernetes while also enabling reliability and scalability, enhancing performance, and ensuring security. With both teams having a lot on their plates and limited expertise in the arena, the DevOps system set in place crumbles and, before anyone realises, turns into silos. Silos end up creating a lot more confusion, slowing down the workflow and eventually bringing it to a standstill.
  • Distraction from coding: Even when an organization has all the required skills and expertise for moving its applications to the Kubernetes cloud, doing it manually at scale means a huge workload. Deploying to multiple Kubernetes clusters requires the creation of YAML files and all the other supporting artefacts required for the integration. Doing this at scale for large applications, and then scaling, managing and upgrading all the clusters, is a massive challenge. In short, a lot of time and effort goes into working with Kubernetes cloud clusters, leaving no time for improving the core functionality of the application.
  • Too many alternatives: Given the challenges of deploying applications to Kubernetes in-house, most organizations consider managed Kubernetes clusters as a platform for deploying containers, for their flexibility and cost savings. All Kubernetes cloud providers paint themselves as the experts of the field. But each one has a different way of implementation and a different infrastructure, with a limited amount of flexibility. With too many variables to consider and too many options in the market, there is no definite way to choose the most suitable one, as each organization is unique and requires an app infrastructure that can complement its business goals.

Towards a road without obstacles

Though there are ways to effectively run multiple app clusters without Kubernetes, deploying to Kubernetes is the preferred way as it makes the container operational aspects simpler. To overcome all the challenges discussed above, organizations need to take a methodical approach and implement best practices to ensure smooth maintainability as well as seamless developer interaction. To speed up the transition to Kubernetes, it is imperative to have a well-knit strategy.

Enterprises today need to smartly choose which skills need to be developed and which ones need to be automated or outsourced. One can either look for a suitable Kubernetes cloud service provider or a platform that checks all of the organization’s requirement boxes, or set in place a self-service DevOps system that can enable dev and IT Ops teams to work in perfect sync.

DevOps Before and After Kubernetes

November 15, 2019 - by Anoop Balakuntalam | appeared on thenewstack

As Kubernetes turns five, we explore the changing face of DevOps in the K8s world.

DevOps Inception and Shortcomings

In the pre-Kubernetes era, infrastructure and app development were inescapably intertwined. As complexities grew and teams evolved, we saw “DevOps” emerge as a bridge between development and operations in an attempt to resolve the age-old delivery trouble arising from developers throwing things over the wall to ops and then ops having to deal with production issues on the other side. DevOps rose to be a new subculture within existing teams, sometimes yielding new “DevOps teams” and even leading to a new class of tools and methodologies.

In reality, though, bridging skills and evolving an operational culture were not enough. Without proper configuration management in place, issues arise in the correct functioning of an application. Application configuration would often conflict with IT’s own configuration goals for security, scalability and availability. Inevitably, every time the pager went off — except for trivial issues — both roles would get pulled in to uncover the mysteries of running software in production.

The challenge of agile delivery was that application configuration and infrastructure configuration had to be achieved collaboratively yet have well-defined ownership and clear role separation. In the absence of such a model, there were too many heads still involved in delivery, resulting in a lot of friction and spilled energies. On top of that was the classic case of ‘it-works-for-me’ syndrome, wherein developers would often complain that their software worked fine in their own configured development environments but behaved differently once pushed to an IT-configured environment. Configuration hell reigned, even as the culture of DevOps seemed to be finding its way.

Configuration-Driven DevOps: From Idempotency to Immutability

To address these challenges, the industry turned to the principle of idempotency. Most of us understand idempotency simply as “an operation that produces the same end-result no matter how many times it is performed.” In the case of configuration management, this end-result would be the desired configuration of an app’s environment. When an environment deviates from its desired configuration we could use this principle to ensure that the drift is corrected and the environment is brought back to the desired state. Tools such as Puppet and Chef were born out of this idea and seemed to be the most suitable solution for a while.

While the idempotency theory solved some issues, it came with its challenges.

We had a way of knowing when things changed and only updating things that needed to be updated. However, it did not solve all the problems as we still had to cater to a seemingly large number of cases.

This complexity became avoidable with the advent of containers. Instead of changing things in place, we could now deploy immutable fully-configured container images and simply replace older containers with new updated ones. Thus the focus shifted from idempotency to another important principle: immutability. As Wikipedia says, “an immutable object is an object whose state cannot be modified after it is created.” So once an application has been packaged into a container image along with its dependencies and configurations, any number of identical containers can be spawned from it.

Enter Kubernetes: Immutability + Infra Abstraction

With the popularity of containers becoming a DevOps game-changer, Kubernetes came to be the most sought after container orchestration platform. Application teams could now be sure that their applications packaged as containers can be deployed onto any K8s environment running anywhere, and that the application would behave the same thanks to immutability. In addition to that, Kubernetes’ excellent abstraction over the infrastructure meant that infrastructure and development teams could cleanly and separately focus on their own areas of expertise, taking away the massive configuration and collaboration challenges.

This seemed to put an end to the configuration hell mentioned earlier, as we witnessed a fundamental shift emerge with the clean separation of concerns between runtime infrastructure ops and application deployment. So IT can focus on things like cluster infrastructure, capacity management, infrastructure monitoring, cluster level disaster recovery, networking and network security, storage redundancy, etc. On the other hand, application teams can focus their energies on building container images, writing scripts (Kubernetes manifest YAML) for deployment and configuration, externalizing configuration and secrets, and so on.
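
As an example of externalizing configuration, an application team might keep environment-specific settings in a ConfigMap and inject them into the container, so the same immutable image runs unchanged everywhere. A minimal sketch with hypothetical names:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      LOG_LEVEL: info
      FEATURE_FLAGS: "checkout-v2"  # hypothetical setting
    ---
    # fragment of the container spec that consumes it
    containers:
      - name: app
        image: registry.example.com/app:3.2.0
        envFrom:
          - configMapRef:
              name: app-config      # every key becomes an environment variable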

Application teams no longer need to go back and forth between different skill sets, seeking information and coordinating with different teams to get the job done, nor waiting for hours/days to have tickets addressed. Suddenly, there are ways to eliminate friction and lift the weight of collaboration. The actual infrastructure doesn’t matter so much anymore for delivery as it got nicely abstracted by K8s.

The Road Ahead for DevOps

With all the abstraction and clean separation that K8s brings, it also adds another layer on top of the VMs/machines on which it runs. This means additional overhead for IT with regard to cluster management, networking, storage, etc. In recent times, there has been a lot of industry focus on how to simplify K8s setup and management for enterprise teams to access all the intended benefits.

For an app team, containerizing a typical medium-sized, microservices-based app would require several thousands of lines of K8s manifest files to be written and managed. Each new deployment would need a rebuild of container images and potential modifications of several manifest files. Clearly, DevOps in today’s world will be different from DevOps in the pre-Kubernetes era.

These new-world DevOps teams may do well with an automation process for delivery to Kubernetes so that efficiency gains and economic benefits can be realized sooner while also maintaining reliability and speed. Such automation along with a standardized process will further enable a clean hand-off interface between the IT teams managing the infrastructure and the app teams delivering apps to K8s. For enterprises pursuing agility and frictionless delivery at scale, finding the shortest path to Kubernetes will be at the heart of DevOps in times to come.