Use Containers to Take Your DevOps Pipeline to the Next Level

Almost all of the businesses we talk to have made significant strides toward DevOps maturity in recent years, but most still face the same obstruction that slows their DevOps pipeline to a trickle (at least compared to what it could be): the constraint of shared environments. Some development teams still share environments provisioned on bare metal, which means getting in line and waiting your turn to deploy a build for integration, functional, and performance testing. Others have moved to virtualization, but remain hamstrung by limited compute and storage resources, or by provisioning policies that require human approval and setup. These constraints cause development teams to give up on the dream of a true continuous integration and continuous deployment (CI/CD) pipeline.

Implementing a solution like the one outlined below can take your DevOps pipeline to the next level. Imagine going from requirements to production code in days or weeks instead of months. That’s within your grasp now, and everyone can do it. We’d like to show you how.

Docker

The rising popularity of containers and the introduction of Docker for standardizing and managing them have created a revolutionary new tool in the DevOps tool chest, one that gives you the ability to:

  1. Reduce the resources required to run your applications so you can get more use out of existing compute resources (on-prem or cloud)
  2. Simplify the management of these resources so self-provisioning becomes easy
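To make the first point concrete, here is a minimal sketch of what packaging an application as a container image looks like. The Dockerfile below assumes a hypothetical Python web service; the file names, base image, and port are illustrative placeholders, not details from any particular project:

```dockerfile
# Hypothetical Dockerfile for a small Python web service.
# Start from a slim base image to keep the footprint small.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# A container runs a single process; resource limits are applied at run time.
CMD ["python", "app.py"]
```

At run time you can cap each container's footprint explicitly, which is what lets you pack many workloads onto the same host, for example: `docker build -t myapp . && docker run -d --memory 256m --cpus 0.5 myapp`.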

It turns out that marrying these new capabilities with a couple of existing technologies (GitFlow and CI/CD tools) gives development teams the ability to do something they’ve been dreaming about for years: create simple workflows that make it easy to build and deploy changes made to multiple code branches, and to do it all automatically.

GitFlow

The first of these existing technologies is a popular code branching and merging strategy called GitFlow. If you aren't familiar with it, it's well worth reading up on. In a nutshell, it uses two permanent branches ("develop" and "master") and a set of short-lived branches to manage features, hotfixes, and releases. It's simple, yet powerful.
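The feature-branch side of the model can be walked through with plain git commands. The sketch below runs in a throwaway repository; the branch and commit names are illustrative, not prescribed by GitFlow itself:

```shell
# Minimal GitFlow feature-branch walkthrough in a throwaway repository.
set -e
export GIT_AUTHOR_NAME=Demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=Demo GIT_COMMITTER_EMAIL=demo@example.com

repo=$(mktemp -d)
cd "$repo"
git init -q
git commit -q --allow-empty -m "initial release"
git branch -M master            # permanent production branch

git branch develop              # permanent integration branch

# Cut a short-lived feature branch from develop.
git checkout -q -b feature/login develop
git commit -q --allow-empty -m "add login form"

# Finish the feature: merge back into develop, then delete the branch.
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -d feature/login
```

Release and hotfix branches follow the same pattern: cut from a permanent branch, do the work, merge back, delete. The short-lived branches are exactly what the per-branch build-and-deploy workflow described below keys off of.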

CI/CD Tools

The second category of technologies is CI/CD orchestration tools, such as Jenkins. These tools let you define a sequence of activities (typically build, test, publish, and deploy) and automate it, using plugins to support a variety of source code, testing, and application platforms.
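As an illustration, the build/test/publish/deploy sequence maps naturally onto a declarative Jenkins pipeline. The sketch below is hypothetical: the registry name, test command, and deploy script are placeholders you would replace with your own:

```groovy
// Hypothetical declarative Jenkinsfile; the shell steps are placeholders.
pipeline {
    agent any
    stages {
        // Build an image tagged with the commit that triggered the run.
        stage('Build')   { steps { sh 'docker build -t registry.example.com/myapp:${GIT_COMMIT} .' } }
        // Run the test suite inside the freshly built image.
        stage('Test')    { steps { sh 'docker run --rm registry.example.com/myapp:${GIT_COMMIT} pytest' } }
        // Push the tested image to the registry.
        stage('Publish') { steps { sh 'docker push registry.example.com/myapp:${GIT_COMMIT}' } }
        // Roll the image out to the target environment.
        stage('Deploy')  { steps { sh './deploy.sh ${GIT_COMMIT}' } }
    }
}
```

Pointing a multibranch pipeline at the repository makes Jenkins run this sequence for every GitFlow branch automatically, which is what turns delivered code into a deployed environment in minutes.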

By combining these three technologies, development and operations teams can build and deploy applications automatically — usually in seconds or minutes — whenever code is delivered to the source code management system.

Visit our DevOps Solutions webpage for information about how Sirius can help you implement a DevOps strategy. Or to see a live demo of these technologies in action and discuss how you can begin adopting them, contact us.

May 17th, 2017 | Blog

About the Author:

Mark Speich is a Business Solution Architect at Sirius.