Pipeline PaaS - the first release
Banzai Pipeline, or simply Pipeline, is a surf reef break located in Hawaii, on Oahu's North Shore. It is the most famous and infamous reef on the planet, and serves as the benchmark by which all other waves are measured.
Pipeline is a PaaS with a built-in CI/CD engine to deploy cloud native microservices to a public cloud or on-premise. It simplifies and abstracts all the details of provisioning cloud infrastructure, installing or reusing a Kubernetes cluster, and deploying applications.
Note: The Pipeline CI/CD module mentioned in this post is outdated and not available anymore. You can integrate Pipeline to your CI/CD solution using the Pipeline API. Contact us for details.
Today we're extremely pleased to announce the first version of Pipeline, 0.1.0 - with end to end support for deploying and monitoring cloud native apps from a GitHub commit hook to the cloud in minutes, using a fully customizable CI/CD workflow. We built this version in less than six weeks. We're pushing it out now in order to demonstrate the benefits and productivity of the platform, the simplicity of using default spotguides, and, more importantly, to shed some light on how it works and what to expect from us in the near future.
The core part of the Pipeline PaaS is its control plane: a collection of services that connect to a custom GitHub repository, manage the lifecycle of the application, the Kubernetes cluster and the underlying cloud infrastructure, and have a deep understanding of application types.
As seen above, the control plane is relatively complex - putting aside the core Pipeline services, monitoring, alerting, dashboards and a full CI/CD system are all running. These are complementary services that you can opt out of. However, they are typical cloud native side projects, routinely included in Kubernetes clusters, and are configured on demand in accordance with the type/spotguide of the application or microservices being deployed.
In this post we'd like to introduce you to the suite of services offered by this very early release, and give you some details on how it works. Posts that elaborate on the various supported spotguides will be forthcoming in the next few days.
Cloud provider support
This release focuses exclusively on AWS. We're working hard to merge our provider specialisation PR into Pipeline, which will open the way for us to add support for new cloud providers. Some additional providers are already in our pipeline: notably, Microsoft Azure AKS. This PR adds support for AKS as an example cloud abstraction; we're putting our recently released open source Microsoft AKS client to good use, since AKS still lacks swagger or language bindings.
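To give a feel for what the cloud abstraction in the provider specialisation PR enables, here is a minimal sketch in Go of a provider-independent cluster interface with AWS and AKS implementations. The interface and type names below are illustrative, not Pipeline's actual API; the real implementations would drive CloudFormation and the Azure AKS REST client respectively.

```go
package main

import "fmt"

// CommonCluster sketches a provider-independent cluster abstraction.
// These names are hypothetical, not Pipeline's actual interface.
type CommonCluster interface {
	CreateCluster() error
	DeleteCluster() error
	GetName() string
}

// awsCluster would drive CloudFormation under the hood; stubbed here.
type awsCluster struct{ name string }

func (c *awsCluster) CreateCluster() error { return nil }
func (c *awsCluster) DeleteCluster() error { return nil }
func (c *awsCluster) GetName() string      { return c.name }

// aksCluster would call the Azure AKS REST API via an AKS client; stubbed here.
type aksCluster struct{ name string }

func (c *aksCluster) CreateCluster() error { return nil }
func (c *aksCluster) DeleteCluster() error { return nil }
func (c *aksCluster) GetName() string      { return c.name }

// provisionAll creates every cluster and returns the names that succeeded;
// the caller never needs to know which cloud provider backs each cluster.
func provisionAll(clusters []CommonCluster) []string {
	var ok []string
	for _, c := range clusters {
		if err := c.CreateCluster(); err != nil {
			continue
		}
		ok = append(ok, c.GetName())
	}
	return ok
}

func main() {
	names := provisionAll([]CommonCluster{
		&awsCluster{name: "demo-eks"},
		&aksCluster{name: "demo-aks"},
	})
	fmt.Println(names) // [demo-eks demo-aks]
}
```

Adding a new provider then amounts to implementing the same small interface, which is why the PR opens the way for AKS and others.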
Please find below a list of the services that run on the control plane. Note that this is not an exhaustive list - there are other low level infrastructure services running inside k8s that Pipeline speaks to, like Tiller or Rudder, and other beta services like aggregated log collectors, Helm chart orchestrators, namespace operators and k8s watchers/informers.
| Feature | Technology | Scope | Notes |
| --- | --- | --- | --- |
| Monitoring | Prometheus | Full vertical and horizontal stack monitoring for AWS, K8S, and deployed microservices | Default node exporters and push gateways are deployed as needed |
| Alerting | Prometheus | Default alerts for infra and apps, customizable node exporters and push gateways | Correlated alerts across the stack, collected for model build |
| Dashboards | Grafana | AWS, K8S, app deployment specific dashboards | Dashboards are based on |
| CI/CD | Drone | AWS, K8S, microservices | Vendor independent; can be used with CircleCI or Travis |
| Pipeline plugins | Golang, Docker | AWS, K8S, Spark, Zeppelin, Kafka, Java, Golang | Extensible; custom plugins are built in |
| WebHooks | GitHub | Default GitHub web hooks | GitHub Marketplace placement and GitLab support are forthcoming |
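As an illustration of the kind of default infra alert the monitoring stack can ship with, here is a hedged sketch of a Prometheus alerting rule for node memory pressure. The rule, threshold and metric names are examples only (node exporter metric names vary by version), not the exact rules Pipeline deploys:

```yaml
groups:
  - name: node.rules
    rules:
      - alert: HighNodeMemoryUsage
        # fires when less than 10% of node memory is available for 5 minutes
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} memory usage is above 90%"
```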
Pipeline PaaS release components - DIY, be your own PaaS vendor
Pipeline PaaS is built on several components that we've been developing over the past few weeks. Only AWS-related building blocks are part of this release, but as you can see from our list it still involves a lot of moving pieces. They are assembled, built and released together, using our own CI/CD workflow and cloud specific automation (like CloudFormation). Once we release, for example, Microsoft Azure AKS support, that same level of automation (like ARM templates for Azure) will be part of the release.
Pipeline PaaS release - the hosted version
Here, we have demonstrated the complexity and number of building blocks necessary to host your own PaaS and become your own microservices provider, in order to better illustrate the level of work, maintenance, HA deployments and Kubernetes expertise required to run these components.
We are moving towards a hosted service that will host, deploy, maintain, patch and support all these services, with the end goal that our users should never know, be exposed to, or care about the underlying systems. At the end of the day your focus should be on writing applications, while Pipeline ensures they are built, deployed and automatically operated according to your company's SLA rules and configured CI/CD workflow.
Once a control plane is up and running (please check the following installation guide), Pipeline provides a fully customizable CI/CD workflow, driven by a .pipeline.yml file placed under source control alongside your project. This is similar to commercial CI systems like CircleCI or Travis; however, instead of using those systems, the platform uses Pipeline's own CI/CD system, with plugins for workflow steps such as:
- clone - clones a GitHub repository inside a k8s cluster (uses PVC)
- remote_checkout - checks out a GitHub repository inside a k8s cluster (uses PVC)
- cluster - provisions, updates, reuses or deletes cloud infrastructure and a Kubernetes cluster, and deploys the runtime required for the spotguide (e.g. if it's Spark, it understands prerequisites like RSS and executors running as k8s daemon sets)
- remote_build - builds an application inside a k8s cluster, based on the spotguide
- run - runs an application on the runtime required by the spotguide
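Putting the steps above together, a .pipeline.yml might be sketched as follows. This is an illustrative example only - the field names and values here are assumptions for the sake of the sketch; consult the installation guide for the actual schema:

```yaml
pipeline:
  # provision (or reuse) the cloud infrastructure and Kubernetes cluster
  cluster:
    provider: aws
    node_count: 3
    spotguide: spark

  # check out the application source inside the cluster (uses a PVC)
  remote_checkout:
    repo: github.com/example/spark-wordcount

  # build the application in-cluster, as the spotguide requires
  remote_build:
    commands:
      - mvn -B package

  # run the application on the spotguide's runtime
  run:
    class: com.example.WordCount
```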
A typical flow in the UI, which deploys the custom workflow described in .pipeline.yml, looks like this.
This is an early release that we're handing over to a few beta testers across different industries, with an eye toward receiving feedback and allowing them to drive the direction and features of Pipeline's next generation. We don't recommend deploying production use cases to Pipeline yet - not because the system is too unstable to run the supported spotguides, but because the current release falls far short of what we at Banzai Cloud believe a next generation PaaS should look like. In the coming months we'll add support for more cloud and managed k8s providers, deliver an end to end security model integrated with Kubernetes, and introduce a real service mesh to support throttling, canary releases and policy driven ops, among many other features - all available through the current REST API, UI and CLI. We're also doing extensive work on Hollowtrees, with custom plugins for Pipeline spotguides and SLA policy-driven Kubernetes schedulers that support the resource management needs of your microservices.
I'd like to take this opportunity to thank all of my colleagues for the hard work and dedication they showed in the past few weeks and add that we look forward to supporting you in trying out Pipeline.
Building a PaaS is a challenging yet rewarding engineering experience, which we are all excited about. We encourage you to take part in this open source engagement, and we'll continue to provide detailed information in our blog in order to share the experience, allowing you to join in and even to drive the process of creation.
About Banzai Cloud Pipeline
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.