Today we are happy to announce the first major release of Backyards, Banzai Cloud’s automated service mesh product.
Before jumping into the details, we need to answer an obvious question:
Why another service mesh product?
The service mesh has become one of the most hyped technologies lately, and it may seem that a new product is released every other week. Some of these have received serious criticism in the Kubernetes community, and in some cases it really is hard to decide whether they only exist because of the hype, or whether there's a real vision and need behind them.
In general, it seems clear that the service mesh is here to stay. The concept is good and clear: move everything related to service-to-service communication into a dedicated infrastructure layer. Application code stays clean, while observability, security and traffic control are unified and can be managed from a dedicated control plane. Who wouldn't want that?
But even if users can see the benefits that a service mesh could offer, they often fail to utilize it. The biggest problem is that the complexity and domain knowledge required to operate a mesh is too high.
We wanted to come up with something that helps overcome these problems, that eases the adoption and increases the velocity of working with a service mesh. But we also didn’t want to reinvent the wheel and start from scratch.
That's why we've built our product on the solid foundation of Istio. Istio may have had (and may still have) its shortcomings, but there is a great and strong community behind it that works hard on overcoming them, and it's still by far the most complete and mature solution.
Ease of adoption 🔗︎
The journey into the service mesh starts with the deployment of the various control plane components. Backyards does this with the help of our open source Istio operator. It does the heavy lifting of managing and configuring all the control plane components from Pilot to Galley. Because it’s an operator running continuously in the cluster, it is able to re-configure or upgrade Istio during runtime.
Istio also needs some additional components to unleash its full potential of observing the mesh. Prometheus collects metrics from Envoy proxies or from Mixer, Grafana displays monitoring information on analytics dashboards, while Jaeger handles distributed traces provided by Envoys.
Backyards builds an integrated, production-ready environment of these components with a single CLI command in less than two minutes. It also adds Banzai Cloud's own management dashboard, CLI and GraphQL API, which makes working with a service mesh extremely simple.
Ease of use 🔗︎
Having a production ready service mesh environment is one thing, properly configuring and monitoring service-to-service communication is another. Backyards’ advanced management dashboard eases mesh configuration and simplifies the presentation of telemetry data.
It was built to serve two purposes:
- Help you gain insight into the behaviour of your mesh and the applications and services inside it.
- Let you focus on the high-level requirements by taking care of complex low-level configuration options.
The dashboard displays the topology of services and workloads inside the mesh, and annotates it with real-time information about latency, throughput or HTTP request failures. It serves as a starting point for diagnosing problems within the mesh. The UI is integrated with Grafana for easy access to more in-depth analytics when needed, and with Jaeger for one-click access to the distributed traces of various services.
Istio has a powerful mesh configuration mechanism through Kubernetes custom resources. While we like this approach a lot (how can you not love CRDs?), we also think that their complexity is intimidating for Istio adopters, and they're quite error-prone without deep domain knowledge. Backyards lowers the bar for getting started, and eliminates configuration errors by providing tools to manage service-to-service communication. It tries to make traffic management configuration as seamless and intuitive as possible.
It guides you through setting up complex traffic routing rules and takes care of creating, merging and validating the YAML configuration. And best of all, unlike some similar products, it works in both directions: you can edit the YAML manually and still be able to view or manipulate the config from Backyards. This is possible because there's no intermediate config layer in Backyards. We wanted to keep Backyards as lightweight as possible, so it only does CR transformations and merging.
Of course we couldn’t cover every Istio feature (yet!), so let’s see what’s available today. These features are available from the UI and the CLI, but for advanced users we’re offering the GraphQL API as well. This is an introductory post about the features - for concrete examples head to the Backyards documentation.
Traffic routing 🔗︎
One of the top features of Backyards is the ability to fully configure how traffic flows in the service mesh. This kind of routing works in the application layer, and lets users configure sophisticated rules based on URIs, ports or headers.
In Istio, routing is mostly described in virtual services, and then translated to Envoy configuration. Backyards covers almost everything that could be described with virtual services, but presents it through an easy-to-understand structure of routes and matches. You can add routing or redirect rules for requests that match certain criteria, and configure options like request timeouts or mirroring to another destination.
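Under the hood, such a rule ends up as an Istio virtual service. A minimal sketch of a header-based route with a timeout and mirroring (the service and subset names here are hypothetical) might look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  # Requests from "tester" go to v2, with a 5s timeout,
  # and are also mirrored to a separate destination.
  - match:
    - headers:
        end-user:
          exact: tester
    route:
    - destination:
        host: reviews
        subset: v2
    timeout: 5s
    mirror:
      host: reviews-mirror
  # Everything else goes to v1.
  - route:
    - destination:
        host: reviews
        subset: v1
```

Backyards generates, merges and validates resources like this for you, so you rarely have to write them by hand.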
We’ll cover traffic routing in an upcoming blog post, and you can read more about the details on how to use it in the Backyards docs.
Fault injection 🔗︎
Fault injection is a system testing method which involves the deliberate introduction of network faults and errors into a system. It can be used to identify design or configuration weaknesses, and to ensure that the system is able to handle faults and recover from error conditions.
With Backyards, failures can be injected at the application layer to test the resiliency of the services. You can configure faults to be injected into requests that match specific conditions to simulate service failures and higher latency between services.
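In Istio terms, this maps to the fault section of a virtual service. A sketch that delays 10% of requests and aborts 5% with an HTTP 503 (the service name is hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 10      # delay 10% of requests
        fixedDelay: 5s   # by a fixed 5 seconds
      abort:
        percentage:
          value: 5       # abort 5% of requests
        httpStatus: 503  # with a 503 response
    route:
    - destination:
        host: ratings
```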
Circuit breaking 🔗︎
In the Kubernetes world, where tens, hundreds or even more services are communicating with each other, it is critical to protect services from abnormal behaviour of their peers. Downstream clients need to be protected from excessive slowness of upstream services. Upstream services in turn must be protected from being overloaded by a backlog of requests. The solution to the latter problem is the time-tested circuit breaker pattern.
A circuit breaker lets requests through without interference until the number of failures reaches a certain threshold. When the threshold is reached, the circuit breaker trips and subsequent requests fail fast, returning an error without even attempting to execute the call.
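In Istio, circuit breaking is configured through the traffic policy of a destination rule: connection pool limits cap outstanding requests, while outlier detection ejects misbehaving hosts. A sketch with hypothetical limits:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent connections
      http:
        http1MaxPendingRequests: 10  # cap queued requests
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 5     # trip after 5 consecutive errors
      interval: 30s            # analysis interval
      baseEjectionTime: 30s    # how long a host stays ejected
      maxEjectionPercent: 100
```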
Mutual TLS 🔗︎
Backyards uses Istio’s mutual TLS capability to automatically encrypt and authenticate communication traffic. In the first major release TLS can be globally enabled or disabled for a cluster.
In the next release, we'll add the ability to specify coverage for critical groups of services only. You can also expect integration with Vault as a trusted Certificate Authority to store and issue certificates for mTLS.
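In the Istio versions Backyards builds on, mesh-wide mTLS is described by a MeshPolicy resource. A sketch of what enabling it globally amounts to:

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}  # require mutual TLS for all services in the mesh
```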
Ingress (experimental) 🔗︎
Allowing and controlling incoming traffic to a cluster from outside is needed in most use cases. In general, ingress controllers are responsible for this in the Kubernetes world. There are a bunch of these available (a good comparison can be found here), but if you’re using Istio it makes perfect sense to use its own ingress gateway.
It’s one of the most feature complete solutions anyway, and you can avoid managing one more application in your stack.
One of its drawbacks is the lack of a UI. Backyards will try to fill this void by allowing the configuration of most ingress rules from its dashboard. For now, you can select which services to expose through the ingress gateway on selected hosts and ports, and Backyards takes care of opening up the port on the service (and on the cloud load balancer, if available) and building the required virtual service YAML configs.
It still lacks a few important things (in particular: TLS configuration, multiple gateways or different policy configurations on the gateway level), but this is one of the highest priority items on our roadmap. We’d like to provide a full ingress UI for Istio within Backyards as soon as possible.
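The configuration Backyards builds roughly corresponds to an Istio gateway plus a virtual service bound to it; a sketch with hypothetical hostnames and service names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backyards-ingress
spec:
  selector:
    istio: ingressgateway  # bind to Istio's ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "echo.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
spec:
  hosts:
  - "echo.example.com"
  gateways:
  - backyards-ingress  # attach the route to the gateway above
  http:
  - route:
    - destination:
        host: echo
        port:
          number: 80
```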
Canary deployments (experimental) 🔗︎
The main goal of canary releases is to reduce the risk of introducing new versions of applications. By rolling out new versions gradually, it decreases the number of users affected by a potentially faulty application version.
The process starts with the deployment of a new version, which receives zero traffic. Throughout the canary release, traffic is gradually shifted towards the new version, while network traffic is continuously analysed to prevent the rollout of broken application versions. If failure thresholds are not hit throughout the process, the new version takes the place of the previous one by receiving all traffic.
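Under the hood, each step of the traffic shift boils down to weighted routes in a virtual service. A sketch of a 90/10 split (service and subset names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: movies
spec:
  hosts:
  - movies
  http:
  - route:
    - destination:
        host: movies
        subset: stable
      weight: 90   # most traffic stays on the stable version
    - destination:
        host: movies
        subset: canary
      weight: 10   # a small share goes to the canary
```

As the canary proves healthy, the weights are shifted step by step until the canary receives all traffic.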
Security 🔗︎
We know that configuring traffic flow and network policies in a cluster is a critical and sensitive task, so this whole product wouldn't be complete without proper security. For us, security means two (well, basically three) things in this context:
- Authentication and authorization
- Auditing
The same philosophy of not reinventing the wheel and keeping things as lightweight and seamless as possible applies here as well.
For authentication and authorization, Backyards leverages the Kubeconfig and corresponding RBAC permissions.
When you open the dashboard the recommended way, by typing backyards dashboard, you're seamlessly authenticated through your current Kubeconfig.
If you’re allowed to add, edit or delete specific Istio custom resources, you’ll have the same permissions from Backyards as well.
Every action or configuration update made through Backyards is audited for accountability and to gain insights from tracking changes. By default, this audit information is logged to the console output, or it can be sent to a configurable audit sink that's compatible with the dynamic audit webhook backend of Kubernetes. We suggest using a tool like our own logging operator to collect and distribute these logs to your selected output for analysis.
Multi and hybrid cloud 🔗︎
At Banzai Cloud, we've noticed accelerated interest in hybrid and multi-cloud solutions at most of the companies we've had discussions with. Use cases range from scaling out to the public cloud for peak workloads, to cost effectiveness, to avoiding vendor lock-in. But whether on-premise or across different cloud providers, most large companies need to span multiple clusters.
The service mesh can be a good fit for some of these use cases by connecting clusters, and hiding the complexity of inter-cluster communication. Backyards is built to handle these scenarios perfectly. The underlying operator is capable of deploying and operating the service mesh in multi-cluster environments. Backyards collects and displays telemetry data properly from these clusters, and makes routing and policy configuration as easy as in a single cluster scenario.
We still believe that the service mesh is one of the next big things. But it may be hard to navigate through the hype, the many products, and the complexity that surrounds this topic.
Our vision is to clear the picture by offering a product that leverages and integrates everything that we think is the best choice currently, and that makes the adoption and use of the service mesh as easy as possible.