UPDATE: For a newer Istio control plane upgrade method that uses the canary upgrade flow, see the Safe and sound canary upgrade for your Istio control plane post.
Since releasing our open-source Istio operator, we’ve been doing our best to add support for the latest versions of Istio as rapidly as possible. Today, we’re happy to announce that we have added Istio 1.3 support for the Banzai Cloud Istio operator.
In this post, we’ll be outlining how to easily upgrade Istio control planes to 1.3 with the Banzai Cloud Istio operator, within a single-mesh multi-cluster topology or across a multi-cloud or hybrid-cloud service mesh.
The new Istio 1.3 release added a variety of new features and bug fixes. The largest of these was the experimental Mixerless HTTP telemetry, which is now also fully supported by our Istio operator. The full list of changes can be found in the official release notes.
Here is a list of new features we think are worth highlighting:
Mixerless HTTP telemetry

There is an ongoing effort to move the logic at work in the centralized Mixer v1 (which provides rich telemetry) into the proxies as Envoy filters. Istio 1.3 contains experimental support in sidecar proxies for standard Prometheus telemetry. It is a drop-in replacement for the HTTP metrics currently produced by Mixer, namely istio_requests_total, istio_request_duration_* and istio_request_size.
If you are interested in exploring how Istio telemetry works in conjunction with Mixer in greater detail, you may want to read our post on Istio telemetry.
There is a simple switch in the operator CR to turn on this experimental feature:
spec:
  mixerlessTelemetry:
    enabled: true
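For reference, here is a minimal sketch of how that switch might look in a complete Istio Custom Resource; the resource name and the version field are assumptions based on the operator’s sample CR, not something prescribed by this post:

apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
  name: istio-sample      # hypothetical name, following the operator's sample convention
spec:
  version: "1.3.0"        # assumed field selecting the Istio version to deploy
  mixerlessTelemetry:
    enabled: true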
A few things worth knowing about the new in-proxy telemetry:

- The istio_request_duration_* metric uses more granular buckets inside the proxy, which results in lower latency measurements in histograms. The new metric is called istio_request_duration_milliseconds.
- The istio-telemetry deployment can be switched off, saving 0.5 vCPU per 1000 rps of mesh traffic. This halves Istio’s CPU usage while collecting its standard metrics.
- It puts less overhead on istio-proxy than the original Mixer filter did.
- As of now, there are no TCP metrics yet!
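If you’d like to verify that the sidecars are actually emitting the new metric, one option is to scrape a sidecar’s Prometheus endpoint directly. This is only a sketch: it assumes the standard 15090 metrics port on the sidecar, and it reuses the echo pod name from later in this post, so substitute your own pod:

$ kubectl --context ${CTX_MASTER} -n default exec echo-5c7dd5494d-k8nn9 -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_request_duration_milliseconds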
Let’s suppose we have a Kubernetes master and a remote cluster joined in a single-mesh multi-cluster topology with Istio 1.2.5, and we’d like to upgrade our Istio components on both clusters to Istio version 1.3.0. Here are the steps we’d need to go through in order to accomplish that with our operator:

1. Deploy a version of the operator which supports Istio 1.3.x.
2. Apply a new Istio Custom Resource with the Istio 1.3.0 component versions.

It really is that easy!
Once the operator discerns that the Custom Resource it’s watching has changed, it reconciles all Istio-related components so as to perform a control plane upgrade. This happens first on the master cluster; then the modified images are automatically propagated to the remotes, and the Istio components installed on the remotes (usually Citadel, Sidecar Injector and Gateways) are also reconciled to the new image versions.
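If you want to watch this reconciliation happen, tailing the operator’s logs works well. The label selector below is an assumption about how the operator pod is labeled, so adjust it to whatever your deployment created:

$ kubectl --context=${CTX_MASTER} -n istio-system logs -f -l control-plane=controller-manager   # assumed label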
In this demo, we’ll perform the following steps:

1. Create two Kubernetes clusters.
2. Form a single-mesh multi-cluster topology with Istio 1.2.5 between them.
3. Install a simple echo service behind an ingress gateway.
4. Deploy the release-1.3 version of the operator and apply a new Istio Custom Resource to upgrade both control planes to Istio 1.3.0.
5. Check that the app still responds from both clusters.
For this demo we’ll need two Kubernetes clusters.
We created one Kubernetes cluster on GKE and one on AWS, using Banzai Cloud’s lightweight, CNCF-certified Kubernetes distribution, PKE via the Pipeline platform. If you’d like to do likewise, go ahead and create your clusters on any of the several cloud providers we support or on-premise using Pipeline for free.
Next, we’ll take our clusters and form a single-mesh multi-cluster topology with Istio 1.2.5. If you need help with this, take a look at the demo part of our detailed blog post, Multi-cloud service mesh with the Istio operator. There, we describe precisely how to set up a single-mesh multi-cluster topology with Split Horizon EDS.
The mesh can also be created via the Pipeline UI with just a few clicks. On Pipeline, the entire process is streamlined and automated, with all the work being done behind the scenes.
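Before moving on, it’s worth confirming that both clusters show up as healthy in the mesh. With this operator, one quick check is to list the Istio and RemoteIstio Custom Resources (resource names will depend on your setup):

$ kubectl --context=${CTX_MASTER} -n istio-system get istios
$ kubectl --context=${CTX_MASTER} -n istio-system get remoteistios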
Next, we install a simple echo service as a way of checking whether everything works after the control plane upgrade, and create Gateway and VirtualService resources to reach the service through an ingress gateway.
First, deploy to the master cluster:
$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-service.yaml
$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-gw.yaml
$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-vs.yaml
$ kubectl --context ${CTX_MASTER} -n default get pods
NAME READY STATUS RESTARTS AGE
echo-5c7dd5494d-k8nn9 2/2 Running 0 1m
Then deploy to the remote cluster:
$ kubectl --context ${CTX_REMOTE} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-service.yaml
$ kubectl --context ${CTX_REMOTE} -n default get pods
NAME READY STATUS RESTARTS AGE
echo-595496dfcc-6tpk5 2/2 Running 0 1m
Determine the external hostname of the ingress gateway and make sure the echo service responds from both clusters:
$ export MASTER_INGRESS=$(kubectl --context=${CTX_MASTER} -n istio-system get svc/istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in `seq 1 100`; do curl -s "http://${MASTER_INGRESS}/" | grep "Hostname"; done | sort | uniq -c
61 Hostname: echo-5c7dd5494d-k8nn9
39 Hostname: echo-595496dfcc-6tpk5
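Note that on clouds where the load balancer is exposed by DNS name rather than IP (AWS, for instance), the ingress address lives under the hostname field instead:

$ export MASTER_INGRESS=$(kubectl --context=${CTX_MASTER} -n istio-system get svc/istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')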
To install Istio 1.3.0, we need to check out the release-1.3 branch of our operator (this branch supports Istio versions 1.3.x):

$ git clone git@github.com:banzaicloud/istio-operator.git
$ cd istio-operator
$ git checkout release-1.3
Install the Istio Operator
Simply run the following make goal from the project root in order to install the operator (KUBECONFIG must be set for your master cluster):
$ make deploy
This command will install a Custom Resource Definition in the cluster, and will deploy the operator to the istio-system namespace.
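You can double-check that the install succeeded by listing the operator’s CRDs and pods; the API group in the grep is taken from the operator’s CR samples:

$ kubectl --context=${CTX_MASTER} get crd | grep istio.banzaicloud.io
$ kubectl --context=${CTX_MASTER} -n istio-system get pods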
Apply the new Istio Custom Resource

If you installed Istio 1.2.5 with the Istio operator and check the logs of the operator pod at this point, you will see the following error message: intended Istio version is unsupported by this version of the operator. We need to update the Istio Custom Resource with Istio 1.3’s components so that the operator can reconcile the control plane to the new version.
To deploy Istio 1.3.0 with its default configuration options, use the following command:
$ kubectl --context=${CTX_MASTER} apply -n istio-system -f config/samples/istio_v1beta1_istio.yaml
After a little while, the Istio components on the master cluster will start using the 1.3.0 images:
$ kubectl --context=${CTX_MASTER} get pod -n istio-system -o yaml | grep "image: docker.io/istio" | sort | uniq
image: docker.io/istio/citadel:1.3.0
image: docker.io/istio/galley:1.3.0
image: docker.io/istio/mixer:1.3.0
image: docker.io/istio/pilot:1.3.0
image: docker.io/istio/proxyv2:1.3.0
image: docker.io/istio/sidecar_injector:1.3.0
Notice that the Istio components on the remote cluster are now using the 1.3.0 images as well:
$ kubectl --context=${CTX_REMOTE} get pod -n istio-system -o yaml | grep "image: docker.io/istio" | sort | uniq
image: docker.io/istio/citadel:1.3.0
image: docker.io/istio/proxyv2:1.3.0
image: docker.io/istio/sidecar_injector:1.3.0
Check the app
At this point, your Istio control plane will be upgraded to Istio 1.3.0 and your echo application will still be available at:
$ curl -s "http://${MASTER_INGRESS}/"
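To confirm that requests are still served from both clusters after the upgrade, repeat the earlier distribution check:

$ for i in `seq 1 100`; do curl -s "http://${MASTER_INGRESS}/" | grep "Hostname"; done | sort | uniq -c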
The istio-proxy sidecars in the echo pods are still running the older version; in order to upgrade the data plane as well, we need to restart the pods manually.
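Here is a minimal sketch of such a restart, assuming the echo workloads are Deployments named echo in the default namespace (kubectl rollout restart requires kubectl 1.15+; adjust names to your environment):

$ kubectl --context ${CTX_MASTER} -n default rollout restart deployment/echo   # assumed Deployment name
$ kubectl --context ${CTX_REMOTE} -n default rollout restart deployment/echo   # assumed Deployment name

Once the new pods are up, the same grep used above should show the sidecars running docker.io/istio/proxyv2:1.3.0.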
The Istio operator now supports Istio 1.3. Upgrading Istio control planes between Istio’s major versions with our operator, even in a single-mesh multi-cluster setup, is as easy as deploying a new version of the operator, then applying a new Custom Resource using your desired component versions.
Obviously, this is a process that’s completely automated and hyper-simplified with Backyards (now Cisco Service Mesh Manager).
Banzai Cloud’s Backyards (now Cisco Service Mesh Manager) is a multi and hybrid-cloud enabled service mesh platform for constructing modern applications. Built on Kubernetes and our Istio operator, it gives you flexibility, portability, and consistency across on-premise datacenters and cloud environments. Use our simple, yet extremely powerful UI and CLI, and experience automated canary releases, traffic shifting, routing, secure service communication, in-depth observability and more, for yourself.