Running TiDB on Kubernetes

Janos Matyas

Thursday, January 11th, 2018

At Banzai Cloud we provision different applications and frameworks to Pipeline, the PaaS we built on Kubernetes. We practice what we preach: the PaaS control plane itself runs on Kubernetes and requires a data storage layer. We therefore had to explore two use cases: how to deploy and run a distributed, scalable and fully SQL-compliant database to cover both our clients' needs and our own internal ones. Additionally, most of the legacy and Java Enterprise Edition applications we provision to the Pipeline platform require a database backend. While we currently support only two wire protocols, this post focuses on how we deploy, run, operate, autoscale and monitor TiDB - a MySQL wire protocol-based database.


TiDB (pronounced: /‘taɪdiːbi:/ tai-D-B, etymology: titanium) is a Hybrid Transactional/Analytical Processing (HTAP) database. Inspired by the design of Google F1 and Google Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for online transactions and analysis.


We deploy, run, scale and monitor TiDB on Kubernetes. We love TiDB's architecture and the separation of concerns inherent in its building blocks, which perfectly suits k8s.

$ helm repo add banzaicloud-incubator http://kubernetes-charts-incubator.banzaicloud.com
$ helm repo update
$ helm install banzaicloud-incubator/tidb


This chart bootstraps a TiDB deployment on a Kubernetes cluster using the Helm package manager.


  • Kubernetes 1.7+ with Beta APIs enabled
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

$ helm install --name my-release banzaicloud-incubator/tidb

The command deploys TiDB to the Kubernetes cluster with its default configuration. The configuration section lists the parameters that can be configured during installation.
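Once the pods are up, any MySQL-compatible client can talk to the deployment through the TiDB service (port 4000 per the chart defaults). A quick sketch - the service name here is an assumption based on the release name, so verify it with `kubectl get svc` first:

```shell
# Forward the TiDB MySQL port to localhost; "my-release-tidb" is an assumed
# service name derived from the release -- check `kubectl get svc` first.
kubectl port-forward svc/my-release-tidb 4000:4000 &

# Connect with the stock mysql client; TiDB speaks the MySQL wire protocol.
mysql -h 127.0.0.1 -P 4000 -u root
```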

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The above command removes all Kubernetes components associated with the chart and deletes the release.


Configuration

The following table lists the configurable parameters of the TiDB chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| pd.name | Placement Driver container name | pd |
| pd.image | Placement Driver container image | pingcap/pd:{VERSION} |
| pd.replicaCount | Replica count | 3 |
| pd.service.type | Kubernetes service type to expose | ClusterIP |
| pd.service.nodePort | Port to bind to for NodePort service type | nil |
| pd.service.annotations | Additional annotations to add to service | nil |
| pd.service.PeerPort | PD peer port | 2380 |
| pd.service.ClientPort | PD client port | 2379 |
| pd.imagePullPolicy | Image pull policy | IfNotPresent |
| pd.resources | CPU/Memory resource requests/limits | Memory: 256Mi, CPU: 250m |
| tidb.name | TiDB container name | db |
| tidb.image | TiDB container image | pingcap/tidb:{VERSION} |
| tidb.replicaCount | Replica count | 2 |
| tidb.service.type | Kubernetes service type to expose | ClusterIP |
| tidb.service.nodePort | Port to bind to for NodePort service type | nil |
| tidb.service.annotations | Additional annotations to add to service | nil |
| tidb.service.mysql | MySQL protocol port | 4000 |
| tidb.service.status | Status port | 10080 |
| tidb.imagePullPolicy | Image pull policy | IfNotPresent |
| tidb.persistence.enabled | Use a PVC to persist data | false |
| tidb.persistence.existingClaim | Use an existing PVC | nil |
| tidb.persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation) |
| tidb.persistence.accessMode | Use volume as ReadOnly or ReadWrite | ReadWriteOnce |
| tidb.persistence.size | Size of data volume | 8Gi |
| tidb.resources | CPU/Memory resource requests/limits | Memory: 128Mi, CPU: 250m |
| tikv.name | TiKV container name | kv |
| tikv.image | TiKV container image | pingcap/tikv:{VERSION} |
| tikv.replicaCount | Replica count | 3 |
| tikv.service.type | Kubernetes service type to expose | ClusterIP |
| tikv.service.nodePort | Port to bind to for NodePort service type | nil |
| tikv.service.annotations | Additional annotations to add to service | nil |
| tikv.service.ClientPort | TiKV client port | 20160 |
| tikv.imagePullPolicy | Image pull policy | IfNotPresent |
| tikv.persistence.enabled | Use a PVC to persist data | false |
| tikv.persistence.existingClaim | Use an existing PVC | nil |
| tikv.persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation) |
| tikv.persistence.accessMode | Use volume as ReadOnly or ReadWrite | ReadWriteOnce |
| tikv.persistence.size | Size of data volume | 8Gi |
| tikv.resources | CPU/Memory resource requests/limits | Memory: 128Mi, CPU: 250m |

Specify each parameter using the --set key=value[,key=value] argument to helm install.
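For example, to scale TiDB to three replicas and turn on TiKV persistence (parameter names are taken from the table above; the values are purely illustrative):

```shell
# Override chart defaults at install time; replica and persistence
# values here are examples, not recommendations.
helm install --name my-release \
  --set tidb.replicaCount=3,tikv.persistence.enabled=true \
  banzaicloud-incubator/tidb
```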

Alternatively, a .yaml file that specifies the values for these parameters may be provided during the chart's installation. For example:

$ helm install --name my-release -f values.yaml banzaicloud-incubator/tidb

Tip: You can use the default values.yaml
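As a sketch, a custom values.yaml overriding a few of the parameters from the table above might look like this (the values are illustrative, not recommendations):

```yaml
# values.yaml -- example overrides for the TiDB chart
pd:
  replicaCount: 3
tidb:
  replicaCount: 2
  service:
    type: NodePort
tikv:
  persistence:
    enabled: true
    size: 20Gi
```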


The chart mounts a Persistent Volume at a given location. By default, the volume is created using dynamic volume provisioning. An existing PersistentVolumeClaim can be used instead, as follows:

Existing PersistentVolumeClaims

  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
$ helm install --set persistence.existingClaim=PVC_NAME banzaicloud-incubator/tidb
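Steps 1 and 2 above boil down to manifests along these lines - a minimal sketch in which the claim name, capacity and storage class are placeholders to adapt to your cluster:

```yaml
# pvc.yaml -- a pre-created claim the chart can reuse
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tidb-data        # pass this name as persistence.existingClaim
spec:
  accessModes:
    - ReadWriteOnce      # matches the chart's default accessMode
  resources:
    requests:
      storage: 8Gi       # matches the chart's default size
```

Apply it with `kubectl apply -f pvc.yaml`, then install the chart with `--set persistence.existingClaim=tidb-data`.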

What's next

This post highlights how easy it is to use TiDB on Kubernetes through Helm. Naturally, on Pipeline we do things differently: both the cluster and the deployment are provisioned through the REST API.

Remark 1: Currently we use our own Helm charts, but we've noticed that PingCAP is already working on a TiDB operator - once it's released or the source code is made available, we'll reconsider this approach. We love Kubernetes operators and have written and use quite a few, so we look forward to getting our hands on a new one.

Remark 2: In the event of a PD node/StatefulSet failure, Pipeline auto-recovers; however, due to an issue with PD's internal etcd re-joining, that recovery may not always succeed.