At Banzai Cloud we try to provide our users with a unified, cloud and on-premise-agnostic authentication and authorization mechanism. Note that our Pipeline platform supports cloud provider-managed Kubernetes and, as of recently, our own Kubernetes distribution - the Pipeline Kubernetes Engine, PKE. We also recently introduced an open source project, JWT-to-RBAC (you can read more about that project, here), designed to solve authentication and authorization challenges within the Pipeline platform in a cloud provider-agnostic way. We have extended those solutions to our own Kubernetes distribution, as well as to any other Kubernetes cluster provisioned through Pipeline.
A big difference between PKE and cloud provider-managed k8s is that, when using PKE, we have access to Kubernetes-provided API Server flags, allowing us to directly configure authentication however we’d like.
- We needed a way to unify authn/authz across the Pipeline platform and our own Kubernetes distribution, regardless of where it is run
- We standardized on OAuth2 and introduced a new open source project, JWT-to-RBAC
- Pipeline and PKE are now integrated with Dex
Initially, Pipeline only supported GitHub OAuth-based authentication flows. This initial implementation served us and our cloud-based users well for over a year, but our new enterprise users began asking us to extend our support to multiple authentication providers. Dex seemed like the obvious choice, since it provides great Kubernetes support and uses a single generic interface called OpenID Connect - working as a proxy for multiple different identity providers.
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
Dex also helped us to replace our user filtering code with a few lines of YAML, since it supports organization level (and also team level) filtering out-of-the-box:
```yaml
connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: "ourGitHubOauthClientID"
    clientSecret: "ourGitHubOauthClientSecret"
    redirectURI: https://some.banzai.server.dev/dex/callback
    loadAllGroups: true
    orgs:
    - name: banzaicloud
```
Some examples of how to try this out locally on a laptop for GitHub, Google, and LDAP can be found in our developer documentation.
PKE OIDC authentication and RBAC 🔗︎
In Kubernetes, authn is currently only configurable via API server startup parameters. Some webhook-based implementations have reached PR status, but there is still no generic webhook-based authentication. This can cause a lot of headaches on a cloud provider-managed Kubernetes distribution, but in our PKE distribution we have full control over - and use of - the API server flags.
In the PKE distribution, API Servers are started with the following parameters:
```bash
kube-apiserver \
  --oidc-issuer-url=https://some.banzai.server.dev/dex \
  --oidc-client-id=clustersDexClientID \
  --oidc-username-claim=email \
  --oidc-username-prefix=oidc: \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix=oidc: \
  ...
```
The CLI version of the ClusterRoleBinding creation looks as follows:
```bash
kubectl create clusterrolebinding banzaiers-are-admins \
  --clusterrole cluster-admin \
  --group banzaicloud
```
A very similar API call is executed by the PKE provisioning code during cluster creation, but with client-go. Proper Roles and ClusterRoles can still be configured via our jwt-to-rbac application; however, it also has a set of well-defined, straightforward default roles.
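For reference, the declarative equivalent of that `kubectl` command is the following ClusterRoleBinding manifest. Note that since the API server above is started with `--oidc-groups-prefix=oidc:`, the group name that the RBAC layer actually sees may carry that prefix:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: banzaiers-are-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
# With --oidc-groups-prefix=oidc: set on the API server, the group claim is
# prefixed before RBAC evaluation, i.e. the subject would be "oidc:banzaicloud".
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: banzaicloud
```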
Since this solution makes use of the `groups` claim of the OIDC ID Token, users must be part of at least one group; otherwise, bindings can only be made with a `User` subject (and have to be made individually for every user and kept in sync for all members of a group) and not with a `Group` subject. To read more about subjects and bindings, see the official Kubernetes documentation.
When a PKE cluster is provisioned, a corresponding Dex Client is dynamically registered through the Dex gRPC API. To get the user credentials of a Kubernetes cluster, the user has to, first, go through an OAuth Authorization Code Flow to get a valid ID token from Dex, which is later merged into the Kubernetes client configuration:
```yaml
users:
- name: malkovich
  user:
    auth-provider:
      name: oidc
      config:
        client-id: kubernetes-cluster-123
        client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5
        id-token: eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw
        idp-issuer-url: https://some.banzai.server.dev/dex
        refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
```
Pipeline supports generating this configuration for your clusters, individually for every user within your organization. Since each cluster has its own `client-secret`, setting up an authentication flow client is a bit more tricky: either we prepare N different authentication callback handlers, one for each client-id, or we create one handler that prepares those OAuth configs dynamically. We don't really want to maintain state, since Pipeline is a distributed application and can be run in HA mode, so we chose the latter option.
The OAuth `state` parameter helps us through this process; it has two roles. The first is to maintain state between the request and the callback: the `cluster-id` and the `client-id` are encoded into it as a JWT, signed by Pipeline and valid for only 1 minute, so we can make sure that the client who requests login credentials for a Kubernetes cluster is entitled to get them. The second role of the OAuth `state` parameter is to prevent CSRF, which we get for "free".
In progress 🔗︎
Unified authentication throughout the whole stack 🔗︎
Now, since the Pipeline control plane and its provisioned PKE clusters are protected by OIDC authentication, there’s only one piece of the puzzle left: protecting the Services deployed to those clusters.
Since we already have a fully functional and configured Dex installation, and since our users are already familiar with it, we can use it as an identity provider for different service proxies. We've previously analyzed the oauth2_proxy originally open sourced by Bitly (and now maintained by Pusher). Jenkins X's SSO Operator extends this solution. With this proxy, authentication is standardized through the entire development stack: from the cluster management control plane, to Kubernetes, to end-user applications deployed to Kubernetes.
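As a rough sketch, pointing oauth2_proxy at the existing Dex installation to protect a deployed service looks something like the following; the client, secret, and addresses are illustrative placeholders, not our real configuration:

```bash
oauth2_proxy \
  --provider=oidc \
  --oidc-issuer-url=https://some.banzai.server.dev/dex \
  --client-id=some-protected-service \
  --client-secret=someClientSecret \
  --redirect-url=https://service.example.com/oauth2/callback \
  --upstream=http://127.0.0.1:8080 \
  --email-domain='*' \
  --cookie-secret=0123456789abcdef \
  --http-address=0.0.0.0:4180
```

The proxy fronts the upstream service and only forwards requests from users who have completed the OIDC login through Dex.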
CI/CD group mappings and OPA 🔗︎
Pipeline’s control plane comes with an integrated CI/CD solution. This CI/CD engine has integrations for GitHub, GitLab, BitBucket, and other source code management (SCM) platforms. These platforms usually also have the notion of organizations/groups, which are straightforward to map once you’re logged in (e.g. Pipeline or a CI/CD system with GitHub). However, complications arise if a user on another identity provider - let’s say Google - would like to manage their GitHub repositories with our CI/CD. In order to head these problems off at the pass, the mapping has to be configured and persisted somewhere it can be properly enforced.
Currently, we enforce policies in the Kubernetes clusters by using Kubernetes RBAC roles and bindings, but this method is often unsatisfactory, or inadequately fine-grained, to cover our users’ needs. Open Policy Agent is a relative newcomer to the policy engine business, and may also represent a potential solution to our CI/CD vs auth provider group mapping dilemma.
Note: The Pipeline CI/CD module mentioned in this post is outdated and not available anymore. You can integrate Pipeline to your CI/CD solution using the Pipeline API. Contact us for details.
If you’re looking to experiment with any of the above, or are interested in our unified authn/autz solution from control plane to K8s clusters and/or deployments, try Pipeline.
About Banzai Cloud Pipeline 🔗︎
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.