Published on 07/11/2023
Last updated on 04/11/2024

Platform engineering for cloud-native application development


In today's rapidly evolving digital landscape, organizations are constantly seeking ways to leverage the power of cloud computing to drive innovation and accelerate their business growth. Modern cloud-native applications have emerged as the go-to approach for building scalable, flexible, and resilient software systems that can adapt to dynamic demands. That is why cloud-native application development has become so critical to enterprise success.

At the heart of this transformative paradigm lies platform engineering, a crucial discipline that empowers organizations to harness the full potential of cloud-native architectures. Platform engineering for cloud-native applications involves designing, building, and managing the underlying infrastructure and services that enable the seamless deployment, scaling, and operation of modern software applications. It also paves the way for the most effective cloud-native application development possible.

This article walks through building a cloud-native platform that combines cloud-native infrastructure with observability using popular open-source components, such as MongoDB, Kafka messaging, Elasticsearch, OpenTelemetry, MinIO, and certificate management services. We can deploy this cloud-native platform in a few simple steps on any Kubernetes cluster, such as Amazon Elastic Kubernetes Service (Amazon EKS) or a kind cluster.

The cloud-native platform also implements an interface layer using the Argtor Kubernetes operator, which lets any application access the cloud-native services, such as MongoDB, through a custom resource definition (CRD). We can develop and test the application on one type of Kubernetes cluster, e.g., an on-premises kind cluster, and then seamlessly deploy it on another type of Kubernetes cluster, e.g., Amazon EKS, without any code changes.

In this example, we use the Temporal workflow engine to deploy the platform in an on-premises kind cluster. Temporal is a distributed workflow manager that executes tasks in order and in a fault-tolerant manner. Tasks in the workflow set up the basic platform services. There are alternatives to this approach, such as Argo Workflows, Terraform, or Ansible, each with its own pros and cons. At the end, we demonstrate how a Golang microservice accesses the platform services.

Prerequisites to building a cloud-native platform

  1. Create a kind cluster (a sample configuration is sketched below).
  2. Deploy the Temporal workflow engine.
  3. Deploy MinIO with appropriate disk space.
  4. Deploy Fluent Bit, which forwards the logs to Elasticsearch.
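
A kind cluster can be created from a small configuration file. The following is a minimal sketch; the node roles and count are assumptions and can be adjusted to fit available resources:

# Minimal kind cluster configuration (node roles and count are illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

The cluster can then be created with kind create cluster --config <file>.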

Deployment overview

Figure 1 shows the deployment overview. The Jenkins pipeline calls the Temporal REST API, which initiates a series of workflow tasks. These tasks deploy platform services in the Kubernetes cluster in a fault-tolerant manner and consistently across several clusters (handling retry logic, wait times for cluster bring-up, distribution of TLS certificates, etc.). As mentioned earlier, it is also worthwhile to look into other configuration management solutions. Our goal here is to deploy services consistently and handle any failures effectively.

Figure 1: Deployment Overview

Cloud-native platform services

Argtor

Argtor is a Kubernetes operator that provides an interface, i.e., a custom resource, to application developers. It implements the platform operations that create the Kubernetes resources an application needs to access the platform services, such as MongoDB or Kafka messaging.

Argtor defines a custom resource definition (CRD) with the following characteristics:

  1. Each CRD instance defines a unique application name.
  2. One application can have one or more services. Each service is uniquely identified by a name within the application. The service models a Kubernetes Service.
  3. The CRD can define a section for a platform service if needed. For instance, the CRD's service definition has a section that defines the list of Kafka topics to be configured in the Kafka cluster for the service to access.

MongoDB Database

As shown in Figure 2 below, the platform provides the MongoDB database service. 

Figure 2: MongoDB database service

The platform installs the MongoDB Community Operator. The operator deploys the MongoDB cluster across three pods.

For an application pod to access MongoDB, the developer creates one CRD instance by specifying the application name, the service name, and the database name. As shown in Figure 2, a database credential and a certificate are generated for the pod to access the database:

  1. Argtor generates the database username/password and stores them in one Kubernetes secret.
  2. Argtor updates the MongoDB operator's MongoDBCommunity CRD instance to reference the Kubernetes secret created in step 1 (a sketch of this resource follows these steps).
  3. The MongoDB operator configures the database credential in MongoDB.
  4. Argtor creates one CertificateRequest CRD instance.
  5. Based on the CertificateRequest created in step 4, cert-manager generates the certificate and stores it in a Kubernetes secret.
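
The following is a minimal sketch of the MongoDBCommunity resource referenced in step 2. The resource shape comes from the MongoDB Community Operator; the resource name, user name, MongoDB version, and SCRAM credentials secret name are assumptions used for illustration:

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb                          # illustrative name
  namespace: mongodb
spec:
  members: 3                             # three pods, as deployed by the operator
  type: ReplicaSet
  version: "6.0.5"                       # assumed MongoDB version
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: spartan-user                 # assumed database user
      db: Global-spartan
      passwordSecretRef:
        name: spartan-mongodb-password   # secret generated by Argtor in step 1
      scramCredentialsSecretName: spartan-scram
      roles:
        - name: readWrite
          db: Global-spartan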

Kafka messaging service

Figure 3 shows the Kafka messaging service.

Figure 3. Kafka cluster messaging service

The platform installs the Cisco AKO (Kafka operator), with the option to install another Kafka operator, such as Koperator or Strimzi. The operator deploys the Kafka cluster with Kafka security enabled.

For any application pod to access the Kafka cluster, the developer creates an Argtor CRD instance with the following properties:

  1. The application name and microservice name.
  2. The list of Kafka topics to be configured in the Kafka cluster. For each Kafka topic, define the topic name, number of partitions, replication factor, etc.

As shown in Figure 3, Argtor creates one CertificateRequest CRD instance on behalf of the application, and cert-manager generates the SSL certificate for the pod to access the Kafka cluster.
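
Argtor drives certificate issuance through cert-manager. For illustration, the sketch below uses the higher-level cert-manager Certificate resource, which results in the same kind of signed certificate stored in a Kubernetes secret; the issuer name and DNS names are assumptions:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: spartan-kafka-cert
  namespace: pontos
spec:
  secretName: spartan-kafka-cert         # secret later mounted by the application pod
  duration: 2160h                        # 90 days; cert-manager handles renewal
  dnsNames:
    - spartan.pontos.svc.cluster.local   # assumed service DNS name
  issuerRef:
    name: kafka-ca-issuer                # assumed issuer backing the Kafka CA
    kind: ClusterIssuer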

OpenTelemetry

Figure 4 shows the OpenTelemetry pipeline deployed for the platform. In this example, spartan, aegis, and proxy are application pods. Traces and metrics are collected and forwarded through load-balancing collectors, then processed and forwarded to the Grafana Tempo and Prometheus backends. Fluent Bit forwards the logs to Elasticsearch. Dashboards can be created using Grafana and Kibana.

Figure 4. OpenTelemetry pipeline

Metrics and traces

To process metrics and traces, the platform deploys two layers of OTEL collectors:

  1. Load-balancing OTEL collector: this acts as a layer-4 load balancer. It defines a load-balancing exporter that picks the same backend, i.e., the same processing OTEL collector pod, for a given trace ID (see the configuration sketch after this list).
  2. Processing OTEL collector: the processing collector is responsible for data processing as well as making the sampling decision. It exports traces to Grafana Tempo and metrics to Prometheus.
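
A minimal sketch of the load-balancing collector, expressed as an OpenTelemetry Operator OpenTelemetryCollector resource, is shown below. The collector name, namespace, and resolver settings are assumptions; the loadbalancing exporter and its routing_key option come from the OpenTelemetry Collector contrib distribution:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-loadbalancer                # illustrative name
  namespace: opentelemetry-operator-system
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      loadbalancing:
        routing_key: "traceID"           # keep all spans of a trace on one backend
        protocol:
          otlp:
            tls:
              insecure: true
        resolver:
          k8s:
            service: otel-processing-collector.opentelemetry-operator-system   # assumed processing collector service
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [loadbalancing]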

By default, the platform enables the Grafana service graph. Users can create custom dashboards as needed.

The platform deploys one OpenTelemetryCollector in sidecar mode. The application developer needs to add the sidecar.opentelemetry.io/inject annotation to the Kubernetes deployment specification, as shown below:

spec:
  template:
    metadata:
      annotations:
        sidecar.opentelemetry.io/inject: opentelemetry-operator-system/otel-agent-sidecar

The platform injects an OTEL sidecar into the pod. The sidecar forwards the metrics and traces to the load-balancing OTEL collector. The application's OpenTelemetry implementation sends the metrics and traces to localhost:4317 over gRPC, where they are received by the OTEL sidecar.
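
The otel-agent-sidecar referenced by the annotation is itself an OpenTelemetryCollector resource in sidecar mode. A minimal sketch follows; the exporter endpoint pointing at the load-balancing collector is an assumption:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-agent-sidecar
  namespace: opentelemetry-operator-system
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317                      # where the application sends telemetry
    exporters:
      otlp:
        endpoint: otel-loadbalancer-collector:4317      # assumed load-balancing collector service
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
        metrics:
          receivers: [otlp]
          exporters: [otlp]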

Logs

The application pod needs to mount its log files under the /var/log directory. The platform deploys Fluent Bit to parse the logs under this directory and forward them to Elasticsearch (a configuration sketch follows).
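
A minimal sketch of the Fluent Bit configuration, packaged as a Kubernetes ConfigMap, could look like the following. The ConfigMap name, namespace, tag, and Elasticsearch host are assumptions; the tail input and es output are standard Fluent Bit plugins:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config                # illustrative name
  namespace: logging
data:
  fluent-bit.conf: |
    [INPUT]
        Name            tail
        Path            /var/log/*.log   # log files mounted by the application pods
        Parser          json
        Tag             app.*

    [OUTPUT]
        Name            es
        Match           app.*
        Host            elasticsearch-master   # assumed Elasticsearch service
        Port            9200
        Logstash_Format On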

Deploy application Kubernetes service

This section goes through the steps to deploy a Kubernetes microservice, spartan, in the platform. spartan needs to read from and write to a MongoDB database and publish messages to a Kafka topic.

Argtor CRD

One Argtor Application CRD is created to deploy spartan. The CRD is created in the Kubernetes namespace argtor-infra and is named pontos-crd. The application name is pontos, and it is deployed in the Kubernetes namespace pontos. spartan's database name is Global-spartan.

The following is the CRD definition:

apiVersion: argtor.golang.cisco.com/v1
kind: Application
metadata:
  name: pontos-crd
  namespace: argtor-infra
spec:
  name: pontos
  namespace:
    name: pontos
  services:
    - spartan
  serviceDeployments:
    - name: spartan
      kafkaTopics:
        - numPartition: 3
          replicationFactor: 1
          name: spartan.argo.cisco.com.v1.Spartan-spartan-svc-topic
          configEntries:
            retention.ms: "86400000"
      database: Global-spartan

Kubernetes deployment spec

The platform generates the following Kubernetes secrets in the Kubernetes namespace pontos, where the application services are deployed:

  1. spartan-kafka-cert: holds the SSL certificate used to communicate with the Kafka brokers.
  2. spartan-mongodb-cert: holds the SSL certificate used to communicate with MongoDB.
  3. spartan-mongodb-password: holds the username/password to access the database Global-spartan (a sketch of this secret follows this list).
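
For illustration, the spartan-mongodb-password secret could look like the sketch below; the key names and user name are assumptions, and the password value is generated by Argtor at runtime:

apiVersion: v1
kind: Secret
metadata:
  name: spartan-mongodb-password
  namespace: pontos
type: Opaque
stringData:
  username: spartan-user                 # assumed database user name
  password: <generated-by-argtor>        # placeholder; the real value is generated by Argtor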

The following is the part of the spartan deployment specification related to the platform services. Each Kubernetes secret is mounted into the spartan pod's file system so that spartan can read the certificates and username/password from the file system. spartan writes JSON-format logs into files under /var/log, which is mounted from the host path /var/log/andro/frontend/spartan. The log files are parsed by Fluent Bit, which forwards them to Elasticsearch.

     volumeMounts:
       - mountPath: /etc/kafka
         name: kafka-secret
         readOnly: true
       - mountPath: /etc/mongodb/secret
         name: mongodb-secret
         readOnly: true
       - mountPath: /etc/mongodb/cert
         name: mongodb-cert
         readOnly: true
       - mountPath: /var/log/
         name: log-dir
     volumes:
       - name: kafka-secret
         secret:
           secretName: spartan-kafka-cert
       - name: mongodb-cert
         secret:
           secretName: spartan-mongodb-cert
       - name: mongodb-secret
         secret:
           secretName: spartan-mongodb-password
       - name: log-dir
         hostPath:
           path: /var/log/andro/frontend/spartan
           type: ""

Service graph

After deploying the application, we can view the metrics and traces in the Grafana UI. Since the service graph is enabled by default, we can view the generated service graph, shown in Figure 5 below. The service graph shows that the spartan pod accesses the MongoDB database Global-spartan. Other pods deployed in the platform, such as aegis, also appear in the graph.

Figure 5. Service Graph

Cloud-native application development made simple

This example demonstrates how open-source tools can be used to quickly implement platform engineering for cloud-native applications, whether deployed on-premises or in the cloud. Such an approach helps developers iterate on application development without having to deploy in the cloud, saving costs while retaining the flexibility to deploy the validated application in the cloud when ready.

Next steps for cloud-native applications

We will integrate a GitOps method of deployment using Argo CD and detail how to leverage GitOps to deploy cloud-native security.

Acknowledgements

We would like to express our gratitude to GopiKrishna Saripuri, Saravanan Masilamani, and Suresh Kannan for their valuable contributions to this project. Additionally, we extend our thanks to Kalyan Ghosh for providing guidance throughout the project.

 
