Function as a service with OpenFaaS on Banzai Cloud Pipeline

Published on 02/18/2018 · Last updated on 02/05/2024

At Banzai Cloud we provision different frameworks and tools, like Spark, Zeppelin, Kafka and TensorFlow, to our Pipeline PaaS (built on Kubernetes). Last week we added serverless capabilities to Pipeline, using OpenFaaS. This blog post explains how to deploy OpenFaaS to Kubernetes using Pipeline and how to invoke an example function. Note the distinction between provisioning the serverless frameworks we support (this post is about OpenFaaS, but Pipeline also supports Kubeless) and invoking functions through the Pipeline API once a function has been dispatched to any of the serverless frameworks we deploy to Kubernetes.

Create a Kubernetes cluster

The first thing we need is a Kubernetes cluster. Clusters can be easily provisioned with Pipeline through a simple REST API call on any supported cloud provider or managed Kubernetes offering (AWS, Azure, Google). To deploy Pipeline itself, follow these instructions. Once Pipeline is up and running, Kubernetes clusters can be created with REST API calls - see the Postman collection we created for the REST API exposed by Pipeline.
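As a rough illustration, a cluster-create call might look like the sketch below. The endpoint path and body fields here are assumptions for illustration only; the Postman collection is the authoritative reference for the request format.

```bash
# Hypothetical sketch: the /api/v1/clusters path and the body fields are
# assumptions -- check the Postman collection for the real schema.
curl -X POST "http://{{pipeline_host}}/api/v1/clusters" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "demo-cluster",
        "cloud": "amazon",
        "location": "eu-west-1",
        "nodeInstanceType": "m4.xlarge"
      }'
```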

Deploy OpenFaaS to a Kubernetes cluster

OpenFaaS can be deployed into a Kubernetes cluster with a RESTful API call to Pipeline. Look for the Deployment Create API call in the Postman collection. To invoke the Deployment Create API we need to set two parameters (a sketch of the full call follows this list):
  • cluster id - the identifier of the desired Kubernetes cluster from the list of clusters that Pipeline manages. (To see the list of Kubernetes clusters managed by Pipeline, invoke the Cluster List REST API call.)
  • REST call body:
    ```json
        {
            "name": "banzaicloud-stable/openfaas"
        }
    ```
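Putting the two together, the full call might look like this sketch (the path is an assumption; the Deployment Create entry in the Postman collection is authoritative):

```bash
# Hypothetical sketch: the deployments path under the cluster id is an
# assumption -- see the Deployment Create call in the Postman collection.
curl -X POST "http://{{pipeline_host}}/api/v1/clusters/{{cluster_id}}/deployments" \
  -H "Content-Type: application/json" \
  -d '{"name": "banzaicloud-stable/openfaas"}'
```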
Those who prefer to deploy OpenFaaS manually can do so using our OpenFaaS Helm chart, which Pipeline uses behind the scenes.
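For the manual route, a minimal Helm 2 sketch might look like this (the chart repository URL is an assumption; verify it against the chart's README):

```bash
# Assumed chart repository URL -- verify against the chart's documentation.
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm repo update
helm install --name openfaas banzaicloud-stable/openfaas
```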
Once OpenFaaS has been deployed, it appears among the cluster's deployments in Pipeline.

Deploy a Function to OpenFaaS

OpenFaaS provides a UI for deploying functions. Execute the Cluster Public Endpoints REST API call in Pipeline and look for an endpoint named pipeline-traefik: the host field of that endpoint is the URL through which the OpenFaaS Portal is exposed, under the ui/ path (http://{{public_endpoint_host}}/ui/). To deploy a function we need to specify which Docker image (pullable from DockerHub) contains the binary of our function. I took an N-queens solver written in Go and adapted it for OpenFaaS.
main.go

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"math/rand"
	"os"
	"strconv"
	"strings"

	"github.com/MaxHalford/gago"
)

// nqueens is the size of each genome, i.e. the board dimension and the
// number of queens to place.
var nqueens = 15

// Positions is a slice of ints.
type Positions []int

// String prints a chess board, marking each queen's position with a ♕.
func (P Positions) String() string {
	var board bytes.Buffer
	for _, p := range P {
		board.WriteString(strings.Repeat(" .", p))
		board.WriteString(" \u2655")
		board.WriteString(strings.Repeat(" .", nqueens-p-1))
		board.WriteString("\n")
	}
	return board.String()
}

func absInt(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

// Evaluate a slice of Positions by counting the number of diagonal collisions.
// Queens are on the same diagonal if their row distance is equal to their
// column distance.
func (P Positions) Evaluate() float64 {
	var collisions float64
	for i := 0; i < len(P); i++ {
		for j := i + 1; j < len(P); j++ {
			if j-i == absInt(P[i]-P[j]) {
				collisions++
			}
		}
	}
	return collisions
}

// Mutate a slice of Positions by permuting its values.
func (P Positions) Mutate(rng *rand.Rand) {
	gago.MutPermuteInt(P, 3, rng)
}

// Crossover a slice of Positions with another by applying partially mapped
// crossover.
func (P Positions) Crossover(Y gago.Genome, rng *rand.Rand) {
	gago.CrossPMXInt(P, Y.(Positions), rng)
}

// Clone a slice of Positions.
func (P Positions) Clone() gago.Genome {
	var PP = make(Positions, len(P))
	copy(PP, P)
	return PP
}

// MakeBoard creates a random slice of positions by generating a random
// permutation of [0, nqueens).
func MakeBoard(rng *rand.Rand) gago.Genome {
	var positions = make(Positions, nqueens)
	for i, position := range rng.Perm(nqueens) {
		positions[i] = position
	}
	return gago.Genome(positions)
}

func main() {
	// read function input
	input, err := ioutil.ReadAll(os.Stdin)

	if err != nil {
		fmt.Println(err)
		return
	}

	nqueens, err = strconv.Atoi(strings.TrimSpace(string(input)))

	if err != nil {
		fmt.Println(err)
		return
	}

	var ga = gago.Generational(MakeBoard)
	ga.Initialize()

	for ga.HallOfFame[0].Fitness > 0 {
		ga.Evolve()
	}

	// print function output
	fmt.Println(ga.HallOfFame[0].Genome)
	fmt.Printf("Optimal solution obtained after %d generations in %s\n", ga.Generations, ga.Age)
}
```
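The function reads the board size from stdin and writes the solved board to stdout, which is exactly the contract fwatchdog expects. You can exercise the logic locally before containerizing it (assuming a working Go toolchain; the gago dependency is fetched first):

```bash
# Fetch the dependency, then run the function locally, feeding the board
# size on stdin just as fwatchdog will inside the container.
go get github.com/MaxHalford/gago
echo "8" | go run main.go
```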
The Dockerfile that builds the above function into a Docker image for OpenFaaS is:

Dockerfile

```dockerfile
FROM golang:1.9.2-alpine as builder

RUN apk --no-cache add make curl git glide \
    && curl -sL https://github.com/openfaas/faas/releases/download/0.7.0/fwatchdog > /usr/bin/fwatchdog \
    && chmod +x /usr/bin/fwatchdog

WORKDIR /go/src/n_queens

# add n_queens sources
COPY main.go .

# download n_queens dependencies
RUN glide init -non-interactive
RUN sed -i '/- package: github.com\/MaxHalford\/gago/a \ \ version: ~0.4.1' glide.yaml
RUN glide update

# build n_queens binary
RUN go install

FROM alpine:3.6

# Needed to reach the hub
RUN apk --no-cache add ca-certificates

COPY --from=builder /usr/bin/fwatchdog  /usr/bin/fwatchdog
COPY --from=builder /go/bin/n_queens  /usr/bin/n_queens
ENV fprocess "/usr/bin/n_queens"

CMD ["/usr/bin/fwatchdog"]
```
This is a multi-stage Dockerfile. The build stage:
  1. adds the OpenFaaS fwatchdog component
  2. copies the sources of our n_queens function and builds a binary from it
The second stage creates the final Docker image: it copies the fwatchdog and n_queens binaries from the build stage and sets the fprocess environment variable so that fwatchdog knows which binary to execute for each request. Now build the Docker image and push it to DockerHub.
```bash
$ ll
total 16
drwxr-xr-x  4 sebastian  staff   128 Feb 15 18:40 .
drwxr-xr-x  3 sebastian  staff    96 Feb 15 18:29 ..
-rw-r--r--  1 sebastian  staff   568 Feb 15 18:40 Dockerfile
-rw-r--r--  1 sebastian  staff  2321 Feb 15 18:29 main.go

$ docker build -t banzaicloud/nqueens:1.0-dev .

$ docker push banzaicloud/nqueens:1.0-dev
```
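Before deploying, it is worth sanity-checking the image locally: fwatchdog listens on port 8080 by default and forwards each HTTP request body to the fprocess binary on stdin, so the function can be invoked directly with curl.

```bash
# Run the image locally; fwatchdog serves the function on port 8080.
docker run --rm -p 8080:8080 banzaicloud/nqueens:1.0-dev

# In another terminal, invoke the function with the board size as the body.
curl -X POST -d "8" http://localhost:8080/
```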

Deploy a Function to OpenFaaS using the UI

Open the OpenFaaS Portal -> Deploy a New Function -> Manual:
  • Docker image: banzaicloud/nqueens:1.0-dev
  • Function name: nqueens
Click Deploy. Once our nqueens function is deployed, it can be invoked from the OpenFaaS Portal or with curl, e.g.:

```bash
curl -X POST -d "15" http://{{public_endpoint_host}}/function/nqueens
```
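If you prefer to skip the UI entirely, the OpenFaaS gateway also exposes a REST API for deploying functions. A minimal sketch is below; the /system/functions endpoint and its fields match the gateway API of this OpenFaaS generation, but double-check them against your gateway version.

```bash
# Deploy the same function through the OpenFaaS gateway API instead of the UI.
curl -X POST "http://{{public_endpoint_host}}/system/functions" \
  -H "Content-Type: application/json" \
  -d '{"service": "nqueens", "image": "banzaicloud/nqueens:1.0-dev"}'
```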

Deploy a Function to OpenFaaS using the faas-cli command line tool

Functions can not only be deployed through the OpenFaaS Portal, but also with the faas-cli command line tool (which offers a richer set of options than the UI). Let's explore this with an example. We built an aws-cli Docker image, banzaicloud/openfaas-aws-cli:1.0, based on this blog post. To deploy the aws-cli function, run:

```bash
faas-cli deploy --name aws-cli --image banzaicloud/openfaas-aws-cli:1.0 \
  --env AWS_ACCESS_KEY_ID={{replace_your-access-key}} \
  --env AWS_SECRET_ACCESS_KEY={{replace_your-secret-key}} \
  --gateway http://{{public_endpoint_host}}/
```
The deployed function can be invoked from the OpenFaaS Portal, with curl, or with faas-cli (which reads the request body from stdin), e.g.:

```bash
curl -d "ec2 describe-instances --region eu-west-1" http://{{public_endpoint_host}}/function/aws-cli

echo "ec2 describe-instances --region eu-west-1" | faas-cli invoke aws-cli --gateway http://{{public_endpoint_host}}/
```

About Banzai Cloud Pipeline

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.