Published on 03/09/2023
Last updated on 04/17/2024

APIClarity overview series: Architecture


APIClarity: https://www.apiclarity.io/

This blog is part of the APIClarity Overview Series.

APIClarity architecture: The components of the API security tool

In this blog, we’ll explore the architecture and components of APIClarity. For a higher-level introduction to APIClarity and API security, see the APIClarity Introduction blog. And for tips on getting started, don’t forget about our guide to APIClarity installation.

Overview of APIClarity and API security

APIClarity is an open-source project that runs as a pod deployment within a Kubernetes cluster and analyzes API traffic for security risks. Your API security coverage may be missing key risks, leaving you open to vulnerabilities you aren’t aware of, and APIClarity can help identify them. To see how, let’s take a look at the different components and learn how to secure APIs.

The main APIClarity service consists of a server pod and a PostgreSQL database running in the “apiclarity” namespace, shown in dark green in Figure 1 below. 

Figure 1: APIClarity Architecture

The server pod is responsible for collecting the incoming API traffic, analyzing it and reporting any API security risks via the UI. The API specifications, traffic and analysis are stored in the database. 

To analyze API traffic, APIClarity provides plugins for different types of traffic sources; each plugin taps the API traffic and sends it to the APIClarity server. The plugins are shown in light green in Figure 1, each with an arrow feeding into the larger “API Traffic” arrow that flows to the APIClarity server.

To monitor API traffic sourced externally, APIClarity has plugins for the following (each covered in the “Traffic sources” section below):

  • Kong API gateway
  • Tyk API gateway
  • Apigee X Gateway
  • F5 BIG-IP Local Traffic Manager (LTM)
  • OpenTelemetry collector

For internal API traffic between application microservices, APIClarity integrates with service meshes by installing WebAssembly (WASM) filters at the Envoy level to tap API traffic. Istio and Kuma service meshes are supported. The light green “WASM” boxes in Figure 1 represent the Envoy WASM filters for APIClarity.

In addition, APIClarity has an API tapping capability that passively taps API traffic for a given Kubernetes namespace. This is shown in light green and labeled “APIClarity Tapper” in Figure 1.

A UI is available to see the API traffic that was observed and check for any abnormalities or security risks that were reported by APIClarity. 

APIClarity server

Let's take a look at the functionality of the APIClarity Server.

OpenAPI specification upload/reconstruction 

This module allows the upload of existing OpenAPI specifications (specs) or learns and reconstructs specs based on observed API traffic if none are provided. The reconstructed specs are available in the UI, where they can be reviewed and approved by the user. OpenAPI v2.0 and v3.0 are supported. 
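At its core, spec reconstruction means learning path templates from concrete observed URLs. The sketch below is illustrative only, not APIClarity's actual algorithm: it collapses numeric and UUID-like path segments into a generic `{id}` parameter placeholder.

```python
import re

# UUID-like segment, e.g. "3b241101-e2bb-4255-8caf-4136c566a962"
UUID_RE = re.compile(r"^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$")

def generalize_paths(observed_paths):
    """Collapse concrete URL paths into OpenAPI-style templates by replacing
    numeric or UUID-like segments with a parameter placeholder."""
    templates = set()
    for path in observed_paths:
        segments = [
            "{id}" if seg.isdigit() or UUID_RE.match(seg) else seg
            for seg in path.strip("/").split("/")
        ]
        templates.add("/" + "/".join(segments))
    return sorted(templates)
```

For example, observing `/users/42`, `/users/7/orders`, and `/health` would yield the templates `/users/{id}`, `/users/{id}/orders`, and `/health`.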

OpenAPI spec diff detector 

The “spec diff” detector looks for differences between the approved OpenAPI specs and the observed API traffic. It can detect shadow and zombie APIs. Shadow APIs are observed, but are not in the approved spec, meaning they are unknown API calls. Zombie APIs are deprecated API versions that are still being used. These will be explored in future blogs. 
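Conceptually, spec diffing reduces to set comparisons between observed and approved endpoints. A minimal sketch (not APIClarity's implementation):

```python
def classify_endpoints(observed, approved, deprecated):
    """Toy spec-diff: shadow APIs are observed but absent from the approved
    spec; zombie APIs are deprecated in the spec but still receiving traffic."""
    shadow = sorted(observed - approved)
    zombie = sorted(observed & deprecated)
    return shadow, zombie
```

Feeding it observed traffic such as `{"GET /users", "GET /admin/debug", "GET /v1/orders"}` against an approved spec that lacks `/admin/debug` and deprecates `/v1/orders` would flag the former as shadow and the latter as zombie.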

BFLA (Broken Function-Level Authorization) detector 

The BFLA detector builds an authorization model for application microservice interactions by first observing the API interactions and then detecting any discrepancies from the model. A BFLA violation would mean that functionality within the application was being used without authorization. The user can mark any interactions that have been learned as “illegitimate,” in which case those interactions would be flagged as BFLA violations going forward. Much more information is available in the README file. 
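The learn-then-enforce flow can be sketched as follows. This is a simplified stand-in for the real detector, which tracks richer service-to-service context:

```python
class BFLADetector:
    """Sketch of a BFLA model: learn (client, method, path) interactions during
    an observation phase, then flag anything unseen -- or anything the user has
    marked illegitimate -- as a violation."""

    def __init__(self):
        self.learned = set()
        self.illegitimate = set()

    def learn(self, client, method, path):
        self.learned.add((client, method, path))

    def mark_illegitimate(self, client, method, path):
        # User review can demote a learned interaction to a violation
        self.illegitimate.add((client, method, path))

    def is_violation(self, client, method, path):
        key = (client, method, path)
        return key in self.illegitimate or key not in self.learned
```

After learning that `frontend` calls `GET /orders`, an unexpected `DELETE /orders` from the same client would be flagged; marking the learned interaction illegitimate flags it going forward as well.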

Trace analyzer 

The trace analyzer detects different kinds of API security weaknesses in the observed API traffic, either at the API endpoint level or at the event level (i.e., an individual API call). Each detected vulnerability is given a severity score of low, medium, or high. Some of what the trace analyzer scans for is configurable, such as dictionary matches and regex rules for matching sensitive information, and individual findings can be ignored if desired.

There are many types of security vulnerabilities the trace analyzer can detect and flag. 

Weak basic authentication 

If basic authentication (username/password) is used for an application, the trace analyzer will check for short, weak (well-known) or reused passwords. 
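These checks can be illustrated with a short sketch. The dictionary and minimum length here are arbitrary examples, not APIClarity's configured values:

```python
import base64

WEAK_PASSWORDS = {"password", "123456", "admin", "letmein"}  # toy dictionary
MIN_LENGTH = 8  # example threshold

def check_basic_auth(header_value, seen_passwords):
    """Inspect an 'Authorization: Basic <b64>' header value for short,
    well-known, or reused passwords; records the password for reuse checks."""
    encoded = header_value.split(" ", 1)[1]
    _user, _, password = base64.b64decode(encoded).decode().partition(":")
    findings = []
    if len(password) < MIN_LENGTH:
        findings.append("short password")
    if password.lower() in WEAK_PASSWORDS:
        findings.append("well-known password")
    if password in seen_passwords:
        findings.append("reused password")
    seen_passwords.add(password)
    return findings
```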

Weak JSON web tokens 

If JSON web tokens (JWT) are used for an application, the trace analyzer will check for the following: 

  • Unset algorithm field 
  • Algorithm field set to “none” (unsigned token) 
  • Signing algorithm that is not among the recommended algorithms 
  • Token claims containing sensitive data 
  • Missing token expiration claim 
  • Signing secret vulnerable to a dictionary attack 
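A few of these checks can be sketched without any JWT library, by decoding the token's base64url segments directly. This is illustrative only; in particular, the recommended-algorithm set below is an example, not APIClarity's configured list:

```python
import base64
import json

RECOMMENDED_ALGS = {"RS256", "ES256", "PS256"}  # example set, not exhaustive

def _decode_segment(segment):
    # JWT segments are base64url-encoded JSON with the padding stripped
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def check_jwt(token):
    """Run a few header/claim checks on a JWT (no signature verification)."""
    header_seg, payload_seg = token.split(".")[:2]
    header, claims = _decode_segment(header_seg), _decode_segment(payload_seg)
    findings = []
    alg = header.get("alg")
    if alg is None:
        findings.append("unset algorithm")
    elif alg.lower() == "none":
        findings.append("unsigned token")
    elif alg not in RECOMMENDED_ALGS:
        findings.append("non-recommended signing algorithm")
    if "exp" not in claims:
        findings.append("no expiration claim")
    if any(k in claims for k in ("password", "ssn")):
        findings.append("sensitive data in claims")
    return findings
```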

Sensitive information 

Sensitive information, including Personally Identifiable Information (PII), can be detected by configuring a set of regex patterns to compare against. Examples include the keyword “password”, phone numbers, and social security numbers.
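The mechanism is simple pattern matching over request/response content. The patterns below are examples only; a real deployment tunes them for its own data:

```python
import re

# Example patterns; real deployments configure these for their own data
SENSITIVE_PATTERNS = {
    "password keyword": re.compile(r"password", re.IGNORECASE),
    "US social security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "US phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_sensitive(text):
    """Return the names of all patterns that match anywhere in the text."""
    return sorted(name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text))
```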

Guessable object ID 

Easily guessed object IDs, for example IDs in ascending or descending order, can be detected and flagged. These could leave the application at risk of a BOLA attack (see next section). 
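A crude heuristic for "guessable" is a run of IDs that ascend or descend in unit steps. This sketch is an assumption about one way to detect that, not APIClarity's actual heuristic:

```python
def looks_guessable(ids, min_run=3):
    """Flag a run of object IDs that ascend or descend in unit steps -- a
    simple proxy for 'easily guessed'. Non-numeric IDs are ignored."""
    nums = [int(i) for i in ids if str(i).isdigit()]
    if len(nums) < min_run:
        return False
    diffs = {b - a for a, b in zip(nums, nums[1:])}
    return diffs == {1} or diffs == {-1}
```

Observing order IDs 101, 102, 103 in sequence would trip the check, while random opaque IDs would not.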

BOLA (Broken Object-Level Authorization) 

A BOLA attack is where objects are accessed in an application without the proper authorization. One way to detect BOLAs is by looking for “non-learnt identifiers” in API requests, meaning that a request is being made for an object ID that hasn’t been provided by the application in a previous response. A guessable object ID can contribute to this problem. 
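The non-learnt-identifier idea can be sketched as tracking which object IDs the application has actually handed out (simplified; the real detector correlates per-client and per-endpoint):

```python
class BolaDetector:
    """Track object IDs the application has returned in responses; a request
    for an ID never seen in any response is a 'non-learnt identifier'."""

    def __init__(self):
        self.learnt_ids = set()

    def observe_response(self, object_ids):
        # IDs the application legitimately exposed to clients
        self.learnt_ids.update(object_ids)

    def check_request(self, object_id):
        # Returns a finding string, or None if the ID was previously learnt
        if object_id not in self.learnt_ids:
            return "non-learnt identifier"
        return None
```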

Data fuzzer 

APIClarity has a data fuzzer component that detects data injection risks. Using the approved OpenAPI specs for an application, the fuzzer attempts to inject unauthorized or invalid data into application API endpoints to flag weaknesses in input validation and processing.  
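The idea can be sketched as deriving hostile inputs from the parameter types an approved spec declares. The corpus below is a tiny, illustrative example; real fuzzers generate far more cases:

```python
# Tiny corpus of type-violating / boundary inputs, keyed by OpenAPI type
FUZZ_CORPUS = {
    "integer": ["-1", "0", "2147483648", "abc", ""],
    "string": ["", "A" * 10000, "' OR '1'='1", "<script>alert(1)</script>"],
    "boolean": ["maybe", "2", ""],
}

def fuzz_cases(spec_params):
    """Given {param_name: openapi_type} from an approved spec, yield
    (param_name, value) pairs to try against the endpoint."""
    for name, ptype in spec_params.items():
        for value in FUZZ_CORPUS.get(ptype, [""]):
            yield name, value
```

An endpoint that returns a 500 (rather than a 4xx validation error) for one of these inputs signals weak input validation.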

UI 

The APIClarity UI shows the observed API traffic and any abnormalities or security risks that were reported, and is also where reconstructed API specs can be reviewed and approved.

Database 

APIClarity uses its own PostgreSQL database to store OpenAPI specs, API traffic flows and traffic analysis. If installed via Helm, PostgreSQL will require a persistent volume for storage. 

The tables in the APIClarity database are the following: 

api_annotations 

This table lists the results from the trace analyzer that occur at the API endpoint level. 

api_events 

This table populates the “API Events” UI pane, and for each observed API call it includes a timestamp, REST API method, URL, status, source/destination IP and port, host, an external/internal flag, and a list of alerts that have been detected. 

api_inventory 

This table populates the “API Inventory” UI pane, and lists the API endpoints for an application.  

event_annotations 

This table lists the results from the BFLA detector, the trace analyzer and the fuzzer that occur at the event level. 

reviews 

This table provides a list of user reviews for reconstructed API specs. 

trace_sampling 

This table contains samples of API calls from trace sources (next section). 

trace_sources 

This table maintains a list of trace sources for APIClarity, which are API traffic sources external to an application, including Apigee X Gateway, F5 BIG-IP LTM Load Balancer, an OpenTelemetry Collector or the API tapper. 

WASM filters 

Incoming WASM traffic filters are set within envoy sidecars for the API microservice application that APIClarity will profile. These WASM filters forward incoming, internal API traffic (i.e. traffic between application microservices) to the APIClarity engine. APIClarity has WASM filter support for Istio and Kuma service meshes. 

Details on how WASM filters are configured to export HTTP traffic for APIClarity are available in the APIClarity documentation. Additionally, a proxy template is used to install the WASM filter for Kuma. 

Traffic sources 

APIClarity includes support for many different API traffic sources that interact with Kubernetes applications. We’ll take a look at the current set of traffic sources. 

Kong API gateway 

The Kong plugin can be installed either by running a script or via a post-install patch of the Kong container. For the patch method, set the following values in the APIClarity values.yaml file as appropriate for your deployment: 

  kong: 
    ## Enable Kong traffic source 
    ## 
    enabled: true 

    ## Carry out post-install patching of kong container to install plugin 
    patch: true 
 
    ## Specify the name of the proxy container in Kong gateway to patch 
    ## 
    containerName: "proxy" 

    ## Specify the name of the Kong gateway deployment to patch 
    ## 
    deploymentName: "" 

    ## Specify the namespace of the Kong gateway deployment to patch 
    ## 
    deploymentNamespace: "" 

    ## Specify the name of the ingress resource to patch 
    ## 
    ingressName: "" 

    ## Specify the namespace of the ingress resource to patch 
    ## 
    ingressNamespace: "" 

Tyk API gateway 

The Tyk plugin can be installed either by running a script or via a pre-install init container that adds the plugin. For the init container method, set the following values in the APIClarity values.yaml file as appropriate for your deployment: 

  tyk: 
    ## Enable Tyk traffic source 
    ## 
    enabled: true

    ## Enable Tyk verification in a Pre-Install Job 
    ## 
    enableTykVerify: true 

    ## Specify the name of the proxy container in Tyk gateway to patch 
    ## 
    containerName: "proxy" 

    ## Specify the name of the Tyk gateway deployment to patch 
    ## 
    deploymentName: "" 

    ## Specify the namespace of the Tyk gateway deployment to patch 
    ## 
    deploymentNamespace: "" 

External trace sources 

The following external traffic sources are supported by APIClarity. 

Apigee X Gateway 

To tap traffic in an Apigee X Gateway that is external to the Kubernetes cluster where your application is running, you’ll need to configure a proxy so that Apigee X has reachability to APIClarity, install the APIClarity public certificate in Apigee X, and configure a shared flow bundle. See the README for more details. 

F5 BIG-IP Local Traffic Manager 

To tap traffic in a BIG-IP Local Traffic Manager (LTM) that is external to the Kubernetes cluster where your application is running, you’ll need to install the APIClarity Agent on a host VM (separate from LTM) with reachability to both the LTM and to APIClarity. This will act as a proxy for the forwarded traffic. See the README file for installation steps. 

OpenTelemetry Exporter 

APIClarity has an HTTP exporter that can be built into an external OpenTelemetry collector and forward API traces and metrics to the APIClarity server. The OpenTelemetry collector must be built with the APIClarity exporter image and configured with the APIClarity server endpoint. 

API tapper 

APIClarity has an API traffic tapper that deploys a daemonset in a given namespace and forwards API traffic to the APIClarity server without needing an envoy sidecar or service mesh. It will use this “tap stream” as a traffic source for API monitoring.  

To use it, set the following values in APIClarity’s values.yaml as appropriate for the namespace you want to tap, and redeploy APIClarity: 

  tap: 
    ## Enable Tap traffic source 
    ## 
    enabled: true

    ## Enable APIClarity Tap in the following namespaces 
    ## 
    namespaces: 
      - default 

How to secure APIs with APIClarity 

Whew! That was a lot of information, but hopefully it helped you understand a bit more about how APIClarity works and how you can use it to change the way you think about API security.

Next in this blog series, I’ll give installation steps to get you started on your APIClarity journey to protect your cloud-native apps! 


Anne McCormick is a cloud architect and open-source advocate at Outshift, formerly Cisco’s Emerging Technology & Incubation organization. 
