Cisco Tech Blog

Many ancient software development principles still hold true today, and one that is undoubtedly true is that no software is perfect. There will always be bugs, new feature requests, and, perhaps most importantly, every company will always want to be innovative and get more done in less time. Every company can choose to either go it alone or band together with other companies to develop open source projects.

In this blog post we will briefly consider a few familiar scenarios that tend to pop up when working with open source, and show you what you can do when it is not feasible to merge a change you need into the public upstream repository. We will go through a case study of a Java application (Apache Kafka) and how dynamic modifications are possible using the JVM’s mechanisms, highlighting one of Cisco Streaming Data Manager’s (formerly Banzai Cloud Supertubes) core features: Kubernetes-integrated authentication of Kafka clients out of the box.

Improving 🔗︎

Let’s assume a common scenario: you have a project that you’ve been using for a while, and you have some knowledge of its components and configuration. Depending on the maturity of the project, you will almost inevitably reach the point where you find a bug, are unable to configure a component, or need to add a significant feature. Most open source projects have an open backlog or some kind of issue tracking system where you can browse requests, check whether your particular issue has already been brought to the community’s attention, and see when you can expect a release resolving it. Since a project can never have enough developers, this release date is often far in the future, or you may bump into your favorite “PRs are welcome” label.

Most of the time, this is the point where you must make an important decision. You can:

  • conclude that the software is not the right fit for your objectives and go back to searching for a new solution, or implement your own
  • work around the limitation for now, and eagerly wait for a release that may or may not come
  • contribute the feature yourself.

Contributing 🔗︎

You should especially consider contributing to the given project if:

  • you have expertise in the domain
  • you are familiar with the project (you have been using it for a while and are clear about its fundamentals and assumptions)
  • the required change is simple
  • the issue has been labeled help wanted, meaning it is not critical to the maintainers but the community would welcome a fix

Even if all or most of the above is true, your change might still not make it into the upstream repo for one or more of the following reasons:

  • The community is not active or does not have the bandwidth to review your change, so your pull request lies unmerged for a long time.
    • To be honest, should you even pick a solution that is not actively maintained?
  • The community does not approve of your change.
    • This can be a collective rejection, or a single community member being picky or dismissive.
  • The change is merged, but the release process takes an unnecessarily long time.
    • For example, some Rust compiler features wait years to be stabilized after they have been merged to nightly.

Static and dynamic modification 🔗︎

If any of the points above are true, you can still decide to go with your modified version of the solution. It’s time to fork!

Depending on the programming language you use, you should also consider how it reacts to changes. If you have a single binary built from your code, you may find yourself rebuilding the entire project even for the slightest change. That could take anywhere from seconds (Go) to hours (a complex C++ project with lots of dependencies).

This is just another factor that should be taken into account when deciding on a specific component in your architecture.

There are other languages that tolerate modifications in a more dynamic manner. Let’s see our case study: Java. Just for the record, compiling Java code from scratch can also take anywhere from seconds to hours.

Java is still one of the most popular languages for building robust, production-grade systems. Deploying a change to Java code is also a common task for many developers, for example during a simple version upgrade. Many Apache projects used in production systems are written in Java - for example Apache Hadoop, Cassandra, or, in our case, Apache Kafka. There are plenty of examples of this software being forked, modified, and used by many companies - like LinkedIn’s Hadoop or Confluent’s Kafka. In their own interest, these companies try to keep the code of their products as close to the open source version as possible, so that they can simply cherry-pick anything worth incorporating into their product. They may also carry private patches, such as bug fixes and performance improvements, that can make a difference when customers decide which vendor’s distribution to choose.

Is it possible to make these changes as elastic as possible without compromising robustness and reliability? Let’s first consider the trivial scenarios!

The traditional rolling upgrade 🔗︎

When a new change is introduced, most companies’ build tools pick up that change and create new compiled artifacts - in the case of Java, new jars. You can bundle your jars into a new version of the image, and you’re ready to restart your containers in a rolling upgrade fashion wherever you orchestrate them. In Kubernetes, with more than one replica and PodDisruptionBudget correctly set on your resources, you can handle the update with zero downtime.
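As a sketch, setting such a disruption budget on the broker pods looks something like this (the resource name and labels are illustrative; the policy/v1 API assumes Kubernetes 1.21 or newer):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb            # illustrative name
spec:
  maxUnavailable: 1          # evict at most one broker pod at a time
  selector:
    matchLabels:
      app: kafka             # must match the labels on your broker pods
```

With this in place, voluntary disruptions such as node drains proceed one broker at a time, which is what makes the zero-downtime rolling upgrade possible.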

Messing with the bytecode 🔗︎

To understand how a new Java jar is created, we should dig a bit deeper.

At first glance, Java seems to be a compiled language, but the compiler does not produce native machine code: it translates the Java code into bytecode instructions, which are later interpreted (and just-in-time compiled) by the Java Virtual Machine (JVM). The compiler also checks that certain rules are followed, and if not, displays a familiar heart-warming error message. You can write arbitrary bytecode instructions; however, they would most likely not conform to the internal logic of the JVM’s interpreter layer, and could sadly cause the JVM to crash. The compiler therefore serves as a gatekeeper: it does not allow the generation of malformed bytecode. The gain is clear - bytecode generation is robust and the output is (almost) guaranteed not to crash the JVM - but in exchange we have to go through the whole compilation process: all the rules are checked for the whole code base, not just the changed part. Some compilers may optimize depending on the size of the change, but you must compile your code either way.

Moving on, the bytecode instructions are packaged into jars, typically as a handful of classfiles. Those jars are listed on the classpath (for example via the CLASSPATH environment variable) of the JVM process, and the classes are picked up and loaded by the JVM’s ClassLoaders. The JVM loads the bytecode instructions into memory and executes them. This is how Java code actually runs on a computer.
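As a tiny self-contained illustration of this loading machinery (not from the original article): core JDK classes are loaded by the bootstrap class loader, while your own classes come from the application class loader.

```java
public class ClassLoaderDemo {
  // Report which class loader is responsible for a given class.
  public static String loaderOf(Class<?> c) {
    ClassLoader cl = c.getClassLoader();
    // The bootstrap loader is represented as null in the API.
    return cl == null ? "bootstrap" : cl.getClass().getSimpleName();
  }

  public static void main(String[] args) {
    System.out.println("java.lang.String -> " + loaderOf(String.class));
    System.out.println("ClassLoaderDemo  -> " + loaderOf(ClassLoaderDemo.class));
  }
}
```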

So we know how Java code gets executed. Now what? Can we intercept this process, or can we change the bytecode on the fly? Before we answer, let’s see what Java offers.

Java has the following tricks up its sleeve to aid us:

  • You can generate bytecode without the compiler’s assistance.
  • You can alter the Classloader’s behavior to load different kinds of bytecode instructions for a class.
  • You can use Java’s special API to modify bytecode instructions while the JVM loads the classfile into its memory.
  • You can also dynamically load Java classes.

That may sound worrisome at first, but these features hold big opportunities, and it is exactly where we want to go.
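To illustrate the last capability, dynamic class loading, here is a minimal self-contained sketch: the class name is only a string resolved at runtime (we use java.util.ArrayList so the snippet runs anywhere; in practice the name would come from configuration or a plugin jar).

```java
public class DynamicLoadDemo {
  // Load a class by name at runtime and drive it reflectively.
  public static int loadAndUse(String className) throws Exception {
    Class<?> clazz = Class.forName(className);           // resolved at runtime
    Object list = clazz.getDeclaredConstructor().newInstance();
    clazz.getMethod("add", Object.class).invoke(list, "hello");
    return (int) clazz.getMethod("size").invoke(list);
  }

  public static void main(String[] args) throws Exception {
    System.out.println("size = " + loadAndUse("java.util.ArrayList"));
  }
}
```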

So what can we do to modify the bytecode?

Bytecode manipulation frameworks 🔗︎

There are several Java bytecode manipulation frameworks out there in the wild. This article does not attempt to compare these frameworks, for a very detailed comparison see this comment on Stack Overflow.

Notably, there were three we thought worth a closer look: Javassist, ASM, and ByteBuddy.

Javassist and ByteBuddy are higher-level frameworks, while ASM is the most low-level of the three. At first glance Javassist did not cover all our use cases, and we were willing to move to lower-level instructions at the expense of complexity. They may all have been equally appropriate for our use case in the long run, but the most important factor was licensing. Starting from OpenJDK 11, ASM is available as an internal library of the OpenJDK. Therefore, instead of using an external library with a nonstandard license, it was much easier to use the internal version to comply with Cisco’s licensing requirements (as every company has its own).

Bytecode injection 🔗︎

ASM is definitely not the easiest framework to start with: its user guide is a 150-page document.

This article does not attempt a deep dive into the JVM’s mechanics and how bytecode is interpreted. To thoroughly understand the different mutations this framework is capable of, we recommend reading through the first few pages of the ASM User Guide.

That being said, we’ll show you an example of how you can write changes using ASM without knowing much about the underlying bytecode system.

Visitors 🔗︎

Anyone with a good knowledge of design patterns has probably heard of patterns like Builder, Adapter, or Visitor. The good news is that to get started with ASM’s principles, you only need to know the Visitor design pattern.

In essence, the Visitor pattern allows adding new operations to a family of classes without modifying the classes themselves.
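As a quick refresher, here is a minimal generic sketch of the pattern (hypothetical Shape classes, unrelated to ASM’s own visitors): the Describer operation is added without touching Circle or Square.

```java
interface ShapeVisitor { String visitCircle(Circle c); String visitSquare(Square s); }

interface Shape { String accept(ShapeVisitor v); }

class Circle implements Shape {
  final double radius;
  Circle(double radius) { this.radius = radius; }
  public String accept(ShapeVisitor v) { return v.visitCircle(this); }
}

class Square implements Shape {
  final double side;
  Square(double side) { this.side = side; }
  public String accept(ShapeVisitor v) { return v.visitSquare(this); }
}

// A new operation over the Shape family, added without modifying the classes.
class Describer implements ShapeVisitor {
  public String visitCircle(Circle c) { return "circle r=" + c.radius; }
  public String visitSquare(Square s) { return "square side=" + s.side; }
}

public class VisitorDemo {
  public static String describe(Shape s) { return s.accept(new Describer()); }

  public static void main(String[] args) {
    System.out.println(describe(new Circle(2.0)));
    System.out.println(describe(new Square(3.0)));
  }
}
```

ASM works the same way: its ClassVisitor and MethodVisitor walk the structure of a classfile, and you plug your own visitor into the chain to add or change behavior.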

You will shortly see a practical example and what this means in the case of ASM.

Generating visitors by ASMifier 🔗︎

ASM has a very useful utility called ASMifier. It takes a class as input, and outputs the Java code that ASM would use to generate that class. Let’s see an example.

ASMifier ships in the asm-util jar, released alongside the other artifacts of an ASM release, and it’s easy to run on your class with the following command:

java -classpath asm-jars/asm.jar:asm-jars/asm-util.jar \
org.objectweb.asm.util.ASMifier \
com.cisco.MyClass

If you have the following one-liner in the main method of the com.cisco.MyClass class:

System.out.println("Hello world!");

ASMifier will translate it into the following code:

// ...
ClassWriter classWriter = new ClassWriter(0);
MethodVisitor methodVisitor = classWriter.visitMethod(ACC_PUBLIC | ACC_STATIC, "main", "([Ljava/lang/String;)V", null, null);
// ...
methodVisitor.visitCode();
Label label0 = new Label();
methodVisitor.visitLabel(label0);
methodVisitor.visitLineNumber(21, label0);
methodVisitor.visitFieldInsn(GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
methodVisitor.visitLdcInsn("Hello world!");
methodVisitor.visitMethodInsn(INVOKEVIRTUAL, "java/io/PrintStream", "println", "(Ljava/lang/String;)V", false);
// ...

Let’s ignore the label and the visitLineNumber instructions which are used mostly by tools like debuggers.

For every instruction in the Java code, the generated code calls the appropriate function on the methodVisitor:

  • visitCode starts visiting the method body
  • visitFieldInsn visits a field access instruction, such as reading the static out field here
  • visitLdcInsn visits an LDC instruction, which pushes a constant (an int, float, String, or even a class reference) onto the stack
  • visitMethodInsn visits a method invocation on the previously loaded instance

This translates the last three lines of the code above as follows:

  • first, java.lang.System’s static out field is read; it must be of type java.io.PrintStream
  • then the "Hello world!" constant is pushed onto the JVM’s operand stack, to be used by the next instruction
  • lastly, java.io.PrintStream’s virtual println method is invoked; it must have the signature (Ljava/lang/String;)V (translating to void println(java.lang.String)), and the owner class is not an interface (false)

By writing your own implementation of the MethodVisitor you can intercept calls and modify anything that flows through them.

Want to print something other than “Hello world!”? You can do it by:

public class OurCustomMethodVisitor extends MethodVisitor {
  private static final String HELLO_WORLD = "Hello world!";

  public OurCustomMethodVisitor(MethodVisitor mv) {
    // delegate all unhandled visits to the original visitor
    super(Opcodes.ASM5, mv);
  }

  @Override
  public void visitLdcInsn(Object cst) {
    if (HELLO_WORLD.equals(cst)) {
      cst = "Hello ASM!";
    }
    super.visitLdcInsn(cst);
  }
}

Using the Visitor pattern, you can replace the String constant before passing it on to the parent visitor.

Want to print to stderr instead of stdout? Let’s write another methodVisitor!

public class MyMethodVisitor extends MethodVisitor {
  private static final String SYSTEM = "java/lang/System";
  private static final String PRINTSTREAM_CLASS = "Ljava/io/PrintStream;";
  private static final String ERR = "err";

  public MyMethodVisitor(MethodVisitor mv) {
    super(Opcodes.ASM5, mv);
  }

  @Override
  public void visitFieldInsn(int opcode, String owner, String name, String desc) {
    if (opcode == Opcodes.GETSTATIC &&
        SYSTEM.equals(owner) &&
        PRINTSTREAM_CLASS.equals(desc)) {
      name = ERR;  // redirect System.out to System.err
    }
    super.visitFieldInsn(opcode, owner, name, desc);
  }
}

Usually such MethodVisitors are bundled into a ClassVisitor, just like in the example below, and they tend to have additional overrides to achieve other behaviors that are out of the scope of this article.

public class MyClassVisitor extends ClassVisitor {
  public MyClassVisitor(ClassWriter writer, String className) {
    super(Opcodes.ASM5, writer);
  }

  @Override
  public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
    MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
    if (name.equals("myMethod")) {
      // this visitor will be called first, and our implementation will call super (the original visitor)
      return new MyMethodVisitor(mv);
    }
    return mv;
  }
}

Packaging 🔗︎

Now that we have the logic implemented in a set of visitors, let’s wrap it up and use it in our actual Java application. For this purpose we leverage Java agents.

Java agents are software components that provide instrumentation capabilities to an application. Agents are loaded and started before the actual main application, so they can perform various tasks before your application even starts. They can use the Java Instrumentation API to interact with the internal mechanisms of the JVM: among other things, agents can add class transformers, add new classpath entries, or redefine classes. ASM handles the bytecode transformation itself; we only need to plug it into this API.

Usually the agent code looks something like this:

public class MyAgent {
  public static void premain(String arguments, Instrumentation instrumentation) {
    // may pass arguments to the transformer
    MyClassTransformer transformer = new MyClassTransformer();
    instrumentation.addTransformer(transformer);
  }
}

Where the class transformer is just a read-transform-write cycle using the ASM-provided classes:

public class MyClassTransformer implements ClassFileTransformer {
  @Override
  public byte[] transform(ClassLoader loader,
                          String className,
                          Class<?> classBeingRedefined,
                          ProtectionDomain protectionDomain,
                          byte[] classfileBuffer) {
    // className arrives in internal form, with slashes instead of dots
    if ("com/mycompany/AnyClass".equals(className)) {
      ClassReader reader = new ClassReader(classfileBuffer);
      ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_FRAMES);
      ClassVisitor visitor = new MyClassVisitor(writer, className);
      reader.accept(visitor, 0);
      return writer.toByteArray();
    }
    // returning null leaves the class unchanged
    return null;
  }
}

In order for the agent to be started, the following conditions must be met:

  1. The agent’s code must be packaged into a jar, and the jar’s manifest must declare the agent class in the Premain-Class attribute.
  2. That jar must be accessible to the Java process (in our deployment it is also placed on the classpath).
  3. The Java process must be started with the -javaagent:<path-to-agent-jar>[=<agent-arguments>] option.

The mechanics of how the Java application is started and how to set these values depend on the particular framework. In our case (since we run Apache Kafka on Kubernetes) we added them through environment variables.

Automating bytecode injection of Apache Kafka on Kubernetes 🔗︎

Taking the solution one step further, in Streaming Data Manager we introduced a Kubernetes webhook that mutates the KafkaCluster resource so that the Kafka broker pods start with the following additional fields:

apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: agent-jar-container
    image: <custom-image-with-agent-jar>
    command: ["/bin/bash"]
    args: ["-c", "cp -r <agent-jar-path-in-image>/* /var/lib/agent/"]
    volumeMounts:
    - name: agent-jar
      mountPath: /var/lib/agent/
  volumes:
  - name: agent-jar
    emptyDir: {}
  containers:
  - name: main
    volumeMounts:
    - name: agent-jar
      mountPath: /var/lib/agent/
    env:
    - name: CLASSPATH
      value: /var/lib/agent/*:<other-classes>
    - name: KAFKA_OPTS
      value: "-javaagent:/var/lib/agent/agent.jar <other Kafka opts>"

The init container copies the jar from the image to a shared volume, while the CLASSPATH and KAFKA_OPTS environment variables make sure it is picked up by the Kafka broker (a Java process).

To make this more dynamic (so that the settings Koperator and the user provide for CLASSPATH and KAFKA_OPTS are merged instead of overwriting each other), we’ve also introduced a change so that these environment variables can be appended or prepended. See the related code change on GitHub.

Why? 🔗︎

As we discussed above, open source communities can be complicated to get along with, and if there’s no support for pluggability or extensibility, sometimes you need to do it by hand. This is one motivation for bytecode generation or agent-based alteration, but let’s see some more examples.

  • Your framework exposes many JMX metrics, but you collect your metrics in Prometheus? Use the jmx-exporter agent (no bytecode injection needed in this case).
  • Your favorite framework does not log important details that would help you debug certain issues? Insert a LOG.info call at that line with a custom agent you wrote.

Our use case 🔗︎

We wanted to enrich Kafka client communication on Kubernetes with runtime information, like the service account name or SPIFFE ID. This information is added in Kafka’s ChannelBuilder and picked up on the other side through the KafkaPrincipal class. This, along with some fine-tuning, is implemented in Cisco Streaming Data Manager.

Obviously, we could have just released our own distribution of Apache Kafka, and initially we did. However, customers told us that they want control over the Kafka distribution running on the cluster. With this solution, any customer can bring their own distribution containing arbitrary changes, as long as the few interfaces and classes required for authentication haven’t been modified.

The cons 🔗︎

While this solution may have plenty of advantages, it is important to highlight the drawbacks too.

Compiled and unit-tested code obviously starts from a better position when it comes to code stability. Due to its complexity, a lot can go wrong with the method described above - misconfiguration, for example.

One inherently dangerous aspect, and the very reason compilers were invented in the first place, is that with this method we essentially sacrifice the compiler’s safety net in exchange for flexibility. This can easily escalate into fatal errors, such as the JVM crashing when a class or parameter type does not match. You have to be extra careful when writing your own agent: check all method signatures, input parameters, and so on, wherever possible. It is also possible to write tests that verify proper code injection (just like we did), and to unit test the functionality of the modified class. Everything comes at a price, but that was a cost we were willing to pay for this flexibility.