How Earthly Solved Our CI Problem

Eddie Chayes

In this guest post, the Konfig team discusses how they solved their complex Continuous Integration challenges with Earthly, sharing insights valuable for any software development team.

The value of a continuous integration pipeline in software development is clear: it allows engineers to test every code change, ensuring deployments are free from regressions in functionality and performance. At Konfig, a startup that generates SDKs in many programming languages from our clients' OpenAPI specs, not having continuous integration was a major issue. Code generation software tends to be especially fickle, and many of our code paths are shared across the generators for different languages, making it easy to introduce bugs. When we made revisions for one customer, it wasn't uncommon for those changes to be incompatible with another customer's needs. We badly needed CI – and quickly, too. Thus we embarked on (what we thought would be) the treacherous journey of building a CI pipeline that could accommodate our architecture.


The Problem: Multiple Languages and Components

From the get-go, we presumed that building an efficient CI pipeline for Konfig would not be straightforward. None of us had any experience creating a CI pipeline for a multi-service architecture. Our product naturally interacts with many languages, necessitating a different service tailored to each one; certain functionality, like formatting, requires an environment that supports its specific language. Despite our complicated architecture, we wanted our CI to be speedy and cost-effective. As a startup, we were moving fast and could not afford build times that stretched into hours. This was a tricky problem.

The nature of our system lent itself well to Docker Compose, where we could bake each service into a Docker image and use a compose file to orchestrate their communication. Docker Compose proved extremely effective for local testing, as Docker layer caching offered a huge benefit: only components that changed needed to be rebuilt. It wasn't too problematic to translate each of our components into Docker images that could be used to run the testing pipeline locally.

version: "3.8"
services:
  konfig-api:
    image: konfig-api:latest
    ports:
      - "8911:8911"
    depends_on:
      - konfig-db

  konfig-python-formatter:
    image: konfig-python-formatter:latest
    ports:
      - "10000:10000"

  konfig-generator:
    image: konfig-generator:latest
    ports:
      - "8080:8080"

  konfig-db:
    image: postgres
    restart: always
    environment:
      # The official postgres image won't start without a password; this value is a local-only placeholder.
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"

A Simplified Version of our Docker Compose
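
Running the pipeline locally was then just the usual Compose workflow, where only the services whose layers changed get rebuilt. Roughly (this is a sketch, not our exact invocation):

# Rebuild images; services whose layers haven't changed are reused from Docker's cache
docker compose build

# Bring the stack up, run the test suite against it (details omitted), then tear it down
docker compose up -d
docker compose down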

However, the hard part was imitating this behavior in the cloud in order to create a fast and economical CI pipeline. Of course, we could have blindly ported this Docker Compose setup to GitHub Actions, but then we would have lost the crucial benefit of caching. When developing locally, Docker uses your local disk to cache image layers, allowing you to reuse those layers in subsequent builds if nothing has changed. Because there is no built-in persistent storage across CI builds, two builds running one after the other would each be forced to build all of the images from the ground up. One potential solution is to introduce persistent storage that is pulled from before each CI build and pushed to afterward; the problem is that our images are large – in the realm of multiple GB apiece – and the network cost of uploading and downloading them would offset any gains from reusing them.
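
For reference, the push/pull approach we ruled out looks roughly like this with BuildKit's registry-backed cache (the registry path is a placeholder, not one we actually use):

# Pull cached layers from a registry before the build and push updated layers back after it.
# With images that are multiple GB apiece, these transfers eat up the time the cache saves.
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/example/konfig-api:buildcache \
  --cache-to type=registry,ref=ghcr.io/example/konfig-api:buildcache,mode=max \
  -t konfig-api:latest .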

Thus, to avoid a network bottleneck, we needed persistent storage that lives next to the compute instance (much like it does when running Docker on your laptop!). To this end, we explored the nascent GitHub Actions cache feature, only to find that it was unfortunately limited to 10GB – not enough space for our various large images.

So, to recap: in order to achieve CI with our desired speed, efficiency, and tooling, we were looking for a CI system that behaved much like local Docker, with a large amount of persistent storage living next to the execution environment. Luckily for us, that last sentence is a perfect characterization of Earthly.


The Solution: Earthly

When Konfig first found Earthly, I’ll admit, I immediately liked the “Earth” branding; as a result, I really wanted Earthly to work out as the solution we were searching for.

After installing Earthly, it was easy to port our system into its ecosystem – the Earthfile doesn't fall far from the Dockerfile tree, and Earthly's integration with Docker Compose let me reuse the same compose.yaml file. The target syntax, which lets an Earthfile reference files and images from targets in any other Earthfile, was a big advantage, as it let me clean up resources that had been duplicated between different images. (This is also possible with Docker multi-stage builds, but Earthly made it even easier and more intuitive!)
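
As a small illustration of that de-duplication, here is a sketch (target and file names are hypothetical, not our real ones) where one target builds a shared resource and any other target copies it instead of rebuilding it:

# Build the shared resource exactly once, in its own target
shared-templates:
    FROM alpine:3.18
    COPY templates templates
    SAVE ARTIFACT templates

# Any target, in this Earthfile or another, can reference the artifact above
some-service:
    FROM alpine:3.18
    COPY +shared-templates/templates templates
    SAVE IMAGE some-service:latest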

test:
  FROM earthly/dind:alpine-3.18-docker-23.0.6-r4
  COPY compose/.env .
  COPY compose/compose.yaml .
  ARG TEST_ARGS
  WITH DOCKER \
        --compose compose.yaml \
        --load konfig-python-formatter:latest=../konfig-python-formatter-server-blackd+run-python-formatter \
        --load konfig-generator:latest=../konfig-generator-api+run-generator \
        --load konfig-api:latest=../konfig-dash+run-konfig-api \
        --load konfig-integration-tests:latest=+integration-tests
    RUN docker run --network="konfig-network" konfig-integration-tests:latest $TEST_ARGS
  END

A Very Simplified Test Target Against Our Docker Compose Structure, from One of Our Earthfiles

One characteristic I particularly enjoyed was that I could build and experiment with Earthly locally, much as I would with Docker. In fact, its use of Docker under the hood made it easy to reuse familiar tools for local development and debugging, like Docker Desktop. Our Earthly pipeline essentially builds the images for each of our components, then uses Docker Compose (integrated into Earthly) to orchestrate them so we can run our tests against them. Earthly builds targets in parallel, waiting only where interdependencies between targets require it, which made builds quite speedy without requiring us to write any scripts.
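
Running the whole pipeline locally is a single command along these lines (the -P flag grants the privileges that WITH DOCKER needs; the TEST_ARGS value is just an example):

earthly -P +test --TEST_ARGS="--suite smoke"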

Once everything was working locally, it was beyond simple to get Earthly running in the cloud – launching an Earthly "satellite" took just a single CLI command. After that, everything stayed the same, except it wasn't running on my machine anymore – although the logs streaming back to my terminal in real time could have fooled me. Satellites have large built-in caches, so caching worked just as well in the cloud as it did on my laptop. The final step was integrating the process into GitHub Actions to trigger on pull requests, which once again could not have been easier: just a "Setup Earthly" action, followed by the same CLI command I'd been using to run Earthly locally.
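
For reference, launching the satellite is roughly "earthly sat launch <name>", and the resulting workflow looks something like this sketch (action versions, org, and satellite names are illustrative):

name: CI
on: pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The "Setup Earthly" action mentioned above
      - uses: earthly/actions-setup@v1
        with:
          version: latest
      # The same command we run locally, pointed at our satellite
      - name: Run tests on the satellite
        env:
          EARTHLY_TOKEN: ${{ secrets.EARTHLY_TOKEN }}
        run: earthly --org our-org --sat our-satellite -P +test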

I can now look back and say my presumptions were wrong: building an efficient CI for Konfig was not so treacherous once we found Earthly, a tool seemingly built with our exact conundrum in mind. To this day, it never fails to make me happy to see *cached* *cached* *cached* in our GitHub Actions logs!

Eddie Chayes
Eddie is an early-career software engineer with a few years of experience working with AI platforms; he's now a founding engineer at Konfig, a startup that generates SDKs for API companies.
