K3s vs K8s
This article compares K3s and K8s, highlighting their differences. Earthly provides K8s users with consistent and efficient container builds. Learn more about Earthly.
Container orchestration tools automate container management tasks, such as scheduling, scaling, load balancing, and networking. Orchestrators have become the standard way to run containerized workloads in production. They solve many of the challenges of successful container operation by abstracting low-level concepts and providing solutions for common problems.
Orchestrators allow you to rapidly deploy containers and scale them across multiple physical hosts. They make it easier to build highly available applications with good scalability. While smaller systems can get away with simpler technologies, like Docker, orchestration will usually prove more effective for larger teams dealing with sprawling microservices architectures. It lets you create, distribute, and roll back container deployments with consistency.
K3s and K8s
Kubernetes, often abbreviated to K8s, is the leading container orchestrator. Originally developed at Google, the open source project has helped to form the modern definition of orchestration. The system includes everything you need to deploy and operate containerized systems.
Community vendors have built upon Kubernetes to create independent distributions that focus on different use cases. K3s is a popular distribution created by Rancher and is now maintained as part of the Cloud Native Computing Foundation (CNCF).
K3s aims to be a lightweight Kubernetes version that's suitable for use on resource-constrained hardware, such as IoT devices. It's also easy to set up and use, making it a good candidate for local development clusters. The focus on edge deployments doesn't preclude large-scale cloud deployments either: K3s's CNCF certification means it offers all Kubernetes features and is ready for production workloads.
In this article, you’ll learn how K3s compares to the official distribution provided by the Kubernetes project, including the ways in which they differ, when they should be used, and how easily you can get to grips with them. This isn’t a straightforward comparison because K3s builds upon upstream Kubernetes. They mainly differ in when and where they should be used.
What Is K8s
The term K8s is used in many different contexts. It can refer to any Kubernetes distribution, whether it’s part of the Kubernetes project itself or another vendor, such as Rancher or Canonical. K8s can also be used in reference to K3s.
In this article, K8s means the distribution developed from the Kubernetes source and published at kubernetes.io. This is the purest form of Kubernetes, providing all the project’s components for manual deployment from scratch.
Kubernetes gives you everything you need to deploy containers and scale them across multiple hosts. Each physical machine in your Kubernetes cluster is called a node. Nodes are managed by the Kubernetes control plane. It schedules your containers onto free nodes, manages network discoverability and storage, and provides the API that you interact with.
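Once a cluster is running and kubectl is configured, you can see the nodes the control plane schedules onto with a standard command, for example:
$ kubectl get nodes -o wide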
What Is K3s
K3s is a specific Kubernetes distribution with development led by Rancher. It builds upon the upstream project without forking it. Kubernetes distributions are conceptually equivalent to Linux operating systems: K3s is a Kubernetes distribution in the same way Ubuntu is a Linux distribution. K3s does everything Kubernetes can while layering its own capabilities on top.
K3s is specifically designed to run well in even the smallest of hardware environments. This focus underpins the project’s architecture. K3s ships a single binary that weighs in at under 50 MB. This lightweight executable bundles everything you need to start a fully functioning Kubernetes cluster.
The tiny binary size was achieved by dropping non-essential Kubernetes functionality, such as cloud provider integrations and non-CSI storage providers. Go’s goroutines are used to run the individual Kubernetes components from a single entry point.
Ease of Deployment with K3s and K8s
K3s is usually much easier to deploy and maintain than K8s. The lightweight binary lets you launch all the Kubernetes control plane components with one command. Starting an official Kubernetes cluster is much more time-consuming and can be harder to maintain.
Deploying K3s
The following command gets a functioning K3s cluster up and running:
$ curl -sfL https://get.k3s.io | sh -
The official installation script downloads the binary and registers a system service that automatically starts K3s when the process dies or your host reboots. It also configures Kubernetes utilities, including the kubectl CLI, to be available in your PATH. You should be able to start interacting with your cluster a few seconds after running the script on a fresh machine:
$ kubectl run nginx --image=nginx
pod/nginx created
You can easily join additional nodes to your cluster by installing K3s on them and running the following command:
$ sudo k3s agent --server https://<control-plane-ip>:6443 \
    --token <node-token>
You can get the value of <node-token> by reading the /var/lib/rancher/k3s/server/node-token file from the machine that's running your K3s control plane.
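For example, on the control plane machine:
$ sudo cat /var/lib/rancher/k3s/server/node-token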
K3s can also be deployed alongside Docker or Docker Desktop using the community-developed k3d project. K3d is a lightweight wrapper that runs K3s inside a container. It lets you run multiple K3s clusters on a single host and administer them using familiar Docker tools. You can install k3d with the following command:
$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | \
    TAG=v5.4.5 bash
Then create your first cluster:
$ k3d cluster create demo-cluster
Now you can use an existing kubectl installation to start adding objects to your k3d/K3s cluster:
$ kubectl run nginx --image=nginx:latest
pod/nginx created
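k3d also makes it easy to inspect and clean up your experimental clusters:
$ k3d cluster list
$ k3d cluster delete demo-cluster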
Deploying K8s
K8s has a much more involved deployment experience. The Kubernetes project provides downloads for individual components, such as the API server, controller manager, and scheduler. It’s up to you to successfully deploy each component to create your control plane. You then have to install the kubelet agent on each of the worker nodes you want to connect to your cluster.
This process is simplified by the kubeadm tool, which now provides a relatively streamlined experience for creating a cluster. You still need to install a container runtime such as containerd before you start using kubeadm. Then you can run the following command to initialize the Kubernetes control plane on your host:
$ kubeadm init
Once the process completes, you’ll be told to run further commands to make the generated config file accessible to kubectl:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then you have to manually select and install a pod networking add-on so your pods are able to communicate with each other. Flannel is one popular option:
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yaml
After all this, you’re ready to join nodes to your cluster by downloading kubeadm to them and running this command:
$ kubeadm join --token <token> <control-plane-ip>:<control-plane-port> \
    --discovery-token-ca-cert-hash sha256:<hash>
The value of <token> is obtained by running the kubeadm token list command on your control plane host. To find the correct discovery-token-ca-cert-hash, you'll have to run this command on the control plane host:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin \
    -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Starting a local Kubernetes cluster with kubeadm is much more complicated than using K3s. K3s abstracts away the cluster setup steps, making it quicker and easier to get up and running. Using K8s locally requires extra time and effort to learn about the installation process and configure your environment. However, there are plenty of cloud-based managed Kubernetes solutions, such as Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS), that can set up a cluster for you when you don't want to use K3s.
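For instance, GKE can provision a ready-to-use cluster with one command; the cluster name, zone, and node count below are illustrative:
$ gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3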
Key Differences Between K3s and K8s
K8s and K3s present the same functional surface to cluster users. If you have a valid Kubernetes YAML manifest, you can deploy it using either solution without making modifications.
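For example, a minimal Deployment manifest like this hypothetical one applies unchanged to either distribution:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest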
Where the two distributions differ is in how they’re packaged and the components they include. Following are some of the key characteristics you should consider:
Components Installed by Default
K8s and K3s bundle different components to implement the Kubernetes architecture. One of the biggest changes concerns the datastore used for the control plane: upstream Kubernetes uses etcd, whereas K3s opts for an embedded SQLite database. This generally improves performance and reduces the binary size but might not be suitable for the largest clusters. K3s can connect to an external etcd datastore if needed as well as other SQL-based databases, like MySQL and PostgreSQL.
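Switching to an external datastore is a matter of one server flag; the connection string below is illustrative:
$ k3s server --datastore-endpoint="mysql://user:password@tcp(db-host:3306)/k3s"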
The standard Kubernetes distribution only includes the components required to run a functioning control plane. Because K3s aims to offer a simpler user experience, it also bundles popular ecosystem utilities.
K3s has integrated Helm support that lets you deploy Helm charts by representing them as HelmChart objects in your cluster. However, upstream Kubernetes doesn't understand Helm; you have to install the Helm CLI separately and use its commands to install your charts.
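As a sketch, a HelmChart object looks something like the following; the chart name, repository, and namespaces here are illustrative:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx
  namespace: kube-system
spec:
  chart: nginx
  repo: https://charts.bitnami.com/bitnami
  targetNamespace: default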
Both K8s and K3s use containerd as their default container runtime, but this can be customized. K3s also includes several components sourced from the community, including flannel for pod networking, Traefik as an ingress controller, and an embedded service load balancer. Kubernetes makes you select and install these utilities yourself, whereas K3s aims for a batteries-included approach.
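If you'd rather supply your own alternatives, the bundled components can be switched off at install time, for example:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -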
K3s is the better choice when you don’t want to spend time evaluating and setting up individual components of the broader containerization stack. It sets up a fully functioning cluster that’s immediately ready for production workloads.
Resource Requirements
K3s is designed to run on anything with a free CPU core and at least 512 MB of memory. The binary weighs in at under 50 MB and requires no external dependencies.
Clusters created with kubeadm have heavier resource requirements. The documentation advises at least two free CPU cores and 2 GB of RAM. The increased overheads of the control plane components mean more hardware is needed to attain the same result. This can increase costs when you’re deploying your cluster in the cloud.
K3s is the natural choice for resource-constrained environments. This is the project’s core focus area, so you can expect a highly optimized experience. Remember that just because you can run with 512 MB of RAM doesn’t mean you should; this will leave relatively little headroom for your own applications.
Updates
K3s offers a streamlined cluster upgrade experience. You simply run the installation script again to download the latest version and automatically migrate your cluster:
$ curl -sfL https://get.k3s.io | sh -
Repeating this command on each of your nodes will bring your cluster up to the latest stable release without any manual intervention.
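You can confirm that the rollout succeeded by checking the version each node reports:
$ kubectl get nodes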
Upgrading K8s clusters created with kubeadm requires a few more steps. You have to get the latest release of kubeadm itself:
# Updating to v1.24.1
$ apt-get update
$ apt-get install -y kubeadm=1.24.1
Next, use kubeadm to upgrade your control plane:
$ kubeadm upgrade apply v1.24.1
Finally, upgrade kubelet and kubectl on each of your worker nodes:
$ apt-get update
$ apt-get install -y kubelet=1.24.1 kubectl=1.24.1
$ systemctl daemon-reload
$ systemctl restart kubelet
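To minimize disruption, it's also common practice to drain each worker before upgrading it and uncordon it afterwards; the node name here is a placeholder:
$ kubectl drain worker-1 --ignore-daemonsets
$ kubectl uncordon worker-1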
K3s once again offers a simpler hands-off experience. Kubeadm upgrades are still fairly straightforward, but you’ve got more commands to run. This increases the margin for error during upgrades. With K3s, you can simply call the installation script and wait while your cluster updates.
Speed
A K8s cluster and a K3s cluster deployed on equivalent hardware should run your containers with comparable performance, as they use the same containerd runtime. However, K3s is so lightweight that it installs and starts the control plane considerably faster than K8s.
In contrast, upstream Kubernetes can take several minutes to get running, whereas K3s is typically usable in under one minute. This makes K3s much better suited to ephemeral clusters, such as local development and staging environments. You don't want to wait for a regular Kubernetes distribution to start each time you test a change. With K3s, you can quickly spin up a cluster and then tear it down once you've finished.
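If you want to measure this yourself, a rough approach (assuming k3d is installed) is to time how long a fresh cluster takes to become ready:
$ time k3d cluster create speed-test --wait
$ k3d cluster delete speed-test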
Security
K3s is secure by design and presents a minimal attack surface. Everything’s packaged into one self-contained binary that contains the bare essentials to run Kubernetes. The reduced number of dependencies makes it less likely that security flaws will creep in.
This isn’t to say K8s is insecure. Kubernetes has become one of the most used open source projects, adopted by major companies across the world. It comes under regular scrutiny to ensure clusters are protected against attack.
Whichever solution you use, you should take responsibility for your own protection by hardening your cluster after installation. Both K3s and Kubernetes have their own recommendations for creating secure clusters.
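As a starting point, a tool like kube-bench can audit your nodes against the CIS Kubernetes Benchmark by running as a one-off job (the manifest URL reflects the project's current repository layout):
$ kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
$ kubectl logs job/kube-bench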
Ideal Use Cases of K3s and K8s
K3s has minimal hardware requirements, which makes it suitable for heavily resource-constrained environments that can’t accommodate a standard K8s cluster. Eschewing components like etcd in favor of smaller alternatives means K3s can fit onto IoT products and edge devices that otherwise wouldn’t support Kubernetes.
Consequently, K3s is ideal for all environments with limited resources. This isn’t the end of its remit, though: the project’s compact single binary means it’s also a convenient solution for running local single-node Kubernetes clusters in development. An engineer can spin up their own environment in seconds without installing dependencies or incurring costs on a managed Kubernetes service in the cloud. You could even run K3s within your CI pipeline scripts to simplify testing routines.
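A hypothetical CI step might look like the following; the manifest path and deployment name are placeholders:
$ k3d cluster create ci-test
$ kubectl apply -f ./manifests/
$ kubectl wait --for=condition=Available deployment/my-app --timeout=120s
$ k3d cluster delete ci-test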
Although K3s is an appropriate choice for many different environments, there are situations where a bigger Kubernetes distribution makes sense. This could be when you’re running a large-scale deployment or you need to use technologies that depend on K8s-specific components. For example, K3s’s removal of etcd could cause problems if you want to set up detailed monitoring of control plane activity.
In some industries, there might be a regulatory or compliance reason to use upstream Kubernetes. K3s may not necessarily be approved for use in highly sensitive environments. If you’re operating at this scale, you’ve probably got the resources to effectively deploy and maintain K8s using kubeadm or another tool. K3s’s simplicity might prove too limiting for large clusters where you want full control over individual control plane components.
Conclusion
Kubernetes is the leading orchestrator for deploying and distributing containers at scale. While it's been instrumental in establishing containers as a production-ready technology, plain Kubernetes remains complex and demanding to maintain.
K3s solves these challenges by delivering a CNCF-certified Kubernetes distribution packaged as a single 50 MB binary. Its lightweight approach lets you run the same version of Kubernetes at the edge, on your workstation, and in more conventional cloud scenarios.
Whichever K8s distribution you choose, you’ll need a CI/CD pipeline to reliably initiate new deployments as you make code changes. Automated container rollouts guarantee consistency between each deployment, eliminating the brittleness that occurs when developers manually create them.
Earthly is an effortless CI/CD framework that runs everywhere, whether you’re using GitHub, GitLab, Jenkins, or your laptop’s console. It produces repeatable language-agnostic builds so you can easily reproduce failures for faster iteration and simpler debugging.
Earthly Cloud: Consistent, Fast Builds, Any CI
Consistent, repeatable builds across all environments. Advanced caching for faster builds. Easy integration with any CI. 6,000 build minutes per month included.