Lightweight Kubernetes at the Edge with K3s

K3s is a production-grade Kubernetes distribution packaged as a single binary under 100 MB. It was built for environments where resources are constrained and operational simplicity matters: edge locations, IoT gateways, retail stores, factory floors, branch offices, and CI/CD pipelines where you need a real cluster but cannot justify the overhead of a full Kubernetes deployment.

K3s achieves its small footprint by replacing etcd with SQLite (by default), embedding containerd directly, removing in-tree cloud provider and storage plugins, and packaging everything into a single binary. Despite these changes, K3s is a fully conformant Kubernetes distribution – it passes the CNCF conformance tests and runs standard Kubernetes workloads without modification.

Resource Requirements

K3s runs on hardware where a standard Kubernetes deployment would be impractical:

| Component | Minimum | Recommended |
| --- | --- | --- |
| Server node (control plane) | 1 CPU, 512 MB RAM | 2 CPU, 2 GB RAM |
| Agent node (worker only) | 1 CPU, 256 MB RAM | 1 CPU, 1 GB RAM |
| Disk | 1 GB free | SSD, 10+ GB |

Compare this to kubeadm-based clusters that need 2 CPU and 2 GB RAM minimum per control plane node just for the Kubernetes components, before any workloads.

Installation

Single Server

The fastest path from zero to running cluster:

curl -sfL https://get.k3s.io | sh -

This installs K3s as a systemd service, starts it immediately, and writes the kubeconfig to /etc/rancher/k3s/k3s.yaml. The node is both server (control plane) and agent (worker).

# Check the cluster
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
kubectl get pods -A

You will see Traefik (ingress controller), CoreDNS, local-path-provisioner (dynamic PV provisioning), and metrics-server running out of the box. These are bundled defaults – all configurable or removable.

Installation Options

Control what K3s installs with flags:

# Install without Traefik (you will use your own ingress controller)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Install without the bundled load balancer (ServiceLB)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb" sh -

# Install with a specific version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.30.2+k3s1" sh -

# Install with a custom data directory (useful when /var is small)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--data-dir /opt/k3s" sh -

# Set the node name explicitly
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-name edge-store-001" sh -

Adding Agent Nodes

On the server, retrieve the node token:

cat /var/lib/rancher/k3s/server/node-token

On each agent node:

curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 K3S_TOKEN=your-token sh -

The agent joins the cluster and begins accepting workloads. No additional configuration is required.
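
A quick way to confirm the join from the server (the node name and label below are placeholders, not K3s defaults):

# The new agent should appear and reach Ready within about a minute
kubectl get nodes -o wide

# Optionally label the node so workloads or Fleet targeting can select it
kubectl label node edge-store-001 location=store-001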

Embedded etcd vs. External Database

K3s supports three data store options:

SQLite (Default, Single Server)

The default for single-server installations. Zero configuration, zero additional processes. The SQLite database is stored at /var/lib/rancher/k3s/server/db/state.db.

Use when: Single server node, dev/test environments, small edge deployments where control plane HA is not required.

Do not use when: You need more than one server node. SQLite does not support concurrent writes from multiple K3s servers.
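
Because the whole datastore is one file, a crude but effective backup is to stop K3s briefly and copy the database directory (running pods keep running while the service is stopped, but the API is unavailable). A minimal sketch, with an arbitrary backup path:

# Quiesce the datastore, copy it, restart
systemctl stop k3s
cp -a /var/lib/rancher/k3s/server/db /backup/k3s-db-$(date +%F)
systemctl start k3s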

Embedded etcd (HA, No External Dependencies)

For high-availability control planes without managing a separate etcd cluster:

# Initialize the first server with embedded etcd
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--cluster-init" sh -

# Join additional servers to the cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=your-token sh -s - server \
  --server https://first-server:6443

You need an odd number of server nodes (3 or 5) for etcd quorum. Three servers tolerate one failure. Five tolerate two.

Use when: You need HA at the edge without managing external infrastructure. The embedded etcd is fully managed by K3s – snapshots, compaction, and defragmentation are handled automatically.
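
On-demand snapshots and the snapshot schedule are also exposed directly; a short sketch using the documented etcd-snapshot subcommands and server flags:

# Take a named snapshot now and list what exists
k3s etcd-snapshot save --name pre-upgrade
k3s etcd-snapshot ls

# Server flags for tuning the automatic schedule (set at install time or in config)
#   --etcd-snapshot-schedule-cron "0 */6 * * *"
#   --etcd-snapshot-retention 10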

External Database (PostgreSQL or MySQL)

For organizations that prefer a managed database as the data store:

curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://k3s:password@db.example.com:5432/k3s?sslmode=require"

Use when: You already have managed database infrastructure, or you want the control plane data store separated from the nodes for independent scaling and backup.
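
Each additional server simply points at the same endpoint with a shared token; a sketch (in practice you would also put a fixed registration address or load balancer in front of the servers):

# Second and subsequent servers reuse the same external datastore
curl -sfL https://get.k3s.io | sh -s - server \
  --token your-token \
  --datastore-endpoint="postgres://k3s:password@db.example.com:5432/k3s?sslmode=require"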

Air-Gapped Deployment

Edge locations often have no internet access. K3s supports fully air-gapped installation.

Step 1: Download artifacts on a connected machine.

# Download the K3s binary
wget https://github.com/k3s-io/k3s/releases/download/v1.30.2+k3s1/k3s

# Download the air-gap images tarball
wget https://github.com/k3s-io/k3s/releases/download/v1.30.2+k3s1/k3s-airgap-images-amd64.tar.zst

# Download the install script
wget https://get.k3s.io -O install.sh

Step 2: Transfer to the target node. USB drive, local network, satellite link – whatever your air-gapped environment supports.

Step 3: Install on the target node.

# Place the binary
chmod +x k3s
cp k3s /usr/local/bin/

# Place the images where K3s expects them
mkdir -p /var/lib/rancher/k3s/agent/images/
cp k3s-airgap-images-amd64.tar.zst /var/lib/rancher/k3s/agent/images/

# Run the install script in air-gap mode
chmod +x install.sh
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

K3s imports the images from the tarball at startup instead of pulling from a registry.
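
To confirm the import, list the cached images through the embedded containerd; both subcommands ship with the K3s binary:

# Images known to K3s's embedded containerd
k3s ctr images ls | head

# The same view through the CRI client
k3s crictl images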

Private registry for workload images:

For your own application images, configure K3s to use a private registry:

# /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.local:5000"
  "registry.local:5000":
    endpoint:
      - "https://registry.local:5000"
configs:
  "registry.local:5000":
    tls:
      cert_file: /etc/certs/registry.crt
      key_file: /etc/certs/registry.key
      ca_file: /etc/certs/ca.crt
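
K3s reads registries.yaml at startup, so restart the service after creating or editing it:

# Pick the service that exists on the node
systemctl restart k3s          # server nodes
systemctl restart k3s-agent    # agent-only nodes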

Fleet Management with Rancher

Managing one K3s cluster is easy. Managing hundreds at edge locations requires fleet management. Rancher and its Fleet component are purpose-built for this.

Architecture

Rancher runs on a central management cluster. Each edge K3s cluster registers with Rancher as a downstream cluster. Fleet (bundled with Rancher) provides GitOps-based deployment across all clusters.

Central Rancher Cluster
    |
    +-- Fleet Controller
         |
         +-- ClusterGroup: retail-stores
         |    +-- store-001 (K3s)
         |    +-- store-002 (K3s)
         |    +-- store-003 (K3s)
         |
         +-- ClusterGroup: warehouses
              +-- warehouse-east (K3s)
              +-- warehouse-west (K3s)

Registering Edge Clusters

From the Rancher UI or CLI, generate a registration command:

# On the edge K3s cluster, run the generated registration command
kubectl apply -f https://rancher.central.example.com/v3/import/cluster-registration-token.yaml

The edge cluster’s agent connects outbound to Rancher. This is critical for edge environments – the edge cluster initiates the connection, so no inbound firewall rules are needed at the edge site.
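
Once the manifest is applied, the Rancher agent runs on the edge cluster and maintains that outbound connection; you can check it locally:

# On the edge cluster: the cluster agent should be Running
kubectl get pods -n cattle-system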

GitOps with Fleet

Fleet watches Git repositories and deploys manifests to target clusters based on labels and group selectors.

# fleet.yaml in your Git repository
defaultNamespace: my-app
helm:
  releaseName: store-app
  chart: ./chart
  values:
    replicaCount: 2
    image:
      tag: v1.5.0
targetCustomizations:
- name: high-traffic-stores
  clusterSelector:
    matchLabels:
      traffic-tier: high
  helm:
    values:
      replicaCount: 4
      resources:
        requests:
          cpu: 500m
- name: low-resource-sites
  clusterSelector:
    matchLabels:
      hardware: constrained
  helm:
    values:
      replicaCount: 1
      resources:
        requests:
          cpu: 100m
          memory: 64Mi

Push to your Git repo, and Fleet rolls out the appropriate configuration to each cluster based on its labels. High-traffic stores get 4 replicas; constrained hardware sites get 1 replica with minimal resources.
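
The fleet.yaml describes what to deploy; Fleet still needs to be told which repository to watch, which is done with a GitRepo resource on the management cluster. A minimal sketch (the repository URL is a placeholder; the cluster group matches the example above):

# On the Rancher/Fleet management cluster
kubectl apply -f - <<'EOF'
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: store-app
  namespace: fleet-default
spec:
  repo: https://git.example.com/platform/store-app
  branch: main
  targets:
  - clusterGroup: retail-stores
EOF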

Edge-Specific Networking

Node Port Ranges

By default, K3s uses the standard Kubernetes NodePort range (30000-32767). On edge devices with limited ports or specific firewall rules, customize it:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--service-node-port-range 8000-9000" sh -

Flannel Backends

K3s uses Flannel for CNI by default with the VXLAN backend. For edge environments with specific network requirements:

# WireGuard encryption (built-in, no extra software)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend wireguard-native" sh -

# Host gateway (no overlay, requires L2 connectivity between nodes)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend host-gw" sh -

# Disable Flannel entirely (bring your own CNI like Cilium)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend none" sh -

WireGuard is particularly valuable at the edge – it encrypts all pod-to-pod traffic between nodes with minimal CPU overhead, important when edge nodes communicate over untrusted networks.
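
To confirm the WireGuard backend is active on a node, check for the flannel WireGuard interface (the interface name reflects current Flannel behavior and may vary by version; wg requires the wireguard-tools package):

# The WireGuard-backed flannel interface
ip link show flannel-wg

# Peer list and handshake times, if wireguard-tools is installed
wg show flannel-wg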

ServiceLB (Formerly Klipper)

K3s includes a built-in load balancer that works without cloud provider integration. When you create a LoadBalancer Service, K3s runs a DaemonSet that binds the service port on every node’s host IP. This gives you external access without MetalLB or a cloud load balancer.

# Verify ServiceLB is running
kubectl get pods -A | grep svclb
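
To see it in action, expose any Deployment as a LoadBalancer Service; ServiceLB fills in the node address(es) as the external IP (the names below are placeholders):

# Create and expose a test workload
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=LoadBalancer --port=80

# EXTERNAL-IP is populated with node IPs by ServiceLB
kubectl get svc hello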

For production edge deployments, consider MetalLB for more control over IP allocation, especially when multiple services need distinct external IPs.

Upgrades

K3s supports automated upgrades through the system-upgrade-controller:

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml

Define upgrade plans:

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/master
      operator: In
      values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 2
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/master
      operator: DoesNotExist
  prepare:
    args: ["prepare", "server-plan"]
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable

The agent plan waits for the server plan to complete before upgrading worker nodes. Concurrency controls how many nodes upgrade simultaneously – set to 1 for server nodes to maintain quorum.
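
Assuming both plans are saved to a single file (upgrade-plans.yaml is a placeholder name), applying it starts the rolling upgrade, which you can watch at the node level:

kubectl apply -f upgrade-plans.yaml

# Plan status, upgrade jobs, and nodes being cordoned and upgraded
kubectl -n system-upgrade get plans,jobs
kubectl get nodes -w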

When to Use K3s#

| Scenario | K3s Fit |
| --- | --- |
| Edge retail/IoT locations with constrained hardware | Excellent – designed for this |
| CI/CD ephemeral test clusters | Excellent – starts in seconds |
| Dev/test local Kubernetes | Good – lighter than minikube with Docker |
| Small production clusters (< 50 nodes) | Good – fully conformant |
| Large production clusters (100+ nodes) | Use standard Kubernetes – K3s’s optimizations matter less at scale |
| Managed cloud Kubernetes replacement | Not recommended – use your cloud provider’s managed offering |

K3s removes the operational weight of Kubernetes without removing Kubernetes itself. For edge computing where you need container orchestration on hardware with 1 GB of RAM, it is the standard choice.