A guide to install K3s on a VM or Raspberry Pi with a custom Traefik Ingress Controller, SSL certificate management via cert-manager, and Let's Encrypt.
K3s is a lightweight, fully conformant Kubernetes distribution designed for resource-constrained environments like edge computing, IoT, and ARM devices. This repository provides step-by-step instructions and ready-to-use manifests to get a production-ready cluster running with HTTPS ingress.
```
┌─────────────┐     ┌────────────────────┐     ┌─────────────────┐     ┌──────────────┐     ┌────────────┐
│   Client    │────▶│  Traefik Ingress   │────▶│  Ingress Rules  │────▶│  ClusterIP   │────▶│ Go Webapp  │
│  (Browser)  │     │    Controller      │     │  (TLS via LE)   │     │   Service    │     │    Pods    │
└─────────────┘     └────────────────────┘     └─────────────────┘     └──────────────┘     └────────────┘
                              │                         │
                         K3s Cluster      cert-manager / Let's Encrypt
```
- Prerequisites
- Repository Structure
- What's K3s?
- 1. Install K3s with k3sup
- 2. Install Ingress Controller
- 3. Deploy a Dummy App
- 4. cert-manager
- 5. Expose App Externally via Ingress
- Troubleshooting
- How to Contribute
- License
## Prerequisites

- kubectl - Kubernetes CLI
- k3sup - K3s installer over SSH
- k9s - Optional TUI for cluster management
- Helm v3 - Package manager for Kubernetes
## Repository Structure

| File | Description |
|---|---|
| `traefik.yml` | HelmChart resource for the Traefik v3 ingress controller (chart v39.0.2, deployed to the `kube-system` namespace) |
| `cluster-issuer.yml` | Let's Encrypt ClusterIssuer using the ACME HTTP-01 challenge with Traefik |
| `dummy_app.yml` | Sample Go webapp Deployment + ClusterIP Service (`nicomincuzzi/go-webapp:0.1.0`, port 3030) |
| `ingress.yml` | Ingress resource routing `gowebapp.dev.pettycashmate.co` to the dummy app with TLS |
## What's K3s?

Everything you need to know about K3s can be found in the official K3s documentation.
## 1. Install K3s with k3sup

This step installs K3s on a remote machine via SSH. The `--disable traefik` flag prevents the default Traefik installation so we can deploy a custom version in the next step.
```sh
export IP=<HOST_IP>

k3sup install \
  --ip $IP \
  --user root \
  --ssh-key <SSH_PATH> \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k8s \
  --k3s-extra-args '--disable traefik'
```

Options for `install`:

- `--cluster` - start this server in clustering mode using embedded etcd (embedded HA)
- `--skip-install` - if you already have k3s installed, you can just run this command to get the `kubeconfig`
- `--ssh-key` - specify a specific path for the SSH key for remote login
- `--local-path` - default is `./kubeconfig` - set the file where you want to save your cluster's `kubeconfig`. By default this file will be overwritten.
- `--merge` - merge the config into an existing file instead of overwriting it (e.g. to add config to the default kubectl config, use `--local-path ~/.kube/config --merge`)
- `--context` - default is `default` - set the name of the kubeconfig context
- `--ssh-port` - default is `22`, but you can specify an alternative port, i.e. `2222`
- `--k3s-extra-args` - optional extra arguments to pass to the k3s installer, wrapped in quotes, i.e. `--k3s-extra-args '--disable traefik'` or `--k3s-extra-args '--docker'`. For multiple args, combine them within single quotes: `--k3s-extra-args '--disable traefik --docker'`
- `--k3s-version` - set a specific version of k3s, i.e. `v0.9.1`
- `--ipsec` - enforces the optional extra argument for k3s: `--flannel-backend` option `ipsec`
- `--print-command` - prints out the command sent over SSH to the remote computer
- `--datastore` - used to pass a SQL connection string to the `--datastore-endpoint` flag of k3s. You must use the format required by k3s as documented in the Rancher docs.
See even more install options by running `k3sup install --help`.
Note: Traefik can be configured by editing the `traefik.yml` file. To prevent k3s from using or overwriting the modified version, deploy k3s with `--disable traefik` and store the modified copy in the `k3s/server/manifests` directory. For more information, refer to the official Traefik Helm chart.
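For reference, a k3s HelmChart manifest along these lines could drive the custom Traefik install. This is a minimal sketch: the chart version and target namespace come from the repository table above, while the repo URL and remaining values are assumptions to adapt to your setup.

```yaml
# Sketch of a k3s HelmChart resource for Traefik v3.
# Chart version and namespace match this repository's traefik.yml description;
# everything else is an assumption to adjust for your environment.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  repo: https://traefik.github.io/charts   # official Traefik chart repository
  chart: traefik
  version: "39.0.2"
  targetNamespace: kube-system
```

Placed in `k3s/server/manifests`, this resource is picked up by k3s's built-in Helm controller, which installs the chart automatically.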
Build a 3-node Kubernetes cluster with k3s and k3sup; k3sup uses SSH to make the whole process quick and painless.
Note: Running k3s/MicroK8s on some ARM hardware may run into difficulties because `cgroups` are not enabled by default. This can be remedied on Ubuntu by editing the boot parameters:

```sh
sudo vi /boot/firmware/cmdline.txt
```

Note: In some Raspberry Pi Linux distributions the boot parameters are in `/boot/cmdline.txt` or `/boot/firmware/nobtcmd.txt`.

Add the following:

```
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
```

More details: https://microk8s.io/docs/install-alternatives#heading--arm
### 1. Create the server
In Kubernetes terminology, the server is often called the master.
```sh
export IP=<HOST_IP>

k3sup install \
  --ip $IP \
  --user root \
  --ssh-key <SSH_PATH> \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k8s \
  --k3s-extra-args '--disable traefik'
```

k3s starts up so quickly that it may already be ready for use by the time the command completes.
Test it out:
Since the install command above merged the cluster's credentials into `$HOME/.kube/config` under the `my-k8s` context:

```sh
export KUBECONFIG=$HOME/.kube/config
kubectl config use-context my-k8s
kubectl get node -o wide
```

```
NAME     STATUS   ROLES    AGE   VERSION         INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master   Ready    master   15h   v1.19.15+k3s2   192.168.1.45   <none>        Ubuntu 20.04.3 LTS   5.4.0-1045-raspi   containerd://1.4.11-k3s1
```

### 2. Extend the cluster
You can add additional hosts in order to expand available capacity.
```sh
k3sup join --ip <WORKER_X_IP> --server-ip <SERVER_IP> --user root --ssh-key <SSH_PATH>
```

Replace `<WORKER_X_IP>` with each worker's IP address.
### 3. Control plane node isolation: taint
Unlike upstream Kubernetes, the k3s server node is eligible to run workloads destined for worker nodes, because it does not carry the `node-role.kubernetes.io/master=true:NoSchedule` taint that is typically present on control plane nodes.
Tainting your master node is recommended to prevent workloads from being scheduled on it, unless you are only running a single-node k3s cluster on a Raspberry Pi.
```sh
kubectl taint nodes <SERVER_NAME> node-role.kubernetes.io/master=true:NoSchedule
```

Replace `<SERVER_NAME>` with your k3s server node NAME as shown in the `kubectl get nodes` output.
### 4. Optional labels
By default, k3s does not label agent nodes with the worker role (unlike k8s). You can label them manually for clarity:
```sh
kubectl label node <WORKER_NAME> node-role.kubernetes.io/worker=''
```

Replace `<WORKER_NAME>` with the hostname of each worker node.
## 2. Install Ingress Controller

Deploy the custom Traefik v3 ingress controller from the official Helm chart repository. This replaces the default K3s Traefik installation (which we disabled in step 1) with a version we can configure.
```sh
kubectl apply -f traefik.yml
```

Verify that everything is working by running `kubectl get pods --all-namespaces`.
Alternatively, browse to `http://<HOST_IP>/`: you should see a `404 page not found` response, which confirms Traefik is running and listening.
## 3. Deploy a Dummy App

Deploy a dummy app, based on the `nicomincuzzi/go-webapp` image, and its service:
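As a rough sketch, `dummy_app.yml` could look like the following; the image and container port are taken from the repository table, while the resource names and labels are assumptions:

```yaml
# Hypothetical sketch of dummy_app.yml: a Deployment plus a ClusterIP Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-webapp               # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-webapp
  template:
    metadata:
      labels:
        app: go-webapp
    spec:
      containers:
        - name: go-webapp
          image: nicomincuzzi/go-webapp:0.1.0
          ports:
            - containerPort: 3030
---
apiVersion: v1
kind: Service
metadata:
  name: go-webapp               # assumed name, referenced by the Ingress
spec:
  type: ClusterIP
  selector:
    app: go-webapp
  ports:
    - port: 3030
      targetPort: 3030
```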
```sh
kubectl apply -f dummy_app.yml
```

Verify your app responds correctly:

```sh
kubectl port-forward pod/<POD_NAME> <YOUR_LOCAL_PORT>:<POD_PORT>
```

## 4. cert-manager

cert-manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let's Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed certificates.
As an alternative to the YAML manifests referenced above, we also provide an official Helm chart for installing cert-manager. Read more here.
To install the Helm chart, follow these steps.
Create the namespace for cert-manager:
```sh
kubectl create namespace cert-manager
```

Add the Jetstack Helm repository:

Warning: It is important that this repository is used to install cert-manager. The version residing in the Helm stable repository is deprecated and should not be used.

```sh
helm repo add jetstack https://charts.jetstack.io
```

Update your local Helm chart repository cache:

```sh
helm repo update
```

cert-manager requires a number of CRD resources to be installed into your cluster as part of installation. This can be done either manually, using kubectl, or via the `installCRDs` option when installing the Helm chart.
Note: If you're using a Helm version based on Kubernetes v1.18 or below (Helm v3.2), `installCRDs` will not work with cert-manager v0.16. For more info, see the v0.16 upgrade notes.
Option 1: installing CRDs with kubectl
Install the CustomResourceDefinition resources using kubectl:
```sh
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.crds.yaml
```

Option 2: install CRDs as part of the Helm release
To automatically install and manage the CRDs as part of your Helm release, add the `--set installCRDs=true` flag to your Helm installation command, as shown in the next step.
To install the cert-manager Helm chart:
```sh
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.2.0 \
  --create-namespace \
  --set installCRDs=true
```

The default cert-manager configuration is good for the majority of users, but a full list of the available options can be found in the Helm chart README.
Once you've installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:
```sh
kubectl get pods --namespace cert-manager
```

```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
```

You should see the cert-manager, cert-manager-cainjector, and cert-manager-webhook pods in a Running state. It may take a minute or so for the TLS assets required by the webhook to be provisioned, so the webhook pod may take longer to start the first time than the others. If you experience problems, please check the FAQ guide.
In order to configure cert-manager to begin issuing certificates, you must first create Issuer or ClusterIssuer resources. These resources represent a particular signing authority and describe how certificate requests will be honored. You can read more about the concept of Issuers here.

cert-manager supports multiple 'in-tree' issuer types, denoted by membership in the cert-manager.io group. It also supports external issuers, installed separately into your cluster, that belong to other groups. External issuer types behave no differently and are treated the same as in-tree issuer types.
When using ClusterIssuer resource types, ensure you understand the Cluster Resource Namespace where other Kubernetes resources will be referenced from.
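A ClusterIssuer along the lines used here (Let's Encrypt ACME with the HTTP-01 challenge solved via Traefik, per the repository table) might look like this sketch; the resource name, account-key secret name, and email address are assumptions:

```yaml
# Hypothetical sketch of cluster-issuer.yml.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                     # assumed name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]             # replace with your email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key     # assumed secret name
    solvers:
      - http01:
          ingress:
            class: traefik
```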
Create the ClusterIssuer resource:
```sh
kubectl apply -f cluster-issuer.yml
```

Verify that it's ready:

```sh
kubectl get clusterissuer
```

## 5. Expose App Externally via Ingress

Finally, expose your app externally by applying the Ingress resource. This creates routing rules that direct traffic from your domain to the app's Service and triggers cert-manager to provision a TLS certificate.
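As a sketch, `ingress.yml` could look like the following; the host and port come from the repository description, while the resource name, issuer name, and TLS secret name are assumptions:

```yaml
# Hypothetical sketch of ingress.yml.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-webapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  tls:
    - hosts:
        - gowebapp.dev.pettycashmate.co
      secretName: go-webapp-tls          # assumed secret name
  rules:
    - host: gowebapp.dev.pettycashmate.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-webapp          # assumed Service name
                port:
                  number: 3030
```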
```sh
kubectl apply -f ingress.yml
```

Verify the Ingress was created:

```sh
kubectl get ingress
```

Check that the TLS certificate has been issued:

```sh
kubectl get certificate
```

The certificate status should show `Ready: True` once Let's Encrypt has successfully issued it. This may take a minute or two.
## Troubleshooting

If K3s fails to start on ARM hardware, cgroups may not be enabled. Edit your boot parameters:

```sh
sudo vi /boot/firmware/cmdline.txt
```

Add:

```
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
```
Reboot the device after saving.
If `kubectl get certificate` shows the certificate is not ready:

- Check the certificate status for details: `kubectl describe certificate <name>`
- Check the challenge status: `kubectl get challenges`
- Ensure your domain's DNS A record points to your server's public IP
- Ensure port 80 is open and reachable from the internet (required for the HTTP-01 challenge)
- Verify the Ingress resource exists: `kubectl get ingress`
- Check that the Ingress host matches your DNS record: `kubectl describe ingress <name>`
- Ensure the backend Service and Pods are running: `kubectl get svc,pods`
## How to Contribute

Contributions are welcome! To get started:

- Fork the repository
- Create a feature branch: `git checkout -b feature/my-change`
- Commit your changes: `git commit -m 'Add my change'`
- Push to the branch: `git push origin feature/my-change`
- Open a Pull Request
For bug reports or feature requests, please open an issue.
## License

Distributed under the Apache-2.0 License. See the license file in the repository for more details.


