This is a Vagrant Environment for playing with Sidero.
For playing with Talos see the rgl/talos-vagrant repository.
Install docker, vagrant, vagrant-libvirt, and the Ubuntu Base Box.
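For example, on a Debian/Ubuntu host, the first three can be installed along these lines (a sketch only; the apt package names are assumptions that vary by distribution, and installing the Ubuntu Base Box is not shown here):

```bash
# assumption: apt package names; adjust for your distribution.
sudo apt-get install -y docker.io vagrant libvirt-daemon-system
# install the libvirt provider plugin for vagrant.
vagrant plugin install vagrant-libvirt
```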
If you want to connect to the external physical network, you must configure your host network as described in rgl/ansible-collection-tp-link-easy-smart-switch (e.g. have the br-rpi Linux bridge) and set CONFIG_PANDORA_BRIDGE_NAME in the Vagrantfile.
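If you prefer to manage the bridge manually instead of using that collection, a minimal iproute2 sketch (the br-rpi name matches the one above; attaching your physical interface to the bridge is not shown):

```bash
# assumption: manual bridge management with iproute2 instead of the
# rgl/ansible-collection-tp-link-easy-smart-switch playbooks.
sudo ip link add br-rpi type bridge
sudo ip link set br-rpi up
```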
This environment sometimes hits the GitHub rate limits (at the time of writing, these were 60 unauthenticated requests per hour), so you might want to export the GITHUB_USERNAME/GITHUB_TOKEN environment variables before running vagrant to get a higher limit (5,000 requests per hour).
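For example (replace the values with your own GitHub username and personal access token):

```bash
export GITHUB_USERNAME='your-github-username'
export GITHUB_TOKEN='your-github-personal-access-token'
```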
NB This token is also saved in the .netrc file inside the VMs.
Bring up the pandora virtual machine:
```bash
vagrant up --provider=libvirt --no-destroy-on-error pandora
```

Enter the pandora virtual machine and watch the progress:

```bash
vagrant ssh pandora
sudo -i
watch kubectl get servers,machines,clusters
```

In another shell, bring up the example cluster virtual machines:
```bash
vagrant up --provider=libvirt --no-destroy-on-error
```

Access the example cluster:

```bash
vagrant ssh pandora
sudo -i
kubectl get talosconfig \
  -l cluster.x-k8s.io/cluster-name=example \
  -o jsonpath='{.items[0].status.talosConfig}' \
  >example-talosconfig.yaml
first_control_plane_ip="$(jq -r '.[] | select(.role == "controlplane") | .ip' /vagrant/shared/machines.json | head -1)"
talosctl --talosconfig example-talosconfig.yaml config endpoints $first_control_plane_ip
talosctl --talosconfig example-talosconfig.yaml config nodes $first_control_plane_ip
# NB the following will only work after the example cluster has a working
# control plane (e.g. after the cp1 node is ready).
talosctl --talosconfig example-talosconfig.yaml kubeconfig example-kubeconfig.yaml
cp example-*.yaml /vagrant/shared
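# (optional) wait for the example cluster to become healthy.
# NB `talosctl health` is a standard talosctl command, but running it here,
# against the config created above, is an assumption and not part of the
# original steps.
talosctl --talosconfig example-talosconfig.yaml health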
kubectl --kubeconfig example-kubeconfig.yaml get nodes -o wide
```

Access kubernetes with k9s:

```bash
vagrant ssh pandora
sudo -i
k9s # management cluster.
k9s --kubeconfig example-kubeconfig.yaml # example cluster.
```

You can easily capture and see traffic from the host with the wireshark.sh
script, e.g., to capture the traffic from the eth1 interface:
```bash
./wireshark.sh pandora eth1
```

- only the `amd64` architecture is currently supported by sidero.
- see `kubectl get environment default -o yaml`
- Sidero
  - `clusterctl config repositories`
  - `kubectl get crd servers.metal.sidero.dev -o yaml`
  - `kubectl get clusters`
  - `kubectl get servers`
  - `kubectl get serverclasses`
  - `kubectl get machines`
  - `kubectl get taloscontrolplane`
  - `kubectl get environment default -o yaml`
  - `kubectl get ns`
  - `kubectl -n sidero-system get pods`
  - `kubectl -n sidero-system logs -l app=sidero`
  - `kubectl -n capi-webhook-system get deployments`
  - `kubectl -n capi-webhook-system get pods`
  - `kubectl -n capi-webhook-system logs -l control-plane=controller-manager -c manager`
  - `kubectl -n sidero-system logs -l control-plane=caps-controller-manager -c manager`
  - `kubectl -n cabpt-system logs deployment/cabpt-controller-manager -c manager`
- Talos
  - Troubleshooting Control Plane
  - `talosctl -n cp1 dashboard`
  - `talosctl -n cp1 logs controller-runtime`
  - `talosctl -n cp1 logs kubelet`
  - `talosctl -n cp1 disks`
  - `talosctl -n cp1 get resourcedefinitions`
  - `talosctl -n cp1 get machineconfigs -o yaml`
  - `talosctl -n cp1 get staticpods -o yaml`
  - `talosctl -n cp1 get staticpodstatus`
  - `talosctl -n cp1 get manifests`
  - `talosctl -n cp1 get services`
  - `talosctl -n cp1 get addresses`
  - `talosctl -n cp1 list /system`
  - `talosctl -n cp1 list /var`
  - `talosctl -n cp1 read /proc/cmdline`
- Kubernetes
  - `kubectl get events --all-namespaces --watch`
  - `kubectl --namespace kube-system get events --watch`