Friday, December 29, 2023

Tailscale switch

As always, the documentation for something leaves a bit unexplained. I was interested in using "tailscale switch" to switch between a small non-shared tailnet (managed by Tailscale) and a shared cyberclub tailnet (managed by Headscale). The unmentioned part is to never use "tailscale logout", which expires the authentication key. Instead, use the following procedure for setting up the multiple networks:

    tailscale login                                  # authenticate to the Tailscale-hosted tailnet
    tailscale status
    tailscale down                                   # disconnect without logging out
    tailscale login --login-server=[headscale URL]   # authenticate to the Headscale-managed tailnet
    tailscale status

In other words, first authenticate to the Tailscale-hosted network. Then run "tailscale down" and authenticate to the second network.

You can then run the following to list the available networks:

    tailscale switch --list

The output will look something like:

    ID    Tailnet     Account
    cde0  bob.github  bob@github*
    41da  othernet    othernet

The currently active network will be denoted by the asterisk at the end of the line. You can switch between the two with:

    tailscale switch ACCOUNTNAME
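
For example, using the account names from the listing above:

    tailscale switch othernet       # hop onto the Headscale-managed tailnet
    tailscale switch bob@github     # and back to the Tailscale-hosted one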

My reasoning for needing the Tailscale-hosted account: I periodically need access to a less-technical family member's network for troubleshooting. I gave them a GL.iNet Slate AX wifi router, which runs a Tailscale client (you have to add it). You can configure the physical switch (on the side of the router) to turn the tailnet on and off. End result: if they're having trouble with something in their network, they flip the switch on, call me, and I can remotely troubleshoot their house network.

Tuesday, November 28, 2023

Tasking for self...

Note to self: certs for house cluster expire in early March. You'll want the info from: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
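
The short version, per that page (kubeadm has both a check and a renew subcommand):

    # Show expiration dates for all kubeadm-managed certs.
    kubeadm certs check-expiration

    # Renew everything in one shot, then restart the control-plane pods.
    kubeadm certs renew all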

Sunday, October 22, 2023

Ouch

I'm guessing that there are others whose muscle memory, when typing, is a bit munged. I finally did a full re-indexing of the document search engine (a little over 34K docs), then tested the engine but misspelled the search ("epbf" instead of "ebpf"). This produced 4 answers. I then misspelled "falco" by typing "flaco" (which Google indicates is Spanish for "skinny" or "thin"). This produced two documents with "falco" misspelled and two Spanish language files. I'm thinking I need to research whether Recoll can do fuzzy searches.

Friday, August 18, 2023

Breaking/fixing my K8S controller

Just a bit of blowing my own horn...

I managed to break the home lab's K8S config a week or so back, while attempting to troubleshoot a friend's cluster. The primary symptom (other than Multus not working) was a "NoExecute" status on the controller when listing taints for the nodes. There were also log entries complaining about not being able to delete sandboxes. This was also causing issues with Falco, which was deploying only 4 of an expected 6 pods when deployed with Helm (i.e., the DS wasn't installing on the controller); a story for another time, I think.
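
For reference, a quick way to list each node's taint effects (the jsonpath here is a sketch; adjust to taste):

    # Print each node name followed by the effects of its taints (e.g., NoSchedule, NoExecute).
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}'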

In any case, after a number of Google searches and using "kubectl describe" against a few resources, I traced it back to "Network plugin returns error: cni plugin not initialized". This turned out to be Multus.

Uninstalling and re-installing Multus corrected the issue. K8S then woke up and destroyed the old sandboxes, fired up the missing Falco pods, and the taint on the controller went back to its normal "NoSchedule" status.

Two things learned today:

  1. Piping "kubectl describe ..." into /bin/less is a good troubleshooting tool.
  2. The same YAML file that you use to install something can be used to delete it. In other words: "kubectl create -f multus-thick.yaml" to install and "kubectl delete -f multus-thick.yaml" to uninstall (a quick sketch follows this list).
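
A minimal sketch of that round trip, assuming multus-thick.yaml is the same manifest used for the original install:

    # Tear down and recreate the Multus resources from the same manifest.
    kubectl delete -f multus-thick.yaml
    kubectl create -f multus-thick.yaml

    # Then confirm the controller's taint settled back to NoSchedule.
    kubectl describe node [controller-node] | grep Taints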

Sunday, August 13, 2023

Prototyping my Falco install

Just spent a couple hours getting Falco + Sidekick + UI + Redis figured out. Following works. Next up: getting it to work in K8s.

#!/bin/bash
# Prototype Falco stack: Redis (for the UI), Falco itself, Sidekick, and the
# Sidekick UI. 192.168.2.22 is the Docker host's LAN address; adjust as needed.

# RediSearch backend used by falcosidekick-ui for event storage and search.
docker run -d -p 6379:6379 redislabs/redisearch:2.2.4

# Falco with the modern eBPF probe; privileged, with the host's /proc and
# Docker socket mounted so it can observe host/container activity. Events
# are POSTed to Sidekick via the HTTP output.
docker run -itd --name falco \
           --privileged \
           -v /var/run/docker.sock:/host/var/run/docker.sock \
           -v /proc:/host/proc:ro \
           -e HTTP_OUTPUT_URL=http://192.168.2.22:2801 \
           falcosecurity/falco-no-driver:latest falco --modern-bpf

# Sidekick receives Falco events on 2801 and forwards them to the web UI.
docker run -itd --name falcosidekick -p 2801:2801 \
           -e WEBUI_URL=http://192.168.2.22:2802 \
           falcosecurity/falcosidekick

# The web UI, backed by the RediSearch container above.
docker run -itd --name fs-ui -p 2802:2802 \
           -e FALCOSIDEKICK_UI_REDIS_URL=192.168.2.22:6379 \
           falcosecurity/falcosidekick-ui falcosidekick-ui
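
Once everything is up, Sidekick's health endpoint makes for a quick sanity check, and its test endpoint pushes a dummy event through to the UI (both endpoints per the falcosidekick docs; same host IP as above):

    curl http://192.168.2.22:2801/ping           # should answer "pong"
    curl -X POST http://192.168.2.22:2801/test   # injects a test event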


Saturday, July 8, 2023

Krew custom columns

My contribution to the custom-cols plugin for Krew: show what nodes pods are running on.

Create a file ~/.krew/store/custom-cols/v0.0.5/templates/node.tpl so that it contains:

 NAME             NODE             STATUS 
 .metadata.name   .spec.nodeName   .status.phase 

The output will look something like:

 tim@cf-desk:~$ kubectl custom-cols -o node pods -n weave 
 NAME                                         NODE   STATUS 
 weave-scope-agent-g9jgh                      cf1    Running 
 weave-scope-agent-gllg5                      cf2    Running 
 weave-scope-agent-kkm2z                      cf3    Running 
 weave-scope-app-658845597b-wnt9b             cf2    Running 
 weave-scope-cluster-agent-84f7b6767c-2vdkw   cf2    Running 

There may also be some value in making the output sortable by node. To do so, create another template (I called mine "nodes.tpl") and swap the first and second columns in each row. Then you can pipe the output through the tail and sort commands. Example template:

 NODE              NAME            STATUS 
 .spec.nodeName    .metadata.name  .status.phase 

The output will look something like:

 tim@cf-desk:~$ k custom-cols -o nodes pods -n weave|tail -n +2|sort 
 cf1    weave-scope-agent-g9jgh                      Running 
 cf2    weave-scope-agent-gllg5                      Running 
 cf2    weave-scope-app-658845597b-wnt9b             Running 
 cf2    weave-scope-cluster-agent-84f7b6767c-2vdkw   Running 
 cf3    weave-scope-agent-kkm2z                      Running 

For info: the "-n +2" in the above tells tail to start processing on the second line (i.e., skip the line with the column headers).
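
A small extension, if all you want is a per-node pod count rather than the full listing (same "nodes" template assumed):

    kubectl custom-cols -o nodes pods -n weave | tail -n +2 | awk '{print $1}' | sort | uniq -c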

Monday, July 18, 2022

Troubleshooting k8s

New command learned today, while a Gitea deployment was stalled at the "ContainerCreating" step. Short version: the following is valuable:

    kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'

It's also worth noting that the output from the above differs from the output of the following, which is not guaranteed to be in chronological order:

    kubectl get events -A

It turned out that the permissions for a volume were not correct and the PVC mount was timing out.
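
Related, and something that would have shortened the hunt: events can also be narrowed with a field selector. A sketch (the gitea namespace is an assumption):

    # Show only Warning-type events in the namespace, oldest first.
    kubectl get events -n gitea --field-selector type=Warning \
        --sort-by='.metadata.creationTimestamp'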