CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6

I recently helped my friends configure their CenturyLink Gigabit fiber service so they can use their own hardware instead of the provided equipment. This gives them a lot of flexibility in how the network is configured; however, instead of natively routing IP packets, CenturyLink requires you to authenticate over PPPoE and use 6RD for IPv6, so you have to jump through a few hoops. I’m sure there’s some reason their network works like that, but I figured I’d document what needs to be done and explain how it works.
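As a rough sketch of what this looks like on RouterOS (the interface names and credentials are placeholders, and the VLAN ID, border relay address, and 6RD prefix are commonly reported CenturyLink values, not guarantees; verify them against your own service details):

```
# PPPoE on CenturyLink fiber usually rides on VLAN 201 (assumed here)
/interface vlan add interface=ether1 vlan-id=201 name=vlan201-wan
/interface pppoe-client add interface=vlan201-wan name=pppoe-out1 \
    user=your-user@qwest.net password=your-password \
    add-default-route=yes use-peer-dns=yes
# 6RD is carried over a 6to4-style tunnel pointed at the border relay;
# 205.171.2.64 and the 2602::/24 6RD prefix are commonly reported values
/interface 6to4 add name=6rd-tunnel local-address=<your-wan-ipv4> \
    remote-address=205.171.2.64
```

The delegated IPv6 prefix is derived by embedding the 32 bits of your WAN IPv4 address into the provider's 6RD prefix, which is why the tunnel has to be updated whenever the PPPoE session comes up with a new address.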

The one where Rancher ruined my birthday

Feb 2026 Update:

This post is from 2021. Since then, I’ve made many changes to my cluster, and Rancher has also made many changes to their product. Some are better, some are still challenging. I haven’t had to rebuild my cluster from scratch since then, which is an improvement. My issues are now mostly due to self-hosted Kubernetes cluster and Longhorn PVC problems rather than Rancher itself. They deprecated RKE1 without an in-place migration plan, but I figured out how to migrate to NixOS. Every upgrade to Rancher fixes one issue, then adds another. I stopped using Rancher Fleet because it was buggy and started using cdk8s + Helm; while that had its own issues, I was able to navigate them more easily. Rancher v2.11 broke copy from the view YAML screen. v2.13 got rid of the combined Workloads screen, which pulled in deployments, jobs, etc., because of supposed performance issues. I used that feature way too much. I’ve remained on v2.11.

Home Lab - Using the bridge CNI with Systemd

This article is part of the Home Lab series.

After running my home lab for a while, I’ve started switching to a more up-to-date Linux distribution (instead of RancherOS). I’m currently testing Ubuntu Server, which leverages Systemd. Systemd-networkd is responsible for managing the network interface configuration, and it differs enough in behavior from NetworkManager that we need to update the Home Lab bridge CNI to handle it.

Previously, the CNI created the bridge network adapter when the first container started up, but this causes problems under systemd: resolved (the DNS resolver component) would eventually fail to make DNS queries, and networkd ended up with duplicate IP addresses on both eth0 (the actual uplink adapter) and cni0, because we were copying the address from one to the other.
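One way to address this (a sketch; the file name and path are my own choice, not from the original setup) is to tell systemd-networkd explicitly not to manage the CNI bridge, so it stops trying to configure addresses on it:

```
# /etc/systemd/network/10-cni0.network (hypothetical file name)
# Tell systemd-networkd to leave the CNI-managed bridge alone
[Match]
Name=cni0

[Link]
Unmanaged=yes
```

With the bridge marked unmanaged, networkd keeps the uplink configuration on eth0 only, and resolved should no longer be confused by duplicated addresses.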

Upgrading Longhorn from Helm 2 in Rancher 2.6 the hard way

Long ago, I installed Longhorn onto my Kubernetes cluster using Helm 2. Eventually Helm 3 was released and the helm 2to3 plugin was made available. However, I was not able to use helm 2to3 because Rancher didn’t deploy Tiller in the way the CLI expected, and Rancher did not provide an upgrade mechanism to handle this. Eventually Rancher 2.6 was released, which dropped Helm 2 support entirely, and I was stuck with a cluster where Longhorn was deployed but not managed by a working Helm installation.
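I won't spoil the full post here, but one generic escape hatch for orphaned releases (not necessarily the exact procedure I followed) is Helm 3's resource adoption, available since Helm 3.2: label and annotate the existing resources so a fresh Helm 3 release takes ownership of them. The release and resource names below are illustrative:

```shell
# Sketch: adopt an existing Longhorn deployment into a fresh Helm 3
# release. Adjust names and repeat for every resource the chart manages.
RELEASE=longhorn
NAMESPACE=longhorn-system

# Mark each orphaned resource as owned by the new release
kubectl -n "$NAMESPACE" label deployment longhorn-manager \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n "$NAMESPACE" annotate deployment longhorn-manager \
  meta.helm.sh/release-name="$RELEASE" \
  meta.helm.sh/release-namespace="$NAMESPACE" --overwrite

# Helm 3 (>= 3.2) will then adopt the resources instead of refusing
# to install over them
helm upgrade --install "$RELEASE" longhorn/longhorn -n "$NAMESPACE"
```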

Why is Kubernetes opening random ports?

I recently responded to the Log4j vulnerability. If you’re not aware, Log4j is a very popular Java logging library used in many Java applications. There was a vulnerability where malicious actors could remotely take control of your computer by submitting a specially crafted request parameter that gets logged by Log4j; when the string is logged, Log4j performs a JNDI lookup that can fetch and execute attacker-controlled code.

This situation was not ideal since I was running several Java applications on my servers, so I decided to use Nmap to port scan my dedicated server and see which ports were open. I ended up finding a number of ports I didn’t expect, because several Kubernetes Service instances were being exposed as NodePorts.
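To reproduce that kind of audit (the hostname is a placeholder; 30000-32767 is Kubernetes' default NodePort range), you can scan the node and then cross-reference which Services are claiming NodePorts:

```shell
# Scan the default NodePort range on the server (placeholder hostname)
nmap -p 30000-32767 my-dedicated-server.example.com

# List every NodePort Service across all namespaces, with its port numbers
kubectl get svc --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="NodePort")]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}'
```

Note that Services of type LoadBalancer also allocate node ports by default, so they can show up in a scan like this as well.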