Split Horizon DNS with external-dns and cert-manager for Kubernetes

There were a few services I ran that I wanted to access from both inside and outside my home network. From inside the network, I wanted traffic to route directly to the service; from outside, it needed to go through a proxy that would then forward into my home lab. Additionally, I wanted to secure every service with TLS certificates issued by cert-manager.

Since the IPv4 address for a service differs depending on whether I'm inside or outside my network, I need split-horizon DNS so that each query is answered with the correct address. Split-horizon DNS means the DNS server returns different results on one horizon (inside the network) than on the other (outside).
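As a rough sketch of the kind of setup the post covers (the hostnames, issuer name, and proxy IP below are placeholders, not my actual configuration), a single Ingress can carry both a cert-manager annotation to request a certificate and external-dns annotations to publish the DNS record; on the external horizon, a target annotation can point the name at the public proxy instead of the internal address:

```yaml
# Hypothetical Ingress; names, issuer, and IPs are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  annotations:
    # Ask cert-manager to issue a TLS certificate via this (assumed) ClusterIssuer.
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # Tell external-dns which name to publish for this Ingress.
    external-dns.alpha.kubernetes.io/hostname: myservice.example.com
    # On the external zone, point the record at the public proxy
    # rather than the internal load balancer IP.
    external-dns.alpha.kubernetes.io/target: 203.0.113.10
spec:
  tls:
    - hosts: [myservice.example.com]
      secretName: myservice-tls
  rules:
    - host: myservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 80
```

In a full split-horizon setup, the internal and external zones are typically served by different DNS servers, each fed by its own external-dns instance.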

Continue reading “Split Horizon DNS with external-dns and cert-manager for Kubernetes”

Kubernetes: A hybrid Calico and Layer 2 Bridge+DHCP network using Multus

This entry is part 8 of 8 in the series Home Lab

Previously in my Home Lab series, I described how my home lab Kubernetes cluster runs with a DHCP CNI: every pod gets an address from my DHCP server and sits on the same layer 2 network as the rest of my home. This let me run software that needed it, like Home Assistant, which uses mDNS and broadcast packets to discover devices.

However, not all pods actually needed to be on the same layer 2 network, and this led to a few situations where the DHCP server ran out of IP addresses and I couldn't connect any new devices until leases expired:

My DHCP IP pool completely out of addresses to give to clients

I also had a circular dependency: the main VLAN told DHCP clients to use a DNS server that was running in Kubernetes. If I had to reboot the cluster, it could get stuck starting up because the nodes tried to query a DNS server that hadn't started yet (for simplicity, I use DHCP for everything instead of static configuration).

In this post, I explain how I built a new home lab cluster with K3s and used Multus to run both Calico and my custom Bridge+DHCP CNI, so that only the pods that need layer 2 access get it.
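As a minimal sketch of the Multus side of this (the attachment name, bridge name, and image below are placeholders, not my exact config), a NetworkAttachmentDefinition wraps the bridge+DHCP CNI config, and only pods that opt in with an annotation get the extra layer 2 interface; everything else stays on Calico:

```yaml
# Hypothetical NetworkAttachmentDefinition; bridge name and CNI options are illustrative.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: home-lan
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "dhcp" }
    }
---
# A pod requests the extra interface with an annotation; un-annotated pods
# only get their Calico interface. The image is shown purely as an example.
apiVersion: v1
kind: Pod
metadata:
  name: home-assistant
  annotations:
    k8s.v1.cni.cncf.io/networks: home-lan
spec:
  containers:
    - name: home-assistant
      image: ghcr.io/home-assistant/home-assistant:stable
```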

Continue reading “Kubernetes: A hybrid Calico and Layer 2 Bridge+DHCP network using Multus”

CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6

I recently helped my friends configure their CenturyLink Gigabit fiber service so they could use their own hardware instead of the ISP-provided router. This gives them a lot of flexibility in how the network is configured; however, instead of natively routing IP packets, CenturyLink requires you to run PPPoE and to use 6RD for IPv6, so there are a few hoops to jump through. I'm sure there's some reason their network works like that, but I figured I'd document what needs to be done and explain how it works.
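For a flavor of what the RouterOS side can look like (the interface name, VLAN ID, and credentials below are placeholders to verify against your own CenturyLink provisioning; VLAN 201 is common but not universal), the PPPoE client is attached to a tagged VLAN on the fiber-facing port:

```
# Hypothetical example; confirm the VLAN tag and PPPoE credentials for your service.
/interface vlan add name=vlan201-wan interface=ether1 vlan-id=201
/interface pppoe-client add name=pppoe-centurylink interface=vlan201-wan user="yourlogin@example.net" password="yourpassword" add-default-route=yes use-peer-dns=yes disabled=no
```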

Continue reading “CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6”

A Wireguard VPN from a home lab to Kubernetes cluster

In addition to my home lab K8s cluster, I have two dedicated servers in the cloud running a separate Kubernetes cluster. That cluster runs my production services, like this blog, Postfix, DNS, etc. I wanted to add a VPN between my home network and my prod k8s network for two reasons:

  1. All data should be encrypted between these networks. While I use HTTPS when possible, some traffic, like DNS, isn't encrypted.
  2. My servers outside the NAT should be able to access servers running behind my NAT. I run a Prometheus instance at home that I want my primary Prometheus instance to be able to scrape; a VPN bypasses the NAT and firewall on my router so the primary instance can reach it. Additionally, I wanted to be able to access pods directly from my home as needed.

I came across a number of guides for basic Wireguard VPN tunnel configurations, which were fine, but they didn't describe how to solve some of the more advanced issues, like BGP routing for MetalLB or encrypting traffic to the host itself.

For example, since I have more than one host in my cluster, if I use MetalLB to announce an IP, the Wireguard instance on my router won't know which host to forward traffic to, because it uses the destination IP (matched against each peer's allowed IPs) to pick the peer and its encryption key. This means Wireguard may send traffic to the wrong host.

This blog post will explain everything you need to know to configure a Wireguard VPN that doesn’t suffer from these limitations.
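To make the allowed-IPs problem concrete, here is a hedged sketch in wg-quick syntax (keys, subnets, and addresses are placeholders, and the actual router in the post may configure WireGuard differently): each peer owns the addresses listed under its AllowedIPs, so a MetalLB IP that any node can announce has no single correct peer to live under.

```ini
# Hypothetical router-side config; keys and addresses are placeholders.
[Interface]
PrivateKey = <router-private-key>
Address = 10.99.0.1/24
ListenPort = 51820

[Peer]
# k8s node 1
PublicKey = <node1-public-key>
AllowedIPs = 10.99.0.2/32, 192.168.10.11/32

[Peer]
# k8s node 2
PublicKey = <node2-public-key>
AllowedIPs = 10.99.0.3/32, 192.168.10.12/32

# A MetalLB service IP such as 192.168.10.200/32 could be announced by either
# node, so no single peer's AllowedIPs can correctly claim it; that is the
# routing problem this post works around.
```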

Continue reading “A Wireguard VPN from a home lab to Kubernetes cluster”

Home Lab – Using the bridge CNI with Systemd

This entry is part 7 of 8 in the series Home Lab

After running my home lab for a while, I've started switching to a more up-to-date Linux distribution (instead of RancherOS). I'm currently testing Ubuntu Server, which uses systemd; systemd-networkd manages the network interface configuration, and its behavior differs enough from NetworkManager's that we need to update the Home Lab bridge CNI to handle it.

Previously, the CNI created the bridge network adapter when the first container started up. This causes problems under systemd: systemd-resolved (the DNS resolver component) would eventually fail to make DNS queries, and systemd-networkd ended up with duplicate IP addresses on both eth0 (the actual uplink adapter) and cni0, because we copied the address from one to the other.
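As a rough sketch of the direction this points in (interface names are assumptions, and the post's actual fix may differ), systemd-networkd can own the bridge itself: the bridge is declared as a netdev, the uplink is enslaved to it, and the bridge, not eth0, runs DHCP, so no address ever needs to be copied over.

```ini
# /etc/systemd/network/10-cni0.netdev
# Declare the bridge up front so networkd owns it.
[NetDev]
Name=cni0
Kind=bridge

# /etc/systemd/network/20-eth0.network
# Enslave the physical uplink to the bridge; it carries no IP of its own.
[Match]
Name=eth0

[Network]
Bridge=cni0

# /etc/systemd/network/30-cni0.network
# The bridge gets the DHCP address instead of eth0.
[Match]
Name=cni0

[Network]
DHCP=yes
```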

Continue reading “Home Lab – Using the bridge CNI with Systemd”

Home Lab: Part 6 – Replacing MACvlan with a Bridge

This entry is part 6 of 8 in the series Home Lab

In previous posts, I used the MACvlan CNI to forward packets between containers and the rest of my network; however, I ran into several issues rooted in the fact that MACvlan traffic bypasses parts of the host's IP stack, including conntrack and iptables. This conflicted with how Kubernetes expects to handle routing and meant we had to bypass and modify iptables chains to get it to work.

While I got it to work, there was simply too much wire bending involved, and I wanted to investigate alternatives to see if anything fit my requirements better. Let's consider the bridge CNI.

Continue reading “Home Lab: Part 6 – Replacing MACvlan with a Bridge”

Home Lab: Part 5 – Problems with asymmetrical routing

This entry is part 5 of 8 in the series Home Lab

In the previous post (DHCP IPAM), we successfully got our containers running with macvlan + DHCP. I also installed MetalLB and everything seemingly worked; however, when I tried to retroactively add this to my existing Kubernetes home lab cluster, which was already running Calico, I was not able to access the MetalLB service. All connections timed out.

A quick Wireshark packet capture of the situation exposed this problem:

The SYN packet from my computer made it to the container (LB IP 192.168.6.2), but the SYN/ACK that came back had a source address of 192.168.2.76 (the pod's network interface). My computer ignored the reply because it didn't belong to an active flow, so the connection never completed.
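If you want to observe the same thing without Wireshark, a capture along these lines on the client (interface and IPs taken from the example above; adjust for your own network) shows the SYN going to the LB address and the SYN/ACK returning from the pod address:

```sh
# Hypothetical capture on the client machine.
sudo tcpdump -ni any 'tcp and (host 192.168.6.2 or host 192.168.2.76)'
```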

Continue reading “Home Lab: Part 5 – Problems with asymmetrical routing”

Home Lab: Part 4 – A DHCP IPAM

This entry is part 4 of 8 in the series Home Lab

In the previous post, we ended up abusing subnets and routing to get Calico to exist on the correct subnet. But what if we could get rid of Calico's duplicate IPAM system and just depend on our existing DHCP server to handle address assignments? In this post, we're going to prototype a cluster that uses DHCP + layer 2 Linux bridging to avoid the complications outlined in Part 3.

The official CNI documentation describes two plugins that could be relevant.

With dhcp plugin the containers can get an IP allocated by a DHCP server already running on your network.

https://www.cni.dev/plugins/current/ipam/dhcp/

This avoids the overlapping-IPAM problem of the previous solution and means that the DHCP server already running on my network would be responsible for handing out IP addresses directly to the containers.
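A minimal sketch of what such a CNI config could look like, say in a file like /etc/cni/net.d/10-home-bridge.conflist (the name and values are illustrative; the post builds its own variant): the bridge plugin paired with the dhcp IPAM plugin.

```json
{
  "cniVersion": "0.3.1",
  "name": "home-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "dhcp"
      }
    }
  ]
}
```

Note that the dhcp IPAM plugin is only the client side; a small daemon (typically started as `/opt/cni/bin/dhcp daemon`) has to run on each node to actually acquire and renew the leases.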

Continue reading “Home Lab: Part 4 – A DHCP IPAM”

Home Lab: Part 3 – Networking Revisited

This entry is part 3 of 8 in the series Home Lab

The Problem

In my previous posts, I described how I installed my Kubernetes Home Lab using Calico and MetalLB. This worked great up until I started installing smart home software that expected to be able to do local network discovery. For example, Home Assistant and my Sonos control software both attempted subnet-local discovery using mDNS or broadcast packets. This did not work because the pods were running on a 192.168.4.0/24 subnet, but all of my physical devices were on 192.168.2.0/24.

This prevented Home Assistant from discovering any devices and had to be fixed.

Continue reading “Home Lab: Part 3 – Networking Revisited”

Home Lab: Part 2 – Networking Setup

This entry is part 2 of 8 in the series Home Lab

Next up in the series, we're going to manually configure all of the network settings to get our flat-network home lab. The flat network should not use any packet encapsulation, with all pods and services fully routable to and from the existing network.

As detailed in the previous post, I want a so-called flat network because packet encapsulation tunnels IP packets inside other IP packets and creates a separate IP network that runs on top of my existing network. I wanted all nodes, pods, and services to be fully routable on my home network. Additionally, I had several Sonos speakers and other smart-home devices that I wanted to control from my k8s cluster, which required the controlling pods to run on the same subnet as those devices.

Install CNI Plugin

The CNI (Container Network Interface) plugin is responsible for configuring the network adapter that each Kubernetes pod has. Since each pod usually gets its own network namespace, isolated from the host's main network adapter, no pod could make any network calls without a CNI plugin. For more information, check out cni.dev or the K8s documentation.
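For orientation, the kubelet looks for CNI configuration and plugin binaries in a couple of well-known directories (these are the conventional defaults; your distribution may place them elsewhere), which is where the plugin we install here ends up:

```sh
# Conventional default locations; verify on your own nodes.
ls /etc/cni/net.d/   # CNI network configuration files
ls /opt/cni/bin/     # CNI plugin binaries (bridge, dhcp, calico, ...)
```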

Continue reading “Home Lab: Part 2 – Networking Setup”