Accurate, Local Home Energy Monitoring: Part 1 – Hardware

This entry is part 1 of 1 in the series Home Energy Monitoring

Ever wondered where the energy goes in your house, or wanted to know exactly when, and on which circuit, you're using the most electricity? How many kWh is your air conditioning unit using each month, and what is that costing you?

Home energy monitors are devices that let you see how much energy you're using at any given point in time. You can use them to figure out how much each device or circuit uses overnight versus during the day. If your energy costs differ between day and night, you can make sure devices run at the cheaper times of day; you can feed the data into smart home automations, for example to notify you when your washing machine is done; or you can spot when you need to upgrade a circuit because your server room is pulling too much.
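To put a number on the air conditioning question, the arithmetic is just power × time × price. Here's a quick sketch with made-up numbers; the wattage, duty cycle, and rate below are assumptions, not measurements:

```python
power_kw = 3.5        # assumed draw of a central AC unit while running
hours_per_day = 8     # assumed run time during a hot month
price_per_kwh = 0.15  # assumed utility rate in $/kWh

monthly_kwh = power_kw * hours_per_day * 30
print(f"{monthly_kwh:.0f} kWh, about ${monthly_kwh * price_per_kwh:.2f} per month")
```

A monitor on the AC circuit replaces those guesses with real measurements.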

In this post, I'll walk through the different products I considered for a project at a friend's house, their pros and cons, and how to order the appropriate equipment.

Continue reading “Accurate, Local Home Energy Monitoring: Part 1 – Hardware”

CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6

I recently helped my friends configure their CenturyLink Gigabit fiber service so they can use their own hardware instead of the provided hardware. This gave them a lot of flexibility in how the network is configured. However, CenturyLink makes you jump through some hoops: you have to authenticate with PPPoE, and instead of native IPv6 you have to use 6rd. I'm sure there's some reason why their network works like that, but I figured I'd document what needs to be done and explain how it works.
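As a quick taste of how 6rd works: your delegated IPv6 prefix is built by appending the 32 bits of your public IPv4 address to the ISP's 6rd prefix. Here's a rough Python sketch of that calculation; the /24 prefix and the WAN address below are illustrative assumptions, not necessarily CenturyLink's actual values:

```python
import ipaddress

def sixrd_prefix(isp_prefix: str, wan_ipv4: str) -> ipaddress.IPv6Network:
    """Derive the customer 6rd prefix by appending the 32 bits of the
    WAN IPv4 address to the ISP's 6rd prefix."""
    prefix = ipaddress.ip_network(isp_prefix)
    v4_bits = int(ipaddress.ip_address(wan_ipv4))
    shift = 128 - prefix.prefixlen - 32        # position right after the ISP prefix
    customer = int(prefix.network_address) | (v4_bits << shift)
    return ipaddress.IPv6Network((customer, prefix.prefixlen + 32))

# Assumed /24 6rd prefix and a made-up WAN address:
print(sixrd_prefix("2602::/24", "203.0.113.25"))  # -> 2602:cb:71:1900::/56
```

The upshot is that a /24 6rd prefix plus a 32-bit IPv4 address leaves you with a /56 to split across your LANs.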

Continue reading “CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6”

A Wireguard VPN from a home lab to Kubernetes cluster

In addition to my home lab K8s cluster, I have two dedicated servers that I run in the cloud running a separate Kubernetes cluster. This cluster runs my production servers, like this blog, Postfix, DNS, etc. I wanted to add a VPN between my home network and my prod k8s network for two reasons:

  1. All data between these networks should be encrypted. While I use HTTPS when possible, some traffic, like DNS, isn't encrypted.
  2. My servers outside the NAT should be able to reach servers running behind it. I run a Prometheus instance at home that I want my primary Prometheus instance to scrape, and a VPN lets that traffic through the NAT and the firewall on my router. Additionally, I wanted to be able to access pods directly from home as needed.

I came across a number of guides for basic Wireguard tunnel configurations, which were fine, but they didn't describe how to solve some of the more advanced issues, like BGP routing for MetalLB or encrypting traffic to the host itself.

For example, since I have more than one host in my cluster, if MetalLB announces a service IP, the Wireguard instance on my router won't know which host to forward traffic to: Wireguard picks the peer (and therefore the encryption key) based on the destination IP, so traffic may end up at the wrong host.
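To make the failure mode concrete, here's a rough Python sketch of how Wireguard's cryptokey routing picks a peer: a longest-prefix match of the destination address against each peer's AllowedIPs. The peer names and addresses are hypothetical, not my actual configuration:

```python
import ipaddress

# Hypothetical AllowedIPs for two cluster nodes reachable over the same tunnel.
peers = {
    "node-a": ["10.8.0.1/32", "192.168.50.0/24"],  # tunnel IP + the LB/pod range
    "node-b": ["10.8.0.2/32"],                     # tunnel IP only
}

def pick_peer(dst):
    """Longest-prefix match of dst against every peer's AllowedIPs,
    mimicking Wireguard's cryptokey routing table."""
    dst = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for peer, networks in peers.items():
        for net in map(ipaddress.ip_network, networks):
            if dst in net and net.prefixlen > best_len:
                best, best_len = peer, net.prefixlen
    return best

print(pick_peer("192.168.50.23"))  # -> node-a, even if node-b currently owns that IP
```

The routing decision is static per destination prefix, while MetalLB can move a service IP between nodes at any time, so the two can disagree.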

This blog post will explain everything you need to know to configure a Wireguard VPN that doesn’t suffer from these limitations.

Continue reading “A Wireguard VPN from a home lab to Kubernetes cluster”

The one where Rancher ruined my birthday

Other titles:

  • You were supposed to bring balance to Kubernetes, Rancher, not destroy it
  • et tu? Rancher?

I've been maintaining my own dedicated servers for around 7 years now, both as a way to learn and improve my skills and as a place to run my various web sites, mail servers, and even this blog. Over the years the hardware has changed, and I've moved from hosting Rails applications directly on the OS to Docker and finally Kubernetes. I've learned a lot of skills that eventually helped me in my professional career, so it's definitely been worth it, but maintaining these servers has had its massive pain points, where I've just had to walk away and leave things broken for days until I finally fixed them.

I selected Rancher several years ago (at least 3 or 4, I'd estimate) when I finally moved to Kubernetes. I liked how it automatically provisioned my clusters, managed networking, and provided a nice UI, and it was reasonably well recommended around the internet. Things worked reasonably well, but after adopting Rancher and Kubernetes, every 6-12 months something would massively break and I'd have to painstakingly rebuild the entire cluster. Many times I told myself that if it broke again I'd swear off Rancher entirely, but it never happened, because I eventually got everything working again.

After upgrading to Rancher v2.6.3, which launched just yesterday, and finding that all my clusters had been removed from Rancher, I hit my breaking point.

Continue reading “The one where Rancher ruined my birthday”

Home Lab – Using the bridge CNI with Systemd

This entry is part 7 of 7 in the series Home Lab

After running my home lab for a while, I've started switching to a more up-to-date Linux distribution (instead of RancherOS). I'm currently testing Ubuntu Server, which relies on systemd. Systemd-networkd is responsible for managing the network interface configuration, and it differs enough in behavior from NetworkManager that we need to update the Home Lab bridge CNI to handle it.

Previously, the CNI created the bridge network adapter when the first container started up, but this causes problems under systemd: systemd-resolved (the DNS resolver component) would eventually fail to make DNS queries, and networkd ended up with the same IP address on both eth0 (the actual uplink adapter) and cni0, because we copied the address over.

Continue reading “Home Lab – Using the bridge CNI with Systemd”

Upgrading Longhorn from Helm 2 in Rancher 2.6 the hard-way

Long ago, I installed Longhorn onto my Kubernetes cluster using Helm 2. Eventually Helm 3 was released, along with the helm 2to3 plugin, but I was not able to use it because Rancher didn't deploy Tiller in the way the plugin expected, and Rancher did not provide an upgrade mechanism to handle this either. Then Rancher 2.6 was released, dropping Helm 2 support entirely, and I was stuck with a cluster where Longhorn was deployed but not managed by a working Helm installation.

This blog post outlines how you can recover Longhorn and upgrade it to Helm 3 without deleting all your volumes. This guide isn’t specific to Rancher 2.6.

Continue reading “Upgrading Longhorn from Helm 2 in Rancher 2.6 the hard-way”

Why is Kubernetes opening random ports?

Kubernetes automatically exposes certain Services as ports on your host and may unintentionally expose private services. Here's how to fix that.

I recently responded to the Log4j vulnerability. If you're not aware, Log4j is a very popular logging library used in many Java applications. There was a vulnerability where malicious actors could remotely take control of your computer by submitting a specially crafted request parameter that ends up being logged by Log4j.

This situation was not ideal since I was running several Java applications on my servers, so I decided to use Nmap to port scan my dedicated server and see which ports were open. I ended up finding a number of ports I didn't expect, because several Kubernetes Service instances were being mapped to node ports.

In this post, I outline the problem with Kubernetes's default strategy for Services and how to avoid exposing ports that you don't need.
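If you want to see what your own cluster exposes, a quick check (besides Nmap) is to list every Service port that has a nodePort assigned. A minimal sketch using the official Kubernetes Python client, assuming a working kubeconfig:

```python
from kubernetes import client, config

config.load_kube_config()           # uses your local kubeconfig
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    for port in (svc.spec.ports or []):
        if port.node_port:          # set for NodePort and LoadBalancer Services
            print(f"{svc.metadata.namespace}/{svc.metadata.name}: "
                  f"port {port.port} -> nodePort {port.node_port}")
```

Anything that prints here is listening on every node in the cluster.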

Continue reading “Why is Kubernetes opening random ports?”

Picking a mortgage for data engineers using Python

Over the past year I helped a few people pick mortgages while buying their homes, mostly by visualizing the different mortgage options they were offered by different companies. In a seller's market, like where I live, you only get a few days to choose between a number of mortgages with different fees, points, and interest rates, all of which influence the monthly payment.

Given all this data, how do you compare the different options and decide which one to go with? The lowest monthly payment isn't always the best option.
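As a taste of the approach, here's a minimal sketch of the standard fixed-rate amortization formula in Python, comparing two hypothetical offers by monthly payment and by total cost over the horizon you expect to keep the loan. The lenders, fees, and rates are made up for illustration:

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical offers: upfront fees + points (in dollars) and interest rate.
offers = {
    "Lender A": (2_000, 0.0325),
    "Lender B": (6_500, 0.0290),         # more points, lower rate
}

principal, horizon_years = 400_000, 7    # how long you expect to keep the loan
for name, (fees, rate) in offers.items():
    payment = monthly_payment(principal, rate)
    total = fees + payment * horizon_years * 12
    print(f"{name}: ${payment:,.0f}/mo, ${total:,.0f} over {horizon_years} years")
```

Whether the extra points pay for themselves depends almost entirely on that horizon, which is why the lowest monthly payment isn't automatically the winner.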

Continue reading “Picking a mortgage for data engineers using Python”

Home Lab: Part 6 – Replacing MACvlan with a Bridge

In previous posts, I leveraged the MACvlan CNI to forward packets between containers and the rest of my network. However, I ran into several issues rooted in the fact that MACvlan traffic bypasses several parts of the host's IP stack, including conntrack and iptables. This conflicted with how Kubernetes expects to handle routing and meant we had to bypass and modify iptables chains to get it to work.

While I got it to work, there was simply too much wire bending involved, and I wanted to investigate alternatives to see if anything fit my requirements better. Let's consider the bridge CNI.

Continue reading “Home Lab: Part 6 – Replacing MACvlan with a Bridge”

Home Lab: Part 5 – Problems with asymmetrical routing

In the previous post (DHCP IPAM), we successfully got our containers running with macvlan + DHCP. I additionally installed MetalLB and everything seemingly worked; however, when I tried to retroactively add this to my existing Kubernetes home lab cluster, which was already running Calico, I was not able to access the MetalLB service. All connections timed out.

A quick Wireshark packet capture of the situation exposed this problem:

The SYN packet from my computer made it to the container (LB IP 192.168.6.2), but the SYN/ACK that came back had a source address of 192.168.2.76 (the pod's network interface). My computer ignored it because it didn't belong to an active flow.
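A tiny sketch of why the client drops that SYN/ACK: the client tracks its connection by the 4-tuple it originally used, and a reply coming from the pod IP doesn't match it. The client address and port below are made up; the LB and pod IPs are from the capture above:

```python
# The client opened a connection to the LB IP, so that's the flow it knows about.
active_flows = {("192.168.1.10", 54321, "192.168.6.2", 80)}    # src, sport, dst, dport

# The SYN/ACK arrives with the pod IP as its source instead of the LB IP.
src, sport, dst, dport = ("192.168.2.76", 80, "192.168.1.10", 54321)
print((dst, dport, src, sport) in active_flows)                # False -> packet ignored
```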

Continue reading “Home Lab: Part 5 – Problems with asymmetrical routing”