Home Lab: Part 6 – Replacing MACvlan with a Bridge

This entry is part 6 of 6 in the series Home Lab

In previous posts, I leveraged the MACvlan CNI to forward packets between containers and the rest of my network. However, I ran into several issues rooted in the fact that MACvlan traffic bypasses several parts of the host’s IP stack, including conntrack and iptables. This conflicted with how Kubernetes expects to handle routing and meant we had to bypass and modify iptables chains to get it to work.

While I got it to work, there was simply too much wire bending involved, and I wanted to investigate alternatives to see if anything fit my requirements better. Let’s consider the bridge CNI.
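For comparison, here’s a minimal sketch of what a bridge CNI network config might look like. The bridge name and the choice of DHCP for IPAM are assumptions for illustration, not the final config from this post. The attraction is that traffic to and from containers attached to a host bridge passes through the host’s IP stack, so conntrack and iptables can see it again.

{
  "cniVersion": "0.4.0",
  "name": "home-lab",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "dhcp"
  }
}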

Continue reading “Home Lab: Part 6 – Replacing MACvlan with a Bridge”

Home Lab: Part 5 – Problems with asymmetrical routing

This entry is part 5 of 6 in the series Home Lab

In the previous post (DHCP IPAM), we successfully got our containers running with MACvlan + DHCP. I additionally installed MetalLB, and everything seemingly worked. However, when I tried to retroactively add this to my existing Kubernetes home lab cluster already running Calico, I was not able to access the MetalLB service. All connections were timing out.

A quick Wireshark packet capture of the situation exposed this problem:

The SYN packet from my computer made it to the container (LB IP 192.168.6.2), but the responding SYN/ACK packet came back with a source address of 192.168.2.76 (the pod’s network interface). My computer ignored the reply because it didn’t belong to an active flow, so the connection never completed.
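If you want to reproduce this kind of capture on a node without Wireshark, something like the following tcpdump invocation works (the any pseudo-interface is an assumption; adjust to your node’s interface):

tcpdump -i any -nn 'tcp and (host 192.168.6.2 or host 192.168.2.76)'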

Continue reading “Home Lab: Part 5 – Problems with asymmetrical routing”

Home Lab: Part 4 – A DHCP IPAM

This entry is part 4 of 6 in the series Home Lab

In the previous post, we ended up abusing subnets and routing to get Calico to exist on the correct subnet. But what if we could get rid of Calico’s duplicate IPAM system and just depend on our existing DHCP server to handle reservations? In this post, we’re going to prototype a cluster that uses DHCP plus layer 2 Linux bridging to avoid the complications outlined in Part 3.

The official CNI documentation describes two plugins that could be relevant.

With dhcp plugin the containers can get an IP allocated by a DHCP server already running on your network.

https://www.cni.dev/plugins/current/ipam/dhcp/

This avoids the overlapping IPAM problem of the previous solution and means that the DHCP server already running on my network would be responsible for handing out IP addresses directly to the containers.
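As a sketch, wiring the dhcp IPAM plugin into a macvlan network config looks roughly like this; the network name and master interface are placeholders:

{
  "cniVersion": "0.4.0",
  "name": "home-lab",
  "type": "macvlan",
  "master": "eth0",
  "ipam": {
    "type": "dhcp"
  }
}

One caveat: the dhcp IPAM plugin expects a companion daemon on each host (the same plugin binary run as a daemon), which acquires and renews leases on behalf of the containers.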

Continue reading “Home Lab: Part 4 – A DHCP IPAM”

Home Lab: Part 3 – Networking Revisited

This entry is part 3 of 6 in the series Home Lab

The Problem

In my previous post series, I described how I installed my Kubernetes Home Lab using Calico and MetalLB. This worked great up until I started installing smart home software that expected to be able to do local network discovery. For example, Home Assistant and my Sonos control software both attempted to do subnet-local discovery using mDNS or broadcast packets. This did not work because the pods were running on a 192.168.4.0/24 subnet, but all of my physical devices were on 192.168.2.0/24, and mDNS and broadcast packets are link-local, so routers don’t forward them between subnets by default.

This prevented Home Assistant from discovering any devices and had to be fixed.

Continue reading “Home Lab: Part 3 – Networking Revisited”

Home Lab: Part 2 – Networking Setup

This entry is part 2 of 6 in the series Home Lab

Next up in the series, we’re going to manually configure all of the network settings for our flat-network home lab. Our flat network should not use any packet encapsulation, with all pods and services fully routable to and from the existing network.

As detailed in the previous post, I want a so-called flat network because packet encapsulation tunnels IP packets inside of other IP packets and creates a separate IP network that runs on top of my existing network. I wanted all nodes, pods, and services to be fully routable on my home network. Additionally, I had several Sonos speakers and other smart-home devices that I wanted to control from my k8s cluster, which required pods that ran on the same subnet as my other software.

Install CNI Plugin

The CNI (Container Network Interface) plugin is responsible for configuring the network adapter that each Kubernetes pod has. Since each pod usually gets a separate network namespace isolated from the host’s main network adapter, no pod could make any network calls without one. For more information, check out cni.dev or the K8s documentation.
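Concretely, kubelet discovers the plugin through a network config file on each node (conventionally under /etc/cni/net.d/). A minimal illustrative example follows; the plugin and IPAM choices here are placeholders rather than this series’ final setup:

{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.4.0/24"
  }
}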

Continue reading “Home Lab: Part 2 – Networking Setup”

Home Lab: Part 1 – Cluster Setup

This entry is part 1 of 6 in the series Home Lab

I recently set up a Kubernetes cluster home lab, wanted to do it the hard way, and am sharing the process with you. I set up a home lab so I could run my smart home software and learn more about different Kubernetes networking technologies.

This blog post is broken up into several sections. Feel free to skip directly to the section that applies to you.

When I started, I already had a few things:

  • Rancher, which I was already using as a UI to manage the Kubernetes clusters on my dedicated servers
  • A Windows computer that can run K8s
  • A Ubiquiti EdgeRouter 12 acting as my home network’s router

Requirements

I wanted a fully flat network; that means no packet encapsulation, which tunnels IP packets inside of other IP packets and creates a separate IP network that runs on top of my existing network. I wanted all nodes, pods, and services to be fully routable on my home network. Additionally, I had several Sonos speakers and other smart-home devices that I wanted to control from my k8s cluster, which required pods that ran on the same IP network.

Alternatives

Docker Desktop and WSL2 are both great for Docker development projects where you use the Docker CLI, but when you try to run Kubernetes you’ll quickly run into networking issues. WSL2 and Docker Desktop can’t easily expose services to the rest of your network because they use NAT’d network adapters (GitHub microsoft/WSL#4150). This means you can’t expose nodes or pods as devices on the network; they will always be NAT’d to the host’s IP address. This failed my requirement.

Continue reading “Home Lab: Part 1 – Cluster Setup”

A precompiled almost-HAML engine in C#

Introduction

This project is still a work in progress, so this article serves as an introduction to the problem space and walks through how the code works.

In the past, when I wrote web applications, I used Ruby on Rails combined with the HAML template language. HAML is my favorite way to write HTML because it is an abstract representation of an HTML DOM combined with a hint of Python syntax.

Being an abstract representation means that it doesn’t have to directly correspond to what the resulting HTML looks like. This decoupling enables a HAML render engine to reorganize the code to be cleaner and simpler.

Take a look at the following example:

%html{ lang: 'en' }
  %head
    %title Hello world!
  %body
    %a{ href: 'https://technowizardry.net' }= my_link
    %div{ b: 'abc', a: 'xyz' } test

In other template engines, like Ruby’s ERB or C#’s Razor, the white space is preserved: whatever indentation you add is included in the output HTML.

<html lang="en">
  <head>
    <title>Hello world!</title>
  </head>
  <body>
    <a href="https://technowizardry.net">Test</a>
    <div b="abc" a="xyz">test</div>
  </body>
</html>

Indentation can be handy when developing, but why waste the space in production? One could just delete all the spacing in the source code and check that in, but then the code is harder to read. Can we have the best of both worlds?

The current state of the world

I’ve been experimenting with the new .NET Core framework a lot because I like the framework and C# as a language. Unfortunately, HAML isn’t directly supported; the default render engine in ASP.NET MVC is just a low-level HTML renderer, which has the same problems as highlighted above.

Instead, I wanted to see if I could build my own solution and how far I could push it with performance optimizations. Can we precompile the template into partial HTML streams? Can we optimize the HTML to be more friendly to gzip? For example, <a class="foo bar" /> and <a class="bar foo" /> are semantically equivalent, so the classes can be ordered consistently, letting gzip compress them more efficiently.
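As a rough illustration of that optimization (a sketch, not the engine’s actual code), class names can be sorted once at compile time:

using System;
using System.Linq;

internal static class HtmlNormalizer
{
    // Sort class names so <a class="foo bar"/> and <a class="bar foo"/>
    // serialize to identical bytes, giving gzip longer repeated runs to match.
    public static string NormalizeClasses(string classValue) =>
        string.Join(" ", classValue
            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
            .OrderBy(c => c, StringComparer.Ordinal));
}

You can see the effect in the compiled output below: class="container content-pane" and class="in modal-backdrop" both come out alphabetized.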

Fair warning, this will be prototype code and not ready for production quite yet.

Adding C# to the mix

I found a previous attempt at this called NHaml. There was quite a bit of work done on it, but it did not support .NET Core and seemed coupled to ASP.NET. I ended up borrowing the parsing logic (with modifications) and writing my own rendering engine.

But first, let’s see some results:

!!!
%html{ lang: 'en' }
  %head
    %title Hello world
    %meta{ charset: 'utf-8' }
    %meta{ content: 'width=device-width, initial-scale=1.0, maximum-scale=1.0', name: 'viewport' }
  %body
    .page-wrap{ class: DateTime.Now.ToString("yyyy"), d: 'bar', a: 'foo' }
      = DateTime.Now.ToString("yyyy-mm-dd")
      %h1= new Random().Next().ToString()
      %p= model.ToString()
      .content-pane.container
      - if (true)
        - if (1 > 0)
          %div really true
        %div Is True
      - else
        %div wat
      - if (false)
        %div Is False
    .modal-backdrop.in

This gets compiled into the following; the cached class is then called for subsequent executions.

using System;
using System.IO;

internal sealed class __haml_UserCode_CompilationTarget
{
	private string model;

	public __haml_UserCode_CompilationTarget(string _modelType)
	{
		this.model = _modelType;
	}

	public void render(TextWriter textWriter)
	{
		textWriter.Write("<!DOCTYPE html><html lang=\"en\"><head><title>Hello world</title><meta charset=\"utf-8\"/><meta content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0\" name=\"viewport\"/></head><body><div a=\"foo\" class=\"page-wrap ");
		textWriter.Write(DateTime.get_Now().ToString("yyyy"));
		textWriter.Write(" d=\"bar\" \">");
		textWriter.Write(DateTime.get_Now().ToString("yyyy-mm-dd"));
		textWriter.Write("<h1>");
		textWriter.Write(new Random().Next().ToString());
		textWriter.Write("</h1><p>");
		textWriter.Write(this.model.ToString());
		textWriter.Write("</p><div class=\"container content-pane\"/>");
		textWriter.Write("<div>really true</div>");
		textWriter.Write("<div>Is True</div>");
		textWriter.Write("</div><div class=\"in modal-backdrop\"/></body></html>");
	}
}

Note how the runs of HTML that never change are transformed into static strings and all elements are normalized consistently.

A walk through the code

The HamlView is the effective entry point. It checks to see if it has a cached copy of the compiled template in memory; if not, it requests a compilation.
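The real HamlView is more involved, but the caching flow it implements looks roughly like this (the type and member names below are illustrative, not the actual API):

using System;
using System.Collections.Concurrent;
using System.IO;

internal interface ICompiledTemplate
{
    void Render(TextWriter writer);
}

internal sealed class HamlViewCache
{
    // Compiled templates, keyed by template path.
    private readonly ConcurrentDictionary<string, ICompiledTemplate> _cache =
        new ConcurrentDictionary<string, ICompiledTemplate>();

    private readonly Func<string, ICompiledTemplate> _compile;

    public HamlViewCache(Func<string, ICompiledTemplate> compile)
    {
        _compile = compile;
    }

    public void Render(string templatePath, TextWriter writer)
    {
        // Compile on first use; every later render reuses the cached class.
        ICompiledTemplate template = _cache.GetOrAdd(templatePath, _compile);
        template.Render(writer);
    }
}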

[To be continued]

Best Practices for Elasticsearch mappings

At first, Elasticsearch may appear to be schemaless since you can add new fields any time you want, but every field in a document must match the mapping.

Dynamic Templates reduce boilerplate

How many times have you opened up a mapping file to find something like this, where the same type definition is repeated over and over again?

{
  "properties": {
    "foo": {
      "type": "keyword"
    },
    "foo": {
      "type": "keyword"
    },
    "foo": {
      "type": "keyword"
    },
    "baz": {
      "type": "keyword"
    },
    "other": {
      "type": "text"
    },
    ...
  }
}

It’s super easy to refactor this into an alternative where, by default, all string values are mapped as keyword, except for the specific fields listed as text:

{
  "dynamic_templates": [
    {
      "example_name": {
        "match_mapping_type": "string",
        "mapping": {
          "type": "keyword"
        }
      }
    }
  ],
  "properties": {
    "other": {
      "type": "text"
    }
  }
}

Disable type detection

For new fields, Elasticsearch can automatically identify which type to use, but it can guess wrong or do unexpected things. For example, I’ve seen Elasticsearch accidentally identify a decimal value as a long because the first value to go into the index did not have any decimal points; every other document then failed to index because it did not match. This is especially important if you have fields with a wide range of values (for example, user-controlled ones), because you can’t predict whether the first value will look like a number or a date when it should always be treated as a string.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html

{
  "mappings": {
    "date_detection": false,
    "numeric_detection": false
  }
}

Query-level metrics for PostgreSQL/MySQL in Kubernetes with Packetbeat

MySQL and PostgreSQL can be a bit of a black box if you don’t take the time to configure metrics. How do you identify which queries are slow and need to be optimized? MySQL has the slow query log, but that requires a time threshold and only captures queries that run for longer than N seconds. What if you want to identify the most common queries, even if they are fast?
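The approach in this post is to sniff the database protocols off the wire with Packetbeat instead. As a minimal packetbeat.yml sketch (the standard ports and an Elasticsearch output are assumptions for a typical setup):

packetbeat.interfaces.device: any

packetbeat.protocols:
  - type: mysql
    ports: [3306]
  - type: pgsql
    ports: [5432]

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]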

Continue reading “Query-level metrics for PostgreSQL/MySQL in Kubernetes with Packetbeat”

Migrate Sprockets to Webpacker with React-rails

Ruby on Rails recently launched support for compiling static assets, such as JavaScript, using Webpack. Among other things, Webpack is more powerful at JS compilation than the previous Rails default, Sprockets. Integration with Rails is provided by the Webpacker gem. The features I was most interested in leveraging were tree shaking and support for the NPM package repository. With Sprockets, common JS libraries such as ReactJS had to be imported using gems such as react-rails or classnames-rails, which added friction to adding new dependencies and upgrading to new versions of them.

A couple of my projects used react-rails to render React components on the server side using the legacy Sprockets system. This worked well, but I wanted to migrate to Webpacker to easily upgrade to the newest versions of React and React-Bootstrap (previously imported via reactbootstrap-rails, which stopped being maintained once Webpacker launched). However, migrating React components to Webpack required changes to every single file: adding ES6-style imports, file moves/renames, and scoping changes. That would have been too large a change to make all at once. What if there were a way to slowly migrate the JS code from Sprockets to Webpack, making components on either side available to the other?
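One way to do that, as a sketch: assuming a component named MyWidget (a hypothetical name), a Webpack pack can expose it on the global scope, where react-rails looks for components by default:

// app/javascript/packs/components.js (hypothetical pack file)
import MyWidget from '../components/MyWidget'

// Expose the Webpack-built component globally so the legacy
// Sprockets/react-rails side can still resolve it by name.
window.MyWidget = MyWidget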

Continue reading “Migrate Sprockets to Webpacker with React-rails”