Home Lab: Part 2 – Networking Setup

This entry is part 2 of 6 in the series Home Lab

Next up in the series, we're going to manually configure all of the network settings for our flat-network home lab. The flat network won't use any packet encapsulation, and all pods and services will be fully routable to and from the existing network.

As detailed in the previous post, I want a so-called flat network because packet encapsulation tunnels IP packets inside other IP packets and creates a separate IP network that runs on top of my existing one. I wanted all nodes, pods, and services to be fully routable on my home network. Additionally, I had several Sonos speakers and other smart-home devices that I wanted to control from my k8s cluster, which required pods that run on the same subnet as my other software.

Install CNI Plugin

The CNI (Container Network Interface) plugin is responsible for configuring the network adapter in each Kubernetes pod. Since each pod usually gets a separate network namespace isolated from the host's main network adapter, no pod could make any network calls without it. For more information, check out cni.dev or the K8s documentation.
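On a typical kubeadm-provisioned node, the kubelet reads CNI settings from two well-known directories, which you can inspect to see which plugin is active. These paths are the common defaults and may differ on your distribution:

```shell
# Default locations the kubelet reads CNI settings from (kubeadm defaults;
# your distribution may use different paths)
ls /etc/cni/net.d/   # CNI network configuration files (e.g. a calico conflist)
ls /opt/cni/bin/     # CNI plugin binaries (calico, loopback, etc.)
```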

IP Network Plan

I already have an existing home network IP space, so instead of changing everything, I'm going to define a network plan that fits around it. Readers can use their own network plan; however, this blog series will reference these ranges. The only requirement is that the different subnets don't overlap with each other.

  • 192.168.2.1/32 – Main router
  • 192.168.2.0/24 – Home network subnet
  • 192.168.2.225/32 – The Host VM IP
  • 192.168.4.0/24 – Kubernetes pod subnet
  • 192.168.6.0/24 – MetalLB subnet

We're going to use Calico as our CNI because it supports a flat network. Our goal is to use the non-overlay networking option outlined in the Calico documentation:

This will disable any overlay network (no packet encapsulation) and use BGP to enable all nodes and pods to communicate. BGP (Border Gateway Protocol) is a very popular routing protocol used by ISPs to tell other ISPs what IP addresses are available on their networks. It's also gaining popularity inside large data centers as a mechanism to route packets to the correct rack of servers.

To start setting up Calico, follow along with the Quick Start Guide.

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

Once the Tigera operator is installed and running on your cluster, configure Calico by creating a custom resource:

This snippet will install all of the Calico software and agents onto the cluster. Don't worry too much about the spec; we'll finish the network configuration further down.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Enabled
    hostPorts: Enabled
    ipPools:
    - blockSize: 26
      cidr: 192.168.4.0/24
      encapsulation: None
      natOutgoing: Disabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
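Assuming the manifest above is saved to a file (the name below is my choice), it can be applied and the rollout watched like so:

```shell
# Apply the Installation/APIServer custom resources (file name assumed)
kubectl create -f custom-resources.yaml
# Watch the Calico components come up in the calico-system namespace
watch kubectl get pods -n calico-system
```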

This will configure Calico to run in a BGP mode without any type of encapsulation.

Now it’s time to configure BGP. We’ll need to configure both the Router and K8s cluster.

Router

Your router may be different, but my EdgeRouter has native support for BGP. The following block configures the router to accept and make connections to the Calico node running on my VM with the correct AS number. The AS number (remote-as 64512 and bgp 64512) uniquely identifies each BGP network and acts as a rudimentary (and weak) security control. For the purposes of this series, we'll use the same number on both sides.

protocols {
    # 64512 is the AS number for both the router and Calico
    # This runs the peering as an iBGP (internal) network
    bgp 64512 {
        neighbor 192.168.2.225 {
            remote-as 64512
        }
        parameters {
            router-id 192.168.2.1
        }
    }
}

Kubernetes

Create an IP Pool:

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  blockSize: 26
  cidr: 192.168.4.0/24
  ipipMode: Never
  natOutgoing: false
  nodeSelector: all()

Create a BGP Peer relationship with the router.

apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: my-global-peer
spec:
  peerIP: 192.168.2.1   # The IP address of the LAN router
  asNumber: 64512       # AS number of the router
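These two manifests can be applied with kubectl; the file names here are illustrative:

```shell
# Apply the IPPool and BGPPeer resources (file names are my choice)
kubectl apply -f ippool.yaml
kubectl apply -f bgppeer.yaml
```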

This should cause Calico to connect to your router. You can verify this by SSHing to the router and checking peering stats.

$ show ip bgp
BGP table version is 47, local router ID is 192.168.2.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, l - labeled S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

    Network            Next Hop        Metric  LocPrf  Weight  Path
*>i 192.168.4.192/26   192.168.2.225        0     100       0  i

Total number of prefixes 1

Here we can see that Calico assigned this node the block 192.168.4.192/26. All pods running on this node should receive IPs within this CIDR.
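The /26 comes from the blockSize: 26 setting in the IPPool: each node is handed blocks of 2^(32-26) addresses, and the /24 pod subnet holds 2^(26-24) such blocks. A quick sanity check of the arithmetic:

```shell
# Addresses per /26 block: 2^(32-26) = 64
echo $((2 ** (32 - 26)))
# Number of /26 blocks that fit in the /24 pod subnet: 2^(26-24) = 4
echo $((2 ** (26 - 24)))
```

So this pool supports up to 4 nodes before it runs out of blocks at this block size.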

Now you should be able to launch pods with IP addresses that you can connect to directly.
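One way to test this is to run a throwaway pod and hit its pod IP straight from another machine on the LAN (the pod name and image are my choice):

```shell
# Start a disposable web server pod (name/image are arbitrary)
kubectl run nginx-test --image=nginx
# Find the pod IP -- it should fall inside 192.168.4.192/26
kubectl get pod nginx-test -o wide
# From any machine on the LAN, this should now return the nginx welcome page
curl http://<pod-ip>/
```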

Install MetalLB


Add a Helm repository for MetalLB.
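Assuming the upstream MetalLB chart, the repository can be added like so (repo URL per the MetalLB docs):

```shell
# Add the MetalLB chart repository and refresh the local index
helm repo add metallb https://metallb.github.io/metallb
helm repo update
```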

When installing MetalLB, use the following values.

Note: MetalLB also uses BGP to announce routes. However, this won't work with Calico also announcing routes, because each node can only maintain one BGP session to a given peer. This problem is documented extensively in the MetalLB documentation. However, thanks to a Pull Request in Calico, we can disable the MetalLB speaker so that Calico announces routes for each Kubernetes LoadBalancer service to the router for us.

configInline:
  address-pools:
  - addresses:
    # Define a separate IP pool that LBs will be allocated from
    # Must not overlap with any other pool
    - 192.168.6.0/24
    name: default
    protocol: bgp
speaker:
  enabled: false
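With those values saved to a file (the name below is my choice), the chart can be installed into its own namespace. Note that newer MetalLB releases moved from configInline to CRD-based configuration, so check which chart version you're installing:

```shell
# values.yaml holds the configInline/speaker settings shown above
helm install metallb metallb/metallb \
  --namespace metallb-system --create-namespace \
  -f values.yaml
```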

At this point, MetalLB will be installed. Now you can create an L4 LoadBalancer in Kubernetes and it should be announced over BGP:

$ show ip bgp
BGP table version is 47, local router ID is 192.168.2.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, l - labeled S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

    Network            Next Hop        Metric  LocPrf  Weight  Path
*>i 192.168.4.192/26   192.168.2.225        0     100       0  i
*>i 192.168.6.0        192.168.2.225        0     100       0  i

Total number of prefixes 2

Here we can see that the load balancer IP 192.168.6.0/32 is now being announced, and I should be able to open that address in my browser and access the service.
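For reference, a minimal LoadBalancer service that MetalLB would allocate an address from the 192.168.6.0/24 pool for might look like this (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb          # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: demo            # matches the pods to expose
  ports:
  - port: 80             # port the LB IP listens on
    targetPort: 8080     # port the pods listen on
```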

After this, you should be able to access a LoadBalancer-type service running in your Kubernetes cluster from any machine on your LAN. However, pods are still not running on the same subnet as my LAN, so the smart-home software won't work without running it on the hostNetwork. In future posts, I will explore alternative networking solutions to fix this.

Series Navigation: << Home Lab: Part 1 – Cluster Setup | Home Lab: Part 3 – Networking Revisited >>