Home Lab: Part 6 – Replacing MACvlan with a Bridge

This entry is part 6 of 6 in the series Home Lab

In previous posts, I leveraged the MACvlan CNI to forward packets between containers and the rest of my network. However, I ran into several issues rooted in the fact that MACvlan traffic bypasses several parts of the host’s IP stack, including conntrack and IPTables. This conflicted with how Kubernetes expects to handle routing and meant we had to bypass and modify IPTables chains to get it to work.

While I got it to work, there was simply too much wire bending involved, and I wanted to investigate alternatives that might fit my requirements better. Let’s consider the bridge CNI.

To recap what we’re looking for in this CNI: we want to run pods on the same subnet as my home LAN, which ultimately requires some kind of L2 bridge combined with a DHCP-based IPAM. No pre-existing CNI fully supports this, so I ended up modifying and extending existing plugins.

Bridge CNI

The bridge CNI’s IP stack

The bridge stack is slightly different from the MACvlan stack. On the host side, we now have point-to-point adapters (prefixed with veth*). These are added to the bridge, and traffic from them can be routed between the adapters attached to the bridge.

Starting with the reference bridge CNI along with the following configuration:

{
  "cniVersion": "0.3.1",
  "name": "dhcp-cni-network",
  "plugins": [
    {
      "type": "bridge",
      "name": "mybridge",
      "ipam": {
        "type": "dhcp"
      }
    }
  ]
}

Unfortunately, the reference bridge CNI gives us the following errors in the Kubelet log:

"Error adding pod to network" err="error calling DHCP.Allocate: no more tries" pod="metallb/metallb-controller-7cb7dd579d-8zlgr"

The DHCP daemon isn’t receiving any responses from the DHCP server. While the daemon is configured to use the host network, it assumes the Pod’s network namespace while sending the DHCP request packets. Taking a look at the pod’s network namespace, I see that the requests are being sent, but no responses are received:

[rancher@rancher ~]$ sudo docker run -ti --rm --net=container:e6d4baa7820f crccheck/tcpdump -i any
IP > BOOTP/DHCP, Request from aa:9c:3e:ae:d1:68 (oui Unknown), length 336
IP > BOOTP/DHCP, Request from c2:be:1c:d3:d4:ba (oui Unknown), length 336

Looking at the host’s network adapters, we can see that the requests are making it to the bridge, but they’re not being sent outwards on eth0, so the rest of the network never hears them.

[rancher@rancher ~]$ sudo docker run -ti --rm --net=host --cap-add NET_ADMIN crccheck/tcpdump -i any -f 'udp port 67 or udp port 68'
IP > BOOTP/DHCP, Request from 82:a0:15:d9:51:02 (oui Unknown), length 336
IP > BOOTP/DHCP, Request from 82:a0:15:d9:51:02 (oui Unknown), length 336

Additionally, there are no routes at all in the Pod’s netns, so nothing will ever work because it has no idea where to send packets. Luckily, broadcast packets don’t need to be routed, which is how they manage to reach the host’s bridge adapter at all.

[rancher@rancher ~]$ sudo docker run -ti --rm --net=container:e6d4baa7820f igneoussystems/iproute2 ip route
[rancher@rancher ~]$

This should be an easy fix, since the bridge plugin has two relevant configuration options: isGateway and isDefaultGateway. We should be able to set one of these to true and have it work. Unfortunately, the plugin uses the gateway returned by the IPAM plugin (see here). In the DHCP IPAM case, that is the IP of the network’s router, not the local host, which is what we want everything to forward through so that traffic hits IPTables.
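For reference, enabling that option is a one-line addition to the config shown earlier (on its own this does not solve the problem, for the reason just described):

```json
{
  "cniVersion": "0.3.1",
  "name": "dhcp-cni-network",
  "plugins": [
    {
      "type": "bridge",
      "name": "mybridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "dhcp"
      }
    }
  ]
}
```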

The fix for this is the same as in the MACvlan CNI: as part of the CNI, I need to define routes that forward all traffic to the host’s IP. I modified the bridge CNI code here. Ultimately, it gets the host’s primary IPv4 address (IPv6 to come later) and creates a default route to send all traffic to the host. Note that we again remove any existing routes first.

gwIp := uplinkAddrs[0].IP
err = netns.Do(func(_ ns.NetNS) error {
	containerLink, err := netlink.LinkByName(args.IfName)
	routes, _ := netlink.RouteList(containerLink, netlink.FAMILY_ALL)
	for _, route := range routes {
		err = netlink.RouteDel(&route)
	}
	// This route tells the OS that the gateway IP can be found on eth0.
	// Before we can set a default route, Linux needs to know where to find the gateway.
	err = netlink.RouteAdd(&netlink.Route{
		LinkIndex: containerLink.Attrs().Index,
		Scope:     netlink.SCOPE_LINK,
		Dst:       netlink.NewIPNet(gwIp),
	})
	// This route tells the OS to forward all traffic (even on the local LAN)
	// to the gateway, which it now knows is on the eth0 interface.
	err = netlink.RouteAdd(&netlink.Route{
		LinkIndex: containerLink.Attrs().Index,
		Gw:        gwIp,
		Src:       ipamResult.IPs[0].Address.IP,
	})

Great, now the Pod has the correct routes:

[rancher@rancher ~]$ sudo docker run -ti --rm --net=container:e6d4baa7820f igneoussystems/iproute2 ip route
default via dev eth0 src
dev eth0 proto kernel scope link

DHCP still doesn’t work, though. Looking back at the IP stack diagram, there’s a missing link from the bridge to eth0.

This is confirmed using the brctl command:

[rancher@rancher ~]$ sudo docker run -ti --rm --net=host igneoussystems/iproute2 brctl show
bridge name	bridge id		STP enabled	interfaces
cni0		8000.00155d02cb02	no		veth0c90ef73

That means we need to add the eth0 interface to the bridge. This is done here; the code below (error handling removed) shows how it works. First, we copy the IP address from eth0 to the bridge, because the bridge interface will effectively replace eth0 as the primary interface handling all traffic, even for the host itself. Then we call LinkSetMaster to add eth0 to the bridge.

// Copy the IPv4 address from eth0 to the bridge
addrs, err := netlink.AddrList(br, netlink.FAMILY_V4)
gwIp := uplinkAddrs[0].IP
foundAddr := false
for _, addr := range addrs {
	if addr.IP.Equal(gwIp) {
		foundAddr = true
		break
	}
}
if !foundAddr {
	addr := &netlink.Addr{
		IPNet: netlink.NewIPNet(gwIp),
	}
	err = netlink.AddrAdd(br, addr)
}

// Add the uplink interface to the bridge if it isn't already there
// If MasterIndex == 0, then the interface isn't part of a bridge
// If MasterIndex != BridgeIndex, then the interface is part of a different bridge
if uplinkLink.Attrs().MasterIndex != br.Attrs().Index && uplinkLink.Attrs().MasterIndex != 0 {
	// Fail
}
err = netlink.LinkSetMaster(uplinkLink, br)

Unfortunately, this caused my SSH connection to drop after a minute and still prevented traffic. To fix this, we need to move the routes to the bridge interface, since it needs to effectively replace eth0 as the primary interface.

In the code below, we get the routes defined on eth0 so we can add them to the bridge. This failed at first, with Linux returning a syscall error.

This was tricky to figure out, but in Linux you can’t define a route whose next hop Linux doesn’t already know how to reach. For example, defining a default route via a gateway means that you also need to define where to find that gateway. In most networks you get that route for free: a kernel-added, link-scoped route that tells Linux the gateway’s subnet is reachable on the eth0 interface. In our case, though, we’re explicitly defining all routes ourselves, so we need to add the link-scoped subnet route first, and only then the default route via the gateway.

As a simple trick, I sort the routes by mask length so that more specific routes appear first in the slice. After sorting, I remove each route from eth0 and add it to the bridge.

routes, err := netlink.RouteList(uplinkLink, netlink.FAMILY_V4)
if len(routes) > 0 {
	// Sort routes so that more specific routes appear first. This avoids an issue
	// where we can't create a default route until the subnet route is available
	sort.Slice(routes, func(i, j int) bool {
		// A nil Dst is the default route; it must sort last (checked before
		// touching its mask to avoid a nil dereference)
		if routes[i].Dst == nil || routes[i].Dst.Mask == nil {
			return false
		}
		if routes[j].Dst == nil || routes[j].Dst.Mask == nil {
			return true
		}
		l, _ := routes[i].Dst.Mask.Size()
		r, _ := routes[j].Dst.Mask.Size()
		return l >= r
	})
	for _, route := range routes {
		err = netlink.RouteDel(&route)
		route.LinkIndex = br.Index
		err = netlink.RouteAdd(&route)
	}
}

Now I have a route table that looks like this and my pods are able to work on RancherOS:

rancher@rancher$ ip route
[...]
default via dev cni0 src metric 203
dev cni0 proto kernel scope link src metric 203
dev veth3ec79a35 scope link
[...]

Of course, what would this blog series be without a new problem to solve? When I tried running this on an Ubuntu machine, I hit more networking issues: DHCP requests were again not making it out to the network.

As it turns out, there’s a difference in the default IPTables rule set between Ubuntu Server and RancherOS.

In RancherOS, the FORWARD chain has a default value of ACCEPT:

[rancher@rancher ~]$ sudo iptables -L -v
[...]
Chain FORWARD (policy ACCEPT 15883 packets, 2853K bytes)
[...]

Whereas Ubuntu Server has a default policy of DROP:

user@ubuntu:~$ sudo iptables -L -v
[...]
Chain FORWARD (policy DROP 2735 packets, 495K bytes)
[...]

This means we’re going to have to manage IPTables rules that permit each pod to communicate with the network. Stay tuned for the next post, where we extend the CNI to include IPTables rules.
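In the meantime, the diagnosis can be confirmed by temporarily flipping the policy on the Ubuntu host (a blunt, insecure workaround that accepts all forwarded traffic, not the eventual per-pod fix):

```
# Temporary diagnostic only: allow all forwarded traffic
sudo iptables -P FORWARD ACCEPT
```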

