Split Horizon DNS with external-dns and cert-manager for Kubernetes

There were a few services I ran that I wanted to be able to access from both inside and outside my home network. From inside the network, I wanted traffic to route directly to the service; from outside, it needed to go through a proxy that would route into my home lab. Additionally, I wanted TLS on all of my services, which I handle with cert-manager.

Since my IPv4 addresses differ inside my network versus outside, I need split-horizon DNS to answer each DNS query with the correct address. Split-horizon DNS means that DNS on one horizon (inside the network) returns different results than DNS outside the network.

One day we’ll all be on IPv6 and this won’t be needed, because every service will have a globally unique IPv6 address, but alas, we’re stuck in IPv4 land. With IPv6, instead of maintaining separate views of the IP addresses, each service could register a single global IPv6 address, and no matter where you are, that address would route to the correct Kubernetes service.

My cluster uses MetalLB as its L4 load balancer, and it (or an equivalent) is required to forward IP-level packets to the DNS server. NodePorts don’t work here because they use a random high port, while other DNS servers expect to reach the resolver on 53/udp and 53/tcp.
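
For reference, a minimal MetalLB address pool looks something like the sketch below. This assumes a recent MetalLB release configured through the IPAddressPool CRDs; the pool name and address range are examples, so substitute whatever range your router leaves free for the cluster.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool # Example name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.6.2-192.168.6.50 # Example range; the LB IPs used below (192.168.6.2, 192.168.6.8) come from here
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2 # Example name
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool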

Prior to this, I purchased a domain name, mydomain.com, and reserved a subdomain, *.home.mydomain.com, that all of my home lab software runs on top of. I usually purchase my domain names from porkbun.com, but any domain name registrar will work.

The Problem with TLS

To provision certificates, Let’s Encrypt needs to confirm that you own, or at least control, the domain name you’re requesting a certificate for. Since my home lab runs inside a private network that isn’t reachable from the internet, Let’s Encrypt can’t verify that I own *.home.mydomain.com with an HTTP challenge. Instead, cert-manager needs to be able to create a record in a publicly resolvable DNS zone (a DNS-01 challenge).

The diagram below shows the problem split-horizon DNS causes here: cert-manager needs to update the external DNS server and then verify that the record exists, but if its queries hit the internal DNS server, it can’t confirm that the record has propagated.

A diagram showing how split-horizon DNS prevents Let’s Encrypt and cert-manager from verifying ownership because there are two different DNS servers.

Architectural diagram showing how DNS queries progress through the network: clients query Pi-hole, which conditionally forwards the lab zone to the CoreDNS instance and sends everything else to the internet. CoreDNS then queries etcd for the lab zone records.

Solution

Let’s break down the different components and how to configure each one.

Component diagram showing the software we’re going to deploy

Ingress-NGINX

Ingress controllers in Kubernetes automatically update each Ingress resource’s status block with an IP address that can be used to reach that Ingress. However, there are multiple IP addresses that could be associated with a given Ingress: with a NodePort service, the status points at a node’s IP, but with a LoadBalancer service it should point at the IP of ingress-nginx’s load balancer.

In the example below, the Ingress status is pointing at the node:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonos-api
spec:
  # ...
status:
  loadBalancer:
    ingress:
    - ip: 192.168.2.196 # This is the node's IP, not the service IP

The correct IP, as exposed by MetalLB for NGINX, is 192.168.6.8. To fix this, we need to tell ingress-nginx to publish its service IP instead. If you’ve deployed ingress-nginx using Helm, change the values.yaml:

controller:
  publishService:
    enabled: true # Change from false to true

After deploying, NGINX should modify the Ingress statuses to point to the correct IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonos-api
spec:
  # ...
status:
  loadBalancer:
    ingress:
    - ip: 192.168.6.8 # Now the ingress-nginx service IP instead of the node IP

etcd

External-dns will store the DNS records in etcd, and CoreDNS will look up answers for the zone in etcd. The StatefulSet below runs a single-node etcd instance for this purpose:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: dns
spec:
  serviceName: etcd # Required for StatefulSets; matches the headless Service sketched below
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: apps.statefulset-dns-etcd
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: apps.statefulset-dns-etcd
    spec:
      containers:
        - command:
            - /bin/sh
            - '-c'
            - |
              exec etcd --name ${HOSTNAME} \
                --listen-peer-urls http://0.0.0.0:2380 \
                --listen-client-urls http://0.0.0.0:2379 \
                --advertise-client-urls http://${HOSTNAME}.etcd:2379 \
                --initial-advertise-peer-urls http://${HOSTNAME}:2380 \
                --initial-cluster-token etcd-cluster-1 \
                --initial-cluster-state new \
                --data-dir /var/run/etcd/default.etcd
          image: quay.io/coreos/etcd:latest
          name: etcd
          ports:
            - containerPort: 2379
              name: client
              protocol: TCP
            - containerPort: 2380
              name: peer
              protocol: TCP
          volumeMounts:
            - mountPath: /var/run/etcd
              name: data
      volumes:
        - hostPath:
            path: /home/docker/dns-etcd
            type: ''
          name: data
  replicas: 1
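
The etcd command above advertises itself as ${HOSTNAME}.etcd, which assumes a headless Service named etcd in the same namespace; that Service also provides the etcd.dns.svc.cluster.local name that external-dns and CoreDNS use below. A minimal sketch (the ports and selector mirror the StatefulSet above):

apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: dns
spec:
  clusterIP: None # Headless, so ${HOSTNAME}.etcd resolves to the pod
  selector:
    workload.user.cattle.io/workloadselector: apps.statefulset-dns-etcd
  ports:
    - name: client
      port: 2379
      targetPort: 2379
    - name: peer
      port: 2380
      targetPort: 2380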

External-DNS

External-DNS is a Kubernetes tool that takes Ingresses or Services defined in Kubernetes and automatically creates DNS records for them in a provider. For Ingresses, it uses the IP address that the ingress controller (ingress-nginx) wrote into the Ingress status.

Following the Helm install method, update the following Helm values. Note that I’m using the Kubernetes namespace ‘dns’; if you deploy etcd/external-dns in a different namespace, make sure to update ETCD_URLS below. Also note that I’m using fully qualified names because of an issue I discovered and documented in this post.

env:
  - name: ETCD_URLS
    value: http://etcd.dns.svc.cluster.local.:2379

provider: coredns
sources:
  - ingress
  - service

Once deployed, check the logs to verify that the records are being created correctly.
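
As a concrete example, an Ingress like the sketch below is all external-dns needs in order to write an A record for sonos-api.home.mydomain.com into etcd, using the LoadBalancer IP from the Ingress status. The hostname, backend service, and ingressClassName are illustrative; adjust them for your own workload.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonos-api
spec:
  ingressClassName: nginx # Assumes the default ingress-nginx class name
  rules:
    - host: sonos-api.home.mydomain.com # external-dns creates the record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sonos-api # Example backend service
                port:
                  number: 80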

CoreDNS

CoreDNS will answer the DNS queries by querying etcd for records.

Kubernetes has a limitation where, by default, a single LoadBalancer Service can’t expose both TCP and UDP ports.

To fix this, you have two options:

  1. Enable the feature gate MixedProtocolLBService
  2. Disable TCP based DNS queries and hope you don’t exceed the size of a UDP packet (ref.)

Many Kubernetes clusters already come with a built-in CoreDNS instance that handles pod DNS queries, so you can either:

  1. Create a separate CoreDNS instance to handle the split-horizon zone queries, or
  2. Update the existing kube-dns instance (if it’s CoreDNS).

The Helm values below take the first approach and deploy a separate instance (note isClusterService: false):

isClusterService: false
servers:
  - plugins:
      - name: errors
      - configBlock: lameduck 5s
        name: health
      - name: ready
      - name: prometheus
        parameters: 0.0.0.0:9153
      - name: loadbalance
      - configBlock: |-
          stubzones
          path /skydns
          endpoint http://etcd.dns.svc.cluster.local.:2379          
        name: etcd
        parameters: home.mydomain.com
    port: 53
    zones:
      - zone: home.mydomain.com
        # Disable TCP unless MixedProtocolLBService=true
        # https://github.com/kubernetes/kubernetes/issues/23880
        scheme: dns://
        use_tcp: false
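
For readability, the server block above renders into a Corefile along these lines (the exact output depends on the chart version); it is shown here only to make the etcd plugin configuration easier to follow:

home.mydomain.com:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    prometheus 0.0.0.0:9153
    loadbalance
    etcd home.mydomain.com {
        stubzones
        path /skydns
        endpoint http://etcd.dns.svc.cluster.local.:2379
    }
}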

If the downstream DNS server is located outside the cluster, expose CoreDNS with a LoadBalancer Service. If you’re running something like Pi-hole inside the cluster, you don’t need this and can point it at the CoreDNS Service directly.

service:
  externalTrafficPolicy: Local
serviceType: LoadBalancer
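
Because the router configuration later in this post hardcodes the CoreDNS load balancer address, it’s worth pinning that IP instead of letting MetalLB pick whichever address is free. One way to do this, assuming MetalLB 0.13+ and a chart that exposes service.annotations (check your chart’s values), is:

service:
  externalTrafficPolicy: Local
  annotations:
    # Ask MetalLB for a fixed address so the forwarding rule on the router stays valid
    metallb.universe.tf/loadBalancerIPs: 192.168.6.2
serviceType: LoadBalancer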

After that, you should have a LoadBalancer Service like the one below. Use its IP address to configure your DNS server to forward queries for your internal domain to CoreDNS.

apiVersion: v1
kind: Service
metadata:
  name: split-horizon-dns-coredns
  namespace: dns
spec:
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.6.2 # The LB IP assigned by MetalLB; forward the lab zone here

cert-manager

Cert-manager is responsible for requesting certificates from Let’s Encrypt (or any other compliant ACME certificate provider) and storing them in your cluster. To install it, follow the standard Helm installation process, but make sure to update the following values.

The --dns01-recursive-nameservers flag tells cert-manager not to use the internal DNS server when checking that the DNS-01 challenge record has propagated, and to query Google’s public DNS servers instead. Feel free to use any public resolver.

extraArgs:
  - --dns01-recursive-nameservers=8.8.8.8:53,8.8.4.4:53
installCRDs: true

After that, create a ClusterIssuer or Issuer. In the DNS01 solver, you’ll need to configure a solver for your DNS provider. See the official docs for how to set that up for your provider.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: home-issuer
spec:
  acme:
    email: user@example.com
    preferredChain: ""
    privateKeySecretRef:
      name: cert-manager-info
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - dns01:
        # Point this to your DNS provider
      selector:
        dnsNames:
        # Replace with your domain zone
        - home.mydomain.com
        - '*.home.mydomain.com'
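
To sanity-check the issuer, you can request a certificate directly. The sketch below (the Certificate and Secret names are examples) asks for a wildcard certificate covering the lab zone; alternatively, add the cert-manager.io/cluster-issuer: home-issuer annotation to each Ingress and let cert-manager request per-host certificates for you.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-home # Example name
spec:
  secretName: wildcard-home-tls # The signed cert and key are stored in this Secret, in the same namespace
  dnsNames:
    - home.mydomain.com
    - '*.home.mydomain.com'
  issuerRef:
    name: home-issuer
    kind: ClusterIssuer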

Router Configuration

After the service is deployed, update the router to forward DNS queries for your lab domain to the newly created service IP address.

In the Ubiquiti EdgeRouter config, this is defined as:

set service dns forwarding options server=/home.mydomain.com/192.168.6.2

Or, in a plain dnsmasq configuration:

server=/home.mydomain.com/192.168.6.2

Future Work

After this is set up, you should be able to access any Ingress you’ve configured under your home DNS zone when you’re inside your home network.

If you want to be able to access your home lab services outside your home network, you can use Wireguard or expose NGINX to the internet and create DNS records in a publicly accessible DNS zone. If you expose any services to the internet, do take care to ensure that you securely restrict access. In a future post, I may document how I do this for my own network.
