Auto disable Kubernetes' service LB NodePorts

In a previous post, I noticed that all my Kubernetes Services with type=LoadBalancer were also allocating NodePorts, which meant I might be exposing internal services to the Internet on high ports. I was running Kubernetes directly on my dedicated servers, not behind an external load balancer; Kubernetes assumes most clusters sit behind a load balancer, which often requires a NodePort to route traffic to the nodes.

The solution was to set the Service's spec.allocateLoadBalancerNodePorts field to false when the Service is created. That works when I control the manifest, but Helm-based charts often don't expose this field, and once it is set to true and a NodePort has been allocated, it is difficult to deallocate.
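
For reference, here is a minimal sketch of a Service manifest with the field set explicitly; the name, selector, and ports are placeholders:

# Hypothetical LoadBalancer Service that opts out of NodePort allocation
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  selector:
    app: example
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP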

In this post, I walk through using a Kubernetes mutating webhook to automatically set the value for all Services.

What are webhooks?

Kubernetes admission webhooks are a feature that lets you intercept requests to the API server before an object is persisted, so you can inspect or modify any change made in the cluster. There are two flavors:

  • Validating - Checks a pending resource change and returns whether it should be denied or accepted
  • Mutating - Checks a pending resource change and possibly changes the values in that resource.

Webhooks are registered with the API server as Kubernetes resources (MutatingWebhookConfiguration or ValidatingWebhookConfiguration) and are then called automatically on matching requests.
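
For illustration, a hand-written registration for Services might look like the sketch below; the webhook Service name, namespace, and path are placeholders, and with Kyverno you never write this yourself since it manages its own webhook configurations:

# Hypothetical MutatingWebhookConfiguration; Kyverno generates its own equivalents
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-service-mutator
webhooks:
  - name: services.example.dev
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore             # fail open if the webhook server is unreachable
    clientConfig:
      service:
        name: example-webhook         # placeholder Service fronting the webhook server
        namespace: example-system
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services"]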

Options

There’s a few different options:

I first experimented with OPA Gatekeeper, but found the policy authoring and registration process complicated. KubeWarden was another option, but it felt overly complex for the basic policies I needed. Kyverno looked perfect: I could define policies using plain YAML, far simpler than writing Rego for Gatekeeper or WebAssembly modules for KubeWarden. So I selected Kyverno.

Deploying Kyverno

# values.yaml
# Cleanups are currently considered alpha. Minimize the deployment size
cleanupController:
  enabled: false
reportsController:
  enabled: false
features:
  admissionReports:
    enabled: false
  # This next feature is optional
  # Set it to true to allow fail-open if Kyverno is offline
  # In my home lab, I'm using this for house-keeping, not security
  # So I want to be able to gracefully handle errors.
  # https://kyverno.io/docs/installation/#security-vs-operability
  forceFailurePolicyIgnore:
    enabled: true
  policyReports:
    enabled: false
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno --create-namespace -n kyverno -f values.yaml
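
Before relying on the admission controller, a quick sanity check that the Kyverno pods are running (the namespace matches the install above):

kubectl -n kyverno get pods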

Authoring the Policy

I defined the following policy and applied it to my cluster. Any time a LoadBalancer Service is created or updated, the policy mutates it to disable NodePort allocation.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disable-lb-node-port
spec:
  # Only mutate resources as they are created or updated, not retroactively
  mutateExistingOnPolicyUpdate: false
  validationFailureAction: Enforce
  rules:
    - name: mutate-loadbalancer-service
      match:
        resources:
          kinds:
            - Service
      # Only apply to Services of type LoadBalancer
      preconditions:
        all:
          - key: "{{ request.object.spec.type }}"
            operator: Equals
            value: LoadBalancer
      # Merge in the field that disables NodePort allocation
      mutate:
        patchStrategicMerge:
          spec:
            allocateLoadBalancerNodePorts: false
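
To apply and spot-check the policy, you can create a throwaway LoadBalancer Service and inspect the mutated field; the file name and Service name below are arbitrary placeholders:

kubectl apply -f disable-lb-node-port.yaml
kubectl create service loadbalancer test-lb --tcp=5432:5432
kubectl get service test-lb -o jsonpath='{.spec.allocateLoadBalancerNodePorts}'
# Expect "false"; clean up with: kubectl delete service test-lb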

Existing Resources

Any Service created before the policy won't be updated automatically. Even if we toggle allocateLoadBalancerNodePorts to false on an existing Service, any nodePorts that were already allocated will remain allocated and accessible from the Internet.

apiVersion: v1
kind: Service
metadata:
  name: postgres-lb
  namespace: datastore
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.43.23.122
  clusterIPs:
    - 10.43.23.122
  externalTrafficPolicy: Local
  healthCheckNodePort: 32439 # allocated because externalTrafficPolicy is Local; see note below
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: sql
      port: 5432
      nodePort: 30313 # already allocated; flipping allocateLoadBalancerNodePorts to false won't release it
      protocol: TCP
      targetPort: 5432
  selector:
    cnpg.io/cluster: postgres
  sessionAffinity: None
  type: LoadBalancer

The only way I found to fix this is to remove the nodePort field and temporarily rename each port, which forces Kubernetes to treat it as a new port and clear the old allocation; see the sketch below.
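
As a rough sketch against the postgres-lb Service above, the temporary edit would look something like this; once the allocation is cleared, the port can be renamed back to sql:

# Temporarily rename the port and drop nodePort so Kubernetes releases the allocation
spec:
  allocateLoadBalancerNodePorts: false
  ports:
    - name: sql-tmp       # renamed from "sql" so the old allocation is dropped
      port: 5432
      protocol: TCP
      targetPort: 5432
      # nodePort omitted on purpose; with allocation disabled, none is assigned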

Unfortunately, as of Kubernetes 1.26 it is still not possible to disable the healthCheckNodePort if you're using externalTrafficPolicy=Local.

Conclusion

Using Kyverno, I showed how to automatically disable the unintended NodePorts created by LoadBalancer Services.
