In a previous post, I noticed that all my Kubernetes Services with type=LoadBalancer were also allocating NodePorts, which meant internal services could be exposed to the Internet on high ports. I was running Kubernetes directly on my dedicated servers, not behind a load balancer; Kubernetes assumes everything sits behind an LB, which often requires a NodePort.
The solution was to set the Service's spec.allocateLoadBalancerNodePorts field to false when the Service is created. This works if I can set it at creation time, but Helm-based templates often don't expose this setting, and once it was set to true and the NodePort was allocated, it was difficult to deallocate.
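For reference, here is what that field looks like on a Service manifest. The name, selector, and ports are hypothetical; the key line is allocateLoadBalancerNodePorts:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical Service name
spec:
  type: LoadBalancer
  # Prevent Kubernetes from allocating a NodePort for each port below
  allocateLoadBalancerNodePorts: false
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
```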
In this post, I walk through using a Kubernetes mutating webhook to automatically set the value for all Services.
What are webhooks?
Kubernetes admission webhooks are a feature that lets you intercept and process any change made in the cluster. There are two flavors:
- Validating - Checks a pending resource change and returns whether it should be accepted or denied.
- Mutating - Checks a pending resource change and may modify values in that resource before it is persisted.
Webhooks get registered as a Kubernetes resource, then are called automatically.
Options
There are a few different options:
- Manually create a webhook and implement the API
- OpenPolicyAgent Gatekeeper
- Kyverno
- KubeWarden
I first experimented with OPA Gatekeeper, but found the authoring and policy-registration process complicated. KubeWarden was another option, but it was overly complex for the basic policies I needed. Kyverno looked perfect: I could define policies using plain YAML, far simpler than writing Rego for OPA or WebAssembly modules for KubeWarden. So I selected Kyverno.
Deploying Kyverno
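The standard way to install Kyverno is via its Helm chart (this requires a working cluster and Helm; the release name and namespace below are the commonly documented defaults):

```shell
# Add the official Kyverno chart repository and install it
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```

Once the Kyverno pods are running, the cluster will accept ClusterPolicy resources.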
Authoring the Policy
I defined the following policy and applied it to my cluster. Any time a LoadBalancer Service is created or updated, the policy applies and disables NodePort allocation.
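A sketch of such a Kyverno mutating policy follows. The policy and rule names are illustrative; it matches Services, checks that the type is LoadBalancer, and merges in allocateLoadBalancerNodePorts: false:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disable-lb-nodeports        # hypothetical policy name
spec:
  rules:
    - name: set-allocate-lb-nodeports-false
      match:
        any:
          - resources:
              kinds:
                - Service
      # Only act on Services of type LoadBalancer
      preconditions:
        all:
          - key: "{{ request.object.spec.type }}"
            operator: Equals
            value: LoadBalancer
      mutate:
        patchStrategicMerge:
          spec:
            allocateLoadBalancerNodePorts: false
```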
Existing Resources
Any Service created before the policy won't be updated automatically. Even if we toggle allocateLoadBalancerNodePorts to false, any existing nodePorts will remain allocated and accessible from the Internet.
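For example, a Service spec can end up in a state like this, where the flag is false but the previously allocated port lingers (port values illustrative):

```yaml
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 31380   # allocated before the change; still open
```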
The only way to fix this is to remove the nodePort field and temporarily rename each port, which gets Kubernetes to clear the allocation.
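One way to sketch that workaround with kubectl and a JSON patch (the Service name, port index, and port names here are hypothetical):

```shell
# Rename the port and drop its nodePort; the rename makes Kubernetes
# treat it as a new port, so the old allocation is not carried over
kubectl patch service my-svc --type=json -p \
  '[{"op":"replace","path":"/spec/ports/0/name","value":"http-tmp"},
    {"op":"remove","path":"/spec/ports/0/nodePort"}]'

# Rename the port back to its original name
kubectl patch service my-svc --type=json -p \
  '[{"op":"replace","path":"/spec/ports/0/name","value":"http"}]'
```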
Unfortunately, as of Kubernetes 1.26 it's still not possible to disable the healthCheckNodePort if you're using externalTrafficPolicy=Local.
Conclusion
Using Kyverno, I showed how to automatically disable the unintended NodePorts allocated for LoadBalancer Services.