Why is Kubernetes opening random ports?

I recently responded to the Log4j vulnerability (Log4Shell). If you’re not aware, Log4j is a very popular Java logging library used in many Java applications. It had a vulnerability where malicious actors could remotely take control of your server by submitting a specially crafted request parameter that ends up being logged by Log4j.

This situation was not ideal since I was running several Java applications on my servers, so I used Nmap to port scan my dedicated server and see which ports were open. I ended up finding a number of ports I didn’t expect: several Kubernetes Services were being exposed as node ports.

In this post, I outline the problem with Kubernetes’ default node port behavior for Services and how to avoid exposing ports that you don’t need.

Using Nmap, I scanned all TCP ports (1-65535) on my own server with the following command. Note: always have permission before scanning a target.

nmap -p 1-65535 -T4 -A -v 192.168.5.1
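
Here, -p 1-65535 covers the full TCP port range, -T4 selects a faster timing template, -A enables OS detection, version detection, and default script scanning, and -v increases verbosity so open ports are reported as they’re discovered.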

This is also a great place to leverage natlas/natlas, a project that a colleague (0xdade) and I have been working on. It provides an automated scanning agent along with a dashboard website for viewing port scan results.

After a few minutes I got a list of open ports:

[...]
Discovered open port 80/tcp on 192.168.5.1
Discovered open port 22/tcp on 192.168.5.1
Discovered open port 53/tcp on 192.168.5.1
Discovered open port 110/tcp on 192.168.5.1
Discovered open port 587/tcp on 192.168.5.1
Discovered open port 995/tcp on 192.168.5.1
Discovered open port 993/tcp on 192.168.5.1
Discovered open port 25/tcp on 192.168.5.1
Discovered open port 443/tcp on 192.168.5.1
Discovered open port 143/tcp on 192.168.5.1
Discovered open port 6443/tcp on 192.168.5.1
Discovered open port 31171/tcp on 192.168.5.1
Discovered open port 9120/tcp on 192.168.5.1
Discovered open port 32006/tcp on 192.168.5.1
Discovered open port 30941/tcp on 192.168.5.1
Discovered open port 10250/tcp on 192.168.5.1
Discovered open port 9100/tcp on 192.168.5.1
Discovered open port 30516/tcp on 192.168.5.1
Discovered open port 8081/tcp on 192.168.5.1
Discovered open port 10254/tcp on 192.168.5.1
Discovered open port 8181/tcp on 192.168.5.1
Discovered open port 30921/tcp on 192.168.5.1
[...]

Many of these ports I expected, but some in the 30000-65535 range were exposing internal applications. For example, a PowerDNS status page:

31171/tcp open     unknown
| fingerprint-strings: 
|   GenericLines: 
|     HTTP/1.1 404 Not Found
|     Connection: close
|     Content-Length: 9
|     Content-Type: text/plain; charset=utf-8
|     Server: PowerDNS/4.3.1
|     Found
|   GetRequest, HTTPOptions: 
|     HTTP/1.1 200 OK
|     Connection: close
|     Content-Length: 21271
|     Content-Type: text/html; charset=utf-8
|     Server: PowerDNS/4.3.1
|     <!DOCTYPE html>
|     <html><head>
|     <title>PowerDNS Authoritative Server Monitor</title>
|     <link rel="stylesheet" href="style.css"/>
|     </head><body>
|     <div class="row">
|     <div class="headl columns"><a href="/" id="appname">PowerDNS 4.3.1</a></div>
|     <div class="headr columns"></div></div><div class="row"><div class="all columns"><p>Uptime: 3.83 days<br>
|     Queries/second, 1, 5, 10 minute averages: 0, 0, 0. Max queries/second: 0<br>
|     Cache hitrate, 1, 5, 10 minute averages: 0.0%, 0.0%, 0.0%<br>
|     Backend query cache hitrate, 1, 5, 10 minute averages: 0.0%, 0.0%, 1.3%<br>
|     Backend query load, 1, 5, 10 minute averages: 0, 0, 0. Max queries/second: 0<br>
|     Total queries: 900. Question/answer latency: 51.8ms</p><br>
|_    <div class="panel"><span class=resetring><i></i><a href="?resetring=logmessages
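
You can confirm a finding like this by hitting the port directly with curl (same host and port as in the scan above):

curl -s http://192.168.5.1:31171/ | head -n 5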

This matched a Kubernetes Service that looked like this:

apiVersion: v1
kind: Service
metadata:
  labels:
    manager: controller
    operation: Update
  name: pdns-tcp
  namespace: technowizardry
spec:
  clusterIP: 10.43.125.234
  clusterIPs:
  - 10.43.125.234
  externalTrafficPolicy: Local
  healthCheckNodePort: 30516 # allocated automatically for LoadBalancer health checks
  ports:
  - name: dns
    nodePort: 30921 # auto-allocated node port
    port: 53
    protocol: TCP
    targetPort: 53
  - name: http
    nodePort: 31171 # auto-allocated node port
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    workload.user.cattle.io/workloadselector: daemonSet-technowizardry-powerdns
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.0
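
You can retrieve a Service definition like this yourself with kubectl -n technowizardry get svc pdns-tcp -o yaml.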

This Service was created to expose an application privately over a WireGuard VPN; I never wanted it exposed publicly. While write access was still protected by an API key, I didn’t want to expose the status page at all.

Unfortunately, while Kubernetes works on-premises, it was designed with the assumption that you’d be running in the cloud, where load balancers (like AWS ELB) need a TCP port on each host to forward traffic to, so by default it allocates node ports for every LoadBalancer Service. I was using MetalLB, which announces a dedicated layer 3 IP address for each Service and doesn’t need node ports at all.
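
With MetalLB, the Service’s external IP (the status.loadBalancer.ingress address above) is what actually receives traffic; you can see it in the EXTERNAL-IP column:

kubectl -n technowizardry get svc pdns-tcp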

I then used kubectl to find all exposed node ports:

kubectl get svc --all-namespaces -o go-template='{{range $item := .items}}{{range $item.spec.ports}}{{if .nodePort}}{{.nodePort}}/{{.protocol}} {{$item.metadata.name}} {{.name}}{{"\n"}}{{end}}{{end}}{{end}}'
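
As a quicker sanity check, plain kubectl get svc -A also shows allocated node ports in the PORT(S) column in the form port:nodePort/protocol, but the go-template output above is easier to grep across a large cluster.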

Digging around, I found GitHub issue kubernetes/kubernetes#69845, which requested an option to disable allocating the node ports: spec.allocateLoadBalancerNodePorts=false

As of Kubernetes 1.20, this is in alpha and can be enabled with the ServiceLBNodePortControl feature gate. Alpha features can change before they become stable, so be careful before enabling this on a cluster that actually matters. If you’re using Rancher RKE1 to deploy your cluster, this is as easy as modifying your cluster config:

rancher_kubernetes_engine_config:
  services:
    kube-api:
      extra_args:
        feature-gates: 'ServiceLBNodePortControl=true'
    kube-controller:
      extra_args:
        feature-gates: 'ServiceLBNodePortControl=true'
    kubelet:
      extra_args:
        feature-gates: 'ServiceLBNodePortControl=true'
    kubeproxy:
      extra_args:
        feature-gates: 'ServiceLBNodePortControl=true'
    scheduler:
      extra_args:
        feature-gates: 'ServiceLBNodePortControl=true'
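
On other distributions, the same --feature-gates=ServiceLBNodePortControl=true flag goes on the corresponding component command lines. Assuming shell access to a node, a rough way to check that the gate took effect:

ps aux | grep kube-apiserver | grep -o 'feature-gates=[^ ]*'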

After the cluster gets updated, you can take advantage of the new field. For each Service, add allocateLoadBalancerNodePorts: false to the spec and delete the nodePort values. Kubernetes will then update the Service and remove the exposed ports.

apiVersion: v1
kind: Service
metadata:
  labels:
    manager: controller
    operation: Update
  name: pdns-tcp
  namespace: technowizardry
spec:
  allocateLoadBalancerNodePorts: false # add this field
  clusterIP: 10.43.125.234
  clusterIPs:
  - 10.43.125.234
  externalTrafficPolicy: Local
  healthCheckNodePort: 30516
  ports:
  - name: dns
    nodePort: 30921 # delete this line
    port: 53
    protocol: TCP
    targetPort: 53
  - name: http
    nodePort: 31171 # delete this line
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    workload.user.cattle.io/workloadselector: daemonSet-technowizardry-powerdns
  sessionAffinity: None
  type: LoadBalancer
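
If you’d rather not edit the manifest by hand, the allocateLoadBalancerNodePorts field can also be set with a merge patch (a sketch using the Service from the example above; note that clearing the existing nodePort values still requires editing the ports list, e.g. with kubectl edit):

kubectl -n technowizardry patch svc pdns-tcp --type merge \
  -p '{"spec":{"allocateLoadBalancerNodePorts":false}}'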

Unfortunately, the healthCheckNodePort can’t be suppressed in the same manner. The only way I found to hide it is to set the global kube-proxy flag --nodeport-addresses=127.0.0.1/32 (the flag takes CIDR ranges), but I wouldn’t recommend that because it applies to every node port on the cluster, not just the health checks.
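
For reference, on RKE1 that global flag would look like the following (a sketch; again, this restricts every node port to loopback, so use with care):

rancher_kubernetes_engine_config:
  services:
    kubeproxy:
      extra_args:
        nodeport-addresses: '127.0.0.1/32'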

Updates

Oct 2023: I discovered how to use Kyverno to fix this issue automatically; see my newer post.
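
I won’t reproduce that post here, but as a rough idea of the shape such a policy could take (a sketch, not the actual policy from that post), a Kyverno ClusterPolicy can mutate LoadBalancer Services to default the field:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disable-lb-node-ports
spec:
  rules:
  - name: set-allocate-node-ports-false
    match:
      any:
      - resources:
          kinds:
          - Service
    mutate:
      patchStrategicMerge:
        spec:
          # conditional anchor: only mutate Services of type LoadBalancer
          (type): LoadBalancer
          allocateLoadBalancerNodePorts: false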
