cloudMetadataServer = false ignored for hub pod

I have:

        networkPolicy = {
          enabled = true
          egressAllowRules = {
            cloudMetadataServer = false
          }
        }

but from inside the pod I can curl the metadata server (169.254.169.254).

This is an incomplete configuration in a format unknown to me. Network policy config can be found under hub or singleuser, for example. Is this meant to map to Helm chart config values under hub?

Network policy enforcement has requirements on your k8s cluster. Does your cluster actually enforce network policies?

It’s Terraform. That’s the networkPolicy config for the hub pod. Yes, network policies are working correctly on the GKE cluster.
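For reference, if the Terraform module forwards that map verbatim as Helm values under the chart's hub key (an assumption about the wiring; the release name and chart path are the ones that appear later in this thread), the equivalent chart-level override would be roughly:

# Sketch: the same setting expressed as a direct Helm override
$ helm upgrade binderhub ./jupyterhub \
    --set hub.networkPolicy.enabled=true \
    --set hub.networkPolicy.egressAllowRules.cloudMetadataServer=false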


Can you copy-paste the hub netpol resource from within the k8s cluster?
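Something like this should dump it (a sketch; the resource name hub is an assumption based on the chart's labels):

# Sketch: dump the hub pod's network policy as applied in the cluster
$ kubectl get networkpolicy hub -o yaml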

Sure

spec:
  egress:
  - ports:
    - port: 8001
      protocol: TCP
    to:
    - podSelector:
        matchLabels:
          app: jupyterhub
          component: proxy
          release: binderhub
  - ports:
    - port: 8888
      protocol: TCP
    to:
    - podSelector:
        matchLabels:
          app: jupyterhub
          component: singleuser-server
          release: binderhub
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - ipBlock:
        cidr: 169.254.169.254/32
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
        - 169.254.169.254/32
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32
  ingress:
  - from:
    - podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
    ports:
    - port: http
      protocol: TCP
  podSelector:
    matchLabels:
      app: jupyterhub
      component: hub
      release: binderhub
  policyTypes:
  - Ingress
  - Egress

Hmm, so access is granted by the last entry in the egress section.

Why is the netpol Helm template rendered to give you that, when you haven’t configured it?

I’ll try it myself and render the chart, focusing on that netpol resource (helm template … --show-only …).

The setting works - are you sure the configuration has been applied correctly?
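One way to check that on the live cluster (a sketch; the release name binderhub is taken from the netpol labels above):

# Sketch: show the user-supplied values of the deployed release, to
# confirm the cloudMetadataServer=false override actually arrived
$ helm get values binderhub | grep -A 4 networkPolicy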

# The setting is true by default in the hub pod, so we get an
# egress rule allowing access to the cloud metadata server
$ helm template ./jupyterhub --show-only templates/hub/netpol.yaml | tail -n 10

        - ipBlock:
            cidr: 10.0.0.0/8
        - ipBlock:
            cidr: 172.16.0.0/12
        - ipBlock:
            cidr: 192.168.0.0/16
    # Allow outbound connections to the cloud metadata server
    - to:
        - ipBlock:
            cidr: 169.254.169.254/32



# With the setting explicitly false, we get no egress rule
# allowing access to the cloud metadata server
$ helm template ./jupyterhub --show-only templates/hub/netpol.yaml --set hub.networkPolicy.egressAllowRules.cloudMetadataServer=false | tail -n 10

              # - don't allow outbound connections to the cloud metadata server
              - 169.254.169.254/32
    # Allow outbound connections to private IP ranges
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        - ipBlock:
            cidr: 172.16.0.0/12
        - ipBlock:
            cidr: 192.168.0.0/16
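If it helps, here is a quick way to see exactly what that flag changes in the render (a sketch using bash process substitution):

# Sketch: diff the default render against the explicit-false render;
# only the metadata-server egress entries should differ
$ diff \
    <(helm template ./jupyterhub --show-only templates/hub/netpol.yaml) \
    <(helm template ./jupyterhub --show-only templates/hub/netpol.yaml \
        --set hub.networkPolicy.egressAllowRules.cloudMetadataServer=false)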

My bad, this is what the network policy looks like with the configuration I shared:

spec:
  egress:
  - ports:
    - port: 8001
      protocol: TCP
    to:
    - podSelector:
        matchLabels:
          app: jupyterhub
          component: proxy
          release: binderhub
  - ports:
    - port: 8888
      protocol: TCP
    to:
    - podSelector:
        matchLabels:
          app: jupyterhub
          component: singleuser-server
          release: binderhub
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - ipBlock:
        cidr: 169.254.169.254/32
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
        - 169.254.169.254/32
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
  ingress:
  - from:
    - podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
    ports:
    - port: http
      protocol: TCP
  podSelector:
    matchLabels:
      app: jupyterhub
      component: hub
      release: binderhub
  policyTypes:
  - Ingress
  - Egress

but from inside the pod I can curl it:

$ kubectl exec -it pod/hub-96637f65b-lmd91 -- bash
jovyan@hub-96637f65b-lmd91:/srv/jupyterhub$ curl 169.254.169.254
computeMetadata/
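Two sanity checks that might be worth running at this point (a sketch; the pod name is copied from above, and the netpol resource name hub is an assumption):

# Sketch: confirm the pod carries the labels the policy's podSelector
# matches (app, component, release)
$ kubectl get pod hub-96637f65b-lmd91 --show-labels

# Sketch: confirm the policy exists and inspect its rules server-side
$ kubectl describe networkpolicy hub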