
Endpoint Selectors and Kubernetes Namespaces in CiliumNetworkPolicies

While performing some testing with CiliumNetworkPolicies, I came across a behavior that was unintuitive and unexpected to me. The behavior centers on how an endpoint selector works in a CiliumNetworkPolicy when Kubernetes namespaces are involved. (If you didn’t understand a bit of what I just said, I’ll provide some additional explanation shortly; stay with me!) After chatting through the behavior with a few folks, I realized it is essentially “correct” and expected. However, if I was confused by it, there’s a good chance others might be confused as well, so I thought a quick blog post might be a good idea. Keep reading for more details on the interaction between endpoint selectors and Kubernetes namespaces in CiliumNetworkPolicies.

Before digging into the behavior, let me first provide some definitions or explanations of the various things involved here:

- CiliumNetworkPolicy: a Kubernetes custom resource, provided by Cilium, that extends the standard Kubernetes NetworkPolicy with additional functionality.
- Endpoint: in Cilium terminology, an object (typically a pod) that has an IP address and is managed by Cilium; network policies are enforced against endpoints.
- Endpoint selector: the portion of a CiliumNetworkPolicy that uses labels to determine which endpoints a policy (or a rule within a policy) applies to.
- Kubernetes namespace: a construct for logically dividing the resources in a Kubernetes cluster into separate groups.

With these high-level definitions in mind, let’s dig into the behavior. I’ll start with this short CiliumNetworkPolicy (taken directly from the Cilium docs):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-all-to-victim"
spec:
  endpointSelector:
    matchLabels:
      role: victim
  ingress:
  - fromEndpoints:
    - {}

This policy is described as a rule that will allow all inbound traffic to the endpoints that match the endpointSelector. The rule has an empty fromEndpoints entry (the - {} under fromEndpoints in the example above), which does indeed select all endpoints.
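
For a bit of added context, here is a minimal and purely illustrative Pod that the endpointSelector above would match. The pod name, namespace, and image are placeholders of my own choosing; the Pod would need to be in the same namespace the policy is created in.

apiVersion: v1
kind: Pod
metadata:
  name: victim-pod        # hypothetical name, for illustration only
  namespace: default      # must be the same namespace the policy is applied in
  labels:
    role: victim          # the label the policy's endpointSelector matches on
spec:
  containers:
  - name: web
    image: nginx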

Now look at this policy (also taken directly from the Cilium documentation):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}

Note the similarity between the ingress section of this example policy and that of the first one: this policy has an empty matchLabels set on the fromEndpoints, meaning it will match any set of labels on an endpoint (essentially, matching all endpoints). Because the fromEndpoints matches all endpoints, just like in the first policy, you might look at this policy and think it is also intended to allow all inbound traffic.

However, this policy is described as a policy to “lock down ingress of the pods” in the specified namespace. Huh? How can two policies, both of which have empty fromEndpoints rules, be described as allowing all inbound traffic in one case and locking down inbound traffic in the other?

The key here is understanding the interaction between CiliumNetworkPolicies and Kubernetes namespaces. The Cilium docs have a page discussing the use of Kubernetes constructs in policies, and that page reminds you that CiliumNetworkPolicies are namespace-scoped (limited or constrained to a single namespace). What this means is that the empty fromEndpoints rule in the policy example above does select all endpoints, but only endpoints in the namespace in which the policy is applied.

When you add that missing piece, that an empty endpoint selector only selects endpoints in the same namespace as the policy, these two example policies make more sense. Both policies accomplish the same thing: they allow all traffic from the current namespace while denying traffic from other namespaces. (The first policy is, in my opinion, a bit inaccurately described. It does allow all inbound traffic, but only from the current namespace.)
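
To make that implicit behavior a little more concrete, here’s my understanding of what the first policy effectively amounts to once namespace scoping is taken into account, with the namespace label spelled out explicitly. This is a sketch, and it assumes the policy is applied in a namespace named ns1 (the namespace name is just an example):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-all-to-victim"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      role: victim
  ingress:
  - fromEndpoints:
    - matchLabels:
        # the empty fromEndpoints rule behaves as if this namespace label
        # were present, limiting allowed sources to endpoints in ns1
        k8s:io.kubernetes.pod.namespace: ns1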

So what’s the key takeaway here? If you need to write a CiliumNetworkPolicy that allows traffic from endpoints in another namespace, you can’t use an empty fromEndpoints rule; instead, you need to explicitly call out the namespace of the source endpoints, like this:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2
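
As a related aside, each entry in a fromEndpoints list is evaluated independently, and traffic matching any one of them is allowed. Based on my reading of the docs, a sketch like the following (with a hypothetical policy name) should allow traffic from both the policy’s own namespace and ns2:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "expose-across-namespace-plus-local"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - {}                       # endpoints in ns1, the policy's own namespace
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2    # endpoints in ns2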

I hope this explanation is helpful. As I mentioned at the start of the post, if I was confused by this then there’s a reasonable chance that others are (or will be) confused by it as well. I do intend to submit one or more PRs to the Cilium documentation to see if I can help clarify this in some way. In the meantime, if you have any questions—or any feedback for me—feel free to reach out. You can find me on Twitter, in the Fediverse, or in a number of different Slack communities. Thanks for reading!
