In one of my previous blog posts, I covered how to do egress traffic blocking with Cilium as a bring-your-own CNI on Azure Kubernetes Service.
-> https://www.danielstechblog.io/egress-traffic-blocking-with-cilium-cluster-wide-network-policies-on-azure-kubernetes-service/
Today, we look into the Cilium Hubble Exporter, which lets us write Hubble flows to the Cilium agent log output. Thus, Hubble flows can be collected by the logging solution running on an Azure Kubernetes Service cluster.
On my Azure Kubernetes Service cluster, I use Fluent Bit for the log collection and Azure Data Explorer as the logging backend.
Enable Cilium Hubble Exporter
The Cilium Hubble Exporter can be enabled in two different modes: static or dynamic. We use the dynamic mode, which provides several advantages over the static mode. It allows you to configure multiple filters and does not require restarting the Cilium agent to apply changes.
-> https://docs.cilium.io/en/stable/observability/hubble/configuration/export/
Using the Helm chart, we set hubble.export.dynamic.enabled to true to deploy the Cilium Hubble Exporter in its default configuration.
-> https://artifacthub.io/packages/helm/cilium/cilium?modal=values&path=hubble.export.dynamic
❯ helm upgrade --install cilium cilium/cilium --version 1.17.0 \
--wait \
--namespace kube-system \
--kubeconfig "$KUBECONFIG" \
...
--set hubble.export.dynamic.enabled=true
We do that to retrieve the config map structure for the Cilium Hubble Exporter configuration. The config map is called cilium-flowlog-config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-flowlog-config
  namespace: kube-system
data:
  flowlogs.yaml: |
    flowLogs:
      - excludeFilters: []
        fieldMask: []
        filePath: /var/run/cilium/hubble/events.log
        includeFilters: []
        name: all
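The generated config map can be inspected directly on the cluster, for example with the following command (the cilium-agent pod name will differ per cluster):

```shell
# Show the default Hubble Exporter config map created by the Helm chart
kubectl get configmap cilium-flowlog-config \
  --namespace kube-system \
  --output yaml
```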
Once we have the structure, we start by fine-tuning the Cilium Hubble Exporter to log only Hubble flows for egress traffic that has been denied by a Cilium network or cluster-wide network policy. Furthermore, those Hubble flows should be logged to stdout instead of to the Cilium agent's file system.
Fine-tune Cilium Hubble Exporter
Fine-tuning the Cilium Hubble Exporter requires a deeper look into the flow API documentation as well as into Cilium's source code to gather the required configuration keys and values.
-> https://docs.cilium.io/en/stable/_api/v1/flow/README/#flowfilter
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/types.go
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/drop.go
Before we start this journey, let us configure the filePath the Hubble flows should be logged to.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-flowlog-config
  namespace: kube-system
data:
  flowlogs.yaml: |
    flowLogs:
      - name: egress-traffic-blocking
        excludeFilters: []
        fieldMask: []
        filePath: /dev/stdout
        includeFilters: []
Why do we use /dev/stdout and not a file stored in the Cilium agent's file system? The answer is simple. We want to collect this log data with an already existing logging solution. In our example, Fluent Bit ingests those logs into Azure Data Explorer.
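As a rough sketch of that pipeline, a Fluent Bit configuration could tail the Cilium agent container logs, keep only the exported flow lines, and forward them to Azure Data Explorer via the azure_kusto output plugin. The file path pattern, the matched field name, and all azure_kusto parameter values below are assumptions for illustration, not the exact configuration used on my cluster:

```
[INPUT]
    Name    tail
    # Assumed container log path pattern for the cilium-agent container
    Path    /var/log/containers/cilium-*_kube-system_cilium-agent-*.log
    Parser  cri
    Tag     hubble.flows

[FILTER]
    Name    grep
    Match   hubble.flows
    # Keep only the Hubble Exporter lines; field name depends on the parser
    Regex   log "flow":

[OUTPUT]
    Name                azure_kusto
    Match               hubble.flows
    Tenant_Id           <tenant-id>
    Client_Id           <client-id>
    Client_Secret       <client-secret>
    Ingestion_Endpoint  https://ingest-<cluster>.<region>.kusto.windows.net
    Database_Name       Logs
    Table_Name          HubbleFlows
```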
Now comes the interesting part. Remember, we only want Hubble flows for egress traffic that has been denied by a network policy. For that, we add the traffic_direction condition with the value EGRESS to the includeFilters section.
Identifying the correct value for the event_type condition is a bit tricky and requires a look into Cilium’s source code.
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/types.go#L19-L58
As we want to log blocked egress traffic, we need to identify the value that stands for the type DROPPED. Looking at the comment in the source code, the counting starts at 0, and 1 is the value for the type DROPPED.
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/types.go#L25
The sub_type for traffic denied by a network policy is much easier to identify by looking at the Go map errors: it is 181.
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/drop.go#L17-L103
-> https://github.com/cilium/cilium/blob/v1.17.0/pkg/monitor/api/drop.go#L79
Our final Cilium Hubble Exporter configuration is shown below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-flowlog-config
  namespace: kube-system
data:
  flowlogs.yaml: |
    flowLogs:
      - name: egress-traffic-blocking
        excludeFilters: []
        fieldMask: []
        filePath: /dev/stdout
        includeFilters:
          - event_type:
              - type: 1
                sub_type: 181
            traffic_direction:
              - EGRESS
Before we roll out our final configuration, we update the Cilium installation to not automatically create the config map for us.
❯ helm upgrade --install cilium cilium/cilium --version 1.17.0 \
--wait \
--namespace kube-system \
--kubeconfig "$KUBECONFIG" \
...
--set hubble.export.dynamic.enabled=true \
--set hubble.export.dynamic.config.configMapName=cilium-flowlog-config \
--set hubble.export.dynamic.config.createConfigMap=false
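With createConfigMap disabled, the config map from the previous section can be applied manually, for instance like this (the file name is arbitrary):

```shell
# Apply the fine-tuned Hubble Exporter configuration
kubectl apply --namespace kube-system --filename cilium-flowlog-config.yaml
```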
Afterward, we apply our Cilium Hubble Exporter configuration and check with the following command whether the configuration was applied successfully.
❯ kubectl logs cilium-9rl9n cilium-agent | grep "Configuring Hubble event exporter"
time="2025-01-23T07:50:22.887800089Z" level=info msg="Configuring Hubble event exporter" flowLogName=egress-traffic-blocking options="{0x3c55a00 0x3c561c0 [] [] map[] [] [0x3b817c0] []}" subsys=hubble
The next step is testing whether everything works correctly by running a curl command against a blocked CIDR range from within a pod.
❯ kubectl logs cilium-9rl9n cilium-agent -f | grep '"flow":'
{"flow":{"time":"2025-01-23T07:59:23.501253423Z","uuid":"62b81705-ed79-4e5b-8f10-70a06970104d","verdict":"DROPPED","drop_reason":181,"ethernet":{"source":"5e:a0:3e:10:9e:71","destination":"82:a8:b0:95:ff:15"},"IP":{"source":"100.64.0.183","destination":"217.160.0.92","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":44240,"destination_port":443,"flags":{"SYN":true}}},"source":{"ID":3810,"identity":14406,"cluster_name":"aks-azst-2","namespace":"default","labels":[...],"pod_name":"bash"},"destination":{"identity":16777218,"labels":["cidrgroup:io.cilium.policy.cidrgroupname/egress-traffic-blocking","cidrgroup:policy=egress-traffic-blocking","reserved:world"]},"Type":"L3_L4","node_name":"aks-azst-2/aks-nodepool1-25275757-vmss000000","node_labels":[...],"event_type":{"type":1,"sub_type":181},"traffic_direction":"EGRESS","file":{"name":"bpf_lxc.c","line":1360},"drop_reason_desc":"POLICY_DENY","Summary":"TCP Flags: SYN"},"node_name":"aks-azst-2/aks-nodepool1-25275757-vmss000000","time":"2025-01-23T07:59:23.501253423Z"}
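To sanity-check a match expression against such a line without a cluster, a quick shell test helps; the embedded JSON below is a trimmed-down version of the flow shown above:

```shell
# Trimmed sample of an exported Hubble flow line
line='{"flow":{"verdict":"DROPPED","event_type":{"type":1,"sub_type":181},"traffic_direction":"EGRESS"}}'

# The same pattern a grep-based log filter could use to pick out denied egress flows
echo "$line" | grep -c '"sub_type":181'
```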
![Terminal view of curl and kubectl logs command with output]()
As seen above, the egress traffic denied by a network policy is logged correctly to stdout of the Cilium agent. Furthermore, Fluent Bit ingests the Hubble flow output into the Azure Data Explorer cluster.
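In Azure Data Explorer, the ingested flows can then be queried with KQL. The table name and the column holding the raw log line below are assumptions that depend on the Fluent Bit output configuration:

```
HubbleFlows
| extend Flow = todynamic(log)
| where Flow.flow.verdict == "DROPPED" and Flow.flow.traffic_direction == "EGRESS"
| project Timestamp = todatetime(Flow.flow.time),
          Source = Flow.flow.IP.source,
          Destination = Flow.flow.IP.destination,
          DropReason = Flow.flow.drop_reason_desc
```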
![Azure Data Explorer KQL query view]()
Summary
The Cilium Hubble Exporter is a powerful Cilium feature for writing Hubble flows as logs to a specified output, whether that is a file or directly stdout.
However, the configuration of the Cilium Hubble Exporter requires a deeper look into the flow API and Cilium’s source code and has a steeper learning curve than other Cilium features.
The example configuration can be found in my GitHub repository.
-> https://github.com/neumanndaniel/kubernetes/tree/master/cilium/hubble-exporter