by jhasensio

Category: Antrea

Antrea Observability Part 3: Dashboards for metrics and logs

In the previous post we learnt how to install some important tools to provide observability including Grafana, Prometheus, Loki and Fluent-Bit.

Importing precreated Dashboards and Datasources in Grafana

The task of creating powerful data insights from metrics and logs through awesome visual dashboards is by far the most complex and time-consuming task in the observability discipline. To avoid creating dashboards from scratch each time you deploy your observability stack, there is a method to import precreated dashboards very easily, as long as someone has already put the effort into creating the dashboard you need. A dashboard in Grafana is represented by a JSON object, which stores plenty of metadata including properties, size, placement, template variables, panel queries, etc.
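To give an idea of the structure, a heavily abridged dashboard object looks roughly like the sketch below. The field names follow Grafana's dashboard JSON model, but the values here are purely illustrative; the actual exported dashboards contain many more properties.

{
  "uid": "example-dashboard",
  "title": "Example Dashboard",
  "templating": { "list": [ { "name": "instance", "type": "query" } ] },
  "panels": [
    { "title": "CPU Usage", "type": "timeseries", "targets": [ { "expr": "<promql query>" } ] }
  ],
  "time": { "from": "now-6h", "to": "now" }
}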

If you remember from the previous post here, we installed Grafana with the datasource and dashboard sidecars enabled. As you can guess, these sidecars are just auxiliary containers whose task is to watch for configmaps with a particular label in the current namespace. As soon as a new matching configmap appears, the sidecars dynamically inject the extracted configuration and create a new datasource or dashboard.
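As a reminder, the relevant part of the Grafana Helm values used in the previous post looks roughly like the snippet below. The key names follow the official Grafana chart, but exact values may differ depending on the chart version you deployed, so treat this as a sketch.

sidecar:
  dashboards:
    enabled: true
    # import any configmap labeled grafana_dashboard in the release namespace
    label: grafana_dashboard
  datasources:
    enabled: true
    # import any configmap or secret labeled grafana_datasource
    label: grafana_datasource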

I have created my own set of dashboards and datasources that you can reuse if you wish. To do so just clone the git repository as shown here.

git clone https://github.com/jhasensio/antrea.git
Cloning into 'antrea'...
remote: Enumerating objects: 309, done.
remote: Counting objects: 100% (309/309), done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 309 (delta 135), reused 282 (delta 109), pack-reused 0
Receiving objects: 100% (309/309), 1.05 MiB | 6.67 MiB/s, done.
Resolving deltas: 100% (135/135), done.

First we will create the datasources, so navigate to the antrea/GRAFANA/datasources/ folder and list the contents.

ls -alg
total 16
drwxrwxr-x 2 jhasensio 4096 Feb 27 18:48 .
drwxrwxr-x 4 jhasensio 4096 Feb 27 18:37 ..
-rw-rw-r-- 1 jhasensio  571 Feb 27 18:37 datasources.yaml

In this case, the datasource is defined through a YAML file. The format is fully described in the official documentation here, and it is very easy to identify the different fields as you can see below.

cat datasources.yaml
datasources:
 datasources.yaml:
   apiVersion: 1
   datasources:
   # Loki Datasource using distributed deployment (otherwise use port 3100)
    - name: loki
      type: loki
      uid: loki
      url: http://loki-gateway.loki.svc.cluster.local:80
      access: proxy
      version: 1
      editable: false
   # Prometheus Datasource marked as default
    - name: prometheus
      type: prometheus
      uid: prometheus
      url: http://prometheus-service.monitoring.svc.cluster.local:8080
      access: proxy
      isDefault: true
      version: 1
      editable: false

Now all we need to do is generate a configmap or a secret from the YAML file using kubectl create with the --from-file keyword. Since the datasources might contain sensitive information, it is better to use a secret instead of a configmap to leverage the encoding capabilities of the secret object. The following command will create a generic secret named datasources-secret importing the data contained in the datasources.yaml file.

kubectl create secret generic -n grafana --from-file=datasources.yaml datasources-secret
secret/datasources-secret created

The next step is to label the secret object using the label and key we defined when we deployed Grafana.

kubectl label secret datasources-secret -n grafana grafana_datasource=1
secret/datasources-secret labeled

Now we have to do a similar procedure with the dashboards. From the path where you cloned the git repo before, navigate to antrea/GRAFANA/dashboards/ and list the contents. You should see the json files that define the dashboards we want to import.

ls -alg
total 176
drwxrwxr-x 2 jhasensio  4096 Feb 27 18:37 .
drwxrwxr-x 4 jhasensio  4096 Feb 27 18:37 ..
-rw-rw-r-- 1 jhasensio 25936 Feb 27 18:37 1-agent-process-metrics.json
-rw-rw-r-- 1 jhasensio 28824 Feb 27 18:37 2-agent-ovs-metrics-and-logs.json
-rw-rw-r-- 1 jhasensio 15384 Feb 27 18:37 3-agent-conntrack-and-proxy-metrics.json
-rw-rw-r-- 1 jhasensio 39998 Feb 27 18:37 4-agent-network-policy-metrics-and-logs.json
-rw-rw-r-- 1 jhasensio 17667 Feb 27 18:37 5-antrea-agent-logs.json
-rw-rw-r-- 1 jhasensio 31910 Feb 27 18:37 6-controller-metrics-and-logs.json

Now we will create the configmaps in the grafana namespace and label them. We can use a single-line command leveraging xargs to do it for all the files at once, as shown below. The command strips the .json extension from each filename and uses it as the prefix of the configmap name, followed by the -dashboard-cm suffix.

ls -1 *.json | sed 's/\.[^.]*$//' | xargs -n 1 -I {arg} kubectl create configmap -n grafana --from-file={arg}.json {arg}-dashboard-cm
configmap/1-agent-process-metrics-dashboard-cm created
configmap/2-agent-ovs-metrics-and-logs-dashboard-cm created
configmap/3-agent-conntrack-and-proxy-metrics-dashboard-cm created
configmap/4-agent-network-policy-metrics-and-logs-dashboard-cm created
configmap/5-antrea-agent-logs-dashboard-cm created
configmap/6-controller-metrics-and-logs-dashboard-cm created

And now tag the created configmap objects using the label and key we defined when we deployed Grafana, using the command below.

kubectl get configmaps -n grafana | grep dashboard-cm | awk '{print $1}' | xargs -n1 -I{arg} kubectl label cm {arg} -n grafana grafana_dashboard=1
configmap/1-agent-process-metrics-dashboard-cm labeled
configmap/2-agent-ovs-metrics-and-logs-dashboard-cm labeled
configmap/3-agent-conntrack-and-proxy-metrics-dashboard-cm labeled
configmap/4-agent-network-policy-metrics-and-logs-dashboard-cm labeled
configmap/5-antrea-agent-logs-dashboard-cm labeled
configmap/6-controller-metrics-and-logs-dashboard-cm labeled

Feel free to explore the created configmaps and secrets with the following command to verify they have the proper labels before moving to the Grafana UI.

kubectl get cm,secret -n grafana --show-labels | grep -E "NAME|dashboard-cm|datasource"
NAME                                                             DATA   AGE     LABELS
configmap/1-agent-process-metrics-dashboard-cm                   1      5m32s   grafana_dashboard=1
configmap/2-agent-ovs-metrics-and-logs-dashboard-cm              1      5m31s   grafana_dashboard=1
configmap/3-agent-conntrack-and-proxy-metrics-dashboard-cm       1      5m31s   grafana_dashboard=1
configmap/4-agent-network-policy-metrics-and-logs-dashboard-cm   1      5m31s   grafana_dashboard=1
configmap/5-antrea-agent-logs-dashboard-cm                       1      5m31s   grafana_dashboard=1
configmap/6-controller-metrics-and-logs-dashboard-cm             1      5m31s   grafana_dashboard=1
NAME                                   TYPE                 DATA   AGE   LABELS
secret/datasources-secret              Opaque               1      22m   grafana_datasource=1

If the sidecars did their job of watching for objects with the corresponding labels and importing their configuration, we should now be able to access the Grafana UI and see the imported datasources.

Grafana imported Datasources through labeled configmaps

If you navigate to the Dashboards section you should now see the six imported dashboards as well.

Grafana imported Dashboards through labeled configmaps

Now that we have imported the datasources and dashboards we can move into the Grafana UI to explore the created visualizations, but before jumping to the UI, let's dive a little bit into log processing to understand how the logs are formatted and ultimately pushed to the Grafana dashboards.

Parsing logs with FluentBit

As we saw in the previous post here, Fluent-Bit is a powerful log shipper that can be used to push any log produced by the Antrea components. Formatting, or parsing, is the process of extracting useful information from the raw logs to achieve better understanding and filtering of the overall information. This can be a tough task that requires some time, attention and an understanding of how regular expressions work. The following sections show how to obtain the desired log formatting configuration using a methodical approach.

1 Install Fluent-Bit on a Linux box

The first step is to install fluent-bit. The quickest way is to use the shell script provided on the official page.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
... skipped
Need to get 29.9 MB of archives.
After this operation, 12.8 MB of additional disk space will be used.
Get:1 https://packages.fluentbit.io/ubuntu/focal focal/main amd64 fluent-bit amd64 2.0.9 [29.9 MB]
Fetched 29.9 MB in 3s (8,626 kB/s)     
(Reading database ... 117559 files and directories currently installed.)
Preparing to unpack .../fluent-bit_2.0.9_amd64.deb ...
Unpacking fluent-bit (2.0.9) over (2.0.6) ...
Setting up fluent-bit (2.0.9) ...

Installation completed. Happy Logging!

Once the script completes, the binary will be placed in the /opt/fluent-bit/bin path. Launch fluent-bit to check that it is working. You should see an output like the one shown below.

/opt/fluent-bit/bin/fluent-bit
Fluent Bit v2.0.9
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/03/13 19:15:50] [ info] [fluent bit] version=2.0.9, commit=, pid=1943703
...

2 Obtain a sample of the target log

The next step is to get a sample log file from the system we want to process logs from. This sample log will be used as the input for fluent-bit. As an example, let's use the network policy log generated by the Antrea Agent when the enableLogging spec is set to true. Luckily, the log format is fully documented here and we will use that as the base for our parser. We can easily identify some fields within the log line such as timestamp, RuleName, Action, SourceIP, SourcePort and so on.

cat /var/log/antrea/networkpolicy/np.log
2023/02/14 11:23:54.727335 AntreaPolicyIngressRule AntreaClusterNetworkPolicy:acnp-acme-allow FrontEnd_Rule Allow 14000 10.34.6.14 46868 10.34.6.5 8082 TCP 60

3 Create the fluent-bit config file

Once we have a sample log file with the targeted log entries, create a fluent-bit.conf file that will be used to process our log file. In the INPUT section we are using the tail input plugin, which will read the file specified in the path keyword below. After the tail module reads the file, the fluent-bit pipeline will send the data to the antreanetworkpolicy custom parser that we will create in the next section. Last, the OUTPUT section will send the result to the standard output (console).

vi fluent-bit.conf
[INPUT]
    name tail
    path /home/jhasensio/np.log
    tag antreanetworkpolicy
    read_from_head true
    parser antreanetworkpolicy
    path_key on

[OUTPUT]
    Name  stdout
    Match *

4 Create the parser

The parser is by far the trickiest part since you need to use regular expressions to create matching patterns in order to extract the values for the desired fields according to their position within the log entry. You can use rubular, a great website to play with regular expressions that also provides an option to create a permalink to share the result of the parsing. I have created a permalink here, also referenced as a comment in the parser file below, that can be used to understand and play with the regex using a sample log line as input. The basic idea is to choose a name for each intended field and use a regular expression to match the position in the log that holds the value for that field. Note we end up with a pretty long regex to extract all the fields.

vi parser.conf
[PARSER]
    Name antreanetworkpolicy
    Format regex
    # https://rubular.com/r/gCTJfLIkeioOgO
    Regex ^(?<date>[^ ]+) (?<time>[^ ]+) (?<ovsTableName>[^ ]+) (?<antreaNativePolicyReference>[^ ]+) (?<rulename>[^ ]+) (?<action>[^( ]+) (?<openflowpriority>[^ ]+) (?<sourceip>[^ ]+) (?<sourceport>[^ ]+) (?<destinationip>[^ ]+) (?<destinationport>[^ ]+) (?<protocol>[^ ]+) (?<packetLength>.*)$
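To make the expression easier to digest, this is how each named capture group lines up against the sample log entry shown earlier (abridged to the first few fields):

# 2023/02/14 11:23:54.727335 AntreaPolicyIngressRule AntreaClusterNetworkPolicy:acnp-acme-allow FrontEnd_Rule Allow 14000 10.34.6.14 46868 ...
# (?<date>[^ ]+)                        -> 2023/02/14
# (?<time>[^ ]+)                        -> 11:23:54.727335
# (?<ovsTableName>[^ ]+)                -> AntreaPolicyIngressRule
# (?<antreaNativePolicyReference>[^ ]+) -> AntreaClusterNetworkPolicy:acnp-acme-allow
# (?<rulename>[^ ]+)                    -> FrontEnd_Rule
# (?<action>[^( ]+)                     -> Allow
# (?<openflowpriority>[^ ]+)            -> 14000
# (?<sourceip>[^ ]+)                    -> 10.34.6.14
# ... and so on, until (?<packetLength>.*) captures the trailing 60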

5 Test the filter and parser

Now that the config file and the parser are prepared, it is time to test the parser to check that it is working as expected. Sometimes you need to iterate over the parser configuration until you get the desired outcome. Use the following command to test fluent-bit.

/opt/fluent-bit/bin/fluent-bit -c fluent-bit.conf --parser parser.conf -o stdout -v
Fluent Bit v2.0.9
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/03/13 19:26:01] [ info] Configuration:
[2023/03/13 19:26:01] [ info]  flush time     | 1.000000 seconds
[2023/03/13 19:26:01] [ info]  grace          | 5 seconds
... skipp
[2023/03/13 19:26:01] [ info] [output:stdout:stdout.0] worker #0 started
[0] antreanetworkpolicy: [1678731961.923762559, {"on"=>"/home/jhasensio/np.log", "date"=>"2023/02/14", "time"=>"11:23:54.727335", "ovsTableName"=>"AntreaPolicyIngressRule", "antreaNativePolicyReference"=>"AntreaClusterNetworkPolicy:acnp-acme-allow", "rulename"=>"FrontEnd_Rule", "action"=>"Allow", "openflowpriority"=>"14000", "sourceip"=>"10.34.6.14", "sourceport"=>"46868", "destinationip"=>"10.34.6.5", "destinationport"=>"8082", "protocol"=>"TCP", "packetLength"=>"60"}]
[2023/03/13 19:26:02] [debug] [task] created task=0x7f6835e4bbc0 id=0 OK
[2023/03/13 19:26:02] [debug] [output:stdout:stdout.1] task_id=0 assigned to thread #0
[2023/03/13 19:26:02] [debug] [out flush] cb_destroy coro_id=0
[2023/03/13 19:26:02] [debug] [task] destroy task=0x7f6835e4bbc0 (task_id=0)

As you can see in the highlighted section, the log entry has been successfully processed and a JSON document has been generated and sent to the console, mapping the defined field names to the corresponding values matched in the log by the regex capturing process.

Repeat this procedure for all the different log formats that you plan to process until you get the desired results. Once done, it is time to push the configuration into Kubernetes and run Fluent-Bit as a DaemonSet. The following values.yaml can be used to inject the fluent-bit configuration via Helm to process the logs generated by the Antrea controller, the agents, Open vSwitch and the NetworkPolicy logging. Note that, apart from the regex, it is also important to tag the logs with meaningful labels in order to get the most out of the dashboards and achieve good filtering capabilities in Grafana.

vi values.yaml
# kind -- DaemonSet or Deployment
kind: DaemonSet

env: 
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/antrea-agent*.log
        Tag antreaagent
        parser antrea
        Mem_Buf_Limit 5MB
    
    [INPUT]
        Name tail
        Path /var/log/containers/antrea-controller*.log
        Tag antreacontroller
        parser antrea
        Mem_Buf_Limit 5MB

    [INPUT]
        Name tail
        Path /var/log/antrea/networkpolicy/np*.log
        Tag antreanetworkpolicy
        parser antreanetworkpolicy
        Mem_Buf_Limit 5MB

    [INPUT]
        Name tail
        Path /var/log/antrea/openvswitch/ovs*.log
        Tag ovs
        parser ovs
        Mem_Buf_Limit 5MB
  
  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match antrea
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name record_modifier
        Match *
        Record podname ${HOSTNAME}
        Record nodename ${NODE_NAME}

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name loki
        Match antreaagent
        Host loki-gateway.loki.svc
        Port 80
        Labels job=fluentbit-antrea, agent_log_category=$category
        Label_keys $log_level, $nodename
    
    [OUTPUT]
        Name loki
        Match antreacontroller
        Host loki-gateway.loki.svc
        Port 80
        Labels job=fluentbit-antrea-controller, controller_log_category=$category
        Label_keys $log_level, $nodename
  
    [OUTPUT]
        Name loki
        Match antreanetworkpolicy
        Host loki-gateway.loki.svc
        Port 80
        Labels job=fluentbit-antrea-netpolicy
        Label_keys $nodename, $action
  
    [OUTPUT]
        Name loki
        Match ovs
        Host loki-gateway.loki.svc
        Port 80
        Labels job=fluentbit-antrea-ovs
        Label_keys $nodename, $ovs_log_level, $ovs_category

    ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name antrea
        Format regex
        # https://rubular.com/r/04kWAJU1E3e20U
        Regex ^(?<timestamp>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) ((?<log_level>[^ ]?)(?<code>\d\d\d\d)) (?<time>[^ ]+) (.*) ((?<category>.*)?(.go.*\])) (?<message>.*)$
   
    [PARSER]    
        Name antreanetworkpolicy
        Format regex
        # https://rubular.com/r/gCTJfLIkeioOgO
        Regex ^(?<date>[^ ]+) (?<time>[^ ]+) (?<ovsTableName>[^ ]+) (?<antreaNativePolicyReference>[^ ]+) (?<ruleName>[^ ]+) (?<action>[^( ]+) (?<openflowPriority>[^ ]+) (?<sourceIP>[^ ]+) (?<sourcePort>[^ ]+) (?<destinationIP>[^ ]+) (?<destinationPort>[^ ]+) (?<protocol>[^ ]+) (?<packetLength>.*)$

    [PARSER]    
        Name ovs
        Format regex
        Regex ^((?<date>[^ ].*)\|(?<log_code>.*)\|(?<ovs_category>.*)\|(?<ovs_log_level>.*))\|(?<message>\w+\s.*$)

Find a copy of values.yaml in my github here. Once your fluent-bit configuration is done, you can upgrade the fluent-bit release through helm using the following command to apply the new configuration.

helm upgrade --values values.yaml fluent-bit fluent/fluent-bit -n fluent-bit
Release "fluent-bit" has been upgraded. Happy Helming!
NAME: fluent-bit
LAST DEPLOYED: Tue Mar 14 17:27:23 2023
NAMESPACE: fluent-bit
STATUS: deployed
REVISION: 32
NOTES:
Get Fluent Bit build information by running these commands:

export POD_NAME=$(kubectl get pods --namespace fluent-bit -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace fluent-bit port-forward $POD_NAME 2020:2020
curl http://127.0.0.1:2020

Exploring Dashboards

Now that we understand how the log shipping process works, it's time to jump into Grafana to navigate through the different precreated dashboards, which will represent not only Prometheus metrics but will also be helpful to explore logs and extract some metrics from them.

All the dashboards use variables that allow you to create more interactive and dynamic dashboards. Instead of hard-coding things like agent-name, instance or log-level, you can use variables in their place that are displayed as dropdown lists at the top of the dashboard. These dropdowns make it easy to filter the data being displayed in your dashboard.

Dashboard 1: Agent Process Metrics

The first dashboard provides a general view of the observed Kubernetes cluster. This dashboard can be used to understand some important metrics over time such as CPU and memory usage, Kubernetes API usage, file descriptors and so on. I have tried to extract the most significant ones out of the full set of metrics documented here, but feel free to add any extra metrics you are interested in.

Dashboard 1 Agent Process Metrics
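To give an idea of what sits behind these panels, a CPU usage panel is typically driven by a PromQL query similar to the hedged sketch below. process_cpu_seconds_total is a standard Go process metric exposed by the antrea-agent, but the exact label names (container, instance) depend on how your Prometheus scrape configuration relabels the targets, and $instance is the dashboard template variable mentioned above.

# CPU usage (in % of one core) of the antrea-agent process, filtered by the instance variable
rate(process_cpu_seconds_total{container="antrea-agent", instance=~"$instance"}[5m]) * 100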

Just to see the graphs in action reacting to cluster conditions, let's play with CPU usage as an example. An easy way to impact CPU usage is to create some activity in the cluster. Let's create a new deployment for this purpose with the following command.

kubectl create deployment www --image=httpd --replicas=1
deployment.apps/www created

Now let's try to stress the antrea-agent CPU a little bit by calling the Kubernetes API quite aggressively. To do this we will use a loop that scales the deployment randomly to a number between 1 and 20 replicas, waiting a random time between 1 and 500 milliseconds between API calls.

while true; do kubectl scale deployment www --replicas=$[($RANDOM % 20) + 1]; sleep .$[ ( $RANDOM % 500 ) + 1 ]s ; done
deployment.apps/www scaled
deployment.apps/www scaled
deployment.apps/www scaled
deployment.apps/www scaled
... Stop with CTRL+C

As you can see, as soon as we start to inject API calls, the CPU usage of the antrea-agent running in the nodes is noticeably impacted, since the agent needs to reprogram OVS and plumb the new replica pods into the network.

Antrea Agent CPU Usage Panel

On the other hand, the Kubernetes API process is heavily impacted because it is in charge of processing all the API calls we are sending.

Kubernetes API CPU Usage Panel

Another interesting panel, shown below, is titled Logs per Second. This panel uses all the logs shipped from the nodes via fluent-bit and received by Loki to calculate the logs-per-second rate. As soon as we scale in/out there is a lot of activity in the agents that generates a large amount of logs. This can be useful as an indication of current cluster activity.

Antrea Logs per Second panel
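For reference, a Logs per Second panel like this one can be built with a LogQL rate query along the lines of the sketch below, using the job and nodename labels that our fluent-bit configuration attaches to the Antrea agent streams (the exact query used in the dashboard may differ).

# logs-per-second rate across all antrea-agent streams, grouped by node
sum by (nodename) (rate({job="fluentbit-antrea"}[1m]))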

Explore other panels and play with variables such as instance to filter out the displayed information.

Dashboard 2: Agent OpenvSwitch (OVS) Metrics and Logs

The second dashboard will help us to understand what is happening behind the scenes in relation to the Open vSwitch component. Open vSwitch is an open source, OpenFlow-capable virtual switch used to build the SDN solution that provides connectivity between the nodes and pods in this Kubernetes scenario.

Open vSwitch uses a pipeline model that is explained on the antrea.io website here and is represented in the following diagram. (Note: it seems that the diagram is a little bit outdated since the actual table IDs do not correspond to the IDs represented in the official diagram.)

Antrea OVS Pipeline (Note:Table IDs are outdated)

The tables are used to store the flow information at different stages of the packet's transit. As an example, let's take the SpoofGuardTable (which corresponds to table ID 1 in the current implementation) and is responsible for preventing IP and ARP spoofing from local Pods.

We can easily explore its content. For example, display the pods running on a given node (k8s-worker-01).

kubectl get pod -A -o wide | grep -E "NAME|worker-01"
NAMESPACE           NAME                                           READY   STATUS    RESTARTS        AGE     IP            NODE                  NOMINATED NODE   READINESS GATES
acme-fitness        payment-5ffb9c8d65-g8qb2                       1/1     Running   0               2d23h   10.34.6.179   k8s-worker-01         <none>           <none>
fluent-bit          fluent-bit-9n6nq                               1/1     Running   2 (4d13h ago)   6d8h    10.34.6.140   k8s-worker-01         <none>           <none>
kube-system         antrea-agent-hbc86                             2/2     Running   0               47h     10.113.2.15   k8s-worker-01         <none>           <none>
kube-system         coredns-6d4b75cb6d-mmscv                       1/1     Running   2 (4d13h ago)   48d     10.34.6.9     k8s-worker-01         <none>           <none>
kube-system         coredns-6d4b75cb6d-sqg4z                       1/1     Running   1 (20d ago)     48d     10.34.6.7     k8s-worker-01         <none>           <none>
kube-system         kube-proxy-fvqbr                               1/1     Running   1 (20d ago)     48d     10.113.2.15   k8s-worker-01         <none>           <none>
load-gen            locust-master-67bdb5dbd4-ngtw7                 1/1     Running   0               2d22h   10.34.6.184   k8s-worker-01         <none>           <none>
load-gen            locust-worker-6c5f87b5c8-pgzng                 1/1     Running   0               2d22h   10.34.6.185   k8s-worker-01         <none>           <none>
logs                logs-pool-0-4                                  1/1     Running   0               6d18h   10.34.6.20    k8s-worker-01         <none>           <none>
loki                loki-canary-7kf7l                              1/1     Running   0               15h     10.34.6.213   k8s-worker-01         <none>           <none>
loki                loki-logs-drhbm                                2/2     Running   0               6d18h   10.34.6.22    k8s-worker-01         <none>           <none>
loki                loki-read-2                                    1/1     Running   0               6d9h    10.34.6.139   k8s-worker-01         <none>           <none>
loki                loki-write-1                                   1/1     Running   0               15h     10.34.6.212   k8s-worker-01         <none>           <none>
minio-operator      minio-operator-868fc4755d-q42nc                1/1     Running   1 (20d ago)     35d     10.34.6.3     k8s-worker-01         <none>           <none>
vmware-system-csi   vsphere-csi-node-42d8j                         3/3     Running   7 (18d ago)     35d     10.113.2.15   k8s-worker-01         <none>           <none>

Take the Antrea Agent pod name and dump the content of the SpoofGuardTable of the OVS container in that particular Node by issuing the following command.

kubectl exec -n kube-system antrea-agent-hbc86 -c antrea-ovs -- ovs-ofctl dump-flows br-int table=1 --no-stats
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=2,arp_spa=10.34.6.1,arp_sha=12:2a:4a:31:e2:43 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=10,arp_spa=10.34.6.9,arp_sha=16:6d:5b:d2:e6:86 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=183,arp_spa=10.34.6.185,arp_sha=d6:c2:ae:76:9d:25 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=140,arp_spa=10.34.6.140,arp_sha=4e:b4:15:50:90:3e actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=139,arp_spa=10.34.6.139,arp_sha=c2:07:8f:35:8b:17 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=4,arp_spa=10.34.6.3,arp_sha=7a:d9:fd:e7:2d:af actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=182,arp_spa=10.34.6.184,arp_sha=7a:5e:26:ef:76:07 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=21,arp_spa=10.34.6.20,arp_sha=fa:e6:b9:f4:b1:e7 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=23,arp_spa=10.34.6.22,arp_sha=3a:17:f0:79:49:24 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=177,arp_spa=10.34.6.179,arp_sha=92:76:e0:a4:f4:57 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=8,arp_spa=10.34.6.7,arp_sha=8e:cc:4d:06:80:ed actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=210,arp_spa=10.34.6.212,arp_sha=7e:34:08:3d:86:c5 actions=resubmit(,2)
 cookie=0x7010000000000, table=1, priority=200,arp,in_port=211,arp_spa=10.34.6.213,arp_sha=9a:a8:84:d5:74:70 actions=resubmit(,2)
 cookie=0x7000000000000, table=1, priority=0 actions=resubmit(,2)

This SpoofGuard table contains the port of the pod, the IP address and the associated MAC, and is automatically populated upon pod creation. The OVS metrics are labeled by table ID so it's easy to identify the counters for any given identifier. Using the filter variable at the top of the screen, select ovs_table 1. As you can see from the graph, the value for each table remains steady, which means there has not been recent pod creation/deletion activity in the cluster in the interval of observation. You can also see how the value is almost the same in all the worker nodes, which means the pods are evenly distributed across the workers. As expected, the control plane node has fewer entries.
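Behind this panel sits a query over the Antrea agent flow count metric, roughly like the example below. antrea_agent_ovs_flow_count is the documented per-table flow count gauge, but the label used to select the table (shown here as table_id) and the other label names may differ in your Antrea/Prometheus setup, so treat this as a sketch.

# flows currently programmed in the SpoofGuard table (ID 1) on the selected nodes
antrea_agent_ovs_flow_count{table_id="1", instance=~"$instance"}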

Again, as we want to see the dashboards in action, we can easily generate some counter variations by scaling a deployment in/out. Using the www deployment we created before, scale now to 100 replicas. The pods should be scheduled among the worker nodes and Table 1 will be populated with the corresponding entries. Let's give it a try.

kubectl scale deployment www --replicas=100
deployment.apps/www scaled

The counter for this particular table now increases rapidly and the visualization graph now shows the updated values.

Agent OVS Flow Count SpoofGuard Table ID1

Similarly, there is another OVS table, EndpointDNAT (Table 11), that programs DNAT rules to reach endpoints behind a ClusterIP service. Using the variable filters at the top of the dashboard, select ovs_table 1 and 11 in the same graph and select only the instance k8s-worker-01. Note how EndpointDNAT (11) hasn't changed at all during the scale-out process and has remained steady at the value 101 in this particular case.

Agent OVS Flow Count SpoofGuard Table ID1 + Table ID11 at Worker 01

If you now expose the deployment under test, all the pod replicas will be used as endpoints, which means the overall endpoint count should be incremented by exactly one hundred.

kubectl expose deployment www --port=28888
service/www exposed

Now the www service should have 100 endpoints as shown below and, correspondingly, the counter of Table 11, which contains the information to reach the endpoints of the cluster, will also be incremented by the same amount.

kubectl get endpoints www
NAME   ENDPOINTS                                                         AGE
www    10.34.1.59:28888,10.34.1.60:28888,10.34.1.61:28888 + 97 more...   47s

You can check in any of the antrea-agent pods the entries that the creation of the service has programmed, using the following ovs-ofctl command in the antrea-ovs container.

kubectl exec -n kube-system antrea-agent-hbc86 -c antrea-ovs -- ovs-ofctl dump-flows br-int table=11 --no-stats | grep 28888
 cookie=0x7030000000000, table=11, priority=200,tcp,reg3=0xa22049a,reg4=0x270d8/0x7ffff actions=ct(commit,table=12,zone=65520,nat(dst=10.34.4.154:28888),exec(load:0x1->NXM_NX_CT_MARK[4],move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))
 cookie=0x7030000000000, table=11, priority=200,tcp,reg3=0xa2202da,reg4=0x270d8/0x7ffff actions=ct(commit,table=12,zone=65520,nat(dst=10.34.2.218:28888),exec(load:0x1->NXM_NX_CT_MARK[4],move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))
 cookie=0x7030000000000, table=11, priority=200,tcp,reg3=0xa22013c,reg4=0x270d8/0x7ffff actions=ct(commit,table=12,zone=65520,nat(dst=10.34.1.60:28888),exec(load:0x1->NXM_NX_CT_MARK[4],move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))

... skipped

And consequently the graph shows the increment of 100 endpoints that corresponds to the exposure of the www ClusterIP service backed by the 100-replica deployment we created.

Table1 and Table11 in Worker01

Note also how this particular Table 11, in charge of endpoints, is synced across all the nodes because every pod in the cluster should be able to reach any of the endpoints through the ClusterIP service. This can be verified by changing to All instances and displaying only ovs_table 11. All the nodes in the cluster show the same value for this particular table.

These are just two examples of tables that are automatically programmed and might be useful to visualize the entries of the OVS switch. Feel free to explore any other panels playing with filters within this dashboard.

As a bonus, you can see at the bottom of the dashboard there is a particular panel that shows the OVS logs. Having all the logs from all the workers in a single place is a great advantage for troubleshooting.

The logs can be filtered using the instance, ovs_log_level and ovs_category variables at the top of the dashboard. The filters can be very useful to focus only on the relevant information we want to display. Note that, by default, the log level of the OVS container is set to INFO; however, you can increase the log level for debugging purposes to get higher verbosity. (Note: as with any other logging subsystem, increasing the log level to DEBUG can adversely impact performance, so be careful.) To check the current log settings, use the following command.

kubectl exec -n kube-system antrea-agent-hbc86 -c antrea-ovs -- ovs-appctl vlog/list
                 console    syslog    file
                 -------    ------    ------
backtrace          OFF        ERR       INFO
bfd                OFF        ERR       INFO
bond               OFF        ERR       INFO
bridge             OFF        ERR       INFO
bundle             OFF        ERR       INFO
bundles            OFF        ERR       INFO
cfm                OFF        ERR       INFO
collectors         OFF        ERR       INFO
command_line       OFF        ERR       INFO
connmgr            OFF        ERR       INFO
conntrack          OFF        ERR        DBG
conntrack_tp       OFF        ERR       INFO
coverage           OFF        ERR       INFO
ct_dpif            OFF        ERR       INFO
daemon             OFF        ERR       INFO
... <skipped>
vlog               OFF        ERR       INFO
vswitchd           OFF        ERR       INFO
xenserver          OFF        ERR       INFO

If you want to change the log level of any of the categories, use the following ovs-appctl command. The syntax of the command specifies the subsystem, the log target (console, file or syslog) and the log level, where dbg is the maximum level and info is the default for file logging. As you can guess, dbg should be enabled only during a troubleshooting session and disabled afterwards to avoid performance issues. This command applies only to the specified agent. To restore the configuration of all the subsystems, just use the keyword ANY instead of specifying a subsystem, as shown below.

kubectl exec -n kube-system antrea-agent-hbc86 -c antrea-ovs -- ovs-appctl vlog/set dpif:file:info
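And, as mentioned, to restore all the subsystems of this particular agent back to the default info level for file logging at once, something like this should do the trick:

kubectl exec -n kube-system antrea-agent-hbc86 -c antrea-ovs -- ovs-appctl vlog/set ANY:file:info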

If you want to apply the change to all the agents across the cluster, use the following command, which adds recursion through the xargs command.

kubectl get pods -n kube-system | grep antrea-agent | awk '{print $1}' | xargs -n1 -I{arg} kubectl exec -n kube-system {arg} -c antrea-ovs -- ovs-appctl vlog/set dpif:file:dbg

Go back to the Grafana dashboard and explore some of the entries. As an example, the creation of a new pod produces the following log output. As mentioned earlier in this post, the original message has been parsed by Fluent-Bit to extract the relevant fields and is formatted by Grafana for better human readability.

Dashboard 3: Agent Conntrack and Proxy Metrics

The next dashboard is the conntrack one and is helpful to identify the current sessions and therefore the actual traffic activity in the cluster. Conntrack is part of every Linux network stack and allows the kernel to track all the network connections (a.k.a. flows) in order to identify all the packets that belong to a particular flow and give them consistent treatment. This is useful for allowing or denying packets as part of a stateful firewall subsystem and also for some network translation mechanisms that require connection tracking to work properly.

The conntrack table has a maximum size and it is important to monitor that the limit is not surpassed to avoid issues. Let's use the traffic generator we installed in the previous post to see how the metrics react to new traffic coming into the system. Access dashboard number 3 in the Grafana UI. The Conntrack section shows the number of entries (namely connections or flows) in the conntrack table. The second panel calculates the percentage of flows in use against the Prometheus antrea_agent_conntrack_max_connection_track metric and is useful to monitor how close we are to the node limit in terms of tracked connections.

Conntrack Connection Count
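A hedged sketch of the kind of query behind the percentage panel is shown below; it assumes the antrea_agent_conntrack_total_connection_count metric name for the current number of tracked connections, so double-check the metric names exposed by your Antrea version.

# percentage of the conntrack table in use on each node
100 * antrea_agent_conntrack_total_connection_count / antrea_agent_conntrack_max_connection_track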

To play around with this metric, let's inject some traffic using the Load Generator tool we installed in the previous post here. Open the Load Generator and inject some traffic by clicking on New Test, selecting 5000 users with a spawn rate of 100 users/second and http://frontend.acme-fitness.svc as the target endpoint to send our traffic to. Next, click the Start swarming button.

Locust Load-Generator

After some time we can see how we have reached a fair number of requests per second and there are no failures.

Locust Load Generator Statistics

Narrow the display interval at the top right corner to show the last 5 or 15 minutes and, after a while, you can see how the traffic injection produces a change in the number of connections.

Conntrack entries by node

The conntrack percentage also follows the same pattern and shows a noticeable increase.

Conntrack entries percentage over Conntrack Max

The percentage is still far from the limit. Let's push a little bit more by simulating an L4 DDoS attack in the next section.

Detecting an L4 DDoS attack

To push the conntrack stack to the limit, let's attempt an L4 DDoS attack using the popular hping3 tool. This tool can be used to generate a large amount of traffic against a target (namely, victim) system. We will run it in Kubernetes as a deployment so we can scale in/out to inject even more traffic and see if we can reach the limits of the system. The following manifest will be used to spin up the ddos-attacker pods. Modify the variables to match the target you want to try. The ddos-attacker.yaml file below will instruct Kubernetes to spin up a single-replica deployment that will inject traffic into our acme-fitness frontend application listening on port 80. The hping3 command will flood as many TCP SYN segments as it can send without waiting for confirmation.

vi ddos-attacker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ddos-attacker
  name: ddos-attacker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ddos-attacker
  template:
    metadata:
      labels:
        app: ddos-attacker
    spec:
      containers:
      - image: sflow/hping3
        name: hping3
        env:
        # Set TARGET_HOST/TARGET_PORT variables to your intended target
        # For Pod use  pod-ip-address.namespace.pod.cluster-domain.local 
        #  Example: 10-34-4-65.acme-fitness.pod.cluster.local
        # For svc use svc_name.namespace.svc.cluster.local
        #  Example : frontend.acme-fitness.svc.cluster.local
        # Flood Mode, use with caution! 
        - name: TARGET_HOST
          value: "frontend.acme-fitness.svc.cluster.local"
          #value: "10-34-1-13.acme-fitness.pod.cluster.local"
        - name: TARGET_PORT
          # if pod use port 3000, if svc, then 80
          value: "80"
        # -S --flood initiates DDoS SYN Flooding attack
        command: ["/usr/sbin/hping3"]
        args: [$(TARGET_HOST), "-p", $(TARGET_PORT), "-S", "--flood"]

Once the manifest is applied, the deployment will spin up a single replica. Hovering over the graph in the top panel you can see the number of connections generated by the hping3 execution. There are two workers that show a count of around 70K connections. hping3 will be able to generate up to around 65K connections per source due to the TCP source port limit. As you can see below, the panel at the top shows an increase in the connections in two of the workers.

Antrea Conntrack Connection count during single pod ddos attack

As you can guess, these two affected workers must have something to do with the attacker and the victim pods. You can easily verify where both of them are currently scheduled using the command below: in fact, the frontend pod (victim) is running on the k8s-worker-02 node whereas the ddos-attacker pod is running on k8s-worker-06, which makes sense according to the observed behaviour.

kubectl get pods -A -o wide | grep -E "NAME|frontend|ddos"
NAMESPACE              NAME                                           READY   STATUS    RESTARTS       AGE     IP            NODE                  NOMINATED NODE   READINESS GATES
acme-fitness           frontend-6cd56445-bv5vf                        1/1     Running   0              55m     10.34.2.180   k8s-worker-02         <none>           <none>
default                ddos-attacker-6f4688cffd-9rxdf                 1/1     Running   0              2m36s   10.34.1.175   k8s-worker-06         <none>           <none>

Additionally, the panel below shows the conntrack connections percentage. Hovering the mouse over the graph you can display a table with the actual values for all the measurements represented in the graph. The two workers involved in the attack are reaching a usage of 22% of the total available connections.

Antrea Conntrack Connection Percentage during single pod ddos attack

Now scale the ddos deployment to spin up more replicas. Adjust the replica count according to your cluster size. In my case I am using a 6-worker cluster, so I will scale out to 5 replicas, which will be more than enough.

kubectl scale deployment ddos-attacker --replicas=5
deployment.apps/ddos-attacker scaled

The graph now shows how the usage reaches 100% on k8s-worker-02, which is the node where the victim is running. The amount of traffic sent to the pod is likely to cause a denial of service since the conntrack table will have trouble accommodating new incoming connections.

Antrea Conntrack Connection Percentage reaching 100% after multiple pod ddos attack

The number of connections (panel at the top of the dashboard) shows an overall count of over 500K connections at node k8s-worker-02, which exceeds the maximum configured conntrack table size.

Antrea Conntrack Connection Count after multiple pod ddos attack

As you can imagine, the DDoS attack is also impacting the CPU usage of each antrea-agent process. If you go back to dashboard 1 you can check how CPU usage is heavily impacted, with values peaking at 150%, and how the k8s-worker-02 node is especially struggling during the attack.

CPU Agent Usage during ddos attack

Now stop the attack and widen the display interval to a 6-hour range. If you look retrospectively, it is easy to identify the occurrence of the DDoS attack attempts in the timeline.

DDoS Attack identification

Dashboard 4: Network Policy and Logs

The next dashboard is related to network policies and is used to understand what is going on in this area by means of the related metrics and logs. If there are no policies at all in your cluster you will see a boring dashboard like the one below showing the idle state.

Network policies and logs

As documented on the antrea.io website, Antrea supports not only standard K8s NetworkPolicies to secure ingress/egress traffic for Pods but also adds some extra CRDs to provide the administrator with more control over security within the cluster. Using the acme-fitness application we installed in the previous post, we will play with some policies to see how they are reflected in our Grafana dashboard. Let's first create an Allow ClusterNetworkPolicy to control traffic going from the frontend to the catalog microservice, both part of the acme-fitness application under test. See the comments in the YAML for further explanation.

vi acme-fw-policy_Allow.yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-acme-allow
spec:
  # The policy with the highest precedence (the smallest numeric priority value) is enforced first. 
  priority: 10
 # Application Tier
  tier: application
 # Enforced at pods matching service=catalog at namespace acme-fitness
  appliedTo:
    - podSelector:
        matchLabels:
          service: catalog
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: acme-fitness
 # Application Direction. Traffic coming from frontend pod at acme-fitness ns
  ingress:
    - action: Allow
      from:
        - podSelector:
            matchLabels:
              service: frontend
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: acme-fitness
 # Application listening at TCP 8082
      ports:
        - protocol: TCP
          port: 8082
      name: Allow-From-FrontEnd
 # Rule hits will be logged in the worker 
      enableLogging: true

Once you apply the above YAML you can verify the new Antrea ClusterNetworkPolicy (acnp for short).

kubectl get acnp
NAME              TIER          PRIORITY   DESIRED NODES   CURRENT NODES   AGE
acnp-acme-allow   application   10         1               1               22s

Return to the dashboard and you will see some changes showing up.

Note the policy is calculated and enforced only on the workers where pods matching the appliedTo selector actually exist. In this case the catalog pod lives on k8s-worker-01, as reflected in the dashboard and verified by the output shown below.

kubectl get pod -n acme-fitness -o wide | grep -E "NAME|catalog"
NAME                              READY   STATUS    RESTARTS      AGE   IP            NODE            NOMINATED NODE   READINESS GATES
catalog-958b9dc7c-brljn           1/1     Running   2 (42m ago)   60m   10.34.6.182   k8s-worker-01   <none>           <none>

If you scale the catalog deployment, the pods will be scheduled across more nodes in the cluster and the network policy will be enforced on multiple nodes.

kubectl scale deployment -n acme-fitness catalog --replicas=6
deployment.apps/catalog scaled

Note how, after scaling, the controller recalculates the network policy and pushes it to 6 of the nodes.

kubectl get acnp
NAME              TIER          PRIORITY   DESIRED NODES   CURRENT NODES   AGE
acnp-acme-allow   application   10         6               6               6m46s

This can be seen in the dashboard as well. Now use the locust tool mentioned earlier to inject some traffic to see network policy hits.

Network Policy Dashboard showing Ingress Network Policies and rule hits

Now change the policy to deny the traffic with a Reject action, using the following manifest.

vi acme-fw-policy_Reject.yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-acme-reject
spec:
  # The policy with the highest precedence (the smallest numeric priority value) is enforced first. 
  priority: 10
 # Application Tier
  tier: application
 # Enforced at pods matching service=catalog at namespace acme-fitness
  appliedTo:
    - podSelector:
        matchLabels:
          service: catalog
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: acme-fitness
 # Application Direction. Traffic coming from frontend pod at acme-fitness ns
  ingress:
    - action: Reject
      from:
        - podSelector:
            matchLabels:
              service: frontend
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: acme-fitness
 # Application listening at TCP 8082
      ports:
        - protocol: TCP
          port: 8082
      name: Reject-from-Frontend
 # Rule hits will be logged in the worker 
      enableLogging: true

The Reject action will take precedence over the existing ACNP with the Allow action and the hit counter begins to increment progressively.

Feel free to try the egress direction and the Drop action to see how the values are populated. A little bit further down, you can find a panel with the formatted log entries that are being pushed to Loki by fluent-bit. Expand any entry to see all the fields and values.

Network Policy Log sample

You can use the filters at the top of the dashboard to filter by source or destination IP. You can enter the whole IP or just some octets, since the matching is done using a regex that looks for the pattern in any part of the IPv4 address. As an example, the following filter will display the logs and log-derived metrics where the source IP contains .180 and the destination IP contains .85.

Once the data is entered all the logs are filtered to display only matching entries. This affects not only the Log Analysis panel but also the rest of the panels derived from the logs, and can be useful to drill down and focus only on the desired conversations within the cluster.

Log Analysis entries after filtering by SRC/DST
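If you prefer to run the equivalent query by hand in the Explore view, a LogQL expression along these lines reproduces the same behaviour. It assumes the network policy records are shipped as JSON lines (Fluent Bit's default line_format for the loki output), so the parsed fields can be extracted with the json stage and filtered with regex matchers.

{job="fluentbit-antrea-netpolicy"} | json | sourceIP=~`.*\.180.*` and destinationIP=~`.*\.85.*`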

Last but not least, there are two more sections that analyze the received logs and extract some interesting metrics such as Top Destination IPs, Top Source IPs, Top Conversations, Top Hits by rule and so on. These analytics can be useful to identify the top talkers in the cluster as well as traffic that might be accidentally denied. Remember these analytics are also affected by the src/dst filters mentioned above.

Allowed Traffic Analytics
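Panels such as Top Destination IPs are log-derived metrics; a representative (not necessarily identical) LogQL query would be something like the sketch below.

# top 10 destination IPs by number of network policy log entries in the last 5 minutes
topk(10, sum by (destinationIP) (count_over_time({job="fluentbit-antrea-netpolicy"} | json [5m])))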

Dashboard 5: Antrea Agent Logs

The next dashboard is the Antrea Agent Logs one. This dashboard is generated purely from the logs that the Antrea agent produces at each of the K8s nodes. All the panels can be filtered using variables like log_level and log_category to obtain the desired set of logs and avoid other noisy and less relevant ones.

Antrea Agent Logs Dashboard

As an example, let's see the trace left by Antrea when a new pod is scheduled on a worker node. First use the log_category filter to select the pod_configuration and server subsystems only. Next use the following command to create a new 2-replica deployment using the httpd image.

kubectl create deployment www --image=httpd --replicas=2
deployment.apps/www created

Note how the new deployment has produced some logs in the pod_configuration and server log categories, specifically in k8s-worker-01 and k8s-worker-02.

Verify the pods have actually been scheduled on k8s-worker-01 and k8s-worker-02.

kubectl get pod -o wide | grep -E "NAME|www"
NAME                       READY   STATUS    RESTARTS     AGE     IP            NODE            NOMINATED NODE   READINESS GATES
www-86d9694b49-m85xl       1/1     Running   0            5m26s   10.34.2.94    k8s-worker-02   <none>           <none>
www-86d9694b49-r6kwq       1/1     Running   0            5m26s   10.34.6.187   k8s-worker-01   <none>           <none>

The logs panel shows the activity created when the kubelet requests IP connectivity for the new pod (CmdAdd).

If you need to go deeper into the logs there's always the option of increasing the log verbosity. By default the log level is set to the minimum, or zero.

kubectl get pods -n kube-system | grep antrea-agent | awk '{print $1}' | xargs -n1 -I{arg} kubectl exec -n kube-system {arg} -c antrea-agent -- antctl log-level
0
0
0
0
0
0
0

But, for troubleshooting or research purposes, you can change the log level to 4, which is the maximum verbosity. You can do it in all the nodes in the cluster using the following single-line command.

kubectl get pods -n kube-system | grep antrea-agent | awk '{print $1}' | xargs -n1 -I{arg} kubectl exec -n kube-system {arg} -c antrea-agent -- antctl log-level 4

Note the command is executed silently, so repeat the earlier read-only command (antctl log-level without a value) to check whether the new setting has been applied; you should now receive an output of 4 from each of the agents. Now delete the previous www deployment and recreate it. As you can see in the Antrea Logs panel, the same operation now produces many more logs with enriched information.

Increasing the log level can be useful to see conntrack information. If you are interested in flow information, increasing the log verbosity temporarily might be a good option. Let's give it a try. Expose the previous www deployment using the command below.

kubectl expose deployment www --port=80
service/www exposed

And now create a new curl pod to generate some traffic towards the created service so we can locate the conntrack flows.

kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
If you don't see a command prompt, try pressing enter.
/ $ / $ curl www.default.svc
<html><body><h1>It works!</h1></body></html>

Using the log_category filter at the top, select conntrack_connections to focus only on flow-relevant logs.

For further investigation you can click on the arrow next to the Panel Title and then click on Explore to drill down.

Change the query and add filters to get only logs containing "mycurlpod" and "DestinationPort:80", which correspond to the flow we are looking for.
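In LogQL terms, the filter can be expressed with simple line-filter expressions over the antrea-agent stream, for example (a query sketch, using the labels defined in our fluent-bit output configuration):

{job="fluentbit-antrea"} |= "mycurlpod" |= "DestinationPort:80"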

And you will get the log entries matching the search filter entered as shown in the following screen.

You can copy the raw log entry for further analysis or forensic/reporting purposes. It's quite easy to identify source/destination IP addresses, ports and other interesting metadata such as the matching ingress and egress security policies affecting this particular traffic.

FlowKey:{SourceAddress:10.34.6.190 DestinationAddress:10.34.2.97 Protocol:6 SourcePort:46624 DestinationPort:80} OriginalPackets:6 OriginalBytes:403 SourcePodNamespace:default SourcePodName:mycurlpod DestinationPodNamespace: DestinationPodName: DestinationServicePortName:default/www: DestinationServiceAddress:10.109.122.40 DestinationServicePort:80 IngressNetworkPolicyName: IngressNetworkPolicyNamespace: IngressNetworkPolicyType:0 IngressNetworkPolicyRuleName: IngressNetworkPolicyRuleAction:0 EgressNetworkPolicyName: EgressNetworkPolicyNamespace: EgressNetworkPolicyType:0 EgressNetworkPolicyRuleName: EgressNetworkPolicyRuleAction:0 PrevPackets:0 PrevBytes:0 ReversePackets:4 ReverseBytes:486 PrevReversePackets:0 PrevReverseBytes:0 TCPState:TIME_WAIT PrevTCPState:}

Once your investigation is done, do not forget to restore the log level to zero to contain the log generation, using the following command.

kubectl get pods -n kube-system | grep antrea-agent | awk '{print $1}' | xargs -n1 -I{arg} kubectl exec -n kube-system {arg} -c antrea-agent -- antctl log-level 0

These are just a couple of examples of how to explore the logs sent by the Fluent-Bit shipper on each of the workers. Feel free to explore the dashboards using the embedded filters to see how they affect the displayed information.

Dashboard 6: Antrea Controller Logs

Dashboard 6 displays the activity related to the Antrea Controller component. The controller's main function at this moment is to take care of the NetworkPolicy implementation. For that reason, unless there are network policy changes, you won't see any logs or metrics except the Controller Process CPU Seconds, which should remain very low (under 2% of usage).

If you want to see some metrics in action you can generate some activity just by pushing some network policies. As an example, create a manifest that defines a ClusterNetworkPolicy and a NetworkPolicy.

vi dummy_policies.yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: test-acnp
  namespace: acme-fitness
spec:
  priority: 10
  tier: application
  appliedTo:
    - podSelector:
        matchLabels:
          service: catalog
  ingress:
    - action: Reject
      from:
        - podSelector:
            matchLabels:
              service: frontend
      ports:
        - protocol: TCP
          port: 8082
      name: AllowFromFrontend
      enableLogging: false
---
apiVersion: crd.antrea.io/v1alpha1
kind: NetworkPolicy
metadata:
  name: test-np
  namespace: acme-fitness
spec:
  priority: 3
  tier: application
  appliedTo:
    - podSelector:
        matchLabels:
          service: catalog
  ingress:
    - action: Allow
      from:
        - podSelector:
            matchLabels:
              service: frontend
      ports:
        - protocol: TCP
          port: 8082
      name: AllowFromFrontend
      enableLogging: true

Using a simple loop like the one below, repeatedly create and delete the same policies, waiting a random amount of time between iterations.

while true; do kubectl apply -f dummy_policies.yaml; sleep .$[ ( $RANDOM % 500 ) + 1 ]s; kubectl delete -f dummy_policies.yaml; done
clusternetworkpolicy.crd.antrea.io/test-acnp created
networkpolicy.crd.antrea.io/test-np created
clusternetworkpolicy.crd.antrea.io "test-acnp" deleted
networkpolicy.crd.antrea.io "test-np" deleted
clusternetworkpolicy.crd.antrea.io/test-acnp created
networkpolicy.crd.antrea.io/test-np created
clusternetworkpolicy.crd.antrea.io "test-acnp" deleted
networkpolicy.crd.antrea.io "test-np" deleted

Keep the loop running for several minutes and return to the Grafana UI to check how the panels are populated.

As in the other dashboards, at the bottom you can find the related logs that are being generated by the network policy creation / deletion activity pushed to the cluster.

Stop the loop by pressing CTRL+C. As with the other systems generating logs, you can increase the log level using the same method mentioned earlier. You can try configuring the Antrea controller at the maximum verbosity, which is 4.

kubectl exec -n kube-system antrea-controller-74799d4774-g6cxt -c antrea-controller -- antctl log-level 4

Repeat the same loop and you will find a dramatic increase in the number of messages being logged. In the Logs Per Second panel at the upper right corner you can notice the rate climbing from around 6 to a steady rate of 120+ logs per second.
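
The Logs Per Second panel is essentially a rate computed over the ingested log stream. A minimal LogQL sketch of that kind of query is shown below, assuming the job=fluentbit-antrea label configured in the Fluent-Bit output; the actual panel may apply additional filters to isolate the controller pod.

sum(rate({job="fluentbit-antrea"} [1m]))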

When finished, do not forget to restore the log level to 0.

kubectl exec -n kube-system antrea-controller-74799d4774-g6cxt -c antrea-controller -- antctl log-level 0

This concludes this series on Antrea observability. I hope it has helped shed light on a crucial component of any Kubernetes cluster, the CNI, and that it serves not only to properly understand its operation from a networking perspective, but also to enhance your analysis and troubleshooting capabilities.

Antrea Observability Part 2: Installing Grafana, Prometheus, Loki and Fluent-Bit

Who does not love to watch a nice dashboard full of colors? Observing patterns and real-time metrics in a time series might make us sit in front of a screen as if hypnotized for hours. But apart from the inherent beauty of dashboards, they provide observability, which is a crucial feature for understanding the performance of our applications and also a very good tool for predicting future behavior and fixing existing problems.

There is a big ecosystem out there with plenty of tools to create a logging pipeline that collects, parses, processes, enriches, indexes, analyzes and visualizes logs. In this post we will focus on a combination that is gaining popularity for log analysis based on Fluent-Bit, Loki and Grafana, as shown below. For metric collection we will use Prometheus.

Opensource Observability Stack

Let’s build the different blocks starting by the visualization tool.

Installing Grafana as visualization platform

Grafana is open source software that allows us to visualize data collected from various sources such as Prometheus, InfluxDB or Telegraf, tools that collect data from our infrastructure such as CPU usage, memory, or network traffic of a virtual machine, a Kubernetes cluster, or each of its containers.

The real power of Grafana lies in the flexibility to create as many dashboards as we need, with very smart visualization graphs where we can format this data and represent it as we want. We will use Grafana as the main tool for adding observability capabilities to Antrea, which is the purpose of this series of posts.

To carry out the installation of grafana we will rely on the official helm charts. The first step, therefore, would be to add the grafana repository so that helm can access it.

helm repo add grafana https://grafana.github.io/helm-charts

Once the repository has been added we can browse it. We will use the latest available release of the chart to install version 9.2.4 of Grafana.

helm search repo grafana
NAME                                    CHART VERSION   APP VERSION             DESCRIPTION                                       
grafana/grafana                         6.43.5          9.2.4                   The leading tool for querying and visualizing t...
grafana/grafana-agent-operator          0.2.8           0.28.0                  A Helm chart for Grafana Agent Operator           
grafana/enterprise-logs                 2.4.2           v1.5.2                  Grafana Enterprise Logs                           
grafana/enterprise-logs-simple          1.2.1           v1.4.0                  DEPRECATED Grafana Enterprise Logs (Simple Scal...
grafana/enterprise-metrics              1.9.0           v1.7.0                  DEPRECATED Grafana Enterprise Metrics             
grafana/fluent-bit                      2.3.2           v2.1.0                  Uses fluent-bit Loki go plugin for gathering lo...
grafana/loki                            3.3.2           2.6.1                   Helm chart for Grafana Loki in simple, scalable...
grafana/loki-canary                     0.10.0          2.6.1                   Helm chart for Grafana Loki Canary                
grafana/loki-distributed                0.65.0          2.6.1                   Helm chart for Grafana Loki in microservices mode 
grafana/loki-simple-scalable            1.8.11          2.6.1                   Helm chart for Grafana Loki in simple, scalable...
grafana/loki-stack                      2.8.4           v2.6.1                  Loki: like Prometheus, but for logs.              
grafana/mimir-distributed               3.2.0           2.4.0                   Grafana Mimir                                     
grafana/mimir-openshift-experimental    2.1.0           2.0.0                   Grafana Mimir on OpenShift Experiment             
grafana/oncall                          1.0.11          v1.0.51                 Developer-friendly incident response with brill...
grafana/phlare                          0.1.0           0.1.0                   🔥 horizontally-scalable, highly-available, mul...
grafana/promtail                        6.6.1           2.6.1                   Promtail is an agent which ships the contents o...
grafana/rollout-operator                0.1.2           v0.1.1                  Grafana rollout-operator                          
grafana/synthetic-monitoring-agent      0.1.0           v0.9.3-0-gcd7aadd       Grafana's Synthetic Monitoring application. The...
grafana/tempo                           0.16.3          1.5.0                   Grafana Tempo Single Binary Mode                  
grafana/tempo-distributed               0.27.5          1.5.0                   Grafana Tempo in MicroService mode                
grafana/tempo-vulture                   0.2.1           1.3.0                   Grafana Tempo Vulture - A tool to monitor Tempo...

Any helm chart includes configuration options to customize the setup by passing a configuration file that helm will use when deploying our release. We can dig into the documentation to understand what all these possible helm chart values really mean and how they affect the final setup. Sometimes it is useful to get a file with all the default configuration values and personalize it as required. To get the default values associated with a helm chart just use the following command.

helm show values grafana/grafana > default_values.yaml

Based on the default_values.yaml we will create a customized and reduced version and save it in a new values.yaml file with some modified values for our custom configuration. You can find the full values.yaml here. The first section enables data persistence by creating a PVC that will use the vsphere-sc storageClass we created in this previous post to leverage vSphere Container Native Storage capabilities to provision persistent volumes. Adjust the storageClassName as per your setup.

vi values.yaml
# Enable Data Persistence
persistence:
  type: pvc
  enabled: true
  storageClassName: vsphere-sc
  accessModes:
    - ReadWriteOnce
  size: 10Gi

The second section enables the creation of sidecar containers that allow us to import Grafana configurations such as datasources or dashboards through configmaps. This will be very useful to deploy Grafana fully configured in an automated way, without user intervention through the graphical interface. With these settings applied, any configmap in the grafana namespace labeled with grafana_dashboard=1 will trigger the import of a dashboard. Similarly, any configmap labeled with grafana_datasource=1 will trigger the import of a Grafana datasource.

vi values.yaml (Sidecars Section)
# SideCars Section
# Enable sidecar containers creation for dashboard and datasource import via configmaps
sidecar:
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: "1"
  datasources:
    enabled: true
    label: grafana_datasource
    labelValue: "1"
    

The last section defines how to expose the Grafana graphical interface externally. We will use a kubernetes service of type LoadBalancer for this purpose. In my case I will use AVI as the ingress solution for the cluster, so the load balancer will be created in the service engine. Feel free to use any other external LoadBalancer solution if you want.

vi values.yaml (service expose section)
# Define how to expose the service
service:
  enabled: true
  type: LoadBalancer
  port: 80
  targetPort: 3000
  portName: service

The following command creates the namespace grafana and installs the grafana/grafana chart named grafana in the grafana namespace taking the values.yaml configuration file as the input. After successful deployment, the installation gives you some hints for accessing the application, e.g. how to get the credentials, which are stored in a secret k8s object. Ignore any warning about PSP you might get.

helm install grafana --create-namespace -n grafana grafana/grafana -f values.yaml
Release "grafana" has been installed. Happy Helming!
NAME: grafana
LAST DEPLOYED: Mon Dec 26 18:42:05 2022
NAMESPACE: grafana
STATUS: deployed
REVISION: 2
NOTES:
1. Get your 'admin' user password by running:

   kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:

   grafana.grafana.svc.cluster.local

   Get the Grafana URL to visit by running these commands in the same shell:
     export POD_NAME=$(kubectl get pods --namespace grafana -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
     kubectl --namespace grafana port-forward $POD_NAME 3000

3. Login with the password from step 1 and the username: admin

As explained in the notes after helm installation, the first step is to get the plaintext password that will be used to authenticate the default admin username in the Grafana UI.

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
wFCT81uGC7ij5Sv1rTIuf2CwQa5Y9xkGQSixDKOx

Verifying Grafana Installation

Before moving to the Grafana UI let's explore the created kubernetes resources and their status.

kubectl get all -n grafana
NAME                           READY   STATUS    RESTARTS   AGE
pod/grafana-7d95c6cf8c-pg5dw   3/3     Running   0          24m

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
service/grafana   LoadBalancer   10.100.220.164   10.113.3.106   80:32643/TCP   24m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana   1/1     1            1           24m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-7d95c6cf8c   1         1         1       24m

The chart has created a deployment with a single grafana pod running three containers (the Grafana server plus the dashboard and datasource sidecars), all in Running status. Note how the LoadBalancer service has already allocated the external IP address 10.113.3.106 to provide outside reachability. As mentioned earlier, if you have a LoadBalancer solution such as AVI with its AKO operator deployed in your setup, you will see that a new Virtual Service has been created and is ready to use as shown below:

Now you can open your browser and type the IP address. AVI also registers in its internal DNS the new LoadBalancer objects that the developer creates in kubernetes. In this specific setup an automatic FQDN is created and Grafana should be available from your browser at http://grafana.grafana.avi.sdefinitive.net. As specified in the LoadBalancer section of the values.yaml at deployment time, the Grafana GUI will be exposed on port 80. For security purposes it is strongly recommended to use a secure Ingress object instead if you are planning to deploy in production.
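
As a quick reachability check from your workstation you can request the Grafana login page and expect an HTTP 200 response. This is just a sketch: the FQDN below belongs to my lab, so adjust it to your own name or use the external IP address directly.

curl -s -o /dev/null -w "%{http_code}\n" http://grafana.grafana.avi.sdefinitive.net/login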

Grafana GUI Welcome Page

This completes the installation of the Grafana visualization tool. Let's now move on to installing another important observability piece, in charge of retrieving metrics: Prometheus.

Prometheus for metric collection

Prometheus was created to monitor highly dynamic environments, and over the past years it has become the mainstream monitoring tool of choice in the container and microservices world. Modern devops is becoming more and more complex to handle manually, so there is a need for automation. Imagine a complex infrastructure with loads of servers distributed over many locations where you have no insight into what is happening in terms of errors, latency, usage and so on. In a modern architecture there are more things that can go wrong when you have tons of dynamic and ephemeral services and applications, and any of them can crash and cause the failure of other services. This is why it is crucial to avoid manual intervention and allow the administrator to quickly identify and fix any potential problem or degradation of the system.

The prometheus architecture is represented in the following picture that has been taken from the official prometheus.io website.

Prometheus architecture and its ecosystem

The core components of the prometheus server are listed below:

  • Data Retrieval Worker.- responsible for fetching time series data from a particular data source, such as a web server or a database, and converting it into the Prometheus metric format.
  • Time Series Database (TSDB).- used to store, manage, and query time-series data.
  • HTTP Server.- responsible for exposing the Prometheus metrics endpoint, which provides access to the collected metrics data for monitoring and alerting purposes.

The first step is to get the Prometheus server up and running.

Installing Prometheus

Prometheus requires access to Kubernetes API resources for service discovery, access to the Antrea Metrics Listener and some configuration to instruct the Data Retrieval Worker to scrape the required metrics in both Agent and Controller components. There are some manifests on the Antrea website ready to use to save some time with all the scraping and job configurations of Prometheus. Let's apply the provided manifest as a first step.

kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-prometheus.yml
namespace/monitoring created
serviceaccount/prometheus created
secret/prometheus-service-account-token created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.apps/prometheus-deployment created
service/prometheus-service created

As you can see in the output, the manifest includes all required kubernetes objects including permissions, configurations and lastly the prometheus server itself. The manifest deploys all the resources in a dedicated monitoring namespace.

kubectl get all -n monitoring
NAME                                         READY   STATUS    RESTARTS   AGE
pod/prometheus-deployment-57d7b4c6bc-jx28z   1/1     Running   0          42s

NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus-service   NodePort   10.101.177.47   <none>        8080:30000/TCP   42s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-deployment   1/1     1            1           42s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-deployment-57d7b4c6bc   1         1         1       42s

Feel free to explore all the created resources. As you can tell, a new NodePort service is running, so you should be able to reach the Prometheus server using any of your worker IP addresses on the static port 30000. Alternatively you can always use the port-forward method to redirect a local port to the service listening at 8080, as shown below. Open a browser to verify you can access the HTTP Server component of Prometheus.
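
For example, a minimal port-forward sketch (adjust the namespace and service name if yours differ), after which Prometheus is reachable at http://localhost:8080:

kubectl port-forward -n monitoring svc/prometheus-service 8080:8080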

The Prometheus configuration will retrieve not only Antrea related metrics but also some built-in kubernetes metrics. Just for fun, type "api" in the search box and you will see dozens of metrics available.
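
You can also query the Prometheus HTTP API directly from the command line. The sketch below uses the NodePort exposed above (replace <worker-ip> with one of your worker node addresses); the up metric simply reports which scrape targets are currently healthy.

curl -s 'http://<worker-ip>:30000/api/v1/query?query=up'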

Prometheus Server

Now that we are sure the Prometheus server is running and able to scrape metrics successfully, let's move into our area of interest, the Antrea CNI.

Enabling Prometheus Metrics in Antrea

The first step is to configure Antrea to generate Prometheus metrics. As explained in the previous post here, we are using Helm to install Antrea, so the best way to change the configuration of the Antrea setup is by updating the values.yaml file and redeploying the helm chart. As you can see we are also enabling the FlowExporter featuregate. This is a mandatory setting to allow the conntrack flow related metrics to get updated. Edit the values.yaml:

vi values.yaml
# -- Container image to use for Antrea components.
image:
  tag: "v1.10.0"

enablePrometheusMetrics: true

featureGates:
   FlowExporter: true
   AntreaProxy: true
   TraceFlow: true
   NodePortLocal: true
   Egress: true
   AntreaPolicy: true

Deploy a new release of the antrea helm chart taking the values.yaml file as input. Since the chart is already deployed, we now need to use the upgrade keyword instead of install as we did the first time.

helm upgrade -f values.yaml antrea antrea/antrea -n kube-system
Release "antrea" has been upgraded. Happy Helming!
NAME: antrea
LAST DEPLOYED: Wed Jan 18 13:53:56 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The Antrea CNI has been successfully installed

You are using version 1.10.0

For the Antrea documentation, please visit https://antrea.io

Now, with the antctl command, verify the feature gates have been enabled as expected.

antctl get featuregates
Antrea Agent Feature Gates
FEATUREGATE              STATUS         VERSION   
Traceflow                Enabled        BETA      
AntreaIPAM               Disabled       ALPHA     
Multicast                Disabled       ALPHA     
AntreaProxy              Enabled        BETA      
Egress                   Enabled        BETA      
EndpointSlice            Disabled       ALPHA     
ServiceExternalIP        Disabled       ALPHA     
AntreaPolicy             Enabled        BETA      
Multicluster             Disabled       ALPHA     
FlowExporter             Enabled        ALPHA     
NetworkPolicyStats       Enabled        BETA      
NodePortLocal            Enabled        BETA      

Antrea Controller Feature Gates
FEATUREGATE              STATUS         VERSION   
AntreaPolicy             Enabled        BETA      
NetworkPolicyStats       Enabled        BETA      
NodeIPAM                 Disabled       ALPHA     
Multicluster             Disabled       ALPHA     
Egress                   Enabled        BETA      
Traceflow                Enabled        BETA      
ServiceExternalIP        Disabled       ALPHA     

As seen before, Prometheus has some built-in capabilities to browse and visualize metrics, however we want these metrics to be consumed from the powerful Grafana we installed earlier. Let's access the Grafana console and click on the gear icon to add the new Prometheus datasource.

Grafana has been deployed in the same cluster, in a different namespace. The Prometheus URL follows the <service>.<namespace>.svc.<domain>:<port> naming convention (the domain can be omitted when resolving from inside the same cluster). In our case the URL is http://prometheus-service.monitoring.svc:8080. If you are accessing Prometheus from a different cluster, ensure you use the full FQDN, adding the corresponding domain (by default cluster.local).
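
If you want to double-check that this URL resolves and responds from inside the cluster before saving the datasource, a throwaway pod can be used. This is a sketch: /-/healthy is the standard Prometheus health endpoint and curlimages/curl is just a convenient public image.

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://prometheus-service.monitoring.svc.cluster.local:8080/-/healthy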

Click on Save & Test blue button at the bottom of the screen and you should see a message indicating the Prometheus server is reachable and working as expected.

Now click on the compass button to verify that Antrea metrics are being populated and are reachable from Grafana for visualization. Select the newly added Prometheus datasource at the top.

Pick any of the available antrea agent metrics (I am using antrea_agent_local_pod_count here as an example) and you should see the visualization of the gathered metric values in the graph below.

That means the Prometheus datasource is working and Antrea is populating metrics successfully. Let's now move on to the next piece: Loki.

Installing Loki as log aggregator platform

In recent years the area of log management has been clearly dominated by the Elastic stack, which became the de-facto standard, whereas Grafana has maintained a strong position in terms of visualization of metrics from multiple data sources, among which Prometheus stands out.

Lately a very popular alternative for log management is Grafana Loki. The Grafana Labs website describes Loki as a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

For that reason we will use Loki as the solution for aggregating the logs we get from kubernetes pods. We will focus on Antrea related pods, but it can be used with a wider scope for the rest of the applications running in your cluster.

In the same way that we did in the Grafana installation, we will use the official helm chart of the product to proceed with the installation. This time it is not necessary to install a new helm repository because the Loki chart is already included in the grafana repo. As we did with Grafana helm chart, the first step will be to obtain the configuration file associated to the helm chart that we will use to be able to customize our installation.

helm show values grafana/loki > default_values.yaml

Using this file as a reference, create a reduced and customized values.yaml file with some modified configuration. As a reminder, any setting not explicitly mentioned in the reduced values.yaml file will take the default values. Find the values.yaml file I am using here.

For a production solution it is highly recommended to install Loki using the scalable architecture. The scalable architecture requires a managed object store such as AWS S3 or Google Cloud Storage but, if you are planning to use it on-premises, a very good choice is to use a self-hosted store solution such as the popular MinIO. There is a previous post explaining how to deploy a MinIO based S3-like storage platform on top of vSAN here. In case you are going with the MinIO Operator, as a prerequisite before installing Loki you would need to perform the following tasks.

The following script will

  • Create a new MinIO tenant. We will use a new tenant called logs in the namespace logs.
  • Obtain the S3 endpoint that will be used to interact with your S3 storage via API. I am using here the internal ClusterIP endpoint, which follows the built-in kubernetes naming convention for services as explained here and would be something like minio.logs.svc.cluster.local, but feel free to use the external FQDN if you prefer to expose it externally.
  • Obtain the AccessKey and SecretAccessKey. By default a set of credentials are generated upon tenant creation. You can extract them from the corresponding secrets easily or just create a new set of credentials using Minio Tenant Console GUI.
  • Create the required buckets. You can use console or mc tool as well.

Assuming you are using the MinIO operator, the following script will create the required tenant and credentials secret. Copy and paste the contents in the console or create a .sh file and execute it using bash. Adjust the tenant settings in terms of servers, drives and capacity to match your environment. All the tenant objects will be placed in the namespace logs.

vi create-loki-minio-tenant.sh
#!/bin/bash

TENANT=logs
BUCKET1=chunks
BUCKET2=ruler
BUCKET3=admin

# CREATE NAMESPACE AND TENANT 6 x 4 drives for raw 50 G, use Storage Class SNA
# -------------------------------------------------------------------------------
kubectl create ns $TENANT
kubectl minio tenant create $TENANT --servers 6 --volumes 24 --capacity 50G --namespace $TENANT --storage-class vsphere-sna --expose-minio-service --expose-console-service

# EXTRACT CREDENTIALS FROM CURRENT TENANT AND CREATE SECRET 
# ---------------------------------------------------------
echo "MINIO_S3_ENDPOINT=https://minio.${TENANT}.svc.cluster.local" > s3vars.env
echo "MINIO_S3_BUCKET1=${BUCKET1}" >> s3vars.env
echo "MINIO_S3_BUCKET2=${BUCKET2}" >> s3vars.env
echo "MINIO_S3_BUCKET3=${BUCKET3}" >> s3vars.env
echo "SECRET_KEY=$(kubectl get secrets -n ${TENANT} ${TENANT}-user-1 -o jsonpath="{.data.CONSOLE_SECRET_KEY}" | base64 -d)" >> s3vars.env
echo "ACCESS_KEY=$(kubectl get secrets -n ${TENANT} ${TENANT}-user-1 -o jsonpath="{.data.CONSOLE_ACCESS_KEY}" | base64 -d)" >> s3vars.env

kubectl create secret generic -n $TENANT loki-s3-credentials --from-env-file=s3vars.env

Once the tenant is created we can proceed with bucket creation. You can do it manually via the console or the mc client, or use the following yaml file that defines a job to create the required buckets, as shown here. Basically it will wait until the tenant is initialized and then it will create the three required buckets using the variables injected from the loki-s3-credentials secret.

vi create-loki-minio-buckets-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-loki-minio-buckets
  namespace: logs
spec:
  template:
    spec:
      containers:
      - name: mc
        # loki-s3-credentials contains $ACCESS_KEY, $SECRET_KEY, $MINIO_S3_ENDPOINT, $MINIO_S3_BUCKET1-3
        envFrom:
        - secretRef:
            name: loki-s3-credentials
        image: minio/mc
        command: 
          - sh
          - -c
          - ls /tmp/error > /dev/null 2>&1 ; until [[ "$?" == "0" ]]; do sleep 5; echo "Attempt to connect with MinIO failed. Attempt to reconnect in 5 secs"; mc alias set s3 $(MINIO_S3_ENDPOINT) $(ACCESS_KEY) $(SECRET_KEY) --insecure; done && mc mb s3/$(MINIO_S3_BUCKET1) --insecure; mc mb s3/$(MINIO_S3_BUCKET2) --insecure; mc mb s3/$(MINIO_S3_BUCKET3) --insecure
      restartPolicy: Never
  backoffLimit: 4

Verify the job execution has completed by displaying the logs created by the pod in charge of running the defined job.

kubectl logs -n logs job/create-loki-minio-buckets
mc: <ERROR> Unable to initialize new alias from the provided credentials. The Access Key Id you provided does not exist in our records.          
Attempt to connect with MinIO failed. Attempt to reconnect in 5 secs                                                                             
Added `s3` successfully.                                                                                                                         
Bucket created successfully `s3/chunks`.
Bucket created successfully `s3/ruler`.
Bucket created successfully `s3/admin`.

Now the S3 storage requirement is fulfilled. Let's move on to the values.yaml file that will be used as the configuration source for our Loki deployment. The first section provides some general configuration options including the data required to access the shared S3 store. Replace the s3 attributes with your particular settings.

vi values.yaml
loki:
  auth_enabled: false
  storage_config:
    boltdb_shipper:
      active_index_directory: /var/loki/index
      cache_location: /var/loki/index_cache
      resync_interval: 5s
      shared_store: s3
  compactor:
    working_directory: /var/loki/compactor
    shared_store: s3
    compaction_interval: 5m
  storage:
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    type: s3
    s3:
      s3: 
      endpoint: https://minio.logs.svc.cluster.local:443
      region: null
      secretAccessKey: YDLEu99wPXmAAFyQcMzDwDNDwzF32GnS8HhHBuoD
      accessKeyId: ZKYLET51JWZ8LXYYJ0XP
      s3ForcePathStyle: true
      insecure: true
      
  # 
  querier:
    max_concurrent: 4096
  #
  query_scheduler:
    max_outstanding_requests_per_tenant: 4096

# Configuration for the write
# <continue below...>

Note: if you used the instructions above to create the MinIO tenant, you can extract the S3 information from the plaintext s3vars.env file. You can also extract it from the secret logs-user-1, as shown after the output below. Remember to delete the s3vars.env file after usage as it may reveal sensitive information.

cat s3vars.env
MINIO_S3_ENDPOINT=https://minio.logs.svc.cluster.local
MINIO_S3_BUCKET1=chunks
MINIO_S3_BUCKET2=ruler
MINIO_S3_BUCKET3=admin
SECRET_KEY=YDLEu99wPXmAAFyQcMzDwDNDwzF32GnS8HhHBuoD
ACCESS_KEY=ZKYLET51JWZ8LXYYJ0XP
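
If you have already deleted s3vars.env, the same values can be recovered at any time from the tenant secret. This is a sketch assuming the default logs-user-1 secret created for the tenant, as referenced in the script above.

kubectl get secret -n logs logs-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d ; echo
kubectl get secret -n logs logs-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d ; echo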

When object storage is configured, the helm chart configures Loki to deploy read and write targets in a high-availability fashion, running 3 replicas of each independent process. It will use a storageClass able to provide persistent volumes to avoid losing data in case of application failure. Again, I am using here a storage class called vsphere-sc that is backed by vSAN and accessed by a CSI driver. If you want to learn how to provide data persistence using vSphere and vSAN check a previous post here.

vi values.yaml (Storage and General Section)

# Configuration for the write
write:
  persistence:
    # -- Size of persistent disk
    size: 10Gi
    storageClass: vsphere-sc
# Configuration for the read node(s)
read:
  persistence:
    # -- Size of persistent disk
    size: 10Gi
    storageClass: vsphere-sc
    # -- Selector for persistent disk

# Configuration for the Gateway
# <continue below..>

Additionally, the chart installs the gateway component, which is an NGINX that exposes Loki's API and automatically proxies requests to the correct Loki components (read or write in our scalable setup). If you want to reach Loki from the outside (e.g. other clusters) you must expose it using any of the kubernetes methods to gain external reachability. In this example I am using a LoadBalancer, but feel free to explore further options in the default_values.yaml such as a secure Ingress. Remember that when the gateway is enabled, the visualization tool (Grafana) as well as the log shipping agents (Fluent-Bit) should be configured to use the gateway as the endpoint.

vi values.yaml (Gateway Section)

# Configuration for the Gateway
gateway:
  # -- Specifies whether the gateway should be enabled
  enabled: true
  # -- Number of replicas for the gateway
  service:
    # -- Port of the gateway service
    port: 80
    # -- Type of the gateway service
    type: LoadBalancer
  # Basic auth configuration
  basicAuth:
    # -- Enables basic authentication for the gateway
    enabled: false
    # -- The basic auth username for the gateway

The default chart also installs other complementary Loki components called canary and backend. The Loki canary component is fully described here. Basically it is used to audit the log-capturing performance of Loki by generating artificial log lines.
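
Once the chart is installed in the next step, you can peek at what the canary is doing by tailing its logs. A sketch addressing the daemonSet directly (kubectl will pick one of its pods):

kubectl logs -n loki daemonset/loki-canary --tail=5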

Once the values.yaml file is completed we can proceed with the installation of the helm chart using the following command. I am installing Loki in a namespace named loki.

helm install loki --create-namespace -n loki grafana/loki -f values.yaml
Release "loki" has been installed. Happy Helming!
NAME: loki
LAST DEPLOYED: Wed Feb 14 12:18:46 2023
NAMESPACE: loki
STATUS: deployed
REVISION: 1
NOTES:
***********************************************************************
 Welcome to Grafana Loki
 Chart version: 4.4.1
 Loki version: 2.7.2
***********************************************************************

Installed components:
* grafana-agent-operator
* gateway
* read
* write
* backend

Now that we are done with the Loki installation, in a scalable and distributed architecture backed by MinIO S3 storage, let's do some verifications to check everything is running as expected.

Verifying Loki Installation

As a first step, explore the kubernetes objects that the loki chart has created.

kubectl get all -n loki
NAME                                               READY   STATUS    RESTARTS   AGE
pod/loki-backend-0                                 1/1     Running   0          2m13s
pod/loki-backend-1                                 1/1     Running   0          2m49s
pod/loki-backend-2                                 1/1     Running   0          3m37s
pod/loki-canary-4xkfw                              1/1     Running   0          5h42m
pod/loki-canary-crxwt                              1/1     Running   0          5h42m
pod/loki-canary-mq79f                              1/1     Running   0          5h42m
pod/loki-canary-r76pz                              1/1     Running   0          5h42m
pod/loki-canary-rclhj                              1/1     Running   0          5h42m
pod/loki-canary-t55zt                              1/1     Running   0          5h42m
pod/loki-gateway-574476d678-vkqc7                  1/1     Running   0          5h42m
pod/loki-grafana-agent-operator-5555fc45d8-rcs59   1/1     Running   0          5h42m
pod/loki-logs-25hvr                                2/2     Running   0          5h42m
pod/loki-logs-6rnmt                                2/2     Running   0          5h42m
pod/loki-logs-72c2w                                2/2     Running   0          5h42m
pod/loki-logs-dcwkb                                2/2     Running   0          5h42m
pod/loki-logs-j6plp                                2/2     Running   0          5h42m
pod/loki-logs-vgqqb                                2/2     Running   0          5h42m
pod/loki-read-598f8c5cd5-dqtqt                     1/1     Running   0          2m59s
pod/loki-read-598f8c5cd5-fv6jq                     1/1     Running   0          2m18s
pod/loki-read-598f8c5cd5-khmzw                     1/1     Running   0          3m39s
pod/loki-write-0                                   1/1     Running   0          93s
pod/loki-write-1                                   1/1     Running   0          2m28s
pod/loki-write-2                                   1/1     Running   0          3m33s

NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)             AGE
service/loki-backend            ClusterIP      10.98.167.78     <none>         3100/TCP,9095/TCP   5h42m
service/loki-backend-headless   ClusterIP      None             <none>         3100/TCP,9095/TCP   5h42m
service/loki-canary             ClusterIP      10.97.139.4      <none>         3500/TCP            5h42m
service/loki-gateway            LoadBalancer   10.111.235.35    10.113.3.104   80:30251/TCP        5h42m
service/loki-memberlist         ClusterIP      None             <none>         7946/TCP            5h42m
service/loki-read               ClusterIP      10.99.220.81     <none>         3100/TCP,9095/TCP   5h42m
service/loki-read-headless      ClusterIP      None             <none>         3100/TCP,9095/TCP   5h42m
service/loki-write              ClusterIP      10.102.132.138   <none>         3100/TCP,9095/TCP   5h42m
service/loki-write-headless     ClusterIP      None             <none>         3100/TCP,9095/TCP   5h42m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loki-canary   6         6         6       6            6           <none>          5h42m
daemonset.apps/loki-logs     6         6         6       6            6           <none>          5h42m

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/loki-gateway                  1/1     1            1           5h42m
deployment.apps/loki-grafana-agent-operator   1/1     1            1           5h42m
deployment.apps/loki-read                     3/3     3            3           5h42m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/loki-gateway-574476d678                  1         1         1       5h42m
replicaset.apps/loki-grafana-agent-operator-5555fc45d8   1         1         1       5h42m
replicaset.apps/loki-read-598f8c5cd5                     3         3         3       3m40s
replicaset.apps/loki-read-669c9d7689                     0         0         0       5h42m
replicaset.apps/loki-read-6c7586fdc7                     0         0         0       11m

NAME                            READY   AGE
statefulset.apps/loki-backend   3/3     5h42m
statefulset.apps/loki-write     3/3     5h42m

A simple test you can do to verify gateway status is by “curling” the API endpoint exposed in the allocated external IP. The OK response would indicate the service is up and ready to receive requests.

curl http://10.113.3.104/ ; echo
OK
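
Beyond the plain OK you can also exercise the read path end to end by asking the gateway for the label names Loki currently knows about. This is a sketch against the standard Loki HTTP API; replace the IP with your own gateway external address.

curl -s http://10.113.3.104/loki/api/v1/labels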

Now access the Grafana console and try to add Loki as a datasource. Click on the gear button at the bottom of the left bar and then click on the Add data source blue box.

Adding Loki as Datasource in Grafana (1)

Click on Loki to add the required datasource type.

Adding Loki as Datasource in Grafana (2)

In this setup Grafana and Loki have been deployed in the same cluster, so we can use as the URL the internal FQDN corresponding to the loki-gateway service. In case you are accessing it from the outside you need to change that to the external URL (e.g. http://loki-gateway.loki.avi.sdefinitive.net in my case).

Adding Loki as Datasource in Grafana (3)

Click on the "Save & test" button and, if Grafana's attempt to reach the Loki API endpoint is successful, it should display a green tick as shown below.

Another interesting verification would be to check if the MinIO S3 bucket is getting the logs as expected. Open the MinIO web console and access the chunks bucket, which is where Loki writes the logs it receives. You should see how the bucket is receiving new objects and how its size is increasing.

You may wonder who is sending this data to Loki at this point, since we have not set up any log shipper yet. The reason is that, by default, when you deploy Loki using the official chart, a sub-chart with the Grafana Agent is also installed to enable self-monitoring. The self-monitoring settings determine whether Loki should scrape its own logs, and they create custom resources that define how to do it. If you are curious about it, explore (i.e. kubectl get) the GrafanaAgent, LogsInstance, and PodLogs custom resources created in the Loki namespace to figure out how this is actually pushing self-monitoring logs into Loki.
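
For instance, a quick way to list those custom resources is shown below. This is a sketch; the CRDs belong to the monitoring.grafana.com API group installed by the grafana-agent-operator sub-chart.

kubectl get grafanaagents,logsinstances,podlogs -n loki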

To verify what data is being pushed into the MinIO S3 bucket, you can explore the Loki datasource through Grafana. Return to the Grafana GUI and try to show logs related to a Loki component such as the loki-gateway pod. Click on the compass icon at the left to explore the added datasource. Now filter using job as the label key and select loki/loki-gateway as the label value, as shown below. Click on the Run query blue button at the top right corner to see what happens.

Displaying Loki logs at Grafana (1)

Et voila! If everything is ok you should see how logs are successfully being shipped to Loki by its internal self monitoring Grafana agent.

Displaying Loki logs at Grafana (2)

Now that our log aggregator seems to be ready let’s move into the log shipper section.

Installing Fluent-Bit

Fluent-Bit is an open source log shipper and processor designed to cover highly distributed environments that demand high performance while keeping a very light footprint.

The main task of Fluent-Bit in this setup is to watch for changes in any interesting log file and send any update in that file to Loki in the form of a new entry. We will focus on Antrea related logs only for now, but you can extend the Fluent-Bit inputs to a wider scope in order to track other logs of your OS.

Again we will rely on helm to proceed with the installation. This time we need to add a new repository maintained by fluent.

helm repo add fluent https://fluent.github.io/helm-charts

Explore the repo to see what it contains.

helm search repo fluent
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
fluent/fluent-bit       0.22.0          2.0.8           Fast and lightweight log processor and forwarde...
fluent/fluentd          0.3.9           v1.14.6         A Helm chart for Kubernetes                       

As we did before, create a reference yaml file with default configuration values of the chart.

helm show values fluent/fluent-bit > default_values.yaml

Using default_values.yaml as a template, create a new values.yaml file that will contain the desired configuration. The main piece of the values.yaml file resides in the config section. There you can customize how the logs will be treated in a sequential fashion, creating a data pipeline scheme as depicted here.

FluentBit DataPipeline

The full documentation is maintained in the Fluent-Bit website here, but in a nutshell the main subsections we will use to achieve our goal are:

  • SERVICE.- The service section defines global properties of the service, including additional parsers to adapt the data found in the logs.
  • INPUT.- The input section defines a source that is associated with an input plugin. Depending on the selected input plugin you will have extra configuration keys. In this case we are using the tail input plugin, which captures any new line of the watched files (Antrea logs in this case). This section is also used to tag the captured data for classification purposes in later stages.
  • PARSER.- This section is used to format or parse any information present on records such as extracting fields according to the position of the information in the log record.
  • FILTER.- The filter section defines a filter that is associated with a filter plugin. In this case we will use the kubernetes filter to be able to enrich our log files with Kubernetes metadata.
  • OUTPUT.- The output section specifies the destination that certain records should follow after a Tag match. We will use Loki as the target here.

We will use the following values.yaml file. A more complex values file including parsing and regex can be found in a specific section of the next post here.

vi values.yaml
env: 
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/antrea*.log
        multiline.parser docker, cri
        Tag antrea.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
        
  ## https://docs.fluentbit.io/manual/pipeline/filters
  ## First filter Uses a kubernetes filter plugin. Match antrea tag. Use K8s Parser
  ## Second filter enriches log entries with hostname and node name 
  filters: |
    [FILTER]
        Name kubernetes
        Match antrea.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On
        
    [FILTER]
        Name record_modifier
        Match antrea.*
        Record podname ${HOSTNAME}
        Record nodename ${NODE_NAME}

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  ## Send the matching data to loki adding a label
  outputs: |
    [OUTPUT]
        Name loki
        Match antrea.*
        Host loki-gateway.loki.svc
        Port 80
        Labels job=fluentbit-antrea

Create the namespace fluent-bit where all the objects will be placed.

kubectl create ns fluent-bit

And now proceed with fluent-bit chart installation.

helm install fluent-bit -n fluent-bit fluent/fluent-bit -f values.yaml
NAME: fluent-bit
LAST DEPLOYED: Wed Jan 11 19:21:20 2023
NAMESPACE: fluent-bit
STATUS: deployed
REVISION: 1
NOTES:
Get Fluent Bit build information by running these commands:

export POD_NAME=$(kubectl get pods --namespace fluent-bit -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace fluent-bit port-forward $POD_NAME 2020:2020
curl http://127.0.0.1:2020

Verifying Fluent-Bit installation

As suggested by the highlighted output of the previous chart installation, you can easily try to reach the fluent-bit API that is listening on TCP port 2020 using a port-forward. Issue the port-forward action and curl the endpoint to see if the service is accepting GET requests. The output indicates the service is ready, and you get some metadata such as the flags and version associated with the running fluent-bit pod.

curl localhost:2020 -s | jq
{
  "fluent-bit": {
    "version": "2.0.8",
    "edition": "Community",
    "flags": [
      "FLB_HAVE_IN_STORAGE_BACKLOG",
      "FLB_HAVE_CHUNK_TRACE",
      "FLB_HAVE_PARSER",
      "FLB_HAVE_RECORD_ACCESSOR",
      "FLB_HAVE_STREAM_PROCESSOR",
      "FLB_HAVE_TLS",
      "FLB_HAVE_OPENSSL",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_WASM",
      "FLB_HAVE_AWS",
      "FLB_HAVE_AWS_CREDENTIAL_PROCESS",
      "FLB_HAVE_SIGNV4",
      "FLB_HAVE_SQLDB",
      "FLB_LOG_NO_CONTROL_CHARS",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_HTTP_SERVER",
      "FLB_HAVE_SYSTEMD",
      "FLB_HAVE_FORK",
      "FLB_HAVE_TIMESPEC_GET",
      "FLB_HAVE_GMTOFF",
      "FLB_HAVE_UNIX_SOCKET",
      "FLB_HAVE_LIBYAML",
      "FLB_HAVE_ATTRIBUTE_ALLOC_SIZE",
      "FLB_HAVE_PROXY_GO",
      "FLB_HAVE_JEMALLOC",
      "FLB_HAVE_LIBBACKTRACE",
      "FLB_HAVE_REGEX",
      "FLB_HAVE_UTF8_ENCODER",
      "FLB_HAVE_LUAJIT",
      "FLB_HAVE_C_TLS",
      "FLB_HAVE_ACCEPT4",
      "FLB_HAVE_INOTIFY",
      "FLB_HAVE_GETENTROPY",
      "FLB_HAVE_GETENTROPY_SYS_RANDOM"
    ]
  }
}

Remember the fluent-bit process needs access to the logs that are generated on every single node; that means you need a daemonSet object that runs a local fluent-bit pod on each of the eligible nodes across the cluster.

kubectl get all -n fluent-bit
NAME                   READY   STATUS    RESTARTS   AGE
pod/fluent-bit-8s72h   1/1     Running   0          9m20s
pod/fluent-bit-lwjrn   1/1     Running   0          9m20s
pod/fluent-bit-ql5gp   1/1     Running   0          9m20s
pod/fluent-bit-wkgnh   1/1     Running   0          9m20s
pod/fluent-bit-xcpn9   1/1     Running   0          9m20s
pod/fluent-bit-xk7vc   1/1     Running   0          9m20s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/fluent-bit   ClusterIP   10.111.248.240   <none>        2020/TCP   9m20s

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluent-bit   6         6         6       6            6           <none>          9m20s

You can also display the logs that any of the pods generates on startup. Note how the tail input plugin only watches files matching the configured pattern (any file whose name matches antrea*.log in the /var/log/containers/ folder).

kubectl logs -n fluent-bit fluent-bit-8s72h
Fluent Bit v2.0.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/01/11 18:21:32] [ info] [fluent bit] version=2.0.8, commit=9444fdc5ee, pid=1
[2023/01/11 18:21:32] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/01/11 18:21:32] [ info] [cmetrics] version=0.5.8
[2023/01/11 18:21:32] [ info] [ctraces ] version=0.2.7
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] initializing
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] storage_strategy='memory' (memory only)
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] multiline core started
[2023/01/11 18:21:32] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443
[2023/01/11 18:21:32] [ info] [filter:kubernetes:kubernetes.0]  token updated
[2023/01/11 18:21:32] [ info] [filter:kubernetes:kubernetes.0] local POD info OK
[2023/01/11 18:21:32] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server...
[2023/01/11 18:21:32] [ info] [filter:kubernetes:kubernetes.0] connectivity OK
[2023/01/11 18:21:32] [ info] [output:loki:loki.0] configured, hostname=loki-gateway.loki.svc:80
[2023/01/11 18:21:32] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2023/01/11 18:21:32] [ info] [sp] stream processor started
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] inotify_fs_add(): inode=529048 watch_fd=1 name=/var/log/containers/antrea-agent-b4tfl_kube-system_antrea-agent-9dadd3c909f9471408ebf569c0d8f2622bedd572ef7a982bfe71a7f3cd6010d0.log
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] inotify_fs_add(): inode=532180 watch_fd=2 name=/var/log/containers/antrea-agent-b4tfl_kube-system_antrea-agent-fd6cfdb5a18c77e66403e66e3a16e2f577d213cd010bdf09f863e22d897194a8.log
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] inotify_fs_add(): inode=532181 watch_fd=3 name=/var/log/containers/antrea-agent-b4tfl_kube-system_antrea-ovs-5344c17989a14d5773ae75e4403c12939c34b2ca53fb5a09951d8fd953cea00d.log
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] inotify_fs_add(): inode=529094 watch_fd=4 name=/var/log/containers/antrea-agent-b4tfl_kube-system_antrea-ovs-cb343ab16cc1d9a718b938be8a889196fd93134f63c9f9da6c53a2ff291f25f5.log
[2023/01/11 18:21:32] [ info] [input:tail:tail.0] inotify_fs_add(): inode=528986 watch_fd=5 name=/var/log/containers/antrea-agent-b4tfl_kube-system_install-cni-583b2d7380e3dc9cff9c3a05870c7997747d9751c075707bd182d1d0a0ec5e9b.log

Now that we are sure fluent-bit is working properly, the last step is to check if we are actually receiving the logs in Loki, using Grafana to retrieve the ingested logs. Remember that in the fluent-bit output configuration we labeled the logs using job=fluentbit-antrea, and we will use that to filter our interesting logs. Click on the compass icon in the left ribbon and populate the label filter with the mentioned label (key and value).

Exploring Antrea logs sent to Loki with Grafana

Generate some activity in the antrea agents; for example, as soon as you create a new pod the CNI should provide the IP address and will write a corresponding log indicating a new IP address has been Allocated. Let's try to locate this exact string in any antrea log. To do so, just press the Code button to write a custom filter by hand.

Code Option for manual queries creation

Type the following filter to find any log with a label job=fluentbit-antrea that also contains the string “Allocated”.

{job="fluentbit-antrea"} |= "Allocated" 

Press the Run query blue button at the top right corner and you should be able to display any matching log entries as shown below.

Exploring Antrea logs sent to Loki with Grafana (2)

Feel free to explore the log further to see the format and the different labels and fields, as shown below.

Exploring Antrea logs sent to Loki with Grafana (3)

This concludes this post. If you followed it you should now have the required tools up and running to gain observability. This is just the first step though. For any given observability solution, the real effort comes on Day 2, when you need to figure out what your KPIs are according to your particular needs and how to visualize them in the proper way. The next post here will continue diving into dashboards and log analysis. Stay tuned!

Antrea Observability Part 1: Installing Antrea and Test Application

Antrea is an open source Kubernetes networking and security project maintained by VMware that implements a Container Network Interface (CNI) to provide network connectivity and security for pod workloads. It has been designed with flexibility and high performance in mind and is based on Open vSwitch (OVS), a very mature project that stands out precisely for these characteristics.

Antrea creates an SDN-like architecture separating the data plane (which is based on OVS) and the control plane. To do so, Antrea installs an agent component on each of the nodes to program the OVS datapath, whereas a central controller running on the control plane node is in charge of centralized tasks such as computing network policies. The following picture, available at the main Antrea site, depicts how it integrates with a kubernetes cluster.

Antrea high-level architecture

This series of posts will focus on how to provide observability capabilities to this CNI using different tools. This first article will explain how to install Antrea and some related management tooling, and also how to create a testbed environment based on a microservices application and a load-generator tool that will be used as a reference example throughout the next articles. Now that we have a general overview of what Antrea is, let's start by installing it through Helm.

Installing Antrea using Helm

To start with Antrea, the first step is to create a kubernetes cluster. The process of setting up a kubernetes cluster is out of the scope of this post. This series of articles is based on vanilla kubernetes, but feel free to try another distribution such as VMware Tanzu or OpenShift. If you are using kubeadm to build your kubernetes cluster you need to pass the --pod-network-cidr=<CIDR> parameter to enable the NodeIpamController in kubernetes; alternatively you can leverage some built-in Antrea IPAM capabilities.
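
For reference, a hedged sketch of how a kubeadm based cluster matching this series might be bootstrapped; adjust the CIDR to your own addressing plan (the value below matches the cluster-cidr shown in the next verification).

kubeadm init --pod-network-cidr=10.34.0.0/16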

The easiest way to verify that the NodeIPAMController is operating in your existing cluster is by checking if the flag --allocate-node-cidrs is set to true and if there is a cluster-cidr configured. Use the following command for verification.

kubectl cluster-info dump | grep cidr
                            "--allocate-node-cidrs=true",
                            "--cluster-cidr=10.34.0.0/16",

It is important to mention that in kubernetes versions prior to 1.24 the CNI plugin could be managed by the kubelet, and there was a requirement to ensure the kubelet was started with the network-plugin=cni flag enabled; however, in newer versions it is the container runtime, instead of the kubelet, that is in charge of managing the CNI plugin. In my case I am using containerd 1.6.9 as the container runtime and Kubernetes version 1.24.8.
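
You can confirm which container runtime and version each node is using straight from the node status, for example with a custom-columns output like this sketch:

kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion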

To install the Antrea CNI you just need to apply the kubernetes manifest that specifies all the resources needed. The latest manifest is generally available at the official antrea github site here.

Alternatively, VMware also maintains a Helm chart available for this purpose. I will install Antrea using Helm here because it has some advantages. For example, in general when you enable a new featuregate you must manually restart the pods to actually apply the new settings but, when using Helm, the changes are applied and the pods restarted as part of the new release deployment. There is however an important consideration about updating CRDs when using Helm, so make sure to check this note before updating Antrea.

Let's start with the installation process. Assuming you have Helm already installed on the server you use to manage your kubernetes cluster (otherwise complete the installation as per the official doc here), the first step is to add the Antrea repository source as shown below.

helm repo add antrea https://charts.antrea.io
"antrea" has been added to your repositories

As a best practice, update the repo before installing to ensure you work with the latest available version of the chart.

helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "antrea" chart repository

You can also explore the contents of the added repository. Note there is not only a chart for Antrea itself but also other interesting charts that will be explored in coming posts. At the time of writing, the latest Antrea version is 1.10.0.

helm search repo antrea
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                
antrea/antrea           1.10.0          1.10.0          Kubernetes networking based on Open vSwitch
antrea/flow-aggregator  1.10.0          1.10.0          Antrea Flow Aggregator                     
antrea/theia            0.4.0           0.4.0           Antrea Network Flow Visibility    

When using Helm, you can customize the installation of your chart by means of a yaml file that contains all the configurable settings. To figure out what settings are available for a particular chart, a good idea is to dump all the accepted values into a default_values.yaml file using the helm show values command, as you can see here.

helm show values antrea/antrea >> default_values.yaml

Using the default_values.yaml file as a reference, you can change any of the default values to meet your requirements. Note that, as expected for a default configuration file, any configuration not explicitly referenced in the values file will use default settings. We will use a very simplified version of the values file with just a simple setting to specify the desired tag or version for our deployment. We will add extra configuration later when enabling some features. Create a simple values.yaml file with this content.

vi values.yaml
# -- Container image to use for Antrea components.
image:
  tag: "v1.10.0"

And now install the new chart using helm install and the created values.yaml file as input.

helm install antrea -n kube-system antrea/antrea -f values.yaml
NAME: antrea
LAST DEPLOYED: Fri Jan 13 18:52:48 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Antrea CNI has been successfully installed

You are using version 1.10.0

For the Antrea documentation, please visit https://antrea.io

After a couple of minutes you should be able to see the pods in Running state. The Antrea agents are deployed as a DaemonSet and thus run an independent pod on every single node in the cluster. On the other hand, the Antrea controller component is installed as a single-replica Deployment and may run on any node by default.

kubectl get pods -n kube-system -o wide
NAME                                          READY   STATUS    RESTARTS        AGE   IP            NODE                  NOMINATED NODE   READINESS GATES
antrea-agent-52cjk                            2/2     Running   0               86s   10.113.2.17   k8s-worker-03         <none>           <none>
antrea-agent-549ps                            2/2     Running   0               86s   10.113.2.10   k8s-contol-plane-01   <none>           <none>
antrea-agent-5kvhb                            2/2     Running   0               86s   10.113.2.15   k8s-worker-01         <none>           <none>
antrea-agent-6p856                            2/2     Running   0               86s   10.113.2.19   k8s-worker-05         <none>           <none>
antrea-agent-f75b7                            2/2     Running   0               86s   10.113.2.16   k8s-worker-02         <none>           <none>
antrea-agent-m6qtc                            2/2     Running   0               86s   10.113.2.18   k8s-worker-04         <none>           <none>
antrea-agent-zcnd7                            2/2     Running   0               86s   10.113.2.20   k8s-worker-06         <none>           <none>
antrea-controller-746dcd98d4-c6wcd            1/1     Running   0               86s   10.113.2.10   k8s-contol-plane-01   <none>           <none>
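
If you want to confirm the workload types behind those pods, you can list the DaemonSet and the Deployment directly (object names as created by the default chart).

kubectl get daemonset antrea-agent -n kube-system
kubectl get deployment antrea-controller -n kube-system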

Your nodes should now transition to Ready status as a result of the successful CNI installation.

kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
k8s-contol-plane-01   Ready    control-plane   46d   v1.24.8
k8s-worker-01         Ready    <none>          46d   v1.24.8
k8s-worker-02         Ready    <none>          46d   v1.24.8
k8s-worker-03         Ready    <none>          46d   v1.24.8
k8s-worker-04         Ready    <none>          46d   v1.24.8
k8s-worker-05         Ready    <none>          46d   v1.24.8
k8s-worker-06         Ready    <none>          46d   v1.24.8

Jump into any of the nodes using ssh. If the Antrea CNI plugin has been successfully installed you should see a new configuration file in the /etc/cni/net.d directory with content similar to this.

cat /etc/cni/net.d/10-antrea.conflist
{
    "cniVersion":"0.3.0",
    "name": "antrea",
    "plugins": [
        {
            "type": "antrea",
            "ipam": {
                "type": "host-local"
            }
        }
        ,
        {
            "type": "portmap",
            "capabilities": {"portMappings": true}
        }
        ,
        {
            "type": "bandwidth",
            "capabilities": {"bandwidth": true}
        }
    ]
}

As a last verification, spin up a new pod to check that the CNI, which is supposed to provide connectivity for the pod, is actually working as expected. Let's run a simple test app called kuard.

kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard
pod/kuard created

Check the status of the newly created pod and verify that it is running and that an IP has been assigned.

kubectl get pod kuard -o wide
NAME    READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
kuard   1/1     Running   0          172m   10.34.4.26   k8s-worker-05   <none>           <none>

Now forward the port the container is listening on to a local port.

kubectl port-forward kuard 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Next, open your browser at http://localhost:8080 and you should reach the kuard application, which shows some information about the pod that can be used for testing and troubleshooting purposes.
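
If you are working from a terminal-only environment, a quick curl from a second shell against the forwarded port should return an HTTP 200 status code from kuard.

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
200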

So far so good. It seems our CNI plugin is working as expected, so let's install extra tooling to interact with the CNI component.

Installing antctl tool to interact with Antrea

Antrea includes a nice method to interact with the CNI through CLI commands using a tool called antctl. Antctl can be used in controller mode or in agent mode. For controller mode you can run the command externally or from within the controller pod, using regular kubectl exec to issue the commands you need. To use antctl in agent mode you must run the commands from within an antrea-agent pod.
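
For instance, a controller-mode command can be issued from within the controller pod along these lines (the pod name is the one shown in the earlier kubectl get pods output; antctl is bundled in the Antrea images).

kubectl exec -it -n kube-system antrea-controller-746dcd98d4-c6wcd -- antctl version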

To install antctl simply copy and paste the following commands to download a copy of the antctl prebuilt binary for your OS.

TAG=v1.10.0
curl -Lo ./antctl "https://github.com/antrea-io/antrea/releases/download/$TAG/antctl-$(uname)-x86_64" 
chmod +x ./antctl 
sudo mv ./antctl /usr/local/bin/antctl 

Now check the installed antctl version.

antctl version
antctlVersion: v1.10.0
controllerVersion: v1.10.0

When antctl runs out-of-cluster (controller mode only) it will look for your kubeconfig file to gain access to the Antrea components. As an example, you can issue the following command to check the overall status.

antctl get controllerinfo
POD                                            NODE          STATUS  NETWORK-POLICIES ADDRESS-GROUPS APPLIED-TO-GROUPS CONNECTED-AGENTS
kube-system/antrea-controller-746dcd98d4-c6wcd k8s-worker-01 Healthy 0                0              0                 4

We can also display the activated features (aka featuregates) by using antctl get featuregates. To enable certain functionalities, such as exporting flows for analytics, we need to change the values.yaml file and deploy a new Antrea release through Helm as we did before (see the sketch after the output below).

antctl get featuregates
Antrea Agent Feature Gates
FEATUREGATE              STATUS         VERSION   
FlowExporter             Disabled       ALPHA     
NodePortLocal            Enabled        BETA      
AntreaPolicy             Enabled        BETA      
AntreaProxy              Enabled        BETA      
Traceflow                Enabled        BETA      
NetworkPolicyStats       Enabled        BETA      
EndpointSlice            Disabled       ALPHA     
ServiceExternalIP        Disabled       ALPHA     
Egress                   Enabled        BETA      
AntreaIPAM               Disabled       ALPHA     
Multicast                Disabled       ALPHA     
Multicluster             Disabled       ALPHA     

Antrea Controller Feature Gates
FEATUREGATE              STATUS         VERSION   
AntreaPolicy             Enabled        BETA      
Traceflow                Enabled        BETA      
NetworkPolicyStats       Enabled        BETA      
NodeIPAM                 Disabled       ALPHA     
ServiceExternalIP        Disabled       ALPHA     
Egress                   Enabled        BETA      
Multicluster             Disabled       ALPHA    
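
As a reference for later, enabling a feature gate such as FlowExporter through Helm would look roughly like this. The featureGates key is an assumption based on the chart's default values file, so double-check it against the default_values.yaml generated earlier before using it.

vi values.yaml
# -- Container image to use for Antrea components.
image:
  tag: "v1.10.0"
# -- Feature gates to enable on top of the defaults (key assumed from default_values.yaml)
featureGates:
  FlowExporter: true

helm upgrade antrea -n kube-system antrea/antrea -f values.yaml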

On the other hand, we can use the built-in antctl utility inside each of the antrea-agent pods. For example, using kubectl get pods, obtain the name of the antrea-agent pod running on the node where the kuard application we created before has been scheduled. Now open a shell using the following command (use your own antrea-agent pod name here).

kubectl exec -ti -n kube-system antrea-agent-6p856 -- bash
root@k8s-worker-05:/# 

The prompt indicates you are inside the k8s-worker-05 node. Now you can interact with the antrea agent through several commands; as an example, get the information of the kuard pod interface using this command.

root@k8s-worker-05:/# antctl get podinterface kuard
NAMESPACE NAME  INTERFACE-NAME IP         MAC               PORT-UUID                            OF-PORT CONTAINER-ID
default   kuard kuard-652adb   10.34.4.26 46:b5:b9:c5:c6:6c 99c959f1-938a-4ee3-bcda-da05c1dc968a 17      4baee3d2974 

We will explore more advanced antctl options later in this series of posts. Now that the CNI is ready, let's install a full-featured microservices application.

Installing Acme-Fitness App

Throughout this series of posts we will use a reference Kubernetes application to help with some of the examples. We have chosen the popular acme-fitness application created by Google because it is a good representation of a polyglot, microservices-based application that includes the typical services of an e-commerce app such as front-end, catalog, cart and payment. We will use a version maintained by VMware that is available here. The following picture depicts how the microservices that make up the acme-fitness application communicate with each other.

The first step is to clone the repo to get a local copy of the repository using the following command:

git clone https://github.com/vmwarecloudadvocacy/acme_fitness_demo.git
Cloning into 'acme_fitness_demo'...
remote: Enumerating objects: 765, done.
remote: Total 765 (delta 0), reused 0 (delta 0), pack-reused 765
Receiving objects: 100% (765/765), 1.01 MiB | 1.96 MiB/s, done.
Resolving deltas: 100% (464/464), done.

In order to get some of the database microservices running, we need to set up in advance some configuration (basically credentials) that will be injected as secrets into the Kubernetes cluster. Move to the cloned kubernetes-manifest folder and create a new file containing the credentials required by some of the database microservices. Remember the password must be Base64 encoded. In my case I am using passw0rd, so the Base64-encoded form is cGFzc3cwcmQK as shown below.
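
If you want to generate the encoded value yourself, a plain echo piped to base64 reproduces it. Note that echo appends a trailing newline, which is why the string ends in K; echo -n would yield cGFzc3cwcmQ= instead.

echo "passw0rd" | base64
cGFzc3cwcmQK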

vi acme-fitness-secrets.yaml
# SECRETS FOR ACME-FITNESS (Plain text password is "passw0rd" in this example)
apiVersion: v1
data:
  password: cGFzc3cwcmQK
kind: Secret
metadata:
  name: cart-redis-pass
type: Opaque
---
apiVersion: v1
data:
  password: cGFzc3cwcmQK
kind: Secret
metadata:
  name: catalog-mongo-pass
type: Opaque
---
apiVersion: v1
data:
  password: cGFzc3cwcmQK
kind: Secret
metadata:
  name: order-postgres-pass
type: Opaque
---
apiVersion: v1
data:
  password: cGFzc3cwcmQK
kind: Secret
metadata:
  name: users-mongo-pass
type: Opaque
---
apiVersion: v1
data:
  password: cGFzc3cwcmQK
kind: Secret
metadata:
  name: users-redis-pass
type: Opaque

Now open the manifest named point-of-sales.yaml and set the value of the FRONTEND_HOST variable as per your particular setup. In this deployment we will install the acme-fitness application in a namespace called acme-fitness, so the FQDN would be frontend.acme-fitness.svc.cluster.local. Adjust accordingly if you are using a domain other than cluster.local.

vi point-of-sales-total.yaml
      labels:
        app: acmefit
        service: pos
    spec:
      containers:
      - image: gcr.io/vmwarecloudadvocacy/acmeshop-pos:v0.1.0-beta
        name: pos
        env:
        - name: HTTP_PORT
          value: '7777'
        - name: DATASTORE
          value: 'REMOTE'
        - name: FRONTEND_HOST
          value: 'frontend.acme-fitness.svc.cluster.local'
        ports:
        - containerPort: 7777
          name: pos

Now create the acme-fitness namespace, which is where we will place all the microservices and related objects.

kubectl create ns acme-fitness

Now apply all the manifests in the current kubernetes-manifest directory, using the namespace flag as shown here to ensure all objects are placed in this particular namespace. Make sure the acme-fitness-secrets.yaml manifest we created in the previous step is also placed in the same directory, and check that the output confirms the new secrets are being created.

kubectl apply -f . -n acme-fitness
service/cart-redis created
deployment.apps/cart-redis created
service/cart created
deployment.apps/cart created
configmap/catalog-initdb-config created
service/catalog-mongo created
deployment.apps/catalog-mongo created
service/catalog created
deployment.apps/catalog created
service/frontend created
deployment.apps/frontend created
service/order-postgres created
deployment.apps/order-postgres created
service/order created
deployment.apps/order created
service/payment created
deployment.apps/payment created
service/pos created
deployment.apps/pos created
secret/cart-redis-pass created
secret/catalog-mongo-pass created
secret/order-postgres-pass created
secret/users-mongo-pass created
secret/users-redis-pass created
configmap/users-initdb-config created
service/users-mongo created
deployment.apps/users-mongo created
service/users-redis created
deployment.apps/users-redis created
service/users created
deployment.apps/users created

Wait a couple of minutes to allow the workers to pull the container images and you should see all the pods running. If you find any pod showing a non-running status, simply delete it and wait for Kubernetes to reconcile the state until you see everything up and running (see the example after the output below).

kubectl get pods -n acme-fitness
NAME                              READY   STATUS      RESTARTS   AGE
cart-76bb4c586-blc2m              1/1     Running     0          2m45s
cart-redis-5cc665f5bd-zrjwh       1/1     Running     0          3m57s
catalog-958b9dc7c-qdzbt           1/1     Running     0          2m27s
catalog-mongo-b5d4bfd54-c2rh5     1/1     Running     0          2m13s
frontend-6cd56445-8wwws           1/1     Running     0          3m57s
order-584d9c6b44-fwcqv            1/1     Running     0          3m56s
order-postgres-8499dcf8d6-9dq8w   1/1     Running     0          3m57s
payment-5ffb9c8d65-g8qb2          1/1     Running     0          3m56s
pos-5995956dcf-xxmqr              1/1     Running     0          108s
users-67f9f4fb85-bntgv            1/1     Running     0          3m55s
users-mongo-6f57dbb4f-kmcl7       1/1     Running     0          45s
users-mongo-6f57dbb4f-vwz64       0/1     Completed   0          3m56s
users-redis-77968854f4-zxqgc      1/1     Running     0          3m56s
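
For example, to clear the users-mongo pod showing Completed in the output above and let its Deployment reconcile the desired state:

kubectl delete pod users-mongo-6f57dbb4f-vwz64 -n acme-fitness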

Now check the created services. The entry point of the acme-fitness application is the service named frontend, which points to the frontend microservice. This service is of type LoadBalancer, so if you have a solution in your cluster to expose LoadBalancer services you should receive an external IP that will be used to reach the application externally. If that is not your case, you can always access it through the corresponding NodePort of the service. In this particular example the service is exposed on the dynamically allocated port 31967 within the NodePort range, as shown below.

kubectl get svc -n acme-fitness
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
cart             ClusterIP      10.96.58.243     <none>         5000/TCP         4m38s
cart-redis       ClusterIP      10.108.33.107    <none>         6379/TCP         4m39s
catalog          ClusterIP      10.106.250.32    <none>         8082/TCP         4m38s
catalog-mongo    ClusterIP      10.104.233.60    <none>         27017/TCP        4m38s
frontend         LoadBalancer   10.111.94.177    10.113.3.110   80:31967/TCP     4m38s
order            ClusterIP      10.108.161.14    <none>         6000/TCP         4m37s
order-postgres   ClusterIP      10.99.98.123     <none>         5432/TCP         4m38s
payment          ClusterIP      10.99.108.248    <none>         9000/TCP         4m37s
pos              NodePort       10.106.130.147   <none>         7777:30431/TCP   4m37s
users            ClusterIP      10.106.170.197   <none>         8083/TCP         4m36s
users-mongo      ClusterIP      10.105.18.58     <none>         27017/TCP        4m37s
users-redis      ClusterIP      10.103.94.131    <none>         6379/TCP         4m37s
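
As a quick connectivity check, you could curl the frontend either through the external IP or through any node address plus the NodePort shown above; the addresses below are the ones from my lab, so adjust them to your environment. Either request should return an HTTP status code.

curl -s -o /dev/null -w "%{http_code}\n" http://10.113.3.110
curl -s -o /dev/null -w "%{http_code}\n" http://10.113.2.15:31967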

I am using the AVI Kubernetes Operator in my setup to watch for LoadBalancer type services, so we should be able to access the site using the IP address or the allocated name, which according to my particular settings will be http://frontend.acme-fitness.avi.sdefinitive.net. If you are not familiar with the AVI Ingress solution and how to deploy it, you can check a series of related posts here that explain step by step how to integrate this powerful enterprise solution with your cluster.

Acme-Fitness application Home Page

Now that the application is ready let’s deploy a traffic generator to inject some traffic.

Generating traffic using Locust testing tool

A simple but powerful way to generate synthetic traffic is the Locust tool, written in Python. An interesting advantage of Locust is its distributed architecture, which allows running multiple tests on multiple workers. It also includes a web interface that lets you start and parameterize the load test. Locust allows you to define complex test scenarios described through a locustfile.py. Thankfully, the acme-fitness repository includes here a load-generator folder with instructions and a ready-to-use locustfile.py to fully simulate traffic and users according to the architecture of the acme-fitness application.

We can install the Locust application in Kubernetes easily. The first step is to create the namespace in which the load-gen application will be installed.

kubectl create ns load-gen

As usual, if we want to inject configuration into Kubernetes we can use a configmap object containing the required settings (in this case in the form of a locustfile.py file). Be careful, because loading the .py file into a configmap using kubectl create --from-file might cause some formatting errors. I have created a functional configmap in yaml format that you can apply directly using the following command.

kubectl apply -n load-gen -f https://raw.githubusercontent.com/jhasensio/antrea/main/LOCUST/acme-locustfile-cm.yaml
configmap/locust-configmap created
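
For reference, the imperative equivalent would be something along these lines, assuming the locustfile.py from the load-generator folder is in your current directory; as noted above, double-check the resulting formatting and the expected key name if you go this route.

kubectl create configmap locust-configmap -n load-gen --from-file=locustfile.py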

Once the configmap is created, you can apply the following manifest, which includes the services and deployments to install the Locust load-testing tool in a distributed architecture, taking as input the locustfile.py coded in the configmap named locust-configmap. Use the following command to deploy Locust.

kubectl apply -n load-gen -f https://raw.githubusercontent.com/jhasensio/antrea/main/LOCUST/locust.yaml
deployment.apps/locust-master created
deployment.apps/locust-worker created
service/locust-master created
service/locust-ui created

You can inspect the created kubernetes objects in the load-gen namespace. Note how there is a LoadBalancer service that will be used to reach the Locust application.

kubectl get all -n load-gen
NAME                                 READY   STATUS    RESTARTS   AGE
pod/locust-master-67bdb5dbd4-ngtw7   1/1     Running   0          81s
pod/locust-worker-6c5f87b5c8-pgzng   1/1     Running   0          81s
pod/locust-worker-6c5f87b5c8-w5m6h   1/1     Running   0          81s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
service/locust-master   ClusterIP      10.110.189.42   <none>         5557/TCP,5558/TCP,8089/TCP   81s
service/locust-ui       LoadBalancer   10.106.18.26    10.113.3.111   80:32276/TCP                 81s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/locust-master   1/1     1            1           81s
deployment.apps/locust-worker   2/2     2            2           81s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/locust-master-67bdb5dbd4   1         1         1       81s
replicaset.apps/locust-worker-6c5f87b5c8   2         2         2       81s

Open a browser to access the Locust user interface using the allocated IP at http://10.113.3.111, or the FQDN, which in my case corresponds to http://locust-ui.load-gen.avi.sdefinitive.net, and you will reach the following web site. If you don't have any load balancer solution installed, just use the allocated NodePort (32276 in this case) on any of the IP addresses of your nodes.

Locust UI

Now populate the Host text box to start the test against the acme-fitness application. You can use the internal URL, reachable at http://frontend.acme-fitness.svc.cluster.local, or the external name http://frontend.acme-fitness.avi.sdefinitive.net. Launch the test and observe how the Requests Per Second counter increases progressively.

If you need extra load, simply scale the locust-worker deployment and you will get more pods acting as workers available to generate traffic, as shown below.
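
For example, doubling the number of workers is a single command (the replica count here is just an illustrative value):

kubectl scale deployment locust-worker -n load-gen --replicas=4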

We have the cluster up and running and the application and load generator ready. It's time to start the journey of adding observability using extra tools. Be sure to check the next part of this series. Stay tuned!

Antrea Observability Part 0: Quick Start Guide

In this series of posts I have tried to create a comprehensive guide with a good level of detail around the installation of Antrea as CNI, along with the complementary mainstream tools necessary for the visualization of metrics and logs. The guide is composed of the following modules:

If you are one of those who like the hard way and want to understand how the different pieces work together, or if you are thinking of deploying in a more demanding environment that requires scalability and performance, then I highly recommend taking the long path and going through the post series, in which you will find some cool stuff to gain understanding of this architecture.

Alternatively, if you want to take the short path, just continue reading the following section to deploy Antrea and the related observability stack using a very simple architecture that is well suited for demo and non-production environments.

Quick Setup Guide

A basic requirement before moving forward is to have a Kubernetes cluster up and running. If you are reading this guide you are probably already familiar with how to set up a Kubernetes cluster, so I won't spend time describing the procedure. Once the cluster is ready and you can interact with it via the kubectl command, you are ready to go. The first step is to get the Antrea CNI installed with some particular configuration enabled. The installation of Antrea is very straightforward. As documented on the Antrea website, you just need to apply the yaml manifest as shown below and you will end up with your CNI deployed and a fully functional cluster prepared to run pod workloads.

kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
customresourcedefinition.apiextensions.k8s.io/antreaagentinfos.crd.antrea.io created
customresourcedefinition.apiextensions.k8s.io/antreacontrollerinfos.crd.antrea.io created
customresourcedefinition.apiextensions.k8s.io/clustergroups.crd.antrea.io created
<skipped>

mutatingwebhookconfiguration.admissionregistration.k8s.io/crdmutator.antrea.io created
validatingwebhookconfiguration.admissionregistration.k8s.io/crdvalidator.antrea.io created

Once Antrea is deployed, edit the configmap that contains the configuration settings in order to enable some required features related to observability.

kubectl edit configmaps -n kube-system antrea-config

Scroll down, locate the FlowExporter setting, and ensure it is set to true and the line is uncommented so it takes effect in the configuration.

# Enable flowexporter which exports polled conntrack connections as IPFIX flow records from each
# agent to a configured collector.
      FlowExporter: true

Also locate the enablePrometheusMetrics keyword and ensure it is enabled and uncommented as well. This setting should be enabled by default, but double-check just in case.

# Enable metrics exposure via Prometheus. Initializes Prometheus metrics listener.
    enablePrometheusMetrics: true

The last step is to restart the antrea-agent DaemonSet to ensure the new settings are applied.

kubectl rollout restart ds/antrea-agent -n kube-system
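
Optionally, you can wait until the restarted agents are back before continuing:

kubectl rollout status daemonset/antrea-agent -n kube-system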

Now we are ready to go. Clone my git repo here, which contains everything required to proceed with the observability stack installation.

git clone https://github.com/jhasensio/antrea.git
Cloning into 'antrea'...
remote: Enumerating objects: 155, done.
remote: Counting objects: 100% (155/155), done.
remote: Compressing objects: 100% (86/86), done.
remote: Total 155 (delta 75), reused 145 (delta 65), pack-reused 0
Receiving objects: 100% (155/155), 107.33 KiB | 1.18 MiB/s, done.
Resolving deltas: 100% (75/75), done

Create the observability namespace that will be used to place all the needed kubernetes objects.

kubectl create ns observability
namespace/observability created

Navigate to the antrea/antrea-observability-quick-start folder and install all the required objects by recursively applying the yaml files in the folder tree using the mentioned namespace. As you can tell from the output below, the manifests will deploy a bunch of tools including fluent-bit, loki and grafana with precreated dashboards and datasources.

kubectl apply -f tools/ --recursive -n observability
clusterrole.rbac.authorization.k8s.io/fluent-bit created
clusterrolebinding.rbac.authorization.k8s.io/fluent-bit created
configmap/fluent-bit created
daemonset.apps/fluent-bit created
service/fluent-bit created
serviceaccount/fluent-bit created
clusterrole.rbac.authorization.k8s.io/grafana-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/grafana-clusterrolebinding created
configmap/grafana-config-dashboards created
configmap/grafana created
configmap/1--agent-process-metrics-dashboard-cm created
configmap/2--agent-ovs-metrics-and-logs-dashboard-cm created
configmap/3--agent-conntrack-metrics-dashboard-cm created
configmap/4--agent-network-policy-metrics-and-logs-dashboard-cm created
configmap/5--antrea-agent-logs-dashboard-cm created
configmap/6--controller-metrics-and-logs-dashboard-cm created
configmap/loki-datasource-cm created
configmap/prometheus-datasource-cm created
deployment.apps/grafana created
role.rbac.authorization.k8s.io/grafana created
rolebinding.rbac.authorization.k8s.io/grafana created
secret/grafana created
service/grafana created
serviceaccount/grafana created
configmap/loki created
configmap/loki-runtime created
service/loki-memberlist created
serviceaccount/loki created
service/loki-headless created
service/loki created
statefulset.apps/loki created
clusterrole.rbac.authorization.k8s.io/prometheus-antrea created
serviceaccount/prometheus created
secret/prometheus-service-account-token created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.apps/prometheus-deployment created
service/prometheus-service created

The Grafana UI is exposed through a LoadBalancer service-type object. If you have a load balancer solution deployed in your cluster, use the allocated external IP address; otherwise use the dynamically allocated NodePort if you are able to reach the Kubernetes nodes directly, or ultimately use port-forwarding from your administrative console as shown below.

kubectl port-forward -n observability services/grafana 8888:80
Forwarding from 127.0.0.1:8888 -> 3000

Open your browser at http://localhost:8888 and you will reach the Grafana login page as depicted in the following picture.

Grafana Login Page

Enter the default username admin. The password is stored in a secret object (a super-strong one, for demo purposes only) that you can easily decode.

kubectl get secrets -n observability grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
passw0rd

Now you should reach the home page of the Grafana application, looking similar to what you see below. By default Grafana uses the dark theme, but you can easily change it: just click on the settings icon in the left column > Preferences and select Light as the UI Theme. You will find some dashboard samples with both themes at the end of the post.

Grafana home page

Now, click on the General link at the top left corner and you should be able to reach the six precreated dashboards in the folder.

Grafana Dashboard browsing

Unless you have a production cluster running hundreds of workloads, you will find some of the panels empty because there are no metrics or logs to display yet. Just to add some fun to the dashboards, I have created a script to simulate a good level of activity in the cluster. The script is also included in the git repo, so just execute it and wait a few minutes.

bash simulate_activity.sh
 This simulator will create some activity in your cluster using current context
 It will spin up some client and a deployment-backed ClusterIP service based on apache application
 After that it will create random AntreaNetworkPolicies and AntreaClusterNetworkPolicies and it will generate some traffic with random patterns
 It will also randomly scale in and out the deployment during execution. This is useful for demo to see all metrics and logs showing up in the visualization tool

For more information go to https://sdefinitive.net

   *** Starting activity simulation. Press CTRL+C to stop job ***   
....^CCTRL+C Pressed...

Cleaning temporary objects, ACNPs, ANPs, Deployments, Services and Pods
......
Cleaning done! Thanks for using it!

After some time, you can enjoy the colorful dashboards and logs specially designed to surface key indicators from the Antrea deployment. Refer to Part 3 of this post series for extra information and use cases. Find below some supercharged dashboards and panels as a sample of what you will get.

Dashboard 1: Agent Process Metrics
Dashboard 2: Agent OVS Metrics and Logs
Dashboard 3: Agent Conntrack Metrics
Dashboard 4: Agent Network Policy Metric and Logs (1/2) (Light theme)
Dashboard 4: Agent Network Policy Metric and Logs (2/2) (Light theme)
Dashboard 5: Antrea Agent Logs
Dashboard 6: Controller Metrics and Logs (1/2)

Keep reading the next post in this series to get more information about dashboard usage and how to set up all the tools with data persistence and a highly available, distributed architecture.
