diff --git a/modules/network-observability-con_filter-network-flows-at-ingestion.adoc b/modules/network-observability-con_filter-network-flows-at-ingestion.adoc new file mode 100644 index 000000000000..1176ac8e24cd --- /dev/null +++ b/modules/network-observability-con_filter-network-flows-at-ingestion.adoc @@ -0,0 +1,83 @@ +// Module included in the following assemblies: + +// * networking/network_observability/configuring-operators.adoc + +:_mod-docs-content-type: CONCEPT +[id="network-observability-filter-network-flows-at-ingestion_{context}"] += Filter network flows at ingestion + +You can create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the Network Observability components. + +You can configure two kinds of filters: + +* eBPF agent filters +* Flowlogs-pipeline filters + +[id="ebpf-agent-filters_{context}"] +== eBPF agent filters + +eBPF agent filters maximize performance because they take effect at the earliest stage of the network flows collection process. + +To configure eBPF agent filters with the Network Observability Operator, see "Filtering eBPF flow data using multiple rules". + +[id="flowlogs-pipeline-filters_{context}"] +== Flowlogs-pipeline filters + +Flowlogs-pipeline filters provide greater control over traffic selection because they take effect later in the network flows collection process. They are primarily used to reduce the volume of stored data. 
+ +Flowlogs-pipeline filters use a simple query language to filter network flows, as shown in the following example: + +[source,terminal] +---- +(srcnamespace="netobserv" OR (srcnamespace="ingress" AND dstnamespace="netobserv")) AND srckind!="service" +---- + +The query language uses the following syntax: + +.Query language syntax +[cols="1,3", options="header"] +|=== +| Category +| Operators + +| Logical boolean operators (not case-sensitive) +| `and`, `or` + +| Comparison operators +| `=` (equals), + +`!=` (not equals), + +`=~` (matches regexp), + +`!~` (not matches regexp), + +`<` (less than), + +`\<=` (less than or equal to), + +`>` (greater than), + +`>=` (greater than or equal to) + +| Unary operations +| `with(field)` (field is present), + +`without(field)` (field is absent) + +| Parenthesis-based priority +| `(`, `)` (group expressions) +|=== + +You can configure flowlogs-pipeline filters in the `spec.processor.filters` section of the `FlowCollector` resource. For example: + +.Example YAML Flowlogs-pipeline filter +[source,yaml] +---- +apiVersion: flows.netobserv.io/v1beta2 +kind: FlowCollector +metadata: + name: cluster +spec: + namespace: netobserv + processor: + filters: + - query: | + (SrcK8S_Namespace="netobserv" OR (SrcK8S_Namespace="openshift-ingress" AND DstK8S_Namespace="netobserv")) + outputTarget: Loki <1> + sampling: 10 <2> +---- +<1> Sends matching flows to a specific output, such as Loki, Prometheus, or an external system. When omitted, sends to all configured outputs. +<2> Optional. Applies a sampling ratio to limit the number of matching flows to be stored or exported. For example, `sampling: 10` means 1/10 of the flows are kept. 
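As a further illustration of the operators in the syntax table, a hypothetical query could combine a unary check, a numeric comparison, and a regular expression match. The field names `DnsLatencyMs` and `SrcK8S_Namespace` are examples drawn from the network flows format and are assumptions for this sketch, not part of the original example:

[source,terminal]
----
with(DnsLatencyMs) AND DnsLatencyMs>100 AND SrcK8S_Namespace=~"openshift-.*"
----

This sketch would keep only flows that carry a DNS latency field greater than 100 ms and that originate from a namespace starting with `openshift-`.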
+ + diff --git a/modules/network-observability-con_user-defined-networks.adoc b/modules/network-observability-con_user-defined-networks.adoc new file mode 100644 index 000000000000..90606244f609 --- /dev/null +++ b/modules/network-observability-con_user-defined-networks.adoc @@ -0,0 +1,13 @@ +// Module included in the following assemblies: +// +// * network_observability/observing-network-traffic.adoc + +:_mod-docs-content-type: CONCEPT +[id="network-observability-user-defined-networks_{context}"] += User-defined networks + +User-defined networks (UDN) improve the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2 and Layer 3 network segments, where all these segments are isolated by default. These segments act as primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. + +UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance. + +When the `UDNMapping` feature is enabled with Network Observability, the *Traffic* flow table has a *UDN labels* column. You can filter on *Source Network Name* and *Destination Network Name*. \ No newline at end of file diff --git a/modules/network-observability-ebpf-manager-operator.adoc b/modules/network-observability-ebpf-manager-operator.adoc new file mode 100644 index 000000000000..c88cd002d84a --- /dev/null +++ b/modules/network-observability-ebpf-manager-operator.adoc @@ -0,0 +1,38 @@ +// Module included in the following assemblies: +// +// * network_observability/observing-network-traffic.adoc + +:_mod-docs-content-type: PROCEDURE +[id="network-observability-ebpf-manager-operator_{context}"] += Working with the eBPF Manager Operator + +The eBPF Manager Operator reduces the attack surface and ensures compliance, security, and conflict prevention by managing all eBPF programs. 
Network observability can use the eBPF Manager Operator to load hooks. As a result, you no longer need to provide the eBPF Agent with privileged mode or additional Linux capabilities such as `CAP_BPF` and `CAP_PERFMON`. The eBPF Manager Operator with network observability is only supported on the AMD64 architecture. + +:FeatureName: eBPF Manager Operator with network observability +include::snippets/technology-preview.adoc[] + +.Procedure +. In the web console, navigate to *Operators* -> *OperatorHub*. +. Install *eBPF Manager*. +. Check *Workloads* -> *Pods* in the `bpfman` namespace to make sure they are all up and running. +. Configure the `FlowCollector` custom resource to use the eBPF Manager Operator: ++ +.Example `FlowCollector` configuration +[source,yaml] +---- +apiVersion: flows.netobserv.io/v1beta2 +kind: FlowCollector +metadata: + name: cluster +spec: + agent: + ebpf: + features: + - EbpfManager +---- + +.Verification +. In the web console, navigate to *Operators* -> *Installed Operators*. +. Click *eBPF Manager Operator* -> *All instances* tab. ++ +For each node, verify that a `BpfApplication` named `netobserv` and a pair of `BpfProgram` objects, one for Traffic Control (TCx) ingress and another for TCx egress, exist. If you enable other eBPF Agent features, you might have more objects. \ No newline at end of file diff --git a/modules/network-observability-ebpf-rule-flow-filter.adoc index b20c6a803514..8c217556f4a1 100644 --- a/modules/network-observability-ebpf-rule-flow-filter.adoc +++ b/modules/network-observability-ebpf-rule-flow-filter.adoc @@ -5,13 +5,15 @@ :_mod-docs-content-type: CONCEPT [id="network-observability-ebpf-flow-rule-filter_{context}"] = eBPF flow rule filter -You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be recorded. 
Then only the packets that match the filter are cached and the rest are not cached. +You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be captured. Then only the packets that match the filter are captured and the rest are dropped. + +You can apply multiple filter rules. [id="ingress-and-egress-traffic-filtering_{context}"] == Ingress and egress traffic filtering -CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. +Classless Inter-Domain Routing (CIDR) notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. -After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the `peerIP` to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. +After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the `peerIP` to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. 
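As a minimal sketch of the matching order described above, a single filter rule might combine a CIDR with the `peerIP` parameter. This is illustrative only: the addresses are placeholders, and it assumes the `peerIP` field is accepted alongside `cidr` in a `flowFilter` rule:

[source,yaml]
----
flowFilter:
  enable: true
  rules:
    - action: Accept          # cache matching flows in the eBPF flow table
      cidr: 10.0.62.0/24      # checked against the source IP first, then the destination IP
      peerIP: 10.0.62.200     # optionally pinpoints one specific endpoint
----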
[id="dashboard-and-metrics-integrations_{context}"] == Dashboard and metrics integrations diff --git a/modules/network-observability-filtering-ebpf-rule.adoc index 85e41e34b28e..0df4bd7dcc3f 100644 --- a/modules/network-observability-filtering-ebpf-rule.adoc +++ b/modules/network-observability-filtering-ebpf-rule.adoc @@ -1,22 +1,28 @@ // Module included in the following assemblies: -// // network_observability/observing-network-traffic.adoc :_mod-docs-content-type: PROCEDURE [id="network-observability-filtering-ebpf-rule_{context}"] -= Filtering eBPF flow data using a global rule -You can configure the `FlowCollector` to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. += Filtering eBPF flow data using multiple rules +You can configure the `FlowCollector` custom resource to filter eBPF flows using multiple rules to control the flow of packets cached in the eBPF flow table. + +[IMPORTANT] +==== +* You cannot use duplicate Classless Inter-Domain Routing (CIDR) ranges in filter rules. +* When an IP address matches multiple filter rules, the rule with the most specific CIDR prefix (longest prefix) takes precedence. +==== .Procedure . In the web console, navigate to *Operators* -> *Installed Operators*. . Under the *Provided APIs* heading for *Network Observability*, select *Flow Collector*. . Select *cluster*, then select the *YAML* tab. . Configure the `FlowCollector` custom resource, similar to the following sample configurations: -+ -[%collapsible] -.Filter Kubernetes service traffic to a specific Pod IP endpoint -==== + +.Example YAML to sample all North-South traffic and 1:50 East-West traffic + +By default, all other flows are rejected. 
+ [source, yaml] ---- apiVersion: flows.netobserv.io/v1beta2 @@ -30,22 +36,29 @@ spec: type: eBPF ebpf: flowFilter: - action: Accept <1> - cidr: 172.210.150.1/24 <2> - protocol: SCTP - direction: Ingress - destPortRange: 80-100 - peerIP: 10.10.10.10 - enable: true <3> + enable: true <1> + rules: + - action: Accept <2> + cidr: 0.0.0.0/0 <3> + sampling: 1 <4> + - action: Accept + cidr: 10.128.0.0/14 + peerCIDR: 10.128.0.0/14 <5> + - action: Accept + cidr: 172.30.0.0/16 + peerCIDR: 10.128.0.0/14 + sampling: 50 ---- -<1> The required `action` parameter describes the action that is taken for the flow filter rule. Possible values are `Accept` or `Reject`. -<2> The required `cidr` parameter provides the IP address and CIDR mask for the flow filter rule and supports IPv4 and IPv6 address formats. If you want to match against any IP address, you can use `0.0.0.0/0` for IPv4 or `::/0` for IPv6. -<3> You must set `spec.agent.ebpf.flowFilter.enable` to `true` to enable this feature. -==== -+ -[%collapsible] -.See flows to any addresses outside the cluster -==== +<1> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`. +<2> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`. +<3> To define the IP address and CIDR mask for the flow filter rule, set the required `cidr` parameter. This parameter supports both IPv4 and IPv6 address formats. To match any IP address, use `0.0.0.0/0` for IPv4 or `::/0` for IPv6. +<4> To define the sampling ratio for matched flows and override the global sampling setting `spec.agent.ebpf.sampling`, set the `sampling` parameter. +<5> To filter flows by peer IP CIDR, set the `peerCIDR` parameter. + +.Example YAML to filter flows with packet drops + +By default, all other flows are rejected. 
+ [source, yaml] ---- apiVersion: flows.netobserv.io/v1beta2 @@ -57,18 +70,19 @@ spec: deploymentModel: Direct agent: type: eBPF - ebpf: + ebpf: + privileged: true <1> + features: + - PacketDrop <2> flowFilter: - action: Accept <1> - cidr: 0.0.0.0/0 <2> - protocol: TCP - direction: Egress - sourcePort: 100 - peerIP: 192.168.127.12 <3> - enable: true <4> ----- -<1> You can `Accept` flows based on the criteria in the `flowFilter` specification. -<2> The `cidr` value of `0.0.0.0/0` matches against any IP address. -<3> See flows after `peerIP` is configured with `192.168.127.12`. -<4> You must set `spec.agent.ebpf.flowFilter.enable` to `true` to enable the feature. -==== \ No newline at end of file + enable: true <3> + rules: + - action: Accept <4> + cidr: 172.30.0.0/16 + pktDrops: true <5> +---- +<1> To enable packet drops, set `spec.agent.ebpf.privileged` to `true`. +<2> To report packet drops for each network flow, add the `PacketDrop` value to the `spec.agent.ebpf.features` list. +<3> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`. +<4> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`. +<5> To filter flows containing drops, set `pktDrops` to `true`. \ No newline at end of file diff --git a/modules/network-observability-flowcollector-api-specifications.adoc b/modules/network-observability-flowcollector-api-specifications.adoc index 5764bb9aa881..d1af330342e2 100644 --- a/modules/network-observability-flowcollector-api-specifications.adoc +++ b/modules/network-observability-flowcollector-api-specifications.adoc @@ -154,7 +154,7 @@ is set to `eBPF`. | `type` | `string` -| `type` [deprecated *] selects the flows tracing agent. Previously, this field allowed to select between `eBPF` or `IPFIX`. +| `type` [deprecated (*)] selects the flows tracing agent. Previously, this field allowed to select between `eBPF` or `IPFIX`. 
Only `eBPF` is allowed now, so this field is deprecated and is planned for removal in a future version of the API. |=== @@ -180,7 +180,8 @@ Type:: | `object` | `advanced` allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. You can also +override the default Linux capabilities from there. | `cacheActiveTimeout` | `string` @@ -205,25 +206,28 @@ Otherwise it is matched as a case-sensitive string. | List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: + - `PacketDrop`: Enable the packets drop flows logging feature. This feature requires mounting -the kernel debug filesystem, so the eBPF agent pods must run as privileged. -If the `spec.agent.ebpf.privileged` parameter is not set, an error is reported. + +the kernel debug filesystem, so the eBPF agent pods must run as privileged via `spec.agent.ebpf.privileged`. + - `DNSTracking`: Enable the DNS tracking feature. + - `FlowRTT`: Enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic. + - `NetworkEvents`: Enable the network events monitoring feature, such as correlating flows and network policies. -This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. -It requires using the OVN-Kubernetes network plugin with the Observability feature. + -IMPORTANT: This feature is available as a Technology Preview. +This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged via `spec.agent.ebpf.privileged`. +It requires using the OVN-Kubernetes network plugin with the Observability feature. +IMPORTANT: This feature is available as a Technology Preview. 
+ - `PacketTranslation`: Enable enriching flows with packet translation information, such as Service NAT. + -- `EbpfManager`: Unsupported * . Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. + +- `EbpfManager`: [Unsupported (*)]. Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. + + +- `UDNMapping`: Enable interfaces mapping to User Defined Networks (UDN). + + +This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged via `spec.agent.ebpf.privileged`. +It requires using the OVN-Kubernetes network plugin with the Observability feature. + + +- `IPSec`, to track flows between nodes with IPsec encryption. + -- `UDNMapping`: Unsupported *. Enable interfaces mapping to User Defined Networks (UDN). + -This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. -It requires using the OVN-Kubernetes network plugin with the Observability feature. | `flowFilter` | `object` @@ -255,7 +259,7 @@ Otherwise it is matched as a case-sensitive string. | `privileged` | `boolean` | Privileged mode for the eBPF Agent container. When ignored or set to `false`, the operator sets -granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container. +granular capabilities (BPF, PERFMON, NET_ADMIN) to the container. If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF is in use, then you can turn on this mode for more global privileges. Some agent features require the privileged mode, such as packet drops tracking (see `features`) and SR-IOV support. @@ -267,7 +271,7 @@ For more information, see https://kubernetes.io/docs/concepts/configuration/mana | `sampling` | `integer` -| Sampling rate of the flow reporter. 
100 means one flow on 100 is sent. 0 or 1 means all flows are sampled. +| Sampling ratio of the eBPF probe. 100 means one packet on 100 is sent. 0 or 1 means all packets are sampled. |=== == .spec.agent.ebpf.advanced @@ -276,7 +280,8 @@ Description:: -- `advanced` allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. You can also +override the default Linux capabilities from there. -- Type:: @@ -289,6 +294,10 @@ Type:: |=== | Property | Type | Description +| `capOverride` +| `array (string)` +| Linux capabilities override, when not running as privileged. Default capabilities are BPF, PERFMON and NET_ADMIN. + | `env` | `object (string)` | `env` allows passing custom environment variables to underlying components. Useful for passing @@ -444,11 +453,10 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: | `rules` defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules. -Unsupported *. | `sampling` | `integer` -| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`. +| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`. | `sourcePorts` | `integer-or-string` @@ -470,7 +478,6 @@ Description:: `rules` defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. 
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules. -Unsupported *. -- Type:: @@ -551,7 +558,7 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: | `sampling` | `integer` -| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`. +| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`. | `sourcePorts` | `integer-or-string` @@ -663,7 +670,7 @@ If set to `true`, the `providedCaFile` field is ignored. | Select the type of TLS configuration: + - `Disabled` (default) to not configure TLS for the endpoint. -- `Provided` to manually provide cert file and a key file. Unsupported *. +- `Provided` to manually provide cert file and a key file. [Unsupported (*)]. - `Auto` to use {product-title} auto generated certificate using annotations. |=== @@ -793,7 +800,7 @@ Type:: | `object` | `advanced` allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. | `autoscaler` | `object` @@ -835,7 +842,7 @@ Description:: -- `advanced` allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. -- Type:: @@ -1167,7 +1174,7 @@ Required:: | `sasl` | `object` -| SASL authentication configuration. Unsupported *. +| SASL authentication configuration. [Unsupported (*)]. 
| `tls` | `object` @@ -1182,7 +1189,7 @@ Required:: Description:: + -- -SASL authentication configuration. Unsupported *. +SASL authentication configuration. [Unsupported (*)]. -- Type:: @@ -1480,15 +1487,15 @@ Type:: | `input` | `string` -| +| | `multiplier` | `integer` -| +| | `output` | `string` -| +| |=== == .spec.exporters[].openTelemetry.logs @@ -1678,7 +1685,7 @@ Required:: | `sasl` | `object` -| SASL authentication configuration. Unsupported *. +| SASL authentication configuration. [Unsupported (*)]. | `tls` | `object` @@ -1693,7 +1700,7 @@ Required:: Description:: + -- -SASL authentication configuration. Unsupported *. +SASL authentication configuration. [Unsupported (*)]. -- Type:: @@ -2079,7 +2086,7 @@ Type:: - `Forward` forwards the user token for authorization. + -- `Host` [deprecated *] - uses the local pod service account to authenticate to Loki. + +- `Host` [deprecated (*)] - uses the local pod service account to authenticate to Loki. + When using the Loki Operator, this must be set to `Forward`. @@ -2695,7 +2702,7 @@ This feature requires the "topology.kubernetes.io/zone" label to be set on nodes | `object` | `advanced` allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. | `clusterName` | `string` @@ -2704,14 +2711,12 @@ such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. | `deduper` | `object` | `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage. -Unsupported *. | `filters` | `array` | `filters` lets you define custom filters to limit the amount of generated flows. 
These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. -Unsupported *. | `imagePullPolicy` | `string` @@ -2745,9 +2750,9 @@ This setting is ignored when Kafka is disabled. - `Flows` to export regular network flows. This is the default. + -- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. + +- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. + -- `EndedConversations` to generate only ended conversations events. + +- `EndedConversations` to generate only ended conversations events. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. + - `All` to generate both network flows and all conversations events. It is not recommended due to the impact on resources footprint. + @@ -2777,7 +2782,7 @@ Description:: -- `advanced` allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` environment vars. Set these values at your own risk. -- Type:: @@ -2805,7 +2810,7 @@ This delay is ignored when a FIN packet is collected for TCP flows (see `convers | `dropUnusedFields` | `boolean` -| `dropUnusedFields` [deprecated *] this setting is not used anymore. +| `dropUnusedFields` [deprecated (*)] this setting is not used anymore. | `enableKubeProbes` | `boolean` @@ -2912,7 +2917,8 @@ Description:: + -- Defines secondary networks to be checked for resources identification. 
-To guarantee a correct identification, indexed values must form an unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled. +To guarantee a correct identification, indexed values must form an unique identifier across the cluster. +If the same index is used by several resources, those resources might be incorrectly labeled. -- Type:: @@ -2957,7 +2963,6 @@ Description:: + -- `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage. -Unsupported *. -- Type:: @@ -2972,7 +2977,7 @@ Type:: | `mode` | `string` -| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate same flows reported from different nodes. + +| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication, since the Agent cannot de-duplicate same flows reported from different nodes. + - Use `Drop` to drop every flow considered as duplicates, allowing saving more on resource usage but potentially losing some information such as the network interfaces used from peer, or network events. + @@ -2983,7 +2988,7 @@ Type:: | `sampling` | `integer` -| `sampling` is the sampling rate when deduper `mode` is `Sample`. +| `sampling` is the sampling ratio when deduper `mode` is `Sample`. For example, a value of `50` means that 1 flow in 50 is sampled. |=== == .spec.processor.filters @@ -2993,7 +2998,6 @@ Description:: `filters` lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. -Unsupported *. 
-- Type:: @@ -3019,64 +3023,17 @@ Type:: |=== | Property | Type | Description -| `allOf` -| `array` -| `filters` is a list of matches that must be all satisfied in order to remove a flow. - | `outputTarget` | `string` -| If specified, these filters only target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted. - -| `sampling` -| `integer` -| `sampling` is an optional sampling rate to apply to this filter. - -|=== -== .spec.processor.filters[].allOf -Description:: -+ --- -`filters` is a list of matches that must be all satisfied in order to remove a flow. --- - -Type:: - `array` - - - - -== .spec.processor.filters[].allOf[] -Description:: -+ --- -`FLPSingleFilter` defines the desired configuration for a single FLP-based filter. --- - -Type:: - `object` - -Required:: - - `field` - - `matchType` - - - -[cols="1,1,1",options="header"] -|=== -| Property | Type | Description - -| `field` -| `string` -| Name of the field to filter on. -Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc. +| If specified, these filters target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted. -| `matchType` +| `query` | `string` -| Type of matching to apply. +| A query that selects the network flows to keep. More information about this query language in https://github.com/netobserv/flowlogs-pipeline/blob/main/docs/filtering.md. -| `value` -| `string` -| Value to filter on. When `matchType` is `Equal` or `NotEqual`, you can use field injection with `$(SomeField)` to refer to any other field of the flow. +| `sampling` +| `integer` +| `sampling` is an optional sampling ratio to apply to this filter. For example, a value of `50` means that 1 matching flow in 50 is sampled. |=== == .spec.processor.kafkaConsumerAutoscaler @@ -3201,7 +3158,7 @@ If set to `true`, the `providedCaFile` field is ignored. 
| Select the type of TLS configuration: + - `Disabled` (default) to not configure TLS for the endpoint. -- `Provided` to manually provide cert file and a key file. Unsupported *. +- `Provided` to manually provide cert file and a key file. [Unsupported (*)]. - `Auto` to use {product-title} auto generated certificate using annotations. |=== @@ -3595,4 +3552,4 @@ If the namespace is different, the config map or the secret is copied so that it | `string` | Type for the certificate reference: `configmap` or `secret`. -|=== \ No newline at end of file +|=== diff --git a/modules/network-observability-flowmetric-api-specifications.adoc b/modules/network-observability-flowmetric-api-specifications.adoc index c9adb00fb33c..155f55dc3bea 100644 --- a/modules/network-observability-flowmetric-api-specifications.adoc +++ b/modules/network-observability-flowmetric-api-specifications.adoc @@ -103,13 +103,12 @@ When set to `Egress`, it is equivalent to adding the regular expression filter o | `filters` | `array` -| `filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must -be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`. +| `filters` is a list of fields and values used to restrict which flows are taken into account. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html. | `flatten` | `array (string)` -| `flatten` is a list of list-type fields that must be flattened, such as Interfaces and NetworkEvents. Flattened fields generate one metric per item in that field. +| `flatten` is a list of array-type fields that must be flattened, such as Interfaces or NetworkEvents. Flattened fields generate one metric per item in that field. 
For instance, when flattening `Interfaces` on a bytes counter, a flow having Interfaces [br-ex, ens5] increases one counter for `br-ex` and another for `ens5`. | `labels` @@ -131,9 +130,10 @@ Refer to the documentation for the list of available fields: https://docs.opensh | `type` | `string` -| Metric type: "Counter" or "Histogram". +| Metric type: "Counter", "Histogram" or "Gauge". Use "Counter" for any value that increases over time and on which you can compute a rate, such as Bytes or Packets. Use "Histogram" for any value that must be sampled independently, such as latencies. +Use "Gauge" for other values that don't necessitate accuracy over time (gauges are sampled only every N seconds when Prometheus fetches the metric). | `valueField` | `string` @@ -261,8 +261,7 @@ To learn more about `promQL`, refer to the Prometheus documentation: https://pro Description:: + -- -`filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must -be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`. +`filters` is a list of fields and values used to restrict which flows are taken into account. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html. 
-- diff --git a/modules/network-observability-flows-format.adoc b/modules/network-observability-flows-format.adoc index a81303aa2b6c..854796311138 100644 --- a/modules/network-observability-flows-format.adoc +++ b/modules/network-observability-flows-format.adoc @@ -155,16 +155,9 @@ The "Cardinality" column gives information about the implied metric cardinality | no | fine | n/a -| `Duplicate` -| boolean -| Indicates if this flow was also captured from another interface on the same host -| n/a -| no -| fine -| n/a | `Flags` | string[] -| List of TCP flags comprised in the flow, according to RFC-9293, with additional custom flags to represent the following per-packet combinations: + +| List of TCP flags comprised in the flow, as per RFC-9293, with additional custom flags to represent the following per-packet combinations: + - SYN_ACK + - FIN_ACK + - RST_ACK @@ -182,6 +175,13 @@ The "Cardinality" column gives information about the implied metric cardinality | yes | fine | host.direction +| `IPSecStatus` +| string +| Status of the IPsec encryption (on egress, given by the kernel `xfrm_output` function) or decryption (on ingress, by the kernel `xfrm_input` function) +| `ipsec_status` +| no +| fine +| n/a | `IcmpCode` | number | ICMP code @@ -242,7 +242,7 @@ The "Cardinality" column gives information about the implied metric cardinality | `Packets` | number | Number of packets -| `pkt_drop_cause` +| n/a | no | avoid | packets @@ -423,35 +423,35 @@ The "Cardinality" column gives information about the implied metric cardinality | n/a | `XlatDstAddr` | string -| Packet translation destination address +| packet translation destination address | `xlat_dst_address` | no | avoid | n/a | `XlatDstPort` | number -| Packet translation destination port +| packet translation destination port | `xlat_dst_port` | no | careful | n/a | `XlatSrcAddr` | string -| Packet translation source address +| packet translation source address | `xlat_src_address` | no | avoid | n/a | `XlatSrcPort` | number -| Packet
translation source port +| packet translation source port | `xlat_src_port` | no | careful | n/a | `ZoneId` | number -| Packet translation zone id +| packet translation zone id | `xlat_zone_id` | no | avoid @@ -470,4 +470,4 @@ The "Cardinality" column gives information about the implied metric cardinality | yes | fine | n/a -|=== \ No newline at end of file +|=== \ No newline at end of file diff --git a/modules/network-observability-multitenancy.adoc b/modules/network-observability-multitenancy.adoc index eebd146c77d2..a6530996a49a 100644 --- a/modules/network-observability-multitenancy.adoc +++ b/modules/network-observability-multitenancy.adoc @@ -5,7 +5,7 @@ :_mod-docs-content-type: PROCEDURE [id="network-observability-multi-tenancy_{context}"] = Enabling multi-tenancy in Network Observability -Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. +Multi-tenancy in the Network Observability Operator allows and restricts individual user or group access to the flows stored in Loki and/or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights. @@ -15,11 +15,11 @@ For Developers, multi-tenancy is available for both Loki and Prometheus but requ .Procedure -* For per-tenant access, you must have the `netobserv-reader` cluster role and the `netobserv-metrics-reader` namespace role to use the developer perspective.
Run the following commands for this level of access: +* For per-tenant access, you must have the `netobserv-loki-reader` cluster role and the `netobserv-metrics-reader` namespace role to use the developer perspective. Run the following commands for this level of access: + [source,terminal] ---- -$ oc adm policy add-cluster-role-to-user netobserv-reader +$ oc adm policy add-cluster-role-to-user netobserv-loki-reader ---- + [source,terminal] @@ -27,11 +27,11 @@ $ oc adm policy add-cluster-role-to-user netobserv-reader $ oc adm policy add-role-to-user netobserv-metrics-reader -n ---- -* For cluster-wide access, non-cluster-administrators must have the `netobserv-reader`, `cluster-monitoring-view`, and `netobserv-metrics-reader` cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access: +* For cluster-wide access, non-cluster-administrators must have the `netobserv-loki-reader`, `cluster-monitoring-view`, and `netobserv-metrics-reader` cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access: + [source,terminal] ---- -$ oc adm policy add-cluster-role-to-user netobserv-reader +$ oc adm policy add-cluster-role-to-user netobserv-loki-reader ---- + [source,terminal] diff --git a/modules/network-observability-netobserv-cli-reference.adoc b/modules/network-observability-netobserv-cli-reference.adoc index 9bc475bce829..75fbae7094cc 100644 --- a/modules/network-observability-netobserv-cli-reference.adoc +++ b/modules/network-observability-netobserv-cli-reference.adoc @@ -7,8 +7,8 @@ You can use the Network Observability CLI (`oc netobserv`) to pass command-line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 
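The role-binding commands in the multi-tenancy hunk above omit the user or group argument. A complete invocation might look like the following sketch, where the user name `alice` and the namespace `netobserv` are hypothetical placeholders, not values from the patch:

```shell
# Per-tenant access: grant a hypothetical user read access to flow logs
# stored in Loki, plus metrics access scoped to one namespace.
oc adm policy add-cluster-role-to-user netobserv-loki-reader alice
oc adm policy add-role-to-user netobserv-metrics-reader alice -n netobserv
```

These commands require a running cluster and `oc` logged in with sufficient privileges; they are shown only to illustrate where the subject and namespace arguments go.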
[id="cli-syntax_{context}"] -== Syntax -The basic syntax for `oc netobserv` commands: +== Syntax +The basic syntax for `oc netobserv` commands: .`oc netobserv` syntax [source,terminal] @@ -57,12 +57,14 @@ $ oc netobserv flows [] [] | Option | Description | Default |--enable_all| enable all eBPF features | false |--enable_dns| enable DNS tracking | false +|--enable_ipsec| enable IPsec tracking | false |--enable_network_events| enable network events monitoring | false |--enable_pkt_translation| enable packet translation | false |--enable_pkt_drop| enable packet drop | false |--enable_rtt| enable RTT tracking | false |--enable_udn_mapping| enable User Defined Network mapping | false |--get-subnets| get subnets information | false +|--sampling| value that determines the ratio of packets being sampled | 1 |--background| run in background | false |--copy| copy the output files locally | prompt |--log-level| components logs | info @@ -84,12 +86,13 @@ $ oc netobserv flows [] [] |--port| filter port | – |--ports| filter on either of two ports | – |--protocol| filter protocol | – -|--regexes| filter flows using regular expression | – +|--query| filter flows by using a custom query | – |--sport_range| filter source port range | – |--sport| filter source port | – |--sports| filter on either of two source ports | – |--tcp_flags| filter TCP flags | – -|--interfaces| interfaces to monitor | – +|--interfaces| comma-separated list of interfaces to monitor | – +|--exclude_interfaces| comma-separated list of interfaces to exclude | lo |=== .Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: @@ -131,7 +134,7 @@ $ oc netobserv packets [