From db200bdf1d36feef45e6e16d38afc237a463ce8f Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Wed, 1 Sep 2021 17:08:51 +0800
Subject: [PATCH 01/10] Update docs for Function Mesh v0.1.7 release
---
docs/connectors/io-crd-config/sink-crd-config.md | 6 +++++-
.../io-crd-config/source-crd-config.md | 7 ++++++-
docs/connectors/run-connector.md | 4 ++++
docs/functions/function-crd.md | 8 ++++++--
docs/functions/run-function/run-go-function.md | 4 ++++
docs/functions/run-function/run-java-function.md | 4 ++++
.../run-function/run-python-function.md | 4 ++++
docs/scaling.md | 16 +++++++++++++++-
8 files changed, 48 insertions(+), 5 deletions(-)
diff --git a/docs/connectors/io-crd-config/sink-crd-config.md b/docs/connectors/io-crd-config/sink-crd-config.md
index 67853542..8ed47488 100644
--- a/docs/connectors/io-crd-config/sink-crd-config.md
+++ b/docs/connectors/io-crd-config/sink-crd-config.md
@@ -14,9 +14,10 @@ This table lists sink configurations.
| `name` | The name of a sink connector. |
| `classname` | The class name of a sink connector. |
| `tenant` | The tenant of a sink connector. |
+| `namespace` | The Pulsar namespace of a sink connector. |
| `Replicas`| The number of instances that you want to run this sink connector. By default, the `Replicas` is set to `1`. |
| `MaxReplicas`| The maximum number of Pulsar instances that you want to run for this sink connector. When the value of the `maxReplicas` parameter is greater than the value of `replicas`, it indicates that the sink controller automatically scales the sink connector based on the CPU usage. By default, `maxReplicas` is set to 0, which indicates that auto-scaling is disabled. |
-| `SinkConfig` | The map to a ConfigMap specifying the configuration of a sink connector. |
+| `SinkConfig` | The sink connector configurations in YAML format.|
| `Timeout` | The message timeout in milliseconds. |
| `NegativeAckRedeliveryDelayMs`| The number of redelivered messages due to negative acknowledgement. |
| `AutoAck` | Whether or not the framework acknowledges messages automatically. This field is required. You can set it to `true` or `false`.|
@@ -122,3 +123,6 @@ Function Mesh supports customizing the Pod running Pulsar connectors. This table
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | The initialization containers belonging to a Pod. A typical use case could be using an initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
\ No newline at end of file
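To make the autoscaling fields added above more concrete, here is a minimal sketch of how they might appear in a sink manifest. It assumes the `Sink` kind uses the same `cloud.streamnative.io/v1alpha1` apiVersion and lowerCamelCase `pod` fields as the Function examples later in this patch series; the name and values are illustrative only.

```yaml
apiVersion: cloud.streamnative.io/v1alpha1  # assumed to match the Function examples
kind: Sink
metadata:
  name: sink-sample            # illustrative name
spec:
  replicas: 1
  maxReplicas: 5               # must be greater than replicas to enable autoscaling
  pod:
    builtinAutoscaler:
    - AverageUtilizationCPUPercent80   # a built-in rule from the table above
  # other sink configs
```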
diff --git a/docs/connectors/io-crd-config/source-crd-config.md b/docs/connectors/io-crd-config/source-crd-config.md
index 891be031..ff296ff7 100644
--- a/docs/connectors/io-crd-config/source-crd-config.md
+++ b/docs/connectors/io-crd-config/source-crd-config.md
@@ -14,10 +14,12 @@ This table lists source configurations.
| `name` | The name of a source connector. |
| `classname` | The class name of a source connector. |
| `tenant` | The tenant of a source connector. |
+| `namespace` | The Pulsar namespace of a source connector. |
| `Replicas`| The number of instances that you want to run this source connector. |
| `MaxReplicas`| The maximum number of Pulsar instances that you want to run for this source connector. When the value of the `maxReplicas` parameter is greater than the value of `replicas`, it indicates that the source controller automatically scales the source connector based on the CPU usage. By default, `maxReplicas` is set to 0, which indicates that auto-scaling is disabled. |
-| `SourceConfig` | The map to a ConfigMap specifying the configuration of a source connector. |
+| `SourceConfig` | The source connector configurations in YAML format. |
| `ProcessingGuarantee` | The processing guarantees (delivery semantics) applied to the source connector. Available values: `ATLEAST_ONCE`, `ATMOST_ONCE`, `EFFECTIVELY_ONCE`.|
+| ForwardSourceMessageProperty | Configure whether to pass message properties to a target topic. |
## Images
@@ -114,3 +116,6 @@ Function Mesh supports customizing the Pod running connectors. This table lists
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
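As a rough illustration of the reworded `SourceConfig` field and the new `ForwardSourceMessageProperty` field above, a source spec might carry them as sketched below. The lowerCamelCase field names follow the Function examples later in this patch series, and the configuration keys are purely hypothetical.

```yaml
spec:
  # connector-specific settings passed through in YAML format; keys below are made up for illustration
  sourceConfig:
    topicPattern: "my-input-.*"
    fetchSize: "100"
  # pass the original message properties on to the target topic
  forwardSourceMessageProperty: true
```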
diff --git a/docs/connectors/run-connector.md b/docs/connectors/run-connector.md
index 10982eac..cf9f0c19 100644
--- a/docs/connectors/run-connector.md
+++ b/docs/connectors/run-connector.md
@@ -148,6 +148,10 @@ This section describes how to package a Pulsar connector to a NAR or JAR package
Use the `pulsar-admin` CLI tool to upload the NAR or uber JAR package to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
+> **Note**
+>
+> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+
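A minimal sketch of what enabling the service could look like, assuming the broker configuration file is `conf/broker.conf` (which the note refers to as `broker.config`) and that the relevant upstream Pulsar setting is named `enablePackagesManagement`; verify both against your Pulsar version.

```bash
# Assumed setting name; check conf/broker.conf for the exact key in your Pulsar release.
echo "enablePackagesManagement=true" >> conf/broker.conf
# Restart the brokers so that the change takes effect.
```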
This example shows how to upload the NAR package of the `my-sink` connector to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
```bash
diff --git a/docs/functions/function-crd.md b/docs/functions/function-crd.md
index 5b4f5a94..c38a4f04 100644
--- a/docs/functions/function-crd.md
+++ b/docs/functions/function-crd.md
@@ -15,16 +15,17 @@ This table lists Pulsar Function configurations.
| `name` | The name of a Pulsar Function. |
| `classname` | The class name of a Pulsar Function. |
| `tenant` | The tenant of a Pulsar Function. |
-| `namespace` | The namespace of a Pulsar Function. |
+| `namespace` | The Pulsar namespace of a Pulsar Function. |
| `Replicas`| The number of instances that you want to run this Pulsar Function. By default, the `Replicas` is set to `1`. |
| `MaxReplicas`| The maximum number of Pulsar instances that you want to run for this Pulsar Function. When the value of the `maxReplicas` parameter is greater than the value of `replicas`, it indicates that the Functions controller automatically scales the Pulsar Functions based on the CPU usage. By default, `maxReplicas` is set to 0, which indicates that auto-scaling is disabled. |
| `Timeout` | The message timeout in milliseconds. |
| `DeadLetterTopic` | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. |
-| `FuncConfig` | The map to a ConfigMap specifying the configuration of a Pulsar function. |
+| `FuncConfig` | Pulsar Functions configurations in YAML format. |
| `LogTopic` | The topic to which the logs of a Pulsar Function are produced. |
| `AutoAck` | Whether or not the framework acknowledges messages automatically. This field is required. You can set it to `true` or `false`.|
| `MaxMessageRetry` | How many times to process a message before giving up. |
| `ProcessingGuarantee` | The processing guarantees (delivery semantics) applied to the function. Available values: `ATLEAST_ONCE`, `ATMOST_ONCE`, `EFFECTIVELY_ONCE`.|
+| ForwardSourceMessageProperty | Configure whether to pass message properties to a target topic. |
| `RetainOrdering` | Function consumes and processes messages in order. |
| `RetainKeyOrdering`| Configure whether to retain the key order of messages. |
| `SubscriptionName` | Pulsar Functions’ subscription name if you want a specific subscription-name for the input-topic consumer. |
@@ -147,3 +148,6 @@ Function Mesh supports customizing the Pod running function instance. This table
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
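For the reworded `FuncConfig` field above, the values are user-defined key/value pairs expressed in YAML. The keys below are hypothetical and only illustrate the shape of such a block.

```yaml
spec:
  funcConfig:
    # arbitrary, function-defined keys; these names are made up for illustration
    word-of-the-day: "hello"
    publish-rate: "10"
```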
diff --git a/docs/functions/run-function/run-go-function.md b/docs/functions/run-function/run-go-function.md
index 8009313a..7c85496b 100644
--- a/docs/functions/run-function/run-go-function.md
+++ b/docs/functions/run-function/run-go-function.md
@@ -86,6 +86,10 @@ To package Go Functions in Go, follow these steps.
Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
+> **Note**
+>
+> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
```bash
diff --git a/docs/functions/run-function/run-java-function.md b/docs/functions/run-function/run-java-function.md
index 81e9f5e2..2b360d91 100644
--- a/docs/functions/run-function/run-java-function.md
+++ b/docs/functions/run-function/run-java-function.md
@@ -116,6 +116,10 @@ To package a Functions in Java, follow these steps.
Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
+> **Note**
+>
+> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
```bash
diff --git a/docs/functions/run-function/run-python-function.md b/docs/functions/run-function/run-python-function.md
index 9d729c35..5b1c9f31 100644
--- a/docs/functions/run-function/run-python-function.md
+++ b/docs/functions/run-function/run-python-function.md
@@ -79,6 +79,10 @@ Python Function supports One Python file or ZIP file.
Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
+> **Note**
+>
+> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
```bash
diff --git a/docs/scaling.md b/docs/scaling.md
index cae6725e..cd8b51ee 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -30,7 +30,21 @@ In CRDs, the `replicas` parameter is used to specify the number of Pods (Pulsar
## Autoscaling
-Function Mesh supports scaling Pods (Pulsar instances) based on the CPU utilization automatically. By default, autoscaling is disabled (The value of the `maxReplicas` parameter is set to `0`). To enable autoscaling, you can specify the `maxReplicas` parameter and set a value for it in the CRD. This value should be greater than the value of the `replicas` parameter.
+Function Mesh supports scaling Pods (Pulsar instances) based on the CPU utilization automatically. Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, a single metrics.
+
+- CPU usage: auto-scale the number of Pods based on 80%, 50% or 20% CPU utilization.
+- Memory usage: auto-scale the number of Pods based on 80%, 50% or 20% memory utilization.
+- metrics: auto-scale the number of Pods based on a single metrics. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
+
+> **Note**
+>
+> If you have configured autoscaling based on CPU usages, memory usage, or both of them, you do not need to configure autoscaling based on a specific memory and vice versa.
+
+By default, autoscaling is disabled (The value of the `maxReplicas` parameter is set to `0`). To enable autoscaling, you can specify the `maxReplicas` parameter and set a value for it in the CRD. This value should be greater than the value of the `replicas` parameter. Then, the number of Pods is automatically scaled when 80% CPU is utilized.
+
+### Prerequisites
+
+Deploy the metrics server in the cluster. The Metrics server provides metrics through the Metrics API. The Horizontal Pod Autoscaler (HPA) uses this API to collect metrics. To learn how to deploy the metrics-server, see the [metrics-server documentation](https://github.com/kubernetes-sigs/metrics-server#deployment).
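As a sketch of that prerequisite, the metrics-server project documents a one-line installation; the URL below comes from its README at the time of writing and may change, so treat it as an assumption to verify.

```bash
# Install metrics-server (manifest URL per the upstream README).
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Confirm that the Metrics API is serving data before enabling autoscaling.
kubectl top pods
```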
### Auto-scale Pulsar Functions
From 46805c882562b65133d7026c0b161aad4058240e Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Fri, 3 Sep 2021 18:40:42 +0800
Subject: [PATCH 02/10] update comments
---
docs/connectors/io-crd-config/sink-crd-config.md | 4 ++--
docs/connectors/io-crd-config/source-crd-config.md | 4 ++--
docs/connectors/pulsar-io-debug.md | 2 +-
docs/connectors/run-connector.md | 2 +-
docs/functions/function-crd.md | 4 ++--
docs/functions/function-debug.md | 2 +-
docs/functions/run-function/run-go-function.md | 2 +-
docs/functions/run-function/run-java-function.md | 2 +-
docs/functions/run-function/run-python-function.md | 2 +-
docs/install-function-mesh.md | 2 +-
docs/scaling.md | 8 ++++----
11 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/docs/connectors/io-crd-config/sink-crd-config.md b/docs/connectors/io-crd-config/sink-crd-config.md
index 8ed47488..130fce28 100644
--- a/docs/connectors/io-crd-config/sink-crd-config.md
+++ b/docs/connectors/io-crd-config/sink-crd-config.md
@@ -124,5 +124,5 @@ Function Mesh supports customizing the Pod running Pulsar connectors. This table
| `InitContainers` | The initialization containers belonging to a Pod. A typical use case could be using an initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
-| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
\ No newline at end of file
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
\ No newline at end of file
diff --git a/docs/connectors/io-crd-config/source-crd-config.md b/docs/connectors/io-crd-config/source-crd-config.md
index ff296ff7..b44c0117 100644
--- a/docs/connectors/io-crd-config/source-crd-config.md
+++ b/docs/connectors/io-crd-config/source-crd-config.md
@@ -117,5 +117,5 @@ Function Mesh supports customizing the Pod running connectors. This table lists
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
-| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/connectors/pulsar-io-debug.md b/docs/connectors/pulsar-io-debug.md
index ec1998a7..fdcecf1a 100644
--- a/docs/connectors/pulsar-io-debug.md
+++ b/docs/connectors/pulsar-io-debug.md
@@ -18,7 +18,7 @@ In addition, you can use the following command to check the specific Pod.
- `kubectl describe pod POD_NAME -n NAMESPACE_NAME`: check the current state of the Pod and recent events.
-For the use of `kubectl` commands, see [here](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
+For the use of `kubectl` commands, see [kubectl command reference](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
## Use log topics
diff --git a/docs/connectors/run-connector.md b/docs/connectors/run-connector.md
index cf9f0c19..d60e9702 100644
--- a/docs/connectors/run-connector.md
+++ b/docs/connectors/run-connector.md
@@ -150,7 +150,7 @@ Use the `pulsar-admin` CLI tool to upload the NAR or uber JAR package to the [Pu
> **Note**
>
-> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+> Before uploading the package to the Pulsar package management service, you need to enable the package management service in the `broker.conf` file.
This example shows how to upload the NAR package of the `my-sink` connector to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
diff --git a/docs/functions/function-crd.md b/docs/functions/function-crd.md
index c38a4f04..5a1ef486 100644
--- a/docs/functions/function-crd.md
+++ b/docs/functions/function-crd.md
@@ -149,5 +149,5 @@ Function Mesh supports customizing the Pod running function instance. This table
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
-| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
+| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/functions/function-debug.md b/docs/functions/function-debug.md
index de2be26f..8101968e 100644
--- a/docs/functions/function-debug.md
+++ b/docs/functions/function-debug.md
@@ -18,7 +18,7 @@ In addition, you can use the following command to check the specific Pod.
- `kubectl describe pod POD_NAME -n NAMESPACE_NAME`: check the current state of the Pod and recent events.
-For the use of `kubectl` commands, see [here](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
+For the use of `kubectl` commands, see [kubectl command reference](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
## Use log topic
diff --git a/docs/functions/run-function/run-go-function.md b/docs/functions/run-function/run-go-function.md
index 7c85496b..2bc4d1dd 100644
--- a/docs/functions/run-function/run-go-function.md
+++ b/docs/functions/run-function/run-go-function.md
@@ -88,7 +88,7 @@ Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package man
> **Note**
>
-> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+> Before uploading the package to the Pulsar package management service, you need to enable the package management service in the `broker.conf` file.
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
diff --git a/docs/functions/run-function/run-java-function.md b/docs/functions/run-function/run-java-function.md
index 2b360d91..61d0fa31 100644
--- a/docs/functions/run-function/run-java-function.md
+++ b/docs/functions/run-function/run-java-function.md
@@ -118,7 +118,7 @@ Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package man
> **Note**
>
-> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+> Before uploading the package to the Pulsar package management service, you need to enable the package management service in the `broker.conf` file.
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
diff --git a/docs/functions/run-function/run-python-function.md b/docs/functions/run-function/run-python-function.md
index 5b1c9f31..4e641922 100644
--- a/docs/functions/run-function/run-python-function.md
+++ b/docs/functions/run-function/run-python-function.md
@@ -81,7 +81,7 @@ Use the `pulsar-admin` CLI tool to upload the package to the [Pulsar package man
> **Note**
>
-> To upload the package to the Pulsar package management service, you need to enable package management service in the `broker.config` file in advance.
+> Before uploading the package to the Pulsar package management service, you need to enable the package management service in the `broker.conf` file.
This example shows how to upload the package of the `my-function@0.1` Functions to the [Pulsar package management service](http://pulsar.apache.org/docs/en/next/admin-api-packages/).
diff --git a/docs/install-function-mesh.md b/docs/install-function-mesh.md
index afb2b1a4..68db49c3 100644
--- a/docs/install-function-mesh.md
+++ b/docs/install-function-mesh.md
@@ -41,7 +41,7 @@ This example shows how to install Function Mesh through [Helm](https://helm.sh/)
> **Note**
>
> - Before installation, ensure that Helm v3 is installed properly.
-> - For the use of `kubectl` commands, see [here](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
+> - For the use of `kubectl` commands, see [kubectl command reference](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
1. Clone the StreamNative Function Mesh repository.
diff --git a/docs/scaling.md b/docs/scaling.md
index cd8b51ee..b2038ae2 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -30,17 +30,17 @@ In CRDs, the `replicas` parameter is used to specify the number of Pods (Pulsar
## Autoscaling
-Function Mesh supports scaling Pods (Pulsar instances) based on the CPU utilization automatically. Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, a single metrics.
+Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, a single metric.
-- CPU usage: auto-scale the number of Pods based on 80%, 50% or 20% CPU utilization.
+- CPU usage: auto-scale the number of Pods based on 80%, 50% or 20% CPU utilization.
- Memory usage: auto-scale the number of Pods based on 80%, 50% or 20% memory utilization.
-- metrics: auto-scale the number of Pods based on a single metrics. For details, see [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
+- metrics: auto-scale the number of Pods based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
> **Note**
>
> If you have configured autoscaling based on CPU usages, memory usage, or both of them, you do not need to configure autoscaling based on a specific memory and vice versa.
-By default, autoscaling is disabled (The value of the `maxReplicas` parameter is set to `0`). To enable autoscaling, you can specify the `maxReplicas` parameter and set a value for it in the CRD. This value should be greater than the value of the `replicas` parameter. Then, the number of Pods is automatically scaled when 80% CPU is utilized.
+By default, autoscaling is disabled (the value of the `maxReplicas` parameter is set to `0`). To enable autoscaling, you can specify the `maxReplicas` parameter and set a value for it in the CRD. This value should be greater than the value of the `replicas` parameter. Then, the number of Pods is automatically scaled when 80% CPU is utilized.
### Prerequisites
From 03f03a8a9efcaea6aa1137e666b5aacc4eccfdc2 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Fri, 10 Sep 2021 17:19:37 +0800
Subject: [PATCH 03/10] update comments
---
.../io-crd-config/sink-crd-config.md | 2 +-
.../io-crd-config/source-crd-config.md | 4 ++--
docs/functions/function-crd.md | 4 ++--
docs/scaling.md | 24 +++++++++++++++----
4 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/docs/connectors/io-crd-config/sink-crd-config.md b/docs/connectors/io-crd-config/sink-crd-config.md
index 130fce28..c83d511c 100644
--- a/docs/connectors/io-crd-config/sink-crd-config.md
+++ b/docs/connectors/io-crd-config/sink-crd-config.md
@@ -124,5 +124,5 @@ Function Mesh supports customizing the Pod running Pulsar connectors. This table
| `InitContainers` | The initialization containers belonging to a Pod. A typical use case could be using an initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
\ No newline at end of file
diff --git a/docs/connectors/io-crd-config/source-crd-config.md b/docs/connectors/io-crd-config/source-crd-config.md
index b44c0117..bb307bb1 100644
--- a/docs/connectors/io-crd-config/source-crd-config.md
+++ b/docs/connectors/io-crd-config/source-crd-config.md
@@ -19,7 +19,7 @@ This table lists source configurations.
| `MaxReplicas`| The maximum number of Pulsar instances that you want to run for this source connector. When the value of the `maxReplicas` parameter is greater than the value of `replicas`, it indicates that the source controller automatically scales the source connector based on the CPU usage. By default, `maxReplicas` is set to 0, which indicates that auto-scaling is disabled. |
| `SourceConfig` | The source connector configurations in YAML format. |
| `ProcessingGuarantee` | The processing guarantees (delivery semantics) applied to the source connector. Available values: `ATLEAST_ONCE`, `ATMOST_ONCE`, `EFFECTIVELY_ONCE`.|
-| ForwardSourceMessageProperty | Configure whether to pass message properties to a target topic. |
+| `ForwardSourceMessageProperty` | Configure whether to pass message properties to a target topic. |
## Images
@@ -117,5 +117,5 @@ Function Mesh supports customizing the Pod running connectors. This table lists
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/functions/function-crd.md b/docs/functions/function-crd.md
index 5a1ef486..a4e5aae2 100644
--- a/docs/functions/function-crd.md
+++ b/docs/functions/function-crd.md
@@ -25,7 +25,7 @@ This table lists Pulsar Function configurations.
| `AutoAck` | Whether or not the framework acknowledges messages automatically. This field is required. You can set it to `true` or `false`.|
| `MaxMessageRetry` | How many times to process a message before giving up. |
| `ProcessingGuarantee` | The processing guarantees (delivery semantics) applied to the function. Available values: `ATLEAST_ONCE`, `ATMOST_ONCE`, `EFFECTIVELY_ONCE`.|
-| ForwardSourceMessageProperty | Configure whether to pass message properties to a target topic. |
+| `ForwardSourceMessageProperty` | Configure whether to pass message properties to a target topic. |
| `RetainOrdering` | Function consumes and processes messages in order. |
| `RetainKeyOrdering`| Configure whether to retain the key order of messages. |
| `SubscriptionName` | Pulsar Functions’ subscription name if you want a specific subscription-name for the input-topic consumer. |
@@ -149,5 +149,5 @@ Function Mesh supports customizing the Pod running function instance. This table
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
-| `AutoScalingMetrics` | Specify how to scale based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
+| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in Pulsar Functions. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/scaling.md b/docs/scaling.md
index b2038ae2..0073bc1c 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -30,11 +30,25 @@ In CRDs, the `replicas` parameter is used to specify the number of Pods (Pulsar
## Autoscaling
-Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, a single metric.
-
-- CPU usage: auto-scale the number of Pods based on 80%, 50% or 20% CPU utilization.
-- Memory usage: auto-scale the number of Pods based on 80%, 50% or 20% memory utilization.
-- metrics: auto-scale the number of Pods based on a single metric. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
+Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, customized metrics.
+
+- CPU usage: auto-scale the number of Pods based on CPU utilization, as listed in the following table.
+
+ | Option | Description |
+ | --- | --- |
+ | AverageUtilizationCPUPercent80 | Auto-scale the number of Pods if 80% CPU is utilized.|
+ | AverageUtilizationCPUPercent50 | Auto-scale the number of Pods if 50% CPU is utilized.|
+ | AverageUtilizationCPUPercent20 | Auto-scale the number of Pods if 20% CPU is utilized. |
+
+- Memory usage: auto-scale the number of Pods based on memory utilization, as listed in the following table.
+
+ | Option | Description |
+ | --- | --- |
+ | AverageUtilizationMemoryPercent80 | Auto-scale the number of Pods if 80% memory is utilized. |
+ | AverageUtilizationMemoryPercent50 | Auto-scale the number of Pods if 50% memory is utilized. |
+ | AverageUtilizationMemoryPercent20 | Auto-scale the number of Pods if 20% memory is utilized. |
+
+- Customized metrics: auto-scale the number of Pods based on customized metrics defined in Pulsar Functions or connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
> **Note**
>
From 390886f4e62909a592a023cce41bda815da384e2 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Mon, 13 Sep 2021 11:46:15 +0800
Subject: [PATCH 04/10] Update contents
---
docs/scaling.md | 160 +++++++++++++++++++++++++++++++++++++-----------
1 file changed, 123 insertions(+), 37 deletions(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index 0073bc1c..5a0e530d 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -30,9 +30,11 @@ In CRDs, the `replicas` parameter is used to specify the number of Pods (Pulsar
## Autoscaling
-Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, customized metrics.
+Function Mesh auto-scales the number of Pods based on the CPU usage, memory usage, or customized metrics.
-- CPU usage: auto-scale the number of Pods based on CPU utilization, as listed in the following table.
+- CPU usage: auto-scale the number of Pods based on CPU utilization.
+
+ This table lists built-in CPU-based autoscaling metrics. If these metrics cannot meet your requirements, you can auto-scale the number of Pods based on customized metrics defined in Pulsar Functions or connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
| Option | Description |
| --- | --- |
@@ -40,7 +42,9 @@ Function Mesh auto-scales the number of Pods based on the CPU usage, memory usag
| AverageUtilizationCPUPercent50 | Auto-scale the number of Pods if 50% CPU is utilized.|
| AverageUtilizationCPUPercent20 | Auto-scale the number of Pods if 20% CPU is utilized. |
-- Memory usage: auto-scale the number of Pods based on memory utilization, as listed in the following table.
+- Memory usage: auto-scale the number of Pods based on memory utilization.
+
+   This table lists built-in memory-based autoscaling metrics. If these metrics cannot meet your requirements, you can auto-scale the number of Pods based on customized metrics defined in Pulsar Functions or connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling).
| Option | Description |
| --- | --- |
@@ -52,7 +56,7 @@ Function Mesh auto-scales the number of Pods based on the CPU usage, memory usag
> **Note**
>
-> If you have configured autoscaling based on CPU usages, memory usage, or both of them, you do not need to configure autoscaling based on a specific memory and vice versa.
+> If you have configured autoscaling based on the CPU usage, memory usage, or both of them, you do not need to configure autoscaling based on customized metrics defined in Pulsar Functions or connectors and vice versa.
By default, autoscaling is disabled (the value of the `maxReplicas` parameter is set to `0`). To enable autoscaling, you can specify the `maxReplicas` parameter and set a value for it in the CRD. This value should be greater than the value of the `replicas` parameter. Then, the number of Pods is automatically scaled when 80% CPU is utilized.
@@ -62,44 +66,126 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
### Auto-scale Pulsar Functions
-This example shows how to auto-scale the number of Pods running Pulsar Functions to `8`.
-
-1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
-
- ```yaml
- apiVersion: cloud.streamnative.io/v1alpha1
- kind: Function
- metadata:
- name: java-function-sample
- namespace: default
- spec:
- className: org.apache.pulsar.functions.api.examples.ExclamationFunction
- forwardSourceMessageProperty: true
- MaxPendingAsyncRequests: 1000
- replicas: 1
- maxReplicas: 8
- logTopic: persistent://public/default/logging-function-logs
- input:
- topics:
- - persistent://public/default/java-function-input-topic
- typeClassName: java.lang.String
- output:
- topic: persistent://public/default/java-function-output-topic
- typeClassName: java.lang.String
- # Other function configs
- ```
-
-2. Apply the configurations.
-
- ```bash
- kubectl apply -f path/to/source-sample.yaml
- ```
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+
+ This example shows how to auto-scale the number of Pods running Pulsar Functions to `8`.
+
+ 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+ maxReplicas: 8
+ logTopic: persistent://public/default/logging-function-logs
+ input:
+ topics:
+ - persistent://public/default/java-function-input-topic
+ typeClassName: java.lang.String
+ output:
+ topic: persistent://public/default/java-function-output-topic
+ typeClassName: java.lang.String
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
+
+
+
+ This example shows how to auto-scale the number of Pods if 20% CPU is utilized.
+
+ 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+        maxReplicas: 8
+ logTopic: persistent://public/default/logging-function-logs
+ input:
+ topics:
+ - persistent://public/default/java-function-input-topic
+ typeClassName: java.lang.String
+ pod:
+ builtinAutoscaler:
+ - AverageUtilizationCPUPercent20
+ - AverageUtilizationMemoryPercent20
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
+
+
+
+
+ This example shows how to auto-scale the number of Pods based on a customized metrics.
+
+ 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+        maxReplicas: 8
+ logTopic: persistent://public/default/logging-function-logs
+ pod:
+ autoScalingMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 45
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
+
+
+;
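The tabs above cover `maxReplicas`, `builtinAutoscaler`, and `autoScalingMetrics`, but not `autoScalingBehavior`. A minimal sketch of that field is shown below; it assumes the field accepts the standard Kubernetes `HorizontalPodAutoscalerBehavior` structure referenced in the CRD tables, and the numbers are illustrative only.

```yaml
pod:
  autoScalingBehavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60               # remove at most one Pod per minute
    scaleUp:
      policies:
      - type: Percent
        value: 100
        periodSeconds: 60               # at most double the Pods per minute
```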
### Auto-scale Pulsar connectors
This example shows how to auto-scale the number of Pods for running a Pulsar source connector to `5`.
-1. Specify the the `maxReplicas` to `5` in the Pulsar source CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar source connector.
+1. Specify the `maxReplicas` to `5` in the Pulsar source CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar source connector.
**Example**
From 54ae404736e15bca13c9f274cc0e220a421c8cb6 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Mon, 13 Sep 2021 11:54:03 +0800
Subject: [PATCH 05/10] update contents
---
docs/connectors/io-crd-config/sink-crd-config.md | 4 ++--
docs/connectors/io-crd-config/source-crd-config.md | 2 +-
docs/functions/function-crd.md | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/connectors/io-crd-config/sink-crd-config.md b/docs/connectors/io-crd-config/sink-crd-config.md
index c83d511c..d7d94508 100644
--- a/docs/connectors/io-crd-config/sink-crd-config.md
+++ b/docs/connectors/io-crd-config/sink-crd-config.md
@@ -123,6 +123,6 @@ Function Mesh supports customizing the Pod running Pulsar connectors. This table
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | The initialization containers belonging to a Pod. A typical use case could be using an initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
-| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
-| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
\ No newline at end of file
+| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/connectors/io-crd-config/source-crd-config.md b/docs/connectors/io-crd-config/source-crd-config.md
index bb307bb1..eff7da57 100644
--- a/docs/connectors/io-crd-config/source-crd-config.md
+++ b/docs/connectors/io-crd-config/source-crd-config.md
@@ -116,6 +116,6 @@ Function Mesh supports customizing the Pod running connectors. This table lists
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
-| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in connectors. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
diff --git a/docs/functions/function-crd.md b/docs/functions/function-crd.md
index a4e5aae2..6b6e96a1 100644
--- a/docs/functions/function-crd.md
+++ b/docs/functions/function-crd.md
@@ -148,6 +148,6 @@ Function Mesh supports customizing the Pod running function instance. This table
| `ServiceAccountName` | Specify the name of the service account which is used to run Pulsar Functions or connectors in the Function Mesh Worker service.|
| `InitContainers` | Initialization containers belonging to a Pod. A typical use case could be using an Initialization container to download a remote JAR to a local path. |
| `Sidecars` | Sidecar containers run together with the main function container in a Pod. |
-| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
+| `BuiltinAutoscaler` | Specify the built-in autoscaling rules. <br>- CPU-based autoscaling: auto-scale the number of Pods based on the CPU usage (80%, 50%, or 20%). <br>- Memory-based autoscaling: auto-scale the number of Pods based on the memory usage (80%, 50%, or 20%). <br>If you configure the `BuiltinAutoscaler` field, you do not need to configure the `AutoScalingMetrics` and `AutoScalingBehavior` options and vice versa.|
| `AutoScalingMetrics` | Specify how to scale based on customized metrics defined in Pulsar Functions. For details, see [MetricSpec v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#metricspec-v2beta2-autoscaling). |
| `AutoScalingBehavior` | Configure the scaling behavior of the target in both up and down directions (`scaleUp` and `scaleDown` fields respectively). If not specified, the default Kubernetes scaling behaviors are adopted. For details, see [HorizontalPodAutoscalerBehavior v2beta2 autoscaling](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#horizontalpodautoscalerbehavior-v2beta2-autoscaling). |
From 603332ebd8443b61ba5a61e0b097a867e4f6a8ac Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Mon, 13 Sep 2021 13:47:29 +0800
Subject: [PATCH 06/10] update contents
---
docs/scaling.md | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index 5a0e530d..1f945972 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -66,13 +66,15 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
### Auto-scale Pulsar Functions
+This example shows how to auto-scale the number of Pods running Pulsar Functions.
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
- This example shows how to auto-scale the number of Pods running Pulsar Functions to `8`.
+ Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. This example auto-scales the number of Pods to `8`.
1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
@@ -107,7 +109,7 @@ import TabItem from '@theme/TabItem';
- This example shows how to auto-scale the number of Pods if 20% CPU is utilized.
+ Function Mesh supports automatically scaling up the number of Pods based on the built-in autoscaling metric. This example auto-scales the number of Pods if 20% CPU is utilized.
1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD.
@@ -144,7 +146,7 @@ import TabItem from '@theme/TabItem';
- This example shows how to auto-scale the number of Pods based on a customized metrics.
+ Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized
1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
@@ -177,7 +179,6 @@ import TabItem from '@theme/TabItem';
```bash
kubectl apply -f path/to/source-sample.yaml
```
-
;
From cc7a557808a4b6a4cdcfc993e409fbc9876f678b Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Mon, 13 Sep 2021 14:33:01 +0800
Subject: [PATCH 07/10] update contents
---
docs/scaling.md | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index 1f945972..8119e1e0 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -72,7 +72,13 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
+ defaultValue="replica"
+ values={[
+ {label: 'Maximum Number of Replicas', value: 'replica'},
+ {label: 'Built-in Metrics', value: 'builtin'},
+ {label: 'Customized Metrics', value: 'customize'},
+ ]}>
+
Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. This example auto-scales the number of Pods to `8`.
@@ -146,7 +152,7 @@ import TabItem from '@theme/TabItem';
- Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized
+ Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized.
1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
From 9a33fe62dccfc75fe7a2a963ca5980599c0b8885 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Tue, 14 Sep 2021 11:14:19 +0800
Subject: [PATCH 08/10] update comments
---
docs/scaling.md | 218 ++++++++++++++++++++++--------------------------
1 file changed, 98 insertions(+), 120 deletions(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index 8119e1e0..c5c2c5ab 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -66,127 +66,105 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
### Auto-scale Pulsar Functions
-This example shows how to auto-scale the number of Pods running Pulsar Functions.
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
- defaultValue="replica"
- values={[
- {label: 'Maximum Number of Replicas', value: 'replica'},
- {label: 'Built-in Metrics', value: 'builtin'},
- {label: 'Customized Metrics', value: 'customize'},
- ]}>
-
+- Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. In this case, the number of Pods is updated when 80% CPU is utilized.
+
+  1. Set `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` parameter specifies the maximum number of Pods that can be created for running the Pulsar Functions.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+ maxReplicas: 8
+ logTopic: persistent://public/default/logging-function-logs
+ input:
+ topics:
+ - persistent://public/default/java-function-input-topic
+ typeClassName: java.lang.String
+ output:
+ topic: persistent://public/default/java-function-output-topic
+ typeClassName: java.lang.String
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
+
+- Function Mesh supports automatically scaling up the number of Pods based on a built-in autoscaling metric. This example auto-scales the number of Pods if 20% CPU is utilized.
+
+ 1. Specify the CPU-based autoscaling metric in the Pulsar Functions CRD.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+ maxReplicas: 4
+ logTopic: persistent://public/default/logging-function-logs
+ input:
+ topics:
+ - persistent://public/default/java-function-input-topic
+ typeClassName: java.lang.String
+ pod:
+ builtinAutoscaler:
+ - AverageUtilizationCPUPercent20
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
- Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. This example auto-scales the number of Pods to `8`.
-
- 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
-
- ```yaml
- apiVersion: cloud.streamnative.io/v1alpha1
- kind: Function
- metadata:
- name: java-function-sample
- namespace: default
- spec:
- className: org.apache.pulsar.functions.api.examples.ExclamationFunction
- forwardSourceMessageProperty: true
- MaxPendingAsyncRequests: 1000
- replicas: 1
- maxReplicas: 8
- logTopic: persistent://public/default/logging-function-logs
- input:
- topics:
- - persistent://public/default/java-function-input-topic
- typeClassName: java.lang.String
- output:
- topic: persistent://public/default/java-function-output-topic
- typeClassName: java.lang.String
- # Other function configs
- ```
-
- 2. Apply the configurations.
-
- ```bash
- kubectl apply -f path/to/source-sample.yaml
- ```
-
-
-
- Function Mesh supports automatically scaling up the number of Pods based on the built-in autoscaling metric. This example auto-scales the number of Pods if 20% CPU is utilized.
-
- 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD.
-
- ```yaml
- apiVersion: cloud.streamnative.io/v1alpha1
- kind: Function
- metadata:
- name: java-function-sample
- namespace: default
- spec:
- className: org.apache.pulsar.functions.api.examples.ExclamationFunction
- forwardSourceMessageProperty: true
- MaxPendingAsyncRequests: 1000
- replicas: 1
- maxReplicas: 4
- logTopic: persistent://public/default/logging-function-logs
- input:
- topics:
- - persistent://public/default/java-function-input-topic
- typeClassName: java.lang.String
- pod:
- builtinAutoscaler:
- - AverageUtilizationCPUPercent20
- - AverageUtilizationMemoryPercent20
- # Other function configs
- ```
-
- 2. Apply the configurations.
-
- ```bash
- kubectl apply -f path/to/source-sample.yaml
- ```
-
-
-
-
- Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized.
-
- 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
-
- ```yaml
- apiVersion: cloud.streamnative.io/v1alpha1
- kind: Function
- metadata:
- name: java-function-sample
- namespace: default
- spec:
- className: org.apache.pulsar.functions.api.examples.ExclamationFunction
- forwardSourceMessageProperty: true
- MaxPendingAsyncRequests: 1000
- replicas: 1
- maxReplicas: 4
- logTopic: persistent://public/default/logging-function-logs
- pod:
- autoScalingMetrics:
- - type: Resource
- resource:
- name: cpu
- target:
- type: Utilization
- averageUtilization: 45
- # Other function configs
- ```
-
- 2. Apply the configurations.
-
- ```bash
- kubectl apply -f path/to/source-sample.yaml
- ```
-
-;
+- Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized.
+
+ 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
+
+ ```yaml
+ apiVersion: cloud.streamnative.io/v1alpha1
+ kind: Function
+ metadata:
+ name: java-function-sample
+ namespace: default
+ spec:
+ className: org.apache.pulsar.functions.api.examples.ExclamationFunction
+ forwardSourceMessageProperty: true
+ MaxPendingAsyncRequests: 1000
+ replicas: 1
+ maxReplicas: 4
+ logTopic: persistent://public/default/logging-function-logs
+ pod:
+ autoScalingMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 45
+ # Other function configs
+ ```
+
+ 2. Apply the configurations.
+
+ ```bash
+ kubectl apply -f path/to/source-sample.yaml
+ ```
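   Once a configuration with autoscaling is applied, the resulting HorizontalPodAutoscaler can be inspected with standard kubectl commands; the HPA name below is a placeholder, since the exact name is derived from the Function resource.

   ```bash
   # List the HPAs that Function Mesh created in the namespace
   kubectl get hpa -n default

   # Show the observed metric values and recent scaling events for one HPA
   kubectl describe hpa <hpa-name> -n default
   ```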
### Auto-scale Pulsar connectors
From fcaf1477eb12e64bcf4464c8d67dde5e7f1f1813 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Tue, 14 Sep 2021 11:25:53 +0800
Subject: [PATCH 09/10] update contents
---
docs/scaling.md | 60 ++++++++++---------------------------------------
1 file changed, 12 insertions(+), 48 deletions(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index c5c2c5ab..ffe24469 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -64,9 +64,11 @@ By default, autoscaling is disabled (the value of the `maxReplicas` parameter is
Deploy the metrics server in the cluster. The Metrics server provides metrics through the Metrics API. The Horizontal Pod Autoscaler (HPA) uses this API to collect metrics. To learn how to deploy the metrics-server, see the [metrics-server documentation](https://github.com/kubernetes-sigs/metrics-server#deployment).
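For convenience, the metrics-server project documents a one-line installation; verify the exact manifest URL and version against the linked documentation before applying it to a production cluster.

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```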
-### Auto-scale Pulsar Functions
+### Examples
-- Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. In this case, the number of Pods is updated when 80% CPU is utilized.
+These examples describe how to autoscaling the number of Pods running Pulsar Functions.
+
+- Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. In this case, the number of Pods is updated to `8` when 80% CPU is utilized.
1. Set `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` parameter specifies the maximum number of Pods that can be created for running the Pulsar Functions.
@@ -101,7 +103,7 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
- Function Mesh supports automatically scaling up the number of Pods based on a built-in autoscaling metric. This example auto-scales the number of Pods if 20% CPU is utilized.
- 1. Specify the CPU-based autoscaling metric in the Pulsar Functions CRD.
+ 1. Specify the CPU-based autoscaling metric under `pod.builtinAutoscaler` in the Pulsar Functions CRD.
```yaml
apiVersion: cloud.streamnative.io/v1alpha1
@@ -131,10 +133,14 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
```bash
kubectl apply -f path/to/source-sample.yaml
```
-
+
+ >**Note**
+ >
+ > If you specify multiple metrics for the HPA to scale on, the HPA controller evaluates each metric, and proposes a new scale based on that metric. The largest of the proposed scales will be used as the new scale.
+
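+  For example, the following Pod configuration (a sketch that reuses the built-in rule names shown earlier) asks the HPA to evaluate both CPU and memory usage; the rule that proposes the larger number of Pods wins.

```yaml
pod:
  builtinAutoscaler:
    - AverageUtilizationCPUPercent20      # proposes a scale based on CPU usage
    - AverageUtilizationMemoryPercent20   # proposes a scale based on memory usage
```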
- Function Mesh supports automatically scaling up the number of Pods based on a customized autoscaling metric. This example auto-scales the number of Pods if 45% CPU is utilized.
- 1. Specify the `maxReplicas` to `8` in the Pulsar Functions CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar Functions.
+ 1. Specify the customized autoscaling metric under `pod.autoScalingMetrics` in the Pulsar Functions CRD.
```yaml
apiVersion: cloud.streamnative.io/v1alpha1
@@ -164,46 +170,4 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
```bash
kubectl apply -f path/to/source-sample.yaml
- ```
-
-### Auto-scale Pulsar connectors
-
-This example shows how to auto-scale the number of Pods for running a Pulsar source connector to `5`.
-
-1. Specify the `maxReplicas` to `5` in the Pulsar source CRD. The `maxReplicas` refers to the maximum number of Pods that are required for running the Pulsar source connector.
-
- **Example**
-
- ```yaml
- apiVersion: compute.functionmesh.io/v1alpha1
- kind: Source
- metadata:
- name: source-sample
- spec:
- className: org.apache.pulsar.io.debezium.mongodb.DebeziumMongoDbSource
- replicas: 1
- maxReplicas: 5
- replicas: 1
- maxReplicas: 1
- output:
- producerConf:
- maxPendingMessages: 1000
- maxPendingMessagesAcrossPartitions: 50000
- useThreadLocalProducers: true
- topic: persistent://public/default/destination
- typeClassName: org.apache.pulsar.common.schema.KeyValue
- resources:
- limits:
- cpu: "0.2"
- memory: 1.1G
- requests:
- cpu: "0.1"
- memory: 1G
- # Other configurations
- ```
-
-2. Apply the configurations.
-
- ```bash
- kubectl apply -f path/to/source-sample.yaml
- ```
\ No newline at end of file
+ ```
\ No newline at end of file
From d0c6acfa14025947fecf510e9ff8f2c9dc7212c5 Mon Sep 17 00:00:00 2001
From: HuanliMeng <48120384+Huanli-Meng@users.noreply.github.com>
Date: Tue, 14 Sep 2021 11:28:20 +0800
Subject: [PATCH 10/10] update contents
---
docs/scaling.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/scaling.md b/docs/scaling.md
index ffe24469..0df7a5aa 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -66,7 +66,7 @@ Deploy the metrics server in the cluster. The Metrics server provides metrics th
### Examples
-These examples describe how to autoscaling the number of Pods running Pulsar Functions.
+These examples describe how to auto-scale the number of Pods running Pulsar Functions.
- Function Mesh supports automatically scaling up the number of Pods by setting the `maxReplicas` parameter. In this case, the number of Pods is updated to `8` when 80% CPU is utilized.
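Per the description above, setting only `maxReplicas` (without any `pod` autoscaling fields) scales on 80% CPU utilization by default. Expressed with the customized-metric syntax used in the last example, that default is roughly equivalent to the following sketch:

```yaml
pod:
  autoScalingMetrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # the default CPU target implied when only maxReplicas is set
```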