From 527d63ad7af366b4fe58180a5636cd2fdbfa9c5d Mon Sep 17 00:00:00 2001
From: Weijie Guo
Date: Thu, 26 Dec 2024 17:03:20 +0800
Subject: [PATCH] [docs] Use `config.yaml` for flink version >= 1.19

---
 docs/content/cdc-ingestion/mysql-cdc.md       | 2 +-
 docs/content/flink/quick-start.md             | 2 +-
 docs/content/maintenance/write-performance.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/content/cdc-ingestion/mysql-cdc.md b/docs/content/cdc-ingestion/mysql-cdc.md
index e64efde2dc91..8a16418a1f2e 100644
--- a/docs/content/cdc-ingestion/mysql-cdc.md
+++ b/docs/content/cdc-ingestion/mysql-cdc.md
@@ -261,5 +261,5 @@ to avoid potential name conflict.
 ## FAQ
 
 1. Chinese characters in records ingested from MySQL are garbled.
-* Try to set `env.java.opts: -Dfile.encoding=UTF-8` in `flink-conf.yaml`
+* Try to set `env.java.opts: -Dfile.encoding=UTF-8` in `flink-conf.yaml`(Flink version < 1.19) or `config.yaml`(Flink version >= 1.19)
 (the option is changed to `env.java.opts.all` since Flink-1.17).
\ No newline at end of file
diff --git a/docs/content/flink/quick-start.md b/docs/content/flink/quick-start.md
index e50acfe484e1..b23f976e63b6 100644
--- a/docs/content/flink/quick-start.md
+++ b/docs/content/flink/quick-start.md
@@ -104,7 +104,7 @@ cp flink-shaded-hadoop-2-uber-*.jar /lib/
 
 **Step 4: Start a Flink Local Cluster**
 
-In order to run multiple Flink jobs at the same time, you need to modify the cluster configuration in `/conf/flink-conf.yaml`.
+In order to run multiple Flink jobs at the same time, you need to modify the cluster configuration in `/conf/flink-conf.yaml`(Flink version < 1.19) or `/conf/config.yaml`(Flink version >= 1.19).
 
 ```yaml
 taskmanager.numberOfTaskSlots: 2
diff --git a/docs/content/maintenance/write-performance.md b/docs/content/maintenance/write-performance.md
index ade2c3353e3c..55d9021aedd4 100644
--- a/docs/content/maintenance/write-performance.md
+++ b/docs/content/maintenance/write-performance.md
@@ -28,7 +28,7 @@ under the License.
 Paimon's write performance is closely related to checkpoint, so if you need greater write
 throughput:
 
-1. Flink Configuration (`'flink-conf.yaml'` or `SET` in SQL): Increase the checkpoint interval
+1. Flink Configuration (`'flink-conf.yaml'/'config.yaml'` or `SET` in SQL): Increase the checkpoint interval
 (`'execution.checkpointing.interval'`), increase max concurrent checkpoints to 3
 (`'execution.checkpointing.max-concurrent-checkpoints'`), or just use batch mode.
 2. Increase `write-buffer-size`.
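
For context on the options touched above: since Flink 1.19 the flat `flink-conf.yaml` is superseded by `config.yaml`, which accepts standard (nested) YAML. Below is a minimal, illustrative sketch of how the settings referenced in these three doc pages could be written in the new file; the concrete values (slot count, checkpoint interval) are placeholders for this sketch, not recommendations made by the patch itself.

```yaml
# config.yaml (Flink >= 1.19), standard YAML with nested keys.
# Values are illustrative placeholders, not part of this patch.
env:
  java:
    opts:
      all: -Dfile.encoding=UTF-8   # avoids garbled Chinese characters in MySQL CDC ingestion
taskmanager:
  numberOfTaskSlots: 2             # enough slots to run multiple Flink jobs on a local cluster
execution:
  checkpointing:
    interval: 2 min                # a larger interval gives Paimon higher write throughput
    max-concurrent-checkpoints: 3
```

With Flink < 1.19 the same options keep the flat dotted form already shown in these pages, e.g. `taskmanager.numberOfTaskSlots: 2` in `flink-conf.yaml`.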