From 392c6406692a4b9649aa1a7602a11657d13bed46 Mon Sep 17 00:00:00 2001
From: Luke Chen
Date: Tue, 14 Apr 2020 19:22:49 +0800
Subject: [PATCH 1/2] update the deprecated --zookeeper option in the
 documentation into --bootstrap-server

---
 docs/configuration.html |  8 +++----
 docs/ops.html           | 46 ++++++++++++++++++++---------------------
 docs/security.html      |  8 +++----
 3 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/docs/configuration.html b/docs/configuration.html
index dc17333e9dbb3..9a239d64e54b3 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -100,7 +100,7 @@
Updating Password Configs in ZooKeeper Before Starting Brokers
on broker 0:
-  > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type brokers --entity-name 0 --alter --add-config
+  > bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config
     'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'
   
@@ -240,18 +240,18 @@

3.2 Topic-Level Configs

Overrides can also be changed or set later using the alter configs command. This example updates the max message size for my-topic:
-  > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
+  > bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic
       --alter --add-config max.message.bytes=128000
   
To check overrides set on the topic you can do
-  > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --describe
+  > bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe
   
To remove an override you can do
-  > bin/kafka-configs.sh --zookeeper localhost:2181  --entity-type topics --entity-name my-topic
+  > bin/kafka-configs.sh --bootstrap-server localhost:9092  --entity-type topics --entity-name my-topic
       --alter --delete-config max.message.bytes
   
diff --git a/docs/ops.html b/docs/ops.html
index a09867351f8cc..5c8e96ed5f9c6 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -308,7 +308,7 @@ Automatically mi
Once the json file is ready, use the partition reassignment tool to generate a candidate assignment:
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
   Current partition replica assignment
 
   {"version":1,
@@ -334,7 +334,7 @@ Automatically mi

The tool generates a candidate assignment that will move all partitions from topics foo1,foo2 to brokers 5,6. Note, however, that at this point the partition movement has not started; the tool merely tells you the current assignment and the proposed new assignment. The current assignment should be saved in case you want to roll back to it. The new assignment should be saved in a json file (e.g. expand-cluster-reassignment.json) to be input to the tool with the --execute option as follows:
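As an aside on the file formats involved: topics-to-move.json is a plain JSON document naming the topics whose partitions should migrate, and the file passed to --execute is simply the proposed-assignment JSON printed by --generate, saved verbatim. A minimal Python sketch that writes the topics file (the topic names match the example above; everything else is stock json-module usage):

```python
import json

# Format consumed by kafka-reassign-partitions.sh --topics-to-move-json-file:
# a version field plus the list of topics whose partitions should be moved.
topics_to_move = {
    "version": 1,
    "topics": [{"topic": "foo1"}, {"topic": "foo2"}],
}

with open("topics-to-move.json", "w") as f:
    json.dump(topics_to_move, f, indent=2)
```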

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --execute
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -360,7 +360,7 @@ Automatically mi

Finally, the --verify option can be used with the tool to check the status of the partition reassignment. Note that the same expand-cluster-reassignment.json (used with the --execute option) should be used with the --verify option:

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
   Reassignment of partition [foo1,1] is in progress
@@ -382,7 +382,7 @@ 
Then, use the json file with the --execute option to start the reassignment process:
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -400,7 +400,7 @@ 

The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --verify
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
   Reassignment of partition [foo2,1] completed successfully
@@ -422,7 +422,7 @@ 

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
   Current partition replica assignment

   {"version":1,
@@ -436,7 +436,7 @@

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo,0] completed successfully

@@ -453,13 +453,13 @@

Limiting Bandwidth Usage during Da

There are two interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and alter the throttle values directly.

So for example, if you were to execute a rebalance with the below command, it would move partitions at no more than 50MB/s.
-  $ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000
+  $ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000
When you execute this script you will see the throttle engage:
   The throttle limit was set to 50000000 B/s
   Successfully started reassignment of partitions.
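Since the --throttle value is in bytes per second (50000000 B/s is roughly 50 MB/s), a rough lower bound on how long a reassignment will take can be sketched with simple arithmetic. The helper below is hypothetical, not part of the Kafka tooling:

```python
def reassignment_eta_hours(bytes_to_move: int, throttle_bytes_per_sec: int) -> float:
    """Rough lower bound on reassignment duration, in hours.

    The real duration also depends on leader/follower throttle interplay
    and on normal replication traffic sharing the same quota.
    """
    return bytes_to_move / throttle_bytes_per_sec / 3600

# Moving 1 TB of partition data under the 50000000 B/s throttle above:
print(round(reassignment_eta_hours(10**12, 50_000_000), 1))
```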

Should you wish to alter the throttle, during a rebalance, say to increase the throughput so it completes quicker, you can do this by re-running the execute command passing the same reassignment-json-file:

-  $ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
+  $ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
   There is an existing assignment running.
   The throttle limit was set to 700000000 B/s
@@ -470,7 +470,7 @@

Limiting Bandwidth Usage during Da

When the --verify option is executed, and the reassignment has completed, the script will confirm that the throttle was removed:

-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --verify --reassignment-json-file bigger-cluster.json
+  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --verify --reassignment-json-file bigger-cluster.json
   Status of partition reassignment:
   Reassignment of partition [my-topic,1] completed successfully
   Reassignment of partition [my-topic,0] completed successfully
@@ -493,7 +493,7 @@ 

Limiting Bandwidth Usage during Da

To view the throttle limit configuration:

-  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type brokers
+  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
   Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
   Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
@@ -503,7 +503,7 @@

Limiting Bandwidth Usage during Da

To view the list of throttled replicas:

-  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type topics
+  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
   Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
       follower.replication.throttled.replicas=1:101,0:102
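The throttled-replicas value shown above is a comma-separated list of partition:broker pairs. A small hypothetical parser, just to make the format concrete:

```python
def parse_throttled_replicas(value: str):
    """Parse "1:102,0:101" into [(partition, broker), ...].

    A trailing comma (as printed by kafka-configs.sh above) is tolerated.
    """
    pairs = []
    for item in value.split(","):
        item = item.strip()
        if not item:
            continue
        partition, broker = item.split(":")
        pairs.append((int(partition), int(broker)))
    return pairs

print(parse_throttled_replicas("1:102,0:101,"))
```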
@@ -552,19 +552,19 @@

Setting quotas

Configure custom quota for (user=user1, client-id=clientA):

-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
   Updated config for entity: user-principal 'user1', client-id 'clientA'.
   
Configure custom quota for user=user1:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
   Updated config for entity: user-principal 'user1'.
   
Configure custom quota for client-id=clientA:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
   Updated config for entity: client-id 'clientA'.
   
@@ -572,46 +572,46 @@

Setting quotas

Configure default client-id quota for user=userA:

-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
   Updated config for entity: user-principal 'user1', default client-id.
   
Configure default quota for user:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
   Updated config for entity: default user-principal.
   
Configure default quota for client-id:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
   Updated config for entity: default client-id.
   
Here's how to describe the quota for a given (user, client-id):
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
   Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   
Describe quota for a given user:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
   Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   
Describe quota for a given client-id:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type clients --entity-name clientA
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
   Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   
If entity name is not specified, all entities of the specified type are described. For example, describe all users:
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users
   Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   
Similarly for (user, client):
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-type clients
+  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
   Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
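When several of the quota entries above could apply to a request, Kafka resolves them from most to least specific: an exact (user, client-id) pair first, then user-level and client-id-level entries, with --entity-default entries matching where no exact name does. A minimal Python sketch of that precedence order over a hypothetical in-memory map (the map and helper are illustrative, not a Kafka API):

```python
DEFAULT = "<default>"  # stands in for an --entity-default entry

def resolve_quota(quotas, user, client_id):
    """Resolve the quota config applying to (user, client_id).

    `quotas` maps (user_entry, client_entry) to a config dict, where an
    entry is an exact name, DEFAULT for --entity-default, or None when
    that entity type is absent. The search order below follows the
    most-specific-first precedence described in the quota documentation.
    """
    search_order = [
        (user, client_id),
        (user, DEFAULT),
        (user, None),
        (DEFAULT, client_id),
        (DEFAULT, DEFAULT),
        (DEFAULT, None),
        (None, client_id),
        (None, DEFAULT),
    ]
    for key in search_order:
        if key in quotas:
            return quotas[key]
    return None

quotas = {
    ("user1", "clientA"): {"producer_byte_rate": 1024},
    ("user1", None): {"producer_byte_rate": 4096},
    (None, DEFAULT): {"producer_byte_rate": 8192},
}
print(resolve_quota(quotas, "user1", "clientB"))  # falls through to the user-level entry
```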
   
diff --git a/docs/security.html b/docs/security.html
index bee628fa6ffa7..a210e4a8bfbf2 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -754,22 +754,22 @@

7.3 Authentication using SASL

Create SCRAM credentials for user alice with password alice-secret:

-    > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
+    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
         

The default iteration count of 4096 is used if iterations are not specified. A random salt is created, and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in ZooKeeper. See RFC 5802 for details on SCRAM identity and the individual fields.
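The stored fields can be reproduced from RFC 5802's definitions: SaltedPassword = Hi(password, salt, i) (i.e. PBKDF2), ClientKey = HMAC(SaltedPassword, "Client Key"), StoredKey = H(ClientKey), and ServerKey = HMAC(SaltedPassword, "Server Key"). A sketch for SCRAM-SHA-256 using only the Python standard library (the salt here is illustrative; Kafka generates a random one):

```python
import hashlib
import hmac

def scram_sha256_identity(password: bytes, salt: bytes, iterations: int):
    """Derive (StoredKey, ServerKey) per RFC 5802, using SHA-256."""
    # SaltedPassword := Hi(password, salt, i), which is PBKDF2-HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key

stored_key, server_key = scram_sha256_identity(b"alice-secret", b"illustrative-salt", 8192)
print(stored_key.hex())
```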

The following examples also require a user admin for inter-broker communication which can be created using:

-    > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
+    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
         

Existing credentials may be listed using the --describe option:

-    > bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name alice
+    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice
         

Credentials may be deleted for one or more SCRAM mechanisms using the --delete option:

-    > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
+    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
         
  • Configuring Kafka Brokers
    From bcc815508e75b8b9ed9b277e9e9d937d0ea1ca29 Mon Sep 17 00:00:00 2001
    From: Luke Chen
    Date: Wed, 15 Apr 2020 16:16:27 +0800
    Subject: [PATCH 2/2] address reviewer's comments

    1. keep the --zookeeper argument in the places that talk about configuring
       ZooKeeper before the brokers are up
    2. use --zookeeper with --zk-tls-config-file when doing security-related
       actions, e.g. updating passwords

    ---
     docs/configuration.html | 2 +-
     docs/ops.html           | 2 +-
     docs/security.html      | 8 ++++----
     3 files changed, 6 insertions(+), 6 deletions(-)

    diff --git a/docs/configuration.html b/docs/configuration.html
    index 9a239d64e54b3..9e913a2e6da05 100644
    --- a/docs/configuration.html
    +++ b/docs/configuration.html
    @@ -100,7 +100,7 @@
    Updating Password Configs in ZooKeeper Before Starting Brokers
    on broker 0:
    -  > bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config
    +  > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --entity-type brokers --entity-name 0 --alter --add-config
         'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'
       
    diff --git a/docs/ops.html b/docs/ops.html
    index 5c8e96ed5f9c6..bfa42a9664b7d 100644
    --- a/docs/ops.html
    +++ b/docs/ops.html
    @@ -94,7 +94,7 @@

    Balanc

  • You can also set this to false, but you will then need to manually restore leadership to the restored replicas by running the command:
    -  > bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
    +  > bin/kafka-preferred-replica-election.sh --bootstrap-server broker_host:port
       

    Balancing Replicas Across Racks

    diff --git a/docs/security.html b/docs/security.html
    index a210e4a8bfbf2..984a1a96d610f 100644
    --- a/docs/security.html
    +++ b/docs/security.html
    @@ -754,22 +754,22 @@

    7.3 Authentication using SASL

    Create SCRAM credentials for user alice with password alice-secret:

    -    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
    +    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
             

    The default iteration count of 4096 is used if iterations are not specified. A random salt is created, and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in ZooKeeper. See RFC 5802 for details on SCRAM identity and the individual fields.

    The following examples also require a user admin for inter-broker communication which can be created using:

    -    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
    +    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
             

    Existing credentials may be listed using the --describe option:

    -    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice
    +    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice
             

    Credentials may be deleted for one or more SCRAM mechanisms using the --delete option:

    -    > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
    +    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
             
  • Configuring Kafka Brokers