diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/tools/PredicateDoc.java b/connect/runtime/src/main/java/org/apache/kafka/connect/tools/PredicateDoc.java
index d4399d6cb9ac1..dedbe13c8fe42 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/tools/PredicateDoc.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/tools/PredicateDoc.java
@@ -61,7 +61,7 @@ private static void printPredicateHtml(PrintStream out, DocInfo docInfo) {
         out.println("<div id=\"" + docInfo.predicateName + "\">");
         out.print("<h5>");
-        out.print(docInfo.predicateName);
+        out.print("<a href=\"#" + docInfo.predicateName + "\">" + docInfo.predicateName + "</a>");
         out.println("</h5>");
         out.println(docInfo.overview);
diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/tools/TransformationDoc.java b/connect/runtime/src/main/java/org/apache/kafka/connect/tools/TransformationDoc.java
index 5771a6b0ed757..543703a13ac07 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/tools/TransformationDoc.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/tools/TransformationDoc.java
@@ -75,7 +75,7 @@ private static void printTransformationHtml(PrintStream out, DocInfo docInfo) {
         out.println("<div id=\"" + docInfo.transformationName + "\">");
         out.print("<h5>");
-        out.print(docInfo.transformationName);
+        out.print("<a href=\"#" + docInfo.transformationName + "\">" + docInfo.transformationName + "</a>");
         out.println("</h5>");
         out.println(docInfo.overview);
diff --git a/docs/connect.html b/docs/connect.html
index 07f8778f002f2..66d621248dec5 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -41,8 +41,7 @@
 
 <h3><a id="connect_running" href="#connect_running">Running Kafka Connect</a></h3>
 In standalone mode all work is performed in a single process. This configuration is simpler to setup and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:
 <pre class="brush: bash;">
-    &gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
-    </pre>
+&gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]</pre>
 
 The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by config/server.properties. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs: