From f6e35dec9bf330c3531fd95c6566070d4ddf0457 Mon Sep 17 00:00:00 2001
From: Anna Povzner
- * A destination node is ready to send data if ANY one of its partition is not backing off the send and ANY of the
- * following are true :
+ * A destination node is ready to send data if:
- *
*/
public ReadyCheckResult ready(Cluster cluster, long nowMs) {
@@ -282,7 +290,7 @@ public ReadyCheckResult ready(Cluster cluster, long nowMs) {
Node leader = cluster.leaderFor(part);
if (leader == null) {
unknownLeadersExist = true;
- } else if (!readyNodes.contains(leader)) {
+ } else if (!readyNodes.contains(leader) && !muted.contains(part)) {
synchronized (deque) {
RecordBatch batch = deque.peekFirst();
if (batch != null) {
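The muted set consulted above keeps a partition out of consideration while it already has a batch in flight, which is what preserves per-partition ordering. A minimal sketch of the intended lifecycle, assuming hypothetical mute/unmute helpers (the names mirror the diff, not necessarily the final API):

import java.util.HashSet;
import java.util.Set;

import org.apache.kafka.common.TopicPartition;

public class MutedPartitions {
    private final Set<TopicPartition> muted = new HashSet<>();

    // Called when a batch for tp is drained and sent: ready() and drain()
    // will now skip tp, so a retry of the in-flight batch cannot be
    // overtaken by a newer batch.
    public void mutePartition(TopicPartition tp) {
        muted.add(tp);
    }

    // Called when the in-flight request for tp completes (success or failure).
    public void unmutePartition(TopicPartition tp) {
        muted.remove(tp);
    }

    public boolean isMuted(TopicPartition tp) {
        return muted.contains(tp);
    }
}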
@@ -333,7 +341,10 @@ public boolean hasUnsent() {
* @param now The current unix time in milliseconds
* @return A list of {@link RecordBatch} for each node specified with total size less than the requested maxSize.
*/
- public Map

@@ -70,15 +71,6 @@ public class ProducerConfig extends AbstractConfig {
 /** batch.size */
public static final String BATCH_SIZE_CONFIG = "batch.size";
- private static final String BATCH_SIZE_DOC = "The producer will attempt to batch records together into fewer requests whenever multiple records are being sent" + " to the same partition. This helps performance on both the client and the server. This configuration controls the "
+ private static final String BATCH_SIZE_DOC = "The producer will attempt to batch records together into fewer requests whenever multiple records are being sent"
+ + " to the same partition. This helps performance on both the client and the server. This configuration controls the "
+ "default batch size in bytes. "
+ "No attempt will be made to batch records larger than this size. "
+ "Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. "
+ "A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable "
+ "batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a "
+ "buffer of the specified batch size in anticipation of additional records.";
+ "buffer.memory */
- public static final String BUFFER_MEMORY_CONFIG = "buffer.memory";
- private static final String BUFFER_MEMORY_DOC = "The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are " + "sent faster than they can be delivered to the server the producer will either block or throw an exception based "
- + "on the preference specified by block.on.buffer.full. "
- + "acks */
public static final String ACKS_CONFIG = "acks";
private static final String ACKS_DOC = "The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the "
@@ -103,20 +95,22 @@ public class ProducerConfig extends AbstractConfig {
*/
@Deprecated
public static final String TIMEOUT_CONFIG = "timeout.ms";
- private static final String TIMEOUT_DOC = "The configuration controls the maximum amount of time the server will wait for acknowledgments from followers to " + "meet the acknowledgment requirements the producer has specified with the acks configuration. If the "
+ private static final String TIMEOUT_DOC = "The configuration controls the maximum amount of time the server will wait for acknowledgments from followers to "
+ + "meet the acknowledgment requirements the producer has specified with the acks configuration. If the "
+ "requested number of acknowledgments are not met when the timeout elapses an error will be returned. This timeout "
+ "is measured on the server side and does not include the network latency of the request.";
/** linger.ms */
public static final String LINGER_MS_CONFIG = "linger.ms";
- private static final String LINGER_MS_DOC = "The producer groups together any records that arrive in between request transmissions into a single batched request. " + "Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to "
+ private static final String LINGER_MS_DOC = "The producer groups together any records that arrive in between request transmissions into a single batched request. "
+ + "Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to "
+ "reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount "
+ "of artificial delay—that is, rather than immediately sending out a record the producer will wait for up to "
+ "the given delay to allow other records to be sent so that the sends can be batched together. This can be thought "
+ "of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once "
- + "we get batch.size worth of records for a partition it will be sent immediately regardless of this "
+ + "we get " + BATCH_SIZE_CONFIG + " worth of records for a partition it will be sent immediately regardless of this "
+ "setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the "
- + "specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, "
+ + "specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting " + LINGER_MS_CONFIG + "=5, "
+ "for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absense of load.";
/** client.id */
@@ -130,24 +124,47 @@ public class ProducerConfig extends AbstractConfig {
/** max.request.size */
public static final String MAX_REQUEST_SIZE_CONFIG = "max.request.size";
- private static final String MAX_REQUEST_SIZE_DOC = "The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server " + "has its own cap on record size which may be different from this. This setting will limit the number of record "
+ private static final String MAX_REQUEST_SIZE_DOC = "The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server "
+ + "has its own cap on record size which may be different from this. This setting will limit the number of record "
+ "batches the producer will send in a single request to avoid sending huge requests.";
/** reconnect.backoff.ms */
public static final String RECONNECT_BACKOFF_MS_CONFIG = CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG;
+ /** max.block.ms */
+ public static final String MAX_BLOCK_MS_CONFIG = "max.block.ms";
+ private static final String MAX_BLOCK_MS_DOC = "The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. "
+ + "These methods can be blocked either because the buffer is full or metadata is unavailable. "
+ + "Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.";
+
/** block.on.buffer.full */
/**
* @deprecated This config will be removed in a future release. Also, the {@link #METADATA_FETCH_TIMEOUT_CONFIG} is no longer honored when this property is set to true.
*/
@Deprecated
public static final String BLOCK_ON_BUFFER_FULL_CONFIG = "block.on.buffer.full";
- private static final String BLOCK_ON_BUFFER_FULL_DOC = "When our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. By default " + "this setting is true and we block, however in some scenarios blocking is not desirable and it is better to "
- + "immediately give an error. Setting this to false will accomplish that: the producer will throw a BufferExhaustedException if a record is sent and the buffer space is full.";
+ private static final String BLOCK_ON_BUFFER_FULL_DOC = "When our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. "
+ + "By default this setting is false and the producer will throw a BufferExhaustedException if a record is sent and the buffer space is full. "
+ + "However in some scenarios getting an error is not desirable and it is better to block. Setting this to true will accomplish that."
+ + "If this property is set to true, parameter " + METADATA_FETCH_TIMEOUT_CONFIG + " is not longer honored."
+ + "" + MAX_BLOCK_MS_CONFIG + " should be used instead.";
+
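To make the blocking behavior concrete, a hedged sketch of what a caller sees when buffer space or metadata stays unavailable for longer than max.block.ms; the address, topic, and values are illustrative only:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;

public class BlockingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 1000); // block at most one second in send()
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("test-topic", "value"));
            } catch (TimeoutException e) {
                // If buffer space or metadata is still unavailable after
                // max.block.ms, send() gives up and throws rather than
                // blocking indefinitely.
            }
        }
    }
}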
+ /** buffer.memory */
+ public static final String BUFFER_MEMORY_CONFIG = "buffer.memory";
+ private static final String BUFFER_MEMORY_DOC = "The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are "
+ + "sent faster than they can be delivered to the server the producer will either block or throw an exception based "
+ + "on the preference specified by " + BLOCK_ON_BUFFER_FULL_CONFIG + ". "
+ + "retries */
public static final String RETRIES_CONFIG = "retries";
- private static final String RETRIES_DOC = "Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error." + " Note that this retry is no different than if the client resent the record upon receiving the "
+ private static final String RETRIES_DOC = "Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error."
+ + " Note that this retry is no different than if the client resent the record upon receiving the "
+ "error. Allowing retries will potentially change the ordering of records because if two records are "
+ "sent to a single partition, and the first fails and is retried but the second succeeds, then the second record "
+ "may appear first.";
@@ -157,7 +174,8 @@ public class ProducerConfig extends AbstractConfig {
/** compression.type */
public static final String COMPRESSION_TYPE_CONFIG = "compression.type";
- private static final String COMPRESSION_TYPE_DOC = "The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid " + " values are none, gzip, snappy, or lz4. "
+ private static final String COMPRESSION_TYPE_DOC = "The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid "
+ + " values are none, gzip, snappy, or lz4. "
+ "Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).";
/** metrics.sample.window.ms */
@@ -190,12 +208,6 @@ public class ProducerConfig extends AbstractConfig {
public static final String PARTITIONER_CLASS_CONFIG = "partitioner.class";
private static final String PARTITIONER_CLASS_DOC = "Partitioner class that implements the Partitioner interface.";
- /** max.block.ms */
- public static final String MAX_BLOCK_MS_CONFIG = "max.block.ms";
- private static final String MAX_BLOCK_MS_DOC = "The configuration controls how long {@link KafkaProducer#send()} and {@link KafkaProducer#partitionsFor} will block."
- + "These methods can be blocked either because the buffer is full or metadata unavailable."
- + "Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.";
-
/** request.timeout.ms */
public static final String REQUEST_TIMEOUT_MS_CONFIG = CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG;
private static final String REQUEST_TIMEOUT_MS_DOC = CommonClientConfigs.REQUEST_TIMEOUT_MS_DOC;
From 324b0c85f603005dceee69033b8fbffc7ef95281 Mon Sep 17 00:00:00 2001
From: Rajini Sivaram

+ private static String toHtml() {
+ final StringBuilder b = new StringBuilder();
+ b.append("<table class=\"data-table\"><tbody>\n");
+ b.append("<tr><th>Name</th><th>Key</th></tr>\n");
+ for (ApiKeys key : ApiKeys.values()) {
+ b.append("<tr>");
+ b.append("<td>").append(key.name).append("</td>");
+ b.append("<td>").append(key.id).append("</td>");
+ b.append("</tr>\n");
+ }
+ b.append("</tbody></table>\n");
+ return b.toString();
+ }
+
+ public static void main(String[] args) {
+ System.out.println(toHtml());
+ }
+
 }
diff --git a/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java b/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
index e7098fc05fcdd..ab299af47486c 100644
--- a/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
+++ b/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
@@ -48,6 +48,7 @@
import org.apache.kafka.common.errors.RecordBatchTooLargeException;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.common.errors.ReplicaNotAvailableException;
+import org.apache.kafka.common.errors.RetriableException;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.errors.TopicAuthorizationException;
import org.apache.kafka.common.errors.UnknownMemberIdException;
@@ -208,4 +209,37 @@ public static Errors forException(Throwable t) {
}
return UNKNOWN;
}
+
+ private static String toHtml() {
+ final StringBuilder b = new StringBuilder();
+ b.append("<table class=\"data-table\"><tbody>\n");
+ b.append("<tr>");
+ b.append("<th>Error</th>\n");
+ b.append("<th>Code</th>\n");
+ b.append("<th>Retriable</th>\n");
+ b.append("<th>Description</th>\n");
+ b.append("</tr>\n");
+ for (Errors error : Errors.values()) {
+ b.append("<tr>");
+ b.append("<td>");
+ b.append(error.name());
+ b.append("</td>");
+ b.append("<td>");
+ b.append(error.code());
+ b.append("</td>");
+ b.append("<td>");
+ b.append(error.exception() != null && error.exception() instanceof RetriableException ? "True" : "False");
+ b.append("</td>");
+ b.append("<td>");
+ b.append(error.exception() != null ? error.exception().getMessage() : "");
+ b.append("</td>");
+ b.append("</tr>\n");
+ }
+ b.append("</tbody></table>\n");
+ return b.toString();
+ }
+
+ public static void main(String[] args) {
+ System.out.println(toHtml());
+ }
 }
diff --git a/clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java b/clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java
index 3787d2cecf23c..a77bf8cbb8623 100644
--- a/clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java
+++ b/clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java
@@ -19,6 +19,12 @@
import org.apache.kafka.common.protocol.types.ArrayOf;
import org.apache.kafka.common.protocol.types.Field;
import org.apache.kafka.common.protocol.types.Schema;
+import org.apache.kafka.common.protocol.types.Type;
+
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.Map;
+import java.util.Set;
import static org.apache.kafka.common.protocol.types.Type.BYTES;
import static org.apache.kafka.common.protocol.types.Type.INT16;
@@ -750,4 +756,164 @@ public class Protocol {
+ " but " + RESPONSES[api.id].length + " response versions.");
}
+ private static String indentString(int size) {
+ StringBuilder b = new StringBuilder(size);
+ for (int i = 0; i < size; i++)
+ b.append(" ");
+ return b.toString();
+ }
+
+ private static void schemaToBnfHtml(Schema schema, StringBuilder b, int indentSize) {
+ final String indentStr = indentString(indentSize);
+ final Map<String, Type> subTypes = new LinkedHashMap<>();
+ }
+
+ private static void schemaToFieldTableHtml(Schema schema, StringBuilder b) {
+ b.append("<table class=\"data-table\"><tbody>\n");
+ b.append("<tr><th>Field</th><th>Description</th></tr>\n");
+ for (Field field : schema.fields()) {
+ b.append("<tr>");
+ b.append("<td>").append(field.name).append("</td>");
+ b.append("<td>").append(field.doc).append("</td>");
+ b.append("</tr>\n");
+ }
+ b.append("</tbody></table>\n");
+ }
+
+ public static String toHtml() {
+ final StringBuilder b = new StringBuilder();
+ b.append("Headers:
\n");
+
+ b.append("");
+ b.append("Request Header => ");
+ schemaToBnfHtml(REQUEST_HEADER, b, 2);
+ b.append("\n");
+ schemaToFieldTableHtml(REQUEST_HEADER, b);
+
+ b.append("");
+ b.append("Response Header => ");
+ schemaToBnfHtml(RESPONSE_HEADER, b, 2);
+ b.append("\n");
+ schemaToFieldTableHtml(RESPONSE_HEADER, b);
+
+ for (ApiKeys key : ApiKeys.values()) {
+ // Key
+ b.append("");
+ b.append(key.name);
+ b.append(" API (Key: ");
+ b.append(key.id);
+ b.append("):
\n\n");
+ // Requests
+ b.append("Requests:
\n");
+ Schema[] requests = REQUESTS[key.id];
+ for (int i = 0; i < requests.length; i++) {
+ Schema schema = requests[i];
+ // Schema
+ if (schema != null) {
+ b.append("");
+ b.append(key.name);
+ b.append(" Request (Version: ");
+ b.append(i);
+ b.append(") => ");
+ schemaToBnfHtml(requests[i], b, 2);
+ b.append("");
+ schemaToFieldTableHtml(requests[i], b);
+ }
+ b.append("
\n");
+ Schema[] responses = RESPONSES[key.id];
+ for (int i = 0; i < responses.length; i++) {
+ Schema schema = responses[i];
+ // Schema
+ if (schema != null) {
+ b.append("");
+ b.append(key.name);
+ b.append(" Response (Version: ");
+ b.append(i);
+ b.append(") => ");
+ schemaToBnfHtml(responses[i], b, 2);
+ b.append("");
+ schemaToFieldTableHtml(responses[i], b);
+ }
+ b.append("
More details about broker configuration can be found in the scala class kafka.server.KafkaConfig.
For those interested in the legacy Scala producer configs, information can be found here.
@@ -330,7 +330,7 @@

This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described here.

Kafka uses a binary protocol over TCP. The protocol defines all apis as request response message pairs. All messages are size delimited and are made up of the following primitive types.

The client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response message. No handshake is required on connection or disconnection. TCP is happier if you maintain persistent connections used for many requests to amortize the cost of the TCP handshake, but beyond this penalty connecting is pretty cheap.

The client will likely need to maintain a connection to multiple brokers, as data is partitioned and the clients will need to talk to the server that has their data. However it should not generally be necessary to maintain multiple connections to a single broker from a single client instance (i.e. connection pooling).

The server guarantees that on a single TCP connection, requests will be processed in the order they are sent and responses will return in that order as well. The broker's request processing allows only a single in-flight request per connection in order to guarantee this ordering. Note that clients can (and ideally should) use non-blocking IO to implement request pipelining and achieve higher throughput. i.e., clients can send requests even while awaiting responses for preceding requests since the outstanding requests will be buffered in the underlying OS socket buffer. All requests are initiated by the client, and result in a corresponding response message from the server except where noted.

The server has a configurable maximum limit on request size and any request that exceeds this limit will result in the socket being disconnected.

Kafka is a partitioned system so not all servers have the complete data set. Instead recall that topics are split into a pre-defined number of partitions, P, and each partition is replicated with some replication factor, N. Topic partitions themselves are just ordered "commit logs" numbered 0, 1, ..., P.

All systems of this nature have the question of how a particular piece of data is assigned to a particular partition. Kafka clients directly control this assignment; the brokers themselves enforce no particular semantics of which messages should be published to a particular partition. Rather, to publish messages the client directly addresses messages to a particular partition, and when fetching messages, fetches from a particular partition. If two clients want to use the same partitioning scheme they must use the same method to compute the mapping of key to partition.

These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition to the wrong broker will result in the NotLeaderForPartition error code (described below).

How can the client find out which topics exist, what partitions they have, and which brokers currently host those partitions so that it can direct its requests to the right hosts? This information is dynamic, so you can't just configure each client with some static mapping file.

Instead all Kafka brokers can answer a metadata request that describes the current state of the cluster: what topics there are, which partitions those topics have, which broker is the leader for those partitions, and the host and port information for these brokers.

In other words, the client needs to somehow find one broker and that broker will tell the client about all the other brokers that exist and what partitions they host. This first broker may itself go down so the best practice for a client implementation is to take a list of two or three urls to bootstrap from. The user can then choose to use a load balancer or just statically configure two or three of their kafka hosts in the clients.

The client does not need to keep polling to see if the cluster has changed; it can fetch metadata once when it is instantiated and cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested.

As mentioned above the assignment of messages to partitions is something the producing client controls. That said, how should this functionality be exposed to the end-user?

Partitioning really serves two purposes in Kafka: it balances data and request load over brokers, and it serves as a way to divvy up processing among consumer processes while allowing local state and preserving order within the partition. For a given use case you may care about only one of these or both.

To accomplish simple load balancing a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client choose a single partition at random and publish to that. This latter strategy will result in far fewer TCP connections.

Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message (see the sketch at the end of this overview).

Our apis encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages not a single message to encourage this. A clever client can make use of this and support an "asynchronous" mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow the batching across multiple topics and partitions, so a produce request may contain data to append to many partitions and a fetch request may pull data from many partitions all at once. The client implementer can choose to ignore this and send everything one at a time if they like.

The protocol is designed to enable incremental evolution in a backward compatible fashion. Our versioning is on a per-api basis, each version consisting of a request and response pair. Each request contains an API key that identifies the API being invoked and a version number that indicates the format of the request and the expected format of the response. The intention is that clients would implement a particular version of the protocol, and indicate this version in their requests.
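A minimal sketch of the semantic-partitioning idea described above: hash the key and map it onto the partition count. The hash function here is purely illustrative; any stable function works, as long as every producer that must agree on the mapping uses the same one.

public class KeyPartitioner {
    // Map a message key to one of numPartitions partitions.
    public static int partitionFor(byte[] key, int numPartitions) {
        int hash = 0;
        for (byte b : key)
            hash = 31 * hash + b; // stable, order-sensitive hash of the key bytes
        // Mask the sign bit so the modulo result is non-negative.
        return (hash & 0x7fffffff) % numPartitions;
    }
}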
Our goal is primarily to allow API evolution in an environment where downtime is not allowed and clients and servers cannot all be changed at once.

The server will reject requests with a version it does not support, and will always respond to the client with exactly the protocol format it expects based on the version it included in its request. The intended upgrade path is that new features would first be rolled out on the server (with the older clients not making use of them) and then as newer clients are deployed these new features would gradually be taken advantage of.

Currently all versions are baselined at 0; as we evolve these APIs we will indicate the format for each version individually.

The protocol is built out of the following primitive types.

Fixed Width Primitives
+
+ int8, int16, int32, int64 - Signed integers with the given precision (in bits) stored in big endian order.
+
+ Variable Length Primitives
+
+ bytes, string - These types consist of a signed integer giving a length N followed by N bytes of content. A length of -1 indicates null. string uses an int16 for its size, and bytes uses an int32.
+
+ Arrays
+
+ This is a notation for handling repeated structures. These will always be encoded as an int32 size containing the length N followed by N repetitions of the structure, which can itself be made up of other primitive types. In the BNF grammars below we will show an array of a structure foo as [foo].
+
+ The BNFs below give an exact context free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable names. As always in a BNF a sequence of productions indicates concatenation. When there are multiple possible productions these are separated with '|' and may be enclosed in parentheses for grouping. The top-level definition is always given first and subsequent sub-parts are indented. All requests and responses originate from the following grammar, which will be incrementally described through the rest of this document.
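As a worked example of the encodings just described, a small decoder sketch; ByteBuffer defaults to big-endian, matching the spec, UTF-8 is assumed for string content, and error handling is elided:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class PrimitiveDecoding {
    // string: int16 length N (-1 means null) followed by N bytes of content.
    public static String readString(ByteBuffer buf) {
        short n = buf.getShort();
        if (n == -1)
            return null;
        byte[] content = new byte[n];
        buf.get(content);
        return new String(content, StandardCharsets.UTF_8);
    }

    // bytes: same shape as string, but with an int32 length prefix.
    public static byte[] readBytes(ByteBuffer buf) {
        int n = buf.getInt();
        if (n == -1)
            return null;
        byte[] content = new byte[n];
        buf.get(content);
        return content;
    }

    // [foo]: int32 count N followed by N repetitions of the structure.
    public static String[] readStringArray(ByteBuffer buf) {
        int n = buf.getInt();
        String[] result = new String[n];
        for (int i = 0; i < n; i++)
            result[i] = readString(buf);
        return result;
    }
}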
3.3.1 Old Consumer Configs

3.3.2 New Consumer Configs
Since 0.9.0.0 we have been working on a replacement for our existing simple and high-level consumers. The code is considered beta quality. Below is the configuration for the new consumer:
-
+
3.4 Kafka Connect Configs
-
+
diff --git a/docs/protocol.html b/docs/protocol.html
new file mode 100644
index 0000000000000..98923aad53405
--- /dev/null
+++ b/docs/protocol.html
@@ -0,0 +1,163 @@
+Kafka Wire Protocol
+
+Preliminaries
+
+Network
+
+Partitioning and bootstrapping
+
+Partitioning Strategies
+
+Batching
+
+Versioning and Compatibility
+
+The Protocol
+
+Protocol Primitive Types
+
+Notes on reading the request format grammars
+
+Common Request and Response Structure
+
+
+RequestOrResponse => Size (RequestMessage | ResponseMessage)
+Size => int32
+
+
+
| Field | Description |
|---|---|
| message_size | The message_size field gives the size of the subsequent request or response message in bytes. The client can read requests by first reading this 4 byte size as an integer N, and then reading and parsing the subsequent N bytes of the request. |
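Following this grammar, reading one complete request or response off a socket can be sketched as below. Blocking I/O is used for clarity; as noted earlier, a real client would likely pipeline requests with non-blocking IO:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FrameReader {
    // Read one size-delimited message: the 4-byte big-endian size N,
    // then N bytes of request or response payload.
    public static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int size = din.readInt();        // Size => int32
        byte[] message = new byte[size]; // RequestMessage | ResponseMessage
        din.readFully(message);
        return message;
    }
}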
A description of the message set format can be found here. (KAFKA-3368)
+We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions or whatever the appropriate error handling mechanism in the client language. Here is a table of the error codes currently in use:
+
+The following are the numeric codes that the ApiKey in the request can take for each of the below request types.
+
+This section gives details on each of the individual API Messages, their usage, their binary format, and the meaning of their fields.
+
+Some people have asked why we don't use HTTP. There are a number of reasons, the best is that client implementors can make use of some of the more advanced TCP features--the ability to multiplex requests, the ability to simultaneously poll many connections, etc. We have also found HTTP libraries in many languages to be surprisingly shabby.
+
+Others have asked if maybe we shouldn't support many different protocols. Prior experience with this was that it makes it very hard to add and test new features if they have to be ported across many protocol implementations. Our feeling is that most users don't really see multiple protocols as a feature, they just want a good reliable client in the language of their choice.
+
+Another question is why we don't adopt XMPP, STOMP, AMQP or an existing protocol. The answer to this varies by protocol, but in general the problem is that the protocol does determine large parts of the implementation and we couldn't do what we are doing if we didn't have control over the protocol. Our belief is that it is possible to do better than existing messaging systems have in providing a truly distributed messaging system, and to do this we need to build something that works differently.
+
+A final question is why we don't use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you to manage lots and lots of serialized messages. However we have only a few messages. Support across languages is somewhat spotty (depending on the package). Finally the mapping between binary log format and wire protocol is something we manage somewhat carefully and this would not be possible with these systems. Finally we prefer the style of versioning APIs explicitly and checking this to inferring new values as nulls as it allows more nuanced control of compatibility.
From df41bc544aea91fd1e2d5258ebf1b99347700731 Mon Sep 17 00:00:00 2001
From: Grant Henke

 This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described here
@@ -160,4 +179,4 @@
 A final question is why we don't use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you to manage lots and lots of serialized messages. However we have only a few messages. Support across languages is somewhat spotty (depending on the package). Finally the mapping between binary log format and wire protocol is something we manage somewhat carefully and this would not be possible with these systems. Finally we prefer the style of versioning APIs explicitly and checking this to inferring new values as nulls as it allows more nuanced control of compatibility.

From 6eb061fa85de1b5346eb2652622c9c60f7f3baf1 Mon Sep 17 00:00:00 2001
From: Gwen Shapira