Logic adjustments to SeekableStreamIndexTaskRunner. #7267

Merged: clintropolis merged 3 commits into apache:master from gianm:fix-ss-stuff on Mar 15, 2019
Conversation

@gianm (Contributor) commented Mar 14, 2019

A mix of simplifications and bug fixes. They are intermingled because
some of the bugs were made difficult to fix, and also more likely to
happen in the first place, by how the code was structured. I tried to
keep restructuring to a minimum. The changes are:

  • Remove "initialOffsetsSnapshot", which was used to determine when to
    skip start offsets. Replace it with "lastReadOffsets", which I hope
    is more intuitive. (There is a connection: start offsets must be
    skipped if and only if they have already been read, either by a
    previous task or by a previous sequence in the same task, post-restoring.)
  • Remove "isStartingSequenceOffsetsExclusive", because it should always
    be the opposite of isEndOffsetExclusive. The reason is that starts are
    exclusive exactly when the prior ends are inclusive: they must match
    up in that way for adjacent reads to link up properly.
  • Don't call "seekToStartingSequence" after the initial seek. There is
    no reason to, since we expect to read continuous message streams
    throughout the task. And calling it makes offset-tracking logic
    trickier, so better to avoid the need for trickiness. I believe the
    call being here was causing a bug in Kinesis ingestion where a
    message might get double-read.
  • Remove the "continue" calls in the main read loop. They are bad
    because they prevent keeping currOffsets and lastReadOffsets up to
    date, and prevent us from detecting that we have finished reading.
  • Rework "verifyInitialRecordAndSkipExclusivePartition" into
    "verifyRecordInRange". It no longer has side effects. It does a sanity
    check on the message offset and also makes sure that it is not past
    the endOffsets.
  • Rework "assignPartitions" to replace inline comparisons with
    "isRecordAlreadyRead" and "isMoreToReadBeforeReadingRecord" calls. I
    believe this fixes an off-by-one error with Kinesis where the last
    record would not get read. It also makes the logic easier to read.
  • When doing the final publish, only adjust end offsets of the final
    sequence, rather than potentially adjusting any unpublished sequence.
    Adjusting sequences other than the last one is a mistake since it
    will extend their endOffsets beyond what they actually read. (I'm not
    sure if this was an issue in practice, since I'm not sure if real
    world situations would have more than one unpublished sequence.)
  • Rename "isEndSequenceOffsetsExclusive" to "isEndOffsetExclusive". It's
    shorter and more clear, I think.
  • Add equals/hashCode/toString methods to OrderedSequenceNumber.
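The relationship between start- and end-offset exclusivity described above can be sketched as follows. This is a simplified, hypothetical illustration using plain long offsets rather than Druid's generic partition/sequence types; the method names mirror the PR, but the bodies are illustrative, not the actual code.

```java
// Hypothetical sketch of the read-tracking predicates from this PR,
// simplified to plain long offsets. Kafka-style readers use exclusive
// end offsets; Kinesis-style readers use inclusive end offsets.
public class OffsetTracking {
    private final boolean isEndOffsetExclusive;

    public OffsetTracking(boolean isEndOffsetExclusive) {
        this.isEndOffsetExclusive = isEndOffsetExclusive;
    }

    // Starts are exclusive exactly when the prior ends are inclusive:
    // adjacent reads must link up without gaps or double-reads.
    public boolean isStartOffsetExclusive() {
        return !isEndOffsetExclusive;
    }

    // A record has already been read if it does not come after the last
    // offset that was read and processed (lastReadOffsets in the PR).
    public boolean isRecordAlreadyRead(long recordOffset, Long lastReadOffset) {
        return lastReadOffset != null && recordOffset <= lastReadOffset;
    }

    // Whether the record at recordOffset still needs to be read before
    // the partition's end offset is reached.
    public boolean isMoreToReadBeforeReadingRecord(long recordOffset, long endOffset) {
        return isEndOffsetExclusive ? recordOffset < endOffset
                                    : recordOffset <= endOffset;
    }
}
```

Note how the inclusive-end (Kinesis) case keeps reading when the record offset equals the end offset, which is the fence-post case the PR's off-by-one fix addresses.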

Kafka test changes:

  • Added a Kafka "testRestoreAtEndOffset" test to verify that restores at
    the very end of the task lifecycle still work properly.

Kinesis test changes:

  • Renamed "testRunOnNothing" to "testRunOnSingletonRange". I think that
    given Kinesis semantics, the right behavior when start offset equals
    end offset (and there aren't exclusive partitions set) is to read that
    single offset. This is because they are both meant to be treated as
    inclusive.
  • Adjusted "testRestoreAfterPersistingSequences" to expect one more
    message read. I believe the old test was wrong; it expected the task
    not to read message number 5.
  • Adjusted "testRunContextSequenceAheadOfStartingOffsets" to use a
    checkpoint starting from 1 rather than 2. I believe the old test was
    wrong here too; it was expecting the task to start reading from the
    checkpointed offset, but it actually should have started reading from
    one past the checkpointed offset.
  • Adjusted "testIncrementalHandOffReadsThroughEndOffsets" to expect
    11 messages read instead of 12. It's starting at message 0 and reading
    up to 10, which should be 11 messages.
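The fence-post arithmetic behind these Kinesis test adjustments is worth spelling out: with inclusive start and end offsets, the expected message count is end - start + 1. A trivial sketch (hypothetical helper, not part of the PR):

```java
// Inclusive-range message counting, as used by the Kinesis semantics
// described above: 0..10 is 11 messages, and a singleton range
// (start == end) still reads one message.
public class InclusiveRangeCount {
    public static long messagesRead(long startInclusive, long endInclusive) {
        return endInclusive - startInclusive + 1;
    }
}
```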

@gianm (Contributor, Author) commented Mar 14, 2019

Most of the bug fixes should only affect Kinesis, since they were in code that handled the possibility of inclusive end offsets, which the Kafka codepath doesn't use. I think the only Kafka-related issue fixed by this patch was the removal of the "continue" calls in the main read loop, which previously could have caused Kafka ingestion to get stuck.
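A minimal sketch of the read-loop hazard, assuming a simplified loop over plain long offsets (the names are illustrative, not the actual Druid code): if a "continue" skips the bookkeeping at the bottom of the loop, currOffsets stops advancing and the done-check can never fire.

```java
import java.util.List;

// Hypothetical sketch of the main read loop's bookkeeping. The key point
// is that the offset update runs for every record, processed or skipped;
// a "continue" before it would freeze the current offset and leave the
// task unable to detect that reading has finished.
public class ReadLoopSketch {
    // Returns the offset one past the last record seen in the batch.
    public static long processBatch(List<Long> recordOffsets, long currOffset, long endOffset) {
        for (long recordOffset : recordOffsets) {
            boolean shouldProcess = recordOffset >= currOffset && recordOffset < endOffset;
            if (shouldProcess) {
                // Process the record here.
            }
            // Bookkeeping: runs even when the record is skipped.
            currOffset = Math.max(currOffset, recordOffset + 1);
        }
        return currOffset;
    }

    public static boolean doneReading(long currOffset, long endOffset) {
        return currOffset >= endOffset;
    }
}
```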

@jihoonson (Contributor) left a review:

@gianm thank you for cleaning this up! It reads much better. I left some comments. Please consider reverting topic and offset back to stream and sequence, respectively.

if (!restoredNextPartitions.getStream().equals(ioConfig.getStartPartitions().getStream())) {
  throw new ISE(
-     "WTF?! Restored stream[%s] but expected stream[%s]",
+     "WTF?! Restored topic[%s] but expected topic[%s]",
Contributor:

IIRC, the term stream was used intentionally in #6431 because the author thought it was a more generic term covering both Kafka topics and Kinesis streams. The stream terminology is used in other places in Druid too.

@gianm (Contributor, Author) commented Mar 15, 2019:

OK, I'll revert these changes, but I do think it's better with Kafkaesque terminology (I agree with #7267 (comment)). Especially because "sequence" already means something else in the context of seekable stream tasks (SequenceMetadata, sequenceName, etc.), so it is best to avoid. But this can be handled separately and doesn't need to be looped into this logic-adjustment PR.

private final Map<PartitionIdType, SequenceOffsetType> endOffsets;

// lastReadOffsets are the last offsets that were read and processed.
private final ConcurrentMap<PartitionIdType, SequenceOffsetType> lastReadOffsets = new ConcurrentHashMap<>();
Contributor:

Why is this a ConcurrentHashMap?

Contributor (Author):

Good question. There is no reason. I changed it to a regular HashMap.

private final List<ListenableFuture<SegmentsAndMetadata>> publishWaitList = new ArrayList<>();
private final List<ListenableFuture<SegmentsAndMetadata>> handOffWaitList = new ArrayList<>();
private final Set<PartitionIdType> initialOffsetsSnapshot = new HashSet<>();
private final Set<PartitionIdType> exclusiveStartingPartitions = new HashSet<>();
Contributor:

Would you please remove this? It's not used anymore.

Contributor (Author):

Removed, thanks.

final SequenceOffsetType recordOffset
)
{
// Check only for the first record among the record batch.
Contributor:

It looks like this isn't true anymore.

Contributor (Author):

Changed it, thanks.


log.trace(
-     "Got stream[%s] partition[%s] sequence[%s].",
+     "Got topic[%s] partition[%s] offset[%s], shouldProcess[%s].",
Contributor:

Same here. stream and sequence were used intentionally.

Member:

I personally prefer offset over sequence because the former is more obviously a position to me, but am indifferent about whether stream or topic.

sequenceMetadata.setEndOffsets(currOffsets);
sequenceMetadata.updateAssignments(this, currOffsets);
final boolean isLast = i == (sequences.size() - 1);
if (isLast) {
Contributor:

Does it make sense to add a sanity check that the endOffsets are properly set for non-last sequences?
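One way such a sanity check could look, sketched with plain longs and hypothetical names (not the actual Druid code): a non-last sequence's end offsets should never exceed what was actually read, because extending them would claim reads that never happened.

```java
import java.util.Map;

// Hypothetical sanity check for the final-publish path described in the PR:
// only the last sequence may have its end offsets adjusted, so every
// earlier (non-last) sequence's end offsets must already be at or before
// the current read position.
public class FinalPublishCheck {
    public static void verifyNonLastSequence(Map<Integer, Long> sequenceEndOffsets,
                                             Map<Integer, Long> currOffsets) {
        for (Map.Entry<Integer, Long> e : sequenceEndOffsets.entrySet()) {
            Long curr = currOffsets.get(e.getKey());
            // A non-last sequence ending past currOffsets would extend its
            // endOffsets beyond what was actually read.
            if (curr == null || e.getValue() > curr) {
                throw new IllegalStateException(
                    "Sequence end offset[" + e.getValue() + "] beyond current offset["
                    + curr + "] for partition[" + e.getKey() + "]");
            }
        }
    }
}
```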

)
);

final ListenableFuture<TaskStatus> future2 = runTask(task2);
Contributor:

Would you please add a comment about why task2 reads nothing?

Member:

The actual bug here was that if a task was given a 'bad' end offset, one that was a Kafka transactional control offset rather than a record, right after the last good offset read, the task would get stuck in an infinite read loop due to the continue statements that this PR removes. I think this test should either be removed, since the scenario shouldn't happen in practice, or be renamed to something like testDoesntGetStuckWithTransactionOffset and slightly reworked and commented to clear this up.

Member:

I think this test should probably just be removed, since it's not testing a real scenario.

Contributor (Author):

OK, I removed it.

true
);
// Set end offsets to one past the checkpoint, simulating a replica that needs to catch up.
task.getRunner().setEndOffsets(ImmutableMap.of(shardId1, "10"), true);
Contributor:

FYI, I fixed this test to be more realistic in #7264.

Contributor (Author):

It looks like that PR has enough approvals to commit. I'll do that and merge it into this one.

@clintropolis (Member) left a review:

Overall LGTM; it looks a lot clearer, thanks for doing this refactor 👍

I think we need to nail down the terminology a bit, though; there is now more of a mix of 'offset', 'sequence number', and 'sequence offset'. It appears 'offset' was never totally removed, SeekableStreamPartitions seems to support both terminologies (presumably for backwards compatibility), and SequenceMetadata uses 'offset' terminology, probably for the same reason. I'm not quite sure what else is using what yet.

My vote is for 'offset' though.


@Override
- protected Long getSequenceNumberToStoreAfterRead(@NotNull Long sequenceNumber)
+ protected Long getNextStartOffset(@NotNull Long sequenceNumber)
Member:

👍 on switching to 'offset'; I think it's more intuitive terminology, though maybe change the parameter variable name too?

Contributor (Author):

I decided to revert this for now, but plan to try again later.

@jihoonson (Contributor):
I think the best would be using different terms for kinesis and kafka. They are defining their own terminologies and this would be especially good for logging.

@clintropolis (Member):

> I think the best would be using different terms for kinesis and kafka. They are defining their own terminologies and this would be especially good for logging.

I'm not sure what you mean... I think that would only help with logging? I'm mostly concerned about what we call things in the shared common structure, so the code is easy to follow and we aren't switching terms all the time, and there I find offset more intuitive. I guess the implementors of SeekableStreamIndexTaskRunner could supply string labels for what to call topics and offsets so the logs label things appropriately?

@jihoonson (Contributor) commented Mar 14, 2019

I meant that there are people who prefer sequence and stream. I also think offset and topic are better terms for internal usage, but it was decided to use sequence and stream in #6431 because the authors thought they were better. If we want to change it, we should discuss it with other people, including the original authors. I personally prefer not to change it in this PR, so we don't block the 0.14 release any longer.

@clintropolis (Member):

> I personally prefer to not change in this PR to not block 0.14 release anymore.

We should ensure that whatever terminology we want to use is correct now, since this is being introduced with 0.14, at least for things like JSON that escapes the source code; otherwise we are going to have a bad time later. I agree that it's not worth blocking over renaming variables.

@fjy (Contributor) commented Mar 15, 2019

I don't think it's worth blocking a release over variable names.

@clintropolis (Member):

> I don't think it's worth blocking a release over variable names.

I agree; I wasn't thinking about variable names. I was talking about making sure we are happy with the things that end up in JSON, which will be hard to change later once this is in the wild. From what I've looked through so far, I think it is probably OK.

@jon-wei (Contributor) commented Mar 15, 2019

If we had to pick one set of terms, I would personally lean towards using Kafka-based terminology like "topic" and "offset" since I view Kafka as more "archetypal" than Kinesis.

This PR doesn't change spec properties, so I think it's fine in that respect.

@gianm (Contributor, Author) commented Mar 15, 2019

I reverted the offset/topic naming changes. However, I also changed sequence[%s] to sequenceNumber[%s] where it refers to a sequenceNumber/offset, because I think calling that thing a "sequence" is not right: it's not a sequence, it's a number in a sequence, or an offset.

@jihoonson (Contributor) left a review:

LGTM. It looks much easier to read. Thanks!

@clintropolis (Member) left a review:

LGTM 👍

@clintropolis clintropolis merged commit a8c7132 into apache:master Mar 15, 2019
@gianm gianm deleted the fix-ss-stuff branch March 15, 2019 13:44
clintropolis pushed a commit to clintropolis/druid that referenced this pull request Mar 15, 2019:

* Logic adjustments to SeekableStreamIndexTaskRunner.
* Changes from code review.
gianm pushed a commit that referenced this pull request Mar 15, 2019:

* Logic adjustments to SeekableStreamIndexTaskRunner.
* Changes from code review.