Enum of ResponseContext keys#8157

Merged
leventov merged 16 commits into apache:master from metamx:response-context-enum
Aug 3, 2019

Conversation

@esevastyanov (Contributor) commented Jul 25, 2019

Description

Aggregated the ResponseContext keys into an enum as the next step of the ResponseContext refactoring. Previously the keys were just static strings, so there was nothing preventing the use of an arbitrary string as a ResponseContext key. This refactoring eliminates that possibility by introducing an enum of ResponseContext keys and exposing only methods that require an enum instance as a key.

Fixed the issue of merging ResponseContext instances returned by Historicals to the Broker

There was no defined rule for merging different response contexts, and in my view the previous solution of overwriting existing values with the last returned ones is incorrect because it loses valuable information. For example, the value associated with the key UNCOVERED_INTERVALS contains a list of uncovered intervals, and the correct result is not the last returned list but the concatenation of all returned lists. The same issue applies to the key MISSING_SEGMENTS (a list of missing segments) and to COUNT (the number of scanned rows). I therefore provided every key with a merge function, which makes merging response contexts a simple procedure.
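The per-key merge behavior described above can be sketched as follows. This is a minimal illustration, not the actual Druid code; the class and constant names are mine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BinaryOperator;

// Hypothetical sketch of per-key merge functions: list-valued keys
// (like UNCOVERED_INTERVALS and MISSING_SEGMENTS) concatenate, COUNT sums.
public class MergeSketch
{
  // Concatenates the old and new list values instead of overwriting.
  static final BinaryOperator<Object> LIST_CONCAT = (oldValue, newValue) -> {
    final List<Object> merged = new ArrayList<>((List<?>) oldValue);
    merged.addAll((List<?>) newValue);
    return merged;
  };

  // Sums the old and new numeric values instead of overwriting.
  static final BinaryOperator<Object> LONG_SUM =
      (oldValue, newValue) -> ((Number) oldValue).longValue() + ((Number) newValue).longValue();
}
```

With functions like these, merging two contexts reduces to applying each registered key's function to the pair of values.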

Also improved ResponseContext serialization. Previously the serialization result was truncated if its length was greater than the limit. I believe it is better to keep the context structure and make it deserializable, so the serialization process now removes the longest fields completely from the context until the final result length no longer exceeds the limit.
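The described strategy (dropping the longest fields whole instead of truncating the serialized string) can be sketched like this. It is a simplified illustration over plain string maps rather than Jackson, and all names are mine:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: remove the longest field until the whole serialized
// context fits, so the result stays deserializable.
public class SerializeSketch
{
  // Crude JSON-like rendering with deterministic (sorted) key order;
  // good enough for the sketch.
  static String serialize(Map<String, String> context)
  {
    StringBuilder sb = new StringBuilder("{");
    boolean first = true;
    for (Map.Entry<String, String> e : new TreeMap<>(context).entrySet()) {
      if (!first) {
        sb.append(',');
      }
      sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
      first = false;
    }
    return sb.append('}').toString();
  }

  static String serializeWithLimit(Map<String, String> context, int maxLength)
  {
    final Map<String, String> copy = new HashMap<>(context);
    while (serialize(copy).length() > maxLength && !copy.isEmpty()) {
      // Drop the field with the longest value first.
      final String longestKey = copy.entrySet().stream()
          .max(Comparator.comparingInt(e -> e.getValue().length()))
          .get().getKey();
      copy.remove(longestKey);
    }
    return serialize(copy);
  }
}
```

Unlike a blind string truncation, the output of serializeWithLimit is always well-formed and can be parsed back.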


This PR has:

  • been self-reviewed.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added unit tests or modified existing tests to cover new code paths.

For reviewers: the key changed class is ResponseContext.

*/
UNCOVERED_INTERVALS(
"uncoveredIntervals",
(oldValue, newValue) -> {
Member:

I think this lambda argument of the Key() constructor should be indented at the same level as the first argument. The same applies to the other constants in this enum.

Contributor Author:

Right, we need to fix the checkstyle config.

Contributor Author:

Looks like this cannot be fixed just by updating the config, as the checkstyle plugin has some known issues with the indentation of lambdas passed as arguments:
checkstyle/checkstyle#4638
checkstyle/checkstyle#3342

responseContext.put(ResponseContext.CTX_UNCOVERED_INTERVALS, uncoveredIntervals);
responseContext.put(ResponseContext.CTX_UNCOVERED_INTERVALS_OVERFLOWED, uncoveredIntervalsOverflowed);
responseContext.merge(ResponseContext.Key.UNCOVERED_INTERVALS, uncoveredIntervals);
responseContext.merge(ResponseContext.Key.UNCOVERED_INTERVALS_OVERFLOWED, uncoveredIntervalsOverflowed);
Member:

uncoveredIntervalsOverflowed should be based on post-merge size of the list.

Contributor Author:

It is already the case right now: the merge is applied only to the resulting values.

* The number of scanned rows.
*/
public static final String CTX_COUNT = "count";
public enum Key
Member:

I think the design should be more extension-friendly. Some ideas for extensible enums are presented here: #6823 (comment) (completely unrelated to ResponseContext, but may be useful).

Contributor Author:

Developed extension-friendly enum with an example
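The extensible-key pattern under discussion can be sketched roughly as follows. This is only an illustration of the approach (an enum implementing a BaseKey interface plus a registry of keys); the real implementation lives in ResponseContext and differs in detail:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.BinaryOperator;

// Illustrative sketch of an "extensible enum": extensions implement BaseKey
// and register their values, while the built-in keys live in a real enum.
public class KeyRegistrySketch
{
  public interface BaseKey
  {
    String getName();

    BinaryOperator<Object> getMergeFunction();
  }

  // TreeMap gives the natural ordering of key names, as in the PR.
  private static final Map<String, BaseKey> REGISTERED_KEYS = new TreeMap<>();

  public static synchronized void registerKey(BaseKey key)
  {
    if (REGISTERED_KEYS.putIfAbsent(key.getName(), key) != null) {
      throw new IllegalArgumentException("Key [" + key.getName() + "] is already registered");
    }
  }

  public static Collection<BaseKey> getAllRegisteredKeys()
  {
    return Collections.unmodifiableCollection(REGISTERED_KEYS.values());
  }

  public enum Key implements BaseKey
  {
    UNCOVERED_INTERVALS("uncoveredIntervals"),
    MISSING_SEGMENTS("missingSegments"),
    COUNT("count");

    static {
      // Built-in keys register themselves the same way extensions do.
      for (BaseKey key : values()) {
        registerKey(key);
      }
    }

    private final String name;

    Key(String name)
    {
      this.name = name;
    }

    @Override
    public String getName()
    {
      return name;
    }

    @Override
    public BinaryOperator<Object> getMergeFunction()
    {
      // Real keys would supply key-specific merge logic; last-value-wins here.
      return (oldValue, newValue) -> newValue;
    }
  }
}
```

An extension would declare its own enum implementing BaseKey and call registerKey for each value, so merging logic can look keys up by name without the core enum knowing about them.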

responseContext,
JacksonUtils.TYPE_REFERENCE_MAP_STRING_OBJECT
);
return new ResponseContext()
Member:

Please comment on why it creates an inner class instead of creating a DefaultResponseContext.

Contributor Author:

The resulting ResponseContext depends on a TypeReference, so in general, if the TypeReference changes, the resulting context should also be updated. I used the inner class to eliminate the possibility of such an update being missed. If that fits, I can add this description as a comment; if not, I can remove the inner-class usage and use DefaultResponseContext as the resulting map.

* The method removes max-length fields one by one if the resulting string length is greater than the limit.
* The resulting string might be correctly deserialized as a {@link ResponseContext}.
*/
public SerializationResult serializeWith(ObjectMapper objectMapper, int maxLength) throws JsonProcessingException
Member:

Please specify units (chars/bytes)

Contributor Author:

Renamed the argument (the units are also mentioned in the method description).

return query.getLimit() - (long) responseContext.get(ResponseContext.CTX_COUNT);
return query.getLimit() - (long) responseContext.get(ResponseContext.Key.COUNT);
}
return query.getLimit();
Member:

Please rename this property to "scanRowsLimit" for clarity.

Contributor Author:

Renamed

@@ -358,8 +358,8 @@ private void computeUncoveredIntervals(TimelineLookup<String, ServerSelector> ti
// Which is not necessarily an indication that the data doesn't exist or is
Member:

The phrase above "This returns intervals..." is strange. I would say "Record in the response context the intervals..."

Contributor Author:

Updated

if (responseContext.get(ResponseContext.CTX_ETAG) != null) {
builder.header(HEADER_ETAG, responseContext.get(ResponseContext.CTX_ETAG));
responseContext.remove(ResponseContext.CTX_ETAG);
if (responseContext.get(ResponseContext.Key.ETAG) != null) {
Member:

Double get looks awkward. It could be

Object entityTag = responseContext.remove(ResponseContext.Key.ETAG);
if (entityTag != null) {
  builder.header(HEADER_ETAG, entityTag);
}

Contributor Author:

Nice catch, updated

builder.header(HEADER_ETAG, responseContext.get(ResponseContext.CTX_ETAG));
responseContext.remove(ResponseContext.CTX_ETAG);
if (responseContext.get(ResponseContext.Key.ETAG) != null) {
builder.header(HEADER_ETAG, responseContext.get(ResponseContext.Key.ETAG));
Member:

I think it would be clearer to call this variable responseBuilder.

Contributor Author:

Renamed

RESPONSE_CTX_HEADER_LEN_LIMIT
);
if (serializationResult.isReduced()) {
log.warn(
Member:

Should Druid cluster operators monitor these messages? Can they do anything about them? If not, this should probably be info(). See #7362.

Contributor Author:

I'm not sure about that; I even left the log message as is, although the context is no longer truncated but "reduced". According to the corresponding PR discussion, it was important to have a log message with the full context, and it's likely someone has a filter in a log aggregator for this kind of message.
BTW, I see your point and have started a discussion about only mentioning backward compatibility for log filters.

Member:

Please add a comment like "Whether or not this logging statement should properly be on the WARN level (which is unclear), it's kept on the warn level for backward compatibility: see #2336".

(If I understood your comment correctly.)

Contributor Author:

Since the change will be tagged as incompatible, I decided to update the log level to info.

@leventov (Member):

Labelled ResponseContext because this PR heavily affects ResponseContext, a @PublicApi class.

Eugene Sevastianov added 4 commits July 25, 2019 19:22
Renamed an argument

Updated comparator

Replaced Pair usage with Map.Entry

Added a comment about quadratic complexity

Removed boolean field with an expression

Renamed SerializationResult field

Renamed the method merge to add and renamed several context keys

Renamed field and method related to scanRowsLimit

Updated a comment

Simplified a block of code

Renamed a variable
@esevastyanov esevastyanov marked this pull request as ready for review July 29, 2019 17:15
* Merge function associated with a key: Object (Object oldValue, Object newValue)
* TreeMap is used to have the natural ordering of its keys
*/
private static Map<String, BaseKey> map = new TreeMap<>();
Member:

Suggested to call it "registeredKeys".

Member:

I think this static variable and the associated methods don't need to be nested in Key. They might as well be in the higher-level ResponseContext.

Contributor Author:

Yes, they might be there. But I think we may leave them inside the enum, since this static variable and these methods are part of the enum "extension" and might be helpful for understanding how this "extension" is implemented. Since there is no built-in support for extending enums, this implementation may be used as an example in some cases, so I would prefer to keep the enum and this static field and methods together.

* The primary way of registering context keys.
* Only the keys registered this way are considered during the context merge.
*/
public static void addKey(BaseKey key)
Member:

What about "registerKey"?

Contributor Author:

renamed


/**
* The primary way of registering context keys.
* Only the keys registered this way are considered during the context merge.
Member:

Please note what happens if a context has an unregistered key. (I think, ideally, it should throw ISE.)

Contributor Author:

Updated exceptions and related comments

ETAG("ETag"),
/**
* Query fail time (current time + timeout).
* The final value in comparison to continuously updated TIMEOUT_AT.
Member:

I failed to understand this sentence after several readings. Please reword.

Contributor Author:

reworded it

* @Override public BiFunction<Object, Object, Object> getMergeFunction() { return mergeFunction; }
* }
* }</pre>
* Make sure all extension enum values added with Key.addKey method.
Member:

Please make Key.addKey a {@link }

Contributor Author:

Updated

}

/**
* Keys associated with objects in the context. The enum is extension-friendly.
Member:

I think it doesn't make a lot of sense to say that "The enum is extension-friendly." The enum itself is not extension-friendly; the key system (based on BaseKey) is. So I would just remove this sentence.

Contributor Author:

Removed

/**
* Returns all keys the enum contains and the added via addKey method
*/
public static Collection<BaseKey> getKeys()
Member:

Suggested "getAllRegisteredKeys"

Contributor Author:

Renamed

}

public Object get(String key)
protected abstract Map<String, Object> getDelegate();
Member:

Could you please add a comment like we are mapping from Strings rather than {@link BaseKey}s because ...?

/**
* Serializes the context given that the resulting string length is less than the provided limit.
* The method removes max-length fields one by one if the resulting string length is greater than the limit.
* The resulting string might be correctly deserialized as a {@link ResponseContext}.
Member:

  1. Please put this discussion in code comments.

  2. I see a regression scenario: before this PR, UNCOVERED_INTERVALS and MISSING_SEGMENTS keys were always reasonably short. Now, they may grow very large at Broker, and Broker will prune them altogether. I suggest to hard-code reduction logic specifically for UNCOVERED_INTERVALS and MISSING_SEGMENTS.

", resultFormat='" + resultFormat + '\'' +
", batchSize=" + batchSize +
", limit=" + limit +
", limit=" + scanRowsLimit +
Member:

There are no backward compatibility concerns in toString(), please change the key.

Contributor Author:

Changed

));
}
// quadratic complexity: while loop with map serialization on each iteration
while (!copiedMap.isEmpty() && !serializedValueEntries.isEmpty()) {
Member:

I think we can get away with just one empty check, as both copiedMap and serializedValueEntries have the same number of entries and entries are being removed from both in the loop.

Contributor Author:

Updated the whole method

Comment thread: processing/src/main/java/org/apache/druid/query/scan/ScanQuery.java
/**
* Serializes the context given that the resulting string length is less than the provided limit.
* The method removes max-length fields one by one if the resulting string length is greater than the limit.
* The resulting string might be correctly deserialized as a {@link ResponseContext}.
Member:

I don't remember exactly, but some systems relied on having the MISSING_SEGMENTS key in the header to do something, so removing the entire entry would break the logic for them. @will-lauer can confirm?

I agree a good solution would be to have a reduce_length function in the enum itself which would reduce the length step by step (for example, removing segment information one by one for the missing-segments key) until the header length is within bounds. It can probably be skipped, though, because as per my understanding the truncation was previously arbitrary, without any guarantees on which keys would be present or truncated. @gianm @himanshug any thoughts on this?

@will-lauer (Contributor):

@pjain1 While we certainly talked about using MISSING_SEGMENTS, I don't believe we ever actually implemented it in production, so removing it completely probably won't break anything we have, but it is less than desirable. I'd prefer to have a partial list rather than no list at all, or at least some other indication that the list was non-empty.

@himanshug (Contributor) commented Jul 31, 2019:

> Also improved ResponseContext serialization. Previously the result of serialization was truncated if its length was greater than the limit. I believe it would be better to keep the context structure and make it deserializable thus the process of serialization removes max-length fields completely from the context until the final result length doesn't exceed the limit.

I haven't looked at the code, but this is definitely an incompatibility with the previous behavior, so it should be tagged as such. It should also be mentioned in the release notes, though I don't think most users would care about it.
Now, if possible, a better strategy might be to retain all keys but trim the bigger ones so as to keep the serialized response limited.
Or don't change the behavior, since we haven't had use cases where this has been a problem so far. In general, it might be good to keep refactoring PRs separate from PRs that introduce a change in behavior.
Finally, if this PR does get merged with a behavioral change, it should be tagged incompatible.

…tions

Reducing serialized context length by removing some of its collection elements
@esevastyanov (Contributor Author):

Thanks, everyone. I updated the truncation logic and kept it as general as possible, without referring to custom context keys. The new algorithm removes some values from the resulting (serialized) JSON arrays (no matter whether it's MISSING_SEGMENTS or UNCOVERED_INTERVALS) to satisfy the length limit. So, as before, the serialized context contains some but not all of an array's values if the limit is exceeded. I also added a boolean indicator of whether a context was truncated.
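The updated algorithm (removing elements from a serialized array rather than dropping the whole field) might look roughly like this. It is a simplified sketch over plain lists instead of Jackson's ArrayNode, and the names are only modeled on those discussed in this thread:

```java
import java.util.List;

// Illustrative sketch (not the actual Druid implementation) of truncating a
// serialized array-valued field by removing trailing elements, instead of
// dropping the whole field.
public class ArrayTruncationSketch
{
  // Renders a list of already-serialized elements as a JSON-style array.
  static String render(List<String> elements)
  {
    return "[" + String.join(",", elements) + "]";
  }

  // Removes trailing elements until at least needToRemoveCharsNumber chars
  // are freed (or the array is empty); returns the number of removed chars.
  static int removeElementsToSatisfyCharsLimit(List<String> elements, int needToRemoveCharsNumber)
  {
    final int lengthBeforeRemove = render(elements).length();
    while (!elements.isEmpty()
           && lengthBeforeRemove - render(elements).length() < needToRemoveCharsNumber) {
      elements.remove(elements.size() - 1);
    }
    return lengthBeforeRemove - render(elements).length();
  }
}
```

This preserves a partial (still deserializable) list, addressing @will-lauer's preference for "a partial list rather than no list at all".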

Eugene Sevastianov added 2 commits August 1, 2019 14:49
@pjain1 (Member) left a comment:

👍

* @throws IllegalArgumentException if the key has already been registered.
*/
public static void addKey(BaseKey key)
public static void registerKey(BaseKey key)
Member:

Just in case, please make this method synchronized

Contributor Author:

added synchronized

Member:

Turns out it was not quite enough, #9106

public static Collection<BaseKey> getAllRegisteredKeys()
{
return map.values();
return registeredKeys.values();
Member:

Just in case, please wrap with Collections.unmodifiableCollection()

Contributor Author:

wrapped

* The method removes max-length fields one by one if the resulting string length is greater than the limit.
* The resulting string might be correctly deserialized as a {@link ResponseContext}.
* This method tries to remove some elements from context collections if it's needed to satisfy the limit.
* The resulting string might be correctly deserialized to {@link ResponseContext}.
Member:

Please comment on why explicit priorities of keys are not implemented.

Contributor Author:

commented

for (Map.Entry<String, JsonNode> e : sortedNodesByLength) {
final String fieldName = e.getKey();
final JsonNode node = e.getValue();
if (node.isArray()) {
Member:

If this block aims for MISSING_SEGMENTS and UNCOVERED_INTERVALS, please comment on that with an example.

Contributor Author (Aug 2, 2019):

commented in the javadoc

if (node.isArray()) {
if (needToRemoveCharsNumber >= node.toString().length()) {
final int lengthBeforeRemove = node.toString().length();
// Empty array could be correctly deserialized so we remove only its elements.
Member:

I think the logic of this block should avoid producing empty array because it may be misleading.

Contributor Author:

Removed empty arrays

add(Key.TRUNCATED, true);
final ObjectNode contextJsonNode = objectMapper.valueToTree(getDelegate());
final ArrayList<Map.Entry<String, JsonNode>> sortedNodesByLength = Lists.newArrayList(contextJsonNode.fields());
final Comparator<Map.Entry<String, JsonNode>> valueLengthReversedComparator =
Member:

Please extract this comparator as a constant.

Contributor Author:

Extracted

final int lengthAfterRemove = node.toString().length();
needToRemoveCharsNumber -= lengthBeforeRemove - lengthAfterRemove;
} else {
final ArrayNode arrNode = (ArrayNode) node;
Member:

This block needs a comment; it's not obvious what is going on here, or why. Please extract it as a method (or the upper block) if possible.

Contributor Author:

added a comment and extracted


protected abstract Map<BaseKey, Object> getDelegate();

private final Comparator<Map.Entry<String, JsonNode>> valueLengthReversedComparator =
Member:

It can be a static final constant.

Contributor Author:

updated

* This method tries to remove some elements from context collections if it's needed to satisfy the limit.
* This method removes some elements from context collections if it's needed to satisfy the limit.
* There is no explicit priorities of keys which values are being truncated because for now there are only
* two potential limit breaking keys (UNCOVERED_INTERVALS and MISSING_SEGMENTS) and their values are arrays.
Member:

Please wrap UNCOVERED_INTERVALS and MISSING_SEGMENTS with {@link }

Contributor Author:

wrapped

((ArrayNode) node).removeAll();
final int lengthAfterRemove = node.toString().length();
needToRemoveCharsNumber -= lengthBeforeRemove - lengthAfterRemove;
// We need to remove more chars than the field's lenght so removing it completely
Member:

Typo, "length". There is one other instance of this typo in the repository, in StringDimensionHandler - please fix it too.

Contributor Author:

fixed both typos

}

/**
* Removes {@code node}'s elements which total lenght of serialized values is greater or equal to the passed limit.
Member:

Same

Contributor Author:

fixed

* @param needToRemoveCharsNumber the number of chars need to be removed.
* @return the number of removed chars.
*/
private int removeNodeElementsToSatisfyCharsLimit(ArrayNode node, int needToRemoveCharsNumber)
Member:

Looks like this method can be static.

Contributor Author:

updated

final ArrayNode arrayNode = (ArrayNode) node;
needToRemoveCharsNumber -= removeNodeElementsToSatisfyCharsLimit(arrayNode, needToRemoveCharsNumber);
if (arrayNode.size() == 0) {
// The field is empty, removing it.
Member:

Please extend the comment like The field is empty, removing it because an empty array field may be misleading for the recipients of the truncated response context.

Contributor Author:

Extended

ETAG("ETag"),
/**
* Query fail time (current time + timeout).
* It is not updated continuously as TIMEOUT_AT.
Member:

Please wrap TIMEOUT_AT with {@link }.

Contributor Author:

wrapped

@leventov leventov merged commit 3f3162b into apache:master Aug 3, 2019
@leventov leventov deleted the response-context-enum branch August 3, 2019 09:05
@clintropolis clintropolis added this to the 0.16.0 milestone Aug 8, 2019