KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero scale. #9320
Merged
kkonstantine merged 1 commit into apache:trunk on Oct 6, 2020
Conversation

kkonstantine (Contributor) approved these changes on Oct 6, 2020 and left a comment:

Thanks @avocader. The fix LGTM!
The only test failures were in org.apache.kafka.clients.ClientUtilsTest and org.apache.kafka.clients.ClusterConnectionStatesTest, known broken tests that are unrelated to the changes here and have since been fixed.
kkonstantine pushed a commit that referenced this pull request on Oct 6, 2020:

KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero scale. (#9320)

The `org.apache.kafka.connect.data.Values#parse` method parses integers larger than `Long.MAX_VALUE` as `double` with `Schema.FLOAT64_SCHEMA`, which loses precision for these larger integers. For example, `SchemaAndValue schemaAndValue = Values.parseString("9223372036854775808");` returns `SchemaAndValue{schema=Schema{FLOAT64}, value=9.223372036854776E18}`. The method also parses values that could be parsed as `FLOAT32` to `FLOAT64`. This PR changes the parsing logic to use `FLOAT32`/`FLOAT64` only for numbers that have a fractional part (`decimal.scale() != 0`), and an arbitrary-precision `org.apache.kafka.connect.data.Decimal` otherwise. It also updates the method to parse numbers that can be represented as `float` to `FLOAT32`. Added unit tests covering parsing of the `BigInteger`, `Byte`, `Short`, `Integer`, `Long`, `Float`, and `Double` types.

Reviewers: Konstantine Karantasis <k.karantasis@gmail.com>
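The precision loss described in the commit message can be reproduced with plain `java.math` types (this is a standalone illustration, not Connect's code): a `double` has a 53-bit mantissa, so integers just past `Long.MAX_VALUE` cannot keep all of their decimal digits, while a `BigDecimal` with zero scale preserves them exactly.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigIntegerParsing {
    public static void main(String[] args) {
        // The integer from the PR description: Long.MAX_VALUE + 1.
        String input = "9223372036854775808";

        // Old behavior: fall back to double (FLOAT64). Only ~15-16
        // significant decimal digits survive, as the printed value shows.
        double asDouble = Double.parseDouble(input);
        System.out.println(asDouble); // 9.223372036854776E18

        // New behavior: a BigDecimal with zero scale keeps every digit.
        BigDecimal asDecimal = new BigDecimal(new BigInteger(input));
        System.out.println(asDecimal.scale());         // 0
        System.out.println(asDecimal.toPlainString()); // 9223372036854775808
    }
}
```

The zero scale is what lets Connect map the value to its logical `Decimal` type without implying any fractional digits.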
javierfreire pushed a commit to javierfreire/kafka that referenced this pull request on Oct 8, 2020: KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero scale. (apache#9320)
ijuma added a commit to confluentinc/kafka that referenced this pull request on Oct 8, 2020: commit '2804257fe221f37e5098bd' (67 commits), including KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero scale. (apache#9320)
rgo pushed a commit to rgo/kafka that referenced this pull request on Oct 20, 2020: KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero scale. (apache#9320)
The `org.apache.kafka.connect.data.Values#parse` method parses integers larger than `Long.MAX_VALUE` as `double` with `Schema.FLOAT64_SCHEMA`. That means we are losing precision for these larger integers.

For example:

`SchemaAndValue schemaAndValue = Values.parseString("9223372036854775808");`

returns:

`SchemaAndValue{schema=Schema{FLOAT64}, value=9.223372036854776E18}`

Also, this method parses values that could be parsed as `FLOAT32` to `FLOAT64`.

This PR changes the parsing logic to use `FLOAT32`/`FLOAT64` only for numbers that have a fractional part (`decimal.scale() != 0`), and an arbitrary-precision `org.apache.kafka.connect.data.Decimal` otherwise. It also updates the method to parse numbers that can be represented as `float` to `FLOAT32`.

Added unit tests that cover parsing the `BigInteger`, `Byte`, `Short`, `Integer`, `Long`, `Float`, and `Double` types.

Committer Checklist (excluded from commit message)
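The scale-based dispatch the description outlines can be sketched with plain `java.math` types. The helper below is hypothetical (`schemaFor` is not Connect's actual method, and real `Values#parse` also distinguishes INT8/INT16/INT32 for small integers); it only shows the rule: floating-point schemas are chosen only when there is a fractional part, and anything wider than `long` becomes a zero-scale `Decimal`.

```java
import java.math.BigDecimal;

public class NumberDispatch {
    // Hypothetical sketch of the parsing rule described above.
    static String schemaFor(BigDecimal d) {
        if (d.scale() != 0) {
            // Round-trip through float: if no digits are lost, FLOAT32 suffices.
            float f = d.floatValue();
            if (!Float.isInfinite(f)
                    && new BigDecimal(String.valueOf(f)).compareTo(d) == 0) {
                return "FLOAT32";
            }
            return "FLOAT64";
        }
        // Integers: anything wider than long becomes Decimal with zero scale.
        // (The real parser would also try INT8/INT16/INT32 first.)
        return d.abs().compareTo(BigDecimal.valueOf(Long.MAX_VALUE)) > 0
                ? "DECIMAL(scale=0)"
                : "INT64";
    }

    public static void main(String[] args) {
        System.out.println(schemaFor(new BigDecimal("1.25")));                // FLOAT32
        System.out.println(schemaFor(new BigDecimal("9223372036854775808"))); // DECIMAL(scale=0)
        System.out.println(schemaFor(new BigDecimal("42")));                  // INT64
    }
}
```

Keying the decision on `BigDecimal.scale()` is what makes the integer/float split unambiguous: `"9223372036854775808"` has scale 0, so it never enters the lossy `double` path.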