Hi,
We're running Druid 0.12.0, built from a clone of the master branch. Druid 0.12.0 has a feature we are particularly interested in: batch ingestion (mapping a single Kafka message to multiple rows in a table). We're using Kafka 0.11.0.2.
Some of the indexing tasks fail with a NullPointerException. The stack trace is below, and an extended log covering the period right before the crash is attached:
indexing.log
API used:
We've implemented a custom parser that extends ByteBufferInputRowParser and overrides parseBatch to return a List&lt;InputRow&gt;.
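For reference, here is a minimal sketch of what such a parser can look like against the Druid 0.12 io.druid.data.input API. The class name (EventBatchParser), the newline-splitting logic, and the use of the current wall-clock time as the row timestamp are all illustrative placeholders, not our actual implementation:

```java
package com.example.druid;

import io.druid.data.input.ByteBufferInputRowParser;
import io.druid.data.input.InputRow;
import io.druid.data.input.MapBasedInputRow;
import io.druid.data.input.impl.ParseSpec;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical batch parser: maps one Kafka message to many InputRows.
public class EventBatchParser implements ByteBufferInputRowParser
{
  private final ParseSpec parseSpec;

  public EventBatchParser(ParseSpec parseSpec)
  {
    this.parseSpec = parseSpec;
  }

  @Override
  public List<InputRow> parseBatch(ByteBuffer input)
  {
    // Illustrative only: treat the message as newline-delimited records.
    String payload = StandardCharsets.UTF_8.decode(input).toString();
    List<InputRow> rows = new ArrayList<>();
    for (String line : payload.split("\n")) {
      Map<String, Object> event = new HashMap<>();
      // ... populate event (timestamp, dimensions, metrics) from the line ...
      rows.add(new MapBasedInputRow(
          System.currentTimeMillis(), // placeholder timestamp (millis)
          parseSpec.getDimensionsSpec().getDimensionNames(),
          event
      ));
    }
    return rows;
  }

  @Override
  public ParseSpec getParseSpec()
  {
    return parseSpec;
  }

  @Override
  public ByteBufferInputRowParser withParseSpec(ParseSpec parseSpec)
  {
    return new EventBatchParser(parseSpec);
  }
}
```

We do not override the deprecated parse(ByteBuffer) method; only parseBatch is implemented.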
Question:
Any idea why we're hitting this exception? We get the same error even when ingesting via the parse function, which has since been deprecated. We don't see this error with Druid 0.11.0.
2018-01-22T22:18:55,880 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[KafkaIndexTask{id=index_kafka_table_dc82e5a127e831a_hpgooncg, type=index_kafka, dataSource=table}]
java.lang.NullPointerException
at io.druid.segment.incremental.OnheapIncrementalIndex.getMetricLongValue(OnheapIncrementalIndex.java:274) ~[druid-processing-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.incremental.IncrementalIndex$LongMetricColumnSelector.getLong(IncrementalIndex.java:1392) ~[druid-processing-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.LongColumnSelector.getObject(LongColumnSelector.java:64) ~[druid-processing-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.LongColumnSelector.getObject(LongColumnSelector.java:29) ~[druid-processing-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.AbstractIndex.toString(AbstractIndex.java:62) ~[druid-processing-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_151]
at java.lang.StringBuilder.append(StringBuilder.java:131) ~[?:1.8.0_151]
at io.druid.segment.realtime.FireHydrant.toString(FireHydrant.java:144) ~[druid-server-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at java.util.Formatter$FormatSpecifier.printString(Formatter.java:2886) ~[?:1.8.0_151]
at java.util.Formatter$FormatSpecifier.print(Formatter.java:2763) ~[?:1.8.0_151]
at java.util.Formatter.format(Formatter.java:2520) ~[?:1.8.0_151]
at java.util.Formatter.format(Formatter.java:2455) ~[?:1.8.0_151]
at java.lang.String.format(String.java:2940) ~[?:1.8.0_151]
at com.metamx.common.StringUtils.safeFormat(StringUtils.java:76) ~[java-util-1.3.2.jar:?]
at com.metamx.common.logger.Logger.info(Logger.java:69) [java-util-1.3.2.jar:?]
at io.druid.segment.realtime.appenderator.AppenderatorImpl.persist(AppenderatorImpl.java:408) ~[druid-server-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.realtime.appenderator.AppenderatorImpl.persistAll(AppenderatorImpl.java:499) ~[druid-server-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.segment.realtime.appenderator.AppenderatorDriver.persistAsync(AppenderatorDriver.java:356) ~[druid-server-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:733) ~[?:?]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:450) [druid-indexing-service-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:422) [druid-indexing-service-0.12.0-SNAPSHOT.jar:0.12.0-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
2018-01-22T22:18:55,913 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_table_dc82e5a127e831a_hpgooncg] status changed to [FAILED].
2018-01-22T22:18:55,916 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
"id" : "index_kafka_table_dc82e5a127e831a_hpgooncg",
"status" : "FAILED",
"duration" : 153891
}