2017-06-26T19:23:43,774 INFO [qtp1146045637-133] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_XXX_14759287c3f3b7b_hjehimnb]: SegmentAllocateAction{dataSource='XXX', timestamp=213484092-10-22T10:45:15.392Z, queryGranularity={type=period, period=PT1M, timeZone=UTC, origin=null}, preferredSegmentGranularity={type=period, period=P1D, timeZone=UTC, origin=null}, sequenceName='index_kafka_XXX_14759287c3f3b7b_10', previousSegmentId='XXX_10031072-10-16T00:00:00.000Z_10031072-10-17T00:00:00.000Z_2017-06-22T22:47:52.070Z_1'}
2017-06-26T19:23:43,832 ERROR [qtp1146045637-133] com.sun.jersey.spi.container.ContainerResponse - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.IllegalArgumentException: The end instant must be greater or equal to the start
at org.joda.time.base.AbstractInterval.checkInterval(AbstractInterval.java:63) ~[joda-time-2.8.2.jar:2.8.2]
at org.joda.time.base.BaseInterval.<init>(BaseInterval.java:94) ~[joda-time-2.8.2.jar:2.8.2]
at org.joda.time.Interval.<init>(Interval.java:122) ~[joda-time-2.8.2.jar:2.8.2]
Timestamps after `JodaUtils.MAX_INSTANT` cause strange behavior. An example stack trace is above. I'm not exactly sure what went wrong here, but the year-213484092 timestamp in the SegmentAllocateAction suggests some kind of overflow. I think we should simply treat these out-of-range timestamps as unparseable rows.
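A minimal sketch of the suspected failure mode (the class, the bucketing helpers, and the bound constants here are mine, not Druid's; Druid's `JodaUtils` defines its instant bounds around `Long.MIN_VALUE / 2` and `Long.MAX_VALUE / 2`, if memory serves): bucketing a huge epoch-millis timestamp by a period and then computing the exclusive bucket end wraps past `Long.MAX_VALUE`, leaving end < start, which is exactly the precondition Joda's `Interval` constructor rejects.

```java
// Sketch only -- illustrates the overflow, not Druid's actual allocation code.
public class OverflowSketch {
    static final long DAY_MS = 24L * 60 * 60 * 1000;

    // Assumed bounds mirroring JodaUtils.MIN_INSTANT / MAX_INSTANT.
    static final long MIN_INSTANT = Long.MIN_VALUE / 2;
    static final long MAX_INSTANT = Long.MAX_VALUE / 2;

    // Hypothetical P1D-granularity bucketing in raw epoch millis.
    static long bucketStart(long ts) { return ts - Math.floorMod(ts, DAY_MS); }
    static long bucketEnd(long ts)   { return bucketStart(ts) + DAY_MS; } // wraps near Long.MAX_VALUE

    // The proposed fix: reject rows whose timestamp falls outside the safe range.
    static boolean inRange(long ts) { return ts >= MIN_INSTANT && ts <= MAX_INSTANT; }

    public static void main(String[] args) {
        long ts = Long.MAX_VALUE - 1000; // far past any representable instant
        long start = bucketStart(ts);
        long end = bucketEnd(ts);
        // start + DAY_MS overflows to a negative value, so end < start --
        // the same condition that makes Joda's Interval throw
        // "The end instant must be greater or equal to the start".
        System.out.println(end < start); // true
        System.out.println(inRange(ts)); // false -> treat the row as unparseable
    }
}
```

With a range check like `inRange` applied at parse time, such rows would be counted as unparseable instead of reaching segment allocation and blowing up inside `Interval`.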