Remove ServerView from RealtimeIndexTasks and use coordinator http endpoint for handoff information #2015
Changes from all commits
```diff
@@ -24,10 +24,10 @@
 import com.google.inject.Inject;
 import com.metamx.emitter.service.ServiceEmitter;
 import com.metamx.metrics.MonitorScheduler;
-import io.druid.client.FilteredServerView;
 import io.druid.client.cache.Cache;
 import io.druid.client.cache.CacheConfig;
 import io.druid.guice.annotations.Processing;
 import io.druid.indexing.common.actions.TaskActionClient;
 import io.druid.indexing.common.actions.TaskActionClientFactory;
 import io.druid.indexing.common.config.TaskConfig;
 import io.druid.indexing.common.task.Task;
```
```diff
@@ -38,6 +38,7 @@
 import io.druid.segment.loading.DataSegmentKiller;
 import io.druid.segment.loading.DataSegmentMover;
 import io.druid.segment.loading.DataSegmentPusher;
+import io.druid.segment.realtime.plumber.SegmentHandoffNotifierFactory;
 import io.druid.server.coordination.DataSegmentAnnouncer;

 import java.io.File;
```
```diff
@@ -56,7 +57,7 @@ public class TaskToolboxFactory
   private final DataSegmentMover dataSegmentMover;
   private final DataSegmentArchiver dataSegmentArchiver;
   private final DataSegmentAnnouncer segmentAnnouncer;
-  private final FilteredServerView newSegmentServerView;
+  private final SegmentHandoffNotifierFactory handoffNotifierFactory;
   private final QueryRunnerFactoryConglomerate queryRunnerFactoryConglomerate;
   private final ExecutorService queryExecutorService;
   private final MonitorScheduler monitorScheduler;
```
```diff
@@ -77,7 +78,7 @@ public TaskToolboxFactory(
     DataSegmentMover dataSegmentMover,
     DataSegmentArchiver dataSegmentArchiver,
     DataSegmentAnnouncer segmentAnnouncer,
-    FilteredServerView newSegmentServerView,
+    SegmentHandoffNotifierFactory handoffNotifierFactory,
     QueryRunnerFactoryConglomerate queryRunnerFactoryConglomerate,
     @Processing ExecutorService queryExecutorService,
     MonitorScheduler monitorScheduler,
```

**Member:** Minor nit, but we should keep the arguments in the same order and just replace `newSegmentServerView` with `handoffNotifierFactory`. It makes it easier to track changes and resolve conflicts.

**Author:** Fixed.
```diff
@@ -97,7 +98,7 @@ public TaskToolboxFactory(
     this.dataSegmentMover = dataSegmentMover;
     this.dataSegmentArchiver = dataSegmentArchiver;
     this.segmentAnnouncer = segmentAnnouncer;
-    this.newSegmentServerView = newSegmentServerView;
+    this.handoffNotifierFactory = handoffNotifierFactory;
    this.queryRunnerFactoryConglomerate = queryRunnerFactoryConglomerate;
    this.queryExecutorService = queryExecutorService;
    this.monitorScheduler = monitorScheduler;
```
```diff
@@ -112,18 +113,17 @@ public TaskToolboxFactory(
   public TaskToolbox build(Task task)
   {
     final File taskWorkDir = config.getTaskWorkDir(task.getId());

     return new TaskToolbox(
         config,
         task,
-        taskActionClientFactory,
+        taskActionClientFactory.create(task),
         emitter,
         segmentPusher,
         dataSegmentKiller,
         dataSegmentMover,
         dataSegmentArchiver,
         segmentAnnouncer,
-        newSegmentServerView,
+        handoffNotifierFactory,
         queryRunnerFactoryConglomerate,
         queryExecutorService,
         monitorScheduler,
```
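To illustrate the pattern this diff moves to: instead of polling a `ServerView` for new segments, the task gets a notifier from the injected factory and registers a callback that fires once the coordinator reports the segment handed off. The interfaces and names below are simplified stand-ins written for this sketch, not Druid's actual API.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;

public class HandoffSketch
{
  // Stand-in for a per-datasource handoff notifier: fires a callback on handoff.
  interface HandoffNotifier
  {
    void registerSegmentHandoffCallback(String segmentId, Executor exec, Runnable callback);
    void close();
  }

  // Stand-in for the injected factory (one notifier per datasource).
  interface HandoffNotifierFactory
  {
    HandoffNotifier createNotifier(String dataSource);
  }

  public static void main(String[] args) throws InterruptedException
  {
    // Toy factory whose notifier invokes the callback immediately, as if the
    // coordinator's HTTP endpoint already reported the segment as served.
    // A real implementation would poll the coordinator instead.
    HandoffNotifierFactory factory = dataSource -> new HandoffNotifier()
    {
      @Override
      public void registerSegmentHandoffCallback(String segmentId, Executor exec, Runnable cb)
      {
        exec.execute(cb);
      }

      @Override
      public void close() {}
    };

    CountDownLatch handedOff = new CountDownLatch(1);
    HandoffNotifier notifier = factory.createNotifier("wikipedia");
    notifier.registerSegmentHandoffCallback(
        "wikipedia_2015-12-01/2015-12-02",  // hypothetical segment id
        Runnable::run,                       // run callback inline for the sketch
        handedOff::countDown
    );
    handedOff.await();                       // task blocks here until handoff
    notifier.close();
    System.out.println("segment handed off");
  }
}
```

The benefit over the old `FilteredServerView` approach is that the task no longer needs a ZooKeeper-backed view of the whole cluster just to learn that one of its own segments is being served.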
Even if this is "druid/coordinator" by default, it should be set to "coordinator" in the example common.runtime.properties, since the example coordinator properties set the druid.service to "coordinator". Many people start off their clusters by copying the example configs.

I wonder if we should also just change the defaults in the code to match those…
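As a sketch of the suggestion above, assuming Druid's `druid.selectors.coordinator.serviceName` selector key (which clients use to discover the coordinator), the two example config files would need to agree like this:

```properties
# common.runtime.properties (hypothetical excerpt):
# service name the handoff client looks up to find the coordinator
druid.selectors.coordinator.serviceName=coordinator

# coordinator's runtime.properties (hypothetical excerpt):
# must match the selector value above, or handoff checks can't find it
druid.service=coordinator
```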
I think the configs were created before we had defaults. I'd rather just change the example configs to match the defaults, or leave it out of the example configs entirely, since it's not necessary.
The default overlord service name in tranquility is "overlord" to match the examples, so if we do change the examples we should change that too. All three of those things should match though (druid defaults, tranquility defaults, druid example configs).
#2046 to track

#2048