projections + compaction + catalog + msq #17803
Conversation
changes:
* `CompactionTask` now accepts a `projections` property which will cause classic and MSQ auto-compaction to build segments with projections
* `DataSourceCompactionConfig` has been turned into an interface, with the existing implementation renamed to `InlineSchemaDataSourceCompactionConfig`
* Added projections list to `InlineSchemaDataSourceCompactionConfig` to allow explicitly defining projections in an inline schema compaction spec
* If not explicitly defined, compaction tasks will now preserve existing projections when processing segments, combining all named projections across the segments being processed. Different projections with the same name are not checked for equivalence; rather, one will be chosen depending on segment processing order.
* Added ability to define projections as a property of a datasource in the catalog
* If projections are defined in a catalog, they will be automatically used by MSQ insert and replace queries
* Added new experimental `CatalogDataSourceCompactionConfig` which allows populating much of a `CompactionTask` using information stored in the catalog. Currently this has some feature gaps compared to `InlineSchemaDataSourceCompactionConfig`, but will be improved in follow-up work to eventually become much more powerful than what can be expressed via an `InlineSchemaDataSourceCompactionConfig`
* Moved `MetadataCatalog` to druid-server from the catalog extension
* Added method to get `MetadataCatalog` from `CatalogResolver`
* Added `CatalogCoreModule` to provide a null binding for `MetadataCatalog`, overridden if the catalog extension is loaded
* Overlord added as a watcher for the catalog, like the Broker, so that it can have `CatalogResolver` and `MetadataCatalog` available
* Added binding for `MetadataCatalog` to the Coordinator to have `MetadataCatalog` available
* `CatalogUpdateNotifier` now periodically resyncs the catalog on clients, and retries resync failures, fixing an issue if a catalog client is started and the coordinator is not running; `CatalogClientConfig` controls the resync rate and retries
* `CatalogClientModule` can exclude being loaded in coordinator-overlord combined mode
```java
final List<AggregateProjectionSpec> projectionSpecs;
```
break this out into its own method please; the current function is long enough as it is.
done, also split out some other methods because yeah this was a lot
```java
private void processProjections(final QueryableIndex index)
```
some comments here about the logic would be useful, especially how conflicts are handled between projections of different names.
Added.
This got me thinking a bit, though, about which behavior would be better: using the first encountered or the last encountered? Should it warn if there is a mismatch? Right now it is using last encountered, but I didn't give it a lot of thought, since the main goal for this discovery-based stuff was for compaction to not automatically wipe out projections by default. My preference is that this will all be driven via the catalog instead of relying on discovery from segments, but this at least covers the case where the setup was not done (or not explicitly specified in the inline schema compaction spec).
I'm not sure it matters much. I don't think there would be a serious consequence to picking the "wrong" projection when there is a conflict. I think the current logic is ok, I just wanted to see a description of it.
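The behavior settled on in this thread can be sketched roughly as follows. `ProjectionSpec` and the method shape are hypothetical stand-ins, not Druid's actual classes, but they show how a name-keyed merge with no equivalence check makes the last-encountered projection win:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ProjectionMergeSketch
{
  // hypothetical stand-in for Druid's AggregateProjectionSpec
  record ProjectionSpec(String name, String definition) {}

  /**
   * Combines projections discovered across the segments being compacted.
   * Projections are keyed by name with no equivalence check, so when two
   * segments define different projections under the same name, the one
   * from the last segment processed wins.
   */
  static Map<String, ProjectionSpec> mergeByName(List<List<ProjectionSpec>> perSegmentProjections)
  {
    final Map<String, ProjectionSpec> merged = new LinkedHashMap<>();
    for (List<ProjectionSpec> segmentProjections : perSegmentProjections) {
      for (ProjectionSpec spec : segmentProjections) {
        merged.put(spec.name(), spec); // last encountered silently replaces
      }
    }
    return merged;
  }
}
```

Using a `LinkedHashMap` keeps the discovery order of names stable even as conflicting entries are replaced.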
```java
@JsonProperty
```
Should this be nullable? Are null and empty meaningfully different & can both happen?
Not meaningfully different, in the sense that neither will result in projections being built into the segment. I modified it to coerce to empty so that they are considered equivalent for comparison.
Oh wait, I think the modification wasn't the right thing to do since it will not compare correctly; reverting that and marking nullable for now.
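The comparison pitfall described in this thread can be illustrated with a hypothetical config class (the names are made up, not the actual compaction config): if only the getter coerces null to an empty list while `equals` still compares the raw field, the two forms look identical to callers but remain unequal.

```java
import java.util.Collections;
import java.util.List;
import java.util.Objects;

class NullableListConfig
{
  private final List<String> projections; // may be null

  NullableListConfig(List<String> projections)
  {
    this.projections = projections;
  }

  public List<String> getProjections()
  {
    // coercion hides the null from callers...
    return projections == null ? Collections.emptyList() : projections;
  }

  @Override
  public boolean equals(Object o)
  {
    if (this == o) return true;
    if (!(o instanceof NullableListConfig)) return false;
    // ...but equality still sees the raw field, so null != empty here
    return Objects.equals(projections, ((NullableListConfig) o).projections);
  }

  @Override
  public int hashCode()
  {
    return Objects.hashCode(projections);
  }
}
```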
```java
public <T> T getProperty(String key)
```
@Nullable? Also, the fact that decoding is automatic seems interesting and should be mentioned in the javadoc. Otherwise it's not clear why getProperty is different from calling properties() and doing a get.
Fair, renamed to `decodeProperty`, which I think resolves the problem.
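A rough sketch of what the renamed accessor might look like, with the javadoc the reviewer asked for. The class is a hypothetical stand-in, and the real implementation would decode via an `ObjectMapper` rather than the placeholder cast used here:

```java
import java.util.Map;

class PropertyHolderSketch
{
  private final Map<String, Object> properties;

  PropertyHolderSketch(Map<String, Object> properties)
  {
    this.properties = properties;
  }

  /**
   * Returns the value of {@code key} decoded to the requested type, or null
   * if the property is not present. Unlike {@code properties().get(key)},
   * the raw value is converted to the expected type, not returned as-is.
   */
  @SuppressWarnings("unchecked")
  public <T> T decodeProperty(String key)
  {
    final Object raw = properties.get(key);
    // placeholder for real decoding, e.g. jsonMapper.convertValue(raw, type)
    return raw == null ? null : (T) raw;
  }
}
```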
```java
public static boolean isDatasource(String tableType)
```
Would be nice to have javadoc about what tableType is meant to be. Like, what kinds of types can be put in here? Perhaps it's obvious to people that are more familiar with the catalog than I am. But, to me, it's not obvious.
I just shuffled some stuff around; this isn't a new method, but I agree it could use some javadocs. This method only seems to be used by TableEditor, and it is also always inverted (it is validating that catalog hidden-column modifications are only applied to 'datasource'-typed specs). I'll just add a note for now that the expected string arguments are from `TableSpec.type`, but it does seem like this could be reworked to do this differently (same with the `instanceof` version right below it).
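As a sketch, the requested javadoc might read something like this; the surrounding class and constant are illustrative stand-ins, not the actual catalog code:

```java
class TableDefnSketch
{
  // stand-in for the type string carried by datasource table specs
  static final String DATASOURCE_TYPE = "datasource";

  /**
   * Returns whether {@code tableType} identifies a datasource table spec.
   * The expected argument is the value of {@code TableSpec.type}; callers
   * (e.g. TableEditor) use this to validate that hidden-column modifications
   * are only applied to 'datasource'-typed specs.
   */
  public static boolean isDatasource(String tableType)
  {
    return DATASOURCE_TYPE.equals(tableType);
  }
}
```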
```java
@JsonProperty("projections")
```
It can be null, so how about adding @Nullable and also json-include only if nonnull?
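The suggestion amounts to something like the following, sketched on a hypothetical holder class rather than the real compaction config: the field may be null, and `@JsonInclude(NON_NULL)` keeps a null value out of the serialized JSON entirely.

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.util.List;

class ProjectionsHolder
{
  private final List<String> projections; // would also be annotated @Nullable

  ProjectionsHolder(List<String> projections)
  {
    this.projections = projections;
  }

  @JsonProperty("projections")
  @JsonInclude(JsonInclude.Include.NON_NULL)
  public List<String> getProjections()
  {
    return projections;
  }
}
```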
```java
// before
@LoadScope(roles = NodeRole.BROKER_JSON_NAME)
public class CatalogBrokerModule implements DruidModule

// after
@LoadScope(roles = {NodeRole.BROKER_JSON_NAME, NodeRole.OVERLORD_JSON_NAME})
@ExcludeScope(roles = {NodeRole.COORDINATOR_JSON_NAME})
```
Why do we need both @LoadScope and @ExcludeScope? Isn't not listing COORDINATOR_JSON_NAME in @LoadScope enough to not load it there?
I see that we need it because there's a conflict between this module and CatalogCoordinatorModule. Please add a javadoc reference to CatalogCoordinatorModule pointing out that the exclusion is needed because we can't load both in the same process.
Added javadocs to link the modules, and updated the `LoadScope`/`ExcludeScope` javadoc to indicate that `ExcludeScope` takes priority if both are defined.
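A sketch of how the two annotations and their priority rule might be documented; the bodies and role strings here are illustrative, not Druid's actual definitions:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Loads the annotated module only on the listed node roles. If the type is
 * also annotated with {@code @ExcludeScope}, exclusion takes priority.
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface LoadScope
{
  String[] roles();
}

/**
 * Skips the annotated module on the listed node roles even when they match
 * {@code @LoadScope}. Needed when two modules would otherwise create
 * duplicate bindings in one process, e.g. in combined coordinator-overlord
 * mode where both roles are present.
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface ExcludeScope
{
  String[] roles();
}

// example usage mirroring CatalogClientModule's scoping
@LoadScope(roles = {"broker", "overlord"})
@ExcludeScope(roles = {"coordinator"})
class SampleModule {}
```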
```java
// before
public class DataSourceCompactionConfig

// after
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = InlineSchemaDataSourceCompactionConfig.class)
```
Do we need the defaultImpl for backwards-compat reasons?
We should avoid it whenever possible, because it does this weird thing where if you provide an invalid type, rather than being an error, it's mapped to the defaultImpl. It tends to burn people that mistype things, forget to load extensions, etc.
Yeah, it is for backwards compatibility. Looking into a custom serde to see if we can better restrict it to only allow coercion where the type is missing, instead of cases where it is incorrect.
That would be ideal, although it doesn't necessarily need to be done in this PR. If you do a custom serde it would be nice if it was generic enough to apply to other interfaces. (I would love to strike defaultImpl from the codebase.)
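One possible shape for such a custom serde, sketched with stand-in classes and made-up type names ("inline", "catalog"): only a missing `type` field falls back to the default implementation, while a mistyped `type` still fails loudly. This assumes `@JsonTypeInfo` is dropped from the interface and routing is handled entirely by the deserializer.

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.IOException;
import java.util.Map;

class StrictDefaultImplSketch
{
  interface Config {}

  static class InlineConfig implements Config
  {
    public String dataSource;
  }

  static class CatalogConfig implements Config
  {
    public String dataSource;
  }

  static class ConfigDeserializer extends JsonDeserializer<Config>
  {
    private static final Map<String, Class<? extends Config>> TYPES = Map.of(
        "inline", InlineConfig.class,   // hypothetical type names
        "catalog", CatalogConfig.class
    );

    @Override
    public Config deserialize(JsonParser p, DeserializationContext ctxt) throws IOException
    {
      final ObjectMapper mapper = (ObjectMapper) p.getCodec();
      final JsonNode node = mapper.readTree(p);
      final JsonNode type = node.get("type");
      if (type == null) {
        // legacy payload with no "type" field: coerce to the historical default
        return mapper.treeToValue(node, InlineConfig.class);
      }
      final Class<? extends Config> clazz = TYPES.get(type.asText());
      if (clazz == null) {
        // unlike defaultImpl, a mistyped "type" is an error, not a silent fallback
        throw new JsonMappingException(p, "Unknown config type: " + type.asText());
      }
      ((ObjectNode) node).remove("type"); // concrete classes carry no type field
      return mapper.treeToValue(node, clazz);
    }
  }

  static ObjectMapper mapper()
  {
    return new ObjectMapper().registerModule(
        new SimpleModule().addDeserializer(Config.class, new ConfigDeserializer())
    );
  }
}
```

Because the typed paths convert directly to the concrete classes, the interface-level deserializer never recurses into itself.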
Description

changes:
* `CompactionTask` now accepts a `projections` property which will cause classic and MSQ auto-compaction to build segments with projections
* `DataSourceCompactionConfig` has been turned into an interface, with the existing implementation renamed to `InlineSchemaDataSourceCompactionConfig`
* Added projections list to `InlineSchemaDataSourceCompactionConfig` to allow explicitly defining projections in an inline schema compaction spec
* If not explicitly defined, compaction tasks will now preserve existing projections when processing segments, combining all named projections across the segments being processed. Different projections with the same name are not checked for equivalence; rather, one will be chosen depending on segment processing order.
* Added ability to define projections as a property of a datasource in the catalog
* If projections are defined in a catalog, they will be automatically used by MSQ insert and replace queries
* Added new experimental `CatalogDataSourceCompactionConfig` which allows populating much of a `CompactionTask` using information stored in the catalog. Currently this has some feature gaps compared to `InlineSchemaDataSourceCompactionConfig`, but will be improved in follow-up work to eventually become much more powerful than what can be expressed via an `InlineSchemaDataSourceCompactionConfig`
* Moved `MetadataCatalog` to druid-server from the catalog extension
* Added method to get `MetadataCatalog` from `CatalogResolver`
* Added `CatalogCoreModule` to provide a null binding for `MetadataCatalog`, overridden if the catalog extension is loaded
* Overlord added as a watcher for the catalog, like the Broker, so that it can have `CatalogResolver` and `MetadataCatalog` available
* Added binding for `MetadataCatalog` to the Coordinator to have `MetadataCatalog` available
* `CatalogUpdateNotifier` now periodically resyncs the catalog on clients, and retries resync failures, fixing an issue if a catalog client is started and the coordinator is not running
* Added `CatalogClientConfig` to control polling behavior for resyncs of catalog clients, similar to basic-auth cache
* Added `ExcludeScope` annotation so that `CatalogClientModule` can be skipped for coordinator node roles (when operating in combined coordinator-overlord mode both roles are present, causing Guice binding errors from attempted duplicate bindings)

API Examples
I used the IntelliJ REST client to run these, but they should work with curl or whatever else too.
list catalog tables (should be empty if uninitialized)
create projection for table wiki-projections-catalog
get spec of catalog table
update catalog table projections to add a projection (version is updated field from 'get' response)
for coordinator based compaction
(alternative) create auto-compaction supervisor if using compaction supervisors instead of coordinator compaction
Release note
todo