Move ingest validation to the Calcite validator #13903
paul-rogers wants to merge 7 commits into apache:master from
Conversation
* String literals for PARTITIONED BY
* Revise handlers to use new validation logic
```java
Map<String, Object> context = ImmutableMap.<String, Object>builder()
    .putAll(DEFAULT_MSQ_CONTEXT)
    .put(
        MultiStageQueryContext.CTX_CLUSTER_STATISTICS_MERGE_MODE,
        ClusterStatisticsMergeMode.SEQUENTIAL.toString()
    )
    .build();
```
Check notice — Code scanning / CodeQL: Possible confusion of local and field
@paul-rogers, this PR #14023 moved some things around, including

This pull request has been marked as stale due to 60 days of inactivity.

This pull request/issue has been closed due to lack of activity. If you think that
This PR was split out of the "big" catalog PR.
Refactors the Druid planner to move `INSERT` and `REPLACE` validation out of the statement handlers and into the Calcite validator. This is the right place for validation in general, and it is required for the catalog integration to be added in a separate PR.

Most changes are purely mechanical. One that is not is the way that the `PARTITIONED BY` grain is represented in the AST. Prior to this commit, `PARTITIONED BY` was represented as a Druid granularity object. However, the Calcite validator requires that all properties of an AST node be a subclass of `SqlNode`, which `Granularity` is not. To solve this, the granularity is represented as a string `SqlLiteral` in the AST and validated into a Druid granularity in the validator. This is probably a better choice, now that we can make it. A side effect of this change is that the parser can now allow string literals for `PARTITIONED BY`. Example: `PARTITIONED BY 'P1D'`. This seems safe because, in Druid, partitioning is always by time and gives the segment grain. We are unlikely to change the segment-grain idea any time soon.

This PR also forbids the use of the `WEEK` (or `P1W`) granularity, since earlier feedback indicated we no longer wish to support that grain.

Release note
This PR introduces another option for how to specify time granularity: by using ISO-8601 periods and Druid period names as a string argument to `PARTITIONED BY`. See `reference.md` for details.

Hints to reviewers
The real core of this PR is `sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidSqlValidator.java`: the place where we moved the former ad-hoc `INSERT` and `REPLACE` validation to instead run within the SQL validator.

No runtime code was changed: all the non-trivial changes are in the SQL planner.
The code here is identical to that in the "full" PR: all prior review comments are reflected. The only difference is that the validator in this PR omits the catalog integration from the full PR.
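To illustrate the kind of check that moved into the validator, here is a minimal, self-contained sketch of validating a `PARTITIONED BY` string literal. It is an assumption-laden simplification: it uses only the JDK's `java.time.Period` (so it covers date-based grains such as `P1D` but not time-based ones like `PT1H`, and it does not handle Druid period names such as `day`), whereas the real `DruidSqlValidator` resolves the literal into a Druid `Granularity` object. The class and method names here are hypothetical.

```java
import java.time.Period;
import java.time.format.DateTimeParseException;

// Simplified sketch of PARTITIONED BY literal validation: parse the string
// as an ISO-8601 period and reject the forbidden WEEK grain. The real
// validator maps the literal to a Druid Granularity instead.
public class PartitionedByCheck
{
  public static Period validateGrain(String literal)
  {
    final Period period;
    try {
      // Date-based ISO-8601 periods only; PT1H etc. are out of scope here.
      period = Period.parse(literal);
    }
    catch (DateTimeParseException e) {
      throw new IllegalArgumentException("Invalid PARTITIONED BY granularity: " + literal);
    }
    // java.time.Period normalizes P1W to seven days.
    if (period.getYears() == 0 && period.getMonths() == 0 && period.getDays() == 7) {
      throw new IllegalArgumentException("WEEK (P1W) segment grain is not supported");
    }
    return period;
  }

  public static void main(String[] args)
  {
    System.out.println(validateGrain("P1D"));
    try {
      validateGrain("P1W");
    }
    catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

The point of moving such checks into the validator is that they run with full AST context, rather than being re-implemented ad hoc in each statement handler.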
This PR has: