Add persist operations for routine load job #754
Conversation
columnsInfo = (LoadColumnsInfo) parseNode;
columnsInfo.analyze(analyzer);
} else if (parseNode instanceof Expr) {
importColumnsStmt = (ImportColumnsStmt) parseNode;
Why don't you analyze importColumnsStmt?
}
wherePredicate = (Expr) parseNode;
wherePredicate.analyze(analyzer);
importWhereStmt = (ImportWhereStmt) parseNode;
Why don't you analyze importWhereStmt?
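For reference, a minimal sketch of what these two comments suggest, assuming ImportColumnsStmt and ImportWhereStmt follow the usual ParseNode contract of exposing analyze(Analyzer); the branch conditions and field names here are illustrative, not taken from this PR:

```java
// Sketch only: analyze the new statement types the same way the old code
// analyzed columnsInfo and wherePredicate.
if (parseNode instanceof ImportColumnsStmt) {
    importColumnsStmt = (ImportColumnsStmt) parseNode;
    importColumnsStmt.analyze(analyzer);   // validate the column mapping up front
} else if (parseNode instanceof ImportWhereStmt) {
    importWhereStmt = (ImportWhereStmt) parseNode;
    importWhereStmt.analyze(analyzer);     // validate the where predicate up front
}
```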
}

private void checkCustomProperties() throws AnalysisException {
private void checkLoadSourceProperties() throws AnalysisException {
LoadSource or DataSource?
throw new AnalysisException(KAFKA_OFFSETS_PROPERTY + " could not be a empty string");
}
String[] kafkaOffsetsStringList = kafkaOffsetsString.split(",");
if (kafkaOffsetsStringList.length != kafkaPartitionOffsets.size()) {
What if some partitions have offsets while others don't?
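One hedged way to answer that question, sketched with illustrative names (DEFAULT_OFFSET and partitionIds are assumptions, not from this PR): reject only the case of too many offsets, and let partitions without an explicit offset fall back to a default.

```java
// Sketch: tolerate fewer offsets than partitions by applying an assumed default,
// instead of failing on any length mismatch.
String[] kafkaOffsetsStringList = kafkaOffsetsString.split(",");
if (kafkaOffsetsStringList.length > partitionIds.size()) {
    throw new AnalysisException(KAFKA_OFFSETS_PROPERTY
            + " lists more offsets than partitions");
}
Map<Integer, Long> partitionToOffset = new HashMap<>();
for (int i = 0; i < partitionIds.size(); i++) {
    long offset = (i < kafkaOffsetsStringList.length)
            ? Long.parseLong(kafkaOffsetsStringList[i].trim())
            : DEFAULT_OFFSET;   // partitions beyond the given list start from the default
    partitionToOffset.put(partitionIds.get(i), offset);
}
```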
super(-1, LoadDataSourceType.KAFKA);
}

public KafkaRoutineLoadJob(Long id, String name, long dbId, long tableId, String brokerList, String topic) {
Maybe the id can be initialized by the constructor itself.
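A sketch of that suggestion, reusing Catalog.getInstance().getNextId() from the next diff snippet below; the reduced parameter list and constructor body are illustrative only:

```java
// Sketch: the job allocates its own id instead of receiving it from the caller.
public KafkaRoutineLoadJob(String name, long dbId, long tableId,
                           String brokerList, String topic) {
    super(Catalog.getInstance().getNextId(), LoadDataSourceType.KAFKA);
    // name, dbId, tableId, brokerList and topic would be assigned as in the existing constructor
}
```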
}

// init kafka routine load job
long id = Catalog.getInstance().getNextId();
Same comment as for the constructor: the job could fetch this id itself.
// when currentTotalNum is more then ten thousand or currentErrorNum is more then maxErrorNum
protected int currentErrorNum;
protected int currentTotalNum;
protected long currentErrorNum;
Using int is enough; it will be reset after it exceeds 10,000.
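A sketch of that point, with illustrative field and method names; only maxErrorNum and the ten-thousand window come from the surrounding comment:

```java
// Sketch: keep the counters as int and reset them once the ~10,000-row window is
// consumed, so they stay far below Integer.MAX_VALUE.
private static final int BASE_ROW_WINDOW = 10000;   // from the comment in the diff
protected int currentErrorNum;
protected int currentTotalNum;

protected void updateCounters(int rows, int errorRows) {   // illustrative method name
    currentTotalNum += rows;
    currentErrorNum += errorRows;
    if (currentTotalNum >= BASE_ROW_WINDOW || currentErrorNum > maxErrorNum) {
        // pausing or aborting the job when currentErrorNum > maxErrorNum would happen here
        currentTotalNum = 0;
        currentErrorNum = 0;
    }
}
```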
import org.apache.doris.transaction.AbortTransactionException;
import org.apache.doris.transaction.TransactionException;
import org.apache.doris.transaction.TransactionState;
public abstract class TxnStateChangeListener {
Why change the interface to an abstract class?
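For context on the question: an interface with Java 8 default methods can offer the same "override only what you need" convenience as an abstract class. A minimal sketch of the interface alternative, where the callback names are illustrative and only the imported transaction types come from the diff:

```java
import org.apache.doris.transaction.AbortTransactionException;
import org.apache.doris.transaction.TransactionException;
import org.apache.doris.transaction.TransactionState;

// Sketch: empty default bodies let implementers override only the callbacks they
// care about, without forcing a shared base class on them.
public interface TxnStateChangeListener {
    default void beforeCommitted(TransactionState txnState) throws TransactionException {}
    default void beforeAborted(TransactionState txnState) throws AbortTransactionException {}
    default void afterCommitted(TransactionState txnState) {}
    default void afterAborted(TransactionState txnState) {}
}
```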
Force-pushed from 5fdb023 to 27b42c1.
No description provided.