Merged
26 changes: 1 addition & 25 deletions docs/en/admin-manual/config/fe-config.md
@@ -433,20 +433,6 @@ MasterOnly:true

As long as one BE is down, Routine Load cannot be automatically resumed

### enable_materialized_view

Default:true

IsMutable:true

MasterOnly:true

This configuration is used to turn the creation of materialized views on and off. If set to true, materialized view creation is enabled and users can create materialized views with the `CREATE MATERIALIZED VIEW` command. If set to false, materialized views cannot be created.

If you get the error `The materialized view is coming soon` or `The materialized view is disabled` when creating a materialized view, this configuration is set to false and materialized view creation is disabled. Set the configuration to true to enable it.

This variable is a dynamic configuration: users can modify it with a command after the FE process starts, or modify the FE configuration file and restart the FE for the change to take effect.
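Before this PR removed the option, a mutable, master-only FE configuration like this could be flipped at runtime with the standard admin commands (illustrative; run against the Master FE):

```sql
-- Enable materialized view creation without restarting the FE
ADMIN SET FRONTEND CONFIG ("enable_materialized_view" = "true");
-- Verify the effective value
ADMIN SHOW FRONTEND CONFIG LIKE 'enable_materialized_view';
```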

### check_java_version

Default:true
@@ -471,7 +457,7 @@ IsMutable:true

MasterOnly:true

Whether to enable dynamic partition, enabled by default
Whether to enable dynamic partition scheduler, enabled by default

### dynamic_partition_check_interval_seconds

@@ -2126,16 +2112,6 @@ Only for Master FE: false

If set to true, replicas with slower compaction will be skipped when selecting queryable replicas

### enable_create_sync_job

Enables the MySQL data synchronization job feature. The default is false, i.e. the feature is turned off.

Default: false

IsMutable: true

MasterOnly: true
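Before this PR, enabling Binlog Load therefore required a fe.conf entry like the following sketch (the `sync_commit_interval_second` value is illustrative, not a recommendation):

```
# fe.conf -- sketch; enable_create_sync_job was removed by this PR,
# after which sync jobs are always allowed
enable_create_sync_job = true
# maximum interval (seconds) between transaction commits; value illustrative
sync_commit_interval_second = 10
```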

### sync_commit_interval_second

The maximum time interval for committing transactions. If data in the channel is still uncommitted after this interval, the consumer notifies the channel to commit the transaction.
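The commit rule described above is essentially a periodic flush. Below is a minimal, self-contained sketch of that idea — the `FakeChannel` API is hypothetical and only illustrates the timing logic; Doris's actual SyncJob channel is implemented in Java inside the FE:

```python
import time

class FakeChannel:
    """Hypothetical stand-in for a sync-job channel; not Doris's real API."""
    def __init__(self, batches):
        self.batches = list(batches)  # incoming row counts, one per poll
        self.pending = 0              # rows received but not yet committed
        self.commits = 0              # number of committed transactions

    def is_open(self):
        return bool(self.batches) or self.pending > 0

    def poll(self):
        # Receive the next batch of binlog rows, if any.
        if self.batches:
            self.pending += self.batches.pop(0)

    def has_pending(self):
        return self.pending > 0

    def commit_transaction(self):
        self.commits += 1
        self.pending = 0

def run_channel(channel, sync_commit_interval_second=10, clock=time.monotonic):
    """Commit pending channel data whenever the commit interval has elapsed."""
    last_commit = clock()
    while channel.is_open():
        channel.poll()
        if channel.has_pending() and clock() - last_commit >= sync_commit_interval_second:
            channel.commit_transaction()  # consumer notifies the channel to commit
            last_commit = clock()
```

Injecting a fake clock makes the interval logic testable without real waiting.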
4 changes: 0 additions & 4 deletions docs/en/data-operate/import/import-way/binlog-load-manual.md
@@ -510,10 +510,6 @@ You can use [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-
### Fe configuration

The following configurations are system-level configurations of SyncJob. Their values can be modified in the configuration file fe.conf.

* `enable_create_sync_job`

Turns on the Binlog Load feature. The default value is false, i.e. the feature is turned off.

* `sync_commit_interval_second`

26 changes: 1 addition & 25 deletions docs/zh-CN/admin-manual/config/fe-config.md
@@ -419,20 +419,6 @@ show data (other usage: HELP SHOW DATA)

Routine Load cannot be automatically resumed as long as any one BE is down

### `enable_materialized_view`

Default: true

IsMutable: true

MasterOnly: true

This configuration is used to turn the creation of materialized views on and off. If set to true, materialized view creation is enabled and users can create materialized views with the `CREATE MATERIALIZED VIEW` command. If set to false, materialized views cannot be created.

If you get the error `The materialized view is coming soon` or `The materialized view is disabled` when creating a materialized view, this configuration is set to false and materialized view creation is disabled. Set the configuration to true to enable it.

This variable is a dynamic configuration: users can modify it with a command after the FE process starts, or modify the FE configuration file and restart the FE for the change to take effect.

### `check_java_version`

Default: true
@@ -457,7 +443,7 @@ Doris will check whether the compiled and running Java versions are compatible; if incompatible

MasterOnly: true

Whether to enable dynamic partition, enabled by default
Whether to enable the dynamic partition scheduler, enabled by default

### `dynamic_partition_check_interval_seconds`

@@ -2171,16 +2157,6 @@ The load label cleaner will run every `label_clean_interval_second` to clean

If set to true, replicas with slower compaction will be skipped when selecting queryable replicas

### enable_create_sync_job

Enables the MySQL data synchronization job feature. The default is false, i.e. the feature is turned off.

Default: false

IsMutable: true

MasterOnly: true

### sync_commit_interval_second

The maximum time interval for committing transactions. If data in the channel is still uncommitted after this interval, the consumer notifies the channel to commit the transaction.
@@ -494,10 +494,6 @@ binlog_desc

The following configurations are system-level configurations of the data synchronization job; their values are adjusted mainly by modifying fe.conf.

- `enable_create_sync_job`

Enables the data synchronization job feature. The default is false, i.e. the feature is turned off.

- `sync_commit_interval_second`

The maximum time interval for committing transactions. If data in the channel is still uncommitted after this interval, the consumer notifies the channel to commit the transaction.
@@ -23,7 +23,6 @@
import org.apache.doris.catalog.OlapTable;
import org.apache.doris.cluster.ClusterNamespace;
import org.apache.doris.common.AnalysisException;
import org.apache.doris.common.Config;
import org.apache.doris.common.ErrorCode;
import org.apache.doris.common.ErrorReport;
import org.apache.doris.common.UserException;
@@ -79,11 +78,6 @@ public void analyze(Analyzer analyzer) throws UserException {
}
dbName = ClusterNamespace.getFullName(analyzer.getClusterName(), dbName);

if (!Config.enable_create_sync_job) {
throw new AnalysisException("Mysql sync job is disabled." +
" Set config 'enable_create_sync_job' = 'true' to enable this feature. ");
}

if (binlogDesc != null) {
binlogDesc.analyze();
dataSyncJobType = binlogDesc.getDataSyncJobType();
@@ -24,7 +24,6 @@
import org.apache.doris.catalog.PrimitiveType;
import org.apache.doris.catalog.Type;
import org.apache.doris.common.AnalysisException;
import org.apache.doris.common.Config;
import org.apache.doris.common.ErrorCode;
import org.apache.doris.common.ErrorReport;
import org.apache.doris.common.FeConstants;
@@ -131,9 +130,6 @@ public KeysType getMVKeysType() {

@Override
public void analyze(Analyzer analyzer) throws UserException {
if (!Config.enable_materialized_view) {
throw new AnalysisException("The materialized view is disabled");
}
super.analyze(analyzer);
FeNameFormat.checkTableName(mvName);
// TODO(ml): The mv name in from clause should pass the analyze without error.
@@ -62,10 +62,6 @@ public FunctionCallExpr getFnExpr() {

@Override
public void analyze(Analyzer analyzer) throws UserException {
if (!analyzer.getContext().getSessionVariable().isEnableLateralView()) {
throw new AnalysisException("The session variables `enable_lateral_view` is false");
}

if (isAnalyzed) {
return;
}
@@ -19,8 +19,6 @@

import org.apache.doris.analysis.DataSortInfo;
import org.apache.doris.common.AnalysisException;
import org.apache.doris.common.Config;
import org.apache.doris.common.DdlException;
import org.apache.doris.common.FeMetaVersion;
import org.apache.doris.common.io.Text;
import org.apache.doris.common.io.Writable;
@@ -117,14 +115,7 @@ public TableProperty resetPropertiesForRestore() {
return this;
}

public TableProperty buildDynamicProperty() throws DdlException {
if (properties.containsKey(DynamicPartitionProperty.ENABLE)
&& Boolean.valueOf(properties.get(DynamicPartitionProperty.ENABLE))
&& !Config.dynamic_partition_enable) {
throw new DdlException("Could not create table with dynamic partition "
+ "when fe config dynamic_partition_enable is false. "
+ "Please ADMIN SET FRONTEND CONFIG (\"dynamic_partition_enable\" = \"true\") firstly.");
}
public TableProperty buildDynamicProperty() {
executeBuildDynamicProperty();
return this;
}
12 changes: 0 additions & 12 deletions fe/fe-core/src/main/java/org/apache/doris/common/Config.java
@@ -1288,18 +1288,6 @@ public class Config extends ConfigBase {
@ConfField
public static boolean check_java_version = true;

/**
* control materialized view
*/
@ConfField(mutable = true, masterOnly = true)
public static boolean enable_materialized_view = true;

/**
* enable create sync job
*/
@ConfField(mutable = true, masterOnly = true)
public static boolean enable_create_sync_job = false;

/**
* it can't auto-resume routine load job as long as one of the backends is down
*/
11 changes: 0 additions & 11 deletions fe/fe-core/src/main/java/org/apache/doris/qe/SessionVariable.java
@@ -415,9 +415,6 @@ public class SessionVariable implements Serializable, Writable {
@VariableMgr.VarAttr(name = CPU_RESOURCE_LIMIT)
public int cpuResourceLimit = -1;

@VariableMgr.VarAttr(name = ENABLE_LATERAL_VIEW, needForward = true)
public boolean enableLateralView = false;

@VariableMgr.VarAttr(name = DISABLE_JOIN_REORDER)
private boolean disableJoinReorder = false;

@@ -876,14 +873,6 @@ public boolean isEnableParallelOutfile() {
return enableParallelOutfile;
}

public boolean isEnableLateralView() {
return enableLateralView;
}

public void setEnableLateralView(boolean enableLateralView) {
this.enableLateralView = enableLateralView;
}

public boolean isDisableJoinReorder() {
return disableJoinReorder;
}
@@ -67,7 +67,6 @@ public class AlterTest {
public static void beforeClass() throws Exception {
FeConstants.runningUnitTest = true;
FeConstants.default_scheduler_interval_millisecond = 100;
Config.dynamic_partition_enable = true;
Config.dynamic_partition_check_interval_seconds = 1;
Config.disable_storage_medium_check = true;
UtFrameUtils.createDorisCluster(runningDir);
@@ -61,10 +61,7 @@
import org.apache.doris.transaction.GlobalTransactionMgr;

import com.google.common.collect.Lists;
import mockit.Expectations;
import mockit.Mock;
import mockit.MockUp;
import mockit.Mocked;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@@ -80,6 +77,10 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import mockit.Expectations;
import mockit.Mock;
import mockit.MockUp;
import mockit.Mocked;

public class RollupJobV2Test {
private static String fileName = "./RollupJobV2Test";
@@ -313,7 +314,6 @@ public void testSchemaChangeWhileTabletNotStable() throws Exception {
@Test
public void testSerializeOfRollupJob(@Mocked CreateMaterializedViewStmt stmt) throws IOException,
AnalysisException {
Config.enable_materialized_view = true;
// prepare file
File file = new File(fileName);
file.createNewFile();
@@ -21,7 +21,6 @@
import org.apache.doris.catalog.Database;
import org.apache.doris.catalog.KeysType;
import org.apache.doris.catalog.OlapTable;
import org.apache.doris.common.Config;
import org.apache.doris.common.UserException;
import org.apache.doris.load.sync.DataSyncJobType;
import org.apache.doris.mysql.privilege.PaloAuth;
@@ -30,9 +29,7 @@

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import mockit.Expectations;
import mockit.Injectable;
import mockit.Mocked;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.Assert;
@@ -41,6 +38,9 @@

import java.util.List;
import java.util.Map;
import mockit.Expectations;
import mockit.Injectable;
import mockit.Mocked;

public class CreateDataSyncJobStmtTest {
private static final Logger LOG = LogManager.getLogger(CreateDataSyncJobStmtTest.class);
@@ -91,8 +91,6 @@ public void setUp() {
result = catalog;
}
};

Config.enable_create_sync_job = true;
}
@Test
public void testNoDb() {
@@ -32,15 +32,16 @@
import org.apache.doris.qe.ConnectContext;

import com.google.common.collect.Lists;
import mockit.Expectations;
import mockit.Injectable;
import mockit.Mocked;

import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.util.ArrayList;
import java.util.List;
import mockit.Expectations;
import mockit.Injectable;
import mockit.Mocked;

public class CreateMaterializedViewStmtTest {

@@ -55,7 +56,7 @@ public class CreateMaterializedViewStmtTest {

@Before
public void initTest() {
Deencapsulation.setField(Config.class, "enable_materialized_view", true);

}

@Test
@@ -58,7 +58,7 @@ public static void beforeClass() throws Exception {
FeConstants.runningUnitTest = true;
UtFrameUtils.createDorisCluster(runningDir);
dorisAssert = new DorisAssert();
dorisAssert.withEnableMV().withDatabase(HR_DB_NAME).useDatabase(HR_DB_NAME);
dorisAssert.withDatabase(HR_DB_NAME).useDatabase(HR_DB_NAME);
}

@Before
@@ -46,7 +46,6 @@ public void tearDown() throws Exception {
public static void setUp() throws Exception {
UtFrameUtils.createDorisCluster(runningDir);
ctx = UtFrameUtils.createDefaultCtx();
ctx.getSessionVariable().setEnableLateralView(true);
String createDbStmtStr = "create database db1;";
CreateDbStmt createDbStmt = (CreateDbStmt) UtFrameUtils.parseAndAnalyzeStmt(createDbStmtStr, ctx);
Catalog.getCurrentCatalog().createDb(createDbStmt);
@@ -32,7 +32,6 @@
import org.apache.doris.catalog.Catalog;
import org.apache.doris.cluster.ClusterNamespace;
import org.apache.doris.common.AnalysisException;
import org.apache.doris.common.Config;
import org.apache.doris.common.util.SqlParserUtils;
import org.apache.doris.planner.Planner;
import org.apache.doris.qe.ConnectContext;
@@ -68,12 +67,6 @@ public DorisAssert(ConnectContext ctx) {
this.ctx = ctx;
}

public DorisAssert withEnableMV() {
ctx.getSessionVariable().setTestMaterializedView(true);
Config.enable_materialized_view = true;
return this;
}

public DorisAssert withDatabase(String dbName) throws Exception {
CreateDbStmt createDbStmt =
(CreateDbStmt) UtFrameUtils.parseAndAnalyzeStmt("create database " + dbName + ";", ctx);
3 changes: 0 additions & 3 deletions regression-test/data/query/lateral_view/test_issue_8850.out
@@ -6,10 +6,7 @@
0

-- !test_issue_8850_3 --
0

-- !test_issue_8850_4 --

-- !test_issue_8850_5 --
0

2 changes: 0 additions & 2 deletions regression-test/suites/query/lateral_view/test_issue_8850.sql
@@ -2,8 +2,6 @@ DROP TABLE IF EXISTS tag_map;

CREATE TABLE `tag_map` ( `tag_group` bigint(20) NULL COMMENT "标签组", `tag_value_id` varchar(64) NULL COMMENT "标签值", `tag_range` int(11) NOT NULL DEFAULT "0" COMMENT "", `partition_sign` varchar(32) NOT NULL COMMENT "分区标识", `bucket` int(11) NOT NULL COMMENT "分桶字段", `confidence` tinyint(4) NULL DEFAULT "100" COMMENT "置信度", `members` bitmap BITMAP_UNION NULL COMMENT "人群") ENGINE=OLAP AGGREGATE KEY(`tag_group`, `tag_value_id`, `tag_range`, `partition_sign`, `bucket`, `confidence`) COMMENT "dmp_tag_map" PARTITION BY LIST(`partition_sign`) (PARTITION p202203231 VALUES IN ("2022-03-23-1"), PARTITION p202203251 VALUES IN ("2022-03-25-1"), PARTITION p202203261 VALUES IN ("2022-03-26-1"), PARTITION p202203271 VALUES IN ("2022-03-27-1"), PARTITION p202203281 VALUES IN ("2022-03-28-1"), PARTITION p202203291 VALUES IN ("2022-03-29-1"), PARTITION p202203301 VALUES IN ("2022-03-30-1"), PARTITION p202203311 VALUES IN ("2022-03-31-1"), PARTITION p202204011 VALUES IN ("2022-04-01-1"), PARTITION crowd VALUES IN ("crowd"), PARTITION crowd_tmp VALUES IN ("crowd_tmp"), PARTITION extend_crowd VALUES IN ("extend_crowd"), PARTITION partition_sign VALUES IN ("online_crowd")) DISTRIBUTED BY HASH(`bucket`) BUCKETS 64 PROPERTIES ("replication_allocation" = "tag.location.default: 1", "in_memory" = "false", "storage_format" = "V2");

set enable_lateral_view=true;

with d as (select f1.bucket, bitmap_and(f1.members, f2.members) as members from (select f1.bucket, bitmap_and(f1.members, f2.members) as members from (select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=810004 and tag_value_id in (5524627,5524628,5524629) group by bucket) f1,(select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=810007 and tag_value_id in ('5525013_17357124_5525019','5525013_17357124_5525020','5525013_17357124_5525021','5525013_17357124_5525022','5525013_17357124_5525023') group by bucket) f2 where f1.bucket=f2.bucket) f1, (select f1.bucket, bitmap_and(f1.members, f2.members) as members from (select f1.bucket, bitmap_and(f1.members, f2.members) as members from (select f1.bucket, bitmap_and(f1.members, f2.members) as members from (select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=660004 and tag_value_id in (1392235) group by bucket) f1,(select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=630004 and tag_value_id in (5404632) group by bucket) f2 where f1.bucket=f2.bucket) f1,(select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=420004 and tag_value_id in (5404628) group by bucket) f2 where f1.bucket=f2.bucket) f1,(select bucket, bitmap_union(members) as members from tag_map where partition_sign='2022-03-31-1' and tag_group=240004 and tag_value_id in (14622211) group by bucket) f2 where f1.bucket=f2.bucket) f2 where f1.bucket=f2.bucket) select bucket, member_id from d lateral view explode_bitmap(members) t as member_id;

DROP TABLE tag_map;