@@ -94,6 +94,7 @@ Syntax:
* BITMAP_UNION: Only for BITMAP type
Allow NULL: Default is NOT NULL. NULL values should be represented as `\N` in the load source file.
Notice:

The original values of a BITMAP_UNION column should be TINYINT, SMALLINT, INT, or BIGINT.
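
As a sketch of the notice above, a minimal aggregate-model table with a BITMAP_UNION column (database, table, and column names are illustrative):

```
CREATE TABLE example_db.page_uv
(
    dt DATE,
    page_id INT,
    -- values loaded into this column must originate from TINYINT, SMALLINT, INT, or BIGINT
    user_id_bitmap BITMAP BITMAP_UNION NOT NULL
)
AGGREGATE KEY(dt, page_id)
DISTRIBUTED BY HASH(page_id) BUCKETS 8;
```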
2. index_definition
Syntax:
@@ -133,14 +134,14 @@ Syntax:
"line_delimiter" = "value_delimiter"
)
```

```
BROKER PROPERTIES(
"username" = "name",
"password" = "password"
)
```

The broker properties differ depending on the broker.
Notice:
File names in "path" are separated by ",". If a file name includes ",", use "%2c" instead. If a file name includes "%", use "%25" instead.
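
For instance, under the escaping rule above, a hypothetical file literally named `part,1%.csv` would appear in "path" as follows (host, port, and directory are illustrative):

```
"path" = "hdfs://host:port/dir/part%2c1%25.csv"
```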
@@ -220,7 +221,7 @@ Syntax:
["replication_num" = "3"]
)
```

storage_medium: SSD or HDD. The default initial storage medium can be specified by `default_storage_medium=XXX` in the FE configuration file `fe.conf`; if it is not set, HDD is used by default.
Note: when the FE configuration `enable_strict_storage_medium_check` is `True` and the corresponding storage medium is not available in the cluster, the create table statement fails with 'Failed to find enough host in all backends with storage medium is SSD|HDD'.
storage_cooldown_time: If storage_medium is SSD, data will be automatically moved to HDD when the cooldown time expires.
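
As a sketch, the FE default mentioned above would be set in `fe.conf` like this (the value is illustrative):

```
default_storage_medium = SSD
```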
@@ -246,9 +247,9 @@ Syntax:
"colocate_with"="table1"
)
```

4) If you want to use the dynamic partitioning feature, specify it in properties

```
PROPERTIES (
"dynamic_partition.enable" = "true|false",
@@ -268,6 +269,7 @@ Syntax:
dynamic_partition.prefix: specifies the prefix of the partition names to be created; for example, with prefix p, partitions are automatically created with names like p20200108

dynamic_partition.buckets: specifies the number of buckets for automatically created partitions
```
8. rollup_index
Syntax:
```
@@ -320,6 +322,7 @@ Syntax:
"storage_medium" = "SSD",
"storage_cooldown_time" = "2015-06-04 00:00:00"
);
```

3. Create an olap table, with range partitioned, distributed by hash.

@@ -347,16 +350,16 @@ Syntax:
"storage_medium" = "SSD", "storage_cooldown_time" = "2015-06-04 00:00:00"
);
```

Explain:
This statement will create 3 partitions:

```
( { MIN }, {"2014-01-01"} )
[ {"2014-01-01"}, {"2014-06-01"} )
[ {"2014-06-01"}, {"2014-12-01"} )
```

Data outside these ranges will not be loaded.

2) Fixed Range
@@ -381,8 +384,8 @@ Syntax:
);

4. Create a mysql table

4.1 Create MySQL table directly from external table information
```
CREATE EXTERNAL TABLE example_db.table_mysql
(
k1 DATE,
@@ -400,8 +403,38 @@ Syntax:
"password" = "mysql_passwd",
"database" = "mysql_db_test",
"table" = "mysql_table_test"
);
```

4.2 Create MySQL table with external ODBC catalog resource
```
CREATE EXTERNAL RESOURCE "mysql_resource"
PROPERTIES
(
"type" = "odbc_catalog",
"user" = "mysql_user",
"password" = "mysql_passwd",
"host" = "127.0.0.1",
"port" = "8239"
);
```
```
CREATE EXTERNAL TABLE example_db.table_mysql
(
k1 DATE,
k2 INT,
k3 SMALLINT,
k4 VARCHAR(2048),
k5 DATETIME
)
ENGINE=mysql
PROPERTIES
(
"odbc_catalog_resource" = "mysql_resource",
"database" = "mysql_db_test",
"table" = "mysql_table_test"
)
```

5. Create a broker table, with files on HDFS, columns separated by "|", lines delimited by "\n"

@@ -549,7 +582,7 @@ Syntax:
"dynamic_partition.prefix" = "p",
"dynamic_partition.buckets" = "32"
);
```
12. Create a table with rollup index
```
CREATE TABLE example_db.rolup_index_table
@@ -151,15 +151,15 @@ under the License.
Notice:
If "path" contains multiple files, separate them with commas [,]. If a file name contains a comma, use %2c instead. If a file name contains %, use %25 instead.
Currently the file content format supports CSV, and the GZ, BZ2, LZ4, LZO (LZOP) compression formats are supported.

3) For hive, the following information needs to be provided in properties:
```
PROPERTIES (
"database" = "hive_db_name",
"table" = "hive_table_name",
"hive.metastore.uris" = "thrift://127.0.0.1:9083"
)

```
Where database is the name of the database the hive table belongs to, table is the name of the hive table, and hive.metastore.uris is the address of the hive metastore service.
Note: currently hive external tables are only used by Spark Load and do not support queries.
@@ -193,7 +193,7 @@ under the License.
...
)
```

Explain:
Partitions use the specified key columns and the specified value ranges.
1) Partition names must start with a letter and may contain only letters, digits, and underscores
@@ -202,7 +202,7 @@ under the License.
3) Partitions are left-closed, right-open intervals; the left boundary of the first partition is the minimum value
4) NULL values are stored only in the partition containing the minimum value. After the partition containing the minimum value is deleted, NULL values can no longer be loaded.
5) One or more columns can be specified as partition columns. If a partition value is omitted, the minimum value is filled in by default.

Notice:
1) Partitions are generally used for managing data along the time dimension
2) If data backtracking is needed, consider making the first partition an empty partition so that partitions can be added later
@@ -270,9 +270,9 @@ under the License.
"colocate_with"="table1"
)
```

4) If you want to use the dynamic partitioning feature, specify it in properties

```
PROPERTIES (
"dynamic_partition.enable" = "true|false",
@@ -288,15 +288,15 @@ under the License.
dynamic_partition.end: specifies the number of partitions to create in advance. The value must be greater than 0.
dynamic_partition.prefix: specifies the prefix of the partition names to be created; for example, with prefix p, partitions are automatically created with names like p20200108
dynamic_partition.buckets: specifies the number of buckets for automatically created partitions

5) Multiple Rollups can be created in batch when creating a table
Syntax:
```
ROLLUP (rollup_name (column_name1, column_name2, ...)
[FROM from_index_name]
[PROPERTIES ("key"="value", ...)],...)
```

6) If you want to use the in-memory table feature, specify it in properties

```
@@ -419,6 +419,7 @@ under the License.

4. Create a mysql table

4.1 Create a MySQL table directly from external table information
```
CREATE EXTERNAL TABLE example_db.table_mysql
(
@@ -440,6 +441,36 @@ under the License.
)
```

4.2 Create a MySQL table with an external ODBC catalog resource
```
CREATE EXTERNAL RESOURCE "mysql_resource"
PROPERTIES
(
"type" = "odbc_catalog",
"user" = "mysql_user",
"password" = "mysql_passwd",
"host" = "127.0.0.1",
"port" = "8239"
);
```
```
CREATE EXTERNAL TABLE example_db.table_mysql
(
k1 DATE,
k2 INT,
k3 SMALLINT,
k4 VARCHAR(2048),
k5 DATETIME
)
ENGINE=mysql
PROPERTIES
(
"odbc_catalog_resource" = "mysql_resource",
"database" = "mysql_db_test",
"table" = "mysql_table_test"
)
```

5. Create a broker external table with data files stored on HDFS, columns separated by "|", lines delimited by "\n"

```
@@ -650,3 +681,5 @@ under the License.
## keyword

CREATE,TABLE

```
@@ -215,7 +215,7 @@ private void analyzeSubPredicate(Expr subExpr) throws AnalysisException {

if (!valid) {
throw new AnalysisException("Where clause should looks like: NAME = \"your_resource_name\","
-                    + " or NAME LIKE \"matcher\", " + " or RESOURCETYPE = \"SPARK\", "
+                    + " or NAME LIKE \"matcher\", " + " or RESOURCETYPE = \"resource_type\", "
+ " or compound predicate with operator AND");
}
}
30 changes: 19 additions & 11 deletions fe/fe-core/src/main/java/org/apache/doris/catalog/Catalog.java
@@ -4059,10 +4059,14 @@ public static void getDdlStmt(Table table, List<String> createTableStmt, List<St
}
// properties
sb.append("\nPROPERTIES (\n");
-            sb.append("\"host\" = \"").append(mysqlTable.getHost()).append("\",\n");
-            sb.append("\"port\" = \"").append(mysqlTable.getPort()).append("\",\n");
-            sb.append("\"user\" = \"").append(mysqlTable.getUserName()).append("\",\n");
-            sb.append("\"password\" = \"").append(hidePassword ? "" : mysqlTable.getPasswd()).append("\",\n");
+            if (mysqlTable.getOdbcCatalogResourceName() == null) {
+                sb.append("\"host\" = \"").append(mysqlTable.getHost()).append("\",\n");
+                sb.append("\"port\" = \"").append(mysqlTable.getPort()).append("\",\n");
+                sb.append("\"user\" = \"").append(mysqlTable.getUserName()).append("\",\n");
+                sb.append("\"password\" = \"").append(hidePassword ? "" : mysqlTable.getPasswd()).append("\",\n");
+            } else {
+                sb.append("\"odbc_catalog_resource\" = \"").append(mysqlTable.getOdbcCatalogResourceName()).append("\",\n");
+            }
sb.append("\"database\" = \"").append(mysqlTable.getMysqlDatabaseName()).append("\",\n");
sb.append("\"table\" = \"").append(mysqlTable.getMysqlTableName()).append("\"\n");
sb.append(")");
@@ -4073,14 +4077,18 @@ public static void getDdlStmt(Table table, List<String> createTableStmt, List<St
}
// properties
sb.append("\nPROPERTIES (\n");
-            sb.append("\"host\" = \"").append(odbcTable.getHost()).append("\",\n");
-            sb.append("\"port\" = \"").append(odbcTable.getPort()).append("\",\n");
-            sb.append("\"user\" = \"").append(odbcTable.getUserName()).append("\",\n");
-            sb.append("\"password\" = \"").append(hidePassword ? "" : odbcTable.getPasswd()).append("\",\n");
+            if (odbcTable.getOdbcCatalogResourceName() == null) {
+                sb.append("\"host\" = \"").append(odbcTable.getHost()).append("\",\n");
+                sb.append("\"port\" = \"").append(odbcTable.getPort()).append("\",\n");
+                sb.append("\"user\" = \"").append(odbcTable.getUserName()).append("\",\n");
+                sb.append("\"password\" = \"").append(hidePassword ? "" : odbcTable.getPasswd()).append("\",\n");
+                sb.append("\"driver\" = \"").append(odbcTable.getOdbcDriver()).append("\",\n");
+                sb.append("\"odbc_type\" = \"").append(odbcTable.getOdbcTableTypeName()).append("\",\n");
+            } else {
+                sb.append("\"odbc_catalog_resource\" = \"").append(odbcTable.getOdbcCatalogResourceName()).append("\",\n");
+            }
             sb.append("\"database\" = \"").append(odbcTable.getOdbcDatabaseName()).append("\",\n");
-            sb.append("\"table\" = \"").append(odbcTable.getOdbcTableName()).append("\",\n");
-            sb.append("\"driver\" = \"").append(odbcTable.getOdbcDriver()).append("\",\n");
-            sb.append("\"type\" = \"").append(odbcTable.getOdbcTableTypeName()).append("\"\n");
+            sb.append("\"table\" = \"").append(odbcTable.getOdbcTableName()).append("\"\n");
sb.append(")");
} else if (table.getType() == TableType.BROKER) {
BrokerTable brokerTable = (BrokerTable) table;