docs/help/Contents/Administration/admin_stmt.md (5 additions, 5 deletions)
@@ -27,7 +27,7 @@
2) heartbeat_port is the heartbeat port of that node
3) Adding and dropping nodes are synchronous operations. These two operations do not take into account the data already on the node; the node is removed from the metadata directly, so use them with caution.
4) The decommission operation is used to take a node offline safely. The operation is asynchronous. If it succeeds, the node is eventually removed from the metadata; if it fails, the decommission is not completed.
-5) A decommission operation can be cancelled manually. See CANCEL ALTER SYSTEM
+5) A decommission operation can be cancelled manually. See CANCEL DECOMMISSION (a workflow sketch follows these notes)
6) Load error hub:
Two types of hub are currently supported: Mysql and Broker. Specify "type" = "mysql" or "type" = "broker" in PROPERTIES.
To remove the current load error hub, set the type to null.
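
For context, a decommission round trip against the FE might look like the following minimal sketch; the MySQL-protocol client invocation, the FE query port 9030, and the BE heartbeat port 9050 are assumptions about a typical deployment, not taken from this diff:

    # Assumed values: fe_host / 9030 for the FE MySQL port, 9050 for the BE heartbeat port.
    mysql -h fe_host -P 9030 -uroot -e 'ALTER SYSTEM DECOMMISSION BACKEND "host1:9050";'
    # If the node should stay after all, cancel the pending decommission:
    mysql -h fe_host -P 9030 -uroot -e 'CANCEL DECOMMISSION BACKEND "host1:9050";'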
@@ -92,20 +92,20 @@
## keyword
ALTER,SYSTEM,BACKEND,BROKER,FREE

-# CANCEL ALTER SYSTEM
+# CANCEL DECOMMISSION
## description

This statement is used to cancel the decommission of a node. (Administrators only!)
Syntax:
-    CANCEL ALTER SYSTEM DECOMMISSION BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];
+    CANCEL DECOMMISSION BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];

## example

1. Cancel the decommission of two nodes:
-    CANCEL ALTER SYSTEM DECOMMISSION BACKEND "host1:port", "host2:port";
+    CANCEL DECOMMISSION BACKEND "host1:port", "host2:port";

## keyword
-CANCEL,ALTER,SYSTEM,BACKEND
+CANCEL,DECOMMISSION,BACKEND

# CREATE CLUSTER
## description
docs/help/Contents/Data Manipulation/streaming.md (5 additions, 5 deletions)
@@ -65,19 +65,19 @@
## example

1. Load data from the local file 'testData' into table 'testTbl' in database 'testDb', using a Label for deduplication
-    curl --location-trusted -u root -H "lable:123" -T testData http://host:port/api/testDb/testTbl/_stream_load
+    curl --location-trusted -u root -H "label:123" -T testData http://host:port/api/testDb/testTbl/_stream_load

2. Load data from the local file 'testData' into table 'testTbl' in database 'testDb', using a Label for deduplication, and only load rows where k1 equals 20180601
-    curl --location-trusted -u root -H "lable:123" -H "where: k1=20180601" -T testData http://host:port/api/testDb/testTbl/_stream_load
+    curl --location-trusted -u root -H "label:123" -H "where: k1=20180601" -T testData http://host:port/api/testDb/testTbl/_stream_load

3. Load data from the local file 'testData' into table 'testTbl' in database 'testDb', allowing a 20% error rate (the user is in default_cluster)
-    curl --location-trusted -u root -H "lable:123" -H "max_filter_ratio:0.2" -T testData http://host:port/api/testDb/testTbl/_stream_load
+    curl --location-trusted -u root -H "label:123" -H "max_filter_ratio:0.2" -T testData http://host:port/api/testDb/testTbl/_stream_load

4. Load data from the local file 'testData' into table 'testTbl' in database 'testDb', allowing a 20% error rate and specifying the column names of the file (the user is in default_cluster)
-    curl --location-trusted -u root -H "lable:123" -H "max_filter_ratio:0.2" -H "columns: k2, k1, v1" -T testData http://host:port/api/testDb/testTbl/_stream_load
+    curl --location-trusted -u root -H "label:123" -H "max_filter_ratio:0.2" -H "columns: k2, k1, v1" -T testData http://host:port/api/testDb/testTbl/_stream_load

5. Load data from the local file 'testData' into partitions p1 and p2 of table 'testTbl' in database 'testDb', allowing a 20% error rate.
-    curl --location-trusted -u root -H "lable:123" -H "max_filter_ratio:0.2" -H "partitions: p1, p2" -T testData http://host:port/api/testDb/testTbl/_stream_load
+    curl --location-trusted -u root -H "label:123" -H "max_filter_ratio:0.2" -H "partitions: p1, p2" -T testData http://host:port/api/testDb/testTbl/_stream_load

6. Load using streaming (the user is in default_cluster); a combined sketch follows these examples
seq 1 10 | awk '{OFS="\t"}{print $1, $1 * 10}' | curl --location-trusted -u root -T - http://host:port/api/testDb/testTbl/_stream_load
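
Putting the headers from the examples above together, a labelled, filtered stream load against the same endpoint could look like this minimal sketch; the label value, column names, host:port, and credentials are placeholders, not taken from the diff:

    # Generate two tab-separated columns and stream them in with a label,
    # a 20% error tolerance, and an explicit column mapping (all headers shown above).
    seq 1 10 | awk '{OFS="\t"}{print $1, $1 * 10}' | \
        curl --location-trusted -u root \
             -H "label:example_load_001" \
             -H "max_filter_ratio:0.2" \
             -H "columns: k1, v1" \
             -T - http://host:port/api/testDb/testTbl/_stream_load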