@@ -28,7 +28,7 @@ under the License.

Meta Info Action is used to obtain metadata information in the cluster. Such as database list, table structure, etc.

## List Datbase
## List Database

### Request

@@ -38,7 +38,7 @@ Used to obtain the table structure information of the specified table. This inte

* `<db>`

Sepcify database
Specify database

* `<table>`

2 changes: 1 addition & 1 deletion docs/en/administrator-guide/http-actions/profile-action.md
@@ -1,6 +1,6 @@
---
{
"title": "RPOFILE",
"title": "PROFILE",
"language": "en"
}
---
@@ -151,7 +151,7 @@ The detailed syntax for creating a routine load task can be connected to Doris a

* data\_source\_properties

The specific Kakfa partition can be specified in `data_source_properties`. If not specified, all partitions of the subscribed topic are consumed by default.
The specific Kafka partition can be specified in `data_source_properties`. If not specified, all partitions of the subscribed topic are consumed by default.

Note that when partition is explicitly specified, the load job will no longer dynamically detect changes to Kafka partition. If not specified, the partitions that need to be consumed are dynamically adjusted based on changes in the kafka partition.
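A minimal sketch of pinning specific Kafka partitions through `data_source_properties`, assuming illustrative database, table, topic, and broker names:

```
CREATE ROUTINE LOAD example_db.example_job ON example_tbl
PROPERTIES
(
    "desired_concurrent_number" = "1"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "example_topic",
    -- Explicitly pin partitions 0-3; omit kafka_partitions (and kafka_offsets)
    -- to consume all partitions of the topic and follow partition changes dynamically.
    "kafka_partitions" = "0,1,2,3",
    "kafka_offsets" = "OFFSET_BEGINNING,OFFSET_BEGINNING,OFFSET_BEGINNING,OFFSET_BEGINNING"
);
```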

@@ -307,6 +307,6 @@ Cluster situation: The concurrency of Stream load is not affected by cluster siz

Since Stream load is an HTTP protocol submission creation import task, HTTP Clients in various languages usually have their own request retry logic. After receiving the first request, the Doris system has started to operate Stream load, but because the result is not returned to the Client side in time, the Client side will retry to create the request. At this point, the Doris system is already operating on the first request, so the second request will be reported to Label Already Exists.

To sort out the possible methods mentioned above: Search FE Master's log with Label to see if there are two ``redirect load action to destination = ``redirect load action to destination'cases in the same Label. If so, the request is submitted repeatedly by the Client side.
To sort out the possible methods mentioned above: Search FE Master's log with Label to see if there are two ``redirect load action to destination = ``redirect load action to destination cases in the same Label. If so, the request is submitted repeatedly by the Client side.

It is suggested that the user calculate the approximate import time according to the data quantity of the current request, and change the request time-out time of the Client end according to the import time-out time, so as to avoid the request being submitted by the Client end many times.
2 changes: 1 addition & 1 deletion docs/en/administrator-guide/materialized_view.md
@@ -334,7 +334,7 @@ MySQL [test]> desc advertiser_view_record;

In Doris, the result of `count(distinct)` aggregation is exactly the same as the result of `bitmap_union_count` aggregation. And `bitmap_union_count` is equal to the result of `bitmap_union` to calculate count, so if the query ** involves `count(distinct)`, you can speed up the query by creating a materialized view with `bitmap_union` aggregation.**

For this case, you can create a materialized view that accurately deduplicates `user_id` based on advertising and channel grouping.
For this case, you can create a materialized view that accurately deduplicate `user_id` based on advertising and channel grouping.

```
MySQL [test]> create materialized view advertiser_uv as select advertiser, channel, bitmap_union(to_bitmap(user_id)) from advertiser_view_record group by advertiser, channel;
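-- Illustrative follow-up, reusing the table and columns above: a query of this shape
-- can then be answered from the advertiser_uv materialized view instead of the base table.
MySQL [test]> select advertiser, channel, count(distinct user_id) from advertiser_view_record group by advertiser, channel;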
@@ -32,7 +32,7 @@ For the time being, read the [Doris metadata design document](../../internal/met

## Important tips

* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion. java'file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../installing/upgrade_EN.md).
* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../installing/upgrade_EN.md).

## Metadata catalog structure

14 changes: 7 additions & 7 deletions docs/en/administrator-guide/operation/monitor-alert.md
@@ -28,7 +28,7 @@ under the License.

This document mainly introduces Doris's monitoring items and how to collect and display them. And how to configure alarm (TODO)

[Dashborad template click download](https://grafana.com/dashboards/9734/revisions)
[Dashboard template click download](https://grafana.com/dashboards/9734/revisions)

> Note: Before 0.9.0 (excluding), please use revision 1. For version 0.9.x, use revision 2. For version 0.10.x, use revision 3.

@@ -102,7 +102,7 @@ Users will see the following monitoring item results (for example, FE partial mo
...
```

This is a monitoring data presented in [Promethus Format] (https://prometheus.io/docs/practices/naming/). We take one of these monitoring items as an example to illustrate:
This is a monitoring data presented in [Prometheus Format] (https://prometheus.io/docs/practices/naming/). We take one of these monitoring items as an example to illustrate:

```
# HELP jvm_heap_size_bytes jvm heap stat
@@ -133,9 +133,9 @@ Please start building the monitoring system after you have completed the deploym

Prometheus

1. Download the latest version of Proetheus on the [Prometheus Website] (https://prometheus.io/download/). Here we take version 2.3.2-linux-amd64 as an example.
1. Download the latest version of Prometheus on the [Prometheus Website] (https://prometheus.io/download/). Here we take version 2.3.2-linux-amd64 as an example.
2. Unzip the downloaded tar file on the machine that is ready to run the monitoring service.
3. Open the configuration file promethues.yml. Here we provide an example configuration and explain it (the configuration file is in YML format, pay attention to uniform indentation and spaces):
3. Open the configuration file prometheus.yml. Here we provide an example configuration and explain it (the configuration file is in YML format, pay attention to uniform indentation and spaces):

Here we use the simplest way of static files to monitor configuration. Prometheus supports a variety of [service discovery] (https://prometheus.io/docs/prometheus/latest/configuration/configuration/), which can dynamically sense the addition and deletion of nodes.

@@ -180,9 +180,9 @@ Prometheus

```

4. start Promethues
4. start Prometheus

Start Promethues with the following command:
Start Prometheus with the following command:

`nohup ./prometheus --web.listen-address="0.0.0.0:8181" &`

@@ -241,7 +241,7 @@ Prometheus

7. Configure Grafana

For the first landing, you need to set up the data source according to the prompt. Our data source here is Proetheus, which was configured in the previous step.
For the first landing, you need to set up the data source according to the prompt. Our data source here is Prometheus, which was configured in the previous step.

The Setting page of the data source configuration is described as follows:

10 changes: 5 additions & 5 deletions docs/en/administrator-guide/operation/multi-tenant.md
@@ -1,6 +1,6 @@
---
{
"title": "Multi-tenancy(Exprimental)",
"title": "Multi-tenancy(Experimental)",
"language": "en"
}
---
@@ -24,7 +24,7 @@ specific language governing permissions and limitations
under the License.
-->

# Multi-tenancy(Exprimental)
# Multi-tenancy(Experimental)

This function is experimental and is not recommended for use in production environment.

@@ -179,7 +179,7 @@ The concrete structure is as follows:

Supports selecting multiple instances on the same machine. The general principle of selecting instance is to select be on different machines as much as possible and to make the number of be used on all machines as uniform as possible.

For use, each user and DB belongs to a cluster (except root). To create user and db, you first need to enter a cluster. When a cluster is created, the system defaults to the manager of the cluster, the superuser account. Supuser has the right to create db, user, and view the number of be nodes in the cluster to which it belongs. All non-root user logins must specify a cluster, namely `user_name@cluster_name`.
For use, each user and DB belongs to a cluster (except root). To create user and db, you first need to enter a cluster. When a cluster is created, the system defaults to the manager of the cluster, the superuser account. Superuser has the right to create db, user, and view the number of be nodes in the cluster to which it belongs. All non-root user logins must specify a cluster, namely `user_name@cluster_name`.

Only root users can view all clusters in the system through `SHOW CLUSTER', and can enter different clusters through @ different cluster names. User clusters are invisible except root.

@@ -191,11 +191,11 @@ The concrete structure is as follows:

The process of cluster expansion is the same as that of cluster creation. BE instance on hosts that are not outside the cluster is preferred. The selected principles are the same as creating clusters.

5. 集群缩容、CLUSTER DECOMMISSION
5. Cluster and Shrinkage CLUSTER DECOMMISSION

Users can scale clusters by setting instance num of clusters.

Cluster shrinkage takes precedence over downlining instances on hosts with the largest number of BE instances.
Cluster shrinkage takes precedence over Shrinking instances on hosts with the largest number of BE instances.

Users can also directly use `ALTER CLUSTER DECOMMISSION BACKEND` to specify BE for cluster scaling.

@@ -314,7 +314,7 @@ Duplicate status view mainly looks at the status of the duplicate, as well as th

The figure above shows some additional information, including copy size, number of rows, number of versions, where the data path is located.

> Note: The contents of the `State'column shown here do not represent the health status of the replica, but the status of the replica under certain tasks, such as CLONE, SCHEMA CHANGE, ROLLUP, etc.
> Note: The contents of the `State` column shown here do not represent the health status of the replica, but the status of the replica under certain tasks, such as CLONE, SCHEMA CHANGE, ROLLUP, etc.

In addition, users can check the distribution of replicas in a specified table or partition by following commands.
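As a rough sketch, assuming placeholder database, table, and partition names, such commands typically take this form:

```
ADMIN SHOW REPLICA STATUS FROM example_db.example_tbl;
ADMIN SHOW REPLICA DISTRIBUTION FROM example_db.example_tbl PARTITION (p1);
```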

4 changes: 2 additions & 2 deletions docs/en/administrator-guide/resource-management.md
@@ -142,10 +142,10 @@ PROPERTIES

`port`: The port of the external table, required.

`odbc_type`: Indicates the type of external table. Currently, Doris supports `MySQL` and `Oracle`. In the future, it may support more databases. The ODBC exteranl table referring to the resource is required. The old MySQL exteranl table referring to the resource is optional.
`odbc_type`: Indicates the type of external table. Currently, Doris supports `MySQL` and `Oracle`. In the future, it may support more databases. The ODBC external table referring to the resource is required. The old MySQL external table referring to the resource is optional.

`driver`: Indicates the driver dynamic library used by the ODBC external table.
The ODBC exteranl table referring to the resource is required. The old MySQL exteranl table referring to the resource is optional.
The ODBC external table referring to the resource is required. The old MySQL external table referring to the resource is optional.

For the usage of ODBC resource, please refer to [ODBC of Doris](../extending-doris/odbc-of-doris.html)

2 changes: 1 addition & 1 deletion docs/en/administrator-guide/running-profile.md
@@ -26,7 +26,7 @@ under the License.

# Statistics of query execution

This document focuses on introducing the **RuningProfle** which recorded runtime status of Doris in query execution. Using these statistical information, we can understand the execution of frgment to become a expert of Doris's **debugging and tuning**.
This document focuses on introducing the **Running Profile** which recorded runtime status of Doris in query execution. Using these statistical information, we can understand the execution of frgment to become a expert of Doris's **debugging and tuning**.

## Noun Interpretation

4 changes: 2 additions & 2 deletions docs/en/administrator-guide/variables.md
@@ -159,7 +159,7 @@ Note that the comment must start with /*+ and can only follow the SELECT.

* `enable_insert_strict`

Used to set the `strict` mode when loadingdata via INSERT statement. The default is false, which means that the `strict` mode is not turned on. For an introduction to this mode, see [here] (./load-data/insert-into-manual.md).
Used to set the `strict` mode when loading data via INSERT statement. The default is false, which means that the `strict` mode is not turned on. For an introduction to this mode, see [here] (./load-data/insert-into-manual.md).

* `enable_spilling`

@@ -181,7 +181,7 @@ Note that the comment must start with /*+ and can only follow the SELECT.

* `forward_to_master`

The user sets whether to forward some commands to the Master FE node for execution. The default is false, which means no forwarding. There are multiple FE nodes in Doris, one of which is the Master node. Usually users can connect to any FE node for full-featured operation. However, some of detail informationcan only be obtained from the Master FE node.
The user sets whether to forward some commands to the Master FE node for execution. The default is false, which means no forwarding. There are multiple FE nodes in Doris, one of which is the Master node. Usually users can connect to any FE node for full-featured operation. However, some of detail information can only be obtained from the Master FE node.

For example, the `SHOW BACKENDS;` command, if not forwarded to the Master FE node, can only see some basic information such as whether the node is alive, and forwarded to the Master FE to obtain more detailed information including the node startup time and the last heartbeat time.
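A minimal session-level sketch of this behavior:

```
-- Forward eligible commands to the Master FE for the current session,
-- so that SHOW BACKENDS returns the more detailed, Master-side view.
SET forward_to_master = true;
SHOW BACKENDS;
```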

4 changes: 2 additions & 2 deletions docs/en/community/release-process.md
@@ -26,7 +26,7 @@ under the License.

# Publish of Apache Doris

Apache publishing must be at least an IPMC member, a commiter with Apache mailboxes, a role called release manager.
Apache publishing must be at least an IPMC member, a committer with Apache mailboxes, a role called release manager.

The general process of publication is as follows:

@@ -170,7 +170,7 @@ Email address is apache's mailbox.

##### View and Output

The first line shows the name of the public key file (pubring. gpg), the second line shows the public key characteristics (4096 bits, Hash string and generation time), the third line shows the "user ID", and the fourth line shows the private key characteristics.
The first line shows the name of the public key file (pubring.gpg), the second line shows the public key characteristics (4096 bits, Hash string and generation time), the third line shows the "user ID", and the fourth line shows the private key characteristics.

```
$ gpg --list-keys
2 changes: 1 addition & 1 deletion docs/en/community/verify-apache-release.md
@@ -63,7 +63,7 @@ sha512sum --check apache-doris-a.b.c-incubating-src.tar.gz.sha512

## 3. Verify license header

Apache RAT is recommended to verify license headder, which can dowload as following command.
Apache RAT is recommended to verify license header, which can download as following command.

``` shell
wget http://mirrors.tuna.tsinghua.edu.cn/apache/creadur/apache-rat-0.13/apache-rat-0.13-bin.tar.gz
2 changes: 1 addition & 1 deletion docs/en/extending-doris/logstash.md
@@ -46,7 +46,7 @@ You will get logstash-output-doris-{version}.gem file in the same directory
### 3.Plug-in installation
copy logstash-output-doris-{version}.gem to the logstash installation directory

Excuting an order
Executing an order

`./bin/logstash-plugin install logstash-output-doris-{version}.gem`

2 changes: 1 addition & 1 deletion docs/en/extending-doris/plugin-development-manual.md
@@ -33,7 +33,7 @@ Doris plugin framework supports install/uninstall custom plugins at runtime with
For example, the audit plugin worked after a request execution, it can obtain information related to a request (access user, request IP, SQL, etc...) and write the information into the specified table.

Differences from UDF:
* UDF is a function used for data calculation when SQL is executed. Plugin is additional function that is used to extend Doris with customized function, such as support different storage engines and different import ways, and plugin does't participate in data calculation when executing SQL.
* UDF is a function used for data calculation when SQL is executed. Plugin is additional function that is used to extend Doris with customized function, such as support different storage engines and different import ways, and plugin doesn't participate in data calculation when executing SQL.
* The execution cycle of UDF is limited to a SQL execution. The execution cycle of plugin may be the same as the Doris process.
* The usage scene is different. If you need to support special data algorithms when executing SQL, then UDF is recommended, if you need to run custom functions on Doris, or start a background thread to do tasks, then the use of plugin is recommended.

2 changes: 1 addition & 1 deletion docs/en/extending-doris/udf/user-defined-function.md
@@ -238,7 +238,7 @@ After the compilation is completed, the UDF dynamic link library is successfully

After following the above steps, you can get the UDF dynamic library (that is, the `.so` file in the compilation result). You need to put this dynamic library in a location that can be accessed through the HTTP protocol.

Then log in to the Doris system and create a UDF function in the mysql-client through the `CREATE FUNCTION` syntax. You need to have AMDIN authority to complete this operation. At this time, there will be a UDF created in the Doris system.
Then log in to the Doris system and create a UDF function in the mysql-client through the `CREATE FUNCTION` syntax. You need to have ADMIN authority to complete this operation. At this time, there will be a UDF created in the Doris system.

```
CREATE [AGGREGATE] FUNCTION
@@ -43,4 +43,4 @@ MySQL > select stddev_samp(scan_rows) from log_statis group by datetime;
+--------------------------+
```
## keyword
STDDEVu SAMP,STDDEV,SAMP
STDDEV SAMP,STDDEV,SAMP
@@ -30,7 +30,7 @@ under the License.

`BITMAP TO_BITMAP(expr)`

Convert an unsigned bigint (ranging from 0 to 18446744073709551615) to a bitmap containing that value. Mainly be used to load interger value into bitmap column, e.g.,
Convert an unsigned bigint (ranging from 0 to 18446744073709551615) to a bitmap containing that value. Mainly be used to load integer value into bitmap column, e.g.,

```
cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=to_bitmap(user_id)" http://host:8410/api/test/testDb/_stream_load
@@ -28,7 +28,7 @@ under the License.
## Description
### Syntax

'VARCHAR ST'u AsText (GEOMETRY geo)'
'VARCHAR ST_AsText (GEOMETRY geo)'


Converting a geometric figure into a WKT (Well Known Text) representation
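A brief usage sketch; the point value is illustrative and the exact output formatting may differ slightly:

```
SELECT ST_AsText(ST_Point(24.7, 56.7));
-- returns something like: POINT (24.7 56.7)
```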
@@ -28,7 +28,7 @@ under the License.
## Description
### Syntax

'GEOMETRY ST'u GeometryFromText (VARCHAR wkt)'
'GEOMETRY ST_GeometryFromText (VARCHAR wkt)'


Converting a WKT (Well Known Text) into a corresponding memory geometry
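A brief usage sketch, assuming a simple WKT line string; exact output formatting may vary:

```
SELECT ST_AsText(ST_GeometryFromText("LINESTRING (1 1, 2 2)"));
-- returns something like: LINESTRING (1 1, 2 2)
```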
@@ -28,7 +28,7 @@ under the License.
## Description
### Syntax

'GEOMETRY ST'u Polygon (VARCHAR wkt)'
'GEOMETRY ST_Polygon (VARCHAR wkt)'


Converting a WKT (Well Known Text) into a corresponding polygon memory form
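A brief usage sketch, assuming a simple WKT polygon; exact output formatting may vary:

```
SELECT ST_AsText(ST_Polygon("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))"));
-- returns something like: POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))
```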
@@ -28,7 +28,7 @@ under the License.
## Description
### Syntax

'INT LOCATION (WARCHAR substrate, WARCHAR str [, INT pos]]'
'INT LOCATION (VARCHAR substrate, VARCHAR str [, INT pos]]'


Returns where substr appears in str (counting from 1). If the third parameter POS is specified, the position where substr appears is found from the string where STR starts with POS subscript. If not found, return 0
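A usage sketch following the MySQL-compatible semantics described above, with illustrative arguments:

```
SELECT LOCATE('bar', 'foobarbar');
-- 4
SELECT LOCATE('bar', 'foobarbar', 5);
-- 7
```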
@@ -28,7 +28,7 @@ under the License.
## Description
### Syntax

'INT lower (WARCHAR str)'
'INT lower (VARCHAR str)'


Convert all strings in parameters to lowercase
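A usage sketch with an illustrative argument:

```
SELECT lower("AbC123");
-- abc123
```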