From 9a0a08b9d2d2996223d0c3cd67f8cc8d2da67bf3 Mon Sep 17 00:00:00 2001
From: slfan1989
Date: Wed, 26 Jul 2023 17:40:36 +0800
Subject: [PATCH 1/2] Fix Some Typos.

---
 docs/querying/datasourcemetadataquery.md | 2 +-
 docs/querying/multitenancy.md            | 2 +-
 docs/querying/querying.md                | 2 +-
 docs/querying/searchquery.md             | 2 +-
 docs/querying/sorting-orders.md          | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/querying/datasourcemetadataquery.md b/docs/querying/datasourcemetadataquery.md
index b076671bd534..bdc7128ac898 100644
--- a/docs/querying/datasourcemetadataquery.md
+++ b/docs/querying/datasourcemetadataquery.md
@@ -29,7 +29,7 @@ sidebar_label: "DatasourceMetadata"
 
 Data Source Metadata queries return metadata information for a dataSource. These queries return information about:
 
-* The timestamp of latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
+* The timestamp of the latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
 
 The grammar for these queries is:
 
diff --git a/docs/querying/multitenancy.md b/docs/querying/multitenancy.md
index 6fc484c24198..3619298291f6 100644
--- a/docs/querying/multitenancy.md
+++ b/docs/querying/multitenancy.md
@@ -75,7 +75,7 @@ stored on this tier.
 
 ## Supporting high query concurrency
 
-Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.
+Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.
 
 Druid internally stores requests to scan segments in a priority queue. If a given query requires scanning more
 segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently
diff --git a/docs/querying/querying.md b/docs/querying/querying.md
index e957e7a527df..fe15c9a2bb20 100644
--- a/docs/querying/querying.md
+++ b/docs/querying/querying.md
@@ -57,7 +57,7 @@ are designed to be lightweight and complete very quickly. This means that for mo
 more complex visualizations, multiple Druid queries may be required.
 
 Even though queries are typically made to Brokers or Routers, they can also be accepted by
-[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md)) that are running
+[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md) that are running
 stream ingestion tasks. This may be valuable if you want to query results for specific segments that are served
 by specific processes.
 
diff --git a/docs/querying/searchquery.md b/docs/querying/searchquery.md
index 3ee13e78b140..113e1fba9458 100644
--- a/docs/querying/searchquery.md
+++ b/docs/querying/searchquery.md
@@ -159,7 +159,7 @@ If any part of a dimension value contains the value specified in this search que
 
 ### `fragment`
 
-If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
+If any part of a dimension value contains all the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
 
 ```json
 {
diff --git a/docs/querying/sorting-orders.md b/docs/querying/sorting-orders.md
index 2c420b173693..e3de3e1d0ab0 100644
--- a/docs/querying/sorting-orders.md
+++ b/docs/querying/sorting-orders.md
@@ -30,7 +30,7 @@ title: "String comparators"
 These sorting orders are used by the [TopNMetricSpec](./topnmetricspec.md), [SearchQuery](./searchquery.md), GroupByQuery's [LimitSpec](./limitspec.md), and [BoundFilter](./filters.md#bound-filter).
 
 ## Lexicographic
-Sorts values by converting Strings to their UTF-8 byte array representations and comparing lexicographically, byte-by-byte.
+Sort values by converting Strings to their UTF-8 byte array representations and comparing lexicographically, byte-by-byte.
 
 ## Alphanumeric
 Suitable for strings with both numeric and non-numeric content, e.g.: "file12 sorts after file2"

From 9124427c2907db60588c017120abaf5ccdc218b6 Mon Sep 17 00:00:00 2001
From: slfan1989
Date: Wed, 26 Jul 2023 19:14:04 +0800
Subject: [PATCH 2/2] Fix Some Typos.

---
 docs/querying/sorting-orders.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/querying/sorting-orders.md b/docs/querying/sorting-orders.md
index e3de3e1d0ab0..2c420b173693 100644
--- a/docs/querying/sorting-orders.md
+++ b/docs/querying/sorting-orders.md
@@ -30,7 +30,7 @@ title: "String comparators"
 These sorting orders are used by the [TopNMetricSpec](./topnmetricspec.md), [SearchQuery](./searchquery.md), GroupByQuery's [LimitSpec](./limitspec.md), and [BoundFilter](./filters.md#bound-filter).
 
 ## Lexicographic
-Sort values by converting Strings to their UTF-8 byte array representations and comparing lexicographically, byte-by-byte.
+Sorts values by converting Strings to their UTF-8 byte array representations and comparing lexicographically, byte-by-byte.
 
 ## Alphanumeric
 Suitable for strings with both numeric and non-numeric content, e.g.: "file12 sorts after file2"
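Reviewer note: the line both commits touch describes Druid's lexicographic comparator (byte-by-byte comparison of UTF-8 representations), next to the alphanumeric comparator whose example is "file12 sorts after file2". The difference can be sketched as below; this is an illustrative Python sketch of the documented semantics only, not Druid's actual Java implementation, and `lexicographic_key`/`alphanumeric_key` are hypothetical helper names.

```python
import re

def lexicographic_key(s: str) -> bytes:
    # Lexicographic: compare strings by their UTF-8 byte sequences, byte-by-byte.
    return s.encode("utf-8")

def alphanumeric_key(s: str) -> list:
    # Alphanumeric: split into digit runs (compared as numbers) and text runs.
    return [int(tok) if tok.isdigit() else tok
            for tok in re.split(r"(\d+)", s) if tok]

names = ["file12", "file2"]
print(sorted(names, key=lexicographic_key))  # ['file12', 'file2']
print(sorted(names, key=alphanumeric_key))   # ['file2', 'file12']
```

Byte-wise, `file12` precedes `file2` because the byte for `1` (0x31) is smaller than the byte for `2` (0x32); the alphanumeric comparator instead compares the digit runs 12 and 2 numerically, giving the "file12 sorts after file2" ordering the docs describe.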