8 changes: 6 additions & 2 deletions docs/_redirects.json
Original file line number Diff line number Diff line change
@@ -59,9 +59,9 @@
{"source": "Post-aggregations.html", "target": "querying/post-aggregations.html"},
{"source": "Query-Context.html", "target": "querying/query-context.html"},
{"source": "Querying.html", "target": "querying/querying.html"},
{"source": "Realtime-Config.html", "target": "configuration/realtime.html"},
{"source": "Realtime-Config.html", "target": "ingestion/standalone-realtime.html"},
{"source": "Realtime.html", "target": "ingestion/standalone-realtime.html"},
{"source": "Realtime-ingestion.html", "target": "ingestion/stream-ingestion.html"},
{"source": "Realtime.html", "target": "design/realtime.html"},
{"source": "Recommendations.html", "target": "operations/recommendations.html"},
{"source": "Rolling-Updates.html", "target": "operations/rolling-updates.html"},
{"source": "Router.html", "target": "development/router.html"},
@@ -167,4 +167,8 @@
{"source": "development/extensions-core/namespaced-lookup.html", "target": "lookups-cached-global.html"},
{"source": "operations/performance-faq.html", "target": "../operations/basic-cluster-tuning.html"},
{"source": "development/extensions-contrib/orc.html", "target": "../extensions-core/orc.html"},
{"source": "operations/performance-faq.html", "target": "../operations/basic-cluster-tuning.html"},
{"source": "configuration/realtime.md", "target": "../ingestion/standalone-realtime.html"},
{"source": "design/realtime.md", "target": "../ingestion/standalone-realtime.html"},
{"source": "ingestion/stream-pull.md", "target": "../ingestion/standalone-realtime.html"}
]
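Each entry in `docs/_redirects.json` maps a legacy documentation URL to its new location. As a rough sketch of how such a table is consulted (the `resolve_redirect` helper is hypothetical, not part of the Druid build; the sample entries are copied from the file above):

```python
# Sketch: resolve a legacy doc path through a redirect table shaped like
# docs/_redirects.json. resolve_redirect is an illustrative helper only.
redirects = [
    {"source": "Realtime-Config.html", "target": "ingestion/standalone-realtime.html"},
    {"source": "Realtime.html", "target": "ingestion/standalone-realtime.html"},
    {"source": "design/realtime.md", "target": "../ingestion/standalone-realtime.html"},
]

def resolve_redirect(source, table):
    """Return the redirect target for a legacy page, or None if no rule matches."""
    for rule in table:
        if rule["source"] == source:
            return rule["target"]
    return None

print(resolve_redirect("Realtime.html", redirects))  # ingestion/standalone-realtime.html
```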
7 changes: 1 addition & 6 deletions docs/content/configuration/index.md
@@ -87,7 +87,6 @@ This page documents all of the configuration properties for each Druid service t
* [Segment Discovery](#segment-discovery)
* [Caching](#cache-configuration)
* [General Query Configuration](#general-query-configuration)
* [Realtime processes (Deprecated)](#realtime-processes)

## Recommended Configuration File Organization

@@ -493,7 +492,7 @@ To use graphite as emitter set `druid.emitter=graphite`. For configuration detai

### Metadata Storage

These properties specify the jdbc connection and other configuration around the metadata storage. The only processes that connect to the metadata storage with these properties are the [Coordinator](../design/coordinator.html), [Overlord](../design/overlord.html) and [Realtime Processes](../design/realtime.html).
These properties specify the JDBC connection and other configuration for the metadata storage. The only processes that connect to the metadata storage with these properties are the [Coordinator](../design/coordinator.html) and [Overlord](../design/overlord.html).

|Property|Description|Default|
|--------|-----------|-------|
@@ -1672,7 +1671,3 @@ Supported query contexts:
|`maxResults`|Can be used to lower the value of `druid.query.groupBy.maxResults` for this query.|None|
|`useOffheap`|Set to true to store aggregations off-heap when merging results.|false|


## Realtime processes

Configuration for the deprecated realtime process can be found [here](../configuration/realtime.html).
98 changes: 0 additions & 98 deletions docs/content/configuration/realtime.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/content/dependencies/zookeeper.md
@@ -27,7 +27,7 @@ title: "ZooKeeper"
Apache Druid (incubating) uses [Apache ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are

1. [Coordinator](../design/coordinator.html) leader election
2. Segment "publishing" protocol from [Historical](../design/historical.html) and [Realtime](../design/realtime.html)
2. Segment "publishing" protocol from [Historical](../design/historical.html)
3. Segment load/drop protocol between [Coordinator](../design/coordinator.html) and [Historical](../design/historical.html)
4. [Overlord](../design/overlord.html) leader election
5. [Overlord](../design/overlord.html) and [MiddleManager](../design/middlemanager.html) task management
@@ -44,7 +44,7 @@ ${druid.zk.paths.coordinatorPath}/_COORDINATOR

The `announcementsPath` and `servedSegmentsPath` are used for this.

All [Historical](../design/historical.html) and [Realtime](../design/realtime.html) processes publish themselves on the `announcementsPath`, specifically, they will create an ephemeral znode at
All [Historical](../design/historical.html) processes publish themselves on the `announcementsPath`, specifically, they will create an ephemeral znode at

```
${druid.zk.paths.announcementsPath}/${druid.host}
```
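The announcement znode path shown above is assembled from Druid's ZooKeeper path properties plus the process's `druid.host`. A minimal sketch of that path construction (plain string handling, no real ZooKeeper client; it assumes the common default where `druid.zk.paths.announcementsPath` is `${druid.zk.paths.base}/announcements`):

```python
# Sketch: build the ephemeral announcement znode path described above.
# A real process would create this znode (ephemeral) via a ZooKeeper client;
# here we only model the path layout, assuming the default announcementsPath.
def announcement_znode(zk_paths_base, druid_host):
    """Return ${druid.zk.paths.announcementsPath}/${druid.host}."""
    announcements_path = zk_paths_base.rstrip("/") + "/announcements"
    return announcements_path + "/" + druid_host

print(announcement_znode("/druid", "historical1.example.com:8083"))
# /druid/announcements/historical1.example.com:8083
```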
80 changes: 0 additions & 80 deletions docs/content/design/realtime.md

This file was deleted.

3 changes: 1 addition & 2 deletions docs/content/development/overview.md
@@ -54,8 +54,7 @@ Most of the coordination logic for (real-time) ingestion is in the Druid indexin
## Real-time Ingestion

Druid loads data through `FirehoseFactory.java` classes. Firehoses often wrap other firehoses, where, similar to the design of the
query runners, each firehose adds a layer of logic. Much of the core management logic is in `RealtimeManager.java` and the
persist and hand-off logic is in `RealtimePlumber.java`.
query runners, each firehose adds a layer of logic, and the persist and hand-off logic is in `RealtimePlumber.java`.
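The layering of firehoses described above is essentially a decorator pattern over an event stream. A small illustrative sketch (class names here are hypothetical, not Druid's actual Java classes, whose real interface is `FirehoseFactory.java`):

```python
# Sketch of the firehose-wrapping pattern: each wrapper adds a layer of logic
# around an inner event source, similar to query-runner layering.
# Class names are illustrative only.
class ListFirehose:
    """A leaf firehose that yields events from an in-memory list."""
    def __init__(self, events):
        self._events = list(events)
    def __iter__(self):
        return iter(self._events)

class TimestampFilterFirehose:
    """Wraps another firehose and drops rows outside an accepted time window."""
    def __init__(self, inner, min_ts, max_ts):
        self._inner, self._min, self._max = inner, min_ts, max_ts
    def __iter__(self):
        return (e for e in self._inner if self._min <= e["timestamp"] <= self._max)

rows = [{"timestamp": 1, "v": "a"}, {"timestamp": 5, "v": "b"}, {"timestamp": 9, "v": "c"}]
filtered = TimestampFilterFirehose(ListFirehose(rows), 2, 9)
print([e["v"] for e in filtered])  # ['b', 'c']
```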

## Hadoop-based Batch Ingestion

6 changes: 2 additions & 4 deletions docs/content/ingestion/firehose.md
@@ -24,7 +24,7 @@ title: "Apache Druid (incubating) Firehoses"

# Apache Druid (incubating) Firehoses

Firehoses are used in [native batch ingestion tasks](../ingestion/native_tasks.html), stream push tasks automatically created by [Tranquility](../ingestion/stream-push.html), and the [stream-pull (deprecated)](../ingestion/stream-pull.html) ingestion model.
Firehoses are used in [native batch ingestion tasks](../ingestion/native_tasks.html) and in stream push tasks automatically created by [Tranquility](../ingestion/stream-push.html).

They are pluggable and thus the configuration schema can and will vary based on the `type` of the firehose.

@@ -204,9 +204,7 @@ This can be used to merge data from more than one firehose.

### Streaming Firehoses

The firehoses shown below should only be used with the [stream-pull (deprecated)](../ingestion/stream-pull.html) ingestion model, as they are not suitable for batch ingestion.

The EventReceiverFirehose is also used in tasks automatically generated by [Tranquility stream push](../ingestion/stream-push.html).
The EventReceiverFirehose is used in tasks automatically generated by [Tranquility stream push](../ingestion/stream-push.html). These firehoses are not suitable for batch ingestion.

#### EventReceiverFirehose

2 changes: 0 additions & 2 deletions docs/content/ingestion/ingestion-spec.md
@@ -310,7 +310,6 @@ The IOConfig spec differs based on the ingestion task type.
* Hadoop Batch ingestion: See [Hadoop Batch IOConfig](../ingestion/hadoop.html#ioconfig)
* Kafka Indexing Service: See [Kafka Supervisor IOConfig](../development/extensions-core/kafka-ingestion.html#KafkaSupervisorIOConfig)
* Stream Push Ingestion: Stream push ingestion with Tranquility does not require an IO Config.
* Stream Pull Ingestion (Deprecated): See [Stream pull ingestion](../ingestion/stream-pull.html#ioconfig).

# Tuning Config

@@ -320,7 +319,6 @@ The TuningConfig spec differs based on the ingestion task type.
* Hadoop Batch ingestion: See [Hadoop Batch TuningConfig](../ingestion/hadoop.html#tuningconfig)
* Kafka Indexing Service: See [Kafka Supervisor TuningConfig](../development/extensions-core/kafka-ingestion.html#KafkaSupervisorTuningConfig)
* Stream Push Ingestion (Tranquility): See [Tranquility TuningConfig](http://static.druid.io/tranquility/api/latest/#com.metamx.tranquility.druid.DruidTuning).
* Stream Pull Ingestion (Deprecated): See [Stream pull ingestion](../ingestion/stream-pull.html#tuningconfig).

# Evaluating Timestamp, Dimensions and Metrics

43 changes: 43 additions & 0 deletions docs/content/ingestion/standalone-realtime.md
@@ -0,0 +1,43 @@
---
layout: doc_page
title: "Realtime Process"
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

# Realtime Process

Older versions of Apache Druid (incubating) supported a standalone 'Realtime' process that performed 'stream pull'
real-time ingestion and served queries over the data it ingested. These processes periodically built segments from
the data they had collected over some span of time and then handed those segments off to
[Historical](../design/historical.html) servers.

These processes could be invoked by:

```
org.apache.druid.cli.Main server realtime
```

This model of stream pull ingestion was deprecated for a number of operational and architectural reasons, and
removed completely in Druid 0.16.0. Operationally, realtime processes were difficult to configure, deploy, and
scale because each process required a unique configuration. Architecturally, the design of the stream pull
ingestion system also suffered from limitations that made exactly-once ingestion impossible.

Please consider using the [Kafka Indexing Service](../development/extensions-core/kafka-ingestion.html) or the
[Kinesis Indexing Service](../development/extensions-core/kinesis-ingestion.html) for stream pull ingestion instead.