Merged
23 changes: 14 additions & 9 deletions docs/content/querying/sql.md
@@ -332,18 +332,25 @@ of configuration.
### JSON over HTTP

You can make Druid SQL queries using JSON over HTTP by posting to the endpoint `/druid/v2/sql/`. The request should
be a JSON object with a "query" field, like `{"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'"}`. You can
use _curl_ to send these queries from the command-line:
be a JSON object with a "query" field, like `{"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'"}`.

Results are available in two formats: "object" (the default; a JSON array of JSON objects), and "array" (a JSON array
of JSON arrays). In "object" form, each row's field names will match the column names from your SQL query. In "array"
form, each row's values are returned in the order specified in your SQL query.

You can use _curl_ to send SQL queries from the command-line:

```bash
$ cat query.json
{"query":"SELECT COUNT(*) FROM data_source"}
{"query":"SELECT COUNT(*) AS TheCount FROM data_source"}

$ curl -XPOST -H'Content-Type: application/json' http://BROKER:8082/druid/v2/sql/ -d @query.json
[{"EXPR$0":24433}]
[{"TheCount":24433}]
```
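For example, a request body that sets the new "resultFormat" parameter to "array" might look like this (the count shown is the same illustrative value as above):

```json
{"query":"SELECT COUNT(*) AS TheCount FROM data_source","resultFormat":"array"}
```

With "resultFormat" set to "array", the same query returns `[[24433]]` rather than `[{"TheCount":24433}]`.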

You can also provide [connection context parameters](#connection-context) by adding a "context" map, like:
Metadata is available over the HTTP API by querying the ["INFORMATION_SCHEMA" tables](#retrieving-metadata).

Finally, you can also provide [connection context parameters](#connection-context) by adding a "context" map, like:

```json
{
@@ -354,8 +361,6 @@ You can also provide [connection context parameters](#connection-context) by add
}
```

Metadata is available over the HTTP API by querying the ["INFORMATION_SCHEMA" tables](#retrieving-metadata).

### JDBC

You can make Druid SQL queries using the [Avatica JDBC driver](https://calcite.apache.org/avatica/downloads/). Once
@@ -404,15 +409,15 @@ Druid SQL supports setting connection parameters on the client. The parameters i
All other context parameters you provide will be attached to Druid queries and can affect how they run. See
[Query context](query-context.html) for details on the possible options.

Connection context can be specified as JDBC connection properties or as a "context" object in the JSON API.

|Parameter|Description|Default value|
|---------|-----------|-------------|
|`sqlTimeZone`|Sets the time zone for this connection, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
|`useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|druid.sql.planner.useApproximateCountDistinct on the broker|
|`useApproximateTopN`|Whether to use approximate [TopN queries](topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](groupbyquery.html) will be used instead.|druid.sql.planner.useApproximateTopN on the broker|
|`useFallback`|Whether to evaluate operations on the broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|druid.sql.planner.useFallback on the broker|

Connection context can be specified as JDBC connection properties or as a "context" object in the JSON API.
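For example, a JSON API request that overrides the time zone through the connection context might look like this (the query itself is illustrative):

```json
{
  "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'",
  "context" : {
    "sqlTimeZone" : "America/Los_Angeles"
  }
}
```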

### Retrieving metadata

Druid brokers infer table and column metadata for each dataSource from segments loaded in the cluster, and use this to
96 changes: 87 additions & 9 deletions sql/src/main/java/io/druid/sql/http/SqlQuery.java
@@ -21,23 +21,99 @@

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.core.JsonGenerator;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableMap;
import io.druid.java.util.common.StringUtils;

import javax.annotation.Nullable;
import java.io.IOException;
import java.util.Map;
import java.util.Objects;

public class SqlQuery
{
public enum ResultFormat
{
ARRAY {
@Override
public void writeResultStart(final JsonGenerator jsonGenerator) throws IOException
{
jsonGenerator.writeStartArray();
}

@Override
public void writeResultField(
final JsonGenerator jsonGenerator,
final String name,
final Object value
) throws IOException
{
jsonGenerator.writeObject(value);
}

@Override
public void writeResultEnd(final JsonGenerator jsonGenerator) throws IOException
{
jsonGenerator.writeEndArray();
}
},

OBJECT {
@Override
public void writeResultStart(final JsonGenerator jsonGenerator) throws IOException
{
jsonGenerator.writeStartObject();
}

@Override
public void writeResultField(
final JsonGenerator jsonGenerator,
final String name,
final Object value
) throws IOException
{
jsonGenerator.writeFieldName(name);
jsonGenerator.writeObject(value);
}

@Override
public void writeResultEnd(final JsonGenerator jsonGenerator) throws IOException
{
jsonGenerator.writeEndObject();
}
};

public abstract void writeResultStart(JsonGenerator jsonGenerator) throws IOException;

public abstract void writeResultField(JsonGenerator jsonGenerator, String name, Object value)
throws IOException;

public abstract void writeResultEnd(JsonGenerator jsonGenerator) throws IOException;

@JsonCreator
public static ResultFormat fromString(@Nullable final String name)
{
if (name == null) {
return null;
}
return valueOf(StringUtils.toUpperCase(name));
}
}

private final String query;
private final ResultFormat resultFormat;
private final Map<String, Object> context;

@JsonCreator
public SqlQuery(
@JsonProperty("query") final String query,
@JsonProperty("resultFormat") final ResultFormat resultFormat,
@JsonProperty("context") final Map<String, Object> context
)
{
this.query = Preconditions.checkNotNull(query, "query");
this.resultFormat = resultFormat == null ? ResultFormat.OBJECT : resultFormat;
this.context = context == null ? ImmutableMap.<String, Object>of() : context;
}

@@ -47,6 +123,12 @@ public String getQuery()
return query;
}

@JsonProperty
public ResultFormat getResultFormat()
{
return resultFormat;
}

@JsonProperty
public Map<String, Object> getContext()
{
@@ -62,28 +144,24 @@ public boolean equals(final Object o)
if (o == null || getClass() != o.getClass()) {
return false;
}

final SqlQuery sqlQuery = (SqlQuery) o;

if (query != null ? !query.equals(sqlQuery.query) : sqlQuery.query != null) {
return false;
}
return context != null ? context.equals(sqlQuery.context) : sqlQuery.context == null;
return Objects.equals(query, sqlQuery.query) &&
resultFormat == sqlQuery.resultFormat &&
Objects.equals(context, sqlQuery.context);
}

@Override
public int hashCode()
{
int result = query != null ? query.hashCode() : 0;
result = 31 * result + (context != null ? context.hashCode() : 0);
return result;
return Objects.hash(query, resultFormat, context);
}

@Override
public String toString()
{
return "SqlQuery{" +
"query='" + query + '\'' +
", resultFormat=" + resultFormat +
", context=" + context +
'}';
}
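The row-level behavior of the two `ResultFormat` variants above can be sketched in a few lines of Python (a minimal illustration of the serialization difference, not the Druid implementation; the column name and value are hypothetical):

```python
import json

def render_row_object(field_names, row):
    # OBJECT format: one JSON object per row, keyed by column name,
    # mirroring ResultFormat.OBJECT's writeStartObject/writeFieldName calls.
    return json.dumps(dict(zip(field_names, row)))

def render_row_array(field_names, row):
    # ARRAY format: one JSON array per row, values in SELECT order,
    # mirroring ResultFormat.ARRAY's writeStartArray/writeObject calls.
    return json.dumps(list(row))

# Hypothetical single-row result for SELECT COUNT(*) AS TheCount ...
print(render_row_object(["TheCount"], [24433]))  # {"TheCount": 24433}
print(render_row_array(["TheCount"], [24433]))   # [24433]
```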
6 changes: 3 additions & 3 deletions sql/src/main/java/io/druid/sql/http/SqlResource.java
@@ -114,7 +114,7 @@ public void write(final OutputStream outputStream) throws IOException, WebApplic

while (!yielder.isDone()) {
final Object[] row = yielder.get();
jsonGenerator.writeStartObject();
sqlQuery.getResultFormat().writeResultStart(jsonGenerator);
for (int i = 0; i < fieldList.size(); i++) {
final Object value;

@@ -130,9 +130,9 @@ public void write(final OutputStream outputStream) throws IOException, WebApplic
value = row[i];
}

jsonGenerator.writeObjectField(fieldList.get(i).getName(), value);
sqlQuery.getResultFormat().writeResultField(jsonGenerator, fieldList.get(i).getName(), value);
}
jsonGenerator.writeEndObject();
sqlQuery.getResultFormat().writeResultEnd(jsonGenerator);
yielder = yielder.next(null);
}

38 changes: 38 additions & 0 deletions sql/src/test/java/io/druid/sql/calcite/http/SqlQueryTest.java
@@ -0,0 +1,38 @@
/*
* Licensed to Metamarkets Group Inc. (Metamarkets) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. Metamarkets licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package io.druid.sql.calcite.http;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.ImmutableMap;
import io.druid.segment.TestHelper;
import io.druid.sql.http.SqlQuery;
import org.junit.Assert;
import org.junit.Test;

public class SqlQueryTest
{
@Test
public void testSerde() throws Exception
{
final ObjectMapper jsonMapper = TestHelper.getJsonMapper();
final SqlQuery query = new SqlQuery("SELECT 1", SqlQuery.ResultFormat.ARRAY, ImmutableMap.of("useCache", false));
Assert.assertEquals(query, jsonMapper.readValue(jsonMapper.writeValueAsString(query), SqlQuery.class));
}
}