75 changes: 75 additions & 0 deletions contrib/format-httpd/README.md
@@ -0,0 +1,75 @@
# Web Server Log Format Plugin (HTTPD)
This plugin enables Drill to read and query httpd (Apache Web Server) and nginx access logs natively. It uses the log parsing library by [Niels Basjes](https://github.com/nielsbasjes), which is available here: https://github.com/nielsbasjes/logparser.

## Configuration
There are five fields which you can configure in order for Drill to read web server logs. In general the defaults should be fine; the fields are:
* **`logFormat`**: The format string found in your web server configuration. If you have multiple log formats, you can add all of them in this single parameter, separated by newlines (`\n`); the parser will automatically select the first matching format.
* **`timestampFormat`**: The format of time stamps in your log files. This setting is optional and is almost never needed.
* **`extensions`**: The file extension of your web server logs. Defaults to `httpd`.
* **`maxErrors`**: Sets the plugin's error tolerance. When set to any value less than `0`, Drill will ignore all errors. If unspecified, `maxErrors` defaults to `0`, which causes the query to fail on the first error.
* **`flattenWildcards`**: The parser extracts a few variables into Drill maps; when set to `true`, Drill flattens these maps into regular columns (see *Flattening Maps* below). Defaults to `false`.


```json
"httpd" : {
"type" : "httpd",
"logFormat" : "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\"",
"timestampFormat" : "dd/MMM/yyyy:HH:mm:ss ZZ",
"maxErrors": 0,
"flattenWildcards": false
}
```

## Data Model
The fields which Drill returns from HTTPD access logs should be fairly self-explanatory and are all mapped to the correct data types. For instance, `TIMESTAMP` fields are all Drill `TIMESTAMP`s, and so forth.
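
As a minimal sketch, timestamp columns can therefore be filtered with standard Drill date/time operations. This assumes your `logFormat` includes `%t`, which the parser exposes as a `request_receive_time` column; the exact column names depend on your format string:
```sql
SELECT `request_receive_time`
FROM dfs.test.`logfile.httpd`
WHERE `request_receive_time` >= TIMESTAMP '2021-01-01 00:00:00'
```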

### Nested Columns
The HTTPD parser can produce a few columns of nested data. For instance, the various `query_string` columns are parsed into Drill maps so that if you want to look for a specific field, you can do so.

Drill allows you to access maps directly using the format:
```
<table>.<map>.<field>
```
One note is that in order to access a map, you must assign an alias to your table as shown below:
```sql
SELECT mylogs.`request_firstline_uri_query_$`.`username` AS username
FROM dfs.test.`logfile.httpd` AS mylogs
```
In this example, we assign an alias of `mylogs` to the table; the column name is `request_firstline_uri_query_$`, and the individual field within that map is `username`. This particular example enables you to analyze items in query strings.

### Flattening Maps
In the event that you have a map field that you would like broken into columns rather than nested fields, you can set the `flattenWildcards` option to `true` and Drill will create columns for these fields. For example, if you have a URI query parameter called `username` and you enable the `flattenWildcards` option, Drill will create a field called `request_firstline_uri_query_username`, as in the sketch below.

**Note:** Underscores in the field name are replaced with double underscores.
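
Continuing the example above, a hedged sketch (assuming `flattenWildcards` is set to `true` in the plugin configuration and your query strings carry a `username` parameter):
```sql
SELECT `request_firstline_uri_query_username` AS username
FROM dfs.test.`logfile.httpd`
```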

## Useful Functions
If you are using Drill to analyze web access logs, there are a few other useful functions which you should know about:

* `parse_url(<url>)`: This function accepts a URL as an argument and returns a map of the URL's protocol, authority, host, and path.
* `parse_query(<query_string>)`: This function accepts a query string and returns a key/value pairing of the variables submitted in the request.
* `parse_user_agent(<user agent>)`, `parse_user_agent(<user agent>, <desired field>)`: The `parse_user_agent()` function takes a user agent string as an argument and returns a map of the available fields, as shown in the sketch after this list. Note that not every field will be present in every user agent string.

[Complete Docs Here](https://github.com/apache/drill/tree/master/contrib/udfs#user-agent-functions)
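
A hedged sketch of `parse_user_agent()` combined with this plugin. It assumes the user agent is exposed as the `request_user-agent` column (names vary with your `logFormat`), and `AgentNameVersion` is one of the fields the function can emit; the subquery alias is required to access the map fields, per the *Nested Columns* note above:
```sql
SELECT uadata.ua.`AgentNameVersion` AS browser
FROM (
  SELECT parse_user_agent(`request_user-agent`) AS ua
  FROM dfs.test.`logfile.httpd`
) AS uadata
```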


## Implicit Columns
Data queried by this plugin will return two implicit columns:

* **`_raw`**: Returns the raw, unparsed log line.
* **`_matched`**: Returns `true` or `false` depending on whether the line matched the configured log format.

Thus, if you wanted to see which lines in your log file did not match the configured format, you could use the following query:

```sql
SELECT _raw
FROM <data>
WHERE _matched = false
```
100 changes: 100 additions & 0 deletions contrib/format-httpd/pom.xml
@@ -0,0 +1,100 @@
<?xml version="1.0"?>
<!--

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>drill-contrib-parent</artifactId>
<groupId>org.apache.drill.contrib</groupId>
<version>1.19.0-SNAPSHOT</version>
</parent>
<artifactId>drill-format-httpd</artifactId>
<name>contrib/httpd-format-plugin</name>

<dependencies>
<dependency>
<groupId>org.apache.drill.exec</groupId>
<artifactId>drill-java-exec</artifactId>
<version>${project.version}</version>
</dependency>

<dependency>
<groupId>nl.basjes.parse.httpdlog</groupId>
<artifactId>httpdlog-parser</artifactId>
<version>5.6</version>
<exclusions>
<exclusion>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
</exclusion>
<exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>nl.basjes.parse.useragent</groupId>
<artifactId>yauaa-logparser</artifactId>
<version>5.19</version>
</dependency>
<!-- Test dependencies -->
<dependency>
<groupId>org.apache.drill.exec</groupId>
<artifactId>drill-java-exec</artifactId>
<classifier>tests</classifier>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.drill</groupId>
<artifactId>drill-common</artifactId>
<classifier>tests</classifier>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-java-sources</id>
<phase>process-sources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/httpd
</outputDirectory>
<resources>
<resource>
<directory>src/main/java/org/apache/drill/exec/store/httpd</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
187 changes: 187 additions & 0 deletions contrib/format-httpd/src/main/java/org/apache/drill/exec/store/httpd/HttpdLogBatchReader.java
@@ -0,0 +1,187 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.drill.exec.store.httpd;

import org.apache.drill.common.exceptions.CustomErrorContext;
import org.apache.drill.common.exceptions.UserException;
import org.apache.drill.common.types.TypeProtos;
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
import org.apache.drill.exec.physical.resultSet.RowSetLoader;
import org.apache.drill.exec.record.metadata.ColumnMetadata;
import org.apache.drill.exec.record.metadata.MetadataUtils;
import org.apache.drill.exec.store.dfs.easy.EasySubScan;
import org.apache.drill.exec.vector.accessor.ScalarWriter;
import org.apache.drill.shaded.guava.com.google.common.base.Charsets;
import org.apache.hadoop.mapred.FileSplit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

/**
 * Batch reader for httpd/nginx access logs. Each line of the log file is
 * parsed with the logparser-based {@link HttpdParser}; parsed fields are
 * written through Drill's row set loader, and the implicit {@code _raw}
 * and {@code _matched} columns expose the original line and whether it
 * matched the configured log format.
 */
public class HttpdLogBatchReader implements ManagedReader<FileSchemaNegotiator> {

private static final Logger logger = LoggerFactory.getLogger(HttpdLogBatchReader.class);
public static final String RAW_LINE_COL_NAME = "_raw";
public static final String MATCHED_COL_NAME = "_matched";
private final HttpdLogFormatConfig formatConfig;
private final int maxRecords;
private final EasySubScan scan;
private HttpdParser parser;
private FileSplit split;
private InputStream fsStream;
private RowSetLoader rowWriter;
private BufferedReader reader;
private int lineNumber;
private CustomErrorContext errorContext;
private ScalarWriter rawLineWriter;
private ScalarWriter matchedWriter;
private int errorCount;


public HttpdLogBatchReader(HttpdLogFormatConfig formatConfig, int maxRecords, EasySubScan scan) {
this.formatConfig = formatConfig;
this.maxRecords = maxRecords;
this.scan = scan;
}

@Override
public boolean open(FileSchemaNegotiator negotiator) {
// Open the input stream to the log file
openFile(negotiator);
errorContext = negotiator.parentErrorContext();
try {
parser = new HttpdParser(formatConfig.getLogFormat(), formatConfig.getTimestampFormat(), formatConfig.getFlattenWildcards(), scan);
negotiator.tableSchema(parser.setupParser(), false);
} catch (Exception e) {
throw UserException.dataReadError(e)
.message("Error opening HTTPD file: " + e.getMessage())
.addContext(errorContext)
.build(logger);
}

ResultSetLoader loader = negotiator.build();
rowWriter = loader.writer();
parser.addFieldsToParser(rowWriter);
rawLineWriter = addImplicitColumn(RAW_LINE_COL_NAME, MinorType.VARCHAR);
matchedWriter = addImplicitColumn(MATCHED_COL_NAME, MinorType.BIT);
return true;
}

@Override
public boolean next() {
while (!rowWriter.isFull()) {
if (!nextLine(rowWriter)) {
return false;
}
}
return true;
}

private boolean nextLine(RowSetLoader rowWriter) {
String line;

// Check if the limit has been reached
if (rowWriter.limitReached(maxRecords)) {
return false;
}

try {
line = reader.readLine();
if (line == null) {
return false;
}
// Count every line read, including blanks, so error messages report 1-based line numbers
lineNumber++;
if (line.isEmpty()) {
return true;
}
} catch (Exception e) {
throw UserException.dataReadError(e)
.message("Error reading HTTPD file at line number %d", lineNumber)
.addContext(e.getMessage())
.addContext(errorContext)
.build(logger);
}
// Start the row
rowWriter.start();

try {
parser.parse(line);
matchedWriter.setBoolean(true);
} catch (Exception e) {
errorCount++;
// A negative maxErrors disables the error limit entirely; otherwise fail
// once the configured tolerance is reached (0 fails on the first error)
if (formatConfig.getMaxErrors() >= 0 && errorCount >= formatConfig.getMaxErrors()) {
throw UserException.dataReadError()
.message("Error reading HTTPD file at line number %d", lineNumber)
.addContext(e.getMessage())
.addContext(errorContext)
.build(logger);
} else {
matchedWriter.setBoolean(false);
}
}

// Write raw line
rawLineWriter.setString(line);

// Finish the row
rowWriter.save();

return true;
}

@Override
public void close() {
if (fsStream == null) {
return;
}
try {
fsStream.close();
} catch (IOException e) {
logger.warn("Error when closing HTTPD file: {} {}", split.getPath().toString(), e.getMessage());
}
fsStream = null;
}

private void openFile(FileSchemaNegotiator negotiator) {
split = negotiator.split();
try {
fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
} catch (Exception e) {
throw UserException
.dataReadError(e)
.message("Failed to open open input file: %s", split.getPath().toString())
.addContext(e.getMessage())
.build(logger);
}
reader = new BufferedReader(new InputStreamReader(fsStream, Charsets.UTF_8));
}

private ScalarWriter addImplicitColumn(String colName, MinorType type) {
ColumnMetadata colSchema = MetadataUtils.newScalar(colName, type, TypeProtos.DataMode.OPTIONAL);
colSchema.setBooleanProperty(ColumnMetadata.EXCLUDE_FROM_WILDCARD, true);
int index = rowWriter.addColumn(colSchema);

return rowWriter.scalar(index);
}
}