
Fix BUG "No FileSystem for scheme: hdfs" for hdfs-srorage-extension #1022

Closed
haoch wants to merge 2 commits into apache:master from haoch:hdfs-storage-extension

Conversation

@haoch
Member

@haoch haoch commented Jan 10, 2015

Problem

While starting a realtime node with HDFS as deep storage (i.e. the hdfs-storage-extension), the following log line shows that hadoop-hdfs should already have been loaded correctly:

INFO [main] io.druid.initialization.Initialization - Added URL[file:/home/druid/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar]

But we got an IOException, "No FileSystem for scheme: hdfs", when it tried to persist segments onto HDFS, which means hadoop-hdfs (in fact, the class org.apache.hadoop.hdfs.DistributedFileSystem) had not been loaded correctly:

java.io.IOException: No FileSystem for scheme: hdfs
     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
     at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:75)
     at io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:356)
     at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)

In fact, many other Druid users besides us have been confused by the same bug.

Root Cause

In fact, this is a typical case of the maven-assembly plugin breaking things in hadoop hdfs.

The root cause is that different JARs (hadoop-common for LocalFileSystem, hadoop-hdfs for DistributedFileSystem) each contain a different file called org.apache.hadoop.fs.FileSystem in their META-INF/services directory. This file lists the canonical class names of the filesystem implementations they want to declare (this is the Service Provider Interface mechanism; see org.apache.hadoop.FileSystem line L2591). Druid's module management system seems to rely on a customized Service Provider Interface too, right? When we use maven-assembly, the META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite each other, and only one of them remains (the last one added). In this case, the FileSystem list from hadoop-common overwrites the list from hadoop-hdfs, so DistributedFileSystem is no longer declared.
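The clobbering can be sketched with a stdlib-only Java snippet (the class names mirror Hadoop's real service entries, but no Hadoop code is involved and the list contents are illustrative): overwriting one service file with another silently drops the hdfs entry, while appending keeps every declared implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ServiceFileDemo {
    public static void main(String[] args) {
        String dfs = "org.apache.hadoop.hdfs.DistributedFileSystem";

        // Illustrative contents of META-INF/services/org.apache.hadoop.fs.FileSystem
        // as shipped in each jar (entries abbreviated for this sketch).
        List<String> fromHadoopHdfs = Arrays.asList(dfs);
        List<String> fromHadoopCommon = Arrays.asList(
                "org.apache.hadoop.fs.LocalFileSystem",
                "org.apache.hadoop.fs.RawLocalFileSystem");

        // maven-assembly default behavior: the last file written wins,
        // so the hadoop-hdfs entry is gone from the merged jar.
        List<String> overwritten = fromHadoopCommon;
        System.out.println("overwrite -> hdfs declared: " + overwritten.contains(dfs)); // false

        // maven-shade AppendingTransformer behavior: files are concatenated,
        // so every declared implementation survives.
        List<String> appended = new ArrayList<>(fromHadoopHdfs);
        appended.addAll(fromHadoopCommon);
        System.out.println("append -> hdfs declared: " + appended.contains(dfs)); // true
    }
}
```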

Solution

  1. As a quick workaround, Druid users hit by the same problem may start Druid nodes with `hadoop classpath`, or append the paths of all hadoop-hdfs related jars to the end of the classpath, like:

    java -Ddruid.realtime.specFile=config/realtime/metrics.spec -classpath lib/*:config/realtime:`hadoop classpath` io.druid.cli.Main server realtime
    
  2. To fix the bug completely, we can explicitly refer to DistributedFileSystem in the hdfs-storage-extension code.

… FileSystem for scheme: hdfs' while loading hadoop-hdfs dependency
@haoch haoch changed the title Fixed BUG "No FileSystem for scheme: hdfs" for hdfs-srorage-extension Fix BUG "No FileSystem for scheme: hdfs" for hdfs-srorage-extension Jan 10, 2015
Contributor


I'm looking around the docs for hadoop and I cannot find that in 2.x configs:

https://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml (or similar)

That was a setting in 1.x but is it still valid in 2.x?

Also, there is no reason someone couldn't use https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/LocalFileSystem.html as the impl

Member Author


First of all, thanks very much for your quick response, Charles!

  • Firstly, it does work for Hadoop 2.x too (for us, hadoop-2.4.0), and I've tested it with Druid in our environment. When I added an hdfs-site.xml to the Druid classpath (say config/realtime/hdfs-site.xml) with the following settings,

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        <!--
          Licensed under the Apache License, Version 2.0 (the "License");
          you may not use this file except in compliance with the License.
          You may obtain a copy of the License at
    
            http://www.apache.org/licenses/LICENSE-2.0
    
          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License. See accompanying LICENSE file.
        -->
    
        <!-- Put site-specific property overrides in this file. -->
    
        <configuration>
            <property>
               <name>fs.file.impl</name>
               <value>org.apache.hadoop.fs.LocalFileSystem</value>
               <description>The FileSystem for file: uris.</description>
            </property>
    
            <property>
               <name>fs.hdfs.impl</name>
               <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
               <description>The FileSystem for hdfs: uris.</description>
            </property>
        </configuration>
    

    the exception changed to org.apache.hadoop.hdfs.DistributedFileSystem not found, as expected, which should answer your first concern:

      2015-01-09 05:57:15,042 ERROR [datanode_sherlock-2014-09-11T22:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Failed to persist merged index[datanode_sherlock]: {class=io.druid.segment.realtime.plumber.RealtimePlumber, exceptionType=class java.lang.RuntimeException, exceptionMessage=java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found, interval=2014-09-11T22:00:00.000Z/2014-09-11T23:00:00.000Z}
      java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
              at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1882)
              at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2298)
              at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
              at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
              at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
              at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
              at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
              at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
              at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:75)
              at io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:356)
              at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
              at java.lang.Thread.run(Thread.java:745)
      Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
              at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1788)
              at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1880)
              ... 13 more
    
  • Secondly, org.apache.hadoop.fs.LocalFileSystem is implemented in hadoop-common, which should be loaded correctly because it is explicitly referenced in Druid's code at HdfsStorageDruidModule.java#L32, so LocalFileSystem is always available by default.

@drcrallen
Contributor

Having two impls in the service discovery path shouldn't be the root cause; that should simply let the service loader know there are two impls. If having two service impls is preventing HDFS from parsing the URIs correctly, that sounds like an HDFS bug.

@drcrallen
Contributor

Druid purposefully tries to disconnect itself from any particular Hadoop version. With 0.x/1.x still in wide use, and 2.x very popular, and 3.x actively moving along, we need to do everything we can to make sure that the solution is easily used in whatever version the users decide to have on the backend.

As such, I think the hadoop classpath solution is preferred unless there is a particular reason that solution confounds other behaviors.

@haoch
Member Author

haoch commented Jan 11, 2015

To continue my inline comments: Druid users really appreciate Druid's convenient extension loader mechanism, but this bug just confuses them. I'm not sure, but I think it may be caused by Druid's customized class loader rather than by HDFS itself, because the hadoop classpath workaround proves that hadoop-hdfs works fine under the traditional classpath loader; so why doesn't it work in Druid? Especially when Druid's log prints:

INFO [main] io.druid.initialization.Initialization - Added URL[file:/home/druid/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar]

Surely, I also agree with the opinion about decoupling Druid from any particular Hadoop version. So I think we have two options here:

  • Option 1: If Druid is still willing to help users load all jar dependencies, as hdfs-storage-extension does for us, it should fix the bug as in this PR. In fact, this has nothing to do with the Hadoop version, because the basic hadoop-hdfs API is very stable and mature, and Hadoop seems to intend for its users to handle this themselves.

  • Option 2: If Druid really prefers users to add Hadoop-related dependencies by hand, the POM scope of the hadoop-client dependency at extensions/hdfs-storage/pom.xml#L58 should be changed from compile to provided, right? It should also make sure the Druid module loader does not load additional dependent hadoop-hdfs jars from the local Maven repository; otherwise the JVM will always load at least two different versions of the hadoop-hdfs jar, which may cause other unknown and confusing problems:

    $ lsof -p 17094 | grep hadoop-hdfs
    java    17094 druid  mem    REG              252,0  5783912   774885 /home/druid/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar
    java    17094 druid  mem    REG              252,0  7822243   774161 /usr/hdp/2.2.0.0-2041/hadoop-hdfs/hadoop-hdfs-2.6.0.2.2.0.0-2041.jar
    java    17094 druid  229r   REG              252,0  7822243   774161 /usr/hdp/2.2.0.0-2041/hadoop-hdfs/hadoop-hdfs-2.6.0.2.2.0.0-2041.jar
    java    17094 druid  472r   REG              252,0  5783912   774885 /home/druid/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar
    

@drcrallen
Contributor

@haoch: Just FYI, there is a discussion about how to handle dependencies going forward, which is part of why this PR isn't getting traction yet.

@fjy fjy force-pushed the master branch 2 times, most recently from 8b0ec82 to d05032b Compare February 1, 2015 04:57
@haoch
Member Author

haoch commented Feb 8, 2015

Thanks @drcrallen

@gianm
Contributor

gianm commented Jun 24, 2015

@haoch, are you having this problem when you try to make a single self contained jar? I'm asking because in that case, I think it makes more sense to concatenate the services files rather than hard-coding fs impls in Druid code. Hard-coding the impls in Druid wouldn't help for other FS types (like S3) unless we add them all, and the strategy of adding these things in the code would tie us more closely to a particular version of hadoop.

Fwiw, what I've seen work for self contained jars in the past is using the maven shade plugin with a configuration like:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>selfcontained</shadedClassifierName>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>META-INF/services/org.apache.hadoop.fs.FileSystem</resource>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

This will concatenate the listed services files together so different files from different jars don't clobber each other. There may be something equivalent for the assembly plugin.
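One equivalent for the assembly plugin appears to be a containerDescriptorHandler in the assembly descriptor (a hedged sketch, assuming a reasonably recent maven-assembly-plugin; the metaInf-services handler aggregates META-INF/services files instead of letting them overwrite each other):

```xml
<!-- In the assembly descriptor (e.g. src/assembly/selfcontained.xml),
     not in the plugin section of the POM. The metaInf-services handler
     merges META-INF/services files from all bundled jars. -->
<assembly>
  <id>selfcontained</id>
  <formats>
    <format>jar</format>
  </formats>
  <containerDescriptorHandlers>
    <containerDescriptorHandler>
      <handlerName>metaInf-services</handlerName>
    </containerDescriptorHandler>
  </containerDescriptorHandlers>
</assembly>
```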

@himanshug
Contributor

Looks like a genuine problem. However, it is more of an extension-mechanism problem than one in the code being fixed here, which is prone to failure in situations requiring FileSystem schemes other than hdfs and local.
I think the problem is that when FileSystem.get(..) is called, hadoop-hdfs.jar is not on the classpath of the classloader used (which is Thread.currentThread().getContextClassLoader()), because it wasn't on the original classpath of the Druid process but was a transitive dependency of the druid-hdfs-storage extension. Actually, I could argue that this is a Hadoop bug as well, because it did not honor config.getClassLoader() and used the thread context classloader instead (otherwise #1454 would have fixed this problem).
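The context-classloader gap described above can be reproduced with plain JDK classloaders (file names mirror Hadoop's service entry but are otherwise illustrative; no Hadoop code is involved): a resource visible to an extension's own classloader is invisible to the thread context classloader, which is exactly how hadoop-hdfs.jar can be "added" by the extension loader yet missing for FileSystem.get().

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ContextClassLoaderDemo {
    public static void main(String[] args) throws IOException {
        // Simulate an extension directory carrying its own service file.
        Path extDir = Files.createTempDirectory("ext");
        Path services = extDir.resolve("META-INF/services");
        Files.createDirectories(services);
        Files.write(services.resolve("org.apache.hadoop.fs.FileSystem"),
                "org.apache.hadoop.hdfs.DistributedFileSystem\n".getBytes("UTF-8"));

        // A dedicated classloader for the extension's classpath entries.
        ClassLoader extLoader = new URLClassLoader(
                new URL[]{extDir.toUri().toURL()},
                ContextClassLoaderDemo.class.getClassLoader());

        String res = "META-INF/services/org.apache.hadoop.fs.FileSystem";

        // The thread context classloader only knows the process classpath...
        System.out.println("context CL sees it: "
                + (Thread.currentThread().getContextClassLoader().getResource(res) != null)); // false

        // ...while the extension's own classloader can see the resource.
        System.out.println("extension CL sees it: " + (extLoader.getResource(res) != null)); // true

        // Swapping in the extension loader as the context classloader makes it
        // visible, which is the effect of putting `hadoop classpath` on the
        // process classpath (or of honoring config.getClassLoader()).
        Thread.currentThread().setContextClassLoader(extLoader);
        System.out.println("after swap: "
                + (Thread.currentThread().getContextClassLoader().getResource(res) != null)); // true
    }
}
```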

There is some work happening to remove dynamic loading of extensions, and that should eventually fix this issue. In the meantime, I would recommend just putting `hadoop classpath` manually on the classpath; that is what we [and most people] do.

@himanshug
Contributor

Actually, I created a bug against HDFS as well for using the correct classloader: https://issues.apache.org/jira/browse/HDFS-8750


@mark1900
Contributor

This issue also exists when using the Druid Tranquility library (https://github.com/druid-io/tranquility).

Tranquility works with the Druid indexing service (http://druid.io/docs/latest/Indexing-Service.html). To get started, you'll need an Overlord, enough Middle Managers for your realtime workload, and enough Historical nodes to receive handoffs. You don't need any Realtime nodes, since Tranquility uses the indexing service for all of its ingestion needs.

@mark1900
Contributor

My successful workaround for Druid 0.8.0, patching the druid-hdfs-storage extension:

  • Git clone Druid from https://github.com/druid-io/druid.git and check out tag "druid-0.8.0".
  • Update the pom.xml dependency scopes of the "hadoop-hdfs" (tests classifier), "hadoop-common" (tests classifier) and "hadoop-hdfs" artifacts from "test" to "provided".
         <dependency>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-hdfs</artifactId>
           <version>2.3.0</version>
           <classifier>tests</classifier>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
         <dependency>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-common</artifactId>
           <version>2.3.0</version>
           <classifier>tests</classifier>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
         <dependency>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-hdfs</artifactId>
           <version>2.3.0</version>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
  • Update HdfsStorageDruidModule.java with the patch mentioned in this thread.
 public class HdfsStorageDruidModule implements DruidModule
@@ -82,6 +86,10 @@ public class HdfsStorageDruidModule implements DruidModule

     final Configuration conf = new Configuration();

+    // Walk around a typical case that "maven-assembly" causes bug about "No FileSystem for scheme: hdfs" while loading hadoop-hdfs dependency
+    conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
+    conf.set("fs.file.impl", LocalFileSystem.class.getName());
+
     // Set explicit CL. Otherwise it'll try to use thread context CL, which may not have all of our dependencies.
     conf.setClassLoader(getClass().getClassLoader());
  • Do a maven clean and install ("mvn clean install").
  • Zip the patched Druid extension (.m2\repository\io\druid\extensions\druid-hdfs-storage -> druid-hdfs-storage.zip).
  • Copy the patched Druid extension to the Druid host.
ls -al ~/.m2/repository/io/druid/extensions/druid-hdfs-storage/
cp druid-hdfs-storage.zip ~/.m2/repository/io/druid/extensions/
rm -rf ~/.m2/repository/io/druid/extensions/druid-hdfs-storage/
cd ~/.m2/repository/io/druid/extensions/
unzip druid-hdfs-storage.zip
  • Launch Druid

Full patch contents: patch.diff

diff --git a/extensions/hdfs-storage/pom.xml b/extensions/hdfs-storage/pom.xml
index 9518404..7ad60dc 100644
--- a/extensions/hdfs-storage/pom.xml
+++ b/extensions/hdfs-storage/pom.xml
@@ -81,20 +81,20 @@
           <artifactId>hadoop-hdfs</artifactId>
           <version>2.3.0</version>
           <classifier>tests</classifier>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
         <dependency>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-common</artifactId>
           <version>2.3.0</version>
           <classifier>tests</classifier>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
         <dependency>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-hdfs</artifactId>
           <version>2.3.0</version>
-          <scope>test</scope>
+          <scope>provided</scope>
         </dependency>
       </dependencies>

diff --git a/extensions/hdfs-storage/src/main/java/io/druid/storage/hdfs/HdfsStorageDruidModule.java b/extensions/hdfs-storage/src/main/java/io/druid/storage/hdfs/HdfsStorageDruidModule.java
index 52eadd3..94b63fe 100644
--- a/extensions/hdfs-storage/src/main/java/io/druid/storage/hdfs/HdfsStorageDruidModule.java
+++ b/extensions/hdfs-storage/src/main/java/io/druid/storage/hdfs/HdfsStorageDruidModule.java
@@ -29,6 +29,8 @@ import io.druid.initialization.DruidModule;
 import io.druid.storage.hdfs.tasklog.HdfsTaskLogs;
 import io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.hdfs.DistributedFileSystem;

 import java.util.List;
 import java.util.Properties;
@@ -82,6 +84,10 @@ public class HdfsStorageDruidModule implements DruidModule

     final Configuration conf = new Configuration();

+    // Walk around a typical case that "maven-assembly" causes bug about "No FileSystem for scheme: hdfs" while loading hadoop-hdfs dependency
+    conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
+    conf.set("fs.file.impl", LocalFileSystem.class.getName());
+
     // Set explicit CL. Otherwise it'll try to use thread context CL, which may not have all of our dependencies.
     conf.setClassLoader(getClass().getClassLoader());

@himanshug
Contributor

@mark1900 @haoch can you please try the fix in #1721 and see if that fixes the issue?

@mark1900
Contributor

@himanshug @haoch I merged the changes in #1721 (https://github.com/druid-io/druid/pull/1721/files) into the druid release tagged "druid-0.8.0" and it seemed to also address this issue.

@himanshug
Contributor

@mark1900 thanks for testing the patch.

@fjy
Contributor

fjy commented Sep 11, 2015

@himanshug can we fix merge conflicts?
👍

@himanshug
Contributor

#1714 is fixed by #1721
this PR is redundant, so closing.

@mark1900 @haoch thanks for all the details you provided to find the fix for this issue.

@mark1900
Contributor

mark1900 commented Oct 8, 2015

@himanshug @haoch I merged the changes in #1721 (https://github.com/druid-io/druid/pull/1721/files) into the druid release tagged "druid-0.8.1" and it seems that this issue still occurs.

2015-10-08T17:39:01,106 INFO [druid_datasource_01-2015-10-08T17:30:00.000Z-persist-n-merge] io.druid.storage.hdfs.HdfsDataSegmentPusher - Copying segment[druid_datasource_01_2015-10-08T17:30:00.000Z_2015-10-08T17:31:00.000Z_2015-10-08T17:32:20.971Z] to HDFS at location[hdfs://server1:9000/druid/druid-hdfs-storage/druid_datasource_01/20151008T173000.000Z_20151008T173100.000Z/2015-10-08T17_32_20.971Z/0]
2015-10-08T17:39:01,109 ERROR [druid_datasource_01-2015-10-08T17:30:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Failed to persist merged index[druid_datasource_01]: {class=io.druid.segment.realtime.plumber.RealtimePlumber, exceptionType=class java.io.IOException, exceptionMessage=No FileSystem for scheme: hdfs, interval=2015-10-08T17:30:00.000Z/2015-10-08T17:31:00.000Z}
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90) ~[?:?]
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350) ~[?:?]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369) ~[?:?]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[?:?]
        at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:83) ~[?:?]
        at io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:454) [druid-server-0.8.1.jar:0.8.1]
        at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:40) [druid-common-0.8.1.jar:0.8.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_60]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
2015-10-08T17:39:01,133 INFO [druid_datasource_01-2015-10-08T17:30:00.000Z-persist-n-merge] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2015-10-08T17:39:01.131Z","service":"overlord","host":"server1:8100","severity":"component-failure","description":"Failed to persist merged index[druid_datasource_01]","data":{"class":"io.druid.segment.realtime.plumber.RealtimePlumber","exceptionType":"java.io.IOException","exceptionMessage":"No FileSystem for scheme: hdfs","exceptionStackTrace":"java.io.IOException: No FileSystem for scheme: hdfs\n\tat org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)\n\tat org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)\n\tat org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)\n\tat org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)\n\tat org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)\n\tat org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)\n\tat org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)\n\tat io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:83)\n\tat io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:454)\n\tat io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:40)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n","interval":"2015-10-08T17:30:00.000Z/2015-10-08T17:31:00.000Z"}}]

@mark1900
Contributor

The issue seems to be resolved in the latest Druid 0.8.2 release: http://static.druid.io/artifacts/releases/druid-0.8.2-bin.tar.gz

