Exception in thread "streaming-start" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object; -- question related to the scala, apache-spark, and spark-streaming tags

Exception in thread "streaming-start" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;

1 vote

Question


I have seen all the threads related to this issue, and they all make it very clear that the posters were cross-compiling with two versions of Scala. In my case I made sure I only have one version, 2.11, but I still get the same error. Any help is appreciated, thanks. My Spark env:

       /___/ .__/\_,_/_/ /_/\_\   version 2.0.0.2.5.3.0-37
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)

My pom.xml:

    <properties>
        <spark.version>2.2.1</spark.version>
        <scala.version>2.11.8</scala.version>
        <scala.library.version>2.11.8</scala.library.version>
        <scala.binary.version>2.11</scala.binary.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.source.version>1.7</java.source.version>
        <java.compile.version>1.7</java.compile.version>
        <kafka.version>0-10</kafka.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>com.typesafe.scala-logging</groupId>
            <artifactId>scala-logging-slf4j_${scala.binary.version}</artifactId>
            <version>2.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-${kafka.version}_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.library.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.2</version>
        </dependency>
    </dependencies>

This is the exception:

    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream$$anonfun$start$1.apply(DirectKafkaInputDStream.scala:246)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream$$anonfun$start$1.apply(DirectKafkaInputDStream.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:45)
    at scala.collection.SetLike$class.map(SetLike.scala:93)
    at scala.collection.mutable.AbstractSet.map(Set.scala:45)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:245)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:47)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:47)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:145)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:138)
    at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:975)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:54)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
    at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:56)
    at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:972)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:165)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:514)
    at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

When I grep for "_2.1" in the output of the command mvn dependency:tree -Dverbose, I don't see any references to 2.10:

    [INFO] +- org.apache.spark:spark-core_2.11:jar:2.2.1:compile
    [INFO] |  +- com.twitter:chill_2.11:jar:0.8.0:compile
    [INFO] |  +- org.apache.spark:spark-launcher_2.11:jar:2.2.1:compile
    [INFO] |  |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- org.apache.spark:spark-network-common_2.11:jar:2.2.1:compile
    [INFO] |  +- org.apache.spark:spark-network-shuffle_2.11:jar:2.2.1:compile
    [INFO] |  |  +- (org.apache.spark:spark-network-common_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- org.apache.spark:spark-unsafe_2.11:jar:2.2.1:compile
    [INFO] |  |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  |  +- (com.twitter:chill_2.11:jar:0.8.0:compile - omitted for duplicate)
    [INFO] |  +- org.json4s:json4s-jackson_2.11:jar:3.2.11:compile
    [INFO] |  |  +- org.json4s:json4s-core_2.11:jar:3.2.11:compile
    [INFO] |  |  |  +- org.json4s:json4s-ast_2.11:jar:3.2.11:compile
    [INFO] |  |  |     +- org.scala-lang.modules:scala-xml_2.11:jar:1.0.1:compile
    [INFO] |  |  |     \- (org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.0.1:compile - omitted for conflict with 1.0.4)
    [INFO] |  +- com.fasterxml.jackson.module:jackson-module-scala_2.11:jar:2.6.5:compile
    [INFO] |  +- org.apache.spark:spark-tags_2.11:jar:2.2.1:compile
    [INFO] +- org.apache.spark:spark-sql_2.11:jar:2.2.1:compile
    [INFO] |  +- org.apache.spark:spark-sketch_2.11:jar:2.2.1:compile
    [INFO] |  |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- (org.apache.spark:spark-core_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- org.apache.spark:spark-catalyst_2.11:jar:2.2.1:compile
    [INFO] |  |  +- (org.apache.spark:spark-core_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  |  +- (org.apache.spark:spark-unsafe_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  |  +- (org.apache.spark:spark-sketch_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] +- org.apache.spark:spark-hive_2.11:jar:2.2.1:compile
    [INFO] |  +- (org.apache.spark:spark-core_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- (org.apache.spark:spark-sql_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] +- com.typesafe.scala-logging:scala-logging-slf4j_2.11:jar:2.1.2:compile
    [INFO] |  +- com.typesafe.scala-logging:scala-logging-api_2.11:jar:2.1.2:compile
    [INFO] +- org.apache.spark:spark-streaming-kafka-0-10_2.11:jar:2.2.1:compile
    [INFO] |  +- org.apache.kafka:kafka_2.11:jar:0.10.0.1:compile
    [INFO] |  |  +- org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.0.4:compile
    [INFO] |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] +- org.apache.spark:spark-streaming_2.11:jar:2.2.1:compile
    [INFO] |  +- (org.apache.spark:spark-core_2.11:jar:2.2.1:compile - omitted for duplicate)
    [INFO] |  +- (org.apache.spark:spark-tags_2.11:jar:2.2.1:compile - omitted for duplicate)

I should also state that I am running an uber jar on the Spark server using spark-submit. The uber jar includes the jars listed below. I included the Scala jars as a last resort to solve the problem, but it makes no difference whether I do or don't.

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.1.0</version>
        <configuration>
            <shadedArtifactAttached>false</shadedArtifactAttached>
            <keepDependenciesWithProvidedScope>false</keepDependenciesWithProvidedScope>
            <artifactSet>
                <includes>
                    <include>org.apache.kafka:spark*</include>
                    <include>org.apache.spark:spark-streaming-kafka-${kafka.version}_${scala.binary.version}</include>
                    <include>org.apache.kafka:kafka_${scala.binary.version}</include>
                    <include>org.apache.kafka:kafka-clients</include>
                    <include>org.apache.spark:*</include>
                    <include>org.scala-lang:scala-library</include>
                </includes>
                <excludes>
                    <exclude>org.apache.hadoop:*</exclude>
                    <exclude>com.fasterxml:*</exclude>
                </excludes>
            </artifactSet>
            <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                    <resource>META-INF/services/javax.ws.rs.ext.Providers</resource>
                </transformer>
            </transformers>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>shade</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
Answers

2 votes

Best answer

Scala 2.11 does not work with Java 7: https://scala-lang.org/download/2.11.8.html. Please update Java to 8.
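If you go this route, here is a minimal sketch of the corresponding pom.xml change, reusing the property names already defined in the question's pom and the compiler-plugin wiring shown in the answer below (treat it as an illustration, not a verified fix):

    <properties>
        <!-- raise the Java level from 1.7 to 1.8 -->
        <java.source.version>1.8</java.source.version>
        <java.compile.version>1.8</java.compile.version>
    </properties>

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.1</version>
        <configuration>
            <!-- compile for the same Java version the cluster JVM runs -->
            <source>${java.source.version}</source>
            <target>${java.compile.version}</target>
        </configuration>
    </plugin>

Note that the pom change only affects compilation; the JVM that spark-submit runs under (Java 1.7.0_67 in the banner above) would also have to be upgraded to 8.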

0 votes


Finally I got it to work for my given environment. The changes I made were Scala 2.10.6, Java 1.7, Spark 2.0.0.
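In property terms, the change amounts to these lines (extracted from the full pom below):

    <spark.version>2.0.0</spark.version>
    <scala.version>2.10.6</scala.version>
    <scala.library.version>2.10.6</scala.library.version>
    <scala.binary.version>2.10</scala.binary.version>
    <java.source.version>1.7</java.source.version>
    <java.compile.version>1.7</java.compile.version>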

For completeness, here is my full pom.xml:

    <properties>
        <spark.version>2.0.0</spark.version>
        <scala.version>2.10.6</scala.version>
        <scala.library.version>2.10.6</scala.library.version>
        <scala.binary.version>2.10</scala.binary.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.source.version>1.7</java.source.version>
        <java.compile.version>1.7</java.compile.version>
        <kafka.version>0-10</kafka.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>com.typesafe.scala-logging</groupId>
            <artifactId>scala-logging-slf4j_${scala.binary.version}</artifactId>
            <version>2.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-${kafka.version}_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.library.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.2</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/java</sourceDirectory>
        <testSourceDirectory>src/test/java</testSourceDirectory>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <shadedArtifactAttached>false</shadedArtifactAttached>
                    <keepDependenciesWithProvidedScope>false</keepDependenciesWithProvidedScope>
                    <artifactSet>
                        <includes>
                            <include>org.apache.kafka:spark*</include>
                            <include>org.apache.spark:spark-streaming-kafka-${kafka.version}_${scala.binary.version}</include>
                            <include>org.apache.kafka:kafka_${scala.binary.version}</include>
                            <include>org.apache.kafka:kafka-clients</include>
                            <include>org.apache.spark:*</include>
                            <include>org.scala-lang:scala-library</include>
                        </includes>
                        <excludes>
                            <exclude>org.apache.hadoop:*</exclude>
                            <exclude>com.fasterxml:*</exclude>
                        </excludes>
                    </artifactSet>
                    <transformers>
                        <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                            <resource>META-INF/services/javax.ws.rs.ext.Providers</resource>
                        </transformer>
                    </transformers>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>${java.source.version}</source>
                    <target>${java.compile.version}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <id>compile</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <phase>compile</phase>
                    </execution>
                    <execution>
                        <id>test-compile</id>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                        <phase>test-compile</phase>
                    </execution>
                    <execution>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <version>2.9</version>
                <configuration>
                    <sourceIncludes>
                        <sourceInclude>**/*.scala</sourceInclude>
                    </sourceIncludes>
                    <projectNameTemplate>[artifactId]</projectNameTemplate>
                    <projectnatures>
                        <projectnature>org.scala-ide.sdt.core.scalanature</projectnature>
                        <projectnature>org.eclipse.m2e.core.maven2Nature</projectnature>
                        <projectnature>org.eclipse.jdt.core.javanature</projectnature>
                    </projectnatures>
                    <buildcommands>
                        <buildcommand>org.eclipse.m2e.core.maven2Builder</buildcommand>
                        <buildcommand>org.scala-ide.sdt.core.scalabuilder</buildcommand>
                    </buildcommands>
                    <classpathContainers>
                        <classpathContainer>org.scala-ide.sdt.launching.SCALA_CONTAINER</classpathContainer>
                    </classpathContainers>
                    <excludes>
                        <exclude>org.scala-lang:scala-library</exclude>
                        <exclude>org.scala-lang:scala-compiler</exclude>
                    </excludes>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <archive>
                        <manifestEntries>
                            <Implementation-Version>${project.version}</Implementation-Version>
                            <SCM-Revision>1.0</SCM-Revision>
                        </manifestEntries>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
Related questions

1  How to use ssc.fileStream() to handle a zip directory in Java
I am new to Spark Streaming. I want to monitor and unzip all the .zip files that arrive in a particular directory. http://cutler.io/2012/07/hadoop-processing-zip-files-in-mapreduce ...

0  Spark SQL extensions with multiple extensions
I want to specify multiple extensions in the spark.sql.extensions configuration of Spark 3.0, but it overwrites the old extensions with the new one. According to the Spark 3.0 documentation a comma-separated list of extensions can be used, but it does not work. The comma-se...

2  Start multiple processor threads on a Spark worker within one core
Our situation is that we use Spark Streaming with AWS Kinesis. If I specify the Spark master as "local[32]" in memory, Spark consumes the data from Kinesis fairly fast. However, with a cluster of one master and three workers (on four separate machines)...

1  Spark Streaming throws FileNotFoundException
Spark Streaming in cluster mode throws a FileNotFoundException when using a Linux file system (a GFS shared file system across all nodes), but it works with HDFS as input. The data is actually available, and...

6  Constructing a graph from streaming data using Spark Streaming
I am new to Spark. I need to build a co-occurrence graph (words in tweets become nodes, and an edge is added between two words when they come from the same tweet) from streaming data such as Twitter tweets. Spark St...

0  Drools KIE Scanner 6.3.0.Final throwing ProvisionException (not finding RepositorySystem)
I just started working with Drools and tried to integrate it with my Spark Streaming job. I am using Drools 6.3.0.Final with kie-ci so that the kjar can be pulled remotely from my Spark job and, when there is a new version...

0  Error while running standalone app example in Python using Spark
I am just getting started with Spark, running in standalone mode on top of an Amazon EC2 instance. I was trying the examples described in the documentation, the one called Simple App, but I keep getting this error: NameError: n...

3  Spark mapWithState: how to access all keys built over multiple micro-batches
How to access the state of all keys built over multiple micro-batches: val stateSpec = StateSpec.function(stateUpdate _) .numPartitions(numPartition...

14  Spark Streaming + Kafka: SparkException: Couldn't find leader offsets for Set
I am trying to set up Spark Streaming to fetch messages from a Kafka queue. I got the following error: py4j.protocol.Py4JJavaError: An error occurred while callin...

5  Drools in Spark for streaming file
I was able to integrate Drools with Spark successfully. I could do this for batch files residing in HDFS, but I tried to use Drools for streaming files so that I can make decisions instantly; I could not figure out how to do it...



