1. On Windows 10, creating and running the WordCount project in IDEA throws an exception about the Hadoop binaries together with a NullPointerException. This happens because Hadoop is not installed locally and the Hadoop environment variable (HADOOP_HOME) is not set.
Solution: download a Hadoop distribution and configure the HADOOP_HOME environment variable.
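Below is a minimal WordCount sketch for a local run from IDEA. Setting hadoop.home.dir is an alternative to the HADOOP_HOME environment variable that applies to the current JVM only; the C:\hadoop path and the input file name are placeholders, not values from the original setup.

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Same effect as the HADOOP_HOME environment variable, but scoped to this JVM.
    // Placeholder path: point it at the directory that actually contains bin\winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Placeholder input path; replace with a real local file or HDFS URI.
    sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}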
2. Running the WordCount job on the Spark standalone cluster fails with:
19/09/11 20:19:54 INFO spark.SparkContext: Created broadcast 0 from textFile at WordCount.scala:14
Exception in thread "main" java.lang.RuntimeException: Error in configuring object
............
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 48 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
...............
This exception occurs on the Spark standalone cluster but not on YARN. Compression is enabled in Hadoop's core-site.xml and mapred-site.xml with LZO as the codec, so files written/uploaded to HDFS are automatically LZO-compressed; Spark itself does not have the LZO codec jar and native library on its classpath, so it cannot find com.hadoop.compression.lzo.LzoCodec when reading those files.
Solution:
In spark-env.sh, add Hadoop's native library directory to SPARK_LIBRARY_PATH and the LZO jar to SPARK_CLASSPATH:
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:hadoop-2.7.2/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar
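After restarting the cluster, a quick sanity check (not part of the fix) is to try loading the codec class from spark-shell or from the driver; it throws ClassNotFoundException if the hadoop-lzo jar is still not visible:

// Succeeds only if hadoop-lzo is on the classpath.
Class.forName("com.hadoop.compression.lzo.LzoCodec")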
3. Compile error:
Error:(52, 27) overloaded method value / with alternatives:
(x: Double)Double <and>
(x: Float)Double <and>
(x: Long)Double <and>
(x: Int)Double <and>
(x: Char)Double <and>
(x: Short)Double <and>
(x: Byte)Double
cannot be applied to (AnyVal)
val rate = double / d
Source code:
import java.text.DecimalFormat

val result = ppp.map {
  case (flow, fc) =>
    val page = flow.split("->")(0)
    // rdd2's values were inferred as AnyVal (a mix of Int and Double), which triggers the error below
    val d = rdd2.getOrElse(page.toLong, Double.MaxValue)
    val double = fc.toDouble
    val rate = double / d // line 52: '/' cannot be applied to an AnyVal operand
    val formater = new DecimalFormat(".00%")
    (flow, formater.format(rate))
}
Cause: rdd2's values come from two different numeric types, effectively a Map[String, Int] merged with a Map[String, Double]. The compiler infers the common supertype of the values, so the combined map ends up as Map[String, AnyVal], and AnyVal has no / method, which is why double / d does not compile. Once every value is written as a Double, the compiler can infer Map[String, Double].
Solution: change rdd2's data to (String, Double) pairs so the value type is explicitly Double instead of AnyVal.
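A minimal sketch of the type-inference issue, independent of the actual rdd2 (the names m and m2 are made up for illustration):

// Mixing Int and Double values makes the compiler infer the supertype AnyVal.
val m = Map("a" -> 1, "b" -> 2.0)     // Map[String, AnyVal]
// val bad = 3.0 / m("a")             // does not compile: '/' cannot be applied to (AnyVal)
// Writing every value as a Double keeps the values usable in arithmetic.
val m2 = Map("a" -> 1.0, "b" -> 2.0)  // Map[String, Double]
val ok = 3.0 / m2("a")                // 3.0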