
Hadoop Streaming job failure error in Python

Starting from this guide, I successfully ran the sample exercise. But when I run my own MapReduce job, I get the following error:

ERROR streaming.StreamJob: Job not successful!
10/12/16 17:13:38 INFO streaming.StreamJob: killJob ...
Streaming Job Failed!

Error from the log file:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)

Mapper.py

#!/usr/bin/env python
import sys

i = 0  # line number, used as a line/tweet id

for line in sys.stdin:
    i += 1
    count = {}
    # count each word's occurrences within this line
    for word in line.strip().split():
        count[word] = count.get(word, 0) + 1
    # emit: word <TAB> line_id:count
    for word, weight in count.items():
        print '%s\t%s:%s' % (word, str(i), str(weight))
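The mapper's per-line logic can be unit-tested outside Hadoop. Below is a minimal Python 3 sketch of the same counting step, wrapped in a function (the name map_line is illustrative, not part of the original script):

```python
# Hypothetical Python 3 sketch of the mapper's per-line logic,
# handy for testing without a Hadoop cluster.
def map_line(line, line_no):
    """Return 'word<TAB>line_no:count' records for one input line."""
    counts = {}
    for word in line.strip().split():
        counts[word] = counts.get(word, 0) + 1
    return ["%s\t%s:%s" % (w, line_no, c) for w, c in counts.items()]

print(map_line("the cat the", 1))
```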

Reducer.py

#!/usr/bin/env python
import sys

o_tweet = "2323"  # sentinel: no real word will match this initial value
id_list = []
for line in sys.stdin:
    tweet, tw = line.strip().split()
    tweet_id, w = tw.split(':')
    w = int(w)
    if tweet == o_tweet:
        # same word as the previous record: pair it with every id seen so far
        for i, wt in id_list:
            print '%s:%s\t%s' % (tweet_id, i, str(w + wt))
        id_list.append((tweet_id, w))
    else:
        # new word: start a fresh id list
        id_list = [(tweet_id, w)]
        o_tweet = tweet
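The reducer's grouping logic can likewise be exercised locally on a small sorted input. This is a Python 3 sketch of the same algorithm as a function (reduce_lines is an illustrative name; the original reads from sys.stdin):

```python
# Hypothetical Python 3 sketch of the reducer: for consecutive records
# sharing a word, emit each new id paired with every earlier id,
# with the two counts summed.
def reduce_lines(lines):
    out = []
    o_word = None          # stands in for the "2323" sentinel
    id_list = []
    for line in lines:
        word, tw = line.strip().split()
        line_id, w = tw.split(':')
        w = int(w)
        if word == o_word:
            for i, wt in id_list:
                out.append("%s:%s\t%s" % (line_id, i, w + wt))
            id_list.append((line_id, w))
        else:
            id_list = [(line_id, w)]
            o_word = word
    return out

print(reduce_lines(["cat\t1:2", "cat\t2:1"]))
```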

[edit] Command used to run the job:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper /home/hadoop/mapper.py -file /home/hadoop/reducer.py -reducer /home/hadoop/reducer.py -input my-input/* -output my-output

The input is an arbitrary sequence of random sentences.

Thanks.

Solution:

Your -mapper and -reducer should be just the script name, not the full local path.

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper mapper.py -file /home/hadoop/reducer.py -reducer reducer.py -input my-input/* -output my-output

Files shipped with -file are distributed into the job's working folder, and each attempt task executes with that folder as ".". (FYI, if you ever want to ship another file, such as a lookup table, you can open it in Python as if it were in the same directory as your scripts while your script runs inside the M/R job.)
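As a sketch of that lookup-table remark: a file shipped with -file (here called "lookup.txt", a hypothetical name) lands in the task's working directory, so a plain relative open() is enough. The snippet below simulates the shipped file locally before reading it:

```python
# Simulate the shipped side file (in a real job, -file lookup.txt
# would place it in the task's working directory for us).
with open("lookup.txt", "w") as f:
    f.write("foo\t1\nbar\t2\n")

# Inside the streaming script, the same relative path just works.
lookup = {}
with open("lookup.txt") as f:
    for line in f:
        key, val = line.strip().split("\t")
        lookup[key] = int(val)

print(lookup)
```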

Also make sure you have run chmod a+x mapper.py and chmod a+x reducer.py so the task nodes can execute the scripts.
