
Hadoop error: No job jar file set. User classes may not be found. See Job or Job#setJar(String)

The main driver class was created as follows:

public class JobMain extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // 1: create the Job object
        Job job = Job.getInstance(super.getConf(), "mapreduce_sort");
        ...
    }
}
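For context, super.getConf() only carries the cluster configuration when the class is launched through ToolRunner. The original post does not show the entry point; a minimal sketch of one, assuming it lives inside JobMain, would be:

    // Requires org.apache.hadoop.conf.Configuration and org.apache.hadoop.util.ToolRunner.
    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options (-D, -files, ...) and supplies the
        // Configuration that super.getConf() returns inside run().
        int exitCode = ToolRunner.run(new Configuration(), new JobMain(), args);
        System.exit(exitCode);
    }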

It ran fine locally in standalone mode, but failed once submitted to the cluster, reporting the following error:

20/12/19 18:09:34 INFO mapreduce.Job: Job job_1608426890333_0001 failed with state FAILED due to: Application application_1608426890333_0001 failed 2 times due to AM Container for appattempt_1608426890333_0001_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://node01:8088/cluster/app/application_1608426890333_0001Then, click on links to logs of each attempt.
Diagnostics: Could not obtain block: BP-1773037379-192.168.177.101-1607232537994:blk_1073741875_1051 file=/tmp/hadoop-yarn/staging/root/.staging/job_1608426890333_0001/job.splitMetainfo
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1773037379-192.168.177.101-1607232537994:blk_1073741875_1051 file=/tmp/hadoop-yarn/staging/root/.staging/job_1608426890333_0001/job.splitMetainfo
        at org.apache.hadoop.hdfs.DFSInputStream.chooseDatanode(DFSInputStream.java:975)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:632)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:874)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:926)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:86)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:60)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:120)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:267)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Failing this attempt. Failing the application.
20/12/19 18:09:34 INFO mapreduce.Job: Counters: 0

After looking into it, the fix was to change how the Job object is instantiated:

Job job = Job.getInstance(super.getConf(), JobMain.class.getSimpleName());

With that change, the job ran on the cluster without any problem.
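For completeness: the warning in the title ("No job jar file set ... See Job or Job#setJar(String)") means Hadoop does not know which jar to ship to the cluster, and the documented way to register it is Job#setJar or Job#setJarByClass. A minimal sketch of the corrected run() that also sets the jar explicitly (the mapper/reducer wiring stays elided, as in the original snippet):

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(super.getConf(), JobMain.class.getSimpleName());
        // Register the jar containing the job classes, so the cluster-side
        // containers can locate the user classes at runtime.
        job.setJarByClass(JobMain.class);
        // ... mapper, reducer, input and output paths elided ...
        return job.waitForCompletion(true) ? 0 : 1;
    }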

