
Hadoop practice mini-project: tallying student scores and sorting by total score

1. Create a file containing the student scores

vi data
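The mapper below expects each line to hold five tab-separated fields: the student's name followed by four subject scores. A sample `data` file could be created like this (the names and scores are made up for illustration):

```shell
# Create a sample score file: name<TAB>score1<TAB>score2<TAB>score3<TAB>score4
printf 'zhangsan\t80\t90\t85\t95\n' >  data
printf 'lisi\t70\t95\t88\t60\n'     >> data
printf 'wangwu\t90\t72\t93\t81\n'   >> data
cat data
```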

2. Upload the file to HDFS

hdfs dfs -put data /hadoop/data

3. Implement the score-tallying code in Eclipse

(1) scoreSortEntity.class
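The original post omits the listing for this class. Based on how the mapper constructs it (a name plus four integer scores) and its use as the map output key, it must implement `WritableComparable` so the shuffle phase sorts by it. Below is a minimal sketch; the subject field names and the descending sort order are assumptions, not the author's original code:

```java
package demo2;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class scoreSortEntity implements WritableComparable<scoreSortEntity> {
	private String name;
	private int chinese, math, english, physics;

	// A no-arg constructor is required so Hadoop can deserialize instances
	public scoreSortEntity() {}

	public scoreSortEntity(String name, int chinese, int math, int english, int physics) {
		this.name = name;
		this.chinese = chinese;
		this.math = math;
		this.english = english;
		this.physics = physics;
	}

	public int getTotal() {
		return chinese + math + english + physics;
	}

	@Override
	public void write(DataOutput out) throws IOException {
		out.writeUTF(name);
		out.writeInt(chinese);
		out.writeInt(math);
		out.writeInt(english);
		out.writeInt(physics);
	}

	@Override
	public void readFields(DataInput in) throws IOException {
		name = in.readUTF();
		chinese = in.readInt();
		math = in.readInt();
		english = in.readInt();
		physics = in.readInt();
	}

	// Sort descending by total score, so the highest total comes first
	@Override
	public int compareTo(scoreSortEntity other) {
		return Integer.compare(other.getTotal(), this.getTotal());
	}

	@Override
	public String toString() {
		return chinese + "\t" + math + "\t" + english + "\t" + physics + "\t" + getTotal();
	}
}
```

Note that this sketch compiles only with the Hadoop client libraries on the classpath.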

(2) scoreSortReduce.class

package demo2;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Reducer;

public class scoreSortReduce extends Reducer<scoreSortEntity, Text, Text, scoreSortEntity> {
	@Override
	protected void reduce(scoreSortEntity scoreSortEntity, Iterable<Text> iterable, Context context)
			throws IOException, InterruptedException {
		try {
			// Swap key and value: emit the student's name as the key
			// and the score entity (already sorted by the shuffle) as the value
			context.write(iterable.iterator().next(), scoreSortEntity);
		} catch (Exception e) {
			Counter countPrint = context.getCounter("Reduce-OutValue", e.getMessage());
			countPrint.increment(1L);
		}
	}
}

(3) scoreSortMapper.class

package demo2;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Mapper;

public class scoreSortMapper extends Mapper<LongWritable, Text, scoreSortEntity, Text> {

	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		// Each input line is tab-separated: name, then four subject scores
		String[] fields = value.toString().split("\t");
		try {
			scoreSortEntity entity = new scoreSortEntity(
					fields[0],
					Integer.parseInt(fields[1]),
					Integer.parseInt(fields[2]),
					Integer.parseInt(fields[3]),
					Integer.parseInt(fields[4]));
			// The entity is the map output key, so records are sorted by it during the shuffle
			context.write(entity, new Text(fields[0]));
		} catch (Exception e) {
			Counter countPrint = context.getCounter("Map-Exception", e.toString());
			countPrint.increment(1L);
		}
	}
}

(4) scoreSortDemo.class

package demo2;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.io.Text;

public class scoreSortDemo {

	public static void main(String[] args) throws Exception {
		Configuration conf=new Configuration();
		Job job=Job.getInstance(conf);
		// Set the jar by the driver class
		job.setJarByClass(scoreSortDemo.class);
		// Set the Mapper and Reducer classes
		job.setMapperClass(scoreSortMapper.class);
		job.setReducerClass(scoreSortReduce.class);
		// Set the map output key/value types
		job.setMapOutputKeyClass(scoreSortEntity.class);
		job.setMapOutputValueClass(Text.class);
		// Set the reduce output key/value types
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(scoreSortEntity.class);
		Path inputPath = new Path("/hadoop/data");
		Path outputPath = new Path("/hadoop/dataout");
		// Delete the output path if it already exists
		outputPath.getFileSystem(conf).delete(outputPath, true);
		// Set the input and output paths
		FileInputFormat.setInputPaths(job, inputPath);
		FileOutputFormat.setOutputPath(job, outputPath);
		boolean waitForCompletion = job.waitForCompletion(true);
		System.exit(waitForCompletion ? 0 : 1);
	}

}

Package the finished code into a jar file.

For the packaging steps, see the CSDN blog post by 柿子镭: "Implementing a WordCount program in Eclipse and packaging it to run on a Hadoop cluster".

4. Run it on Hadoop

hadoop jar jar_004.jar demo2.scoreSortDemo

View the results:

hadoop fs -cat /hadoop/dataout/part-r-00000


