Sample input file people.txt (one record per line; records carry one, two, or three comma-separated fields):

lyzx1,19
lyzx2,20
lyzx3,21
lyzx4,22
lyzx5,23
lyzx6,24
lyzx7,25
托塔天王
lyzx7,25,哈哈
package com.zxwa.live.process.test

import org.apache.spark.{SparkConf, SparkContext}

object ScalaTs {
  def main(args: Array[String]): Unit = {
    val sparkContext = new SparkContext(
      new SparkConf().setAppName("ProductSalesstat").setMaster("local[*]"))
    val rdd = sparkContext.textFile("E:\\Data\\LIVE-DATA-SPARK\\src\\main\\resources\\people.txt")
    rdd.map(line => line.split(","))
      // Normalize each record by field count: 1 field -> String, 2 -> pair, 3 -> triple.
      // The resulting RDD is heterogeneous, so its element type is Any.
      .map(rt =>
        if (rt.length == 1) rt(0)
        else if (rt.length == 2) (rt(0), rt(1))
        else (rt(0), rt(1), rt(2)))
      // Pattern-match on the runtime type of each element: a bare String,
      // a (String, String) pair, or anything else (here, the triples).
      .map {
        case one: String                 => "one:" + one
        case (name: String, age: String) => ("name:" + name, "age:" + age)
        case _                           => ("_name", "_age", "_")
      }
      .foreach(println)
    sparkContext.stop()
  }
}
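The Spark-free core of this trick can be sketched with plain Scala collections, which makes the type-based dispatch easier to see. This is a minimal sketch: the object name MatchSketch and the inline sample lines are assumptions for illustration, not part of the original project.

```scala
object MatchSketch {
  def main(args: Array[String]): Unit = {
    // A few records mirroring the people.txt shapes: 1, 2, and 3 fields.
    val lines = Seq("lyzx1,19", "托塔天王", "lyzx7,25,哈哈")

    // Same normalization as the Spark job: the Seq's element type widens to Any
    // because the branches return a String, a Tuple2, or a Tuple3.
    val parsed: Seq[Any] = lines
      .map(_.split(","))
      .map { rt =>
        if (rt.length == 1) rt(0)
        else if (rt.length == 2) (rt(0), rt(1))
        else (rt(0), rt(1), rt(2))
      }

    // Each case tests the runtime type; the catch-all handles the triples.
    parsed.foreach {
      case one: String                 => println("one:" + one)
      case (name: String, age: String) => println(("name:" + name, "age:" + age))
      case _                           => println(("_name", "_age", "_"))
    }
  }
}
```

Note the trade-off: widening to Any loses compile-time type safety, and the compiler may warn that matching tuple patterns against Any cannot be checked at compile time. A sealed trait with case classes (e.g. one per record shape) would make the match exhaustive and statically checked.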