
Spark WordCount Example

Implementation with RDD operations

1. The text file

Preface
“The Forsyte Saga” was the title originally destined for that part of it which is called “The Man of Property”; 
and to adopt it for the collected chronicles of the Forsyte family has indulged the Forsytean tenacity that is in all of us. 
The word Saga might be objected to on the ground that it connotes the heroic and that there is little heroism in these pages.
 But it is used with a suitable irony; and, after all, this long tale, though it may deal with folk in frock coats, furbelows 

Method 1: without regex preprocessing

val file = spark.sparkContext.textFile("file:///F:\\dataset\\The_Man_of_Property.txt")
file.flatMap(line => line.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)
  .take(10)
  .foreach(println)
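
Because this version does no cleanup, splitting on spaces leaves punctuation attached to the tokens, so for example "irony;" and "irony" are counted as different words. A minimal illustration (the sample line is taken from the preface excerpt above):

// split(" ") alone keeps punctuation glued to the words
val sample = "But it is used with a suitable irony; and, after all,"
sample.split(" ").foreach(println)
// prints tokens such as "irony;" and "and," with the punctuation still attached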

Method 2: with a regular expression

val p = "[0-9a-zA-Z]+".r
file.flatMap(line => line.split(" "))
  .map(x => (p.findAllIn(x).mkString(""), 1))
  .reduceByKey(_ + _)
  .take(10)
  .foreach(println)

Result:
(welshed,1)
(mattered,1)
(someone,10)
(disregarded,2)
(jowl,2)
(bone,1)
(House,6)
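
What the regex step does: p.findAllIn(x).mkString("") keeps only the alphanumeric runs in each token and concatenates them, which strips the surrounding punctuation. A quick check (the sample tokens here are made up):

val p = "[0-9a-zA-Z]+".r
Seq("pages.", "furbelows,", "don't").foreach { token =>
  println(p.findAllIn(token).mkString(""))
}
// pages
// furbelows
// dont   <- note: the apostrophe is dropped, so "don't" becomes "dont"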

Method 3: sort the counts with sortBy()

sortBy(_._2, ascending = false)
sortBy sorts in ascending order by default;
ascending = false gives a descending sort.
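
A small demonstration of the default ordering, assuming the same spark session as above (the data here is made up):

val pairs = spark.sparkContext.parallelize(Seq(("a", 3), ("b", 1), ("c", 2)))
pairs.sortBy(_._2).collect().foreach(println)                    // ascending by default: (b,1) (c,2) (a,3)
pairs.sortBy(_._2, ascending = false).collect().foreach(println) // descending: (a,3) (c,2) (b,1)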

val p = "[0-9a-zA-Z]+".r
file.flatMap(line => line.split(" "))
  .map(x => (p.findAllIn(x).mkString(""), 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)
  .take(10)
  .foreach(println)
Result:
(the,5168)
(of,3425)
(to,2810)
(and,2686)
(a,2564)             

Spark SQL implementation

Import the implicit conversions (needed for toDF)

import spark.implicits._

// the input file is on HDFS
val file = spark.sparkContext.textFile("/data/The_Man_of_Property.txt")
val p = "[0-9a-zA-Z]+".r
val dataFrame = file.flatMap(line => line.split(" "))
  .map(x => (p.findAllIn(x).mkString(""), 1))
  .reduceByKey(_ + _)
  .toDF("word", "count")

dataFrame.createOrReplaceTempView("Property")

dataFrame.show(5)
+-----------+-----+                                                             
|       word|count|
+-----------+-----+
|    welshed|    1|
|   mattered|    1|
|    someone|   10|
|disregarded|    2|
|       jowl|    2|
+-----------+-----+
only showing top 5 rows

Using createOrReplaceTempView

createOrReplaceTempView creates a temporary view whose lifetime is tied to the [SparkSession] used to create the Dataset.
dataFrame.createOrReplaceTempView("Property")
    // the column names (word, count) were specified earlier via toDF
scala> val res = spark.sql("select * from Property ")
21/01/12 20:25:20 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
res: org.apache.spark.sql.DataFrame = [word: string, count: int]

scala> res.show(5)
+-----------+-----+
|       word|count|
+-----------+-----+
|    welshed|    1|
|   mattered|    1|
|    someone|   10|
|disregarded|    2|
|       jowl|    2|
+-----------+-----+
only showing top 5 rows
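
With the Property view registered, the top-10 ranking from the RDD examples can also be expressed directly in SQL. A sketch (the backticks around count simply avoid any clash with the COUNT function name):

val top10 = spark.sql("select word, `count` from Property order by `count` desc limit 10")
top10.show()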
