1. Converting between a pandas DataFrame and a Spark DataFrame
df_s = spark.createDataFrame(df_p)  # pandas -> Spark
df_p = df_s.toPandas()              # Spark -> pandas
import pandas as pd
import numpy as np

arr = np.arange(6).reshape(-1, 3)
df_p = pd.DataFrame(arr)
df_p.columns = ['a', 'b', 'c']
df_p                                # inspect the pandas DataFrame

df_s = spark.createDataFrame(df_p)
df_s.show()
df_s.collect()
df_s.toPandas()
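Note that both createDataFrame(df_p) and toPandas() funnel the full dataset through the driver, so they only suit data that fits in a single machine's memory. A minimal sketch, assuming Spark 3.x with PyArrow installed, that turns on Arrow to speed up the transfer (on Spark 2.x the config key is spark.sql.execution.arrow.enabled):

# Arrow accelerates the pandas <-> Spark conversion path
spark.conf.set('spark.sql.execution.arrow.pyspark.enabled', 'true')
df_s = spark.createDataFrame(df_p)  # pandas -> Spark, now via Arrow
df_p2 = df_s.toPandas()             # Spark -> pandas, now via Arrow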
2. Comparing DataFrames in Spark and pandas
http://www.lining0806.com/spark%E4%B8%8Epandas%E4%B8%ADdataframe%E5%AF%B9%E6%AF%94/
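As a quick illustration of the core difference (my own sketch, not taken from the linked article): pandas evaluates eagerly on in-memory data, while a Spark DataFrame builds a lazy plan that only executes when an action fires. Using df_p and df_s from section 1:

# pandas: eager, single-machine
df_p[df_p['a'] > 0]                    # the filtered frame is computed immediately

# Spark: lazy, distributed
filtered = df_s.filter(df_s['a'] > 0)  # only builds a query plan
filtered.show()                        # the action triggers execution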
3. Converting an RDD to a DataFrame
3.1 Inferring the schema via reflection
- Create an RDD with sc (the SparkContext)
- Map each element to a Row, giving column_name=value pairs
- Generate the DataFrame with spark.createDataFrame
- Inspect it with df.show() and df.printSchema()
from pyspark.sql import Row

people = spark.sparkContext.textFile('file:///usr/local/spark/examples/src/main/resources/people.txt') \
    .map(lambda line: line.split(',')) \
    .map(lambda w: Row(name=w[0], age=int(w[1])))
sPeople = spark.createDataFrame(people)
sPeople.createOrReplaceTempView('people')
personDF = spark.sql('select name,age from people where age>20')
personRDD = personDF.rdd.map(lambda p: "Name:" + p.name + "," + "Age:" + str(p.age))
personRDD.foreach(print)  # foreach runs on the executors; in local mode the output appears in the console
sPeople.show()
sPeople.printSchema()
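The same query can also be written with the DataFrame DSL instead of SQL; a small equivalent sketch on sPeople:

# equivalent to: select name,age from people where age>20
sPeople.filter(sPeople.age > 20).select('name', 'age').show()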
3.2 Defining the schema programmatically
- Build the "header" (the schema):
- fields = [StructField(field_name, StringType(), True), ...]
- schema = StructType(fields)
from pyspark.sql.types import *
from pyspark.sql import Row

schemaString = 'name age'
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split(' ')]
schema = StructType(fields)
- Build the "records" of the table:
- Create an RDD
- Map each element to a Row (values are positional here; the schema supplies the column names)
lines = spark.sparkContext.textFile('file:///usr/local/spark/examples/src/main/resources/people.txt')
part = lines.map(lambda w: w.split(","))
peoples = part.map(lambda p: Row(p[0], p[1].strip()))
peoples.collect()
- Glue the "header" and the "records" together:
- df = spark.createDataFrame(rdd, schema)
schemaPeople = spark.createDataFrame(peoples, schema)  # peoples is the positional-Row RDD built above, not people from 3.1
schemaPeople.show()
schemaPeople.printSchema()
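Since schemaString declares every field as StringType, age ends up as a string in schemaPeople. If the types matter, the schema can spell them out explicitly; a hedged sketch (typedSchema, typedPeople, and typedDF are illustrative names, reusing the part RDD from above):

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

typedSchema = StructType([
    StructField('name', StringType(), True),
    StructField('age', IntegerType(), True),
])
typedPeople = part.map(lambda p: Row(p[0], int(p[1].strip())))  # cast age so it matches IntegerType
typedDF = spark.createDataFrame(typedPeople, typedSchema)
typedDF.printSchema()  # age is now integer, not string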
4. Saving a DataFrame to a file
df.write.json(dir)  # dir is an output directory (one JSON file per partition), not a single file
schemaPeople.write.json('file:///home/hadoop/schema_out')
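write.json raises an error if the target directory already exists; a save mode avoids that, and spark.read.json loads the result back. A minimal sketch:

schemaPeople.write.mode('overwrite').json('file:///home/hadoop/schema_out')  # replace existing output
readBack = spark.read.json('file:///home/hadoop/schema_out')                 # read the JSON directory back
readBack.show()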