Spark SQL data sources (JSON files, Hive tables, Parquet files)
-- For JSON, see 524
Hive tables
scala> val hivecontext = new org.apache.spark.sql.hive.HiveContext(sc)
warning: one deprecation (since 2.0.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
22/06/24 14:29:08 WARN sql.SparkSession$Builder: Using an existing SparkSession; the static sql configurations will not take effect.
hivecontext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@7c089fbc

scala> hivecontext.sql("CREATE TABLE IF NOT EXISTS Demo(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' ")
22/06/24 14:31:36 WARN session.SessionState: metastore_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
res1: org.apache.spark.sql.DataFrame = []
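As the warning in the transcript notes, HiveContext has been deprecated since Spark 2.0.0; the same Hive access is available through a Hive-enabled SparkSession. A minimal sketch, assuming a standalone Scala program rather than the spark-shell (in the shell a session named spark already exists, and the appName below is an arbitrary example value):

import org.apache.spark.sql.SparkSession

// Build (or reuse) a SparkSession with Hive support; in Spark 2.x+ this replaces HiveContext.
val spark = SparkSession.builder()
  .appName("HiveSourceDemo")   // example name, not from the original post
  .enableHiveSupport()
  .getOrCreate()

// spark.sql() plays the same role as hivecontext.sql() in the transcript above.
spark.sql("SHOW DATABASES").show()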
Create the table:
scala> hivecontext.sql("CREATE TABLE IF NOT EXISTS mycdh.Demo(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' ")
res5: org.apache.spark.sql.DataFrame = []
The first statement created Demo in the default database; here it is created in our own Hive database instead. It is best to drop the earlier table first to avoid mixing the two up.
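For instance, the stray table can be removed with a plain DROP statement, a sketch to run only if that table is no longer needed:

hivecontext.sql("DROP TABLE IF EXISTS default.Demo")  // drops the Demo table created in the default database above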
scala> hivecontext.sql("LOAD DATA INPATH 'hdfs://cdh1:9013/user/hive/employee.txt' INTO TABLE mycdh.Demo") res12: org.apache.spark.sql.DataFrame = []
scala> val result = hivecontext.sql("FROM mycdh.Demo SELECT id,name") result: org.apache.spark.sql.DataFrame = [id: int, name: string] scala> result.show() +----+--------+ | id| name| +----+--------+ |1201| satish| |1202| krishna| |1203| amith| |1204| javed| |1205| prudvi| +----+--------+