Big Data: Spark RDD

This article walks through Spark's Transformation and Action operators, covering how RDDs are created, their characteristics, and common operations such as map, flatMap, reduceByKey, join, and groupByKey. Examples show how to use these operators for tasks such as WordCount, Cartesian product, union, and intersection. It also discusses advanced operators such as aggregate, cogroup, and checkpoint, and presents methods for finding the top two URLs per subject on a website and the two base stations where each user stays longest.


Spark operators fall into two categories:

One kind is Transformations: they are lazily evaluated and only record lineage metadata; the actual computation starts only when an Action triggers the job;
The other kind is Actions, which trigger the computation and return a result.

A job typically chains several operators and therefore produces multiple RDDs along the way.

RDD (Resilient Distributed Dataset) is the most basic data abstraction in Spark. It represents an immutable, partitioned collection whose elements can be computed on in parallel.
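
For example, a minimal spark-shell sketch of lazy evaluation (the file path is only illustrative):

val lines = sc.textFile("/root/words.txt")    // Transformation: nothing is read yet
val words = lines.flatMap(_.split(" "))       // Transformation: only lineage is recorded
words.count                                   // Action: triggers the actual computation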

1. Creating RDDs

Method 1: from a file in an HDFS-compatible file system. The RDD does not yet hold the data to be computed; it only records metadata.
Method 2: by parallelizing a Scala collection or array.
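
A minimal spark-shell sketch of both methods (the HDFS path is a placeholder for your own cluster):

// Method 1: from an HDFS-compatible file system; only metadata is recorded until an Action runs
val rddFromFile = sc.textFile("hdfs://node-1.itcast.cn:9000/wc")

// Method 2: parallelize a local Scala collection, here into 2 partitions
val rddFromCollection = sc.parallelize(List(1, 2, 3, 4, 5), 2)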

2. Characteristics of RDDs

1. An RDD consists of multiple partitions (a single machine can host several of them);
2. A compute function is applied to each partition;
3. RDDs form a chain of dependencies on one another;
4. Key-value RDDs have a partitioner;
5. Each RDD has preferred locations for its partitions (data locality), as the sketch below illustrates.
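
These properties can be inspected from the spark-shell; a small sketch (the exact output depends on your environment):

val pairs = sc.textFile("/root/words.txt").map((_, 1)).reduceByKey(_ + _)
pairs.partitions.length                        // number of partitions
pairs.dependencies                             // lineage: dependencies on the parent RDDs
pairs.partitioner                              // Some(HashPartitioner) for this key-value RDD
pairs.preferredLocations(pairs.partitions(0))  // preferred locations for the first partition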

3. RDD Exercises

val rdd1 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10))
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x,true)
val rdd3 = rdd2.filter(_>10)
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x+"",true)
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x.toString,true)



val rdd4 = sc.parallelize(Array("a b c", "d e f", "h i j"))
rdd4.flatMap(_.split(' ')).collect

val rdd5 = sc.parallelize(List(List("a b c", "a b b"), List("e f g", "a f g"), List("h i j", "a a b")))

rdd5.flatMap(_.flatMap(_.split(" "))).collect



# union: returns the union of two RDDs; the element types must match
val rdd6 = sc.parallelize(List(5,6,4,7))
val rdd7 = sc.parallelize(List(1,2,3,4))
val rdd8 = rdd6.union(rdd7)
rdd8.distinct.sortBy(x=>x).collect



# intersection: returns the elements common to both RDDs
val rdd9 = rdd6.intersection(rdd7)

val rdd1 = sc.parallelize(List(("tom", 1), ("jerry", 2), ("kitty", 3)))
val rdd2 = sc.parallelize(List(("jerry", 9), ("tom", 8), ("shuke", 7)))



#join
val rdd3 = rdd1.join(rdd2)
val rdd3 = rdd1.leftOuterJoin(rdd2)
val rdd3 = rdd1.rightOuterJoin(rdd2)



#groupByKey
val rdd3 = rdd1 union rdd2
rdd3.groupByKey
rdd3.groupByKey.map(x=>(x._1,x._2.sum))



# WordCount; the second version is less efficient because groupByKey shuffles every (word, 1) pair, while reduceByKey combines on the map side first
sc.textFile("/root/words.txt").flatMap(x=>x.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,false).collect
sc.textFile("/root/words.txt").flatMap(x=>x.split(" ")).map((_,1)).groupByKey.map(t=>(t._1, t._2.sum)).collect



#cogroup
val rdd1 = sc.parallelize(List(("tom", 1), ("tom", 2), ("jerry", 3), ("kitty", 2)))
val rdd2 = sc.parallelize(List(("jerry", 2), ("tom", 1), ("shuke", 2)))
val rdd3 = rdd1.cogroup(rdd2)
val rdd4 = rdd3.map(t=>(t._1, t._2._1.sum + t._2._2.sum))
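
For reference, a trace worked out by hand for these inputs (ordering within the groups may vary):

### rdd3: (tom,([1,2],[1])), (jerry,([3],[2])), (kitty,([2],[])), (shuke,([],[2]))
### rdd4: (tom,4), (jerry,5), (kitty,2), (shuke,2)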



# cartesian: Cartesian product
val rdd1 = sc.parallelize(List("tom", "jerry"))
val rdd2 = sc.parallelize(List("tom", "kitty", "shuke"))
val rdd3 = rdd1.cartesian(rdd2)



#spark action
val rdd1 = sc.parallelize(List(1,2,3,4,5), 2)



#collect
rdd1.collect



#reduce
val rdd2 = rdd1.reduce(_+_)



#count
rdd1.count



#top
rdd1.top(2)



#take
rdd1.take(2)



#first (similar to take(1))
rdd1.first



#takeOrdered
rdd1.takeOrdered(3)

4. Advanced Operators

map operates on each element; mapPartitions operates on each whole partition.
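
A small sketch contrasting the two (illustrative only):

val rdd = sc.parallelize(List(1,2,3,4,5,6,7,8,9), 2)
rdd.map(_ * 10).collect                          // the function runs once per element
rdd.mapPartitions(it => it.map(_ * 10)).collect  // the function runs once per partition, receiving an iterator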

mapPartitionsWithIndex: exposes each partition's index together with the values in that partition

val func = (index: Int, iter: Iterator[Int]) => {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
val rdd1 = sc.parallelize(List(1,2,3,4,5,6,7,8,9), 2)
rdd1.mapPartitionsWithIndex(func).collect



aggregate: aggregates the elements of an RDD

def func1(index: Int, iter: Iterator[Int]) : Iterator[String] = {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
val rdd1 = sc.parallelize(List(1,2,3,4,5,6,7,8,9), 2)
rdd1.mapPartitionsWithIndex(func1).collect

### aggregate is an action. The first argument is the initial (zero) value; the second argument list takes two functions: the first merges the elements within each partition, the second merges the per-partition results into one value.

### 0 + (0+1+2+3+4) + (0+5+6+7+8+9) = 45
rdd1.aggregate(0)(_+_, _+_)
rdd1.aggregate(0)(math.max(_, _), _ + _)

### Partition 0: max(5, 1, 2, 3, 4) = 5; partition 1: max(5, 5, 6, 7, 8, 9) = 9; final merge: 5 + 5 + 9 = 19
rdd1.aggregate(5)(math.max(_, _), _ + _)

val rdd2 = sc.parallelize(List("a","b","c","d","e","f"),2)
def func2(index: Int, iter: Iterator[String]) : Iterator[String] = {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
rdd2.aggregate("")(_ + _, _ + _)
rdd2.aggregate("=")(_ + _, _ + _)

val rdd3 = sc.parallelize(List("12","23","345","4567"),2)
rdd3.aggregate("")((x,y) => math.max(x.length, y.length).toString, (x,y) => x + y)

val rdd4 = sc.parallelize(List("12","23","345",""),2)
rdd4.aggregate("")((x,y) => math.min(x.length, y.length).toString, (x,y) => x + y)

val rdd5 = sc.parallelize(List("12","23","","345"),2)
rdd5.aggregate("")((x,y) => math.min(x.length, y.length).toString, (x,y) => x + y)



aggregateByKey: aggregates the values of each key, first within each partition and then across partitions

val pairRDD = sc.parallelize(List(("cat",2), ("cat", 5), ("mouse", 4), ("cat", 12), ("dog", 12), ("mouse", 2)), 2)
def func2(index: Int, iter: Iterator[(String, Int)]) : Iterator[String] = {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
pairRDD.mapPartitionsWithIndex(func2).collect
pairRDD.aggregateByKey(0)(math.max(_, _), _ + _).collect
pairRDD.aggregateByKey(100)(math.max(_, _), _ + _).collect



checkpoint

sc.setCheckpointDir("hdfs://node-1.itcast.cn:9000/ck")
val rdd = sc.textFile("hdfs://node-1.itcast.cn:9000/wc").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_)
rdd.checkpoint
rdd.isCheckpointed
rdd.count
rdd.isCheckpointed
rdd.getCheckpointFile



coalesce, repartition

val rdd1 = sc.parallelize(1 to 10, 10)
val rdd2 = rdd1.coalesce(2, false)
rdd2.partitions.length
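
coalesce(2, false) shrinks to 2 partitions without a shuffle; repartition always shuffles and can also increase the partition count. A quick sketch:

val rdd3 = rdd1.repartition(5)   // equivalent to coalesce(5, shuffle = true)
rdd3.partitions.length           // 5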



collectAsMap : Map(b -> 2, a -> 1)
val rdd = sc.parallelize(List(("a", 1), ("b", 2)))
rdd.collectAsMap



combineByKey : has the same effect as reduceByKey here

### The first argument (createCombiner) takes the first value of each key as-is; the second (mergeValue) is a function that merges values within a partition; the third (mergeCombiners) is a function that merges the per-partition results.

### x is the first value seen for each key in each partition: (hello,1)(hello,1)(good,1) --> (hello,(1,1)), (good,(1)) --> x corresponds to hello's first 1 and good's 1
val rdd1 = sc.textFile("hdfs://master:9000/wordcount/input/").flatMap(_.split(" ")).map((_, 1))
val rdd2 = rdd1.combineByKey(x => x, (a: Int, b: Int) => a + b, (m: Int, n: Int) => m + n)
rdd1.collect
rdd2.collect

### When the input directory contains 3 files (3 blocks; 3 files does not necessarily mean 3 blocks), each key gets one extra 10 per partition, i.e. up to 3 extra 10s
val rdd3 = rdd1.combineByKey(x => x + 10, (a: Int, b: Int) => a + b, (m: Int, n: Int) => m + n)
rdd3.collect

val rdd4 = sc.parallelize(List("dog","cat","gnu","salmon","rabbit","turkey","wolf","bear","bee"), 3)
val rdd5 = sc.parallelize(List(1,1,2,2,2,1,2,2,2), 3)
val rdd6 = rdd5.zip(rdd4)
val rdd7 = rdd6.combineByKey(List(_), (x: List[String], y: String) => x :+ y, (m: List[String], n: List[String]) => m ++ n)
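
For reference, under these inputs rdd7 collects to roughly the following (ordering inside the lists may vary):

### (1, List(dog, cat, turkey)), (2, List(gnu, salmon, rabbit, wolf, bear, bee))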



countByKey

val rdd1 = sc.parallelize(List(("a", 1), ("b", 2), ("b", 2), ("c", 2), ("c", 1)))
rdd1.countByKey
rdd1.countByValue



filterByRange

val rdd1 = sc.parallelize(List(("e", 5), ("c", 3), ("d", 4), ("c", 2), ("a", 1)))
val rdd2 = rdd1.filterByRange("b", "d")
rdd2.collect



flatMapValues : Array((a,1), (a,2), (b,3), (b,4))
val rdd3 = sc.parallelize(List(("a", "1 2"), ("b", "3 4")))
val rdd4 = rdd3.flatMapValues(_.split(" "))
rdd4.collect



foldByKey

val rdd1 = sc.parallelize(List("dog", "wolf", "cat", "bear"), 2)
val rdd2 = rdd1.map(x => (x.length, x))
val rdd3 = rdd2.foldByKey("")(_+_)

val rdd = sc.textFile("hdfs://node-1.itcast.cn:9000/wc").flatMap(_.split(" ")).map((_, 1))
rdd.foldByKey(0)(_+_)



foreachPartition

val rdd1 = sc.parallelize(List(1, 2, 3, 4, 5, 6, 7, 8, 9), 3)
rdd1.foreachPartition(x => println(x.reduce(_ + _)))



keyBy : uses the result of the supplied function as the key
val rdd1 = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val rdd2 = rdd1.keyBy(_.length)
rdd2.collect



keys values
val rdd1 = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val rdd2 = rdd1.map(x => (x.length, x))
rdd2.keys.collect
rdd2.values.collect

5. Top Two URLs per Subject on a Website

Method 1: if the data volume is large, the data can stay on disk rather than being pulled into memory all at once.

package cn.itcast.spark.day2
import java.net.URL
import org.apache.spark.{SparkConf, SparkContext}
	
    /**
	  * For the specified sites, take the top three URLs by click count
	  * Created by root on 2016/5/16.
	  */
	object AdvUrlCount {
	
	  def main(args: Array[String]) {
	
	    // Load the rules (subject hosts) from the database
	    val arr = Array("java.itcast.cn", "php.itcast.cn", "net.itcast.cn")
	
	    val conf = new SparkConf().setAppName("AdvUrlCount").setMaster("local[2]")
	    val sc = new SparkContext(conf)
	    // rdd1 splits each line; each tuple holds (URL, 1)
	    val rdd1 = sc.textFile("c://itcast.log").map(line => {
	      val f = line.split("\t")
	      (f(1), 1)
	    })
	    val rdd2 = rdd1.reduceByKey(_ + _)
	
	    val rdd3 = rdd2.map(t => {
	      val url = t._1
	      val host = new URL(url).getHost
	      (host, url, t._2)
	    })
	
	    //println(rdd3.collect().toBuffer)
	
	//    val rddjava = rdd3.filter(_._1 == "java.itcast.cn")
	//    val sortdjava = rddjava.sortBy(_._3, false).take(3)
	//    val rddphp = rdd3.filter(_._1 == "php.itcast.cn")
	
	    for (ins <- arr) {
	      val rdd = rdd3.filter(_._1 == ins)
	      val result= rdd.sortBy(_._3, false).take(3)
	      // Store the results in the database via JDBC
	      // id, subject, URL, count, access date
	      println(result.toBuffer)
	    }
	
	    //println(sortdjava.toBuffer)
	    sc.stop()
	
	  }
}

Method 2: use a custom partitioner so that the same subject always lands in the same partition.

package cn.itcast.spark.day3
import java.net.URL
import org.apache.spark.{HashPartitioner, Partitioner, SparkConf, SparkContext}
import scala.collection.mutable
	
	/**
	  * Created by root on 2016/5/18.
	  */
	object UrlCountPartition {
	
	  def main(args: Array[String]) {
	
	    val conf = new SparkConf().setAppName("UrlCountPartition").setMaster("local[2]")
	    val sc = new SparkContext(conf)
	
	    // rdd1 splits each line; each tuple holds (URL, 1)
	    val rdd1 = sc.textFile("c://itcast.log").map(line => {
	      val f = line.split("\t")
	      (f(1), 1)
	    })
	    val rdd2 = rdd1.reduceByKey(_ + _)
	
	    val rdd3 = rdd2.map(t => {
	      val url = t._1
	      val host = new URL(url).getHost
	      (host, (url, t._2))
	    })
	
	    val ints = rdd3.map(_._1).distinct().collect()
	
	    val hostParitioner = new HostParitioner(ints)
	
	//  val rdd4 = rdd3.partitionBy(new HashPartitioner(ints.length))
	
	    val rdd4 = rdd3.partitionBy(hostParitioner).mapPartitions(it => {
	      it.toList.sortBy(_._2._2).reverse.take(2).iterator
	    })
	
	    rdd4.saveAsTextFile("c://out4")
	
	
	    //println(rdd4.collect().toBuffer)
	    sc.stop()
	
	  }
	}
	
	/**
	  * Decides which partition each key goes to
	  * @param ins
	  */
	class HostParitioner(ins: Array[String]) extends Partitioner {
	
	  val parMap = new mutable.HashMap[String, Int]()
	  var count = 0
	  for(i <- ins){
	    parMap += (i -> count)
	    count += 1
	  }
	
	  override def numPartitions: Int = ins.length
	
	  override def getPartition(key: Any): Int = {
	    parMap.getOrElse(key.toString, 0)
	  }
}

Method 3: with a large data volume this may run out of memory.

package cn.itcast.spark.day2
import java.net.URL
import org.apache.spark.{SparkConf, SparkContext}
	
	/**
	  * Take the top three URLs by click count for each site
	  * Created by root on 2016/5/16.
	  */
	object UrlCount {
	
	  def main(args: Array[String]) {
	    val conf = new SparkConf().setAppName("UrlCount").setMaster("local[2]")
	    val sc = new SparkContext(conf)
	
	    // rdd1 splits each line; each tuple holds (URL, 1)
	    val rdd1 = sc.textFile("D:\\itcast-大数据\\day29\\itcast.log").map(line => {
	      val f = line.split("\t")
	      (f(1), 1)
	    })
	    val rdd2 = rdd1.reduceByKey(_+_)
	
	    val rdd3 = rdd2.map(t => {
	      val url = t._1
	      val host = new URL(url).getHost
	      (host, url, t._2)
	    })
	
	    val rdd4 = rdd3.groupBy(_._1).mapValues(it => {
	      it.toList.sortBy(_._3).reverse.take(3)
	    })
	
	    println(rdd4.collect().toBuffer)
	    sc.stop()
	
	  }
}

6. The Two Base Stations Where Each User Stayed Longest

Method 1:

package cn.itcast.spark.day2
import org.apache.spark.{SparkConf, SparkContext}
	
	/**
	  * From the logs, compute for each user the top 2 base stations by total dwell time
	  *   1. Using "mobile_station" as the unique key, compute the interval for each enter/leave pair, returning (mobile_station, interval)
	  *   2. With "mobile_station" as the key, sum the intervals per station: (mobile_station, totalTime)
	  *   3. (mobile_station, totalTime) --> (mobile, station, totalTime)
	  *   4. (mobile, station, totalTime) --> groupBy(mobile).mapValues(sort by time, take top 2) --> (mobile -> ((m,s,t),(m,s,t)))
	  * Created by root on 2016/5/16.
	  */
	object UserLocation {
	  def main(args: Array[String]) {
	    val conf = new SparkConf().setAppName("ForeachDemo").setMaster("local[2]")
	    val sc = new SparkContext(conf)
	    //sc.textFile("c://bs_log").map(_.split(",")).map(x => (x(0), x(1), x(2), x(3)))
	    val mbt = sc.textFile("c://bs_log").map( line => {
	      val fields = line.split(",")
	      val eventType = fields(3)
	      val time = fields(1)
	      val timeLong = if(eventType == "1")  -time.toLong else time.toLong
	      (fields(0) + "_"  + fields(2), timeLong)
	    })
	    //println(mbt.collect().toBuffer)
	    //(18611132889_9F36407EAD0629FC166F14DDE7970F68,54000)
	    val rdd1 = mbt.groupBy(_._1).mapValues(_.foldLeft(0L)(_ + _._2))
	    val rdd2 = rdd1.map( t => {
	      val mobile_bs = t._1
	      val mobile = mobile_bs.split("_")(0)
	      val lac = mobile_bs.split("_")(1)
	      val time = t._2
	      (mobile, lac, time)
	    })
	    val rdd3 = rdd2.groupBy(_._1)
	    //ArrayBuffer((18688888888,List((18688888888,16030401EAFB68F1E3CDF819735E1C66,87600), (18688888888,9F36407EAD0629FC166F14DDE7970F68,51200))), (18611132889,List((18611132889,16030401EAFB68F1E3CDF819735E1C66,97500), (18611132889,9F36407EAD0629FC166F14DDE7970F68,54000))))
	    val rdd4 = rdd3.mapValues(it => {
	      it.toList.sortBy(_._3).reverse.take(2)
	    })
	    println(rdd4.collect().toBuffer)
	    sc.stop()
	  }
}

Method 2 (recommended):

package cn.itcast.spark.day2
import org.apache.spark.{SparkConf, SparkContext}
	
	/**
	  * ((fields(0),fields(2)), timeLong) --> reduceByKey(_+_).map --> (lac, (mobile, time))
	  *     --> rdd1.join(rdd2).map --> (mobile, lac, time, x, y)
	  *     --> groupBy(mobile).mapValues(sort by time, take top 2) --> (mobile -> ((m,s,t),(m,s,t)))
	  * Created by root on 2016/5/16.
	  */
	object AdvUserLocation {
	
	  def main(args: Array[String]) {
	    val conf = new SparkConf().setAppName("AdvUserLocation").setMaster("local[2]")
	    val sc = new SparkContext(conf)
	    val rdd0 = sc.textFile("c://bs_log").map( line => {
	      val fields = line.split(",")
	      val eventType = fields(3)
	      val time = fields(1)
	      val timeLong = if(eventType == "1")  -time.toLong else time.toLong
	      ((fields(0),fields(2)), timeLong)
	    })
	    val rdd1 = rdd0.reduceByKey(_+_).map(t => {
	      val mobile = t._1._1
	      val lac = t._1._2
	      val time = t._2
	      (lac, (mobile, time))
	    })
	    val rdd2 = sc.textFile("c://lac_info.txt").map(line => {
	      val f = line.split(",")
	      // (base station ID, (longitude, latitude))
	      (f(0), (f(1), f(2)))
	    })
	    //rdd1.join(rdd2)-->(CC0710CC94ECC657A8561DE549D940E0,((18688888888,1300),(116.303955,40.041935)))
	    val rdd3 = rdd1.join(rdd2).map(t => {
	      val lac = t._1
	      val mobile = t._2._1._1
	      val time = t._2._1._2
	      val x = t._2._2._1
	      val y = t._2._2._2
	      (mobile, lac, time, x, y)
	    })
	    // rdd4: grouped by mobile number
	    val rdd4 = rdd3.groupBy(_._1)
	    val rdd5 = rdd4.mapValues(it => {
	      it.toList.sortBy(_._3).reverse.take(2)
	    })
	        println(rdd1.join(rdd2).collect().toBuffer)
	    //    println(rdd5.collect().toBuffer)
	    rdd5.saveAsTextFile("c://out")
	    sc.stop()
	  }
	}