Cloud Computing - RDD SPARK


Lesson #1483: reduceByKey Function

Spark reduceByKey Function

In Spark, the reduceByKey function is a commonly used transformation that aggregates data by key. It takes a dataset of key-value pairs (K, V) as input, combines the values belonging to each key, and produces a dataset of (K, V) pairs as the result.
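The semantics described above can be sketched without Spark at all, on a plain Scala collection: group the pairs by key, keep the values, and merge them with a binary function. The sample pairs below are illustrative, not taken from the lesson.

```scala
// reduceByKey-style aggregation on a local collection (Scala 2.13+):
// group pairs by key (_._1), project out the values (_._2),
// and merge the values for each key with addition.
val pairs = Seq(("a", 1), ("b", 2), ("a", 3))
val merged = pairs.groupMapReduce(_._1)(_._2)(_ + _)
// merged == Map("a" -> 4, "b" -> 2)
```

In Spark the same merge function is applied first within each partition and then across partitions, which is why it must be associative.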

Example of the reduceByKey function
  • In this example, we aggregate the values by key.

To open the Spark shell in Scala mode, run the following command.
$ spark-shell

  • Create an RDD using a parallelized collection.

scala> val data = sc.parallelize(Array(("C",3),("A",1),("B",4),("A",2),("B",5)))

Now, we can read the generated result by using the following command.
scala> data.collect

  • Apply the reduceByKey() function to aggregate the values.

scala> val reducefunc = data.reduceByKey((value, x) => (value + x))

  • Now, we can read the generated result by using the following command.

scala> reducefunc.collect
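The whole shell session can also be mirrored with an explicit fold over a plain Scala collection, which makes the per-key accumulation visible. The sample pairs here are illustrative; no Spark cluster is assumed.

```scala
// Manual equivalent of reduceByKey: fold the pairs into a Map,
// adding each value to the running total for its key.
val data = Seq(("C", 3), ("A", 1), ("B", 4), ("A", 2), ("B", 5))
val reduced = data.foldLeft(Map.empty[String, Int]) {
  case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0) + v)
}
// reduced("A") == 3, reduced("B") == 9, reduced("C") == 3
```

Unlike this sequential fold, Spark performs the merge in parallel across partitions, so the function passed to reduceByKey must be associative (and commutative for deterministic results).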