Overcoming long Spark job runtime on small datasets

If you are dealing with relatively small datasets (< 1M entries) and you have to use Spark for some reason, a significant speedup can be achieved by tuning (lowering) the number of partitions.

Basically, setting the `spark.default.parallelism` parameter to the number of cores and `spark.sql.shuffle.partitions` to something like 20 (instead of the default 200) will give you a significant speedup, since Spark won’t waste time shuffling RDDs and generating a large number of tasks.
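For illustration, here is a minimal sketch of setting both parameters when building a `SparkSession` (Spark 2.x+); the app name and the core count of 8 are placeholders, not values from the original post:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session setup; adjust the values to your own cluster.
val spark = SparkSession.builder()
  .appName("small-dataset-job")
  .config("spark.default.parallelism", "8")     // roughly the number of available cores
  .config("spark.sql.shuffle.partitions", "20") // instead of the default 200
  .getOrCreate()
```

The same settings can also be passed at submit time, e.g. `--conf spark.sql.shuffle.partitions=20` on the `spark-submit` command line.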


Migrating code from Zeppelin to Spark

When you have a shiny Zeppelin application that runs smoothly and does what it is supposed to do, you start transferring your code into the Spark environment to use it in production. If you are a novice in the Hadoop environment (like me), you might encounter a couple of tasks that need to be solved before you can celebrate the project launch.

Basically, the migration can be broken down into easy chunks:

  1. Launching spark-submit with a test class.
  2. Adding a main class and Spark context initialization (see the sketch after this list).
  3. Building a fat jar (which includes all the libraries).
  4. Launching a job with spark-submit.
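As an example, here is a minimal sketch covering steps 2 and 4: a bare main class with Spark context initialization, plus the spark-submit call that launches it. The object name `TestJob`, the app name, the master, and the jar path are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical entry point; the object and app names are placeholders.
object TestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("test-job")
      .getOrCreate()        // master and resources are supplied by spark-submit

    spark.range(10).show()  // trivial action to confirm the context works

    spark.stop()
  }
}

// Launched with something like:
//   spark-submit --class TestJob --master yarn path/to/your-fat-jar.jar
```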

…