When you have a shiny Zeppelin application that runs smoothly and does what it is supposed to do, you start transferring your code into the Spark environment to use it in production. If you are a novice in the Hadoop ecosystem (like me), you might run into a couple of tasks that have to be solved before you can celebrate the project launch.
Basically, it can be broken down into easy chunks, each with a quick sketch after the list:
- Launching spark-submit with a test class.
- Adding a main class and Spark context initialization.
- Building a fat jar (which includes all the libraries).
- Launching the job with spark-submit.
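To check that spark-submit works on your cluster at all, it is easiest to run one of the example classes bundled with Spark. A minimal sketch, assuming a Spark 2.x layout; the jar path, Scala version, and Spark version below are assumptions, so adjust them to your installation:

```bash
# Smoke test: run the bundled SparkPi example on YARN.
# The examples jar name depends on your Spark/Scala versions.
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.0.jar 100
```

If this prints an approximation of pi, the cluster side is fine and you can move on to your own code.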
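Inside Zeppelin the Spark context is handed to you as `sc`, but a standalone jar has to create one itself. A minimal sketch of a main class, with `MyJob` as a hypothetical name; the master URL is deliberately left out of the code, since spark-submit supplies it:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical entry point: wrap your notebook logic in a main class.
object MyJob {
  def main(args: Array[String]): Unit = {
    // No setMaster here: the master comes from spark-submit.
    val conf = new SparkConf().setAppName("MyJob")
    val sc = new SparkContext(conf)
    try {
      // ...paste the logic from your Zeppelin paragraphs here...
      val count = sc.parallelize(1 to 1000).count()
      println(s"count = $count")
    } finally {
      sc.stop() // always release the context
    }
  }
}
```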
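For the fat jar, a common choice in a Scala project is the sbt-assembly plugin. A sketch under that assumption (project name and all versions are placeholders); note that Spark itself is marked `provided`, because the cluster already ships its own Spark libraries and bundling a second copy invites conflicts:

```scala
// project/plugins.sbt: enable sbt-assembly (plugin version is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt: hypothetical project settings
name := "my-job"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.0" % "provided"
```

Running `sbt assembly` should then leave a fat jar like `target/scala-2.11/my-job-assembly-0.1.jar`.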
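The launch itself then looks just like the test run, except it points at your own main class and the assembled jar. Again a sketch with the hypothetical names from above:

```bash
# Submit the fat jar to YARN; class and jar names match the earlier sketches.
spark-submit \
  --class MyJob \
  --master yarn \
  --deploy-mode cluster \
  target/scala-2.11/my-job-assembly-0.1.jar
```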