Spark Fundamentals II

Spark-Scala1

Building on your foundational knowledge of Spark, take this opportunity to move your skills to the next level. With a focus on Spark Resilient Distributed Dataset (RDD) operations, this course exposes you to concepts that are critical to your success in this field.

About this Course

Expand your knowledge of the concepts discussed in Spark Fundamentals I with a focus on RDDs (Resilient Distributed Datasets). RDDs are the main abstraction Spark provides to enable parallel processing across the nodes of a Spark cluster.
  • Get in-depth knowledge of Spark’s architecture and how data is distributed and tasks are parallelized.
  • Learn how to optimize your data for joins using Spark’s memory caching.
  • Learn how to use the more advanced operations available in the API.
  • The lab exercises for this course are performed exclusively in the cloud using a notebook interface.
IBM Data Science Experience provides you with Jupyter notebooks that are already connected to Spark and support Python, R, and Scala, so that you can start creating your Spark projects and collaborating with other data scientists. When you sign up, you get free access to Data Science Experience and all other IBM services for 30 days. Start now and take advantage of this offer.
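
To give a concrete feel for the RDD abstraction described above, here is a minimal sketch of creating an RDD and running a computation in parallel. It assumes a SparkContext named sc is already available, as it is in a Spark-connected notebook; the data and partition count are illustrative only.

    // A minimal sketch, assuming a SparkContext `sc` is already in scope (as in a notebook).
    val numbers = sc.parallelize(1 to 100000, numSlices = 8) // distribute the data across 8 partitions

    // Transformations such as map are lazy and run in parallel, one task per partition.
    val squares = numbers.map(n => n.toLong * n)

    // Actions such as reduce trigger the distributed computation and return a result to the driver.
    val total = squares.reduce(_ + _)
    println(s"Sum of squares: $total")

Each partition of the RDD is processed by a separate task, which is what lets the same code scale from a single notebook to a full cluster.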

Course Syllabus

  • Module 1 – Introduction to Notebooks
    1. Understand how to use Zeppelin in your Spark projects
    2. Identify the various notebooks you can use with Spark
  • Module 2 – Spark RDD Architecture
    1. Understand how Spark generates RDDs
    2. Manage partitions to improve RDD performance (a partitioning and shuffling sketch follows this syllabus)
  • Module 3 – Optimizing Transformations and Actions
    1. Use advanced Spark RDD operations
    2. Identify what operations cause shuffling
  • Module 4 – Caching and Serialization
    1. Understand how and when to cache RDDs (see the caching sketch after this syllabus)
    2. Understand storage levels and their uses
  • Module 5 – Development and Testing
    1. Understand how to use sbt to build Spark projects (a minimal build.sbt sketch follows this syllabus)
    2. Understand how to use Eclipse and IntelliJ for Spark development
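
As a preview of Modules 2 and 3, the sketch below shows how partitioning interacts with shuffling in the RDD API. The input path and partition count are assumptions made for illustration; the pattern itself (preferring reduceByKey over groupByKey and setting an explicit partitioner) is the point.

    import org.apache.spark.HashPartitioner

    // Hypothetical word-count input; the file path is an assumption.
    val pairs = sc.textFile("data/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))

    // reduceByKey combines values within each partition before shuffling,
    // so far less data crosses the network than with groupByKey followed by a sum.
    val counts = pairs.reduceByKey(_ + _)

    // partitionBy places keys deterministically, so later key-based operations
    // (joins, further reduceByKey calls) can reuse the layout and avoid extra shuffles.
    val partitioned = counts.partitionBy(new HashPartitioner(16))
    println(s"Number of partitions: ${partitioned.getNumPartitions}")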
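
Module 4's topics can be previewed with the following sketch of caching an RDD that is reused across several joins. The file names and column layout are hypothetical; what matters is that persist() with an explicit storage level keeps the parsed data around instead of recomputing it for every action.

    import org.apache.spark.storage.StorageLevel

    // Hypothetical lookup table: (userId, name). File paths and columns are assumptions.
    val users = sc.textFile("data/users.csv")
      .map(_.split(","))
      .map(f => (f(0), f(1)))

    // Hypothetical event log: (userId, eventType).
    val events = sc.textFile("data/events.csv")
      .map(_.split(","))
      .map(f => (f(0), f(1)))

    // cache() is shorthand for persist(StorageLevel.MEMORY_ONLY); persist() lets you choose a level,
    // e.g. serialized blocks that spill to disk when memory is tight.
    val cachedUsers = users.persist(StorageLevel.MEMORY_AND_DISK_SER)

    // Both joins reuse the cached RDD instead of re-reading and re-parsing the file.
    val clicks = events.filter(_._2 == "click").join(cachedUsers)
    val views  = events.filter(_._2 == "view").join(cachedUsers)
    println(s"clicks = ${clicks.count()}, views = ${views.count()}")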
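
For Module 5, a minimal build.sbt is sketched below. The project name and the Scala and Spark versions are assumptions; match them to your environment.

    // build.sbt -- a minimal sketch of an sbt definition for a Spark application.
    name := "spark-fundamentals-labs"
    version := "0.1.0"
    scalaVersion := "2.11.12"

    // "provided": the cluster supplies Spark at runtime, so it is excluded from the packaged jar.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"

Running sbt package from the project root then produces a jar that can be submitted to a cluster with spark-submit.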

GENERAL INFORMATION

  • This course is self-paced.
  • It can be taken at any time.
  • It can be audited as many times as you wish.

RECOMMENDED SKILLS PRIOR TO TAKING THIS COURSE

  • Basic understanding of Apache Hadoop and Big Data.
  • Completion of the Spark Fundamentals I course on BDU.
  • Basic understanding of the Scala, Python, R, or Java programming languages.
  • Basic Linux Operating System knowledge.

REQUIREMENTS

  • None