Hortonworks HDP Developer Enterprise Apache Spark I
About This Course
This course is designed as an entry point for developers who need to create applications to analyze Big Data stored in Apache Hadoop using Spark.
Topics include:
- An overview of the Hortonworks Data Platform (HDP), including HDFS and YARN
- Using Spark Core APIs for interactive data exploration
- Spark SQL and DataFrame operations
- Spark Streaming and DStream operations
- Data visualization, reporting, and collaboration
- Performance monitoring and tuning
- Building and deploying Spark applications
- Introduction to the Spark Machine Learning Library
Audience Profile
Software engineers who want to develop in-memory applications for time-sensitive, highly iterative workloads in an enterprise HDP environment.
Prerequisites
Students should be familiar with programming principles and have previous experience in software development using either Python or Scala. Previous experience with data streaming, SQL, and HDP is also helpful, but not required.
At Course Completion
Upon course completion, students will be able to:
- Describe Hadoop, HDFS, YARN, and the HDP ecosystem
- Describe Spark use cases
- Explore and manipulate data using Zeppelin
- Explore and manipulate data using a Spark REPL
- Explain the purpose and function of RDDs
- Employ functional programming practices
- Perform Spark transformations and actions
- Work with Pair RDDs
- Perform Spark queries using Spark SQL and DataFrames
- Use Spark Streaming stateless and window transformations
- Visualize data, generate reports, and collaborate using Zeppelin
- Monitor Spark applications using Spark History Server
- Apply general application optimization guidelines and tips
- Use data caching to increase performance of applications
- Build and package Spark applications
- Deploy applications to the cluster using YARN
- Understand the purpose of Spark MLlib
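Several of the objectives above rest on the same functional programming pattern. As a minimal sketch in plain Python (no Spark installation required), the word-count shape below mirrors what Spark's flatMap, map, and reduceByKey operations generalize across a cluster; the sample lines and helper name are illustrative only:

```python
# Plain-Python sketch of the functional pattern behind Spark RDD
# transformations and actions. In Spark these steps are chained lazily
# and distributed; here they run eagerly on an ordinary list.
lines = ["spark makes big data simple", "big data needs spark"]

# "flatMap": split each line into individual words
words = [w for line in lines for w in line.split()]

# "map": pair each word with a count of 1 (the classic word-count shape)
pairs = [(w, 1) for w in words]

# "reduceByKey": sum the counts for each distinct word
def reduce_by_key(kv_pairs):
    counts = {}
    for key, value in kv_pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

counts = reduce_by_key(pairs)
print(counts["spark"])  # 2
print(counts["big"])    # 2
```

The same chain written against a real SparkContext would be `sc.textFile(...).flatMap(...).map(...).reduceByKey(...)`, with nothing computed until an action such as `collect()` is called.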
Course Outline
Format
- 50% Lecture/Discussion
- 50% Hands-on Labs
Hands-On Lab Activities
Labs can be performed using either Python or Scala
- Use common HDFS commands
- Use a REPL to program in Spark
- Use Zeppelin to program in Spark
- Perform RDD transformations and actions
- Perform Pair RDD transformations and actions
- Utilize Spark SQL
- Perform stateless transformations using Spark Streaming
- Perform window-based transformations
- Use Zeppelin for data visualization and reporting
- Monitor applications using Spark History Server
- Cache and persist data
- Configure checkpointing, broadcast variables, and executors
- Build and submit a Spark application to YARN
- Run Spark MLlib applications
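The build-and-submit lab culminates in handing a packaged application to YARN. A minimal sketch of that submission, assuming a working HDP cluster; the application JAR, class name, and resource sizes are hypothetical placeholders:

```shell
# Submit a packaged Spark application to a YARN cluster.
# --deploy-mode cluster runs the driver inside the cluster rather than
# on the submitting machine. All names and sizes are illustrative.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --num-executors 4 \
  --executor-memory 2g \
  my-app.jar
```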