
Cloudera Developer for Spark & Hadoop | Bangalore | May 24-27

 


About The Event

Cloudera Developer for Spark & Hadoop

Overview :- 

Xebia's four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one to use in a given situation, and gain hands-on experience developing with those tools.

Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools.
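
As a flavour of the ingest workflow, here is a minimal Sqoop import sketch (the database URL, credentials, table, and target directory are purely illustrative):

    # Import a relational table into HDFS as delimited files (all names hypothetical)
    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username dbuser --password-file /user/dbuser/.password \
      --table customers \
      --target-dir /user/training/customers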

 

Hands-On Hadoop :-

Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the entire Hadoop ecosystem, including:

  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to use Sqoop and Flume to ingest data
  • How to process distributed data with Apache Spark
  • How to model structured data as tables in Impala and Hive
  • How to choose the best data storage format for different data usage patterns
  • Best practices for data storage

 

Course Curriculum :-

 

Introduction to Apache Hadoop and the Hadoop Ecosystem

  • Apache Hadoop Overview
  • Data Ingestion and Storage
  • Data Processing
  • Data Analysis and Exploration
  • Other Ecosystem Tools
  • Introduction to the Hands-On Exercises

 

Apache Hadoop File Storage

  • Apache Hadoop Cluster Components
  • HDFS Architecture
  • Using HDFS
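
For example, basic HDFS interaction from the command line looks like this (the paths and file names are illustrative):

    # Create a directory, upload a local file, and inspect it in HDFS
    hdfs dfs -mkdir -p /user/training/weblogs
    hdfs dfs -put weblogs.txt /user/training/weblogs/
    hdfs dfs -ls /user/training/weblogs
    hdfs dfs -cat /user/training/weblogs/weblogs.txt | head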

 

Distributed Processing on an Apache Hadoop Cluster

  • YARN Architecture
  • Working With YARN
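
A quick sketch of working with YARN from the command line (the application ID is hypothetical):

    # List running applications and fetch the logs of a finished one
    yarn application -list
    yarn logs -applicationId application_1526563899162_0001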

 

Apache Spark Basics

  • What is Apache Spark?
  • Starting the Spark Shell
  • Using the Spark Shell
  • Getting Started with Datasets and DataFrames
  • DataFrame Operations
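
To give an idea of the exercises, a short spark-shell (Scala) session; "spark" is the pre-created SparkSession, and the file path is hypothetical:

    // Read a JSON file into a DataFrame, then inspect and query it
    val usersDF = spark.read.json("/user/training/users.json")
    usersDF.printSchema()
    usersDF.show(5)
    usersDF.select("name", "age").show()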

 

Working with DataFrames and Schemas

  • Creating DataFrames from Data Sources
  • Saving DataFrames to Data Sources
  • DataFrame Schemas
  • Eager and Lazy Execution
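
A minimal sketch of defining a schema explicitly and saving to another format (column names and paths are illustrative):

    import org.apache.spark.sql.types._

    // An explicit schema avoids the cost and surprises of schema inference
    val schema = StructType(Seq(
      StructField("id", IntegerType, nullable = false),
      StructField("name", StringType, nullable = true)))

    val people = spark.read.schema(schema)
      .option("header", "true").csv("/user/training/people.csv")

    // Parquet stores the schema alongside the data
    people.write.mode("overwrite").parquet("/user/training/people_parquet")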

 

Analyzing Data with DataFrame Queries

  • Querying DataFrames Using Column Expressions
  • Grouping and Aggregation Queries
  • Joining DataFrames
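
For example, column expressions, aggregation, and a join might look like this (the people and accounts DataFrames and their columns are assumed from earlier exercises):

    import spark.implicits._
    import org.apache.spark.sql.functions._

    // Filter with a column expression and derive a new column
    val adults = people.where($"age" >= 18)
      .select($"name", ($"age" + 1).alias("age_next_year"))

    // Group and aggregate, then join on a shared key column
    val byZip  = accounts.groupBy("zipcode").agg(avg("balance").alias("avg_balance"))
    val joined = accounts.join(people, Seq("id"), "left_outer")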

 

RDD Overview

  • RDD Overview
  • RDD Data Sources
  • Creating and Saving RDDs
  • RDD Operations
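
A short RDD sketch in spark-shell, where "sc" is the SparkContext (paths hypothetical):

    // Create RDDs from a file and from a local collection, then save one
    val lines = sc.textFile("/user/training/weblogs")
    val nums  = sc.parallelize(1 to 1000)
    lines.take(3).foreach(println)
    lines.saveAsTextFile("/user/training/weblogs_copy")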

 

Transforming Data with RDDs

  • Writing and Passing Transformation Functions
  • Transformation Execution
  • Converting Between RDDs and DataFrames
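
For instance, passing a named transformation function and converting between RDDs and DataFrames (the log format is hypothetical):

    // Extract the first space-separated field (e.g. an IP address) from each line
    def extractIp(line: String): String = line.split(' ')(0)

    val ips = lines.map(extractIp).filter(_.nonEmpty)

    // RDD -> DataFrame and back (toDF needs spark.implicits._, pre-imported in spark-shell)
    import spark.implicits._
    val ipDF  = ips.toDF("ip")
    val ipRDD = ipDF.rdd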

 

Aggregating Data with Pair RDDs

  • Key-Value Pair RDDs
  • Map-Reduce
  • Other Pair RDD Operations
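
The classic map-reduce word count, expressed with a pair RDD:

    // Split lines into words, pair each with 1, and sum the counts per key
    val counts = lines.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.sortBy(-_._2).take(10).foreach(println)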

 

Querying Tables and Views with Apache Spark SQL

  • Querying Tables in Spark Using SQL
  • Querying Files and Views
  • The Catalog API
  • Comparing Spark SQL, Apache Impala, and Apache Hive-on-Spark
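
A sketch of mixing the DataFrame and SQL APIs (table and path names illustrative):

    // Expose a DataFrame to SQL, query it, and browse the catalog
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19").show()

    // Spark SQL can also query files in place
    spark.sql("SELECT * FROM parquet.`/user/training/people_parquet`").show(5)
    spark.catalog.listTables().show()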

 

Working with Datasets in Scala

  • Datasets and DataFrames
  • Creating Datasets
  • Loading and Saving Datasets
  • Dataset Operations
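
A minimal typed Dataset sketch in Scala (the case class and path are hypothetical; JSON integers load as Long, hence the Long field):

    case class Account(id: Long, balance: Double)
    import spark.implicits._

    // A Dataset adds compile-time types on top of a DataFrame
    val accounts = spark.read.json("/user/training/accounts.json").as[Account]
    val doubled  = accounts.map(a => a.copy(balance = a.balance * 2))
    doubled.show(5)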

 

Writing, Configuring, and Running Apache Spark Applications

  • Writing a Spark Application
  • Building and Running an Application
  • Application Deployment Mode
  • The Spark Application Web UI
  • Configuring Application Properties
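
A skeletal standalone application and a possible spark-submit invocation (the object name, JAR, and paths are all illustrative):

    import org.apache.spark.sql.SparkSession

    object CountJob {
      def main(args: Array[String]): Unit = {
        // Unlike spark-shell, an application creates its own SparkSession
        val spark = SparkSession.builder.appName("CountJob").getOrCreate()
        println(s"Lines: ${spark.read.textFile(args(0)).count()}")
        spark.stop()
      }
    }

    // Packaged as a JAR and run on the cluster, e.g.:
    //   spark-submit --master yarn --deploy-mode cluster \
    //     --class CountJob countjob.jar /user/training/weblogs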

 

Distributed Processing

  • Review: Apache Spark on a Cluster
  • RDD Partitions
  • Example: Partitioning in Queries
  • Stages and Tasks
  • Job Execution Planning
  • Example: Catalyst Execution Plan
  • Example: RDD Execution Plan
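
For example, inspecting partitioning and a query's execution plan in spark-shell (the grouping column is hypothetical):

    // The partition count drives task parallelism
    val rdd = sc.textFile("/user/training/weblogs")
    println(rdd.getNumPartitions)
    val repartitioned = rdd.repartition(8)

    // Catalyst's logical and physical plans; the shuffle marks a stage boundary
    people.groupBy("zipcode").count().explain(true)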

 

Distributed Data Persistence

  • DataFrame and Dataset Persistence
  • Persistence Storage Levels
  • Viewing Persisted RDDs
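
A short persistence sketch in spark-shell, where spark.implicits._ is already imported (the accounts DataFrame is assumed from earlier):

    import org.apache.spark.storage.StorageLevel

    // Cache a DataFrame that several actions will reuse
    val active = accounts.where($"balance" > 0)
    active.persist(StorageLevel.MEMORY_AND_DISK)
    active.count()   // first action materializes the cache
    active.show(5)   // later actions read from the cache
    active.unpersist()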

 

Common Patterns in Apache Spark Data Processing

  • Common Apache Spark Use Cases
  • Iterative Algorithms in Apache Spark
  • Machine Learning
  • Example: k-means
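
A compact k-means sketch with Spark MLlib (the points DataFrame and its feature columns are hypothetical):

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.feature.VectorAssembler

    // Assemble raw columns into the feature vector MLlib expects
    val features = new VectorAssembler()
      .setInputCols(Array("lat", "lon")).setOutputCol("features")
      .transform(points)

    // Iteratively refine 5 cluster centres
    val model = new KMeans().setK(5).setSeed(1L).fit(features)
    model.clusterCenters.foreach(println)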

 

Apache Spark Streaming: Introduction to DStreams

  • Apache Spark Streaming Overview
  • Example: Streaming Request Count
  • DStreams
  • Developing Streaming Applications
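
A minimal DStream sketch (the host, port, and batch interval are illustrative):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // One RDD of lines every 2 seconds from a socket source
    val ssc   = new StreamingContext(sc, Seconds(2))
    val lines = ssc.socketTextStream("localhost", 4444)
    lines.count().print()   // requests per batch

    ssc.start()
    ssc.awaitTermination()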

 

Apache Spark Streaming: Processing Multiple Batches

  • Multi-Batch Operations
  • Time Slicing
  • State Operations
  • Sliding Window Operations
  • Preview: Structured Streaming
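
For instance, a sliding-window count over the DStream above (the user-ID field index is hypothetical; stateful operations additionally require a checkpoint directory):

    import org.apache.spark.streaming.Seconds

    // Requests per user over the last 30 seconds, recomputed every 10 seconds
    val userReqs = lines.map(line => (line.split(' ')(2), 1))
    val windowed = userReqs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
    windowed.print()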

 

Apache Spark Streaming: Data Sources

  • Streaming Data Source Overview
  • Apache Flume and Apache Kafka Data Sources
  • Example: Using a Kafka Direct Data Source
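
A sketch of a Kafka direct stream with the spark-streaming-kafka-0-10 integration (the exact API varies by version; the broker, group, and topic names are hypothetical, and ssc is the StreamingContext from the sketch above):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker1:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "weblog-consumers")

    // Each Kafka partition maps directly to an RDD partition
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("weblogs"), kafkaParams))
    stream.map(_.value).count().print()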

 

Prerequisites :-

This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.

Participants need to bring their own laptops to the training.

 

Certification :-

CCA: Spark and Hadoop Developer Certification

CCA175 is a hands-on, practical exam using Cloudera technologies. Each candidate is given their own CDH5 (currently 5.3.2) cluster pre-loaded with Spark, Impala, Crunch, Hive, Pig, Sqoop, Kafka, Flume, Kite, Hue, Oozie, DataFu, and many others. In addition, the cluster comes with Python (2.6 and 3.4), Perl 5.10, Elephant Bird, Cascading 2.6, Brickhouse, Hive Swarm, Scala 2.11, Scalding, IDEA, Sublime, Eclipse, and NetBeans.

 
