Cloudera Big Data Developer Training, Bengaluru: Cloudera Developer for Spark & Hadoop



Cloudera Big Data Developer Training

 

  • Cloudera Developer Spark & Hadoop (Only Training)
    Last Date: 15-11-2018
    INR 85,999
  • Cloudera Developer Spark & Hadoop (Training & Certification)
    Last Date: 15-11-2018
    INR 1,10,000
  • Cloudera Developer Spark & Hadoop (Only Training) - Early Bird Discount
    Sale Date Ended
    INR 82,500 (Sold Out)
  • Cloudera Developer Spark & Hadoop (Training & Certification)
    Sale Date Ended
    INR 1,07,500 (Sold Out)


About The Event

Cloudera Developer for Spark & Hadoop

Overview:- 

Xebia's four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one for a given situation, and gain hands-on experience developing with those tools.

Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools.

 

Hands-On Hadoop:-

 

Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the entire Hadoop ecosystem, including:

  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to use Sqoop and Flume to ingest data
  • How to process distributed data with Apache Spark
  • How to model structured data as tables in Impala and Hive
  • How to choose the best data storage format for different data usage patterns
  • Best practices for data storage
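
A core idea behind "processing distributed data with Apache Spark" is lazy evaluation: transformations such as map and filter only build up a pipeline, and nothing runs until an action forces it. Spark itself is not needed to see this; the sketch below uses a made-up `MiniRDD` class (an illustration, not any Spark API) to mimic that transformation/action split in plain Python.

```python
# Plain-Python sketch of Spark's lazy-evaluation model (illustrative only):
# transformations (map/filter) merely build a pipeline of generators;
# the action (collect) is what actually forces computation.

class MiniRDD:
    def __init__(self, source):
        # source is either a concrete iterable or a zero-arg generator factory
        self._source = source

    def _iter(self):
        src = self._source
        return src() if callable(src) else iter(src)

    def map(self, fn):
        # no work happens here; we just wrap the parent in a new pipeline stage
        return MiniRDD(lambda: (fn(x) for x in self._iter()))

    def filter(self, pred):
        return MiniRDD(lambda: (x for x in self._iter() if pred(x)))

    def collect(self):
        # the "action": evaluates the whole pipeline and materializes results
        return list(self._iter())

nums = MiniRDD([1, 2, 3, 4, 5])
result = nums.map(lambda x: x * 10).filter(lambda x: x > 20).collect()
# result == [30, 40, 50]
```

In real Spark the same chain (`rdd.map(...).filter(...).collect()`) additionally partitions the data across the cluster, but the deferred-execution shape is the same.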

 

Course Curriculum :-

 

  • Introduction to Apache Hadoop and the Hadoop Ecosystem
  • Apache Hadoop File Storage
  • Distributed Processing on an Apache Hadoop Cluster
  • Apache Spark Basics
  • Working with DataFrames and Schemas
  • Analyzing Data with DataFrame Queries
  • RDD Overview
  • Transforming Data with RDDs
  • Aggregating Data with Pair RDDs
  • Querying Tables and Views with Apache Spark SQL
  • Working with Datasets in Scala
  • Writing, Configuring and Running Apache Spark Applications
  • Distributed Processing
  • Distributed Data Persistence
  • Common Patterns in Apache Spark Data Processing
  • Apache Spark Streaming: Introduction to DStreams
  • Apache Spark Streaming: Processing Multiple Batches
  • Apache Spark Streaming: Data Sources
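
The "Aggregating Data with Pair RDDs" module centers on key/value operations such as reduceByKey. Its semantics, merging all values that share a key with a binary function, can be sketched without Spark; `reduce_by_key` below is an illustrative stand-in written for this page, not Spark's actual API.

```python
from functools import reduce
from collections import defaultdict

def reduce_by_key(pairs, fn):
    """Simulate the semantics of Spark's reduceByKey: group (key, value)
    pairs by key, then merge each key's values with the binary function fn."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(fn, values) for key, values in groups.items()}

# Word count, the canonical pair-RDD exercise:
words = ["spark", "hadoop", "spark", "hive", "spark"]
pairs = [(w, 1) for w in words]
counts = reduce_by_key(pairs, lambda a, b: a + b)
# counts == {"spark": 3, "hadoop": 1, "hive": 1}
```

In Spark the equivalent would be `sc.parallelize(words).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)`, with the grouping and merging distributed across partitions.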

 

Prerequisites:-

 

This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.

Participants need to bring their own laptops to the training.

Animesh Pandey
