Big Data Hadoop Certification Training Course




Certpine's extensive Big Data Hadoop Certification Training is curated by Hadoop experts and covers in-depth knowledge of Big Data and Hadoop Ecosystem tools such as HDFS, YARN, MapReduce, Hive, and Pig. Throughout this online, instructor-led Big Data Hadoop certification training, you will work on real-life industry use cases in the Retail, Social Media, Aviation, Tourism, and Finance domains using Certpine's Cloud Lab. Enroll now in this Big Data certification to learn Big Data from instructors with more than 10 years of experience, with hands-on demonstrations.

Weekend (Sat-Sun): $499
Weekdays (Mon-Fri): $499

CURRICULUM

Learning Objectives: In this module, you will understand what Big Data is, the limitations of traditional solutions to Big Data problems, how Hadoop solves those problems, the Hadoop Ecosystem, Hadoop Architecture, HDFS, the anatomy of file read and write, and how MapReduce works.

Topics:
  • Introduction to Big Data & Big Data Challenges 
  • Limitations & Solutions of Big Data Architecture
  • Hadoop & its Features
  • Hadoop Ecosystem
  • Hadoop 2.x Core Components 
  • Hadoop Storage: HDFS (Hadoop Distributed File System)
  • Hadoop Processing: MapReduce Framework
  • Different Hadoop Distributions

Learning Objectives: In this module, you will learn Hadoop Cluster Architecture, important configuration files of a Hadoop Cluster, data loading techniques using Sqoop & Flume, and how to set up Single Node and Multi-Node Hadoop Clusters.

Topics:
  • Hadoop 2.x Cluster Architecture 
  • Federation and High Availability Architecture 
  • Typical Production Hadoop Cluster
  • Hadoop Cluster Modes
  • Common Hadoop Shell Commands 
  • Hadoop 2.x Configuration Files
  • Single Node Cluster & Multi-Node Cluster set up
  • Basic Hadoop Administration
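For a taste of the hands-on work in this module, here is a minimal sketch (not part of the official course material) that uses the Hadoop Java FileSystem API to mirror a few common shell commands. The NameNode address and paths are placeholders for whatever your single-node or multi-node cluster uses.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsShellEquivalents {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; use the fs.defaultFS value from your core-site.xml.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of: hadoop fs -mkdir -p /user/demo
        fs.mkdirs(new Path("/user/demo"));

        // Equivalent of: hadoop fs -put localfile.txt /user/demo/
        fs.copyFromLocalFile(new Path("localfile.txt"), new Path("/user/demo/localfile.txt"));

        // Equivalent of: hadoop fs -ls /user/demo
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
        }
        fs.close();
    }
}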

Learning Objectives: In this module, you will gain a comprehensive understanding of the Hadoop MapReduce framework and how MapReduce works on data stored in HDFS. You will also learn advanced MapReduce concepts such as Input Splits, Combiner & Partitioner.

Topics:
  • Traditional way vs MapReduce way
  • Why MapReduce 
  • YARN Components
  • YARN Architecture
  • YARN MapReduce Application Execution Flow
  • YARN Workflow
  • Anatomy of MapReduce Program 
  • Input Splits, Relation between Input Splits and HDFS Blocks
  • MapReduce: Combiner & Partitioner
  • Demo of Health Care Dataset
  • Demo of Weather Dataset
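To make the anatomy of a MapReduce program concrete, here is a minimal word-count sketch (an illustration, not the course's own demo code) wiring a Mapper, a Combiner, and a Reducer together through a driver; input and output HDFS paths are passed on the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCount {

    // Mapper: emits (word, 1) for every token in its input split.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer (reused as the Combiner): sums the counts for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class);   // local aggregation before the shuffle
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}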

Learning Objectives: In this module, you will learn advanced MapReduce concepts such as Counters, Distributed Cache, MRUnit, Reduce Join, Custom Input Format, Sequence Input Format and XML parsing.

Topics:
  • Counters
  • Distributed Cache
  • MRUnit
  • Reduce Join 
  • Custom Input Format 
  • Sequence Input Format
  • XML file Parsing using MapReduce
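As one illustration of these concepts, the sketch below shows how a mapper might increment custom Counters for valid and malformed records; the counter names and the record format are made up for the example, and the class would be plugged into a driver like the word-count job shown earlier.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class RecordValidationMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    // Custom counters are declared as an enum; Hadoop aggregates them across all tasks.
    public enum RecordQuality { VALID, MALFORMED }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical rule: a record is valid if it has at least 3 comma-separated fields.
        String[] fields = value.toString().split(",");
        if (fields.length >= 3) {
            context.getCounter(RecordQuality.VALID).increment(1);
            context.write(value, NullWritable.get());
        } else {
            context.getCounter(RecordQuality.MALFORMED).increment(1);
        }
    }
}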

Learning Objectives: In this module, you will learn Apache Pig, the types of use cases where Pig can be used, the tight coupling between Pig and MapReduce, and Pig Latin scripting, Pig running modes, Pig UDFs, Pig Streaming & testing of Pig scripts. You will also work on a healthcare dataset.

Topics:
  • Introduction to Apache Pig 
  • MapReduce vs Pig
  • Pig Components & Pig Execution
  • Pig Data Types & Data Models in Pig
  • Pig Latin Programs 
  • Shell and Utility Commands
  • Pig UDF & Pig Streaming
  • Testing Pig Scripts with PigUnit
  • Aviation use case in Pig
  • Pig Demo of Healthcare Dataset
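For orientation, here is a minimal sketch of Pig Latin driven from Java through the embedded PigServer API, in the spirit of the healthcare demo. The file name and field layout (patients.csv with id, age, diagnosis) are hypothetical, not the course's actual dataset.

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.data.Tuple;

import java.util.Iterator;

public class PigLatinSketch {
    public static void main(String[] args) throws Exception {
        // Run Pig in local mode; use ExecType.MAPREDUCE to run against a cluster instead.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Hypothetical comma-separated patient file with id, age and diagnosis fields.
        pig.registerQuery("patients = LOAD 'patients.csv' USING PigStorage(',') "
                + "AS (id:int, age:int, diagnosis:chararray);");
        pig.registerQuery("adults = FILTER patients BY age >= 18;");
        pig.registerQuery("by_diagnosis = GROUP adults BY diagnosis;");
        pig.registerQuery("counts = FOREACH by_diagnosis GENERATE group, COUNT(adults);");

        // Iterate over the tuples produced by the final alias.
        Iterator<Tuple> it = pig.openIterator("counts");
        while (it.hasNext()) {
            System.out.println(it.next());
        }
        pig.shutdown();
    }
}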

Learning Objectives: This module will help you understand Hive concepts, Hive data types, loading and querying data in Hive, running Hive scripts and Hive UDFs.

Topics:
  • Introduction to Apache Hive
  • Hive vs Pig
  • Hive Architecture and Components
  • Hive Metastore
  • Limitations of Hive
  • Comparison with Traditional Database
  • Hive Data Types and Data Models
  • Hive Partition
  • Hive Bucketing
  • Hive Tables (Managed Tables and External Tables)
  • Importing Data
  • Querying Data & Managing Outputs
  • Hive Script & Hive UDF
  • Retail use case in Hive
  • Hive Demo on Healthcare Dataset
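The sketch below hints at what loading and querying data in Hive looks like from Java over the standard HiveServer2 JDBC interface. The host, port, credentials, table name and HDFS path are placeholders chosen for the retail example, not the course's actual lab setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Load the HiveServer2 JDBC driver; HiveServer2 typically listens on port 10000.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
        Statement stmt = con.createStatement();

        // Hypothetical managed table for retail transactions.
        stmt.execute("CREATE TABLE IF NOT EXISTS transactions ("
                + "txn_id INT, customer STRING, amount DOUBLE) "
                + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");

        // Load a file that already sits in HDFS into the table.
        stmt.execute("LOAD DATA INPATH '/user/demo/transactions.csv' INTO TABLE transactions");

        // Query the data and print the aggregated result.
        ResultSet rs = stmt.executeQuery(
                "SELECT customer, SUM(amount) FROM transactions GROUP BY customer");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
        }
        con.close();
    }
}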

Learning Objectives: In this module, you will understand advanced Apache Hive concepts such as UDF, Dynamic Partitioning, Hive indexes and views, and optimizations in Hive. You will also acquire in-depth knowledge of Apache HBase, HBase Architecture, HBase running modes and its components.

Topics:
  • Hive QL: Joining Tables, Dynamic Partitioning 
  • Custom MapReduce Scripts
  • Hive Indexes and views 
  • Hive Query Optimizers
  • Hive Thrift Server
  • Hive UDF 
  • Apache HBase: Introduction to NoSQL Databases and HBase 
  • HBase vs RDBMS
  • HBase Components
  • HBase Architecture 
  • HBase Run Modes
  • HBase Configuration
  • HBase Cluster Deployment
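To show the shape of a Hive UDF, here is a minimal sketch using the simple (older) UDF base class; the masking behaviour and class name are invented for illustration.

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A minimal Hive UDF that masks all but the last four characters of a string,
// e.g. for anonymising customer identifiers inside a query.
public class MaskUDF extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        String s = input.toString();
        if (s.length() <= 4) {
            return new Text(s);
        }
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < s.length() - 4; i++) {
            masked.append('*');
        }
        masked.append(s.substring(s.length() - 4));
        return new Text(masked.toString());
    }
}

Once packaged into a jar, such a function would typically be registered in Hive with ADD JAR followed by CREATE TEMPORARY FUNCTION mask AS 'MaskUDF'; before being called in a query.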

Learning Objectives: This module will cover advanced Apache HBase concepts. We will see demos on HBase Bulk Loading & HBase Filters. You will also learn what ZooKeeper is all about, how it helps in monitoring a cluster & why HBase uses ZooKeeper.

Topics:
  • HBase Data Model 
  • HBase Shell
  • HBase Client API
  • HBase Data Loading Techniques
  • Apache ZooKeeper Introduction
  • ZooKeeper Data Model
  • ZooKeeper Service
  • HBase Bulk Loading 
  • Getting and Inserting Data
  • HBase Filters
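As a glimpse of the HBase Client API, the sketch below inserts and reads back a single row. The ZooKeeper quorum, the table name 'patients' and the column family 'info' are placeholders; the table would already have been created, for example from the HBase shell.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder ZooKeeper quorum; HBase uses ZooKeeper to locate region servers.
        conf.set("hbase.zookeeper.quorum", "localhost");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("patients"))) {

            // Insert a row: row key, column family, qualifier, value.
            Put put = new Put(Bytes.toBytes("patient-001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Jane Doe"));
            table.put(put);

            // Read the row back.
            Result result = table.get(new Get(Bytes.toBytes("patient-001")));
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(name));
        }
    }
}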

Learning Objectives: In this module, you will learn what Apache Spark is, SparkContext & the Spark Ecosystem. You will learn how to work with Resilient Distributed Datasets (RDDs) in Apache Spark. You will run applications on a Spark cluster and compare the performance of MapReduce and Spark.

Topics:
  • What is Spark 
  • Spark Ecosystem
  • Spark Components 
  • What is Scala 
  • Why Scala
  • SparkContext
  • Spark RDD
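For a first feel of RDDs, here is a minimal sketch using Spark's Java API (the course also covers Scala); "local[*]" is a placeholder master that runs Spark inside the JVM rather than on a cluster.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;

public class SparkRddSketch {
    public static void main(String[] args) {
        // "local[*]" runs Spark in-process; point the master at your Spark cluster instead.
        SparkConf conf = new SparkConf().setAppName("rdd-sketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Build an RDD, apply a transformation (filter) and an action (count).
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8));
        long evens = numbers.filter(n -> n % 2 == 0).count();
        System.out.println("even numbers: " + evens);

        sc.stop();
    }
}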

Learning Objectives: In this module, you will understand how multiple Hadoop ecosystem components work together to solve Big Data problems. This module will also cover Flume & Sqoop demos, the Apache Oozie Workflow Scheduler for Hadoop jobs, and Hadoop Talend integration.

Topics:
  • Oozie 
  • Oozie Components
  • Oozie Workflow
  • Scheduling Jobs with Oozie Scheduler
  • Demo of Oozie Workflow
  • Oozie Coordinator 
  • Oozie Commands
  • Oozie Web Console
  • Oozie for MapReduce
  • Combining flow of MapReduce Jobs
  • Hive in Oozie
  • Hadoop Project Demo
  • Hadoop Talend Integration
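To illustrate how an Oozie workflow is kicked off programmatically, here is a sketch using the Oozie Java client; the Oozie URL, HDFS application path (which would contain a workflow.xml) and the nameNode/jobTracker properties are placeholders.

import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

import java.util.Properties;

public class OozieSubmitSketch {
    public static void main(String[] args) throws Exception {
        // URL of the Oozie server's REST endpoint.
        OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

        // Job properties; the application path is a placeholder HDFS directory
        // containing the workflow.xml definition to run.
        Properties props = oozie.createConfiguration();
        props.setProperty(OozieClient.APP_PATH, "hdfs://localhost:9000/user/demo/wordcount-wf");
        props.setProperty("nameNode", "hdfs://localhost:9000");
        props.setProperty("jobTracker", "localhost:8032");

        // Submit and start the workflow, then poll its status until it finishes.
        String jobId = oozie.run(props);
        System.out.println("submitted workflow " + jobId);
        while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
            Thread.sleep(10_000);
        }
        System.out.println("final status: " + oozie.getJobInfo(jobId).getStatus());
    }
}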

Analysis of an Online Book Store
  • Find out the frequency of books published each year. (Hint: A sample dataset will be provided)
  • Find out in which year the maximum number of books were published.
  • Find out how many books were published based on ranking in the year 2002.
Sample Dataset Description
  • The Book-Crossing dataset consists of 3 tables that will be provided to you.
Airlines Analysis
  • Find the list of airports operating in India
  • Find the list of airlines having zero stops
  • Find the list of airlines operating with codeshare
  • Find which country (or territory) has the highest number of airports
  • Find the list of active airlines in the United States
Sample Dataset Description
  • In this use case, there are 3 datasets: Final_airlines, routes.dat and airports_mod.dat.

TRAINING FEATURES


Instructor-led Sessions

30 Hours of Online Live Instructor-Led Classes.
Weekend Class: 10 sessions of 3 hours each.
Weekday Class: 15 sessions of 2 hours each.

Real-life Case Studies

Live project based on any of the selected use cases, involving implementation of the various Big Data concepts.

Assessments

Each class will be followed by a quiz to assess your learning.

Lifetime Access

You get lifetime access to the LMS, where presentations, quizzes, installation guides & class recordings are available.

24 x 7 Expert Support

We have a lifetime 24x7 online support team to resolve all your technical queries through a ticket-based tracking system.

Certification

Successfully complete your final course project and Certpine will certify you as a Big Data Expert.

Forum

We have a community forum for all our learners that further facilitates learning through peer interaction and knowledge sharing.

PREREQUISITES