Learn Big Data: The Hadoop Ecosystem Masterclass
4.3 (3,028 ratings)
Course Ratings are calculated from individual students’ ratings and a variety of other signals, like age of rating and reliability, to ensure that they reflect course quality fairly and accurately.
17,220 students enrolled

Master the Hadoop ecosystem using HDFS, MapReduce, Yarn, Pig, Hive, Kafka, HBase, Spark, Knox, Ranger, Ambari, Zookeeper
Created by Edward Viaene
Last updated 8/2018
English [Auto], Portuguese [Auto]
Current price: $27.99 Original price: $39.99 Discount: 30% off
30-Day Money-Back Guarantee
This course includes
  • 6 hours on-demand video
  • 1 article
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Process Big Data using batch processing
  • Process Big Data using real-time processing
  • Be familiar with the technologies in the Hadoop Stack
  • Be able to install and configure the Hortonworks Data Platform (HDP)

Requirements
  • You will need to have a background in IT. The course is aimed at Software Engineers, System Administrators, and DBAs who want to learn about Big Data
  • Knowing any programming language will enhance your course experience
  • The course contains demos you can try out on your own machine. To run the Hadoop cluster on your own machine, you will need to run a virtual server; 8 GB or more of RAM is recommended.

In this course you will learn Big Data using the Hadoop Ecosystem. Why Hadoop? It is one of the most sought-after skills in the IT industry. The average salary in the US is $112,000 per year, rising to an average of $160,000 in San Francisco (source: Indeed).

The course is aimed at Software Engineers, Database Administrators, and System Administrators that want to learn about Big Data. Other IT professionals can also take this course, but might have to do some extra research to understand some of the concepts.

You will learn how to use the most popular software in the Big Data industry at the moment, covering batch processing as well as realtime processing. This course will give you enough background to discuss real problems and solutions with experts in the industry. Updating your LinkedIn profile with these technologies can put you on recruiters' radar for interviews at the most prestigious companies in the world.

The course is very practical, with more than 6 hours of lectures. You are encouraged to try everything out yourself, adding multiple hours of hands-on learning. If you get stuck while trying out the technology, support is available: I answer questions on the message boards, and there is a Facebook group where you can post questions.

Who this course is for:
  • This course is for anyone who wants to know how Big Data works, and what technologies are involved
  • The main focus is on the Hadoop ecosystem. We don't cover technologies that are not part of the Hortonworks Data Platform (HDP) stack
  • The course compares MapR, Cloudera, and Hortonworks, but we only use the Hortonworks Data Platform (HDP) in the demos
Course content
98 lectures 05:58:54
+ Introduction
2 lectures 03:58

Course introduction, lecture overview, course objectives

Preview 03:01

This document provides a guide to do the demos in this course

Course Guide
+ What is Big Data and Hadoop
5 lectures 15:34

The 3 (or 4) V's of Big Data explained

Preview 02:16

What is Big Data? Some examples of companies using Big Data, like Spotify, Amazon, Google, and Tesla

Preview 03:29

What can we do with Big Data? Data Science explained.

What is Data Science

How to build a Big Data System? What is Hadoop?

What is Hadoop

Hadoop Distributions: a comparison between Apache Hadoop, Hortonworks Data Platform, Cloudera, and MapR

Hadoop Distributions
What is Big Data Quiz
12 questions
+ Introduction to Hadoop
16 lectures 01:14:03

How to install Hadoop? You can install Hadoop using Vagrant with VirtualBox or VMware, or in the cloud using AWS. Hortonworks also provides a Sandbox.

Hadoop Installation

This is a demo of how to install and use the Hortonworks Sandbox, an alternative to the full installation using Ambari if your machine doesn't have a lot of memory available. You can also use both in conjunction.

Demo: Hortonworks Sandbox

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your Laptop or Desktop

Demo: Hadoop Installation - Part 1

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your Laptop or Desktop (Part II)

Demo: Hadoop Installation - Part 2

An introduction to HDFS, The Hadoop Distributed Filesystem

Introduction to HDFS

Communications between the DataNode and the NameNode explained

DataNode Communications

An introduction to HDFS using hadoop fs -put. I also show how a file gets divided into blocks and where those blocks are stored.

Demo: HDFS - Part 1

An introduction to downloading, uploading and listing files. This time I'm using the Ambari HDFS Viewer and the NameNode UI. I also show what configuration changes are necessary to make this work.

Demo: HDFS - Part 2 - Using Ambari

MapReduce WordCount, step by step explained

MapReduce WordCount Example
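
The map / shuffle / reduce flow of WordCount can be sketched in a few lines of plain Python. This is a single-process simulation of the data flow only; no Hadoop is involved, and the function names are purely illustrative:

```python
from collections import defaultdict

def map_phase(line):
    # The mapper emits a (word, 1) pair for every word in its input line.
    return [(word, 1) for word in line.lower().split()]

def shuffle(pairs):
    # The shuffle groups all values belonging to the same key together.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # The reducer sums the grouped counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["the quick brown fox", "the lazy dog"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
# counts["the"] == 2
```

In a real MapReduce job, the map and reduce phases run on many nodes in parallel and the shuffle moves data over the network; the logic per record, however, is exactly this.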

A demo of MapReduce WordCount on our HDP cluster

Demo: MapReduce WordCount

In HDFS, files are divided into blocks and stored on the DataNodes. In this lecture we look at what happens when we read lines from files that span multiple blocks.

Lines that span blocks
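
The idea can be sketched in Python: splits are fixed byte ranges, but record readers adjust their start and end so that every line is read exactly once. This is a simplified illustration of the convention (assuming `\n`-terminated text), not Hadoop's actual LineRecordReader code:

```python
def read_split(data: bytes, start: int, end: int):
    # A reader never starts mid-line: if the split does not begin the
    # file, the bytes before the first newline belong to the previous
    # split's reader, so they are skipped here.
    if start == 0:
        pos = 0
    else:
        nl = data.find(b"\n", start)
        pos = nl + 1 if nl != -1 else len(data)
    lines = []
    # Keep reading while a line *starts* at or before the split
    # boundary; the last line may run past `end` into the next block.
    while pos <= end and pos < len(data):
        nl = data.find(b"\n", pos)
        nl = len(data) if nl == -1 else nl
        lines.append(data[pos:nl].decode())
        pos = nl + 1
    return lines

data = b"aaa\nbbbbb\ncc\n"
# Two "blocks": [0, 6) and [6, 13). The line "bbbbb" crosses the
# boundary at byte 6: the first reader reads it fully (past its end),
# and the second reader skips it.
first = read_split(data, 0, 6)     # ["aaa", "bbbbb"]
second = read_split(data, 6, len(data))  # ["cc"]
```

Together the two readers produce every line exactly once, even though no line boundary coincides with the block boundary.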

Introducing Yarn and concepts like the ResourceManager, the scheduler, the ApplicationsManager, the NodeManager, and the ApplicationMaster. I explain how an application is executed and what happens when a node crashes.

Introduction to Yarn

A demo of an application executed using yarn jar. I provide an overview of Ambari Yarn metrics and the ResourceManager UI

Demo: Yarn and ResourceManager UI

Ambari also exposes a REST API, and commands can be issued directly against it. Ambari also supports unattended installs using Ambari Blueprints.

Ambari API and Blueprints

A demo showing you the Ambari API and how to work with blueprints

Demo: Ambari API and Blueprints

An introduction to ETL processing in Hadoop. MapReduce, Pig, and Spark are suitable to do batch processing. Hive is more suitable for data exploration.

ETL Processing in Hadoop
Introduction Quiz
5 questions
+ Pig
4 lectures 15:07

An introduction to Pig and Pig Latin.

Introduction to Pig

This demo shows how to install Pig and Tez using Ambari on the Hortonworks Data Platform

Demo: Part 1 - Pig Installation

In this demo I will show you basic Pig commands to load, dump, and store data. I'll also show you an example of how to filter data.

Demo: Part 2 - Pig Commands

More Pig commands in this final part of the Pig demo. I'll go over commands like GROUP BY, FOREACH ... GENERATE, and COUNT().

Demo: Part 3 - More Pig Commands
+ Apache Spark
7 lectures 26:22

An introduction to Apache Spark. This lecture explains the differences between spark-submit in local mode, yarn-cluster mode, and yarn-client mode.

Introduction to Apache Spark

An introduction to WordCount in Spark using Python (pyspark)

Spark WordCount

Spark installation using Ambari and a demo of the Spark Wordcount using the pyspark shell.

Demo: Spark installation and WordCount

This lecture gives an introduction to Resilient Distributed Datasets (RDDs), the abstraction that lets you apply transformations and actions in Spark. I give an example using a filter RDD, and explain how shuffled RDDs impact disk and network I/O.

Preview 03:52

A demo of RDD transformations and actions in Spark

Demo: RDD Transformations and Actions

An overview of the most common RDD actions and transformations

Overview of RDD Transformations and Actions
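
The most common transformations and actions can be mimicked in plain Python to show the data flow (the pyspark equivalents are noted in the comments; this runs without Spark, and Spark's laziness is not modelled):

```python
def reduce_by_key(pairs, fn):
    # Stand-in for Spark's reduceByKey: merging values per key is what
    # forces a shuffle, since equal keys must end up on the same node.
    acc = {}
    for k, v in pairs:
        acc[k] = fn(acc[k], v) if k in acc else v
    return sorted(acc.items())

logs = ["ERROR disk full", "INFO started", "ERROR timeout", "INFO done"]

# Transformations are lazy in Spark; eager Python comprehensions are
# used here just to show the same data flow.
errors = [line for line in logs if line.startswith("ERROR")]  # rdd.filter(...)
pairs = [(line.split()[0], 1) for line in logs]               # rdd.map(...)
counts = reduce_by_key(pairs, lambda a, b: a + b)             # rdd.reduceByKey(...)
# counts == [("ERROR", 2), ("INFO", 2)]
```

Note that filter and map are narrow (each output element depends on one input element), while reduceByKey is wide: it needs all values for a key, which is why it triggers the shuffle discussed in the RDD lecture.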

An overview of what Spark MLlib (Machine Learning Library) can do. I explain a recommendation engine example and a clustering example (K-Means / DBSCAN)

Spark MLLib
+ Hive
6 lectures 23:46

An introduction to SQL on Hadoop using Hive, enabling data warehouse capabilities. This lecture provides an architecture overview and an overview of the hive CLI and beeline using JDBC.

Introduction to Hive

An overview of Hive Queries: creating tables, creating databases, inserting data, and selecting data. This lecture also shows where the hive data is stored in HDFS.

Hive Queries

A demo that shows the installation of Hiveserver2 and the clients. Afterwards I show you a few example queries using a JDBC beeline connection.

Demo: Hive Installation and Hive Queries

Hive queries can't be optimized with traditional indexes. This lecture explains how queries in Hive should be optimized instead, using partitions and buckets. It also covers User Defined Functions (UDFs) and Serialization / Deserialization (SerDes)

Hive Partitioning, Buckets, UDFs, and SerDes
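
The two layout techniques can be sketched as follows. This is a hedged illustration: the `/warehouse` path prefix and the crc32 hash are stand-ins for demonstration, not Hive's exact directory layout or hash function:

```python
import zlib

def partition_path(table: str, part_col: str, value: str) -> str:
    # A partitioned Hive table stores each partition value in its own
    # HDFS directory, so a WHERE clause on the partition column lets
    # Hive skip whole directories instead of scanning every file.
    # (The /warehouse prefix here is illustrative.)
    return f"/warehouse/{table}/{part_col}={value}"

def bucket_for(key: str, num_buckets: int) -> int:
    # Bucketing assigns each row to one of N files by hashing the
    # bucketing column modulo N, which helps joins and sampling.
    # crc32 stands in for Hive's own hash, so the numbers here
    # won't match a real cluster.
    return zlib.crc32(key.encode()) % num_buckets
```

For example, a query with `WHERE dt = '2018-08-01'` on a table partitioned by `dt` only touches the single `dt=2018-08-01` directory.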

The Stinger initiative brings optimizations to Hive. Query times have dropped significantly over the years. This lecture explains the details.

The Stinger Initiative

You can also use Hive in Spark using the Spark SQLContext.

Hive in Spark
+ Real Time Processing
1 lecture 02:53

All the lectures up until now were batch-oriented. From now on we discuss realtime processing technologies like Kafka, Storm, Spark Streaming, and HBase / Phoenix.

Introduction to Realtime Processing
+ Kafka
5 lectures 19:14

An introduction to Kafka and its terminology like Producers, Consumers, Topics and Partitions.

Introduction to Kafka

An explanation of Kafka topics covering leader partitions, follower partitions, and how writes are sent to the partitions. It also covers consumer groups, to show the difference between the publish-subscribe (pub-sub) mechanism and queuing

Kafka Topics
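
How a keyed write picks its partition can be sketched like this. It is an approximation of the default partitioner's behaviour only: the Java client hashes keys with murmur2, and crc32 is used here as a stand-in, so the actual partition numbers differ from a real broker's:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # The default partitioner hashes the message key modulo the
    # partition count. All messages with the same key land on the
    # same partition, which is what gives per-key ordering.
    return zlib.crc32(key) % num_partitions

# Every event for the same customer goes to the same partition,
# so one consumer in the group sees that customer's events in order.
p1 = partition_for(b"customer-42", 6)
p2 = partition_for(b"customer-42", 6)
```

This is also why adding partitions to an existing topic can break per-key ordering: the modulo changes, so new messages for a key may land on a different partition than old ones.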

Kafka guarantees at-least-once message delivery, but can also be configured for at-most-once delivery. Log compaction is a technique Kafka provides to maintain a full dataset in the commit log. This lecture shows an example of a customer dataset kept entirely in Kafka, and explains the log tail, cleaner point, and log head, and how compaction impacts consumers.

Kafka Messages and Log Compaction
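
The effect of compaction on a keyed dataset can be simulated in a few lines. This is a logical model only; real compaction runs segment by segment on the broker and never touches the log head:

```python
def compact(log):
    # Log compaction retains only the most recent value for each key,
    # so the compacted log is a full snapshot of the dataset. A None
    # value plays the role of a Kafka tombstone and deletes the key.
    latest = {}
    for key, value in log:
        if value is None:
            latest.pop(key, None)
        else:
            latest[key] = value
    return latest

customer_log = [
    ("alice", "NY"),
    ("bob", "SF"),
    ("alice", "LA"),   # newer value for alice wins
    ("bob", None),     # tombstone: bob is removed
]
# compact(customer_log) == {"alice": "LA"}
```

A consumer that reads the compacted topic from the beginning can therefore rebuild the current state of the whole dataset, which is the customer-dataset use case the lecture walks through.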

A few example use cases of Kafka

Kafka Use Cases and Usage

The installation of Kafka on the Hortonworks Data Platform and a demo of a producer - consumer example.

Demo: Kafka Installation and Usage
+ Storm
5 lectures 23:18

This lecture provides an introduction to Storm, a realtime computing system. The architecture overview explains components like Nimbus, Zookeeper, and the Supervisor

Introduction to Storm

This lecture explains what Storm topologies are. I talk about streams, tuples, spouts, and bolts.

A Storm Topology

A demo of a Storm Topology ingesting data from Kafka and doing computation on the data.

Demo: Storm installation and Example Topology

Message Delivery explained:

  • At most once delivery
  • At least once delivery
  • Exactly once delivery

This lecture also explains Storm's reliability API (anchoring and acking) and the performance impact of acking.

Storm Message Processing and Reliability
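
The at-least-once guarantee can be sketched as replay-until-acked. This is an illustration of the idea only, not Storm's anchoring/acking API: "ack" here is simply a handler that returns, and "fail" is a handler that raises:

```python
def deliver_at_least_once(messages, handler, max_retries=3):
    # At-least-once semantics: a tuple that is not acked (here: whose
    # handler raised) is replayed, so a message can be processed more
    # than once but is never silently dropped.
    results = []
    for msg in messages:
        for _attempt in range(max_retries):
            try:
                results.append(handler(msg))
                break          # success -> ack, move on
            except Exception:
                continue       # failure -> replay the tuple
    return results

attempts = {}
def flaky(msg):
    # Fails the first time it sees "b", succeeds on the replay.
    attempts[msg] = attempts.get(msg, 0) + 1
    if msg == "b" and attempts[msg] == 1:
        raise RuntimeError("transient failure")
    return msg.upper()

out = deliver_at_least_once(["a", "b"], flaky)
# out == ["A", "B"], and "b" was processed twice
```

The possible duplicate processing on replay is exactly why downstream logic should be idempotent under at-least-once delivery, and why exactly-once requires the extra machinery that Trident provides.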

An introduction to the Trident API, an alternative interface for Storm that supports exactly-once processing of messages.

Trident
+ Spark Streaming
7 lectures 17:35

Spark Streaming is an alternative to Storm that has gained a lot of popularity in the last few years. It lets you reuse the code you wrote for batch processing and use it for stream processing.

Introduction to Spark Streaming

Spark Streaming generates DStreams, micro-batches of RDDs. This lecture explains the Spark Streaming Architecture

Spark Streaming Architecture

This lecture explains possible receivers, like Kafka. It also shows a WordCount streaming example, where data is ingested from Kafka and processed using WordCount in Spark Streaming

Spark Receivers and WordCount Streaming Example

This demo shows the Kafka-spark-streaming example.

Demo: Spark Streaming with Kafka

In the previous lecture we did a WordCount using Spark Streaming, but our example was stateless. In this lecture I add state, using updateStateByKey to keep state and checkpointing to save the data to HDFS.

Spark Streaming State and Checkpointing
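
The stateful counting described above can be sketched in plain Python. This models only the logic of updateStateByKey: each micro-batch is folded into a running per-key state; the function and parameter names are illustrative, and real checkpointing to HDFS is represented here by an optional seed dict:

```python
def update_running_counts(batches, checkpoint=None):
    # Fold each micro-batch of (word, n) pairs into a running per-key
    # state. In a real job the state is checkpointed to HDFS so it
    # survives driver restarts; here `checkpoint` just seeds the state.
    state = dict(checkpoint or {})
    for batch in batches:
        for word, n in batch:
            state[word] = state.get(word, 0) + n
    return state

micro_batches = [
    [("spark", 1), ("kafka", 1)],   # batch at t=0
    [("spark", 2)],                 # batch at t=1
]
totals = update_running_counts(micro_batches)
# totals == {"spark": 3, "kafka": 1}
```

Restart recovery works the same way: resuming with the checkpointed state and replaying only the new batches yields the same totals as processing the whole stream.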

A demo of a stateful Spark Streaming application that performs a global WordCount on a Kafka topic and checkpoints its state to HDFS.

Demo: Stateful Spark Streaming

More Spark Streaming features, like windowing and streaming algorithms

More Spark Streaming Features