Learn Big Data: The Hadoop Ecosystem Masterclass
Master the Hadoop ecosystem using HDFS, MapReduce, Yarn, Pig, Hive, Kafka, HBase, Spark, Knox, Ranger, Ambari, Zookeeper
Bestselling
4.3 (435 ratings)
2,997 students enrolled
Created by Edward Viaene
Last updated 1/2017
English
Current price: $10, Original price: $40, Discount: 75% off
30-Day Money-Back Guarantee
Includes:
  • 6 hours on-demand video
  • 1 Article
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Process Big Data using batch processing
  • Process Big Data using realtime processing
  • Be familiar with the technologies in the Hadoop Stack
  • Be able to install and configure the Hortonworks Data Platform (HDP)
Requirements
  • You will need to have a background in IT. The course is aimed at Software Engineers, System Administrators, and DBAs who want to learn about Big Data
  • Knowing any programming language will enhance your course experience
  • The course contains demos you can try out on your own machine. To run the Hadoop cluster on your own machine, you will need to run a virtual machine; 8 GB of RAM or more is recommended.
Description

In this course you will learn Big Data using the Hadoop Ecosystem. Why Hadoop? It is one of the most sought-after skills in the IT industry. The average salary in the US is $112,000 per year, up to an average of $160,000 in San Francisco (source: Indeed).

The course is aimed at Software Engineers, Database Administrators, and System Administrators who want to learn about Big Data. Other IT professionals can also take this course, but might have to do some extra research to understand some of the concepts.

You will learn how to use the most popular software in the Big Data industry at the moment, using batch processing as well as realtime processing. This course will give you enough background to be able to discuss real problems and solutions with experts in the industry. Adding these technologies to your LinkedIn profile will attract recruiters and can help you land interviews at some of the most prestigious companies in the world.

The course is very practical, with more than 6 hours of lectures. You are encouraged to try out everything yourself, which adds multiple hours of hands-on learning. If you get stuck with the technology while trying things out, there is support available: I will answer your messages on the message boards, and we have a Facebook group where you can post questions.

Who is the target audience?
  • This course is for anyone who wants to know how Big Data works and what technologies are involved
  • The main focus is on the Hadoop ecosystem. We don't cover technologies that are not part of the Hortonworks Data Platform stack
  • The course compares MapR, Cloudera, and Hortonworks, but we only use the Hortonworks Data Platform (HDP) in the demos
Curriculum For This Course
97 Lectures, 05:56:08 total
Introduction
2 Lectures 04:05

Course introduction, lecture overview, course objectives

Preview 03:01

This document provides a guide for doing the demos in this course

Course Guide
01:04
What is Big Data and Hadoop
5 Lectures 15:34

The 3 (or 4) V's of Big Data explained

Preview 02:16

What is Big Data? Some examples of companies using Big Data, like Spotify, Amazon, Google, and Tesla

Preview 03:29

What can we do with Big Data? Data Science explained.

What is Data Science
02:19

How do you build a Big Data system? What is Hadoop?

What is Hadoop
04:13

Hadoop Distributions: a comparison between Apache Hadoop, Hortonworks Data Platform, Cloudera, and MapR

Hadoop Distributions
03:17

What is Big Data Quiz
12 questions
Introduction to Hadoop
16 Lectures 01:14:03

How to install Hadoop? You can install Hadoop using Vagrant with VirtualBox / VMware, or in the cloud using AWS. Hortonworks also provides a Sandbox.

Hadoop Installation
04:40

This is a demo of how to install and use the Hortonworks Sandbox, an alternative to the full installation using Ambari if your machine doesn't have a lot of memory available. You can also use both in conjunction.

Demo: Hortonworks Sandbox
04:21

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your laptop or desktop

Demo: Hadoop Installation - Part 1
04:58

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your laptop or desktop (Part 2)

Demo: Hadoop Installation - Part 2
06:38

An introduction to HDFS, the Hadoop Distributed File System

Introduction to HDFS
03:28

Communications between the DataNode and the NameNode explained

DataNode Communications
01:15

An introduction to HDFS using hadoop fs -put. I also show how a file gets divided into blocks and where those blocks are stored.

Demo: HDFS - Part 1
05:45

An introduction to downloading, uploading and listing files. This time I'm using the Ambari HDFS Viewer and the NameNode UI. I also show what configuration changes are necessary to make this work.

Demo: HDFS - Part 2 - Using Ambari
04:59

MapReduce WordCount, explained step by step

MapReduce WordCount Example
04:17
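
For reference, here is a minimal WordCount sketch in the style of Hadoop Streaming with Python (an assumption on my part; the lecture's demo may well use the bundled Java example instead). It shows the mapper and reducer roles the lecture walks through:

    # mapper.py - emits "word<TAB>1" for every word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    # reducer.py - input arrives sorted by key, so counts can be summed per word
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word == current:
            count += int(n)
        else:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

You would submit these with the hadoop-streaming jar that ships with HDP, passing the two scripts as the -mapper and -reducer options.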

A demo of MapReduce WordCount on our HDP cluster

Demo: MapReduce WordCount
07:05

In HDFS, files are divided into blocks and stored on the DataNodes. In this lecture we're going to see what happens when we're reading lines from files that potentially span multiple blocks.

Lines that span blocks
02:29

Introducing Yarn and concepts like the ResourceManager, the Scheduler, the ApplicationsManager, the NodeManager, and the ApplicationMaster. I explain how an application is executed and what the consequences are when a node crashes.

Introduction to Yarn
04:20

A demo of an application executed using yarn jar. I provide an overview of Ambari Yarn metrics and the ResourceManager UI

Demo: Yarn and ResourceManager UI
05:45

Ambari exposes a REST API; commands can be issued directly against this API. Ambari also lets you do unattended installs using Ambari Blueprints.

Ambari API and Blueprints
03:35
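
As a taste of what this lecture covers, here is a minimal sketch that talks to the Ambari REST API with Python's requests library; the hostname is hypothetical, and the admin/admin credentials and port 8080 are the stock Ambari defaults:

    import requests

    AMBARI = "http://ambari-host:8080/api/v1"   # hypothetical Ambari server
    AUTH = ("admin", "admin")                   # stock credentials; change these in production

    # List the clusters this Ambari server manages
    response = requests.get(AMBARI + "/clusters", auth=AUTH)
    print(response.json())

    # Modifying calls (e.g. registering a Blueprint) also need the X-Requested-By header:
    # requests.post(AMBARI + "/blueprints/my-blueprint", auth=AUTH,
    #               headers={"X-Requested-By": "ambari"}, data=blueprint_json)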

A demo showing you the Ambari API and how to work with Blueprints

Demo: Ambari API and Blueprints
08:38

An introduction to ETL processing in Hadoop. MapReduce, Pig, and Spark are suitable for batch processing. Hive is more suitable for data exploration.

ETL Processing in Hadoop
01:50

Introduction Quiz
5 questions
Pig
4 Lectures 15:07

An introduction to Pig and Pig Latin.

Introduction to Pig
02:36

This demo shows how to install Pig and Tez using Ambari on the Hortonworks Data Platform

Demo: Part 1 - Pig Installation
02:08

In this demo I will show you basic Pig commands to load, dump, and store data. I'll also show you an example of how to filter data.

Demo: Part 2 - Pig Commands
06:21

More Pig commands in this final part of the Pig demo. I'll go over commands like GROUP BY, FOREACH ... GENERATE, and COUNT()

Demo: Part 3 - More Pig Commands
04:02
Apache Spark
7 Lectures 26:22

An introduction to Apache Spark. This lecture explains the differences between running spark-submit in local mode, yarn-cluster mode, and yarn-client mode.

Introduction to Apache Spark
03:42

An introduction to WordCount in Spark using Python (pyspark)

Spark WordCount
02:36
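
As a companion to this lecture, here is a minimal pyspark WordCount sketch; the HDFS paths are hypothetical, and in the pyspark shell the SparkContext is already available as sc:

    from pyspark import SparkContext

    sc = SparkContext(appName="WordCount")  # the pyspark shell provides `sc` for you
    counts = (sc.textFile("hdfs:///user/demo/input.txt")         # hypothetical input path
                .flatMap(lambda line: line.split())              # split lines into words
                .map(lambda word: (word, 1))                     # pair each word with a 1
                .reduceByKey(lambda a, b: a + b))                # sum the 1s per word
    counts.saveAsTextFile("hdfs:///user/demo/wordcount-output")  # hypothetical output path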

Spark installation using Ambari and a demo of the Spark WordCount using the pyspark shell.

Demo: Spark installation and WordCount
04:36

This lecture gives an introduction to Resilient Distributed Datasets (RDDs). This abstraction allows you to do transformations and actions in Spark. I give an example using filter RDDs, and explain how shuffle RDDs impact disk and network I/O.

Preview 03:52
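
To make the transformation/action distinction concrete, here is a small sketch of my own (not taken from the course demo). Transformations like filter and map are lazy; nothing is computed until an action such as collect or count runs:

    from pyspark import SparkContext

    sc = SparkContext("local[2]", appName="RDDBasics")
    nums = sc.parallelize(range(1, 11))
    evens = nums.filter(lambda x: x % 2 == 0)  # transformation: lazy, nothing runs yet
    squares = evens.map(lambda x: x * x)       # transformation: still lazy
    print(squares.collect())                   # action: triggers the computation
    print(nums.count())                        # action: runs another job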

A demo of RDD transformations and actions in Spark

Demo: RDD Transformations and Actions
06:02

An overview of the most common RDD actions and transformations

Overview of RDD Transformations and Actions
03:36

An overview of what Spark MLlib (Machine Learning Library) can do. I explain a recommendation engine example, and a clustering example (K-Means / DBSCAN)

Spark MLlib
01:58
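
For flavor, here is a minimal K-Means sketch using pyspark.mllib; the data points are made up, since the lecture itself is an overview rather than a coding demo:

    from pyspark import SparkContext
    from pyspark.mllib.clustering import KMeans

    sc = SparkContext(appName="KMeansSketch")
    # Hypothetical 2-D points forming two obvious clusters
    points = sc.parallelize([[0.0, 0.0], [0.5, 0.2], [9.0, 9.0], [9.2, 8.8]])
    model = KMeans.train(points, k=2, maxIterations=10)
    print(model.clusterCenters)        # the two learned cluster centers
    print(model.predict([0.1, 0.1]))   # cluster index for a new point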
Hive
6 Lectures 23:46

An introduction to SQL on Hadoop using Hive, enabling data warehouse capabilities. This lecture provides an architecture overview and an overview of the Hive CLI and Beeline using JDBC.

Introduction to Hive
02:47

An overview of Hive queries: creating tables, creating databases, inserting data, and selecting data. This lecture also shows where Hive data is stored in HDFS.

Hive Queries
04:29

A demo that shows the installation of HiveServer2 and the clients. Afterwards I show you a few example queries using a Beeline JDBC connection.

Demo: Hive Installation and Hive Queries
07:33

Hive can't be optimized using indexes. This lecture explains how queries in Hive should be optimized, using partitions and buckets. It also covers User Defined Functions (UDFs) and serialization / deserialization (SerDes)

Hive Partitioning, Buckets, UDFs, and SerDes
04:32

The Stinger Initiative brings optimizations to Hive. Query times have dropped significantly over the years. This lecture explains the details.

The Stinger Initiative
02:42

You can also use Hive from Spark via the Spark SQLContext.

Hive in Spark
01:43
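
A minimal sketch of querying Hive from pyspark, assuming a hypothetical table named customers exists in the Hive metastore. In Spark 1.x this goes through HiveContext (a SQLContext with Hive support); Spark 2.x would use a SparkSession with enableHiveSupport():

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="HiveFromSpark")
    hive = HiveContext(sc)  # SQLContext variant that reads the Hive metastore
    # `customers` is a hypothetical table registered in the metastore
    df = hive.sql("SELECT country, count(*) AS cnt FROM customers GROUP BY country")
    df.show()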
Real Time Processing
1 Lecture 02:53

All the lectures up until now were batch-oriented. From now on we're going to discuss realtime processing technologies like Kafka, Storm, Spark Streaming, and HBase / Phoenix.

Introduction to Realtime Processing
02:53
Kafka
5 Lectures 19:14

An introduction to Kafka and its terminology: Producers, Consumers, Topics, and Partitions.

Introduction to Kafka
01:42

An explanation of Kafka Topics covering leader partitions, follower partitions, and how writes are sent to the partitions. Also covers consumer groups to show the difference between the publish-subscribe (pub-sub) mechanism and queuing.

Kafka Topics
04:10

Kafka guarantees at-least-once message delivery, but can also be configured for at-most-once. Log Compaction is a technique Kafka provides to maintain a full dataset in the commit log. This lecture shows an example of a customer dataset fully kept in Kafka and explains the Log Tail, Cleaner Point, and Log Head, and how they impact consumers.

Kafka Messages and Log Compaction
04:04

A few example use cases of Kafka

Kafka Use Cases and Usage
02:47

The installation of Kafka on the Hortonworks Data Platform and a demo of a producer/consumer example.

Demo: Kafka Installation and Usage
06:31
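
The demo itself likely uses Kafka's console producer and consumer scripts; as an alternative sketch, the same round trip with the kafka-python library looks like this. The broker address is an assumption (note that HDP's Kafka brokers listen on port 6667 by default rather than the usual 9092):

    from kafka import KafkaProducer, KafkaConsumer

    BROKER = "sandbox.hortonworks.com:6667"  # hypothetical broker address

    producer = KafkaProducer(bootstrap_servers=BROKER)
    producer.send("test-topic", b"hello from python")
    producer.flush()  # make sure the message is actually sent

    consumer = KafkaConsumer("test-topic",
                             bootstrap_servers=BROKER,
                             auto_offset_reset="earliest",  # read the topic from the start
                             consumer_timeout_ms=5000)      # stop iterating after 5s idle
    for message in consumer:
        print(message.value)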
Storm
5 Lectures 23:18

This lecture provides an introduction to Storm, a realtime computing system. The architecture overview explains components like Nimbus, Zookeeper, and the Supervisor

Introduction to Storm
02:49

This lecture explains what Storm topologies are. I talk about streams, tuples, spouts, and bolts.

A Storm Topology
04:14

A demo of a Storm Topology ingesting data from Kafka and doing computation on the data.

Demo: Storm installation and Example Topology
09:33

Message Delivery explained:

  • At most once delivery
  • At least once delivery
  • Exactly once delivery

This lecture also explains Storm's reliability API (anchoring and acking) and the performance impact of acking.

Storm Message Processing and Reliability
04:00

An introduction to the Trident API, an alternative interface for Storm that supports exactly-once processing of messages.

Trident
02:42
Spark Streaming
7 Lectures 17:35

Spark Streaming is an alternative to Storm that has gained a lot of popularity in the last few years. It allows you to reuse code you wrote for batch processing and apply it to stream processing.

Introduction to Spark Streaming
01:57

Spark Streaming generates DStreams, micro-batches of RDDs. This lecture explains the Spark Streaming Architecture

Spark Streaming Architecture
01:32

This lecture explains possible receivers, like Kafka. It also shows a streaming WordCount example, where data is ingested from Kafka and processed with WordCount in Spark Streaming.

Spark Receivers and WordCount Streaming Example
03:28
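
Here is a minimal sketch of the Kafka-to-WordCount pipeline this lecture describes, using the receiver-based KafkaUtils.createStream API from Spark 1.x/2.x; the Zookeeper address, consumer group, and topic name are assumptions:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext(appName="KafkaWordCount")
    ssc = StreamingContext(sc, 5)  # 5-second micro-batches
    # (Zookeeper quorum, consumer group, {topic: receiver threads}) are hypothetical
    stream = KafkaUtils.createStream(ssc, "localhost:2181", "wc-group", {"sentences": 1})
    counts = (stream.map(lambda kv: kv[1])               # keep only the message value
                    .flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))    # per-batch counts (stateless)
    counts.pprint()
    ssc.start()
    ssc.awaitTermination()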

This demo shows the Kafka Spark Streaming example.

Demo: Spark Streaming with Kafka
03:57

In the previous lecture we did a WordCount using Spark Streaming, but our example was stateless. In this lecture I'm adding state, using updateStateByKey to keep state and checkpointing to save the data to HDFS.

Spark Streaming State and Checkpointing
02:09
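
A sketch of the stateful version; updateStateByKey keeps a running total per word and requires a checkpoint directory (the path and the socket source below are stand-ins, since the actual demo ingests from Kafka):

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    def update_count(new_values, running_total):
        # running_total is None the first time a key is seen
        return sum(new_values) + (running_total or 0)

    sc = SparkContext(appName="StatefulWordCount")
    ssc = StreamingContext(sc, 5)
    ssc.checkpoint("hdfs:///user/demo/checkpoints")  # hypothetical checkpoint directory
    lines = ssc.socketTextStream("localhost", 9999)  # stand-in source; the demo uses Kafka
    totals = (lines.flatMap(lambda l: l.split())
                   .map(lambda w: (w, 1))
                   .updateStateByKey(update_count))  # global counts across all batches
    totals.pprint()
    ssc.start()
    ssc.awaitTermination()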

A demo of a stateful Spark Streaming application that performs a global WordCount on a Kafka topic and does checkpointing in HDFS.

Demo: Stateful Spark Streaming
03:24

More Spark Streaming Features, like Windowing and streaming algorithms

More Spark Streaming Features
01:08
(7 more sections not shown)
About the Instructor
Edward Viaene
4.3 Average rating
2,441 Reviews
13,736 Students
5 Courses
DevOps, Cloud, Big Data Specialist

I've been a System Administrator and full stack developer for over 10 years, the typical profile for a DevOps engineer. I've worked in multiple organizations and startups, and I've co-founded a startup that focuses on applying DevOps and Cloud. I have been training people in newer technologies, like Big Data, and I've trained a lot of people working in FTSE 100 and S&P 100 companies. Today I mainly work with companies to improve their software delivery processes, while coaching and teaching on platforms like Udemy.