Learn Big Data: The Hadoop Ecosystem Masterclass

Master the Hadoop ecosystem using HDFS, MapReduce, YARN, Pig, Hive, Kafka, HBase, Spark, Knox, Ranger, Ambari, ZooKeeper
4.2 (180 ratings)
1,321 students enrolled · Bestselling in Big Data
Instructed by Edward Viaene · IT & Software / Other
$19 (originally $40, 52% off)
  • Lectures: 97
  • Length: 6 hours
  • Skill Level: All Levels
  • Languages: English
  • Includes: Lifetime access, 30 day money back guarantee, available on iOS and Android, Certificate of Completion

About This Course

Published 3/2016 · English

Course Description

In this course you will learn Big Data using the Hadoop Ecosystem. Why Hadoop? It is one of the most sought-after skills in the IT industry. The average salary in the US is $112,000 per year, up to an average of $160,000 in San Francisco (source: Indeed).

The course is aimed at Software Engineers, Database Administrators, and System Administrators who want to learn about Big Data. Other IT professionals can also take this course, but might have to do some extra research to understand some of the concepts.

You will learn how to use the most popular software in the Big Data industry at the moment, using batch processing as well as realtime processing. This course will give you enough background to be able to discuss real problems and solutions with experts in the industry. Updating your LinkedIn profile with these technologies will attract recruiters and help you land interviews at the most prestigious companies in the world.

The course is very practical, with more than 6 hours of lectures. You are encouraged to try out everything yourself, which adds multiple hours of learning. If you get stuck with the technology while trying things out, support is available: I will answer your messages on the message boards, and we have a Facebook group where you can post questions.

What are the requirements?

  • You will need to have a background in IT. The course is aimed at Software Engineers, System Administrators, and DBAs who want to learn about Big Data
  • Knowing any programming language will enhance your course experience
  • The course contains demos you can try out on your own machine. To run the Hadoop cluster on your own machine, you will need to run a virtual server. 8 GB of RAM or more is recommended.

What am I going to get from this course?

  • Process Big Data using batch processing
  • Process Big Data using realtime processing
  • Be familiar with the technologies in the Hadoop Stack
  • Be able to install and configure the Hortonworks Data Platform (HDP)

What is the target audience?

  • This course is for anyone who wants to know how Big Data works, and which technologies are involved
  • The main focus is on the Hadoop ecosystem. We don't cover any technologies that are not part of the Hortonworks Data Platform stack
  • The course compares MapR, Cloudera, and Hortonworks, but we only use the Hortonworks Data Platform (HDP) in the demos

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Introduction
03:01

Course introduction, lecture overview, course objectives

Article

This document provides a guide for doing the demos in this course

Section 2: What is Big Data and Hadoop
02:16

The 3 (or 4) V's of Big Data explained

03:29

What is Big Data? Some examples of companies using Big Data, like Spotify, Amazon, Google, and Tesla

02:19

What can we do with Big Data? Data Science explained.

04:13

How to build a Big Data System? What is Hadoop?

03:17

Hadoop Distributions: a comparison between Apache Hadoop, Hortonworks Data Platform, Cloudera, and MapR

What is Big Data Quiz
12 questions
Section 3: Introduction to Hadoop
04:40

How to install Hadoop? You can install Hadoop using Vagrant with VirtualBox / VMware, or in the cloud using AWS. Hortonworks also provides a Sandbox.

04:21

This is a demo of how to install and use the Hortonworks Sandbox, an alternative to the full installation using Ambari if your machine doesn't have a lot of memory available. You can also use both in conjunction.

04:58

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your Laptop or Desktop

06:38

A walkthrough of how to install the Hortonworks Data Platform (HDP) on your Laptop or Desktop (Part II)

03:28

An introduction to HDFS, The Hadoop Distributed Filesystem

01:15

Communications between the DataNode and the NameNode explained

05:45

An introduction to HDFS using hadoop fs put. I also show how a file gets divided into blocks and where those blocks are stored.

04:59

An introduction to downloading, uploading and listing files. This time I'm using the Ambari HDFS Viewer and the NameNode UI. I also show what configuration changes are necessary to make this work.

04:17

MapReduce WordCount, step by step explained
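
To make the map and reduce steps concrete, here is a minimal WordCount sketch in Python written in the Hadoop Streaming style. This is an illustration only, not the exact job from the demo (which runs the standard WordCount example on the HDP cluster); the local map-sort-reduce driver at the bottom just simulates the shuffle.

```python
# WordCount in the Hadoop Streaming style: the mapper emits "word<TAB>1" pairs,
# the framework sorts them by key (the shuffle), and the reducer sums per word.
# In a real Streaming job, the map and reduce parts would be two separate
# scripts passed to hadoop-streaming.jar with -mapper and -reducer.
import sys

def map_lines(lines):
    for line in lines:
        for word in line.strip().split():
            yield "%s\t%d" % (word, 1)

def reduce_lines(lines):
    current_word, current_count = None, 0
    for line in lines:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                yield "%s\t%d" % (current_word, current_count)
            current_word, current_count = word, int(count)
    if current_word is not None:
        yield "%s\t%d" % (current_word, current_count)

if __name__ == "__main__":
    # Simulate the MapReduce flow locally: map, sort (the shuffle), reduce.
    mapped = sorted(map_lines(sys.stdin))
    for out in reduce_lines(mapped):
        print(out)
```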

07:05

A demo of MapReduce WordCount on our HDP cluster

02:29

In HDFS, files are divided into blocks and stored on the DataNodes. In this lecture we're going to see what happens when we're reading lines from files that potentially span multiple blocks.

04:20

Introducing YARN and concepts like the ResourceManager, the Scheduler, the ApplicationsManager, the NodeManager, and the ApplicationMaster. I explain how an application is executed and what the consequences are when a node crashes.

05:45

A demo of an application executed using yarn jar. I provide an overview of Ambari YARN metrics and the ResourceManager UI.

03:35

Ambari also exposes a REST API. Commands can be issued directly against this API. Ambari also lets you do unattended installs using Ambari Blueprints.
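
For example, listing clusters and services over the Ambari REST API takes only a few lines of Python. This is a sketch only: the server address, the cluster name "mycluster", and the default admin/admin credentials are placeholder assumptions.

```python
# Sketch: querying the Ambari REST API with the requests library.
# Server address, credentials and cluster name are placeholders.
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"
AUTH = ("admin", "admin")                      # default Ambari credentials
HEADERS = {"X-Requested-By": "ambari"}         # header Ambari requires for modifying calls

# List the clusters managed by this Ambari server
clusters = requests.get(AMBARI + "/clusters", auth=AUTH, headers=HEADERS).json()
print([c["Clusters"]["cluster_name"] for c in clusters["items"]])

# List the services of one cluster
services = requests.get(AMBARI + "/clusters/mycluster/services",
                        auth=AUTH, headers=HEADERS).json()
print([s["ServiceInfo"]["service_name"] for s in services["items"]])
```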

08:38

A demo showing you the Ambari API and how to work with blueprints

01:50

An introduction to ETL processing in Hadoop. MapReduce, Pig, and Spark are suitable for batch processing. Hive is more suitable for data exploration.

Introduction Quiz
5 questions
Section 4: Pig
02:36

An introduction to Pig and Pig Latin.

02:08

This demo shows how to install Pig and Tez using Ambari on the Hortonworks Data Platform

06:21

In this demo I will show you basic Pig commands to load, dump, and store data. I'll also show you an example of how to filter data.

04:02

More Pig commands in this final part of the Pig demo. I'll go over commands like GROUP BY, FOREACH ... GENERATE, and COUNT().

Section 5: Apache Spark
03:42

An introduction to Apache Spark. This lecture explains the differences between spark-submit in local mode, yarn-cluster mode, and yarn-client mode.

02:36

An introduction to WordCount in Spark using Python (pyspark)
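
The WordCount itself fits in a handful of PySpark lines; this is a minimal sketch, and the HDFS input and output paths are placeholders.

```python
# wordcount.py -- minimal PySpark WordCount (HDFS paths are placeholders)
from pyspark import SparkContext

sc = SparkContext(appName="WordCount")

counts = (sc.textFile("hdfs:///user/hadoop/input.txt")   # read the input lines
            .flatMap(lambda line: line.split())          # split each line into words
            .map(lambda word: (word, 1))                 # pair every word with a 1
            .reduceByKey(lambda a, b: a + b))            # sum the 1s per word

counts.saveAsTextFile("hdfs:///user/hadoop/wordcount-output")
sc.stop()
```

The same statements can also be typed interactively into the pyspark shell, which is what the demo in the next lecture does.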

04:36

Spark installation using Ambari and a demo of the Spark WordCount using the pyspark shell.

03:52

This lecture gives an introduction to Resilient Distributed Datasets (RDDs). This abstraction allows you to do transformations and actions in Spark. I give an example using filter, and explain how shuffles impact disk and network IO.

06:02

A demo of RDD transformations and actions in Spark

03:36

An overview of the most common RDD actions and transformations
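
As a quick reference, here are a few of the common transformations (lazy) and actions (which trigger execution), sketched for the pyspark shell where sc is already defined; the sample data is made up.

```python
# In the pyspark shell, sc (the SparkContext) is already available.
rdd = sc.parallelize([1, 2, 3, 4, 5, 6])

evens   = rdd.filter(lambda x: x % 2 == 0)        # transformation (lazy)
doubled = rdd.map(lambda x: x * 2)                # transformation (lazy)
pairs   = rdd.map(lambda x: (x % 3, x))           # key each value by x mod 3
summed  = pairs.reduceByKey(lambda a, b: a + b)   # transformation that causes a shuffle

print(evens.collect())      # action -> [2, 4, 6]
print(doubled.take(3))      # action -> [2, 4, 6]
print(rdd.count())          # action -> 6
print(summed.collect())     # action -> e.g. [(0, 9), (1, 5), (2, 7)]
```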

01:58

An overview of what Spark MLlib (Machine Learning Library) can do. I explain a recommendation engine example and a clustering example (K-Means / DBSCAN).
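
For a flavour of the MLlib API, here is a small K-Means sketch in PySpark (the recommendation engine example would use ALS from the same library). The sample points are made up and the sketch assumes the pyspark shell, where sc already exists.

```python
# Sketch: K-Means clustering with Spark MLlib (sample points are made up).
from pyspark.mllib.clustering import KMeans

points = sc.parallelize([
    [1.0, 1.0], [1.5, 2.0], [1.2, 0.8],    # roughly one cluster
    [8.0, 8.0], [8.5, 9.0], [7.8, 8.2],    # roughly another cluster
])

model = KMeans.train(points, k=2, maxIterations=10)
print(model.clusterCenters)           # the two learned cluster centres
print(model.predict([1.1, 1.3]))      # cluster index for a new point
```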

Section 6: Hive
02:47

An introduction to SQL on Hadoop using Hive, enabling data warehouse capabilities. This lecture provides an architecture overview and an overview of the Hive CLI and Beeline using JDBC.

04:29

An overview of Hive queries: creating tables, creating databases, inserting data, and selecting data. This lecture also shows where the Hive data is stored in HDFS.

07:33

A demo that shows the installation of HiveServer2 and the clients. Afterwards I show you a few example queries using a Beeline JDBC connection.

04:32

Hive can't be optimized using indexes. This lecture explains how queries in Hive should be optimized, using partitions and buckets. This lecture also covers User Defined Functions (UDFs) and Serialization / Deserialization.

02:42

The Stinger initiative brings optimizations to Hive. Query times have dropped significantly over the years. This lecture explains the details.

01:43

You can also use Hive in Spark using the Spark SQLContext.
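
A minimal sketch of what that looks like in PySpark: in Spark 1.x the Hive-aware context is HiveContext (a SQLContext with Hive support). The database and table names are placeholders, and the sketch assumes the pyspark shell, where sc already exists.

```python
# Sketch: querying a Hive table from PySpark (Spark 1.x style); names are placeholders.
from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)           # a SQLContext backed by the Hive metastore

df = sqlContext.sql("SELECT col1, COUNT(*) AS cnt "
                    "FROM mydb.mytable GROUP BY col1")
df.show()                              # results come back as a DataFrame
```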

Section 7: Real Time Processing
02:53

All the lectures up until now were batch oriented. From now on we're going to discuss Realtime processing technologies like Kafka, Storm, Spark Streaming, and HBase / Phoenix.

Section 8: Kafka
01:42

An introduction to Kafka and its terminology like Producers, Consumers, Topics and Partitions.

04:10

An explanation of Kafka topics, covering leader partitions, follower partitions, and how writes are sent to the partitions. It also covers consumer groups to show the difference between the publish-subscribe (pub-sub) mechanism and queuing.

04:04

Kafka guarantees at-least-once message delivery, but can also be configured for at-most-once. Log compaction is a technique Kafka provides to maintain a full dataset in the commit log. This lecture shows an example of a customer dataset kept entirely in Kafka and explains the log tail, cleaner point, and log head, and how they impact consumers.

02:47

A few example use cases of Kafka

06:31

The installation of Kafka on the Hortonworks Data Platform and a demo of a producer-consumer example.
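
The demo uses the Kafka console producer and consumer; the sketch below shows the same flow from Python using the kafka-python library, which is an assumption for illustration and not part of the course. The broker address uses 6667, the default Kafka port on HDP, and the topic and group names are made up.

```python
# Sketch: a Kafka producer and consumer with the kafka-python library
# (an assumption for illustration; the demo uses the console tools).
from kafka import KafkaProducer, KafkaConsumer

BROKER = "sandbox.hortonworks.com:6667"    # placeholder broker address (HDP default port)

# Producer: write a few messages to a topic
producer = KafkaProducer(bootstrap_servers=BROKER)
for i in range(3):
    producer.send("test-topic", ("message %d" % i).encode("utf-8"))
producer.flush()

# Consumer: read the topic from the beginning as part of a consumer group
consumer = KafkaConsumer("test-topic",
                         bootstrap_servers=BROKER,
                         group_id="demo-group",
                         auto_offset_reset="earliest")
for message in consumer:
    print(message.partition, message.offset, message.value)
```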

Section 9: Storm
02:49

This lecture provides an introduction to Storm, a realtime computing system. The architecture overview explains components like Nimbus, Zookeeper, and the Supervisor

04:14

This lecture explains what Storm topologies are. I talk about streams, tuples, spouts, and bolts.

09:33

A demo of a Storm Topology ingesting data from Kafka and doing computation on the data.

04:00

Message Delivery explained:

  • At most once delivery
  • At least once delivery
  • Exactly once delivery

This lecture also explains Storm's reliability API (anchoring and acking) and the performance impact of acking.

02:42

An introduction to the Trident API, an alternative interface for Storm that supports exactly-once processing of messages.

Section 10: Spark Streaming
01:57

Spark Streaming is an alternative to Storm that has gained a lot of popularity in the last few years. It allows you to reuse code you wrote for batch processing and use it for stream processing.

01:32

Spark Streaming generates DStreams, micro-batches of RDDs. This lecture explains the Spark Streaming Architecture

03:28

This lecture explains possible receivers, like Kafka. It also shows a WordCount streaming example, where data is ingested from Kafka and processed using WordCount in Spark Streaming
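
Here is a sketch of that Kafka-to-WordCount pipeline in PySpark, using the Spark 1.x streaming API. The topic name, broker address, and 5-second batch interval are placeholder assumptions, and running it requires the spark-streaming-kafka package on the classpath.

```python
# Sketch: WordCount over a Kafka topic with Spark Streaming (Spark 1.x API).
# Broker address, topic name and batch interval are placeholders.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 5)                          # 5-second micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["test-topic"], {"metadata.broker.list": "sandbox.hortonworks.com:6667"})

counts = (stream.map(lambda kv: kv[1])                 # keep only the message value
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))      # count per word, per batch

counts.pprint()                                        # print each batch's counts
ssc.start()
ssc.awaitTermination()
```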

03:57

This demo shows the Kafka-Spark Streaming example.

02:09

In the previous lecture we did a WordCount using Spark Streaming, but our example was stateless. In this lecture I'm adding state, using updateStateByKey to keep state and checkpointing to save the data to HDFS.
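
A minimal sketch of the stateful version: updateStateByKey keeps a running count per word across batches, and checkpointing persists the state to HDFS. The broker, topic, and checkpoint path below are placeholder assumptions.

```python
# Sketch: stateful WordCount with updateStateByKey; broker, topic and
# checkpoint path are placeholder assumptions.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="StatefulWordCount")
ssc = StreamingContext(sc, 5)
ssc.checkpoint("hdfs:///user/spark/checkpoints")       # required for stateful operations

def update_count(new_values, running_count):
    # new_values: counts from the current micro-batch; running_count: state so far
    return sum(new_values) + (running_count or 0)

stream = KafkaUtils.createDirectStream(
    ssc, ["test-topic"], {"metadata.broker.list": "sandbox.hortonworks.com:6667"})

totals = (stream.map(lambda kv: kv[1])
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .updateStateByKey(update_count))       # global running count per word

totals.pprint()
ssc.start()
ssc.awaitTermination()
```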

03:24

A demo of a stateful Spark Streaming application. It performs a global WordCount on a topic from Kafka and does checkpointing in HDFS.

01:08

More Spark Streaming Features, like Windowing and streaming algorithms

Section 11: HBase
02:18

Introduction to HBase: a realtime, distributed, scalable, big data store on top of Hadoop. The lecture also briefly explains the CAP theorem.

02:58

An HBase table is different from a table in a relational database. This lecture explains the differences and talks about the row key, column families, column qualifiers, versions, and regions.

02:33

A lecture that explains the hbase:meta table, which a client locates via ZooKeeper when it connects. This way the client knows which RegionServer to contact to read or write data.

02:12

This lecture shows how a write (a PUT request) is handled by HBase. It shows how writes go to the WAL (write-ahead log) and the MemStore. I also show how flushes persist the data to HDFS.

02:48

HBase reads go to the MemStore and the BlockCache first, then to the HFiles on HDFS. The lecture shows how indexes and Bloom filters are used to speed up reads from disk.

02:13

HBase does minor and major compactions to merge HFiles in HDFS.

01:57

This lecture explains how a crash recovery in HBase happens, how Zookeeper and the HMaster are involved, how recovery uses the WAL files and how data is persisted to disk after a crash.

02:13

When tables become bigger, they split. This lecture explains how regions are split and balanced over the RegionServers, and how pre-splitting can help with performance.

01:44

HBase hotspotting is something to avoid. This lecture explains when hotspotting can happen and how to avoid it using salting.

02:41

This demo shows how to install HBase using Ambari.

07:07

This demo gives you an introduction to the HBase shell, where tables can be created, data can be retrieved using get / scan, and data can be written using put

09:18

An example of a stateful Spark Streaming application that ingests data from a Kafka topic, runs WordCount on the data, and stores the results in an HBase table.
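
To illustrate just the HBase write side from Python, the sketch below uses the happybase library over the HBase Thrift server. This is an assumption for illustration and not necessarily how the demo application writes its results; the table name, column family, and row keys are made up.

```python
# Sketch: writing word counts into an HBase table from Python via happybase
# (assumes an HBase Thrift server is running; all names below are made up).
import happybase

connection = happybase.Connection("sandbox.hortonworks.com")
table = connection.table("wordcounts")            # table with a column family 'cf'

# Put one row per word; HBase stores keys and values as bytes
table.put(b"hello", {b"cf:count": b"42"})
table.put(b"world", {b"cf:count": b"17"})

# Read a single row back, then scan all rows whose key starts with 'h'
print(table.row(b"hello"))
for key, data in table.scan(row_prefix=b"h"):
    print(key, data)

connection.close()
```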

Section 12: Phoenix
02:30

An introduction to Phoenix, which brings SQL back into HBase.

03:12

An overview of Phoenix features like salting, compression, and indexes, all implemented using standard SQL commands to make it easier for database administrators and analysts to use HBase.

02:36

More Phoenix features like JOINs, VIEWs, and a Phoenix in Spark plugin.

06:18

A demo showing the Phoenix features

Section 13: Hadoop Security
02:39

An introduction to Kerberos, which we are going to use to secure our Hadoop cluster

00:56

An overview of different deployment strategies of Kerberos in Hadoop

02:04

Getting familiar with Kerberos Technologies like Principals, Realms, and keytabs

10:06

A demo showing how to install MIT Kerberos, how to enable Kerberos in Ambari, and how this impacts users of HDFS

01:53

Introduction to SPNEGO, protecting the HTTP interfaces in Hadoop against unauthorized access

03:29

A demo showing how SPNEGO works

01:07

The Knox gateway provides a single entry point to the Hadoop APIs and UIs. This lecture explains the Knox gateway architecture and how it can be used.

Section 14: Ranger
03:26

This lecture gives an introduction to Ranger, which can be used for access control (authorization) on the Hadoop services

05:41

Demo of installing Ranger using Ambari

04:52

A demo of Ranger with Hive. Ranger can be used to put granular access controls on Hive databases, tables, and columns.

Instructor Biography

Edward Viaene, DevOps, Cloud, Big Data Specialist

I've been a System Administrator and full-stack developer for over 10 years, the typical profile of a DevOps engineer. I've worked in multiple organizations and startups, and I've co-founded a startup that focuses on applying DevOps and Cloud. I have been training people in newer technologies like Big Data, and I've trained a lot of people working in FTSE 100 and S&P 100 companies. Today I mainly work with companies to improve their software delivery processes, while coaching and teaching on platforms like Udemy.
