The Ultimate Hands-On Hadoop - Tame your Big Data!

Hadoop, MapReduce, HDFS, Spark, Pig, Hive, HBase, MongoDB, Cassandra, Flume - the list goes on! Over 25 technologies.
Bestselling
4.6 (1,873 ratings)
13,474 students enrolled
Last updated 5/2017
English
Current price: $10 Original price: $180 Discount: 94% off
30-Day Money-Back Guarantee
Includes:
  • 14.5 hours on-demand video
  • 2 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Design distributed systems that manage "big data" using Hadoop and related technologies.
  • Use HDFS and MapReduce for storing and analyzing data at scale.
  • Use Pig and Spark to create scripts to process data on a Hadoop cluster in more complex ways.
  • Analyze relational data using Hive and MySQL.
  • Analyze non-relational data using HBase, Cassandra, and MongoDB.
  • Query data interactively with Drill, Phoenix, and Presto.
  • Choose an appropriate data storage technology for your application.
  • Understand how Hadoop clusters are managed by YARN, Tez, Mesos, Zookeeper, Zeppelin, Hue, and Oozie.
  • Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume.
  • Consume streaming data using Spark Streaming, Flink, and Storm.
View Curriculum
Requirements
  • You will need access to a PC running 64-bit Windows, MacOS, or Linux with an Internet connection, if you want to participate in the hands-on activities and exercises. You must have at least 8GB of free RAM on your system; 10GB or more is recommended. If your PC does not meet these requirements, you can still follow along in the course without doing hands-on activities.
  • Some activities will require prior programming experience, preferably in Python or Scala.
  • A basic familiarity with the Linux command line will be very helpful.
Description

The world of Hadoop and "Big Data" can be intimidating - hundreds of different technologies with cryptic names form the Hadoop ecosystem. With this course, you'll not only understand what those systems are and how they fit together - but you'll go hands-on and learn how to use them to solve real business problems!

Learn and master the most popular big data technologies in this comprehensive course, taught by a former engineer and senior manager from Amazon and IMDb. We'll go way beyond Hadoop itself, and dive into all sorts of distributed systems you may need to integrate with.

  • Install and work with a real Hadoop installation right on your desktop with Hortonworks and the Ambari UI
  • Manage big data on a cluster with HDFS and MapReduce
  • Write programs to analyze data on Hadoop with Pig and Spark
  • Store and query your data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto
  • Design real-world systems using the Hadoop ecosystem
  • Learn how your cluster is managed with YARN, Mesos, Zookeeper, Oozie, Zeppelin, and Hue
  • Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm

Understanding Hadoop is a highly valuable skill for anyone working at companies with large amounts of data.

Almost every large company you might want to work at uses Hadoop in some way, including Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo! And it's not just technology companies that need Hadoop; even The New York Times uses Hadoop for processing images.

This course is comprehensive, covering over 25 different technologies in over 14 hours of video lectures. It's filled with hands-on activities and exercises, so you get some real experience in using Hadoop - it's not just theory.

You'll find a range of activities in this course for people at every level. If you're a project manager who just wants to learn the buzzwords, there are web UIs for many of the activities in the course that require no programming knowledge. If you're comfortable with command lines, we'll show you how to work with them too. And if you're a programmer, I'll challenge you with writing real scripts on a Hadoop system using Scala, Pig Latin, and Python.

You'll walk away from this course with a real, deep understanding of Hadoop and its associated distributed systems, and you'll be able to apply Hadoop to real-world problems. Plus a valuable completion certificate is waiting for you at the end!

Please note that the focus of this course is on application development, not Hadoop administration, although you will pick up some administration skills along the way.

I hope to see you in the course soon!

-Frank


Who is the target audience?
  • Software engineers and programmers who want to understand the larger Hadoop ecosystem, and use it to store, analyze, and vend "big data" at scale.
  • Project, program, or product managers who want to understand the lingo and high-level architecture of Hadoop.
  • Data analysts and database administrators who are curious about Hadoop and how it relates to their work.
  • System architects who need to understand the components available in the Hadoop ecosystem, and how they fit together.
Curriculum For This Course
95 Lectures
14:29:27
Learn all the buzzwords! And install Hadoop.
4 Lectures 42:55

Hello! After a quick intro, we'll dive right in and install Hortonworks Sandbox in a virtual machine right on your own PC. We'll then download some real movie ratings data, and use Hive to analyze it!

Preview 16:59

What's Hadoop for? What problems does it solve? Where did it come from? We'll learn Hadoop's backstory in this lecture.

Preview 07:44

We'll take a quick tour of all the technologies we'll cover in this course, and how they all fit together. You'll come out of this lecture knowing all the buzzwords!

Overview of the Hadoop Ecosystem
16:46

Tips for Using This Course
01:26
Using Hadoop's Core: HDFS and MapReduce
10 Lectures 01:33:53

Learn how Hadoop's Distributed Filesystem allows you to store massive data sets across a cluster of commodity computers, in a reliable and scalable manner.

HDFS: What it is, and how it works
13:53

You don't need to mess with command lines or programming to use HDFS. We'll start by importing some real movie ratings data into HDFS just using a web-based UI provided by Ambari.

Preview 06:20

Developers might be more comfortable interacting with HDFS via the command line interface. We'll import the same data, this time from a terminal prompt.

[Activity] Install the MovieLens dataset into HDFS using the command line
07:50

Learn how mappers and reducers provide a clever way to analyze massive distributed datasets quickly and reliably.

MapReduce: What it is, and how it works
10:40

Learn what makes MapReduce so powerful, by horizontally scaling across a cluster of computers.

How MapReduce distributes processing
12:57

Let's look at a very simple example of MapReduce - counting how many of each rating type exists in our movie ratings data.

MapReduce example: Break down movie ratings by rating score
11:35

The quickest and easiest way to get started with MapReduce is by using Python's MRJob package, which lets you use MapReduce's streaming feature to write MapReduce code in Python instead of Java. Let's get set up.

[Activity] Installing Python, MRJob, and nano
07:33

We'll study our code for building a breakdown of movie ratings, and actually run it on your system!

Preview 07:36
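
For reference, a ratings-breakdown job written with MRJob has roughly the shape below. This is a minimal sketch rather than the exact script from the lecture, and it assumes the MovieLens u.data format of tab-separated userID, movieID, rating, and timestamp fields.

    # ratings_breakdown.py - minimal MRJob sketch (not the lecture's exact script)
    # Assumes MovieLens u.data lines: userID<TAB>movieID<TAB>rating<TAB>timestamp
    from mrjob.job import MRJob

    class RatingsBreakdown(MRJob):
        def mapper(self, _, line):
            # Emit (rating, 1) for every line of the ratings file
            user_id, movie_id, rating, timestamp = line.split('\t')
            yield rating, 1

        def reducer(self, rating, counts):
            # Sum up how many times each rating score appears
            yield rating, sum(counts)

    if __name__ == '__main__':
        RatingsBreakdown.run()

Run it locally with "python ratings_breakdown.py u.data", or add the -r hadoop runner flag to run it against your cluster (the exact flags depend on how MRJob is configured on your sandbox).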

As a challenge, see if you can write your own MapReduce script that sorts movies by how many ratings they received. I'll give you some hints, set you off, and then review my solution to the problem.

[Exercise] Rank movies by their popularity
07:06

Let's see how I solved the challenge from the previous lecture - we'll change our script to count movies instead of ratings, and then review and run my solution for sorting by rating count.

[Activity] Check your results against mine!
08:23
Programming Hadoop with Pig
7 Lectures 56:08

Ambari is Hortonworks' web-based UI (similar to Hue, which Cloudera uses). We can use it as an easy way to experiment with Pig, so let's take a closer look at it before moving ahead.

Introducing Ambari
09:49

An overview of what Pig is used for, who it's for, and how it works.

Introducing Pig
06:25

We'll use Pig to script a chain of queries on MovieLens to solve a more complex problem.

Example: Find the oldest movie with a 5-star rating using Pig
15:07

Let's actually run our example from the previous lecture on your Hadoop sandbox, and find some good, old movies!

Preview 09:40

We covered most of the basics of Pig in our example, but let's look at what else Pig Latin can do.

More Pig Latin
07:34

I'll give you some pointers, and challenge you to write your own Pig script that finds the most popular really bad movie!

[Exercise] Find the most-rated one-star movie
01:56

Let's look at my code for finding the most popular bad movies, and you can compare my results to yours.

Pig Challenge: Compare Your Results to Mine!
05:37
Programming Hadoop with Spark
8 Lectures 01:14:07

What's so special about Spark? Learn how its efficiency and versatility make Apache Spark one of the hottest Hadoop-related technologies right now, and how it achieves this under the hood.

Why Spark?
10:06

The core building block of Spark is the RDD; learn how RDDs are used and the functions available on them.

The Resilient Distributed Dataset (RDD)
10:13

As an example, let's write a Spark script to find the movie with the lowest average rating. We'll start by doing it just with RDD's.

[Activity] Find the movie with the lowest average rating - with RDD's
15:33
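
For a sense of what the RDD approach looks like, here is a minimal PySpark sketch along the same lines; the HDFS path and column positions are assumptions, and the course's actual script differs in its details.

    # lowest_rated.py - minimal PySpark RDD sketch (illustrative, not the exact course script)
    # Assumes MovieLens u.data on HDFS: userID<TAB>movieID<TAB>rating<TAB>timestamp
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("WorstMovies")
    sc = SparkContext(conf=conf)

    lines = sc.textFile("hdfs:///user/maria_dev/ml-100k/u.data")  # path is an assumption

    # Build (movieID, (rating, 1.0)) pairs so we can sum ratings and counts in one pass
    ratings = lines.map(lambda l: l.split()) \
                   .map(lambda f: (int(f[1]), (float(f[2]), 1.0)))

    # Add up ratings and counts per movie, then divide to get the average
    totals = ratings.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
    averages = totals.mapValues(lambda t: t[0] / t[1])

    # Sort ascending by average rating and print the worst few
    for movie_id, avg in averages.sortBy(lambda x: x[1]).take(10):
        print(movie_id, avg)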

Spark 2.0 placed a new emphasis on Datasets and SparkSQL. Learn how Datasets can make your Spark scripts even faster and easier to write.

Preview 06:28

Let's revisit the previous problem of finding the lowest-rated movies, but this time using DataFrames.

[Activity] Find the movie with the lowest average rating - with DataFrames
10:00
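
The DataFrame version of the same idea is sketched below with placeholder column names; again, this is illustrative rather than the exact course script.

    # DataFrame version of the lowest-rated-movies idea (hedged sketch)
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("WorstMoviesDF").getOrCreate()

    # Parse the raw ratings into rows; the column names here are our own choice
    lines = spark.sparkContext.textFile("hdfs:///user/maria_dev/ml-100k/u.data")
    rows = lines.map(lambda l: l.split()).map(lambda f: (int(f[1]), float(f[2])))
    ratings = spark.createDataFrame(rows, ["movieID", "rating"])

    # Average rating per movie, worst first
    worst = (ratings.groupBy("movieID")
                    .agg(F.avg("rating").alias("avgRating"),
                         F.count("rating").alias("numRatings"))
                    .orderBy("avgRating"))
    worst.show(10)

    spark.stop()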

As an example of the more complicated things Spark is capable of, we'll use Spark's machine learning library to produce movie recommendations using the ALS algorithm.

Preview 12:16
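
The ML piece boils down to something like the following hedged sketch using spark.ml's ALS estimator; the file path, column names, and hyperparameters are placeholders, and the lecture's actual script differs.

    # ALS movie recommendations - a minimal, hedged sketch using spark.ml
    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("MovieRecs").getOrCreate()

    # Load (userID, movieID, rating) from tab-separated u.data; path is an assumption
    ratings = spark.read.option("sep", "\t") \
        .csv("hdfs:///user/maria_dev/ml-100k/u.data") \
        .toDF("userID", "movieID", "rating", "timestamp") \
        .selectExpr("cast(userID as int) userID",
                    "cast(movieID as int) movieID",
                    "cast(rating as float) rating")

    als = ALS(userCol="userID", itemCol="movieID", ratingCol="rating",
              maxIter=5, regParam=0.01)
    model = als.fit(ratings)

    # Score the training pairs back as a sanity check; real recommendations
    # would score (user, movie) pairs the user hasn't rated yet
    predictions = model.transform(ratings)
    predictions.show(10)

    spark.stop()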

As a very simple exercise, we'll build upon our earlier activity to filter the results by movies with a given number of ratings.

[Exercise] Filter the lowest-rated movies by number of ratings
02:51

We'll review my solution to the previous exercise, and run the resulting scripts.

[Activity] Check your results against mine!
06:40
Using relational data stores with Hadoop
9 Lectures 01:02:53

An introduction to Apache Hive and how it enables relational queries on HDFS-hosted data.

What is Hive?
06:31

We'll import the MovieLens data set into Hive using the Ambari UI, and run a simple query to find the most popular movies.

[Activity] Use Hive to find the most popular movie
10:45
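
The lecture itself uses Ambari's Hive view, but the same style of HiveQL can also be issued from PySpark with Hive support enabled, as in this hedged sketch; the table and column names are assumptions.

    # Issue the same style of HiveQL from PySpark (the lecture uses the Ambari Hive view).
    # Table and column names below are assumptions, not necessarily the lecture's.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .appName("TopMovies") \
        .enableHiveSupport() \
        .getOrCreate()

    top_movies = spark.sql("""
        SELECT movie_id, COUNT(*) AS rating_count
        FROM ratings
        GROUP BY movie_id
        ORDER BY rating_count DESC
        LIMIT 10
    """)
    top_movies.show()

    spark.stop()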

Learn how Hive works under the hood to efficiently query your data across a cluster using SQL commands.

Preview 09:10

As a challenge, use this same Hive database to find the best-rated movie.

[Exercise] Use Hive to find the movie with the highest average rating
01:55

Compare your solution to mine for the exercise of finding the highest-rated movies using Hive.

Compare your solution to mine.
04:10

A quick overview of MySQL and how it might fit into your Hadoop-based work.

Integrating MySQL with Hadoop
08:00

Let's import the MovieLens data set into MySQL, and run a query to view the most popular movies just to see that it's working.

[Activity] Install MySQL and import our movie data
07:35

Learn how Sqoop works as a way to transfer data from an existing RDBMS like MySQL into Hadoop.

[Activity] Use Sqoop to import data from MySQL to HDFS/Hive
07:31

Sqoop can also work the other way - let's build a new table with Hive and export it back into MySQL.

[Activity] Use Sqoop to export data from Hadoop to MySQL
07:16
Using non-relational data stores with Hadoop
12 Lectures 02:27:34

Learn why "NoSQL" databases are important for efficiently and scalably vending your data.

Why NoSQL?
13:54

HBase is a NoSQL columnar data store that sits on top of Hadoop. Learn what it's for and how it works.

What is HBase?
12:55

We'll import our movie ratings into HBase through a RESTful service interface, using a Python script running on our desktop to both populate and query the table.

[Activity] Import movie ratings into HBase
13:28
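
The lecture drives HBase through its REST ("Stargate") interface from a small Python client; the sketch below shows the general shape of such a REST insert using the requests library. The host, port, table, and column family are placeholders, not necessarily what the lecture uses.

    # Rough shape of an HBase REST ("Stargate") insert using requests (hedged sketch).
    # Host, port, table name, and column family are placeholders; the lecture's
    # script uses a dedicated HBase REST client library instead.
    import base64
    import json
    import requests

    def b64(s):
        return base64.b64encode(str(s).encode('utf-8')).decode('utf-8')

    host = "http://sandbox.hortonworks.com:8000"   # assumption: REST server address
    table = "users"                                 # assumption: table name

    row_key = "1"
    cells = {"userinfo:numRatings": "272", "userinfo:avgRating": "3.61"}

    payload = {"Row": [{
        "key": b64(row_key),
        "Cell": [{"column": b64(col), "$": b64(val)} for col, val in cells.items()]
    }]}

    resp = requests.put("{0}/{1}/{2}".format(host, table, row_key),
                        data=json.dumps(payload),
                        headers={"Content-Type": "application/json",
                                 "Accept": "application/json"})
    print(resp.status_code)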

We'll see how HBase can integrate with Pig to store big data into HBase in a distributed manner.

[Activity] Use HBase with Pig to import data at scale.
11:19

Cassandra is a popular NoSQL database that is appropriate for vending data at massive scale outside of Hadoop.

Cassandra overview
14:50

Cassandra isn't a part of Hortonworks, so we'll need to install it ourselves.

[Activity] Installing Cassandra
11:43

We'll modify our HBase example to write results into a Cassandra database instead, and look at the results.

[Activity] Write Spark output into Cassandra
11:00
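
Writing a Spark DataFrame out through the spark-cassandra-connector looks roughly like the sketch below; the keyspace, table, and connection settings are placeholders, and the connector package has to be supplied when the job is submitted.

    # Writing a DataFrame to Cassandra via the spark-cassandra-connector (hedged sketch).
    # Keyspace/table names are placeholders; submit the job with the connector package,
    # e.g. spark-submit --packages <spark-cassandra-connector coordinates> script.py
    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .appName("CassandraWriter") \
        .config("spark.cassandra.connection.host", "127.0.0.1") \
        .getOrCreate()

    # In the real activity this DataFrame would be built from the MovieLens u.user file
    users_df = spark.createDataFrame(
        [(1, 24, "M", "technician", "85711")],
        ["user_id", "age", "gender", "occupation", "zip"])

    users_df.write \
        .format("org.apache.spark.sql.cassandra") \
        .mode("append") \
        .options(table="users", keyspace="movielens") \
        .save()

    spark.stop()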

MongoDB is a popular alternative to Cassandra. Learn what's different about it.

MongoDB overview
16:54

We'll install MongoDB on our virtual machine using Ambari. Then, we'll study and run a script to load up a Spark DataFrame of user data, store it into MongoDB, and query MongoDB to get users under 20 years old.

[Activity] Install MongoDB, and integrate Spark with MongoDB
12:44

We'll query our movie user data using MongoDB's command line interface, and set up an index on it.

[Activity] Using the MongoDB shell
07:48
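
The same sort of query and index can also be issued from Python with pymongo, as in this hedged sketch; the database name, collection name, and connection URI are assumptions.

    # Roughly the same queries via pymongo instead of the mongo shell (hedged sketch).
    # Database name, collection name, and connection URI are assumptions.
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://127.0.0.1:27017")
    users = client["movielens"]["users"]

    # Find users under 20 years old
    for user in users.find({"age": {"$lt": 20}}).limit(5):
        print(user)

    # Add an index on age so that query no longer scans the whole collection
    users.create_index([("age", ASCENDING)])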

With so many options for choosing a database, how do you decide? We'll look at the requirements of given problems, such as consistency, latency, and scalability, and how that can inform your decision.

Preview 15:59

In the previous lecture, I challenged you to choose a database for a stock trading application. Let's talk about my own thought process in this decision, and see if we reached the same conclusion.

[Exercise] Choose a database for a given problem
05:00
Querying your Data Interactively
9 Lectures 01:22:15

What is Drill and what problems does it solve?

Overview of Drill
07:55

We'll install Drill so we can play with it, after installing a Hive and MongoDB database to work with.

[Activity] Setting up Drill
11:19

We'll use Drill to execute a query that spans data on MongoDB and Hive at the same time!

Preview 07:07
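
For a feel of what such a federated query looks like, here is a hedged sketch that submits SQL to Drill's REST API from Python (the lecture uses Drill's web console instead); the storage plugin names and table paths are assumptions.

    # Submit a cross-datastore query to Drill's REST API (hedged sketch).
    # Storage plugin names ("hive", "mongo") and table paths are assumptions.
    import requests

    query = """
        SELECT u.occupation, COUNT(*) AS num_ratings
        FROM hive.movielens.ratings r
        JOIN mongo.movielens.users u ON r.user_id = u.user_id
        GROUP BY u.occupation
    """

    resp = requests.post("http://localhost:8047/query.json",
                         json={"queryType": "SQL", "query": query})
    print(resp.json())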

What is Phoenix for? How does it work?

Overview of Phoenix
08:55

We'll get our hands dirty with Phoenix and use it to query our HBase database.

[Activity] Install Phoenix and query HBase with it
07:08

We'll use Phoenix with Pig to store and load MovieLens users data, and accelerate queries on it.

[Activity] Integrate Phoenix with Pig
11:45

What is Presto, and how does it differ from Drill and Phoenix?

Overview of Presto
06:39

We'll install Presto, and issue some queries on Hive through it.

[Activity] Install Presto, and query Hive with it.
12:26

We'll configure Presto to also talk to our Cassandra database that we set up earlier, and do a JOIN query that spans both data in Cassandra and Hive!

Preview 09:01
Managing your Cluster
13 Lectures 01:59:14

Learn how YARN works in more depth as it controls and allocates the resources of your Hadoop cluster.

Preview 10:01

Like Spark, Tez also uses Directed Acyclic Graphs to optimize tasks on your cluster. Learn how it works, and how it's different.

Tez explained
04:56

As an example of the power of Tez, we'll execute a Hive query with and without it.

[Activity] Use Hive on Tez and measure the performance benefit
08:35

Mesos is an alternative cluster manager to Hadoop YARN. Learn how it differs, who uses Mesos, and why.

Mesos explained
07:13

Zookeeper is a deceptively simple service for maintaining state across your cluster, like which servers are in service, in a highly reliable manner. Learn how it works, and what systems depend on Zookeeper for reliable operation.

ZooKeeper explained
13:10

Let's use ZooKeeper's command line interface to explore how it works.

[Activity] Simulating a failing master with ZooKeeper
06:47

Oozie allows you to set up complex workflows on your cluster using multiple technologies, and schedule them. Let's look at some examples of how it works.

Oozie explained
11:56

As a hands-on example, we'll use Oozie to import movie data into HDFS from MySQL using Sqoop, then analyze that data using Hive.

[Activity] Set up a simple Oozie workflow
16:39

Apache Zeppelin provides a notebook-based environment for importing, transforming, and analyzing your data.

Zeppelin overview
05:01

We'll set up a Zeppelin notebook to load movie ratings and titles into Spark dataframes, and interactively query and visualize them.

[Activity] Use Zeppelin to analyze movie ratings, part 1
12:28

We'll set up a Zeppelin notebook to load movie ratings and titles into Spark dataframes, and interactively query and visualize them.

[Activity] Use Zeppelin to analyze movie ratings, part 2
09:46

Hue is a popular alternative to Ambari views, especially on Cloudera platforms. Let's see what it offers and how it's different.

Hue overview
08:07

Let's talk about Chukwa and Ganglia, just so you know what they are.

Other technologies worth mentioning
04:35
Feeding Data to your Cluster
6 Lectures 54:47

Learn how Kafka provides a scalable, reliable means for collecting data across a cluster of computers and broadcasting it for further processing.

Kafka explained
09:48

We'll get Kafka running, and set it up to publish and consume some data from a new topic.

[Activity] Setting up Kafka, and publishing some data.
07:24
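
The lecture uses Kafka's console producer and consumer scripts; as a hedged alternative, the kafka-python client does the same thing programmatically. The broker address and topic name below are assumptions.

    # Publish and consume a few messages with the kafka-python client, as an
    # alternative to the console producer/consumer used in the lecture.
    # Broker address and topic name are assumptions.
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(bootstrap_servers="localhost:6667")
    producer.send("fred", b"hello from python")
    producer.flush()

    consumer = KafkaConsumer("fred",
                             bootstrap_servers="localhost:6667",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000)
    for message in consumer:
        print(message.value)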

We'll simulate a web server by monitoring an Apache log file using a Kafka connector, and watch Kafka pick up new lines in it.

[Activity] Publishing web logs with Kafka
10:21

Flume is another way to publish logs from a cluster. Learn about sinks and Flume's architecture, and how it differs from Kafka.

Flume explained
10:16

As a simple way to get started with Flume, we'll connect a source listening to a telnet connection to a sink that just logs information received.

[Activity] Set up Flume and publish logs with it.
07:46

As something closer to a real-world example, we'll configure Flume to monitor a directory on our local filesystem for new files, and publish their data into HDFS, organized by the time the data was received.

Preview 09:12
Analyzing Streams of Data
8 Lectures 01:16:28

Spark Streaming allows you to write "continuous applications" that process micro-batches of information in real time. Learn how it works, about DStreams, windowing, and the new Structured Streaming API.

Spark Streaming: Introduction
14:27

We'll write and run a Spark Streaming application that analyzes web logs as they are streamed in from Flume.

[Activity] Analyze web logs published with Flume using Spark Streaming
14:20

As a challenge, extend the previous activity to look for status codes in the web log and aggregate how often different status codes appear. Also, let's fiddle with the slide interval.

[Exercise] Monitor Flume-published logs for errors in real time
02:02

Let's review my solution to the previous exercise, and run it.

Exercise solution: Aggregating HTTP access codes with Spark Streaming
04:24
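
A windowed status-code count in Spark Streaming has roughly the shape below. This sketch uses a socket source as a stand-in for the Flume stream in the lectures, and the host, port, and Apache log-format assumption (status code in field 9 of a standard access log line) are ours rather than the course's.

    # Windowed count of HTTP status codes with Spark Streaming (hedged sketch).
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="StatusCodes")
    ssc = StreamingContext(sc, 1)                 # 1-second batches
    ssc.checkpoint("/tmp/checkpoint")             # required for windowed reductions

    # Socket source stands in for the Flume stream used in the lectures
    lines = ssc.socketTextStream("localhost", 9999)

    def extract_status(line):
        # Assumes a standard Apache access log layout; field 9 is the status code
        fields = line.split(' ')
        return fields[8] if len(fields) > 8 else "unknown"

    # Count each status code over a 30-second window, sliding every 2 seconds
    status_counts = lines.map(lambda l: (extract_status(l), 1)) \
                         .reduceByKeyAndWindow(lambda a, b: a + b,
                                               lambda a, b: a - b,
                                               30, 2)
    status_counts.pprint()

    ssc.start()
    ssc.awaitTermination()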

Storm is an alternative to Spark Streaming. Learn how it differs, and why its event-at-a-time processing makes it a true streaming solution.

Apache Storm: Introduction
09:27

We'll walk through, and run, the word count topology sample included with Storm.

[Activity] Count words with Storm
14:35

Apache Flink is an up-and-coming alternative to Storm that offers a higher-level API. Let's talk about what sets it apart.

Preview 06:53

Let's install Flink and run a simple example with it.

[Activity] Counting words with Flink
10:20
About the Instructor
Sundog Education by Frank Kane
4.5 Average rating
15,178 Reviews
72,820 Students
9 Courses
Training the World in Big Data and Machine Learning

Sundog Education's mission is to make highly valuable career skills in big data, data science, and machine learning accessible to everyone in the world. Our consortium of expert instructors shares our knowledge in these emerging fields with you, at prices anyone can afford. 

Sundog Education is led by Frank Kane and owned by Frank's company, Sundog Software LLC. Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Frank Kane
4.5 Average rating
14,788 Reviews
69,164 Students
7 Courses
Founder, Sundog Education

Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.