Learn Hadoop, MapReduce and BigData from Scratch

A Complete Guide to Learn and Master the Popular Big Data Technologies
3.5 (182 ratings)
9,181 students enrolled
$30
Take This Course
  • Lectures: 76
  • Contents: Video: 17 hours; Other: 2 mins
  • Skill Level: All Levels
  • Languages: English
  • Includes: Lifetime access, 30 day money back guarantee, available on iOS and Android, Certificate of Completion


About This Course

Published 5/2014 English

Course Description

Modern companies estimate that only 12% of their accumulated data is ever analyzed, and IT professionals who can work with the remaining data are becoming increasingly valuable. Demand for big data talent is also up 40% over the past year.

Simply put, there is too much data and not enough professionals to manage and analyze it. This course aims to close the gap by covering MapReduce and its most popular implementation: Apache Hadoop. We will also cover Hadoop ecosystems and the practical concepts involved in handling very large data sets.

Learn and Master the Most Popular Big Data Technologies in this Comprehensive Course.

  • Apache Hadoop and MapReduce on Amazon EMR
  • Hadoop Distributed File System vs. Google File System
  • Data Types, Readers, Writers and Splitters
  • Data Mining and Filtering
  • Shell Commands and HDFS
  • Cloudera, Hortonworks and Apache Bigtop Virtual Machines

Mastering Big Data for IT Professionals Worldwide
At its core, Hadoop is an implementation of the MapReduce algorithm, and the MapReduce algorithm is what lets Big Data computations scale. A MapReduce job loads a block of data into RAM, performs some calculations, loads the next block, and keeps going until all of the data has been processed, turning unstructured data into structured data.
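The block-by-block pattern described above can be sketched without a Hadoop cluster. The following is a minimal, illustrative word count in plain Python (not the Hadoop API; the function names `map_phase` and `reduce_phase` and the sample blocks are our own):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(block):
    # Map: emit a (key, value) pair for every word in one block of text
    for word in block.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle: sort and group the pairs by key, then
    # Reduce: sum the values for each key
    counts = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        counts[key] = sum(value for _, value in group)
    return counts

# Process the input one block at a time, as a MapReduce job would
blocks = ["big data needs big tools", "hadoop scales big computations"]
pairs = [pair for block in blocks for pair in map_phase(block)]
word_counts = reduce_phase(pairs)
print(word_counts["big"])  # "big" appears three times across the blocks
```

The same division of labor is what lets Hadoop scale: map tasks run independently on each block, and only the grouped intermediate pairs are brought together for the reduce step.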

IT managers and Big Data professionals who know how to program in Java, are familiar with Linux, have access to an Amazon EMR account, and have Oracle VirtualBox or VMware up and running will be able to follow the key lessons and concepts in this course and learn to write Hadoop jobs and MapReduce programs.

This course is perfect for anyone in a data-focused IT role who wants to learn new ways to work with large amounts of data.

Contents and Overview
In over 16 hours of content across 74 lectures, this course covers essential Big Data terminology and the use of Hadoop and MapReduce.

This course covers the importance of Big Data and shows how to set up a single-node Hadoop pseudo-cluster, work with the architecture of clusters, run multi-node clusters on Amazon's EMR, and work with distributed file systems and operations, including running Hadoop on the Hortonworks Sandbox and Cloudera.

Students will also learn advanced Hadoop development, MapReduce concepts, and how to use MapReduce with Hive and Pig, and will get to know the Hadoop ecosystem, among other important lessons.

Upon completion, students will be literate in Big Data terminology, understand how Hadoop can be used to overcome challenging Big Data scenarios, be able to analyze and implement MapReduce workflows, and be able to use virtual machines for development, code testing and job configuration.

What are the requirements?

  • Familiarity with programming in Java
  • Familiarity with Linux
  • Oracle VirtualBox or VMware installed and functioning

What am I going to get from this course?

  • Become literate in Big Data terminology and Hadoop.
  • Understand distributed file system architecture and its implementations, such as the Hadoop Distributed File System (HDFS) and the Google File System (GFS)
  • Use the HDFS shell
  • Use the Cloudera, Hortonworks and Apache Bigtop virtual machines for Hadoop code development and testing
  • Configure, execute and monitor a Hadoop Job
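As a taste of the HDFS shell usage the course covers, a few common file system operations look like this (the paths and file names here are illustrative; a running Hadoop installation is assumed):

```shell
# List the contents of the HDFS root directory
hdfs dfs -ls /

# Create a directory in HDFS and copy a local file into it
hdfs dfs -mkdir -p /user/demo
hdfs dfs -put access.log /user/demo/

# Print the file back out of HDFS
hdfs dfs -cat /user/demo/access.log
```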

What is the target audience?

  • Big Data professionals who want to master MapReduce and Hadoop.
  • IT professionals and managers who want to understand and learn this hot new technology.

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Introduction to Big Data
  • Introduction to the Course (04:55)
  • Introduction to Big Data, Hadoop and Map Reduce (11:55)
  • Why Hadoop, Big Data and Map Reduce Part B (11:56)
  • Why Hadoop, Big Data and Map Reduce Part C (12:26)
  • Understand the server cluster architecture (20:43)
  • Learn all about virtual machine provisioning (17:39)

Section 2: Hadoop Architecture
  • Learn to set up the single-node cluster (13:07)
  • Set up a single-node Hadoop pseudo cluster Part B (13:40)
  • Set up a single-node Hadoop pseudo cluster Part C (14:31)
  • Learn to set up a Hadoop cluster (16:30)
  • Clusters and Nodes, Hadoop Cluster Part B (15:54)
  • Lecture about node hierarchy (11:55)
  • NameNode, Secondary NameNode, Data Nodes Part B (11:15)
  • Learn to use Amazon Web Services for running a multi-node cluster (18:12)
  • Running multi-node clusters on Amazon's EMR Part B (15:00)
  • Running multi-node clusters on Amazon's EMR Part C (14:26)
  • Running multi-node clusters on Amazon's EMR Part D (13:50)
  • Running multi-node clusters on Amazon's EMR Part E (13:21)
Section 3: Distributed File Systems
  • A comparison between the HDFS and GFS file systems (20:00)
  • Learn to run Hadoop on Cloudera (18:04)
  • Learn to run Hadoop on Hortonworks (19:17)
  • Learn to perform file system operations using the HDFS shell (19:57)
  • File system operations with the HDFS shell Part B (17:08)
  • Learn all about Hadoop development using Apache Bigtop (11:13)
  • Advanced Hadoop development with Apache Bigtop Part B (11:24)
Section 4: MapReduce Version 1
  • Learn the underlying concepts of the MapReduce algorithm (13:12)
  • MapReduce Concepts in Detail Part B (10:55)
  • Learn to create Hadoop jobs (09:39)
  • Job definition, configuration, submission, execution and monitoring Part B (10:44)
  • Job definition, configuration, submission, execution and monitoring Part C (16:48)
  • Learn the basic syntax of Hadoop (09:32)
  • Hadoop Data Types, Paths, FileSystem, Splitters, Readers and Writers Part B (10:39)
  • Hadoop Data Types, Paths, FileSystem, Splitters, Readers and Writers Part C (18:52)
  • Learn all about the ETL class: definition, extract, transform and load (15:14)
  • The ETL Class: Extract, Transform and Load Part B (24:14)
  • Learn the basics of user-defined classes and functions (12:18)
  • The UDF Class: User Defined Functions Part B (13:01)
Section 5: MapReduce with Hive (Data Warehousing)
  • Learn the schema design for data warehousing (15:41)
  • Schema design for a data warehouse Part B (16:20)
  • Introduction to Hive and its use for data warehousing (10:29)
  • Hive Configuration Part B (13:41)
  • Learn all about Hive query patterns (16:50)
  • Hive Query Patterns Part B (17:15)
  • Hive Query Patterns Part C (12:06)
  • Hive Query Patterns Part D (12:18)
  • A live example implementing the Hive ETL class (12:15)
  • Example Hive ETL class Part B (13:28)
  • Example Hive ETL class Part C (08:50)
Section 6: MapReduce with Pig (Parallel Processing)
  • Introduction to parallel processing using Apache Pig (12:17)
  • Introduction to Apache Pig Part B (13:45)
  • Introduction to Apache Pig Part C (09:07)
  • Introduction to Apache Pig Part D (10:09)
  • Advanced Pig features and usage of the LoadFunc and EvalFunc classes (13:28)
  • A working example of the Pig ETL class (12:40)
  • Example Pig ETL class Part B (14:11)
Section 7: The Hadoop Ecosystem
  • A brief intro to the Hadoop ecosystem and a detailed discussion of Crunch (15:20)
  • Introduction to Crunch Part B (12:52)
  • Learn all about the Avro Hadoop component (15:18)
  • Lecture discussing the use and implementation of Mahout (12:51)
  • Introduction to Mahout Part B (13:05)
  • Introduction to Mahout Part C (13:32)
Section 8: MapReduce Version 2
  • Introduction to YARN and its usage in Hadoop 2 (12:44)
  • Apache Hadoop 2 and YARN Part B (08:23)
  • YARN implementation examples for beginners (14:51)

Section 9: Putting It All Together
  • Implementing the concepts on Amazon Web Services (12:03)
  • Amazon EMR example Part B (11:46)
  • Amazon EMR example Part C (08:26)
  • Amazon EMR example Part D (10:18)
  • A live example implementation of Apache Bigtop (12:46)
  • Apache Bigtop example Part B (13:01)
  • Apache Bigtop example Part C (13:27)
  • Apache Bigtop example Part D (13:54)
  • Apache Bigtop example Part E (13:06)
  • Apache Bigtop example Part F (13:45)
  • Course Summary (04:40)
  • Reference links for various topics (2 pages)


Instructor Biography

Eduonix creates and distributes high-quality technology training content. Our team of industry professionals has been training manpower for more than a decade. We aim to teach technology the way it is used in the industry and the professional world. We have a professional team of trainers for technologies ranging from Mobility and the Web to Enterprise, Database and Server Administration.

Ready to start learning?
Take This Course