Building Hadoop Clusters

Deploy multi-node Hadoop clusters to harness the Cloud for storage and large-scale data processing
3.0 (2 ratings)
44 students enrolled
$19 (regularly $85, 78% off)
  • Lectures: 24
  • Length: 2.5 hours
  • Skill Level: All Levels
  • Languages: English
  • Includes: Lifetime access, 30 day money back guarantee, iOS and Android access, Certificate of Completion

About This Course

Published 7/2015 · English

Course Description

Hadoop is an Apache top-level project that enables the distributed processing of large data sets across clusters of computers using simple programming models. It lets you deliver a highly available service on top of a cluster of machines, each of which may be prone to failure. While Big Data and Hadoop have seen a massive surge in popularity over the last few years, many companies still struggle to set up their own computing clusters.

This video series will turn you from a faltering first-timer into a Hadoop pro through clear, concise descriptions that are easy to follow.

We'll begin this course with an overview of Amazon's cloud services and how to use them. We'll then deploy Linux compute instances, and you'll see how to connect your client machine to the Linux hosts and configure your systems to run Hadoop. Finally, you'll install Hadoop, download data, and examine how to run a query.

This video series will go beyond just Hadoop; it will cover everything you need to get your own clusters up and running. You will learn how to make network configuration changes as well as modify Linux services. After you've installed Hadoop, we'll then go over installing HUE, Hadoop's web UI. Using HUE, you will learn how to download data to your Hadoop clusters, move it into HDFS, and finally query that data with Hive.

Learn everything you need to deploy Hadoop clusters to the Cloud through these videos. You'll grasp all you need to know about handling large data sets over multiple nodes.

About the Author

Sean Mikha is a technical architect who specializes in implementing large-scale data warehouses using Massively Parallel Processing (MPP) technologies. Sean has held roles at multiple companies that specialize in MPP technologies, where he was a part of implementing one of the largest commercial clinical data warehouses in the world. Sean is currently a solution architect, focusing on architecting Big Data solutions while also educating customers on Hadoop technologies. Sean graduated from UCLA with a BS in Computer Engineering, and currently lives in Southern California.

What are the requirements?

  • This video series assumes no prior knowledge of any cloud technologies, Hadoop, or Linux.

What am I going to get from this course?

  • Explore Amazon Web Services to manage big data
  • Configure network and security settings when deploying instances to the cloud
  • Explore methods to connect to cloud instances using your client machine
  • Set up Linux environments and configure settings for services and package installations
  • Examine Hadoop's general architecture and what each service brings to the table
  • Harness and navigate Hadoop's file storage and processing mechanisms
  • Install and master Apache Hadoop User Interface (HUE)

What is the target audience?

  • If you are a system administrator or anyone interested in building a Hadoop cluster to process large sets of data, this video course is for you.

Curriculum

Section 1: Deploying Cloud Instances for Hadoop 2.0
04:44 - There are many choices when building a Hadoop cluster. We will give you the tools, education, and skills to build your own at a low cost.

05:43 - When deploying an Amazon instance for Hadoop, you will have to choose the correct AMI and set it up properly. This video will show you how.
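
If you prefer the command line to the web console, a rough AWS CLI equivalent of this step looks like the sketch below; every ID and name in it is a placeholder, not a value from the course:

    # Launch one instance from a chosen AMI (all IDs and names are placeholders)
    aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type m3.large \
        --key-name my-hadoop-key \
        --security-group-ids sg-xxxxxxxx \
        --count 1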

04:12 - Learn how to set up a static IP address for Amazon instances, and how to manage instances by starting and terminating them.
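On EC2, a "static IP" is an Elastic IP. A minimal AWS CLI sketch of this lifecycle (instance and allocation IDs are placeholders):

    # Allocate an Elastic IP, attach it, then manage the instance lifecycle
    aws ec2 allocate-address --domain vpc        # note the returned AllocationId
    aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
    aws ec2 stop-instances --instance-ids i-xxxxxxxx
    aws ec2 terminate-instances --instance-ids i-xxxxxxxx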

Section 2: Setting Up Network and Security Settings
04:54 - The goal of this video is to help you identify the correct inbound and outbound IP address and port specifications for Hadoop so that you can set up your security settings properly.
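The exact ports depend on which services you expose; the ones below are common defaults (22 for SSH, 8080 for Ambari, 8888 for HUE) rather than a list taken from the video, and the group ID and client IP are placeholders:

    # Open each port only to your own client machine's IP
    aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port 22 --cidr 203.0.113.10/32
    aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port 8080 --cidr 203.0.113.10/32
    aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port 8888 --cidr 203.0.113.10/32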

05:50 - Security must be defined when building clusters. We need to make sure our security settings are compatible with Hadoop.

05:39 - Connecting to Amazon instances requires special tools and configuration settings. This video will show you how to prepare to connect through Windows.

Section 3: Connecting to Cloud Instances
04:53 - There are multiple ways to connect to Amazon. We show you how to use these different methods to connect.
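From a Linux or Mac client, the simplest method is plain ssh with the key pair downloaded from Amazon (the file name and hostname below are placeholders):

    # Keys must not be world-readable or ssh will refuse them
    chmod 400 my-hadoop-key.pem
    ssh -i my-hadoop-key.pem ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com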

04:47 - PuTTY is a free utility for connecting to Amazon instances, but it can be difficult to set up. We will show you how to set it up in detail.
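PuTTY cannot use Amazon's .pem key directly; the key must first be converted to .ppk format. On Windows this is done in the PuTTYgen GUI; where puttygen is installed as a command-line tool, the conversion is roughly:

    # Convert the AWS .pem private key into PuTTY's .ppk format
    puttygen my-hadoop-key.pem -O private -o my-hadoop-key.ppk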

04:18 - We will show you the special tools and settings needed to get your private key onto the Amazon instances.
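One hedged sketch of this step uses scp (the host and file names are placeholders):

    # Copy the private key to the master node and lock down its permissions
    scp -i my-hadoop-key.pem my-hadoop-key.pem ec2-user@master-host:~/.ssh/
    ssh -i my-hadoop-key.pem ec2-user@master-host "chmod 600 ~/.ssh/my-hadoop-key.pem"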

Section 4: Setting Up Network Connectivity and Access for Hadoop Clusters
06:13 - Before we begin setting up our Hadoop cluster, we will need to gain a better understanding of the overall architecture. We will cover the key Hadoop components so you know how it works.

08:19 - We need to set up SSH properly so that we will not have to verify our credentials each time we log in.
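A minimal sketch of passwordless SSH between nodes (node names are placeholders):

    # On the master: generate a key pair with an empty passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # Append the public key to authorized_keys on each node (repeat per node)
    cat ~/.ssh/id_rsa.pub | ssh ec2-user@node1 "cat >> ~/.ssh/authorized_keys"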

08:26 - To install Hadoop properly, we will need to configure the network details on each node.
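In practice this means giving every node a consistent hostname and replicating the same /etc/hosts file across the cluster; the IPs and names below are illustrative:

    # /etc/hosts entries copied to every node (example private IPs)
    10.0.0.10  master.hadoopcluster.local  master
    10.0.0.11  node1.hadoopcluster.local   node1
    10.0.0.12  node2.hadoopcluster.local   node2

    # Set each node's fully qualified hostname (CentOS 6 style)
    hostname master.hadoopcluster.local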

Section 5: Setting Up Configuration Settings across Hadoop Clusters
05:10 - To have a proper Hadoop installation, we need to make sure we install all dependencies. Having the proper software repositories set up will go a long way toward a smooth Hadoop installation.
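
On a CentOS-style system this boils down to dropping a .repo file into /etc/yum.repos.d on every node; the URL below is a placeholder for whichever Ambari version and OS you use:

    # Register the Ambari repository and confirm yum can see it
    wget -nv <ambari-repo-url>/ambari.repo -O /etc/yum.repos.d/ambari.repo
    yum repolist
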
07:26 - It is very difficult to manage a Hadoop cluster if you have to reissue the same commands over and over again on every instance. The pdsh utility allows us to run a command once and apply it across the data nodes in our cluster.
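For example (node names are placeholders):

    # Run one command across several nodes in parallel
    pdsh -w node[1-3] "date"
    # Or keep the host list in a file and point pdsh at it via WCOLL
    export WCOLL=/root/hostlist.txt
    pdsh "uptime"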

08:58 - We will need to take a series of steps for a proper Hadoop installation. In this video, we will show you how to prep the data nodes, remove any conflicting software, and set up the daemon processes needed.
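The commands below are a hedged sketch of typical prep steps on CentOS 6, not the video's exact list:

    # Disable services that commonly conflict with a Hadoop install, enable time sync
    pdsh -w node[1-3] "service iptables stop; chkconfig iptables off"
    pdsh -w node[1-3] "setenforce 0"
    pdsh -w node[1-3] "yum -y install ntp; service ntpd start; chkconfig ntpd on"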

Section 6: Creating a Hadoop Cluster
06:54 - To install Hadoop, our environment has to be set up correctly. We will check the Linux environment and download Ambari.
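Assuming the Ambari repository from Section 5 is in place, the server-side install is roughly:

    # On the master node: install, configure, and start the Ambari server
    yum -y install ambari-server
    ambari-server setup -s      # -s runs the setup silently with defaults
    ambari-server start         # the web UI then listens on port 8080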

05:27 - We will take you step by step through the first half of installing Hadoop, making sure you configure all of the settings correctly.

07:09 - We continue stepping through the Hadoop installation process. Here, we take the detailed steps needed to configure Hadoop services.

Section 7: Loading and Navigating the Hadoop File System (HDFS)
06:22 - Before you can use the Hadoop file system effectively, you will have to understand its architecture and how it works. This video will show you exactly how HDFS is configured.

07:36 - To get a file transferred to HDFS, you will need to take a set of steps that distribute data across nodes.
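The core commands look like this (paths and file names are illustrative):

    # Create a target directory in HDFS, upload a local file, and verify
    hdfs dfs -mkdir -p /user/hadoop/mydata
    hdfs dfs -put mydata.csv /user/hadoop/mydata/
    hdfs dfs -ls /user/hadoop/mydata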

07:00 - Hadoop can be difficult and complex to troubleshoot. Ambari reduces the complexity of this task with a rich graphical UI.

Section 8: Hadoop Tools and Processing Files
10:16 - Hadoop comes with many components, like Hive and Pig. However, using them can be difficult because they rely on a command-line interface. In this video, we install HUE, the Apache Hadoop UI that solves our interface problems.
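On an HDP-era cluster, the installation itself is typically a package install on the node that will host the UI (a hedged sketch, not the video's exact steps):

    # Install HUE from the distribution's repository and start it
    yum -y install hue
    service hue start           # the web UI defaults to port 8888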

07:37 - For us to proceed with using HUE, we need to make multiple Hadoop configuration changes as well as install the HUE code on our servers.
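The central change is allowing the hue user to act on behalf of end users through Hadoop's standard proxyuser properties in core-site.xml; the wildcard values below are the permissive, getting-started choice:

    <!-- core-site.xml: let the hue user impersonate end users -->
    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>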

06:03 - We can now use HUE and do not need to deal with the command line. We will use HUE to load data into a table and query that data, which is one of the most common use cases for Hadoop.
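The clicks in HUE's interface correspond to HiveQL you could type into its query editor; the table and file names below are illustrative:

    -- Create a table, load a file already sitting in HDFS, and query it
    CREATE TABLE pageviews (page STRING, hits INT)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
    LOAD DATA INPATH '/user/hadoop/mydata/mydata.csv' INTO TABLE pageviews;
    SELECT page, SUM(hits) FROM pageviews GROUP BY page;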

Instructor Biography

Packt Publishing, Tech Knowledge in Motion

Over the past ten years Packt Publishing has developed an extensive catalogue of over 2000 books, e-books and video courses aimed at keeping IT professionals ahead of the technology curve. From new takes on established technologies through to the latest guides on emerging platforms, topics and trends – Packt's focus has always been on giving our customers the working knowledge they need to get the job done. Our Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.
