Prerequisites: Hive requires knowledge of SQL. The course includes an SQL primer at the end; please do that first if you don't know SQL. You'll need to know Java if you want to follow the sections on custom functions.
Taught by a 4-person team including two Stanford-educated ex-Googlers and two ex-Flipkart Lead Analysts. This team has decades of practical experience working with large-scale data.
Hive is like a new friend with an old face (SQL). This course is an end-to-end, practical guide to using Hive for Big Data processing.
Let's parse that:
A new friend with an old face: Hive helps you leverage the power of distributed computing and Hadoop for analytical processing. Its interface is like an old friend: the very SQL-like HiveQL. This course will fill in all the gaps between SQL and what you need to use Hive.
End-to-End: The course is an end-to-end guide to using Hive: whether you are an analyst who wants to process data, or an engineer who needs to build custom functionality or optimize performance - everything you'll need is right here. New to SQL? No need to look elsewhere: the course has a primer on all the basic SQL constructs.
Practical: Everything is taught using real-life examples, working queries and code.
Analytical Processing: Joins, Subqueries, Views, Table Generating Functions, Explode, Lateral View, Windowing and more
Tuning Hive for better performance: Partitioning, Bucketing, Join Optimizations, Map-Side Joins, Indexes. Extending Hive: writing custom User-Defined Functions in Java (UDF, UDAF, GenericUDF, GenericUDTF), custom functions in Python, and how Select, Group By and Join are implemented in MapReduce.
For SQL Newbies: SQL In Great Depth
Using discussion forums
Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(
We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.
The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.
We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.
It is a hard trade-off.
Thank you for your patience and understanding!
Data warehousing systems - which have become the rage with the rise of 'Big Data' - are quite different from traditional transaction processing systems. Hive is a prototypical data warehousing system.
Hive is built atop Hadoop, and can even be characterized as the SQL skin atop Hadoop MapReduce.
Hive tries really hard - and mostly succeeds - at pretending to be a relational DBMS, but under the hood it's quite different. Understand how, and understand schema-on-read.
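Here is a minimal sketch of what schema-on-read means in practice (the table, paths and file are made up for illustration):

```sql
-- Hive applies the schema only when reading, never when loading.
CREATE EXTERNAL TABLE page_views (
  user_id INT,
  url     STRING,
  ts      STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/page_views';

-- LOAD DATA simply moves the file into place; nothing is validated here.
LOAD DATA INPATH '/staging/views.csv' INTO TABLE page_views;

-- Only now, at query time, is each line parsed against the schema.
-- Fields that don't parse come back as NULL instead of failing the load.
SELECT user_id, url FROM page_views;
```

Contrast this with a traditional RDBMS, which would reject malformed rows at write time (schema-on-write).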
Now that we understand the differences between Hive and a traditional RDBMS, the differences between HiveQL and SQL will seem a lot less annoying and arbitrary.
Before we install Hive, we need to install Hadoop. Hadoop has 3 different install modes - Standalone, Pseudo-distributed and Fully Distributed. Get an overview of when to use each.
How to set up Hadoop in the standalone mode. Windows users need to install a Virtual Linux instance before this video.
Set up Hadoop in the Pseudo-Distributed mode. All Hadoop services will be up and running!
If you are all set with Hadoop, let's go ahead and install Hive.
Hive has some really cool datatypes - collections that make it feel like there is a real programming language under the hood. Oh, and btw - there is!
Structs and unions are yet another bit of Hive that seem more at home in a programming language.
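A hypothetical table sketching the collection types in action (column names and delimiters are ours, not from the course):

```sql
-- A table mixing Hive's collection datatypes.
CREATE TABLE employees (
  name     STRING,
  skills   ARRAY<STRING>,                       -- e.g. ['java','hive']
  salaries MAP<STRING, FLOAT>,                  -- e.g. {'2023': 90000.0}
  address  STRUCT<city: STRING, zip: STRING>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  COLLECTION ITEMS TERMINATED BY '|'
  MAP KEYS TERMINATED BY ':';

-- Element access at query time:
SELECT name, skills[0], salaries['2023'], address.city
FROM employees;
```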
Let's get into the nitty-gritty - starting with creating tables. Remember schema-on-read?
Inserting into tables has a few quirks in Hive because, after all, all writes are just data dumps that know nothing about the schema.
More on inserts - remember that no schema checking happens during database writes!
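A sketch of the two main insert styles (table names are hypothetical):

```sql
-- Hive favors bulk writes over row-by-row inserts.
-- INSERT OVERWRITE replaces the table's data files wholesale:
INSERT OVERWRITE TABLE page_views_copy
SELECT * FROM page_views;

-- INSERT INTO appends new data files alongside the existing ones:
INSERT INTO TABLE page_views_copy
SELECT * FROM page_views WHERE url LIKE '%signup%';
```

Either way, no schema checking happens on the written data - it is just files landing in a directory.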
Alter table works in Hive - understand how.
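A couple of representative ALTER TABLE statements (names are made up):

```sql
-- Renaming a table updates metadata (and, for managed tables, moves
-- the HDFS directory); the data files are not rewritten.
ALTER TABLE page_views RENAME TO page_views_raw;

-- Adding a column changes only the schema in the metastore; existing
-- files are untouched, so the new column reads as NULL for old rows.
ALTER TABLE page_views_raw ADD COLUMNS (referrer STRING);
```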
Hive data is stored as files on HDFS, the distributed file system that is an integral part of Hadoop. Understanding the physical layout of Hive tables will make many advanced concepts - bucketing and partitioning - far clearer.
Learn how to interact with HDFS. This comes in handy if you want to understand what's going on under the hood of your Hive Queries.
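A few HDFS commands worth trying on your own cluster (the warehouse path below is the common default, but it depends on your hive.metastore.warehouse.dir setting):

```shell
# Hive's managed tables typically live under the warehouse directory:
hdfs dfs -ls /user/hive/warehouse

# Each table is a directory; each insert adds data files inside it:
hdfs dfs -ls /user/hive/warehouse/page_views

# You can inspect the raw file contents directly:
hdfs dfs -cat /user/hive/warehouse/page_views/000000_0 | head
```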
Let's create a few tables and see how to insert data. We'll see external tables as well and what happens under the hood in HDFS with each of these activities.
The Hive CLI allows you to run scripts and execute queries directly from the command line rather than the Hive shell.
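For instance, using the standard CLI flags (the query and file path are placeholders):

```shell
# Run a single query from the command line:
hive -e 'SELECT COUNT(*) FROM page_views;'

# Run a file of HiveQL statements:
hive -f /path/to/queries.hql

# -S (silent mode) suppresses the informational log output:
hive -S -e 'SELECT COUNT(*) FROM page_views;'
```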
Hive has a whole bunch of useful functions available out-of-the-box. This is an introduction to the 3 types of functions available: standard, aggregate and table-generating functions.
The case-when statement is very useful for populating columns by evaluating conditions. size() and cast() are other useful built-in functions.
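A quick sketch of all three in one query, against a hypothetical employees table with a salary column and a skills array:

```sql
SELECT name,
       -- CASE ... WHEN derives a column from conditions:
       CASE WHEN salary >= 100000 THEN 'senior'
            WHEN salary >= 50000  THEN 'mid'
            ELSE 'junior'
       END AS band,
       -- size() counts the elements of a collection:
       size(skills)           AS num_skills,
       -- cast() converts between types:
       cast(salary AS STRING) AS salary_str
FROM employees;
```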
explode() is a very interesting table-generating function which expands an array to produce a row for every element in the array.
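A minimal sketch, assuming an employees table with a skills ARRAY column:

```sql
-- explode() turns one array into one row per element:
SELECT explode(skills) AS skill FROM employees;

-- To keep other columns alongside the exploded rows,
-- pair explode() with LATERAL VIEW:
SELECT name, skill
FROM employees
LATERAL VIEW explode(skills) skills_table AS skill;
```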
Sub-queries in Hive are rather quirky. For instance, union is fine, but intersect is not.
Sub-queries have a few rather arcane rules - no equality signs, and some rather specific rules on exists and in.
It is possible to insert data into a table using subqueries - just don't try to specify any schema information!
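For example (table names hypothetical) - note the absence of any column list or schema information on the target; the SELECT's columns are matched to the table's columns by position:

```sql
INSERT OVERWRITE TABLE frequent_urls
SELECT url, COUNT(*)
FROM page_views
GROUP BY url
HAVING COUNT(*) > 100;
```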
Views are an awesome bit of functionality in Hive - use them. Oh, btw, views are non-materialized, if that means anything to you. If not - never mind!
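A small sketch of a view (non-materialized means only the query text is stored; the underlying query runs each time the view is read):

```sql
CREATE VIEW daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM sales
GROUP BY order_date;

-- Querying the view re-runs the aggregation over sales:
SELECT * FROM daily_revenue WHERE revenue > 10000;
```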
Indexes are just a lot less important in Hive than they are in a traditional RDBMS. Understand why, and also how they can be used.
Partitioning in Hive is conceptually similar to indexing in a traditional DBMS - a way to quickly look up rows with specific values in a particular column.
Let's understand the why of partitioning.
Partitioning needs to be specified at the time of table creation - understand the syntax.
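The syntax looks like this (table and column names are illustrative):

```sql
-- The partition column (country) does NOT appear in the main column
-- list; each value becomes its own HDFS subdirectory, e.g.
-- .../page_views_p/country=US/
CREATE TABLE page_views_p (
  user_id INT,
  url     STRING
)
PARTITIONED BY (country STRING);
```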
Once a table has been partitioned appropriately, using it is not a lot of work.
Inserting data into partitioned tables can be a bit tedious - understand how dynamic partitioning can help!
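A sketch of both styles, assuming a staging table with a country column:

```sql
-- Static partitioning: one hand-written INSERT per partition value.
INSERT OVERWRITE TABLE page_views_p PARTITION (country = 'US')
SELECT user_id, url FROM staging WHERE country = 'US';

-- Dynamic partitioning: Hive derives the partition from the last
-- SELECT column(s). These settings must be enabled first:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

INSERT OVERWRITE TABLE page_views_p PARTITION (country)
SELECT user_id, url, country FROM staging;
```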
Let's see partitioning in action!
Bucketing is conceptually quite close to partitioning - and indeed to Indexing in traditional RDBMS - but with a key difference.
Bucketing has an important advantage over partitioning - since the number of buckets is fixed at table creation, it is unlikely to overwhelm the metastore the way an explosion of partitions can.
Bucketing needs to be specified at the time of table creation - understand how.
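The syntax, sketched on a hypothetical table:

```sql
-- Rows are assigned to a fixed number of buckets by hashing user_id,
-- so the file count stays bounded no matter how many distinct values
-- the column has (unlike partitioning).
CREATE TABLE page_views_b (
  user_id INT,
  url     STRING
)
CLUSTERED BY (user_id) INTO 32 BUCKETS;

-- On older Hive versions, enforce bucketing on insert:
SET hive.enforce.bucketing = true;
```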
Once a table has been bucketed, using it is not that difficult.
Sampling is a very handy technique in a data warehouse, and bucketing helps power this functionality.
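For example, on a table bucketed 32 ways on user_id, Hive can satisfy a sample by scanning a single bucket rather than the whole table:

```sql
-- Reads roughly 1/32 of the data by picking one bucket:
SELECT *
FROM page_views_b
TABLESAMPLE (BUCKET 1 OUT OF 32 ON user_id);
```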
Windowing functions start to get at the real number-crunching power of Hive. In effect, they help tack on a new column to a query result - and that column contains the results of aggregate functions on a window of rows.
Let's use windowing to set up a running total, aka a cumulative sum, for revenues in a sales table.
Let's now make that running sum reset each day - combining the power of windowing with the power of partitioning.
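Both variants, sketched against a hypothetical sales table with sale_time and amount columns:

```sql
-- Running total of revenue across the whole table, ordered by time:
SELECT sale_time, amount,
       SUM(amount) OVER (ORDER BY sale_time) AS running_total
FROM sales;

-- The same sum, but reset at the start of each day by adding
-- PARTITION BY on the date:
SELECT sale_time, amount,
       SUM(amount) OVER (PARTITION BY to_date(sale_time)
                         ORDER BY sale_time) AS daily_running_total
FROM sales;
```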
row_number(), rank(), lead() and lag() - Hive places really nifty windowing functions at your disposal.
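All four in one sketch, against the same hypothetical sales table:

```sql
SELECT sale_time, amount,
       -- Sequential number within each day, biggest sale first:
       row_number() OVER (PARTITION BY to_date(sale_time)
                          ORDER BY amount DESC) AS rn,
       -- Like row_number(), but ties share a rank:
       rank()       OVER (PARTITION BY to_date(sale_time)
                          ORDER BY amount DESC) AS rnk,
       -- Peek at the previous and next rows in time order:
       lag(amount, 1)  OVER (ORDER BY sale_time) AS prev_amount,
       lead(amount, 1) OVER (ORDER BY sale_time) AS next_amount
FROM sales;
```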
Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad and have spent years working in tech, in the Bay Area, New York, Singapore and Bangalore.
Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft
Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too
We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Udemy!
We hope you will try our offerings, and think you'll like them :-)