Apache Spark Fundamentals

Get the most out of the popular Apache Spark framework to perform efficient analytics on your real-time data
0.0 (0 ratings)
0 students enrolled
Created by Packt Publishing
Last updated 7/2017
English
Includes:
  • 2 hours on-demand video
  • 1 Supplemental Resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Understand the history of Apache Spark and get an introduction to its components
  • Learn how to get started with Apache Spark
  • Get an introduction to Apache Hadoop, its processes, and its components: HDFS, YARN, and MapReduce
  • Learn the fundamentals of the Scala programming language, such as classes, objects, and collections
  • Master Apache Spark programming fundamentals and Resilient Distributed Datasets (RDDs)
  • See which operations can be used to perform transformations or actions on an RDD
  • Find out how to load and save data in Spark
  • Write a Spark application in Scala and execute it on a Hadoop cluster
Requirements
  • Some familiarity with Scala would be helpful.
Description

This video course is a comprehensive tutorial that helps you learn the fundamentals of Apache Spark, one of the most popular big data processing frameworks in use today. We will introduce you to the various components of the Spark framework so you can efficiently process, analyze, and visualize data.

You will also get a brief introduction to Apache Hadoop and the Scala programming language before you start writing Spark programs. You will learn Apache Spark programming fundamentals, such as Resilient Distributed Datasets (RDDs), and see which operations can be used to perform transformations or actions on an RDD. We'll show you how to load and save data from various sources, including different file types, NoSQL stores, and RDBMS databases. We'll also explain advanced Spark programming concepts, such as managing key-value pairs and accumulators. Finally, you'll discover how to create an effective Spark application and execute it on a Hadoop cluster to process data and gain insights that support informed business decisions.

By the end of this video course, you will be well versed in the fundamentals of Apache Spark and ready to apply them in your own Spark applications.

About The Author

Nishant Garg has over 16 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as Greenplum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a senior technical architect in the Big Data R&D Labs at Impetus Infotech Pvt. Ltd. Previously, Nishant enjoyed working with some of the most recognizable names in the IT services and financial industries, employing full software lifecycle methodologies such as Agile and Scrum.

Nishant has also undertaken many speaking engagements on big data technologies and is the author of Learning Apache Kafka and HBase Essentials, both published by Packt Publishing.


Who is the target audience?
  • This course is for data scientists, big data technology developers, and analysts who want to learn the fundamentals of Apache Spark from a single, comprehensive source, instead of spending countless hours on the internet trying to piece together bits from different sources.
Curriculum For This Course
18 Lectures
02:07:29
Introducing Spark
3 Lectures 14:39

This video provides an overview of the entire course.

Preview 03:44

What are the origins of Apache Spark and what are its uses?           

Spark Introduction
04:53

What are the various components in Apache Spark?           

Spark Components
06:02
Hadoop and Spark
4 Lectures 28:10

This video traces the complete historical journey from the Nutch project to Apache Hadoop: how the Hadoop project started, which research papers influenced it, and so on. Finally, it explains the various goals achieved by developing Hadoop.

Preview 06:49

In this video, we are going to look at the JVM processes that Apache Hadoop runs in the background: the NameNode, DataNode, ResourceManager, and NodeManager. It also provides an overview of the Hadoop components: HDFS, YARN, and the MapReduce programming model.

Hadoop Processes and Components
07:24

This video shares more details about the Hadoop Distributed File System (HDFS): its goals, its components, and how it works. It also explains another Hadoop component, YARN: its components, its lifecycle, and its use cases.

HDFS and YARN
07:10

This video provides an overview of MapReduce, the Hadoop programming model, and its execution behavior at various stages.

MapReduce
06:47
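As a rough illustration of the model this lecture describes, here is a plain-Scala sketch of the map, shuffle, and reduce stages of a word count; it is a minimal analogy, not Hadoop code, and the input lines are invented for the example.

```scala
// A plain-Scala analogy for MapReduce's three stages; not Hadoop itself.
object MapReduceModel {
  def main(args: Array[String]): Unit = {
    val input = Seq("hadoop stores data", "hadoop processes data")

    // Map stage: emit (key, value) pairs for each input record.
    val mapped = input.flatMap(_.split(" ")).map(word => (word, 1))

    // Shuffle stage: group all values belonging to the same key.
    val shuffled = mapped.groupBy(_._1)

    // Reduce stage: fold each key's values down to a single result.
    val reduced = shuffled.map { case (word, pairs) => (word, pairs.map(_._2).sum) }

    reduced.foreach(println) // e.g. (hadoop,2), (data,2), (stores,1), (processes,1)
  }
}
```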
Scala from 30,000 feet
4 Lectures 29:56

The aim of this video is to introduce the Scala language and its features, and by the end of this video, you should be able to get started with Scala.

Preview 07:16
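If you would like to try Scala before watching, here is a minimal first program, assuming a Scala 2.x installation; the object name and greeting are purely illustrative.

```scala
// A minimal first Scala program.
object HelloScala {
  def main(args: Array[String]): Unit = {
    // 'val' declares an immutable value; reassignable variables use 'var'.
    val greeting: String = "Hello, Scala"
    println(greeting)
  }
}
```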

The aim of this video is to explain the fundamentals of Scala programming, such as Scala classes, fields, and methods, and the different types of arguments, such as default and named arguments, passed to class constructors and methods.

Scala Programming Fundamentals
07:42
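Here is a short sketch of the ideas this lecture covers: a class with fields, a method, and default and named arguments. The Rectangle example is invented for illustration.

```scala
// A sketch of Scala class basics: fields, methods, default and named arguments.
class Rectangle(val width: Double = 1.0, val height: Double = 1.0) {
  // A method computed from the constructor fields.
  def area: Double = width * height

  // A method with a default argument.
  def scaled(factor: Double = 2.0): Rectangle =
    new Rectangle(width * factor, height * factor)
}

object RectangleDemo {
  def main(args: Array[String]): Unit = {
    val r = new Rectangle(height = 3.0, width = 4.0) // named arguments
    println(r.area)          // 12.0
    println(r.scaled().area) // 48.0, using the default scaling factor
  }
}
```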

The aim of this video is to explain objects in the Scala language, including singleton objects, and to outline the uses of objects in Scala applications. It also describes companion objects.

Objects in Scala
06:22
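A brief sketch of singleton and companion objects, using an invented Counter class:

```scala
// A class with a private constructor; only its companion can create it.
class Counter private (val count: Int)

// The companion object shares the class's name and file, can access its
// private members, and its 'apply' method acts as a factory.
object Counter {
  def apply(start: Int = 0): Counter = new Counter(start)
}

// A plain singleton object: exactly one instance, created lazily.
object CounterDemo {
  def main(args: Array[String]): Unit = {
    val c = Counter(5) // calls Counter.apply on the companion object
    println(c.count)   // 5
  }
}
```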

The aim of this video is to explain the structure of the Scala collections hierarchy and look at examples of different collection types, such as Array, Set, and Map. It also covers how to apply functions to data in collections and outlines the basics of structural sharing.

Collections
08:36
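The following sketch shows the collection types mentioned above (Array, Set, and Map) and how functions are applied to their contents; the data is illustrative.

```scala
// Common Scala collection types and applying functions to their elements.
object CollectionsDemo {
  def main(args: Array[String]): Unit = {
    val numbers = Array(1, 2, 3, 4)               // mutable, fixed-size
    val unique  = Set("spark", "hadoop")          // no duplicate elements
    val ages    = Map("alice" -> 30, "bob" -> 25) // key-value pairs

    // Applying functions to data in collections.
    val doubled = numbers.map(_ * 2)              // Array(2, 4, 6, 8)
    val adults  = ages.filter { case (_, age) => age >= 18 }

    println(doubled.mkString(", "))
    println(adults)
    println(unique.contains("spark"))             // true
  }
}
```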
Spark Programming
3 Lectures 23:51

The aim of this video is to start your learning of Apache Spark fundamentals. It introduces you to the Spark component architecture and how different components are stitched together for Spark execution.

Preview 07:39

The aim of this video is to take the first step towards Spark programming. It explains the SparkContext and the need for Resilient Distributed Datasets (RDDs). It also explains how RDDs change the execution approach compared with MapReduce.

Understanding RDD
07:06
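For reference, here is a minimal sketch of creating a SparkContext and a first RDD, assuming Spark's Scala API is on the classpath and using a local master for experimentation.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddIntro {
  def main(args: Array[String]): Unit = {
    // The SparkContext is the entry point to the RDD API.
    val conf = new SparkConf().setAppName("RddIntro").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // parallelize turns a local collection into a distributed RDD.
    val rdd = sc.parallelize(Seq(1, 2, 3, 4, 5))
    println(rdd.count()) // 5

    sc.stop()
  }
}
```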

The aim of this video is to explain the operations that can be applied to RDDs. These operations take the form of transformations and actions. It explains various operations in both categories, with examples.

RDD Operations
09:06
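A compact word-count sketch showing the transformation/action split: transformations only build a lineage, and nothing executes until an action such as collect runs. The input lines are invented.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddOps {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("RddOps").setMaster("local[*]"))

    val lines = sc.parallelize(Seq("spark is fast", "spark is simple"))

    // Transformations are lazy; they only record the lineage.
    val counts = lines
      .flatMap(_.split(" "))  // transformation
      .map(word => (word, 1)) // transformation
      .reduceByKey(_ + _)     // transformation

    // Actions trigger execution and return results to the driver.
    counts.collect().foreach(println) // e.g. (spark,2), (is,2), ...

    sc.stop()
  }
}
```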
Advanced Spark Programming
4 Lectures 30:53

The aim of this video is to explain and demonstrate loading and storing data in Spark from different file types, such as text, CSV, JSON, and sequence files; different filesystems, such as the local filesystem, Amazon S3, and HDFS; and different databases, such as MySQL, Postgres, and HBase.

Preview 10:15
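The sketch below shows the simplest of these cases, text-file loading and saving, with a naive comma split standing in for real CSV parsing; the paths are hypothetical placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LoadSave {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("LoadSave").setMaster("local[*]"))

    // Text files: one RDD element per line; the same call accepts
    // local paths as well as hdfs:// or s3a:// URIs.
    val lines = sc.textFile("input/data.txt")

    // A simple CSV-style parse by splitting each line on commas.
    val rows = lines.map(_.split(","))

    // Save the RDD back out as text files, one directory of part files.
    rows.map(_.mkString("|")).saveAsTextFile("output/pipe-delimited")

    sc.stop()
  }
}
```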

The aim of this video is to explain the motivation behind key-value-based RDDs and how to create them. Next, it explains the various transformations and actions that can be applied to key-value-based RDDs. Finally, it explains data partitioning techniques in Spark.

Managing Key-Value Pairs
06:56
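Here is a small sketch of pair-RDD operations and explicit partitioning; the sales data and partition count are illustrative.

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object PairRdd {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("PairRdd").setMaster("local[*]"))

    // An RDD of (key, value) tuples is a pair RDD.
    val sales = sc.parallelize(Seq(("books", 10), ("music", 5), ("books", 7)))

    // Pair-RDD transformation: reduceByKey aggregates values per key.
    val totals = sales.reduceByKey(_ + _)

    // Data partitioning: co-locate records by key across 4 partitions.
    val partitioned = totals.partitionBy(new HashPartitioner(4))

    partitioned.collect().foreach(println) // (books,17), (music,5)
    sc.stop()
  }
}
```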

The aim of this video is to explain a few more advanced concepts, such as accumulators, broadcast variables, and passing data to external programs using pipes.

Accumulators
06:56
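A minimal sketch of an accumulator and a broadcast variable, assuming the Spark 2.x longAccumulator API; the stop-word set and input words are invented.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SharedVars {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("SharedVars").setMaster("local[*]"))

    // An accumulator aggregates values from executors back to the driver.
    val badRecords = sc.longAccumulator("badRecords")

    // A broadcast variable ships a read-only value to each executor once.
    val stopWords = sc.broadcast(Set("the", "a", "an"))

    val words = sc.parallelize(Seq("the", "spark", "", "engine", "a"))
    val kept = words.filter { w =>
      if (w.isEmpty) badRecords.add(1) // count empty records as we go
      w.nonEmpty && !stopWords.value.contains(w)
    }

    println(kept.collect().mkString(", ")) // spark, engine
    println(badRecords.value)              // 1
    sc.stop()
  }
}
```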

The aim of this video is to demonstrate writing Spark jobs using the Eclipse-based Scala IDE, creating a Spark job JAR file, and finally copying the JAR to a Hadoop cluster and executing it there.

Writing a Spark Application
06:46
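As an outline of what such a job might look like, here is a sketch of an application entry point suitable for packaging into a JAR; the class name is hypothetical, and the master is deliberately left to be supplied at submit time.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountApp {
  def main(args: Array[String]): Unit = {
    // No setMaster here: the master (e.g. YARN) is supplied when submitting,
    // e.g.: spark-submit --class WordCountApp --master yarn app.jar <input>
    val sc = new SparkContext(new SparkConf().setAppName("WordCountApp"))

    sc.textFile(args(0))          // input path passed on the command line
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(0) + ".counts")

    sc.stop()
  }
}
```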
About the Instructor
Packt Publishing
3.9 Average rating
7,336 Reviews
52,405 Students
616 Courses
Tech Knowledge in Motion

Packt has been committed to developer learning since 2004. A lot has changed in software since then, but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live, and how to put them to work.

With an extensive library of content of more than 4,000 books and video courses, Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages to cutting-edge data analytics and DevOps, Packt takes software professionals in every field to what's important to them now.

From skills that will help you develop and future-proof your career to immediate solutions to everyday tech challenges, Packt is a go-to resource for becoming a better, smarter developer.

Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.