Apache Spark for Big Data Analytics and Data Processing
- 7 hours on-demand video
- 1 downloadable resource
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Query your structured data using Spark SQL and work with the DataSets API
- Analyze and process graph structures using Spark’s GraphX module
- Train machine learning models with streaming data, and use them for making real-time predictions
- Implement high-velocity streaming and data processing use cases while working with the Streaming API
- Dive into MLlib, Spark's machine learning library, with its highly scalable algorithms
- See how SparkR allows you to create and transform RDDs in R
- See analytical use case implementations using MLlib, GraphX, and Spark Streaming
- Examine a number of real-world use cases with hands-on projects
- Build Hadoop and Apache Spark jobs that process data quickly and effectively
This video gives a complete introduction to Spark SQL, discusses the types of applications where Spark SQL is useful, and ends by explaining Spark SQL's performance.
First, it introduces Spark SQL
Next, it explains the types of applications where Spark SQL is useful
Finally, it explains the performance of Spark SQL
This video explains the core Spark SQL abstractions used by its programming interfaces: SQLContext, HiveContext, SparkSession, Dataset, and DataFrame.
First, it talks about the SQLContext for Spark 1.6 and 2.0
Next, it explains the HiveContext for Spark 1.6 and 2.0
Finally, it explains the concepts of Dataset and DataFrame (see the sketch below)
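To make these abstractions concrete, here is a minimal sketch (the Person case class and sample rows are invented for illustration, not taken from the course). Since Spark 2.0, SparkSession subsumes SQLContext and HiveContext, and a DataFrame is simply an untyped Dataset[Row]:

```scala
import org.apache.spark.sql.SparkSession

// Since Spark 2.0, SparkSession replaces SQLContext and HiveContext.
val spark = SparkSession.builder()
  .appName("core-abstractions")
  .master("local[*]")
  // .enableHiveSupport()   // uncomment when Hive classes are on the classpath
  .getOrCreate()
import spark.implicits._

// The Person case class and sample rows are made up for this example.
case class Person(name: String, age: Long)

// A DataFrame is an untyped Dataset[Row]; a Dataset[Person] is typed.
val df = Seq(Person("Ada", 36), Person("Linus", 29)).toDF()
val ds = df.as[Person]

df.printSchema()
ds.filter(_.age > 30).show()
```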
This video explains how to create DataFrames from different types of files, and runs some code examples.
First, it demonstrates creating DataFrames from CSV files
Next, it demonstrates creating DataFrames from JSON files
Finally, it talks about creating DataFrames from Parquet and ORC files (see the sketch below)
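A rough sketch of what the demonstrations cover (the file paths are placeholders, not files from the course):

```scala
// Reuses the spark session from the previous sketch.
// CSV: treat the first line as a header and infer column types
val csvDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/people.csv")

// JSON: by default Spark expects one JSON object per line
val jsonDF = spark.read.json("data/people.json")

// Parquet and ORC are columnar formats that carry their own schema
val parquetDF = spark.read.parquet("data/people.parquet")
val orcDF     = spark.read.orc("data/people.orc")

csvDF.printSchema()
```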
This video explains ways of creating DataFrames from different data sources. It also talks about storing DataFrames, and runs some code examples.
First, it talks about creating DataFrames from a Hive data source
Next, it explains creating DataFrames from a JDBC data source
Finally, it explains storing the data in JSON/ORC files and through Hive and JDBC (see the sketch below)
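A minimal sketch of both directions, assuming a reachable Hive metastore and a placeholder JDBC database (connection details and table names are invented):

```scala
// Reading from Hive requires a SparkSession built with enableHiveSupport()
val hiveDF = spark.sql("SELECT * FROM default.people")

// JDBC: all connection details here are placeholders for your own database
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/mydb")
  .option("dbtable", "people")
  .option("user", "spark")
  .option("password", "secret")
  .load()

// Storing DataFrames back out
jdbcDF.write.mode("overwrite").json("out/people-json")
jdbcDF.write.mode("overwrite").orc("out/people-orc")
jdbcDF.write.mode("overwrite").saveAsTable("people_copy")   // into Hive
```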
This video demonstrates the DataFrame API's common operations, such as columns, dtypes, explain, printSchema, registerTempTable, and so on.
First, it talks about the columns and dtypes operations
Next, it explains the explain and printSchema operations
Finally, it talks about registerTempTable (see the sketch below)
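A minimal sketch of these operations, assuming df is a DataFrame like the ones created above:

```scala
df.columns                 // Array[String] of column names
df.dtypes                  // Array[(String, String)] of (name, type) pairs
df.printSchema()           // prints the schema as a tree
df.explain(true)           // prints the logical and physical query plans

// registerTempTable (Spark 1.x) became createOrReplaceTempView in Spark 2.0
df.createOrReplaceTempView("people")
spark.sql("SELECT * FROM people").show()
```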
This video demonstrates the DataFrame API's query operations for aggregation, sampling, filter, groupBy, join, intersect, orderBy, sort, and so on.
First, it talks about the query operations for aggregation, sampling, and filter
Next, it explains the query operations for groupBy, join, and intersect
Finally, it talks about the query operations for orderBy, sort, and distinct (see the sketch below)
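A minimal sketch of these query operations, assuming df and otherDF are DataFrames with the columns used below (age, city, id, and name are invented for illustration):

```scala
import org.apache.spark.sql.functions._
import spark.implicits._

df.filter($"age" > 21)                     // filter rows
df.sample(withReplacement = false, 0.1)    // a 10% sample
df.agg(avg($"age"), max($"age"))           // aggregation

df.groupBy($"city").count()                // groupBy
df.join(otherDF, Seq("id"))                // equi-join on the id column
df.intersect(otherDF)                      // set intersection (schemas must match)

df.orderBy($"age".desc)                    // orderBy is an alias of sort
df.sort($"name")
df.distinct()
```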
This video demonstrates the DataFrame API's actions, such as limit, select, withColumn, selectExpr, count, describe, collect, and so on.
First, it talks about the limit, select, and withColumn actions
Next, it explains the selectExpr, count, and describe actions
Finally, it talks about the collect, show, and take actions (see the sketch below)
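A minimal sketch of these actions on the same hypothetical df:

```scala
import spark.implicits._

df.limit(5).show()                          // first five rows as a new DataFrame
df.select("name", "age")                    // projection
df.withColumn("ageNextYear", $"age" + 1)    // add a derived column
df.selectExpr("name", "age * 2 AS doubleAge")

df.count()                                  // number of rows
df.describe("age").show()                   // basic statistics for a numeric column
df.collect()                                // pulls all rows to the driver: use with care
df.take(3)                                  // first three rows as an Array[Row]
```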
This video explains the DataFrame API's built-in functions for collections, date, time, math, and strings that Spark SQL provides, optimized for fast execution.
First, it talks about the built-in functions for collections
Next, it explains the built-in functions for date and time
Finally, it talks about the built-in functions for math and strings (see the sketch below)
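A minimal sketch of a few built-in functions from each family; the columns (tags, signup, score, name) are invented for illustration:

```scala
import org.apache.spark.sql.functions._
import spark.implicits._

df.select(
  size($"tags"),                        // collections: number of elements
  array_contains($"tags", "spark"),     // collections: membership test
  current_date(),                       // date and time
  datediff(current_date(), $"signup"),  // days between two dates
  round($"score", 2),                   // math
  sqrt($"score"),
  upper($"name"),                       // strings
  length($"name")
).show()
```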
This video gives a complete introduction to Spark Streaming, DStreams, and the support for different data sources.
First, it talks about why Spark Streaming is needed
Next, it explains how DStreams differ from RDDs
Finally, it explains the different data sources supported by Spark Streaming
This video explains the architecture of Spark Streaming, the concept of DStreams with an example, and streaming execution in Spark.
First, it walks through the Spark Streaming architecture in detail
Next, it explains the concept of DStreams with an example
Finally, it explains the Spark Streaming execution details (see the sketch below)
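The classic DStream example is a socket word count; a minimal sketch (feed it with, for example, `nc -lk 9999`):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// A DStream is a sequence of RDDs, one per batch interval.
val conf = new SparkConf().setAppName("dstream-example").setMaster("local[2]")
val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second micro-batches

val lines  = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()
```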
This video explains the different types of transformations available in Spark Streaming: stateless transformations and stateful transformations.
First, it talks about stateless transformations such as map(), filter(), groupByKey(), and so on
Next, it explains the windowed operations among the stateful transformations
Finally, it explains the updateStateByKey() stateful transformation (see the sketch below)
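A minimal sketch of both kinds of transformation, reusing ssc and lines from the word-count sketch above:

```scala
import org.apache.spark.streaming.Seconds

ssc.checkpoint("checkpoint/")   // stateful transformations require checkpointing

// Stateless: each batch is transformed independently
val words = lines.flatMap(_.split(" ")).map((_, 1))

// Windowed (stateful): counts over the last 30 seconds, sliding every 10
val windowed = words.reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))

// updateStateByKey keeps a running total per key across all batches
val running = words.updateStateByKey[Int] { (batch: Seq[Int], state: Option[Int]) =>
  Some(state.getOrElse(0) + batch.sum)
}
running.print()
```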
This video explains the different input sources available for Spark Streaming, such as sockets, files, Kafka, Flume, and so on. It also explains the available output operations.
First, it talks about the core input sources such as sockets, files, and Akka-based receivers
Next, it explains other input sources such as Flume and Kafka
Finally, it explains output operations such as saveAsTextFiles(), saveAsHadoopFiles(), and so on (see the sketch below)
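A minimal sketch of the output side, reusing counts from the word-count sketch:

```scala
counts.print()                                    // first elements of each batch, on the driver
counts.saveAsTextFiles("out/wordcounts", "txt")   // one directory of files per batch

// foreachRDD is the generic escape hatch for writing to any external system
counts.foreachRDD { rdd =>
  rdd.take(10).foreach(println)
}
```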
This video briefly explains performance considerations for Spark Streaming, such as batch size, parallelism, garbage collection, and memory usage.
First, it talks about tuning the batch size for Spark Streaming
Next, it explains using parallelism to improve Spark Streaming performance
Finally, it explains garbage collection and memory usage for streaming applications (see the sketch below)
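A minimal sketch of a few commonly tuned settings; the values are illustrative, not recommendations:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("tuned-streaming")
  // A smaller block interval yields more partitions per batch, hence more parallelism
  .set("spark.streaming.blockInterval", "100ms")
  // Back-pressure adapts the ingestion rate to the actual processing rate
  .set("spark.streaming.backpressure.enabled", "true")
  // Kryo usually cuts serialization cost and memory footprint
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
```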
The aim of this video is to explain best practices for handling high-velocity streams, such as using parallelism, scheduling, setting the right memory configuration, and a few other tips.
First, it explains parallelism-based best practices (see the sketch after these steps)
Next, it explains scheduling-based best practices
Finally, it explains memory-related configuration and a few other tips
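One widely used parallelism practice is running several receivers and unioning their streams; a minimal sketch reusing ssc from above (the socket source is just a stand-in for any receiver-based source):

```scala
// Each receiver-based input stream occupies one core, so several receivers
// can be created and merged to raise the ingestion rate
val streams  = (1 to 4).map(_ => ssc.socketTextStream("localhost", 9999))
val unified  = ssc.union(streams)

// Repartition before heavy work to spread the load across the cluster
val balanced = unified.repartition(8)
```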
The aim of this video is to explain best practices for external data sources such as Flume, Kafka, sockets, and message queue protocols.
First, it explains Flume in the context of streaming
Next, it explains Kafka in the context of streaming
Finally, it explains the usage of sockets and message queue protocols
The aim of this video is to explain design patterns that can be used to maintain global state, and to use the foreachRDD output action, in Spark Streaming.
First, it explains patterns for maintaining global state within a streaming application
Next, it explains patterns for handling connections within the foreachRDD action (see the sketch below)
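The connection-handling pattern typically looks like the sketch below: one connection per partition, opened on the executor, never per record and never on the driver (ConnectionPool here is a hypothetical pooling helper, not a Spark API):

```scala
counts.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // One connection per partition, opened on the executor
    val conn = ConnectionPool.getConnection()   // hypothetical connection pool
    partition.foreach(record => conn.send(record.toString))
    ConnectionPool.returnConnection(conn)       // return it for reuse
  }
}
```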
The goal of this video is to look into processing streaming data and understand how stream processing differs from processing batch data.
Find out what unbounded data is
Find out how stream processing differs from batch processing
Process each event quickly
A streaming architecture needs a data source, and Apache Kafka often serves as an event queue, which makes it a great source of events. The goal of this video is to integrate Spark Streaming with Apache Kafka.
Understand what Apache Kafka is
Use Apache Kafka as a data source for a Spark Streaming job
Learn about writing a DStream provider (see the sketch below)
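A minimal sketch of the integration using the spark-streaming-kafka-0-10 connector (the broker address, topic name, and group id are placeholders), reusing ssc from above:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",      // placeholder broker
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "spark-demo"
)

// Direct stream: no receiver; Kafka partitions map to Spark partitions
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams)
)
stream.map(record => (record.key, record.value)).print()
```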
The aim of this video is to implement stateful stream processing that saves data to a Cassandra database and retrieves it, so that Cassandra can be used as a durable state store.
Implement stateful stream processing
Use Cassandra as the state store
Use Spark Streaming's mapWithState to implement stateful processing (see the sketch below)
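A minimal sketch of the mapWithState part, reusing the words pair DStream from the transformations sketch; persisting the resulting state to Cassandra (for example via the spark-cassandra-connector) is left out here:

```scala
import org.apache.spark.streaming.{State, StateSpec}

// Keep a running count per key; the state survives across batches
val mappingFunc = (key: String, value: Option[Int], state: State[Int]) => {
  val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
  state.update(sum)
  (key, sum)
}

ssc.checkpoint("checkpoint/")   // mapWithState requires checkpointing
val stateStream = words.mapWithState(StateSpec.function(mappingFunc))
stateStream.print()
```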
- Basic understanding and functional knowledge of Apache Spark and big data are required.
Today’s world witnesses a massive amount of data being generated every day, everywhere. As a result, a number of organizations are focusing on Big Data processing to handle large amounts of data in real time with maximum efficiency. This has led to Apache Spark rapidly gaining popularity in the Big Data market. If you want to get the most out of this trending Big Data framework for all your data processing needs, then go for this Learning Path.
This comprehensive 3-in-1 course focuses on performing data streaming and data analytics with Apache Spark. You will learn to load data from a variety of structured sources such as JSON, Hive, and Parquet using Spark SQL and schema RDDs. You will also build streaming applications and learn best practices for managing high-velocity streaming and external data sources. Next, you will explore Spark's machine learning libraries and GraphX, where you will perform graph processing and analysis. Finally, you will build projects that help you put your learning into practice and get a strong hold on the topics.
Contents and Overview
This training program includes 3 complete courses, carefully chosen to give you the most comprehensive training possible.
The first course, Spark Analytics for Real-Time Data Processing, starts off by explaining Spark SQL. You will learn how to use the Spark SQL API and built-in functions with Apache Spark. You will also go through some interactive analysis and look at some integrations between Spark and Java/Scala/Python. Next, you will explore Spark Streaming, StreamingContext, and DStreams. You will learn how Spark Streaming works on top of the Spark core, thus inheriting its features. Finally, you will stream data and also learn best practices for managing high-velocity streaming and external data sources.
In the second course, Advanced Analytics and Real-Time Data Processing in Apache Spark, you will leverage the features of various components of the Spark framework to efficiently process, analyze, and visualize your data. You will then learn how to implement the high velocity streaming operation for data processing in order to perform efficient analytics on your real-time data. You will also analyze data using machine learning techniques and graphs. Next, you will learn to solve problems using machine learning techniques and find out about all the tools available in the MLlib toolkit. Finally, you will see some useful machine learning algorithms with the help of Spark MLlib and will integrate Spark with R.
The third course, Big Data Analytics Projects with Apache Spark, contains various projects based on real-world examples. The first project is to find the top-selling products for an e-commerce business by efficiently joining datasets in the MapReduce paradigm. Next, a Market Basket Analysis will help you identify items likely to be purchased together and find correlations between items in a set of transactions. Moving on, you will learn about probabilistic logistic regression by finding the author of a post. Next, you will build a content-based recommendation system for movies, training a model to predict whether an action will happen. Finally, you will use a MapReduce-style Spark program to calculate mutual friends on a social network.
By the end of this course, you will have a sound understanding of the Spark framework, which will help you in analyzing and processing big data in real time.
Meet Your Expert(s):
We have the best work of the following esteemed author(s) to ensure that your learning journey is smooth:
Nishant Garg has over 17 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum). He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group at Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in the IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM. Nishant has also undertaken many speaking engagements on big data technologies and is the author of Apache Kafka and HBase Essentials, both published by Packt Publishing.
Tomasz Lelek is a Software Engineer and Co-Founder of InitLearn. He mostly programs in Java and Scala, and dedicates his time and effort to getting better at everything. He is currently diving into Big Data technologies. Tomasz is very passionate about everything associated with software development. He has been a speaker at a few conferences in Poland (Confitura and JDD) and at the Krakow Scala User Group. He has also conducted a live coding session at the GeeCON conference, and spoke at an international event in Dhaka. He is very enthusiastic and loves to share his knowledge.
- This course is for software engineers, data scientists, big data developers, and big data analysts who are interested in big data processing and data analytics with Apache Spark.