Welcome to the SGLearn Series, targeted at Singapore-based learners picking up new skill sets and competencies. This course is eligible for the CITREP+ funding scheme if you are a Singaporean aged 16 or above; terms and conditions apply. Enjoy the course.
Do note that this course on Spark for Data Science was co-created by Janani Ravi and team. It is duplicated here so that Singaporeans can enjoy the training subsidy from the Singapore government.
Note from the team ...
Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with Java and with billions of rows of data.
Get your data to fly using Spark for analytics, machine learning and data science
Let’s parse that.
What's Spark? If you are an analyst or a data scientist, you're used to having multiple systems for working with data: SQL, Python, R, Java, etc. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Analytics: Using Spark and Python, you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and DataFrames to manipulate data with ease.
Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of datasets and algorithms, including PageRank, MapReduce and graph datasets.
Lots of cool stuff ..
.. and of course all the basic and advanced Spark features.
Using discussion forums
Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(
We're super small and self-funded with only 2-3 people developing technical video content. Our mission is to make high-quality courses available at super low prices.
The only way to keep our prices this low is to *NOT offer additional technical support over email or in person*. The truth is, direct support is hugely expensive and just does not scale.
We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.
It is a hard trade-off.
Thank you for your patience and understanding!
He has a great categorization for insights in data, really!
There is a profound truth in here which data scientists and analysts have known for years.
Explore, investigate and find patterns in data. Build fully fledged, scalable production systems. All using the same environment.
RDDs are pretty magical: they are the core programming abstraction in Spark.
Spark is even more powerful because of the packages that come with it: Spark SQL, Spark Streaming, MLlib and GraphX.
Let's get started by installing Spark. We'll also configure Spark to work with the IPython Notebook.
Start munging data using the PySpark REPL environment.
Operations on data: transform data to extract information, then retrieve the results.
We've learnt a little bit about how Spark and RDDs work. Let's see it in action!
If you are unfamiliar with software that requires working in a shell/command-line environment, this video will be helpful for you. It explains how to update the PATH environment variable, which is needed to set up most Linux/Mac shell-based software.
RDDs are very intuitive to use. What are some of the characteristics that make RDDs performant, resilient and efficient?
Lazy evaluation of RDDs is possible because RDDs can reconstruct themselves. They know where they came from.
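A minimal sketch of this, assuming a SparkContext named sc and a hypothetical flights.csv input file with a made-up "DELAYED" marker:

    # Nothing is read or computed yet -- these calls only record lineage.
    lines = sc.textFile("flights.csv")
    delayed = lines.filter(lambda line: "DELAYED" in line)

    # The action triggers the actual computation across the cluster.
    print(delayed.count())

    # If a partition is lost, Spark replays the recorded lineage
    # (textFile -> filter) to reconstruct just that partition.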
A quick overview of all operations and transformations on RDDs
Parse a CSV file, transform it using the map() operation, and create Flight objects on the fly.
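A sketch of the idea with a hypothetical three-column schema (the real dataset has many more fields):

    from collections import namedtuple

    # Hypothetical schema -- origin airport, destination airport, delay in minutes.
    Flight = namedtuple("Flight", ["origin", "dest", "delay"])

    def parse_flight(line):
        fields = line.split(",")
        return Flight(origin=fields[0], dest=fields[1], delay=float(fields[2]))

    flights = sc.textFile("flights.csv").map(parse_flight)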
Use the flights dataset to get interesting insights.
Cache RDDs in memory to optimize operations using persist()
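For instance, continuing the hypothetical flights sketch above:

    flights.persist()      # keep the parsed RDD in memory once materialized
    flights.count()        # the first action computes and caches it
    flights.filter(lambda f: f.delay > 0).count()   # reuses the cached data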
Use the aggregate() operation to calculate average flight delays in one step. Much more compact than map() and reduce().
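A sketch of the one-step average, reusing the hypothetical flights RDD from above; the accumulator is a (sum of delays, count) pair:

    sum_count = flights.aggregate(
        (0.0, 0),                                        # zero value
        lambda acc, f: (acc[0] + f.delay, acc[1] + 1),   # fold in one Flight
        lambda a, b: (a[0] + b[0], a[1] + b[1]))         # merge partition results

    avg_delay = sum_count[0] / sum_count[1]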
This is surprisingly simple!
See all of the RDD operations in action, using map(), reduce() and aggregate() to analyze airline data.
Pair RDDs are special types of RDDs where every record is a key-value pair. All normal actions and transformations apply to them, in addition to some special pair-only ones.
Pair RDDs are useful for getting information on a per-key basis: sales per city, delays per airport, etc.
Instead of 3 steps, use just one to get the average delay per airport (see the sketch below).
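A sketch of the one-step, per-key version using combineByKey(), again on the hypothetical flights RDD (aggregateByKey() would work just as well):

    pairs = flights.map(lambda f: (f.origin, f.delay))

    # One pass over the data: build a (sum, count) pair per airport.
    sum_counts = pairs.combineByKey(
        lambda d: (d, 1),                          # create a combiner from one value
        lambda acc, d: (acc[0] + d, acc[1] + 1),   # merge a value into a combiner
        lambda a, b: (a[0] + b[0], a[1] + b[1]))   # merge combiners across partitions

    avg_by_airport = sum_counts.mapValues(lambda p: p[0] / p[1])
    # sortBy() makes it easy to rank airports by average delay
    print(avg_by_airport.sortBy(lambda kv: -kv[1]).take(10))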
Sort RDDs easily
Looking up airport descriptions in a pair RDD can be done in many ways; understand how each works.
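A sketch of three common approaches, assuming a hypothetical airport_codes pair RDD of (code, description) records:

    # 1. Driver-side lookup: runs a job per call, fine for a handful of keys.
    airport_codes.lookup("SFO")

    # 2. Broadcast a small table to every node and do local dictionary lookups.
    desc_map = sc.broadcast(airport_codes.collectAsMap())
    flights.map(lambda f: (desc_map.value.get(f.origin), f.delay))

    # 3. A full shuffle-based join, appropriate when both sides are large.
    flights.map(lambda f: (f.origin, f.delay)).join(airport_codes)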
Analyze airlines data with the help of Pair RDDs.
Accumulators are special variables that allow the main driver program to collect information from the nodes on which the actual processing takes place.
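A sketch that reuses the hypothetical parse_flight() helper from above to count malformed input rows:

    bad_lines = sc.accumulator(0)

    def parse_or_skip(line):
        try:
            return [parse_flight(line)]
        except (IndexError, ValueError):
            bad_lines.add(1)     # workers can only add; the driver reads the value
            return []

    flights = sc.textFile("flights.csv").flatMap(parse_or_skip)
    flights.count()              # an action must run before the value is populated
    print("malformed rows:", bad_lines.value)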
Spark is more than just a Read-Evaluate-Print Loop (REPL) environment; it can run long-running programs as well.
How does Spark submit jobs for distributed processing? How does the scheduler work? What does the cluster manager do? All this and more in this behind-the-scenes look.
MapReduce is a powerful paradigm for distributed processing. Many tasks lend themselves well to this model and Spark has transformations which deal with this beautifully.
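The canonical example is word count; a minimal PySpark sketch over a hypothetical logs.txt file:

    counts = (sc.textFile("logs.txt")
                .flatMap(lambda line: line.split())    # the "map" phase
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))      # the "reduce" phase
    print(counts.take(5))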
Spark works with Java as well. If that is your language of choice then you have reason to rejoice.
Pair RDDs in Java have to be created explicitly; an RDD of tuples is not automatically a pair RDD.
Using spark-submit with Java code.
Maven is a prerequisite for compiling and building your Java JARs for Spark.
This will be way simpler than the explanation.
Optimize the algorithm by making joins more performant
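One standard trick is to pre-partition the side of the join that never changes, so it is not reshuffled on every iteration. A rough sketch with toy link data, mirroring the classic PageRank optimization (not necessarily the exact code in the video):

    # 'links' never changes, so partition it once and keep it in memory.
    links = sc.parallelize([("a", ["b", "c"]), ("b", ["a"]), ("c", ["a"])]) \
              .partitionBy(8) \
              .persist()
    ranks = links.mapValues(lambda _: 1.0)

    for _ in range(10):
        # 'links' stays put; only the small 'ranks' RDD moves over the network.
        contribs = links.join(ranks).flatMap(
            lambda kv: [(dest, kv[1][1] / len(kv[1][0])) for dest in kv[1][0]])
        ranks = contribs.reduceByKey(lambda a, b: a + b) \
                        .mapValues(lambda r: 0.15 + 0.85 * r)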
Pretend your data is in a relational database by using DataFrames. DataFrames are also RDDs under the hood, so you get the best of both worlds!
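A sketch using the Spark 1.x-style SQLContext API, promoting the hypothetical flights RDD to a DataFrame and dropping back down again:

    from pyspark.sql import Row, SQLContext

    sqlContext = SQLContext(sc)

    # Promote the RDD of Flight tuples to a DataFrame...
    df = sqlContext.createDataFrame(flights.map(
        lambda f: Row(origin=f.origin, dest=f.dest, delay=f.delay)))

    # ...query it like a relational table...
    df.groupBy("origin").avg("delay").show()

    # ...and get back the underlying RDD whenever you need it.
    print(df.rdd.take(3))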
This is a family of algorithms that make recommendations based on user data and user preferences.
One type of collaborative filtering algorithm is latent factor analysis. There is some math here, but don't worry: MLlib takes care of all of it for you.
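A sketch with MLlib's ALS implementation and made-up (user, artist, play count) triples; trainImplicit() is the variant suited to implicit feedback such as play counts:

    from pyspark.mllib.recommendation import ALS, Rating

    ratings = sc.parallelize([Rating(1, 100, 40), Rating(1, 200, 5),
                              Rating(2, 100, 3),  Rating(2, 300, 60)])

    # rank is the number of latent factors to learn per user and per artist.
    model = ALS.trainImplicit(ratings, rank=10, iterations=5)

    # Top 3 artist recommendations for user 1.
    print(model.recommendProducts(1, 3))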
Let's write a recommendation engine for music services
The code in Spark is surprisingly simple.
Spark can process streaming data in near real time using DStreams.
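A minimal sketch, assuming log lines arrive on a local socket (the host, port and "ERROR" marker are all made up):

    from pyspark.streaming import StreamingContext

    ssc = StreamingContext(sc, batchDuration=5)       # 5-second micro-batches
    lines = ssc.socketTextStream("localhost", 9999)

    # Each micro-batch arrives as an RDD; the usual transformations apply.
    errors = lines.filter(lambda line: "ERROR" in line)
    errors.pprint()

    ssc.start()
    ssc.awaitTermination()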
A script to parse logs in real time
Stateful transformations allow cumulative results across a stream using a sliding window.
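Continuing the streaming sketch above, a rough example with reduceByKeyAndWindow(); stateful and windowed operations need checkpointing, and must be wired up before ssc.start() is called:

    ssc.checkpoint("checkpoint-dir")                  # hypothetical directory

    # Assume the status code is the first whitespace-separated field.
    codes = lines.map(lambda line: (line.split()[0], 1))

    # Counts per code over the last 60 seconds, recomputed every 10 seconds.
    windowed = codes.reduceByKeyAndWindow(
        lambda a, b: a + b,          # add batches entering the window
        lambda a, b: a - b,          # subtract batches leaving the window
        windowDuration=60, slideDuration=10)
    windowed.pprint()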
Dioworks is an e-learning design company focused on using technology as an enabler to make learning easy, engaging and effective. Premised on innovative designs, pedagogy and research, we provide quality learning experiences for learners globally. Dioworks offers bespoke solutions for organisations to integrate learning, training and assessment of work-based competencies via blended learning strategies. We are also the local partner to Udemy in Singapore.
More specifically, we combine the strengths of Classroom-Facilitated Learning, Massive Open Online Courses (MOOCs) in partnership with UDEMY Inc, and our "Kinetic Coach" automated response training solution to achieve learning outcomes.