Real World Spark 2 - Interactive Python pyspark Core

Build a Vagrant Python pyspark cluster and Code/Monitor against Spark 2 Core. The modern cluster computation engine.
3.5 (2 ratings)
119 students enrolled
Created by Toyin Akin
Last updated 1/2017
English
Current price: $10 Original price: $90 Discount: 89% off
30-Day Money-Back Guarantee
Includes:
  • 3 hours on-demand video
  • 3 Articles
  • 2 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Run a single command on your desktop, go for a coffee, and come back to a running distributed environment ready for cluster deployment
  • Ability to automate the installation of software across multiple Virtual Machines
  • Code in Python against Spark: transformations, actions, and Spark monitoring
Requirements
  • Basic programming or scripting experience is required.
  • You will need a desktop PC and an Internet connection. The course is created with Windows in mind.
  • The software needed for this course is freely available.
  • Optional: This course builds on top of my previous course - "Real World Vagrant - Build an Apache Spark Development Env! - Toyin Akin"
  • You will require a computer with virtualization chipset support (VT-x). Most computers purchased in the last five years should be good enough.
  • Optional: Some exposure to Linux and/or a Bash shell environment
  • A 64-bit Windows operating system is required (Windows 7 or above is recommended)
  • This course is not recommended if you have no desire to work with distributed computing
Description

Note: This course is built on top of the "Real World Vagrant - Build an Apache Spark Development Env! - Toyin Akin" course. If you do not have a Spark environment already installed (within a VM or directly installed), take that course first.

Spark’s Python shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. Start it by running the following in a bash terminal inside the built Virtual Machine:

pyspark

Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from collections, from Hadoop InputFormats (such as HDFS files), or by transforming other RDDs.
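
As a minimal sketch (inside the pyspark shell, where the SparkContext is predefined as sc; the file path is illustrative):

# sc is predefined by the pyspark shell.

# Create an RDD from a local Python collection
nums = sc.parallelize([1, 2, 3, 4, 5])

# Transformation: lazily describes a new RDD, nothing runs yet
squares = nums.map(lambda x: x * x)

# Action: triggers the computation and returns results to the driver
print(squares.collect())  # [1, 4, 9, 16, 25]

# Create an RDD from a text file (illustrative path inside the VM)
lines = sc.textFile("file:///usr/local/spark/README.md")
print(lines.count())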

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

  • A list of scheduler stages and tasks
  • A summary of RDD sizes and memory usage
  • Environmental information
  • Information about the running executors
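
As a sketch of how to pick a different UI port in a standalone script, spark.ui.port is the standard setting; the app name and numbers here are illustrative:

from pyspark import SparkConf, SparkContext

# Standalone script; in the pyspark shell a SparkContext already exists as sc.
# spark.ui.port moves the web UI off its default port of 4040.
conf = SparkConf().setAppName("monitoring-demo").set("spark.ui.port", "4050")
sc = SparkContext(conf=conf)

# A small job with a shuffle, so the UI has more than one stage to show
rdd = sc.parallelize(range(1, 1000001))
totals = rdd.map(lambda x: (x % 10, x)).reduceByKey(lambda a, b: a + b)
print(totals.collect())

# While the application runs, browse to http://<driver-host>:4050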


Why Apache Spark ...

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. Apache Spark has an advanced DAG execution engine that supports acyclic data flow and in-memory computing. Apache Spark offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python and R shells. Apache Spark can combine SQL, streaming, and complex analytics.
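
A small illustration of the in-memory point (again in the pyspark shell, with an illustrative file path): caching an RDD keeps it in executor memory, so repeated actions reuse it instead of recomputing it.

# cache() marks the RDD to be kept in executor memory after its first use
words = sc.textFile("file:///usr/local/spark/README.md") \
          .flatMap(lambda line: line.split())
words.cache()

print(words.count())                                  # computes and caches
print(words.filter(lambda w: w == "Spark").count())   # reuses the cached data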

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
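
As a sketch of that combination in a Spark 2 pyspark shell (where both sc and the SparkSession spark are predefined), core RDDs and the SQL/DataFrames library mix in one application:

from pyspark.sql import Row

# Start from a plain RDD, then hand it to the SQL library as a DataFrame
people = sc.parallelize([("alice", 34), ("bob", 45)]) \
           .map(lambda p: Row(name=p[0], age=p[1]))
df = spark.createDataFrame(people)

# Query it with SQL in the same application
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 40").show()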

Who is the target audience?
  • Software engineers who want to expand their skills into the world of distributed computing
  • Developers / Data Scientists who want to write and test their code against Spark in Python
Curriculum For This Course (21 Lectures, 03:08:46)
Introduction to Python, Spark Core via pyspark (2 Lectures, 07:56)
  • A quick tour of Python pyspark (Preview, 07:56)
  • Suggested Spark Udemy curriculum courses to follow. You do not need to take/purchase the first three courses if you already have Spark installed. (Preview, 00:00)

Author, Equipment and Compensation (4 Lectures, 25:57)
  • My experience within the Enterprise (Preview, 11:28)
  • Spark job compensation for those in this field (Preview, 07:09)
  • Memory Requirements (00:17)
  • Recommended Hardware for Spark and Hadoop labs ... (07:03)

Setup the Environment (5 Lectures, 35:33)
  • Resource files for the course (00:30)
  • Spark setup (04:03)
  • Walking through the Base Vagrant Spark Box (16:40)
  • Upgrade and Package the Vagrant Box to Spark 2 (11:28)
  • Register the updated Vagrant Spark Box (02:52)

Interact with Spark Core (Python) (9 Lectures, 01:57:39)
  • Boot up and Walkthrough of the pyspark Python Environment (Preview, 15:36)
  • Configure and Startup a Spark Environment for Distributed Computing (19:44)
  • Python Spark RDD, Transformations, Actions and Monitoring I (19:43)
  • Python Spark RDD, Transformations, Actions and Monitoring II (11:05)
  • Python Spark RDD, Transformations, Actions and Monitoring III (11:46)
  • Python Spark RDD, Transformations, Actions and Monitoring IV (14:13)
  • Python Spark RDD, Transformations, Actions and Monitoring V (07:38)
  • Python Spark RDD, Transformations, Actions and Monitoring VI (11:12)
  • Python Spark RDD, Transformations, Actions and Monitoring VII (06:42)

Conclusion (1 Lecture, 01:42)
  • Conclusion (01:42)
About the Instructor
Toyin Akin
3.8 Average rating
135 Reviews
1,374 Students
15 Courses
Big Data Engineer, Capital Markets FinTech Developer

I spent 6 years at Royal Bank of Scotland and 5 years at the investment bank BNP Paribas, developing and managing Interest Rate Derivatives services as well as engineering and deploying In-Memory DataBases (Oracle Coherence), NoSQL, and Hadoop clusters (Cloudera) into production.

In 2016, I left to start my own training company, POC-D ("Proof Of Concept - Delivered"), which focuses on delivering training on IMDB (In-Memory Database), NoSQL, BigData and DevOps technology.

From Q3 2017, this will also include FinTech training in Capital Markets using Microsoft Excel (Windows), JVM languages (Java/Scala) as well as .NET (C#, VB.NET, C++/CLI, F# and IronPython).

I have a YouTube channel publishing snippets of my videos. These are not courses, simply ad-hoc videos discussing various distributed computing ideas.

Check out my website and/or YouTube for more info

See you inside ...