A Crash Course In PySpark
- Python Familiarity, which can be learned through my 'No Nonsense Python' course
Spark is one of the most in-demand Big Data processing frameworks right now.
This course will take you through the core concepts of PySpark, enabling you to do most of the things you’d do in SQL or the Python Pandas library, that is:
Getting hold of data
Handling missing data and cleaning data up
Aggregating your data
Writing it back out
All of these things will enable you to leverage Spark on large datasets and start getting value from your data.
Let’s get started.
- People wanting to leverage their big data with Spark
- How this course is structured
- Introduction to our development environment
- Introduction to our dataset & dataframes
- Environment configuration code snippet
- Ingesting & Cleaning Data
- Answering our scenario questions
- Bringing data into dataframes
- Inspecting A Dataframe
- Handling Null & Duplicate Values
- Selecting & Filtering Data
- Applying Multiple Filters
- Running SQL on Dataframes
- Adding Calculated Columns
- Group By And Aggregation
- Writing Dataframe To Files
- Challenge Overview
- Challenge Solution
- Thanks for joining me to learn PySpark!
Hey guys! I am a data engineer by trade and specialize in Python, SQL, Spark, Hive, MongoDB and more. I've come to Udemy to make simple, short crash courses on these technologies, as I personally find longer courses too drawn out and often lose interest. The idea is to keep it short and sharp!
For loads of advanced Spark, Python & Big Data topics, please visit my website (the button on this page will take you there), where I talk about scaling up to enterprise-grade solutions.