Best Hands-on Big Data Practices with PySpark & Spark Tuning
What you'll learn
- Understand Apache Spark’s framework, execution model, and programming model for developing Big Data systems
- Learn step-by-step hands-on PySpark practices on structured, unstructured and semi-structured data using RDD, DataFrame and SQL
- Learn how to set up and configure Spark on both a free cloud-based environment and a desktop computer
- Build simple to advanced Big Data applications addressing different Big Data characteristics (volume, variety, veracity) through real case studies
- Investigate and apply optimization and performance-tuning methods to manage data skewness and prevent spill
- Investigate and apply Adaptive Query Execution (AQE) to optimize Spark SQL query execution at runtime (a minimal configuration sketch follows this list)
- Investigate and be able to explain lazy evaluation (narrow vs. wide transformations) and the internal workings of Spark (illustrated below)
- Build Spark SQL applications that connect to databases via JDBC (Java Database Connectivity); see the JDBC sketch below
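To make the tuning items above concrete, here is a minimal sketch of the AQE and skew-handling settings involved. The configuration keys are standard Spark 3.x options; the application name is a placeholder, and this is an illustration rather than the course's own code.

```python
from pyspark.sql import SparkSession

# Enable Adaptive Query Execution (AQE) so Spark re-optimizes SQL plans at
# runtime: splitting skewed join partitions and coalescing tiny shuffle
# partitions, a common defense against skew-driven spill.
spark = (
    SparkSession.builder
    .appName("aqe-skew-demo")  # placeholder name
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
```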
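Lazy evaluation and the narrow/wide distinction can likewise be shown in a few lines. This sketch assumes an existing SparkSession and made-up data; `toDebugString()` merely prints the lineage Spark has recorded before any job runs.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
rdd = spark.sparkContext.parallelize([("a", 1), ("b", 2), ("a", 3)])

filtered = rdd.filter(lambda kv: kv[1] > 1)        # narrow: per-partition, no shuffle
summed = filtered.reduceByKey(lambda x, y: x + y)  # wide: requires a shuffle

# Nothing has executed yet: transformations are lazy. Inspect the lineage...
print(summed.toDebugString().decode())
# ...and only an action actually triggers the job.
print(summed.collect())
```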
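And a hedged sketch of the JDBC pattern: the URL, table, and credentials below are placeholders, and the matching JDBC driver JAR must be available to Spark (for example via the `spark.jars` option).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read a database table into a DataFrame over JDBC (placeholder connection details).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/shop")  # hypothetical database
    .option("dbtable", "public.orders")                      # hypothetical table
    .option("user", "spark_user")
    .option("password", "secret")
    .load()
)

# Query it with Spark SQL through a temporary view.
orders.createOrReplaceTempView("orders")
spark.sql("SELECT COUNT(*) AS n FROM orders").show()
```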
Requirements
- Very basic Python and SQL
- If you are new to Python programming, don't worry: you can learn it for free through my YouTube channel. Subscribe and keep learning without any hassle
Description
In this course, students get hands-on PySpark practice through real case studies from academia and industry, learning to work interactively with massive data. Students will also tackle distributed-processing challenges such as data skewness and spill. We designed this course for anyone seeking to master Spark and PySpark and to spread the knowledge of Big Data analytics through real, challenging use cases.
We will work with Spark RDD, DataFrame (DF), and SQL to process huge volumes of semi-structured, structured, and unstructured data; a short sketch of the three APIs follows this paragraph. The learning outcomes and teaching approach in this course accelerate learning by identifying the skills most in demand in industry and understanding what Big Data analytics work requires.
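As an illustration (not the course's own material), here is the same aggregation expressed in all three APIs, on made-up data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
data = [("alice", 10), ("bob", 20), ("alice", 5)]

# RDD API: low-level, functional style.
totals_rdd = spark.sparkContext.parallelize(data).reduceByKey(lambda a, b: a + b)

# DataFrame API: declarative, optimized by the Catalyst engine.
df = spark.createDataFrame(data, ["name", "amount"])
totals_df = df.groupBy("name").sum("amount")

# SQL: the same query over a temporary view.
df.createOrReplaceTempView("payments")
totals_sql = spark.sql("SELECT name, SUM(amount) AS total FROM payments GROUP BY name")

print(totals_rdd.collect())
totals_df.show()
totals_sql.show()
```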
We will not only cover the details of the Spark engine for large-scale data processing, but also drill down into Big Data problems, letting users shift instantly from an overview of large-scale data to a more detailed, granular view using RDD, DF, and SQL in real-life examples. We will walk through the Big Data case studies step by step to achieve the aim of this course.
By the end of the course, you will be able to build Big Data applications addressing different Big Data characteristics (volume, variety, veracity), and you will be acquainted with best-in-class examples of Big Data problems solved with PySpark.
Who this course is for:
- Beginner, junior, and senior data developers who want to master Spark/PySpark and spread the knowledge of Big Data analytics
- Newcomers to Python programming, who can first pick up the basics for free through my YouTube channel
Instructor
Dr Amin Karami is a Senior Lecturer in the Department of Computer Science and Digital Technologies (CS&DT) at the University of East London (UEL), UK. He is the course leader for the MSc Big Data Technologies and the postgraduate (PG) academic leader at CS&DT. He is also a Senior Big Data Consultant and Developer in industry. His research interests include Big Data Technologies, Computational Intelligence, Blockchain, and Optimization Techniques.
He has carried out several international research collaborations, including with Uppsala University and the University of Skövde in Sweden and the Polytechnic University of Catalonia in Spain. He has secured several internal and external grants, such as Erasmus+ funding, the UEL Research Internship Scheme, and an award from Fondazione Rosselli for the AIC workshop in Turin, Italy, supported by projects TIN2013-47272-C2-2 and SGR-2014-881 from Barcelona. He has also given several invited talks and presentations, including at Hangzhou Dianzi University, China, and the IoT Technologies Conference (AIOTT) in Hong Kong.