Data Engineering on Google Cloud Platform
What you'll learn
- PySpark for ETL/batch processing on GCP, using BigQuery as the data warehousing component
- Automate and orchestrate Spark SQL batch jobs using Apache Airflow and Google Workflows
- Sqoop for data ingestion from Cloud SQL, with Airflow automating the batch ETL
- The difference between event-time and processing-time data transformations
- PySpark Structured Streaming: real-time data streaming and transformations
- Save raw real-time streaming data as external Hive tables on Dataproc and run ad-hoc queries with HiveQL
- Run Hive and Spark SQL jobs on Dataproc and automate micro-batching and transformations using Airflow
- PySpark Structured Streaming: handling late data with watermarking and event-time processing (see the sketch after this list)
- Working with different file formats, Avro and Parquet, and the scenarios that call for each (also illustrated below)
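To make the event-time and watermarking ideas concrete, here is a minimal sketch of late-data handling in PySpark Structured Streaming. It uses Spark's built-in `rate` test source purely for illustration; the column name `event_ts`, the window size, and the lateness threshold are assumptions, not course code.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("late-data-demo").getOrCreate()

# Built-in test source: emits (timestamp, value) rows at a fixed rate.
events = (
    spark.readStream
    .format("rate")
    .option("rowsPerSecond", 10)
    .load()
    .withColumnRenamed("timestamp", "event_ts")  # treat as the event time
)

# Watermark: accept records arriving up to 10 minutes late, then count
# events into 5-minute event-time windows. State for windows older than
# the watermark is dropped, which bounds memory use.
counts = (
    events
    .withWatermark("event_ts", "10 minutes")
    .groupBy(window(col("event_ts"), "5 minutes"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")  # emit only windows updated in each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```

The key distinction: the windows are computed from `event_ts` (when the event happened), not from when Spark happened to process the row, which is exactly the event-time versus processing-time difference the course covers.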
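Similarly, here is a minimal sketch contrasting the two file formats: Avro is row-oriented and suits write-heavy ingestion layers, while Parquet is columnar and suits analytical scans. All GCS paths are hypothetical; spark-avro is typically bundled on Dataproc images, but elsewhere you may need `--packages org.apache.spark:spark-avro_2.12:<version>`.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

df = spark.read.json("gs://my-bucket/raw/events/")  # hypothetical input path

# Landing layer: Avro writes whole rows cheaply and carries its schema.
df.write.format("avro").save("gs://my-bucket/staging/events_avro")

# Analytics layer: Parquet lets Spark scan only the columns a query needs.
df.write.parquet("gs://my-bucket/warehouse/events_parquet")
spark.read.parquet("gs://my-bucket/warehouse/events_parquet").printSchema()
```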
Requirements
- Basic Python skills
- Comfort with basic Linux/Bash commands
- A basic understanding of Spark (with Python) and of how Hadoop works
- A Google Cloud account, or sign up for a free trial account
- Comfort setting up the Google Cloud SDK, regardless of operating system
- A desire to learn and an eagerness to explore the relevant topics
Google Cloud Platform is catching up fast, and many companies have already started moving their infrastructure to GCP. This course provides practical solutions to real-world data engineering use cases on the cloud. It is designed around the end-to-end lifecycle of a typical big data ETL project, covering both batch processing and real-time streaming and analytics.
Covering the most important components of any batch processing or streaming job, this course includes:
- Writing ETL jobs using PySpark from scratch (a batch ETL sketch follows this list)
- Storage components on GCP (GCS and Dataproc HDFS)
- Loading data into the data warehousing tool on GCP (BigQuery)
- Handling and writing data orchestration and dependencies using Apache Airflow (Cloud Composer) in Python from scratch (a sample DAG also follows)
- Batch data ingestion using Sqoop, Cloud SQL, and Apache Airflow
- Real-time data streaming and analytics using the latest API, Spark Structured Streaming with Python
- Micro-batching using PySpark Streaming and Hive on Dataproc
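As a taste of the batch side, here is a minimal sketch of a Dataproc-style PySpark job that reads raw CSVs from GCS and loads the cleaned result into BigQuery via the spark-bigquery connector. The bucket, dataset, table, and column names are illustrative assumptions, and the connector must be available on the cluster (bundled on recent Dataproc images, or added with `--jars`).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("gcs-to-bq-etl").getOrCreate()

# Extract: raw CSV files landed in a GCS bucket (hypothetical path).
raw = spark.read.option("header", True).csv("gs://my-bucket/raw/orders/")

# Transform: dedupe, type-cast, and filter out bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", col("amount").cast("double"))
       .filter(col("amount") > 0)
)

# Load: append into a BigQuery table, staging the data through GCS.
(
    cleaned.write.format("bigquery")
    .option("table", "my_project.sales.orders")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("append")
    .save()
)
```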
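And here is a minimal Airflow (Cloud Composer) sketch that schedules that job daily using the Google provider's Dataproc operator. The project, region, cluster name, and GCS paths are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

# Dataproc job spec pointing at the PySpark script above (hypothetical URI).
PYSPARK_JOB = {
    "reference": {"project_id": "my-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/gcs_to_bq_etl.py"},
}

with DAG(
    dag_id="daily_batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # one run per day
    catchup=False,
) as dag:
    submit_etl = DataprocSubmitJobOperator(
        task_id="submit_pyspark_etl",
        job=PYSPARK_JOB,
        region="us-central1",
        project_id="my-project",
    )
```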
The coding tutorials and problem statements in this course are comprehensive, and they will give you enough confidence to take on new challenges in the big data / Hadoop ecosystem on the cloud and to approach problem statements and job interviews without inhibition.
This course uses Ubuntu 18.04 as the local operating system, though most of the code runs and is triggered on the cloud, so the local operating system does not matter for succeeding in this course. It does, however, expect you to be experienced enough to set up the Google Cloud SDK, Python, and a GCP account on your own machine.
Who this course is for:
- Any techie who wants hands-on project experience with end-to-end batch data processing and real-time streaming
- Aspiring data engineers who find it hard to set up and work hands-on with distributed processing
- Anyone preparing for a data engineering interview who wants hands-on expertise
I am a business-oriented data architect with extensive experience in software development, distributed processing, and data engineering on the cloud. I have worked on cloud platforms such as AWS and GCP, as well as on-prem Hadoop clusters. I give seminars on distributed processing with Spark, real-time streaming and analytics, and best practices for ETL and data governance. I am also a passionate coder who loves designing and building optimal data pipelines for robust data processing and streaming solutions.