Writing production-ready ETL pipelines in Python / Pandas
What you'll learn
- How to write professional ETL pipelines in Python.
- Steps to write production level Python code.
- How to apply functional programming in Data Engineering.
- How to design clean object-oriented code.
- How to use a meta file for job control (see the sketch after this list).
- Coding best practices for Python in ETL/Data Engineering.
- How to implement a pipeline in Python that extracts data from an AWS S3 source, transforms it, and loads it to another AWS S3 target.
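To make the meta-file idea concrete, here is a minimal, hypothetical sketch. It assumes the meta file is a CSV stored in the target bucket with a column listing every source date a previous run has already processed; the object key, column name and helper function are illustrative, not the course's final design.

```python
# Hypothetical sketch of meta-file job control. Assumptions: the meta file
# is a CSV in the target bucket, with a column listing every source date a
# previous run has already processed. Names are illustrative.
from datetime import date, timedelta
from io import StringIO

import boto3
import pandas as pd

META_KEY = "meta_file.csv"      # assumed object key of the meta file
META_DATE_COL = "source_date"   # assumed column holding processed dates


def dates_to_process(bucket_name: str, first_date: date) -> list[date]:
    """Return all dates from first_date up to today that the meta file
    does not yet list as processed."""
    obj = boto3.resource("s3").Object(bucket_name, META_KEY).get()
    meta_df = pd.read_csv(StringIO(obj["Body"].read().decode("utf-8")))
    processed = set(pd.to_datetime(meta_df[META_DATE_COL]).dt.date)
    wanted = [first_date + timedelta(days=n)
              for n in range((date.today() - first_date).days + 1)]
    return [d for d in wanted if d not in processed]
```

With this pattern, a scheduled run only picks up the dates that are still missing, and re-running the job after a failure does not reprocess data that already landed in the target.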
Requirements
- Basic Python and Pandas knowledge is desirable.
- Basic ETL and AWS S3 knowledge is desirable.
Description
This course walks through every step of writing an ETL pipeline in Python, from scratch to production, using tools such as Python 3.9, Jupyter Notebook, Git and GitHub, Visual Studio Code, Docker and Docker Hub, and the Python packages Pandas, boto3, pyyaml, awscli, jupyter, pylint, moto, coverage and memory-profiler.
Two different approaches to writing code in the Data Engineering field will be introduced and applied: functional and object-oriented programming.
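To make the contrast concrete, here is a small, hypothetical sketch of the same S3 listing step in both styles; the class and function names are illustrative, not the course's actual design.

```python
# Hypothetical contrast of the two styles on the same task; names are
# illustrative, not the course's final design.
import boto3


# Functional style: a plain, stateless function per pipeline step.
def list_keys(bucket_name: str, prefix: str) -> list[str]:
    """List all object keys in a bucket under the given prefix."""
    bucket = boto3.resource("s3").Bucket(bucket_name)
    return [obj.key for obj in bucket.objects.filter(Prefix=prefix)]


# Object-oriented style: S3 access wrapped in a connector class, so the
# resource is created once and the interface can be mocked in unit tests.
class S3BucketConnector:
    def __init__(self, bucket_name: str):
        self._bucket = boto3.resource("s3").Bucket(bucket_name)

    def list_keys(self, prefix: str) -> list[str]:
        return [obj.key for obj in self._bucket.objects.filter(Prefix=prefix)]
```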
Best practices in developing Python code will be introduced and applied (a short configuration and logging sketch follows this list):
- design principles
- clean coding
- virtual environments
- project/folder setup
- configuration
- logging
- exception handling
- linting
- dependency management
- performance tuning with profiling
- unit testing
- integration testing
- dockerization
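As a taste of how two of these practices fit together, here is a minimal, hypothetical sketch that loads a YAML configuration file and uses an assumed logging section in dictConfig format to set up logging; the file path and section name are assumptions, not the course's actual layout.

```python
# Illustrative sketch only: load a YAML config file and use its (assumed)
# "logging" section, in dictConfig format, to configure standard logging.
import logging
import logging.config

import yaml


def load_config(path: str) -> dict:
    """Parse the YAML configuration file into a dictionary."""
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f)


config = load_config("configs/etl_config.yml")   # hypothetical path
logging.config.dictConfig(config["logging"])     # assumes a dictConfig-style section
logger = logging.getLogger(__name__)
logger.info("Configuration loaded, starting ETL job.")
```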
What is the goal of this course?
In the course we are going to use the Xetra dataset. Xetra stands for Exchange Electronic Trading, and it is the trading platform of the Deutsche Börse Group. The dataset is derived from Deutsche Börse's trading system in near real time on a minute-by-minute basis and saved in an AWS S3 bucket that is publicly available free of charge.
The ETL pipeline we are going to create will extract the Xetra dataset from the AWS S3 source bucket on a scheduled basis, create a report using transformations, and load the transformed data to another AWS S3 target bucket.
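Condensed to its essentials, the flow might look like the following hypothetical sketch. The target bucket name, date prefix and aggregations are placeholders, and the source columns follow the public Xetra dataset schema; this is not the course's final implementation.

```python
# Condensed, hypothetical sketch of the extract-transform-load flow.
# Placeholders: target bucket name, date prefix, report aggregations.
from io import StringIO

import boto3
import pandas as pd

s3 = boto3.resource("s3")
src_bucket = s3.Bucket("deutsche-boerse-xetra-pds")   # public Xetra source bucket
trg_bucket = s3.Bucket("my-xetra-report-target")      # hypothetical target bucket

# Extract: read and concatenate all CSV objects for one trading day.
frames = [
    pd.read_csv(StringIO(obj.get()["Body"].read().decode("utf-8")))
    for obj in src_bucket.objects.filter(Prefix="2022-01-03/")
]
df = pd.concat(frames, ignore_index=True)

# Transform: a placeholder daily aggregation per instrument.
report = df.sort_values("Time").groupby("ISIN").agg(
    opening_price=("StartPrice", "first"),
    closing_price=("EndPrice", "last"),
    traded_volume=("TradedVolume", "sum"),
)

# Load: write the report as CSV to the target bucket.
buf = StringIO()
report.to_csv(buf)
trg_bucket.put_object(Key="xetra_daily_report_2022-01-03.csv", Body=buf.getvalue())
```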
The pipeline will be written so that it can be deployed easily to almost any production environment that can handle containerized applications. The production environment we are going to write the ETL pipeline for consists of a GitHub code repository, a Docker Hub image repository, an execution platform such as Kubernetes, and an orchestration tool such as the container-native Kubernetes workflow engine Argo Workflows or Apache Airflow.
So what can you expect in the course?
The course consists primarily of practical, interactive lessons in which you code and implement the pipeline yourself, with theory lessons where needed. Furthermore, you will get the Python code for each lesson in the course material, the whole project on GitHub, and a ready-to-use Docker image with the application code on Docker Hub.
PowerPoint slides are available for download for each theory lesson, along with useful links for each topic and step where you can find more information and dive deeper.
Who this course is for:
- Data engineers, data scientists and developers who want to write professional, production-ready data pipelines in Python.
- Everyone who is interested in writing data pipelines in Python that are ready for production.
Instructor
There are so many cool tools out there, especially in the small/large/big data space. One lifetime is not enough to know them all and be proficient with each. But even with a small, well-chosen toolset, you can implement great projects with real value.
In 2012 I graduated from university with a Diplom degree in mechatronics engineering, where programming, especially in the embedded domain, played an important role. During my first years as an engineer, I increasingly discovered my passion for Python, especially for small/large/big data.
After a few hobby projects, in 2016 I took the step of working professionally in this field. I have now been working successfully as a data engineer for several years and have had the opportunity to contribute to great projects.
I would like to pass this knowledge on through courses in data engineering and data science with a strong focus on practice.