Apache Flink | A Real Time & Hands-On course on Flink

An A-to-Z, in-depth & HANDS-ON practical course on Apache Flink, a stream processing technology that goes beyond Spark
Bestseller
4.5 (170 ratings)
1,441 students enrolled
Last updated 11/2018
English
This course includes
  • 6 hours on-demand video
  • 34 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Assignments
  • Certificate of Completion
What you'll learn
  • Learn Apache Flink, Apache's cutting-edge stream processing framework.
  • Learn a technology that is much faster than Hadoop and Spark for stream processing.
  • Understand the working of each and every component of Apache Flink with HANDS-ON practicals.
  • Learn even those concepts that are not clearly explained in Flink's official documentation.
  • Solve real-time business case studies using Apache Flink.
  • Datasets and Flink code used in the lectures are available in the resources tab, saving you typing effort.
Requirements
  • Basic knowledge of Distributed Frameworks.
  • Basic knowledge of OOP (object-oriented programming).
  • Everything else about Apache Flink is covered in this course with practicals.
Description

Apache Flink is the successor to Hadoop and Spark. It is the next-generation engine for stream processing. If Hadoop is 2G and Spark is 3G, then Apache Flink is the 4G of Big Data stream processing frameworks. Spark was never a true stream processing framework, only a makeshift way to do it, whereas Apache Flink is a TRUE streaming engine with the added ability to perform batch, graph, and table processing and to run machine learning algorithms.

Apache Flink is the latest Big Data technology and is rapidly gaining momentum in the market. Just as Spark replaced Hadoop, Flink may well replace Spark in the near future.

Demand for Flink in the market is already swelling. Big companies like Capital One (banking), Alibaba (e-commerce), and Uber (transportation) have already started using Apache Flink to process their real-time data, and thousands of others are diving in.

What's included in the course?

  • Complete Apache Flink concepts explained from scratch to real-time implementation.
  • Each and every Apache Flink concept is explained with HANDS-ON Flink code.
  • Also covers those concepts whose explanation is not very clear even in the official Flink documentation.
  • To help non-Java developers, all Flink Java code is explained line by line so that even a non-technical person can follow it.
  • Flink code and datasets used in the lectures are attached to the course for your convenience.

Who this course is for:
  • Students who want to learn Apache Flink from SCRATCH to its Live Project Implementation.
  • Anyone new to stream processing who wants to learn a stream processing framework that goes beyond Spark.
  • Software engineers who feel they missed an early opportunity to get into Hadoop & Spark.
  • Hadoop & Spark developers who want to upgrade themselves to Apache's latest Big Data streaming engine.
Course content
55 lectures 05:45:56
+ Introduction to Flink
8 lectures 49:19

This is the pilot lecture to get you familiar with Flink. The video explains what Apache Flink is and what functionality it provides.

Preview 04:14
Announcement
01:01

This lecture will tell you the difference between stream processing and batch processing.

Preview 04:14

A lecture on the difference between Hadoop and the streaming technologies, i.e. Spark and Flink. It also explains the similarities between Spark and Flink.

Hadoop Vs Streaming Engines (Spark & Flink)
05:55

What is the difference between Spark and Flink, and how is Flink better than Spark?

Spark Vs Flink
11:19

This video explains the architecture of Apache Flink and the different APIs Flink provides for batch, stream, graph, and table processing. It covers the full ecosystem of Apache Flink.

Flink Architecture/Ecosystem
02:55

Learn Apache Flink's programming model. You will see how a Flink program fits into its architecture.

Flink's programming model | Flow of a Flink program
12:00

Install Flink on your local system.

Installing Flink
07:41
+ Transformation operations of DataSet API
6 lectures 39:07
Default Code structure of a Flink Program
04:49

This lecture gives a line-by-line explanation of a program that counts names starting with 'N', while explaining the map, flatMap, and filter operations, various data source functions, groupBy(), sum(), etc.

WordCount using Map, Flatmap, Filter, groupby | Part 1
11:31
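For reference, a minimal sketch of what such a DataSet program could look like (the input path and the "starts with N" filter are illustrative assumptions, not the course's exact code):

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.util.Collector;

    public class NameCount {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // hypothetical input file with one name per line
            DataSet<String> names = env.readTextFile("/path/to/names.txt");

            DataSet<Tuple2<String, Integer>> counts = names
                // keep only names starting with "N"
                .filter(name -> name.startsWith("N"))
                // emit each name as a (name, 1) pair
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String name, Collector<Tuple2<String, Integer>> out) {
                        out.collect(new Tuple2<>(name, 1));
                    }
                })
                // group by the name (field 0) and sum the counts (field 1)
                .groupBy(0)
                .sum(1);

            counts.print();   // print() triggers execution for a DataSet program
        }
    }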

This lecture continues the line-by-line explanation of the program that counts names starting with 'N', covering the map, flatMap, and filter operations, various data source functions, groupBy(), sum(), etc.

WordCount using Map, Flatmap, Filter, groupby | Part 2
03:48

This video shows how to perform an inner join using Flink, which provides a join operation for exactly this.

Joins - Inner join
07:00
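A minimal sketch of an inner join on the DataSet API (the persons/locations datasets and their fields are assumptions of mine, and env is an ExecutionEnvironment as in the word-count sketch above):

    // persons = (personId, name), locations = (personId, city)
    DataSet<Tuple2<Integer, String>> persons = env.fromElements(
            new Tuple2<>(1, "Alice"), new Tuple2<>(2, "Bob"));
    DataSet<Tuple2<Integer, String>> locations = env.fromElements(
            new Tuple2<>(1, "Berlin"), new Tuple2<>(2, "Bangalore"));

    DataSet<Tuple3<Integer, String, String>> joined = persons
        .join(locations)
        .where(0)       // key field of the left dataset
        .equalTo(0)     // key field of the right dataset
        .with(new JoinFunction<Tuple2<Integer, String>, Tuple2<Integer, String>,
                               Tuple3<Integer, String, String>>() {
            @Override
            public Tuple3<Integer, String, String> join(Tuple2<Integer, String> person,
                                                        Tuple2<Integer, String> location) {
                return new Tuple3<>(person.f0, person.f1, location.f1);
            }
        });

    joined.print();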

In this video you will see how to perform a left outer join, right outer join, and full outer join using Flink.

Joins - Left, Right & Full Outer Join
05:30

Join hints are an exclusive feature of Flink. By passing enumeration constants we can tell Flink which join strategy to use. Flink provides us with 6 join hints.

Preview 06:29
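A short sketch of passing join hints, reusing the assumed persons/locations datasets from the inner-join sketch above:

    // JoinHint is JoinOperatorBase.JoinHint: OPTIMIZER_CHOOSES, BROADCAST_HASH_FIRST,
    // BROADCAST_HASH_SECOND, REPARTITION_HASH_FIRST, REPARTITION_HASH_SECOND, REPARTITION_SORT_MERGE
    persons.join(locations, JoinHint.BROADCAST_HASH_SECOND)   // broadcast the smaller second input
           .where(0)
           .equalTo(0);

    // shortcuts that hint at the relative input sizes
    persons.joinWithTiny(locations).where(0).equalTo(0);
    persons.joinWithHuge(locations).where(0).equalTo(0);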
+ DataStream API Operations
7 lectures 39:33

The DataStream API offers various types of data sources and data sinks. In this lecture we will look at those source and sink methods and learn what kind of data they read and write, and in what manner.

Data Sources & Sinks of Datastream API
10:06
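A minimal sketch of a few common DataStream sources and sinks (the host, port, and file paths are hypothetical examples):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // sources: a socket, a text file, or an in-memory collection
    DataStream<String> fromSocket = env.socketTextStream("localhost", 9999);
    DataStream<String> fromFile   = env.readTextFile("/path/to/input.txt");
    DataStream<String> fromList   = env.fromElements("a", "b", "c");

    // sinks: print to stdout or write to a text file
    fromSocket.print();
    fromList.writeAsText("/path/to/output");

    env.execute("Sources and sinks demo");   // DataStream jobs need an explicit execute()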

This is the pilot lecture for Apache Flink's DataStream API programs. The first program is a basic one, i.e. a word count of names starting with 'N'. The code will show you the similarities and differences between DataSet and DataStream API Flink programs.

First program using Datastream API
05:04

The reduce method is applied on keyed streams. It aggregates all the elements of a key.

Reduce Operation
06:41
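A minimal sketch of reduce on a keyed stream, assuming a hypothetical DataStream<Tuple2<String, Integer>> pairs of (word, count) records:

    DataStream<Tuple2<String, Integer>> totals = pairs
        .keyBy(0)
        .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                                  Tuple2<String, Integer> b) {
                return new Tuple2<>(a.f0, a.f1 + b.f1);   // same key, counts added up
            }
        });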

The fold operation of Apache Flink works much like the reduce operation; the difference is that, unlike the ReduceFunction interface, the FoldFunction interface can take different input and output type parameters.

Fold Operation
02:46
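A minimal sketch of fold on the same assumed keyed (word, count) pairs; fold exists in the Flink versions this course targets, though later releases deprecate it in favour of aggregate():

    DataStream<String> folded = pairs
        .keyBy(0)
        .fold("start", new FoldFunction<Tuple2<String, Integer>, String>() {
            @Override
            public String fold(String accumulator, Tuple2<String, Integer> value) {
                // input is a Tuple2 but the output is a String -- the types may differ
                return accumulator + "-" + value.f0;
            }
        });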

Apache Flink provides general aggregation operations such as min(), minBy(), max(), maxBy(), and sum().

Aggregation Operations in Flink
05:47
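A quick sketch of the built-in aggregations, again assuming a DataStream<Tuple2<String, Integer>> pairs keyed by the name field:

    KeyedStream<Tuple2<String, Integer>, Tuple> keyed = pairs.keyBy(0);

    keyed.sum(1);     // running sum of field 1 per key
    keyed.min(1);     // running minimum of field 1 (other fields not guaranteed)
    keyed.minBy(1);   // the full record holding the current minimum of field 1
    keyed.max(1);
    keyed.maxBy(1);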

The split operator of Apache Flink's DataStream API is used to split the incoming stream of data into 2 (or more) streams. The select method is then used to select data from the resulting SplitStream.

Split Operation
03:30
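A minimal sketch of split/select, assuming a hypothetical DataStream<Integer> numbers; split/select exist in the Flink versions this course targets, while newer releases replace them with side outputs:

    SplitStream<Integer> split = numbers.split(new OutputSelector<Integer>() {
        @Override
        public Iterable<String> select(Integer value) {
            // tag each element with the name of the output it belongs to
            return Collections.singletonList(value % 2 == 0 ? "even" : "odd");
        }
    });

    DataStream<Integer> evens = split.select("even");
    DataStream<Integer> odds  = split.select("odd");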

The iterate operator feeds elements of the data stream back through the pipeline again and again until they reach a desired result.

Iterate Operator
05:39
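A minimal sketch of an iteration, assuming a hypothetical DataStream<Long> someLongs: values keep getting 1 subtracted and are fed back until they drop to zero or below.

    IterativeStream<Long> iteration = someLongs.iterate();

    DataStream<Long> minusOne     = iteration.map(value -> value - 1);
    DataStream<Long> stillGreater = minusOne.filter(value -> value > 0);
    iteration.closeWith(stillGreater);                             // re-enter the loop

    DataStream<Long> done = minusOne.filter(value -> value <= 0);  // leaves the loop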
Cab data analysis
Assignment 1
1 question
+ Windows in Flink
12 lectures 58:21

This is the introductory lecture of the windows section. Windowing is a crucial concept in Apache Flink. Throughout the section you will learn the various types of built-in windows provided by Flink and how to code them in a program.

Introduction to Windowing
04:39

There are 2 types of window-assigner methods for windows in Apache Flink.

  • window()

  • windowAll()

windowAll() is for non-keyed streams and window() is for keyed streams.

Window Assigners
01:55
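A minimal sketch of the two variants, assuming a hypothetical DataStream<Tuple2<String, Integer>> pairs:

    // keyed stream: window() builds one window per key, evaluated in parallel
    pairs.keyBy(0)
         .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
         .sum(1);

    // non-keyed stream: windowAll() puts all elements in one window (parallelism 1)
    pairs.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(10)))
         .sum(1);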

There are various time notions for windows in Apache Flink: processing time, event time, and ingestion time.

Various Time Notions of Windows in Flink
03:36

A tumbling window is a time-based window. It can be created using the processing-time or event-time notion. This video shows how to implement tumbling windows in a Flink program.

Preview 06:37
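A minimal sketch of tumbling windows, reusing the assumed keyed pairs stream from above (the 5-second size is an arbitrary example):

    // processing-time tumbling windows of 5 seconds
    pairs.keyBy(0)
         .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
         .sum(1);

    // event-time variant (requires timestamps and watermarks to be assigned)
    pairs.keyBy(0)
         .window(TumblingEventTimeWindows.of(Time.seconds(5)))
         .sum(1);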

A sliding window is a time-based window. It can be created using the processing-time or event-time notion. This video shows how to implement sliding windows in a Flink program.

Sliding Windows Implementation
02:43

This video explains how to implement session windows in an Apache Flink program.

Session Windows Implementation
05:15
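A short sketch of session windows on the same assumed keyed pairs stream (the 30-second gap is an arbitrary example):

    // a session window closes after 30 seconds of inactivity on a key
    pairs.keyBy(0)
         .window(ProcessingTimeSessionWindows.withGap(Time.seconds(30)))
         .sum(1);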

This video explains how to implement global windows in an Apache Flink program.

Global Windows Implementation
03:41

Every window has a trigger attached that tells the window when to start processing. Apache Flink provides a few built-in triggers, but we can also create our own by overriding a few methods of the Trigger interface.

Triggers in Windows
07:47
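A minimal sketch of attaching a built-in trigger, on the same assumed keyed pairs stream (the count of 100 is an arbitrary example):

    // a GlobalWindow never fires on its own, so we attach a trigger:
    // fire and purge the window every 100 elements per key
    pairs.keyBy(0)
         .window(GlobalWindows.create())
         .trigger(PurgingTrigger.of(CountTrigger.of(100)))
         .sum(1);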

Evictors are components that allow us to keep only selected elements in a window.

Evictors for Windows
05:41
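A short sketch of a built-in evictor on the same assumed keyed pairs stream (the counts are arbitrary examples):

    // fire every 100 elements, but keep only the 10 most recent ones
    // in the window before the window function is applied
    pairs.keyBy(0)
         .window(GlobalWindows.create())
         .trigger(CountTrigger.of(100))
         .evictor(CountEvictor.of(10))
         .sum(1);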

What is a watermark in Apache Flink?

Watermarks, Late Elements & Allowed Lateness
08:41

This lecture explains how to actually generate watermarks for a window in Flink, using the assignTimestampsAndWatermarks method.

How to generate Watermarks
05:48
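A minimal sketch of assigning timestamps and watermarks, assuming event time is enabled on the environment and a hypothetical DataStream<Tuple3<String, Integer, Long>> events of (key, value, eventTimeMillis) records:

    DataStream<Tuple3<String, Integer, Long>> withTimestamps = events
        .assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Integer, Long>>(Time.seconds(5)) {
                @Override
                public long extractTimestamp(Tuple3<String, Integer, Long> element) {
                    return element.f2;   // event timestamp carried in the record itself
                }
            });

    withTimestamps
        .keyBy(0)
        .window(TumblingEventTimeWindows.of(Time.seconds(10)))
        .allowedLateness(Time.seconds(2))   // still accept elements up to 2 s after the window fires
        .sum(1);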
Recommendation
01:58
Quiz 1
5 questions
+ State, Checkpointing and Fault tolerance
11 lectures 01:07:55

Flink provides fault tolerance to its applications: upon any node failure, the application can be restored from exactly the point where it failed.

Flink provides fault tolerance using state and checkpointing, so this first lecture explains what a state is in Flink.

What is a State in Flink
05:48

Apache Flink does not take checkpoints by pausing processing at intervals or after a certain amount of data; instead it checkpoints using the Asynchronous Barrier Snapshotting algorithm.

Checkpointing and Barrier Snapshotting
08:38

Incremental checkpointing is a newer feature of Apache Flink, included from Flink 1.3. It gives better performance than conventional checkpointing.

Preview 04:18

State can be categorized into 2 types:

  • Operator State - Managed operator state and Raw operator state

  • Keyed State   - Managed keyed state and Raw keyed state

Types of States
02:37

What is Value State in Flink and how to implement it in a Flink program.

Preview 08:08
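A minimal sketch of a ValueState keeping a running count per key (class name, field names, and the keyed-stream usage are illustrative assumptions):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // must run on a keyed stream, e.g. pairs.keyBy(0).flatMap(new CountPerKey())
    public class CountPerKey extends RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {

        private transient ValueState<Integer> count;   // one Integer per key, managed by Flink

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Integer.class));
        }

        @Override
        public void flatMap(Tuple2<String, Integer> input,
                            Collector<Tuple2<String, Integer>> out) throws Exception {
            Integer current = count.value();                        // null on the first element of a key
            int updated = (current == null ? 0 : current) + input.f1;
            count.update(updated);                                  // write back into managed state
            out.collect(new Tuple2<>(input.f0, updated));
        }
    }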

What is List State in Flink and how to implement it in a Flink program.

List State Implementation

What is Reducing State in Flink and how to implement it in a Flink program.

Reducing State Implementation
02:22
Analyze network data.
Assignment
1 question

Managed operator state in Flink and how to code it in a Flink program.

Managed Operator State Implementation
06:50

This lecture is dedicated to teaching you how to enable checkpointing in a Flink program. It also covers the various restart strategies offered by Flink.

Implement Checkpointing in a Flink Program
08:38
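A minimal sketch of enabling checkpointing and a restart strategy (the interval, state backend path, and retry values are illustrative assumptions):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // take a checkpoint every 5 seconds with exactly-once guarantees
    env.enableCheckpointing(5000);
    env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);

    // where checkpoint data is stored (hypothetical local path)
    env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"));

    // restart strategy: retry the job up to 3 times, waiting 10 seconds between attempts
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
            3, Time.of(10, TimeUnit.SECONDS)));   // org.apache.flink.api.common.time.Time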

This lecture shows how to implement broadcast state in a Flink program.

The Broadcast State Implementation
09:49

The queryable state feature of Apache Flink is still in beta and evolving rapidly. If we make our managed keyed state queryable, it allows non-Flink programs to access that state.

Queryable State (Beta Version)
10:47
+ Interacting with Real-Time Data
2 lectures 18:48

Live Twitter data can be used to generate insights in real time. Twitter provides data through its APIs, which we can access using security tokens. This lecture deals with how to ingest Twitter data into Apache Flink.

Getting Twitter data using its APIs
13:39
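A minimal sketch of ingesting tweets with the TwitterSource from flink-connector-twitter (the credential placeholders are yours to fill in; env is a StreamExecutionEnvironment):

    Properties props = new Properties();
    props.setProperty(TwitterSource.CONSUMER_KEY, "<consumer key>");
    props.setProperty(TwitterSource.CONSUMER_SECRET, "<consumer secret>");
    props.setProperty(TwitterSource.TOKEN, "<access token>");
    props.setProperty(TwitterSource.TOKEN_SECRET, "<access token secret>");

    DataStream<String> tweets = env.addSource(new TwitterSource(props));   // raw tweet JSON strings
    tweets.print();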

This lecture shows how to integrate Apache Kafka with Apache Flink.

Adding Kafka to Flink as a Data source
05:09
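A minimal sketch of adding Kafka as a source, assuming the Kafka 0.11 connector (FlinkKafkaConsumer011; the class name differs per Kafka version) and a hypothetical topic "my-topic":

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");
    props.setProperty("group.id", "flink-consumer-group");

    DataStream<String> kafkaStream = env.addSource(
            new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props));

    kafkaStream.print();
    env.execute("Kafka source demo");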
+ Solve Real-Time Case studies using Flink
4 lectures 43:48

A real-time use case of Twitter analysis in the healthcare domain: using Apache Flink, a healthcare company wants to check how many users are posting tweets about pollution, and from which devices.

Twitter data analysis using Flink
12:34
Bank Real-Time Fraud detection
14:48

Stock Real-Time Data Processing using Flink

Stock Real-Time Data Processing | Part 1
05:18

Stock Real-Time Data Processing using Flink 

Stock Real-Time Data Processing | Part 2
11:08
+ Table & Sql API | Relational APIs of Flink
3 lectures 15:58

Flink offers 2 relational APIs for table processing: the Table API and the SQL API.

Introduction to Table & Sql API
03:21

This lecture will help you understand how to create and register a table in Flink using its relational APIs.

How to register a Table in Relational APIs
07:37

An example showing how we write queries in Flink using the Table and SQL APIs.

Writing Queries in Table & Sql API
05:00
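A rough sketch following the Flink 1.6/1.7-era relational API (class names changed in later releases); the "Orders" table, its fields, and the orders stream of (user, product, amount) tuples are assumptions:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

    // register the stream as a table with named fields
    tableEnv.registerDataStream("Orders", orders, "user, product, amount");

    // run a SQL query against the registered table
    Table result = tableEnv.sqlQuery(
            "SELECT product, SUM(amount) AS total FROM Orders GROUP BY product");

    // the aggregation result updates continuously, so convert it to a retract stream
    DataStream<Tuple2<Boolean, Row>> resultStream = tableEnv.toRetractStream(result, Row.class);
    resultStream.print();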
Quiz 2
4 questions
+ Gelly API for Graph Processing
2 lectures 13:07

A graph is an ordered pair of a set of vertices and a set of edges.

What is a Graph
06:24
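A minimal sketch of how Gelly represents such a graph, as DataSets of vertices and edges (names and edge values are illustrative assumptions; run inside a main(...) throws Exception with an ExecutionEnvironment env):

    DataSet<Vertex<Long, String>> vertices = env.fromElements(
            new Vertex<>(1L, "Alice"), new Vertex<>(2L, "Bob"), new Vertex<>(3L, "Carol"));

    DataSet<Edge<Long, Double>> edges = env.fromElements(
            new Edge<>(1L, 2L, 1.0),    // Alice -> Bob
            new Edge<>(2L, 3L, 1.0));   // Bob -> Carol

    Graph<Long, String, Double> graph = Graph.fromDataSet(vertices, edges, env);
    System.out.println(graph.numberOfVertices() + " vertices, " + graph.numberOfEdges() + " edges");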

In this video you will learn how to do graph processing using Apache Flink's Gelly API. In the use case explained in the lecture, we find the friends of friends of a person.

Calculate Friends of Friends of a Person using the Gelly API
06:43