ETL Testing: From Beginner to Expert
ETL Testing: Essential course for all software testing professionals.
4.1 (208 ratings)
1,475 students enrolled
Created by Sid Inf
Last updated 5/2017
English
Current price: $10 Original price: $195 Discount: 95% off
5 hours left at this price!
30-Day Money-Back Guarantee
Includes:
  • 18.5 hours on-demand video
  • 1 Article
  • 9 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Understand the concepts of Business Intelligence and Data Warehousing
  • Get to know what ETL Testing is, along with the QA Lifecycle and RDBMS concepts
  • Gain an in-depth understanding of the Data Warehouse workflow and the comparison between Database Testing and Data Warehouse Testing
  • Understand different ETL Testing scenarios such as Constraint Testing, Source to Target Testing, Business Rules Testing, Negative Scenario Testing, Dependency Testing and Error Handling Testing
  • Perform data checks using SQL and understand the scope of BI Testing
Requirements
  • Basic RDBMS Concepts
Description

This DW/BI/ETL Testing training course is designed for both entry-level and advanced programmers. The course covers the foundations of the data warehouse, dimensional modeling and the important aspects of dimensions, facts and slowly changing dimensions, along with the DW/BI/ETL setup, Database Testing vs. Data Warehouse Testing, the data warehouse workflow and a case study, data checks using SQL, and the scope of BI testing. As a bonus, you also get the steps to set up an environment with the most popular ETL tool, Informatica, so you can perform all the activities on your personal computer and gain first-hand practical knowledge.

Who is the target audience?
  • BI/ETL Developers
  • Testers who want to get a job in ETL Testing, earn a higher salary, or find a job quickly
  • IT professionals with very basic knowledge of database concepts
  • Project Managers, Database Professionals, Mainframe Professionals, SQL Developers
Curriculum For This Course
248 Lectures
18:25:23
Welcome! Thank you for learning the ETL Testing Course with me!
1 Lecture 07:20

In this lecture we talk about the layout of the course, what is covered, and how to get the best out of it. 

Preview 07:20
Before we start
2 Lectures 12:05

ETL is commonly associated with Data Warehousing projects, but in reality any form of bulk data movement from a source to a target can be considered ETL. ETL testing is a data-centric testing process that validates that the data has been transformed and loaded into the target as expected.


In this lecture we also talk about data testing and challenges in ETL testing. 
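As a minimal illustration of such a data-centric check, the sketch below compares a hypothetical source table (src_customers) against its warehouse target (dw_customers); the table and column names are assumptions made here for illustration, not objects used in the course.

    -- Row counts should match between source and target
    SELECT COUNT(*) AS src_rows FROM src_customers;
    SELECT COUNT(*) AS tgt_rows FROM dw_customers;

    -- Rows present in the source but missing from the target
    -- (use EXCEPT instead of MINUS on databases other than Oracle)
    SELECT customer_id, customer_name FROM src_customers
    MINUS
    SELECT customer_id, customer_name FROM dw_customers;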


Preview 04:50

This is one of the common questions asked by non-Java/Big Data IT professionals about their current technologies and their future. 

Especially when it comes to the ETL or DW world, the future looks better than ever, since "Big Data" increases the demand for better data processing, and these tools excel at exactly that.

Preview 07:15
The Basics - Data warehouse Concepts course
5 Lectures 22:43

The original intent of the data warehouse was to segregate analytical operations from mainframe transaction processing in order to avoid slowdowns in transaction response times, and minimize the increased CPU costs accrued by running ad hoc queries and creating and distributing reports. Over time, the enterprise data warehouse became a core component of information architectures, and it's now rare to find a mature business that doesn't employ some form of an EDW or a collection of smaller data marts to support business intelligence, reporting and analytics applications.

In this lecture we see what the future of the data warehouse will be in the age of Big Data. 

Preview 04:25

Data is a collection of raw material in an unorganized format that refers to an object. 

What is Data?
03:23

The concept of data warehousing is not hard to understand. The notion is to create a permanent storage space for the data needed to support reporting, analysis, and other BI functions. In this lecture we understand what are the main reasons behind creating a data warehouse and the benefits of it. 

This long list of benefits is what makes data warehousing an essential management tool for businesses that have reached a certain level of complexity.

Why Data Warehouse is implemented?
04:51

A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources.

In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.

What is a Data Warehouse?
05:43

Test your understanding on the Data Warehouse basics
3 questions
Data Mart
3 Lectures 04:38

The data mart is a subset of the data warehouse that is usually oriented to a specific business line or team. Data marts are small slices of the data warehouse. Whereas data warehouses have an enterprise-wide depth, the information in data marts pertains to a single department.

What is a Data Mart?
01:47

Data Warehouse:

  • Holds multiple subject areas
  • Holds very detailed information
  • Works to integrate all data sources
  • Does not necessarily use a dimensional model but feeds dimensional models.

Data Mart:

  • Often holds only one subject area - for example, Finance or Sales
  • May hold more summarized data (although many hold full detail)
  • Concentrates on integrating information from a given subject area or set of source systems
  • Is typically built around a dimensional model using a star schema.
Fundamental difference between Data Warehouse and Data Mart
00:36

This lecture talks about the mistakes and misconceptions one may have with regard to data marts. 

Mistakes and Misconceptions of a Data Mart
02:15

Test your understanding on the Data Mart Concepts
3 questions
Data Warehouse Architectures
6 Lectures 22:25

In this lecture we see how the centralized architecture is set up, in which there exists only one data warehouse that stores all the data necessary for business analysis.

Revised: Enterprise Architecture or Centralized Architecture
04:08

In a Federated Architecture the data is logically consolidated but stored in separate physical databases, at the same or at different physical sites. The local data marts store only the relevant information for a department. 

The amount of data is reduced in contrast to a central data warehouse. The level of detail is enhanced in this kind of model. 

Revised: Federated Data Warehouse Architecture
02:49

Multi-tiered architecture is a distributed data approach. This process cannot be done in one step because many sources have to be integrated into the warehouse.

Multi Tiered Architecture
03:13

Different data warehousing systems have different structures. Some may have an ODS (operational data store), while some may have multiple data marts. Some may have a small number of data sources, while some may have dozens of data sources. In view of this, it is far more reasonable to present the different layers of a data warehouse architecture rather than discussing the specifics of any one system.

In general, all data warehouse systems have the following layers:

  • Data Source Layer
  • Data Extraction Layer
  • Staging Area
  • ETL Layer
  • Data Storage Layer
  • Data Logic Layer
  • Data Presentation Layer
  • Metadata Layer
  • System Operations Layer
Components of a Data Warehouse Architecture
03:57

This is where data is stored prior to being scrubbed and transformed into a data warehouse / data mart. Having one common area makes it easier for subsequent data processing / integration. Based on the business architecture and design there can be more than one staging area which can be termed with different naming conventions. 
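A minimal sketch of how a staging area might be used, assuming hypothetical src_orders, stg_orders and dw_orders tables (these names are illustrative only):

    -- Land the raw extract in the staging area first
    INSERT INTO stg_orders (order_id, customer_id, order_amount, load_date)
    SELECT order_id, customer_id, order_amount, CURRENT_DATE
    FROM   src_orders;

    -- Scrub and transform while moving the data from staging into the warehouse
    INSERT INTO dw_orders (order_id, customer_id, order_amount)
    SELECT order_id, customer_id, ROUND(order_amount, 2)
    FROM   stg_orders
    WHERE  order_amount IS NOT NULL;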

Purpose of a Staging Area in Data Warehouse Architecture - Part 1
04:43

This is where data is stored prior to being scrubbed and transformed into a data warehouse / data mart. Having one common area makes it easier for subsequent data processing / integration. Based on the business architecture and design there can be more than one staging area which can be termed with different naming conventions. 

Purpose of a Staging Area in Data Warehouse Architecture - Part 2
03:35

Test your understanding on the Data Warehouse Architecture
3 questions
Dimensional Modeling
6 Lectures 24:19

Data modeling is the formalization and documentation of existing processes and events that occur during application software design and development. 

What is Data Modeling?
02:13

The below aspects will be discussed in this lecture:

  • Functional and Technical Aspects
  • Completeness in the design
  • Understanding DB Test Execution
  • Validation

Why should a Tester know Data Modeling?
04:03

Data modeling techniques and tools capture and translate complex system designs into easily understood representations of the data flows and processes, creating a blueprint for construction and/or re-engineering. 

Data Modeling Techniques
03:53

An entity–relationship model (ER model) is a data model for describing the data or information aspects of a business domain or its process requirements, in an abstract way that lends itself to ultimately being implemented in a database such as a relational database.

ER Data Model
03:41

A dimensional model is a database structure that is optimized for online queries and data warehousing tools. It is composed of "fact" and "dimension" tables. A "fact" is a numeric value that a business wishes to count or sum. A "dimension" is essentially an entry point for getting at the facts.
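A small illustrative query against a hypothetical sales_fact and product_dim (names assumed for this sketch) shows how dimensions act as entry points for summing facts:

    -- Facts (numeric measures) are summed; the dimension provides the entry point
    SELECT d.product_category,
           SUM(f.sales_amount) AS total_sales
    FROM   sales_fact  f
    JOIN   product_dim d ON d.product_key = f.product_key
    GROUP  BY d.product_category;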

Dimensional Model
04:39

In this lecture we talk about the differences between ER model and the Dimensional Model.

Differences between ER Model and Dimensional Model
05:50
How to build a Dimensional Model?
6 Lectures 14:54

To build a dimensional model we need to follow five different phases:

  • Gathering Business Requirements
  • Conceptual Data Model
  • Logical Data Model
  • Physical Data Model
  • Database Implementation
Different phases required to build a Dimensional Data Model
01:01

Data Modelers have to interact with business analysts to get the functional requirements and with end users to find out the reporting needs. 

Business Requirements
02:25

This model includes all major entities and relationships, but does not contain much detail about attributes; it is often used in the initial planning phase. 

CDM - Conceptual Data Model
02:41

In this phase the conceptual model is actually implemented as a logical data model. A logical data model is the version of the model that represents all of the business requirements of an organization.

LDM - Logical Data Model
03:07

This is a complete model that includes all required tables, columns, relationships, database properties for the physical implementation of the database. 

Physical Data Model
01:24

DBAs or ETL developers prepare the scripts to create the entities, attributes and their relationships.

In this lecture we also talk about how to create database scripts that can be reused multiple times.

Database
04:16

Test your understanding of how to create a Dimensional Data Model
2 questions
Various Objects in a Dimensional Model
17 Lectures 01:06:58

A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time. In a data warehouse, dimensions provide structured labeling information to otherwise unordered numeric measures.

What is a Dimension Table?
02:48

In data warehousing, a fact table consists of the measurements, metrics or facts of a business process. It is often located at the center of a star schema, surrounded by dimension tables.

There are four types of facts:

  • Additive - measures that can be added across all dimensions.
  • Non-additive - measures that cannot be added across any of the dimensions.
  • Semi-additive - measures that can be added across some dimensions but not others.
  • Factless fact tables - fact tables that do not hold any aggregate numeric values or measures.
What is a Fact Table?
02:05

Additive facts are measures that can be added across all of the dimensions in the fact table.

Additive Facts
01:45

The numeric measures in a fact table fall into three categories. The most flexible and useful facts are fully additive; additive measures can be summed across any of the dimensions associated with the fact table. Semi-additive measures can be summed across some dimensions, but not all; balance amounts are common semi-additive facts because they are additive across all dimensions except time.
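A hedged sketch of the balance example, assuming a hypothetical account_balance_fact table with daily snapshots:

    -- Balances are additive across accounts for a single snapshot date...
    SELECT snapshot_date, SUM(balance_amount) AS total_balance
    FROM   account_balance_fact
    GROUP  BY snapshot_date;

    -- ...but summing across time is meaningless, so an average (or period-end value) is used
    SELECT account_key, AVG(balance_amount) AS avg_january_balance
    FROM   account_balance_fact
    WHERE  snapshot_date BETWEEN DATE '2017-01-01' AND DATE '2017-01-31'
    GROUP  BY account_key;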

Semi Additive Facts
02:28

Non-additive facts are measures that cannot be added across any of the dimensions in the fact table.

Non-Additive Facts
01:30

Factless fact tables do not hold any aggregate numeric values or measures; they simply record the occurrence of an event or relationship.

Factless Facts
02:29

A star schema is the simplest form of a dimensional model, in which data is organized into facts and dimensions.
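A minimal star-schema sketch, using hypothetical date_dim, product_dim and sales_fact tables (the column names are assumptions for illustration):

    -- Dimension tables surround the central fact table
    CREATE TABLE date_dim (
        date_key      INT PRIMARY KEY,
        calendar_date DATE,
        month_name    VARCHAR(10)
    );

    CREATE TABLE product_dim (
        product_key      INT PRIMARY KEY,
        product_name     VARCHAR(50),
        product_category VARCHAR(30)   -- denormalized onto the dimension (star style)
    );

    -- The fact table holds the measures plus foreign keys to each dimension
    CREATE TABLE sales_fact (
        date_key      INT REFERENCES date_dim(date_key),
        product_key   INT REFERENCES product_dim(product_key),
        sales_amount  DECIMAL(12,2),
        quantity_sold INT
    );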

What is a Star Schema?
05:23

 The snowflake schema is diagrammed with each fact surrounded by its associated dimensions (as in a star schema), and those dimensions are further related to other dimensions, branching out into a snowflake pattern.
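In contrast to the star-schema sketch above (where product_category sat directly on product_dim), a snowflaked version normalizes that attribute into its own table; the names are again hypothetical:

    -- The category attribute branches out into its own dimension table
    CREATE TABLE product_category_dim (
        category_key  INT PRIMARY KEY,
        category_name VARCHAR(30)
    );

    CREATE TABLE product_dim_snowflaked (
        product_key  INT PRIMARY KEY,
        product_name VARCHAR(50),
        category_key INT REFERENCES product_category_dim(category_key)
    );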

What is a Snow Flake Schema?
03:25

A galaxy schema is also known as a fact constellation schema because it is a combination of the star schema and the snowflake schema.

Galaxy Schema or Fact Constellation Schema
02:20

When choosing a database schema for a data warehouse, snowflake and star schema tend to be popular choices. This comparison discusses suitability of star vs. snowflake schema in different scenarios and their characteristics.

Snow Flake Vs Star Schema
05:27

A conformed dimension is a dimension that has exactly the same meaning and content when being referred to from different fact tables. A conformed dimension can be referenced from multiple fact tables in multiple data marts within the same organization.
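As a rough illustration, two hypothetical fact tables from different data marts (sales_fact and returns_fact) can both report by the same conformed date_dim, so their results line up month by month:

    -- Both marts join to the very same date dimension
    SELECT d.month_name, SUM(s.sales_amount) AS total_sales
    FROM   sales_fact s
    JOIN   date_dim   d ON d.date_key = s.date_key
    GROUP  BY d.month_name;

    SELECT d.month_name, SUM(r.returned_amount) AS total_returns
    FROM   returns_fact r
    JOIN   date_dim     d ON d.date_key = r.date_key
    GROUP  BY d.month_name;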

Conformed Dimensions
06:38

A junk dimension combines several low-cardinality indicator and flag fields into a single dimension. This way, we only need to build a single dimension table, and the number of fields in the fact table, as well as the size of the fact table, can be decreased.
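A hedged sketch of a junk dimension for order-level flags; the table and column names are invented for illustration:

    CREATE TABLE order_junk_dim (
        junk_key        INT PRIMARY KEY,
        is_gift_wrapped CHAR(1),      -- 'Y' / 'N' indicator
        payment_type    VARCHAR(10),  -- e.g. 'CARD', 'CASH'
        is_expedited    CHAR(1)       -- 'Y' / 'N' indicator
    );

    -- The fact table then carries one junk_key instead of three separate flag columns
    CREATE TABLE order_fact (
        order_date_key INT,
        ship_date_key  INT,
        product_key    INT,
        junk_key       INT REFERENCES order_junk_dim(junk_key),
        order_amount   DECIMAL(12,2)
    );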

Junk Dimensions
03:06

In a data warehouse, a degenerate dimension is a dimension key in the fact table that does not have its own dimension table, because all the interesting attributes have been placed in analytic dimensions. The term "degenerate dimension" was coined by Ralph Kimball. 

Degenerate Dimensions
03:35

A single physical dimension can be referenced multiple times in a fact table, with each reference linking to a logically distinct role for the dimension. For instance, a fact table can have several dates, each of which is represented by a foreign key to the date dimension.
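Continuing the hypothetical order_fact sketch above, the same date dimension can play two roles (order date and ship date) by joining it twice under different aliases:

    SELECT od.calendar_date AS order_date,
           sd.calendar_date AS ship_date,
           f.order_amount
    FROM   order_fact f
    JOIN   date_dim   od ON od.date_key = f.order_date_key   -- role: order date
    JOIN   date_dim   sd ON sd.date_key = f.ship_date_key;   -- role: ship date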

Role Playing Dimensions
02:26

Slowly Changing Dimensions (SCD) are dimensions that change slowly over time, rather than on a regular, time-based schedule.

Slowly Changing Dimensions - Intro and Example Creation
05:30

There are many approaches to dealing with SCDs. The most popular are listed below (a SQL sketch of Types 1 and 2 follows the list):

  • Type 0 - The passive method
  • Type 1 - Overwriting the old value
  • Type 2 - Creating a new additional record
  • Type 3 - Adding a new column
  • Type 4 - Using a historical table
  • Type 6 - Combining the approaches of Types 1, 2 and 3 (1+2+3=6)
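A minimal sketch of Type 1 versus Type 2 handling for a hypothetical customer_dim table (the column names are assumptions); Type 3 would instead add a column such as previous_city next to city:

    -- Type 1: overwrite the old value, so history is lost
    UPDATE customer_dim
    SET    city = 'Boston'
    WHERE  customer_id = 1001;

    -- Type 2: expire the current row and insert a new one, preserving history
    UPDATE customer_dim
    SET    end_date = CURRENT_DATE, current_flag = 'N'
    WHERE  customer_id = 1001 AND current_flag = 'Y';

    INSERT INTO customer_dim (customer_id, city, start_date, end_date, current_flag)
    VALUES (1001, 'Boston', CURRENT_DATE, NULL, 'Y');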
Slowly Changing Dimensions (SCD) Type 1, 2, 3
12:30

Dimension, Fact and SCD Type 1, 2 and 3 are reviewed in this lecture. 

Slowly Changing Dimensions - Summary
03:33

Test your understanding on Dimensional Model Objects
4 questions
Data Integration and ETL
5 Lectures 23:55

Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. A complete data integration solution delivers trusted data from a variety of sources.

What is Data Integration?
06:49

ETL is short for extract, transform, load, three database functions that are combined into one tool to pull data out of one database and place it into another database.

Extract is the process of reading data from a database.

Transform is the process of converting the extracted data from its previous form into the form it needs to be in so that it can be placed into another database. Transformation occurs by using rules or lookup tables or by combining the data with other data.

Load is the process of writing the data into the target database.

ETL is used to migrate data from one database to another, to form data marts and data warehouses and also to convert databases from one format or type to another.
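A compact sketch of all three steps expressed as a single SQL statement, assuming hypothetical src_customer, country_lookup and dw_customer tables:

    -- Extract from the source, transform via rules and a lookup table, load into the target
    INSERT INTO dw_customer (customer_id, customer_name, country_name)
    SELECT s.customer_id,
           UPPER(TRIM(s.customer_name)),   -- rule-based transformation
           c.country_name                  -- lookup-table transformation
    FROM   src_customer   s
    JOIN   country_lookup c ON c.country_code = s.country_code;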

What is ETL?
04:19

Data acquisition is the process of extracting data from different source systems (operational databases), integrating it, transforming it into a homogeneous format and loading it into the target warehouse database - simply called ETL (Extraction, Transformation and Loading). The data acquisition process designs are named differently by different ETL vendors.

Data Acquisition
04:16

Data transformation is the process of converting data or information from one format to another, usually from the format of a source system into the required format of a new destination system.

Data Transformation
05:42

In this lecture we discuss the common questions that are raised about Data Integration and ETL.

Common Questions and Summary
02:49

Test your understanding on Data Integration and ETL
3 questions
ETL Vs ELT
3 Lectures 15:01

As covered in the previous section, ETL is short for extract, transform, load: data is extracted from a source database, transformed into the required form using rules, lookup tables or combinations with other data, and loaded into the target database.

ETL - Explained
06:41

ELT is a variation of Extract, Transform, Load (ETL), a data integration process in which the transformation step is performed inside the target system after the data has been loaded, rather than on an intermediate server before loading.

ELT - Explained
05:57

ELT makes sense when the target is a high-end data engine, such as a data appliance, Hadoop cluster, or cloud installation to name three examples.  If this power is there, why not use it?

ETL, on the other hand, is designed using a pipeline approach. While data is flowing from the source to the target, a transformation engine (something unique to the tool) takes care of any data changes.

Which is better depends on priorities. All things being equal, it’s better to have fewer moving parts. ELT has no transformation engine – the work is done by the target system, which is already there and probably being used for other development work. On the other hand, the ETL approach can provide drastically better performance in certain scenarios. The training and development costs of ETL need to be weighed against the need for better performance. (Additionally, if you don’t have a target system powerful enough for ELT, ETL may be more economical.)
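For contrast, a rough ELT-style sketch in which the raw data is loaded first and the transformation runs inside the target engine itself (table names are hypothetical):

    -- Load the raw extract as-is into the target system
    INSERT INTO raw_orders
    SELECT * FROM external_orders_extract;

    -- Then transform in place, using the target engine's own SQL
    CREATE TABLE clean_orders AS
    SELECT order_id,
           CAST(order_amount AS DECIMAL(12,2)) AS order_amount,
           UPPER(order_status)                 AS order_status
    FROM   raw_orders
    WHERE  order_amount IS NOT NULL;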

ETL Vs ELT
02:23
29 More Sections
About the Instructor
Sid Inf
4.3 Average rating
1,358 Reviews
8,909 Students
5 Courses
Data/ETL Architect

Business Intelligence Consultant and Trainer with 14+ years of extensive work experience on various client engagements. Delivered many large data warehousing projects and trained numerous professionals on business intelligence technologies. Extensively worked on all facets of data warehousing, including requirement gathering, gap analysis, database design, data integration, data modeling, enterprise reporting, data analytics, data quality, data visualization and OLAP.

Has worked across a broad range of business verticals and holds exceptional expertise in various ETL tools such as Informatica PowerCenter, SSIS, ODI and IDQ, as well as Data Virtualization, DVO and MDM.