Forecasting Models with Python
3.2 (33 ratings)
295 students enrolled

Learn the main forecasting models from basic to expert level through a practical course with the Python programming language.
Created by Diego Fernandez
Last updated 10/2016
English
Current price: $10 Original price: $50 Discount: 80% off
30-Day Money-Back Guarantee
Includes:
  • 5.5 hours on-demand video
  • 7 Articles
  • 23 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Read data files and perform statistical computing operations by installing related packages and running code on the Python IDE.
  • Compute simple forecasting methods such as naïve or random walk and use them as initial benchmarks.
  • Recognize time series level, trend and seasonality patterns through simple moving averages together with Brown’s, Holt’s, Gardner’s, Taylor’s and Winters’ exponential smoothing (ETS) methods.
  • Assess whether a time series is first-order trend stationary with the augmented Dickey-Fuller test.
  • Estimate a time series’ conditional mean with Box-Jenkins autoregressive integrated moving average (ARIMA) models.
  • Define models’ parameters with the autocorrelation and partial autocorrelation functions, and evaluate whether forecasting residuals are white noise with the Ljung-Box test.
  • Choose the best methods and models by comparing Akaike’s, Hannan-Quinn’s and Schwarz’s Bayesian information loss criteria.
  • Test methods’ and models’ forecasting accuracy by comparing error metrics such as Hyndman and Koehler’s mean absolute scaled error.
View Curriculum
Requirements
  • The Python programming language is required. Download instructions are included.
  • A Python Distribution (PD) and Integrated Development Environment (IDE) are recommended. Download instructions are included.
  • Python code files are provided by the instructor.
  • Prior basic knowledge of the Python programming language is useful but not required.
Description

Learn forecasting models through a practical course with the Python programming language using real-world data. It explores the main concepts from basic to expert level, which can help you achieve better grades, develop your academic career, apply your knowledge at work or make business forecasting decisions. All of this while drawing on the wisdom of the best academics and practitioners in the field.

Become a Forecasting Models Expert in this Practical Course with Python

  • Read data files and perform statistical computing operations by installing related packages and running code on the Python IDE.
  • Compute simple benchmarking methods such as random walk.
  • Recognize time series patterns with moving averages and exponential smoothing (ETS) methods.
  • Assess whether a time series is first-order trend stationary, that is, constant in its mean.
  • Estimate a time series’ conditional mean with autoregressive integrated moving average (ARIMA) models.
  • Define models’ parameters and evaluate whether forecasting errors are white noise.
  • Select the best methods and models by comparing information loss criteria.
  • Test methods’ and models’ forecasting accuracy by comparing their predictive capabilities.

Become a Forecasting Models Expert and Put Your Knowledge in Practice

Learning forecasting methods and models is indispensable for business or financial analysts in areas such as sales and financial forecasting, inventory optimization, demand and operations planning, and cash flow management. It is also essential for academic careers in data science, applied statistics, operations research, economics, econometrics and quantitative finance. And it is necessary for any business forecasting-related decision.

But as the learning curve steepens with growing complexity, this course helps by leading you step by step through practical real-world examples for greater effectiveness.

Content and Overview

This practical course contains 34 lectures and 5.5 hours of content. It’s designed for all forecasting knowledge levels, and a basic understanding of the Python programming language is useful but not required.

At first, you’ll learn how to read data files and perform statistical computing operations by installing the related packages and running code in the Python IDE. Next, you’ll estimate simple forecasting methods such as the arithmetic mean, naïve or random walk, random walk with drift, and seasonal random walk methods, and use them as benchmarks against other, more complex ones. After that, you’ll evaluate these methods’ forecasting accuracy through the scale-dependent mean absolute error and scale-independent mean absolute percentage error metrics.
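These benchmark methods and error metrics are simple enough to sketch directly. Below is a minimal, illustrative numpy version; the function names and sample series are my own, not the course's code files:

```python
import numpy as np

def naive_forecast(y):
    """Naive / random walk: each forecast equals the previous observation."""
    return y[:-1]

def drift_forecast(y):
    """Random walk with drift: previous observation plus the average step.
    For simplicity the drift uses the whole sample, so this is in-sample only."""
    return y[:-1] + np.mean(np.diff(y))

def seasonal_naive_forecast(y, m):
    """Seasonal random walk: each forecast equals the value m periods earlier."""
    return y[:-m]

def mae(actual, forecast):
    """Scale-dependent mean absolute error."""
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Scale-independent mean absolute percentage error."""
    return 100 * np.mean(np.abs((actual - forecast) / actual))

y = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
print(mae(y[1:], naive_forecast(y)), mape(y[1:], naive_forecast(y)))
```

Each benchmark produces one-step-ahead forecasts aligned with `y[1:]` (or `y[m:]` for the seasonal variant), so the error metrics compare like with like.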

Then, you’ll identify time series level, trend and seasonality patterns through simple moving averages together with Brown’s, Holt’s, Gardner’s, Taylor’s and Winters’ exponential smoothing (ETS) methods. Next, you’ll evaluate these methods’ forecasting accuracy through the previously studied error metrics and the newly introduced Hyndman and Koehler’s mean absolute scaled error.
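As one hedged illustration of this family, Brown's simple exponential smoothing and the MASE metric can be written in a few lines of numpy; the smoothing constant, initialisation and sample data below are illustrative choices, not the course's:

```python
import numpy as np

def ses(y, alpha):
    """Brown's simple exponential smoothing, returning one-step forecasts."""
    f = np.empty_like(y)
    f[0] = y[0]  # initialise the level at the first observation
    for t in range(1, len(y)):
        # new forecast = alpha * last observation + (1 - alpha) * last forecast
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def mase(actual, forecast):
    """Hyndman and Koehler's mean absolute scaled error (non-seasonal scaling):
    forecast MAE divided by the in-sample MAE of the naive method."""
    scale = np.mean(np.abs(np.diff(actual)))
    return np.mean(np.abs(actual - forecast)) / scale

y = np.array([30., 21., 29., 31., 40., 48., 53., 47., 37., 39.])
f = ses(y, alpha=0.4)
print(mase(y, f))  # values below 1 beat the naive benchmark in-sample
```

Because MASE is scaled by the naive method's in-sample error, it stays comparable across series measured in different units.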

After that, you’ll evaluate whether a time series is first-order trend stationary with the augmented Dickey-Fuller test. Next, you’ll estimate the time series’ conditional mean with Box-Jenkins autoregressive integrated moving average (ARIMA) models. Then, you’ll determine the models’ parameters with the autocorrelation and partial autocorrelation functions. Later, you’ll select the best model by comparing Akaike’s, Hannan-Quinn’s and Schwarz’s Bayesian information loss criteria and evaluate these models’ forecasting accuracy through the previously studied error metrics. Finally, you’ll check whether the best model’s forecasting errors are white noise, and therefore contain no remaining predictive information, with the Ljung-Box lagged autocorrelation test.
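The lectures implement this workflow with statsmodels (adfuller, ARIMA, acorr_ljungbox). As a self-contained sketch of the same steps using only numpy and scipy, one can difference a series, fit an AR(1) by least squares, compute a Gaussian AIC, and run a hand-rolled Ljung-Box test; all names and simulated data below are illustrative, not the course's:

```python
import numpy as np
from scipy import stats

def fit_ar1(y):
    """OLS fit of y_t = c + phi * y_{t-1} + e_t, a stand-in for ARIMA(1, 0, 0)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ beta
    return beta, resid

def aic(resid, k):
    """Gaussian AIC up to a constant: comparable across models on the same data."""
    n = len(resid)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * k

def ljung_box(resid, lags=10):
    """Ljung-Box Q statistic and chi-squared p-value for residual autocorrelation."""
    n = len(resid)
    r = resid - resid.mean()
    acf = np.array([np.sum(r[k:] * r[:-k]) for k in range(1, lags + 1)]) / np.sum(r ** 2)
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
    return q, stats.chi2.sf(q, df=lags)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 200))  # simulated random walk with drift
dy = np.diff(y)                           # first-difference to remove the trend
beta, resid = fit_ar1(dy)
q, p = ljung_box(resid, lags=10)
print(aic(resid, k=2), q, p)              # high p: no evidence residuals aren't white noise
```

A high Ljung-Box p-value means the residuals show no significant lagged autocorrelation, which is exactly the "no remaining predictive information" check described above.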

Who is the target audience?
  • Students at any knowledge level who want to learn about forecasting models using Python programming language.
  • Academic researchers who wish to deepen their knowledge in data science, applied statistics, operations research, economics, econometrics or quantitative finance.
  • Business or financial analysts and data scientists who desire to apply this knowledge in sales and financial forecasting, inventory optimization, demand and operations planning, or cash flow management.
Compare to Other Python Courses
Curriculum For This Course
34 Lectures
05:20:29
+
Course Overview
7 Lectures 31:19

In this lecture you will view the course disclaimer and learn the course objectives, how you will benefit from it, its prerequisites and my profile as instructor.

Preview 03:18

In this lecture you will learn that it is recommended to take the course in order, as each section builds on the last one and so does its complexity. You will also study the course structure and main sections (simple forecasting methods, moving averages and exponential smoothing methods, autoregressive integrated moving average models and bibliography).

Preview 03:28

In this lecture you will learn the definition of forecasting models, time series decomposition, and the download websites for the Miniconda Distribution for Python 2.7 or 3.5 64-bit (PD) and the Python PyCharm Integrated Development Environment (IDE).

Forecasting Models
05:31

In this lecture you will learn how to read the forecasting models .CSV data into the Python PyCharm Integrated Development Environment (IDE), the data sources, the code files (originally in .TXT format, to be converted to .PY format) with forecasting computation instructions, the installation of Python packages (numpy, pandas, scipy, matplotlib and statsmodels) into the Miniconda Distribution for Python 2.7 or 3.5 64-bit (PD), and related code (import <package> as <name>, from <package> import <function>, read_csv(<path>), plot(<y=data>) and DataRange(<initial date>: <final date>)).

Forecasting Models Data
18:53
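The data-reading step can be sketched as follows. The column names and inline data are hypothetical stand-ins for the course's .CSV file (with a real file you would pass its path to read_csv instead of a StringIO buffer), and date-based slicing on the index plays the role of the lecture's DataRange(<initial date>: <final date>) call:

```python
import io
import pandas as pd

# Hypothetical inline stand-in for the course's .CSV data file.
csv_text = """Date,Sales
2016-01-01,112
2016-02-01,118
2016-03-01,132
"""

# Parse the date column and use it as the index, mirroring read_csv(<path>).
data = pd.read_csv(io.StringIO(csv_text), parse_dates=["Date"], index_col="Date")
series = data["Sales"]
print(series.loc["2016-01-01":"2016-02-01"])  # date-range slicing on the index
```

With a DatetimeIndex in place, pandas supports slicing by date strings directly, which is the natural way to select an estimation or forecasting window.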

Before starting the course, please download the .CSV data file from the external resources.

Course Data File
00:03

Before starting the course, please download the .TXT Python code files from the additional resources.

Course Code Files
00:03

You can download the .PDF section slides file as an additional resource.

Course Overview Slides
00:02
+
Simple Forecasting Methods
5 Lectures 36:40

You can download the .PDF section slides file as an additional resource.

Simple Forecasting Methods Slides
00:02

In this lecture you will learn section lectures’ details and main themes to be covered related to simple forecasting methods (arithmetic mean method, naïve or random walk methods, seasonal random walk method and forecasting accuracy (scale-dependent mean absolute error MAE and scale-independent mean absolute percentage error MAPE)).

Preview 06:32

In this lecture you will learn arithmetic mean method definition and main calculations (mean(<data>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)). 

Arithmetic Mean Method
06:50

In this lecture you will learn naïve or random walk and random walk with drift methods definition and main calculations (data.shift(), mean(<data>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Naïve or Random Walk Methods
09:01

In this lecture you will learn seasonal random walk method definition and main calculations (def LastSeason(<data>, <season>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Seasonal Random Walk Method
14:15
+
Moving Averages and Exponential Smoothing Methods
12 Lectures 02:43:49

You can download the .PDF section slides file as an additional resource.

Moving Averages and Exponential Smoothing Methods Slides
00:02

In this lecture you will learn section lectures’ details and main themes to be covered related to moving averages (simple moving average method, exponential moving average method and weighted moving average method), exponential smoothing methods (Brown’s simple exponential smoothing method, Holt’s linear trend method, Holt’s exponential trend method, Gardner’s additive damped trend method, Taylor’s multiplicative damped trend method, Holt-Winters additive seasonality method, Holt-Winters multiplicative seasonality method and Holt-Winters additive damped trend multiplicative seasonality method) and forecasting methods accuracy (scale-independent mean absolute scaled error MASE).

Moving Averages and Exponential Smoothing Methods Overview
10:08

In this lecture you will learn simple moving average SMA and exponential moving average EMA methods definition and main calculations (rolling_mean(<data>, <lag>), ewma(<data>, <span>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Simple and Exponential Moving Averages Methods
11:24
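Note that pandas.rolling_mean and pandas.ewma, as named in the lecture (pandas circa 2016), were removed in later pandas releases; the method-based equivalents are .rolling().mean() and .ewm().mean(). A minimal sketch with made-up data:

```python
import pandas as pd

y = pd.Series([112., 118., 132., 129., 121., 135., 148., 148.])

sma = y.rolling(window=3).mean()          # simple moving average, 3-period window
ema = y.ewm(span=3, adjust=False).mean()  # exponential moving average, span of 3

# Shift by one period so each average serves as a forecast of the NEXT value.
sma_forecast = sma.shift(1)
print(sma.iloc[2], ema.iloc[2], sma_forecast.iloc[3])
```

With adjust=False, the EMA follows the recursive form e_t = alpha * y_t + (1 - alpha) * e_{t-1} with alpha = 2 / (span + 1), matching the exponential smoothing recursions used later in the course.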

In this lecture you will learn weighted moving average WMA method definition and main calculations (def OptimalWeights(<weights>): return SumSquareErrors, minimize(<OptimalWeights>, <InitialGuess>, <Bounds>), def WMAfunction(<data>, <weights>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Weighted Moving Average Method
17:59
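The optimisation pattern this lecture describes (minimise the sum of squared forecast errors over the weights) can be sketched with scipy.optimize.minimize. The window length, data and bounds below are illustrative, and unlike a textbook WMA the weights here are only bounded to [0, 1] rather than forced to sum to one:

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
window = 3

def wma_forecast(weights):
    """One-step forecasts: weighted sum of the previous `window` observations."""
    return np.array([np.dot(weights, y[t - window:t]) for t in range(window, len(y))])

def sse(weights):
    """Sum of squared one-step forecast errors; the objective to minimise."""
    return np.sum((y[window:] - wma_forecast(weights)) ** 2)

# Equal weights as the initial guess, each weight bounded to [0, 1],
# mirroring minimize(<OptimalWeights>, <InitialGuess>, <Bounds>).
result = minimize(sse, x0=np.full(window, 1 / window), bounds=[(0, 1)] * window)
print(result.x, sse(result.x))
```

The same pattern (an objective function returning the sum of squared errors, minimised over parameters under bounds) recurs in all the exponential smoothing lectures that follow.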

In this lecture you will learn Brown’s simple exponential smoothing method definition and calculations (def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>), def SESfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Brown’s Simple Exponential Smoothing Method
14:27

In this lecture you will learn Holt’s linear trend method definition and main calculations (def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>), def HOLTfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Holt’s Linear Trend Method
14:34
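Holt's recursions are short enough to sketch directly. This illustrative numpy version (my own initialisation and parameter values, not the instructor's HOLTfunction) updates a level and a trend component and forecasts one step ahead:

```python
import numpy as np

def holt(y, alpha, beta):
    """Holt's linear trend method: level and trend recursions, one-step forecasts."""
    level, trend = y[0], y[1] - y[0]  # a common simple initialisation choice
    forecasts = [level + trend]
    for t in range(1, len(y)):
        prev_level = level
        # level: weighted mix of the new observation and the previous forecast
        level = alpha * y[t] + (1 - alpha) * (prev_level + trend)
        # trend: weighted mix of the latest level change and the previous trend
        trend = beta * (level - prev_level) + (1 - beta) * trend
        forecasts.append(level + trend)
    return np.array(forecasts)

y = np.array([30., 34., 39., 44., 50., 55., 61.])
f = holt(y, alpha=0.8, beta=0.2)
print(f[-1])  # forecast for the next, unseen period
```

In practice alpha and beta would themselves be chosen by minimising the sum of squared errors, as in the surrounding lectures, rather than fixed by hand.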

In this lecture you will learn Holt’s exponential trend method definition and main calculations (def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>), def EXPfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Holt’s Exponential Trend Method
15:17

In this lecture you will learn Gardner’s additive damped trend method definition and main calculations (def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>), def GARDfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Gardner’s Additive Damped Trend Method
14:45

In this lecture you will learn Taylor’s multiplicative damped trend method definition and main calculations (def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>), def TAYfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Taylor’s Multiplicative Damped Trend Method
15:59

In this lecture you will learn Holt-Winters additive seasonality method definition and main calculations (def InitialLevel(<data>, <season>), InitialTrend(<data>, <season>), InitialSeason(<data>, <season>), def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>),  def HWAfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Holt-Winters Additive Seasonality Method
15:23

In this lecture you will learn Holt-Winters multiplicative seasonality method definition and main calculations (def InitialLevel(<data>, <season>), InitialTrend(<data>, <season>), InitialSeason(<data>, <season>), def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>),  def HWMfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Holt-Winters Multiplicative Seasonality Method
16:00

In this lecture you will learn Holt-Winters additive damped trend and multiplicative seasonality method definition and main calculations (def InitialLevel(<data>, <season>), InitialTrend(<data>, <season>), InitialSeason(<data>, <season>), def OptimalParameters(<parameters>): return SumSquareErrors, minimize(<OptimalParameters>, <InitialGuess>, <Bounds>),  def HWDMfunction(<data>, <parameters>), for i in range(1, len(<forecast>)), plot(<y=data>) and absolute(<residuals>)).

Holt-Winters Additive Damped Trend and Multiplicative Seasonality Method
17:51
+
Auto Regressive Integrated Moving Average Models
9 Lectures 01:28:36

You can download the .PDF section slides file as an additional resource.

Auto Regressive Integrated Moving Average Models Slides
00:02

In this lecture you will learn section lectures’ details and main themes to be covered related to first order trend stationary time series (autocorrelation function ACF, partial autocorrelation function PACF, augmented Dickey-Fuller ADF unit root test and time series differentiation), ARIMA model specification (autocorrelation function ACF  and partial autocorrelation function PACF), ARIMA models (random walk with drift model, first order autoregressive model, differentiated first order autoregressive model, Brown’s simple exponential smoothing with growth model, Holt’s linear trend model), forecasting models selection criteria (Akaike information criterion AIC, Hannan-Quinn Information Criterion (HQIC) and Schwarz Bayesian information criterion BIC) and best model’s forecasting residuals white noise (Ljung-Box autocorrelation test).

Auto Regressive Integrated Moving Average Models Overview
10:10

In this lecture you will learn first order trend stationary time series tests, time series differentiation and ARIMA model specification definitions and main calculations (acf(<data>), pacf(<data>), adfuller(<data>), plot(<y=data>) and data.shift()).

First Order Trend Stationary Time Series
19:00

In this lecture you will learn random walk with drift model definition and main calculations (ARIMA(<data>, <order>), ARIMA.fit(), ARIMA.params, for i in range(1, len(<forecast>)), plot(<y=data>), RWDmodel.aic, RWDmodel.bic, RWDmodel.hqic and absolute(<residuals>) ).

ARIMA Random Walk with Drift Model
08:43

In this lecture you will learn first order autoregressive model definition and main calculations (ARIMA(<data>, <order>), ARIMA.fit(), ARIMA.params, for i in range(1, len(<forecast>)), plot(<y=data>), AR1model.aic, AR1model.bic, AR1model.hqic and absolute(<residuals>)).

ARIMA First Order Autoregressive Model
08:00

In this lecture you will learn differentiated first order autoregressive models definitions and main calculations (ARIMA(<data>, <order>), ARIMA.fit(), ARIMA.params, for i in range(1, len(<forecast>)), plot(<y=data>), DAR1model.aic, DAR1model.bic, DAR1model.hqic and absolute(<residuals>)).

ARIMA Differentiated First Order Autoregressive Model
08:30

In this lecture you will learn Brown’s simple exponential smoothing with growth ARIMA model definition and main calculations (ARIMA(<data>, <order>), ARIMA.fit(), ARIMA.params, for i in range(1, len(<forecast>)), plot(<y=data>), SESGmodel.aic, SESGmodel.bic, SESGmodel.hqic and absolute(<residuals>)).

ARIMA Brown’s Simple Exponential Smoothing with Growth Model
09:31

In this lecture you will learn Holt’s linear trend ARIMA model definition and main calculations (ARIMA(<data>, <order>), ARIMA.fit(), ARIMA.params, for i in range(1, len(<forecast>)), plot(<y=data>), HOLTmodel.aic, HOLTmodel.bic, HOLTmodel.hqic and absolute(<residuals>)).

ARIMA Holt’s Linear Trend Model
11:28

In this lecture you will learn forecasting residuals white noise tests definitions and main calculations (ARIMA(<data>, <order>), ARIMA.fit(),  ARIMA.resid, acf(<data>), pacf(<data>), acorr_ljungbox(<data>, <lags>) and plot(<y=data>)).

ARIMA Model Residuals White Noise
13:12
+
Bibliography
1 Lecture 00:02

In this lecture you can download slides with course bibliography.

Course Bibliography
00:02
About the Instructor
Diego Fernandez
3.9 Average rating
485 Reviews
3,342 Students
22 Courses
Exfinsis

Diego Fernandez is the author of high-quality online courses and ebooks at Exfinsis for anyone who wants to become an expert in financial data analysis.

His main areas of expertise are financial analysis and data science. Within financial analysis he has focused on computational finance, quantitative finance and trading strategies analysis. Within data science he has concentrated on machine learning, applied statistics and econometrics. For all of this he has become proficient in Microsoft Excel®, R statistical software® and Python programming language® analysis tools. 

He has important online business development experience at fast-growing startups and blue-chip companies in several European countries. He has always exceeded expected professional objectives by starting with a comprehensive analysis of business environment and then efficiently executing formulated strategy.

He also achieved outstanding performance in his undergraduate and postgraduate degrees at world-class academic institutions. This outperformance allowed him to become teacher assistant for specialized subjects and constant student leader within study groups. 

His motivation is a lifelong passion for financial data analysis, which he intends to transmit in all of his courses.