
# R Programming for Simulation and Monte Carlo Methods

Learn to program statistical applications and Monte Carlo simulations with numerous "real-life" cases and R software.
4.2 (105 ratings)
1,946 students enrolled
Last updated 12/2016
English
30-Day Money-Back Guarantee
Includes:
• 11.5 hours on-demand video
• 2 Supplemental Resources
• Access on mobile and TV
• Certificate of Completion
## What Will I Learn?
• Use R software to program probabilistic simulations, often called Monte Carlo simulations.
• Use R software to program mathematical simulations and to create novel mathematical simulation functions.
• Use existing R functions, and write your own, to perform simulated inference estimates, including likelihoods and confidence intervals, and to model other cases of stochastic simulation.
• Be able to generate different families (and moments) of both discrete and continuous random variables.
• Be able to simulate parameter estimation, Monte Carlo integration of both continuous and discrete functions, and variance reduction techniques.
## Requirements
• Students will need to install the popular no-cost R Console and RStudio software (instructions provided).
## Description

R Programming for Simulation and Monte Carlo Methods focuses on using R software to program probabilistic simulations, often called Monte Carlo simulations. Typical simplified "real-world" examples include simulating the probability of a baseball player having a 'streak' of twenty sequential season games with 'hits-at-bat', or estimating the likely total number of taxicabs in an unfamiliar city after observing a certain sequence of numbered cabs pass a particular street corner over a 60-minute period.

In addition to detailing half a dozen (sometimes amusing) 'real-world' extended example applications, the course also explains in detail how to use existing R functions, and how to write your own, to perform simulated inference estimates, including likelihoods and confidence intervals, and to model other cases of stochastic simulation. Techniques for using R to generate different characteristics of various families of random variables are explained in detail. The course teaches skills to implement various approaches to simulate continuous and discrete random variable probability distribution functions, parameter estimation, Monte Carlo integration, and variance reduction techniques. It partially utilizes the Comprehensive R Archive Network (CRAN) spuRs package to demonstrate how to structure and write programs to accomplish mathematical and probabilistic simulations using R statistical software.

## Who is the target audience?
• You do NOT need to be experienced with R software and you do NOT need to be an experienced programmer.
• Course is good for practicing quantitative analysis professionals.
• Course is good for graduate students seeking research data and scenario analysis skills.
• Anyone interested in learning more about programming statistical applications with R software would benefit from this course.
## Curriculum For This Course (107 lectures, 11:42:12)
### Review of Vectors, Matrices, Lists and Functions (13 lectures, 01:13:36)
Preview 01:39

Install R and RStudio
00:45

Review: Vectors, Matrices, Lists (part 1)
08:07

Preview 06:34

Sequences and Replications (part 1)
07:12

Preview 05:56

Sort and Order
04:45

Creating a Matrix (part 1)
08:51

Using Matrices (part 2)
03:19

List Structures and Horsekicks (part 1)
09:43

dpois() Function and Horsekicks (part 2)
09:56

Sampling from a Dataframe
04:24

Section 1 Exercises
02:25
### Simulation Examples: Tossing a Coin (8 lectures, 58:12)
R Expressions Exercises Answers (part 1)
07:36

R Expressions Exercises Answers (part 2)
07:08

Preview 07:13

Introduction to Simulation: A Game of Tossing a Coin (part 2)
07:25

Write a Simulation Function (part 1)
07:20

Write a Simulation Function (part 2)
07:17

Continue Coin Tossing Simulation (part 3)
06:16

Continue Coin Tossing Simulation (part 4)
07:57
### Simulation Examples: Returning Checked Hats (7 lectures, 39:22)

A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is often fundamental to fields that use randomized algorithms such as coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: this is ideally a random permutation of the 52 cards.
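In base R, a random permutation falls out of `sample()` directly. A minimal sketch of the card-shuffling example (the `deck` labels here are illustrative, not from the course materials):

```r
# Build a standard 52-card deck as "Suit Rank" labels
deck <- paste(rep(c("Hearts", "Diamonds", "Clubs", "Spades"), each = 13),
              rep(c(2:10, "J", "Q", "K", "A"), times = 4))

set.seed(1)               # for reproducibility
shuffled <- sample(deck)  # sample() with no 'size' returns a random permutation
head(shuffled, 3)
```

Calling `sample(x)` on a vector with no further arguments draws all elements without replacement, i.e. a uniformly random permutation.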

Preview 04:00

Random Permutations: Hat Problem (part 2)
06:57

Random Permutations: Hat Problem (part 3)
07:46

Random Permutations: Hat Problem (part 4)
07:00

Preview 04:50

Random Permutations: Hat Problem (part 6)
06:34

Checking Hats Exercise
02:15
### Simulation Examples: Collecting Baseball Cards and "Streaky" Behavior (11 lectures, 56:15)
Solution to Checking Hats Exercise
05:45

Collecting Baseball Cards Simulation (part 1)
05:52

Preview 05:11

Collecting Baseball Cards Simulation (part 3)
05:05

Collecting Baseball Cards Simulation (part 4)
07:03

Collecting Quarters Exercise
00:27

Collecting State Quarters Exercise Solution
05:56

"Streaky" Baseball Batting Behavior (part 1)
05:33

"Streaky" Baseball Batting Behavior (part 2)
06:16

"Streaky" Baseball Batting Behavior (part 3)
05:40

"Streaky" Behavior Exercise
03:27
### Monte Carlo Methods for Inference (12 lectures, 01:19:05)
Solution to "Streaky" Behavior Exercise
08:53

Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other mathematical methods. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution.
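A classic illustration of "repeated random sampling to obtain numerical results" is estimating π by scattering uniform points over the unit square and counting how many land inside the quarter circle. This sketch is a generic textbook example, not code from the course:

```r
set.seed(42)
n <- 100000
x <- runif(n)   # n uniform draws on [0, 1]
y <- runif(n)

# Fraction of points with x^2 + y^2 <= 1 estimates pi/4
pi_hat <- 4 * mean(x^2 + y^2 <= 1)
pi_hat          # close to pi, with Monte Carlo error shrinking like 1/sqrt(n)
```

The same pattern — simulate many random draws, then average an indicator or function of them — underlies all three problem classes mentioned above.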

Using Monte Carlo Simulation to Estimate Inference
05:34

Preview 07:14

Sleepless in Seattle (part 2)
04:19

Statistical inference is the process of deducing properties of an underlying distribution by analysis of data. Inferential statistical analysis infers properties about a population: this includes testing hypotheses and deriving estimates. The population is assumed to be larger than the observed data set; in other words, the observed data is assumed to be sampled from a larger population.
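Simulation makes the "sampled from a larger population" idea concrete: we can draw many samples from a known population and watch how a statistic behaves. A minimal sketch (population parameters here are invented for illustration):

```r
set.seed(7)
pop_mean <- 10
pop_sd   <- 2

# Draw 5000 samples of size 30 and record each sample mean
xbar <- replicate(5000, mean(rnorm(30, mean = pop_mean, sd = pop_sd)))

# Monte Carlo approximation of the sampling distribution of the mean:
# central 95% of simulated sample means
quantile(xbar, c(0.025, 0.975))
```

The simulated spread of `xbar` approximates the standard error `pop_sd / sqrt(30)`, which is exactly the quantity classical inference derives analytically.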

Applying Monte Carlo Methods to Inference (part 1)
06:04

Applying Monte Carlo Methods to Inference (part 2)
05:46

Applying Monte Carlo Methods to Inference (part 3)
08:56

Applying Monte Carlo Methods to Inference (part 4)
09:54

Applying Monte Carlo Methods to Inference (part 5)
09:09

Comparing Estimators: The Taxi Problem (part 1)
05:26

Comparing Estimators: The Taxi Problem (part 2)
06:36

Late to Class Again? Exercise
01:14
### Stochastic Simulation and Random Variable Generation (11 lectures, 01:13:50)
Late to Class Again Exercise Solution
11:20

A stochastic simulation is a simulation that traces the evolution of variables that can change stochastically (randomly) with certain probabilities.
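The simplest stochastic simulation is a symmetric random walk: at each step a variable moves up or down with equal probability. A sketch in base R (not taken from the course code):

```r
set.seed(3)
steps <- sample(c(-1, 1), size = 100, replace = TRUE)  # 100 coin-flip steps
walk  <- cumsum(steps)                                 # position after each step
tail(walk, 1)                                          # final position
```

Each run of this simulation traces one possible evolution of the variable; repeating it many times reveals the distribution of outcomes.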

What is Stochastic Simulation?
06:51

In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
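R encodes this distinction directly in its `d`/`p`/`q`/`r` function families: `d*` gives the probability mass or density function, `p*` the cumulative distribution function, `q*` its inverse, and `r*` random draws. A quick sketch:

```r
dpois(2, lambda = 1)  # P(X = 2) for Poisson(1): a probability mass function value
pnorm(1.96)           # standard normal CDF at 1.96, about 0.975
qnorm(0.975)          # the inverse: about 1.96

set.seed(5)
rbinom(3, size = 10, prob = 0.5)  # three random draws from Binomial(10, 0.5)
```

Every standard family (binomial, Poisson, normal, exponential, ...) follows this same four-function naming convention.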

Simulation and Random Variable Generation (part 1)
08:33

Simulation and Random Variable Generation (part 2)
08:16

Simulation and Random Variable Generation (part 3)
04:02

Preview 08:12

Simulating Discrete Random Variables (part 2)
07:00

Simulating Discrete Random Variables (part 3)
03:39

The idea of the Newton-Raphson method is as follows: one starts with an initial guess that is reasonably close to the true root; the function is then approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
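The iteration described above can be sketched in a few lines of R; the function name `newton` and its arguments are illustrative, not the course's implementation:

```r
# Newton-Raphson: repeat x <- x - f(x)/f'(x) until the step is tiny
newton <- function(f, fprime, x0, tol = 1e-9, max_iter = 100) {
  x <- x0
  for (i in seq_len(max_iter)) {
    step <- f(x) / fprime(x)  # tangent-line correction at the current guess
    x <- x - step             # x-intercept of the tangent line
    if (abs(step) < tol) return(x)
  }
  x
}

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2), starting from x0 = 1
newton(function(x) x^2 - 2, function(x) 2 * x, 1)
```

Convergence is quadratic near a simple root, but the method can diverge if the starting guess is poor or `fprime` vanishes.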

Root Finding: Newton-Raphson Technique (part 1)
07:21

Root Finding: Newton-Raphson Technique (part 2)
07:35

Create Random Variables Exercise
01:01
### Inverse and General Transforms (10 lectures, 01:02:54)
Create Random Variables Exercise Solution (part 1)
05:07

Create Random Variables Exercise Solution (part 2)
07:59

Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, or the Smirnov transform) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function (cdf).

The basic idea is to uniformly sample a number u between 0 and 1, interpreted as a probability, and then return the largest number x from the domain of the distribution such that F(x) ≤ u, where F is the cdf. For example, suppose F is the cdf of the standard normal distribution (i.e., with mean 0 and standard deviation 1). If we choose u = 0.5, we would return 0, because 50% of the probability of a normal distribution lies in the region where x ≤ 0. Similarly, if we choose u = 0.975, we would return 1.95996...; if we choose u = 0.995, we would return 2.5758...; and for values of u ever closer to 1, the returned values move further into the tail (4.7534243..., 4.891638..., 8.1258906647..., 8.2095361516..., and so on). Essentially, we are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number far out in the tails because there is very little area in them: we'd have to pick u very close to 0 or 1.
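For distributions with a closed-form inverse cdf, this is a one-liner. A standard textbook case is the exponential distribution, where F(x) = 1 − e^(−λx) inverts to F⁻¹(u) = −log(1 − u)/λ (the variable names below are illustrative):

```r
set.seed(11)
lambda <- 2
u <- runif(10000)            # uniform draws on (0, 1)
x <- -log(1 - u) / lambda    # inverse cdf of Exponential(rate = lambda)

mean(x)                      # should be near the true mean 1/lambda = 0.5
```

This reproduces what `rexp(10000, rate = 2)` does internally, which is why inversion is called the "golden rule" of random variate generation.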

Inverse Transforms (part 1)
06:18

Inverse Transforms (part 2)
09:22

General Transformations (part 1)
05:23

General Transformations (part 2)
08:07

In mathematics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm" and is a type of Monte Carlo method. The method works for any distribution with a density function.

Rejection sampling is based on the observation that to sample a random variable one can sample uniformly from the region under the graph of its density function.
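A small sketch of the accept-reject idea, using a Beta(2, 2) target with a uniform proposal (the names `target` and `M` are illustrative, not the course's code):

```r
set.seed(13)
n <- 10000
target <- function(x) 6 * x * (1 - x)  # Beta(2, 2) density on [0, 1]
M <- 1.5                               # bound: target(x) <= M * 1 on [0, 1]

x <- runif(n)                          # proposals from Uniform(0, 1)
u <- runif(n)
accepted <- x[u <= target(x) / M]      # keep points lying under the density

length(accepted) / n                   # acceptance rate, near 1/M = 2/3
mean(accepted)                         # near the Beta(2, 2) mean of 0.5
```

Geometrically, `(x, u * M)` scatters points uniformly under the rectangle of height `M`; keeping those below the density curve is exactly "sampling uniformly from the region under the graph."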

Preview 06:52

Accept-Reject Method (part 2)
05:51

Accept-Reject Methods (part 3)
07:55

Random Variable (Poisson) Exercise 2
1 page
### Simulating Numerical Integration (12 lectures, 01:15:52)
Random Variable Exercise Solution (part 1)
06:27

Random Variable Exercise Solution (part 2)
06:46

In numerical analysis, numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral, and by extension, the term is also sometimes used to describe the numerical solution of differential equations.
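Simpson's rule, covered in the lectures below, is one such algorithm: it approximates a definite integral by fitting parabolas over pairs of subintervals. A minimal sketch, assuming an even number of subintervals (the function name `simpson` is illustrative):

```r
# Composite Simpson's rule on [a, b] with an even number of subintervals n
simpson <- function(f, a, b, n = 100) {
  stopifnot(n %% 2 == 0)
  x <- seq(a, b, length.out = n + 1)
  h <- (b - a) / n
  w <- c(1, rep(c(4, 2), length.out = n - 1), 1)  # weights 1, 4, 2, 4, ..., 4, 1
  h / 3 * sum(w * f(x))
}

simpson(sin, 0, pi)  # the exact value of the integral is 2
```

The error shrinks like h⁴, so doubling `n` improves the approximation by roughly a factor of 16.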

Introduction to Simulating Numerical Integration (part 1)
05:15

Introduction to Simulating Numerical Integration (part 2)
05:59

Simpson's Rule for Trapezoidal Approximation
08:24

Simulating Numerical Integration (part 1)
06:07

Preview 06:14

More on Simpson's Rule
06:10

Simpson's Rule with phi Functions
09:13

Phi Functions Exercises
01:22

Hit and Miss (part 1)
06:49

Hit and Miss (part 2)
07:06
### Permutation Tests (6 lectures, 47:06)
Phi Functions (Numerical Integration) Exercise Solution
11:25

In statistics, resampling is any of a variety of methods for doing one of the following:

1. Estimating the precision of sample statistics (medians, variances, percentiles) by using subsets of available data (jackknifing) or drawing randomly with replacement from a set of data points (bootstrapping)
2. Exchanging labels on data points when performing significance tests (permutation tests, also called exact tests, randomization tests, or re-randomization tests)
3. Validating models by using random subsets (bootstrapping, cross validation)

Common resampling techniques include bootstrapping, jackknifing and permutation tests.
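The label-exchanging idea in item 2 can be sketched with R's built-in `chickwts` dataset (which the lectures below use); the specific feed groups and permutation count chosen here are illustrative:

```r
set.seed(17)
data(chickwts)  # chick weights by feed type, from base R's datasets package
x <- chickwts$weight[chickwts$feed == "soybean"]
y <- chickwts$weight[chickwts$feed == "linseed"]
observed <- mean(x) - mean(y)  # observed difference in group means

# Under the null, group labels are exchangeable: pool, relabel, recompute
pooled <- c(x, y)
perm_diffs <- replicate(2000, {
  idx <- sample(length(pooled), length(x))  # random relabeling
  mean(pooled[idx]) - mean(pooled[-idx])
})

# Two-sided permutation p-value
p_value <- mean(abs(perm_diffs) >= abs(observed))
p_value
```

No distributional assumption is needed: the reference distribution is built entirely from the data's own relabelings.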

Permutation Tests on a Distribution: Chickwts Example (part 1)
07:50

Permutation Tests on a Distribution: Chickwts Example (part 2)
06:16

Permutation Tests on a Distribution: Chickwts Example (part 3)
07:30

Permutation Tests on a Distribution: Chickwts Example (part 4)
10:32

Finish Permutation Tests and an Exercise
03:33
### Simulation Case Studies: Seed Dispersal (9 lectures, 01:12:45)
Solution to Permutation Tests Exercises
06:07

Preview 07:23

Seed Dispersal Case: Creating Classes and Functions (part 1)
08:07

Seed Dispersal Case: Creating Classes and Functions (part 2)
07:54

Seed Dispersal Case (part 1)
07:11

Preview 06:10

Seed Dispersal Case (part 3)
07:41

Seed Dispersal Case (part 4)
10:22

Finish Seed Dispersal Case
11:50
1 More Section