R Programming for Simulation and Monte Carlo Methods focuses on using R software to program probabilistic simulations, often called Monte Carlo simulations. Typical simplified "real-world" examples include simulating the probability that a baseball player has a 'streak' of twenty sequential season games with 'hits at bat', or estimating the likely total number of taxicabs in an unfamiliar city from the sequence of numbered cabs observed passing a particular street corner over a 60-minute period. In addition to detailing half a dozen (sometimes amusing) 'real-world' extended example applications, the course also explains in detail how to use existing R functions, and how to write your own, to perform simulated inference estimates, including likelihoods and confidence intervals, and other cases of stochastic simulation. Techniques for using R to generate different characteristics of various families of random variables are explained in detail. The course teaches the skills to implement various approaches to simulating continuous and discrete random variable probability distribution functions, parameter estimation, Monte Carlo integration, and variance reduction techniques. The course partially utilizes the Comprehensive R Archive Network (CRAN) spuRs package to demonstrate how to structure and write programs that accomplish mathematical and probabilistic simulations using R statistical software.
Section 1: Review of Vectors, Matrices, Lists and Functions  

Lecture 1 
Course Introduction
Preview

01:39  
Lecture 2 
Install R and RStudio

00:45  
Lecture 3 
Review: Vectors, Matrices, Lists (part 1)

08:07  
Lecture 4 
Review: Vectors, Matrices, Lists (part 2)
Preview

06:34  
Lecture 5 
Sequences and Replications (part 1)

07:12  
Lecture 6 
Sequences and Replications (part 2)
Preview

05:56  
Lecture 7 
Sort and Order

04:45  
Lecture 8 
Creating a Matrix (part 1)

08:51  
Lecture 9 
Using Matrices (part 2)

03:19  
Lecture 10 
List Structures and Horsekicks (part 1)

09:43  
Lecture 11 
dpois() Function and Horsekicks (part 2)

09:56  
Lecture 12 
Sampling from a Dataframe

04:24  
Lecture 13 
Section 1 Exercises

02:25  
Section 2: Simulation Examples: Tossing a Coin  
Lecture 14 
R Expressions Exercises Answers (part 1)

07:36  
Lecture 15 
R Expressions Exercises Answers (part 2)

07:08  
Lecture 16 
Introduction to Simulation: A Game of Tossing a Coin (part 1)
Preview

07:13  
Lecture 17 
Introduction to Simulation: A Game of Tossing a Coin (part 2)

07:25  
Lecture 18 
Write a Simulation Function (part 1)

07:20  
Lecture 19 
Write a Simulation Function (part 2)

07:17  
Lecture 20 
Continue Coin Tossing Simulation (part 3)

06:16  
Lecture 21 
Continue Coin Tossing Simulation (part 4)

07:57  
Section 3: Simulation Examples: Returning Checked Hats  
Lecture 22 
Random Permutations: Hat Problem (part 1)

A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. Random permutations are often fundamental to fields that use randomized algorithms, such as coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: ideally, this is a random permutation of the 52 cards.

04:00  
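The course implements these simulations in R; as an illustration only, here is a minimal Python sketch of the same idea (the function name and trial counts are hypothetical): shuffle n "hats" into a random permutation and count how many are returned to their original owner.

```python
import random

def hat_matches(n, trials=10_000, seed=1):
    """Simulate n people whose checked hats are returned in random order;
    return the average number of people who get their own hat back."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hats = list(range(n))
        rng.shuffle(hats)  # a uniformly random permutation of the n hats
        total += sum(1 for i, h in enumerate(hats) if i == h)
    return total / trials

# A classic result: the expected number of matches is 1, regardless of n.
print(hat_matches(52))
```

The simulated average hovers near 1 whether there are 5 hats or 52, which is the surprising fact the hat-problem lectures explore.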

Lecture 23 
Random Permutations: Hat Problem (part 2)

06:57  
Lecture 24 
Random Permutations: Hat Problem (part 3)

07:46  
Lecture 25 
Random Permutations: Hat Problem (part 4)

07:00  
Lecture 26 
Random Permutations: Hat Problem (part 5)
Preview

04:50  
Lecture 27 
Random Permutations: Hat Problem (part 6)

06:34  
Lecture 28 
Checking Hats Exercise

02:15  
Section 4: Simulation Examples: Collecting Baseball Cards and "Streaky" Behavior  
Lecture 29 
Solution to Checking Hats Exercise

05:45  
Lecture 30 
Collecting Baseball Cards Simulation (part 1)

05:52  
Lecture 31 
Collecting Baseball Cards Simulation (part 2)
Preview

05:11  
Lecture 32 
Collecting Baseball Cards Simulation (part 3)

05:05  
Lecture 33 
Collecting Baseball Cards Simulation (part 4)

07:03  
Lecture 34 
Collecting Quarters Exercise

00:27  
Lecture 35 
Collecting State Quarters Exercise Solution

05:56  
Lecture 36 
"Streaky" Baseball Batting Behavior (part 1)

05:33  
Lecture 37 
"Streaky" Baseball Batting Behavior (part 2)

06:16  
Lecture 38 
"Streaky" Baseball Batting Behavior (part 3)

05:40  
Lecture 39 
"Streaky" Behavior Exercise

03:27  
Section 5: Monte Carlo Methods for Inference  
Lecture 40 
Solution to "Streaky" Behavior Exercise

08:53  
Lecture 41 

Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. They are often used for physical and mathematical problems, and are most useful when other mathematical methods are difficult or impossible to apply. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution.

05:34  
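To make "repeated random sampling" concrete (the course works in R; this Python sketch and its names are illustrative assumptions), here is the textbook example of estimating pi from the fraction of random points in the unit square that fall inside the quarter circle:

```python
import random

def estimate_pi(n=100_000, seed=42):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square with x^2 + y^2 <= 1 approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / n

print(estimate_pi())  # close to 3.14159, with error shrinking like 1/sqrt(n)
```

The same pattern of simulate, count, and average underlies every example in this section.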

Lecture 42 
Sleepless in Seattle (part 1)
Preview

07:14  
Lecture 43 
Sleepless in Seattle (part 2)

04:19  
Lecture 44 
Applying Monte Carlo Methods to Inference (part 1)

Statistical inference is the process of deducing properties of an underlying distribution through the analysis of data. Inferential statistical analysis infers properties of a population, including testing hypotheses and deriving estimates. The population is assumed to be larger than the observed data set; in other words, the observed data are assumed to be sampled from a larger population.

06:04  
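One way simulation supports inference is by generating the sampling distribution of a statistic directly. The course does this in R; the following Python sketch (all names and parameter values are hypothetical) draws many samples from a known population and records each sample mean:

```python
import random
import statistics

def sampling_distribution(pop_mean=10.0, pop_sd=2.0, n=25,
                          reps=2000, seed=3):
    """Simulate the sampling distribution of the sample mean by drawing
    many size-n samples from a known normal population."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(pop_mean, pop_sd) for _ in range(n))
             for _ in range(reps)]
    return statistics.fmean(means), statistics.stdev(means)

center, spread = sampling_distribution()
print(center)  # close to the population mean, 10
print(spread)  # close to the standard error, 2 / sqrt(25) = 0.4
```

The simulated spread matching the theoretical standard error is exactly the kind of check these lectures use to justify simulated inference estimates.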

Lecture 45 
Applying Monte Carlo Methods to Inference (part 2)

05:46  
Lecture 46 
Applying Monte Carlo Methods to Inference (part 3)

08:56  
Lecture 47 
Applying Monte Carlo Methods to Inference (part 4)

09:54  
Lecture 48 
Applying Monte Carlo Methods to Inference (part 5)

09:09  
Lecture 49 
Comparing Estimators: The Taxi Problem (part 1)

05:26  
Lecture 50 
Comparing Estimators: The Taxi Problem (part 2)

06:36  
Lecture 51 
Late to Class Again? Exercise

01:14  
Section 6: Stochastic Simulation and Random Variable Generation  
Lecture 52 
Late to Class Again Exercise Solution

11:20  
Lecture 53 

A stochastic simulation is a simulation that traces the evolution of variables that can change stochastically (randomly) with certain probabilities.

06:51  
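The simplest possible stochastic simulation is a random walk: a single variable that steps up or down at random. The course builds such simulations in R; this Python sketch (hypothetical names) traces one path:

```python
import random

def random_walk(steps=1000, p_up=0.5, seed=11):
    """Trace a variable that moves +1 with probability p_up and -1
    otherwise: a minimal stochastic simulation."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(steps):
        position += 1 if rng.random() < p_up else -1
        path.append(position)
    return path

print(random_walk()[-1])  # the final position after 1000 random steps
```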

Lecture 54 
Simulation and Random Variable Generation (part 1)

In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution is a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments whose sample space is encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

08:33  
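A probability mass function is easy to make concrete. The course uses R's dpois() (as in the earlier horse-kicks lectures); here is an illustrative Python equivalent, with the function name borrowed from R and the rate 0.61 being the approximate mean of the classic horse-kick data:

```python
import math

def dpois(x, lam):
    """Poisson probability mass function, analogous to R's dpois():
    P(X = x) = lam^x * exp(-lam) / x!"""
    return lam ** x * math.exp(-lam) / math.factorial(x)

# Probabilities of 0..4 events per period at rate lambda = 0.61
print([dpois(k, 0.61) for k in range(5)])

# A pmf must sum to 1 over all possible outcomes
print(sum(dpois(k, 0.61) for k in range(50)))
```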

Lecture 55 
Simulation and Random Variable Generation (part 2)

08:16  
Lecture 56 
Simulation and Random Variable Generation (part 3)

04:02  
Lecture 57 
Simulating Discrete Random Variables (part 1)
Preview

08:12  
Lecture 58 
Simulating Discrete Random Variables (part 2)

07:00  
Lecture 59 
Simulating Discrete Random Variables (part 3)

03:39  
Lecture 60 
Root Finding: Newton-Raphson Technique (part 1)

The idea of the Newton-Raphson method is as follows: one starts with an initial guess reasonably close to the true root; the function is then approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

07:21  
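The tangent-line step described above is the update x_new = x - f(x) / f'(x). The course codes this in R; a minimal Python sketch (hypothetical names and tolerances):

```python
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Find a root of f by repeatedly replacing the current guess with
    the x-intercept of the tangent line: x_new = x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of f(x) = x^2 - 2, i.e. sqrt(2), starting from the guess x0 = 1
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```

Each iteration roughly doubles the number of correct digits when the guess is close enough to the root.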

Lecture 61 
Root Finding: Newton-Raphson Technique (part 2)

07:35  
Lecture 62 
Create Random Variables Exercise

01:01  
Section 7: Inverse and General Transforms  
Lecture 63 
Create Random Variables Exercise Solution (part 1)

05:07  
Lecture 64 
Create Random Variables Exercise Solution (part 2)

07:59  
Lecture 65 
Inverse Transforms (part 1)

Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, the Smirnov transform, or the golden rule) is a basic method for pseudo-random number sampling, i.e. for generating sample numbers at random from any probability distribution given its cumulative distribution function (CDF) F. The basic idea is to sample a number u uniformly between 0 and 1, interpreted as a probability, and then return the largest number x from the domain of the distribution such that F(x) <= u. For example, suppose F is the CDF of the standard normal distribution (mean 0, standard deviation 1). If we choose u = 0.5, we return 0, because 50% of the probability of a normal distribution occurs in the region where x <= 0. Similarly, choosing u = 0.975 returns 1.95996..., and choosing u = 0.995 returns 2.5758.... Essentially, we are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to return a number far out in the tails, because there is very little area there: we would have to draw a u very close to 0 or 1.

06:18  
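The method is most transparent when the CDF inverts in closed form. For the exponential distribution, F(x) = 1 - exp(-rate * x), so solving F(x) = u gives x = -log(1 - u) / rate. The course does this in R; an illustrative Python sketch (hypothetical names, modeled on R's rexp):

```python
import math
import random

def rexp_inverse(n, rate=1.0, seed=5):
    """Sample from Exponential(rate) by inverse transform sampling:
    draw u ~ Uniform(0, 1) and return x = -log(1 - u) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

draws = rexp_inverse(100_000, rate=2.0)
print(sum(draws) / len(draws))  # close to the true mean, 1 / rate = 0.5
```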

Lecture 66 
Inverse Transforms (part 2)

09:22  
Lecture 67 
General Transformations (part 1)

05:23  
Lecture 68 
General Transformations (part 2)

08:07  
Lecture 69 
Accept-Reject Method (part 1)

In mathematics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm", and it is a type of Monte Carlo method. The method works for any distribution with a density. Rejection sampling is based on the observation that to sample a random variable, one can sample uniformly from the region under the graph of its density function.

06:52  
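A small worked case makes the accept-reject idea concrete. To sample from the density f(x) = 2x on [0, 1], propose x uniformly and accept it with probability f(x) / M, where M = 2 bounds the density. The course implements this in R; this Python sketch (hypothetical names and target density) is illustrative:

```python
import random

def accept_reject(n, seed=9):
    """Sample from the density f(x) = 2x on [0, 1] by rejection:
    propose x ~ Uniform(0, 1), accept with probability f(x) / M, M = 2."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()
        if rng.random() <= (2 * x) / 2:  # accept with probability x
            out.append(x)
    return out

samples = accept_reject(50_000)
print(sum(samples) / len(samples))  # close to the true mean, 2/3
```

On average, half of the proposals are rejected here (the acceptance rate is 1/M), which is the efficiency trade-off the later lectures discuss.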

Lecture 70 
Accept-Reject Method (part 2)

05:51  
Lecture 71 
Accept-Reject Methods (part 3)

07:55  
Lecture 72 
Random Variable (Poisson) Exercise 2

1 page  
Section 8: Simulating Numerical Integration  
Lecture 73 
Random Variable Exercise Solution (part 1)

06:27  
Lecture 74 
Random Variable Exercise Solution (part 2)

06:46  
Lecture 75 
Introduction to Simulating Numerical Integration (part 1)

In numerical analysis, numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral; by extension, the term is also sometimes used to describe the numerical solution of differential equations.

05:15  
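Simpson's rule, which several of the lectures below cover, fits a quadratic through each pair of subintervals. The course writes this in R; an illustrative Python sketch (hypothetical names):

```python
import math

def simpson(f, a, b, n=100):
    """Simpson's rule: approximate the integral of f on [a, b] using
    n (even) subintervals, weighting interior points 4, 2, 4, 2, ..."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The integral of sin over [0, pi] is exactly 2
print(simpson(math.sin, 0, math.pi))
```

Comparing this deterministic rule against a Monte Carlo estimate of the same integral is the central exercise of this section.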

Lecture 76 
Introduction to Simulating Numerical Integration (part 2)

05:59  
Lecture 77 
Simpson's Rule for Trapezoidal Approximation

08:24  
Lecture 78 
Simulating Numerical Integration (part 1)

06:07  
Lecture 79 
Simulating Numerical Integration (part 2)
Preview

06:14  
Lecture 80 
More on Simpson's Rule

06:10  
Lecture 81 
Simpson's Rule with phi Functions

09:13  
Lecture 82 
Phi Functions Exercises

01:22  
Lecture 83 
Hit and Miss (part 1)

06:49  
Lecture 84 
Hit and Miss (part 2)

07:06  
Section 9: Permutation Tests  
Lecture 85 
Phi Functions (Numerical Integration) Exercise Solution

11:25  
Lecture 86 
Permutation Tests on a Distribution: chickwts Example (part 1)

In statistics, resampling refers to a variety of methods for estimating the precision of sample statistics using subsets of the available data, for performing significance tests by exchanging labels on data points, or for validating models using random subsets. Common resampling techniques include bootstrapping, jackknifing, and permutation tests.

07:50  
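A two-sample permutation test follows directly from the label-exchanging idea: pool the data, shuffle repeatedly, and see how often a random split produces a difference at least as extreme as the observed one. The course runs this on R's chickwts data; this Python sketch (hypothetical names and toy data) shows the mechanics:

```python
import random
import statistics

def permutation_test(x, y, reps=5000, seed=21):
    """Two-sample permutation test of the difference in means: shuffle
    the pooled data and count how often a random split is at least as
    extreme as the observed difference."""
    rng = random.Random(seed)
    observed = abs(statistics.fmean(x) - statistics.fmean(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:len(x)]) -
                   statistics.fmean(pooled[len(x):]))
        if diff >= observed:
            count += 1
    return count / reps

# Two small toy samples with clearly different means
print(permutation_test([10, 11, 12, 13], [20, 21, 22, 23]))
```

A small returned proportion (the permutation p-value) means a difference this large rarely arises from shuffling alone.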

Lecture 87 
Permutation Tests on a Distribution: chickwts Example (part 2)

06:16  
Lecture 88 
Permutation Tests on a Distribution: chickwts Example (part 3)

07:30  
Lecture 89 
Permutation Tests on a Distribution: chickwts Example (part 4)

10:32  
Lecture 90 
Finish Permutation Tests and an Exercise

03:33  
Section 10: Simulation Case Studies: Seed Dispersal 
Dr. Geoffrey Hubona held full-time tenure-track, and tenured, assistant and associate professor faculty positions at three major state universities in the Eastern United States from 1993 to 2010. In these positions, he taught dozens of statistics, business information systems, and computer science courses to undergraduate, master's, and Ph.D. students. He earned a Ph.D. in Business Administration (Information Systems and Computer Science) from the University of South Florida (USF) in Tampa, FL (1993); an MA in Economics (1990), also from USF; an MBA in Finance (1979) from George Mason University in Fairfax, VA; and a BA in Psychology (1972) from the University of Virginia in Charlottesville, VA. He was a full-time assistant professor at the University of Maryland Baltimore County (1993-1996) in Catonsville, MD; a tenured associate professor in the Department of Information Systems in the Business College at Virginia Commonwealth University (1996-2001) in Richmond, VA; and an associate professor in the CIS department of the Robinson College of Business at Georgia State University (2001-2010).

He is the founder of the Georgia R School (2010-2014) and of RCourseware (2014-Present), online educational organizations that teach research methods and quantitative analysis techniques. These techniques include linear and nonlinear modeling, multivariate methods, data mining, programming and simulation, and structural equation modeling and partial least squares (PLS) path modeling. Dr. Hubona is an expert in the analytical, open-source R software suite and in various PLS path modeling software packages, including SmartPLS. He has published dozens of research articles that explain and use these techniques for the analysis of data and, with software co-development partner Dean Lim, has created a popular cloud-based PLS software application, PLSGUI.