Find online courses made by experts from around the world.
Take your courses with you and learn anywhere, anytime.
Learn and practice real-world skills and achieve your goals.
Bayesian Computational Analyses with R is an introductory course on the use and implementation of Bayesian modeling using R software. The Bayesian approach is an alternative to the "frequentist" approach, in which one simply takes a sample of data and makes inferences about the likely parameters of the population. In contrast, the Bayesian approach combines a prior distribution for the population parameters (the 'prior') with the likelihood of a sample of observed data to estimate the most likely values and distributions of those parameters (the 'posterior'). The course is useful to anyone who wishes to learn about Bayesian concepts and is suited to novice and intermediate Bayesian students and practitioners alike. It is both a practical, "hands-on" course, with many examples using R scripts and software, and a conceptual one, as the course explains the underlying Bayesian concepts. All materials, software, R scripts, slides, exercises and solutions are included with the course materials. It is helpful to have some grounding in basic inferential statistics and probability theory. No experience with R is necessary, although it is helpful.
The course begins with an introductory section (12 video lessons) on using R and R 'scripting.' The introductory section presents RStudio and R commands so that even a novice R user will be comfortable using R. Section 2 introduces Bayes' rule, with examples of both discrete and beta priors, predictive priors, and beta posteriors in Bayesian estimation. Section 3 explains and demonstrates the use of Bayesian estimation for single-parameter models, for example, when one wishes to estimate the most likely value of a mean OR of a standard deviation (but not both). Section 4 explains and demonstrates the use of "conjugate mixtures." These are single-parameter models where the functional forms of the prior and posterior are similar (for example, both normally distributed). But 'mixtures' imply there may be more than one component for the prior or posterior density functions. Mixtures enable the simultaneous test of competing, alternative theories as to which is more likely. Section 5 deals with multi-parameter Bayesian models, where one is estimating the likelihood of more than one posterior variable value, for example, both mean AND standard deviation. Section 6 extends the Bayesian discussion by examining the computation of integrals to estimate a probability. Section 7 covers the application of the Bayesian approach to rejection and importance sampling, and Section 8 looks at examples of comparing and validating Bayesian models.
Not for you? No problem.
30-day money-back guarantee.
Forever yours.
Lifetime access.
Learn on the go.
Desktop, iOS and Android.
Get rewarded.
Certificate of completion.
Section 1: Introduction to Bayesian Course and to R Software  

Lecture 1 
Introduction to Bayesian Computational Analyses with R
Preview

02:00  
Lecture 2 
Introduction to Course Materials
Preview

02:19  
Lecture 3  09:54  
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS. 

Lecture 4 
Introduction to R Software (slides, part 2)
Preview

08:43  
Lecture 5 
Introduction to R Software (slides, part 3)

12:16  
Lecture 6  07:25  
An R script is simply a text file containing the same commands that you would enter on the command line of R. 
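As a minimal sketch, a hypothetical script file (the file name and contents are invented for illustration) might look like this:

```r
# contents of a hypothetical file, e.g. hello.R
x <- c(2, 4, 6, 8)   # create a numeric vector
m <- mean(x)         # compute its mean
print(m)             # prints 5
```

The same lines could be typed at the R prompt one by one; saved as a file, they can be run all at once with source("hello.R") inside R, or with Rscript hello.R from the command line.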

Lecture 7 
Introduction to R Software with Scripts (part 2)

09:52  
Lecture 8 
Introduction to R Software with Scripts (part 3)

11:50  
Lecture 9 
Introduction to R Software with Scripts (part 4)

08:38  
Lecture 10 
Introduction to R Software with Scripts (part 5)

07:40  
Lecture 11  10:13  
Monte Carlo simulation performs risk analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values drawn from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation may involve thousands or tens of thousands of recalculations before it is complete. In this way, Monte Carlo simulation produces distributions of possible outcome values.
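A small sketch of this idea in R; the scenario (uncertain unit cost times uncertain quantity) and all numbers are invented for illustration:

```r
# Monte Carlo sketch: distribution of a total cost when unit cost and
# quantity are both uncertain (all distributions here are assumptions)
set.seed(42)
n <- 10000
unit_cost <- rnorm(n, mean = 20, sd = 3)    # uncertain unit cost
quantity  <- runif(n, min = 80, max = 120)  # uncertain quantity
total     <- unit_cost * quantity           # one result per simulated draw

mean(total)                     # expected total cost, near 20 * 100 = 2000
quantile(total, c(0.05, 0.95))  # a 90% interval of possible outcomes
```

Instead of a single point estimate, the simulation yields the full distribution of total cost, from which any summary (mean, quantiles, tail risk) can be read off.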

Lecture 12 
Section 1 R Scripting Exercises

00:22  
Section 2: Introduction to Bayesian Thinking  
Lecture 13 
More on the Course and Materials
Preview

02:19  
Lecture 14 
Session 1 R Scripting Exercise Solutions
Preview

12:32  
Lecture 15 
Background on Probability Density Functions (PDFs)

08:45  
Lecture 16 
Normal dnorm() Functions (part 1)

04:15  
Lecture 17 
Normal dnorm() Functions (part 2)

05:08  
Lecture 18  10:18  
The pnorm( ) function is the cumulative distribution function (CDF) of the normal distribution. It returns the area under the density curve to the left of some point "x" along the horizontal axis, for example, "x = 1." 
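A quick illustration with the standard normal distribution:

```r
# pnorm() returns the area under the standard normal curve left of x
pnorm(1)              # P(Z <= 1), about 0.841
pnorm(0)              # exactly 0.5: half the area lies left of the mean
pnorm(1) - pnorm(-1)  # about 0.683, the familiar "68% within one sd"
```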

Lecture 19  11:36  
In probability theory and applications, Bayes' rule relates the odds of an event A1 to the odds of an event A2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A1 to A2 is simply the ratio of the probabilities of the two events. The prior odds are the ratio of the unconditional, or prior, probabilities; the posterior odds are the ratio of the conditional, or posterior, probabilities given the event B. The relationship is expressed in terms of the likelihood ratio, or Bayes factor. By definition, this is the ratio of the conditional probabilities of the event B given that A1 is the case or that A2 is the case, respectively. The rule simply states: posterior odds equals prior odds times Bayes factor. 
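The odds form of the rule can be computed directly in R. The scenario below (a diagnostic test, with invented sensitivity and false-positive numbers) is purely illustrative:

```r
# Hypothetical illustration: A1 = "disease", A2 = "no disease",
# B = "positive test"; all probabilities are invented for the example
p_A1 <- 0.01; p_A2 <- 0.99    # prior probabilities of A1 and A2
p_B_given_A1 <- 0.95          # assumed test sensitivity
p_B_given_A2 <- 0.05          # assumed false-positive rate

prior_odds     <- p_A1 / p_A2                  # odds of A1 to A2 before B
bayes_factor   <- p_B_given_A1 / p_B_given_A2  # likelihood ratio for B
posterior_odds <- prior_odds * bayes_factor    # the rule itself
posterior_odds / (1 + posterior_odds)          # back to P(A1 | B), about 0.16
```

The last line converts odds back to a probability, matching what a direct application of Bayes' theorem would give.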

Lecture 20  06:58  
In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics. In informal contexts, "likelihood" is often used as a synonym for "probability." But in statistical usage, a distinction is made depending on the roles of the outcome or parameter. Probability is used when describing a function of the outcome given a fixed parameter value. For example, if a coin is flipped 10 times and it is a fair coin, what is the probability of it landing heads up every time? Likelihood is used when describing a function of a parameter given an outcome. For example, if a coin is flipped 10 times and it has landed heads up 10 times, what is the likelihood that the coin is fair? 
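The coin example translates directly into R:

```r
# Probability: fixed parameter (a fair coin), varying outcome
dbinom(10, size = 10, prob = 0.5)   # P(10 heads in 10 flips) = 0.5^10

# Likelihood: fixed outcome (10 heads observed), varying parameter
p   <- seq(0, 1, by = 0.01)             # candidate values of P(heads)
lik <- dbinom(10, size = 10, prob = p)  # likelihood of each candidate
p[which.max(lik)]                       # maximized at p = 1, not p = 0.5
```

The same dbinom() call serves both roles; what changes is which argument is held fixed and which is varied.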

Lecture 21  08:31  
Discrete priors are in contrast to continuous priors. A discrete prior refers to a set of whole numbers describing the frequency of outcomes of some event, for example, the number of consecutive tosses of "heads" in a series of tests, or samples, of the likelihood of this event. Since cumulative distribution functions are continuous, one needs to apply 'adjusting functions' to discrete priors to produce continuous posterior distributions. 
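A small sketch of Bayesian updating with a discrete prior; the candidate values and data are invented for illustration:

```r
# Discrete prior over a handful of candidate values for P(heads)
p     <- c(0.3, 0.4, 0.5, 0.6, 0.7)        # candidate parameter values
prior <- rep(1/5, 5)                       # uniform discrete prior
like  <- dbinom(7, size = 10, prob = p)    # data: 7 heads in 10 tosses
post  <- prior * like / sum(prior * like)  # normalized posterior
round(post, 3)                             # most weight lands on p = 0.7
```

Because the prior is a finite table, the posterior is obtained by a simple "multiply and normalize" step with no integration required.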

Lecture 22 
Using Discrete Priors (part 2)

06:34  
Lecture 23  05:12  
Beta priors may be used to provide a continuous prior distribution for discrete, event-based occurrences, such as with the use of a binomial distribution to estimate the number of "success" and "failure" outcomes in the toss of a coin. 
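A short illustration of a beta prior for a coin's probability of heads; the Beta(3, 3) shape and the data are invented for the example:

```r
# A Beta(3, 3) prior for P(heads): continuous and symmetric around 0.5
dbeta(0.5, 3, 3)    # prior density is highest at 0.5 (equals 1.875)

# With y heads in n tosses, conjugacy gives the posterior Beta(3 + y, 3 + n - y)
y <- 7; n <- 10
qbeta(c(0.025, 0.975), 3 + y, 3 + n - y)  # 95% credible interval for P(heads)
```

The beta family is convenient here because, combined with binomial data, the posterior is again a beta distribution with simply updated parameters.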

Lecture 24 
Using a Beta Prior (part 2)

07:10  
Lecture 25 
Using a Beta Prior (part 3)

05:05  
Lecture 26 
Simulating Beta Posteriors

04:38  
Lecture 27 
Brute Force Posterior Simulation using Histogram Prior

08:43  
Lecture 28  06:24  
The prior predictive distribution, in a Bayesian context, is the distribution of a data point marginalized over its prior distribution. 
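For a binomial likelihood with a beta prior, this marginalization has a closed form, which can be sketched in base R (the function name prior_pred is invented for the example):

```r
# Prior predictive P(y) of y heads in n tosses under a Beta(a, b) prior,
# obtained by integrating the binomial likelihood against the prior:
# P(y) = choose(n, y) * B(a + y, b + n - y) / B(a, b)
prior_pred <- function(y, n, a, b) {
  choose(n, y) * beta(a + y, b + n - y) / beta(a, b)
}

round(prior_pred(0:5, n = 5, a = 1, b = 1), 3)  # flat prior: all equal to 1/6
```

Under the flat Beta(1, 1) prior, every count of heads from 0 to 5 is equally likely before any data are seen, which is a useful sanity check on the formula.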

Lecture 29 
Predictive Priors (scripts, part 1)

07:23  
Lecture 30 
Predictive Priors (scripts, part 2)

07:19  
Lecture 31 
Section 2 Exercises

02:20  
Section 3: Single Parameter Bayesian Models  
Lecture 32 
Section 2 Exercise Solution

10:58  
Lecture 33  11:17  
The use of single-parameter models may be exemplified when one is trying to estimate the most likely mean parameter values, or the most likely standard deviation parameter values, but not both (that would be a multi-parameter model). 

Lecture 34 
Single Parameter Models

10:59  
Lecture 35 
Heart Transplant Mortality Rate (part 1)
Preview

12:25  
Lecture 36 
Heart Transplant Mortality Rate (part 2)

10:39  
Lecture 37 
Test of Bayesian Robustness (part 1)

10:23  
Lecture 38 
Test of Bayesian Robustness (part 2)

11:02  
Lecture 39 
Exercise: How Many Taxis?

03:36  
Section 4: Conjugate Mixtures  
Lecture 40 
Exercise Solution: How Many Taxis?

06:15  
Lecture 41  10:37  
"Conjugate" models in the Bayesian approach simply mean that the functional form of the density function for both the prior distribution and the posterior distribution are similar, for example, both normally distributed. However "mixtures" refers to Bayesian models where there may be two different, and competing, components to the prior distribution, and one seeks an estimate of which of the two components is more likely, or more tenable. 

Lecture 42 
Conjugate Mixtures (part 2)

09:19  
Lecture 43 
A Bayesian Test of the Fairness of a Coin (part 1)
Preview

07:42  
Lecture 44 
A Bayesian Test of the Fairness of a Coin (part 2)

10:06  
Lecture 45 
More on the Fairness of a Coin (part 3)

11:30  
Lecture 46  13:25  
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value. For example, the PDF for a normally distributed random variable takes the shape of the familiar "bell curve." 
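In R the normal PDF is dnorm(); a density gives relative likelihood, not probability, and the total area under it is 1:

```r
# dnorm() is the PDF of the normal distribution
dnorm(0)                           # peak of the standard bell curve, about 0.399
integrate(dnorm, -Inf, Inf)$value  # total area under the curve: 1
```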

Lecture 47 
Intro to PDFs (part 2)

06:54  
Lecture 48 
Intro to PDFs (part 3)

07:14  
Section 5: Multi-Parameter Bayesian Models  
Lecture 49 
Mortality Rate Exercise Solution (part 1)

08:06  
Lecture 50 
Mortality Rate Exercise Solution (part 2)

07:53  
Lecture 51  11:03  
In the Bayesian approach, multi-parameter models are models in which one is attempting to estimate the probability density functions for more than one parameter, for example, both the mean and standard deviation of the target posterior parameters. 

Lecture 52 
Normal Multiparameter Models (part 2)

08:20  
Lecture 53 
Normal Multiparameter Models (part 3)

07:58  
Lecture 54  10:22  
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For n independent trials, each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories. The binomial distribution is the probability distribution of the number of successes for one of just two categories in n independent Bernoulli trials, with the same probability of success on each trial. In a multinomial distribution, the analog of the Bernoulli distribution is the categorical distribution, where each trial results in exactly one of some fixed finite number k of possible outcomes, with probabilities p_1, ..., p_k. 
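A quick illustration with R's dmultinom(); the die-rolling numbers are invented for the example:

```r
# A die rolled 12 times: k = 6 categories, each with probability 1/6.
# dmultinom() gives the probability of one particular combination of counts.
counts <- c(2, 2, 2, 2, 2, 2)           # two of each face
dmultinom(counts, prob = rep(1/6, 6))   # about 0.0034

# The binomial is the k = 2 special case:
dmultinom(c(7, 3), prob = c(0.5, 0.5))  # equals dbinom(7, 10, 0.5)
```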

Lecture 55 
Multinomial Multiparameter Models (part 2)

11:34  
Lecture 56 
Bioassay Experiment (part 1)

10:32  
Lecture 57 
Bioassay Experiment (part 2)

10:47  
Lecture 58 
Exercise: Comparing Two Proportions

03:10  
Section 6: Bayesian Computation  
Lecture 59 
Exercise Solution: Comparing Two Proportions (part 1)

05:12  
Lecture 60 
Exercise Solution: Comparing Two Proportions (part 2)

11:22  
Lecture 61 
Introduction to Bayesian Computation Section

06:08  
Lecture 62  11:21  
An integral is a mathematical object that can be interpreted as an area or a generalization of area. For example, to calculate the area under the "curve" of a continuous function f(x) up to some point "x" along the horizontal axis, one computes the integral of f(x) up to "x." Computing integrals is useful for finding probabilities that are represented as areas under a continuous function "curve" or plot. 
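R's integrate() function computes such areas numerically; here, the probability that a standard normal variable falls between 0 and 1:

```r
# P(0 <= Z <= 1) for a standard normal, by numerical integration
integrate(dnorm, lower = 0, upper = 1)$value  # about 0.3413
pnorm(1) - pnorm(0)                           # same area via the CDF
```

Both lines describe the same shaded region under the bell curve; integrate() is the general-purpose route when no closed-form CDF is available.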

Lecture 63 
Computing Integrals to Estimate a Probability (part 2)

10:20  
Lecture 64  10:57  
In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers, arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. The beta-binomial distribution is the binomial distribution in which the probability of success at each trial is not fixed but random and follows the beta distribution. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics as an overdispersed binomial distribution. 
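The overdispersion is easy to see by simulation in base R; the Beta(2, 2) mixing distribution and trial count below are invented for the example:

```r
# Simulate beta-binomial draws: the success probability is itself random
set.seed(1)
n_draws <- 10000
trials  <- 20
p <- rbeta(n_draws, 2, 2)                 # each draw gets its own success prob
y <- rbinom(n_draws, size = trials, prob = p)

var(y)                                    # roughly 24: overdispersed...
trials * 0.5 * (1 - 0.5)                  # ...versus 5 for a plain binomial
```

A plain binomial with fixed p = 0.5 would have variance 5; letting p vary according to the beta distribution inflates the variance well beyond that.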

Lecture 65 
A Beta-Binomial Model of Overdispersion (part 2)

10:50  
Lecture 66 
Exercise: Inference About a Normal Population

02:55  
Section 7: Rejection and Importance Sampling  
Lecture 67 
Exercise Solution: Inference about a Normal Population

09:39  
Lecture 68  10:06  
In mathematics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm" and is a type of Monte Carlo method. The method works for any distribution with a density. 
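A minimal sketch of the accept-reject idea in base R; the target Beta(2, 5) density and uniform proposal are invented for the example:

```r
# Rejection sampling sketch: draw from Beta(2, 5) using uniform proposals
set.seed(7)
target <- function(x) dbeta(x, 2, 5)
M <- max(target(seq(0, 1, by = 0.001)))  # envelope constant so M * 1 >= target

n <- 10000
x <- runif(n)                            # proposals from Uniform(0, 1)
u <- runif(n)
keep <- x[u < target(x) / M]             # accept with probability target(x) / M

mean(keep)                               # near the Beta(2, 5) mean, 2/7
```

Accepted draws are distributed according to the target density; the price is that a fraction of proposals (those above the target curve, after scaling by M) is thrown away.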

Lecture 69 
Rejection Sampling (part 2)

10:01  
Lecture 70 
Rejection Sampling (part 3)

06:41  
Lecture 71 
Rejection Sampling (part 4)

06:54  
Lecture 72 
Rejection Sampling (part 5)

10:40  
Lecture 73 
Rejection Sampling (part 6)

09:45  
Lecture 74  08:25  
In statistics, importance sampling is a general technique for estimating properties of a particular distribution while having only samples generated from a distribution different from the distribution of interest. It is related to umbrella sampling in computational physics. 
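A minimal sketch in base R; as with the rejection-sampling example, the Beta(2, 5) target and uniform proposal are invented for illustration:

```r
# Importance sampling sketch: estimate E[X] for X ~ Beta(2, 5) using
# samples drawn from Uniform(0, 1) instead of the target itself
set.seed(7)
n <- 10000
x <- runif(n)                   # samples from the "wrong" distribution
w <- dbeta(x, 2, 5) / dunif(x)  # importance weights: target / proposal

sum(w * x) / sum(w)             # weighted estimate of the mean, near 2/7
```

Unlike rejection sampling, no draws are discarded; instead, each sample is reweighted by how much more (or less) likely it is under the target than under the proposal.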

Section 8: Comparing Bayesian Models  
Lecture 75 
One-Sided Test of a Normal Mean (part 1)

09:51  
Lecture 76 
One-Sided Test of a Normal Mean (part 2)
Preview

09:40  
Lecture 77 
One-Sided Test of a Normal Mean (part 3)

07:43  
Lecture 78 
Two-Sided Test of a Normal Mean

11:32  
Lecture 79 
Streaky Behavior (part 1)

12:34  
Lecture 80 
Streaky Behavior (part 2)

10:25  
Lecture 81 
Streaky Behavior (part 3)

09:31  
Lecture 82 
Streaky Behavior (part 4)

08:43 
Dr. Geoffrey Hubona held full-time tenure-track, and tenured, assistant and associate professor faculty positions at 3 major state universities in the Eastern United States from 1993 to 2010. In these positions, he taught dozens of various statistics, business information systems, and computer science courses to undergraduate, master's and Ph.D. students. He earned a Ph.D. in Business Administration (Information Systems and Computer Science) from the University of South Florida (USF) in Tampa, FL (1993); an MA in Economics (1990), also from USF; an MBA in Finance (1979) from George Mason University in Fairfax, VA; and a BA in Psychology (1972) from the University of Virginia in Charlottesville, VA. He was a full-time assistant professor at the University of Maryland Baltimore County (1993-1996) in Catonsville, MD; a tenured associate professor in the department of Information Systems in the Business College at Virginia Commonwealth University (1996-2001) in Richmond, VA; and an associate professor in the CIS department of the Robinson College of Business at Georgia State University (2001-2010). He is the founder of the Georgia R School (2010-2014) and of R-Courseware (2014-present), online educational organizations that teach research methods and quantitative analysis techniques. These research methods techniques include linear and non-linear modeling, multivariate methods, data mining, programming and simulation, and structural equation modeling and partial least squares (PLS) path modeling. Dr. Hubona is an expert in the analytical, open-source R software suite and in various PLS path modeling software packages, including SmartPLS. He has published dozens of research articles that explain and use these techniques for the analysis of data and, with software co-development partner Dean Lim, has created a popular cloud-based PLS software application, PLS-GUI.