
Bayesian Computational Analyses with R


Learn the concepts and practical side of using the Bayesian approach to estimate likely event outcomes.

1,538 students enrolled

Current price: $10
Original price: $60
Discount: 83% off

30-Day Money-Back Guarantee

- 11.5 hours on-demand video
- 14 Supplemental Resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion

What Will I Learn?

- Understand Bayesian concepts, and gain a great deal of practical "hands-on" experience creating and estimating Bayesian models using R software.
- Effectively use the Bayesian approach to estimate likely event outcomes, or probabilities, using your own data.
- Be able to apply a range of Bayesian functions using R software in order to model and estimate single parameter, multi-parameter, conjugate mixture, multinomial, and rejection and importance sampling Bayesian models.
- Understand and use both predictive priors and predictive posteriors in Bayesian applications.
- Be able to compare and evaluate alternative, competing Bayesian models.

Requirements

- Students will need to install R and RStudio software, but ample instruction for doing so is provided in the course materials.

Description

**Bayesian Computational Analyses with R** is an introductory course on the use and implementation of Bayesian modeling using R software. The Bayesian approach is an alternative to the "frequentist" approach, in which one simply takes a sample of data and makes inferences about the likely parameters of the population. In contrast, the Bayesian approach combines a prior distribution (the 'prior', which expresses beliefs about the parameters before seeing the data) with the likelihood of an observed sample of data to estimate the most likely values and distributions for the population parameters (the 'posterior'). The course is useful to anyone who wishes to learn about Bayesian concepts and is suited to both novice and intermediate Bayesian students and Bayesian practitioners. It is both a practical, "hands-on" course with many examples using R scripts and software, and a conceptual one, as the course explains the underlying Bayesian ideas. All materials, software, R scripts, slides, exercises and solutions are included with the course materials. It is helpful to have some grounding in basic inferential statistics and probability theory. No experience with R is necessary, although any prior exposure is also helpful.

The course begins with an introductory section (12 video lessons) on using R and R 'scripting.' The introductory section is intended to introduce RStudio and R commands so that even a novice R user will be comfortable using R. Section 2 introduces Bayes' Rule, with examples of both discrete and beta priors, predictive priors, and beta posteriors in Bayesian estimation. Section 3 explains and demonstrates the use of Bayesian estimation for single-parameter models, for example, when one wishes to estimate the most likely value of a mean OR of a standard deviation (but not both). Section 4 explains and demonstrates the use of "conjugate mixtures." These are single-parameter models where the functional forms of the prior and posterior are similar (for example, both normally distributed), but 'mixtures' imply there may be more than one component in the prior or posterior density functions. Mixtures enable the simultaneous test of competing, alternative theories as to which is more likely. Section 5 deals with multi-parameter Bayesian models, where one estimates the likelihood of more than one posterior parameter value, for example, both mean AND standard deviation. Section 6 extends the Bayesian discussion by examining the computation of integrals to estimate probabilities. Section 7 covers the application of the Bayesian approach to rejection and importance sampling, and Section 8 looks at examples of comparing and validating Bayesian models.

Who is the target audience?

- The course is ideal for anyone interested in learning both the conceptual and practical side of using Bayes' Rule to model likely event outcomes.
- The course is best suited for both students and professionals who currently make use of quantitative or probabilistic modeling.
- It is useful to have a working knowledge of either basic inferential statistics or probability theory.
- It is NOT necessary to have prior experience using R software to successfully complete and to benefit from this course.


Curriculum For This Course

82 Lectures
11:37:43


Introduction to Bayesian Course and to R Software
12 Lectures
01:31:12

R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.

Preview
09:54

Introduction to R Software (slides, part 3)

12:16

An **R** script is simply a text file containing the same commands that you would enter on the command line of **R**.

Preview
07:25

Introduction to R Software with Scripts (part 2)

09:52

Introduction to R Software with Scripts (part 3)

11:50

Introduction to R Software with Scripts (part 4)

08:38

Introduction to R Software with Scripts (part 5)

07:40

Monte Carlo simulation performs risk analysis by building models of possible results by substituting a range of values—a *probability distribution*—for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values.
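As an illustration of the idea (the model and all numbers below are hypothetical, not taken from the lecture), a Monte Carlo simulation in R amounts to drawing random inputs and summarizing the distribution of outputs:

```r
set.seed(42)  # make the random draws reproducible

n <- 10000                                  # number of simulated scenarios
# Hypothetical model: profit = price * demand - cost, with uncertain inputs
price  <- rnorm(n, mean = 10, sd = 1)       # uncertain unit price
demand <- rpois(n, lambda = 500)            # uncertain demand
cost   <- runif(n, min = 3000, max = 4000)  # uncertain fixed cost

profit <- price * demand - cost             # one recalculation per scenario

# The simulation yields a whole distribution of outcomes, not a point value
summary(profit)
quantile(profit, c(0.05, 0.95))             # a 90% range of likely outcomes
```

Each of the 10,000 rows is one "recalculation" with a fresh set of random input values, exactly as described above.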

Programming a Monte Carlo Simulation

10:13

Section 1 R Scripting Exercises

00:22


Introduction to Bayesian Thinking
19 Lectures
02:11:10

Background on Probability Density Functions (PDFs)

08:45

Normal dnorm() Functions (part 1)

04:15

Normal dnorm() Functions (part 2)

05:08

The pnorm() function is the normal cumulative distribution function (CDF). It returns the area under the density curve to the left of some point "x" along the horizontal axis, for example, "x = 1."
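For instance, in base R (values shown are standard normal results):

```r
# Area under the standard normal density to the left of x = 1
pnorm(1)                                  # about 0.841

# The total area is 1, so the upper tail is the complement
pnorm(1) + pnorm(1, lower.tail = FALSE)   # exactly 1

# qnorm() inverts pnorm(): it maps an area back to the point x
qnorm(pnorm(1))                           # recovers 1
```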

Normal pnorm() Function

10:18

In probability theory and applications, **Bayes' rule** relates the odds of an event A1 to the odds of an event A2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A1 to A2 is simply the ratio of the probabilities of the two events. The prior odds is the ratio of the unconditional (prior) probabilities, and the posterior odds is the ratio of the conditional (posterior) probabilities given the event B. The relationship is expressed in terms of the **likelihood ratio**, or **Bayes factor**, Λ = P(B | A1) / P(B | A2). By definition, this is the ratio of the conditional probabilities of the event B given that A1 is the case or that A2 is the case, respectively. The rule simply states: **posterior odds equals prior odds times Bayes factor**.
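A small numeric sketch of the rule in R (the probabilities below are invented purely for illustration):

```r
# Two hypotheses A1 and A2, and an evidence event B (made-up numbers)
prior_A1 <- 0.01; prior_A2 <- 0.99          # prior probabilities
p_B_given_A1 <- 0.90                        # P(B | A1)
p_B_given_A2 <- 0.05                        # P(B | A2)

prior_odds     <- prior_A1 / prior_A2       # prior odds on A1 vs A2
bayes_factor   <- p_B_given_A1 / p_B_given_A2
posterior_odds <- prior_odds * bayes_factor # the rule in one line

# Cross-check against the direct conditional-probability computation
p_A1_given_B <- prior_A1 * p_B_given_A1 /
  (prior_A1 * p_B_given_A1 + prior_A2 * p_B_given_A2)
direct_odds <- p_A1_given_B / (1 - p_A1_given_B)
all.equal(posterior_odds, direct_odds)      # TRUE
```

The odds form and the direct conditional-probability computation agree, which is exactly what the rule asserts.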

Preview
11:36

In statistics, a **likelihood function** (often simply the **likelihood**) is a function of the parameters of a statistical model. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics. In informal contexts, "likelihood" is often used as a synonym for "probability." But in statistical usage, a distinction is made depending on the roles of the outcome or parameter. *Probability* is used when describing a function of the outcome given a fixed parameter value. For example, if a coin is flipped 10 times and it is a fair coin, what is the *probability* of it landing heads-up every time? *Likelihood* is used when describing a function of a parameter given an *outcome.* For example, if a coin is flipped 10 times and it has landed heads-up 10 times, what is the *likelihood* that the coin is fair?
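The coin example in the paragraph can be written directly in R:

```r
# Probability: fix the parameter (a fair coin, p = 0.5) and ask about outcomes
dbinom(10, size = 10, prob = 0.5)     # P(10 heads in 10 flips) = 0.5^10, ~0.001

# Likelihood: fix the outcome (10 heads observed) and vary the parameter
p_grid <- (0:100) / 100               # candidate values of p, from 0 to 1
lik <- dbinom(10, size = 10, prob = p_grid)
p_grid[which.max(lik)]                # the likelihood is maximized at p = 1
```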

Likelihood Function

06:58

Discrete priors stand in contrast to continuous priors. A discrete prior assigns probability to a set of whole-number outcomes of some event, for example, the number of consecutive tosses of "heads" in a series of tests, or samples, of the likelihood of that event. Since cumulative distribution functions are continuous, one needs to apply 'adjusting functions' to discrete priors to produce continuous posterior distributions.
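A minimal sketch of updating a discrete prior in R (the candidate values and prior weights below are made up):

```r
# Discrete prior over three candidate values of p, the chance of heads
p_candidates <- c(0.3, 0.5, 0.7)
prior <- c(0.25, 0.50, 0.25)            # prior weights, must sum to 1

# Observed data: 7 heads in 10 tosses
likelihood <- dbinom(7, size = 10, prob = p_candidates)

# Bayes' rule on a grid: posterior is prior times likelihood, normalized
posterior <- prior * likelihood / sum(prior * likelihood)
round(posterior, 3)                     # most weight shifts toward p = 0.7
```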

Preview
08:31

Using Discrete Priors (part 2)

06:34

A beta prior may be used to provide a continuous distribution over the probability of discrete, event-based occurrences, such as when a binomial distribution is used to estimate the number of "success" and "failure" outcomes in tosses of a coin.
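The conjugate beta-binomial update can be sketched in base R (the prior parameters and data are illustrative):

```r
# Beta-binomial conjugacy: a Beta(a, b) prior plus s successes and f failures
# yields a Beta(a + s, b + f) posterior (a standard closed-form result)
a <- 2; b <- 2          # prior mildly centered on p = 0.5
s <- 7; f <- 3          # observed: 7 heads, 3 tails

post_a <- a + s         # 9
post_b <- b + f         # 5

# The posterior mean shifts from the prior mean toward the data proportion
a / (a + b)                        # prior mean: 0.5
post_a / (post_a + post_b)         # posterior mean: 9/14, about 0.643

# A 95% credible interval for p
qbeta(c(0.025, 0.975), post_a, post_b)
```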

Using a Beta Prior (part 1)

05:12

Using a Beta Prior (part 2)

07:10

Using a Beta Prior (part 3)

05:05

Simulating Beta Posteriors

04:38

Brute Force Posterior Simulation using Histogram Prior

08:43

The **prior predictive distribution**, in a Bayesian context, is the distribution of a data point marginalized over its prior distribution.
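A prior predictive distribution can be simulated in R by drawing the parameter from its prior first, then the data given the parameter (the Beta(2, 2) prior here is just an example):

```r
set.seed(1)
# Simulate the prior predictive: draw p from its prior, then data given p
m <- 100000
p_draws <- rbeta(m, 2, 2)                         # Beta(2, 2) prior on p
y_draws <- rbinom(m, size = 10, prob = p_draws)   # heads in 10 future tosses

# The marginal (prior predictive) distribution of the data point y
round(prop.table(table(y_draws)), 3)
```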

Predictive Priors (slides)

06:24

Predictive Priors (scripts, part 1)

07:23

Predictive Priors (scripts, part 2)

07:19

Section 2 Exercises

02:20


Single Parameter Bayesian Models
8 Lectures
01:21:19

Section 2 Exercise Solution

10:58

Single-parameter models apply when one is trying to estimate the most likely value of a mean, or the most likely value of a standard deviation, but not both (that would be a multi-parameter model).

Preview
11:17

Single Parameter Models

10:59

Heart Transplant Mortality Rate (part 2)

10:39

Test of Bayesian Robustness (part 1)

10:23

Test of Bayesian Robustness (part 2)

11:02

Exercise: How Many Taxis?

03:36


Conjugate Mixtures
9 Lectures
01:23:02

Exercise Solution: How Many Taxis?

06:15

"Conjugate" models in the Bayesian approach simply mean that the functional form of the density function for both the prior distribution and the posterior distribution are similar, for example, both normally distributed. However "mixtures" refers to Bayesian models where there may be two different, and competing, components to the prior distribution, and one seeks an estimate of which of the two components is more likely, or more tenable.

Preview
10:37

Conjugate Mixtures (part 2)

09:19

A Bayesian Test of the Fairness of a Coin (part 2)

10:06

More on the Fairness of a Coin (part 3)

11:30

In **probability theory**, a **probability density function** (PDF), or **density**, of a continuous random variable is a **function** that describes the relative likelihood for this random variable to take on a given value. For example, the PDF for a normally distributed random variable takes the shape of the familiar "bell curve."
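In R, dnorm() is the standard normal PDF; the following illustrates that densities are relative heights whose total area integrates to 1:

```r
# Density at the peak of the standard normal bell curve
dnorm(0)                                            # 1/sqrt(2*pi), about 0.399

# A density value is not a probability; probabilities are areas under the curve
integrate(dnorm, lower = -Inf, upper = Inf)$value   # total area: 1
integrate(dnorm, lower = -1, upper = 1)$value       # about 0.683
```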

Introduction to Probability Density Functions (part 1)

13:25

Intro to PDFs (part 2)

06:54

Intro to PDFs (part 3)

07:14


Multi-Parameter Bayesian Models
10 Lectures
01:29:45

Mortality Rate Exercise Solution (part 1)

08:06

Mortality Rate Exercise Solution (part 2)

07:53

In the Bayesian approach, multiparameter models are models in which one attempts to estimate the probability density functions **for more than one parameter**, for example, both the mean and the standard deviation of the target posterior parameters.

Normal Multiparameter Models (part 1)

11:03

Normal Multiparameter Models (part 2)

08:20

Normal Multiparameter Models (part 3)

07:58

In probability theory, the **multinomial distribution** is a generalization of the binomial distribution. For *n* independent trials each of which leads to a success for exactly one of *k* categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

The binomial distribution is the probability distribution of the number of successes for one of just two categories in *n* independent Bernoulli trials, with the same probability of success on each trial. In a multinomial distribution, the analog of the Bernoulli distribution is the categorical distribution, where each trial results in exactly one of some fixed finite number *k* possible outcomes, with probabilities *p*_{1}, ..., *p*_{k}
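In R, dmultinom() and rmultinom() implement this distribution; the k = 2 case collapses to dbinom(), as the paragraph notes:

```r
# Multinomial probability for counts across k = 3 categories in 10 trials
probs <- c(0.2, 0.3, 0.5)
dmultinom(c(2, 3, 5), size = 10, prob = probs)

# With k = 2 categories the multinomial reduces to the binomial
dmultinom(c(3, 7), size = 10, prob = c(0.3, 0.7))
dbinom(3, size = 10, prob = 0.3)      # same value

# Random draws: each column of the matrix is one set of category counts
set.seed(7)
rmultinom(3, size = 10, prob = probs)
```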

Preview
10:22

Multinomial Multiparameter Models (part 2)

11:34

Bioassay Experiment (part 1)

10:32

Bioassay Experiment (part 2)

10:47

Exercise: Comparing Two Proportions

03:10


Bayesian Computation
8 Lectures
01:09:05

Exercise Solution: Comparing Two Proportions (part 1)

05:12

Exercise Solution: Comparing Two Proportions (part 2)

11:22

Introduction to Bayesian Computation Section

06:08

An **integral** is a mathematical object that can be interpreted as an area or a generalization of area. For example, to calculate the area under the "curve" of a continuous function f(x) up to some point "x" along the horizontal axis, one computes the integral of f(x) up to "x." Computing integrals is useful for finding probabilities that are represented as areas under a continuous function "curve" or plot.
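As a sketch of this use of integration (the unnormalized posterior below is illustrative: a flat prior with 7 heads observed in 10 tosses):

```r
# Numeric integration turns an unnormalized posterior into probabilities.
# Illustrative kernel: posterior proportional to p^7 * (1 - p)^3 on (0, 1)
post_kernel <- function(p) p^7 * (1 - p)^3

# Normalizing constant: total area under the unnormalized curve
z <- integrate(post_kernel, lower = 0, upper = 1)$value

# P(p > 0.5): area of the right half divided by the total area
integrate(post_kernel, lower = 0.5, upper = 1)$value / z
# Cross-check against the closed-form Beta(8, 4) posterior
pbeta(0.5, 8, 4, lower.tail = FALSE)       # same probability
```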

Preview
11:21

Computing Integrals to Estimate a Probability (part 2)

10:20

In probability theory and statistics, the **beta-binomial distribution** is a family of discrete probability distributions on a finite support of non-negative integers arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. The beta-binomial distribution is the binomial distribution in which the probability of success at each trial is not fixed but random and follows the beta distribution. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics as an overdispersed binomial distribution.
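The overdispersion is easy to see by simulation in R (the parameter values are illustrative):

```r
set.seed(3)
n <- 20; m <- 100000

# Plain binomial: success probability fixed at 0.3 for every trial set
y_binom <- rbinom(m, size = n, prob = 0.3)

# Beta-binomial: p itself is random, drawn from a beta with mean 0.3
p <- rbeta(m, shape1 = 3, shape2 = 7)     # mean = 3 / (3 + 7) = 0.3
y_bb <- rbinom(m, size = n, prob = p)

# Same mean, but the beta-binomial counts are overdispersed (larger variance)
c(mean(y_binom), mean(y_bb))
c(var(y_binom), var(y_bb))
```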

A Beta-Binomial Model of Overdispersion (part 1)

10:57

A Beta-Binomial Model of Overdispersion (part 2)

10:50

Exercise: Inference About a Normal Population

02:55


Rejection and Importance Sampling
8 Lectures
01:12:11

Exercise Solution: Inference about a Normal Population

09:39

In mathematics, **rejection sampling** is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm" and is a type of Monte Carlo method. The method works for any distribution with a density.
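A minimal accept-reject sketch in R, assuming a Beta(2, 5) target and a Uniform(0, 1) proposal (these choices are illustrative, not taken from the lecture):

```r
set.seed(5)
# Target density f and an envelope constant M with M >= max f (max is ~2.46)
target <- function(x) dbeta(x, 2, 5)
M <- 2.5

n <- 50000
x <- runif(n)                              # proposals from Uniform(0, 1)
u <- runif(n)
accepted <- x[u < target(x) / M]           # keep x with probability f(x) / M

length(accepted) / n                       # acceptance rate, about 1 / M = 0.4
mean(accepted)                             # close to the true mean 2/7 = 0.286
```

Raising M keeps the method valid but wastes more proposals, which is why a tight envelope matters in practice.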

Preview
10:06

Rejection Sampling (part 2)

10:01

Rejection Sampling (part 3)

06:41

Rejection Sampling (part 4)

06:54

Rejection Sampling (part 5)

10:40

Rejection Sampling (part 6)

09:45

In statistics, **importance sampling** is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. It is related to umbrella **sampling** in computational physics.
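A minimal importance-sampling sketch in R, reusing a Beta(2, 5) distribution of interest with uniform proposals (illustrative choices, not from the lecture):

```r
set.seed(9)
# Estimate E[X] for a Beta(2, 5) target using draws from Uniform(0, 1)
target <- function(x) dbeta(x, 2, 5)

n <- 100000
x <- runif(n)                  # samples from the proposal, not the target
w <- target(x) / dunif(x)      # importance weights: target / proposal density

est <- sum(w * x) / sum(w)     # self-normalized weighted estimate of the mean
est                            # close to the exact mean 2/7, about 0.286
```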

Importance Sampling

08:25


Comparing Bayesian Models
8 Lectures
01:19:59

One-Sided Test of a Normal Mean (part 1)

09:51

One-Sided Test of a Normal Mean (part 3)

07:43

Two-Sided Test of a Normal Mean

11:32

Streaky Behavior (part 1)

12:34

Streaky Behavior (part 2)

10:25

Streaky Behavior (part 3)

09:31

Streaky Behavior (part 4)

08:43


About the Instructor

Geoffrey Hubona, Ph.D., Professor of Information Systems
