ACE the AP Statistics Exam and MASTER Elementary Statistics!

My AP Statistics and Elementary Statistics video series will help you ace the AP exam and master all elementary statistics concepts
4.8 (25 ratings)
513 students enrolled
$19
$25
24% off
Take This Course
  • Lectures 91
  • Length 22.5 hours
  • Skill Level All Levels
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 11/2015 English

Course Description

Want to ace the AP Statistics exam and also do well in your class? Maybe you are taking an elementary or introductory statistics course in college and need extra help. We'll help you do it with 90 lessons, including many hours of illustrated lecture video, numerous worked-out example questions, and thorough coverage of the graphing calculator and its statistical capabilities.

Each lesson also comes with a downloadable Word document of course notes to help you learn the material as you watch the video lessons.

Although our course is geared toward high school students taking the AP exam, college students in a first-year statistics course will also find this class a lifesaver.

Did we mention you'll also have an awesome teacher?

Jerry Linch earned his B.S. in Mathematics from the University of Nebraska and his M.S. in Statistics from the University of Houston-Clear Lake. With several years of experience in the actuarial field, he has an excellent understanding of the material and can explain the concepts at a level any entry-level student can understand. If you want a comprehensive course that covers all of the AP Statistics topics and nearly all of the elementary statistics topics taught in a college course, explained with ease, then this course is for you.

What are the requirements?

  • Fundamentals of Algebra.

What am I going to get from this course?

  • Understand the concepts of most elementary college and advanced placement statistics courses.
  • Describe patterns and departures from patterns using descriptive statistics.
  • Interpret information from graphical and numerical displays and summaries.
  • Plan and conduct statistical studies by looking at data collection and analysis. Observational studies and experiments are both considered as well as proper sampling techniques and possible biases that can occur.
  • Explore random phenomena using probability and simulation. Both discrete and continuous probability models are considered and sampling distributions are introduced.
  • Estimate population parameters using statistics and test hypotheses. The student will be able to construct confidence intervals and hypothesis tests for numerical and categorical data.
  • Successfully complete a college entry level statistics class and achieve success on the Advanced Placement Statistics Exam.

What is the target audience?

  • Any student taking an elementary statistics course or AP Statistics should take this course.


Curriculum

Section 1: Introduction to Your Statistics Course!
01:42

Here is a quick intro to the AP Statistics and Elementary Statistics video series and your instructor, Jerry Linch.

Section 2: Exploring Data
10:12

An introduction to the topic of statistics. The two main branches of statistics are discussed: descriptive statistics and inferential statistics. The definitions of population and sample are discussed.

15:29

We talk about discrete and continuous variables in this section and classify data by number of variables.

19:26

Frequency and relative frequency are discussed. We construct bar charts and pie graphs based on our categorical data, and comparative displays are used to look at differences between distributions.

17:27

Graphing small to medium sized data sets. Construction of dotplots and stem-and-leaf plots with comparative displays.

16:58
Graphing medium to large sized data sets. Construction of Histograms with density scales included. Construction of Ogives (Cumulative Relative Frequency Graphs).
10:02

Graphing medium to large sized data sets. Construction of Histograms with density scales included. Construction of Ogives (Cumulative Relative Frequency Graphs).

13:17

In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions.

12:29

In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions.

09:01

In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions.
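
As a quick preview of the fence calculation described above, here is a small sketch in Python (not part of the course materials; the data values are made up for illustration):

```python
import numpy as np

# Hypothetical data set, for illustration only.
data = np.array([12, 15, 17, 18, 19, 21, 22, 24, 25, 27, 48])

q1, q3 = np.percentile(data, [25, 75])   # first and third quartiles
iqr = q3 - q1                            # interquartile range (IQR)

# Fences sit 1.5 * IQR beyond the quartiles; points outside them are flagged as outliers.
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(f"Q1={q1}, Q3={q3}, IQR={iqr}")
print(f"fences=({lower_fence}, {upper_fence}), outliers={outliers}")
```

(Note that np.percentile's quartile convention can differ slightly from the one used on the graphing calculator.)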

14:09

Describing Distributions and Graphical Displays. Features of a graph including Center, Shape, Spread and Unusual Occurrences. Each category is discussed.

16:40

Measures of Center including: mean, median and mode. Relationships of each and their use in graphical displays. Basic calculations of all measures of center. Resistant measures and trimmed mean are also discussed in this section.

04:49

Measures of Center including: mean, median and mode. Relationships of each and their use in graphical displays. Basic calculations of all measures of center. Resistant measures and trimmed mean are also discussed in this section.

18:10

Measures of Spread. Range, IQR (Interquartile Range), Standard Deviation, Variance and Deviations are all introduced in this section with examples and calculations. Each measure is discussed in terms of its use in describing data.

17:59

Measures of Spread. Range, IQR (Interquartile Range), Standard Deviation, Variance and Deviations are all introduced in this section with examples and calculations. Each measure is discussed in terms of its use in describing data.

18:19

Density curves and z-scores are discussed with formulas and examples. The empirical rule is investigated along with Chebyshev's lower-bound inequality. Introduction to normal bell-shaped curves. Transition points are introduced as well.

19:59

Density curves and z-scores are discussed with formulas and examples. The empirical rule is investigated along with Chebyshev's lower-bound inequality. Introduction to normal bell-shaped curves. Transition points are introduced as well.
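
For reference, the z-score formula and the two rules mentioned above can be checked numerically; here is a small Python sketch (not part of the course materials, with made-up numbers):

```python
from scipy.stats import norm

mu, sigma = 100, 15          # hypothetical mean and standard deviation
x = 130
z = (x - mu) / sigma         # z-score: how many standard deviations x lies from the mean

# Empirical rule (normal data): roughly 68% / 95% / 99.7% within 1, 2, 3 SDs.
within_2sd = norm.cdf(2) - norm.cdf(-2)   # about 0.9545

# Chebyshev's lower bound (any distribution): at least 1 - 1/k^2 within k SDs.
k = 2
chebyshev_bound = 1 - 1 / k**2            # 0.75

print(f"z = {z:.2f}, normal within 2 SD = {within_2sd:.4f}, Chebyshev bound = {chebyshev_bound}")
```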

19:44

Introduction to correlation and scatterplots. Pearson's correlation coefficient is developed and investigated. The rules for correlation and examples are given.

17:18

Introduction to correlation and scatterplots. Pearson's correlation coefficient is developed and investigated. The rules for correlation and examples are given.

19:34
Investigating the Least Squares Regression Line. This lesson will show us that the LSRL is the line of best fit. We will look at calculating the LSRL and its use as a model for linear data. We will also look at the concept of extrapolation.
09:39

Investigating the Least Squares Regression Line. This lesson will show us that the LSRL is the line of best fit. We will look at calculating the LSRL and its use as a model for linear data. We will also look at the concept of extrapolation.
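
To make the LSRL formulas concrete, here is a small Python sketch (not from the course; the paired data are invented for illustration):

```python
import numpy as np

# Hypothetical paired data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r = np.corrcoef(x, y)[0, 1]                 # correlation coefficient
b = r * y.std(ddof=1) / x.std(ddof=1)       # slope: b = r * s_y / s_x
a = y.mean() - b * x.mean()                 # intercept: the LSRL passes through (x-bar, y-bar)

print(f"LSRL: y-hat = {a:.3f} + {b:.3f} x")
# Predictions far outside the observed x-range (extrapolation) are unreliable.
```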

15:55

Residuals and error components are studied in a least squares regression setting. The coefficient of determination is defined, discussed and interpreted. Influential points and outliers are discussed at length in a least squares regression setting.

18:22

Residuals and error components are studied in a least squares regression setting. The coefficient of determination is defined, discussed and interpreted. Influential points and outliers are discussed at length in a least squares regression setting.

13:57

In this section we investigate residual plots to determine if data is linear. If the data is nonlinear, we transform the variables to achieve a linear model. Logarithmic, exponential, power, quadratic and reciprocal models are considered.

11:18

In this section we investigate residual plots to determine if data is linear. If the data is nonlinear, we transform the variables to achieve a linear model. Logarithmic, exponential, power, quadratic and reciprocal models are considered.

Section 3: Sampling and Experimentation
16:36

Types of Sampling Designs. Advantages and disadvantages of each design with important definitions and concepts in sampling. We discuss a simple random sample, stratified sampling, systematic sampling, cluster sampling and multistage sampling. Definitions of sample design and sampling frame are introduced. The importance of proper sampling is also discussed.

08:33

Types of Sampling Designs. Advantages and disadvantages of each design with important definitions and concepts in sampling. We discuss a simple random sample, stratified sampling, systematic sampling, cluster sampling and multistage sampling. Definitions of sample design and sampling frame are introduced. The importance of proper sampling is also discussed.

17:29
We introduce types of bias in sampling design and experimentation. Random digit tables introduced. Examples with biased results. Examples of types of bias are introduced in problems and designs.
10:26

Observational Study versus Experimentation. Definitions of experimental components are introduced.

13:23
Examples of Experimental Design.
13:37
Completely randomized designs versus block designed experiments. Matched pairs experiments. Randomization, replication and control of extraneous variables. Concept of confounding variables introduced.
11:08

Completely randomized designs versus block designed experiments. Matched pairs experiments. Randomization, replication and control of extraneous variables. Concept of confounding variables introduced.

Section 4: Anticipating Patterns
15:50

Fundamental Principle of Counting is introduced. Combinations and Permutations are introduced. Examples of counting questions with and without imposed conditions.

13:07

Fundamental Principle of Counting is introduced. Combinations and Permutations are introduced. Examples of counting questions with and without imposed conditions.
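
As a small illustration of these counting tools, here is a Python sketch with made-up numbers (not part of the course):

```python
from math import comb, perm

# Fundamental principle of counting: a 3-course meal with 4 appetizers,
# 6 entrees, and 3 desserts can be ordered in 4 * 6 * 3 ways.
meals = 4 * 6 * 3

# Permutations (order matters) and combinations (order does not):
ways_to_rank_3_of_10 = perm(10, 3)     # 10 * 9 * 8 = 720
committees_of_3_from_10 = comb(10, 3)  # 720 / 3! = 120

print(meals, ways_to_rank_3_of_10, committees_of_3_from_10)
```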

15:20
Sample Space, Event Space, Complement, Union, Intersection, Venn Diagrams, Mutually Exclusive Events, Disjoint Events considered.
09:29

Sample Space, Event Space, Complement, Union, Intersection, Venn Diagrams, Mutually Exclusive Events, Disjoint Events considered.

16:24

Experimental probability, law of large numbers, basic rules of probability, independence, dependence are investigated through examples. The use of complements is considered in calculating probabilities.

09:24

Experimental probability, law of large numbers, basic rules of probability, independence, dependence are investigated through examples. The use of complements is considered in calculating probabilities.

14:30

Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence and dependence are investigated through examples.

09:17

Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence and dependence are investigated through examples.

18:14

Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence and dependence are investigated through examples.

11:56

What is a simulation? The steps of a simulation are considered in this video. Introduction to random digit tables and sources of random numbers are considered. Examples of probabilities conducted with simulations. Experimental versus theoretical probability is investigated.

19:55
The concept of a random variable is introduced. Discrete probability distributions are explored. Linear transformations and linear combinations are introduced with the calculation of the mean and standard deviations for discrete distributions.
14:41
The concept of a random variable is introduced. Discrete probability distributions are explored. Linear transformations and linear combinations are introduced with the calculation of the mean and standard deviations for discrete distributions.
17:35
The concept of discrete distributions is discussed and the characteristics of binomial probabilities are presented. Binomial Distributions are investigated and several problems are addressed. The mean and standard deviation of binomial distributions are presented and used in context of problems.
18:47

The concept of discrete distributions is discussed and the characteristics of binomial probabilities are presented. Binomial Distributions are investigated and several problems are addressed. The mean and standard deviation of binomial distributions are presented and used in context of problems.
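
To preview the binomial formulas mentioned above, here is a short Python sketch (made-up numbers, not part of the course):

```python
import math
from scipy.stats import binom

n, p = 20, 0.3                      # hypothetical: 20 trials, success probability 0.3

mean = n * p                        # mean of a binomial: np
sd = math.sqrt(n * p * (1 - p))     # standard deviation: sqrt(np(1-p))

p_exactly_5 = binom.pmf(5, n, p)       # P(X = 5)
p_at_most_5 = binom.cdf(5, n, p)       # P(X <= 5)
p_at_least_8 = 1 - binom.cdf(7, n, p)  # P(X >= 8) via the complement

print(f"mean={mean}, sd={sd:.3f}")
print(f"P(X=5)={p_exactly_5:.4f}, P(X<=5)={p_at_most_5:.4f}, P(X>=8)={p_at_least_8:.4f}")
```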

18:47
Geometric Probability Distributions are discussed and examples solved. Understanding the probabilities of a first success. The binomial and geometric distributions are compared, with similarities and differences. The mean and standard deviation for geometric distributions are considered as well.
17:47
The Poisson probability distribution is discussed. Analyzing the probability of rare occurrences. Discrete probability distributions are compared. The mean and standard deviation of the Poisson distribution and the probability density function are discussed. Several examples of Poisson distributions are solved.
12:59

Unusual Density Curves are discussed with basic geometric shapes. Probability density functions are discussed for generic continuous distributions with unusual density curves. The concepts of continuous probabilities and random variables are explored. Many examples are given and solved with continuous probabilities.

10:43
We explore the continuous uniform distribution and its properties. The mean and standard deviation are explored as well as the probability distribution function. Many examples are presented and solved.
19:59

The normal distribution is discussed. The empirical rule is discussed with examples. Normal bell-shaped curves are graphed and discussed. Many problems are explained and solved with normal probabilities. The concept of z-scores is discussed and normal probability tables are presented.

17:40

The normal distribution is discussed. The empirical rule is discussed with examples. Normal bell-shaped curves are graphed and discussed. Many problems are explained and solved with normal probabilities. The concept of z-scores is discussed and normal probability tables are presented.

14:20

In this section, we assess the normality of data through the central limit theorem and graphical displays. Calculator functions are introduced to determine normal continuous probabilities and to graphically display normal curves. Functions such as normalpdf, normalcdf and invnorm are discussed.
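
For students who also use Python, the calculator functions named above have close counterparts in scipy.stats; here is a sketch with made-up numbers (not part of the course materials):

```python
from scipy.stats import norm

mu, sigma = 70, 8                       # hypothetical mean and standard deviation

# normalpdf(x, mu, sigma): height of the normal curve at x
density_at_75 = norm.pdf(75, mu, sigma)

# normalcdf(lower, upper, mu, sigma): area under the curve between two bounds
p_between_60_and_80 = norm.cdf(80, mu, sigma) - norm.cdf(60, mu, sigma)

# invNorm(area, mu, sigma): the x-value with the given area to its left
x_90th_percentile = norm.ppf(0.90, mu, sigma)

print(density_at_75, p_between_60_and_80, x_90th_percentile)
```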

15:15

The normal approximation to binomial distributions is considered in this lesson. Approximating binomial distributions with a normal bell-shaped curve is addressed, with the continuity correction based on the discrete histogram. Several problems are addressed and solved.
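
A small sketch of the continuity correction (Python, made-up numbers, not part of the course):

```python
import math
from scipy.stats import norm, binom

n, p = 100, 0.4                    # hypothetical binomial setting
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# Approximate P(X <= 45) with a continuity correction: use 45.5 on the normal curve.
approx = norm.cdf((45.5 - mu) / sigma)
exact = binom.cdf(45, n, p)        # exact binomial value, for comparison

print(f"normal approximation: {approx:.4f}, exact binomial: {exact:.4f}")
```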

16:58

Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved.

10:18

Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved.

15:15
Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved.
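
A quick numeric illustration of the central limit theorem result described above (a Python sketch with made-up numbers, not part of the course):

```python
import math
from scipy.stats import norm

# Hypothetical population: mu = 50, sigma = 12; sample of size n = 36.
mu, sigma, n = 50, 12, 36

# By the central limit theorem, x-bar is approximately normal with
# mean mu and standard deviation sigma / sqrt(n).
sd_xbar = sigma / math.sqrt(n)

# P(x-bar > 53): z-score for the sampling distribution.
z = (53 - mu) / sd_xbar
p = 1 - norm.cdf(z)

print(f"SD of x-bar = {sd_xbar}, z = {z:.2f}, P(x-bar > 53) = {p:.4f}")
```
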
Section 5: Statistical Inference
18:16
We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.
17:18

We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.

17:14

We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.

19:30

We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.

12:58

We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.

09:51

We begin the inferential section of statistics by discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated, and the calculation of the interval is broken down into its most basic form: the point estimate plus or minus the margin of error, which is made up of the critical value and the standard deviation of the statistic used to estimate the population mean. Several examples are presented in the construction of a confidence interval. We also find the sample size needed to produce a certain value for our margin of error.
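
To preview the arithmetic, here is a sketch of a one-sample t-interval and the sample-size calculation in Python (made-up summary statistics, not taken from the course):

```python
import math
from scipy.stats import t, norm

# Hypothetical sample summary: n = 25, x-bar = 4.8, s = 1.2.
n, xbar, s = 25, 4.8, 1.2
conf = 0.95

# Interval = point estimate +/- (critical value) * (standard error).
t_star = t.ppf((1 + conf) / 2, df=n - 1)
margin = t_star * s / math.sqrt(n)
print(f"{conf:.0%} CI for mu: ({xbar - margin:.3f}, {xbar + margin:.3f})")

# Sample size for a desired margin of error m (using z* and a guess for sigma):
m, sigma_guess = 0.3, 1.2
z_star = norm.ppf((1 + conf) / 2)
n_needed = math.ceil((z_star * sigma_guess / m) ** 2)
print(f"n needed for margin of error {m}: {n_needed}")
```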

18:09

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

19:31

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

19:52

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

13:12

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

16:15

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

12:35

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.

13:57

We continue the inferential section of statistics by discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right-tailed, left-tailed and two-tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypotheses are discussed. Calculation of the test statistic value is addressed, as well as the calculation of the p-value associated with it. Several examples are presented for the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test.
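
A sketch of the one-sample t-test arithmetic in Python (hypothetical numbers, not from the course):

```python
import math
from scipy.stats import t

# Hypothetical: H0: mu = 10 vs Ha: mu > 10, from n = 30, x-bar = 10.6, s = 1.8.
mu0, n, xbar, s = 10, 30, 10.6, 1.8

t_stat = (xbar - mu0) / (s / math.sqrt(n))     # test statistic
p_value = 1 - t.cdf(t_stat, df=n - 1)          # right-tailed p-value

print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
# Reject H0 at alpha = 0.05 if the p-value is below 0.05.
```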

18:38
Errors are introduced based on the decisions of hypothesis tests. Type I and Type II errors are explored and we discuss the decisions made leading to these errors and the consequences associated with making these errors. The relationship between the two errors is investigated as well as the relationship to our level of significance and the probability of a type I error. Many examples are given and we discuss the nature and consequences associated with both type I and type II errors.
18:11

In this lesson, we look at the different errors that are possible in hypothesis testing, their consequences, and how to assess their probabilities based on a hypothetical alternative mean. The power of the test is addressed, along with its relationship to a Type II error. We also consider the values of power and the probabilities associated with Type I and Type II errors and discuss what is acceptable in practice.

18:59

In this lesson, we look at the different errors that are possible in hypothesis testing, their consequences, and how to assess their probabilities based on a hypothetical alternative mean. The power of the test is addressed, along with its relationship to a Type II error. We also consider the values of power and the probabilities associated with Type I and Type II errors and discuss what is acceptable in practice.
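
For a concrete feel for power, here is a sketch of the power calculation for a right-tailed one-sample z-test (Python, made-up numbers, not part of the course):

```python
import math
from scipy.stats import norm

# Hypothetical right-tailed z-test: H0: mu = 100 vs Ha: mu > 100,
# known sigma = 15, n = 36, alpha = 0.05, true (alternative) mean mu_a = 106.
mu0, mu_a, sigma, n, alpha = 100, 106, 15, 36, 0.05
se = sigma / math.sqrt(n)

# Reject H0 when x-bar exceeds this cutoff.
cutoff = mu0 + norm.ppf(1 - alpha) * se

beta = norm.cdf((cutoff - mu_a) / se)   # P(Type II error) = P(fail to reject | mu = mu_a)
power = 1 - beta                        # power = P(reject H0 | mu = mu_a)

print(f"cutoff = {cutoff:.2f}, beta = {beta:.3f}, power = {power:.3f}")
```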

13:38

In this lesson, we look at sampling distributions for one sample proportions. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions.

16:41

In this lesson, we look at sampling distributions for one sample proportions. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions.

14:57

In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing.

07:23

In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing.

13:34

In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing.
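
A sketch of the one-proportion z-interval and the sample-size calculation (Python, made-up numbers, not from the course):

```python
import math
from scipy.stats import norm

# Hypothetical: 412 successes in n = 1000; 95% z-interval for p.
x, n, conf = 412, 1000, 0.95
p_hat = x / n
z_star = norm.ppf((1 + conf) / 2)
margin = z_star * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{conf:.0%} CI for p: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")

# Sample size for a desired margin of error m (worst case p = 0.5):
m = 0.03
n_needed = math.ceil((z_star / m) ** 2 * 0.5 * 0.5)
print(f"n needed for margin of error {m}: {n_needed}")
```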

17:06

In this lesson, we look at two sample inference with means. Confidence Intervals and Hypothesis Tests are discussed in this lesson for two sample mean inference. Conditions for inference are also discussed. We look at the differences between mean difference and difference of means, from matched pairs to two independent samples. Several problems are presented and solved based on sample data involving two sample procedures, using confidence intervals and hypothesis testing. We discuss the robustness of t-inference in particular with two sample procedures.

18:14

In this lesson, we look at two sample inference with means. Confidence Intervals and Hypothesis Tests are discussed in this lesson for two sample mean inference. Conditions for inference are also discussed. We look at the differences between mean difference and difference of means, from matched pairs to two independent samples. Several problems are presented and solved based on sample data involving two sample procedures, using confidence intervals and hypothesis testing. We discuss the robustness of t-inference in particular with two sample procedures.

19:44

In this lesson, we look at two sample inference with means. Confidence Intervals and Hypothesis Tests are discussed in this lesson for two sample mean inference. Conditions for inference are also discussed. We look at the differences between mean difference and difference of means, from matched pairs to two independent samples. Several problems are presented and solved based on sample data involving two sample procedures, using confidence intervals and hypothesis testing. We discuss the robustness of t-inference in particular with two sample procedures.

18:20

In this lesson we discuss two sample inference with proportions. We begin by looking at the sampling distribution of the difference in population proportions. Confidence Intervals and Hypothesis Tests are conducted for the difference in population proportions. The conditions for inference are addressed.

14:08

In this lesson we discuss two sample inference with proportions. We begin by looking at the sampling distribution of the difference in population proportions. Confidence Intervals and Hypothesis Tests are conducted for the difference in population proportions. The conditions for inference are addressed.

14:01

In this lesson we discuss two sample inference with proportions. We begin by looking at the sampling distribution of the difference in population proportions. Confidence Intervals and Hypothesis Tests are conducted for the difference in population proportions. The conditions for inference are addressed.

12:40
In this lesson we discuss inference procedures for categorical data. We begin with Chi Square Goodness of Fit tests. Actual data is compared to expected data. Both Chi Square Tests for Independence and Homogeneity are then discussed. We look at two way tables for both tests and find expected counts. The Chi Square Test Statistic is studied as well as the conditions for Chi Square Inference. Examples of Hypothesis Tests are given.
10:59

In this lesson we discuss inference procedures for categorical data. We begin with Chi Square Goodness of Fit tests. Actual data is compared to expected data. Both Chi Square Tests for Independence and Homogeneity are then discussed. We look at two way tables for both tests and find expected counts. The Chi Square Test Statistic is studied as well as the conditions for Chi Square Inference. Examples of Hypothesis Tests are given.

14:38

In this lesson we discuss inference procedures for categorical data. We begin with Chi Square Goodness of Fit tests. Actual data is compared to expected data. Both Chi Square Tests for Independence and Homogeneity are then discussed. We look at two way tables for both tests and find expected counts. The Chi Square Test Statistic is studied as well as the conditions for Chi Square Inference. Examples of Hypothesis Tests are given.

11:49

In this lesson we discuss inference procedures for categorical data. We begin with Chi Square Goodness of Fit tests. Actual data is compared to expected data. Both Chi Square Tests for Independence and Homogeneity are then discussed. We look at two way tables for both tests and find expected counts. The Chi Square Test Statistic is studied as well as the conditions for Chi Square Inference. Examples of Hypothesis Tests are given.
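
A sketch of both chi-square procedures in Python (made-up counts, not from the course):

```python
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit: observed die rolls vs. equal expected counts (invented data).
observed = np.array([18, 22, 16, 25, 20, 19])
expected = np.full(6, observed.sum() / 6)
stat, p = chisquare(observed, expected)
print(f"GOF: chi-square = {stat:.3f}, p-value = {p:.4f}")

# Test for independence on a two-way table (invented counts).
table = np.array([[30, 20],
                  [25, 45]])
stat2, p2, df, expected_counts = chi2_contingency(table)
print(f"Independence: chi-square = {stat2:.3f}, df = {df}, p-value = {p2:.4f}")
```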

19:45
In this lesson, we look at linear regression inference with the construction of both confidence intervals and hypothesis tests. The conditions for inference are addressed. We look at the standard error component and summarized statistical values found in tables. Regression concepts are revisited.
12:02

In this lesson, we look at linear regression inference with the construction of both confidence intervals and hypothesis tests. The conditions for inference are addressed. We look at the standard error component and summarized statistical values found in tables. Regression concepts are revisited.

17:40

In this lesson, we look at linear regression inference with the construction of both confidence intervals and hypothesis tests. The conditions for inference are addressed. We look at the standard error component and summarized statistical values found in tables. Regression concepts are revisited.
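
A sketch of the slope t-statistic described above (Python, invented data, not from the course):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired data, for illustration only.
x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0])
y = np.array([3.1, 5.8, 6.9, 9.2, 10.4, 12.9])

result = linregress(x, y)
# The t statistic for H0: slope = 0 is slope / SE(slope).
t_stat = result.slope / result.stderr

print(f"slope = {result.slope:.3f}, SE = {result.stderr:.3f}, "
      f"t = {t_stat:.3f}, p-value = {result.pvalue:.4f}")
```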


Instructor Biography

Jerry Linch, AP Statistics Instructor and College Math Instructor

Welcome!

I am excited to share my knowledge and help you with your entry-level statistics and AP Statistics courses.

I am a graduate of the University of Nebraska, where I earned my bachelor's degree in Mathematics and Actuarial Science. I furthered my education in Texas and earned my Master of Science degree in Statistics from the University of Houston-Clear Lake.

My first career began in Galveston, Texas, where I worked as an actuary for American National Insurance Company from January 1985 to August 1992. After eight years in the actuarial profession I decided to turn my efforts toward teaching. I taught the next eight years at two different public schools in the Houston area. After eight years of instruction, I switched my career path back to the actuarial field. I moved to Omaha, Nebraska for two years and worked with Central States Health and Life Company as a valuation actuary. My family and I then returned to Texas, where I continued in the actuarial profession, once again with American National Insurance Company in Galveston, for the next three years. I currently teach AP Statistics and college math courses for a school in the Houston area.

I am excited to bring the classroom to you. I have always had a passion to teach and enjoy it very much. I hope you find the videos both educational and helpful!
Sincerely,

Jerry Linch

Ready to start learning?
Take This Course