Find online courses made by experts from around the world.
Take your courses with you and learn anywhere, anytime.
Learn and practice real-world skills and achieve your goals.
Want to ace the AP Statistics exam and also do well in your class? Maybe you are taking an elementary or introductory statistics course in college and need the extra help. We'll help you do it with 90 lessons, including several hours of illustrated lecture video, several worked-out example questions, and a complete guide to the graphing calculator and its statistical capabilities.
Each lesson also comes with a downloadable Word document of course notes to help you learn the material as you watch the video lessons.
Although our course is geared toward high school students taking the AP exam, college students in a first-year statistics course will also find this class a lifesaver.
Did we mention you'll also have an awesome teacher?
Jerry Linch earned his B.S. in Mathematics from the University of Nebraska and his M.S. in Statistics from the University of Houston Clear Lake. With several years of experience in the actuarial field, he has an excellent command of the material and can explain the concepts at a level any entry-level student can understand. If you want a comprehensive course covering all the AP Statistics topics and nearly all the elementary statistics topics in a college course, explained with ease, then this course is for you.
Not for you? No problem.
30-day money-back guarantee.
Forever yours.
Lifetime access.
Learn on the go.
Desktop, iOS and Android.
Get rewarded.
Certificate of completion.
Section 1: Introduction to Your Statistics Course! | |||
---|---|---|---|
Lecture 1 | 01:42 | ||
Here is a quick intro to the AP Statistics and Elementary Statistics video series and your instructor, Jerry Linch. |
|||
Section 2: Exploring Data | |||
Lecture 2 | 10:12 | ||
An introduction to the topic of statistics. The two main branches of statistics are discussed: descriptive statistics and inferential statistics. The definitions of population and sample are discussed. |
|||
Lecture 3 | 15:29 | ||
We talk about discrete and continuous variables in this section and classify data by number of variables. |
|||
Lecture 4 | 19:26 | ||
Frequency and Relative Frequency are discussed. We construct Bar Charts and Pie Graphs based on our categorical data, and Comparative Displays are used to look at differences in distributions. |
|||
Lecture 5 | 17:27 | ||
Graphing small to medium sized data sets. Construction of dotplots and stem-and-leaf plots with comparative displays. |
|||
Lecture 6 | 16:58 | ||
Graphing medium to large sized data sets. Construction of Histograms with density scales included. Construction of Ogives (Cumulative Relative Frequency Graphs). |
|||
Lecture 7 | 10:02 | ||
Graphing medium to large sized data sets. Construction of Histograms with density scales included. Construction of Ogives (Cumulative Relative Frequency Graphs). |
|||
Lecture 8 | 13:17 | ||
In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions. |
|||
Lecture 9 | 12:29 | ||
In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions. |
|||
Lecture 10 | 09:01 | ||
In this section we discuss the construction of modified boxplots using the 5 number summary statistics of our data. We discuss the calculation of outliers based on the location of fences using the IQR. Multiple boxplots are used in comparative displays to discuss the differences in the features of distributions. |
|||
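For students who like to check their work with code, the fence rule described above can be sketched in a few lines of Python (the data values are made up for illustration; note that `statistics.quantiles` uses a slightly different quartile convention than some graphing calculators):

```python
import statistics

# Hypothetical data set with one suspiciously large value.
data = [2, 3, 4, 5, 6, 7, 8, 9, 40]

q1, _, q3 = statistics.quantiles(data, n=4)  # first and third quartiles
iqr = q3 - q1                                # interquartile range
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# Values beyond the fences are flagged as outliers on a modified boxplot.
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(outliers)  # [40]
```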
Lecture 11 | 14:09 | ||
Describing Distributions and Graphical Displays. Features of a graph including Center, Shape, Spread and Unusual Occurrences. Each category is discussed. |
|||
Lecture 12 | 16:40 | ||
Measures of Center, including the mean, median and mode. Relationships of each and their use in graphical displays. Basic calculations of all measures of center. Resistant measures and the trimmed mean are also discussed in this section. |
|||
Lecture 13 | 04:49 | ||
Measures of Center, including the mean, median and mode. Relationships of each and their use in graphical displays. Basic calculations of all measures of center. Resistant measures and the trimmed mean are also discussed in this section. |
|||
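As a quick illustration of why the median is called a resistant measure, here is a short Python sketch using made-up data with one outlier:

```python
import statistics

data = [1, 2, 2, 3, 4, 10]        # hypothetical data with an outlier

print(statistics.mean(data))      # about 3.67 -- pulled toward the outlier 10
print(statistics.median(data))    # 2.5 -- resistant to the outlier
print(statistics.mode(data))      # 2 -- the most frequent value

# Trimmed mean: drop the smallest and largest value before averaging.
trimmed = statistics.mean(sorted(data)[1:-1])
print(trimmed)                    # 2.75
```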
Lecture 14 | 18:10 | ||
Measures of Spread. Range, IQR (Interquartile Range), Standard Deviation, Variance and Deviations are all introduced in this section with examples and calculations. Each measure is discussed in its use to describe data. |
|||
Lecture 15 | 17:59 | ||
Measures of Spread. Range, IQR (Interquartile Range), Standard Deviation, Variance and Deviations are all introduced in this section with examples and calculations. Each measure is discussed in its use to describe data. |
|||
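The spread measures in this lesson are all one-liners in Python's standard library; a small sketch with invented data:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]    # hypothetical sample

print(max(data) - min(data))       # range: 7
print(statistics.pvariance(data))  # population variance: 4.0
print(statistics.pstdev(data))     # population standard deviation: 2.0
print(statistics.stdev(data))      # sample standard deviation (n - 1 divisor)
```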
Lecture 16 | 18:19 | ||
Density curves and z-scores are discussed with formulas and examples. The empirical rule is investigated along with Chebyshev's lower-bound inequality. Introduction to Normal Bell-Shaped Curves. Transition points are introduced as well. |
|||
Lecture 17 | 19:59 | ||
Density curves and z-scores are discussed with formulas and examples. The empirical rule is investigated along with Chebyshev's lower-bound inequality. Introduction to Normal Bell-Shaped Curves. Transition points are introduced as well. |
|||
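The z-score formula and Chebyshev's bound from these lessons reduce to a couple of lines; the mean and standard deviation below are hypothetical:

```python
# Hypothetical distribution with mean 100 and standard deviation 15.
mu, sigma = 100, 15
x = 130

z = (x - mu) / sigma
print(z)  # 2.0 -- x lies two standard deviations above the mean

# Empirical rule (bell-shaped data): roughly 68-95-99.7% within 1, 2, 3 sd.
# Chebyshev's inequality (any data): at least 1 - 1/k^2 lies within k sd.
k = 2
print(1 - 1 / k**2)  # 0.75 -- Chebyshev's lower bound for k = 2
```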
Lecture 18 | 19:44 | ||
Introduction to Correlation and scatterplots. Pearson's correlation coefficient is developed and investigated. The rules for correlation and examples are given. |
|||
Lecture 19 | 17:18 | ||
Introduction to Correlation and scatterplots. Pearson's correlation coefficient is developed and investigated. The rules for correlation and examples are given. |
|||
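Pearson's r can be computed directly from its definition; a minimal sketch with invented paired data:

```python
import statistics

x = [1, 2, 3, 4, 5]    # hypothetical explanatory values
y = [2, 4, 5, 4, 6]    # hypothetical response values

mx, my = statistics.mean(x), statistics.mean(y)
sx, sy = statistics.stdev(x), statistics.stdev(y)
n = len(x)

# r: sum of products of deviations, scaled by (n - 1) and both sds.
r = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)
print(round(r, 3))  # 0.853 -- a fairly strong positive linear association
```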
Lecture 20 | 19:34 | ||
Investigating the Least Squares Regression Line. This lesson will show us the LSRL is the line of best fit. We will look at calculating the LSRL and its use as a model for linear data. We will also look at the concept of extrapolation. |
|||
Lecture 21 | 09:39 | ||
Investigating the Least Squares Regression Line. This lesson will show us the LSRL is the line of best fit. We will look at calculating the LSRL and its use as a model for linear data. We will also look at the concept of extrapolation. |
|||
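The LSRL slope and intercept come straight from the formulas in this lesson; a sketch with the same kind of invented data:

```python
import statistics

x = [1, 2, 3, 4, 5]    # hypothetical explanatory values
y = [2, 4, 5, 4, 6]    # hypothetical response values

mx, my = statistics.mean(x), statistics.mean(y)

# slope = Sxy / Sxx; the intercept forces the line through (x-bar, y-bar).
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

print(round(b, 3), round(a, 3))   # 0.8 1.8
print(round(a + b * 3, 2))        # prediction at x = 3, inside the data range
# Predicting at, say, x = 50 would be extrapolation -- the model may not hold there.
```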
Lecture 22 | 15:55 | ||
Residuals and error components are studied in a least squares regression setting. The coefficient of determination is discussed, defined and interpreted. Influential points and outliers are discussed at length in a least squares regression setting. |
|||
Lecture 23 | 18:22 | ||
Residuals and error components are studied in a least squares regression setting. The coefficient of determination is discussed, defined and interpreted. Influential points and outliers are discussed at length in a least squares regression setting. |
|||
Lecture 24 | 13:57 | ||
In this section we investigate residual plots to determine if data is linear. If the data is nonlinear, we transform the variables to achieve a linear model. Logarithmic, exponential, power, quadratic and reciprocal models are considered. |
|||
Lecture 25 | 11:18 | ||
In this section we investigate residual plots to determine if data is linear. If the data is nonlinear, we transform the variables to achieve a linear model. Logarithmic, exponential, power, quadratic and reciprocal models are considered. |
|||
Section 3: Sampling and Experimentation | |||
Lecture 26 | 16:36 | ||
Types of Sampling Designs. Advantages and disadvantages of each design with important definitions and concepts in sampling. We discuss a simple random sample, stratified sampling, systematic sampling, cluster sampling and multistage sampling. Definitions of sample design and sampling frame are introduced. The importance of proper sampling is also discussed. |
|||
Lecture 27 | 08:33 | ||
Types of Sampling Designs. Advantages and disadvantages of each design with important definitions and concepts in sampling. We discuss a simple random sample, stratified sampling, systematic sampling, cluster sampling and multistage sampling. Definitions of sample design and sampling frame are introduced. The importance of proper sampling is also discussed. |
|||
Lecture 28 | 17:29 | ||
We introduce types of bias in sampling design and experimentation. Random digit tables introduced. Examples with biased results. Examples of types of bias are introduced in problems and designs. |
|||
Lecture 29 | 10:26 | ||
Observational Study versus Experimentation. Definitions of experimental components are introduced. |
|||
Lecture 30 | 13:23 | ||
Examples of Experimental Design. |
|||
Lecture 31 | 13:37 | ||
Completely randomized designs versus block designed experiments. Matched pairs experiments. Randomization, replication and control of extraneous variables. Concept of confounding variables introduced. |
|||
Lecture 32 | 11:08 | ||
Completely randomized designs versus block designed experiments. Matched pairs experiments. Randomization, replication and control of extraneous variables. Concept of confounding variables introduced. |
|||
Section 4: Anticipating Patterns | |||
Lecture 33 | 15:50 | ||
Fundamental Principle of Counting is introduced. Combinations and Permutations are introduced. Examples of counting questions with and without imposed conditions. |
|||
Lecture 34 | 13:07 | ||
Fundamental Principle of Counting is introduced. Combinations and Permutations are introduced. Examples of counting questions with and without imposed conditions. |
|||
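Python's math module has the counting functions from this lesson built in; a quick sketch with hypothetical counts:

```python
import math

# Fundamental counting principle: 4 shirts x 3 pants x 2 hats.
print(4 * 3 * 2)         # 24 possible outfits

print(math.perm(5, 2))   # ordered arrangements (permutations) of 2 from 5: 20
print(math.comb(5, 2))   # unordered selections (combinations) of 2 from 5: 10
```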
Lecture 35 | 15:20 | ||
Sample Space, Event Space, Complement, Union, Intersection, Venn Diagrams, Mutually Exclusive Events, Disjoint Events considered. |
|||
Lecture 36 | 09:29 | ||
Sample Space, Event Space, Complement, Union, Intersection, Venn Diagrams, Mutually Exclusive Events, Disjoint Events considered. |
|||
Lecture 37 | 16:24 | ||
Experimental probability, law of large numbers, basic rules of probability, independence, dependence are investigated through examples. The use of complements is considered in calculating probabilities. |
|||
Lecture 38 | 09:24 | ||
Experimental probability, law of large numbers, basic rules of probability, independence, dependence are investigated through examples. The use of complements is considered in calculating probabilities. |
|||
Lecture 39 | 14:30 | ||
Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence, dependence are investigated through examples. |
|||
Lecture 40 | 09:17 | ||
Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence, dependence are investigated through examples. |
|||
Lecture 41 | 18:14 | ||
Conditional probability introduced. Two-way and contingency tables introduced with conditional probability as well as tree diagrams. Basic rules of probability, independence, dependence are investigated through examples. |
|||
Lecture 42 | 11:56 | ||
What is a simulation? The steps of a simulation are considered in this video. Introduction to random digit tables and sources of random numbers are considered. Examples of probabilities conducted with simulations. Experimental versus theoretical probability is investigated. |
|||
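The steps of a simulation can be sketched in a few lines of Python; this hypothetical example estimates P(at least one head in two coin flips), whose theoretical value is 3/4:

```python
import random

random.seed(1)       # fix the seed so the run is reproducible
trials = 10_000

hits = sum(
    1 for _ in range(trials)
    if random.randint(0, 1) + random.randint(0, 1) >= 1  # at least one head
)
print(hits / trials)  # experimental probability, close to the theoretical 0.75
```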
Lecture 43 | 19:55 | ||
The concept of a random variable is introduced. Discrete probability distributions are explored. Linear transformations and linear combinations are introduced with the calculation of the mean and standard deviations for discrete distributions. |
|||
Lecture 44 | 14:41 | ||
The concept of a random variable is introduced. Discrete probability distributions are explored. Linear transformations and linear combinations are introduced with the calculation of the mean and standard deviations for discrete distributions. |
|||
Lecture 45 | 17:35 | ||
The concept of discrete distributions is discussed and the characteristics of binomial probabilities are presented. Binomial Distributions are investigated and several problems are addressed. The mean and standard deviation of binomial distributions are presented and used in context of problems. |
|||
Lecture 46 | 18:47 | ||
The concept of discrete distributions is discussed and the characteristics of binomial probabilities are presented. Binomial Distributions are investigated and several problems are addressed. The mean and standard deviation of binomial distributions are presented and used in context of problems. |
|||
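The binomial formulas from these lessons (the pmf, mean np, and standard deviation sqrt(np(1-p))) in a short sketch with hypothetical n and p:

```python
import math

n, p = 10, 0.3   # hypothetical number of trials and success probability
k = 4

pmf = math.comb(n, k) * p**k * (1 - p)**(n - k)  # P(X = 4)
mean = n * p
sd = math.sqrt(n * p * (1 - p))

print(round(pmf, 4))       # 0.2001
print(mean, round(sd, 3))  # mean 3.0, sd about 1.449
```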
Lecture 47 | 18:47 | ||
Geometric Probability Distributions are discussed and examples solved. Understanding the probabilities of a first success. The binomial and geometric distributions are compared with similarities and differences. The mean and standard deviation for geometric distributions are considered as well. |
|||
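The geometric formulas reduce to one line each; the success probability below is hypothetical:

```python
p = 0.25                      # hypothetical success probability per trial
k = 3

pmf = (1 - p) ** (k - 1) * p  # P(first success occurs on trial 3)
mean = 1 / p                  # expected number of trials until the first success

print(pmf, mean)              # 0.140625 4.0
```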
Lecture 48 | 17:47 | ||
The Poisson probability distribution is discussed. Analyzing the probability of rare occurrences. Discrete probability distributions are compared. The mean and standard deviation of the Poisson distribution and the probability mass function are discussed. Several examples of Poisson distributions are solved. |
|||
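The Poisson pmf from this lesson, sketched with a hypothetical rate:

```python
import math

lam = 2.0   # hypothetical average count per interval
k = 3

pmf = math.exp(-lam) * lam**k / math.factorial(k)  # P(X = 3)
print(round(pmf, 4))  # 0.1804

# For a Poisson distribution the mean is lam and the sd is sqrt(lam).
print(lam, round(math.sqrt(lam), 3))
```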
Lecture 49 | 12:59 | ||
Unusual Density Curves are discussed with basic geometric shapes. Probability density functions are discussed for generic continuous distributions with unusual density curves. The concept of continuous probabilities and random variables are explored. Many examples are given and solved with continuous probabilities. |
|||
Lecture 50 | 10:43 | ||
We explore the continuous uniform distribution and its properties. The mean and standard deviation are explored as well as the probability distribution function. Many examples are presented and solved. |
|||
Lecture 51 | 19:59 | ||
The normal distribution is discussed. The empirical rule is discussed with examples. Normal bell-shaped curves are graphed and discussed. Many problems are explained and solved with normal probabilities. The concept of z-scores is discussed and normal probability tables are presented. |
|||
Lecture 52 | 17:40 | ||
The normal distribution is discussed. The empirical rule is discussed with examples. Normal bell-shaped curves are graphed and discussed. Many problems are explained and solved with normal probabilities. The concept of z-scores is discussed and normal probability tables are presented. |
|||
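Normal probabilities can be checked without a table: the normal CDF can be written with the error function. A sketch (the mean-100, sd-15 scale is hypothetical):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative probability P(X <= x) for a normal distribution."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Empirical rule check: probability within one sd of the mean.
print(round(normal_cdf(1) - normal_cdf(-1), 4))  # 0.6827

# P(X < 130) on a hypothetical scale with mean 100, sd 15 (z = 2).
print(round(normal_cdf(130, 100, 15), 4))        # 0.9772
```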
Lecture 53 | 14:20 | ||
In this section, we assess the normality of data through the central limit theorem and graphical displays. Calculator functions are introduced to determine normal continuous probabilities and to graphically display normal curves. Functions such as normalpdf, normalcdf, and invNorm are discussed. |
|||
Lecture 54 | 15:15 | ||
Normal approximations to binomial distributions are considered in this lesson. Approximating binomial distributions with a normal bell-shaped curve is addressed with the continuity correction based on the discrete histogram. Several problems are addressed and solved. |
|||
Lecture 55 | 16:58 | ||
Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved. |
|||
Lecture 56 | 10:18 | ||
Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved. |
|||
Lecture 57 | 15:15 | ||
Sampling distributions are introduced and discussed. The role of the sampling distribution is introduced to begin inferential statistics. The central limit theorem is discussed. The mean and standard deviation are discussed for sampling distributions. The concept of the mean as an unbiased estimator is presented. Z-scores for sampling distributions are introduced. Examples are presented and solved. |
|||
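A tiny simulation makes the central limit theorem concrete: sample means from a skewed population still centre on the population mean. The population, sample size, and number of repetitions below are all hypothetical:

```python
import random
import statistics

random.seed(7)  # reproducible run

# Exponential population with mean 1 (strongly right-skewed).
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(30))
    for _ in range(2000)
]

# The sampling distribution of x-bar centres on the population mean...
print(round(statistics.mean(sample_means), 2))   # close to 1.0
# ...and is far less spread out than the population itself.
print(round(statistics.stdev(sample_means), 2))  # close to 1/sqrt(30), about 0.18
```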
Section 5: Statistical Inference | |||
Lecture 58 | 18:16 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
Lecture 59 | 17:18 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
Lecture 60 | 17:14 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
Lecture 61 | 19:30 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
Lecture 62 | 12:58 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
Lecture 63 | 09:51 | ||
We begin the inferential section of statistics discussing the confidence interval for a one sample mean procedure. Both z-intervals and t-intervals are discussed and the Student's t-distribution is introduced. Conditions for inference with confidence intervals are explored with Simple Random Sampling. The conditions for normality are evaluated and the calculation of the interval is broken down into its most basic form including the point estimate and the margin of error, made up of the critical value and the standard deviation of the statistic we use to estimate the population parameter value of the mean. Several examples are presented in the construction of a confidence interval. We find the value of the sample size to produce a certain value for our margin of error. |
|||
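A sketch of the interval arithmetic with made-up data. The standard library has no t table, so this uses the z critical value; a true t-interval would use a slightly larger critical value from the t-distribution:

```python
from statistics import NormalDist, mean, stdev
import math

data = [98, 102, 100, 97, 103, 101, 99, 100]  # hypothetical sample
n = len(data)
xbar, s = mean(data), stdev(data)             # point estimate and sample sd

z_star = NormalDist().inv_cdf(0.975)          # critical value for 95% confidence
moe = z_star * s / math.sqrt(n)               # margin of error

print(round(xbar - moe, 2), round(xbar + moe, 2))  # the 95% interval

# Sample size needed to bring the margin of error down to 1 unit:
print(math.ceil((z_star * s / 1) ** 2))            # 16
```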
Lecture 64 | 18:09 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 65 | 19:31 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 66 | 19:52 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 67 | 13:12 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 68 | 16:15 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 69 | 12:35 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
Lecture 70 | 13:57 | ||
We continue the inferential section of statistics discussing hypothesis tests for a one sample mean procedure. Both z-tests and t-tests are discussed, and the robustness of the t-distribution is introduced. We examine right tail, left tail and two tailed hypothesis tests. Conditions for inference with hypothesis tests are explored with Simple Random Sampling. The conditions for normality are evaluated and the hypothesis statements for both the null and alternative hypothesis are discussed. Calculation of the test statistic value is addressed as well as the calculation of the p-value associated with the test statistic value. Several examples are presented in the one sample hypothesis test procedures. We also discuss the matched pairs t-test using one sample hypothesis test procedures. The confidence interval is compared to a two-tailed hypothesis test. |
|||
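The test-statistic and p-value mechanics with invented data; as above, the normal distribution stands in for the t table, so the p-value here is approximate:

```python
from statistics import NormalDist, mean, stdev
import math

data = [5.2, 4.9, 5.5, 5.1, 4.8, 5.4, 5.3, 5.0]  # hypothetical sample
mu0 = 5.0                                        # null-hypothesis mean
n = len(data)

t_stat = (mean(data) - mu0) / (stdev(data) / math.sqrt(n))
p_value = 2 * (1 - NormalDist().cdf(abs(t_stat)))  # two-tailed

print(round(t_stat, 2), round(p_value, 3))  # p > 0.05: fail to reject H0
```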
Lecture 71 | 18:38 | ||
Errors are introduced based on the decisions of hypothesis tests. Type I and Type II errors are explored and we discuss the decisions made leading to these errors and the consequences associated with making these errors. The relationship between the two errors is investigated as well as the relationship to our level of significance and the probability of a Type I error. Many examples are given and we discuss the nature and consequences associated with both Type I and Type II errors. |
|||
Lecture 72 | 18:11 | ||
In this lesson, we look at the different errors that are possible in hypothesis testing, their consequences, and assess probabilities based on a hypothetical alternative mean. The power of the test is addressed, along with its relationship to a Type II error. We also consider the values of power and probabilities associated with Type I and Type II errors and discuss what is acceptable in practice. |
|||
Lecture 73 | 18:59 | ||
In this lesson, we look at the different errors that are possible in hypothesis testing, their consequences, and assess probabilities based on a hypothetical alternative mean. The power of the test is addressed, along with its relationship to a Type II error. We also consider the values of power and probabilities associated with Type I and Type II errors and discuss what is acceptable in practice. |
|||
Lecture 74 | 13:38 | ||
In this lesson, we look at sampling distributions for one sample proportions. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions. |
|||
Lecture 75 | 16:41 | ||
In this lesson, we look at sampling distributions for one sample proportions. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions. |
|||
Lecture 76 | 14:57 | ||
In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing. |
|||
Lecture 77 | 07:23 | ||
In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing. |
|||
Lecture 78 | 13:34 | ||
In this lesson, we look at one sample inference with proportions. Confidence Intervals and Hypothesis Tests are discussed in this lesson for one sample proportion inference. Conditions for inference are also discussed. We look at the sample size required to achieve a certain margin of error. We discuss the rules for normality and independence based on sample size and value of parameter. Several problems are presented and solved based on sample data involving proportions, using confidence intervals and hypothesis testing. |
|||
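The one-proportion z-interval from these lessons, with hypothetical counts chosen so the normality condition holds (both the success and failure counts exceed 10):

```python
from statistics import NormalDist
import math

successes, n = 54, 120  # hypothetical sample counts
p_hat = successes / n   # 0.45; note 54 successes and 66 failures, both >= 10

z_star = NormalDist().inv_cdf(0.975)               # 95% confidence
moe = z_star * math.sqrt(p_hat * (1 - p_hat) / n)  # margin of error

print(round(p_hat - moe, 3), round(p_hat + moe, 3))  # 0.361 0.539
```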
Lecture 79 | 17:06 | ||
In this lesson, we look at two sample inference with means. Confidence Intervals and Hypothesis Tests are discussed in this lesson for two sample mean inference. Conditions for inference are also discussed. We look at the differences between mean difference and difference of means, from matched pairs to two independent samples. Several problems are presented and solved based on sample data involving two sample procedures, using confidence intervals and hypothesis testing. We discuss the robustness of t-inference in particular with two sample procedures. |
|||
Lecture 80 | 18:14 | ||
This lesson continues two sample inference with means, with additional worked problems using confidence intervals and hypothesis tests. |
|||
Lecture 81 | 19:44 | ||
Two sample mean inference concludes with further examples and a discussion of the robustness of t-inference. |
|||
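The matched pairs versus independent samples distinction above can be illustrated in Python. The scores below are hypothetical, and scipy/numpy are assumed; note that the matched pairs test is just a one sample t-test on the differences.

```python
import numpy as np
from scipy import stats

# Matched pairs design: hypothetical before/after scores for the same 8 subjects
before = np.array([72, 68, 80, 75, 66, 77, 70, 74])
after  = np.array([75, 70, 84, 78, 69, 80, 72, 79])
t_pair, p_pair = stats.ttest_rel(after, before)   # t-test on the paired differences

# Two independent samples: Welch's t-test, which does not pool the variances
group_a = np.array([85, 90, 78, 92, 88, 76, 81, 89])
group_b = np.array([80, 75, 83, 70, 78, 74, 72, 79])
t_ind, p_ind = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"matched pairs: t = {t_pair:.2f}, p = {p_pair:.4f}")
print(f"independent samples: t = {t_ind:.2f}, p = {p_ind:.4f}")
```

Using the wrong procedure here changes the degrees of freedom and the standard error, which is exactly the mean-difference versus difference-of-means distinction the lesson draws.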
Lecture 82 | 18:20 | ||
In this lesson we discuss two sample inference with proportions. We begin with the sampling distribution of the difference between two sample proportions, then construct confidence intervals and hypothesis tests for the difference in population proportions. The conditions for inference are addressed. |
|||
Lecture 83 | 14:08 | ||
This lesson continues two sample inference with proportions, with additional worked examples. |
|||
Lecture 84 | 14:01 | ||
Two sample proportion inference concludes with further examples, and the conditions for inference are revisited. |
|||
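As a supplementary sketch of the two proportion procedures above, the following Python code (scipy assumed, counts invented for illustration) shows the pooled standard error used in the test and the unpooled standard error used in the interval.

```python
from math import sqrt
from scipy.stats import norm

def two_prop_test(x1, n1, x2, n2):
    """Two-sided z-test of H0: p1 = p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)               # pooled estimate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

def two_prop_confint(x1, n1, x2, n2, conf=0.95):
    """Confidence interval for p1 - p2 (unpooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    z = norm.ppf(1 - (1 - conf) / 2)
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - z * se, (p1 - p2) + z * se

# Example: 120/300 successes in group 1 versus 90/300 in group 2
z, p_value = two_prop_test(120, 300, 90, 300)
lo, hi = two_prop_confint(120, 300, 90, 300)
print(f"z = {z:.2f}, p = {p_value:.4f}, 95% CI for p1 - p2: ({lo:.3f}, {hi:.3f})")
```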
Lecture 85 | 12:40 | ||
In this lesson we discuss inference procedures for categorical data. We begin with the chi-square goodness of fit test, in which observed counts are compared to expected counts. Chi-square tests for independence and homogeneity are then discussed; we look at two-way tables for both tests and find expected counts. The chi-square test statistic is studied, as are the conditions for chi-square inference. Examples of hypothesis tests are given. | |||
Lecture 86 | 10:59 | ||
This lesson continues inference for categorical data, with additional chi-square examples. |
|||
Lecture 87 | 14:38 | ||
The chi-square tests for independence and homogeneity are examined further, with worked examples using two-way tables. |
|||
Lecture 88 | 11:49 | ||
Categorical inference concludes with further examples of chi-square hypothesis tests. |
|||
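The chi-square procedures above can be sketched in Python with scipy; the die rolls and two-way table below are made-up data for illustration.

```python
import numpy as np
from scipy import stats

# Goodness of fit: is a six-sided die fair? (hypothetical counts from 120 rolls)
observed = np.array([18, 22, 16, 25, 19, 20])
expected = np.full(6, observed.sum() / 6)          # 20 per face under H0
chi2_gof, p_gof = stats.chisquare(observed, expected)

# Test for independence: a hypothetical 2x3 two-way table of counts
table = np.array([[30, 45, 25],
                  [20, 35, 45]])
# chi2_contingency computes expected counts as (row total * column total) / grand total
chi2_ind, p_ind, dof, expected_counts = stats.chi2_contingency(table)

print(f"goodness of fit: chi2 = {chi2_gof:.2f}, p = {p_gof:.3f}")
print(f"independence: chi2 = {chi2_ind:.2f}, df = {dof}, p = {p_ind:.3f}")
```

The degrees of freedom follow the rules from the lesson: categories minus one for goodness of fit, and (rows − 1)(columns − 1) for the two-way table.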
Lecture 89 | 19:45 | ||
In this lesson, we look at inference for linear regression, with the construction of both confidence intervals and hypothesis tests for the slope. The conditions for inference are addressed. We look at the standard error component and the summary statistics found in computer output tables. Regression concepts are revisited. | |||
Lecture 90 | 12:02 | ||
This lesson continues linear regression inference, with additional worked examples. |
|||
Lecture 91 | 17:40 | ||
Linear regression inference concludes with further examples reading standard errors and summary statistics from computer output. |
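The regression inference quantities above (slope, its standard error, the t-statistic, and a confidence interval for the true slope) can be reproduced in Python with scipy. The (x, y) data below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data with a roughly linear relationship (true slope near 2)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 13.9, 16.2])

res = stats.linregress(x, y)     # slope, intercept, r, p-value, stderr of the slope

# t-test of H0: slope = 0 (linregress reports this two-sided p-value directly)
t = res.slope / res.stderr
df = len(x) - 2                  # n - 2 degrees of freedom for regression

# Confidence interval for the true slope: b ± t* · SE(b)
t_star = stats.t.ppf(0.975, df)
lo, hi = res.slope - t_star * res.stderr, res.slope + t_star * res.stderr

print(f"slope = {res.slope:.3f}, SE = {res.stderr:.3f}, t = {t:.1f}, p = {res.pvalue:.2g}")
print(f"95% CI for the slope: ({lo:.3f}, {hi:.3f})")
```

These are the same numbers a computer regression output table summarizes: the coefficient, its standard error, the t-ratio, and the p-value.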
Welcome!
I am excited to help you master your entry-level statistics and AP Statistics courses.
I am a graduate of the University of Nebraska, where I earned my bachelor's degree in Mathematics and Actuarial Science. I furthered my education in Texas, earning my Master of Science degree in Statistics from the University of Houston Clear Lake.
My first career began in Galveston, Texas, where I worked as an actuary for American National Insurance Company from January 1985 to August 1992. After eight years in the actuarial profession, I decided to turn my efforts toward teaching, and I taught for the next eight years at two public schools in the Houston area. I then switched my career path back to the actuarial field, moving to Omaha, Nebraska, for two years to work as a valuation actuary for Central States Health and Life Company. My family and I returned to Texas, where I continued in the actuarial profession, once again with American National Insurance Company in Galveston, for the next three years. I currently teach AP Statistics and college math courses at a school in the Houston area.
I am excited to bring the classroom to you. I have always had a passion for teaching and enjoy it very much. I hope you find the videos both educational and helpful!
Sincerely,
Jerry Linch