Mastering Statistical Quality Control with Minitab
Start learning the concepts and procedures of Statistical Quality Control, which are used intensively in business, engineering, manufacturing, and wherever decisions are driven by data and facts rather than by intuition alone.
The course covers the core chapters of Statistical Quality Control and helps you understand and apply even the most complex and sophisticated statistical methods in practice.
Learn how to decide which factors influence the key indicators of your product or service using the Analysis of Variance Method (ANOVA).
Get familiar with the concept of Statistical Process Control (SPC) and the proper use of Control Charts.
Learn how to quantify the actual and potential capability of your process to satisfy the needs of your customer.
Learn how to check the capability of your measurement system to ensure the required precision.
This course is comprehensive and covers the core chapters of Statistical Quality Control.
35 video lectures.
5 hours.
Enjoy the benefit of well-structured, short yet comprehensive video lectures.
In these lectures, everything happens inside a Minitab-driven analysis.
All in one place, within the same video lesson: you gain computer skills, get the theoretical background, and, above all, learn to interpret the outputs properly.
These lessons are specially prepared with intensive screen animations and concise yet comprehensive, well-structured explanations. If you like, you can turn on subtitles to support comprehension.
The verification of the assumptions for a test, the basic theoretical background, and even the formulas applied in a procedure appear in these video tutorials at the right moments of the analysis. The outputs are explained in detail, in an order that enables you to draw the appropriate conclusions.
In these video lectures Minitab is used not only to solve problems but also to explore and demonstrate different concepts of Statistical Quality Control, making you comfortable with the more complex ideas of the field.
Learn by watching the video and doing the same steps simultaneously in your own copy of Minitab.
Watching a video, pausing it, and repeating the same steps in your own Minitab is the best way to gain experience and practice in data manipulation. Repeating the sessions with different sample data develops your skill in solving statistical problems with software.
Not for you? No problem.
30-day money-back guarantee.
Forever yours.
Lifetime access.
Learn on the go.
Desktop, iOS and Android.
Get rewarded.
Certificate of completion.
Section 1: Introduction  

Lecture 1  02:50
Introduction and Data Files to Download (Preview)
Section 2: Analysis of Variance - One-Factor Experimental Design
Lecture 2  08:52  
From the lecture: "In a one-factor experimental design our objective is to determine the effect of one factor (often called one treatment) on one response variable when the factor may have several levels." "In the 4MachinesTyre data file the StoppingDistance results of the braking tests are recorded: the data show the distances in meters at which a test vehicle, fitted with different tires in each test, stopped. The tires were manufactured by 4 different vulcanizing machines, and our task is to determine whether or not there is a difference in the mean stopping distances of the tires produced by the 4 machines."
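The course carries out this analysis in Minitab. For readers who want to see the arithmetic behind the one-way F-test, here is a minimal, self-contained Python sketch; the function name and data layout are illustrative and not part of the course:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples.

    groups: list of lists, one per factor level (e.g. one per machine).
    Returns (F, df_between, df_within).
    """
    k = len(groups)                      # number of factor levels
    n = sum(len(g) for g in groups)      # total number of observations
    grand = mean(x for g in groups for x in g)

    # Between-groups (treatment) sum of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups (error) sum of squares
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

    ms_between = ss_between / (k - 1)    # treatment mean square
    ms_within = ss_within / (n - k)      # error mean square
    return ms_between / ms_within, k - 1, n - k
```

For the 4MachinesTyre example, each inner list would hold the stopping distances measured for one vulcanizing machine; a large F value relative to the F distribution with (k-1, n-k) degrees of freedom indicates a difference in the mean stopping distances.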

Lecture 3  04:30
Completely Randomized Design (CRD). Part II: Checking the Model's Assumptions
Lecture 4  06:29  
From the lecture: "Having established in an ANOVA test that at least one of the population means significantly differs from the others, we may be interested in which one or ones they are." "To determine which means differ from each other we can use Minitab's multiple comparison test capabilities. Of the four methods that Minitab provides, we choose Tukey's multiple comparison test, which involves comparing each pair of means."

Section 3: Analysis of Variance - Randomized Block Design and One Random Factor Design
Lecture 5  08:43  
From the lecture: "In a randomized block design there is one factor we want to study, but now we try to reduce some of the variability in the data by grouping the material, people, or whatever into relatively homogeneous blocks." "In general, we really don't care whether the blocking variable is significant or not. We do blocking to reduce variation so that we are more likely to detect differences in treatment." "To make comparisons between the treatment levels we should use the General Linear Model as the mathematical model of ANOVA."

Lecture 6  03:56  
From the lecture: "We are frequently interested in a factor that has a large number of possible levels. If we randomly select some levels from the population of factor levels, then we say that the factor is random." "In a random factor ANOVA we are not interested in the means of the response variable at the actual factor levels, but we want to get an estimate of the variance components of the response attributable to the random factor and to the random errors."

Section 4: Analysis of Variance - Two-Factor Experimental Design
Lecture 7  11:55  
From the lecture: "In this tutorial we begin to analyze the results of experiments designed to measure the impact of two factors on a response." "Similarly to the one-factor experimental layout, the statistical tests for the different effects are based on the fundamental mathematical relation that the total variation around the grand average, more precisely the total sum of squares, can be decomposed into the sums of squares for the treatments and for the random errors. Moreover, the sum of squares for the treatments can be further decomposed into the sums of squares for the factors and for the interaction. In this way, the basis of the statistical test for the significance of the different effects is the comparison of the variation for each effect to the variation for the random error."
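The decomposition described in the quote (total sum of squares = factor A + factor B + interaction + error) can be sketched in plain Python for a balanced design. This illustrates the standard textbook formulas, not the course's Minitab workflow, and all names are my own:

```python
from statistics import mean

def two_way_ss(cells):
    """Sum-of-squares decomposition for a balanced two-factor design.

    cells[i][j] is the list of replicate observations at level i of
    factor A and level j of factor B. Returns (ss_a, ss_b, ss_ab,
    ss_error); their sum equals the total sum of squares.
    """
    a, b, n = len(cells), len(cells[0]), len(cells[0][0])
    grand = mean(x for row in cells for cell in row for x in cell)
    row_means = [mean(x for cell in row for x in cell) for row in cells]
    col_means = [mean(x for row in cells for x in row[j]) for j in range(b)]
    cell_means = [[mean(cell) for cell in row] for row in cells]

    ss_a = b * n * sum((m - grand) ** 2 for m in row_means)
    ss_b = a * n * sum((m - grand) ** 2 for m in col_means)
    # Interaction: cell deviations not explained by the main effects
    ss_ab = n * sum((cell_means[i][j] - row_means[i] - col_means[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    # Error: replicate scatter around each cell mean
    ss_e = sum((x - cell_means[i][j]) ** 2
               for i in range(a) for j in range(b) for x in cells[i][j])
    return ss_a, ss_b, ss_ab, ss_e
```

Dividing each sum of squares by its degrees of freedom and comparing the resulting mean squares to the error mean square yields the F ratios the lecture refers to.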

Lecture 8  04:27  
From the lecture: "In this section we will study the case when, in a two-way experimental design, the interaction of the two factors does not prove to be significant." "In this case we may rebuild our effects model, dropping the interaction term out of the interactive model. This reduced relation is called the additive model."

Lecture 9  09:08  
From the lecture: "Having conducted the F-test in a two-way ANOVA, the next step is to rank the factor level means or the treatment level means and to check if there is a significant difference between any pair of these means." "In this example we wish to study the effect of 4 different heat treatment times and 3 different treatment temperatures on the tensile strength of the copper wires used in electrical cables."

Lecture 10  05:38  
From the lecture: "In this tutorial we will see the method of multiple comparisons for treatment means in a two-way ANOVA when the interaction of the two main factors is significant."

Lecture 11  08:29  
From the lecture: "The assumptions for the F-test in the two-way ANOVA are similar to those in the one-way ANOVA. Randomness, normality and equal variances are usually tested." "The methods for checking the randomness and the normality of the observations require determining the residual value for each observation. The residual value at a given observation is the difference between the observed value and the associated fitted value, which is calculated using the effects model of the two-way ANOVA. The fitted value is actually the expected value of the response at the given factor levels (in the given cell). This means that the residual values comprise the pure random error and the possible error due to the lack of an interaction term in the model. This is why we prefer to use the residual values: they are suitable not only for checking the randomness and the normality of the observations but also for checking the adequacy of the model we used."
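The residuals the lecture describes are simply observed minus fitted values; under the full two-way effects model the fitted value for each observation is its cell mean. A tiny illustrative Python sketch (not from the course, which does this in Minitab):

```python
from statistics import mean

def residuals(cells):
    """Residuals for a balanced two-way layout.

    cells[i][j] holds the replicate observations for one cell; the
    fitted value under the full (interaction) effects model is the
    cell mean, so each residual is observation minus cell mean.
    """
    return [[[x - mean(cell) for x in cell] for cell in row]
            for row in cells]
```

These residuals can then be checked with a run chart (randomness) and a normal probability plot (normality), as the lecture does.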

Lecture 12  03:43  
From the lecture: "Occasionally, we encounter a two-factor experiment with only a single replicate, that is, only one observation per cell. This situation needs special consideration."

Lecture 13  07:19  
From the lecture: "The presence of a nuisance factor may require that the experiment be run in blocks. Now, we will see how blocking can be incorporated in a two-factor factorial design." "Three different bulb types were used in the rear fog lamps of the car ahead, and two lamp installation arrangements, single and double, were investigated. The visual distances were measured in meters; these are the response values. Our task is to determine whether there is a significant difference in the mean visual distance values, using BulbType and Installation as two factors having 3 and 2 levels respectively. 4 drivers were involved in the experiment."

Lecture 14  05:06  
From the lecture: "Similarly to the one-factor experimental layout, we are frequently interested in two factors that have a large number of possible levels. If we randomly select some levels from both populations of factor levels, then we say that we have a two-factor factorial design with random factors." "In a random factor ANOVA we are not interested in the means of the response variable at the actual treatments or factor levels, but we want to get an estimate of the variance components of the response attributable to the random factors (to the manufacturer and to the clinic here), to their interaction, and to the random errors."

Lecture 15  05:58  
From the lecture: "We now consider the case when some of the parameters in the model are fixed unknown constants and the rest are random variables. Such a model is called a mixed effects model. In this example we test the effect of a drug for lowering blood sugar level. This drug is marketed by only 3 manufacturers and applied together with the diets of different clinics. In our experiment the drugs produced by each of the 3 manufacturers were distributed to 4 randomly selected clinics. 6 patients' blood sugar levels were measured at each of the 12 manufacturer-clinic pairs (called 12 treatments). Patients had similar conditions for all known parameters and were randomly assigned to the 12 treatments." "In this analysis, on one hand we want to check whether there is a difference in mean blood sugar levels among the manufacturers; on the other hand we want to determine what proportion of the total variation is due to the diets of the clinics."

Lecture 16  06:04  
From the lecture: "In certain experiments, the levels of one factor are similar but not identical for different levels of the other factor. Such an arrangement is called a nested or hierarchical design." "The different doses are tested on different patients, that is, the factor Patient is nested under the factor Dose."

Section 5: Analysis of Variance - Multifactor Balanced Design
Lecture 17  06:13  
From the lecture: " The results for the twofactor factorial design may be extended to the general case where several factors with several levels are arranged in a factorial experiment. " 

Section 6: Statistical Process Control - Control Charts in Phase I
Lecture 18  10:35  
From the lecture: "In this tutorial the basic concepts of statistical process control (SPC) and the idea of control charts are introduced." "Monitoring, controlling and eliminating the variation of a process in order to keep it in a state of statistical control, or to bring a process into statistical control, is the objective of so-called statistical process control (SPC)." "This graph could serve as a basis to specify the so-called control limits. These control limits are 3 sample-mean standard deviations from the center. If a sample mean occurs outside of these limits we take some action to improve the process. We think that if this happens, it has some special, assignable cause beyond the inherent common-cause chance variation of the process. We want to take action before we produce nonconforming units."

Lecture 19  07:53  
From the lecture: "To specify control limits we need to calculate the standard deviation of the sample means, denoted by Sigma Xbar. Sigma Xbar can be calculated by dividing the process standard deviation Sigma by the square root of the sample size n. To estimate the process standard deviation Sigma we use not the overall standard deviation but rather the so-called within-sample standard deviation, denoted by Sigma within." "The reason why we use this within-sample variation is to avoid overestimating the process variation Sigma. If the sample means differ due to some shifts or drifts of the process mean, the overall standard deviation would overestimate the standard deviation of a stable process in which these shifts are not present." "This is the typical layout of a control chart. The basic elements are the line connecting the sample means (or whatever sample statistic we are monitoring), the center line, and the two control limits."
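The construction quoted above (center line at the grand mean, limits at plus or minus 3 Sigma Xbar, with Sigma estimated from within-sample variation) can be illustrated in a few lines of Python. This sketch uses the pooled within-subgroup variance as the estimator; Minitab's defaults differ in detail (for example, unbiasing constants), so treat it as an approximation rather than the course's exact procedure:

```python
from math import sqrt
from statistics import mean

def xbar_limits(samples):
    """Center line and 3-sigma control limits for an Xbar chart.

    samples: list of equal-size subgroups. The process sigma is
    estimated from the pooled within-subgroup variance, so shifts of
    the process mean between subgroups do not inflate the estimate.
    """
    n = len(samples[0])
    center = mean(mean(s) for s in samples)          # grand mean
    # Pooled within-subgroup variance (average of subgroup variances)
    within_var = mean(sum((x - mean(s)) ** 2 for x in s) / (n - 1)
                      for s in samples)
    sigma_xbar = sqrt(within_var) / sqrt(n)          # sigma of sample means
    return center - 3 * sigma_xbar, center, center + 3 * sigma_xbar
```

A sample mean falling outside the returned (LCL, UCL) pair is the basic out-of-control signal described in the lectures.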

Lecture 20  09:26  
From the lecture: "When dealing with a quality characteristic that is a variable, it is necessary to monitor both the mean value and the variability of the quality characteristic. In this tutorial the Shewhart Xbar control chart is used for monitoring the mean." "Assuming that the variability of the process is in control, the basic criterion for judging whether the process mean is out of control on the Xbar chart is one or more points outside of the control limits. Supplementary criteria are sometimes used to increase the sensitivity of the control charts to smaller shifts; however, care should be exercised when using several decision rules simultaneously, since the overall type I error probability of a false alarm can be substantially increased."

Lecture 21  06:49  
From the lecture: "The most frequently used control charts for monitoring the variability of a quality characteristic are the Shewhart R charts, where changes in the ranges of the subgroups are monitored."

Lecture 22  09:29  
From the lecture: "In practice, Xbar charts and R charts are used simultaneously and not in isolation. Xbar-R charts are a combination of the two charts, and in this tutorial we will see their use in Phase I for historical data analysis and for ongoing control."

Lecture 23  08:31  
From the lecture: "Although Xbar and R charts are widely used, it is occasionally desirable to estimate the process standard deviation directly instead of indirectly through the use of the sample ranges. This leads to the Xbar-S charts, where S denotes the sample standard deviation." "Usually the use of these charts is desirable in two cases: either when we have a moderately large sample size, larger than 10, or when the sizes of the samples vary. It is easier to interpret these charts, where the center lines remain at their original positions, than for example the R chart, where frequent changes in sample size make the interpretation of the chart rather difficult due to the several shifts of the center line."

Lecture 24  11:28  
From the lecture: "There are many situations in which the sample size is one, that is, the sample consists of an individual unit. In such situations the control chart for individual units is useful. In this example the dissolved water content of diesel fuel in a tank is controlled each day at an oil refinery." "Our task is to decide whether the process is in statistical control or not."

Lecture 25  10:53  
From the lecture: "Many quality characteristics cannot be conveniently represented numerically. In such cases we usually classify each item inspected as either a conforming or a nonconforming unit. These classifications are called the attributes of the items. In this tutorial we will use the so-called p chart, an attribute control chart for the fraction nonconforming, to check the stability of a process. The np chart, for checking the number nonconforming, will also be shown."
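The p chart limits follow directly from the binomial model: with average fraction nonconforming p-bar and sample size n, the 3-sigma limits are p-bar plus or minus 3 times sqrt(p-bar(1 - p-bar)/n). An illustrative Python sketch of this standard formula (the course itself builds the chart in Minitab):

```python
from math import sqrt

def p_chart_limits(defectives, n):
    """Center line and 3-sigma limits for a p chart (fraction nonconforming).

    defectives: nonconforming counts for a series of samples, each of
    size n. The lower limit is truncated at zero, since a fraction
    cannot be negative.
    """
    p_bar = sum(defectives) / (len(defectives) * n)  # average fraction
    sigma_p = sqrt(p_bar * (1 - p_bar) / n)          # binomial sigma
    return max(0.0, p_bar - 3 * sigma_p), p_bar, p_bar + 3 * sigma_p
```

The np chart mentioned in the quote uses the same idea on the raw counts, with limits n·p-bar plus or minus 3·sqrt(n·p-bar(1 - p-bar)).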

Lecture 26  08:10  
From the lecture: "It is possible to develop control charts for either the total number of nonconformities in a unit or the average number of nonconformities per unit. These kinds of charts are the c chart and the u chart, respectively, and in this tutorial we will see how to construct and apply them in practice."

Section 7: Statistical Process Control - Control Charts in Phase II
Lecture 27  10:20  
From the lecture: "In Phase II process monitoring, when small process shifts are of interest, the cumulative sum (CUSUM) control chart is a very effective alternative to the Shewhart control charts. In this example, volumes of apple juice in boxes are controlled." "The cumulative sum (CUSUM) chart is more effective because it directly incorporates all the information in the sequence of samples by calculating cumulative sums of the deviations of the sample values from a target value."
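The tabular (one-sided) form of the CUSUM accumulates deviations from the target beyond a slack value k, resetting at zero. A minimal Python sketch of this standard scheme, offered as an illustration alongside the course's Minitab treatment:

```python
def tabular_cusum(xs, target, k):
    """Upper and lower one-sided cumulative sums (tabular CUSUM).

    k is the slack (reference) value, typically half the shift to be
    detected, in the same units as the data. A shift is signaled when
    a sum exceeds a decision interval h (commonly 4-5 sigma), which
    the caller chooses separately.
    """
    c_plus, c_minus = [0.0], [0.0]
    for x in xs:
        # Accumulate only deviations beyond the slack band, floor at 0
        c_plus.append(max(0.0, x - (target + k) + c_plus[-1]))
        c_minus.append(max(0.0, (target - k) - x + c_minus[-1]))
    return c_plus[1:], c_minus[1:]
```

Because each sum carries forward the history of past deviations, a sustained small shift grows the statistic steadily, which is exactly why the CUSUM beats a Shewhart chart at detecting small shifts.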

Lecture 28  07:36  
From the lecture: "When we use a one-sided CUSUM chart, the Fast Initial Response (FIR) procedure can increase the sensitivity of the chart at process startup after a corrective action, to quickly detect if the corrective action did not reset the mean to the target value." "Two different startup situations are investigated."

Lecture 29  05:31  
From the lecture: "An alternative procedure to the use of a one-sided CUSUM is the so-called V-mask control scheme." "In the V-mask procedure the original cumulative sums are plotted on the CUSUM chart without the use of any slack value and without resetting the cumulative sums to zero when their signs change. To decide whether the process is in control, we place a V-mask on the chart with the center point on the last plotted value."

Lecture 30  05:13  
From the lecture: "The Exponentially Weighted Moving Average control chart is a very good alternative to the Shewhart charts when we are interested in detecting small shifts." "Similarly to the CUSUM charts, the Exponentially Weighted Moving Average (EWMA) control chart is typically used with individual observations." "The insensitivity of the Exponentially Weighted Moving Average control chart to the non-normality of the data is a basic feature of this chart."
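The EWMA statistic is the recursion z_i = lambda·x_i + (1 - lambda)·z_{i-1}, started at the target or historical mean. A few illustrative lines of Python (names are my own; the course plots this chart in Minitab):

```python
def ewma(xs, lam, z0):
    """Exponentially weighted moving average sequence.

    lam (0 < lam <= 1) weights the newest observation; z0 is usually
    the process target or the historical mean. Small lam values give
    long memory and higher sensitivity to small shifts.
    """
    zs, z = [], z0
    for x in xs:
        z = lam * x + (1 - lam) * z   # the EWMA recursion
        zs.append(z)
    return zs
```

Because each z value averages over many past observations, the plotted statistic is nearly normal even for non-normal individual data, which is the robustness the lecture highlights.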

Lecture 31  03:58  
From the lecture: "The Moving Average control chart is the simplest member of the time-weighted control chart family. In this tutorial we solve the same problem that we investigated with the Exponentially Weighted Moving Average control chart."

Section 8: Product Characterization and Process Capability Analysis  
Lecture 32  12:02  
From the lecture: "In a capability analysis we are interested in the uniformity of the output. In a product characterization study, when we have no direct observation of the time history of production, we can only estimate the distribution of the product quality characteristic and the process yield, which is the fraction conforming to specifications. In this example we obtained a sample of 150 ball bearings from a supplier. The diameter of a ball bearing is a critical-to-quality characteristic." "Let's determine the so-called capability ratios and the performance of the production."
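The capability ratios referred to here are conventionally Cp (potential capability, ignoring centering) and Cpk (actual capability, penalizing an off-center mean). An illustrative Python sketch using the standard textbook formulas (the course computes these, and their confidence intervals, in Minitab):

```python
def capability_ratios(mu, sigma, lsl, usl):
    """Potential and actual capability ratios.

    mu, sigma: estimated process mean and standard deviation.
    lsl, usl: lower and upper specification limits.
    Cp compares spec width to 6-sigma spread; Cpk measures the
    distance from the mean to the nearer spec limit in 3-sigma units.
    """
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

When the process is perfectly centered, Cpk equals Cp; any off-centering makes Cpk smaller, which is why both ratios are reported together.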

Lecture 33  12:12  
From the lecture: "When we can directly observe a process and we can control the data-collection activity, we can pursue a true process capability study. Knowing the time sequence of the data, inferences can be made about the current and the potential capability of the process. In this tutorial we investigate the same ball bearing manufacturing example that we studied in the product characterization section, but now we will see the aspects of the problem from the viewpoint of the manufacturer."

Lecture 34  05:42  
From the lecture: "In a capability study, if we assume normality of our data and use the corresponding Minitab command, we always have to check the validity of this assumption."

Section 9: Gauge and Measurement System Analysis (Gauge R&R Study)  
Lecture 35  10:56  
From the lecture: "In this tutorial we are dealing with Measurement System Analysis, which can help to identify and measure the sources of error in our data. The so-called Gauge R&R Study measures precision error by taking one part and measuring it several times, with several different people. In this Gauge R&R worksheet, 10 parts are measured by three different people, three times each, providing a total of 90 results."
An engineer and mathematician, he has been teaching various subjects of Engineering and Mathematics for more than 20 years at a college in Hungary. Besides teaching, he is intensively involved in industrial projects as a consultant or structural designer. His special fields are Quality Statistics, Statistical Process Control, Multivariate Statistical Analysis and Stochastic Processes. He was elected President of the Chamber of Engineers in Fejer county and holds several awards. He served as rector of the College of Dunaujvaros for 8 years.