# Mastering Statistical Quality Control with Minitab


- 4.5 hours on-demand video
- 2 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion

- Get familiar with the chapters of Statistics that are intensively used in Statistical Quality Control and in Six Sigma projects.
- Be the master of Minitab.
- Learn how to analyze and evaluate your Measurement System, and how to use different Capability measures in the MEASURE phase to quantify the actual and the potential capability of your process. Learn how to conduct a Minitab session when you analyze your experiment with One- or Multifactor ANOVA in the ANALYSE phase, and how to interpret the session outputs of the analysis properly.
- Get familiar with the concepts of Statistical Process Control and the techniques of using Control Charts in the CONTROL phase for quality improvement, in both the manufacturing and the service sectors.

- Download and install Minitab. Version 17.1 is used in the video lectures, but earlier or later versions can also be used, since little has changed in the way data are manipulated.
- Download the dataset used throughout the course. The dataset is downloadable from “Lecture 1. Introduction.”
- If you have never used the Minitab software before, take the free short course “Getting Familiar with Minitab”, where the screens and the most basic commands of Minitab are introduced.
- To complete this course you should be familiar with the basic concepts of Statistics. If you need help in this field, or want to polish your knowledge, take my other course “Foundation of Statistics with Minitab” here on Udemy and use the relevant chapters whenever you need them.

Start learning the concepts and procedures of **Statistical Quality Control**, which are intensively used in business, engineering, manufacturing and many other fields where decision-making is driven by data and facts rather than just by intuition.

The course covers the core chapters of **Statistical Quality Control** and helps you understand and apply the most complex and sophisticated statistical methods in practice.

**Learn how to decide which factors influence the key indicators of your product or service using the Analysis of Variance Method (ANOVA).**

- Completely Randomized Design.
- Randomized Block Design.
- Completely Randomized Design with Random Factor.
- One-way and Two-way Experimental Layout.
- Model Building in Multifactor Balanced Design.

**Get familiar with the concept of Statistical Process Control (SPC) and the proper use of Control Charts.**

- Stability and Capability of a process.
- Control Charts for Variables.
- Control Charts for Attributes.
- Application of Control Charts in Phase I and Phase II.
- Shewhart Charts.
- Cumulative Sum (CUSUM) Charts.
- Exponentially Weighted Moving Average (EWMA) Charts.
- Moving Average (MA) Charts.
- Normal and Non-normal data.
- Fast Initial Response Method.

**Learn how to quantify the actual and potential capability of your process to satisfy the needs of your customer.**

- Product Characterization.
- Process Capability Analysis.
- Overall and Potential Capability.
- Capability Analysis with Non-normal Data.

**Learn the way of checking the capability of your measurement system to ensure the required precision.**

- Gauge and Measurement System Analysis.
- The Analysis of Variance Method with Balanced ANOVA.
- Minitab Gage R&R Tools.

**This course is comprehensive and covers the core chapters of Statistical Quality Control.**

**35** video lectures.

**5** hours.

**Enjoy the benefit of the well-structured, short and yet comprehensive video lectures**.

In these lectures, everything happens inside a Minitab-driven analysis.

All in one place, within the same video lesson: gaining computer skills, getting the theoretical background and, most importantly, learning to interpret the outputs properly.

These lessons are specially prepared with intensive screen animations and concise yet comprehensive, well-structured explanations. If you like, you can turn on subtitles to support comprehension.

The verification of the assumptions for a test, the basic theoretical background and even the formulas applied in a procedure appear in these video tutorials at the right moments of the analysis. The outputs are explained in detail, in an order that enables you to draw the appropriate conclusions.

In these video lectures Minitab is used not only to solve problems but also to explore and demonstrate different concepts of Statistical Quality Control, making even the more complex concepts of the field easy to grasp.

**Learn by watching the video and doing the same steps simultaneously in your own Minitab.**

Watching a video, pausing it and repeating the same steps in your own Minitab is the best way to gain experience and practice in data manipulation. Repeating the sessions with different sample data develops your skill in solving statistical problems with software.

The course is ideal for two groups of audiences:

- Undergraduate or graduate students who study Quality Management at university, need help in understanding the concepts of statistical methods for quality improvement, and want to gain skill in using Minitab for problem solving in that area.
- Professionals in the field of quality who work on Lean Six Sigma projects and occasionally need support in the proper use of statistical methods in the different phases of a project.

From the lecture:

"In a one-factor experimental design our objective is to determine the effect of one factor (often called one treatment) on one response variable when the factor may have several levels."

"In the 4MachinesTyre data file the StoppingDistance results of the braking tests are recorded; the data show the distances, in meters, at which a test vehicle fitted with different tires in each test stopped. The tires were manufactured by 4 different vulcanizing machines, and our task is to determine whether or not there is a difference in the mean stopping distances of the tires produced by the 4 machines."
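The course performs this analysis in Minitab, but the F statistic behind a one-factor ANOVA is easy to compute by hand. The sketch below uses invented stopping distances, not the actual 4MachinesTyre data:

```python
from statistics import mean

# Hypothetical stopping distances (m) for tires from 4 vulcanizing
# machines -- illustrative numbers, not the course's 4MachinesTyre data.
machines = {
    "M1": [41.0, 42.0, 43.0],
    "M2": [44.0, 45.0, 46.0],
    "M3": [41.5, 42.5, 43.5],
    "M4": [47.0, 48.0, 49.0],
}

all_values = [x for group in machines.values() for x in group]
grand_mean = mean(all_values)

# Between-treatment (machine) and within-treatment (error) sums of squares.
ss_treat = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in machines.values())
ss_error = sum((x - mean(g)) ** 2 for g in machines.values() for x in g)

df_treat = len(machines) - 1                # k - 1
df_error = len(all_values) - len(machines)  # N - k

f_stat = (ss_treat / df_treat) / (ss_error / df_error)
print(round(f_stat, 4))  # 22.6875
```

A large F means the variation between machine means is large relative to the variation within machines; Minitab's session output reports this same statistic together with its p-value.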

From the lecture:

"Having established in an ANOVA test that at least one of the population means differs significantly from the others, we may be interested in which one or ones those are."

"To determine which means differ from each other we can use Minitab's multiple comparison test capabilities. From the four different methods that Minitab provides, we choose Tukey's multiple comparison test, which involves comparing each pair of means."
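Minitab runs Tukey's test for you, but the idea can be sketched by hand: each pair of means is compared against an "honest significant difference" built from the error mean square. The numbers below (group means, MSE and the studentized-range critical value q ≈ 4.53 for 4 groups and 8 error degrees of freedom) are illustrative assumptions, not the course's data:

```python
from itertools import combinations
from math import sqrt

# Group means and MSE carried over from a one-way ANOVA with k = 4
# machines, n = 3 observations each (illustrative numbers).
group_means = {"M1": 42.0, "M2": 45.0, "M3": 42.5, "M4": 48.0}
mse, n = 1.0, 3

# q(0.05; k=4, df=8) -- approximate studentized-range table value.
q_crit = 4.53
hsd = q_crit * sqrt(mse / n)  # honest significant difference

significant = [
    (a, b) for a, b in combinations(group_means, 2)
    if abs(group_means[a] - group_means[b]) > hsd
]
print(significant)  # [('M1', 'M2'), ('M1', 'M4'), ('M2', 'M4'), ('M3', 'M4')]
```

Pairs whose mean difference exceeds the HSD are declared significantly different, which is the same conclusion Minitab's grouping-letters output encodes.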

From the lecture:

" In a randomized block design there is one factor we want to study, but now we try to reduce some of the variability in the data by grouping the material, people or whatever into relatively homogeneous blocks. "

"In general, we really don't care whether the blocking variable is significant or not. We do blocking to reduce variation so that we are more likely to detect differences in treatment. "

"To make comparisons between the treatment levels we should use the General Linear Model as the mathematical model of ANOVA."

From the lecture:

"We are frequently interested in a factor that has a large number of possible levels. If we randomly select some levels from the population of factor levels, then we say that the factor is random."

"In a random-factor ANOVA we are not interested in the means of the response variable at the actual factor levels; rather, we want to estimate the variance components of the response attributable to the random factor and to the random errors."

From the lecture:

" In this tutorial we begin to analyze the results of experiments designed to measure the impact of two factors on a response. "

"Similarly to the one-factor experimental layout, the statistical tests for the different effects are based on the fundamental mathematical relation that the total variation around the grand average, more precisely the total sum of squares, can be decomposed into the sums of squares for the treatments and for the random errors. Moreover, the sum of squares for the treatments can be further decomposed into the sums of squares for the factors and for their interaction. In this way, the basis of the statistical tests for the significance of the different effects is the comparison of the variation for each effect with the variation for the random error."
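The decomposition described in the excerpt can be verified numerically. The sketch below builds a tiny invented 2×2 factorial design with 2 replicates per cell and checks that the sums of squares for factor A, factor B, the interaction and the error add up exactly to the total sum of squares:

```python
from statistics import mean

# Invented 2x2 factorial data, 2 replicates per cell: obs[(a, b)] = replicates.
obs = {
    ("a1", "b1"): [10.0, 12.0], ("a1", "b2"): [14.0, 16.0],
    ("a2", "b1"): [20.0, 22.0], ("a2", "b2"): [30.0, 32.0],
}
a_levels, b_levels, r = ["a1", "a2"], ["b1", "b2"], 2

flat = [x for cell in obs.values() for x in cell]
grand = mean(flat)
a_mean = {a: mean([x for b in b_levels for x in obs[(a, b)]]) for a in a_levels}
b_mean = {b: mean([x for a in a_levels for x in obs[(a, b)]]) for b in b_levels}
cell_mean = {k: mean(v) for k, v in obs.items()}

ss_a = r * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
ss_b = r * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
ss_ab = r * sum(
    (cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
    for a in a_levels for b in b_levels
)
ss_e = sum((x - cell_mean[k]) ** 2 for k, v in obs.items() for x in v)
ss_t = sum((x - grand) ** 2 for x in flat)

# The fundamental identity: SS_T = SS_A + SS_B + SS_AB + SS_E.
assert abs(ss_t - (ss_a + ss_b + ss_ab + ss_e)) < 1e-9
print(ss_a, ss_b, ss_ab, ss_e, ss_t)  # 338.0 98.0 18.0 8.0 462.0
```

Each F-test in the two-way ANOVA table is then a ratio of one of these components (divided by its degrees of freedom) to the error mean square.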

From the lecture:

" In this section we will study the case when in a two-way experimental design the interaction of the two factors does not prove to be significant. "

" In this case we may rebuild our effects model dropping this interaction term out of the interactive model. This reduced relation is called the additive model. "

From the lecture:

" Having conducted the F-test in a two-way ANOVA, the next step is to rank the factor level means or the treatment level means and to check if there is a significant difference between any pair of these means. "

"In this example we wish to study the effect of 4 different heat treatment times and 3 different treatment temperatures on the tensile strength of the copper wires used in electrical cables. "

From the lecture:

"The assumptions for the F-test in the two-way ANOVA are similar to those in the one-way ANOVA. Randomness, normality and equal variances are usually tested."

"The methods for checking the randomness and the normality of the observations require determining the residual value for each observation. The residual value for a given observation is the difference between the observed value and the associated fitted value, which is calculated using the effects model of the two-way ANOVA. The fitted value is actually the expected value of the response at the given factor levels (in the given cell). This means that the residual values comprise the pure random error and the possible error due to the lack of the interaction term in the model. This is why we prefer residual values: they are suitable not only for checking the randomness and normality of the observations but also for checking the adequacy of the model we used."
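In the full interactive model the fitted value for every observation is simply its cell mean, so the residuals described above are quick to compute. A minimal sketch with invented observations:

```python
from statistics import mean

# Invented observations for a two-way design; in the full interactive
# model the fitted value is the cell mean, so residual = observed - fitted.
cells = {
    ("t1", "low"): [10.0, 12.0],
    ("t1", "high"): [14.0, 16.0],
    ("t2", "low"): [20.0, 22.0],
}
residuals = [x - mean(v) for v in cells.values() for x in v]

# Residuals within each cell sum to zero by construction.
print(residuals)  # [-1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
```

These residuals are what Minitab plots in its four-in-one residual graphs for checking randomness, normality and model adequacy.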

From the lecture:

"The presence of a nuisance factor may require that the experiment be run in blocks. Now, we will see how blocking can be incorporated into a two-factor factorial design."

"Three different bulb types were used in the rear fog lamps of the car ahead, and two lamp installation arrangements, single and double, were investigated. The visual distances were measured in meters; these are the response values. Our task is to determine whether there is a significant difference in the mean visual distance values, using BulbType and Installation as two factors with 3 and 2 levels respectively. Four drivers were involved in the experiment."

From the lecture:

"Similarly to the one-factor experimental layout, we are frequently interested in two factors that have a large number of possible levels.

If we randomly select some levels from both populations of factor levels, then we say that we have a two-factor factorial design with random factors."

"In a random-factor ANOVA we are not interested in the means of the response variable at the actual treatments or factor levels; rather, we want to estimate the variance components of the response attributable to the random factors (to the manufacturer and to the clinic here), to their interaction and to the random errors."

From the lecture:

"We now consider the case when some of the parameters in the model are fixed unknown constants and the rest are random variables. Such a model is called a mixed effects model.

In this example we test the effect of a drug for lowering blood sugar levels. This drug is marketed by only 3 manufacturers and is applied together with the diets of different clinics. In our experiment the drugs produced by each of the 3 manufacturers were distributed to 4 randomly selected clinics. 6 patients' blood sugar levels were measured at each of the 12 manufacturer-clinic pairs (called 12 treatments). The patients had similar conditions for all known parameters and were randomly assigned to the 12 treatments."

" In this analysis, on one hand we want to check whether there is a difference in mean blood sugar levels among the manufacturers, on the other hand we want to determine what proportion of total variation is due to diets of clinics. "

From the lecture:

"In certain experiments, the levels of one factor are similar but not identical for different levels of the other factor. Such an arrangement is called a nested or hierarchical design."

"The different doses are tested on different patients; that is, the factor Patient is nested under the factor Dose."

From the lecture:

"In this tutorial the basic concepts of statistical process control (SPC) and the idea of control charts are introduced."

"Monitoring, controlling and eliminating the variation of a process in order to keep it in, or bring it into, a state of statistical control is the objective of so-called statistical process control (SPC)."

"This graph could serve as a basis to specify the so-called control limits. These control limits are 3 sample-mean standard deviations from the center. If a sample mean occurs outside these limits, we take some action to improve the process. We think that if this happens, it has some special, assignable cause beyond the inherent common-cause chance variation of the process. We want to take action before we produce nonconforming units."

From the lecture:

"To specify the control limits we need to calculate the standard deviation of the sample means, denoted by Sigma Xbar. Sigma Xbar can be calculated by dividing the process standard deviation Sigma by the square root of the sample size n.

To estimate the Sigma process standard deviation we use not the overall standard deviation but the so-called within-sample standard deviation, denoted by Sigma within."

"The reason we use this within-sample variation is to avoid overestimating the Sigma process variation. If the sample means differ due to shifts or drifts of the process mean, the overall standard deviation would overestimate the standard deviation of a stable process in which these shifts are not present."

"This is the typical layout of a control chart.

The basic elements are the line connecting the sample means (or whatever sample statistic we are monitoring), the center line and the two control limits."
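The control-limit recipe quoted above can be sketched in a few lines. The data are invented, and the pooled within-sample standard deviation is just one simple estimator of Sigma within (Minitab also offers Rbar- and Sbar-based estimates):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical Phase I data: 5 samples of size n = 4.
samples = [
    [10.1, 9.9, 10.0, 10.2],
    [10.0, 10.1, 9.8, 10.1],
    [9.9, 10.0, 10.2, 10.1],
    [10.3, 10.0, 9.9, 10.0],
    [10.0, 9.8, 10.1, 10.1],
]
n = len(samples[0])

center = mean(x for s in samples for x in s)  # grand mean (center line)

# Pooled within-sample variance -> Sigma within (equal sample sizes assumed).
sigma_within = sqrt(mean(stdev(s) ** 2 for s in samples))
sigma_xbar = sigma_within / sqrt(n)  # Sigma Xbar = Sigma / sqrt(n)

ucl = center + 3 * sigma_xbar
lcl = center - 3 * sigma_xbar
print(round(lcl, 3), round(center, 3), round(ucl, 3))
```

A sample mean plotted outside [LCL, UCL] is the signal to look for an assignable cause.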

From the lecture:

" When dealing with a quality characteristic that is a variable, it is necessary to monitor both the mean value and the variability of the quality characteristic. In this tutorial the Shewhart Xbar control chart is used for monitoring the mean. "

"Assuming that the variability of the process is in control, the basic criterion on the Xbar chart for deciding whether the process mean is out of control is one or more points outside the control limits.

Supplementary criteria are sometimes used to increase the sensitivity of the control charts to smaller shifts; however, care should be exercised when using several decision rules simultaneously, since the overall type I error (false alarm) probability can be substantially increased."
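The warning about stacking decision rules can be quantified. If, as a simplifying assumption, the rules act independently, the combined false-alarm probability for individual rates α_i is 1 − Π(1 − α_i); the rates below are illustrative:

```python
# Combined false-alarm probability for several decision rules applied
# together, assuming (as an approximation) that the rules act independently.
alphas = [0.0027, 0.0027, 0.0027, 0.0027]  # e.g. four 3-sigma-type rules

overall = 1.0
for a in alphas:
    overall *= 1.0 - a
overall = 1.0 - overall

print(round(overall, 5))  # 0.01076 -- four rules roughly quadruple the rate
```

In reality supplementary run rules on the same chart are correlated, so this is only a rough illustration of why the overall alarm rate grows with every rule added.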

From the lecture:

"In practice Xbar charts and R charts are used simultaneously, not in isolation.

Xbar-R charts are a combination of the two, and in this tutorial we will see their use in Phase I for historical data analysis and for ongoing control."

From the lecture:

"Although Xbar and R charts are widely used, it is occasionally desirable to estimate the process standard deviation directly instead of indirectly through the use of the sample ranges. This leads to the Xbar-S charts, where S denotes the sample standard deviation."

"The use of these charts is usually desirable in two cases: either when the sample size is moderately large (larger than 10) or when the sample sizes vary. It is easier to interpret these charts, where the center lines remain at their original positions, than, for example, the R chart, where frequent changes in sample size make interpretation rather difficult due to the repeated shifts of the center line."

From the lecture:

"There are many situations in which the sample size is one, that is, the sample consists of an individual unit. In such situations the control chart for individual units is useful.

In this example the dissolved water content of diesel fuel in a tank is controlled each day at an oil refinery."

"Our task is to decide whether the process is in statistical control or not."

From the lecture:

"Many quality characteristics cannot be conveniently represented numerically. In such cases we usually classify each inspected item as either a conforming or a nonconforming unit. These classifications are called the attributes of the items.

In this tutorial we will use the so-called p-chart, an attribute control chart for the fraction nonconforming, to check the stability of a process.

The np-chart for checking the number nonconforming will also be shown."

From the lecture:

"It is possible to develop control charts for either the total number of nonconformities in a unit or the average number of nonconformities per unit. These kinds of charts are the c chart and the u chart, respectively, and in this tutorial we will see how we can construct and apply them in practice."

From the lecture:

"In Phase II process monitoring, when small process shifts are of interest, the cumulative sum (CUSUM) control chart is a very effective alternative to the Shewhart control charts.

In this example the volumes of apple juice in boxes are controlled."

"The cumulative sum (CUSUM) chart is more effective because it directly incorporates all the information in the sequence of samples by calculating the cumulative sums C of the deviations of the sample values from a target value."
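A one-sided tabular CUSUM of the kind described here accumulates only the deviations beyond a reference value K and signals when the sum crosses a decision interval H. The numbers below are illustrative: K = 0.5 and H = 4 correspond to the common k = 0.5σ, h = 4σ choice with σ = 1:

```python
# Tabular one-sided upper CUSUM: C+_i = max(0, x_i - (target + K) + C+_{i-1}).
# target, K (reference value) and H (decision interval) are assumed values.
target, K, H = 100.0, 0.5, 4.0
xs = [100.1, 99.8, 100.2, 99.9, 101.2, 101.5, 101.3, 101.6, 101.4]

c_plus, signals = 0.0, []
for i, x in enumerate(xs):
    c_plus = max(0.0, x - (target + K) + c_plus)
    if c_plus > H:
        signals.append(i)  # out-of-control signal at sample i

print(signals)  # [8] -- the upward shift is caught at the 9th sample
```

Because every observation past the reference value keeps adding to C+, small sustained shifts accumulate quickly, which is exactly why CUSUM beats Shewhart charts for small shifts.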

From the lecture:

"When we use a one-sided CUSUM chart, the Fast Initial Response (FIR) procedure can increase the sensitivity of the chart at process start-up after a corrective action, to quickly detect whether the corrective action failed to reset the mean to the target value."

"Two different start-up situations are investigated."

From the lecture:

" An alternative procedure to the use of a one-sided CUSUM is the so called V-mask control scheme. "

"In the V-mask procedure the original cumulative sums are plotted on the CUSUM chart without the use of any slack value and without resetting the cumulative sums to zero when their sign changes.

To decide whether the process is in control we place a V-mask on the chart with its center point on the last plotted value."

From the lecture:

"The Exponentially Weighted Moving Average control chart is a very good alternative to the Shewhart charts when we are interested in detecting small shifts."

" Similarly to the CUSUM charts the Exponentially Weighted Moving Average (EWMA) control chart is typically used with individual observations. "

" This insensitivity of the Exponentially Weighted Moving Average control chart to the non-normality of the data is a basic feature of this chart. "
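The EWMA statistic weights recent observations most heavily: z_i = λx_i + (1 − λ)z_{i−1}, with time-varying control limits that widen toward an asymptote. The sketch below uses common textbook-style choices (λ = 0.2, limit width L = 3) and invented data:

```python
from math import sqrt

# EWMA statistic z_i = lam*x_i + (1 - lam)*z_{i-1}, z_0 = target.
# lam, L, target and sigma are illustrative assumed values.
lam, L, target, sigma = 0.2, 3.0, 50.0, 1.0
xs = [50.1, 49.8, 50.3, 51.2, 51.6, 51.8, 52.0]

z = target
out_of_control = []
for i, x in enumerate(xs, start=1):
    z = lam * x + (1 - lam) * z
    # Exact time-varying control limit half-width (widens toward asymptote).
    half_width = L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    if abs(z - target) > half_width:
        out_of_control.append(i)

print(out_of_control)  # [7] -- the small sustained shift is eventually flagged
```

Because z averages over many observations, individual non-normal values matter less, which is the robustness to non-normality mentioned in the excerpt.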

From the lecture:

"In a capability analysis we are interested in the uniformity of the output. In a product characterization study, when we have no direct observation of the time history of production, we can only estimate the distribution of the product quality characteristic and the process yield, which is the fraction conforming to specifications.

In this example we obtained one sample of 150 ball bearings from a supplier. The diameter of a ball bearing is a critical-to-quality characteristic."

"Let's determine the so-called capability ratios and the performance of the production."
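The two most common capability ratios are quick to compute once the process mean and standard deviation are estimated. The spec limits and estimates below are illustrative, not the course's ball-bearing numbers:

```python
# Capability ratios from an estimated process mean and standard deviation.
# Spec limits and estimates are assumed values (e.g. a diameter in mm).
usl, lsl = 10.05, 9.95
mu, sigma = 10.01, 0.01

cp = (usl - lsl) / (6 * sigma)               # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability

print(round(cp, 2), round(cpk, 2))  # 1.67 1.33
```

Cp ignores centering, so Cpk < Cp here signals that the process mean sits off-center within the specification band; the gap between the two is the improvement available from re-centering alone.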

From the lecture:

"When we can directly observe a process and we can control the data-collection activity, we can pursue a true process capability study. Knowing the time sequence of the data, inferences can be made about the current and the potential capability of the process.

In this tutorial we investigate the same example of ball bearing manufacturing that we studied in the product characterization section, but now, we will see the aspects of the problem from the viewpoint of the manufacturers. "

From the lecture:

"In this tutorial we are dealing with Measurement System Analysis, which can help to identify and measure the sources of error in our data. The so-called Gauge R and R Study measures precision error by taking one part and measuring it several times, with several different people.

In this Gauge R and R worksheet 10 parts are measured by three different people, three times each, providing a total of 90 results."
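The two precision components a Gauge R&R study separates can be sketched with a toy layout. This is a rough hand computation, not Minitab's balanced-ANOVA method, and the data are invented (2 parts × 2 operators × 2 trials instead of the course's 10 × 3 × 3 = 90 results):

```python
from statistics import mean, variance

# Tiny illustrative Gauge R&R layout: measurements[(part, operator)] = trials.
measurements = {
    ("P1", "OpA"): [10.0, 10.2], ("P1", "OpB"): [10.1, 10.3],
    ("P2", "OpA"): [12.0, 12.4], ("P2", "OpB"): [12.1, 12.3],
}

# Repeatability (equipment variation): pooled within-cell variance, i.e.
# the spread when the same operator re-measures the same part.
repeatability_var = mean(variance(trials) for trials in measurements.values())

# Reproducibility (appraiser variation): variation between operator averages
# (a crude estimate; the ANOVA method subtracts the other components).
op_values = {}
for (part, op), trials in measurements.items():
    op_values.setdefault(op, []).extend(trials)
reproducibility_var = variance([mean(v) for v in op_values.values()])

print(repeatability_var, reproducibility_var)
```

Minitab's Gage R&R tools report these same components (plus part-to-part variation) via the balanced ANOVA method, expressed as percentages of the total study variation.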