I’ve been teaching classes and using Six Sigma in improvement projects for many years, and one concept people struggle with and tend to misunderstand is the typical 1.5 sigma shift. Why 1.5? Why "typical"? Why consider a shift at all? Must it always be 1.5? And many more questions, worries, and grey areas. People eventually accept the idea because the books and the available software always say "typically a 1.5 sigma shift" when calculating the sigma quality level, SQL, as if it were an axiom in mathematics. An axiom is defined as "a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments", meaning you do not prove an axiom. My goal today is simply to explain where the shift comes from, where to use it as is, and how to calculate the shift actually still allowed by a process that has already drifted away from its expected mean, which in turn implies that you do not necessarily always have a 1.5 σ shift allowed in your SQL calculations.

William B. Smith, the father of Six Sigma at Motorola, used to say back in 1986-1987, when Motorola University and the Six Sigma Institute were just being founded, that it had something to do with process control charts, and he would usually explain it. How do I know? Because I received first-hand materials from Motorola through my brother, who worked at the semiconductor plant in Austin, Texas, and was interested in sharing those DMAIC techniques with me.

Before I continue I must first remind you that electronic computers and calculators did not exist in the 1920s. You also probably know that the inventor of control charts is Walter Shewhart, back in the 1920s. Shewhart therefore did all his calculations manually. Shewhart's assumptions in calculating and developing X-bar, R charts are that:

1- The standard deviation can be estimated by R-bar/d2, that is, the mean of the subgroup ranges divided by the constant d2.

But Shewhart also says that:

2- This estimate is good for subgroup sizes up to 10; when each subgroup contains more than 10 data points, you cannot use the estimate and must use the standard deviation itself, and the chart is then called X-bar, S.

Therefore it seems quite reasonable to think that Shewhart, in the calculations leading to the control chart of means and ranges, used estimates obtained through genuine simplification methods. History tells us that one of these simplifications, due to the lack of a computing device back then, was to work with 25 subgroups and a sample size of 4 in each subgroup. This allowed him to calculate percentages and probabilities much more easily, since 25 subgroups of 4 give 100 data points in total. It is one of the reasons most tables of control chart constants go up to 25, as can be seen in many tables on the web and in books.
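As a minimal Python sketch of the first assumption (the simulated data and variable names are mine, not from the original), the R-bar/d2 estimate with Shewhart's 25 subgroups of size 4 looks like this; d2 = 2.059 is the standard table constant for subgroups of 4:

```python
import random

random.seed(1)

# Simulate 25 subgroups of size 4 from a normal process with true sigma = 1.
subgroups = [[random.gauss(0, 1) for _ in range(4)] for _ in range(25)]

# R-bar: the mean of the subgroup ranges (max minus min within each subgroup).
r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)

# d2 constant for subgroup size n = 4, from standard control-chart tables.
d2 = 2.059

# Shewhart's estimate of the process standard deviation.
sigma_hat = r_bar / d2
print(f"R-bar = {r_bar:.3f}, estimated sigma = {sigma_hat:.3f}")
```

With 100 data points the estimate lands reasonably close to the true sigma of 1, which is why the shortcut was good enough for manual work.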

If you calculate the control limits with the appropriate formula, 3σ/√n from the mean, for a normal distribution with mean zero and standard deviation 1, N(0,1), and for a sample size of 4, you will find that the limits expressed in terms of the standard deviation sigma sit exactly 1.5σ from the mean (3/√4 = 1.5). With larger sample sizes the variation of the subgroup means shrinks, a consequence of the central limit theorem. Are you seeing where the 1.5 sigma comes from? If you are still not seeing it, the example below will show you that the allowed shift before going out of spec is not always 1.5 sigma. With a sample size of 10, for instance, for N(0,1) it is approximately 0.95.
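A small Python sketch of that calculation (the function name is mine): the X-bar chart limits sit 3σ/√n from the mean, so n = 4 gives exactly 1.5σ and n = 10 gives roughly 0.95σ:

```python
import math

def limit_distance(sigma, n):
    # Distance of the X-bar chart control limits from the mean:
    # 3 standard errors, where the standard error of the mean is sigma/sqrt(n).
    return 3 * sigma / math.sqrt(n)

# For N(0, 1) and Shewhart's classic subgroup size of 4:
print(limit_distance(1, 4))   # 1.5 -- the origin of the "typical" shift
print(limit_distance(1, 10))  # ~0.949
```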

Example: say you have a distribution with mean 19.9 and a standard deviation such that the minimum and maximum values of the distribution are respectively 19.5 and 20.4. Meanwhile, the given spec for the measurement is 20 ± 0.6, so the lower specification limit (LSL) is 19.4 and the upper specification limit (USL) is 20.6. The data shows a shift of the actual mean toward the LSL, as the difference between the means (actual mean minus expected mean) is negative (19.9 − 20 = −0.1). Our interest is: how much more shift is allowed before this process produces defects? First you must calculate the theoretical sigma expected from the specs. The allowed spec range is 20.6 − 19.4 = 1.2, so the theoretical sigma is calculated from

6 × sigma = 1.2, hence sigma = 1.2/6 = 0.2. This would be the maximum standard deviation allowed by the spec. But because the actual process has already shifted toward the LSL of 19.4, and its standard deviation is such that the minimum value is 19.5, the shift still allowed to the left is 19.4 − 19.5 = −0.1, and this shift expressed in terms of sigma is the absolute value |−0.1/0.2| = 0.5. The sigma shift is 0.5, not the typical 1.5.
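The arithmetic above can be sketched in a few lines of Python (variable names are mine):

```python
# Spec: 20 +/- 0.6, observed process from the example.
LSL, USL = 19.4, 20.6
actual_mean, expected_mean = 19.9, 20.0
actual_min = 19.5

# Maximum standard deviation allowed by the spec: six sigma across the range.
sigma_spec = (USL - LSL) / 6  # 0.2

# Room left before the lowest observed value crosses the LSL.
shift_left = LSL - actual_min  # -0.1

# Remaining allowed shift, expressed in sigma units.
sigma_shift = abs(shift_left / sigma_spec)
print(round(sigma_shift, 3))  # 0.5
```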

The other important line of reasoning is that, because of the common causes of variation in any process, the process mean, no matter how well centered, is expected to shift over time. By how much? Typically 1.5 sigma, as standardized by Smith based on Shewhart's estimations with a sample size of 4 and 25 subgroups. Therefore the logical thing to do for a robust design, design for manufacturability, is to treat the 1.5 sigma shift as a correction factor and, for any given critical-to-quality (CTQ) parameter, inject the shift into the tolerance calculations. Tolerances would then have the 1.5 sigma shift built into them, so that when the process really does shift over time, you still do not produce defects. It is therefore correct to say that the 1.5 sigma shift is purely a conceptual judgment applied during the course of designing a system, product, or service, to guarantee that the process will not fail in the future. And this is precisely what Smith did in the Six Sigma model. This is what distinguishes short-term capability from long-term capability.
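To see what the 1.5σ correction does to the numbers, here is a hedged Python sketch (function names are mine) computing one-sided defects per million opportunities with and without the assumed long-term shift, using the standard normal CDF. The well-known figure of about 3.4 defects per million at "six sigma" only appears once the 1.5σ shift is subtracted:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def dpmo(sigma_level, shift=1.5):
    # One-sided defects per million opportunities after subtracting the
    # assumed long-term mean shift (1.5 sigma by default, per Smith).
    return (1 - norm_cdf(sigma_level - shift)) * 1_000_000

print(round(dpmo(6), 1))           # ~3.4 defects per million at "six sigma"
print(round(dpmo(6, shift=0), 4))  # essentially zero without the shift
```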

But it also tells us we should not blindly apply 1.5σ to the actual mean of the process, as the process may already have moved by some amount of sigma simply because it is not the "first day or week or month of the process". The allowed sigma shift must therefore be calculated, not applied blindly, based on the process's capability not to exceed the limits on either tail, with equal probability on either side. Most of the time, however, the observed mean is not equal to the expected mean, implying there has already been a shift. That shift is toward one specific side, increasing the probability of failing on that tail.

Since the performance of the process is measured by its capability to stay inside the specification limits on EITHER side, the σ value is calculated by looking at the SHORTER distance between the mean and the specification limits, and expressing that distance in terms of sigma, as shown in the example above.
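As a final sketch, using the figures from the worked example (the function name is mine), the capability in sigma units is the shorter mean-to-limit distance divided by sigma:

```python
def sigma_level(mean, sigma, lsl, usl):
    # Capability is limited by the nearer specification limit:
    # take the SHORTER distance from the mean, expressed in sigma units.
    return min(usl - mean, mean - lsl) / sigma

# Worked example: mean 19.9, sigma 0.2, spec 20 +/- 0.6.
# The mean sits closer to the LSL (19.4), so that side governs.
print(round(sigma_level(19.9, 0.2, 19.4, 20.6), 3))  # 2.5
```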
