Posts Tagged ‘measurement’

Control Charts: A Last-Resort Control System

Wednesday, May 23rd, 2012


Control charts are the most difficult of the seven basic quality tools to use, and they are seldom the method of choice. When a process step is important, we would prefer that the step not vary at all. ONLY when this cannot be accomplished in an economical way does one choose to use a control chart.


In any process, pieces vary from each other.


But as they vary, they form a pattern.

And if that pattern is stable, it can be described as a distribution.  These distributions can differ in location, spread, and shape.

If the variation present is only common-cause variation, the process output forms a distribution that is stable and predictable over time.

If the variation present is special-cause variation, the process is unstable over time, producing varying distributions, and thus is not predictable.

Control charts are only useful if the step (operation or function), over time, exhibits measurable random variation. Control charts display the data over time.

 

In the control chart above, time is on the x-axis (labeled "Sample"). Control limits (the red lines) are displayed on control charts; data falling within the control limits are considered common-cause or "normal" variation. Any point outside the control limits is considered "special-cause" variation and needs to be looked at and corrected through an action plan. If you create a control chart, you must also have an action plan to go with it.

Besides control limits, there are several other types of trends (runs) that can indicate an out-of-control process before any defective parts are produced. Remember that with only common-cause variation I can predict what will happen next; this means that if I see several unusual things happen in a row, it can tell me something has changed.
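To make the idea concrete, here is a minimal sketch in Python (rather than Minitab) of checking two of these signals: a point beyond the control limits, and eight points in a row on one side of the center line. The data and limits are made up for illustration only.

data = [5.1, 5.0, 4.9, 5.2, 5.3, 5.1, 5.0, 4.8, 5.6, 5.1,
        5.2, 5.2, 5.3, 5.2, 5.3, 5.4, 5.2, 5.3, 5.2, 5.3]
center, lcl, ucl = 5.1, 4.7, 5.5          # hypothetical limits from a stable period

# Signal 1: a point outside the control limits
for i, x in enumerate(data, start=1):
    if x > ucl or x < lcl:
        print(f"Sample {i}: {x} is outside the control limits -> follow the action plan")

# Signal 2 (one common run rule): 8 points in a row on the same side of the center line
run_len, prev_side = 0, 0
for i, x in enumerate(data, start=1):
    side = 1 if x > center else -1 if x < center else 0
    run_len = run_len + 1 if (side != 0 and side == prev_side) else (1 if side != 0 else 0)
    prev_side = side
    if run_len >= 8:
        print(f"Sample {i}: 8 or more points in a row on one side of the center line -> investigate")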

What I have shown above is only one type of control chart, and one of the simplest to use, but there are several other types of control charts available.

As mentioned in other articles, there are two types of data, and we have control charts for both: what we call variables data and attribute data. Variables data is data that you measure; attribute data is data that you count. So let me give you a brief description of the most commonly used control charts for each data type.

Variables Data Control Charts (Measurable)

The Individuals and Moving Range Chart (X-MR)

Individuals and Moving Range charts are used when taking more than one measurement per sample is expensive or the data are collected in a destructive test. If this is not your situation, use one of the other types of variables control charts. One of the main reasons to use a different variables control chart is that with individuals charts the data need to be normally distributed; if they are not, the chart will not work properly for you. The other charts below use averages, and the distribution of averages tends toward normal (Central Limit Theorem).
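Here is a minimal sketch in Python of the usual X-MR limit calculations, using the commonly tabulated constants for a moving range of two points (2.66 and 3.267). The data are made up for illustration; check the constants against your own SPC reference before relying on them.

data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3, 9.7, 10.1]

mr = [abs(b - a) for a, b in zip(data, data[1:])]   # moving ranges between consecutive points
x_bar = sum(data) / len(data)                       # center line of the individuals (X) chart
mr_bar = sum(mr) / len(mr)                          # center line of the moving range (MR) chart

x_ucl = x_bar + 2.66 * mr_bar
x_lcl = x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar                             # the MR chart has no lower control limit

print(f"X chart:  CL={x_bar:.3f}  UCL={x_ucl:.3f}  LCL={x_lcl:.3f}")
print(f"MR chart: CL={mr_bar:.3f}  UCL={mr_ucl:.3f}")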

Average and Range Chart (Xbar – R)

This is the most popular of the variables control charts, because it uses a small sample size and the range of that sample. With this chart it is easy to calculate the average (Xbar) and range (R) of each sample by hand. As mentioned above, using the averages of samples assures you that the plotted data will be close to normal, so the trend rules can be applied.
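As a rough illustration, here is a minimal sketch in Python of the Xbar-R limit calculations for subgroups of five, using the standard tabled constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup data are made up; verify the constants in your own SPC table for other subgroup sizes.

subgroups = [
    [10.1, 10.3, 9.9, 10.2, 10.0],
    [10.0, 10.4, 10.1, 9.8, 10.2],
    [9.9, 10.2, 10.3, 10.1, 10.0],
    [10.2, 10.0, 10.1, 10.3, 9.9],
]
A2, D3, D4 = 0.577, 0.0, 2.114                   # tabled constants for subgroup size n = 5

xbars = [sum(s) / len(s) for s in subgroups]     # subgroup averages (points on the Xbar chart)
ranges = [max(s) - min(s) for s in subgroups]    # subgroup ranges (points on the R chart)
xbarbar = sum(xbars) / len(xbars)                # grand average = Xbar chart center line
rbar = sum(ranges) / len(ranges)                 # average range = R chart center line

print(f"Xbar chart: CL={xbarbar:.3f}  UCL={xbarbar + A2 * rbar:.3f}  LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:    CL={rbar:.3f}  UCL={D4 * rbar:.3f}  LCL={D3 * rbar:.3f}")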

Average and Standard Deviation Chart (Xbar – s)

This chart is used a lot when there is a computer at the work area that can calculate and plot the statistics needed for this chart. Here we calculate and plot the sample average (Xbar) and the sample standard deviation (s).

Other Variables Control Charts

There are several other types of variables control charts, all used for very specific conditions. The three above are the ones used about 95% of the time.

Attribute Data Control Charts (Count)

Proportion Defective (p)

p charts track the proportion defective over time. It is important that each item (part, component, or document) being checked is judged either conforming (good) or defective (bad) as a whole; even if an item has several defects in it, it is still counted as one defective item. It is also important that the inspection of these items is grouped in some meaningful way. With this chart the groups do not always have to be equal in size, but because of the varying group sizes the control limits vary from group to group. The defective items are expressed as a decimal fraction (or percentage) of the group.
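Here is a minimal sketch in Python of the usual p-chart limit calculation, p-bar ± 3·√(p-bar·(1 − p-bar)/n), with the limits recomputed for each group because the group sizes vary. The inspection counts are made up for illustration.

import math

group_sizes = [120, 95, 140, 110, 130]        # items inspected in each group
defectives  = [6,   4,   9,   5,   7]         # defective items found in each group

p_bar = sum(defectives) / sum(group_sizes)    # overall proportion defective (center line)

for i, (n, d) in enumerate(zip(group_sizes, defectives), start=1):
    p = d / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)         # a proportion cannot go below zero
    flag = "OUT OF CONTROL" if (p > ucl or p < lcl) else "ok"
    print(f"Group {i}: p={p:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  {flag}")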

 

Number Defective (np)

np charts also track defective items over time. It is important that each item (part, component, or document) being checked is judged either conforming (good) or defective (bad) as a whole; even if an item has several defects in it, it is still counted as one defective item.

The difference between the p and the np chart is in the grouping. For np charts, all of the groups being inspected must be the same size. Keeping the group (sample) size constant makes the control limits constant and easier to calculate. These groups, like those for the p chart, should be formed in some meaningful way. On the np chart the number of defective items in each group is plotted directly, rather than as a fraction of the group.

Number of Defects (c)

c charts track the number of defects over time. Here we are counting defects in each item (part, component, or document), so an item can have more than one defect in it. It is also important that the inspection is grouped into meaningful inspection units. For the c chart the inspection units (group sizes) must all be the same, which keeps the control limits constant and easy to calculate. The statistic plotted is simply the count of defects found in each inspection unit.
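Here is a minimal sketch in Python of the usual c-chart limit calculation, c-bar ± 3·√(c-bar). The defect counts per inspection unit are made up for illustration.

import math

defect_counts = [4, 7, 3, 6, 5, 8, 4, 6]          # defects found in each inspection unit

c_bar = sum(defect_counts) / len(defect_counts)   # average defects per unit (center line)
ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))      # a defect count cannot go below zero

print(f"c chart: CL={c_bar:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, c in enumerate(defect_counts, start=1):
    if c > ucl or c < lcl:
        print(f"Unit {i}: {c} defects is outside the control limits")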

Defects per unit (u)

u charts also track defects over time. Here, just like c charts, we are counting defects in each item (part, component, or document), so an item can have more than one defect in it. The difference between the c and the u chart is in the grouping: with u charts the groups being inspected do not have to be the same size, and because the group sizes vary, the control limits vary from group to group. These groups, like those for the p chart, should be formed in some meaningful way. The statistic plotted is the number of defects per unit, the defect count divided by the size of the group.

Selecting the Correct Control Chart

Below is a decision tree diagram of the different chart types and their use. In this diagram "n" is your group size, or what is called the sample size. Be sure you understand the application of each control chart, or get help, if you plan to use one of these.

 

The Control Systems

I have talked to you about control charts, but to make them useful, and not just a pretty picture on the wall, you have to have a "Reaction Plan." What is a Reaction Plan? A plan so that when an out-of-control condition does occur, the operator (the person who plots the points) has actions to follow to correct the condition NOW, before another item is produced. Without this, the chart is a waste of time to build and maintain.

Well, there you have a short article on control charts. Check back on my website blog to find videos on how to build each one of these in Minitab. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

 

The Seven Basic Quality Control Tools

Saturday, November 26th, 2011

Product or service quality is everyone's responsibility, from a "Mom and Pop Shop" to an international corporation. So I thought I would give those who don't know how to look at the quality of what they do a set of basic tools. Quality professionals have all heard of "The Seven Basic Quality Control Tools," so here they are.

The Seven Basic QC (Quality Control) Tools are a set of graphical techniques identified as being helpful in troubleshooting issues related to quality[1]. These seven are called basic because they can be used easily by anyone to solve the vast majority of quality-related issues. Many quality professionals believe these were originated by Dr. Ishikawa, a world-renowned quality professional. But he would tell you that he was inspired by the "Seven Famous Weapons of Benkei"[2]. The designation as the "Seven Basic Tools of Quality" arose in postwar Japan.

The Tools

  1. Cause and Effect Diagrams: (Fishbone Diagrams, Ishikawa Diagrams)

These diagrams are tools that organize a group's or a person's knowledge about the causes of a problem or issue and display that information graphically.

 

It was originally created and used by Dr. Kaoru Ishikawa and is sometimes called an Ishikawa Diagram; because of its shape it is also called a Fishbone Diagram. In general, you brainstorm ideas (causes) and then group them into categories. Those categories become the branches of the Cause and Effect diagram.

  2. Check Sheets:

This is another simple but powerful tool. Check sheets are lists of items and the frequency with which each item occurs. They can be made in so many different ways that many times we don't think of them as a list, but they are. Below are two examples: one that looks somewhat like a list, the other not so much. On the shoe, the defects are marked with an "x" at the location where each was found.

They are used to answer many important questions, such as:

  • Has all the work been done?
  • Has all the inspection been done?
  • How frequently does a problem occur?

They are often used to remind individuals doing complex tasks of what to do and in what order. They are also used many times in conjunction with other tools to help quantify or validate information.

  3. Control Charts:

Control charts are the most difficult of the seven tools to use, and they are seldom the method of choice. When a process step is important, we would prefer that the step not vary at all. ONLY when this cannot be accomplished in an economical way does one choose to use a control chart. Below is an "Xbar-R Chart," also called an "Average and Range Control Chart."

Control charts are only useful if the step (operation or function), over time, exhibits measurable random variation. Control charts display the data over time (time is on the x-axis above, labeled "Sample"). Control limits (the red lines) are displayed on control charts; data falling within the control limits are considered "normal" variation. Any point outside the control limits is considered "special-cause" variation and needs to be looked at and corrected through an action plan. If you create a control chart, you must also have an action plan to go with it.

Besides control limits, there are several other types of trends (runs) that can indicate an out-of-control process.

What I have shown above is only one type of control chart, and one of the simplest to use, but there are several others (not so simple to use). Below is a decision tree diagram of the different types and their use. Be sure you understand the application of each control chart, or get help, if you plan to use one of these.

  4. Histograms:

Histograms are a "picture" of a set of data (or information). A histogram is created by grouping the data you collect into "cells" or "bins" (the bars in the chart below).

Histograms take your data and give it a shape (distribution). With this, you can see the data set's spread and central tendency, and whether it meets requirements. As you can see, it is a valuable troubleshooting tool; you can use it to compare differences between machines, people, suppliers, etc. Never use a histogram alone; always also plot the data in a time-ordered plot (run chart), as in the sketch below.
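Here is a minimal sketch in Python (assuming matplotlib is installed) of building a histogram next to a run chart of the same data, since the histogram alone hides the time order. The measurements are made up for illustration.

import matplotlib.pyplot as plt

data = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.1, 10.4, 9.7, 10.0,
        10.2, 10.1, 9.9, 10.3, 10.0, 10.2, 9.8, 10.1, 10.0, 10.2]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(data, bins=6)                                # group the data into cells/bins
ax1.set_title("Histogram (shape of the data)")
ax2.plot(range(1, len(data) + 1), data, marker="o")   # the same data in time order
ax2.set_title("Run chart (data in time order)")
plt.tight_layout()
plt.show()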

  5. Pareto Charts:

Pareto charts are a specialized histogram of count data. A Pareto chart arranges the bins or cells from largest to smallest count and adds an accumulation line, as seen below.

The Pareto chart gets its name from the Pareto Principle, which states that 80% of the effect comes from 20% of the causes. Vilfredo Pareto, an Italian economist, originated this principle by determining that 80% of the land in Italy was owned by 20% of the population. Later it was found to hold true for many things, and it helps us focus on the critical few. With a chart like this, a team can decide where to place its priority and focus (the big hitters). This is extremely helpful when time and money are limited, as they are in most cases.
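Here is a minimal sketch in Python (assuming matplotlib is installed) of the two steps that make a Pareto chart: sort the categories from largest to smallest count, then add the cumulative-percentage line. The defect categories and counts are made up for illustration.

import matplotlib.pyplot as plt

counts = {"Scratches": 48, "Dents": 21, "Misaligned": 12, "Wrong color": 7, "Other": 4}
items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)   # largest count first
labels = [name for name, _ in items]
values = [count for _, count in items]
total = sum(values)
cumulative = [100 * sum(values[:i + 1]) / total for i in range(len(values))]

fig, ax1 = plt.subplots()
ax1.bar(labels, values)                                 # bars: counts, largest to smallest
ax1.set_ylabel("Count")
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o", color="red")   # accumulation (cumulative %) line
ax2.set_ylim(0, 110)
ax2.set_ylabel("Cumulative %")
plt.title("Pareto chart of defect categories")
plt.show()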

  6. Scatter Diagrams:

Scatter plots are a very simple tool for seeing whether there is a correlation between two things (i.e., whether one thing leads to another). Before going into any major analysis of data, I always plot the data in some way to get a "gut feel" for what is happening. This tool lets you create a simple picture showing how two or more variables change together.

As one can see in the chart above, the fruit on the tree increases in weight the longer it is on the tree. In scatter charts we see whether one thing relates (correlates) to another. Below is a set of charts that shows some of the relationships you might find with this tool.

  7. Stratification: (Flow Charts, Run Charts, etc.)

To me, stratification is a catch-all for summarizing, picturing, or applying some tool to data so you can understand what is happening. Stratification is the process of dividing the members of a population into homogeneous subgroups before analyzing the data. The data (strata) should be mutually exclusive: every element in the population must be assigned to only one subgroup (stratum). The data should also be collectively exhaustive: no population element (data) can be excluded.

That's a mouthful, but if you look at the above six tools, all of them perform this stratification of the data. Many texts list either flow charts or run charts under this seventh tool. A run chart is just the individuals chart from the control chart above without the control limits. A flow chart takes a group of steps in a process and summarizes them into a map of the way the process works; it is sometimes called a Process Map or a Process Flow Map.

They are created to:

  • Create a common understanding of the process flow
  • Clarify steps in a process
  • Uncover problems and misunderstanding in a process
  • Reveal how a process operates (good and bad)
  • Help you identify places for improvement.

Well, there you have a short description of the Seven Basic Quality Tools. Stay in touch as I go into each tool with details of how to construct and interpret them. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090



[2] Ishikawa, Kaoru (1990), Introduction to Quality Control (1 ed.), Tokyo: 3A Corp, p. 98, ISBN 9784906224616, OCLC 23372992

 

Calculating the Correct Net Present Value using Excel

Monday, June 20th, 2011


In Six Sigma we always want to be able to show management a return on investment, most of the time in dollars. To do that we need to consider both the costs and the benefits the project will obtain. But costs and benefits do not happen in one lump sum; usually they fluctuate over time. This is referred to as a cash flow or a cash flow stream. The best tool to look at both the costs and the benefits, to see what the return would be, is calculating what is called "Net Present Value" (NPV).

Definition of Net Present Value (NPV): The difference between the present value of cash inflows and the present value of cash outflows. NPV is used in budgeting to analyze the profitability of an investment or project[1].

In Excel there is a function [NPV(rate,value1,value2, …)] to calculate NPV, but this formula can be misleading if you do not understand what it is doing. The function help explains it all, but we rarely read it unless we really don't know what goes where; in this function's case, most people need to read it. In the formula above, the rate is applied to every value listed. That is fine, but in many projects the first value is the costs/benefits incurred before the end of the first year (or first period of the rate), so the rate should not be applied to it. That is why in many projects we list the first year as "year 0." Those year-0 costs/benefits should be added outside the NPV function, so that the rate is not applied to them.

Example: Let's say you have a project that will cost you $10,000 this year and $2,000 next year to implement, but next year you will see a benefit of $500, and the year after that $5,000. Then in the third year there is a $10,000 benefit, and in the last two years $15,000 each year in benefits. Let's also say management wants to see a rate of return of 10% over a five-year period. You want to know if you can meet that with these figures. Below is the table I would create in Excel to show these figures and calculate the NPV.

You can see that the formula calculating NPV has year 0 added outside the NPV function, so the interest rate is not applied to it (year 0 is the present). For years 1 through 5 we did have the rate applied, since we want to make sure the expected rate of return is met by year five. Because the result is a positive number, we have met management's expectations.

The reason for doing this is that not all investments have costs/benefits in year 0. Take, for instance, the purchase of a new piece of equipment whose purchase price is not paid for 12 months. In this case year 0 may only involve delivery and setup of the machine, which is included in the cost of $12,000. Let's say in this example we have the same benefits and expected return. Here is the table and the calculations for this one:

You can see that the difference can be large if you do it wrong, so make sure you apply this formula correctly. In most, but not all, Six Sigma projects you will use the first approach. But if you always include a year 0 and put it in the formula this way, even the lower table will be correct, using "=NPV(D2,D4:D8) + D3".
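For readers who prefer code to spreadsheets, here is a minimal sketch in Python of the same calculation the corrected Excel formula performs: year 0 is added without discounting, and years 1 through 5 are discounted at the rate. The cash flows are my reading of the first example above, so treat them as illustrative.

def npv(rate, year0, later_years):
    # year0 is not discounted; later_years[0] is discounted one period, and so on
    return year0 + sum(cf / (1 + rate) ** t for t, cf in enumerate(later_years, start=1))

rate = 0.10
year0 = -10_000                                     # implementation cost this year
years_1_to_5 = [-2_000 + 500, 5_000, 10_000, 15_000, 15_000]

print(f"NPV = {npv(rate, year0, years_1_to_5):,.2f}")   # positive means the 10% hurdle is met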

Well there you have what to look out for in using the NPV formula in Excel. If you have questions or comments please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

 

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

 


[1] http://www.investopedia.com/terms/n/npv.asp#axzz1PrAMAMip

Tools to look at Interval or Ratio (Scale) Data

Friday, December 31st, 2010

An interval scale is used when we measure the differences between observations. The difference between any two successive points is equal: interval-scale numbers that are equally different represent differences of equal magnitude. The zero value of an interval scale is arbitrary. Often data on this scale are treated as ratio-scale data even if the assumption of equal intervals is incorrect. Some examples of interval-scale data are calendar time, voltage, and temperature.

A ratio scale is like an interval scale except that it has a true zero point. In other words, you can have nothing less than zero, no negative values. Some examples of ratio-scale data are time, distance, weight, speed, and frequency.

Of all the measurement scales, these two give us the most information about the thing we are studying.

So once we have done this data collection, how can we look at the data to better see what we found? Well, for these scales there are several types of statistical tools we can use, but let me tell you about four of the major ones: the t-test, the F-test, correlation, and multiple regression.

t-test

This test is used to compare and determine differences between the averages, or means, of two groups of normally distributed data. It is used for small samples. If you have a large sample, you can use what is called a Z test, which uses the normal distribution instead of the t distribution. Below are the shapes of the normal distribution and two t distributions (sample sizes 2 and 10); you can see that the t distribution is a good approximation of the normal.

What is done here is that you calculate an actual "t" value and compare it to a tabled t value. The tabled t values can be found in any statistics book. To calculate the actual "t" you use the following formula:

If you have Excel and have added the Analysis ToolPak (a free add-in that comes with Excel), you can do this comparison using the "t-Test: Two-Sample Assuming Unequal Variances" tool in that add-in.

If the calculated value is greater than the table value, then the two means are different.
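Here is a minimal sketch in Python (assuming SciPy is installed) of the same comparison the Excel "t-Test: Two-Sample Assuming Unequal Variances" tool performs. The two data sets are made up for illustration; instead of comparing t to a table, the p-value is compared to your chosen alpha.

from scipy import stats

group_a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
group_b = [10.9, 10.7, 11.1, 10.8, 11.0, 10.6, 11.2, 10.9]

# Welch's t-test: two samples, unequal variances assumed
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# If p is less than your chosen alpha (for example 0.05), conclude the two means differ.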

F test

Where the t test compares the averages or means, the F test is used to compare and determine differences between the variation (distribution spread) of two groups of normally distributed data.

Like the t test, we calculate an F value and compare it to a tabled value. The formula for calculating F is:

If you have Excel and have added the Analysis ToolPak (a free add-in that comes with Excel), you can do this comparison using the "F-Test: Two-Sample for Variances" tool in that add-in.

If the calculated value is greater than the table value, then the two variances are different.
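Here is a minimal sketch in Python (assuming SciPy is installed) of the two-sample F test for variances: F is the ratio of the two sample variances, compared against the F distribution instead of a printed table. The data sets are made up for illustration.

import statistics
from scipy import stats

group_a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
group_b = [10.9, 10.2, 11.6, 10.4, 11.3, 9.8, 11.5, 10.1]

var_a = statistics.variance(group_a)              # sample variances (n - 1 in the denominator)
var_b = statistics.variance(group_b)
f_stat = max(var_a, var_b) / min(var_a, var_b)    # put the larger variance on top
df1 = df2 = len(group_a) - 1                      # degrees of freedom for each sample

f_crit = stats.f.ppf(0.95, df1, df2)              # critical value at alpha = 0.05 (one-sided)
print(f"F = {f_stat:.3f}, critical value = {f_crit:.3f}")
print("Variances differ" if f_stat > f_crit else "No evidence the variances differ")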

Correlation

A Scatter plot is one of the most useful correlation tools available. In a scatter plot all you do is plot one factor against another.



In this chart you can see a direct correlation between the time, in days, of fruit on the tree and its weight.
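Here is a minimal sketch in Python (assuming matplotlib and Python 3.10 or later are available) of plotting such a scatter chart and computing the Pearson correlation coefficient. The days-on-tree and fruit-weight numbers are made up for illustration.

import statistics
import matplotlib.pyplot as plt

days_on_tree = [60, 65, 70, 75, 80, 85, 90, 95, 100]
fruit_weight = [110, 118, 127, 133, 142, 150, 156, 165, 171]     # grams

r = statistics.correlation(days_on_tree, fruit_weight)   # Pearson r (Python 3.10+)
plt.scatter(days_on_tree, fruit_weight)
plt.xlabel("Days on tree")
plt.ylabel("Fruit weight (g)")
plt.title(f"Scatter plot, r = {r:.2f}")
plt.show()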


Multiple Regressions

Multiple regression is the term used to describe a study in which we want to learn more about the relationship between several independent (predictor) variables and a dependent (criterion) variable. An ANOVA is a type of multiple regression. Here is a simple regression that tries to find the key predictors of engine knock.


Regression Analysis: Knock versus Spark, AFR, Intake, Exhaust

The regression equation is

Knock = 23.8 – 0.296 Spark + 3.19 AFR + 0.359 Intake + 0.0134 Exhaust



Predictor      Coef   SE Coef      T      P

Constant     23.815     8.137   2.93  0.019

Spark       -0.2965    0.3072  -0.97  0.363

AFR          3.1918    0.2398  13.31  0.000

Intake      0.35870   0.07848   4.57  0.002

Exhaust    0.013376  0.005421   2.47  0.039

(A P value less than .05 means the term is a predictor.)



S = 0.510560   R-Sq = 98.8%   R-Sq(adj) = 98.2%



Analysis of Variance


Source          DF       SS      MS       F      P

Regression       4  170.245  42.561  163.28  0.000

Residual Error   8    2.085   0.261

Total           12  172.331



Source   DF  Seq SS

Spark     1  84.250

AFR       1  80.029

Intake    1   4.380

Exhaust   1   1.587
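If you do not have Minitab, here is a minimal sketch in Python (assuming pandas and statsmodels are installed) of fitting a similar multiple regression. The thirteen data rows below are made up for illustration; they are NOT the data behind the Minitab output above.

import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "Spark":   [20, 35, 25, 30, 22, 38, 27, 33, 24, 36, 29, 31, 26],
    "AFR":     [13.0, 14.5, 15.2, 13.8, 14.9, 13.4, 15.6, 14.1, 15.0, 13.6, 14.7, 15.3, 14.2],
    "Intake":  [35, 52, 40, 48, 37, 55, 42, 50, 38, 53, 45, 47, 41],
    "Exhaust": [420, 480, 445, 465, 430, 495, 450, 475, 435, 488, 458, 468, 448],
    "Knock":   [66, 78, 72, 75, 68, 81, 74, 77, 69, 80, 73, 76, 71],
})

X = sm.add_constant(data[["Spark", "AFR", "Intake", "Exhaust"]])   # predictors plus intercept
model = sm.OLS(data["Knock"], X).fit()                             # ordinary least squares fit
print(model.summary())     # coefficients, p-values, R-squared, and an ANOVA-style F test

# As in the Minitab output, predictors with p-values below 0.05 are the ones to
# treat as key predictors of knock.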





Well, there you have my thoughts on tools to measure interval- and ratio-scale data. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.


Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090


Tools to look at Ordered (Ordinal) Data

Monday, November 22nd, 2010

Ordinal data is information you collect on items where you can rank-order some characteristic or attribute. Examples of this type of data are the count of food items on a table that taste excellent, good, or bad. Another would be the count of dresses that are very attractive, look OK, or are ugly. You can see that with this type of count data you can arrange the counts in order from best to worst. This scale of data gives us more information than the nominal scale but not as much as the other types of measurement scales (interval, ratio). Scales are ways we collect data.

So once we have done this data collection, how can we look at the data to better see what we found? Well, for this scale two of the correlation tools one can use are the Pearson correlation and chi-square, which are somewhat complicated. But one of the simplest is Spearman's rank-order correlation. In this correlation you are comparing how two people/inspectors/groups correlate with each other. This will let us know if the two saw things the same way or not. It could be anything, like rating several wines, movies, cars, TVs, etc. For example, if you had two friends (x and y) rate 5 movies (A, B, C, D, E) from best (1) to worst (5), you would create the table and chart below to compare your friends' results and tell if they look at these movies the same way.
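Here is a minimal sketch in Python (assuming SciPy is installed) of Spearman's rank-order correlation for that movie example: two friends each rank five movies from best (1) to worst (5). The rankings are made up for illustration.

from scipy import stats

friend_x = [1, 2, 3, 4, 5]      # friend x's ranking of movies A, B, C, D, E
friend_y = [2, 1, 3, 5, 4]      # friend y's ranking of the same movies

rho, p_value = stats.spearmanr(friend_x, friend_y)
print(f"Spearman rho = {rho:.2f}")   # +1 = identical rankings, -1 = opposite rankings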

Well, there you have my thoughts on tools to measure the ordinal scale. Next time I am going to discuss the different statistical tools used for the interval scale of measurement. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.


Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

Tools to look at Counts (Nominal) Data

Monday, November 15th, 2010

Just to refresh your mind, nominal data (count data) is information that you collect about the presence or absence of an attribute (characteristic), like the number of "naughty" or "nice" kids on Santa's list. Or, more practical to some, the number of red cars going through an intersection, or the number of order forms with mistakes in them. These counts are all what we call nominal-scale measurements. This scale of measurement gives us the least amount of information of the four types of measurement scales (nominal, ordinal, interval, ratio). Scales are ways we collect data; here, for instance, we are counting the occurrence of something, which is a nominal scale.

So once we have done this counting, how can we look at the data to better see what we found? Well, for this scale there are a few good tools.

Percentage (%) – This gives you a feel for how many of all the things you saw were what you were looking for. For example, let's say you sat at an intersection and counted red cars going through that intersection for one hour. During that hour you saw 300 cars go through the intersection, and 30 were red. That means that for that hour, 10% of the cars that went through it were red (30 red cars / 300 cars through the intersection × 100 = 10%).

Proportion (1/10, or 1 in 10) – This, like a percentage, gives you a feel for how many of all the things you saw were what you were looking for, but it gives you one other piece of information: how many you looked at in total. If you are doing the study for yourself, this may not be important, but if you are trying to convince others with a percentage, they may want to know the total count. A good example of where I like this best is on the internet when looking at customer ratings (those stars showing you that customers really liked the product). I always want to know how many customers actually rated the product at all. When I see 1 to 5 ratings, I am not impressed, but if there were 100, I feel better about the rating. Remember that 100% liking something out of 1 customer (1/1) is different from 100 out of 100 (100/100).

Chi-square Test (X²) – There are many times when we want to compare the percentages of items in several different categories. For instance, instead of just red cars, we want to collect the number of all cars by color (not just red). It might be, instead of cars, operators, materials, TV channels, hospitals, or any other grouping we might have in mind. In any of these groups you could collect data and place it into different categories (colors, sizes, ratings). The results can be put into what is called a chi-square table to answer the question, "Do the groups differ with regard to the proportion of items in the categories?" Here is an example where one could use the chi-square test (the following example is from Narrella (1963) and the Six Sigma Handbook [Pyzdek, 2003]):

Rejects of metal castings were classified by cause of rejection for three different weeks. The question that the chi-square test would help answer is: "Does the distribution of rejects differ from week to week?"
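Here is a minimal sketch in Python (assuming SciPy is installed) of answering that question with a chi-square test on a cause-by-week table of reject counts. The counts below are made up for illustration; they are NOT the counts from the example cited above.

from scipy import stats

# Rows: cause of rejection; columns: week 1, week 2, week 3
reject_table = [
    [42, 35, 40],   # sand
    [20, 28, 22],   # misrun
    [12, 10, 15],   # shift
    [ 8, 12,  9],   # other
]

chi2, p_value, dof, expected = stats.chi2_contingency(reject_table)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p = {p_value:.3f}")
# If p is below your chosen alpha (for example 0.05), conclude the distribution
# of rejects differs from week to week.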

Well, there you have my thoughts on tools to measure the nominal scale. Next time I am going to discuss the different statistical tools used for the ordinal scale of measurement. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.


Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090




Measurement

Wednesday, October 20th, 2010

[Note: Most of the information for this article comes from “The Six Sigma Handbook”[i]]

Why do we measure things? To see how things are, to see if change has occurred, or to understand something. Measurement is just looking at something and describing it in numbers. The rules (mapping functions) that we use to describe the "thing" in numbers provide us with a model of reality. If this model is correct (valid), we can learn about the real world by studying the model and the numbers it predicts. Without these measurement systems, astronomers could not describe the makeup of galaxies billions of light-years away from us.

Every question we have starts in the real world. But to understand the question and come up with an answer, we use mapping functions (rules) to describe the real-world question using numbers. There are times when we map non-numeric entities in the real world, like a question about color, but we convert these into numbers, like the number of red things in a room. These characteristics (elements) are X's. The "numbers" are Y's, derived by using the mapping function as a transfer function from the elements into numbers.

A good example of this that we can all relate to is the fuel tank in your car. It would be nice to know how much fuel is in your tank; that would be a measurement of the amount of fuel in your tank.


Real World – Your fuel tank with some amount of fuel in it.

Mapping function – A float with a sensor on a spindle, connected to a fuel gage. The gage is marked off in numerical intervals. Plus YOU reading the indicator.

Numbers – The gage needle pointing to a numerical value on the gage (like the 1/8 mark just above Empty (0))

Usage – Time to get gas!


Measurement Scales

Not all data (numbers we collect) are created equal. That does not mean some are better than others; it just means that some tell us more than others. You will find that our numbers fall into one of four scales. In teams I have worked with, I always bring up the discussion of measurement scales, because not everyone looks at how they would measure something in the same way. Some may describe the fuel tank as empty or half full; others may talk in gallons of fuel. With that said, we need to understand the scale in which we are going to measure the real world, the scale of the data to be collected in the measurement process. So here are the four measurement scales.

  • Nominal Scale – These are numbers that only indicate the presence or absence of an attribute. All we can do here is count items with or without this attribute.
  • Ordinal Scale – This scale gives us a little more information. With this scale we can say whether an item has more or less of an attribute, so we can rank-order items.
  • Interval Scale – This scale is used when we measure the differences between observations. Interval-scale numbers that are equally different represent differences of equal magnitude. The zero value of an interval scale is arbitrary.
  • Ratio Scale – This scale is like the interval scale except that it has a true zero point. In other words, you can have nothing less than zero.

Well, there you have my thoughts on measurement and the importance of your scale of measurement. Next time I am going to discuss the different statistical tools used for the different scales of measurement. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.


Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090




[i] Thomas Pyzdek, The Six Sigma Handbook, McGraw-Hill, 2003