Screening experiment with strange results

April 14th, 2017


In the screening experiments we perform, there are three possible outcomes of the analysis.

  1. We run the analysis and find factors with P values below our Alpha Risk level (usually .05, or 5%). This is what we explain in the lesson and is the usual result of a screening experiment.
  2. We run the analysis and find no factors with a P value below our Alpha Risk level. This usually happens when one of two things has occurred.
    1. First, but very rarely, we have picked the wrong factors for the result we were looking at.
    2. Second, and most likely (and the one students do on the assignment), the levels picked for each factor are too close to each other. We have to learn to pick levels that are far apart, since we are running just a few runs and will have minimal data to analyze.
  3. Last is the tricky one: we run the analysis and find factors with P values below our Alpha Risk level, but as we remove the ones above that level, others fall out until we only have one factor left. This is the one that I will discuss today.

So here are a few things we need to remember.

  1. This is a screening experiment. Because it is a screening experiment, we have a lot of factors and few runs to determine what is worth looking at more closely. It is usually done to reduce the cost and time of running a full factorial experiment. This implies we WILL be running another experiment on the factors (and interactions) that we find significant here.
  2. A P value above .05 only means that we do not have enough data to show that these factors ARE significant. Usually we plan our design to ensure we have enough data, BUT here we have reduced the amount of data to try to isolate just the key factors.

In your data we see what happens when the factor changes are small enough that, as you eliminate factors well above a P value of .05, other significant factors start to fall out until all you have is one factor. This tells us that we do not have enough data to show any are significant by the P value alone.

We need to look at three other statistics found in the ANOVA analysis to guide us to what we need, so we do NOT have to rerun the screen with more runs. These are R-squared, R-squared (Pred), and the Model P value. The R-squared values look at how well the model explains the variation; we want these as high as possible. The Model P value tells us whether the model as a whole is significant; we want this value as low as possible.

Remember, this is a screening experiment, so leave all the interactions out of the analysis. In a screen, the interactions are many times confounded with the main effects or with other interactions. We will look at them in the full factorial experiment.
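If you want to follow along outside of Minitab, below is a minimal sketch of the same kind of analysis in Python with statsmodels. The file name and the assumption that the factors are entered as coded columns are mine, not part of the student's work; Minitab reports R-sq(pred) directly, while here it has to be computed from the PRESS statistic.

```python
# A sketch, not the Minitab analysis itself: fit the screening model and
# print the statistics discussed in this article. Assumes a hypothetical
# CSV of the 16 runs with the factor columns named as in the output below.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("screening_runs.csv")  # hypothetical file name

model = smf.ols(
    "FlightTime ~ W1 + W2 + L1 + L2 + L3 + ClipSize + PaperWeight",
    data=df,
).fit()

print(model.pvalues)   # the factor P values
print(model.rsquared)  # R-sq: how much variation the model explains
print(model.f_pvalue)  # the Model P value (overall F test)

# R-sq(pred) is not built in; it comes from the PRESS statistic,
# which leaves each run out in turn using the hat matrix diagonal.
hat = model.get_influence().hat_matrix_diag
press = ((model.resid / (1.0 - hat)) ** 2).sum()
print(1.0 - press / model.centered_tss)  # R-sq(pred)
```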

Below are the results that one of my students had; I discuss these three statistics (along with the factor P values) below.

 

Factorial Regression: FlightTime versus W1, W2, L1, L2, L3, ClipSize, …

 

Analysis of Variance

 

Source           DF   Adj SS   Adj MS  F-Value  P-Value
Model             7  1.29191  0.18456     3.90    0.038
  Linear          7  1.29191  0.18456     3.90    0.038
    W1            1  0.02364  0.02364     0.50    0.500
    W2            1  0.21506  0.21506     4.54    0.066
    L1            1  0.34076  0.34076     7.19    0.028
    L2            1  0.30388  0.30388     6.42    0.035
    L3            1  0.02681  0.02681     0.57    0.473
    ClipSize      1  0.10481  0.10481     2.21    0.175
    PaperWeight   1  0.27694  0.27694     5.85    0.042
Error             8  0.37893  0.04737
Total            15  1.67084

 

 

Model Summary

 

       S    R-sq  R-sq(adj)  R-sq(pred)
0.217636  77.32%     57.48%       9.28%

 

Factors with p values less than or close to .05

Source           DF   Adj SS   Adj MS  F-Value  P-Value
W2                1  0.21506  0.21506     4.54    0.066
L1                1  0.34076  0.34076     7.19    0.028
L2                1  0.30388  0.30388     6.42    0.035
PaperWeight       1  0.27694  0.27694     5.85    0.042

R-Square values

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.217636  77.32%     57.48%       9.28%

 

Model P Value

Source           DF   Adj SS   Adj MS  F-Value  P-Value
Model             7  1.29191  0.18456     3.90    0.038

 

 

This will be our baseline!


 

We rerun the analysis with just these four factors.

Factorial Regression: FlightTime versus W2, L1, L2, PaperWeight

Analysis of Variance

Source           DF  Adj SS   Adj MS  F-Value  P-Value
Model             4  1.1366  0.28416     5.85    0.009
  Linear          4  1.1366  0.28416     5.85    0.009
    W2            1  0.2151  0.21506     4.43    0.059
    L1            1  0.3408  0.34076     7.02    0.023
    L2            1  0.3039  0.30388     6.26    0.029
    PaperWeight   1  0.2769  0.27694     5.70    0.036
Error            11  0.5342  0.04856
Total            15  1.6708

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.220370  68.03%     56.40%      32.36%

Factors with p values less than or close to .05

Source           DF  Adj SS   Adj MS  F-Value  P-Value
L1                1  0.3408  0.34076     7.02    0.023
L2                1  0.3039  0.30388     6.26    0.029
PaperWeight       1  0.2769  0.27694     5.70    0.036

R-Square values

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.220370  68.03%     56.40%      32.36%

Model P Value

Source           DF  Adj SS   Adj MS  F-Value  P-Value
Model             4  1.1366  0.28416     5.85    0.009

 

Here the factor P values suggest we would drop W2 for the next analysis. But note that both R-sq(pred) (9.28% to 32.36%) and the Model P value (.038 to .009) have improved.


Factorial Regression: FlightTime versus L1, L2, PaperWeight

Analysis of Variance

Source           DF  Adj SS   Adj MS  F-Value  P-Value
Model             3  0.9216  0.30719     4.92    0.019
  Linear          3  0.9216  0.30719     4.92    0.019
    L1            1  0.3408  0.34076     5.46    0.038
    L2            1  0.3039  0.30388     4.87    0.048
    PaperWeight   1  0.2769  0.27694     4.44    0.057
Error            12  0.7493  0.06244
Total            15  1.6708

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.249876  55.16%     43.95%      20.28%

Factors with p values less than or close to .05

Source           DF  Adj SS   Adj MS  F-Value  P-Value
L1                1  0.3408  0.34076     5.46    0.038
L2                1  0.3039  0.30388     4.87    0.048

R-Square values

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.249876  55.16%     43.95%      20.28%

Model P Value

Source           DF  Adj SS   Adj MS  F-Value  P-Value
Model             3  0.9216  0.30719     4.92    0.019

 

Here you see the other three statistics shift to a worse condition. Both R-sq values dropped, showing that the model explains less of the variation than it did with W2 included, and the Model P value increased from .009 to .019.

This tells you that the better selection for moving on is the previous analysis, with the four factors W2, L1, L2, and PaperWeight.

If you continue to look at each analysis as you drop factors, you will see these three statistics get worse and worse.
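If you want to watch that cascade happen without rerunning the analysis by hand each time, here is a small sketch that drops the weakest factor one step at a time and prints the statistics to watch. It reuses the same hypothetical data file as the earlier sketch.

```python
# Backward elimination sketch: drop the highest-P-value factor each pass
# and watch R-sq and the Model P value change (hypothetical data file).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("screening_runs.csv")
factors = ["W1", "W2", "L1", "L2", "L3", "ClipSize", "PaperWeight"]

while len(factors) > 1:
    model = smf.ols("FlightTime ~ " + " + ".join(factors), data=df).fit()
    print(factors, "R-sq:", round(model.rsquared, 4),
          "Model P:", round(model.f_pvalue, 4))
    worst = model.pvalues.drop("Intercept").idxmax()  # weakest factor
    factors.remove(worst)
```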


I do know that many times a company will only do the screening, to save cost and/or time, and that is true (but not recommended). If that were the case, then looking at a cube plot of the data would show you the best settings of the four factors. This would be the best guess (given the amount of data you had).

(Cube plot of FlightTime for the four factors)

Here you can see that the analysis shows the best flight time is 2.80406, which is found in two places in this chart. That tells us that W2 is not in the best estimate, and that the best settings for maximum flight time are L1 = 8.9, L2 = 6, and PaperWeight = Light.

Control Charts: A last resort control system

May 23rd, 2012


Control charts are the most difficult of the seven basic quality tools to use. They are seldom the method of choice. When a process step is important, we would prefer that the step not vary at all. ONLY when this cannot be accomplished in an economical way does one choose to use a control chart.

In any process, pieces vary from each other.


But as they vary, they form a pattern.

And if that pattern is stable, it can be described as a distribution.  These distributions can differ in location, spread, and shape.

If the variation that is present is only common cause variation, the process output forms a distribution that is stable and predictable over time.

If the variation that is present is special cause variation, the process is unstable over time, producing varying distributions, and thus is not predictable.

Control charts are only useful if the step (operation or function), over time, exhibits measurable random variation. Control charts display the data over time.

 

The control chart above displays the data over time (time is on the x axis, labeled "Sample"). Control limits (the red lines) are displayed on control charts; data falling within the control limits are considered common cause, or "normal," variation. Any point outside the control limits is considered "special cause" variation and needs to be looked at and corrected through an action plan. If you create a control chart, you must also have an action plan to go with it.

Besides points outside the control limits, there are several other types of trends (runs) that can indicate an out-of-control process before any defective parts are produced. Remember that with only common cause variation present, I can predict what will happen next. This means that if I have several things happen in a row, it can tell me something has changed.

What I have shown above is only one type of control chart, and one of the simplest to use, but there are several other types of control charts available.

As mentioned in other articles, there are two types of data, and we have control charts for both. There is what we call variables data and attribute data. Variables data is data that you measure, and attribute data is data that you count. So let me give you a brief description of the most commonly used control charts for each data type.

Variables Data Control Charts (Measurable)

The Individuals and Moving Range Chart (X-MR)

Individuals and moving range charts are used when taking more than one sample is expensive or the data is collected in a destructive test. If this is not your situation, use one of the other types of variables control charts. One of the main reasons to use a different variables control chart is that with individuals charts the data needs to be normally distributed; if it is not, the chart will not work properly for you. The other charts below use averages, and the distribution of averages is normal (Central Limit Theorem).

Average and Range Chart ( Xbar – R)

This is the most popular of the variables control charts. This is because it uses a small sample size and the range of that sample. With this chart, it is easy to calculate the average (Xbar) and the range of the sample by hand. As mentioned above, using the averages of samples assures you that the data plotted will be normal and the trends can be predicted.
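To show how simple the Xbar-R arithmetic is, here is a minimal sketch using the published SPC table constants for subgroups of five; the measurements are made up for illustration.

```python
# Xbar-R control limits for subgroups of size 5 (made-up measurements).
import numpy as np

# each row is one subgroup of five consecutive measurements
subgroups = np.array([
    [10.1,  9.8, 10.0, 10.2,  9.9],
    [10.0, 10.3,  9.7, 10.1, 10.0],
    [ 9.9, 10.0, 10.2,  9.8, 10.1],
])

A2, D3, D4 = 0.577, 0.0, 2.114  # standard SPC table constants for n = 5

xbar = subgroups.mean(axis=1)                      # subgroup averages
r = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

print("Xbar chart:", xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
print("R chart:   ", D3 * rbar, rbar, D4 * rbar)
```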

Average and Standard Deviation Chart (Xbar – s)

This chart is used a lot when there is a computer in the work area that can calculate and plot the statistics needed for the chart. Here we calculate and plot the sample average (Xbar) and the sample standard deviation (s).

Other Variables Control Charts

There are several other types of variables control charts, all used for very special conditions. The three above are the ones used 95% of the time.

Attribute Data Control Charts (Count)

Proportion Defective (p)

p charts measure the proportion defective over time. It is important that each item (part, component, or document) being checked is judged either conforming (good) or defective (bad) as a whole. This means that even if an item has several defects in it, it is still counted as one defective item. It is also important that the inspection of these items is grouped in some meaningful way. In this chart the groups do not always have to be equal, but because of the varying group sizes the control limits vary from group to group. The defective items are expressed as a decimal fraction (percentage) of the group.
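Here is a minimal sketch of the p-chart limit calculation with varying group sizes; the inspection counts are made up for illustration, but the formula is the standard 3-sigma limit for a proportion.

```python
# p-chart limits with varying group sizes (made-up inspection counts).
import numpy as np

n = np.array([50, 48, 55, 60, 52])     # items inspected in each group
defective = np.array([3, 5, 2, 6, 4])  # defective items in each group

pbar = defective.sum() / n.sum()        # overall fraction defective

sigma = np.sqrt(pbar * (1 - pbar) / n)  # changes with each group size
ucl = pbar + 3 * sigma
lcl = np.maximum(0.0, pbar - 3 * sigma)  # a proportion cannot go below 0
print("plotted points:", defective / n)
print("center:", pbar, "UCL:", ucl, "LCL:", lcl)
```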

 

Number Defective (np)

np charts also measure the number defective over time. Again, each item (part, component, or document) being checked is judged either conforming (good) or defective (bad) as a whole, so even an item with several defects in it is still counted as one defective item.

The difference between the p and the np chart is in the grouping. In the np chart, all of the groups being inspected need to be the same size and never vary. Having the group (sample) size always the same makes the control limits constant and easier to calculate. These groups, like the p chart's, should be formed in some meaningful way; here the chart plots the count of defective items in each group rather than a fraction.

Number of Defects (c)

c charts measure the number of defects over time. Here we are counting defects in each item (part, component, or document), so an item can have more than one defect in it. It is also important that the inspection is grouped in some meaningful way. For the c chart, the inspection unit (group size) needs to stay constant, which keeps the control limits constant; the chart plots the count of defects found in each unit.

Defects per unit (u)

u charts also measure the number of defects over time. Here, just like c charts, we are counting defects in each item, so an item can have more than one defect in it. The difference between the c and the u chart is in the grouping: in the u chart the groups do not have to be the same size, and because of the varying group sizes the control limits vary from group to group. These groups, like the p chart's, should be formed in some meaningful way, and the defects are expressed as a rate per unit of the group.

Selecting the Correct Control Chart

Below is a decision tree diagram of the different chart types and their uses. In this diagram, "n" is your group size, or what is called the sample size. Be sure you understand the application of each control chart, or get help, if you plan to use one of these.

(Decision tree for selecting a control chart)

The Control Systems

I have talked to you about control charts, but to make them useful, and not just a pretty picture on the wall, you have to have a "Reaction Plan." What is a reaction plan? A plan so that when an out-of-control condition does occur, the operator (the person who plots the points) has a plan of action to follow to correct the condition NOW, before another item is produced. Without this, the chart is a waste of time to build and maintain.

Well, there you have a short article on control charts. Check back on my website blog to find videos on how to build each one of these in Minitab. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

 

Could a Focus on Getting Every Call Right have the Wide-ranging Benefits for Call Centers that JIT had in Manufacturing | Pyzdek Institute

May 4th, 2012


Could a Focus on Getting Every Call Right have the Wide-ranging Benefits for Call Centers that JIT had in Manufacturing | Pyzdek Institute.

The Pyzdek Institute Project Charter – A forgotten but an extremely important tool

April 20th, 2012

Why create a charter? It seems like just busy work. I mean, the boss wants this done NOW; why waste the time on creating a charter? Good questions, but I can guarantee you a charter is NOT a waste of your time, nor of the boss's time either. Do you remember the schoolyard game where you would line up, tell the person next to you something, and have them pass it on? What was said at the end of the line? Was it ever the same thing? I'd say NO. Why is that? Because as much as we think we state things clearly, the receiver never gets it exactly as we said it. I see this all the time in training and working in teams. We are such a diverse group of people that our individual picture of what is said gets mixed with what we already know, and that changes the thought.

So we write a charter to capture the "true" reason we are doing the project. It is the best way to capture and pass on what you are doing. Without it, your team may get lost very quickly, as the direction at the beginning is slightly different in everyone's mind.

The charter is more than a simple statement of the project objective. It holds a lot of information, so everyone gets the same "picture" of what we are doing. Some things in a charter may seem redundant, but they are not: they state the approach to the issue in a slightly different way so others will get the complete picture of what is happening. You will find that you come back to the charter time and again to bring the team back on task for what they were brought together to accomplish. So let's look at what should be in your charter.

The Header Block

  • Project Name
  • Black/Green Belt Name & Telephone Number – Who will be the project lead, and their phone number
  • Sponsor Name – Who will be the project sponsor

Note: A sponsor is ALWAYS needed. The sponsor will be a top-level manager whom the project will impact and help the most. This sponsor will help remove roadblocks as the team encounters them. Plus, the sponsor will be the main conduit to top management, who will need to support this project as well.

  • The Project Start and Target Completion Date – Management will not support a project that does not have some time frame for completion.
  • Projected Annual Savings – All Six Sigma projects need to return savings back to the company. Top management talks in big dollars, so to get their support these had better be big dollars.
  • Type of Savings – There are basically two types of savings: hard savings and cost avoidance.
    • Hard Savings – These are savings that impact the "bottom line." This means that at the end of this project someone, most likely the sponsor, will be reducing his/her budget to show a hard decrease in spending by the company.
    • Cost Avoidance – These are savings that may not impact the bottom line, as they avoid an expense that is causing, in many cases, a department to overrun its budget. This covers things like scrap and rework.

The Project Detail Block

  • Opportunity Statement (Current State) – Here we need to describe the problem as briefly as we can, but with enough detail that everyone understands it.
  • Project Objective (Future State) – I like to call this a vision statement of the future state of the process. Many times this makes it easier for others to "picture" what it will look like when the project is complete. It should be specific, measurable, and attainable within the time frame of the project.
  • Business Case – This statement needs to tie the opportunity to the company's goals and objectives and define why we need to do it NOW. In other words, it is a statement showing the "burning platform," or the real need to do it now.

The Metrics for DPMO Calculations Block

  • Opportunity Definition – Above we gave a brief description of the opportunity. Here we provide some detail as to where an error could be made or a defect could be created. For example: an employee filling out a form, the manufacturing of a part or feature, or an interaction with a customer.
  • Defect Definition – In this area we describe what an unacceptable condition is for the opportunity mentioned above. For example: a form returned for missing or incorrect information, a defect in a part or feature, or a customer placed on hold for over ten minutes.
  • Metrics – The metric is one or more measurements of the defect described above. This may seem hard to define right now, but believe me, when management saw this problem it was not a touchy-feely thing; it had hard numbers associated with it. Numbers they want to see change in. It could be dollars, volume, time, or number of customers, but there are numbers that are the metrics YOU need to improve. Sometimes even management does not quite know what they are, but it is your job to ask why they see this as an issue and find the metric!
    • Before Project – Here, list a value for the metric as Management/Sponsor/Team thinks it is today. Later, in the Measure phase, we will actually measure it and get it exact.
    • After Project – Here, list the value for the metric that Management/Sponsor/Team expects you to achieve. In the Improve phase we will report on how well we did in meeting this metric and goal.

Project Scope Block

The best way I can describe this is to look at the process you are trying to improve: which steps of that process will be covered in this project? This will help you keep the project focused and avoid what I call scope creep, which comes from not knowing what areas of the company the project covers. It also should be noted that you need to make sure the scope is not too big (you cannot solve world hunger, but you may be able to solve hunger in your neighborhood).

  • First Process Step (Start) – This is the very first step of the business process that you are trying to improve. This IS NOT the first step of the DMAIC process.
  • Last Process Step (Stop) – This is the very last step of the business process that you are trying to improve. This IS NOT the last step of the DMAIC process.
  • Potential Adverse Impacts – Here we have to think about what impacts this project might have on the process besides the listed metrics, plus what other processes might be impacted by the improvements to this process. For instance, there may be several things the process does well, and we do not want the project to make them worse. Also, improving one process may in fact make another one worse.
  • Besides identifying these impacts, you will want to explain how you will monitor them to ensure no harm is done.

Milestones and Expected Dates Block

The milestones are all the basic steps to completing a project, including the five phases of the DMAIC process. I know you have the project start and estimated completion dates above, but you will also need an estimated completion date for each of the five DMAIC phases. These will be milestones telling you and management how well the project is moving. It is better to make adjustments as you go than to find you are way behind and over budget near the end of the project.

The Core Team Member Block

  • Core Team Members – List here the members of the project team. Include their phone numbers so everyone will know how to contact each other.
  • DACI – Here you will define the DACI roles the members will play (Driver, Approver, Contributor, Informed).

Well, there you have the basic components of the Pyzdek Institute's Six Sigma project charter, at least from my perspective. If you have questions or comments, please feel free to leave them below, or you can contact me.

 

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

 

The Pareto Chart: One of the Seven Basic Quality Tools

April 10th, 2012


 

A Pareto chart is a specialized histogram of count data. It arranges the bins, or cells, from the largest count to the smallest and gives you an accumulation line, as seen below. It is one of the Seven Basic Quality Tools.

The Pareto chart gets its name from the Pareto Principle, which states that 80% of the effect comes from 20% of the causes. Vilfredo Pareto, an Italian economist, originated this principle by determining that 80% of the land in Italy was owned by 20% of the population. Later the principle was found to hold true in many things, and it helps us focus on the critical few. With a chart like this, a team can decide where to place its priority and focus (the big hitters). This is extremely helpful when time and money are limited, as they are in most cases.

How to create a Pareto (Example in Bold)

Creating a Pareto chart is slightly more difficult than a histogram, but it is doable. A short scripted version of the counting steps appears after the list.

  1. Decide what the problem is that you want to chart. Damaged Fruit
  2. Collect the data on the problem over a good amount of time to ensure a representative sample size.
  3. Determine the classifications (categories) to group the data into.
  4. Group the data by category and determine the total for each category. Bruised 100; Undersized 87; Rotten 235; Under Ripe 9; Wrong Variety 7; Wormy 3
  5. Determine the total over all the categories. 100+87+235+9+7+3 = 441
  6. Calculate the percentage for each category. Bruised 22.7%; Undersized 19.7%; Rotten 53.3%; Under Ripe 2.0%; Wrong Variety 1.6%; Wormy 0.7%
  7. Rank order the categories from the largest to the smallest. Rotten; Bruised; Undersized; Under Ripe; Wrong Variety; Wormy
  8. Calculate the cumulative percentage at each category, starting from the largest and going to the smallest. Rotten 53.3%; Bruised 76.0%; Undersized 95.7%; Under Ripe 97.7%; Wrong Variety 99.3%; Wormy 100.0%
  9. Construct the chart.
    1. The left scale is the count, starting at 0 and going to the overall total count. Scale 0 – 441
    2. The right scale is the percentage, starting at 0% and going to 100%.
    3. The horizontal axis is labeled with the categories, starting on the left with the largest and going to the smallest. Many times we add up all the categories that have only one item in them and label the group "Other," but you do not want this category to be the biggest; if it is, you need to group some of them into new categories. Rotten; Bruised; Undersized; Under Ripe; Wrong Variety; Wormy
    4. Draw the bars to show the count for each category listed.
    5. Draw a line to show the cumulative percentage at each bar, starting with the leftmost bar (the largest) and working to the right.
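Here is the scripted version of the counting steps, using the damaged-fruit counts from the example above; its output matches the percentages and cumulative percentages in steps 6 and 8.

```python
# Steps 4-8 of the Pareto procedure, scripted with the article's data.
counts = {"Bruised": 100, "Undersized": 87, "Rotten": 235,
          "Under Ripe": 9, "Wrong Variety": 7, "Wormy": 3}

total = sum(counts.values())  # step 5: 441
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)  # step 7

cumulative = 0
for category, count in ranked:  # steps 6 and 8
    cumulative += count
    print(f"{category:14s} {count:4d}  {100 * count / total:5.1f}%  "
          f"cumulative {100 * cumulative / total:5.1f}%")
```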

Well, there you have a short article on how to construct a Pareto chart. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

Scatter Plots for Visualization of Relationships

March 15th, 2012


Scatter plots are one of the Seven Basic QC Tools. A scatter plot is a graph showing you the relationship between two factors or variables. It can show you whether one variable affects another. This makes it a very effective tool for finding out whether changing one thing in a process will affect another, and for seeing whether there is a cause-and-effect relationship between two factors or variables.

Creating a Scatter Plot:

To create a scatter plot, follow the steps below (a small plotting sketch follows the list):

  1. First you need to collect data. This data is called paired data because it will be values of both factors gathered so you can compare one with the other. You can, and should, collect this paired data with other information (data) that could potentially help you understand what is going on.
  2. Next you need to determine which factor you want on the horizontal (x) axis and which to put on the vertical (y) axis. This is your choice, but many put the potential cause on the horizontal (x) axis and the effect on the vertical (y) axis.
  3. After you have decided which goes on which axis, find the minimum and maximum value of each factor. These will be used to define each axis scale.
  4. Now set up the vertical (y) and horizontal (x) axes. Both should be the same length, but not necessarily the same scale. Making the two axes the same length makes the plot (graph) square.
  5. Mark each axis scale by starting with the minimum value in the lower left corner and putting the maximum value at the other end. Make sure to divide and label the rest of the axis into equal segments so you will be able to easily plot your data.
  6. Now plot all of the (x, y) paired data on the graph. Do this by finding the x value on the horizontal axis and plotting a point above it that corresponds to the y value on the vertical axis. Continue until all the points are plotted.
  7. Last, but never least, label your graph with a title and labels for the vertical and horizontal axes so everyone who looks at it will be able to understand what they see.
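Here is a small sketch of these steps in Python with matplotlib; the paired data and axis names are made up for illustration.

```python
# Steps 2-7 of the scatter plot procedure (made-up paired data).
import matplotlib.pyplot as plt

x = [150, 155, 160, 165, 170, 175, 180, 185]  # potential cause
y = [4.1, 4.3, 4.2, 4.6, 4.8, 4.7, 5.0, 5.2]  # effect

plt.figure(figsize=(5, 5))          # a square plot, per step 4
plt.scatter(x, y)
plt.xlabel("Oven temperature (F)")  # step 7: label everything
plt.ylabel("Seal strength (lb)")
plt.title("Seal Strength vs. Oven Temperature")
plt.show()
```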

Interpreting a Scatter Plot:

Now that we have a scatter plot, how do we interpret what we see? Is there a relationship or not? We do that by looking for patterns. But first, are there any outliers? These are data points that sit way outside the pattern of dots that you have plotted. These points are caused by something other than the relationship between the two variables. Note them, and if you can, find out what happened to create them. Now let's look for the patterns.

  • When looking for patterns, remember that the tighter the points are clustered, the stronger the correlation (the effect) between the variables (factors) you have plotted.
  • If you find a pattern that slopes from the lower left to the upper right, it tells you that as x (the horizontal-axis factor/variable) increases, y (the vertical-axis factor/variable) also increases. This means there is a "positive" correlation between the two factors/variables.
  • If you find a pattern that slopes from the upper left to the lower right, it tells you that as x (the horizontal-axis factor/variable) increases, y (the vertical-axis factor/variable) decreases. This means there is a "negative" correlation between the two factors/variables.

Below is a table of patterns to help you interpret your results:

(Table of example scatter-plot patterns)

Correlation and Causation

Now that you see a pattern and have found (or not found) a correlation between the two factors or variables, please do not assume that one caused the other to happen. That may or may not be true. You may very well find a correlation between the number of people using public swimming pools and the number of coolers that break down, but I do not think one causes the other. What you have to do to verify the cause is conduct a controlled experiment and see whether, holding everything else steady, a change in one makes the other change as predicted by the scatter plot.

 

But even though you have not truly verified the cause, you now see that something is going on and that there is a good chance one of these factors does affect the other.

Well, there you have it: all you need to know about scatter plots, or at least the basics of how to construct and interpret them. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

How to build a good Check Sheet

March 5th, 2012


In an earlier article, "The Check Sheet – Simple but Powerful," I talked about the power of a check sheet and how some use them. In this article I'm going to talk you through constructing one. Check sheets are one of the Seven Basic QC Tools. You will usually construct a check sheet for one of two reasons: to collect information (data) about something, or to help you remember to do something. Let's look at both.

Collecting information:

When collecting information, check sheets are lists of items and the frequency with which each item occurs. They can be made in so many different ways that many times we don't think of them as a list, but they are.

Simple Table Check Sheet

Figure 1: Simple Table Check Sheet

The most recognized check sheet is the simple table (Fig. 1). Here you create two columns: the first holds your categories, and the second the count, or frequency, of detection. In the categories column you list in each row either an attribute or a range of values that you want to know about. In Figure 1 we have ranges of inches, so we can capture different sizes (categories) of some measurement we are making.

The right column you label as frequency, and you put "tick" marks in it every time you find a value in that measurement range. If you arrange the tick marks as seen in Figure 1 (groups of five), you will see that your check sheet looks like a histogram, and you can now see the shape of the measurement distribution. With this information you can easily see the highest, lowest, middle, and most frequent values that you collected.

The same can be done for attributes or characteristics. For instance, you might want to collect the number of cars of each color that drive by your house. In this case the left column would be a list of colors, while the right would hold tick marks for each car of that color that passed.

The Picture Check Sheet

Figure 2: Picture Check Sheet

Another very handy check sheet is what I call a "picture" check sheet (Fig. 2). Here you take a picture of something you want to collect information about, and you mark on the picture where something occurred. In Figure 2 you can see a picture of a shoe. On it are red x's where defects were seen during an inspection. This type of check sheet is great for showing where something occurred most frequently. On this shoe, most of the defects are in the toe, so now we can work the toe issue, as it is the most frequent location of defects on this shoe. This could have been done in a table, but finding the correct description of where a defect is can be difficult, so using a picture makes it really easy to see.

 

 

Help to Remember:

Figure 3: Grocery List

Another type of check sheet we use a lot is a "help me remember" sheet. It helps us remember what to do, especially when we have a complex task to perform and we do not want to forget anything. The one I know many use is a shopping list: you have a list of things to get, and as you get them you check or cross them off the list. Assembly instructions can be a check sheet. Also think about restaurants: when the waitress takes your order, she creates a check sheet to ensure your order is completed as you ordered it.

Another good check sheet, at a restaurant I frequent, takes the waitress out of the loop by making the menu a check sheet. Here you pick the menu items you want by marking what you want to eat right on the menu itself. Then you hand it to the cashier, they ring you up, and the menu goes to the kitchen to be filled. Great idea: one less opportunity for a mistake on my order.

 

In  Summary

As you can see, all of these answer such questions as:

  • Has all the work been done?
  • Has all the inspection been done?
  • How frequently does a problem occur?
  • What should I do next?
  • Have I done everything?

Well, there you have how to build a good check sheet. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

How to build a good Histogram

February 20th, 2012


A histogram, one of the Seven Basic QC Tools, is a very good tool for picturing what a set of data looks like. It gives shape to a set of data by grouping the data into "cells." It shows you the spread (dispersion) and the central tendency, which can be compared to a standard or to another group of data. In this way it can be an excellent troubleshooting tool: use it to compare different suppliers, equipment, or processes and reveal their differences or similarities.

Although most statistical or spreadsheet software can create a histogram for you very easily, I am going to talk you through how to create a good histogram by hand (a small scripted version of the calculations follows the steps). The real key to a good histogram is to get the correct number of "cells" for the size of the data set you have. If you have too few or too many, it will not give you much of a feel for the spread or center of the data. Too few looks like a big clump, and too many looks like a broad scatter of points; neither shows or tells you much about your data. So here is what you do to build a histogram by hand.

  1. Find the largest and smallest numbers in the data and calculate the data range by subtracting the smallest value from the largest one.
  2. Now we determine the all-important number of cells for our histogram. These cells will be the columns you see in a histogram. "The Six Sigma Handbook" by Thomas Pyzdek shows two ways to get the correct number of cells for your data. The number will change a bit as you do the calculations, but it is a very good starting point. The first way is to use the table below.

Sample Size    Number of Cells
100 or less    7 to 10
101-200        11 to 15
201 or more    13 to 20

 

The second method, using a calculator, is to take the square root of the sample size and round that number to the nearest integer.

  3. Next we determine the width of each cell by dividing the range found in step 1 by the number of cells determined in step 2.

 

Once you have calculated the cell width, round it to a convenient number. Doing this will affect the number of cells in your histogram, but that is OK.

  4. Next we compute the "cell boundaries." Think of a cell as a range of values of your data; the cell boundaries define the start and end points of each cell in your histogram. So that no data value lands exactly on a boundary, we make the boundaries one more decimal place than our data values. Thus if our data values are integers (1, 12, 36), then our cell boundaries will have one decimal place (xx.x).
  5. Now determine the low boundary of the first cell. This boundary has to be set less than the smallest value in your data set.
  6. Now that the lowest cell boundary is determined, all the other cell boundaries are found by adding the cell width to the previous boundary. Continue this until the upper boundary is larger than the largest value in the data set.
  7. Now go through your data, determine which cell each value falls in, and make a tick mark in that cell (bounded by the boundaries you calculated).
  8. Count the ticks in each cell and record the total count for each cell.
  9. Now we have all the statistics to create the histogram. First, on graph paper, draw a horizontal line near the bottom of the page. Leave room below it to label the cell boundaries on this line.
  10. Starting with the lowest cell boundary, equally space all the boundaries along this line.
  11. Next, at the left end of the horizontal line, draw a vertical line. Its length should be just longer than the largest cell count that you found, and it should be labeled from 0 to the largest cell count or just beyond. This is the count, or frequency, axis.
  12. Last, draw in the columns (or bars) for each of the cells up to the count/frequency of that cell.
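Here is the small scripted version of the cell calculations promised above. The data list is a stand-in for your own measurements, and the 0.05 offset assumes data recorded to one decimal place.

```python
# Steps 1-6 of the histogram procedure: range, cell count, cell width,
# and cell boundaries (the data list is a stand-in for your own).
import math

data = [596.2, 600.1, 604.2, 599.8, 601.5]

rng = max(data) - min(data)          # step 1: the data range
cells = round(math.sqrt(len(data)))  # step 2: square-root rule
width = round(rng / cells, 1)        # step 3: width, rounded conveniently

# steps 4-6: boundaries one decimal place past the data,
# starting just below the smallest value
boundaries = [min(data) - 0.05]
while boundaries[-1] <= max(data):
    boundaries.append(round(boundaries[-1] + width, 2))
print(boundaries)
```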

 

Below is a histogram made in Minitab; first, the basic information about its data:

    • Lowest value = 596.2
    • Highest value = 604.2
    • Range = 8.0
    • There are 200 values in this set of data

(Histogram of the 200 values, created in Minitab)

Now, let's see how close the manual method comes to it.

  1. Number of cells: table value 15; square-root method 14.142
  2. Cell width: table: 8/15 = .5333 ~ .5; square root: 8/14.142 = .5656 ~ .5
  3. Lowest cell boundary: < 596.2 (and one decimal place more) = 596.15
  4. All the other boundaries (the largest must be larger than the highest value, 604.2):

 

Cell #   Lower Boundary   Cell Center   Upper Boundary
   1         596.15          596.4          596.65
   2         596.65          596.9          597.15
   3         597.15          597.4          597.65
   4         597.65          597.9          598.15
   5         598.15          598.4          598.65
   6         598.65          598.9          599.15
   7         599.15          599.4          599.65
   8         599.65          599.9          600.15
   9         600.15          600.4          600.65
  10         600.65          600.9          601.15
  11         601.15          601.4          601.65
  12         601.65          601.9          602.15
  13         602.15          602.4          602.65
  14         602.65          602.9          603.15
  15         603.15          603.4          603.65
  16         603.65          603.9          604.15
  17         604.15          604.4          604.65

(Lowest value = 596.2; cell width = 0.5; highest value = 604.2. The last upper boundary, 604.65, is larger than the highest value, so we stop at 17 cells.)
 

Well, there you have how to build a histogram by hand. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

 

The Cause and Effect Diagram

February 10th, 2012

The Cause and Effect Diagram (C&E Diagram), sometimes called the Fishbone or Ishikawa Diagram, is one of the Seven Basic QC Tools. It is a simple but effective way to organize a group's or person's knowledge about the potential causes of a problem or issue and display the information graphically. You might want to use it to stimulate the thinking of a group around an issue, or so you can see the relationships between the different potential causes of an issue or problem.

It was originally created and used by Dr. Kaoru Ishikawa, which is why it is sometimes called an Ishikawa Diagram; because of its shape, it is also called a Fishbone Diagram.

There are several easy steps to constructing a good C&E Diagram, they are as follows:

  1. Develop a team of people who are involved in the process area where the issue or problem occurs. Never try to do this alone, because each team member brings a different perspective on the issue or problem at hand to the discussion.
  2. Have the team brainstorm to find all possible causes of the problem. Remember, brainstorming is a process of collecting ideas: you want as many as you can get, even if some seem strange.
  3. Now have the team, using affinity diagramming, organize the results into rational categories and sub-categories. Many times getting these categories started or named is difficult, so some teams start with a few basic ones at the get-go. The four "M's" are commonly used (Manpower, Machine, Material, Method), but others, such as Environment, are just as good. As you can see in the diagram above, they used Assembly, Process, Fabrication, and Design; for them, those worked.
  4. Now start constructing the diagram by drawing a box on the far right-hand side of a large sheet of paper and drawing a horizontal arrow that points to the box.
  5. Inside the box, write the description of the problem or issue you are trying to solve.
  6. Next, write the names of the categories above and below the horizontal line. Think of these as branches from the main trunk of a tree.
  7. Then draw arrows from those categories to the trunk (the horizontal arrow drawn earlier).
  8. After that, write in the next level of sub-categories and draw arrows to their main categories. Think of these as limbs and twigs on the branches.
  9. Continue repeating step 8 until all of the sub-categories have been entered.

If you complete this exercise and find a lack of lower-level branches and twigs, it suggests the team has a superficial understanding of the problem. In that case you will have to go to the Gemba (the place where the work happens) and gather more information.

Once you have this information, you need to verify it by going out and collecting data to confirm which of these "potential" causes really do contribute to the issue. Your diagram may be very large, and verifying everything would take too long, so as an alternative the team should prioritize the categories and look at the top few.

Once you have the big hitters, you can start trying to figure out why they occur, and a good tool to start with is the 5 Whys.

Well, there you have a short article on how to construct and interpret a Cause and Effect Diagram. Stay in touch as I explain how to construct and interpret histograms. If you have questions or comments, please feel free to contact me by leaving a comment below, emailing me, calling me, or leaving a comment on my website.

Bersbach Consulting
Peter Bersbach
Six Sigma Master Black Belt
http://sixsigmatrainingconsulting.com
peter@bersbach.com
1.520.829.0090

Reviews on training and coaching from students of the Pyzdek Institute

January 27th, 2012


If you are or were a student of mine at the Pyzdek Institute, please feel free to write a review and send it to me at peter@bersbach.com, and I will include it here:

I liked the format of this on-line course. The video "lectures" were great. Concise yet thorough. The assignments made a big difference. If I had not done the assignments, I would have left with an incomplete understanding of the subject matter. The fact that Peter also held me to a high standard was important. The mini lectures on various topics using Minitab and Excel, for example, were also very helpful. They saved me a lot of time yet pointed me to the right location in the application. I did get stumped on the modules dealing with distributions. It took a couple of weeks (!) to feel comfortable with that subject. 1/26/12 Lean Six Sigma Black Belt Student