In the structured world of process improvement, the Analyze phase of the DMAIC (Define, Measure, Analyze, Improve, Control) methodology serves as a critical turning point. This phase bridges the gap between understanding what the problem is and determining how to fix it. Yet, many organizations rush through this crucial stage, eager to implement solutions without properly validating root causes. This premature leap can lead to wasted resources, ineffective improvements, and frustrated teams. Understanding the success criteria for the Analyze phase ensures that your Six Sigma project stands on solid ground before moving forward.
Understanding the Analyze Phase in DMAIC
The Analyze phase represents the investigative heart of any Six Sigma project. After defining the problem and measuring current performance in the previous phases, teams must now dig deeper to uncover the underlying factors driving poor performance. This phase requires rigorous statistical analysis, critical thinking, and a systematic approach to separating symptoms from actual root causes.
The primary objective during this phase is to identify and validate the root causes of process variation and defects. Without proper validation, teams risk solving the wrong problem, which is perhaps the most costly mistake in process improvement work. The success criteria for this phase serve as checkpoints, ensuring that the analysis is thorough, data-driven, and reliable before committing resources to improvement initiatives.
Key Success Criteria for the Analyze Phase
1. Statistical Significance of Findings
The first and most fundamental success criterion involves ensuring that your findings are statistically significant rather than products of random variation. Statistical significance provides confidence that the relationships you observe between variables are real and not coincidental.
Consider a manufacturing example where a team investigates defects in plastic molding. They collect data on 500 production runs over three months, tracking variables including temperature, pressure, cycle time, material batch, operator, and shift timing. Initial analysis suggests that temperature variations correlate with defect rates.
Sample dataset:
- Production runs analyzed: 500
- Temperature range: 185°C to 215°C
- Average defect rate at 185-195°C: 2.3%
- Average defect rate at 196-205°C: 1.1%
- Average defect rate at 206-215°C: 4.7%
- P-value from ANOVA test: 0.003
With a p-value of 0.003 (well below the standard threshold of 0.05), the team can confidently state that temperature significantly affects defect rates. This statistical validation distinguishes genuine root causes from coincidental correlations. Without this rigor, teams might pursue changes based on random patterns that offer no real improvement.
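To make the mechanics concrete, a minimal sketch of such a comparison in Python might look like the following. The file name and column names are placeholders, not the team's actual dataset; the point is simply how a one-way ANOVA across temperature bands could be run.

```python
# Minimal sketch: one-way ANOVA on defect rates grouped into temperature bands.
# The CSV file and column names ("temp_c", "defect_rate") are illustrative
# assumptions, not the project's actual data layout.
import pandas as pd
from scipy import stats

runs = pd.read_csv("molding_runs.csv")   # hypothetical file, one row per production run

bands = pd.cut(runs["temp_c"],
               bins=[185, 195, 205, 215],
               labels=["185-195", "196-205", "206-215"],
               include_lowest=True)

groups = [runs.loc[bands == label, "defect_rate"] for label in bands.cat.categories]
f_stat, p_value = stats.f_oneway(*groups)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a genuine temperature effect
```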
2. Verification Through Multiple Analytical Methods
Relying on a single analytical tool creates blind spots. Successful Analyze phases employ multiple complementary techniques to examine the problem from different angles. This triangulation approach strengthens confidence in identified root causes.
For the plastic molding example, the team might use:
Pareto Analysis: Reveals that temperature-related defects account for 68% of all defects, confirming this factor deserves focused attention.
Fishbone Diagram: Systematic brainstorming identifies potential causes across categories: equipment calibration, environmental conditions, maintenance schedules, operator training, and raw material quality.
Scatter Plots: Visual representation shows the clear U-shaped relationship between temperature and defects, with optimal performance in the middle range.
Regression Analysis: Quantifies the relationship, showing that each 5-degree deviation from the optimal 200°C increases defects by approximately 1.2%.
Process Capability Studies: Demonstrate that temperature control capability (Cpk) of 0.87 indicates the process cannot consistently maintain the narrow temperature band needed for quality production.
When multiple analytical methods point to the same conclusion, confidence in the root cause identification increases substantially. This criterion prevents teams from acting on incomplete or misleading analysis.
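As a small illustration of the capability piece above, the Cpk figure can be reproduced from the process mean, standard deviation, and specification limits. The sketch below uses assumed values chosen only to mirror the 0.87 result; they are not project measurements.

```python
# Illustrative Cpk calculation: Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
# Specification limits and sample statistics are assumed values for the sketch.
lsl, usl = 195.0, 205.0        # assumed temperature specification limits (deg C)
mean, sigma = 200.0, 1.92      # assumed process mean and standard deviation (deg C)

cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"Cpk = {cpk:.2f}")      # well below ~1.33, so the process cannot hold the band reliably
```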
3. Quantified Impact of Root Causes
Understanding that something is a root cause is insufficient. Successful Analyze phases quantify how much each validated root cause contributes to the problem. This quantification enables prioritization and helps predict the potential impact of improvements.
Returning to our manufacturing example, the team quantifies various factors:
- Temperature control issues: Responsible for 68% of defects, with estimated annual cost of $340,000
- Material batch variation: Contributes 18% of defects, estimated annual cost of $90,000
- Operator technique differences: Accounts for 10% of defects, estimated annual cost of $50,000
- Equipment maintenance timing: Represents 4% of defects, estimated annual cost of $20,000
This quantification shows that solving the temperature control problem could eliminate over two-thirds of defects and save $340,000 annually. Such clarity enables informed decisions about where to focus improvement efforts and helps build business cases for necessary investments.
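A brief tabulation keeps this quantification auditable. The sketch below uses the contribution shares listed above and the $500,000 total annual defect cost they imply; only the formatting is new.

```python
# Sketch: allocate the implied $500,000 total annual defect cost across the
# validated root causes using the contribution shares quantified above.
contributions = {
    "Temperature control issues": 0.68,
    "Material batch variation":   0.18,
    "Operator technique":         0.10,
    "Maintenance timing":         0.04,
}
total_annual_cost = 500_000  # $ per year, implied by the figures above

for cause, share in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cause:28s} {share:5.0%}  ${share * total_annual_cost:>9,.0f}")
```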
4. Documented Data Collection Integrity
The quality of analysis depends entirely on data integrity. A critical success criterion involves demonstrating that data collection methods were sound, consistent, and free from systematic bias. Documentation provides an audit trail showing how data was gathered, by whom, and under what conditions.
For the manufacturing project, documentation might include:
- Measurement system analysis (MSA) showing gage repeatability and reproducibility (GR&R) of 12%, comfortably below the commonly used 30% upper limit
- Calibration records for temperature sensors checked against standards
- Data collection protocols followed by operators
- Time stamps showing data represents various conditions: different shifts, days of the week, and production volumes
- Sample size calculations justifying the 500 production runs as statistically adequate
- Missing data analysis showing that only 3% of planned measurements were unavailable, with no systematic pattern
Without this documentation, stakeholders cannot trust the analysis, regardless of how sophisticated the statistical methods appear. Data integrity forms the foundation on which all other success criteria rest.
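As one example of the documentation above, the sample size justification often follows a standard proportion-based formula. The sketch below assumes a baseline defect rate of about 2.5% and a margin of error of ±1.4 percentage points at 95% confidence; these inputs are illustrative, chosen because they land near the roughly 500 runs collected.

```python
# Sketch: sample size for estimating a defect proportion,
# n = z^2 * p * (1 - p) / E^2. All inputs below are illustrative assumptions.
import math

z = 1.96   # z-score for 95% confidence
p = 0.025  # assumed baseline defect rate
e = 0.014  # assumed margin of error (+/- 1.4 percentage points)

n = math.ceil(z**2 * p * (1 - p) / e**2)
print(f"Required production runs: {n}")  # about 478, in line with the ~500 runs collected
```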
5. Validation of Hypotheses Through Controlled Comparisons
Strong Analyze phases test hypotheses rather than merely observing patterns. Controlled comparisons, where conditions are deliberately varied while other factors remain constant, provide powerful validation of cause-and-effect relationships.
In the plastic molding case, the team conducts a designed experiment:
Experimental Setup:
- Same operator performs all runs
- Same material batch used throughout
- Temperature deliberately varied: 190°C, 200°C, and 210°C
- 20 parts produced at each temperature setting
- Random order of temperature settings to eliminate time-based confounding
Results:
- At 190°C: 14 defects out of 20 parts (70% defect rate)
- At 200°C: 1 defect out of 20 parts (5% defect rate)
- At 210°C: 12 defects out of 20 parts (60% defect rate)
This controlled experiment definitively confirms the causal relationship between temperature and defects. The dramatic difference in defect rates when only temperature changes provides validation that cannot be achieved through observational data alone.
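A formal check of the experiment might apply a chi-square test to the defect counts reported above; the sketch below is one straightforward way to run it.

```python
# Sketch: chi-square test on the designed-experiment results
# (defective vs. good parts at each temperature setting).
from scipy.stats import chi2_contingency

#                 defective, good
counts = [[14,  6],   # 190 deg C
          [ 1, 19],   # 200 deg C
          [12,  8]]   # 210 deg C

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")  # a very small p-value supports a real temperature effect
```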
6. Cross-functional Team Agreement
Technical validation alone is insufficient. Successful Analyze phases also achieve cross-functional consensus that the identified root causes are credible and complete. This human element ensures that domain expertise complements statistical analysis and that key stakeholders support the findings.
For the manufacturing project, the team presents findings to representatives from:
- Production operators who provide frontline insights
- Maintenance technicians who understand equipment behavior
- Quality engineers who know historical defect patterns
- Process engineers who designed the original specifications
- Supply chain specialists who understand material characteristics
Through structured review sessions, the team addresses questions, incorporates additional perspectives, and achieves consensus. When operators mention that the temperature gauges sometimes lag actual conditions, this insight leads to additional analysis revealing a 45-second response delay in temperature sensors, adding another dimension to the root cause understanding.
This collaborative validation ensures that solutions developed in the next phase will be practical, accepted, and effectively implemented.
Common Pitfalls That Compromise Analyze Phase Success
Confirmation Bias
Teams often enter the Analyze phase with preconceived notions about root causes. This bias can lead analysts to unconsciously seek data that confirms initial hypotheses while ignoring contradictory evidence. Success requires deliberate efforts to challenge assumptions and consider alternative explanations.
In our manufacturing example, if leadership initially blamed operator error, the team might focus exclusively on operator-related factors while overlooking the temperature control issues. Rigorous methodology and diverse team perspectives help counteract this natural bias.
Analysis Paralysis
While thoroughness is essential, some teams become trapped in endless analysis, always seeking one more data point or conducting one more test. This paralysis delays improvement and wastes resources. Clear success criteria help teams recognize when analysis is sufficient to move forward with confidence.
Setting specific thresholds (such as requiring p-values below 0.05 or explaining at least 70% of variation) provides objective standards for determining when analysis is complete.
Superficial Root Cause Identification
Perhaps the most common failure involves stopping analysis too early, identifying symptoms rather than underlying root causes. The “Five Whys” technique helps teams dig deeper.
For example:
Problem: High defect rate in plastic molding
Why? Temperature varies during production
Why? Temperature sensors respond slowly to changes
Why? Sensors are located too far from the mold cavity
Why? Original design prioritized sensor protection over response time
Why? Design specifications did not adequately consider temperature control requirements
This deeper analysis reveals that the root cause is not simply temperature variation (a symptom) but rather the design specifications and sensor placement (true root causes). Solutions addressing sensor placement will be far more effective than simply trying to control temperature more tightly with the existing inadequate sensor configuration.
Tools and Techniques for Validating Root Causes
Hypothesis Testing
Statistical hypothesis testing provides a formal framework for validating root causes. Teams state a null hypothesis (the factor has no effect) and an alternative hypothesis (the factor does have an effect), then use data to determine which hypothesis the evidence supports.
Common hypothesis tests in the Analyze phase include t-tests for comparing two groups, ANOVA for comparing multiple groups, chi-square tests for categorical data, and regression analysis for continuous relationships.
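As a minimal illustration, a two-sample t-test comparing defect rates under two machine settings might look like the sketch below; the sample values are placeholders, not project data.

```python
# Sketch: two-sample t-test. Null hypothesis: the two settings have the same mean defect rate.
from scipy import stats

# Illustrative defect-rate samples (%) from two machine settings; not project data.
setting_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.0, 2.3]
setting_b = [1.1, 1.3, 0.9, 1.2, 1.0, 1.4, 1.1, 1.2]

t_stat, p_value = stats.ttest_ind(setting_a, setting_b)
if p_value < 0.05:
    print(f"Reject the null hypothesis (p = {p_value:.4f}): the settings differ.")
else:
    print(f"Insufficient evidence of a difference (p = {p_value:.4f}).")
```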
Confidence Intervals
Beyond determining whether an effect exists, confidence intervals quantify the likely magnitude of that effect. For instance, analysis might show that optimizing temperature will reduce defects by 55% to 75% with 95% confidence. This range helps stakeholders understand both the expected improvement and the uncertainty around that estimate.
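A confidence interval of this kind can be computed directly from pilot data. The sketch below uses the t distribution and an illustrative set of observed defect reductions; the numbers are assumptions, not the project's results.

```python
# Sketch: 95% confidence interval for a mean effect using the t distribution.
import numpy as np
from scipy import stats

# Illustrative observed defect reductions (%) from pilot runs; not project data.
reductions = np.array([62, 71, 58, 67, 74, 60, 69, 65])

mean = reductions.mean()
sem = stats.sem(reductions)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(reductions) - 1, loc=mean, scale=sem)
print(f"Mean reduction {mean:.1f}%, 95% CI [{ci_low:.1f}%, {ci_high:.1f}%]")
```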
Control Charts
Control charts distinguish between common cause variation (inherent to the process) and special cause variation (due to specific, identifiable factors). This distinction is crucial because improvement strategies differ dramatically depending on variation type. Common cause variation requires fundamental process redesign, while special cause variation requires identifying and eliminating specific factors.
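For an individuals chart, the control limits follow a standard moving-range formula. The sketch below applies it to an illustrative series of daily defect rates; any point outside the limits would flag special cause variation.

```python
# Sketch: control limits for an individuals (I) chart,
# UCL/LCL = mean +/- 2.66 * average moving range.
import numpy as np

# Illustrative daily defect rates (%); not project data.
values = np.array([2.1, 1.8, 2.4, 2.0, 2.2, 1.9, 2.5, 2.3, 2.0, 2.1])

moving_range = np.abs(np.diff(values))
center = values.mean()
ucl = center + 2.66 * moving_range.mean()
lcl = max(center - 2.66 * moving_range.mean(), 0.0)

print(f"Center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# Points outside these limits suggest special cause variation worth investigating.
```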
Failure Mode and Effects Analysis (FMEA)
FMEA systematically evaluates potential failure modes, their causes, and their effects. By scoring each failure mode on severity, occurrence, and detection, teams can prioritize which root causes to address first. This structured approach ensures that analysis considers not just what can go wrong, but how likely problems are and what consequences they bring.
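FMEA scoring typically reduces to a risk priority number, RPN = severity × occurrence × detection. The sketch below ranks a few illustrative failure modes that echo the molding example; the modes and 1-10 scores are assumed, not drawn from an actual FMEA.

```python
# Sketch: rank failure modes by risk priority number (RPN = severity * occurrence * detection).
# The failure modes and 1-10 scores below are illustrative assumptions.
failure_modes = [
    ("Temperature sensor lag",     8, 7, 6),
    ("Material moisture too high", 7, 4, 5),
    ("Mold wear past tolerance",   9, 3, 4),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"{name:28s} RPN = {sev * occ * det}")
```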
Creating an Analyze Phase Success Checklist
Practical implementation of success criteria requires a structured checklist that teams complete before proceeding to the Improve phase. This checklist might include:
- Statistical analysis completed with p-values documented for all key relationships
- At least three different analytical methods applied with consistent findings
- Quantified impact calculated for each identified root cause
- Data collection methods documented and validated through measurement system analysis
- Controlled experiments or natural experiments conducted to verify causation
- Cross-functional review completed with documented consensus
- Alternative explanations considered and systematically eliminated
- Root cause depth verified through multiple levels of “why” questioning
- Capability analysis demonstrates process cannot meet requirements under current conditions
- Financial impact estimated with supporting calculations documented
Teams should secure formal approval from sponsors before proceeding, using this checklist as the basis for demonstrating readiness.
Real-World Application: Healthcare Example
To illustrate these principles in a different context, consider a hospital working to reduce patient wait times in the emergency department. Initial measurement shows average wait times of 147 minutes with high variation.
Analysis Process:
The team collects data on 850 patient visits over six weeks, tracking variables including time of day, day of week, patient acuity level, number of physicians on duty, number of nurses on duty, lab turnaround time, radiology turnaround time, and patient volume.
Key Findings:
- Regression analysis shows that lab turnaround time is the strongest predictor of overall wait time (R-squared = 0.61, p < 0.001); see the sketch after this list
- Pareto analysis reveals that 73% of excessive wait times occur when lab results take longer than 45 minutes
- Control charts identify special cause variation on weekends when lab staffing drops by 40%
- Process mapping shows that specimens travel an average of 340 meters from collection to lab, with an average transport time of 18 minutes
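The regression finding above can be reproduced with a simple linear fit. The sketch below assumes visit-level data with illustrative column names for lab turnaround and total wait time; the file and columns are placeholders, not the hospital's actual data layout.

```python
# Sketch: simple linear regression of total ED wait time on lab turnaround time.
# The CSV file and column names are illustrative assumptions.
import pandas as pd
from scipy import stats

visits = pd.read_csv("ed_visits.csv")  # hypothetical file, one row per patient visit

fit = stats.linregress(visits["lab_turnaround_min"], visits["total_wait_min"])
print(f"slope = {fit.slope:.2f} min of wait per min of lab delay, "
      f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")
```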
Validation:
The team validates findings through:
- Correlation analysis between lab turnaround time and overall wait time across different patient volumes
- Comparative analysis of wait times on weekdays versus weekends, controlling for patient volume
- Time study documenting the physical specimen transport process
- Interviews with lab staff identifying workflow constraints
- Review of findings with physicians, nurses, lab technicians, and administrators achieving consensus
Quantified Impact:
- Lab delays over 45 minutes occur in 37% of cases
- Each 10-minute reduction in lab turnaround time reduces overall wait by an average of 8.5 minutes
- Reducing lab turnaround to consistently under 45 minutes could reduce average wait times by approximately 35 minutes
- Estimated patient satisfaction improvement: 22 percentage points based on correlation between wait times and satisfaction scores
This thorough analysis, meeting all success criteria, positions the team to develop targeted improvements focusing on lab processes, specimen transport, and weekend staffing in the next phase.
The Business Case for Rigorous Analysis
Organizations sometimes question whether the detailed analysis required to meet these success criteria is worth the time and effort. The answer becomes clear when considering the costs of inadequate analysis.
Commonly cited research suggests that roughly 70% of process improvement projects fail to deliver their expected benefits. A primary contributor to this failure rate is implementing solutions that do not address actual root causes. When analysis is rushed or superficial, teams often find themselves implementing multiple rounds of “improvements” as initial solutions fail to solve the problem.
Consider the financial implications in our manufacturing