Calculate skewness, kurtosis, and shape metrics from data using Fisher, Pearson, Bowley, and Kelly methods. Includes skewness gauge, method comparison, and significance testing.
The skewness calculator measures the asymmetry of a data distribution using six different methods: Fisher g₁, adjusted sample G₁, Pearson's first and second coefficients, Bowley (quartile) skewness, and Kelly (decile) skewness. It also computes excess kurtosis to characterize tail heaviness.
Skewness tells you whether data piles up on one side with a tail stretching the other way. Positive (right) skewness means a long right tail and mean > median. Negative (left) skewness means a long left tail and mean < median. Zero skewness means the distribution is symmetric.
This tool provides a visual gauge, significance testing, a method comparison table, and a kurtosis reference so you can fully characterize the shape of your data distribution. Before reporting results, cross-check the output against the worked example or another known reference case to verify rounding and units.
Different methods capture different aspects of asymmetry: moment-based measures (Fisher) reflect the exact shape, the median-based measure (Pearson's 2nd) adds robustness, and quartile-based measures (Bowley, Kelly) resist outliers entirely.
Whether you're reporting shape statistics for a research paper, checking normality assumptions, or screening financial data for tail risk, this tool gives you a complete shape analysis in one place.
Fisher skewness g₁ = m₃/s³, where m₃ = (1/n)Σ(xᵢ−x̄)³ and s = √m₂ is the population standard deviation. Adjusted sample skewness G₁ = [n/((n−1)(n−2))]Σ((xᵢ−x̄)/ŝ)³, where ŝ is the sample standard deviation (n−1 denominator); equivalently, G₁ = √(n(n−1))/(n−2) × g₁. Pearson's 2nd: Sk₂ = 3(mean−median)/ŝ. Bowley: Sk_B = (Q₁+Q₃−2×Median)/(Q₃−Q₁). Standard error of skewness: SES = √(6n(n−1)/((n−2)(n+1)(n+3))).
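These formulas can be sketched in Python using only the standard library. This is an illustrative implementation, not the calculator's own code; in particular, the quartile convention for Bowley skewness is an assumption (`statistics.quantiles` defaults to the "exclusive" method, and other software interpolates quartiles differently).

```python
import math
import statistics

def skewness_report(xs):
    """Sketch of the skewness formulas above; names are illustrative."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n     # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n     # third central moment
    g1 = m3 / m2 ** 1.5                           # Fisher g1 = m3 / s^3
    G1 = math.sqrt(n * (n - 1)) / (n - 2) * g1    # adjusted sample skewness
    sk2 = 3 * (mean - statistics.median(xs)) / statistics.stdev(xs)  # Pearson's 2nd
    # Quartile method is an assumption; results may differ slightly by software.
    q1, med, q3 = statistics.quantiles(xs, n=4)
    bowley = (q1 + q3 - 2 * med) / (q3 - q1)
    ses = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    return {"g1": g1, "G1": G1, "pearson2": sk2, "bowley": bowley, "SES": ses}
```

For a symmetric sample every measure returns zero; for right-skewed data they all come back positive, though with different magnitudes.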
Result: G₁ = −0.0916 (approximately symmetric)
With n=20 exam scores, the adjusted sample skewness G₁ ≈ −0.09 indicates near-symmetry. The Z-score = −0.18 is well within ±1.96, so the skewness is not statistically significant. Pearson's 2nd coefficient (−0.11) agrees. The distribution of these scores is roughly symmetric.
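The significance check in this example can be reproduced directly from the SES formula; n and G₁ are taken from the result shown above:

```python
import math

# Values from the worked example above: n = 20 exam scores, G1 = -0.0916.
n, G1 = 20, -0.0916
ses = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
z = G1 / ses                   # about -0.18, well inside the +/-1.96 band
significant = abs(z) > 1.96    # False: not significant at the 5% level
```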
No single skewness coefficient captures all aspects of asymmetry. Fisher's g₁ is the most common (used by Excel, R, Python), but it's sensitive to outliers. Pearson's formula is intuitive but assumes unimodality. Bowley and Kelly use quantiles and are robust — but they only see the middle of the distribution. Comparing multiple methods helps you understand which aspects of asymmetry are real versus outlier-driven.
Many statistical tests (t-tests, ANOVA, regression) assume normally distributed data. Significant skewness violates this assumption. Solutions include: log-transforming right-skewed data, using the Box-Cox transformation, applying non-parametric alternatives, or using robust methods. As a rule of thumb, |skewness| > 1 is a red flag for methods assuming normality.
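As an illustration of the first remedy, here is a sketch with made-up right-skewed values (the data are hypothetical, chosen to resemble wait times):

```python
import math

def fisher_g1(xs):
    """Fisher skewness g1 = m3 / s^3 (population moments)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

raw = [1, 2, 2, 3, 3, 4, 5, 8, 15, 40]   # hypothetical data with a long right tail
logged = [math.log(x) for x in raw]      # log transform pulls the tail in
# Skewness drops well below the |skew| > 1 red-flag threshold after transforming.
```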
While skewness measures left-right asymmetry, kurtosis measures tail heaviness. The normal distribution has kurtosis = 3 (excess = 0). Financial returns typically show excess kurtosis of 5-10, meaning extreme values happen far more often than a normal model predicts. Always report skewness and kurtosis together for a complete shape story.
Positive (right) skewness means the right tail of the distribution is longer or fatter than the left. Most data points cluster on the left, with some extreme high values pulling the mean above the median. Common examples: income, house prices, and wait times.
Negative (left) skewness means the left tail is longer. Most values cluster on the right, with some extreme low values pulling the mean below the median. Common examples: scores on an easy exam, age at retirement, and failure times of well-made products.
Fisher skewness (g₁/G₁) uses the third standardized moment — it considers exact deviations of each data point. Pearson's second coefficient uses 3(mean−median)/s — a simpler approximation. They usually agree in sign but can differ in magnitude. Fisher's is more precise; Pearson's is more intuitive.
Bowley (quartile) skewness = (Q₁+Q₃−2×Median)/(Q₃−Q₁) measures asymmetry using quartiles. It's bounded between −1 and +1, is resistant to outliers, and captures skewness in the middle 50% of the data. It ignores the tails entirely, which makes it robust but less sensitive than moment-based measures.
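A quick sketch of this robustness with made-up numbers: adding one extreme outlier to a symmetric sample sends Fisher's g₁ well above 2, while Bowley skewness stays at zero because the outlier never enters the quartile calculation. (The quartile method here is `statistics.quantiles`' default; other conventions give similar results.)

```python
import statistics

def fisher_g1(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def bowley(xs):
    q1, med, q3 = statistics.quantiles(xs, n=4)   # "exclusive" quartile method
    return (q1 + q3 - 2 * med) / (q3 - q1)

base = [10, 12, 14, 16, 18, 20, 22, 24]   # symmetric sample
with_outlier = base + [500]               # one extreme high value
```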
Kurtosis measures the "tailedness" of a distribution. Excess kurtosis = kurtosis − 3 (the normal distribution has kurtosis = 3). High excess kurtosis (leptokurtic) means heavier tails and more outliers. Together, skewness and kurtosis fully characterize the "shape" of a distribution beyond its center and spread.
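A minimal sketch of the excess-kurtosis computation (the sample data are made up to contrast thin and heavy tails):

```python
def excess_kurtosis(xs):
    """Fourth standardized moment minus 3, so the normal reference is 0."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

flat = list(range(1, 11))        # uniform-like: thin tails (platykurtic)
spiky = [0] * 8 + [-10, 10]      # a few extreme values: heavy tails (leptokurtic)
# flat gives negative excess kurtosis; spiky gives positive.
```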
Divide skewness by its standard error (SES) to get a Z-score. If |Z| > 1.96, the skewness is significant at the 5% level. SES depends on sample size: for small samples (n < 30), even moderate skewness may not be significant. For large samples (n > 300), even tiny skewness becomes "significant" but may not be practically meaningful.
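This sample-size effect is easy to demonstrate: holding a hypothetical skewness of 0.3 fixed, the same value fails the 1.96 cutoff at n = 25 but passes it at n = 400.

```python
import math

def ses(n):
    """Standard error of skewness for sample size n (formula from the main text)."""
    return math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

skew = 0.3                   # hypothetical moderate skewness, held fixed
z_small = skew / ses(25)     # below 1.96: not significant in a small sample
z_large = skew / ses(400)    # above 1.96: "significant" in a large sample
```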