Fit Y = aX² + bX + c to data with R², vertex, roots, discriminant, residual analysis, and comparison to linear fit.
When data follows a curved pattern — rising then falling (like projectile motion) or accelerating (like compound growth) — linear regression fails. Quadratic regression fits Y = aX² + bX + c to your data, capturing the curvature that a straight line misses.
Enter your X and Y data and instantly get the three coefficients (a, b, c), R², adjusted R², standard error, the parabola's vertex (maximum or minimum point), axis of symmetry, discriminant, and X-intercepts. The R² comparison against linear regression shows exactly how much the quadratic term improves the fit.
Try the "Revenue vs Price" preset to see classic diminishing returns: revenue rises with price up to an optimal point (the vertex), then declines. The "Projectile Motion" preset demonstrates perfect quadratic behavior where R² ≈ 1. Check the example with realistic values before reporting. Use the steps shown to verify rounding and units. Cross-check this output using a known reference case. Use the example pattern when troubleshooting unexpected results.
Many real-world relationships are approximately quadratic: projectile trajectories, dosage-response curves, price-revenue relationships, and diminishing returns scenarios. Linear regression forces a straight line through inherently curved data, producing biased estimates and poor predictions.
This calculator makes the quadratic analysis complete: not just the equation, but the vertex (often the business-critical answer), comparison with linear fit (is the curve justified?), and residual analysis (does the model fit well?). The parabola properties table provides the mathematical characteristics at a glance.
Y = aX² + bX + c (least squares via the normal equations). Vertex: (−b/(2a), f(−b/(2a))). Discriminant: Δ = b² − 4ac. Roots: X = (−b ± √Δ)/(2a).
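As a sketch, these properties follow directly from the fitted coefficients. The values below are the (rounded) Revenue vs Price coefficients from the example, so the computed vertex matches the reported one only approximately:

```python
# Parabola properties from fitted coefficients a, b, c.
# Coefficients: rounded values from the Revenue vs Price example.
a, b, c = -0.6488, 28.5595, -62.5

vertex_x = -b / (2 * a)                        # axis of symmetry
vertex_y = a * vertex_x**2 + b * vertex_x + c  # Y at the vertex
discriminant = b**2 - 4 * a * c

kind = "maximum" if a < 0 else "minimum"
print(f"Vertex ({kind}): ({vertex_x:.2f}, {vertex_y:.2f})")
print(f"Axis of symmetry: X = {vertex_x:.2f}")
print(f"Discriminant: {discriminant:.2f}")
```

With these rounded coefficients the vertex comes out near (22.01, 251.8); the calculator reports 251.85 because it carries unrounded coefficients internally.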
Result: Y = −0.6488X² + 28.5595X − 62.5000, R² = 0.9958, Vertex: (22.01, 251.85), Linear R² = 0.5816
Revenue peaks at a price of ~22 with predicted revenue of 252. The quadratic model (R²=0.996) vastly outperforms linear (R²=0.582), confirming the data is genuinely curved.
Quadratic regression minimizes Σ(yᵢ − axᵢ² − bxᵢ − c)². Taking partial derivatives with respect to a, b, c and setting them to zero yields three simultaneous equations (the normal equations). These involve sums of x, x², x³, x⁴, y, xy, and x²y. Solving the 3×3 system (Cramer's rule, LU decomposition, or matrix inversion) gives exact coefficients.
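This derivation can be sketched in a few lines of pure Python using Cramer's rule on the 3×3 system; the sums and matrix layout follow the normal equations above, while the function names are illustrative:

```python
def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def quadratic_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    n = len(xs)
    Sx   = sum(xs)
    Sx2  = sum(x**2 for x in xs)
    Sx3  = sum(x**3 for x in xs)
    Sx4  = sum(x**4 for x in xs)
    Sy   = sum(ys)
    Sxy  = sum(x * y for x, y in zip(xs, ys))
    Sx2y = sum(x * x * y for x, y in zip(xs, ys))
    # Normal equations: M @ [a, b, c] = v
    M = [[Sx4, Sx3, Sx2],
         [Sx3, Sx2, Sx],
         [Sx2, Sx,  n]]
    v = [Sx2y, Sxy, Sy]
    D = det3(M)
    def replace_col(j):
        return [[v[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]
    # Cramer's rule: each coefficient is a ratio of determinants.
    return tuple(det3(replace_col(j)) / D for j in range(3))

# Exact data from y = 2x^2 - 3x + 1 recovers the coefficients exactly:
print(quadratic_fit([0, 1, 2, 3, 4], [1, 0, 3, 10, 21]))  # (2.0, -3.0, 1.0)
```

For large X values the sums of x³ and x⁴ can become ill-conditioned; production code typically centers X or uses an orthogonal-polynomial or QR-based solver instead.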
If residuals from the quadratic fit still show a systematic pattern, consider: cubic regression (degree 3) for S-shaped curves, logarithmic regression for rapid initial growth followed by leveling, or exponential regression for unlimited acceleration. Always use the simplest model that adequately fits the data — adding unnecessary polynomial terms overfits.
In economics, quadratic regression is the backbone of revenue optimization: R(p) = ap² + bp + c, where p is price. Optimal price = −b/(2a). In agriculture, yield response to fertilizer is often quadratic — too much reduces yield. In pharmacology, the effective dose is the vertex of a quadratic dose-response curve. The vertex is the practical answer; the equation is just the means to find it.
Use quadratic when (1) a scatter plot shows curvature, (2) residuals from linear regression show a U or inverted-U pattern, (3) domain knowledge suggests a peak/trough (e.g., optimal dosage, price optimization), or (4) R² improves substantially over linear.
If a < 0 (parabola opens down), the vertex is the maximum — the X value that maximizes Y. If a > 0 (opens up), the vertex is the minimum. In business, the vertex often represents the optimal price, dosage, or resource allocation.
Minimum 3 (since we solve for 3 parameters), but that gives zero residual degrees of freedom. Realistically, 8+ points give reasonable R² estimates and residual diagnostics. More data near the vertex improves accuracy there.
If a is very close to zero, the quadratic term adds nothing — use linear regression instead. Check if the R² improvement over linear is < 1 percentage point; if so, the simpler linear model is preferable (parsimony principle).
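A minimal sketch of that parsimony check, assuming you already have predictions from both fits (helper names are illustrative):

```python
def r_squared(ys, preds):
    """Coefficient of determination: 1 - SSE/SST."""
    y_bar = sum(ys) / len(ys)
    sst = sum((y - y_bar) ** 2 for y in ys)
    sse = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return 1 - sse / sst

def prefer_quadratic(r2_linear, r2_quadratic, min_gain=0.01):
    """Keep the quadratic term only if it improves R^2 by more than
    one percentage point (the parsimony rule above)."""
    return (r2_quadratic - r2_linear) > min_gain

# Revenue vs Price example: the gain is ~41 points, so the curve is justified.
print(prefer_quadratic(0.5816, 0.9958))  # True
print(prefer_quadratic(0.970, 0.975))    # False: keep the linear model
```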
Extrapolating quadratic models is especially risky because parabolas diverge rapidly. A model that fits well for X = 0–10 might predict absurdly high or negative values at X = 50. Limit predictions to within or near the observed X range.
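To see how quickly this goes wrong, evaluate the Revenue vs Price fit (excellent near the vertex) well outside the observed price range; the coefficients are the example's rounded values:

```python
# Revenue vs Price fit from the example (rounded coefficients).
a, b, c = -0.6488, 28.5595, -62.5

def predict(x):
    return a * x**2 + b * x + c

print(predict(22))  # near the vertex: ~252, sensible revenue
print(predict(50))  # far outside the data: ~ -256.5, negative "revenue"
```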
Δ = b² − 4ac determines where the parabola crosses Y = 0. If Δ > 0: two X-intercepts (roots). If Δ = 0: one repeated root (the parabola just touches Y = 0). If Δ < 0: no real roots (the parabola never crosses Y = 0). This matters when Y represents profit or a quantity that can't be negative.
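A small sketch of that case analysis (function name illustrative); with the Revenue vs Price coefficients, Δ > 0 and the fitted revenue is positive only between the two roots:

```python
import math

def real_roots(a, b, c):
    """X-intercepts of y = a*x^2 + b*x + c, classified by the discriminant."""
    delta = b * b - 4 * a * c
    if delta < 0:
        return []                      # parabola never crosses y = 0
    if delta == 0:
        return [-b / (2 * a)]          # one repeated (tangent) root
    root = math.sqrt(delta)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

# Revenue vs Price example: revenue is positive between the break-even prices.
print(real_roots(-0.6488, 28.5595, -62.5))  # ≈ [2.31, 41.71]
print(real_roots(1, 0, 1))                  # [] -- never crosses zero
```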