Compute the matrix condition number κ(A) in the 1-norm, ∞-norm, and Frobenius norm. Check whether a matrix is well- or ill-conditioned, with a sensitivity visualization.
The condition number κ(A) measures how sensitive a linear system Ax = b is to small changes in A or b. A matrix with a low condition number (near 1) is well-conditioned — small perturbations in the input produce proportionally small changes in the output. A large condition number means the system is ill-conditioned and numerical solutions may be unreliable. This calculator computes κ(A) = ‖A‖ · ‖A⁻¹‖ using three different matrix norms: the 1-norm (maximum absolute column sum), the infinity-norm (maximum absolute row sum), and the Frobenius norm (the square root of the sum of all squared entries). Enter a square matrix up to 5×5, choose from classic presets like the notoriously ill-conditioned Hilbert matrix, and instantly see the condition number, determinant, both the original and inverse matrices, and a color-coded sensitivity gauge. Engineers use condition numbers to assess whether finite-precision arithmetic will produce trustworthy results, and numerical analysts use them to choose appropriate algorithms and preconditioners for large linear systems.
Computing a condition number requires finding the matrix inverse and computing norms in multiple formulations — a tedious process for anything larger than 2×2. This calculator handles matrices up to 5×5, computes κ(A) in all three standard norms simultaneously, displays the inverse matrix, and provides a color-coded conditioning gauge. Classic ill-conditioned examples like the Hilbert matrix are available as presets, making it ideal for numerical analysis courses and engineering sensitivity checks.
κ(A) = ‖A‖ · ‖A⁻¹‖
‖A‖₁ = max_j Σᵢ |aᵢⱼ|
‖A‖∞ = max_i Σⱼ |aᵢⱼ|
‖A‖F = √(Σᵢⱼ aᵢⱼ²)
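The definitions above can be sketched directly in NumPy (a minimal illustration of the same formulas, not the calculator's own implementation):

```python
import numpy as np

def cond_all_norms(A):
    """kappa(A) = ||A|| * ||A^-1|| in the 1-, infinity-, and Frobenius norms."""
    A_inv = np.linalg.inv(A)
    return {
        "1":   np.linalg.norm(A, 1)      * np.linalg.norm(A_inv, 1),       # max abs column sum
        "inf": np.linalg.norm(A, np.inf) * np.linalg.norm(A_inv, np.inf),  # max abs row sum
        "fro": np.linalg.norm(A, "fro")  * np.linalg.norm(A_inv, "fro"),   # sqrt of sum of squares
    }

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
kappas = cond_all_norms(A)
print(kappas)  # kappa_1 = kappa_inf = 21, kappa_F = 15 for this matrix
```

For this 2×2 example the arithmetic is easy to check by hand: ‖A‖₁ = 6, ‖A⁻¹‖₁ = 3.5, so κ₁ = 21.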
Result: κ ≈ 60,000 (ill-conditioned)
For the Hilbert matrix H₃, κ₁(H₃) ≈ 748. This means a 0.1% change in b could produce up to a 74.8% change in x.
The condition number κ(A) = ‖A‖·‖A⁻¹‖ bounds the worst-case amplification of relative errors when solving Ax = b. If κ = 1000, a 0.1% perturbation in the input could cause up to a 100% change in the solution. A well-conditioned matrix (κ close to 1) produces stable solutions; an ill-conditioned matrix (κ ≫ 1) means numerical solutions may lose many digits of accuracy. The identity matrix has κ = 1 in every norm — the theoretical best.
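The worst-case amplification is achievable, not just an abstract bound. A sketch: pick the right-hand side b along the least sensitive singular direction of a matrix and perturb it along the most sensitive one, and the relative error in x grows by exactly κ₂ = σ_max/σ_min (shown here for the 3×3 Hilbert matrix, where κ₂ ≈ 524):

```python
import numpy as np

# 3x3 Hilbert matrix, h_ij = 1/(i+j-1)
i, j = np.indices((3, 3))
H = 1.0 / (i + j + 1)

U, s, Vt = np.linalg.svd(H)
b  = U[:, 0]           # right-hand side along the dominant singular direction
db = 1e-3 * U[:, -1]   # 0.1% perturbation along the most sensitive direction

x  = np.linalg.solve(H, b)
dx = np.linalg.solve(H, b + db) - x

rel_in  = np.linalg.norm(db) / np.linalg.norm(b)   # 0.001
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
print(rel_out / rel_in)   # matches kappa_2(H) = s[0] / s[-1]
```

For a generic b and perturbation the amplification is smaller; κ is the ceiling over all directions.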
The Hilbert matrix H with entries h_ij = 1/(i+j−1) is the classic example of severe ill-conditioning. For a 3×3 Hilbert matrix, κ₁ ≈ 748; for 5×5, κ₁ ≈ 9.4×10⁵ (about 476,000 in the 2-norm); for 10×10, κ exceeds 10¹³. This exponential growth means that solving Hx = b for large Hilbert matrices is essentially impossible in standard floating-point arithmetic. Vandermonde matrices with closely spaced nodes and matrices arising from polynomial interpolation are also notoriously ill-conditioned.
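The growth is easy to reproduce (a NumPy sketch; values shown are 1-norm condition numbers):

```python
import numpy as np

def hilbert(n):
    """n x n Hilbert matrix, h_ij = 1/(i+j-1) with 1-based i, j."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

ks = {n: np.linalg.cond(hilbert(n), 1) for n in (3, 5, 10)}
print(ks)  # roughly 748 at n=3, 9.4e5 at n=5, and beyond 1e13 at n=10
```

Note that at n = 10 the computed value is itself only an estimate: the inverse used internally already loses several digits to the very ill-conditioning being measured.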
The 1-norm (maximum absolute column sum) measures the maximum stretch along coordinate axes. The ∞-norm (maximum absolute row sum) is its dual. The Frobenius norm (√Σa²_ij) is an "average" measure that treats all entries equally. For the 2-norm (spectral norm), κ equals the ratio of largest to smallest singular values (σ_max/σ_min). All these norms give condition numbers within a factor that depends only on the matrix size, so any norm reveals the same qualitative conditioning status.
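The 2-norm relationship can be checked directly against the singular values (a NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

s = np.linalg.svd(A, compute_uv=False)   # singular values, in descending order
kappa2 = s[0] / s[-1]                    # sigma_max / sigma_min
print(kappa2)                            # agrees with np.linalg.cond(A, 2)
```

NumPy's `np.linalg.cond(A, 2)` computes exactly this ratio internally, so the two values coincide to rounding error.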
It bounds the worst-case amplification of relative errors. κ(A) = 1000 means a small perturbation in inputs can be amplified up to 1000× in the output.
A matrix with κ close to 1. The identity matrix has κ = 1 in any norm — the theoretical best.
Their entries (1/(i+j−1)) decrease smoothly, making rows nearly linearly dependent. κ grows exponentially with matrix size.
Different norms give different condition numbers, but they are all within a constant factor of each other for a given matrix size. Any norm reveals the same qualitative conditioning.
A small determinant alone does not imply ill-conditioning. The condition number is the proper measure — it accounts for the matrix norm.
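A quick illustration of this point: a scaled identity has a tiny determinant but perfect conditioning, while a shear with determinant exactly 1 can be badly conditioned (NumPy sketch):

```python
import numpy as np

A = 0.01 * np.eye(4)            # det = 1e-8, yet kappa = 1: perfectly conditioned
B = np.array([[1.0, 1000.0],
              [0.0,    1.0]])   # det = 1, yet kappa_1 = 1001^2 = 1,002,001

det_A, kappa_A = np.linalg.det(A), np.linalg.cond(A, 1)
det_B, kappa_B = np.linalg.det(B), np.linalg.cond(B, 1)
print(det_A, kappa_A)   # ~1e-8, 1.0
print(det_B, kappa_B)   # 1.0, 1002001.0
```

Scaling a matrix changes its determinant (by the scale factor to the nth power) but leaves κ unchanged, which is why the determinant alone says nothing about conditioning.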
The 2-norm condition number equals σ_max / σ_min, the ratio of the largest to smallest singular value. This is the value most numerical libraries report by default, so it is a useful cross-check before finalizing a result.