Factor a symmetric positive definite matrix A = LLᵀ with a step-by-step Cholesky algorithm, a positive definiteness check, an Ax = b solver, and a verification display.
Cholesky decomposition factors a symmetric positive definite (SPD) matrix A into A = LLᵀ, where L is a lower triangular matrix with positive diagonal entries. This factorization is unique and approximately twice as fast as general LU decomposition because it exploits the symmetry of A.
The algorithm computes L column by column. Each diagonal entry is the square root of the original diagonal entry minus the sum of squares of previously computed entries in that row. Each off-diagonal entry subtracts the inner product of previously computed entries in the two relevant rows, then divides by the diagonal entry of its column. For SPD matrices this process is numerically stable and never requires pivoting.
Cholesky decomposition is central to statistics and machine learning (covariance matrices, Gaussian processes), optimization (Newton's method with SPD Hessians), computational finance (correlation matrix simulation), and numerical PDE solvers. It is also the most reliable numerical test for positive definiteness: if the algorithm completes without encountering a non-positive value under a square root, the matrix is SPD.
This calculator handles SPD matrices from 2×2 to 5×5. It performs the full factor computation, shows each step, checks positive definiteness, and optionally solves Ax=b using forward substitution (Ly = b) followed by back substitution (Lᵀx = y). The verification section confirms that L·Lᵀ reproduces the original matrix.
Cholesky decomposition involves computing square roots and careful accumulated subtractions at every step — a single arithmetic slip propagates through the entire lower triangular factor. This calculator performs the full A = LLᵀ factorization with step-by-step logging, immediately flags non-symmetric or non-positive-definite inputs, and optionally solves Ax = b via forward and back substitution. It is indispensable for students learning matrix factorizations, statisticians working with covariance matrices, and engineers verifying SPD properties before using iterative solvers.
A = LLᵀ. Diagonal: lᵢᵢ = √(aᵢᵢ − Σₖ<ᵢ lᵢₖ²). Off-diagonal: lᵢⱼ = (aᵢⱼ − Σₖ<ⱼ lᵢₖlⱼₖ) / lⱼⱼ for i > j.
Example: A = [[4,12,−16],[12,37,−43],[−16,−43,98]].
l₁₁ = √4 = 2. l₂₁ = 12/2 = 6, l₂₂ = √(37−36) = 1. l₃₁ = −16/2 = −8, l₃₂ = (−43+48)/1 = 5, l₃₃ = √(98−64−25) = 3.
Result: L = [[2,0,0],[6,1,0],[-8,5,3]]
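The worked factor above can be checked numerically. A minimal sketch using NumPy, with the matrix A reconstructed from the aᵢⱼ values appearing in the steps:

```python
import numpy as np

# Original matrix recovered from the worked steps above.
A = np.array([[  4.0,  12.0, -16.0],
              [ 12.0,  37.0, -43.0],
              [-16.0, -43.0,  98.0]])

L = np.linalg.cholesky(A)  # lower triangular Cholesky factor
print(L)                   # matches the result above: [[2,0,0],[6,1,0],[-8,5,3]]

# Verification: L @ L.T must reproduce A.
print(np.allclose(L @ L.T, A))
```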
The algorithm proceeds column by column through the lower triangular factor L. For each diagonal entry lⱼⱼ, compute lⱼⱼ = √(aⱼⱼ − Σₖ<ⱼ lⱼₖ²). If the value under the square root is non-positive, the matrix is not positive definite and the factorization fails. For each off-diagonal entry lᵢⱼ (i > j), compute lᵢⱼ = (aᵢⱼ − Σₖ<ⱼ lᵢₖlⱼₖ) / lⱼⱼ. This process requires roughly n³/3 floating-point operations — about half the cost of general LU decomposition — because it exploits the A = Aᵀ symmetry.
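The column-by-column recurrence above can be sketched in plain Python. This is an illustrative implementation, not the calculator's internal code; `cholesky` is a hypothetical helper name:

```python
import math

def cholesky(A):
    """Column-by-column Cholesky factorization A = L L^T.

    A: square symmetric list-of-lists. Returns the lower triangular
    factor L, or raises ValueError if A is not positive definite.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal: subtract squares of previously computed entries in row j.
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(d)
        # Off-diagonal entries below the diagonal in column j.
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L
```

Note that only the lower triangle is ever touched, which is where the factor-of-two saving over general LU comes from.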
A symmetric matrix is positive definite if xᵀAx > 0 for every non-zero vector x. Equivalently, all eigenvalues are positive, all leading principal minors are positive, and Cholesky decomposition succeeds without encountering a non-positive value. In practice, attempting Cholesky is the fastest numerical test for positive definiteness. Common SPD matrices include covariance matrices (from well-conditioned data), Gram matrices (XᵀX for full-rank X), and stiffness matrices in finite element analysis.
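The "attempt Cholesky" test described above can be sketched with NumPy; `is_spd` is a hypothetical helper name and the tolerance is an assumption:

```python
import numpy as np

def is_spd(A, tol=1e-10):
    """Test positive definiteness by attempting a Cholesky factorization."""
    A = np.asarray(A, dtype=float)
    # Cholesky presupposes symmetry, so check that first.
    if not np.allclose(A, A.T, atol=tol):
        return False
    try:
        np.linalg.cholesky(A)  # raises LinAlgError if A is not SPD
        return True
    except np.linalg.LinAlgError:
        return False
```

This is typically much cheaper than computing all eigenvalues or all leading principal minors.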
In **statistics**, Cholesky factorization is used to sample from multivariate normal distributions: if L is the Cholesky factor of the covariance matrix Σ, then x = μ + Lz (where z ~ N(0,I)) produces samples from N(μ, Σ). In **computational finance**, it simulates correlated asset returns for Monte Carlo pricing. In **optimization**, Newton's method with SPD Hessians uses Cholesky to solve the Newton step efficiently. In **numerical PDEs**, sparse Cholesky solvers are the backbone of direct methods for symmetric positive definite systems arising from finite element discretizations.
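The sampling identity x = μ + Lz from the statistics example can be sketched with NumPy; the mean and covariance values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])         # illustrative mean vector
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])     # illustrative SPD covariance matrix

L = np.linalg.cholesky(Sigma)      # Sigma = L @ L.T
z = rng.standard_normal((2, 100_000))  # z ~ N(0, I)
x = mu[:, None] + L @ z                # x ~ N(mu, Sigma)

# Sample covariance should approach Sigma as the sample count grows.
print(np.cov(x))
```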
It factors a symmetric positive definite matrix A into LLᵀ where L is lower triangular with positive diagonal entries. It is unique and numerically stable.
A symmetric matrix A is positive definite if xᵀAx > 0 for all non-zero vectors x. Equivalently, all eigenvalues are positive, all leading principal minors are positive, and Cholesky decomposition succeeds.
Exploiting symmetry means only n(n+1)/2 entries need to be computed (the lower triangle) instead of n² entries for L and U separately. This roughly halves the operation count.
No. Cholesky decomposition is defined only for symmetric (or Hermitian) positive definite matrices. For general matrices, use LU decomposition.
If the algorithm completes without encountering a zero or negative value under a square root, the matrix is positive definite. If it fails, the matrix is not positive definite (it may be positive semi-definite or indefinite).
Factor A = LLᵀ, then solve Ly = b by forward substitution and Lᵀx = y by back substitution. Each substitution costs O(n²), so once the one-time factorization (roughly n³/3 flops) is done, every additional right-hand side can be solved in O(n²).
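That two-substitution solve can be sketched with NumPy; `solve_spd` is a hypothetical helper name:

```python
import numpy as np

def solve_spd(A, b):
    """Solve Ax = b for SPD A via Cholesky plus two triangular solves."""
    L = np.linalg.cholesky(A)  # A = L L^T, roughly n^3/3 flops
    n = len(b)
    y = np.zeros(n)
    # Forward substitution: L y = b, top row down.
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    # Back substitution: L^T x = y, bottom row up.
    for i in reversed(range(n)):
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x
```

In production code the two triangular solves would normally be delegated to a LAPACK routine; the explicit loops here make the O(n²) substitution structure visible.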