SVD Calculator

Compute the Singular Value Decomposition A = UΣVᵀ. View singular values, rank, condition number, matrix norms, energy distribution, and low-rank approximation.

About the SVD Calculator

The Singular Value Decomposition (SVD) is arguably the most important matrix factorization in applied mathematics. Every m×n matrix A can be written as A = UΣVᵀ, where U and V are orthogonal matrices (unitary in the complex case) and Σ is a diagonal matrix of non-negative singular values σ₁ ≥ σ₂ ≥ ⋯ ≥ σᵣ > 0. The number of non-zero singular values equals the rank of A, and the ratio σ₁/σᵣ is the condition number — a fundamental measure of numerical sensitivity.

SVD reveals the geometric structure of a linear transformation: V describes input directions, U describes output directions, and the singular values are the scaling factors. The Eckart-Young theorem proves that the best rank-k approximation to A (in both Frobenius and spectral norms) is obtained by keeping only the top k singular values — the foundation of principal component analysis, latent semantic analysis, and image compression.

This calculator computes the SVD for 2×2 and 3×3 matrices, delivering a complete analysis: singular values, rank, condition number, three matrix norms (Frobenius, spectral, nuclear), energy distribution across singular values, and the error of a rank-k approximation for your chosen k. Load a preset or enter your own matrix to explore.

Why Use This SVD Calculator?

Computing the SVD by hand is impractical for anything beyond a 2×2 matrix — it requires finding eigenvalues of AᵀA, computing eigenvectors, normalizing, and assembling three matrices. This calculator handles the entire process instantly and provides rich analysis: the singular value spectrum shows which directions matter most, the energy distribution reveals how many components capture most of the total energy, and the low-rank approximation error tells you the cost of truncation. Essential for students, data scientists, and engineers.

How to Use This Calculator

  1. Select the matrix size (2×2 or 3×3)
  2. Enter the matrix entries or click a preset to load an example
  3. Set the low-rank approximation parameter k (1 to n)
  4. View singular values, rank, and condition number in the output cards
  5. Examine the singular value bar chart with energy percentages
  6. Review the detailed SVD summary table with cumulative energy
  7. Check the matrix norms and the SVD computation walkthrough

Formula

SVD: A = UΣVᵀ. Singular values σᵢ = √(eigenvalues of AᵀA). ‖A‖_F = √(Σσᵢ²), ‖A‖₂ = σ₁, ‖A‖_* = Σσᵢ. Condition number κ = σ₁/σᵣ.
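The formulas above can be sketched in a few lines of NumPy (an illustration only, not the calculator's implementation; the rank tolerance below is a common heuristic, and the 3×3 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])

# Full SVD: A = U @ diag(s) @ Vt, singular values sorted in decreasing order
U, s, Vt = np.linalg.svd(A)

# Numerical rank: count singular values above a scale-aware tolerance
tol = s[0] * max(A.shape) * np.finfo(A.dtype).eps
rank = int(np.sum(s > tol))

frobenius = np.sqrt(np.sum(s**2))   # ‖A‖_F = √(Σ σᵢ²)
spectral  = s[0]                    # ‖A‖₂ = σ₁
nuclear   = np.sum(s)               # ‖A‖_* = Σ σᵢ
kappa     = s[0] / s[rank - 1]      # κ = σ₁/σᵣ over the nonzero σᵢ
energy    = s**2 / np.sum(s**2)     # energy fraction per singular value
```

Note that κ is taken over the nonzero singular values here; for a singular square matrix, the condition number with respect to inversion is infinite.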

Example Calculation

Result: σ₁ = 5, σ₂ = 1; Rank = 2; κ = 5; Rank-1 energy = 96.2%

AᵀA = [[13,12],[12,13]], eigenvalues 25 and 1, so σ₁ = 5 and σ₂ = 1. The rank-1 approximation retains 25/26 ≈ 96.2% of the total energy.
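This example can be checked with NumPy. The original matrix A is not stated above, so the snippet assumes A = [[3, 2], [2, 3]], one matrix whose Gram matrix AᵀA equals [[13, 12], [12, 13]] (any A with this Gram matrix has the same singular values):

```python
import numpy as np

# Assumed input consistent with AᵀA = [[13,12],[12,13]]
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])

s = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
rank1_energy = s[0]**2 / np.sum(s**2)    # fraction of energy in σ₁
```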

Tips & Best Practices

The Mathematical Foundation of SVD

The SVD exists for every matrix: if A is m×n of rank r, then A = UΣVᵀ where U ∈ ℝᵐˣᵐ and V ∈ ℝⁿˣⁿ are orthogonal, and Σ ∈ ℝᵐˣⁿ has σ₁ ≥ σ₂ ≥ ⋯ ≥ σᵣ > 0 on the diagonal with all other entries zero. The columns of U are eigenvectors of AAᵀ, the columns of V are eigenvectors of AᵀA, and σᵢ = √λᵢ where λᵢ are the corresponding eigenvalues. This decomposition reveals the four fundamental subspaces: the first r columns of U span the column space, the last m−r span the left null space, the first r columns of V span the row space, and the last n−r span the null space.
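The four fundamental subspaces can be read directly off the factors, as a short sketch shows (the 2×3 rank-1 matrix here is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1: second row is twice the first

U, s, Vt = np.linalg.svd(A)        # full SVD: U is 2×2, Vt is 3×3
tol = s[0] * max(A.shape) * np.finfo(A.dtype).eps
r = int(np.sum(s > tol))

col_space  = U[:, :r]      # first r columns of U span the column space
left_null  = U[:, r:]      # last m−r columns of U span the left null space
row_space  = Vt[:r, :].T   # first r columns of V span the row space
null_space = Vt[r:, :].T   # last n−r columns of V span the null space
```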

SVD in Data Science and Machine Learning

In recommender systems, a user-item rating matrix R is approximately factored as R ≈ UΣVᵀ (truncated SVD), predicting missing ratings. In NLP, Latent Semantic Analysis applies SVD to the term-document matrix to discover hidden topics. In image compression, each color channel is decomposed via SVD and only the top k singular values are retained, reducing storage by a factor of n/k while preserving the dominant visual features. These applications all exploit the Eckart-Young theorem's guarantee of optimal low-rank approximation.
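The image-compression idea can be sketched with a stand-in "channel" built to have low rank (a synthetic example, not real image data — a real channel's spectrum decays gradually rather than cutting off):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for one 64×64 color channel with exactly rank-8 structure
channel = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

U, s, Vt = np.linalg.svd(channel, full_matrices=False)

k = 8
channel_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k reconstruction

# Storage: k·(m + n + 1) numbers instead of m·n
m, n = channel.shape
compression_ratio = (m * n) / (k * (m + n + 1))
```

For this synthetic rank-8 channel the rank-8 reconstruction is exact; for real images the truncation error is the tail energy √(Σᵢ﹥ₖ σᵢ²).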

Numerical Computation of SVD

Modern SVD algorithms first reduce A to bidiagonal form using Householder reflections, then diagonalize the bidiagonal matrix: LAPACK's dgesvd drives the off-diagonal entries to zero with implicit QR shifts, while dgesdd uses a faster divide-and-conquer scheme. Both are backward stable and require O(mn²) operations for an m×n matrix (m ≥ n). For very large sparse matrices, iterative methods like the Lanczos algorithm compute only the top k singular values in roughly O(nnz·k) time, where nnz is the number of nonzero entries. Randomized SVD further accelerates low-rank approximation: it projects A onto a random low-dimensional subspace and computes the SVD of the small projected matrix.
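The randomized approach can be sketched in a few lines (a simplified illustration of the Halko–Martinsson–Tropp scheme; the oversampling amount is a tunable assumption, and production code would add power iterations for slowly decaying spectra):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Sketch of randomized SVD: project A onto a random subspace,
    orthonormalize the projection, then SVD the small matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # small (k+p)×n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))  # rank 5
U, s, Vt = randomized_svd(A, k=5)
```

Because this test matrix has exact rank 5, the random projection captures its range and the factorization reproduces A up to roundoff.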

Frequently Asked Questions

What is SVD?

The Singular Value Decomposition factors any m×n matrix A as A = UΣVᵀ, where U is m×m orthogonal, Σ is m×n diagonal with non-negative entries (singular values), and V is n×n orthogonal. It exists for every matrix.

What are singular values?

Singular values σᵢ are the square roots of the eigenvalues of AᵀA (or equivalently AAᵀ). They represent the scaling factors of the linear transformation: the input direction vᵢ is mapped to σᵢuᵢ in the output space.
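Both claims are easy to verify numerically (the 3×2 matrix here is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# σᵢ² are the eigenvalues of AᵀA, in decreasing order
gram_eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

# Each input direction vᵢ is mapped to σᵢ·uᵢ
images = A @ Vt.T   # column i is A·vᵢ
scaled = U * s      # column i is σᵢ·uᵢ
```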

How does SVD relate to PCA?

PCA of a centered data matrix X is equivalent to computing the SVD of X. The right singular vectors (columns of V) are the principal components, and σᵢ²/(n−1), where n is the number of samples (rows of X), gives the variance explained by each component.
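The equivalence can be checked directly: the SVD of the centered data reproduces the eigendecomposition of the sample covariance matrix (the synthetic dataset below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ np.diag([3.0, 1.0, 0.1])  # 100 samples
Xc = X - X.mean(axis=0)                                       # center columns

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                      # rows are the principal components
explained_var = s**2 / (len(X) - 1)  # variance explained per component

# Same result via the sample covariance matrix
cov = Xc.T @ Xc / (len(X) - 1)
cov_eigs = np.sort(np.linalg.eigvalsh(cov))[::-1]
```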

What is a low-rank approximation?

A rank-k approximation Aₖ keeps only the top k singular values and their corresponding vectors: Aₖ = Σᵢ₌₁ᵏ σᵢuᵢvᵢᵀ. The Eckart-Young theorem proves Aₖ is the closest rank-k matrix to A in both Frobenius and spectral norms.
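The Eckart–Young error formulas can be verified numerically: the spectral-norm error of Aₖ is σₖ₊₁ and the Frobenius-norm error is the tail energy (random test matrix, arbitrary k):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Aₖ = Σᵢ₌₁ᵏ σᵢ uᵢ vᵢᵀ as a sum of rank-1 outer products
A_k = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(k))

spectral_err  = np.linalg.norm(A - A_k, 2)      # should equal σ_{k+1}
frobenius_err = np.linalg.norm(A - A_k, 'fro')  # should equal √(Σ_{i>k} σᵢ²)
```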

What does the condition number tell me?

The condition number κ = σ₁/σᵣ measures sensitivity to perturbations. When solving Ax = b, relative errors in b can be amplified by up to κ in x. Matrices with κ ≈ 1 are well-conditioned; κ ≫ 1 indicates ill-conditioning.
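A small sketch makes the definition concrete (the diagonal matrix is a deliberately ill-conditioned example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-6]])   # singular values 1 and 1e-6

s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]   # 1e6: relative errors in b may grow ~10⁶× in x
```

NumPy's `np.linalg.cond(A)` computes the same ratio in the 2-norm by default.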

Where is SVD used in practice?

SVD is used in image compression, recommender systems (matrix factorization), natural language processing (LSA/LSI), control theory (balanced truncation), signal processing (noise reduction), and statistics (PCA, regression diagnostics).

Related Pages