Compute the Moore-Penrose pseudoinverse of rectangular or singular matrices via SVD. Verify all four Moore-Penrose conditions with step-by-step details.
The Moore-Penrose pseudoinverse, denoted A⁺, is the most widely used generalized inverse. It exists for every matrix — square or rectangular, full rank or singular — and is uniquely defined by four conditions: A·A⁺·A = A, A⁺·A·A⁺ = A⁺, (A·A⁺)* = A·A⁺, and (A⁺·A)* = A⁺·A. When A is invertible, A⁺ reduces to the ordinary inverse A⁻¹.
The pseudoinverse is computed most reliably through the Singular Value Decomposition: given A = UΣVᵀ, the pseudoinverse is A⁺ = VΣ⁺Uᵀ, where Σ⁺ is formed by inverting the non-zero singular values and transposing the diagonal matrix. Singular values below a numerical tolerance are treated as zero to maintain stability, and the effective rank equals the number of retained singular values.
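As a concrete sketch, the SVD recipe above can be reproduced in a few lines of NumPy (illustrative code, not the calculator's internals; the helper name `pinv_svd` is ours):

```python
import numpy as np

def pinv_svd(A, tol=None):
    """Pseudoinverse via SVD: A+ = V @ Sigma+ @ U.T, where Sigma+
    inverts only the singular values above a numerical tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if tol is None:
        # default tolerance: eps * sigma_max * max(m, n)
        tol = np.finfo(A.dtype).eps * s.max() * max(A.shape)
    keep = s > tol                  # retained sigmas = effective rank
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-1, no ordinary inverse
A_plus = pinv_svd(A)                     # agrees with np.linalg.pinv(A)
```

The zero-filled `s_inv` is what makes this safe for singular matrices: values below the tolerance simply contribute nothing to A⁺.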
This calculator handles 2×2, 2×3, 3×2, and 3×3 matrices. Enter your matrix or load a preset to see the pseudoinverse, the SVD singular values, rank, condition number, and automatic verification of the Moore-Penrose conditions. The step-by-step breakdown shows exactly how the SVD is used to construct A⁺, making it ideal for students learning generalized inverses or engineers verifying least-squares solutions.
Regular matrix inversion fails for rectangular or singular matrices — but many practical problems (least squares regression, underdetermined systems, minimum-norm solutions) require a generalized inverse. The Moore-Penrose pseudoinverse provides the unique least-squares solution of minimum Euclidean norm and is used throughout statistics, control theory, signal processing, and machine learning. This calculator computes it via SVD with numerical safeguards and verifies correctness automatically.
A⁺ = VΣ⁺Uᵀ, where A = UΣVᵀ (SVD). Σ⁺ is formed by taking 1/σᵢ for non-zero singular values and 0 otherwise. For full-rank square matrices, A⁺ = A⁻¹.
Result: A⁺ = [[0.04, 0.08], [0.08, 0.16]]
The matrix [[1,2],[2,4]] has rank 1 (singular), so the standard inverse does not exist. The pseudoinverse is computed via SVD: σ₁ = 5, σ₂ = 0, giving A⁺ = A/25. A·A⁺·A = A is verified numerically.
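This worked example is easy to check in NumPy, using `np.linalg.pinv` as the reference implementation:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values: [5, 0]
Ap = np.linalg.pinv(A)                   # equals A / 25

# All four Moore-Penrose conditions hold (real case: * is transpose)
assert np.allclose(A @ Ap @ A, A)        # A A+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)      # A+ A A+ = A+
assert np.allclose((A @ Ap).T, A @ Ap)   # A A+ is symmetric
assert np.allclose((Ap @ A).T, Ap @ A)   # A+ A is symmetric
```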
The Singular Value Decomposition A = UΣVᵀ factorizes any m×n matrix into an m×m unitary U, an m×n diagonal Σ with non-negative singular values, and an n×n unitary V. The pseudoinverse Σ⁺ is the n×m matrix obtained by inverting each non-zero diagonal element and transposing. Then A⁺ = VΣ⁺Uᵀ. This method is numerically superior to the normal-equation approach (AᵀA)⁻¹Aᵀ because it avoids squaring the condition number: κ(AᵀA) = κ(A)², so the normal equations can lose roughly twice as many significant digits to rounding errors.
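The conditioning claim can be checked numerically; the matrix below is an arbitrary nearly rank-deficient example chosen for illustration:

```python
import numpy as np

# Tall matrix with two nearly identical columns (ill-conditioned)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])

k_A   = np.linalg.cond(A)         # kappa(A) = sigma_max / sigma_min
k_AtA = np.linalg.cond(A.T @ A)   # kappa(A^T A) = kappa(A)**2
```

Solving through the SVD works at κ(A); forming AᵀA squares it, which is why SVD-based solvers avoid the normal equations.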
Linear regression minimizes ‖Xβ − y‖₂, and the solution is β = X⁺y. In principal component analysis (PCA), the pseudoinverse reconstructs data from a reduced number of components. Ridge regression adds regularization by modifying the singular values: σᵢ → σᵢ/(σᵢ² + λ), which is equivalent to a "damped" pseudoinverse. Data scientists use the pseudoinverse implicitly every time they call `numpy.linalg.lstsq` or fit a linear model.
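Both ideas can be sketched in NumPy; `ridge_pinv` below is a hypothetical helper implementing the damped singular values σᵢ/(σᵢ² + λ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

# Least squares: beta = X+ y, identical to lstsq's answer
beta = np.linalg.pinv(X) @ y
beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)

def ridge_pinv(X, lam):
    """Damped pseudoinverse: replace 1/sigma with sigma/(sigma^2 + lam)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ np.diag(s / (s**2 + lam)) @ U.T

beta_ridge = ridge_pinv(X, 0.5) @ y   # solves (X^T X + 0.5 I) beta = X^T y
```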
When a matrix is rank-deficient, some singular values are zero (or effectively zero), and the standard inverse does not exist. The pseudoinverse handles this gracefully by inverting only the non-zero singular values. Choosing the numerical rank — how many singular values to retain — requires setting a tolerance, typically ε·σ_max·max(m,n) where ε is machine epsilon. Truncated SVD, where only the top k singular values are retained, provides a low-rank approximation whose pseudoinverse is more stable than the full pseudoinverse.
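The numerical-rank rule above, sketched in NumPy for a small rank-deficient example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so the rank is 2
              [1.0, 0.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)
tol = np.finfo(float).eps * s.max() * max(A.shape)
rank = int(np.sum(s > tol))      # numerical rank: count retained sigmas
```

This is the same cutoff rule NumPy's own `matrix_rank` applies by default.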
It is the unique matrix A⁺ satisfying four conditions: A·A⁺·A = A, A⁺·A·A⁺ = A⁺, and both A·A⁺ and A⁺·A are Hermitian (self-adjoint). It exists for every matrix regardless of shape or rank.
When A is a square, invertible matrix (full rank), A⁺ = A⁻¹. The ordinary inverse satisfies all four Moore-Penrose conditions automatically, since A·A⁻¹ = A⁻¹·A = I.
The standard method uses SVD: decompose A = UΣVᵀ, then A⁺ = VΣ⁺Uᵀ where Σ⁺ inverts non-zero singular values. Singular values below a tolerance ε are treated as zero for numerical stability.
For an m×n matrix A, the pseudoinverse A⁺ is n×m. For a tall matrix (m > n) with full column rank, A⁺ = (AᵀA)⁻¹Aᵀ. For a wide matrix (m < n) with full row rank, A⁺ = Aᵀ(AAᵀ)⁻¹.
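Both closed forms can be checked against the SVD-based result; the random matrices here are generically full rank:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 2))   # tall: full column rank (generic)
W = rng.normal(size=(2, 4))   # wide: full row rank (generic)

T_plus = np.linalg.inv(T.T @ T) @ T.T   # left inverse:  T+ @ T == I
W_plus = W.T @ np.linalg.inv(W @ W.T)   # right inverse: W @ W+ == I
```

Note the asymmetry: a tall full-rank matrix has a left inverse, a wide one a right inverse, and the pseudoinverse specializes to whichever exists.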
The least-squares solution to Ax = b is x = A⁺b. This minimizes ‖Ax − b‖₂ and, among all minimizers, selects the one with smallest ‖x‖₂. This is the foundation of linear regression.
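A minimal illustration of the minimum-norm property, using a deliberately underdetermined system:

```python
import numpy as np

# One equation, two unknowns: x1 + x2 = 2 has infinitely many solutions
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x = np.linalg.pinv(A) @ b
# Every solution has the form [1 + t, 1 - t]; ||x||_2 is minimized at
# t = 0, so the pseudoinverse returns [1, 1]
```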
The condition number κ = σ_max/σ_min governs accuracy. Large condition numbers amplify rounding errors. Singular values below machine epsilon × σ_max × max(m,n) are typically treated as zero.