Find the inverse of a matrix up to 5×5 using the cofactor/adjugate method, with step-by-step cofactor table, verification A×A⁻¹=I, condition number analysis, and singular detection.
The inverse of a square matrix A, denoted A⁻¹, is the unique matrix such that AA⁻¹ = A⁻¹A = I, where I is the identity matrix. Not every matrix has an inverse: a matrix is invertible (non-singular) if and only if its determinant is non-zero, and a singular matrix (zero determinant) has no inverse.
This calculator computes the inverse using the classical adjugate method: A⁻¹ = adj(A)/det(A), where adj(A) is the transpose of the cofactor matrix. For each element, it calculates the minor (the determinant of the submatrix obtained by deleting that row and column), applies the checkerboard sign pattern to get the cofactor, and assembles the cofactor matrix. Transposing the cofactor matrix gives the adjugate, and dividing by the determinant gives the inverse.
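The adjugate procedure described above can be sketched in Python with NumPy (illustrative only, not the calculator's own code): build the cofactor matrix from signed minors, transpose it, and divide by the determinant.

```python
import numpy as np

def cofactor_inverse(A):
    """Invert a square matrix via the adjugate method: A^-1 = adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("matrix is singular (det = 0); no inverse exists")
    C = np.empty((n, n))  # cofactor matrix
    for i in range(n):
        for j in range(n):
            # Minor M_ij: determinant of A with row i and column j deleted
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # checkerboard sign
    adj = C.T  # adjugate = transpose of the cofactor matrix
    return adj / det

print(cofactor_inverse([[4, 7], [2, 6]]))  # ≈ [[0.6, -0.7], [-0.2, 0.4]]
```

The function name `cofactor_inverse` and the `1e-12` singularity tolerance are illustrative choices, not part of the calculator.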
The calculator provides a complete step-by-step breakdown: the cofactor matrix with each minor determinant and sign, the adjugate matrix, and the final inverse. It then verifies the result by computing A × A⁻¹ and checking that it equals the identity matrix.
The condition number κ(A) = ‖A‖‖A⁻¹‖ measures how sensitive the inverse is to small changes in A. A large condition number indicates an ill-conditioned matrix where numerical errors can be amplified during inversion. The visual condition scale helps you quickly assess whether your matrix is well-conditioned, moderately conditioned, or ill-conditioned.
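As a sketch of how the condition number is obtained in practice (assuming NumPy; the calculator's internals may differ), κ(A) can be computed either directly or from the norm definition above:

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])

# Spectral (2-norm) condition number: ratio of largest to smallest singular value
kappa = np.linalg.cond(A, 2)

# Equivalent via the definition kappa(A) = ||A|| * ||A^-1||
kappa_manual = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

print(f"kappa(A) = {kappa:.3f}")
```

Both routes agree up to floating-point rounding; a value near 1 indicates a well-conditioned matrix.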
Computing a matrix inverse via the adjugate method requires calculating n² cofactors (each a sub-determinant), transposing, and dividing by the main determinant — even a 3×3 matrix involves nine 2×2 determinants. This calculator performs the full inversion with verification (A·A⁻¹ = I), reports the condition number for numerical stability, and shows the adjugate and cofactor matrices. It is essential for students checking homework, engineers verifying symbolic inverses, and anyone needing a quick invertibility check.
A⁻¹ = (1/det(A)) × adj(A), where adj(A) = Cᵀ (transpose of cofactor matrix), Cᵢⱼ = (−1)^(i+j) × Mᵢⱼ
Result: for A = [[4,7],[2,6]], A⁻¹ = [[0.6,−0.7],[−0.2,0.4]]
det(A) = 4×6 − 7×2 = 10. adj(A) = [[6,−7],[−2,4]]. A⁻¹ = (1/10)×adj(A) = [[0.6,−0.7],[−0.2,0.4]]. Verify: AA⁻¹ = I ✓
The classical formula A⁻¹ = adj(A)/det(A) works by computing the cofactor matrix C (where Cᵢⱼ = (−1)^(i+j) det(Mᵢⱼ)), transposing it to get the adjugate, and dividing every entry by det(A). For a 2×2 matrix [[a,b],[c,d]], the inverse is (1/(ad−bc))[[d,−b],[−c,a]]: swap the diagonal entries a and d, and negate the off-diagonal entries b and c. This method is O(n!) and impractical for large matrices, but it produces exact symbolic results and reveals the deep connection between cofactors, determinants, and inverses.
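The 2×2 shortcut is simple enough to write out directly (a minimal sketch; the function name `inverse_2x2` is illustrative):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap a and d, negate b and c, divide by det."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular 2x2 matrix: ad - bc = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2(4, 7, 2, 6))  # [[0.6, -0.7], [-0.2, 0.4]]
```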
A more efficient approach augments [A|I] and row-reduces to [I|A⁻¹]. This is equivalent to LU decomposition and costs O(n³) operations. Numerical libraries (LAPACK, NumPy) use LU with partial pivoting for matrix inversion. The **condition number** κ(A) = ‖A‖·‖A⁻¹‖ measures how much input errors are amplified: κ ≈ 1 is well-conditioned, while κ > 10³ warns that the inverse may be unreliable in floating-point arithmetic.
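The Gauss–Jordan route can be sketched as follows (an educational implementation, assuming NumPy; real libraries delegate to LAPACK):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])  # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap the largest-magnitude entry into the pivot row
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]  # scale the pivot row so the pivot becomes 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # zero out the rest of the column
    return M[:, n:]  # right half now holds A^-1

print(gauss_jordan_inverse([[4, 7], [2, 6]]))
```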
In theory, solving Ax = b via x = A⁻¹b is clean, but in practice, computing A⁻¹ explicitly is rarely necessary — LU decomposition solves the system directly without forming the inverse, which is faster and more numerically stable. Explicit inverses are needed in **control theory** (transfer functions), **statistics** (precision matrices, Fisher information), and **closed-form formulas** where symbolic inverses appear. The inverse of an orthogonal matrix is simply its transpose (Q⁻¹ = Qᵀ), and diagonal matrices invert by reciprocating each diagonal entry.
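The "solve directly, don't form the inverse" advice looks like this in NumPy (the matrix and right-hand side are illustrative):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
b = np.array([1.0, 2.0])

# Preferred: solve Ax = b directly via LU with partial pivoting
x_solve = np.linalg.solve(A, b)

# Works, but slower and less numerically stable: form A^-1 explicitly
x_inv = np.linalg.inv(A) @ b

print(x_solve)
```

For well-conditioned matrices the two answers agree to many digits, but `solve` avoids the extra work and error accumulation of building the full inverse.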
A singular matrix has a determinant of zero and no inverse. Geometrically, it maps some non-zero vectors to zero, collapsing dimensions. The rows (or columns) are linearly dependent.
The adjugate (or classical adjoint) method computes A⁻¹ = adj(A)/det(A). The adjugate is the transpose of the cofactor matrix, where each cofactor is a signed minor determinant.
The condition number κ(A) = ‖A‖·‖A⁻¹‖ measures sensitivity to perturbations. κ ≈ 1 means well-conditioned; the larger κ grows, the more an ill-conditioned matrix amplifies rounding errors during inversion (κ above roughly 10³ is a common warning threshold in double precision, though rules of thumb vary).
Verification catches computational errors and confirms the inverse is correct. In floating-point arithmetic, you may see values like 1.0000000001 instead of exactly 1, but they should be very close to the identity matrix.
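Because of that floating-point residue, verification should compare against the identity with a tolerance rather than exact equality, for example (assuming NumPy):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
A_inv = np.linalg.inv(A)

product = A @ A_inv
# Entries may be 1.0000000001 or -3e-17 instead of exactly 1 and 0,
# so use a tolerance-based comparison.
print(np.allclose(product, np.eye(2)))  # True
```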
Yes. Gaussian elimination (LU decomposition) is O(n³) and is preferred for large matrices. The cofactor/adjugate method is O(n!), making it impractical beyond ~10×10, but it provides educational insight into the structure.
A square matrix is invertible when: det ≠ 0, rank = n, all eigenvalues are non-zero, the rows/columns are linearly independent, or the null space contains only the zero vector. These conditions are all equivalent.