Multiply a matrix by a scalar with element-wise display, property verification, chain operations, determinant/trace scaling, and before/after visualization.
Scalar multiplication is one of the two fundamental operations that define a vector space — multiplying every element of a matrix by a single number (a scalar). Given a matrix A and scalar k, the product kA is defined entrywise by (kA)ᵢⱼ = k·aᵢⱼ.
While the operation is simple, its properties are essential to linear algebra. Scalar multiplication is linear: k(A + B) = kA + kB and (k₁ + k₂)A = k₁A + k₂A. It is associative with scalars: k₁(k₂A) = (k₁k₂)A. And 1·A = A (identity element). These properties together with matrix addition give matrices their vector space structure.
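These vector-space axioms can be checked numerically. A minimal NumPy sketch, using hypothetical example matrices and scalars:

```python
import numpy as np

# Hypothetical example matrices and scalars for checking the axioms
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [5.0, 2.0]])
k1, k2 = 3.0, -2.0

# Distributivity over matrix addition: k(A + B) = kA + kB
assert np.allclose(k1 * (A + B), k1 * A + k1 * B)
# Distributivity over scalar addition: (k1 + k2)A = k1·A + k2·A
assert np.allclose((k1 + k2) * A, k1 * A + k2 * A)
# Associativity with scalars: k1(k2·A) = (k1·k2)A
assert np.allclose(k1 * (k2 * A), (k1 * k2) * A)
# Identity: 1·A = A
assert np.allclose(1.0 * A, A)
```

Because the identities hold for every matrix, any example matrices will pass; swapping in different values of A, B, k₁, k₂ is a useful sanity check.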
Scalar multiplication affects key matrix properties in predictable ways. The trace scales linearly: tr(kA) = k·tr(A). The Frobenius norm scales by the absolute value: ‖kA‖_F = |k|·‖A‖_F. Most notably, the determinant of kA for an n×n matrix satisfies det(kA) = kⁿ·det(A) — the scalar is raised to the power of the matrix dimension, not just multiplied.
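These scaling rules are easy to verify with NumPy. A short sketch on a hypothetical 2×2 matrix (so n = 2 and the determinant picks up a factor of k²):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # hypothetical example, n = 2
k = -4.0
n = A.shape[0]

# tr(kA) = k·tr(A) — trace scales linearly
assert np.isclose(np.trace(k * A), k * np.trace(A))
# ‖kA‖_F = |k|·‖A‖_F — Frobenius norm scales by |k|
assert np.isclose(np.linalg.norm(k * A), abs(k) * np.linalg.norm(A))
# det(kA) = kⁿ·det(A) — determinant scales by k to the power n
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))
```

Note the negative scalar: the trace flips sign with k, the norm does not (it takes |k|), and the determinant here gains a factor of (−4)² = 16.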
This calculator performs scalar multiplication on matrices up to 5×5, displaying element-wise before/after comparisons, verifying all key properties, and supporting chain multiplication to demonstrate the associative law.
While multiplying each element by a constant sounds simple, for a 4×4 or 5×5 matrix that is 16–25 individual multiplications — plus tracking how the determinant scales by kⁿ, not k. This calculator multiplies instantly, shows how every entry changes, computes the scaled determinant and Frobenius norm, and supports chain multiplication by a second scalar. It is a fast way to verify scaling operations and explore how scalar multiplication interacts with the determinant, trace, and eigenvalues.
(kA)ᵢⱼ = k · aᵢⱼ — every element is multiplied by the scalar k.
Result: for A = [[1,2],[3,4]] and k = 3, kA = [[3,6],[9,12]], det(kA) = 9·det(A) = 9·(−2) = −18
Each element is multiplied by 3. The determinant scales by 3² = 9 for a 2×2 matrix.
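The worked example above can be reproduced in a few lines of NumPy:

```python
import numpy as np

# The worked example: k = 3 applied to A = [[1, 2], [3, 4]]
A = np.array([[1, 2], [3, 4]])
k = 3

kA = k * A
assert (kA == np.array([[3, 6], [9, 12]])).all()

# det(A) = 1·4 − 2·3 = −2, so det(kA) = 3²·(−2) = −18
assert np.isclose(np.linalg.det(kA), 9 * np.linalg.det(A))
assert np.isclose(np.linalg.det(kA), -18.0)
```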
Scalar multiplication multiplies every entry of a matrix by a single number: (kA)ᵢⱼ = k·aᵢⱼ. It preserves the matrix dimensions and distributes over addition: k(A + B) = kA + kB. Two useful special cases: k = 0 produces the zero matrix, and k = −1 produces −A, the additive inverse. Scalar multiplication commutes freely with matrix multiplication: k(AB) = (kA)B = A(kB), making it easy to factor scalars in and out of expressions.
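Both the special cases and the interplay with matrix multiplication can be checked directly. A sketch using hypothetical example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
k = 7.0

# Scalars commute with matrix multiplication: k(AB) = (kA)B = A(kB)
assert np.allclose(k * (A @ B), (k * A) @ B)
assert np.allclose(k * (A @ B), A @ (k * B))

# Special cases: k = 0 gives the zero matrix, k = -1 gives the additive inverse
assert np.allclose(0 * A, np.zeros_like(A))
assert np.allclose(-1 * A + A, np.zeros_like(A))
```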
The most important subtlety is the **determinant scaling rule**: det(kA) = kⁿ·det(A) for an n×n matrix. The scalar is raised to the power n because each of the n rows is scaled by k. The **trace** scales linearly: tr(kA) = k·tr(A). Every **eigenvalue** of A is multiplied by k, while the eigenvectors remain unchanged. The **Frobenius norm** scales by |k|: ‖kA‖_F = |k|‖A‖_F. These identities are frequently tested in linear algebra courses.
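The eigenvalue scaling in particular can be confirmed numerically. A sketch on a hypothetical triangular matrix, whose eigenvalues (2 and 3) can be read off the diagonal:

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])  # hypothetical example; eigenvalues 2, 3
k = 5.0  # positive scalar, so sorting preserves the eigenvalue pairing

eig_A = np.sort(np.linalg.eigvals(A))
eig_kA = np.sort(np.linalg.eigvals(k * A))

# Each eigenvalue of kA is k times the corresponding eigenvalue of A
assert np.allclose(eig_kA, k * eig_A)
```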
In **control theory**, system matrices are often scaled to normalize gain or change units. In **computer graphics**, scaling matrices uniformly enlarge or shrink objects. In **machine learning**, weight matrices are scaled during regularization (weight decay) and learning rate adjustments. In **physics**, matrices representing physical quantities may be scaled when changing unit systems (e.g., SI to CGS). Understanding how scalar multiplication interacts with other matrix properties is fundamental to all these domains.
Multiplying a matrix by a scalar means multiplying every element by that number: (kA)ᵢⱼ = k·aᵢⱼ. The result has the same dimensions as the original.
For an n×n matrix, det(kA) = kⁿ·det(A). The scalar is raised to the power n because the determinant is multilinear in the rows, and each of the n rows of kA is scaled by k.
Yes — kA = Ak (scalar commutes with matrix). However, the notation kA is standard since scalars typically appear on the left.
Multiplying by 0 gives the zero matrix (all elements become 0). The rank drops to 0, the determinant becomes 0, and the trace becomes 0.
Scalar multiplication multiplies every element by a single number. Matrix multiplication combines rows and columns and requires compatible dimensions.
No — the eigenvectors of kA are the same as those of A. However, the eigenvalues all scale by k: if λ is an eigenvalue of A, then kλ is an eigenvalue of kA.
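This can be seen numerically on a hypothetical symmetric example (eigenvalues 1 and 3), where `np.linalg.eigh` returns both eigenvalues and eigenvectors; eigenvectors are only determined up to sign, so the check allows for that:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # hypothetical symmetric example
k = 4.0  # positive scalar keeps the ascending eigenvalue order

w_A, V_A = np.linalg.eigh(A)
w_kA, V_kA = np.linalg.eigh(k * A)

# Eigenvalues scale by k; eigenvectors agree up to sign
assert np.allclose(w_kA, k * w_A)
for i in range(2):
    assert (np.allclose(V_kA[:, i], V_A[:, i])
            or np.allclose(V_kA[:, i], -V_A[:, i]))
```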