Matrix Operations: Scalar Multiplication, Addition, and Multiplication

Scalar Multiplication

The multiplication of an arbitrary $m \times n$ matrix $A$ by a scalar $k$ is defined as multiplying each element of $A$ by $k$ and is denoted as follows:

$$
kA = k\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} := \begin{bmatrix} ka_{11} & ka_{12} & \cdots & ka_{1n} \\ ka_{21} & ka_{22} & \cdots & ka_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ ka_{m1} & ka_{m2} & \cdots & ka_{mn} \end{bmatrix}
$$

By definition, the multiplication of a scalar and a matrix is commutative, although the scalar is usually written in front.

$$
kA = Ak
$$
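As a quick sanity check, here is a minimal sketch in Python with NumPy (the variable names are my own); NumPy's `*` applies the scalar to every element, matching the definition above:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
k = 3

# Scalar multiplication: every element of A is multiplied by k
print(k * A)                          # [[ 3  6] [ 9 12]]
print(np.array_equal(k * A, A * k))   # True: kA = Ak
```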

Addition

The addition of two matrices $A$ and $B$, both of size $m \times n$, is defined as adding the elements in the same position (same row and same column) and is denoted as follows:

$$
\begin{align*}
A+B &= \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix} \\
&:= \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{bmatrix}
\end{align*}
$$

As can be seen from the definition, the addition of two matrices is only defined for matrices of the same size and is commutative.

$$
A + B = B + A
$$
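The same kind of check works for addition; a small sketch, again with NumPy and names of my own choosing:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Elementwise addition of two matrices of the same size
print(A + B)                          # [[ 6  8] [10 12]]
print(np.array_equal(A + B, B + A))   # True: addition commutes
```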

Multiplication

While multiplying a matrix by a scalar or adding two matrices works in the intuitive way, multiplying two matrices is a different story. Let’s first look at the multiplication of a row vector and a column vector.

The multiplication of a row vector $A = \begin{bmatrix} a_{1} & a_{2} & \cdots & a_{n} \end{bmatrix}$ of size $1 \times n$ and a column vector $B = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}$ of size $n \times 1$ is defined as follows:

$$
\begin{align*}
AB = \begin{bmatrix} a_{1} & a_{2} & \cdots & a_{n} \end{bmatrix} \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix} &:= a_{1}b_{1} + a_{2}b_{2} + \cdots + a_{n}b_{n} \\
&= \sum \limits_{i=1}^{n} a_{i}b_{i}
\end{align*}
$$

If we were to explain this definition in words, it would be “the sum of the products of corresponding elements,” which is conceptually the same as the dot product of two vectors learned in high school.

$$
\begin{align*}
\vec{a} &= (a_{1}, a_{2}, a_{3}) \\
\vec{b} &= (b_{1}, b_{2}, b_{3})
\end{align*}, \quad \vec{a} \cdot \vec{b} = a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}
$$
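In code, this row-times-column product is a single sum over paired elements; a minimal sketch in plain Python (the function name `row_dot_col` is my own):

```python
# Row vector times column vector: sum of products of paired elements
def row_dot_col(a, b):
    assert len(a) == len(b), "vectors must have the same length"
    return sum(a_i * b_i for a_i, b_i in zip(a, b))

print(row_dot_col([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```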

The multiplication of two matrices can be thought of as an extension of this concept. The multiplication of an $m \times n$ matrix $A$ and an $n \times k$ matrix $B$ is defined as follows:

$$
\begin{align*}
AB &= \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1k} \\ b_{21} & b_{22} & \cdots & b_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nk} \end{bmatrix} \\
&:= \begin{bmatrix} \sum_{i=1}^{n} a_{1i}b_{i1} & \sum_{i=1}^{n} a_{1i}b_{i2} & \cdots & \sum_{i=1}^{n} a_{1i}b_{ik} \\ \sum_{i=1}^{n} a_{2i}b_{i1} & \sum_{i=1}^{n} a_{2i}b_{i2} & \cdots & \sum_{i=1}^{n} a_{2i}b_{ik} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{n} a_{mi}b_{i1} & \sum_{i=1}^{n} a_{mi}b_{i2} & \cdots & \sum_{i=1}^{n} a_{mi}b_{ik} \end{bmatrix}
\end{align*}
$$
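Written out as code, the definition above is just the row-times-column sum repeated for every entry; a minimal sketch in plain Python (the function name `matmul` is my own):

```python
# Product of an m x n matrix A and an n x k matrix B, straight from the definition
def matmul(A, B):
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    # Entry (i, j) of AB is the dot product of row i of A and column j of B
    return [[sum(A[i][p] * B[p][j] for p in range(n)) for j in range(k)]
            for i in range(m)]

print(matmul([[1, 1], [0, 1]], [[2, -1], [1, 1]]))  # [[3, 0], [1, 1]]
```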

Although the formula might seem complicated, it is merely the result of performing several multiplications of row vectors and column vectors. The entry in row $i$ and column $j$ of the product $AB$ is the dot product of the $i$-th row of $A$ and the $j$-th column of $B$. Therefore, the number of columns of $A$ and the number of rows of $B$ must be the same for the multiplication to be defined. Also, the multiplication of two matrices generally does not follow the commutative law.

$$
AB \ne BA
$$

This can be easily verified with a simple example. If we let $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix}$, then

$$
\begin{align*}
AB &= \begin{bmatrix} 2+1 & -1+1 \\ 0+1 & 0+1 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 1 & 1 \end{bmatrix} \\
BA &= \begin{bmatrix} 2+0 & 2-1 \\ 1+0 & 1+1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}
\end{align*}
$$

Therefore,

$$
AB \ne BA
$$
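The same computation in NumPy (where `@` is matrix multiplication) confirms that the two products differ:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[2, -1],
              [1, 1]])

print(A @ B)                          # [[3 0] [1 1]]
print(B @ A)                          # [[2 1] [1 2]]
print(np.array_equal(A @ B, B @ A))   # False: AB != BA
```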

The process of matrix multiplication can be visually represented as follows:

[Figure: visualization of the matrix multiplication process]

Properties¹

Let’s assume that $A$, $B$, and $C$ are arbitrary matrices of size $m \times n$, and let $r$ and $s$ be arbitrary constants. The following properties of matrix operations hold:

(a) Commutative law for addition: $A + B = B + A$

(b) Associative law for addition: $A + (B + C) = (A + B) + C$

(c) $(r + s)A = rA + sA$

(d) $r(A + B) = rA + rB$

(e) $(rs)A = r(sA)$

Assuming $A$, $B$, and $C$ are arbitrary matrices of size $n \times n$:

(f) Associative law for multiplication: $A(BC) = (AB)C$

(g) Distributive laws for multiplication: $A(B+C) = AB + AC$ and $(A+B)C = AC + BC$

It should be noted again that the commutative law does not hold for matrix multiplication.
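All of these identities can be spot-checked numerically; a small sketch with random NumPy matrices, using `np.allclose` to allow for floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.random((3, 3)) for _ in range(3))
r, s = 2.0, -1.5

assert np.allclose(A + B, B + A)                 # (a) commutativity of addition
assert np.allclose(A + (B + C), (A + B) + C)     # (b) associativity of addition
assert np.allclose((r + s) * A, r * A + s * A)   # (c)
assert np.allclose(r * (A + B), r * A + r * B)   # (d)
assert np.allclose((r * s) * A, r * (s * A))     # (e)
assert np.allclose(A @ (B @ C), (A @ B) @ C)     # (f) associativity of multiplication
assert np.allclose(A @ (B + C), A @ B + A @ C)   # (g) left distributivity
assert np.allclose((A + B) @ C, A @ C + B @ C)   # (g) right distributivity
print("all properties verified on this sample")
```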


  1. Jim Hefferon, Linear Algebra (4th Edition), 2020, p. 235.