

Hadamard Product of Matrices

Definition

The Hadamard product $A \odot B$ of two matrices $A, B \in M_{m \times n}$ is defined as follows.

$$
A \odot B = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \odot \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix} := \begin{bmatrix} a_{11}b_{11} & \cdots & a_{1n}b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1}b_{m1} & \cdots & a_{mn}b_{mn} \end{bmatrix}
$$

$$
[A \odot B]_{ij} := [A]_{ij} [B]_{ij}
$$
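For instance, with two concrete $2 \times 2$ matrices the product is computed entry by entry:

$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \odot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 & 2 \cdot 6 \\ 3 \cdot 7 & 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 5 & 12 \\ 21 & 32 \end{bmatrix}
$$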

Description

The $\TeX$ code for the symbol $\odot$ is `\odot`.

It is also commonly called the elementwise product. Unlike matrix multiplication, it is defined only for matrices of the same size, and it is commutative. The following properties hold:

- $A \odot B = B \odot A$
- $(A \odot B) \odot C = A \odot (B \odot C)$
- $A \odot (B + C) = A \odot B + A \odot C$
- $k(A \odot B) = (kA) \odot B = A \odot (kB)$
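As a quick numerical sanity check, the four properties above can be verified with NumPy, where `*` is the elementwise product (the matrices and scalar below are arbitrary example values):

```python
import numpy as np

# Arbitrary same-shaped example matrices and a scalar.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[9, 1], [2, 3]])
k = 3

assert np.array_equal(A * B, B * A)                # commutativity
assert np.array_equal((A * B) * C, A * (B * C))    # associativity
assert np.array_equal(A * (B + C), A * B + A * C)  # distributivity
assert np.array_equal(k * (A * B), (k * A) * B)    # scalar compatibility
assert np.array_equal(k * (A * B), A * (k * B))
```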

Hadamard Product of Vectors

The Hadamard product of two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$ is defined as follows.

$$
\mathbf{x} \odot \mathbf{y} = \begin{bmatrix} x_{1}y_{1} \\ \vdots \\ x_{n}y_{n} \end{bmatrix}
$$

This is simply the special case of the matrix Hadamard product for $n \times 1$ matrices. The following identity, involving diagonal matrices, holds.

$$
\mathbf{x} \odot \mathbf{y} = \operatorname{diag}(\mathbf{x})\mathbf{y} = \operatorname{diag}(\mathbf{y})\mathbf{x}
$$
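This identity is easy to confirm numerically; the sketch below uses NumPy's `np.diag` with arbitrary example vectors:

```python
import numpy as np

# Arbitrary example vectors in R^3.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

hadamard = x * y              # x ⊙ y, elementwise
via_diag_x = np.diag(x) @ y   # diag(x) y
via_diag_y = np.diag(y) @ x   # diag(y) x

assert np.allclose(hadamard, via_diag_x)
assert np.allclose(hadamard, via_diag_y)
```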

Hadamard Product of a Vector and a Matrix

The Hadamard product of a vector $\mathbf{x} \in \mathbb{R}^{n}$ and a matrix $\mathbf{Y} = \begin{bmatrix} \mathbf{y}_{1} & \cdots & \mathbf{y}_{n} \end{bmatrix}$ is defined as follows.

$$
\mathbf{x} \odot \mathbf{Y} = \begin{bmatrix} \vert & & \vert \\ \mathbf{x} \odot \mathbf{y}_{1} & \cdots & \mathbf{x} \odot \mathbf{y}_{n} \\ \vert & & \vert \end{bmatrix}
$$

Simply put, it takes the Hadamard product of the vector with each column of the matrix. The definition immediately gives the equation below.

$$
\mathbf{x} \odot \mathbf{Y} = \operatorname{diag}(\mathbf{x}) \mathbf{Y}
$$
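Again, a short NumPy sketch confirms this: broadcasting `x[:, None] * Y` multiplies each column of `Y` elementwise by `x`, which matches left-multiplication by the diagonal matrix (example values are arbitrary):

```python
import numpy as np

# Arbitrary example vector and matrix whose columns live in R^3.
x = np.array([1.0, 2.0, 3.0])
Y = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])

colwise = x[:, None] * Y      # x ⊙ each column of Y, via broadcasting
via_diag = np.diag(x) @ Y     # diag(x) Y

assert np.allclose(colwise, via_diag)
```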

Comparing the two identities, it is natural to identify the operator $\mathbf{x} \odot$ with a matrix, namely $\mathbf{x} \odot := \operatorname{diag}(\mathbf{x})$.

In Programming Languages

In languages such as Julia and MATLAB, such pointwise operations are written by adding a dot `.` to the existing operator. This notation is quite intuitive: if multiplication is `*`, then elementwise multiplication is `.*`.

Julia

julia> A = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> B = [2 2 2; 2 2 2]
2×3 Matrix{Int64}:
 2  2  2
 2  2  2

julia> A.*B
2×3 Matrix{Int64}:
 2   4   6
 8  10  12

MATLAB

>> A = [1 2 3; 4 5 6]
A =
     1     2     3
     4     5     6

>> B = [2 2 2; 2 2 2]
B =
     2     2     2
     2     2     2

>> A.*B
ans =
     2     4     6
     8    10    12

Python, however, was not designed as a scientific-computing language like Julia or MATLAB, so the notation differs: in NumPy (and likewise PyTorch), `*` itself denotes elementwise multiplication, while matrix multiplication is written `@`.

>>> import numpy as np
>>> A = np.array([[1, 2, 3], [4, 5, 6]])
>>> B = np.array([[2, 2, 2], [2, 2, 2]])
>>> A*B
array([[ 2,  4,  6],
       [ 8, 10, 12]])
>>> import torch
>>> A = torch.tensor([[1, 2, 3],[4, 5, 6]])
>>> B = torch.tensor([[2, 2, 2],[2, 2, 2]])
>>> A*B
tensor([[ 2,  4,  6],
        [ 8, 10, 12]])