Hadamard Product of Matrices
Definition
The Hadamard product $A \odot B$ of two matrices $A, B \in M_{m \times n}$ is defined as follows.
$$ A \odot B = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \odot\begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix} := \begin{bmatrix} a_{11}b_{11} & \cdots & a_{1n}b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1}b_{m1} & \cdots & a_{mn}b_{mn} \end{bmatrix} $$
$$ [A \odot B]_{ij} := [A]_{ij} [B]_{ij} $$
Description
The $\TeX$ code for the symbol $\odot$ is `\odot`.
It is also commonly called the elementwise product. Unlike matrix multiplication, it is defined only for matrices of the same size, and the commutative law holds. The following properties follow directly from the definition; a quick numerical check comes after the list.
- $A \odot B = B \odot A$
- $(A \odot B) \odot C = A \odot (B \odot C)$
- $A \odot (B + C) = A \odot B + A \odot C$
- $k(A \odot B) = (kA) \odot B = A \odot (kB)$
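These identities are easy to verify numerically. A minimal sketch in the Julia REPL, using small integer matrices so that all comparisons are exact:

julia> A = [1 2; 3 4]; B = [5 6; 7 8]; C = [9 10; 11 12];

julia> A .* B == B .* A
true

julia> (A .* B) .* C == A .* (B .* C)
true

julia> A .* (B + C) == A .* B + A .* C
true

julia> 2(A .* B) == (2A) .* B
true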
Hadamard Product of Vectors
The Hadamard product of two vectors $\mathbf{x}$ and $\mathbf{y} \in \mathbb{R}^{n}$ is defined as follows.
$$ \mathbf{x} \odot \mathbf{y} = \begin{bmatrix} x_{1}y_{1} \\ \vdots \\ x_{n}y_{n} \end{bmatrix} $$
This is nothing but the Hadamard product of matrices, specialized to $n \times 1$ matrices. The following identity, relating it to diagonal matrices, holds.
$$ \mathbf{x} \odot \mathbf{y} = \diag(\mathbf{x})\mathbf{y} = \diag(\mathbf{y})\mathbf{x} $$
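This identity can be confirmed directly; a minimal sketch in Julia, where `Diagonal` from the standard LinearAlgebra library plays the role of $\diag$:

julia> using LinearAlgebra

julia> x = [1, 2, 3]; y = [4, 5, 6];

julia> x .* y == Diagonal(x) * y
true

julia> x .* y == Diagonal(y) * x
true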
Hadamard Product of a Vector and a Matrix
The Hadamard product between a vector $\mathbf{x} \in \mathbb{R}^{n}$ and a matrix $\mathbf{Y} = \begin{bmatrix} \vert & & \vert \\ \mathbf{y}_{1} & \cdots & \mathbf{y}_{n} \\ \vert & & \vert \end{bmatrix}$ is defined as follows.
$$ \mathbf{x} \odot \mathbf{Y} = \begin{bmatrix} \!\!\vert & & \!\!\vert \\ \mathbf{x} \odot \mathbf{y}_{1} & \cdots & \mathbf{x} \odot \mathbf{y}_{n} \\ \!\!\vert & & \!\!\vert \end{bmatrix} $$
Simply put, it takes the Hadamard product of the vector with each column of the matrix. The definition immediately gives the equation below.
$$ \mathbf{x} \odot \mathbf{Y} = \diag(\mathbf{x}) \mathbf{Y} $$
Comparing the two identities above, one sees that for a vector $\mathbf{x}$ it is reasonable to treat the operator $\mathbf{x} \odot$ itself as a matrix, defining $\mathbf{x} \odot := \diag(\mathbf{x})$.
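Both claims check out numerically as well. A minimal sketch in Julia (note that broadcasting `x .* Y` already applies $\mathbf{x}$ to each column of $\mathbf{Y}$, exactly as in the definition):

julia> using LinearAlgebra

julia> x = [1, 2]; Y = [3 4 5; 6 7 8];

julia> x .* Y == Diagonal(x) * Y
true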
In Programming Languages
Such pointwise operations are implemented by attaching a dot `.` to the existing operator symbols. This notation is quite intuitive: if multiplication is `*`, then elementwise multiplication is `.*`.
Julia
julia> A = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> B = [2 2 2; 2 2 2]
2×3 Matrix{Int64}:
 2  2  2
 2  2  2

julia> A.*B
2×3 Matrix{Int64}:
 2   4   6
 8  10  12
MATLAB
>> A = [1 2 3; 4 5 6]

A =

     1     2     3
     4     5     6

>> B = [2 2 2; 2 2 2]

B =

     2     2     2
     2     2     2

>> A.*B

ans =

     2     4     6
     8    10    12
Python
However, Python is not a language for scientific computing the way Julia and MATLAB are, so the convention is different: the multiplication symbol `*` itself stands for elementwise multiplication, and matrix multiplication is written with the dedicated operator `@`.
>>> import numpy as np
>>> A = np.array([[1, 2, 3], [4, 5, 6]])
>>> B = np.array([[2, 2, 2], [2, 2, 2]])
>>> A*B
array([[ 2,  4,  6],
       [ 8, 10, 12]])
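For contrast, the `@` operator performs true matrix multiplication; the $2 \times 3$ arrays above are not conformable as they stand, so one of them is transposed here (continuing the same session):

>>> A @ B.T
array([[12, 12],
       [30, 30]])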
>>> import torch
>>> A = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> B = torch.tensor([[2, 2, 2], [2, 2, 2]])
>>> A*B
tensor([[ 2,  4,  6],
        [ 8, 10, 12]])