
Einstein Notation

Notation

The summation sign $\sum$ is omitted when a subscript appears twice in a single term.

Description

Also referred to as the Einstein summation convention. It is not really a formula but rather a rule. When doing vector calculations, one often needs to write the summation sign $\sum$ several times in a single formula, which makes the equation look cluttered and is tedious to write by hand. Hence the convention: when a subscript appears twice, the summation sign over it is omitted. Of course, care must be taken to avoid confusion about what is meant.

If confused, check the left-hand side to see which indices are present. If the index $i$ clearly does not appear on the left-hand side, then on the right-hand side $\sum\limits_{i}$ has been omitted by the Einstein notation. Conversely, if the index $j$ does appear on the left-hand side, then on the right-hand side the summation over $j$ is not merely omitted; it is not there at all.
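As a concrete analogue, NumPy's `einsum` implements exactly this bookkeeping: a repeated index on the right is summed over, while an index that also appears in the output is a free index and survives. A minimal sketch (the arrays `A`, `B`, `M` are arbitrary illustrative values):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
M = np.arange(9.0).reshape(3, 3)

# s = A_i B_i : i is repeated, so it is summed; no free index remains -> scalar
s = np.einsum('i,i->', A, B)

# v_i = M_ij B_j : j is repeated and summed away; i is free and survives
v = np.einsum('ij,j->i', M, B)

print(s)  # 32.0, same as A @ B
print(v)  # same as M @ B
```

The subscript string makes the convention explicit: anything missing after `->` has been summed over, just as an index missing from the left-hand side of an equation signals an omitted $\sum$.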

Examples

Let's say $1, 2, 3$ represent $x, y, z$ respectively. Suppose vectors $\mathbf{A} = (A_{1}, A_{2}, A_{3})$ and $\mathbf{B} = (B_{1}, B_{2}, B_{3})$ are given.

Vector

$$
\begin{align*}
\mathbf{A} &= \hat{\mathbf{e}}_{1}A_{1} + \hat{\mathbf{e}}_{2}A_{2} + \hat{\mathbf{e}}_{3}A_{3} \\
&= \sum\limits_{i=1}^{3} \hat{\mathbf{e}}_{i}A_{i} \\
&= \hat{\mathbf{e}}_{i}A_{i}
\end{align*}
$$

Inner Product of Two Vectors

$$
\begin{align*}
\mathbf{A} \cdot \mathbf{B} &= A_{1}B_{1} + A_{2}B_{2} + A_{3}B_{3} \\
&= \sum\limits_{i=1}^{3} A_{i}B_{i} \\
&= A_{i}B_{i}
\end{align*}
$$

It can be expressed using the Kronecker delta as follows.

$$
\mathbf{A} \cdot \mathbf{B} = A_{i}B_{i} = \delta_{ij}A_{i}B_{j}
$$
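This identity is easy to check numerically. A sketch with illustrative vectors, representing the Kronecker delta $\delta_{ij}$ as the $3 \times 3$ identity matrix:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
delta = np.eye(3)  # Kronecker delta: 1 if i == j, 0 otherwise

dot_repeated = np.einsum('i,i->', A, B)          # A_i B_i
dot_delta = np.einsum('ij,i,j->', delta, A, B)   # delta_ij A_i B_j

print(dot_repeated, dot_delta)  # both equal A @ B = 32.0
```

Contracting with $\delta_{ij}$ simply renames the index $j$ to $i$, which is why both expressions give the same number.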

Divergence of a Vector Function

Let $\dfrac{\partial }{\partial x_{i}} = \nabla_{i}$. Then a result similar to the inner product of two vectors is obtained.

$$
\begin{align*}
\nabla \cdot \mathbf{A} &= \dfrac{\partial A_{1}}{\partial x_{1}} + \dfrac{\partial A_{2}}{\partial x_{2}} + \dfrac{\partial A_{3}}{\partial x_{3}} \\
&= \nabla_{1} A_{1} + \nabla_{2} A_{2} + \nabla_{3} A_{3} \\
&= \sum\limits_{i=1}^{3} \nabla_{i} A_{i} \\
&= \nabla_{i}A_{i} \\
&= \delta_{ij}\nabla_{i}A_{j}
\end{align*}
$$
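The summed form $\nabla_{i}A_{i}$ translates directly into code. A sketch with SymPy, using the illustrative field $\mathbf{A} = (y, xy, xyz)$ that also appears further below:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
A = [y, x*y, x*y*z]  # illustrative vector field

# div A = nabla_i A_i : the repeated index i is summed from 1 to 3
div_A = sum(sp.diff(A[i], coords[i]) for i in range(3))
print(div_A)  # x + x*y
```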

Cross Product of Two Vectors

$$
\begin{align*}
& \mathbf{A} \times \mathbf{B} \\
=&\ \hat{\mathbf{e}}_{1} \left( A_{2} B_{3} - A_{3} B_{2} \right) + \hat{\mathbf{e}}_{2} \left( A_{3} B_{1} - A_{1} B_{3} \right) + \hat{\mathbf{e}}_{3} \left( A_{1} B_{2} - A_{2} B_{1} \right) \\
=&\ \hat{\mathbf{e}}_{1} A_{2} B_{3} - \hat{\mathbf{e}}_{1} A_{3} B_{2} + \hat{\mathbf{e}}_{2} A_{3} B_{1} - \hat{\mathbf{e}}_{2} A_{1} B_{3} + \hat{\mathbf{e}}_{3} A_{1} B_{2} - \hat{\mathbf{e}}_{3} A_{2} B_{1} \\
=&\ \epsilon_{123} \hat{\mathbf{e}}_{1} A_{2} B_{3} + \epsilon_{132} \hat{\mathbf{e}}_{1} A_{3} B_{2} + \epsilon_{231} \hat{\mathbf{e}}_{2} A_{3} B_{1} + \epsilon_{213} \hat{\mathbf{e}}_{2} A_{1} B_{3} + \epsilon_{312} \hat{\mathbf{e}}_{3} A_{1} B_{2} + \epsilon_{321} \hat{\mathbf{e}}_{3} A_{2} B_{1} \\
=&\ \sum\limits_{i=1}^{3} \sum\limits_{j=1}^{3} \sum\limits_{k=1}^{3} \epsilon_{ijk} \hat{\mathbf{e}}_{i} A_{j} B_{k} \\
=&\ \epsilon_{ijk} \hat{\mathbf{e}}_{i} A_{j}B_{k}
\end{align*}
$$

Here, $\epsilon_{ijk}$ is the Levi-Civita symbol. By the above result, the following equation holds.

$$
(\mathbf{A} \times \mathbf{B} )_{i} = \epsilon_{ijk} A_{j}B_{k}
$$
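The component formula $(\mathbf{A} \times \mathbf{B})_{i} = \epsilon_{ijk}A_{j}B_{k}$ can be checked against NumPy's built-in cross product. A sketch with illustrative vectors, building the Levi-Civita symbol as a $3 \times 3 \times 3$ array:

```python
import numpy as np

# Levi-Civita symbol: +1 for even permutations of the indices, -1 for odd, 0 otherwise
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# (A x B)_i = eps_ijk A_j B_k : j and k are summed, i remains free
cross = np.einsum('ijk,j,k->i', eps, A, B)
print(cross)  # [-3.  6. -3.], identical to np.cross(A, B)
```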

Curl of a Vector Function

Let $\dfrac{\partial }{\partial x_{i}} = \nabla_{i}$ again. Then a result similar to the cross product of two vectors is obtained.

$$
\begin{align*}
& \nabla \times \mathbf{A} \\
=&\ \hat{\mathbf{e}}_{1} \left( \nabla_{2} A_{3} - \nabla_{3} A_{2} \right) + \hat{\mathbf{e}}_{2} \left( \nabla_{3} A_{1} - \nabla_{1} A_{3} \right) + \hat{\mathbf{e}}_{3} \left( \nabla_{1} A_{2} - \nabla_{2} A_{1} \right) \\
=&\ \hat{\mathbf{e}}_{1} \nabla_{2} A_{3} - \hat{\mathbf{e}}_{1} \nabla_{3} A_{2} + \hat{\mathbf{e}}_{2} \nabla_{3} A_{1} - \hat{\mathbf{e}}_{2} \nabla_{1} A_{3} + \hat{\mathbf{e}}_{3} \nabla_{1} A_{2} - \hat{\mathbf{e}}_{3} \nabla_{2} A_{1} \\
=&\ \epsilon_{123} \hat{\mathbf{e}}_{1} \nabla_{2} A_{3} + \epsilon_{132} \hat{\mathbf{e}}_{1} \nabla_{3} A_{2} + \epsilon_{231} \hat{\mathbf{e}}_{2} \nabla_{3} A_{1} + \epsilon_{213} \hat{\mathbf{e}}_{2} \nabla_{1} A_{3} + \epsilon_{312} \hat{\mathbf{e}}_{3} \nabla_{1} A_{2} + \epsilon_{321} \hat{\mathbf{e}}_{3} \nabla_{2} A_{1} \\
=&\ \sum\limits_{i=1}^{3} \sum\limits_{j=1}^{3} \sum\limits_{k=1}^{3} \epsilon_{ijk} \hat{\mathbf{e}}_{i} \nabla_{j} A_{k} \\
=&\ \epsilon_{ijk} \hat{\mathbf{e}}_{i} \nabla_{j} A_{k}
\end{align*}
$$
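The same contraction gives the curl componentwise, $(\nabla \times \mathbf{A})_{i} = \epsilon_{ijk}\nabla_{j}A_{k}$. A SymPy sketch using the illustrative field $\mathbf{A} = (y, xy, xyz)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
A = [y, x*y, x*y*z]  # illustrative vector field

# (curl A)_i = eps_ijk nabla_j A_k : the repeated indices j and k are summed
curl = [sp.simplify(sum(sp.LeviCivita(i, j, k) * sp.diff(A[k], coords[j])
                        for j in range(3) for k in range(3)))
        for i in range(3)]
print(curl)  # [x*z, -y*z, y - 1]
```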

Here, it is important to always remember that $\nabla_{i}$ represents differentiation. Normally, swapping the order of vector components causes no problems.

$$
A_{1}A_{2}A_{3} = A_{2}A_{1}A_{3}
$$

However, since $\nabla_{i}$ is a differentiation, you must never swap the order of vector components across it.

$$
A_{1}\nabla_{2}A_{3} \ne \nabla_{2}A_{1}A_{3}
$$

For example, if $\mathbf{A} = (y, xy, xyz)$, the following result is obtained.

$$
A_{1}\nabla_{2}A_{3} = y \dfrac{\partial (xyz)}{\partial y} = xyz \ne 2xyz = \dfrac{\partial (xy^{2}z)}{\partial y} = \nabla_{2}A_{1}A_{3}
$$
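This non-commutativity is straightforward to verify symbolically; a sketch with SymPy, using the same field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = [y, x*y, x*y*z]  # the example field A = (y, xy, xyz)

lhs = A[0] * sp.diff(A[2], y)   # A_1 (nabla_2 A_3) = y * d(xyz)/dy = xyz
rhs = sp.diff(A[0] * A[2], y)   # nabla_2 (A_1 A_3) = d(xy^2 z)/dy = 2xyz
print(lhs, rhs)  # x*y*z 2*x*y*z
```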

Of course, since $\dfrac{\partial^{2} }{\partial x\partial y} = \dfrac{\partial^{2} }{\partial y\partial x}$ for sufficiently smooth functions, $\nabla_{1}\nabla_{2} = \nabla_{2}\nabla_{1}$ holds true.
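The commutation of the derivatives themselves can be checked the same way; a sketch with an arbitrary smooth illustrative function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * sp.sin(y)  # an arbitrary smooth illustrative function

d12 = sp.diff(f, x, y)  # nabla_2 nabla_1 f (differentiate by x, then y)
d21 = sp.diff(f, y, x)  # nabla_1 nabla_2 f (differentiate by y, then x)
print(sp.simplify(d12 - d21))  # 0
```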