

Gaussian Quadrature

Definition 1

Let $f : [a,b] \to \mathbb{R}$ be integrable over $[a,b]$, and choose nodes $a = x_{1} < \cdots < x_{n} = b$ in $[a,b]$. $$ I_{n} (f) := \sum_{j=1}^{n} w_{j} f ( x_{j} ) \approx \int_{a}^{b} f(x) dx = I ( f ) $$ Determining the weights $w_{j}$ of the rule $I_{n}$ so defined and computing the numerical integral in this way is called Gaussian quadrature.
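
As a quick illustration of the definition, the sketch below evaluates $I_{n}(f) = \sum_{j} w_{j} f(x_{j})$ for given nodes and weights; the helper name `quadrature` is chosen here for illustration, and the trapezoidal rule on $[0,1]$ is used as a concrete (non-optimal) instance of such a rule.

```python
import numpy as np

def quadrature(f, nodes, weights):
    """Evaluate I_n(f) = sum_j w_j * f(x_j) for given nodes and weights."""
    nodes = np.asarray(nodes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, f(nodes)))

# Example: the trapezoidal rule on [0, 1] uses the endpoints as nodes,
# each with weight (b - a) / 2.
approx = quadrature(np.exp, nodes=[0.0, 1.0], weights=[0.5, 0.5])
exact = np.e - 1.0
print(approx, exact)  # ~1.859 vs ~1.718
```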

Explanation

Since there is a guarantee that a polynomial $p_{n-1}$ exists which approximates $f$ well, we work with $p_{n-1}$ in place of $f$. For convenience, take the weight function to be $w(x) = 1$. [ NOTE: In this case the method is called Gauss-Legendre quadrature. ] Define the error
$$ E_{n} ( p_{n-1} ) := \int_{a}^{b} p_{n-1} (x) dx - \sum_{j=1}^{n} w_{j} p_{n-1} ( x_{j} ) $$
The error $E_{n}$ so defined is linear, so
$$ \begin{align*} E_{n} ( p_{n-1} ) =& E_{n} \left( a_{0} + a_{1} x + \cdots + a_{n-1} x^{n-1} \right) \\ =& a_{0} E_{n} \left( 1 \right) + a_{1} E_{n} \left( x \right) + \cdots + a_{n-1} E_{n} \left( x^{n-1} \right) \end{align*} $$
Therefore, to find weights $w_{j}$ with $E_{n} ( p_{n-1} ) = 0$ no matter what $a_{0} , a_{1} , \cdots , a_{n-1}$ are, it suffices to require, for all $i=0,1, \cdots , (n-1)$,
$$ E_{n} ( x^{i} ) = \int_{a}^{b} x^{i} dx - \sum_{j=1}^{n} w_{j} x_{j}^{i} = 0 $$
Written out explicitly, this is the system
$$ \begin{align*} w_{1} + \cdots + w_{n} =& \int_{a}^{b} 1 dx \\ w_{1} x_{1} + \cdots + w_{n} x_{n} =& \int_{a}^{b} x dx \\ w_{1} x_{1}^{2} + \cdots + w_{n} x_{n}^{2} =& \int_{a}^{b} x^{2} dx \\ \vdots& \\ w_{1} x_{1}^{n-1} + \cdots + w_{n} x_{n}^{n-1} =& \int_{a}^{b} x^{n-1} dx \end{align*} $$
Represented as a matrix equation,
$$ \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_{1} & x_{2} & \cdots & x_{n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1}^{n-1} & x_{2}^{n-1} & \cdots & x_{n}^{n-1} \end{bmatrix} \begin{bmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{n} \end{bmatrix} = \begin{bmatrix} \int_{a}^{b} 1 dx \\ \int_{a}^{b} x dx \\ \vdots \\ \int_{a}^{b} x^{n-1} dx \end{bmatrix} $$
Since the nodes are distinct, the Vandermonde determinant of the coefficient matrix is nonzero, which guarantees that $w_{1}, \cdots , w_{n}$ exist (and are unique).
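
The derivation above can be checked numerically. The following is a minimal sketch (the helper name `quadrature_weights` is hypothetical) that builds the Vandermonde system with NumPy, solves it for the weights, and verifies that the resulting rule integrates $1, x, x^{2}$ exactly on $[0,1]$.

```python
import numpy as np

def quadrature_weights(nodes, a, b):
    """Solve the Vandermonde system V w = m for the weights, where
    V[i, j] = x_j ** i and m[i] = integral of x**i over [a, b]."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    V = np.vander(nodes, N=n, increasing=True).T            # row i holds x_j ** i
    m = np.array([(b ** (i + 1) - a ** (i + 1)) / (i + 1) for i in range(n)])
    return np.linalg.solve(V, m)

# Three distinct nodes on [0, 1]; the resulting rule is exact for all
# polynomials of degree <= 2.
nodes = np.array([0.0, 0.3, 1.0])
w = quadrature_weights(nodes, 0.0, 1.0)
for i in range(3):
    print(w @ nodes ** i, 1.0 / (i + 1))   # matches int_0^1 x^i dx
```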

The advantage of Gaussian quadrature is that, unlike the Newton-Cotes integration formulas, the nodes need not be equally spaced; they may be placed freely.
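
In particular, when the nodes are taken to be the roots of the Legendre polynomial (the Gauss-Legendre case with $w(x) = 1$ noted above), an $n$-node rule is exact for polynomials of degree up to $2n-1$. A minimal check using NumPy's `numpy.polynomial.legendre.leggauss`:

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1]; the nodes are the roots of
# the degree-3 Legendre polynomial and are not equally spaced.
x, w = np.polynomial.legendre.leggauss(3)

# With only 3 nodes, the rule integrates monomials up to degree 5 exactly.
for k in range(6):
    exact = (1.0 - (-1.0) ** (k + 1)) / (k + 1)   # int_{-1}^{1} x^k dx
    print(k, w @ x ** k, exact)
```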


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p524. ↩︎