

Gaussian Quadrature

Definition 1

Let $f : [a,b] \to \mathbb{R}$ be integrable over $[a,b]$, and take nodes $a = x_{1} < \cdots < x_{n} = b$.

$$ I_{n}(f) := \sum_{j=1}^{n} w_{j} f(x_{j}) \approx \int_{a}^{b} f(x)\, dx = I(f) $$

Computing the weights $w_{j}$ of the $I_{n}$ defined above and approximating the integral this way is called Gaussian quadrature.

Explanation

Since there is a guarantee that a polynomial $p_{n-1}$ approximating $f$ well exists, we consider $p_{n-1}$ in place of $f$. For convenience, take the weight function to be $w(x) = 1$. [ NOTE: In this case, it is called Gauss-Legendre quadrature. ]

$$ E_{n}(p_{n-1}) := \int_{a}^{b} p_{n-1}(x)\, dx - \sum_{j=1}^{n} w_{j} p_{n-1}(x_{j}) $$

The error $E_{n}$ so defined is linear, therefore

$$ \begin{align*} E_{n}(p_{n-1}) =& E_{n}\left( a_{0} + a_{1} x + \cdots + a_{n-1} x^{n-1} \right) \\ =& a_{0} E_{n}(1) + a_{1} E_{n}(x) + \cdots + a_{n-1} E_{n}\left( x^{n-1} \right) \end{align*} $$

Thus, regardless of the values of $a_{0}, a_{1}, \cdots, a_{n-1}$, to find $w_{j}$ such that $E_{n}(p_{n-1}) = 0$ it suffices that, for all $i = 0, 1, \cdots, (n-1)$,

$$ E_{n}\left( x^{i} \right) = \int_{a}^{b} x^{i}\, dx - \sum_{j=1}^{n} w_{j} x_{j}^{i} = 0 $$

Written out explicitly:

$$ \begin{align*} w_{1} + \cdots + w_{n} =& \int_{a}^{b} 1\, dx \\ w_{1} x_{1} + \cdots + w_{n} x_{n} =& \int_{a}^{b} x\, dx \\ w_{1} x_{1}^{2} + \cdots + w_{n} x_{n}^{2} =& \int_{a}^{b} x^{2}\, dx \\ \vdots & \\ w_{1} x_{1}^{n-1} + \cdots + w_{n} x_{n}^{n-1} =& \int_{a}^{b} x^{n-1}\, dx \end{align*} $$

Represented as a matrix equation:

$$ \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_{1} & x_{2} & \cdots & x_{n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1}^{n-1} & x_{2}^{n-1} & \cdots & x_{n}^{n-1} \end{bmatrix} \begin{bmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{n} \end{bmatrix} = \begin{bmatrix} \int_{a}^{b} 1\, dx \\ \int_{a}^{b} x\, dx \\ \vdots \\ \int_{a}^{b} x^{n-1}\, dx \end{bmatrix} $$

Here, the existence and uniqueness of $w_{1}, \cdots, w_{n}$ are guaranteed because the coefficient matrix is a Vandermonde matrix, whose determinant is nonzero when the nodes are distinct.
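The Vandermonde system above can be solved directly for the weights. Here is a minimal pure-Python sketch; the function name `quadrature_weights` and the use of Gaussian elimination with partial pivoting are illustrative choices, not prescribed by the source.

```python
def quadrature_weights(nodes, a, b):
    """Solve the Vandermonde system  sum_j w_j x_j^i = int_a^b x^i dx
    (i = 0, ..., n-1) for the weights w_j, by Gaussian elimination."""
    n = len(nodes)
    # Row i of the Vandermonde matrix holds the i-th powers of the nodes;
    # the right-hand side holds the moments int_a^b x^i dx.
    A = [[x ** i for x in nodes] for i in range(n)]
    m = [(b ** (i + 1) - a ** (i + 1)) / (i + 1) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            m[r] -= f * m[col]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (m[r] - s) / A[r][r]
    return w

# Three equally spaced nodes on [0, 1] reproduce Simpson's rule: 1/6, 4/6, 1/6.
print(quadrature_weights([0.0, 0.5, 1.0], 0.0, 1.0))
```

By construction these weights make $I_{n}$ exact for every polynomial of degree at most $n - 1$, which is precisely the condition $E_{n}(x^{i}) = 0$ above.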

The advantage of Gaussian quadrature is that, unlike the Newton-Cotes integration formulas, the nodes do not have to be equally spaced: any placement of distinct nodes yields a valid rule.
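As a quick check of this freedom in node placement, consider a sketch with two unevenly spaced nodes on $[0, 1]$; the nodes 0.2 and 0.9 are an illustrative choice, not from the source.

```python
# Two unevenly spaced nodes on [0, 1]: the 2x2 Vandermonde system
#   w1 + w2 = 1,   w1*x1 + w2*x2 = 1/2
# has a unique solution, so every polynomial of degree <= 1 is
# integrated exactly even though the nodes are not equally spaced.
x1, x2 = 0.2, 0.9                 # arbitrary distinct nodes
w2 = (0.5 - x1) / (x2 - x1)
w1 = 1.0 - w2

def p(x):                         # any degree-1 polynomial
    return 3.0 * x + 7.0

exact = 3.0 / 2.0 + 7.0           # integral of 3x + 7 over [0, 1]
approx = w1 * p(x1) + w2 * p(x2)
print(approx, exact)
```

The two printed values agree up to floating-point rounding, whatever distinct nodes are chosen.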


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p524. ↩︎