

Newton's Method for Solving Nonlinear Systems

Method 1

Suppose a multivariable function $\mathbf{f} : \mathbb{R}^{N} \to \mathbb{R}^{N}$ satisfies $\mathbf{f} \in C^{2} \left( N ( \alpha ) \right)$ and $\mathbf{f} ( \alpha ) = \mathbf{0}$, and that $\left[ D \mathbf{f} ( \alpha ) \right]^{-1}$ exists. Then for an initial value $\mathbf{x}_{0}$ sufficiently close to $\alpha$, the sequence $\left\{ \mathbf{x}_{n} \right\}$ defined by $$ \mathbf{x}_{n+1} := \mathbf{x}_{n} - \left[ D \mathbf{f} ( \mathbf{x}_{n} ) \right]^{-1} \mathbf{f} ( \mathbf{x}_{n} ) $$ converges quadratically to $\alpha$ as $n \to \infty$.
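
Here, quadratic convergence means that the error is roughly squared at each step: for some constant $C > 0$ and all sufficiently large $n$, $$ \left\| \mathbf{x}_{n+1} - \alpha \right\| \le C \left\| \mathbf{x}_{n} - \alpha \right\|^{2} $$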


  • $\mathbf{f} \in C^{2} \left( N ( \alpha ) \right)$ means that $\mathbf{f}$ is twice continuously differentiable on a neighborhood $N ( \alpha )$ of $\alpha$.
  • $D \mathbf{f}$ denotes the Jacobian matrix of $\mathbf{f}$, written out explicitly below.
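
Explicitly, the Jacobian appearing in the iteration is the $N \times N$ matrix of first partial derivatives evaluated at the current iterate: $$ D \mathbf{f} ( \mathbf{x} ) = \begin{bmatrix} \dfrac{\partial f_{1}}{\partial x_{1}} ( \mathbf{x} ) & \cdots & \dfrac{\partial f_{1}}{\partial x_{N}} ( \mathbf{x} ) \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_{N}}{\partial x_{1}} ( \mathbf{x} ) & \cdots & \dfrac{\partial f_{N}}{\partial x_{N}} ( \mathbf{x} ) \end{bmatrix} $$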

Explanation

Motivation

$$ \begin{cases} x + \cos ( x - y ) = 0 \\ - y + \sin (x + y) = 0 \end{cases} $$ Suppose we are given a system like the one above. If it were a linear system, the solution could be found relatively easily with the tools of linear algebra, but nonlinear systems in which the variables are intertwined like this are much harder to handle. Newton's method, although only numerical, solves them efficiently.
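
For instance, writing the system as $\mathbf{f} (x, y) = \left( x + \cos ( x - y ) ,\ - y + \sin ( x + y ) \right)$, the Jacobian that the iteration requires is $$ D \mathbf{f} (x, y) = \begin{bmatrix} 1 - \sin ( x - y ) & \sin ( x - y ) \\ \cos ( x + y ) & - 1 + \cos ( x + y ) \end{bmatrix} $$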

Generalization of the Newton-Raphson Method

The Newton-Raphson method is not only extremely fast but also versatile enough to solve nonlinear systems of the form $$ \begin{cases} f_{1}( x_{1} , \cdots , x_{N} ) = 0 \\ \vdots \\ f_{N}( x_{1} , \cdots , x_{N} ) = 0 \end{cases} $$ Moreover, its derivation and proof in several dimensions are essentially the same as in one dimension, only generalized. In particular, if $f : \mathbb{R} \to \mathbb{R}$, then $\displaystyle \left[ D f ( x_{n} ) \right]^{-1} = {{1} \over { f ' ( x_{n} ) }}$, because the Jacobian plays the role of the derivative for a vector-valued function, as shown below.
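
Substituting this into the iteration recovers the familiar one-dimensional Newton-Raphson formula: $$ x_{n+1} = x_{n} - {{ f ( x_{n} ) } \over { f ' ( x_{n} ) }} $$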

In summary, it is an excellent method because it is easy to understand, fast, and applicable to many problems. Mastering Newton's method means owning one of the best tools for solving equations in most fields.

Implementation

Below is an implementation of Newton's method in R, using the numDeriv package to obtain the Jacobian.

library(numDeriv)

# Newton's method: repeat x <- x - [Df(x)]^{-1} f(x)
# until the residual norm ||f(x)|| drops below tol.
Newton <- function(f, x0, tol = 1e-8) {
  while (sqrt(sum(f(x0)^2)) > tol) {
    x0 <- x0 - solve(jacobian(f, x0)) %*% f(x0)
  }
  return(c(x0))
}

# The example system: x + cos(x - y) = 0, -y + sin(x + y) = 0
f <- function(v) {
  c(
    v[1] + cos(v[1] - v[2]),
    -v[2] + sin(v[1] + v[2])
  )
}

Newton(f, c(1, 1))                  # solve starting from (1, 1)
f(c(-0.9980201, -0.9350821))        # residual at the computed approximation

Running the code with the initial value $(x_{0} , y_{0}) = (1,1)$ yields the approximation $(-0.9980201 \cdots , -0.9350821 \cdots )$ to the solution of $\begin{cases} x + \cos ( x - y ) = 0 \\ - y + \sin (x + y) = 0 \end{cases}$; the last line evaluates $\mathbf{f}$ at this point to verify that the residual is close to zero.
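
If one prefers not to depend on numDeriv, the analytic Jacobian worked out earlier can be supplied by hand. The following is a minimal sketch under that assumption; the names Df and NewtonAnalytic are illustrative and not part of the original code.

Df <- function(v) {
  # Jacobian of the example system, entered by hand (illustrative sketch)
  matrix(c(
    1 - sin(v[1] - v[2]),      sin(v[1] - v[2]),
        cos(v[1] + v[2]), -1 + cos(v[1] + v[2])
  ), nrow = 2, byrow = TRUE)
}

NewtonAnalytic <- function(f, Df, x0, tol = 1e-8) {
  while (sqrt(sum(f(x0)^2)) > tol) {
    x0 <- x0 - solve(Df(x0)) %*% f(x0)  # same iteration, hand-coded Jacobian
  }
  return(c(x0))
}

NewtonAnalytic(f, Df, c(1, 1))

Trading numerical differentiation for a hand-derived Jacobian saves a few function evaluations per step, at the cost of re-deriving Df whenever f changes.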

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p108-109. ↩︎