
Secant Method

Methods


Let $f, f', f''$ be continuous near $\alpha$ and suppose $f(\alpha) = 0$, $f'(\alpha) \ne 0$.

For initial values $x_{0}, x_{1}$ sufficiently close to $\alpha$, the sequence $\left\{ x_{n} \right\}$ defined by $$ x_{n+1} := x_{n} - f ( x_{n} ) {{ x_{n} - x_{n-1} } \over { f ( x_{n} ) - f ( x_{n-1} ) }} $$ converges to $\alpha$ as $n \to \infty$, with order of convergence $\displaystyle {{1 + \sqrt{5} } \over {2}}$.
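
For a simple illustration, take $f(x) = x^{2} - 2$, whose positive root is $\alpha = \sqrt{2} \approx 1.4142$, with initial values $x_{0} = 1$, $x_{1} = 2$. The recurrence gives $$ \begin{align*} x_{2} =& 2 - f(2) {{ 2 - 1 } \over { f(2) - f(1) }} = 2 - 2 \cdot {{1} \over {2 - (-1)}} = {{4} \over {3}} \approx 1.3333 \\ x_{3} =& {{4} \over {3}} - f \left( {{4} \over {3}} \right) {{ {{4} \over {3}} - 2 } \over { f \left( {{4} \over {3}} \right) - f(2) }} = {{4} \over {3}} - \left( - {{2} \over {9}} \right) {{ - {{2} \over {3}} } \over { - {{2} \over {9}} - 2 }} = {{4} \over {3}} + {{1} \over {15}} = {{7} \over {5}} = 1.4 \end{align*} $$ so the error drops from about $0.59$ to about $0.014$ in just two steps.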

Explanation

The Golden Ratio

The order of convergence might look familiar: it is precisely the golden ratio, $\displaystyle {{1 + \sqrt{5} } \over {2}} = 1.618 \cdots$. In the proof, which relates three consecutive terms of the sequence, the Fibonacci sequence makes an appearance, an interesting phenomenon in its own right even before the usefulness of the secant method comes up. This order means the secant method converges faster than the bisection method but slower than the Newton-Raphson method.

The reason for adopting this method despite the faster Newton-Raphson method is that computing the derivative can sometimes be excessively difficult. In terms of the number of iterations, the lower convergence order of the secant method means it needs more of them than the Newton-Raphson method; in terms of the cost per iteration, however, the secant method can be preferable, since each step requires only one new function evaluation and no derivative at all. Which method to choose depends on the problem (though frankly, hardly anything other than the Newton method gets used).
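
As a rough sketch of this trade-off (the test function, derivative, and starting points below are arbitrary choices for illustration), one Newton-Raphson step needs evaluations of both $f$ and $f'$, while one secant step only needs the single new value $f(x_{1})$, because $f(x_{0})$ can be reused from the previous step.

NewtonStep<-function(f,fp,x) {x - f(x)/fp(x)}         # needs f and its derivative fp
SecantStep<-function(f,x0,x1) {x1 - f(x1) * ( ( x1 - x0 ) / ( f(x1) - f(x0) ) )}
 
f<-function(x) {x^3 + 8}       # example function (root at -2)
fp<-function(x) {3 * x^2}      # derivative, required only by the Newton step
 
NewtonStep(f,fp,-3)            # one Newton-Raphson iteration from x = -3
SecantStep(f,-3,-2.5)          # one secant iteration from (x0, x1) = (-3, -2.5)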

Initial Value Problem


As with the Newton-Raphson method, if the initial values are not given sufficiently close to $\alpha$, the iteration may fail to converge.

Proof 1

  • $\mathscr{H} \left\{ a,b,c, \cdots \right\}$ represents the smallest interval including $a,b,c, \cdots$.

Part 1. Divided Differences

Let us define $$ \begin{align*} f [ x_{0} , x_{1} ] :=& {{ f ( x_{1} ) - f ( x_{0} ) } \over { x_{1} - x_{0} }} \\ f [ x_{0} , x_{1} , x_{2} ] :=& {{ f [ x_{1} , x_{2} ] - f [ x_{0} , x_{1} ] } \over { x_{2} - x_{0} }} \end{align*} $$ These are called divided differences, and there exist $$ \begin{align*} \xi \in& \mathscr{H} \left\{ x_{0} , x_{1} \right\} \\ \zeta \in& \mathscr{H} \left\{ x_{0} , x_{1} , x_{2} \right\} \end{align*} $$ satisfying $f ' ( \xi ) = f [ x_{0} , x_{1} ]$ and $\displaystyle {{1} \over {2}} f '' ( \zeta ) = f [ x_{0} , x_{1} , x_{2} ]$.
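
A minimal R sketch of these two quantities (the test function and nodes are arbitrary); the values can be compared with $f'$ and $\tfrac{1}{2} f''$ evaluated inside the respective intervals.

DD1<-function(f,x0,x1) {( f(x1) - f(x0) ) / ( x1 - x0 )}
DD2<-function(f,x0,x1,x2) {( DD1(f,x1,x2) - DD1(f,x0,x1) ) / ( x2 - x0 )}
 
f<-function(x) {sin(x)}
DD1(f,1.0,1.1)        # approximately cos(xi) for some xi in [1.0, 1.1]
DD2(f,1.0,1.1,1.2)    # approximately -sin(zeta)/2 for some zeta in [1.0, 1.2]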


Part 2. $\displaystyle \alpha - x_{n+1} = - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ f ''( \zeta _{n} ) } \over { 2 f ' ( \xi_{n} ) }}$

Multiplying both sides of $\displaystyle x_{n+1} = x_{n} - f ( x_{n} ) {{ x_{n} - x_{n-1} } \over { f ( x_{n} ) - f ( x_{n-1} ) }}$ by $-1$ and adding $\alpha$ gives $$ \alpha - x_{n+1} = \alpha - x_{n} + f ( x_{n} ) {{ x_{n} - x_{n-1} } \over { f ( x_{n} ) - f ( x_{n-1} ) }} $$ and therefore $$ \begin{align*} \alpha - x_{n+1} =& \alpha - x_{n} + {{ x_{n} f ( x_{n} ) - x_{n-1} f ( x_{n} ) } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \\ =& {{ x_{n} f ( x_{n} ) - x_{n-1} f ( x_{n} ) + \alpha f(x_{n}) - x_{n} f(x_{n}) - \alpha f(x_{n-1}) + x_{n} f(x_{n-1}) } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \\ =& {{1 } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ - x_{n-1} f ( x_{n} ) + \alpha f(x_{n}) - \alpha f(x_{n-1}) + x_{n} f(x_{n-1} ) \right] \\ =& {{1 } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ ( \alpha - x_{n-1} ) f ( x_{n} ) - ( \alpha - x_{n} ) f(x_{n-1} ) \right] \\ =& - {{ ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ {{ f ( x_{n} ) } \over { x_{n} - \alpha }} - {{ f(x_{n-1}) } \over { x_{n-1} - \alpha }} \right] \\ =& - {{ ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ {{ f ( x_{n} ) - f(\alpha) } \over { x_{n} - \alpha }} - {{ f(x_{n-1} ) - f( \alpha) } \over { x_{n-1} - \alpha }} \right] \\ =& - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ 1 } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ f[ x_{n} , \alpha ] - f[ x_{n-1} , \alpha ] \right] \\ =& - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ x_{n} - x_{n-1} } \over { f ( x_{n} ) - f ( x_{n-1} ) }} \left[ {{ f[ x_{n} , \alpha ] - f[ x_{n-1} , \alpha ] } \over { x_{n} - x_{n-1} }} \right] \\ =& - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ f [ x_{n-1} , x_{n} , \alpha ] } \over { f [ x_{n-1} , x_{n} ] }} \end{align*} $$ where the step introducing $f(\alpha)$ uses $f(\alpha) = 0$, and the last step uses the fact that divided differences are symmetric in their arguments. Rearranging yields $$ \alpha - x_{n+1} = - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ f [ x_{n-1} , x_{n} , \alpha ] } \over { f [ x_{n-1} , x_{n} ] }} $$ and by the result of Part 1, the following is obtained: $$ \alpha - x_{n+1} = - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ f ''( \zeta _{n} ) } \over { 2 f ' ( \xi_{n} ) }} $$
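
The intermediate identity $\displaystyle \alpha - x_{n+1} = - ( \alpha - x_{n-1} ) ( \alpha - x_{n} ) {{ f [ x_{n-1} , x_{n} , \alpha ] } \over { f [ x_{n-1} , x_{n} ] }}$ holds exactly whenever $f(\alpha) = 0$, so it can be verified numerically; a small R sketch (the function, root, and iterates are arbitrary choices):

DD1<-function(f,a,b) {( f(b) - f(a) ) / ( b - a )}
DD2<-function(f,a,b,c) {( DD1(f,b,c) - DD1(f,a,b) ) / ( c - a )}
 
f<-function(x) {x^3 + 8}
alpha = -2                      # exact root of f
x0 = -2.5; x1 = -1.8            # two iterates near alpha
 
x2 = x1 - f(x1) * ( ( x1 - x0 ) / ( f(x1) - f(x0) ) )   # one secant step
lhs = alpha - x2
rhs = -( alpha - x0 ) * ( alpha - x1 ) * DD2(f,x0,x1,alpha) / DD1(f,x0,x1)
lhs - rhs                       # essentially zero, up to rounding error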


Part 3. Convergence

Take a sufficiently small interval $I : = [ \alpha - \delta , \alpha + \delta ]$ around $\alpha$ on which all of the given assumptions hold.

Define $$ M := {{ \max_{x \in I } | f '' (x) | } \over { 2 \min_{x \in I} | f '(x) | }} $$ Then, by the result of Part 2, $$ | \alpha - x_{n+1}| \le | \alpha - x_{n} | | \alpha - x_{n-1} | M $$ Multiplying both sides by $M$ gives $$ | \alpha - x_{n+1}| M \le ( | \alpha - x_{n} | M ) \cdot ( | \alpha - x_{n-1} | M ) $$ Assuming $I$ is small enough that $$ \varepsilon := \max \left\{ ( | \alpha - x_{n} | M ) , ( | \alpha - x_{n-1} | M ) \right\} < 1 $$ we obtain $$ | \alpha - x_{n+1}| M \le \varepsilon^2 $$ $$ | \alpha - x_{n+2}| M \le ( | \alpha - x_{n+1}| M ) \cdot ( | \alpha - x_{n}| M ) \le \varepsilon^2 \cdot \varepsilon = \varepsilon^3 $$ $$ | \alpha - x_{n+3}| M \le ( | \alpha - x_{n+2}| M ) \cdot ( | \alpha - x_{n+1}| M ) \le \varepsilon^3 \cdot \varepsilon^2 = \varepsilon^5 $$ and continuing in this manner, with exponents $q_{0} = 1$, $q_{1} = 2$, and $q_{m+1} := q_{m} + q_{m-1}$, the inequality $$ | \alpha - x_{n+m+1}| M \le ( | \alpha - x_{n+m}| M ) \cdot ( | \alpha - x_{n+m-1}| M ) \le \varepsilon^{ q_{m} } \cdot \varepsilon^{ q_{m-1} } = \varepsilon^{ q_{m+1} } $$ is obtained.

The General Term of the Fibonacci Sequence: Let’s say a sequence $\left\{ F_{n} \right\}$ is defined as $F_{n+1} := F_{n} + F_{n-1}$. If $F_{0} = F_{1} = 1$ then for $\displaystyle r_{0} : = {{1 + \sqrt{5} } \over {2}}$ and $\displaystyle r_{1} : = {{1 - \sqrt{5} } \over {2}}$, $\displaystyle F_{n} = {{ {r_{0}}^{n+1} - {r_{1}}^{n+1} } \over { r_{0} - r_{1} }} $.

$\left\{ q_{m} \right\}$ is exactly the Fibonacci sequence shifted by one index, $q_{m} = F_{m+1}$, so for sufficiently large $m$, $$ q_{m} \sim {{1} \over {\sqrt{5}}} \left( {{1 + \sqrt{5} } \over {2}} \right)^{m+2} $$ Thus $$ | \alpha - x_{n+m}| \le {{1} \over {M}} \varepsilon^{ q_{m} } $$ so $x_{n} \to \alpha$ as $n \to \infty$; moreover, since $\displaystyle q_{m+1} / q_{m} \to {{1 + \sqrt{5} } \over {2}}$, each error bound is roughly the previous one raised to that power, which is precisely convergence of order $$ {{1 + \sqrt{5} } \over {2}} \approx 1.618 $$
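
This order can also be observed empirically: the ratio $\displaystyle {{ | \alpha - x_{n+1} | } \over { | \alpha - x_{n} |^{1.618} }}$ should settle near a constant as the iteration proceeds. A small R sketch (the function and starting values are arbitrary choices):

f<-function(x) {x^3 + 8}
alpha = -2                      # exact root of f
x0 = -2.5; x1 = -1.8
for(n in 1:5)
{
  x2 = x1 - f(x1) * ( ( x1 - x0 ) / ( f(x1) - f(x0) ) )
  print( abs(alpha - x2) / abs(alpha - x1)^1.618 )      # roughly constant
  x0 = x1
  x1 = x2
}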

Implementation

Below is code written in R.

The itmax option is the maximum number of iterations; by default, the computation is abandoned if a solution within the tolerance tol is not found even after $1000$ iterations.

Secant<-function(f,x0,x1,itmax=10^3,tol=10^(-8))
{
  # The two initial values must not share the same function value,
  # otherwise the first secant line is horizontal and the step is undefined.
  if(f(x0)==f(x1)){stop('Wrong Initial Values')}
  
  x2 = x1 - f(x1) * ( ( x1  - x0 ) / ( f(x1) - f(x0) ) )
  for(i in 1:itmax)
  {
    # Shift the two most recent iterates and take one secant step
    x0 = x1
    x1 = x2
    x2 = x1 - f(x1) * ( ( x1  - x0 ) / ( f(x1) - f(x0) ) )
    # Stop once successive iterates agree to within the tolerance tol
    if(abs(x2-x1)<tol) {return(x2)}
  }
  
  stop('Maybe Wrong Initial Point')
}
 
f<-function(x) {x^3 + 8}
Secant(f,-7,7)
 
g<-function(x) {x^6 - x - 1}
Secant(g,0,3)
 
h<-function(x) {exp(x)-1}
Secant(h,-2,-1)



  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p68-69. ↩︎