Theory and Proof


Newton's Method


Section 2.4 Newton-Raphson and Secant Methods


    If f(x), f'(x), and f''(x) are continuous near a root p, then this extra information regarding the nature of f(x) can be used to develop algorithms that produce sequences {p_k} that converge to p faster than either the bisection or false position method. The Newton-Raphson (or simply Newton's) method is one of the most useful and best known algorithms that relies on the continuity of f'(x) and f''(x).  The method is attributed to Sir Isaac Newton (1643-1727) and Joseph Raphson (1648-1715).  

Theorem (Newton-Raphson Theorem).  Assume that f ∈ C^2[a,b] and there exists a number p ∈ [a,b] where f(p) = 0.  If f'(p) ≠ 0, then there exists a δ > 0 such that the sequence {p_k} defined by the iteration  

    p_{k+1} = g(p_k) = p_k - f(p_k)/f'(p_k)    for    k = 0, 1, ...  

will converge to p for any initial approximation p_0 ∈ [p - δ, p + δ].  
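The iteration in the theorem can be sketched in a few lines of Python. This is a minimal illustration, not code from the text; the names `newton`, `f`, and `dfdx` are our own, and the stopping test on the step size is one common practical choice.

```python
def newton(f, dfdx, p0, tol=1e-12, max_iter=50):
    """Iterate p_{k+1} = p_k - f(p_k)/f'(p_k) until successive
    approximations differ by less than tol."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / dfdx(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Example: the simple root p = sqrt(2) of f(x) = x^2 - 2, starting at p0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Note that the iteration requires f'(p_k) ≠ 0 at every step, which is why the theorem restricts p_0 to a sufficiently small interval about the root.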




Remark.  The function g(x) defined by the formula  

    g(x) = x - f(x)/f'(x)  

is called the Newton-Raphson iteration formula.  Since f(p) = 0, it is easy to see that g(p) = p.  Thus the Newton-Raphson iteration for finding the root of the equation f(x) = 0 is accomplished by finding a fixed point of the function g(x).  

Geometric construction.  At each step the tangent line to y = f(x) at the point (p_k, f(p_k)) is followed down to where it crosses the x-axis; that crossing point is the next approximation p_{k+1}.




Corollary (Newton's Iteration for Finding Square Roots)  Assume that A > 0 is a real number and let p_0 > 0 be an initial approximation to √A.  Define the sequence {p_k} using the recursive rule  

    p_{k+1} = (p_k + A/p_k)/2    for    k = 0, 1, ...  

Then the sequence {p_k} converges to √A;  that is,  lim_{k→∞} p_k = √A.  
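The recursion in the corollary (Newton's method applied to f(x) = x^2 - A, also known as Heron's method) is easy to try numerically. The sketch below is illustrative; the name `sqrt_newton` and the fixed step count are our own choices.

```python
def sqrt_newton(A, p0, n_steps=8):
    """Apply n_steps of the recursion p_{k+1} = (p_k + A/p_k)/2."""
    p = p0
    for _ in range(n_steps):
        p = 0.5 * (p + A / p)
    return p

# Approximate sqrt(5) starting from the rough guess p0 = 2.
approx = sqrt_newton(5.0, 2.0)
```

A handful of steps already reaches machine precision, reflecting the quadratic convergence discussed below.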


Definition (Order of a Root)  Assume that f(x) and its derivatives f'(x), ..., f^(M)(x) are defined and continuous on an interval about x = p.  We say that f(x) = 0 has a root of order M at x = p if and only if  

    f(p) = 0,  f'(p) = 0,  ...,  f^(M-1)(p) = 0,  and  f^(M)(p) ≠ 0.  

A root of order M = 1 is often called a simple root, and if M > 1 it is called a multiple root.  A root of order M = 2 is sometimes called a double root, and so on.  The next result will illuminate these concepts.
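As a concrete check of the definition (the example function is our own, not from the text), consider f(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2). It has a double root (order M = 2) at x = 1, since f(1) = f'(1) = 0 while f''(1) = 6 ≠ 0:

```python
# f(x) = x^3 - 3x + 2 and its first two derivatives, written out by hand.
f = lambda x: x ** 3 - 3.0 * x + 2.0
df = lambda x: 3.0 * x ** 2 - 3.0
d2f = lambda x: 6.0 * x

# Evaluate at the candidate root x = 1: expect (0, 0, nonzero).
values = (f(1.0), df(1.0), d2f(1.0))
```

The same function has a simple root (order M = 1) at x = -2, since f(-2) = 0 but f'(-2) = 9 ≠ 0.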


Lemma  If the equation f(x) = 0 has a root of order M at x = p, then there exists a continuous function h(x) so that f(x) can be expressed as the product  

    f(x) = (x - p)^M h(x),    where  h(p) ≠ 0.  

Speed of Convergence


    The distinguishing property we seek is the following.  If  p  is a simple root of  f(x) = 0,  Newton's method will converge rapidly, and the number of accurate decimal places (roughly) doubles with each iteration.  On the other hand, if  p  is a multiple root, the error in each successive approximation is a fraction of the previous error.  To make this precise, we define the order of convergence.  This is a measure of how rapidly a sequence converges.

Definition (Order of Convergence)  Assume that {p_k} converges to p, and set E_k = p - p_k for k ≥ 0.  If two positive constants A ≠ 0 and R > 0 exist, and  

    lim_{k→∞} |p - p_{k+1}| / |p - p_k|^R  =  lim_{k→∞} |E_{k+1}| / |E_k|^R  =  A,  

then the sequence is said to converge to p with order of convergence R.  The number A is called the asymptotic error constant.  The cases R = 1, 2 are given special consideration.

(i)   If  R = 1,  the convergence of  {p_k}  is called linear.

(ii)  If  R = 2,  the convergence of  {p_k}  is called quadratic.
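The definition can be verified numerically. In the illustrative check below (our own example, not from the text), Newton's method is applied to the simple root p = √2 of f(x) = x^2 - 2; the ratios |E_{k+1}| / |E_k|^2 should settle near the asymptotic error constant A = |f''(p)| / (2 |f'(p)|) = 1/(2√2) ≈ 0.3536, confirming order R = 2.

```python
import math

p_true = math.sqrt(2.0)
p = 1.0
errors = []
for _ in range(4):
    p = p - (p * p - 2.0) / (2.0 * p)   # Newton step for f(x) = x^2 - 2
    errors.append(abs(p_true - p))

# For quadratic convergence, |E_{k+1}| / |E_k|^2 approaches a constant A.
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
```

The errors themselves drop from about 8.6e-2 to roughly 1.6e-12 in just four steps, showing the characteristic doubling of accurate digits.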


Theorem (Convergence Rate for Newton-Raphson Iteration)  Assume that Newton-Raphson iteration produces a sequence {p_k} that converges to the root p of the function f(x).  

If p is a simple root, then convergence is quadratic and  |E_{k+1}| ≈ (|f''(p)| / (2 |f'(p)|)) |E_k|^2  for k sufficiently large.

If p is a multiple root of order m, then convergence is linear and  |E_{k+1}| ≈ ((m-1)/m) |E_k|  for k sufficiently large.
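The linear rate at a multiple root can be seen directly. In the sketch below (an illustrative example of our own), ordinary Newton iteration is applied to f(x) = (x - 1)^2, which has a double root (m = 2) at p = 1; the error ratio |E_{k+1}| / |E_k| should approach (m - 1)/m = 1/2.

```python
p = 2.0
errors = [abs(1.0 - p)]                 # E_0 = 1
for _ in range(10):
    # Ordinary Newton step for f(x) = (x - 1)^2, f'(x) = 2(x - 1).
    p = p - (p - 1.0) ** 2 / (2.0 * (p - 1.0))
    errors.append(abs(1.0 - p))

ratio = errors[10] / errors[9]          # expect (m - 1)/m = 1/2
```

Here each step merely halves the error, so reaching 10 decimal places would take dozens of iterations, in sharp contrast to the simple-root case.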



Theorem (Acceleration of Newton-Raphson Iteration)  Suppose that the Newton-Raphson algorithm produces a sequence that converges linearly to the root x = p of order m > 1.  Then the modified Newton-Raphson iteration formula  

    p_{k+1} = p_k - m f(p_k)/f'(p_k)  

will produce a sequence {p_k} that converges quadratically to p.
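A sketch of the modified iteration follows, assuming the order m of the root is known in advance. The example function f(x) = (x - 1)^2 (x + 2), with f'(x) = 3(x - 1)(x + 1) and a double root (m = 2) at p = 1, is our own illustration; the guard against landing exactly on the root avoids a 0/0 division.

```python
def modified_newton(f, dfdx, m, p0, tol=1e-9, max_iter=25):
    """Iterate p_{k+1} = p_k - m*f(p_k)/f'(p_k) for a root of known order m."""
    p = p0
    for _ in range(max_iter):
        fp = f(p)
        if fp == 0.0:           # landed exactly on the root
            return p
        step = m * fp / dfdx(p)
        p = p - step
        if abs(step) < tol:
            return p
    return p

f = lambda x: (x - 1.0) ** 2 * (x + 2.0)
dfdx = lambda x: 3.0 * (x - 1.0) * (x + 1.0)

root = modified_newton(f, dfdx, m=2, p0=1.5)
```

With the factor m in place, the iteration recovers quadratic convergence and reaches the double root in a handful of steps, whereas the plain iteration would only halve the error at each step.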





(c) John H. Mathews 2004