Numerical Optimization
Secant Method
A derivative-free root-finding method that approximates Newton's method using two previous function evaluations, achieving superlinear convergence of order approximately 1.618.
Prerequisites
- Newton's method
Why This Matters
Newton's method requires computing the derivative f'(x_k) at every iteration. In many practical settings, the derivative is expensive, unavailable, or numerically unstable. The secant method replaces the exact derivative with a finite difference approximation built from two prior iterates, sacrificing some convergence speed (superlinear instead of quadratic) but eliminating the derivative requirement entirely.
Mental Model
Newton's method uses the tangent line at x_k to find the next iterate. The secant method uses the secant line through the points (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)). This secant line approximates the tangent line, and the approximation improves as x_{k-1} and x_k converge to the root.
Formal Setup
Secant Method
Given two initial points x_0, x_1 and a function f, the secant method generates iterates:
x_{k+1} = x_k - f(x_k) * (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
The fraction (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1}) is the finite difference approximation to f'(x_k).
The method requires two starting points but only one function evaluation per iteration (since f(x_{k-1}) was already computed in the previous step). Newton's method requires one starting point but needs both f(x_k) and f'(x_k) per iteration.
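The recurrence can be sketched in a few lines. This is a minimal illustrative implementation, not a canonical interface; the function name `secant` and the stopping rule (step size below `tol`) are assumptions of the sketch:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Root of f by the secant method (illustrative sketch).

    Keeps only the two most recent iterates; after the two startup
    evaluations, each iteration costs one new call to f.
    """
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_k) == f(x_{k-1}): secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1        # recycle the previous evaluation
        x1, f1 = x2, f(x2)     # the only new evaluation this step
    raise RuntimeError("secant method did not converge")

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # converges to sqrt(2)
```

Note that the loop carries f1 forward into f0, which is exactly why only one new evaluation is needed per step.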
Main Theorems
Superlinear Convergence of the Secant Method
Statement
Let r be a simple root of f (meaning f(r) = 0 and f'(r) != 0). If f is twice continuously differentiable near r and the initial points are sufficiently close to r, then the secant method converges with order p = (1 + sqrt(5))/2 ≈ 1.618 (the golden ratio):
|x_{k+1} - r| <= C |x_k - r|^p
for a constant C depending on f near r.
Intuition
The convergence order p satisfies p^2 = p + 1 because each secant step uses information from two previous iterates. Solving this quadratic gives p = (1 + sqrt(5))/2 ≈ 1.618. This is strictly between linear (p = 1, bisection) and quadratic (p = 2, Newton).
Proof Sketch
Define the error e_k = x_k - r. Taylor expand f(x_k) = f'(r) e_k + (1/2) f''(r) e_k^2 + O(e_k^3). Substitute into the secant recurrence and show that e_{k+1} ≈ M e_k e_{k-1}, where M = f''(r) / (2 f'(r)). Setting |e_{k+1}| ≈ C |e_k|^p and using |e_{k-1}| ≈ |e_k|^{1/p}, we get |e_k|^p ∝ |e_k| * |e_k|^{1/p}. Matching exponents gives p = 1 + 1/p, so p^2 = p + 1, which yields p = (1 + sqrt(5))/2 ≈ 1.618.
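The order can be checked numerically: run the recurrence against a known root and inspect the ratio log|e_{k+1}| / log|e_k|, which should approach p ≈ 1.618. The choice f(x) = x^3 - 2 and starting points 1 and 2 here are illustrative:

```python
import math

def secant_errors(f, root, x0, x1, steps):
    """Record errors |x_k - root| along a secant run."""
    errs = [abs(x0 - root), abs(x1 - root)]
    f0, f1 = f(x0), f(x1)
    for _ in range(steps):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        errs.append(abs(x2 - root))
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return errs

errs = secant_errors(lambda x: x**3 - 2, 2 ** (1 / 3), 1.0, 2.0, steps=7)
# Once the errors are small, log|e_{k+1}| / log|e_k| estimates the order p.
orders = [math.log(errs[k + 1]) / math.log(errs[k]) for k in range(4, 7)]
# The estimates hover near the golden ratio, ~1.6.
```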
Why It Matters
The golden ratio convergence order is a clean example of how recycling past information (two points instead of one) can accelerate convergence without additional computation. Per function evaluation, the secant method is often more efficient than Newton because Newton needs both f(x_k) and f'(x_k) while secant needs only f(x_k).
Failure Mode
The method fails if f(x_k) = f(x_{k-1}) (division by zero in the secant formula). It also has no guaranteed convergence from arbitrary starting points, unlike bisection. If the starting points are far from the root or f'(r) = 0 (a multiple root), convergence degrades or fails entirely.
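The division-by-zero failure is easy to trigger deliberately. In this sketch the symmetric starting points are chosen precisely to force f(x_0) = f(x_1):

```python
f = lambda x: x * x            # double root at 0, so f'(0) = 0
x0, x1 = -1.0, 1.0             # symmetric points: f(x0) == f(x1) == 1
f0, f1 = f(x0), f(x1)
try:
    x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # denominator is exactly zero
except ZeroDivisionError:
    x2 = None                  # the secant step is undefined
```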
Comparison With Other Methods
| Method | Convergence order | Derivatives needed | Evaluations per step | Guaranteed convergence |
|---|---|---|---|---|
| Bisection | 1 (linear) | None | 1 | Yes (if bracket exists) |
| Secant | ~1.618 (superlinear) | None | 1 | No |
| Newton | 2 (quadratic) | f' | 2 (f and f') | No |
Efficiency per evaluation: Newton converges in fewer iterations, but each iteration costs two function evaluations (counting f' as one evaluation). The secant method uses one evaluation per step. The efficiency index (convergence order per evaluation, order^{1/evaluations}) is 2^{1/2} ≈ 1.414 for Newton and 1.618^{1/1} ≈ 1.618 for secant. By this measure, secant is more efficient.
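The evaluation accounting can be made concrete by counting calls. The counters, starting points, and tolerances below are illustrative assumptions; exact counts depend on the function and starting points:

```python
def newton_evals(f, fprime, x, tol=1e-12, max_iter=100):
    """Newton's method, counting evaluations (f and f' each cost one)."""
    evals = 0
    for _ in range(max_iter):
        fx, fpx = f(x), fprime(x)
        evals += 2                     # both f and f' evaluated per step
        step = fx / fpx
        x -= step
        if abs(step) < tol:
            return x, evals
    raise RuntimeError("no convergence")

def secant_evals(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method, counting evaluations (one new f per step)."""
    f0, f1 = f(x0), f(x1)
    evals = 2                          # startup cost
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2, evals
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        evals += 1
    return x2, evals

f = lambda x: x**3 - 2
root_n, n_newton = newton_evals(f, lambda x: 3 * x**2, 1.5)
root_s, n_secant = secant_evals(f, 1.0, 2.0)
# Both converge to 2**(1/3); compare n_newton and n_secant.
```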
Canonical Examples
Solving x^3 - 2 = 0
Starting from x_0 = 1 and x_1 = 2 with f(x) = x^3 - 2:
- x_2 ≈ 1.1429
- x_3 ≈ 1.2097
- x_4 ≈ 1.2650
- x_5 ≈ 1.2597
- x_6 ≈ 1.2599 (true root: 2^{1/3} ≈ 1.259921)
Convergence to 4 decimal places in 5 iterations, with no derivative computation. The error roughly contracts by a factor of p ≈ 1.618 in the exponent each step, matching the superlinear-convergence theorem.
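The worked example can be reproduced with a short script. The starting points x_0 = 1 and x_1 = 2 are an assumption of this sketch:

```python
f = lambda x: x**3 - 2
x0, x1 = 1.0, 2.0              # assumed starting points
f0, f1 = f(x0), f(x1)
iterates = []
for _ in range(5):             # five secant updates: x_2 through x_6
    x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
    iterates.append(x2)
    x0, f0, x1, f1 = x1, f1, x2, f(x2)

# [round(x, 4) for x in iterates] -> [1.1429, 1.2097, 1.265, 1.2597, 1.2599]
# true root: 2 ** (1 / 3) ~ 1.259921
```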
Common Confusions
Secant method is not the same as the method of false position
The method of false position (regula falsi) also uses a secant line, but it maintains a bracket [a, b] with f(a) f(b) < 0, guaranteeing convergence. The secant method always uses the two most recent iterates, which can both be on the same side of the root. Regula falsi sacrifices convergence speed for safety.
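The bracketing variant can be sketched as follows (the function name and tolerance are illustrative). Note how one endpoint of [a, b] is replaced each step so that f(a) and f(b) keep opposite signs:

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """False position: a secant line through the bracket endpoints,
    keeping a sign-changing bracket [a, b] at every step."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign-changing bracket"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant point inside [a, b]
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:      # root lies in [a, c]: replace b
            b, fb = c, fc
        else:                # root lies in [c, b]: replace a
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0)
```

Unlike the plain secant method, this loop can never divide by zero, because fa and fb always have opposite signs.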
Superlinear does not mean faster than Newton in iterations
The secant method takes more iterations than Newton to reach the same accuracy. The advantage is per-function-evaluation efficiency, not per-iteration speed. If f' is cheap (e.g., automatic differentiation is available), Newton is usually preferable.
Exercises
Problem
Derive the secant update formula by replacing f'(x_k) in Newton's method with the finite difference approximation using x_k and x_{k-1}.
Problem
Show that the convergence order p of the secant method satisfies p^2 = p + 1, and explain why this equation arises from the error recurrence e_{k+1} ≈ M e_k e_{k-1}.
References
Canonical:
- Burden & Faires, Numerical Analysis, 10th ed., Cengage (2015), Chapter 2.3
- Stoer & Bulirsch, Introduction to Numerical Analysis, 3rd ed., Springer (2002), Chapter 5
- Ostrowski, Solution of Equations and Systems of Equations, 2nd ed., Academic Press (1966)
- Traub, Iterative Methods for the Solution of Equations, Prentice-Hall (1964)
- Dennis & Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM (1996), Chapter 2
Current:
- Nocedal & Wright, Numerical Optimization, 2nd ed., Springer (2006), Chapter 11
- Kelley, Solving Nonlinear Equations with Newton's Method, SIAM (2003)
Next Topics
- Quasi-Newton methods: generalize the secant idea to multidimensional optimization (BFGS, L-BFGS)
Last reviewed: April 27, 2026
Canonical graph
Required prerequisites
- Newton's Method (layer 1 · tier 1)
Derived topics
- Quasi-Newton Methods (layer 2 · tier 1)