8000 Add Newton's method for finding roots (#166) · cp-algorithms/cp-algorithms@3d36c6f · GitHub
prprprpony authored and tcNickolas committed · 1 parent 42577ab · commit 3d36c6f

File tree: 2 files changed, +89 −0 lines changed

src/index.md (1 addition, 0 deletions)

```diff
@@ -54,6 +54,7 @@ especially popular in field of competitive programming.*

 ### Numerical Methods
 - [Ternary Search](./num_methods/ternary_search.html)
+- [Newton's method for finding roots](./num_methods/roots_newton.html)

 ### Geometry
 - [Length of the union of intervals on a line in O(N\*logN)](./geometry/length-of-segments-union.html)
```

src/num_methods/roots_newton.md (new file, 88 additions, 0 deletions)
<!--?title Newton's method for finding roots -->

# Newton's method for finding roots

This is an iterative method invented by Isaac Newton around 1664. It is sometimes also called the Newton–Raphson method, since Raphson invented the same algorithm a few years after Newton, but his article was published much earlier.

The task is as follows. Given the following equation:

$$f(x) = 0$$

We want to solve the equation, or more precisely, to find one of its roots (it is assumed that a root exists). It is assumed that $f(x)$ is continuous and differentiable on an interval $[a, b]$.

## Algorithm

The input parameters of the algorithm consist of not only the function $f(x)$ but also an initial approximation: some $x_0$ with which the algorithm starts.

Suppose we have already calculated $x_i$. We calculate $x_{i+1}$ as follows: draw the tangent to the graph of the function $f(x)$ at the point $x = x_i$, and find the point of intersection of this tangent with the $x$-axis. $x_{i+1}$ is set equal to the $x$-coordinate of the point found, and the whole process is repeated from the beginning.

It is not difficult to derive the following formula:

$$ x_{i+1} = x_i - \frac{f(x_i)}{f^\prime(x_i)} $$

It is intuitively clear that if the function $f(x)$ is "good" (smooth) and $x_i$ is close enough to the root, then $x_{i+1}$ will be even closer to the desired root.

The rate of convergence is quadratic, which, roughly speaking, means that the number of correct digits in the approximation $x_i$ doubles with each iteration.
## Application for calculating the square root

Let's use the calculation of the square root as an example of Newton's method.

If we take $f(x) = x^2 - n$ (whose positive root is $\sqrt{n}$), the iteration becomes $x_{i+1} = x_i - \frac{x_i^2 - n}{2 x_i}$, which simplifies to:

$$ x_{i+1} = \frac{x_i + \frac{n}{x_i}}{2} $$

The first typical variant of the problem: a rational number $n$ is given, and its root must be calculated with some accuracy `eps`:
```cpp
double sqrt_newton(double n) {
    const double eps = 1E-15;
    double x = 1;
    for (;;) {
        double nx = (x + n / x) / 2;
        if (fabs(x - nx) < eps)  // fabs: plain abs() would truncate to int here
            break;
        x = nx;
    }
    return x;
}
```
Another common variant of the problem is when we need to calculate the integer root (for a given $n$, find the largest $x$ such that $x^2 \le n$). Here it is necessary to slightly change the termination condition of the algorithm, since it may happen that $x$ starts to "jump" near the answer. Therefore, we add a condition: if the value of $x$ decreased in the previous step and tries to increase at the current step, the algorithm must be stopped.
```cpp
int isqrt_newton(int n) {
    if (n == 0)
        return 0;  // guard: the iteration below divides by x
    int x = 1;
    bool decreased = false;
    for (;;) {
        int nx = (x + n / x) >> 1;
        if (x == nx || (nx > x && decreased))
            break;
        decreased = nx < x;
        x = nx;
    }
    return x;
}
```
Finally, the third variant is for the case of bignum arithmetic. Since the number $n$ can be large enough, it makes sense to pay attention to the initial approximation: obviously, the closer it is to the root, the faster the result is reached. It is simple and effective to take as the initial approximation the number $2^{\textrm{bits}/2}$, where $\textrm{bits}$ is the number of bits in the number $n$. Here is the Java code that demonstrates this variant:
```java
public static BigInteger isqrtNewton(BigInteger n) {
    if (n.signum() == 0)
        return BigInteger.ZERO; // guard: the loop below divides by a
    BigInteger a = BigInteger.ONE.shiftLeft(n.bitLength() / 2);
    boolean p_dec = false;
    for (;;) {
        BigInteger b = n.divide(a).add(a).shiftRight(1);
        if (a.compareTo(b) == 0 || (a.compareTo(b) < 0 && p_dec))
            break;
        p_dec = a.compareTo(b) > 0;
        a = b;
    }
    return a;
}
```
For example, this code executes in $60$ milliseconds for $n = 10^{1000}$, and if we remove the improved selection of the initial approximation (and just start with $1$), it executes in about $120$ milliseconds.

## Practice Problems

- [UVa 10428 - The Roots](https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=16&page=show_problem&problem=1369)
