fixed issue #514 (#524) · cp-algorithms/cp-algorithms@2657e19 · GitHub

Commit 2657e19

fixed issue #514 (#524)
* Fixed typo in stack modification
* changed [l;r] to [l,r] in required places.
* Fixed some more range issues.
* added missing comma
1 parent fd86d95 commit 2657e19

10 files changed: +45 -45 lines changed


src/algebra/polynomial.md

Lines changed: 3 additions & 3 deletions
@@ -172,9 +172,9 @@ It gives us an $O(n \log n)$ algorithm when you need to compute values in powers
 ### Multi-point Evaluation
 Assume you need to calculate $A(x_1), \dots, A(x_n)$. As mentioned earlier, $A(x) \equiv A(x_i) \pmod{x-x_i}$. Thus you may do the following:
 
-1. Compute a segment tree such that in the segment $[l;r)$ stands the product $P_{l, r}(x) = (x-x_l)(x-x_{l+1})\dots(x-x_{r-1})$.
-2. Starting with $l=1$ and $r=n$ at the root node. Let $m=\lfloor(l+r)/2\rfloor$. Let's move down to $[l;m)$ with the polynomial $A(x) \pmod{P_{l,m}(x)}$.
-3. This will recursively compute $A(x_l), \dots, A(x_{m-1})$, now do the same for $[m;r)$ with $A(x) \pmod{P_{m,r}(x)}$.
+1. Compute a segment tree such that in the segment $[l,r)$ stands the product $P_{l, r}(x) = (x-x_l)(x-x_{l+1})\dots(x-x_{r-1})$.
+2. Starting with $l=1$ and $r=n$ at the root node. Let $m=\lfloor(l+r)/2\rfloor$. Let's move down to $[l,m)$ with the polynomial $A(x) \pmod{P_{l,m}(x)}$.
+3. This will recursively compute $A(x_l), \dots, A(x_{m-1})$, now do the same for $[m,r)$ with $A(x) \pmod{P_{m,r}(x)}$.
 4. Concatenate the results from the first and second recursive call and return them.
 
 The whole procedure will run in $O(n \log^2 n)$.
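Not part of this commit, and only a sketch: the four steps above might look like the following, using naive $O(n^2)$ polynomial multiplication and remainder for readability (the $O(n \log^2 n)$ bound quoted above additionally needs FFT-based multiplication and fast polynomial division). All names here are illustrative.

```cpp
#include <vector>
using namespace std;

// Coefficients are stored lowest degree first; the divisors used here are always monic.
vector<double> poly_mul(const vector<double>& a, const vector<double>& b) {
    vector<double> c(a.size() + b.size() - 1, 0.0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size(); j++)
            c[i + j] += a[i] * b[j];
    return c;
}

// Remainder of a(x) modulo a monic polynomial b(x), by naive long division.
vector<double> poly_mod(vector<double> a, const vector<double>& b) {
    int m = b.size();
    for (int i = (int)a.size() - m; i >= 0; i--) {
        double coef = a[i + m - 1];          // b is monic, so this is the leading quotient term
        for (int j = 0; j < m; j++)
            a[i + j] -= coef * b[j];
    }
    if ((int)a.size() > m - 1) a.resize(m - 1);
    return a;
}

// tree[v] stores P_{l,r}(x) = (x - x_l)...(x - x_{r-1}) for the segment [l,r).
void build(int v, int l, int r, const vector<double>& xs, vector<vector<double>>& tree) {
    if (r - l == 1) { tree[v] = {-xs[l], 1.0}; return; }
    int m = (l + r) / 2;
    build(2 * v, l, m, xs, tree);
    build(2 * v + 1, m, r, xs, tree);
    tree[v] = poly_mul(tree[2 * v], tree[2 * v + 1]);
}

// Reduce A modulo P_{l,r}, recurse into both halves, and write A(x_i) into res[i].
void evaluate(int v, int l, int r, vector<double> a,
              const vector<vector<double>>& tree, vector<double>& res) {
    a = poly_mod(a, tree[v]);
    if (r - l == 1) { res[l] = a.empty() ? 0.0 : a[0]; return; }
    int m = (l + r) / 2;
    evaluate(2 * v, l, m, a, tree, res);
    evaluate(2 * v + 1, m, r, a, tree, res);
}

// Evaluate A at all points xs[0..n-1].
vector<double> multipoint(const vector<double>& A, const vector<double>& xs) {
    int n = xs.size();
    vector<vector<double>> tree(4 * n);
    vector<double> res(n);
    build(1, 0, n, xs, tree);
    evaluate(1, 0, n, A, tree, res);
    return res;
}
```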

src/data_structures/disjoint_set_union.md

Lines changed: 2 additions & 2 deletions
@@ -275,7 +275,7 @@ With DSU you can find the end point, to which we get after following all edges f
 
 A good example of this application is the **problem of painting subarrays**.
 We have a segment of length $L$, each element initially has the color 0.
-We have to repaint the subarray $[l; r]$ with the color $c$ for each query $(l, r, c)$.
+We have to repaint the subarray $[l, r]$ with the color $c$ for each query $(l, r, c)$.
 At the end we want to find the final color of each cell.
 We assume that we know all the queries in advance, i.e. the task is offline.
 
@@ -284,7 +284,7 @@ Thus initially each cell points to itself.
 After painting one requested repaint of a segment, all cells from that segment will point to the cell after the segment.
 
 Now to solve this problem, we consider the queries **in the reverse order**: from last to first.
-This way when we execute a query, we only have to paint exactly the unpainted cells in the subarray $[l; r]$.
+This way when we execute a query, we only have to paint exactly the unpainted cells in the subarray $[l, r]$.
 All other cells already contain their final color.
 To quickly iterate over all unpainted cells, we use the DSU.
 We find the left-most unpainted cell inside of a segment, repaint it, and with the pointer we move to the next empty cell to the right.
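Not part of the diff — a minimal sketch of the reverse-order processing just described, with hypothetical names: `parent[i]` points to the left-most cell $\ge i$ that is still unpainted, and path compression keeps each walk short.

```cpp
#include <array>
#include <vector>
using namespace std;

int find_next(vector<int>& parent, int x) {
    // left-most unpainted cell with index >= x (path compression)
    if (parent[x] == x) return x;
    return parent[x] = find_next(parent, parent[x]);
}

// queries are (l, r, c); answer[i] is the final color of cell i (0 if never painted)
vector<int> paint_offline(int L, const vector<array<int, 3>>& queries) {
    vector<int> answer(L, 0), parent(L + 1);
    for (int i = 0; i <= L; i++) parent[i] = i;           // cell L is a sentinel "after the end"
    for (int q = (int)queries.size() - 1; q >= 0; q--) {  // process queries in reverse order
        auto [l, r, c] = queries[q];
        for (int cell = find_next(parent, l); cell <= r; cell = find_next(parent, cell)) {
            answer[cell] = c;            // this cell receives its final color now
            parent[cell] = cell + 1;     // skip it for all earlier (processed later) queries
        }
    }
    return answer;
}
```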

src/data_structures/fenwick.md

Lines changed: 17 additions & 17 deletions
@@ -6,7 +6,7 @@ Let, $f$ be some _reversible_ function and $A$ be an array of integers of length
 
 Fenwick tree is a data structure which:
 
-* calculates the value of function $f$ in the given range $[l; r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log n)$ time;
+* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log n)$ time;
 * updates the value of an element of $A$ in $O(\log n)$ time;
 * requires $O(N)$ memory, or in other words, exactly the same memory required for $A$;
 * is easy to use and code, especially, in the case of multidimensional arrays.
@@ -24,7 +24,7 @@ Fenwick tree was first described in a paper titled "A new data structure for cum
 For the sake of simplicity, we will assume that function $f$ is just a *sum function*.
 
 Given an array of integers $A[0 \dots N-1]$.
-A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i); i]$:
+A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i), i]$:
 $$T_i = \sum_{j = g(i)}^{i}{A_j},$$
 where $g$ is some function that satisfies $0 \le g(i) \le i$.
 We will define the function in the next few paragraphs.
@@ -37,7 +37,7 @@ Many people will actually use a version of the Fenwick tree that uses one-based
 Therefore you will also find an alternative implementation using one-based indexing in the implementation section.
 Both versions are equivalent in terms of time and memory complexity.
 
-Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0; r]$ and update (increase) some element $A_i$:
+Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
 
 ```python
 def sum(int r):
@@ -54,19 +54,19 @@ def increase(int i, int delta):
 
 The function `sum` works as follows:
 
-1. first, it adds the sum of the range $[g(r); r]$ (i.e. $T[r]$) to the `result`
-2. then, it "jumps" to the range $[g(g(r)-1); g(r)-1]$, and adds this range's sum to the `result`
-3. and so on, until it "jumps" from $[0; g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1); -1]$; that is where the `sum` function stops jumping.
+1. first, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`
+2. then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$, and adds this range's sum to the `result`
+3. and so on, until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; that is where the `sum` function stops jumping.
 
 The function `increase` works with the same analogy, but "jumps" in the direction of increasing indices:
 
-1. sums of the ranges $[g(j); j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that corresponds to ranges in with $A_i$ lies.
+1. sums of the ranges $[g(j), j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that corresponds to ranges in with $A_i$ lies.
 
 It is obvious that the complexity of both `sum` and `increase` depend on the function $g$.
 There are lots of ways to choose the function $g$, as long as $0 \le g(i) \le i$ for all $i$.
 For instance the function $g(i) = i$ works, which results just in $T = A$, and therefore summation queries are slow.
 We can also take the function $g(i) = 0$.
-This will correspond to prefix sum arrays, which means that finding the sum of the range $[0; i]$ will only take constant time, but updates are slow.
+This will correspond to prefix sum arrays, which means that finding the sum of the range $[0, i]$ will only take constant time, but updates are slow.
 The clever part of the Fenwick algorithm is, that there it uses a special definition of the function $g$ that can handle both operations in $O(\log N)$ time.
 
 ### Definition of $g(i)$
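Jumping ahead of that definition for a moment (not part of this commit): with the article's choice of $g$ — drop all trailing $1$ bits of $i$, i.e. `i & (i + 1)` in zero-based indexing, with the upward counterpart `i | (i + 1)` — the pseudo-code above becomes, as a C++ sketch:

```cpp
#include <vector>
using namespace std;

struct Fenwick {
    vector<long long> t;                        // t[i] stores the sum of A over [g(i), i]
    Fenwick(int n) : t(n, 0) {}

    long long sum(int r) {                      // sum of A[0..r]
        long long result = 0;
        for (; r >= 0; r = (r & (r + 1)) - 1)   // the next range to add ends at g(r) - 1
            result += t[r];
        return result;
    }

    void increase(int i, long long delta) {     // A[i] += delta
        for (; i < (int)t.size(); i = i | (i + 1))  // visit every j with g(j) <= i <= j
            t[i] += delta;
    }
};
```

Here `sum(r)` answers prefix queries $[0, r]$; as the article notes further down, a range query $[l, r]$ is then `sum(r) - sum(l - 1)`.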
@@ -121,7 +121,7 @@ The nodes of the tree show the ranges they cover.
 
 Here we present an implementation of the Fenwick tree for sum queries and single updates.
 
-The normal Fenwick tree can only answer sum queries of the type $[0; r]$ using `sum(int r)`, however we can also answer other queries of the type $[l; r]$ by computing two sums $[0; r]$ and $[0; l-1]$ and subtract them.
+The normal Fenwick tree can only answer sum queries of the type $[0, r]$ using `sum(int r)`, however we can also answer other queries of the type $[l, r]$ by computing two sums $[0, r]$ and $[0, l-1]$ and subtract them.
 This is handled in the `sum(int l, int r)` method.
 
 Also this implementation supports two constructors.
@@ -160,9 +160,9 @@ struct FenwickTree {
 };
 ```
 
-### Finding minimum of $[0; r]$ in one-dimensional array
+### Finding minimum of $[0, r]$ in one-dimensional array
 
-It is obvious that there is no easy way of finding minimum of range $[l; r]$ using Fenwick tree, as Fenwick tree can only answer queries of type $[0; r]$.
+It is obvious that there is no easy way of finding minimum of range $[l, r]$ using Fenwick tree, as Fenwick tree can only answer queries of type $[0, r]$.
 Additionally, each time a value is `update`'d, the new value has to be smaller than the current value (because the $min$ function is not reversible).
 These, of course, are significant limitations.
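Not from the diff — a minimal sketch of the restricted minimum variant this hunk describes: `getmin(r)` answers $[0, r]$ with the same downward jumps as `sum`, and `update(i, val)` is only valid when `val` is not larger than the current $A[i]$.

```cpp
#include <vector>
#include <algorithm>
#include <climits>
using namespace std;

struct FenwickMin {
    vector<int> t;
    FenwickMin(int n) : t(n, INT_MAX) {}

    int getmin(int r) {                        // minimum of A[0..r]
        int res = INT_MAX;
        for (; r >= 0; r = (r & (r + 1)) - 1)
            res = min(res, t[r]);
        return res;
    }

    void update(int i, int val) {              // requires val <= current A[i]
        for (; i < (int)t.size(); i = i | (i + 1))
            t[i] = min(t[i], val);
    }
};
```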

@@ -317,13 +317,13 @@ This is just the ordinary Fenwick tree as explained above.
 Using simple tricks we can also do the reverse operations: increasing ranges and querying for single values.
 
 Let the Fenwick tree be initialized with zeros.
-Suppose that we want to increment the interval $[l; r]$ by $x$.
+Suppose that we want to increment the interval $[l, r]$ by $x$.
 We make two point update operations on Fenwick tree which are `add(l, x)` and `add(r+1, -x)`.
 
 If we want to get the value of $A[i]$, we just need to take the prefix sum using the ordinary range sum method.
 To see why this is true, we can just focus on the previous increment operation again.
 If $i < l$, then the two update operations have no effect on the query and we get the sum $0$.
-If $i \in [l; r]$, then we get the answer $x$ because of the first update operation.
+If $i \in [l, r]$, then we get the answer $x$ because of the first update operation.
 And if $i > r$, then the second update operation will cancel the effect of first one.
 
 The following implementation uses one-based indexing.
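As an illustration of the trick in this hunk (my own sketch, zero-based, reusing the `Fenwick` structure from the earlier example with `increase` playing the role of `add`; the article's own implementation that follows is one-based): a range update becomes two point updates on a zero-initialized tree, and a point query is a prefix sum.

```cpp
// Assumes a Fenwick tree `bit` over an auxiliary zero-initialized array,
// with point update increase(i, x) and prefix query sum(i) as sketched earlier.
void range_add(Fenwick& bit, int l, int r, long long x) {
    bit.increase(l, x);                        // every prefix sum ending at i >= l gains x
    if (r + 1 < (int)bit.t.size())
        bit.increase(r + 1, -x);               // ...and positions i > r lose it again
}

long long point_query(Fenwick& bit, int i) {
    return bit.sum(i);                         // A[i] is the prefix sum of the difference array
}
```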
@@ -353,7 +353,7 @@ Note: of course it is also possible to increase a single point $A[i]$ with `rang
 
 To support both range updates and range queries we will use two BITs namely $B_1[]$ and $B_2[]$, initialized with zeros.
 
-Suppose that we want to increment the interval $[l; r]$ by the value $x$.
+Suppose that we want to increment the interval $[l, r]$ by the value $x$.
 Similarly as in the previous method, we perform two point updates on $B_1$: `add(B1, l, x)` and `add(B1, r+1, -x)`.
 And we also update $B_2$. The details will be explained later.
@@ -366,7 +366,7 @@ def range_add(l, r, x):
 ```
 After the range update $(l, r, x)$ the range sum query should return the following values:
 $$
-sum[0; i]=
+sum[0, i]=
 \begin{cases}
 0 & i < l \\\\
 x \cdot (i-(l-1)) & l \le i \le r \\\\
@@ -375,9 +375,9 @@ x \cdot (r-l+1) & i > r \\\\
 $$
 
 We can write the range sum as difference of two terms, where we use $B_1$ for first term and $B_2$ for second term.
-The difference of the queries will give us prefix sum over $[0; i]$.
+The difference of the queries will give us prefix sum over $[0, i]$.
 $$\begin{align}
-sum[0; i] &= sum(B_1, i) \cdot i - sum(B_2, i) \\\\
+sum[0, i] &= sum(B_1, i) \cdot i - sum(B_2, i) \\\\
 &= \begin{cases}
 0 \cdot i - 0 & i < l\\\\
 x \cdot i - x \cdot (l-1) & l \le i \le r \\\\
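A compact sketch of the two-BIT scheme from these last hunks (illustration only, zero-based, reusing the `Fenwick` sketch from above): $B_2$ absorbs the correction terms $x \cdot (l-1)$ and $x \cdot r$, so that `sum(B1, i) * i - sum(B2, i)` reproduces the case analysis in the formula.

```cpp
// Illustrative zero-based sketch (names are my own): two Fenwick trees B1, B2
// as in the formulas above; increase() is the point update, sum() the prefix query.
struct RangeFenwick {
    Fenwick B1, B2;
    RangeFenwick(int n) : B1(n), B2(n) {}

    void range_add(int l, int r, long long x) {
        B1.increase(l, x);
        B2.increase(l, x * (l - 1));
        if (r + 1 < (int)B1.t.size()) {
            B1.increase(r + 1, -x);
            B2.increase(r + 1, -x * r);
        }
    }

    long long prefix_sum(int i) {          // sum of A[0..i]
        return B1.sum(i) * i - B2.sum(i);
    }

    long long range_sum(int l, int r) {    // sum of A[l..r]
        return prefix_sum(r) - (l ? prefix_sum(l - 1) : 0);
    }
};
```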

src/data_structures/segment_tree.md

Lines changed: 2 additions & 2 deletions
@@ -227,9 +227,9 @@ The memory consumption is limited by $4n$, even though a Segment Tree of an arra
 However it can be reduced.
 We renumber the vertices of the tree in the order of an Euler tour traversal (pre-order traversal), and we write all these vertices next to each other.
 
-Lets look at a vertex at index $v$, and let him be responsible for the segment $[l; r]$, and let $mid = \dfrac{l + r}{2}$.
+Lets look at a vertex at index $v$, and let him be responsible for the segment $[l, r]$, and let $mid = \dfrac{l + r}{2}$.
 It is obvious that the left child will have the index $v + 1$.
-The left child is responsible for the segment $[l; mid]$, i.e. in total there will be $2 * (mid - l + 1) - 1$ vertices in the left child's subtree.
+The left child is responsible for the segment $[l, mid]$, i.e. in total there will be $2 * (mid - l + 1) - 1$ vertices in the left child's subtree.
 Thus we can compute the index of the right child of $v$. The index will be $v + 2 * (mid - l + 1)$.
 By this numbering we achieve a reduction of the necessary memory to $2n$.
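To illustrate the index arithmetic described in this hunk (a sketch, not from the commit): a sum query on such a pre-order-numbered tree of size $2n$ recomputes the right child's index from the segment borders.

```cpp
#include <vector>
using namespace std;

// t has size 2n; vertex v covers [l, r] (inclusive), children are laid out in pre-order.
long long query(const vector<long long>& t, int v, int l, int r, int ql, int qr) {
    if (ql > r || qr < l) return 0;                // no overlap with the query
    if (ql <= l && r <= qr) return t[v];           // segment fully covered
    int mid = (l + r) / 2;
    int left = v + 1;                              // left child directly follows v
    int right = v + 2 * (mid - l + 1);             // skip the left subtree's 2*(mid-l+1)-1 vertices
    return query(t, left, l, mid, ql, qr) + query(t, right, mid + 1, r, ql, qr);
}
```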

src/data_structures/sqrt_decomposition.md

Lines changed: 6 additions & 6 deletions
@@ -27,17 +27,17 @@ Thus, for each block $k$, we know the sum of elements on it $b[k]$:
 
 $$ b[k] = \sum\limits_{i=k\cdot s}^{\min {(n-1,(k+1)\cdot s - 1})} a[i] $$
 
-So, we have calculated the values of $b[k]$ (this required $O(n)$ operations). How can they help us to answer each query $[l; r]$ ?
-Notice that if the interval $[l; r]$ is long enough, it will contain several whole blocks, and for those blocks we can find the sum of elements in them in a single operation. As a result, the interval $[l; r]$ will contain parts of only two blocks, and we'll have to calculate the sum of elements in these parts trivially.
+So, we have calculated the values of $b[k]$ (this required $O(n)$ operations). How can they help us to answer each query $[l, r]$ ?
+Notice that if the interval $[l, r]$ is long enough, it will contain several whole blocks, and for those blocks we can find the sum of elements in them in a single operation. As a result, the interval $[l, r]$ will contain parts of only two blocks, and we'll have to calculate the sum of elements in these parts trivially.
 
-Thus, in order to calculate the sum of elements on the interval $[l; r]$ we only need to sum the elements of the two "tails":
+Thus, in order to calculate the sum of elements on the interval $[l, r]$ we only need to sum the elements of the two "tails":
 $[l\dots (k + 1)\cdot s-1]$ and $[p\cdot s\dots r]$ , and sum the values $b[i]$ in all the blocks from $k + 1$ to $p-1$:
 
 $$ \sum\limits\_{i=l}^r a[i] = \sum\limits\_{i=l}^{(k+1) \cdot s-1} a[i] + \sum\limits\_{i=k+1}^{p-1} b[i] + \sum\limits\_{i=p\cdot s}^r a[i] $$
 
 _Note: When $k = p$, i.e. $l$ and $r$ belong to the same block, the formula can't be applied, and the sum should be calculated trivially._
 
-This approach allows us to significantly reduce the number of operations. Indeed, the size of each "tail" does not exceed the block length $s$, and the number of blocks in the sum does not exceed $s$. Since we have chosen $s \approx \sqrt n$, the total number of operations required to find the sum of elements on the interval $[l; r]$ is $O(\sqrt n)$.
+This approach allows us to significantly reduce the number of operations. Indeed, the size of each "tail" does not exceed the block length $s$, and the number of blocks in the sum does not exceed $s$. Since we have chosen $s \approx \sqrt n$, the total number of operations required to find the sum of elements on the interval $[l, r]$ is $O(\sqrt n)$.
 
 ### Implementation
 
@@ -61,7 +61,7 @@ for (;;) {
 int sum = 0;
 for (int i=l; i<=r; )
 if (i % len == 0 && i + len - 1 <= r) {
-// if the whole block starting at i belongs to [l; r]
+// if the whole block starting at i belongs to [l, r]
 sum += b[i / len];
 i += len;
 }
@@ -102,7 +102,7 @@ Sqrt decomposition can be applied in a similar way to a whole class of other pro
 
 Another class of problems appears when we need to **update array elements on intervals**: increment existing elements or replace them with a given value.
 
-For example, let's say we can do two types of operations on an array: add a given value $\delta$ to all array elements on interval $[l; r]$ or query the value of element $a[i]$. Let's store the value which has to be added to all elements of block $k$ in $b[k]$ (initially all $b[k] = 0$). During each "add" operation we need to add $\delta$ to $b[k]$ for all blocks which belong to interval $[l; r]$ and to add $\delta$ to $a[i]$ for all elements which belong to the "tails" of the interval. The answer a query $i$ is simply $a[i] + b[i/s]$. This way "add" operation has $O(\sqrt{n})$ complexity, and answering a query has $O(1)$ complexity.
+For example, let's say we can do two types of operations on an array: add a given value $\delta$ to all array elements on interval $[l, r]$ or query the value of element $a[i]$. Let's store the value which has to be added to all elements of block $k$ in $b[k]$ (initially all $b[k] = 0$). During each "add" operation we need to add $\delta$ to $b[k]$ for all blocks which belong to interval $[l, r]$ and to add $\delta$ to $a[i]$ for all elements which belong to the "tails" of the interval. The answer a query $i$ is simply $a[i] + b[i/s]$. This way "add" operation has $O(\sqrt{n})$ complexity, and answering a query has $O(1)$ complexity.
 
 Finally, those two classes of problems can be combined if the task requires doing **both** element updates on an interval and queries on an interval. Both operations can be done with $O(\sqrt{n})$ complexity. This will require two block arrays $b$ and $c$: one to keep track of element updates and another to keep track of answers to the query.
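A short sketch of the range-add / point-query scheme described in this hunk (illustration only, not the repository's code; `a`, `b` and `len` play the same roles as in the article's snippets):

```cpp
#include <vector>
using namespace std;

// a: the array, b: per-block pending additions, len ~ sqrt(n) is the block length
void range_add(vector<long long>& a, vector<long long>& b, int len,
               int l, int r, long long delta) {
    for (int i = l; i <= r; ) {
        if (i % len == 0 && i + len - 1 <= r) {
            b[i / len] += delta;       // a whole block inside [l, r]: update it lazily
            i += len;
        } else {
            a[i] += delta;             // "tail" element: update it directly
            i++;
        }
    }
}

long long point_query(const vector<long long>& a, const vector<long long>& b,
                      int len, int i) {
    return a[i] + b[i / len];          // element value plus its block's pending addition
}
```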

src/geometry/convex_hull_trick.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ int get(ftype x) {
 
 Assume you're given a set of functions such that each two can intersect at most once. Let's keep in each vertex of a segment tree some function in such way, that if we go from root to the leaf it will be guaranteed that one of the functions we met on the path will be the one giving the minimum value in that leaf. Let's see how to construct it.
 
-Assume we're in some vertex corresponding to half-segment $[l;r)$ and the function $f_{old}$ is kept there and we add the function $f_{new}$. Then the intersection point will be either in $[l;m)$ or in $[m;r)$ where $m=\left\lfloor\tfrac{l+r}{2}\right\rfloor$. We can efficiently find that out by comparing the values of the functions in points $l$ and $m$. If the dominating function changes, then it is in $[l;m)$ otherwise it is in $[m;r)$. Now for the half of the segment with no intersection we will pick the lower function and write it in the current vertex. You can see that it will always be the one which is lower in point $m$. After that we recursively go to the other half of the segment with the function which was the upper one. As you can see this will keep correctness on the first half of segment and in the other one correctness will be maintained during the recursive call. Thus we can add functions and check the minimum value in the point in $O(\log [C\varepsilon^{-1}])$.
+Assume we're in some vertex corresponding to half-segment $[l,r)$ and the function $f_{old}$ is kept there and we add the function $f_{new}$. Then the intersection point will be either in $[l;m)$ or in $[m;r)$ where $m=\left\lfloor\tfrac{l+r}{2}\right\rfloor$. We can efficiently find that out by comparing the values of the functions in points $l$ and $m$. If the dominating function changes, then it is in $[l;m)$ otherwise it is in $[m;r)$. Now for the half of the segment with no intersection we will pick the lower function and write it in the current vertex. You can see that it will always be the one which is lower in point $m$. After that we recursively go to the other half of the segment with the function which was the upper one. As you can see this will keep correctness on the first half of segment and in the other one correctness will be maintained during the recursive call. Thus we can add functions and check the minimum value in the point in $O(\log [C\varepsilon^{-1}])$.
 
 Here is the illustration of what is going on in the vertex when we add new function:
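Alongside that illustration (an image in the article), here is a minimal code sketch of the same insertion step over integer query points, in the style the article later implements as a Li Chao tree; it is my own illustration and all names are assumptions. Each vertex keeps the line that is lower at the midpoint, and the other line is pushed into the half where the crossing can lie.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

typedef long long ll;
const ll INF = 4e18;

struct Line {
    ll k = 0, b = INF;                        // the default "line" acts as +infinity
    ll eval(ll x) const { return k * x + b; }
};

const int MAXN = 1 << 17;                      // number of integer query points
vector<Line> t(4 * MAXN);

void add_line(Line nw, int v = 1, int l = 0, int r = MAXN) {   // half-segment [l, r)
    int m = (l + r) / 2;
    bool left_lower = nw.eval(l) < t[v].eval(l);
    bool mid_lower  = nw.eval(m) < t[v].eval(m);
    if (mid_lower) swap(t[v], nw);             // keep the line that wins at m in this vertex
    if (r - l == 1) return;                    // leaf: nothing left to push down
    if (left_lower != mid_lower)               // the crossing lies in [l, m)
        add_line(nw, 2 * v, l, m);
    else                                       // otherwise it can only be in [m, r)
        add_line(nw, 2 * v + 1, m, r);
}

ll get_min(int x, int v = 1, int l = 0, int r = MAXN) {
    int m = (l + r) / 2;
    ll res = t[v].eval(x);
    if (r - l == 1) return res;
    if (x < m) return min(res, get_min(x, 2 * v, l, m));
    return min(res, get_min(x, 2 * v + 1, m, r));
}
```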

src/graph/hld.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Let there be a tree $G$ of $n$ vertices, with an arbitrary root.
 
 The essence of this tree decomposition is to **split the tree into several paths** so that we can reach the root vertex from any $v$ by traversing at most $\log n$ paths. In addition, none of these paths should intersect with another.
 
-It is clear that if we find such a decomposition for any tree, it will allow us to reduce certain single queries of the form *“calculate something on the path from $a$ to $b$”* to several queries of the type *”calculate something on the segment $[l;r]$ of the $k^{th}$ path”*.
+It is clear that if we find such a decomposition for any tree, it will allow us to reduce certain single queries of the form *“calculate something on the path from $a$ to $b$”* to several queries of the type *”calculate something on the segment $[l, r]$ of the $k^{th}$ path”*.
 
 
 ### Construction algorithm
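Before the construction details (which this diff does not touch), a sketch of the query reduction mentioned in the changed line above — my own illustration, not part of the commit. It assumes the decomposition is already built: `head_[v]` is the top of the heavy path through $v$, `pos_[v]` is the position of $v$ in an array laid out along the paths, and `parent_`/`depth_` come from rooting the tree. A path query then splits into at most $O(\log n)$ contiguous segments.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Hypothetical arrays filled by the HLD construction (not shown here).
vector<int> parent_, depth_, head_, pos_;
vector<long long> values;                    // values[pos_[v]] = value stored at vertex v

long long segment_query(int l, int r) {      // stand-in for a segment tree query on [l, r]
    long long s = 0;
    for (int i = l; i <= r; i++) s += values[i];
    return s;
}

// Sum over the vertices of the path from a to b, split into O(log n) segments.
long long path_query(int a, int b) {
    long long res = 0;
    while (head_[a] != head_[b]) {                        // a and b lie on different heavy paths
        if (depth_[head_[a]] < depth_[head_[b]]) swap(a, b);
        res += segment_query(pos_[head_[a]], pos_[a]);    // whole piece of a's path: head..a
        a = parent_[head_[a]];                            // jump to the path above
    }
    if (depth_[a] > depth_[b]) swap(a, b);                // same path now; one last segment
    res += segment_query(pos_[a], pos_[b]);
    return res;
}
```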
