Fix typos (#389) · cp-algorithms/cp-algorithms@41650bc · GitHub

Commit 41650bc

wikku authored and jakobkogler committed
Fix typos (#389)
git grep -P '(?!.*long long)\b(\w+)\s+\1\b'
1 parent 5647830 commit 41650bc

19 files changed: +23 -23 lines changed

src/algebra/fft.md

Lines changed: 3 additions & 3 deletions
Original file line number | Diff line number | Diff line change
@@ -346,7 +346,7 @@ In the previous implementation we iterated all bits of the index and created the
346346
However we can reverse the bits in a different way.
347347
348348
Suppose that $j$ already contains the reverse of $i$.
349-
Then by to go to $i + 1$, we have to increment $i$, and we also also have to increment $j$, but in a "reversed" number system.
349+
Then by to go to $i + 1$, we have to increment $i$, and we also have to increment $j$, but in a "reversed" number system.
350350
Adding one in the conventional binary system is equivalent to flip all tailing ones into zeros and flipping the zero right before them into a one.
351351
Equivalently in the "reversed" number system, we flip all leading ones, and the also the next zero.
352352
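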
@@ -398,7 +398,7 @@ Also we can precompute all roots of unity and their powers.
398398
## Number theoretic transform
399399

400400
Now we switch the objective a little bit.
401-
We still want to to multiply two polynomials in $O(n \log n)$ time, but this time we want to compute the coefficients modulo some prime number $p$.
401+
We still want to multiply two polynomials in $O(n \log n)$ time, but this time we want to compute the coefficients modulo some prime number $p$.
402402
Of course for this task we can use the normal DFT and apply the modulo operator to the result.
403403
However, doing so might lead to rounding errors, especially when dealing with large numbers.
404404
The **number theoretic transform (NTT)** has the advantage, that it only works with integer, and therefore the result are guaranteed to be correct.
@@ -423,7 +423,7 @@ $$\begin{align}
423423
Thus if $w_n$ is a $n$-th root of unity, then $w_n^2$ is a $\frac{n}{2}$-th root of unity.
424424
And consequently for all smaller powers of two there exist roots of the required degree, and they can be computed using $w_n$.
425425

426-
For computing the inverse DFT, we need need the inverse $w_n^{-1}$ of $w_n$.
426+
For computing the inverse DFT, we need the inverse $w_n^{-1}$ of $w_n$.
427427
But for a prime modulus the inverse always exists.
428428

429429
Thus all the properties that we need from the complex roots are also available in modular arithmetic, provided that we have a large enough module $p$ for which a $n$-th root of unity exists.
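The first hunk in this file describes the "reversed increment" used to bring the indices into bit-reversed order for the iterative FFT. As a minimal sketch of that idea (the function and variable names below are chosen for illustration and are not taken from the diff):

```cpp
#include <utility>
#include <vector>

// Reorder a (whose size n is assumed to be a power of two) into
// bit-reversed index order, maintaining j as the bit-reversal of i.
void bit_reverse_permute(std::vector<int>& a) {
    int n = a.size();
    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;          // flip the leading ones of the "reversed" counter
        j ^= bit;              // ... and then the next zero
        if (i < j)
            std::swap(a[i], a[j]);
    }
}
```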

src/combinatorics/bracket_sequences.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ Otherwise it is.
3030
If there are several bracket types involved, then the algorithm needs to be changes.
3131
Instead of a counter $\text{depth}$ we create a stack, in which we will store all opening brackets that we meet.
3232
If the current bracket character is an opening one, we put it onto the stack.
33-
If is is a closing one, then we check if the stack is non-empty, and if the top element of the stack is of the same type as the current closing bracket.
33+
If it is a closing one, then we check if the stack is non-empty, and if the top element of the stack is of the same type as the current closing bracket.
3434
If both conditions are fulfilled, then we remove the opening bracket from the stack.
3535
If at any time one of the conditions is not fulfilled, or at the end the stack is not empty, then the string is not balanced.
3636
Otherwise it is.
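As a small illustration of the stack-based check this hunk describes, here is a sketch for the three common bracket types (the function name is ours):

```cpp
#include <stack>
#include <string>

// Returns true if s is balanced over the bracket types (), [], {}.
bool is_balanced(const std::string& s) {
    std::stack<char> st;
    for (char c : s) {
        if (c == '(' || c == '[' || c == '{') {
            st.push(c);                       // opening bracket: push it
        } else {
            if (st.empty())
                return false;                 // nothing to match against
            char open = st.top();
            bool match = (c == ')' && open == '(') ||
                         (c == ']' && open == '[') ||
                         (c == '}' && open == '{');
            if (!match)
                return false;                 // wrong bracket type on top
            st.pop();
        }
    }
    return st.empty();                        // leftover openers mean unbalanced
}
```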

src/combinatorics/burnside.md

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@
44
## Burnside's lemma
55

66
**Burnside's lemma** was formulated and proven by **Burnside** in 1897, but historically it was already discovered in 1887 by **Frobenius**, and even earlier in 1845 by **Cauchy**.
7-
Therefore is is also sometimes named the **Cauchy-Frobenius lemma**.
7+
Therefore it is also sometimes named the **Cauchy-Frobenius lemma**.
88

99
Burnside's lemma allows us to count the number of equivalence classes in sets, based on internal symmetry.
1010

@@ -79,7 +79,7 @@ The proof was published by Kenneth P. Bogart in 1991.
7979
We need to prove the following statement:
8080
$$|\text{Classes}| \cdot |G| = \sum_{\pi \in G} I(\pi)$$
8181

82-
The value on the right side is nothing more the the number of "invariant pairs" $(f, \pi)$, i.e. pairs such that $f \pi \equiv f$.
82+
The value on the right side is nothing more than the number of "invariant pairs" $(f, \pi)$, i.e. pairs such that $f \pi \equiv f$.
8383
It is obvious that we can change the order of summation.
8484
We let the sum iterate over all elements $f$ and sum over the values $J(f)$ - the number of permutations for which $f$ is a fixed point.
8585
$$|\text{Classes}| \cdot |G| = \sum_{f} J(f)$$
@@ -147,7 +147,7 @@ $$\begin{align}
147147
\pi_{n-1} &= n 1 2 3\dots\end{align}$$
148148

149149
Let us find an explicit formula for calculating $C(\pi_i)$.
150-
First we note, that that the permutation $\pi_i$ has at the $j$-th position the value $i + j$ (taken modulo $n$).
150+
First we note, that the permutation $\pi_i$ has at the $j$-th position the value $i + j$ (taken modulo $n$).
151151
If we check the cycle structure for $\pi_i$.
152152
We see that $1$ goes to $1 + i$, $1 + i$ goes to $1 + 2i$, which goes to $1 + 3i$, etc., until we come to a number of the form $1 + k n$.
153153
Similar statements can be mode for the remaining elements.
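The last hunk discusses the cyclic shifts $\pi_i$ and their cycle structure; since a shift by $i$ on $n$ elements has $\gcd(i, n)$ cycles, Burnside's lemma gives the classic necklace count $\frac{1}{n}\sum_{i=0}^{n-1} k^{\gcd(i, n)}$. A small sketch of that formula (names are ours; overflow is ignored for brevity):

```cpp
#include <numeric>   // std::gcd (C++17)

// Number of necklaces: colorings of a cycle of n beads with k colors,
// counted up to rotation via Burnside's lemma.
long long count_necklaces(int n, long long k) {
    long long total = 0;
    for (int i = 0; i < n; i++) {
        long long fixed = 1;
        for (int c = 0; c < std::gcd(i, n); c++)
            fixed *= k;            // k^{gcd(i, n)} colorings fixed by shift i
        total += fixed;
    }
    return total / n;              // Burnside: divide by the group size |G| = n
}
```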

src/data_structures/disjoint_set_union.md

Lines changed: 1 addition & 1 deletion
@@ -193,7 +193,7 @@ void union_sets(int a, int b) {
193193
194194
### Time complexity
195195
196-
As mentioned before, if we combine both optimizations - path compression with union by size / rank - we will reach reach nearly constant time queries.
196+
As mentioned before, if we combine both optimizations - path compression with union by size / rank - we will reach nearly constant time queries.
197197
It turns out, that the final amortized time complexity is $O(\alpha(n))$, where $\alpha(n)$ is the inverse Ackermann function, which grows very slowly.
198198
In fact it grows so slowly, that it doesn't exceed $4$ for all reasonable $n$ (approximately $n < 10^{600}$).
199199
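For reference, a compact sketch of the two optimizations this hunk refers to, path compression together with union by size (structure and names are illustrative, not the article's exact code):

```cpp
#include <utility>
#include <vector>

struct DSU {
    std::vector<int> parent, sz;
    DSU(int n) : parent(n), sz(n, 1) {
        for (int i = 0; i < n; i++) parent[i] = i;
    }
    int find_set(int v) {
        if (parent[v] == v) return v;
        return parent[v] = find_set(parent[v]);    // path compression
    }
    void union_sets(int a, int b) {
        a = find_set(a), b = find_set(b);
        if (a == b) return;
        if (sz[a] < sz[b]) std::swap(a, b);        // union by size
        parent[b] = a;
        sz[a] += sz[b];
    }
};
```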

src/data_structures/segment_tree.md

Lines changed: 1 addition & 1 deletion
@@ -789,7 +789,7 @@ Here is the implementation of the construction of a 2D Segment Tree.
789789
It actually represents two separate blocks:
790790
the construction of a Segment Tree along the $x$ coordinate ($\text{build}_x$), and the $y$ coordinate ($\text{build}_y$).
791791
For the leaf nodes in $\text{build}_y$ we have to separate two cases:
792-
when the current segment of the first coordinate $[tlx \dots trx]$ has length 1, and when it has a length greater than one. In the first case, we just take the corresponding value from the matrix, and int the second case we we can combine the values of two Segment Trees from the left and the right son in the coordinate $x$.
792+
when the current segment of the first coordinate $[tlx \dots trx]$ has length 1, and when it has a length greater than one. In the first case, we just take the corresponding value from the matrix, and int the second case we can combine the values of two Segment Trees from the left and the right son in the coordinate $x$.
793793

794794
```cpp
795795
void build_y(int vx, int lx, int rx, int vy, int ly, int ry) {
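    // (Illustrative continuation of build_y, not part of this diff's context:
    //  it shows the leaf case described in the changed line above. The array
    //  t is the 2D tree and a the input matrix, both assumed from the article.)
    if (ly == ry) {
        if (lx == rx)
            t[vx][vy] = a[lx][ly];                     // x-segment of length 1: take the matrix value
        else
            t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];   // otherwise combine the two x-children
    } else {
        int my = (ly + ry) / 2;
        build_y(vx, lx, rx, vy*2, ly, my);
        build_y(vx, lx, rx, vy*2+1, my+1, ry);
        t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
    }
}
```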

src/data_structures/sqrt-tree.md

Lines changed: 1 addition & 1 deletion
@@ -126,7 +126,7 @@ So, we have the following algorithm for updating an _indexed_ tree:
126126

127127
* Update $\text{prefixOp}$ and $\text{suffixOp}$ in $O(\sqrt{n})$.
128128

129-
* Update $\text{index}$. It has length $O(\sqrt{n})$ and we need to update only one item in it (that represents the changed block). So, the time complexity for this step is $O(\sqrt{n})$. We can use the the algorithm described in the beginning of this section (the "slow" one) to do it.
129+
* Update $\text{index}$. It has length $O(\sqrt{n})$ and we need to update only one item in it (that represents the changed block). So, the time complexity for this step is $O(\sqrt{n})$. We can use the algorithm described in the beginning of this section (the "slow" one) to do it.
130130

131131
* Go into the child node that represents the changed block and update it in $O(\sqrt{n})$ with the "slow" algorithm.
132132

src/data_structures/sqrt_decomposition.md

Lines changed: 1 addition & 1 deletion
@@ -183,7 +183,7 @@ vector<int> mo_s_algorithm(vector<Query> queries) {
183183
}
184184
```
185185
186-
Based on the problem we can can use a different data structure and modify the `add`/`remove`/`get_answer` functions accordingly.
186+
Based on the problem we can use a different data structure and modify the `add`/`remove`/`get_answer` functions accordingly.
187187
For example if we are asked to find range sum queries then we use a simple integer as data structure, which is $0$ at the beginning.
188188
The `add` function will simply add the value of the position and subsequently update the answer variable.
189189
On the other hand `remove` function will subtract the value at position and subsequently update the answer variable.
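For the range-sum case described above, the three hooks could look like this minimal sketch (array and variable names are ours):

```cpp
#include <vector>

std::vector<int> a;      // the input array the queries refer to
long long cur_sum = 0;   // running answer for the current range

void add(int idx)      { cur_sum += a[idx]; }
void remove(int idx)   { cur_sum -= a[idx]; }
long long get_answer() { return cur_sum; }
```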

src/graph/all-pair-shortest-path-floyd-warshall.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
22

33
# Floyd-Warshall Algorithm
44

5-
Given an undirected weighted graph $G$ with $n$ vertices. The task is to find the length of the shortest path $d_{ij}$ between each each pair of vertices $i$ and $j$.
5+
Given an undirected weighted graph $G$ with $n$ vertices. The task is to find the length of the shortest path $d_{ij}$ between each pair of vertices $i$ and $j$.
66

77
The graph may have negative weight edges, but no negative weight cycles (for then the shortest path is undefined).
88
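The relaxation step behind the algorithm, as a minimal sketch over a distance matrix d initialized with edge weights, 0 on the diagonal, and INF where there is no edge (names and the INF value are our choices):

```cpp
#include <algorithm>
#include <vector>

const long long INF = 1e18;

// After the k-th iteration of the outer loop, d[i][j] is the length of the
// shortest path from i to j using only intermediate vertices 0..k.
void floyd_warshall(std::vector<std::vector<long long>>& d) {
    int n = d.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] < INF && d[k][j] < INF)
                    d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);
}
```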

src/graph/dijkstra_sparse.md

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ The main difference to the implementation with `set` is that we cannot remove el
8585
Therefore we have to cheat a little bit.
8686
We simply don't delete the old pair from the queue.
8787
As a result a vertex can appear multiple times with different distance in the queue at the same time.
88-
Among these pairs we are only interested in the pairs where the first element is equal to the corresponding value in $d[]$, all the other other pairs are old.
88+
Among these pairs we are only interested in the pairs where the first element is equal to the corresponding value in $d[]$, all the other pairs are old.
8989
Therefore we need to make a small modification:
9090
at the beginning of each iteration, after extracting the next pair, we check if it an important pair or if it is already an old and handled pair.
9191
This check is important, otherwise the complexity can increase up to $O(n m)$.
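A sketch of the lazy-deletion variant this hunk describes, with the staleness check right after extraction (adjacency-list representation and names are illustrative):

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

const long long INF = 1e18;

// adj[v] holds pairs (to, weight); returns the distance array from s.
vector<long long> dijkstra(int s, const vector<vector<pair<int, long long>>>& adj) {
    int n = adj.size();
    vector<long long> d(n, INF);
    priority_queue<pair<long long, int>, vector<pair<long long, int>>,
                   greater<pair<long long, int>>> q;
    d[s] = 0;
    q.push({0, s});
    while (!q.empty()) {
        auto [dist, v] = q.top();
        q.pop();
        if (dist != d[v])
            continue;                      // old, already handled pair
        for (auto [to, w] : adj[v]) {
            if (d[v] + w < d[to]) {
                d[to] = d[v] + w;
                q.push({d[to], to});
            }
        }
    }
    return d;
}
```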

src/graph/edmonds_karp.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -185,7 +185,7 @@ The capacity of a $s$-$t$-cut is defined as the sum of capacities of the edges f
185185
Obviously we cannot send more flow from $s$ to $t$ than the capacity of any $s$-$t$-cut.
186186
Therefore the maximum flow is bounded by the minimum cut capacity.
187187
188-
The max-flow min-cut theorem theorem goes even farther.
188+
The max-flow min-cut theorem goes even further.
189189
It says that the capacity of the maximum flow has to be equal to the capacity of the minimum cut.
190190
191191
In the following image you can see the minimum cut of the flow network we used earlier.

src/graph/fixed_length_paths.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ $$A \odot B = C~~\Longleftrightarrow~~C_{i j} = \min_{p = 1 \ldots n}\left(A_{i
6262
Thus the solution of the task can be represented using the modified multiplication:
6363
$$L_k = \underbrace{G \odot \ldots \odot G}_{k~\text{times}} = G^{\odot k}$$
6464

65-
It remains to note that we also also can compute this exponentiation efficiently with [Binary exponentiation](./algebra/binary-exp.html), because the modified multiplication is obviously associative.
65+
It remains to note that we also can compute this exponentiation efficiently with [Binary exponentiation](./algebra/binary-exp.html), because the modified multiplication is obviously associative.
6666
So also this solution has $O(n^3 \log k)$ complexity.
6767

6868
## Generalization of the problems for paths with length up to $k$
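The solution discussed in this hunk (before the generalization heading shown as trailing context) rests on the $(\min, +)$ product $A \odot B$ quoted in the hunk header; a minimal sketch of that product, whose associativity is what permits binary exponentiation of $G$ (matrix representation and INF are our choices):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

const long long INF = 1e18;

// C[i][j] = min over p of (A[i][p] + B[p][j]).
vector<vector<long long>> min_plus(const vector<vector<long long>>& A,
                                   const vector<vector<long long>>& B) {
    int n = A.size();
    vector<vector<long long>> C(n, vector<long long>(n, INF));
    for (int i = 0; i < n; i++)
        for (int p = 0; p < n; p++)
            for (int j = 0; j < n; j++)
                if (A[i][p] < INF && B[p][j] < INF)
                    C[i][j] = min(C[i][j], A[i][p] + B[p][j]);
    return C;
}
```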

src/graph/mst_kruskal.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -75,7 +75,7 @@ for (Edge e : edges) {
7575
Why does Kruskal's algorithm give us the correct result?
7676

7777
If the original graph was connected, then also the resulting graph will be connected.
78-
Because otherwise there would be two components that could be connected with at least one edge. Though this is impossible, because Kruskal would have have chosen one of these edges, since the ids of the components are different.
78+
Because otherwise there would be two components that could be connected with at least one edge. Though this is impossible, because Kruskal would have chosen one of these edges, since the ids of the components are different.
7979
Also the resulting graph doesn't contain any cycles, since we forbid this explicitly in the algorithm.
8080
Therefore the algorithm generates a spanning tree.
8181

src/graph/mst_prim.md

Lines changed: 1 addition & 1 deletion
@@ -139,7 +139,7 @@ void prim() {
139139
```
140140

141141
The adjacency matrix `adj[][]` of size $n \times n$ stores the weights of the edges, and it uses the weight `INF` if there doesn't exist an edge between two vertices.
142-
The algorithm uses two arrays: the flag `selected[]`, which indicates which vertices we already have selected, and the array `min_e[]` which stores the edge with minimal weight to an selected vertex for each each not-yet-selected vertex (it stores the weight and the end vertex).
142+
The algorithm uses two arrays: the flag `selected[]`, which indicates which vertices we already have selected, and the array `min_e[]` which stores the edge with minimal weight to an selected vertex for each not-yet-selected vertex (it stores the weight and the end vertex).
143143
The algorithm does $n$ steps, in each iteration the vertex with the smallest edge weight is selected, and the `min_e[]` of all other vertices gets updated.
144144

145145
### Sparse graphs: $O(m \log n)$

src/schedules/schedule_two_machines.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ In general we get the equation:
3030
$$x_k = \max\left(\sum_{i=1}^k a_i - \sum_{i=1}^{k-1} b_i - \sum_{i=1}^{k-1} x_i, 0 \right)$$
3131

3232
We can now calculate the **total idle time** $F(x)$.
33-
It is claimed that that is has the form
33+
It is claimed that it has the form
3434
$$F(x) = \max_{k=1 \dots n} K_i,$$
3535
where
3636
$$K_i = \sum_{i=1}^k a_i - \sum_{i=1}^{k-1} b_i.$$
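The claimed form of the total idle time can be evaluated directly from prefix sums; a small sketch following the formula quoted above (function and variable names are ours):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// F = max over k of ( sum_{i<=k} a_i - sum_{i<k} b_i ).
long long total_idle_time(const vector<long long>& a, const vector<long long>& b) {
    long long pref_a = 0, pref_b = 0, F = 0;
    for (size_t k = 0; k < a.size(); k++) {
        pref_a += a[k];                    // sum of a_1 .. a_k
        F = max(F, pref_a - pref_b);       // candidate K_k
        pref_b += b[k];                    // sum of b_1 .. b_k, used by K_{k+1}
    }
    return F;
}
```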

src/string/aho_corasick.md

Lines changed: 1 addition & 1 deletion
@@ -179,7 +179,7 @@ Initially we are at the root of the trie.
179179
If we are at any time at state $v$, and the next letter is $c$, then we transition to the next state with $\text{go}(v, c)$, thereby either increasing the length of the current match substring by $1$, or decreasing it by following a suffix link.
180180
181181
How can we find out for a state $v$, if there are any matches with strings for the set?
182-
First, it is clear that if we stand on a $\text{leaf}$ vertex, then then string corresponding to the vertex ends at this position in the text.
182+
First, it is clear that if we stand on a $\text{leaf}$ vertex, then the string corresponding to the vertex ends at this position in the text.
183183
However this is by no means the only possible case of achieving a match:
184184
if we can reach one or more $\text{leaf}$ vertices by moving along the suffix links, then there will be also a match corresponding to each found $\text{leaf}$ vertex.
185185
A simple example demonstrating this situation can be creating using the set of strings $\\{dabce, abc, bc\\}$ and the text $dabc$.

src/string/expression_parsing.md

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ By slightly modifying the above implementation it is also possible to obtain the
118118
## Unary operators
119119
120120
Now suppose that the expression also contains **unary** operators (operators that take one argument).
121-
The unary plus and unary minus are common common examples for such operators.
121+
The unary plus and unary minus are common examples of such operators.
122122
123123
One of the differences in this case, is that we need to determine whether the current operator is a unary or a binary one.
124124

src/string/lyndon_factorization.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ There can be three different cases:
3737
- $s[j] = s[k]$: if this is the case, then adding the symbol $s[j]$ to $s_2$ doesn't violate its pre-simplicity.
3838
So we simply increment the pointers $j$ and $k$.
3939
- $s[j] > s[k]$: here, the string $s_2 + s[j]$ becomes simple.
40-
We can increment $j$ and reset $k$ back to the beginning of $s_2$, so that the next character can be compared with the beginning of of the simple word.
40+
We can increment $j$ and reset $k$ back to the beginning of $s_2$, so that the next character can be compared with the beginning of the simple word.
4141
- $s[j] < s[k]$: the string $s_2 + s[j]$ is no longer pre-simple.
4242
Therefore we will split the pre-simple string $s_2$ into its simple strings and the remainder, possibly empty.
4343
The simple string will have the length $j - k$.
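The three cases in this hunk are the core of Duval's algorithm; as a compact sketch of the whole factorization (the function name is ours, the case handling follows the text above):

```cpp
#include <string>
#include <vector>
using namespace std;

// Duval's algorithm: returns the Lyndon factorization of s.
vector<string> duval(const string& s) {
    int n = s.size();
    vector<string> factorization;
    int i = 0;
    while (i < n) {
        int j = i + 1, k = i;              // j scans ahead, k points into the candidate s2
        while (j < n && s[k] <= s[j]) {
            if (s[k] < s[j])
                k = i;                     // s[j] > s[k]: compare again from the start of s2
            else
                k++;                       // s[j] = s[k]: s2 stays pre-simple
            j++;
        }
        while (i <= k) {                   // s[j] < s[k] (or end): output simple words of length j - k
            factorization.push_back(s.substr(i, j - k));
            i += j - k;
        }
    }
    return factorization;
}
```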

src/string/string-hashing.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ Comparing two strings is then an $O(1)$ operation.
1212

1313
For the conversion we need a so-called **hash function**.
1414
The goal of it is to convert a string into a integer, the so-called **hash** of the string.
15-
The following condition has to hold: if two two strings $s$ and $t$ are equal ($s = t$), then also their hashes have to be equal ($\text{hash}(s) = \text{hash}(t)$).
15+
The following condition has to hold: if two strings $s$ and $t$ are equal ($s = t$), then also their hashes have to be equal ($\text{hash}(s) = \text{hash}(t)$).
1616
Otherwise we will not be able to compare strings.
1717

1818
Notice, the opposite direction doesn't have to hold.
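A common polynomial hash satisfying this condition, as a minimal sketch (the constants p = 31 and m = 10^9 + 9 are typical choices, not mandated by the excerpt):

```cpp
#include <string>
using namespace std;

// hash(s) = sum over i of (s[i] - 'a' + 1) * p^i, taken modulo m.
long long compute_hash(const string& s) {
    const int p = 31;
    const long long m = 1e9 + 9;
    long long hash_value = 0, p_pow = 1;
    for (char c : s) {
        hash_value = (hash_value + (c - 'a' + 1) * p_pow) % m;
        p_pow = (p_pow * p) % m;
    }
    return hash_value;
}
```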

src/string/suffix-automaton.md

Lines changed: 1 addition & 1 deletion
@@ -169,7 +169,7 @@ The fact that we can construct a tree using the sets $endpos$ follows directly f
169169
Let us now consider an arbitrary state $v \ne t_0$, and its suffix link $link(v)$.
170170
From the definition of the suffix link and from Lemma 2 it follows that
171171
$$endpos(v) \subseteq endpos(link(v)),$$
172-
which together with with the previous lemma proves the assertion:
172+
which together with the previous lemma proves the assertion:
173173
the tree of suffix links is essentially a tree of sets $endpos$.
174174

175175
Here is an **example** of a tree of suffix links in the suffix automaton build for the string $"abcbc"$.

0 commit comments
