Commit da1ca3a

Merge branch 'cp-algorithms:master' into patch-3
2 parents: 65a6065 + b59e9d6


3 files changed: +38 -32 lines changed


src/algebra/polynomial.md

Lines changed: 1 addition & 1 deletion
@@ -140,7 +140,7 @@ Polynomial long division is useful because of its many important properties:
 Note that long division can't be properly defined for formal power series. Instead, for any $A(x)$ such that $a_0 \neq 0$, it is possible to define an inverse formal power series $A^{-1}(x)$, such that $A(x) A^{-1}(x) = 1$. This fact, in turn, can be used to compute the result of long division for polynomials.

 ## Basic implementation
-[Here](https://cp-algorithms.github.io/cp-algorithms-aux/cp-algo/algebra/poly.hpp) you can find the basic implementation of polynomial algebra.
+[Here](https://cp-algorithms.github.io/cp-algorithms-aux/cp-algo/math/poly.hpp) you can find the basic implementation of polynomial algebra.

 It supports all trivial operations and some other useful methods. The main class is `poly<T>` for polynomials with coefficients of type `T`.

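The context paragraph above notes that an inverse formal power series $A^{-1}(x)$ exists whenever $a_0 \neq 0$ and can be used to carry out polynomial long division. As a rough illustration of that idea only (this is not the linked `poly.hpp` library), here is a minimal Python sketch over exact rationals; it uses plain $O(n^2)$ convolutions rather than the FFT-based multiplication a real implementation would use, and the function names are invented for this example.

```python
from fractions import Fraction

def series_inverse(a, n):
    """First n coefficients of A^{-1}(x); requires a[0] != 0."""
    assert a[0] != 0
    inv = [Fraction(1, 1) / a[0]]
    for k in range(1, n):
        # [x^k] (A * A^{-1}) must vanish for k >= 1, which pins down inv[k].
        s = sum((a[j] * inv[k - j] for j in range(1, min(k, len(a) - 1) + 1)),
                Fraction(0))
        inv.append(-s / a[0])
    return inv

def poly_divmod(a, b):
    """Quotient and remainder of A(x) / B(x); coefficient lists, low to high."""
    n, m = len(a) - 1, len(b) - 1
    if n < m:
        return [Fraction(0)], [Fraction(x) for x in a]
    ar, br = a[::-1], b[::-1]              # reversed coefficient lists
    q_len = n - m + 1
    inv_br = series_inverse(br, q_len)     # leading coefficient of B must be nonzero
    # reversed quotient = reversed(A) * reversed(B)^{-1}  (mod x^q_len)
    qr = [sum((ar[j] * inv_br[k - j] for j in range(min(k, n) + 1)), Fraction(0))
          for k in range(q_len)]
    q = qr[::-1]
    r = [Fraction(x) for x in a]           # remainder = A - B * Q
    for i, bi in enumerate(b):
        for j, qj in enumerate(q):
            r[i + j] -= bi * qj
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return q, r

# (x^3 - 2x + 5) / (x - 1)  ->  quotient x^2 + x - 1, remainder 4
q, r = poly_divmod([5, -2, 0, 1], [-1, 1])
assert q == [-1, 1, 1] and r == [4]
```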
src/data_structures/fenwick.md

Lines changed: 36 additions & 31 deletions
@@ -6,44 +6,48 @@ e_maxx_link: fenwick_tree
 
 # Fenwick Tree
 
-Let, $f$ be some group operation (binary associative function over a set with identity element and inverse elements) and $A$ be an array of integers of length $N$.
+Let $f$ be some group operation (a binary associative function over a set with an identity element and inverse elements) and $A$ be an array of integers of length $N$.
+Denote $f$'s infix notation as $*$; that is, $f(x,y) = x*y$ for arbitrary integers $x,y$.
+(Since this is associative, we will omit parentheses for order of application of $f$ when using infix notation.)
 
-Fenwick tree is a data structure which:
+The Fenwick tree is a data structure which:
 
-* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $f(A_l, A_{l+1}, \dots, A_r)$) in $O(\log N)$ time;
-* updates the value of an element of $A$ in $O(\log N)$ time;
-* requires $O(N)$ memory, or in other words, exactly the same memory required for $A$;
-* is easy to use and code, especially, in the case of multidimensional arrays.
+* calculates the value of function $f$ in the given range $[l, r]$ (i.e. $A_l * A_{l+1} * \dots * A_r$) in $O(\log N)$ time
+* updates the value of an element of $A$ in $O(\log N)$ time
+* requires $O(N)$ memory (the same amount required for $A$)
+* is easy to use and code, especially in the case of multidimensional arrays
 
-The most common application of Fenwick tree is _calculating the sum of a range_ (i.e. using addition over the set of integers $\mathbb{Z}$: $f(A_1, A_2, \dots, A_k) = A_1 + A_2 + \dots + A_k$).
+The most common application of a Fenwick tree is _calculating the sum of a range_.
+For example, using addition over the set of integers as the group operation, i.e. $f(x,y) = x + y$: the binary operation, $*$, is $+$ in this case, so $A_l * A_{l+1} * \dots * A_r = A_l + A_{l+1} + \dots + A_{r}$.
 
-Fenwick tree is also called **Binary Indexed Tree**, or just **BIT** abbreviated.
-
-Fenwick tree was first described in a paper titled "A new data structure for cumulative frequency tables" (Peter M. Fenwick, 1994).
+The Fenwick tree is also called a **Binary Indexed Tree** (BIT).
+It was first described in a paper titled "A new data structure for cumulative frequency tables" (Peter M. Fenwick, 1994).
 
 ## Description
 
 ### Overview
 
-For the sake of simplicity, we will assume that function $f$ is just a *sum function*.
+For the sake of simplicity, we will assume that function $f$ is defined as $f(x,y) = x + y$ over the integers.
 
-Given an array of integers $A[0 \dots N-1]$.
-A Fenwick tree is just an array $T[0 \dots N-1]$, where each of its elements is equal to the sum of elements of $A$ in some range $[g(i), i]$:
+Suppose we are given an array of integers, $A[0 \dots N-1]$.
+(Note that we are using zero-based indexing.)
+A Fenwick tree is just an array, $T[0 \dots N-1]$, where each element is equal to the sum of elements of $A$ in some range, $[g(i), i]$:
 
-$$T_i = \sum_{j = g(i)}^{i}{A_j},$$
+$$T_i = \sum_{j = g(i)}^{i}{A_j}$$
 
 where $g$ is some function that satisfies $0 \le g(i) \le i$.
-We will define the function in the next few paragraphs.
+We will define $g$ in the next few paragraphs.
 
-The data structure is called tree, because there is a nice representation of the data structure as tree, although we don't need to model an actual tree with nodes and edges.
-We will only need to maintain the array $T$ to handle all queries.
+The data structure is called a tree because there is a nice representation of it in the form of a tree, although we don't need to model an actual tree with nodes and edges.
+We only need to maintain the array $T$ to handle all queries.
 
 **Note:** The Fenwick tree presented here uses zero-based indexing.
-Many people will actually use a version of the Fenwick tree that uses one-based indexing.
-Therefore you will also find an alternative implementation using one-based indexing in the implementation section.
+Many people use a version of the Fenwick tree that uses one-based indexing.
+As such, you will also find an alternative implementation which uses one-based indexing in the implementation section.
 Both versions are equivalent in terms of time and memory complexity.
 
-Now we can write some pseudo-code for the two operations mentioned above - get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
+Now we can write some pseudo-code for the two operations mentioned above.
+Below, we get the sum of elements of $A$ in the range $[0, r]$ and update (increase) some element $A_i$:
 
 ```python
 def sum(int r):
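The pseudo-code listing introduced above is cut off by the hunk boundary, so the following is a rough, self-contained Python sketch of the same interface with the range function $g$ left as a parameter. The names and structure are illustrative assumptions, not the article's own listing; the two degenerate choices discussed in the next hunk, $g(i) = i$ and $g(i) = 0$, both fit this skeleton.

```python
def make_fenwick_like(A, g):
    """Build T[i] = A[g(i)] + ... + A[i] for an arbitrary choice of g."""
    n = len(A)
    T = [sum(A[g(i):i + 1]) for i in range(n)]

    def range_sum(r):
        # Sum of A[0..r]: take the block [g(r), r], then jump to g(r) - 1.
        res = 0
        while r >= 0:
            res += T[r]
            r = g(r) - 1
        return res

    def increase(i, delta):
        # Update every block [g(j), j] that contains index i.  For an
        # arbitrary g this scan is O(N); the Fenwick choice of g makes the
        # set of such j small (O(log N)) and directly enumerable.
        A[i] += delta
        for j in range(n):
            if g(j) <= i <= j:
                T[j] += delta

    return range_sum, increase

# g(i) = 0 corresponds to a prefix-sum array: O(1)-ish queries, slow updates.
A = [1, 2, 3, 4, 5]
prefix_sum, prefix_increase = make_fenwick_like(A, lambda i: 0)
assert prefix_sum(3) == 10      # 1 + 2 + 3 + 4
prefix_increase(0, 5)
assert prefix_sum(3) == 15
```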
@@ -60,20 +64,21 @@ def increase(int i, int delta):
 
 The function `sum` works as follows:
 
-1. first, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`
-2. then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$, and adds this range's sum to the `result`
-3. and so on, until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; that is where the `sum` function stops jumping.
+1. First, it adds the sum of the range $[g(r), r]$ (i.e. $T[r]$) to the `result`.
+2. Then, it "jumps" to the range $[g(g(r)-1), g(r)-1]$ and adds this range's sum to the `result`.
+3. This continues until it "jumps" from $[0, g(g( \dots g(r)-1 \dots -1)-1)]$ to $[g(-1), -1]$; this is where the `sum` function stops jumping.
 
-The function `increase` works with the same analogy, but "jumps" in the direction of increasing indices:
+The function `increase` works with the same analogy, but it "jumps" in the direction of increasing indices:
 
-1. sums of the ranges $[g(j), j]$ that satisfy the condition $g(j) \le i \le j$ are increased by `delta` , that is `t[j] += delta`. Therefore we updated all elements in $T$ that correspond to ranges in which $A_i$ lies.
+1. The sum for each range of the form $[g(j), j]$ which satisfies the condition $g(j) \le i \le j$ is increased by `delta`; that is, `t[j] += delta`.
+Therefore, it updates all elements in $T$ that correspond to ranges in which $A_i$ lies.
 
-It is obvious that the complexity of both `sum` and `increase` depend on the function $g$.
-There are lots of ways to choose the function $g$, as long as $0 \le g(i) \le i$ for all $i$.
-For instance the function $g(i) = i$ works, which results just in $T = A$, and therefore summation queries are slow.
-We can also take the function $g(i) = 0$.
-This will correspond to prefix sum arrays, which means that finding the sum of the range $[0, i]$ will only take constant time, but updates are slow.
-The clever part of the Fenwick algorithm is, that there it uses a special definition of the function $g$ that can handle both operations in $O(\log N)$ time.
+The complexity of both `sum` and `increase` depend on the function $g$.
+There are many ways to choose the function $g$ such that $0 \le g(i) \le i$ for all $i$.
+For instance, the function $g(i) = i$ works, which yields $T = A$ (in which case, the summation queries are slow).
+We could also take the function $g(i) = 0$.
+This would correspond to prefix sum arrays (in which case, finding the sum of the range $[0, i]$ will only take constant time; however, updates are slow).
+The clever part of the algorithm for Fenwick trees is how it uses a special definition of the function $g$ which can handle both operations in $O(\log N)$ time.
 
 ### Definition of $g(i)$ { data-toc-label='Definition of <script type="math/tex">g(i)</script>' }

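For reference, here is a compact runnable sketch of a zero-based Fenwick tree in the spirit of the section heading above. The particular choice $g(i) = i \,\&\, (i + 1)$ and the corresponding index jumps are taken from the standard zero-based formulation and do not appear in this diff hunk, so treat them as assumptions rather than the article's exact code.

```python
# Sketch of a zero-based Fenwick tree with g(i) = i & (i + 1); the jumps
# r -> g(r) - 1 and j -> j | (j + 1) are the usual bit tricks for this g.
class FenwickTree:
    def __init__(self, n):
        self.n = n
        self.t = [0] * n

    def sum(self, r):
        """Sum of A[0..r] in O(log n)."""
        res = 0
        while r >= 0:
            res += self.t[r]
            r = (r & (r + 1)) - 1   # jump to the block just left of [g(r), r]
        return res

    def range_sum(self, l, r):
        """Sum of A[l..r], using the group inverse (subtraction)."""
        return self.sum(r) - (self.sum(l - 1) if l > 0 else 0)

    def increase(self, i, delta):
        """A[i] += delta in O(log n)."""
        j = i
        while j < self.n:
            self.t[j] += delta
            j = j | (j + 1)         # next block [g(j), j] that contains i

# Usage: build from [1, 2, 3, 4, 5] and query a few ranges.
ft = FenwickTree(5)
for idx, val in enumerate([1, 2, 3, 4, 5]):
    ft.increase(idx, val)
assert ft.sum(2) == 6            # 1 + 2 + 3
assert ft.range_sum(1, 3) == 9   # 2 + 3 + 4
```

With this choice of $g$, both loops touch at most $O(\log N)$ entries of `t`, matching the complexity claimed in the changed text.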
src/graph/finding-cycle.md

Lines changed: 1 addition & 0 deletions
@@ -129,5 +129,6 @@ void find_cycle() {
 ```
 ### Practice problems:
 
+- [AtCoder : Reachability in Functional Graph](https://atcoder.jp/contests/abc357/tasks/abc357_e)
 - [CSES : Round Trip](https://cses.fi/problemset/task/1669)
 - [CSES : Round Trip II](https://cses.fi/problemset/task/1678/)
