Remove storage engine switching page (#779) · arangodb/docs@1b0a65e

This repository was archived by the owner on Dec 13, 2023. It is now read-only.

Commit 1b0a65e

Remove storage engine switching page (#779)
... from 3.8 and 3.9, as well as remove some other MMFiles references
1 parent 41453e3 commit 1b0a65e

File tree

53 files changed: +89 additions, −706 deletions


3.6/release-notes-upgrading-changes36.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -48,8 +48,10 @@ Deprecation of MMFiles Storage Engine
 
 The MMFiles storage engine is deprecated starting with version
 3.6.0 and it will be removed in a future release.
+{% if page.version.version <= "3.7" %}
 To change your MMFiles storage engine deployment to RocksDB, see:
 [Switch storage engine](administration-engine-switch-engine.html)
+{% endif %}
 
 We recommend to switch to RocksDB even before the removal of MMFiles.
 RocksDB is the default [storage engine](architecture-storage-engines.html)
```
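The switch referenced in the hunk above is a logical dump-and-restore. A minimal sketch of the procedure, assuming a single server; the endpoint, directory name, and the `--all-databases` flag reflect recent arangodump/arangorestore versions and are illustrative, not taken from this commit:

```
# 1. Dump all databases from the running MMFiles deployment
arangodump --server.endpoint tcp://127.0.0.1:8529 \
  --all-databases true --output-directory "engine-switch-dump"

# 2. Start arangod with a fresh (empty) data directory so it uses the
#    RocksDB engine, then load the dump back in
arangorestore --server.endpoint tcp://127.0.0.1:8529 \
  --all-databases true --input-directory "engine-switch-dump"
```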

3.7/administration-engine-switch-engine.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -2,6 +2,9 @@
 layout: default
 description: Create a logical backup with arangodump and restore it with arangorestore to a new data directory
 title: Switch ArangoDB Storage Engine
+redirect_from:
+- /3.8/administration-engine-switch-engine.html # 3.9 -> 3.9
+- /3.9/administration-engine-switch-engine.html # 3.9 -> 3.9
 ---
 Switching the storage engine
 ----------------------------
```

3.7/backup-restore.md

Lines changed: 0 additions & 6 deletions
```diff
@@ -263,12 +263,6 @@ not be suited for.
 This means that one cannot restore a 3-node ArangoDB cluster's hot backup to
 any other deployment than another 3-node ArangoDB cluster of the same version.
 
-- **RocksDB Storage Engine Only**
-
-  Hot backups rely on creation of hard links on actual RocksDB data files and
-  directories. The same or according file system level mechanisms are not
-  available to MMFiles deployments.
-
 - **Storage Space**
 
   Without the creation of hot backups, RocksDB keeps compacting the file system
```

3.7/http/transaction-stream-transaction.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -41,8 +41,7 @@ for making sure that the transaction is committed or aborted when it is no longer
 This avoids taking up resources on the ArangoDB server.
 
 {% hint 'warning' %}
-Transactions will acquire collection locks for read and write operations
-in the MMFiles storage engine, and for write operations in RocksDB.
+Transactions will acquire collection locks for write operations in RocksDB.
 It is therefore advisable to keep the transactions as short as possible.
 {% endhint %}
 
```
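The hint above concerns stream transactions holding locks; explicitly committing or aborting a transaction that is no longer needed releases them. A sketch against the `/_api/transaction` endpoints this page documents (host and transaction id are placeholders):

```
# Commit a stream transaction
curl -X PUT http://localhost:8529/_api/transaction/12345

# Or abort it, discarding its operations
curl -X DELETE http://localhost:8529/_api/transaction/12345
```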

3.7/indexing-persistent.md

Lines changed: 1 addition & 12 deletions
```diff
@@ -1,21 +1,10 @@
 ---
 layout: default
-description: The persistent index type is deprecated from version 3.4.0 on for the MMFiles storage engine.
+description: It is possible to define a persistent index on one or more attributes (or paths) of documents
 ---
 Persistent indexes
 ==================
 
-{% hint 'warning' %}
-The persistent index type is deprecated from version 3.4.0 on for the MMFiles
-storage engine. Use the RocksDB storage engine instead, where all indexes are
-persistent.
-{% endhint %}
-
-Introduction to Persistent Indexes
-----------------------------------
-
-This is an introduction to ArangoDB's persistent indexes.
-
 It is possible to define a persistent index on one or more attributes (or paths)
 of documents. The index is then used in queries to locate documents within a given range.
 If the index is declared unique, then no two documents are allowed to have the same
```
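In arangosh, such an index is defined with `ensureIndex()`; the collection and field names below are illustrative:

```js
// Persistent index over two attribute paths; non-unique
db.posts.ensureIndex({
  type: "persistent",
  fields: ["author", "publishedAt"]
});

// Unique variant: no two documents may share the same email value
db.users.ensureIndex({
  type: "persistent",
  fields: ["email"],
  unique: true
});
```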

3.7/installation-windows.md

Lines changed: 0 additions & 5 deletions
```diff
@@ -195,11 +195,6 @@ be specified like `/OPTIONNAME=value`.
 - `1`:
 - `INSTALL_SCOPE_ALL` = 1 add it to the path for all users
 - `INSTALL_SCOPE_ALL` = 0 add it to the path of the currently logged in users
-- `/STORAGE_ENGINE` - which storage engine to use (ArangoDB 3.2 onwards)
-  - `auto`: Use default storage engine
-    (RocksDB from version 3.4 on, MMFiles in 3.3 and older)
-  - `mmfiles`: Use MMFiles storage engine
-  - `rocksdb`: Use RocksDB storage engine
 
 *For Uninstallation*:
 - `PURGE_DB`
```
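These installer options follow the `/OPTIONNAME=value` pattern stated in the hunk context. A hedged sketch of an unattended invocation, using only options visible on this page; the installer executable name is a placeholder:

```
rem Install, adding ArangoDB to the PATH for all users
arangodb3-win64.exe /INSTALL_SCOPE_ALL=1

rem Uninstall, removing the databases as well
Uninstall.exe /PURGE_DB=1
```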
Lines changed: 3 additions & 8 deletions
```diff
@@ -1,6 +1,8 @@
 ---
 layout: default
-description: Arangodump limitations in cluster and with MMFiles storage engine
+description: >-
+  In a cluster, arangodump does not guarantee to dump a consistent snapshot if
+  write operations happen while the dump is in progress
 title: Arangodump Limitations
 ---
 Arangodump Limitations
@@ -16,10 +18,3 @@ _Arangodump_ has the following limitations:
 a single instance, a master/slave, or active failover setup, where even if
 write operations are ongoing, the created dump is consistent, as a snapshot
 is taken when the dump starts.
-<!-- TOOD Remove when 3.6 reaches EoL -->
-- If the MMFiles engine is in use, on a single instance, a master/slave, or
-  active failover setup, even if the write operations are suspended, it is not
-  guaranteed that the dump includes all the data that has been previously
-  written as _arangodump_ will only dump the data included in the _datafiles_
-  but not the data that has not been transferred from the _WAL_ to the
-  _datafiles_. A WAL flush can be forced however.
```
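The consistent single-instance snapshot described in the context lines is taken when the dump starts. A typical invocation sketch; the endpoint, database name, and output directory are placeholders:

```
arangodump --server.endpoint tcp://127.0.0.1:8529 \
  --server.database mydb --output-directory "dump"
```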

3.8/administration-engine-switch-engine.md

Lines changed: 0 additions & 25 deletions
This file was deleted.

3.8/administration-leader-follower-initialize-from-backup.md

Lines changed: 6 additions & 12 deletions
````diff
@@ -30,20 +30,14 @@ First of all you have to start the Leader server, using a command like the above
 arangod --server.endpoint tcp://leader.domain.org:8529
 ```
 
-Depending on your storage engine you also want to adjust the following options:
+Next, you should adjust the `--rocksdb.wal-file-timeout` storage engine
+option that defines the timeout after which unused WAL files are deleted in
+seconds (default: 10).
 
-- MMFiles:<br>
-  `--wal.historic-logfiles`<br>
-  maximum number of historic logfiles to keep after collection (default: 10)
-
-- RocksDB:<br>
-  `--rocksdb.wal-file-timeout`<br>
-  timeout after which unused WAL files are deleted in seconds (default: 10)
-
-The options above prevent the premature removal of old WAL files from the Leader,
+The option prevents the premature removal of old WAL files from the Leader,
 and are useful in case intense write operations happen on the Leader while you
-are initializing the Follower. In fact, if you do not tune these options, what can
-happen is that the Leader WAL files do not include all the write operations
+are initializing the Follower. In fact, if you do not tune this option, what can
+happen is that the Leader WAL files do not include all the write operations that
 happened after the backup is taken. This may lead to situations in which the
 initialized Follower is missing some data, or fails to start.
 
````
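With the default of 10 seconds, unused WAL files can disappear quickly under load. A sketch of starting the Leader with a raised timeout while the Follower initializes; the one-hour value is illustrative:

```
arangod --server.endpoint tcp://leader.domain.org:8529 \
  --rocksdb.wal-file-timeout 3600
```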

3.8/administration.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -51,5 +51,3 @@ Other Topics
 - [Backup & Restore](backup-restore.html)
 - [Import & Export](administration-import-export.html)
 - [User Management](administration-managing-users.html)
-- [Switch Storage Engine](administration-engine-switch-engine.html)
-
```

3.8/appendix-deprecated.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -13,6 +13,7 @@ redirect_from:
 - appendix-deprecated-simple-queries-geo-queries.html # 3.8 -> 3.8
 - appendix-deprecated-simple-queries-fulltext-queries.html # 3.8 -> 3.8
 - http/simple-query.html # 3.8 -> 3.8
+- programs-arangod-compaction.html # 3.9 -> 3.9
 ---
 Deprecated
 ==========
@@ -24,8 +25,7 @@ replace the old features with:
 
 - **MMFiles Storage Engine**:
   The MMFiles storage engine was deprecated in version 3.6.0 and removed in
-  3.7.0. To change your MMFiles storage engine deployment to RocksDB, see:
-  [Switch storage engine](administration-engine-switch-engine.html)
+  3.7.0.
 
   MMFiles specific startup options still exist but will also be removed.
   This will affect the following options:
```

3.8/aql/execution-and-performance-optimizer.md

Lines changed: 3 additions & 4 deletions
```diff
@@ -605,10 +605,9 @@ The following optimizer rules may appear in the `rules` attribute of a plan:
 - `remove-redundant-sorts`:
   will appear if multiple *SORT* statements can be merged into fewer sorts.
 
-- `remove-sort-rand`:
-  will appear when a *SORT RAND()* expression is removed by moving the random
-  iteration into an *EnumerateCollectionNode*. This optimizer rule is specific
-  for the MMFiles storage engine.
+- `remove-sort-rand-limit-1`:
+  will appear when a *SORT RAND() LIMIT 1* construct is removed by moving the
+  random iteration into an *EnumerateCollectionNode*.
 
 - `remove-unnecessary-calculations`:
   will appear if *CalculationNode*s were removed from the query. The rule will
```

3.8/aql/invocation-with-arangosh.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -199,7 +199,7 @@ There are further options that can be passed in the *options* attribute of the *
 - *stream*: Specify *true* and the query will be executed in a **streaming** fashion. The query result is
   not stored on the server, but calculated on the fly. *Beware*: long-running queries will
   need to hold the collection locks for as long as the query cursor exists. It is advisable
-  to *only* use this option on short-running queries *or* without exclusive locks (write locks on MMFiles).
+  to *only* use this option on short-running queries *or* without exclusive locks.
   When set to *false* the query will be executed right away in its entirety.
   In that case query results are either returned right away (if the result set is small enough),
   or stored on the arangod instance and accessible via the cursor API.
```
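In arangosh, the *stream* option is passed alongside the query. A sketch using the object form of `db._query()`, which mirrors the cursor API body; the collection name is illustrative and the exact invocation form may vary by version:

```js
var cursor = db._query({
  query: "FOR doc IN posts RETURN doc",
  options: { stream: true } // compute results on the fly instead of storing them
});
while (cursor.hasNext()) {
  print(cursor.next());
}
```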

3.8/aql/operations-remove.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -127,8 +127,8 @@ FOR i IN 1..1000
 
 ### `exclusive`
 
-In contrast to the MMFiles engine, the RocksDB engine does not require collection-level
-locks. Different write operations on the same collection do not block each other, as
+The RocksDB engine does not require collection-level locks. Different write
+operations on the same collection do not block each other, as
 long as there are no _write-write conflicts_ on the same documents. From an application
 development perspective it can be desired to have exclusive write access on collections,
 to simplify the development. Note that writes do not block reads in RocksDB.
```
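The `exclusive` option is set per write operation in the `OPTIONS` clause; a sketch with an illustrative collection name:

```aql
FOR doc IN posts
  FILTER doc.archived == true
  REMOVE doc IN posts OPTIONS { exclusive: true }
```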

3.8/aql/operations-replace.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -121,8 +121,8 @@ FOR i IN 1..1000
 
 ### `exclusive`
 
-In contrast to the MMFiles engine, the RocksDB engine does not require collection-level
-locks. Different write operations on the same collection do not block each other, as
+The RocksDB engine does not require collection-level locks. Different write
+operations on the same collection do not block each other, as
 long as there are no _write-write conflicts_ on the same documents. From an application
 development perspective it can be desired to have exclusive write access on collections,
 to simplify the development. Note that writes do not block reads in RocksDB.
```

3.8/aql/operations-update.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -250,8 +250,8 @@ FOR i IN 1..1000
 
 ### `exclusive`
 
-In contrast to the MMFiles engine, the RocksDB engine does not require collection-level
-locks. Different write operations on the same collection do not block each other, as
+The RocksDB engine does not require collection-level locks. Different write
+operations on the same collection do not block each other, as
 long as there are no _write-write conflicts_ on the same documents. From an application
 development perspective it can be desired to have exclusive write access on collections,
 to simplify the development. Note that writes do not block reads in RocksDB.
```

3.8/aql/operations-upsert.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -112,8 +112,8 @@ within the *searchExpression*. Even worse, if you use an outdated `_rev` in the
 
 ### `exclusive`
 
-In contrast to the MMFiles engine, the RocksDB engine does not require collection-level
-locks. Different write operations on the same collection do not block each other, as
+The RocksDB engine does not require collection-level locks. Different write
+operations on the same collection do not block each other, as
 long as there are no _write-write conflicts_ on the same documents. From an application
 development perspective it can be desired to have exclusive write access on collections,
 to simplify the development. Note that writes do not block reads in RocksDB.
```

3.8/architecture-storage-engines.md

Lines changed: 7 additions & 51 deletions
```diff
@@ -1,60 +1,17 @@
 ---
 layout: default
 description: At the very bottom of the ArangoDB database system lies the RocksDB storage engine
-title: ArangoDB Storage Engines
+title: ArangoDB Storage Engine
 ---
-# Storage Engines
-
-{% hint 'warning' %}
-The MMFiles storage engine was removed.
-To change your MMFiles storage engine deployment to RocksDB, see:
-[Switch storage engine](administration-engine-switch-engine.html)
-{% endhint %}
+# Storage Engine
 
 At the very bottom of the ArangoDB database system lies the storage
 engine. The storage engine is responsible for persisting the documents
 on disk, holding copies in memory, providing indexes and caches to
 speed up queries.
 
-Up to version 3.1 ArangoDB only supported memory-mapped files (**MMFiles**)
-as sole storage engine. In version 3.2, ArangoDB gained support for pluggable
-storage engines and a second engine based on Facebook's **RocksDB** was added.
-MMFiles remained the default engine for 3.3, but in 3.4 RocksDB became the new
-default. MMFiles was deprecated in version 3.6.0 and removed in 3.7.0.
-
-<!-- TODO: remove?
-The engine must be selected for the whole server / cluster. It is not
-possible to mix engines. The transaction handling and write-ahead-log
-format in the individual engines is very different and therefore cannot
-be mixed.
--->
-
-{% hint 'tip' %}
-For practical information on how to switch storage engine please refer to the
-[Switching the storage engine](administration-engine-switch-engine.html)
-page.
-{% endhint %}
-
-| MMFiles | RocksDB |
-|---------|---------|
-| removed | default |
-| dataset needs to fit into memory | work with as much data as fits on disk |
-| indexes in memory | hot set in memory, data and indexes on disk |
-| slow restart due to index rebuilding | fast startup (no rebuilding of indexes) |
-| volatile collections (only in memory, optional) | collection data always persisted |
-| collection level locking (writes block reads) | concurrent reads and writes |
-
-*Blog article: [Comparing new RocksDB and MMFiles storage engines](https://www.arangodb.com/community-server/rocksdb-storage-engine/){:target="_blank"}*
-
-## MMFiles
-
-The MMFiles (Memory-Mapped Files) engine was optimized for the use-case where
-the data fit into the main memory. It allowed for very fast concurrent
-reads. However, writes blocked reads and locking was on collection
-level.
-
-Indexes were always in memory and rebuilt on startup. This
-gave better performance but imposed a longer startup time.
+ArangoDB's storage engine is based on Facebook's **RocksDB** and the only
+storage engine available in ArangoDB 3.7 and above.
 
 ## RocksDB
 
@@ -80,10 +37,9 @@ The main advantages of RocksDB are:
 ### Caveats
 
 RocksDB allows concurrent writes. However, when touching the same document a
-write conflict is raised. This cannot happen with the MMFiles engine, therefore
-applications that switch to RocksDB need to be prepared that such exception can
-arise. It is possible to exclusively lock collections when executing AQL. This
-will avoid write conflicts but also inhibits concurrent writes.
+write conflict is raised. It is possible to exclusively lock collections when
+executing AQL. This will avoid write conflicts but also inhibits concurrent
+writes.
 
 Currently, another restriction is due to the transaction handling in
 RocksDB. Transactions are limited in total size. If you have a statement
```
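Applications are expected to handle the write conflicts the caveat describes, typically by retrying. A generic, standalone-runnable sketch of that control flow; in arangosh the same pattern applies with `err.errorNum === 1200` (`ERROR_ARANGO_CONFLICT`) signalling a write-write conflict, while the stub collection below only exists to make the example self-contained:

```javascript
// Retry a document update until it succeeds or attempts run out.
function updateWithRetry(col, key, patch, attempts) {
  for (let i = 0; i < attempts; i++) {
    try {
      return col.update(key, patch);
    } catch (err) {
      if (err.errorNum !== 1200) throw err; // not a conflict: re-throw
      // write-write conflict with a concurrent transaction: try again
    }
  }
  throw new Error("still conflicting after " + attempts + " attempts");
}

// Stub collection that conflicts twice before the write goes through,
// illustrating the retry loop without a running server.
let failures = 2;
const fakeCollection = {
  update(key, patch) {
    if (failures-- > 0) {
      const e = new Error("conflict");
      e.errorNum = 1200; // mimics ERROR_ARANGO_CONFLICT
      throw e;
    }
    return Object.assign({ _key: key }, patch);
  }
};

// Succeeds on the third attempt after two simulated conflicts
console.log(updateWithRetry(fakeCollection, "doc1", { status: "done" }, 5));
```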

3.8/architecture-write-ahead-log.md

Lines changed: 3 additions & 46 deletions
```diff
@@ -1,60 +1,17 @@
 ---
 layout: default
-description: Both storage engines use a form of write ahead logging (WAL)
+description: Write ahead logging is used for data recovery after a server crash and for replication
 ---
 Write-ahead log
 ===============
 
-Both storage engines use a form of write ahead logging (WAL).
-
-Starting with version 2.2 ArangoDB stores all data-modification operation in
-its write-ahead log. The write-ahead log is sequence of append-only files containing
+ArangoDB's RocksDB storage engine stores all data-modification operation in a
+write-ahead log (WAL). The WAL is sequence of append-only files containing
 all the write operations that were executed on the server.
-
 It is used to run data recovery after a server crash, and can also be used in
 a replication setup when Followers need to replay the same sequence of operations as
 on the Leader.
 
-MMFiles WAL Details
--------------------
-
-By default, each write-ahead logfile is 32 MiB in size. This size is configurable via the
-option *--wal.logfile-size*.
-When a write-ahead logfile is full, it is set to read-only, and following operations will
-be written into the next write-ahead logfile. By default, ArangoDB will reserve some
-spare logfiles in the background so switching logfiles should be fast. How many reserve
-logfiles ArangoDB will try to keep available in the background can be controlled by the
-configuration option *--wal.reserve-logfiles*.
-
-Data contained in full write-ahead files will eventually be transferred into the journals or
-datafiles of collections. Only the "surviving" documents will be copied over. When all
-remaining operations from a write-ahead logfile have been copied over into the journals
-or datafiles of the collections, the write-ahead logfile can safely be removed if it is
-not used for replication.
-
-Long-running transactions prevent write-ahead logfiles from being fully garbage-collected
-because it is unclear whether a transaction will commit or abort. Long-running transactions
-can thus block the garbage-collection progress and should therefore be avoided at
-all costs.
-
-On a system that acts as a replication leader, it is useful to keep a few of the
-already collected write-ahead logfiles so replication Followers still can fetch data from
-them if required. How many collected logfiles will be kept before they get deleted is
-configurable via the option *--wal.historic-logfiles*.
-
-For all write-ahead log configuration options, please refer to the page
-[Write-ahead log options](programs-arangod-wal.html).
-
-
-RocksDB WAL Details
--------------------
-
-The options mentioned above only apply for MMFiles. The WAL in the RocksDB
-storage engine works slightly differently.
-
-_Note:_ In rocksdb the WAL options are all prefixed with `--rocksdb.*`.
-The `--wal.*` options do have no effect.
-
 The individual RocksDB WAL files are per default about 64 MiB big.
 The size will always be proportionally sized to the value specified via
 `--rocksdb.write-buffer-size`. The value specifies the amount of data to build
```

3.8/backup-restore.md

Lines changed: 0 additions & 6 deletions
```diff
@@ -263,12 +263,6 @@ not be suited for.
 This means that one cannot restore a 3-node ArangoDB cluster's hot backup to
 any other deployment than another 3-node ArangoDB cluster of the same version.
 
-- **RocksDB Storage Engine Only**
-
-  Hot backups rely on creation of hard links on actual RocksDB data files and
-  directories. The same or according file system level mechanisms are not
-  available to MMFiles deployments.
-
 - **Storage Space**
 
   Without the creation of hot backups, RocksDB keeps compacting the file system
```

0 commit comments
