Bug fix 3.5/min replication factor (#9524) · arangodb/arangodb@d5840c1 · GitHub

Commit d5840c1

mchacki authored and KVS85 committed
Bug fix 3.5/min replication factor (#9524)
* Cherry-pick minReplicationFactor
* Bug fix/failover with min replication factor (#9486)
* Improve collection time of IResearchQueryOptimizationTest
* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and no one cares for it
* Added some assertions on minReplicationFactor
* Transaction API will now reject writes as soon as the minimal replication factor is NOT fulfilled
* added minReplicationFactor to the user interface, preparation for the collection api changes
* added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo
* added minReplicationFactor usage to tests
* TODO TEMPORARY COMMIT FOR TESTING PLEASE REVERT ME
* minReplicationFactor now able to change via collection properties route
* fixed wrong assert
* added minReplicationFactor to the graph management ui
* added minReplicationFactor to the gharial api
* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.
* adjusted description of minReplicationFactor
* FollowerInfo refactoring
* added gharial api graph creation tests with minimal replication factor
* proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests
* added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor
* Debug logging
* MORE Debug logging
* Included replication fast lane
* Use correct minReplicationFactor
* modified debug logging
* Fixed compile issues
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* Revert "MORE Debug logging". This reverts commit dab5af2.
* Revert "MORE Debug logging". This reverts commit 6134b66.
* Revert "MORE Debug logging". This reverts commit 80160bd.
* Revert "MORE Debug logging". This reverts commit 06aabcd.
* Removed debug output
* Added replication fast lane. Also refactored the commands as I cannot take it any more...
* Put some requests of RocksDBReplication onto CATCHUP lane.
* Put some requests of MMFilesReplication onto CATCHUP lane.
* Adjusted Fast and MED lane usage in the supervised scheduler
* Added changelog entry
* Added new features entry
* A new leader will now keep old followers in case of failover
* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Fixed JSLINT
* Unified lane handling of replication handlers
* Sorry, forgotten in last commit
* replaced strings with static strings
* more use of static strings
* optimized min repl description in the ui
* decr initial loop variable
* clean up of the createWithId test
* more use of static strings
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Added some comments on condition, renamed variable as suggested in review
* Added check for min replicationFactor to be non-zero
* Added assertion
* Added function to modify min and max replication factor in one go
* added missing semicolon
* rm log devel
* Added a second piece of information to FollowerInfo that can keep track of followers that have been in sync before a failover has taken place
* Maintenance now reports the previous version to FollowerInfo instead of lying by itself. The FollowerInfo now gets a failover-safe mode to report in-sync followers
* check replFactor against nr dbservers
* Add lie reporting in CURRENT
* Reverted most of my recent commits about the failover situation. The intended plan simply does not work out
* move replication checks from logical collection to rest collection handler
* added more replication tests
* Include assert only if we are not in gtest
* jslint
* set min repl factor to zero if satellite collection
* check replication attributes in v8 collection
* Initial commit, old plan, does not yet work
* fixed ires tests
* Included FailoverCandidates key. Not fully implemented
* fixed wrong assert
* unified in-sync follower reporting
* fixed compiler errors
* Cleanup locking, and fixed potential deadlocks
* Comments about locking order in FollowerInfo.
* properly check uint
* Keep old leader as potential failover candidate
* Transaction methods now use FollowerInfo to check if the leader can write; this might have the side effect that "failoverCandidates" are updated
* Let agency check failoverCandidates if possible
* Initialize member variables
* Use unified follower reporting in DBServerAgencySync
* Removed obsolete variable, collecting it somewhere else
* repl factor attr check
* Reimplemented previous followers, second attempt now. PhaseOne and PhaseTwo can now synchronize on current.
* Fixed assertion, forgot an off-by-one
* adjusted test to be more precise now
* Fixed failover candidates list
* Disable write on dropping too many followers
* Allow to run updateFailoverCandidates multiple times with same leader.
* Final fixes, resilience tests now green, crossing fingers for jenkins
* Fixed race on atomics comparison
* Fixed invalid number type
* added nullptr handling
* added nullptr handling
* Removed invalid assert
* Make takeover of leadership an atomic operation
* Update tests/js/common/shell/shell-cluster-collection.js (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Review fixes
* Fixed creation code to use takeoverLeadership
* Update arangod/Cluster/FollowerInfo.h (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Applied review fixes
* There is no timeout
* Moved AQL + Pregel to the INTERNAL_AQL lane, which is medium priority, to avoid deadlocks with synchronous replication
* More review fixes
* Use difference if you want to compare two vectors...
* Use std::string ...
* Now check if we are in recovery mode
* Added documentation for minReplicationFactor
* Added readme update as well in documentation
* Removed merge conflict leftovers 0o, I should not trust the IDE
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/Architecture/Replication/README.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update CHANGELOG (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/DataModeling/Collections/DatabaseMethods.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/ReleaseNotes/NewFeatures35.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/DocuBlocks/Rest/Collections/1_structs.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/DocuBlocks/Rest/Graph/1_structs.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Apply suggestions from code review (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Addressed review requests, thanks for finding!
* Removed unnecessary const
* Apply suggestions from code review (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Moved initialization of variable further down
* Apply lock before notify_all()
* Remove documentation except DocuBlocks, covered by PR in docs repo
* Remove accidental indent
1 parent 313d60e commit d5840c1

File tree

63 files changed: +3850 −3375 lines changed


CHANGELOG

Lines changed: 7 additions & 0 deletions
@@ -1,6 +1,13 @@
 v3.5.0-rc.5 (2019-XX-XX)
 ------------------------
 
+* MinReplicationFactor:
+  Collections can now be created with a minimal replication factor (minReplicationFactor), which defaults to 1.
+  If minReplicationFactor > 1, a collection goes into "read-only" mode as soon as it has fewer than minReplicationFactor
+  in-sync followers. With this mechanism users can keep collections from diverging too much in failure scenarios.
+  minReplicationFactor can take values in the range 1 <= minReplicationFactor <= replicationFactor.
+  With minReplicationFactor == 1, ArangoDB behaves the same way as in any previous version.
+
 * Fixed a query abort error with smart joins if both collections were restricted to a
   single shard using the "restrict-to-single-shard" optimizer rule.

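To make the new CHANGELOG entry concrete, here is a minimal arangosh sketch against a cluster coordinator; the collection name and the exact counts are hypothetical, only the replicationFactor/minReplicationFactor options themselves come from this commit:

// arangosh, connected to a cluster coordinator.
// "demo" and the shard/replica counts are made up for illustration.
var demo = db._create("demo", {
  numberOfShards: 3,
  replicationFactor: 3,     // keep 3 copies of each shard
  minReplicationFactor: 2   // refuse writes with fewer than 2 in-sync copies
});

demo.insert({ _key: "ok" }); // succeeds while at least 2 copies are in sync

// If failures leave a shard with fewer than 2 in-sync followers, that shard
// becomes "read-only": reads keep working, writes are rejected until enough
// followers have caught up again.
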
Documentation/DocuBlocks/Rest/Collections/1_structs.md

Lines changed: 4 additions & 0 deletions
@@ -43,6 +43,10 @@ determine the target shard for documents; *Cluster specific attribute.*
 @RESTSTRUCT{replicationFactor,collection_info,integer,optional,}
 contains how many copies of each shard are kept on different DBServers; *Cluster specific attribute.*
 
+@RESTSTRUCT{minReplicationFactor,collection_info,integer,optional,}
+contains the minimum number of in-sync copies of each shard that must exist on different DBServers.
+A shard will refuse writes if fewer than this many copies are in sync. *Cluster specific attribute.*
+
 @RESTSTRUCT{shardingStrategy,collection_info,string,optional,}
 the sharding strategy selected for the collection; *Cluster specific attribute.*
 One of 'hash' or 'enterprise-hash-smart-edge'

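For orientation, a sketch of how these collection_info attributes surface in arangosh; the collection "demo" is hypothetical, and a real response carries more attributes than shown:

var props = db.demo.properties();
print(props.replicationFactor);    // e.g. 3 -- copies kept per shard
print(props.minReplicationFactor); // e.g. 2 -- minimum in-sync copies before writes are refused
print(props.shardingStrategy);     // e.g. "hash"
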
Documentation/DocuBlocks/Rest/Graph/1_structs.md

Lines changed: 5 additions & 0 deletions
@@ -25,6 +25,11 @@ concurrent modifications to this graph.
 @RESTSTRUCT{replicationFactor,graph_representation,integer,required,}
 The replication factor used for every new collection in the graph.
 
+@RESTSTRUCT{minReplicationFactor,graph_representation,integer,optional,}
+The minimal replication factor used for every new collection in the graph.
+If one shard has fewer than minReplicationFactor copies in sync, writes to
+that shard are refused, while all other shards stay writable.
+
 @RESTSTRUCT{isSmart,graph_representation,boolean,required,}
 Flag if the graph is a SmartGraph (Enterprise Edition only) or not.

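As a rough sketch of where this graph_representation field shows up, using arangosh's HTTP helper; the graph name is hypothetical and the response is trimmed to the fields discussed here:

var res = arango.GET("/_api/gharial/social");
print(res.graph.replicationFactor);    // replication factor for the graph's collections
print(res.graph.minReplicationFactor); // minimum in-sync copies before a shard refuses writes
print(res.graph.isSmart);              // SmartGraph flag (Enterprise Edition)
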
Documentation/DocuBlocks/Rest/Graph/general_graph_create_http_examples.md

Lines changed: 5 additions & 0 deletions
@@ -42,6 +42,11 @@ Cannot be modified later.
 @RESTSTRUCT{replicationFactor,post_api_gharial_create_opts,integer,required,}
 The replication factor used when initially creating collections for this graph.
 
+@RESTSTRUCT{minReplicationFactor,post_api_gharial_create_opts,integer,optional,}
+The minimal replication factor used for every new collection in the graph.
+If one shard has fewer than minReplicationFactor copies in sync, writes to
+that shard are refused, while all other shards stay writable.
+
 @RESTRETURNCODES
 
 @RESTRETURNCODE{201}

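A hedged sketch of a graph creation request passing these options; the graph and collection names are invented, and the options object mirrors post_api_gharial_create_opts above:

var res = arango.POST("/_api/gharial", {
  name: "social",
  edgeDefinitions: [
    { collection: "relation", from: ["female", "male"], to: ["female", "male"] }
  ],
  options: {
    numberOfShards: 3,
    replicationFactor: 3,
    minReplicationFactor: 2 // every graph collection refuses writes below 2 in-sync copies
  }
});
print(res.code); // expect 201 (or 202 without waitForSync) on success
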
Documentation/DocuBlocks/collectionProperties.md

Lines changed: 9 additions & 1 deletion
@@ -52,7 +52,11 @@ In a cluster setup, the result will also contain the following attributes:
   determine the target shard for documents.
 
 * *replicationFactor*: determines how many copies of each shard are kept
-  on different DBServers.
+  on different DBServers. Has to be in the range of 1-10 *(Cluster only)*
+
+* *minReplicationFactor*: determines the minimum number of shard copies kept on
+  different DBServers; a shard will refuse writes if fewer than this number
+  of copies are in sync. Has to be in the range of 1-replicationFactor *(Cluster only)*
 
 * *shardingStrategy*: the sharding strategy selected for the collection.
   This attribute will only be populated in cluster mode and is not populated
@@ -77,6 +81,10 @@ one or more of the following attribute(s):
   different DBServers, valid values are integer numbers
   in the range of 1-10 *(Cluster only)*
 
+* *minReplicationFactor*: changes the minimum number of shard copies that must be in sync on
+  different DBServers; a shard will refuse writes if fewer than this number
+  of copies are in sync. Has to be in the range of 1-replicationFactor *(Cluster only)*
+
 **Note**: some other collection properties, such as *type*, *isVolatile*,
 *keyOptions*, *numberOfShards* or *shardingStrategy* cannot be changed once
 the collection is created.

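Since minReplicationFactor is listed among the mutable attributes, a minimal sketch of changing it after creation; the collection name is hypothetical, and the value must stay within 1..replicationFactor:

db.demo.properties({ minReplicationFactor: 2 }); // raise to 2 in-sync copies per shard

// Values outside 1..replicationFactor are rejected by the server:
try {
  db.demo.properties({ minReplicationFactor: 10 });
} catch (err) {
  print(err); // validation error
}
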
arangod/Agency/Job.cpp

Lines changed: 55 additions & 35 deletions
@@ -95,21 +95,22 @@ bool Job::finish(std::string const& server, std::string const& shard,
   try {
     jobType = pending.slice()[0].get("type").copyString();
   } catch (std::exception const&) {
-    LOG_TOPIC("76352", WARN, Logger::AGENCY) << "Failed to obtain type of job " << _jobId;
+    LOG_TOPIC("76352", WARN, Logger::AGENCY)
+        << "Failed to obtain type of job " << _jobId;
   }
 
   // Additional payload, which is to be executed in the finish transaction
   Slice operations = Slice::emptyObjectSlice();
-  Slice preconditions = Slice::emptyObjectSlice();
+  Slice preconditions = Slice::emptyObjectSlice();
 
   if (payload != nullptr) {
     Slice slice = payload->slice();
     TRI_ASSERT(slice.isObject() || slice.isArray());
-    if (slice.isObject()) { // opers only
+    if (slice.isObject()) {  // opers only
       operations = slice;
       TRI_ASSERT(operations.isObject());
     } else {
-      TRI_ASSERT(slice.length() < 3); // opers + precs only
+      TRI_ASSERT(slice.length() < 3);  // opers + precs only
       if (slice.length() > 0) {
         operations = slice[0];
         TRI_ASSERT(operations.isObject());
@@ -125,7 +126,7 @@ bool Job::finish(std::string const& server, std::string const& shard,
   {
     VPackArrayBuilder guard(&finished);
 
-    { // operations --
+    {  // operations --
       VPackObjectBuilder operguard(&finished);
 
       addPutJobIntoSomewhere(finished, success ? "Finished" : "Failed",
@@ -148,15 +149,14 @@ bool Job::finish(std::string const& server, std::string const& shard,
         addReleaseShard(finished, shard);
       }
 
-    } // -- operations
+    }  // -- operations
 
-    if (preconditions.isObject() && preconditions.length() > 0) { // preconditions --
+    if (preconditions.isObject() && preconditions.length() > 0) {  // preconditions --
       VPackObjectBuilder precguard(&finished);
       for (auto const& prec : VPackObjectIterator(preconditions)) {
         finished.add(prec.key.copyString(), prec.value);
       }
-    } // -- preconditions
-
+    }  // -- preconditions
   }
 
   write_ret_t res = singleWriteTransaction(_agent, finished, false);
@@ -168,16 +168,16 @@ bool Job::finish(std::string const& server, std::string const& shard,
     }
   } catch (std::exception const& e) {
     LOG_TOPIC("1fead", WARN, Logger::AGENCY)
-       << "Caught exception in finish, message: " << e.what();
+        << "Caught exception in finish, message: " << e.what();
   } catch (...) {
     LOG_TOPIC("7762f", WARN, Logger::AGENCY)
-       << "Caught unspecified exception in finish.";
+        << "Caught unspecified exception in finish.";
   }
   return false;
 }
 
 std::string Job::randomIdleAvailableServer(Node const& snap,
-                                          std::vector<std::string> const& exclude) {
+                                           std::vector<std::string> const& exclude) {
   std::vector<std::string> as = availableServers(snap);
   std::string ret;
 
@@ -189,11 +189,11 @@ std::string Job::randomIdleAvailableServer(Node const& snap,
   for (auto const& srv : snap.hasAsChildren(healthPrefix).first) {
     // ignore excluded servers
     if (std::find(std::begin(exclude), std::end(exclude), srv.first) != std::end(exclude)) {
-      continue ;
+      continue;
     }
     // ignore servers not in availableServers above:
     if (std::find(std::begin(as), std::end(as), srv.first) == std::end(as)) {
-      continue ;
+      continue;
     }
 
     std::string const& status = (*srv.second).hasAsString("Status").first;
@@ -242,7 +242,7 @@ size_t Job::countGoodOrBadServersInList(Node const& snap, VPackSlice const& serv
   auto const& health = snap.hasAsChildren(healthPrefix);
   // Do we have a Health substructure?
   if (health.second) {
-    Node::Children const& healthData = health.first; // List of servers in Health
+    Node::Children const& healthData = health.first;  // List of servers in Health
     for (VPackSlice const serverName : VPackArrayIterator(serverList)) {
       if (serverName.isString()) {
         // serverName not a string? Then don't count
@@ -269,13 +269,14 @@ size_t Job::countGoodOrBadServersInList(Node const& snap, VPackSlice const& serv
 }
 
 // The following counts in a given server list how many of the servers are
-// in Status "GOOD" or "BAD".
-size_t Job::countGoodOrBadServersInList(Node const& snap, std::vector<std::string> const& serverList) {
+// in Status "GOOD" or "BAD".
+size_t Job::countGoodOrBadServersInList(Node const& snap,
+                                        std::vector<std::string> const& serverList) {
   size_t count = 0;
   auto const& health = snap.hasAsChildren(healthPrefix);
   // Do we have a Health substructure?
   if (health.second) {
-    Node::Children const& healthData = health.first; // List of servers in Health
+    Node::Children const& healthData = health.first;  // List of servers in Health
     for (auto& serverStr : serverList) {
       // Now look up this server:
       auto it = healthData.find(serverStr);
@@ -294,7 +295,8 @@ size_t Job::countGoodOrBadServersInList(Node const& snap, std::vector<std::strin
 }
 
 /// @brief Check if a server is cleaned or to be cleaned out:
-bool Job::isInServerList(Node const& snap, std::string const& prefix, std::string const& server, bool isArray) {
+bool Job::isInServerList(Node const& snap, std::string const& prefix,
+                         std::string const& server, bool isArray) {
   VPackSlice slice;
   bool found = false;
   if (isArray) {
@@ -309,7 +311,7 @@ bool Job::isInServerList(Node const& snap, std::string const& prefix, std::strin
       }
     }
   } else { // an object
-    auto const& children = snap.hasAsChildren(prefix);
+    auto const& children = snap.hasAsChildren(prefix);
     if (children.second) {
       for (auto const& srv : children.first) {
         if (srv.first == server) {
@@ -418,16 +420,15 @@ std::vector<Job::shard_t> Job::clones(Node const& snapshot, std::string const& d
 
   for (const auto& colptr : snapshot.hasAsChildren(databasePath).first) { // collections
 
-    auto const &col = *colptr.second;
-    auto const &otherCollection = colptr.first;
+    auto const& col = *colptr.second;
+    auto const& otherCollection = colptr.first;
 
     if (otherCollection != collection && col.has("distributeShardsLike") && // use .has() form to prevent logging of missing
         col.hasAsSlice("distributeShardsLike").first.copyString() == collection) {
       auto const& theirshards = sortedShardList(col.hasAsNode("shards").first);
       if (theirshards.size() > 0) { // do not care about virtual collections
         if (theirshards.size() == myshards.size()) {
-          ret.emplace_back(otherCollection,
-                           theirshards[steps]);
+          ret.emplace_back(otherCollection, theirshards[steps]);
         } else {
           LOG_TOPIC("3092e", ERR, Logger::SUPERVISION)
               << "Shard distribution of clone(" << otherCollection
@@ -452,25 +453,44 @@ std::string Job::findNonblockedCommonHealthyInSyncFollower( // Which is in "GOO
 
   std::unordered_map<std::string, size_t> currentServers;
   for (const auto& clone : cs) {
-    auto currentShardPath = curColPrefix + db + "/" + clone.collection + "/" +
-                            clone.shard + "/servers";
-    auto plannedShardPath =
-        planColPrefix + db + "/" + clone.collection + "/shards/" + clone.shard;
-    size_t i = 0;
+    auto sharedPath = db + "/" + clone.collection + "/";
+    auto currentShardPath = curColPrefix + sharedPath + clone.shard + "/servers";
+    auto currentFailoverCandidatesPath =
+        curColPrefix + sharedPath + clone.shard + "/servers";
+    auto plannedShardPath = planColPrefix + sharedPath + "shards/" + clone.shard;
 
     // start up race condition ... current might not have everything in plan
     if (!snap.has(currentShardPath) || !snap.has(plannedShardPath)) {
       --nclones;
       continue;
     } // if
 
-    for (const auto& server :
-         VPackArrayIterator(snap.hasAsArray(currentShardPath).first)) {
-      auto id = server.copyString();
+    bool isArray = false;
+    VPackSlice serverList;
+    // If we do have failover candidates, we should use them
+    std::tie(serverList, isArray) = snap.hasAsArray(currentFailoverCandidatesPath);
+    if (!isArray) {
+      // We have old DBServers that do not report failover candidates,
+      // Need to rely on current
+      std::tie(serverList, isArray) = snap.hasAsArray(currentShardPath);
+      TRI_ASSERT(isArray);
+      if (!isArray) {
+        THROW_ARANGO_EXCEPTION_MESSAGE(
+            TRI_ERROR_SUPERVISION_GENERAL_FAILURE,
+            "Could not find common insync server for: " + currentShardPath +
+                ", value is not an array.");
+      }
+    }
+    // Guaranteed by if above
+    TRI_ASSERT(serverList.isArray());
+
+    size_t i = 0;
+    for (const auto& server : VPackArrayIterator(serverList)) {
       if (i++ == 0) {
         // Skip leader
        continue;
       }
+      auto id = server.copyString();
 
       if (!good[id]) {
         // Skip unhealthy servers
@@ -550,9 +570,9 @@ bool Job::abortable(Node const& snapshot, std::string const& jobId) {
   return false;
 }
 
-void Job::doForAllShards(Node const& snapshot, std::string& database,
-                         std::vector<shard_t>& shards,
-                         std::function<void(Slice plan, Slice current, std::string& planPath, std::string& curPath)> worker) {
+void Job::doForAllShards(
+    Node const& snapshot, std::string& database, std::vector<shard_t>& shards,
+    std::function<void(Slice plan, Slice current, std::string& planPath, std::string& curPath)> worker) {
   for (auto const& collShard : shards) {
     std::string shard = collShard.shard;
     std::string collection = collShard.collection;

arangod/Aql/RestAqlHandler.h

Lines changed: 1 addition & 3 deletions
@@ -49,9 +49,7 @@ class RestAqlHandler : public RestVocbaseBaseHandler {
 
  public:
   char const* name() const override final { return "RestAqlHandler"; }
-  RequestLane lane() const override final {
-    return RequestLane::CLUSTER_INTERNAL;
-  }
+  RequestLane lane() const override final { return RequestLane::CLUSTER_AQL; }
   RestStatus execute() override;
   RestStatus continueExecute() override;

arangod/Cluster/ClusterCollectionCreationInfo.cpp

Lines changed: 16 additions & 15 deletions
@@ -28,26 +28,27 @@
 #include <velocypack/velocypack-aliases.h>
 
 arangodb::ClusterCollectionCreationInfo::ClusterCollectionCreationInfo(
-    std::string const cID, uint64_t shards, uint64_t repFac, bool waitForRep,
-    velocypack::Slice const& slice)
+    std::string const cID, uint64_t shards, uint64_t repFac, uint64_t minRepFac,
+    bool waitForRep, velocypack::Slice const& slice)
     : collectionID(std::move(cID)),
       numberOfShards(shards),
       replicationFactor(repFac),
+      minReplicationFactor(minRepFac),
       waitForReplication(waitForRep),
       json(slice),
       name(arangodb::basics::VelocyPackHelper::getStringValue(json, arangodb::StaticStrings::DataSourceName,
                                                               StaticStrings::Empty)),
       state(State::INIT) {
   if (numberOfShards == 0) {
-    // Nothing to do this cannot fail
-    // Deactivated this assertion, our testing mock for coordinator side
-    // tries to get away without other servers by initially adding only 0
-    // shard collections (non-smart). We do not want to loose these test.
-    // So we will loose this assertion for now.
-    /*
-    TRI_ASSERT(arangodb::basics::VelocyPackHelper::getBooleanValue(
-        json, arangodb::StaticStrings::IsSmart, false));
-    */
+    // Nothing to do this cannot fail
+    // Deactivated this assertion, our testing mock for coordinator side
+    // tries to get away without other servers by initially adding only 0
+    // shard collections (non-smart). We do not want to loose these test.
+    // So we will loose this assertion for now.
+#ifndef ARANGODB_USE_GOOGLE_TESTS
+    TRI_ASSERT(arangodb::basics::VelocyPackHelper::getBooleanValue(json, arangodb::StaticStrings::IsSmart,
+                                                                   false));
+#endif
     state = State::DONE;
   }
   TRI_ASSERT(!name.empty());
@@ -68,10 +69,10 @@ VPackSlice arangodb::ClusterCollectionCreationInfo::isBuildingSlice() const {
 }
 
 bool arangodb::ClusterCollectionCreationInfo::needsBuildingFlag() const {
-  // Deactivated the smart graph check, our testing mock for coordinator side
-  // tries to get away without other servers by initially adding only 0
-  // shard collections (non-smart). We do not want to loose these test.
-  // So we will loose the more precise check for now.
+  // Deactivated the smart graph check, our testing mock for coordinator side
+  // tries to get away without other servers by initially adding only 0
+  // shard collections (non-smart). We do not want to loose these test.
+  // So we will loose the more precise check for now.
   /*
   return numberOfShards > 0 ||
          arangodb::basics::VelocyPackHelper::getBooleanValue(json, StaticStrings::IsSmart, false);

arangod/Cluster/ClusterCollectionCreationInfo.h

Lines changed: 4 additions & 2 deletions
@@ -32,20 +32,22 @@ namespace arangodb {
 
 struct ClusterCollectionCreationInfo {
   enum State { INIT, FAILED, DONE };
-  ClusterCollectionCreationInfo(std::string const cID, uint64_t shards, uint64_t repFac,
+  ClusterCollectionCreationInfo(std::string const cID, uint64_t shards,
+                                uint64_t repFac, uint64_t minRepFac,
                                 bool waitForRep, velocypack::Slice const& slice);
 
   std::string const collectionID;
   uint64_t numberOfShards;
   uint64_t replicationFactor;
+  uint64_t minReplicationFactor;
   bool waitForReplication;
   velocypack::Slice const json;
   std::string name;
   State state;
 
  public:
   velocypack::Slice isBuildingSlice() const;
-
+
  private:
   bool needsBuildingFlag() const;
