Forward Port of changes in 3.5 review (#9544) · arangodb/arangodb@987ad41 · GitHub

Commit 987ad41

Forward Port of changes in 3.5 review (#9544)
* Bug fix 3.5/min replication factor (#9524)
* Cherry-pick minReplicationFactor
* Bug fix/failover with min replication factor (#9486)
* Improve collection time of IResearchQueryOptimizationTest
* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and no one cares for it
* Added some assertions on minReplicationFactor
* Transaction API will now reject writes as soon as the minimal replication factor is NOT fulfilled
* Added minReplicationFactor to the user interface, preparation for the collection API changes
* Added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo
* Added minReplicationFactor usage to tests
* TODO TEMPORARY COMMIT FOR TESTING, PLEASE REVERT ME
* minReplicationFactor can now be changed via the collection properties route
* Fixed a wrong assert
* Added minReplicationFactor to the graph management UI
* Added minReplicationFactor to the gharial API
* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.
* Adjusted description of minReplicationFactor
* FollowerInfo refactoring
* Added gharial API graph creation tests with minimal replication factor
* Proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests
* Added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor
* Debug logging
* MORE Debug logging
* Included replication fast lane
* Use correct minReplicationFactor
* Modified debug logging
* Fixed compile issues
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* Revert "MORE Debug logging" (reverts commit dab5af2)
* Revert "MORE Debug logging" (reverts commit 6134b66)
* Revert "MORE Debug logging" (reverts commit 80160bd)
* Revert "MORE Debug logging" (reverts commit 06aabcd)
* Removed debug output
* Added replication fast lane. Also refactored the commands, as I cannot take it any more...
* Put some requests of RocksDBReplication onto the CATCHUP lane.
* Put some requests of MMFilesReplication onto the CATCHUP lane.
* Adjusted Fast and MED lane usage in the supervised scheduler
* Added changelog entry
* Added new features entry
* A new leader will now keep old followers in case of failover
* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Fixed JSLINT
* Unified lane handling of replication handlers
* Sorry, forgotten in last commit
* Replaced strings with static strings
* More use of static strings
* Optimized minReplicationFactor description in the UI
* decr initial loop variable
* Cleanup of the createWithId test
* More use of static strings
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Added some comments on condition, renamed variable as suggested in review
* Added check for minReplicationFactor to be non-zero
* Added assertion
* Added function to modify min and max replication factor in one go
* Added missing semicolon
* rm log devel
* Added a second piece of information to FollowerInfo that keeps track of followers that have been in sync before a failover has taken place
* Maintenance now reports the previous version to FollowerInfo instead of lying by itself. The FollowerInfo now gets a failover-safe mode to report in-sync followers
* check replFactor against nr dbservers
* Add lie reporting in CURRENT
* Reverted most of my recent commits about the failover situation. The intended plan simply does not work out
* Move replication checks from logical collection to rest collection handler
* Added more replication tests
* Include assert only if we are not in gtest
* jslint
* Set min replication factor to zero if satellite collection
* Check replication attributes in v8 collection
* Initial commit, old plan, does not yet work
* Fixed ires tests
* Included FailoverCandidates key. Not fully implemented
* Fixed wrong assert
* Unified in-sync follower reporting
* Fixed compiler errors
* Cleaned up locking and fixed potential deadlocks
* Comments about locking order in FollowerInfo
* Properly check uint
* Keep old leader as potential failover candidate
* Transaction methods now use FollowerInfo to check if the leader can write; this might have the side effect that 'failoverCandidates' are updated
* Let agency check failoverCandidates if possible
* Initialize member variables
* Use unified follower reporting in DBServerAgencySync
* Removed obsolete variable, collecting it somewhere else
* repl factor attr check
* Reimplemented previous followers, second attempt now. PhaseOne and PhaseTwo can now synchronize on Current.
* Fixed assertion, forgot an off-by-one
* Adjusted test to be more precise now
* Fixed failover candidates list
* Disable write on dropping too many followers
* Allow running updateFailoverCandidates multiple times with the same leader
* Final fixes, resilience tests now green, crossing fingers for Jenkins
* Fixed race on atomics comparison
* Fixed invalid number type
* Added nullptr handling
* Added nullptr handling
* Removed invalid assert
* Make takeover of leadership an atomic operation
* Update tests/js/common/shell/shell-cluster-collection.js (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Review fixes
* Fixed creation code to use takeoverLeadership
* Update arangod/Cluster/FollowerInfo.h (Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>)
* Applied review fixes
* There is no timeout
* Moved AQL + Pregel to the INTERNAL_AQL lane, which is medium priority, to avoid deadlocks with synchronous replication
* More review fixes
* Use difference if you want to compare two vectors...
* Use std::string ...
* Now check if we are in recovery mode
* Added documentation for minReplicationFactor
* Added README update as well in documentation
* Removed merge conflict leftovers, I should not trust the IDE
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/Architecture/Replication/README.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update CHANGELOG (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/DataModeling/Collections/DatabaseMethods.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/Books/Manual/ReleaseNotes/NewFeatures35.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/DocuBlocks/Rest/Collections/1_structs.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Update Documentation/DocuBlocks/Rest/Graph/1_structs.md (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Apply suggestions from code review (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Adapted review requests, thanks for finding!
* Removed unnecessary const
* Apply suggestions from code review (Co-Authored-By: Jan <jsteemann@users.noreply.github.com>)
* Moved initialization of variable further down
* Apply lock before notify_all()
* Remove documentation except DocuBlocks, covered by PR in docs repo
* Remove accidental indent
* Removed leftover merge conflict in documentation block
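For context, the user-visible effect of the transaction-level check described above ("reject writes as soon as the minimal replication factor is NOT fulfilled") can be sketched in arangosh. This is a hedged illustration only; the collection name and the exact error a rejected write produces are assumptions, not taken from this commit.

    // arangosh sketch: a collection that keeps 3 copies of each shard and
    // requires at least 2 of them to be in sync before accepting writes.
    db._create("orders", { numberOfShards: 3, replicationFactor: 3, minReplicationFactor: 2 });

    // While >= 2 copies per shard are in sync, writes go through:
    db.orders.insert({ _key: "a", total: 42 });

    // If failures leave a shard with fewer than 2 in-sync copies, the shard
    // becomes read-only and writes are rejected until followers catch up:
    try {
      db.orders.insert({ _key: "b", total: 7 });
    } catch (err) {
      print(err.errorMessage);  // exact error wording is an assumption
    }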
1 parent 1e600c8 commit 987ad41

File tree

17 files changed (+172 -157 lines)


CHANGELOG

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ devel
 * TOKENS function updated to deal with primitive types and arrays

 * MinReplicationFactor:
-  Collections can now be created with a minimal replication factor (minReplicationFactor) default 1.
+  Collections can now be created with a minimal replication factor (minReplicationFactor), which defaults to 1.
   If minReplicationFactor > 1 a collection will go into "read-only" mode as soon as it has less then minReplicationFactor
   many insync followers. With this mechanism users can avoid to have collections diverge too much in case of failure scenarios.
   minReplicationFactor can have the values: 1 <= minReplicationFactor <= replicationFactor.
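An arangosh sketch of the range rule stated in the entry above (illustrative; the exact validation error raised for an out-of-range value is an assumption):

    // 1 <= minReplicationFactor <= replicationFactor must hold.
    db._create("c1", { replicationFactor: 3, minReplicationFactor: 2 });  // ok
    db._create("c2", { replicationFactor: 3 });  // ok, minReplicationFactor defaults to 1

    try {
      db._create("c3", { replicationFactor: 2, minReplicationFactor: 3 });  // out of range
    } catch (err) {
      print(err.errorMessage);  // rejected; error wording is an assumption
    }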

Documentation/DocuBlocks/Rest/Collections/1_structs.md

Lines changed: 2 additions & 2 deletions
@@ -44,8 +44,8 @@ determine the target shard for documents; *Cluster specific attribute.*
 contains how many copies of each shard are kept on different DBServers.; *Cluster specific attribute.*

 @RESTSTRUCT{minReplicationFactor,collection_info,integer,optional,}
-contains how many minimal copies of each shard are kept on different DBServers.
-The shards will refuse to write, if we have less then these many copies in sync.; *Cluster specific attribute.*
+contains how many minimal copies of each shard need to be in sync on different DBServers.
+The shards will refuse to write, if we have less then these many copies in sync. *Cluster specific attribute.*

 @RESTSTRUCT{shardingStrategy,collection_info,string,optional,}
 the sharding strategy selected for the collection; *Cluster specific attribute.*
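For illustration, the struct member documented above also shows up in a collection's properties; a minimal arangosh sketch (collection name and values are examples):

    var props = db._collection("orders").properties();
    print(props.replicationFactor);     // e.g. 3: copies kept per shard
    print(props.minReplicationFactor);  // e.g. 2: copies that must be in sync for writes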

Documentation/DocuBlocks/Rest/Graph/1_structs.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ The replication factor used for every new collection in the graph.

 @RESTSTRUCT{minReplicationFactor,graph_representation,integer,optional,}
 The minimal replication factor used for every new collection in the graph.
-If one shard has less then minimal replication factor copies, we cannot write
+If one shard has less than minReplicationFactor copies, we cannot write
 to this shard, but to all others.

 @RESTSTRUCT{isSmart,graph_representation,boolean,required,}

Documentation/DocuBlocks/Rest/Graph/general_graph_create_http_examples.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ The replication factor used when initially creating collections for this graph.

 @RESTSTRUCT{minReplicationFactor,post_api_gharial_create_opts,integer,optional,}
 The minimal replication factor used for every new collection in the graph.
-If one shard has less then minimal replication factor copies, we cannot write
+If one shard has less than minReplicationFactor copies, we cannot write
 to this shard, but to all others.

 @RESTRETURNCODES
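A hedged arangosh sketch of passing this option when creating a graph through the general-graph module, which backs the gharial API (the option name matches the docublock above; graph and collection names are made up):

    var graphModule = require("@arangodb/general-graph");
    // Every collection created for this graph keeps 3 copies of each shard
    // and needs at least 2 of them in sync before the shard accepts writes.
    var graph = graphModule._create(
      "social",
      [graphModule._relation("knows", "persons", "persons")],
      [],
      { numberOfShards: 3, replicationFactor: 3, minReplicationFactor: 2 });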

Documentation/DocuBlocks/collectionProperties.md

Lines changed: 3 additions & 3 deletions
@@ -55,7 +55,7 @@ In a cluster setup, the result will also contain the following attributes:
 on different DBServers. Has to be in the range of 1-10 *(Cluster only)*

 * *minReplicationFactor* : determines the number of minimal shard copies kept on
-different DBServers, a shard will refuse to write, if less then this amount
+different DBServers, a shard will refuse to write if less than this amount
 of copies are in sync. Has to be in the range of 1-replicationFactor *(Cluster only)*

 * *shardingStrategy*: the sharding strategy selected for the collection.
@@ -81,8 +81,8 @@ one or more of the following attribute(s):
 different DBServers, valid values are integer numbers
 in the range of 1-10 *(Cluster only)*

-* *minReplicationFactor* : Change the number of minimal shard copies kept on
-different DBServers, a shard will refuse to write, if less then this amount
+* *minReplicationFactor* : Change the number of minimal shard copies to be in sync on
+different DBServers, a shard will refuse to write if less than this amount
 of copies are in sync. Has to be in the range of 1-replicationFactor *(Cluster only)*

 **Note**: some other collection properties, such as *type*, *isVolatile*,
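The modification path described above, sketched in arangosh (collection name and values are illustrative):

    var coll = db._collection("orders");
    // Raise the in-sync floor; the value must stay within 1..replicationFactor.
    coll.properties({ minReplicationFactor: 2 });
    // The commit also adds a way to change both factors in one go:
    coll.properties({ replicationFactor: 4, minReplicationFactor: 2 });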

arangod/Agency/Job.cpp

Lines changed: 3 additions & 2 deletions
@@ -458,7 +458,6 @@ std::string Job::findNonblockedCommonHealthyInSyncFollower( // Which is in "GOO
   auto currentFailoverCandidatesPath =
       curColPrefix + sharedPath + clone.shard + "/servers";
   auto plannedShardPath = planColPrefix + sharedPath + "shards/" + clone.shard;
-  size_t i = 0;

   // start up race condition ... current might not have everything in plan
   if (!snap.has(currentShardPath) || !snap.has(plannedShardPath)) {
@@ -482,8 +481,10 @@ std::string Job::findNonblockedCommonHealthyInSyncFollower( // Which is in "GOO
                                      ", value is not an array.");
     }
   }
-  // Guarantieed by if above
+  // Guaranteed by if above
   TRI_ASSERT(serverList.isArray());
+
+  size_t i = 0;
   for (const auto& server : VPackArrayIterator(serverList)) {
     if (i++ == 0) {
       // Skip leader

arangod/Cluster/ClusterInfo.cpp

Lines changed: 27 additions & 25 deletions
@@ -231,7 +231,8 @@ void ClusterInfo::logAgencyDump() const {
   AgencyCommResult ag = ac.getValues("/");

   if (ag.successful()) {
-    LOG_TOPIC("fe8ce", INFO, Logger::CLUSTER) << "Agency dump:\n" << ag.slice().toJson();
+    LOG_TOPIC("fe8ce", INFO, Logger::CLUSTER) << "Agency dump:\n"
+                                              << ag.slice().toJson();
   } else {
     LOG_TOPIC("e7e30", WARN, Logger::CLUSTER) << "Could not get agency dump!";
   }
@@ -1614,22 +1615,22 @@ Result ClusterInfo::createCollectionCoordinator( // create collection
   return createCollectionsCoordinator(databaseName, infos, endTime);
 }

-/// @brief this method does an atomic check of the preconditions for the collections
-/// to be created, using the currently loaded plan. it populates the plan version
-/// used for the checks
+/// @brief this method does an atomic check of the preconditions for the
+/// collections to be created, using the currently loaded plan. it populates the
+/// plan version used for the checks
 Result ClusterInfo::checkCollectionPreconditions(std::string const& databaseName,
                                                  std::vector<ClusterCollectionCreationInfo> const& infos,
                                                  uint64_t& planVersion) {
   READ_LOCKER(readLocker, _planProt.lock);
-
+
   planVersion = _planVersion;

   for (auto const& info : infos) {
     // Check if name exists.
     if (info.name.empty() || !info.json.isObject() || !info.json.get("shards").isObject()) {
       return TRI_ERROR_BAD_PARAMETER;  // must not be empty
     }
-
+
     // Validate that the collection does not exist in the current plan
     {
       AllCollections::const_iterator it = _plannedCollections.find(databaseName);
@@ -1652,7 +1653,7 @@ Result ClusterInfo::checkCollectionPreconditions(std::string const& databaseName
       }
     }
   }
-
+
   // Validate that there is no view with this name either
   {
     // check against planned views as well
@@ -1667,7 +1668,6 @@ Result ClusterInfo::checkCollectionPreconditions(std::string const& databaseName
       }
     }
   }
-
 }

 return {};
@@ -1693,7 +1693,7 @@ Result ClusterInfo::createCollectionsCoordinator(std::string const& databaseName

   AgencyComm ac;
   std::vector<std::shared_ptr<AgencyCallback>> agencyCallbacks;
-
+
   auto cbGuard = scopeGuard([&] {
     // We have a subtle race here, that we try to cover against:
     // We register a callback in the agency.
@@ -1873,34 +1873,34 @@ Result ClusterInfo::createCollectionsCoordinator(std::string const& databaseName
                                             AgencyPrecondition::Type::EMPTY, true));
       }
     }
-
+
     // additionally ensure that no such collectionID exists yet in Plan/Collections
     precs.emplace_back(AgencyPrecondition("Plan/Collections/" + databaseName + "/" + info.collectionID,
                                           AgencyPrecondition::Type::EMPTY, true));
   }
-
+
   // We need to make sure our plan is up to date.
   LOG_TOPIC("f4b14", DEBUG, Logger::CLUSTER)
       << "createCollectionCoordinator, loading Plan from agency...";

   // load the plan, so we are up-to-date
   loadPlan();
-  uint64_t planVersion = 0; // will be populated by following function call
+  uint64_t planVersion = 0;  // will be populated by following function call
   Result res = checkCollectionPreconditions(databaseName, infos, planVersion);
   if (res.fail()) {
     return res;
   }
-
-
-  // now try to update the plan in the agency, using the current plan version as our
-  // precondition
+
+  // now try to update the plan in the agency, using the current plan version as
+  // our precondition
   {
     // create a builder with just the version number for comparison
     VPackBuilder versionBuilder;
     versionBuilder.add(VPackValue(planVersion));
-
+
     // add a precondition that checks the plan version has not yet changed
-    precs.emplace_back(AgencyPrecondition("Plan/Version", AgencyPrecondition::Type::VALUE, versionBuilder.slice()));
+    precs.emplace_back(AgencyPrecondition("Plan/Version", AgencyPrecondition::Type::VALUE,
+                                          versionBuilder.slice()));

     AgencyWriteTransaction transaction(opers, precs);

@@ -1915,15 +1915,17 @@ Result ClusterInfo::createCollectionsCoordinator(std::string const& databaseName
     if (res.httpCode() == (int)arangodb::rest::ResponseCode::PRECONDITION_FAILED) {
       // use this special error code to signal that we got a precondition failure
       // in this case the caller can try again with an updated version of the plan change
-      return {TRI_ERROR_REQUEST_CANCELED, "operation aborted due to precondition failure"};
-    }
-
+      return {TRI_ERROR_REQUEST_CANCELED,
+              "operation aborted due to precondition failure"};
+    }
+
     std::string errorMsg = "HTTP code: " + std::to_string(res.httpCode());
     errorMsg += " error message: " + res.errorMessage();
     errorMsg += " error details: " + res.errorDetails();
     errorMsg += " body: " + res.body();
     for (auto const& info : infos) {
-      events::CreateCollection(databaseName, info.name, TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN);
+      events::CreateCollection(databaseName, info.name,
+                               TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN);
     }
     return {TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN, std::move(errorMsg)};
   }
@@ -2135,14 +2137,14 @@ Result ClusterInfo::dropCollectionCoordinator( // drop collection

   if (res.successful()) {
     velocypack::Slice databaseSlice = res.slice()[0].get(std::vector<std::string>(
-        {AgencyCommManager::path(), "Plan", "Collections", dbName }));
+        {AgencyCommManager::path(), "Plan", "Collections", dbName}));

     if (!databaseSlice.isObject()) {
       // database dropped in the meantime
       events::DropCollection(dbName, collectionID, TRI_ERROR_ARANGO_DATABASE_NOT_FOUND);
       return TRI_ERROR_ARANGO_DATABASE_NOT_FOUND;
     }
-
+
     velocypack::Slice collectionSlice = databaseSlice.get(collectionID);
     if (!collectionSlice.isObject()) {
       // collection dropped in the meantime
@@ -2375,7 +2377,7 @@ Result ClusterInfo::createViewCoordinator( // create view
   if (!res.successful()) {
     if (res.httpCode() == (int)arangodb::rest::ResponseCode::PRECONDITION_FAILED) {
       // Dump agency plan:
-
+
       logAgencyDump();

       events::CreateView(databaseName, name, TRI_ERROR_CLUSTER_COULD_NOT_CREATE_VIEW_IN_PLAN);

arangod/Cluster/ClusterInfo.h

Lines changed: 7 additions & 6 deletions
@@ -423,10 +423,10 @@ class ClusterInfo final {
       bool waitForReplication, arangodb::velocypack::Slice const& json,
       double timeout  // request timeout
   );
-
-  /// @brief this method does an atomic check of the preconditions for the collections
-  /// to be created, using the currently loaded plan. it populates the plan version
-  /// used for the checks
+
+  /// @brief this method does an atomic check of the preconditions for the
+  /// collections to be created, using the currently loaded plan. it populates
+  /// the plan version used for the checks
   Result checkCollectionPreconditions(std::string const& databaseName,
                                       std::vector<ClusterCollectionCreationInfo> const& infos,
                                       uint64_t& planVersion);
@@ -437,7 +437,8 @@ class ClusterInfo final {
   /// Note that in contrast to most other methods here, this method does not
   /// get a timeout parameter, but an endTime parameter!!!
   Result createCollectionsCoordinator(std::string const& databaseName,
-                                      std::vector<ClusterCollectionCreationInfo>&, double endTime);
+                                      std::vector<ClusterCollectionCreationInfo>&,
+                                      double endTime);

   /// @brief drop collection in coordinator
   //////////////////////////////////////////////////////////////////////////////
@@ -663,7 +664,7 @@ class ClusterInfo final {
   * @return List of DB servers serving the shard
   */
   arangodb::Result getShardServers(ShardID const& shardId, std::vector<ServerID>&);
-
+
   //////////////////////////////////////////////////////////////////////////////
   /// @brief get an operation timeout
   //////////////////////////////////////////////////////////////////////////////

arangod/Cluster/ClusterMethods.cpp

Lines changed: 21 additions & 22 deletions
@@ -2881,14 +2881,13 @@ std::vector<std::shared_ptr<LogicalCollection>> ClusterMethods::persistCollectio
     std::vector<std::shared_ptr<LogicalCollection>>& collections,
     bool ignoreDistributeShardsLikeErrors, bool waitForSyncReplication,
     bool enforceReplicationFactor) {
-
   TRI_ASSERT(!collections.empty());
   if (collections.empty()) {
     THROW_ARANGO_EXCEPTION_MESSAGE(
         TRI_ERROR_INTERNAL,
         "Trying to create an empty list of collections on coordinator.");
   }
-
+
   double const realTimeout = ClusterInfo::getTimeout(240.0);
   double const endTime = TRI_microtime() + realTimeout;

@@ -2899,16 +2898,16 @@ std::vector<std::shared_ptr<LogicalCollection>> ClusterMethods::persistCollectio
   // users)
   auto const dbName = collections[0]->vocbase().name();
   ClusterInfo* ci = ClusterInfo::instance();
-
+
   std::vector<ClusterCollectionCreationInfo> infos;

   while (true) {
     infos.clear();
-
+
     ci->loadCurrentDBServers();
     std::vector<std::string> dbServers = ci->getCurrentDBServers();
     infos.reserve(collections.size());
-
+
     std::vector<std::shared_ptr<VPackBuffer<uint8_t>>> vpackData;
     vpackData.reserve(collections.size());
     for (auto& col : collections) {
@@ -2959,9 +2958,11 @@ std::vector<std::shared_ptr<LogicalCollection>> ClusterMethods::persistCollectio
         // We need to remove all servers that are in the avoid list
         if (dbServers.size() - avoid.size() < replicationFactor) {
           LOG_TOPIC("03682", DEBUG, Logger::CLUSTER)
-              << "Do not have enough DBServers for requested replicationFactor,"
+              << "Do not have enough DBServers for requested "
+                 "replicationFactor,"
               << " (after considering avoid list),"
-              << " nrDBServers: " << dbServers.size() << " replicationFactor: " << replicationFactor
+              << " nrDBServers: " << dbServers.size()
+              << " replicationFactor: " << replicationFactor
               << " avoid list size: " << avoid.size();
           // Not enough DBServers left
           THROW_ARANGO_EXCEPTION(TRI_ERROR_CLUSTER_INSUFFICIENT_DBSERVERS);
@@ -2992,40 +2993,38 @@ std::vector<std::shared_ptr<LogicalCollection>> ClusterMethods::persistCollectio
       VPackBuilder velocy =
          col->toVelocyPackIgnore(ignoreKeys, LogicalDataSource::makeFlags());

-      infos.emplace_back(
-          ClusterCollectionCreationInfo{std::to_string(col->id()),
-                                        col->numberOfShards(), col->replicationFactor(),
-                                        col->minReplicationFactor(),
-                                        waitForSyncReplication, velocy.slice()});
+      infos.emplace_back(ClusterCollectionCreationInfo{
+          std::to_string(col->id()), col->numberOfShards(), col->replicationFactor(),
+          col->minReplicationFactor(), waitForSyncReplication, velocy.slice()});
       vpackData.emplace_back(velocy.steal());
     }

     // pass in the *endTime* here, not a timeout!
     Result res = ci->createCollectionsCoordinator(dbName, infos, endTime);
-
+
     if (res.ok()) {
       // success! exit the loop and go on
       break;
     }
-
+
     if (res.is(TRI_ERROR_REQUEST_CANCELED)) {
-      // special error code indicating that storing the updated plan in the agency
-      // didn't succeed, and that we should try again
-
+      // special error code indicating that storing the updated plan in the
+      // agency didn't succeed, and that we should try again
+
       // sleep for a while
       std::this_thread::sleep_for(std::chrono::milliseconds(100));
-
+
       if (TRI_microtime() > endTime) {
         // timeout expired
         THROW_ARANGO_EXCEPTION(TRI_ERROR_CLUSTER_TIMEOUT);
       }
-
+
       if (arangodb::application_features::ApplicationServer::isStopping()) {
         THROW_ARANGO_EXCEPTION(TRI_ERROR_SHUTTING_DOWN);
       }
-
+
       // try in next iteration with an adjusted plan change attempt
-       continue;
+      continue;

     } else {
       // any other error
@@ -3047,7 +3046,7 @@ std::vector<std::shared_ptr<LogicalCollection>> ClusterMethods::persistCollectio
     usableCollectionPointers.emplace_back(std::move(c));
   }
   return usableCollectionPointers;
-}
+}  // namespace arangodb

 /// @brief fetch edges from TraverserEngines
 /// Contacts all TraverserEngines placed
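The loop above implements optimistic concurrency against the agency: snapshot the plan version, attempt the change with that version as a precondition, and on TRI_ERROR_REQUEST_CANCELED retry until endTime. A minimal JavaScript sketch of the same pattern; readVersion and writeIf are hypothetical caller-supplied callbacks, not ArangoDB APIs:

    // Sketch: optimistic update with retry-until-deadline.
    // readVersion(): returns the current version of the shared state.
    // writeIf(version): applies the change only if the version still matches,
    //                   returning false on a precondition failure.
    function persistWithRetry(readVersion, writeIf, timeoutMs) {
      var endTime = Date.now() + timeoutMs;
      while (true) {
        var version = readVersion();
        if (writeIf(version)) {
          return;  // precondition held, change applied
        }
        // Someone else changed the state first; retry with fresh data.
        if (Date.now() > endTime) {
          throw new Error("timeout");  // mirrors TRI_ERROR_CLUSTER_TIMEOUT above
        }
        require("internal").sleep(0.1);  // back off ~100ms like the C++ loop
      }
    }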
