diff --git a/CHANGELOG b/CHANGELOG
index 3e2eda7cc14b..a86d9ca00ee8 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,6 +1,9 @@
 devel
 -----
 
+* Added new command-line option `--version-json`. This will return the
+  version information as a JSON object.
+
 * Fix ArangoAgency::version(), which always returned an empty string instead
   of the agency's correctly reported version. This also fixes the agency
   version in the startup log messages of the cluster.
@@ -183,7 +186,7 @@ devel
 * Fix URL request parsing in case data is handed in in small chunks.
   Previously the URL could be cut off if the chunk size was smaller than the
   URL size.
 
 * Backport bugfix from upstream rocksdb repository for calculating the
   free disk space for the database directory. Before the bugfix, rocksdb
   could overestimate the amount of free space when the arangod process
@@ -231,7 +234,7 @@ devel
   this way the system only needs to produce a path that is allowed
   to be passed through, e.g.
 
     FOR v,e,p IN 10 OUTBOUND @start GRAPH "myGraph"
      FILTER v.isRelevant == true
      RETURN p
@@ -286,7 +289,7 @@ devel
   This disables the optimizer rule `optimize-cluster-single-document-operations`
   for array inputs, e.g.
 
     INSERT [...] INTO collection
     REMOVE [...] IN collection
@@ -327,7 +330,7 @@ devel
 * Fixes BTS-417. In some cases an index did not consider both bounds (lower
   and upper) for a closed range scan if both bounds are expressed using the
   same operator, e.g., `FILTER doc.beginDate >= lb AND ub >= doc.beginDate`.
 
 * When writing to a starting shard leader, respond with a specific 503.
   Fixes BTS-390.
@@ -359,7 +362,7 @@ devel
   conflicts in maintenance jobs.
 
 * Log a proper message if an unexpected state is encountered when taking over
   shard leadership. In addition, make the change to the internal followerinfo
   state atomic so that it cannot be semi-changed.
 * Improve exception safety for maintenance thread and shard unlock
@@ -382,7 +385,7 @@ devel
   One example where this can happen is when trying to authenticate a request,
   but the _users collection is not yet available in the cluster.
 
 * Fixed issue BTS-354: Assertion related to getCollection.
 
 * Fixed a use-after-free bug in the connection pool.
 
@@ -391,10 +394,10 @@ devel
 * Fixed issue #14122: when the optimizer rule "inline-subqueries" is applied,
   it may rename some variables in the query. The variable renaming was however
   not carried out for traversal PRUNE conditions, so the PRUNE conditions
   could still refer to obsolete variables, which would make the query fail with
   errors such as
 
     Query: AQL: missing variable ... for node ... while planning registers
 
 * Fixed bug in error reporting when a database create did not work, which led
@@ -405,7 +408,7 @@ devel
 * Add a connection cache for internal replication requests.
 
 * Improve legibility of size values (by adding KB, MB, GB, TB suffixes) in
   output generated by client tools.
 
 * Timely updates of rebootId / cluster membership of DB servers and
@@ -429,7 +432,7 @@ devel
 * Upgrade jemalloc version to latest stable dev.
 
 * Fixed issue BTS-373: ASan detected possible heap-buffer-overflow at
   arangodb::transaction::V8Context::exitV8Context().
 
 * Allow specifying a fail-over LDAP server. Instead of "--ldap.OPTION" you need
@@ -443,15 +446,15 @@ devel
 * Make the time-to-live (TTL) value of a streaming cursor only count after the
   response has been sent to the client.
 * Improve performance of batch CRUD operations (insert, update, replace,
   remove) if some of the documents in the batch run into write-write conflicts.
   Rolling back partial operations in case of a failure is very expensive,
   because it requires rebuilding RocksDB write batches for the transaction
   from scratch. Rebuilding write batches takes time proportional to the number
   of operations in the batch, and for larger batches the cost can be
   prohibitive.
   Now we do not roll back write batches in some situations when this is
   not required, so that in many cases running into a conflict does not incur
   such a high overhead. There can still be issues when conflicts happen for
   index entries, but a lot of previously problematic cases should now work
   better.
@@ -467,31 +470,31 @@ devel
   be reduced to `x < 3` by the optimizer rule remove-redundant-or.
 
 * Changed default value of arangodump's `--envelope` option from `true` to
   `false`. This allows using higher parallelism in arangorestore when
   restoring large collection dumps. As a side effect, this will also decrease
   the size of dumps taken with arangodump, and should slightly improve dump
   speed.
 
 * Improve parallelism capabilities of arangorestore.
   arangorestore can now dispatch restoring data chunks of a collection to idle
   background threads, so that multiple restore requests can be in flight for
   the same collection concurrently.
   This can improve restore speed in situations when there are idle threads
   left (the number of threads can be configured via arangorestore's `--threads`
   option) and the dump file for the collection is large.
   The improved parallelism is only used when restoring dumps that are in the
   non-enveloped format. This format has been introduced with ArangoDB 3.8.
   The reason is that dumps in the non-enveloped format only contain the raw
   documents, which can be restored independently of each other, i.e. in any
   order. However, the enveloped format may contain documents and remove
   operations, which need to be restored in the original order.
 
 * Fix BTS-374: thread race between ArangoSearch link unloading and storage
   engine WAL flushing.
 
 * Fix thread race between ArangoSearch link unloading and storage engine WAL
   flushing.
@@ -539,7 +542,7 @@ devel
 * Fixed a problem in document batch operations, where errors from one shard
   were reported multiple times if the shard is completely offline.
 
 * Removed assertion for success of a RocksDB function.
   Throw a proper exception instead.
 
 * Show peak memory usage in AQL query profiling output.
@@ -551,8 +554,8 @@ devel
   - Make sure "computationTime" in Pregel job status response does not
     underflow in case of errors.
 
 * Prevent arangod from terminating with "terminate called without an active
   exception" (SIGABRT) in case an out-of-memory exception occurs while
   creating an ASIO socket connection.
 
 * UI builds are now using the yarn package manager instead of the previously
@@ -793,7 +796,7 @@ devel
 * Allow process-specific logfile names.
 
   This change allows replacing '$PID' with the current process id in the
   `--log.output` and `--audit.output` startup parameters. This way it is
   easier to write process-specific logfiles.
@@ -806,7 +809,7 @@ devel
   the V8 context multiple times, which would cause undefined behavior. Now we
   are tracking if we already left the context to prevent duplicate invocation.
 
 * In a cluster, do not create the collections `_statistics`, `_statistics15` and
   `_statisticsRaw` on DB servers. These collections should only be created by
   the coordinator, and should translate into 2 shards each on DB servers. But
   there shouldn't be shards named `_statistics*` on DB servers.
@@ -815,7 +818,7 @@ devel
   - Coordinators unconditionally logged the message "Got a hotbackup restore
     event, getting new cluster-wide unique IDs..." on shutdown. This was not
     necessarily related to a hotbackup restore.
   - DB servers unconditionally logged the message "Strange, we could not
     unregister the hotbackup restore callback." on shutdown, although this was
     meaningless.
@@ -844,10 +847,10 @@ devel
 * When dropping a collection or an index with a larger amount of documents, the
   key range for the collection/index in RocksDB gets compacted. Previously, the
   compaction was running in the foreground and thus would block the deletion
   operations.
   Now, the compaction is running in the background, so that the deletion
   operations can return earlier.
   The maximum number of compaction jobs that are executed in the background can
   be configured using the new startup parameter
   `--rocksdb.max-parallel-compactions`, which defaults to 2.
 
 * Put Sync/LatestID into hotbackup and restore it on hotbackup restore
@@ -856,7 +859,7 @@ devel
 * Fixed a bug in the index count optimization that double-counted documents
   when using array expansions in the fields definition.
 
 * Don't store selectivity estimate values for newly created system collections.
 
   Not storing the estimates has a benefit especially for the `_statistics`
@@ -864,7 +867,7 @@ devel
   idle servers. In this particular case, the actual statistics data was way
   smaller than the writes caused by the index estimate values, causing a
   disproportional overhead just for maintaining the selectivity estimates.
   The change now turns off the selectivity estimates for indexes in all newly
   created system collections, and for new user-defined indexes of type
   "persistent", "hash" or "skiplist", there is now an attribute "estimates"
   which can be set to `false` to disable the selectivity estimates for the
   index.
@@ -873,21 +876,21 @@ devel
   for user-defined indexes.
 
 * Added startup option `--query.global-memory-limit` to set a limit on the
   combined estimated memory usage of all AQL queries (in bytes).
   If this option has a value of `0`, then no memory limit is in place.
   This is also the default value and the same behavior as in previous versions
   of ArangoDB.
   Setting the option to a value greater than zero will mean that the total
   memory usage of all AQL queries will be limited approximately to the
   configured value.
   The limit is enforced by each server in a cluster independently, i.e. it can
   be set separately for coordinators, DB servers etc. The memory usage of a
   query that runs on multiple servers in parallel is not summed up, but tracked
   separately on each server.
   If a memory allocation in a query would lead to a violation of the configured
   global memory limit, then the query is aborted with error code 32 ("resource
   limit exceeded").
   The global memory limit is approximate, in the same fashion as the per-query
   limit provided by the option `--query.memory-limit` is. Some operations,
   namely calls to AQL functions and their intermediate results, are currently
   not properly tracked.
   If both `--query.global-memory-limit` and `--query.memory-limit` are set,
@@ -903,20 +906,20 @@ devel
   individual AQL queries can increase their memory limit via the `memoryLimit`
   query option. This is the default, so a query that increases its memory limit
   is allowed to use more memory.
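  The global accounting described above can be pictured as a shared counter
  that every query charges its allocations against; if charging an allocation
  would push the combined usage over the configured limit, the allocating
  query is aborted with "resource limit exceeded". The following is only an
  illustrative sketch of that idea, not ArangoDB's actual implementation
  (the class and method names are hypothetical):

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical process-wide memory accountant for AQL queries.
// A limit of 0 means "no global limit" (the documented default).
class GlobalMemoryTracker {
 public:
  explicit GlobalMemoryTracker(std::uint64_t globalLimit)
      : _limit(globalLimit), _used(0) {}

  // Charge an allocation. Returns false if the allocation would exceed
  // the global limit, in which case the caller should abort the query
  // with error code 32 ("resource limit exceeded").
  bool increase(std::uint64_t bytes) {
    if (_limit == 0) {  // unlimited
      _used.fetch_add(bytes);
      return true;
    }
    std::uint64_t old = _used.load();
    do {
      if (old + bytes > _limit) {
        return false;  // would violate the global limit
      }
    } while (!_used.compare_exchange_weak(old, old + bytes));
    return true;
  }

  // Release memory when a query frees it or finishes.
  void decrease(std::uint64_t bytes) { _used.fetch_sub(bytes); }

  std::uint64_t used() const { return _used.load(); }

 private:
  std::uint64_t _limit;
  std::atomic<std::uint64_t> _used;
};
```

  Note that, as the changelog says, each server would hold its own such
  tracker: usage of a distributed query is counted per server, never
  summed across the cluster.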
   The new option `--query.memory-limit-override` allows turning this behavior
   off, so that individual queries can only lower their maximum allowed memory
   usage.
 
 * Added metric `arangodb_aql_global_memory_usage` to expose the total amount
   of memory (in steps of 32 kb) that is currently in use by all AQL queries.
 
 * Added metric `arangodb_aql_global_memory_limit` to expose the memory limit
   from startup option `--query.global-memory-limit`.
 
 * Allow setting the path to the timezone information via the `TZ_DATA`
   environment variable, in the same fashion as the currently existing
   `ICU_DATA` environment variable. The `TZ_DATA` variable is useful in
   environments that start arangod from some unusual locations, when it can't
   find its `tzdata` directory automatically.
 
 * Fixed a bug in query cost estimation when a NoResults node occurred in a
   spliced
@@ -945,7 +948,7 @@ devel
 * Add support for building with Zen 3 CPU when optimizing for the local
   architecture.
 
 * The web UI's node overview now also displays agent information (cluster
   only).
 
 * The statistics view in the web UI now provides more system-specific
@@ -955,7 +958,7 @@ devel
 * Added metrics documentation snippets and infrastructure for that.
 
 * Added a new cluster distribution view to the web UI. The view includes
   general details about cluster-wide distribution as well as more detailed
   shard-distribution-specific information.
 
 * Follower primaries respond with
@@ -979,18 +982,18 @@ devel
   a tabular format or as plain text (Prometheus Text-based format).
   Additionally, the metrics can be downloaded there.
 
 * Added a new maintenance mode tab to the web UI in cluster mode.
   The new tab shows the current state of the cluster supervision maintenance
   and allows enabling/disabling the maintenance mode from there. The tab
   will only be visible in the `_system` database. The required privileges for
   displaying the maintenance mode status and/or changing it are the same as
   for using the REST APIs for the maintenance mode.
 
 * Fixed a problem that coordinators would vanish from the UI and the Health
   API if one switched the agency Supervision into maintenance mode and left
   that maintenance mode on for more than 24h.
 
 * Fixed a bug in the web interface that displayed the error "Not authorized to
   execute this request" when trying to create an index in the web interface in
   a database other than `_system` with a user that does not have any access
   permissions for the `_system` database.
@@ -999,7 +1002,7 @@ devel
   creation.
 
 * Added ability to display Coordinator and DBServer logs from inside the Web UI
   in a clustered environment when privileges are sufficient.
   Additionally, displayed log entries can now be downloaded from the web UI in
   single server and in cluster mode.
@@ -1007,7 +1010,7 @@ devel
   statistics (e.g. RocksDB related figures, sharding information and more).
 
 * Improve progress reporting for shard synchronization in the web UI.
   The UI will now show how many shards are actively syncing data, and will
   provide a better progress indicator, especially if there is more than one
   follower for a shard.
@@ -1032,7 +1035,7 @@ devel
   all databases (if the option is set to `true`) or just for the system
   database (if the option is set to `false`).
   The default value for the option is `true`, meaning statistics will be
   displayed in the web interface for all databases.
 
 * Add optional hostname logging to log messages.
   Whether or not the hostname is added to each log message can be controlled
   via
@@ -1043,7 +1046,7 @@ devel
   case of JSON-based logging. Setting the option to a value of `auto` will use
   the hostname as returned by `gethostbyname`.
 
 * Added logging of elapsed time of ArangoSearch commit/consolidation/cleanup
   jobs.
 
 * Added list-repeat AIR primitive that creates a list containing n copies of
   the input value.
@@ -1057,7 +1060,7 @@ devel
   connect attempts all fail and time out after 300ms. In this case we now don't
   try to reconnect after every command.
 
 * Added 'custom-query' testcase to arangobench to allow execution of custom
   queries. This also adds the options `--custom-query` and
   `--custom-query-file` for arangobench.
@@ -1072,9 +1075,9 @@ devel
 * On Windows create a minidump in case of an unhandled SEH exception for
   post-mortem debugging.
 
 * Add JWT secret support for arangodump and arangorestore, i.e. they now also
   provide the command-line options `--server.ask-jwt-secret` and
   `--server.jwt-secret-keyfile` with the same meanings as in arangosh.
 
 * Add optional hyperlink to program option sections for information purposes,
@@ -1100,7 +1103,7 @@ devel
     FOR out IN 1 OUTBOUND "v/1:1" edges
       FOR u IN unrelated
         RETURN [out, u]
 
   The "unrelated" collection was pulled into the DisjointSmartGraph, causing
   the AQL setup to create erroneous state. This is now fixed and the above
   query works.
@@ -1136,7 +1139,7 @@ devel
 * Fix profiling of AQL queries with the `silent` and `stream` options set in
   combination. Using the `silent` option makes a query execute, but discard
   all its results instantly. This led to some confusion in streaming queries,
   which can return the first query results once they are available, but don't
   necessarily execute the full query.
   Now, `silent` correctly discards all results even in streaming queries, but
   this has the effect that a streaming query will likely be executed completely
@@ -1144,10 +1147,10 @@ devel
   `silent` option is normally not set. There is no change for streaming queries
   if the `silent` option is not set.
 
   As a side effect of this change, this makes profiling (i.e. using
   `db._profileQuery(...)`) work for streaming queries as well. Previously,
   profiling a streaming query could have led to some internal errors, and even
   query results being returned, even though profiling a query should not return
   any query results.
 
 * Make dropping of indexes in cluster retry in case of precondition failed.
@@ -1155,7 +1158,7 @@ devel
   When dropping an index of a collection in the cluster, the operation could
   fail with a "precondition failed" error in case there were simultaneous
   index creation or drop actions running for the same collection. The error
   was returned properly internally, but got lost at the point when
   `.dropIndex()` simply converted any error to just `false`. We can't make
   `dropIndex()` throw an exception for any error, because that would affect
   downwards-compatibility.
   But in case there is a simultaneous
@@ -1198,19 +1201,19 @@ devel
   and MAKE_DISTRIBUTE_GRAPH_INPUT) and an additional calculation node with an
   according function call will be introduced if we need to prepare the input
   data for the distribute node.
 
 * Added new REST APIs for retrieving the sharding distribution:
 
   - GET `/_api/database/shardDistribution` will return the number of
     collections, shards, leaders and followers for the database it is run
     inside. The request can optionally be restricted to include data from
     only a single DB server, by passing the `DBserver` URL parameter.
     This API can only be used on coordinators.
 
   - GET `/_admin/cluster/shardDistribution` will return global statistics on
     the current shard distribution, showing the total number of databases,
     collections, shards, leaders and followers for the entire cluster.
     The results can optionally be restricted to include data from only a
     single DB server, by passing the `DBserver` URL parameter.
     By setting the `details` URL parameter, the response will not contain
@@ -1219,13 +1222,13 @@ devel
     This API can only be used in the `_system` database of coordinators, and
     requires admin user privileges.
 
 * Decrease the size of serialized index estimates by introducing a
   compressed serialization format. The compressed format uses the previous
   uncompressed format internally, compresses it, and stores the compressed
   data instead. This makes serialized index estimates a lot smaller, which in
   turn decreases the size of I/O operations for index maintenance.
 
 * Do not create index estimator objects for proxy collection objects on
Proxy objects are created on coordinators and DB servers for all shards, and they also make index objects available. In order to reduce the memory usage by these objects, we don't create any @@ -1234,7 +1237,7 @@ devel for higher numbers of collections/shards. * More improvements for logging: - + * Added new REST API endpoint GET `/_admin/log/entries` to return log entries in a more intuitive format, putting each log entry with all its properties into an object. The API response is an array with all log message objects @@ -1242,7 +1245,7 @@ devel This is an extension to the already existing API endpoint GET `/_admin/log`, which returned log messages fragmented into 5 separate arrays. - The already existing API endpoint GET `/_admin/log` for retrieving log + The already existing API endpoint GET `/_admin/log` for retrieving log messages is now deprecated, although it will stay available for some time. * Truncation of log messages now takes JSON format into account, so that @@ -1256,38 +1259,38 @@ devel - `--log.max-entry-length`: controls the maximum line length for individual log messages that are written into normal logfiles by arangod (note: this - does not include audit log messages). - Any log messages longer than the specified value will be truncated and the - suffix '...' will be added to them. The purpose of this parameter is to - shorten long log messages in case there is not a lot of space for logfiles, - and to keep rogue log messages from overusing resources. + does not include audit log messages). + Any log messages longer than the specified value will be truncated and the + suffix '...' will be added to them. The purpose of this parameter is to + shorten long log messages in case there is not a lot of space for logfiles, + and to keep rogue log messages from overusing resources. 
     The default value is 128 MB, which is very high and should effectively
     mean downwards-compatibility with previous arangod versions, which did
     not restrict the maximum size of log messages.
 
   - `--audit.max-entry-length`: controls the maximum line length for
     individual audit log messages that are written into audit logs by arangod.
     Any audit log messages longer than the specified value will be truncated
     and the suffix '...' will be added to them.
     The default value is 128 MB, which is very high and should effectively
     mean downwards-compatibility with previous arangod versions, which did
     not restrict the maximum size of log messages.
 
   - `--log.in-memory-level`: controls which log messages are preserved in
     memory (in case `--log.in-memory` is set to `true`). The default value is
     `info`, meaning all log messages of types `info`, `warning`, `error` and
     `fatal` will be stored by an instance in memory (this was also the
     behavior in previous versions of ArangoDB). By setting this option to
     `warning`, only `warning`, `error` and `fatal` log messages will be
     preserved in memory, and by setting the option to `error`, only error and
     fatal messages will be kept.
     This option is useful because the number of in-memory log messages is
     limited to the latest 2048 messages, and these slots are by default shared
     between informational, warning and error messages.
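  The truncation rule described for `--log.max-entry-length` and
  `--audit.max-entry-length` (cut the message at the limit and append the
  suffix '...') can be sketched in a few lines. This is only an illustration
  of the documented behavior; `truncateLogMessage` is a hypothetical name,
  not the arangod implementation:

```cpp
#include <string>

// Illustrative sketch of the documented log-truncation behavior:
// messages at or under the limit pass through unchanged, longer ones
// are cut at the limit and get the suffix "..." appended.
std::string truncateLogMessage(std::string const& message,
                               std::size_t maxLength) {
  if (message.size() <= maxLength) {
    return message;  // short enough, keep as-is
  }
  return message.substr(0, maxLength) + "...";
}
```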
The desired behavior in this case is to turn off the REST API for logging, but was not implemented. The default value for the option is `true`, so the - REST API is enabled. This behavior did not change, and neither did the + REST API is enabled. This behavior did not change, and neither did the behavior when setting the option to a value of `jwt` (meaning the REST API for logging is only available for superusers with a valid JWT token). @@ -1301,8 +1304,8 @@ devel This often returned HTTP 500 with an error message "Need open Array" due to an internal error when setting up agency preconditions. -* Remove logging startup options `--log.api-enabled` and `--log.keep-logrotate` - for all client tools (arangosh, arangodump, arangorestore etc.), as these +* Remove logging startup options `--log.api-enabled` and `--log.keep-logrotate` + for all client tools (arangosh, arangodump, arangorestore etc.), as these options are only meaningful for arangod. * Fixed BTS-284: upgrading from 3.6 to 3.7 in cluster enviroment. @@ -1433,7 +1436,7 @@ devel document update operations (successful and failed) [s]. - `arangodb_collection_truncate_time`: Execution time histogram of all collection truncate operations (successful and failed) [s]. - + The timer metrics are turned off by default, and can be enabled by setting the startup option `--server.export-read-write-metrics true`. @@ -1446,7 +1449,7 @@ devel replication operation"). The log message will now provide the database and shard name plus the differing information about the shard leader. -* Make `padded` and `autoincrement` key generators export their `lastValue` +* Make `padded` and `autoincrement` key generators export their `lastValue` values, so that they are available in dumps and can be restored elsewhere from a dump. 
diff --git a/lib/ApplicationFeatures/VersionFeature.cpp b/lib/ApplicationFeatures/VersionFeature.cpp
index d0583a33b667..c530e6566cc7 100644
--- a/lib/ApplicationFeatures/VersionFeature.cpp
+++ b/lib/ApplicationFeatures/VersionFeature.cpp
@@ -35,7 +35,9 @@ using namespace arangodb::options;
 namespace arangodb {
 
 VersionFeature::VersionFeature(application_features::ApplicationServer& server)
-    : ApplicationFeature(server, "Version"), _printVersion(false) {
+    : ApplicationFeature(server, "Version"),
+      _printVersion(false),
+      _printVersionJson(false) {
   setOptional(false);
   startsAfter();
@@ -45,9 +47,27 @@ void VersionFeature::collectOptions(std::shared_ptr<options::ProgramOptions> options) {
   options->addOption("--version", "reports the version and exits",
                      new BooleanParameter(&_printVersion),
                      arangodb::options::makeDefaultFlags(arangodb::options::Flags::Command));
+
+  options->addOption("--version-json", "reports the version as JSON and exits",
+                     new BooleanParameter(&_printVersionJson),
+                     arangodb::options::makeDefaultFlags(arangodb::options::Flags::Command))
+      .setIntroducedIn(30900);
 }
 
 void VersionFeature::validateOptions(std::shared_ptr<options::ProgramOptions>) {
+  if (_printVersionJson) {
+    VPackBuilder builder;
+    {
+      VPackObjectBuilder ob(&builder);
+      Version::getVPack(builder);
+
+      builder.add("version", VPackValue(Version::getServerVersion()));
+    }
+
+    std::cout << builder.slice().toJson() << std::endl;
+    exit(EXIT_SUCCESS);
+  }
+
   if (_printVersion) {
     std::cout << Version::getServerVersion() << std::endl
               << std::endl
diff --git a/lib/ApplicationFeatures/VersionFeature.h b/lib/ApplicationFeatures/VersionFeature.h
index 627b065a372f..d3d62a66e753 100644
--- a/lib/ApplicationFeatures/VersionFeature.h
+++ b/lib/ApplicationFeatures/VersionFeature.h
@@ -38,6 +38,7 @@ class VersionFeature final : public application_features::ApplicationFeature {
  private:
   bool _printVersion;
+  bool _printVersionJson;
 };
 
 }  // namespace arangodb
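For illustration, the JSON object that `validateOptions()` above assembles
combines the detailed version properties (from `Version::getVPack()`) with a
top-level `"version"` key before printing it and exiting. The following is a
rough stand-alone sketch of that assembly using only the standard library
instead of VelocyPack; `buildVersionJson` and the example keys are
placeholders, not the server's real property list:

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch: serialize version details plus a top-level
// "version" key into a single JSON object, mirroring the shape of the
// `--version-json` output built with VPackBuilder in the patch above.
// (No escaping is done; values are assumed to be plain strings.)
std::string buildVersionJson(std::map<std::string, std::string> const& details,
                             std::string const& serverVersion) {
  std::ostringstream out;
  out << '{';
  for (auto const& [key, value] : details) {
    out << '"' << key << "\":\"" << value << "\",";
  }
  // the top-level "version" key is added last, as in validateOptions()
  out << "\"version\":\"" << serverVersion << "\"}";
  return out.str();
}
```

A caller would print the returned string and exit successfully, which is what
the feature does when `--version-json` is given on the command line.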