docs/reference/api-reference.md (39 additions, 6 deletions)
@@ -1651,7 +1651,7 @@ client.search({ ... })
 - **`profile` (Optional, boolean)**: Set to `true` to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.
 - **`rescore` (Optional, { window_size, query, learning_to_rank } \| { window_size, query, learning_to_rank }[])**: Can be used to improve precision by reordering just the top (for example 100 - 500) documents returned by the `query` and `post_filter` phases.
-- **`retriever` (Optional, { standard, knn, rrf, text_similarity_reranker, rule })**: A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents such as `query` and `knn`.
+- **`retriever` (Optional, { standard, knn, rrf, text_similarity_reranker, rule, rescorer, linear, pinned })**: A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents such as `query` and `knn`.
 - **`script_fields` (Optional, Record<string, { script, ignore_failure }>)**: Retrieve a script evaluation (based on different fields) for each hit.
 - **`search_after` (Optional, number \| number \| string \| boolean \| null[])**: Used to retrieve the next page of hits using a set of sort values from the previous page.
 - **`size` (Optional, number)**: The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the `from` and `size` parameters. To page through more hits, use the `search_after` property.
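For orientation, here is a minimal sketch of how the expanded `retriever` option could be used from the JavaScript client, combining a `standard` and a `knn` retriever under `rrf`. The index name `my-index`, the `dense_vector` field `my_vector`, and the query vector are illustrative assumptions and not part of this change.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// A retriever replaces top-level `query`/`knn`: here an RRF retriever
// fuses a lexical match query with a kNN vector query.
const result = await client.search({
  index: 'my-index',
  retriever: {
    rrf: {
      retrievers: [
        { standard: { query: { match: { title: 'quick brown fox' } } } },
        {
          knn: {
            field: 'my_vector',
            query_vector: [0.1, 0.2, 0.3],
            k: 10,
            num_candidates: 100
          }
        }
      ]
    }
  },
  size: 10
})

console.log(result.hits.hits)
```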
@@ -6755,9 +6755,45 @@ Changes dynamic index settings in real time.
 For data streams, index setting changes are applied to all backing indices by default.

 To revert a setting to the default value, use a null value.
-The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation.
+The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation.
 To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`.

+There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:
+
+```
+{
+  "number_of_replicas": 1
+}
+```
+
+Or you can use an `index` setting object:
+```
+{
+  "index": {
+    "number_of_replicas": 1
+  }
+}
+```
+
+Or you can use dot annotation:
+```
+{
+  "index.number_of_replicas": 1
+}
+```
+
+Or you can embed any of the aforementioned options in a `settings` object. For example:
+
+```
+{
+  "settings": {
+    "index": {
+      "number_of_replicas": 1
+    }
+  }
+}
+```
+
 NOTE: You can only define new analyzers on closed indices.
 To add an analyzer, you must close the index, define the analyzer, and reopen the index.
 You cannot close the write index of a data stream.
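To connect the request-body variants above to the JavaScript client, here is a rough sketch using `indices.putSettings`. It assumes the typed request accepts the body under a top-level `settings` property, as in recent client versions, and uses an illustrative index name; any of the equivalent representations shown above should work as the value.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Nested `index` object form; the flattened { "index.number_of_replicas": 1 }
// and bare { number_of_replicas: 1 } bodies are documented as equivalent.
await client.indices.putSettings({
  index: 'my-index',
  settings: {
    index: {
      number_of_replicas: 1
    }
  }
})

// Per the docs above, sending null for a setting reverts it to its default.
```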
@@ … @@
+The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
 It only works with the `chat_completion` task type for `openai` and `elastic` inference services.

-IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
-For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
-
 NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming.
 The Chat completion inference API and the Stream inference API differ in their response structure and capabilities.
 The Chat completion inference API provides more comprehensive customization options through more fields and function calling support.
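The diff above documents the client's chat completion helper; as a minimal, hedged sketch, the same streaming endpoint can also be reached through the low-level transport. The inference endpoint ID `my-chat-model`, the message payload, and the exact route are illustrative assumptions based on the chat completion inference documentation, not part of this change.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Call the chat completion inference endpoint over the transport.
// 'my-chat-model' stands in for an inference endpoint configured with the
// `chat_completion` task type for the `openai` or `elastic` service.
// The response is a stream of incremental events, so ask for it as a stream.
const stream = await client.transport.request(
  {
    method: 'POST',
    path: '/_inference/chat_completion/my-chat-model/_stream',
    body: {
      messages: [
        { role: 'user', content: 'Say hello in one short sentence.' }
      ]
    }
  },
  { asStream: true }
)
```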
src/api/api/indices.ts (1 addition, 1 deletion)
@@ -3165,7 +3165,7 @@ export default class Indices {
   }

   /**
-   * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
+   * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: ``` { "number_of_replicas": 1 } ``` Or you can use an `index` setting object: ``` { "index": { "number_of_replicas": 1 }} ``` Or you can use dot annotation: ``` { "index.number_of_replicas": 1 } ``` Or you can embed any of the aforementioned options in a `settings` object. For example: ``` { "settings": { "index": { "number_of_replicas": 1 }}} ``` NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-settings | Elasticsearch API documentation}
src/api/api/inference.ts (1 addition, 1 deletion)
@@ -364,7 +364,7 @@ export default class Inference {
   }

   /**
-   * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai` service or the `elastic` service, use the Chat completion inference API.
+   * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai` service or the `elastic` service, use the Chat completion inference API.
    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-unified-inference | Elasticsearch API documentation}