Closed
Labels
:ml Machine learning, >bug, Feature:NLP (Features and issues around NLP), Team:ML (Meta label for the ML team)
Description
When interacting with ELSER in a local serverless deployment, the inference process crashes when attempting to perform inference.
Steps to reproduce
- Ensure Docker is set up and running
- Check out Kibana and bootstrap it
- Start Elasticsearch serverless locally:
```sh
yarn es serverless --projectType=security --ssl
```
- Start Kibana locally:
```sh
yarn start --serverless=security --ssl
```
- Download ELSER
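The issue doesn't spell out the download step; a minimal sketch, assuming the standard trained models API is used, is to create the model configuration, which triggers Elasticsearch to pull the ELSER artifacts:

```
PUT _ml/trained_models/.elser_model_2
{
  "input": {
    "field_names": ["text_field"]
  }
}
```

Download progress can then be monitored until the model is fully defined.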
- Deploy ELSER via the inference API:
```
PUT _inference/sparse_embedding/elser
{
  "service": "elser",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}
```
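Before wiring the endpoint into a pipeline, it can help to confirm it was created; a sketch, assuming the inference GET API:

```
GET _inference/sparse_embedding/elser
```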
- Add an ingest pipeline with an inference processor:
```
PUT _ingest/pipeline/elser
{
  "processors": [
    {
      "inference": {
        "model_id": "elser",
        "input_output": [
          {
            "input_field": "content",
            "output_field": "text_embedding"
          }
        ]
      }
    },
    {
      "set": {
        "field": "timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```
- Attempt to perform inference by simulating the pipeline:
```
POST _ingest/pipeline/elser/_simulate
{
  "docs": [
    {
      "_source": {
        "content": "hello"
      }
    }
  ]
}
```
- Retrieve the stats from the trained models API to observe that the process has crashed:
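The stats request isn't shown in the issue; a sketch, assuming the trained models stats API (the model id shown in the output below may differ from the inference endpoint id):

```
GET _ml/trained_models/_stats
```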
"routing_state": {
"routing_state": "failed",
"reason": """inference process crashed due to reason [[my-elser-model] pytorch_inference/659 process stopped unexpectedly: Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0xffff83b20140, library: /lib/aarch64-linux-gnu/libc.so.6, base: 0xffff83a13000, normalized address: 0x10d140', version: 8.14.0-SNAPSHOT (build 38a5b0ec077958)
]"""
},