[DOCS-10824] Run multiple pipelines on the same host guide #29360

base: master

Conversation
Hi @maycmlee, looks good! I just have some general suggestions to reword a few things to keep it a bit more concise. Also, a few links need to be moved to the bottom of the page, and does this page need a Further reading section? No biggie if not, thanks!
## Overview

If you want to run multiple pipelines on one host so that you can send logs from different sources, you need to manually add the Worker files for any additional Workers. This document goes over how to run additional Workers and the files that you need to add and modify to run them.
Suggested change:
If you want to run multiple pipelines on a single host to send logs from different sources, you need to manually add Worker files for any additional Workers. This document explains how to run additional Workers, and which files to add or modify.
## Create another pipeline

[Set up another pipeline](https://docs.datadoghq.com/observability_pipelines/set_up_pipelines/?tab=pipelineui) for the additional Worker that you want to run on the same host. When you get to the Install page, follow the below steps to run the additional Worker.
Suggested change:
[Set up another pipeline](https://docs.datadoghq.com/observability_pipelines/set_up_pipelines/?tab=pipelineui) for each additional Worker you want to run on the same host. When you reach the Install page, follow the steps below to run the additional Worker.
Can it be one or more additional Workers? If so, I suggest changing this to "each additional" Worker.
It's for one Worker; they would have to go through these steps again to set up another one.
```
[Install]
WantedBy=multi-user.target
```

- An environment file: `/etc/default/observability-pipelines-worker`, which looks like:
Suggested change:
- The environment file: `/etc/default/observability-pipelines-worker`, which looks like:
Change the list to all use a/an.
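For anyone following these steps outside the review, here is a rough sketch of how the existing Worker files could be copied as a starting point for the second pipeline. The `op-fluent` name matches the example used in this guide; the source unit file location is an assumption and can vary by package and distribution.

```
# Copy the stock unit and environment file as a starting point for the second
# Worker. The source unit path is an assumption; adjust it to wherever your
# package installed observability-pipelines-worker.service.
sudo cp /lib/systemd/system/observability-pipelines-worker.service \
        /etc/systemd/system/op-fluent.service
sudo cp /etc/default/observability-pipelines-worker /etc/default/op-fluent

# Give the second Worker its own data directory so the pipelines don't collide.
sudo mkdir -p /var/lib/op-fluent
```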
```
sudo systemctl daemon-reload
```

1. Run this command to start the new service:
Suggested change:
1. Start the new service:
Same as above
```
sudo systemctl enable --now op-fluent
```

1. Run this command to verify the service is running:
Suggested change:
1. Verify the service is running:
Same as above
```
sudo systemctl status op-fluent
```

You can run the command `sudo journalctl -u op-fluent.service` to help you debug any issues.
Suggested change:
Additionally, use the command `sudo journalctl -u op-fluent.service` to help you debug any issues.
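Put together, the lifecycle commands for the example service read like this; everything here already appears in the guide except the `-f` flag on `journalctl`, which just follows the logs live:

```
# Reload unit definitions so systemd picks up the new op-fluent service.
sudo systemctl daemon-reload

# Enable the service at boot and start it immediately.
sudo systemctl enable --now op-fluent

# Confirm the Worker is running, then follow its logs while debugging.
sudo systemctl status op-fluent
sudo journalctl -u op-fluent.service -f
```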
A few comments
```
DD_API_KEY=<datadog_api_key>
DD_OP_PIPELINE_ID=<pipelines_id>
DD_SITE=datadoghq.com
DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID=<aws_access_key_id>
```
We talked about removing these I think, did you want to keep them here? I think we did away with these via IAM roles or similar?
Hey @ckelner! Sorry, I hadn't made the updates yet when you reviewed this. I should have now removed all the env vars that we said to remove.
```
DD_SITE=datadoghq.com
DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID=<aws_access_key_id>
DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING=<connection_string>
```
I think having a second destination here just serves as extraneous info/distraction; should we keep it as simple as possible?
Yes, removed.
In this example:
- `DD_OP_DATA_DIR` is set to `/var/lib/op-fluent`, the new data directory you created in the previous step.
- `DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID=<redacted>` and `DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_ACCESS_KEY=<redacted>` are environment variables for the Datadog Archives Amazon destination. Replace them with the environment variables for your destinations. See [Environment Variables](https://docs.datadoghq.com/observability_pipelines/environment_variables/?tab=sources).
Similar to my comment above, I think we did away with these in favor of IAM roles or similar, right? Maybe I'm misremembering. Either way, I think it just serves as extraneous info/distraction; stick to one destination for the example.
Removed as well
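After those removals, a minimal sketch of the trimmed environment file, keeping only the variables already shown in this guide (values are placeholders):

```
# /etc/default/op-fluent: example contents after dropping the archive credentials
DD_API_KEY=<datadog_api_key>
DD_OP_PIPELINE_ID=<pipeline_id>
DD_SITE=datadoghq.com
DD_OP_DATA_DIR=/var/lib/op-fluent
```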
Co-authored-by: Alicia Scott <aliciascott@users.noreply.github.com>
- The service name is specific to your use case, `<your_use_case_name>.service`. In this example, the service name is `op-fluent` since the pipeline is using the Fluent source.
- The description is for your specific use case; in this example, it is `OPW for Fluent Pipeline`.
- `EnvironmentFile` is set to the new systemd service environment variables file you created, which in this example is `-/etc/default/op-fluent`.

1. Run this command to reload systemd:
Hi! I'm going to keep this because I think it's clearer for someone who is using a screenreader.
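For context, a rough sketch of what the resulting `op-fluent.service` unit might contain, using the fields discussed above. The `ExecStart` path, subcommand, and service account are assumptions based on a typical package install, not something this guide specifies:

```
[Unit]
Description=OPW for Fluent Pipeline
After=network-online.target
Wants=network-online.target

[Service]
# Environment file created earlier in this guide; the leading "-" makes it optional.
EnvironmentFile=-/etc/default/op-fluent
# Assumed binary path and subcommand; confirm against the stock unit on your host.
ExecStart=/usr/bin/observability-pipelines-worker run
# Assumed service account created by the package install.
User=observability-pipelines-worker
Restart=always

[Install]
WantedBy=multi-user.target
```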
What does this PR do? What is the motivation?
Merge instructions
Merge readiness:
For Datadog employees:
Merge queue is enabled in this repo. Your branch name MUST follow the `<name>/<description>` convention and include the forward slash (`/`). Without this format, your pull request will not pass CI, the GitLab pipeline will not run, and you won't get a branch preview. Getting a branch preview makes it easier for us to check any issues with your PR, such as broken links. If your branch doesn't follow this format, rename it or create a new branch and PR.
To have your PR automatically merged after it receives the required reviews, add the following PR comment:
Additional notes