feat(helm): Add MCP server deployment. by junhaoliao · Pull Request #1819 · y-scope/clp · GitHub

Conversation

@junhaoliao
Member
@junhaoliao junhaoliao commented Dec 19, 2025

> [!WARNING]
> This PR must be merged AFTER #1821 which fixed a bug in the MCP server (see #1820)
#1821 has been merged

Description

Note

This PR is part of the ongoing work for #1309. More PRs will be submitted until the Helm chart is complete and fully functional.

This PR adds support for the MCP (Model Context Protocol) server component to the Helm chart. The MCP server provides AI agent access to CLP functionalities.

Changes:

  • Add mcp-server-deployment.yaml - Deployment for the MCP server with:
    • Init containers to wait for db-table-creator and results-cache-indices-creator jobs
    • HTTP health checks at /health endpoint (port 8000)
    • Database credentials from secrets
    • Logging configuration
  • Add mcp-server-logs-pv.yaml - PersistentVolume for MCP server logs
  • Add mcp-server-logs-pvc.yaml - PersistentVolumeClaim for MCP server logs
  • Add mcp-server-service.yaml - NodePort service exposing the MCP server
  • Update values.yaml - Add mcp_server configuration (disabled by default)
  • Update configmap.yaml - Add mcp_server section to clp-config
  • Update test.sh - Add mcp_server log directory and port mapping (30800)
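All of these templates render conditionally on the MCP server configuration. A minimal sketch of the guard pattern, simplified for illustration (the real templates also use the chart's helper functions for names, labels, and selectors):

```yaml
# Simplified illustration of the conditional-rendering guard used by the
# new templates; metadata and chart helpers are reduced to bare values.
{{- if .Values.clpConfig.mcp_server }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: mcp-server
          ports:
            - name: "mcp"
              containerPort: 8000
{{- end }}
```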

Configuration:

The MCP server is disabled by default (mcp_server: null). To enable it, configure in values.yaml:

mcp_server:
  port: 30800
  logging_level: "INFO"

Checklist

  • The PR satisfies the contribution guidelines.
  • This is a breaking change and it has been indicated in the PR title, OR this isn't a
    breaking change.
  • Necessary docs have been updated, OR no docs need to be updated.

Validation performed

0. Configure MCP server in values.yaml:

Configure mcp_server with a non-null value to enable its deployment:

mcp_server:
  port: 30800
  logging_level: "INFO"

1. Helm chart deployment

cd tools/deployment/package-helm
./test.sh

# observed "All jobs completed and services are ready."

2. Log ingestion

Ingested sample logs at /tmp/clp/samples using the WebUI at http://localhost:30000/ingest.


Alternatively, we can use the compress tool to ingest logs:

task # build the project
cd build/clp-package

Configure the database port in etc/clp-config.yaml to match the Helm chart setup:

database:
  port: 30306

Then run the compress tool:

./sbin/compress.sh --timestamp=timestamp ~/samples/postgresql.jsonl

3. Verify the MCP server is alive

curl http://localhost:30800/health

Output:

OK
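For scripting, the health probe can be wrapped in a small retry loop. This helper is illustrative and not part of the PR; the curl target assumes the port 30800 configuration from the steps above:

```shell
# Illustrative retry helper: run a command until it succeeds or the
# attempt budget is exhausted.
wait_for() {
  attempts=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}

# Example (assumes the chart is deployed with port 30800):
# wait_for 30 curl -fsS http://localhost:30800/health
```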

4. Initialize an MCP session

curl -i -X POST http://localhost:30800/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {"name": "test", "version": "1.0"}
    }
  }'

Output (note the mcp-session-id header):

HTTP/1.1 200 OK
...
mcp-session-id: 8a30ef56074045c187e022a896c87681
...
event: message
data: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"experimental":{},"prompts":{"listChanged":true},"resources":{"subscribe":false,"listChanged":true},"tools":{"listChanged":true}},"serverInfo":{"name":"clp-mcp-server","version":"2.14.1"}}}
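To avoid copying the header by hand, the session ID can be pulled out of the response headers with a short pipeline. The sample text below stands in for the `curl -i` output shown above:

```shell
# Extract the mcp-session-id header value from captured response headers.
# In practice, pipe the output of the `curl -i` request here instead.
headers='HTTP/1.1 200 OK
mcp-session-id: 8a30ef56074045c187e022a896c87681
content-type: text/event-stream'

SESSION_ID=$(printf '%s\n' "$headers" \
  | awk -F': ' 'tolower($1) == "mcp-session-id" { print $2 }' \
  | tr -d '\r')
echo "$SESSION_ID"
```

Subsequent requests can then pass `-H "Mcp-Session-Id: $SESSION_ID"`.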

5. List available MCP tools

Replace Mcp-Session-Id with the actual session ID from the previous step.

curl -X POST http://localhost:30800/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: 8a30ef56074045c187e022a896c87681" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
  }'

Output:

event: message
data: {"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"get_instructions","description":"Gets a pre-defined \"system prompt\" that guides the LLM behavior.\nThis function must be invoked before any other `FastMCP.tool`.
...

6. Invoke the get_instructions tool (required before other tools)

Replace Mcp-Session-Id with the actual session ID from the previous step.

curl -X POST http://localhost:30800/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: 8a30ef56074045c187e022a896c87681" \
  -d '{
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
      "name": "get_instructions",
      "arguments": {}
    }
  }'

Output:

event: message
data: {"jsonrpc":"2.0","id":4,"result":{"content":[{"type":"text","text":"You are an AI assistant for querying the CLP log database using CLP-KQL (CKQL).
...

7. Invoke the search_by_kql tool

Replace Mcp-Session-Id with the actual session ID from the previous step.

curl -X POST http://localhost:30800/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: 8a30ef56074045c187e022a896c87681" \
  -d '{
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
      "name": "search_by_kql",
      "arguments": {
        "kql_query": "1"
      }
    }
  }'

Output:

event: message
data: {"jsonrpc":"2.0","id":5,"result":{"content":[{"type":"text","text":"{\"items\":[\"timestamp: 2023-03-27T00:31:55.999Z, message: {\\\"timestamp\\\":\\\"2023-03-27 00:31:55.999\\\",\\\"pid\\\":8065,\\\"session_id\\\":\\\"64211c3b.1f81\\\",\\\"line_num\\\":1,\\\"session_start\\\":\\\"2023-03-27 00:31:55 EDT\\\",\\\"txid\\\":0,\\\"error_severity\\\":\\\"DEBUG\\\",\\\"query_id\\\":0,\\\"message\\\":\\\"InitPostgres\\\",\\\"backend_type\\\":\\\"autovacuum worker\\\"}\\n, link: http://localhost:4000/streamFile?type=json&streamId=aa47a77d-dc76-4282-b1bb-4199af8c3491&dataset=default&logEventIdx=913971\",\"timestamp: 2023-03-27T00:31:35.985Z,
...

✅ Observed results are getting returned.

Summary by CodeRabbit

  • New Features

    • Optional MCP server deployable via chart with configurable host/port, logging level and persistent log storage; config template updated to include mcp_server entry.
    • Deployment, Service, PVC and PV templates added to support the MCP server when enabled.
  • Tests / Chores

    • Test environment updated to create MCP log directory and expose server port for local testing.
    • Chart version bumped.


@coderabbitai
Contributor
coderabbitai bot commented Dec 19, 2025

Walkthrough

Adds conditional Helm templates and values to enable an optional MCP server: configmap entry, Deployment with init containers and health checks, PV/PVC and NodePort Service for logs and exposure, and test script updates; resources render only when .Values.clpConfig.mcp_server is set.

Changes

Cohort / File(s) Change Summary
Config & Values
tools/deployment/package-helm/templates/configmap.yaml, tools/deployment/package-helm/values.yaml
Added optional webui.mcp_server block in the clp-config ConfigMap template; when provided, sets mcp_server with host localhost, port 8000, and logging_level from values. values.yaml gains clpConfig.webui.mcp_server placeholder (default null) with commented port and logging_level.
Deployment
tools/deployment/package-helm/templates/mcp-server-deployment.yaml
New conditional Deployment for mcp-server when .Values.clpConfig.mcp_server is set: 1 replica, serviceAccountName, securityContext (firstParty uid/gid), two initContainers that wait for db-table-creator and results-cache-indices-creator jobs, main container using chart image and env vars from secrets/values, mounts for ConfigMap and logs PVC, command to run the Python MCP server, and readiness/liveness probes on /health (port 8000).
Service
tools/deployment/package-helm/templates/mcp-server-service.yaml
New conditional NodePort Service for mcp-server (port 8000, targetPort mcp), with labels/selectors from chart helpers and nodePort sourced from .Values.clpConfig.mcp_server.port.
Storage (PV/PVC)
tools/deployment/package-helm/templates/mcp-server-logs-pv.yaml, tools/deployment/package-helm/templates/mcp-server-logs-pvc.yaml
Conditional templates creating a local PersistentVolume (hostPath under logs directory, 5Gi, ReadWriteOnce) via clp.createLocalPv and a corresponding PersistentVolumeClaim (5Gi, ReadWriteOnce) via clp.createPvc for MCP server logs when mcp_server is enabled.
Test / Local-kind config
tools/deployment/package-helm/test.sh
Added creation of log directory $CLP_HOME/var/log/mcp_server and exposed additional host/container port mapping (30800) in the kind cluster configuration.
Chart metadata
tools/deployment/package-helm/Chart.yaml
Bumped chart version from 0.1.2-dev.10 to 0.1.2-dev.13.
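The storage pair described in the table can be sketched in plain Kubernetes YAML. This is a simplified stand-in for the chart's clp.createLocalPv and clp.createPvc helpers; the names and hostPath are illustrative, not the chart's actual values:

```yaml
# Illustrative local PV + PVC pair for MCP server logs (5Gi, ReadWriteOnce).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mcp-server-logs-pv        # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /var/log/mcp_server     # actual path comes from chart values
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mcp-server-logs-pvc       # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
  volumeName: mcp-server-logs-pv
```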

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Helm as Helm renderer
  participant K8s as Kubernetes API
  participant PV as PV/PVC
  participant Jobs as DB/Indices Jobs
  participant Init as Pod initContainers
  participant MCP as MCP Server Pod
  participant Service as NodePort Service

  Helm->>K8s: Render resources (ConfigMap, PV, PVC, Deployment, Service) if mcp_server enabled
  K8s->>PV: Create PersistentVolume & PVC (hostPath for logs)
  K8s->>Jobs: Ensure db-table-creator & results-cache-indices-creator Jobs exist/start
  K8s->>MCP: Create MCP Pod (initContainers + main container)
  MCP->>Init: Run initContainers to poll/wait for Jobs
  Init->>Jobs: Poll job status
  Jobs-->>Init: Jobs complete
  Init->>MCP: Init complete — start main container
  MCP->>K8s: Serve /health on port 8000 for readiness/liveness
  Service->>MCP: Route external traffic via NodePort (nodePort from values)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify Helm conditional guards and consistent use of chart helpers for names/labels/selectors.
  • Inspect initContainer wait logic and referenced job names (db-table-creator, results-cache-indices-creator).
  • Confirm env var names and secret keys used by the Deployment.
  • Validate PV hostPath construction, nodeRole assumption, and local PV suitability.
  • Check NodePort value handling and potential conflicts with kind test port mapping.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately describes the main change: adding Helm chart support for MCP server deployment, which aligns with the multiple new templates and configuration updates.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

@junhaoliao junhaoliao marked this pull request as ready for review December 19, 2025 06:18
@junhaoliao junhaoliao requested a review from a team as a code owner December 19, 2025 06:18
Contributor
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
components/clp-mcp-server/clp_mcp_server/clp_connector.py (1)

30-36: Update presto-clp/scripts/init.py to use the new database configuration schema.

The Helm chart has been properly updated to use database.names dictionary structure. However, tools/deployment/presto-clp/scripts/init.py still references the old database.name schema (line 141). This causes an inconsistency: the configuration will not have a database.name key, only database.names[ClpDbNameType.CLP]. Update line 141 to use the new schema via the config enum, or provide a fallback that accounts for the dictionary-based structure.

📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65b3c7c and c23a5da.

📒 Files selected for processing (8)
  • components/clp-mcp-server/clp_mcp_server/clp_connector.py (2 hunks)
  • tools/deployment/package-helm/templates/configmap.yaml (1 hunks)
  • tools/deployment/package-helm/templates/mcp-server-deployment.yaml (1 hunks)
  • tools/deployment/package-helm/templates/mcp-server-logs-pv.yaml (1 hunks)
  • tools/deployment/package-helm/templates/mcp-server-logs-pvc.yaml (1 hunks)
  • tools/deployment/package-helm/templates/mcp-server-service.yaml (1 hunks)
  • tools/deployment/package-helm/test.sh (2 hunks)
  • tools/deployment/package-helm/values.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-05-06T10:07:04.654Z
Learnt from: kirkrodrigues
Repo: y-scope/clp PR: 881
File: components/core/tools/scripts/lib_install/macos/install-all.sh:11-12
Timestamp: 2025-05-06T10:07:04.654Z
Learning: In CLP installation scripts, temporary directories with downloaded files should not be automatically cleaned up on failure (e.g., with EXIT traps) to preserve artifacts for debugging purposes.

Applied to files:

  • tools/deployment/package-helm/test.sh
📚 Learning: 2025-12-19T05:03:32.320Z
Learnt from: junhaoliao
Repo: y-scope/clp PR: 1818
File: tools/deployment/package-helm/templates/configmap.yaml:12-12
Timestamp: 2025-12-19T05:03:32.320Z
Learning: In the y-scope/clp Helm chart, the `api_server.host` configuration field in the ConfigMap (tools/deployment/package-helm/templates/configmap.yaml) is unused because no other k8s internal services need to reach the API server—it's only accessed from outside the cluster via NodePort.

Applied to files:

  • tools/deployment/package-helm/templates/mcp-server-service.yaml
  • tools/deployment/package-helm/values.yaml
  • tools/deployment/package-helm/templates/configmap.yaml
📚 Learning: 2025-10-27T07:07:37.901Z
Learnt from: junhaoliao
Repo: y-scope/clp PR: 1501
File: tools/deployment/presto-clp/scripts/init.py:10-13
Timestamp: 2025-10-27T07:07:37.901Z
Learning: In `tools/deployment/presto-clp/scripts/init.py`, the `DATABASE_COMPONENT_NAME` and `DATABASE_DEFAULT_PORT` constants are intentionally duplicated from `clp_py_utils.clp_config` because `clp_py_utils` is not installed in the Presto init script's runtime environment. The two flows are separate and this duplication is documented. There are plans to merge these flows after a future release.

Applied to files:

  • components/clp-mcp-server/clp_mcp_server/clp_connector.py
🧬 Code graph analysis (1)
components/clp-mcp-server/clp_mcp_server/clp_connector.py (1)
components/clp-py-utils/clp_py_utils/clp_config.py (1)
  • ClpDbNameType (211-215)
🪛 YAMLlint (1.37.1)
tools/deployment/package-helm/templates/mcp-server-service.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 5-5: too many spaces inside braces

(braces)


[error] 5-5: too many spaces inside braces

(braces)


[error] 7-7: too many spaces inside braces

(braces)


[error] 12-12: too many spaces inside braces

(braces)


[error] 17-17: too many spaces inside braces

(braces)


[error] 17-17: too many spaces inside braces

(braces)

tools/deployment/package-helm/templates/mcp-server-deployment.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 5-5: too many spaces inside braces

(braces)


[error] 5-5: too many spaces inside braces

(braces)


[error] 7-7: too many spaces inside braces

(braces)


[error] 13-13: too many spaces inside braces

(braces)


[error] 18-18: too many spaces inside braces

(braces)


[error] 21-21: too many spaces inside braces

(braces)


[error] 21-21: too many spaces inside braces

(braces)


[error] 24-24: too many spaces inside braces

(braces)


[error] 24-24: too many spaces inside braces

(braces)


[error] 25-25: too many spaces inside braces

(braces)


[error] 25-25: too many spaces inside braces

(braces)


[error] 26-26: too many spaces inside braces

(braces)


[error] 26-26: too many spaces inside braces

(braces)


[error] 32-32: too many spaces inside braces

(braces)


[error] 37-37: too many spaces inside braces

(braces)


[error] 46-46: too many spaces inside braces

(braces)


[error] 46-46: too many spaces inside braces

(braces)


[error] 51-51: too many spaces inside braces

(braces)


[error] 51-51: too many spaces inside braces

(braces)


[error] 54-54: too many spaces inside braces

(braces)


[error] 54-54: too many spaces inside braces

(braces)


[error] 67-67: too many spaces inside braces

(braces)


[error] 70-70: too many spaces inside braces

(braces)


[error] 80-80: too many spaces inside braces

(braces)


[error] 85-85: too many spaces inside braces

(braces)


[error] 92-92: too many spaces inside braces

(braces)


[error] 95-95: too many spaces inside braces

(braces)


[error] 95-95: too many spaces inside braces

(braces)

tools/deployment/package-helm/templates/mcp-server-logs-pvc.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 8-8: too many spaces inside braces

(braces)


[error] 9-9: too many spaces inside braces

(braces)

tools/deployment/package-helm/templates/mcp-server-logs-pv.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 10-10: too many spaces inside braces

(braces)


[error] 11-11: too many spaces inside braces

(braces)

🔇 Additional comments (10)
components/clp-mcp-server/clp_mcp_server/clp_connector.py (1)

8-8: LGTM! Clean import addition.

The import of ClpDbNameType is necessary for the enum-based database name lookup introduced in this change.

tools/deployment/package-helm/test.sh (1)

62-62: LGTM! Test environment setup is correct.

The MCP server log directory and port mapping additions align with the new deployment configuration. The directory path mcp_server matches the PV hostPath configuration, and port 30800 matches the example in values.yaml.

Also applies to: 91-93
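For reference, the extra port mapping in a kind cluster configuration typically takes this shape. This is a hypothetical excerpt; test.sh generates the actual configuration:

```yaml
# Hypothetical kind config excerpt exposing the MCP server NodePort.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30800   # MCP server NodePort inside the cluster
        hostPort: 30800        # reachable as localhost:30800 on the host
```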

tools/deployment/package-helm/templates/mcp-server-logs-pvc.yaml (1)

1-9: LGTM! PVC configuration is appropriate.

The PVC template correctly uses conditional rendering and follows the chart's helper patterns. The 5Gi capacity and ReadWriteOnce access mode are suitable for log storage.

Note: YAMLlint errors in the static analysis hints are false positives—they don't account for Helm's Go template syntax.

tools/deployment/package-helm/templates/mcp-server-logs-pv.yaml (1)

1-11: LGTM! PV configuration is correct.

The local PV template properly references the mcp_server log directory and follows the chart's helper patterns. The hostPath matches the directory created in test.sh.

Note: YAMLlint errors are false positives for Helm template syntax.

tools/deployment/package-helm/templates/mcp-server-service.yaml (1)

1-18: Service configuration is correct, but depends on required field.

The NodePort service is properly configured and the targetPort: "mcp" correctly references the named port in the deployment. However, this template requires .Values.clpConfig.mcp_server.port to be set, which relates to the configuration validation issue identified in values.yaml.

Note: YAMLlint errors are false positives for Helm template syntax.

The service configuration is sound, but ensure the required port field issue in values.yaml is addressed.

tools/deployment/package-helm/templates/mcp-server-deployment.yaml (4)

27-37: Good practice: Init containers ensure dependencies are ready.

The init containers properly wait for the database table creator and results cache indices creator jobs to complete before starting the MCP server. This ensures the required infrastructure is ready and prevents startup failures.
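Such wait logic commonly takes the following shape. This is an illustrative sketch, not the chart's actual init containers; the image and command are assumptions:

```yaml
# Illustrative init container that blocks until a Job completes.
initContainers:
  - name: wait-for-db-table-creator      # illustrative name
    image: bitnami/kubectl:latest        # assumption: any image with kubectl
    command:
      - kubectl
      - wait
      - --for=condition=complete
      - --timeout=300s
      - job/db-table-creator
```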


42-58: Environment configuration follows security best practices.

Database credentials are correctly sourced from Kubernetes secrets rather than hardcoded values. The logging configuration is appropriately parameterized.

Note: This depends on the required field validation issue identified in values.yaml—logging_level must be present when mcp_server is enabled.

Ensure the configuration validation issue in values.yaml is addressed to prevent missing required fields.


59-61: Port naming correctly matches service configuration.

The named port "mcp" at line 60 properly corresponds to the targetPort: "mcp" in mcp-server-service.yaml, enabling correct service-to-pod routing.


79-86: Health check configuration is appropriate.

The readiness and liveness probes correctly use the /health endpoint on port 8000. The YAML anchor pattern (&mcp-server-health-check and *mcp-server-health-check) efficiently reuses the health check configuration.
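In plain YAML, the anchor reuse looks like this. The field values are representative, not copied from the chart:

```yaml
# The anchor (&) defines the probe once; the alias (*) reuses it verbatim.
readinessProbe: &mcp-server-health-check
  httpGet:
    path: /health
    port: 8000
  periodSeconds: 10
livenessProbe: *mcp-server-health-check
```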

tools/deployment/package-helm/templates/configmap.yaml (1)

84-89: Verify whether the mcp_server.host field is actually used for internal service-to-service communication.

The configuration sets mcp_server.host to the Kubernetes service name, similar to api_server.host. However, examination of the codebase reveals the MCP server is exposed via NodePort for external access, and no internal services appear to reference this host value for communication. The host is converted to an IP address during setup but may not be consumed at runtime, following the same pattern as the unused api_server.host field. Confirm whether internal Kubernetes services need to reach the MCP server using this host field, or if the field can be removed to simplify the configuration.

Comment on lines +78 to +80
mcp_server: null
# port: 30800
# logging_level: "INFO"
Contributor


⚠️ Potential issue | 🟠 Major

Add required field validation or default values.

When mcp_server is enabled (non-null), both port and logging_level are required by the service and deployment templates. If a user enables the MCP server with an empty object (e.g., mcp_server: {}), the deployment will fail due to missing required fields.

Consider one of these solutions:

  1. Add a JSON schema validation file (values.schema.json) to enforce required fields
  2. Provide default values within the templates using default function
  3. Add clear documentation that both fields are mandatory when enabling mcp_server
🔎 Example fix using template defaults
  mcp_server: null
-  #   port: 30800
-  #   logging_level: "INFO"
+  # When enabling mcp_server, both port and logging_level are REQUIRED:
+  #   port: 30800  # Required: NodePort for external access
+  #   logging_level: "INFO"  # Required: Logging level for the server

Then in the templates, you could add defensive checks or defaults, for example in mcp-server-deployment.yaml:

{{- if not .Values.clpConfig.mcp_server.logging_level }}
{{- fail "mcp_server.logging_level is required when mcp_server is enabled" }}
{{- end }}

Contributor
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6d2c260 and bdb91d7.

📒 Files selected for processing (1)
  • tools/deployment/package-helm/templates/configmap.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-19T05:03:32.320Z
Learnt from: junhaoliao
Repo: y-scope/clp PR: 1818
File: tools/deployment/package-helm/templates/configmap.yaml:12-12
Timestamp: 2025-12-19T05:03:32.320Z
Learning: In the y-scope/clp Helm chart, the `api_server.host` configuration field in the ConfigMap (tools/deployment/package-helm/templates/configmap.yaml) is unused because no other k8s internal services need to reach the API server—it's only accessed from outside the cluster via NodePort.

Applied to files:

  • tools/deployment/package-helm/templates/configmap.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: package-image
  • GitHub Check: check-generated
  • GitHub Check: lint-check (ubuntu-24.04)
  • GitHub Check: lint-check (macos-15)

Comment on lines +84 to +91
{{- with .Values.clpConfig.mcp_server }}
mcp_server:
host: "localhost"
port: 8000
logging_level: {{ .logging_level | quote }}
{{- else }}
mcp_server: null
{{- end }}
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find and examine mcp_server configuration in the Helm chart

echo "=== Checking configmap.yaml structure ==="
cat tools/deployment/package-helm/templates/configmap.yaml | head -100

echo -e "\n=== Looking for values.yaml ==="
find tools/deployment/package-helm -name "values.yaml" -o -name "values*.yaml" | head -5

echo -e "\n=== Checking for service definitions ==="
find tools/deployment/package-helm/templates -name "*.yaml" -o -name "*.yml" | xargs ls -la

echo -e "\n=== Searching for mcp_server references ==="
rg -n "mcp_server" tools/deployment/package-helm/ --type yaml

Repository: y-scope/clp

Length of output: 12082


🏁 Script executed:

#!/bin/bash
# Check the mcp-server-service definition and compare with webui configuration

echo "=== mcp-server-service.yaml content ==="
cat tools/deployment/package-helm/templates/mcp-server-service.yaml

echo -e "\n=== webui configuration in configmap ==="
grep -A5 "webui:" tools/deployment/package-helm/templates/configmap.yaml

echo -e "\n=== values.yaml mcp_server section ==="
grep -B2 -A5 "mcp_server:" tools/deployment/package-helm/values.yaml

echo -e "\n=== Check if there are any other internal pod-to-pod accesses ==="
rg "localhost" tools/deployment/package-helm/templates/configmap.yaml

echo -e "\n=== Verify how api_server is configured (from retrieved learning) ==="
grep -n "api_server" tools/deployment/package-helm/templates/configmap.yaml || echo "api_server not found in configmap"

Repository: y-scope/clp

Length of output: 1259


🏁 Script executed:

#!/bin/bash
# Check the MCP server deployment and how it's intended to be accessed

echo "=== mcp-server-deployment.yaml ==="
cat tools/deployment/package-helm/templates/mcp-server-deployment.yaml

echo -e "\n=== Check if there's an ingress for mcp_server ==="
grep -r "mcp" tools/deployment/package-helm/templates/ | grep -i ingress || echo "No ingress found for mcp"

echo -e "\n=== Check PR documentation or comments ==="
head -50 tools/deployment/package-helm/values.yaml | tail -30

Repository: y-scope/clp

Length of output: 4339


Remove or update the hardcoded host: "localhost" in the ConfigMap mcp_server configuration.

The configmap sets host: "localhost" but the deployment command-line explicitly passes --host 0.0.0.0, which overrides this value. This inconsistency creates dead code and confusion. Since the deployment correctly binds to 0.0.0.0 to accept external connections via the NodePort service, the configmap should either remove the host field or align it with the deployment's intent by using 0.0.0.0. The port configuration (container port 8000) is correct and separate from the external NodePort value.

🤖 Prompt for AI Agents
In tools/deployment/package-helm/templates/configmap.yaml around lines 84 to 91,
the ConfigMap hardcodes host: "localhost" for mcp_server which conflicts with
the deployment's --host 0.0.0.0 and creates dead/confusing config; update the
template to either remove the host field entirely when not required or set it to
"0.0.0.0" (prefer removing if the app reads CLI args) by changing the mcp_server
block to omit host or use .Values.clpConfig.mcp_server.host so Helm values
control it, and ensure the default in values.yaml matches 0.0.0.0 if you choose
to keep the field.

Co-authored-by: kirkrodrigues <2454684+kirkrodrigues@users.noreply.github.com>
Contributor
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1b9adb5 and f3cd5fb.

📒 Files selected for processing (2)
  • tools/deployment/package-helm/Chart.yaml (1 hunks)
  • tools/deployment/package-helm/templates/mcp-server-deployment.yaml (1 hunks)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: lint-check (macos-15)
  • GitHub Check: lint-check (ubuntu-24.04)
  • GitHub Check: check-generated
🔇 Additional comments (7)
tools/deployment/package-helm/Chart.yaml (1)

3-3: LGTM! Version bump aligns with new MCP server feature.

The version increment appropriately reflects the addition of new Helm templates and configuration options for the MCP server deployment.

tools/deployment/package-helm/templates/mcp-server-deployment.yaml (6)

1-8: LGTM! Proper conditional rendering and metadata.

The deployment is correctly gated on the MCP server configuration and follows Helm best practices with appropriate labels and component designation.


27-37: LGTM! Proper dependency management with init containers.

The init containers correctly ensure that database tables and cache indices are ready before the MCP server starts, preventing startup failures.


79-86: LGTM! Health probes properly configured.

Both readiness and liveness probes correctly use HTTP health checks on the named port with appropriate timing configurations and efficient YAML anchor reuse.


87-96: LGTM! Volumes properly configured.

The volumes correctly reference the PVC for log persistence and the ConfigMap for configuration, with proper use of Helm helper functions for consistency.



1-96: Note: YAMLlint errors are false positives.

The static analysis tool reports syntax and bracing errors because YAMLlint doesn't understand Helm template syntax. These can be safely ignored as the template follows standard Helm conventions.
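If the noise becomes distracting, one common workaround is to exclude the Helm templates directory in the yamllint configuration (path shown is this chart's; the config filename assumes yamllint's default discovery):

```yaml
# .yamllint: Helm templates are not valid YAML until rendered, so skip them
ignore: |
  tools/deployment/package-helm/templates/
```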


21-21: Reconsider service account design for MCP server.

The MCP server shares the job-watcher service account with other components (webui, reducer, query-scheduler, jobs). The role provides only read-only access to jobs and pods. If the MCP server requires write permissions for cluster management, a dedicated service account with appropriate permissions should be created. If read-only access is intended, the shared account works but consider using a more descriptive service account name reflecting its broader use across multiple component types.
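A dedicated read-only account could look like the following sketch; all names are hypothetical and the rules mirror the read-only access described above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "mcp-server"              # hypothetical dedicated account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: "mcp-server-read"
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "mcp-server-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: "mcp-server-read"
subjects:
  - kind: ServiceAccount
    name: "mcp-server"
```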

    {{- include "clp.labels" . | nindent 4 }}
    app.kubernetes.io/component: "mcp-server"
spec:
  replicas: 1
Contributor
🧹 Nitpick | 🔵 Trivial

Consider making replica count configurable.

The replica count is hard-coded to 1, which limits scalability options. For production deployments, consider making this configurable via values.yaml.

🔎 Suggested approach

In values.yaml, add:

mcp_server:
  replicas: 1
  # ... other config

Then update the deployment:

-  replicas: 1
+  replicas: {{ .Values.clpConfig.mcp_server.replicas | default 1 }}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  replicas: 1
+  replicas: {{ .Values.clpConfig.mcp_server.replicas | default 1 }}
🤖 Prompt for AI Agents
In tools/deployment/package-helm/templates/mcp-server-deployment.yaml around
line 10 the replica count is hard-coded to 1; make it configurable by reading a
value from values.yaml (e.g. mcp_server.replicas) with a sensible default (1)
when rendering the template, then add the corresponding mcp_server.replicas key
to values.yaml (document the default and recommended production value).

Comment on lines +38 to +78
      containers:
        - name: "mcp-server"
          image: "{{ include "clp.image.ref" . }}"
          imagePullPolicy: "{{ .Values.image.clpPackage.pullPolicy }}"
          env:
            - name: "CLP_DB_PASS"
              valueFrom:
                secretKeyRef:
                  name: {{ include "clp.fullname" . }}-database
                  key: "password"
            - name: "CLP_DB_USER"
              valueFrom:
                secretKeyRef:
                  name: {{ include "clp.fullname" . }}-database
                  key: "username"
            - name: "CLP_LOGGING_LEVEL"
              value: {{ .Values.clpConfig.mcp_server.logging_level | quote }}
            - name: "CLP_LOGS_DIR"
              value: "/var/log/mcp_server"
            - name: "PYTHONPATH"
              value: "/opt/clp/lib/python3/site-packages"
          ports:
            - name: "mcp"
              containerPort: 8000
          volumeMounts:
            - name: "config"
              mountPath: "/etc/clp-config.yaml"
              subPath: "clp-config.yaml"
              readOnly: true
            - name: {{ include "clp.volumeName" (dict
                "component_category" "mcp-server"
                "name" "logs"
              ) | quote }}
              mountPath: "/var/log/mcp_server"
          command: [
            "python3", "-u",
            "-m", "clp_mcp_server.clp_mcp_server",
            "--host", "0.0.0.0",
            "--port", "8000",
            "--config-path", "/etc/clp-config.yaml"
          ]
Contributor
@coderabbitai coderabbitai bot Dec 19, 2025
🧹 Nitpick | 🔵 Trivial

Add resource requests and limits.

The container lacks resource requests and limits, which is a Kubernetes best practice for production deployments to ensure proper scheduling, prevent resource exhaustion, and enable horizontal pod autoscaling.

🔎 Suggested addition

Add to values.yaml:

mcp_server:
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "500m"

Then add to the container spec after command::

           command: [
             "python3", "-u",
             "-m", "clp_mcp_server.clp_mcp_server",
             "--host", "0.0.0.0",
             "--port", "8000",
             "--config-path", "/etc/clp-config.yaml"
           ]
+          resources:
+            {{- toYaml .Values.clpConfig.mcp_server.resources | nindent 12 }}

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 46-46: too many spaces inside braces (braces)
[error] 51-51: too many spaces inside braces (braces)
[error] 54-54: too many spaces inside braces (braces)
[error] 67-67: too many spaces inside braces (braces)
[error] 70-70: too many spaces inside braces (braces)

🤖 Prompt for AI Agents
tools/deployment/package-helm/templates/mcp-server-deployment.yaml lines 38-78:
the container spec is missing Kubernetes resource requests/limits; update the
Helm template to add a resources block on the container that references
configurable values (e.g., .Values.mcp_server.resources) so requests and limits
are set from values.yaml, and add the suggested default values to values.yaml
under mcp_server.resources with sensible requests and limits (memory and cpu) so
deployments have predictable scheduling and resource enforcement.

Member Author
@junhaoliao junhaoliao Dec 19, 2025
We will revisit this alongside the Docker Compose orchestration work and add resource requests/limits in the future.

Contributor
@junhaoliao, understood! Just to clarify, the suggestion was specifically for Kubernetes resource configuration (requests/limits) in the Helm chart rather than Docker Compose. These help with pod scheduling and resource management in k8s clusters.

Since you're planning to revisit this later, would you like me to open a GitHub issue to track adding resource requests and limits to the MCP server deployment as a follow-up task?


🧠 Learnings used
Learnt from: junhaoliao
Repo: y-scope/clp PR: 1818
File: tools/deployment/package-helm/templates/configmap.yaml:12-12
Timestamp: 2025-12-19T05:03:32.320Z
Learning: In the y-scope/clp Helm chart, the `api_server.host` configuration field in the ConfigMap (tools/deployment/package-helm/templates/configmap.yaml) is unused because no other k8s internal services need to reach the API server—it's only accessed from outside the cluster via NodePort.
