feat(cli): add integrations command for managing database integrations #248

base: tk/integrations-config-file-schema

Conversation
- Introduced a new command group for integrations, allowing users to pull integrations from the Deepnote API and merge them with a local YAML file.
- Implemented error handling for missing authentication tokens and API request failures.
- Added tests for the integrations command, covering various scenarios including token resolution and integration merging.
- Updated the run command to utilize the new integrations configuration for database connections.
📝 Walkthrough

Adds a new `integrations` CLI command group for pulling integrations from the Deepnote API and merging them into a local YAML file (per the description above).

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as Integrations Command
    participant Token as Token Resolver
    participant API as Deepnote API
    participant FS as File System
    participant Dotenv as Dotenv Utils
    participant Merger as Integration Merger

    User->>CLI: integrations pull [--token, --file, --url, --env-file]
    CLI->>Token: resolveToken(flagToken, DEEPNOTE_TOKEN)
    alt token missing
        Token-->>CLI: MissingTokenError
        CLI-->>User: exit with error
    else token resolved
        Token-->>CLI: token
    end
    CLI->>API: GET /v2/integrations (Authorization)
    alt API error (401/403/other)
        API-->>CLI: ApiError(statusCode)
        CLI-->>User: exit with error
    else
        API-->>CLI: integrations (raw)
    end
    CLI->>FS: readRawIntegrations(file)
    FS-->>CLI: existingEntries (or empty)
    CLI->>Merger: toLocalIntegration & extractAndReplaceSecrets(raw, existing)
    Merger-->>CLI: processedIntegrations, secretsMap
    CLI->>Dotenv: updateDotEnv(envFile, secretsMap)
    Dotenv-->>CLI: .env updated
    CLI->>FS: writeIntegrationsFile(file, mergedYaml)
    FS-->>CLI: file written
    CLI-->>User: summary (added/updated/preserved counts)
```
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Codecov Report

❌ Patch coverage is … — additional details and impacted files:

```diff
@@                  Coverage Diff                       @@
##    tk/integrations-config-file-schema    #248   +/- ##
==========================================================
- Coverage                       92.95%   90.22%   -2.73%
==========================================================
  Files                              80       84       +4
  Lines                            4696     5022     +326
  Branches                         1305     1393      +88
==========================================================
+ Hits                             4365     4531     +166
- Misses                            331      491     +160
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/integrations.test.ts`:
- Around line 335-403: Tests redefine the API schema locally; instead export the
canonical apiResponseSchema from integrations.ts and update both tests in
integrations.test.ts to import and use that exported apiResponseSchema (and
remove the duplicated apiIntegrationSchema definitions) so the tests validate
the real schema artifacts from integrations.ts rather than a copy.
- Around line 56-186: The test duplicates merging logic; instead export the core
helpers (mergeIntegrations, readRawIntegrations, toLocalIntegration) from the
module that implements the CLI merging and update the test to import and call
those functions directly: ensure mergeIntegrations accepts existing YAML-parsed
entries and fetched integrations (or adapt call signature), use
readRawIntegrations to parse the file content in tests instead of reimplementing
parsing, and use toLocalIntegration for transforming API objects—then assert on
the output/output file to verify behavior rather than re-implementing the
algorithm.
- Around line 297-333: The tests are exercising token selection logic but not
the actual implementation—export the resolveToken function from its module and
update these tests to import and call resolveToken directly; specifically,
ensure resolveToken(token?: string) returns the flag value, falls back to
process.env.DEEPNOTE_TOKEN, and throws MissingTokenError (use the existing
MissingTokenError symbol) when neither is present, then replace the inline token
resolution in the four test cases with calls to resolveToken to validate the
real implementation.
In `@packages/cli/src/commands/integrations.ts`:
- Around line 26-27: The constant JSON_SCHEMA_URL contains a hardcoded branch
name ("refs/heads/tk/integrations-config-file-schema") which will break after
merge; update the value of JSON_SCHEMA_URL to point to a stable ref such as
"main" or a versioned tag (e.g., replace the branch segment with
"refs/heads/main" or a specific release tag) so the schema URL remains valid;
ensure you update the string assigned to JSON_SCHEMA_URL in
packages/cli/src/commands/integrations.ts and run any tests that fetch the
schema to verify the new URL resolves.
- Around line 123-129: The fetch call using const response = await fetch(url, {
... }) can hang; wrap it with an AbortController/AbortSignal timeout (e.g.,
AbortController with signal: AbortSignal.timeout(timeoutMs) or controller.signal
and a setTimeout to call controller.abort()) and pass that signal to fetch to
enforce a timeout; update the call site in integrations.ts where fetch(...) is
used to create the controller, pass signal in the options, handle AbortError
(timeout) separately from other errors, and choose a sensible timeout value
(configurable or a constant) so the CLI does not hang indefinitely.
- Around line 308-313: The current size-based check using existingEntries.length
> fetchedIntegrations.length is unreliable; instead compute preservedCount by
comparing IDs and counting existing entries that have no matching id in
fetchedIntegrations (e.g., count items in existingEntries where
fetchedIntegrations.some(fi => fi.id === ee.id) is false), then if
preservedCount > 0 call output(chalk.dim(...)) as before; update the condition
and preservedCount calculation around
mergedEntries/fetchedIntegrations/existingEntries to use ID-based comparison
rather than array length math (a sketch of this ID-based count follows this list).
In `@packages/cli/src/commands/run.ts`:
- Around line 255-268: Remove the commented-out dead function
getIntegrationEnvVarName from the file; either delete the entire commented block
or restore it as a real, explicitly deprecated export (export function
getIntegrationEnvVarName(...)) with the existing JSDoc `@deprecated` tag if other
modules still reference it; ensure there are no lingering references to the
commented symbol and run the build/tests to confirm nothing breaks.
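
For the preserved-count item above, here is a minimal sketch of the ID-based comparison; the `FetchedIntegration` shape and the `countPreserved` name are illustrative, not the PR's actual code:

```typescript
// Illustrative sketch only: count existing entries with no matching fetched id.
interface FetchedIntegration {
  id: string
}

function countPreserved(existingEntries: unknown[], fetchedIntegrations: FetchedIntegration[]): number {
  const fetchedIds = new Set(fetchedIntegrations.map(fi => fi.id))
  return existingEntries.filter(entry => {
    const id = (entry as Record<string, unknown>).id
    // Entries without a string id, or with an id the API no longer returns, are preserved.
    return typeof id !== 'string' || !fetchedIds.has(id)
  }).length
}
```

This stays correct even when the fetched list is longer than the existing file, which is exactly where the length-based check breaks down.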
```typescript
describe('integration merging', () => {
  // Helper to create a mock integration
  function createMockIntegration(overrides: Partial<ApiIntegration> = {}): ApiIntegration {
    return {
      id: 'test-id',
      name: 'Test Integration',
      type: 'pgsql',
      metadata: {
        host: 'localhost',
        port: '5432',
        database: 'testdb',
        user: 'testuser',
        password: 'testpass',
      },
      is_public: false,
      created_at: '2024-01-01T00:00:00Z',
      updated_at: '2024-01-01T00:00:00Z',
      federated_auth_method: null,
      ...overrides,
    }
  }

  it('creates new file when it does not exist', async () => {
    const filePath = join(tempDir, 'new-file.yaml')
    const integrations = [createMockIntegration({ id: 'new-id', name: 'New Integration' })]

    // Import the module dynamically to test
    const { stringify } = await import('yaml')

    // Write the file manually to simulate what the command does
    const yamlContent = stringify({
      integrations: integrations.map(i => ({
        id: i.id,
        name: i.name,
        type: i.type,
        metadata: i.metadata,
      })),
    })
    await writeFile(filePath, yamlContent, 'utf-8')

    const content = await readFile(filePath, 'utf-8')
    expect(content).toContain('new-id')
    expect(content).toContain('New Integration')
  })

  it('preserves invalid entries when merging', async () => {
    const filePath = join(tempDir, 'preserve-invalid.yaml')

    // Create file with an invalid integration (unknown type)
    const initialContent = `integrations:
  - id: invalid-id
    name: Invalid Integration
    type: unknown-type
    metadata:
      foo: bar
  - id: valid-id
    name: Valid PostgreSQL
    type: pgsql
    metadata:
      host: localhost
      port: "5432"
      database: mydb
      user: myuser
      password: mypass
`
    await writeFile(filePath, initialContent, 'utf-8')

    // Simulate fetched integrations (only the valid one gets updated)
    const fetchedIntegrations = [
      createMockIntegration({
        id: 'valid-id',
        name: 'Updated PostgreSQL',
        metadata: {
          host: 'newhost',
          port: '5432',
          database: 'newdb',
          user: 'newuser',
          password: 'newpass',
        },
      }),
    ]

    // Manually perform merge logic to verify
    const { parseYaml } = await import('@deepnote/blocks')
    const { stringify } = await import('yaml')

    const parsed = parseYaml(initialContent) as { integrations?: unknown[] }
    const existingEntries = parsed?.integrations ?? []

    // Create map of fetched by ID
    const fetchedById = new Map(
      fetchedIntegrations.map(i => [
        i.id,
        {
          id: i.id,
          name: i.name,
          type: i.type,
          metadata: i.metadata,
        },
      ])
    )

    const seenIds = new Set<string>()
    const mergedEntries = existingEntries.map(entry => {
      const entryId = (entry as Record<string, unknown>).id as string
      if (entryId && fetchedById.has(entryId)) {
        seenIds.add(entryId)
        return fetchedById.get(entryId)
      }
      return entry
    })

    for (const [id, integration] of fetchedById) {
      if (!seenIds.has(id)) {
        mergedEntries.push(integration)
      }
    }

    const yamlContent = stringify({ integrations: mergedEntries })
    await writeFile(filePath, yamlContent, 'utf-8')

    const finalContent = await readFile(filePath, 'utf-8')

    // Should preserve invalid entry
    expect(finalContent).toContain('invalid-id')
    expect(finalContent).toContain('unknown-type')

    // Should have updated valid entry
    expect(finalContent).toContain('Updated PostgreSQL')
    expect(finalContent).toContain('newhost')
  })
```
🧹 Nitpick | 🔵 Trivial
Tests replicate logic instead of calling actual functions.
Consider exporting mergeIntegrations, readRawIntegrations, toLocalIntegration and testing them directly. Current approach duplicates implementation logic.
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.test.ts` around lines 56 - 186, The
test duplicates merging logic; instead export the core helpers
(mergeIntegrations, readRawIntegrations, toLocalIntegration) from the module
that implements the CLI merging and update the test to import and call those
functions directly: ensure mergeIntegrations accepts existing YAML-parsed
entries and fetched integrations (or adapt call signature), use
readRawIntegrations to parse the file content in tests instead of reimplementing
parsing, and use toLocalIntegration for transforming API objects—then assert on
the output/output file to verify behavior rather than re-implementing the
algorithm.
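
If those helpers were exported as suggested, the second test could collapse to a sketch like this; the import path and exact signatures are assumptions based on the prompt, not the current module, and `filePath`/`fetchedIntegrations` come from the existing test setup above:

```typescript
import { readFile, writeFile } from 'node:fs/promises'
import { expect, it } from 'vitest'
import { stringify } from 'yaml'
// Assumed exports -- these helpers are module-private in the current diff:
import { mergeIntegrations, readRawIntegrations } from './integrations'

it('preserves invalid entries when merging', async () => {
  // filePath and fetchedIntegrations are provided by the surrounding test setup.
  const existingEntries = await readRawIntegrations(filePath)
  const merged = mergeIntegrations(existingEntries, fetchedIntegrations)

  await writeFile(filePath, stringify({ integrations: merged }), 'utf-8')
  const finalContent = await readFile(filePath, 'utf-8')

  expect(finalContent).toContain('invalid-id') // invalid entry preserved
  expect(finalContent).toContain('Updated PostgreSQL') // valid entry updated
})
```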
```typescript
describe('token resolution', () => {
  it('uses --token flag when provided', () => {
    // This tests the token resolution logic
    const options = { token: 'flag-token' }
    const token = options.token ?? process.env.DEEPNOTE_TOKEN

    expect(token).toBe('flag-token')
  })

  it('uses DEEPNOTE_TOKEN env var as fallback', () => {
    process.env.DEEPNOTE_TOKEN = 'env-token'
    const options: { token?: string } = {}
    const token = options.token ?? process.env.DEEPNOTE_TOKEN

    expect(token).toBe('env-token')
  })

  it('prefers --token flag over env var', () => {
    process.env.DEEPNOTE_TOKEN = 'env-token'
    const options = { token: 'flag-token' }
    const token = options.token ?? process.env.DEEPNOTE_TOKEN

    expect(token).toBe('flag-token')
  })

  it('throws when neither token nor env var provided', () => {
    delete process.env.DEEPNOTE_TOKEN
    const options: { token?: string } = {}
    const token = options.token ?? process.env.DEEPNOTE_TOKEN

    if (!token) {
      expect(() => {
        throw new MissingTokenError()
      }).toThrow(MissingTokenError)
    }
  })
})
```
🧹 Nitpick | 🔵 Trivial
Token resolution tests don't call actual resolveToken.
These tests verify the concept but not the implementation. Export resolveToken for direct testing.
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.test.ts` around lines 297 - 333, The
tests are exercising token selection logic but not the actual
implementation—export the resolveToken function from its module and update these
tests to import and call resolveToken directly; specifically, ensure
resolveToken(token?: string) returns the flag value, falls back to
process.env.DEEPNOTE_TOKEN, and throws MissingTokenError (use the existing
MissingTokenError symbol) when neither is present, then replace the inline token
resolution in the four test cases with calls to resolveToken to validate the
real implementation.
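
A sketch of what that export could look like, matching the behavior these tests probe; the error message is illustrative, and `MissingTokenError` stands in for the module's existing class:

```typescript
// Sketch: exported token resolution, testable directly.
// MissingTokenError stands in for the module's existing error class.
class MissingTokenError extends Error {
  constructor() {
    super('No API token provided: pass --token or set DEEPNOTE_TOKEN') // message is illustrative
  }
}

export function resolveToken(flagToken?: string): string {
  const token = flagToken ?? process.env.DEEPNOTE_TOKEN
  if (!token) {
    throw new MissingTokenError()
  }
  return token
}
```

The four tests then become direct assertions: `resolveToken('flag-token')`, `resolveToken()` with and without `DEEPNOTE_TOKEN` set, and `expect(() => resolveToken()).toThrow(MissingTokenError)`.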
```typescript
describe('API response validation', () => {
  it('validates correct API response structure', async () => {
    const { z } = await import('zod')

    const apiIntegrationSchema = z.object({
      id: z.string(),
      name: z.string(),
      type: z.string(),
      metadata: z.record(z.unknown()),
      is_public: z.boolean(),
      created_at: z.string(),
      updated_at: z.string(),
      federated_auth_method: z.string().nullable(),
    })

    const apiResponseSchema = z.object({
      integrations: z.array(apiIntegrationSchema),
    })

    const validResponse = {
      integrations: [
        {
          id: 'test-id',
          name: 'Test',
          type: 'pgsql',
          metadata: { host: 'localhost' },
          is_public: false,
          created_at: '2024-01-01T00:00:00Z',
          updated_at: '2024-01-01T00:00:00Z',
          federated_auth_method: null,
        },
      ],
    }

    const result = apiResponseSchema.safeParse(validResponse)
    expect(result.success).toBe(true)
  })

  it('rejects invalid API response', async () => {
    const { z } = await import('zod')

    const apiIntegrationSchema = z.object({
      id: z.string(),
      name: z.string(),
      type: z.string(),
      metadata: z.record(z.unknown()),
      is_public: z.boolean(),
      created_at: z.string(),
      updated_at: z.string(),
      federated_auth_method: z.string().nullable(),
    })

    const apiResponseSchema = z.object({
      integrations: z.array(apiIntegrationSchema),
    })

    const invalidResponse = {
      integrations: [
        {
          id: 'test-id',
          // missing required fields
        },
      ],
    }

    const result = apiResponseSchema.safeParse(invalidResponse)
    expect(result.success).toBe(false)
  })
})
```
🧹 Nitpick | 🔵 Trivial
Schema redefined in tests.
Export apiResponseSchema from integrations.ts to test the actual schema, not a copy.
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.test.ts` around lines 335 - 403, Tests
redefine the API schema locally; instead export the canonical apiResponseSchema
from integrations.ts and update both tests in integrations.test.ts to import and
use that exported apiResponseSchema (and remove the duplicated
apiIntegrationSchema definitions) so the tests validate the real schema
artifacts from integrations.ts rather than a copy.
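
A sketch of the export side; the schemas mirror the test copies above, and making them `export const` in integrations.ts is the suggested change, not yet the shipped code:

```typescript
// integrations.ts -- sketch: export the canonical schemas so tests can import them.
import { z } from 'zod'

export const apiIntegrationSchema = z.object({
  id: z.string(),
  name: z.string(),
  type: z.string(),
  metadata: z.record(z.unknown()),
  is_public: z.boolean(),
  created_at: z.string(),
  updated_at: z.string(),
  federated_auth_method: z.string().nullable(),
})

export const apiResponseSchema = z.object({
  integrations: z.array(apiIntegrationSchema),
})
```

The tests would then `import { apiResponseSchema } from './integrations'` and drop both local copies.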
```typescript
const JSON_SCHEMA_URL =
  'https://raw.githubusercontent.com/deepnote/deepnote/refs/heads/tk/integrations-config-file-schema/json-schemas/integrations-file-schema.json'
```
Hardcoded branch in schema URL will break after merge.
The URL references refs/heads/tk/integrations-config-file-schema. Update to main or a versioned tag before merging.
Proposed fix

```diff
 const JSON_SCHEMA_URL =
-  'https://raw.githubusercontent.com/deepnote/deepnote/refs/heads/tk/integrations-config-file-schema/json-schemas/integrations-file-schema.json'
+  'https://raw.githubusercontent.com/deepnote/deepnote/main/json-schemas/integrations-file-schema.json'
```

🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.ts` around lines 26 - 27, The constant
JSON_SCHEMA_URL contains a hardcoded branch name
("refs/heads/tk/integrations-config-file-schema") which will break after merge;
update the value of JSON_SCHEMA_URL to point to a stable ref such as "main" or a
versioned tag (e.g., replace the branch segment with "refs/heads/main" or a
specific release tag) so the schema URL remains valid; ensure you update the
string assigned to JSON_SCHEMA_URL in packages/cli/src/commands/integrations.ts
and run any tests that fetch the schema to verify the new URL resolves.
```typescript
const response = await fetch(url, {
  method: 'GET',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
})
```
🧹 Nitpick | 🔵 Trivial
Consider adding a timeout.
fetch without AbortSignal.timeout() can hang indefinitely on network issues.
Proposed fix

```diff
 const response = await fetch(url, {
   method: 'GET',
   headers: {
     Authorization: `Bearer ${token}`,
     'Content-Type': 'application/json',
   },
+  signal: AbortSignal.timeout(30_000),
 })
```
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.ts` around lines 123 - 129, The fetch
call using const response = await fetch(url, { ... }) can hang; wrap it with an
AbortController/AbortSignal timeout (e.g., AbortController with signal:
AbortSignal.timeout(timeoutMs) or controller.signal and a setTimeout to call
controller.abort()) and pass that signal to fetch to enforce a timeout; update
the call site in integrations.ts where fetch(...) is used to create the
controller, pass signal in the options, handle AbortError (timeout) separately
from other errors, and choose a sensible timeout value (configurable or a
constant) so the CLI does not hang indefinitely.
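
A sketch of the timeout handling the prompt describes; the 30-second value and the `fetchWithTimeout` name are assumptions, and `AbortSignal.timeout()` requires Node 17.3+:

```typescript
// Sketch: fetch with an enforced timeout, distinguishing timeouts from other failures.
const FETCH_TIMEOUT_MS = 30_000 // illustrative default; could be made configurable

async function fetchWithTimeout(url: string, token: string): Promise<Response> {
  try {
    return await fetch(url, {
      method: 'GET',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      signal: AbortSignal.timeout(FETCH_TIMEOUT_MS),
    })
  } catch (error) {
    // AbortSignal.timeout() rejects with a DOMException named 'TimeoutError'
    // (a manual controller.abort() would surface as 'AbortError' instead).
    if (error instanceof DOMException && error.name === 'TimeoutError') {
      throw new Error(`Request to ${url} timed out after ${FETCH_TIMEOUT_MS}ms`)
    }
    throw error
  }
}
```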
…ma' into tk/integrations-cli-pull
…a dotenv file, and implement referencing to environment

- Added support for managing database integration secrets in the CLI.
- Implemented dotenv utilities for reading and updating .env files.
- Enhanced integration commands to extract secrets and store them in a .env file.
- Introduced tests for dotenv functionalities and environment variable reference utilities.
- Updated package dependencies to include new database integrations.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/integrations.ts`:
- Around line 224-254: The mergeIntegrations function is unused and should be
removed; delete the entire function declaration for mergeIntegrations from this
file and ensure any intended behavior relies on mergeProcessedIntegrations
instead (verify there are no remaining references to mergeIntegrations and
update imports/usages if any); this removes the dead code and avoids the
pipeline error complaining about an unused function.
In `@packages/cli/src/commands/run.ts`:
- Around line 517-521: The function contains leftover commented-out code (the
lines referencing getIntegrationEnvVarName, envVarName and the conditional) that
was left after a refactor and clutters the logic; remove those dead comments so
the check uses only integrationsById.has(integrationId.toLowerCase()) and
related active logic (look for the commented block around
getIntegrationEnvVarName and the envVarName comment in the run.ts function and
delete it).
- Around line 165-189: The file contains a duplicated call to
validateRequirements(...) invoked both before and after the integration env var
injection block; remove the earlier call (the validateRequirements(file, inputs,
pythonEnv, integrationsById, options.notebook) that appears before calling
getEnvironmentVariablesForIntegrations/setting process.env) so there is only a
single validation call after env vars are injected, keeping the later
validateRequirements invocation with the same arguments.
- Around line 616-618: Duplicate "Parsing ..." log: remove the redundant log in
runDeepnoteProject to avoid repeating the same message that setupProject already
emits. Locate the block in runDeepnoteProject where it calls
log(chalk.dim(`Parsing ${absolutePath}...`)) conditioned on !isMachineOutput and
delete that call (or wrap it behind a new flag if you must preserve one
occurrence); ensure setupProject remains the single place that logs the parsing
message for absolutePath so callers and machine output behavior are unchanged.
In `@packages/cli/src/utils/dotenv.ts`:
- Around line 55-58: The unescape logic in dotenv.ts only handles \n, \\" and
\\\\, leaving sequences like \t and \r literal; update the double-quoted branch
(where trimmedValue.startsWith('"') and value is computed) to unescape
additional common sequences (at minimum \\r and \\t, optionally \\b, \\f, \\v
and \\' if desired) so escaped tabs and carriage returns are preserved, and add
a short inline comment noting which escapes are supported; locate the
replacement chain around trimmedValue/value and extend the replace calls or use
a single regex replacer to map these escape sequences to their actual
characters.
- Around line 161-165: The code uses a non-null assertion
pendingUpdates.get(line.key)! inside the loop over lines; replace it by
retrieving the value into a local variable and guarding against undefined (e.g.,
const val = pendingUpdates.get(line.key); if (val !== undefined) { line.value =
val; pendingUpdates.delete(line.key) }) so you avoid the `!` operator and
satisfy the Biome lint; target the loop that iterates `for (const line of
lines)` and the symbols `pendingUpdates`, `line`, and `line.key`.
In `@packages/database-integrations/src/index.ts`:
- Around line 28-34: The exports are out of alphabetical order causing the lint
failure; reorder the export blocks so the secret-field exports
(getAllSecretFieldPaths, getSecretFieldPaths, isSecretField) come before the
Snowflake and SQLAlchemy exports (getSnowflakeFederatedAuthSqlAlchemyInput and
SqlAlchemyInput). Update the export sequence in
packages/database-integrations/src/index.ts to place the './secret-field-paths'
export block above the './snowflake-integration-env-vars' and
'./sql-alchemy-types' exports to satisfy Biome sorting rules.
```typescript
function mergeIntegrations(existingEntries: unknown[], fetchedIntegrations: ApiIntegration[]): unknown[] {
  // Create map of fetched integrations by ID
  const fetchedById = new Map<string, LocalIntegration>()
  for (const integration of fetchedIntegrations) {
    fetchedById.set(integration.id, toLocalIntegration(integration))
  }

  // Track which IDs we've seen
  const seenIds = new Set<string>()

  // Update existing entries
  const mergedEntries = existingEntries.map(entry => {
    const entryRecord = entry as Record<string, unknown>
    const entryId = typeof entryRecord.id === 'string' ? entryRecord.id : undefined

    if (entryId && fetchedById.has(entryId)) {
      seenIds.add(entryId)
      return fetchedById.get(entryId) // Replace with fetched version
    }
    return entry // Keep existing (possibly invalid) entry
  })

  // Append new integrations not in original file
  for (const [id, integration] of fetchedById) {
    if (!seenIds.has(id)) {
      mergedEntries.push(integration)
    }
  }

  return mergedEntries
}
```
Remove unused mergeIntegrations function.
Pipeline error: function defined but never called. mergeProcessedIntegrations supersedes it.
Proposed fix
```diff
-/**
- * Merge fetched integrations with existing file entries.
- * Preserves invalid/unparseable entries.
- */
-function mergeIntegrations(existingEntries: unknown[], fetchedIntegrations: ApiIntegration[]): unknown[] {
-  // Create map of fetched integrations by ID
-  const fetchedById = new Map<string, LocalIntegration>()
-  for (const integration of fetchedIntegrations) {
-    fetchedById.set(integration.id, toLocalIntegration(integration))
-  }
-
-  // Track which IDs we've seen
-  const seenIds = new Set<string>()
-
-  // Update existing entries
-  const mergedEntries = existingEntries.map(entry => {
-    const entryRecord = entry as Record<string, unknown>
-    const entryId = typeof entryRecord.id === 'string' ? entryRecord.id : undefined
-
-    if (entryId && fetchedById.has(entryId)) {
-      seenIds.add(entryId)
-      return fetchedById.get(entryId) // Replace with fetched version
-    }
-    return entry // Keep existing (possibly invalid) entry
-  })
-
-  // Append new integrations not in original file
-  for (const [id, integration] of fetchedById) {
-    if (!seenIds.has(id)) {
-      mergedEntries.push(integration)
-    }
-  }
-
-  return mergedEntries
-}
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 224-224: lint/correctness/noUnusedVariables: 'mergeIntegrations' is unused. (Biome) Command 'pnpm run lintAndFormat' reported lint errors.
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/integrations.ts` around lines 224 - 254, The
mergeIntegrations function is unused and should be removed; delete the entire
function declaration for mergeIntegrations from this file and ensure any
intended behavior relies on mergeProcessedIntegrations instead (verify there are
no remaining references to mergeIntegrations and update imports/usages if any);
this removes the dead code and avoids the pipeline error complaining about an
unused function.
```diff
 // Validate that all requirements are met (inputs, integrations) - exit code 2 if not
 await validateRequirements(file, inputs, pythonEnv, integrationsById, options.notebook)

 // Inject integration environment variables into process.env
 // This allows SQL blocks to access database connections
 if (parsedIntegrations.integrations.length > 0) {
   const { envVars, errors } = getEnvironmentVariablesForIntegrations(parsedIntegrations.integrations, {
     projectRootDirectory: workingDirectory,
   })

   // Log any errors from env var generation
   for (const error of errors) {
     debug(`Integration env var error: ${error.message}`)
   }

   // Inject env vars into process.env
   for (const { name, value } of envVars) {
     process.env[name] = value
   }

   debug(`Injected ${envVars.length} environment variables for integrations`)
 }

 // Validate that all requirements are met (inputs, integrations) - exit code 2 if not
-await validateRequirements(file, inputs, pythonEnv, options.notebook)
+await validateRequirements(file, inputs, pythonEnv, integrationsById, options.notebook)
```
Duplicate validateRequirements call.
Called at line 166 and again at line 189 with identical arguments. Remove one.
Proposed fix

```diff
   // Build lookup map for integration IDs (lowercase for case-insensitive matching)
   const integrationsById = buildIntegrationsById(parsedIntegrations.integrations)

-  // Validate that all requirements are met (inputs, integrations) - exit code 2 if not
-  await validateRequirements(file, inputs, pythonEnv, integrationsById, options.notebook)
-
   // Inject integration environment variables into process.env
   // This allows SQL blocks to access database connections
   if (parsedIntegrations.integrations.length > 0) {
```

🤖 Prompt for AI Agents
In `@packages/cli/src/commands/run.ts` around lines 165 - 189, The file contains a
duplicated call to validateRequirements(...) invoked both before and after the
integration env var injection block; remove the earlier call (the
validateRequirements(file, inputs, pythonEnv, integrationsById,
options.notebook) that appears before calling
getEnvironmentVariablesForIntegrations/setting process.env) so there is only a
single validation call after env vars are injected, keeping the later
validateRequirements invocation with the same arguments.
```typescript
// Check if integration is configured in the integrations file
// Use lowercase for case-insensitive UUID matching
// const envVarName = getIntegrationEnvVarName(integrationId)
// if (!process.env[envVarName]) { ... }
if (!integrationsById.has(integrationId.toLowerCase())) {
```
🛠️ Refactor suggestion | 🟠 Major
Remove dead commented code inside function.
Leftover from refactor. Clutters the logic.
Proposed fix
```diff
 if (integrationId && !builtInIntegrations.has(integrationId)) {
   // Check if integration is configured in the integrations file
   // Use lowercase for case-insensitive UUID matching
-  // const envVarName = getIntegrationEnvVarName(integrationId)
-  // if (!process.env[envVarName]) { ... }
   if (!integrationsById.has(integrationId.toLowerCase())) {
```

🤖 Prompt for AI Agents
if (!integrationsById.has(integrationId.toLowerCase())) {🤖 Prompt for AI Agents
In `@packages/cli/src/commands/run.ts` around lines 517 - 521, The function
contains leftover commented-out code (the lines referencing
getIntegrationEnvVarName, envVarName and the conditional) that was left after a
refactor and clutters the logic; remove those dead comments so the check uses
only integrationsById.has(integrationId.toLowerCase()) and related active logic
(look for the commented block around getIntegrationEnvVarName and the envVarName
comment in the run.ts function and delete it).
```typescript
if (!isMachineOutput) {
  log(chalk.dim(`Parsing ${absolutePath}...`))
}
```
Duplicate log statement.
"Parsing ..." is logged at line 132 in setupProject and again here at line 617 in runDeepnoteProject. The latter runs after setup completes.
Proposed fix

```diff
-  if (!isMachineOutput) {
-    log(chalk.dim(`Parsing ${absolutePath}...`))
-  }
-
   // Create and start the execution engine
```
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/run.ts` around lines 616 - 618, Duplicate "Parsing
..." log: remove the redundant log in runDeepnoteProject to avoid repeating the
same message that setupProject already emits. Locate the block in
runDeepnoteProject where it calls log(chalk.dim(`Parsing ${absolutePath}...`))
conditioned on !isMachineOutput and delete that call (or wrap it behind a new
flag if you must preserve one occurrence); ensure setupProject remains the
single place that logs the parsing message for absolutePath so callers and
machine output behavior are unchanged.
```typescript
if (trimmedValue.startsWith('"')) {
  // Double-quoted: handle escape sequences
  value = value.replace(/\\n/g, '\n').replace(/\\"/g, '"').replace(/\\\\/g, '\\')
}
```
🧹 Nitpick | 🔵 Trivial
Escape sequence handling is minimal.
Only \n, \", \\ are unescaped. Other sequences like \t, \r pass through literally. Acceptable for typical .env usage, but worth documenting.
🤖 Prompt for AI Agents
In `@packages/cli/src/utils/dotenv.ts` around lines 55 - 58, The unescape logic in
dotenv.ts only handles \n, \\" and \\\\, leaving sequences like \t and \r
literal; update the double-quoted branch (where trimmedValue.startsWith('"') and
value is computed) to unescape additional common sequences (at minimum \\r and
\\t, optionally \\b, \\f, \\v and \\' if desired) so escaped tabs and carriage
returns are preserved, and add a short inline comment noting which escapes are
supported; locate the replacement chain around trimmedValue/value and extend the
replace calls or use a single regex replacer to map these escape sequences to
their actual characters.
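
A sketch of the single-regex variant; the supported escapes follow the prompt's minimum set, and the names are illustrative:

```typescript
// Sketch: unescape \n, \r, \t, \" and \\ in one pass; unknown escapes pass through.
const DOTENV_ESCAPES: Record<string, string> = {
  n: '\n',
  r: '\r',
  t: '\t',
  '"': '"',
  '\\': '\\',
}

function unescapeDoubleQuoted(raw: string): string {
  // The character class restricts matches to the keys of DOTENV_ESCAPES.
  return raw.replace(/\\([nrt"\\])/g, (_match, char: string) => DOTENV_ESCAPES[char])
}
```

For example, `unescapeDoubleQuoted('col1\\tcol2\\r\\n')` yields a real tab and CRLF, where the current chain would leave `\t` and `\r` literal.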
```typescript
for (const line of lines) {
  if (line.type === 'variable' && line.key !== undefined && pendingUpdates.has(line.key)) {
    line.value = pendingUpdates.get(line.key)!
    pendingUpdates.delete(line.key)
  }
```
Fix non-null assertion to satisfy Biome lint.
Pipeline fails on pendingUpdates.get(line.key)!. Use nullish coalescing or refactor.
Proposed fix
```diff
 for (const line of lines) {
   if (line.type === 'variable' && line.key !== undefined && pendingUpdates.has(line.key)) {
-    line.value = pendingUpdates.get(line.key)!
+    line.value = pendingUpdates.get(line.key) ?? line.value
     pendingUpdates.delete(line.key)
   }
 }
```

🧰 Tools
🪛 GitHub Actions: CI
[error] 163-163: lint/style/noNonNullAssertion: Forbidden non-null assertion. (Biome) Command 'pnpm run lintAndFormat' reported lint errors.
🤖 Prompt for AI Agents
In `@packages/cli/src/utils/dotenv.ts` around lines 161 - 165, The code uses a
non-null assertion pendingUpdates.get(line.key)! inside the loop over lines;
replace it by retrieving the value into a local variable and guarding against
undefined (e.g., const val = pendingUpdates.get(line.key); if (val !==
undefined) { line.value = val; pendingUpdates.delete(line.key) }) so you avoid
the `!` operator and satisfy the Biome lint; target the loop that iterates `for
(const line of lines)` and the symbols `pendingUpdates`, `line`, and `line.key`.
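
A sketch of the guarded version the prompt describes; the `DotenvLine` shape is assumed from the utility's usage, not copied from the PR:

```typescript
// Assumed line shape, inferred from the surrounding dotenv utility:
interface DotenvLine {
  type: 'variable' | 'comment' | 'blank'
  key?: string
  value?: string
}

function applyPendingUpdates(lines: DotenvLine[], pendingUpdates: Map<string, string>): void {
  for (const line of lines) {
    if (line.type === 'variable' && line.key !== undefined) {
      const val = pendingUpdates.get(line.key)
      if (val !== undefined) {
        line.value = val // no non-null assertion needed: val is narrowed here
        pendingUpdates.delete(line.key)
      }
    }
  }
}
```

This also drops the redundant `has()` lookup, since `get()` returning `undefined` already covers the missing-key case.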
```typescript
export { getSnowflakeFederatedAuthSqlAlchemyInput } from './snowflake-integration-env-vars'
export type { SqlAlchemyInput } from './sql-alchemy-types'
export {
  getAllSecretFieldPaths,
  getSecretFieldPaths,
  isSecretField,
} from './secret-field-paths'
```
Fix export order to unblock CI
The lint error indicates exports aren’t sorted. Reorder the blocks so ./secret-field-paths sits before ./snowflake-integration-env-vars and ./sql-* to satisfy Biome.
🔧 Suggested reorder
```diff
-export { getSnowflakeFederatedAuthSqlAlchemyInput } from './snowflake-integration-env-vars'
-export type { SqlAlchemyInput } from './sql-alchemy-types'
-export {
-  getAllSecretFieldPaths,
-  getSecretFieldPaths,
-  isSecretField,
-} from './secret-field-paths'
+export {
+  getAllSecretFieldPaths,
+  getSecretFieldPaths,
+  isSecretField,
+} from './secret-field-paths'
+export { getSnowflakeFederatedAuthSqlAlchemyInput } from './snowflake-integration-env-vars'
+export type { SqlAlchemyInput } from './sql-alchemy-types'
```
🤖 Prompt for AI Agents
In `@packages/database-integrations/src/index.ts` around lines 28 - 34, The
exports are out of alphabetical order causing the lint failure; reorder the
export blocks so the secret-field exports (getAllSecretFieldPaths,
getSecretFieldPaths, isSecretField) come before the Snowflake and SQLAlchemy
exports (getSnowflakeFederatedAuthSqlAlchemyInput and SqlAlchemyInput). Update
the export sequence in packages/database-integrations/src/index.ts to place the
'./secret-field-paths' export block above the './snowflake-integration-env-vars'
and './sql-alchemy-types' exports to satisfy Biome sorting rules.
Summary by CodeRabbit
New Features
Tests