diff --git a/.agents/skills/add-ai-integration/SKILL.md b/.agents/skills/add-ai-integration/SKILL.md new file mode 100644 index 000000000000..8323aa86fd56 --- /dev/null +++ b/.agents/skills/add-ai-integration/SKILL.md @@ -0,0 +1,121 @@ +--- +name: add-ai-integration +description: Add a new AI provider integration to the Sentry JavaScript SDK. Use when contributing a new AI instrumentation (OpenAI, Anthropic, Vercel AI, LangChain, etc.) or modifying an existing one. +argument-hint: +--- + +# Adding a New AI Integration + +## Decision Tree + +``` +Does the AI SDK have native OpenTelemetry support? +|- YES -> Does it emit OTel spans automatically? +| |- YES (like Vercel AI) -> Pattern 1: OTel Span Processors +| +- NO -> Pattern 2: OTel Instrumentation (wrap client) ++- NO -> Does the SDK provide hooks/callbacks? + |- YES (like LangChain) -> Pattern 3: Callback/Hook Based + +- NO -> Pattern 4: Client Wrapping +``` + +## Runtime-Specific Placement + +If an AI SDK only works in one runtime, code lives exclusively in that runtime's package. Do NOT add it to `packages/core/`. + +- **Node.js-only** -> `packages/node/src/integrations/tracing/{provider}/` +- **Cloudflare-only** -> `packages/cloudflare/src/integrations/tracing/{provider}.ts` +- **Browser-only** -> `packages/browser/src/integrations/tracing/{provider}/` +- **Multi-runtime** -> shared core in `packages/core/src/tracing/{provider}/` with runtime-specific wrappers + +## Span Hierarchy + +- `gen_ai.invoke_agent` — parent/pipeline spans (chains, agents, orchestration) +- `gen_ai.chat`, `gen_ai.generate_text`, etc. — child spans (actual LLM calls) + +## Shared Utilities (`packages/core/src/tracing/ai/`) + +- `gen-ai-attributes.ts` — OTel Semantic Convention attribute constants. 
**Always use these, never hardcode.** +- `utils.ts` — `setTokenUsageAttributes()`, `getTruncatedJsonString()`, `truncateGenAiMessages()`, `buildMethodPath()` +- Only use attributes from [Sentry Gen AI Conventions](https://getsentry.github.io/sentry-conventions/attributes/gen_ai/). + +## Streaming + +- **Non-streaming:** `startSpan()`, set attributes from response +- **Streaming:** `startSpanManual()`, accumulate state via async generator or event listeners, set `GEN_AI_RESPONSE_STREAMING_ATTRIBUTE: true`, call `span.end()` in finally block +- Detect via `params.stream === true` +- References: `openai/streaming.ts` (async generator), `anthropic-ai/streaming.ts` (event listeners) + +## Token Accumulation + +- **Child spans:** Set tokens directly from API response via `setTokenUsageAttributes()` +- **Parent spans (`invoke_agent`):** Accumulate from children using event processor (see `vercel-ai/`) + +## Pattern 1: OTel Span Processors + +**Use when:** SDK emits OTel spans automatically (Vercel AI) + +1. **Core:** Create `add{Provider}Processors()` in `packages/core/src/tracing/{provider}/index.ts` — registers `spanStart` listener + event processor +2. **Node.js:** Add `callWhenPatched()` optimization in `packages/node/src/integrations/tracing/{provider}/index.ts` — defers registration until package is imported +3. **Edge:** Direct registration in `packages/cloudflare/src/integrations/tracing/{provider}.ts` — no OTel, call processors immediately + +Reference: `packages/node/src/integrations/tracing/vercelai/` + +## Pattern 2: OTel Instrumentation (Client Wrapping) + +**Use when:** SDK has no native OTel support (OpenAI, Anthropic, Google GenAI) + +1. **Core:** Create `instrument{Provider}Client()` in `packages/core/src/tracing/{provider}/index.ts` — Proxy to wrap client methods, create spans manually +2. **Node.js `instrumentation.ts`:** Patch module exports, wrap client constructor. Check `_INTERNAL_shouldSkipAiProviderWrapping()` for LangChain compatibility. +3. 
**Node.js `index.ts`:** Export integration function using `generateInstrumentOnce()` helper + +Reference: `packages/node/src/integrations/tracing/openai/` + +## Pattern 3: Callback/Hook Based + +**Use when:** SDK provides lifecycle hooks (LangChain, LangGraph) + +1. **Core:** Create `create{Provider}CallbackHandler()` — implement SDK's callback interface, create spans in callbacks +2. **Node.js `instrumentation.ts`:** Auto-inject callbacks by patching runnable methods. Disable underlying AI provider wrapping. + +Reference: `packages/node/src/integrations/tracing/langchain/` + +## Auto-Instrumentation (Node.js) + +**Mandatory** for Node.js AI integrations. OTel only patches when the package is imported (zero cost if unused). + +### Steps + +1. **Add to `getAutoPerformanceIntegrations()`** in `packages/node/src/integrations/tracing/index.ts` — LangChain MUST come first +2. **Add to `getOpenTelemetryInstrumentationToPreload()`** for OTel-based integrations +3. **Export from `packages/node/src/index.ts`**: integration function + options type +4. **Add E2E tests:** + - Node.js: `dev-packages/node-integration-tests/suites/tracing/{provider}/` + - Cloudflare: `dev-packages/cloudflare-integration-tests/suites/tracing/{provider}/` + - Browser: `dev-packages/browser-integration-tests/suites/tracing/ai-providers/{provider}/` + +## Key Rules + +1. Respect `sendDefaultPii` for `recordInputs`/`recordOutputs` +2. Set `SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN = 'auto.ai.{provider}'` (alphanumerics, `_`, `.` only) +3. Truncate large data with helper functions from `utils.ts` +4. 
`gen_ai.invoke_agent` for parent ops, `gen_ai.chat` for child ops + +## Checklist + +- [ ] Runtime-specific code placed only in that runtime's package +- [ ] Added to `getAutoPerformanceIntegrations()` in correct order (Node.js) +- [ ] Added to `getOpenTelemetryInstrumentationToPreload()` (Node.js with OTel) +- [ ] Exported from appropriate package index +- [ ] E2E tests added and verifying auto-instrumentation +- [ ] Only used attributes from [Sentry Gen AI Conventions](https://getsentry.github.io/sentry-conventions/attributes/gen_ai/) +- [ ] JSDoc says "enabled by default" or "not enabled by default" +- [ ] Documented how to disable (if auto-enabled) +- [ ] Verified OTel only patches when package imported (Node.js) + +## Reference Implementations + +- **Pattern 1 (Span Processors):** `packages/node/src/integrations/tracing/vercelai/` +- **Pattern 2 (Client Wrapping):** `packages/node/src/integrations/tracing/openai/` +- **Pattern 3 (Callback/Hooks):** `packages/node/src/integrations/tracing/langchain/` + +**When in doubt, follow the pattern of the most similar existing integration.** diff --git a/.claude/skills/add-cdn-bundle/SKILL.md b/.agents/skills/add-cdn-bundle/SKILL.md similarity index 100% rename from .claude/skills/add-cdn-bundle/SKILL.md rename to .agents/skills/add-cdn-bundle/SKILL.md diff --git a/.agents/skills/dotagents/SKILL.md b/.agents/skills/dotagents/SKILL.md new file mode 100644 index 000000000000..85be3601419b --- /dev/null +++ b/.agents/skills/dotagents/SKILL.md @@ -0,0 +1,80 @@ +--- +name: dotagents +description: Manage agent skill dependencies with dotagents. Use when asked to "add a skill", "install skills", "remove a skill", "update skills", "dotagents init", "agents.toml", "agents.lock", "sync skills", "list skills", "set up dotagents", "configure trust", "add MCP server", "add hook", "wildcard skills", "user scope", or any dotagents-related task. +--- + +Manage agent skill dependencies declared in `agents.toml`. 
dotagents resolves, installs, and symlinks skills so multiple agent tools (Claude Code, Cursor, Codex, VS Code, OpenCode) discover them from `.agents/skills/`. + +## References + +Read the relevant reference when the task requires deeper detail: + +| Document | Read When | +| ---------------------------------------------------------- | ------------------------------------------------------------------------- | +| [references/cli-reference.md](references/cli-reference.md) | Full command options, flags, examples | +| [references/configuration.md](references/configuration.md) | Editing agents.toml, source formats, trust, MCP, hooks, wildcards, scopes | +| [references/config-schema.md](references/config-schema.md) | Exact field names, types, and defaults | + +## Quick Start + +```bash +# Initialize a new project (interactive TUI) +dotagents init + +# Add a skill from GitHub +dotagents add getsentry/skills find-bugs + +# Add multiple skills at once +dotagents add getsentry/skills find-bugs code-review commit + +# Add all skills from a repo +dotagents add getsentry/skills --all + +# Add a pinned skill +dotagents add getsentry/warden@v1.0.0 + +# Install all dependencies from agents.toml +dotagents install + +# List installed skills +dotagents list +``` + +## Commands + +| Command | Description | +| --------------------------- | ------------------------------------------------------------------ | +| `dotagents init` | Initialize `agents.toml` and `.agents/` directory | +| `dotagents install` | Install all skills from `agents.toml` | +| `dotagents add ` | Add a skill dependency | +| `dotagents remove ` | Remove a skill | +| `dotagents update [name]` | Update skills to latest versions | +| `dotagents sync` | Reconcile state (adopt orphans, repair symlinks, verify integrity) | +| `dotagents list` | Show installed skills and their status | +| `dotagents mcp` | Add, remove, or list MCP server declarations | + +All commands accept `--user` to operate on user scope (`~/.agents/`) 
instead of the current project. + +For full options and flags, read [references/cli-reference.md](references/cli-reference.md). + +## Source Formats + +| Format | Example | Description | +| ---------------- | -------------------------------------- | ------------------------------------- | +| GitHub shorthand | `getsentry/skills` | Owner/repo (resolves to GitHub HTTPS) | +| GitHub pinned | `getsentry/warden@v1.0.0` | With tag, branch, or commit | +| GitHub SSH | `git@github.com:owner/repo.git` | SSH clone URL | +| GitHub HTTPS | `https://github.com/owner/repo` | Full HTTPS URL | +| Git URL | `git:https://git.corp.dev/team/skills` | Any non-GitHub git remote | +| Local path | `path:./my-skills/custom` | Relative to project root | + +## Key Concepts + +- **`.agents/skills/`** is the canonical home for all installed skills +- **`agents.toml`** declares dependencies; **`agents.lock`** pins exact commits and integrity hashes +- **Symlinks**: `.claude/skills/`, `.cursor/skills/` point to `.agents/skills/` +- **Wildcards**: `name = "*"` installs all skills from a source, with optional `exclude` list +- **Trust**: Optional `[trust]` section restricts which sources are allowed +- **Hooks**: `[[hooks]]` declarations write tool-event hooks to each agent's config +- **Gitignore**: When `gitignore = true`, managed skills are gitignored; custom in-place skills are tracked +- **User scope**: `--user` flag manages skills in `~/.agents/` shared across all projects diff --git a/.agents/skills/dotagents/references/cli-reference.md b/.agents/skills/dotagents/references/cli-reference.md new file mode 100644 index 000000000000..a67817c959e0 --- /dev/null +++ b/.agents/skills/dotagents/references/cli-reference.md @@ -0,0 +1,215 @@ +# CLI Reference + +## Usage + +``` +dotagents [--user] [options] +``` + +### Global Flags + +| Flag | Description | +| ----------------- | --------------------------------------------------------------- | +| `--user` | Operate on user scope (`~/.agents/`) 
instead of current project | +| `--help`, `-h` | Show help | +| `--version`, `-V` | Show version | + +## Commands + +### `init` + +Initialize a new project with `agents.toml` and `.agents/` directory. Automatically includes the `dotagents` skill from `getsentry/dotagents` for CLI guidance, and attempts to install it. + +```bash +dotagents init +dotagents init --agents claude,cursor +dotagents init --force +dotagents --user init +``` + +| Flag | Description | +| ----------------- | ----------------------------------------------------------------------- | +| `--agents ` | Comma-separated agent targets (claude, cursor, codex, vscode, opencode) | +| `--force` | Overwrite existing `agents.toml` | + +**Interactive mode** (when TTY is available): + +1. Select agents (multiselect) +2. Manage `.gitignore` for installed skills? +3. Trust policy: allow all sources or restrict to trusted +4. If restricted: enter trusted GitHub orgs/repos (comma-separated) + +### `install` + +Install all skill dependencies declared in `agents.toml`. + +```bash +dotagents install +dotagents install --frozen +dotagents install --force +``` + +| Flag | Description | +| ---------- | ------------------------------------------------------------------ | +| `--frozen` | Fail if lockfile is missing or out of sync; do not modify lockfile | +| `--force` | Ignore locked commits and resolve all skills to latest refs | + +**Workflow:** + +1. Load config and lockfile +2. Expand wildcard entries (discover all skills from source) +3. Validate trust for each skill source +4. Resolve skills (use locked commits when available) +5. Copy skills into `.agents/skills//` +6. Write/update lockfile with integrity hashes +7. Generate `.agents/.gitignore` (if `gitignore = true`) +8. Create/verify agent symlinks +9. Write MCP and hook configs + +### `add [skill...]` + +Add one or more skill dependencies and install them. 
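Each skill added this way is recorded as a `[[skills]]` entry in `agents.toml` and pinned in `agents.lock`. A representative resulting entry (the skill name and source are illustrative):

```toml
[[skills]]
name = "find-bugs"
source = "getsentry/skills"
```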
+ +```bash +dotagents add getsentry/skills # Interactive selection if multiple skills +dotagents add getsentry/skills find-bugs # Add by positional name +dotagents add getsentry/skills find-bugs code-review # Add multiple skills at once +dotagents add getsentry/skills --name find-bugs # Add by --name flag +dotagents add getsentry/skills --skill find-bugs # --skill is an alias for --name +dotagents add getsentry/skills --all # Add all as wildcard +dotagents add getsentry/warden@v1.0.0 # Pinned ref (inline) +dotagents add getsentry/skills --ref v2.0.0 # Pinned ref (flag) +dotagents add git:https://git.corp.dev/team/skills # Non-GitHub git URL +dotagents add path:./my-skills/custom # Local path +``` + +| Flag | Description | +| ---------------- | ----------------------------------------------------------------- | +| `--name ` | Specify which skill to add (repeatable; alias: `--skill`) | +| `--skill ` | Alias for `--name` (repeatable) | +| `--ref ` | Pin to a specific tag, branch, or commit | +| `--all` | Add all skills from the source as a wildcard entry (`name = "*"`) | + +**Specifier formats:** + +- `owner/repo` -- GitHub shorthand +- `owner/repo@ref` -- GitHub with pinned ref +- `https://github.com/owner/repo` -- GitHub HTTPS URL +- `git@github.com:owner/repo.git` -- GitHub SSH URL +- `git:https://...` -- Non-GitHub git URL +- `path:../relative` -- Local filesystem path + +When a repo contains multiple skills, dotagents auto-discovers them. If only one skill is found, it's added automatically. If multiple are found and no names are given, an interactive picker is shown (TTY) or skills are listed (non-TTY). + +When adding multiple skills, already-existing entries are skipped with a warning. An error is only raised if all specified skills already exist. + +`--all` and `--name`/positional args are mutually exclusive. + +### `remove ` + +Remove a skill dependency. 
+ +```bash +dotagents remove find-bugs +``` + +Removes from `agents.toml`, deletes `.agents/skills//`, updates lockfile, and regenerates `.gitignore`. + +For skills sourced from a wildcard entry (`name = "*"`), interactively prompts whether to add the skill to the wildcard's `exclude` list. If declined, the removal is cancelled. + +### `update [name]` + +Update skills to their latest versions. + +```bash +dotagents update # Update all +dotagents update find-bugs # Update one +``` + +Skips skills pinned to immutable commits (40-char SHAs). For wildcard entries, re-discovers all skills in the source -- adds new ones, removes deleted ones. Prints changelog showing old and new commits. + +### `sync` + +Reconcile project state: adopt orphans, verify integrity, repair symlinks and configs. + +```bash +dotagents sync +``` + +**Actions performed:** + +1. Adopt orphaned skills (installed but not declared in config) +2. Regenerate `.agents/.gitignore` +3. Check for missing skills +4. Verify integrity hashes +5. Repair agent symlinks +6. Verify/repair MCP configs +7. Verify/repair hook configs + +Reports issues as warnings (modified skills, missing MCP/hook configs) or errors (missing skills). + +### `list` + +Show installed skills and their status. + +```bash +dotagents list +dotagents list --json +``` + +| Flag | Description | +| -------- | -------------- | +| `--json` | Output as JSON | + +**Status indicators:** + +- `✓` ok -- installed, integrity matches +- `~` modified -- locally modified since install +- `✗` missing -- in config but not installed +- `?` unlocked -- installed but not in lockfile + +Skills from wildcard entries are marked with a wildcard indicator. + +### `mcp` + +Manage MCP (Model Context Protocol) server declarations in `agents.toml`. + +#### `mcp add ` + +Add an MCP server declaration. 
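A declaration added this way is stored as an `[[mcp]]` entry in `agents.toml`. A representative stdio entry (server name, command, and environment variable are illustrative):

```toml
[[mcp]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = ["GITHUB_TOKEN"]
```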
+ +```bash +dotagents mcp add github --command npx --args -y --args @modelcontextprotocol/server-github --env GITHUB_TOKEN +dotagents mcp add remote-api --url https://mcp.example.com/sse --header "Authorization:Bearer token" +``` + +| Flag | Description | +| ---------------------- | ------------------------------------------------------- | +| `--command ` | Command to run (stdio transport) | +| `--args ` | Command arguments (repeatable) | +| `--url ` | HTTP endpoint URL (HTTP transport) | +| `--header ` | HTTP headers (repeatable) | +| `--env ` | Environment variable names to pass through (repeatable) | + +Either `--command` or `--url` is required (mutually exclusive). + +#### `mcp remove ` + +Remove an MCP server declaration. + +```bash +dotagents mcp remove github +``` + +#### `mcp list` + +Show declared MCP servers. + +```bash +dotagents mcp list +dotagents mcp list --json +``` + +| Flag | Description | +| -------- | -------------- | +| `--json` | Output as JSON | diff --git a/.agents/skills/dotagents/references/config-schema.md b/.agents/skills/dotagents/references/config-schema.md new file mode 100644 index 000000000000..eecafdf1ba72 --- /dev/null +++ b/.agents/skills/dotagents/references/config-schema.md @@ -0,0 +1,175 @@ +# agents.toml Configuration Schema + +## Top-Level Structure + +```toml +version = 1 # Required, must be 1 +gitignore = true # Optional, default true +agents = ["claude", "cursor"] # Optional, agent targets + +[project] # Optional +[trust] # Optional +[[skills]] # Optional, array of skill entries +[[mcp]] # Optional, array of MCP servers +[[hooks]] # Optional, array of hook declarations +``` + +## Top-Level Fields + +| Field | Type | Required | Default | Description | +| ----------- | -------- | -------- | ------- | ---------------------------------------------------------------- | +| `version` | integer | Yes | -- | Schema version, must be `1` | +| `gitignore` | boolean | No | `true` | Generate `.agents/.gitignore` for managed skills. 
| +| `agents` | string[] | No | `[]` | Agent targets: `claude`, `cursor`, `codex`, `vscode`, `opencode` | + +## Project Section + +```toml +[project] +name = "my-project" # Optional, display name +``` + +## Symlinks Section + +```toml +[symlinks] +targets = [".claude", ".cursor"] # Legacy: explicit symlink targets +``` + +When `agents` is set, symlink targets are derived automatically. The `[symlinks]` section is for backward compatibility. + +## Skills Section + +### Regular Skills + +```toml +[[skills]] +name = "find-bugs" # Required, unique skill identifier +source = "getsentry/skills" # Required, skill source +ref = "v1.0.0" # Optional, pin to tag/branch/commit +path = "tools/my-skill" # Optional, subdirectory within repo +``` + +| Field | Type | Required | Description | +| -------- | ------ | -------- | ------------------------------------------------------------- | +| `name` | string | Yes | Unique identifier. Pattern: `^[a-zA-Z0-9][a-zA-Z0-9._-]*$` | +| `source` | string | Yes | `owner/repo`, `owner/repo@ref`, `git:url`, or `path:relative` | +| `ref` | string | No | Tag, branch, or commit SHA to pin | +| `path` | string | No | Subdirectory containing the skill within the source repo | + +### Wildcard Skills + +```toml +[[skills]] +name = "*" # Wildcard: install all skills from source +source = "getsentry/skills" # Required +ref = "v1.0.0" # Optional +exclude = ["deprecated-skill"] # Optional, skills to skip +``` + +| Field | Type | Required | Description | +| --------- | ------------- | -------- | ---------------------------------- | +| `name` | literal `"*"` | Yes | Wildcard marker | +| `source` | string | Yes | Same formats as regular skills | +| `ref` | string | No | Tag, branch, or commit SHA to pin | +| `exclude` | string[] | No | Skill names to skip. 
Default: `[]` | + +## Trust Section + +```toml +[trust] +allow_all = true # Allow any source + +# OR restrict to specific sources: +[trust] +github_orgs = ["getsentry"] # GitHub org names +github_repos = ["ext-org/repo"] # Exact owner/repo pairs +git_domains = ["git.corp.example.com"] # Git URL domains +``` + +| Field | Type | Description | +| -------------- | -------- | ------------------------------------------ | +| `allow_all` | boolean | Allow all sources (overrides other fields) | +| `github_orgs` | string[] | Allowed GitHub organizations | +| `github_repos` | string[] | Allowed exact `owner/repo` pairs | +| `git_domains` | string[] | Allowed domains for `git:` URLs | + +No `[trust]` section = allow all sources (backward compatible). + +## MCP Section + +### Stdio Transport + +```toml +[[mcp]] +name = "github" # Required, unique server name +command = "npx" # Required for stdio +args = ["-y", "@modelcontextprotocol/server-github"] # Optional +env = ["GITHUB_TOKEN"] # Optional, env vars to pass through +``` + +### HTTP Transport + +```toml +[[mcp]] +name = "remote-api" # Required, unique server name +url = "https://mcp.example.com/sse" # Required for HTTP +``` + +| Field | Type | Required | Description | +| --------- | -------- | ---------- | ------------------------------------------ | +| `name` | string | Yes | Unique server identifier | +| `command` | string | Stdio only | Command to execute | +| `args` | string[] | No | Command arguments | +| `env` | string[] | No | Environment variable names to pass through | +| `url` | string | HTTP only | Server URL | +| `headers` | table | No | HTTP headers | + +## Hooks Section + +```toml +[[hooks]] +event = "PreToolUse" # Required +matcher = "Bash" # Optional, tool name filter +command = "my-lint-check" # Required +``` + +| Field | Type | Required | Description | +| --------- | ------ | -------- | ------------------------------------------------------- | +| `event` | string | Yes | `PreToolUse`, `PostToolUse`, 
`UserPromptSubmit`, `Stop` | +| `matcher` | string | No | Tool name to match (omit for all tools) | +| `command` | string | Yes | Shell command to execute | + +## Lockfile (agents.lock) + +Auto-generated. Do not edit manually. + +```toml +version = 1 + +[skills.find-bugs] +source = "getsentry/skills" +resolved_url = "https://github.com/getsentry/skills.git" +resolved_path = "plugins/sentry-skills/skills/find-bugs" +resolved_ref = "v1.0.0" +commit = "c8881564e75eff4faaecc82d1c3f13356851b6e7" +integrity = "sha256-FWmCLdOj+x+XffiEg7Bx19drylVypeKz8me9OA757js=" +``` + +| Field | Type | Description | +| --------------- | ------ | -------------------------------------------------- | +| `source` | string | Original source from `agents.toml` | +| `resolved_url` | string | Resolved git URL | +| `resolved_path` | string | Subdirectory within repo | +| `resolved_ref` | string | Ref that was resolved (omitted for default branch) | +| `commit` | string | Full 40-char SHA of resolved commit | +| `integrity` | string | `sha256-` prefixed base64 content hash | + +Local path skills have `source` and `integrity` only (no commit). + +## Environment Variables + +| Variable | Purpose | +| --------------------- | ------------------------------------------------------- | +| `DOTAGENTS_STATE_DIR` | Override cache location (default: `~/.local/dotagents`) | +| `DOTAGENTS_HOME` | Override user-scope location (default: `~/.agents`) | diff --git a/.agents/skills/dotagents/references/configuration.md b/.agents/skills/dotagents/references/configuration.md new file mode 100644 index 000000000000..1807ffe70642 --- /dev/null +++ b/.agents/skills/dotagents/references/configuration.md @@ -0,0 +1,195 @@ +# Configuration (agents.toml) + +See [config-schema.md](config-schema.md) for the complete schema reference. + +## Minimal Example + +```toml +version = 1 +agents = ["claude"] + +[[skills]] +name = "find-bugs" +source = "getsentry/skills" +``` + +## Skills + +Each skill requires `name` and `source`. 
Optionally pin with `ref` or specify a subdirectory with `path`. + +```toml +[[skills]] +name = "find-bugs" +source = "getsentry/skills" +ref = "v1.0.0" +path = "plugins/sentry-skills/skills/find-bugs" +``` + +**Source formats:** + +| Format | Example | Resolves to | +| ---------------- | -------------------------------------- | ----------------------------------------- | +| GitHub shorthand | `getsentry/skills` | `https://github.com/getsentry/skills.git` | +| GitHub pinned | `getsentry/skills@v1.0.0` | Same, checked out at `v1.0.0` | +| GitHub HTTPS | `https://github.com/owner/repo` | URL used directly | +| GitHub SSH | `git@github.com:owner/repo.git` | SSH clone | +| Git URL | `git:https://git.corp.dev/team/skills` | Any non-GitHub git remote | +| Local | `path:./my-skills/custom` | Relative to project root | + +**Skill name rules:** Must start with alphanumeric, contain only `[a-zA-Z0-9._-]`. + +### Wildcard Skills + +Add all skills from a source with a single entry: + +```toml +[[skills]] +name = "*" +source = "getsentry/skills" +exclude = ["deprecated-skill"] +``` + +During `install` and `update`, dotagents discovers all skills in the source and installs each one (except those in `exclude`). Each skill gets its own lockfile entry. Use `dotagents add --all` to create a wildcard entry from the CLI. + +## Trust + +Restrict which sources are allowed. Without a `[trust]` section, all sources are allowed. 
+ +```toml +# Allow all sources explicitly +[trust] +allow_all = true +``` + +```toml +# Restrict to specific GitHub orgs and repos +[trust] +github_orgs = ["getsentry"] +github_repos = ["external-org/specific-repo"] +git_domains = ["git.corp.example.com"] +``` + +- GitHub sources match against `github_orgs` (by owner) or `github_repos` (exact owner/repo) +- Git URL sources match against `git_domains` +- Local `path:` sources are always allowed +- A source passes if it matches any rule (org OR repo OR domain) + +Trust is validated before any network operations in `add` and `install`. + +## MCP Servers + +Declare MCP servers that get written to each agent's config. + +```toml +# Stdio transport +[[mcp]] +name = "github" +command = "npx" +args = ["-y", "@modelcontextprotocol/server-github"] +env = ["GITHUB_TOKEN"] + +# HTTP transport +[[mcp]] +name = "remote-api" +url = "https://mcp.example.com/sse" +headers = { Authorization = "Bearer token" } +``` + +MCP configs are written per-agent in the appropriate format: + +- Claude: `.mcp.json` (JSON) +- Cursor: `.cursor/mcp.json` (JSON) +- Codex: `.codex/config.toml` (TOML, shared with other Codex config) +- VS Code: `.vscode/mcp.json` (JSON) +- OpenCode: `opencode.json` (JSON, shared) + +## Hooks + +Declare hooks for agent tool events. 
+ +```toml +[[hooks]] +event = "PreToolUse" +matcher = "Bash" +command = "my-lint-check" +``` + +**Supported events:** `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `Stop` + +Hook configs are written per-agent: + +- Claude: `.claude/settings.json` (merged into existing file) +- Cursor: `.cursor/hooks.json` (dedicated file, events mapped to Cursor equivalents) +- VS Code: `.claude/settings.json` (same file as Claude) +- Codex/OpenCode: not supported (warnings emitted during install/sync) + +**Cursor event mapping:** + +- `PreToolUse` -> `beforeShellExecution` + `beforeMCPExecution` +- `PostToolUse` -> `afterFileEdit` +- `UserPromptSubmit` -> `beforeSubmitPrompt` +- `Stop` -> `stop` + +## Agents + +The `agents` array controls which agent tools get symlinks and configs. + +```toml +agents = ["claude", "cursor", "codex", "vscode", "opencode"] +``` + +Each agent gets: + +- A `/skills/` symlink pointing to `.agents/skills/` (Claude, Cursor) +- Or native discovery from `.agents/skills/` (Codex, VS Code, OpenCode) +- MCP server configs in the agent's config file +- Hook configs (where supported) + +## Scopes + +### Project Scope (default) + +Operates on the current project. Requires `agents.toml` at the project root. + +### User Scope (`--user`) + +Operates on `~/.agents/` for skills shared across all projects. Override with `DOTAGENTS_HOME`. + +```bash +dotagents --user init +dotagents --user add getsentry/skills --all +``` + +User-scope symlinks go to `~/.claude/skills/` and `~/.cursor/skills/`. + +When no `agents.toml` exists and you're not inside a git repo, dotagents falls back to user scope automatically. + +## Gitignore + +When `gitignore = true` (schema default), dotagents generates `.agents/.gitignore` listing managed (remote) skills. In-place skills (`path:.agents/skills/...`) are never gitignored since they must be tracked in git. + +When `gitignore = false`, no gitignore is created -- skills are checked into the repository. 
Anyone cloning gets skills without running `install`. + +## Caching + +- Cache location: `~/.local/dotagents/` (override with `DOTAGENTS_STATE_DIR`) +- Unpinned repos: cached with 24-hour TTL +- Pinned refs (40-char SHA): cached immutably, never re-fetched +- Use `dotagents install --force` to bypass cache + +## Troubleshooting + +**Skills not installing:** + +- Check `agents.toml` syntax with `dotagents list` +- Verify source is accessible (`git clone` the URL manually) +- Check trust config if using restricted mode + +**Symlinks broken:** + +- Run `dotagents sync` to repair + +**Integrity mismatch:** + +- Skill was modified locally -- run `dotagents install --force` to restore +- Or run `dotagents sync` to detect and report issues diff --git a/.claude/skills/e2e/SKILL.md b/.agents/skills/e2e/SKILL.md similarity index 100% rename from .claude/skills/e2e/SKILL.md rename to .agents/skills/e2e/SKILL.md diff --git a/.claude/skills/fix-security-vulnerability/SKILL.md b/.agents/skills/fix-security-vulnerability/SKILL.md similarity index 100% rename from .claude/skills/fix-security-vulnerability/SKILL.md rename to .agents/skills/fix-security-vulnerability/SKILL.md diff --git a/.agents/skills/release/SKILL.md b/.agents/skills/release/SKILL.md new file mode 100644 index 000000000000..13f61e6dec48 --- /dev/null +++ b/.agents/skills/release/SKILL.md @@ -0,0 +1,31 @@ +--- +name: release +description: Publish a new Sentry JavaScript SDK release. Use when preparing a release, updating the changelog, or creating a release branch. +argument-hint: [VERSION] +--- + +# Release Process + +See `docs/publishing-a-release.md` for full details. + +## Steps + +1. Ensure you're on `develop` with latest changes. Stash any unsaved work with `git stash -u` if needed. +2. Run `yarn changelog` (use `yarn changelog | pbcopy` to copy output). +3. Decide on a version per [semver](https://semver.org). Check the top of `CHANGELOG.md` for the current version. +4. 
Create branch `prepare-release/VERSION` off `develop`. +5. Update `CHANGELOG.md` — add the new version entry with the changelog output. See the "Updating the Changelog" section in `docs/publishing-a-release.md` for formatting details. Do not remove existing entries. +6. Commit: `meta(changelog): Update changelog for VERSION` +7. Push the branch and remind the user to open a PR targeting `master`. +8. If you were on a different branch, checkout back and `git stash pop` if needed. + +## First-time SDK releases + +Follow `docs/new-sdk-release-checklist.md`. If anything doesn't match the checklist, remind the user. + +## Key commands + +- `yarn changelog` — generate changelog entries +- `yarn lint` — verify code quality +- `yarn test` — run test suite +- `yarn build:dev` — verify build diff --git a/.claude/skills/triage-issue/SKILL.md b/.agents/skills/triage-issue/SKILL.md similarity index 100% rename from .claude/skills/triage-issue/SKILL.md rename to .agents/skills/triage-issue/SKILL.md diff --git a/.claude/skills/triage-issue/assets/suggested-fix-prompt.md b/.agents/skills/triage-issue/assets/suggested-fix-prompt.md similarity index 100% rename from .claude/skills/triage-issue/assets/suggested-fix-prompt.md rename to .agents/skills/triage-issue/assets/suggested-fix-prompt.md diff --git a/.claude/skills/triage-issue/assets/triage-report.md b/.agents/skills/triage-issue/assets/triage-report.md similarity index 100% rename from .claude/skills/triage-issue/assets/triage-report.md rename to .agents/skills/triage-issue/assets/triage-report.md diff --git a/.claude/skills/triage-issue/scripts/README.md b/.agents/skills/triage-issue/scripts/README.md similarity index 100% rename from .claude/skills/triage-issue/scripts/README.md rename to .agents/skills/triage-issue/scripts/README.md diff --git a/.claude/skills/triage-issue/scripts/detect_prompt_injection.py b/.agents/skills/triage-issue/scripts/detect_prompt_injection.py similarity index 100% rename from 
.claude/skills/triage-issue/scripts/detect_prompt_injection.py rename to .agents/skills/triage-issue/scripts/detect_prompt_injection.py diff --git a/.claude/skills/triage-issue/scripts/parse_gh_issues.py b/.agents/skills/triage-issue/scripts/parse_gh_issues.py similarity index 100% rename from .claude/skills/triage-issue/scripts/parse_gh_issues.py rename to .agents/skills/triage-issue/scripts/parse_gh_issues.py diff --git a/.claude/skills/triage-issue/scripts/post_linear_comment.py b/.agents/skills/triage-issue/scripts/post_linear_comment.py similarity index 98% rename from .claude/skills/triage-issue/scripts/post_linear_comment.py rename to .agents/skills/triage-issue/scripts/post_linear_comment.py index 3522ba0170be..c752676f35a2 100644 --- a/.claude/skills/triage-issue/scripts/post_linear_comment.py +++ b/.agents/skills/triage-issue/scripts/post_linear_comment.py @@ -84,7 +84,7 @@ def graphql(token, query, variables=None): ) comments = data.get("data", {}).get("issue", {}).get("comments", {}).get("nodes", []) for c in comments: - if c.get("body", "").startswith("## Automated Triage Report"): + if c.get("body", "").startswith("## Issue Triage:"): print(f"Triage comment already exists on {identifier}, skipping") sys.exit(0) diff --git a/.claude/skills/triage-issue/scripts/write_job_summary.py b/.agents/skills/triage-issue/scripts/write_job_summary.py similarity index 100% rename from .claude/skills/triage-issue/scripts/write_job_summary.py rename to .agents/skills/triage-issue/scripts/write_job_summary.py diff --git a/.agents/skills/upgrade-dep/SKILL.md b/.agents/skills/upgrade-dep/SKILL.md new file mode 100644 index 000000000000..97e30b44e700 --- /dev/null +++ b/.agents/skills/upgrade-dep/SKILL.md @@ -0,0 +1,59 @@ +--- +name: upgrade-dep +description: Upgrade a dependency in the Sentry JavaScript SDK. Use when upgrading packages, bumping versions, or fixing security vulnerabilities via dependency updates. 
+argument-hint: +--- + +# Dependency Upgrade + +**Only upgrade one package at a time.** + +## Upgrade command + +```bash +npx yarn-update-dependency@latest [package-name] +``` + +If the dependency is not defined in any `package.json`, run the upgrade from the root workspace (the `yarn.lock` lives there). + +Avoid upgrading top-level dependencies (especially test dependencies) without asking the user first. + +Ensure updated `package.json` files end with a newline. + +## OpenTelemetry constraint + +**STOP** if upgrading any `opentelemetry` package would introduce forbidden versions: + +- `2.x.x` (e.g., `2.0.0`) +- `0.2xx.x` (e.g., `0.200.0`, `0.201.0`) + +Verify before upgrading: + +```bash +yarn info [package-name]@[version] dependencies +``` + +## E2E test dependencies + +Do **not** upgrade the major version of a dependency in `dev-packages/e2e-tests/test-applications/*` if the test directory name pins a version (e.g., `nestjs-8` must stay on NestJS 8). + +## Post-upgrade verification + +```bash +yarn install +yarn build:dev +yarn dedupe-deps:fix +yarn fix +yarn circularDepCheck +``` + +## Useful commands + +```bash +yarn list --depth=0 # Check dependency tree +yarn why [package-name] # Find why a package is installed +yarn info [package-name] dependencies # Inspect package dependencies +yarn info [package-name] versions # Check available versions +yarn outdated # Check outdated dependencies +yarn audit # Check for security vulnerabilities +```
`packages/**` + +Upgrade in this order: + +1. **`@opentelemetry/instrumentation`** to latest. Check changelog: `https://github.com/open-telemetry/opentelemetry-js/blob/main/experimental/CHANGELOG.md` +2. **All `@opentelemetry/instrumentation-*` packages.** Check each changelog: `https://github.com/open-telemetry/opentelemetry-js-contrib/blob/main/packages/instrumentation-{name}/CHANGELOG.md` +3. **Third-party instrumentations** (currently `@prisma/instrumentation`). Check their changelogs. + +**STOP** if any upgrade introduces breaking changes — fail with the reason. + +## 2. `dev-packages/**` + +- If an app depends on `@opentelemetry/instrumentation` >= `0.200.x`, upgrade to latest. +- If an app depends on `@opentelemetry/instrumentation-http` >= `0.200.x`, upgrade to latest. + +Same rule: no breaking changes allowed. + +## 3. Regenerate lock file + +```bash +yarn install +``` diff --git a/.claude/skills b/.claude/skills new file mode 120000 index 000000000000..2b7a412b8fa0 --- /dev/null +++ b/.claude/skills @@ -0,0 +1 @@ +../.agents/skills \ No newline at end of file diff --git a/.cursor/commands/add_cdn_bundle.md b/.cursor/commands/add_cdn_bundle.md deleted file mode 100644 index e6e38e83fd46..000000000000 --- a/.cursor/commands/add_cdn_bundle.md +++ /dev/null @@ -1,35 +0,0 @@ -# Add CDN Bundle for `{FEATURE_COMBO}` - -Create a new CDN bundle for the browser package that includes `{FEATURE_COMBO}` (e.g., `replay.logs.metrics`, `tracing.logs`, etc.). - -## Instructions - -Follow the detailed guide at [docs/adding-cdn-bundle.md](../../docs/adding-cdn-bundle.md) to create the bundle. 
- -## Quick Reference - Naming Conventions - -| Placeholder | Example (`replay.logs.metrics`) | -| ------------------------------- | ------------------------------- | -| `{FEATURE_COMBO}` | `replay.logs.metrics` | -| `{feature_combo}` | `replay_logs_metrics` | -| `{featureCombo}` | `replayLogsMetrics` | -| `{Human Readable Features}` | `Replay, Logs, Metrics` | -| `{Human Readable Feature List}` | `Replay, Logs, and Metrics` | - -## Quick Reference - Files to Create/Modify - -1. **Create** `packages/browser/src/index.bundle.{FEATURE_COMBO}.ts` -2. **Create** `packages/browser/test/index.bundle.{FEATURE_COMBO}.test.ts` -3. **Modify** `packages/browser/rollup.bundle.config.mjs` -4. **Modify** `.size-limit.js` -5. **Modify** `dev-packages/browser-integration-tests/package.json` -6. **Modify** `dev-packages/browser-integration-tests/utils/generatePlugin.ts` -7. **Modify** `.github/workflows/build.yml` - -## Verification Steps - -After making changes: - -1. Run `yarn lint` to check for linting issues -2. Run `cd packages/browser && yarn build:dev` to verify TypeScript compilation -3. Run `cd packages/browser && yarn test` to run the unit tests diff --git a/.cursor/commands/bump_otel_instrumentations.md b/.cursor/commands/bump_otel_instrumentations.md deleted file mode 100644 index ff1e6cfcbcc8..000000000000 --- a/.cursor/commands/bump_otel_instrumentations.md +++ /dev/null @@ -1,32 +0,0 @@ -# Bump OpenTelemetry instrumentations - -1. Ensure you're on the `develop` branch with the latest changes: - - If you have unsaved changes, stash them with `git stash -u`. - - If you're on a different branch than `develop`, check out the develop branch using `git checkout develop`. - - Pull the latest updates from the remote repository by running `git pull origin develop`. - -2. Create a new branch `bump-otel-{yyyy-mm-dd}`, e.g. `bump-otel-2025-03-03` - -3. Create a new empty commit with the commit message `feat(deps): Bump OpenTelemetry instrumentations` - -4. 
Push the branch and create a draft PR, note down the PR number as {PR_NUMBER} - -5. Create a changelog entry in `CHANGELOG.md` under - `- "You miss 100 percent of the chances you don't take. — Wayne Gretzky" — Michael Scott` with the following format: - `- feat(deps): Bump OpenTelemetry instrumentations ([#{PR_NUMBER}](https://github.com/getsentry/sentry-javascript/pull/{PR_NUMBER}))` - -6. Find the "Upgrade OpenTelemetry instrumentations" rule in `.cursor/rules/upgrade_opentelemetry_instrumentations` and - follow those complete instructions step by step. - - Create one commit per package in `packages/**` with the commit message - `Bump OpenTelemetry instrumentations for {SDK}`, e.g. `Bump OpenTelemetry instrumentation for @sentry/node` - - - For each OpenTelemetry dependency bump, record an entry in the changelog with the format indented under the main - entry created in step 5: `- Bump @opentelemetry/{instrumentation} from {previous_version} to {new_version}`, e.g. - `- Bump @opentelemetry/instrumentation from 0.204.0 to 0.207.0` **CRITICAL**: Avoid duplicated entries, e.g. if we - bump @opentelemetry/instrumentation in two packages, keep a single changelog entry. - -7. Regenerate the yarn lockfile and run `yarn yarn-deduplicate` - -8. Run `yarn fix` to fix all formatting issues - -9. Finally update the PR description to list all dependency bumps diff --git a/.cursor/commands/publish_release.md b/.cursor/commands/publish_release.md deleted file mode 100644 index 5d1477992dfb..000000000000 --- a/.cursor/commands/publish_release.md +++ /dev/null @@ -1,5 +0,0 @@ -# Release Command - -Execute the standard Sentry JavaScript SDK release process. - -Find the "publishing_release" rule in `.cursor/rules/publishing_release` and follow those complete instructions step by step. 
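The changelog-entry formats described in the bump instructions above (the main `feat(deps)` entry and the indented per-dependency sub-entry) can be sketched in shell; `PR_NUMBER`, the package name, and the version numbers below are hypothetical example values:

```shell
#!/bin/sh
# Hypothetical values for illustration only.
PR_NUMBER="12345"
PKG="@opentelemetry/instrumentation"
OLD="0.204.0"
NEW="0.207.0"

# Main changelog entry, linking the draft PR:
MAIN="- feat(deps): Bump OpenTelemetry instrumentations ([#${PR_NUMBER}](https://github.com/getsentry/sentry-javascript/pull/${PR_NUMBER}))"

# One indented sub-entry per dependency bump (deduplicated across packages):
SUB="  - Bump ${PKG} from ${OLD} to ${NEW}"

printf '%s\n%s\n' "$MAIN" "$SUB"
```

Building the strings once and reusing them keeps the PR description, commit messages, and changelog consistent.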
diff --git a/.cursor/rules/adding-a-new-ai-integration.mdc b/.cursor/rules/adding-a-new-ai-integration.mdc deleted file mode 100644 index 42f8f12722dd..000000000000 --- a/.cursor/rules/adding-a-new-ai-integration.mdc +++ /dev/null @@ -1,403 +0,0 @@ ---- -description: Guidelines for contributing a new Sentry JavaScript SDK AI integration. -alwaysApply: true ---- - -# Adding a New AI Integration - -Use these guidelines when contributing a new Sentry JavaScript SDK AI integration. - -## Quick Decision Tree - -**CRITICAL** - -``` -Does the AI SDK have native OpenTelemetry support? -├─ YES → Does it emit OTel spans automatically? -│ ├─ YES (like Vercel AI) → Pattern 1: OTEL Span Processors -│ └─ NO → Pattern 2: OTEL Instrumentation (wrap client) -└─ NO → Does the SDK provide hooks/callbacks? - ├─ YES (like LangChain) → Pattern 3: Callback/Hook Based - └─ NO → Pattern 4: Client Wrapping - -Multi-runtime considerations: -- Node.js: Use OpenTelemetry instrumentation -- Edge (Cloudflare/Vercel): No OTel, processors only or manual wrapping -``` - -**IMPORTANT - Runtime-Specific Placement:** - -If an AI SDK only works in a specific runtime, the integration code should live exclusively in that runtime's package. Do NOT add it to `packages/core/` or attempt to make it work in other runtimes where it cannot function. 
- -**Runtime-specific integration structures:** - -**Node.js-only SDKs** → `packages/node/` - -- Core logic: `packages/node/src/integrations/tracing/{provider}/index.ts` -- OTel instrumentation: `packages/node/src/integrations/tracing/{provider}/instrumentation.ts` -- Use when SDK only works with Node.js-specific APIs - -**Cloudflare Workers-only SDKs** → `packages/cloudflare/` - -- Single file: `packages/cloudflare/src/integrations/tracing/{provider}.ts` -- Use when SDK only works with Cloudflare Workers APIs or Cloudflare AI - -**Browser-only SDKs** → `packages/browser/` - -- Core logic: `packages/browser/src/integrations/tracing/{provider}/index.ts` -- Use when SDK requires browser-specific APIs (DOM, WebAPIs, etc.) - -**For all runtime-specific SDKs:** DO NOT create `packages/core/src/tracing/{provider}/` - keep everything in the runtime package. - -**Multi-runtime SDKs:** If the SDK works across multiple runtimes (Node.js, browser, edge), follow the standard pattern with shared core logic in `packages/core/` and runtime-specific wrappers/instrumentation in each package where needed. - ---- - -## Span Hierarchy - -**Two span types:** - -- `gen_ai.invoke_agent` - Parent/pipeline spans (chains, agents, orchestration) -- `gen_ai.chat`, `gen_ai.generate_text`, etc. - Child spans (actual LLM calls) - -**Hierarchy example:** - -``` -gen_ai.invoke_agent (ai.generateText) - └── gen_ai.generate_text (ai.generateText.doGenerate) -``` - -**References:** - -- Vercel AI: `packages/core/src/tracing/vercel-ai/constants.ts` -- LangChain: `onChainStart` callback in `packages/core/src/tracing/langchain/index.ts` - ---- - -## Streaming vs Non-Streaming - -**Non-streaming:** Use `startSpan()`, set attributes immediately from response - -**Streaming:** Use `startSpanManual()` and prefer event listeners/hooks when available (like Anthropic's `stream.on()`).
If not available, use async generator pattern: - -```typescript -interface StreamingState { - responseTexts: string[]; // Accumulate fragments - promptTokens: number | undefined; - completionTokens: number | undefined; - // ... -} - -async function* instrumentStream(stream, span, recordOutputs) { - const state: StreamingState = { responseTexts: [], ... }; - try { - for await (const event of stream) { - processEvent(event, state, recordOutputs); // Accumulate data - yield event; // Pass through - } - } finally { - setTokenUsageAttributes(span, state.promptTokens, state.completionTokens); - span.setAttributes({ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true }); - span.end(); // MUST call manually - } -} -``` - -**Key rules:** - -- Accumulate with arrays/strings, don't overwrite -- Set `GEN_AI_RESPONSE_STREAMING_ATTRIBUTE: true` -- Call `span.end()` in finally block - -**Detection:** Check request parameters for `stream: true` to determine if response will be streamed. - -**References:** - -- OpenAI async generator: `instrumentStream` in `packages/core/src/tracing/openai/streaming.ts` -- Anthropic event listeners: `instrumentMessageStream` in `packages/core/src/tracing/anthropic-ai/streaming.ts` -- Detection logic: Check `params.stream === true` in `packages/core/src/tracing/openai/index.ts` - ---- - -## Token Accumulation - -**Child spans (LLM calls):** Set tokens directly from API response - -```typescript -setTokenUsageAttributes(span, inputTokens, outputTokens, totalTokens); -``` - -**Parent spans (invoke_agent):** Accumulate from children using event processor - -```typescript -// First pass: accumulate from children -for (const span of event.spans) { - if (span.parent_span_id && isGenAiOperationSpan(span)) { - accumulateTokensForParent(span, tokenAccumulator); - } -} - -// Second pass: apply to invoke_agent parents -for (const span of event.spans) { - if (span.op === 'gen_ai.invoke_agent') { - applyAccumulatedTokens(span, tokenAccumulator); - } -} -``` - 
-**Reference:** `vercelAiEventProcessor` and `accumulateTokensForParent` in `packages/core/src/tracing/vercel-ai/` - ---- - -## Shared Utilities - -Location: `packages/core/src/tracing/ai/` - -### `gen-ai-attributes.ts` - -OpenTelemetry Semantic Convention attribute names. **Always use these constants!** - -- `GEN_AI_SYSTEM_ATTRIBUTE` - 'openai', 'anthropic', etc. -- `GEN_AI_REQUEST_MODEL_ATTRIBUTE` - Model from request -- `GEN_AI_RESPONSE_MODEL_ATTRIBUTE` - Model from response -- `GEN_AI_INPUT_MESSAGES_ATTRIBUTE` - Input (requires recordInputs) -- `GEN_AI_RESPONSE_TEXT_ATTRIBUTE` - Output (requires recordOutputs) -- `GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE` - Token counts -- `GEN_AI_OPERATION_NAME_ATTRIBUTE` - 'chat', 'embeddings', etc. - -**CRITICAL - Attribute Usage:** - -Only use attributes explicitly listed in the [Sentry Gen AI Conventions](https://getsentry.github.io/sentry-conventions/attributes/gen_ai/). Do NOT create custom attributes or use undocumented ones. If you need a new attribute, it MUST be documented in the conventions first before implementation. - -### `utils.ts` - -- `setTokenUsageAttributes()` - Set token usage on span -- `getTruncatedJsonString()` - Truncate for attributes -- `truncateGenAiMessages()` - Truncate message arrays -- `buildMethodPath()` - Build method path from traversal - ---- - -## Pattern 1: OTEL Span Processors - -**Use when:** SDK emits OTel spans automatically (Vercel AI) - -### Key Steps - -1. **Core:** Create `add{Provider}Processors()` in `packages/core/src/tracing/{provider}/index.ts` - - Registers `spanStart` listener + event processor - - Post-processes spans to match semantic conventions - -2. **Node.js:** Add performance optimization in `packages/node/src/integrations/tracing/{provider}/index.ts` - - Use `callWhenPatched()` to defer processor registration - - Only register when package is actually imported (see `vercelAIIntegration` function) - -3. 
**Edge:** Direct registration in `packages/cloudflare/src/integrations/tracing/{provider}.ts` - - No OTel patching available - - Just call `add{Provider}Processors()` immediately - -**Reference:** `packages/node/src/integrations/tracing/vercelai/` - ---- - -## Pattern 2: OTEL Instrumentation (Client Wrapping) - -**Use when:** SDK has NO native OTel support (OpenAI, Anthropic, Google GenAI) - -### Key Steps - -1. **Core:** Create `instrument{Provider}Client()` in `packages/core/src/tracing/{provider}/index.ts` - - Use Proxy to wrap client methods recursively - - Create spans manually with `startSpan()` or `startSpanManual()` - -2. **Node.js Instrumentation:** Patch module exports in `instrumentation.ts` - - Wrap client constructor - - Check `_INTERNAL_shouldSkipAiProviderWrapping()` (for LangChain) - - See `instrumentOpenAi` in `packages/node/src/integrations/tracing/openai/instrumentation.ts` - -3. **Node.js Integration:** Export instrumentation function - - Use `generateInstrumentOnce()` helper - - See `openAIIntegration` in `packages/node/src/integrations/tracing/openai/index.ts` - -**Reference:** `packages/node/src/integrations/tracing/openai/` - ---- - -## Pattern 3: Callback/Hook Based - -**Use when:** SDK provides lifecycle hooks (LangChain, LangGraph) - -### Key Steps - -1. **Core:** Create `create{Provider}CallbackHandler()` in `packages/core/src/tracing/{provider}/index.ts` - - Implement SDK's callback interface - - Create spans in callback methods - -2. **Node.js Instrumentation:** Auto-inject callbacks - - Patch runnable methods to add handler automatically - - **Important:** Disable underlying AI provider wrapping (see `instrumentLangchain` in `packages/node/src/integrations/tracing/langchain/instrumentation.ts`) - -**Reference:** `packages/node/src/integrations/tracing/langchain/` - ---- - -## Auto-Instrumentation (Out-of-the-Box Support) - -**MANDATORY** - -**RULE:** AI SDKs should be auto-enabled in Node.js runtime if possible. 
- -✅ **Auto-enable if:** - -- SDK works in Node.js runtime -- OTel only patches when package imported (zero cost if unused) - -❌ **Don't auto-enable if:** - -- SDK is niche/experimental -- Integration has significant limitations - -### Steps to Auto-Enable - -**1. Add to auto performance integrations** - -Location: `packages/node/src/integrations/tracing/index.ts` - -```typescript -export function getAutoPerformanceIntegrations(): Integration[] { - return [ - // AI providers - IMPORTANT: LangChain MUST come first! - langChainIntegration(), // Disables underlying providers - langGraphIntegration(), - vercelAIIntegration(), - openAIIntegration(), - anthropicAIIntegration(), - googleGenAIIntegration(), - {provider}Integration(), // <-- Add here - ]; -} -``` - -**2. Add to preload instrumentation** - -```typescript -export function getOpenTelemetryInstrumentationToPreload() { - return [ - instrumentOpenAi, - instrumentAnthropicAi, - instrument{Provider}, // <-- Add here - ]; -} -``` - -**3. Export from package index** - -```typescript -// packages/node/src/index.ts -export { {provider}Integration } from './integrations/tracing/{provider}'; -export type { {Provider}Options } from './integrations/tracing/{provider}'; - -// If browser-compatible: packages/browser/src/index.ts -export { {provider}Integration } from './integrations/tracing/{provider}'; -``` - -**4. 
Add E2E tests** - -For Node.js integrations, add tests in `dev-packages/node-integration-tests/suites/tracing/{provider}/`: - -- Verify spans created automatically (no manual setup) -- Test `recordInputs` and `recordOutputs` options -- Test integration can be disabled - -For Cloudflare Workers integrations, add tests in `dev-packages/cloudflare-integration-tests/suites/tracing/{provider}`: - -- Create a new worker test app with the AI SDK -- Verify manual instrumentation creates spans correctly -- Test in actual Cloudflare Workers runtime (use `wrangler dev` or `miniflare`) - -For Browser integrations, add tests in `dev-packages/browser-integration-tests/suites/tracing/ai-providers/{provider}/`: - -- Create a new test suite with Playwright -- Verify manual instrumentation creates spans correctly in the browser -- Test with actual browser runtime - ---- - -## Directory Structure - -``` -packages/ -├── core/src/tracing/ -│ ├── ai/ # Shared utilities -│ │ ├── gen-ai-attributes.ts -│ │ ├── utils.ts -│ │ └── messageTruncation.ts -│ └── {provider}/ # Provider-specific -│ ├── index.ts # Main logic -│ ├── types.ts -│ ├── constants.ts -│ └── streaming.ts -│ -├── node/src/integrations/tracing/{provider}/ -│ ├── index.ts # Integration definition -│ └── instrumentation.ts # OTel instrumentation -│ -├── cloudflare/src/integrations/tracing/ -│ └── {provider}.ts # Single file -│ -└── vercel-edge/src/integrations/tracing/ - └── {provider}.ts # Single file -``` - ---- - -## Key Best Practices - -1. **Auto-instrumentation is mandatory** - All integrations MUST auto-detect and instrument automatically in Node.js runtime -2. **Runtime-specific placement** - If SDK only works in one runtime, code lives only in that package -3. **Respect `sendDefaultPii`** for recordInputs/recordOutputs -4. **Use semantic attributes** from `gen-ai-attributes.ts` (never hardcode) - Only use attributes from [Sentry Gen AI Conventions](https://getsentry.github.io/sentry-conventions/attributes/gen_ai/) -5. 
**Set Sentry origin**: `SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN = 'auto.ai.openai'` (use provider name: `openai`, `anthropic`, `vercelai`, etc. - only alphanumerics, `_`, and `.` allowed) -6. **Truncate large data**: Use helper functions from `utils.ts` -7. **Correct span operations**: `gen_ai.invoke_agent` for parent, `gen_ai.chat` for children -8. **Streaming**: Use `startSpanManual()`, accumulate state, call `span.end()` -9. **Token accumulation**: Direct on child spans, accumulate on parent from children -10. **Performance**: Use `callWhenPatched()` for Pattern 1 -11. **LangChain**: Check `_INTERNAL_shouldSkipAiProviderWrapping()` in Pattern 2 - ---- - -## Reference Implementations - -- **Pattern 1 (Span Processors):** `packages/node/src/integrations/tracing/vercelai/` -- **Pattern 2 (Client Wrapping):** `packages/node/src/integrations/tracing/openai/` -- **Pattern 3 (Callback/Hooks):** `packages/node/src/integrations/tracing/langchain/` - ---- - -## Auto-Instrumentation Checklist - -- [ ] If runtime-specific, placed code only in that runtime's package -- [ ] Added to `getAutoPerformanceIntegrations()` in correct order (Node.js) -- [ ] Added to `getOpenTelemetryInstrumentationToPreload()` (Node.js with OTel) -- [ ] Exported from appropriate package index (`packages/node/src/index.ts`, `packages/cloudflare/src/index.ts`, etc.) 
-- [ ] Added E2E tests: - - [ ] Node.js: `dev-packages/node-integration-tests/suites/tracing/{provider}/` - - [ ] Cloudflare: `dev-packages/cloudflare-integration-tests/suites/tracing/{provider}/` - - [ ] Browser: `dev-packages/browser-integration-tests/suites/tracing/ai-providers/{provider}/` -- [ ] E2E test verifies auto-instrumentation (no manual setup required) -- [ ] Only used attributes from [Sentry Gen AI Conventions](https://getsentry.github.io/sentry-conventions/attributes/gen_ai/) -- [ ] JSDoc says "enabled by default" or "not enabled by default" -- [ ] Documented how to disable (if auto-enabled) -- [ ] Documented limitations clearly -- [ ] Verified OTel only patches when package imported (Node.js) - ---- - -## Questions? - -1. Look at reference implementations above -2. Check shared utilities in `packages/core/src/tracing/ai/` -3. Review OpenTelemetry Semantic Conventions: https://opentelemetry.io/docs/specs/semconv/gen-ai/ - -**When in doubt, follow the pattern of the most similar existing integration!** diff --git a/.cursor/rules/fetch-docs/attributes.mdc b/.cursor/rules/fetch-docs/attributes.mdc deleted file mode 100644 index fa1fe70629d5..000000000000 --- a/.cursor/rules/fetch-docs/attributes.mdc +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: Use this rule if you need developer documentation about Span Attributes within the Sentry SDKs -alwaysApply: false ---- - -Find the SDK developer documentation here: https://develop.sentry.dev/sdk/telemetry/attributes.md diff --git a/.cursor/rules/fetch-docs/bundle-size.mdc b/.cursor/rules/fetch-docs/bundle-size.mdc deleted file mode 100644 index 0b7e16ff3b8b..000000000000 --- a/.cursor/rules/fetch-docs/bundle-size.mdc +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: Use this rule if you need developer documentation about bundle size in the Sentry SDKs for browser environments. 
-alwaysApply: false ---- - -Find the SDK developer documentation here: https://develop.sentry.dev/sdk/platform-specifics/javascript-sdks/bundle-size.md diff --git a/.cursor/rules/fetch-docs/scopes.mdc b/.cursor/rules/fetch-docs/scopes.mdc deleted file mode 100644 index 145ec100da89..000000000000 --- a/.cursor/rules/fetch-docs/scopes.mdc +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: Use this rule if you need developer documentation about the concept of Scopes (global, isolation, current) within the Sentry SDKs -alwaysApply: false ---- - -Find the SDK developer documentation here: https://develop.sentry.dev/sdk/telemetry/scopes.md diff --git a/.cursor/rules/fetch-docs/tracing-in-browser.mdc b/.cursor/rules/fetch-docs/tracing-in-browser.mdc deleted file mode 100644 index 38a2a4d673e3..000000000000 --- a/.cursor/rules/fetch-docs/tracing-in-browser.mdc +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: Use this rule if you need developer documentation about tracing for browser environments (e.g. page-load/navigation spans and Web Vitals). -alwaysApply: false ---- - -Find the SDK developer documentation here: https://develop.sentry.dev/sdk/platform-specifics/javascript-sdks/browser-tracing.md diff --git a/.cursor/rules/publishing_release.mdc b/.cursor/rules/publishing_release.mdc deleted file mode 100644 index f50a5ea57f93..000000000000 --- a/.cursor/rules/publishing_release.mdc +++ /dev/null @@ -1,38 +0,0 @@ ---- -description: Use this rule if you are looking to publish a release for the Sentry JavaScript SDKs -globs: -alwaysApply: false ---- - -# Publishing a Release - -Use these guidelines when publishing a new Sentry JavaScript SDK release. - -## Standard Release Process (from develop to master) - -The release process is outlined in [publishing-a-release.md](mdc:docs/publishing-a-release.md). - -1. Ensure you're on the `develop` branch with the latest changes: - - If you have unsaved changes, stash them with `git stash -u`. 
- If you're on a different branch than `develop`, check out the develop branch using `git checkout develop`. - Pull the latest updates from the remote repository by running `git pull origin develop`. - -2. Run `yarn changelog` on the `develop` branch and copy the output. You can use `yarn changelog | pbcopy` to copy the output of `yarn changelog` into your clipboard. -3. Decide on a version for the release based on [semver](mdc:https://semver.org). The version should be decided based on what is included in the release. For example, if the release includes a new feature, we should increment the minor version. If it includes only bug fixes, we should increment the patch version. You can find the latest version in [CHANGELOG.md](mdc:CHANGELOG.md) at the very top. -4. Create a branch `prepare-release/VERSION`, e.g. `prepare-release/8.1.0`, off `develop`. -5. Update [CHANGELOG.md](mdc:CHANGELOG.md) to add an entry for the next release number and a list of changes since the last release from the output of `yarn changelog`. See the `Updating the Changelog` section in [publishing-a-release.md](mdc:docs/publishing-a-release.md) for more details. Do not remove any changelog entries. -6. Commit the changes to [CHANGELOG.md](mdc:CHANGELOG.md) with `meta(changelog): Update changelog for VERSION` where `VERSION` is the version of the release, e.g. `meta(changelog): Update changelog for 8.1.0` -7. Push the `prepare-release/VERSION` branch to origin and remind the user that the release PR needs to be opened from the `master` branch. -8. In case you were working on a different branch, you can switch back to the branch you were working on and continue your work by unstashing the changes you stashed earlier with the command `git stash pop` (only if you stashed changes).
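The branch and commit-message naming used in the steps above can be sketched in shell; the version number here is a hypothetical example, chosen per semver for the actual release:

```shell
#!/bin/sh
# Hypothetical version for illustration; pick the real one per semver.
VERSION="8.1.0"

BRANCH="prepare-release/${VERSION}"
COMMIT_MSG="meta(changelog): Update changelog for ${VERSION}"

echo "$BRANCH"      # prepare-release/8.1.0
echo "$COMMIT_MSG"  # meta(changelog): Update changelog for 8.1.0

# The surrounding git workflow (shown as comments, not executed here):
#   git checkout develop && git pull origin develop
#   git checkout -b "$BRANCH"
#   yarn changelog | pbcopy      # then paste entries into CHANGELOG.md
#   git commit -am "$COMMIT_MSG"
#   git push -u origin "$BRANCH"
```

Deriving both strings from a single `VERSION` variable avoids the branch name and commit message drifting apart.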
- -## Key Commands - -- `yarn changelog` - Generate changelog entries -- `yarn lint` - Ensure code quality -- `yarn test` - Run test suite -- `yarn build:dev` - Verify build - -## Important Notes - -- This repository uses **Git Flow** - target `develop` for feature PRs, not `master`. See [gitflow.md](mdc:docs/gitflow.md) for more details. -- For first-time SDK releases, follow the New SDK Release Checklist [new-sdk-release-checklist.md](mdc:docs/new-sdk-release-checklist.md). If you notice there is something not following the new SDK release checklist, please remind the user. diff --git a/.cursor/rules/sdk_dependency_upgrades.mdc b/.cursor/rules/sdk_dependency_upgrades.mdc deleted file mode 100644 index becf0805eb91..000000000000 --- a/.cursor/rules/sdk_dependency_upgrades.mdc +++ /dev/null @@ -1,167 +0,0 @@ ---- -description: Use this rule if you are looking to upgrade a dependency in the Sentry JavaScript SDKs -globs: -alwaysApply: false ---- - -# Yarn v1 Dependency Upgrades - -## Upgrade Process - -### Dependency Analysis - -```bash -# Check dependency tree -yarn list --depth=0 - -# Find why package is installed -yarn why [package-name] -``` - -### Root Workspace vs. Package Dependencies - -**CRITICAL**: Understand the difference between dependency types: - -- **Root Workspace dependencies**: Shared dev tools, build tools, testing frameworks -- **Package dependencies**: Package-specific runtime and dev dependencies -- Always check if dependency should be in root workspace or package level - -### Upgrade Dependencies - -**MANDATORY**: Only ever upgrade a single package at a time. - -**CRITICAL RULE**: If a dependency is not defined in `package.json` anywhere, the upgrade must be run in the root workspace as the `yarn.lock` file is contained there. 
- -```bash -# Upgrade specific package to latest compatible version -npx yarn-update-dependency@latest [package-name] -``` - -Avoid upgrading top-level dependencies (defined in `package.json`), especially if they are used for tests. If you are going to upgrade them, ask the user before proceeding. - -**REQUIREMENT**: If a `package.json` file is updated, make sure it has a new line at the end. - -#### CRITICAL: OpenTelemetry Dependency Constraint - -**STOP UPGRADE IMMEDIATELY** if upgrading any dependency with `opentelemetry` in the name and the new version or any of its dependencies uses forbidden OpenTelemetry versions. - -**FORBIDDEN VERSION PATTERNS:** - -- `2.x.x` versions (e.g., `2.0.0`, `2.1.0`) -- `0.2xx.x` versions (e.g., `0.200.0`, `0.201.0`) - -When upgrading OpenTelemetry dependencies: - -1. Check the dependency's `package.json` after upgrade -2. Verify the package itself doesn't use forbidden version patterns -3. Verify none of its dependencies use `@opentelemetry/*` packages with forbidden version patterns -4. **Example**: `@opentelemetry/instrumentation-pg@0.52.0` is forbidden because it bumped to core `2.0.0` and instrumentation `0.200.0` -5. If forbidden OpenTelemetry versions are detected, **ABORT the upgrade** and notify the user that this upgrade cannot proceed due to OpenTelemetry v2+ compatibility constraints - -#### CRITICAL: E2E Test Dependencies - -**DO NOT UPGRADE** the major version of dependencies in test applications where the test name explicitly mentions a dependency version. - -**RULE**: For `dev-packages/e2e-tests/test-applications/*`, if the test directory name mentions a specific version (e.g., `nestjs-8`), do not upgrade that dependency beyond the mentioned major version. - -**Example**: Do not upgrade the nestjs version of `dev-packages/e2e-tests/test-applications/nestjs-8` to nestjs 9 or above because the test name mentions nestjs 8. 
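The forbidden-version patterns above can be checked mechanically. This is a minimal sketch (a hypothetical helper, not part of the repo) that flags `2.x.x` and `0.2xx.x` versions:

```shell
#!/bin/sh
# Returns success (0) if the given version matches a forbidden
# OpenTelemetry pattern: 2.x.x, or 0.2xx.x (three digits after "0.").
is_forbidden() {
  case "$1" in
    2.*)             return 0 ;;  # e.g. 2.0.0, 2.1.0
    0.2[0-9][0-9].*) return 0 ;;  # e.g. 0.200.0, 0.201.0
    *)               return 1 ;;
  esac
}

for v in 2.0.0 0.200.0 0.57.1 1.30.0; do
  if is_forbidden "$v"; then
    echo "$v: forbidden"
  else
    echo "$v: ok"
  fi
done
```

A helper like this could be run over the version numbers reported by `yarn info` before committing to an upgrade.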
- -## Safety Protocols - -### Pre-Upgrade Checklist - -**COMPLETE ALL STEPS** before proceeding with any upgrade: - -1. **Backup**: Ensure clean git state or create backup branch -2. **CI Status**: Verify all tests are passing -3. **Lockfile works**: Confirm `yarn.lock` is in a good state (no merge conflicts) -4. **OpenTelemetry Check**: For OpenTelemetry dependencies, verify no forbidden version patterns (`2.x.x` or `0.2xx.x`) will be introduced - -### Post-Upgrade Verification - -```bash -# rebuild everything -yarn install - -# Build the project -yarn build:dev - -# Make sure dependencies are deduplicated -yarn dedupe-deps:fix - -# Fix any linting issues -yarn fix - -# Check circular dependencies -yarn circularDepCheck -``` - -## Version Management - -### Pinning Strategies - -- **Exact versions** (`1.2.3`): Use for critical dependencies -- **Caret versions** (`^1.2.3`): Allow minor updates only -- **Latest tag**: Avoid as much as possible other than in certain testing and development scenarios - -## Troubleshooting - -- **Yarn Version**: Run `yarn --version` - must be yarn v1 (not v2/v3/v4) -- **Lockfile Issues**: Verify yarn.lock exists and is properly maintained. 
Fix merge conflicts by running `yarn install` - -## Best Practices - -### Security Audits - -```bash -# Check for security vulnerabilities -yarn audit - -# Fix automatically fixable vulnerabilities -yarn audit fix - -# Install security patches only -yarn upgrade --security-only -``` - -### Check for Outdated Dependencies - -```bash -# Check all outdated dependencies -yarn outdated - -# Check specific package -yarn outdated [package-name] - -# Check dependencies in specific workspace -yarn workspace [workspace-name] outdated -``` - -### Using yarn info for Dependency Inspection - -Use `yarn info` to inspect package dependencies before and after upgrades: - -```bash -# Check current version and dependencies -yarn info - -# Check specific version dependencies -yarn info @ - -# Check dependencies field specifically -yarn info @ dependencies - -# Check all available versions -yarn info versions -``` - -The `yarn info` command provides detailed dependency information without requiring installation, making it particularly useful for: - -- Verifying OpenTelemetry packages don't introduce forbidden version patterns (`2.x.x` or `0.2xx.x`) -- Checking what dependencies a package will bring in before upgrading -- Understanding package version history and compatibility - -### Documentation - -- Update README or code comments if dependency change affects usage of the SDK or its integrations -- Notify team of significant changes diff --git a/.cursor/rules/sdk_development.mdc b/.cursor/rules/sdk_development.mdc deleted file mode 100644 index f3bbc6ee6669..000000000000 --- a/.cursor/rules/sdk_development.mdc +++ /dev/null @@ -1,209 +0,0 @@ ---- -description: Guidelines for working on the Sentry JavaScript SDK monorepo -alwaysApply: true ---- - -# SDK Development Rules - -You are working on the Sentry JavaScript SDK, a critical production SDK used by thousands of applications. Follow these rules strictly. 
- -## Code Quality Requirements (MANDATORY) - -**CRITICAL**: All changes must pass these checks before committing: - -1. **Always run `yarn lint`** - Fix all linting issues -2. **Always run `yarn test`** - Ensure all tests pass -3. **Always run `yarn build:dev`** - Verify TypeScript compilation - -## Development Commands - -### Build Commands - -- `yarn build` - Full production build with package verification -- `yarn build:dev` - Development build (transpile + types) -- `yarn build:dev:watch` - Development build in watch mode (recommended) -- `yarn build:dev:filter ` - Build specific package and dependencies -- `yarn build:bundle` - Build browser bundles only - -### Testing - -- `yarn test` - Run all unit tests - -### Linting and Formatting - -- `yarn lint` - Run ESLint and Prettier checks -- `yarn fix` - Auto-fix linting and formatting issues -- `yarn format:check` - Check file formatting only -- `yarn format` - Auto-fix formatting issues -- `yarn lint:es-compatibility` - Check ES compatibility - -## Git Flow Branching Strategy - -This repository uses **Git Flow**. See [docs/gitflow.md](docs/gitflow.md) for details. - -### Key Rules - -- **All PRs target `develop` branch** (NOT `master`) -- `master` represents the last released state -- Never merge directly into `master` (except emergency fixes) -- Avoid changing `package.json` files on `develop` during pending releases -- Never update dependencies, package.json content or build scripts unless explicitly asked for -- When asked to do a task on a set of files, always make sure that all occurences in the codebase are covered. Double check that no files have been forgotten. -- Unless explicitly asked for, make sure to cover all files, including files in `src/` and `test/` directories. - -### Branch Naming - -- Features: `feat/descriptive-name` -- Releases: `release/X.Y.Z` - -## Repository Architecture - -This is a monorepo with 40+ packages in the `@sentry/*` namespace, managed with Yarn workspaces and Nx. 
- -### Core Packages - -- `packages/core/` - Base SDK with interfaces, type definitions, core functionality -- `packages/types/` - Shared TypeScript type definitions - this is deprecated, never modify this package -- `packages/browser-utils/` - Browser-specific utilities and instrumentation -- `packages/node-core/` - Node Core SDK which contains most of the node-specific logic, excluding OpenTelemetry instrumentation. - -### Platform SDKs - -- `packages/browser/` - Browser SDK with bundled variants -- `packages/node/` - Node.js SDK. All general Node code should go into node-core, the node package itself only contains OpenTelemetry instrumentation on top of that. -- `packages/bun/`, `packages/deno/`, `packages/cloudflare/` - Runtime-specific SDKs - -### Framework Integrations - -- Framework packages: `packages/{framework}/` (react, vue, angular, etc.) -- Client/server entry points where applicable (nextjs, nuxt, sveltekit) -- Integration tests use Playwright (Remix, browser-integration-tests) - -### AI Integrations - -- `packages/core/src/tracing/{provider}/` - Core instrumentation logic (OpenAI, Anthropic, Vercel AI, LangChain, etc.) 
-- `packages/node/src/integrations/tracing/{provider}/` - Node.js-specific integration + OTel instrumentation -- `packages/cloudflare/src/integrations/tracing/{provider}.ts` - Edge runtime support -- Patterns: OTEL Span Processors, Client Wrapping, Callback/Hook Based -- See `.cursor/rules/adding-a-new-ai-integration.mdc` for implementation guide - -### User Experience Packages - -- `packages/replay-internal/` - Session replay functionality -- `packages/replay-canvas/` - Canvas recording for replay -- `packages/replay-worker/` - Web worker support for replay -- `packages/feedback/` - User feedback integration - -### Development Packages (`dev-packages/`) - -- `browser-integration-tests/` - Playwright browser tests -- `e2e-tests/` - End-to-end tests for 70+ framework combinations -- `node-integration-tests/` - Node.js integration tests -- `test-utils/` - Shared testing utilities -- `bundle-analyzer-scenarios/` - Bundle analysis -- `rollup-utils/` - Build utilities -- GitHub Actions packages for CI/CD automation - -## Development Guidelines - -### Build System - -- Uses Rollup for bundling (`rollup.*.config.mjs`) -- TypeScript with multiple tsconfig files per package -- Nx orchestrates task execution across packages with caching -- Vite for testing with `vitest` - -### Package Structure Pattern - -Each package typically contains: - -- `src/index.ts` - Main entry point -- `src/sdk.ts` - SDK initialization logic -- `rollup.npm.config.mjs` - Build configuration -- `tsconfig.json`, `tsconfig.test.json`, `tsconfig.types.json` -- `test/` directory with corresponding test files - -### Key Development Notes - -- Uses Volta for Node.js/Yarn version management -- Requires initial `yarn build` after `yarn install` for TypeScript linking -- Integration tests use Playwright extensively -- Never change the volta, yarn, or package manager setup in general unless explicitly asked for - -### E2E Testing - -E2E tests are located in `dev-packages/e2e-tests/` and verify SDK behavior in 
real-world framework scenarios. - -#### How Verdaccio Registry Works - -E2E tests use [Verdaccio](https://verdaccio.org/), a lightweight npm registry running in Docker. Before tests run: - -1. SDK packages are built and packed into tarballs (`yarn build && yarn build:tarball`) -2. Tarballs are published to Verdaccio at `http://127.0.0.1:4873` -3. Test applications install packages from Verdaccio instead of public npm - -#### The `.npmrc` Requirement - -Every E2E test application needs an `.npmrc` file with: - -``` -@sentry:registry=http://127.0.0.1:4873 -@sentry-internal:registry=http://127.0.0.1:4873 -``` - -Without this file, pnpm installs from the public npm registry instead of Verdaccio, so your local changes won't be tested. This is a common cause of "tests pass in CI but fail locally" or vice versa. - -#### Running a Single E2E Test - -```bash -# Build packages first -yarn build && yarn build:tarball - -# Run a specific test app -cd dev-packages/e2e-tests -yarn test:run - -# Run with a specific variant (e.g., Next.js 15) -yarn test:run --variant -``` - -#### Common Pitfalls and Debugging - -1. **Missing `.npmrc`**: Most common issue. Always verify the test app has the correct `.npmrc` file. - -2. **Stale tarballs**: After SDK changes, must re-run `yarn build:tarball`. - -3. **Debugging tips**: - - Check browser console logs for SDK initialization errors - - Use `debug: true` in Sentry config - - Verify installed package version: check `node_modules/@sentry/*/package.json` - -### Notes for Background Tasks - -- Make sure to use [volta](https://volta.sh/) for development. Volta is used to manage the node, yarn and pnpm version used. -- Make sure that [PNPM support is enabled in volta](https://docs.volta.sh/advanced/pnpm). This means that the `VOLTA_FEATURE_PNPM` environment variable has to be set to `1`. -- Yarn, Node and PNPM have to be used through volta, in the versions defined by the volta config. NEVER change any versions unless explicitly asked to. 
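The `.npmrc` pitfall above lends itself to a quick pre-flight check before running a test app. A sketch, assuming a hypothetical app path:

```shell
# Sketch: verify an E2E test app routes @sentry scopes to Verdaccio.
# APP_DIR is a hypothetical example path, not a real test application.
APP_DIR="dev-packages/e2e-tests/test-applications/my-app"

if grep -qs '@sentry:registry=http://127.0.0.1:4873' "$APP_DIR/.npmrc"; then
  echo "Verdaccio registry override present"
else
  echo "warning: $APP_DIR/.npmrc is missing the Verdaccio registry override" >&2
fi
```

`grep -s` suppresses the error for a missing file, so a test app without any `.npmrc` at all also triggers the warning branch.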
- -## Testing Single Packages - -- Test specific package: `cd packages/{package-name} && yarn test` -- Build specific package: `yarn build:dev:filter @sentry/{package-name}` - -## Code Style Rules - -- Follow existing code conventions in each package -- Check imports and dependencies - only use libraries already in the codebase -- Look at neighboring files for patterns and style -- Never introduce code that exposes secrets or keys -- Follow security best practices - -## Before Every Commit Checklist - -1. ✅ `yarn lint` (fix all issues) -2. ✅ `yarn test` (all tests pass) -3. ✅ `yarn build:dev` (builds successfully) -4. ✅ Target `develop` branch for PRs (not `master`) - -## Documentation Sync - -**IMPORTANT**: When editing CLAUDE.md, also update .cursor/rules/sdk_development.mdc and vice versa to keep both files in sync. diff --git a/.cursor/rules/upgrade_opentelemetry_instrumentations.mdc b/.cursor/rules/upgrade_opentelemetry_instrumentations.mdc deleted file mode 100644 index b650ae1f5041..000000000000 --- a/.cursor/rules/upgrade_opentelemetry_instrumentations.mdc +++ /dev/null @@ -1,33 +0,0 @@ ---- -description: Use this rule if you are looking to grade OpenTelemetry instrumentations for the Sentry JavaScript SDKs -globs: * -alwaysApply: false ---- - -# Upgrading OpenTelemetry instrumentations - -1. For every package in packages/\*\*: - - When upgrading dependencies for OpenTelemetry instrumentations we need to first upgrade `@opentelemetry/instrumentation` to the latest version. - **CRITICAL**: `@opentelemetry/instrumentation` MUST NOT include any breaking changes. - Read through the changelog of `@opentelemetry/instrumentation` to figure out if breaking changes are included and fail with the reason if it does include breaking changes. 
- You can find the changelog at `https://github.com/open-telemetry/opentelemetry-js/blob/main/experimental/CHANGELOG.md` - - - After successfully upgrading `@opentelemetry/instrumentation` upgrade all `@opentelemetry/instrumentation-{instrumentation}` packages, e.g. `@opentelemetry/instrumentation-pg` - **CRITICAL**: `@opentelemetry/instrumentation-{instrumentation}` MUST NOT include any breaking changes. - Read through the changelog of `@opentelemetry/instrumentation-{instrumentation}` to figure out if breaking changes are included and fail with the reason if it does including breaking changes. - You can find the changelogs at `https://github.com/open-telemetry/opentelemetry-js-contrib/blob/main/packages/instrumentation-{instrumentation}/CHANGELOG.md`. - - - Finally, upgrade third party instrumentations to their latest versions, these are currently: - - @prisma/instrumentation - - **CRITICAL**: Upgrades to third party instrumentations MUST NOT include breaking changes. - Read through the changelog of each third party instrumentation to figure out if breaking changes are included and fail with the reason if it does include breaking changes. - -2. For packages and apps in dev-packages/\*\*: - - If an app depends on `@opentelemetry/instrumentation` >= 0.200.x upgrade it to the latest version. - **CRITICAL**: `@opentelemetry/instrumentation` MUST NOT include any breaking changes. - - - If an app depends on `@opentelemetry/instrumentation-http` >= 0.200.x upgrade it to the latest version. - **CRITICAL**: `@opentelemetry/instrumentation-http` MUST NOT include any breaking changes. - -3. Generate a new yarn lock file. 
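The `fix-security-vulnerability` workflow change in this diff hardens how the Dependabot alert number is parsed out of user input. The same logic can be sketched standalone (the input URL is a hypothetical example):

```shell
# Mirrors the workflow's alert-ID extraction: take the last path segment,
# strip any query string, then require a purely numeric value.
INPUT="https://github.com/getsentry/sentry-javascript/security/dependabot/123?query=is%3Aopen"

RAW="${INPUT##*/}"      # everything after the last '/'
NUMBER="${RAW%%\?*}"    # drop a trailing '?...' query string
case "$NUMBER" in
  ''|*[!0-9]*) echo "error: no numeric alert ID in input: $INPUT" >&2; exit 1 ;;
  *) echo "alert number: $NUMBER" ;;
esac
```

The numeric guard is what the workflow adds: without it, a crafted input could smuggle arbitrary text into `$GITHUB_OUTPUT`.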
diff --git a/.github/workflows/fix-security-vulnerability.yml b/.github/workflows/fix-security-vulnerability.yml index 8edd1447ca9b..f78290c032c6 100644 --- a/.github/workflows/fix-security-vulnerability.yml +++ b/.github/workflows/fix-security-vulnerability.yml @@ -32,7 +32,13 @@ jobs: id: alert run: | INPUT="${{ github.event.inputs.alert }}" - echo "number=${INPUT##*/}" >> "$GITHUB_OUTPUT" + RAW="${INPUT##*/}" + NUMBER="${RAW%%\?*}" + if ! [[ "$NUMBER" =~ ^[0-9]+$ ]]; then + echo "Error: Could not extract a valid numeric alert ID from input: $INPUT" + exit 1 + fi + echo "number=$NUMBER" >> "$GITHUB_OUTPUT" - uses: anthropics/claude-code-action@v1 with: diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 000000000000..7e1e1dc66b48 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,138 @@ +# Sentry JavaScript SDK + +Monorepo with 40+ packages in `@sentry/*`, managed with Yarn workspaces and Nx. + +## Setup + +- [Volta](https://volta.sh/) for Node.js/Yarn/PNPM version management +- Requires `VOLTA_FEATURE_PNPM=1` +- After cloning: `yarn install && yarn build` +- Never change Volta, Yarn, or package manager versions unless explicitly asked + +## Package Manager + +Use **yarn**: `yarn install`, `yarn build:dev`, `yarn test`, `yarn lint` + +| Command | Purpose | +| ------------------------------------- | ----------------------------- | +| `yarn build` | Full production build | +| `yarn build:dev` | Dev build (transpile + types) | +| `yarn build:dev:filter @sentry/` | Build one package + deps | +| `yarn build:bundle` | Browser bundles only | +| `yarn test` | All unit tests | +| `yarn lint` | ESLint + Oxfmt | +| `yarn fix` | Auto-fix lint + format | +| `yarn format` | Auto-fix formatting (Oxfmt) | + +Single package: `cd packages/ && yarn test` + +## Commit Attribution + +AI commits MUST include: + +``` +Co-Authored-By: +``` + +## Git Workflow + +Uses **Git Flow** (see `docs/gitflow.md`). 
+ +- **All PRs target `develop`** (NOT `master`) +- `master` = last released state — never merge directly +- Feature branches: `feat/descriptive-name` +- Never update dependencies, `package.json`, or build scripts unless explicitly asked + +## Before Every Commit + +1. `yarn format` +2. `yarn lint` +3. `yarn test` +4. `yarn build:dev` +5. NEVER push on `develop` + +## Architecture + +### Core + +- `packages/core/` — Base SDK: interfaces, types, core functionality +- `packages/types/` — Shared types (**deprecated, never modify – instead find types in packages/core**) +- `packages/browser-utils/` — Browser utilities and instrumentation +- `packages/node-core/` — Node core logic (excludes OTel instrumentation) + +### Platform SDKs + +- `packages/browser/` — Browser SDK + CDN bundles +- `packages/node/` — Node.js SDK (OTel instrumentation on top of node-core) +- `packages/bun/`, `packages/deno/`, `packages/cloudflare/` + +### Framework Integrations + +- `packages/{framework}/` — React, Vue, Angular, Next.js, Nuxt, SvelteKit, Remix, etc. 
+- Some have client/server entry points (nextjs, nuxt, sveltekit) + +### AI Integrations + +- `packages/core/src/tracing/{provider}/` — Core instrumentation +- `packages/node/src/integrations/tracing/{provider}/` — Node.js integration + OTel +- `packages/cloudflare/src/integrations/tracing/{provider}.ts` — Edge runtime +- Use `/add-ai-integration` skill when adding or modifying integrations + +### User Experience + +- `packages/replay-internal/`, `packages/replay-canvas/`, `packages/replay-worker/` — Session replay +- `packages/feedback/` — User feedback + +### Dev Packages (`dev-packages/`) + +- `browser-integration-tests/` — Playwright browser tests +- `e2e-tests/` — E2E tests (70+ framework combos) +- `node-integration-tests/` — Node.js integration tests +- `test-utils/` — Shared test utilities +- `rollup-utils/` — Build utilities + +## Coding Standards + +- Follow existing conventions — check neighboring files +- Only use libraries already in the codebase +- Never expose secrets or keys +- When modifying files, cover all occurrences (including `src/` and `test/`) + +## Reference Documentation + +- [Span Attributes](https://develop.sentry.dev/sdk/telemetry/attributes.md) +- [Scopes (global, isolation, current)](https://develop.sentry.dev/sdk/telemetry/scopes.md) + +## Skills + +### E2E Testing + +Use `/e2e` skill to run E2E tests. See `.claude/skills/e2e/SKILL.md` + +### Security Vulnerabilities + +Use `/fix-security-vulnerability` skill for Dependabot alerts. See `.claude/skills/fix-security-vulnerability/SKILL.md` + +### Issue Triage + +Use `/triage-issue` skill. See `.claude/skills/triage-issue/SKILL.md` + +### CDN Bundles + +Use `/add-cdn-bundle` skill. See `.claude/skills/add-cdn-bundle/SKILL.md` + +### Publishing a Release + +Use `/release` skill. See `.claude/skills/release/SKILL.md` + +### Dependency Upgrades + +Use `/upgrade-dep` skill. See `.claude/skills/upgrade-dep/SKILL.md` + +### OpenTelemetry Instrumentation Upgrades + +Use `/upgrade-otel` skill. 
See `.claude/skills/upgrade-otel/SKILL.md` + +### AI Integration + +Use `/add-ai-integration` skill. See `.claude/skills/add-ai-integration/SKILL.md` diff --git a/CHANGELOG.md b/CHANGELOG.md index 0c5a99c217c1..c7a40cb69762 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,103 @@ - "You miss 100 percent of the chances you don't take. — Wayne Gretzky" — Michael Scott +## 10.41.0 + +### Important Changes + +- **feat(core,cloudflare,deno): Add `instrumentPostgresJsSql` instrumentation ([#19566](https://github.com/getsentry/sentry-javascript/pull/19566))** + + Added a new instrumentation helper for the [`postgres`](https://github.com/porsager/postgres) (postgres.js) library, designed for + SDKs that are not based on OpenTelemetry (e.g. Cloudflare, Deno). This wraps a postgres.js `sql` tagged template instance so that + all queries automatically create Sentry spans. + + ```javascript + import postgres from 'postgres'; + import * as Sentry from '@sentry/cloudflare'; // or '@sentry/deno' + + export default Sentry.withSentry(env => ({ dsn: '__DSN__' }), { + async fetch(request, env, ctx) { + const sql = Sentry.instrumentPostgresJsSql(postgres(env.DATABASE_URL)); + + // All queries now create Sentry spans + const users = await sql`SELECT * FROM users WHERE id = ${userId}`; + return Response.json(users); + }, + }); + ``` + + The instrumentation is available in `@sentry/core`, `@sentry/cloudflare`, and `@sentry/deno`. + +- **feat(nextjs): Add Turbopack support for `thirdPartyErrorFilterIntegration` ([#19542](https://github.com/getsentry/sentry-javascript/pull/19542))** + + We added experimental support for the `thirdPartyErrorFilterIntegration` with Turbopack builds. 
+ + This feature requires Next.js 16+ and is currently behind an experimental flag: + + ```js + // next.config.ts + import { withSentryConfig } from '@sentry/nextjs'; + + export default withSentryConfig(nextConfig, { + _experimental: { + turbopackApplicationKey: 'my-app-key', + }, + }); + ``` + + Then configure the integration in your client instrumentation file with a matching key: + + ```js + // instrumentation-client.ts + import * as Sentry from '@sentry/nextjs'; + + Sentry.init({ + integrations: [ + Sentry.thirdPartyErrorFilterIntegration({ + filterKeys: ['my-app-key'], + behaviour: 'apply-tag-if-exclusively-contains-third-party-frames', + }), + ], + }); + ``` + +### Other Changes + +- feat(core,cloudflare): Add dispose to the client for proper cleanup ([#19506](https://github.com/getsentry/sentry-javascript/pull/19506)) +- feat(deps): Bump rxjs from 7.8.1 to 7.8.2 ([#19545](https://github.com/getsentry/sentry-javascript/pull/19545)) +- feat(nextjs): Use `not: foreign` condition in turbopack loaders ([#19502](https://github.com/getsentry/sentry-javascript/pull/19502)) +- feat(react-router): Include middleware function names and indices ([#19109](https://github.com/getsentry/sentry-javascript/pull/19109)) +- fix(consola): Normalize extra keys from consola ([#19511](https://github.com/getsentry/sentry-javascript/pull/19511)) +- fix(core): Improve message truncation for multimodal content and normalize streaming span names ([#19500](https://github.com/getsentry/sentry-javascript/pull/19500)) +- fix(core): Strip inline media from multimodal content before stringification ([#19540](https://github.com/getsentry/sentry-javascript/pull/19540)) +- fix(deps): Bump transitive rollup deps to patch CVE-2026-27606 ([#19565](https://github.com/getsentry/sentry-javascript/pull/19565)) +- fix(langchain): Use runName argument in handleChainStart to fix unknown_chain spans ([#19554](https://github.com/getsentry/sentry-javascript/pull/19554)) +- fix(nestjs): Improve control flow 
exception filtering ([#19524](https://github.com/getsentry/sentry-javascript/pull/19524)) +- fix(tanstackstart-react): Flush events in server entry point for serverless environments ([#19513](https://github.com/getsentry/sentry-javascript/pull/19513)) +- fix(vue): Avoid triggering deprecated next callback from router instrumentation ([#19476](https://github.com/getsentry/sentry-javascript/pull/19476)) + +
+ Internal Changes + +- chore: Updating minimatch ([#19434](https://github.com/getsentry/sentry-javascript/pull/19434)) +- chore(agents): Add `dotagents` ([#19526](https://github.com/getsentry/sentry-javascript/pull/19526)) +- chore(agents): Add nested `AGENTS.md` for browser ([#19551](https://github.com/getsentry/sentry-javascript/pull/19551)) +- chore(agents): Add nested `AGENTS.md` for nextjs ([#19556](https://github.com/getsentry/sentry-javascript/pull/19556)) +- chore(agents): Consolidate SDK dev rules into `AGENTS.md` ([#19521](https://github.com/getsentry/sentry-javascript/pull/19521)) +- chore(agents): Migrate repo-wide cursor rules to skills ([#19549](https://github.com/getsentry/sentry-javascript/pull/19549)) +- chore(agents): Remove stale cursor commands ([#19560](https://github.com/getsentry/sentry-javascript/pull/19560)) +- chore(ci): Validate alert id ([#19499](https://github.com/getsentry/sentry-javascript/pull/19499)) +- chore(deps): Bump rollup to 4.59.0 to fix path traversal vulnerability ([#19538](https://github.com/getsentry/sentry-javascript/pull/19538)) +- chore(lint): Remove junit report file ([#19491](https://github.com/getsentry/sentry-javascript/pull/19491)) +- chore(svelte,sveltekit): Use version range for magic-string ([#19520](https://github.com/getsentry/sentry-javascript/pull/19520)) +- chore(tanstackstart): Fix leftover formatting issue ([#19536](https://github.com/getsentry/sentry-javascript/pull/19536)) +- test(consola): Restructure tests ([#19517](https://github.com/getsentry/sentry-javascript/pull/19517)) +- test(node): Test runName parameter in handleChainStart for langchain ([#19562](https://github.com/getsentry/sentry-javascript/pull/19562)) + +
+ +Work in this release was contributed by @YevheniiKotyrlo. Thank you for your contribution! + ## 10.40.0 ### Important Changes diff --git a/CLAUDE.md b/CLAUDE.md deleted file mode 100644 index 8f16ed4ff986..000000000000 --- a/CLAUDE.md +++ /dev/null @@ -1,190 +0,0 @@ -# CLAUDE.md - -This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. - -# SDK Development Rules - -You are working on the Sentry JavaScript SDK, a critical production SDK used by thousands of applications. Follow these rules strictly. - -## Code Quality Requirements (MANDATORY) - -**CRITICAL**: All changes must pass these checks before committing: - -1. **Always run `yarn lint`** - Fix all linting issues -2. **Always run `yarn test`** - Ensure all tests pass -3. **Always run `yarn build:dev`** - Verify TypeScript compilation - -## Development Commands - -### Build Commands - -- `yarn build` - Full production build with package verification -- `yarn build:dev` - Development build (transpile + types) -- `yarn build:dev:watch` - Development build in watch mode (recommended) -- `yarn build:dev:filter ` - Build specific package and dependencies -- `yarn build:bundle` - Build browser bundles only - -### Testing - -- `yarn test` - Run all unit tests - -### Linting and Formatting - -- `yarn lint` - Run ESLint and Oxfmt checks -- `yarn fix` - Auto-fix linting and formatting issues -- `yarn format:check` - Check file formatting only -- `yarn format` - Auto-fix formatting issues -- `yarn lint:es-compatibility` - Check ES compatibility - -## Git Flow Branching Strategy - -This repository uses **Git Flow**. See [docs/gitflow.md](docs/gitflow.md) for details. 
- -### Key Rules - -- **All PRs target `develop` branch** (NOT `master`) -- `master` represents the last released state -- Never merge directly into `master` (except emergency fixes) -- Avoid changing `package.json` files on `develop` during pending releases -- Never update dependencies, package.json content or build scripts unless explicitly asked for -- When asked to do a task on a set of files, always make sure that all occurences in the codebase are covered. Double check that no files have been forgotten. -- Unless explicitly asked for, make sure to cover all files, including files in `src/` and `test/` directories. - -### Branch Naming - -- Features: `feat/descriptive-name` -- Releases: `release/X.Y.Z` - -## Repository Architecture - -This is a monorepo with 40+ packages in the `@sentry/*` namespace, managed with Yarn workspaces and Nx. - -### Core Packages - -- `packages/core/` - Base SDK with interfaces, type definitions, core functionality -- `packages/types/` - Shared TypeScript type definitions - this is deprecated, never modify this package -- `packages/browser-utils/` - Browser-specific utilities and instrumentation -- `packages/node-core/` - Node Core SDK which contains most of the node-specific logic, excluding OpenTelemetry instrumentation. - -### Platform SDKs - -- `packages/browser/` - Browser SDK with bundled variants -- `packages/node/` - Node.js SDK. All general Node code should go into node-core, the node package itself only contains OpenTelemetry instrumentation on top of that. -- `packages/bun/`, `packages/deno/`, `packages/cloudflare/` - Runtime-specific SDKs - -### Framework Integrations - -- Framework packages: `packages/{framework}/` (react, vue, angular, etc.) 
-- Client/server entry points where applicable (nextjs, nuxt, sveltekit) -- Integration tests use Playwright (Remix, browser-integration-tests) - -### User Experience Packages - -- `packages/replay-internal/` - Session replay functionality -- `packages/replay-canvas/` - Canvas recording for replay -- `packages/replay-worker/` - Web worker support for replay -- `packages/feedback/` - User feedback integration - -### Development Packages (`dev-packages/`) - -- `browser-integration-tests/` - Playwright browser tests -- `e2e-tests/` - End-to-end tests for 70+ framework combinations -- `node-integration-tests/` - Node.js integration tests -- `test-utils/` - Shared testing utilities -- `bundle-analyzer-scenarios/` - Bundle analysis -- `rollup-utils/` - Build utilities -- GitHub Actions packages for CI/CD automation - -## Development Guidelines - -### Build System - -- Uses Rollup for bundling (`rollup.*.config.mjs`) -- TypeScript with multiple tsconfig files per package -- Nx orchestrates task execution across packages with caching -- Vite for testing with `vitest` - -### Package Structure Pattern - -Each package typically contains: - -- `src/index.ts` - Main entry point -- `src/sdk.ts` - SDK initialization logic -- `rollup.npm.config.mjs` - Build configuration -- `tsconfig.json`, `tsconfig.test.json`, `tsconfig.types.json` -- `test/` directory with corresponding test files - -### Key Development Notes - -- Uses Volta for Node.js/Yarn version management -- Requires initial `yarn build` after `yarn install` for TypeScript linking -- Integration tests use Playwright extensively -- Never change the volta, yarn, or package manager setup in general unless explicitly asked for - -### E2E Testing - -E2E tests are located in `dev-packages/e2e-tests/` and verify SDK behavior in real-world framework scenarios. - -#### How Verdaccio Registry Works - -E2E tests use [Verdaccio](https://verdaccio.org/), a lightweight npm registry running in Docker. Before tests run: - -1. 
SDK packages are built and packed into tarballs (`yarn build && yarn build:tarball`) -2. Tarballs are published to Verdaccio at `http://127.0.0.1:4873` -3. Test applications install packages from Verdaccio instead of public npm - -#### The `.npmrc` Requirement - -Every E2E test application needs an `.npmrc` file with: - -``` -@sentry:registry=http://127.0.0.1:4873 -@sentry-internal:registry=http://127.0.0.1:4873 -``` - -Without this file, pnpm installs from the public npm registry instead of Verdaccio, so your local changes won't be tested. This is a common cause of "tests pass in CI but fail locally" or vice versa. - -#### Running a Single E2E Test - -Run the e2e skill. - -#### Common Pitfalls and Debugging - -1. **Missing `.npmrc`**: Most common issue. Always verify the test app has the correct `.npmrc` file. - -2. **Stale tarballs**: After SDK changes, must re-run `yarn build:tarball`. - -3. **Debugging tips**: - - Check browser console logs for SDK initialization errors - - Use `debug: true` in Sentry config - - Verify installed package version: check `node_modules/@sentry/*/package.json` - -### Notes for Background Tasks - -- Make sure to use [volta](https://volta.sh/) for development. Volta is used to manage the node, yarn and pnpm version used. -- Make sure that [PNPM support is enabled in volta](https://docs.volta.sh/advanced/pnpm). This means that the `VOLTA_FEATURE_PNPM` environment variable has to be set to `1`. -- Yarn, Node and PNPM have to be used through volta, in the versions defined by the volta config. NEVER change any versions unless explicitly asked to. 
- -## Testing Single Packages - -- Test specific package: `cd packages/{package-name} && yarn test` -- Build specific package: `yarn build:dev:filter @sentry/{package-name}` - -## Code Style Rules - -- Follow existing code conventions in each package -- Check imports and dependencies - only use libraries already in the codebase -- Look at neighboring files for patterns and style -- Never introduce code that exposes secrets or keys -- Follow security best practices - -## Before Every Commit Checklist - -1. ✅ `yarn lint` (fix all issues) -2. ✅ `yarn test` (all tests pass) -3. ✅ `yarn build:dev` (builds successfully) -4. ✅ Target `develop` branch for PRs (not `master`) - -## Documentation Sync - -**IMPORTANT**: When editing CLAUDE.md, also update .cursor/rules/sdk_development.mdc and vice versa to keep both files in sync. diff --git a/CLAUDE.md b/CLAUDE.md new file mode 120000 index 000000000000..47dc3e3d863c --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1 @@ +AGENTS.md \ No newline at end of file diff --git a/agents.lock b/agents.lock new file mode 100644 index 000000000000..faf3ca74d885 --- /dev/null +++ b/agents.lock @@ -0,0 +1,10 @@ +# Auto-generated by dotagents. Do not edit. +version = 1 + +[skills.dotagents] +source = "getsentry/dotagents" +resolved_url = "https://github.com/getsentry/dotagents.git" +resolved_path = "skills/dotagents" +commit = "84ec01d363fdd50b47f2baefed742d27a564c210" +integrity = "sha256-bVx96wBmjIF6NPfPH7GMDWUJLulbAHWZhRWi1UAZ6Ws=" + diff --git a/agents.toml b/agents.toml new file mode 100644 index 000000000000..fd6dbd04c767 --- /dev/null +++ b/agents.toml @@ -0,0 +1,15 @@ +version = 1 +# Check skills into git so collaborators get them without running 'dotagents install'. +# Set to true (or remove) to gitignore managed skills instead. 
+gitignore = false +agents = ["claude", "cursor"] + +[trust] +github_orgs = ["getsentry"] + +github_repos = ["getsentry/skills"] + + +[[skills]] +name = "dotagents" +source = "getsentry/dotagents" diff --git a/dev-packages/browser-integration-tests/.gitignore b/dev-packages/browser-integration-tests/.gitignore index 41569583fe3f..b258dda7330c 100644 --- a/dev-packages/browser-integration-tests/.gitignore +++ b/dev-packages/browser-integration-tests/.gitignore @@ -1,2 +1,3 @@ test-results tmp +*.junit.xml diff --git a/dev-packages/browser-integration-tests/package.json b/dev-packages/browser-integration-tests/package.json index e2264f7fd250..53c59f2a889e 100644 --- a/dev-packages/browser-integration-tests/package.json +++ b/dev-packages/browser-integration-tests/package.json @@ -69,10 +69,9 @@ "webpack": "^5.95.0" }, "devDependencies": { - "@types/glob": "8.0.0", "@types/node": "^18.19.1", "eslint-plugin-regexp": "^1.15.0", - "glob": "8.0.3" + "glob": "^13.0.6" }, "volta": { "extends": "../../package.json" diff --git a/dev-packages/browser-integration-tests/results.junit.xml b/dev-packages/browser-integration-tests/results.junit.xml deleted file mode 100644 index c66322d597ab..000000000000 --- a/dev-packages/browser-integration-tests/results.junit.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - \ No newline at end of file diff --git a/dev-packages/browser-integration-tests/scripts/detectFlakyTests.ts b/dev-packages/browser-integration-tests/scripts/detectFlakyTests.ts index 8e0aadc6af59..21f666adeb0e 100644 --- a/dev-packages/browser-integration-tests/scripts/detectFlakyTests.ts +++ b/dev-packages/browser-integration-tests/scripts/detectFlakyTests.ts @@ -1,7 +1,7 @@ import * as childProcess from 'child_process'; import * as fs from 'fs'; import * as path from 'path'; -import * as glob from 'glob'; +import { globSync } from 'glob'; /** * Assume that each test runs for 3s. 
@@ -112,7 +112,7 @@ function getPerTestRunCount(testPaths: string[]) { } function getTestPaths(): string[] { - const paths = glob.sync('suites/**/test.{ts,js}', { + const paths = globSync('suites/**/test.{ts,js}', { cwd: path.join(__dirname, '../'), }); diff --git a/dev-packages/e2e-tests/package.json b/dev-packages/e2e-tests/package.json index 21f2fa33b202..6084550a9d7b 100644 --- a/dev-packages/e2e-tests/package.json +++ b/dev-packages/e2e-tests/package.json @@ -21,13 +21,12 @@ "clean:pnpm": "pnpm store prune" }, "devDependencies": { - "@types/glob": "8.0.0", "@types/node": "^18.19.1", "dotenv": "16.0.3", "esbuild": "0.20.0", "eslint-plugin-regexp": "^1.15.0", - "glob": "8.0.3", - "rimraf": "^5.0.10", + "glob": "^13.0.6", + "rimraf": "^6.1.3", "ts-node": "10.9.2", "yaml": "2.8.2" }, diff --git a/dev-packages/e2e-tests/publish-packages.ts b/dev-packages/e2e-tests/publish-packages.ts index 5ade5b1d735c..cbe9a37de0a7 100644 --- a/dev-packages/e2e-tests/publish-packages.ts +++ b/dev-packages/e2e-tests/publish-packages.ts @@ -1,6 +1,6 @@ import * as childProcess from 'child_process'; import { readFileSync } from 'fs'; -import * as glob from 'glob'; +import { globSync } from 'glob'; import * as path from 'path'; const repositoryRoot = path.resolve(__dirname, '../..'); @@ -10,7 +10,7 @@ const version = (JSON.parse(readFileSync(path.join(__dirname, './package.json'), // Get absolute paths of all the packages we want to publish to the fake registry // Only include the current versions, to avoid getting old tarballs published as well -const packageTarballPaths = glob.sync(`packages/*/sentry-*-${version}.tgz`, { +const packageTarballPaths = globSync(`packages/*/sentry-*-${version}.tgz`, { cwd: repositoryRoot, absolute: true, }); diff --git a/dev-packages/e2e-tests/test-applications/nestjs-basic/package.json b/dev-packages/e2e-tests/test-applications/nestjs-basic/package.json index 2c142b5c6f90..46beea570042 100644 --- 
a/dev-packages/e2e-tests/test-applications/nestjs-basic/package.json +++ b/dev-packages/e2e-tests/test-applications/nestjs-basic/package.json @@ -21,6 +21,7 @@ "@nestjs/platform-express": "^10.0.0", "@sentry/nestjs": "latest || *", "reflect-metadata": "^0.2.0", + "axios": "1.13.5", "rxjs": "^7.8.1" }, "devDependencies": { diff --git a/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.controller.ts b/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.controller.ts index 33a6b1957d99..035106a14b21 100644 --- a/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.controller.ts +++ b/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.controller.ts @@ -67,6 +67,11 @@ export class AppController { return this.appService.testExpectedRpcException(id); } + @Get('test-axios-error/:id') + async testAxiosError(@Param('id') id: string) { + return this.appService.testAxiosError(id); + } + @Get('test-span-decorator-async') async testSpanDecoratorAsync() { return { result: await this.appService.testSpanDecoratorAsync() }; diff --git a/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.service.ts b/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.service.ts index 39495a3a7424..a7b91b7b3d98 100644 --- a/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.service.ts +++ b/dev-packages/e2e-tests/test-applications/nestjs-basic/src/app.service.ts @@ -3,6 +3,7 @@ import { RpcException } from '@nestjs/microservices'; import { Cron, SchedulerRegistry } from '@nestjs/schedule'; import * as Sentry from '@sentry/nestjs'; import { SentryCron, SentryTraced } from '@sentry/nestjs'; +import { AxiosError } from 'axios'; const monitorConfig = { schedule: { @@ -42,6 +43,17 @@ export class AppService { throw new RpcException(`This is an expected RPC exception with id ${id}`); } + testAxiosError(id: string) { + throw new AxiosError( + `This is an axios error with id ${id}`, + 'ERR_BAD_RESPONSE', + undefined, + undefined, + // 
Simulating an upstream API 502 response + { status: 502, statusText: 'Bad Gateway', headers: {}, config: { headers: {} }, data: {} } as any, + ); + } + @SentryTraced('wait and return a string') async wait() { await new Promise(resolve => setTimeout(resolve, 500)); diff --git a/dev-packages/e2e-tests/test-applications/nestjs-basic/tests/errors.test.ts b/dev-packages/e2e-tests/test-applications/nestjs-basic/tests/errors.test.ts index 09effba34198..62e14534cc7e 100644 --- a/dev-packages/e2e-tests/test-applications/nestjs-basic/tests/errors.test.ts +++ b/dev-packages/e2e-tests/test-applications/nestjs-basic/tests/errors.test.ts @@ -34,6 +34,24 @@ test('Sends exception to Sentry', async ({ baseURL }) => { }); }); +test('Sends AxiosError to Sentry', async ({ baseURL }) => { + const errorEventPromise = waitForError('nestjs-basic', event => { + return !event.type && event.exception?.values?.[0]?.value === 'This is an axios error with id 123'; + }); + + const response = await fetch(`${baseURL}/test-axios-error/123`); + expect(response.status).toBe(500); + + const errorEvent = await errorEventPromise; + + expect(errorEvent.exception?.values).toHaveLength(1); + expect(errorEvent.exception?.values?.[0]?.value).toBe('This is an axios error with id 123'); + expect(errorEvent.exception?.values?.[0]?.mechanism).toEqual({ + handled: false, + type: 'auto.http.nestjs.global_filter', + }); +}); + test('Does not send HttpExceptions to Sentry', async ({ baseURL }) => { let errorEventOccurred = false; diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16/app/third-party-filter/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16/app/third-party-filter/page.tsx new file mode 100644 index 000000000000..b6b4bea80def --- /dev/null +++ b/dev-packages/e2e-tests/test-applications/nextjs-16/app/third-party-filter/page.tsx @@ -0,0 +1,24 @@ +'use client'; + +import * as Sentry from '@sentry/nextjs'; + +function throwFirstPartyError(): void { + throw new 
Error('first-party-error'); +} + +export default function Page() { + return ( + <button id="first-party-error-btn" onClick={() => throwFirstPartyError()}> + Throw first-party error + </button> + ); +} diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16/instrumentation-client.ts index ae4e3195a2a1..934a50fb786d 100644 --- a/dev-packages/e2e-tests/test-applications/nextjs-16/instrumentation-client.ts +++ b/dev-packages/e2e-tests/test-applications/nextjs-16/instrumentation-client.ts @@ -7,6 +7,12 @@ Sentry.init({ tunnel: `http://localhost:3031/`, // proxy server tracesSampleRate: 1.0, sendDefaultPii: true, + integrations: [ + Sentry.thirdPartyErrorFilterIntegration({ + filterKeys: ['nextjs-16-e2e'], + behaviour: 'apply-tag-if-exclusively-contains-third-party-frames', + }), + ], // Verify Log type is available beforeSendLog(log: Log) { return log; diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16/next.config.ts index 5e72a02200d8..342ba13b1206 100644 --- a/dev-packages/e2e-tests/test-applications/nextjs-16/next.config.ts +++ b/dev-packages/e2e-tests/test-applications/nextjs-16/next.config.ts @@ -10,5 +10,6 @@ export default withSentryConfig(nextConfig, { silent: true, _experimental: { vercelCronsMonitoring: true, + turbopackApplicationKey: 'nextjs-16-e2e', }, }); diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16/tests/third-party-filter.test.ts new file mode 100644 index 000000000000..277a53fd394b --- /dev/null +++ b/dev-packages/e2e-tests/test-applications/nextjs-16/tests/third-party-filter.test.ts @@ -0,0 +1,25 @@ +import test, { expect } from '@playwright/test'; +import { waitForError } from '@sentry-internal/test-utils'; + +const isWebpackDev = process.env.TEST_ENV === 'development-webpack'; + +test('First-party error should not be tagged as third-party code', async ({ page }) => { + test.skip(isWebpackDev,
'Only relevant for Turbopack builds'); + + const errorPromise = waitForError('nextjs-16', errorEvent => { + return errorEvent?.exception?.values?.some(value => value.value === 'first-party-error') ?? false; + }); + + await page.goto('/third-party-filter'); + await page.locator('#first-party-error-btn').click(); + + const errorEvent = await errorPromise; + + expect(errorEvent.exception?.values?.[0]?.value).toBe('first-party-error'); + + // In production, TEST_ENV=production is shared by both turbopack and webpack variants. + // Only assert when the build is actually turbopack. + if (errorEvent.tags?.turbopack) { + expect(errorEvent.tags?.third_party_code).toBeUndefined(); + } +}); diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts index 6bd5b27264eb..f0c389733cfa 100644 --- a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts +++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts @@ -10,6 +10,7 @@ export default [ route('server-loader', 'routes/performance/server-loader.tsx'), route('server-action', 'routes/performance/server-action.tsx'), route('with-middleware', 'routes/performance/with-middleware.tsx'), + route('multi-middleware', 'routes/performance/multi-middleware.tsx'), route('error-loader', 'routes/performance/error-loader.tsx'), route('error-action', 'routes/performance/error-action.tsx'), route('error-middleware', 'routes/performance/error-middleware.tsx'), diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/multi-middleware.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/multi-middleware.tsx new file mode 100644 index 000000000000..b1a1525ea6a4 --- /dev/null +++ 
b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/multi-middleware.tsx @@ -0,0 +1,26 @@ +import type { Route } from './+types/multi-middleware'; + +export const middleware: Route.MiddlewareFunction[] = [ + async function multiAuthMiddleware(_args, next) { + return next(); + }, + async function multiLoggingMiddleware(_args, next) { + return next(); + }, + async function multiValidationMiddleware(_args, next) { + return next(); + }, +]; + +export function loader() { + return { message: 'Multi-middleware route loaded' }; +} + +export default function MultiMiddlewarePage() { + return ( +
+    <div>
+      <h1 id="multi-middleware-title">Multi Middleware Route</h1>
+      <p id="multi-middleware-content">This route has 3 middlewares</p>
+    </div>
+ ); +} diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts index e99a58a7f57c..ccc3fff2cd3e 100644 --- a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts +++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts @@ -2,8 +2,6 @@ import { expect, test } from '@playwright/test'; import { waitForTransaction } from '@sentry-internal/test-utils'; import { APP_NAME } from '../constants'; -// Note: React Router middleware instrumentation now works in Framework Mode. -// Previously this was a known limitation (see: https://github.com/remix-run/react-router/discussions/12950) test.describe('server - instrumentation API middleware', () => { test('should instrument server middleware with instrumentation API origin', async ({ page }) => { const txPromise = waitForTransaction(APP_NAME, async transactionEvent => { @@ -43,20 +41,27 @@ test.describe('server - instrumentation API middleware', () => { (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.middleware', ); + expect(middlewareSpan).toBeDefined(); expect(middlewareSpan).toMatchObject({ span_id: expect.any(String), trace_id: expect.any(String), - data: { + data: expect.objectContaining({ 'sentry.origin': 'auto.function.react_router.instrumentation_api', 'sentry.op': 'function.react_router.middleware', - }, - description: '/performance/with-middleware', + 'react_router.route.id': 'routes/performance/with-middleware', + 'http.route': '/performance/with-middleware', + 'react_router.middleware.index': 0, + }), parent_span_id: expect.any(String), start_timestamp: expect.any(Number), timestamp: expect.any(Number), op: 
'function.react_router.middleware', origin: 'auto.function.react_router.instrumentation_api', }); + + // Middleware name is available via OTEL patching of createRequestHandler + expect(middlewareSpan!.data?.['react_router.middleware.name']).toBe('authMiddleware'); + expect(middlewareSpan!.description).toBe('middleware authMiddleware'); }); test('should have middleware span run before loader span', async ({ page }) => { @@ -80,6 +85,37 @@ test.describe('server - instrumentation API middleware', () => { expect(loaderSpan).toBeDefined(); // Middleware should start before loader - expect(middlewareSpan!.start_timestamp).toBeLessThanOrEqual(loaderSpan!.start_timestamp); + expect(middlewareSpan!.start_timestamp).toBeLessThanOrEqual(loaderSpan!.start_timestamp!); + }); + + test('should track multiple middlewares with correct indices', async ({ page }) => { + const txPromise = waitForTransaction(APP_NAME, async transactionEvent => { + return transactionEvent.transaction === 'GET /performance/multi-middleware'; + }); + + await page.goto(`/performance/multi-middleware`); + + const transaction = await txPromise; + + await expect(page.locator('#multi-middleware-title')).toBeVisible(); + await expect(page.locator('#multi-middleware-content')).toHaveText('This route has 3 middlewares'); + + const middlewareSpans = transaction?.spans?.filter( + (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.middleware', + ); + + expect(middlewareSpans).toHaveLength(3); + + const sortedSpans = [...middlewareSpans!].sort( + (a: any, b: any) => + (a.data?.['react_router.middleware.index'] ?? 0) - (b.data?.['react_router.middleware.index'] ?? 
0), + ); + + expect(sortedSpans.map((s: any) => s.data?.['react_router.middleware.index'])).toEqual([0, 1, 2]); + expect(sortedSpans.map((s: any) => s.data?.['react_router.middleware.name'])).toEqual([ + 'multiAuthMiddleware', + 'multiLoggingMiddleware', + 'multiValidationMiddleware', + ]); }); }); diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts index 68ba30d69397..4da306d41cc7 100644 --- a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts +++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts @@ -1,6 +1,11 @@ import { reactRouter } from '@react-router/dev/vite'; +import { sentryReactRouter } from '@sentry/react-router'; import { defineConfig } from 'vite'; -export default defineConfig({ - plugins: [reactRouter()], -}); +export default defineConfig(async config => ({ + plugins: [ + reactRouter(), + // eslint-disable-next-line @typescript-eslint/no-explicit-any + ...((await sentryReactRouter({ sourcemaps: { disable: true } }, config)) as any[]), + ], +})); diff --git a/dev-packages/e2e-tests/validate-test-app-setups.ts b/dev-packages/e2e-tests/validate-test-app-setups.ts index 21bdc383c88d..edbbe047417f 100644 --- a/dev-packages/e2e-tests/validate-test-app-setups.ts +++ b/dev-packages/e2e-tests/validate-test-app-setups.ts @@ -1,9 +1,9 @@ /* eslint-disable no-console */ import * as fs from 'fs'; -import * as glob from 'glob'; +import { globSync } from 'glob'; import * as path from 'path'; -const testRecipePaths = glob.sync('test-applications/*/test-recipe.json', { +const testRecipePaths = globSync('test-applications/*/test-recipe.json', { cwd: __dirname, absolute: true, }); diff --git a/dev-packages/e2e-tests/validate-verdaccio-configuration.ts b/dev-packages/e2e-tests/validate-verdaccio-configuration.ts index 
ca0b20f7213e..7bef179bd5a6 100644 --- a/dev-packages/e2e-tests/validate-verdaccio-configuration.ts +++ b/dev-packages/e2e-tests/validate-verdaccio-configuration.ts @@ -1,6 +1,6 @@ import * as assert from 'assert'; import * as fs from 'fs'; -import * as glob from 'glob'; +import { globSync } from 'glob'; import * as path from 'path'; import * as YAML from 'yaml'; @@ -19,7 +19,7 @@ const sentryScopedPackagesInVerdaccioConfig = Object.keys(verdaccioConfig.packag packageName.startsWith('@sentry/'), ); -const packageJsonPaths = glob.sync('packages/*/package.json', { +const packageJsonPaths = globSync('packages/*/package.json', { cwd: repositoryRoot, absolute: true, }); diff --git a/dev-packages/node-core-integration-tests/package.json b/dev-packages/node-core-integration-tests/package.json index e17ef820c83e..265f440b731a 100644 --- a/dev-packages/node-core-integration-tests/package.json +++ b/dev-packages/node-core-integration-tests/package.json @@ -46,7 +46,7 @@ "node-schedule": "^2.1.1", "proxy": "^2.1.1", "reflect-metadata": "0.2.1", - "rxjs": "^7.8.1", + "rxjs": "^7.8.2", "winston": "^3.17.0", "yargs": "^16.2.0" }, diff --git a/dev-packages/node-integration-tests/package.json b/dev-packages/node-integration-tests/package.json index 27bcddd0aff7..d84293eb2b2c 100644 --- a/dev-packages/node-integration-tests/package.json +++ b/dev-packages/node-integration-tests/package.json @@ -79,7 +79,7 @@ "proxy": "^2.1.1", "redis-4": "npm:redis@^4.6.14", "reflect-metadata": "0.2.1", - "rxjs": "^7.8.1", + "rxjs": "^7.8.2", "tedious": "^18.6.1", "winston": "^3.17.0", "yargs": "^16.2.0" diff --git a/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts b/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts index 182f4d4ee8c5..719333488051 100644 --- a/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts @@ -266,7 +266,7 @@ describe('Anthropic integration', () => 
{ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic', }), - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', origin: 'auto.ai.anthropic', status: 'ok', @@ -296,7 +296,7 @@ describe('Anthropic integration', () => { [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307', [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true, }), - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', origin: 'auto.ai.anthropic', status: 'ok', @@ -401,7 +401,7 @@ describe('Anthropic integration', () => { spans: expect.arrayContaining([ // messages.create with stream: true expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic', @@ -419,7 +419,7 @@ describe('Anthropic integration', () => { }), // messages.stream expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic', @@ -435,7 +435,7 @@ describe('Anthropic integration', () => { }), // messages.stream with redundant stream: true param expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic', @@ -457,7 +457,7 @@ describe('Anthropic integration', () => { transaction: 'main', spans: expect.arrayContaining([ expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true, @@ -466,7 
+466,7 @@ describe('Anthropic integration', () => { }), }), expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true, @@ -474,7 +474,7 @@ describe('Anthropic integration', () => { }), }), expect.objectContaining({ - description: 'chat claude-3-haiku-20240307 stream-response', + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true, @@ -536,7 +536,7 @@ describe('Anthropic integration', () => { transaction: { spans: expect.arrayContaining([ expect.objectContaining({ - description: expect.stringContaining('stream-response'), + description: 'chat claude-3-haiku-20240307', op: 'gen_ai.chat', data: expect.objectContaining({ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_TOOLS_JSON, @@ -557,7 +557,7 @@ describe('Anthropic integration', () => { spans: expect.arrayContaining([ // Error with messages.create on stream initialization expect.objectContaining({ - description: 'chat error-stream-init stream-response', + description: 'chat error-stream-init', op: 'gen_ai.chat', status: 'internal_error', // Actual status coming from the instrumentation data: expect.objectContaining({ @@ -567,7 +567,7 @@ describe('Anthropic integration', () => { }), // Error with messages.stream on stream initialization expect.objectContaining({ - description: 'chat error-stream-init stream-response', + description: 'chat error-stream-init', op: 'gen_ai.chat', status: 'internal_error', // Actual status coming from the instrumentation data: expect.objectContaining({ @@ -577,7 +577,7 @@ describe('Anthropic integration', () => { // Error midway with messages.create on streaming - note: The stream is started successfully // so we get a successful span with the content that was streamed before the error expect.objectContaining({ - description: 
'chat error-stream-midway stream-response', + description: 'chat error-stream-midway', op: 'gen_ai.chat', status: 'ok', data: expect.objectContaining({ @@ -589,7 +589,7 @@ describe('Anthropic integration', () => { }), // Error midway with messages.stream - same behavior, we get a span with the streamed data expect.objectContaining({ - description: 'chat error-stream-midway stream-response', + description: 'chat error-stream-midway', op: 'gen_ai.chat', status: 'ok', data: expect.objectContaining({ @@ -731,7 +731,7 @@ describe('Anthropic integration', () => { source: { type: 'base64', media_type: 'image/png', - data: '[Filtered]', + data: '[Blob substitute]', }, }, ], diff --git a/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts b/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts index 89130a7eb425..993984cc6b3d 100644 --- a/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts @@ -272,7 +272,7 @@ describe('Google GenAI integration', () => { [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10, [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22, }), - description: 'generate_content gemini-2.0-flash-001 stream-response', + description: 'generate_content gemini-2.0-flash-001', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'ok', @@ -327,7 +327,7 @@ describe('Google GenAI integration', () => { [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12, [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22, }), - description: 'generate_content gemini-1.5-flash stream-response', + description: 'generate_content gemini-1.5-flash', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'ok', @@ -361,7 +361,7 @@ describe('Google GenAI integration', () => { [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-streaming-id', [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-1.5-pro', }), - description: 'chat gemini-1.5-pro stream-response', + description: 
'chat gemini-1.5-pro', op: 'gen_ai.chat', origin: 'auto.ai.google_genai', status: 'ok', @@ -373,7 +373,7 @@ describe('Google GenAI integration', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai', }), - description: 'generate_content blocked-model stream-response', + description: 'generate_content blocked-model', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'internal_error', @@ -385,7 +385,7 @@ describe('Google GenAI integration', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai', }), - description: 'generate_content error-model stream-response', + description: 'generate_content error-model', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'internal_error', @@ -416,7 +416,7 @@ describe('Google GenAI integration', () => { [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12, [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22, }), - description: 'generate_content gemini-1.5-flash stream-response', + description: 'generate_content gemini-1.5-flash', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'ok', @@ -455,7 +455,7 @@ describe('Google GenAI integration', () => { [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12, [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22, }), - description: 'chat gemini-1.5-pro stream-response', + description: 'chat gemini-1.5-pro', op: 'gen_ai.chat', origin: 'auto.ai.google_genai', status: 'ok', @@ -472,7 +472,7 @@ describe('Google GenAI integration', () => { [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true, }), - description: 'generate_content blocked-model stream-response', + description: 'generate_content blocked-model', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'internal_error', @@ -488,7 +488,7 @@ describe('Google GenAI 
integration', () => { [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7, [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true }), - description: 'generate_content error-model stream-response', + description: 'generate_content error-model', op: 'gen_ai.generate_content', origin: 'auto.ai.google_genai', status: 'internal_error', diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-chain.mjs b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-chain.mjs new file mode 100644 index 000000000000..b8394be972df --- /dev/null +++ b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-chain.mjs @@ -0,0 +1,63 @@ +import { ChatAnthropic } from '@langchain/anthropic'; +import { RunnableLambda, RunnableSequence } from '@langchain/core/runnables'; +import { createLangChainCallbackHandler } from '@sentry/core'; +import * as Sentry from '@sentry/node'; +import express from 'express'; + +function startMockAnthropicServer() { + const app = express(); + app.use(express.json()); + + app.post('/v1/messages', (req, res) => { + res.json({ + id: 'msg_chain_test', + type: 'message', + role: 'assistant', + content: [{ type: 'text', text: 'The weather is sunny.' 
}], + model: req.body.model, + stop_reason: 'end_turn', + stop_sequence: null, + usage: { input_tokens: 10, output_tokens: 5 }, + }); + }); + + return new Promise(resolve => { + const server = app.listen(0, () => { + resolve(server); + }); + }); +} + +async function run() { + const server = await startMockAnthropicServer(); + const baseUrl = `http://localhost:${server.address().port}`; + + await Sentry.startSpan({ op: 'function', name: 'main' }, async () => { + const model = new ChatAnthropic({ + model: 'claude-3-5-sonnet-20241022', + temperature: 0, + maxTokens: 50, + apiKey: 'mock-api-key', + clientOptions: { baseURL: baseUrl }, + }); + + const formatStep = RunnableLambda.from(input => `Tell me about: ${input.topic}`).withConfig({ + runName: 'format_prompt', + }); + + const parseStep = RunnableLambda.from(output => output.content[0].text).withConfig({ + runName: 'parse_output', + }); + + const chain = RunnableSequence.from([formatStep, model, parseStep]); + + const handler = createLangChainCallbackHandler(); + + await chain.invoke({ topic: 'weather' }, { callbacks: [handler] }); + }); + + await Sentry.flush(2000); + server.close(); +} + +run(); diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts b/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts index 1ff46919f399..e1949751bae4 100644 --- a/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts @@ -376,4 +376,41 @@ describe('LangChain integration', () => { }); }, ); + + createEsmAndCjsTests(__dirname, 'scenario-chain.mjs', 'instrument.mjs', (createRunner, test) => { + test('uses runName for chain spans instead of unknown_chain', async () => { + await createRunner() + .ignore('event') + .expect({ + transaction: { + transaction: 'main', + spans: expect.arrayContaining([ + expect.objectContaining({ + description: 'chain format_prompt', + op: 'gen_ai.invoke_agent', + origin: 
'auto.ai.langchain', + data: expect.objectContaining({ + 'langchain.chain.name': 'format_prompt', + }), + }), + expect.objectContaining({ + description: 'chain parse_output', + op: 'gen_ai.invoke_agent', + origin: 'auto.ai.langchain', + data: expect.objectContaining({ + 'langchain.chain.name': 'parse_output', + }), + }), + expect.objectContaining({ + description: 'chat claude-3-5-sonnet-20241022', + op: 'gen_ai.chat', + origin: 'auto.ai.langchain', + }), + ]), + }, + }) + .start() + .completed(); + }); + }); }); diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts index b2189f993b2b..5c61ec320c57 100644 --- a/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts @@ -133,7 +133,7 @@ describe('OpenAI Tool Calls integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -187,7 +187,7 @@ describe('OpenAI Tool Calls integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -254,7 +254,7 @@ describe('OpenAI Tool Calls integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -314,7 +314,7 @@ describe('OpenAI Tool Calls integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8, }, - description: 'chat 
gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-vision.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-vision.mjs new file mode 100644 index 000000000000..00dd173f7b49 --- /dev/null +++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-vision.mjs @@ -0,0 +1,101 @@ +import * as Sentry from '@sentry/node'; +import express from 'express'; +import OpenAI from 'openai'; + +function startMockServer() { + const app = express(); + app.use(express.json({ limit: '10mb' })); + + app.post('/openai/chat/completions', (req, res) => { + res.send({ + id: 'chatcmpl-vision-123', + object: 'chat.completion', + created: 1677652288, + model: req.body.model, + choices: [ + { + index: 0, + message: { + role: 'assistant', + content: 'I see a red square in the image.', + }, + finish_reason: 'stop', + }, + ], + usage: { + prompt_tokens: 50, + completion_tokens: 10, + total_tokens: 60, + }, + }); + }); + + return new Promise(resolve => { + const server = app.listen(0, () => { + resolve(server); + }); + }); +} + +// Small 10x10 red PNG image encoded as base64 +const RED_PNG_BASE64 = + 'iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8z8BQDwADhQGAWjR9awAAAABJRU5ErkJggg=='; + +async function run() { + const server = await startMockServer(); + + await Sentry.startSpan({ op: 'function', name: 'main' }, async () => { + const client = new OpenAI({ + baseURL: `http://localhost:${server.address().port}/openai`, + apiKey: 'mock-api-key', + }); + + // Vision request with inline base64 image + await client.chat.completions.create({ + model: 'gpt-4o', + messages: [ + { + role: 'user', + content: [ + { type: 'text', text: 'What is in this image?' 
}, + { + type: 'image_url', + image_url: { + url: `data:image/png;base64,${RED_PNG_BASE64}`, + }, + }, + ], + }, + ], + }); + + // Vision request with multiple images (one inline, one URL) + await client.chat.completions.create({ + model: 'gpt-4o', + messages: [ + { + role: 'user', + content: [ + { type: 'text', text: 'Compare these images' }, + { + type: 'image_url', + image_url: { + url: `data:image/png;base64,${RED_PNG_BASE64}`, + }, + }, + { + type: 'image_url', + image_url: { + url: 'https://example.com/image.png', + }, + }, + ], + }, + ], + }); + }); + + server.close(); +} + +run(); diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/test.ts index 5753a494fa31..9e7a1722db11 100644 --- a/dev-packages/node-integration-tests/suites/tracing/openai/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/openai/test.ts @@ -125,7 +125,7 @@ describe('OpenAI integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -152,7 +152,7 @@ describe('OpenAI integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -167,7 +167,7 @@ describe('OpenAI integration', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai', }, - description: 'chat error-model stream-response', + description: 'chat error-model', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'internal_error', @@ -283,7 +283,7 @@ describe('OpenAI integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12, }), - description: 'chat 
gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -314,7 +314,7 @@ describe('OpenAI integration', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6, }), - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -331,7 +331,7 @@ describe('OpenAI integration', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai', }, - description: 'chat error-model stream-response', + description: 'chat error-model', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'internal_error', @@ -983,4 +983,64 @@ describe('OpenAI integration', () => { .completed(); }); }); + + createEsmAndCjsTests(__dirname, 'scenario-vision.mjs', 'instrument-with-pii.mjs', (createRunner, test) => { + test('redacts inline base64 image data in vision requests', async () => { + await createRunner() + .ignore('event') + .expect({ + transaction: { + transaction: 'main', + spans: expect.arrayContaining([ + // Single image vision request + expect.objectContaining({ + data: expect.objectContaining({ + [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat', + [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4o', + [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('[Blob substitute]'), + [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1, + }), + description: 'chat gpt-4o', + op: 'gen_ai.chat', + status: 'ok', + }), + // Multiple images vision request + expect.objectContaining({ + data: expect.objectContaining({ + [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat', + [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4o', + [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('[Blob substitute]'), + [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1, + }), + description: 'chat gpt-4o', + op: 'gen_ai.chat', + status: 'ok', + }), + ]), + }, + }) + .start() + .completed(); + }); + + test('preserves 
regular URLs in image_url (does not redact https links)', async () => { + await createRunner() + .ignore('event') + .expect({ + transaction: { + transaction: 'main', + spans: expect.arrayContaining([ + // The second span (multiple images) should still contain the https URL + expect.objectContaining({ + data: expect.objectContaining({ + [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('https://example.com/image.png'), + }), + }), + ]), + }, + }) + .start() + .completed(); + }); + }); }); diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts index 0cb07c6eba66..626e53248e66 100644 --- a/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts +++ b/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts @@ -124,7 +124,7 @@ describe('OpenAI integration (V6)', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -151,7 +151,7 @@ describe('OpenAI integration (V6)', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6, }, - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -166,7 +166,7 @@ describe('OpenAI integration (V6)', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai', }, - description: 'chat error-model stream-response', + description: 'chat error-model', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'internal_error', @@ -278,7 +278,7 @@ describe('OpenAI integration (V6)', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12, }), - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 
'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -309,7 +309,7 @@ describe('OpenAI integration (V6)', () => { [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10, [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6, }), - description: 'chat gpt-4 stream-response', + description: 'chat gpt-4', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'ok', @@ -326,7 +326,7 @@ describe('OpenAI integration (V6)', () => { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai', }, - description: 'chat error-model stream-response', + description: 'chat error-model', op: 'gen_ai.chat', origin: 'auto.ai.openai', status: 'internal_error', diff --git a/package.json b/package.json index 7c12ef97eddb..91bc549e4527 100644 --- a/package.json +++ b/package.json @@ -133,7 +133,7 @@ "nx": "22.5.0", "oxfmt": "^0.32.0", "rimraf": "^5.0.10", - "rollup": "^4.35.0", + "rollup": "^4.59.0", "rollup-plugin-cleanup": "^3.2.1", "rollup-plugin-license": "^3.3.1", "size-limit": "~11.1.6", @@ -149,6 +149,9 @@ "we need to resolve them to the CommonJS versions." ], "resolutions": { + "**/nx/minimatch": "10.2.4", + "**/ng-packagr/postcss-url/minimatch": "3.1.5", + "**/@angular-devkit/build-angular/minimatch": "5.1.9", "gauge/strip-ansi": "6.0.1", "wide-align/string-width": "4.2.3", "cliui/wrap-ansi": "7.0.0", diff --git a/packages/angular/package.json b/packages/angular/package.json index 477821d202b6..77bcb6ddf7c8 100644 --- a/packages/angular/package.json +++ b/packages/angular/package.json @@ -37,7 +37,7 @@ "@angular/router": "^14.3.0", "@types/node": "^14.8.0", "ng-packagr": "^14.2.2", - "rxjs": "7.8.1", + "rxjs": "7.8.2", "typescript": "4.6.4", "zone.js": "^0.12.0" }, diff --git a/packages/browser/AGENTS.md b/packages/browser/AGENTS.md new file mode 100644 index 000000000000..6d8cb97b5009 --- /dev/null +++ b/packages/browser/AGENTS.md @@ -0,0 +1,12 @@ +# Browser SDK + +## Bundle Size + +Bundle size is critical. Be mindful of imports and tree-shaking. 
+
+- [Bundle size documentation](https://develop.sentry.dev/sdk/platform-specifics/javascript-sdks/bundle-size.md)
+- [Browser tracing documentation](https://develop.sentry.dev/sdk/platform-specifics/javascript-sdks/browser-tracing.md)
+
+## CDN Bundles
+
+Use `/add-cdn-bundle` skill to create new CDN bundles.
diff --git a/packages/cloudflare/src/client.ts b/packages/cloudflare/src/client.ts
index 3332f71dab90..957cb7274d8e 100644
--- a/packages/cloudflare/src/client.ts
+++ b/packages/cloudflare/src/client.ts
@@ -16,6 +16,9 @@ export class CloudflareClient extends ServerRuntimeClient {
   private _spanCompletionPromise: Promise<void> | null = null;
   private _resolveSpanCompletion: (() => void) | null = null;
 
+  private _unsubscribeSpanStart: (() => void) | null = null;
+  private _unsubscribeSpanEnd: (() => void) | null = null;
+
   /**
    * Creates a new Cloudflare SDK instance.
    * @param options Configuration options for this SDK.
@@ -37,7 +40,7 @@
     this._flushLock = flushLock;
 
     // Track span lifecycle to know when to flush
-    this.on('spanStart', span => {
+    this._unsubscribeSpanStart = this.on('spanStart', span => {
       const spanId = span.spanContext().spanId;
       DEBUG_BUILD && debug.log('[CloudflareClient] Span started:', spanId);
       this._pendingSpans.add(spanId);
@@ -49,7 +52,7 @@
       }
     });
 
-    this.on('spanEnd', span => {
+    this._unsubscribeSpanEnd = this.on('spanEnd', span => {
       const spanId = span.spanContext().spanId;
       DEBUG_BUILD && debug.log('[CloudflareClient] Span ended:', spanId);
       this._pendingSpans.delete(spanId);
@@ -99,6 +102,33 @@
     return super.flush(timeout);
   }
 
+  /**
+   * Disposes of the client and releases all resources.
+   *
+   * This method clears all Cloudflare-specific state in addition to the base client cleanup.
+   * It unsubscribes from span lifecycle events and clears pending span tracking.
+   *
+   * Call this method after flushing to allow the client to be garbage collected.
+   * After calling dispose(), the client should not be used anymore.
+   */
+  public override dispose(): void {
+    DEBUG_BUILD && debug.log('[CloudflareClient] Disposing client...');
+
+    super.dispose();
+
+    if (this._unsubscribeSpanStart) {
+      this._unsubscribeSpanStart();
+      this._unsubscribeSpanStart = null;
+    }
+    if (this._unsubscribeSpanEnd) {
+      this._unsubscribeSpanEnd();
+      this._unsubscribeSpanEnd = null;
+    }
+
+    this._resetSpanCompletionPromise();
+    (this as unknown as { _flushLock: ReturnType<typeof makeFlushLock> | void })._flushLock = undefined;
+  }
+
   /**
    * Resets the span completion promise and resolve function.
    */
diff --git a/packages/cloudflare/src/flush.ts b/packages/cloudflare/src/flush.ts
index f38c805d0f8b..ebadd6393298 100644
--- a/packages/cloudflare/src/flush.ts
+++ b/packages/cloudflare/src/flush.ts
@@ -1,4 +1,6 @@
 import type { ExecutionContext } from '@cloudflare/workers-types';
+import type { Client } from '@sentry/core';
+import { flush } from '@sentry/core';
 
 type FlushLock = {
   readonly ready: Promise<void>;
@@ -36,3 +38,16 @@
     },
   });
 }
+
+/**
+ * Flushes the client and then disposes of it to allow garbage collection.
+ * This should be called at the end of each request to prevent memory leaks.
+ *
+ * @param client - The CloudflareClient instance to flush and dispose
+ * @param timeout - Timeout in milliseconds for the flush operation
+ * @returns A promise that resolves when flush and dispose are complete
+ */
+export async function flushAndDispose(client: Client | undefined, timeout = 2000): Promise<void> {
+  await flush(timeout);
+  client?.dispose();
+}
diff --git a/packages/cloudflare/src/handler.ts b/packages/cloudflare/src/handler.ts
index 0c9196740448..29b32225e433 100644
--- a/packages/cloudflare/src/handler.ts
+++ b/packages/cloudflare/src/handler.ts
@@ -1,6 +1,5 @@
 import {
   captureException,
-  flush,
   SEMANTIC_ATTRIBUTE_SENTRY_OP,
   SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
   SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
@@ -9,6 +8,7 @@
 } from '@sentry/core';
 import { setAsyncLocalStorageAsyncContextStrategy } from './async';
 import type { CloudflareOptions } from './client';
+import { flushAndDispose } from './flush';
 import { isInstrumented, markAsInstrumented } from './instrument';
 import { getHonoIntegration } from './integrations/hono';
 import { getFinalOptions } from './options';
@@ -113,7 +113,7 @@
           captureException(e, { mechanism: { handled: false, type: 'auto.faas.cloudflare.scheduled' } });
           throw e;
         } finally {
-          waitUntil(flush(2000));
+          waitUntil(flushAndDispose(client));
         }
       },
     );
@@ -157,7 +157,7 @@
           captureException(e, { mechanism: { handled: false, type: 'auto.faas.cloudflare.email' } });
           throw e;
         } finally {
-          waitUntil(flush(2000));
+          waitUntil(flushAndDispose(client));
         }
       },
     );
@@ -209,7 +209,7 @@
           captureException(e, { mechanism: { handled: false, type: 'auto.faas.cloudflare.queue' } });
           throw e;
         } finally {
-          waitUntil(flush(2000));
+          waitUntil(flushAndDispose(client));
         }
       },
     );
@@ -243,7 +243,7 @@
         captureException(e, { mechanism: { handled: false, type: 'auto.faas.cloudflare.tail' } });
         throw e;
       } finally {
-        waitUntil(flush(2000));
+
waitUntil(flushAndDispose(client)); } }); }, diff --git a/packages/cloudflare/src/index.ts b/packages/cloudflare/src/index.ts index 33572c81714d..62263627aa24 100644 --- a/packages/cloudflare/src/index.ts +++ b/packages/cloudflare/src/index.ts @@ -85,6 +85,7 @@ export { moduleMetadataIntegration, supabaseIntegration, instrumentSupabaseClient, + instrumentPostgresJsSql, zodErrorsIntegration, consoleIntegration, SEMANTIC_ATTRIBUTE_SENTRY_OP, diff --git a/packages/cloudflare/src/request.ts b/packages/cloudflare/src/request.ts index c404e57d01d8..5ad215aab428 100644 --- a/packages/cloudflare/src/request.ts +++ b/packages/cloudflare/src/request.ts @@ -2,7 +2,6 @@ import type { ExecutionContext, IncomingRequestCfProperties } from '@cloudflare/ import { captureException, continueTrace, - flush, getClient, getHttpSpanDetailsFromUrlObject, httpHeadersToSpanAttributes, @@ -14,6 +13,7 @@ import { withIsolationScope, } from '@sentry/core'; import type { CloudflareOptions } from './client'; +import { flushAndDispose } from './flush'; import { addCloudResourceContext, addCultureContext, addRequest } from './scope-utils'; import { init } from './sdk'; import { classifyResponseStreaming } from './utils/streaming'; @@ -95,7 +95,7 @@ export function wrapRequestHandler( } throw e; } finally { - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(client)); } } @@ -122,7 +122,7 @@ export function wrapRequestHandler( if (captureErrors) { captureException(e, { mechanism: { handled: false, type: 'auto.http.cloudflare' } }); } - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(client)); throw e; } @@ -149,7 +149,7 @@ export function wrapRequestHandler( } finally { reader.releaseLock(); span.end(); - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(client)); } })(); @@ -165,14 +165,22 @@ export function wrapRequestHandler( } catch (e) { // tee() failed (e.g stream already locked) - fall back to non-streaming handling span.end(); - waitUntil?.(flush(2000)); + 
waitUntil?.(flushAndDispose(client)); return res; } } // Non-streaming response - end span immediately and return original span.end(); - waitUntil?.(flush(2000)); + + // Don't dispose for protocol upgrades (101 Switching Protocols) - the connection stays alive. + // This includes WebSocket upgrades where webSocketMessage/webSocketClose handlers + // will still be called and may need the client to capture errors. + if (res.status === 101) { + waitUntil?.(client?.flush(2000)); + } else { + waitUntil?.(flushAndDispose(client)); + } return res; }); }, diff --git a/packages/cloudflare/src/workflows.ts b/packages/cloudflare/src/workflows.ts index 30680c0f2131..3c40c86ff867 100644 --- a/packages/cloudflare/src/workflows.ts +++ b/packages/cloudflare/src/workflows.ts @@ -20,6 +20,7 @@ import type { } from 'cloudflare:workers'; import { setAsyncLocalStorageAsyncContextStrategy } from './async'; import type { CloudflareOptions } from './client'; +import { flushAndDispose } from './flush'; import { addCloudResourceContext } from './scope-utils'; import { init } from './sdk'; import { instrumentContext } from './utils/instrumentContext'; @@ -186,7 +187,7 @@ export function instrumentWorkflowWithSentry< new WrappedWorkflowStep(event.instanceId, context, options, step), ); } finally { - context.waitUntil(flush(2000)); + context.waitUntil(flushAndDispose(client)); } }); }); diff --git a/packages/cloudflare/src/wrapMethodWithSentry.ts b/packages/cloudflare/src/wrapMethodWithSentry.ts index f0fe3c83f5e0..3c719e7da4b1 100644 --- a/packages/cloudflare/src/wrapMethodWithSentry.ts +++ b/packages/cloudflare/src/wrapMethodWithSentry.ts @@ -1,7 +1,6 @@ import type { DurableObjectStorage } from '@cloudflare/workers-types'; import { captureException, - flush, getClient, isThenable, type Scope, @@ -12,6 +11,7 @@ import { withScope, } from '@sentry/core'; import type { CloudflareOptions } from './client'; +import { flushAndDispose } from './flush'; import { isInstrumented, markAsInstrumented } 
from './instrument'; import { init } from './sdk'; @@ -74,6 +74,8 @@ export function wrapMethodWithSentry( scope.setClient(client); } + const clientToDispose = currentClient || scope.getClient(); + if (!wrapperOptions.spanName) { try { if (callback) { @@ -84,7 +86,7 @@ export function wrapMethodWithSentry( if (isThenable(result)) { return result.then( (res: unknown) => { - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); return res; }, (e: unknown) => { @@ -94,12 +96,12 @@ export function wrapMethodWithSentry( handled: false, }, }); - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); throw e; }, ); } else { - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); return result; } } catch (e) { @@ -109,7 +111,7 @@ export function wrapMethodWithSentry( handled: false, }, }); - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); throw e; } } @@ -128,7 +130,7 @@ export function wrapMethodWithSentry( if (isThenable(result)) { return result.then( (res: unknown) => { - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); return res; }, (e: unknown) => { @@ -138,12 +140,12 @@ export function wrapMethodWithSentry( handled: false, }, }); - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); throw e; }, ); } else { - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); return result; } } catch (e) { @@ -153,7 +155,7 @@ export function wrapMethodWithSentry( handled: false, }, }); - waitUntil?.(flush(2000)); + waitUntil?.(flushAndDispose(clientToDispose)); throw e; } }); diff --git a/packages/cloudflare/test/client.test.ts b/packages/cloudflare/test/client.test.ts new file mode 100644 index 000000000000..ecf85c0f4c2c --- /dev/null +++ b/packages/cloudflare/test/client.test.ts @@ -0,0 +1,312 @@ +import { beforeAll, beforeEach, describe, expect, it, vi } from 'vitest'; +import { setAsyncLocalStorageAsyncContextStrategy } 
from '../src/async';
+import { CloudflareClient, type CloudflareClientOptions } from '../src/client';
+import { makeFlushLock } from '../src/flush';
+
+const MOCK_CLIENT_OPTIONS: CloudflareClientOptions = {
+  dsn: 'https://public@dsn.ingest.sentry.io/1337',
+  stackParser: () => [],
+  integrations: [],
+  transport: () => ({
+    send: vi.fn().mockResolvedValue({}),
+    flush: vi.fn().mockResolvedValue(true),
+  }),
+};
+
+describe('CloudflareClient', () => {
+  beforeAll(() => {
+    setAsyncLocalStorageAsyncContextStrategy();
+  });
+
+  beforeEach(() => {
+    vi.clearAllMocks();
+  });
+
+  describe('dispose()', () => {
+    it('unsubscribes from span lifecycle events', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      // Access the private unsubscribe functions to verify they exist
+      const privateClient = client as unknown as {
+        _unsubscribeSpanStart: (() => void) | null;
+        _unsubscribeSpanEnd: (() => void) | null;
+      };
+
+      expect(privateClient._unsubscribeSpanStart).not.toBeNull();
+      expect(privateClient._unsubscribeSpanEnd).not.toBeNull();
+
+      client.dispose();
+
+      expect(privateClient._unsubscribeSpanStart).toBeNull();
+      expect(privateClient._unsubscribeSpanEnd).toBeNull();
+    });
+
+    it('clears pending spans tracking', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _pendingSpans: Set<string>;
+        _spanCompletionPromise: Promise<void> | null;
+        _resolveSpanCompletion: (() => void) | null;
+      };
+
+      // Add some pending spans
+      privateClient._pendingSpans.add('span1');
+      privateClient._pendingSpans.add('span2');
+      privateClient._spanCompletionPromise = new Promise<void>(() => {});
+      privateClient._resolveSpanCompletion = () => {};
+
+      expect(privateClient._pendingSpans.size).toBe(2);
+
+      client.dispose();
+
+      expect(privateClient._pendingSpans.size).toBe(0);
+      expect(privateClient._spanCompletionPromise).toBeNull();
+      expect(privateClient._resolveSpanCompletion).toBeNull();
+    });
+
+    it('clears flushLock reference', () => {
+      const mockContext = {
+        waitUntil: vi.fn(),
+        passThroughOnException: vi.fn(),
+      };
+      const flushLock = makeFlushLock(mockContext as any);
+
+      const client = new CloudflareClient({
+        ...MOCK_CLIENT_OPTIONS,
+        flushLock,
+      });
+
+      const privateClient = client as unknown as {
+        _flushLock: ReturnType<typeof makeFlushLock> | void;
+      };
+
+      expect(privateClient._flushLock).toBeDefined();
+
+      client.dispose();
+
+      expect(privateClient._flushLock).toBeUndefined();
+    });
+
+    it('clears hooks', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      // Add a hook
+      const hookCallback = vi.fn();
+      client.on('beforeEnvelope', hookCallback);
+
+      const privateClient = client as unknown as {
+        _hooks: Record<string, Set<unknown> | undefined>;
+      };
+
+      // Verify hook was registered - check that there are hooks with actual Sets
+      const hooksWithSets = Object.values(privateClient._hooks).filter(v => v instanceof Set);
+      expect(hooksWithSets.length).toBeGreaterThan(0);
+
+      client.dispose();
+
+      // All hooks should be cleared (set to undefined)
+      const hooksWithSetsAfter = Object.values(privateClient._hooks).filter(v => v instanceof Set);
+      expect(hooksWithSetsAfter.length).toBe(0);
+    });
+
+    it('clears event processors', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      // Add an event processor
+      client.addEventProcessor(event => event);
+
+      const privateClient = client as unknown as {
+        _eventProcessors: unknown[];
+      };
+
+      // SDK adds some default processors, so length should be >= 1
+      const initialLength = privateClient._eventProcessors.length;
+      expect(initialLength).toBeGreaterThan(0);
+
+      client.dispose();
+
+      expect(privateClient._eventProcessors.length).toBe(0);
+    });
+
+    it('clears integrations', () => {
+      const mockIntegration = {
+        name: 'MockIntegration',
+        setupOnce: vi.fn(),
+      };
+
+      const client = new CloudflareClient({
+        ...MOCK_CLIENT_OPTIONS,
+        integrations: [mockIntegration],
+      });
+
+      // Need to call init() to setup integrations
+      client.init();
+
+      const privateClient = client as unknown as {
+        _integrations: Record<string, unknown>;
+      };
+
+      // Integration should be registered
+      expect(privateClient._integrations['MockIntegration']).toBeDefined();
+      expect(privateClient._integrations['MockIntegration']).not.toBeUndefined();
+
+      client.dispose();
+
+      // Integration reference should be cleared (set to undefined)
+      expect(privateClient._integrations['MockIntegration']).toBeUndefined();
+    });
+
+    it('clears transport reference', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _transport?: unknown;
+      };
+
+      expect(privateClient._transport).toBeDefined();
+
+      client.dispose();
+
+      expect(privateClient._transport).toBeUndefined();
+    });
+
+    it('clears outcomes tracking', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _outcomes: Record<string, number | undefined>;
+      };
+
+      // Add some outcomes
+      privateClient._outcomes['reason:error:outcome1'] = 5;
+      privateClient._outcomes['reason:error:outcome2'] = 10;
+
+      // Verify we have actual values
+      const validOutcomes = Object.values(privateClient._outcomes).filter(v => v !== undefined);
+      expect(validOutcomes.length).toBe(2);
+
+      client.dispose();
+
+      // All outcomes should be set to undefined
+      const validOutcomesAfter = Object.values(privateClient._outcomes).filter(v => v !== undefined);
+      expect(validOutcomesAfter.length).toBe(0);
+    });
+
+    it('can be called multiple times safely', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      // Should not throw when called multiple times
+      expect(() => {
+        client.dispose();
+        client.dispose();
+        client.dispose();
+      }).not.toThrow();
+    });
+
+    it('does not break event emission after spanStart unsubscribe', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      // Dispose which unsubscribes from span events
+      client.dispose();
+
+      // Should not throw when emitting span events after dispose
+      expect(() => {
+        client.emit('spanStart', {} as any);
+        client.emit('spanEnd', {} as any);
+      }).not.toThrow();
+    });
+  });
+
+  describe('span lifecycle tracking', () => {
+    it('tracks pending spans when spanStart is emitted', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _pendingSpans: Set<string>;
+        _spanCompletionPromise: Promise<void> | null;
+      };
+
+      expect(privateClient._pendingSpans.size).toBe(0);
+      expect(privateClient._spanCompletionPromise).toBeNull();
+
+      // Emit spanStart
+      const mockSpan = {
+        spanContext: () => ({ spanId: 'test-span-id' }),
+      };
+      client.emit('spanStart', mockSpan as any);
+
+      expect(privateClient._pendingSpans.has('test-span-id')).toBe(true);
+      expect(privateClient._spanCompletionPromise).not.toBeNull();
+    });
+
+    it('removes pending span when spanEnd is emitted', async () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _pendingSpans: Set<string>;
+        _spanCompletionPromise: Promise<void> | null;
+      };
+
+      const mockSpan = {
+        spanContext: () => ({ spanId: 'test-span-id' }),
+      };
+
+      // Start span
+      client.emit('spanStart', mockSpan as any);
+      expect(privateClient._pendingSpans.has('test-span-id')).toBe(true);
+
+      // End span
+      client.emit('spanEnd', mockSpan as any);
+      expect(privateClient._pendingSpans.has('test-span-id')).toBe(false);
+    });
+
+    it('resolves completion promise when all spans end', async () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      const privateClient = client as unknown as {
+        _pendingSpans: Set<string>;
+        _spanCompletionPromise: Promise<void> | null;
+      };
+
+      const mockSpan1 = { spanContext: () => ({ spanId: 'span-1' }) };
+      const mockSpan2 = { spanContext: () => ({ spanId: 'span-2' }) };
+
+      // Start both spans
+      client.emit('spanStart', mockSpan1 as any);
+      client.emit('spanStart', mockSpan2 as any);
+
+      const completionPromise = privateClient._spanCompletionPromise;
+      expect(completionPromise).not.toBeNull();
+
+      // End first span - promise should still exist
+      client.emit('spanEnd', mockSpan1 as any);
+      expect(privateClient._pendingSpans.size).toBe(1);
+
+      // End second span - promise should be resolved and reset
+      client.emit('spanEnd', mockSpan2 as any);
+      expect(privateClient._pendingSpans.size).toBe(0);
+
+      // The original promise should resolve
+      await expect(completionPromise).resolves.toBeUndefined();
+    });
+
+    it('does not track spans after dispose', () => {
+      const client = new CloudflareClient(MOCK_CLIENT_OPTIONS);
+
+      client.dispose();
+
+      const privateClient = client as unknown as {
+        _pendingSpans: Set<string>;
+      };
+
+      const mockSpan = {
+        spanContext: () => ({ spanId: 'test-span-id' }),
+      };
+
+      // Emit spanStart after dispose - should not be tracked
+      client.emit('spanStart', mockSpan as any);
+      expect(privateClient._pendingSpans.has('test-span-id')).toBe(false);
+    });
+  });
+});
diff --git a/packages/cloudflare/test/request.test.ts b/packages/cloudflare/test/request.test.ts
index 94b5d89e4ae0..5160d8976e9b 100644
--- a/packages/cloudflare/test/request.test.ts
+++ b/packages/cloudflare/test/request.test.ts
@@ -377,3 +377,192 @@ function createMockExecutionContext(): ExecutionContext {
     passThroughOnException: vi.fn(),
   };
 }
+
+describe('flushAndDispose', () => {
+  test('dispose is called after flush completes', async () => {
+    const context = createMockExecutionContext();
+    const waits: Promise<unknown>[] = [];
+    const waitUntil = vi.fn(promise => waits.push(promise));
+    (context as any).waitUntil = waitUntil;
+
+    const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose');
+    const flushSpy = vi.spyOn(SentryCore.Client.prototype, 'flush').mockResolvedValue(true);
+
+    await wrapRequestHandler({ options: MOCK_OPTIONS, request: new Request('https://example.com'), context }, () => {
+      const response = new Response('test');
+      response.headers.set('content-length', '4');
+      return response;
+    });
+
+    // Wait for all waitUntil
promises to resolve + await Promise.all(waits); + + expect(flushSpy).toHaveBeenCalled(); + expect(disposeSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); + + test('dispose is called after handler throws error', async () => { + const context = createMockExecutionContext(); + const waits: Promise[] = []; + const waitUntil = vi.fn(promise => waits.push(promise)); + (context as any).waitUntil = waitUntil; + + const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose'); + const flushSpy = vi.spyOn(SentryCore.Client.prototype, 'flush').mockResolvedValue(true); + + try { + await wrapRequestHandler({ options: MOCK_OPTIONS, request: new Request('https://example.com'), context }, () => { + throw new Error('test error'); + }); + } catch { + // Expected to throw + } + + // Wait for all waitUntil promises to resolve + await Promise.all(waits); + + expect(disposeSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); + + test('dispose is called for OPTIONS requests', async () => { + const context = createMockExecutionContext(); + const waits: Promise[] = []; + const waitUntil = vi.fn(promise => waits.push(promise)); + (context as any).waitUntil = waitUntil; + + const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose'); + const flushSpy = vi.spyOn(SentryCore.Client.prototype, 'flush').mockResolvedValue(true); + + await wrapRequestHandler( + { + options: MOCK_OPTIONS, + request: new Request('https://example.com', { method: 'OPTIONS' }), + context, + }, + () => new Response('', { status: 200 }), + ); + + // Wait for all waitUntil promises to resolve + await Promise.all(waits); + + expect(disposeSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); + + test('dispose is called for HEAD requests', async () => { + const context = createMockExecutionContext(); + const waits: Promise[] = []; + const waitUntil = vi.fn(promise => waits.push(promise)); + (context as 
any).waitUntil = waitUntil; + + const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose'); + const flushSpy = vi.spyOn(SentryCore.Client.prototype, 'flush').mockResolvedValue(true); + + await wrapRequestHandler( + { + options: MOCK_OPTIONS, + request: new Request('https://example.com', { method: 'HEAD' }), + context, + }, + () => new Response('', { status: 200 }), + ); + + // Wait for all waitUntil promises to resolve + await Promise.all(waits); + + expect(disposeSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); + + test('dispose is called after streaming response completes', async () => { + const context = createMockExecutionContext(); + const waits: Promise<unknown>[] = []; + const waitUntil = vi.fn(promise => waits.push(promise)); + (context as any).waitUntil = waitUntil; + + const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose'); + const flushSpy = vi.spyOn(SentryCore.Client.prototype, 'flush').mockResolvedValue(true); + + const stream = new ReadableStream({ + start(controller) { + controller.enqueue(new TextEncoder().encode('chunk1')); + controller.enqueue(new TextEncoder().encode('chunk2')); + controller.close(); + }, + }); + + const result = await wrapRequestHandler( + { options: MOCK_OPTIONS, request: new Request('https://example.com'), context }, + () => new Response(stream), + ); + + // Consume the response to trigger stream completion + await result.text(); + + // Wait for all waitUntil promises to resolve + await Promise.all(waits); + + expect(disposeSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); + + test('dispose is NOT called for protocol upgrade responses (status 101)', async () => { + const context = createMockExecutionContext(); + const waits: Promise<unknown>[] = []; + const waitUntil = vi.fn(promise => waits.push(promise)); + (context as any).waitUntil = waitUntil; + + const disposeSpy = vi.spyOn(CloudflareClient.prototype, 'dispose'); + const flushSpy =
vi.spyOn(CloudflareClient.prototype, 'flush').mockResolvedValue(true); + + // Create a mock protocol upgrade response (Node.js Response doesn't allow status 101) + // In Cloudflare Workers, this is a valid response for WebSocket upgrades and other protocol switches + const mockWebSocketResponse = { + status: 101, + statusText: 'Switching Protocols', + headers: new Headers(), + body: null, + ok: false, + redirected: false, + type: 'basic' as ResponseType, + url: '', + clone: () => mockWebSocketResponse, + arrayBuffer: () => Promise.resolve(new ArrayBuffer(0)), + blob: () => Promise.resolve(new Blob()), + formData: () => Promise.resolve(new FormData()), + json: () => Promise.resolve({}), + text: () => Promise.resolve(''), + bodyUsed: false, + bytes: () => Promise.resolve(new Uint8Array()), + } as Response; + + await wrapRequestHandler( + { options: MOCK_OPTIONS, request: new Request('https://example.com'), context }, + () => mockWebSocketResponse, + ); + + // Wait for all waitUntil promises to resolve + await Promise.all(waits); + + // dispose should NOT be called for protocol upgrades (101) since the connection stays alive + // and subsequent handlers (e.g., webSocketMessage/webSocketClose) may still need the client + expect(disposeSpy).not.toHaveBeenCalled(); + // But flush should still be called + expect(flushSpy).toHaveBeenCalled(); + + flushSpy.mockRestore(); + disposeSpy.mockRestore(); + }); +}); diff --git a/packages/cloudflare/test/wrapMethodWithSentry.test.ts b/packages/cloudflare/test/wrapMethodWithSentry.test.ts index 3acafaba9b33..a7e73a83cd39 100644 --- a/packages/cloudflare/test/wrapMethodWithSentry.test.ts +++ b/packages/cloudflare/test/wrapMethodWithSentry.test.ts @@ -8,6 +8,7 @@ vi.mock('../src/sdk', () => ({ init: vi.fn(() => ({ getOptions: () => ({}), on: vi.fn(), + dispose: vi.fn(), })), })); @@ -237,7 +238,7 @@ describe('wrapMethodWithSentry', () => { }); describe('waitUntil flush', () => { - it('calls waitUntil with flush when context has 
waitUntil', async () => { + it('calls waitUntil with flushAndDispose when context has waitUntil', async () => { const waitUntil = vi.fn(); const context = { waitUntil, @@ -254,6 +255,7 @@ describe('wrapMethodWithSentry', () => { await wrapped(); expect(waitUntil).toHaveBeenCalled(); + // flushAndDispose calls flush internally expect(sentryCore.flush).toHaveBeenCalledWith(2000); }); diff --git a/packages/core/src/client.ts b/packages/core/src/client.ts index 2a06ee28b535..766e66a72a99 100644 --- a/packages/core/src/client.ts +++ b/packages/core/src/client.ts @@ -203,12 +203,12 @@ export abstract class Client { protected _eventProcessors: EventProcessor[]; /** Holds flushable */ - private _outcomes: { [key: string]: number }; + protected _outcomes: { [key: string]: number }; // eslint-disable-next-line @typescript-eslint/ban-types - private _hooks: Record<string, Set<Function>>; + protected _hooks: Record<string, Set<Function>>; - private _promiseBuffer: PromiseBuffer<unknown>; + protected _promiseBuffer: PromiseBuffer<unknown>; /** * Initializes this client instance. @@ -1098,6 +1098,16 @@ export abstract class Client { return {}; } + /** + * Disposes of the client and releases all resources. + * + * Subclasses should override this method to clean up their own resources. + * After calling dispose(), the client should not be used anymore. + */ + public dispose(): void { + // Base class has no cleanup logic - subclasses implement their own + } + /* eslint-enable @typescript-eslint/unified-signatures */ /** Setup integrations for this client.
*/ diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 2fd6a4a9c8d5..5ff7aa6b7a59 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -120,6 +120,7 @@ export { dedupeIntegration } from './integrations/dedupe'; export { extraErrorDataIntegration } from './integrations/extraerrordata'; export { rewriteFramesIntegration } from './integrations/rewriteframes'; export { supabaseIntegration, instrumentSupabaseClient } from './integrations/supabase'; +export { instrumentPostgresJsSql } from './integrations/postgresjs'; export { zodErrorsIntegration } from './integrations/zoderrors'; export { thirdPartyErrorFilterIntegration } from './integrations/third-party-errors-filter'; export { consoleIntegration } from './integrations/console'; diff --git a/packages/core/src/integrations/consola.ts b/packages/core/src/integrations/consola.ts index 4781b253b161..26ca7b71ab4e 100644 --- a/packages/core/src/integrations/consola.ts +++ b/packages/core/src/integrations/consola.ts @@ -3,6 +3,7 @@ import { getClient } from '../currentScopes'; import { _INTERNAL_captureLog } from '../logs/internal'; import { formatConsoleArgs } from '../logs/utils'; import type { LogSeverityLevel } from '../types-hoist/log'; +import { normalize } from '../utils/normalize'; /** * Options for the Sentry Consola reporter. @@ -61,8 +62,9 @@ export interface ConsolaReporter { */ export interface ConsolaLogObject { /** - * Allows additional custom properties to be set on the log object. - * These properties will be captured as log attributes with a 'consola.' prefix. + * Allows additional custom properties to be set on the log object. These properties will be captured as log attributes. + * + * Additional properties are set when passing a single object with a `message` (`consola.[type]({ message: '', ... 
})`) or if the reporter is called directly * * @example * ```ts @@ -73,7 +75,7 @@ export interface ConsolaLogObject { * userId: 123, * sessionId: 'abc-123' * }); - * // Will create attributes: consola.userId and consola.sessionId + * // Will create attributes: `userId` and `sessionId` * ``` */ [key: string]: unknown; @@ -123,13 +125,20 @@ export interface ConsolaLogObject { /** * The raw arguments passed to the log method. * - * When `message` is not provided, these args are typically formatted into the final message. + * These args are typically formatted into the final `message`. In Consola reporters, `message` is not provided. * * @example * ```ts * consola.info('Hello', 'world', { user: 'john' }); * // args = ['Hello', 'world', { user: 'john' }] * ``` + * + * @example + * ```ts + * // `message` is a reserved property in Consola + * consola.log({ message: 'Hello' }); + * // args = ['Hello'] + * ``` */ args?: unknown[]; @@ -145,6 +154,10 @@ export interface ConsolaLogObject { * * When provided, this is the final formatted message. When not provided, * the message should be constructed from the `args` array. + * + * Note: In reporters, `message` is typically undefined. It is primarily for + * `consola.[type]({ message: 'xxx' })` usage and is normalized into `args` before + * reporters receive the log object. 
See: https://github.com/unjs/consola/issues/406#issuecomment-3684792551 */ message?: string; } @@ -187,8 +200,9 @@ export function createConsolaReporter(options: ConsolaReporterOptions = {}): Con return { log(logObj: ConsolaLogObject) { + // We need to exclude certain known properties from being added as additional attributes // eslint-disable-next-line @typescript-eslint/no-unused-vars - const { type, level, message: consolaMessage, args, tag, date: _date, ...attributes } = logObj; + const { type, level, message: consolaMessage, args, tag, date: _date, ...rest } = logObj; // Get client - use provided client or current client const client = providedClient || getClient(); @@ -216,7 +230,13 @@ export function createConsolaReporter(options: ConsolaReporterOptions = {}): Con } const message = messageParts.join(' '); + const attributes: Record<string, unknown> = {}; + // Build attributes + for (const [key, value] of Object.entries(rest)) { + attributes[key] = normalize(value, normalizeDepth, normalizeMaxBreadth); + } + attributes['sentry.origin'] = 'auto.log.consola'; if (tag) { diff --git a/packages/core/src/integrations/postgresjs.ts b/packages/core/src/integrations/postgresjs.ts new file mode 100644 index 000000000000..b01ac13b1708 --- /dev/null +++ b/packages/core/src/integrations/postgresjs.ts @@ -0,0 +1,448 @@ +// Portable instrumentation for https://github.com/porsager/postgres +// This can be used in any environment (Node.js, Cloudflare Workers, etc.) +// without depending on OpenTelemetry module hooking.
+ +import { DEBUG_BUILD } from '../debug-build'; +import { SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '../semanticAttributes'; +import { SPAN_STATUS_ERROR, startSpanManual } from '../tracing'; +import type { Span } from '../types-hoist/span'; +import { debug } from '../utils/debug-logger'; +import { getActiveSpan } from '../utils/spanUtils'; + +const SQL_OPERATION_REGEX = /^(SELECT|INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)/i; + +type PostgresConnectionContext = { + ATTR_DB_NAMESPACE?: string; + ATTR_SERVER_ADDRESS?: string; + ATTR_SERVER_PORT?: string; +}; + +interface PostgresJsSqlInstrumentationOptions { + /** + * Whether to require a parent span for the instrumentation. + * If set to true, the instrumentation will only create spans if there is a parent span + * available in the current scope. + * @default true + */ + requireParentSpan?: boolean; + /** + * Hook to modify the span before it is started. + * This can be used to set additional attributes or modify the span in any way. + */ + requestHook?: (span: Span, sanitizedSqlQuery: string, postgresConnectionContext?: PostgresConnectionContext) => void; +} + +const CONNECTION_CONTEXT_SYMBOL = Symbol('sentryPostgresConnectionContext'); + +// Use the same Symbol.for() markers as the Node.js OTel instrumentation +// so that both approaches recognize each other and prevent double-wrapping. +const INSTRUMENTED_MARKER = Symbol.for('sentry.instrumented.postgresjs'); +// Marker to track if a query was created from an instrumented sql instance. +// This prevents double-spanning when both the wrapper and the Node.js Query.prototype +// fallback patch are active simultaneously. +const QUERY_FROM_INSTRUMENTED_SQL = Symbol.for('sentry.query.from.instrumented.sql'); + +/** + * Instruments a postgres.js `sql` instance with Sentry tracing. + * + * This is a portable instrumentation function that works in any environment + * (Node.js, Cloudflare Workers, etc.) without depending on OpenTelemetry. 
+ * + * @example + * ```javascript + * import postgres from 'postgres'; + * import * as Sentry from '@sentry/cloudflare'; // or '@sentry/deno' + * + * const sql = Sentry.instrumentPostgresJsSql( + * postgres({ host: 'localhost', database: 'mydb' }) + * ); + * + * // All queries now create Sentry spans + * await sql`SELECT * FROM users WHERE id = ${userId}`; + * ``` + */ +export function instrumentPostgresJsSql<T>(sql: T, options?: PostgresJsSqlInstrumentationOptions): T { + if (!sql || typeof sql !== 'function') { + DEBUG_BUILD && debug.warn('instrumentPostgresJsSql: provided value is not a valid postgres.js sql instance'); + return sql; + } + + return _instrumentSqlInstance(sql, { requireParentSpan: true, ...options }) as T; +} + +/** + * Instruments a sql instance by wrapping its query execution methods. + */ +function _instrumentSqlInstance( + sql: unknown, + options: PostgresJsSqlInstrumentationOptions, + parentConnectionContext?: PostgresConnectionContext, +): unknown { + // Check if already instrumented to prevent double-wrapping + // Using Symbol.for() ensures the marker survives proxying + if ((sql as Record<symbol, unknown>)[INSTRUMENTED_MARKER]) { + return sql; + } + + // Wrap the sql function to intercept query creation + const proxiedSql: unknown = new Proxy(sql as (...args: unknown[]) => unknown, { + apply(target, thisArg, argumentsList: unknown[]) { + const query = Reflect.apply(target, thisArg, argumentsList); + + if (query && typeof query === 'object' && 'handle' in query) { + _wrapSingleQueryHandle(query as { handle: unknown; strings?: string[] }, proxiedSql, options); + } + + return query; + }, + get(target, prop) { + const original = (target as unknown as Record<string | symbol, unknown>)[prop]; + + if (typeof prop !== 'string' || typeof original !== 'function') { + return original; + } + + // Wrap methods that return PendingQuery objects (unsafe, file) + if (prop === 'unsafe' || prop === 'file') { + return _wrapQueryMethod(original as (...args: unknown[]) => unknown, target, proxiedSql,
options); + } + + // Wrap begin and reserve (not savepoint to avoid duplicate spans) + if (prop === 'begin' || prop === 'reserve') { + return _wrapCallbackMethod(original as (...args: unknown[]) => unknown, target, proxiedSql, options); + } + + return original; + }, + }); + + // Use provided parent context if available, otherwise extract from sql.options + if (parentConnectionContext) { + (proxiedSql as Record<symbol, unknown>)[CONNECTION_CONTEXT_SYMBOL] = parentConnectionContext; + } else { + _attachConnectionContext(sql, proxiedSql as Record<symbol, unknown>); + } + + // Mark both the original and proxy as instrumented to prevent double-wrapping + (sql as Record<symbol, unknown>)[INSTRUMENTED_MARKER] = true; + (proxiedSql as Record<symbol, unknown>)[INSTRUMENTED_MARKER] = true; + + return proxiedSql; +} + +/** + * Wraps query-returning methods (unsafe, file) to ensure their queries are instrumented. + */ +function _wrapQueryMethod( + original: (...args: unknown[]) => unknown, + target: unknown, + proxiedSql: unknown, + options: PostgresJsSqlInstrumentationOptions, +): (...args: unknown[]) => unknown { + return function (this: unknown, ...args: unknown[]): unknown { + const query = Reflect.apply(original, target, args); + + if (query && typeof query === 'object' && 'handle' in query) { + _wrapSingleQueryHandle(query as { handle: unknown; strings?: string[] }, proxiedSql, options); + } + + return query; + }; +} + +/** + * Wraps callback-based methods (begin, reserve) to recursively instrument Sql instances. + * Note: These methods can also be used as tagged templates, which we pass through unchanged. + * + * Savepoint is not wrapped to avoid complex nested transaction instrumentation issues. + * Queries within savepoint callbacks are still instrumented through the parent transaction's Sql instance.
+ */ +function _wrapCallbackMethod( + original: (...args: unknown[]) => unknown, + target: unknown, + parentSqlInstance: unknown, + options: PostgresJsSqlInstrumentationOptions, +): (...args: unknown[]) => unknown { + return function (this: unknown, ...args: unknown[]): unknown { + // Extract parent context to propagate to child instances + const parentContext = (parentSqlInstance as Record<symbol, unknown>)[CONNECTION_CONTEXT_SYMBOL] as + | PostgresConnectionContext + | undefined; + + // Check if this is a callback-based call by verifying the last argument is a function + const isCallbackBased = typeof args[args.length - 1] === 'function'; + + if (!isCallbackBased) { + // Not a callback-based call - could be tagged template or promise-based + const result = Reflect.apply(original, target, args); + // If result is a Promise (e.g., reserve() without callback), instrument the resolved Sql instance + if (result && typeof (result as Promise<unknown>).then === 'function') { + return (result as Promise<unknown>).then((sqlInstance: unknown) => { + return _instrumentSqlInstance(sqlInstance, options, parentContext); + }); + } + return result; + } + + // Callback-based call: wrap the callback to instrument the Sql instance + const callback = (args.length === 1 ? args[0] : args[1]) as (sql: unknown) => unknown; + const wrappedCallback = function (sqlInstance: unknown): unknown { + const instrumentedSql = _instrumentSqlInstance(sqlInstance, options, parentContext); + return callback(instrumentedSql); + }; + + const newArgs = args.length === 1 ? [wrappedCallback] : [args[0], wrappedCallback]; + return Reflect.apply(original, target, newArgs); + }; +} + +/** + * Wraps a single query's handle method to create spans.
+ */ +function _wrapSingleQueryHandle( + query: { handle: unknown; strings?: string[]; __sentryWrapped?: boolean }, + sqlInstance: unknown, + options: PostgresJsSqlInstrumentationOptions, +): void { + // Prevent double wrapping - check if the handle itself is already wrapped + if ((query.handle as { __sentryWrapped?: boolean })?.__sentryWrapped) { + return; + } + + // Mark this query as coming from an instrumented sql instance. + // This prevents the Node.js Query.prototype fallback patch from double-spanning. + (query as Record<symbol, unknown>)[QUERY_FROM_INSTRUMENTED_SQL] = true; + + const originalHandle = query.handle as (...args: unknown[]) => Promise<unknown>; + + // IMPORTANT: We must replace the handle function directly, not use a Proxy, + // because Query.then() internally calls this.handle(), which would bypass a Proxy wrapper. + const wrappedHandle = async function (this: unknown, ...args: unknown[]): Promise<unknown> { + if (!_shouldCreateSpans(options)) { + return originalHandle.apply(this, args); + } + + const fullQuery = _reconstructQuery(query.strings); + const sanitizedSqlQuery = _sanitizeSqlQuery(fullQuery); + + return startSpanManual( + { + name: sanitizedSqlQuery || 'postgresjs.query', + op: 'db', + }, + (span: Span) => { + span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, 'auto.db.postgresjs'); + + span.setAttributes({ + 'db.system.name': 'postgres', + 'db.query.text': sanitizedSqlQuery, + }); + + const connectionContext = sqlInstance + ?
((sqlInstance as Record<symbol, unknown>)[CONNECTION_CONTEXT_SYMBOL] as + | PostgresConnectionContext + | undefined) + : undefined; + + _setConnectionAttributes(span, connectionContext); + + if (options.requestHook) { + try { + options.requestHook(span, sanitizedSqlQuery, connectionContext); + } catch (e) { + span.setAttribute('sentry.hook.error', 'requestHook failed'); + DEBUG_BUILD && debug.error('Error in requestHook for PostgresJs instrumentation:', e); + } + } + + const queryWithCallbacks = this as { + resolve: unknown; + reject: unknown; + }; + + queryWithCallbacks.resolve = new Proxy(queryWithCallbacks.resolve as (...args: unknown[]) => unknown, { + apply: (resolveTarget, resolveThisArg, resolveArgs: [{ command?: string }]) => { + try { + _setOperationName(span, sanitizedSqlQuery, resolveArgs?.[0]?.command); + span.end(); + } catch (e) { + DEBUG_BUILD && debug.error('Error ending span in resolve callback:', e); + } + + return Reflect.apply(resolveTarget, resolveThisArg, resolveArgs); + }, + }); + + queryWithCallbacks.reject = new Proxy(queryWithCallbacks.reject as (...args: unknown[]) => unknown, { + apply: (rejectTarget, rejectThisArg, rejectArgs: { message?: string; code?: string; name?: string }[]) => { + try { + span.setStatus({ + code: SPAN_STATUS_ERROR, + message: rejectArgs?.[0]?.message || 'unknown_error', + }); + + span.setAttribute('db.response.status_code', rejectArgs?.[0]?.code || 'unknown'); + span.setAttribute('error.type', rejectArgs?.[0]?.name || 'unknown'); + + _setOperationName(span, sanitizedSqlQuery); + span.end(); + } catch (e) { + DEBUG_BUILD && debug.error('Error ending span in reject callback:', e); + } + return Reflect.apply(rejectTarget, rejectThisArg, rejectArgs); + }, + }); + + // Handle synchronous errors that might occur before promise is created + try { + return originalHandle.apply(this, args); + } catch (e) { + span.setStatus({ + code: SPAN_STATUS_ERROR, + message: e instanceof Error ?
e.message : 'unknown_error', + }); + span.end(); + throw e; + } + }, + ); + }; + + (wrappedHandle as { __sentryWrapped?: boolean }).__sentryWrapped = true; + query.handle = wrappedHandle; +} + +/** + * Determines whether a span should be created based on the current context. + * If `requireParentSpan` is set to true in the options, a span will + * only be created if there is a parent span available. + */ +function _shouldCreateSpans(options: PostgresJsSqlInstrumentationOptions): boolean { + const hasParentSpan = getActiveSpan() !== undefined; + return hasParentSpan || !options.requireParentSpan; +} + +/** + * Reconstructs the full SQL query from template strings with PostgreSQL placeholders. + * + * For sql`SELECT * FROM users WHERE id = ${123} AND name = ${'foo'}`: + * strings = ["SELECT * FROM users WHERE id = ", " AND name = ", ""] + * returns: "SELECT * FROM users WHERE id = $1 AND name = $2" + * + * @internal Exported for testing only + */ +export function _reconstructQuery(strings: string[] | undefined): string | undefined { + if (!strings?.length) { + return undefined; + } + if (strings.length === 1) { + return strings[0] || undefined; + } + // Join template parts with PostgreSQL placeholders ($1, $2, etc.) + return strings.reduce((acc, str, i) => (i === 0 ? str : `${acc}$${i}${str}`), ''); +} + +/** + * Sanitize SQL query as per the OTEL semantic conventions + * https://opentelemetry.io/docs/specs/semconv/database/database-spans/#sanitization-of-dbquerytext + * + * PostgreSQL $n placeholders are preserved per OTEL spec - they're parameterized queries, + * not sensitive literals. Only actual values (strings, numbers, booleans) are sanitized. 
+ * + * @internal Exported for testing only + */ +export function _sanitizeSqlQuery(sqlQuery: string | undefined): string { + if (!sqlQuery) { + return 'Unknown SQL Query'; + } + + return ( + sqlQuery + // Remove comments first (they may contain newlines and extra spaces) + .replace(/--.*$/gm, '') // Single line comments (multiline mode) + .replace(/\/\*[\s\S]*?\*\//g, '') // Multi-line comments + .replace(/;\s*$/, '') // Remove trailing semicolons + // Collapse whitespace to a single space (after removing comments) + .replace(/\s+/g, ' ') + .trim() // Remove extra spaces and trim + // Sanitize hex/binary literals before string literals + .replace(/\bX'[0-9A-Fa-f]*'/gi, '?') // Hex string literals + .replace(/\bB'[01]*'/gi, '?') // Binary string literals + // Sanitize string literals (handles escaped quotes) + .replace(/'(?:[^']|'')*'/g, '?') + // Sanitize hex numbers + .replace(/\b0x[0-9A-Fa-f]+/gi, '?') + // Sanitize boolean literals + .replace(/\b(?:TRUE|FALSE)\b/gi, '?') + // Sanitize numeric literals (preserve $n placeholders via negative lookbehind) + .replace(/-?\b\d+\.?\d*[eE][+-]?\d+\b/g, '?') // Scientific notation + .replace(/-?\b\d+\.\d+\b/g, '?') // Decimals + .replace(/-?\.\d+\b/g, '?') // Decimals starting with dot + .replace(/(?<!\$)-?\b\d+\b/g, '?') // Integers (lookbehind preserves $n placeholders) + ); +} + +/** + * Sets the span operation name from the command reported by postgres.js, + * falling back to the first keyword of the sanitized query. + */ +function _setOperationName(span: Span, sanitizedSqlQuery: string, command?: string): void { + const operation = command || SQL_OPERATION_REGEX.exec(sanitizedSqlQuery)?.[0]; + if (operation) { + span.setAttribute('db.operation.name', operation.toUpperCase()); + } +} + +/** + * Sets server address/port and database attributes on the span from the connection context. + */ +function _setConnectionAttributes(span: Span, connectionContext?: PostgresConnectionContext): void { + if (!connectionContext) { + return; + } + const { ATTR_DB_NAMESPACE, ATTR_SERVER_ADDRESS, ATTR_SERVER_PORT } = connectionContext; + if (ATTR_DB_NAMESPACE) { + span.setAttribute('db.namespace', ATTR_DB_NAMESPACE); + } + if (ATTR_SERVER_ADDRESS) { + span.setAttribute('server.address', ATTR_SERVER_ADDRESS); + } + if (ATTR_SERVER_PORT) { + span.setAttribute('server.port', ATTR_SERVER_PORT); + } +} + +function _attachConnectionContext(sql: unknown, proxiedSql: Record<symbol, unknown>): void { + const sqlInstance = sql as { options?: { host?: string[]; port?: number[]; database?: string } }; + if (!sqlInstance.options || typeof sqlInstance.options !== 'object') { + return; + } + + const opts = sqlInstance.options; + // postgres.js stores parsed options with host and port as arrays + // The library defaults to 'localhost' and 5432 if not specified, but we're defensive here + const host = opts.host?.[0] || 'localhost'; + const port = opts.port?.[0] || 5432; + + const connectionContext: PostgresConnectionContext = { + ATTR_DB_NAMESPACE: typeof opts.database === 'string' && opts.database !== '' ?
opts.database : undefined, + ATTR_SERVER_ADDRESS: host, + ATTR_SERVER_PORT: String(port), + }; + + proxiedSql[CONNECTION_CONTEXT_SYMBOL] = connectionContext; +} diff --git a/packages/core/src/server-runtime-client.ts b/packages/core/src/server-runtime-client.ts index d1ae8e9063e6..d163fbc6d9e9 100644 --- a/packages/core/src/server-runtime-client.ts +++ b/packages/core/src/server-runtime-client.ts @@ -10,10 +10,11 @@ import type { Event, EventHint } from './types-hoist/event'; import type { ClientOptions } from './types-hoist/options'; import type { ParameterizedString } from './types-hoist/parameterize'; import type { SeverityLevel } from './types-hoist/severity'; -import type { BaseTransportOptions } from './types-hoist/transport'; +import type { BaseTransportOptions, Transport } from './types-hoist/transport'; import { debug } from './utils/debug-logger'; import { eventFromMessage, eventFromUnknownInput } from './utils/eventbuilder'; import { uuid4 } from './utils/misc'; +import type { PromiseBuffer } from './utils/promisebuffer'; import { resolvedSyncPromise } from './utils/syncpromise'; import { _getTraceInfoFromScope } from './utils/trace-info'; @@ -152,6 +153,32 @@ export class ServerRuntimeClient< return id; } + /** + * Disposes of the client and releases all resources. + * + * This method clears all internal state to allow the client to be garbage collected. + * It clears hooks, event processors, integrations, transport, and other internal references. + * + * Call this method after flushing to allow the client to be garbage collected. + * After calling dispose(), the client should not be used anymore. + * + * Subclasses should override this method to clean up their own resources and call `super.dispose()`. 
+ */ + public override dispose(): void { + DEBUG_BUILD && debug.log('Disposing client...'); + + for (const hookName of Object.keys(this._hooks)) { + this._hooks[hookName]?.clear(); + } + + this._hooks = {}; + this._eventProcessors.length = 0; + this._integrations = {}; + this._outcomes = {}; + (this as unknown as { _transport?: Transport })._transport = undefined; + (this as unknown as { _promiseBuffer?: PromiseBuffer<unknown> })._promiseBuffer = undefined; + } + /** * @inheritDoc */ diff --git a/packages/core/src/tracing/ai/mediaStripping.ts b/packages/core/src/tracing/ai/mediaStripping.ts new file mode 100644 index 000000000000..f4870cd5a9de --- /dev/null +++ b/packages/core/src/tracing/ai/mediaStripping.ts @@ -0,0 +1,160 @@ +/** + * Inline media content source, with a potentially very large base64 + * blob or data: uri. + */ +export type ContentMedia = Record<string, unknown> & + ( + | { + media_type: string; + data: string; + } + | { + image_url: `data:${string}`; + } + | { + image_url: { url: `data:${string}` }; + } + | { + type: 'blob' | 'base64'; + content: string; + } + | { + b64_json: string; + } + | { + uri: `data:${string}`; + } + | { + type: 'input_audio'; + input_audio: { data: string }; + } + | { + type: 'file'; + file: { file_data?: string }; + } + ); + +/** + * Check if a content part is an OpenAI/Anthropic media source + */ +export function isContentMedia(part: unknown): part is ContentMedia { + if (!part || typeof part !== 'object') return false; + + return ( + isContentMediaSource(part) || + hasInlineData(part) || + hasImageUrl(part) || + hasInputAudio(part) || + hasFileData(part) || + hasMediaTypeData(part) || + hasBlobOrBase64Type(part) || + hasB64Json(part) || + hasImageGenerationResult(part) || + hasDataUri(part) + ); +} + +function hasImageUrl(part: NonNullable<unknown>): boolean { + if (!('image_url' in part)) return false; + if (typeof part.image_url === 'string') return part.image_url.startsWith('data:'); + return hasNestedImageUrl(part); +} + +function
hasNestedImageUrl(part: NonNullable<unknown>): part is { image_url: { url: string } } { + return ( + 'image_url' in part && + !!part.image_url && + typeof part.image_url === 'object' && + 'url' in part.image_url && + typeof part.image_url.url === 'string' && + part.image_url.url.startsWith('data:') + ); +} + +function isContentMediaSource(part: NonNullable<unknown>): boolean { + return 'type' in part && typeof part.type === 'string' && 'source' in part && isContentMedia(part.source); +} + +function hasInlineData(part: NonNullable<unknown>): part is { inlineData: { data?: string } } { + return ( + 'inlineData' in part && + !!part.inlineData && + typeof part.inlineData === 'object' && + 'data' in part.inlineData && + typeof part.inlineData.data === 'string' + ); +} + +function hasInputAudio(part: NonNullable<unknown>): part is { type: 'input_audio'; input_audio: { data: string } } { + return ( + 'type' in part && + part.type === 'input_audio' && + 'input_audio' in part && + !!part.input_audio && + typeof part.input_audio === 'object' && + 'data' in part.input_audio && + typeof part.input_audio.data === 'string' + ); +} + +function hasFileData(part: NonNullable<unknown>): part is { type: 'file'; file: { file_data: string } } { + return ( + 'type' in part && + part.type === 'file' && + 'file' in part && + !!part.file && + typeof part.file === 'object' && + 'file_data' in part.file && + typeof part.file.file_data === 'string' + ); +} + +function hasMediaTypeData(part: NonNullable<unknown>): part is { media_type: string; data: string } { + return 'media_type' in part && typeof part.media_type === 'string' && 'data' in part; +} + +function hasBlobOrBase64Type(part: NonNullable<unknown>): part is { type: 'blob' | 'base64'; content: string } { + return 'type' in part && (part.type === 'blob' || part.type === 'base64'); +} + +function hasB64Json(part: NonNullable<unknown>): part is { b64_json: string } { + return 'b64_json' in part; +} + +function hasImageGenerationResult(part: NonNullable<unknown>): part is { type: 'image_generation'; result: string } {
+ return 'type' in part && 'result' in part && part.type === 'image_generation'; +} + +function hasDataUri(part: NonNullable<unknown>): part is { uri: string } { + return 'uri' in part && typeof part.uri === 'string' && part.uri.startsWith('data:'); +} + +const REMOVED_STRING = '[Blob substitute]'; + +const MEDIA_FIELDS = ['image_url', 'data', 'content', 'b64_json', 'result', 'uri'] as const; + +/** + * Replace inline binary data in a single media content part with a placeholder. + */ +export function stripInlineMediaFromSingleMessage(part: ContentMedia): ContentMedia { + const strip = { ...part }; + if (isContentMedia(strip.source)) { + strip.source = stripInlineMediaFromSingleMessage(strip.source); + } + if (hasInlineData(part)) { + strip.inlineData = { ...part.inlineData, data: REMOVED_STRING }; + } + if (hasNestedImageUrl(part)) { + strip.image_url = { ...part.image_url, url: REMOVED_STRING }; + } + if (hasInputAudio(part)) { + strip.input_audio = { ...part.input_audio, data: REMOVED_STRING }; + } + if (hasFileData(part)) { + strip.file = { ...part.file, file_data: REMOVED_STRING }; + } + for (const field of MEDIA_FIELDS) { + if (typeof strip[field] === 'string') strip[field] = REMOVED_STRING; + } + return strip; +} diff --git a/packages/core/src/tracing/ai/messageTruncation.ts b/packages/core/src/tracing/ai/messageTruncation.ts index f5c040892dcf..499d25ee6e47 100644 --- a/packages/core/src/tracing/ai/messageTruncation.ts +++ b/packages/core/src/tracing/ai/messageTruncation.ts @@ -1,3 +1,5 @@ +import { isContentMedia, stripInlineMediaFromSingleMessage } from './mediaStripping'; + /** * Default maximum size in bytes for GenAI messages. * Messages exceeding this limit will be truncated. @@ -23,31 +25,6 @@ type ContentArrayMessage = { }[]; }; -/** - * Inline media content source, with a potentially very large base64 - * blob or data: uri.
- */
-type ContentMedia = Record<string, unknown> &
-  (
-    | {
-        media_type: string;
-        data: string;
-      }
-    | {
-        image_url: `data:${string}`;
-      }
-    | {
-        type: 'blob' | 'base64';
-        content: string;
-      }
-    | {
-        b64_json: string;
-      }
-    | {
-        uri: `data:${string}`;
-      }
-  );
-
 /**
  * Message format used by Google GenAI API.
  * Parts can be strings or objects with a text property.
  */
@@ -85,12 +62,12 @@ const jsonBytes = (value: unknown): number => {
 };
 
 /**
- * Truncate a string to fit within maxBytes when encoded as UTF-8.
+ * Truncate a string to fit within maxBytes (inclusive) when encoded as UTF-8.
  * Uses binary search for efficiency with multi-byte characters.
  *
  * @param text - The string to truncate
- * @param maxBytes - Maximum byte length (UTF-8 encoded)
- * @returns Truncated string that fits within maxBytes
+ * @param maxBytes - Maximum byte length (inclusive, UTF-8 encoded)
+ * @returns Truncated string whose UTF-8 byte length is at most maxBytes
  */
 function truncateTextByBytes(text: string, maxBytes: number): string {
   if (utf8Bytes(text) <= maxBytes) {
@@ -164,36 +141,6 @@ function isContentArrayMessage(message: unknown): message is ContentArrayMessage
   return message !== null && typeof message === 'object' && 'content' in message && Array.isArray(message.content);
 }
 
-/**
- * Check if a content part is an OpenAI/Anthropic media source
- */
-function isContentMedia(part: unknown): part is ContentMedia {
-  if (!part || typeof part !== 'object') return false;
-
-  return (
-    isContentMediaSource(part) ||
-    hasInlineData(part) ||
-    ('media_type' in part && typeof part.media_type === 'string' && 'data' in part) ||
-    ('image_url' in part && typeof part.image_url === 'string' && part.image_url.startsWith('data:')) ||
-    ('type' in part && (part.type === 'blob' || part.type === 'base64')) ||
-    'b64_json' in part ||
-    ('type' in part && 'result' in part && part.type === 'image_generation') ||
-    ('uri' in part && typeof part.uri === 'string' && part.uri.startsWith('data:'))
-  );
-}
-function 
isContentMediaSource(part: NonNullable<object>): boolean {
-  return 'type' in part && typeof part.type === 'string' && 'source' in part && isContentMedia(part.source);
-}
-function hasInlineData(part: NonNullable<object>): part is { inlineData: { data?: string } } {
-  return (
-    'inlineData' in part &&
-    !!part.inlineData &&
-    typeof part.inlineData === 'object' &&
-    'data' in part.inlineData &&
-    typeof part.inlineData.data === 'string'
-  );
-}
-
 /**
  * Check if a message has the Google GenAI parts format.
  */
@@ -318,25 +265,6 @@ function truncateSingleMessage(message: unknown, maxBytes: number): unknown[] {
   return [];
 }
 
-const REMOVED_STRING = '[Filtered]';
-
-const MEDIA_FIELDS = ['image_url', 'data', 'content', 'b64_json', 'result', 'uri'] as const;
-
-function stripInlineMediaFromSingleMessage(part: ContentMedia): ContentMedia {
-  const strip = { ...part };
-  if (isContentMedia(strip.source)) {
-    strip.source = stripInlineMediaFromSingleMessage(strip.source);
-  }
-  // google genai inline data blob objects
-  if (hasInlineData(part)) {
-    strip.inlineData = { ...part.inlineData, data: REMOVED_STRING };
-  }
-  for (const field of MEDIA_FIELDS) {
-    if (typeof strip[field] === 'string') strip[field] = REMOVED_STRING;
-  }
-  return strip;
-}
-
 /**
  * Strip the inline media from message arrays.
  *
@@ -401,6 +329,11 @@ function truncateMessagesByBytes(messages: unknown[], maxBytes: number): unknown
     return messages;
   }
 
+  // The result is always a single-element array that callers wrap with
+  // JSON.stringify([message]), so subtract the 2-byte array wrapper ("[" and "]")
+  // to ensure the final serialized value stays under the limit.
+  const effectiveMaxBytes = maxBytes - 2;
+
   // Always keep only the last message
   const lastMessage = messages[messages.length - 1];
 
@@ -410,12 +343,12 @@ function truncateMessagesByBytes(messages: unknown[], maxBytes: number): unknown
   // Check if it fits
   const messageBytes = jsonBytes(strippedMessage);
 
-  if (messageBytes <= maxBytes) {
+  if (messageBytes <= effectiveMaxBytes) {
     return stripped;
   }
 
   // Truncate the single message if needed
-  return truncateSingleMessage(strippedMessage, maxBytes);
+  return truncateSingleMessage(strippedMessage, effectiveMaxBytes);
 }
 
 /**
diff --git a/packages/core/src/tracing/anthropic-ai/index.ts b/packages/core/src/tracing/anthropic-ai/index.ts
index 49ed1c3b3354..63ff1be0e52f 100644
--- a/packages/core/src/tracing/anthropic-ai/index.ts
+++ b/packages/core/src/tracing/anthropic-ai/index.ts
@@ -206,7 +206,7 @@ function handleStreamingRequest(
 ): R | Promise<R> {
   const model = requestAttributes[GEN_AI_REQUEST_MODEL_ATTRIBUTE] ?? 'unknown';
   const spanConfig = {
-    name: `${operationName} ${model} stream-response`,
+    name: `${operationName} ${model}`,
     op: getSpanOperation(methodPath),
     attributes: requestAttributes as Record<string, unknown>,
   };
diff --git a/packages/core/src/tracing/google-genai/index.ts b/packages/core/src/tracing/google-genai/index.ts
index a56985b9b6f6..7781b67d6db0 100644
--- a/packages/core/src/tracing/google-genai/index.ts
+++ b/packages/core/src/tracing/google-genai/index.ts
@@ -270,7 +270,7 @@ function instrumentMethod(
   // Use startSpanManual for streaming methods to control span lifecycle
   return startSpanManual(
     {
-      name: `${operationName} ${model} stream-response`,
+      name: `${operationName} ${model}`,
       op: getSpanOperation(methodPath),
       attributes: requestAttributes,
     },
diff --git a/packages/core/src/tracing/langchain/index.ts b/packages/core/src/tracing/langchain/index.ts
index 8cf12dfcb861..7484219d32d9 100644
--- a/packages/core/src/tracing/langchain/index.ts
+++ b/packages/core/src/tracing/langchain/index.ts
@@ -184,8 +184,17 @@ 
export function createLangChainCallbackHandler(options: LangChainOptions = {}):
     },
 
     // Chain Start Handler
-    handleChainStart(chain: { name?: string }, inputs: Record<string, unknown>, runId: string, _parentRunId?: string) {
-      const chainName = chain.name || 'unknown_chain';
+    handleChainStart(
+      chain: { name?: string },
+      inputs: Record<string, unknown>,
+      runId: string,
+      _parentRunId?: string,
+      _tags?: string[],
+      _metadata?: Record<string, unknown>,
+      _runType?: string,
+      runName?: string,
+    ) {
+      const chainName = runName || chain.name || 'unknown_chain';
       const attributes: Record<string, unknown> = {
         [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
         'langchain.chain.name': chainName,
diff --git a/packages/core/src/tracing/langchain/utils.ts b/packages/core/src/tracing/langchain/utils.ts
index 249025480882..924739485948 100644
--- a/packages/core/src/tracing/langchain/utils.ts
+++ b/packages/core/src/tracing/langchain/utils.ts
@@ -25,6 +25,7 @@ import {
   GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
   GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
 } from '../ai/gen-ai-attributes';
+import { isContentMedia, stripInlineMediaFromSingleMessage } from '../ai/mediaStripping';
 import { truncateGenAiMessages } from '../ai/messageTruncation';
 import { extractSystemInstructions } from '../ai/utils';
 import { LANGCHAIN_ORIGIN, ROLE_MAP } from './constants';
@@ -62,6 +63,38 @@ function asString(v: unknown): string {
   }
 }
 
+/**
+ * Converts message content to a string, stripping inline media (base64 images, audio, etc.)
+ * from multimodal content before stringification so downstream media stripping can't miss it.
+ *
+ * @example
+ * // String content passes through unchanged:
+ * normalizeContent("Hello") // => "Hello"
+ *
+ * // Multimodal array content — media is replaced with "[Blob substitute]" before JSON.stringify:
+ * normalizeContent([
+ *   { type: "text", text: "What color?" },
+ *   { type: "image_url", image_url: { url: "data:image/png;base64,iVBOR..." 
} } + * ]) + * // => '[{"type":"text","text":"What color?"},{"type":"image_url","image_url":{"url":"[Blob substitute]"}}]' + * + * // Without this, asString() would JSON.stringify the raw array and the base64 blob + * // would end up in span attributes, since downstream stripping only works on objects. + */ +function normalizeContent(v: unknown): string { + if (Array.isArray(v)) { + try { + const stripped = v.map(part => + part && typeof part === 'object' && isContentMedia(part) ? stripInlineMediaFromSingleMessage(part) : part, + ); + return JSON.stringify(stripped); + } catch { + return String(v); + } + } + return asString(v); +} + /** * Normalizes a single role token to our canonical set. * @@ -123,7 +156,7 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array< const messageType = maybeGetType.call(message); return { role: normalizeMessageRole(messageType), - content: asString(message.content), + content: normalizeContent(message.content), }; } @@ -136,7 +169,7 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array< return { role: normalizeMessageRole(role), - content: asString(message.kwargs?.content), + content: normalizeContent(message.kwargs?.content), }; } @@ -145,7 +178,7 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array< const role = String(message.type).toLowerCase(); return { role: normalizeMessageRole(role), - content: asString(message.content), + content: normalizeContent(message.content), }; } @@ -154,7 +187,7 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array< if (message.role) { return { role: normalizeMessageRole(String(message.role)), - content: asString(message.content), + content: normalizeContent(message.content), }; } @@ -164,14 +197,14 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array< if (ctor && ctor !== 'Object') { return { role: normalizeMessageRole(normalizeRoleNameFromCtor(ctor)), - 
content: asString(message.content),
+        content: normalizeContent(message.content),
       };
     }
 
     // 6) Fallback: treat as user text
     return {
       role: 'user',
-      content: asString(message.content),
+      content: normalizeContent(message.content),
     };
   });
 }
diff --git a/packages/core/src/tracing/openai/index.ts b/packages/core/src/tracing/openai/index.ts
index 9568567227df..cfbdc5cfb4b1 100644
--- a/packages/core/src/tracing/openai/index.ts
+++ b/packages/core/src/tracing/openai/index.ts
@@ -262,7 +262,7 @@ function instrumentMethod(
   const isStreamRequested = params && typeof params === 'object' && params.stream === true;
 
   const spanConfig = {
-    name: `${operationName} ${model}${isStreamRequested ? ' stream-response' : ''}`,
+    name: `${operationName} ${model}`,
     op: getSpanOperation(methodPath),
     attributes: requestAttributes as Record<string, unknown>,
   };
diff --git a/packages/core/test/lib/integrations/consola.test.ts b/packages/core/test/lib/integrations/consola.test.ts
index a32f073eeb75..e1a32b775e54 100644
--- a/packages/core/test/lib/integrations/consola.test.ts
+++ b/packages/core/test/lib/integrations/consola.test.ts
@@ -11,9 +11,14 @@ vi.mock('../../../src/logs/internal', () => ({
   _INTERNAL_flushLogsBuffer: vi.fn(),
 }));
 
-vi.mock('../../../src/logs/utils', async actual => ({
-  formatConsoleArgs: vi.fn(((await actual()) as any).formatConsoleArgs),
-}));
+vi.mock('../../../src/logs/utils', async importOriginal => {
+  // eslint-disable-next-line @typescript-eslint/consistent-type-imports
+  const actual: typeof import('../../../src/logs/utils') = await importOriginal();
+  return {
+    ...actual,
+    formatConsoleArgs: vi.fn(actual.formatConsoleArgs),
+  };
+});
 
 vi.mock('../../../src/currentScopes', () => ({
   getClient: vi.fn(),
@@ -22,6 +27,7 @@ vi.mock('../../../src/currentScopes', () => ({
 
 describe('createConsolaReporter', () => {
   let mockClient: TestClient;
+  let sentryReporter: ReturnType<typeof createConsolaReporter>;
 
   beforeEach(() => {
     vi.clearAllMocks();
normalizeMaxBreadth: 1000, }); - const mockScope = { - getClient: vi.fn().mockReturnValue(mockClient), - }; - vi.mocked(getClient).mockReturnValue(mockClient); - vi.mocked(getCurrentScope).mockReturnValue(mockScope as any); + vi.mocked(getCurrentScope).mockReturnValue({ + getClient: vi.fn().mockReturnValue(mockClient), + } as any); + sentryReporter = createConsolaReporter(); }); afterEach(() => { @@ -56,127 +61,8 @@ describe('createConsolaReporter', () => { }); }); - describe('log capturing', () => { - let sentryReporter: any; - - beforeEach(() => { - sentryReporter = createConsolaReporter(); - }); - - it('should capture error logs', () => { - const logObj = { - type: 'error', - level: 0, - message: 'This is an error', - tag: 'test', - date: new Date('2023-01-01T00:00:00.000Z'), - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'error', - message: 'This is an error', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.tag': 'test', - 'consola.type': 'error', - 'consola.level': 0, - }, - }); - }); - - it('should capture warn logs', () => { - const logObj = { - type: 'warn', - message: 'This is a warning', - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'warn', - message: 'This is a warning', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.type': 'warn', - }, - }); - }); - - it('should capture info logs', () => { - const logObj = { - type: 'info', - message: 'This is info', - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'info', - message: 'This is info', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.type': 'info', - }, - }); - }); - - it('should capture debug logs', () => { - const logObj = { - type: 'debug', - message: 'Debug message', - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'debug', - message: 
'Debug message', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.type': 'debug', - }, - }); - }); - - it('should capture trace logs', () => { - const logObj = { - type: 'trace', - message: 'Trace message', - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'trace', - message: 'Trace message', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.type': 'trace', - }, - }); - }); - - it('should capture fatal logs', () => { - const logObj = { - type: 'fatal', - message: 'Fatal error', - }; - - sentryReporter.log(logObj); - - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'fatal', - message: 'Fatal error', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.type': 'fatal', - }, - }); - }); - - it('should format message from args when message is not provided', () => { + describe('message and args handling', () => { + it('should format message from args', () => { const logObj = { type: 'info', args: ['Hello', 'world', 123, { key: 'value' }], @@ -216,52 +102,111 @@ describe('createConsolaReporter', () => { }); }); - it('should map consola levels to sentry levels when type is not provided', () => { - const logObj = { - level: 0, // Fatal level - message: 'Fatal message', - }; + it('consola-merged: args=[message] with extra keys on log object', () => { + sentryReporter.log({ + type: 'log', + level: 2, + args: ['Hello', 'world', { some: 'obj' }], + userId: 123, + action: 'login', + time: '2026-02-24T10:24:04.477Z', + smallObj: { firstLevel: { secondLevel: { thirdLevel: { fourthLevel: 'deep' } } } }, + tag: '', + }); + + const call = vi.mocked(_INTERNAL_captureLog).mock.calls[0]![0]; + + // Message from args + expect(call.message).toBe('Hello world {"some":"obj"}'); + expect(call.attributes).toMatchObject({ + 'consola.type': 'log', + 'consola.level': 2, + userId: 123, + smallObj: { firstLevel: { secondLevel: { thirdLevel: '[Object]' } } }, // Object is normalized + 
action: 'login', + time: '2026-02-24T10:24:04.477Z', + 'sentry.origin': 'auto.log.consola', + }); + expect(call.attributes?.['sentry.message.parameter.0']).toBeUndefined(); + }); - sentryReporter.log(logObj); + it('capturing custom keys mimicking direct reporter.log({ type, message, userId, sessionId })', () => { + sentryReporter.log({ + type: 'info', + message: 'User action', + userId: 123, + sessionId: 'abc-123', + }); - expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ - level: 'fatal', - message: 'Fatal message', - attributes: { - 'sentry.origin': 'auto.log.consola', - 'consola.level': 0, - }, + const call = vi.mocked(_INTERNAL_captureLog).mock.calls[0]![0]; + expect(call.message).toBe('User action'); + expect(call.attributes).toMatchObject({ + 'consola.type': 'info', + userId: 123, + sessionId: 'abc-123', }); }); + }); - it('should map various consola types correctly', () => { - const testCases = [ - { type: 'success', expectedLevel: 'info' }, - { type: 'fail', expectedLevel: 'error' }, - { type: 'ready', expectedLevel: 'info' }, - { type: 'start', expectedLevel: 'info' }, - { type: 'verbose', expectedLevel: 'debug' }, - { type: 'log', expectedLevel: 'info' }, - { type: 'silent', expectedLevel: 'trace' }, - ]; - - testCases.forEach(({ type, expectedLevel }) => { - vi.clearAllMocks(); - - sentryReporter.log({ - type, - message: `Test ${type} message`, - }); + describe('level mapping', () => { + it.each([ + ['error', 'error'], + ['warn', 'warn'], + ['info', 'info'], + ['debug', 'debug'], + ['trace', 'trace'], + ['fatal', 'fatal'], + ] as const)('maps type "%s" to Sentry level "%s"', (type, expectedLevel) => { + sentryReporter.log({ type, args: [`${type} message`] }); + + expect(_INTERNAL_captureLog).toHaveBeenCalledWith( + expect.objectContaining({ + level: expectedLevel, + message: `${type} message`, + attributes: expect.objectContaining({ 'consola.type': type, 'sentry.origin': 'auto.log.consola' }), + }), + ); + }); - 
expect(_INTERNAL_captureLog).toHaveBeenCalledWith({ + it.each([ + ['success', 'info'], + ['fail', 'error'], + ['ready', 'info'], + ['start', 'info'], + ['verbose', 'debug'], + ['log', 'info'], + ['silent', 'trace'], + ] as const)('maps consola type "%s" to Sentry level "%s"', (type, expectedLevel) => { + sentryReporter.log({ type, args: [`Test ${type}`] }); + + expect(_INTERNAL_captureLog).toHaveBeenCalledWith( + expect.objectContaining({ level: expectedLevel, - message: `Test ${type} message`, - attributes: { - 'sentry.origin': 'auto.log.consola', + message: `Test ${type}`, + attributes: expect.objectContaining({ 'consola.type': type, - }, - }); + 'sentry.origin': 'auto.log.consola', + }), + }), + ); + }); + + it('uses level number when type is missing', () => { + sentryReporter.log({ + level: 0, // Fatal level + args: ['Fatal message'], }); + + expect(_INTERNAL_captureLog).toHaveBeenCalledWith( + expect.objectContaining({ + level: 'fatal', + message: 'Fatal message', + attributes: expect.objectContaining({ + 'consola.level': 0, + 'sentry.origin': 'auto.log.consola', + }), + }), + ); }); }); @@ -274,21 +219,21 @@ describe('createConsolaReporter', () => { // Should capture error filteredReporter.log({ type: 'error', - message: 'Error message', + args: ['Error message'], }); expect(_INTERNAL_captureLog).toHaveBeenCalledTimes(1); // Should capture warn filteredReporter.log({ type: 'warn', - message: 'Warn message', + args: ['Warn message'], }); expect(_INTERNAL_captureLog).toHaveBeenCalledTimes(2); // Should not capture info filteredReporter.log({ type: 'info', - message: 'Info message', + args: ['Info message'], }); expect(_INTERNAL_captureLog).toHaveBeenCalledTimes(2); }); @@ -300,7 +245,7 @@ describe('createConsolaReporter', () => { ['trace', 'debug', 'info', 'warn', 'error', 'fatal'].forEach(type => { defaultReporter.log({ type, - message: `${type} message`, + args: [`${type} message`], }); }); diff --git a/packages/core/test/lib/integrations/postgresjs.test.ts 
b/packages/core/test/lib/integrations/postgresjs.test.ts new file mode 100644 index 000000000000..dfc159808377 --- /dev/null +++ b/packages/core/test/lib/integrations/postgresjs.test.ts @@ -0,0 +1,602 @@ +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; +import { _reconstructQuery, _sanitizeSqlQuery, instrumentPostgresJsSql } from '../../../src/integrations/postgresjs'; +import * as spanUtils from '../../../src/utils/spanUtils'; + +describe('PostgresJs portable instrumentation', () => { + describe('_reconstructQuery', () => { + describe('empty input handling', () => { + it.each([ + [undefined, undefined], + [null as unknown as undefined, undefined], + [[], undefined], + [[''], undefined], + ])('returns undefined for %p', (input, expected) => { + expect(_reconstructQuery(input)).toBe(expected); + }); + + it('returns whitespace-only string as-is', () => { + expect(_reconstructQuery([' '])).toBe(' '); + }); + }); + + describe('single-element array (non-parameterized)', () => { + it.each([ + ['SELECT * FROM users', 'SELECT * FROM users'], + ['SELECT * FROM users WHERE id = $1', 'SELECT * FROM users WHERE id = $1'], + ['INSERT INTO users (email, name) VALUES ($1, $2)', 'INSERT INTO users (email, name) VALUES ($1, $2)'], + ])('returns %p as-is', (input, expected) => { + expect(_reconstructQuery([input])).toBe(expected); + }); + }); + + describe('multi-element array (parameterized)', () => { + it.each([ + [['SELECT * FROM users WHERE id = ', ''], 'SELECT * FROM users WHERE id = $1'], + [['SELECT * FROM users WHERE id = ', ' AND name = ', ''], 'SELECT * FROM users WHERE id = $1 AND name = $2'], + [['INSERT INTO t VALUES (', ', ', ', ', ')'], 'INSERT INTO t VALUES ($1, $2, $3)'], + [['', ' WHERE id = ', ''], '$1 WHERE id = $2'], + [ + ['SELECT * FROM ', ' WHERE id = ', ' AND status IN (', ', ', ') ORDER BY ', ''], + 'SELECT * FROM $1 WHERE id = $2 AND status IN ($3, $4) ORDER BY $5', + ], + ])('reconstructs %p to %p', (input, expected) => { + 
expect(_reconstructQuery(input)).toBe(expected); + }); + }); + + describe('edge cases', () => { + it('handles 10+ parameters', () => { + const strings = ['INSERT INTO t VALUES (', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ')']; + expect(_reconstructQuery(strings)).toBe('INSERT INTO t VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)'); + }); + + it.each([ + [['SELECT * FROM users WHERE id = ', ' ', ''], 'SELECT * FROM users WHERE id = $1 $2'], + [['SELECT * FROM users WHERE id = ', ' LIMIT 10'], 'SELECT * FROM users WHERE id = $1 LIMIT 10'], + [['SELECT *\nFROM users\nWHERE id = ', ''], 'SELECT *\nFROM users\nWHERE id = $1'], + [['SELECT * FROM "User" WHERE "email" = ', ''], 'SELECT * FROM "User" WHERE "email" = $1'], + [['SELECT ', '', '', ''], 'SELECT $1$2$3'], + [['', ''], '$1'], + ])('handles edge case %p', (input, expected) => { + expect(_reconstructQuery(input)).toBe(expected); + }); + }); + + describe('integration with _sanitizeSqlQuery', () => { + it('preserves $n placeholders per OTEL spec', () => { + const strings = ['SELECT * FROM users WHERE id = ', ' AND name = ', '']; + expect(_sanitizeSqlQuery(_reconstructQuery(strings))).toBe('SELECT * FROM users WHERE id = $1 AND name = $2'); + }); + + it('collapses IN clause with $n to IN ($?)', () => { + const strings = ['SELECT * FROM users WHERE id = ', ' AND status IN (', ', ', ', ', ')']; + expect(_sanitizeSqlQuery(_reconstructQuery(strings))).toBe( + 'SELECT * FROM users WHERE id = $1 AND status IN ($?)', + ); + }); + + it('returns Unknown SQL Query for undefined input', () => { + expect(_sanitizeSqlQuery(_reconstructQuery(undefined))).toBe('Unknown SQL Query'); + }); + + it('normalizes whitespace and removes trailing semicolon', () => { + const strings = ['SELECT *\n FROM users\n WHERE id = ', ';']; + expect(_sanitizeSqlQuery(_reconstructQuery(strings))).toBe('SELECT * FROM users WHERE id = $1'); + }); + }); + }); + + describe('_sanitizeSqlQuery', () => { + describe('passthrough (no literals)', 
() => { + it.each([ + ['SELECT * FROM users', 'SELECT * FROM users'], + ['INSERT INTO users (a, b) SELECT a, b FROM other', 'INSERT INTO users (a, b) SELECT a, b FROM other'], + [ + 'SELECT col1, col2 FROM table1 JOIN table2 ON table1.id = table2.id', + 'SELECT col1, col2 FROM table1 JOIN table2 ON table1.id = table2.id', + ], + ])('passes through %p unchanged', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('comment removal', () => { + it.each([ + ['SELECT * FROM users -- comment', 'SELECT * FROM users'], + ['SELECT * -- comment\nFROM users', 'SELECT * FROM users'], + ['SELECT /* comment */ * FROM users', 'SELECT * FROM users'], + ['SELECT /* multi\nline */ * FROM users', 'SELECT * FROM users'], + ['SELECT /* c1 */ * FROM /* c2 */ users -- c3', 'SELECT * FROM users'], + ])('removes comments: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('whitespace normalization', () => { + it.each([ + ['SELECT * FROM users', 'SELECT * FROM users'], + ['SELECT *\n\tFROM\n\tusers', 'SELECT * FROM users'], + [' SELECT * FROM users ', 'SELECT * FROM users'], + [' SELECT \n\t * \r\n FROM \t\t users ', 'SELECT * FROM users'], + ])('normalizes %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('trailing semicolon removal', () => { + it.each([ + ['SELECT * FROM users;', 'SELECT * FROM users'], + ['SELECT * FROM users; ', 'SELECT * FROM users'], + ])('removes trailing semicolon: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('$n placeholder preservation (OTEL compliance)', () => { + it.each([ + ['SELECT * FROM users WHERE id = $1', 'SELECT * FROM users WHERE id = $1'], + ['SELECT * FROM users WHERE id = $1 AND name = $2', 'SELECT * FROM users WHERE id = $1 AND name = $2'], + ['INSERT INTO t VALUES ($1, $10, $100)', 'INSERT INTO t VALUES ($1, $10, $100)'], + ['$1 UNION 
SELECT * FROM users', '$1 UNION SELECT * FROM users'], + ['SELECT * FROM users LIMIT $1', 'SELECT * FROM users LIMIT $1'], + ['SELECT $1$2$3', 'SELECT $1$2$3'], + ['SELECT generate_series($1, $2)', 'SELECT generate_series($1, $2)'], + ])('preserves $n: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('string literal sanitization', () => { + it.each([ + ["SELECT * FROM users WHERE name = 'John'", 'SELECT * FROM users WHERE name = ?'], + ["SELECT * FROM users WHERE a = 'x' AND b = 'y'", 'SELECT * FROM users WHERE a = ? AND b = ?'], + ["SELECT * FROM users WHERE name = ''", 'SELECT * FROM users WHERE name = ?'], + ["SELECT * FROM users WHERE name = 'it''s'", 'SELECT * FROM users WHERE name = ?'], + ["SELECT * FROM users WHERE data = 'a''b''c'", 'SELECT * FROM users WHERE data = ?'], + ["SELECT * FROM t WHERE desc = 'Use $1 for param'", 'SELECT * FROM t WHERE desc = ?'], + ["SELECT * FROM users WHERE name = '日本語'", 'SELECT * FROM users WHERE name = ?'], + ])('sanitizes string: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('numeric literal sanitization', () => { + it.each([ + ['SELECT * FROM users WHERE id = 123', 'SELECT * FROM users WHERE id = ?'], + ['SELECT * FROM users WHERE count = 0', 'SELECT * FROM users WHERE count = ?'], + ['SELECT * FROM products WHERE price = 19.99', 'SELECT * FROM products WHERE price = ?'], + ['SELECT * FROM products WHERE discount = .5', 'SELECT * FROM products WHERE discount = ?'], + ['SELECT * FROM accounts WHERE balance = -500', 'SELECT * FROM accounts WHERE balance = ?'], + ['SELECT * FROM accounts WHERE rate = -0.05', 'SELECT * FROM accounts WHERE rate = ?'], + ['SELECT * FROM data WHERE value = 1e10', 'SELECT * FROM data WHERE value = ?'], + ['SELECT * FROM data WHERE value = 1.5e-3', 'SELECT * FROM data WHERE value = ?'], + ['SELECT * FROM data WHERE value = 2.5E+10', 'SELECT * FROM data WHERE value = ?'], + ['SELECT * 
FROM data WHERE value = -1e10', 'SELECT * FROM data WHERE value = ?'], + ['SELECT * FROM users LIMIT 10 OFFSET 20', 'SELECT * FROM users LIMIT ? OFFSET ?'], + ])('sanitizes number: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + + it('preserves numbers in identifiers', () => { + expect(_sanitizeSqlQuery('SELECT * FROM users2 WHERE col1 = 5')).toBe('SELECT * FROM users2 WHERE col1 = ?'); + expect(_sanitizeSqlQuery('SELECT * FROM "table1" WHERE "col2" = 5')).toBe( + 'SELECT * FROM "table1" WHERE "col2" = ?', + ); + }); + }); + + describe('hex and binary literal sanitization', () => { + it.each([ + ["SELECT * FROM t WHERE data = X'1A2B'", 'SELECT * FROM t WHERE data = ?'], + ["SELECT * FROM t WHERE data = x'ff'", 'SELECT * FROM t WHERE data = ?'], + ["SELECT * FROM t WHERE data = X''", 'SELECT * FROM t WHERE data = ?'], + ['SELECT * FROM t WHERE flags = 0x1A2B', 'SELECT * FROM t WHERE flags = ?'], + ['SELECT * FROM t WHERE flags = 0XFF', 'SELECT * FROM t WHERE flags = ?'], + ["SELECT * FROM t WHERE bits = B'1010'", 'SELECT * FROM t WHERE bits = ?'], + ["SELECT * FROM t WHERE bits = b'1111'", 'SELECT * FROM t WHERE bits = ?'], + ["SELECT * FROM t WHERE bits = B''", 'SELECT * FROM t WHERE bits = ?'], + ])('sanitizes hex/binary: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('boolean literal sanitization', () => { + it.each([ + ['SELECT * FROM users WHERE active = TRUE', 'SELECT * FROM users WHERE active = ?'], + ['SELECT * FROM users WHERE active = FALSE', 'SELECT * FROM users WHERE active = ?'], + ['SELECT * FROM users WHERE a = true AND b = false', 'SELECT * FROM users WHERE a = ? AND b = ?'], + ['SELECT * FROM users WHERE a = True AND b = False', 'SELECT * FROM users WHERE a = ? 
AND b = ?'], + ])('sanitizes boolean: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + + it('does not affect identifiers containing TRUE/FALSE', () => { + expect(_sanitizeSqlQuery('SELECT TRUE_FLAG FROM users WHERE active = TRUE')).toBe( + 'SELECT TRUE_FLAG FROM users WHERE active = ?', + ); + }); + }); + + describe('IN clause collapsing', () => { + it.each([ + ['SELECT * FROM users WHERE id IN (?, ?, ?)', 'SELECT * FROM users WHERE id IN (?)'], + ['SELECT * FROM users WHERE id IN ($1, $2, $3)', 'SELECT * FROM users WHERE id IN ($?)'], + ['SELECT * FROM users WHERE id in ($1, $2)', 'SELECT * FROM users WHERE id IN ($?)'], + ['SELECT * FROM users WHERE id IN ( $1 , $2 , $3 )', 'SELECT * FROM users WHERE id IN ($?)'], + [ + 'SELECT * FROM users WHERE id IN ($1, $2) AND status IN ($3, $4)', + 'SELECT * FROM users WHERE id IN ($?) AND status IN ($?)', + ], + ['SELECT * FROM users WHERE id NOT IN ($1, $2)', 'SELECT * FROM users WHERE id NOT IN ($?)'], + ['SELECT * FROM users WHERE id NOT IN (?, ?)', 'SELECT * FROM users WHERE id NOT IN (?)'], + ['SELECT * FROM users WHERE id IN ($1)', 'SELECT * FROM users WHERE id IN ($?)'], + ['SELECT * FROM users WHERE id IN (1, 2, 3)', 'SELECT * FROM users WHERE id IN (?)'], + ])('collapses IN clause: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('mixed scenarios (params + literals)', () => { + it.each([ + ["SELECT * FROM users WHERE id = $1 AND status = 'active'", 'SELECT * FROM users WHERE id = $1 AND status = ?'], + ['SELECT * FROM users WHERE id = $1 AND limit = 100', 'SELECT * FROM users WHERE id = $1 AND limit = ?'], + [ + "SELECT * FROM t WHERE a = $1 AND b = 'foo' AND c = 123 AND d = TRUE AND e IN ($2, $3)", + 'SELECT * FROM t WHERE a = $1 AND b = ? AND c = ? AND d = ? 
AND e IN ($?)', + ], + ])('handles mixed: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('PostgreSQL-specific syntax', () => { + it.each([ + ['SELECT $1::integer', 'SELECT $1::integer'], + ['SELECT $1::text', 'SELECT $1::text'], + ['SELECT * FROM t WHERE tags = ARRAY[1, 2, 3]', 'SELECT * FROM t WHERE tags = ARRAY[?, ?, ?]'], + ['SELECT * FROM t WHERE tags = ARRAY[$1, $2]', 'SELECT * FROM t WHERE tags = ARRAY[$1, $2]'], + ["SELECT data->'key' FROM t WHERE id = $1", 'SELECT data->? FROM t WHERE id = $1'], + ["SELECT data->>'key' FROM t WHERE id = $1", 'SELECT data->>? FROM t WHERE id = $1'], + ["SELECT * FROM t WHERE data @> '{}'", 'SELECT * FROM t WHERE data @> ?'], + [ + "SELECT * FROM t WHERE created_at > NOW() - INTERVAL '7 days'", + 'SELECT * FROM t WHERE created_at > NOW() - INTERVAL ?', + ], + ['CREATE TABLE t (created_at TIMESTAMP(3))', 'CREATE TABLE t (created_at TIMESTAMP(?))'], + ['CREATE TABLE t (price NUMERIC(10, 2))', 'CREATE TABLE t (price NUMERIC(?, ?))'], + ])('handles PostgreSQL syntax: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('empty/undefined input', () => { + it.each([ + [undefined, 'Unknown SQL Query'], + ['', 'Unknown SQL Query'], + [' ', ''], + [' \n\t ', ''], + ])('handles empty input %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('complex real-world queries', () => { + it('handles query with comments, whitespace, and IN clause', () => { + const input = ` + SELECT * FROM users -- fetch all users + WHERE id = $1 + AND status IN ($2, $3, $4); + `; + expect(_sanitizeSqlQuery(input)).toBe('SELECT * FROM users WHERE id = $1 AND status IN ($?)'); + }); + + it('handles Prisma-style query', () => { + const input = ` + SELECT "User"."id", "User"."email", "User"."name" + FROM "User" + WHERE "User"."email" = $1 + AND "User"."deleted_at" IS NULL + LIMIT $2; + `; + 
expect(_sanitizeSqlQuery(input)).toBe( + 'SELECT "User"."id", "User"."email", "User"."name" FROM "User" WHERE "User"."email" = $1 AND "User"."deleted_at" IS NULL LIMIT $2', + ); + }); + + it('handles CREATE TABLE with various types', () => { + const input = ` + CREATE TABLE "User" ( + "id" SERIAL NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "email" TEXT NOT NULL, + "balance" NUMERIC(10, 2) DEFAULT 0.00, + CONSTRAINT "User_pkey" PRIMARY KEY ("id") + ); + `; + expect(_sanitizeSqlQuery(input)).toBe( + 'CREATE TABLE "User" ( "id" SERIAL NOT NULL, "createdAt" TIMESTAMP(?) NOT NULL DEFAULT CURRENT_TIMESTAMP, "email" TEXT NOT NULL, "balance" NUMERIC(?, ?) DEFAULT ?, CONSTRAINT "User_pkey" PRIMARY KEY ("id") )', + ); + }); + + it('handles INSERT/UPDATE with mixed literals and params', () => { + expect(_sanitizeSqlQuery("INSERT INTO users (name, age, active) VALUES ('John', 30, TRUE)")).toBe( + 'INSERT INTO users (name, age, active) VALUES (?, ?, ?)', + ); + expect(_sanitizeSqlQuery("UPDATE users SET name = $1, updated_at = '2024-01-01' WHERE id = 123")).toBe( + 'UPDATE users SET name = $1, updated_at = ? WHERE id = ?', + ); + }); + }); + + describe('edge cases', () => { + it.each([ + ['SELECT * FROM "my-table" WHERE "my-column" = $1', 'SELECT * FROM "my-table" WHERE "my-column" = $1'], + ['SELECT * FROM t WHERE big_id = 99999999999999999999', 'SELECT * FROM t WHERE big_id = ?'], + ['SELECT * FROM t WHERE val > -5', 'SELECT * FROM t WHERE val > ?'], + ['SELECT * FROM t WHERE id IN (1, -2, 3)', 'SELECT * FROM t WHERE id IN (?)'], + ['SELECT 1+2*3', 'SELECT ?+?*?'], + ["SELECT * FROM users WHERE name LIKE '%john%'", 'SELECT * FROM users WHERE name LIKE ?'], + ['SELECT * FROM t WHERE age BETWEEN 18 AND 65', 'SELECT * FROM t WHERE age BETWEEN ? 
AND ?'], + ['SELECT * FROM t WHERE age BETWEEN $1 AND $2', 'SELECT * FROM t WHERE age BETWEEN $1 AND $2'], + [ + "SELECT CASE WHEN status = 'active' THEN 1 ELSE 0 END FROM users", + 'SELECT CASE WHEN status = ? THEN ? ELSE ? END FROM users', + ], + [ + 'SELECT * FROM users WHERE id IN (SELECT user_id FROM orders WHERE amount > 100)', + 'SELECT * FROM users WHERE id IN (SELECT user_id FROM orders WHERE amount > ?)', + ], + [ + "WITH cte AS (SELECT * FROM users WHERE status = 'active') SELECT * FROM cte WHERE id = $1", + 'WITH cte AS (SELECT * FROM users WHERE status = ?) SELECT * FROM cte WHERE id = $1', + ], + [ + 'SELECT COUNT(*), SUM(amount), AVG(price) FROM orders WHERE status = $1', + 'SELECT COUNT(*), SUM(amount), AVG(price) FROM orders WHERE status = $1', + ], + [ + 'SELECT status, COUNT(*) FROM orders GROUP BY status HAVING COUNT(*) > 10', + 'SELECT status, COUNT(*) FROM orders GROUP BY status HAVING COUNT(*) > ?', + ], + [ + 'SELECT ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_at) FROM orders', + 'SELECT ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_at) FROM orders', + ], + ])('handles edge case: %p', (input, expected) => { + expect(_sanitizeSqlQuery(input)).toBe(expected); + }); + }); + + describe('regression tests', () => { + it('does not replace $n with ? 
(OTEL compliance)', () => { + const result = _sanitizeSqlQuery('SELECT * FROM users WHERE id = $1'); + expect(result).not.toContain('?'); + expect(result).toBe('SELECT * FROM users WHERE id = $1'); + }); + + it('does not split decimal numbers into ?.?', () => { + const result = _sanitizeSqlQuery('SELECT * FROM t WHERE price = 19.99'); + expect(result).not.toBe('SELECT * FROM t WHERE price = ?.?'); + expect(result).toBe('SELECT * FROM t WHERE price = ?'); + }); + + it('does not leave minus sign when sanitizing negative numbers', () => { + const result = _sanitizeSqlQuery('SELECT * FROM t WHERE val = -500'); + expect(result).not.toBe('SELECT * FROM t WHERE val = -?'); + expect(result).toBe('SELECT * FROM t WHERE val = ?'); + }); + + it('handles exact queries from integration tests', () => { + expect( + _sanitizeSqlQuery( + 'CREATE TABLE "User" ("id" SERIAL NOT NULL,"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,"email" TEXT NOT NULL,"name" TEXT,CONSTRAINT "User_pkey" PRIMARY KEY ("id"))', + ), + ).toBe( + 'CREATE TABLE "User" ("id" SERIAL NOT NULL,"createdAt" TIMESTAMP(?) NOT NULL DEFAULT CURRENT_TIMESTAMP,"email" TEXT NOT NULL,"name" TEXT,CONSTRAINT "User_pkey" PRIMARY KEY ("id"))', + ); + expect(_sanitizeSqlQuery('SELECT * from generate_series(1,1000) as x')).toBe( + 'SELECT * from generate_series(?,?) 
as x', + ); + }); + }); + }); + + describe('instrumentPostgresJsSql', () => { + it('returns non-function values unchanged', () => { + expect(instrumentPostgresJsSql(null as any)).toBe(null); + expect(instrumentPostgresJsSql(undefined as any)).toBe(undefined); + expect(instrumentPostgresJsSql(42 as any)).toBe(42); + expect(instrumentPostgresJsSql('string' as any)).toBe('string'); + }); + + it('wraps sql function and intercepts tagged template calls', () => { + const mockQuery = { handle: vi.fn(), strings: ['SELECT * FROM users WHERE id = ', ''] }; + const mockSql = vi.fn().mockReturnValue(mockQuery); + + const instrumented = instrumentPostgresJsSql(mockSql); + expect(instrumented).not.toBe(mockSql); + expect(typeof instrumented).toBe('function'); + + // Invoke the instrumented function + const result = instrumented(['SELECT * FROM users WHERE id = ', ''], 1); + expect(mockSql).toHaveBeenCalledWith(['SELECT * FROM users WHERE id = ', ''], 1); + expect(result).toBe(mockQuery); + // The handle should have been wrapped + expect((mockQuery.handle as any).__sentryWrapped).toBe(true); + }); + + it('wraps unsafe method', () => { + const mockQuery = { handle: vi.fn(), strings: undefined }; + const mockSql = vi.fn(); + mockSql.unsafe = vi.fn().mockReturnValue(mockQuery); + + const instrumented = instrumentPostgresJsSql(mockSql as any); + const result = instrumented.unsafe('SELECT 1'); + expect(mockSql.unsafe).toHaveBeenCalledWith('SELECT 1'); + expect(result).toBe(mockQuery); + expect((mockQuery.handle as any).__sentryWrapped).toBe(true); + }); + + it('wraps file method', () => { + const mockQuery = { handle: vi.fn(), strings: undefined }; + const mockSql = vi.fn(); + mockSql.file = vi.fn().mockReturnValue(mockQuery); + + const instrumented = instrumentPostgresJsSql(mockSql as any); + const result = instrumented.file('test.sql'); + expect(mockSql.file).toHaveBeenCalledWith('test.sql'); + expect(result).toBe(mockQuery); + expect((mockQuery.handle as 
any).__sentryWrapped).toBe(true); + }); + + it('wraps begin method with callback', () => { + const mockSql = vi.fn(); + const innerSql = vi.fn(); + mockSql.begin = vi.fn().mockImplementation((cb: (sql: unknown) => unknown) => { + return cb(innerSql); + }); + + const instrumented = instrumentPostgresJsSql(mockSql as any); + let receivedSql: unknown; + instrumented.begin((sql: unknown) => { + receivedSql = sql; + return 'result'; + }); + + // The callback should receive an instrumented sql instance (a proxy, not the raw innerSql) + expect(receivedSql).not.toBe(innerSql); + expect(typeof receivedSql).toBe('function'); + }); + + it('wraps reserve method with promise', async () => { + const innerSql = vi.fn(); + const mockSql = vi.fn(); + mockSql.reserve = vi.fn().mockResolvedValue(innerSql); + + const instrumented = instrumentPostgresJsSql(mockSql as any); + const result = await instrumented.reserve(); + + // The resolved instance should be instrumented + expect(result).not.toBe(innerSql); + expect(typeof result).toBe('function'); + }); + + it('prevents double-instrumentation via Symbol marker', () => { + const mockSql = vi.fn(); + const instrumented1 = instrumentPostgresJsSql(mockSql); + const instrumented2 = instrumentPostgresJsSql(instrumented1); + + // Should return the same proxy, not double-wrap + expect(instrumented2).toBe(instrumented1); + }); + + it('extracts connection context from sql.options', () => { + const mockQuery = { handle: vi.fn(), strings: ['SELECT 1'] }; + const mockSql = vi.fn().mockReturnValue(mockQuery); + mockSql.options = { + host: ['db.example.com'], + port: [5433], + database: 'testdb', + }; + + const instrumented = instrumentPostgresJsSql(mockSql as any); + + // We can't access the connection context directly via a new Symbol, + // but we can verify the proxy was created + expect(instrumented).not.toBe(mockSql); + }); + + describe('span creation', () => { + beforeEach(() => { + // By default, mock getActiveSpan to return undefined (no 
parent) + vi.spyOn(spanUtils, 'getActiveSpan').mockReturnValue(undefined); + }); + + afterEach(() => { + vi.restoreAllMocks(); + }); + + it('skips span creation when requireParentSpan is true (default) and no parent span', async () => { + const originalHandle = vi.fn().mockResolvedValue([]); + const mockQuery = { + handle: originalHandle, + strings: ['SELECT * FROM users'], + resolve: vi.fn(), + reject: vi.fn(), + }; + const mockSql = vi.fn().mockReturnValue(mockQuery); + + const instrumented = instrumentPostgresJsSql(mockSql); + instrumented(['SELECT * FROM users']); + + // handle is wrapped but when called, since there's no parent span, + // it should delegate to the original + const wrappedHandle = mockQuery.handle as (...args: unknown[]) => Promise<unknown>; + await wrappedHandle.call(mockQuery); + + // The original handle should have been called directly (no span creation) + expect(originalHandle).toHaveBeenCalled(); + }); + + it('creates spans when requireParentSpan is false', async () => { + const handleFn = vi.fn().mockResolvedValue([]); + const mockQuery = { + handle: handleFn, + strings: ['SELECT * FROM users'], + resolve: vi.fn(), + reject: vi.fn(), + }; + const mockSql = vi.fn().mockReturnValue(mockQuery); + + const instrumented = instrumentPostgresJsSql(mockSql, { requireParentSpan: false }); + instrumented(['SELECT * FROM users']); + + // handle was wrapped + expect((mockQuery.handle as any).__sentryWrapped).toBe(true); + }); + }); + + it('does not wrap non-query results from sql call', () => { + const nonQueryResult = { notAQuery: true }; + const mockSql = vi.fn().mockReturnValue(nonQueryResult); + + const instrumented = instrumentPostgresJsSql(mockSql); + const result = instrumented(); + + // Should pass through without trying to wrap + expect(result).toBe(nonQueryResult); + }); + + it('passes through non-function properties', () => { + const mockSql = vi.fn(); + (mockSql as any).someProperty = 'value'; + (mockSql as any).someNumber = 42; + + const
instrumented = instrumentPostgresJsSql(mockSql as any); + expect(instrumented.someProperty).toBe('value'); + expect(instrumented.someNumber).toBe(42); + }); + + it('handles requestHook errors gracefully', () => { + const handleFn = vi.fn().mockResolvedValue([]); + const mockQuery = { + handle: handleFn, + strings: ['SELECT 1'], + resolve: vi.fn(), + reject: vi.fn(), + }; + const mockSql = vi.fn().mockReturnValue(mockQuery); + + const badHook = vi.fn().mockImplementation(() => { + throw new Error('hook error'); + }); + + // Should not throw + const instrumented = instrumentPostgresJsSql(mockSql, { requestHook: badHook }); + instrumented(['SELECT 1']); + + // The handle was wrapped despite the bad hook + expect((mockQuery.handle as any).__sentryWrapped).toBe(true); + }); + }); +}); diff --git a/packages/core/test/lib/tracing/ai-message-truncation.test.ts b/packages/core/test/lib/tracing/ai-message-truncation.test.ts index 8a8cefaffa5b..c7f8e0043622 100644 --- a/packages/core/test/lib/tracing/ai-message-truncation.test.ts +++ b/packages/core/test/lib/tracing/ai-message-truncation.test.ts @@ -13,7 +13,7 @@ describe('message truncation utilities', () => { it('strips inline media from messages', () => { const b64 = Buffer.from('lots of data\n').toString('base64'); - const removed = '[Filtered]'; + const removed = '[Blob substitute]'; const messages = [ { role: 'user', @@ -148,6 +148,265 @@ describe('message truncation utilities', () => { ]); }); + it('strips OpenAI vision format with nested image_url object', () => { + const b64 = + 'iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAQBf9AoL/k2KLAAAAABJRU5ErkJggg=='; + const removed = '[Blob substitute]'; + + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'What is in this image?' 
}, + { + type: 'image_url', + image_url: { + url: `data:image/png;base64,${b64}`, + }, + }, + ], + }, + ]; + + const messagesJson = JSON.stringify(messages, null, 2); + const result = truncateGenAiMessages(messages); + + // original messages must not be mutated + expect(JSON.stringify(messages, null, 2)).toBe(messagesJson); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'What is in this image?' }, + { + type: 'image_url', + image_url: { + url: removed, + }, + }, + ], + }, + ]); + + // Validate no raw base64 leaks + const serialized = JSON.stringify(result); + expect(serialized).not.toMatch(/[A-Za-z0-9+/]{100,}={0,2}/); + expect(serialized).toContain('[Blob substitute]'); + }); + + it('does not redact image_url with regular URL (non-data: scheme)', () => { + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'What is in this image?' }, + { + type: 'image_url', + image_url: { + url: 'https://example.com/image.png', + }, + }, + ], + }, + ]; + + const result = truncateGenAiMessages(messages); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'What is in this image?' 
}, + { + type: 'image_url', + image_url: { + url: 'https://example.com/image.png', + }, + }, + ], + }, + ]); + }); + + it('strips multiple image parts in a single message', () => { + const b64 = + 'iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAQBf9AoL/k2KLAAAAABJRU5ErkJggg=='; + const removed = '[Blob substitute]'; + + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'Compare these images' }, + { + type: 'image_url', + image_url: { url: `data:image/png;base64,${b64}` }, + }, + { + type: 'image_url', + image_url: { url: `data:image/jpeg;base64,${b64}` }, + }, + { + type: 'image_url', + image_url: { url: 'https://example.com/safe.png' }, + }, + ], + }, + ]; + + const result = truncateGenAiMessages(messages); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'Compare these images' }, + { + type: 'image_url', + image_url: { url: removed }, + }, + { + type: 'image_url', + image_url: { url: removed }, + }, + { + type: 'image_url', + image_url: { url: 'https://example.com/safe.png' }, + }, + ], + }, + ]); + }); + + it('strips input_audio data from messages', () => { + const b64Audio = Buffer.from('fake audio data for testing').toString('base64'); + const removed = '[Blob substitute]'; + + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'What does this audio say?' }, + { + type: 'input_audio', + input_audio: { + data: b64Audio, + format: 'wav', + }, + }, + ], + }, + ]; + + const messagesJson = JSON.stringify(messages, null, 2); + const result = truncateGenAiMessages(messages); + + expect(JSON.stringify(messages, null, 2)).toBe(messagesJson); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'What does this audio say?' 
}, + { + type: 'input_audio', + input_audio: { + data: removed, + format: 'wav', + }, + }, + ], + }, + ]); + + const serialized = JSON.stringify(result); + expect(serialized).not.toContain(b64Audio); + expect(serialized).toContain(removed); + }); + + it('strips file_data from file content parts', () => { + const b64File = Buffer.from('fake file content for testing').toString('base64'); + const removed = '[Blob substitute]'; + + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'Summarize this document' }, + { + type: 'file', + file: { + file_data: b64File, + filename: 'document.pdf', + }, + }, + ], + }, + ]; + + const messagesJson = JSON.stringify(messages, null, 2); + const result = truncateGenAiMessages(messages); + + expect(JSON.stringify(messages, null, 2)).toBe(messagesJson); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'Summarize this document' }, + { + type: 'file', + file: { + file_data: removed, + filename: 'document.pdf', + }, + }, + ], + }, + ]); + + const serialized = JSON.stringify(result); + expect(serialized).not.toContain(b64File); + expect(serialized).toContain(removed); + }); + + it('does not redact file parts that only have file_id (no inline data)', () => { + const messages = [ + { + role: 'user', + content: [ + { type: 'text', text: 'Summarize this document' }, + { + type: 'file', + file: { + file_id: 'file-abc123', + filename: 'document.pdf', + }, + }, + ], + }, + ]; + + const result = truncateGenAiMessages(messages); + + expect(result).toStrictEqual([ + { + role: 'user', + content: [ + { type: 'text', text: 'Summarize this document' }, + { + type: 'file', + file: { + file_id: 'file-abc123', + filename: 'document.pdf', + }, + }, + ], + }, + ]); + }); + const humongous = 'this is a long string '.repeat(10_000); const giant = 'this is a long string '.repeat(1_000); const big = 'this is a long string '.repeat(100); @@ -169,7 +428,7 @@ describe('message truncation utilities', 
() => { it('keeps only the last message with truncation when it does not fit the limit', () => { const messages = [{ content: `1 ${humongous}` }, { content: `2 ${humongous}` }, { content: `3 ${humongous}` }]; const result = truncateGenAiMessages(messages); - const truncLen = 20_000 - JSON.stringify({ content: '' }).length; + const truncLen = 20_000 - 2 - JSON.stringify({ content: '' }).length; expect(result).toStrictEqual([{ content: `3 ${humongous}`.substring(0, truncLen) }]); }); @@ -191,7 +450,7 @@ describe('message truncation utilities', () => { it('truncates if the message content string will not fit', () => { const messages = [{ content: `2 ${humongous}` }]; const result = truncateGenAiMessages(messages); - const truncLen = 20_000 - JSON.stringify({ content: '' }).length; + const truncLen = 20_000 - 2 - JSON.stringify({ content: '' }).length; expect(result).toStrictEqual([{ content: `2 ${humongous}`.substring(0, truncLen) }]); }); @@ -229,6 +488,7 @@ describe('message truncation utilities', () => { // case that seems unlikely in normal usage. 
const truncLen = 20_000 - + 2 - JSON.stringify({ parts: ['', { some_other_field: 'no text here', text: '' }], }).length; @@ -249,6 +509,7 @@ describe('message truncation utilities', () => { const result = truncateGenAiMessages(messages); const truncLen = 20_000 - + 2 - JSON.stringify({ parts: [{ text: '' }], }).length; diff --git a/packages/core/test/lib/tracing/langchain-utils.test.ts b/packages/core/test/lib/tracing/langchain-utils.test.ts new file mode 100644 index 000000000000..98724c8902d4 --- /dev/null +++ b/packages/core/test/lib/tracing/langchain-utils.test.ts @@ -0,0 +1,248 @@ +import { describe, expect, it } from 'vitest'; +import { GEN_AI_INPUT_MESSAGES_ATTRIBUTE } from '../../../src/tracing/ai/gen-ai-attributes'; +import type { LangChainMessage } from '../../../src/tracing/langchain/types'; +import { extractChatModelRequestAttributes, normalizeLangChainMessages } from '../../../src/tracing/langchain/utils'; + +describe('normalizeLangChainMessages', () => { + it('normalizes messages with _getType()', () => { + const messages = [ + { + _getType: () => 'human', + content: 'Hello', + }, + { + _getType: () => 'ai', + content: 'Hi there!', + }, + ] as unknown as LangChainMessage[]; + + const result = normalizeLangChainMessages(messages); + expect(result).toEqual([ + { role: 'user', content: 'Hello' }, + { role: 'assistant', content: 'Hi there!' }, + ]); + }); + + it('normalizes messages with type property', () => { + const messages: LangChainMessage[] = [ + { type: 'human', content: 'Hello' }, + { type: 'ai', content: 'Hi!' }, + ]; + + const result = normalizeLangChainMessages(messages); + expect(result).toEqual([ + { role: 'user', content: 'Hello' }, + { role: 'assistant', content: 'Hi!' }, + ]); + }); + + it('normalizes messages with role property', () => { + const messages: LangChainMessage[] = [ + { role: 'user', content: 'Hello' }, + { role: 'assistant', content: 'Hi!' 
}, + ]; + + const result = normalizeLangChainMessages(messages); + expect(result).toEqual([ + { role: 'user', content: 'Hello' }, + { role: 'assistant', content: 'Hi!' }, + ]); + }); + + it('normalizes serialized LangChain format', () => { + const messages: LangChainMessage[] = [ + { + lc: 1, + id: ['langchain_core', 'messages', 'HumanMessage'], + kwargs: { content: 'Hello from serialized' }, + }, + ]; + + const result = normalizeLangChainMessages(messages); + expect(result).toEqual([{ role: 'user', content: 'Hello from serialized' }]); + }); + + describe('multimodal content media stripping', () => { + const b64Data = `iVBORw0KGgoAAAANSUhEUgAAAAUA${'A'.repeat(200)}`; + const BLOB_SUBSTITUTE = '[Blob substitute]'; + + it('strips base64 image_url from multimodal array content via _getType()', () => { + const messages = [ + { + _getType: () => 'human', + content: [ + { type: 'text', text: 'What color is in this image?' }, + { type: 'image_url', image_url: { url: `data:image/png;base64,${b64Data}` } }, + ], + }, + ] as unknown as LangChainMessage[]; + + const result = normalizeLangChainMessages(messages); + expect(result).toHaveLength(1); + expect(result[0]!.role).toBe('user'); + + const parsed = JSON.parse(result[0]!.content); + expect(parsed).toHaveLength(2); + expect(parsed[0]).toEqual({ type: 'text', text: 'What color is in this image?' 
}); + expect(parsed[1].image_url.url).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('strips base64 data from Anthropic-style source blocks', () => { + const messages = [ + { + _getType: () => 'human', + content: [ + { type: 'text', text: 'Describe this image' }, + { + type: 'image', + source: { + type: 'base64', + media_type: 'image/png', + data: b64Data, + }, + }, + ], + }, + ] as unknown as LangChainMessage[]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].source.data).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('strips base64 from inline_data (Google GenAI style)', () => { + const messages: LangChainMessage[] = [ + { + type: 'human', + content: [ + { type: 'text', text: 'Describe' }, + { inlineData: { mimeType: 'image/png', data: b64Data } }, + ] as unknown as string, + }, + ]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].inlineData.data).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('strips base64 from input_audio content parts', () => { + const messages = [ + { + _getType: () => 'human', + content: [ + { type: 'text', text: 'What do you hear?' 
}, + { type: 'input_audio', input_audio: { data: b64Data } }, + ], + }, + ] as unknown as LangChainMessage[]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].input_audio.data).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('preserves text-only array content without modification', () => { + const messages = [ + { + _getType: () => 'human', + content: [ + { type: 'text', text: 'First part' }, + { type: 'text', text: 'Second part' }, + ], + }, + ] as unknown as LangChainMessage[]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed).toEqual([ + { type: 'text', text: 'First part' }, + { type: 'text', text: 'Second part' }, + ]); + }); + + it('strips media from serialized LangChain format with array content', () => { + const messages: LangChainMessage[] = [ + { + lc: 1, + id: ['langchain_core', 'messages', 'HumanMessage'], + kwargs: { + content: [ + { type: 'text', text: 'Describe this' }, + { type: 'image_url', image_url: { url: `data:image/png;base64,${b64Data}` } }, + ] as unknown as string, + }, + }, + ]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].image_url.url).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('strips media from messages with role property and array content', () => { + const messages: LangChainMessage[] = [ + { + role: 'user', + content: [ + { type: 'text', text: 'Look at this' }, + { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${b64Data}` } }, + ] as unknown as string, + }, + ]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].image_url.url).toBe(BLOB_SUBSTITUTE); + expect(result[0]!.content).not.toContain(b64Data); + }); + + it('strips media from 
messages with type property and array content', () => { + const messages: LangChainMessage[] = [ + { + type: 'human', + content: [ + { type: 'text', text: 'Check this' }, + { type: 'image_url', image_url: { url: `data:image/png;base64,${b64Data}` } }, + ] as unknown as string, + }, + ]; + + const result = normalizeLangChainMessages(messages); + const parsed = JSON.parse(result[0]!.content); + expect(parsed[1].image_url.url).toBe(BLOB_SUBSTITUTE); + }); + }); +}); + +describe('extractChatModelRequestAttributes with multimodal content', () => { + const b64Data = `iVBORw0KGgoAAAANSUhEUgAAAAUA${'A'.repeat(200)}`; + + it('strips base64 from input messages attribute', () => { + const serialized = { id: ['langchain', 'chat_models', 'openai'], name: 'ChatOpenAI' }; + const messages: LangChainMessage[][] = [ + [ + { + _getType: () => 'human', + content: [ + { type: 'text', text: 'What is in this image?' }, + { type: 'image_url', image_url: { url: `data:image/png;base64,${b64Data}` } }, + ], + } as unknown as LangChainMessage, + ], + ]; + + const attrs = extractChatModelRequestAttributes(serialized, messages, true); + const inputMessages = attrs[GEN_AI_INPUT_MESSAGES_ATTRIBUTE] as string | undefined; + + expect(inputMessages).toBeDefined(); + expect(inputMessages).not.toContain(b64Data); + expect(inputMessages).toContain('[Blob substitute]'); + expect(inputMessages).toContain('What is in this image?'); + }); +}); diff --git a/packages/core/test/lib/utils/anthropic-utils.test.ts b/packages/core/test/lib/utils/anthropic-utils.test.ts index 91a311cc574b..797bb9bc8186 100644 --- a/packages/core/test/lib/utils/anthropic-utils.test.ts +++ b/packages/core/test/lib/utils/anthropic-utils.test.ts @@ -84,7 +84,7 @@ describe('anthropic-ai-utils', () => { it('sets length along with truncated value', () => { const content = 'A'.repeat(200_000); setMessagesAttribute(span, [{ role: 'user', content }]); - const result = [{ role: 'user', content: 'A'.repeat(19972) }]; + const result = [{ 
role: 'user', content: 'A'.repeat(19970) }]; expect(mock.attributes).toStrictEqual({ 'sentry.sdk_meta.gen_ai.input.messages.original_length': 1, 'gen_ai.input.messages': JSON.stringify(result), diff --git a/packages/deno/src/index.ts b/packages/deno/src/index.ts index 91c1ea77f8e7..e6fdde530c81 100644 --- a/packages/deno/src/index.ts +++ b/packages/deno/src/index.ts @@ -78,6 +78,7 @@ export { rewriteFramesIntegration, supabaseIntegration, instrumentSupabaseClient, + instrumentPostgresJsSql, zodErrorsIntegration, SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, diff --git a/packages/nestjs/package.json b/packages/nestjs/package.json index 33e0eb5f8d4b..c4a9168a9994 100644 --- a/packages/nestjs/package.json +++ b/packages/nestjs/package.json @@ -56,7 +56,7 @@ "@nestjs/common": "^10.0.0", "@nestjs/core": "^10.0.0", "reflect-metadata": "^0.2.2", - "rxjs": "^7.8.1" + "rxjs": "^7.8.2" }, "peerDependencies": { "@nestjs/common": "^8.0.0 || ^9.0.0 || ^10.0.0 || ^11.0.0", diff --git a/packages/nestjs/src/helpers.ts b/packages/nestjs/src/helpers.ts index 34450aa3ac75..f5e9d853d4db 100644 --- a/packages/nestjs/src/helpers.ts +++ b/packages/nestjs/src/helpers.ts @@ -1,13 +1,34 @@ /** - * Determines if the exception is an expected control flow error. - * - HttpException errors will have a status property - * - RpcException errors will have an error property + * Determines if the exception is an expected NestJS control flow error. 
+ * - HttpException errors have getStatus, getResponse, and initMessage methods: https://github.com/nestjs/nest/blob/master/packages/common/exceptions/http.exception.ts + * - RpcException errors have getError and initMessage methods: https://github.com/nestjs/nest/blob/master/packages/microservices/exceptions/rpc-exception.ts + * + * We cannot use `instanceof HttpException` here because this file is imported + * from the main entry point (via decorators.ts), and importing @nestjs/common at that + * point would load it before OpenTelemetry instrumentation can patch it, breaking the instrumentation. * * @returns `true` if the exception is expected and should not be reported to Sentry, otherwise `false`. */ export function isExpectedError(exception: unknown): boolean { - if (typeof exception === 'object' && exception !== null) { - return 'status' in exception || 'error' in exception; + if (typeof exception !== 'object' || exception === null) { + return false; + } + + const ex = exception as Record<string, unknown>; + + // HttpException + if ( + typeof ex.getStatus === 'function' && + typeof ex.getResponse === 'function' && + typeof ex.initMessage === 'function' + ) { + return true; + } + + // RpcException + if (typeof ex.getError === 'function' && typeof ex.initMessage === 'function') { + return true; } + return false; } diff --git a/packages/nestjs/test/helpers.test.ts b/packages/nestjs/test/helpers.test.ts new file mode 100644 index 000000000000..00645059e3e2 --- /dev/null +++ b/packages/nestjs/test/helpers.test.ts @@ -0,0 +1,25 @@ +import { HttpException, HttpStatus } from '@nestjs/common'; +import { describe, expect, it } from 'vitest'; +import { isExpectedError } from '../src/helpers'; + +describe('isExpectedError', () => { + it('should return true for HttpException', () => { + expect(isExpectedError(new HttpException('Bad Request', HttpStatus.BAD_REQUEST))).toBe(true); + }); + + it('should return true for RpcException-like objects', () => { + const rpcLike = { + getError: () => 'some error', + initMessage: ()
=> {}, + }; + expect(isExpectedError(rpcLike)).toBe(true); + }); + + it('should return false for plain Error', () => { + expect(isExpectedError(new Error('test'))).toBe(false); + }); + + it('should return false for object with status property', () => { + expect(isExpectedError({ status: 502, message: 'Bad Gateway' })).toBe(false); + }); +}); diff --git a/packages/nextjs/AGENTS.md b/packages/nextjs/AGENTS.md new file mode 100644 index 000000000000..e72d858e0c25 --- /dev/null +++ b/packages/nextjs/AGENTS.md @@ -0,0 +1,77 @@ +# Next.js SDK (`@sentry/nextjs`) + +## Bundler Architecture: Webpack vs Turbopack + +Next.js apps use **either webpack or turbopack** as their bundler. This fundamentally changes how Sentry instruments the application. + +- **Webpack** — default bundler through Next.js 15 +- **Turbopack** — default bundler starting Next.js 16 + +Detection: `process.env.TURBOPACK` or `--turbo` CLI flag (see `src/config/util.ts:detectActiveBundler`). + +### Webpack Path (Build-Time Wrapping) + +Webpack builds use **loaders and templates** to wrap user code at compile time: + +1. **Wrapping loader** (`src/config/loaders/wrappingLoader.ts`) identifies route handlers, API routes, pages, middleware, and server components by file path patterns +2. **Templates** (`src/config/templates/`) inject Sentry instrumentation around each export — using Rollup internally to expand `export *` statements +3. **Value injection loader** injects globals (`_sentryNextJsVersion`, route manifest, tunnel path, etc.) +4. **SentryWebpackPlugin** handles sourcemap upload and release management + +Template files and what they wrap: + +- `pageWrapperTemplate.ts` — Pages Router pages (`getInitialProps`, `getStaticProps`, `getServerSideProps`) +- `apiWrapperTemplate.ts` — Pages Router API routes +- `routeHandlerWrapperTemplate.ts` — App Router route handlers (GET, POST, etc.) 
+- `serverComponentWrapperTemplate.ts` — App Router server components, `generateMetadata`, `generateViewport` +- `middlewareWrapperTemplate.ts` — Edge middleware + +Config options that control wrapping: `autoInstrumentServerFunctions`, `autoInstrumentMiddleware`, `autoInstrumentAppDirectory`, `excludeServerRoutes`. + +### Turbopack Path (No Build-Time Wrapping) + +Turbopack does **NOT** use the wrapping loader or templates. There is no build-time function wrapping. + +What turbopack **does** support: + +- **Value injection** via Turbopack rules (`src/config/turbopack/`) — injects the same globals as webpack +- **Module metadata injection** (`moduleMetadataInjectionLoader.ts`) — enables `thirdPartyErrorFilterIntegration` (requires Next.js 16+ and `_experimental.turbopackApplicationKey`) +- **Native debug IDs** for sourcemaps (Next.js 15.6+) +- **`runAfterProductionCompile` hook** (enabled by default) for sourcemap upload + +What turbopack does **NOT** support: + +- Build-time wrapping of route handlers, API routes, pages, server components, or middleware +- `autoInstrumentServerFunctions`, `autoInstrumentMiddleware`, `autoInstrumentAppDirectory` — these are no-ops +- `excludeServerRoutes` — no-op since routes aren't wrapped +- React component name annotations +- SentryWebpackPlugin (no webpack = no webpack plugin) + +Instrumentation with turbopack relies on **Next.js's built-in telemetry/OpenTelemetry integration** and the `instrumentation.ts` hook rather than build-time code transformation. 
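The bundler detection mentioned above can be sketched roughly as follows — a hypothetical simplification for illustration only, not the actual `detectActiveBundler` implementation, which may consult additional signals:

```typescript
// Hedged sketch: mirrors the detection rule described above
// (process.env.TURBOPACK or a `--turbo` CLI flag selects Turbopack).
export function detectActiveBundlerSketch(
  env: Record<string, string | undefined>,
  argv: string[],
): 'turbopack' | 'webpack' {
  // Next.js sets TURBOPACK in the environment for turbopack builds;
  // `next dev --turbo` is the CLI opt-in.
  if (env.TURBOPACK || argv.includes('--turbo')) {
    return 'turbopack';
  }
  // Anything else falls back to the webpack path.
  return 'webpack';
}
```

Whichever branch is detected decides whether the loader/template pipeline (webpack) or the rules-only path (Turbopack) applies.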
+ +## Config Flow + +``` +withSentryConfig(nextConfig, sentryOptions) + → detect bundler (webpack or turbopack) + → webpack: constructWebpackConfigFunction() → loaders + plugins + → turbopack: constructTurbopackConfig() → value injection rules only + → set up runAfterProductionCompile hook (sourcemaps) +``` + +Entry point: `src/config/withSentryConfig/index.ts` +Routing logic: `src/config/withSentryConfig/getFinalConfigObject.ts` + +## Key Directories + +| Path | Purpose | +| ------------------------------ | ------------------------------------------------------------ | +| `src/config/webpack.ts` | Webpack-specific config (loaders, rules, plugins) | +| `src/config/turbopack/` | Turbopack-specific config (value injection rules) | +| `src/config/loaders/` | Webpack loaders (wrapping, value injection, module metadata) | +| `src/config/templates/` | Wrapper templates used by wrapping loader (webpack only) | +| `src/config/manifest/` | Route manifest generation for transaction grouping | +| `src/config/withSentryConfig/` | Main `withSentryConfig` entry point and bundler routing | +| `src/client/` | Client-side SDK (browser) | +| `src/server/` | Server-side SDK (Node.js) | +| `src/edge/` | Edge runtime SDK | diff --git a/packages/nextjs/src/config/loaders/index.ts b/packages/nextjs/src/config/loaders/index.ts index 322567c1495b..359d72d7def6 100644 --- a/packages/nextjs/src/config/loaders/index.ts +++ b/packages/nextjs/src/config/loaders/index.ts @@ -1,3 +1,4 @@ export { default as valueInjectionLoader } from './valueInjectionLoader'; export { default as prefixLoader } from './prefixLoader'; export { default as wrappingLoader } from './wrappingLoader'; +export { default as moduleMetadataInjectionLoader } from './moduleMetadataInjectionLoader'; diff --git a/packages/nextjs/src/config/loaders/moduleMetadataInjectionLoader.ts b/packages/nextjs/src/config/loaders/moduleMetadataInjectionLoader.ts new file mode 100644 index 000000000000..3bfe974b59fb --- /dev/null +++ 
b/packages/nextjs/src/config/loaders/moduleMetadataInjectionLoader.ts @@ -0,0 +1,41 @@ +import type { LoaderThis } from './types'; +import { SKIP_COMMENT_AND_DIRECTIVE_REGEX } from './valueInjectionLoader'; + +export type ModuleMetadataInjectionLoaderOptions = { + applicationKey: string; +}; + +/** + * Inject `_sentryModuleMetadata` into every module so that the + * `thirdPartyErrorFilterIntegration` can tell first-party code from + * third-party code. + * + * This is the Turbopack equivalent of what `@sentry/webpack-plugin` does + * via its `moduleMetadata` option. + * + * Options: + * - `applicationKey`: The application key used to tag first-party modules. + */ +export default function moduleMetadataInjectionLoader( + this: LoaderThis<ModuleMetadataInjectionLoaderOptions>, + userCode: string, +): string { + const { applicationKey } = 'getOptions' in this ? this.getOptions() : this.query; + + // We do not want to cache injected values across builds + this.cacheable(false); + + // The snippet mirrors what @sentry/webpack-plugin injects for moduleMetadata. + // We access _sentryModuleMetadata via globalThis (not as a bare variable) to avoid + // ReferenceError in strict mode. Each module is keyed by its Error stack trace so that + // the SDK can map filenames to metadata at runtime. + // Not putting any newlines in the generated code will decrease the likelihood of sourcemaps breaking.
+ const metadata = JSON.stringify({ [`_sentryBundlerPluginAppKey:${applicationKey}`]: true }); + const injectedCode = + ';globalThis._sentryModuleMetadata = globalThis._sentryModuleMetadata || {};' + + `globalThis._sentryModuleMetadata[(new Error).stack] = Object.assign({}, globalThis._sentryModuleMetadata[(new Error).stack], ${metadata});`; + + return userCode.replace(SKIP_COMMENT_AND_DIRECTIVE_REGEX, match => { + return match + injectedCode; + }); +} diff --git a/packages/nextjs/src/config/loaders/valueInjectionLoader.ts b/packages/nextjs/src/config/loaders/valueInjectionLoader.ts index c15413cd1444..3fe15a8e5872 100644 --- a/packages/nextjs/src/config/loaders/valueInjectionLoader.ts +++ b/packages/nextjs/src/config/loaders/valueInjectionLoader.ts @@ -10,7 +10,7 @@ export type ValueInjectionLoaderOptions = { // We need to be careful not to inject anything before any `"use strict";`s or "use client"s or really any other directive. // As an additional complication directives may come after any number of comments. // This regex is shamelessly stolen from: https://github.com/getsentry/sentry-javascript-bundler-plugins/blob/7f984482c73e4284e8b12a08dfedf23b5a82f0af/packages/bundler-plugin-core/src/index.ts#L535-L539 -const SKIP_COMMENT_AND_DIRECTIVE_REGEX = +export const SKIP_COMMENT_AND_DIRECTIVE_REGEX = // Note: CodeQL complains that this regex potentially has n^2 runtime. This likely won't affect realistic files. 
new RegExp('^(?:\\s*|/\\*(?:.|\\r|\\n)*?\\*/|//.*[\\n\\r])*(?:"[^"]*";?|\'[^\']*\';?)?'); diff --git a/packages/nextjs/src/config/turbopack/constructTurbopackConfig.ts b/packages/nextjs/src/config/turbopack/constructTurbopackConfig.ts index 3484a169d018..d8f70efbacf1 100644 --- a/packages/nextjs/src/config/turbopack/constructTurbopackConfig.ts +++ b/packages/nextjs/src/config/turbopack/constructTurbopackConfig.ts @@ -1,8 +1,9 @@ import { debug } from '@sentry/core'; +import * as path from 'path'; import type { VercelCronsConfig } from '../../common/types'; import type { RouteManifest } from '../manifest/types'; import type { NextConfigObject, SentryBuildOptions, TurbopackMatcherWithRule, TurbopackOptions } from '../types'; -import { supportsNativeDebugIds } from '../util'; +import { supportsNativeDebugIds, supportsTurbopackRuleCondition } from '../util'; import { generateValueInjectionRules } from './generateValueInjectionRules'; /** @@ -56,6 +57,28 @@ export function constructTurbopackConfig({ newConfig.rules = safelyAddTurbopackRule(newConfig.rules, { matcher, rule }); } + // Add module metadata injection loader for thirdPartyErrorFilterIntegration support. + // This is only added when turbopackApplicationKey is set AND the Next.js version supports the + // `condition` field in Turbopack rules (Next.js 16+). Without `condition: { not: 'foreign' }`, + // the loader would tag node_modules as first-party, defeating the purpose. 
+ const applicationKey = userSentryOptions?._experimental?.turbopackApplicationKey; + if (applicationKey && nextJsVersion && supportsTurbopackRuleCondition(nextJsVersion)) { + newConfig.rules = safelyAddTurbopackRule(newConfig.rules, { + matcher: '*.{ts,tsx,js,jsx,mjs,cjs}', + rule: { + condition: { not: 'foreign' }, + loaders: [ + { + loader: path.resolve(__dirname, '..', 'loaders', 'moduleMetadataInjectionLoader.js'), + options: { + applicationKey, + }, + }, + ], + }, + }); + } + return newConfig; } diff --git a/packages/nextjs/src/config/turbopack/generateValueInjectionRules.ts b/packages/nextjs/src/config/turbopack/generateValueInjectionRules.ts index 9dac16cc411f..8bca01cb2ce4 100644 --- a/packages/nextjs/src/config/turbopack/generateValueInjectionRules.ts +++ b/packages/nextjs/src/config/turbopack/generateValueInjectionRules.ts @@ -2,7 +2,7 @@ import * as path from 'path'; import type { VercelCronsConfig } from '../../common/types'; import type { RouteManifest } from '../manifest/types'; import type { JSONValue, TurbopackMatcherWithRule } from '../types'; -import { getPackageModules } from '../util'; +import { getPackageModules, supportsTurbopackRuleCondition } from '../util'; /** * Generate the value injection rules for client and server in turbopack config. @@ -50,11 +50,16 @@ export function generateValueInjectionRules({ serverValues = { ...serverValues, ...isomorphicValues }; } + const hasConditionSupport = nextJsVersion ? supportsTurbopackRuleCondition(nextJsVersion) : false; + // Client value injection if (Object.keys(clientValues).length > 0) { rules.push({ matcher: '**/instrumentation-client.*', rule: { + // Only run on user code, not node_modules or Next.js internals + // condition field is only supported in Next.js 16+ + ...(hasConditionSupport ? 
{ condition: { not: 'foreign' } } : {}), loaders: [ { loader: path.resolve(__dirname, '..', 'loaders', 'valueInjectionLoader.js'), @@ -72,6 +77,9 @@ export function generateValueInjectionRules({ rules.push({ matcher: '**/instrumentation.*', rule: { + // Only run on user code, not node_modules or Next.js internals + // condition field is only supported in Next.js 16+ + ...(hasConditionSupport ? { condition: { not: 'foreign' } } : {}), loaders: [ { loader: path.resolve(__dirname, '..', 'loaders', 'valueInjectionLoader.js'), diff --git a/packages/nextjs/src/config/types.ts b/packages/nextjs/src/config/types.ts index cc1183f13b71..233860fb1388 100644 --- a/packages/nextjs/src/config/types.ts +++ b/packages/nextjs/src/config/types.ts @@ -713,6 +713,17 @@ export type SentryBuildOptions = { * Requires cron jobs to be configured in `vercel.json`. */ vercelCronsMonitoring?: boolean; + /** + * Application key used by `thirdPartyErrorFilterIntegration` to distinguish + * first-party code from third-party code in Turbopack builds. + * + * When set, a Turbopack loader injects `_sentryModuleMetadata` into every + * first-party module, mirroring what `@sentry/webpack-plugin` does for + * webpack builds via its `moduleMetadata` / `applicationKey` option. + * + * Requires Next.js 16+ + */ + turbopackApplicationKey?: string; }>; /** @@ -849,6 +860,17 @@ type TurbopackRuleCondition = { path: string | RegExp; }; +// Condition used to filter when a loader rule applies. +// Supports built-in string conditions ('foreign', 'browser', 'development', 'production', 'node', 'edge-light') +// and boolean operators matching the Turbopack advanced condition syntax. 
+type TurbopackRuleConditionFilter = + | string + | { not: TurbopackRuleConditionFilter } + | { all: TurbopackRuleConditionFilter[] } + | { any: TurbopackRuleConditionFilter[] } + | { path: string | RegExp } + | { content: RegExp }; + export type TurbopackRuleConfigItemOrShortcut = TurbopackLoaderItem[] | TurbopackRuleConfigItem; export type TurbopackMatcherWithRule = { @@ -859,6 +881,7 @@ export type TurbopackMatcherWithRule = { type TurbopackRuleConfigItemOptions = { loaders: TurbopackLoaderItem[]; as?: string; + condition?: TurbopackRuleConditionFilter; }; type TurbopackRuleConfigItem = diff --git a/packages/nextjs/src/config/util.ts b/packages/nextjs/src/config/util.ts index d23efc480b76..4ec25dbf0e94 100644 --- a/packages/nextjs/src/config/util.ts +++ b/packages/nextjs/src/config/util.ts @@ -67,6 +67,27 @@ export function supportsProductionCompileHook(version: string): boolean { return false; } +/** + * Checks if the current Next.js version supports the `condition` field in Turbopack rules. + * This field was introduced in Next.js 16. + * + * @param version - version string to check. + * @returns true if Next.js version is 16 or higher + */ +export function supportsTurbopackRuleCondition(version: string): boolean { + if (!version) { + return false; + } + + const { major } = parseSemver(version); + + if (major === undefined) { + return false; + } + + return major >= 16; +} + /** * Checks if the current Next.js version supports native debug ids for turbopack. 
* This feature was first introduced in Next.js v15.6.0-canary.36 and marked stable in Next.js v16 diff --git a/packages/nextjs/test/config/getBuildPluginOptions.test.ts b/packages/nextjs/test/config/getBuildPluginOptions.test.ts index 329ca59eedc1..6eecd83905b8 100644 --- a/packages/nextjs/test/config/getBuildPluginOptions.test.ts +++ b/packages/nextjs/test/config/getBuildPluginOptions.test.ts @@ -902,6 +902,27 @@ describe('getBuildPluginOptions', () => { }); }); + describe('applicationKey is not forwarded to webpack plugin', () => { + it('does not include turbopackApplicationKey in webpack plugin options', () => { + const sentryBuildOptions: SentryBuildOptions = { + org: 'test-org', + project: 'test-project', + _experimental: { turbopackApplicationKey: 'my-app' }, + }; + + const result = getBuildPluginOptions({ + sentryBuildOptions, + releaseName: mockReleaseName, + distDirAbsPath: mockDistDirAbsPath, + buildTool: 'webpack-client', + }); + + // turbopackApplicationKey should only be used by the Turbopack loader, + // not forwarded to the webpack plugin + expect(result.applicationKey).toBeUndefined(); + }); + }); + describe('edge cases', () => { it('handles undefined release name gracefully', () => { const sentryBuildOptions: SentryBuildOptions = { diff --git a/packages/nextjs/test/config/moduleMetadataInjectionLoader.test.ts b/packages/nextjs/test/config/moduleMetadataInjectionLoader.test.ts new file mode 100644 index 000000000000..35e7ba4b692a --- /dev/null +++ b/packages/nextjs/test/config/moduleMetadataInjectionLoader.test.ts @@ -0,0 +1,129 @@ +import { describe, expect, it } from 'vitest'; +import type { ModuleMetadataInjectionLoaderOptions } from '../../src/config/loaders/moduleMetadataInjectionLoader'; +import moduleMetadataInjectionLoader from '../../src/config/loaders/moduleMetadataInjectionLoader'; +import type { LoaderThis } from '../../src/config/loaders/types'; + +function createLoaderThis( + applicationKey: string, + useGetOptions = true, +): 
LoaderThis<ModuleMetadataInjectionLoaderOptions> { + const base = { + addDependency: () => undefined, + async: () => undefined, + cacheable: () => undefined, + callback: () => undefined, + resourcePath: './app/page.tsx', + }; + + if (useGetOptions) { + return { ...base, getOptions: () => ({ applicationKey }) } as LoaderThis<ModuleMetadataInjectionLoaderOptions>; + } + + return { ...base, query: { applicationKey } } as LoaderThis<ModuleMetadataInjectionLoaderOptions>; +} + +describe('moduleMetadataInjectionLoader', () => { + it('should inject metadata snippet into simple code', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = "import * as Sentry from '@sentry/nextjs';\nSentry.init();"; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + expect(result).toContain('_sentryModuleMetadata'); + expect(result).toContain('_sentryBundlerPluginAppKey:my-app'); + expect(result).toContain('Object.assign'); + }); + + it('should inject after "use strict" directive', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = '"use strict";\nconsole.log("hello");'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + const metadataIndex = result.indexOf('_sentryModuleMetadata'); + const directiveIndex = result.indexOf('"use strict"'); + expect(metadataIndex).toBeGreaterThan(directiveIndex); + }); + + it('should inject after "use client" directive', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = '"use client";\nimport React from \'react\';'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + const metadataIndex = result.indexOf('_sentryModuleMetadata'); + const directiveIndex = result.indexOf('"use client"'); + expect(metadataIndex).toBeGreaterThan(directiveIndex); + }); + + it('should handle code with leading comments before directives', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = '// some comment\n"use client";\nimport React from \'react\';'; + + const result =
moduleMetadataInjectionLoader.call(loaderThis, userCode); + + expect(result).toContain('_sentryBundlerPluginAppKey:my-app'); + const metadataIndex = result.indexOf('_sentryModuleMetadata'); + const directiveIndex = result.indexOf('"use client"'); + expect(metadataIndex).toBeGreaterThan(directiveIndex); + }); + + it('should handle code with block comments before directives', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = '/* block comment */\n"use client";\nimport React from \'react\';'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + expect(result).toContain('_sentryBundlerPluginAppKey:my-app'); + }); + + it('should set cacheable to false', () => { + let cacheableValue: boolean | undefined; + const loaderThis = { + addDependency: () => undefined, + async: () => undefined, + cacheable: (flag: boolean) => { + cacheableValue = flag; + }, + callback: () => undefined, + resourcePath: './app/page.tsx', + getOptions: () => ({ applicationKey: 'my-app' }), + } as LoaderThis<ModuleMetadataInjectionLoaderOptions>; + + moduleMetadataInjectionLoader.call(loaderThis, 'const x = 1;'); + + expect(cacheableValue).toBe(false); + }); + + it('should work with webpack 4 query API', () => { + const loaderThis = createLoaderThis('my-app', false); + const userCode = 'const x = 1;'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + expect(result).toContain('_sentryBundlerPluginAppKey:my-app'); + }); + + it('should use globalThis and Object.assign merge pattern keyed by stack trace', () => { + const loaderThis = createLoaderThis('my-app'); + const userCode = 'const x = 1;'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + // Should use globalThis to avoid ReferenceError in strict mode + expect(result).toContain('globalThis._sentryModuleMetadata = globalThis._sentryModuleMetadata || {}'); + // Should key by stack trace like the webpack plugin does + 
expect(result).toContain('globalThis._sentryModuleMetadata[(new Error).stack]'); + // Should use Object.assign to merge metadata + expect(result).toContain('Object.assign({}'); + }); + + it('should contain the correct app key format in output', () => { + const loaderThis = createLoaderThis('test-key-123'); + const userCode = 'export default function Page() {}'; + + const result = moduleMetadataInjectionLoader.call(loaderThis, userCode); + + expect(result).toContain('"_sentryBundlerPluginAppKey:test-key-123":true'); + }); +}); diff --git a/packages/nextjs/test/config/turbopack/constructTurbopackConfig.test.ts b/packages/nextjs/test/config/turbopack/constructTurbopackConfig.test.ts index 7c576a5ef692..d1bf313d16f2 100644 --- a/packages/nextjs/test/config/turbopack/constructTurbopackConfig.test.ts +++ b/packages/nextjs/test/config/turbopack/constructTurbopackConfig.test.ts @@ -12,7 +12,13 @@ vi.mock('path', async () => { const actual = await vi.importActual('path'); return { ...actual, - resolve: vi.fn().mockReturnValue('/mocked/path/to/valueInjectionLoader.js'), + resolve: vi.fn().mockImplementation((...args: string[]) => { + const lastArg = args[args.length - 1]; + if (lastArg === 'moduleMetadataInjectionLoader.js') { + return '/mocked/path/to/moduleMetadataInjectionLoader.js'; + } + return '/mocked/path/to/valueInjectionLoader.js'; + }), }; }); @@ -874,6 +880,206 @@ describe('constructTurbopackConfig', () => { }); }); +describe('condition field version gating', () => { + it('should include condition field for Next.js 16+', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + nextJsVersion: '16.0.0', + }); + + const serverRule = result.rules!['**/instrumentation.*'] as { condition?: unknown; loaders: unknown[] }; + expect(serverRule.condition).toEqual({ not: 'foreign' }); + }); + + it('should include condition field for Next.js 17+', () => { + const userNextConfig: NextConfigObject = {}; + + const 
result = constructTurbopackConfig({ + userNextConfig, + routeManifest: { dynamicRoutes: [], staticRoutes: [], isrRoutes: [] }, + nextJsVersion: '17.0.0', + }); + + const clientRule = result.rules!['**/instrumentation-client.*'] as { condition?: unknown; loaders: unknown[] }; + const serverRule = result.rules!['**/instrumentation.*'] as { condition?: unknown; loaders: unknown[] }; + expect(clientRule.condition).toEqual({ not: 'foreign' }); + expect(serverRule.condition).toEqual({ not: 'foreign' }); + }); + + it('should not include condition field for Next.js 15.x', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + nextJsVersion: '15.4.1', + }); + + const serverRule = result.rules!['**/instrumentation.*'] as { condition?: unknown; loaders: unknown[] }; + expect(serverRule).not.toHaveProperty('condition'); + }); + + it('should not include condition field for Next.js 14.x', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + nextJsVersion: '14.2.0', + }); + + const serverRule = result.rules!['**/instrumentation.*'] as { condition?: unknown; loaders: unknown[] }; + expect(serverRule).not.toHaveProperty('condition'); + }); + + it('should not include condition field when nextJsVersion is undefined', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ userNextConfig }); + + const serverRule = result.rules!['**/instrumentation.*'] as { condition?: unknown; loaders: unknown[] }; + expect(serverRule).not.toHaveProperty('condition'); + }); +}); + +describe('moduleMetadataInjection with applicationKey', () => { + it('should add metadata loader rule when applicationKey is set and Next.js >= 16', () => { + const pathResolveSpy = vi.spyOn(path, 'resolve'); + pathResolveSpy.mockImplementation((...args: string[]) => { + const lastArg = args[args.length - 1]; + if (lastArg === 
'moduleMetadataInjectionLoader.js') { + return '/mocked/path/to/moduleMetadataInjectionLoader.js'; + } + return '/mocked/path/to/valueInjectionLoader.js'; + }); + + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'my-app' } }, + nextJsVersion: '16.0.0', + }); + + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toEqual({ + condition: { not: 'foreign' }, + loaders: [ + { + loader: '/mocked/path/to/moduleMetadataInjectionLoader.js', + options: { + applicationKey: 'my-app', + }, + }, + ], + }); + }); + + it('should NOT add metadata loader rule when Next.js < 16', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'my-app' } }, + nextJsVersion: '15.4.1', + }); + + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toBeUndefined(); + }); + + it('should NOT add metadata loader rule when applicationKey is not set', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: {}, + nextJsVersion: '16.0.0', + }); + + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toBeUndefined(); + }); + + it('should NOT add metadata loader rule when nextJsVersion is undefined', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'my-app' } }, + nextJsVersion: undefined, + }); + + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toBeUndefined(); + }); + + it('should pass applicationKey through to loader options correctly', () => { + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'custom-key-123' } 
}, + nextJsVersion: '16.0.0', + }); + + const rule = result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}'] as { + condition: unknown; + loaders: Array<{ loader: string; options: { applicationKey: string } }>; + }; + expect(rule.loaders[0]!.options.applicationKey).toBe('custom-key-123'); + }); + + it('should coexist with existing value injection rules', () => { + const userNextConfig: NextConfigObject = {}; + const mockRouteManifest: RouteManifest = { + dynamicRoutes: [], + staticRoutes: [{ path: '/', regex: '/' }], + isrRoutes: [], + }; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'my-app' } }, + routeManifest: mockRouteManifest, + nextJsVersion: '16.0.0', + }); + + // Value injection rules should still be present + expect(result.rules!['**/instrumentation-client.*']).toBeDefined(); + expect(result.rules!['**/instrumentation.*']).toBeDefined(); + // Metadata loader rule should also be present + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toBeDefined(); + }); + + it('should add metadata loader rule for Next.js 17+', () => { + const pathResolveSpy = vi.spyOn(path, 'resolve'); + pathResolveSpy.mockImplementation((...args: string[]) => { + const lastArg = args[args.length - 1]; + if (lastArg === 'moduleMetadataInjectionLoader.js') { + return '/mocked/path/to/moduleMetadataInjectionLoader.js'; + } + return '/mocked/path/to/valueInjectionLoader.js'; + }); + + const userNextConfig: NextConfigObject = {}; + + const result = constructTurbopackConfig({ + userNextConfig, + userSentryOptions: { _experimental: { turbopackApplicationKey: 'my-app' } }, + nextJsVersion: '17.0.0', + }); + + expect(result.rules!['*.{ts,tsx,js,jsx,mjs,cjs}']).toEqual({ + condition: { not: 'foreign' }, + loaders: [ + { + loader: '/mocked/path/to/moduleMetadataInjectionLoader.js', + options: { + applicationKey: 'my-app', + }, + }, + ], + }); + }); +}); + describe('safelyAddTurbopackRule', () => { const mockRule = { loaders: 
[ diff --git a/packages/nextjs/test/config/util.test.ts b/packages/nextjs/test/config/util.test.ts index 7335139b5037..01b5e59af28e 100644 --- a/packages/nextjs/test/config/util.test.ts +++ b/packages/nextjs/test/config/util.test.ts @@ -334,4 +334,39 @@ describe('util', () => { expect(util.detectActiveBundler()).toBe('webpack'); }); }); + + describe('supportsTurbopackRuleCondition', () => { + describe('supported versions (returns true)', () => { + it.each([ + ['16.0.0', 'Next.js 16.0.0'], + ['16.1.0', 'Next.js 16.1.0'], + ['17.0.0', 'Next.js 17.0.0'], + ['20.0.0', 'Next.js 20.0.0'], + ])('returns true for %s (%s)', version => { + expect(util.supportsTurbopackRuleCondition(version)).toBe(true); + }); + }); + + describe('unsupported versions (returns false)', () => { + it.each([ + ['15.9.9', 'Next.js 15.9.9'], + ['15.4.1', 'Next.js 15.4.1 (min Turbopack version)'], + ['15.0.0', 'Next.js 15.0.0'], + ['14.2.0', 'Next.js 14.2.0'], + ['13.0.0', 'Next.js 13.0.0'], + ])('returns false for %s (%s)', version => { + expect(util.supportsTurbopackRuleCondition(version)).toBe(false); + }); + }); + + describe('edge cases', () => { + it('returns false for empty string', () => { + expect(util.supportsTurbopackRuleCondition('')).toBe(false); + }); + + it('returns false for invalid version string', () => { + expect(util.supportsTurbopackRuleCondition('invalid')).toBe(false); + }); + }); + }); }); diff --git a/packages/node/src/integrations/tracing/postgresjs.ts b/packages/node/src/integrations/tracing/postgresjs.ts index ddb588b90585..86f7952712d2 100644 --- a/packages/node/src/integrations/tracing/postgresjs.ts +++ b/packages/node/src/integrations/tracing/postgresjs.ts @@ -1,4 +1,3 @@ -/* eslint-disable max-lines */ // Instrumentation for https://github.com/porsager/postgres import { context, trace } from '@opentelemetry/api'; @@ -10,19 +9,17 @@ import { safeExecuteInTheMiddle, } from '@opentelemetry/instrumentation'; import { - ATTR_DB_NAMESPACE, ATTR_DB_OPERATION_NAME, 
ATTR_DB_QUERY_TEXT, ATTR_DB_RESPONSE_STATUS_CODE, ATTR_DB_SYSTEM_NAME, ATTR_ERROR_TYPE, - ATTR_SERVER_ADDRESS, - ATTR_SERVER_PORT, } from '@opentelemetry/semantic-conventions'; import type { IntegrationFn, Span } from '@sentry/core'; import { debug, defineIntegration, + instrumentPostgresJsSql, replaceExports, SDK_VERSION, SPAN_STATUS_ERROR, @@ -41,8 +38,6 @@ type PostgresConnectionContext = { ATTR_SERVER_PORT?: string; // Port number of the database server }; -const CONNECTION_CONTEXT_SYMBOL = Symbol('sentryPostgresConnectionContext'); -const INSTRUMENTED_MARKER = Symbol.for('sentry.instrumented.postgresjs'); // Marker to track if a query was created from an instrumented sql instance // This prevents double-spanning when both wrapper and prototype patches are active const QUERY_FROM_INSTRUMENTED_SQL = Symbol.for('sentry.query.from.instrumented.sql'); @@ -146,7 +141,12 @@ export class PostgresJsInstrumentation extends InstrumentationBase unknown, - target: unknown, - proxiedSql: unknown, - ): (...args: unknown[]) => unknown { - // eslint-disable-next-line @typescript-eslint/no-this-alias - const self = this; - return function (this: unknown, ...args: unknown[]): unknown { - const query = Reflect.apply(original, target, args); - - if (query && typeof query === 'object' && 'handle' in query) { - self._wrapSingleQueryHandle(query as { handle: unknown; strings?: string[] }, proxiedSql); - } - - return query; - }; - } - - /** - * Wraps callback-based methods (begin, reserve) to recursively instrument Sql instances. - * Note: These methods can also be used as tagged templates, which we pass through unchanged. - * - * Savepoint is not wrapped to avoid complex nested transaction instrumentation issues. - * Queries within savepoint callbacks are still instrumented through the parent transaction's Sql instance. 
- */ - private _wrapCallbackMethod( - original: (...args: unknown[]) => unknown, - target: unknown, - parentSqlInstance: unknown, - ): (...args: unknown[]) => unknown { - // eslint-disable-next-line @typescript-eslint/no-this-alias - const self = this; - return function (this: unknown, ...args: unknown[]): unknown { - // Extract parent context to propagate to child instances - const parentContext = (parentSqlInstance as Record)[CONNECTION_CONTEXT_SYMBOL] as - | PostgresConnectionContext - | undefined; - - // Check if this is a callback-based call by verifying the last argument is a function - const isCallbackBased = typeof args[args.length - 1] === 'function'; - - if (!isCallbackBased) { - // Not a callback-based call - could be tagged template or promise-based - const result = Reflect.apply(original, target, args); - // If result is a Promise (e.g., reserve() without callback), instrument the resolved Sql instance - if (result && typeof (result as Promise).then === 'function') { - return (result as Promise).then((sqlInstance: unknown) => { - return self._instrumentSqlInstance(sqlInstance, parentContext); - }); - } - return result; - } - - // Callback-based call: wrap the callback to instrument the Sql instance - const callback = (args.length === 1 ? args[0] : args[1]) as (sql: unknown) => unknown; - const wrappedCallback = function (sqlInstance: unknown): unknown { - const instrumentedSql = self._instrumentSqlInstance(sqlInstance, parentContext); - return callback(instrumentedSql); - }; - - const newArgs = args.length === 1 ? [wrappedCallback] : [args[0], wrappedCallback]; - return Reflect.apply(original, target, newArgs); - }; - } - - /** - * Sets connection context attributes on a span. + * Determines whether a span should be created based on the current context. + * If `requireParentSpan` is set to true in the configuration, a span will + * only be created if there is a parent span available. 
   */
-  private _setConnectionAttributes(span: Span, connectionContext: PostgresConnectionContext | undefined): void {
-    if (!connectionContext) {
-      return;
-    }
-    if (connectionContext.ATTR_DB_NAMESPACE) {
-      span.setAttribute(ATTR_DB_NAMESPACE, connectionContext.ATTR_DB_NAMESPACE);
-    }
-    if (connectionContext.ATTR_SERVER_ADDRESS) {
-      span.setAttribute(ATTR_SERVER_ADDRESS, connectionContext.ATTR_SERVER_ADDRESS);
-    }
-    if (connectionContext.ATTR_SERVER_PORT !== undefined) {
-      // Port is stored as string in PostgresConnectionContext for requestHook backwards compatibility,
-      // but OTEL semantic conventions expect port as a number for span attributes
-      const portNumber = parseInt(connectionContext.ATTR_SERVER_PORT, 10);
-      if (!isNaN(portNumber)) {
-        span.setAttribute(ATTR_SERVER_PORT, portNumber);
-      }
-    }
+  private _shouldCreateSpans(): boolean {
+    const config = this.getConfig();
+    const hasParentSpan = trace.getSpan(context.active()) !== undefined;
+    return hasParentSpan || !config.requireParentSpan;
   }
 
   /**
@@ -277,225 +197,6 @@ export class PostgresJsInstrumentation extends InstrumentationBase<PostgresJsInstrumentationConfig> {
-  private _attachConnectionContext(sql: unknown, proxiedSql: Record<symbol, unknown>): void {
-    const sqlInstance = sql as { options?: { host?: string[]; port?: number[]; database?: string } };
-    if (!sqlInstance.options || typeof sqlInstance.options !== 'object') {
-      return;
-    }
-
-    const opts = sqlInstance.options;
-    // postgres.js stores parsed options with host and port as arrays
-    // The library defaults to 'localhost' and 5432 if not specified, but we're defensive here
-    const host = opts.host?.[0] || 'localhost';
-    const port = opts.port?.[0] || 5432;
-
-    const connectionContext: PostgresConnectionContext = {
-      ATTR_DB_NAMESPACE: typeof opts.database === 'string' && opts.database !== '' ? opts.database : undefined,
-      ATTR_SERVER_ADDRESS: host,
-      ATTR_SERVER_PORT: String(port),
-    };
-
-    proxiedSql[CONNECTION_CONTEXT_SYMBOL] = connectionContext;
-  }
-
-  /**
-   * Instruments a sql instance by wrapping its query execution methods.
-   */
-  private _instrumentSqlInstance(sql: unknown, parentConnectionContext?: PostgresConnectionContext): unknown {
-    // Check if already instrumented to prevent double-wrapping
-    // Using Symbol.for() ensures the marker survives proxying
-    if ((sql as Record<symbol, unknown>)[INSTRUMENTED_MARKER]) {
-      return sql;
-    }
-
-    // eslint-disable-next-line @typescript-eslint/no-this-alias
-    const self = this;
-
-    // Wrap the sql function to intercept query creation
-    const proxiedSql: unknown = new Proxy(sql as (...args: unknown[]) => unknown, {
-      apply(target, thisArg, argumentsList: unknown[]) {
-        const query = Reflect.apply(target, thisArg, argumentsList);
-
-        if (query && typeof query === 'object' && 'handle' in query) {
-          self._wrapSingleQueryHandle(query as { handle: unknown; strings?: string[] }, proxiedSql);
-        }
-
-        return query;
-      },
-      get(target, prop) {
-        const original = (target as unknown as Record<string | symbol, unknown>)[prop];
-
-        if (typeof prop !== 'string' || typeof original !== 'function') {
-          return original;
-        }
-
-        // Wrap methods that return PendingQuery objects (unsafe, file)
-        if (prop === 'unsafe' || prop === 'file') {
-          return self._wrapQueryMethod(original as (...args: unknown[]) => unknown, target, proxiedSql);
-        }
-
-        // Wrap begin and reserve (not savepoint to avoid duplicate spans)
-        if (prop === 'begin' || prop === 'reserve') {
-          return self._wrapCallbackMethod(original as (...args: unknown[]) => unknown, target, proxiedSql);
-        }
-
-        return original;
-      },
-    });
-
-    // Use provided parent context if available, otherwise extract from sql.options
-    if (parentConnectionContext) {
-      (proxiedSql as Record<symbol, unknown>)[CONNECTION_CONTEXT_SYMBOL] = parentConnectionContext;
-    } else {
-      this._attachConnectionContext(sql, proxiedSql as Record<symbol, unknown>);
-    }
-
-    // Mark both the original and proxy as instrumented to prevent double-wrapping
-    // The proxy might be passed to other methods, or the original
-    // might be accessed directly, so we need to mark both
-    (sql as Record<symbol, unknown>)[INSTRUMENTED_MARKER] = true;
-
-    (proxiedSql as Record<symbol, unknown>)[INSTRUMENTED_MARKER] = true;
-
-    return proxiedSql;
-  }
-
-  /**
-   * Wraps a single query's handle method to create spans.
-   */
-  private _wrapSingleQueryHandle(
-    query: { handle: unknown; strings?: string[]; __sentryWrapped?: boolean },
-    sqlInstance: unknown,
-  ): void {
-    // Prevent double wrapping - check if the handle itself is already wrapped
-    if ((query.handle as { __sentryWrapped?: boolean })?.__sentryWrapped) {
-      return;
-    }
-
-    // Mark this query as coming from an instrumented sql instance
-    // This prevents the Query.prototype fallback patch from double-spanning
-    (query as Record<symbol, unknown>)[QUERY_FROM_INSTRUMENTED_SQL] = true;
-
-    const originalHandle = query.handle as (...args: unknown[]) => Promise<unknown>;
-    // eslint-disable-next-line @typescript-eslint/no-this-alias
-    const self = this;
-
-    // IMPORTANT: We must replace the handle function directly, not use a Proxy,
-    // because Query.then() internally calls this.handle(), which would bypass a Proxy wrapper.
-    const wrappedHandle = async function (this: unknown, ...args: unknown[]): Promise<unknown> {
-      if (!self._shouldCreateSpans()) {
-        return originalHandle.apply(this, args);
-      }
-
-      const fullQuery = self._reconstructQuery(query.strings);
-      const sanitizedSqlQuery = self._sanitizeSqlQuery(fullQuery);
-
-      return startSpanManual(
-        {
-          name: sanitizedSqlQuery || 'postgresjs.query',
-          op: 'db',
-        },
-        (span: Span) => {
-          addOriginToSpan(span, 'auto.db.postgresjs');
-
-          span.setAttributes({
-            [ATTR_DB_SYSTEM_NAME]: 'postgres',
-            [ATTR_DB_QUERY_TEXT]: sanitizedSqlQuery,
-          });
-
-          const connectionContext = sqlInstance
-            ? ((sqlInstance as Record<symbol, unknown>)[CONNECTION_CONTEXT_SYMBOL] as
-                | PostgresConnectionContext
-                | undefined)
-            : undefined;
-
-          self._setConnectionAttributes(span, connectionContext);
-
-          const config = self.getConfig();
-          const { requestHook } = config;
-          if (requestHook) {
-            safeExecuteInTheMiddle(
-              () => requestHook(span, sanitizedSqlQuery, connectionContext),
-              e => {
-                if (e) {
-                  span.setAttribute('sentry.hook.error', 'requestHook failed');
-                  DEBUG_BUILD && debug.error(`Error in requestHook for ${INTEGRATION_NAME} integration:`, e);
-                }
-              },
-              true,
-            );
-          }
-
-          const queryWithCallbacks = this as {
-            resolve: unknown;
-            reject: unknown;
-          };
-
-          queryWithCallbacks.resolve = new Proxy(queryWithCallbacks.resolve as (...args: unknown[]) => unknown, {
-            apply: (resolveTarget, resolveThisArg, resolveArgs: [{ command?: string }]) => {
-              try {
-                self._setOperationName(span, sanitizedSqlQuery, resolveArgs?.[0]?.command);
-                span.end();
-              } catch (e) {
-                DEBUG_BUILD && debug.error('Error ending span in resolve callback:', e);
-              }
-
-              return Reflect.apply(resolveTarget, resolveThisArg, resolveArgs);
-            },
-          });
-
-          queryWithCallbacks.reject = new Proxy(queryWithCallbacks.reject as (...args: unknown[]) => unknown, {
-            apply: (rejectTarget, rejectThisArg, rejectArgs: { message?: string; code?: string; name?: string }[]) => {
-              try {
-                span.setStatus({
-                  code: SPAN_STATUS_ERROR,
-                  message: rejectArgs?.[0]?.message || 'unknown_error',
-                });
-
-                span.setAttribute(ATTR_DB_RESPONSE_STATUS_CODE, rejectArgs?.[0]?.code || 'unknown');
-                span.setAttribute(ATTR_ERROR_TYPE, rejectArgs?.[0]?.name || 'unknown');
-
-                self._setOperationName(span, sanitizedSqlQuery);
-                span.end();
-              } catch (e) {
-                DEBUG_BUILD && debug.error('Error ending span in reject callback:', e);
-              }
-              return Reflect.apply(rejectTarget, rejectThisArg, rejectArgs);
-            },
-          });
-
-          // Handle synchronous errors that might occur before promise is created
-          try {
-            return originalHandle.apply(this, args);
-          } catch (e) {
-            span.setStatus({
-              code: SPAN_STATUS_ERROR,
-              message: e instanceof Error ? e.message : 'unknown_error',
-            });
-            span.end();
-            throw e;
-          }
-        },
-      );
-    };
-
-    (wrappedHandle as { __sentryWrapped?: boolean }).__sentryWrapped = true;
-    query.handle = wrappedHandle;
-  }
-
-  /**
-   * Determines whether a span should be created based on the current context.
-   * If `requireParentSpan` is set to true in the configuration, a span will
-   * only be created if there is a parent span available.
-   */
-  private _shouldCreateSpans(): boolean {
-    const config = this.getConfig();
-    const hasParentSpan = trace.getSpan(context.active()) !== undefined;
-    return hasParentSpan || !config.requireParentSpan;
-  }
-
   /**
    * Reconstructs the full SQL query from template strings with PostgreSQL placeholders.
    *
diff --git a/packages/react-router/package.json b/packages/react-router/package.json
index 3cf8bb0d1efb..8e9161841208 100644
--- a/packages/react-router/package.json
+++ b/packages/react-router/package.json
@@ -55,7 +55,7 @@
     "@sentry/node": "10.40.0",
     "@sentry/react": "10.40.0",
     "@sentry/vite-plugin": "^5.1.0",
-    "glob": "^13.0.1"
+    "glob": "^13.0.6"
   },
   "devDependencies": {
     "@react-router/dev": "^7.13.0",
diff --git a/packages/react-router/src/client/createClientInstrumentation.ts b/packages/react-router/src/client/createClientInstrumentation.ts
index c465a25dd662..86784127ec91 100644
--- a/packages/react-router/src/client/createClientInstrumentation.ts
+++ b/packages/react-router/src/client/createClientInstrumentation.ts
@@ -19,6 +19,9 @@ const WINDOW = GLOBAL_OBJ as typeof GLOBAL_OBJ & Window;
 // Tracks active numeric navigation span to prevent duplicate spans when popstate fires
 let currentNumericNavigationSpan: Span | undefined;
 
+// Per-request middleware counters, keyed by Request
+const middlewareCountersMap = new WeakMap<Request, Record<string, number>>();
+
 const SENTRY_CLIENT_INSTRUMENTATION_FLAG = '__sentryReactRouterClientInstrumentationUsed';
 // Intentionally never reset - once set, instrumentation API
handles all navigations for the session. const SENTRY_NAVIGATE_HOOK_INVOKED_FLAG = '__sentryReactRouterNavigateHookInvoked'; @@ -214,6 +217,8 @@ export function createSentryClientInstrumentation( }, route(route: InstrumentableRoute) { + const routeId = route.id; + route.instrument({ async loader(callLoader, info) { const urlPath = getPathFromRequest(info.request); @@ -267,12 +272,24 @@ export function createSentryClientInstrumentation( const urlPath = getPathFromRequest(info.request); const routePattern = normalizeRoutePath(getPattern(info)) || urlPath; + let counters = middlewareCountersMap.get(info.request); + if (!counters) { + counters = {}; + middlewareCountersMap.set(info.request, counters); + } + + const middlewareIndex = counters[routeId] ?? 0; + counters[routeId] = middlewareIndex + 1; + await startSpan( { - name: routePattern, + name: `middleware ${routeId}`, attributes: { [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.client_middleware', [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api', + 'react_router.route.id': routeId, + 'http.route': routePattern, + 'react_router.middleware.index': middlewareIndex, }, }, async span => { diff --git a/packages/react-router/src/server/createServerInstrumentation.ts b/packages/react-router/src/server/createServerInstrumentation.ts index 3fceca6a4ff7..7eb7cc6c163e 100644 --- a/packages/react-router/src/server/createServerInstrumentation.ts +++ b/packages/react-router/src/server/createServerInstrumentation.ts @@ -1,4 +1,4 @@ -import { context } from '@opentelemetry/api'; +import { context, createContextKey } from '@opentelemetry/api'; import { getRPCMetadata, RPCType } from '@opentelemetry/core'; import { ATTR_HTTP_ROUTE } from '@opentelemetry/semantic-conventions'; import { @@ -17,8 +17,11 @@ import { import { DEBUG_BUILD } from '../common/debug-build'; import type { InstrumentableRequestHandler, InstrumentableRoute, ServerInstrumentation } from '../common/types'; import { 
captureInstrumentationError, getPathFromRequest, getPattern, normalizeRoutePath } from '../common/utils';
+import { getMiddlewareName } from './serverBuild';
 import { markInstrumentationApiUsed } from './serverGlobals';
 
+const MIDDLEWARE_COUNTER_KEY = createContextKey('sentry_react_router_middleware_counter');
+
 // Re-export for backward compatibility and external use
 export { isInstrumentationApiUsed } from './serverGlobals';
 
@@ -53,61 +56,68 @@ export function createSentryServerInstrumentation(
       const activeSpan = getActiveSpan();
       const existingRootSpan = activeSpan ? getRootSpan(activeSpan) : undefined;
 
-      if (existingRootSpan) {
-        updateSpanName(existingRootSpan, `${info.request.method} ${pathname}`);
-        existingRootSpan.setAttributes({
-          [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server',
-          [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api',
-          [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
-        });
+      const counterStore = { counters: {} as Record<string, number> };
+      const ctx = context.active().setValue(MIDDLEWARE_COUNTER_KEY, counterStore);
 
-        try {
-          const result = await handleRequest();
-          if (result.status === 'error' && result.error instanceof Error) {
-            existingRootSpan.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
-            captureInstrumentationError(result, captureErrors, 'react_router.request_handler', {
-              'http.method': info.request.method,
-              'http.url': pathname,
-            });
+      await context.with(ctx, async () => {
+        if (existingRootSpan) {
+          updateSpanName(existingRootSpan, `${info.request.method} ${pathname}`);
+          existingRootSpan.setAttributes({
+            [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server',
+            [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api',
+            [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+          });
+
+          try {
+            const result = await handleRequest();
+            if (result.status === 'error' && result.error instanceof Error) {
+              existingRootSpan.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+
captureInstrumentationError(result, captureErrors, 'react_router.request_handler', { + 'http.method': info.request.method, + 'http.url': pathname, + }); + } + } finally { + await flushIfServerless(); } - } finally { - await flushIfServerless(); - } - } else { - await startSpan( - { - name: `${info.request.method} ${pathname}`, - forceTransaction: true, - attributes: { - [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server', - [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api', - [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url', - 'http.request.method': info.request.method, - 'url.path': pathname, - 'url.full': info.request.url, + } else { + await startSpan( + { + name: `${info.request.method} ${pathname}`, + forceTransaction: true, + attributes: { + [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server', + [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api', + [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url', + 'http.request.method': info.request.method, + 'url.path': pathname, + 'url.full': info.request.url, + }, }, - }, - async span => { - try { - const result = await handleRequest(); - if (result.status === 'error' && result.error instanceof Error) { - span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' }); - captureInstrumentationError(result, captureErrors, 'react_router.request_handler', { - 'http.method': info.request.method, - 'http.url': pathname, - }); + async span => { + try { + const result = await handleRequest(); + if (result.status === 'error' && result.error instanceof Error) { + span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' }); + captureInstrumentationError(result, captureErrors, 'react_router.request_handler', { + 'http.method': info.request.method, + 'http.url': pathname, + }); + } + } finally { + await flushIfServerless(); } - } finally { - await flushIfServerless(); - } - }, - ); - } + }, + ); + } + }); }, }); }, route(route: InstrumentableRoute) { + const routeId = route.id; + 
route.instrument({
         async loader(callLoader, info) {
           const urlPath = getPathFromRequest(info.request);
@@ -168,15 +178,29 @@ export function createSentryServerInstrumentation(
           const pattern = getPattern(info);
           const routePattern = normalizeRoutePath(pattern) || urlPath;
 
-          // Update root span with parameterized route (same as loader/action)
           updateRootSpanWithRoute(info.request.method, pattern, urlPath);
 
+          const counterStore = context.active().getValue(MIDDLEWARE_COUNTER_KEY) as
+            | { counters: Record<string, number> }
+            | undefined;
+          let middlewareIndex = 0;
+          if (counterStore) {
+            middlewareIndex = counterStore.counters[routeId] ?? 0;
+            counterStore.counters[routeId] = middlewareIndex + 1;
+          }
+
+          const middlewareName = getMiddlewareName(routeId, middlewareIndex);
+
           await startSpan(
             {
-              name: routePattern,
+              name: `middleware ${middlewareName || routeId}`,
               attributes: {
                 [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.middleware',
                 [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+                'react_router.route.id': routeId,
+                [ATTR_HTTP_ROUTE]: routePattern,
+                ...(middlewareName && { 'react_router.middleware.name': middlewareName }),
+                'react_router.middleware.index': middlewareIndex,
               },
             },
             async span => {
diff --git a/packages/react-router/src/server/instrumentation/reactRouter.ts b/packages/react-router/src/server/instrumentation/reactRouter.ts
index 2f24d2c7bcb7..e322caa5cfcc 100644
--- a/packages/react-router/src/server/instrumentation/reactRouter.ts
+++ b/packages/react-router/src/server/instrumentation/reactRouter.ts
@@ -15,7 +15,8 @@ import {
 } from '@sentry/core';
 import type * as reactRouter from 'react-router';
 import { DEBUG_BUILD } from '../../common/debug-build';
-import { isInstrumentationApiUsed } from '../serverGlobals';
+import { isServerBuildLike, setServerBuild } from '../serverBuild';
+import { isInstrumentationApiUsed, isOtelDataLoaderSpanCreationEnabled } from '../serverGlobals';
 import { getOpName, getSpanName, isDataRequest
} from './util';
 
 type ReactRouterModuleExports = typeof reactRouter;
 
@@ -62,9 +63,31 @@ export class ReactRouterInstrumentation extends InstrumentationBase<InstrumentationConfig> {
+      if (isServerBuildLike(args[0])) {
+        setServerBuild(args[0]);
+      } else if (typeof args[0] === 'function') {
+        const originalBuildFn = args[0] as () => unknown;
+        args[0] = async function sentryWrappedBuildFn() {
+          const resolvedBuild = await originalBuildFn();
+          if (isServerBuildLike(resolvedBuild)) {
+            setServerBuild(resolvedBuild);
+          }
+          return resolvedBuild;
+        };
+      }
+
       const originalRequestHandler = original.apply(this, args);
 
       return async function sentryWrappedRequestHandler(request: Request, initialContext?: unknown) {
+        // Skip OTEL span creation when instrumentation API is active or when span creation is not enabled.
+        // Checked per-request (not at handler-creation time) because in dev, createRequestHandler runs before entry.server.tsx.
+        if (isInstrumentationApiUsed() || !isOtelDataLoaderSpanCreationEnabled()) {
+          return originalRequestHandler(request, initialContext);
+        }
+
         let url: URL;
         try {
           url = new URL(request.url);
@@ -77,13 +100,6 @@ export class ReactRouterInstrumentation extends InstrumentationBase {
 export const instrumentReactRouterServer = Object.assign(
   (): void => {
     instrumentReactRouter();
+    // Register global for Vite plugin ServerBuild capture
+    registerServerBuildGlobal();
   },
   { id: INTEGRATION_NAME },
 );
@@ -24,17 +27,17 @@ export const reactRouterServerIntegration = defineIntegration(() => {
   return {
     name: INTEGRATION_NAME,
     setupOnce() {
-      // Skip OTEL patching if the instrumentation API is in use
-      if (isInstrumentationApiUsed()) {
-        return;
-      }
-
+      // Enable OTEL data-loader spans only on Node versions without the diagnostics_channel-based instrumentation API.
if ( - (NODE_VERSION.major === 20 && NODE_VERSION.minor < 19) || // https://nodejs.org/en/blog/release/v20.19.0 - (NODE_VERSION.major === 22 && NODE_VERSION.minor < 12) // https://nodejs.org/en/blog/release/v22.12.0 + (NODE_VERSION.major === 20 && NODE_VERSION.minor < 19) || + (NODE_VERSION.major === 22 && NODE_VERSION.minor < 12) ) { - instrumentReactRouterServer(); + enableOtelDataLoaderSpanCreation(); } + + // Always install to capture ServerBuild for middleware names. + // Skips per-request wrapping when instrumentation API is active or OTEL span creation is disabled. + instrumentReactRouterServer(); }, processEvent(event) { // Express generates bogus `*` routes for data loaders, which we want to remove here diff --git a/packages/react-router/src/server/serverBuild.ts b/packages/react-router/src/server/serverBuild.ts new file mode 100644 index 000000000000..3b46c745b4b7 --- /dev/null +++ b/packages/react-router/src/server/serverBuild.ts @@ -0,0 +1,66 @@ +import { GLOBAL_OBJ } from '@sentry/core'; + +/** + * Subset of ServerBuild shape for middleware name lookup. + * The official React Router types don't expose `middleware` on route modules yet. + * @internal + */ +interface ServerBuildLike { + routes?: Record< + string, + { + module?: { + middleware?: Array<{ name?: string }>; + }; + } + >; +} + +/** @internal */ +export const GLOBAL_KEY = '__sentrySetServerBuild'; + +type GlobalObjWithBuildCapture = typeof GLOBAL_OBJ & { + [GLOBAL_KEY]?: (build: ServerBuildLike) => void; +}; + +// ServerBuild reference for middleware name lookup. Updated on each createRequestHandler call. 
+let _serverBuild: ServerBuildLike | undefined; + +/** @internal */ +export function isServerBuildLike(build: unknown): build is ServerBuildLike { + return ( + build !== null && + typeof build === 'object' && + 'routes' in build && + build.routes !== null && + typeof build.routes === 'object' + ); +} + +/** @internal */ +export function setServerBuild(build: ServerBuildLike): void { + _serverBuild = build; +} + +/** @internal */ +export function getMiddlewareName(routeId: string, index: number): string | undefined { + if (!_serverBuild?.routes) return undefined; + + const route = _serverBuild.routes[routeId]; + if (!route?.module?.middleware) return undefined; + + const middlewareFn = route.module.middleware[index]; + return middlewareFn?.name || undefined; +} + +/** @internal */ +export function registerServerBuildGlobal(): void { + (GLOBAL_OBJ as GlobalObjWithBuildCapture)[GLOBAL_KEY] = setServerBuild; +} + +/** @internal Exported for testing. */ +export function _resetServerBuild(): void { + _serverBuild = undefined; + // eslint-disable-next-line @typescript-eslint/no-dynamic-delete + delete (GLOBAL_OBJ as GlobalObjWithBuildCapture)[GLOBAL_KEY]; +} diff --git a/packages/react-router/src/server/serverGlobals.ts b/packages/react-router/src/server/serverGlobals.ts index 33f96ab5f45a..9010d6568a01 100644 --- a/packages/react-router/src/server/serverGlobals.ts +++ b/packages/react-router/src/server/serverGlobals.ts @@ -1,9 +1,11 @@ import { GLOBAL_OBJ } from '@sentry/core'; const SENTRY_SERVER_INSTRUMENTATION_FLAG = '__sentryReactRouterServerInstrumentationUsed'; +const SENTRY_OTEL_SPAN_CREATION_FLAG = '__sentryReactRouterOtelSpanCreationEnabled'; type GlobalObjWithFlag = typeof GLOBAL_OBJ & { [SENTRY_SERVER_INSTRUMENTATION_FLAG]?: boolean; + [SENTRY_OTEL_SPAN_CREATION_FLAG]?: boolean; }; /** @@ -21,3 +23,19 @@ export function markInstrumentationApiUsed(): void { export function isInstrumentationApiUsed(): boolean { return !!(GLOBAL_OBJ as 
GlobalObjWithFlag)[SENTRY_SERVER_INSTRUMENTATION_FLAG]; } + +/** + * Enable OTEL data-loader span creation for React Router server. + * @internal + */ +export function enableOtelDataLoaderSpanCreation(): void { + (GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_OTEL_SPAN_CREATION_FLAG] = true; +} + +/** + * Check if OTEL data-loader span creation is enabled for React Router server. + * @internal + */ +export function isOtelDataLoaderSpanCreationEnabled(): boolean { + return !!(GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_OTEL_SPAN_CREATION_FLAG]; +} diff --git a/packages/react-router/src/vite/makeServerBuildCapturePlugin.ts b/packages/react-router/src/vite/makeServerBuildCapturePlugin.ts new file mode 100644 index 000000000000..e7d081306bfb --- /dev/null +++ b/packages/react-router/src/vite/makeServerBuildCapturePlugin.ts @@ -0,0 +1,39 @@ +import { type Plugin } from 'vite'; +import { GLOBAL_KEY } from '../server/serverBuild'; + +const SERVER_BUILD_MODULE_ID = 'virtual:react-router/server-build'; + +/** + * A Sentry plugin for React Router to capture the server build for middleware name resolution. 
+ */ +export function makeServerBuildCapturePlugin(): Plugin { + let isSsrBuild = false; + + return { + name: 'sentry-react-router-server-build-capture', + enforce: 'post', + + configResolved(config) { + isSsrBuild = !!config.build.ssr; + }, + + transform(code, id) { + if (!isSsrBuild) { + return null; + } + + if (!id.includes(SERVER_BUILD_MODULE_ID)) { + return null; + } + + // `routes` is a module-scope export in the virtual:react-router/server-build module + const injectedCode = `${code} +if (typeof globalThis !== 'undefined' && typeof globalThis["${GLOBAL_KEY}"] === 'function') { + globalThis["${GLOBAL_KEY}"]({ routes }); +} +`; + + return { code: injectedCode, map: null }; + }, + }; +} diff --git a/packages/react-router/src/vite/plugin.ts b/packages/react-router/src/vite/plugin.ts index d58b08df3fa2..4a66a2575987 100644 --- a/packages/react-router/src/vite/plugin.ts +++ b/packages/react-router/src/vite/plugin.ts @@ -2,6 +2,7 @@ import type { ConfigEnv, Plugin } from 'vite'; import { makeConfigInjectorPlugin } from './makeConfigInjectorPlugin'; import { makeCustomSentryVitePlugins } from './makeCustomSentryVitePlugins'; import { makeEnableSourceMapsPlugin } from './makeEnableSourceMapsPlugin'; +import { makeServerBuildCapturePlugin } from './makeServerBuildCapturePlugin'; import type { SentryReactRouterBuildOptions } from './types'; /** @@ -18,6 +19,7 @@ export async function sentryReactRouter( const plugins: Plugin[] = []; plugins.push(makeConfigInjectorPlugin(options)); + plugins.push(makeServerBuildCapturePlugin()); if (process.env.NODE_ENV !== 'development' && viteConfig.command === 'build' && viteConfig.mode !== 'development') { plugins.push(makeEnableSourceMapsPlugin(options)); diff --git a/packages/react-router/test/client/createClientInstrumentation.test.ts b/packages/react-router/test/client/createClientInstrumentation.test.ts index 0078b2601c51..8f04bf8d7851 100644 --- a/packages/react-router/test/client/createClientInstrumentation.test.ts +++ 
b/packages/react-router/test/client/createClientInstrumentation.test.ts @@ -478,10 +478,13 @@ describe('createSentryClientInstrumentation', () => { expect(core.startSpan).toHaveBeenCalledWith( expect.objectContaining({ - name: '/users/:id', + name: 'middleware test-route', attributes: expect.objectContaining({ 'sentry.op': 'function.react_router.client_middleware', 'sentry.origin': 'auto.function.react_router.instrumentation_api', + 'react_router.route.id': 'test-route', + 'http.route': '/users/:id', + 'react_router.middleware.index': 0, }), }), expect.any(Function), diff --git a/packages/react-router/test/server/createServerInstrumentation.test.ts b/packages/react-router/test/server/createServerInstrumentation.test.ts index 33eb73f48ace..c86114bef58a 100644 --- a/packages/react-router/test/server/createServerInstrumentation.test.ts +++ b/packages/react-router/test/server/createServerInstrumentation.test.ts @@ -1,9 +1,11 @@ +import * as otelApi from '@opentelemetry/api'; import * as core from '@sentry/core'; import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; import { createSentryServerInstrumentation, isInstrumentationApiUsed, } from '../../src/server/createServerInstrumentation'; +import * as serverBuildModule from '../../src/server/serverBuild'; vi.mock('@sentry/core', async () => { const actual = await vi.importActual('@sentry/core'); @@ -22,6 +24,25 @@ vi.mock('@sentry/core', async () => { }; }); +vi.mock('../../src/server/serverBuild', () => ({ + getMiddlewareName: vi.fn(), +})); + +vi.mock('@opentelemetry/api', async () => { + const actual = await vi.importActual('@opentelemetry/api'); + return { + ...actual, + context: { + active: vi.fn(() => ({ + getValue: vi.fn(), + setValue: vi.fn(), + })), + with: vi.fn((ctx, fn) => fn()), + }, + createContextKey: actual.createContextKey, + }; +}); + describe('createSentryServerInstrumentation', () => { beforeEach(() => { vi.clearAllMocks(); @@ -287,47 +308,65 @@ 
describe('createSentryServerInstrumentation', () => { ); }); - it('should instrument route middleware with spans', async () => { + async function callMiddlewareHook(options: { + middlewareName: string | undefined; + routeId: string; + routePath: string; + url: string; + }) { const mockCallMiddleware = vi.fn().mockResolvedValue({ status: 'success', error: undefined }); const mockInstrument = vi.fn(); const mockSetAttributes = vi.fn(); const mockRootSpan = { setAttributes: mockSetAttributes }; + vi.mocked(serverBuildModule.getMiddlewareName).mockReturnValue(options.middlewareName); + (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn()); (core.getActiveSpan as any).mockReturnValue({}); (core.getRootSpan as any).mockReturnValue(mockRootSpan); const instrumentation = createSentryServerInstrumentation(); instrumentation.route?.({ - id: 'test-route', + id: options.routeId, index: false, - path: '/users/:id', + path: options.routePath, instrument: mockInstrument, }); const hooks = mockInstrument.mock.calls[0]![0]; - // Call the middleware hook with RouteHandlerInstrumentationInfo await hooks.middleware(mockCallMiddleware, { - request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } }, - params: { id: '123' }, - unstable_pattern: '/users/:id', + request: { method: 'GET', url: options.url, headers: { get: () => null } }, + params: {}, + unstable_pattern: options.routePath, context: undefined, }); + return { mockSetAttributes, mockRootSpan }; + } + + it('should instrument route middleware with spans (without function name)', async () => { + const { mockSetAttributes, mockRootSpan } = await callMiddlewareHook({ + middlewareName: undefined, + routeId: 'test-route', + routePath: '/users/:id', + url: 'http://example.com/users/123', + }); + expect(core.startSpan).toHaveBeenCalledWith( expect.objectContaining({ - name: '/users/:id', + name: 'middleware test-route', attributes: expect.objectContaining({ 'sentry.op': 
'function.react_router.middleware',
             'sentry.origin': 'auto.function.react_router.instrumentation_api',
+            'react_router.route.id': 'test-route',
+            'http.route': '/users/:id',
+            'react_router.middleware.index': 0,
           }),
         }),
         expect.any(Function),
       );
 
-      // Verify updateRootSpanWithRoute was called (same as loader/action)
-      // This updates the root span name and sets http.route for parameterized routes
       expect(core.updateSpanName).toHaveBeenCalledWith(mockRootSpan, 'GET /users/:id');
       expect(mockSetAttributes).toHaveBeenCalledWith(
         expect.objectContaining({
@@ -337,6 +376,87 @@ describe('createSentryServerInstrumentation', () => {
       );
     });
 
+    it('should use middleware function name when available from serverBuild', async () => {
+      await callMiddlewareHook({
+        middlewareName: 'authMiddleware',
+        routeId: 'routes/protected',
+        routePath: '/protected',
+        url: 'http://example.com/protected',
+      });
+
+      expect(core.startSpan).toHaveBeenCalledWith(
+        expect.objectContaining({
+          name: 'middleware authMiddleware',
+          attributes: expect.objectContaining({
+            'sentry.op': 'function.react_router.middleware',
+            'react_router.route.id': 'routes/protected',
+            'http.route': '/protected',
+            'react_router.middleware.name': 'authMiddleware',
+            'react_router.middleware.index': 0,
+          }),
+        }),
+        expect.any(Function),
+      );
+    });
+
+    it('should increment middleware index for multiple middleware calls on same route', async () => {
+      const mockCallMiddleware = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+      const mockInstrument = vi.fn();
+      const mockSetAttributes = vi.fn();
+      const mockRootSpan = { setAttributes: mockSetAttributes };
+      const routeId = 'routes/multi-middleware';
+
+      // Simulate counter store that would be created by handler and stored in OTel context
+      const counterStore = { counters: {} as Record<string, number> };
+
+      // eslint-disable-next-line @typescript-eslint/unbound-method
+      vi.mocked(otelApi.context.active).mockReturnValue({
+        getValue: vi.fn(() => counterStore),
+
setValue: vi.fn(), + } as any); + + vi.mocked(serverBuildModule.getMiddlewareName).mockReturnValue(undefined); + + const startSpanCalls: any[] = []; + (core.startSpan as any).mockImplementation((opts: any, fn: any) => { + startSpanCalls.push(opts); + return fn(); + }); + (core.getActiveSpan as any).mockReturnValue({}); + (core.getRootSpan as any).mockReturnValue(mockRootSpan); + + const instrumentation = createSentryServerInstrumentation(); + instrumentation.route?.({ + id: routeId, + index: false, + path: '/multi-middleware', + instrument: mockInstrument, + }); + + const hooks = mockInstrument.mock.calls[0]![0]; + const requestInfo = { + request: { method: 'GET', url: 'http://example.com/multi-middleware', headers: { get: () => null } }, + params: {}, + unstable_pattern: '/multi-middleware', + context: undefined, + }; + + // Call middleware 3 times (simulating 3 middlewares on same route) + await hooks.middleware(mockCallMiddleware, requestInfo); + await hooks.middleware(mockCallMiddleware, requestInfo); + await hooks.middleware(mockCallMiddleware, requestInfo); + + // Filter to only middleware spans + const middlewareSpans = startSpanCalls.filter( + opts => opts.attributes?.['sentry.op'] === 'function.react_router.middleware', + ); + + expect(middlewareSpans).toHaveLength(3); + expect(middlewareSpans[0].attributes['react_router.middleware.index']).toBe(0); + expect(middlewareSpans[1].attributes['react_router.middleware.index']).toBe(1); + expect(middlewareSpans[2].attributes['react_router.middleware.index']).toBe(2); + }); + it('should instrument lazy route loading with spans', async () => { const mockCallLazy = vi.fn().mockResolvedValue({ status: 'success', error: undefined }); const mockInstrument = vi.fn(); diff --git a/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts b/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts index 93e0a91a1c2b..f9b9d88c1a2b 100644 --- 
a/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts +++ b/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts @@ -3,6 +3,8 @@ import * as SentryCore from '@sentry/core'; import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; import { ReactRouterInstrumentation } from '../../../src/server/instrumentation/reactRouter'; import * as Util from '../../../src/server/instrumentation/util'; +import * as ServerBuild from '../../../src/server/serverBuild'; +import * as ServerGlobals from '../../../src/server/serverGlobals'; vi.mock('@sentry/core', async () => { return { @@ -63,6 +65,7 @@ describe('ReactRouterInstrumentation', () => { }); it('should call original handler for non-data requests', async () => { + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(true); vi.spyOn(Util, 'isDataRequest').mockReturnValue(false); const proxy = (instrumentation as any)._createPatchedModuleProxy(mockModule); @@ -75,6 +78,7 @@ describe('ReactRouterInstrumentation', () => { }); it('should call original handler if no active root span', async () => { + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(true); vi.spyOn(Util, 'isDataRequest').mockReturnValue(true); vi.spyOn(SentryCore, 'getActiveSpan').mockReturnValue(undefined); @@ -88,6 +92,7 @@ describe('ReactRouterInstrumentation', () => { }); it('should start a span for data requests with active root span', async () => { + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(true); vi.spyOn(Util, 'isDataRequest').mockReturnValue(true); // @ts-expect-error MockSpan just for testing vi.spyOn(SentryCore, 'getActiveSpan').mockReturnValue(mockSpan as Span); @@ -110,6 +115,7 @@ describe('ReactRouterInstrumentation', () => { }); it('should handle invalid URLs gracefully', async () => { + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(true); const proxy = 
(instrumentation as any)._createPatchedModuleProxy(mockModule); const wrappedHandler = proxy.createRequestHandler(); const req = { url: 'not a url', method: 'GET' } as any; @@ -117,4 +123,68 @@ describe('ReactRouterInstrumentation', () => { expect(originalHandler).toHaveBeenCalledWith(req, undefined); }); + + it('should call setServerBuild when static ServerBuild is passed', () => { + const spy = vi.spyOn(ServerBuild, 'setServerBuild'); + vi.spyOn(ServerBuild, 'isServerBuildLike').mockReturnValue(true); + + const staticBuild = { routes: { root: { id: 'root' } } }; + const proxy = (instrumentation as any)._createPatchedModuleProxy(mockModule); + proxy.createRequestHandler(staticBuild); + + expect(spy).toHaveBeenCalledWith(staticBuild); + }); + + it('should capture ServerBuild from factory function', async () => { + const resolvedBuild = { routes: { root: { id: 'root' } } }; + const buildFactory = vi.fn().mockResolvedValue(resolvedBuild); + vi.spyOn(ServerBuild, 'isServerBuildLike').mockImplementation(val => val === resolvedBuild); + const spy = vi.spyOn(ServerBuild, 'setServerBuild'); + + const proxy = (instrumentation as any)._createPatchedModuleProxy(mockModule); + proxy.createRequestHandler(buildFactory); + + // Factory gets wrapped — invoke it via the arg passed to the original createRequestHandler + const wrappedFactory = mockModule.createRequestHandler.mock.calls[0][0]; + await wrappedFactory(); + + expect(spy).toHaveBeenCalledWith(resolvedBuild); + }); + + it('should bypass instrumentation when instrumentation API is active', async () => { + vi.spyOn(ServerGlobals, 'isInstrumentationApiUsed').mockReturnValue(true); + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(true); + vi.spyOn(Util, 'isDataRequest').mockReturnValue(true); + const startSpanSpy = vi.spyOn(SentryCore, 'startSpan'); + + const proxy = (instrumentation as any)._createPatchedModuleProxy(mockModule); + const handler = proxy.createRequestHandler(); + + // Handler is 
always wrapped; the instrumentation API check happens per-request + expect(handler).not.toBe(originalHandler); + + const req = createRequest('https://test.com/data', 'GET'); + await handler(req); + + // Should delegate to original handler without creating spans + expect(originalHandler).toHaveBeenCalledWith(req, undefined); + expect(startSpanSpy).not.toHaveBeenCalled(); + }); + + it('should skip span creation when OTEL data-loader span creation is disabled', async () => { + vi.spyOn(ServerGlobals, 'isInstrumentationApiUsed').mockReturnValue(false); + vi.spyOn(ServerGlobals, 'isOtelDataLoaderSpanCreationEnabled').mockReturnValue(false); + vi.spyOn(Util, 'isDataRequest').mockReturnValue(true); + const startSpanSpy = vi.spyOn(SentryCore, 'startSpan'); + + const proxy = (instrumentation as any)._createPatchedModuleProxy(mockModule); + const handler = proxy.createRequestHandler(); + + const req = createRequest('https://test.com/data', 'GET'); + await handler(req); + + // Should delegate to original handler without creating spans + expect(originalHandler).toHaveBeenCalledWith(req, undefined); + expect(startSpanSpy).not.toHaveBeenCalled(); + }); }); diff --git a/packages/react-router/test/server/integration/reactRouterServer.test.ts b/packages/react-router/test/server/integration/reactRouterServer.test.ts index a5eac42643e5..096095984eec 100644 --- a/packages/react-router/test/server/integration/reactRouterServer.test.ts +++ b/packages/react-router/test/server/integration/reactRouterServer.test.ts @@ -1,7 +1,8 @@ -import * as SentryNode from '@sentry/node'; -import { beforeEach, describe, expect, it, vi } from 'vitest'; +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; import { ReactRouterInstrumentation } from '../../../src/server/instrumentation/reactRouter'; import { reactRouterServerIntegration } from '../../../src/server/integration/reactRouterServer'; +import * as serverBuild from '../../../src/server/serverBuild'; +import * as serverGlobals 
from '../../../src/server/serverGlobals'; vi.mock('../../../src/server/instrumentation/reactRouter', () => { return { @@ -9,57 +10,92 @@ vi.mock('../../../src/server/instrumentation/reactRouter', () => { }; }); +const mockNodeVersion = { major: 20, minor: 18, patch: 0 }; + vi.mock('@sentry/node', () => { return { generateInstrumentOnce: vi.fn((_name: string, callback: () => any) => { return Object.assign(callback, { id: 'test' }); }), - NODE_VERSION: { - major: 0, - minor: 0, - patch: 0, + get NODE_VERSION() { + return mockNodeVersion; }, }; }); describe('reactRouterServerIntegration', () => { + let registerServerBuildGlobalSpy: ReturnType<typeof vi.spyOn>; + let enableOtelDataLoaderSpanCreationSpy: ReturnType<typeof vi.spyOn>; + beforeEach(() => { vi.clearAllMocks(); + registerServerBuildGlobalSpy = vi.spyOn(serverBuild, 'registerServerBuildGlobal'); + enableOtelDataLoaderSpanCreationSpy = vi.spyOn(serverGlobals, 'enableOtelDataLoaderSpanCreation'); + }); + + afterEach(() => { + vi.restoreAllMocks(); + }); + + it('sets up ReactRouterInstrumentation on setupOnce', () => { + const integration = reactRouterServerIntegration(); + integration.setupOnce!(); + + expect(ReactRouterInstrumentation).toHaveBeenCalledTimes(1); + }); + + it('registers the server build global callback on setupOnce', () => { + const integration = reactRouterServerIntegration(); + integration.setupOnce!(); + + expect(registerServerBuildGlobalSpy).toHaveBeenCalledTimes(1); }); - it('sets up ReactRouterInstrumentation for Node 20.18', () => { - vi.spyOn(SentryNode, 'NODE_VERSION', 'get').mockReturnValue({ major: 20, minor: 18, patch: 0 }); + it('enables OTEL data-loader span creation on Node 20.18', () => { + mockNodeVersion.major = 20; + mockNodeVersion.minor = 18; const integration = reactRouterServerIntegration(); integration.setupOnce!(); - expect(ReactRouterInstrumentation).toHaveBeenCalled(); + expect(enableOtelDataLoaderSpanCreationSpy).toHaveBeenCalledTimes(1); + expect(ReactRouterInstrumentation).toHaveBeenCalledTimes(1); 
+ expect(registerServerBuildGlobalSpy).toHaveBeenCalledTimes(1); }); - it('sets up ReactRouterInstrumentationfor Node.js 22.11', () => { - vi.spyOn(SentryNode, 'NODE_VERSION', 'get').mockReturnValue({ major: 22, minor: 11, patch: 0 }); + it('enables OTEL data-loader span creation on Node 22.11', () => { + mockNodeVersion.major = 22; + mockNodeVersion.minor = 11; const integration = reactRouterServerIntegration(); integration.setupOnce!(); - expect(ReactRouterInstrumentation).toHaveBeenCalled(); + expect(enableOtelDataLoaderSpanCreationSpy).toHaveBeenCalledTimes(1); + expect(ReactRouterInstrumentation).toHaveBeenCalledTimes(1); + expect(registerServerBuildGlobalSpy).toHaveBeenCalledTimes(1); }); - it('does not set up ReactRouterInstrumentation for Node.js 20.19', () => { - vi.spyOn(SentryNode, 'NODE_VERSION', 'get').mockReturnValue({ major: 20, minor: 19, patch: 0 }); + it('does not enable OTEL data-loader span creation on Node 20.19', () => { + mockNodeVersion.major = 20; + mockNodeVersion.minor = 19; const integration = reactRouterServerIntegration(); integration.setupOnce!(); - expect(ReactRouterInstrumentation).not.toHaveBeenCalled(); + expect(enableOtelDataLoaderSpanCreationSpy).not.toHaveBeenCalled(); + expect(ReactRouterInstrumentation).toHaveBeenCalledTimes(1); + expect(registerServerBuildGlobalSpy).toHaveBeenCalledTimes(1); }); - it('does not set up ReactRouterInstrumentation for Node.js 22.12', () => { - vi.spyOn(SentryNode, 'NODE_VERSION', 'get').mockReturnValue({ major: 22, minor: 12, patch: 0 }); + it('does not enable OTEL data-loader span creation on Node 22.12', () => { + mockNodeVersion.major = 22; + mockNodeVersion.minor = 12; const integration = reactRouterServerIntegration(); integration.setupOnce!(); - expect(ReactRouterInstrumentation).not.toHaveBeenCalled(); + expect(enableOtelDataLoaderSpanCreationSpy).not.toHaveBeenCalled(); + expect(ReactRouterInstrumentation).toHaveBeenCalledTimes(1); + 
expect(registerServerBuildGlobalSpy).toHaveBeenCalledTimes(1); }); }); diff --git a/packages/react-router/test/server/serverBuild.test.ts b/packages/react-router/test/server/serverBuild.test.ts new file mode 100644 index 000000000000..80eb7e2028b7 --- /dev/null +++ b/packages/react-router/test/server/serverBuild.test.ts @@ -0,0 +1,77 @@ +import { GLOBAL_OBJ } from '@sentry/core'; +import { afterEach, describe, expect, it } from 'vitest'; +import { + _resetServerBuild, + getMiddlewareName, + isServerBuildLike, + registerServerBuildGlobal, + setServerBuild, +} from '../../src/server/serverBuild'; + +describe('serverBuild', () => { + afterEach(() => { + _resetServerBuild(); + }); + + describe('getMiddlewareName', () => { + it('should return undefined when build is missing or incomplete', () => { + expect(getMiddlewareName('any-route', 0)).toBeUndefined(); + + setServerBuild({ routes: { 'test-route': {} } }); + expect(getMiddlewareName('test-route', 0)).toBeUndefined(); + + setServerBuild({ routes: { 'test-route': { module: { middleware: [{ name: 'first' }] } } } }); + expect(getMiddlewareName('test-route', 1)).toBeUndefined(); + }); + + it('should return the middleware function name by index', () => { + setServerBuild({ + routes: { + 'route-a': { module: { middleware: [{ name: 'authMiddleware' }, { name: 'loggingMiddleware' }] } }, + }, + }); + + expect(getMiddlewareName('route-a', 0)).toBe('authMiddleware'); + expect(getMiddlewareName('route-a', 1)).toBe('loggingMiddleware'); + }); + + it('should return undefined for empty-string middleware names', () => { + setServerBuild({ + routes: { + 'route-a': { module: { middleware: [{ name: '' }] } }, + }, + }); + + expect(getMiddlewareName('route-a', 0)).toBeUndefined(); + }); + }); + + describe('isServerBuildLike', () => { + it('should return true for objects with a routes object', () => { + expect(isServerBuildLike({ routes: {} })).toBe(true); + expect(isServerBuildLike({ routes: { 'test-route': {} } })).toBe(true); + }); 
+ + it('should return false for non-build values', () => { + expect(isServerBuildLike(null)).toBe(false); + expect(isServerBuildLike(undefined)).toBe(false); + expect(isServerBuildLike({})).toBe(false); + expect(isServerBuildLike({ routes: null })).toBe(false); + expect(isServerBuildLike({ routes: 'string' })).toBe(false); + }); + }); + + describe('registerServerBuildGlobal', () => { + it('should register a global callback that calls setServerBuild', () => { + registerServerBuildGlobal(); + + const callback = (GLOBAL_OBJ as any).__sentrySetServerBuild; + expect(typeof callback).toBe('function'); + + const build = { routes: { 'test-route': { module: { middleware: [{ name: 'testMiddleware' }] } } } }; + callback(build); + + expect(getMiddlewareName('test-route', 0)).toBe('testMiddleware'); + }); + }); +}); diff --git a/packages/react-router/test/vite/makeServerBuildCapturePlugin.test.ts b/packages/react-router/test/vite/makeServerBuildCapturePlugin.test.ts new file mode 100644 index 000000000000..f76090b60f56 --- /dev/null +++ b/packages/react-router/test/vite/makeServerBuildCapturePlugin.test.ts @@ -0,0 +1,44 @@ +import { describe, expect, it } from 'vitest'; +import { makeServerBuildCapturePlugin } from '../../src/vite/makeServerBuildCapturePlugin'; + +const SERVER_BUILD_MODULE_ID = 'virtual:react-router/server-build'; +const SERVER_BUILD_CODE = 'const routes = {}; export { routes, entry, future };'; + +describe('makeServerBuildCapturePlugin', () => { + it('should create a plugin with the correct name and enforce post', () => { + const plugin = makeServerBuildCapturePlugin(); + expect(plugin.name).toBe('sentry-react-router-server-build-capture'); + expect(plugin.enforce).toBe('post'); + }); + + it('should return null for non-SSR builds', () => { + const plugin = makeServerBuildCapturePlugin(); + (plugin as any).configResolved({ build: { ssr: false } }); + + const result = (plugin as any).transform(SERVER_BUILD_CODE, SERVER_BUILD_MODULE_ID); + + 
expect(result).toBeNull(); + }); + + it('should return null for non-server-build modules in SSR mode', () => { + const plugin = makeServerBuildCapturePlugin(); + (plugin as any).configResolved({ build: { ssr: true } }); + + const result = (plugin as any).transform('export function helper() {}', 'src/utils.ts'); + + expect(result).toBeNull(); + }); + + it('should inject capture snippet into the server build module in SSR mode', () => { + const plugin = makeServerBuildCapturePlugin(); + (plugin as any).configResolved({ build: { ssr: true } }); + + const result = (plugin as any).transform(SERVER_BUILD_CODE, SERVER_BUILD_MODULE_ID); + + expect(result).not.toBeNull(); + expect(result.code).toContain(SERVER_BUILD_CODE); + expect(result.code).toContain('__sentrySetServerBuild'); + expect(result.code).toContain('({ routes })'); + expect(result.map).toBeNull(); + }); +}); diff --git a/packages/react-router/test/vite/plugin.test.ts b/packages/react-router/test/vite/plugin.test.ts index f01254ca8869..52306eb0dbd1 100644 --- a/packages/react-router/test/vite/plugin.test.ts +++ b/packages/react-router/test/vite/plugin.test.ts @@ -2,6 +2,7 @@ import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; import { makeConfigInjectorPlugin } from '../../src/vite/makeConfigInjectorPlugin'; import { makeCustomSentryVitePlugins } from '../../src/vite/makeCustomSentryVitePlugins'; import { makeEnableSourceMapsPlugin } from '../../src/vite/makeEnableSourceMapsPlugin'; +import { makeServerBuildCapturePlugin } from '../../src/vite/makeServerBuildCapturePlugin'; import { sentryReactRouter } from '../../src/vite/plugin'; vi.spyOn(console, 'log').mockImplementation(() => { @@ -14,17 +15,20 @@ vi.spyOn(console, 'warn').mockImplementation(() => { vi.mock('../../src/vite/makeCustomSentryVitePlugins'); vi.mock('../../src/vite/makeEnableSourceMapsPlugin'); vi.mock('../../src/vite/makeConfigInjectorPlugin'); +vi.mock('../../src/vite/makeServerBuildCapturePlugin'); 
describe('sentryReactRouter', () => { const mockPlugins = [{ name: 'test-plugin' }]; const mockSourceMapsPlugin = { name: 'source-maps-plugin' }; const mockConfigInjectorPlugin = { name: 'sentry-config-injector' }; + const mockServerBuildCapturePlugin = { name: 'sentry-react-router-server-build-capture' }; beforeEach(() => { vi.clearAllMocks(); vi.mocked(makeCustomSentryVitePlugins).mockResolvedValue(mockPlugins); vi.mocked(makeEnableSourceMapsPlugin).mockReturnValue(mockSourceMapsPlugin); vi.mocked(makeConfigInjectorPlugin).mockReturnValue(mockConfigInjectorPlugin); + vi.mocked(makeServerBuildCapturePlugin).mockReturnValue(mockServerBuildCapturePlugin); }); afterEach(() => { @@ -37,7 +41,7 @@ describe('sentryReactRouter', () => { const result = await sentryReactRouter({}, { command: 'build', mode: 'production' }); - expect(result).toEqual([mockConfigInjectorPlugin]); + expect(result).toEqual([mockConfigInjectorPlugin, mockServerBuildCapturePlugin]); expect(makeCustomSentryVitePlugins).not.toHaveBeenCalled(); expect(makeEnableSourceMapsPlugin).not.toHaveBeenCalled(); @@ -47,7 +51,7 @@ describe('sentryReactRouter', () => { it('should return config injector plugin when not in build mode', async () => { const result = await sentryReactRouter({}, { command: 'serve', mode: 'production' }); - expect(result).toEqual([mockConfigInjectorPlugin]); + expect(result).toEqual([mockConfigInjectorPlugin, mockServerBuildCapturePlugin]); expect(makeCustomSentryVitePlugins).not.toHaveBeenCalled(); expect(makeEnableSourceMapsPlugin).not.toHaveBeenCalled(); }); @@ -55,7 +59,7 @@ describe('sentryReactRouter', () => { it('should return config injector plugin in development build mode', async () => { const result = await sentryReactRouter({}, { command: 'build', mode: 'development' }); - expect(result).toEqual([mockConfigInjectorPlugin]); + expect(result).toEqual([mockConfigInjectorPlugin, mockServerBuildCapturePlugin]); expect(makeCustomSentryVitePlugins).not.toHaveBeenCalled(); 
expect(makeEnableSourceMapsPlugin).not.toHaveBeenCalled(); }); @@ -66,8 +70,14 @@ describe('sentryReactRouter', () => { const result = await sentryReactRouter({}, { command: 'build', mode: 'production' }); - expect(result).toEqual([mockConfigInjectorPlugin, mockSourceMapsPlugin, ...mockPlugins]); + expect(result).toEqual([ + mockConfigInjectorPlugin, + mockServerBuildCapturePlugin, + mockSourceMapsPlugin, + ...mockPlugins, + ]); expect(makeConfigInjectorPlugin).toHaveBeenCalledWith({}); + expect(makeServerBuildCapturePlugin).toHaveBeenCalled(); expect(makeCustomSentryVitePlugins).toHaveBeenCalledWith({}); expect(makeEnableSourceMapsPlugin).toHaveBeenCalledWith({}); diff --git a/packages/remix/package.json b/packages/remix/package.json index c18114d7fe8f..d0e754ea137d 100644 --- a/packages/remix/package.json +++ b/packages/remix/package.json @@ -72,7 +72,7 @@ "@sentry/core": "10.40.0", "@sentry/node": "10.40.0", "@sentry/react": "10.40.0", - "glob": "^10.3.4", + "glob": "^13.0.6", "yargs": "^17.6.0" }, "devDependencies": { diff --git a/packages/remix/scripts/deleteSourcemaps.js b/packages/remix/scripts/deleteSourcemaps.js index f73cc678f7df..82a00b5b0f92 100644 --- a/packages/remix/scripts/deleteSourcemaps.js +++ b/packages/remix/scripts/deleteSourcemaps.js @@ -2,13 +2,13 @@ const fs = require('fs'); const path = require('path'); -const glob = require('glob'); +const { globSync } = require('glob'); function deleteSourcemaps(buildPath) { console.info(`[sentry] Deleting sourcemaps from ${buildPath}`); // Delete all .map files in the build folder and its subfolders - const mapFiles = glob.sync('**/*.map', { cwd: buildPath }); + const mapFiles = globSync('**/*.map', { cwd: buildPath }); mapFiles.forEach(file => { fs.unlinkSync(path.join(buildPath, file)); diff --git a/packages/svelte/package.json b/packages/svelte/package.json index ae8fd0157646..68419169dbab 100644 --- a/packages/svelte/package.json +++ b/packages/svelte/package.json @@ -41,7 +41,7 @@ "dependencies": { 
"@sentry/browser": "10.40.0", "@sentry/core": "10.40.0", - "magic-string": "^0.30.0" + "magic-string": "~0.30.0" }, "peerDependencies": { "svelte": "3.x || 4.x || 5.x" diff --git a/packages/sveltekit/package.json b/packages/sveltekit/package.json index 9d947f1060ec..256fd02c206a 100644 --- a/packages/sveltekit/package.json +++ b/packages/sveltekit/package.json @@ -53,7 +53,7 @@ "@sentry/node": "10.40.0", "@sentry/svelte": "10.40.0", "@sentry/vite-plugin": "^5.1.0", - "magic-string": "0.30.7", + "magic-string": "~0.30.0", "recast": "0.23.11", "sorcery": "1.0.0" }, diff --git a/packages/tanstackstart-react/src/server/wrapFetchWithSentry.ts b/packages/tanstackstart-react/src/server/wrapFetchWithSentry.ts index 22d218ef0b48..d8861ffa8e48 100644 --- a/packages/tanstackstart-react/src/server/wrapFetchWithSentry.ts +++ b/packages/tanstackstart-react/src/server/wrapFetchWithSentry.ts @@ -1,3 +1,4 @@ +import { flushIfServerless } from '@sentry/core'; import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, startSpan } from '@sentry/node'; import { extractServerFunctionSha256 } from './utils'; @@ -32,35 +33,39 @@ export type ServerEntry = { export function wrapFetchWithSentry(serverEntry: ServerEntry): ServerEntry { if (serverEntry.fetch) { serverEntry.fetch = new Proxy(serverEntry.fetch, { - apply: (target, thisArg, args) => { - const request: Request = args[0]; - const url = new URL(request.url); - const method = request.method || 'GET'; + async apply(target, thisArg, args) { + try { + const request: Request = args[0]; + const url = new URL(request.url); + const method = request.method || 'GET'; - // instrument server functions - if (url.pathname.includes('_serverFn') || url.pathname.includes('createServerFn')) { - const functionSha256 = extractServerFunctionSha256(url.pathname); - const op = 'function.tanstackstart'; + // instrument server functions + if (url.pathname.includes('_serverFn') || url.pathname.includes('createServerFn')) { + const 
functionSha256 = extractServerFunctionSha256(url.pathname); + const op = 'function.tanstackstart'; - const serverFunctionSpanAttributes = { - [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.tanstackstart.server', - [SEMANTIC_ATTRIBUTE_SENTRY_OP]: op, - 'tanstackstart.function.hash.sha256': functionSha256, - }; + const serverFunctionSpanAttributes = { + [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.tanstackstart.server', + [SEMANTIC_ATTRIBUTE_SENTRY_OP]: op, + 'tanstackstart.function.hash.sha256': functionSha256, + }; - return startSpan( - { - op: op, - name: `${method} ${url.pathname}`, - attributes: serverFunctionSpanAttributes, - }, - () => { - return target.apply(thisArg, args); - }, - ); - } + return await startSpan( + { + op: op, + name: `${method} ${url.pathname}`, + attributes: serverFunctionSpanAttributes, + }, + async () => { + return target.apply(thisArg, args); + }, + ); + } - return target.apply(thisArg, args); + return await target.apply(thisArg, args); + } finally { + await flushIfServerless(); + } }, }); } diff --git a/packages/tanstackstart-react/test/server/wrapFetchWithSentry.test.ts b/packages/tanstackstart-react/test/server/wrapFetchWithSentry.test.ts new file mode 100644 index 000000000000..68761782a1c6 --- /dev/null +++ b/packages/tanstackstart-react/test/server/wrapFetchWithSentry.test.ts @@ -0,0 +1,66 @@ +import { afterEach, describe, expect, it, vi } from 'vitest'; + +const startSpanSpy = vi.fn((_, callback) => callback()); +const flushIfServerlessSpy = vi.fn().mockResolvedValue(undefined); + +vi.mock('@sentry/node', async importOriginal => { + const original = await importOriginal<typeof import('@sentry/node')>(); + return { + ...original, + startSpan: (...args: unknown[]) => startSpanSpy(...args), + }; +}); + +vi.mock('@sentry/core', async importOriginal => { + const original = await importOriginal<typeof import('@sentry/core')>(); + return { + ...original, + flushIfServerless: (...args: unknown[]) => flushIfServerlessSpy(...args), + }; +}); + +// Import after mocks are set up +const { 
wrapFetchWithSentry } = await import('../../src/server/wrapFetchWithSentry'); + +describe('wrapFetchWithSentry', () => { + afterEach(() => { + vi.clearAllMocks(); + }); + + it('calls flushIfServerless after a regular request', async () => { + const mockResponse = new Response('ok'); + const fetchFn = vi.fn().mockResolvedValue(mockResponse); + + const serverEntry = wrapFetchWithSentry({ fetch: fetchFn }); + const request = new Request('http://localhost:3000/page'); + + await serverEntry.fetch(request); + + expect(fetchFn).toHaveBeenCalled(); + expect(flushIfServerlessSpy).toHaveBeenCalledTimes(1); + }); + + it('calls flushIfServerless after a server function request', async () => { + const mockResponse = new Response('ok'); + const fetchFn = vi.fn().mockResolvedValue(mockResponse); + + const serverEntry = wrapFetchWithSentry({ fetch: fetchFn }); + const request = new Request('http://localhost:3000/_serverFn/abc123'); + + await serverEntry.fetch(request); + + expect(startSpanSpy).toHaveBeenCalled(); + expect(flushIfServerlessSpy).toHaveBeenCalledTimes(1); + }); + + it('calls flushIfServerless even if the handler throws', async () => { + const fetchFn = vi.fn().mockRejectedValue(new Error('handler error')); + + const serverEntry = wrapFetchWithSentry({ fetch: fetchFn }); + const request = new Request('http://localhost:3000/page'); + + await expect(serverEntry.fetch(request)).rejects.toThrow('handler error'); + + expect(flushIfServerlessSpy).toHaveBeenCalledTimes(1); + }); +}); diff --git a/packages/vue/src/router.ts b/packages/vue/src/router.ts index 2506fd220207..8d4100e9b6c8 100644 --- a/packages/vue/src/router.ts +++ b/packages/vue/src/router.ts @@ -30,6 +30,9 @@ export type Route = { interface VueRouter { onError: (fn: (err: Error) => void) => void; beforeEach: (fn: (to: Route, from: Route, next?: () => void) => void) => void; + // Vue Router 3 exposes a `mode` property ('hash' | 'history' | 'abstract'). + // Vue Router 4+ replaced it with `options.history`. 
Used for version detection. + mode?: string; } /** @@ -52,9 +55,16 @@ export function instrumentVueRouter( ): void { let hasHandledFirstPageLoad = false; + // Detect Vue Router 3 by checking for the `mode` property which only exists in VR3. + // Vue Router 4+ uses `options.history` instead and does not expose `mode`. + const isLegacyRouter = 'mode' in router; + router.onError(error => captureException(error, { mechanism: { handled: false } })); - router.beforeEach((to, _from, next) => { + // Use rest params to capture `next` without declaring it as a named parameter. + // This keeps Function.length === 2, which tells Vue Router 4+/5+ to use the + // modern return-based resolution (no deprecation warning in Vue Router 5.0.3+). + router.beforeEach((to: Route, _from: Route, ...rest: [(() => void)?]) => { // We avoid trying to re-fetch the page load span when we know we already handled it the first time const activePageLoadSpan = !hasHandledFirstPageLoad ? getActivePageLoadSpan() : undefined; @@ -116,11 +126,16 @@ export function instrumentVueRouter( }); } - // Vue Router 4 no longer exposes the `next` function, so we need to - // check if it's available before calling it. - // `next` needs to be called in Vue Router 3 so that the hook is resolved. - if (next) { - next(); + // Vue Router 3 requires `next()` to be called to resolve the navigation guard. + // Vue Router 4+ auto-resolves guards with Function.length < 3 via `guardToPromiseFn`. + // In Vue Router 5.0.3+, the `next` callback passed to guards is wrapped with + // `withDeprecationWarning()`, so calling it emits a console warning. We avoid + // calling it on modern routers where it is both unnecessary and noisy. 
+ if (isLegacyRouter) { + const next = rest[0]; + if (typeof next === 'function') { + next(); + } } }); } diff --git a/packages/vue/test/router.test.ts b/packages/vue/test/router.test.ts index 55dcff7bc25b..763f3e51d6f7 100644 --- a/packages/vue/test/router.test.ts +++ b/packages/vue/test/router.test.ts @@ -23,11 +23,9 @@ vi.mock('@sentry/core', async () => { const mockVueRouter = { onError: vi.fn<[(error: Error) => void]>(), - beforeEach: vi.fn<[(from: Route, to: Route, next?: () => void) => void]>(), + beforeEach: vi.fn<[(from: Route, to: Route) => void]>(), }; -const mockNext = vi.fn(); - const testRoutes: Record<string, Route> = { initialPageloadRoute: { matched: [], params: {}, path: '', query: {} }, normalRoute1: { @@ -118,8 +116,8 @@ describe('instrumentVueRouter()', () => { const from = testRoutes[fromKey]!; const to = testRoutes[toKey]!; - beforeEachCallback(to, testRoutes['initialPageloadRoute']!, mockNext); // fake initial pageload - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, testRoutes['initialPageloadRoute']!); // fake initial pageload + beforeEachCallback(to, from); expect(mockStartSpan).toHaveBeenCalledTimes(2); expect(mockStartSpan).toHaveBeenLastCalledWith({ @@ -131,8 +129,6 @@ }, op: 'navigation', }); - - expect(mockNext).toHaveBeenCalledTimes(2); }, ); @@ -171,7 +167,7 @@ describe('instrumentVueRouter()', () => { const from = testRoutes[fromKey]!; const to = testRoutes[toKey]!; - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, from); expect(mockVueRouter.beforeEach).toHaveBeenCalledTimes(1); expect(mockRootSpan.updateName).toHaveBeenCalledWith(transactionName); @@ -180,8 +176,6 @@ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.pageload.vue', ...getAttributesForRoute(to), }); - - expect(mockNext).toHaveBeenCalledTimes(1); }, ); @@ -198,8 +192,8 @@ describe('instrumentVueRouter()', () => { const from = testRoutes.normalRoute1!; const to = 
testRoutes.namedRoute!; - beforeEachCallback(to, testRoutes['initialPageloadRoute']!, mockNext); // fake initial pageload - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, testRoutes['initialPageloadRoute']!); // fake initial pageload + beforeEachCallback(to, from); // first startTx call happens when the instrumentation is initialized (for pageloads) expect(mockStartSpan).toHaveBeenLastCalledWith({ @@ -226,8 +220,8 @@ describe('instrumentVueRouter()', () => { const from = testRoutes.normalRoute1!; const to = testRoutes.namedRoute!; - beforeEachCallback(to, testRoutes['initialPageloadRoute']!, mockNext); // fake initial pageload - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, testRoutes['initialPageloadRoute']!); // fake initial pageload + beforeEachCallback(to, from); // first startTx call happens when the instrumentation is initialized (for pageloads) expect(mockStartSpan).toHaveBeenLastCalledWith({ @@ -284,7 +278,7 @@ describe('instrumentVueRouter()', () => { const to = testRoutes['normalRoute1']!; const from = testRoutes['initialPageloadRoute']!; - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, from); expect(mockVueRouter.beforeEach).toHaveBeenCalledTimes(1); @@ -318,7 +312,7 @@ describe('instrumentVueRouter()', () => { const from = testRoutes['initialPageloadRoute']!; const to = testRoutes['normalRoute1']!; - beforeEachCallback(to, from, mockNext); + beforeEachCallback(to, from); expect(scopeSetTransactionNameSpy).toHaveBeenCalledTimes(1); expect(scopeSetTransactionNameSpy).toHaveBeenCalledWith('/books/:bookId/chapter/:chapterId'); @@ -357,7 +351,7 @@ describe('instrumentVueRouter()', () => { expect(mockVueRouter.beforeEach).toHaveBeenCalledTimes(1); const beforeEachCallback = mockVueRouter.beforeEach.mock.calls[0]![0]!; - beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!, mockNext); + beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!); 
expect(mockRootSpan.updateName).toHaveBeenCalledTimes(expectedCallsAmount); expect(mockStartSpan).not.toHaveBeenCalled(); @@ -381,14 +375,14 @@ describe('instrumentVueRouter()', () => { expect(mockVueRouter.beforeEach).toHaveBeenCalledTimes(1); const beforeEachCallback = mockVueRouter.beforeEach.mock.calls[0]![0]!; - beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!, mockNext); // fake initial pageload - beforeEachCallback(testRoutes['normalRoute2']!, testRoutes['normalRoute1']!, mockNext); + beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!); // fake initial pageload + beforeEachCallback(testRoutes['normalRoute2']!, testRoutes['normalRoute1']!); expect(mockStartSpan).toHaveBeenCalledTimes(expectedCallsAmount); }, ); - it("doesn't throw when `next` is not available in the beforeEach callback (Vue Router 4)", () => { + it('does not declare a third parameter to avoid Vue Router next() deprecation warning', () => { const mockStartSpan = vi.fn().mockReturnValue(MOCK_SPAN); instrumentVueRouter( mockVueRouter, @@ -398,21 +392,45 @@ describe('instrumentVueRouter()', () => { const beforeEachCallback = mockVueRouter.beforeEach.mock.calls[0]![0]!; - const from = testRoutes.normalRoute1!; - const to = testRoutes.namedRoute!; - beforeEachCallback(to, testRoutes['initialPageloadRoute']!, mockNext); // fake initial pageload - beforeEachCallback(to, from, undefined); + // Vue Router uses Function.length to detect whether the guard uses the legacy + // `next` callback. Guards with < 3 params use the modern return-based pattern. 
+ expect(beforeEachCallback.length).toBeLessThan(3); + }); - // first startTx call happens when the instrumentation is initialized (for pageloads) - expect(mockStartSpan).toHaveBeenLastCalledWith({ - name: '/login', - attributes: { - [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.navigation.vue', - [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'route', - ...getAttributesForRoute(to), - }, - op: 'navigation', - }); + it('calls next() for Vue Router 3 (legacy router with mode property)', () => { + const mockNext = vi.fn(); + const mockLegacyRouter = { + onError: vi.fn<[(error: Error) => void]>(), + beforeEach: vi.fn<[(from: Route, to: Route, next?: () => void) => void]>(), + mode: 'history', + }; + + const mockStartSpan = vi.fn().mockReturnValue(MOCK_SPAN); + instrumentVueRouter( + mockLegacyRouter, + { routeLabel: 'name', instrumentPageLoad: true, instrumentNavigation: true }, + mockStartSpan, + ); + + const beforeEachCallback = mockLegacyRouter.beforeEach.mock.calls[0]![0]!; + beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!, mockNext); + + expect(mockNext).toHaveBeenCalledTimes(1); + }); + + it('does not call next() for Vue Router 4+ (modern router without mode property)', () => { + const mockNext = vi.fn(); + const mockStartSpan = vi.fn().mockReturnValue(MOCK_SPAN); + instrumentVueRouter( + mockVueRouter, + { routeLabel: 'name', instrumentPageLoad: true, instrumentNavigation: true }, + mockStartSpan, + ); + + const beforeEachCallback = mockVueRouter.beforeEach.mock.calls[0]![0]!; + beforeEachCallback(testRoutes['normalRoute1']!, testRoutes['initialPageloadRoute']!, mockNext); + + expect(mockNext).not.toHaveBeenCalled(); }); }); diff --git a/yarn.lock b/yarn.lock index 5bd273f6d7a2..c4b44e29e3fa 100644 --- a/yarn.lock +++ b/yarn.lock @@ -426,7 +426,7 @@ "@anthropic-ai/sdk@^0.65.0": version "0.65.0" - resolved "https://registry.npmjs.org/@anthropic-ai/sdk/-/sdk-0.65.0.tgz#3f464fe2029eacf8e7e7fb8197579d00c8ca7502" + resolved 
"https://registry.yarnpkg.com/@anthropic-ai/sdk/-/sdk-0.65.0.tgz#3f464fe2029eacf8e7e7fb8197579d00c8ca7502" integrity sha512-zIdPOcrCVEI8t3Di40nH4z9EoeyGZfXbYSvWdDLsB/KkaSYMnEgC7gmcgWu83g2NTn1ZTpbMvpdttWDGGIk6zw== dependencies: json-schema-to-ts "^3.1.1" @@ -3001,7 +3001,7 @@ "@cfworker/json-schema@^4.0.2": version "4.1.1" - resolved "https://registry.npmjs.org/@cfworker/json-schema/-/json-schema-4.1.1.tgz#4a2a3947ee9fa7b7c24be981422831b8674c3be6" + resolved "https://registry.yarnpkg.com/@cfworker/json-schema/-/json-schema-4.1.1.tgz#4a2a3947ee9fa7b7c24be981422831b8674c3be6" integrity sha512-gAmrUZSGtKc3AiBL71iNWxDsyUC5uMaKKGdvzYsBoTW/xi42JQHl7eKV2OYzCUqvc+D2RCcf7EXY2iCyFIk6og== "@clack/core@1.0.0": @@ -4588,7 +4588,7 @@ "@google/genai@^1.20.0": version "1.20.0" - resolved "https://registry.npmjs.org/@google/genai/-/genai-1.20.0.tgz#b728bdb383fc58fbb1b92eff26e831ff598688c0" + resolved "https://registry.yarnpkg.com/@google/genai/-/genai-1.20.0.tgz#b728bdb383fc58fbb1b92eff26e831ff598688c0" integrity sha512-QdShxO9LX35jFogy3iKprQNqgKKveux4H2QjOnyIvyHRuGi6PHiz3fjNf8Y0VPY8o5V2fHqR2XqiSVoz7yZs0w== dependencies: google-auth-library "^9.14.2" @@ -5064,18 +5064,6 @@ resolved "https://registry.yarnpkg.com/@ioredis/commands/-/commands-1.5.0.tgz#3dddcea446a4b1dc177d0743a1e07ff50691652a" integrity sha512-eUgLqrMf8nJkZxT24JvVRrQya1vZkQh8BBeYNwGDqa5I0VUi8ACx7uFvAaLxintokpTenkK6DASvo/bvNbBGow== -"@isaacs/balanced-match@^4.0.1": - version "4.0.1" - resolved "https://registry.yarnpkg.com/@isaacs/balanced-match/-/balanced-match-4.0.1.tgz#3081dadbc3460661b751e7591d7faea5df39dd29" - integrity sha512-yzMTt9lEb8Gv7zRioUilSglI0c0smZ9k5D65677DLWLtWJaXIS3CqcGyUFByYKlnUj6TkjLVs54fBl6+TiGQDQ== - -"@isaacs/brace-expansion@^5.0.0": - version "5.0.1" - resolved "https://registry.yarnpkg.com/@isaacs/brace-expansion/-/brace-expansion-5.0.1.tgz#0ef5a92d91f2fff2a37646ce54da9e5f599f6eff" - integrity sha512-WMz71T1JS624nWj2n2fnYAuPovhv7EUhk69R6i9dsVyzxt5eM3bjwvgk9L+APE1TRscGysAVMANkB0jh0LQZrQ== - 
dependencies: - "@isaacs/balanced-match" "^4.0.1" - "@isaacs/cliui@^8.0.2": version "8.0.2" resolved "https://registry.yarnpkg.com/@isaacs/cliui/-/cliui-8.0.2.tgz#b37667b7bc181c168782259bab42474fbf52b550" @@ -5228,7 +5216,7 @@ "@langchain/anthropic@^0.3.10": version "0.3.31" - resolved "https://registry.npmjs.org/@langchain/anthropic/-/anthropic-0.3.31.tgz#80bc2464ab98cfb8df0de50cf219d92cfe5934e1" + resolved "https://registry.yarnpkg.com/@langchain/anthropic/-/anthropic-0.3.31.tgz#80bc2464ab98cfb8df0de50cf219d92cfe5934e1" integrity sha512-XyjwE1mA1I6sirSlVZtI6tyv7nH3+b8F5IFDi9WNKA8+SidJ0o3cP90TxrK7x1sSLmdj+su3f8s2hOusw6xpaw== dependencies: "@anthropic-ai/sdk" "^0.65.0" @@ -5254,21 +5242,21 @@ "@langchain/langgraph-checkpoint@^1.0.0": version "1.0.0" - resolved "https://registry.npmjs.org/@langchain/langgraph-checkpoint/-/langgraph-checkpoint-1.0.0.tgz#ece2ede439d0d0b0b532c4be7817fd5029afe4f8" + resolved "https://registry.yarnpkg.com/@langchain/langgraph-checkpoint/-/langgraph-checkpoint-1.0.0.tgz#ece2ede439d0d0b0b532c4be7817fd5029afe4f8" integrity sha512-xrclBGvNCXDmi0Nz28t3vjpxSH6UYx6w5XAXSiiB1WEdc2xD2iY/a913I3x3a31XpInUW/GGfXXfePfaghV54A== dependencies: uuid "^10.0.0" "@langchain/langgraph-checkpoint@~0.0.17": version "0.0.18" - resolved "https://registry.npmjs.org/@langchain/langgraph-checkpoint/-/langgraph-checkpoint-0.0.18.tgz#2f7a9cdeda948ccc8d312ba9463810709d71d0b8" + resolved "https://registry.yarnpkg.com/@langchain/langgraph-checkpoint/-/langgraph-checkpoint-0.0.18.tgz#2f7a9cdeda948ccc8d312ba9463810709d71d0b8" integrity sha512-IS7zJj36VgY+4pf8ZjsVuUWef7oTwt1y9ylvwu0aLuOn1d0fg05Om9DLm3v2GZ2Df6bhLV1kfWAM0IAl9O5rQQ== dependencies: uuid "^10.0.0" "@langchain/langgraph-sdk@~0.0.32": version "0.0.112" - resolved "https://registry.npmjs.org/@langchain/langgraph-sdk/-/langgraph-sdk-0.0.112.tgz#3186919b60e3381aa8aa32ea9b9c39df1f02a9fd" + resolved 
"https://registry.yarnpkg.com/@langchain/langgraph-sdk/-/langgraph-sdk-0.0.112.tgz#3186919b60e3381aa8aa32ea9b9c39df1f02a9fd" integrity sha512-/9W5HSWCqYgwma6EoOspL4BGYxGxeJP6lIquPSF4FA0JlKopaUv58ucZC3vAgdJyCgg6sorCIV/qg7SGpEcCLw== dependencies: "@types/json-schema" "^7.0.15" @@ -5278,7 +5266,7 @@ "@langchain/langgraph-sdk@~1.0.0": version "1.0.0" - resolved "https://registry.npmjs.org/@langchain/langgraph-sdk/-/langgraph-sdk-1.0.0.tgz#16faca6cc426432dee9316428d0aecd94e5b7989" + resolved "https://registry.yarnpkg.com/@langchain/langgraph-sdk/-/langgraph-sdk-1.0.0.tgz#16faca6cc426432dee9316428d0aecd94e5b7989" integrity sha512-g25ti2W7Dl5wUPlNK+0uIGbeNFqf98imhHlbdVVKTTkDYLhi/pI1KTgsSSkzkeLuBIfvt2b0q6anQwCs7XBlbw== dependencies: p-queue "^6.6.2" @@ -5287,7 +5275,7 @@ "@langchain/langgraph@^0.2.32": version "0.2.74" - resolved "https://registry.npmjs.org/@langchain/langgraph/-/langgraph-0.2.74.tgz#37367a1e8bafda3548037a91449a69a84f285def" + resolved "https://registry.yarnpkg.com/@langchain/langgraph/-/langgraph-0.2.74.tgz#37367a1e8bafda3548037a91449a69a84f285def" integrity sha512-oHpEi5sTZTPaeZX1UnzfM2OAJ21QGQrwReTV6+QnX7h8nDCBzhtipAw1cK616S+X8zpcVOjgOtJuaJhXa4mN8w== dependencies: "@langchain/langgraph-checkpoint" "~0.0.17" @@ -5297,7 +5285,7 @@ "@langchain/langgraph@^1.0.1": version "1.0.2" - resolved "https://registry.npmjs.org/@langchain/langgraph/-/langgraph-1.0.2.tgz#62de931edac0dd850daf708bd6f8f3835cf25a5e" + resolved "https://registry.yarnpkg.com/@langchain/langgraph/-/langgraph-1.0.2.tgz#62de931edac0dd850daf708bd6f8f3835cf25a5e" integrity sha512-syxzzWTnmpCL+RhUEvalUeOXFoZy/KkzHa2Da2gKf18zsf9Dkbh3rfnRDrTyUGS1XSTejq07s4rg1qntdEDs2A== dependencies: "@langchain/langgraph-checkpoint" "^1.0.0" @@ -7333,130 +7321,130 @@ estree-walker "^2.0.2" picomatch "^4.0.2" -"@rollup/rollup-android-arm-eabi@4.57.0": - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.57.0.tgz#f762035679a6b168138c94c960fda0b0cdb00d98" - integrity sha512-tPgXB6cDTndIe1ah7u6amCI1T0SsnlOuKgg10Xh3uizJk4e5M1JGaUMk7J4ciuAUcFpbOiNhm2XIjP9ON0dUqA== - -"@rollup/rollup-android-arm64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.57.0.tgz#1061ce0bfa6a6da361bda52a2949612769cd22ef" - integrity sha512-sa4LyseLLXr1onr97StkU1Nb7fWcg6niokTwEVNOO7awaKaoRObQ54+V/hrF/BP1noMEaaAW6Fg2d/CfLiq3Mg== - -"@rollup/rollup-darwin-arm64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.57.0.tgz#20d65f967566000d22ef6c9defb0f96d2f95ed79" - integrity sha512-/NNIj9A7yLjKdmkx5dC2XQ9DmjIECpGpwHoGmA5E1AhU0fuICSqSWScPhN1yLCkEdkCwJIDu2xIeLPs60MNIVg== - -"@rollup/rollup-darwin-x64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.57.0.tgz#2a805303beb4cd44bfef993c39582cb0f1794f90" - integrity sha512-xoh8abqgPrPYPr7pTYipqnUi1V3em56JzE/HgDgitTqZBZ3yKCWI+7KUkceM6tNweyUKYru1UMi7FC060RyKwA== - -"@rollup/rollup-freebsd-arm64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.57.0.tgz#7cf26a60d7245e9207a253ac07f11ddfcc47d622" - integrity sha512-PCkMh7fNahWSbA0OTUQ2OpYHpjZZr0hPr8lId8twD7a7SeWrvT3xJVyza+dQwXSSq4yEQTMoXgNOfMCsn8584g== - -"@rollup/rollup-freebsd-x64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.57.0.tgz#2b1acc1e624b47f676f526df30bb4357ea21f9b6" - integrity sha512-1j3stGx+qbhXql4OCDZhnK7b01s6rBKNybfsX+TNrEe9JNq4DLi1yGiR1xW+nL+FNVvI4D02PUnl6gJ/2y6WJA== - -"@rollup/rollup-linux-arm-gnueabihf@4.57.0": - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.57.0.tgz#1ba1ef444365a51687c7af2824b370791a1e3aaf" - integrity sha512-eyrr5W08Ms9uM0mLcKfM/Uzx7hjhz2bcjv8P2uynfj0yU8GGPdz8iYrBPhiLOZqahoAMB8ZiolRZPbbU2MAi6Q== - -"@rollup/rollup-linux-arm-musleabihf@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.57.0.tgz#e49863b683644bbbb9abc5b051c9b9d59774c3a0" - integrity sha512-Xds90ITXJCNyX9pDhqf85MKWUI4lqjiPAipJ8OLp8xqI2Ehk+TCVhF9rvOoN8xTbcafow3QOThkNnrM33uCFQA== - -"@rollup/rollup-linux-arm64-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.57.0.tgz#fda3bfd43d2390d2d99bc7d9617c2db2941da52b" - integrity sha512-Xws2KA4CLvZmXjy46SQaXSejuKPhwVdaNinldoYfqruZBaJHqVo6hnRa8SDo9z7PBW5x84SH64+izmldCgbezw== - -"@rollup/rollup-linux-arm64-musl@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.57.0.tgz#aea6199031404f80a0ccf33d5d3a63de53819da0" - integrity sha512-hrKXKbX5FdaRJj7lTMusmvKbhMJSGWJ+w++4KmjiDhpTgNlhYobMvKfDoIWecy4O60K6yA4SnztGuNTQF+Lplw== - -"@rollup/rollup-linux-loong64-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.57.0.tgz#f467333a5691f69a18295a7051e1cebb6815fdfe" - integrity sha512-6A+nccfSDGKsPm00d3xKcrsBcbqzCTAukjwWK6rbuAnB2bHaL3r9720HBVZ/no7+FhZLz/U3GwwZZEh6tOSI8Q== - -"@rollup/rollup-linux-loong64-musl@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.57.0.tgz#e46dffc29692caa743140636eb0d1d9a24ed0fc3" - integrity sha512-4P1VyYUe6XAJtQH1Hh99THxr0GKMMwIXsRNOceLrJnaHTDgk1FTcTimDgneRJPvB3LqDQxUmroBclQ1S0cIJwQ== - -"@rollup/rollup-linux-ppc64-gnu@4.57.0": - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.57.0.tgz#be5b4494047ccbaadf1542fe9ac45b7788e73968" - integrity sha512-8Vv6pLuIZCMcgXre6c3nOPhE0gjz1+nZP6T+hwWjr7sVH8k0jRkH+XnfjjOTglyMBdSKBPPz54/y1gToSKwrSQ== - -"@rollup/rollup-linux-ppc64-musl@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.57.0.tgz#b14ce2b0fe9c37fd0646ec3095087c1d64c791f4" - integrity sha512-r1te1M0Sm2TBVD/RxBPC6RZVwNqUTwJTA7w+C/IW5v9Ssu6xmxWEi+iJQlpBhtUiT1raJ5b48pI8tBvEjEFnFA== - -"@rollup/rollup-linux-riscv64-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.57.0.tgz#b78357f88ee7a34f677b118714594e37a2362a8c" - integrity sha512-say0uMU/RaPm3CDQLxUUTF2oNWL8ysvHkAjcCzV2znxBr23kFfaxocS9qJm+NdkRhF8wtdEEAJuYcLPhSPbjuQ== - -"@rollup/rollup-linux-riscv64-musl@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.57.0.tgz#f44107ec0c30d691552c89eb3e4f287c33c56c3c" - integrity sha512-/MU7/HizQGsnBREtRpcSbSV1zfkoxSTR7wLsRmBPQ8FwUj5sykrP1MyJTvsxP5KBq9SyE6kH8UQQQwa0ASeoQQ== - -"@rollup/rollup-linux-s390x-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.57.0.tgz#ddb1cf80fb21b376a45a4e93ffdbeb15205d38f3" - integrity sha512-Q9eh+gUGILIHEaJf66aF6a414jQbDnn29zeu0eX3dHMuysnhTvsUvZTCAyZ6tJhUjnvzBKE4FtuaYxutxRZpOg== - -"@rollup/rollup-linux-x64-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.57.0.tgz#0da46a644c87e1d8b13da5e2901037193caea8d3" - integrity sha512-OR5p5yG5OKSxHReWmwvM0P+VTPMwoBS45PXTMYaskKQqybkS3Kmugq1W+YbNWArF8/s7jQScgzXUhArzEQ7x0A== - -"@rollup/rollup-linux-x64-musl@4.57.0": - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.57.0.tgz#e561c93b6a23114a308396806551c25e28d3e303" - integrity sha512-XeatKzo4lHDsVEbm1XDHZlhYZZSQYym6dg2X/Ko0kSFgio+KXLsxwJQprnR48GvdIKDOpqWqssC3iBCjoMcMpw== - -"@rollup/rollup-openbsd-x64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.57.0.tgz#52490600775364a0476f26be7ddc416dfa11439b" - integrity sha512-Lu71y78F5qOfYmubYLHPcJm74GZLU6UJ4THkf/a1K7Tz2ycwC2VUbsqbJAXaR6Bx70SRdlVrt2+n5l7F0agTUw== - -"@rollup/rollup-openharmony-arm64@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.57.0.tgz#c25988aae57bd21fa7d0fcb014ef85ec8987ad2c" - integrity sha512-v5xwKDWcu7qhAEcsUubiav7r+48Uk/ENWdr82MBZZRIm7zThSxCIVDfb3ZeRRq9yqk+oIzMdDo6fCcA5DHfMyA== - -"@rollup/rollup-win32-arm64-msvc@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.57.0.tgz#572a8cd78442441121f1a6b5ad686ab723c31ae4" - integrity sha512-XnaaaSMGSI6Wk8F4KK3QP7GfuuhjGchElsVerCplUuxRIzdvZ7hRBpLR0omCmw+kI2RFJB80nenhOoGXlJ5TfQ== - -"@rollup/rollup-win32-ia32-msvc@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.57.0.tgz#431fa95c0be8377907fe4e7070aaa4016c7b7e3b" - integrity sha512-3K1lP+3BXY4t4VihLw5MEg6IZD3ojSYzqzBG571W3kNQe4G4CcFpSUQVgurYgib5d+YaCjeFow8QivWp8vuSvA== - -"@rollup/rollup-win32-x64-gnu@4.57.0": - version "4.57.0" - resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.57.0.tgz#19db67feb9c5fe09b1358efd1d97c5f6b299d347" - integrity sha512-MDk610P/vJGc5L5ImE4k5s+GZT3en0KoK1MKPXCRgzmksAMk79j4h3k1IerxTNqwDLxsGxStEZVBqG0gIqZqoA== - -"@rollup/rollup-win32-x64-msvc@4.57.0": - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.57.0.tgz#6f38851da1123ac0380121108abd31ff21205c3d" - integrity sha512-Zv7v6q6aV+VslnpwzqKAmrk5JdVkLUzok2208ZXGipjb+msxBr/fJPZyeEXiFgH7k62Ak0SLIfxQRZQvTuf7rQ== +"@rollup/rollup-android-arm-eabi@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.59.0.tgz#a6742c74c7d9d6d604ef8a48f99326b4ecda3d82" + integrity sha512-upnNBkA6ZH2VKGcBj9Fyl9IGNPULcjXRlg0LLeaioQWueH30p6IXtJEbKAgvyv+mJaMxSm1l6xwDXYjpEMiLMg== + +"@rollup/rollup-android-arm64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.59.0.tgz#97247be098de4df0c11971089fd2edf80a5da8cf" + integrity sha512-hZ+Zxj3SySm4A/DylsDKZAeVg0mvi++0PYVceVyX7hemkw7OreKdCvW2oQ3T1FMZvCaQXqOTHb8qmBShoqk69Q== + +"@rollup/rollup-darwin-arm64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.59.0.tgz#674852cf14cf11b8056e0b1a2f4e872b523576cf" + integrity sha512-W2Psnbh1J8ZJw0xKAd8zdNgF9HRLkdWwwdWqubSVk0pUuQkoHnv7rx4GiF9rT4t5DIZGAsConRE3AxCdJ4m8rg== + +"@rollup/rollup-darwin-x64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.59.0.tgz#36dfd7ed0aaf4d9d89d9ef983af72632455b0246" + integrity sha512-ZW2KkwlS4lwTv7ZVsYDiARfFCnSGhzYPdiOU4IM2fDbL+QGlyAbjgSFuqNRbSthybLbIJ915UtZBtmuLrQAT/w== + +"@rollup/rollup-freebsd-arm64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.59.0.tgz#2f87c2074b4220260fdb52a9996246edfc633c22" + integrity sha512-EsKaJ5ytAu9jI3lonzn3BgG8iRBjV4LxZexygcQbpiU0wU0ATxhNVEpXKfUa0pS05gTcSDMKpn3Sx+QB9RlTTA== + +"@rollup/rollup-freebsd-x64@4.59.0": + version "4.59.0" + resolved 
"https://registry.yarnpkg.com/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.59.0.tgz#9b5a26522a38a95dc06616d1939d4d9a76937803" + integrity sha512-d3DuZi2KzTMjImrxoHIAODUZYoUUMsuUiY4SRRcJy6NJoZ6iIqWnJu9IScV9jXysyGMVuW+KNzZvBLOcpdl3Vg== + +"@rollup/rollup-linux-arm-gnueabihf@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.59.0.tgz#86aa4859385a8734235b5e40a48e52d770758c3a" + integrity sha512-t4ONHboXi/3E0rT6OZl1pKbl2Vgxf9vJfWgmUoCEVQVxhW6Cw/c8I6hbbu7DAvgp82RKiH7TpLwxnJeKv2pbsw== + +"@rollup/rollup-linux-arm-musleabihf@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.59.0.tgz#cbe70e56e6ece8dac83eb773b624fc9e5a460976" + integrity sha512-CikFT7aYPA2ufMD086cVORBYGHffBo4K8MQ4uPS/ZnY54GKj36i196u8U+aDVT2LX4eSMbyHtyOh7D7Zvk2VvA== + +"@rollup/rollup-linux-arm64-gnu@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.59.0.tgz#d14992a2e653bc3263d284bc6579b7a2890e1c45" + integrity sha512-jYgUGk5aLd1nUb1CtQ8E+t5JhLc9x5WdBKew9ZgAXg7DBk0ZHErLHdXM24rfX+bKrFe+Xp5YuJo54I5HFjGDAA== + +"@rollup/rollup-linux-arm64-musl@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.59.0.tgz#2fdd1ddc434ea90aeaa0851d2044789b4d07f6da" + integrity sha512-peZRVEdnFWZ5Bh2KeumKG9ty7aCXzzEsHShOZEFiCQlDEepP1dpUl/SrUNXNg13UmZl+gzVDPsiCwnV1uI0RUA== + +"@rollup/rollup-linux-loong64-gnu@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.59.0.tgz#8a181e6f89f969f21666a743cd411416c80099e7" + integrity sha512-gbUSW/97f7+r4gHy3Jlup8zDG190AuodsWnNiXErp9mT90iCy9NKKU0Xwx5k8VlRAIV2uU9CsMnEFg/xXaOfXg== + +"@rollup/rollup-linux-loong64-musl@4.59.0": + version "4.59.0" + resolved 
"https://registry.yarnpkg.com/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.59.0.tgz#904125af2babc395f8061daa27b5af1f4e3f2f78" + integrity sha512-yTRONe79E+o0FWFijasoTjtzG9EBedFXJMl888NBEDCDV9I2wGbFFfJQQe63OijbFCUZqxpHz1GzpbtSFikJ4Q== + +"@rollup/rollup-linux-ppc64-gnu@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.59.0.tgz#a57970ac6864c9a3447411a658224bdcf948be22" + integrity sha512-sw1o3tfyk12k3OEpRddF68a1unZ5VCN7zoTNtSn2KndUE+ea3m3ROOKRCZxEpmT9nsGnogpFP9x6mnLTCaoLkA== + +"@rollup/rollup-linux-ppc64-musl@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.59.0.tgz#bb84de5b26870567a4267666e08891e80bb56a63" + integrity sha512-+2kLtQ4xT3AiIxkzFVFXfsmlZiG5FXYW7ZyIIvGA7Bdeuh9Z0aN4hVyXS/G1E9bTP/vqszNIN/pUKCk/BTHsKA== + +"@rollup/rollup-linux-riscv64-gnu@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.59.0.tgz#72d00d2c7fb375ce3564e759db33f17a35bffab9" + integrity sha512-NDYMpsXYJJaj+I7UdwIuHHNxXZ/b/N2hR15NyH3m2qAtb/hHPA4g4SuuvrdxetTdndfj9b1WOmy73kcPRoERUg== + +"@rollup/rollup-linux-riscv64-musl@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.59.0.tgz#4c166ef58e718f9245bd31873384ba15a5c1a883" + integrity sha512-nLckB8WOqHIf1bhymk+oHxvM9D3tyPndZH8i8+35p/1YiVoVswPid2yLzgX7ZJP0KQvnkhM4H6QZ5m0LzbyIAg== + +"@rollup/rollup-linux-s390x-gnu@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.59.0.tgz#bb5025cde9a61db478c2ca7215808ad3bce73a09" + integrity sha512-oF87Ie3uAIvORFBpwnCvUzdeYUqi2wY6jRFWJAy1qus/udHFYIkplYRW+wo+GRUP4sKzYdmE1Y3+rY5Gc4ZO+w== + +"@rollup/rollup-linux-x64-gnu@4.59.0": + version "4.59.0" + resolved 
"https://registry.yarnpkg.com/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.59.0.tgz#9b66b1f9cd95c6624c788f021c756269ffed1552" + integrity sha512-3AHmtQq/ppNuUspKAlvA8HtLybkDflkMuLK4DPo77DfthRb71V84/c4MlWJXixZz4uruIH4uaa07IqoAkG64fg== + +"@rollup/rollup-linux-x64-musl@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.59.0.tgz#b007ca255dc7166017d57d7d2451963f0bd23fd9" + integrity sha512-2UdiwS/9cTAx7qIUZB/fWtToJwvt0Vbo0zmnYt7ED35KPg13Q0ym1g442THLC7VyI6JfYTP4PiSOWyoMdV2/xg== + +"@rollup/rollup-openbsd-x64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.59.0.tgz#e8b357b2d1aa2c8d76a98f5f0d889eabe93f4ef9" + integrity sha512-M3bLRAVk6GOwFlPTIxVBSYKUaqfLrn8l0psKinkCFxl4lQvOSz8ZrKDz2gxcBwHFpci0B6rttydI4IpS4IS/jQ== + +"@rollup/rollup-openharmony-arm64@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.59.0.tgz#96c2e3f4aacd3d921981329831ff8dde492204dc" + integrity sha512-tt9KBJqaqp5i5HUZzoafHZX8b5Q2Fe7UjYERADll83O4fGqJ49O1FsL6LpdzVFQcpwvnyd0i+K/VSwu/o/nWlA== + +"@rollup/rollup-win32-arm64-msvc@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.59.0.tgz#2d865149d706d938df8b4b8f117e69a77646d581" + integrity sha512-V5B6mG7OrGTwnxaNUzZTDTjDS7F75PO1ae6MJYdiMu60sq0CqN5CVeVsbhPxalupvTX8gXVSU9gq+Rx1/hvu6A== + +"@rollup/rollup-win32-ia32-msvc@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.59.0.tgz#abe1593be0fa92325e9971c8da429c5e05b92c36" + integrity sha512-UKFMHPuM9R0iBegwzKF4y0C4J9u8C6MEJgFuXTBerMk7EJ92GFVFYBfOZaSGLu6COf7FxpQNqhNS4c4icUPqxA== + +"@rollup/rollup-win32-x64-gnu@4.59.0": + version "4.59.0" + resolved 
"https://registry.yarnpkg.com/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.59.0.tgz#c4af3e9518c9a5cd4b1c163dc81d0ad4d82e7eab" + integrity sha512-laBkYlSS1n2L8fSo1thDNGrCTQMmxjYY5G0WFWjFFYZkKPjsMBsgJfGf4TLxXrF6RyhI60L8TMOjBMvXiTcxeA== + +"@rollup/rollup-win32-x64-msvc@4.59.0": + version "4.59.0" + resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.59.0.tgz#4584a8a87b29188a4c1fe987a9fcf701e256d86c" + integrity sha512-2HRCml6OztYXyJXAvdDXPKcawukWY2GpR5/nxKp4iBgiO3wcoEGkAaqctIbZcNB6KlUQBIqt8VYkNSj2397EfA== "@rtsao/scc@^1.1.0": version "1.1.0" @@ -9305,7 +9293,7 @@ dependencies: "@types/node" "*" -"@types/glob@*", "@types/glob@8.0.0": +"@types/glob@*": version "8.0.0" resolved "https://registry.yarnpkg.com/@types/glob/-/glob-8.0.0.tgz#321607e9cbaec54f687a0792b2d1d370739455d2" integrity sha512-l6NQsDDyQUVeoTynNpC9uRvCUint/gSUXQA2euwmTuWGvPY5LSDUu6tkCtJB2SvGQlJQzLaKqcGZP4//7EDveA== @@ -9762,7 +9750,7 @@ "@types/uuid@^10.0.0": version "10.0.0" - resolved "https://registry.npmjs.org/@types/uuid/-/uuid-10.0.0.tgz#e9c07fe50da0f53dc24970cca94d619ff03f6f6d" + resolved "https://registry.yarnpkg.com/@types/uuid/-/uuid-10.0.0.tgz#e9c07fe50da0f53dc24970cca94d619ff03f6f6d" integrity sha512-7gqG38EyHgyP1S+7+xomFtL+ZNHcKv6DwNaCZmJmo1vgMugyF3TCnXVg4t1uk89mLNwnLtnY3TpOpCOyp1/xHQ== "@types/webidl-conversions@*": @@ -12188,10 +12176,10 @@ brace-expansion@^1.1.7: balanced-match "^1.0.0" concat-map "0.0.1" -brace-expansion@^2.0.1: - version "2.0.1" - resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-2.0.1.tgz#1edc459e0f0c548486ecf9fc99f2221364b9a0ae" - integrity sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA== +brace-expansion@^2.0.1, brace-expansion@^2.0.2: + version "2.0.2" + resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-2.0.2.tgz#54fc53237a613d854c7bd37463aad17df87214e7" + integrity 
sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ== dependencies: balanced-match "^1.0.0" @@ -13323,7 +13311,7 @@ cliui@^8.0.1: clone-deep@^4.0.1: version "4.0.1" - resolved "https://registry.npmjs.org/clone-deep/-/clone-deep-4.0.1.tgz#c19fd9bdbbf85942b4fd979c84dcf7d5f07c2387" + resolved "https://registry.yarnpkg.com/clone-deep/-/clone-deep-4.0.1.tgz#c19fd9bdbbf85942b4fd979c84dcf7d5f07c2387" integrity sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ== dependencies: is-plain-object "^2.0.4" @@ -13649,7 +13637,7 @@ console-control-strings@^1.1.0: console-table-printer@^2.12.1: version "2.14.6" - resolved "https://registry.npmjs.org/console-table-printer/-/console-table-printer-2.14.6.tgz#edfe0bf311fa2701922ed509443145ab51e06436" + resolved "https://registry.yarnpkg.com/console-table-printer/-/console-table-printer-2.14.6.tgz#edfe0bf311fa2701922ed509443145ab51e06436" integrity sha512-MCBl5HNVaFuuHW6FGbL/4fB7N/ormCy+tQ+sxTrF6QtSbSNETvPuOVbkJBhzDgYhvjWGrTma4eYJa37ZuoQsPw== dependencies: simple-wcswidth "^1.0.1" @@ -17700,7 +17688,7 @@ gaxios@^4.0.0: gaxios@^6.0.0, gaxios@^6.1.1: version "6.7.1" - resolved "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz#ebd9f7093ede3ba502685e73390248bb5b7f71fb" + resolved "https://registry.yarnpkg.com/gaxios/-/gaxios-6.7.1.tgz#ebd9f7093ede3ba502685e73390248bb5b7f71fb" integrity sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ== dependencies: extend "^3.0.2" @@ -17925,7 +17913,7 @@ glob@8.0.3: minimatch "^5.0.1" once "^1.3.0" -glob@^10.0.0, glob@^10.3.10, glob@^10.3.4, glob@^10.3.7, glob@^10.4.1: +glob@^10.0.0, glob@^10.3.10, glob@^10.3.7, glob@^10.4.1: version "10.5.0" resolved "https://registry.yarnpkg.com/glob/-/glob-10.5.0.tgz#8ec0355919cd3338c28428a23d4f24ecc5fe738c" integrity sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg== @@ -17937,7 +17925,7 @@ 
glob@^10.0.0, glob@^10.3.10, glob@^10.3.4, glob@^10.3.7, glob@^10.4.1: package-json-from-dist "^1.0.0" path-scurry "^1.11.1" -glob@^13.0.0, glob@^13.0.1, glob@^13.0.6: +glob@^13.0.0, glob@^13.0.3, glob@^13.0.6: version "13.0.6" resolved "https://registry.yarnpkg.com/glob/-/glob-13.0.6.tgz#078666566a425147ccacfbd2e332deb66a2be71d" integrity sha512-Wjlyrolmm8uDpm/ogGyXZXb1Z+Ca2B8NbJwqBVg0axK9GbBeoS7yGV6vjXnYdGm6X53iehEuxxbyiKp8QmN4Vw== @@ -18058,7 +18046,7 @@ globby@10.0.0: globby@11, globby@^11.0.3, globby@^11.1.0: version "11.1.0" - resolved "https://registry.npmjs.org/globby/-/globby-11.1.0.tgz#bd4be98bb042f83d796f7e3811991fbe82a0d34b" + resolved "https://registry.yarnpkg.com/globby/-/globby-11.1.0.tgz#bd4be98bb042f83d796f7e3811991fbe82a0d34b" integrity sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g== dependencies: array-union "^2.1.0" @@ -19982,7 +19970,7 @@ js-tokens@^9.0.1: js-yaml@^3.10.0, js-yaml@^3.13.0, js-yaml@^3.13.1, js-yaml@^3.2.5, js-yaml@^3.2.7: version "3.14.2" - resolved "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.2.tgz" + resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-3.14.2.tgz" integrity sha512-PMSmkqxr106Xa156c2M265Z+FTrPl+oxd/rgOQy2tijQeK5TxQ43psO1ZCwhVOSdnn+RzkzlRz/eY4BgJBYVpg== dependencies: argparse "^1.0.7" @@ -19990,7 +19978,7 @@ js-yaml@^3.10.0, js-yaml@^3.13.0, js-yaml@^3.13.1, js-yaml@^3.2.5, js-yaml@^3.2. 
js-yaml@^4.1.0: version "4.1.1" - resolved "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz" + resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-4.1.1.tgz" integrity sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA== dependencies: argparse "^2.0.1" @@ -20347,7 +20335,7 @@ kuler@^2.0.0: langsmith@^0.3.67: version "0.3.74" - resolved "https://registry.npmjs.org/langsmith/-/langsmith-0.3.74.tgz#014d31a9ff7530b54f0d797502abd512ce8fb6fb" + resolved "https://registry.yarnpkg.com/langsmith/-/langsmith-0.3.74.tgz#014d31a9ff7530b54f0d797502abd512ce8fb6fb" integrity sha512-ZuW3Qawz8w88XcuCRH91yTp6lsdGuwzRqZ5J0Hf5q/AjMz7DwcSv0MkE6V5W+8hFMI850QZN2Wlxwm3R9lHlZg== dependencies: "@types/uuid" "^10.0.0" @@ -21010,13 +20998,6 @@ magic-string@0.26.2: dependencies: sourcemap-codec "^1.4.8" -magic-string@0.30.7: - version "0.30.7" - resolved "https://registry.yarnpkg.com/magic-string/-/magic-string-0.30.7.tgz#0cecd0527d473298679da95a2d7aeb8c64048505" - integrity sha512-8vBuFF/I/+OSLRmdf2wwFCJCz+nSn0m6DPvGH1fS/KiQoSaR+sETbov0eIk9KhEKy8CYqIkIAnbohxT/4H0kuA== - dependencies: - "@jridgewell/sourcemap-codec" "^1.4.15" - magic-string@0.30.8: version "0.30.8" resolved "https://registry.yarnpkg.com/magic-string/-/magic-string-0.30.8.tgz#14e8624246d2bedba70d5462aa99ac9681844613" @@ -21900,69 +21881,48 @@ minimalistic-assert@^1.0.0: resolved "https://registry.yarnpkg.com/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz#2e194de044626d4a10e7f7fbc00ce73e83e4d5c7" integrity sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A== -minimatch@10.1.1: - version "10.1.1" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-10.1.1.tgz#e6e61b9b0c1dcab116b5a7d1458e8b6ae9e73a55" - integrity sha512-enIvLvRAFZYXJzkCYG5RKmPfrFArdLv+R+lbQ53BmIMLIry74bjKzX6iHAm8WYamJkhSSEabrWN5D97XnKObjQ== +minimatch@10.1.1, minimatch@10.2.4, minimatch@^10.0.3, minimatch@^10.2.2: + version "10.2.4" + resolved 
"https://registry.yarnpkg.com/minimatch/-/minimatch-10.2.4.tgz#465b3accbd0218b8281f5301e27cedc697f96fde" + integrity sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg== dependencies: - "@isaacs/brace-expansion" "^5.0.0" + brace-expansion "^5.0.2" -"minimatch@2 || 3", minimatch@^3.0.0, minimatch@^3.0.2, minimatch@^3.0.4, minimatch@^3.0.5, minimatch@^3.1.1, minimatch@^3.1.2: - version "3.1.3" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.3.tgz#6a5cba9b31f503887018f579c89f81f61162e624" - integrity sha512-M2GCs7Vk83NxkUyQV1bkABc4yxgz9kILhHImZiBPAZ9ybuvCb0/H7lEl5XvIg3g+9d4eNotkZA5IWwYl0tibaA== +"minimatch@2 || 3", minimatch@3.1.5, minimatch@^3.0.0, minimatch@^3.0.2, minimatch@^3.0.4, minimatch@^3.0.5, minimatch@^3.1.1, minimatch@^3.1.2, minimatch@~3.0.4: + version "3.1.5" + resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.5.tgz#580c88f8d5445f2bd6aa8f3cadefa0de79fbd69e" + integrity sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w== dependencies: brace-expansion "^1.1.7" -minimatch@5.1.0: - version "5.1.0" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-5.1.0.tgz#1717b464f4971b144f6aabe8f2d0b8e4511e09c7" - integrity sha512-9TPBGGak4nHfGZsPBohm9AWg6NoT7QTCehS3BIJABslyZbzxfV78QM2Y6+i741OPZIafFAaiiEMh5OyIrJPgtg== - dependencies: - brace-expansion "^2.0.1" - -minimatch@^10.0.3, minimatch@^10.2.2: - version "10.2.2" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-10.2.2.tgz#361603ee323cfb83496fea2ae17cc44ea4e1f99f" - integrity sha512-+G4CpNBxa5MprY+04MbgOw1v7So6n5JY166pFi9KfYwT78fxScCeSNQSNzp6dpPSW2rONOps6Ocam1wFhCgoVw== - dependencies: - brace-expansion "^5.0.2" - -minimatch@^5.0.1, minimatch@^5.1.0: - version "5.1.7" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-5.1.7.tgz#9bce540b26998f278d34784a3dd25d96f5054d6d" - integrity 
sha512-FjiwU9HaHW6YB3H4a1sFudnv93lvydNjz2lmyUXR6IwKhGI+bgL3SOZrBGn6kvvX2pJvhEkGSGjyTHN47O4rqA== +minimatch@5.1.0, minimatch@5.1.9, minimatch@^5.0.1, minimatch@^5.1.0: + version "5.1.9" + resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-5.1.9.tgz#1293ef15db0098b394540e8f9f744f9fda8dee4b" + integrity sha512-7o1wEA2RyMP7Iu7GNba9vc0RWWGACJOCZBJX2GJWip0ikV+wcOsgVuY9uE8CPiyQhkGFSlhuSkZPavN7u1c2Fw== dependencies: brace-expansion "^2.0.1" minimatch@^7.4.1: - version "7.4.7" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-7.4.7.tgz#a27ce8f2b8cb7d662fd416c779504f121173d54a" - integrity sha512-t3SrsBRdssa8F/nFEadAxveFpnbhlbq7FiizzOMqx69w9EbmNEzcKiPkc60udvrOkWsTMm6jmnQP1c5rbdVfSA== + version "7.4.9" + resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-7.4.9.tgz#ef35412b1b36261b78ef1b2f0db29b759bbcaf5d" + integrity sha512-Brg/fp/iAVDOQoHxkuN5bEYhyQlZhxddI78yWsCbeEwTHXQjlNLtiJDUsp1GIptVqMI7/gkJMz4vVAc01mpoBw== dependencies: - brace-expansion "^2.0.1" + brace-expansion "^2.0.2" minimatch@^8.0.2: - version "8.0.5" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-8.0.5.tgz#141ba3930da7a2309aa0d43bf5a3a6b4f36d0742" - integrity sha512-85MramurFFFSes0exAhJjto4tC4MpGWoktMZl+GYYBPwdpITzZmTKDJDrxhzg2bOyXGIPxlWvGl39tCcQBkuKA== + version "8.0.7" + resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-8.0.7.tgz#954766e22da88a3e0a17ad93b58c15c9d8a579de" + integrity sha512-V+1uQNdzybxa14e/p00HZnQNNcTjnRJjDxg2V8wtkjFctq4M7hXFws4oekyTP0Jebeq7QYtpFyOeBAjc88zvYg== dependencies: brace-expansion "^2.0.1" minimatch@^9.0.0, minimatch@^9.0.4: - version "9.0.6" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-9.0.6.tgz#a7e3bccfcb3d78ec1bf8d51c9ba749080237a5c8" - integrity sha512-kQAVowdR33euIqeA0+VZTDqU+qo1IeVY+hrKYtZMio3Pg0P0vuh/kwRylLUddJhB6pf3q/botcOvRtx4IN1wqQ== + version "9.0.8" + resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-9.0.8.tgz#bb3aa36d7b42ea77a93c44d5c1082b188112497c" + integrity 
sha512-reYkDYtj/b19TeqbNZCV4q9t+Yxylf/rYBsLb42SXJatTv4/ylq5lEiAmhA/IToxO7NI2UzNMghHoHuaqDkAjw== dependencies: brace-expansion "^5.0.2" -minimatch@~3.0.4: - version "3.0.8" - resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.0.8.tgz#5e6a59bd11e2ab0de1cfb843eb2d82e546c321c1" - integrity sha512-6FsRAQsxQ61mw+qP1ZzbL9Bc78x2p5OqNgNpnoAFLTrX8n5Kxph0CsnhmKKNXTWjXqU5L0pGPR7hYk+XWZr60Q== - dependencies: - brace-expansion "^1.1.7" - minimist@^0.2.1: version "0.2.4" resolved "https://registry.yarnpkg.com/minimist/-/minimist-0.2.4.tgz#0085d5501e29033748a2f2a4da0180142697a475" @@ -22861,7 +22821,7 @@ normalize-path@^3.0.0, normalize-path@~3.0.0: npm-bundled@^1.1.1: version "1.1.2" - resolved "https://registry.npmjs.org/npm-bundled/-/npm-bundled-1.1.2.tgz#944c78789bd739035b70baa2ca5cc32b8d860bc1" + resolved "https://registry.yarnpkg.com/npm-bundled/-/npm-bundled-1.1.2.tgz#944c78789bd739035b70baa2ca5cc32b8d860bc1" integrity sha512-x5DHup0SuyQcmL3s7Rx/YQ8sbw/Hzg0rj48eN0dV7hf5cmQq5PXIeioroH3raV1QC1yh3uTYuMThvEQF3iKgGQ== dependencies: npm-normalize-package-bin "^1.0.1" @@ -23711,7 +23671,7 @@ p-map@^7.0.3: p-queue@^6.6.2: version "6.6.2" - resolved "https://registry.npmjs.org/p-queue/-/p-queue-6.6.2.tgz#2068a9dcf8e67dd0ec3e7a2bcb76810faa85e426" + resolved "https://registry.yarnpkg.com/p-queue/-/p-queue-6.6.2.tgz#2068a9dcf8e67dd0ec3e7a2bcb76810faa85e426" integrity sha512-RwFpb72c/BhQLEXIZ5K2e+AhgNVmIejGlTgiB9MzZ0e93GRvqZ7uSi0dvRF7/XIXDeNkra2fNHBxTyPDGySpjQ== dependencies: eventemitter3 "^4.0.4" @@ -23762,10 +23722,10 @@ p-wait-for@^3.2.0: dependencies: p-timeout "^3.0.0" -package-json-from-dist@^1.0.0: - version "1.0.0" - resolved "https://registry.yarnpkg.com/package-json-from-dist/-/package-json-from-dist-1.0.0.tgz#e501cd3094b278495eb4258d4c9f6d5ac3019f00" - integrity sha512-dATvCeZN/8wQsGywez1mzHtTlP22H8OEfPrVMLNr4/eGa+ijtLn/6M5f0dY8UKNrC2O9UCU6SSoG3qRKnt7STw== +package-json-from-dist@^1.0.0, package-json-from-dist@^1.0.1: + version "1.0.1" + resolved 
"https://registry.yarnpkg.com/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz#4f1471a010827a86f94cfd9b0727e36d267de505" + integrity sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw== package-name-regex@~2.0.6: version "2.0.6" @@ -26332,6 +26292,14 @@ rimraf@^5.0.10: dependencies: glob "^10.3.7" +rimraf@^6.1.3: + version "6.1.3" + resolved "https://registry.yarnpkg.com/rimraf/-/rimraf-6.1.3.tgz#afbee236b3bd2be331d4e7ce4493bac1718981af" + integrity sha512-LKg+Cr2ZF61fkcaK1UdkH2yEBBKnYjTyWzTJT6KNPcSPaiT7HSdhtMXQuN5wkTX0Xu72KQ1l8S42rlmexS2hSA== + dependencies: + glob "^13.0.3" + package-json-from-dist "^1.0.1" + rimraf@~2.6.2: version "2.6.3" resolved "https://registry.yarnpkg.com/rimraf/-/rimraf-2.6.3.tgz#b2d104fe0d8fb27cf9e0a1cda8262dd3833c6cab" @@ -26409,51 +26377,51 @@ rollup-pluginutils@^2.8.2: estree-walker "^0.6.1" rollup@^2.70.0: - version "2.79.2" - resolved "https://registry.yarnpkg.com/rollup/-/rollup-2.79.2.tgz#f150e4a5db4b121a21a747d762f701e5e9f49090" - integrity sha512-fS6iqSPZDs3dr/y7Od6y5nha8dW1YnbgtsyotCVvoFGKbERG++CVRFv1meyGDE1SNItQA8BrnCw7ScdAhRJ3XQ== + version "2.80.0" + resolved "https://registry.npmjs.org/rollup/-/rollup-2.80.0.tgz" + integrity sha512-cIFJOD1DESzpjOBl763Kp1AH7UE/0fcdHe6rZXUdQ9c50uvgigvW97u3IcSeBwOkgqL/PXPBktBCh0KEu5L8XQ== optionalDependencies: fsevents "~2.3.2" rollup@^3.27.1, rollup@^3.28.1: - version "3.29.5" - resolved "https://registry.yarnpkg.com/rollup/-/rollup-3.29.5.tgz#8a2e477a758b520fb78daf04bca4c522c1da8a54" - integrity sha512-GVsDdsbJzzy4S/v3dqWPJ7EfvZJfCHiDqe80IyrF59LYuP+e6U1LJoUqeuqRbwAWoMNoXivMNeNAOf5E22VA1w== + version "3.30.0" + resolved "https://registry.npmjs.org/rollup/-/rollup-3.30.0.tgz" + integrity sha512-kQvGasUgN+AlWGliFn2POSajRQEsULVYFGTvOZmK06d7vCD+YhZztt70kGk3qaeAXeWYL5eO7zx+rAubBc55eA== optionalDependencies: fsevents "~2.3.2" -rollup@^4.0.0, rollup@^4.20.0, rollup@^4.34.9, rollup@^4.35.0, rollup@^4.55.1: - version "4.57.0" - resolved 
"https://registry.yarnpkg.com/rollup/-/rollup-4.57.0.tgz#9fa13c1fb779d480038f45708b5e01b9449b6853" - integrity sha512-e5lPJi/aui4TO1LpAXIRLySmwXSE8k3b9zoGfd42p67wzxog4WHjiZF3M2uheQih4DGyc25QEV4yRBbpueNiUA== +rollup@^4.0.0, rollup@^4.20.0, rollup@^4.34.9, rollup@^4.35.0, rollup@^4.55.1, rollup@^4.59.0: + version "4.59.0" + resolved "https://registry.yarnpkg.com/rollup/-/rollup-4.59.0.tgz#cf74edac17c1486f562d728a4d923a694abdf06f" + integrity sha512-2oMpl67a3zCH9H79LeMcbDhXW/UmWG/y2zuqnF2jQq5uq9TbM9TVyXvA4+t+ne2IIkBdrLpAaRQAvo7YI/Yyeg== dependencies: "@types/estree" "1.0.8" optionalDependencies: - "@rollup/rollup-android-arm-eabi" "4.57.0" - "@rollup/rollup-android-arm64" "4.57.0" - "@rollup/rollup-darwin-arm64" "4.57.0" - "@rollup/rollup-darwin-x64" "4.57.0" - "@rollup/rollup-freebsd-arm64" "4.57.0" - "@rollup/rollup-freebsd-x64" "4.57.0" - "@rollup/rollup-linux-arm-gnueabihf" "4.57.0" - "@rollup/rollup-linux-arm-musleabihf" "4.57.0" - "@rollup/rollup-linux-arm64-gnu" "4.57.0" - "@rollup/rollup-linux-arm64-musl" "4.57.0" - "@rollup/rollup-linux-loong64-gnu" "4.57.0" - "@rollup/rollup-linux-loong64-musl" "4.57.0" - "@rollup/rollup-linux-ppc64-gnu" "4.57.0" - "@rollup/rollup-linux-ppc64-musl" "4.57.0" - "@rollup/rollup-linux-riscv64-gnu" "4.57.0" - "@rollup/rollup-linux-riscv64-musl" "4.57.0" - "@rollup/rollup-linux-s390x-gnu" "4.57.0" - "@rollup/rollup-linux-x64-gnu" "4.57.0" - "@rollup/rollup-linux-x64-musl" "4.57.0" - "@rollup/rollup-openbsd-x64" "4.57.0" - "@rollup/rollup-openharmony-arm64" "4.57.0" - "@rollup/rollup-win32-arm64-msvc" "4.57.0" - "@rollup/rollup-win32-ia32-msvc" "4.57.0" - "@rollup/rollup-win32-x64-gnu" "4.57.0" - "@rollup/rollup-win32-x64-msvc" "4.57.0" + "@rollup/rollup-android-arm-eabi" "4.59.0" + "@rollup/rollup-android-arm64" "4.59.0" + "@rollup/rollup-darwin-arm64" "4.59.0" + "@rollup/rollup-darwin-x64" "4.59.0" + "@rollup/rollup-freebsd-arm64" "4.59.0" + "@rollup/rollup-freebsd-x64" "4.59.0" + "@rollup/rollup-linux-arm-gnueabihf" "4.59.0" + 
"@rollup/rollup-linux-arm-musleabihf" "4.59.0" + "@rollup/rollup-linux-arm64-gnu" "4.59.0" + "@rollup/rollup-linux-arm64-musl" "4.59.0" + "@rollup/rollup-linux-loong64-gnu" "4.59.0" + "@rollup/rollup-linux-loong64-musl" "4.59.0" + "@rollup/rollup-linux-ppc64-gnu" "4.59.0" + "@rollup/rollup-linux-ppc64-musl" "4.59.0" + "@rollup/rollup-linux-riscv64-gnu" "4.59.0" + "@rollup/rollup-linux-riscv64-musl" "4.59.0" + "@rollup/rollup-linux-s390x-gnu" "4.59.0" + "@rollup/rollup-linux-x64-gnu" "4.59.0" + "@rollup/rollup-linux-x64-musl" "4.59.0" + "@rollup/rollup-openbsd-x64" "4.59.0" + "@rollup/rollup-openharmony-arm64" "4.59.0" + "@rollup/rollup-win32-arm64-msvc" "4.59.0" + "@rollup/rollup-win32-ia32-msvc" "4.59.0" + "@rollup/rollup-win32-x64-gnu" "4.59.0" + "@rollup/rollup-win32-x64-msvc" "4.59.0" fsevents "~2.3.2" rou3@^0.7.12: @@ -26516,14 +26484,7 @@ rxjs@6.6.7, rxjs@^6.4.0, rxjs@^6.6.0: dependencies: tslib "^1.9.0" -rxjs@7.8.1: - version "7.8.1" - resolved "https://registry.yarnpkg.com/rxjs/-/rxjs-7.8.1.tgz#6f6f3d99ea8044291efd92e7c7fcf562c4057543" - integrity sha512-AA3TVj+0A2iuIoQkWEK/tqFjBq2j+6PO6Y0zJcvzLAFhEFIO3HL0vls9hWLncZbAAbK0mar7oZ4V079I/qPMxg== - dependencies: - tslib "^2.1.0" - -rxjs@^7.5.5, rxjs@^7.8.1: +rxjs@7.8.2, rxjs@^7.5.5, rxjs@^7.8.2: version "7.8.2" resolved "https://registry.yarnpkg.com/rxjs/-/rxjs-7.8.2.tgz#955bc473ed8af11a002a2be52071bf475638607b" integrity sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA== @@ -27195,7 +27156,7 @@ simple-update-notifier@^2.0.0: simple-wcswidth@^1.0.1: version "1.1.2" - resolved "https://registry.npmjs.org/simple-wcswidth/-/simple-wcswidth-1.1.2.tgz#66722f37629d5203f9b47c5477b1225b85d6525b" + resolved "https://registry.yarnpkg.com/simple-wcswidth/-/simple-wcswidth-1.1.2.tgz#66722f37629d5203f9b47c5477b1225b85d6525b" integrity sha512-j7piyCjAeTDSjzTSQ7DokZtMNwNlEAyxqSZeCS+CXH7fJ4jx3FuJ/mTW3mE+6JLs4VJBbcll0Kjn+KXI5t21Iw== sinon@21.0.1: @@ -29767,12 +29728,12 @@ 
uuid-v4@^0.1.0: uuid@8.3.2, uuid@^8.0.0, uuid@^8.3.0, uuid@^8.3.2: version "8.3.2" - resolved "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz#80d5b5ced271bb9af6c445f21a1a04c606cefbe2" + resolved "https://registry.yarnpkg.com/uuid/-/uuid-8.3.2.tgz#80d5b5ced271bb9af6c445f21a1a04c606cefbe2" integrity sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg== uuid@^10.0.0: version "10.0.0" - resolved "https://registry.npmjs.org/uuid/-/uuid-10.0.0.tgz#5a95aa454e6e002725c79055fd42aaba30ca6294" + resolved "https://registry.yarnpkg.com/uuid/-/uuid-10.0.0.tgz#5a95aa454e6e002725c79055fd42aaba30ca6294" integrity sha512-8XkAphELsDnEGrDxUOHB3RGvXz6TeuYSGEZBOjtTtPm2lwhGBjLgOzLHB63IUWfBpNucQjND6d3AOudO+H3RWQ== uuid@^11.1.0: @@ -30035,7 +29996,7 @@ vite@^4.4.9: vite@^5.0.0, vite@^5.4.11, vite@^5.4.21: version "5.4.21" - resolved "https://registry.npmjs.org/vite/-/vite-5.4.21.tgz" + resolved "https://registry.yarnpkg.com/vite/-/vite-5.4.21.tgz" integrity sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw== dependencies: esbuild "^0.21.3" @@ -31046,12 +31007,12 @@ zip-stream@^6.0.1: zod-to-json-schema@^3.22.3, zod-to-json-schema@^3.24.1: version "3.24.6" - resolved "https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.24.6.tgz#5920f020c4d2647edfbb954fa036082b92c9e12d" + resolved "https://registry.yarnpkg.com/zod-to-json-schema/-/zod-to-json-schema-3.24.6.tgz#5920f020c4d2647edfbb954fa036082b92c9e12d" integrity sha512-h/z3PKvcTcTetyjl1fkj79MHNEjm+HpD6NXheWjzOekY7kV+lwDYnHw+ivHkijnCSMz1yJaWBD9vu/Fcmk+vEg== zod@^3.22.4, zod@^3.23.8, zod@^3.24.1, zod@^3.25.32: version "3.25.76" - resolved "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz#26841c3f6fd22a6a2760e7ccb719179768471e34" + resolved "https://registry.yarnpkg.com/zod/-/zod-3.25.76.tgz#26841c3f6fd22a6a2760e7ccb719179768471e34" integrity sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ== 
zod@^4.0.0: