diff --git a/.claude/skills/e2e/SKILL.md b/.claude/skills/e2e/SKILL.md
new file mode 100644
index 000000000000..8c45d939a8cf
--- /dev/null
+++ b/.claude/skills/e2e/SKILL.md
@@ -0,0 +1,200 @@
+---
+name: e2e
+description: Run E2E tests for Sentry JavaScript SDK test applications
+argument-hint: <app-name> [--variant <variant>]
+---
+
+# E2E Test Runner Skill
+
+This skill runs end-to-end tests for Sentry JavaScript SDK test applications. It ensures SDK packages are built before running tests.
+
+## Input
+
+The user provides a test application name and optionally a variant:
+
+- `e2e-tests/test-applications/nextjs-app-dir` (full path)
+- `nextjs-app-dir` (just the app name)
+- `nextjs-app-dir --variant nextjs-15` (with variant)
+
+## Workflow
+
+### Step 1: Parse the Test Application Name
+
+Extract the test app name from user input:
+
+- Strip `e2e-tests/test-applications/` prefix if present
+- Extract variant flag if provided (e.g., `--variant nextjs-15`)
+- Store the clean app name (e.g., `nextjs-app-dir`)
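
The parsing above can be sketched in POSIX shell (a minimal sketch; the variable names are illustrative, not part of the skill):

```shell
# Sketch of Step 1 parsing; variable names are illustrative only
input="e2e-tests/test-applications/nextjs-app-dir --variant nextjs-15"

# Strip the test-applications path prefix if present
app="${input#e2e-tests/test-applications/}"

# Extract an optional --variant flag
variant=""
case "$app" in
  *"--variant"*)
    variant="${app##*--variant }"
    app="${app%% --variant*}"
    ;;
esac

echo "app=$app variant=$variant"   # -> app=nextjs-app-dir variant=nextjs-15
```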
+
+### Step 2: Determine Which Packages Need Rebuilding
+
+If the user recently edited files in `packages/*`, identify which packages were modified:
+
+```bash
+# Check which packages have uncommitted changes (including untracked files)
+git status --porcelain | grep "^[ MARC?][ MD?] packages/" | cut -d'/' -f2 | sort -u
+```
+
+For each modified package, rebuild its tarball:
+
+```bash
+cd packages/<package-name>
+yarn build && yarn build:tarball
+cd ../..
+```
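
As a sanity check, the detection pipeline above can be exercised offline against canned `git status --porcelain` output (the sample file paths are made up):

```shell
# Feed canned porcelain output through the same grep/cut/sort pipeline as above
sample=' M packages/node/src/index.ts
?? packages/core/src/new-file.ts
 M packages/node/rollup.config.mjs'

modified="$(printf '%s\n' "$sample" | grep "^[ MARC?][ MD?] packages/" | cut -d'/' -f2 | sort -u)"
echo "$modified"   # prints each modified package name once, one per line
```

Each resulting name is a directory under `packages/` to rebuild.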
+
+**Alternative: the user specifies packages**
+
+If the user says "I changed @sentry/node" or similar, rebuild just that package:
+
+```bash
+cd packages/node
+yarn build && yarn build:tarball
+cd ../..
+```
+
+### Step 3: Verify Test Application Exists
+
+Check that the test app exists:
+
+```bash
+ls -d dev-packages/e2e-tests/test-applications/<app-name>
+```
+
+If it doesn't exist, list available test apps:
+
+```bash
+ls dev-packages/e2e-tests/test-applications/
+```
+
+Ask the user which one they meant.
+
+### Step 4: Run the E2E Test
+
+Navigate to the e2e-tests directory and run the test:
+
+```bash
+cd dev-packages/e2e-tests
+yarn test:run <app-name>
+```
+
+If a variant was specified:
+
+```bash
+cd dev-packages/e2e-tests
+yarn test:run <app-name> --variant <variant>
+```
+
+### Step 5: Report Results
+
+After the test completes, provide a summary:
+
+**If tests passed:**
+
+```
+✅ E2E tests passed for <app-name>
+
+All tests completed successfully. Your SDK changes work correctly with this test application.
+```
+
+**If tests failed:**
+
+```
+❌ E2E tests failed for <app-name>
+
+[Include relevant error output]
+```
+
+**If package rebuild was needed:**
+
+```
+📦 Rebuilt SDK packages: <package-names>
+🧪 Running E2E tests for <app-name>...
+```
+
+## Error Handling
+
+- **No tarballs found**: Run `yarn build && yarn build:tarball` at repository root
+- **Test app not found**: List available apps and ask user to clarify
+- **Verdaccio not running**: Tests should start Verdaccio automatically, but if issues occur, check Docker
+- **Build failures**: Fix build errors before running tests
+
+## Common Test Applications
+
+Here are frequently tested applications:
+
+- `nextjs-app-dir` - Next.js App Router
+- `nextjs-15` - Next.js 15.x
+- `react-create-hash-router` - React with React Router
+- `node-express-esm-loader` - Node.js Express with ESM
+- `sveltekit-2` - SvelteKit 2.x
+- `remix-2` - Remix 2.x
+- `nuxt-3` - Nuxt 3.x
+
+To see all available test apps:
+
+```bash
+ls dev-packages/e2e-tests/test-applications/
+```
+
+## Example Workflows
+
+### Example 1: After modifying @sentry/node
+
+```bash
+# User: "Run e2e tests for node-express-esm-loader"
+
+# Step 1: Detect recent changes to packages/node
+# Step 2: Rebuild the modified package
+cd packages/node
+yarn build && yarn build:tarball
+cd ../..
+
+# Step 3: Run the test
+cd dev-packages/e2e-tests
+yarn test:run node-express-esm-loader
+```
+
+### Example 2: First-time test run
+
+```bash
+# User: "Run e2e tests for nextjs-app-dir"
+
+# Step 1: Check for existing tarballs
+# Step 2: None found, build all packages
+yarn build && yarn build:tarball
+
+# Step 3: Run the test
+cd dev-packages/e2e-tests
+yarn test:run nextjs-app-dir
+```
+
+### Example 3: With variant
+
+```bash
+# User: "Run e2e tests for nextjs-app-dir with nextjs-15 variant"
+
+# Step 1: Rebuild if needed
+# Step 2: Run with variant
+cd dev-packages/e2e-tests
+yarn test:run nextjs-app-dir --variant nextjs-15
+```
+
+## Tips
+
+- **Always rebuild after SDK changes**: Tarballs contain the compiled SDK code
+- **Watch build output**: Build errors must be fixed before testing
+
+## Integration with Development Workflow
+
+This skill integrates with the standard SDK development workflow:
+
+1. Make changes to SDK code in `packages/*`
+2. Run `/e2e <app-name>` to test your changes
+3. Fix any test failures
+
+The skill automates the tedious parts of:
+
+- Remembering to rebuild tarballs
+- Navigating to the correct directory
+- Running tests with the right flags
diff --git a/.size-limit.js b/.size-limit.js
index e8124a622962..978d3105dd41 100644
--- a/.size-limit.js
+++ b/.size-limit.js
@@ -96,21 +96,21 @@ module.exports = [
path: 'packages/browser/build/npm/esm/prod/index.js',
import: createImport('init', 'feedbackIntegration'),
gzip: true,
- limit: '42 KB',
+ limit: '43 KB',
},
{
name: '@sentry/browser (incl. sendFeedback)',
path: 'packages/browser/build/npm/esm/prod/index.js',
import: createImport('init', 'sendFeedback'),
gzip: true,
- limit: '30 KB',
+ limit: '31 KB',
},
{
name: '@sentry/browser (incl. FeedbackAsync)',
path: 'packages/browser/build/npm/esm/prod/index.js',
import: createImport('init', 'feedbackAsyncIntegration'),
gzip: true,
- limit: '35 KB',
+ limit: '36 KB',
},
{
name: '@sentry/browser (incl. Metrics)',
@@ -140,7 +140,7 @@ module.exports = [
import: createImport('init', 'ErrorBoundary'),
ignore: ['react/jsx-runtime'],
gzip: true,
- limit: '27 KB',
+ limit: '28 KB',
},
{
name: '@sentry/react (incl. Tracing)',
@@ -208,7 +208,7 @@ module.exports = [
name: 'CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics)',
path: createCDNPath('bundle.tracing.replay.feedback.logs.metrics.min.js'),
gzip: true,
- limit: '86 KB',
+ limit: '87 KB',
},
// browser CDN bundles (non-gzipped)
{
@@ -223,7 +223,7 @@ module.exports = [
path: createCDNPath('bundle.tracing.min.js'),
gzip: false,
brotli: false,
- limit: '127 KB',
+ limit: '128 KB',
},
{
name: 'CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed',
@@ -278,7 +278,7 @@ module.exports = [
import: createImport('init'),
ignore: [...builtinModules, ...nodePrefixedBuiltinModules],
gzip: true,
- limit: '52 KB',
+ limit: '53 KB',
},
// Node SDK (ESM)
{
@@ -287,7 +287,7 @@ module.exports = [
import: createImport('init'),
ignore: [...builtinModules, ...nodePrefixedBuiltinModules],
gzip: true,
- limit: '166 KB',
+ limit: '167 KB',
},
{
name: '@sentry/node - without tracing',
diff --git a/CHANGELOG.md b/CHANGELOG.md
index e32b01c44a72..8b7e3964bc1e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,79 @@
- "You miss 100 percent of the chances you don't take. — Wayne Gretzky" — Michael Scott
+## 10.37.0
+
+### Important Changes
+
+- **feat(core): Introduces a new `Sentry.setConversationId()` API to track multi-turn AI conversations across API calls. ([#18909](https://github.com/getsentry/sentry-javascript/pull/18909))**
+
+ You can now set a conversation ID that will be automatically applied to spans within that scope. This allows you to link traces from the same conversation together.
+
+ ```javascript
+ import * as Sentry from '@sentry/node';
+
+ // Set conversation ID for all subsequent spans
+ Sentry.setConversationId('conv_abc123');
+
+ // All AI spans will now include the gen_ai.conversation.id attribute
+ await openai.chat.completions.create({...});
+ ```
+
+ This is particularly useful for tracking multiple AI API calls that are part of the same conversation, allowing you to analyze entire conversation flows in Sentry.
+ The conversation ID is stored on the isolation scope and automatically applied to spans via the new `conversationIdIntegration`.
+
+- **feat(tanstackstart-react): Auto-instrument global middleware in `sentryTanstackStart` Vite plugin ([#18844](https://github.com/getsentry/sentry-javascript/pull/18844))**
+
+ The `sentryTanstackStart` Vite plugin now automatically instruments `requestMiddleware` and `functionMiddleware` arrays in `createStart()`. This captures performance data without requiring manual wrapping.
+
+ Auto-instrumentation is enabled by default. To disable it:
+
+ ```ts
+ // vite.config.ts
+ sentryTanstackStart({
+ authToken: process.env.SENTRY_AUTH_TOKEN,
+ org: 'your-org',
+ project: 'your-project',
+ autoInstrumentMiddleware: false,
+ });
+ ```
+
+### Other Changes
+
+- feat(core): simplify truncation logic to only keep the newest message ([#18906](https://github.com/getsentry/sentry-javascript/pull/18906))
+- feat(core): Support new client discard reason `invalid` ([#18901](https://github.com/getsentry/sentry-javascript/pull/18901))
+- feat(deps): Bump OpenTelemetry instrumentations ([#18934](https://github.com/getsentry/sentry-javascript/pull/18934))
+- feat(nextjs): Update default ignore list for sourcemaps ([#18938](https://github.com/getsentry/sentry-javascript/pull/18938))
+- feat(node): pass prisma instrumentation options through ([#18900](https://github.com/getsentry/sentry-javascript/pull/18900))
+- feat(nuxt): Don't run source maps related code on Nuxt "prepare" ([#18936](https://github.com/getsentry/sentry-javascript/pull/18936))
+- feat(replay): Update client report discard reason for invalid sessions ([#18796](https://github.com/getsentry/sentry-javascript/pull/18796))
+- feat(winston): Add customLevelMap for winston transport ([#18922](https://github.com/getsentry/sentry-javascript/pull/18922))
+- feat(react-router): Add support for React Router instrumentation API ([#18580](https://github.com/getsentry/sentry-javascript/pull/18580))
+- fix(astro): Do not show warnings for valid options ([#18947](https://github.com/getsentry/sentry-javascript/pull/18947))
+- fix(core): Report well known values in gen_ai.operation.name attribute ([#18925](https://github.com/getsentry/sentry-javascript/pull/18925))
+- fix(node-core): ignore vercel `AbortError` by default on unhandled rejection ([#18973](https://github.com/getsentry/sentry-javascript/pull/18973))
+- fix(nuxt): include sentry.config.server.ts in nuxt app types ([#18971](https://github.com/getsentry/sentry-javascript/pull/18971))
+- fix(profiling): Add `platform` to envelope item header ([#18954](https://github.com/getsentry/sentry-javascript/pull/18954))
+- fix(react): Defer React Router span finalization until lazy routes load ([#18881](https://github.com/getsentry/sentry-javascript/pull/18881))
+- ref(core): rename `gen_ai.input.messages.original_length` to `sentry.sdk_meta.gen_ai.input.messages.original_length` ([#18970](https://github.com/getsentry/sentry-javascript/pull/18970))
+- ref(core): rename `gen_ai.request.messages` to `gen_ai.input.messages` ([#18944](https://github.com/getsentry/sentry-javascript/pull/18944))
+- ref(core): Set system message as separate attribute ([#18978](https://github.com/getsentry/sentry-javascript/pull/18978))
+- deps: Bump version of sentry-bundler-plugins ([#18972](https://github.com/getsentry/sentry-javascript/pull/18972))
+
+### Internal Changes
+
+- chore(e2e): Add e2e claude skill ([#18957](https://github.com/getsentry/sentry-javascript/pull/18957))
+- chore(e2e): Add Makefile to make running specific e2e test apps easier ([#18953](https://github.com/getsentry/sentry-javascript/pull/18953))
+- chore(e2e): Modify e2e skill to also account for untracked files ([#18959](https://github.com/getsentry/sentry-javascript/pull/18959))
+- ref(tests): use constants in ai integration tests and add missing ones ([#18945](https://github.com/getsentry/sentry-javascript/pull/18945))
+- test(nextjs): Added nextjs CF workers test app ([#18928](https://github.com/getsentry/sentry-javascript/pull/18928))
+- test(prisma): Move to yarn prisma ([#18975](https://github.com/getsentry/sentry-javascript/pull/18975))
+
+Work in this release was contributed by @sebws, @harshit078, and @fedetorre. Thank you for your contributions!
+
## 10.36.0
- feat(node): Add Prisma v7 support ([#18908](https://github.com/getsentry/sentry-javascript/pull/18908))
diff --git a/CLAUDE.md b/CLAUDE.md
index cae60376d964..739c690c4873 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -145,17 +145,7 @@ Without this file, pnpm installs from the public npm registry instead of Verdacc
#### Running a Single E2E Test
-```bash
-# Build packages first
-yarn build && yarn build:tarball
-
-# Run a specific test app
-cd dev-packages/e2e-tests
-yarn test:run
-
-# Run with a specific variant (e.g., Next.js 15)
-yarn test:run --variant
-```
+Use the `/e2e` skill (`.claude/skills/e2e/SKILL.md`); it rebuilds changed SDK packages as needed and runs the selected test app.
#### Common Pitfalls and Debugging
diff --git a/dev-packages/browser-integration-tests/suites/profiling/manualMode/test.ts b/dev-packages/browser-integration-tests/suites/profiling/manualMode/test.ts
index 2e4358563aa2..176caad92af6 100644
--- a/dev-packages/browser-integration-tests/suites/profiling/manualMode/test.ts
+++ b/dev-packages/browser-integration-tests/suites/profiling/manualMode/test.ts
@@ -48,7 +48,7 @@ sentryTest('sends profile_chunk envelopes in manual mode', async ({ page, getLoc
const envelopeItemHeader = profileChunkEnvelopeItem[0];
const envelopeItemPayload1 = profileChunkEnvelopeItem[1];
- expect(envelopeItemHeader).toHaveProperty('type', 'profile_chunk');
+ expect(envelopeItemHeader).toEqual({ type: 'profile_chunk', platform: 'javascript' });
expect(envelopeItemPayload1.profile).toBeDefined();
const profilerId1 = envelopeItemPayload1.profiler_id;
@@ -71,7 +71,7 @@ sentryTest('sends profile_chunk envelopes in manual mode', async ({ page, getLoc
const envelopeItemHeader2 = profileChunkEnvelopeItem2[0];
const envelopeItemPayload2 = profileChunkEnvelopeItem2[1];
- expect(envelopeItemHeader2).toHaveProperty('type', 'profile_chunk');
+ expect(envelopeItemHeader2).toEqual({ type: 'profile_chunk', platform: 'javascript' });
expect(envelopeItemPayload2.profile).toBeDefined();
expect(envelopeItemPayload2.profiler_id).toBe(profilerId1); // same profiler id for the whole session
diff --git a/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_multiple-chunks/test.ts b/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_multiple-chunks/test.ts
index 5afc23a3a75f..a16a054da45a 100644
--- a/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_multiple-chunks/test.ts
+++ b/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_multiple-chunks/test.ts
@@ -51,7 +51,7 @@ sentryTest(
const envelopeItemHeader = profileChunkEnvelopeItem[0];
const envelopeItemPayload1 = profileChunkEnvelopeItem[1];
- expect(envelopeItemHeader).toHaveProperty('type', 'profile_chunk');
+ expect(envelopeItemHeader).toEqual({ type: 'profile_chunk', platform: 'javascript' });
expect(envelopeItemPayload1.profile).toBeDefined();
validateProfilePayloadMetadata(envelopeItemPayload1);
@@ -77,7 +77,7 @@ sentryTest(
const envelopeItemHeader2 = profileChunkEnvelopeItem2[0];
const envelopeItemPayload2 = profileChunkEnvelopeItem2[1];
- expect(envelopeItemHeader2).toHaveProperty('type', 'profile_chunk');
+ expect(envelopeItemHeader2).toEqual({ type: 'profile_chunk', platform: 'javascript' });
expect(envelopeItemPayload2.profile).toBeDefined();
validateProfilePayloadMetadata(envelopeItemPayload2);
diff --git a/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_overlapping-spans/test.ts b/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_overlapping-spans/test.ts
index fa66a225b49b..487cd05f1dbd 100644
--- a/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_overlapping-spans/test.ts
+++ b/dev-packages/browser-integration-tests/suites/profiling/traceLifecycleMode_overlapping-spans/test.ts
@@ -52,7 +52,7 @@ sentryTest(
const envelopeItemHeader = profileChunkEnvelopeItem[0];
const envelopeItemPayload = profileChunkEnvelopeItem[1];
- expect(envelopeItemHeader).toHaveProperty('type', 'profile_chunk');
+ expect(envelopeItemHeader).toEqual({ type: 'profile_chunk', platform: 'javascript' });
expect(envelopeItemPayload.profile).toBeDefined();
validateProfilePayloadMetadata(envelopeItemPayload);
diff --git a/dev-packages/browser-integration-tests/suites/public-api/debug/test.ts b/dev-packages/browser-integration-tests/suites/public-api/debug/test.ts
index b15c64280544..675f9a776cbf 100644
--- a/dev-packages/browser-integration-tests/suites/public-api/debug/test.ts
+++ b/dev-packages/browser-integration-tests/suites/public-api/debug/test.ts
@@ -24,6 +24,7 @@ sentryTest('logs debug messages correctly', async ({ getLocalTestUrl, page }) =>
? [
'Sentry Logger [log]: Integration installed: InboundFilters',
'Sentry Logger [log]: Integration installed: FunctionToString',
+ 'Sentry Logger [log]: Integration installed: ConversationId',
'Sentry Logger [log]: Integration installed: BrowserApiErrors',
'Sentry Logger [log]: Integration installed: Breadcrumbs',
'Sentry Logger [log]: Global Handler attached: onerror',
diff --git a/dev-packages/browser-integration-tests/suites/tracing/ai-providers/anthropic/test.ts b/dev-packages/browser-integration-tests/suites/tracing/ai-providers/anthropic/test.ts
index 206e29be16e5..8f14f0318456 100644
--- a/dev-packages/browser-integration-tests/suites/tracing/ai-providers/anthropic/test.ts
+++ b/dev-packages/browser-integration-tests/suites/tracing/ai-providers/anthropic/test.ts
@@ -20,11 +20,11 @@ sentryTest('manual Anthropic instrumentation sends gen_ai transactions', async (
const eventData = envelopeRequestParser(req);
// Verify it's a gen_ai transaction
- expect(eventData.transaction).toBe('messages claude-3-haiku-20240307');
- expect(eventData.contexts?.trace?.op).toBe('gen_ai.messages');
+ expect(eventData.transaction).toBe('chat claude-3-haiku-20240307');
+ expect(eventData.contexts?.trace?.op).toBe('gen_ai.chat');
expect(eventData.contexts?.trace?.origin).toBe('auto.ai.anthropic');
expect(eventData.contexts?.trace?.data).toMatchObject({
- 'gen_ai.operation.name': 'messages',
+ 'gen_ai.operation.name': 'chat',
'gen_ai.system': 'anthropic',
'gen_ai.request.model': 'claude-3-haiku-20240307',
'gen_ai.request.temperature': 0.7,
diff --git a/dev-packages/cloudflare-integration-tests/suites/tracing/anthropic-ai/test.ts b/dev-packages/cloudflare-integration-tests/suites/tracing/anthropic-ai/test.ts
index c9e112b32241..17cea5dbf95b 100644
--- a/dev-packages/cloudflare-integration-tests/suites/tracing/anthropic-ai/test.ts
+++ b/dev-packages/cloudflare-integration-tests/suites/tracing/anthropic-ai/test.ts
@@ -1,4 +1,15 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { expect, it } from 'vitest';
+import {
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { createRunner } from '../../../runner';
// These tests are not exhaustive because the instrumentation is
@@ -17,19 +28,19 @@ it('traces a basic message creation request', async ({ signal }) => {
expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'msg_mock123',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_mock123',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
}),
]),
diff --git a/dev-packages/cloudflare-integration-tests/suites/tracing/google-genai/test.ts b/dev-packages/cloudflare-integration-tests/suites/tracing/google-genai/test.ts
index 3c36e832a17a..d2657f55b1ed 100644
--- a/dev-packages/cloudflare-integration-tests/suites/tracing/google-genai/test.ts
+++ b/dev-packages/cloudflare-integration-tests/suites/tracing/google-genai/test.ts
@@ -1,4 +1,16 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { expect, it } from 'vitest';
+import {
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_REQUEST_TOP_P_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { createRunner } from '../../../runner';
// These tests are not exhaustive because the instrumentation is
@@ -18,14 +30,14 @@ it('traces Google GenAI chat creation and message sending', async () => {
// First span - chats.create
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 150,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
}),
description: 'chat gemini-1.5-pro create',
op: 'gen_ai.chat',
@@ -34,14 +46,14 @@ it('traces Google GenAI chat creation and message sending', async () => {
// Second span - chat.sendMessage
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
}),
description: 'chat gemini-1.5-pro',
op: 'gen_ai.chat',
@@ -50,20 +62,20 @@ it('traces Google GenAI chat creation and message sending', async () => {
// Third span - models.generateContent
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
}),
- description: 'models gemini-1.5-flash',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
}),
]),
diff --git a/dev-packages/cloudflare-integration-tests/suites/tracing/langchain/test.ts b/dev-packages/cloudflare-integration-tests/suites/tracing/langchain/test.ts
index 875b4191b84b..d4abc4ae7220 100644
--- a/dev-packages/cloudflare-integration-tests/suites/tracing/langchain/test.ts
+++ b/dev-packages/cloudflare-integration-tests/suites/tracing/langchain/test.ts
@@ -1,4 +1,16 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { expect, it } from 'vitest';
+import {
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { createRunner } from '../../../runner';
// These tests are not exhaustive because the instrumentation is
@@ -18,16 +30,16 @@ it('traces langchain chat model, chain, and tool invocations', async ({ signal }
// Chat model span
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -36,8 +48,8 @@ it('traces langchain chat model, chain, and tool invocations', async ({ signal }
// Chain span
expect.objectContaining({
data: expect.objectContaining({
- 'sentry.origin': 'auto.ai.langchain',
- 'sentry.op': 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
'langchain.chain.name': 'my_test_chain',
}),
description: 'chain my_test_chain',
@@ -47,9 +59,9 @@ it('traces langchain chat model, chain, and tool invocations', async ({ signal }
// Tool span
expect.objectContaining({
data: expect.objectContaining({
- 'sentry.origin': 'auto.ai.langchain',
- 'sentry.op': 'gen_ai.execute_tool',
- 'gen_ai.tool.name': 'search_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'search_tool',
}),
description: 'execute_tool search_tool',
op: 'gen_ai.execute_tool',
diff --git a/dev-packages/cloudflare-integration-tests/suites/tracing/langgraph/test.ts b/dev-packages/cloudflare-integration-tests/suites/tracing/langgraph/test.ts
index 33023b30fa55..6efa07164df5 100644
--- a/dev-packages/cloudflare-integration-tests/suites/tracing/langgraph/test.ts
+++ b/dev-packages/cloudflare-integration-tests/suites/tracing/langgraph/test.ts
@@ -1,4 +1,16 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { expect, it } from 'vitest';
+import {
+ GEN_AI_AGENT_NAME_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_PIPELINE_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { createRunner } from '../../../runner';
// These tests are not exhaustive because the instrumentation is
@@ -18,10 +30,10 @@ it('traces langgraph compile and invoke operations', async ({ signal }) => {
const createAgentSpan = transactionEvent.spans.find((span: any) => span.op === 'gen_ai.create_agent');
expect(createAgentSpan).toMatchObject({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
},
description: 'create_agent weather_assistant',
op: 'gen_ai.create_agent',
@@ -32,16 +44,16 @@ it('traces langgraph compile and invoke operations', async ({ signal }) => {
const invokeAgentSpan = transactionEvent.spans.find((span: any) => span.op === 'gen_ai.invoke_agent');
expect(invokeAgentSpan).toMatchObject({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
- 'gen_ai.pipeline.name': 'weather_assistant',
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather in SF?"}]',
- 'gen_ai.response.model': 'mock-model',
- 'gen_ai.usage.input_tokens': 20,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in SF?"}]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
}),
description: 'invoke_agent weather_assistant',
op: 'gen_ai.invoke_agent',
@@ -49,8 +61,8 @@ it('traces langgraph compile and invoke operations', async ({ signal }) => {
});
// Verify tools are captured
- if (invokeAgentSpan.data['gen_ai.request.available_tools']) {
- expect(invokeAgentSpan.data['gen_ai.request.available_tools']).toMatch(/get_weather/);
+ if (invokeAgentSpan.data[GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]) {
+ expect(invokeAgentSpan.data[GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]).toMatch(/get_weather/);
}
})
.start(signal);
diff --git a/dev-packages/cloudflare-integration-tests/suites/tracing/openai/test.ts b/dev-packages/cloudflare-integration-tests/suites/tracing/openai/test.ts
index eb15fd80fc97..1c057e1a986c 100644
--- a/dev-packages/cloudflare-integration-tests/suites/tracing/openai/test.ts
+++ b/dev-packages/cloudflare-integration-tests/suites/tracing/openai/test.ts
@@ -1,4 +1,17 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { expect, it } from 'vitest';
+import {
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { createRunner } from '../../../runner';
// These tests are not exhaustive because the instrumentation is
@@ -17,18 +30,18 @@ it('traces a basic chat completion request', async ({ signal }) => {
expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.finish_reasons': '["stop"]',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
}),
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
diff --git a/dev-packages/e2e-tests/Makefile b/dev-packages/e2e-tests/Makefile
new file mode 100644
index 000000000000..3761c715b2d1
--- /dev/null
+++ b/dev-packages/e2e-tests/Makefile
@@ -0,0 +1,11 @@
+.PHONY: run list
+
+run:
+	@if ! command -v fzf > /dev/null 2>&1; then \
+ echo "Error: fzf is required. Install with: brew install fzf"; \
+ exit 1; \
+ fi
+ @ls test-applications | fzf --height=10 --layout=reverse --border=rounded --margin=1.5% --color=dark --prompt="yarn test:run " | xargs -r yarn test:run
+
+list:
+ @ls test-applications
diff --git a/dev-packages/e2e-tests/README.md b/dev-packages/e2e-tests/README.md
index ffe06dd91aaf..23718eed6dde 100644
--- a/dev-packages/e2e-tests/README.md
+++ b/dev-packages/e2e-tests/README.md
@@ -33,6 +33,30 @@ yarn test:run --variant
Variant name matching is case-insensitive and partial. For example, `--variant 13` will match `nextjs-pages-dir (next@13)` if a matching variant is present in the test app's `package.json`.
+### Using the Makefile
+
+Alternatively, you can use the provided Makefile for an interactive test selection experience:
+
+**Prerequisites**: Install `fzf` with Homebrew:
+
+```bash
+brew install fzf
+```
+
+**Run tests interactively**:
+
+```bash
+make run
+```
+
+This will display a fuzzy-finder menu of all available test applications. Select one to run it automatically.
+
+**List all test applications**:
+
+```bash
+make list
+```
+
For example, if you have the following variants in your test app's `package.json`:
```json
diff --git a/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/server.ts b/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/server.ts
index d28fab88135f..b430f97b1f44 100644
--- a/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/server.ts
+++ b/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/server.ts
@@ -10,6 +10,10 @@ import { type AppLoadContext, createRequestHandler, getStorefrontHeaders } from
import { CART_QUERY_FRAGMENT } from '~/lib/fragments';
import { AppSession } from '~/lib/session';
import { wrapRequestHandler } from '@sentry/cloudflare';
+// Virtual entry point for the app
+// eslint-disable-next-line @typescript-eslint/ban-ts-comment
+// @ts-expect-error
+import * as serverBuild from 'virtual:react-router/server-build';
/**
* Export a fetch handler in module format.
@@ -96,8 +100,7 @@ export default {
* Hydrogen's Storefront client to the loader context.
*/
const handleRequest = createRequestHandler({
- // @ts-ignore
- build: await import('virtual:react-router/server-build'),
+ build: serverBuild,
mode: process.env.NODE_ENV,
getLoadContext: (): AppLoadContext => ({
session,
diff --git a/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/tests/server-transactions.test.ts b/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/tests/server-transactions.test.ts
index 0455ea2e0b79..1dca64548e83 100644
--- a/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/tests/server-transactions.test.ts
+++ b/dev-packages/e2e-tests/test-applications/hydrogen-react-router-7/tests/server-transactions.test.ts
@@ -13,7 +13,8 @@ test('Sends parameterized transaction name to Sentry', async ({ page }) => {
const transaction = await transactionPromise;
expect(transaction).toBeDefined();
- expect(transaction.transaction).toBe('GET /user/123');
+ // Transaction name should be parameterized (route pattern, not actual URL)
+ expect(transaction.transaction).toBe('GET /user/:id');
});
test('Sends two linked transactions (server & client) to Sentry', async ({ page }) => {
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.gitignore b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.gitignore
new file mode 100644
index 000000000000..dd146b53d966
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.gitignore
@@ -0,0 +1,44 @@
+# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
+
+# dependencies
+/node_modules
+/.pnp
+.pnp.*
+.yarn/*
+!.yarn/patches
+!.yarn/plugins
+!.yarn/releases
+!.yarn/versions
+
+# testing
+/coverage
+
+# next.js
+/.next/
+/out/
+
+# production
+/build
+
+# misc
+.DS_Store
+*.pem
+
+# debug
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+.pnpm-debug.log*
+
+# env files (can opt-in for committing if needed)
+.env*
+
+# vercel
+.vercel
+
+# typescript
+*.tsbuildinfo
+next-env.d.ts
+
+# Sentry Config File
+.env.sentry-build-plugin
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.npmrc b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.npmrc
new file mode 100644
index 000000000000..070f80f05092
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/.npmrc
@@ -0,0 +1,2 @@
+@sentry:registry=http://127.0.0.1:4873
+@sentry-internal:registry=http://127.0.0.1:4873
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/layout.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/layout.tsx
new file mode 100644
index 000000000000..ace0c2f086b7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/layout.tsx
@@ -0,0 +1,12 @@
+import { PropsWithChildren } from 'react';
+
+export const dynamic = 'force-dynamic';
+
+export default function Layout({ children }: PropsWithChildren<{}>) {
+  return (
+    <div>
+      {children}
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/layout.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/layout.tsx
new file mode 100644
index 000000000000..dbdc60adadc2
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/layout.tsx
@@ -0,0 +1,12 @@
+import { PropsWithChildren } from 'react';
+
+export const dynamic = 'force-dynamic';
+
+export default function Layout({ children }: PropsWithChildren<{}>) {
+  return (
+    <div>
+      DynamicLayout
+      {children}
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/page.tsx
new file mode 100644
index 000000000000..3eaddda2a1df
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/[dynamic]/page.tsx
@@ -0,0 +1,15 @@
+export const dynamic = 'force-dynamic';
+
+export default async function Page() {
+  return (
+    <div />
+  );
+}
+
+export async function generateMetadata() {
+ return {
+ title: 'I am dynamic page generated metadata',
+ };
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/layout.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/layout.tsx
new file mode 100644
index 000000000000..ace0c2f086b7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/layout.tsx
@@ -0,0 +1,12 @@
+import { PropsWithChildren } from 'react';
+
+export const dynamic = 'force-dynamic';
+
+export default function Layout({ children }: PropsWithChildren<{}>) {
+  return (
+    <div>
+      {children}
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/page.tsx
new file mode 100644
index 000000000000..8077c14d23ca
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/(nested-layout)/nested-layout/page.tsx
@@ -0,0 +1,11 @@
+export const dynamic = 'force-dynamic';
+
+export default function Page() {
+  return <p>Hello World!</p>;
+}
+
+export async function generateMetadata() {
+ return {
+ title: 'I am generated metadata',
+ };
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/api/endpoint-behind-middleware/route.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/api/endpoint-behind-middleware/route.ts
new file mode 100644
index 000000000000..2733cc918f44
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/api/endpoint-behind-middleware/route.ts
@@ -0,0 +1,3 @@
+export function GET() {
+ return Response.json({ name: 'John Doe' });
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/favicon.ico b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/favicon.ico
new file mode 100644
index 000000000000..718d6fea4835
Binary files /dev/null and b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/favicon.ico differ
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/global-error.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/global-error.tsx
new file mode 100644
index 000000000000..20c175015b03
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/global-error.tsx
@@ -0,0 +1,23 @@
+'use client';
+
+import * as Sentry from '@sentry/nextjs';
+import NextError from 'next/error';
+import { useEffect } from 'react';
+
+export default function GlobalError({ error }: { error: Error & { digest?: string } }) {
+ useEffect(() => {
+ Sentry.captureException(error);
+ }, [error]);
+
+  return (
+    <html>
+      <body>
+        {/* `NextError` is the default Next.js error page component. Its type
+        definition requires a `statusCode` prop. However, since the App Router
+        does not expose status codes for errors, we simply pass 0 to render a
+        generic error message. */}
+        <NextError statusCode={0} />
+      </body>
+    </html>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/[product]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/[product]/page.tsx
new file mode 100644
index 000000000000..cd1e085e2763
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/[product]/page.tsx
@@ -0,0 +1,17 @@
+export const revalidate = 60; // ISR: revalidate every 60 seconds
+export const dynamicParams = true; // Allow dynamic params beyond generateStaticParams
+
+export async function generateStaticParams(): Promise<Array<{ product: string }>> {
+ return [{ product: 'laptop' }, { product: 'phone' }, { product: 'tablet' }];
+}
+
+export default async function ISRProductPage({ params }: { params: Promise<{ product: string }> }) {
+ const { product } = await params;
+
+  return (
+    <div>
+      <h1>ISR Product: {product}</h1>
+      <p>{product}</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/static/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/static/page.tsx
new file mode 100644
index 000000000000..f49605bd9da4
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/isr-test/static/page.tsx
@@ -0,0 +1,15 @@
+export const revalidate = 60; // ISR: revalidate every 60 seconds
+export const dynamicParams = true;
+
+export async function generateStaticParams(): Promise<never[]> {
+ return [];
+}
+
+export default function ISRStaticPage() {
+  return (
+    <div>
+      <h1>ISR Static Page</h1>
+      <p>static-isr</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/layout.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/layout.tsx
new file mode 100644
index 000000000000..c8f9cee0b787
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/layout.tsx
@@ -0,0 +1,7 @@
+export default function Layout({ children }: { children: React.ReactNode }) {
+  return (
+    <html>
+      <body>{children}</body>
+    </html>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/page.tsx
new file mode 100644
index 000000000000..fdb7bc0a40a7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/page.tsx
@@ -0,0 +1,34 @@
+'use client';
+
+import * as Sentry from '@sentry/nextjs';
+
+export default function Page() {
+ const handleClick = async () => {
+ Sentry.metrics.count('test.page.count', 1, {
+ attributes: {
+ page: '/metrics',
+ 'random.attribute': 'Apples',
+ },
+ });
+ Sentry.metrics.distribution('test.page.distribution', 100, {
+ attributes: {
+ page: '/metrics',
+ 'random.attribute': 'Manzanas',
+ },
+ });
+ Sentry.metrics.gauge('test.page.gauge', 200, {
+ attributes: {
+ page: '/metrics',
+ 'random.attribute': 'Mele',
+ },
+ });
+ await fetch('/metrics/route-handler');
+ };
+
+  return (
+    <div>
+      <h1>Metrics page</h1>
+      <button onClick={handleClick}>Emit</button>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/route-handler/route.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/route-handler/route.ts
new file mode 100644
index 000000000000..84e81960f9c9
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/metrics/route-handler/route.ts
@@ -0,0 +1,23 @@
+import * as Sentry from '@sentry/nextjs';
+
+export const GET = async () => {
+ Sentry.metrics.count('test.route.handler.count', 1, {
+ attributes: {
+ endpoint: '/metrics/route-handler',
+ 'random.attribute': 'Potatoes',
+ },
+ });
+ Sentry.metrics.distribution('test.route.handler.distribution', 100, {
+ attributes: {
+ endpoint: '/metrics/route-handler',
+ 'random.attribute': 'Patatas',
+ },
+ });
+ Sentry.metrics.gauge('test.route.handler.gauge', 200, {
+ attributes: {
+ endpoint: '/metrics/route-handler',
+ 'random.attribute': 'Patate',
+ },
+ });
+ return Response.json({ message: 'Bueno' });
+};
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/nested-rsc-error/[param]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/nested-rsc-error/[param]/page.tsx
new file mode 100644
index 000000000000..675b248026be
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/nested-rsc-error/[param]/page.tsx
@@ -0,0 +1,17 @@
+import { Suspense } from 'react';
+
+export const dynamic = 'force-dynamic';
+
+export default async function Page() {
+  return (
+    <Suspense fallback={<p>Loading...</p>}>
+      {/* @ts-ignore */}
+      <Crash />
+    </Suspense>
+  );
+}
+
+async function Crash() {
+  throw new Error('I am technically uncatchable');
+  return <p>unreachable</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/non-isr-test/[item]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/non-isr-test/[item]/page.tsx
new file mode 100644
index 000000000000..e0bafdb24181
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/non-isr-test/[item]/page.tsx
@@ -0,0 +1,11 @@
+// No generateStaticParams - this is NOT an ISR page
+export default async function NonISRPage({ params }: { params: Promise<{ item: string }> }) {
+ const { item } = await params;
+
+  return (
+    <div>
+      <h1>Non-ISR Dynamic Page: {item}</h1>
+      <p>{item}</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/page.tsx
new file mode 100644
index 000000000000..2bc0a407a355
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/page.tsx
@@ -0,0 +1,3 @@
+export default function Page() {
+  return <p>Next 16 test app</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/layout.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/layout.tsx
new file mode 100644
index 000000000000..1f0cbe478f88
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/layout.tsx
@@ -0,0 +1,8 @@
+import { PropsWithChildren } from 'react';
+
+export const dynamic = 'force-dynamic';
+
+export default async function Layout({ children }: PropsWithChildren) {
+ await new Promise(resolve => setTimeout(resolve, 500));
+  return <>{children}</>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/page.tsx
new file mode 100644
index 000000000000..689735d61ddf
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/pageload-tracing/page.tsx
@@ -0,0 +1,14 @@
+export const dynamic = 'force-dynamic';
+
+export default async function Page() {
+ await new Promise(resolve => setTimeout(resolve, 1000));
+  return <p>I am page 2</p>;
+}
+
+export async function generateMetadata() {
+ (await fetch('https://example.com/', { cache: 'no-store' })).text();
+
+ return {
+ title: 'my title',
+ };
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/[two]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/[two]/page.tsx
new file mode 100644
index 000000000000..f34461c2bb07
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/[two]/page.tsx
@@ -0,0 +1,3 @@
+export default function ParameterizedPage() {
+  return <p>Dynamic page two</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/page.tsx
new file mode 100644
index 000000000000..a7d9164c8c03
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/beep/page.tsx
@@ -0,0 +1,3 @@
+export default function BeepPage() {
+  return <p>Beep</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/page.tsx
new file mode 100644
index 000000000000..9fa617a22381
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/[one]/page.tsx
@@ -0,0 +1,3 @@
+export default function ParameterizedPage() {
+  return <p>Dynamic page one</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/static/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/static/page.tsx
new file mode 100644
index 000000000000..16ef0482d53b
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/parameterized/static/page.tsx
@@ -0,0 +1,3 @@
+export default function StaticPage() {
+  return <p>Static page</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/page.tsx
new file mode 100644
index 000000000000..4cb811ecf1b4
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/page.tsx
@@ -0,0 +1,9 @@
+import Link from 'next/link';
+
+export default function Page() {
+  return (
+    <Link href="/prefetching/to-be-prefetched">
+      link
+    </Link>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/to-be-prefetched/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/to-be-prefetched/page.tsx
new file mode 100644
index 000000000000..83aac90d65cf
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/prefetching/to-be-prefetched/page.tsx
@@ -0,0 +1,5 @@
+export const dynamic = 'force-dynamic';
+
+export default function Page() {
+  return <p>Hello</p>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/destination/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/destination/page.tsx
new file mode 100644
index 000000000000..5583d36b04b0
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/destination/page.tsx
@@ -0,0 +1,7 @@
+export default function RedirectDestinationPage() {
+  return (
+    <div>
+      <h1>Redirect Destination</h1>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/origin/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/origin/page.tsx
new file mode 100644
index 000000000000..52615e0a054b
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/redirect/origin/page.tsx
@@ -0,0 +1,18 @@
+import { redirect } from 'next/navigation';
+
+async function redirectAction() {
+ 'use server';
+
+ redirect('/redirect/destination');
+}
+
+export default function RedirectOriginPage() {
+  return (
+    <>
+      {/* @ts-ignore */}
+      <form action={redirectAction}>
+        <button type="submit">Redirect</button>
+      </form>
+    </>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/edge/route.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/edge/route.ts
new file mode 100644
index 000000000000..7cd1fc7e332c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/edge/route.ts
@@ -0,0 +1,8 @@
+import { NextResponse } from 'next/server';
+
+export const runtime = 'edge';
+export const dynamic = 'force-dynamic';
+
+export async function GET() {
+ return NextResponse.json({ message: 'Hello Edge Route Handler' });
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/node/route.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/node/route.ts
new file mode 100644
index 000000000000..5bc418f077aa
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/route-handler/[xoxo]/node/route.ts
@@ -0,0 +1,7 @@
+import { NextResponse } from 'next/server';
+
+export const dynamic = 'force-dynamic';
+
+export async function GET() {
+ return NextResponse.json({ message: 'Hello Node Route Handler' });
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/client-page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/client-page.tsx
new file mode 100644
index 000000000000..7b66c3fbdeef
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/client-page.tsx
@@ -0,0 +1,8 @@
+'use client';
+
+import { use } from 'react';
+
+export function RenderPromise({ stringPromise }: { stringPromise: Promise<string> }) {
+ const s = use(stringPromise);
+  return <>{s}</>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/page.tsx b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/page.tsx
new file mode 100644
index 000000000000..9531f9a42139
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/app/streaming-rsc-error/[param]/page.tsx
@@ -0,0 +1,18 @@
+import { Suspense } from 'react';
+import { RenderPromise } from './client-page';
+
+export const dynamic = 'force-dynamic';
+
+export default async function Page() {
+  const crashingPromise = new Promise<string>((_, reject) => {
+ setTimeout(() => {
+ reject(new Error('I am a data streaming error'));
+ }, 100);
+ });
+
+  return (
+    <Suspense fallback={<p>Loading...</p>}>
+      <RenderPromise stringPromise={crashingPromise} />
+    </Suspense>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/eslint.config.mjs b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/eslint.config.mjs
new file mode 100644
index 000000000000..60f7af38f6c2
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/eslint.config.mjs
@@ -0,0 +1,19 @@
+import { dirname } from 'path';
+import { fileURLToPath } from 'url';
+import { FlatCompat } from '@eslint/eslintrc';
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+
+const compat = new FlatCompat({
+ baseDirectory: __dirname,
+});
+
+const eslintConfig = [
+ ...compat.extends('next/core-web-vitals', 'next/typescript'),
+ {
+ ignores: ['node_modules/**', '.next/**', 'out/**', 'build/**', 'next-env.d.ts'],
+ },
+];
+
+export default eslintConfig;
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation-client.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation-client.ts
new file mode 100644
index 000000000000..ae4e3195a2a1
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation-client.ts
@@ -0,0 +1,16 @@
+import * as Sentry from '@sentry/nextjs';
+import type { Log } from '@sentry/nextjs';
+
+Sentry.init({
+ environment: 'qa', // dynamic sampling bias to keep transactions
+ dsn: process.env.NEXT_PUBLIC_E2E_TEST_DSN,
+ tunnel: `http://localhost:3031/`, // proxy server
+ tracesSampleRate: 1.0,
+ sendDefaultPii: true,
+ // Verify Log type is available
+ beforeSendLog(log: Log) {
+ return log;
+ },
+});
+
+export const onRouterTransitionStart = Sentry.captureRouterTransitionStart;
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation.ts
new file mode 100644
index 000000000000..964f937c439a
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/instrumentation.ts
@@ -0,0 +1,13 @@
+import * as Sentry from '@sentry/nextjs';
+
+export async function register() {
+ if (process.env.NEXT_RUNTIME === 'nodejs') {
+ await import('./sentry.server.config');
+ }
+
+ if (process.env.NEXT_RUNTIME === 'edge') {
+ await import('./sentry.edge.config');
+ }
+}
+
+export const onRequestError = Sentry.captureRequestError;
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/middleware.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/middleware.ts
new file mode 100644
index 000000000000..f5980e4231c1
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/middleware.ts
@@ -0,0 +1,26 @@
+import { getDefaultIsolationScope } from '@sentry/core';
+import * as Sentry from '@sentry/nextjs';
+import { NextResponse } from 'next/server';
+import type { NextRequest } from 'next/server';
+
+export async function middleware(request: NextRequest) {
+ Sentry.setTag('my-isolated-tag', true);
+ Sentry.setTag('my-global-scope-isolated-tag', getDefaultIsolationScope().getScopeData().tags['my-isolated-tag']); // We set this tag to be able to assert that the previously set tag has not leaked into the global isolation scope
+
+ if (request.headers.has('x-should-throw')) {
+ throw new Error('Middleware Error');
+ }
+
+ if (request.headers.has('x-should-make-request')) {
+ await fetch('http://localhost:3030/');
+ }
+
+ return NextResponse.next();
+}
+
+// See "Matching Paths" below to learn more
+export const config = {
+ matcher: ['/api/endpoint-behind-middleware', '/api/endpoint-behind-faulty-middleware'],
+};
+
+export const runtime = 'experimental-edge';
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/next.config.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/next.config.ts
new file mode 100644
index 000000000000..6699b3dd2c33
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/next.config.ts
@@ -0,0 +1,8 @@
+import { withSentryConfig } from '@sentry/nextjs';
+import type { NextConfig } from 'next';
+
+const nextConfig: NextConfig = {};
+
+export default withSentryConfig(nextConfig, {
+ silent: true,
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/open-next.config.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/open-next.config.ts
new file mode 100644
index 000000000000..a68b3c089829
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/open-next.config.ts
@@ -0,0 +1,9 @@
+import { defineCloudflareConfig } from '@opennextjs/cloudflare';
+
+export default defineCloudflareConfig({
+ // Uncomment to enable R2 cache,
+ // It should be imported as:
+ // `import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";`
+ // See https://opennext.js.org/cloudflare/caching for more details
+ // incrementalCache: r2IncrementalCache,
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/package.json b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/package.json
new file mode 100644
index 000000000000..c48695371649
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/package.json
@@ -0,0 +1,55 @@
+{
+ "name": "nextjs-16-cf-workers",
+ "version": "0.1.0",
+ "private": true,
+ "scripts": {
+ "dev": "next dev",
+ "cf:build": "opennextjs-cloudflare build",
+ "cf:preview": "opennextjs-cloudflare preview",
+ "build": "next build",
+ "clean": "npx rimraf node_modules pnpm-lock.yaml .tmp_dev_server_logs",
+ "start": "pnpm cf:preview",
+ "lint": "eslint",
+ "test:prod": "TEST_ENV=production playwright test",
+ "test:build": "pnpm install && pnpm cf:build",
+ "test:build-canary": "pnpm install && pnpm add next@canary && pnpm cf:build",
+ "test:build-latest": "pnpm install && pnpm add next@latest && pnpm cf:build",
+ "test:assert": "pnpm test:prod"
+ },
+ "dependencies": {
+ "@opennextjs/cloudflare": "^1.14.9",
+ "@sentry/nextjs": "latest || *",
+ "@sentry/core": "latest || *",
+ "next": "16.0.10",
+ "react": "19.1.0",
+ "react-dom": "19.1.0"
+ },
+ "devDependencies": {
+ "@playwright/test": "~1.56.0",
+ "@sentry-internal/test-utils": "link:../../../test-utils",
+ "@types/node": "^20",
+ "@types/react": "^19",
+ "@types/react-dom": "^19",
+ "eslint": "^9",
+ "eslint-config-next": "canary",
+ "typescript": "^5",
+ "wrangler": "^4.59.2"
+ },
+ "volta": {
+ "extends": "../../package.json"
+ },
+ "sentryTest": {
+ "variants": [
+ {
+ "build-command": "pnpm test:build-latest",
+ "label": "nextjs-16-cf-workers (latest)"
+ }
+ ],
+ "optionalVariants": [
+ {
+ "build-command": "pnpm test:build-canary",
+ "label": "nextjs-16-cf-workers (canary)"
+ }
+ ]
+ }
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/playwright.config.mjs b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/playwright.config.mjs
new file mode 100644
index 000000000000..0f15639161dd
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/playwright.config.mjs
@@ -0,0 +1,21 @@
+import { getPlaywrightConfig } from '@sentry-internal/test-utils';
+const testEnv = process.env.TEST_ENV;
+
+if (!testEnv) {
+ throw new Error('No test env defined');
+}
+
+const getStartCommand = () => {
+ if (testEnv === 'production') {
+ return 'pnpm cf:preview --port 3030';
+ }
+
+ throw new Error(`Unknown test env: ${testEnv}`);
+};
+
+const config = getPlaywrightConfig({
+ startCommand: getStartCommand(),
+ port: 3030,
+});
+
+export default config;
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/file.svg b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/file.svg
new file mode 100644
index 000000000000..004145cddf3f
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/file.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/globe.svg b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/globe.svg
new file mode 100644
index 000000000000..567f17b0d7c7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/globe.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/next.svg b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/next.svg
new file mode 100644
index 000000000000..5174b28c565c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/next.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/vercel.svg b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/vercel.svg
new file mode 100644
index 000000000000..77053960334e
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/vercel.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/window.svg b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/window.svg
new file mode 100644
index 000000000000..b2b2a44f6ebc
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/public/window.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.edge.config.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.edge.config.ts
new file mode 100644
index 000000000000..2199afc46eaf
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.edge.config.ts
@@ -0,0 +1,10 @@
+import * as Sentry from '@sentry/nextjs';
+
+Sentry.init({
+ environment: 'qa', // dynamic sampling bias to keep transactions
+ dsn: process.env.NEXT_PUBLIC_E2E_TEST_DSN,
+ tunnel: `http://localhost:3031/`, // proxy server
+ tracesSampleRate: 1.0,
+ sendDefaultPii: true,
+ // debug: true,
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.server.config.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.server.config.ts
new file mode 100644
index 000000000000..8f0b4d0f7800
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/sentry.server.config.ts
@@ -0,0 +1,16 @@
+import * as Sentry from '@sentry/nextjs';
+import { Log } from '@sentry/nextjs';
+
+Sentry.init({
+ environment: 'qa', // dynamic sampling bias to keep transactions
+ dsn: process.env.NEXT_PUBLIC_E2E_TEST_DSN,
+ tunnel: `http://localhost:3031/`, // proxy server
+ tracesSampleRate: 1.0,
+ sendDefaultPii: true,
+ // debug: true,
+ integrations: [Sentry.vercelAIIntegration()],
+ // Verify Log type is available
+ beforeSendLog(log: Log) {
+ return log;
+ },
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/start-event-proxy.mjs b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/start-event-proxy.mjs
new file mode 100644
index 000000000000..efb664370443
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/start-event-proxy.mjs
@@ -0,0 +1,14 @@
+import * as fs from 'fs';
+import * as path from 'path';
+import { startEventProxyServer } from '@sentry-internal/test-utils';
+
+const packageJson = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'package.json')));
+
+startEventProxyServer({
+ port: 3031,
+ proxyServerName: 'nextjs-16-cf-workers',
+ envelopeDumpPath: path.join(
+ process.cwd(),
+ `event-dumps/next-16-v${packageJson.dependencies.next}-${process.env.TEST_ENV}.dump`,
+ ),
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/async-params.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/async-params.test.ts
new file mode 100644
index 000000000000..e8160d12aded
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/async-params.test.ts
@@ -0,0 +1,14 @@
+import { expect, test } from '@playwright/test';
+import fs from 'fs';
+import { isDevMode } from './isDevMode';
+
+test('should not print warning for async params', async ({ page }) => {
+ test.skip(!isDevMode, 'only runs in dev mode');
+ await page.goto('/');
+
+ // If the server exits with code 1, the test will fail (see instrumentation.ts)
+ const devStdout = fs.readFileSync('.tmp_dev_server_logs', 'utf-8');
+ expect(devStdout).not.toContain('`params` should be awaited before using its properties.');
+
+ await expect(page.getByText('Next 16 test app')).toBeVisible();
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isDevMode.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isDevMode.ts
new file mode 100644
index 000000000000..d2be94232110
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isDevMode.ts
@@ -0,0 +1 @@
+export const isDevMode = !!process.env.TEST_ENV && process.env.TEST_ENV.includes('development');
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isr-routes.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isr-routes.test.ts
new file mode 100644
index 000000000000..b42d2cd61b93
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/isr-routes.test.ts
@@ -0,0 +1,94 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+
+test('should remove sentry-trace and baggage meta tags on ISR dynamic route page load', async ({ page }) => {
+ // Navigate to ISR page
+ await page.goto('/isr-test/laptop');
+
+ // Wait for page to be fully loaded
+ await expect(page.locator('#isr-product-id')).toHaveText('laptop');
+
+ // Check that sentry-trace and baggage meta tags are removed for ISR pages
+ await expect(page.locator('meta[name="sentry-trace"]')).toHaveCount(0);
+ await expect(page.locator('meta[name="baggage"]')).toHaveCount(0);
+});
+
+test('should remove sentry-trace and baggage meta tags on ISR static route', async ({ page }) => {
+ // Navigate to ISR static page
+ await page.goto('/isr-test/static');
+
+ // Wait for page to be fully loaded
+ await expect(page.locator('#isr-static-marker')).toHaveText('static-isr');
+
+ // Check that sentry-trace and baggage meta tags are removed for ISR pages
+ await expect(page.locator('meta[name="sentry-trace"]')).toHaveCount(0);
+ await expect(page.locator('meta[name="baggage"]')).toHaveCount(0);
+});
+
+test('should remove meta tags for different ISR dynamic route values', async ({ page }) => {
+ // Test with 'phone' (one of the pre-generated static params)
+ await page.goto('/isr-test/phone');
+ await expect(page.locator('#isr-product-id')).toHaveText('phone');
+
+ await expect(page.locator('meta[name="sentry-trace"]')).toHaveCount(0);
+ await expect(page.locator('meta[name="baggage"]')).toHaveCount(0);
+
+ // Test with 'tablet'
+ await page.goto('/isr-test/tablet');
+ await expect(page.locator('#isr-product-id')).toHaveText('tablet');
+
+ await expect(page.locator('meta[name="sentry-trace"]')).toHaveCount(0);
+ await expect(page.locator('meta[name="baggage"]')).toHaveCount(0);
+});
+
+test('should create unique transactions for ISR pages on each visit', async ({ page }) => {
+ const traceIds: string[] = [];
+
+ // Load the same ISR page 5 times to ensure cached HTML meta tags are consistently removed
+ for (let i = 0; i < 5; i++) {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return !!(
+ transactionEvent.transaction === '/isr-test/:product' && transactionEvent.contexts?.trace?.op === 'pageload'
+ );
+ });
+
+ if (i === 0) {
+ await page.goto('/isr-test/laptop');
+ } else {
+ await page.reload();
+ }
+
+ const transaction = await transactionPromise;
+ const traceId = transaction.contexts?.trace?.trace_id;
+
+ expect(traceId).toBeDefined();
+ expect(traceId).toMatch(/[a-f0-9]{32}/);
+ traceIds.push(traceId!);
+ }
+
+ // Verify all 5 page loads have unique trace IDs (no reuse of cached/stale meta tags)
+ const uniqueTraceIds = new Set(traceIds);
+ expect(uniqueTraceIds.size).toBe(5);
+});
+
+test('ISR route should be identified correctly in the route manifest', async ({ page }) => {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent.transaction === '/isr-test/:product' && transactionEvent.contexts?.trace?.op === 'pageload';
+ });
+
+ await page.goto('/isr-test/laptop');
+ const transaction = await transactionPromise;
+
+ // Verify the transaction is properly parameterized
+ expect(transaction).toMatchObject({
+ transaction: '/isr-test/:product',
+ transaction_info: { source: 'route' },
+ contexts: {
+ trace: {
+ data: {
+ 'sentry.source': 'route',
+ },
+ },
+ },
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/metrics.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/metrics.test.ts
new file mode 100644
index 000000000000..6569c3d21890
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/metrics.test.ts
@@ -0,0 +1,135 @@
+import { expect, test } from '@playwright/test';
+import { waitForMetric } from '@sentry-internal/test-utils';
+
+// Metrics are not currently supported on Cloudflare Workers
+// TODO: Investigate and enable when metrics support is added for CF Workers
+test.skip('Should emit metrics from server and client', async ({ request, page }) => {
+ const clientCountPromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.page.count';
+ });
+
+ const clientDistributionPromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.page.distribution';
+ });
+
+ const clientGaugePromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.page.gauge';
+ });
+
+ const serverCountPromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.route.handler.count';
+ });
+
+ const serverDistributionPromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.route.handler.distribution';
+ });
+
+ const serverGaugePromise = waitForMetric('nextjs-16-cf-workers', async metric => {
+ return metric.name === 'test.route.handler.gauge';
+ });
+
+ await page.goto('/metrics');
+ await page.getByText('Emit').click();
+ const clientCount = await clientCountPromise;
+ const clientDistribution = await clientDistributionPromise;
+ const clientGauge = await clientGaugePromise;
+ const serverCount = await serverCountPromise;
+ const serverDistribution = await serverDistributionPromise;
+ const serverGauge = await serverGaugePromise;
+
+ expect(clientCount).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ span_id: expect.any(String),
+ name: 'test.page.count',
+ type: 'counter',
+ value: 1,
+ attributes: {
+ page: { value: '/metrics', type: 'string' },
+ 'random.attribute': { value: 'Apples', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+
+ expect(clientDistribution).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ span_id: expect.any(String),
+ name: 'test.page.distribution',
+ type: 'distribution',
+ value: 100,
+ attributes: {
+ page: { value: '/metrics', type: 'string' },
+ 'random.attribute': { value: 'Manzanas', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+
+ expect(clientGauge).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ span_id: expect.any(String),
+ name: 'test.page.gauge',
+ type: 'gauge',
+ value: 200,
+ attributes: {
+ page: { value: '/metrics', type: 'string' },
+ 'random.attribute': { value: 'Mele', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+
+ expect(serverCount).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ name: 'test.route.handler.count',
+ type: 'counter',
+ value: 1,
+ attributes: {
+ 'server.address': { value: expect.any(String), type: 'string' },
+ 'random.attribute': { value: 'Potatoes', type: 'string' },
+ endpoint: { value: '/metrics/route-handler', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+
+ expect(serverDistribution).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ name: 'test.route.handler.distribution',
+ type: 'distribution',
+ value: 100,
+ attributes: {
+ 'server.address': { value: expect.any(String), type: 'string' },
+ 'random.attribute': { value: 'Patatas', type: 'string' },
+ endpoint: { value: '/metrics/route-handler', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+
+ expect(serverGauge).toMatchObject({
+ timestamp: expect.any(Number),
+ trace_id: expect.any(String),
+ name: 'test.route.handler.gauge',
+ type: 'gauge',
+ value: 200,
+ attributes: {
+ 'server.address': { value: expect.any(String), type: 'string' },
+ 'random.attribute': { value: 'Patate', type: 'string' },
+ endpoint: { value: '/metrics/route-handler', type: 'string' },
+ 'sentry.environment': { value: 'qa', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.nextjs', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ },
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/middleware.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/middleware.test.ts
new file mode 100644
index 000000000000..f769874a3d34
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/middleware.test.ts
@@ -0,0 +1,77 @@
+import { expect, test } from '@playwright/test';
+import { waitForError, waitForTransaction } from '@sentry-internal/test-utils';
+import { isDevMode } from './isDevMode';
+
+// TODO: Middleware tests need SDK adjustments for Cloudflare Workers edge runtime
+test.skip('Should create a transaction for middleware', async ({ request }) => {
+ const middlewareTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'middleware GET';
+ });
+
+ const response = await request.get('/api/endpoint-behind-middleware');
+ expect(await response.json()).toStrictEqual({ name: 'John Doe' });
+
+ const middlewareTransaction = await middlewareTransactionPromise;
+
+ expect(middlewareTransaction.contexts?.trace?.status).toBe('ok');
+ expect(middlewareTransaction.contexts?.trace?.op).toBe('http.server.middleware');
+ expect(middlewareTransaction.contexts?.runtime?.name).toBe('vercel-edge');
+ expect(middlewareTransaction.transaction_info?.source).toBe('route');
+
+ // Assert that isolation scope works properly
+ expect(middlewareTransaction.tags?.['my-isolated-tag']).toBe(true);
+ // TODO: Isolation scope is not working properly yet
+ // expect(middlewareTransaction.tags?.['my-global-scope-isolated-tag']).not.toBeDefined();
+});
+
+// TODO: Middleware tests need SDK adjustments for Cloudflare Workers edge runtime
+test.skip('Faulty middlewares', async ({ request }) => {
+ test.skip(isDevMode, 'Throwing crashes the dev server at the moment'); // https://github.com/vercel/next.js/issues/85261
+ const middlewareTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'middleware GET';
+ });
+
+ const errorEventPromise = waitForError('nextjs-16-cf-workers', errorEvent => {
+ return errorEvent?.exception?.values?.[0]?.value === 'Middleware Error';
+ });
+
+ request.get('/api/endpoint-behind-middleware', { headers: { 'x-should-throw': '1' } }).catch(() => {
+ // Noop
+ });
+
+ await test.step('should record transactions', async () => {
+ const middlewareTransaction = await middlewareTransactionPromise;
+ expect(middlewareTransaction.contexts?.trace?.status).toBe('internal_error');
+ expect(middlewareTransaction.contexts?.trace?.op).toBe('http.server.middleware');
+ expect(middlewareTransaction.contexts?.runtime?.name).toBe('vercel-edge');
+ expect(middlewareTransaction.transaction_info?.source).toBe('route');
+ });
+});
+
+// TODO: Middleware tests need SDK adjustments for Cloudflare Workers edge runtime
+test.skip('Should trace outgoing fetch requests inside middleware and create breadcrumbs for it', async ({
+ request,
+}) => {
+ test.skip(isDevMode, 'The fetch request ends up in a separate tx in dev at the moment');
+ const middlewareTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'middleware GET';
+ });
+
+ request.get('/api/endpoint-behind-middleware', { headers: { 'x-should-make-request': '1' } }).catch(() => {
+ // Noop
+ });
+
+ const middlewareTransaction = await middlewareTransactionPromise;
+
+ // Breadcrumbs should always be created for the fetch request
+ expect(middlewareTransaction.breadcrumbs).toEqual(
+ expect.arrayContaining([
+ {
+ category: 'http',
+ data: { 'http.method': 'GET', status_code: 200, url: 'http://localhost:3030/' },
+ timestamp: expect.any(Number),
+ type: 'http',
+ },
+ ]),
+ );
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/nested-rsc-error.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/nested-rsc-error.test.ts
new file mode 100644
index 000000000000..9c9de3b350a8
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/nested-rsc-error.test.ts
@@ -0,0 +1,39 @@
+import { expect, test } from '@playwright/test';
+import { waitForError, waitForTransaction } from '@sentry-internal/test-utils';
+
+ // TODO: Flaky on CI
+test.skip('Should capture errors from nested server components when `Sentry.captureRequestError` is added to the `onRequestError` hook', async ({
+ page,
+}) => {
+ const errorEventPromise = waitForError('nextjs-16-cf-workers', errorEvent => {
+ return !!errorEvent?.exception?.values?.some(value => value.value === 'I am technically uncatchable');
+ });
+
+ const serverTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /nested-rsc-error/[param]';
+ });
+
+ await page.goto(`/nested-rsc-error/123`);
+ const errorEvent = await errorEventPromise;
+ const serverTransactionEvent = await serverTransactionPromise;
+
+ // error event is part of the transaction
+ expect(errorEvent.contexts?.trace?.trace_id).toBe(serverTransactionEvent.contexts?.trace?.trace_id);
+
+ expect(errorEvent.request).toMatchObject({
+ headers: expect.any(Object),
+ method: 'GET',
+ });
+
+ expect(errorEvent.contexts?.nextjs).toEqual({
+ route_type: 'render',
+ router_kind: 'App Router',
+ router_path: '/nested-rsc-error/[param]',
+ request_path: '/nested-rsc-error/123',
+ });
+
+ expect(errorEvent.exception?.values?.[0]?.mechanism).toEqual({
+ handled: false,
+ type: 'auto.function.nextjs.on_request_error',
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/pageload-tracing.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/pageload-tracing.test.ts
new file mode 100644
index 000000000000..55f78630ef2d
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/pageload-tracing.test.ts
@@ -0,0 +1,56 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+
+// TODO: Flaky on CI
+test.skip('App router transactions should be attached to the pageload request span', async ({ page }) => {
+ const serverTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /pageload-tracing';
+ });
+
+ const pageloadTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === '/pageload-tracing';
+ });
+
+ await page.goto(`/pageload-tracing`);
+
+ const [serverTransaction, pageloadTransaction] = await Promise.all([
+ serverTransactionPromise,
+ pageloadTransactionPromise,
+ ]);
+
+ const pageloadTraceId = pageloadTransaction.contexts?.trace?.trace_id;
+
+ expect(pageloadTraceId).toBeTruthy();
+ expect(serverTransaction.contexts?.trace?.trace_id).toBe(pageloadTraceId);
+});
+
+// TODO: HTTP request headers are not extracted as span attributes on Cloudflare Workers
+test.skip('extracts HTTP request headers as span attributes', async ({ baseURL }) => {
+ const serverTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /pageload-tracing';
+ });
+
+ await fetch(`${baseURL}/pageload-tracing`, {
+ headers: {
+ 'User-Agent': 'Custom-NextJS-Agent/15.0',
+ 'Content-Type': 'text/html',
+ 'X-NextJS-Test': 'nextjs-header-value',
+ Accept: 'text/html, application/xhtml+xml',
+ 'X-Framework': 'Next.js',
+ 'X-Request-ID': 'nextjs-789',
+ },
+ });
+
+ const serverTransaction = await serverTransactionPromise;
+
+ expect(serverTransaction.contexts?.trace?.data).toEqual(
+ expect.objectContaining({
+ 'http.request.header.user_agent': 'Custom-NextJS-Agent/15.0',
+ 'http.request.header.content_type': 'text/html',
+ 'http.request.header.x_nextjs_test': 'nextjs-header-value',
+ 'http.request.header.accept': 'text/html, application/xhtml+xml',
+ 'http.request.header.x_framework': 'Next.js',
+ 'http.request.header.x_request_id': 'nextjs-789',
+ }),
+ );
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/parameterized-routes.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/parameterized-routes.test.ts
new file mode 100644
index 000000000000..5d2925375688
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/parameterized-routes.test.ts
@@ -0,0 +1,189 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+
+test('should create a parameterized transaction when the `app` directory is used', async ({ page }) => {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/parameterized/:one' && transactionEvent.contexts?.trace?.op === 'pageload'
+ );
+ });
+
+ await page.goto(`/parameterized/cappuccino`);
+
+ const transaction = await transactionPromise;
+
+ expect(transaction).toMatchObject({
+ breadcrumbs: expect.arrayContaining([
+ {
+ category: 'navigation',
+ data: { from: '/parameterized/cappuccino', to: '/parameterized/cappuccino' },
+ timestamp: expect.any(Number),
+ },
+ ]),
+ contexts: {
+ react: { version: expect.any(String) },
+ trace: {
+ data: {
+ 'sentry.op': 'pageload',
+ 'sentry.origin': 'auto.pageload.nextjs.app_router_instrumentation',
+ 'sentry.source': 'route',
+ },
+ op: 'pageload',
+ origin: 'auto.pageload.nextjs.app_router_instrumentation',
+ span_id: expect.stringMatching(/[a-f0-9]{16}/),
+ trace_id: expect.stringMatching(/[a-f0-9]{32}/),
+ },
+ },
+ environment: 'qa',
+ request: {
+ headers: expect.any(Object),
+ url: expect.stringMatching(/\/parameterized\/cappuccino$/),
+ },
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: '/parameterized/:one',
+ transaction_info: { source: 'route' },
+ type: 'transaction',
+ });
+});
+
+test('should create a static transaction when the `app` directory is used and the route is not parameterized', async ({
+ page,
+}) => {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/parameterized/static' && transactionEvent.contexts?.trace?.op === 'pageload'
+ );
+ });
+
+ await page.goto(`/parameterized/static`);
+
+ const transaction = await transactionPromise;
+
+ expect(transaction).toMatchObject({
+ breadcrumbs: expect.arrayContaining([
+ {
+ category: 'navigation',
+ data: { from: '/parameterized/static', to: '/parameterized/static' },
+ timestamp: expect.any(Number),
+ },
+ ]),
+ contexts: {
+ react: { version: expect.any(String) },
+ trace: {
+ data: {
+ 'sentry.op': 'pageload',
+ 'sentry.origin': 'auto.pageload.nextjs.app_router_instrumentation',
+ 'sentry.source': 'url',
+ },
+ op: 'pageload',
+ origin: 'auto.pageload.nextjs.app_router_instrumentation',
+ span_id: expect.stringMatching(/[a-f0-9]{16}/),
+ trace_id: expect.stringMatching(/[a-f0-9]{32}/),
+ },
+ },
+ environment: 'qa',
+ request: {
+ headers: expect.any(Object),
+ url: expect.stringMatching(/\/parameterized\/static$/),
+ },
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: '/parameterized/static',
+ transaction_info: { source: 'url' },
+ type: 'transaction',
+ });
+});
+
+test('should create a partially parameterized transaction when the `app` directory is used', async ({ page }) => {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/parameterized/:one/beep' && transactionEvent.contexts?.trace?.op === 'pageload'
+ );
+ });
+
+ await page.goto(`/parameterized/cappuccino/beep`);
+
+ const transaction = await transactionPromise;
+
+ expect(transaction).toMatchObject({
+ breadcrumbs: expect.arrayContaining([
+ {
+ category: 'navigation',
+ data: { from: '/parameterized/cappuccino/beep', to: '/parameterized/cappuccino/beep' },
+ timestamp: expect.any(Number),
+ },
+ ]),
+ contexts: {
+ react: { version: expect.any(String) },
+ trace: {
+ data: {
+ 'sentry.op': 'pageload',
+ 'sentry.origin': 'auto.pageload.nextjs.app_router_instrumentation',
+ 'sentry.source': 'route',
+ },
+ op: 'pageload',
+ origin: 'auto.pageload.nextjs.app_router_instrumentation',
+ span_id: expect.stringMatching(/[a-f0-9]{16}/),
+ trace_id: expect.stringMatching(/[a-f0-9]{32}/),
+ },
+ },
+ environment: 'qa',
+ request: {
+ headers: expect.any(Object),
+ url: expect.stringMatching(/\/parameterized\/cappuccino\/beep$/),
+ },
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: '/parameterized/:one/beep',
+ transaction_info: { source: 'route' },
+ type: 'transaction',
+ });
+});
+
+test('should create a nested parameterized transaction when the `app` directory is used', async ({ page }) => {
+ const transactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/parameterized/:one/beep/:two' &&
+ transactionEvent.contexts?.trace?.op === 'pageload'
+ );
+ });
+
+ await page.goto(`/parameterized/cappuccino/beep/espresso`);
+
+ const transaction = await transactionPromise;
+
+ expect(transaction).toMatchObject({
+ breadcrumbs: expect.arrayContaining([
+ {
+ category: 'navigation',
+ data: { from: '/parameterized/cappuccino/beep/espresso', to: '/parameterized/cappuccino/beep/espresso' },
+ timestamp: expect.any(Number),
+ },
+ ]),
+ contexts: {
+ react: { version: expect.any(String) },
+ trace: {
+ data: {
+ 'sentry.op': 'pageload',
+ 'sentry.origin': 'auto.pageload.nextjs.app_router_instrumentation',
+ 'sentry.source': 'route',
+ },
+ op: 'pageload',
+ origin: 'auto.pageload.nextjs.app_router_instrumentation',
+ span_id: expect.stringMatching(/[a-f0-9]{16}/),
+ trace_id: expect.stringMatching(/[a-f0-9]{32}/),
+ },
+ },
+ environment: 'qa',
+ request: {
+ headers: expect.any(Object),
+ url: expect.stringMatching(/\/parameterized\/cappuccino\/beep\/espresso$/),
+ },
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: '/parameterized/:one/beep/:two',
+ transaction_info: { source: 'route' },
+ type: 'transaction',
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/prefetch-spans.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/prefetch-spans.test.ts
new file mode 100644
index 000000000000..f48158a54697
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/prefetch-spans.test.ts
@@ -0,0 +1,25 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { isDevMode } from './isDevMode';
+
+test('Prefetch client spans should have a http.request.prefetch attribute', async ({ page }) => {
+ test.skip(isDevMode, "Prefetch requests don't have the prefetch header in dev mode");
+
+ const pageloadTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === '/prefetching';
+ });
+
+ await page.goto(`/prefetching`);
+
+ // Make it more likely that nextjs prefetches
+ await page.hover('#prefetch-link');
+
+ expect((await pageloadTransactionPromise).spans).toContainEqual(
+ expect.objectContaining({
+ op: 'http.client',
+ data: expect.objectContaining({
+ 'http.request.prefetch': true,
+ }),
+ }),
+ );
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/route-handler.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/route-handler.test.ts
new file mode 100644
index 000000000000..16368e5be57b
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/route-handler.test.ts
@@ -0,0 +1,37 @@
+import test, { expect } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+
+test.skip('Should create a transaction for node route handlers', async ({ request }) => {
+ const routehandlerTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /route-handler/[xoxo]/node';
+ });
+
+ const response = await request.get('/route-handler/123/node', { headers: { 'x-charly': 'gomez' } });
+ expect(await response.json()).toStrictEqual({ message: 'Hello Node Route Handler' });
+
+ const routehandlerTransaction = await routehandlerTransactionPromise;
+
+ expect(routehandlerTransaction.contexts?.trace?.status).toBe('ok');
+ expect(routehandlerTransaction.contexts?.trace?.op).toBe('http.server');
+
+ // Custom headers are not captured on Cloudflare Workers
+ // This assertion is skipped for CF Workers environment
+});
+
+test('Should create a transaction for edge route handlers', async ({ request }) => {
+ // This test only works for webpack builds on non-async param extraction
+ // todo: check if we can set request headers for edge on sdkProcessingMetadata
+ test.skip();
+ const routehandlerTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /route-handler/[xoxo]/edge';
+ });
+
+ const response = await request.get('/route-handler/123/edge', { headers: { 'x-charly': 'gomez' } });
+ expect(await response.json()).toStrictEqual({ message: 'Hello Edge Route Handler' });
+
+ const routehandlerTransaction = await routehandlerTransactionPromise;
+
+ expect(routehandlerTransaction.contexts?.trace?.status).toBe('ok');
+ expect(routehandlerTransaction.contexts?.trace?.op).toBe('http.server');
+ expect(routehandlerTransaction.contexts?.trace?.data?.['http.request.header.x_charly']).toBe('gomez');
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-action-redirect.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-action-redirect.test.ts
new file mode 100644
index 000000000000..09ae79cc60a7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-action-redirect.test.ts
@@ -0,0 +1,47 @@
+import { expect, test } from '@playwright/test';
+import { waitForError, waitForTransaction } from '@sentry-internal/test-utils';
+
+test.skip('Should handle server action redirect without capturing errors', async ({ page }) => {
+ // Wait for the initial page load transaction
+ const pageLoadTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === '/redirect/origin';
+ });
+
+ // Navigate to the origin page
+ await page.goto('/redirect/origin');
+
+ const pageLoadTransaction = await pageLoadTransactionPromise;
+ expect(pageLoadTransaction).toBeDefined();
+
+ // Wait for the redirect transaction
+ const redirectTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /redirect/destination';
+ });
+
+ // No error should be captured
+ const redirectErrorPromise = waitForError('nextjs-16-cf-workers', async errorEvent => {
+ return !!errorEvent;
+ });
+
+ // Click the redirect button
+ await page.click('button[type="submit"]');
+
+ await redirectTransactionPromise;
+
+ // Verify we got redirected to the destination page
+ await expect(page).toHaveURL('/redirect/destination');
+
+ // Wait for potential errors with a 2 second timeout
+ const errorTimeout = new Promise((_, reject) =>
+ setTimeout(() => reject(new Error('No error captured (timeout)')), 2000),
+ );
+
+ // We expect this to timeout since no error should be captured during the redirect
+ try {
+ await Promise.race([redirectErrorPromise, errorTimeout]);
+ throw new Error('Expected no error to be captured, but an error was found');
+ } catch (e) {
+ // If we get a timeout error (as expected), no error was captured
+ expect((e as Error).message).toBe('No error captured (timeout)');
+ }
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-components.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-components.test.ts
new file mode 100644
index 000000000000..f5c9f0fb6f96
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/server-components.test.ts
@@ -0,0 +1,101 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+
+// TODO: Server component tests need SDK adjustments for Cloudflare Workers
+test.skip('Sends a transaction for a request to app router with URL', async ({ page }) => {
+ const serverComponentTransactionPromise = waitForTransaction('nextjs-16-cf-workers', transactionEvent => {
+ return (
+ transactionEvent?.transaction === 'GET /parameterized/[one]/beep/[two]' &&
+ transactionEvent.contexts?.trace?.data?.['http.target']?.startsWith('/parameterized/1337/beep/42')
+ );
+ });
+
+ await page.goto('/parameterized/1337/beep/42');
+
+ const transactionEvent = await serverComponentTransactionPromise;
+
+ expect(transactionEvent.contexts?.trace).toEqual({
+ data: expect.objectContaining({
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto',
+ 'sentry.sample_rate': 1,
+ 'sentry.source': 'route',
+ 'http.method': 'GET',
+ 'http.response.status_code': 200,
+ 'http.route': '/parameterized/[one]/beep/[two]',
+ 'http.status_code': 200,
+ 'http.target': '/parameterized/1337/beep/42',
+ 'otel.kind': 'SERVER',
+ 'next.route': '/parameterized/[one]/beep/[two]',
+ }),
+ op: 'http.server',
+ origin: 'auto',
+ span_id: expect.stringMatching(/[a-f0-9]{16}/),
+ status: 'ok',
+ trace_id: expect.stringMatching(/[a-f0-9]{32}/),
+ });
+
+ expect(transactionEvent.request).toMatchObject({
+ url: expect.stringContaining('/parameterized/1337/beep/42'),
+ });
+
+ // The transaction should not contain any spans with the same name as the transaction
+ // e.g. "GET /parameterized/[one]/beep/[two]"
+ expect(
+ transactionEvent.spans?.filter(span => {
+ return span.description === transactionEvent.transaction;
+ }),
+ ).toHaveLength(0);
+});
+
+// TODO: Server component span tests need SDK adjustments for Cloudflare Workers
+test.skip('Will create a transaction with spans for every server component and metadata generation functions when visiting a page', async ({
+ page,
+}) => {
+ const serverTransactionEventPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /nested-layout';
+ });
+
+ await page.goto('/nested-layout');
+
+ const spanDescriptions = (await serverTransactionEventPromise).spans?.map(span => {
+ return span.description;
+ });
+
+ expect(spanDescriptions).toContainEqual('render route (app) /nested-layout');
+ expect(spanDescriptions).toContainEqual('build component tree');
+ expect(spanDescriptions).toContainEqual('resolve root layout server component');
+ expect(spanDescriptions).toContainEqual('resolve layout server component "(nested-layout)"');
+ expect(spanDescriptions).toContainEqual('resolve layout server component "nested-layout"');
+ expect(spanDescriptions).toContainEqual('resolve page server component "/nested-layout"');
+ expect(spanDescriptions).toContainEqual('generateMetadata /(nested-layout)/nested-layout/page');
+ expect(spanDescriptions).toContainEqual('start response');
+ expect(spanDescriptions).toContainEqual('NextNodeServer.clientComponentLoading');
+});
+
+// TODO: Server component span tests need SDK adjustments for Cloudflare Workers
+test.skip('Will create a transaction with spans for every server component and metadata generation functions when visiting a dynamic page', async ({
+ page,
+}) => {
+ const serverTransactionEventPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /nested-layout/[dynamic]';
+ });
+
+ await page.goto('/nested-layout/123');
+
+ const spanDescriptions = (await serverTransactionEventPromise).spans?.map(span => {
+ return span.description;
+ });
+
+ expect(spanDescriptions).toContainEqual('resolve page components');
+ expect(spanDescriptions).toContainEqual('render route (app) /nested-layout/[dynamic]');
+ expect(spanDescriptions).toContainEqual('build component tree');
+ expect(spanDescriptions).toContainEqual('resolve root layout server component');
+ expect(spanDescriptions).toContainEqual('resolve layout server component "(nested-layout)"');
+ expect(spanDescriptions).toContainEqual('resolve layout server component "nested-layout"');
+ expect(spanDescriptions).toContainEqual('resolve layout server component "[dynamic]"');
+ expect(spanDescriptions).toContainEqual('resolve page server component "/nested-layout/[dynamic]"');
+ expect(spanDescriptions).toContainEqual('generateMetadata /(nested-layout)/nested-layout/[dynamic]/page');
+ expect(spanDescriptions).toContainEqual('start response');
+ expect(spanDescriptions).toContainEqual('NextNodeServer.clientComponentLoading');
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/streaming-rsc-error.test.ts b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/streaming-rsc-error.test.ts
new file mode 100644
index 000000000000..ba42d9fadbb9
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tests/streaming-rsc-error.test.ts
@@ -0,0 +1,38 @@
+import { expect, test } from '@playwright/test';
+import { waitForError, waitForTransaction } from '@sentry-internal/test-utils';
+
+test('Should capture errors for crashing streaming promises in server components when `Sentry.captureRequestError` is added to the `onRequestError` hook', async ({
+ page,
+}) => {
+ const errorEventPromise = waitForError('nextjs-16-cf-workers', errorEvent => {
+ return !!errorEvent?.exception?.values?.some(value => value.value === 'I am a data streaming error');
+ });
+
+ const serverTransactionPromise = waitForTransaction('nextjs-16-cf-workers', async transactionEvent => {
+ return transactionEvent?.transaction === 'GET /streaming-rsc-error/[param]';
+ });
+
+ await page.goto(`/streaming-rsc-error/123`);
+ const errorEvent = await errorEventPromise;
+ const serverTransactionEvent = await serverTransactionPromise;
+
+ // error event is part of the transaction
+ expect(errorEvent.contexts?.trace?.trace_id).toBe(serverTransactionEvent.contexts?.trace?.trace_id);
+
+ expect(errorEvent.request).toMatchObject({
+ headers: expect.any(Object),
+ method: 'GET',
+ });
+
+ expect(errorEvent.contexts?.nextjs).toEqual({
+ route_type: 'render',
+ router_kind: 'App Router',
+ router_path: '/streaming-rsc-error/[param]',
+ request_path: '/streaming-rsc-error/123',
+ });
+
+ expect(errorEvent.exception?.values?.[0]?.mechanism).toEqual({
+ handled: false,
+ type: 'auto.function.nextjs.on_request_error',
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tsconfig.json b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tsconfig.json
new file mode 100644
index 000000000000..cc9ed39b5aa2
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/tsconfig.json
@@ -0,0 +1,27 @@
+{
+ "compilerOptions": {
+ "target": "ES2017",
+ "lib": ["dom", "dom.iterable", "esnext"],
+ "allowJs": true,
+ "skipLibCheck": true,
+ "strict": true,
+ "noEmit": true,
+ "esModuleInterop": true,
+ "module": "esnext",
+ "moduleResolution": "bundler",
+ "resolveJsonModule": true,
+ "isolatedModules": true,
+ "jsx": "react-jsx",
+ "incremental": true,
+ "plugins": [
+ {
+ "name": "next"
+ }
+ ],
+ "paths": {
+ "@/*": ["./*"]
+ }
+ },
+ "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts", ".next/dev/types/**/*.ts", "**/*.mts"],
+ "exclude": ["node_modules"]
+}
diff --git a/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/wrangler.jsonc b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/wrangler.jsonc
new file mode 100644
index 000000000000..062a8e7881e3
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/nextjs-16-cf-workers/wrangler.jsonc
@@ -0,0 +1,68 @@
+/**
+ * For more details on how to configure Wrangler, refer to:
+ * https://developers.cloudflare.com/workers/wrangler/configuration/
+ */
+{
+ "$schema": "node_modules/wrangler/config-schema.json",
+ "name": "next-cf",
+ "main": ".open-next/worker.js",
+ "compatibility_date": "2025-12-01",
+ "compatibility_flags": [
+ "nodejs_compat",
+ "global_fetch_strictly_public"
+ ],
+ "assets": {
+ "binding": "ASSETS",
+ "directory": ".open-next/assets"
+ },
+ "images": {
+ // Enable image optimization
+ // see https://opennext.js.org/cloudflare/howtos/image
+ "binding": "IMAGES"
+ },
+ "services": [
+ {
+ // Self-reference service binding, the service name must match the worker name
+ // see https://opennext.js.org/cloudflare/caching
+ "binding": "WORKER_SELF_REFERENCE",
+ "service": "next-cf"
+ }
+ ],
+ "observability": {
+ "enabled": true
+ }
+ /**
+ * Smart Placement
+ * Docs: https://developers.cloudflare.com/workers/configuration/smart-placement/#smart-placement
+ */
+ // "placement": { "mode": "smart" }
+ /**
+ * Bindings
+ * Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform, including
+ * databases, object storage, AI inference, real-time communication and more.
+ * https://developers.cloudflare.com/workers/runtime-apis/bindings/
+ */
+ /**
+ * Environment Variables
+ * https://developers.cloudflare.com/workers/wrangler/configuration/#environment-variables
+ */
+ // "vars": { "MY_VARIABLE": "production_value" }
+ /**
+ * Note: Use secrets to store sensitive data.
+ * https://developers.cloudflare.com/workers/configuration/secrets/
+ */
+ /**
+ * Static Assets
+ * https://developers.cloudflare.com/workers/static-assets/binding/
+ */
+ // "assets": { "directory": "./public/", "binding": "ASSETS" }
+ /**
+ * Service Bindings (communicate between multiple Workers)
+ * https://developers.cloudflare.com/workers/wrangler/configuration/#service-bindings
+ */
+ // "services": [{ "binding": "MY_SERVICE", "service": "my-service" }]
+}
\ No newline at end of file
diff --git a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-custom-sampler/package.json b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-custom-sampler/package.json
index 62d5bc10cd1a..68e8f7ac1f24 100644
--- a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-custom-sampler/package.json
+++ b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-custom-sampler/package.json
@@ -12,13 +12,13 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-http": "^0.210.0",
- "@opentelemetry/resources": "^2.4.0",
- "@opentelemetry/sdk-trace-node": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-http": "^0.211.0",
+ "@opentelemetry/resources": "^2.5.0",
+ "@opentelemetry/sdk-trace-node": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/node-core": "latest || *",
"@sentry/opentelemetry": "latest || *",
"@types/express": "4.17.17",
diff --git a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-sdk-node/package.json b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-sdk-node/package.json
index 49e9bfe01dc8..b79d084997bf 100644
--- a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-sdk-node/package.json
+++ b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2-sdk-node/package.json
@@ -12,15 +12,15 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-http": "^0.210.0",
- "@opentelemetry/resources": "^2.4.0",
- "@opentelemetry/sdk-trace-node": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
- "@opentelemetry/sdk-node": "^0.210.0",
- "@opentelemetry/exporter-trace-otlp-http": "^0.210.0",
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-http": "^0.211.0",
+ "@opentelemetry/resources": "^2.5.0",
+ "@opentelemetry/sdk-trace-node": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
+ "@opentelemetry/sdk-node": "^0.211.0",
+ "@opentelemetry/exporter-trace-otlp-http": "^0.211.0",
"@sentry/node-core": "latest || *",
"@sentry/opentelemetry": "latest || *",
"@types/express": "4.17.17",
diff --git a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2/package.json b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2/package.json
index bda2295cc692..7da50bef037a 100644
--- a/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2/package.json
+++ b/dev-packages/e2e-tests/test-applications/node-core-express-otel-v2/package.json
@@ -14,13 +14,13 @@
"@sentry/node-core": "latest || *",
"@sentry/opentelemetry": "latest || *",
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-http": "^0.210.0",
- "@opentelemetry/resources": "^2.4.0",
- "@opentelemetry/sdk-trace-node": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-http": "^0.211.0",
+ "@opentelemetry/resources": "^2.5.0",
+ "@opentelemetry/sdk-trace-node": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@types/express": "^4.17.21",
"@types/node": "^18.19.1",
"express": "^4.21.2",
diff --git a/dev-packages/e2e-tests/test-applications/node-otel-without-tracing/package.json b/dev-packages/e2e-tests/test-applications/node-otel-without-tracing/package.json
index a445a9ef4aac..b8165a303621 100644
--- a/dev-packages/e2e-tests/test-applications/node-otel-without-tracing/package.json
+++ b/dev-packages/e2e-tests/test-applications/node-otel-without-tracing/package.json
@@ -12,11 +12,11 @@
},
"dependencies": {
"@opentelemetry/api": "1.9.0",
- "@opentelemetry/sdk-trace-node": "2.4.0",
- "@opentelemetry/exporter-trace-otlp-http": "0.210.0",
- "@opentelemetry/instrumentation-undici": "0.20.0",
- "@opentelemetry/instrumentation-http": "0.210.0",
- "@opentelemetry/instrumentation": "0.210.0",
+ "@opentelemetry/sdk-trace-node": "2.5.0",
+ "@opentelemetry/exporter-trace-otlp-http": "0.211.0",
+ "@opentelemetry/instrumentation-undici": "0.21.0",
+ "@opentelemetry/instrumentation-http": "0.211.0",
+ "@opentelemetry/instrumentation": "0.211.0",
"@sentry/node": "latest || *",
"@types/express": "4.17.17",
"@types/node": "^18.19.1",
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.gitignore b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.gitignore
new file mode 100644
index 000000000000..ebb991370034
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.gitignore
@@ -0,0 +1,32 @@
+# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
+
+# dependencies
+/node_modules
+/.pnp
+.pnp.js
+
+# testing
+/coverage
+
+# production
+/build
+
+# misc
+.DS_Store
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+
+/test-results/
+/playwright-report/
+/playwright/.cache/
+
+!*.d.ts
+
+# react router
+.react-router
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.npmrc b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.npmrc
new file mode 100644
index 000000000000..070f80f05092
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/.npmrc
@@ -0,0 +1,2 @@
+@sentry:registry=http://127.0.0.1:4873
+@sentry-internal:registry=http://127.0.0.1:4873
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/app.css b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/app.css
new file mode 100644
index 000000000000..e78d2096ad20
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/app.css
@@ -0,0 +1,5 @@
+body {
+ font-family: system-ui, sans-serif;
+ margin: 0;
+ padding: 20px;
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.client.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.client.tsx
new file mode 100644
index 000000000000..c8bd9df2ba99
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.client.tsx
@@ -0,0 +1,33 @@
+import * as Sentry from '@sentry/react-router';
+import { StrictMode, startTransition } from 'react';
+import { hydrateRoot } from 'react-dom/client';
+import { HydratedRouter } from 'react-router/dom';
+
+// Create the tracing integration with useInstrumentationAPI enabled
+// This must be set BEFORE Sentry.init() to prepare the instrumentation
+const tracing = Sentry.reactRouterTracingIntegration({ useInstrumentationAPI: true });
+
+Sentry.init({
+ environment: 'qa', // dynamic sampling bias to keep transactions
+ dsn: 'https://username@domain/123',
+ tunnel: `http://localhost:3031/`, // proxy server
+ integrations: [tracing],
+ tracesSampleRate: 1.0,
+ tracePropagationTargets: [/^\//],
+});
+
+// Get the client instrumentation from the Sentry integration
+// NOTE: As of React Router 7.x, HydratedRouter does NOT invoke these hooks in Framework Mode.
+// The client-side instrumentation is prepared for when React Router adds support.
+// Client-side navigation is currently handled by the legacy instrumentHydratedRouter() approach.
+const sentryClientInstrumentation = [tracing.clientInstrumentation];
+
+startTransition(() => {
+  hydrateRoot(
+    document,
+    <StrictMode>
+      {/* unstable_instrumentations is React Router 7.x's prop name (will become `instrumentations` in v8) */}
+      <HydratedRouter unstable_instrumentations={sentryClientInstrumentation} />
+    </StrictMode>,
+  );
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.server.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.server.tsx
new file mode 100644
index 000000000000..1cbc6b6166fe
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/entry.server.tsx
@@ -0,0 +1,22 @@
+import { createReadableStreamFromReadable } from '@react-router/node';
+import * as Sentry from '@sentry/react-router';
+import { renderToPipeableStream } from 'react-dom/server';
+import { ServerRouter } from 'react-router';
+import { type HandleErrorFunction } from 'react-router';
+
+const ABORT_DELAY = 5_000;
+
+const handleRequest = Sentry.createSentryHandleRequest({
+ streamTimeout: ABORT_DELAY,
+ ServerRouter,
+ renderToPipeableStream,
+ createReadableStreamFromReadable,
+});
+
+export default handleRequest;
+
+export const handleError: HandleErrorFunction = Sentry.createSentryHandleError({ logErrors: true });
+
+// Use Sentry's instrumentation API for server-side tracing
+// `unstable_instrumentations` is React Router 7.x's export name (will become `instrumentations` in v8)
+export const unstable_instrumentations = [Sentry.createSentryServerInstrumentation()];
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/root.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/root.tsx
new file mode 100644
index 000000000000..227c08f7730c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/root.tsx
@@ -0,0 +1,69 @@
+import * as Sentry from '@sentry/react-router';
+import { Links, Meta, Outlet, Scripts, ScrollRestoration, isRouteErrorResponse } from 'react-router';
+import type { Route } from './+types/root';
+import stylesheet from './app.css?url';
+
+export const links: Route.LinksFunction = () => [
+ { rel: 'preconnect', href: 'https://fonts.googleapis.com' },
+ {
+ rel: 'preconnect',
+ href: 'https://fonts.gstatic.com',
+ crossOrigin: 'anonymous',
+ },
+ {
+ rel: 'stylesheet',
+ href: 'https://fonts.googleapis.com/css2?family=Inter:ital,opsz,wght@0,14..32,100..900;1,14..32,100..900&display=swap',
+ },
+ { rel: 'stylesheet', href: stylesheet },
+];
+
+export function Layout({ children }: { children: React.ReactNode }) {
+  return (
+    <html lang="en">
+      <head>
+        <meta charSet="utf-8" />
+        <meta name="viewport" content="width=device-width, initial-scale=1" />
+        <Meta />
+        <Links />
+      </head>
+      <body>
+        {children}
+        <ScrollRestoration />
+        <Scripts />
+      </body>
+    </html>
+  );
+}
+
+export default function App() {
+  return <Outlet />;
+}
+
+export function ErrorBoundary({ error }: Route.ErrorBoundaryProps) {
+ let message = 'Oops!';
+ let details = 'An unexpected error occurred.';
+ let stack: string | undefined;
+
+ if (isRouteErrorResponse(error)) {
+ message = error.status === 404 ? '404' : 'Error';
+ details = error.status === 404 ? 'The requested page could not be found.' : error.statusText || details;
+ } else if (error && error instanceof Error) {
+ Sentry.captureException(error);
+ if (import.meta.env.DEV) {
+ details = error.message;
+ stack = error.stack;
+ }
+ }
+
+  return (
+    <main>
+      <h1>{message}</h1>
+      <p>{details}</p>
+      {stack && (
+        <pre>
+          <code>{stack}</code>
+        </pre>
+      )}
+    </main>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts
new file mode 100644
index 000000000000..6bd5b27264eb
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes.ts
@@ -0,0 +1,19 @@
+import { type RouteConfig, index, prefix, route } from '@react-router/dev/routes';
+
+export default [
+ index('routes/home.tsx'),
+ ...prefix('performance', [
+ index('routes/performance/index.tsx'),
+ route('ssr', 'routes/performance/ssr.tsx'),
+ route('with/:param', 'routes/performance/dynamic-param.tsx'),
+ route('static', 'routes/performance/static.tsx'),
+ route('server-loader', 'routes/performance/server-loader.tsx'),
+ route('server-action', 'routes/performance/server-action.tsx'),
+ route('with-middleware', 'routes/performance/with-middleware.tsx'),
+ route('error-loader', 'routes/performance/error-loader.tsx'),
+ route('error-action', 'routes/performance/error-action.tsx'),
+ route('error-middleware', 'routes/performance/error-middleware.tsx'),
+ route('lazy-route', 'routes/performance/lazy-route.tsx'),
+ route('fetcher-test', 'routes/performance/fetcher-test.tsx'),
+ ]),
+] satisfies RouteConfig;
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/home.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/home.tsx
new file mode 100644
index 000000000000..7812a9c500d1
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/home.tsx
@@ -0,0 +1,10 @@
+export function meta() {
+ return [
+ { title: 'React Router Instrumentation API Test' },
+ { name: 'description', content: 'Testing React Router instrumentation API' },
+ ];
+}
+
+export default function Home() {
+  return <div>home</div>;
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/dynamic-param.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/dynamic-param.tsx
new file mode 100644
index 000000000000..2ceee24aa1be
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/dynamic-param.tsx
@@ -0,0 +1,15 @@
+import type { Route } from './+types/dynamic-param';
+
+// Minimal loader to trigger Sentry's route instrumentation
+export function loader() {
+ return null;
+}
+
+export default function DynamicParamPage({ params }: Route.ComponentProps) {
+  return (
+    <div>
+      <h1>Dynamic Param Page</h1>
+      <p>Param: {params.param}</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-action.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-action.tsx
new file mode 100644
index 000000000000..1948fa13a1ec
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-action.tsx
@@ -0,0 +1,16 @@
+import { Form } from 'react-router';
+
+export async function action(): Promise<never> {
+ throw new Error('Action error for testing');
+}
+
+export default function ErrorActionPage() {
+  return (
+    <div>
+      <h1>Error Action Page</h1>
+      <Form method="post">
+        <button type="submit">Submit</button>
+      </Form>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-loader.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-loader.tsx
new file mode 100644
index 000000000000..6dd3d3013f37
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-loader.tsx
@@ -0,0 +1,12 @@
+export function loader(): never {
+ throw new Error('Loader error for testing');
+}
+
+export default function ErrorLoaderPage() {
+  return (
+    <div>
+      <h1>Error Loader Page</h1>
+      <p>This should not render</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-middleware.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-middleware.tsx
new file mode 100644
index 000000000000..e918ab55322d
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/error-middleware.tsx
@@ -0,0 +1,16 @@
+import type { Route } from './+types/error-middleware';
+
+export const middleware: Route.MiddlewareFunction[] = [
+ async function errorMiddleware() {
+ throw new Error('Middleware error for testing');
+ },
+];
+
+export default function ErrorMiddlewarePage() {
+  return (
+    <div>
+      <h1>Error Middleware Page</h1>
+      <p>This should not render</p>
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/fetcher-test.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/fetcher-test.tsx
new file mode 100644
index 000000000000..9256b134989d
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/fetcher-test.tsx
@@ -0,0 +1,30 @@
+import { useFetcher } from 'react-router';
+import type { Route } from './+types/fetcher-test';
+
+export async function loader() {
+ return { message: 'Fetcher test page loaded' };
+}
+
+export async function action({ request }: Route.ActionArgs) {
+ const formData = await request.formData();
+ const value = formData.get('value')?.toString() || '';
+ await new Promise(resolve => setTimeout(resolve, 50));
+ return { success: true, value };
+}
+
+export default function FetcherTestPage() {
+ const fetcher = useFetcher();
+
+  return (
+    <div>
+      <h1>Fetcher Test Page</h1>
+      <fetcher.Form method="post">
+        <input type="text" name="value" defaultValue="test" />
+        <button type="submit">Submit via Fetcher</button>
+      </fetcher.Form>
+      {fetcher.data?.success && <p>Fetcher result: {fetcher.data.value}</p>}
+    </div>
+  );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/index.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/index.tsx
new file mode 100644
index 000000000000..94479a3d12f2
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/index.tsx
@@ -0,0 +1,19 @@
+import { Link } from 'react-router';
+
+// Minimal loader to trigger Sentry's route instrumentation
+export function loader() {
+ return null;
+}
+
+export default function PerformancePage() {
+ return (
+    <div>
+      <h1>Performance Page</h1>
+      <nav>
+        <Link to="/performance/ssr">SSR Page</Link>
+        <Link to="/performance/with/sentry">With Param Page</Link>
+        <Link to="/performance/server-loader">Server Loader</Link>
+      </nav>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/lazy-route.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/lazy-route.tsx
new file mode 100644
index 000000000000..9ea3102f6e3f
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/lazy-route.tsx
@@ -0,0 +1,14 @@
+export async function loader() {
+ // Simulate a slow lazy load
+ await new Promise(resolve => setTimeout(resolve, 100));
+ return { message: 'Lazy loader data' };
+}
+
+export default function LazyRoute() {
+ return (
+    <div>
+      <h1 id="lazy-route-title">Lazy Route</h1>
+      <p id="lazy-route-content">This route was lazily loaded</p>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-action.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-action.tsx
new file mode 100644
index 000000000000..4b5ad7a4f5ac
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-action.tsx
@@ -0,0 +1,22 @@
+import { Form } from 'react-router';
+import type { Route } from './+types/server-action';
+
+export async function action({ request }: Route.ActionArgs) {
+ const formData = await request.formData();
+ const name = formData.get('name')?.toString() || '';
+ await new Promise(resolve => setTimeout(resolve, 100));
+ return { success: true, name };
+}
+
+export default function ServerActionPage({ actionData }: Route.ComponentProps) {
+ return (
+    <div>
+      <h1>Server Action Page</h1>
+      <Form method="post">
+        <input type="text" name="name" />
+        <button type="submit">Submit</button>
+      </Form>
+      {actionData?.success && <p>Action completed for: {actionData.name}</p>}
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-loader.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-loader.tsx
new file mode 100644
index 000000000000..3ab65bff8ecf
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/server-loader.tsx
@@ -0,0 +1,16 @@
+import type { Route } from './+types/server-loader';
+
+export async function loader() {
+ await new Promise(resolve => setTimeout(resolve, 100));
+ return { data: 'burritos' };
+}
+
+export default function ServerLoaderPage({ loaderData }: Route.ComponentProps) {
+ const { data } = loaderData;
+ return (
+    <div>
+      <h1>Server Loader Page</h1>
+      <p>{data}</p>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/ssr.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/ssr.tsx
new file mode 100644
index 000000000000..0b4831496c3c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/ssr.tsx
@@ -0,0 +1,12 @@
+import { Link } from 'react-router';
+
+export default function SsrPage() {
+ return (
+    <div>
+      <h1>SSR Page</h1>
+      <nav>
+        <Link to="/performance">Back to Performance</Link>
+      </nav>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/static.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/static.tsx
new file mode 100644
index 000000000000..773f6e64ebea
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/static.tsx
@@ -0,0 +1,7 @@
+export default function StaticPage() {
+ return (
+    <div>
+      <h1>Static Page</h1>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/with-middleware.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/with-middleware.tsx
new file mode 100644
index 000000000000..ed4f4713d7b6
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/app/routes/performance/with-middleware.tsx
@@ -0,0 +1,30 @@
+import type { Route } from './+types/with-middleware';
+
+// Middleware runs before loaders/actions on matching routes
+// With future.v8_middleware enabled, we export 'middleware' (not 'unstable_middleware')
+export const middleware: Route.MiddlewareFunction[] = [
+ async function authMiddleware({ context }, next) {
+ // Code runs BEFORE handlers
+ // Type assertion to allow setting custom properties on context
+ (context as any).middlewareCalled = true;
+
+ // Must call next() and return the response
+ const response = await next();
+
+ // Code runs AFTER handlers (can modify response headers here)
+ return response;
+ },
+];
+
+export function loader() {
+ return { message: 'Middleware route loaded' };
+}
+
+export default function WithMiddlewarePage() {
+ return (
+    <div>
+      <h1 id="middleware-route-title">Middleware Route</h1>
+      <p id="middleware-route-content">This route has middleware</p>
+    </div>
+ );
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/instrument.mjs b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/instrument.mjs
new file mode 100644
index 000000000000..bb1dad2e5da9
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/instrument.mjs
@@ -0,0 +1,10 @@
+import * as Sentry from '@sentry/react-router';
+
+// Initialize Sentry early (before the server starts)
+// The server instrumentations are created in entry.server.tsx
+Sentry.init({
+ dsn: 'https://username@domain/123',
+ environment: 'qa', // dynamic sampling bias to keep transactions
+ tracesSampleRate: 1.0,
+ tunnel: `http://localhost:3031/`, // proxy server
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/package.json b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/package.json
new file mode 100644
index 000000000000..9666bf218893
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/package.json
@@ -0,0 +1,61 @@
+{
+ "name": "react-router-7-framework-instrumentation",
+ "version": "0.1.0",
+ "type": "module",
+ "private": true,
+ "dependencies": {
+ "@react-router/node": "latest",
+ "@react-router/serve": "latest",
+ "@sentry/react-router": "latest || *",
+ "isbot": "^5.1.17",
+ "react": "^18.3.1",
+ "react-dom": "^18.3.1",
+ "react-router": "latest"
+ },
+ "devDependencies": {
+ "@playwright/test": "~1.56.0",
+ "@react-router/dev": "latest",
+ "@sentry-internal/test-utils": "link:../../../test-utils",
+ "@types/node": "^20",
+ "@types/react": "18.3.1",
+ "@types/react-dom": "18.3.1",
+ "typescript": "^5.6.3",
+ "vite": "^5.4.11"
+ },
+ "scripts": {
+ "build": "react-router build",
+ "dev": "NODE_OPTIONS='--import ./instrument.mjs' react-router dev",
+ "start": "NODE_OPTIONS='--import ./instrument.mjs' react-router-serve ./build/server/index.js",
+ "proxy": "node start-event-proxy.mjs",
+ "typecheck": "react-router typegen && tsc",
+ "clean": "npx rimraf node_modules pnpm-lock.yaml",
+ "test:build": "pnpm install && pnpm build",
+ "test:assert": "pnpm test:ts && pnpm test:playwright",
+ "test:ts": "pnpm typecheck",
+ "test:playwright": "playwright test"
+ },
+ "eslintConfig": {
+ "extends": [
+ "react-app",
+ "react-app/jest"
+ ]
+ },
+ "browserslist": {
+ "production": [
+ ">0.2%",
+ "not dead",
+ "not op_mini all"
+ ],
+ "development": [
+ "last 1 chrome version",
+ "last 1 firefox version",
+ "last 1 safari version"
+ ]
+ },
+ "volta": {
+ "extends": "../../package.json"
+ },
+ "sentryTest": {
+ "optional": true
+ }
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/playwright.config.mjs b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/playwright.config.mjs
new file mode 100644
index 000000000000..3ed5721107a7
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/playwright.config.mjs
@@ -0,0 +1,8 @@
+import { getPlaywrightConfig } from '@sentry-internal/test-utils';
+
+const config = getPlaywrightConfig({
+ startCommand: `PORT=3030 pnpm start`,
+ port: 3030,
+});
+
+export default config;
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/react-router.config.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/react-router.config.ts
new file mode 100644
index 000000000000..72f2eef3b0f5
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/react-router.config.ts
@@ -0,0 +1,9 @@
+import type { Config } from '@react-router/dev/config';
+
+export default {
+ ssr: true,
+ prerender: ['/performance/static'],
+ future: {
+ v8_middleware: true,
+ },
+} satisfies Config;
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/start-event-proxy.mjs b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/start-event-proxy.mjs
new file mode 100644
index 000000000000..f70c1d3f20f1
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/start-event-proxy.mjs
@@ -0,0 +1,6 @@
+import { startEventProxyServer } from '@sentry-internal/test-utils';
+
+startEventProxyServer({
+ port: 3031,
+ proxyServerName: 'react-router-7-framework-instrumentation',
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/constants.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/constants.ts
new file mode 100644
index 000000000000..850613659daa
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/constants.ts
@@ -0,0 +1 @@
+export const APP_NAME = 'react-router-7-framework-instrumentation';
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/errors/errors.server.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/errors/errors.server.test.ts
new file mode 100644
index 000000000000..7550f8b4e10c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/errors/errors.server.test.ts
@@ -0,0 +1,176 @@
+import { expect, test } from '@playwright/test';
+import { waitForError, waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+test.describe('server - instrumentation API error capture', () => {
+ test('should capture loader errors with instrumentation API mechanism', async ({ page }) => {
+ const errorPromise = waitForError(APP_NAME, async errorEvent => {
+ return errorEvent.exception?.values?.[0]?.value === 'Loader error for testing';
+ });
+
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/error-loader';
+ });
+
+ await page.goto(`/performance/error-loader`).catch(() => {
+ // Expected to fail due to loader error
+ });
+
+ const [error, transaction] = await Promise.all([errorPromise, txPromise]);
+
+ // Verify the error was captured with correct mechanism and transaction name
+ expect(error).toMatchObject({
+ exception: {
+ values: [
+ {
+ type: 'Error',
+ value: 'Loader error for testing',
+ mechanism: {
+ type: 'react_router.loader',
+ handled: false,
+ },
+ },
+ ],
+ },
+ transaction: 'GET /performance/error-loader',
+ });
+
+ // Verify the transaction was also created with correct attributes
+ expect(transaction).toMatchObject({
+ transaction: 'GET /performance/error-loader',
+ contexts: {
+ trace: {
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ });
+ });
+
+ test('should include loader span in transaction even when loader throws', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/error-loader';
+ });
+
+ await page.goto(`/performance/error-loader`).catch(() => {
+ // Expected to fail due to loader error
+ });
+
+ const transaction = await txPromise;
+
+ // Find the loader span
+ const loaderSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.loader',
+ );
+
+ expect(loaderSpan).toMatchObject({
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.loader',
+ },
+ op: 'function.react_router.loader',
+ });
+ });
+
+ test('error and transaction should share the same trace', async ({ page }) => {
+ const errorPromise = waitForError(APP_NAME, async errorEvent => {
+ return errorEvent.exception?.values?.[0]?.value === 'Loader error for testing';
+ });
+
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/error-loader';
+ });
+
+ await page.goto(`/performance/error-loader`).catch(() => {
+ // Expected to fail due to loader error
+ });
+
+ const [error, transaction] = await Promise.all([errorPromise, txPromise]);
+
+ // Error and transaction should have the same trace_id
+ expect(error.contexts?.trace?.trace_id).toBe(transaction.contexts?.trace?.trace_id);
+ });
+
+ test('should capture action errors with instrumentation API mechanism', async ({ page }) => {
+ const errorPromise = waitForError(APP_NAME, async errorEvent => {
+ return errorEvent.exception?.values?.[0]?.value === 'Action error for testing';
+ });
+
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'POST /performance/error-action';
+ });
+
+ await page.goto(`/performance/error-action`);
+ await page.getByRole('button', { name: 'Trigger Error' }).click();
+
+ const [error, transaction] = await Promise.all([errorPromise, txPromise]);
+
+ expect(error).toMatchObject({
+ exception: {
+ values: [
+ {
+ type: 'Error',
+ value: 'Action error for testing',
+ mechanism: {
+ type: 'react_router.action',
+ handled: false,
+ },
+ },
+ ],
+ },
+ transaction: 'POST /performance/error-action',
+ });
+
+ expect(transaction).toMatchObject({
+ transaction: 'POST /performance/error-action',
+ contexts: {
+ trace: {
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ });
+ });
+
+ test('should capture middleware errors with instrumentation API mechanism', async ({ page }) => {
+ const errorPromise = waitForError(APP_NAME, async errorEvent => {
+ return errorEvent.exception?.values?.[0]?.value === 'Middleware error for testing';
+ });
+
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/error-middleware';
+ });
+
+ await page.goto(`/performance/error-middleware`).catch(() => {
+ // Expected to fail due to middleware error
+ });
+
+ const [error, transaction] = await Promise.all([errorPromise, txPromise]);
+
+ expect(error).toMatchObject({
+ exception: {
+ values: [
+ {
+ type: 'Error',
+ value: 'Middleware error for testing',
+ mechanism: {
+ type: 'react_router.middleware',
+ handled: false,
+ },
+ },
+ ],
+ },
+ transaction: 'GET /performance/error-middleware',
+ });
+
+ expect(transaction).toMatchObject({
+ transaction: 'GET /performance/error-middleware',
+ contexts: {
+ trace: {
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ });
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/fetcher.client.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/fetcher.client.test.ts
new file mode 100644
index 000000000000..41ef363b9589
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/fetcher.client.test.ts
@@ -0,0 +1,80 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+// Known React Router limitation: HydratedRouter doesn't invoke instrumentation API
+// hooks on the client-side in Framework Mode. This includes the router.fetch hook.
+// See: https://github.com/remix-run/react-router/discussions/13749
+// Using test.fixme to skip these until React Router fixes this upstream.
+
+test.describe('client - instrumentation API fetcher (upstream limitation)', () => {
+ test.fixme('should instrument fetcher with instrumentation API origin', async ({ page }) => {
+ const serverTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === 'GET /performance/fetcher-test' &&
+ transactionEvent.contexts?.trace?.op === 'http.server'
+ );
+ });
+
+ await page.goto(`/performance/fetcher-test`);
+ await serverTxPromise;
+
+ // Wait for the fetcher action transaction
+ const fetcherTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.contexts?.trace?.op === 'function.react_router.fetcher' &&
+ transactionEvent.contexts?.trace?.data?.['sentry.origin'] === 'auto.function.react_router.instrumentation_api'
+ );
+ });
+
+ await page.locator('#fetcher-submit').click();
+
+ const fetcherTx = await fetcherTxPromise;
+
+ expect(fetcherTx).toMatchObject({
+ contexts: {
+ trace: {
+ op: 'function.react_router.fetcher',
+ origin: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ });
+ });
+
+ test('should still send server action transaction when fetcher submits', async ({ page }) => {
+ const serverPageloadPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === 'GET /performance/fetcher-test' &&
+ transactionEvent.contexts?.trace?.op === 'http.server'
+ );
+ });
+
+ await page.goto(`/performance/fetcher-test`);
+ await serverPageloadPromise;
+
+ // Fetcher submit triggers a server action
+ const serverActionPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === 'POST /performance/fetcher-test' &&
+ transactionEvent.contexts?.trace?.op === 'http.server'
+ );
+ });
+
+ await page.locator('#fetcher-submit').click();
+
+ const serverAction = await serverActionPromise;
+
+ expect(serverAction).toMatchObject({
+ transaction: 'POST /performance/fetcher-test',
+ contexts: {
+ trace: {
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ });
+
+ // Verify fetcher result is displayed
+ await expect(page.locator('#fetcher-result')).toHaveText('Fetcher result: test-value');
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/lazy.server.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/lazy.server.test.ts
new file mode 100644
index 000000000000..85c35e75e8a5
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/lazy.server.test.ts
@@ -0,0 +1,115 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+// Known React Router limitation: route.lazy hooks only work in Data Mode (createBrowserRouter).
+// Framework Mode uses bundler code-splitting which doesn't trigger the lazy hook.
+// See: https://github.com/remix-run/react-router/blob/main/decisions/0002-lazy-route-modules.md
+// Using test.fail() to auto-detect when React Router fixes this upstream.
+test.describe('server - instrumentation API lazy loading', () => {
+ test.fail('should instrument lazy route loading with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/lazy-route';
+ });
+
+ await page.goto(`/performance/lazy-route`);
+
+ const transaction = await txPromise;
+
+ // Verify the lazy route content is rendered
+ await expect(page.locator('#lazy-route-title')).toBeVisible();
+ await expect(page.locator('#lazy-route-content')).toHaveText('This route was lazily loaded');
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'route',
+ },
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ spans: expect.any(Array),
+ transaction: 'GET /performance/lazy-route',
+ type: 'transaction',
+ transaction_info: { source: 'route' },
+ });
+
+ // Find the lazy span
+ const lazySpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.lazy',
+ );
+
+ expect(lazySpan).toMatchObject({
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.lazy',
+ },
+ description: 'Lazy Route Load',
+ parent_span_id: expect.any(String),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ op: 'function.react_router.lazy',
+ origin: 'auto.function.react_router.instrumentation_api',
+ });
+ });
+
+ test('should include loader span after lazy loading completes', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/lazy-route';
+ });
+
+ await page.goto(`/performance/lazy-route`);
+
+ const transaction = await txPromise;
+
+ // Find the loader span that runs after lazy loading
+ const loaderSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.loader',
+ );
+
+ expect(loaderSpan).toMatchObject({
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.loader',
+ },
+ description: '/performance/lazy-route',
+ op: 'function.react_router.loader',
+ origin: 'auto.function.react_router.instrumentation_api',
+ });
+ });
+
+ test.fail('should have correct span ordering: lazy before loader', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/lazy-route';
+ });
+
+ await page.goto(`/performance/lazy-route`);
+
+ const transaction = await txPromise;
+
+ const lazySpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.lazy',
+ );
+
+ const loaderSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.loader',
+ );
+
+ expect(lazySpan).toBeDefined();
+ expect(loaderSpan).toBeDefined();
+
+ // Lazy span should start before or at the same time as loader
+ // (lazy loading must complete before loader can run)
+ expect(lazySpan!.start_timestamp).toBeLessThanOrEqual(loaderSpan!.start_timestamp);
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts
new file mode 100644
index 000000000000..e99a58a7f57c
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/middleware.server.test.ts
@@ -0,0 +1,85 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+// Note: React Router middleware instrumentation now works in Framework Mode.
+// Previously this was a known limitation (see: https://github.com/remix-run/react-router/discussions/12950)
+test.describe('server - instrumentation API middleware', () => {
+ test('should instrument server middleware with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/with-middleware';
+ });
+
+ await page.goto(`/performance/with-middleware`);
+
+ const transaction = await txPromise;
+
+ // Verify the middleware route content is rendered
+ await expect(page.locator('#middleware-route-title')).toBeVisible();
+ await expect(page.locator('#middleware-route-content')).toHaveText('This route has middleware');
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'route',
+ },
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ spans: expect.any(Array),
+ transaction: 'GET /performance/with-middleware',
+ type: 'transaction',
+ transaction_info: { source: 'route' },
+ });
+
+ // Find the middleware span
+ const middlewareSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.middleware',
+ );
+
+ expect(middlewareSpan).toMatchObject({
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.middleware',
+ },
+ description: '/performance/with-middleware',
+ parent_span_id: expect.any(String),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ op: 'function.react_router.middleware',
+ origin: 'auto.function.react_router.instrumentation_api',
+ });
+ });
+
+ test('should have middleware span run before loader span', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/with-middleware';
+ });
+
+ await page.goto(`/performance/with-middleware`);
+
+ const transaction = await txPromise;
+
+ const middlewareSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.middleware',
+ );
+
+ const loaderSpan = transaction?.spans?.find(
+ (span: { data?: { 'sentry.op'?: string } }) => span.data?.['sentry.op'] === 'function.react_router.loader',
+ );
+
+ expect(middlewareSpan).toBeDefined();
+ expect(loaderSpan).toBeDefined();
+
+ // Middleware should start before loader
+ expect(middlewareSpan!.start_timestamp).toBeLessThanOrEqual(loaderSpan!.start_timestamp);
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/navigation.client.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/navigation.client.test.ts
new file mode 100644
index 000000000000..ed5bafad79fc
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/navigation.client.test.ts
@@ -0,0 +1,196 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+// Known React Router limitation: HydratedRouter doesn't invoke instrumentation API
+// hooks on the client-side in Framework Mode. Server-side instrumentation works.
+// See: https://github.com/remix-run/react-router/discussions/13749
+// The legacy HydratedRouter instrumentation provides fallback navigation tracking.
+
+test.describe('client - navigation fallback to legacy instrumentation', () => {
+ test('should send navigation transaction via legacy HydratedRouter instrumentation', async ({ page }) => {
+ // First load the performance page
+ await page.goto(`/performance`);
+ await page.waitForTimeout(1000);
+
+ // Wait for the navigation transaction (from legacy instrumentation)
+ const navigationTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/performance/ssr' && transactionEvent.contexts?.trace?.op === 'navigation'
+ );
+ });
+
+ // Click on the SSR link to navigate
+ await page.getByRole('link', { name: 'SSR Page' }).click();
+
+ const transaction = await navigationTxPromise;
+
+ // Navigation should work via legacy HydratedRouter instrumentation
+ // (not instrumentation_api since that doesn't work in Framework Mode)
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ op: 'navigation',
+ origin: 'auto.navigation.react_router', // Legacy origin, not instrumentation_api
+ },
+ },
+ transaction: '/performance/ssr',
+ type: 'transaction',
+ });
+ });
+
+ test('should parameterize navigation transaction for dynamic routes', async ({ page }) => {
+ await page.goto(`/performance`);
+ await page.waitForTimeout(1000);
+
+ const navigationTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/performance/with/:param' &&
+ transactionEvent.contexts?.trace?.op === 'navigation'
+ );
+ });
+
+ await page.getByRole('link', { name: 'With Param Page' }).click();
+
+ const transaction = await navigationTxPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ op: 'navigation',
+ origin: 'auto.navigation.react_router',
+ data: {
+ 'sentry.source': 'route',
+ },
+ },
+ },
+ transaction: '/performance/with/:param',
+ type: 'transaction',
+ transaction_info: { source: 'route' },
+ });
+ });
+
+ test('should send multiple navigation transactions in sequence', async ({ page }) => {
+ await page.goto(`/performance`);
+ await page.waitForTimeout(1000);
+
+ // First navigation: /performance -> /performance/ssr
+ const firstNavPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/performance/ssr' && transactionEvent.contexts?.trace?.op === 'navigation'
+ );
+ });
+
+ await page.getByRole('link', { name: 'SSR Page' }).click();
+
+ const firstNav = await firstNavPromise;
+
+ expect(firstNav).toMatchObject({
+ contexts: {
+ trace: {
+ op: 'navigation',
+ origin: 'auto.navigation.react_router',
+ },
+ },
+ transaction: '/performance/ssr',
+ type: 'transaction',
+ });
+
+ // Second navigation: /performance/ssr -> /performance
+ const secondNavPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === '/performance' && transactionEvent.contexts?.trace?.op === 'navigation';
+ });
+
+ await page.getByRole('link', { name: 'Back to Performance' }).click();
+
+ const secondNav = await secondNavPromise;
+
+ expect(secondNav).toMatchObject({
+ contexts: {
+ trace: {
+ op: 'navigation',
+ origin: 'auto.navigation.react_router',
+ },
+ },
+ transaction: '/performance',
+ type: 'transaction',
+ });
+ });
+});
+
+// Tests for instrumentation API navigation - skipped via test.fixme until React Router fixes this upstream
+test.describe('client - instrumentation API navigation (upstream limitation)', () => {
+ test.fixme('should send navigation transaction with instrumentation API origin', async ({ page }) => {
+ // First load the performance page
+ await page.goto(`/performance`);
+
+ // Wait for the navigation transaction
+ const navigationTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/performance/ssr' &&
+ transactionEvent.contexts?.trace?.data?.['sentry.origin'] === 'auto.navigation.react_router.instrumentation_api'
+ );
+ });
+
+ // Click on the SSR link to navigate
+ await page.getByRole('link', { name: 'SSR Page' }).click();
+
+ const transaction = await navigationTxPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'navigation',
+ 'sentry.origin': 'auto.navigation.react_router.instrumentation_api',
+ 'sentry.source': 'url',
+ },
+ op: 'navigation',
+ origin: 'auto.navigation.react_router.instrumentation_api',
+ },
+ },
+ transaction: '/performance/ssr',
+ type: 'transaction',
+ transaction_info: { source: 'url' },
+ });
+ });
+
+ test.fixme('should send navigation transaction on parameterized route', async ({ page }) => {
+ // First load the performance page
+ await page.goto(`/performance`);
+
+ // Wait for the navigation transaction
+ const navigationTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === '/performance/with/sentry' &&
+ transactionEvent.contexts?.trace?.data?.['sentry.origin'] === 'auto.navigation.react_router.instrumentation_api'
+ );
+ });
+
+ // Click on the With Param link to navigate
+ await page.getByRole('link', { name: 'With Param Page' }).click();
+
+ const transaction = await navigationTxPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'navigation',
+ 'sentry.origin': 'auto.navigation.react_router.instrumentation_api',
+ 'sentry.source': 'url',
+ },
+ op: 'navigation',
+ origin: 'auto.navigation.react_router.instrumentation_api',
+ },
+ },
+ transaction: '/performance/with/sentry',
+ type: 'transaction',
+ transaction_info: { source: 'url' },
+ });
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/pageload.client.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/pageload.client.test.ts
new file mode 100644
index 000000000000..0e1bf552b995
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/pageload.client.test.ts
@@ -0,0 +1,51 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+test.describe('client - instrumentation API pageload', () => {
+ test('should send pageload transaction', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === '/performance' && transactionEvent.contexts?.trace?.op === 'pageload';
+ });
+
+ await page.goto(`/performance`);
+
+ const transaction = await txPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ op: 'pageload',
+ },
+ },
+ transaction: '/performance',
+ type: 'transaction',
+ });
+ });
+
+ test('should link server and client transactions with same trace_id', async ({ page }) => {
+ const serverTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return (
+ transactionEvent.transaction === 'GET /performance' && transactionEvent.contexts?.trace?.op === 'http.server'
+ );
+ });
+
+ const clientTxPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === '/performance' && transactionEvent.contexts?.trace?.op === 'pageload';
+ });
+
+ await page.goto(`/performance`);
+
+ const [serverTx, clientTx] = await Promise.all([serverTxPromise, clientTxPromise]);
+
+ // Both transactions should share the same trace_id
+ expect(serverTx.contexts?.trace?.trace_id).toBeDefined();
+ expect(clientTx.contexts?.trace?.trace_id).toBeDefined();
+ expect(serverTx.contexts?.trace?.trace_id).toBe(clientTx.contexts?.trace?.trace_id);
+
+ // But have different span_ids
+ expect(serverTx.contexts?.trace?.span_id).not.toBe(clientTx.contexts?.trace?.span_id);
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/performance.server.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/performance.server.test.ts
new file mode 100644
index 000000000000..6deac7cf83b2
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tests/performance/performance.server.test.ts
@@ -0,0 +1,154 @@
+import { expect, test } from '@playwright/test';
+import { waitForTransaction } from '@sentry-internal/test-utils';
+import { APP_NAME } from '../constants';
+
+test.describe('server - instrumentation API performance', () => {
+ test('should send server transaction on pageload with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance';
+ });
+
+ await page.goto(`/performance`);
+
+ const transaction = await txPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'route',
+ },
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ spans: expect.any(Array),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: 'GET /performance',
+ type: 'transaction',
+ transaction_info: { source: 'route' },
+ platform: 'node',
+ request: {
+ url: expect.stringContaining('/performance'),
+ headers: expect.any(Object),
+ },
+ event_id: expect.any(String),
+ environment: 'qa',
+ sdk: {
+ integrations: expect.arrayContaining([expect.any(String)]),
+ name: 'sentry.javascript.react-router',
+ version: expect.any(String),
+ packages: [
+ { name: 'npm:@sentry/react-router', version: expect.any(String) },
+ { name: 'npm:@sentry/node', version: expect.any(String) },
+ ],
+ },
+ tags: {
+ runtime: 'node',
+ },
+ });
+ });
+
+ test('should send server transaction on parameterized route with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/with/:param';
+ });
+
+ await page.goto(`/performance/with/some-param`);
+
+ const transaction = await txPromise;
+
+ expect(transaction).toMatchObject({
+ contexts: {
+ trace: {
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'route',
+ },
+ op: 'http.server',
+ origin: 'auto.http.react_router.instrumentation_api',
+ },
+ },
+ spans: expect.any(Array),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ transaction: 'GET /performance/with/:param',
+ type: 'transaction',
+ transaction_info: { source: 'route' },
+ platform: 'node',
+ request: {
+ url: expect.stringContaining('/performance/with/some-param'),
+ headers: expect.any(Object),
+ },
+ event_id: expect.any(String),
+ environment: 'qa',
+ });
+ });
+
+ test('should instrument server loader with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'GET /performance/server-loader';
+ });
+
+ await page.goto(`/performance/server-loader`);
+
+ const transaction = await txPromise;
+
+ // Find the loader span
+ const loaderSpan = transaction?.spans?.find(span => span.data?.['sentry.op'] === 'function.react_router.loader');
+
+ expect(loaderSpan).toMatchObject({
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.loader',
+ },
+ description: '/performance/server-loader',
+ parent_span_id: expect.any(String),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ status: 'ok',
+ op: 'function.react_router.loader',
+ origin: 'auto.function.react_router.instrumentation_api',
+ });
+ });
+
+ test('should instrument server action with instrumentation API origin', async ({ page }) => {
+ const txPromise = waitForTransaction(APP_NAME, async transactionEvent => {
+ return transactionEvent.transaction === 'POST /performance/server-action';
+ });
+
+ await page.goto(`/performance/server-action`);
+ await page.getByRole('button', { name: 'Submit' }).click();
+
+ const transaction = await txPromise;
+
+ // Find the action span
+ const actionSpan = transaction?.spans?.find(span => span.data?.['sentry.op'] === 'function.react_router.action');
+
+ expect(actionSpan).toMatchObject({
+ span_id: expect.any(String),
+ trace_id: expect.any(String),
+ data: {
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ 'sentry.op': 'function.react_router.action',
+ },
+ description: '/performance/server-action',
+ parent_span_id: expect.any(String),
+ start_timestamp: expect.any(Number),
+ timestamp: expect.any(Number),
+ status: 'ok',
+ op: 'function.react_router.action',
+ origin: 'auto.function.react_router.instrumentation_api',
+ });
+ });
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tsconfig.json b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tsconfig.json
new file mode 100644
index 000000000000..a16df276e8bc
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/tsconfig.json
@@ -0,0 +1,20 @@
+{
+ "compilerOptions": {
+ "lib": ["DOM", "DOM.Iterable", "ES2022"],
+ "types": ["node", "vite/client"],
+ "target": "ES2022",
+ "module": "ES2022",
+ "moduleResolution": "bundler",
+ "jsx": "react-jsx",
+ "rootDirs": [".", "./.react-router/types"],
+ "baseUrl": ".",
+
+ "esModuleInterop": true,
+ "verbatimModuleSyntax": true,
+ "noEmit": true,
+ "resolveJsonModule": true,
+ "skipLibCheck": true,
+ "strict": true
+ },
+ "include": ["**/*", "**/.server/**/*", "**/.client/**/*", ".react-router/types/**/*"]
+}
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts
new file mode 100644
index 000000000000..68ba30d69397
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-framework-instrumentation/vite.config.ts
@@ -0,0 +1,6 @@
+import { reactRouter } from '@react-router/dev/vite';
+import { defineConfig } from 'vite';
+
+export default defineConfig({
+ plugins: [reactRouter()],
+});
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/index.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/index.tsx
index 1bcad5eaf4ce..a35a1b8ae077 100644
--- a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/index.tsx
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/index.tsx
@@ -134,6 +134,20 @@ const router = sentryCreateBrowserRouter(
lazyChildren: () => import('./pages/SlowFetchLazyRoutes').then(module => module.slowFetchRoutes),
},
},
+ {
+ // Route with wildcard placeholder that gets replaced by lazy-loaded parameterized routes
+ // This tests that wildcard transaction names get upgraded to parameterized routes
+ path: '/wildcard-lazy',
+ children: [
+ {
+ path: '*', // Catch-all wildcard - will be matched initially before lazy routes load
+ element: <>Loading...</>,
+ },
+ ],
+ handle: {
+ lazyChildren: () => import('./pages/WildcardLazyRoutes').then(module => module.wildcardRoutes),
+ },
+ },
],
{
async patchRoutesOnNavigation({ matches, patch }: Parameters[0]) {
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/Index.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/Index.tsx
index cf80af402b96..c22153441862 100644
--- a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/Index.tsx
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/Index.tsx
@@ -35,6 +35,10 @@ const Index = () => {
Navigate to Slow Fetch Route (500ms delay with fetch)
+
+ <Link to="/wildcard-lazy/456" id="navigation-to-wildcard-lazy">
+ Navigate to Wildcard Lazy Route (500ms delay, no fetch)
+ </Link>
</>
);
};
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/WildcardLazyRoutes.tsx b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/WildcardLazyRoutes.tsx
new file mode 100644
index 000000000000..8be773e6613a
--- /dev/null
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/src/pages/WildcardLazyRoutes.tsx
@@ -0,0 +1,17 @@
+import React from 'react';
+import { useParams } from 'react-router-dom';
+
+// Simulate slow lazy route loading (500ms delay via top-level await)
+await new Promise(resolve => setTimeout(resolve, 500));
+
+function WildcardItem() {
+ const { id } = useParams();
+ return Wildcard Item: {id}
;
+}
+
+export const wildcardRoutes = [
+ {
+ path: ':id',
+ element: <WildcardItem />,
+ },
+];
diff --git a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/tests/transactions.test.ts b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/tests/transactions.test.ts
index f7a3ec4a5519..9ebfa7ceb8c3 100644
--- a/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/tests/transactions.test.ts
+++ b/dev-packages/e2e-tests/test-applications/react-router-7-lazy-routes/tests/transactions.test.ts
@@ -1228,3 +1228,83 @@ test('Query/hash navigation does not corrupt transaction name', async ({ page })
const corruptedToRoot = navigationTransactions.filter(t => t.name === '/');
expect(corruptedToRoot.length).toBe(0);
});
+
+// Regression: Pageload to slow lazy route should get parameterized name even if span ends early
+test('Slow lazy route pageload with early span end still gets parameterized route name (regression)', async ({
+ page,
+}) => {
+ const transactionPromise = waitForTransaction('react-router-7-lazy-routes', async transactionEvent => {
+ return (
+ !!transactionEvent?.transaction &&
+ transactionEvent.contexts?.trace?.op === 'pageload' &&
+ (transactionEvent.transaction?.startsWith('/slow-fetch') ?? false)
+ );
+ });
+
+ // idleTimeout=300 ends span before 500ms lazy route loads, timeout=1000 waits for lazy routes
+ await page.goto('/slow-fetch/123?idleTimeout=300&timeout=1000');
+
+ const event = await transactionPromise;
+
+ expect(event.transaction).toBe('/slow-fetch/:id');
+ expect(event.type).toBe('transaction');
+ expect(event.contexts?.trace?.op).toBe('pageload');
+ expect(event.contexts?.trace?.data?.['sentry.source']).toBe('route');
+
+ const idleSpanFinishReason = event.contexts?.trace?.data?.['sentry.idle_span_finish_reason'];
+ expect(['idleTimeout', 'externalFinish']).toContain(idleSpanFinishReason);
+});
+
+// Regression: Wildcard route names should be upgraded to parameterized routes when lazy routes load
+test('Wildcard route pageload gets upgraded to parameterized route name (regression)', async ({ page }) => {
+ const transactionPromise = waitForTransaction('react-router-7-lazy-routes', async transactionEvent => {
+ return (
+ !!transactionEvent?.transaction &&
+ transactionEvent.contexts?.trace?.op === 'pageload' &&
+ (transactionEvent.transaction?.startsWith('/wildcard-lazy') ?? false)
+ );
+ });
+
+ await page.goto('/wildcard-lazy/456?idleTimeout=300&timeout=1000');
+
+ const event = await transactionPromise;
+
+ expect(event.transaction).toBe('/wildcard-lazy/:id');
+ expect(event.type).toBe('transaction');
+ expect(event.contexts?.trace?.op).toBe('pageload');
+ expect(event.contexts?.trace?.data?.['sentry.source']).toBe('route');
+});
+
+// Regression: Navigation to slow lazy route should get parameterized name even if span ends early.
+// Network activity from dynamic imports extends the idle timeout until lazy routes load.
+test('Slow lazy route navigation with early span end still gets parameterized route name (regression)', async ({
+ page,
+}) => {
+ // Configure short idle timeout (300ms) but longer lazy route timeout (1000ms)
+ await page.goto('/?idleTimeout=300&timeout=1000');
+
+ // Wait for pageload to complete
+ await page.waitForTimeout(500);
+
+ const navigationPromise = waitForTransaction('react-router-7-lazy-routes', async transactionEvent => {
+ return (
+ !!transactionEvent?.transaction &&
+ transactionEvent.contexts?.trace?.op === 'navigation' &&
+ (transactionEvent.transaction?.startsWith('/wildcard-lazy') ?? false)
+ );
+ });
+
+ // Navigate to wildcard-lazy route (500ms delay in module via top-level await)
+ // The dynamic import creates network activity that extends the span lifetime
+ const wildcardLazyLink = page.locator('id=navigation-to-wildcard-lazy');
+ await expect(wildcardLazyLink).toBeVisible();
+ await wildcardLazyLink.click();
+
+ const event = await navigationPromise;
+
+ // The navigation transaction should have the parameterized route name
+ expect(event.transaction).toBe('/wildcard-lazy/:id');
+ expect(event.type).toBe('transaction');
+ expect(event.contexts?.trace?.op).toBe('navigation');
+ expect(event.contexts?.trace?.data?.['sentry.source']).toBe('route');
+});
diff --git a/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/middleware.ts b/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/middleware.ts
index daf81ea97e10..780d8a3a2a9d 100644
--- a/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/middleware.ts
+++ b/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/middleware.ts
@@ -2,13 +2,15 @@ import { createMiddleware } from '@tanstack/react-start';
import { wrapMiddlewaresWithSentry } from '@sentry/tanstackstart-react';
// Global request middleware - runs on every request
-const globalRequestMiddleware = createMiddleware().server(async ({ next }) => {
+// NOTE: This is exported unwrapped to test auto-instrumentation via the Vite plugin
+export const globalRequestMiddleware = createMiddleware().server(async ({ next }) => {
console.log('Global request middleware executed');
return next();
});
// Global function middleware - runs on every server function
-const globalFunctionMiddleware = createMiddleware({ type: 'function' }).server(async ({ next }) => {
+// NOTE: This is exported unwrapped to test auto-instrumentation via the Vite plugin
+export const globalFunctionMiddleware = createMiddleware({ type: 'function' }).server(async ({ next }) => {
console.log('Global function middleware executed');
return next();
});
@@ -37,17 +39,13 @@ const errorMiddleware = createMiddleware({ type: 'function' }).server(async () =
throw new Error('Middleware Error Test');
});
-// Manually wrap middlewares with Sentry
+// Manually wrap middlewares with Sentry (for middlewares that won't be auto-instrumented)
export const [
- wrappedGlobalRequestMiddleware,
- wrappedGlobalFunctionMiddleware,
wrappedServerFnMiddleware,
wrappedServerRouteRequestMiddleware,
wrappedEarlyReturnMiddleware,
wrappedErrorMiddleware,
] = wrapMiddlewaresWithSentry({
- globalRequestMiddleware,
- globalFunctionMiddleware,
serverFnMiddleware,
serverRouteRequestMiddleware,
earlyReturnMiddleware,
diff --git a/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/start.ts b/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/start.ts
index eecd2816e492..0dc32ebd112f 100644
--- a/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/start.ts
+++ b/dev-packages/e2e-tests/test-applications/tanstackstart-react/src/start.ts
@@ -1,9 +1,10 @@
import { createStart } from '@tanstack/react-start';
-import { wrappedGlobalRequestMiddleware, wrappedGlobalFunctionMiddleware } from './middleware';
+// NOTE: These are NOT wrapped - auto-instrumentation via the Vite plugin will wrap them
+import { globalRequestMiddleware, globalFunctionMiddleware } from './middleware';
export const startInstance = createStart(() => {
return {
- requestMiddleware: [wrappedGlobalRequestMiddleware],
- functionMiddleware: [wrappedGlobalFunctionMiddleware],
+ requestMiddleware: [globalRequestMiddleware],
+ functionMiddleware: [globalFunctionMiddleware],
};
});
diff --git a/dev-packages/node-core-integration-tests/package.json b/dev-packages/node-core-integration-tests/package.json
index c249dd9e0ab1..3ae32c6ddff7 100644
--- a/dev-packages/node-core-integration-tests/package.json
+++ b/dev-packages/node-core-integration-tests/package.json
@@ -27,13 +27,13 @@
"@nestjs/core": "^11",
"@nestjs/platform-express": "^11",
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-http": "0.210.0",
- "@opentelemetry/resources": "^2.4.0",
- "@opentelemetry/sdk-trace-base": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-http": "0.211.0",
+ "@opentelemetry/resources": "^2.5.0",
+ "@opentelemetry/sdk-trace-base": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/core": "10.36.0",
"@sentry/node-core": "10.36.0",
"body-parser": "^1.20.3",
diff --git a/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/ignore-default.js b/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/ignore-default.js
index 623aa8eaa8f7..f15c8b387036 100644
--- a/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/ignore-default.js
+++ b/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/ignore-default.js
@@ -13,10 +13,18 @@ class AI_NoOutputGeneratedError extends Error {
}
}
+class AbortError extends Error {
+ constructor(message) {
+ super(message);
+ this.name = 'AbortError';
+ }
+}
+
setTimeout(() => {
process.stdout.write("I'm alive!");
process.exit(0);
}, 500);
-// This should be ignored by default and not produce a warning
+// These should be ignored by default and not produce a warning
Promise.reject(new AI_NoOutputGeneratedError('Stream aborted'));
+Promise.reject(new AbortError('Stream aborted'));
diff --git a/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts b/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts
index cd0627664ea3..c8570747cf8d 100644
--- a/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts
+++ b/dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts
@@ -179,7 +179,7 @@ test rejection`);
expect(transactionEvent!.contexts!.trace!.span_id).toBe(errorEvent!.contexts!.trace!.span_id);
});
- test('should not warn when AI_NoOutputGeneratedError is rejected (default ignore)', () =>
+ test('should not warn when AI_NoOutputGeneratedError or AbortError is rejected (default ignore)', () =>
new Promise(done => {
expect.assertions(3);
diff --git a/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-media-truncation.mjs b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-media-truncation.mjs
index 73891ad30b6f..7df934404ff9 100644
--- a/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-media-truncation.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-media-truncation.mjs
@@ -49,10 +49,15 @@ async function run() {
const client = instrumentAnthropicAiClient(mockClient);
// Send the image showing the number 3
+ // Put the image in the last message so it doesn't get dropped
await client.messages.create({
model: 'claude-3-haiku-20240307',
max_tokens: 1024,
messages: [
+ {
+ role: 'user',
+ content: 'what number is this?',
+ },
{
role: 'user',
content: [
@@ -66,10 +71,6 @@ async function run() {
},
],
},
- {
- role: 'user',
- content: 'what number is this?',
- },
],
temperature: 0.7,
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-message-truncation.mjs b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-message-truncation.mjs
index 21821cdc5aae..49cee7e3067d 100644
--- a/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-message-truncation.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-message-truncation.mjs
@@ -48,12 +48,11 @@ async function run() {
const client = instrumentAnthropicAiClient(mockClient);
- // Create 3 large messages where:
- // - First 2 messages are very large (will be dropped)
- // - Last message is large but will be truncated to fit within the 20KB limit
+ // Test 1: Given an array of messages, only the last message should be kept.
+ // The last message should be truncated to fit within the 20KB limit.
const largeContent1 = 'A'.repeat(15000); // ~15KB
const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
+ const largeContent3 = 'C'.repeat(25000) + 'D'.repeat(25000); // ~50KB (will be truncated, only C's remain)
await client.messages.create({
model: 'claude-3-haiku-20240307',
@@ -65,6 +64,20 @@ async function run() {
],
temperature: 0.7,
});
+
+ // Test 2: Given an array of messages, only the last message should be kept.
+ // The last message is small, so it should be kept intact.
+ const smallContent = 'This is a small message that fits within the limit';
+ await client.messages.create({
+ model: 'claude-3-haiku-20240307',
+ max_tokens: 100,
+ messages: [
+ { role: 'user', content: largeContent1 },
+ { role: 'assistant', content: largeContent2 },
+ { role: 'user', content: smallContent },
+ ],
+ temperature: 0.7,
+ });
});
}
diff --git a/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..bf04419deb47
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/anthropic/scenario-system-instructions.mjs
@@ -0,0 +1,57 @@
+import Anthropic from '@anthropic-ai/sdk';
+import * as Sentry from '@sentry/node';
+import express from 'express';
+
+function startMockAnthropicServer() {
+ const app = express();
+ app.use(express.json());
+
+ app.post('/anthropic/v1/messages', (req, res) => {
+ res.send({
+ id: 'msg_system_test',
+ type: 'message',
+ model: req.body.model,
+ role: 'assistant',
+ content: [
+ {
+ type: 'text',
+ text: 'Response',
+ },
+ ],
+ stop_reason: 'end_turn',
+ stop_sequence: null,
+ usage: {
+ input_tokens: 10,
+ output_tokens: 5,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockAnthropicServer();
+
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ const client = new Anthropic({
+ apiKey: 'mock-api-key',
+ baseURL: `http://localhost:${server.address().port}/anthropic`,
+ });
+
+ await client.messages.create({
+ model: 'claude-3-5-sonnet-20241022',
+ max_tokens: 1024,
+ system: 'You are a helpful assistant',
+ messages: [{ role: 'user', content: 'Hello' }],
+ });
+ });
+
+ server.close();
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts b/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts
index f62975dafb71..182f4d4ee8c5 100644
--- a/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/anthropic/test.ts
@@ -1,4 +1,27 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ ANTHROPIC_AI_RESPONSE_TIMESTAMP_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_STREAM_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STREAMING_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('Anthropic integration', () => {
@@ -12,63 +35,63 @@ describe('Anthropic integration', () => {
// First span - basic message completion without PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'msg_mock123',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_mock123',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
// Second span - error handling
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
}),
- description: 'messages error-model',
- op: 'gen_ai.messages',
+ description: 'chat error-model',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'internal_error',
}),
// Third span - token counting (no response.text because recordOutputs=false by default)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
// Fourth span - models.retrieve
expect.objectContaining({
data: expect.objectContaining({
- 'anthropic.response.timestamp': '2024-05-08T05:20:00.000Z',
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'claude-3-haiku-20240307',
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
+ [ANTHROPIC_AI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2024-05-08T05:20:00.000Z',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'models',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.models',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
}),
description: 'models claude-3-haiku-20240307',
op: 'gen_ai.models',
@@ -84,24 +107,23 @@ describe('Anthropic integration', () => {
// First span - basic message completion with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.request.messages':
- '[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What is the capital of France?"}]',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.id': 'msg_mock123',
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.text': 'Hello from Anthropic mock!',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the capital of France?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_mock123',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from Anthropic mock!',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -112,8 +134,8 @@ describe('Anthropic integration', () => {
'http.response.header.content-length': 247,
'http.response.status_code': 200,
'otel.kind': 'CLIENT',
- 'sentry.op': 'http.client',
- 'sentry.origin': 'auto.http.otel.node_fetch',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.client',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.otel.node_fetch',
'url.path': '/anthropic/v1/messages',
'url.query': '',
'url.scheme': 'http',
@@ -126,15 +148,15 @@ describe('Anthropic integration', () => {
// Second - error handling with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.messages': '[{"role":"user","content":"This will fail"}]',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.system': 'anthropic',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"This will fail"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
}),
- description: 'messages error-model',
- op: 'gen_ai.messages',
+ description: 'chat error-model',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'internal_error',
}),
@@ -145,8 +167,8 @@ describe('Anthropic integration', () => {
'http.response.header.content-length': 15,
'http.response.status_code': 404,
'otel.kind': 'CLIENT',
- 'sentry.op': 'http.client',
- 'sentry.origin': 'auto.http.otel.node_fetch',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.client',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.otel.node_fetch',
'url.path': '/anthropic/v1/messages',
'url.query': '',
'url.scheme': 'http',
@@ -159,16 +181,16 @@ describe('Anthropic integration', () => {
// Third - token counting with PII (response.text is present because sendDefaultPii=true enables recordOutputs)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the capital of France?"}]',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.text': '15',
- 'gen_ai.system': 'anthropic',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the capital of France?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: '15',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -179,8 +201,8 @@ describe('Anthropic integration', () => {
'http.response.header.content-length': 19,
'http.response.status_code': 200,
'otel.kind': 'CLIENT',
- 'sentry.op': 'http.client',
- 'sentry.origin': 'auto.http.otel.node_fetch',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.client',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.otel.node_fetch',
'url.path': '/anthropic/v1/messages/count_tokens',
'url.query': '',
'url.scheme': 'http',
@@ -193,14 +215,14 @@ describe('Anthropic integration', () => {
// Fourth - models.retrieve with PII
expect.objectContaining({
data: expect.objectContaining({
- 'anthropic.response.timestamp': '2024-05-08T05:20:00.000Z',
- 'gen_ai.operation.name': 'models',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'claude-3-haiku-20240307',
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.system': 'anthropic',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.anthropic',
+ [ANTHROPIC_AI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2024-05-08T05:20:00.000Z',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'models',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.models',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
}),
description: 'models claude-3-haiku-20240307',
op: 'gen_ai.models',
@@ -214,8 +236,8 @@ describe('Anthropic integration', () => {
'http.response.header.content-length': 123,
'http.response.status_code': 200,
'otel.kind': 'CLIENT',
- 'sentry.op': 'http.client',
- 'sentry.origin': 'auto.http.otel.node_fetch',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.client',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.otel.node_fetch',
'url.path': '/anthropic/v1/models/claude-3-haiku-20240307',
'url.query': '',
'url.scheme': 'http',
@@ -229,23 +251,23 @@ describe('Anthropic integration', () => {
// Fifth - messages.create with stream: true
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the capital of France?"}]',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.id': 'msg_stream123',
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': 'Hello from stream!',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the capital of France?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_stream123',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from stream!',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
}),
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -255,8 +277,8 @@ describe('Anthropic integration', () => {
'http.request.method_original': 'POST',
'http.response.status_code': 200,
'otel.kind': 'CLIENT',
- 'sentry.op': 'http.client',
- 'sentry.origin': 'auto.http.otel.node_fetch',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.client',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.otel.node_fetch',
'url.path': '/anthropic/v1/messages',
'url.query': '',
'url.scheme': 'http',
@@ -270,12 +292,12 @@ describe('Anthropic integration', () => {
// Sixth - messages.stream
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.stream': true,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
}),
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -288,27 +310,27 @@ describe('Anthropic integration', () => {
// Check that custom options are respected
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
}),
}),
// Check token counting with options
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': '15', // Present because recordOutputs=true is set in options
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: '15', // Present because recordOutputs=true is set in options
}),
- op: 'gen_ai.messages',
+ op: 'gen_ai.chat',
}),
// Check models.retrieve with options
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'claude-3-haiku-20240307',
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'models',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
}),
op: 'gen_ai.models',
description: 'models claude-3-haiku-20240307',
@@ -379,53 +401,53 @@ describe('Anthropic integration', () => {
spans: expect.arrayContaining([
// messages.create with stream: true
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.system': 'anthropic',
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'msg_stream_1',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.finish_reasons': '["end_turn"]',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_stream_1',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["end_turn"]',
}),
}),
// messages.stream
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.system': 'anthropic',
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'msg_stream_1',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_stream_1',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
}),
// messages.stream with redundant stream: true param
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.system': 'anthropic',
- 'gen_ai.operation.name': 'messages',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.id': 'msg_stream_1',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'msg_stream_1',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
}),
]),
@@ -435,28 +457,28 @@ describe('Anthropic integration', () => {
transaction: 'main',
spans: expect.arrayContaining([
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.response.streaming': true,
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
// streamed text concatenated
- 'gen_ai.response.text': 'Hello from stream!',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from stream!',
}),
}),
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': 'Hello from stream!',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from stream!',
}),
}),
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307 stream-response',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307 stream-response',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': 'Hello from stream!',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from stream!',
}),
}),
]),
@@ -487,10 +509,10 @@ describe('Anthropic integration', () => {
transaction: {
spans: expect.arrayContaining([
expect.objectContaining({
- op: 'gen_ai.messages',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.request.available_tools': EXPECTED_TOOLS_JSON,
- 'gen_ai.response.tool_calls': EXPECTED_TOOL_CALLS_JSON,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_TOOLS_JSON,
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: EXPECTED_TOOL_CALLS_JSON,
}),
}),
]),
@@ -515,10 +537,10 @@ describe('Anthropic integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
description: expect.stringContaining('stream-response'),
- op: 'gen_ai.messages',
+ op: 'gen_ai.chat',
data: expect.objectContaining({
- 'gen_ai.request.available_tools': EXPECTED_TOOLS_JSON,
- 'gen_ai.response.tool_calls': EXPECTED_TOOL_CALLS_JSON,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_TOOLS_JSON,
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: EXPECTED_TOOL_CALLS_JSON,
}),
}),
]),
@@ -535,45 +557,45 @@ describe('Anthropic integration', () => {
spans: expect.arrayContaining([
// Error with messages.create on stream initialization
expect.objectContaining({
- description: 'messages error-stream-init stream-response',
- op: 'gen_ai.messages',
+ description: 'chat error-stream-init stream-response',
+ op: 'gen_ai.chat',
status: 'internal_error', // Actual status coming from the instrumentation
data: expect.objectContaining({
- 'gen_ai.request.model': 'error-stream-init',
- 'gen_ai.request.stream': true,
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-stream-init',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
}),
}),
// Error with messages.stream on stream initialization
expect.objectContaining({
- description: 'messages error-stream-init stream-response',
- op: 'gen_ai.messages',
+ description: 'chat error-stream-init stream-response',
+ op: 'gen_ai.chat',
status: 'internal_error', // Actual status coming from the instrumentation
data: expect.objectContaining({
- 'gen_ai.request.model': 'error-stream-init',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-stream-init',
}),
}),
// Error midway with messages.create on streaming - note: The stream is started successfully
// so we get a successful span with the content that was streamed before the error
expect.objectContaining({
- description: 'messages error-stream-midway stream-response',
- op: 'gen_ai.messages',
+ description: 'chat error-stream-midway stream-response',
+ op: 'gen_ai.chat',
status: 'ok',
data: expect.objectContaining({
- 'gen_ai.request.model': 'error-stream-midway',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': 'This stream will ', // We received some data before error
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-stream-midway',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'This stream will ', // We received some data before error
}),
}),
// Error midway with messages.stream - same behavior, we get a span with the streamed data
expect.objectContaining({
- description: 'messages error-stream-midway stream-response',
- op: 'gen_ai.messages',
+ description: 'chat error-stream-midway stream-response',
+ op: 'gen_ai.chat',
status: 'ok',
data: expect.objectContaining({
- 'gen_ai.request.model': 'error-stream-midway',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': 'This stream will ', // We received some data before error
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-stream-midway',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'This stream will ', // We received some data before error
}),
}),
]),
@@ -591,11 +613,11 @@ describe('Anthropic integration', () => {
spans: expect.arrayContaining([
// Invalid tool format error
expect.objectContaining({
- description: 'messages invalid-format',
- op: 'gen_ai.messages',
+ description: 'chat invalid-format',
+ op: 'gen_ai.chat',
status: 'internal_error',
data: expect.objectContaining({
- 'gen_ai.request.model': 'invalid-format',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'invalid-format',
}),
}),
// Model retrieval error
@@ -604,17 +626,17 @@ describe('Anthropic integration', () => {
op: 'gen_ai.models',
status: 'internal_error',
data: expect.objectContaining({
- 'gen_ai.request.model': 'nonexistent-model',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'nonexistent-model',
}),
}),
// Successful tool usage (for comparison)
expect.objectContaining({
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
status: 'ok',
data: expect.objectContaining({
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.response.tool_calls': expect.stringContaining('tool_ok_1'),
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.stringContaining('tool_ok_1'),
}),
}),
]),
@@ -638,18 +660,39 @@ describe('Anthropic integration', () => {
transaction: {
transaction: 'main',
spans: expect.arrayContaining([
+ // First call: Last message is large and gets truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 3,
// Messages should be present (truncation happened) and should be a JSON array
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.anthropic',
+ status: 'ok',
+ }),
+ // Second call: Last message is small and kept without truncation
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 3,
+ // Small message should be kept intact
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
+ { role: 'user', content: 'This is a small message that fits within the limit' },
+ ]),
+ }),
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -672,12 +715,14 @@ describe('Anthropic integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'messages',
- 'sentry.op': 'gen_ai.messages',
- 'sentry.origin': 'auto.ai.anthropic',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-haiku-20240307',
- 'gen_ai.request.messages': JSON.stringify([
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.anthropic',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-haiku-20240307',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ // Only the last message (with filtered media) should be kept
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
{
role: 'user',
content: [
@@ -691,14 +736,10 @@ describe('Anthropic integration', () => {
},
],
},
- {
- role: 'user',
- content: 'what number is this?',
- },
]),
}),
- description: 'messages claude-3-haiku-20240307',
- op: 'gen_ai.messages',
+ description: 'chat claude-3-haiku-20240307',
+ op: 'gen_ai.chat',
origin: 'auto.ai.anthropic',
status: 'ok',
}),
@@ -709,4 +750,32 @@ describe('Anthropic integration', () => {
.completed();
});
});
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-message-truncation.mjs b/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-message-truncation.mjs
index bb24b6835db2..595728e06531 100644
--- a/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-message-truncation.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-message-truncation.mjs
@@ -43,12 +43,11 @@ async function run() {
const client = instrumentGoogleGenAIClient(mockClient);
- // Create 3 large messages where:
- // - First 2 messages are very large (will be dropped)
- // - Last message is large but will be truncated to fit within the 20KB limit
+ // Test 1: Given an array of messages, only the last message should be kept.
+ // That last message should be truncated to fit within the 20KB limit.
const largeContent1 = 'A'.repeat(15000); // ~15KB
const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
+ const largeContent3 = 'C'.repeat(25000) + 'D'.repeat(25000); // ~50KB (will be truncated, only C's remain)
await client.models.generateContent({
model: 'gemini-1.5-flash',
@@ -63,6 +62,23 @@ async function run() {
{ role: 'user', parts: [{ text: largeContent3 }] },
],
});
+
+ // Test 2: Given an array of messages, only the last message should be kept.
+ // The last message is small, so it should be kept intact.
+ const smallContent = 'This is a small message that fits within the limit';
+ await client.models.generateContent({
+ model: 'gemini-1.5-flash',
+ config: {
+ temperature: 0.7,
+ topP: 0.9,
+ maxOutputTokens: 100,
+ },
+ contents: [
+ { role: 'user', parts: [{ text: largeContent1 }] },
+ { role: 'model', parts: [{ text: largeContent2 }] },
+ { role: 'user', parts: [{ text: smallContent }] },
+ ],
+ });
});
}
diff --git a/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..d4081d052968
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/google-genai/scenario-system-instructions.mjs
@@ -0,0 +1,50 @@
+import { instrumentGoogleGenAIClient } from '@sentry/core';
+import * as Sentry from '@sentry/node';
+
+class MockGoogleGenAI {
+ constructor(config) {
+ this.apiKey = config.apiKey;
+ this.models = {
+ generateContent: async params => {
+ await new Promise(resolve => setTimeout(resolve, 10));
+ return {
+ response: {
+ text: () => 'Response',
+ modelVersion: params.model,
+ usageMetadata: {
+ promptTokenCount: 10,
+ candidatesTokenCount: 5,
+ totalTokenCount: 15,
+ },
+ candidates: [
+ {
+ content: {
+ parts: [{ text: 'Response' }],
+ role: 'model',
+ },
+ finishReason: 'STOP',
+ },
+ ],
+ },
+ };
+ },
+ };
+ }
+}
+
+async function run() {
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ const mockClient = new MockGoogleGenAI({ apiKey: 'mock-api-key' });
+ const client = instrumentGoogleGenAIClient(mockClient);
+
+ await client.models.generateContent({
+ model: 'gemini-1.5-flash',
+ config: {
+ systemInstruction: 'You are a helpful assistant',
+ },
+ contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
+ });
+ });
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts b/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts
index 486b71dfedc7..89130a7eb425 100644
--- a/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/google-genai/test.ts
@@ -1,4 +1,26 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_REQUEST_TOP_P_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STREAMING_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('Google GenAI integration', () => {
@@ -12,14 +34,14 @@ describe('Google GenAI integration', () => {
// First span - chats.create
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 150,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
},
description: 'chat gemini-1.5-pro create',
op: 'gen_ai.chat',
@@ -29,14 +51,14 @@ describe('Google GenAI integration', () => {
// Second span - chat.sendMessage (should get model from context)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro', // Should get from chat context
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro', // Should get from chat context
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
},
description: 'chat gemini-1.5-pro',
op: 'gen_ai.chat',
@@ -46,34 +68,34 @@ describe('Google GenAI integration', () => {
// Third span - models.generateContent
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
},
- description: 'models gemini-1.5-flash',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Fourth span - error handling
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
},
- description: 'models error-model',
- op: 'gen_ai.models',
+ description: 'generate_content error-model',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
@@ -86,17 +108,15 @@ describe('Google GenAI integration', () => {
// First span - chats.create with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 150,
- 'gen_ai.request.messages': expect.stringMatching(
- /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/,
- ), // Should include history when recordInputs: true
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","parts":[{"text":"Hello, how are you?"}]}]',
}),
description: 'chat gemini-1.5-pro create',
op: 'gen_ai.chat',
@@ -106,16 +126,16 @@ describe('Google GenAI integration', () => {
// Second span - chat.sendMessage with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.messages': expect.any(String), // Should include message when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include message when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
}),
description: 'chat gemini-1.5-pro',
op: 'gen_ai.chat',
@@ -125,37 +145,37 @@ describe('Google GenAI integration', () => {
// Third span - models.generateContent with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.request.messages': expect.any(String), // Should include contents when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
}),
- description: 'models gemini-1.5-flash',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Fourth span - error handling with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages': expect.any(String), // Should include contents when recordInputs: true
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true
}),
- description: 'models error-model',
- op: 'gen_ai.models',
+ description: 'generate_content error-model',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
@@ -168,8 +188,8 @@ describe('Google GenAI integration', () => {
// Check that custom options are respected
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
}),
description: expect.not.stringContaining('stream-response'), // Non-streaming span
}),
@@ -215,64 +235,64 @@ describe('Google GenAI integration', () => {
// Non-streaming with tools
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-2.0-flash-001',
- 'gen_ai.request.available_tools': EXPECTED_AVAILABLE_TOOLS_JSON,
- 'gen_ai.request.messages': expect.any(String), // Should include contents
- 'gen_ai.response.text': expect.any(String), // Should include response text
- 'gen_ai.response.tool_calls': expect.any(String), // Should include tool calls
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 8,
- 'gen_ai.usage.total_tokens': 23,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-2.0-flash-001',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String), // Should include tool calls
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 23,
}),
- description: 'models gemini-2.0-flash-001',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-2.0-flash-001',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Streaming with tools
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-2.0-flash-001',
- 'gen_ai.request.available_tools': EXPECTED_AVAILABLE_TOOLS_JSON,
- 'gen_ai.request.messages': expect.any(String), // Should include contents
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.text': expect.any(String), // Should include response text
- 'gen_ai.response.tool_calls': expect.any(String), // Should include tool calls
- 'gen_ai.response.id': 'mock-response-tools-id',
- 'gen_ai.response.model': 'gemini-2.0-flash-001',
- 'gen_ai.usage.input_tokens': 12,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 22,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-2.0-flash-001',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String), // Should include tool calls
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-tools-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-2.0-flash-001',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22,
}),
- description: 'models gemini-2.0-flash-001 stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-2.0-flash-001 stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Without tools for comparison
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-2.0-flash-001',
- 'gen_ai.request.messages': expect.any(String), // Should include contents
- 'gen_ai.response.text': expect.any(String), // Should include response text
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-2.0-flash-001',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
}),
- description: 'models gemini-2.0-flash-001',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-2.0-flash-001',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
@@ -291,38 +311,38 @@ describe('Google GenAI integration', () => {
// First span - models.generateContentStream (streaming)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.id': 'mock-response-streaming-id',
- 'gen_ai.response.model': 'gemini-1.5-pro',
- 'gen_ai.response.finish_reasons': '["STOP"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 22,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-streaming-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["STOP"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22,
}),
- description: 'models gemini-1.5-flash stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Second span - chat.create
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 150,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
}),
description: 'chat gemini-1.5-pro create',
op: 'gen_ai.chat',
@@ -332,14 +352,14 @@ describe('Google GenAI integration', () => {
// Third span - chat.sendMessageStream (streaming)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.id': 'mock-response-streaming-id',
- 'gen_ai.response.model': 'gemini-1.5-pro',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-streaming-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
}),
description: 'chat gemini-1.5-pro stream-response',
op: 'gen_ai.chat',
@@ -349,24 +369,24 @@ describe('Google GenAI integration', () => {
// Fourth span - blocked content streaming
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
}),
- description: 'models blocked-model stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content blocked-model stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
// Fifth span - error handling for streaming
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
}),
- description: 'models error-model stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content error-model stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
@@ -379,39 +399,39 @@ describe('Google GenAI integration', () => {
// First span - models.generateContentStream (streaming) with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.request.messages': expect.any(String), // Should include contents when recordInputs: true
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.id': 'mock-response-streaming-id',
- 'gen_ai.response.model': 'gemini-1.5-pro',
- 'gen_ai.response.finish_reasons': '["STOP"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 22,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-streaming-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["STOP"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22,
}),
- description: 'models gemini-1.5-flash stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
// Second span - chat.create
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.top_p': 0.9,
- 'gen_ai.request.max_tokens': 150,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
}),
description: 'chat gemini-1.5-pro create',
op: 'gen_ai.chat',
@@ -421,19 +441,19 @@ describe('Google GenAI integration', () => {
// Third span - chat.sendMessageStream (streaming) with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-pro',
- 'gen_ai.request.messages': expect.any(String), // Should include message when recordInputs: true
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.id': 'mock-response-streaming-id',
- 'gen_ai.response.model': 'gemini-1.5-pro',
- 'gen_ai.response.finish_reasons': '["STOP"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 22,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include message when recordInputs: true
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'mock-response-streaming-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gemini-1.5-pro',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["STOP"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 22,
}),
description: 'chat gemini-1.5-pro stream-response',
op: 'gen_ai.chat',
@@ -443,33 +463,33 @@ describe('Google GenAI integration', () => {
// Fourth span - blocked content stream with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'blocked-model',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.messages': expect.any(String), // Should include contents when recordInputs: true
- 'gen_ai.response.streaming': true,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'blocked-model',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
}),
- description: 'models blocked-model stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content blocked-model stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
// Fifth span - error handling for streaming with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.messages': expect.any(String), // Should include contents when recordInputs: true
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include contents when recordInputs: true
}),
- description: 'models error-model stream-response',
- op: 'gen_ai.models',
+ description: 'generate_content error-model stream-response',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'internal_error',
}),
@@ -504,23 +524,75 @@ describe('Google GenAI integration', () => {
transaction: {
transaction: 'main',
spans: expect.arrayContaining([
+ // First call: The last message is large and gets truncated (only C's remain; the D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'models',
- 'sentry.op': 'gen_ai.models',
- 'sentry.origin': 'auto.ai.google_genai',
- 'gen_ai.system': 'google_genai',
- 'gen_ai.request.model': 'gemini-1.5-flash',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
// Messages should be present (truncation happened) and should be a JSON array with parts
- 'gen_ai.request.messages': expect.stringMatching(
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(
/^\[\{"role":"user","parts":\[\{"text":"C+"\}\]\}\]$/,
),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 3,
}),
- description: 'models gemini-1.5-flash',
- op: 'gen_ai.models',
+ description: 'generate_content gemini-1.5-flash',
+ op: 'gen_ai.generate_content',
origin: 'auto.ai.google_genai',
status: 'ok',
}),
+ // Second call: Last message is small and kept without truncation
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.google_genai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'google_genai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gemini-1.5-flash',
+ // Small message should be kept intact
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
+ {
+ role: 'user',
+ parts: [{ text: 'This is a small message that fits within the limit' }],
+ },
+ ]),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 3,
+ }),
+ description: 'generate_content gemini-1.5-flash',
+ op: 'gen_ai.generate_content',
+ origin: 'auto.ai.google_genai',
+ status: 'ok',
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
]),
},
})
diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-message-truncation.mjs b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-message-truncation.mjs
index 6dafe8572cec..9e5e59f264ca 100644
--- a/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-message-truncation.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-message-truncation.mjs
@@ -51,17 +51,27 @@ async function run() {
const largeContent1 = 'A'.repeat(15000); // ~15KB
const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
+ const largeContent3 = 'C'.repeat(25000) + 'D'.repeat(25000); // ~50KB (will be truncated, only C's remain)
- // Create one very large string that gets truncated to only include Cs
- await model.invoke(largeContent3 + largeContent2);
+ // Test 1: Create one very large string that gets truncated to only include Cs
+ await model.invoke(largeContent3);
- // Create an array of messages that gets truncated to only include the last message (result should again contain only Cs)
+ // Test 2: Create an array of messages that gets truncated to only the last message.
+ // The last message should itself be truncated to fit within the 20KB limit (the result should again contain only Cs)
await model.invoke([
{ role: 'system', content: largeContent1 },
{ role: 'user', content: largeContent2 },
{ role: 'user', content: largeContent3 },
]);
+
+ // Test 3: Given an array of messages, only the last message should be kept
+ // The last message is small, so it should be kept intact
+ const smallContent = 'This is a small message that fits within the limit';
+ await model.invoke([
+ { role: 'system', content: largeContent1 },
+ { role: 'user', content: largeContent2 },
+ { role: 'user', content: smallContent },
+ ]);
});
await Sentry.flush(2000);
diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..42382cb8262b
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/langchain/scenario-system-instructions.mjs
@@ -0,0 +1,61 @@
+import { ChatAnthropic } from '@langchain/anthropic';
+import * as Sentry from '@sentry/node';
+import express from 'express';
+
+function startMockServer() {
+ const app = express();
+ app.use(express.json());
+
+ app.post('/v1/messages', (req, res) => {
+ res.json({
+ id: 'msg_test123',
+ type: 'message',
+ role: 'assistant',
+ content: [
+ {
+ type: 'text',
+ text: 'Response',
+ },
+ ],
+ model: req.body.model,
+ stop_reason: 'end_turn',
+ stop_sequence: null,
+ usage: {
+ input_tokens: 10,
+ output_tokens: 5,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockServer();
+ const baseUrl = `http://localhost:${server.address().port}`;
+
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ const model = new ChatAnthropic({
+ model: 'claude-3-5-sonnet-20241022',
+ apiKey: 'mock-api-key',
+ clientOptions: {
+ baseURL: baseUrl,
+ },
+ });
+
+ await model.invoke([
+ { role: 'system', content: 'You are a helpful assistant' },
+ { role: 'user', content: 'Hello' },
+ ]);
+ });
+
+ await Sentry.flush(2000);
+
+ server.close();
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts b/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts
index e75e0ec7f5da..1ff46919f399 100644
--- a/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/langchain/test.ts
@@ -1,4 +1,24 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_REQUEST_TOP_P_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('LangChain integration', () => {
@@ -12,19 +32,19 @@ describe('LangChain integration', () => {
// First span - chat model with claude-3-5-sonnet
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -34,20 +54,20 @@ describe('LangChain integration', () => {
// Second span - chat model with claude-3-opus
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-opus-20240229',
- 'gen_ai.request.temperature': 0.9,
- 'gen_ai.request.top_p': 0.95,
- 'gen_ai.request.max_tokens': 200,
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-opus-20240229',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.95,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 200,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-opus-20240229',
op: 'gen_ai.chat',
@@ -57,11 +77,11 @@ describe('LangChain integration', () => {
// Third span - error handling
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
}),
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -77,21 +97,21 @@ describe('LangChain integration', () => {
// First span - chat model with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -101,22 +121,22 @@ describe('LangChain integration', () => {
// Second span - chat model with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-opus-20240229',
- 'gen_ai.request.temperature': 0.9,
- 'gen_ai.request.top_p': 0.95,
- 'gen_ai.request.max_tokens': 200,
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-opus-20240229',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.95,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 200,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
description: 'chat claude-3-opus-20240229',
op: 'gen_ai.chat',
@@ -126,12 +146,12 @@ describe('LangChain integration', () => {
// Third span - error handling with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
}),
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -166,20 +186,20 @@ describe('LangChain integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 150,
- 'gen_ai.usage.input_tokens': 20,
- 'gen_ai.usage.output_tokens': 30,
- 'gen_ai.usage.total_tokens': 50,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': 'tool_use',
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 50,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: 'tool_use',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -198,30 +218,55 @@ describe('LangChain integration', () => {
const EXPECTED_TRANSACTION_MESSAGE_TRUNCATION = {
transaction: 'main',
spans: expect.arrayContaining([
+ // First call: String input truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
// Messages should be present and should include truncated string input (contains only Cs)
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
origin: 'auto.ai.langchain',
status: 'ok',
}),
+ // Second call: Array input, last message truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.any(String),
// Messages should be present (truncation happened) and should be a JSON array of a single index (contains only Cs)
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ }),
+ description: 'chat claude-3-5-sonnet-20241022',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.langchain',
+ status: 'ok',
+ }),
+ // Third call: Last message is small and kept without truncation
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.any(String),
+ // Small message should be kept intact
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
+ { role: 'user', content: 'This is a small message that fits within the limit' },
+ ]),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -265,8 +310,7 @@ describe('LangChain integration', () => {
// First call: Direct Anthropic call made BEFORE LangChain import
// This should have Anthropic instrumentation (origin: 'auto.ai.anthropic')
const firstAnthropicSpan = spans.find(
- span =>
- span.description === 'messages claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
+ span => span.description === 'chat claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
);
// Second call: LangChain call
@@ -279,8 +323,7 @@ describe('LangChain integration', () => {
// This should NOT have Anthropic instrumentation (skip works correctly)
// Count how many Anthropic spans we have - should be exactly 1
const anthropicSpans = spans.filter(
- span =>
- span.description === 'messages claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
+ span => span.description === 'chat claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
);
// Verify the edge case limitation:
@@ -305,4 +348,32 @@ describe('LangChain integration', () => {
// This test fails on CJS because we use dynamic imports to simulate importing LangChain after the Anthropic client is created
{ failsOnCjs: true },
);
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/v1/scenario-message-truncation.mjs b/dev-packages/node-integration-tests/suites/tracing/langchain/v1/scenario-message-truncation.mjs
index 6dafe8572cec..9e5e59f264ca 100644
--- a/dev-packages/node-integration-tests/suites/tracing/langchain/v1/scenario-message-truncation.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/langchain/v1/scenario-message-truncation.mjs
@@ -51,17 +51,27 @@ async function run() {
const largeContent1 = 'A'.repeat(15000); // ~15KB
const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
+ const largeContent3 = 'C'.repeat(25000) + 'D'.repeat(25000); // ~50KB (will be truncated, only C's remain)
- // Create one very large string that gets truncated to only include Cs
- await model.invoke(largeContent3 + largeContent2);
+ // Test 1: Create one very large string that gets truncated to only include Cs
+ await model.invoke(largeContent3);
- // Create an array of messages that gets truncated to only include the last message (result should again contain only Cs)
+ // Test 2: Create an array of messages that gets truncated to only include the last message
+ // The last message should be truncated to fit within the 20KB limit (result should again contain only Cs)
await model.invoke([
{ role: 'system', content: largeContent1 },
{ role: 'user', content: largeContent2 },
{ role: 'user', content: largeContent3 },
]);
+
+ // Test 3: Given an array of messages, only the last message should be kept
+ // The last message is small, so it should be kept intact
+ const smallContent = 'This is a small message that fits within the limit';
+ await model.invoke([
+ { role: 'system', content: largeContent1 },
+ { role: 'user', content: largeContent2 },
+ { role: 'user', content: smallContent },
+ ]);
});
await Sentry.flush(2000);
diff --git a/dev-packages/node-integration-tests/suites/tracing/langchain/v1/test.ts b/dev-packages/node-integration-tests/suites/tracing/langchain/v1/test.ts
index 3e6b147d4e0d..92903ea547b1 100644
--- a/dev-packages/node-integration-tests/suites/tracing/langchain/v1/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/langchain/v1/test.ts
@@ -1,4 +1,24 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_REQUEST_TOP_P_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { conditionalTest } from '../../../../utils';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../../utils/runner';
@@ -15,19 +35,19 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// First span - chat model with claude-3-5-sonnet
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -37,20 +57,20 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Second span - chat model with claude-3-opus
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-opus-20240229',
- 'gen_ai.request.temperature': 0.9,
- 'gen_ai.request.top_p': 0.95,
- 'gen_ai.request.max_tokens': 200,
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-opus-20240229',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.95,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 200,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-opus-20240229',
op: 'gen_ai.chat',
@@ -60,13 +80,13 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Third span - error handling
// expect.objectContaining({
// data: expect.objectContaining({
- // 'gen_ai.operation.name': 'chat',
- // 'sentry.op': 'gen_ai.chat',
- // 'sentry.origin': 'auto.ai.langchain',
- // 'gen_ai.system': 'anthropic',
- // 'gen_ai.request.model': 'error-model',
+ // [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ // [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ // [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
// }),
- // description: 'chat error-model',
+ // description: 'invoke_agent error-model',
// op: 'gen_ai.chat',
// origin: 'auto.ai.langchain',
// status: 'internal_error',
@@ -80,21 +100,21 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// First span - chat model with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -104,22 +124,22 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Second span - chat model with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-opus-20240229',
- 'gen_ai.request.temperature': 0.9,
- 'gen_ai.request.top_p': 0.95,
- 'gen_ai.request.max_tokens': 200,
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response when recordOutputs: true
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-opus-20240229',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.9,
+ [GEN_AI_REQUEST_TOP_P_ATTRIBUTE]: 0.95,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 200,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response when recordOutputs: true
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
}),
description: 'chat claude-3-opus-20240229',
op: 'gen_ai.chat',
@@ -129,14 +149,14 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Third span - error handling with PII
// expect.objectContaining({
// data: expect.objectContaining({
- // 'gen_ai.operation.name': 'chat',
- // 'sentry.op': 'gen_ai.chat',
- // 'sentry.origin': 'auto.ai.langchain',
- // 'gen_ai.system': 'anthropic',
- // 'gen_ai.request.model': 'error-model',
- // 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
+ // [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ // [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ // [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ // [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
// }),
- // description: 'chat error-model',
+ // description: 'invoke_agent error-model',
// op: 'gen_ai.chat',
// origin: 'auto.ai.langchain',
// status: 'internal_error',
@@ -193,20 +213,20 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 150,
- 'gen_ai.usage.input_tokens': 20,
- 'gen_ai.usage.output_tokens': 30,
- 'gen_ai.usage.total_tokens': 50,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': expect.any(String),
- 'gen_ai.response.stop_reason': 'tool_use',
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 150,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 50,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: 'tool_use',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -241,30 +261,56 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
const EXPECTED_TRANSACTION_MESSAGE_TRUNCATION = {
transaction: 'main',
spans: expect.arrayContaining([
+ // First call: String input truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
// Messages should be present and should include truncated string input (contains only Cs)
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
origin: 'auto.ai.langchain',
status: 'ok',
}),
+ // Second call: Array input, last message truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'anthropic',
- 'gen_ai.request.model': 'claude-3-5-sonnet-20241022',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.stringMatching(/^\[\{"type":"text","content":"A+"\}\]$/),
// Messages should be present (truncation happened) and should be a JSON array of a single index (contains only Cs)
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ }),
+ description: 'chat claude-3-5-sonnet-20241022',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.langchain',
+ status: 'ok',
+ }),
+ // Third call: Last message is small and kept without truncation
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'anthropic',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'claude-3-5-sonnet-20241022',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.stringMatching(/^\[\{"type":"text","content":"A+"\}\]$/),
+
+ // Small message should be kept intact
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
+ { role: 'user', content: 'This is a small message that fits within the limit' },
+ ]),
}),
description: 'chat claude-3-5-sonnet-20241022',
op: 'gen_ai.chat',
@@ -315,8 +361,7 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// First call: Direct Anthropic call made BEFORE LangChain import
// This should have Anthropic instrumentation (origin: 'auto.ai.anthropic')
const firstAnthropicSpan = spans.find(
- span =>
- span.description === 'messages claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
+ span => span.description === 'chat claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
);
// Second call: LangChain call
@@ -329,8 +374,7 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// This should NOT have Anthropic instrumentation (skip works correctly)
// Count how many Anthropic spans we have - should be exactly 1
const anthropicSpans = spans.filter(
- span =>
- span.description === 'messages claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
+ span => span.description === 'chat claude-3-5-sonnet-20241022' && span.origin === 'auto.ai.anthropic',
);
// Verify the edge case limitation:
@@ -368,19 +412,19 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// First span - initChatModel with gpt-4o
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4o',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.max_tokens': 100,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'gpt-4o',
- 'gen_ai.response.stop_reason': 'stop',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4o',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE]: 100,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4o',
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: 'stop',
}),
description: 'chat gpt-4o',
op: 'gen_ai.chat',
@@ -390,18 +434,18 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Second span - initChatModel with gpt-3.5-turbo
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.langchain',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.5,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.stop_reason': 'stop',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.5,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_STOP_REASON_ATTRIBUTE]: 'stop',
}),
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
@@ -411,13 +455,13 @@ conditionalTest({ min: 20 })('LangChain integration (v1)', () => {
// Third span - error handling
// expect.objectContaining({
// data: expect.objectContaining({
- // 'gen_ai.operation.name': 'chat',
- // 'sentry.op': 'gen_ai.chat',
- // 'sentry.origin': 'auto.ai.langchain',
- // 'gen_ai.system': 'openai',
- // 'gen_ai.request.model': 'error-model',
+ // [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ // [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langchain',
+ // [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ // [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
// }),
- // description: 'chat error-model',
+ // description: 'invoke_agent error-model',
// op: 'gen_ai.chat',
// origin: 'auto.ai.langchain',
// status: 'internal_error',
diff --git a/dev-packages/node-integration-tests/suites/tracing/langgraph/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/langgraph/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..2d0887dca6d5
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/langgraph/scenario-system-instructions.mjs
@@ -0,0 +1,43 @@
+import { END, MessagesAnnotation, START, StateGraph } from '@langchain/langgraph';
+import * as Sentry from '@sentry/node';
+
+async function run() {
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ const mockLlm = () => {
+ return {
+ messages: [
+ {
+ role: 'assistant',
+ content: 'Response',
+ response_metadata: {
+ model_name: 'mock-model',
+ finish_reason: 'stop',
+ tokenUsage: {
+ promptTokens: 10,
+ completionTokens: 5,
+ totalTokens: 15,
+ },
+ },
+ },
+ ],
+ };
+ };
+
+ const graph = new StateGraph(MessagesAnnotation)
+ .addNode('agent', mockLlm)
+ .addEdge(START, 'agent')
+ .addEdge('agent', END)
+ .compile({ name: 'test-agent' });
+
+ await graph.invoke({
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant' },
+ { role: 'user', content: 'Hello' },
+ ],
+ });
+ });
+
+ await Sentry.flush(2000);
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/langgraph/test.ts b/dev-packages/node-integration-tests/suites/tracing/langgraph/test.ts
index bafcdf49a32c..5905d592ee7a 100644
--- a/dev-packages/node-integration-tests/suites/tracing/langgraph/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/langgraph/test.ts
@@ -1,4 +1,21 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_AGENT_NAME_ATTRIBUTE,
+ GEN_AI_CONVERSATION_ID_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_PIPELINE_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('LangGraph integration', () => {
@@ -12,10 +29,10 @@ describe('LangGraph integration', () => {
// create_agent span
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
},
description: 'create_agent weather_assistant',
op: 'gen_ai.create_agent',
@@ -25,11 +42,11 @@ describe('LangGraph integration', () => {
// First invoke_agent span
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
- 'gen_ai.pipeline.name': 'weather_assistant',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'weather_assistant',
}),
description: 'invoke_agent weather_assistant',
op: 'gen_ai.invoke_agent',
@@ -39,11 +56,11 @@ describe('LangGraph integration', () => {
// Second invoke_agent span
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
- 'gen_ai.pipeline.name': 'weather_assistant',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'weather_assistant',
}),
description: 'invoke_agent weather_assistant',
op: 'gen_ai.invoke_agent',
@@ -59,10 +76,10 @@ describe('LangGraph integration', () => {
// create_agent span (PII enabled doesn't affect this span)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
},
description: 'create_agent weather_assistant',
op: 'gen_ai.create_agent',
@@ -72,12 +89,12 @@ describe('LangGraph integration', () => {
// First invoke_agent span with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
- 'gen_ai.pipeline.name': 'weather_assistant',
- 'gen_ai.request.messages': expect.stringContaining('What is the weather today?'),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('What is the weather today?'),
}),
description: 'invoke_agent weather_assistant',
op: 'gen_ai.invoke_agent',
@@ -87,12 +104,12 @@ describe('LangGraph integration', () => {
// Second invoke_agent span with PII and multiple messages
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'weather_assistant',
- 'gen_ai.pipeline.name': 'weather_assistant',
- 'gen_ai.request.messages': expect.stringContaining('Tell me about the weather'),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'weather_assistant',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('Tell me about the weather'),
}),
description: 'invoke_agent weather_assistant',
op: 'gen_ai.invoke_agent',
@@ -108,10 +125,10 @@ describe('LangGraph integration', () => {
// create_agent span for first graph (no tool calls)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'tool_agent',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'tool_agent',
},
description: 'create_agent tool_agent',
op: 'gen_ai.create_agent',
@@ -121,19 +138,19 @@ describe('LangGraph integration', () => {
// invoke_agent span with tools available but not called
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'tool_agent',
- 'gen_ai.pipeline.name': 'tool_agent',
- 'gen_ai.request.available_tools': expect.stringContaining('get_weather'),
- 'gen_ai.request.messages': expect.stringContaining('What is the weather?'),
- 'gen_ai.response.model': 'gpt-4-0613',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.text': expect.stringContaining('Response without calling tools'),
- 'gen_ai.usage.input_tokens': 25,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'tool_agent',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'tool_agent',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: expect.stringContaining('get_weather'),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('What is the weather?'),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4-0613',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.stringContaining('Response without calling tools'),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
}),
description: 'invoke_agent tool_agent',
op: 'gen_ai.invoke_agent',
@@ -143,10 +160,10 @@ describe('LangGraph integration', () => {
// create_agent span for second graph (with tool calls)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'tool_calling_agent',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'tool_calling_agent',
},
description: 'create_agent tool_calling_agent',
op: 'gen_ai.create_agent',
@@ -156,21 +173,21 @@ describe('LangGraph integration', () => {
// invoke_agent span with tool calls and execution
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'tool_calling_agent',
- 'gen_ai.pipeline.name': 'tool_calling_agent',
- 'gen_ai.request.available_tools': expect.stringContaining('get_weather'),
- 'gen_ai.request.messages': expect.stringContaining('San Francisco'),
- 'gen_ai.response.model': 'gpt-4-0613',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.text': expect.stringMatching(/"role":"tool"/),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'tool_calling_agent',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'tool_calling_agent',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: expect.stringContaining('get_weather'),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringContaining('San Francisco'),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4-0613',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.stringMatching(/"role":"tool"/),
// Verify tool_calls are captured
- 'gen_ai.response.tool_calls': expect.stringContaining('get_weather'),
- 'gen_ai.usage.input_tokens': 80,
- 'gen_ai.usage.output_tokens': 40,
- 'gen_ai.usage.total_tokens': 120,
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.stringContaining('get_weather'),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 80,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 120,
}),
description: 'invoke_agent tool_calling_agent',
op: 'gen_ai.invoke_agent',
@@ -213,10 +230,10 @@ describe('LangGraph integration', () => {
// create_agent span
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'create_agent',
- 'sentry.op': 'gen_ai.create_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'thread_test_agent',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.create_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'thread_test_agent',
},
description: 'create_agent thread_test_agent',
op: 'gen_ai.create_agent',
@@ -226,13 +243,13 @@ describe('LangGraph integration', () => {
// First invoke_agent span with thread_id
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'thread_test_agent',
- 'gen_ai.pipeline.name': 'thread_test_agent',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'thread_test_agent',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'thread_test_agent',
// The thread_id should be captured as conversation.id
- 'gen_ai.conversation.id': 'thread_abc123_session_1',
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'thread_abc123_session_1',
}),
description: 'invoke_agent thread_test_agent',
op: 'gen_ai.invoke_agent',
@@ -242,13 +259,13 @@ describe('LangGraph integration', () => {
// Second invoke_agent span with different thread_id
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'invoke_agent',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.ai.langgraph',
- 'gen_ai.agent.name': 'thread_test_agent',
- 'gen_ai.pipeline.name': 'thread_test_agent',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.langgraph',
+ [GEN_AI_AGENT_NAME_ATTRIBUTE]: 'thread_test_agent',
+ [GEN_AI_PIPELINE_NAME_ATTRIBUTE]: 'thread_test_agent',
// Different thread_id for different conversation
- 'gen_ai.conversation.id': 'thread_xyz789_session_2',
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'thread_xyz789_session_2',
}),
description: 'invoke_agent thread_test_agent',
op: 'gen_ai.invoke_agent',
@@ -258,7 +275,7 @@ describe('LangGraph integration', () => {
// Third invoke_agent span without thread_id (should NOT have gen_ai.conversation.id)
expect.objectContaining({
data: expect.not.objectContaining({
- 'gen_ai.conversation.id': expect.anything(),
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: expect.anything(),
}),
description: 'invoke_agent thread_test_agent',
op: 'gen_ai.invoke_agent',
@@ -273,4 +290,32 @@ describe('LangGraph integration', () => {
await createRunner().ignore('event').expect({ transaction: EXPECTED_TRANSACTION_THREAD_ID }).start().completed();
});
});
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts
index ac40fbe94249..b2189f993b2b 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/openai-tool-calls/test.ts
@@ -1,4 +1,28 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_STREAM_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STREAMING_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+ OPENAI_RESPONSE_ID_ATTRIBUTE,
+ OPENAI_RESPONSE_MODEL_ATTRIBUTE,
+ OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE,
+ OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE,
+ OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE,
+} from '../../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../../utils/runner';
describe('OpenAI Tool Calls integration', () => {
@@ -63,23 +87,23 @@ describe('OpenAI Tool Calls integration', () => {
// First span - chat completion with tools (non-streaming)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-tools-123',
- 'gen_ai.response.finish_reasons': '["tool_calls"]',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'openai.response.id': 'chatcmpl-tools-123',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 25,
- 'openai.usage.prompt_tokens': 15,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-tools-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["tool_calls"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-tools-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15,
},
description: 'chat gpt-4',
op: 'gen_ai.chat',
@@ -89,25 +113,25 @@ describe('OpenAI Tool Calls integration', () => {
// Second span - chat completion with tools and streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-stream-tools-123',
- 'gen_ai.response.finish_reasons': '["tool_calls"]',
- 'gen_ai.response.streaming': true,
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'openai.response.id': 'chatcmpl-stream-tools-123',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:45.000Z',
- 'openai.usage.completion_tokens': 25,
- 'openai.usage.prompt_tokens': 15,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-tools-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["tool_calls"]',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-tools-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:45.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15,
},
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -117,54 +141,54 @@ describe('OpenAI Tool Calls integration', () => {
// Third span - responses API with tools (non-streaming)
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_tools_789',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'openai.response.id': 'resp_tools_789',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:32:00.000Z',
- 'openai.usage.completion_tokens': 12,
- 'openai.usage.prompt_tokens': 8,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_tools_789',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_tools_789',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:32:00.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8,
},
- description: 'responses gpt-4',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Fourth span - responses API with tools and streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_stream_tools_789',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.response.streaming': true,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'openai.response.id': 'resp_stream_tools_789',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 12,
- 'openai.usage.prompt_tokens': 8,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_tools_789',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_tools_789',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8,
},
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
@@ -177,27 +201,27 @@ describe('OpenAI Tool Calls integration', () => {
// First span - chat completion with tools (non-streaming) with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather like in Paris today?"}]',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-tools-123',
- 'gen_ai.response.finish_reasons': '["tool_calls"]',
- 'gen_ai.response.text': '[""]',
- 'gen_ai.response.tool_calls': CHAT_TOOL_CALLS,
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'openai.response.id': 'chatcmpl-tools-123',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 25,
- 'openai.usage.prompt_tokens': 15,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather like in Paris today?"}]',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-tools-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["tool_calls"]',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: '[""]',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: CHAT_TOOL_CALLS,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-tools-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15,
},
description: 'chat gpt-4',
op: 'gen_ai.chat',
@@ -207,28 +231,28 @@ describe('OpenAI Tool Calls integration', () => {
// Second span - chat completion with tools and streaming with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather like in Paris today?"}]',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-stream-tools-123',
- 'gen_ai.response.finish_reasons': '["tool_calls"]',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.tool_calls': CHAT_STREAM_TOOL_CALLS,
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'openai.response.id': 'chatcmpl-stream-tools-123',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:45.000Z',
- 'openai.usage.completion_tokens': 25,
- 'openai.usage.prompt_tokens': 15,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather like in Paris today?"}]',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-tools-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["tool_calls"]',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: CHAT_STREAM_TOOL_CALLS,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-tools-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:45.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 15,
},
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -238,60 +262,60 @@ describe('OpenAI Tool Calls integration', () => {
// Third span - responses API with tools (non-streaming) with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather like in Paris today?"}]',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_tools_789',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.response.tool_calls': RESPONSES_TOOL_CALLS,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'openai.response.id': 'resp_tools_789',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:32:00.000Z',
- 'openai.usage.completion_tokens': 12,
- 'openai.usage.prompt_tokens': 8,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather like in Paris today?"}]',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_tools_789',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: RESPONSES_TOOL_CALLS,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_tools_789',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:32:00.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8,
},
- description: 'responses gpt-4',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Fourth span - responses API with tools and streaming with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather like in Paris today?"}]',
- 'gen_ai.request.available_tools': WEATHER_TOOL_DEFINITION,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_stream_tools_789',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.response.streaming': true,
- 'gen_ai.response.tool_calls': RESPONSES_TOOL_CALLS,
- 'gen_ai.usage.input_tokens': 8,
- 'gen_ai.usage.output_tokens': 12,
- 'gen_ai.usage.total_tokens': 20,
- 'openai.response.id': 'resp_stream_tools_789',
- 'openai.response.model': 'gpt-4',
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 12,
- 'openai.usage.prompt_tokens': 8,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather like in Paris today?"}]',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: WEATHER_TOOL_DEFINITION,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_tools_789',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: RESPONSES_TOOL_CALLS,
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 20,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_tools_789',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 12,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 8,
},
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-embeddings.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-embeddings.mjs
index f6cbe1160bf5..42c6a94c5199 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-embeddings.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-embeddings.mjs
@@ -67,6 +67,12 @@ async function run() {
} catch {
// Error is expected and handled
}
+
+ // Third test: embeddings API with multiple inputs
+ await client.embeddings.create({
+ input: ['First input text', 'Second input text', 'Third input text'],
+ model: 'text-embedding-3-small',
+ });
});
server.close();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-manual-conversation-id.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-manual-conversation-id.mjs
new file mode 100644
index 000000000000..a44b4767bbae
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-manual-conversation-id.mjs
@@ -0,0 +1,79 @@
+import * as Sentry from '@sentry/node';
+import express from 'express';
+import OpenAI from 'openai';
+
+function startMockServer() {
+ const app = express();
+ app.use(express.json());
+
+ // Chat completions endpoint
+ app.post('/openai/chat/completions', (req, res) => {
+ const { model } = req.body;
+
+ res.send({
+ id: 'chatcmpl-mock123',
+ object: 'chat.completion',
+ created: 1677652288,
+ model: model,
+ choices: [
+ {
+ index: 0,
+ message: {
+ role: 'assistant',
+ content: 'Mock response from OpenAI',
+ },
+ finish_reason: 'stop',
+ },
+ ],
+ usage: {
+ prompt_tokens: 10,
+ completion_tokens: 15,
+ total_tokens: 25,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockServer();
+
+ // Test: Multiple chat completions in the same conversation with manual conversation ID
+ await Sentry.startSpan({ op: 'function', name: 'chat-with-manual-conversation-id' }, async () => {
+ const client = new OpenAI({
+ baseURL: `http://localhost:${server.address().port}/openai`,
+ apiKey: 'mock-api-key',
+ });
+
+ // Set conversation ID manually using Sentry API
+ Sentry.setConversationId('user_chat_session_abc123');
+
+ // First message in the conversation
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'What is the capital of France?' }],
+ });
+
+ // Second message in the same conversation
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'Tell me more about it' }],
+ });
+
+ // Third message in the same conversation
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'What is its population?' }],
+ });
+ });
+
+ server.close();
+ await Sentry.flush(2000);
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-1.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-1.mjs
new file mode 100644
index 000000000000..dab303a401d9
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-1.mjs
@@ -0,0 +1,74 @@
+import * as Sentry from '@sentry/node';
+import express from 'express';
+import OpenAI from 'openai';
+
+function startMockServer() {
+ const app = express();
+ app.use(express.json());
+
+ // Chat completions endpoint
+ app.post('/openai/chat/completions', (req, res) => {
+ const { model } = req.body;
+
+ res.send({
+ id: 'chatcmpl-mock123',
+ object: 'chat.completion',
+ created: 1677652288,
+ model: model,
+ choices: [
+ {
+ index: 0,
+ message: {
+ role: 'assistant',
+ content: 'Mock response from OpenAI',
+ },
+ finish_reason: 'stop',
+ },
+ ],
+ usage: {
+ prompt_tokens: 10,
+ completion_tokens: 15,
+ total_tokens: 25,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockServer();
+ const client = new OpenAI({
+ baseURL: `http://localhost:${server.address().port}/openai`,
+ apiKey: 'mock-api-key',
+ });
+
+ // First request/conversation scope
+ await Sentry.withScope(async scope => {
+ // Set conversation ID for this request scope BEFORE starting the span
+ scope.setConversationId('conv_user1_session_abc');
+
+ await Sentry.startSpan({ op: 'http.server', name: 'GET /chat/conversation-1' }, async () => {
+ // First message in conversation 1
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'Hello from conversation 1' }],
+ });
+
+ // Second message in conversation 1
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'Follow-up in conversation 1' }],
+ });
+ });
+ });
+
+ server.close();
+ await Sentry.flush(2000);
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-2.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-2.mjs
new file mode 100644
index 000000000000..09f73afed761
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-separate-scope-2.mjs
@@ -0,0 +1,74 @@
+import * as Sentry from '@sentry/node';
+import express from 'express';
+import OpenAI from 'openai';
+
+function startMockServer() {
+ const app = express();
+ app.use(express.json());
+
+ // Chat completions endpoint
+ app.post('/openai/chat/completions', (req, res) => {
+ const { model } = req.body;
+
+ res.send({
+ id: 'chatcmpl-mock123',
+ object: 'chat.completion',
+ created: 1677652288,
+ model: model,
+ choices: [
+ {
+ index: 0,
+ message: {
+ role: 'assistant',
+ content: 'Mock response from OpenAI',
+ },
+ finish_reason: 'stop',
+ },
+ ],
+ usage: {
+ prompt_tokens: 10,
+ completion_tokens: 15,
+ total_tokens: 25,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockServer();
+ const client = new OpenAI({
+ baseURL: `http://localhost:${server.address().port}/openai`,
+ apiKey: 'mock-api-key',
+ });
+
+ // Second request/conversation scope (completely separate)
+ await Sentry.withScope(async scope => {
+ // Set different conversation ID for this request scope BEFORE starting the span
+ scope.setConversationId('conv_user2_session_xyz');
+
+ await Sentry.startSpan({ op: 'http.server', name: 'GET /chat/conversation-2' }, async () => {
+ // First message in conversation 2
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'Hello from conversation 2' }],
+ });
+
+ // Second message in conversation 2
+ await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [{ role: 'user', content: 'Follow-up in conversation 2' }],
+ });
+ });
+ });
+
+ server.close();
+ await Sentry.flush(2000);
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..1fb09d2f9a6d
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/scenario-system-instructions.mjs
@@ -0,0 +1,63 @@
+import * as Sentry from '@sentry/node';
+import express from 'express';
+import OpenAI from 'openai';
+
+function startMockServer() {
+ const app = express();
+ app.use(express.json());
+
+ app.post('/openai/chat/completions', (req, res) => {
+ const { model } = req.body;
+
+ res.send({
+ id: 'chatcmpl-system-test',
+ object: 'chat.completion',
+ created: 1677652288,
+ model: model,
+ choices: [
+ {
+ index: 0,
+ message: {
+ role: 'assistant',
+ content: 'Response',
+ },
+ finish_reason: 'stop',
+ },
+ ],
+ usage: {
+ prompt_tokens: 10,
+ completion_tokens: 5,
+ total_tokens: 15,
+ },
+ });
+ });
+
+ return new Promise(resolve => {
+ const server = app.listen(0, () => {
+ resolve(server);
+ });
+ });
+}
+
+async function run() {
+ const server = await startMockServer();
+
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ const client = new OpenAI({
+ baseURL: `http://localhost:${server.address().port}/openai`,
+ apiKey: 'test-key',
+ });
+
+ await client.chat.completions.create({
+ model: 'gpt-3.5-turbo',
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant' },
+ { role: 'user', content: 'Hello' },
+ ],
+ });
+ });
+
+  server.close();
+  await Sentry.flush(2000);
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/test.ts
index 4d41b34b8c31..df432d292bba 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/test.ts
@@ -1,4 +1,32 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_CONVERSATION_ID_ATTRIBUTE,
+ GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE,
+ GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_STREAM_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STREAMING_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+ OPENAI_RESPONSE_ID_ATTRIBUTE,
+ OPENAI_RESPONSE_MODEL_ATTRIBUTE,
+ OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE,
+ OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE,
+ OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('OpenAI integration', () => {
@@ -12,23 +40,23 @@ describe('OpenAI integration', () => {
// First span - basic chat completion without PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
@@ -38,36 +66,36 @@ describe('OpenAI integration', () => {
// Second span - responses API
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'resp_mock456',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.usage.input_tokens': 5,
- 'gen_ai.usage.output_tokens': 8,
- 'gen_ai.usage.total_tokens': 13,
- 'openai.response.id': 'resp_mock456',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:30.000Z',
- 'openai.usage.completion_tokens': 8,
- 'openai.usage.prompt_tokens': 5,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 5,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 13,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:30.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 8,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 5,
},
- description: 'responses gpt-3.5-turbo',
- op: 'gen_ai.responses',
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Third span - error handling
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
},
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -77,25 +105,25 @@ describe('OpenAI integration', () => {
// Fourth span - chat completions streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.stream': true,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-stream-123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 12,
- 'gen_ai.usage.output_tokens': 18,
- 'gen_ai.usage.total_tokens': 30,
- 'openai.response.id': 'chatcmpl-stream-123',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 18,
- 'openai.usage.prompt_tokens': 12,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 18,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12,
},
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -105,39 +133,39 @@ describe('OpenAI integration', () => {
// Fifth span - responses API streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_stream_456',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.usage.input_tokens': 6,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 16,
- 'openai.response.id': 'resp_stream_456',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 10,
- 'openai.usage.prompt_tokens': 6,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 6,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 16,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6,
},
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Sixth span - error handling in streaming context
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.stream': true,
- 'gen_ai.system': 'openai',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
},
description: 'chat error-model stream-response',
op: 'gen_ai.chat',
@@ -153,27 +181,29 @@ describe('OpenAI integration', () => {
// First span - basic chat completion with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.messages.original_length': 2,
- 'gen_ai.request.messages':
- '[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What is the capital of France?"}]',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.response.text': '["Hello from OpenAI mock!"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the capital of France?"}]',
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant.' },
+ ]),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: '["Hello from OpenAI mock!"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
@@ -183,40 +213,41 @@ describe('OpenAI integration', () => {
// Second span - responses API with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.messages': 'Translate this to French: Hello',
- 'gen_ai.response.text': 'Response to: Translate this to French: Hello',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'resp_mock456',
- 'gen_ai.usage.input_tokens': 5,
- 'gen_ai.usage.output_tokens': 8,
- 'gen_ai.usage.total_tokens': 13,
- 'openai.response.id': 'resp_mock456',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:30.000Z',
- 'openai.usage.completion_tokens': 8,
- 'openai.usage.prompt_tokens': 5,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: 'Translate this to French: Hello',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Response to: Translate this to French: Hello',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 5,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 13,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:30.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 8,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 5,
},
- description: 'responses gpt-3.5-turbo',
- op: 'gen_ai.responses',
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Third span - error handling with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"This will fail"}]',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"This will fail"}]',
},
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -226,29 +257,31 @@ describe('OpenAI integration', () => {
// Fourth span - chat completions streaming with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 2,
- 'gen_ai.request.messages':
- '[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"Tell me about streaming"}]',
- 'gen_ai.response.text': 'Hello from OpenAI streaming!',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.response.id': 'chatcmpl-stream-123',
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.usage.input_tokens': 12,
- 'gen_ai.usage.output_tokens': 18,
- 'gen_ai.usage.total_tokens': 30,
- 'openai.response.id': 'chatcmpl-stream-123',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 18,
- 'openai.usage.prompt_tokens': 12,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Tell me about streaming"}]',
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant.' },
+ ]),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from OpenAI streaming!',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 18,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12,
}),
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -258,43 +291,45 @@ describe('OpenAI integration', () => {
// Fifth span - responses API streaming with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages': 'Test streaming responses API',
- 'gen_ai.response.text': 'Streaming response to: Test streaming responses APITest streaming responses API',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.response.id': 'resp_stream_456',
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.usage.input_tokens': 6,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 16,
- 'openai.response.id': 'resp_stream_456',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 10,
- 'openai.usage.prompt_tokens': 6,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: 'Test streaming responses API',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]:
+ 'Streaming response to: Test streaming responses APITest streaming responses API',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 6,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 16,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6,
}),
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Sixth span - error handling in streaming context with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"This will fail"}]',
- 'gen_ai.system': 'openai',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"This will fail"}]',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
},
description: 'chat error-model stream-response',
op: 'gen_ai.chat',
@@ -310,16 +345,16 @@ describe('OpenAI integration', () => {
// Check that custom options are respected
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
}),
}),
// Check that custom options are respected for streaming
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
- 'gen_ai.request.stream': true, // Should be marked as stream
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true, // Should be marked as stream
}),
}),
]),
@@ -361,18 +396,18 @@ describe('OpenAI integration', () => {
// First span - embeddings API
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'text-embedding-3-small',
- 'gen_ai.request.encoding_format': 'float',
- 'gen_ai.request.dimensions': 1536,
- 'gen_ai.response.model': 'text-embedding-3-small',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.total_tokens': 10,
- 'openai.response.model': 'text-embedding-3-small',
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE]: 'float',
+ [GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE]: 1536,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'embeddings text-embedding-3-small',
op: 'gen_ai.embeddings',
@@ -382,11 +417,11 @@ describe('OpenAI integration', () => {
// Second span - embeddings API error model
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
},
description: 'embeddings error-model',
op: 'gen_ai.embeddings',
@@ -402,19 +437,19 @@ describe('OpenAI integration', () => {
// First span - embeddings API with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'text-embedding-3-small',
- 'gen_ai.request.encoding_format': 'float',
- 'gen_ai.request.dimensions': 1536,
- 'gen_ai.request.messages': 'Embedding test!',
- 'gen_ai.response.model': 'text-embedding-3-small',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.total_tokens': 10,
- 'openai.response.model': 'text-embedding-3-small',
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE]: 'float',
+ [GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE]: 1536,
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: 'Embedding test!',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'embeddings text-embedding-3-small',
op: 'gen_ai.embeddings',
@@ -424,18 +459,38 @@ describe('OpenAI integration', () => {
// Second span - embeddings API error model with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages': 'Error embedding test!',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: 'Error embedding test!',
},
description: 'embeddings error-model',
op: 'gen_ai.embeddings',
origin: 'auto.ai.openai',
status: 'internal_error',
}),
+ // Third span - embeddings API with multiple inputs (the inputs are small, so they are not truncated)
+ expect.objectContaining({
+ data: {
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: '["First input text","Second input text","Third input text"]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
+ },
+ description: 'embeddings text-embedding-3-small',
+ op: 'gen_ai.embeddings',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
]),
};
createEsmAndCjsTests(__dirname, 'scenario-embeddings.mjs', 'instrument.mjs', (createRunner, test) => {
@@ -475,23 +530,23 @@ describe('OpenAI integration', () => {
span_id: expect.any(String),
trace_id: expect.any(String),
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
op: 'gen_ai.chat',
origin: 'auto.ai.openai',
@@ -522,23 +577,23 @@ describe('OpenAI integration', () => {
span_id: expect.any(String),
trace_id: expect.any(String),
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
op: 'gen_ai.chat',
origin: 'auto.ai.openai',
@@ -564,54 +619,45 @@ describe('OpenAI integration', () => {
transaction: {
transaction: 'main',
spans: expect.arrayContaining([
+ // First call: Last message is large and gets truncated (only C's remain, D's are cropped)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
// Messages should be present (truncation happened) and should be a JSON array of a single index
- 'gen_ai.request.messages': expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^\[\{"role":"user","content":"C+"\}\]$/),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.stringMatching(
+ /^\[\{"type":"text","content":"A+"\}\]$/,
+ ),
}),
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
- ]),
- },
- })
- .start()
- .completed();
- });
- },
- );
-
- createEsmAndCjsTests(
- __dirname,
- 'truncation/scenario-message-truncation-responses.mjs',
- 'instrument-with-pii.mjs',
- (createRunner, test) => {
- test('truncates string inputs when they exceed byte limit', async () => {
- await createRunner()
- .ignore('event')
- .expect({
- transaction: {
- transaction: 'main',
- spans: expect.arrayContaining([
+ // Second call: Last message is small and kept without truncation
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- // Messages should be present and should include truncated string input (contains only As)
- 'gen_ai.request.messages': expect.stringMatching(/^A+$/),
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ // Small message should be kept intact
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify([
+ { role: 'user', content: 'This is a small message that fits within the limit' },
+ ]),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 2,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.stringMatching(
+ /^\[\{"type":"text","content":"A+"\}\]$/,
+ ),
}),
- description: 'responses gpt-3.5-turbo',
- op: 'gen_ai.responses',
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
@@ -626,10 +672,10 @@ describe('OpenAI integration', () => {
createEsmAndCjsTests(
__dirname,
- 'truncation/scenario-message-truncation-embeddings.mjs',
+ 'truncation/scenario-message-truncation-responses.mjs',
'instrument-with-pii.mjs',
(createRunner, test) => {
- test('truncates messages when they exceed byte limit - keeps only last message and crops it', async () => {
+ test('truncates string inputs when they exceed byte limit', async () => {
await createRunner()
.ignore('event')
.expect({
@@ -638,8 +684,19 @@ describe('OpenAI integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'embeddings',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ // Messages should be present and should include truncated string input (contains only As)
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.stringMatching(/^A+$/),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
}),
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
}),
]),
},
@@ -657,54 +714,54 @@ describe('OpenAI integration', () => {
// First span - conversations.create returns conversation object with id
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'conversations',
- 'sentry.op': 'gen_ai.conversations',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
// The conversation ID should be captured from the response
- 'gen_ai.conversation.id': 'conv_689667905b048191b4740501625afd940c7533ace33a2dab',
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_689667905b048191b4740501625afd940c7533ace33a2dab',
}),
- description: 'conversations unknown',
- op: 'gen_ai.conversations',
+ description: 'chat unknown',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Second span - responses.create with conversation parameter
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
// The conversation ID should be captured from the request
- 'gen_ai.conversation.id': 'conv_689667905b048191b4740501625afd940c7533ace33a2dab',
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_689667905b048191b4740501625afd940c7533ace33a2dab',
}),
- op: 'gen_ai.responses',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Third span - responses.create without conversation (first in chain, should NOT have gen_ai.conversation.id)
expect.objectContaining({
data: expect.not.objectContaining({
- 'gen_ai.conversation.id': expect.anything(),
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: expect.anything(),
}),
- op: 'gen_ai.responses',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Fourth span - responses.create with previous_response_id (chaining)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
// The previous_response_id should be captured as conversation.id
- 'gen_ai.conversation.id': 'resp_mock_conv_123',
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'resp_mock_conv_123',
}),
- op: 'gen_ai.responses',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
@@ -720,4 +777,172 @@ describe('OpenAI integration', () => {
.completed();
});
});
+
+ // Test for manual conversation ID setting using setConversationId()
+ const EXPECTED_TRANSACTION_MANUAL_CONVERSATION_ID = {
+ transaction: 'chat-with-manual-conversation-id',
+ spans: expect.arrayContaining([
+ // All three chat completion spans should have the same manually-set conversation ID
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'user_chat_session_abc123',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'user_chat_session_abc123',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'user_chat_session_abc123',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ ]),
+ };
+
+ createEsmAndCjsTests(__dirname, 'scenario-manual-conversation-id.mjs', 'instrument.mjs', (createRunner, test) => {
+ test('attaches manual conversation ID set via setConversationId() to all chat spans', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({ transaction: EXPECTED_TRANSACTION_MANUAL_CONVERSATION_ID })
+ .start()
+ .completed();
+ });
+ });
+
+ // Test for scope isolation - different scopes have different conversation IDs
+ const EXPECTED_TRANSACTION_CONVERSATION_1 = {
+ transaction: 'GET /chat/conversation-1',
+ spans: expect.arrayContaining([
+ // Both chat completion spans in conversation 1 should have conv_user1_session_abc
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_user1_session_abc',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_user1_session_abc',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ ]),
+ };
+
+ const EXPECTED_TRANSACTION_CONVERSATION_2 = {
+ transaction: 'GET /chat/conversation-2',
+ spans: expect.arrayContaining([
+ // Both chat completion spans in conversation 2 should have conv_user2_session_xyz
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_user2_session_xyz',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_user2_session_xyz',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ }),
+ description: 'chat gpt-4',
+ op: 'gen_ai.chat',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
+ ]),
+ };
+
+ createEsmAndCjsTests(__dirname, 'scenario-separate-scope-1.mjs', 'instrument.mjs', (createRunner, test) => {
+ test('isolates conversation IDs across separate scopes - conversation 1', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({ transaction: EXPECTED_TRANSACTION_CONVERSATION_1 })
+ .start()
+ .completed();
+ });
+ });
+
+ createEsmAndCjsTests(__dirname, 'scenario-separate-scope-2.mjs', 'instrument.mjs', (createRunner, test) => {
+ test('isolates conversation IDs across separate scopes - conversation 2', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({ transaction: EXPECTED_TRANSACTION_CONVERSATION_2 })
+ .start()
+ .completed();
+ });
+ });
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-completions.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-completions.mjs
index 96684ed9ec4f..7b0cdd730aa3 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-completions.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-completions.mjs
@@ -47,12 +47,11 @@ async function run() {
const client = instrumentOpenAiClient(mockClient);
- // Create 3 large messages where:
- // - First 2 messages are very large (will be dropped)
- // - Last message is large but will be truncated to fit within the 20KB limit
+ // Test 1: Given an array of messages, only the last message should be kept.
+ // That last message is truncated to fit within the 20KB limit.
const largeContent1 = 'A'.repeat(15000); // ~15KB
const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
+ const largeContent3 = 'C'.repeat(25000) + 'D'.repeat(25000); // ~50KB (will be truncated, only C's remain)
await client.chat.completions.create({
model: 'gpt-3.5-turbo',
@@ -63,6 +62,19 @@ async function run() {
],
temperature: 0.7,
});
+
+ // Test 2: Given an array of messages, only the last message should be kept.
+ // The last message is small, so it should be kept intact.
+ const smallContent = 'This is a small message that fits within the limit';
+ await client.chat.completions.create({
+ model: 'gpt-3.5-turbo',
+ messages: [
+ { role: 'system', content: largeContent1 },
+ { role: 'user', content: largeContent2 },
+ { role: 'user', content: smallContent },
+ ],
+ temperature: 0.7,
+ });
});
}
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-embeddings.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-embeddings.mjs
deleted file mode 100644
index b2e5cf3206fe..000000000000
--- a/dev-packages/node-integration-tests/suites/tracing/openai/truncation/scenario-message-truncation-embeddings.mjs
+++ /dev/null
@@ -1,66 +0,0 @@
-import { instrumentOpenAiClient } from '@sentry/core';
-import * as Sentry from '@sentry/node';
-
-class MockOpenAI {
- constructor(config) {
- this.apiKey = config.apiKey;
-
- this.embeddings = {
- create: async params => {
- await new Promise(resolve => setTimeout(resolve, 10));
-
- return {
- object: 'list',
- data: [
- {
- object: 'embedding',
- embedding: [0.1, 0.2, 0.3],
- index: 0,
- },
- ],
- model: params.model,
- usage: {
- prompt_tokens: 10,
- total_tokens: 10,
- },
- };
- },
- };
- }
-}
-
-async function run() {
- await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
- const mockClient = new MockOpenAI({
- apiKey: 'mock-api-key',
- });
-
- const client = instrumentOpenAiClient(mockClient);
-
- // Create 1 large input that gets truncated to fit within the 20KB limit
- const largeContent = 'A'.repeat(25000) + 'B'.repeat(25000); // ~50KB gets truncated to include only As
-
- await client.embeddings.create({
- input: largeContent,
- model: 'text-embedding-3-small',
- dimensions: 1536,
- encoding_format: 'float',
- });
-
- // Create 3 large inputs where:
- // - First 2 inputs are very large (will be dropped)
- // - Last input is large but will be truncated to fit within the 20KB limit
- const largeContent1 = 'A'.repeat(15000); // ~15KB
- const largeContent2 = 'B'.repeat(15000); // ~15KB
- const largeContent3 = 'C'.repeat(25000); // ~25KB (will be truncated)
-
- await client.embeddings.create({
- input: [largeContent1, largeContent2, largeContent3],
- model: 'text-embedding-3-small',
- dimensions: 1536,
- encoding_format: 'float',
- });
- });
-}
-
-run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/v6/scenario-embeddings.mjs b/dev-packages/node-integration-tests/suites/tracing/openai/v6/scenario-embeddings.mjs
index f6cbe1160bf5..42c6a94c5199 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/v6/scenario-embeddings.mjs
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/v6/scenario-embeddings.mjs
@@ -67,6 +67,12 @@ async function run() {
} catch {
// Error is expected and handled
}
+
+ // Third test: embeddings API with multiple inputs
+ await client.embeddings.create({
+ input: ['First input text', 'Second input text', 'Third input text'],
+ model: 'text-embedding-3-small',
+ });
});
server.close();
diff --git a/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts
index 3784fb7e4631..0cb07c6eba66 100644
--- a/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/openai/v6/test.ts
@@ -1,4 +1,31 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE,
+ GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_REQUEST_STREAM_ATTRIBUTE,
+ GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_STREAMING_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+ OPENAI_RESPONSE_ID_ATTRIBUTE,
+ OPENAI_RESPONSE_MODEL_ATTRIBUTE,
+ OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE,
+ OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE,
+ OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE,
+} from '../../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../../utils/runner';
describe('OpenAI integration (V6)', () => {
@@ -12,23 +39,23 @@ describe('OpenAI integration (V6)', () => {
// First span - basic chat completion without PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
@@ -38,36 +65,36 @@ describe('OpenAI integration (V6)', () => {
// Second span - responses API
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'resp_mock456',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.usage.input_tokens': 5,
- 'gen_ai.usage.output_tokens': 8,
- 'gen_ai.usage.total_tokens': 13,
- 'openai.response.id': 'resp_mock456',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:30.000Z',
- 'openai.usage.completion_tokens': 8,
- 'openai.usage.prompt_tokens': 5,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 5,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 13,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:30.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 8,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 5,
},
- description: 'responses gpt-3.5-turbo',
- op: 'gen_ai.responses',
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Third span - error handling
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
},
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -77,25 +104,25 @@ describe('OpenAI integration (V6)', () => {
// Fourth span - chat completions streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.stream': true,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'chatcmpl-stream-123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 12,
- 'gen_ai.usage.output_tokens': 18,
- 'gen_ai.usage.total_tokens': 30,
- 'openai.response.id': 'chatcmpl-stream-123',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 18,
- 'openai.usage.prompt_tokens': 12,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 18,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12,
},
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -105,39 +132,39 @@ describe('OpenAI integration (V6)', () => {
// Fifth span - responses API streaming
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.response.id': 'resp_stream_456',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.usage.input_tokens': 6,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 16,
- 'openai.response.id': 'resp_stream_456',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 10,
- 'openai.usage.prompt_tokens': 6,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 6,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 16,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6,
},
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Sixth span - error handling in streaming context
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.stream': true,
- 'gen_ai.system': 'openai',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
},
description: 'chat error-model stream-response',
op: 'gen_ai.chat',
@@ -153,27 +180,27 @@ describe('OpenAI integration (V6)', () => {
// First span - basic chat completion with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.request.messages.original_length': 2,
- 'gen_ai.request.messages':
- '[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What is the capital of France?"}]',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.response.text': '["Hello from OpenAI mock!"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the capital of France?"}]',
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: '[{"type":"text","content":"You are a helpful assistant."}]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: '["Hello from OpenAI mock!"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'chat gpt-3.5-turbo',
op: 'gen_ai.chat',
@@ -183,40 +210,41 @@ describe('OpenAI integration (V6)', () => {
// Second span - responses API with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.messages': 'Translate this to French: Hello',
- 'gen_ai.response.text': 'Response to: Translate this to French: Hello',
- 'gen_ai.response.finish_reasons': '["completed"]',
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'resp_mock456',
- 'gen_ai.usage.input_tokens': 5,
- 'gen_ai.usage.output_tokens': 8,
- 'gen_ai.usage.total_tokens': 13,
- 'openai.response.id': 'resp_mock456',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:30.000Z',
- 'openai.usage.completion_tokens': 8,
- 'openai.usage.prompt_tokens': 5,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: 'Translate this to French: Hello',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Response to: Translate this to French: Hello',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["completed"]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 5,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 8,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 13,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_mock456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:30.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 8,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 5,
},
- description: 'responses gpt-3.5-turbo',
- op: 'gen_ai.responses',
+ description: 'chat gpt-3.5-turbo',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Third span - error handling with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"This will fail"}]',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"This will fail"}]',
},
description: 'chat error-model',
op: 'gen_ai.chat',
@@ -226,29 +254,29 @@ describe('OpenAI integration (V6)', () => {
// Fourth span - chat completions streaming with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.temperature': 0.8,
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 2,
- 'gen_ai.request.messages':
- '[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"Tell me about streaming"}]',
- 'gen_ai.response.text': 'Hello from OpenAI streaming!',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.response.id': 'chatcmpl-stream-123',
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.usage.input_tokens': 12,
- 'gen_ai.usage.output_tokens': 18,
- 'gen_ai.usage.total_tokens': 30,
- 'openai.response.id': 'chatcmpl-stream-123',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:40.000Z',
- 'openai.usage.completion_tokens': 18,
- 'openai.usage.prompt_tokens': 12,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.8,
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Tell me about streaming"}]',
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: '[{"type":"text","content":"You are a helpful assistant."}]',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Hello from OpenAI streaming!',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 12,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 18,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-stream-123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:40.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 18,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 12,
}),
description: 'chat gpt-4 stream-response',
op: 'gen_ai.chat',
@@ -258,43 +286,45 @@ describe('OpenAI integration (V6)', () => {
// Fifth span - responses API streaming with PII
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.operation.name': 'responses',
- 'sentry.op': 'gen_ai.responses',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-4',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages': 'Test streaming responses API',
- 'gen_ai.response.text': 'Streaming response to: Test streaming responses APITest streaming responses API',
- 'gen_ai.response.finish_reasons': '["in_progress","completed"]',
- 'gen_ai.response.id': 'resp_stream_456',
- 'gen_ai.response.model': 'gpt-4',
- 'gen_ai.usage.input_tokens': 6,
- 'gen_ai.usage.output_tokens': 10,
- 'gen_ai.usage.total_tokens': 16,
- 'openai.response.id': 'resp_stream_456',
- 'openai.response.model': 'gpt-4',
- 'gen_ai.response.streaming': true,
- 'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
- 'openai.usage.completion_tokens': 10,
- 'openai.usage.prompt_tokens': 6,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: 'Test streaming responses API',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]:
+ 'Streaming response to: Test streaming responses APITest streaming responses API',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["in_progress","completed"]',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 6,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 16,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'resp_stream_456',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-4',
+ [GEN_AI_RESPONSE_STREAMING_ATTRIBUTE]: true,
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:50.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 6,
}),
- description: 'responses gpt-4 stream-response',
- op: 'gen_ai.responses',
+ description: 'chat gpt-4 stream-response',
+ op: 'gen_ai.chat',
origin: 'auto.ai.openai',
status: 'ok',
}),
// Sixth span - error handling in streaming context with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'chat',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.stream': true,
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"This will fail"}]',
- 'gen_ai.system': 'openai',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"This will fail"}]',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
},
description: 'chat error-model stream-response',
op: 'gen_ai.chat',
@@ -310,18 +340,20 @@ describe('OpenAI integration (V6)', () => {
// Check that custom options are respected
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.any(String), // System instructions should be extracted
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
}),
}),
// Check that custom options are respected for streaming
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.messages': expect.any(String), // Should include messages when recordInputs: true
- 'gen_ai.response.text': expect.any(String), // Should include response text when recordOutputs: true
- 'gen_ai.request.stream': true, // Should be marked as stream
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String), // Should include messages when recordInputs: true
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: expect.any(String), // System instructions should be extracted
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String), // Should include response text when recordOutputs: true
+ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: true, // Should be marked as stream
}),
}),
]),
@@ -333,18 +365,18 @@ describe('OpenAI integration (V6)', () => {
// First span - embeddings API
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'text-embedding-3-small',
- 'gen_ai.request.encoding_format': 'float',
- 'gen_ai.request.dimensions': 1536,
- 'gen_ai.response.model': 'text-embedding-3-small',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.total_tokens': 10,
- 'openai.response.model': 'text-embedding-3-small',
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE]: 'float',
+ [GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE]: 1536,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'embeddings text-embedding-3-small',
op: 'gen_ai.embeddings',
@@ -354,11 +386,11 @@ describe('OpenAI integration (V6)', () => {
// Second span - embeddings API error model
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
},
description: 'embeddings error-model',
op: 'gen_ai.embeddings',
@@ -374,19 +406,19 @@ describe('OpenAI integration (V6)', () => {
// First span - embeddings API with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'text-embedding-3-small',
- 'gen_ai.request.encoding_format': 'float',
- 'gen_ai.request.dimensions': 1536,
- 'gen_ai.request.messages': 'Embedding test!',
- 'gen_ai.response.model': 'text-embedding-3-small',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.total_tokens': 10,
- 'openai.response.model': 'text-embedding-3-small',
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_REQUEST_ENCODING_FORMAT_ATTRIBUTE]: 'float',
+ [GEN_AI_REQUEST_DIMENSIONS_ATTRIBUTE]: 1536,
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: 'Embedding test!',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
description: 'embeddings text-embedding-3-small',
op: 'gen_ai.embeddings',
@@ -396,18 +428,38 @@ describe('OpenAI integration (V6)', () => {
// Second span - embeddings API error model with PII
expect.objectContaining({
data: {
- 'gen_ai.operation.name': 'embeddings',
- 'sentry.op': 'gen_ai.embeddings',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'error-model',
- 'gen_ai.request.messages': 'Error embedding test!',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'error-model',
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: 'Error embedding test!',
},
description: 'embeddings error-model',
op: 'gen_ai.embeddings',
origin: 'auto.ai.openai',
status: 'internal_error',
}),
+ // Third span - embeddings API with multiple inputs (small enough that no truncation occurs)
+ expect.objectContaining({
+ data: {
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.embeddings',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE]: '["First input text","Second input text","Third input text"]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 10,
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'text-embedding-3-small',
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
+ },
+ description: 'embeddings text-embedding-3-small',
+ op: 'gen_ai.embeddings',
+ origin: 'auto.ai.openai',
+ status: 'ok',
+ }),
]),
};
@@ -532,23 +584,23 @@ describe('OpenAI integration (V6)', () => {
span_id: expect.any(String),
trace_id: expect.any(String),
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
op: 'gen_ai.chat',
origin: 'auto.ai.openai',
@@ -590,23 +642,23 @@ describe('OpenAI integration (V6)', () => {
span_id: expect.any(String),
trace_id: expect.any(String),
data: {
- 'gen_ai.operation.name': 'chat',
- 'sentry.op': 'gen_ai.chat',
- 'sentry.origin': 'auto.ai.openai',
- 'gen_ai.system': 'openai',
- 'gen_ai.request.model': 'gpt-3.5-turbo',
- 'gen_ai.request.temperature': 0.7,
- 'gen_ai.response.model': 'gpt-3.5-turbo',
- 'gen_ai.response.id': 'chatcmpl-mock123',
- 'gen_ai.response.finish_reasons': '["stop"]',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 15,
- 'gen_ai.usage.total_tokens': 25,
- 'openai.response.id': 'chatcmpl-mock123',
- 'openai.response.model': 'gpt-3.5-turbo',
- 'openai.response.timestamp': '2023-03-01T06:31:28.000Z',
- 'openai.usage.completion_tokens': 15,
- 'openai.usage.prompt_tokens': 10,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.ai.openai',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'openai',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE]: 0.7,
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: '["stop"]',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 25,
+ [OPENAI_RESPONSE_ID_ATTRIBUTE]: 'chatcmpl-mock123',
+ [OPENAI_RESPONSE_MODEL_ATTRIBUTE]: 'gpt-3.5-turbo',
+ [OPENAI_RESPONSE_TIMESTAMP_ATTRIBUTE]: '2023-03-01T06:31:28.000Z',
+ [OPENAI_USAGE_COMPLETION_TOKENS_ATTRIBUTE]: 15,
+ [OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE]: 10,
},
op: 'gen_ai.chat',
origin: 'auto.ai.openai',
diff --git a/dev-packages/node-integration-tests/suites/tracing/prisma-orm-v7/test.ts b/dev-packages/node-integration-tests/suites/tracing/prisma-orm-v7/test.ts
index 9ae4efd136e7..5bb0158eee3c 100644
--- a/dev-packages/node-integration-tests/suites/tracing/prisma-orm-v7/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/prisma-orm-v7/test.ts
@@ -18,7 +18,7 @@ conditionalTest({ min: 20 })('Prisma ORM v7 Tests', () => {
.withDockerCompose({
workingDirectory: [cwd],
readyMatches: ['port 5432'],
- setupCommand: `prisma generate --schema ${cwd}/prisma/schema.prisma && tsc -p ${cwd}/prisma/tsconfig.json && prisma migrate dev -n sentry-test --schema ${cwd}/prisma/schema.prisma`,
+ setupCommand: `yarn prisma generate --schema ${cwd}/prisma/schema.prisma && tsc -p ${cwd}/prisma/tsconfig.json && yarn prisma migrate dev -n sentry-test --schema ${cwd}/prisma/schema.prisma`,
})
.expect({
transaction: transaction => {
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-system-instructions.mjs b/dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-system-instructions.mjs
new file mode 100644
index 000000000000..f9b05e0c5960
--- /dev/null
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-system-instructions.mjs
@@ -0,0 +1,23 @@
+import * as Sentry from '@sentry/node';
+import { generateText } from 'ai';
+import { MockLanguageModelV1 } from 'ai/test';
+
+async function run() {
+ await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+ await generateText({
+ experimental_telemetry: { isEnabled: true },
+ model: new MockLanguageModelV1({
+ doGenerate: async () => ({
+ rawCall: { rawPrompt: null, rawSettings: {} },
+ finishReason: 'stop',
+ usage: { promptTokens: 10, completionTokens: 5 },
+ text: 'Response',
+ }),
+ }),
+ system: 'You are a helpful assistant',
+ prompt: 'Hello',
+ });
+ });
+}
+
+run();
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/test-generate-object.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/test-generate-object.ts
index 2e8e8711e9e9..ac6614af7502 100644
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/test-generate-object.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/test-generate-object.ts
@@ -24,7 +24,7 @@ describe('Vercel AI integration - generateObject', () => {
'gen_ai.usage.input_tokens': 15,
'gen_ai.usage.output_tokens': 25,
'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateObject',
+ 'gen_ai.operation.name': 'invoke_agent',
'sentry.op': 'gen_ai.invoke_agent',
'sentry.origin': 'auto.vercelai.otel',
}),
@@ -38,7 +38,7 @@ describe('Vercel AI integration - generateObject', () => {
data: expect.objectContaining({
'sentry.origin': 'auto.vercelai.otel',
'sentry.op': 'gen_ai.generate_object',
- 'gen_ai.operation.name': 'ai.generateObject.doGenerate',
+ 'gen_ai.operation.name': 'generate_content',
'vercel.ai.operationId': 'ai.generateObject.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.model.id': 'mock-model-id',
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/test.ts
index 8112bcadd5f5..a98e7b97e919 100644
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/test.ts
@@ -1,5 +1,29 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import type { Event } from '@sentry/node';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_PROMPT_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ GEN_AI_TOOL_CALL_ID_ATTRIBUTE,
+ GEN_AI_TOOL_INPUT_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_TOOL_OUTPUT_ATTRIBUTE,
+ GEN_AI_TOOL_TYPE_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../utils/runner';
describe('Vercel AI integration', () => {
@@ -13,14 +37,14 @@ describe('Vercel AI integration', () => {
// First span - no telemetry config, should enable telemetry but not record inputs/outputs when sendDefaultPii: false
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -37,17 +61,17 @@ describe('Vercel AI integration', () => {
// Second span - explicitly enabled telemetry but recordInputs/recordOutputs not set, should not record when sendDefaultPii: false
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -66,18 +90,18 @@ describe('Vercel AI integration', () => {
// Third span - explicit telemetry enabled, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: {
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -95,20 +119,20 @@ describe('Vercel AI integration', () => {
// Fourth span - doGenerate for explicit telemetry enabled call
expect.objectContaining({
data: {
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': expect.any(String),
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -128,14 +152,14 @@ describe('Vercel AI integration', () => {
// Fifth span - tool call generateText span
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -152,17 +176,17 @@ describe('Vercel AI integration', () => {
// Sixth span - tool call doGenerate span
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -181,12 +205,12 @@ describe('Vercel AI integration', () => {
// Seventh span - tool call execution span
expect.objectContaining({
data: {
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.operationId': 'ai.toolCall',
},
description: 'execute_tool getWeather',
@@ -206,18 +230,18 @@ describe('Vercel AI integration', () => {
// First span - no telemetry config, should enable telemetry AND record inputs/outputs when sendDefaultPii: true
expect.objectContaining({
data: {
- 'gen_ai.prompt': '{"prompt":"Where is the first span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the first span?"}]',
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'First span here!',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the first span?"}',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -240,20 +264,21 @@ describe('Vercel AI integration', () => {
// Second span - doGenerate for first call, should also include input/output fields when sendDefaultPii: true
expect.objectContaining({
data: {
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'First span here!',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
+ '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -278,18 +303,18 @@ describe('Vercel AI integration', () => {
// Third span - explicitly enabled telemetry, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: {
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': expect.any(String),
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -312,20 +337,20 @@ describe('Vercel AI integration', () => {
// Fourth span - doGenerate for explicitly enabled telemetry call
expect.objectContaining({
data: {
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': expect.any(String),
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -350,19 +375,19 @@ describe('Vercel AI integration', () => {
// Fifth span - tool call generateText span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
data: {
- 'gen_ai.prompt': '{"prompt":"What is the weather in San Francisco?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'Tool call completed!',
- 'gen_ai.response.tool_calls': expect.any(String),
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"What is the weather in San Francisco?"}',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Tool call completed!',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -385,22 +410,22 @@ describe('Vercel AI integration', () => {
// Sixth span - tool call doGenerate span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
data: {
- 'gen_ai.request.available_tools': EXPECTED_AVAILABLE_TOOLS_JSON,
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'Tool call completed!',
- 'gen_ai.response.tool_calls': expect.any(String),
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'Tool call completed!',
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -426,14 +451,14 @@ describe('Vercel AI integration', () => {
// Seventh span - tool call execution span
expect.objectContaining({
data: {
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.input': expect.any(String),
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.output': expect.any(String),
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_INPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_OUTPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.operationId': 'ai.toolCall',
},
description: 'execute_tool getWeather',
@@ -468,14 +493,14 @@ describe('Vercel AI integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -490,17 +515,17 @@ describe('Vercel AI integration', () => {
}),
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -518,12 +543,12 @@ describe('Vercel AI integration', () => {
}),
expect.objectContaining({
data: {
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.operationId': 'ai.toolCall',
},
description: 'execute_tool getWeather',
@@ -588,14 +613,14 @@ describe('Vercel AI integration', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -610,17 +635,17 @@ describe('Vercel AI integration', () => {
}),
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -638,12 +663,12 @@ describe('Vercel AI integration', () => {
}),
expect.objectContaining({
data: {
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.operationId': 'ai.toolCall',
},
description: 'execute_tool getWeather',
@@ -720,9 +745,9 @@ describe('Vercel AI integration', () => {
origin: 'auto.vercelai.otel',
status: 'ok',
data: expect.objectContaining({
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
- 'gen_ai.operation.name': 'ai.generateText',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
}),
}),
// The doGenerate span - name stays as 'generateText.doGenerate' since model ID is missing
@@ -732,9 +757,9 @@ describe('Vercel AI integration', () => {
origin: 'auto.vercelai.otel',
status: 'ok',
data: expect.objectContaining({
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
}),
}),
]),
@@ -743,4 +768,32 @@ describe('Vercel AI integration', () => {
await createRunner().expect({ transaction: expectedTransaction }).start().completed();
});
});
+
+ createEsmAndCjsTests(
+ __dirname,
+ 'scenario-system-instructions.mjs',
+ 'instrument-with-pii.mjs',
+ (createRunner, test) => {
+ test('extracts system instructions from messages', async () => {
+ await createRunner()
+ .ignore('event')
+ .expect({
+ transaction: {
+ transaction: 'main',
+ spans: expect.arrayContaining([
+ expect.objectContaining({
+ data: expect.objectContaining({
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: JSON.stringify([
+ { type: 'text', content: 'You are a helpful assistant' },
+ ]),
+ }),
+ }),
+ ]),
+ },
+ })
+ .start()
+ .completed();
+ });
+ },
+ );
});
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/v5/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/v5/test.ts
index 179644bbcd73..332f84777264 100644
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/v5/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/v5/test.ts
@@ -1,5 +1,28 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import type { Event } from '@sentry/node';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_PROMPT_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_TOOL_CALL_ID_ATTRIBUTE,
+ GEN_AI_TOOL_INPUT_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_TOOL_OUTPUT_ATTRIBUTE,
+ GEN_AI_TOOL_TYPE_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../../utils/runner';

describe('Vercel AI integration (V5)', () => {
@@ -13,20 +36,20 @@ describe('Vercel AI integration (V5)', () => {
// First span - no telemetry config, should enable telemetry but not record inputs/outputs when sendDefaultPii: false
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -36,26 +59,26 @@ describe('Vercel AI integration (V5)', () => {
// Second span - explicitly enabled telemetry but recordInputs/recordOutputs not set, should not record when sendDefaultPii: false
expect.objectContaining({
data: {
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -65,25 +88,25 @@ describe('Vercel AI integration (V5)', () => {
// Third span - explicit telemetry enabled, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the second span?"}',
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -93,29 +116,29 @@ describe('Vercel AI integration (V5)', () => {
// Fourth span - doGenerate for explicit telemetry enabled call
expect.objectContaining({
data: {
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -125,20 +148,20 @@ describe('Vercel AI integration (V5)', () => {
// Fifth span - tool call generateText span
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.response.finishReason': 'tool-calls',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -148,7 +171,7 @@ describe('Vercel AI integration (V5)', () => {
// Sixth span - tool call doGenerate span
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -158,16 +181,16 @@ describe('Vercel AI integration (V5)', () => {
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -178,12 +201,12 @@ describe('Vercel AI integration (V5)', () => {
expect.objectContaining({
data: {
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
@@ -202,25 +225,25 @@ describe('Vercel AI integration (V5)', () => {
// First span - no telemetry config, should enable telemetry AND record inputs/outputs when sendDefaultPii: true
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the first span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the first span?"}]',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': 'First span here!',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the first span?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the first span?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -230,29 +253,30 @@ describe('Vercel AI integration (V5)', () => {
// Second span - doGenerate for first call, should also include input/output fields when sendDefaultPii: true
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
+ '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'First span here!',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -262,25 +286,25 @@ describe('Vercel AI integration (V5)', () => {
// Third span - explicitly enabled telemetry, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -290,29 +314,29 @@ describe('Vercel AI integration (V5)', () => {
// Fourth span - doGenerate for explicitly enabled telemetry call
expect.objectContaining({
data: {
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -322,25 +346,25 @@ describe('Vercel AI integration (V5)', () => {
// Fifth span - tool call generateText span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"What is the weather in San Francisco?"}',
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
'vercel.ai.response.finishReason': 'tool-calls',
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"What is the weather in San Francisco?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"What is the weather in San Francisco?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -349,34 +373,34 @@ describe('Vercel AI integration (V5)', () => {
}),
// Sixth span - tool call doGenerate span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
- data: {
- 'gen_ai.request.model': 'mock-model-id',
+ data: expect.objectContaining({
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
- 'gen_ai.request.messages.original_length': expect.any(Number),
- 'gen_ai.request.messages': expect.any(String),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: 1,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
'vercel.ai.prompt.toolChoice': expect.any(String),
- 'gen_ai.request.available_tools': EXPECTED_AVAILABLE_TOOLS_JSON,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
'vercel.ai.response.finishReason': 'tool-calls',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.model': 'mock-model-id',
// 'gen_ai.response.text': 'Tool call completed!', // TODO: look into why this is not being set
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
- },
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ }),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
origin: 'auto.vercelai.otel',
@@ -384,17 +408,17 @@ describe('Vercel AI integration (V5)', () => {
}),
// Seventh span - tool call execution span
expect.objectContaining({
- data: {
+ data: expect.objectContaining({
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.input': expect.any(String),
- 'gen_ai.tool.output': expect.any(String),
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
- },
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_INPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_OUTPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ }),
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
origin: 'auto.vercelai.otel',
@@ -446,19 +470,19 @@ describe('Vercel AI integration (V5)', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.response.finishReason': 'tool-calls',
},
description: 'generateText',
@@ -467,7 +491,7 @@ describe('Vercel AI integration (V5)', () => {
}),
expect.objectContaining({
data: {
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -477,16 +501,16 @@ describe('Vercel AI integration (V5)', () => {
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -496,12 +520,12 @@ describe('Vercel AI integration (V5)', () => {
expect.objectContaining({
data: {
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
},
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
index 98a16618d77d..f779eebdf0e3 100644
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
@@ -1,5 +1,27 @@
+import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
import type { Event } from '@sentry/node';
import { afterAll, describe, expect } from 'vitest';
+import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_PROMPT_ATTRIBUTE,
+ GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
+ GEN_AI_RESPONSE_ID_ATTRIBUTE,
+ GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
+ GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
+ GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_TOOL_CALL_ID_ATTRIBUTE,
+ GEN_AI_TOOL_INPUT_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_TOOL_OUTPUT_ATTRIBUTE,
+ GEN_AI_TOOL_TYPE_ATTRIBUTE,
+ GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
+} from '../../../../../../packages/core/src/tracing/ai/gen-ai-attributes';
import { cleanupChildProcesses, createEsmAndCjsTests } from '../../../../utils/runner';

describe('Vercel AI integration (V6)', () => {
@@ -13,7 +35,7 @@ describe('Vercel AI integration (V6)', () => {
// First span - no telemetry config, should enable telemetry but not record inputs/outputs when sendDefaultPii: false
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -21,13 +43,13 @@ describe('Vercel AI integration (V6)', () => {
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -37,27 +59,27 @@ describe('Vercel AI integration (V6)', () => {
// Second span - explicitly enabled telemetry but recordInputs/recordOutputs not set, should not record when sendDefaultPii: false
expect.objectContaining({
data: expect.objectContaining({
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -67,25 +89,25 @@ describe('Vercel AI integration (V6)', () => {
// Third span - explicit telemetry enabled, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the second span?"}',
'vercel.ai.request.headers.user-agent': expect.any(String),
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -95,29 +117,29 @@ describe('Vercel AI integration (V6)', () => {
// Fourth span - doGenerate for explicit telemetry enabled call
expect.objectContaining({
data: expect.objectContaining({
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -127,7 +149,7 @@ describe('Vercel AI integration (V6)', () => {
// Fifth span - tool call generateText span
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
@@ -135,13 +157,13 @@ describe('Vercel AI integration (V6)', () => {
'vercel.ai.response.finishReason': 'tool-calls',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -151,7 +173,7 @@ describe('Vercel AI integration (V6)', () => {
// Sixth span - tool call doGenerate span
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -162,16 +184,16 @@ describe('Vercel AI integration (V6)', () => {
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -182,12 +204,12 @@ describe('Vercel AI integration (V6)', () => {
expect.objectContaining({
data: expect.objectContaining({
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
@@ -206,25 +228,25 @@ describe('Vercel AI integration (V6)', () => {
// First span - no telemetry config, should enable telemetry AND record inputs/outputs when sendDefaultPii: true
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the first span?"}',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the first span?"}]',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': 'First span here!',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the first span?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the first span?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -234,29 +256,30 @@ describe('Vercel AI integration (V6)', () => {
// Second span - doGenerate for first call, should also include input/output fields when sendDefaultPii: true
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.messages': '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
+ '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.model': 'mock-model-id',
- 'gen_ai.response.text': 'First span here!',
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: 'First span here!',
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -266,25 +289,25 @@ describe('Vercel AI integration (V6)', () => {
// Third span - explicitly enabled telemetry, should record inputs/outputs regardless of sendDefaultPii
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"Where is the second span?"}',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.messages': '[{"role":"user","content":"Where is the second span?"}]',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
'vercel.ai.response.finishReason': 'stop',
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"Where is the second span?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.usage.total_tokens': 30,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"Where is the second span?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -294,29 +317,29 @@ describe('Vercel AI integration (V6)', () => {
// Fourth span - doGenerate for explicitly enabled telemetry call
expect.objectContaining({
data: expect.objectContaining({
- 'sentry.origin': 'auto.vercelai.otel',
- 'sentry.op': 'gen_ai.generate_text',
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.settings.maxRetries': 2,
- 'gen_ai.system': 'mock-provider',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.streaming': false,
'vercel.ai.response.finishReason': 'stop',
'vercel.ai.response.model': 'mock-model-id',
'vercel.ai.response.id': expect.any(String),
- 'gen_ai.response.text': expect.any(String),
+ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.request.messages': expect.any(String),
- 'gen_ai.response.finish_reasons': ['stop'],
- 'gen_ai.usage.input_tokens': 10,
- 'gen_ai.usage.output_tokens': 20,
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.total_tokens': 30,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 30,
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -326,25 +349,25 @@ describe('Vercel AI integration (V6)', () => {
// Fifth span - tool call generateText span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.prompt': '{"prompt":"What is the weather in San Francisco?"}',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.messages': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
'vercel.ai.response.finishReason': 'tool-calls',
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.prompt': '{"prompt":"What is the weather in San Francisco?"}',
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_PROMPT_ATTRIBUTE]: '{"prompt":"What is the weather in San Francisco?"}',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generateText',
op: 'gen_ai.invoke_agent',
@@ -354,32 +377,32 @@ describe('Vercel AI integration (V6)', () => {
// Sixth span - tool call doGenerate span (should include prompts when sendDefaultPii: true)
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
'vercel.ai.request.headers.user-agent': expect.any(String),
- 'gen_ai.request.messages': expect.any(String),
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
'vercel.ai.prompt.toolChoice': expect.any(String),
- 'gen_ai.request.available_tools': EXPECTED_AVAILABLE_TOOLS_JSON,
+ [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
'vercel.ai.response.finishReason': 'tool-calls',
'vercel.ai.response.id': expect.any(String),
'vercel.ai.response.model': 'mock-model-id',
// 'gen_ai.response.text': 'Tool call completed!', // TODO: look into why this is not being set
'vercel.ai.response.timestamp': expect.any(String),
- 'gen_ai.response.tool_calls': expect.any(String),
+ [GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE]: expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -390,14 +413,14 @@ describe('Vercel AI integration (V6)', () => {
expect.objectContaining({
data: expect.objectContaining({
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.input': expect.any(String),
- 'gen_ai.tool.output': expect.any(String),
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_INPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_OUTPUT_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
@@ -450,20 +473,20 @@ describe('Vercel AI integration (V6)', () => {
spans: expect.arrayContaining([
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText',
'vercel.ai.pipeline.name': 'generateText',
'vercel.ai.request.headers.user-agent': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText',
- 'sentry.op': 'gen_ai.invoke_agent',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
'vercel.ai.response.finishReason': 'tool-calls',
}),
description: 'generateText',
@@ -472,7 +495,7 @@ describe('Vercel AI integration (V6)', () => {
}),
expect.objectContaining({
data: expect.objectContaining({
- 'gen_ai.request.model': 'mock-model-id',
+ [GEN_AI_REQUEST_MODEL_ATTRIBUTE]: 'mock-model-id',
'vercel.ai.model.provider': 'mock-provider',
'vercel.ai.operationId': 'ai.generateText.doGenerate',
'vercel.ai.pipeline.name': 'generateText.doGenerate',
@@ -483,16 +506,16 @@ describe('Vercel AI integration (V6)', () => {
'vercel.ai.response.timestamp': expect.any(String),
'vercel.ai.settings.maxRetries': 2,
'vercel.ai.streaming': false,
- 'gen_ai.response.finish_reasons': ['tool-calls'],
- 'gen_ai.response.id': expect.any(String),
- 'gen_ai.response.model': 'mock-model-id',
- 'gen_ai.system': 'mock-provider',
- 'gen_ai.usage.input_tokens': 15,
- 'gen_ai.usage.output_tokens': 25,
- 'gen_ai.usage.total_tokens': 40,
- 'gen_ai.operation.name': 'ai.generateText.doGenerate',
- 'sentry.op': 'gen_ai.generate_text',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['tool-calls'],
+ [GEN_AI_RESPONSE_ID_ATTRIBUTE]: expect.any(String),
+ [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
+ [GEN_AI_SYSTEM_ATTRIBUTE]: 'mock-provider',
+ [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 15,
+ [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 25,
+ [GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE]: 40,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'generate_content',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.generate_text',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'generate_text mock-model-id',
op: 'gen_ai.generate_text',
@@ -502,12 +525,12 @@ describe('Vercel AI integration (V6)', () => {
expect.objectContaining({
data: expect.objectContaining({
'vercel.ai.operationId': 'ai.toolCall',
- 'gen_ai.tool.call.id': 'call-1',
- 'gen_ai.tool.name': 'getWeather',
- 'gen_ai.tool.type': 'function',
- 'gen_ai.operation.name': 'ai.toolCall',
- 'sentry.op': 'gen_ai.execute_tool',
- 'sentry.origin': 'auto.vercelai.otel',
+ [GEN_AI_TOOL_CALL_ID_ATTRIBUTE]: 'call-1',
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: 'getWeather',
+ [GEN_AI_TOOL_TYPE_ATTRIBUTE]: 'function',
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.execute_tool',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.vercelai.otel',
}),
description: 'execute_tool getWeather',
op: 'gen_ai.execute_tool',
diff --git a/dev-packages/node-integration-tests/suites/winston/subject.ts b/dev-packages/node-integration-tests/suites/winston/subject.ts
index 1047f2f1cd47..2c9d88456cce 100644
--- a/dev-packages/node-integration-tests/suites/winston/subject.ts
+++ b/dev-packages/node-integration-tests/suites/winston/subject.ts
@@ -9,6 +9,7 @@ Sentry.init({
environment: 'test',
enableLogs: true,
transport: loggingTransport,
+ debug: true,
});
async function run(): Promise<void> {
@@ -64,6 +65,81 @@ async function run(): Promise {
});
}
+ if (process.env.WITH_FILTER === 'true') {
+ const FilteredSentryWinstonTransport = Sentry.createSentryWinstonTransport(Transport, {
+ levels: ['error'],
+ });
+ const filteredLogger = winston.createLogger({
+ transports: [new FilteredSentryWinstonTransport()],
+ });
+
+ filteredLogger.info('Ignored message');
+ filteredLogger.error('Test error message');
+ }
+
+ // If unmapped custom level is requested (tests debug line for unknown levels)
+ if (process.env.UNMAPPED_CUSTOM_LEVEL === 'true') {
+ const customLevels = {
+ levels: {
+ myUnknownLevel: 0,
+ error: 1,
+ },
+ };
+
+ // Create transport WITHOUT customLevelMap for myUnknownLevel
+ // myUnknownLevel will default to 'info', but we only capture 'error'
+ const UnmappedSentryWinstonTransport = Sentry.createSentryWinstonTransport(Transport, {
+ levels: ['error'],
+ });
+
+ const unmappedLogger = winston.createLogger({
+ levels: customLevels.levels,
+ level: 'error',
+ transports: [new UnmappedSentryWinstonTransport()],
+ });
+
+ // This should NOT be captured (unknown level defaults to 'info', which is not in levels)
+ // @ts-ignore - custom levels are not part of the winston logger
+ unmappedLogger.myUnknownLevel('This unknown level message should be skipped');
+ // This SHOULD be captured
+ unmappedLogger.error('This error message should be captured');
+ }
+
+ // If custom level mapping is requested
+ if (process.env.CUSTOM_LEVEL_MAPPING === 'true') {
+ const customLevels = {
+ levels: {
+ customCritical: 0,
+ customWarning: 1,
+ customNotice: 2,
+ },
+ };
+
+ const SentryWinstonTransport = Sentry.createSentryWinstonTransport(Transport, {
+ customLevelMap: {
+ customCritical: 'fatal',
+ customWarning: 'warn',
+ customNotice: 'info',
+ },
+ });
+
+ const mappedLogger = winston.createLogger({
+ levels: customLevels.levels,
+ // https://github.com/winstonjs/winston/issues/1491
+ // when custom levels are set with a transport,
+ // the level must be set on the logger
+ level: 'customNotice',
+ transports: [new SentryWinstonTransport()],
+ });
+
+ // @ts-ignore - custom levels are not part of the winston logger
+ mappedLogger.customCritical('This is a critical message');
+ // @ts-ignore - custom levels are not part of the winston logger
+ mappedLogger.customWarning('This is a warning message');
+ // @ts-ignore - custom levels are not part of the winston logger
+ mappedLogger.customNotice('This is a notice message');
+ }
+
await Sentry.flush();
}
diff --git a/dev-packages/node-integration-tests/suites/winston/test.ts b/dev-packages/node-integration-tests/suites/winston/test.ts
index 777b1149c871..1b359cc20f80 100644
--- a/dev-packages/node-integration-tests/suites/winston/test.ts
+++ b/dev-packages/node-integration-tests/suites/winston/test.ts
@@ -123,6 +123,71 @@ describe('winston integration', () => {
await runner.completed();
});
+ test("should capture winston logs with filter but don't show custom level warnings", async () => {
+ const runner = createRunner(__dirname, 'subject.ts')
+ .withEnv({ WITH_FILTER: 'true' })
+ .expect({
+ log: {
+ items: [
+ {
+ timestamp: expect.any(Number),
+ level: 'info',
+ body: 'Test info message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'error',
+ body: 'Test error message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'error',
+ body: 'Test error message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ ],
+ },
+ })
+ .start();
+
+ await runner.completed();
+
+ const logs = runner.getLogs();
+
+ const warning = logs.find(log => log.includes('Winston log level info is not captured by Sentry.'));
+
+ expect(warning).not.toBeDefined();
+ });
+
test('should capture winston logs with metadata', async () => {
const runner = createRunner(__dirname, 'subject.ts')
.withEnv({ WITH_METADATA: 'true' })
@@ -183,4 +248,162 @@ describe('winston integration', () => {
await runner.completed();
});
+
+ test('should skip unmapped custom levels when not in the levels option', async () => {
+ const runner = createRunner(__dirname, 'subject.ts')
+ .withEnv({ UNMAPPED_CUSTOM_LEVEL: 'true' })
+ .expect({
+ log: {
+ items: [
+ // First, the default logger captures info and error
+ {
+ timestamp: expect.any(Number),
+ level: 'info',
+ body: 'Test info message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'error',
+ body: 'Test error message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ // Then the unmapped logger only captures error (myUnknownLevel defaults to info, which is skipped)
+ {
+ timestamp: expect.any(Number),
+ level: 'error',
+ body: 'This error message should be captured',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ ],
+ },
+ })
+ .start();
+
+ await runner.completed();
+
+ const logs = runner.getLogs();
+
+ const warning = logs.find(log => log.includes('Winston log level myUnknownLevel is not captured by Sentry.'));
+
+ expect(warning).toBeDefined();
+ });
+
+ test('should map custom winston levels to Sentry severity levels', async () => {
+ const runner = createRunner(__dirname, 'subject.ts')
+ .withEnv({ CUSTOM_LEVEL_MAPPING: 'true' })
+ .expect({
+ log: {
+ items: [
+ // First, the default logger captures info and error
+ {
+ timestamp: expect.any(Number),
+ level: 'info',
+ body: 'Test info message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'error',
+ body: 'Test error message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ // Then the mapped logger uses custom level mappings
+ {
+ timestamp: expect.any(Number),
+ level: 'fatal', // 'critical' maps to 'fatal'
+ body: 'This is a critical message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'warn', // 'warning' maps to 'warn'
+ body: 'This is a warning message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ {
+ timestamp: expect.any(Number),
+ level: 'info', // 'notice' maps to 'info'
+ body: 'This is a notice message',
+ severity_number: expect.any(Number),
+ trace_id: expect.any(String),
+ attributes: {
+ 'sentry.origin': { value: 'auto.log.winston', type: 'string' },
+ 'sentry.release': { value: '1.0.0', type: 'string' },
+ 'sentry.environment': { value: 'test', type: 'string' },
+ 'sentry.sdk.name': { value: 'sentry.javascript.node', type: 'string' },
+ 'sentry.sdk.version': { value: expect.any(String), type: 'string' },
+ 'server.address': { value: expect.any(String), type: 'string' },
+ },
+ },
+ ],
+ },
+ })
+ .start();
+
+ await runner.completed();
+ });
});
diff --git a/packages/angular/src/sdk.ts b/packages/angular/src/sdk.ts
index c6cf3b17fcd0..45b2b1fc9759 100755
--- a/packages/angular/src/sdk.ts
+++ b/packages/angular/src/sdk.ts
@@ -12,6 +12,7 @@ import {
import type { Client, Integration } from '@sentry/core';
import {
applySdkMetadata,
+ conversationIdIntegration,
debug,
dedupeIntegration,
functionToStringIntegration,
@@ -36,6 +37,7 @@ export function getDefaultIntegrations(_options: BrowserOptions = {}): Integrati
// eslint-disable-next-line deprecation/deprecation
inboundFiltersIntegration(),
functionToStringIntegration(),
+ conversationIdIntegration(),
breadcrumbsIntegration(),
globalHandlersIntegration(),
linkedErrorsIntegration(),
diff --git a/packages/astro/package.json b/packages/astro/package.json
index e19a6acec0af..3da063aadd4e 100644
--- a/packages/astro/package.json
+++ b/packages/astro/package.json
@@ -59,7 +59,7 @@
"@sentry/browser": "10.36.0",
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
- "@sentry/vite-plugin": "^4.6.2"
+ "@sentry/vite-plugin": "^4.7.0"
},
"devDependencies": {
"astro": "^3.5.0",
diff --git a/packages/astro/src/index.server.ts b/packages/astro/src/index.server.ts
index 28623724db19..7005fcf26b86 100644
--- a/packages/astro/src/index.server.ts
+++ b/packages/astro/src/index.server.ts
@@ -114,6 +114,7 @@ export {
SEMANTIC_ATTRIBUTE_SENTRY_SAMPLE_RATE,
SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
setContext,
+ setConversationId,
setCurrentClient,
setExtra,
setExtras,
diff --git a/packages/astro/src/integration/index.ts b/packages/astro/src/integration/index.ts
index 86f2f3f03bde..a96685ce8033 100644
--- a/packages/astro/src/integration/index.ts
+++ b/packages/astro/src/integration/index.ts
@@ -35,13 +35,21 @@ export const sentryAstro = (options: SentryOptions = {}): AstroIntegration => {
bundleSizeOptimizations,
unstable_sentryVitePluginOptions,
debug,
- ...otherOptions
+ org,
+ project,
+ authToken,
+ sentryUrl,
+ headers,
+ telemetry,
+ silent,
+ errorHandler,
+ ...deprecatedOptions
} = options;
- const otherOptionsKeys = Object.keys(otherOptions);
- if (otherOptionsKeys.length > 0) {
+ const deprecatedOptionsKeys = Object.keys(deprecatedOptions);
+ if (deprecatedOptionsKeys.length > 0) {
logger.warn(
- `You passed in additional options (${otherOptionsKeys.join(
+ `You passed in additional options (${deprecatedOptionsKeys.join(
', ',
)}) to the Sentry integration. This is deprecated and will stop working in a future version. Instead, configure the Sentry SDK in your \`sentry.client.config.(js|ts)\` or \`sentry.server.config.(js|ts)\` files.`,
);
@@ -101,26 +109,26 @@ export const sentryAstro = (options: SentryOptions = {}): AstroIntegration => {
sentryVitePlugin({
// Priority: top-level options > deprecated options > env vars
// eslint-disable-next-line deprecation/deprecation
- org: options.org ?? uploadOptions.org ?? env.SENTRY_ORG,
+ org: org ?? uploadOptions.org ?? env.SENTRY_ORG,
// eslint-disable-next-line deprecation/deprecation
- project: options.project ?? uploadOptions.project ?? env.SENTRY_PROJECT,
+ project: project ?? uploadOptions.project ?? env.SENTRY_PROJECT,
// eslint-disable-next-line deprecation/deprecation
- authToken: options.authToken ?? uploadOptions.authToken ?? env.SENTRY_AUTH_TOKEN,
- url: options.sentryUrl ?? env.SENTRY_URL,
- headers: options.headers,
+ authToken: authToken ?? uploadOptions.authToken ?? env.SENTRY_AUTH_TOKEN,
+ url: sentryUrl ?? env.SENTRY_URL,
+ headers,
// eslint-disable-next-line deprecation/deprecation
- telemetry: options.telemetry ?? uploadOptions.telemetry ?? true,
- silent: options.silent ?? false,
- errorHandler: options.errorHandler,
+ telemetry: telemetry ?? uploadOptions.telemetry ?? true,
+ silent: silent ?? false,
+ errorHandler,
_metaOptions: {
telemetry: {
metaFramework: 'astro',
},
},
...unstableMerged_sentryVitePluginOptions,
- debug: options.debug ?? false,
+ debug: debug ?? false,
sourcemaps: {
- ...options.sourcemaps,
+ ...sourcemaps,
// eslint-disable-next-line deprecation/deprecation
assets: sourcemaps?.assets ?? uploadOptions.assets ?? [getSourcemapsAssetsGlob(config)],
filesToDeleteAfterUpload:
diff --git a/packages/astro/test/integration/index.test.ts b/packages/astro/test/integration/index.test.ts
index abb3f48dcf72..15b04ac041bc 100644
--- a/packages/astro/test/integration/index.test.ts
+++ b/packages/astro/test/integration/index.test.ts
@@ -352,6 +352,7 @@ describe('sentryAstro integration', () => {
it('injects runtime config into client and server init scripts and warns about deprecation', async () => {
const integration = sentryAstro({
+ project: 'my-project',
environment: 'test',
release: '1.0.0',
dsn: 'https://test.sentry.io/123',
diff --git a/packages/aws-serverless/package.json b/packages/aws-serverless/package.json
index 2b4c3f87df1a..8a797981c768 100644
--- a/packages/aws-serverless/package.json
+++ b/packages/aws-serverless/package.json
@@ -66,9 +66,9 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-aws-sdk": "0.65.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-aws-sdk": "0.66.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/node-core": "10.36.0",
diff --git a/packages/aws-serverless/src/index.ts b/packages/aws-serverless/src/index.ts
index fd0a0cb83095..34889236032c 100644
--- a/packages/aws-serverless/src/index.ts
+++ b/packages/aws-serverless/src/index.ts
@@ -25,6 +25,7 @@ export {
Scope,
SDK_VERSION,
setContext,
+ setConversationId,
setExtra,
setExtras,
setTag,
diff --git a/packages/browser/src/profiling/UIProfiler.ts b/packages/browser/src/profiling/UIProfiler.ts
index 89edb4899a6a..932b442a4b6e 100644
--- a/packages/browser/src/profiling/UIProfiler.ts
+++ b/packages/browser/src/profiling/UIProfiler.ts
@@ -401,7 +401,7 @@ export class UIProfiler implements ContinuousProfiler {
...(sdkInfo && { sdk: sdkInfo }),
...(!!tunnel && dsn && { dsn: dsnToString(dsn) }),
},
- [[{ type: 'profile_chunk' }, chunk]],
+ [[{ type: 'profile_chunk', platform: 'javascript' }, chunk]],
);
client.sendEnvelope(envelope).then(null, reason => {
diff --git a/packages/browser/src/sdk.ts b/packages/browser/src/sdk.ts
index 800c1b701352..eeff23fe8f17 100644
--- a/packages/browser/src/sdk.ts
+++ b/packages/browser/src/sdk.ts
@@ -1,5 +1,6 @@
import type { Client, Integration, Options } from '@sentry/core';
import {
+ conversationIdIntegration,
dedupeIntegration,
functionToStringIntegration,
getIntegrationsToSetup,
@@ -31,6 +32,7 @@ export function getDefaultIntegrations(_options: Options): Integration[] {
// eslint-disable-next-line deprecation/deprecation
inboundFiltersIntegration(),
functionToStringIntegration(),
+ conversationIdIntegration(),
browserApiErrorsIntegration(),
breadcrumbsIntegration(),
globalHandlersIntegration(),
diff --git a/packages/browser/test/profiling/UIProfiler.test.ts b/packages/browser/test/profiling/UIProfiler.test.ts
index 7fd583f513d8..b64ee35fc50e 100644
--- a/packages/browser/test/profiling/UIProfiler.test.ts
+++ b/packages/browser/test/profiling/UIProfiler.test.ts
@@ -106,6 +106,7 @@ describe('Browser Profiling v2 trace lifecycle', () => {
const transactionEnvelopeHeader = send.mock.calls?.[0]?.[0]?.[1]?.[0]?.[0];
const profileChunkEnvelopeHeader = send.mock.calls?.[1]?.[0]?.[1]?.[0]?.[0];
expect(profileChunkEnvelopeHeader?.type).toBe('profile_chunk');
+ expect(profileChunkEnvelopeHeader?.platform).toBe('javascript');
expect(transactionEnvelopeHeader?.type).toBe('transaction');
});
@@ -207,6 +208,7 @@ describe('Browser Profiling v2 trace lifecycle', () => {
expect(mockConstructor.mock.calls.length).toBe(2);
const firstChunkHeader = send.mock.calls?.[0]?.[0]?.[1]?.[0]?.[0];
expect(firstChunkHeader?.type).toBe('profile_chunk');
+ expect(firstChunkHeader?.platform).toBe('javascript');
// Second chunk after another 60s
vi.advanceTimersByTime(60_000);
@@ -679,6 +681,7 @@ describe('Browser Profiling v2 manual lifecycle', () => {
expect(send).toHaveBeenCalledTimes(1);
const envelopeHeader = send.mock.calls?.[0]?.[0]?.[1]?.[0]?.[0];
expect(envelopeHeader?.type).toBe('profile_chunk');
+ expect(envelopeHeader?.platform).toBe('javascript');
});
it('calling start and stop while profile session is running prints warnings', async () => {
diff --git a/packages/bun/src/index.ts b/packages/bun/src/index.ts
index 9de1e55dacb6..5f2d628ce983 100644
--- a/packages/bun/src/index.ts
+++ b/packages/bun/src/index.ts
@@ -48,6 +48,7 @@ export {
Scope,
SDK_VERSION,
setContext,
+ setConversationId,
setExtra,
setExtras,
setTag,
diff --git a/packages/cloudflare/src/sdk.ts b/packages/cloudflare/src/sdk.ts
index 238cc13253a5..0211fa7f96a9 100644
--- a/packages/cloudflare/src/sdk.ts
+++ b/packages/cloudflare/src/sdk.ts
@@ -1,6 +1,7 @@
import type { Integration } from '@sentry/core';
import {
consoleIntegration,
+ conversationIdIntegration,
dedupeIntegration,
functionToStringIntegration,
getIntegrationsToSetup,
@@ -30,6 +31,7 @@ export function getDefaultIntegrations(options: CloudflareOptions): Integration[
// eslint-disable-next-line deprecation/deprecation
inboundFiltersIntegration(),
functionToStringIntegration(),
+ conversationIdIntegration(),
linkedErrorsIntegration(),
fetchIntegration(),
honoIntegration(),
diff --git a/packages/core/src/exports.ts b/packages/core/src/exports.ts
index a59e521febc7..d7931565b7ab 100644
--- a/packages/core/src/exports.ts
+++ b/packages/core/src/exports.ts
@@ -111,6 +111,15 @@ export function setUser(user: User | null): void {
getIsolationScope().setUser(user);
}
+/**
+ * Sets the conversation ID for the current isolation scope.
+ *
+ * @param conversationId The conversation ID to set. Pass `null` or `undefined` to unset the conversation ID.
+ */
+export function setConversationId(conversationId: string | null | undefined): void {
+ getIsolationScope().setConversationId(conversationId);
+}
+
/**
* The last error event id of the isolation scope.
*
diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts
index 19a83d230155..30ace1803b1a 100644
--- a/packages/core/src/index.ts
+++ b/packages/core/src/index.ts
@@ -25,6 +25,7 @@ export {
setTag,
setTags,
setUser,
+ setConversationId,
isInitialized,
isEnabled,
startSession,
@@ -120,6 +121,7 @@ export { thirdPartyErrorFilterIntegration } from './integrations/third-party-err
export { consoleIntegration } from './integrations/console';
export { featureFlagsIntegration, type FeatureFlagsIntegration } from './integrations/featureFlags';
export { growthbookIntegration } from './integrations/featureFlags';
+export { conversationIdIntegration } from './integrations/conversationId';
export { profiler } from './profiling';
// eslint thinks the entire function is deprecated (while only one overload is actually deprecated)
diff --git a/packages/core/src/integrations/conversationId.ts b/packages/core/src/integrations/conversationId.ts
new file mode 100644
index 000000000000..c11b587d3a71
--- /dev/null
+++ b/packages/core/src/integrations/conversationId.ts
@@ -0,0 +1,35 @@
+import type { Client } from '../client';
+import { getCurrentScope, getIsolationScope } from '../currentScopes';
+import { defineIntegration } from '../integration';
+import { GEN_AI_CONVERSATION_ID_ATTRIBUTE } from '../semanticAttributes';
+import type { IntegrationFn } from '../types-hoist/integration';
+import type { Span } from '../types-hoist/span';
+
+const INTEGRATION_NAME = 'ConversationId';
+
+const _conversationIdIntegration = (() => {
+ return {
+ name: INTEGRATION_NAME,
+ setup(client: Client) {
+ client.on('spanStart', (span: Span) => {
+ const scopeData = getCurrentScope().getScopeData();
+ const isolationScopeData = getIsolationScope().getScopeData();
+
+ const conversationId = scopeData.conversationId || isolationScopeData.conversationId;
+
+ if (conversationId) {
+ span.setAttribute(GEN_AI_CONVERSATION_ID_ATTRIBUTE, conversationId);
+ }
+ });
+ },
+ };
+}) satisfies IntegrationFn;
+
+/**
+ * Automatically applies conversation ID from scope to spans.
+ *
+ * This integration reads the conversation ID from the current or isolation scope
+ * and applies it to spans when they start. This ensures the conversation ID is
+ * available for all AI-related operations.
+ */
+export const conversationIdIntegration = defineIntegration(_conversationIdIntegration);
diff --git a/packages/core/src/scope.ts b/packages/core/src/scope.ts
index b5a64bb8818a..8f05cf78c16f 100644
--- a/packages/core/src/scope.ts
+++ b/packages/core/src/scope.ts
@@ -51,6 +51,7 @@ export interface ScopeContext {
attributes?: RawAttributes;
fingerprint: string[];
propagationContext: PropagationContext;
+ conversationId?: string;
}
export interface SdkProcessingMetadata {
@@ -85,6 +86,7 @@ export interface ScopeData {
level?: SeverityLevel;
transactionName?: string;
span?: Span;
+ conversationId?: string;
}
/**
@@ -153,6 +155,9 @@ export class Scope {
/** Contains the last event id of a captured event. */
protected _lastEventId?: string;
+ /** Conversation ID */
+ protected _conversationId?: string;
+
// NOTE: Any field which gets added here should get added not only to the constructor but also to the `clone` method.
public constructor() {
@@ -202,6 +207,7 @@ export class Scope {
newScope._propagationContext = { ...this._propagationContext };
newScope._client = this._client;
newScope._lastEventId = this._lastEventId;
+ newScope._conversationId = this._conversationId;
_setSpanForScope(newScope, _getSpanForScope(this));
@@ -284,6 +290,16 @@ export class Scope {
return this._user;
}
+ /**
+ * Set the conversation ID for this scope.
+ * Set to `null` to unset the conversation ID.
+ */
+ public setConversationId(conversationId: string | null | undefined): this {
+ this._conversationId = conversationId || undefined;
+ this._notifyScopeListeners();
+ return this;
+ }
+
/**
* Set an object that will be merged into existing tags on the scope,
* and will be sent as tags data with the event.
@@ -507,6 +523,7 @@ export class Scope {
level,
fingerprint = [],
propagationContext,
+ conversationId,
} = scopeInstance || {};
this._tags = { ...this._tags, ...tags };
@@ -530,6 +547,10 @@ export class Scope {
this._propagationContext = propagationContext;
}
+ if (conversationId) {
+ this._conversationId = conversationId;
+ }
+
return this;
}
@@ -549,6 +570,7 @@ export class Scope {
this._transactionName = undefined;
this._fingerprint = undefined;
this._session = undefined;
+ this._conversationId = undefined;
_setSpanForScope(this, undefined);
this._attachments = [];
this.setPropagationContext({
@@ -641,6 +663,7 @@ export class Scope {
sdkProcessingMetadata: this._sdkProcessingMetadata,
transactionName: this._transactionName,
span: _getSpanForScope(this),
+ conversationId: this._conversationId,
};
}
diff --git a/packages/core/src/semanticAttributes.ts b/packages/core/src/semanticAttributes.ts
index 9b90809c0091..88b0f470dfa3 100644
--- a/packages/core/src/semanticAttributes.ts
+++ b/packages/core/src/semanticAttributes.ts
@@ -77,3 +77,18 @@ export const SEMANTIC_ATTRIBUTE_URL_FULL = 'url.full';
* @see https://develop.sentry.dev/sdk/telemetry/traces/span-links/#link-types
*/
export const SEMANTIC_LINK_ATTRIBUTE_LINK_TYPE = 'sentry.link.type';
+
+/**
+ * =============================================================================
+ * GEN AI ATTRIBUTES
+ * Based on OpenTelemetry Semantic Conventions for Generative AI
+ * @see https://opentelemetry.io/docs/specs/semconv/gen-ai/
+ * =============================================================================
+ */
+
+/**
+ * The conversation ID for linking messages across API calls.
+ * For OpenAI Assistants API: thread_id
+ * For LangGraph: configurable.thread_id
+ */
+export const GEN_AI_CONVERSATION_ID_ATTRIBUTE = 'gen_ai.conversation.id';
diff --git a/packages/core/src/tracing/ai/gen-ai-attributes.ts b/packages/core/src/tracing/ai/gen-ai-attributes.ts
index 7959ee05bcdf..3476bfb3582a 100644
--- a/packages/core/src/tracing/ai/gen-ai-attributes.ts
+++ b/packages/core/src/tracing/ai/gen-ai-attributes.ts
@@ -118,13 +118,20 @@ export const GEN_AI_OPERATION_NAME_ATTRIBUTE = 'gen_ai.operation.name';
/**
 * Original length of messages array, used to indicate truncation had occurred
*/
-export const GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE = 'gen_ai.request.messages.original_length';
+export const GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE = 'sentry.sdk_meta.gen_ai.input.messages.original_length';
/**
* The prompt messages
* Only recorded when recordInputs is enabled
*/
-export const GEN_AI_REQUEST_MESSAGES_ATTRIBUTE = 'gen_ai.request.messages';
+export const GEN_AI_INPUT_MESSAGES_ATTRIBUTE = 'gen_ai.input.messages';
+
+/**
+ * The system instructions extracted from system messages
+ * Only recorded when recordInputs is enabled
+ * According to OpenTelemetry spec: https://opentelemetry.io/docs/specs/semconv/registry/attributes/gen-ai/#gen-ai-system-instructions
+ */
+export const GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE = 'gen_ai.system_instructions';
/**
* The response text
@@ -211,6 +218,12 @@ export const GEN_AI_GENERATE_OBJECT_DO_GENERATE_OPERATION_ATTRIBUTE = 'gen_ai.ge
*/
export const GEN_AI_STREAM_OBJECT_DO_STREAM_OPERATION_ATTRIBUTE = 'gen_ai.stream_object';
+/**
+ * The embeddings input
+ * Only recorded when recordInputs is enabled
+ */
+export const GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE = 'gen_ai.embeddings.input';
+
/**
* The span operation name for embedding
*/
@@ -226,6 +239,31 @@ export const GEN_AI_EMBED_MANY_DO_EMBED_OPERATION_ATTRIBUTE = 'gen_ai.embed_many
*/
export const GEN_AI_EXECUTE_TOOL_OPERATION_ATTRIBUTE = 'gen_ai.execute_tool';
+/**
+ * The tool name for tool call spans
+ */
+export const GEN_AI_TOOL_NAME_ATTRIBUTE = 'gen_ai.tool.name';
+
+/**
+ * The tool call ID
+ */
+export const GEN_AI_TOOL_CALL_ID_ATTRIBUTE = 'gen_ai.tool.call.id';
+
+/**
+ * The tool type (e.g., 'function')
+ */
+export const GEN_AI_TOOL_TYPE_ATTRIBUTE = 'gen_ai.tool.type';
+
+/**
+ * The tool input/arguments
+ */
+export const GEN_AI_TOOL_INPUT_ATTRIBUTE = 'gen_ai.tool.input';
+
+/**
+ * The tool output/result
+ */
+export const GEN_AI_TOOL_OUTPUT_ATTRIBUTE = 'gen_ai.tool.output';
+
// =============================================================================
// OPENAI-SPECIFIC ATTRIBUTES
// =============================================================================
@@ -260,13 +298,12 @@ export const OPENAI_USAGE_PROMPT_TOKENS_ATTRIBUTE = 'openai.usage.prompt_tokens'
// =============================================================================
/**
- * OpenAI API operations
+ * OpenAI API operations following OpenTelemetry semantic conventions
+ * @see https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#llm-request-spans
*/
export const OPENAI_OPERATIONS = {
CHAT: 'chat',
- RESPONSES: 'responses',
EMBEDDINGS: 'embeddings',
- CONVERSATIONS: 'conversations',
} as const;
// =============================================================================
diff --git a/packages/core/src/tracing/ai/messageTruncation.ts b/packages/core/src/tracing/ai/messageTruncation.ts
index 9c8718387404..f5c040892dcf 100644
--- a/packages/core/src/tracing/ai/messageTruncation.ts
+++ b/packages/core/src/tracing/ai/messageTruncation.ts
@@ -294,11 +294,17 @@ function truncatePartsMessage(message: PartsMessage, maxBytes: number): unknown[
* @returns Array containing the truncated message, or empty array if truncation fails
*/
function truncateSingleMessage(message: unknown, maxBytes: number): unknown[] {
- /* c8 ignore start - unreachable */
- if (!message || typeof message !== 'object') {
+ if (!message) return [];
+
+ // Handle plain strings (e.g., embeddings input)
+ if (typeof message === 'string') {
+ const truncated = truncateTextByBytes(message, maxBytes);
+ return truncated ? [truncated] : [];
+ }
+
+ if (typeof message !== 'object') {
return [];
}
- /* c8 ignore stop */
if (isContentMessage(message)) {
return truncateContentMessage(message, maxBytes);
@@ -374,19 +380,19 @@ function stripInlineMediaFromMessages(messages: unknown[]): unknown[] {
* Truncate an array of messages to fit within a byte limit.
*
* Strategy:
- * - Keeps the newest messages (from the end of the array)
- * - Uses O(n) algorithm: precompute sizes once, then find largest suffix under budget
- * - If no complete messages fit, attempts to truncate the newest single message
+ * - Always keeps only the last (newest) message
+ * - Strips inline media from the message
+ * - Truncates the message content if it exceeds the byte limit
*
* @param messages - Array of messages to truncate
- * @param maxBytes - Maximum total byte limit for all messages
- * @returns Truncated array of messages
+ * @param maxBytes - Maximum total byte limit for the message
+ * @returns Array containing only the last message (possibly truncated)
*
* @example
* ```ts
* const messages = [msg1, msg2, msg3, msg4]; // newest is msg4
* const truncated = truncateMessagesByBytes(messages, 10000);
- * // Returns [msg3, msg4] if they fit, or [msg4] if only it fits, etc.
+ * // Returns [msg4] (truncated if needed)
* ```
*/
function truncateMessagesByBytes(messages: unknown[], maxBytes: number): unknown[] {
@@ -395,46 +401,21 @@ function truncateMessagesByBytes(messages: unknown[], maxBytes: number): unknown
return messages;
}
- // strip inline media first. This will often get us below the threshold,
- // while preserving human-readable information about messages sent.
- const stripped = stripInlineMediaFromMessages(messages);
-
- // Fast path: if all messages fit, return as-is
- const totalBytes = jsonBytes(stripped);
- if (totalBytes <= maxBytes) {
- return stripped;
- }
-
- // Precompute each message's JSON size once for efficiency
- const messageSizes = stripped.map(jsonBytes);
+ // Always keep only the last message
+ const lastMessage = messages[messages.length - 1];
- // Find the largest suffix (newest messages) that fits within the budget
- let bytesUsed = 0;
- let startIndex = stripped.length; // Index where the kept suffix starts
+ // Strip inline media from the single message
+ const stripped = stripInlineMediaFromMessages([lastMessage]);
+ const strippedMessage = stripped[0];
- for (let i = stripped.length - 1; i >= 0; i--) {
- const messageSize = messageSizes[i];
-
- if (messageSize && bytesUsed + messageSize > maxBytes) {
- // Adding this message would exceed the budget
- break;
- }
-
- if (messageSize) {
- bytesUsed += messageSize;
- }
- startIndex = i;
- }
-
- // If no complete messages fit, try truncating just the newest message
- if (startIndex === stripped.length) {
- // we're truncating down to one message, so all others dropped.
- const newestMessage = stripped[stripped.length - 1];
- return truncateSingleMessage(newestMessage, maxBytes);
+ // Check if it fits
+ const messageBytes = jsonBytes(strippedMessage);
+ if (messageBytes <= maxBytes) {
+ return stripped;
}
- // Return the suffix that fits
- return stripped.slice(startIndex);
+ // Truncate the single message if needed
+ return truncateSingleMessage(strippedMessage, maxBytes);
}
/**
diff --git a/packages/core/src/tracing/ai/utils.ts b/packages/core/src/tracing/ai/utils.ts
index 4a7a14eea554..8f08b65c6171 100644
--- a/packages/core/src/tracing/ai/utils.ts
+++ b/packages/core/src/tracing/ai/utils.ts
@@ -9,15 +9,21 @@ import {
} from './gen-ai-attributes';
import { truncateGenAiMessages, truncateGenAiStringInput } from './messageTruncation';
/**
- * Maps AI method paths to Sentry operation name
+ * Maps AI method paths to OpenTelemetry semantic convention operation names
+ * @see https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#llm-request-spans
*/
export function getFinalOperationName(methodPath: string): string {
if (methodPath.includes('messages')) {
- return 'messages';
+ return 'chat';
}
if (methodPath.includes('completions')) {
- return 'completions';
+ return 'text_completion';
+ }
+ // Google GenAI: models.generateContent* -> generate_content (actually generates AI responses)
+ if (methodPath.includes('generateContent')) {
+ return 'generate_content';
}
+ // Anthropic: models.get/retrieve -> models (metadata retrieval only)
if (methodPath.includes('models')) {
return 'models';
}
export function getTruncatedJsonString<T>(value: T | T[]): string {
// value is an object, so we need to stringify it
return JSON.stringify(value);
}
+
+/**
+ * Extract system instructions from messages array.
+ * Finds the first system message and formats it according to OpenTelemetry semantic conventions.
+ *
+ * @param messages - Array of messages to extract system instructions from
+ * @returns systemInstructions (JSON string) and filteredMessages (without system message)
+ */
+export function extractSystemInstructions(messages: unknown[] | unknown): {
+ systemInstructions: string | undefined;
+ filteredMessages: unknown[] | unknown;
+} {
+ if (!Array.isArray(messages)) {
+ return { systemInstructions: undefined, filteredMessages: messages };
+ }
+
+ const systemMessageIndex = messages.findIndex(
+ msg => msg && typeof msg === 'object' && 'role' in msg && (msg as { role: string }).role === 'system',
+ );
+
+ if (systemMessageIndex === -1) {
+ return { systemInstructions: undefined, filteredMessages: messages };
+ }
+
+ const systemMessage = messages[systemMessageIndex] as { role: string; content?: string | unknown };
+ const systemContent =
+ typeof systemMessage.content === 'string'
+ ? systemMessage.content
+ : systemMessage.content !== undefined
+ ? JSON.stringify(systemMessage.content)
+ : undefined;
+
+ if (!systemContent) {
+ return { systemInstructions: undefined, filteredMessages: messages };
+ }
+
+ const systemInstructions = JSON.stringify([{ type: 'text', content: systemContent }]);
+ const filteredMessages = [...messages.slice(0, systemMessageIndex), ...messages.slice(systemMessageIndex + 1)];
+
+ return { systemInstructions, filteredMessages };
+}
diff --git a/packages/core/src/tracing/anthropic-ai/utils.ts b/packages/core/src/tracing/anthropic-ai/utils.ts
index f10b3ebe6358..b9cf31b4aeea 100644
--- a/packages/core/src/tracing/anthropic-ai/utils.ts
+++ b/packages/core/src/tracing/anthropic-ai/utils.ts
@@ -2,10 +2,11 @@ import { captureException } from '../../exports';
import { SPAN_STATUS_ERROR } from '../../tracing';
import type { Span } from '../../types-hoist/span';
import {
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
-import { getTruncatedJsonString } from '../ai/utils';
+import { extractSystemInstructions, getTruncatedJsonString } from '../ai/utils';
import { ANTHROPIC_AI_INSTRUMENTED_METHODS } from './constants';
import type { AnthropicAiInstrumentedMethod, AnthropicAiResponse } from './types';
@@ -18,15 +19,26 @@ export function shouldInstrument(methodPath: string): methodPath is AnthropicAiI
/**
* Set the messages and messages original length attributes.
+ * Extracts system instructions before truncation.
*/
export function setMessagesAttribute(span: Span, messages: unknown): void {
- const length = Array.isArray(messages) ? messages.length : undefined;
- if (length !== 0) {
+ if (Array.isArray(messages) && messages.length === 0) {
+ return;
+ }
+
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(messages);
+
+ if (systemInstructions) {
span.setAttributes({
- [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(messages),
- [GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: length,
+ [GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE]: systemInstructions,
});
}
+
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 1;
+ span.setAttributes({
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(filteredMessages),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: filteredLength,
+ });
}
/**
diff --git a/packages/core/src/tracing/google-genai/index.ts b/packages/core/src/tracing/google-genai/index.ts
index 53af7a9632cb..a56985b9b6f6 100644
--- a/packages/core/src/tracing/google-genai/index.ts
+++ b/packages/core/src/tracing/google-genai/index.ts
@@ -6,12 +6,12 @@ import { startSpan, startSpanManual } from '../../tracing/trace';
import type { Span, SpanAttributeValue } from '../../types-hoist/span';
import { handleCallbackErrors } from '../../utils/handleCallbackErrors';
import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_OPERATION_NAME_ATTRIBUTE,
GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
GEN_AI_REQUEST_FREQUENCY_PENALTY_ATTRIBUTE,
GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_REQUEST_MODEL_ATTRIBUTE,
GEN_AI_REQUEST_PRESENCE_PENALTY_ATTRIBUTE,
GEN_AI_REQUEST_TEMPERATURE_ATTRIBUTE,
@@ -21,12 +21,13 @@ import {
GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
import { truncateGenAiMessages } from '../ai/messageTruncation';
-import { buildMethodPath, getFinalOperationName, getSpanOperation } from '../ai/utils';
+import { buildMethodPath, extractSystemInstructions, getFinalOperationName, getSpanOperation } from '../ai/utils';
import { CHAT_PATH, CHATS_CREATE_METHOD, GOOGLE_GENAI_SYSTEM_NAME } from './constants';
import { instrumentStream } from './streaming';
import type {
@@ -167,9 +168,16 @@ function addPrivateRequestAttributes(span: Span, params: Record
}
if (Array.isArray(messages) && messages.length) {
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(messages);
+
+ if (systemInstructions) {
+ span.setAttribute(GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 0;
span.setAttributes({
- [GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: messages.length,
- [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: JSON.stringify(truncateGenAiMessages(messages)),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: filteredLength,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify(truncateGenAiMessages(filteredMessages as unknown[])),
});
}
}
diff --git a/packages/core/src/tracing/langchain/index.ts b/packages/core/src/tracing/langchain/index.ts
index 1930be794be5..8cf12dfcb861 100644
--- a/packages/core/src/tracing/langchain/index.ts
+++ b/packages/core/src/tracing/langchain/index.ts
@@ -3,7 +3,13 @@ import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '
import { SPAN_STATUS_ERROR } from '../../tracing';
import { startSpanManual } from '../../tracing/trace';
import type { Span, SpanAttributeValue } from '../../types-hoist/span';
-import { GEN_AI_OPERATION_NAME_ATTRIBUTE, GEN_AI_REQUEST_MODEL_ATTRIBUTE } from '../ai/gen-ai-attributes';
+import {
+ GEN_AI_OPERATION_NAME_ATTRIBUTE,
+ GEN_AI_REQUEST_MODEL_ATTRIBUTE,
+ GEN_AI_TOOL_INPUT_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_TOOL_OUTPUT_ATTRIBUTE,
+} from '../ai/gen-ai-attributes';
import { LANGCHAIN_ORIGIN } from './constants';
import type {
LangChainCallbackHandler,
@@ -92,10 +98,10 @@ export function createLangChainCallbackHandler(options: LangChainOptions = {}):
startSpanManual(
{
name: `${operationName} ${modelName}`,
- op: 'gen_ai.pipeline',
+ op: 'gen_ai.chat',
attributes: {
...attributes,
- [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.pipeline',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.chat',
},
},
span => {
@@ -241,12 +247,12 @@ export function createLangChainCallbackHandler(options: LangChainOptions = {}):
const toolName = tool.name || 'unknown_tool';
 const attributes: Record<string, SpanAttributeValue> = {
[SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: LANGCHAIN_ORIGIN,
- 'gen_ai.tool.name': toolName,
+ [GEN_AI_TOOL_NAME_ATTRIBUTE]: toolName,
};
// Add input if recordInputs is enabled
if (recordInputs) {
- attributes['gen_ai.tool.input'] = input;
+ attributes[GEN_AI_TOOL_INPUT_ATTRIBUTE] = input;
}
startSpanManual(
@@ -272,7 +278,7 @@ export function createLangChainCallbackHandler(options: LangChainOptions = {}):
// Add output if recordOutputs is enabled
if (recordOutputs) {
span.setAttributes({
- 'gen_ai.tool.output': JSON.stringify(output),
+ [GEN_AI_TOOL_OUTPUT_ATTRIBUTE]: JSON.stringify(output),
});
}
exitSpan(runId);
diff --git a/packages/core/src/tracing/langchain/utils.ts b/packages/core/src/tracing/langchain/utils.ts
index 0a07ae8df370..249025480882 100644
--- a/packages/core/src/tracing/langchain/utils.ts
+++ b/packages/core/src/tracing/langchain/utils.ts
@@ -1,11 +1,11 @@
import { SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '../../semanticAttributes';
import type { SpanAttributeValue } from '../../types-hoist/span';
import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_OPERATION_NAME_ATTRIBUTE,
GEN_AI_REQUEST_FREQUENCY_PENALTY_ATTRIBUTE,
GEN_AI_REQUEST_MAX_TOKENS_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_REQUEST_MODEL_ATTRIBUTE,
GEN_AI_REQUEST_PRESENCE_PENALTY_ATTRIBUTE,
GEN_AI_REQUEST_STREAM_ATTRIBUTE,
@@ -18,6 +18,7 @@ import {
GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
@@ -25,6 +26,7 @@ import {
GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
import { truncateGenAiMessages } from '../ai/messageTruncation';
+import { extractSystemInstructions } from '../ai/utils';
import { LANGCHAIN_ORIGIN, ROLE_MAP } from './constants';
import type { LangChainLLMResult, LangChainMessage, LangChainSerialized } from './types';
@@ -125,12 +127,16 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array<
};
}
- // 2) Then try constructor name (SystemMessage / HumanMessage / ...)
- const ctor = (message as { constructor?: { name?: string } }).constructor?.name;
- if (ctor) {
+ // 2) Serialized LangChain format (lc: 1) - check before constructor name
+ // This is more reliable than constructor.name which can be lost during serialization
+ if (message.lc === 1 && message.kwargs) {
+ const id = message.id;
+ const messageType = Array.isArray(id) && id.length > 0 ? id[id.length - 1] : '';
+ const role = typeof messageType === 'string' ? normalizeRoleNameFromCtor(messageType) : 'user';
+
return {
- role: normalizeMessageRole(normalizeRoleNameFromCtor(ctor)),
- content: asString(message.content),
+ role: normalizeMessageRole(role),
+ content: asString(message.kwargs?.content),
};
}
@@ -143,7 +149,8 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array<
};
}
- // 4) Then objects with `{ role, content }`
+ // 4) Then objects with `{ role, content }` - check before constructor name
+ // Plain objects have constructor.name="Object" which would incorrectly default to "user"
if (message.role) {
return {
role: normalizeMessageRole(String(message.role)),
@@ -151,15 +158,13 @@ export function normalizeLangChainMessages(messages: LangChainMessage[]): Array<
};
}
- // 5) Serialized LangChain format (lc: 1)
- if (message.lc === 1 && message.kwargs) {
- const id = message.id;
- const messageType = Array.isArray(id) && id.length > 0 ? id[id.length - 1] : '';
- const role = typeof messageType === 'string' ? normalizeRoleNameFromCtor(messageType) : 'user';
-
+ // 5) Then try constructor name (SystemMessage / HumanMessage / ...)
+ // Only use this if we haven't matched a more specific case
+ const ctor = (message as { constructor?: { name?: string } }).constructor?.name;
+ if (ctor && ctor !== 'Object') {
return {
- role: normalizeMessageRole(role),
- content: asString(message.kwargs?.content),
+ role: normalizeMessageRole(normalizeRoleNameFromCtor(ctor)),
+ content: asString(message.content),
};
}
@@ -216,18 +221,18 @@ function extractCommonRequestAttributes(
/**
* Small helper to assemble boilerplate attributes shared by both request extractors.
+ * Always uses 'chat' as the operation type for all LLM and chat model operations.
*/
function baseRequestAttributes(
system: unknown,
modelName: unknown,
- operation: 'pipeline' | 'chat',
serialized: LangChainSerialized,
 invocationParams?: Record<string, unknown>,
 langSmithMetadata?: Record<string, unknown>,
 ): Record<string, SpanAttributeValue> {
return {
[GEN_AI_SYSTEM_ATTRIBUTE]: asString(system ?? 'langchain'),
- [GEN_AI_OPERATION_NAME_ATTRIBUTE]: operation,
+ [GEN_AI_OPERATION_NAME_ATTRIBUTE]: 'chat',
[GEN_AI_REQUEST_MODEL_ATTRIBUTE]: asString(modelName),
[SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: LANGCHAIN_ORIGIN,
...extractCommonRequestAttributes(serialized, invocationParams, langSmithMetadata),
@@ -237,7 +242,8 @@ function baseRequestAttributes(
/**
* Extracts attributes for plain LLM invocations (string prompts).
*
- * - Operation is tagged as `pipeline` to distinguish from chat-style invocations.
+ * - Operation is tagged as `chat` following OpenTelemetry semantic conventions.
+ * LangChain LLM operations are treated as chat operations.
* - When `recordInputs` is true, string prompts are wrapped into `{role:"user"}`
* messages to align with the chat schema used elsewhere.
*/
@@ -251,12 +257,12 @@ export function extractLLMRequestAttributes(
const system = langSmithMetadata?.ls_provider;
const modelName = invocationParams?.model ?? langSmithMetadata?.ls_model_name ?? 'unknown';
- const attrs = baseRequestAttributes(system, modelName, 'pipeline', llm, invocationParams, langSmithMetadata);
+ const attrs = baseRequestAttributes(system, modelName, llm, invocationParams, langSmithMetadata);
if (recordInputs && Array.isArray(prompts) && prompts.length > 0) {
- setIfDefined(attrs, GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, prompts.length);
+ setIfDefined(attrs, GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, prompts.length);
const messages = prompts.map(p => ({ role: 'user', content: p }));
- setIfDefined(attrs, GEN_AI_REQUEST_MESSAGES_ATTRIBUTE, asString(messages));
+ setIfDefined(attrs, GEN_AI_INPUT_MESSAGES_ATTRIBUTE, asString(messages));
}
return attrs;
@@ -265,7 +271,8 @@ export function extractLLMRequestAttributes(
/**
* Extracts attributes for ChatModel invocations (array-of-arrays of messages).
*
- * - Operation is tagged as `chat`.
+ * - Operation is tagged as `chat` following OpenTelemetry semantic conventions.
+ * LangChain chat model operations are chat operations.
* - We flatten LangChain's `LangChainMessage[][]` and normalize shapes into a
* consistent `{ role, content }` array when `recordInputs` is true.
* - Provider system value falls back to `serialized.id?.[2]`.
@@ -280,13 +287,22 @@ export function extractChatModelRequestAttributes(
const system = langSmithMetadata?.ls_provider ?? llm.id?.[2];
const modelName = invocationParams?.model ?? langSmithMetadata?.ls_model_name ?? 'unknown';
- const attrs = baseRequestAttributes(system, modelName, 'chat', llm, invocationParams, langSmithMetadata);
+ const attrs = baseRequestAttributes(system, modelName, llm, invocationParams, langSmithMetadata);
if (recordInputs && Array.isArray(langChainMessages) && langChainMessages.length > 0) {
const normalized = normalizeLangChainMessages(langChainMessages.flat());
- setIfDefined(attrs, GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, normalized.length);
- const truncated = truncateGenAiMessages(normalized);
- setIfDefined(attrs, GEN_AI_REQUEST_MESSAGES_ATTRIBUTE, asString(truncated));
+
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(normalized);
+
+ if (systemInstructions) {
+ setIfDefined(attrs, GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 0;
+ setIfDefined(attrs, GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, filteredLength);
+
+ const truncated = truncateGenAiMessages(filteredMessages as unknown[]);
+ setIfDefined(attrs, GEN_AI_INPUT_MESSAGES_ATTRIBUTE, asString(truncated));
}
return attrs;
diff --git a/packages/core/src/tracing/langgraph/index.ts b/packages/core/src/tracing/langgraph/index.ts
index c0800e05e6da..6a9c39a7ddda 100644
--- a/packages/core/src/tracing/langgraph/index.ts
+++ b/packages/core/src/tracing/langgraph/index.ts
@@ -4,14 +4,16 @@ import { SPAN_STATUS_ERROR } from '../../tracing';
import {
GEN_AI_AGENT_NAME_ATTRIBUTE,
GEN_AI_CONVERSATION_ID_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_INVOKE_AGENT_OPERATION_ATTRIBUTE,
GEN_AI_OPERATION_NAME_ATTRIBUTE,
GEN_AI_PIPELINE_NAME_ATTRIBUTE,
GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
import { truncateGenAiMessages } from '../ai/messageTruncation';
+import { extractSystemInstructions } from '../ai/utils';
import type { LangChainMessage } from '../langchain/types';
import { normalizeLangChainMessages } from '../langchain/utils';
import { startSpan } from '../trace';
@@ -138,10 +140,17 @@ function instrumentCompiledGraphInvoke(
if (inputMessages && recordInputs) {
const normalizedMessages = normalizeLangChainMessages(inputMessages);
- const truncatedMessages = truncateGenAiMessages(normalizedMessages);
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(normalizedMessages);
+
+ if (systemInstructions) {
+ span.setAttribute(GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const truncatedMessages = truncateGenAiMessages(filteredMessages as unknown[]);
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 0;
span.setAttributes({
- [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: JSON.stringify(truncatedMessages),
- [GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: normalizedMessages.length,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: JSON.stringify(truncatedMessages),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: filteredLength,
});
}
diff --git a/packages/core/src/tracing/openai/index.ts b/packages/core/src/tracing/openai/index.ts
index 6789f5fca3ce..b0d26f92c36c 100644
--- a/packages/core/src/tracing/openai/index.ts
+++ b/packages/core/src/tracing/openai/index.ts
@@ -5,15 +5,18 @@ import { SPAN_STATUS_ERROR } from '../../tracing';
import { startSpan, startSpanManual } from '../../tracing/trace';
import type { Span, SpanAttributeValue } from '../../types-hoist/span';
import {
+ GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_OPERATION_NAME_ATTRIBUTE,
GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_REQUEST_MODEL_ATTRIBUTE,
GEN_AI_RESPONSE_TEXT_ATTRIBUTE,
GEN_AI_SYSTEM_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
+ OPENAI_OPERATIONS,
} from '../ai/gen-ai-attributes';
-import { getTruncatedJsonString } from '../ai/utils';
+import { extractSystemInstructions, getTruncatedJsonString } from '../ai/utils';
import { instrumentStream } from './streaming';
import type {
ChatCompletionChunk,
@@ -107,16 +110,54 @@ function addResponseAttributes(span: Span, result: unknown, recordOutputs?: bool
}
// Extract and record AI request inputs, if present. This is intentionally separate from response attributes.
-function addRequestAttributes(span: Span, params: Record<string, unknown>): void {
- const src = 'input' in params ? params.input : 'messages' in params ? params.messages : undefined;
- // typically an array, but can be other types. skip if an empty array.
- const length = Array.isArray(src) ? src.length : undefined;
- if (src && length !== 0) {
- const truncatedInput = getTruncatedJsonString(src);
- span.setAttribute(GEN_AI_REQUEST_MESSAGES_ATTRIBUTE, truncatedInput);
- if (length) {
- span.setAttribute(GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, length);
+function addRequestAttributes(span: Span, params: Record<string, unknown>, operationName: string): void {
+ // Store embeddings input on a separate attribute and do not truncate it
+ if (operationName === OPENAI_OPERATIONS.EMBEDDINGS && 'input' in params) {
+ const input = params.input;
+
+ // No input provided
+ if (input == null) {
+ return;
+ }
+
+ // Empty input string
+ if (typeof input === 'string' && input.length === 0) {
+ return;
+ }
+
+ // Empty array input
+ if (Array.isArray(input) && input.length === 0) {
+ return;
}
+
+ // Store strings as-is, arrays/objects as JSON
+ span.setAttribute(GEN_AI_EMBEDDINGS_INPUT_ATTRIBUTE, typeof input === 'string' ? input : JSON.stringify(input));
+ return;
+ }
+
+ const src = 'input' in params ? params.input : 'messages' in params ? params.messages : undefined;
+
+ if (!src) {
+ return;
+ }
+
+ if (Array.isArray(src) && src.length === 0) {
+ return;
+ }
+
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(src);
+
+ if (systemInstructions) {
+ span.setAttribute(GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const truncatedInput = getTruncatedJsonString(filteredMessages);
+ span.setAttribute(GEN_AI_INPUT_MESSAGES_ATTRIBUTE, truncatedInput);
+
+ if (Array.isArray(filteredMessages)) {
+ span.setAttribute(GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, filteredMessages.length);
+ } else {
+ span.setAttribute(GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE, 1);
}
}
@@ -150,7 +191,7 @@ function instrumentMethod(
async (span: Span) => {
try {
if (options.recordInputs && params) {
- addRequestAttributes(span, params);
+ addRequestAttributes(span, params, operationName);
}
const result = await originalMethod.apply(context, args);
@@ -189,7 +230,7 @@ function instrumentMethod(
async (span: Span) => {
try {
if (options.recordInputs && params) {
- addRequestAttributes(span, params);
+ addRequestAttributes(span, params, operationName);
}
const result = await originalMethod.apply(context, args);
diff --git a/packages/core/src/tracing/openai/utils.ts b/packages/core/src/tracing/openai/utils.ts
index 007dd93a91b1..82494f7ae018 100644
--- a/packages/core/src/tracing/openai/utils.ts
+++ b/packages/core/src/tracing/openai/utils.ts
@@ -35,20 +35,21 @@ import type {
} from './types';
/**
- * Maps OpenAI method paths to Sentry operation names
+ * Maps OpenAI method paths to OpenTelemetry semantic convention operation names
+ * @see https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#llm-request-spans
*/
export function getOperationName(methodPath: string): string {
if (methodPath.includes('chat.completions')) {
return OPENAI_OPERATIONS.CHAT;
}
if (methodPath.includes('responses')) {
- return OPENAI_OPERATIONS.RESPONSES;
+ return OPENAI_OPERATIONS.CHAT;
}
if (methodPath.includes('embeddings')) {
return OPENAI_OPERATIONS.EMBEDDINGS;
}
if (methodPath.includes('conversations')) {
- return OPENAI_OPERATIONS.CONVERSATIONS;
+ return OPENAI_OPERATIONS.CHAT;
}
return methodPath.split('.').pop() || 'unknown';
}
diff --git a/packages/core/src/tracing/vercel-ai/constants.ts b/packages/core/src/tracing/vercel-ai/constants.ts
index fe307b03e7fb..57e8bf2a57c8 100644
--- a/packages/core/src/tracing/vercel-ai/constants.ts
+++ b/packages/core/src/tracing/vercel-ai/constants.ts
@@ -3,3 +3,22 @@ import type { Span } from '../../types-hoist/span';
// Global Map to track tool call IDs to their corresponding spans
// This allows us to capture tool errors and link them to the correct span
export const toolCallSpanMap = new Map<string, Span>();
+
+// Operation sets for efficient mapping to OpenTelemetry semantic convention values
+export const INVOKE_AGENT_OPS = new Set([
+ 'ai.generateText',
+ 'ai.streamText',
+ 'ai.generateObject',
+ 'ai.streamObject',
+ 'ai.embed',
+ 'ai.embedMany',
+]);
+
+export const GENERATE_CONTENT_OPS = new Set([
+ 'ai.generateText.doGenerate',
+ 'ai.streamText.doStream',
+ 'ai.generateObject.doGenerate',
+ 'ai.streamObject.doStream',
+]);
+
+export const EMBEDDINGS_OPS = new Set(['ai.embed.doEmbed', 'ai.embedMany.doEmbed']);
diff --git a/packages/core/src/tracing/vercel-ai/index.ts b/packages/core/src/tracing/vercel-ai/index.ts
index 9b95e8aa91ad..ad6ca1256004 100644
--- a/packages/core/src/tracing/vercel-ai/index.ts
+++ b/packages/core/src/tracing/vercel-ai/index.ts
@@ -4,16 +4,22 @@ import type { Event } from '../../types-hoist/event';
import type { Span, SpanAttributes, SpanAttributeValue, SpanJSON, SpanOrigin } from '../../types-hoist/span';
import { spanToJSON } from '../../utils/spanUtils';
import {
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
GEN_AI_OPERATION_NAME_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
GEN_AI_REQUEST_MODEL_ATTRIBUTE,
GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
+ GEN_AI_TOOL_CALL_ID_ATTRIBUTE,
+ GEN_AI_TOOL_INPUT_ATTRIBUTE,
+ GEN_AI_TOOL_NAME_ATTRIBUTE,
+ GEN_AI_TOOL_OUTPUT_ATTRIBUTE,
+ GEN_AI_TOOL_TYPE_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_CACHED_ATTRIBUTE,
GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
+ GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
-import { toolCallSpanMap } from './constants';
+import { EMBEDDINGS_OPS, GENERATE_CONTENT_OPS, INVOKE_AGENT_OPS, toolCallSpanMap } from './constants';
import type { TokenSummary } from './types';
import {
accumulateTokensForParent,
@@ -48,6 +54,29 @@ function addOriginToSpan(span: Span, origin: SpanOrigin): void {
span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, origin);
}
+/**
+ * Maps Vercel AI SDK operation names to OpenTelemetry semantic convention values
+ * @see https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#llm-request-spans
+ */
+function mapVercelAiOperationName(operationName: string): string {
+ // Top-level pipeline operations map to invoke_agent
+ if (INVOKE_AGENT_OPS.has(operationName)) {
+ return 'invoke_agent';
+ }
+ // .do* operations are the actual LLM calls
+ if (GENERATE_CONTENT_OPS.has(operationName)) {
+ return 'generate_content';
+ }
+ if (EMBEDDINGS_OPS.has(operationName)) {
+ return 'embeddings';
+ }
+ if (operationName === 'ai.toolCall') {
+ return 'execute_tool';
+ }
+ // Return the original value for unknown operations
+ return operationName;
+}
+
/**
* Post-process spans emitted by the Vercel AI SDK.
* This is supposed to be used in `client.on('spanStart', ...)
@@ -133,7 +162,7 @@ function processEndedVercelAiSpan(span: SpanJSON): void {
typeof attributes[GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE] === 'number' &&
typeof attributes[GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE] === 'number'
) {
- attributes['gen_ai.usage.total_tokens'] =
+ attributes[GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE] =
attributes[GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE] + attributes[GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE];
}
@@ -145,15 +174,21 @@ function processEndedVercelAiSpan(span: SpanJSON): void {
}
// Rename AI SDK attributes to standardized gen_ai attributes
- renameAttributeKey(attributes, OPERATION_NAME_ATTRIBUTE, GEN_AI_OPERATION_NAME_ATTRIBUTE);
- renameAttributeKey(attributes, AI_PROMPT_MESSAGES_ATTRIBUTE, GEN_AI_REQUEST_MESSAGES_ATTRIBUTE);
+ // Map operation.name to OpenTelemetry semantic convention values
+ if (attributes[OPERATION_NAME_ATTRIBUTE]) {
+ const operationName = mapVercelAiOperationName(attributes[OPERATION_NAME_ATTRIBUTE] as string);
+ attributes[GEN_AI_OPERATION_NAME_ATTRIBUTE] = operationName;
+ // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
+ delete attributes[OPERATION_NAME_ATTRIBUTE];
+ }
+ renameAttributeKey(attributes, AI_PROMPT_MESSAGES_ATTRIBUTE, GEN_AI_INPUT_MESSAGES_ATTRIBUTE);
renameAttributeKey(attributes, AI_RESPONSE_TEXT_ATTRIBUTE, 'gen_ai.response.text');
renameAttributeKey(attributes, AI_RESPONSE_TOOL_CALLS_ATTRIBUTE, 'gen_ai.response.tool_calls');
renameAttributeKey(attributes, AI_RESPONSE_OBJECT_ATTRIBUTE, 'gen_ai.response.object');
renameAttributeKey(attributes, AI_PROMPT_TOOLS_ATTRIBUTE, 'gen_ai.request.available_tools');
- renameAttributeKey(attributes, AI_TOOL_CALL_ARGS_ATTRIBUTE, 'gen_ai.tool.input');
- renameAttributeKey(attributes, AI_TOOL_CALL_RESULT_ATTRIBUTE, 'gen_ai.tool.output');
+ renameAttributeKey(attributes, AI_TOOL_CALL_ARGS_ATTRIBUTE, GEN_AI_TOOL_INPUT_ATTRIBUTE);
+ renameAttributeKey(attributes, AI_TOOL_CALL_RESULT_ATTRIBUTE, GEN_AI_TOOL_OUTPUT_ATTRIBUTE);
renameAttributeKey(attributes, AI_SCHEMA_ATTRIBUTE, 'gen_ai.request.schema');
renameAttributeKey(attributes, AI_MODEL_ID_ATTRIBUTE, GEN_AI_REQUEST_MODEL_ATTRIBUTE);
@@ -183,22 +218,23 @@ function renameAttributeKey(attributes: Record, oldKey: string,
function processToolCallSpan(span: Span, attributes: SpanAttributes): void {
addOriginToSpan(span, 'auto.vercelai.otel');
span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.execute_tool');
- renameAttributeKey(attributes, AI_TOOL_CALL_NAME_ATTRIBUTE, 'gen_ai.tool.name');
- renameAttributeKey(attributes, AI_TOOL_CALL_ID_ATTRIBUTE, 'gen_ai.tool.call.id');
+ span.setAttribute(GEN_AI_OPERATION_NAME_ATTRIBUTE, 'execute_tool');
+ renameAttributeKey(attributes, AI_TOOL_CALL_NAME_ATTRIBUTE, GEN_AI_TOOL_NAME_ATTRIBUTE);
+ renameAttributeKey(attributes, AI_TOOL_CALL_ID_ATTRIBUTE, GEN_AI_TOOL_CALL_ID_ATTRIBUTE);
// Store the span in our global map using the tool call ID
// This allows us to capture tool errors and link them to the correct span
- const toolCallId = attributes['gen_ai.tool.call.id'];
+ const toolCallId = attributes[GEN_AI_TOOL_CALL_ID_ATTRIBUTE];
if (typeof toolCallId === 'string') {
toolCallSpanMap.set(toolCallId, span);
}
// https://opentelemetry.io/docs/specs/semconv/registry/attributes/gen-ai/#gen-ai-tool-type
- if (!attributes['gen_ai.tool.type']) {
- span.setAttribute('gen_ai.tool.type', 'function');
+ if (!attributes[GEN_AI_TOOL_TYPE_ATTRIBUTE]) {
+ span.setAttribute(GEN_AI_TOOL_TYPE_ATTRIBUTE, 'function');
}
- const toolName = attributes['gen_ai.tool.name'];
+ const toolName = attributes[GEN_AI_TOOL_NAME_ATTRIBUTE];
if (toolName) {
span.updateName(`execute_tool ${toolName}`);
}
diff --git a/packages/core/src/tracing/vercel-ai/utils.ts b/packages/core/src/tracing/vercel-ai/utils.ts
index 05dcc1f43817..2a0878f1e591 100644
--- a/packages/core/src/tracing/vercel-ai/utils.ts
+++ b/packages/core/src/tracing/vercel-ai/utils.ts
@@ -6,15 +6,16 @@ import {
GEN_AI_EXECUTE_TOOL_OPERATION_ATTRIBUTE,
GEN_AI_GENERATE_OBJECT_DO_GENERATE_OPERATION_ATTRIBUTE,
GEN_AI_GENERATE_TEXT_DO_GENERATE_OPERATION_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
+ GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_INVOKE_AGENT_OPERATION_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ATTRIBUTE,
- GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE,
GEN_AI_STREAM_OBJECT_DO_STREAM_OPERATION_ATTRIBUTE,
GEN_AI_STREAM_TEXT_DO_STREAM_OPERATION_ATTRIBUTE,
+ GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE,
GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
} from '../ai/gen-ai-attributes';
-import { getTruncatedJsonString } from '../ai/utils';
+import { extractSystemInstructions, getTruncatedJsonString } from '../ai/utils';
import { toolCallSpanMap } from './constants';
import type { TokenSummary } from './types';
import { AI_PROMPT_ATTRIBUTE, AI_PROMPT_MESSAGES_ATTRIBUTE } from './vercel-ai-attributes';
@@ -139,24 +140,38 @@ export function requestMessagesFromPrompt(span: Span, attributes: SpanAttributes
const prompt = attributes[AI_PROMPT_ATTRIBUTE];
if (
typeof prompt === 'string' &&
- !attributes[GEN_AI_REQUEST_MESSAGES_ATTRIBUTE] &&
+ !attributes[GEN_AI_INPUT_MESSAGES_ATTRIBUTE] &&
!attributes[AI_PROMPT_MESSAGES_ATTRIBUTE]
) {
const messages = convertPromptToMessages(prompt);
if (messages.length) {
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(messages);
+
+ if (systemInstructions) {
+ span.setAttribute(GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 0;
span.setAttributes({
- [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(messages),
- [GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: messages.length,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(filteredMessages),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: filteredLength,
});
}
} else if (typeof attributes[AI_PROMPT_MESSAGES_ATTRIBUTE] === 'string') {
try {
const messages = JSON.parse(attributes[AI_PROMPT_MESSAGES_ATTRIBUTE]);
if (Array.isArray(messages)) {
+ const { systemInstructions, filteredMessages } = extractSystemInstructions(messages);
+
+ if (systemInstructions) {
+ span.setAttribute(GEN_AI_SYSTEM_INSTRUCTIONS_ATTRIBUTE, systemInstructions);
+ }
+
+ const filteredLength = Array.isArray(filteredMessages) ? filteredMessages.length : 0;
span.setAttributes({
[AI_PROMPT_MESSAGES_ATTRIBUTE]: undefined,
- [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(messages),
- [GEN_AI_REQUEST_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: messages.length,
+ [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: getTruncatedJsonString(filteredMessages),
+ [GEN_AI_INPUT_MESSAGES_ORIGINAL_LENGTH_ATTRIBUTE]: filteredLength,
});
}
// eslint-disable-next-line no-empty
diff --git a/packages/core/src/types-hoist/clientreport.ts b/packages/core/src/types-hoist/clientreport.ts
index cba53867827e..4c07f1956014 100644
--- a/packages/core/src/types-hoist/clientreport.ts
+++ b/packages/core/src/types-hoist/clientreport.ts
@@ -10,7 +10,8 @@ export type EventDropReason =
| 'send_error'
| 'internal_sdk_error'
| 'buffer_overflow'
- | 'ignored';
+ | 'ignored'
+ | 'invalid';
export type Outcome = {
reason: EventDropReason;
diff --git a/packages/core/test/lib/integrations/conversationId.test.ts b/packages/core/test/lib/integrations/conversationId.test.ts
new file mode 100644
index 000000000000..e9ea9cc50d45
--- /dev/null
+++ b/packages/core/test/lib/integrations/conversationId.test.ts
@@ -0,0 +1,98 @@
+import { afterEach, beforeEach, describe, expect, it } from 'vitest';
+import { getCurrentScope, getIsolationScope, setCurrentClient, startSpan } from '../../../src';
+import { conversationIdIntegration } from '../../../src/integrations/conversationId';
+import { GEN_AI_CONVERSATION_ID_ATTRIBUTE } from '../../../src/semanticAttributes';
+import { spanToJSON } from '../../../src/utils/spanUtils';
+import { getDefaultTestClientOptions, TestClient } from '../../mocks/client';
+
+describe('ConversationId', () => {
+ beforeEach(() => {
+ const testClient = new TestClient(
+ getDefaultTestClientOptions({
+ tracesSampleRate: 1,
+ }),
+ );
+ setCurrentClient(testClient);
+ testClient.init();
+ testClient.addIntegration(conversationIdIntegration());
+ });
+
+ afterEach(() => {
+ getCurrentScope().setClient(undefined);
+ getCurrentScope().setConversationId(null);
+ getIsolationScope().setConversationId(null);
+ });
+
+ it('applies conversation ID from current scope to span', () => {
+ getCurrentScope().setConversationId('conv_test_123');
+
+ startSpan({ name: 'test-span' }, span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBe('conv_test_123');
+ });
+ });
+
+ it('applies conversation ID from isolation scope when current scope does not have one', () => {
+ getIsolationScope().setConversationId('conv_isolation_456');
+
+ startSpan({ name: 'test-span' }, span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBe('conv_isolation_456');
+ });
+ });
+
+ it('prefers current scope over isolation scope', () => {
+ getCurrentScope().setConversationId('conv_current_789');
+ getIsolationScope().setConversationId('conv_isolation_999');
+
+ startSpan({ name: 'test-span' }, span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBe('conv_current_789');
+ });
+ });
+
+ it('does not apply conversation ID when not set in scope', () => {
+ startSpan({ name: 'test-span' }, span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBeUndefined();
+ });
+ });
+
+ it('does not apply conversation ID after it is unset with null', () => {
+ getCurrentScope().setConversationId('conv_test_123');
+ getCurrentScope().setConversationId(null);
+
+ startSpan({ name: 'test-span' }, span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBeUndefined();
+ });
+ });
+
+ it('applies conversation ID to nested spans', () => {
+ getCurrentScope().setConversationId('conv_nested_abc');
+
+ startSpan({ name: 'parent-span' }, () => {
+ startSpan({ name: 'child-span' }, childSpan => {
+ const childJSON = spanToJSON(childSpan);
+ expect(childJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBe('conv_nested_abc');
+ });
+ });
+ });
+
+ it('scope conversation ID overrides explicitly set attribute', () => {
+ getCurrentScope().setConversationId('conv_from_scope');
+
+ startSpan(
+ {
+ name: 'test-span',
+ attributes: {
+ [GEN_AI_CONVERSATION_ID_ATTRIBUTE]: 'conv_explicit',
+ },
+ },
+ span => {
+ const spanJSON = spanToJSON(span);
+ expect(spanJSON.data[GEN_AI_CONVERSATION_ID_ATTRIBUTE]).toBe('conv_from_scope');
+ },
+ );
+ });
+});
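The precedence these tests exercise (current scope wins, isolation scope is the fallback, `null` clears the value) reduces to a nullish coalesce. A minimal standalone sketch, with the scopes reduced to plain holders — names here are illustrative, not the SDK's API:

```typescript
// Minimal stand-in for the lookup the integration performs when a span starts.
// `conversationId` is undefined when never set or after being cleared with null.
type ScopeLike = { conversationId?: string };

// Current scope takes precedence; isolation scope is only consulted as a fallback.
function resolveConversationId(current: ScopeLike, isolation: ScopeLike): string | undefined {
  return current.conversationId ?? isolation.conversationId;
}
```

If neither scope carries an ID, the result is `undefined` and no attribute is applied, matching the "not set in scope" test above.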
diff --git a/packages/core/test/lib/scope.test.ts b/packages/core/test/lib/scope.test.ts
index f1e5c58550be..11fc4cb62fff 100644
--- a/packages/core/test/lib/scope.test.ts
+++ b/packages/core/test/lib/scope.test.ts
@@ -1011,6 +1011,63 @@ describe('Scope', () => {
});
});
+ describe('setConversationId() / getScopeData()', () => {
+ test('sets and gets conversation ID via getScopeData', () => {
+ const scope = new Scope();
+ scope.setConversationId('conv_abc123');
+ expect(scope.getScopeData().conversationId).toEqual('conv_abc123');
+ });
+
+ test('unsets conversation ID with null or undefined', () => {
+ const scope = new Scope();
+ scope.setConversationId('conv_abc123');
+ scope.setConversationId(null);
+ expect(scope.getScopeData().conversationId).toBeUndefined();
+
+ scope.setConversationId('conv_abc123');
+ scope.setConversationId(undefined);
+ expect(scope.getScopeData().conversationId).toBeUndefined();
+ });
+
+ test('clones conversation ID to new scope', () => {
+ const scope = new Scope();
+ scope.setConversationId('conv_clone123');
+ const clonedScope = scope.clone();
+ expect(clonedScope.getScopeData().conversationId).toEqual('conv_clone123');
+ });
+
+ test('notifies scope listeners when conversation ID is set', () => {
+ const scope = new Scope();
+ const listener = vi.fn();
+ scope.addScopeListener(listener);
+ scope.setConversationId('conv_listener');
+ expect(listener).toHaveBeenCalledWith(scope);
+ });
+
+ test('clears conversation ID when scope is cleared', () => {
+ const scope = new Scope();
+ scope.setConversationId('conv_to_clear');
+ expect(scope.getScopeData().conversationId).toEqual('conv_to_clear');
+ scope.clear();
+ expect(scope.getScopeData().conversationId).toBeUndefined();
+ });
+
+ test('updates conversation ID when scope is updated with ScopeContext', () => {
+ const scope = new Scope();
+ scope.setConversationId('conv_old');
+ scope.update({ conversationId: 'conv_updated' });
+ expect(scope.getScopeData().conversationId).toEqual('conv_updated');
+ });
+
+ test('updates conversation ID when scope is updated with another Scope', () => {
+ const scope1 = new Scope();
+ const scope2 = new Scope();
+ scope2.setConversationId('conv_from_scope2');
+ scope1.update(scope2);
+ expect(scope1.getScopeData().conversationId).toEqual('conv_from_scope2');
+ });
+ });
+
describe('addBreadcrumb()', () => {
test('adds a breadcrumb', () => {
const scope = new Scope();
diff --git a/packages/core/test/lib/tracing/ai-message-truncation.test.ts b/packages/core/test/lib/tracing/ai-message-truncation.test.ts
index 968cd2308bb7..8a8cefaffa5b 100644
--- a/packages/core/test/lib/tracing/ai-message-truncation.test.ts
+++ b/packages/core/test/lib/tracing/ai-message-truncation.test.ts
@@ -96,33 +96,8 @@ describe('message truncation utilities', () => {
// original messages objects must not be mutated
expect(JSON.stringify(messages, null, 2)).toBe(messagesJson);
+ // only the last message should be kept (with media stripped)
expect(result).toStrictEqual([
- {
- role: 'user',
- content: [
- {
- type: 'image',
- source: {
- type: 'base64',
- media_type: 'image/png',
- data: removed,
- },
- },
- ],
- },
- {
- role: 'user',
- content: {
- image_url: removed,
- },
- },
- {
- role: 'agent',
- type: 'image',
- content: {
- b64_json: removed,
- },
- },
{
role: 'system',
inlineData: {
@@ -177,39 +152,35 @@ describe('message truncation utilities', () => {
const giant = 'this is a long string '.repeat(1_000);
const big = 'this is a long string '.repeat(100);
- it('drops older messages to fit in the limit', () => {
+ it('keeps only the last message without truncation when it fits the limit', () => {
+ // Multiple messages that together exceed 20KB, but last message is small
const messages = [
- `0 ${giant}`,
- { type: 'text', content: `1 ${big}` },
- { type: 'text', content: `2 ${big}` },
- { type: 'text', content: `3 ${giant}` },
- { type: 'text', content: `4 ${big}` },
- `5 ${big}`,
- { type: 'text', content: `6 ${big}` },
- { type: 'text', content: `7 ${big}` },
- { type: 'text', content: `8 ${big}` },
- { type: 'text', content: `9 ${big}` },
- { type: 'text', content: `10 ${big}` },
- { type: 'text', content: `11 ${big}` },
- { type: 'text', content: `12 ${big}` },
+ { content: `1 ${humongous}` },
+ { content: `2 ${humongous}` },
+ { content: `3 ${big}` }, // last message - small enough to fit
];
- const messagesJson = JSON.stringify(messages, null, 2);
const result = truncateGenAiMessages(messages);
- // should not mutate original messages list
- expect(JSON.stringify(messages, null, 2)).toBe(messagesJson);
- // just retain the messages that fit in the budget
- expect(result).toStrictEqual([
- `5 ${big}`,
- { type: 'text', content: `6 ${big}` },
- { type: 'text', content: `7 ${big}` },
- { type: 'text', content: `8 ${big}` },
- { type: 'text', content: `9 ${big}` },
- { type: 'text', content: `10 ${big}` },
- { type: 'text', content: `11 ${big}` },
- { type: 'text', content: `12 ${big}` },
- ]);
+ // Should only keep the last message, unchanged
+ expect(result).toStrictEqual([{ content: `3 ${big}` }]);
+ });
+
+ it('keeps only the last message with truncation when it does not fit the limit', () => {
+ const messages = [{ content: `1 ${humongous}` }, { content: `2 ${humongous}` }, { content: `3 ${humongous}` }];
+ const result = truncateGenAiMessages(messages);
+ const truncLen = 20_000 - JSON.stringify({ content: '' }).length;
+ expect(result).toStrictEqual([{ content: `3 ${humongous}`.substring(0, truncLen) }]);
+ });
+
+ it('drops if last message cannot be safely truncated', () => {
+ const messages = [
+ { content: `1 ${humongous}` },
+ { content: `2 ${humongous}` },
+ { what_even_is_this: `? ${humongous}` },
+ ];
+ const result = truncateGenAiMessages(messages);
+ expect(result).toStrictEqual([]);
});
it('fully drops message if content cannot be made to fit', () => {
@@ -315,22 +286,5 @@ describe('message truncation utilities', () => {
},
]);
});
-
- it('truncates first message if none fit', () => {
- const messages = [{ content: `1 ${humongous}` }, { content: `2 ${humongous}` }, { content: `3 ${humongous}` }];
- const result = truncateGenAiMessages(messages);
- const truncLen = 20_000 - JSON.stringify({ content: '' }).length;
- expect(result).toStrictEqual([{ content: `3 ${humongous}`.substring(0, truncLen) }]);
- });
-
- it('drops if first message cannot be safely truncated', () => {
- const messages = [
- { content: `1 ${humongous}` },
- { content: `2 ${humongous}` },
- { what_even_is_this: `? ${humongous}` },
- ];
- const result = truncateGenAiMessages(messages);
- expect(result).toStrictEqual([]);
- });
});
});
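The new behaviour these tests pin down — keep only the last message, truncate its string `content` when it exceeds the serialized-size budget, drop it entirely when it has no safely truncatable `content` — can be sketched as a simplified model. This is not the real `truncateGenAiMessages` (which also strips media and handles richer content shapes); the 20 kB budget is taken from the test expectations above:

```typescript
const LIMIT = 20_000; // serialized-size budget, per the tests above

type Message = { content?: string; [key: string]: unknown };

// Simplified model: keep only the last message. If it fits, keep it unchanged;
// if its string content can be shortened to fit, truncate it; otherwise drop it.
function truncateLastMessage(messages: Message[]): Message[] {
  const last = messages[messages.length - 1];
  if (!last) return [];
  if (JSON.stringify(last).length <= LIMIT) return [last];
  if (typeof last.content !== 'string') return []; // nothing safe to truncate
  // Budget left for content = limit minus the JSON overhead of the rest of the object.
  const overhead = JSON.stringify({ ...last, content: '' }).length;
  return [{ ...last, content: last.content.substring(0, LIMIT - overhead) }];
}
```

Note the truncation length mirrors the test's `20_000 - JSON.stringify({ content: '' }).length` computation.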
diff --git a/packages/core/test/lib/utils/anthropic-utils.test.ts b/packages/core/test/lib/utils/anthropic-utils.test.ts
index 74d4e6b85c17..91a311cc574b 100644
--- a/packages/core/test/lib/utils/anthropic-utils.test.ts
+++ b/packages/core/test/lib/utils/anthropic-utils.test.ts
@@ -86,22 +86,24 @@ describe('anthropic-ai-utils', () => {
setMessagesAttribute(span, [{ role: 'user', content }]);
const result = [{ role: 'user', content: 'A'.repeat(19972) }];
expect(mock.attributes).toStrictEqual({
- 'gen_ai.request.messages.original_length': 1,
- 'gen_ai.request.messages': JSON.stringify(result),
+ 'sentry.sdk_meta.gen_ai.input.messages.original_length': 1,
+ 'gen_ai.input.messages': JSON.stringify(result),
});
});
- it('removes length when setting new value ', () => {
+ it('sets length to 1 for non-array input', () => {
setMessagesAttribute(span, { content: 'hello, world' });
expect(mock.attributes).toStrictEqual({
- 'gen_ai.request.messages': '{"content":"hello, world"}',
+ 'sentry.sdk_meta.gen_ai.input.messages.original_length': 1,
+ 'gen_ai.input.messages': '{"content":"hello, world"}',
});
});
it('ignores empty array', () => {
setMessagesAttribute(span, []);
expect(mock.attributes).toStrictEqual({
- 'gen_ai.request.messages': '{"content":"hello, world"}',
+ 'sentry.sdk_meta.gen_ai.input.messages.original_length': 1,
+ 'gen_ai.input.messages': '{"content":"hello, world"}',
});
});
});
diff --git a/packages/core/test/lib/utils/openai-utils.test.ts b/packages/core/test/lib/utils/openai-utils.test.ts
index ff951e8be40b..25cd873ace08 100644
--- a/packages/core/test/lib/utils/openai-utils.test.ts
+++ b/packages/core/test/lib/utils/openai-utils.test.ts
@@ -18,14 +18,14 @@ describe('openai-utils', () => {
expect(getOperationName('some.path.chat.completions.method')).toBe('chat');
});
- it('should return responses for responses methods', () => {
- expect(getOperationName('responses.create')).toBe('responses');
- expect(getOperationName('some.path.responses.method')).toBe('responses');
+ it('should return chat for responses methods', () => {
+ expect(getOperationName('responses.create')).toBe('chat');
+ expect(getOperationName('some.path.responses.method')).toBe('chat');
});
- it('should return conversations for conversations methods', () => {
- expect(getOperationName('conversations.create')).toBe('conversations');
- expect(getOperationName('some.path.conversations.method')).toBe('conversations');
+ it('should return chat for conversations methods', () => {
+ expect(getOperationName('conversations.create')).toBe('chat');
+ expect(getOperationName('some.path.conversations.method')).toBe('chat');
});
it('should return the last part of path for unknown methods', () => {
@@ -41,7 +41,7 @@ describe('openai-utils', () => {
describe('getSpanOperation', () => {
it('should prefix operation with gen_ai', () => {
expect(getSpanOperation('chat.completions.create')).toBe('gen_ai.chat');
- expect(getSpanOperation('responses.create')).toBe('gen_ai.responses');
+ expect(getSpanOperation('responses.create')).toBe('gen_ai.chat');
expect(getSpanOperation('some.custom.operation')).toBe('gen_ai.operation');
});
});
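The renamed tests encode that `responses.*` and `conversations.*` method paths now normalize to the `chat` operation. A minimal reconstruction of that mapping, inferred only from the expectations above — the SDK's real `getOperationName` handles more cases:

```typescript
// Sketch of the operation-name mapping the updated tests expect:
// chat, responses, and conversations paths all normalize to 'chat';
// unknown methods fall back to the last path segment.
function getOperationName(methodPath: string): string {
  const parts = methodPath.split('.');
  if (parts.includes('chat') || parts.includes('responses') || parts.includes('conversations')) {
    return 'chat';
  }
  return parts[parts.length - 1] ?? 'unknown';
}
```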
diff --git a/packages/gatsby/package.json b/packages/gatsby/package.json
index 55c098503137..cfdadf181930 100644
--- a/packages/gatsby/package.json
+++ b/packages/gatsby/package.json
@@ -47,7 +47,7 @@
"dependencies": {
"@sentry/core": "10.36.0",
"@sentry/react": "10.36.0",
- "@sentry/webpack-plugin": "^4.6.2"
+ "@sentry/webpack-plugin": "^4.7.0"
},
"peerDependencies": {
"gatsby": "^2.0.0 || ^3.0.0 || ^4.0.0 || ^5.0.0",
diff --git a/packages/google-cloud-serverless/src/index.ts b/packages/google-cloud-serverless/src/index.ts
index 4fa5c727be59..636852d722d3 100644
--- a/packages/google-cloud-serverless/src/index.ts
+++ b/packages/google-cloud-serverless/src/index.ts
@@ -25,6 +25,7 @@ export {
Scope,
SDK_VERSION,
setContext,
+ setConversationId,
setExtra,
setExtras,
setTag,
diff --git a/packages/nestjs/package.json b/packages/nestjs/package.json
index 2ff03b78d7d1..2bea432bb7ae 100644
--- a/packages/nestjs/package.json
+++ b/packages/nestjs/package.json
@@ -45,10 +45,10 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/instrumentation-nestjs-core": "0.56.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/instrumentation-nestjs-core": "0.57.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0"
},
diff --git a/packages/nextjs/package.json b/packages/nextjs/package.json
index 65a6078ba3a7..0bcaef4b753e 100644
--- a/packages/nextjs/package.json
+++ b/packages/nextjs/package.json
@@ -86,7 +86,7 @@
"@sentry/opentelemetry": "10.36.0",
"@sentry/react": "10.36.0",
"@sentry/vercel-edge": "10.36.0",
- "@sentry/webpack-plugin": "^4.6.2",
+ "@sentry/webpack-plugin": "^4.7.0",
"rollup": "^4.35.0",
"stacktrace-parser": "^0.1.10"
},
diff --git a/packages/nextjs/src/common/utils/dropMiddlewareTunnelRequests.ts b/packages/nextjs/src/common/utils/dropMiddlewareTunnelRequests.ts
index 29e2ee55e45e..e8cde6e94baf 100644
--- a/packages/nextjs/src/common/utils/dropMiddlewareTunnelRequests.ts
+++ b/packages/nextjs/src/common/utils/dropMiddlewareTunnelRequests.ts
@@ -1,12 +1,6 @@
import { SEMATTRS_HTTP_TARGET } from '@opentelemetry/semantic-conventions';
-import {
- getClient,
- GLOBAL_OBJ,
- isSentryRequestUrl,
- SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
- type Span,
- type SpanAttributes,
-} from '@sentry/core';
+import { GLOBAL_OBJ, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, type Span, type SpanAttributes } from '@sentry/core';
+import { isSentryRequestSpan } from '@sentry/opentelemetry';
import { ATTR_NEXT_SPAN_TYPE } from '../nextSpanAttributes';
import { TRANSACTION_ATTR_SHOULD_DROP_TRANSACTION } from '../span-attributes-with-logic-attached';
@@ -42,36 +36,6 @@ export function dropMiddlewareTunnelRequests(span: Span, attrs: SpanAttributes |
}
}
-/**
- * Local copy of `@sentry/opentelemetry`'s `isSentryRequestSpan`, to avoid pulling the whole package into Edge bundles.
- */
-function isSentryRequestSpan(span: Span): boolean {
- const attributes = spanToAttributes(span);
- if (!attributes) {
- return false;
- }
-
- const httpUrl = attributes['http.url'] || attributes['url.full'];
- if (!httpUrl) {
- return false;
- }
-
- return isSentryRequestUrl(httpUrl.toString(), getClient());
-}
-
-function spanToAttributes(span: Span): Record<string, unknown> | undefined {
- // OTEL spans expose attributes in different shapes depending on implementation.
- // We only need best-effort read access.
- type MaybeSpanAttributes = {
- attributes?: Record<string, unknown>;
- _attributes?: Record<string, unknown>;
- };
-
- const maybeSpan = span as unknown as MaybeSpanAttributes;
- const attrs = maybeSpan.attributes || maybeSpan._attributes;
- return attrs;
-}
-
/**
* Checks if a span's HTTP target matches the tunnel route.
*/
diff --git a/packages/nextjs/src/config/getBuildPluginOptions.ts b/packages/nextjs/src/config/getBuildPluginOptions.ts
index e43061eb59a5..dbc84e88be40 100644
--- a/packages/nextjs/src/config/getBuildPluginOptions.ts
+++ b/packages/nextjs/src/config/getBuildPluginOptions.ts
@@ -37,6 +37,13 @@ const FILE_PATTERNS = {
FRAMEWORK_CHUNKS_DOT: 'static/chunks/framework.*',
POLYFILLS_CHUNKS: 'static/chunks/polyfills-*',
WEBPACK_CHUNKS: 'static/chunks/webpack-*',
+ PAGE_CLIENT_REFERENCE_MANIFEST: '**/page_client-reference-manifest.js',
+ SERVER_REFERENCE_MANIFEST: '**/server-reference-manifest.js',
+ NEXT_FONT_MANIFEST: '**/next-font-manifest.js',
+ MIDDLEWARE_BUILD_MANIFEST: '**/middleware-build-manifest.js',
+ INTERCEPTION_ROUTE_REWRITE_MANIFEST: '**/interception-route-rewrite-manifest.js',
+ ROUTE_CLIENT_REFERENCE_MANIFEST: '**/route_client-reference-manifest.js',
+ MIDDLEWARE_REACT_LOADABLE_MANIFEST: '**/middleware-react-loadable-manifest.js',
} as const;
// Source map file extensions to delete
@@ -142,6 +149,16 @@ function createSourcemapUploadIgnorePattern(
path.posix.join(normalizedDistPath, FILE_PATTERNS.FRAMEWORK_CHUNKS_DOT),
path.posix.join(normalizedDistPath, FILE_PATTERNS.POLYFILLS_CHUNKS),
path.posix.join(normalizedDistPath, FILE_PATTERNS.WEBPACK_CHUNKS),
+ // Next.js internal manifest files that don't have source maps
+ // These files are auto-generated by Next.js and do not contain user code.
+ // Ignoring them prevents "Could not determine source map reference" warnings.
+ FILE_PATTERNS.PAGE_CLIENT_REFERENCE_MANIFEST,
+ FILE_PATTERNS.SERVER_REFERENCE_MANIFEST,
+ FILE_PATTERNS.NEXT_FONT_MANIFEST,
+ FILE_PATTERNS.MIDDLEWARE_BUILD_MANIFEST,
+ FILE_PATTERNS.INTERCEPTION_ROUTE_REWRITE_MANIFEST,
+ FILE_PATTERNS.ROUTE_CLIENT_REFERENCE_MANIFEST,
+ FILE_PATTERNS.MIDDLEWARE_REACT_LOADABLE_MANIFEST,
);
return ignore;
@@ -216,6 +233,20 @@ function createReleaseConfig(
};
}
+/**
+ * Merges default ignore patterns with user-provided ignore patterns.
+ * User patterns are appended to the defaults to ensure internal Next.js
+ * files are always ignored while allowing users to add additional patterns.
+ */
+function mergeIgnorePatterns(defaultPatterns: string[], userPatterns: string | string[] | undefined): string[] {
+ if (!userPatterns) {
+ return defaultPatterns;
+ }
+
+ const userPatternsArray = Array.isArray(userPatterns) ? userPatterns : [userPatterns];
+ return [...defaultPatterns, ...userPatternsArray];
+}
+
/**
* Get Sentry Build Plugin options for both webpack and turbopack builds.
* These options can be used in two ways:
@@ -239,7 +270,6 @@ export function getBuildPluginOptions({
// glob characters. This clashes with Windows path separators.
// See: https://www.npmjs.com/package/glob
const normalizedDistDirAbsPath = normalizePathForGlob(distDirAbsPath);
-
const loggerPrefix = LOGGER_PREFIXES[buildTool];
const widenClientFileUpload = sentryBuildOptions.widenClientFileUpload ?? false;
const deleteSourcemapsAfterUpload = sentryBuildOptions.sourcemaps?.deleteSourcemapsAfterUpload ?? false;
@@ -252,6 +282,8 @@ export function getBuildPluginOptions({
const sourcemapUploadIgnore = createSourcemapUploadIgnorePattern(normalizedDistDirAbsPath, widenClientFileUpload);
+ const finalIgnorePatterns = mergeIgnorePatterns(sourcemapUploadIgnore, sentryBuildOptions.sourcemaps?.ignore);
+
const filesToDeleteAfterUpload = createFilesToDeleteAfterUploadPattern(
normalizedDistDirAbsPath,
buildTool,
@@ -281,7 +313,7 @@ export function getBuildPluginOptions({
disable: skipSourcemapsUpload ? true : (sentryBuildOptions.sourcemaps?.disable ?? false),
rewriteSources: rewriteWebpackSources,
assets: sentryBuildOptions.sourcemaps?.assets ?? sourcemapUploadAssets,
- ignore: sentryBuildOptions.sourcemaps?.ignore ?? sourcemapUploadIgnore,
+ ignore: finalIgnorePatterns,
filesToDeleteAfterUpload,
...sentryBuildOptions.webpack?.unstable_sentryWebpackPluginOptions?.sourcemaps,
},
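The `mergeIgnorePatterns` helper added above is small enough to restate standalone; its contract is that the default ignore patterns always survive, and user input — either a single glob string or an array — is appended after them:

```typescript
// Defaults are always kept; user patterns (string or string[]) are appended.
// Taken from the diff above; self-contained for illustration.
function mergeIgnorePatterns(defaultPatterns: string[], userPatterns: string | string[] | undefined): string[] {
  if (!userPatterns) {
    return defaultPatterns;
  }
  const userPatternsArray = Array.isArray(userPatterns) ? userPatterns : [userPatterns];
  return [...defaultPatterns, ...userPatternsArray];
}
```

This is why the tests now expect custom patterns like `**/vendor/**` to appear after the built-in manifest globs rather than replacing them.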
diff --git a/packages/nextjs/src/config/types.ts b/packages/nextjs/src/config/types.ts
index cdc6e68f053d..46b1aef110d2 100644
--- a/packages/nextjs/src/config/types.ts
+++ b/packages/nextjs/src/config/types.ts
@@ -261,7 +261,8 @@ export type SentryBuildOptions = {
/**
* A glob or an array of globs that specifies which build artifacts should not be uploaded to Sentry.
*
- * Default: `[]`
+ * The SDK automatically ignores Next.js internal files that don't have source maps (such as manifest files)
+ * to prevent "Could not determine source map reference" warnings. Your custom patterns are merged with these defaults.
*
* The globbing patterns follow the implementation of the `glob` package. (https://www.npmjs.com/package/glob)
*
diff --git a/packages/nextjs/src/edge/index.ts b/packages/nextjs/src/edge/index.ts
index 94c71a52c483..9fa05c94e978 100644
--- a/packages/nextjs/src/edge/index.ts
+++ b/packages/nextjs/src/edge/index.ts
@@ -1,7 +1,7 @@
// import/export got a false positive, and affects most of our index barrel files
// can be removed once following issue is fixed: https://github.com/import-js/eslint-plugin-import/issues/703
/* eslint-disable import/export */
-import { context, createContextKey } from '@opentelemetry/api';
+import { context } from '@opentelemetry/api';
import {
applySdkMetadata,
type EventProcessor,
@@ -12,7 +12,6 @@ import {
getRootSpan,
GLOBAL_OBJ,
registerSpanErrorInstrumentation,
- type Scope,
SEMANTIC_ATTRIBUTE_SENTRY_OP,
SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
@@ -20,6 +19,7 @@ import {
spanToJSON,
stripUrlQueryAndFragment,
} from '@sentry/core';
+import { getScopesFromContext } from '@sentry/opentelemetry';
import type { VercelEdgeOptions } from '@sentry/vercel-edge';
import { getDefaultIntegrations, init as vercelEdgeInit } from '@sentry/vercel-edge';
import { DEBUG_BUILD } from '../common/debug-build';
@@ -42,32 +42,6 @@ export { wrapApiHandlerWithSentry } from './wrapApiHandlerWithSentry';
export type EdgeOptions = VercelEdgeOptions;
-type CurrentScopes = {
- scope: Scope;
- isolationScope: Scope;
-};
-
-// This key must match `@sentry/opentelemetry`'s `SENTRY_SCOPES_CONTEXT_KEY`.
-// We duplicate it here so the Edge bundle does not need to import the full `@sentry/opentelemetry` package.
-const SENTRY_SCOPES_CONTEXT_KEY = createContextKey('sentry_scopes');
-
-type ContextWithGetValue = {
- getValue(key: unknown): unknown;
-};
-
-function getScopesFromContext(otelContext: unknown): CurrentScopes | undefined {
- if (!otelContext || typeof otelContext !== 'object') {
- return undefined;
- }
-
- const maybeContext = otelContext as Partial<ContextWithGetValue>;
- if (typeof maybeContext.getValue !== 'function') {
- return undefined;
- }
-
- return maybeContext.getValue(SENTRY_SCOPES_CONTEXT_KEY) as CurrentScopes | undefined;
-}
-
const globalWithInjectedValues = GLOBAL_OBJ as typeof GLOBAL_OBJ & {
_sentryRewriteFramesDistDir?: string;
_sentryRelease?: string;
diff --git a/packages/nextjs/test/config/getBuildPluginOptions.test.ts b/packages/nextjs/test/config/getBuildPluginOptions.test.ts
index 3e95eadafc96..f62463447290 100644
--- a/packages/nextjs/test/config/getBuildPluginOptions.test.ts
+++ b/packages/nextjs/test/config/getBuildPluginOptions.test.ts
@@ -33,6 +33,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
],
filesToDeleteAfterUpload: undefined,
rewriteSources: expect.any(Function),
@@ -121,6 +128,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
expect(result.reactComponentAnnotation).toBeDefined();
});
@@ -142,6 +156,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
});
@@ -161,6 +182,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
expect(result.reactComponentAnnotation).toBeDefined();
});
@@ -181,6 +209,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
expect(result.reactComponentAnnotation).toBeDefined();
});
@@ -205,6 +240,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
expect(result.reactComponentAnnotation).toBeUndefined();
});
@@ -228,6 +270,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
]);
expect(result.reactComponentAnnotation).toBeUndefined();
});
@@ -444,7 +493,7 @@ describe('getBuildPluginOptions', () => {
expect(result.sourcemaps?.assets).toEqual(customAssets);
});
- it('uses custom sourcemap ignore patterns when provided', () => {
+ it('merges custom sourcemap ignore patterns with defaults', () => {
const customIgnore = ['**/vendor/**', '**/node_modules/**'];
const sentryBuildOptions: SentryBuildOptions = {
org: 'test-org',
@@ -461,7 +510,58 @@ describe('getBuildPluginOptions', () => {
buildTool: 'webpack-client',
});
- expect(result.sourcemaps?.ignore).toEqual(customIgnore);
+ // Custom patterns should be appended to defaults, not replace them
+ expect(result.sourcemaps?.ignore).toEqual([
+ '/path/to/.next/static/chunks/main-*',
+ '/path/to/.next/static/chunks/framework-*',
+ '/path/to/.next/static/chunks/framework.*',
+ '/path/to/.next/static/chunks/polyfills-*',
+ '/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
+ '**/vendor/**',
+ '**/node_modules/**',
+ ]);
+ });
+
+ it('handles single string custom sourcemap ignore pattern', () => {
+ const customIgnore = '**/vendor/**';
+ const sentryBuildOptions: SentryBuildOptions = {
+ org: 'test-org',
+ project: 'test-project',
+ sourcemaps: {
+ ignore: customIgnore,
+ },
+ };
+
+ const result = getBuildPluginOptions({
+ sentryBuildOptions,
+ releaseName: mockReleaseName,
+ distDirAbsPath: mockDistDirAbsPath,
+ buildTool: 'webpack-client',
+ });
+
+ // Single string pattern should be appended to defaults
+ expect(result.sourcemaps?.ignore).toEqual([
+ '/path/to/.next/static/chunks/main-*',
+ '/path/to/.next/static/chunks/framework-*',
+ '/path/to/.next/static/chunks/framework.*',
+ '/path/to/.next/static/chunks/polyfills-*',
+ '/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
+ '**/vendor/**',
+ ]);
});
it('disables sourcemaps when disable flag is set', () => {
@@ -769,6 +869,13 @@ describe('getBuildPluginOptions', () => {
'/path/to/.next/static/chunks/framework.*',
'/path/to/.next/static/chunks/polyfills-*',
'/path/to/.next/static/chunks/webpack-*',
+ '**/page_client-reference-manifest.js',
+ '**/server-reference-manifest.js',
+ '**/next-font-manifest.js',
+ '**/middleware-build-manifest.js',
+ '**/interception-route-rewrite-manifest.js',
+ '**/route_client-reference-manifest.js',
+ '**/middleware-react-loadable-manifest.js',
],
filesToDeleteAfterUpload: undefined,
rewriteSources: expect.any(Function),
diff --git a/packages/node-core/package.json b/packages/node-core/package.json
index db554b2d50ed..6bfdf59307ac 100644
--- a/packages/node-core/package.json
+++ b/packages/node-core/package.json
@@ -63,7 +63,7 @@
"@opentelemetry/instrumentation": ">=0.57.1 <1",
"@opentelemetry/resources": "^1.30.1 || ^2.1.0",
"@opentelemetry/sdk-trace-base": "^1.30.1 || ^2.1.0",
- "@opentelemetry/semantic-conventions": "^1.37.0"
+ "@opentelemetry/semantic-conventions": "^1.39.0"
},
"dependencies": {
"@apm-js-collab/tracing-hooks": "^0.3.1",
@@ -74,12 +74,12 @@
"devDependencies": {
"@apm-js-collab/code-transformer": "^0.8.2",
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/resources": "^2.4.0",
- "@opentelemetry/sdk-trace-base": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/resources": "^2.5.0",
+ "@opentelemetry/sdk-trace-base": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@types/node": "^18.19.1"
},
"scripts": {
diff --git a/packages/node-core/src/integrations/onunhandledrejection.ts b/packages/node-core/src/integrations/onunhandledrejection.ts
index 42a17e2e6c7e..af40bacfda57 100644
--- a/packages/node-core/src/integrations/onunhandledrejection.ts
+++ b/packages/node-core/src/integrations/onunhandledrejection.ts
@@ -27,7 +27,10 @@ const INTEGRATION_NAME = 'OnUnhandledRejection';
const DEFAULT_IGNORES: IgnoreMatcher[] = [
{
- name: 'AI_NoOutputGeneratedError', // When stream aborts in Vercel AI SDK, Vercel flush() fails with an error
+ name: 'AI_NoOutputGeneratedError', // When stream aborts in Vercel AI SDK V5, Vercel flush() fails with an error
+ },
+ {
+ name: 'AbortError', // When stream aborts in Vercel AI SDK V6
},
];
diff --git a/packages/node-core/src/integrations/winston.ts b/packages/node-core/src/integrations/winston.ts
index bea0fa584bf7..a461d8797338 100644
--- a/packages/node-core/src/integrations/winston.ts
+++ b/packages/node-core/src/integrations/winston.ts
@@ -1,5 +1,7 @@
/* eslint-disable @typescript-eslint/ban-ts-comment */
import type { LogSeverityLevel } from '@sentry/core';
+import { debug } from '@sentry/core';
+import { DEBUG_BUILD } from '../debug-build';
import { captureLog } from '../logs/capture';
const DEFAULT_CAPTURED_LEVELS: Array<LogSeverityLevel> = ['trace', 'debug', 'info', 'warn', 'error', 'fatal'];
@@ -25,6 +27,21 @@ interface WinstonTransportOptions {
* ```
*/
levels?: Array<LogSeverityLevel>;
+
+ /**
+ * Use this option to map custom levels to Sentry log severity levels.
+ *
+ * @example
+ * ```ts
+ * const SentryWinstonTransport = Sentry.createSentryWinstonTransport(Transport, {
+ * customLevelMap: {
+ * myCustomLevel: 'info',
+ * customError: 'error',
+ * },
+ * });
+ * ```
+ */
+ customLevelMap?: Record<string, LogSeverityLevel>;
}
/**
@@ -85,12 +102,20 @@ export function createSentryWinstonTransport
}
class SentryPrismaInteropInstrumentation extends PrismaInstrumentation {
- public constructor() {
- super();
+ public constructor(options?: PrismaOptions) {
+ super(options?.instrumentationConfig);
}
public enable(): void {
@@ -165,8 +169,8 @@ function engineSpanKindToOTELSpanKind(engineSpanKind: V5EngineSpanKind): SpanKin
}
}
-export const instrumentPrisma = generateInstrumentOnce(INTEGRATION_NAME, _options => {
- return new SentryPrismaInteropInstrumentation();
+export const instrumentPrisma = generateInstrumentOnce(INTEGRATION_NAME, options => {
+ return new SentryPrismaInteropInstrumentation(options);
});
/**
@@ -201,11 +205,11 @@ export const instrumentPrisma = generateInstrumentOnce(INTEGRATIO
* }
* ```
*/
-export const prismaIntegration = defineIntegration((_options?: PrismaOptions) => {
+export const prismaIntegration = defineIntegration((options?: PrismaOptions) => {
return {
name: INTEGRATION_NAME,
setupOnce() {
- instrumentPrisma();
+ instrumentPrisma(options);
},
setup(client) {
// If no tracing helper exists, we skip any work here
diff --git a/packages/node/test/integrations/tracing/prisma.test.ts b/packages/node/test/integrations/tracing/prisma.test.ts
new file mode 100644
index 000000000000..7fb734d7193d
--- /dev/null
+++ b/packages/node/test/integrations/tracing/prisma.test.ts
@@ -0,0 +1,38 @@
+import { PrismaInstrumentation } from '@prisma/instrumentation';
+import { INSTRUMENTED } from '@sentry/node-core';
+import { beforeEach, describe, expect, it, type MockInstance, vi } from 'vitest';
+import { instrumentPrisma } from '../../../src/integrations/tracing/prisma';
+
+vi.mock('@prisma/instrumentation');
+
+describe('Prisma', () => {
+ beforeEach(() => {
+ vi.clearAllMocks();
+ delete INSTRUMENTED.Prisma;
+
+ (PrismaInstrumentation as unknown as MockInstance).mockImplementation(() => {
+ return {
+ setTracerProvider: () => undefined,
+ setMeterProvider: () => undefined,
+ getConfig: () => ({}),
+ setConfig: () => ({}),
+ enable: () => undefined,
+ };
+ });
+ });
+
+ it('defaults are correct for instrumentPrisma', () => {
+ instrumentPrisma();
+
+ expect(PrismaInstrumentation).toHaveBeenCalledTimes(1);
+ expect(PrismaInstrumentation).toHaveBeenCalledWith(undefined);
+ });
+
+ it('passes instrumentationConfig option to PrismaInstrumentation', () => {
+ const config = { ignoreSpanTypes: [] };
+ instrumentPrisma({ instrumentationConfig: config });
+
+ expect(PrismaInstrumentation).toHaveBeenCalledTimes(1);
+ expect(PrismaInstrumentation).toHaveBeenCalledWith(config);
+ });
+});
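The fix the test above pins down is that `prismaIntegration`'s options now flow through the once-guard to the `PrismaInstrumentation` constructor instead of being dropped. A simplified re-implementation of that pattern (the real `generateInstrumentOnce` and `INSTRUMENTED` live in `@sentry/node-core`; this sketch only mirrors the options-forwarding behavior):

```typescript
// Simplified sketch of an "instrument once" guard that forwards options on
// the first call, as the Prisma change above restores. Illustrative only.
const INSTRUMENTED: Record<string, unknown> = {};

function generateInstrumentOnce<Options, Instr>(
  name: string,
  creator: (options?: Options) => Instr,
): (options?: Options) => Instr {
  return (options?: Options) => {
    if (!(name in INSTRUMENTED)) {
      // Before the fix, `options` was ignored here (`_options`) and the
      // instrumentation constructor always received undefined.
      INSTRUMENTED[name] = creator(options);
    }
    return INSTRUMENTED[name] as Instr;
  };
}
```

Subsequent calls are no-ops, which is why the test clears `INSTRUMENTED.Prisma` in `beforeEach`.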
diff --git a/packages/nuxt/package.json b/packages/nuxt/package.json
index b3dc72061875..d701e590c04b 100644
--- a/packages/nuxt/package.json
+++ b/packages/nuxt/package.json
@@ -54,8 +54,8 @@
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/node-core": "10.36.0",
- "@sentry/rollup-plugin": "^4.6.2",
- "@sentry/vite-plugin": "^4.6.2",
+ "@sentry/rollup-plugin": "^4.7.0",
+ "@sentry/vite-plugin": "^4.7.0",
"@sentry/vue": "10.36.0"
},
"devDependencies": {
diff --git a/packages/nuxt/src/module.ts b/packages/nuxt/src/module.ts
index 3656eac56e63..813fb9385066 100644
--- a/packages/nuxt/src/module.ts
+++ b/packages/nuxt/src/module.ts
@@ -74,20 +74,6 @@ export default defineNuxtModule({
mode: 'client',
order: 1,
});
-
- // Add the sentry config file to the include array
- nuxt.hook('prepare:types', options => {
- const tsConfig = options.tsConfig as { include?: string[] };
-
- if (!tsConfig.include) {
- tsConfig.include = [];
- }
-
- // Add type references for useRuntimeConfig in root files for nuxt v4
- // Should be relative to `root/.nuxt`
- const relativePath = path.relative(nuxt.options.buildDir, clientConfigFile);
- tsConfig.include.push(relativePath);
- });
}
const serverConfigFile = findDefaultSdkInitFile('server', nuxt);
@@ -134,7 +120,31 @@ export default defineNuxtModule({
addDatabaseInstrumentation(nuxt.options.nitro);
}
+ // Add the sentry config file to the include array
+ nuxt.hook('prepare:types', options => {
+ const tsConfig = options.tsConfig as { include?: string[] };
+
+ if (!tsConfig.include) {
+ tsConfig.include = [];
+ }
+
+ // Add type references for useRuntimeConfig in root files for nuxt v4
+ // Should be relative to `root/.nuxt`
+ if (clientConfigFile) {
+ const relativePath = path.relative(nuxt.options.buildDir, clientConfigFile);
+ tsConfig.include.push(relativePath);
+ }
+ if (serverConfigFile) {
+ const relativePath = path.relative(nuxt.options.buildDir, serverConfigFile);
+ tsConfig.include.push(relativePath);
+ }
+ });
+
nuxt.hooks.hook('nitro:init', nitro => {
+ if (nuxt.options?._prepare) {
+ return;
+ }
+
if (serverConfigFile) {
addMiddlewareInstrumentation(nitro);
}
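The `nuxt.options?._prepare` early return above (and the matching guards in `sourceMaps.ts` below) share one condition: skip build-time work during dev and during `nuxi prepare`. As a pure-function sketch (the option shape loosely mirrors Nuxt's options object; the helper name is invented for illustration):

```typescript
// Sketch of the guard condition added throughout the Nuxt module changes:
// plugin injection and source map config are skipped in dev mode and in
// `nuxi prepare` runs. Illustrative only.
interface NuxtLikeOptions {
  dev?: boolean;
  _prepare?: boolean;
}

function shouldSetupSourceMaps(sourceMapsEnabled: boolean, options?: NuxtLikeOptions): boolean {
  return sourceMapsEnabled && !options?.dev && !options?._prepare;
}
```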
diff --git a/packages/nuxt/src/vite/sourceMaps.ts b/packages/nuxt/src/vite/sourceMaps.ts
index dff4f74df2f7..771be8d3d532 100644
--- a/packages/nuxt/src/vite/sourceMaps.ts
+++ b/packages/nuxt/src/vite/sourceMaps.ts
@@ -35,7 +35,7 @@ export function setupSourceMaps(moduleOptions: SentryNuxtModuleOptions, nuxt: Nu
let shouldDeleteFilesFallback = { client: true, server: true };
nuxt.hook('modules:done', () => {
- if (sourceMapsEnabled && !nuxt.options.dev) {
+ if (sourceMapsEnabled && !nuxt.options.dev && !nuxt.options?._prepare) {
// Changing this setting will propagate:
// - for client to viteConfig.build.sourceMap
// - for server to viteConfig.build.sourceMap and nitro.sourceMap
@@ -49,23 +49,35 @@ export function setupSourceMaps(moduleOptions: SentryNuxtModuleOptions, nuxt: Nu
server: previousSourceMapSettings.server === 'unset',
};
- if (
- isDebug &&
- !moduleOptions.sourcemaps?.filesToDeleteAfterUpload &&
- // eslint-disable-next-line deprecation/deprecation
- !sourceMapsUploadOptions.sourcemaps?.filesToDeleteAfterUpload &&
- (shouldDeleteFilesFallback.client || shouldDeleteFilesFallback.server)
- ) {
- // eslint-disable-next-line no-console
- console.log(
- "[Sentry] As Sentry enabled `'hidden'` source maps, source maps will be automatically deleted after uploading them to Sentry.",
- );
+ if (isDebug && (shouldDeleteFilesFallback.client || shouldDeleteFilesFallback.server)) {
+ const enabledDeleteFallbacks =
+ shouldDeleteFilesFallback.client && shouldDeleteFilesFallback.server
+ ? 'client-side and server-side'
+ : shouldDeleteFilesFallback.server
+ ? 'server-side'
+ : 'client-side';
+
+ if (
+ !moduleOptions.sourcemaps?.filesToDeleteAfterUpload &&
+ // eslint-disable-next-line deprecation/deprecation
+ !sourceMapsUploadOptions.sourcemaps?.filesToDeleteAfterUpload
+ ) {
+ // eslint-disable-next-line no-console
+ console.log(
+ `[Sentry] We enabled \`'hidden'\` source maps for your ${enabledDeleteFallbacks} build. Source map files will be automatically deleted after uploading them to Sentry.`,
+ );
+ } else {
+ // eslint-disable-next-line no-console
+ console.log(
+ `[Sentry] We enabled \`'hidden'\` source maps for your ${enabledDeleteFallbacks} build. Source map files will be deleted according to your \`sourcemaps.filesToDeleteAfterUpload\` configuration. To use automatic deletion instead, leave \`filesToDeleteAfterUpload\` empty.`,
+ );
+ }
}
}
});
nuxt.hook('vite:extendConfig', async (viteConfig, env) => {
- if (sourceMapsEnabled && viteConfig.mode !== 'development') {
+ if (sourceMapsEnabled && viteConfig.mode !== 'development' && !nuxt.options?._prepare) {
const runtime = env.isServer ? 'server' : env.isClient ? 'client' : undefined;
const nuxtSourceMapSetting = extractNuxtSourceMapSetting(nuxt, runtime);
@@ -99,7 +111,7 @@ export function setupSourceMaps(moduleOptions: SentryNuxtModuleOptions, nuxt: Nu
});
nuxt.hook('nitro:config', (nitroConfig: NitroConfig) => {
- if (sourceMapsEnabled && !nitroConfig.dev) {
+ if (sourceMapsEnabled && !nitroConfig.dev && !nuxt.options?._prepare) {
if (!nitroConfig.rollupConfig) {
nitroConfig.rollupConfig = {};
}
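The reworked debug log above names exactly the builds for which the deletion fallback is active. Extracted as a pure function purely for illustration (the real code inlines this as a nested ternary on `shouldDeleteFilesFallback`):

```typescript
// Sketch of the message-target selection introduced in sourceMaps.ts above.
// Returns undefined when no fallback is active and nothing should be logged.
function describeDeleteFallback(fallback: { client: boolean; server: boolean }): string | undefined {
  if (!fallback.client && !fallback.server) {
    return undefined;
  }
  return fallback.client && fallback.server
    ? 'client-side and server-side'
    : fallback.server
      ? 'server-side'
      : 'client-side';
}
```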
diff --git a/packages/nuxt/test/vite/sourceMaps-nuxtHooks.test.ts b/packages/nuxt/test/vite/sourceMaps-nuxtHooks.test.ts
new file mode 100644
index 000000000000..230c92b812a7
--- /dev/null
+++ b/packages/nuxt/test/vite/sourceMaps-nuxtHooks.test.ts
@@ -0,0 +1,124 @@
+import type { Nuxt } from '@nuxt/schema';
+import { afterAll, beforeAll, beforeEach, describe, expect, it, vi } from 'vitest';
+import type { SourceMapSetting } from '../../src/vite/sourceMaps';
+
+describe('setupSourceMaps hooks', () => {
+ const mockSentryVitePlugin = vi.fn(() => ({ name: 'sentry-vite-plugin' }));
+ const mockSentryRollupPlugin = vi.fn(() => ({ name: 'sentry-rollup-plugin' }));
+
+ const consoleLogSpy = vi.spyOn(console, 'log');
+ const consoleWarnSpy = vi.spyOn(console, 'warn');
+
+ beforeAll(() => {
+ vi.doMock('@sentry/vite-plugin', () => ({
+ sentryVitePlugin: mockSentryVitePlugin,
+ }));
+ vi.doMock('@sentry/rollup-plugin', () => ({
+ sentryRollupPlugin: mockSentryRollupPlugin,
+ }));
+ });
+
+ afterAll(() => {
+ consoleLogSpy.mockRestore();
+ consoleWarnSpy.mockRestore();
+ vi.doUnmock('@sentry/vite-plugin');
+ vi.doUnmock('@sentry/rollup-plugin');
+ });
+
+ beforeEach(() => {
+ consoleLogSpy.mockClear();
+ consoleWarnSpy.mockClear();
+ mockSentryVitePlugin.mockClear();
+ mockSentryRollupPlugin.mockClear();
+ });
+
+ type HookCallback = (...args: unknown[]) => void | Promise<void>;
+
+ function createMockNuxt(options: {
+ _prepare?: boolean;
+ dev?: boolean;
+ sourcemap?: SourceMapSetting | { server?: SourceMapSetting; client?: SourceMapSetting };
+ }) {
+ const hooks: Record<string, HookCallback[]> = {};
+
+ return {
+ options: {
+ _prepare: options._prepare ?? false,
+ dev: options.dev ?? false,
+ sourcemap: options.sourcemap ?? { server: undefined, client: undefined },
+ },
+ hook: (name: string, callback: HookCallback) => {
+ hooks[name] = hooks[name] || [];
+ hooks[name].push(callback);
+ },
+ // Helper to trigger hooks in tests
+ triggerHook: async (name: string, ...args: unknown[]) => {
+ const callbacks = hooks[name] || [];
+ for (const callback of callbacks) {
+ await callback(...args);
+ }
+ },
+ };
+ }
+
+ it('should not call any source map related functions in nuxt prepare mode', async () => {
+ const { setupSourceMaps } = await import('../../src/vite/sourceMaps');
+ const mockNuxt = createMockNuxt({ _prepare: true });
+
+ setupSourceMaps({ debug: true }, mockNuxt as unknown as Nuxt);
+
+ await mockNuxt.triggerHook('modules:done');
+ await mockNuxt.triggerHook(
+ 'vite:extendConfig',
+ { build: {}, plugins: [], mode: 'production' },
+ { isServer: true, isClient: false },
+ );
+ await mockNuxt.triggerHook('nitro:config', { rollupConfig: { plugins: [] }, dev: false });
+
+ expect(mockSentryVitePlugin).not.toHaveBeenCalled();
+ expect(mockSentryRollupPlugin).not.toHaveBeenCalled();
+
+ expect(consoleLogSpy).not.toHaveBeenCalledWith(expect.stringContaining('[Sentry]'));
+ });
+
+ it('should call source map related functions when not in prepare mode', async () => {
+ const { setupSourceMaps } = await import('../../src/vite/sourceMaps');
+ const mockNuxt = createMockNuxt({ _prepare: false, dev: false });
+
+ setupSourceMaps({ debug: true }, mockNuxt as unknown as Nuxt);
+
+ await mockNuxt.triggerHook('modules:done');
+
+ const viteConfig = { build: {}, plugins: [] as unknown[], mode: 'production' };
+ await mockNuxt.triggerHook('vite:extendConfig', viteConfig, { isServer: true, isClient: false });
+
+ const nitroConfig = { rollupConfig: { plugins: [] as unknown[], output: {} }, dev: false };
+ await mockNuxt.triggerHook('nitro:config', nitroConfig);
+
+ expect(mockSentryVitePlugin).toHaveBeenCalled();
+ expect(mockSentryRollupPlugin).toHaveBeenCalled();
+
+ expect(viteConfig.plugins.length).toBeGreaterThan(0);
+ expect(nitroConfig.rollupConfig.plugins.length).toBeGreaterThan(0);
+
+ expect(consoleLogSpy).toHaveBeenCalledWith(expect.stringContaining('[Sentry]'));
+ });
+
+ it('should not call source map related functions in dev mode', async () => {
+ const { setupSourceMaps } = await import('../../src/vite/sourceMaps');
+ const mockNuxt = createMockNuxt({ _prepare: false, dev: true });
+
+ setupSourceMaps({ debug: true }, mockNuxt as unknown as Nuxt);
+
+ await mockNuxt.triggerHook('modules:done');
+ await mockNuxt.triggerHook(
+ 'vite:extendConfig',
+ { build: {}, plugins: [], mode: 'development' },
+ { isServer: true, isClient: false },
+ );
+ await mockNuxt.triggerHook('nitro:config', { rollupConfig: { plugins: [] }, dev: true });
+
+ expect(mockSentryVitePlugin).not.toHaveBeenCalled();
+ expect(mockSentryRollupPlugin).not.toHaveBeenCalled();
+ });
+});
diff --git a/packages/opentelemetry/package.json b/packages/opentelemetry/package.json
index cb815faad726..5bcacf692f94 100644
--- a/packages/opentelemetry/package.json
+++ b/packages/opentelemetry/package.json
@@ -46,14 +46,14 @@
"@opentelemetry/context-async-hooks": "^1.30.1 || ^2.1.0",
"@opentelemetry/core": "^1.30.1 || ^2.1.0",
"@opentelemetry/sdk-trace-base": "^1.30.1 || ^2.1.0",
- "@opentelemetry/semantic-conventions": "^1.37.0"
+ "@opentelemetry/semantic-conventions": "^1.39.0"
},
"devDependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/context-async-hooks": "^2.4.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/sdk-trace-base": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0"
+ "@opentelemetry/context-async-hooks": "^2.5.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/sdk-trace-base": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0"
},
"scripts": {
"build": "run-p build:transpile build:types",
diff --git a/packages/opentelemetry/src/constants.ts b/packages/opentelemetry/src/constants.ts
index 3500ad6c4782..375e42dfdd00 100644
--- a/packages/opentelemetry/src/constants.ts
+++ b/packages/opentelemetry/src/constants.ts
@@ -9,10 +9,6 @@ export const SENTRY_TRACE_STATE_URL = 'sentry.url';
export const SENTRY_TRACE_STATE_SAMPLE_RAND = 'sentry.sample_rand';
export const SENTRY_TRACE_STATE_SAMPLE_RATE = 'sentry.sample_rate';
-// NOTE: `@sentry/nextjs` has a local copy of this context key for Edge bundles:
-// - `packages/nextjs/src/edge/index.ts` (`SENTRY_SCOPES_CONTEXT_KEY`)
-//
-// If you change the key name passed to `createContextKey(...)`, update that file too.
export const SENTRY_SCOPES_CONTEXT_KEY = createContextKey('sentry_scopes');
export const SENTRY_FORK_ISOLATION_SCOPE_CONTEXT_KEY = createContextKey('sentry_fork_isolation_scope');
diff --git a/packages/opentelemetry/src/utils/contextData.ts b/packages/opentelemetry/src/utils/contextData.ts
index 78577131d0c7..468b377f9ccd 100644
--- a/packages/opentelemetry/src/utils/contextData.ts
+++ b/packages/opentelemetry/src/utils/contextData.ts
@@ -11,10 +11,6 @@ const SCOPE_CONTEXT_FIELD = '_scopeContext';
* This requires a Context Manager that was wrapped with getWrappedContextManager.
*/
export function getScopesFromContext(context: Context): CurrentScopes | undefined {
- // NOTE: `@sentry/nextjs` has a local copy of this helper for Edge bundles:
- // - `packages/nextjs/src/edge/index.ts` (`getScopesFromContext`)
- //
- // If you change how scopes are stored/read (key or retrieval), update that file too.
return context.getValue(SENTRY_SCOPES_CONTEXT_KEY) as CurrentScopes | undefined;
}
diff --git a/packages/opentelemetry/src/utils/isSentryRequest.ts b/packages/opentelemetry/src/utils/isSentryRequest.ts
index 6e06bcf5ab2e..d6b59880137b 100644
--- a/packages/opentelemetry/src/utils/isSentryRequest.ts
+++ b/packages/opentelemetry/src/utils/isSentryRequest.ts
@@ -9,10 +9,6 @@ import { spanHasAttributes } from './spanTypes';
* @returns boolean
*/
export function isSentryRequestSpan(span: AbstractSpan): boolean {
- // NOTE: `@sentry/nextjs` has a local copy of this helper for Edge bundles:
- // - `packages/nextjs/src/common/utils/dropMiddlewareTunnelRequests.ts` (`isSentryRequestSpan`)
- //
- // If you change supported OTEL attribute keys or request detection logic, update that file too.
if (!spanHasAttributes(span)) {
return false;
}
diff --git a/packages/react-router/package.json b/packages/react-router/package.json
index 86178e295db2..a5a8f67be663 100644
--- a/packages/react-router/package.json
+++ b/packages/react-router/package.json
@@ -46,15 +46,15 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/browser": "10.36.0",
"@sentry/cli": "^2.58.4",
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/react": "10.36.0",
- "@sentry/vite-plugin": "^4.6.2",
+ "@sentry/vite-plugin": "^4.7.0",
"glob": "11.1.0"
},
"devDependencies": {
diff --git a/packages/react-router/src/client/createClientInstrumentation.ts b/packages/react-router/src/client/createClientInstrumentation.ts
new file mode 100644
index 000000000000..c465a25dd662
--- /dev/null
+++ b/packages/react-router/src/client/createClientInstrumentation.ts
@@ -0,0 +1,327 @@
+import { startBrowserTracingNavigationSpan } from '@sentry/browser';
+import type { Span } from '@sentry/core';
+import {
+ debug,
+ getClient,
+ GLOBAL_OBJ,
+ SEMANTIC_ATTRIBUTE_SENTRY_OP,
+ SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
+ SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
+ SPAN_STATUS_ERROR,
+ startSpan,
+} from '@sentry/core';
+import { DEBUG_BUILD } from '../common/debug-build';
+import type { ClientInstrumentation, InstrumentableRoute, InstrumentableRouter } from '../common/types';
+import { captureInstrumentationError, getPathFromRequest, getPattern, normalizeRoutePath } from '../common/utils';
+
+const WINDOW = GLOBAL_OBJ as typeof GLOBAL_OBJ & Window;
+
+// Tracks active numeric navigation span to prevent duplicate spans when popstate fires
+let currentNumericNavigationSpan: Span | undefined;
+
+const SENTRY_CLIENT_INSTRUMENTATION_FLAG = '__sentryReactRouterClientInstrumentationUsed';
+// Intentionally never reset - once set, instrumentation API handles all navigations for the session.
+const SENTRY_NAVIGATE_HOOK_INVOKED_FLAG = '__sentryReactRouterNavigateHookInvoked';
+const SENTRY_POPSTATE_LISTENER_ADDED_FLAG = '__sentryReactRouterPopstateListenerAdded';
+
+type GlobalObjWithFlags = typeof GLOBAL_OBJ & {
+ [SENTRY_CLIENT_INSTRUMENTATION_FLAG]?: boolean;
+ [SENTRY_NAVIGATE_HOOK_INVOKED_FLAG]?: boolean;
+ [SENTRY_POPSTATE_LISTENER_ADDED_FLAG]?: boolean;
+};
+
+const GLOBAL_WITH_FLAGS = GLOBAL_OBJ as GlobalObjWithFlags;
+
+/**
+ * Options for creating Sentry client instrumentation.
+ */
+export interface CreateSentryClientInstrumentationOptions {
+ /**
+ * Whether to capture errors from loaders/actions automatically.
+ * Set to `false` to avoid duplicates if using custom error handlers.
+ * @default true
+ */
+ captureErrors?: boolean;
+}
+
+/**
+ * Creates a Sentry client instrumentation for React Router's instrumentation API.
+ * @experimental
+ */
+export function createSentryClientInstrumentation(
+ options: CreateSentryClientInstrumentationOptions = {},
+): ClientInstrumentation {
+ const { captureErrors = true } = options;
+
+ DEBUG_BUILD && debug.log('React Router client instrumentation API created.');
+
+ return {
+ router(router: InstrumentableRouter) {
+ // Set the flag when React Router actually invokes our instrumentation.
+ // This ensures the flag is only set in Library Mode (where hooks run),
+ // not in Framework Mode (where hooks are never called).
+ // See: https://github.com/remix-run/react-router/discussions/13749
+ GLOBAL_WITH_FLAGS[SENTRY_CLIENT_INSTRUMENTATION_FLAG] = true;
+ DEBUG_BUILD && debug.log('React Router client instrumentation API router hook registered.');
+
+ // Add popstate listener for browser back/forward navigation (persists for session, one listener only)
+ if (!GLOBAL_WITH_FLAGS[SENTRY_POPSTATE_LISTENER_ADDED_FLAG] && WINDOW.addEventListener) {
+ GLOBAL_WITH_FLAGS[SENTRY_POPSTATE_LISTENER_ADDED_FLAG] = true;
+
+ WINDOW.addEventListener('popstate', () => {
+ const client = getClient();
+ if (!client) {
+ currentNumericNavigationSpan = undefined;
+ return;
+ }
+
+ const pathname = WINDOW.location?.pathname || '/';
+
+ // If there's an active numeric navigation span, update it instead of creating a duplicate
+ if (currentNumericNavigationSpan) {
+ if (currentNumericNavigationSpan.isRecording()) {
+ currentNumericNavigationSpan.updateName(pathname);
+ }
+ currentNumericNavigationSpan = undefined;
+ return;
+ }
+
+ // Only create a new span for actual browser back/forward button clicks
+ startBrowserTracingNavigationSpan(client, {
+ name: pathname,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'navigation',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': 'browser.popstate',
+ },
+ });
+ });
+
+ DEBUG_BUILD && debug.log('React Router popstate listener registered for browser back/forward navigation.');
+ }
+
+ router.instrument({
+ async navigate(callNavigate, info) {
+ // navigate(0) triggers a page reload - skip span creation, but still capture errors
+ // (navigation can be rejected before reload, e.g., by a navigation guard)
+ if (info.to === 0) {
+ const result = await callNavigate();
+ if (result.status === 'error' && result.error instanceof Error) {
+ captureInstrumentationError(result, captureErrors, 'react_router.navigate', {
+ 'http.url': info.currentUrl,
+ });
+ }
+ return;
+ }
+
+ GLOBAL_WITH_FLAGS[SENTRY_NAVIGATE_HOOK_INVOKED_FLAG] = true;
+
+ // Handle numeric navigations (navigate(-1), navigate(1), etc.)
+ if (typeof info.to === 'number') {
+ const client = getClient();
+ let navigationSpan;
+
+ if (client) {
+ const navigationType = info.to < 0 ? 'router.back' : 'router.forward';
+ const currentPathname = WINDOW.location?.pathname || info.currentUrl;
+
+ navigationSpan = startBrowserTracingNavigationSpan(client, {
+ name: currentPathname,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'navigation',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': navigationType,
+ },
+ });
+
+ // Store ref so popstate listener can update it instead of creating a duplicate
+ currentNumericNavigationSpan = navigationSpan;
+ }
+
+ try {
+ const result = await callNavigate();
+
+ if (navigationSpan && WINDOW.location) {
+ navigationSpan.updateName(WINDOW.location.pathname);
+ }
+
+ if (result.status === 'error' && result.error instanceof Error) {
+ if (navigationSpan) {
+ navigationSpan.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ }
+ captureInstrumentationError(result, captureErrors, 'react_router.navigate', {
+ 'http.url': WINDOW.location?.pathname || info.currentUrl,
+ });
+ }
+ } finally {
+ currentNumericNavigationSpan = undefined;
+ }
+ return;
+ }
+
+ // Handle string navigations (e.g., navigate('/about'))
+ const client = getClient();
+ const toPath = String(info.to);
+ let navigationSpan;
+
+ if (client) {
+ navigationSpan = startBrowserTracingNavigationSpan(client, {
+ name: toPath,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'navigation',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': 'router.navigate',
+ },
+ });
+ }
+
+ const result = await callNavigate();
+ if (result.status === 'error' && result.error instanceof Error) {
+ if (navigationSpan) {
+ navigationSpan.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ }
+ captureInstrumentationError(result, captureErrors, 'react_router.navigate', {
+ 'http.url': toPath,
+ });
+ }
+ return;
+ },
+
+ async fetch(callFetch, info) {
+ await startSpan(
+ {
+ name: `Fetcher ${info.fetcherKey}`,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.fetcher',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callFetch();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.fetcher', {
+ 'http.url': info.href,
+ });
+ }
+ },
+ );
+ },
+ });
+ },
+
+ route(route: InstrumentableRoute) {
+ route.instrument({
+ async loader(callLoader, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const routePattern = normalizeRoutePath(getPattern(info)) || urlPath;
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.client_loader',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callLoader();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.client_loader', {
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async action(callAction, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const routePattern = normalizeRoutePath(getPattern(info)) || urlPath;
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.client_action',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callAction();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.client_action', {
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async middleware(callMiddleware, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const routePattern = normalizeRoutePath(getPattern(info)) || urlPath;
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.client_middleware',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callMiddleware();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.client_middleware', {
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async lazy(callLazy) {
+ await startSpan(
+ {
+ name: 'Lazy Route Load',
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.client_lazy',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callLazy();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.client_lazy', {});
+ }
+ },
+ );
+ },
+ });
+ },
+ };
+}
+
+/**
+ * Check if React Router's instrumentation API is being used on the client.
+ * @experimental
+ */
+export function isClientInstrumentationApiUsed(): boolean {
+ return !!GLOBAL_WITH_FLAGS[SENTRY_CLIENT_INSTRUMENTATION_FLAG];
+}
+
+/**
+ * Check if React Router's instrumentation API's navigate hook was invoked.
+ * @experimental
+ */
+export function isNavigateHookInvoked(): boolean {
+ return !!GLOBAL_WITH_FLAGS[SENTRY_NAVIGATE_HOOK_INVOKED_FLAG];
+}
diff --git a/packages/react-router/src/client/hydratedRouter.ts b/packages/react-router/src/client/hydratedRouter.ts
index 14cdf07a33c9..499e1fcc1751 100644
--- a/packages/react-router/src/client/hydratedRouter.ts
+++ b/packages/react-router/src/client/hydratedRouter.ts
@@ -1,7 +1,7 @@
import { startBrowserTracingNavigationSpan } from '@sentry/browser';
import type { Span } from '@sentry/core';
import {
- consoleSandbox,
+ debug,
getActiveSpan,
getClient,
getRootSpan,
@@ -13,6 +13,7 @@ import {
} from '@sentry/core';
import type { DataRouter, RouterState } from 'react-router';
import { DEBUG_BUILD } from '../common/debug-build';
+import { isClientInstrumentationApiUsed } from './createClientInstrumentation';
const GLOBAL_OBJ_WITH_DATA_ROUTER = GLOBAL_OBJ as typeof GLOBAL_OBJ & {
__reactRouterDataRouter?: DataRouter;
@@ -34,7 +35,6 @@ export function instrumentHydratedRouter(): void {
if (router) {
// The first time we hit the router, we try to update the pageload transaction
- // todo: update pageload tx here
const pageloadSpan = getActiveRootSpan();
if (pageloadSpan) {
@@ -51,18 +51,18 @@ export function instrumentHydratedRouter(): void {
[SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.pageload.react_router',
});
}
+ }
- // Patching navigate for creating accurate navigation transactions
- if (typeof router.navigate === 'function') {
- const originalNav = router.navigate.bind(router);
- router.navigate = function sentryPatchedNavigate(...args) {
- maybeCreateNavigationTransaction(
- String(args[0]) || '', // will be updated anyway
- 'url', // this also will be updated once we have the parameterized route
- );
- return originalNav(...args);
- };
- }
+ // Patching navigate for creating accurate navigation transactions
+ if (typeof router.navigate === 'function') {
+ const originalNav = router.navigate.bind(router);
+ router.navigate = function sentryPatchedNavigate(...args) {
+ // Skip if instrumentation API is enabled (it handles navigation spans itself)
+ if (!isClientInstrumentationApiUsed()) {
+ maybeCreateNavigationTransaction(String(args[0]) || '', 'url');
+ }
+ return originalNav(...args);
+ };
}
// Subscribe to router state changes to update navigation transactions with parameterized routes
@@ -79,7 +79,8 @@ export function instrumentHydratedRouter(): void {
if (
navigationSpanName &&
newState.navigation.state === 'idle' && // navigation has completed
- normalizePathname(newState.location.pathname) === normalizePathname(navigationSpanName) // this event is for the currently active navigation
+ // this event is for the currently active navigation
+ normalizePathname(newState.location.pathname) === normalizePathname(navigationSpanName)
) {
navigationSpan.updateName(parameterizedNavRoute);
navigationSpan.setAttributes({
@@ -100,11 +101,7 @@ export function instrumentHydratedRouter(): void {
const interval = setInterval(() => {
if (trySubscribe() || retryCount >= MAX_RETRIES) {
if (retryCount >= MAX_RETRIES) {
- DEBUG_BUILD &&
- consoleSandbox(() => {
- // eslint-disable-next-line no-console
- console.warn('Unable to instrument React Router: router not found after hydration.');
- });
+ DEBUG_BUILD && debug.warn('Unable to instrument React Router: router not found after hydration.');
}
clearInterval(interval);
}
diff --git a/packages/react-router/src/client/index.ts b/packages/react-router/src/client/index.ts
index ba5c1c1264cb..6734b21c8583 100644
--- a/packages/react-router/src/client/index.ts
+++ b/packages/react-router/src/client/index.ts
@@ -4,7 +4,11 @@
export * from '@sentry/browser';
export { init } from './sdk';
-export { reactRouterTracingIntegration } from './tracingIntegration';
+export {
+ reactRouterTracingIntegration,
+ type ReactRouterTracingIntegration,
+ type ReactRouterTracingIntegrationOptions,
+} from './tracingIntegration';
export { captureReactException, reactErrorHandler, Profiler, withProfiler, useProfiler } from '@sentry/react';
@@ -19,3 +23,11 @@ export { ErrorBoundary, withErrorBoundary } from '@sentry/react';
* See https://docs.sentry.io/platforms/javascript/guides/react-router/#report-errors-from-error-boundaries
*/
export type { ErrorBoundaryProps, FallbackRender } from '@sentry/react';
+
+// React Router instrumentation API for use with unstable_instrumentations (React Router 7.x)
+export {
+ createSentryClientInstrumentation,
+ isClientInstrumentationApiUsed,
+ isNavigateHookInvoked,
+ type CreateSentryClientInstrumentationOptions,
+} from './createClientInstrumentation';
diff --git a/packages/react-router/src/client/tracingIntegration.ts b/packages/react-router/src/client/tracingIntegration.ts
index 01b71f36d92a..a711eb986508 100644
--- a/packages/react-router/src/client/tracingIntegration.ts
+++ b/packages/react-router/src/client/tracingIntegration.ts
@@ -1,17 +1,68 @@
import { browserTracingIntegration as originalBrowserTracingIntegration } from '@sentry/browser';
import type { Integration } from '@sentry/core';
+import type { ClientInstrumentation } from '../common/types';
+import {
+ createSentryClientInstrumentation,
+ type CreateSentryClientInstrumentationOptions,
+} from './createClientInstrumentation';
import { instrumentHydratedRouter } from './hydratedRouter';
+/**
+ * Options for the React Router tracing integration.
+ */
+export interface ReactRouterTracingIntegrationOptions {
+ /**
+ * Options for React Router's instrumentation API.
+ * @experimental
+ */
+ instrumentationOptions?: CreateSentryClientInstrumentationOptions;
+
+ /**
+ * Enable React Router's instrumentation API.
+ * When true, prepares for use with HydratedRouter's `unstable_instrumentations` prop.
+ * @experimental
+ * @default false
+ */
+ useInstrumentationAPI?: boolean;
+}
+
+/**
+ * React Router tracing integration with support for the instrumentation API.
+ */
+export interface ReactRouterTracingIntegration extends Integration {
+ /**
+ * Client instrumentation for React Router's instrumentation API.
+ * Lazily initialized on first access.
+ * @experimental HydratedRouter doesn't invoke these hooks in Framework Mode yet.
+ */
+ readonly clientInstrumentation: ClientInstrumentation;
+}
+
/**
* Browser tracing integration for React Router (Framework) applications.
- * This integration will create navigation spans and enhance transactions names with parameterized routes.
+ * This integration will create navigation spans and enhance transaction names with parameterized routes.
*/
-export function reactRouterTracingIntegration(): Integration {
+export function reactRouterTracingIntegration(
+ options: ReactRouterTracingIntegrationOptions = {},
+): ReactRouterTracingIntegration {
const browserTracingIntegrationInstance = originalBrowserTracingIntegration({
// Navigation transactions are started within the hydrated router instrumentation
instrumentNavigation: false,
});
+ let clientInstrumentationInstance: ClientInstrumentation | undefined;
+
+ if (options.useInstrumentationAPI || options.instrumentationOptions) {
+ clientInstrumentationInstance = createSentryClientInstrumentation(options.instrumentationOptions);
+ }
+
+ const getClientInstrumentation = (): ClientInstrumentation => {
+ if (!clientInstrumentationInstance) {
+ clientInstrumentationInstance = createSentryClientInstrumentation(options.instrumentationOptions);
+ }
+ return clientInstrumentationInstance;
+ };
+
return {
...browserTracingIntegrationInstance,
name: 'ReactRouterTracingIntegration',
@@ -19,5 +70,8 @@ export function reactRouterTracingIntegration(): Integration {
browserTracingIntegrationInstance.afterAllSetup(client);
instrumentHydratedRouter();
},
+ get clientInstrumentation(): ClientInstrumentation {
+ return getClientInstrumentation();
+ },
};
}
diff --git a/packages/react-router/src/common/types.ts b/packages/react-router/src/common/types.ts
new file mode 100644
index 000000000000..23cbb174f167
--- /dev/null
+++ b/packages/react-router/src/common/types.ts
@@ -0,0 +1,96 @@
+/**
+ * Types for React Router's instrumentation API.
+ *
+ * Derived from React Router v7.x `unstable_instrumentations` API.
+ * The stable `instrumentations` API is planned for React Router v8.
+ * If React Router changes these types, this file must be updated.
+ *
+ * @see https://reactrouter.com/how-to/instrumentation
+ * @experimental
+ */
+
+export type InstrumentationResult = { status: 'success'; error: undefined } | { status: 'error'; error: unknown };
+
+export interface ReadonlyRequest {
+ method: string;
+ url: string;
+  headers: Pick<Headers, 'get'>;
+}
+
+export interface RouteHandlerInstrumentationInfo {
+ readonly request: ReadonlyRequest;
+  readonly params: Record<string, string | undefined>;
+ readonly pattern?: string;
+ readonly unstable_pattern?: string;
+ readonly context?: unknown;
+}
+
+export interface RouterNavigationInstrumentationInfo {
+ readonly to: string | number;
+ readonly currentUrl: string;
+ readonly formMethod?: string;
+ readonly formEncType?: string;
+ readonly formData?: FormData;
+ readonly body?: unknown;
+}
+
+export interface RouterFetchInstrumentationInfo {
+ readonly href: string;
+ readonly currentUrl: string;
+ readonly fetcherKey: string;
+ readonly formMethod?: string;
+ readonly formEncType?: string;
+ readonly formData?: FormData;
+ readonly body?: unknown;
+}
+
+export interface RequestHandlerInstrumentationInfo {
+ readonly request: Request;
+ readonly context: unknown;
+}
+
+export type InstrumentFunction<T = void> = (handler: () => Promise<InstrumentationResult>, info: T) => Promise<void>;
+
+export interface RouteInstrumentations {
+  lazy?: InstrumentFunction;
+  'lazy.loader'?: InstrumentFunction;
+  'lazy.action'?: InstrumentFunction;
+  'lazy.middleware'?: InstrumentFunction;
+  middleware?: InstrumentFunction<RouteHandlerInstrumentationInfo>;
+  loader?: InstrumentFunction<RouteHandlerInstrumentationInfo>;
+  action?: InstrumentFunction<RouteHandlerInstrumentationInfo>;
+}
+
+export interface RouterInstrumentations {
+  navigate?: InstrumentFunction<RouterNavigationInstrumentationInfo>;
+  fetch?: InstrumentFunction<RouterFetchInstrumentationInfo>;
+}
+
+export interface RequestHandlerInstrumentations {
+  request?: InstrumentFunction<RequestHandlerInstrumentationInfo>;
+}
+
+export interface InstrumentableRoute {
+ id: string;
+ index: boolean | undefined;
+ path: string | undefined;
+ instrument(instrumentations: RouteInstrumentations): void;
+}
+
+export interface InstrumentableRouter {
+ instrument(instrumentations: RouterInstrumentations): void;
+}
+
+export interface InstrumentableRequestHandler {
+ instrument(instrumentations: RequestHandlerInstrumentations): void;
+}
+
+export interface ClientInstrumentation {
+ router?(router: InstrumentableRouter): void;
+ route?(route: InstrumentableRoute): void;
+}
+
+export interface ServerInstrumentation {
+ handler?(handler: InstrumentableRequestHandler): void;
+ route?(route: InstrumentableRoute): void;
+}
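The `InstrumentationResult` union and `InstrumentFunction` shape defined in `types.ts` drive every hook in this PR: an instrument function awaits the handler it is given and branches on the discriminated `status` field. A standalone sketch of that contract (the types are copied from above; `record`/`demo` are illustrative, not SDK code):

```typescript
// Discriminated result type, as declared in common/types.ts.
type InstrumentationResult =
  | { status: 'success'; error: undefined }
  | { status: 'error'; error: unknown };

// Hook shape: call the handler, then inspect its result.
type InstrumentFunction<T = void> = (
  handler: () => Promise<InstrumentationResult>,
  info: T,
) => Promise<void>;

const seen: string[] = [];

// A minimal instrumentation mirroring what the loader hook does:
// run the handler, then record success or a captured Error.
const record: InstrumentFunction<{ pattern?: string }> = async (handler, info) => {
  const result = await handler();
  if (result.status === 'error' && result.error instanceof Error) {
    seen.push(`error at ${info.pattern ?? '<no pattern>'}: ${result.error.message}`);
  } else {
    seen.push(`ok at ${info.pattern ?? '<no pattern>'}`);
  }
};

async function demo(): Promise<string[]> {
  await record(async () => ({ status: 'success', error: undefined }), { pattern: '/users/:id' });
  await record(async () => ({ status: 'error', error: new Error('boom') }), {});
  return seen;
}
```

Note the `instanceof Error` narrowing: `error` is `unknown`, so each hook in the diff checks it before capturing.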
diff --git a/packages/react-router/src/common/utils.ts b/packages/react-router/src/common/utils.ts
new file mode 100644
index 000000000000..1585d00fd635
--- /dev/null
+++ b/packages/react-router/src/common/utils.ts
@@ -0,0 +1,61 @@
+import { captureException, debug } from '@sentry/core';
+import { DEBUG_BUILD } from './debug-build';
+import type { InstrumentationResult } from './types';
+
+/**
+ * Extracts pathname from request URL.
+ * Falls back to an empty string (and logs a debug warning) if the URL cannot be parsed.
+ */
+export function getPathFromRequest(request: { url: string }): string {
+ try {
+ return new URL(request.url).pathname;
+ } catch {
+ try {
+ // Fallback: use a dummy base URL since we only care about the pathname
+ return new URL(request.url, 'http://example.com').pathname;
+ } catch (error) {
+ DEBUG_BUILD && debug.warn('Failed to parse URL from request:', request.url, error);
+ return '';
+ }
+ }
+}
+
+/**
+ * Extracts route pattern from instrumentation info.
+ * Prefers `pattern` (planned for v8) over `unstable_pattern` (v7.x).
+ */
+export function getPattern(info: { pattern?: string; unstable_pattern?: string }): string | undefined {
+ return info.pattern ?? info.unstable_pattern;
+}
+
+/**
+ * Normalizes route path by ensuring it starts with a slash.
+ * Returns undefined if the input is falsy.
+ */
+export function normalizeRoutePath(pattern?: string): string | undefined {
+ if (!pattern) {
+ return undefined;
+ }
+ return pattern.startsWith('/') ? pattern : `/${pattern}`;
+}
+
+/**
+ * Captures an error from instrumentation result.
+ * Caller must verify result contains an Error before calling.
+ */
+export function captureInstrumentationError(
+ result: InstrumentationResult,
+ captureErrors: boolean,
+ mechanismType: string,
+  data: Record<string, unknown>,
+): void {
+ if (captureErrors) {
+ captureException(result.error, {
+ mechanism: {
+ type: mechanismType,
+ handled: false,
+ data,
+ },
+ });
+ }
+}
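The two pure helpers in `utils.ts` can be exercised in isolation. This is a standalone re-implementation (minus the debug logging) showing the URL-parsing fallback and the slash normalization:

```typescript
// Extracts pathname; absolute URLs parse directly, relative URLs need a base.
function getPathFromRequest(request: { url: string }): string {
  try {
    return new URL(request.url).pathname;
  } catch {
    try {
      // Only the pathname is used, so any dummy base works for relative URLs.
      return new URL(request.url, 'http://example.com').pathname;
    } catch {
      return '';
    }
  }
}

// Ensures a leading slash; passes undefined/empty input through as undefined.
function normalizeRoutePath(pattern?: string): string | undefined {
  if (!pattern) {
    return undefined;
  }
  return pattern.startsWith('/') ? pattern : `/${pattern}`;
}
```

The fallback matters because React Router may hand the hooks a relative `request.url` depending on environment; both branches failing (e.g. a scheme with no host) yields `''`, which downstream code treats as "don't rename the span".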
diff --git a/packages/react-router/src/server/createServerInstrumentation.ts b/packages/react-router/src/server/createServerInstrumentation.ts
new file mode 100644
index 000000000000..3fceca6a4ff7
--- /dev/null
+++ b/packages/react-router/src/server/createServerInstrumentation.ts
@@ -0,0 +1,248 @@
+import { context } from '@opentelemetry/api';
+import { getRPCMetadata, RPCType } from '@opentelemetry/core';
+import { ATTR_HTTP_ROUTE } from '@opentelemetry/semantic-conventions';
+import {
+ debug,
+ flushIfServerless,
+ getActiveSpan,
+ getCurrentScope,
+ getRootSpan,
+ SEMANTIC_ATTRIBUTE_SENTRY_OP,
+ SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
+ SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
+ SPAN_STATUS_ERROR,
+ startSpan,
+ updateSpanName,
+} from '@sentry/core';
+import { DEBUG_BUILD } from '../common/debug-build';
+import type { InstrumentableRequestHandler, InstrumentableRoute, ServerInstrumentation } from '../common/types';
+import { captureInstrumentationError, getPathFromRequest, getPattern, normalizeRoutePath } from '../common/utils';
+import { markInstrumentationApiUsed } from './serverGlobals';
+
+// Re-export for backward compatibility and external use
+export { isInstrumentationApiUsed } from './serverGlobals';
+
+/**
+ * Options for creating Sentry server instrumentation.
+ */
+export interface CreateSentryServerInstrumentationOptions {
+ /**
+ * Whether to capture errors from loaders/actions automatically.
+ * @default true
+ */
+ captureErrors?: boolean;
+}
+
+/**
+ * Creates a Sentry server instrumentation for React Router's instrumentation API.
+ * @experimental
+ */
+export function createSentryServerInstrumentation(
+ options: CreateSentryServerInstrumentationOptions = {},
+): ServerInstrumentation {
+ const { captureErrors = true } = options;
+
+ markInstrumentationApiUsed();
+ DEBUG_BUILD && debug.log('React Router server instrumentation API enabled.');
+
+ return {
+ handler(handler: InstrumentableRequestHandler) {
+ handler.instrument({
+ async request(handleRequest, info) {
+ const pathname = getPathFromRequest(info.request);
+ const activeSpan = getActiveSpan();
+ const existingRootSpan = activeSpan ? getRootSpan(activeSpan) : undefined;
+
+ if (existingRootSpan) {
+ updateSpanName(existingRootSpan, `${info.request.method} ${pathname}`);
+ existingRootSpan.setAttributes({
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api',
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+ });
+
+ try {
+ const result = await handleRequest();
+ if (result.status === 'error' && result.error instanceof Error) {
+ existingRootSpan.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.request_handler', {
+ 'http.method': info.request.method,
+ 'http.url': pathname,
+ });
+ }
+ } finally {
+ await flushIfServerless();
+ }
+ } else {
+ await startSpan(
+ {
+ name: `${info.request.method} ${pathname}`,
+ forceTransaction: true,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'http.server',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.instrumentation_api',
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'url',
+ 'http.request.method': info.request.method,
+ 'url.path': pathname,
+ 'url.full': info.request.url,
+ },
+ },
+ async span => {
+ try {
+ const result = await handleRequest();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.request_handler', {
+ 'http.method': info.request.method,
+ 'http.url': pathname,
+ });
+ }
+ } finally {
+ await flushIfServerless();
+ }
+ },
+ );
+ }
+ },
+ });
+ },
+
+ route(route: InstrumentableRoute) {
+ route.instrument({
+ async loader(callLoader, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const pattern = getPattern(info);
+ const routePattern = normalizeRoutePath(pattern) || urlPath;
+ updateRootSpanWithRoute(info.request.method, pattern, urlPath);
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.loader',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callLoader();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.loader', {
+ 'http.method': info.request.method,
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async action(callAction, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const pattern = getPattern(info);
+ const routePattern = normalizeRoutePath(pattern) || urlPath;
+ updateRootSpanWithRoute(info.request.method, pattern, urlPath);
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.action',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callAction();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.action', {
+ 'http.method': info.request.method,
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async middleware(callMiddleware, info) {
+ const urlPath = getPathFromRequest(info.request);
+ const pattern = getPattern(info);
+ const routePattern = normalizeRoutePath(pattern) || urlPath;
+
+ // Update root span with parameterized route (same as loader/action)
+ updateRootSpanWithRoute(info.request.method, pattern, urlPath);
+
+ await startSpan(
+ {
+ name: routePattern,
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.middleware',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callMiddleware();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.middleware', {
+ 'http.method': info.request.method,
+ 'http.url': urlPath,
+ });
+ }
+ },
+ );
+ },
+
+ async lazy(callLazy) {
+ await startSpan(
+ {
+ name: 'Lazy Route Load',
+ attributes: {
+ [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'function.react_router.lazy',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.function.react_router.instrumentation_api',
+ },
+ },
+ async span => {
+ const result = await callLazy();
+ if (result.status === 'error' && result.error instanceof Error) {
+ span.setStatus({ code: SPAN_STATUS_ERROR, message: 'internal_error' });
+ captureInstrumentationError(result, captureErrors, 'react_router.lazy', {});
+ }
+ },
+ );
+ },
+ });
+ },
+ };
+}
+
+function updateRootSpanWithRoute(method: string, pattern: string | undefined, urlPath: string): void {
+ const activeSpan = getActiveSpan();
+ if (!activeSpan) return;
+ const rootSpan = getRootSpan(activeSpan);
+ if (!rootSpan) return;
+
+ // Skip update if URL path is invalid (failed to parse)
+ if (!urlPath || urlPath === '') {
+ DEBUG_BUILD && debug.warn('Cannot update span with invalid URL path:', urlPath);
+ return;
+ }
+
+ const hasPattern = !!pattern;
+ const routeName = hasPattern ? normalizeRoutePath(pattern) || urlPath : urlPath;
+
+ const rpcMetadata = getRPCMetadata(context.active());
+ if (rpcMetadata?.type === RPCType.HTTP) {
+ rpcMetadata.route = routeName;
+ }
+
+ const transactionName = `${method} ${routeName}`;
+ updateSpanName(rootSpan, transactionName);
+ rootSpan.setAttributes({
+ [ATTR_HTTP_ROUTE]: routeName,
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: hasPattern ? 'route' : 'url',
+ });
+
+ // Also update the scope's transaction name so errors captured during this request
+ // have the correct transaction name (not the initial placeholder like "GET *")
+ getCurrentScope().setTransactionName(transactionName);
+}
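The naming decision in `updateRootSpanWithRoute` reduces to a pure computation: bail on an empty URL path, prefer the normalized route pattern, fall back to the raw path, and tag the transaction source accordingly. A standalone sketch of just that decision (function name is illustrative):

```typescript
// Computes the transaction name and sentry.source the way
// updateRootSpanWithRoute does, without touching any span.
function computeTransactionName(
  method: string,
  pattern: string | undefined,
  urlPath: string,
): { name: string; source: 'route' | 'url' } | undefined {
  if (!urlPath) {
    // Mirrors the early return when URL parsing failed upstream.
    return undefined;
  }
  const normalized = pattern ? (pattern.startsWith('/') ? pattern : `/${pattern}`) : undefined;
  const routeName = normalized ?? urlPath;
  return { name: `${method} ${routeName}`, source: pattern ? 'route' : 'url' };
}
```

Keeping `source: 'url'` for the fallback case is what prevents unparameterized paths from being treated as grouped routes downstream.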
diff --git a/packages/react-router/src/server/index.ts b/packages/react-router/src/server/index.ts
index acca80a94d81..e0b8c8981632 100644
--- a/packages/react-router/src/server/index.ts
+++ b/packages/react-router/src/server/index.ts
@@ -11,3 +11,10 @@ export { wrapServerAction } from './wrapServerAction';
export { wrapServerLoader } from './wrapServerLoader';
export { createSentryHandleError, type SentryHandleErrorOptions } from './createSentryHandleError';
export { getMetaTagTransformer } from './getMetaTagTransformer';
+
+// React Router instrumentation API support (works with both unstable_instrumentations and instrumentations)
+export {
+ createSentryServerInstrumentation,
+ isInstrumentationApiUsed,
+ type CreateSentryServerInstrumentationOptions,
+} from './createServerInstrumentation';
diff --git a/packages/react-router/src/server/instrumentation/reactRouter.ts b/packages/react-router/src/server/instrumentation/reactRouter.ts
index 708b9857015b..2f24d2c7bcb7 100644
--- a/packages/react-router/src/server/instrumentation/reactRouter.ts
+++ b/packages/react-router/src/server/instrumentation/reactRouter.ts
@@ -15,6 +15,7 @@ import {
} from '@sentry/core';
import type * as reactRouter from 'react-router';
import { DEBUG_BUILD } from '../../common/debug-build';
+import { isInstrumentationApiUsed } from '../serverGlobals';
import { getOpName, getSpanName, isDataRequest } from './util';
type ReactRouterModuleExports = typeof reactRouter;
@@ -76,6 +77,13 @@ export class ReactRouterInstrumentation extends InstrumentationBase {
return {
name: INTEGRATION_NAME,
setupOnce() {
+ // Skip OTEL patching if the instrumentation API is in use
+ if (isInstrumentationApiUsed()) {
+ return;
+ }
+
if (
(NODE_VERSION.major === 20 && NODE_VERSION.minor < 19) || // https://nodejs.org/en/blog/release/v20.19.0
(NODE_VERSION.major === 22 && NODE_VERSION.minor < 12) // https://nodejs.org/en/blog/release/v22.12.0
@@ -36,13 +42,17 @@ export const reactRouterServerIntegration = defineIntegration(() => {
if (
event.type === 'transaction' &&
event.contexts?.trace?.data &&
- event.contexts.trace.data[ATTR_HTTP_ROUTE] === '*' &&
- // This means the name has been adjusted before, but the http.route remains, so we need to remove it
- event.transaction !== 'GET *' &&
- event.transaction !== 'POST *'
+ event.contexts.trace.data[ATTR_HTTP_ROUTE] === '*'
) {
- // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
- delete event.contexts.trace.data[ATTR_HTTP_ROUTE];
+ const origin = event.contexts.trace.origin;
+ const isInstrumentationApiOrigin = origin?.includes('instrumentation_api');
+
+ // For instrumentation_api, always clean up bogus `*` route since we set better names
+ // For legacy, only clean up if the name has been adjusted (not METHOD *)
+ if (isInstrumentationApiOrigin || !event.transaction?.endsWith(' *')) {
+ // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
+ delete event.contexts.trace.data[ATTR_HTTP_ROUTE];
+ }
}
return event;
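The event-processor hunk above replaces two hard-coded transaction-name checks with a predicate: the bogus `*` `http.route` is dropped either when the instrumentation-API origin is present, or when the legacy path has already renamed the transaction away from `METHOD *`. A standalone sketch of that predicate (name is illustrative):

```typescript
// Returns true when the event processor should delete the http.route attribute.
function shouldDeleteHttpRoute(
  httpRoute: string | undefined,
  origin: string | undefined,
  transaction: string | undefined,
): boolean {
  if (httpRoute !== '*') {
    return false; // only the catch-all placeholder route is ever cleaned up
  }
  const isInstrumentationApiOrigin = origin?.includes('instrumentation_api') ?? false;
  // Instrumentation API: always clean up, since it sets better names itself.
  // Legacy: only clean up once the transaction is no longer "METHOD *".
  return isInstrumentationApiOrigin || !transaction?.endsWith(' *');
}
```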
diff --git a/packages/react-router/src/server/serverGlobals.ts b/packages/react-router/src/server/serverGlobals.ts
new file mode 100644
index 000000000000..33f96ab5f45a
--- /dev/null
+++ b/packages/react-router/src/server/serverGlobals.ts
@@ -0,0 +1,23 @@
+import { GLOBAL_OBJ } from '@sentry/core';
+
+const SENTRY_SERVER_INSTRUMENTATION_FLAG = '__sentryReactRouterServerInstrumentationUsed';
+
+type GlobalObjWithFlag = typeof GLOBAL_OBJ & {
+ [SENTRY_SERVER_INSTRUMENTATION_FLAG]?: boolean;
+};
+
+/**
+ * Mark that the React Router instrumentation API is being used on the server.
+ * @internal
+ */
+export function markInstrumentationApiUsed(): void {
+ (GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_SERVER_INSTRUMENTATION_FLAG] = true;
+}
+
+/**
+ * Check if React Router's instrumentation API is being used on the server.
+ * @experimental
+ */
+export function isInstrumentationApiUsed(): boolean {
+ return !!(GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_SERVER_INSTRUMENTATION_FLAG];
+}
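`serverGlobals.ts` coordinates the two instrumentation paths through a single boolean on the global object, so the legacy wrappers (`wrapServerLoader`, `wrapServerAction`, the OTEL patcher) can detect the new API even across module copies. A minimal standalone version of the pattern, with a simplified wrapper showing the short-circuit (names mirror the source; the wrapper body is illustrative):

```typescript
const FLAG = '__sentryReactRouterServerInstrumentationUsed';
type GlobalWithFlag = typeof globalThis & { [FLAG]?: boolean };

function markInstrumentationApiUsed(): void {
  (globalThis as GlobalWithFlag)[FLAG] = true;
}

function isInstrumentationApiUsed(): boolean {
  return !!(globalThis as GlobalWithFlag)[FLAG];
}

// Legacy-style wrapper that becomes a pass-through once the new API is active.
function wrapLoader<T>(loaderFn: () => Promise<T>): () => Promise<T> {
  return async () => {
    if (isInstrumentationApiUsed()) {
      return loaderFn(); // already instrumented elsewhere; avoid double spans
    }
    // ...span creation would happen here in the real wrapper...
    return loaderFn();
  };
}
```

Checking the flag at call time (inside the returned function) rather than at wrap time is deliberate: `createSentryServerInstrumentation` may run after routes were wrapped.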
diff --git a/packages/react-router/src/server/wrapSentryHandleRequest.ts b/packages/react-router/src/server/wrapSentryHandleRequest.ts
index 2e788637988f..9bf634a68505 100644
--- a/packages/react-router/src/server/wrapSentryHandleRequest.ts
+++ b/packages/react-router/src/server/wrapSentryHandleRequest.ts
@@ -4,11 +4,14 @@ import { ATTR_HTTP_ROUTE } from '@opentelemetry/semantic-conventions';
import {
flushIfServerless,
getActiveSpan,
+ getCurrentScope,
getRootSpan,
SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN,
SEMANTIC_ATTRIBUTE_SENTRY_SOURCE,
+ updateSpanName,
} from '@sentry/core';
import type { AppLoadContext, EntryContext, RouterContextProvider } from 'react-router';
+import { isInstrumentationApiUsed } from './serverGlobals';
type OriginalHandleRequestWithoutMiddleware = (
request: Request,
@@ -67,7 +70,8 @@ export function wrapSentryHandleRequest(
const rootSpan = activeSpan ? getRootSpan(activeSpan) : undefined;
if (parameterizedPath && rootSpan) {
- const routeName = `/${parameterizedPath}`;
+    // Normalize the route name - avoid '//' when the parameterized path already starts with a slash
+ const routeName = parameterizedPath.startsWith('/') ? parameterizedPath : `/${parameterizedPath}`;
// The express instrumentation writes on the rpcMetadata and that ends up stomping on the `http.route` attribute.
const rpcMetadata = getRPCMetadata(context.active());
@@ -76,12 +80,25 @@ export function wrapSentryHandleRequest(
rpcMetadata.route = routeName;
}
- // The span exporter picks up the `http.route` (ATTR_HTTP_ROUTE) attribute to set the transaction name
- rootSpan.setAttributes({
- [ATTR_HTTP_ROUTE]: routeName,
- [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'route',
- [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.request_handler',
- });
+ const transactionName = `${request.method} ${routeName}`;
+
+ updateSpanName(rootSpan, transactionName);
+ getCurrentScope().setTransactionName(transactionName);
+
+ // Set route attributes - acts as fallback for lazy-only routes when using instrumentation API
+ // Don't override origin when instrumentation API is used (preserve instrumentation_api origin)
+ if (isInstrumentationApiUsed()) {
+ rootSpan.setAttributes({
+ [ATTR_HTTP_ROUTE]: routeName,
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'route',
+ });
+ } else {
+ rootSpan.setAttributes({
+ [ATTR_HTTP_ROUTE]: routeName,
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'route',
+ [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'auto.http.react_router.request_handler',
+ });
+ }
}
try {
diff --git a/packages/react-router/src/server/wrapServerAction.ts b/packages/react-router/src/server/wrapServerAction.ts
index 991327a60d10..356237008650 100644
--- a/packages/react-router/src/server/wrapServerAction.ts
+++ b/packages/react-router/src/server/wrapServerAction.ts
@@ -1,6 +1,7 @@
import { SEMATTRS_HTTP_TARGET } from '@opentelemetry/semantic-conventions';
import type { SpanAttributes } from '@sentry/core';
import {
+ debug,
flushIfServerless,
getActiveSpan,
getRootSpan,
@@ -12,12 +13,17 @@ import {
updateSpanName,
} from '@sentry/core';
import type { ActionFunctionArgs } from 'react-router';
+import { DEBUG_BUILD } from '../common/debug-build';
+import { isInstrumentationApiUsed } from './serverGlobals';
type SpanOptions = {
name?: string;
attributes?: SpanAttributes;
};
+// Track if we've already warned about duplicate instrumentation
+let hasWarnedAboutDuplicateActionInstrumentation = false;
+
/**
* Wraps a React Router server action function with Sentry performance monitoring.
* @param options - Optional span configuration options including name, operation, description and attributes
@@ -37,8 +43,23 @@ type SpanOptions = {
* );
* ```
*/
-export function wrapServerAction<T>(options: SpanOptions = {}, actionFn: (args: ActionFunctionArgs) => Promise<T>) {
-  return async function (args: ActionFunctionArgs) {
+export function wrapServerAction<T>(
+  options: SpanOptions = {},
+  actionFn: (args: ActionFunctionArgs) => Promise<T>,
+): (args: ActionFunctionArgs) => Promise<T> {
+  return async function (args: ActionFunctionArgs): Promise<T> {
+    // Skip instrumentation if instrumentation API is already handling it
+    if (isInstrumentationApiUsed()) {
+ // Skip instrumentation if instrumentation API is already handling it
+ if (isInstrumentationApiUsed()) {
+ if (DEBUG_BUILD && !hasWarnedAboutDuplicateActionInstrumentation) {
+ hasWarnedAboutDuplicateActionInstrumentation = true;
+ debug.warn(
+ 'wrapServerAction is redundant when using the instrumentation API. ' +
+ 'The action is already instrumented automatically. You can safely remove wrapServerAction.',
+ );
+ }
+ return actionFn(args);
+ }
+
const name = options.name || 'Executing Server Action';
const active = getActiveSpan();
if (active) {
diff --git a/packages/react-router/src/server/wrapServerLoader.ts b/packages/react-router/src/server/wrapServerLoader.ts
index fc28d504637f..a3146d0de24a 100644
--- a/packages/react-router/src/server/wrapServerLoader.ts
+++ b/packages/react-router/src/server/wrapServerLoader.ts
@@ -1,6 +1,7 @@
import { SEMATTRS_HTTP_TARGET } from '@opentelemetry/semantic-conventions';
import type { SpanAttributes } from '@sentry/core';
import {
+ debug,
flushIfServerless,
getActiveSpan,
getRootSpan,
@@ -12,12 +13,17 @@ import {
updateSpanName,
} from '@sentry/core';
import type { LoaderFunctionArgs } from 'react-router';
+import { DEBUG_BUILD } from '../common/debug-build';
+import { isInstrumentationApiUsed } from './serverGlobals';
type SpanOptions = {
name?: string;
attributes?: SpanAttributes;
};
+// Track if we've already warned about duplicate instrumentation
+let hasWarnedAboutDuplicateLoaderInstrumentation = false;
+
/**
* Wraps a React Router server loader function with Sentry performance monitoring.
* @param options - Optional span configuration options including name, operation, description and attributes
@@ -37,8 +43,23 @@ type SpanOptions = {
* );
* ```
*/
-export function wrapServerLoader<T>(options: SpanOptions = {}, loaderFn: (args: LoaderFunctionArgs) => Promise<T>) {
-  return async function (args: LoaderFunctionArgs) {
+export function wrapServerLoader<T>(
+  options: SpanOptions = {},
+  loaderFn: (args: LoaderFunctionArgs) => Promise<T>,
+): (args: LoaderFunctionArgs) => Promise<T> {
+  return async function (args: LoaderFunctionArgs): Promise<T> {
+    // Skip instrumentation if instrumentation API is already handling it
+    if (isInstrumentationApiUsed()) {
+ // Skip instrumentation if instrumentation API is already handling it
+ if (isInstrumentationApiUsed()) {
+ if (DEBUG_BUILD && !hasWarnedAboutDuplicateLoaderInstrumentation) {
+ hasWarnedAboutDuplicateLoaderInstrumentation = true;
+ debug.warn(
+ 'wrapServerLoader is redundant when using the instrumentation API. ' +
+ 'The loader is already instrumented automatically. You can safely remove wrapServerLoader.',
+ );
+ }
+ return loaderFn(args);
+ }
+
const name = options.name || 'Executing Server Loader';
const active = getActiveSpan();
diff --git a/packages/react-router/src/vite/makeCustomSentryVitePlugins.ts b/packages/react-router/src/vite/makeCustomSentryVitePlugins.ts
index 80e540c9760a..c09b81ac632f 100644
--- a/packages/react-router/src/vite/makeCustomSentryVitePlugins.ts
+++ b/packages/react-router/src/vite/makeCustomSentryVitePlugins.ts
@@ -53,7 +53,7 @@ export async function makeCustomSentryVitePlugins(options: SentryReactRouterBuil
...sentryVitePlugins.filter(plugin => {
return [
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
+ 'sentry-vite-injection-plugin',
...(reactComponentAnnotation?.enabled || unstable_sentryVitePluginOptions?.reactComponentAnnotation?.enabled
? ['sentry-vite-component-name-annotate-plugin']
: []),
diff --git a/packages/react-router/test/client/createClientInstrumentation.test.ts b/packages/react-router/test/client/createClientInstrumentation.test.ts
new file mode 100644
index 000000000000..0078b2601c51
--- /dev/null
+++ b/packages/react-router/test/client/createClientInstrumentation.test.ts
@@ -0,0 +1,718 @@
+import * as browser from '@sentry/browser';
+import * as core from '@sentry/core';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+import {
+ createSentryClientInstrumentation,
+ isClientInstrumentationApiUsed,
+ isNavigateHookInvoked,
+} from '../../src/client/createClientInstrumentation';
+
+vi.mock('@sentry/core', async () => {
+ const actual = await vi.importActual('@sentry/core');
+ return {
+ ...actual,
+ startSpan: vi.fn(),
+ captureException: vi.fn(),
+ getClient: vi.fn(),
+ GLOBAL_OBJ: globalThis,
+ SEMANTIC_ATTRIBUTE_SENTRY_OP: 'sentry.op',
+ SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN: 'sentry.origin',
+ SEMANTIC_ATTRIBUTE_SENTRY_SOURCE: 'sentry.source',
+ };
+});
+
+vi.mock('@sentry/browser', () => ({
+ startBrowserTracingNavigationSpan: vi.fn().mockReturnValue({ setStatus: vi.fn() }),
+}));
+
+describe('createSentryClientInstrumentation', () => {
+ beforeEach(() => {
+ vi.clearAllMocks();
+ // Reset global flag
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ it('should create a valid client instrumentation object', () => {
+ const instrumentation = createSentryClientInstrumentation();
+
+ expect(instrumentation).toBeDefined();
+ expect(typeof instrumentation.router).toBe('function');
+ expect(typeof instrumentation.route).toBe('function');
+ });
+
+ it('should NOT set the global flag when created (only when router() is called)', () => {
+ expect((globalThis as any).__sentryReactRouterClientInstrumentationUsed).toBeUndefined();
+
+ createSentryClientInstrumentation();
+
+ // Flag should NOT be set just by creating instrumentation
+ // This is important for Framework Mode where router() is never called
+ expect((globalThis as any).__sentryReactRouterClientInstrumentationUsed).toBeUndefined();
+ });
+
+ it('should set the global flag when router() is called by React Router', () => {
+ expect((globalThis as any).__sentryReactRouterClientInstrumentationUsed).toBeUndefined();
+
+ const mockInstrument = vi.fn();
+ const instrumentation = createSentryClientInstrumentation();
+
+ // Flag should not be set yet
+ expect((globalThis as any).__sentryReactRouterClientInstrumentationUsed).toBeUndefined();
+
+ // When React Router calls router(), the flag should be set
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ expect((globalThis as any).__sentryReactRouterClientInstrumentationUsed).toBe(true);
+ });
+
+ it('should instrument router navigate with browser tracing span', async () => {
+ const mockCallNavigate = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+ const mockClient = {};
+
+ (core.getClient as any).mockReturnValue(mockClient);
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ expect(mockInstrument).toHaveBeenCalled();
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the navigate hook with proper info structure
+ await hooks.navigate(mockCallNavigate, {
+ currentUrl: '/home',
+ to: '/about',
+ });
+
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalledWith(mockClient, {
+ name: '/about',
+ attributes: expect.objectContaining({
+ 'sentry.source': 'url',
+ 'sentry.op': 'navigation',
+ 'sentry.origin': 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': 'router.navigate',
+ }),
+ });
+ expect(mockCallNavigate).toHaveBeenCalled();
+ });
+
+ it('should instrument router fetch with spans', async () => {
+ const mockCallFetch = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the fetch hook with proper info structure
+ await hooks.fetch(mockCallFetch, {
+ href: '/api/data',
+ currentUrl: '/home',
+ fetcherKey: 'fetcher-1',
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: 'Fetcher fetcher-1',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.fetcher',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ expect(mockCallFetch).toHaveBeenCalled();
+ });
+
+ it('should instrument route loader with spans', async () => {
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ // Route has id, index, path as required properties
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ expect(mockInstrument).toHaveBeenCalled();
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the loader hook with RouteHandlerInstrumentationInfo
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.client_loader',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ expect(mockCallLoader).toHaveBeenCalled();
+ });
+
+ it('should instrument route action with spans', async () => {
+ const mockCallAction = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the action hook with RouteHandlerInstrumentationInfo
+ await hooks.action(mockCallAction, {
+ request: { method: 'POST', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.client_action',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ });
+
+ it('should capture errors when captureErrors is true (default)', async () => {
+ const mockError = new Error('Test error');
+ // React Router returns an error result, not a rejection
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSpan = { setStatus: vi.fn() };
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn(mockSpan));
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/test-path', headers: { get: () => null } },
+ params: {},
+ unstable_pattern: '/test-path',
+ context: undefined,
+ });
+
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: { type: 'react_router.client_loader', handled: false, data: { 'http.url': '/test-path' } },
+ });
+
+ // Should also set span status to error for actual Error instances
+ expect(mockSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ });
+
+ it('should not capture errors when captureErrors is false', async () => {
+ const mockError = new Error('Test error');
+ // React Router returns an error result, not a rejection
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSpan = { setStatus: vi.fn() };
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn(mockSpan));
+
+ const instrumentation = createSentryClientInstrumentation({ captureErrors: false });
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/test-path', headers: { get: () => null } },
+ params: {},
+ unstable_pattern: '/test-path',
+ context: undefined,
+ });
+
+ expect(core.captureException).not.toHaveBeenCalled();
+
+ // Span status should still be set for Error instances (reflects actual state)
+ expect(mockSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ });
+
+ it('should capture navigate errors and set span status', async () => {
+ const mockError = new Error('Navigation error');
+ // React Router returns an error result, not a rejection
+ const mockCallNavigate = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockNavigationSpan = { setStatus: vi.fn() };
+
+ (core.getClient as any).mockReturnValue({});
+ (browser.startBrowserTracingNavigationSpan as any).mockReturnValue(mockNavigationSpan);
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, {
+ currentUrl: '/home',
+ to: '/about',
+ });
+
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: { type: 'react_router.navigate', handled: false, data: { 'http.url': '/about' } },
+ });
+
+ // Should set span status to error
+ expect(mockNavigationSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ });
+
+ describe('numeric navigations (history back/forward)', () => {
+ const originalLocation = globalThis.location;
+
+ beforeEach(() => {
+ (globalThis as any).location = { pathname: '/current-page' };
+ });
+
+ afterEach(() => {
+ if (originalLocation) {
+ (globalThis as any).location = originalLocation;
+ } else {
+ delete (globalThis as any).location;
+ }
+ });
+
+ it.each([
+ { to: -1, expectedType: 'router.back', destination: '/previous-page' },
+ { to: -2, expectedType: 'router.back', destination: '/two-pages-back' },
+ { to: 1, expectedType: 'router.forward', destination: '/next-page' },
+ ])(
+ 'should create navigation span for navigate($to) with navigation.type $expectedType',
+ async ({ to, expectedType, destination }) => {
+ const mockCallNavigate = vi.fn().mockImplementation(async () => {
+ (globalThis as any).location.pathname = destination;
+ return { status: 'success', error: undefined };
+ });
+ const mockInstrument = vi.fn();
+ const mockNavigationSpan = { setStatus: vi.fn(), updateName: vi.fn() };
+ const mockClient = {};
+
+ (core.getClient as any).mockReturnValue(mockClient);
+ (browser.startBrowserTracingNavigationSpan as any).mockReturnValue(mockNavigationSpan);
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, { currentUrl: '/current-page', to });
+
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalledWith(mockClient, {
+ name: '/current-page',
+ attributes: expect.objectContaining({
+ 'sentry.source': 'url',
+ 'sentry.op': 'navigation',
+ 'sentry.origin': 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': expectedType,
+ }),
+ });
+ expect(mockNavigationSpan.updateName).toHaveBeenCalledWith(destination);
+ },
+ );
+
+ it('should skip span creation for navigate(0) since it triggers a page reload', async () => {
+ const mockCallNavigate = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.getClient as any).mockReturnValue({});
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, { currentUrl: '/current-page', to: 0 });
+
+ expect(browser.startBrowserTracingNavigationSpan).not.toHaveBeenCalled();
+ expect(mockCallNavigate).toHaveBeenCalled();
+ });
+
+ it('should set error status on span for failed numeric navigation', async () => {
+ const mockError = new Error('Navigation failed');
+ const mockCallNavigate = vi.fn().mockImplementation(async () => {
+ (globalThis as any).location.pathname = '/error-page';
+ return { status: 'error', error: mockError };
+ });
+ const mockInstrument = vi.fn();
+ const mockNavigationSpan = { setStatus: vi.fn(), updateName: vi.fn() };
+
+ (core.getClient as any).mockReturnValue({});
+ (browser.startBrowserTracingNavigationSpan as any).mockReturnValue(mockNavigationSpan);
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, { currentUrl: '/current-page', to: -1 });
+
+ expect(mockNavigationSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: { type: 'react_router.navigate', handled: false, data: { 'http.url': '/error-page' } },
+ });
+ });
+
+ it('should set navigate hook invoked flag for numeric navigations but NOT for navigate(0)', async () => {
+ const mockInstrument = vi.fn();
+ const mockNavigationSpan = { setStatus: vi.fn(), updateName: vi.fn() };
+
+ (core.getClient as any).mockReturnValue({});
+ (browser.startBrowserTracingNavigationSpan as any).mockReturnValue(mockNavigationSpan);
+
+ delete (globalThis as any).__sentryReactRouterNavigateHookInvoked;
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // navigate(0) should NOT set flag
+ await hooks.navigate(vi.fn().mockResolvedValue({ status: 'success', error: undefined }), {
+ currentUrl: '/current-page',
+ to: 0,
+ });
+ expect(isNavigateHookInvoked()).toBe(false);
+
+ // navigate(-1) should set flag
+ await hooks.navigate(vi.fn().mockResolvedValue({ status: 'success', error: undefined }), {
+ currentUrl: '/current-page',
+ to: -1,
+ });
+ expect(isNavigateHookInvoked()).toBe(true);
+ });
+ });
+
+ it('should fall back to URL pathname when unstable_pattern is undefined', async () => {
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call with undefined unstable_pattern - should fall back to pathname
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: undefined,
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/123',
+ }),
+ expect.any(Function),
+ );
+ });
+
+ it('should instrument route middleware with spans', async () => {
+ const mockCallMiddleware = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.middleware(mockCallMiddleware, {
+ request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.client_middleware',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ });
+
+ it('should instrument lazy route loading with spans', async () => {
+ const mockCallLazy = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.lazy(mockCallLazy, undefined);
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: 'Lazy Route Load',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.client_lazy',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ });
+
+ describe('popstate listener (browser back/forward button)', () => {
+ const originalLocation = globalThis.location;
+ const originalAddEventListener = globalThis.addEventListener;
+ let addEventListenerSpy: ReturnType<typeof vi.fn>;
+ let popstateHandler: (() => void) | null = null;
+
+ beforeEach(() => {
+ delete (globalThis as any).__sentryReactRouterPopstateListenerAdded;
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+
+ (globalThis as any).location = { pathname: '/current-page' };
+
+ popstateHandler = null;
+ addEventListenerSpy = vi.fn((event, handler) => {
+ if (event === 'popstate') {
+ popstateHandler = handler;
+ }
+ });
+ (globalThis as any).addEventListener = addEventListenerSpy;
+ });
+
+ afterEach(() => {
+ if (originalLocation) {
+ (globalThis as any).location = originalLocation;
+ } else {
+ delete (globalThis as any).location;
+ }
+ (globalThis as any).addEventListener = originalAddEventListener;
+ delete (globalThis as any).__sentryReactRouterPopstateListenerAdded;
+ });
+
+ it('should register popstate listener once when router() is called', () => {
+ const mockInstrument = vi.fn();
+ const instrumentation = createSentryClientInstrumentation();
+
+ instrumentation.router?.({ instrument: mockInstrument });
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ const popstateCalls = addEventListenerSpy.mock.calls.filter((call: unknown[]) => call[0] === 'popstate');
+ expect(popstateCalls.length).toBe(1);
+ });
+
+ it('should create navigation span with browser.popstate type on popstate event', () => {
+ const mockClient = {};
+ (core.getClient as any).mockReturnValue(mockClient);
+
+ const mockInstrument = vi.fn();
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ popstateHandler!();
+
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalledWith(mockClient, {
+ name: '/current-page',
+ attributes: expect.objectContaining({
+ 'sentry.source': 'url',
+ 'sentry.op': 'navigation',
+ 'sentry.origin': 'auto.navigation.react_router.instrumentation_api',
+ 'navigation.type': 'browser.popstate',
+ }),
+ });
+ });
+
+ it('should not create span on popstate when no client is available', () => {
+ (core.getClient as any).mockReturnValue(undefined);
+
+ const mockInstrument = vi.fn();
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ popstateHandler!();
+
+ expect(browser.startBrowserTracingNavigationSpan).not.toHaveBeenCalled();
+ });
+
+ it('should update existing numeric navigation span on popstate instead of creating duplicate', async () => {
+ const mockClient = {};
+ const mockNavigationSpan = {
+ setStatus: vi.fn(),
+ updateName: vi.fn(),
+ isRecording: vi.fn().mockReturnValue(true),
+ };
+
+ (core.getClient as any).mockReturnValue(mockClient);
+ (browser.startBrowserTracingNavigationSpan as any).mockReturnValue(mockNavigationSpan);
+
+ const mockCallNavigate = vi.fn().mockImplementation(async () => {
+ (globalThis as any).location.pathname = '/previous-page';
+ popstateHandler!();
+ return { status: 'success', error: undefined };
+ });
+ const mockInstrument = vi.fn();
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, { currentUrl: '/current-page', to: -1 });
+
+ // Only ONE span created (not two - no duplicate from popstate)
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalledTimes(1);
+ expect(mockNavigationSpan.updateName).toHaveBeenCalledWith('/previous-page');
+ });
+
+ it('should create new span on popstate when no numeric navigation is in progress', () => {
+ const mockClient = {};
+ (core.getClient as any).mockReturnValue(mockClient);
+
+ const mockInstrument = vi.fn();
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ // Direct popstate without navigate(-1) - simulates browser back button click
+ popstateHandler!();
+
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalledWith(mockClient, {
+ name: '/current-page',
+ attributes: expect.objectContaining({
+ 'navigation.type': 'browser.popstate',
+ }),
+ });
+ });
+ });
+});
+
+describe('isClientInstrumentationApiUsed', () => {
+ beforeEach(() => {
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ it('should return false when flag is not set', () => {
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+ });
+
+ it('should return true when flag is set', () => {
+ (globalThis as any).__sentryReactRouterClientInstrumentationUsed = true;
+ expect(isClientInstrumentationApiUsed()).toBe(true);
+ });
+
+ it('should return false after createSentryClientInstrumentation is called (flag set only when router() called)', () => {
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+ createSentryClientInstrumentation();
+ // Flag is NOT set just by creating instrumentation - it's set when router() is called
+ // This is important for Framework Mode where router() is never called
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+ });
+
+ it('should return true after router() is called', () => {
+ const mockInstrument = vi.fn();
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+ const instrumentation = createSentryClientInstrumentation();
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+ instrumentation.router?.({ instrument: mockInstrument });
+ expect(isClientInstrumentationApiUsed()).toBe(true);
+ });
+});
+
+describe('isNavigateHookInvoked', () => {
+ beforeEach(() => {
+ vi.clearAllMocks();
+ delete (globalThis as any).__sentryReactRouterNavigateHookInvoked;
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterNavigateHookInvoked;
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ it('should return false when flag is not set and true when set', () => {
+ expect(isNavigateHookInvoked()).toBe(false);
+ (globalThis as any).__sentryReactRouterNavigateHookInvoked = true;
+ expect(isNavigateHookInvoked()).toBe(true);
+ });
+
+ it('should set flag after navigate hook is invoked even without client', async () => {
+ const mockCallNavigate = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.getClient as any).mockReturnValue(undefined);
+
+ const instrumentation = createSentryClientInstrumentation();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ expect(isNavigateHookInvoked()).toBe(false);
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.navigate(mockCallNavigate, { currentUrl: '/home', to: '/about' });
+
+ expect(isNavigateHookInvoked()).toBe(true);
+ expect(browser.startBrowserTracingNavigationSpan).not.toHaveBeenCalled();
+ });
+});
diff --git a/packages/react-router/test/client/hydratedRouter.test.ts b/packages/react-router/test/client/hydratedRouter.test.ts
index 3e798e829566..457a701f835f 100644
--- a/packages/react-router/test/client/hydratedRouter.test.ts
+++ b/packages/react-router/test/client/hydratedRouter.test.ts
@@ -11,6 +11,9 @@ vi.mock('@sentry/core', async () => {
getRootSpan: vi.fn(),
spanToJSON: vi.fn(),
getClient: vi.fn(),
+ debug: {
+ warn: vi.fn(),
+ },
SEMANTIC_ATTRIBUTE_SENTRY_OP: 'op',
SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN: 'origin',
SEMANTIC_ATTRIBUTE_SENTRY_SOURCE: 'source',
@@ -108,4 +111,67 @@ describe('instrumentHydratedRouter', () => {
expect(mockNavigationSpan.updateName).not.toHaveBeenCalled();
expect(mockNavigationSpan.setAttributes).not.toHaveBeenCalled();
});
+
+ it('skips navigation span creation when client instrumentation API is enabled', () => {
+ // Simulate that the client instrumentation API is enabled
+ // (meaning the instrumentation API handles navigation spans and we should avoid double-counting)
+ (globalThis as any).__sentryReactRouterClientInstrumentationUsed = true;
+
+ instrumentHydratedRouter();
+ mockRouter.navigate('/bar');
+
+ // Should not create a navigation span because instrumentation API is handling it
+ expect(browser.startBrowserTracingNavigationSpan).not.toHaveBeenCalled();
+
+ // Clean up
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+ });
+
+ it('creates navigation span when client instrumentation API is not enabled', () => {
+ // Ensure the flag is not set (default state - instrumentation API not used)
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+
+ instrumentHydratedRouter();
+ mockRouter.navigate('/bar');
+
+ // Should create a navigation span because instrumentation API is not handling it
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalled();
+ });
+
+ it('creates navigation span in Framework Mode (flag not set means router() was never called)', () => {
+ // This is a regression test for Framework Mode (e.g., Remix) where:
+ // 1. createSentryClientInstrumentation() may be called during SDK init
+ // 2. But the framework doesn't support unstable_instrumentations, so router() is never called
+ // 3. In this case, the legacy navigation instrumentation should still create spans
+ //
+ // We simulate this by ensuring the flag is NOT set (since router() was never called)
+
+ // Ensure the flag is NOT set (simulating that router() was never called)
+ delete (globalThis as any).__sentryReactRouterClientInstrumentationUsed;
+
+ instrumentHydratedRouter();
+ mockRouter.navigate('/bar');
+
+ // Should create a navigation span via legacy instrumentation because
+ // the instrumentation API's router() method was never called
+ expect(browser.startBrowserTracingNavigationSpan).toHaveBeenCalled();
+ });
+
+ it('should warn when router is not found after max retries', () => {
+ vi.useFakeTimers();
+
+ // Remove the router to simulate it not being available
+ delete (globalThis as any).__reactRouterDataRouter;
+
+ instrumentHydratedRouter();
+
+ // Advance timers past MAX_RETRIES (40 retries × 50ms = 2000ms)
+ vi.advanceTimersByTime(2100);
+
+ expect(core.debug.warn).toHaveBeenCalledWith(
+ 'Unable to instrument React Router: router not found after hydration.',
+ );
+
+ vi.useRealTimers();
+ });
});
diff --git a/packages/react-router/test/client/tracingIntegration.test.ts b/packages/react-router/test/client/tracingIntegration.test.ts
index 2469c9b29db6..81a3360f1457 100644
--- a/packages/react-router/test/client/tracingIntegration.test.ts
+++ b/packages/react-router/test/client/tracingIntegration.test.ts
@@ -1,12 +1,23 @@
import * as sentryBrowser from '@sentry/browser';
import type { Client } from '@sentry/core';
+import { GLOBAL_OBJ } from '@sentry/core';
import { afterEach, describe, expect, it, vi } from 'vitest';
+import { isClientInstrumentationApiUsed } from '../../src/client/createClientInstrumentation';
import * as hydratedRouterModule from '../../src/client/hydratedRouter';
import { reactRouterTracingIntegration } from '../../src/client/tracingIntegration';
+// Global flag used by client instrumentation API
+const SENTRY_CLIENT_INSTRUMENTATION_FLAG = '__sentryReactRouterClientInstrumentationUsed';
+
+type GlobalObjWithFlag = typeof GLOBAL_OBJ & {
+ [SENTRY_CLIENT_INSTRUMENTATION_FLAG]?: boolean;
+};
+
describe('reactRouterTracingIntegration', () => {
afterEach(() => {
vi.clearAllMocks();
+ // Clean up global flag between tests
+ (GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG] = undefined;
});
it('returns an integration with the correct name and properties', () => {
@@ -28,4 +39,156 @@ describe('reactRouterTracingIntegration', () => {
expect(browserTracingSpy).toHaveBeenCalled();
expect(instrumentSpy).toHaveBeenCalled();
});
+
+ describe('clientInstrumentation', () => {
+ it('provides clientInstrumentation property', () => {
+ const integration = reactRouterTracingIntegration();
+
+ expect(integration.clientInstrumentation).toBeDefined();
+ });
+
+ it('lazily creates clientInstrumentation only when accessed', () => {
+ const integration = reactRouterTracingIntegration();
+
+ // Flag should not be set yet (lazy initialization)
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // Access the instrumentation
+ const instrumentation = integration.clientInstrumentation;
+
+ // Flag is still NOT set - it only gets set when router() is called by React Router
+ // This is important for Framework Mode where router() is never called
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+ expect(instrumentation).toBeDefined();
+ expect(typeof instrumentation.router).toBe('function');
+ expect(typeof instrumentation.route).toBe('function');
+
+ // Simulate React Router calling router() - this is what sets the flag
+ const mockInstrument = vi.fn();
+ instrumentation.router?.({ instrument: mockInstrument });
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBe(true);
+ });
+
+ it('returns the same clientInstrumentation instance on multiple accesses', () => {
+ const integration = reactRouterTracingIntegration();
+
+ const first = integration.clientInstrumentation;
+ const second = integration.clientInstrumentation;
+
+ expect(first).toBe(second);
+ });
+
+ it('passes options to createSentryClientInstrumentation', () => {
+ const integration = reactRouterTracingIntegration({
+ instrumentationOptions: {
+ captureErrors: false,
+ },
+ });
+
+ const instrumentation = integration.clientInstrumentation;
+
+ // The instrumentation is created - we can verify by checking it has the expected shape
+ expect(instrumentation).toBeDefined();
+ expect(typeof instrumentation.router).toBe('function');
+ expect(typeof instrumentation.route).toBe('function');
+ });
+
+ it('eagerly creates instrumentation when useInstrumentationAPI is true', () => {
+ // Flag should not be set before creating integration
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // Create integration with useInstrumentationAPI: true
+ const integration = reactRouterTracingIntegration({ useInstrumentationAPI: true });
+
+ // Flag should NOT be set just by creating integration - only when router() is called
+ // This is critical for Framework Mode where router() is never called
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // Verify instrumentation was eagerly created (accessible immediately)
+ expect(integration.clientInstrumentation).toBeDefined();
+
+ // Simulate React Router calling router() - this is what sets the flag
+ const mockInstrument = vi.fn();
+ integration.clientInstrumentation?.router?.({ instrument: mockInstrument });
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBe(true);
+ });
+
+ it('eagerly creates instrumentation when instrumentationOptions is provided', () => {
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ const integration = reactRouterTracingIntegration({ instrumentationOptions: {} });
+
+ // Flag should NOT be set just by creating integration - only when router() is called
+ // This is critical for Framework Mode where router() is never called
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // Verify instrumentation was eagerly created (accessible immediately)
+ expect(integration.clientInstrumentation).toBeDefined();
+
+ // Simulate React Router calling router() - this is what sets the flag
+ const mockInstrument = vi.fn();
+ integration.clientInstrumentation?.router?.({ instrument: mockInstrument });
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBe(true);
+ });
+
+ it('calls instrumentHydratedRouter when useInstrumentationAPI is true', () => {
+ vi.spyOn(sentryBrowser, 'browserTracingIntegration').mockImplementation(() => ({
+ setup: vi.fn(),
+ afterAllSetup: vi.fn(),
+ name: 'BrowserTracing',
+ }));
+ const instrumentSpy = vi.spyOn(hydratedRouterModule, 'instrumentHydratedRouter').mockImplementation(() => null);
+
+ // Create with useInstrumentationAPI - flag is set eagerly
+ const integration = reactRouterTracingIntegration({ useInstrumentationAPI: true });
+
+ // afterAllSetup runs
+ integration.afterAllSetup?.({} as Client);
+
+ // instrumentHydratedRouter is called for both pageload and navigation handling
+ // (In Framework Mode, HydratedRouter doesn't invoke client hooks, so legacy instrumentation remains active)
+ expect(instrumentSpy).toHaveBeenCalled();
+ });
+
+ it('Framework Mode regression: isClientInstrumentationApiUsed returns false when router() is never called', () => {
+ // This is a critical regression test for Framework Mode (e.g., Remix).
+ //
+ // Scenario:
+ // 1. User sets useInstrumentationAPI: true in reactRouterTracingIntegration options
+ // 2. createSentryClientInstrumentation() is called eagerly during SDK init
+ // 3. BUT in Framework Mode, React Router doesn't support unstable_instrumentations,
+ // so router() method is NEVER called by the framework
+ // 4. The SENTRY_CLIENT_INSTRUMENTATION_FLAG must NOT be set in this case
+ // 5. isClientInstrumentationApiUsed() must return false
+ // 6. This allows legacy instrumentation in hydratedRouter.ts to create navigation spans
+ //
+ // Without this behavior, Framework Mode would have ZERO navigation spans because:
+ // - The flag would be set (disabling legacy instrumentation)
+ // - But router() was never called (so instrumentation API doesn't create spans either)
+
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // Create integration with useInstrumentationAPI: true (simulating user config)
+ const integration = reactRouterTracingIntegration({ useInstrumentationAPI: true });
+
+ // Access the instrumentation (simulating what would happen during setup)
+ const instrumentation = integration.clientInstrumentation;
+ expect(instrumentation).toBeDefined();
+
+ // CRITICAL: Flag is NOT set because router() was never called
+ // This simulates Framework Mode where the framework doesn't call our hooks
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBeUndefined();
+
+ // isClientInstrumentationApiUsed() returns false - legacy instrumentation will work
+ expect(isClientInstrumentationApiUsed()).toBe(false);
+
+ // Now simulate what happens in Library Mode: React Router calls router()
+ const mockInstrument = vi.fn();
+ instrumentation.router?.({ instrument: mockInstrument });
+
+ // After router() is called, flag IS set and isClientInstrumentationApiUsed() returns true
+ expect((GLOBAL_OBJ as GlobalObjWithFlag)[SENTRY_CLIENT_INSTRUMENTATION_FLAG]).toBe(true);
+ expect(isClientInstrumentationApiUsed()).toBe(true);
+ });
+ });
});
diff --git a/packages/react-router/test/common/utils.test.ts b/packages/react-router/test/common/utils.test.ts
new file mode 100644
index 000000000000..3479744328ce
--- /dev/null
+++ b/packages/react-router/test/common/utils.test.ts
@@ -0,0 +1,109 @@
+import * as core from '@sentry/core';
+import { beforeEach, describe, expect, it, vi } from 'vitest';
+import {
+ captureInstrumentationError,
+ getPathFromRequest,
+ getPattern,
+ normalizeRoutePath,
+} from '../../src/common/utils';
+
+vi.mock('@sentry/core', async () => {
+ const actual = await vi.importActual('@sentry/core');
+ return {
+ ...actual,
+ captureException: vi.fn(),
+ };
+});
+
+describe('getPathFromRequest', () => {
+ it('should extract pathname from valid absolute URL', () => {
+ const request = { url: 'http://example.com/users/123' };
+ expect(getPathFromRequest(request)).toBe('/users/123');
+ });
+
+ it('should extract pathname from relative URL using dummy base', () => {
+ const request = { url: '/api/data' };
+ expect(getPathFromRequest(request)).toBe('/api/data');
+ });
+
+ it('should handle malformed URLs by treating them as relative paths', () => {
+ // The dummy base URL fallback handles most strings as relative paths
+ // This verifies the fallback works even for unusual URL strings
+ const request = { url: ':::invalid:::' };
+ expect(getPathFromRequest(request)).toBe('/:::invalid:::');
+ });
+
+ it('should handle URL with query string', () => {
+ const request = { url: 'http://example.com/search?q=test' };
+ expect(getPathFromRequest(request)).toBe('/search');
+ });
+
+ it('should handle URL with fragment', () => {
+ const request = { url: 'http://example.com/page#section' };
+ expect(getPathFromRequest(request)).toBe('/page');
+ });
+
+ it('should handle root path', () => {
+ const request = { url: 'http://example.com/' };
+ expect(getPathFromRequest(request)).toBe('/');
+ });
+});
+
+describe('getPattern', () => {
+ it('should prefer stable pattern over unstable_pattern', () => {
+ const info = { pattern: '/users/:id', unstable_pattern: '/old/:id' };
+ expect(getPattern(info)).toBe('/users/:id');
+ });
+
+ it('should fall back to unstable_pattern when pattern is undefined', () => {
+ const info = { unstable_pattern: '/users/:id' };
+ expect(getPattern(info)).toBe('/users/:id');
+ });
+
+ it('should return undefined when neither is available', () => {
+ const info = {};
+ expect(getPattern(info)).toBeUndefined();
+ });
+});
+
+describe('normalizeRoutePath', () => {
+ it('should add leading slash if missing', () => {
+ expect(normalizeRoutePath('users/:id')).toBe('/users/:id');
+ });
+
+ it('should keep existing leading slash', () => {
+ expect(normalizeRoutePath('/users/:id')).toBe('/users/:id');
+ });
+
+ it('should return undefined for falsy input', () => {
+ expect(normalizeRoutePath(undefined)).toBeUndefined();
+ expect(normalizeRoutePath('')).toBeUndefined();
+ });
+});
+
+describe('captureInstrumentationError', () => {
+ beforeEach(() => {
+ vi.clearAllMocks();
+ });
+
+ it('should capture error when captureErrors is true', () => {
+ const error = new Error('test error');
+ const result = { status: 'error' as const, error };
+ const data = { 'http.url': '/test' };
+
+ captureInstrumentationError(result, true, 'react_router.loader', data);
+
+ expect(core.captureException).toHaveBeenCalledWith(error, {
+ mechanism: { type: 'react_router.loader', handled: false, data },
+ });
+ });
+
+ it('should not capture error when captureErrors is false', () => {
+ const error = new Error('test error');
+ const result = { status: 'error' as const, error };
+
+ captureInstrumentationError(result, false, 'react_router.loader', {});
+
+ expect(core.captureException).not.toHaveBeenCalled();
+ });
+});
diff --git a/packages/react-router/test/server/createServerInstrumentation.test.ts b/packages/react-router/test/server/createServerInstrumentation.test.ts
new file mode 100644
index 000000000000..33eb73f48ace
--- /dev/null
+++ b/packages/react-router/test/server/createServerInstrumentation.test.ts
@@ -0,0 +1,470 @@
+import * as core from '@sentry/core';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+import {
+ createSentryServerInstrumentation,
+ isInstrumentationApiUsed,
+} from '../../src/server/createServerInstrumentation';
+
+vi.mock('@sentry/core', async () => {
+ const actual = await vi.importActual('@sentry/core');
+ return {
+ ...actual,
+ startSpan: vi.fn(),
+ captureException: vi.fn(),
+ flushIfServerless: vi.fn(),
+ getActiveSpan: vi.fn(),
+ getRootSpan: vi.fn(),
+ updateSpanName: vi.fn(),
+ GLOBAL_OBJ: globalThis,
+ SEMANTIC_ATTRIBUTE_SENTRY_OP: 'sentry.op',
+ SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN: 'sentry.origin',
+ SEMANTIC_ATTRIBUTE_SENTRY_SOURCE: 'sentry.source',
+ };
+});
+
+describe('createSentryServerInstrumentation', () => {
+ beforeEach(() => {
+ vi.clearAllMocks();
+ // Reset global flag
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ it('should create a valid server instrumentation object', () => {
+ const instrumentation = createSentryServerInstrumentation();
+
+ expect(instrumentation).toBeDefined();
+ expect(typeof instrumentation.handler).toBe('function');
+ expect(typeof instrumentation.route).toBe('function');
+ });
+
+ it('should set the global flag when created', () => {
+ expect((globalThis as any).__sentryReactRouterServerInstrumentationUsed).toBeUndefined();
+
+ createSentryServerInstrumentation();
+
+ expect((globalThis as any).__sentryReactRouterServerInstrumentationUsed).toBe(true);
+ });
+
+ it('should update root span with handler request attributes', async () => {
+ const mockRequest = new Request('http://example.com/test-path');
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+ const mockSetAttributes = vi.fn();
+ const mockRootSpan = { setAttributes: mockSetAttributes };
+
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue(mockRootSpan);
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ expect(mockInstrument).toHaveBeenCalled();
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the request hook with RequestHandlerInstrumentationInfo
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ // Should update the root span name and attributes
+ expect(core.updateSpanName).toHaveBeenCalledWith(mockRootSpan, 'GET /test-path');
+ expect(mockSetAttributes).toHaveBeenCalledWith({
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'url',
+ });
+ expect(mockHandleRequest).toHaveBeenCalled();
+ expect(core.flushIfServerless).toHaveBeenCalled();
+ });
+
+ it('should create own root span when no active span exists', async () => {
+ const mockRequest = new Request('http://example.com/api/users');
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ // No active span exists
+ (core.getActiveSpan as any).mockReturnValue(undefined);
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ // Should create a new root span with forceTransaction
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: 'GET /api/users',
+ forceTransaction: true,
+ attributes: expect.objectContaining({
+ 'sentry.op': 'http.server',
+ 'sentry.origin': 'auto.http.react_router.instrumentation_api',
+ 'sentry.source': 'url',
+ 'http.request.method': 'GET',
+ 'url.path': '/api/users',
+ 'url.full': 'http://example.com/api/users',
+ }),
+ }),
+ expect.any(Function),
+ );
+ expect(mockHandleRequest).toHaveBeenCalled();
+ expect(core.flushIfServerless).toHaveBeenCalled();
+ });
+
+ it('should capture errors and set span status when root span exists', async () => {
+ const mockRequest = new Request('http://example.com/api/users');
+ const mockError = new Error('Handler error');
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSetStatus = vi.fn();
+ const mockRootSpan = { setAttributes: vi.fn(), setStatus: mockSetStatus };
+
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue(mockRootSpan);
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ expect(mockSetStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: {
+ type: 'react_router.request_handler',
+ handled: false,
+ data: { 'http.method': 'GET', 'http.url': '/api/users' },
+ },
+ });
+ });
+
+ it('should capture errors in handler when no root span exists', async () => {
+ const mockRequest = new Request('http://example.com/api/users');
+ const mockError = new Error('Handler error');
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSpan = { setStatus: vi.fn() };
+
+ (core.getActiveSpan as any).mockReturnValue(undefined);
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn(mockSpan));
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ expect(mockSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: {
+ type: 'react_router.request_handler',
+ handled: false,
+ data: { 'http.method': 'GET', 'http.url': '/api/users' },
+ },
+ });
+ });
+
+ it('should handle invalid URL gracefully and still call handler', async () => {
+ // Create a request object with an invalid URL that will fail URL parsing
+ const mockRequest = { url: 'not-a-valid-url', method: 'GET' } as unknown as Request;
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ // Handler should still be called even if URL parsing fails
+ expect(mockHandleRequest).toHaveBeenCalled();
+ expect(core.flushIfServerless).toHaveBeenCalled();
+ });
+
+ it('should handle relative URLs by using a dummy base', async () => {
+ const mockRequest = { url: '/relative/path', method: 'GET' } as unknown as Request;
+ const mockHandleRequest = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+ const mockSetAttributes = vi.fn();
+ const mockRootSpan = { setAttributes: mockSetAttributes };
+
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue(mockRootSpan);
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.handler?.({ instrument: mockInstrument });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.request(mockHandleRequest, { request: mockRequest, context: undefined });
+
+ expect(core.updateSpanName).toHaveBeenCalledWith(mockRootSpan, 'GET /relative/path');
+ });
+
+ it('should instrument route loader with spans', async () => {
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue({ setAttributes: vi.fn() });
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ expect(mockInstrument).toHaveBeenCalled();
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the loader hook with RouteHandlerInstrumentationInfo
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.loader',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ expect(mockCallLoader).toHaveBeenCalled();
+ expect(core.updateSpanName).toHaveBeenCalled();
+ });
+
+ it('should instrument route action with spans', async () => {
+ const mockCallAction = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue({ setAttributes: vi.fn() });
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the action hook with RouteHandlerInstrumentationInfo
+ await hooks.action(mockCallAction, {
+ request: { method: 'POST', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.action',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ });
+
+ it('should instrument route middleware with spans', async () => {
+ const mockCallMiddleware = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+ const mockSetAttributes = vi.fn();
+ const mockRootSpan = { setAttributes: mockSetAttributes };
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue(mockRootSpan);
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the middleware hook with RouteHandlerInstrumentationInfo
+ await hooks.middleware(mockCallMiddleware, {
+ request: { method: 'GET', url: 'http://example.com/users/123', headers: { get: () => null } },
+ params: { id: '123' },
+ unstable_pattern: '/users/:id',
+ context: undefined,
+ });
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: '/users/:id',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.middleware',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+
+ // Verify updateRootSpanWithRoute was called (same as loader/action)
+ // This updates the root span name and sets http.route for parameterized routes
+ expect(core.updateSpanName).toHaveBeenCalledWith(mockRootSpan, 'GET /users/:id');
+ expect(mockSetAttributes).toHaveBeenCalledWith(
+ expect.objectContaining({
+ 'http.route': '/users/:id',
+ 'sentry.source': 'route',
+ }),
+ );
+ });
+
+ it('should instrument lazy route loading with spans', async () => {
+ const mockCallLazy = vi.fn().mockResolvedValue({ status: 'success', error: undefined });
+ const mockInstrument = vi.fn();
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn());
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/users/:id',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ // Call the lazy hook - info is undefined for lazy loading
+ await hooks.lazy(mockCallLazy, undefined);
+
+ expect(core.startSpan).toHaveBeenCalledWith(
+ expect.objectContaining({
+ name: 'Lazy Route Load',
+ attributes: expect.objectContaining({
+ 'sentry.op': 'function.react_router.lazy',
+ 'sentry.origin': 'auto.function.react_router.instrumentation_api',
+ }),
+ }),
+ expect.any(Function),
+ );
+ expect(mockCallLazy).toHaveBeenCalled();
+ });
+
+ it('should capture errors when captureErrors is true (default)', async () => {
+ const mockError = new Error('Test error');
+ // React Router returns an error result, not a rejection
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSpan = { setStatus: vi.fn() };
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn(mockSpan));
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue({ setAttributes: vi.fn() });
+
+ const instrumentation = createSentryServerInstrumentation();
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/test', headers: { get: () => null } },
+ params: {},
+ unstable_pattern: '/test',
+ context: undefined,
+ });
+
+ expect(core.captureException).toHaveBeenCalledWith(mockError, {
+ mechanism: {
+ type: 'react_router.loader',
+ handled: false,
+ data: { 'http.method': 'GET', 'http.url': '/test' },
+ },
+ });
+
+ // Should also set span status to error for actual Error instances
+ expect(mockSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ });
+
+ it('should not capture errors when captureErrors is false', async () => {
+ const mockError = new Error('Test error');
+ // React Router returns an error result, not a rejection
+ const mockCallLoader = vi.fn().mockResolvedValue({ status: 'error', error: mockError });
+ const mockInstrument = vi.fn();
+ const mockSpan = { setStatus: vi.fn() };
+
+ (core.startSpan as any).mockImplementation((_opts: any, fn: any) => fn(mockSpan));
+ (core.getActiveSpan as any).mockReturnValue({});
+ (core.getRootSpan as any).mockReturnValue({ setAttributes: vi.fn() });
+
+ const instrumentation = createSentryServerInstrumentation({ captureErrors: false });
+ instrumentation.route?.({
+ id: 'test-route',
+ index: false,
+ path: '/test',
+ instrument: mockInstrument,
+ });
+
+ const hooks = mockInstrument.mock.calls[0]![0];
+
+ await hooks.loader(mockCallLoader, {
+ request: { method: 'GET', url: 'http://example.com/test', headers: { get: () => null } },
+ params: {},
+ unstable_pattern: '/test',
+ context: undefined,
+ });
+
+ expect(core.captureException).not.toHaveBeenCalled();
+
+ // Span status should still be set for Error instances (reflects actual state)
+ expect(mockSpan.setStatus).toHaveBeenCalledWith({ code: 2, message: 'internal_error' });
+ });
+});
+
+describe('isInstrumentationApiUsed', () => {
+ beforeEach(() => {
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ it('should return false when flag is not set', () => {
+ expect(isInstrumentationApiUsed()).toBe(false);
+ });
+
+ it('should return true when flag is set', () => {
+ (globalThis as any).__sentryReactRouterServerInstrumentationUsed = true;
+ expect(isInstrumentationApiUsed()).toBe(true);
+ });
+
+ it('should return true after createSentryServerInstrumentation is called', () => {
+ expect(isInstrumentationApiUsed()).toBe(false);
+ createSentryServerInstrumentation();
+ expect(isInstrumentationApiUsed()).toBe(true);
+ });
+});
diff --git a/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts b/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts
index fb5141f8830d..93e0a91a1c2b 100644
--- a/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts
+++ b/packages/react-router/test/server/instrumentation/reactRouterServer.test.ts
@@ -18,6 +18,7 @@ vi.mock('@sentry/core', async () => {
SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN: 'sentry.origin',
SEMANTIC_ATTRIBUTE_SENTRY_SOURCE: 'sentry.source',
startSpan: vi.fn((opts, fn) => fn({})),
+ GLOBAL_OBJ: {},
};
});
diff --git a/packages/react-router/test/server/wrapSentryHandleRequest.test.ts b/packages/react-router/test/server/wrapSentryHandleRequest.test.ts
index 45b4ca1062df..71875d1aa887 100644
--- a/packages/react-router/test/server/wrapSentryHandleRequest.test.ts
+++ b/packages/react-router/test/server/wrapSentryHandleRequest.test.ts
@@ -24,11 +24,16 @@ vi.mock('@sentry/core', () => ({
getRootSpan: vi.fn(),
getTraceMetaTags: vi.fn(),
flushIfServerless: vi.fn(),
+ updateSpanName: vi.fn(),
+ getCurrentScope: vi.fn(() => ({ setTransactionName: vi.fn() })),
+ GLOBAL_OBJ: globalThis,
}));
describe('wrapSentryHandleRequest', () => {
beforeEach(() => {
vi.clearAllMocks();
+ // Reset global flag for unstable instrumentation
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
});
test('should call original handler with same parameters', async () => {
@@ -175,6 +180,39 @@ describe('wrapSentryHandleRequest', () => {
mockError,
);
});
+
+ test('should set route attributes as fallback when instrumentation API is used (for lazy-only routes)', async () => {
+ // Set the global flag indicating instrumentation API is in use
+ (globalThis as any).__sentryReactRouterServerInstrumentationUsed = true;
+
+ const originalHandler = vi.fn().mockResolvedValue('test');
+ const wrappedHandler = wrapSentryHandleRequest(originalHandler);
+
+ const mockActiveSpan = {};
+ const mockRootSpan = { setAttributes: vi.fn() };
+ const mockRpcMetadata = { type: RPCType.HTTP, route: '/some-path' };
+
+ (getActiveSpan as unknown as ReturnType<typeof vi.fn>).mockReturnValue(mockActiveSpan);
+ (getRootSpan as unknown as ReturnType<typeof vi.fn>).mockReturnValue(mockRootSpan);
+ const getRPCMetadata = vi.fn().mockReturnValue(mockRpcMetadata);
+ (vi.importActual('@opentelemetry/core') as unknown as { getRPCMetadata: typeof getRPCMetadata }).getRPCMetadata =
+ getRPCMetadata;
+
+ const routerContext = {
+ staticHandlerContext: {
+ matches: [{ route: { path: 'some-path' } }],
+ },
+ } as any;
+
+ await wrappedHandler(new Request('https://nacho.queso'), 200, new Headers(), routerContext, {} as any);
+
+ // Should set route attributes without origin (to preserve instrumentation_api origin)
+ expect(mockRootSpan.setAttributes).toHaveBeenCalledWith({
+ [ATTR_HTTP_ROUTE]: '/some-path',
+ [SEMANTIC_ATTRIBUTE_SENTRY_SOURCE]: 'route',
+ });
+ expect(mockRpcMetadata.route).toBe('/some-path');
+ });
});
describe('getMetaTagTransformer', () => {
diff --git a/packages/react-router/test/server/wrapServerAction.test.ts b/packages/react-router/test/server/wrapServerAction.test.ts
index c0cde751e472..043d838aa90a 100644
--- a/packages/react-router/test/server/wrapServerAction.test.ts
+++ b/packages/react-router/test/server/wrapServerAction.test.ts
@@ -1,6 +1,6 @@
import * as core from '@sentry/core';
import type { ActionFunctionArgs } from 'react-router';
-import { beforeEach, describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { wrapServerAction } from '../../src/server/wrapServerAction';
vi.mock('@sentry/core', async () => {
@@ -9,12 +9,21 @@ vi.mock('@sentry/core', async () => {
...actual,
startSpan: vi.fn(),
flushIfServerless: vi.fn(),
+ debug: {
+ warn: vi.fn(),
+ },
};
});
describe('wrapServerAction', () => {
beforeEach(() => {
vi.clearAllMocks();
+ // Reset the global flag and warning state
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
});
it('should wrap an action function with default options', async () => {
@@ -107,4 +116,36 @@ describe('wrapServerAction', () => {
await expect(wrappedAction(mockArgs)).rejects.toBe(mockError);
});
+
+ it('should skip span creation and warn when instrumentation API is used', async () => {
+ // Reset modules to get a fresh copy with unset warning flag
+ vi.resetModules();
+ // @ts-expect-error - Dynamic import for module reset works at runtime but vitest's typecheck doesn't fully support it
+ const { wrapServerAction: freshWrapServerAction } = await import('../../src/server/wrapServerAction');
+
+ // Set the global flag indicating instrumentation API is in use
+ (globalThis as any).__sentryReactRouterServerInstrumentationUsed = true;
+
+ const mockActionFn = vi.fn().mockResolvedValue('result');
+ const mockArgs = { request: new Request('http://test.com') } as ActionFunctionArgs;
+
+ const wrappedAction = freshWrapServerAction({}, mockActionFn);
+
+ // Call multiple times
+ await wrappedAction(mockArgs);
+ await wrappedAction(mockArgs);
+ await wrappedAction(mockArgs);
+
+ // Should warn about redundant wrapper via debug.warn, but only once
+ expect(core.debug.warn).toHaveBeenCalledTimes(1);
+ expect(core.debug.warn).toHaveBeenCalledWith(
+ expect.stringContaining('wrapServerAction is redundant when using the instrumentation API'),
+ );
+
+ // Should not create spans (instrumentation API handles it)
+ expect(core.startSpan).not.toHaveBeenCalled();
+
+ // Should still execute the action function
+ expect(mockActionFn).toHaveBeenCalledTimes(3);
+ });
});
diff --git a/packages/react-router/test/server/wrapServerLoader.test.ts b/packages/react-router/test/server/wrapServerLoader.test.ts
index 032107c1075e..7dfb39bbed42 100644
--- a/packages/react-router/test/server/wrapServerLoader.test.ts
+++ b/packages/react-router/test/server/wrapServerLoader.test.ts
@@ -1,6 +1,6 @@
import * as core from '@sentry/core';
import type { LoaderFunctionArgs } from 'react-router';
-import { beforeEach, describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { wrapServerLoader } from '../../src/server/wrapServerLoader';
vi.mock('@sentry/core', async () => {
@@ -9,12 +9,21 @@ vi.mock('@sentry/core', async () => {
...actual,
startSpan: vi.fn(),
flushIfServerless: vi.fn(),
+ debug: {
+ warn: vi.fn(),
+ },
};
});
describe('wrapServerLoader', () => {
beforeEach(() => {
vi.clearAllMocks();
+ // Reset the global flag and warning state
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
+ });
+
+ afterEach(() => {
+ delete (globalThis as any).__sentryReactRouterServerInstrumentationUsed;
});
it('should wrap a loader function with default options', async () => {
@@ -107,4 +116,36 @@ describe('wrapServerLoader', () => {
await expect(wrappedLoader(mockArgs)).rejects.toBe(mockError);
});
+
+ it('should skip span creation and warn when instrumentation API is used', async () => {
+ // Reset modules to get a fresh copy with unset warning flag
+ vi.resetModules();
+ // @ts-expect-error - Dynamic import for module reset works at runtime but vitest's typecheck doesn't fully support it
+ const { wrapServerLoader: freshWrapServerLoader } = await import('../../src/server/wrapServerLoader');
+
+ // Set the global flag indicating instrumentation API is in use
+ (globalThis as any).__sentryReactRouterServerInstrumentationUsed = true;
+
+ const mockLoaderFn = vi.fn().mockResolvedValue('result');
+ const mockArgs = { request: new Request('http://test.com') } as LoaderFunctionArgs;
+
+ const wrappedLoader = freshWrapServerLoader({}, mockLoaderFn);
+
+ // Call multiple times
+ await wrappedLoader(mockArgs);
+ await wrappedLoader(mockArgs);
+ await wrappedLoader(mockArgs);
+
+ // Should warn about redundant wrapper via debug.warn, but only once
+ expect(core.debug.warn).toHaveBeenCalledTimes(1);
+ expect(core.debug.warn).toHaveBeenCalledWith(
+ expect.stringContaining('wrapServerLoader is redundant when using the instrumentation API'),
+ );
+
+ // Should not create spans (instrumentation API handles it)
+ expect(core.startSpan).not.toHaveBeenCalled();
+
+ // Should still execute the loader function
+ expect(mockLoaderFn).toHaveBeenCalledTimes(3);
+ });
});
diff --git a/packages/react-router/test/vite/makeCustomSentryVitePlugins.test.ts b/packages/react-router/test/vite/makeCustomSentryVitePlugins.test.ts
index b4db6d85d028..db786f67adb2 100644
--- a/packages/react-router/test/vite/makeCustomSentryVitePlugins.test.ts
+++ b/packages/react-router/test/vite/makeCustomSentryVitePlugins.test.ts
@@ -7,7 +7,7 @@ vi.mock('@sentry/vite-plugin', () => ({
.fn()
.mockReturnValue([
{ name: 'sentry-telemetry-plugin' },
- { name: 'sentry-vite-release-injection-plugin' },
+ { name: 'sentry-vite-injection-plugin' },
{ name: 'sentry-vite-component-name-annotate-plugin' },
{ name: 'other-plugin' },
]),
@@ -59,7 +59,7 @@ describe('makeCustomSentryVitePlugins', () => {
const plugins = await makeCustomSentryVitePlugins({});
expect(plugins).toHaveLength(2);
expect(plugins?.[0]?.name).toBe('sentry-telemetry-plugin');
- expect(plugins?.[1]?.name).toBe('sentry-vite-release-injection-plugin');
+ expect(plugins?.[1]?.name).toBe('sentry-vite-injection-plugin');
});
it('should include component annotation plugin when reactComponentAnnotation.enabled is true', async () => {
@@ -67,7 +67,7 @@ describe('makeCustomSentryVitePlugins', () => {
expect(plugins).toHaveLength(3);
expect(plugins?.[0]?.name).toBe('sentry-telemetry-plugin');
- expect(plugins?.[1]?.name).toBe('sentry-vite-release-injection-plugin');
+ expect(plugins?.[1]?.name).toBe('sentry-vite-injection-plugin');
expect(plugins?.[2]?.name).toBe('sentry-vite-component-name-annotate-plugin');
});
@@ -78,7 +78,7 @@ describe('makeCustomSentryVitePlugins', () => {
expect(plugins).toHaveLength(3);
expect(plugins?.[0]?.name).toBe('sentry-telemetry-plugin');
- expect(plugins?.[1]?.name).toBe('sentry-vite-release-injection-plugin');
+ expect(plugins?.[1]?.name).toBe('sentry-vite-injection-plugin');
expect(plugins?.[2]?.name).toBe('sentry-vite-component-name-annotate-plugin');
});
});
diff --git a/packages/react/src/reactrouter-compat-utils/instrumentation.tsx b/packages/react/src/reactrouter-compat-utils/instrumentation.tsx
index d646624618f9..1cfa0951ddc4 100644
--- a/packages/react/src/reactrouter-compat-utils/instrumentation.tsx
+++ b/packages/react/src/reactrouter-compat-utils/instrumentation.tsx
@@ -71,6 +71,9 @@ export const allRoutes = new Set();
// Tracks lazy route loads to wait before finalizing span names
const pendingLazyRouteLoads = new WeakMap<Span, Set<Promise<unknown>>>();
+// Tracks deferred lazy route promises that can be resolved when patchRoutesOnNavigation is called
+const deferredLazyRouteResolvers = new WeakMap<Span, () => void>();
+
/**
* Schedules a callback using requestAnimationFrame when available (browser),
* or falls back to setTimeout for SSR environments (Node.js, createMemoryRouter tests).
@@ -233,6 +236,34 @@ function trackLazyRouteLoad(span: Span, promise: Promise<unknown>): void {
});
}
+/**
+ * Creates a deferred promise for a span that will be resolved when patchRoutesOnNavigation is called.
+ * This ensures that patchedEnd waits for patchRoutesOnNavigation to be called before ending the span.
+ */
+function createDeferredLazyRoutePromise(span: Span): void {
+  const deferredPromise = new Promise<void>(resolve => {
+ deferredLazyRouteResolvers.set(span, resolve);
+ });
+
+ trackLazyRouteLoad(span, deferredPromise);
+}
+
+/**
+ * Resolves the deferred lazy route promise for a span.
+ * Called when patchRoutesOnNavigation is invoked.
+ */
+function resolveDeferredLazyRoutePromise(span: Span): void {
+ const resolver = deferredLazyRouteResolvers.get(span);
+ if (resolver) {
+ resolver();
+ deferredLazyRouteResolvers.delete(span);
+ // Clear the flag so patchSpanEnd doesn't wait unnecessarily for routes that have already loaded
+    if ((span as unknown as Record<string, unknown>).__sentry_may_have_lazy_routes__) {
+      (span as unknown as Record<string, unknown>).__sentry_may_have_lazy_routes__ = false;
+ }
+ }
+}
+
/**
* Processes resolved routes by adding them to allRoutes and checking for nested async handlers.
* When capturedSpan is provided, updates that specific span instead of the current active span.
@@ -454,10 +485,30 @@ export function createV6CompatibleWrapCreateBrowserRouter<
}
}
- const wrappedOpts = wrapPatchRoutesOnNavigation(opts);
+ // Capture the active span BEFORE creating the router.
+ // This is important because the span might end (due to idle timeout) before
+ // patchRoutesOnNavigation is called by React Router.
+ const activeRootSpan = getActiveRootSpan();
+
+ // If patchRoutesOnNavigation is provided and we have an active span,
+ // mark the span as having potential lazy routes and create a deferred promise.
+ const hasPatchRoutesOnNavigation =
+ opts && 'patchRoutesOnNavigation' in opts && typeof opts.patchRoutesOnNavigation === 'function';
+ if (hasPatchRoutesOnNavigation && activeRootSpan) {
+ // Mark the span as potentially having lazy routes
+ addNonEnumerableProperty(
+      activeRootSpan as unknown as Record<string, unknown>,
+ '__sentry_may_have_lazy_routes__',
+ true,
+ );
+ createDeferredLazyRoutePromise(activeRootSpan);
+ }
+
+ // Pass the captured span to wrapPatchRoutesOnNavigation so it uses the same span
+ // even if the span has ended by the time patchRoutesOnNavigation is called.
+ const wrappedOpts = wrapPatchRoutesOnNavigation(opts, false, activeRootSpan);
const router = createRouterFunction(routes, wrappedOpts);
const basename = opts?.basename;
- const activeRootSpan = getActiveRootSpan();
if (router.state.historyAction === 'POP' && activeRootSpan) {
updatePageloadTransaction({
@@ -510,7 +561,23 @@ export function createV6CompatibleWrapCreateMemoryRouter<
}
}
- const wrappedOpts = wrapPatchRoutesOnNavigation(opts, true);
+ // Capture the active span BEFORE creating the router (same as browser router)
+ const memoryActiveRootSpanEarly = getActiveRootSpan();
+
+ // If patchRoutesOnNavigation is provided and we have an active span,
+ // mark the span as having potential lazy routes and create a deferred promise.
+ const hasPatchRoutesOnNavigation =
+ opts && 'patchRoutesOnNavigation' in opts && typeof opts.patchRoutesOnNavigation === 'function';
+ if (hasPatchRoutesOnNavigation && memoryActiveRootSpanEarly) {
+ addNonEnumerableProperty(
+      memoryActiveRootSpanEarly as unknown as Record<string, unknown>,
+ '__sentry_may_have_lazy_routes__',
+ true,
+ );
+ createDeferredLazyRoutePromise(memoryActiveRootSpanEarly);
+ }
+
+ const wrappedOpts = wrapPatchRoutesOnNavigation(opts, true, memoryActiveRootSpanEarly);
const router = createRouterFunction(routes, wrappedOpts);
const basename = opts?.basename;
@@ -706,9 +773,36 @@ export function createV6CompatibleWrapUseRoutes(origUseRoutes: UseRoutes, versio
};
}
+/**
+ * Helper to update the current span (navigation or pageload) with lazy-loaded route information.
+ * Reduces code duplication in patchRoutesOnNavigation wrapper.
+ */
+function updateSpanWithLazyRoutes(pathname: string, forceUpdate: boolean): void {
+ const currentActiveRootSpan = getActiveRootSpan();
+ if (!currentActiveRootSpan) {
+ return;
+ }
+
+ const spanOp = (spanToJSON(currentActiveRootSpan) as { op?: string }).op;
+ const location = { pathname, search: '', hash: '', state: null, key: 'default' };
+ const routesArray = Array.from(allRoutes);
+
+ if (spanOp === 'navigation') {
+ updateNavigationSpan(currentActiveRootSpan, location, routesArray, forceUpdate, _matchRoutes);
+ } else if (spanOp === 'pageload') {
+ updatePageloadTransaction({
+ activeRootSpan: currentActiveRootSpan,
+ location,
+ routes: routesArray,
+ allRoutes: routesArray,
+ });
+ }
+}
+
function wrapPatchRoutesOnNavigation(
  opts: Record<string, unknown> | undefined,
isMemoryRouter = false,
+ capturedSpan?: Span,
): Record<string, unknown> {
if (!opts || !('patchRoutesOnNavigation' in opts) || typeof opts.patchRoutesOnNavigation !== 'function') {
return opts || {};
@@ -721,29 +815,47 @@ function wrapPatchRoutesOnNavigation(
// eslint-disable-next-line @typescript-eslint/no-explicit-any, @typescript-eslint/no-unsafe-member-access
const targetPath = (args as any)?.path;
- const activeRootSpan = getActiveRootSpan();
+ // Use current active span if available, otherwise fall back to captured span (from router creation time).
+ // This ensures navigation spans use their own span (not the stale pageload span), while still
+ // supporting pageload spans that may have ended before patchRoutesOnNavigation is called.
+ const activeRootSpan = getActiveRootSpan() ?? capturedSpan;
if (!isMemoryRouter) {
// eslint-disable-next-line @typescript-eslint/no-explicit-any, @typescript-eslint/no-unsafe-member-access
const originalPatch = (args as any)?.patch;
+ // eslint-disable-next-line @typescript-eslint/no-explicit-any, @typescript-eslint/no-unsafe-member-access
+ const matches = (args as any)?.matches as Array<{ route: RouteObject }> | undefined;
if (originalPatch) {
// eslint-disable-next-line @typescript-eslint/no-explicit-any, @typescript-eslint/no-unsafe-member-access
(args as any).patch = (routeId: string, children: RouteObject[]) => {
addRoutesToAllRoutes(children);
- const currentActiveRootSpan = getActiveRootSpan();
+
+ // Find the parent route from matches and attach children to it in allRoutes.
+ // React Router's patch attaches children to its internal route copies, but we need
+ // to update the route objects in our allRoutes Set for proper route matching.
+ if (matches && matches.length > 0) {
+ const leafMatch = matches[matches.length - 1];
+ const leafRoute = leafMatch?.route;
+ if (leafRoute) {
+ // Find the matching route in allRoutes by id, reference, or path
+ const matchingRoute = Array.from(allRoutes).find(route => {
+ const idMatches = route.id !== undefined && route.id === routeId;
+ const referenceMatches = route === leafRoute;
+ const pathMatches =
+ route.path !== undefined && leafRoute.path !== undefined && route.path === leafRoute.path;
+
+ return idMatches || referenceMatches || pathMatches;
+ });
+
+ if (matchingRoute) {
+ addResolvedRoutesToParent(children, matchingRoute);
+ }
+ }
+ }
+
// Only update if we have a valid targetPath (patchRoutesOnNavigation can be called without path)
- if (
- targetPath &&
- currentActiveRootSpan &&
- (spanToJSON(currentActiveRootSpan) as { op?: string }).op === 'navigation'
- ) {
- updateNavigationSpan(
- currentActiveRootSpan,
- { pathname: targetPath, search: '', hash: '', state: null, key: 'default' },
- Array.from(allRoutes),
- true,
- _matchRoutes,
- );
+ if (targetPath) {
+ updateSpanWithLazyRoutes(targetPath, true);
}
return originalPatch(routeId, children);
};
@@ -758,21 +870,16 @@ function wrapPatchRoutesOnNavigation(
result = await originalPatchRoutes(args);
} finally {
clearNavigationContext(contextToken);
+ // Resolve the deferred promise now that patchRoutesOnNavigation has completed.
+ // This ensures patchedEnd has waited long enough for the lazy routes to load.
+ if (activeRootSpan) {
+ resolveDeferredLazyRoutePromise(activeRootSpan);
+ }
}
- const currentActiveRootSpan = getActiveRootSpan();
- if (currentActiveRootSpan && (spanToJSON(currentActiveRootSpan) as { op?: string }).op === 'navigation') {
- const pathname = isMemoryRouter ? targetPath : targetPath || WINDOW.location?.pathname;
-
- if (pathname) {
- updateNavigationSpan(
- currentActiveRootSpan,
- { pathname, search: '', hash: '', state: null, key: 'default' },
- Array.from(allRoutes),
- false,
- _matchRoutes,
- );
- }
+ const pathname = isMemoryRouter ? targetPath : targetPath || WINDOW.location?.pathname;
+ if (pathname) {
+ updateSpanWithLazyRoutes(pathname, false);
}
return result;
@@ -893,7 +1000,7 @@ export function handleNavigation(opts: {
pathname: location.pathname,
locationKey,
});
- patchSpanEnd(navigationSpan, location, routes, basename, allRoutes, 'navigation');
+ patchSpanEnd(navigationSpan, location, routes, basename, 'navigation');
} else {
// If no span was created, remove the placeholder
activeNavigationSpans.delete(client);
@@ -965,8 +1072,13 @@ function updatePageloadTransaction({
activeRootSpan.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_SOURCE, source);
// Patch span.end() to ensure we update the name one last time before the span is sent
- patchSpanEnd(activeRootSpan, location, routes, basename, allRoutes, 'pageload');
+ patchSpanEnd(activeRootSpan, location, routes, basename, 'pageload');
}
+ } else if (activeRootSpan) {
+ // Even if branches is null (can happen when lazy routes haven't loaded yet),
+ // we still need to patch span.end() so that when lazy routes load and the span ends,
+ // we can update the transaction name correctly.
+ patchSpanEnd(activeRootSpan, location, routes, basename, 'pageload');
}
}
@@ -1061,7 +1173,6 @@ function patchSpanEnd(
location: Location,
routes: RouteObject[],
basename: string | undefined,
- _allRoutes: RouteObject[] | undefined,
spanType: 'pageload' | 'navigation',
): void {
const patchedPropertyName = `__sentry_${spanType}_end_patched__` as const;
@@ -1071,8 +1182,7 @@ function patchSpanEnd(
return;
}
- // Use the passed route context, or fall back to global Set
- const allRoutesSet = _allRoutes ? new Set(_allRoutes) : allRoutes;
+ // Uses global allRoutes to access lazy-loaded routes added after this function was called.
const originalEnd = span.end.bind(span);
let endCalled = false;
@@ -1103,29 +1213,40 @@ function patchSpanEnd(
};
const pendingPromises = pendingLazyRouteLoads.get(span);
+    const mayHaveLazyRoutes = (span as unknown as Record<string, unknown>).__sentry_may_have_lazy_routes__;
+
// Wait for lazy routes if:
- // 1. There are pending promises AND
+ // 1. (There are pending promises OR the span was marked as potentially having lazy routes) AND
// 2. Current name exists AND
// 3. Either the name has a wildcard OR the source is not 'route' (URL-based names)
+ const hasPendingOrMayHaveLazyRoutes = (pendingPromises && pendingPromises.size > 0) || mayHaveLazyRoutes;
const shouldWaitForLazyRoutes =
- pendingPromises &&
- pendingPromises.size > 0 &&
+ hasPendingOrMayHaveLazyRoutes &&
currentName &&
(transactionNameHasWildcard(currentName) || currentSource !== 'route');
if (shouldWaitForLazyRoutes) {
if (_lazyRouteTimeout === 0) {
- tryUpdateSpanNameBeforeEnd(span, spanJson, currentName, location, routes, basename, spanType, allRoutesSet);
+ tryUpdateSpanNameBeforeEnd(span, spanJson, currentName, location, routes, basename, spanType, allRoutes);
cleanupNavigationSpan();
originalEnd(endTimestamp);
return;
}
- const allSettled = Promise.allSettled(pendingPromises).then(() => {});
- const waitPromise =
- _lazyRouteTimeout === Infinity
- ? allSettled
-        : Promise.race([allSettled, new Promise<void>(r => setTimeout(r, _lazyRouteTimeout))]);
+ // If we have pending promises, wait for them. Otherwise, just wait for the timeout.
+ // This handles the case where we know lazy routes might load but patchRoutesOnNavigation
+ // hasn't been called yet.
+    const timeoutPromise = new Promise<void>(r => setTimeout(r, _lazyRouteTimeout));
+    let waitPromise: Promise<void>;
+
+ if (pendingPromises && pendingPromises.size > 0) {
+ const allSettled = Promise.allSettled(pendingPromises).then(() => {});
+ waitPromise = _lazyRouteTimeout === Infinity ? allSettled : Promise.race([allSettled, timeoutPromise]);
+ } else {
+ // No pending promises yet, but we know lazy routes might load
+ // Wait for the timeout to give React Router time to call patchRoutesOnNavigation
+ waitPromise = timeoutPromise;
+ }
waitPromise
.then(() => {
@@ -1138,7 +1259,7 @@ function patchSpanEnd(
routes,
basename,
spanType,
- allRoutesSet,
+ allRoutes,
);
cleanupNavigationSpan();
originalEnd(endTimestamp);
@@ -1150,7 +1271,7 @@ function patchSpanEnd(
return;
}
- tryUpdateSpanNameBeforeEnd(span, spanJson, currentName, location, routes, basename, spanType, allRoutesSet);
+ tryUpdateSpanNameBeforeEnd(span, spanJson, currentName, location, routes, basename, spanType, allRoutes);
cleanupNavigationSpan();
originalEnd(endTimestamp);
};
diff --git a/packages/react/test/reactrouter-compat-utils/instrumentation.test.tsx b/packages/react/test/reactrouter-compat-utils/instrumentation.test.tsx
index 3d2b4f198cf5..7cc99641b5c2 100644
--- a/packages/react/test/reactrouter-compat-utils/instrumentation.test.tsx
+++ b/packages/react/test/reactrouter-compat-utils/instrumentation.test.tsx
@@ -1309,4 +1309,92 @@ describe('tryUpdateSpanNameBeforeEnd - source upgrade logic', () => {
}
});
});
+
+ describe('allRoutes global set (lazy routes behavior)', () => {
+ it('should allow adding routes to allRoutes after initial setup', () => {
+ // Clear the set first
+ allRoutes.clear();
+
+      const initialRoutes: RouteObject[] = [{ path: '/', element: <div>Home</div> }];
+      const lazyRoutes: RouteObject[] = [{ path: '/lazy/:id', element: <div>Lazy</div> }];
+
+ // Add initial routes
+ addRoutesToAllRoutes(initialRoutes);
+ expect(allRoutes.size).toBe(1);
+ expect(allRoutes.has(initialRoutes[0]!)).toBe(true);
+
+ // Simulate lazy route loading via patchRoutesOnNavigation
+ addRoutesToAllRoutes(lazyRoutes);
+ expect(allRoutes.size).toBe(2);
+ expect(allRoutes.has(lazyRoutes[0]!)).toBe(true);
+ });
+
+ it('should not duplicate routes when adding same route multiple times', () => {
+ allRoutes.clear();
+
+      const routes: RouteObject[] = [{ path: '/users', element: <div>Users</div> }];
+
+ addRoutesToAllRoutes(routes);
+ addRoutesToAllRoutes(routes); // Add same route again
+
+ // Set should have unique entries only
+ expect(allRoutes.size).toBe(1);
+ });
+
+ it('should recursively add nested children routes', () => {
+ allRoutes.clear();
+
+ const parentRoute: RouteObject = {
+ path: '/parent',
+        element: <div>Parent</div>,
+ children: [
+ {
+ path: ':id',
+            element: <div>Child</div>,
+            children: [{ path: 'nested', element: <div>Nested</div> }],
+ },
+ ],
+ };
+
+ addRoutesToAllRoutes([parentRoute]);
+
+ // Should add parent and all nested children
+ expect(allRoutes.size).toBe(3);
+ expect(allRoutes.has(parentRoute)).toBe(true);
+ expect(allRoutes.has(parentRoute.children![0]!)).toBe(true);
+ expect(allRoutes.has(parentRoute.children![0]!.children![0]!)).toBe(true);
+ });
+
+ // Regression test: Verify that routes added AFTER a span starts are still accessible
+ // This is the key fix for the lazy routes pageload bug where patchSpanEnd
+ // was using a stale snapshot instead of the global allRoutes set.
+ it('should maintain reference to global set (not snapshot) for late route additions', () => {
+ allRoutes.clear();
+
+ // Initial routes at "pageload start" time
+ const initialRoutes: RouteObject[] = [
+        { path: '/', element: <div>Home</div> },
+        { path: '/slow-fetch', element: <div>Slow Fetch Parent</div> },
+ ];
+ addRoutesToAllRoutes(initialRoutes);
+
+ // Capture a reference to allRoutes (simulating what patchSpanEnd does AFTER the fix)
+ const routesReference = allRoutes;
+
+ // Later, lazy routes are loaded via patchRoutesOnNavigation
+      const lazyLoadedRoutes: RouteObject[] = [{ path: ':id', element: <div>Lazy Child</div> }];
+ addRoutesToAllRoutes(lazyLoadedRoutes);
+
+ // The reference should see the newly added routes (fix behavior)
+ // Before the fix, a snapshot (new Set(allRoutes)) was taken, which wouldn't see new routes
+ expect(routesReference.size).toBe(3);
+ expect(routesReference.has(lazyLoadedRoutes[0]!)).toBe(true);
+
+ // Convert to array and verify all routes are present
+ const allRoutesArray = Array.from(routesReference);
+ expect(allRoutesArray).toContain(initialRoutes[0]);
+ expect(allRoutesArray).toContain(initialRoutes[1]);
+ expect(allRoutesArray).toContain(lazyLoadedRoutes[0]);
+ });
+ });
});
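The regression test above hinges on the difference between holding a live reference to a `Set` and taking a snapshot copy of it. A standalone sketch of that distinction (names are illustrative, not the SDK's):

```typescript
// Why patchSpanEnd must hold the live route Set rather than a copy:
// routes added after the span starts are visible only through the live reference.
const allRoutes = new Set<string>();
allRoutes.add('/');

const liveRef = allRoutes;           // fixed behavior: same Set object
const snapshot = new Set(allRoutes); // buggy behavior: frozen copy

// A lazy route is registered later, e.g. via patchRoutesOnNavigation.
allRoutes.add('/lazy/:id');

// liveRef sees the new route; the snapshot does not.
```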
diff --git a/packages/remix/package.json b/packages/remix/package.json
index 558a6543b485..bd15b9d3d011 100644
--- a/packages/remix/package.json
+++ b/packages/remix/package.json
@@ -65,8 +65,8 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/instrumentation": "^0.210.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/instrumentation": "^0.211.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@remix-run/router": "1.x",
"@sentry/cli": "^2.58.2",
"@sentry/core": "10.36.0",
diff --git a/packages/replay-internal/src/replay.ts b/packages/replay-internal/src/replay.ts
index 49e8ce092edd..10dba8758d8a 100644
--- a/packages/replay-internal/src/replay.ts
+++ b/packages/replay-internal/src/replay.ts
@@ -55,7 +55,7 @@ import { debug } from './util/logger';
import { resetReplayIdOnDynamicSamplingContext } from './util/resetReplayIdOnDynamicSamplingContext';
import { closestElementOfNode } from './util/rrweb';
import { sendReplay } from './util/sendReplay';
-import { RateLimitError } from './util/sendReplayRequest';
+import { RateLimitError, ReplayDurationLimitError } from './util/sendReplayRequest';
import type { SKIPPED } from './util/throttle';
import { throttle, THROTTLED } from './util/throttle';
@@ -1185,7 +1185,7 @@ export class ReplayContainer implements ReplayContainerInterface {
// We leave 30s wiggle room to accommodate late flushing etc.
// This _could_ happen when the browser is suspended during flushing, in which case we just want to stop
if (timestamp - this._context.initialTimestamp > this._options.maxReplayDuration + 30_000) {
- throw new Error('Session is too long, not sending replay');
+ throw new ReplayDurationLimitError();
}
const eventContext = this._popEventContext();
@@ -1218,7 +1218,14 @@ export class ReplayContainer implements ReplayContainerInterface {
const client = getClient();
if (client) {
- const dropReason = err instanceof RateLimitError ? 'ratelimit_backoff' : 'send_error';
+ let dropReason: 'ratelimit_backoff' | 'send_error' | 'invalid';
+ if (err instanceof RateLimitError) {
+ dropReason = 'ratelimit_backoff';
+ } else if (err instanceof ReplayDurationLimitError) {
+ dropReason = 'invalid';
+ } else {
+ dropReason = 'send_error';
+ }
client.recordDroppedEvent(dropReason, 'replay');
}
}
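The branching added to `ReplayContainer` above maps error subclasses to client-report drop reasons via `instanceof`. A self-contained sketch of the same dispatch — the classes here are simplified stand-ins for the ones exported from `sendReplayRequest.ts`:

```typescript
// Simplified stand-ins for the error classes defined in sendReplayRequest.ts.
class RateLimitError extends Error {
  public constructor(public rateLimits: Record<string, number>) {
    super('Rate limit hit');
  }
}

class ReplayDurationLimitError extends Error {
  public constructor() {
    super('Session is too long, not sending replay');
  }
}

// Mirrors the dropReason branching in ReplayContainer: the specific
// subclass checks must run before the generic 'send_error' fallback.
function dropReason(err: unknown): 'ratelimit_backoff' | 'invalid' | 'send_error' {
  if (err instanceof RateLimitError) return 'ratelimit_backoff';
  if (err instanceof ReplayDurationLimitError) return 'invalid';
  return 'send_error';
}
```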
diff --git a/packages/replay-internal/src/util/sendReplayRequest.ts b/packages/replay-internal/src/util/sendReplayRequest.ts
index 4f40934f37d3..777b3f970712 100644
--- a/packages/replay-internal/src/util/sendReplayRequest.ts
+++ b/packages/replay-internal/src/util/sendReplayRequest.ts
@@ -117,16 +117,17 @@ export async function sendReplayRequest({
throw error;
}
- // If the status code is invalid, we want to immediately stop & not retry
- if (typeof response.statusCode === 'number' && (response.statusCode < 200 || response.statusCode >= 300)) {
- throw new TransportStatusCodeError(response.statusCode);
- }
-
+ // Check for rate limiting first (handles 429 and rate limit headers)
const rateLimits = updateRateLimits({}, response);
if (isRateLimited(rateLimits, 'replay')) {
throw new RateLimitError(rateLimits);
}
+ // If the status code is invalid, we want to immediately stop & not retry
+ if (typeof response.statusCode === 'number' && (response.statusCode < 200 || response.statusCode >= 300)) {
+ throw new TransportStatusCodeError(response.statusCode);
+ }
+
return response;
}
@@ -150,3 +151,13 @@ export class RateLimitError extends Error {
this.rateLimits = rateLimits;
}
}
+
+/**
+ * This error indicates that the session exceeded the maximum replay duration, so the replay is not sent.
+ */
+export class ReplayDurationLimitError extends Error {
+ public constructor() {
+ super('Session is too long, not sending replay');
+ }
+}
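The reordering in `sendReplayRequest` matters because a 429 response satisfies both checks: if the generic status-code check ran first, rate limiting would surface as a `TransportStatusCodeError` instead of a `RateLimitError`. A hedged sketch of that control flow — `isReplayRateLimited` and the header key are illustrative stand-ins for `updateRateLimits`/`isRateLimited` from `@sentry/core`:

```typescript
type TransportResponse = { statusCode?: number; headers?: Record<string, string> };

// Stand-in for updateRateLimits() + isRateLimited(): treats a 429 or the
// presence of a rate-limit header as "rate limited". The real helpers
// parse the header value per category.
function isReplayRateLimited(response: TransportResponse): boolean {
  return response.statusCode === 429 || !!response.headers?.['x-sentry-rate-limits'];
}

function classify(response: TransportResponse): 'ok' | 'rate-limit' | 'status-error' {
  // Rate limiting is checked FIRST, as in the patched sendReplayRequest.
  if (isReplayRateLimited(response)) return 'rate-limit';
  if (typeof response.statusCode === 'number' && (response.statusCode < 200 || response.statusCode >= 300)) {
    return 'status-error';
  }
  return 'ok';
}
```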
diff --git a/packages/replay-internal/test/integration/flush.test.ts b/packages/replay-internal/test/integration/flush.test.ts
index d9c45278855b..83ab08ffb2cb 100644
--- a/packages/replay-internal/test/integration/flush.test.ts
+++ b/packages/replay-internal/test/integration/flush.test.ts
@@ -489,6 +489,49 @@ describe('Integration | flush', () => {
await replay.start();
});
+ /**
+ * This tests that when a replay exceeds maxReplayDuration,
+ * the dropped event is recorded with the 'invalid' reason
+ * to distinguish it from actual send errors.
+ */
+ it('records dropped event with invalid reason when session exceeds maxReplayDuration', async () => {
+ const client = SentryUtils.getClient()!;
+ const recordDroppedEventSpy = vi.spyOn(client, 'recordDroppedEvent');
+
+ replay.getOptions().maxReplayDuration = 100_000;
+
+ sessionStorage.clear();
+ clearSession(replay);
+ replay['_initializeSessionForSampling']();
+ replay.setInitialState();
+ await new Promise(process.nextTick);
+ vi.setSystemTime(BASE_TIMESTAMP);
+
+ replay.eventBuffer!.clear();
+
+ replay.eventBuffer!.hasCheckout = true;
+
+ replay['_addPerformanceEntries'] = () => {
+ return new Promise(resolve => setTimeout(resolve, 140_000));
+ };
+
+ const TEST_EVENT = getTestEventCheckout({ timestamp: BASE_TIMESTAMP + 100 });
+ mockRecord._emitter(TEST_EVENT);
+
+ await vi.advanceTimersByTimeAsync(160_000);
+
+ expect(mockFlush).toHaveBeenCalledTimes(1);
+ expect(mockSendReplay).toHaveBeenCalledTimes(0);
+ expect(replay.isEnabled()).toBe(false);
+
+ expect(recordDroppedEventSpy).toHaveBeenCalledWith('invalid', 'replay');
+
+ replay.getOptions().maxReplayDuration = MAX_REPLAY_DURATION;
+ recordDroppedEventSpy.mockRestore();
+
+ await replay.start();
+ });
+
it('resets flush lock if runFlush rejects/throws', async () => {
mockRunFlush.mockImplementation(
() =>
diff --git a/packages/replay-internal/test/integration/rateLimiting.test.ts b/packages/replay-internal/test/integration/rateLimiting.test.ts
index 688c9469fc40..745c4378a91f 100644
--- a/packages/replay-internal/test/integration/rateLimiting.test.ts
+++ b/packages/replay-internal/test/integration/rateLimiting.test.ts
@@ -113,4 +113,42 @@ describe('Integration | rate-limiting behaviour', () => {
expect(replay.session).toBeDefined();
expect(replay.isEnabled()).toBe(true);
});
+
+ it('records dropped event with ratelimit_backoff reason when rate limited', async () => {
+ const client = getClient()!;
+ const recordDroppedEventSpy = vi.spyOn(client, 'recordDroppedEvent');
+
+ mockTransportSend.mockImplementationOnce(() => {
+ return Promise.resolve({ statusCode: 429, headers: { 'retry-after': '10' } } as TransportMakeRequestResponse);
+ });
+
+ replay.start();
+ await advanceTimers(DEFAULT_FLUSH_MIN_DELAY);
+
+ expect(replay.isEnabled()).toBe(false);
+ expect(recordDroppedEventSpy).toHaveBeenCalledWith('ratelimit_backoff', 'replay');
+
+ recordDroppedEventSpy.mockRestore();
+ });
+
+ it('records dropped event with send_error reason when transport fails', async () => {
+ const client = getClient()!;
+ const recordDroppedEventSpy = vi.spyOn(client, 'recordDroppedEvent');
+
+ mockTransportSend.mockImplementation(() => {
+ return Promise.reject(new Error('Network error'));
+ });
+
+ replay.start();
+ await advanceTimers(DEFAULT_FLUSH_MIN_DELAY);
+
+ await advanceTimers(5000);
+ await advanceTimers(10000);
+ await advanceTimers(30000);
+
+ expect(replay.isEnabled()).toBe(false);
+ expect(recordDroppedEventSpy).toHaveBeenCalledWith('send_error', 'replay');
+
+ recordDroppedEventSpy.mockRestore();
+ });
});
diff --git a/packages/replay-internal/test/unit/util/sendReplayRequest.test.ts b/packages/replay-internal/test/unit/util/sendReplayRequest.test.ts
new file mode 100644
index 000000000000..f5ea1787571a
--- /dev/null
+++ b/packages/replay-internal/test/unit/util/sendReplayRequest.test.ts
@@ -0,0 +1,50 @@
+import { describe, expect, it } from 'vitest';
+import {
+ RateLimitError,
+ ReplayDurationLimitError,
+ TransportStatusCodeError,
+} from '../../../src/util/sendReplayRequest';
+
+describe('Unit | util | sendReplayRequest', () => {
+ describe('TransportStatusCodeError', () => {
+ it('creates error with correct message', () => {
+ const error = new TransportStatusCodeError(500);
+ expect(error.message).toBe('Transport returned status code 500');
+ expect(error).toBeInstanceOf(Error);
+ });
+ });
+
+ describe('RateLimitError', () => {
+ it('creates error with correct message and stores rate limits', () => {
+ const rateLimits = { all: 1234567890 };
+ const error = new RateLimitError(rateLimits);
+ expect(error.message).toBe('Rate limit hit');
+ expect(error.rateLimits).toBe(rateLimits);
+ expect(error).toBeInstanceOf(Error);
+ });
+ });
+
+ describe('ReplayDurationLimitError', () => {
+ it('creates error with correct message', () => {
+ const error = new ReplayDurationLimitError();
+ expect(error.message).toBe('Session is too long, not sending replay');
+ expect(error).toBeInstanceOf(Error);
+ });
+
+ it('is distinguishable from other error types', () => {
+ const durationError = new ReplayDurationLimitError();
+ const rateLimitError = new RateLimitError({ all: 123 });
+ const transportError = new TransportStatusCodeError(500);
+
+ expect(durationError instanceof ReplayDurationLimitError).toBe(true);
+ expect(durationError instanceof RateLimitError).toBe(false);
+ expect(durationError instanceof TransportStatusCodeError).toBe(false);
+
+ expect(rateLimitError instanceof ReplayDurationLimitError).toBe(false);
+ expect(rateLimitError instanceof RateLimitError).toBe(true);
+
+ expect(transportError instanceof ReplayDurationLimitError).toBe(false);
+ expect(transportError instanceof TransportStatusCodeError).toBe(true);
+ });
+ });
+});
diff --git a/packages/solidstart/package.json b/packages/solidstart/package.json
index 5b61600f0c64..add286e5af73 100644
--- a/packages/solidstart/package.json
+++ b/packages/solidstart/package.json
@@ -69,7 +69,7 @@
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/solid": "10.36.0",
- "@sentry/vite-plugin": "^4.6.2"
+ "@sentry/vite-plugin": "^4.7.0"
},
"devDependencies": {
"@solidjs/router": "^0.15.0",
diff --git a/packages/solidstart/test/config/withSentry.test.ts b/packages/solidstart/test/config/withSentry.test.ts
index 3f695ca36c46..4b4acf1f680b 100644
--- a/packages/solidstart/test/config/withSentry.test.ts
+++ b/packages/solidstart/test/config/withSentry.test.ts
@@ -80,9 +80,8 @@ describe('withSentry()', () => {
expect(names).toEqual([
'sentry-solidstart-build-instrumentation-file',
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
+ 'sentry-vite-injection-plugin',
'sentry-release-management-plugin',
- 'sentry-vite-debug-id-injection-plugin',
'sentry-vite-debug-id-upload-plugin',
'sentry-file-deletion-plugin',
'sentry-solidstart-update-source-map-setting',
@@ -108,9 +107,8 @@ describe('withSentry()', () => {
expect(names).toEqual([
'sentry-solidstart-build-instrumentation-file',
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
+ 'sentry-vite-injection-plugin',
'sentry-release-management-plugin',
- 'sentry-vite-debug-id-injection-plugin',
'sentry-vite-debug-id-upload-plugin',
'sentry-file-deletion-plugin',
'sentry-solidstart-update-source-map-setting',
@@ -140,9 +138,8 @@ describe('withSentry()', () => {
expect(names).toEqual([
'sentry-solidstart-build-instrumentation-file',
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
+ 'sentry-vite-injection-plugin',
'sentry-release-management-plugin',
- 'sentry-vite-debug-id-injection-plugin',
'sentry-vite-debug-id-upload-plugin',
'sentry-file-deletion-plugin',
'sentry-solidstart-update-source-map-setting',
diff --git a/packages/solidstart/test/vite/sentrySolidStartVite.test.ts b/packages/solidstart/test/vite/sentrySolidStartVite.test.ts
index c40a4f7c8dbc..b71b0e055f1f 100644
--- a/packages/solidstart/test/vite/sentrySolidStartVite.test.ts
+++ b/packages/solidstart/test/vite/sentrySolidStartVite.test.ts
@@ -28,9 +28,8 @@ describe('sentrySolidStartVite()', () => {
expect(names).toEqual([
'sentry-solidstart-build-instrumentation-file',
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
+ 'sentry-vite-injection-plugin',
'sentry-release-management-plugin',
- 'sentry-vite-debug-id-injection-plugin',
'sentry-vite-debug-id-upload-plugin',
'sentry-file-deletion-plugin',
'sentry-solidstart-update-source-map-setting',
diff --git a/packages/sveltekit/package.json b/packages/sveltekit/package.json
index cfb8a158960a..2e886307ae39 100644
--- a/packages/sveltekit/package.json
+++ b/packages/sveltekit/package.json
@@ -52,7 +52,7 @@
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/svelte": "10.36.0",
- "@sentry/vite-plugin": "^4.6.2",
+ "@sentry/vite-plugin": "^4.7.0",
"magic-string": "0.30.7",
"recast": "0.23.11",
"sorcery": "1.0.0"
diff --git a/packages/sveltekit/test/vite/sentrySvelteKitPlugins.test.ts b/packages/sveltekit/test/vite/sentrySvelteKitPlugins.test.ts
index eef008fca73d..798da06b7bf0 100644
--- a/packages/sveltekit/test/vite/sentrySvelteKitPlugins.test.ts
+++ b/packages/sveltekit/test/vite/sentrySvelteKitPlugins.test.ts
@@ -42,8 +42,8 @@ describe('sentrySvelteKit()', () => {
const plugins = await getSentrySvelteKitPlugins();
expect(plugins).toBeInstanceOf(Array);
- // 1 auto instrument plugin + 1 global values injection plugin + 5 source maps plugins
- expect(plugins).toHaveLength(10);
+ // 1 auto instrument plugin + 1 global values injection plugin + 4 source maps plugins
+ expect(plugins).toHaveLength(9);
});
it('returns the custom sentry source maps upload plugin, unmodified sourcemaps plugins and the auto-instrument plugin by default', async () => {
@@ -56,8 +56,7 @@ describe('sentrySvelteKit()', () => {
'sentry-sveltekit-global-values-injection-plugin',
// default source maps plugins:
'sentry-telemetry-plugin',
- 'sentry-vite-release-injection-plugin',
- 'sentry-vite-debug-id-injection-plugin',
+ 'sentry-vite-injection-plugin',
'sentry-sveltekit-update-source-map-setting-plugin',
'sentry-sveltekit-files-to-delete-after-upload-setting-plugin',
// custom release plugin:
@@ -90,7 +89,7 @@ describe('sentrySvelteKit()', () => {
it("doesn't return the auto instrument plugin if autoInstrument is `false`", async () => {
const plugins = await getSentrySvelteKitPlugins({ autoInstrument: false });
const pluginNames = plugins.map(plugin => plugin.name);
- expect(plugins).toHaveLength(9); // global values injection + 5 source maps plugins + 3 default plugins
+ expect(plugins).toHaveLength(8); // global values injection + 4 source maps plugins + 3 default plugins
expect(pluginNames).not.toContain('sentry-auto-instrumentation');
});
diff --git a/packages/tanstackstart-react/package.json b/packages/tanstackstart-react/package.json
index f13e63b635d6..0ea2afd78506 100644
--- a/packages/tanstackstart-react/package.json
+++ b/packages/tanstackstart-react/package.json
@@ -56,7 +56,7 @@
"@sentry/core": "10.36.0",
"@sentry/node": "10.36.0",
"@sentry/react": "10.36.0",
- "@sentry/vite-plugin": "^4.6.2"
+ "@sentry/vite-plugin": "^4.7.0"
},
"devDependencies": {
"vite": "^5.4.11"
diff --git a/packages/tanstackstart-react/src/vite/autoInstrumentMiddleware.ts b/packages/tanstackstart-react/src/vite/autoInstrumentMiddleware.ts
new file mode 100644
index 000000000000..6d898f233e1f
--- /dev/null
+++ b/packages/tanstackstart-react/src/vite/autoInstrumentMiddleware.ts
@@ -0,0 +1,119 @@
+import type { Plugin } from 'vite';
+
+type AutoInstrumentMiddlewareOptions = {
+ enabled?: boolean;
+ debug?: boolean;
+};
+
+/**
+ * A Vite plugin that automatically instruments TanStack Start middlewares
+ * by wrapping `requestMiddleware` and `functionMiddleware` arrays in `createStart()`.
+ */
+export function makeAutoInstrumentMiddlewarePlugin(options: AutoInstrumentMiddlewareOptions = {}): Plugin {
+ const { enabled = true, debug = false } = options;
+
+ return {
+ name: 'sentry-tanstack-middleware-auto-instrument',
+ enforce: 'pre',
+
+ transform(code, id) {
+ if (!enabled) {
+ return null;
+ }
+
+ // Skip if not a TS/JS file
+ if (!/\.(ts|tsx|js|jsx|mjs|mts)$/.test(id)) {
+ return null;
+ }
+
+ // Only wrap requestMiddleware and functionMiddleware in createStart()
+ // createStart() should always be in a file named start.ts
+ if (!id.includes('start') || !code.includes('createStart(')) {
+ return null;
+ }
+
+ // Skip if the user already did some manual wrapping
+ if (code.includes('wrapMiddlewaresWithSentry')) {
+ return null;
+ }
+
+ let transformed = code;
+ let needsImport = false;
+ const skippedMiddlewares: string[] = [];
+
+ transformed = transformed.replace(
+ /(requestMiddleware|functionMiddleware)\s*:\s*\[([^\]]*)\]/g,
+ (match: string, key: string, contents: string) => {
+ const objContents = arrayToObjectShorthand(contents);
+ if (objContents) {
+ needsImport = true;
+ if (debug) {
+ // eslint-disable-next-line no-console
+ console.log(`[Sentry] Auto-wrapping ${key} in ${id}`);
+ }
+ return `${key}: wrapMiddlewaresWithSentry(${objContents})`;
+ }
+ // Track middlewares that couldn't be auto-wrapped
+ // Skip if we matched whitespace only
+ if (contents.trim()) {
+ skippedMiddlewares.push(key);
+ }
+ return match;
+ },
+ );
+
+ // Warn about middlewares that couldn't be auto-wrapped
+ if (skippedMiddlewares.length > 0) {
+ // eslint-disable-next-line no-console
+ console.warn(
+ `[Sentry] Could not auto-instrument ${skippedMiddlewares.join(' and ')} in ${id}. ` +
+ 'To instrument these middlewares, use wrapMiddlewaresWithSentry() manually.',
+ );
+ }
+
+ // We didn't wrap any middlewares, so we don't need to import the wrapMiddlewaresWithSentry function
+ if (!needsImport) {
+ return null;
+ }
+
+ const sentryImport = "import { wrapMiddlewaresWithSentry } from '@sentry/tanstackstart-react';\n";
+
+ // Check for 'use server' or 'use client' directives, these need to be before any imports
+ const directiveMatch = transformed.match(/^(['"])use (client|server)\1;?\s*\n?/);
+ if (directiveMatch) {
+ // Insert import after the directive
+ const directive = directiveMatch[0];
+ transformed = directive + sentryImport + transformed.slice(directive.length);
+ } else {
+ transformed = sentryImport + transformed;
+ }
+
+ return { code: transformed, map: null };
+ },
+ };
+}
+
+/**
+ * Convert array contents to object shorthand syntax.
+ * e.g., "foo, bar, baz" → "{ foo, bar, baz }"
+ *
+ * Returns null if contents contain non-identifier expressions (function calls, etc.)
+ * which cannot be converted to object shorthand.
+ */
+export function arrayToObjectShorthand(contents: string): string | null {
+ const items = contents
+ .split(',')
+ .map(s => s.trim())
+ .filter(Boolean);
+
+ // Only convert if all items are valid identifiers (no complex expressions)
+ const allIdentifiers = items.every(item => /^[a-zA-Z_$][a-zA-Z0-9_$]*$/.test(item));
+ if (!allIdentifiers || items.length === 0) {
+ return null;
+ }
+
+ // Deduplicate to avoid invalid syntax like { foo, foo }
+ const uniqueItems = [...new Set(items)];
+
+ return `{ ${uniqueItems.join(', ')} }`;
+}
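The shorthand conversion above is small enough to sketch standalone. The following is a minimal reimplementation for illustration (it mirrors the exported `arrayToObjectShorthand` helper rather than importing it from the package):

```typescript
// Illustrative reimplementation of the array-to-object-shorthand conversion
// described above; the real helper lives in autoInstrumentMiddleware.ts.
function arrayToObjectShorthand(contents: string): string | null {
  const items = contents
    .split(',')
    .map(s => s.trim())
    .filter(Boolean);

  // Bail out on anything that is not a plain identifier
  // (function calls, spreads, member expressions, ...).
  const allIdentifiers = items.every(item => /^[a-zA-Z_$][a-zA-Z0-9_$]*$/.test(item));
  if (!allIdentifiers || items.length === 0) {
    return null;
  }

  // Deduplicate so we never emit invalid syntax like `{ foo, foo }`.
  const uniqueItems = [...new Set(items)];
  return `{ ${uniqueItems.join(', ')} }`;
}

console.log(arrayToObjectShorthand('authMiddleware, loggingMiddleware')); // "{ authMiddleware, loggingMiddleware }"
console.log(arrayToObjectShorthand('getMiddleware()')); // null
```

Because the conversion refuses anything that is not a bare identifier, the plugin falls back to a console warning for complex entries instead of risking duplicated side effects.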
diff --git a/packages/tanstackstart-react/src/vite/index.ts b/packages/tanstackstart-react/src/vite/index.ts
index 4af3423136fb..85143344028d 100644
--- a/packages/tanstackstart-react/src/vite/index.ts
+++ b/packages/tanstackstart-react/src/vite/index.ts
@@ -1 +1,2 @@
export { sentryTanstackStart } from './sentryTanstackStart';
+export type { SentryTanstackStartOptions } from './sentryTanstackStart';
diff --git a/packages/tanstackstart-react/src/vite/sentryTanstackStart.ts b/packages/tanstackstart-react/src/vite/sentryTanstackStart.ts
index 00dc145117be..d14033ff052d 100644
--- a/packages/tanstackstart-react/src/vite/sentryTanstackStart.ts
+++ b/packages/tanstackstart-react/src/vite/sentryTanstackStart.ts
@@ -1,7 +1,26 @@
import type { BuildTimeOptionsBase } from '@sentry/core';
import type { Plugin } from 'vite';
+import { makeAutoInstrumentMiddlewarePlugin } from './autoInstrumentMiddleware';
import { makeAddSentryVitePlugin, makeEnableSourceMapsVitePlugin } from './sourceMaps';
+/**
+ * Build-time options for the Sentry TanStack Start SDK.
+ */
+export interface SentryTanstackStartOptions extends BuildTimeOptionsBase {
+ /**
+ * If this flag is `true`, the Sentry plugins will automatically instrument TanStack Start middlewares.
+ *
+ * This wraps global middlewares (`requestMiddleware` and `functionMiddleware`) in `createStart()` with Sentry
+ * instrumentation to capture performance data.
+ *
+ * Set to `false` to disable automatic middleware instrumentation if you prefer to wrap middlewares manually
+ * using `wrapMiddlewaresWithSentry`.
+ *
+ * @default true
+ */
+ autoInstrumentMiddleware?: boolean;
+}
+
/**
* Vite plugins for the Sentry TanStack Start SDK.
*
@@ -14,11 +33,11 @@ import { makeAddSentryVitePlugin, makeEnableSourceMapsVitePlugin } from './sourc
*
* export default defineConfig({
* plugins: [
+ * tanstackStart(),
* sentryTanstackStart({
* org: 'your-org',
* project: 'your-project',
* }),
- * tanstackStart(),
* ],
* });
* ```
@@ -26,14 +45,20 @@ import { makeAddSentryVitePlugin, makeEnableSourceMapsVitePlugin } from './sourc
* @param options - Options to configure the Sentry Vite plugins
* @returns An array of Vite plugins
*/
-export function sentryTanstackStart(options: BuildTimeOptionsBase = {}): Plugin[] {
- // Only add plugins in production builds
+export function sentryTanstackStart(options: SentryTanstackStartOptions = {}): Plugin[] {
+ // Only add plugins in production builds
if (process.env.NODE_ENV === 'development') {
return [];
}
const plugins: Plugin[] = [...makeAddSentryVitePlugin(options)];
+ // middleware auto-instrumentation
+ if (options.autoInstrumentMiddleware !== false) {
+ plugins.push(makeAutoInstrumentMiddlewarePlugin({ enabled: true, debug: options.debug }));
+ }
+
+ // source maps
const sourceMapsDisabled = options.sourcemaps?.disable === true || options.sourcemaps?.disable === 'disable-upload';
if (!sourceMapsDisabled) {
plugins.push(...makeEnableSourceMapsVitePlugin(options));
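Put together, a user-facing Vite config exercising the new option might look like the sketch below. It follows the plugin order from the updated docstring (`tanstackStart()` before `sentryTanstackStart()`); the `org`/`project` values are placeholders and the `tanstackStart` import path is an assumption, not taken from this diff:

```typescript
// vite.config.ts — illustrative sketch only.
import { defineConfig } from 'vite';
// Assumed import path for the TanStack Start plugin; check your project's setup.
import { tanstackStart } from '@tanstack/react-start/plugin/vite';
import { sentryTanstackStart } from '@sentry/tanstackstart-react';

export default defineConfig({
  plugins: [
    tanstackStart(),
    sentryTanstackStart({
      org: 'your-org', // placeholder
      project: 'your-project', // placeholder
      // Defaults to true; set to false to wrap middlewares manually
      // with wrapMiddlewaresWithSentry() instead.
      autoInstrumentMiddleware: true,
    }),
  ],
});
```

Note that `sentryTanstackStart()` returns no plugins at all when `NODE_ENV` is `development`, so the auto-instrumentation only applies to production builds.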
diff --git a/packages/tanstackstart-react/test/vite/autoInstrumentMiddleware.test.ts b/packages/tanstackstart-react/test/vite/autoInstrumentMiddleware.test.ts
new file mode 100644
index 000000000000..749b3e9822bd
--- /dev/null
+++ b/packages/tanstackstart-react/test/vite/autoInstrumentMiddleware.test.ts
@@ -0,0 +1,228 @@
+import type { Plugin } from 'vite';
+import { describe, expect, it, vi } from 'vitest';
+import { arrayToObjectShorthand, makeAutoInstrumentMiddlewarePlugin } from '../../src/vite/autoInstrumentMiddleware';
+
+type PluginWithTransform = Plugin & {
+ transform: (code: string, id: string) => { code: string; map: null } | null;
+};
+
+describe('makeAutoInstrumentMiddlewarePlugin', () => {
+ const createStartFile = `
+import { createStart } from '@tanstack/react-start';
+import { authMiddleware, loggingMiddleware } from './middleware';
+
+export const startInstance = createStart(() => ({
+ requestMiddleware: [authMiddleware],
+ functionMiddleware: [loggingMiddleware],
+}));
+`;
+
+ it('instruments a file with createStart and middleware arrays', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const result = plugin.transform(createStartFile, '/app/start.ts');
+
+ expect(result).not.toBeNull();
+ expect(result!.code).toContain("import { wrapMiddlewaresWithSentry } from '@sentry/tanstackstart-react'");
+ expect(result!.code).toContain('requestMiddleware: wrapMiddlewaresWithSentry({ authMiddleware })');
+ expect(result!.code).toContain('functionMiddleware: wrapMiddlewaresWithSentry({ loggingMiddleware })');
+ });
+
+ it('does not instrument files without createStart', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = "export const foo = 'bar';";
+ const result = plugin.transform(code, '/app/other.ts');
+
+ expect(result).toBeNull();
+ });
+
+ it('does not instrument non-TS/JS files', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const result = plugin.transform(createStartFile, '/app/start.css');
+
+ expect(result).toBeNull();
+ });
+
+ it('does not instrument when enabled is false', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin({ enabled: false }) as PluginWithTransform;
+ const result = plugin.transform(createStartFile, '/app/start.ts');
+
+ expect(result).toBeNull();
+ });
+
+ it('wraps single middleware entry correctly', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [singleMiddleware] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result!.code).toContain('requestMiddleware: wrapMiddlewaresWithSentry({ singleMiddleware })');
+ });
+
+ it('wraps multiple middleware entries correctly', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [a, b, c] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result!.code).toContain('requestMiddleware: wrapMiddlewaresWithSentry({ a, b, c })');
+ });
+
+ it('does not wrap empty middleware arrays', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).toBeNull();
+ });
+
+ it('does not wrap if middleware contains function calls', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [getMiddleware()] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).toBeNull();
+ });
+
+ it('does not instrument files that already use wrapMiddlewaresWithSentry', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+import { wrapMiddlewaresWithSentry } from '@sentry/tanstackstart-react';
+createStart(() => ({ requestMiddleware: wrapMiddlewaresWithSentry({ myMiddleware }) }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).toBeNull();
+ });
+
+ it('handles files with use server directive', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `'use server';
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [authMiddleware] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).not.toBeNull();
+ expect(result!.code).toMatch(/^'use server';\s*\nimport \{ wrapMiddlewaresWithSentry \}/);
+ });
+
+ it('handles files with use client directive', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `"use client";
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [authMiddleware] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).not.toBeNull();
+ expect(result!.code).toMatch(/^"use client";\s*\nimport \{ wrapMiddlewaresWithSentry \}/);
+ });
+
+ it('handles trailing commas in middleware arrays', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [authMiddleware,] }));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).not.toBeNull();
+ expect(result!.code).toContain('requestMiddleware: wrapMiddlewaresWithSentry({ authMiddleware })');
+ });
+
+ it('wraps valid array and skips invalid array in same file', () => {
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({
+ requestMiddleware: [authMiddleware],
+ functionMiddleware: [getMiddleware()]
+}));
+`;
+ const result = plugin.transform(code, '/app/start.ts');
+
+ expect(result).not.toBeNull();
+ expect(result!.code).toContain('requestMiddleware: wrapMiddlewaresWithSentry({ authMiddleware })');
+ expect(result!.code).toContain('functionMiddleware: [getMiddleware()]');
+ });
+
+ it('warns when middleware contains expressions that cannot be auto-wrapped', () => {
+ const consoleWarnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
+
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({ requestMiddleware: [getMiddleware()] }));
+`;
+ plugin.transform(code, '/app/start.ts');
+
+ expect(consoleWarnSpy).toHaveBeenCalledWith(expect.stringContaining('Could not auto-instrument requestMiddleware'));
+
+ consoleWarnSpy.mockRestore();
+ });
+
+ it('warns about skipped middlewares even when others are successfully wrapped', () => {
+ const consoleWarnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
+
+ const plugin = makeAutoInstrumentMiddlewarePlugin() as PluginWithTransform;
+ const code = `
+import { createStart } from '@tanstack/react-start';
+createStart(() => ({
+ requestMiddleware: [authMiddleware],
+ functionMiddleware: [getMiddleware()]
+}));
+`;
+ plugin.transform(code, '/app/start.ts');
+
+ expect(consoleWarnSpy).toHaveBeenCalledWith(
+ expect.stringContaining('Could not auto-instrument functionMiddleware'),
+ );
+
+ consoleWarnSpy.mockRestore();
+ });
+});
+
+describe('arrayToObjectShorthand', () => {
+ it('converts single identifier', () => {
+ expect(arrayToObjectShorthand('foo')).toBe('{ foo }');
+ });
+
+ it('converts multiple identifiers', () => {
+ expect(arrayToObjectShorthand('foo, bar, baz')).toBe('{ foo, bar, baz }');
+ });
+
+ it('handles whitespace', () => {
+ expect(arrayToObjectShorthand(' foo , bar ')).toBe('{ foo, bar }');
+ });
+
+ it('returns null for empty string', () => {
+ expect(arrayToObjectShorthand('')).toBeNull();
+ });
+
+ it('returns null for function calls', () => {
+ expect(arrayToObjectShorthand('getMiddleware()')).toBeNull();
+ });
+
+ it('returns null for spread syntax', () => {
+ expect(arrayToObjectShorthand('...middlewares')).toBeNull();
+ });
+
+ it('returns null for mixed valid and invalid', () => {
+ expect(arrayToObjectShorthand('foo, bar(), baz')).toBeNull();
+ });
+
+ it('deduplicates entries', () => {
+ expect(arrayToObjectShorthand('foo, foo, bar')).toBe('{ foo, bar }');
+ });
+});
diff --git a/packages/tanstackstart-react/test/vite/sentryTanstackStart.test.ts b/packages/tanstackstart-react/test/vite/sentryTanstackStart.test.ts
index 390b601d8808..ef18da74d03a 100644
--- a/packages/tanstackstart-react/test/vite/sentryTanstackStart.test.ts
+++ b/packages/tanstackstart-react/test/vite/sentryTanstackStart.test.ts
@@ -1,5 +1,6 @@
import type { Plugin } from 'vite';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+import { makeAutoInstrumentMiddlewarePlugin } from '../../src/vite/autoInstrumentMiddleware';
import { sentryTanstackStart } from '../../src/vite/sentryTanstackStart';
const mockSourceMapsConfigPlugin: Plugin = {
@@ -21,11 +22,21 @@ const mockEnableSourceMapsPlugin: Plugin = {
config: vi.fn(),
};
+const mockMiddlewarePlugin: Plugin = {
+ name: 'sentry-tanstack-middleware-auto-instrument',
+ apply: 'build',
+ transform: vi.fn(),
+};
+
vi.mock('../../src/vite/sourceMaps', () => ({
makeAddSentryVitePlugin: vi.fn(() => [mockSourceMapsConfigPlugin, mockSentryVitePlugin]),
makeEnableSourceMapsVitePlugin: vi.fn(() => [mockEnableSourceMapsPlugin]),
}));
+vi.mock('../../src/vite/autoInstrumentMiddleware', () => ({
+ makeAutoInstrumentMiddlewarePlugin: vi.fn(() => mockMiddlewarePlugin),
+}));
+
describe('sentryTanstackStart()', () => {
beforeEach(() => {
vi.clearAllMocks();
@@ -36,47 +47,84 @@ describe('sentryTanstackStart()', () => {
process.env.NODE_ENV = 'production';
});
- it('returns plugins in production mode', () => {
- const plugins = sentryTanstackStart({ org: 'test-org' });
+ describe('source maps', () => {
+ it('returns source maps plugins in production mode', () => {
+ const plugins = sentryTanstackStart({ autoInstrumentMiddleware: false });
- expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockEnableSourceMapsPlugin]);
- });
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockEnableSourceMapsPlugin]);
+ });
- it('returns no plugins in development mode', () => {
- process.env.NODE_ENV = 'development';
+ it('returns no plugins in development mode', () => {
+ process.env.NODE_ENV = 'development';
- const plugins = sentryTanstackStart({ org: 'test-org' });
+ const plugins = sentryTanstackStart({ autoInstrumentMiddleware: false });
- expect(plugins).toEqual([]);
- });
+ expect(plugins).toEqual([]);
+ });
- it('returns Sentry Vite plugins but not enable source maps plugin when sourcemaps.disable is true', () => {
- const plugins = sentryTanstackStart({
- sourcemaps: { disable: true },
+ it('returns Sentry Vite plugins but not enable source maps plugin when sourcemaps.disable is true', () => {
+ const plugins = sentryTanstackStart({
+ autoInstrumentMiddleware: false,
+ sourcemaps: { disable: true },
+ });
+
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin]);
});
- expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin]);
- });
+ it('returns Sentry Vite plugins but not enable source maps plugin when sourcemaps.disable is "disable-upload"', () => {
+ const plugins = sentryTanstackStart({
+ autoInstrumentMiddleware: false,
+ sourcemaps: { disable: 'disable-upload' },
+ });
- it('returns Sentry Vite plugins but not enable source maps plugin when sourcemaps.disable is "disable-upload"', () => {
- const plugins = sentryTanstackStart({
- sourcemaps: { disable: 'disable-upload' },
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin]);
});
- expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin]);
+ it('returns Sentry Vite plugins and enable source maps plugin when sourcemaps.disable is false', () => {
+ const plugins = sentryTanstackStart({
+ autoInstrumentMiddleware: false,
+ sourcemaps: { disable: false },
+ });
+
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockEnableSourceMapsPlugin]);
+ });
});
- it('returns Sentry Vite plugins and enable source maps plugin when sourcemaps.disable is false', () => {
- const plugins = sentryTanstackStart({
- sourcemaps: { disable: false },
+ describe('middleware auto-instrumentation', () => {
+ it('includes middleware plugin by default', () => {
+ const plugins = sentryTanstackStart({ sourcemaps: { disable: true } });
+
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockMiddlewarePlugin]);
});
- expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockEnableSourceMapsPlugin]);
- });
+ it('includes middleware plugin when autoInstrumentMiddleware is true', () => {
+ const plugins = sentryTanstackStart({
+ autoInstrumentMiddleware: true,
+ sourcemaps: { disable: true },
+ });
+
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockMiddlewarePlugin]);
+ });
+
+ it('does not include middleware plugin when autoInstrumentMiddleware is false', () => {
+ const plugins = sentryTanstackStart({
+ autoInstrumentMiddleware: false,
+ sourcemaps: { disable: true },
+ });
- it('returns Sentry Vite Plugins and enable source maps plugin by default when sourcemaps is not specified', () => {
- const plugins = sentryTanstackStart({});
+ expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin]);
+ });
+
+ it('passes correct options to makeAutoInstrumentMiddlewarePlugin', () => {
+ sentryTanstackStart({ debug: true, sourcemaps: { disable: true } });
+
+ expect(makeAutoInstrumentMiddlewarePlugin).toHaveBeenCalledWith({ enabled: true, debug: true });
+ });
+
+ it('passes debug: undefined when not specified', () => {
+ sentryTanstackStart({ sourcemaps: { disable: true } });
- expect(plugins).toEqual([mockSourceMapsConfigPlugin, mockSentryVitePlugin, mockEnableSourceMapsPlugin]);
+ expect(makeAutoInstrumentMiddlewarePlugin).toHaveBeenCalledWith({ enabled: true, debug: undefined });
+ });
});
});
diff --git a/packages/vercel-edge/package.json b/packages/vercel-edge/package.json
index 91ca53f81aab..867496df2575 100644
--- a/packages/vercel-edge/package.json
+++ b/packages/vercel-edge/package.json
@@ -40,14 +40,14 @@
},
"dependencies": {
"@opentelemetry/api": "^1.9.0",
- "@opentelemetry/resources": "^2.4.0",
+ "@opentelemetry/resources": "^2.5.0",
"@sentry/core": "10.36.0"
},
"devDependencies": {
"@edge-runtime/types": "3.0.1",
- "@opentelemetry/core": "^2.4.0",
- "@opentelemetry/sdk-trace-base": "^2.4.0",
- "@opentelemetry/semantic-conventions": "^1.37.0",
+ "@opentelemetry/core": "^2.5.0",
+ "@opentelemetry/sdk-trace-base": "^2.5.0",
+ "@opentelemetry/semantic-conventions": "^1.39.0",
"@sentry/opentelemetry": "10.36.0"
},
"scripts": {
diff --git a/packages/vercel-edge/rollup.npm.config.mjs b/packages/vercel-edge/rollup.npm.config.mjs
index d19ef3a09e2f..ae01f43703d0 100644
--- a/packages/vercel-edge/rollup.npm.config.mjs
+++ b/packages/vercel-edge/rollup.npm.config.mjs
@@ -1,141 +1,79 @@
import replace from '@rollup/plugin-replace';
import { makeBaseNPMConfig, makeNPMConfigVariants, plugins } from '@sentry-internal/rollup-utils';
-const downlevelLogicalAssignmentsPlugin = {
- name: 'downlevel-logical-assignments',
- renderChunk(code) {
- // ES2021 logical assignment operators (`||=`, `&&=`, `??=`) are not allowed by our ES2020 compatibility check.
- // OTEL currently ships some of these, so we downlevel them in the final output.
- //
- // Note: This is intentionally conservative (only matches property access-like LHS) to avoid duplicating side effects.
- // IMPORTANT: Use regex literals (not `String.raw` + `RegExp(...)`) to avoid accidental double-escaping.
- let out = code;
-
- // ??=
- out = out.replace(/([A-Za-z_$][\w$]*(?:\[[^\]]+\]|\.[A-Za-z_$][\w$]*)+)\s*\?\?=\s*([^;]+);/g, (_m, left, right) => {
- return `${left} = ${left} ?? ${right};`;
- });
-
- // ||=
- out = out.replace(/([A-Za-z_$][\w$]*(?:\[[^\]]+\]|\.[A-Za-z_$][\w$]*)+)\s*\|\|=\s*([^;]+);/g, (_m, left, right) => {
- return `${left} = ${left} || ${right};`;
- });
-
- // &&=
- out = out.replace(/([A-Za-z_$][\w$]*(?:\[[^\]]+\]|\.[A-Za-z_$][\w$]*)+)\s*&&=\s*([^;]+);/g, (_m, left, right) => {
- return `${left} = ${left} && ${right};`;
- });
-
- return { code: out, map: null };
- },
-};
-
-const baseConfig = makeBaseNPMConfig({
- entrypoints: ['src/index.ts'],
- bundledBuiltins: ['perf_hooks', 'util'],
- packageSpecificConfig: {
- context: 'globalThis',
- output: {
- preserveModules: false,
- },
- plugins: [
- plugins.makeCommonJSPlugin({ transformMixedEsModules: true }), // Needed because various modules in the OTEL toolchain use CJS (require-in-the-middle, shimmer, etc..)
- plugins.makeJsonPlugin(), // Needed because `require-in-the-middle` imports json via require
- replace({
- preventAssignment: true,
- // Use negative lookahead/lookbehind instead of word boundaries so `process.argv0` is also replaced in
- // `process.argv0.length` (where `.` follows). Default `\b` delimiters don't match before `.`.
- delimiters: ['(? Date.now()
- };
- }
- }
- `,
- resolveId: source => {
- if (source === 'perf_hooks') {
- return '\0perf_hooks_sentry_shim';
- } else if (source === 'util') {
- return '\0util_sentry_shim';
- } else {
- return null;
- }
- },
- load: id => {
- if (id === '\0perf_hooks_sentry_shim') {
- return `
- export const performance = {
- timeOrigin: 0,
- now: () => Date.now()
+export default makeNPMConfigVariants(
+ makeBaseNPMConfig({
+ entrypoints: ['src/index.ts'],
+ bundledBuiltins: ['perf_hooks', 'util'],
+ packageSpecificConfig: {
+ context: 'globalThis',
+ output: {
+ preserveModules: false,
+ },
+ plugins: [
+ plugins.makeCommonJSPlugin({ transformMixedEsModules: true }), // Needed because various modules in the OTEL toolchain use CJS (require-in-the-middle, shimmer, etc.)
+ plugins.makeJsonPlugin(), // Needed because `require-in-the-middle` imports json via require
+ replace({
+ preventAssignment: true,
+ values: {
+ 'process.argv0': JSON.stringify(''), // needed because otel relies on process.argv0 for the default service name, but that api is not available in the edge runtime.
+ },
+ }),
+ {
+ // This plugin is needed because otel imports `performance` from `perf_hooks` and also uses it via the `performance` global.
+ // It also imports `inspect` and `promisify` from node's `util`. None of these APIs are available in the edge runtime, so we define polyfills for them.
+ // Vercel does something similar in the `@vercel/otel` package: https://github.com/vercel/otel/blob/087601ae585cb116bb2b46c211d014520de76c71/packages/otel/build.ts#L62
+ name: 'edge-runtime-polyfills',
+ banner: `
+ {
+ if (globalThis.performance === undefined) {
+ globalThis.performance = {
+ timeOrigin: 0,
+ now: () => Date.now()
+ };
}
- `;
- } else if (id === '\0util_sentry_shim') {
- return `
- export const inspect = (object) =>
- JSON.stringify(object, null, 2);
+ }
+ `,
+ resolveId: source => {
+ if (source === 'perf_hooks') {
+ return '\0perf_hooks_sentry_shim';
+ } else if (source === 'util') {
+ return '\0util_sentry_shim';
+ } else {
+ return null;
+ }
+ },
+ load: id => {
+ if (id === '\0perf_hooks_sentry_shim') {
+ return `
+ export const performance = {
+ timeOrigin: 0,
+ now: () => Date.now()
+ }
+ `;
+ } else if (id === '\0util_sentry_shim') {
+ return `
+ export const inspect = (object) =>
+ JSON.stringify(object, null, 2);
- export const promisify = (fn) => {
- return (...args) => {
- return new Promise((resolve, reject) => {
- fn(...args, (err, result) => {
- if (err) reject(err);
- else resolve(result);
+ export const promisify = (fn) => {
+ return (...args) => {
+ return new Promise((resolve, reject) => {
+ fn(...args, (err, result) => {
+ if (err) reject(err);
+ else resolve(result);
+ });
});
- });
+ };
};
- };
- `;
- } else {
- return null;
- }
+ `;
+ } else {
+ return null;
+ }
+ },
},
- },
- downlevelLogicalAssignmentsPlugin,
- ],
- },
-});
-
-// `makeBaseNPMConfig` marks dependencies/peers as external by default.
-// For Edge, we must ensure the OTEL SDK bits which reference `process.argv0` are bundled so our replace() plugin applies.
-const baseExternal = baseConfig.external;
-baseConfig.external = (source, importer, isResolved) => {
- // Never treat these as external - they need to be inlined so `process.argv0` can be replaced.
- if (
- source === '@opentelemetry/resources' ||
- source.startsWith('@opentelemetry/resources/') ||
- source === '@opentelemetry/sdk-trace-base' ||
- source.startsWith('@opentelemetry/sdk-trace-base/')
- ) {
- return false;
- }
-
- if (typeof baseExternal === 'function') {
- return baseExternal(source, importer, isResolved);
- }
-
- if (Array.isArray(baseExternal)) {
- return baseExternal.includes(source);
- }
-
- if (baseExternal instanceof RegExp) {
- return baseExternal.test(source);
- }
-
- return false;
-};
-
-export default makeNPMConfigVariants(baseConfig);
+ ],
+ },
+ }),
+);
diff --git a/packages/vercel-edge/src/sdk.ts b/packages/vercel-edge/src/sdk.ts
index 5c8387c9bc7a..269d9ada280a 100644
--- a/packages/vercel-edge/src/sdk.ts
+++ b/packages/vercel-edge/src/sdk.ts
@@ -9,6 +9,7 @@ import {
import type { Client, Integration, Options } from '@sentry/core';
import {
consoleIntegration,
+ conversationIdIntegration,
createStackParser,
debug,
dedupeIntegration,
@@ -56,6 +57,7 @@ export function getDefaultIntegrations(options: Options): Integration[] {
// eslint-disable-next-line deprecation/deprecation
inboundFiltersIntegration(),
functionToStringIntegration(),
+ conversationIdIntegration(),
linkedErrorsIntegration(),
winterCGFetchIntegration(),
consoleIntegration(),
diff --git a/packages/vercel-edge/test/build-artifacts.test.ts b/packages/vercel-edge/test/build-artifacts.test.ts
deleted file mode 100644
index c4994f4f8b29..000000000000
--- a/packages/vercel-edge/test/build-artifacts.test.ts
+++ /dev/null
@@ -1,32 +0,0 @@
-import { readFileSync } from 'fs';
-import { join } from 'path';
-import { describe, expect, it } from 'vitest';
-
-function readBuildFile(relativePathFromPackageRoot: string): string {
- const filePath = join(process.cwd(), relativePathFromPackageRoot);
- return readFileSync(filePath, 'utf8');
-}
-
-describe('build artifacts', () => {
- it('does not contain Node-only `process.argv0` usage (Edge compatibility)', () => {
- const cjs = readBuildFile('build/cjs/index.js');
- const esm = readBuildFile('build/esm/index.js');
-
- expect(cjs).not.toContain('process.argv0');
- expect(esm).not.toContain('process.argv0');
- });
-
- it('does not contain ES2021 logical assignment operators (ES2020 compatibility)', () => {
- const cjs = readBuildFile('build/cjs/index.js');
- const esm = readBuildFile('build/esm/index.js');
-
- // ES2021 operators which `es-check es2020` rejects
- expect(cjs).not.toContain('??=');
- expect(cjs).not.toContain('||=');
- expect(cjs).not.toContain('&&=');
-
- expect(esm).not.toContain('??=');
- expect(esm).not.toContain('||=');
- expect(esm).not.toContain('&&=');
- });
-});
diff --git a/yarn.lock b/yarn.lock
index a1bddfa00756..5b61d1c49da9 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -5970,10 +5970,10 @@
dependencies:
"@opentelemetry/api" "^1.3.0"
-"@opentelemetry/api-logs@0.210.0":
- version "0.210.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/api-logs/-/api-logs-0.210.0.tgz#569016861175fe79d5a57554b523c68714db3b95"
- integrity sha512-CMtLxp+lYDriveZejpBND/2TmadrrhUfChyxzmkFtHaMDdSKfP59MAYyA0ICBvEBdm3iXwLcaj/8Ic/pnGw9Yg==
+"@opentelemetry/api-logs@0.211.0":
+ version "0.211.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/api-logs/-/api-logs-0.211.0.tgz#32d9ed98939956a84d4e2ff5e01598cb9d28d744"
+ integrity sha512-swFdZq8MCdmdR22jTVGQDhwqDzcI4M10nhjXkLr1EsIzXgZBqm4ZlmmcWsg3TSNf+3mzgOiqveXmBLZuDi2Lgg==
dependencies:
"@opentelemetry/api" "^1.3.0"
@@ -5982,232 +5982,232 @@
resolved "https://registry.yarnpkg.com/@opentelemetry/api/-/api-1.9.0.tgz#d03eba68273dc0f7509e2a3d5cba21eae10379fe"
integrity sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==
-"@opentelemetry/context-async-hooks@^2.4.0":
- version "2.4.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/context-async-hooks/-/context-async-hooks-2.4.0.tgz#d9eb2da5e6cda0aa80001ee88836ab7c448da3ee"
- integrity sha512-jn0phJ+hU7ZuvaoZE/8/Euw3gvHJrn2yi+kXrymwObEPVPjtwCmkvXDRQCWli+fCTTF/aSOtXaLr7CLIvv3LQg==
+"@opentelemetry/context-async-hooks@^2.5.0":
+ version "2.5.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/context-async-hooks/-/context-async-hooks-2.5.0.tgz#0e6bf31f0dbdd159731f7dbcd266d20f028a6915"
+ integrity sha512-uOXpVX0ZjO7heSVjhheW2XEPrhQAWr2BScDPoZ9UDycl5iuHG+Usyc3AIfG6kZeC1GyLpMInpQ6X5+9n69yOFw==
-"@opentelemetry/core@2.4.0", "@opentelemetry/core@^2.0.0", "@opentelemetry/core@^2.4.0":
- version "2.4.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/core/-/core-2.4.0.tgz#342706e2693b12923af74e45eed8f0571523439e"
- integrity sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw==
+"@opentelemetry/core@2.5.0", "@opentelemetry/core@^2.0.0", "@opentelemetry/core@^2.5.0":
+ version "2.5.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/core/-/core-2.5.0.tgz#3b2ac6cf471ed9a85eea836048a4de77a2e549d3"
+ integrity sha512-ka4H8OM6+DlUhSAZpONu0cPBtPPTQKxbxVzC4CzVx5+K4JnroJVBtDzLAMx4/3CDTJXRvVFhpFjtl4SaiTNoyQ==
dependencies:
"@opentelemetry/semantic-conventions" "^1.29.0"
-"@opentelemetry/instrumentation-amqplib@0.57.0":
- version "0.57.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-amqplib/-/instrumentation-amqplib-0.57.0.tgz#398f85e2fc367cd529948157a2312a3c80060080"
- integrity sha512-hgHnbcopDXju7164mwZu7+6mLT/+O+6MsyedekrXL+HQAYenMqeG7cmUOE0vI6s/9nW08EGHXpD+Q9GhLU1smA==
+"@opentelemetry/instrumentation-amqplib@0.58.0":
+ version "0.58.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-amqplib/-/instrumentation-amqplib-0.58.0.tgz#e3dc86ebfa7d72fe861a63b1c24a062faeb64a8c"
+ integrity sha512-fjpQtH18J6GxzUZ+cwNhWUpb71u+DzT7rFkg5pLssDGaEber91Y2WNGdpVpwGivfEluMlNMZumzjEqfg8DeKXQ==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
-"@opentelemetry/instrumentation-aws-sdk@0.65.0":
- version "0.65.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-aws-sdk/-/instrumentation-aws-sdk-0.65.0.tgz#ce95583e1dc2d241e5ba425deed1746b88cc296d"
- integrity sha512-nrKIhTlBxFr/wvjk2vZ6eCcyc41eOQVTMR+ux4FM0gNvK+DgggE+RnkycGATP5lJKjltn+wrYNP2E2tmxCtF1A==
+"@opentelemetry/instrumentation-aws-sdk@0.66.0":
+ version "0.66.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-aws-sdk/-/instrumentation-aws-sdk-0.66.0.tgz#f81fbcf8b4efc3ed227fa4ac6235a61ddb451a3f"
+ integrity sha512-K+vFDsD0RsjxjCOWGOKgaqOoE5wxIPMA8wnGJ0no3m7MjVdpkS/dNOGUx2nYegpqZzU/jZ0qvc+JrfkvkzcUyg==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.34.0"
-"@opentelemetry/instrumentation-connect@0.53.0":
- version "0.53.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-connect/-/instrumentation-connect-0.53.0.tgz#5273a47a8ce960700c88fd904b7efeac58e3914a"
- integrity sha512-SoFqipWLUEYVIxvz0VYX9uWLJhatJG4cqXpRe1iophLofuEtqFUn8YaEezjz2eJK74eTUQ0f0dJVOq7yMXsJGQ==
+"@opentelemetry/instrumentation-connect@0.54.0":
+ version "0.54.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-connect/-/instrumentation-connect-0.54.0.tgz#87312850844b6c57976d00bd3256d55650543772"
+ integrity sha512-43RmbhUhqt3uuPnc16cX6NsxEASEtn8z/cYV8Zpt6EP4p2h9s4FNuJ4Q9BbEQ2C0YlCCB/2crO1ruVz/hWt8fA==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.27.0"
"@types/connect" "3.4.38"
-"@opentelemetry/instrumentation-dataloader@0.27.0":
- version "0.27.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-dataloader/-/instrumentation-dataloader-0.27.0.tgz#cd38001a17abba775629ce53422430840bced206"
- integrity sha512-8e7n8edfTN28nJDpR/H59iW3RbW1fvpt0xatGTfSbL8JS4FLizfjPxO7JLbyWh9D3DSXxrTnvOvXpt6V5pnxJg==
+"@opentelemetry/instrumentation-dataloader@0.28.0":
+ version "0.28.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-dataloader/-/instrumentation-dataloader-0.28.0.tgz#b857bb038e4a2a3b7278f3da89a1e210bb15339e"
+ integrity sha512-ExXGBp0sUj8yhm6Znhf9jmuOaGDsYfDES3gswZnKr4MCqoBWQdEFn6EoDdt5u+RdbxQER+t43FoUihEfTSqsjA==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
-"@opentelemetry/instrumentation-express@0.58.0":
- version "0.58.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-express/-/instrumentation-express-0.58.0.tgz#5ec1201e6d512974b683416d16a8742fc8931b6c"
- integrity sha512-UuGst6/1XPcswrIm5vmhuUwK/9qx9+fmNB+4xNk3lfpgQlnQxahy20xmlo3I+LIyA5ZA3CR2CDXslxAMqwminA==
+"@opentelemetry/instrumentation-express@0.59.0":
+ version "0.59.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-express/-/instrumentation-express-0.59.0.tgz#c2ac7dcb4f9904926518408cdf4efb046e724382"
+ integrity sha512-pMKV/qnHiW/Q6pmbKkxt0eIhuNEtvJ7sUAyee192HErlr+a1Jx+FZ3WjfmzhQL1geewyGEiPGkmjjAgNY8TgDA==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.27.0"
-"@opentelemetry/instrumentation-fs@0.29.0":
- version "0.29.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-fs/-/instrumentation-fs-0.29.0.tgz#11934448111d84e4341f15e1698a9be2d9a624fd"
- integrity sha512-JXPygU1RbrHNc5kD+626v3baV5KamB4RD4I9m9nUTd/HyfLZQSA3Z2z3VOebB3ChJhRDERmQjLiWvwJMHecKPg==
+"@opentelemetry/instrumentation-fs@0.30.0":
+ version "0.30.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-fs/-/instrumentation-fs-0.30.0.tgz#5e28edde0591dc4ffa471a86a68f91e737fe31fb"
+ integrity sha512-n3Cf8YhG7reaj5dncGlRIU7iT40bxPOjsBEA5Bc1a1g6e9Qvb+JFJ7SEiMlPbUw4PBmxE3h40ltE8LZ3zVt6OA==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
-"@opentelemetry/instrumentation-generic-pool@0.53.0":
- version "0.53.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-generic-pool/-/instrumentation-generic-pool-0.53.0.tgz#bc6e24b62d9e132f164347a40931513cc6d7fc37"
- integrity sha512-h49axGXGlvWzyQ4exPyd0qG9EUa+JP+hYklFg6V+Gm4ZC2Zam1QeJno/TQ8+qrLvsVvaFnBjTdS53hALpR3h3Q==
+"@opentelemetry/instrumentation-generic-pool@0.54.0":
+ version "0.54.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-generic-pool/-/instrumentation-generic-pool-0.54.0.tgz#9f3ad0cedbfe5011efe4ebdc76c85a73a0b967a6"
+ integrity sha512-8dXMBzzmEdXfH/wjuRvcJnUFeWzZHUnExkmFJ2uPfa31wmpyBCMxO59yr8f/OXXgSogNgi/uPo9KW9H7LMIZ+g==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
-"@opentelemetry/instrumentation-graphql@0.57.0":
- version "0.57.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-graphql/-/instrumentation-graphql-0.57.0.tgz#c2d28906c383756f0e0d839e8aa65bb22635c123"
- integrity sha512-wjtSavcp9MsGcnA1hj8ArgsL3EkHIiTLGMwqVohs5pSnMGeao0t2mgAuMiv78KdoR3kO3DUjks8xPO5Q6uJekg==
+"@opentelemetry/instrumentation-graphql@0.58.0":
+ version "0.58.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-graphql/-/instrumentation-graphql-0.58.0.tgz#3ca294ba410e04c920dc82ab4caa23ec1c2e1a2e"
+ integrity sha512-+yWVVY7fxOs3j2RixCbvue8vUuJ1inHxN2q1sduqDB0Wnkr4vOzVKRYl/Zy7B31/dcPS72D9lo/kltdOTBM3bQ==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
-"@opentelemetry/instrumentation-hapi@0.56.0":
- version "0.56.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-hapi/-/instrumentation-hapi-0.56.0.tgz#2121a926c34c76dd797a8507f743c2ed78a54906"
- integrity sha512-HgLxgO0G8V9y/6yW2pS3Fv5M3hz9WtWUAdbuszQDZ8vXDQSd1sI9FYHLdZW+td/8xCLApm8Li4QIeCkRSpHVTg==
+"@opentelemetry/instrumentation-hapi@0.57.0":
+ version "0.57.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-hapi/-/instrumentation-hapi-0.57.0.tgz#27b3a44a51444af3100a321f2e40623e89e5bb75"
+ integrity sha512-Os4THbvls8cTQTVA8ApLfZZztuuqGEeqog0XUnyRW7QVF0d/vOVBEcBCk1pazPFmllXGEdNbbat8e2fYIWdFbw==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.27.0"
-"@opentelemetry/instrumentation-http@0.210.0":
- version "0.210.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-http/-/instrumentation-http-0.210.0.tgz#305dc128988ab26eb8f3439a9b66f8fd6f016d4d"
- integrity sha512-dICO+0D0VBnrDOmDXOvpmaP0gvai6hNhJ5y6+HFutV0UoXc7pMgJlJY3O7AzT725cW/jP38ylmfHhQa7M0Nhww==
+"@opentelemetry/instrumentation-http@0.211.0":
+ version "0.211.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-http/-/instrumentation-http-0.211.0.tgz#2f12f83f0c21d37917fd9710fb5b755f28858cf6"
+ integrity sha512-n0IaQ6oVll9PP84SjbOCwDjaJasWRHi6BLsbMLiT6tNj7QbVOkuA5sk/EfZczwI0j5uTKl1awQPivO/ldVtsqA==
dependencies:
- "@opentelemetry/core" "2.4.0"
- "@opentelemetry/instrumentation" "0.210.0"
+ "@opentelemetry/core" "2.5.0"
+ "@opentelemetry/instrumentation" "0.211.0"
"@opentelemetry/semantic-conventions" "^1.29.0"
forwarded-parse "2.1.2"
-"@opentelemetry/instrumentation-ioredis@0.58.0":
- version "0.58.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-ioredis/-/instrumentation-ioredis-0.58.0.tgz#ac87be758ad2eea5ec23eaa9c159d75be2d2707a"
- integrity sha512-2tEJFeoM465A0FwPB0+gNvdM/xPBRIqNtC4mW+mBKy+ZKF9CWa7rEqv87OODGrigkEDpkH8Bs1FKZYbuHKCQNQ==
+"@opentelemetry/instrumentation-ioredis@0.59.0":
+ version "0.59.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-ioredis/-/instrumentation-ioredis-0.59.0.tgz#530d06aa67b73ea732414557adebe1dde7de430f"
+ integrity sha512-875UxzBHWkW+P4Y45SoFM2AR8f8TzBMD8eO7QXGCyFSCUMP5s9vtt/BS8b/r2kqLyaRPK6mLbdnZznK3XzQWvw==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/redis-common" "^0.38.2"
"@opentelemetry/semantic-conventions" "^1.33.0"
-"@opentelemetry/instrumentation-kafkajs@0.19.0":
- version "0.19.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-kafkajs/-/instrumentation-kafkajs-0.19.0.tgz#29ba2873aab3ee1deb1609e61d8b819b44b36e9d"
- integrity sha512-PMJePP4PVv+NSvWFuKADEVemsbNK8tnloHnrHOiRXMmBnyqcyOTmJyPy6eeJ0au90QyiGB2rzD8smmu2Y0CC7A==
+"@opentelemetry/instrumentation-kafkajs@0.20.0":
+ version "0.20.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-kafkajs/-/instrumentation-kafkajs-0.20.0.tgz#521db06d10d39f42e842ce336e5c1e48b3da2956"
+ integrity sha512-yJXOuWZROzj7WmYCUiyT27tIfqBrVtl1/TwVbQyWPz7rL0r1Lu7kWjD0PiVeTCIL6CrIZ7M2s8eBxsTAOxbNvw==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.30.0"
-"@opentelemetry/instrumentation-knex@0.54.0":
- version "0.54.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-knex/-/instrumentation-knex-0.54.0.tgz#fbaa3b682534693920c0bfd06b20310e721a787f"
- integrity sha512-XYXKVUH+0/Ur29jMPnyxZj32MrZkWSXHhCteTkt/HzynKnvIASmaAJ6moMOgBSRoLuDJFqPew68AreRylIzhhg==
+"@opentelemetry/instrumentation-knex@0.55.0":
+ version "0.55.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-knex/-/instrumentation-knex-0.55.0.tgz#fefc17d854a107d99ab0dbc8933d5897efce1abd"
+ integrity sha512-FtTL5DUx5Ka/8VK6P1VwnlUXPa3nrb7REvm5ddLUIeXXq4tb9pKd+/ThB1xM/IjefkRSN3z8a5t7epYw1JLBJQ==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.1"
-"@opentelemetry/instrumentation-koa@0.58.0":
- version "0.58.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-koa/-/instrumentation-koa-0.58.0.tgz#81b32868dd0effaa96740a1c5eb11090619c26c4"
- integrity sha512-602W6hEFi3j2QrQQBKWuBUSlHyrwSCc1IXpmItC991i9+xJOsS4n4mEktEk/7N6pavBX35J9OVkhPDXjbFk/1A==
+"@opentelemetry/instrumentation-koa@0.59.0":
+ version "0.59.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-koa/-/instrumentation-koa-0.59.0.tgz#7df8850fa193a8f590e3fbcab00016e25db27041"
+ integrity sha512-K9o2skADV20Skdu5tG2bogPKiSpXh4KxfLjz6FuqIVvDJNibwSdu5UvyyBzRVp1rQMV6UmoIk6d3PyPtJbaGSg==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.36.0"
-"@opentelemetry/instrumentation-lru-memoizer@0.54.0":
- version "0.54.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-lru-memoizer/-/instrumentation-lru-memoizer-0.54.0.tgz#0376f795b3d4dd39f184f2aceb240d7a74207b1c"
- integrity sha512-LPji0Qwpye5e1TNAUkHt7oij2Lrtpn2DRTUr4CU69VzJA13aoa2uzP3NutnFoLDUjmuS6vi/lv08A2wo9CfyTA==
+"@opentelemetry/instrumentation-lru-memoizer@0.55.0":
+ version "0.55.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-lru-memoizer/-/instrumentation-lru-memoizer-0.55.0.tgz#776d5f10178adfbda7286b4f31adde8bb518d55a"
+ integrity sha512-FDBfT7yDGcspN0Cxbu/k8A0Pp1Jhv/m7BMTzXGpcb8ENl3tDj/51U65R5lWzUH15GaZA15HQ5A5wtafklxYj7g==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
-"@opentelemetry/instrumentation-mongodb@0.63.0":
- version "0.63.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mongodb/-/instrumentation-mongodb-0.63.0.tgz#8f3a97388ff044c627d4fc50793ab9f978f85e9d"
- integrity sha512-EvJb3aLiq1QedAZO4vqXTG0VJmKUpGU37r11thLPuL5HNa08sUS9DbF69RB8YoXVby2pXkFPMnbG0Pky0JMlKA==
+"@opentelemetry/instrumentation-mongodb@0.64.0":
+ version "0.64.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mongodb/-/instrumentation-mongodb-0.64.0.tgz#0027c13fdd7506eb1f618998245edd244cc23cc7"
+ integrity sha512-pFlCJjweTqVp7B220mCvCld1c1eYKZfQt1p3bxSbcReypKLJTwat+wbL2YZoX9jPi5X2O8tTKFEOahO5ehQGsA==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
-"@opentelemetry/instrumentation-mongoose@0.56.0":
- version "0.56.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mongoose/-/instrumentation-mongoose-0.56.0.tgz#2a55cf00ab895bb5ae0a99abbcb7a626a930f8ce"
- integrity sha512-1xBjUpDSJFZS4qYc4XXef0pzV38iHyKymY4sKQ3xPv7dGdka4We1PsuEg6Z8K21f1d2Yg5eU0OXXRSPVmowKfA==
+"@opentelemetry/instrumentation-mongoose@0.57.0":
+ version "0.57.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mongoose/-/instrumentation-mongoose-0.57.0.tgz#2ce3f3bbf66a255958c3a112a92079898d69f624"
+ integrity sha512-MthiekrU/BAJc5JZoZeJmo0OTX6ycJMiP6sMOSRTkvz5BrPMYDqaJos0OgsLPL/HpcgHP7eo5pduETuLguOqcg==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
-"@opentelemetry/instrumentation-mysql2@0.56.0":
- version "0.56.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mysql2/-/instrumentation-mysql2-0.56.0.tgz#fe3792150a690dd7f715ce0889fa339860e136d5"
- integrity sha512-rW0hIpoaCFf55j0F1oqw6+Xv9IQeqJGtw9MudT3LCuhqld9S3DF0UEj8o3CZuPhcYqD+HAivZQdrsO5XMWyFqw==
+"@opentelemetry/instrumentation-mysql2@0.57.0":
+ version "0.57.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mysql2/-/instrumentation-mysql2-0.57.0.tgz#928eda47c6f4ab193d3363fcab01d81a70adc46b"
+ integrity sha512-nHSrYAwF7+aV1E1V9yOOP9TchOodb6fjn4gFvdrdQXiRE7cMuffyLLbCZlZd4wsspBzVwOXX8mpURdRserAhNA==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
"@opentelemetry/sql-common" "^0.41.2"
-"@opentelemetry/instrumentation-mysql@0.56.0":
- version "0.56.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mysql/-/instrumentation-mysql-0.56.0.tgz#acd5a772e60a82b6bd41e274fec68a1bd98efcc1"
- integrity sha512-osdGMB3vc4bm1Kos04zfVmYAKoKVbKiF/Ti5/R0upDEOsCnrnUm9xvLeaKKbbE2WgJoaFz3VS8c99wx31efytQ==
+"@opentelemetry/instrumentation-mysql@0.57.0":
+ version "0.57.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-mysql/-/instrumentation-mysql-0.57.0.tgz#74d42a1c6d20aee93996f8b6f6b7b69469748754"
+ integrity sha512-HFS/+FcZ6Q7piM7Il7CzQ4VHhJvGMJWjx7EgCkP5AnTntSN5rb5Xi3TkYJHBKeR27A0QqPlGaCITi93fUDs++Q==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
"@types/mysql" "2.15.27"
-"@opentelemetry/instrumentation-nestjs-core@0.56.0":
- version "0.56.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-nestjs-core/-/instrumentation-nestjs-core-0.56.0.tgz#f65211562e3868091b5f365d766d5787854b2b1d"
- integrity sha512-2wKd6+/nKyZVTkElTHRZAAEQ7moGqGmTIXlZvfAeV/dNA+6zbbl85JBcyeUFIYt+I42Naq5RgKtUY8fK6/GE1g==
+"@opentelemetry/instrumentation-nestjs-core@0.57.0":
+ version "0.57.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-nestjs-core/-/instrumentation-nestjs-core-0.57.0.tgz#7d42f690b8b78c08d9003425084911665c73deb8"
+ integrity sha512-mzTjjethjuk70o/vWUeV12QwMG9EAFJpkn13/q8zi++sNosf2hoGXTplIdbs81U8S3PJ4GxHKsBjM0bj1CGZ0g==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.30.0"
-"@opentelemetry/instrumentation-pg@0.62.0":
- version "0.62.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-pg/-/instrumentation-pg-0.62.0.tgz#a005304969ecf0b67f33f47ffe18e5c67aa71040"
- integrity sha512-/ZSMRCyFRMjQVx7Wf+BIAOMEdN/XWBbAGTNLKfQgGYs1GlmdiIFkUy8Z8XGkToMpKrgZju0drlTQpqt4Ul7R6w==
+"@opentelemetry/instrumentation-pg@0.63.0":
+ version "0.63.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-pg/-/instrumentation-pg-0.63.0.tgz#852ca5519d756c613bb9f3153a5e70c2b805e5cf"
+ integrity sha512-dKm/ODNN3GgIQVlbD6ZPxwRc3kleLf95hrRWXM+l8wYo+vSeXtEpQPT53afEf6VFWDVzJK55VGn8KMLtSve/cg==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.34.0"
"@opentelemetry/sql-common" "^0.41.2"
"@types/pg" "8.15.6"
"@types/pg-pool" "2.0.7"
-"@opentelemetry/instrumentation-redis@0.58.0":
- version "0.58.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-redis/-/instrumentation-redis-0.58.0.tgz#1491b9c10b9075ba817f295eb38a83312035ebe8"
- integrity sha512-tOGxw+6HZ5LDpMP05zYKtTw5HPqf3PXYHaOuN+pkv6uIgrZ+gTT75ELkd49eXBpjg3t36p8bYpsLgYcpIPqWqA==
+"@opentelemetry/instrumentation-redis@0.59.0":
+ version "0.59.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-redis/-/instrumentation-redis-0.59.0.tgz#44c1bd7852cdadbe77c1bdfa94185528012558cf"
+ integrity sha512-JKv1KDDYA2chJ1PC3pLP+Q9ISMQk6h5ey+99mB57/ARk0vQPGZTTEb4h4/JlcEpy7AYT8HIGv7X6l+br03Neeg==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/redis-common" "^0.38.2"
"@opentelemetry/semantic-conventions" "^1.27.0"
-"@opentelemetry/instrumentation-tedious@0.29.0":
- version "0.29.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-tedious/-/instrumentation-tedious-0.29.0.tgz#f9e1f9a166678b12f5ebeaa654eb8a382a62bdbc"
- integrity sha512-Jtnayb074lk7DQL25pOOpjvg4zjJMFjFWOLlKzTF5i1KxMR4+GlR/DSYgwDRfc0a4sfPXzdb/yYw7jRSX/LdFg==
+"@opentelemetry/instrumentation-tedious@0.30.0":
+ version "0.30.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-tedious/-/instrumentation-tedious-0.30.0.tgz#4a8906b5322c4add4132e6e086c23e17bc23626b"
+ integrity sha512-bZy9Q8jFdycKQ2pAsyuHYUHNmCxCOGdG6eg1Mn75RvQDccq832sU5OWOBnc12EFUELI6icJkhR7+EQKMBam2GA==
dependencies:
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.33.0"
"@types/tedious" "^4.0.14"
-"@opentelemetry/instrumentation-undici@0.20.0":
- version "0.20.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-undici/-/instrumentation-undici-0.20.0.tgz#3996e2b634081f37c17ecc34aaf0c0d0a6ec6e83"
- integrity sha512-VGBQ89Bza1pKtV12Lxgv3uMrJ1vNcf1cDV6LAXp2wa6hnl6+IN6lbEmPn6WNWpguZTZaFEvugyZgN8FJuTjLEA==
+"@opentelemetry/instrumentation-undici@0.21.0":
+ version "0.21.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation-undici/-/instrumentation-undici-0.21.0.tgz#dcb43a364c39e78217946aeb7aa09156e55f4c6c"
+ integrity sha512-gok0LPUOTz2FQ1YJMZzaHcOzDFyT64XJ8M9rNkugk923/p6lDGms/cRW1cqgqp6N6qcd6K6YdVHwPEhnx9BWbw==
dependencies:
"@opentelemetry/core" "^2.0.0"
- "@opentelemetry/instrumentation" "^0.210.0"
+ "@opentelemetry/instrumentation" "^0.211.0"
"@opentelemetry/semantic-conventions" "^1.24.0"
-"@opentelemetry/instrumentation@0.210.0", "@opentelemetry/instrumentation@^0.210.0":
- version "0.210.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation/-/instrumentation-0.210.0.tgz#3c9cf77072b7c7796fffcb04e19cad2976a4afbf"
- integrity sha512-sLMhyHmW9katVaLUOKpfCnxSGhZq2t1ReWgwsu2cSgxmDVMB690H9TanuexanpFI94PJaokrqbp8u9KYZDUT5g==
+"@opentelemetry/instrumentation@0.211.0", "@opentelemetry/instrumentation@^0.211.0":
+ version "0.211.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/instrumentation/-/instrumentation-0.211.0.tgz#d45e20eafa75b5d3e8a9745a6205332893c55f37"
+ integrity sha512-h0nrZEC/zvI994nhg7EgQ8URIHt0uDTwN90r3qQUdZORS455bbx+YebnGeEuFghUT0HlJSrLF4iHw67f+odY+Q==
dependencies:
- "@opentelemetry/api-logs" "0.210.0"
+ "@opentelemetry/api-logs" "0.211.0"
import-in-the-middle "^2.0.0"
require-in-the-middle "^8.0.0"
@@ -6225,27 +6225,27 @@
resolved "https://registry.yarnpkg.com/@opentelemetry/redis-common/-/redis-common-0.38.2.tgz#cefa4f3e79db1cd54f19e233b7dfb56621143955"
integrity sha512-1BCcU93iwSRZvDAgwUxC/DV4T/406SkMfxGqu5ojc3AvNI+I9GhV7v0J1HljsczuuhcnFLYqD5VmwVXfCGHzxA==
-"@opentelemetry/resources@2.4.0", "@opentelemetry/resources@^2.4.0":
- version "2.4.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/resources/-/resources-2.4.0.tgz#51188708204ba888685de019286a3969508c444d"
- integrity sha512-RWvGLj2lMDZd7M/5tjkI/2VHMpXebLgPKvBUd9LRasEWR2xAynDwEYZuLvY9P2NGG73HF07jbbgWX2C9oavcQg==
+"@opentelemetry/resources@2.5.0", "@opentelemetry/resources@^2.5.0":
+ version "2.5.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/resources/-/resources-2.5.0.tgz#e7a575b2c534961a9db5153f9498931c786a607a"
+ integrity sha512-F8W52ApePshpoSrfsSk1H2yJn9aKjCrbpQF1M9Qii0GHzbfVeFUB+rc3X4aggyZD8x9Gu3Slua+s6krmq6Dt8g==
dependencies:
- "@opentelemetry/core" "2.4.0"
+ "@opentelemetry/core" "2.5.0"
"@opentelemetry/semantic-conventions" "^1.29.0"
-"@opentelemetry/sdk-trace-base@^2.4.0":
- version "2.4.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/sdk-trace-base/-/sdk-trace-base-2.4.0.tgz#0ab37a996cb574e7efc94e58fc759cb4a8df8401"
- integrity sha512-WH0xXkz/OHORDLKqaxcUZS0X+t1s7gGlumr2ebiEgNZQl2b0upK2cdoD0tatf7l8iP74woGJ/Kmxe82jdvcWRw==
+"@opentelemetry/sdk-trace-base@^2.5.0":
+ version "2.5.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/sdk-trace-base/-/sdk-trace-base-2.5.0.tgz#4b96ae2494a4de5e3bfb36ef7459b30a1ce3332a"
+ integrity sha512-VzRf8LzotASEyNDUxTdaJ9IRJ1/h692WyArDBInf5puLCjxbICD6XkHgpuudis56EndyS7LYFmtTMny6UABNdQ==
dependencies:
- "@opentelemetry/core" "2.4.0"
- "@opentelemetry/resources" "2.4.0"
+ "@opentelemetry/core" "2.5.0"
+ "@opentelemetry/resources" "2.5.0"
"@opentelemetry/semantic-conventions" "^1.29.0"
-"@opentelemetry/semantic-conventions@^1.24.0", "@opentelemetry/semantic-conventions@^1.27.0", "@opentelemetry/semantic-conventions@^1.29.0", "@opentelemetry/semantic-conventions@^1.30.0", "@opentelemetry/semantic-conventions@^1.33.0", "@opentelemetry/semantic-conventions@^1.33.1", "@opentelemetry/semantic-conventions@^1.34.0", "@opentelemetry/semantic-conventions@^1.36.0", "@opentelemetry/semantic-conventions@^1.37.0":
- version "1.38.0"
- resolved "https://registry.yarnpkg.com/@opentelemetry/semantic-conventions/-/semantic-conventions-1.38.0.tgz#8b5f415395a7ddb7c8e0c7932171deb9278df1a3"
- integrity sha512-kocjix+/sSggfJhwXqClZ3i9Y/MI0fp7b+g7kCRm6psy2dsf8uApTRclwG18h8Avm7C9+fnt+O36PspJ/OzoWg==
+"@opentelemetry/semantic-conventions@^1.24.0", "@opentelemetry/semantic-conventions@^1.27.0", "@opentelemetry/semantic-conventions@^1.29.0", "@opentelemetry/semantic-conventions@^1.30.0", "@opentelemetry/semantic-conventions@^1.33.0", "@opentelemetry/semantic-conventions@^1.33.1", "@opentelemetry/semantic-conventions@^1.34.0", "@opentelemetry/semantic-conventions@^1.36.0", "@opentelemetry/semantic-conventions@^1.37.0", "@opentelemetry/semantic-conventions@^1.39.0":
+ version "1.39.0"
+ resolved "https://registry.yarnpkg.com/@opentelemetry/semantic-conventions/-/semantic-conventions-1.39.0.tgz#f653b2752171411feb40310b8a8953d7e5c543b7"
+ integrity sha512-R5R9tb2AXs2IRLNKLBJDynhkfmx7mX0vi8NkhZb3gUkPWHn6HXk5J8iQ/dql0U3ApfWym4kXXmBDRGO+oeOfjg==
"@opentelemetry/sql-common@^0.41.2":
version "0.41.2"
@@ -7117,18 +7117,37 @@
fflate "^0.4.4"
mitt "^3.0.0"
-"@sentry/babel-plugin-component-annotate@4.6.2":
- version "4.6.2"
- resolved "https://registry.yarnpkg.com/@sentry/babel-plugin-component-annotate/-/babel-plugin-component-annotate-4.6.2.tgz#b052ded0fc12088d4a5032a4022b65551717a631"
- integrity sha512-6VTjLJXtIHKwxMmThtZKwi1+hdklLNzlbYH98NhbH22/Vzb/c6BlSD2b5A0NGN9vFB807rD4x4tuP+Su7BxQXQ==
+"@sentry/babel-plugin-component-annotate@4.7.0":
+ version "4.7.0"
+ resolved "https://registry.yarnpkg.com/@sentry/babel-plugin-component-annotate/-/babel-plugin-component-annotate-4.7.0.tgz#46841deb27275b7d235f2fbce42c5156ad6c7ae6"
+ integrity sha512-MkyajDiO17/GaHHFgOmh05ZtOwF5hmm9KRjVgn9PXHIdpz+TFM5mkp1dABmR6Y75TyNU98Z1aOwPOgyaR5etJw==
-"@sentry/bundler-plugin-core@4.6.2", "@sentry/bundler-plugin-core@^4.6.2":
- version "4.6.2"
- resolved "https://registry.yarnpkg.com/@sentry/bundler-plugin-core/-/bundler-plugin-core-4.6.2.tgz#65239308aba07de9dad48bf51d6589be5d492860"
- integrity sha512-JkOc3JkVzi/fbXsFp8R9uxNKmBrPRaU4Yu4y1i3ihWfugqymsIYaN0ixLENZbGk2j4xGHIk20PAJzBJqBMTHew==
+"@sentry/babel-plugin-component-annotate@4.8.0":
+ version "4.8.0"
+ resolved "https://registry.yarnpkg.com/@sentry/babel-plugin-component-annotate/-/babel-plugin-component-annotate-4.8.0.tgz#6705126a7726bd248f93acc79b8f3c8921b1c385"
+ integrity sha512-cy/9Eipkv23MsEJ4IuB4dNlVwS9UqOzI3Eu+QPake5BVFgPYCX0uP0Tr3Z43Ime6Rb+BiDnWC51AJK9i9afHYw==
+
+"@sentry/bundler-plugin-core@4.7.0":
+ version "4.7.0"
+ resolved "https://registry.yarnpkg.com/@sentry/bundler-plugin-core/-/bundler-plugin-core-4.7.0.tgz#00ab83727df34bbbe170f032fa948e6f21f43185"
+ integrity sha512-gFdEtiup/7qYhN3vp1v2f0WL9AG9OorWLtIpfSBYbWjtzklVNg1sizvNyZ8nEiwtnb25LzvvCUbOP1SyP6IodQ==
+ dependencies:
+ "@babel/core" "^7.18.5"
+ "@sentry/babel-plugin-component-annotate" "4.7.0"
+ "@sentry/cli" "^2.57.0"
+ dotenv "^16.3.1"
+ find-up "^5.0.0"
+ glob "^10.5.0"
+ magic-string "0.30.8"
+ unplugin "1.0.1"
+
+"@sentry/bundler-plugin-core@4.8.0", "@sentry/bundler-plugin-core@^4.6.2":
+ version "4.8.0"
+ resolved "https://registry.yarnpkg.com/@sentry/bundler-plugin-core/-/bundler-plugin-core-4.8.0.tgz#2e7a4493795a848951e1e074a1b15b650fe0e6b0"
+ integrity sha512-QaXd/NzaZ2vmiA2FNu2nBkgQU+17N3fE+zVOTzG0YK54QDSJMd4n3AeJIEyPhSzkOob+GqtO22nbYf6AATFMAw==
dependencies:
"@babel/core" "^7.18.5"
- "@sentry/babel-plugin-component-annotate" "4.6.2"
+ "@sentry/babel-plugin-component-annotate" "4.8.0"
"@sentry/cli" "^2.57.0"
dotenv "^16.3.1"
find-up "^5.0.0"
@@ -7196,28 +7215,28 @@
"@sentry/cli-win32-i686" "2.58.4"
"@sentry/cli-win32-x64" "2.58.4"
-"@sentry/rollup-plugin@^4.6.2":
- version "4.6.2"
- resolved "https://registry.yarnpkg.com/@sentry/rollup-plugin/-/rollup-plugin-4.6.2.tgz#e03a835e52c4613b2c856ff3cb411f5683176c78"
- integrity sha512-sTgh24KfV8iJhv1zESZi6atgJEgOPpwy1W/UqOdmKPyDW5FkX9Zp9lyMF+bbJDWBqhACUJBGsIbE3MAonLX3wQ==
+"@sentry/rollup-plugin@^4.7.0":
+ version "4.7.0"
+ resolved "https://registry.yarnpkg.com/@sentry/rollup-plugin/-/rollup-plugin-4.7.0.tgz#92f9a5ed6b27de382ece4e973d9854099f62c1af"
+ integrity sha512-G928V05BLAIAIky42AN6zTDIKwfTYzWQ/OivSBTY3ZFJ2Db3lkB5UFHhtRsTjT9Hy/uZnQQjs397rixn51X3Vg==
dependencies:
- "@sentry/bundler-plugin-core" "4.6.2"
+ "@sentry/bundler-plugin-core" "4.7.0"
unplugin "1.0.1"
-"@sentry/vite-plugin@^4.6.2":
- version "4.6.2"
- resolved "https://registry.yarnpkg.com/@sentry/vite-plugin/-/vite-plugin-4.6.2.tgz#e4d4321c089af8bf2bc20b8e9ee467881154d267"
- integrity sha512-hK9N50LlTaPlb2P1r87CFupU7MJjvtrp+Js96a2KDdiP8ViWnw4Gsa/OvA0pkj2wAFXFeBQMLS6g/SktTKG54w==
+"@sentry/vite-plugin@^4.7.0":
+ version "4.7.0"
+ resolved "https://registry.yarnpkg.com/@sentry/vite-plugin/-/vite-plugin-4.7.0.tgz#2d819ff0cc40d6a85503e86f834e358bad2cdde5"
+ integrity sha512-eQXDghOQLsYwnHutJo8TCzhG4gp0KLNq3h96iqFMhsbjnNnfYeCX1lIw1pJEh/az3cDwSyPI/KGkvf8hr0dZmQ==
dependencies:
- "@sentry/bundler-plugin-core" "4.6.2"
+ "@sentry/bundler-plugin-core" "4.7.0"
unplugin "1.0.1"
-"@sentry/webpack-plugin@^4.6.2":
- version "4.6.2"
- resolved "https://registry.yarnpkg.com/@sentry/webpack-plugin/-/webpack-plugin-4.6.2.tgz#371c00cc5ce7654e34c123accd471f55b6ce4ed4"
- integrity sha512-uyb4nAqstVvO6ep86TQRlSxuynYhFec/HYfrA8wN5qYLx31gJQsOiuAeEzocJ2GGrhJq/ySH9nYfcnpjgk4J2w==
+"@sentry/webpack-plugin@^4.7.0":
+ version "4.8.0"
+ resolved "https://registry.yarnpkg.com/@sentry/webpack-plugin/-/webpack-plugin-4.8.0.tgz#f3c1f5756cb889df4e4e5e69316080160c5680d0"
+ integrity sha512-x4gayRA/J8CEcowrXWA2scaPZx+hd18squORJElHZKC46PGYRKvQfAWQ7qRCX6gtJ2v53x9264n9D8f3b9rp9g==
dependencies:
- "@sentry/bundler-plugin-core" "4.6.2"
+ "@sentry/bundler-plugin-core" "4.8.0"
unplugin "1.0.1"
uuid "^9.0.0"