What is Vercel AI SDK tracing?
This guide shows how to set up Respan tracing with Next.js and the Vercel AI SDK so you can monitor and trace your AI-powered applications.

Steps to use
If you already have a Next.js + Vercel AI SDK app, start from Step 1 below.

Set up OpenTelemetry instrumentation
Next.js supports OpenTelemetry instrumentation out of the box. Install Vercel's OpenTelemetry instrumentation, then create instrumentation.ts in your project root (where package.json lives) and configure the Respan exporter:
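A minimal sketch of the instrumentation file, assuming Respan accepts OTLP traces over HTTP and that the API key lives in a `KEYWORDSAI_API_KEY` environment variable — the endpoint URL and variable name here are placeholders, so substitute the values shown in your Respan dashboard. Install the package first with `npm install @vercel/otel`:

```typescript
// instrumentation.ts — Next.js loads this automatically on server startup
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'my-nextjs-app',
    traceExporter: new OTLPHttpJsonTraceExporter({
      // Hypothetical endpoint — replace with the OTLP trace URL from your Respan dashboard
      url: 'https://api.keywordsai.co/api/otel/v1/traces',
      headers: {
        Authorization: `Bearer ${process.env.KEYWORDSAI_API_KEY}`,
      },
    }),
  });
}
```

`registerOTel` wires the exporter into the OpenTelemetry SDK for you, so no further tracer setup is needed in your routes.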
Configure environment variables
Add your Respan credentials and your provider API key (OpenAI, Anthropic, or Google Gemini) to .env.local:
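For example — the variable names below are illustrative, so use the key names shown in your Respan dashboard and your provider's console:

```bash
# .env.local — hypothetical variable names
KEYWORDSAI_API_KEY=your-respan-api-key

# Provider key: set whichever provider you use
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=...
# GOOGLE_GENERATIVE_AI_API_KEY=...
```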
Enable telemetry in your route
In your API route file (e.g. app/api/chat/route.ts), enable telemetry by adding the experimental_telemetry option. The same option works whether the model comes from OpenAI, Anthropic, or Google Gemini:
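A minimal OpenAI-flavored sketch of such a route, assuming the `ai` (v4) and `@ai-sdk/openai` packages — the Anthropic and Gemini variants differ only in the provider import and model id:

```typescript
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
    // Opt this call into OpenTelemetry tracing
    experimental_telemetry: {
      isEnabled: true,
      // Optional: extra context attached to the trace
      metadata: { route: 'chat' },
    },
  });

  return result.toDataStreamResponse();
}
```

With `isEnabled: true`, each call emits spans through the instrumentation registered in instrumentation.ts, so they flow to the exporter configured there.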
Run locally and verify traces
- Start your dev server:
- Make some chat requests through your application
- Verify traces in Respan:
- Go to Logs → Traces in your Respan dashboard
- Confirm requests are being traced
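Concretely, assuming npm and the default dev port — adjust the request body to whatever schema your route expects:

```bash
# Start the dev server
npm run dev

# In another terminal, send a test chat request
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```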
If the server fails to start with errors about missing modules such as @vercel/otel, install the missing packages and retry.

What gets traced
With this setup, Respan will capture:

- AI model calls: requests made via the Vercel AI SDK
- Token usage: input and output token counts
- Performance metrics: latency and throughput
- Errors: failed requests and error details
- Custom metadata: additional context you attach via telemetry metadata