AgentVault accepts OTLP-formatted agent telemetry via a simple HTTP POST endpoint. Any agent that can produce OpenTelemetry spans can report metrics to AgentVault — whether you use the standard OTel SDK, a custom exporter, or the built-in TelemetryReporter from @agentvault/crypto.
What Telemetry Powers
AgentVault uses ingested spans to:
- Compute trust scores — reliability, error rate, and response time dimensions feed into the agent’s trust tier
- Populate the observability dashboard — trace visualization, span timelines, and aggregate metrics
- Feed external collectors — the OTel push export worker forwards spans to any OTLP-compatible backend
Telemetry is agent-scoped. Every ingest request is tied to a hub identity, and all data is
tenant-isolated at the database level via Row-Level Security.
Ingest Endpoint
Request Body
| Field | Type | Description |
|---|---|---|
| hub_id | UUID | The agent’s hub identity ID (visible in the AgentVault dashboard under Agent Identity) |
| spans | array | List of OTLP-formatted span objects |
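As a sketch, a request body with these two fields might be assembled like this in TypeScript. The UUID and the span contents below are illustrative placeholders, not real values.

```typescript
// Shape of the ingest request body, per the table above.
interface IngestRequest {
  hub_id: string;   // agent's hub identity ID (UUID)
  spans: object[];  // OTLP-formatted span objects
}

const body: IngestRequest = {
  hub_id: "123e4567-e89b-12d3-a456-426614174000", // example UUID only
  spans: [
    {
      name: "agent.llm.call",
      startTimeUnixNano: "1700000000000000000",
      endTimeUnixNano: "1700000001250000000",
      attributes: [
        { key: "ai.agent.llm.model", value: { stringValue: "gpt-4o" } },
      ],
    },
  ],
};
```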
Response
| Status | Meaning |
|---|---|
| 201 | Spans ingested successfully |
| 404 | hub_id does not belong to the authenticated tenant |
| 422 | Malformed request body |
Authentication
The ingest endpoint accepts three authentication methods:
- API Key (Recommended)
- Device JWT
- Clerk JWT
API keys are best for agents using @agentvault/client or any external process. Generate an API key
from the AgentVault dashboard under Agent > API Keys and pass it via the Authorization header.
Span Format
The endpoint accepts OTLP camelCase field names. Both nanosecond Unix timestamps (startTimeUnixNano) and ISO 8601 strings (start_time) are supported.
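As an illustration, a span in the camelCase nanosecond form might look like the following sketch. IDs, names, and attribute values are invented for the example, and the snake_case end_time field is assumed by symmetry with start_time (this page only names start_time explicitly).

```typescript
// A minimal OTLP-style span in camelCase form with nanosecond timestamps.
// All identifiers and values here are illustrative.
const span = {
  traceId: "5b8efff798038103d269b633813fc60c",
  spanId: "eee19b7ec3c1b174",
  name: "agent.task.run",
  kind: 1, // SPAN_KIND_INTERNAL
  startTimeUnixNano: "1700000000000000000",
  endTimeUnixNano: "1700000002000000000",
  status: { code: 1 }, // explicit OK
  attributes: [
    { key: "ai.agent.task.name", value: { stringValue: "summarize-inbox" } },
  ],
};

// The ISO 8601 alternative uses snake_case timestamp fields instead
// (end_time assumed by symmetry with the documented start_time):
const isoTimestamps = {
  start_time: "2023-11-14T22:13:20Z",
  end_time: "2023-11-14T22:13:22Z",
};
```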
Semantic Conventions
Use the ai.agent.* namespace for all agent-specific attributes. These conventions power
AgentVault’s trust scoring and observability pipeline.
LLM Calls
| Attribute | Type | Description |
|---|---|---|
| ai.agent.llm.model | string | Model name (e.g. gpt-4o, claude-3-5-sonnet) |
| ai.agent.llm.provider | string | Provider name (e.g. openai, anthropic) |
| ai.agent.llm.latency_ms | int | End-to-end inference latency in milliseconds |
| ai.agent.llm.tokens_input | int | Prompt token count |
| ai.agent.llm.tokens_output | int | Completion token count |
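A small helper can assemble these attributes in OTLP key/value form. The builder function and the value encoding below are an illustrative sketch, not part of any AgentVault SDK.

```typescript
// Illustrative OTLP attribute value shape (one variant populated per entry).
type AttrValue = { stringValue?: string; intValue?: number; boolValue?: boolean };

// Build the ai.agent.llm.* attribute list for an LLM call span.
function llmAttributes(
  model: string,
  provider: string,
  latencyMs: number,
  tokensIn: number,
  tokensOut: number,
): { key: string; value: AttrValue }[] {
  return [
    { key: "ai.agent.llm.model", value: { stringValue: model } },
    { key: "ai.agent.llm.provider", value: { stringValue: provider } },
    { key: "ai.agent.llm.latency_ms", value: { intValue: latencyMs } },
    { key: "ai.agent.llm.tokens_input", value: { intValue: tokensIn } },
    { key: "ai.agent.llm.tokens_output", value: { intValue: tokensOut } },
  ];
}
```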
Tool Invocations
| Attribute | Type | Description |
|---|---|---|
| ai.agent.tool.name | string | Tool or function name |
| ai.agent.tool.success | bool | Whether the call succeeded |
| ai.agent.tool.latency_ms | int | Tool execution latency in milliseconds |
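A common pattern is to wrap the tool call so success and latency are captured even when it throws. The wrapper below is an illustrative sketch; only the attribute keys come from the table above.

```typescript
// Illustrative OTLP attribute value shape.
type AttrValue = { stringValue?: string; intValue?: number; boolValue?: boolean };

// Run a tool and capture ai.agent.tool.* attributes, including
// success=false when the tool throws.
function withToolAttributes<T>(toolName: string, fn: () => T) {
  const start = Date.now();
  let success = true;
  let value: T | undefined;
  try {
    value = fn();
  } catch {
    success = false;
  }
  const attributes: { key: string; value: AttrValue }[] = [
    { key: "ai.agent.tool.name", value: { stringValue: toolName } },
    { key: "ai.agent.tool.success", value: { boolValue: success } },
    { key: "ai.agent.tool.latency_ms", value: { intValue: Date.now() - start } },
  ];
  return { value, attributes };
}
```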
Errors
| Attribute | Type | Description |
|---|---|---|
| ai.agent.error.type | string | Error class (e.g. TimeoutError, RateLimitError) |
| ai.agent.error.message | string | Human-readable error description |
Tasks
| Attribute | Type | Description |
|---|---|---|
| ai.agent.task.name | string | High-level task name |
| ai.agent.task.status | string | Completion status (completed, failed, cancelled) |
Messages
| Attribute | Type | Description |
|---|---|---|
| ai.agent.message.direction | string | inbound or outbound |
| ai.agent.message.type | string | text, attachment, structured, etc. |
Span Status Codes
| Code | Meaning |
|---|---|
| 0 | OK / unset |
| 1 | OK (explicit) |
| 2 | Error |
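A helper mapping an operation's outcome to these codes might look like the following sketch (the function name and shape are illustrative):

```typescript
// Map an operation outcome to a span status object.
// 0 = unset, 1 = explicit OK, 2 = error (per the table above).
function spanStatus(error?: Error): { code: number; message?: string } {
  return error ? { code: 2, message: error.message } : { code: 1 };
}
```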
Integration Examples
Choose between the built-in SDK (recommended) or wiring up the standard OTel SDK directly.
Built-in SDK (Recommended)
If your agent uses @agentvault/crypto or @agentvault/client, the TelemetryReporter class
handles span building, OTLP serialization, buffering, and automatic periodic flushing in one object.
Standard OTel SDK
Use the standard OpenTelemetry SDK with a custom exporter that posts to AgentVault’s ingest endpoint.
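A minimal exporter sketch is shown below. The MinimalSpan shape is a trimmed-down local stand-in for the ReadableSpan type from @opentelemetry/sdk-trace-base, and the URL is a placeholder; substitute your deployment's actual ingest endpoint.

```typescript
// Trimmed-down span shape for illustration (the real OTel SDK passes
// ReadableSpan objects, which would need mapping to OTLP fields).
interface MinimalSpan {
  name: string;
  startTimeUnixNano: string;
  endTimeUnixNano: string;
  attributes: { key: string; value: object }[];
}

class AgentVaultExporter {
  constructor(
    private hubId: string,
    private apiKey: string,
    private url = "https://api.agentvault.chat/telemetry/ingest", // placeholder URL
  ) {}

  // Serialize a batch into the ingest request body (hub_id + spans).
  serialize(spans: MinimalSpan[]): string {
    return JSON.stringify({ hub_id: this.hubId, spans });
  }

  // POST a batch with API-key auth; a 201 response means success.
  async export(spans: MinimalSpan[]): Promise<Response> {
    return fetch(this.url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: this.serialize(spans),
    });
  }
}
```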
Query API
Once spans are ingested, retrieve them via the query endpoints (owner auth required).
| Parameter | Type | Description |
|---|---|---|
| limit | int | Max results (default 100, max 1000) |
| offset | int | Pagination offset |
| span_kind | string | Filter by kind (internal, client, server, etc.) |
| trace_id | string | Filter to a single trace |
| since | ISO 8601 | Only return spans after this timestamp |
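For example, these parameters can be assembled into a query string as follows; the parameter names come from the table above, while the trace ID value is illustrative.

```typescript
// Build a query string from the documented parameters.
const params = new URLSearchParams({
  limit: "50",
  span_kind: "client",
  trace_id: "5b8efff798038103d269b633813fc60c", // example trace ID
});
const query = params.toString();
```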
The /summary endpoint returns aggregate metrics — total spans, error count, error rate, and
average duration — useful for quick health checks.
Rate Limits
The built-in TelemetryReporter buffers spans and flushes them in one POST every 30 seconds,
keeping you safely under the limit without any manual batching.
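The buffering pattern looks roughly like this. This is an illustration of the behavior described above, not the actual TelemetryReporter source; all names are invented for the sketch.

```typescript
// Buffer spans in memory and emit them as one batch per flush interval,
// so request volume stays constant regardless of span throughput.
class BufferedReporter {
  private buffer: object[] = [];
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private send: (spans: object[]) => void, // e.g. one POST to the ingest endpoint
    private intervalMs = 30_000,             // 30-second flush cadence
  ) {}

  start() {
    this.timer = setInterval(() => this.flush(), this.intervalMs);
  }

  report(span: object) {
    this.buffer.push(span);
  }

  flush() {
    if (this.buffer.length === 0) return; // nothing to send this interval
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch);
  }

  stop() {
    if (this.timer) clearInterval(this.timer);
    this.flush(); // drain any remaining spans on shutdown
  }
}
```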