Report agent telemetry to AgentVault using the OTLP-compatible ingestion endpoint.
AgentVault accepts OTLP-formatted agent telemetry via a simple HTTP POST endpoint. Any agent
that can produce OpenTelemetry spans can report metrics to AgentVault — whether you use the
standard OTel SDK, a custom exporter, or the built-in TelemetryReporter from @agentvault/crypto.
Reported spans serve three purposes:

- **Compute trust scores:** reliability, error rate, and response time dimensions feed into the agent's trust tier.
- **Populate the observability dashboard:** trace visualization, span timelines, and aggregate metrics.
- **Feed external collectors:** the OTel push export worker forwards spans to any OTLP-compatible backend.
Telemetry is agent-scoped. Every ingest request is tied to a hub identity, and all data is
tenant-isolated at the database level via Row-Level Security.
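If you are not using an SDK, the raw ingest path is a plain OTLP/JSON POST. The sketch below hand-builds a minimal `ExportTraceServiceRequest` envelope following the standard OTLP/HTTP JSON shape. Note the assumptions: the `/v1/telemetry/otlp` path and the `sendSpans` helper are illustrative, not the documented AgentVault route.

```typescript
// Minimal sketch of a hand-rolled OTLP/JSON trace export.
// ASSUMPTION: the endpoint path below is a placeholder, not the documented route.

type OtlpAttribute = { key: string; value: { stringValue?: string; intValue?: string } };

interface OtlpSpan {
  traceId: string;            // 32 hex chars
  spanId: string;             // 16 hex chars
  name: string;
  kind: number;               // 1 = SPAN_KIND_INTERNAL
  startTimeUnixNano: string;
  endTimeUnixNano: string;
  attributes: OtlpAttribute[];
}

// Wrap raw spans in the standard OTLP ExportTraceServiceRequest envelope.
function buildOtlpRequest(spans: OtlpSpan[], serviceName = "my-agent") {
  return {
    resourceSpans: [
      {
        resource: {
          attributes: [{ key: "service.name", value: { stringValue: serviceName } }],
        },
        scopeSpans: [{ scope: { name: "manual-exporter" }, spans }],
      },
    ],
  };
}

// POST one batch to the ingest endpoint (path is an assumption).
async function sendSpans(spans: OtlpSpan[], apiKey: string): Promise<void> {
  await fetch("https://api.agentvault.chat/v1/telemetry/otlp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Api-Key": apiKey,
    },
    body: JSON.stringify(buildOtlpRequest(spans)),
  });
}
```

Because every span batch travels inside one envelope, one call to `sendSpans` costs exactly one request against the rate limit.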
The ingest endpoint accepts three authentication methods: an API key, a device JWT, or a Clerk JWT.

**API Key (Recommended)**
Best for agents using @agentvault/client or any external process. Generate an API key
from the AgentVault dashboard under Agent > API Keys.
```
X-Api-Key: av_agent_sk_...
```
Or equivalently via the Authorization header:
```
Authorization: Bearer av_agent_sk_...
```
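Since both header forms are equivalent, a small helper can keep the choice in one place. This is a hypothetical convenience function, not part of any AgentVault SDK:

```typescript
// Build ingest request headers from an API key.
// Both header forms are equivalent; pick one and stay consistent.
// ASSUMPTION: this helper is illustrative, not an SDK export.
function ingestHeaders(apiKey: string, useBearer = false): Record<string, string> {
  const auth = useBearer
    ? { Authorization: `Bearer ${apiKey}` }
    : { "X-Api-Key": apiKey };
  return { "Content-Type": "application/json", ...auth };
}
```

Pass the result as the `headers` option of `fetch` when POSTing span batches.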
**Device JWT**

Used automatically by @agentvault/agentvault when running inside OpenClaw. The plugin acquires a short-lived device JWT during enrollment and passes it as a Bearer token.
```
Authorization: Bearer <device_jwt>
```
**Clerk JWT**

For owner-initiated telemetry or dashboard integrations.
If your agent uses @agentvault/crypto or @agentvault/client, the TelemetryReporter class
handles span building, OTLP serialization, buffering, and automatic periodic flushing in one object.
```shell
npm install @agentvault/crypto
```
```typescript
import { TelemetryReporter } from "@agentvault/crypto";

const reporter = new TelemetryReporter({
  apiBase: "https://api.agentvault.chat",
  hubId: "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  authHeader: "Bearer av_agent_sk_...",
});

// Start flushing every 30 seconds in the background
reporter.startAutoFlush();

// Report spans with typed helpers -- no OTLP boilerplate
reporter.reportLlmCall({
  model: "gpt-4o",
  provider: "openai",
  latencyMs: 1200,
  tokensInput: 512,
  tokensOutput: 148,
});

reporter.reportToolCall({
  toolName: "web_search",
  latencyMs: 340,
  success: true,
});

reporter.reportError({
  errorType: "RateLimitError",
  errorMessage: "429 from OpenAI -- retrying in 5s",
});

// Flush remaining spans before shutdown
await reporter.flush();
reporter.stopAutoFlush();
```
TelemetryReporter is also integrated automatically in SecureChannel (plugin) and
AgentVaultClient (client SDK). Spans are reported as a side-effect of normal
messaging operations without any additional setup.
Telemetry ingest shares the standard API rate limit: 60 requests per minute per API key.
Batch multiple spans into a single request to stay well under the limit.
The built-in TelemetryReporter buffers spans and flushes them in one POST every 30 seconds,
keeping you safely under the limit without any manual batching.
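If you are rolling your own exporter instead, the same batching idea takes only a few lines: accumulate spans in memory and ship the whole buffer in a single request per flush. A minimal sketch (the `SpanBuffer` class and its names are illustrative, not an SDK API):

```typescript
// Sketch of manual span batching to stay under the 60 req/min limit.
// One flush == one POST, regardless of how many spans accumulated.
class SpanBuffer<T> {
  private pending: T[] = [];

  add(span: T): void {
    this.pending.push(span);
  }

  // Drain the buffer and hand the whole batch to a single sender call.
  async flush(send: (batch: T[]) => Promise<void>): Promise<number> {
    if (this.pending.length === 0) return 0;
    const batch = this.pending;
    this.pending = []; // clear before awaiting so spans added mid-send aren't lost
    await send(batch);
    return batch.length;
  }
}
```

Call `flush` on a 30-second timer (for example via `setInterval`) and once more on shutdown, mirroring what `TelemetryReporter.startAutoFlush()` does for you.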