# Event Schema
All events share a common envelope. Provider-specific fields are merged in depending on the event type and endpoint.
## Common fields (all events)

| Field | Type | Description |
|---|---|---|
| `event_id` | uuid | Groups `.before`, `.after`, and `.error` events for the same call |
| `event_type` | string | e.g. `chat.completions.create.after` |
| `model` | string | LLM model identifier as passed by the caller |
| `tags` | object | Caller-supplied key/value metadata |
| `message_count` | int | Number of messages in the conversation (chat endpoints) |
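As an illustration, the envelope for a chat `.after` event might look like the following. This is a hypothetical example: the field values (event id, model, tags) are invented for the sketch, not produced by a real call.

```python
# Hypothetical event envelope; values are illustrative only.
event = {
    "event_id": "5f0c4c1e-8b2a-4a5d-9d3e-2f7b6a1c0d4e",  # shared by .before/.after/.error
    "event_type": "chat.completions.create.after",
    "model": "gpt-4o-mini",                    # as passed by the caller
    "tags": {"team": "search", "env": "prod"}, # caller-supplied metadata
    "message_count": 3,                        # chat endpoints only
}

# The event_id lets a consumer pair this .after event with its .before event.
assert event["event_type"].endswith(".after")
```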
## After events (`.after`)

| Field | Type | Description |
|---|---|---|
| `elapsed_ms` | float | Wall-clock latency of the LLM call in milliseconds |
| `prompt_tokens` | int \| null | Input token count (chat, text completions, embeddings) |
| `completion_tokens` | int \| null | Output token count (chat, text completions) |
| `total_tokens` | int \| null | Total tokens (embeddings) |
| `input_tokens` | int \| null | Input tokens (Responses API) |
| `output_tokens` | int \| null | Output tokens (Responses API) |
| `cached_tokens` | int \| null | Cached input tokens (Responses API) |
| `char_count` | int \| null | Input character count (TTS — billed by character) |
## Error events (`.error`)

| Field | Type | Description |
|---|---|---|
| `elapsed_ms` | float | Time elapsed before the error was raised |
| `error_type` | string | Exception class name (e.g. `RateLimitError`) |
| `error_message` | string | Human-readable error description |
| `status_code` | int \| null | HTTP status code from the provider (if available) |
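An error-handling consumer will typically want to distinguish transient failures from hard ones. A hedged sketch, assuming `.error` events are delivered as dicts with the fields above (the specific status codes and exception names used here are an illustrative heuristic, not part of the schema):

```python
def is_retryable(error_event: dict) -> bool:
    """Classify an .error event as retryable. Illustrative heuristic only."""
    status = error_event.get("status_code")
    if status in (429, 500, 502, 503, 504):  # throttling / transient server errors
        return True
    # status_code may be null (e.g. network failures before a response arrived);
    # fall back to the exception class name carried in error_type.
    return error_event.get("error_type") in ("APIConnectionError", "APITimeoutError")
```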
## Event type reference

| Event type | Provider |
|---|---|
| `chat.completions.create.{before,after,error}` | OpenAI — chat |
| `chat.completions.create_async.{before,after,error}` | OpenAI — async chat |
| `completions.create.{before,after,error}` | OpenAI — legacy text |
| `completions.create_async.{before,after,error}` | OpenAI — legacy text async |
| `embeddings.create.{before,after,error}` | OpenAI — embeddings |
| `embeddings.create_async.{before,after,error}` | OpenAI — embeddings async |
| `responses.create.{before,after,error}` | OpenAI — Responses API |
| `responses.create_async.{before,after,error}` | OpenAI — Responses API async |
| `audio.speech.create.{before,after,error}` | OpenAI — TTS |
| `audio.speech.create_async.{before,after,error}` | OpenAI — TTS async |
| `audio.transcriptions.create.{before,after,error}` | OpenAI — STT |
| `audio.transcriptions.create_async.{before,after,error}` | OpenAI — STT async |
| `audio.translations.create.{before,after,error}` | OpenAI — Translation |
| `audio.translations.create_async.{before,after,error}` | OpenAI — Translation async |
| `chat.complete.{before,after,error}` | Mistral AI — sync |
| `chat.complete_async.{before,after,error}` | Mistral AI — async |
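Since every event type ends in `.before`, `.after`, or `.error`, a consumer can split the operation from the phase with a single string operation. A small sketch of that convention (the function name is hypothetical):

```python
def parse_event_type(event_type: str) -> tuple[str, str]:
    """Split an event type into (operation, phase).

    Relies on the naming convention in the table above: the final
    dot-separated segment is always 'before', 'after', or 'error'.
    """
    operation, _, phase = event_type.rpartition(".")
    if phase not in ("before", "after", "error"):
        raise ValueError(f"unexpected phase in event type: {event_type!r}")
    return operation, phase
```

For example, `parse_event_type("chat.completions.create.after")` yields `("chat.completions.create", "after")`, and async variants can then be detected by checking whether the operation ends in `_async`.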