# Quick Start

Get observability on your LLM calls in under 2 minutes.
## Install the SDK

```shell
pip install weflayr
```

## Set environment variables
Your Flare credentials are in the Weflayr dashboard under Settings → Flares.
```shell
export WEFLAYR_INTAKE_URL="https://api.weflayr.com"
export WEFLAYR_CLIENT_ID="your-client-id"
export WEFLAYR_CLIENT_SECRET="your-client-secret"
```

You can also pass them directly to the client constructor (`intake_url`, `client_id`, `bearer_token`).

## Replace your import — one line change
**Before:**

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")
```

**After:**

```python
from weflayr.sdk.openai.client import OpenAI  # ← only this line changes

client = OpenAI(api_key="sk-...")
```

The Weflayr client is a subclass of the official SDK client. All existing method signatures, return types, and exceptions are identical.
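The subclass approach is why the swap is a one-line change. A minimal sketch of the pattern, using stand-in classes rather than Weflayr's actual source (`BaseClient`, `InstrumentedClient`, and the `events` list are illustrative names, not part of either SDK):

```python
class BaseClient:
    """Stand-in for the official SDK client."""

    def create(self, model: str, messages: list) -> dict:
        # Pretend this performs the real API call.
        return {"model": model, "content": "hello"}


class InstrumentedClient(BaseClient):
    """Stand-in for a telemetry-recording subclass."""

    def __init__(self):
        self.events = []

    def create(self, model: str, messages: list) -> dict:
        # Delegate to the parent, so behavior, return type,
        # and exceptions stay identical to the base client.
        response = super().create(model, messages)
        # Record telemetry out of band; the caller never sees this.
        self.events.append({"model": model})
        return response


client = InstrumentedClient()
resp = client.create("gpt-4o-mini", [{"role": "user", "content": "Hi"}])
assert isinstance(client, BaseClient)  # still usable anywhere the base client is
```

Because the wrapper calls `super()` and only adds a side channel, any code written against the base client keeps working unchanged.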
## Make a call — telemetry is automatic
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    tags={"feature": "onboarding", "env": "production"},  # optional
)
# response is the standard ChatCompletion object — unchanged
```

The `tags` argument is stripped before forwarding to OpenAI. See the Tags guide for more.
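Stripping an extra keyword before forwarding is a small, self-contained idea. A hedged sketch of how such a wrapper might do it (the function body and names here are illustrative, not Weflayr's implementation):

```python
def create(**kwargs):
    """Illustrative wrapper: pull out `tags`, forward everything else."""
    # Remove the telemetry-only argument so the upstream SDK never sees it.
    tags = kwargs.pop("tags", None)
    # A real wrapper would attach `tags` to the emitted telemetry here,
    # then call the upstream client with the remaining kwargs.
    return {"forwarded_kwargs": kwargs, "tags_seen": tags}


result = create(model="gpt-4o-mini", messages=[], tags={"env": "prod"})
assert "tags" not in result["forwarded_kwargs"]  # upstream call is clean
```

`dict.pop` with a default makes the argument optional: calls without `tags` forward exactly as written.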