AgentFlare tracks every LLM call your agents make, calculates costs in real time, and auto-pauses them the moment they hit your spending limit. 3-line integration. Works with LangChain, LangGraph, or any custom agent.
# Install
pip install agentflare

from agentflare import AgentFlare

guard = AgentFlare(
    api_key="ag_...",
    agent_id="my-agent",
    cost_threshold=10.0,  # daily budget in USD
)

# LangChain — drop-in callback
chain.with_config(callbacks=[guard.callback])

Built for developers running AI agents in production
Real-time cost tracking
Every LLM call tracked instantly. Cost calculated per model with up-to-date pricing.
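Per-model pricing boils down to token counts multiplied by per-token rates. A minimal sketch of the idea (the model names and per-million-token prices below are illustrative placeholders, not AgentFlare's actual pricing table, which the backend keeps up to date):

```python
# Illustrative USD prices per 1M tokens — placeholder values only.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single LLM call, given its token counts."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(call_cost("gpt-4o", 1_000, 500))  # → 0.0075
```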
Auto-pause guardrail
Set a daily budget per agent. When it's hit, the agent pauses itself automatically.
Slack alerts
Get instant Slack notifications when an agent is paused.
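An alert like this is a single POST to a Slack incoming webhook. A minimal sketch using only the standard library (the message wording is a placeholder; only the `{"text": ...}` payload shape is Slack's documented webhook format):

```python
import json
import urllib.request

def build_pause_alert(agent_id: str, spend: float, threshold: float) -> dict:
    """Slack incoming-webhook payload for a paused-agent notification."""
    return {
        "text": (
            f":warning: Agent `{agent_id}` was auto-paused: "
            f"${spend:.2f} spent against a ${threshold:.2f} daily budget."
        )
    }

def send_pause_alert(webhook_url: str, agent_id: str,
                     spend: float, threshold: float) -> None:
    """POST the alert to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_pause_alert(agent_id, spend, threshold)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```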
Live dashboard
Watch events stream in real-time. Cost meters, hourly charts, model breakdowns.
Four steps from SDK call to dashboard update
01
Agent calls LLM
Your agent makes an LLM call. The SDK intercepts it.
02
Event sent
Token counts and the model name are posted to the backend in the background.
03
Cost calculated
The backend calculates the USD cost and checks your 24-hour spend against the threshold.
04
Dashboard updates
Supabase Realtime pushes the event to your dashboard in < 1 second.
Managed platform; the SDK is fully open source on PyPI and GitHub.