Managed Platform · Open-Source SDK

Stop your AI agents
before they wreck your budget

AgentFlare tracks every LLM call your agents make, calculates costs in real-time, and auto-pauses them the moment they hit your spending limit. 3-line integration. Works with LangChain, LangGraph, or any custom agent.

Quick start
# Install
pip install agentflare

from agentflare import AgentFlare

guard = AgentFlare(
    api_key="ag_...",
    agent_id="my-agent",
    cost_threshold=10.0,
)

# LangChain — drop-in callback
# (with_config returns a new runnable, so keep the result)
chain = chain.with_config(callbacks=[guard.callback])
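The quick start covers LangChain; for a custom agent, you would presumably report each LLM call to the guard yourself. The sketch below is pseudocode for that idea — the `track` method and its parameter names are assumptions for illustration, not the documented SDK API:

```
# Hypothetical manual tracking for a custom agent loop —
# method and parameter names are assumptions, not the real API.
response = call_llm(prompt)          # your own LLM call
guard.track(
    model="gpt-4o",
    input_tokens=response.usage.input_tokens,
    output_tokens=response.usage.output_tokens,
)
```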

Everything you need

Built for developers running AI agents in production

Real-time cost tracking

Every LLM call tracked instantly. Cost calculated per model with up-to-date pricing.

Auto-pause guardrail

Set a daily budget per agent. When it's hit, the agent pauses itself automatically.

Slack alerts

Get instant Slack notifications when an agent is paused.

Live dashboard

Watch events stream in real-time. Cost meters, hourly charts, model breakdowns.

How it works

Four steps from SDK call to dashboard update

01

Agent calls LLM

Your agent makes an LLM call. The SDK intercepts it.

02

Event sent

Token counts and the model name are posted to the backend in the background.

03

Cost calculated

The backend calculates the USD cost and checks it against your 24-hour spending threshold.

04

Dashboard updates

Supabase Realtime pushes the event to your dashboard in < 1 second.
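The cost math behind steps 02 and 03 can be sketched in plain Python: multiply token counts by a per-model price, sum the last 24 hours of events, and compare the total to the budget. The prices below are illustrative placeholders, not AgentFlare's actual pricing table:

```python
# Illustrative per-1M-token prices in USD — placeholder numbers,
# not AgentFlare's real pricing data.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def event_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Step 03 (per event): tokens x per-model price, in USD."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def should_pause(events: list[dict], threshold_usd: float) -> bool:
    """Step 03 (per agent): sum the last 24h of events, compare to budget."""
    total = sum(event_cost(e["model"], e["in"], e["out"]) for e in events)
    return total >= threshold_usd
```

For example, 800k input and 200k output tokens on `gpt-4o` at these placeholder prices cost $2.00 + $2.00 = $4.00, so a $4 daily budget would pause the agent while a $10 budget would not.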

Ready to guard your agents?

The platform is managed; the SDK is fully open source on PyPI and GitHub.

No credit card required
Open source SDK
5 min integration