# LangChain Integration
Drop-in callback handler for LangChain and LangGraph agents.
## Overview
If your agent uses LangChain or LangGraph, AgentFlare integrates with zero changes to your existing chain logic. Just add `guard.callback` to the `callbacks` list.
AgentFlare implements `BaseCallbackHandler` from `langchain-core`. It hooks into the `on_llm_end` and `on_tool_start` lifecycle events, extracting token counts and model names automatically.
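Conceptually, the handler works like this. The sketch below is illustrative, not AgentFlare's actual source: it models the two lifecycle hooks as plain methods and the LLM response as a dict, where a real handler subclasses `BaseCallbackHandler` and receives an `LLMResult`.

```python
from typing import Any


class SketchCallback:
    """Illustrative sketch of the callback mechanism (not AgentFlare's
    source). A real handler subclasses langchain-core's
    BaseCallbackHandler; the hook names match the events listed below."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        # In this sketch, generation_info carries the model name and
        # provider-specific token counts.
        info = response.get("generation_info", {})
        self.events.append({
            "type": "llm_call",
            "model": info.get("model", "unknown"),
            "input_tokens": info.get("input_tokens", 0),
            "output_tokens": info.get("output_tokens", 0),
        })

    def on_tool_start(self, serialized: dict, input_str: str, **kwargs: Any) -> None:
        # serialized["name"] is the tool's registered name in LangChain.
        self.events.append({"type": "tool_call",
                            "tool_name": serialized.get("name")})
```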
## Setup
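Assuming the SDK ships on PyPI as `agentflare` with a LangChain extra (both the package name and the extra are assumptions based on the identifiers in this doc), installation looks like:

```shell
# Install the SDK together with langchain-core (package/extra names assumed)
pip install "agentflare[langchain]"

# Or install the pieces separately
pip install agentflare langchain-core
```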
## Usage
### With a chain
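A minimal sketch, assuming `AgentFlare` is the client class and `guard.callback` is its handler (the constructor arguments are placeholders; the chain itself is standard langchain-core LCEL):

```python
from agentflare import AgentFlare  # hypothetical client, per the SDK names above
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

guard = AgentFlare(agent_id="support-bot")  # agent_id is an assumed parameter

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm

# Pass the callback per invocation; the chain logic itself is unchanged.
result = chain.invoke(
    {"text": "LangChain callbacks fire on every LLM and tool event."},
    config={"callbacks": [guard.callback]},
)
```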
### With LangGraph
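With LangGraph, the callback rides along in the `RunnableConfig` and propagates through every node of the graph. A sketch under the same assumptions as above (the graph and node are placeholders):

```python
from typing import TypedDict

from agentflare import AgentFlare  # hypothetical client, per the SDK names above
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # Call your LLM here; its on_llm_end events are captured automatically.
    return {"answer": "stub"}


guard = AgentFlare(agent_id="support-bot")  # agent_id is an assumed parameter

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()

# Callbacks propagate to every node via the RunnableConfig.
result = app.invoke({"question": "hi"}, config={"callbacks": [guard.callback]})
```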
## What gets tracked automatically
| Event | Callback hook | What's extracted |
|---|---|---|
| LLM call | `on_llm_end` | `model`, `input_tokens`, `output_tokens` |
| Tool call | `on_tool_start` | `tool_name` |
Token counts are read from the `generation_info` dict returned by the LLM. AgentFlare handles different key names across providers:
| Provider | Input tokens key | Output tokens key |
|---|---|---|
| OpenAI | `prompt_tokens` | `completion_tokens` |
| Anthropic | `input_tokens` | `output_tokens` |
| Google Gemini | `prompt_token_count` | `candidates_token_count` |
If a key is missing, the count defaults to 0 (no cost is recorded for that call, but the event is still saved).
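The lookup can be pictured as follows. This is an illustrative sketch of the normalization, not AgentFlare's source; the key pairs come straight from the table above.

```python
# Provider-specific (input, output) key pairs, as in the table above.
TOKEN_KEYS = [
    ("prompt_tokens", "completion_tokens"),            # OpenAI
    ("input_tokens", "output_tokens"),                 # Anthropic
    ("prompt_token_count", "candidates_token_count"),  # Google Gemini
]


def extract_tokens(generation_info: dict) -> tuple[int, int]:
    """Return (input_tokens, output_tokens), defaulting to 0 when absent."""
    for in_key, out_key in TOKEN_KEYS:
        if in_key in generation_info or out_key in generation_info:
            return (generation_info.get(in_key, 0),
                    generation_info.get(out_key, 0))
    return (0, 0)  # unknown provider: record the event with zero cost
```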
## Handling pause mid-chain
When an agent is paused, the next `send_event` call inside the callback returns `False` and caches `_paused = True`. However, LangChain callbacks cannot interrupt a running chain: the current LLM call will complete before the pause takes effect.
If you need hard-stop behavior, check `guard.is_paused` between chain steps:
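For example (everything except `guard.is_paused` and `guard.callback` is a placeholder — `retrieve_chain`, `answer_chain`, and `state` stand in for your own runnables and state):

```python
# Hard stop between steps: bail out before starting the next runnable.
for step in [retrieve_chain, answer_chain]:  # your own LCEL steps
    if guard.is_paused:  # updated by the callback's last send_event
        break            # skip remaining steps; the current one has finished
    state = step.invoke(state, config={"callbacks": [guard.callback]})
```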
## Without LangChain installed
If `langchain-core` is not installed, importing `AgentFlareCallback` raises an `ImportError` with a clear message. The rest of the SDK (`AgentFlare`, `AgentEvent`) works fine without LangChain.
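If your code must run both with and without the integration, a standard optional-import guard works (the flag name here is our own, not part of the SDK):

```python
# Fall back gracefully when the LangChain integration is unavailable.
try:
    from agentflare import AgentFlareCallback
    HAVE_LANGCHAIN_INTEGRATION = True
except ImportError:
    AgentFlareCallback = None  # core AgentFlare / AgentEvent remain usable
    HAVE_LANGCHAIN_INTEGRATION = False
```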