2026-03-07 · 9 min read · Curtis Thomas
Tags: LangChain · audit trail · tutorial · SDK · compliance

How to Add Audit Trails to LangChain Agents

Your LangChain agent makes decisions in production — calling tools, reasoning through multi-step tasks, generating outputs that affect real users. But when someone asks "what did the agent do and why?", your answer is probably a CloudWatch log full of unstructured text.

That's not an audit trail. That's a liability.

Why Standard Logging Fails for LangChain Agents

Most teams start with the obvious approach: pipe LangChain's verbose output to CloudWatch, Datadog, or a custom logger. It works for debugging, but it falls apart when compliance or legal gets involved.

CloudWatch/Datadog/Splunk logs aren't tamper-proof. Anyone with write access can modify or delete log entries after the fact. If a regulator asks you to prove an agent's decision wasn't altered, you can't. There's no cryptographic guarantee that what you're showing them is what actually happened.

They aren't structured for compliance. Generic log lines don't capture the decision chain: what input the agent received, what reasoning it used, which tools it called, and what output it produced. Compliance reports need structured decision lineage, not [INFO] Agent called tool: search_api.

They can't generate audit reports. When your SOC 2 auditor asks for evidence of processing integrity, or when the EU AI Act Article 12 mandates automatic logging over the system's lifetime, you need one-click report generation — not a junior engineer spending a week writing Splunk queries.

What a Proper AI Agent Audit Trail Needs

A compliance-grade audit trail for AI agents requires:

  1. Hash chaining: Every trace is cryptographically linked to the previous one via SHA-256. If any record is modified, the entire chain breaks. This is your tamper-proof guarantee.

  2. Immutability: Once a trace is written, it cannot be altered or deleted. The system of record is append-only.

  3. Session linking: Every step in a multi-step agent execution is linked by a session ID, so you can reconstruct the full decision chain from trigger to final output.

  4. Decision lineage: Input, reasoning, tool calls, and output are captured as structured data — not log strings.

  5. Compliance export: One-click generation of reports mapped to specific frameworks (EU AI Act, SOC 2, ISO 27001).
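The hash-chaining requirement above can be sketched in a few lines of Node.js: each record's SHA-256 covers its own payload plus the previous record's hash, so editing any record invalidates every hash after it. This is a simplified illustration of the concept, not AgentTraceHQ's internal format:

```javascript
import { createHash } from "node:crypto";

// Append a trace to the chain. Each record's hash covers its payload
// plus the previous record's hash, so modifying any earlier record
// breaks every link after it. (Simplified sketch, not the real format.)
function appendTrace(chain, payload) {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(prevHash + JSON.stringify(payload))
    .digest("hex");
  chain.push({ payload, prevHash, hash });
  return chain;
}

const chain = [];
appendTrace(chain, { type: "agent_start", input: "Should I invest in AAPL?" });
appendTrace(chain, { type: "tool_call", tool: "lookup_stock", input: "AAPL" });
appendTrace(chain, { type: "agent_finish", output: "Hold." });
```

Because each hash depends on the one before it, an auditor only needs the final hash to detect tampering anywhere in the chain.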

Tutorial: Add AgentTraceHQ to a LangChain Agent in 5 Minutes

Prerequisites

  • Node.js 18+
  • A LangChain agent (we'll build one from scratch if you don't have one)
  • An AgentTraceHQ account (free tier — 10K traces/month)

Install the SDK

npm install @agenttracehq/sdk @langchain/openai @langchain/core langchain

Initialize AgentTraceHQ

import { AgentTraceHQ, LangChainHandler } from "@agenttracehq/sdk";

const athq = new AgentTraceHQ({
  apiKey: process.env.AGENTTRACEHQ_API_KEY,
  agentId: "financial-research-agent",
  environment: "production",
});

The agentId ties all traces from this agent together. Use a descriptive, stable identifier — it's how you'll filter and query traces in the dashboard.

Add the LangChain Callback Handler

AgentTraceHQ provides a native LangChain callback handler that hooks into the framework's existing callback system. It captures every LLM call, tool invocation, chain step, and agent action automatically.

const handler = new LangChainHandler(athq, {
  captureIO: true,         // Log full inputs and outputs
  captureReasoning: true,  // Extract chain-of-thought from agent scratchpad
  sessionPrefix: "research", // Sessions are auto-named: research-<uuid>
});

Pass the handler to your agent executor:

const result = await agentExecutor.invoke(
  { input: userQuery },
  { callbacks: [handler] }
);

That's it. Every action the agent takes is now traced.

Full Working Example

Here's a complete, working LangChain agent with AgentTraceHQ integrated — not pseudocode:

import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { DynamicTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentTraceHQ, LangChainHandler } from "@agenttracehq/sdk";

// 1. Initialize AgentTraceHQ
const athq = new AgentTraceHQ({
  apiKey: process.env.AGENTTRACEHQ_API_KEY,
  agentId: "financial-research-agent",
});

const handler = new LangChainHandler(athq, {
  captureIO: true,
  captureReasoning: true,
});

// 2. Define tools
const lookupStock = new DynamicTool({
  name: "lookup_stock",
  description: "Look up current stock price and key metrics for a ticker symbol",
  func: async (ticker) => {
    // Replace with your actual market data API
    const data = {
      ticker: ticker.toUpperCase(),
      price: 142.58,
      pe_ratio: 28.4,
      market_cap: "3.5T",
      day_change: "+2.3%",
      volume: "45.2M",
    };
    return JSON.stringify(data);
  },
});

const getNews = new DynamicTool({
  name: "get_financial_news",
  description: "Get recent financial news headlines for a company",
  func: async (company) => {
    // Replace with your actual news API
    return JSON.stringify([
      { headline: `${company} beats Q4 earnings estimates by 12%`, date: "2026-03-05" },
      { headline: `${company} announces new AI infrastructure investment`, date: "2026-03-04" },
      { headline: `Analysts upgrade ${company} to 'Strong Buy'`, date: "2026-03-03" },
    ]);
  },
});

// 3. Create the agent
const llm = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a financial research assistant. Analyze stocks using available tools and provide clear, data-backed recommendations. Always state your reasoning.`,
  ],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools: [lookupStock, getNews],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [lookupStock, getNews],
  verbose: false, // AgentTraceHQ handles tracing — no need for console spam
});

// 4. Run the agent with tracing
const result = await agentExecutor.invoke(
  { input: "Should I invest in AAPL? Give me a quick analysis." },
  { callbacks: [handler] }
);

console.log(result.output);

// 5. Flush traces before exit (important for short-lived scripts)
await athq.flush();

Run this script, and within seconds you'll see traces in your AgentTraceHQ dashboard.

What Appears in the Dashboard

Open the trace explorer and you'll see a session containing multiple linked traces:

  1. agent_start — The agent received the user query, timestamp and session ID assigned
  2. llm_call — The LLM was called with the system prompt and user input. Full prompt captured, token count recorded (e.g., 2,150 input / 430 output)
  3. tool_call: lookup_stock — The agent decided to look up AAPL. Input: "AAPL". Output: the full stock data object
  4. tool_call: get_financial_news — The agent fetched news. Input: "Apple". Output: the news headlines
  5. llm_call — Second LLM call with tool results in context. The agent synthesizes its analysis
  6. agent_finish — Final output with the investment recommendation, total latency, and cost

Click any trace to see the full detail: input payload, output payload, reasoning extracted from the agent scratchpad, model used, token counts, latency, and cost estimate.
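The per-trace cost estimate is plain arithmetic over the captured token counts. As an illustration, here's the calculation for the 2,150-input / 430-output example above; the per-million-token prices are hypothetical placeholders, so substitute your model's actual rates:

```javascript
// Estimate LLM-call cost from captured token counts.
// Pricing values here are hypothetical placeholders, not real rates.
function estimateCostUSD(inputTokens, outputTokens, pricing) {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMTok +
    (outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}

const cost = estimateCostUSD(2150, 430, {
  inputPerMTok: 2.5,  // assumed $/1M input tokens
  outputPerMTok: 10,  // assumed $/1M output tokens
});
```

Summing these estimates across a session gives the per-session cost breakdowns described later.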

Click Session View to see the entire decision chain as a timeline — from the user's question to the final recommendation, with every intermediate step visible.
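Session linking is what makes this timeline possible: because every trace carries the same session ID, reconstructing the decision chain is just a filter-sort-map over structured records. A rough sketch of the idea (the field names here are illustrative, not AgentTraceHQ's actual schema):

```javascript
// Reconstruct a session timeline: filter traces to one session,
// order them by timestamp, and render each step.
// (Field names are illustrative, not the real trace schema.)
function buildTimeline(traces, sessionId) {
  return traces
    .filter((t) => t.sessionId === sessionId)
    .sort((a, b) => a.timestamp - b.timestamp)
    .map((t) => `${t.timestamp} ${t.type}`);
}

const traces = [
  { sessionId: "research-1", timestamp: 3, type: "agent_finish" },
  { sessionId: "research-1", timestamp: 1, type: "agent_start" },
  { sessionId: "other", timestamp: 2, type: "llm_call" },
  { sessionId: "research-1", timestamp: 2, type: "tool_call" },
];
```

This only works because traces are structured records with stable session IDs, which is exactly what generic log strings lack.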

Hash Chain Verification

Every trace in the session is hash-chained. Click Verify Chain on any trace, and AgentTraceHQ walks the chain backward from that point, confirming every SHA-256 hash matches. If someone modified a trace — changed an output, deleted a tool call, altered a timestamp — the verification fails and shows you exactly which block was tampered with.

This is what makes it a compliance-grade audit trail, not just a log. When your SOC 2 auditor asks "can you prove this record wasn't altered?", you can verify the cryptographic chain in front of them.
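Verification is the mirror image of chaining: walk the records in order, recompute each SHA-256 link from the stored payload and the previous hash, and compare against the stored hash. A minimal Node.js sketch of the idea (again illustrative, not the SDK's actual API):

```javascript
import { createHash } from "node:crypto";

// Build one chained record: hash covers previous hash + payload.
function link(prevHash, payload) {
  return {
    payload,
    hash: createHash("sha256")
      .update(prevHash + JSON.stringify(payload))
      .digest("hex"),
  };
}

// Walk the chain from the start, recomputing every SHA-256 link.
// Returns the index of the first tampered record, or -1 if intact.
// (Illustrative sketch of the verification idea, not the real API.)
function firstTamperedIndex(chain) {
  let prevHash = "GENESIS";
  for (let i = 0; i < chain.length; i++) {
    const expected = createHash("sha256")
      .update(prevHash + JSON.stringify(chain[i].payload))
      .digest("hex");
    if (expected !== chain[i].hash) return i;
    prevHash = chain[i].hash;
  }
  return -1;
}

const t1 = link("GENESIS", { type: "agent_start" });
const t2 = link(t1.hash, { type: "agent_finish" });
const chain = [t1, t2];

firstTamperedIndex(chain); // → -1 (chain intact)
```

Changing any payload after the fact, even a single timestamp, makes the recomputed hash diverge at that record, which is how the dashboard pinpoints the tampered block.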

If you want to build tamper-proof audit trails alongside your existing observability stack, AgentTraceHQ handles the compliance layer while your existing tools handle debugging and performance monitoring.

What You Get

Trace Explorer: Searchable, filterable table of every agent action. Filter by agent, session, action type, time range, risk level, or tags.

Forensic Replay: Reconstruct any agent session step-by-step. See exactly what the agent saw, what it decided, and what it did — in order, with full context.

Compliance Exports: One-click reports for EU AI Act (Article 12 logging compliance), SOC 2 Type II (processing integrity evidence), and ISO 27001 (information security audit records).

Anomaly Alerts: Get notified when an agent's behavior deviates from baseline — unusual token spend, unexpected tool calls, error rate spikes, or policy violations.

Cost Tracking: Per-agent, per-session cost breakdowns. Know exactly how much each agent decision costs.

Start Tracing Your LangChain Agents

Every LangChain agent you run in production is making autonomous decisions. The question isn't whether you need an audit trail — it's whether you'll have one when a regulator, auditor, or customer asks for it.

AgentTraceHQ adds compliance-grade audit trails to any LangChain agent with a callback handler and three lines of code. Every trace is hash-chained, tamper-evident, and exportable to the compliance framework you need.

Start free at agenttracehq.com — 10K traces/month, no credit card required.