Stream Chat LangChain SDK

The Stream Chat LangChain SDK wires Stream’s real-time messaging primitives to the LangChain JavaScript SDK. With a single package you can host AI copilots inside Stream channels, connect OpenAI, Anthropic, Gemini, or xAI chat models, stream partial responses with typing indicators, dispatch tool calls, and optionally persist long-term memory via Mem0.

You can find a sample integration with the LangChain SDK in our samples repo.


Installation

npm install @stream-io/chat-langchain-sdk
# or
yarn add @stream-io/chat-langchain-sdk

The package ships TypeScript sources plus transpiled JavaScript under dist/. Import either ESM or CJS builds depending on your bundler.

Supported Providers

Select a provider at runtime through the AgentPlatform enum:

| Platform | Enum | Default model | Notes |
| --- | --- | --- | --- |
| OpenAI | AgentPlatform.OPENAI | gpt-4o-mini | Enables Mem0 (provider: openai) when configured. |
| Anthropic | AgentPlatform.ANTHROPIC | claude-3-5-sonnet-20241022 | Uses LangChain’s Anthropic chat wrapper with optional Mem0 support. |
| Gemini | AgentPlatform.GEMINI | gemini-1.5-flash | Accepts GOOGLE_GENERATIVE_AI_API_KEY or GEMINI_API_KEY. |
| xAI | AgentPlatform.XAI | grok-beta | Mem0 does not expose xAI yet, so long-term memory stays disabled. |

Override the model ID per agent via the model option to match your LangChain configuration.


Environment Variables

Set the following before instantiating an agent:

| Variable | Required | Description |
| --- | --- | --- |
| STREAM_API_KEY, STREAM_API_SECRET | ✅ | Used by the server-side Stream client in serverClient.ts to upsert and connect the agent user. Missing keys throw immediately on import. |
| OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY / GEMINI_API_KEY, XAI_API_KEY | ✅ (per provider) | Passed into the LangChain chat model wrapper for the configured AgentPlatform. |
| MEM0_API_KEY, MEM0_CONFIG_JSON, MEM0_DEFAULT_* | Optional | Enables Mem0 memory when present. See Memory & Mem0. |
| OPENWEATHER_API_KEY | Optional | Required only if you rely on the getCurrentTemperature tool returned by createDefaultTools(). |

Tip: when running multiple agents, load environment variables once at process start-up so lazy imports such as serverClient do not throw intermittently.
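To fail fast on missing configuration, you can run a small startup check before constructing any agents. This is a sketch, not part of the SDK: missingEnv and PROVIDER_KEYS are hypothetical names, and the variable grouping follows the table above.

```typescript
// Hypothetical startup guard: verify Stream and provider keys up front,
// before lazy imports such as serverClient can throw.
const PROVIDER_KEYS: Record<string, string[]> = {
  openai: ["OPENAI_API_KEY"],
  anthropic: ["ANTHROPIC_API_KEY"],
  gemini: ["GOOGLE_GENERATIVE_AI_API_KEY", "GEMINI_API_KEY"], // either one suffices
  xai: ["XAI_API_KEY"],
};

function missingEnv(
  platform: string,
  env: Record<string, string | undefined>,
): string[] {
  const missing: string[] = [];
  // Stream credentials are always required.
  for (const key of ["STREAM_API_KEY", "STREAM_API_SECRET"]) {
    if (!env[key]) missing.push(key);
  }
  // Gemini accepts either of two variables, so treat them as alternatives.
  const providerKeys = PROVIDER_KEYS[platform] ?? [];
  if (providerKeys.length > 0 && !providerKeys.some((key) => env[key])) {
    missing.push(providerKeys.join(" or "));
  }
  return missing;
}
```

Call it once at process start-up (for example with `missingEnv(platform, process.env)`) and abort with a clear message if anything is missing.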

Quick Start: Single Agent

import {
  Agent,
  AgentPlatform,
  createDefaultTools,
  type ClientToolDefinition,
} from "@stream-io/chat-langchain-sdk";

const agent = new Agent({
  userId: "ai-bot-weather",
  channelId: "support-room",
  channelType: "messaging",
  platform: AgentPlatform.OPENAI,
  model: "gpt-4o-mini",
  instructions: [
    "Answer in a friendly, concise tone.",
    "Prefer Celsius unless the user specifies otherwise.",
  ],
  serverTools: createDefaultTools(),
  clientTools: [
    {
      name: "openHelpCenter",
      description: "Open the help center in the web app",
      parameters: {
        type: "object",
        properties: { articleSlug: { type: "string" } },
        required: ["articleSlug"],
      },
    } satisfies ClientToolDefinition,
  ],
  mem0Context: {
    channelId: "support-room",
    appId: "stream-chat-support",
  },
});

await agent.start();
// Later...
await agent.stop();

Multi-Channel Deployments with AgentManager

Use AgentManager to orchestrate many concurrent agents without leaking resources. It caches instances by userId, supports lazy start/stop, and periodically disposes idle bots.

import {
  AgentManager,
  AgentPlatform,
  createDefaultTools,
} from "@stream-io/chat-langchain-sdk";

const manager = new AgentManager({
  serverToolsFactory: () => createDefaultTools(),
  inactivityThresholdMs: 30 * 60 * 1000,
  cleanupIntervalMs: 5_000,
});

await manager.startAgent({
  userId: "ai-support-bot-123",
  channelId: "support-room",
  channelType: "messaging",
  platform: AgentPlatform.ANTHROPIC,
  instructions: [
    "Provide empathetic troubleshooting tips.",
    "Offer to escalate tough issues to a human.",
  ],
});

manager.registerClientTools("support-room", [
  {
    name: "openTicket",
    description: "Open the CRM ticket in the dashboard",
    parameters: {
      type: "object",
      properties: { ticketId: { type: "string" } },
      required: ["ticketId"],
    },
  },
]);

// On shutdown:
await manager.stopAgent("ai-support-bot-123");
manager.dispose();

Key behaviors:

  • serverToolsFactory runs each time a new agent is constructed, allowing per-channel config.
  • registerClientTools(channelId, tools) persists definitions and re-applies them the next time the channel’s agent starts.
  • inactivityThresholdMs (default ~8 hours) pairs with cleanupIntervalMs (default 5 seconds) to stop agents automatically once Agent#getLastInteraction() goes stale.
  • dispose() clears the cleanup interval and caches; call it during process shutdown to avoid hanging timers.
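The idle-reaping behavior above can be pictured with a minimal sketch. This is illustrative only; ReapableAgent and reapIdleAgents are hypothetical names, not the SDK's internals.

```typescript
// Illustrative idle reaper: stop agents whose last interaction is older than
// the threshold. The real AgentManager runs this on a cleanupIntervalMs timer.
interface ReapableAgent {
  userId: string;
  getLastInteraction(): number; // epoch ms of the most recent activity
  stop(): void;
}

function reapIdleAgents(
  agents: Map<string, ReapableAgent>,
  inactivityThresholdMs: number,
  now: number = Date.now(),
): string[] {
  const stopped: string[] = [];
  for (const [userId, agent] of agents) {
    if (now - agent.getLastInteraction() > inactivityThresholdMs) {
      agent.stop();
      agents.delete(userId); // safe: Map iteration tolerates deletion
      stopped.push(userId);
    }
  }
  return stopped;
}
```

In the real manager the equivalent check runs on a setInterval created at construction, which is why dispose() must be called during shutdown to clear it.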

Working with Tools

Server Tools (AgentTool)

Server tools run in Node/TS and receive typed arguments plus channel/message context. Each tool consists of a name, human-facing description, optional instructions (folded into the system prompt), a Zod parameters schema, and an async execute function that returns the tool output text.

import { z } from "zod";
import type { AgentTool } from "@stream-io/chat-langchain-sdk";

const getOrderStatus: AgentTool = {
  name: "getOrderStatus",
  description: "Look up order details by ID.",
  instructions: "Invoke only when the user provides a valid order number.",
  parameters: z.object({
    orderId: z.string().describe("The e-commerce order number"),
  }),
  async execute({ orderId }, { channel }) {
    const record = await lookupOrder(orderId);
    if (!record) {
      throw new Error(`No order found for ${orderId}`);
    }
    await channel.sendEvent({ type: "order.viewed", orderId });
    return `Status: ${record.status}. ETA ${record.eta}`;
  },
};

Register tools through serverTools in the Agent constructor, agent.addServerTools(...), or agent.registerServerTools([...], { replace: boolean }).
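Conceptually, folding the optional per-tool instructions into the system prompt looks something like the sketch below. buildSystemPrompt is a hypothetical helper; the SDK's actual prompt assembly may differ.

```typescript
// Illustrative prompt assembly: agent-level instructions plus per-tool
// instructions are concatenated into a single system prompt.
interface PromptTool {
  name: string;
  instructions?: string;
}

function buildSystemPrompt(
  agentInstructions: string[],
  tools: PromptTool[],
): string {
  const toolNotes = tools
    .filter((t) => t.instructions)
    .map((t) => `Tool ${t.name}: ${t.instructions}`);
  return [...agentInstructions, ...toolNotes].join("\n");
}
```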

Client Tools

Client tools are JSON-schema definitions representing capabilities that must be executed on the front end (e.g., “open ticket in CRM”, “navigate to screen”). The SDK converts the schema to a Zod validator and exposes an internal tool that sends a custom_client_tool_invocation event containing:

{
  "type": "custom_client_tool_invocation",
  "cid": "<channel-id>",
  "message_id": "<message-id>",
  "tool": { "name": "openTicket", ... },
  "args": { "ticketId": "123" }
}

Listen for that event in your app, perform the privileged action, and optionally respond in chat.
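One way to route these events on the front end is a handler map keyed by tool name. The event interface below mirrors the payload shown above; dispatchClientTool and the handler names are hypothetical.

```typescript
// Illustrative front-end dispatcher for custom_client_tool_invocation events.
interface ClientToolInvocationEvent {
  type: "custom_client_tool_invocation";
  cid: string;
  message_id: string;
  tool: { name: string };
  args: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => void;

function dispatchClientTool(
  event: ClientToolInvocationEvent,
  handlers: Record<string, ToolHandler>,
): boolean {
  const handler = handlers[event.tool.name];
  if (!handler) return false; // unknown tool: ignore or log
  handler(event.args);
  return true;
}
```

You would subscribe to the event through your Stream client's channel event listener and feed each matching event into the dispatcher.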

Default Tools

createDefaultTools() returns a single tool, getCurrentTemperature. It requires OPENWEATHER_API_KEY and demonstrates:

  • Fetching an external REST API (axios).
  • Converting units based on tool arguments.
  • Supplying instructions so the LLM only calls the tool when the user explicitly asks about weather.

Use it as inspiration for production tools or swap in your own factory.
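The unit-conversion step can be sketched as below. This is illustrative, not the tool's exact code; the unit argument name and formatting are assumptions.

```typescript
// Illustrative Celsius/Fahrenheit conversion such as a weather tool might
// perform based on its `unit` argument.
type TempUnit = "celsius" | "fahrenheit";

function formatTemperature(celsius: number, unit: TempUnit): string {
  if (unit === "fahrenheit") {
    const fahrenheit = celsius * 9 / 5 + 32; // standard C-to-F conversion
    return `${fahrenheit.toFixed(1)}°F`;
  }
  return `${celsius.toFixed(1)}°C`;
}
```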

Streaming, Indicators, and Cancellation

  • Messages stream via LangChain’s async generator interfaces; the SDK writes partial tokens through chatClient.partialUpdateMessage so the front end can render live typing.
  • ai_indicator.update events surface typing states: AI_STATE_THINKING, AI_STATE_GENERATING, AI_STATE_EXTERNAL_SOURCES, and AI_STATE_ERROR.
  • ai_indicator.clear fires once the final response lands.
  • The SDK listens for ai_indicator.stop events and aborts the underlying LangChain run, cancelling streaming mid-flight so a front-end cancellation UI works out of the box.
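Streaming implementations typically throttle partial updates so that every token does not trigger an API call. A minimal sketch of the buffering idea (illustrative; TokenBuffer is not an SDK export):

```typescript
// Illustrative token buffer: accumulate streamed tokens and decide, per token,
// whether enough time has passed to push a partial message update.
class TokenBuffer {
  private text = "";
  private lastFlush = Number.NEGATIVE_INFINITY; // force a flush on first token

  constructor(private readonly minIntervalMs: number) {}

  // Returns the full text so far when a flush is due, otherwise null.
  push(token: string, now: number): string | null {
    this.text += token;
    if (now - this.lastFlush >= this.minIntervalMs) {
      this.lastFlush = now;
      return this.text;
    }
    return null;
  }

  final(): string {
    return this.text;
  }
}
```

Each non-null return would be sent via chatClient.partialUpdateMessage; final() is sent once the stream ends, followed by ai_indicator.clear.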

Memory & Mem0

Mem0 adds “long-term” recall for user+channel pairs. Configuration layers:

  1. Optional MEM0_CONFIG_JSON (stringified JSON) provides defaults such as metadata.
  2. Per-process env fallbacks: MEM0_DEFAULT_USER_ID, MEM0_DEFAULT_AGENT_ID, MEM0_DEFAULT_APP_ID.
  3. Agent option mem0Context lets you override channelId, userId, appId, agentId, or supply configOverrides.
  4. During runtime the SDK augments metadata with Stream-specific IDs.
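The layering above amounts to a field-by-field precedence merge, sketched below with illustrative names (resolveMem0Context is not an SDK export, and the real resolution logic may differ).

```typescript
// Illustrative precedence merge: call with layers in ascending precedence
// order (env defaults, then MEM0_CONFIG_JSON, then the agent's mem0Context).
// Later layers win per field; undefined fields never clobber earlier layers.
interface Mem0Context {
  userId?: string;
  agentId?: string;
  appId?: string;
  channelId?: string;
}

function resolveMem0Context(...layers: Mem0Context[]): Mem0Context {
  const result: Mem0Context = {};
  for (const layer of layers) {
    for (const [key, value] of Object.entries(layer)) {
      if (value !== undefined) {
        (result as Record<string, unknown>)[key] = value;
      }
    }
  }
  return result;
}
```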

Mem0 only runs when the chosen provider is openai, anthropic, or google and MEM0_API_KEY is set. Agent.generateSummary(...) implicitly passes { disableMem0: true } when generating summaries, keeping them fast and deterministic.

Summaries and Titles

Use agent.summarize(text) (instance) or Agent.generateSummary(text, platform, model?) (static) to request a short, six-word “title” summarizing arbitrary text—ideal for channel list previews, inbox cards, or push notification headers. The helper:

  • Spins up a lightweight LangChain chat model using the current provider settings.
  • Disables Mem0 to avoid unnecessary persistence.
  • Normalizes the resulting string by trimming surrounding quotes.

const headline = await agent.summarize(latestTranscript);
await channel.updatePartial({ set: { subtitle: headline } });
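The quote-trimming normalization can be sketched as follows. This is illustrative; normalizeTitle is a hypothetical name and the SDK's exact normalization may differ.

```typescript
// Illustrative normalizer: LLMs often wrap short titles in quotes, so strip
// one matching pair of surrounding quote characters and trim whitespace.
function normalizeTitle(raw: string): string {
  const trimmed = raw.trim();
  const pairs: Array<[string, string]> = [
    ['"', '"'],
    ["'", "'"],
    ["\u201C", "\u201D"], // curly double quotes
  ];
  for (const [open, close] of pairs) {
    if (trimmed.length >= 2 && trimmed.startsWith(open) && trimmed.endsWith(close)) {
      return trimmed.slice(1, -1).trim();
    }
  }
  return trimmed;
}
```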

© Getstream.io, Inc. All Rights Reserved.