In this tutorial, we will demonstrate how easy it is to create an AI assistant for iOS using Stream Chat.
In this example, we will use the Stream Chat integration with Vercel's AI SDK; however, you are free to use whichever LLM provider you like and still benefit from Stream's rich UI support for Markdown, tables, code samples, charts, and more.
To follow along with this tutorial, we recommend creating a free account and checking out our main iOS Chat SDK tutorial as a refresher.
Here's a video of the end result:
Running the Backend
Before adding AI features to our iOS app, let's set up our Node.js backend. The backend exposes two endpoints for starting and stopping an AI agent for a particular channel. While the agent is running, it listens to all new messages and forwards them to the LLM provider of your choice. It delivers the results by sending a message and progressively updating its text.
The sample also supports sending different states of the typing indicator (for example, Thinking, Checking external sources, etc.), client-side MCP tools, suggestions, summaries, memory with mem0, and much more.
You can find a working implementation of the backend here.
1. Install Dependencies
```bash
npm install @stream-io/chat-ai-sdk express cors dotenv
```
`@stream-io/chat-ai-sdk` brings the `Agent`, `AgentManager`, tool helpers, and the streaming logic; `express`, `cors`, and `dotenv` provide the basic HTTP server.
2. Configure Stream Credentials
Create a `.env` file with:

```bash
STREAM_API_KEY=your_key
STREAM_API_SECRET=your_secret
OPENAI_API_KEY=your_open_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
XAI_API_KEY=your_xai_api_key
GOOGLE_API_KEY=your_google_api_key
MEM0_API_KEY=your_mem0_api_key
```
Apart from the Stream API key and secret, every other API key is optional, but you need to configure at least one LLM provider for the AI to work.
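Since most of these keys are optional, it can help to fail fast when no LLM provider is configured at all. Here's a small sketch of such a guard (the `validateEnv` helper and the exact set of keys it checks are our own choice, not part of the SDK):

```typescript
// Hypothetical startup guard: require the Stream credentials and at
// least one of the optional LLM provider keys.
const LLM_KEYS = [
  'OPENAI_API_KEY',
  'ANTHROPIC_API_KEY',
  'XAI_API_KEY',
  'GOOGLE_API_KEY',
];

function validateEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  if (!env.STREAM_API_KEY) errors.push('STREAM_API_KEY is required');
  if (!env.STREAM_API_SECRET) errors.push('STREAM_API_SECRET is required');
  if (!LLM_KEYS.some((key) => !!env[key])) {
    errors.push(`Set at least one of: ${LLM_KEYS.join(', ')}`);
  }
  return errors;
}
```

At startup you would call `validateEnv(process.env)` and exit with a clear message if it returns any errors, instead of failing later on the first LLM request.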
Load it at the top of your entry file:
```ts
import 'dotenv/config';
```
3. Bootstrap Express and the AgentManager
Add the following to your index.ts (or similar):
```ts
import express from 'express';
import cors from 'cors';
import {
  AgentManager,
  AgentPlatform,
  createDefaultTools,
} from '@stream-io/chat-ai-sdk';

const app = express();
app.use(express.json());
app.use(cors({ origin: '*' }));

const buildAgentUserId = (channelId: string): string =>
  `ai-bot-${channelId.replace(/!/g, '')}`;

const agentManager = new AgentManager({
  serverToolsFactory: () => createDefaultTools(),
  agentIdResolver: buildAgentUserId,
});

const normalizeChannelId = (raw: string): string => {
  const trimmed = raw?.trim() ?? '';
  if (!trimmed) return trimmed;
  const parts = trimmed.split(':');
  return parts.length > 1 ? parts[1] : trimmed;
};
```
`AgentManager` owns the agent cache, pending state, and inactivity cleanup. Each channel uses an ID pattern such as `ai-bot-{channelId}`.
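The two helpers above are pure functions, so their behavior is easy to sanity-check in isolation (they are repeated here so the snippet runs standalone). A fully qualified CID like `messaging:general` is reduced to just the channel ID, and the `!` used in some distinct-channel IDs is stripped from the bot user ID:

```typescript
// Same helpers as in the bootstrap code above, shown with example inputs.
const buildAgentUserId = (channelId: string): string =>
  `ai-bot-${channelId.replace(/!/g, '')}`;

const normalizeChannelId = (raw: string): string => {
  const trimmed = raw?.trim() ?? '';
  if (!trimmed) return trimmed;
  const parts = trimmed.split(':');
  return parts.length > 1 ? parts[1] : trimmed;
};

console.log(normalizeChannelId('messaging:general')); // "general"
console.log(buildAgentUserId(normalizeChannelId('messaging:!members-abc'))); // "ai-bot-members-abc"
```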
4. Starting an Agent
Next, let's add the endpoint that starts the AI agent. It validates the payload, normalizes the channel ID, and asks the AgentManager to start (or reuse) the agent. A sketch along these lines works, mirroring the stop endpoint below; the exact `startAgent` options may differ slightly, so check the full backend implementation linked above:

```ts
app.post('/start-ai-agent', async (req, res) => {
  const channelId = normalizeChannelId(req.body?.channel_id ?? '');
  if (!channelId) {
    res.status(400).json({ error: 'Invalid channel_id' });
    return;
  }
  try {
    // Starts a new agent for the channel, or reuses a cached one.
    await agentManager.startAgent({
      channelId,
      platform: req.body?.platform ?? AgentPlatform.OPENAI,
      model: req.body?.model,
    });
    res.json({ message: 'AI Agent started' });
  } catch (error) {
    res.status(500).json({
      error: 'Failed to start AI Agent',
      reason: (error as Error).message,
    });
  }
});
```
5. Stopping an Agent
To stop the agent and clean up the cache, add the following:
```ts
app.post('/stop-ai-agent', async (req, res) => {
  const channelId = normalizeChannelId(req.body?.channel_id ?? '');
  if (!channelId) {
    res.status(400).json({ error: 'Invalid channel_id' });
    return;
  }
  try {
    await agentManager.stopAgent(buildAgentUserId(channelId));
    res.json({ message: 'AI Agent stopped' });
  } catch (error) {
    res.status(500).json({
      error: 'Failed to stop AI Agent',
      reason: (error as Error).message,
    });
  }
});
```
6. Register Client-Side Tools
Next, we can expose an endpoint to allow clients to register MCP tools that can be handled client-side:
```ts
app.post('/register-tools', (req, res) => {
  const { channel_id, tools } = req.body ?? {};
  if (typeof channel_id !== 'string' || !channel_id.trim()) {
    res.status(400).json({ error: 'Missing or invalid channel_id' });
    return;
  }
  if (!Array.isArray(tools)) {
    res.status(400).json({ error: 'Missing or invalid tools array' });
    return;
  }
  const channelId = normalizeChannelId(channel_id);
  agentManager.registerClientTools(channelId, tools);
  res.json({ message: 'Client tools registered', count: tools.length });
});
```
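For reference, the body a client sends to this endpoint looks roughly like the following. The tool fields mirror the `greetUser` tool we define on iOS later in this tutorial; treat this shape as illustrative rather than the SDK's canonical wire format:

```typescript
// Illustrative /register-tools request body (assumed shape, not the
// SDK's official schema).
interface ClientToolPayload {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
}

const registerToolsBody = {
  channel_id: 'messaging:general',
  tools: [
    {
      name: 'greetUser',
      description: 'Display a native greeting to the user',
      inputSchema: {
        type: 'object',
        properties: {},
        required: [],
        additionalProperties: false,
      },
    },
  ] as ClientToolPayload[],
};

// The same checks the endpoint performs, applied locally:
const valid =
  typeof registerToolsBody.channel_id === 'string' &&
  registerToolsBody.channel_id.trim().length > 0 &&
  Array.isArray(registerToolsBody.tools);
console.log(valid); // true
```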
7. Start the Server
Finally, add a log statement for when the server starts:
```ts
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
```
You can start the server by running:

```bash
npm start
```
iOS Integration
Next, let's set things up on the iOS side. You can find a working implementation of this project here.
1. Add the Required Swift Packages
Open your project in Xcode and add the packages through File ▸ Add Package Dependencies.... Select the following versions when adding the SDKs:
| Package | URL | Version |
|---|---|---|
| Stream Chat SwiftUI | https://github.com/GetStream/stream-chat-swiftui | 4.93.0 |
| Stream Chat AI | https://github.com/GetStream/stream-chat-swift-ai.git | 0.4.0 |
2. Set Up Stream Chat
Create the SwiftUI App entry point (AIComponentsApp.swift) and bootstrap ChatClient, StreamChat, and the Stream Chat SwiftUI environment:
```swift
import SwiftUI
import StreamChat
import StreamChatSwiftUI

@main
struct AIComponentsApp: App {
    @State var streamChat: StreamChat

    var chatClient: ChatClient = {
        var config = ChatClientConfig(apiKey: .init("YOUR_STREAM_API_KEY"))
        config.isLocalStorageEnabled = true
        config.applicationGroupIdentifier = "group.your.bundle.chat"
        return ChatClient(config: config)
    }()

    init() {
        let messageListConfig = MessageListConfig(
            messageDisplayOptions: .init(
                showAvatars: false,
                showAvatarsInGroups: false,
                showMessageDate: false,
                showAuthorName: false,
                spacerWidth: { _ in 0 }
            ),
            skipEditedMessageLabel: { $0.extraData["ai_generated"]?.boolValue == true }
        )
        let utils = Utils(
            messageTypeResolver: CustomMessageResolver(),
            messageListConfig: messageListConfig
        )
        _streamChat = State(initialValue: StreamChat(chatClient: chatClient, utils: utils))

        chatClient.connectUser(
            userInfo: UserInfo(
                id: "anakin_skywalker",
                imageURL: URL(string: "https://example.com/avatar.png")
            ),
            token: try! Token(rawValue: "USER_TOKEN_FROM_YOUR_BACKEND")
        )
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```
Key points:
- `ChatClientConfig` must contain your Stream API key. Turn on local storage and specify an app group if you want share extensions.
- `StreamChat` injects SDK utilities into the SwiftUI environment; the sample provides a `CustomMessageResolver` so AI-generated content renders as custom attachments.
- `connectUser` has to be called with a user token created on your server.
3. Build the AI Agent Service Layer
AgentService.swift centralizes all calls to your Stream Chat AI proxy. It uses StreamChatAI's request/response models plus Model Context Protocol (MCP) helpers:
```swift
class AgentService {
    static let shared = AgentService()

    private let baseURL = "http://localhost:3000"
    private let jsonEncoder = JSONEncoder()
    private let jsonDecoder = JSONDecoder()
    private let urlSession = URLSession.shared

    func setupAgent(channelId: String, model: String? = nil) async throws {
        try await executePostRequest(
            body: AIAgentRequest(channelId: channelId, model: model),
            endpoint: "start-ai-agent"
        )
    }

    func stopAgent(channelId: String) async throws {
        try await executePostRequest(
            body: AIAgentRequest(channelId: channelId, model: nil),
            endpoint: "stop-ai-agent"
        )
    }

    func registerTools(channelId: String, tools: [ToolRegistrationPayload]) async throws {
        guard !tools.isEmpty else { return }
        try await executePostRequest(
            body: ToolRegistrationRequest(channelId: channelId, tools: tools),
            endpoint: "register-tools"
        )
    }

    func summarize(text: String, platform: String, model: String? = nil) async throws -> String {
        let data = try await executePostRequestWithResponse(
            body: AgentSummaryRequest(text: text, platform: platform, model: model),
            endpoint: "summarize"
        )
        let response = try jsonDecoder.decode(AgentSummaryResponse.self, from: data)
        return response.summary
    }

    private func executePostRequest<RequestBody: Encodable>(body: RequestBody, endpoint: String) async throws {
        _ = try await executePostRequestWithResponse(body: body, endpoint: endpoint)
    }

    private func executePostRequestWithResponse<RequestBody: Encodable>(body: RequestBody, endpoint: String) async throws -> Data {
        let url = URL(string: "\(baseURL)/\(endpoint)")!
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try jsonEncoder.encode(body)
        let (data, _) = try await urlSession.data(for: request)
        return data
    }
}
```
The complete implementation of this service, along with the required helper methods and request/response payloads, can be found here.
Each chat channel kicks off its own AI agent (`/start-ai-agent`), optionally registers client tools, and can request message summaries for conversation titles. You should update the `baseURL` to your deployed backend service when you are ready to test and ship.
4. Integration with Stream Chat SwiftUI
Next, let's build some UI and integrate it with Stream Chat's SwiftUI SDK. The SDK works seamlessly with the AI components, enabling you to build AI assistants quickly.
We will create a new view factory that implements the ViewFactory protocol:
```swift
import SwiftUI
import StreamChat
import StreamChatAI
import StreamChatSwiftUI

class AIComponentsViewFactory: ViewFactory {
    @Injected(\.chatClient) var chatClient: ChatClient

    private let actionHandler = ClientToolActionHandler.shared
    var typingIndicatorHandler: TypingIndicatorHandler!

    private init() {}

    static let shared = AIComponentsViewFactory()

    @ViewBuilder
    func makeCustomAttachmentViewType(
        for message: ChatMessage,
        isFirst: Bool,
        availableWidth: CGFloat,
        scrolledId: Binding<String?>
    ) -> some View {
        let isGenerating = message.extraData["generating"]?.boolValue == true
        StreamingMessageView(
            content: message.text,
            isGenerating: isGenerating
        )
        .padding()
    }

    func makeMessageListContainerModifier() -> some ViewModifier {
        CustomMessageListContainerModifier(typingIndicatorHandler: typingIndicatorHandler)
    }

    func makeEmptyMessagesView(
        for channel: ChatChannel,
        colors: ColorPalette
    ) -> some View {
        AIAgentOverlayView(typingIndicatorHandler: typingIndicatorHandler)
    }
}
```
First, we implement the makeCustomAttachmentViewType method, which lets us specify custom attachments; there we return the StreamingMessageView from the AI components.
Next, we're creating a custom message list container modifier, where we set the thinking indicator as an overlay:
```swift
struct CustomMessageListContainerModifier: ViewModifier {
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler

    func body(content: Content) -> some View {
        content.overlay {
            AIAgentOverlayView(typingIndicatorHandler: typingIndicatorHandler)
        }
    }
}
```
We do something similar for the makeEmptyMessagesView, which is called when there are no messages in the message list.
The AIAgentOverlayView handles the display of the thinking indicator, and shows the AITypingIndicatorView from the AI components SDK:
```swift
struct AIAgentOverlayView: View {
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler

    var body: some View {
        VStack {
            Spacer()
            if typingIndicatorHandler.typingIndicatorShown {
                HStack {
                    AITypingIndicatorView(text: typingIndicatorHandler.state)
                    Spacer()
                }
                .padding()
                .frame(height: 60)
                .background(Color(UIColor.secondarySystemBackground))
            }
        }
    }
}
```
The TypingIndicatorHandler is a helper class that listens to events from Stream Chat. You can find its complete implementation here.
5. Build the Conversation UI
Building the Conversation UI is quite straightforward with the available components from the Stream Chat SwiftUI SDK and the AI components.
You can find all the implementation details here.
```swift
var body: some View {
    NavigationStack {
        SidebarView(
            isOpen: $isSplitOpen,
            excludedBottomHeight: composerHeight,
            menu: {
                ConversationListView(
                    onChannelSelected: handleChannelSelection,
                    onNewChat: handleNewChatRequest
                )
            },
            content: {
                mainConversation()
            }
        )
    }
}
```
At the root of the navigation stack, we use the SidebarView from the AI components, which lets you build a side menu and main content area with a few lines of code.
The ConversationListView handles the conversation history and allows you to start new chats:
```swift
import SwiftUI
import StreamChat
import StreamChatAI
import StreamChatSwiftUI

struct ConversationListView: View {
    @StateObject private var viewModel = ChatChannelListViewModel()

    var onChannelSelected: (ChatChannel) -> Void
    var onNewChat: () -> Void

    var body: some View {
        ScrollView {
            LazyVStack {
                HStack {
                    Text("Conversations")
                        .font(.headline)
                    Spacer()
                    Button(action: onNewChat) {
                        Image(systemName: "plus.circle.fill")
                            .font(.title3)
                            .foregroundColor(.accentColor)
                    }
                    .accessibilityLabel("New chat")
                    .buttonStyle(.plain)
                }
                .padding()

                ForEach(viewModel.channels) { channel in
                    HStack {
                        Text(channel.name ?? channel.id)
                            .multilineTextAlignment(.leading)
                            .lineLimit(1)
                        Spacer()
                    }
                    .padding(.horizontal)
                    .padding(.vertical, 4)
                    .contentShape(Rectangle())
                    .onTapGesture {
                        onChannelSelected(channel)
                    }
                    .onAppear {
                        if let index = viewModel.channels.firstIndex(of: channel) {
                            viewModel.checkForChannels(index: index)
                        }
                    }
                }
            }
        }
        .padding(.top, 40)
    }
}
```
The mainConversation view shows the actual AI chat experience:
```swift
private func mainConversation() -> some View {
    VStack(spacing: 0) {
        ZStack(alignment: .leading) {
            if showMessageList, let channelController {
                ConversationView(
                    channelController: channelController
                )
                .id(channelController.cid)
            } else {
                VStack {
                    Spacer()
                    SuggestionsView(suggestions: predefinedOptions) { messageData in
                        sendMessage(messageData)
                    }
                }
                .onChange(of: viewModel.text) { oldValue, newValue in
                    // Already create the channel for a faster reply.
                    if viewModel.text.count > 5 {
                        setupChannel()
                    }
                }
            }
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .contentShape(Rectangle())

        ComposerView(
            viewModel: viewModel,
            isGenerating: typingIndicatorHandler.generatingMessageId != nil,
            onMessageSend: { messageData in
                sendMessage(messageData)
            },
            onStopGenerating: {
                stopGenerating()
            }
        )
    }
}
```
The ConversationView uses the message list from our SwiftUI SDK:
```swift
struct ConversationView: View {
    let channelController: ChatChannelController
    @StateObject private var viewModel: ChatChannelViewModel

    init(channelController: ChatChannelController) {
        self.channelController = channelController
        _viewModel = StateObject(wrappedValue: ChatChannelViewModel(channelController: channelController))
    }

    var body: some View {
        if let channel = viewModel.channel {
            MessageListView(
                factory: AIComponentsViewFactory.shared,
                channel: channel,
                viewModel: viewModel,
                onLongPress: { _ in }
            )
        } else {
            ProgressView()
        }
    }
}
```
We're also using the ComposerView from the AI components SDK.
6. Register Client Tools and Surface Their Actions
With the help of Stream Chat AI, you can handle native "client tools" coming from your LLM through custom events. For this, you can use our server-side SDKs that integrate with Vercel's AI SDK and Langchain, or build your own.
The iOS sample sets up a registry and a global action handler:
- Implement a tool action handler (`ClientToolActionHandler.swift`) that conforms to `ClientToolActionHandling`, stores alerts, and executes tool-provided closures on the main thread:
```swift
import Combine
import Foundation
import StreamChatAI

final class ClientToolActionHandler: ObservableObject, ClientToolActionHandling {
    static let shared = ClientToolActionHandler()

    @Published var presentedAlert: ClientToolAlert?

    private init() {}

    func handle(_ actions: [ClientToolAction]) {
        actions.forEach { action in
            action()
        }
    }

    func presentAlert(_ alert: ClientToolAlert) {
        presentedAlert = alert
    }
}

struct ClientToolAlert: Identifiable {
    let id = UUID()
    let title: String
    let message: String
}
```
- Define your tools (`StreamChatClientTools.swift`). The included `GreetClientTool` exposes a `greetUser` tool with a tiny JSON schema and responds by showing a SwiftUI alert:
```swift
@MainActor
final class GreetClientTool: ClientTool {
    let toolDefinition: Tool = {
        let schema: Value = .object([
            "type": .string("object"),
            "properties": .object([:]),
            "required": .array([]),
            "additionalProperties": .bool(false)
        ])
        return Tool(
            name: "greetUser",
            description: "Display a native greeting to the user",
            inputSchema: schema,
            annotations: .init(title: "Greet user")
        )
    }()

    let instructions = "Use the greetUser tool when the user asks to be greeted. The tool shows a greeting alert in the iOS app."
    let showExternalSourcesIndicator = false

    func handleInvocation(_ invocation: ClientToolInvocation) -> [ClientToolAction] {
        [
            {
                ClientToolActionHandler.shared.presentAlert(
                    ClientToolAlert(
                        title: "Greetings!",
                        message: "👋 Hello there! The assistant asked me to greet you."
                    )
                )
            }
        ]
    }
}
```
You can find the complete implementation of the client-side tool handling mechanism here.
To make this work, we listen for `custom_client_tool_invocation` events and translate them into tool invocations. `TypingIndicatorHandler` does this by checking each event payload, constructing a `ClientToolInvocation`, and passing it through a `ClientToolRegistry`.
This gives your AI agent the ability to ask the client to run arbitrary UX-side actions such as surfacing alerts, opening screens, or fetching files.
To register a tool, you need to add it to the client tool registry:
```swift
let registry = ClientToolRegistry()
registry.register(tool: GreetClientTool())
```
Conclusion
In this tutorial, you learned how to build a complete AI-powered chat experience by combining Stream Chat, the Stream Chat AI SDK, and a customizable Node.js backend.
We set up an AI agent capable of generating responses, streaming typing indicators, and triggering client-side tools, then integrated it into an iOS app using Stream's SwiftUI components for a polished, real-time chat interface.
With a few endpoints, a lightweight service layer, and fully customizable UI, you now have a foundation for creating rich conversational AI experiences on iOS using any LLM provider you prefer.
