Tutorials: Vision

Build a Local AI Agent with Qwen 3.5 Small on macOS

Qwen 3.5 Small is a new family of lightweight, high-performance models from Alibaba (0.8B, 2B, 4B, and 9B parameters) that are now available on Ollama. These models support multimodal input, native tool calling, and strong reasoning, all while running efficiently on laptops, Macs, and even mobile/IoT devices. In this demo, the agent runs completely locally
Read more
3 min read
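Once a Qwen 3.5 Small model is pulled, Ollama exposes it over a local REST API. The sketch below builds a request body for Ollama's `/api/chat` endpoint; the model tag `qwen3.5:4b` is an assumption (check `ollama list` for the exact tag), and the actual HTTP call is left commented out since it requires a running Ollama server.

```python
import json

# The model tag below is an assumption -- run `ollama list` to see the
# exact Qwen 3.5 Small tag available on your machine.
MODEL_TAG = "qwen3.5:4b"

def build_chat_request(prompt: str, model: str = MODEL_TAG) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of chunks
    }

body = build_chat_request("Explain tool calling in one sentence.")
print(json.dumps(body, indent=2))

# To actually send it (requires `ollama serve` running locally):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/chat",
#       data=json.dumps(body).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

Everything stays on localhost, which is what makes the agent in the tutorial fully local.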

Using Opus 4.6: Vibe Code a Custom Python Plugin for Vision Agents

Vision Agents has out-of-the-box support for the LLM services and providers developers need to build voice, vision, and video AI applications. The framework also makes it easy to integrate custom AI services — either by following a step-by-step guide or by vibe coding them using SoTA models. Let’s use Claude Opus 4.6 to create a
Read more
9 min read

Build an AI Travel Advisor That Speaks with Gemini 3.1 Pro

Most LLMs are great at thinking, but making them speak naturally is a different challenge. Gemini 3.1 Pro changes that. This new model from Google brings significantly improved reasoning, longer context, and better tool-use capabilities, making it one of the best choices (at the time of writing) for building conversational voice agents. In this guide,
Read more
2 min read

Add Text-to-Speech to Apps with Cartesia Sonic 3 & Vision Agents

Realistic text-to-speech was one of the hardest parts of building voice agents. Most models either sounded robotic, introduced noticeable latency, or required complex integration that slowed down prototyping. Cartesia Sonic 3 changes that equation. Released in late 2025, it combines sub-200 ms first-chunk latency, strong emotional expressiveness, multilingual support, and the ability to clone voices from
Read more
2 min read

ElevenLabs with Vision Agents: Add Text-to-Speech in a Few Lines of Code

ElevenLabs delivers some of the most lifelike and expressive text-to-speech voices out there. Its natural intonation, emotion, and multilingual support make your AI agents sound genuinely human. And, with the ElevenLabs plugin for Vision Agents, integration is a one-liner affair: import, initialize (with optional voice/model tweaks), and pass it to your agent. No messing around
Read more
3 min read
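The "import, initialize, pass it to your agent" pattern described above can be sketched with stand-in classes. Note that `ElevenLabsTTS` and `Agent` below are illustrative stand-ins, not the real plugin's import path or constructor signature; the tutorial covers the actual API.

```python
from dataclasses import dataclass

# Stand-in classes mirroring the plug-in pattern the post describes:
# initialize a TTS provider (with optional voice/model tweaks), then
# hand it to the agent. The real plugin's names will differ.

@dataclass
class ElevenLabsTTS:
    voice_id: str = "default"       # optional voice tweak
    model: str = "eleven_turbo"     # illustrative model name

    def synthesize(self, text: str) -> bytes:
        # The real plugin streams audio; here we just tag the text.
        return f"[{self.voice_id}] {text}".encode()

@dataclass
class Agent:
    tts: ElevenLabsTTS

    def say(self, text: str) -> bytes:
        return self.tts.synthesize(text)

agent = Agent(tts=ElevenLabsTTS(voice_id="rachel"))
print(agent.say("Hello!").decode())  # -> [rachel] Hello!
```

The point of the pattern is dependency injection: the agent only depends on a TTS interface, so swapping providers means changing one constructor call.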

Kimi K2.5: Build a Video & Vision Agent in Python

Imagine pointing your webcam at everyday objects (or even sharing your screen with code) and having an AI instantly understand what it sees, reason through it step by step, and explain everything back to you in a natural voice. That’s what Kimi K2.5 from Moonshot AI makes possible when accessed via its OpenAI-compatible API and
Read more
3 min read
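"OpenAI-compatible" means Kimi K2.5 accepts the standard `/v1/chat/completions` request shape, including multimodal content parts. The sketch below builds such a request pairing a question with a webcam frame; the base URL and model name are assumptions to be confirmed against Moonshot AI's API docs.

```python
import base64
import json

BASE_URL = "https://api.moonshot.ai/v1"  # assumed -- check Moonshot's docs
MODEL = "kimi-k2.5"                      # assumed model identifier

def build_vision_request(question: str, jpeg_bytes: bytes) -> dict:
    """Build an OpenAI-style chat request mixing text and an image frame."""
    image_b64 = base64.b64encode(jpeg_bytes).decode()
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            # Multimodal content is a list of typed parts: text plus an
            # image_url carrying the frame as a base64 data URL.
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }

req = build_vision_request("What object is this?", b"\xff\xd8fake-jpeg")
print(json.dumps(req)[:80], "...")
```

Because the shape matches OpenAI's, any OpenAI-compatible client can send this body by pointing its base URL at the provider.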

Create Speech-to-Text Experiences with ElevenLabs Scribe v2 Realtime & Vision Agents

ElevenLabs released Scribe v2 Realtime, an ultra-low-latency speech-to-text model with ~150 ms end-to-end transcription latency, support for 90+ languages, and a claimed lowest word error rate in benchmarks across major languages and accents. It's built specifically for agentic apps, live meetings, note-taking, and conversational AI, where every millisecond and every word matters. In this demo, Scribe v2
Read more
2 min read

How to Build a Local AI Voice Agent with Pocket TTS

Voice agents are getting better, but most text-to-speech pipelines still assume you’re okay with cloud APIs, large models, and unpredictable latency. If you want fast, natural-sounding speech that runs entirely on your own hardware (no GPU, no network calls), you need a different approach. In this tutorial, you’ll build a real-time AI voice agent that
Read more
9 min read