Build Your First AI App in 30 Minutes with the TypeScript AI SDK and Gemini


You’ve seen what LLMs can do—ChatGPT, Claude, and countless AI-powered products are reshaping how software is built. But as a TypeScript developer, you’re probably wondering: can I build AI applications with the same type safety and developer experience I’m used to? The answer is yes.

You too can become an AI engineer and build powerful, type-safe AI applications in TypeScript with the Vercel AI SDK. It’s a comprehensive toolkit designed for the modern web, providing a unified API across models like Gemini and GPT, first-class streaming support, and, most importantly, a robust system for generating structured, type-safe outputs.

In this guide, we’ll cut through the noise and build two practical examples in under 30 minutes. We will create:

  1. A CLI tool that transforms a simple sentence into a fully-typed user object.
  2. An interactive CLI chatbot that remembers your conversation.

All in TypeScript, all running right in your terminal. Let’s get started.

The 5-Minute Setup: Your AI Development Environment

Getting started is refreshingly simple. All you need is Node.js 22+ and a Google account.

Step 1: Project & Dependencies

First, let’s create a new project directory and initialise it. We’ll use pnpm, but you can use npm or yarn if you prefer.

mkdir ai-cli && cd ai-cli
pnpm init

Next, configure your package.json to use ES modules by adding the "type" field:

{
  "name": "ai-cli",
  "version": "1.0.0",
  "type": "module"
  // ... other fields ...
}

Now install the necessary packages.

pnpm install ai @ai-sdk/google @clack/prompts zod

Here’s a quick breakdown of our toolkit:

  • ai: The core of the Vercel AI SDK.
  • @ai-sdk/google: The specific provider for using Google’s Gemini models.
  • @clack/prompts: A fantastic library for building beautiful and user-friendly command-line interfaces.
  • zod: The cornerstone of our type-safety, allowing us to define data schemas.

Step 2: Get Your Google Gemini API Key

The Vercel AI SDK supports many models, but we’ll use Google’s Gemini, which offers a generous free tier that’s perfect for development—no credit card required.

  1. Go to Google AI Studio.
  2. Click “Create API key”. Follow the prompt and copy your new key when you’re done.
  3. Back in your project directory, create a new file named .env.
  4. Add your API key to this file:
GOOGLE_GENERATIVE_AI_API_KEY="YOUR_KEY_HERE"

The AI SDK will automatically detect and use this key.

Step 3: Node.js Version Requirements

This guide uses Node.js 22 or higher, which includes two important features:

  • Native TypeScript support via type stripping (no need for ts-node or tsx)
  • Built-in .env file loading with the --env-file flag

If you’re using an older Node.js version, please upgrade to Node.js 22+ (or configure tsx/ts-node) to follow along.

Now, you’re ready to build your first AI application.

Core Concepts: Understanding the AI SDK’s Building Blocks

Before we dive into coding, let’s understand the key concepts that make the Vercel AI SDK powerful for TypeScript developers.

generateObject vs generateText

The AI SDK provides two primary functions for interacting with LLMs:

  • generateObject: Returns structured data that conforms to a schema you define. Perfect for extracting specific information, generating mock data, or any scenario where you need predictable, typed outputs.
  • generateText: Returns free-form text responses. Ideal for conversations, creative writing, or open-ended queries.

Both functions have streaming variants (streamObject and streamText) that deliver the response incrementally instead of all at once. For simplicity, we will focus on the non-streaming usage in this guide.
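Still, for a taste of what streaming looks like, here’s a minimal streamText sketch (assuming the same Gemini setup used throughout this guide) that prints the response as it arrives:

import { streamText } from "ai";
import { google } from "@ai-sdk/google";
import process from "node:process";

// streamText returns immediately; the text arrives as an async iterable.
const result = streamText({
  model: google("gemini-2.5-flash-lite"),
  prompt: "Write a haiku about TypeScript.",
});

// Print each chunk as it streams in.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}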

Schemas and Type Safety

The SDK uses Zod schemas to define the exact shape of data you expect from the AI. When you pass a schema to generateObject, you get:

  1. Type-safe outputs: TypeScript knows the exact structure of the returned data
  2. Runtime validation: The SDK ensures the AI’s response matches your schema
  3. AI guidance: Your schema descriptions act as instructions to the AI, helping it generate better outputs

This is the magic that brings TypeScript’s type safety into the unpredictable world of AI.

The Unified API

One of the SDK’s greatest strengths is its unified API across different model providers (Google, OpenAI, Anthropic, etc.). You can swap models by simply changing the model parameter, while your application logic remains unchanged.
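To make that concrete, here’s an illustrative sketch: switching from Gemini to OpenAI is a one-line change (this assumes you’ve installed the @ai-sdk/openai package and set OPENAI_API_KEY):

import { generateText } from "ai";
import { google } from "@ai-sdk/google";
// import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  // Swap the provider here; the rest of the call is unchanged.
  model: google("gemini-2.5-flash-lite"),
  // model: openai("gpt-4o-mini"),
  prompt: "Say hello in French.",
});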

With these concepts in mind, let’s build our first example.

Example 1: From a Sentence to a Type-Safe Object

Our first example is a script that takes a natural language description, like “a software engineer from London,” and generates a fully-typed, structured mock user object from it. This shows how an LLM can generate mock data for testing, prototyping, or seeding databases. More importantly, it demonstrates how to leverage the AI SDK’s generateObject function to request a structured response and get type-safe output.

Let’s build the example step by step. Before we start, create a new file named sentence-transformer.ts in your project directory.

Step 1: Imports and Schema Definition

The secret to a type-safe AI app is defining a clear contract. We’ll use Zod to create a schema that describes the exact shape of the User object we want the LLM to generate. The .describe() calls are crucial; they act as instructions to the LLM, guiding it on what each field represents. Better descriptions lead to better outputs.

We’ll start by importing the dependencies and defining the User schema.

import { google } from "@ai-sdk/google";
import { generateObject } from "ai";
import { z } from "zod";
import * as prompts from "@clack/prompts";

// This schema defines the structure of our User object
const UserSchema = z.object({
  name: z.string().describe("The full name of the person"),
  age: z.number().int().positive().describe("The age of the person in years"),
  occupation: z.string().describe("The person's job or profession"),
  location: z.string().describe("The city or location where the person lives"),
  email: z.email().describe("The person's email address"),
  interests: z.array(z.string()).describe("A list of hobbies or interests"),
});

With this schema, we’ve told the model exactly what we expect: a name, an age, a location, and so on, right down to the specific data types.
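As a bonus, the schema doubles as your single source of truth for TypeScript types. If other parts of your code need the User shape, you can derive it with z.infer instead of writing it twice:

// Derive a TypeScript type from the Zod schema (one source of truth).
type User = z.infer<typeof UserSchema>;
// Equivalent to writing out:
// { name: string; age: number; occupation: string;
//   location: string; email: string; interests: string[] }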

Step 2: Gathering User Input

Next, we’ll create our main function. Inside it, we’ll use @clack/prompts to create a polished CLI experience that asks the user for a description and shows a spinner while the AI is working.

async function main() {
  console.clear();
  prompts.intro("✨ Sentence to Type-Safe Object Generator ✨");

  const description = await prompts.text({
    message: 'Describe a person (e.g., "a software engineer from London"):',
    placeholder: "a software engineer from London who likes to ski...",
    validate: (value) => {
      if (!value || value.length < 3) return "Please provide a description";
    },
  });

  if (prompts.isCancel(description)) {
    prompts.cancel("Operation cancelled.");
    return;
  }

  const spinner = prompts.spinner();
  spinner.start("🤖 Generating type-safe user object...");

  // ... AI logic will go here
}

Step 3: The Magic of generateObject

Here comes the core of the application. Inside a try...catch block, we’ll call generateObject. This function is where the AI SDK shines, taking our model, schema, and prompt, and handling all the complex API calls to return a typed object.

// ... inside the main function, after the spinner starts

try {
  const { object: user } = await generateObject({
    model: google("gemini-2.5-flash-lite"),
    schema: UserSchema,
    prompt: `Generate a realistic mock user profile based on this description: "${description}". Make sure all fields are filled with believable, creative data.`,
  });
  // TypeScript now knows: user.name is string, user.age is number, etc.

  spinner.stop("✅ Generated successfully!");

  // Display the Result
  prompts.note(
    `Name: ${user.name}
Age: ${user.age} years old
Occupation: ${user.occupation}
Location: ${user.location}
Email: ${user.email}
Interests: ${user.interests.join(", ")}`,
    "Generated User Profile",
  );

  prompts.outro("✨ Done! ✨");
} catch (error) {
  spinner.stop("❌ Generation failed");
  prompts.log.error(`Error: ${error instanceof Error ? error.message : error}`);
  return;
}

Notice that the result is destructured as { object: user } and displayed using prompts.note(). The SDK guarantees that the user variable conforms to our UserSchema. No JSON.parse(), no as any, just clean, typed data.

Step 4: Calling the Main Function

Finally, don’t forget to call the main function at the end of your file:

main().catch(console.error);

Run It!

Run the script from your terminal:

node --env-file=.env sentence-transformer.ts

Your terminal will spring to life, asking for a description. Enter something, and watch the AI generate a perfect, typed object. Here’s a sample interaction:

┌  ✨ Sentence to Type-Safe Object Generator ✨

◇  Describe a person (e.g., "a software engineer from London"):
│  a 42-year old chef from Paris who loves jazz and cycling

◇  spinning  🤖 Generating type-safe user object...
●  ✅ Generated successfully!

│  Generated User Profile
│  Name: Jean-Pierre Moreau
│  Age: 42 years old
│  Occupation: Chef
│  Location: Paris
│  Email: j.moreau@example.com
│  Interests: Jazz music, Cycling

└  ✨ Done! ✨

The AI SDK’s generateObject function is a game-changer for TypeScript developers. It abstracts away the complexity of working with LLMs while providing strong guarantees about the shape and type of the data you receive. This means you can build AI-powered applications with confidence, knowing that your data will always conform to your defined schemas.

Complete Code for Example 1

Here’s the full code for sentence-transformer.ts:

import { google } from "@ai-sdk/google";
import { generateObject } from "ai";
import { z } from "zod";
import * as prompts from "@clack/prompts";

const UserSchema = z.object({
  name: z.string().describe("The full name of the person"),
  age: z.number().int().positive().describe("The age of the person in years"),
  occupation: z.string().describe("The person's job or profession"),
  location: z.string().describe("The city or location where the person lives"),
  email: z.email().describe("The person's email address"),
  interests: z.array(z.string()).describe("A list of hobbies or interests"),
});

async function main() {
  console.clear();
  prompts.intro("✨ Sentence to Type-Safe Object Generator ✨");

  const description = await prompts.text({
    message: 'Describe a person (e.g., "a software engineer from London"):',
    placeholder: "a software engineer from London who likes to ski...",
    validate: (value) => {
      if (!value || value.length < 3) return "Please provide a description";
    },
  });

  if (prompts.isCancel(description)) {
    prompts.cancel("Operation cancelled.");
    return;
  }

  const spinner = prompts.spinner();
  spinner.start("🤖 Generating type-safe user object...");

  try {
    const { object: user } = await generateObject({
      model: google("gemini-2.5-flash-lite"),
      schema: UserSchema,
      prompt: `Generate a realistic mock user profile based on this description: "${description}". Make sure all fields are filled with believable, creative data.`,
    });

    spinner.stop("✅ Generated successfully!");

    prompts.note(
      `Name: ${user.name}
Age: ${user.age} years old
Occupation: ${user.occupation}
Location: ${user.location}
Email: ${user.email}
Interests: ${user.interests.join(", ")}`,
      "Generated User Profile",
    );

    prompts.outro("✨ Done! ✨");
  } catch (error) {
    spinner.stop("❌ Generation failed");
    prompts.log.error(
      `Error: ${error instanceof Error ? error.message : error}`,
    );
    return;
  }
}

main().catch(console.error);

Example 2: Building a Conversational AI Application

Building a conversational AI application is a rite of passage for any AI developer. It’s the “Hello, World!” of the AI Agents era. What makes this example special is how the Vercel AI SDK simplifies the complex task of maintaining conversation context while preserving type safety.

Let’s create a ChatGPT-like experience right in your terminal. This will be our second example, and it builds naturally on what we’ve learned about structured outputs.

We’ll build this piece by piece again. Create a new file named conversing-bot.ts in your project directory and follow along.

Step 1: Setup and Conversation History

First, we import our tools. The key element here is the messages array. Each object in this array will have a role (user or assistant) and content. This entire array is sent to the model with each new prompt, giving it the full context of the conversation.

import { generateText } from "ai";
import type { UserModelMessage, AssistantModelMessage } from "ai";
import { google } from "@ai-sdk/google";
import * as prompts from "@clack/prompts";
import process from "node:process";

// The messages array stores both user and assistant messages to maintain history
const messages: Array<UserModelMessage | AssistantModelMessage> = [];

Important: LLMs don’t have memory between calls, so maintaining this history is crucial for context. This is how the LLM “remembers” what was said earlier, allowing for coherent, context-aware responses. In production applications, you’ll need to manage conversation length since most models have token limits (e.g., 32k tokens). Consider implementing message trimming or summarization for long conversations.

Step 2: The Main Chat Loop

The application logic will live inside a while (true) loop. In each iteration, it’ll prompt the user for input, check if they want to exit, and then add their message to the history array.

async function main() {
  prompts.intro("🤖 Conversational CLI Bot");
  console.log('Type "exit" or "quit" to end the conversation.\n');

  while (true) {
    const userMessage = await prompts.text({
      message: "You:",
      placeholder: "Type your message here...",
      validate: (value) => {
        if (!value) return "Please enter a message";
      },
    });

    if (prompts.isCancel(userMessage)) {
      prompts.cancel("Conversation ended.");
      process.exit(0);
    }

    const messageText = userMessage;
    if (
      messageText === "exit" ||
      messageText === "quit" ||
      messageText === "bye"
    ) {
      prompts.outro("👋 Goodbye! Thanks for chatting!");
      break;
    }

    // Add the user's message to the conversation history
    messages.push({ role: "user", content: messageText });

    // ... AI generation logic will go here
  }
}

Step 3: Generating a Response and Completing the Loop

Now for the AI part. We use generateText, which is perfect for conversational interactions. We pass the entire messages array to it, which is how the model remembers what’s been said. After getting the response, we print it and—crucially—add the model’s own message back to the history array to complete the loop for the next turn.

This all happens inside a try...catch block within the while loop.

// ... inside the while loop

const spinner = prompts.spinner();
spinner.start("AI is thinking...");

try {
  const { text } = await generateText({
    model: google("gemini-2.5-flash-lite"),
    messages: messages, // Send the full conversation history
    system: "You are a helpful and friendly AI assistant.",
  });

  spinner.stop("AI response received");
  prompts.note(text, "AI Response");

  // Add the assistant's response to the conversation history
  messages.push({ role: "assistant", content: text });
} catch (error) {
  spinner.stop("Error occurred");
  console.error("\n❌ Error generating response:", error);
  messages.pop(); // Remove the last user message on error
}

Step 4: Calling the Main Function

Don’t forget to call the main function at the end of your file:

main().catch(console.error);

Run the Chatbot

Now, run your new chatbot and have a conversation! Go to your terminal and execute:

node --env-file=.env conversing-bot.ts

Here’s how a short conversation might look:

┌  🤖 Conversational CLI Bot

◇  You:
│  Hello, what's the capital of France?

◇  spinning  AI is thinking...
●  AI response received
│
│  AI: The capital of France is Paris.
│
◇  You:
│  What's it famous for?

◇  spinning  AI is thinking...
●  AI response received

AI: Paris is famous for many things, including the Eiffel Tower, the Louvre Museum, its delicious cuisine, and its romantic atmosphere.

Complete Code for Example 2

Here’s the full code for conversing-bot.ts:

import { generateText } from "ai";
import type { UserModelMessage, AssistantModelMessage } from "ai";
import { google } from "@ai-sdk/google";
import * as prompts from "@clack/prompts";
import process from "node:process";

const messages: Array<UserModelMessage | AssistantModelMessage> = [];

async function main() {
  prompts.intro("🤖 Conversational CLI Bot");
  console.log('Type "exit" or "quit" to end the conversation.\n');

  while (true) {
    const userMessage = await prompts.text({
      message: "You:",
      placeholder: "Type your message here...",
      validate: (value) => {
        if (!value) return "Please enter a message";
      },
    });

    if (prompts.isCancel(userMessage)) {
      prompts.cancel("Conversation ended.");
      process.exit(0);
    }

    const messageText = userMessage;
    if (
      messageText === "exit" ||
      messageText === "quit" ||
      messageText === "bye"
    ) {
      prompts.outro("👋 Goodbye! Thanks for chatting!");
      break;
    }

    messages.push({ role: "user", content: messageText });

    const spinner = prompts.spinner();
    spinner.start("AI is thinking...");

    try {
      const { text } = await generateText({
        model: google("gemini-2.5-flash-lite"),
        messages: messages,
        system: "You are a helpful and friendly AI assistant.",
      });

      spinner.stop("AI response received");
      prompts.note(text, "AI Response");

      messages.push({ role: "assistant", content: text });
    } catch (error) {
      spinner.stop("Error occurred");
      console.error("\n❌ Error generating response:", error);
      messages.pop();
    }
  }
}

main().catch(console.error);

Production Considerations

While these examples work great for learning and prototyping, building production-ready AI applications requires additional considerations:

Rate Limiting and Error Handling

AI providers impose rate limits on API calls. In production, implement:

  • Retry logic with exponential backoff (see the sketch after this list)
  • Request queuing for high-traffic scenarios
  • Graceful degradation when limits are reached
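The SDK’s generateText and generateObject calls already accept a maxRetries option for basic retries. For finer control, a small wrapper like the following sketch can retry any async call with exponential backoff; the withBackoff helper is hypothetical, not part of the SDK:

// Illustrative retry helper with exponential backoff (not part of the AI SDK).
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      const delayMs = 2 ** attempt * 1000; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage: const { text } = await withBackoff(() => generateText({ /* ... */ }));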

Conversation Management

For conversational applications, managing message history becomes critical:

  • Token limits: Most models have context window limits (e.g., 32k tokens). Monitor conversation length and implement strategies like message summarization or trimming older messages (a naive trimming sketch follows this list).
  • Cost management: Longer conversations mean more tokens sent with each request, increasing costs. Consider caching or summarizing older context.
  • Context engineering: Not all messages are equally important. Learn to prioritize recent context and critical system instructions.
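As a starting point, here’s a naive trimming sketch that keeps only the most recent messages, reusing the message types from Example 2. Real applications should count tokens rather than messages, and often summarize dropped context instead; treat this as illustrative only:

import type { UserModelMessage, AssistantModelMessage } from "ai";

type ChatMessage = UserModelMessage | AssistantModelMessage;

const MAX_MESSAGES = 20; // arbitrary budget for this sketch

// Keep only the most recent messages so each request stays within limits.
function trimHistory(messages: ChatMessage[]): ChatMessage[] {
  return messages.length <= MAX_MESSAGES
    ? messages
    : messages.slice(-MAX_MESSAGES);
}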

Security and Privacy

  • Never expose API keys in client-side code
  • Sanitize user inputs before sending to the LLM
  • Be mindful of sensitive data in conversation histories
  • Implement content filtering for user-facing applications

These topics deserve deeper exploration, which we’ll cover in future articles. For now, focus on understanding the fundamentals and building your first applications.

Your Journey into AI Has Just Begun

In just a few minutes, you’ve set up a complete AI development environment and built two powerful TypeScript applications. You’ve seen how to:

  • Use generateObject to get reliable, type-safe structured data from an LLM.
  • Use generateText and a message history to build a contextual chat application.
  • Leverage Zod schemas for both type safety and AI guidance.

The Vercel AI SDK empowers you to treat AI as a first-class citizen in your TypeScript applications, backed by the safety and predictability you expect. The unified API means you can easily swap between models from Google, OpenAI, Anthropic, and more without rewriting your core logic.

This is just the beginning, and I’ll share more in future posts. The SDK offers a rich set of features to explore next:

  • Web UIs with React/Next.js: Use powerful hooks like useChat and useCompletion to build responsive web interfaces.
  • Tool Calling: Give your AI superpowers by allowing it to call your own functions, search databases, or interact with external APIs. Keep an eye out for a deep dive on this topic soon.
  • The Full Documentation: Dive deeper into advanced features, different model providers, and production best practices.

You now have the foundation to build sophisticated, type-safe AI applications. Go create something amazing! Source code for both examples can be found in this GitHub repository.

If you found this guide helpful, please share it with your developer friends and colleagues. If you have questions or want to share what you’ve built, feel free to reach out on Twitter or GitHub. Happy coding! 🚀
