The boilerplate includes professional AI chat templates that you can use to quickly add AI capabilities to your application. These templates support multiple AI providers with streaming responses and modern chat UX.
View the working AI templates at:
- `/ai` - Tabbed interface with both Nuxt UI and shadcn versions
- `/ai/chat/[id]` - Full-featured chat interface with conversation history

The AI chat templates come in two variants:
The shadcn version (`/ai` → Shadcn tab) provides a simple, direct AI interface perfect for one-off interactions.
Use this when you need:
Key features:
The Nuxt UI version (`/ai` → Nuxt UI tab or `/ai/chat/[id]`) provides a full-featured chat experience with persistent conversations.
Use this when you need:
Key features:
Add your API keys to `.env`:

```bash
# OpenAI (for GPT models)
OPENAI_API_KEY="sk-..."

# Anthropic (for Claude models)
ANTHROPIC_API_KEY="sk-ant-..."

# Grok / xAI (for Grok models)
GROK_API_KEY="xai-..."
```
You only need to configure the providers you plan to use. Get API keys from:
Navigate to /ai in your application to see both template variants in action. Switch between tabs to compare the implementations.
Decide which template variant fits your use case:
The shadcn version is located at `app/components/ai/AiInterfaceShadcn.vue`. Use it directly in your pages:

```vue
<script setup lang="ts">
import AiInterfaceShadcn from '@/components/ai/AiInterfaceShadcn.vue'
</script>

<template>
  <div class="container mx-auto py-8">
    <AiInterfaceShadcn />
  </div>
</template>
```
The Nuxt UI version provides a complete chat experience with these key pages:
Main chat interface (`app/pages/ai/chat/[id].vue`):

Chat list page - Create a page to list the user's chats:
```vue
<script setup lang="ts">
const { data: chats } = await useFetch('/api/chats')
</script>

<template>
  <div class="space-y-4">
    <h1>Your conversations</h1>
    <div v-for="chat in chats" :key="chat.id">
      <NuxtLink :to="`/ai/chat/${chat.id}`">
        {{ chat.title || 'New chat' }}
      </NuxtLink>
    </div>
  </div>
</template>
```
The default models are configured in `server/api/ai/stream.ts`:

```ts
const models = {
  chatgpt: 'gpt-4o-mini',
  claude: 'claude-3-5-haiku-latest',
  grok: 'grok-4',
}
```
To use more capable models, update the configuration:

```ts
const models = {
  chatgpt: 'gpt-4o', // More capable GPT-4
  claude: 'claude-3-5-sonnet-latest', // More capable Claude
  grok: 'grok-vision-beta', // Grok with vision
}
```
Customize AI behavior by modifying the request parameters:

```ts
await fetch('/api/ai/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'chatgpt',
    prompt: 'Your prompt here',
    temperature: 0.7, // 0-2: lower = focused, higher = creative
    max_tokens: 2000, // max response length
    top_p: 0.95, // alternative to temperature
  }),
})
```
Common temperature values:
- `0.3` - Factual responses, code generation
- `0.7` - Balanced creativity (recommended)
- `1.2` - Creative writing, brainstorming

Both template variants use your app's design system:
Shadcn version:

- Built from the shadcn-vue components in `app/components/ui/`

Nuxt UI version:

- Nuxt UI chat components (`UChatMessages`, `UChatPrompt`)
- `ai` layout for sidebar customization

Protect AI endpoints so only logged-in users can access them:
Add `requireAuth()` calls in:

- `server/api/chats/index.post.ts`
- `server/api/chats/[id].get.ts`
- `server/api/chats/[id].post.ts`

```ts
export default defineEventHandler(async (event) => {
  // Require authentication
  const userId = await requireAuth(event)

  // ... rest of the handler
})
```
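For reference, here is a minimal sketch of what such a `requireAuth()` helper might look like, assuming session middleware has already attached the user to `event.context` (the boilerplate's actual helper may differ):

```typescript
// Minimal stand-in types; in the real handler this would be H3's event object.
interface SessionUser { id: string }
interface MinimalEvent { context: { user?: SessionUser } }

// Returns the authenticated user's id, or rejects when no session exists.
async function requireAuth(event: MinimalEvent): Promise<string> {
  const user = event.context.user
  if (!user) {
    // In a real H3 handler you would throw createError({ statusCode: 401 })
    throw new Error('401 Unauthorized')
  }
  return user.id
}
```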
Limit AI access to paying subscribers:
```ts
import { requireSubscription } from '@@/server/utils/require-subscription'

export default defineEventHandler(async (event) => {
  // Require pro or enterprise subscription
  await requireSubscription(event, { plans: ['pro', 'enterprise'] })

  // ... rest of the handler
})
```
The AI endpoint includes rate limiting (5 requests per 5 minutes by default). Adjust it in `server/api/ai/stream.ts`:

```ts
await rateLimit(event, {
  max: 5, // number of requests
  window: '5m', // time window
  prefix: 'ai-stream',
})
```
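The boilerplate's `rateLimit()` presumably uses shared storage so limits hold across server instances; the sketch below only illustrates the fixed-window counting logic it implies, using an in-memory map:

```typescript
// One counting window per key (e.g. per user or per IP).
interface Bucket { count: number; resetAt: number }
const buckets = new Map<string, Bucket>()

// Returns true if the request is allowed, false if the limit is exceeded.
// `now` is injectable to make the windowing logic easy to test.
function checkRateLimit(key: string, max: number, windowMs: number, now = Date.now()): boolean {
  const bucket = buckets.get(key)
  if (!bucket || now >= bucket.resetAt) {
    // Start a fresh window for this key.
    buckets.set(key, { count: 1, resetAt: now + windowMs })
    return true
  }
  if (bucket.count < max) {
    bucket.count += 1
    return true
  }
  return false // limit exceeded inside the current window
}
```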
Track AI usage for monitoring and billing:
```ts
// After validation
await prisma.aiUsage.create({
  data: {
    userId: event.context.user.id,
    model,
    promptTokens: Math.ceil(prompt.length / 4), // rough estimate (~4 chars/token)
    completionTokens: 0, // update after the response completes
  },
})
```
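Dividing character count by four is a common rough rule of thumb for English text. A small helper, sketched here, keeps that estimate in one place and rounds up to whole tokens; for accurate billing you would use the provider's tokenizer instead:

```typescript
// Rough token estimate: ~4 characters per token for English text.
// Rounds up so the database column always receives a whole number.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

// After the stream finishes, the same helper can estimate completionTokens
// from the accumulated response text before updating the aiUsage row.
```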
The Nuxt UI chat template uses these Prisma models for conversation persistence:
```prisma
model Chat {
  id        String    @id @default(cuid())
  title     String?
  userId    String
  user      User      @relation(fields: [userId], references: [id], onDelete: Cascade)
  messages  Message[]
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt
}

model Message {
  id        String   @id @default(cuid())
  chatId    String
  chat      Chat     @relation(fields: [chatId], references: [id], onDelete: Cascade)
  role      String
  content   String   @db.Text
  createdAt DateTime @default(now())
}
```
These are already included in your Prisma schema if you're using the Nuxt UI version.
The templates use these API endpoints:
`POST /api/ai/stream` - Stream AI responses without persistence

```ts
// Request
{
  model: 'chatgpt' | 'claude' | 'grok',
  prompt: string,
  temperature?: number,
  max_tokens?: number,
  top_p?: number
}

// Response: Server-Sent Events stream
```
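A runtime guard for this body shape can reject malformed requests before they reach a provider. The `isStreamRequest` type guard below is a hypothetical sketch; the actual endpoint may validate differently (e.g. with a schema library like zod):

```typescript
type StreamModel = 'chatgpt' | 'claude' | 'grok'

interface StreamRequest {
  model: StreamModel
  prompt: string
  temperature?: number
  max_tokens?: number
  top_p?: number
}

// Narrow an unknown request body to StreamRequest, checking the fields
// described in the request shape above.
function isStreamRequest(body: unknown): body is StreamRequest {
  if (typeof body !== 'object' || body === null) return false
  const b = body as Record<string, unknown>
  const validModel = b.model === 'chatgpt' || b.model === 'claude' || b.model === 'grok'
  const validPrompt = typeof b.prompt === 'string' && b.prompt.length > 0
  const validTemp = b.temperature === undefined
    || (typeof b.temperature === 'number' && b.temperature >= 0 && b.temperature <= 2)
  return validModel && validPrompt && validTemp
}
```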
- `GET /api/chats` - List the user's chats
- `POST /api/chats` - Create a new chat
- `GET /api/chats/[id]` - Get a chat with its messages
- `POST /api/chats/[id]` - Send a message in a chat (returns a stream)
- `DELETE /api/chats/[id]` - Delete a chat
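On the client, the stream arrives as Server-Sent Events text. The `parseSseChunk` helper below is an illustrative sketch, assuming each event is a `data: ...` line and a `[DONE]` sentinel marks the end of the stream (the boilerplate's actual payload format may differ):

```typescript
// Extract the data payloads from a chunk of SSE-formatted text.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length))
    .filter(payload => payload !== '[DONE]') // drop the end-of-stream sentinel
}
```

In a real client you would call this on each decoded chunk from `response.body.getReader()` and append the payloads to the visible message.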
Track important metrics:
For high-traffic applications:
Depending on your use case: