import { definitions } from '/snippets/definitions.mdx'
# Building an AI capability

TypeScript-based frameworks like Vercel's AI SDK integrate most seamlessly with Axiom's tooling today, though that's likely to evolve over time.
## Build your capability
Define your capability using your framework of choice. The example below uses Vercel's AI SDK, whose documentation includes many examples covering different capability design patterns. Popular alternatives like Mastra also exist.
```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { wrapAISDKModel } from 'axiom/ai';
import { z } from 'zod';

export async function classifyTicket(input: {
  subject?: string;
  content: string;
}) {
  const result = await generateObject({
    model: wrapAISDKModel(openai('gpt-4o-mini')),
    messages: [
      {
        role: 'system',
        content: 'Classify support tickets as: question, bug_report, or feature_request.',
      },
      {
        role: 'user',
        content: input.subject
          ? `Subject: ${input.subject}\n\n${input.content}`
          : input.content,
      },
    ],
    schema: z.object({
      category: z.enum(['question', 'bug_report', 'feature_request']),
      confidence: z.number().min(0).max(1),
    }),
  });

  return result.object;
}
```

The `wrapAISDKModel` function instruments your model calls for Axiom's observability features. Learn more in the Observe section.
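The user message above folds an optional subject and the ticket body into a single string. If you reuse that formatting elsewhere, it can live in a small helper (a sketch; `formatTicket` is a hypothetical helper, not part of either SDK):

```typescript
// Hypothetical helper mirroring the message-building logic in classifyTicket:
// prefix the content with a "Subject:" line only when a subject is present.
function formatTicket(input: { subject?: string; content: string }): string {
  return input.subject
    ? `Subject: ${input.subject}\n\n${input.content}`
    : input.content;
}

console.log(formatTicket({ subject: 'Login issue', content: 'Cannot sign in.' }));
// Subject: Login issue
//
// Cannot sign in.
```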
## Gather reference examples
As you prototype, collect examples of inputs and their correct outputs.
```typescript
const referenceExamples = [
  {
    input: {
      subject: 'How do I reset my password?',
      content: 'I forgot my password and need help.',
    },
    expected: { category: 'question' },
  },
  {
    input: {
      subject: 'App crashes on startup',
      content: 'The app immediately crashes when I open it.',
    },
    expected: { category: 'bug_report' },
  },
];
```

These become your ground truth for evaluation. Learn more in the Evaluate section.
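Even before formal evaluations, a quick accuracy loop over these examples can catch regressions early. A minimal sketch (the `measureAccuracy` helper and the stub classifier are illustrative, not part of Axiom's SDK; in practice you would pass your real capability, such as `classifyTicket`):

```typescript
// Illustrative helper: run a classifier over reference examples and report
// the fraction it gets right.
type TicketInput = { subject?: string; content: string };
type Example = { input: TicketInput; expected: { category: string } };

async function measureAccuracy(
  classify: (input: TicketInput) => Promise<{ category: string }>,
  examples: Example[],
): Promise<number> {
  let correct = 0;
  for (const example of examples) {
    const result = await classify(example.input);
    if (result.category === example.expected.category) correct++;
  }
  return correct / examples.length;
}

// Stub standing in for a real model-backed classifier like classifyTicket:
const stubClassify = async (input: TicketInput) => ({
  category: input.content.includes('crash') ? 'bug_report' : 'question',
});

const accuracy = await measureAccuracy(stubClassify, [
  { input: { content: 'I forgot my password and need help.' }, expected: { category: 'question' } },
  { input: { content: 'The app immediately crashes when I open it.' }, expected: { category: 'bug_report' } },
]);
console.log(accuracy); // 1
```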
## Structured prompt management
For teams that want more structure around prompt definitions, Axiom's SDK includes experimental utilities for managing prompts as versioned objects.
### Define prompts as objects
Represent capabilities as structured Prompt objects:
```typescript
import {
  experimental_Type,
  type experimental_Prompt,
} from 'axiom/ai';

export const ticketClassifierPrompt = {
  name: "Ticket Classifier",
  slug: "ticket-classifier",
  version: "1.0.0",
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content: "Classify support tickets as: {{ categories }}",
    },
    {
      role: "user",
      content: "{{ ticket_content }}",
    },
  ],
  arguments: {
    categories: experimental_Type.String(),
    ticket_content: experimental_Type.String(),
  },
} satisfies experimental_Prompt;
```

### Type-safe arguments
The `experimental_Type` system provides type safety for prompt arguments:
```typescript
arguments: {
  user: experimental_Type.Object({
    name: experimental_Type.String(),
    preferences: experimental_Type.Array(experimental_Type.String()),
  }),
  priority: experimental_Type.Union([
    experimental_Type.Literal("high"),
    experimental_Type.Literal("medium"),
    experimental_Type.Literal("low"),
  ]),
}
```

### Local testing
Test prompts locally before using them:
```typescript
import { experimental_parse } from 'axiom/ai';

const parsed = await experimental_parse(ticketClassifierPrompt, {
  context: {
    categories: 'question, bug_report, feature_request',
    ticket_content: 'How do I reset my password?',
  },
});

console.log(parsed.messages);
```

These utilities help organize prompts in your codebase. Centralized prompt management and versioning features may be added in future releases.
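Conceptually, parsing renders each `{{ placeholder }}` in the message templates with the matching value from `context`. A rough illustration of that idea (a simplified sketch, not Axiom's actual implementation):

```typescript
type Message = { role: string; content: string };

// Simplified sketch of template rendering: replace each {{ name }}
// placeholder with its context value, leaving unknown placeholders intact.
function renderMessages(
  messages: Message[],
  context: Record<string, string>,
): Message[] {
  return messages.map((message) => ({
    role: message.role,
    content: message.content.replace(
      /\{\{\s*(\w+)\s*\}\}/g,
      (match, name) => context[name] ?? match,
    ),
  }));
}

const rendered = renderMessages(
  [
    { role: 'system', content: 'Classify support tickets as: {{ categories }}' },
    { role: 'user', content: '{{ ticket_content }}' },
  ],
  {
    categories: 'question, bug_report, feature_request',
    ticket_content: 'How do I reset my password?',
  },
);
console.log(rendered[1].content); // How do I reset my password?
```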
## What's next?
Once you have a working capability and reference examples, systematically evaluate its performance.
To learn how to set up and run evaluations, see Evaluate.