TypeScript for AI Applications: Type Safety in LLM Integration

Expert Guide to Building Type-Safe AI Applications with TypeScript

I’ve built AI applications with and without TypeScript, and I can tell you: type safety isn’t optional. When you’re dealing with streaming responses, complex message structures, and dynamic AI outputs, TypeScript catches bugs before they reach production. It’s the difference between debugging runtime errors and catching issues at compile time.

In this guide, I’ll show you how to use TypeScript effectively for AI applications. You’ll learn how to type LLM responses, handle streaming, create type-safe API clients, and build robust type definitions for AI workflows.

What You’ll Learn

  • Typing LLM requests and responses
  • Type-safe streaming implementations
  • Creating reusable type definitions for AI
  • Handling dynamic AI outputs with types
  • Type guards and runtime validation
  • Advanced TypeScript patterns for AI
  • Real-world examples from production
  • Common type safety pitfalls and how to avoid them

Introduction: Why TypeScript for AI Applications?

AI applications have unique type challenges:

  • Dynamic outputs: AI responses vary in structure
  • Streaming data: Partial responses need typing
  • Complex payloads: Messages, context, metadata
  • API contracts: Type-safe client-server communication

TypeScript helps you handle these challenges with compile-time type checking, better IDE support, and self-documenting code. I’ve seen projects where TypeScript caught critical bugs before deployment.

Figure 1: TypeScript Architecture for AI Applications

1. Typing LLM Requests and Responses

1.1 Basic Message Types

Start with core message types:

// Core message types
type MessageRole = 'user' | 'assistant' | 'system';

interface Message {
  id: string;
  role: MessageRole;
  content: string;
  timestamp: Date;
  metadata?: Record<string, unknown>;
}

// Request types
interface ChatRequest {
  messages: Message[];
  model?: string;
  temperature?: number;
  maxTokens?: number;
  stream?: boolean;
}

// Response types
interface ChatResponse {
  id: string;
  message: Message;
  finishReason: 'stop' | 'length' | 'content_filter';
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}

// Streaming response
interface StreamingChunk {
  id: string;
  delta: {
    role?: MessageRole;
    content?: string;
  };
  finishReason?: 'stop' | 'length' | 'content_filter';
}
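To see the payoff, here’s a minimal usage sketch: the compiler accepts a well-formed request and flags an invalid role at compile time (the ids and content below are placeholders):

const request: ChatRequest = {
  messages: [
    {
      id: 'msg-1',
      role: 'user',
      content: 'Summarize this document.',
      timestamp: new Date(),
    },
  ],
  temperature: 0.2,
  stream: false,
};

// @ts-expect-error -- 'moderator' is not assignable to MessageRole
const invalid: Message = { id: 'msg-2', role: 'moderator', content: 'hi', timestamp: new Date() };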

1.2 Generic API Client

Create a type-safe API client:

class AIClient<TRequest extends ChatRequest, TResponse extends ChatResponse> {
  constructor(private apiKey: string, private baseURL: string) {}
  
  async chat(request: TRequest): Promise<TResponse> {
    const response = await fetch(`${this.baseURL}/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify(request),
    });
    
    if (!response.ok) {
      throw new Error(`API error: ${response.statusText}`);
    }
    
    return response.json() as Promise<TResponse>;
  }
  
  async *streamChat(request: TRequest): AsyncGenerator<StreamingChunk, void, unknown> {
    const response = await fetch(`${this.baseURL}/chat/stream`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ ...request, stream: true }),
    });
    
    if (!response.ok) {
      throw new Error(`API error: ${response.statusText}`);
    }
    
    const reader = response.body?.getReader();
    const decoder = new TextDecoder();
    
    if (!reader) {
      throw new Error('Response body is not readable');
    }
    
    // Buffer partial lines: a network chunk can end mid-line, so only
    // parse lines that have been fully received.
    let buffer = '';
    
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      
      // stream: true keeps multi-byte characters split across chunks intact
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? ''; // keep the last, possibly incomplete line
      
      for (const line of lines) {
        const trimmed = line.trim();
        if (trimmed.startsWith('data: ')) {
          yield JSON.parse(trimmed.slice(6)) as StreamingChunk;
        }
      }
    }
  }
}
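Here’s a minimal usage sketch. The base URL and environment variable are placeholders, not a real endpoint, and the explicit type arguments pin the request/response shapes:

// Hypothetical key and endpoint, for illustration only
const client = new AIClient<ChatRequest, ChatResponse>(
  process.env.AI_API_KEY ?? '',
  'https://api.example.com/v1'
);

const reply = await client.chat({ messages: [] });
console.log(reply.message.content, reply.usage.totalTokens);

// Streaming: each chunk is typed as StreamingChunk
let text = '';
for await (const chunk of client.streamChat({ messages: [] })) {
  text += chunk.delta.content ?? '';
}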
Figure 2: Type Safety Patterns for AI Applications

2. Type-Safe Streaming

2.1 Streaming State Types

Type your streaming state:

interface StreamingState {
  isStreaming: boolean;
  currentContent: string;
  accumulatedContent: string;
  error: Error | null;
}

type StreamingAction =
  | { type: 'START_STREAM'; messageId: string }
  | { type: 'UPDATE_STREAM'; content: string }
  | { type: 'COMPLETE_STREAM'; finalContent: string }
  | { type: 'ERROR_STREAM'; error: Error };

function streamingReducer(
  state: StreamingState,
  action: StreamingAction
): StreamingState {
  switch (action.type) {
    case 'START_STREAM':
      return {
        isStreaming: true,
        currentContent: '',
        accumulatedContent: '',
        error: null,
      };
    case 'UPDATE_STREAM':
      return {
        ...state,
        currentContent: action.content,
        accumulatedContent: state.accumulatedContent + action.content,
      };
    case 'COMPLETE_STREAM':
      return {
        isStreaming: false,
        currentContent: '',
        accumulatedContent: action.finalContent,
        error: null,
      };
    case 'ERROR_STREAM':
      return {
        ...state,
        isStreaming: false,
        error: action.error,
      };
    default:
      return state;
  }
}
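Because the reducer is a pure function, you can exercise it without any UI framework. A minimal sketch, folding a few simulated chunks through it:

let state: StreamingState = {
  isStreaming: false,
  currentContent: '',
  accumulatedContent: '',
  error: null,
};

state = streamingReducer(state, { type: 'START_STREAM', messageId: 'msg-1' });
state = streamingReducer(state, { type: 'UPDATE_STREAM', content: 'Hello' });
state = streamingReducer(state, { type: 'UPDATE_STREAM', content: ', world' });
state = streamingReducer(state, { type: 'COMPLETE_STREAM', finalContent: state.accumulatedContent });

console.log(state.accumulatedContent); // "Hello, world"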

2.2 Type-Safe Stream Handler

Create a type-safe stream handler:

async function handleStreamingResponse<T extends StreamingChunk>(
  stream: AsyncGenerator<T, void, unknown>,
  onChunk: (chunk: T) => void,
  onComplete: (finalContent: string) => void,
  onError: (error: Error) => void
): Promise<void> {
  let accumulatedContent = '';
  
  try {
    for await (const chunk of stream) {
      if (chunk.delta?.content) {
        accumulatedContent += chunk.delta.content;
        onChunk(chunk);
      }
      
      if (chunk.finishReason) {
        onComplete(accumulatedContent);
        return;
      }
    }
  } catch (error) {
    onError(error instanceof Error ? error : new Error('Unknown error'));
  }
}
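Wired to the streaming client from section 1.2, usage looks roughly like this (a `client` instance is assumed to exist already):

await handleStreamingResponse(
  client.streamChat({ messages: [] }),
  (chunk) => console.log('delta:', chunk.delta.content),
  (finalContent) => console.log('done:', finalContent),
  (error) => console.error('stream failed:', error.message)
);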

3. Advanced Type Patterns

3.1 Discriminated Unions

Use discriminated unions for different message types:

type BaseMessage = {
  id: string;
  timestamp: Date;
};

type UserMessage = BaseMessage & {
  type: 'user';
  content: string;
  attachments?: File[];
};

type AssistantMessage = BaseMessage & {
  type: 'assistant';
  content: string;
  sources?: string[];
  confidence?: number;
};

type SystemMessage = BaseMessage & {
  type: 'system';
  content: string;
  level: 'info' | 'warning' | 'error';
};

type Message = UserMessage | AssistantMessage | SystemMessage;

// Type narrowing
function processMessage(message: Message) {
  switch (message.type) {
    case 'user':
      // TypeScript knows message is UserMessage
      console.log(message.content);
      if (message.attachments) {
        // TypeScript knows attachments exists
        message.attachments.forEach(file => console.log(file.name));
      }
      break;
    case 'assistant':
      // TypeScript knows message is AssistantMessage
      console.log(message.content);
      if (message.confidence) {
        // TypeScript knows confidence exists
        console.log(`Confidence: ${message.confidence}`);
      }
      break;
    case 'system':
      // TypeScript knows message is SystemMessage
      console.log(`[${message.level}] ${message.content}`);
      break;
  }
}
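One refinement worth considering: an exhaustiveness check, so the compiler flags any message variant you forget to handle. A minimal sketch using a `never`-typed helper:

function assertNever(value: never): never {
  throw new Error(`Unhandled message type: ${JSON.stringify(value)}`);
}

function renderMessage(message: Message): string {
  switch (message.type) {
    case 'user':
      return `You: ${message.content}`;
    case 'assistant':
      return `AI: ${message.content}`;
    case 'system':
      return `[${message.level}] ${message.content}`;
    default:
      // If a new variant is added to Message, this branch stops compiling
      return assertNever(message);
  }
}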

3.2 Conditional Types

Use conditional types for flexible APIs:

type StreamResponse<T extends boolean> = T extends true
  ? AsyncGenerator<StreamingChunk, void, unknown>
  : ChatResponse;

async function chat<T extends boolean>(
  request: ChatRequest & { stream: T }
): Promise<StreamResponse<T>> {
  if (request.stream) {
    return streamChat(request) as StreamResponse<T>;
  } else {
    return regularChat(request) as StreamResponse<T>;
  }
}

// Usage
const streamResponse = await chat({ messages: [], stream: true });
// TypeScript knows this is AsyncGenerator

const regularResponse = await chat({ messages: [], stream: false });
// TypeScript knows this is ChatResponse

3.3 Type Guards

Create type guards for runtime validation:

function isMessage(obj: unknown): obj is Message {
  return (
    typeof obj === 'object' &&
    obj !== null &&
    'id' in obj &&
    'role' in obj &&
    'content' in obj &&
    typeof (obj as Message).id === 'string' &&
    ['user', 'assistant', 'system'].includes((obj as Message).role) &&
    typeof (obj as Message).content === 'string'
  );
}

function isChatResponse(obj: unknown): obj is ChatResponse {
  return (
    typeof obj === 'object' &&
    obj !== null &&
    'id' in obj &&
    'message' in obj &&
    'finishReason' in obj &&
    isMessage((obj as ChatResponse).message)
  );
}

// Usage
function handleResponse(data: unknown) {
  if (isChatResponse(data)) {
    // TypeScript knows data is ChatResponse
    console.log(data.message.content);
  } else {
    throw new Error('Invalid response format');
  }
}
Figure 3: Advanced TypeScript Patterns for AI

4. Type-Safe State Management

4.1 Typed Store with Zustand

Create a type-safe Zustand store:

import { create } from 'zustand';
import { devtools } from 'zustand/middleware';

interface ConversationState {
  messages: Message[];
  currentStream: string | null;
  isStreaming: boolean;
  error: Error | null;
}

interface ConversationActions {
  addMessage: (message: Message) => void;
  updateStream: (content: string) => void;
  startStream: () => void;
  completeStream: () => void;
  setError: (error: Error | null) => void;
  clearConversation: () => void;
}

type ConversationStore = ConversationState & ConversationActions;

export const useConversationStore = create<ConversationStore>()(
  devtools(
    (set) => ({
      // State
      messages: [],
      currentStream: null,
      isStreaming: false,
      error: null,
      
      // Actions
      addMessage: (message) =>
        set((state) => ({
          messages: [...state.messages, message],
        })),
      
      // content is the incremental delta from the stream, not the full text
      updateStream: (content) =>
        set((state) => {
          const accumulated = (state.currentStream ?? '') + content;
          return {
            currentStream: accumulated,
            messages: state.messages.map((msg, idx) =>
              idx === state.messages.length - 1 && msg.role === 'assistant'
                ? { ...msg, content: accumulated }
                : msg
            ),
          };
        }),
      
      startStream: () =>
        set((state) => ({
          isStreaming: true,
          currentStream: '',
          messages: [
            ...state.messages,
            {
              id: Date.now().toString(),
              role: 'assistant',
              content: '',
              timestamp: new Date(),
            },
          ],
        })),
      
      completeStream: () =>
        set((state) => ({
          isStreaming: false,
          currentStream: null,
        })),
      
      setError: (error) =>
        set({
          error,
          isStreaming: false,
        }),
      
      clearConversation: () =>
        set({
          messages: [],
          currentStream: null,
          isStreaming: false,
          error: null,
        }),
    }),
    { name: 'Conversation Store' }
  )
);
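Outside React components, you can drive the store directly through `useConversationStore.getState()`. Combined with the stream handler from section 2.2, a sketch (the `client` instance from section 1.2 is assumed):

async function sendMessage(userContent: string) {
  const store = useConversationStore.getState();
  
  store.addMessage({
    id: Date.now().toString(),
    role: 'user',
    content: userContent,
    timestamp: new Date(),
  });
  store.startStream();
  
  await handleStreamingResponse(
    client.streamChat({ messages: useConversationStore.getState().messages }),
    (chunk) => store.updateStream(chunk.delta.content ?? ''),
    () => store.completeStream(),
    (error) => store.setError(error)
  );
}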

5. Runtime Validation with Zod

5.1 Schema Definitions

Use Zod for runtime validation:

import { z } from 'zod';

const MessageRoleSchema = z.enum(['user', 'assistant', 'system']);

const MessageSchema = z.object({
  id: z.string(),
  role: MessageRoleSchema,
  content: z.string(),
  timestamp: z.coerce.date(), // JSON carries dates as strings; coerce converts them back to Date
  metadata: z.record(z.unknown()).optional(),
});

const ChatRequestSchema = z.object({
  messages: z.array(MessageSchema),
  model: z.string().optional(),
  temperature: z.number().min(0).max(2).optional(),
  maxTokens: z.number().positive().optional(),
  stream: z.boolean().optional(),
});

const ChatResponseSchema = z.object({
  id: z.string(),
  message: MessageSchema,
  finishReason: z.enum(['stop', 'length', 'content_filter']),
  usage: z.object({
    promptTokens: z.number(),
    completionTokens: z.number(),
    totalTokens: z.number(),
  }),
});

// Infer TypeScript types from schemas
type Message = z.infer<typeof MessageSchema>;
type ChatRequest = z.infer<typeof ChatRequestSchema>;
type ChatResponse = z.infer<typeof ChatResponseSchema>;
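When you’d rather branch on validation failure than throw, `safeParse` returns a discriminated result instead:

function parseResponse(payload: unknown): ChatResponse | null {
  const result = ChatResponseSchema.safeParse(payload);
  if (!result.success) {
    // result.error.issues lists every field that failed validation
    console.error('Invalid response:', result.error.issues);
    return null;
  }
  return result.data; // fully typed as ChatResponse
}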

5.2 Validated API Client

Create a validated API client:

class ValidatedAIClient {
  async chat(request: unknown): Promise<ChatResponse> {
    // Validate request
    const validatedRequest = ChatRequestSchema.parse(request);
    
    const response = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(validatedRequest),
    });
    
    if (!response.ok) {
      throw new Error(`API error: ${response.statusText}`);
    }
    
    const data: unknown = await response.json();
    
    // Validate response shape before treating it as ChatResponse
    return ChatResponseSchema.parse(data);
  }
}

6. Best Practices: Lessons from Production

After building type-safe AI applications, here are the practices I follow:

  1. Start with core types: Define Message, Request, Response types first
  2. Use discriminated unions: For different message types and states
  3. Create type guards: For runtime validation
  4. Use Zod for validation: Combine TypeScript and Zod
  5. Type your state management: Zustand, Redux, Jotai all support TypeScript
  6. Use conditional types: For flexible APIs (stream vs non-stream)
  7. Document with types: Types are self-documenting
  8. Strict mode: Enable strict TypeScript settings
  9. Type your errors: Create error types for better error handling (see the sketch after this list)
  10. Test your types: Use type tests to ensure types work correctly
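For item 9, here’s a minimal sketch of what typed errors can look like; the error names and fields are illustrative, not part of any particular SDK:

// A discriminated union of error cases the client can produce
type AIError =
  | { kind: 'network'; status: number; message: string }
  | { kind: 'rate_limit'; retryAfterMs: number }
  | { kind: 'validation'; issues: string[] }
  | { kind: 'stream_aborted'; partialContent: string };

function describeError(error: AIError): string {
  switch (error.kind) {
    case 'network':
      return `HTTP ${error.status}: ${error.message}`;
    case 'rate_limit':
      return `Rate limited, retry in ${error.retryAfterMs}ms`;
    case 'validation':
      return `Invalid payload: ${error.issues.join(', ')}`;
    case 'stream_aborted':
      return `Stream aborted after ${error.partialContent.length} characters`;
  }
}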

7. Common Mistakes to Avoid

I’ve made these mistakes so you don’t have to:

  • Using `any` everywhere: Defeats the purpose of TypeScript
  • Not typing API responses: Runtime errors from unexpected data
  • Ignoring type errors: Fix them, don’t suppress them
  • Not using type guards: Runtime validation is still needed
  • Over-complicating types: Keep types simple and readable
  • Not typing streaming: Streaming needs proper typing too
  • Forgetting error types: Type your errors for better handling
  • Not using strict mode: Strict mode catches more bugs

8. Conclusion

TypeScript is essential for AI applications. It catches bugs at compile time, provides better IDE support, and makes code self-documenting. The key is starting with core types, using discriminated unions, creating type guards, and combining with runtime validation.

Get these right, and your AI application will be more robust, maintainable, and easier to debug. Type safety isn’t optional for production AI applications.

🎯 Key Takeaway

TypeScript for AI applications is about catching bugs before they reach production. Start with core types, use discriminated unions for different message types, create type guards for runtime validation, and combine with Zod for schema validation. The result: more robust, maintainable, and easier-to-debug AI applications.

