SDK Reference

The Adaline SDK enables you to build production-ready AI agentic applications with enterprise-grade observability and deployment management.
TypeScript SDK Available Now! The Adaline SDK is available in TypeScript today; a Python SDK is coming soon. Stay tuned for updates!

Installation

npm install @adaline/client @adaline/api

Overview

The Adaline SDK provides two core capabilities:

Deployment Management

Fetch and cache your deployed prompts with automatic background refresh (a short sketch of getDeployment() follows this list):
  • getDeployment() - Get a specific prompt deployment by ID
  • getLatestDeployment() - Get the latest prompt deployment by environment (e.g., production, staging)
  • initLatestDeployment() - Initialize cached prompt deployment with auto-refresh
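
A minimal sketch of fetching one specific deployment with getDeployment(); the parameter names here (promptId, deploymentId) are assumptions patterned after getLatestDeployment(), and the IDs are placeholders:

import { Adaline } from '@adaline/client';

// Requires the ADALINE_API_KEY environment variable, as in Quick Start
const adaline = new Adaline();

// Fetch a specific prompt deployment by ID (parameter names assumed)
const deployment = await adaline.getDeployment({
  promptId: 'your-prompt-id',
  deploymentId: 'your-deployment-id'
});

console.log(deployment.prompt.config.model);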

Observability & Monitoring

Track every AI agentic application interaction with structured traces and spans:
  • Monitor - Buffer and batch log submissions with automatic retries and flushing
  • Trace - High-level operation tracking (e.g., user request, workflow, agentic application interaction)
  • Span - Granular operation tracking (e.g., LLM call, tool execution, retrieval, embedding generation, function call, guardrail check, etc.)

Quick Start

import { Adaline } from '@adaline/client';
import OpenAI from 'openai';

// Initialize the Adaline client; the ADALINE_API_KEY environment variable must be set before using the SDK
const adaline = new Adaline();

// Get your deployed prompt configuration
const deployment = await adaline.getLatestDeployment({
  promptId: 'your-prompt-id',
  deploymentEnvironmentId: 'your-deployment-environment-id'
});

// Initialize monitoring for your project
const monitor = adaline.initMonitor({
  projectId: 'your-project-id',
  flushInterval: 5, // flush every 5 seconds
  maxBufferSize: 10  // or when buffer reaches 10 items
});

// Create a trace for the entire user interaction
const trace = monitor.logTrace({
  name: 'Chat Completion',
  status: 'unknown',
  sessionId: 'user-session-123'
});

// Log the LLM call as a span
const llmSpan = trace.logSpan({
  name: 'OpenAI GPT-4 Call',
  status: 'unknown',
  promptId: 'your-prompt-id', // the same prompt ID used to fetch the deployment
  deploymentId: deployment.id
});

try {
  // Make your LLM call with the deployed configuration
  const openai = new OpenAI();
  const response = await openai.chat.completions.create({
    model: deployment.prompt.config.model,
    messages: deployment.prompt.messages,
    ...deployment.prompt.config.settings
  });

  // Update span with successful result
  llmSpan.update({
    status: 'success',
    content: {
      type: 'Model',
      provider: deployment.prompt.config.providerName,
      model: deployment.prompt.config.model,
      input: JSON.stringify(deployment.prompt.messages),
      output: JSON.stringify(response.choices[0].message)
    }
  });
  
  trace.update({ status: 'success' });
} catch (error) {
  // Mark the span and trace as failed
  llmSpan.update({ status: 'failure' });
  trace.update({ status: 'failure' });
}

// End tracking; this marks the trace (and all its spans) as complete and ready to flush
trace.end();

// Logs are automatically flushed every 5 seconds, or manually flush:
await monitor.flush();

Key Features

Automatic Background Refresh

Keep your prompts up-to-date without redeploying your application:
const controller = await adaline.initLatestDeployment({
  promptId: 'your-prompt-id',
  deploymentEnvironmentId: 'your-deployment-environment-id',
  refreshInterval: 60 // refresh every 60 seconds
});

// Always get the latest cached prompt deployment
const deployment = await controller.get();

// Or force a fresh fetch of the prompt deployment
const refreshedDeployment = await controller.get(true);

// Stop when done
controller.stop();

Smart Buffering & Batching

Optimize performance with automatic batching and retry logic (a conceptual sketch follows this list):
  • Logs are buffered in memory and flushed in batches
  • Automatic retry with exponential backoff on transient failures
  • Configurable flush intervals and buffer sizes
  • Failed entries are tracked and retried on next flush
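
To make the buffering behavior concrete, here is a conceptual TypeScript sketch of batched flushing with exponential backoff; all names here (BufferedFlusher, sendBatch, LogEntry) are illustrative and not the SDK's actual internals:

// Conceptual sketch only: buffered logging with periodic flushes,
// batch submission, and exponential-backoff retries.
type LogEntry = { id: string; payload: unknown };

class BufferedFlusher {
  private buffer: LogEntry[] = [];

  constructor(
    private sendBatch: (batch: LogEntry[]) => Promise<void>, // assumed transport
    private maxBufferSize = 10,
    private flushIntervalMs = 5000
  ) {
    // Flush on a timer, mirroring the flushInterval option above
    setInterval(() => void this.flush(), this.flushIntervalMs);
  }

  log(entry: LogEntry) {
    this.buffer.push(entry);
    // Flush early once the buffer fills, mirroring maxBufferSize
    if (this.buffer.length >= this.maxBufferSize) void this.flush();
  }

  async flush(maxRetries = 3) {
    const batch = this.buffer.splice(0, this.buffer.length);
    if (batch.length === 0) return;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        await this.sendBatch(batch);
        return;
      } catch {
        if (attempt === maxRetries) {
          // Re-queue so failed entries are retried on the next flush
          this.buffer.unshift(...batch);
          return;
        }
        // Exponential backoff: 1s, 2s, 4s, ...
        await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
      }
    }
  }
}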

Comprehensive Observability

Track everything in your AI agentic application (a sketch of logging another span type follows this list):
  • Model spans - LLM inference calls (streaming and non-streaming)
  • Tool spans - Function/API calls
  • Retrieval spans - RAG and vector database queries
  • Embeddings spans - Embedding generation
  • Function spans - Custom application logic
  • Guardrail spans - Safety and compliance checks
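
As a sketch of how a non-model span might be logged on the same trace from Quick Start; the content type 'Tool' and the fetchWeather helper are assumptions, patterned after the 'Model' span shown earlier:

// Sketch: log a tool call as a span on an existing trace.
// 'Tool' as a content type and fetchWeather are assumptions.
const toolSpan = trace.logSpan({
  name: 'Weather API Call',
  status: 'unknown'
});

const result = await fetchWeather({ region: 'us-east' }); // hypothetical helper

toolSpan.update({
  status: 'success',
  content: {
    type: 'Tool', // assumed content type name
    input: JSON.stringify({ region: 'us-east' }),
    output: JSON.stringify(result)
  }
});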

Rich Metadata

Attach detailed context to every operation, then search or filter on it later (a sketch follows this list):
  • Tags - Categorize and filter traces (e.g., ['production', 'high-priority'])
  • Attributes - Key-value metadata (e.g., { userId: '123', region: 'us-east' })
  • Sessions - Group related traces by session ID (e.g., user-session-123)
  • References - Link traces and spans with custom IDs (e.g., trace-ref-001)
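
A sketch of attaching this metadata at trace creation; sessionId appears in Quick Start, while tags, attributes, and referenceId are assumed field names based on the list above:

const trace = monitor.logTrace({
  name: 'Chat Completion',
  status: 'unknown',
  sessionId: 'user-session-123',
  referenceId: 'trace-ref-001',                    // assumed field name
  tags: ['production', 'high-priority'],           // assumed field name
  attributes: { userId: '123', region: 'us-east' } // assumed field name
});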