Trace Class
The Trace class represents a high-level operation or workflow in your LLM application. A trace captures the entire lifecycle of a request, batch job, or user interaction, and can contain multiple child spans for granular tracking.
Overview
A trace is the top-level unit of observability, representing:
- A single user request to your API
- A background job or workflow
- A conversation turn in a chatbot
- A complete RAG pipeline execution
- Any end-to-end operation you want to track
Each trace carries:
- Metadata: name, status, timestamps, tags, attributes
- Context: session ID, reference ID
- Children: one or more spans representing sub-operations
Creation
Create a trace using the Monitor.logTrace() method:
const trace = monitor.logTrace({
  name: 'User Login',
  sessionId: 'session-abc-123',
  tags: ['auth', 'production'],
  attributes: { userId: 'user-456' }
});
Properties
trace
trace: CreateLogTraceRequest

The underlying request payload that is sent to the API:

{
  projectId: string;
  trace: {
    name: string;
    status: TraceStatus;
    sessionId?: string;
    referenceId: string;
    tags?: string[];
    attributes?: Record<string, string | number | boolean>;
    startedAt: number; // Unix timestamp in milliseconds
    endedAt?: number; // Set when trace.end() is called
  }
}
traceId
traceId: string | undefined

The server-assigned trace ID. It remains undefined until the trace has been ended and flushed:

const trace = monitor.logTrace({ name: 'Operation' });
console.log(trace.traceId); // undefined (not flushed yet)

trace.end();
await monitor.flush();
console.log(trace.traceId); // "trace_abc123xyz" (assigned by server)
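The flush-then-read lifecycle above can be wrapped in a small helper. This is a sketch, not an SDK API; getTraceId and its minimal parameter types are illustrative, and it assumes flush() resolves only after the server has assigned IDs:

```typescript
// Illustrative helper (not part of the SDK): flush pending traces,
// then read the server-assigned id off the trace object.
async function getTraceId(
  monitor: { flush(): Promise<void> },
  trace: { traceId?: string }
): Promise<string | undefined> {
  await monitor.flush();
  return trace.traceId;
}
```

Remember that the trace must have been ended before flushing, or no ID will be assigned.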
Methods
logSpan()
Create a child span within this trace.
logSpan(options: LogSpanOptions): Span
Parameters
- name: Human-readable name for the span (e.g., “LLM Call”, “Database Query”).
- status: Initial status: 'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown'.
- referenceId: Custom reference ID. Auto-generated UUID if not provided.
- promptId: ID of the prompt used in this span (for LLM calls).
- deploymentId: ID of the deployment used in this span.
- runEvaluation: Whether to run evaluators on this span after completion.
- tags: Tags for categorization (e.g., ['llm', 'openai']).
- attributes: Additional metadata (e.g., { model: 'gpt-4o', tokens: 1500 }).
- content: Span content (input/output). Defaults to monitor.defaultContent.
Returns
A new Span instance. See Span Class for details.
Examples
const trace = monitor.logTrace({ name: 'API Request' });

const span = trace.logSpan({
  name: 'Process Data'
});

// Do work...
await processData();

span.update({ status: 'success' });
span.end();
trace.end();
update()
Update trace metadata in place.
update(updates: TraceUpdates): this
Parameters
Partial updates to apply to the trace.
Returns
Returns this for method chaining.
Examples
const trace = monitor.logTrace({
  name: 'Process Order',
  status: 'pending'
});

try {
  await processOrder();
  trace.update({ status: 'success' });
} catch (error) {
  trace.update({ status: 'failure' });
}

trace.end();
end()
Mark the trace as complete and ready to be flushed.
end(): string | undefined
Behavior
- Sets the endedAt timestamp if not already set
- Marks the trace as ready in the monitor’s buffer
- Recursively ends all child spans (and their descendants)
- Returns the trace’s reference ID

Always call end() on your traces! Traces that are never ended will never be flushed to the API.

Returns
The trace’s reference ID for correlation with external systems.
Examples
const trace = monitor.logTrace({ name: 'Operation' });

// Do work...

trace.end(); // Required!
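Because end() returns the reference ID, it can be captured for correlation with your own logs. A minimal sketch, in which the endAndCorrelate helper and the externalLogger parameter are hypothetical, not part of the SDK:

```typescript
// Hypothetical helper: end a trace and forward its reference ID
// to an external logging sink for cross-system correlation.
function endAndCorrelate(
  trace: { end(): string | undefined },
  externalLogger: (message: string) => void
): string | undefined {
  const referenceId = trace.end();
  if (referenceId) {
    externalLogger(`trace reference: ${referenceId}`);
  }
  return referenceId;
}
```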
Complete Examples
Example 1: Simple API Request
import { Adaline } from '@adaline/client';
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

const adaline = new Adaline();
const gateway = new Gateway();
const openaiProvider = new OpenAI();
const monitor = adaline.initMonitor({ projectId: 'my-api', flushInterval: 5 });

async function handleChatRequest(userId: string, message: string) {
  const deployment = await adaline.getLatestDeployment({
    promptId: 'chat-prompt',
    deploymentEnvironmentId: 'environment_abc123'
  });

  // Create trace for this request
  const trace = monitor.logTrace({
    name: 'Chat Request',
    sessionId: userId,
    tags: ['chat', 'api'],
    attributes: {
      userId,
      messageLength: message.length
    }
  });

  try {
    // Log the LLM call
    const llmSpan = trace.logSpan({
      name: 'LLM Completion',
      promptId: deployment.promptId,
      deploymentId: deployment.id,
      tags: ['llm']
    });

    const model = openaiProvider.chatModel({
      modelName: deployment.prompt.config.model,
      apiKey: process.env.OPENAI_API_KEY!
    });

    const gatewayResponse = await gateway.completeChat({
      model,
      config: deployment.prompt.config.settings,
      messages: [
        ...deployment.prompt.messages,
        {
          role: 'user',
          content: [{ modality: 'text', value: message }]
        }
      ],
      tools: deployment.prompt.tools
    });

    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: deployment.prompt.config.providerName,
        model: deployment.prompt.config.model,
        input: JSON.stringify(gatewayResponse.provider.request),
        output: JSON.stringify(gatewayResponse.provider.response)
      }
    });
    llmSpan.end();

    trace.update({ status: 'success' });
    return gatewayResponse.response.messages[0].content[0].value;
  } catch (error) {
    trace.update({
      status: 'failure',
      attributes: {
        error: error instanceof Error ? error.message : String(error)
      }
    });
    throw error;
  } finally {
    trace.end();
  }
}
Example 2: Multi-Step Workflow
async function processUserOnboarding(userId: string) {
  const trace = monitor.logTrace({
    name: 'User Onboarding',
    sessionId: userId,
    status: 'pending',
    tags: ['onboarding', 'workflow'],
    attributes: { userId }
  });

  try {
    // Step 1: Create account
    const createSpan = trace.logSpan({
      name: 'Create Account',
      tags: ['database']
    });
    await createAccount(userId);
    createSpan.update({ status: 'success' });
    createSpan.end();

    // Step 2: Send welcome email
    const emailSpan = trace.logSpan({
      name: 'Send Welcome Email',
      tags: ['email']
    });
    await sendWelcomeEmail(userId);
    emailSpan.update({ status: 'success' });
    emailSpan.end();

    // Step 3: Generate personalized content
    const llmSpan = trace.logSpan({
      name: 'Generate Welcome Message',
      tags: ['llm', 'personalization']
    });
    const welcomeMsg = await generateWelcomeMessage(userId);
    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify({ userId }),
        output: JSON.stringify({ message: welcomeMsg })
      }
    });
    llmSpan.end();

    trace.update({ status: 'success' });
  } catch (error) {
    trace.update({
      status: 'failure',
      attributes: { error: String(error) }
    });
    throw error;
  } finally {
    trace.end();
  }
}
Example 3: Nested Operations (RAG Pipeline)
async function answerQuestion(sessionId: string, question: string) {
  const trace = monitor.logTrace({
    name: 'RAG Question Answering',
    sessionId,
    tags: ['rag', 'qa'],
    attributes: { questionLength: question.length }
  });

  try {
    // Parent span for entire RAG operation
    const ragSpan = trace.logSpan({
      name: 'RAG Pipeline',
      tags: ['pipeline']
    });

    // Step 1: Generate embedding (child of ragSpan)
    const embedSpan = ragSpan.logSpan({
      name: 'Generate Query Embedding',
      tags: ['embedding']
    });
    const embedding = await generateEmbedding(question);
    embedSpan.update({
      status: 'success',
      content: {
        type: 'Embeddings',
        input: JSON.stringify({ query: question }),
        output: JSON.stringify({ dimensions: embedding.length })
      }
    });
    embedSpan.end();

    // Step 2: Retrieve documents (child of ragSpan)
    const retrieveSpan = ragSpan.logSpan({
      name: 'Vector Search',
      tags: ['retrieval', 'vector-db']
    });
    const docs = await retrieveDocuments(embedding);
    retrieveSpan.update({
      status: 'success',
      content: {
        type: 'Retrieval',
        input: JSON.stringify({ embedding: 'vector', topK: 5 }),
        output: JSON.stringify({ documentIds: docs.map(d => d.id) })
      },
      attributes: { documentsFound: docs.length }
    });
    retrieveSpan.end();

    // Step 3: Generate answer (child of ragSpan)
    const llmSpan = ragSpan.logSpan({
      name: 'Generate Answer',
      tags: ['llm', 'answer-generation'],
      runEvaluation: true
    });
    const answer = await generateAnswer(question, docs);
    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify({ question, context: docs }),
        output: JSON.stringify({ answer })
      }
    });
    llmSpan.end();

    ragSpan.update({ status: 'success' });
    ragSpan.end();

    trace.update({ status: 'success' });
    return answer;
  } catch (error) {
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end();
  }
}
Example 4: Long-Running Background Job
async function processBatch(batchId: string) {
  const trace = monitor.logTrace({
    name: 'Batch Processing',
    referenceId: `batch-${batchId}`,
    status: 'pending',
    tags: ['batch', 'background'],
    attributes: {
      batchId,
      startTime: Date.now()
    }
  });

  const items = await getBatchItems(batchId);
  trace.update({
    attributes: { itemCount: items.length }
  });

  let successCount = 0;
  let failureCount = 0;

  for (const item of items) {
    const itemSpan = trace.logSpan({
      name: `Process Item ${item.id}`,
      tags: ['item'],
      attributes: { itemId: item.id }
    });

    try {
      await processItem(item);
      itemSpan.update({ status: 'success' });
      successCount++;
    } catch (error) {
      itemSpan.update({
        status: 'failure',
        attributes: { error: String(error) }
      });
      failureCount++;
    }

    itemSpan.end();
  }

  trace.update({
    status: failureCount === 0 ? 'success' : 'failure',
    attributes: {
      successCount,
      failureCount,
      // startedAt is nested: Trace instance -> CreateLogTraceRequest -> trace payload
      duration: Date.now() - trace.trace.trace.startedAt
    }
  });

  trace.end();
}
Type Definitions
type TraceStatus =
  | 'success'
  | 'failure'
  | 'aborted'
  | 'cancelled'
  | 'pending'
  | 'unknown';

interface LogTraceOptions {
  name: string;
  status?: TraceStatus;
  sessionId?: string;
  referenceId?: string;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
}

interface TraceUpdates {
  name?: string;
  status?: TraceStatus;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
}

interface CreateLogTraceRequest {
  projectId: string;
  trace: {
    name: string;
    status: TraceStatus;
    sessionId?: string;
    referenceId: string;
    tags?: string[];
    attributes?: Record<string, string | number | boolean>;
    startedAt: number;
    endedAt?: number;
  };
}
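When a status value comes from untrusted input (a query parameter, a config file), the TraceStatus union can be checked at runtime with a small type guard. This is a sketch; isTraceStatus and TRACE_STATUSES are illustrative helpers, not part of the SDK:

```typescript
type TraceStatus =
  | 'success' | 'failure' | 'aborted' | 'cancelled' | 'pending' | 'unknown';

const TRACE_STATUSES: readonly TraceStatus[] =
  ['success', 'failure', 'aborted', 'cancelled', 'pending', 'unknown'];

// Illustrative type guard: narrows an arbitrary string to TraceStatus.
function isTraceStatus(value: string): value is TraceStatus {
  return (TRACE_STATUSES as readonly string[]).includes(value);
}
```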
Best Practices
1. Always Use Try-Finally
// ✅ Good: trace.end() always called
const trace = monitor.logTrace({ name: 'Operation' });

try {
  await doWork();
  trace.update({ status: 'success' });
} finally {
  trace.end();
}
// ❌ Bad: trace.end() might not be called
const trace = monitor.logTrace({ name: 'Operation' });
await doWork();
trace.end(); // Skipped if doWork() throws!
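The try/finally pattern above can be factored into a reusable helper so it cannot be forgotten. This is a sketch, not an SDK API; TraceLike is a hypothetical structural interface covering only the methods the helper touches:

```typescript
// Hypothetical minimal interface; the SDK's Trace satisfies it structurally.
interface TraceLike {
  update(updates: { status: string }): unknown;
  end(): string | undefined;
}

// Run fn, mark the trace success/failure from the outcome,
// and guarantee end() is called even when fn throws.
async function withTrace<T>(trace: TraceLike, fn: () => Promise<T>): Promise<T> {
  try {
    const result = await fn();
    trace.update({ status: 'success' });
    return result;
  } catch (error) {
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end();
  }
}
```

Call sites then shrink to `await withTrace(monitor.logTrace({ name: 'Operation' }), () => doWork());`.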
2. Use Meaningful Names
// ✅ Good: Descriptive names
const trace = monitor.logTrace({ name: 'User Registration Flow' });
const trace = monitor.logTrace({ name: 'PDF Processing Pipeline' });
const trace = monitor.logTrace({ name: 'RAG Question Answering' });
// ❌ Bad: Generic names
const trace = monitor.logTrace({ name: 'Request' });
const trace = monitor.logTrace({ name: 'Function' });
const trace = monitor.logTrace({ name: 'Process' });
3. Add Context with Attributes
// ✅ Good: Rich context
const trace = monitor.logTrace({
  name: 'API Request',
  sessionId: userId,
  tags: ['api', 'production', 'premium-tier'],
  attributes: {
    userId,
    endpoint: '/api/chat',
    method: 'POST',
    clientVersion: '2.1.0',
    region: 'us-east-1'
  }
});
4. Update Status Based on Outcome
// ✅ Good: Accurate status tracking
const trace = monitor.logTrace({ name: 'Operation', status: 'pending' });

try {
  await doWork();
  trace.update({ status: 'success' });
} catch (error) {
  trace.update({ status: 'failure' });
  throw error;
} finally {
  trace.end();
}
5. Use Sessions for Related Traces
// ✅ Good: Group related traces
const sessionId = `user-${userId}-${Date.now()}`;

// First trace
const trace1 = monitor.logTrace({
  name: 'Login',
  sessionId
});

// Later traces in same session
const trace2 = monitor.logTrace({
  name: 'Chat Message 1',
  sessionId
});

const trace3 = monitor.logTrace({
  name: 'Chat Message 2',
  sessionId
});
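One way to avoid threading sessionId through every call is a small factory that bakes it in. This is a sketch; sessionScope is illustrative, not an SDK API, and it assumes logTrace accepts a plain options object that can be spread:

```typescript
// Illustrative factory (not an SDK API): returns a logTrace wrapper
// that injects a fixed sessionId into every trace it creates.
function sessionScope<T>(
  monitor: { logTrace(options: Record<string, unknown>): T },
  sessionId: string
) {
  return (options: { name: string } & Record<string, unknown>): T =>
    monitor.logTrace({ ...options, sessionId });
}
```

Usage might look like `const logSessionTrace = sessionScope(monitor, sessionId); const trace1 = logSessionTrace({ name: 'Login' });`.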
Common Patterns
Pattern 1: Request Handler
async function handleRequest(req: Request) {
  const trace = monitor.logTrace({
    name: `${req.method} ${req.url}`,
    sessionId: req.headers.get('session-id') || undefined,
    tags: ['api', 'http'],
    attributes: {
      method: req.method,
      path: req.url,
      userAgent: req.headers.get('user-agent') || 'unknown'
    }
  });

  try {
    const result = await processRequest(req);
    trace.update({ status: 'success' });
    return result;
  } catch (error) {
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end();
  }
}
Pattern 2: Background Job
async function runJob(jobId: string) {
  const trace = monitor.logTrace({
    name: 'Background Job',
    referenceId: jobId,
    status: 'pending',
    tags: ['job', 'async']
  });

  try {
    // Job logic with spans...
    trace.update({ status: 'success' });
  } catch (error) {
    trace.update({ status: 'failure' });
  } finally {
    trace.end();
  }
}