# Trace Class

The `Trace` class represents a high-level operation or workflow in your LLM application. A trace captures the entire lifecycle of a request, batch job, or user interaction, and can contain multiple child spans for granular tracking.
## Overview

A trace is the top-level unit of observability. It can represent:

- A single user request to your API
- A background job or workflow
- A conversation turn in a chatbot
- A complete RAG pipeline execution
- Any end-to-end operation you want to track

A trace contains:

- **Metadata**: name, status, timestamps, tags, attributes
- **Context**: session ID, reference ID
- **Children**: one or more spans representing sub-operations
## Creation

Create a trace with the `Monitor.logTrace()` method:

```typescript
const trace = monitor.logTrace({
  name: 'User Login',
  sessionId: 'session-abc-123',
  tags: ['auth', 'production'],
  attributes: { userId: 'user-456' }
});
```
## Properties

### trace

```typescript
trace: CreateLogTraceRequest
```

The underlying trace request object that will be sent to the API. It contains all trace metadata.

Structure:

```typescript
{
  projectId: string;
  trace: {
    name: string;
    status: TraceStatus;
    sessionId?: string;
    referenceId: string;
    tags?: string[];
    attributes?: Record<string, string | number | boolean>;
    startedAt: number; // Unix timestamp in milliseconds
    endedAt?: number;  // Set when trace.end() is called
  }
}
```
### traceId

```typescript
traceId: string | undefined
```

The server-assigned trace ID, set after the trace is successfully flushed to the API.

```typescript
const trace = monitor.logTrace({ name: 'Operation' });
console.log(trace.traceId); // undefined (not flushed yet)

trace.end();
await monitor.flush();
console.log(trace.traceId); // "trace_abc123xyz" (assigned by server)
```
## Methods

### logSpan()

Create a child span within this trace.

```typescript
logSpan(options: LogSpanOptions): Span
```

#### Parameters

- `name` (`string`, required): Human-readable name for the span (e.g., "LLM Call", "Database Query").
- `status` (`SpanStatus`, default `"unknown"`): Initial status: `'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown'`.
- `referenceId` (`string`): Custom reference ID. An auto-generated UUID is used if not provided.
- `promptId` (`string`): ID of the prompt used in this span (for LLM calls).
- `deploymentId` (`string`): ID of the deployment used in this span.
- `runEvaluation` (`boolean`): Whether to run evaluators on this span after completion.
- `tags` (`string[]`): Tags for categorization (e.g., `['llm', 'openai']`).
- `attributes` (`Record<string, string | number | boolean>`): Additional metadata (e.g., `{ model: 'gpt-4o', tokens: 1500 }`).
- `content`: Span content (input/output). Defaults to `monitor.defaultContent`.

#### Returns

The newly created `Span` instance.
#### Examples

```typescript
const trace = monitor.logTrace({ name: 'API Request' });

const span = trace.logSpan({
  name: 'Process Data'
});

// Do work...
await processData();

span.update({ status: 'success' });
span.end();

trace.end();
```
### update()

Update trace metadata in place.

```typescript
update(updates: TraceUpdates): this
```

#### Parameters

Partial updates to apply to the trace:

- `name` (`string`): Update the trace name.
- `status` (`TraceStatus`): Update the status: `'success' | 'failure' | 'aborted' | 'cancelled' | 'pending' | 'unknown'`.
- `tags` (`string[]`): Update tags (replaces existing tags).
- `attributes` (`Record<string, string | number | boolean>`): Update attributes (merges with existing attributes).

#### Returns

Returns `this` for method chaining.

#### Examples
```typescript
const trace = monitor.logTrace({
  name: 'Process Order',
  status: 'pending'
});

try {
  await processOrder();
  trace.update({ status: 'success' });
} catch (error) {
  trace.update({ status: 'failure' });
}

trace.end();
```
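Because `update()` returns `this` and attribute updates merge while tag updates replace, the semantics can be sketched with a small standalone stand-in (`TraceStub` below is illustrative only, not the SDK class):

```typescript
// Illustrative stand-in for the documented update() semantics:
// returns `this` (so calls chain), tags are replaced, attributes are merged.
type Status = 'success' | 'failure' | 'aborted' | 'cancelled' | 'pending' | 'unknown';
type Attrs = Record<string, string | number | boolean>;

class TraceStub {
  status: Status = 'unknown';
  tags: string[] = [];
  attributes: Attrs = {};

  update(u: { status?: Status; tags?: string[]; attributes?: Attrs }): this {
    if (u.status !== undefined) this.status = u.status;
    if (u.tags !== undefined) this.tags = u.tags; // replaces existing tags
    if (u.attributes !== undefined) Object.assign(this.attributes, u.attributes); // merges
    return this;
  }
}

const t = new TraceStub()
  .update({ status: 'pending', tags: ['v1'] })
  .update({ attributes: { step: 1 } })
  .update({ attributes: { retried: true }, tags: ['v2'] });
// t.status is 'pending'; t.tags is ['v2'] (replaced);
// t.attributes is { step: 1, retried: true } (merged)
```

With the real SDK, the same chaining applies to a `Trace` returned by `monitor.logTrace()`.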
### end()

Mark the trace as complete and ready to be flushed.

```typescript
end(): string | undefined
```

#### Behavior

- Sets the `endedAt` timestamp if not already set
- Marks the trace as ready in the monitor's buffer
- Recursively ends all child spans (and their descendants)
- Returns the trace's reference ID

Always call `end()` on your traces! Traces that are never ended are never flushed to the API.

#### Returns

The trace's reference ID, for correlation with external systems.

#### Examples

```typescript
const trace = monitor.logTrace({ name: 'Operation' });

// Do work...

trace.end(); // Required!
```
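The auto-end behavior can be modeled with a minimal standalone sketch (illustrative only, not the SDK internals): ending a node stamps `endedAt` once and recursively ends every descendant.

```typescript
// Minimal model of end(): stamps endedAt once, then ends all descendants.
class Node {
  endedAt?: number;
  readonly children: Node[] = [];

  logSpan(): Node {
    const child = new Node();
    this.children.push(child);
    return child;
  }

  end(): void {
    this.endedAt ??= Date.now(); // only set if not already set
    for (const child of this.children) child.end();
  }
}

const trace = new Node();
const span = trace.logSpan();
const nested = span.logSpan();

trace.end(); // span and nested are ended too
```

This is why, with the real SDK, calling `trace.end()` alone is enough to close out any spans you forgot to end explicitly.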
## Complete Examples

### Example 1: Simple API Request

```typescript
import { Adaline } from '@adaline/client';
import OpenAI from 'openai';

const adaline = new Adaline();
const openai = new OpenAI();
const monitor = adaline.initMonitor({ projectId: 'my-api' });

async function handleChatRequest(userId: string, message: string) {
  // Create a trace for this request
  const trace = monitor.logTrace({
    name: 'Chat Request',
    sessionId: userId,
    tags: ['chat', 'api'],
    attributes: {
      userId,
      messageLength: message.length
    }
  });

  try {
    // Log the LLM call
    const llmSpan = trace.logSpan({
      name: 'OpenAI Completion',
      tags: ['llm']
    });

    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: message }]
    });

    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify([{ role: 'user', content: message }]),
        output: JSON.stringify(response.choices[0].message)
      }
    });
    llmSpan.end();

    trace.update({ status: 'success' });
    return response.choices[0].message.content;
  } catch (error) {
    trace.update({
      status: 'failure',
      attributes: {
        error: error instanceof Error ? error.message : String(error)
      }
    });
    throw error;
  } finally {
    trace.end();
  }
}
```
### Example 2: Multi-Step Workflow

```typescript
async function processUserOnboarding(userId: string) {
  const trace = monitor.logTrace({
    name: 'User Onboarding',
    sessionId: userId,
    status: 'pending',
    tags: ['onboarding', 'workflow'],
    attributes: { userId }
  });

  try {
    // Step 1: Create account
    const createSpan = trace.logSpan({
      name: 'Create Account',
      tags: ['database']
    });
    await createAccount(userId);
    createSpan.update({ status: 'success' });
    createSpan.end();

    // Step 2: Send welcome email
    const emailSpan = trace.logSpan({
      name: 'Send Welcome Email',
      tags: ['email']
    });
    await sendWelcomeEmail(userId);
    emailSpan.update({ status: 'success' });
    emailSpan.end();

    // Step 3: Generate personalized content
    const llmSpan = trace.logSpan({
      name: 'Generate Welcome Message',
      tags: ['llm', 'personalization']
    });
    const welcomeMsg = await generateWelcomeMessage(userId);
    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify({ userId }),
        output: JSON.stringify({ message: welcomeMsg })
      }
    });
    llmSpan.end();

    trace.update({ status: 'success' });
  } catch (error) {
    trace.update({
      status: 'failure',
      attributes: { error: String(error) }
    });
    throw error;
  } finally {
    trace.end();
  }
}
```
### Example 3: Nested Operations (RAG Pipeline)

```typescript
async function answerQuestion(sessionId: string, question: string) {
  const trace = monitor.logTrace({
    name: 'RAG Question Answering',
    sessionId,
    tags: ['rag', 'qa'],
    attributes: { questionLength: question.length }
  });

  try {
    // Parent span for the entire RAG operation
    const ragSpan = trace.logSpan({
      name: 'RAG Pipeline',
      tags: ['pipeline']
    });

    // Step 1: Generate embedding (child of ragSpan)
    const embedSpan = ragSpan.logSpan({
      name: 'Generate Query Embedding',
      tags: ['embedding']
    });
    const embedding = await generateEmbedding(question);
    embedSpan.update({
      status: 'success',
      content: {
        type: 'Embeddings',
        input: JSON.stringify({ query: question }),
        output: JSON.stringify({ dimensions: embedding.length })
      }
    });
    embedSpan.end();

    // Step 2: Retrieve documents (child of ragSpan)
    const retrieveSpan = ragSpan.logSpan({
      name: 'Vector Search',
      tags: ['retrieval', 'vector-db']
    });
    const docs = await retrieveDocuments(embedding);
    retrieveSpan.update({
      status: 'success',
      content: {
        type: 'Retrieval',
        input: JSON.stringify({ embedding: 'vector', topK: 5 }),
        output: JSON.stringify({ documentIds: docs.map(d => d.id) })
      },
      attributes: { documentsFound: docs.length }
    });
    retrieveSpan.end();

    // Step 3: Generate answer (child of ragSpan)
    const llmSpan = ragSpan.logSpan({
      name: 'Generate Answer',
      tags: ['llm', 'answer-generation'],
      runEvaluation: true
    });
    const answer = await generateAnswer(question, docs);
    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify({ question, context: docs }),
        output: JSON.stringify({ answer })
      }
    });
    llmSpan.end();

    ragSpan.update({ status: 'success' });
    ragSpan.end();

    trace.update({ status: 'success' });
    return answer;
  } catch (error) {
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end();
  }
}
```
### Example 4: Long-Running Background Job

```typescript
async function processBatch(batchId: string) {
  const trace = monitor.logTrace({
    name: 'Batch Processing',
    referenceId: `batch-${batchId}`,
    status: 'pending',
    tags: ['batch', 'background'],
    attributes: {
      batchId,
      startTime: Date.now()
    }
  });

  const items = await getBatchItems(batchId);
  trace.update({
    attributes: { itemCount: items.length }
  });

  let successCount = 0;
  let failureCount = 0;

  for (const item of items) {
    const itemSpan = trace.logSpan({
      name: `Process Item ${item.id}`,
      tags: ['item'],
      attributes: { itemId: item.id }
    });

    try {
      await processItem(item);
      itemSpan.update({ status: 'success' });
      successCount++;
    } catch (error) {
      itemSpan.update({
        status: 'failure',
        attributes: { error: String(error) }
      });
      failureCount++;
    }

    itemSpan.end();
  }

  trace.update({
    status: failureCount === 0 ? 'success' : 'failure',
    attributes: {
      successCount,
      failureCount,
      duration: Date.now() - trace.trace.trace.startedAt
    }
  });
  trace.end();
}
```
## Type Definitions

```typescript
type TraceStatus =
  | 'success'
  | 'failure'
  | 'aborted'
  | 'cancelled'
  | 'pending'
  | 'unknown';

interface LogTraceOptions {
  name: string;
  status?: TraceStatus;
  sessionId?: string;
  referenceId?: string;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
}

interface TraceUpdates {
  name?: string;
  status?: TraceStatus;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
}

interface CreateLogTraceRequest {
  projectId: string;
  trace: {
    name: string;
    status: TraceStatus;
    sessionId?: string;
    referenceId: string;
    tags?: string[];
    attributes?: Record<string, string | number | boolean>;
    startedAt: number;
    endedAt?: number;
  };
}
```
## Best Practices

### 1. Always Use Try-Finally

```typescript
// ✅ Good: trace.end() is always called
const trace = monitor.logTrace({ name: 'Operation' });
try {
  await doWork();
  trace.update({ status: 'success' });
} finally {
  trace.end();
}

// ❌ Bad: trace.end() might not be called
const trace = monitor.logTrace({ name: 'Operation' });
await doWork();
trace.end(); // Skipped if doWork() throws!
```
### 2. Use Meaningful Names

```typescript
// ✅ Good: Descriptive names
const trace = monitor.logTrace({ name: 'User Registration Flow' });
const trace = monitor.logTrace({ name: 'PDF Processing Pipeline' });
const trace = monitor.logTrace({ name: 'RAG Question Answering' });

// ❌ Bad: Generic names
const trace = monitor.logTrace({ name: 'Request' });
const trace = monitor.logTrace({ name: 'Function' });
const trace = monitor.logTrace({ name: 'Process' });
```
### 3. Add Context with Attributes

```typescript
// ✅ Good: Rich context
const trace = monitor.logTrace({
  name: 'API Request',
  sessionId: userId,
  tags: ['api', 'production', 'premium-tier'],
  attributes: {
    userId,
    endpoint: '/api/chat',
    method: 'POST',
    clientVersion: '2.1.0',
    region: 'us-east-1'
  }
});
```
### 4. Update Status Based on Outcome

```typescript
// ✅ Good: Accurate status tracking
const trace = monitor.logTrace({ name: 'Operation', status: 'pending' });
try {
  await doWork();
  trace.update({ status: 'success' });
} catch (error) {
  trace.update({ status: 'failure' });
  throw error;
} finally {
  trace.end();
}
```
### 5. Group Related Traces with a Session ID

```typescript
// ✅ Good: Group related traces under one session
const sessionId = `user-${userId}-${Date.now()}`;

// First trace
const trace1 = monitor.logTrace({
  name: 'Login',
  sessionId
});

// Later traces in the same session
const trace2 = monitor.logTrace({
  name: 'Chat Message 1',
  sessionId
});
const trace3 = monitor.logTrace({
  name: 'Chat Message 2',
  sessionId
});
```
## Common Patterns

### Pattern 1: Request Handler

```typescript
async function handleRequest(req: Request) {
  const trace = monitor.logTrace({
    name: `${req.method} ${req.url}`,
    sessionId: req.headers.get('session-id') || undefined,
    tags: ['api', 'http'],
    attributes: {
      method: req.method,
      path: req.url,
      userAgent: req.headers.get('user-agent') || 'unknown'
    }
  });

  try {
    const result = await processRequest(req);
    trace.update({ status: 'success' });
    return result;
  } catch (error) {
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end();
  }
}
```
### Pattern 2: Background Job

```typescript
async function runJob(jobId: string) {
  const trace = monitor.logTrace({
    name: 'Background Job',
    referenceId: jobId,
    status: 'pending',
    tags: ['job', 'async']
  });

  try {
    // Job logic with spans...
    trace.update({ status: 'success' });
  } catch (error) {
    trace.update({ status: 'failure' });
  } finally {
    trace.end();
  }
}
```