Span Class
The Span class represents a specific operation within a trace. Spans are the building blocks of observability, tracking individual steps like LLM calls, tool executions, database queries, or any custom operation in your LLM pipeline.
Overview
A span captures:
Operation details: name, timing, status
Content: input/output data for different span types (Model, Tool, Retrieval, etc.)
Metadata: tags and attributes for filtering and analysis
Hierarchy: parent-child relationships for nested operations
Evaluation: optional evaluator execution
Common span types:
Model - LLM inference calls
ModelStream - Streaming LLM responses
Tool - Function/API executions
Retrieval - RAG and vector searches
Embeddings - Embedding generation
Function - Custom application logic
Guardrail - Safety/compliance checks
Other - Any custom operation
Creation
Create spans from a trace or parent span:
// From trace
const span = trace.logSpan({
  name: 'LLM Call',
  tags: ['llm']
});

// From parent span (nested)
const childSpan = parentSpan.logSpan({
  name: 'Tool Execution',
  tags: ['tool']
});
Properties
span
span: CreateLogSpanRequest
The underlying span request object containing all span data.
Structure:
{
  projectId: string;
  span: {
    name: string;
    status: SpanStatus;
    referenceId: string;
    traceId?: string;
    traceReferenceId: string;
    parentReferenceId?: string;
    sessionId?: string;
    promptId?: string;
    deploymentId?: string;
    runEvaluation?: boolean;
    tags?: string[];
    attributes?: Record<string, string | number | boolean>;
    content: LogSpanContent;
    startedAt: number;
    endedAt: number;
  };
}
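You can read data back off this property. A minimal sketch; it assumes the property is readable as documented above. Note the nesting: span.span is the CreateLogSpanRequest, and the span payload itself sits under its span field.
const llmSpan = trace.logSpan({ name: 'LLM Call', tags: ['llm'] });

console.log(llmSpan.span.projectId);        // project this span belongs to
console.log(llmSpan.span.span.referenceId); // ID used to link child spans
console.log(llmSpan.span.span.tags);        // ['llm']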
Methods
logSpan()
Create a nested child span under this span.
logSpan(options: LogSpanOptions): Span
This allows you to create hierarchical span relationships like:
Trace
└─ Parent Span
├─ Child Span 1
└─ Child Span 2
└─ Grandchild Span
Parameters
options: LogSpanOptions - the span's name plus optional metadata. See Type Definitions below.
Returns
A new child Span instance linked to this parent span.
Example
const parentSpan = trace.logSpan({ name: 'RAG Pipeline' });

// Create child spans
const embeddingSpan = parentSpan.logSpan({
  name: 'Generate Embedding',
  tags: ['embedding']
});
embeddingSpan.end();

const retrievalSpan = parentSpan.logSpan({
  name: 'Vector Search',
  tags: ['retrieval']
});
retrievalSpan.end();

parentSpan.end();
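A grandchild level, as in the tree above, is just one more nested logSpan() call. A minimal sketch:
const workflowSpan = trace.logSpan({ name: 'Workflow' });
const stageSpan = workflowSpan.logSpan({ name: 'Stage' });

// Grandchild: one level deeper, mirroring the diagram above
const stepSpan = stageSpan.logSpan({ name: 'Sub-step' });

stepSpan.end();
stageSpan.end();
workflowSpan.end();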
update()
Update span metadata and content.
update(updates: SpanUpdates): this
Parameters
status (SpanStatus) - Update status: 'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown'
content (LogSpanContent) - Update the span content (input/output data).
runEvaluation (boolean) - Update whether to run evaluators.
tags (string[]) - Update tags (replaces existing).
attributes (Record<string, string | number | boolean>) - Update attributes (merges with existing).
Returns
Returns this for method chaining.
Examples
Update status on success or failure:
const span = trace.logSpan({ name: 'API Call' });

try {
  await callAPI();
  span.update({ status: 'success' });
} catch (error) {
  span.update({ status: 'failure' });
}

span.end();
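Because update() returns this, updates can also be chained. A minimal sketch; note that tags replace the existing list while attributes merge, per the parameter table above:
const span = trace.logSpan({ name: 'Summarize', tags: ['draft'] });

span
  .update({ status: 'success' })
  .update({ tags: ['llm', 'summary'] })      // replaces ['draft']
  .update({ attributes: { cached: true } }); // merges with existing attributes

span.end();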
end()
Mark the span as complete and ready to be flushed.
end(): string | undefined
Behavior
Sets endedAt timestamp
Marks the span as ready in the monitor’s buffer
Recursively ends all child spans
Returns the span’s reference ID
Always call end() on your spans! Spans that are never ended will never be flushed to the API.
Returns
The span’s reference ID for correlation.
Example
const span = trace.logSpan({ name: 'Operation' });

try {
  await doWork();
  span.update({ status: 'success' });
} finally {
  span.end(); // Always called
}
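Because end() ends children recursively and returns the span's reference ID, a parent can close out a whole subtree in one call. A minimal sketch:
const parent = trace.logSpan({ name: 'Batch' });
const child = parent.logSpan({ name: 'Item 1' });

// Ending the parent recursively ends `child` as well
const refId = parent.end();
console.log(refId); // the parent's reference ID, e.g. for correlating with your own logs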
Span Content Types
Different content types for different operations. See LogSpanContent Type for complete documentation.
Model (LLM Inference)
For standard LLM completions.
span.update({
  content: {
    type: 'Model',
    provider: 'openai',        // Provider name
    model: 'gpt-4o',           // Model identifier
    variables: {               // Optional prompt variables
      userName: 'John',
      context: 'support'
    },
    cost: 0.002,               // Optional cost in USD
    input: JSON.stringify([    // Input messages
      { role: 'user', content: 'Hello' }
    ]),
    output: JSON.stringify({   // Output message
      role: 'assistant',
      content: 'Hi there!'
    }),
    expected: JSON.stringify({ // Optional expected output for evaluation
      role: 'assistant',
      content: 'Hello! How can I help?'
    })
  }
});
ModelStream (Streaming LLM)
For streaming LLM responses.
let chunks = '';
for await (const chunk of stream) {
  chunks += chunk;
}

span.update({
  content: {
    type: 'ModelStream',
    provider: 'anthropic',
    model: 'claude-3-opus',
    input: JSON.stringify(messages),
    output: chunks,                   // Raw concatenated stream chunks
    aggregateOutput: JSON.stringify({ // Final aggregated response
      role: 'assistant',
      content: chunks                 // Full accumulated text
    }),
    cost: 0.005
  }
});
Tool (Function/API Execution)
For tool executions and API calls.
span.update({
  content: {
    type: 'Tool',
    input: JSON.stringify({
      function: 'getWeather',
      city: 'San Francisco'
    }),
    output: JSON.stringify({
      temperature: 72,
      conditions: 'sunny',
      humidity: 65
    })
  }
});
Retrieval (RAG/Vector Search)
For document retrieval and vector searches.
span.update({
  content: {
    type: 'Retrieval',
    input: JSON.stringify({
      query: 'What is machine learning?',
      topK: 5,
      filters: { category: 'AI' }
    }),
    output: JSON.stringify({
      documents: [
        { id: 'doc1', score: 0.95, text: '...' },
        { id: 'doc2', score: 0.89, text: '...' }
      ]
    })
  }
});
Embeddings
For embedding generation.
span.update({
  content: {
    type: 'Embeddings',
    input: JSON.stringify({
      texts: ['text1', 'text2', 'text3'],
      model: 'text-embedding-3-large'
    }),
    output: JSON.stringify({
      embeddings: [[0.1, 0.2, ...], [0.3, 0.4, ...]], // truncated for brevity
      dimensions: 3072
    })
  }
});
Function (Custom Logic)
For custom application logic.
span.update({
  content: {
    type: 'Function',
    input: JSON.stringify({
      operation: 'processData',
      params: { id: 123 }
    }),
    output: JSON.stringify({
      result: 'success',
      itemsProcessed: 42
    })
  }
});
Guardrail (Safety Check)
For safety and compliance checks.
span.update({
  content: {
    type: 'Guardrail',
    input: JSON.stringify({
      text: 'User input to check...',
      checks: ['toxicity', 'pii', 'harm']
    }),
    output: JSON.stringify({
      safe: true,
      scores: {
        toxicity: 0.05,
        pii: 0.01,
        harm: 0.02
      }
    })
  }
});
Other (Custom)
For any other operation type.
span.update({
  content: {
    type: 'Other',
    input: JSON.stringify({ custom: 'input' }),
    output: JSON.stringify({ custom: 'output' })
  }
});
Complete Examples
Example 1: LLM Call with Deployment
import { Adaline } from '@adaline/client';
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

const adaline = new Adaline();
const gateway = new Gateway();
const openaiProvider = new OpenAI();
const monitor = adaline.initMonitor({ projectId: 'my-project' });

async function generateResponse(userId: string, message: string) {
  // Get deployment
  const deployment = await adaline.getLatestDeployment({
    promptId: 'chat-prompt',
    deploymentEnvironmentId: 'environment_abc123'
  });

  const trace = monitor.logTrace({
    name: 'Chat Completion',
    sessionId: userId
  });

  const span = trace.logSpan({
    name: 'LLM Completion',
    promptId: deployment.promptId,
    deploymentId: deployment.id,
    runEvaluation: true,
    tags: ['llm', deployment.prompt.config.providerName]
  });

  try {
    // Create model from deployment
    const model = openaiProvider.chatModel({
      modelName: deployment.prompt.config.model,
      apiKey: process.env.OPENAI_API_KEY!
    });

    // Call LLM using Adaline Gateway
    const gatewayResponse = await gateway.completeChat({
      model,
      config: deployment.prompt.config.settings,
      messages: [
        ...deployment.prompt.messages,
        {
          role: 'user',
          content: [{ modality: 'text', value: message }]
        }
      ],
      tools: deployment.prompt.tools
    });

    const reply = gatewayResponse.response.messages[0].content[0].value;

    span.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: deployment.prompt.config.providerName,
        model: deployment.prompt.config.model,
        input: JSON.stringify(gatewayResponse.provider.request),
        output: JSON.stringify(gatewayResponse.provider.response)
      }
    });

    return reply;
  } catch (error) {
    span.update({
      status: 'failure',
      attributes: {
        error: error instanceof Error ? error.message : String(error)
      }
    });
    throw error;
  } finally {
    span.end();
    trace.end();
  }
}
Example 2: Nested RAG Pipeline
import OpenAI from 'openai';
import { ChromaClient } from 'chromadb';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const chroma = new ChromaClient();
// `monitor` created via adaline.initMonitor(...) as in Example 1

async function ragPipeline(question: string) {
  const trace = monitor.logTrace({
    name: 'RAG Pipeline',
    tags: ['rag']
  });

  const pipelineSpan = trace.logSpan({
    name: 'RAG Workflow',
    tags: ['pipeline']
  });

  try {
    // Step 1: Embedding
    const embedSpan = pipelineSpan.logSpan({
      name: 'Generate Embedding',
      tags: ['embedding']
    });

    const embedResponse = await openai.embeddings.create({
      model: 'text-embedding-3-large',
      input: question
    });
    const embedding = embedResponse.data[0].embedding;

    embedSpan.update({
      status: 'success',
      content: {
        type: 'Embeddings',
        input: JSON.stringify({ text: question }),
        output: JSON.stringify({
          dimensions: embedding.length,
          model: 'text-embedding-3-large'
        })
      }
    });
    embedSpan.end();

    // Step 2: Retrieval
    const retrievalSpan = pipelineSpan.logSpan({
      name: 'Vector Search',
      tags: ['retrieval', 'chromadb']
    });

    const collection = await chroma.getCollection({ name: 'docs' });
    const results = await collection.query({
      queryEmbeddings: [embedding],
      nResults: 5
    });

    retrievalSpan.update({
      status: 'success',
      content: {
        type: 'Retrieval',
        input: JSON.stringify({
          query: question,
          collection: 'docs',
          topK: 5
        }),
        output: JSON.stringify({
          documentIds: results.ids[0],
          distances: results.distances?.[0],
          documents: results.documents[0]
        })
      },
      attributes: {
        documentsFound: results.ids[0].length
      }
    });
    retrievalSpan.end();

    // Step 3: LLM Generation
    const llmSpan = pipelineSpan.logSpan({
      name: 'Generate Answer',
      runEvaluation: true,
      tags: ['llm', 'answer']
    });

    const context = results.documents[0].join('\n\n');
    const messages = [
      {
        role: 'system' as const,
        content: 'Answer based on the provided context.'
      },
      {
        role: 'user' as const,
        content: `Context:\n${context}\n\nQuestion: ${question}`
      }
    ];

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages
    });
    const answer = completion.choices[0].message.content || '';

    llmSpan.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: 'openai',
        model: 'gpt-4o',
        input: JSON.stringify(messages),
        output: JSON.stringify({
          role: 'assistant',
          content: answer
        })
      },
      attributes: {
        contextLength: context.length,
        totalTokens: completion.usage?.total_tokens
      }
    });
    llmSpan.end();

    pipelineSpan.update({ status: 'success' });
    pipelineSpan.end();
    trace.update({ status: 'success' });

    return answer;
  } catch (error) {
    pipelineSpan.update({ status: 'failure' });
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    trace.end(); // also ends any still-open child spans
  }
}
Example 3: Guardrail Check
async function checkContent(text: string) {
  const trace = monitor.logTrace({ name: 'Content Check' });
  const guardrailSpan = trace.logSpan({
    name: 'Safety Guardrail',
    tags: ['guardrail', 'safety']
  });

  try {
    // Call safety API
    const response = await fetch('https://api.safety.com/v1/check', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text })
    });
    const result = await response.json();
    const isSafe = result.toxicity < 0.1 && result.harm < 0.1;

    guardrailSpan.update({
      status: 'success',
      content: {
        type: 'Guardrail',
        input: JSON.stringify({ text }),
        output: JSON.stringify({
          safe: isSafe,
          scores: {
            toxicity: result.toxicity,
            harm: result.harm,
            pii: result.pii
          }
        })
      },
      attributes: {
        textLength: text.length,
        safe: isSafe
      }
    });

    guardrailSpan.end();
    trace.end();
    return isSafe;
  } catch (error) {
    guardrailSpan.update({ status: 'failure' });
    guardrailSpan.end();
    trace.end();
    throw error;
  }
}
Type Definitions
type SpanStatus =
  | 'success'
  | 'failure'
  | 'aborted'
  | 'cancelled'
  | 'unknown';

interface LogSpanOptions {
  name: string;
  status?: SpanStatus;
  referenceId?: string;
  promptId?: string;
  deploymentId?: string | null;
  runEvaluation?: boolean;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
  content?: LogSpanContent;
}

interface SpanUpdates {
  name?: string;
  status?: SpanStatus;
  content?: LogSpanContent;
  runEvaluation?: boolean;
  tags?: string[];
  attributes?: Record<string, string | number | boolean>;
}

type LogSpanContent =
  | LogSpanModelContent
  | LogSpanModelStreamContent
  | LogSpanEmbeddingsContent
  | LogSpanFunctionContent
  | LogSpanToolContent
  | LogSpanGuardrailContent
  | LogSpanRetrievalContent
  | LogSpanOtherContent;
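To tie the types together: most LogSpanOptions fields are optional, and you can supply your own referenceId for correlation. A quick sketch with illustrative values:
const span = trace.logSpan({
  name: 'Safety Guardrail',
  referenceId: 'req-789-guardrail', // caller-supplied ID (illustrative)
  runEvaluation: true,
  tags: ['guardrail'],
  attributes: { requestId: 789 }
});
span.end();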
Best Practices
1. Use Appropriate Content Types
// ✅ Good: Correct content type for operation
const llmSpan = trace.logSpan({ name: 'LLM' });
llmSpan.update({
  content: {
    type: 'Model',
    provider: 'openai',
    model: 'gpt-4o',
    input: '...',
    output: '...'
  }
});

const toolSpan = trace.logSpan({ name: 'API' });
toolSpan.update({
  content: {
    type: 'Tool',
    input: '...',
    output: '...'
  }
});
2. Always End Spans
// ✅ Good: Always end
const span = trace.logSpan({ name: 'Op' });
try {
  await work();
} finally {
  span.end(); // Always called
}
3. Use Descriptive Tags and Attributes
// ✅ Good: Descriptive tags
const span = trace.logSpan({
  name: 'OpenAI Call',
  tags: [
    'llm',
    'openai',
    'gpt-4o',
    'production',
    'premium-tier'
  ],
  attributes: {
    userId: 'user-123',
    region: 'us-east-1',
    cached: false
  }
});