Events & Streaming

When you spawn an agent, you get a `Receiver<AgentMessage>` channel. The agent pushes events as it works: streaming text, tool calls, errors, and lifecycle signals.
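This section doesn't cover the spawn call itself. Purely as an illustration (the `Agent::spawn` name, its arguments, and the sender half are assumptions, not confirmed API), obtaining the two channel halves might look like:

```rust
// Hypothetical sketch: only Receiver<AgentMessage> is specified in this
// section. The constructor name, its arguments, and the Sender half are
// assumed for illustration.
let (agent_tx, mut agent_rx) = Agent::spawn(provider, tools).await?;
```

The later snippets reuse these `agent_tx` / `agent_rx` names.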

| Variant | Fields | Description |
|---|---|---|
| `Text` | `message_id`, `chunk`, `is_complete`, `model_name` | Streamed text output |
| `Thought` | `message_id`, `chunk`, `is_complete`, `model_name` | Extended thinking / reasoning |
| `ToolCall` | `request`, `model_name` | Agent is calling a tool |
| `ToolCallUpdate` | `tool_call_id`, `chunk`, `model_name` | Streaming tool call arguments |
| `ToolProgress` | `request`, `progress`, `total`, `message` | Tool execution progress |
| `ToolResult` | `result`, `result_meta`, `model_name` | Tool returned a result |
| `ToolError` | `error`, `model_name` | Tool call failed |
| `Error` | `message` | Agent-level error |
| `Cancelled` | `message` | Operation was cancelled |
| `ContextCompactionStarted` | `message_count` | Context compaction beginning |
| `ContextCompactionResult` | `summary`, `messages_removed` | Compaction completed |
| `ContextUsageUpdate` | see below | Token usage update |
| `AutoContinue` | `attempt`, `max_attempts` | Agent auto-continuing after tool calls |
| `ModelSwitched` | `previous`, `new` | Model changed (alloying) |
| `ContextCleared` | | Context was cleared |
| `Done` | | Agent finished processing |
`ContextUsageUpdate` carries the following fields:

| Field | Type | Description |
|---|---|---|
| `usage_ratio` | `Option<f64>` | Current usage ratio (0.0-1.0), if the context window is known |
| `context_limit` | `Option<u32>` | Maximum context limit, if known |
| `input_tokens` | `u32` | Input tokens on the most recent API call |
| `output_tokens` | `u32` | Output tokens on the most recent API call |
| `cache_read_tokens` | `Option<u32>` | Prompt tokens served from cache |
| `cache_creation_tokens` | `Option<u32>` | Prompt tokens written to cache |
| `reasoning_tokens` | `Option<u32>` | Reasoning tokens spent |
| `total_input_tokens` | `u64` | Cumulative input tokens since agent start |
| `total_output_tokens` | `u64` | Cumulative output tokens since agent start |
| `total_cache_read_tokens` | `u64` | Cumulative cache-read tokens |
| `total_cache_creation_tokens` | `u64` | Cumulative cache-creation tokens |
| `total_reasoning_tokens` | `u64` | Cumulative reasoning tokens |
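One way to consume these fields is a small logging helper called from the receive loop. A minimal sketch (the `log_usage` name is hypothetical; parameter names and types come from the table above):

```rust
// Hypothetical helper for a ContextUsageUpdate arm in the receive loop.
// Parameter names and types mirror the table above.
fn log_usage(
    usage_ratio: Option<f64>,
    input_tokens: u32,
    output_tokens: u32,
    total_input_tokens: u64,
    total_output_tokens: u64,
) {
    // The ratio is only present when the provider's context window is known.
    if let Some(ratio) = usage_ratio {
        eprintln!("context: {:.0}% full", ratio * 100.0);
    }
    eprintln!(
        "last call: {input_tokens} in / {output_tokens} out \
         (lifetime: {total_input_tokens} in / {total_output_tokens} out)"
    );
}
```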
A minimal receive loop that prints streamed output:

```rust
while let Some(msg) = agent_rx.recv().await {
    match msg {
        AgentMessage::Text { chunk, is_complete, .. } => {
            print!("{chunk}");
            if is_complete {
                println!();
            }
        }
        AgentMessage::Thought { chunk, .. } => {
            // Extended thinking output
            eprint!("{chunk}");
        }
        AgentMessage::ToolCall { request, .. } => {
            println!("Calling tool: {} ({})", request.name, request.id);
        }
        AgentMessage::ToolResult { result, .. } => {
            println!("Tool {} returned: {}", result.name, result.result);
        }
        AgentMessage::Error { message } => {
            eprintln!("Error: {message}");
        }
        AgentMessage::Done => break,
        _ => {}
    }
}
```
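The wildcard arm drops everything else. Depending on your frontend you may want to surface more variants; a sketch of extra arms for the same match, using field names from the tables above (the exact field types aren't shown there, so the `Debug` formatting is an assumption, and `log_usage` is the hypothetical helper from earlier):

```rust
// Additional arms for the match above. Field names come from the variant
// table; Debug formatting is used because the field types are assumptions.
AgentMessage::ToolProgress { progress, total, message, .. } => {
    eprintln!("tool progress: {progress:?}/{total:?} ({message:?})");
}
AgentMessage::ContextUsageUpdate {
    usage_ratio,
    input_tokens,
    output_tokens,
    total_input_tokens,
    total_output_tokens,
    ..
} => log_usage(
    usage_ratio,
    input_tokens,
    output_tokens,
    total_input_tokens,
    total_output_tokens,
),
AgentMessage::ModelSwitched { previous, new } => {
    eprintln!("model switched: {previous:?} -> {new:?}");
}
```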

The `UserMessage` variants you can send to a running agent:

| Variant | Description |
|---|---|
| `Text { content }` | Send a user message |
| `Cancel` | Cancel the current operation |
| `ClearContext` | Clear the conversation context |
| `SwitchModel(provider)` | Swap the LLM provider mid-conversation |
| `UpdateTools(tools)` | Replace the available tool set |
| `SetReasoningEffort(effort)` | Change reasoning effort level |
```rust
// Convenience constructor
let msg = UserMessage::text("Hello");

// Also works via From<&str>
let msg: UserMessage = "Hello".into();
```
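These are delivered over the agent's input channel. A brief sketch, assuming the `agent_tx` sender half from the hypothetical spawn call above:

```rust
// Send a user message (agent_tx is the assumed Sender<UserMessage> half).
agent_tx.send(UserMessage::text("Summarize the README")).await?;

// Interrupt whatever the agent is currently doing.
agent_tx.send(UserMessage::Cancel).await?;
```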

The event stream follows this pattern:

1. You send a `UserMessage::Text`
2. The agent emits `Text` / `Thought` chunks as the model streams
3. If the model calls tools: `ToolCall` → `ToolResult` (or `ToolError`)
4. The agent may `AutoContinue` after tool calls
5. Eventually it emits `Done`

The `Done` event signals that the agent is idle and ready for the next message. If the agent encounters an unrecoverable error, it emits `Error` followed by `Done`.
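Putting it together, one full turn might look like this (a sketch reusing the assumed `agent_tx` / `agent_rx` names):

```rust
// One turn: send a message, then drain events until Done before sending more.
agent_tx.send(UserMessage::text("List the workspace crates")).await?;

while let Some(msg) = agent_rx.recv().await {
    match msg {
        AgentMessage::Text { chunk, .. } => print!("{chunk}"),
        AgentMessage::Done => break, // idle again; safe to send the next message
        _ => {}
    }
}
```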