# Events & Streaming

When you spawn an agent, you get a `Receiver<AgentMessage>` channel. The agent pushes events as it works: streaming text, tool calls, errors, and lifecycle signals.
## AgentMessage variants

| Variant | Fields | Description |
|---|---|---|
| `Text` | `message_id`, `chunk`, `is_complete`, `model_name` | Streamed text output |
| `Thought` | `message_id`, `chunk`, `is_complete`, `model_name` | Extended thinking / reasoning |
| `ToolCall` | `request`, `model_name` | Agent is calling a tool |
| `ToolCallUpdate` | `tool_call_id`, `chunk`, `model_name` | Streaming tool call arguments |
| `ToolProgress` | `request`, `progress`, `total`, `message` | Tool execution progress |
| `ToolResult` | `result`, `result_meta`, `model_name` | Tool returned a result |
| `ToolError` | `error`, `model_name` | Tool call failed |
| `Error` | `message` | Agent-level error |
| `Cancelled` | `message` | Operation was cancelled |
| `ContextCompactionStarted` | `message_count` | Context compaction beginning |
| `ContextCompactionResult` | `summary`, `messages_removed` | Compaction completed |
| `ContextUsageUpdate` | see below | Token usage update |
| `AutoContinue` | `attempt`, `max_attempts` | Agent auto-continuing after tool calls |
| `ModelSwitched` | `previous`, `new` | Model changed (alloying) |
| `ContextCleared` | — | Context was cleared |
| `Done` | — | Agent finished processing |
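Since `Text` and `Thought` arrive as incremental chunks, a consumer typically buffers them until `is_complete` is set. A minimal sketch of that accumulation, using a local enum that mirrors only a subset of the documented fields (not the library's actual type):

```rust
// Hypothetical stand-in for the library's AgentMessage enum,
// reduced to the two fields this sketch needs.
#[derive(Debug)]
enum AgentMessage {
    Text { chunk: String, is_complete: bool },
    Done,
}

// Accumulate streamed chunks into complete messages.
fn assemble(events: Vec<AgentMessage>) -> Vec<String> {
    let mut messages = Vec::new();
    let mut buffer = String::new();
    for event in events {
        match event {
            AgentMessage::Text { chunk, is_complete } => {
                buffer.push_str(&chunk);
                if is_complete {
                    // Flush the finished message and reset the buffer.
                    messages.push(std::mem::take(&mut buffer));
                }
            }
            AgentMessage::Done => break,
        }
    }
    messages
}

fn main() {
    let events = vec![
        AgentMessage::Text { chunk: "Hel".into(), is_complete: false },
        AgentMessage::Text { chunk: "lo".into(), is_complete: true },
        AgentMessage::Done,
    ];
    assert_eq!(assemble(events), vec!["Hello".to_string()]);
}
```

In a real consumer the same logic runs inside the `recv()` loop shown below, keyed by `message_id` if multiple messages can interleave.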
## ContextUsageUpdate fields

| Field | Type | Description |
|---|---|---|
| `usage_ratio` | `Option<f64>` | Current usage ratio (0.0-1.0), if context window is known |
| `context_limit` | `Option<u32>` | Maximum context limit, if known |
| `input_tokens` | `u32` | Input tokens on the most recent API call |
| `output_tokens` | `u32` | Output tokens on the most recent API call |
| `cache_read_tokens` | `Option<u32>` | Prompt tokens served from cache |
| `cache_creation_tokens` | `Option<u32>` | Prompt tokens written to cache |
| `reasoning_tokens` | `Option<u32>` | Reasoning tokens spent |
| `total_input_tokens` | `u64` | Cumulative input tokens since agent start |
| `total_output_tokens` | `u64` | Cumulative output tokens since agent start |
| `total_cache_read_tokens` | `u64` | Cumulative cache-read tokens |
| `total_cache_creation_tokens` | `u64` | Cumulative cache-creation tokens |
| `total_reasoning_tokens` | `u64` | Cumulative reasoning tokens |
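Because `usage_ratio` is optional, UIs need a fallback path when the context window is unknown. A sketch of rendering these fields, using a local struct that mirrors a subset of the documented fields (not the library's actual definition):

```rust
// Hypothetical mirror of a few documented ContextUsageUpdate fields,
// for illustration only.
struct ContextUsageUpdate {
    usage_ratio: Option<f64>,
    total_input_tokens: u64,
    total_output_tokens: u64,
}

// Render a one-line usage summary, falling back gracefully
// when the context window (and thus the ratio) is unknown.
fn summarize(u: &ContextUsageUpdate) -> String {
    match u.usage_ratio {
        Some(r) => format!(
            "context {:.0}% full ({} in / {} out total)",
            r * 100.0,
            u.total_input_tokens,
            u.total_output_tokens
        ),
        None => format!(
            "{} in / {} out total (limit unknown)",
            u.total_input_tokens, u.total_output_tokens
        ),
    }
}

fn main() {
    let update = ContextUsageUpdate {
        usage_ratio: Some(0.42),
        total_input_tokens: 1200,
        total_output_tokens: 300,
    };
    assert_eq!(summarize(&update), "context 42% full (1200 in / 300 out total)");
}
```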
## Receiving events

```rust
while let Some(msg) = agent_rx.recv().await {
    match msg {
        AgentMessage::Text { chunk, is_complete, .. } => {
            print!("{chunk}");
            if is_complete {
                println!();
            }
        }
        AgentMessage::Thought { chunk, .. } => {
            // Extended thinking output
            eprint!("{chunk}");
        }
        AgentMessage::ToolCall { request, .. } => {
            println!("Calling tool: {} ({})", request.name, request.id);
        }
        AgentMessage::ToolResult { result, .. } => {
            println!("Tool {} returned: {}", result.name, result.result);
        }
        AgentMessage::Error { message } => {
            eprintln!("Error: {message}");
        }
        AgentMessage::Done => break,
        _ => {}
    }
}
```

## UserMessage variants

Messages you can send to a running agent:
| Variant | Description |
|---|---|
| `Text { content }` | Send a user message |
| `Cancel` | Cancel the current operation |
| `ClearContext` | Clear the conversation context |
| `SwitchModel(provider)` | Swap the LLM provider mid-conversation |
| `UpdateTools(tools)` | Replace the available tool set |
| `SetReasoningEffort(effort)` | Change reasoning effort level |
```rust
// Convenience constructor
let msg = UserMessage::text("Hello");

// Also works via From<&str>
let msg: UserMessage = "Hello".into();
```

## Lifecycle

The event stream follows this pattern:
- You send a `UserMessage::Text`
- Agent emits `Text`/`Thought` chunks as the model streams
- If the model calls tools: `ToolCall` → `ToolResult` (or `ToolError`)
- Agent may `AutoContinue` after tool calls
- Eventually emits `Done`
The `Done` event signals the agent is idle and ready for the next message. If the agent encounters an unrecoverable error, it emits `Error` followed by `Done`.
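Treating `Done` as the turn boundary, a caller can drain one turn's events into a single result. A sketch of that pattern, with a local enum standing in for the documented variants (the real loop would pull from the `Receiver` instead of a `Vec`):

```rust
// Hypothetical subset of AgentMessage, for illustration only.
#[derive(Debug)]
enum AgentMessage {
    Text { chunk: String },
    Error { message: String },
    Done,
}

// Drain events for one turn: Ok(text) on a clean finish,
// Err(message) if an Error arrived before Done.
fn run_turn(events: Vec<AgentMessage>) -> Result<String, String> {
    let mut text = String::new();
    let mut error = None;
    for event in events {
        match event {
            AgentMessage::Text { chunk } => text.push_str(&chunk),
            AgentMessage::Error { message } => error = Some(message),
            // Done marks the agent idle; stop consuming this turn.
            AgentMessage::Done => break,
        }
    }
    match error {
        Some(message) => Err(message),
        None => Ok(text),
    }
}

fn main() {
    let ok_turn = vec![
        AgentMessage::Text { chunk: "hi".into() },
        AgentMessage::Done,
    ];
    assert_eq!(run_turn(ok_turn), Ok("hi".to_string()));

    let failed_turn = vec![
        AgentMessage::Error { message: "boom".into() },
        AgentMessage::Done,
    ];
    assert_eq!(run_turn(failed_turn), Err("boom".to_string()));
}
```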