The Logs & Metrics page provides a detailed, real-time view of everything happening across your organization’s agents, tools, and LLM usage. Use it to monitor live runs, investigate failures, and understand resource consumption.
The top of the page displays a real-time feed of currently running agent executions. Each entry shows:
| Field | Description |
|---|---|
| Run ID | Unique identifier for the agent execution |
| Agent | The agent handling the run |
| Status | Current state: running, completed, failed, or timed out |
| Started | Timestamp when the run began |
| Duration | Elapsed time since the run started (updates live for active runs) |
| Tool calls | Number of tool invocations made so far in this run |
| Tokens used | Total LLM tokens consumed by the run |
Click any row to expand the full execution trace, including each tool call, its input/output, and the LLM reasoning steps.
The tool metrics section aggregates data across all tool invocations in your organization:
A breakdown of which tools are being called most frequently. Use this to understand which tools are critical to your workflows and which may be underutilized.
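A call-frequency breakdown like this is essentially a count over the invocation log. A minimal sketch, assuming each invocation is recorded by tool name (the tool names below are hypothetical):

```python
from collections import Counter

# Hypothetical invocation log: one entry per tool call, by tool name.
tool_calls = ["web_search", "calculator", "web_search", "db_query", "web_search"]

frequency = Counter(tool_calls)

# Most-called tools first; rarely-seen tools at the bottom may be underutilized.
for tool, count in frequency.most_common():
    print(f"{tool}: {count}")
```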
Track the reliability of your tool executions.
Understand how long tool executions take.
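Latency is usually summarized with percentiles rather than a mean, since a few slow calls can dominate the average. A sketch of the nearest-rank percentile over illustrative sample durations (the data and helper below are assumptions, not the platform's implementation):

```python
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest sample with at least pct% of values at or below it."""
    ordered = sorted(values)
    k = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[max(0, k)]

# Hypothetical tool-execution durations in milliseconds.
latencies_ms = [120, 85, 340, 95, 110, 2100, 130, 90, 105, 115]

p50 = percentile(latencies_ms, 50)   # typical case
p95 = percentile(latencies_ms, 95)   # tail latency, dominated by the outlier
```

Note how the single 2100 ms outlier barely moves the median but defines the p95, which is why both are worth watching.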
Aggregate statistics for all agent runs across your organization:
| Metric | Description |
|---|---|
| Total runs | Number of agent executions in the selected time range |
| Average duration | Mean time from run start to completion |
| Completion rate | Percentage of runs that finished without errors |
| Timeout rate | Percentage of runs that exceeded the maximum execution time |
| Average tool calls per run | Mean number of tool invocations per agent execution |
| Average tokens per run | Mean LLM token consumption per execution |
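Each row in the table above is a simple aggregate over run records. A minimal sketch, assuming run records with status, duration, tool-call, and token fields (the field names and sample data are illustrative):

```python
# Hypothetical run records for the selected time range.
runs = [
    {"status": "completed", "duration_s": 12.0, "tool_calls": 3, "tokens": 1800},
    {"status": "completed", "duration_s": 8.0,  "tool_calls": 1, "tokens": 900},
    {"status": "failed",    "duration_s": 30.0, "tool_calls": 5, "tokens": 4200},
    {"status": "timed_out", "duration_s": 60.0, "tool_calls": 7, "tokens": 6100},
]

total_runs = len(runs)
completion_rate = 100 * sum(r["status"] == "completed" for r in runs) / total_runs
timeout_rate = 100 * sum(r["status"] == "timed_out" for r in runs) / total_runs
avg_duration = sum(r["duration_s"] for r in runs) / total_runs
avg_tool_calls = sum(r["tool_calls"] for r in runs) / total_runs
avg_tokens = sum(r["tokens"] for r in runs) / total_runs
```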
Monitor your organization’s LLM consumption in detail:
Token consumption
Total input and output tokens consumed, broken down by time period. Track daily, weekly, and monthly consumption trends to forecast credit usage.
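The daily trend is a grouping of per-run token counts by date, with input and output tallied separately. A sketch over hypothetical records (the tuple layout is an assumption for illustration):

```python
from collections import defaultdict

# Hypothetical per-run token records: (ISO date, input tokens, output tokens).
records = [
    ("2024-05-01", 1200, 400),
    ("2024-05-01", 800, 300),
    ("2024-05-02", 2000, 700),
]

# Sum input and output tokens per day; weekly/monthly views group the same way.
daily: dict[str, dict[str, int]] = defaultdict(lambda: {"input": 0, "output": 0})
for date, tok_in, tok_out in records:
    daily[date]["input"] += tok_in
    daily[date]["output"] += tok_out
```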
Model usage breakdown
See which LLM models are being used and their relative share of total token consumption. Useful for understanding cost distribution across model tiers.
Cost per conversation
Average credit cost per conversation session, broken down by agent. Identify which agents are the most and least cost-efficient.
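Per-agent cost efficiency reduces to a grouped average over conversation sessions. A minimal sketch (the agent names and costs below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical (agent, credit cost) pairs, one per conversation session.
sessions = [("support-bot", 0.8), ("support-bot", 1.2), ("sales-bot", 3.0)]

totals: dict[str, float] = defaultdict(float)
counts: dict[str, int] = defaultdict(int)
for agent, cost in sessions:
    totals[agent] += cost
    counts[agent] += 1

# Average credit cost per conversation, by agent.
avg_cost = {agent: totals[agent] / counts[agent] for agent in totals}

# Agents ranked from most to least cost-efficient (cheapest first).
ranking = sorted(avg_cost, key=avg_cost.get)
```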
Peak usage periods
Hourly and daily heatmaps showing when LLM usage is highest. Use this data to anticipate capacity needs and optimize credit allocation.
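A heatmap like this is a count of events bucketed by weekday and hour. A sketch over hypothetical request timestamps (the timestamps are invented for illustration):

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamps of LLM requests.
timestamps = [
    datetime(2024, 5, 6, 9, 15),   # a Monday morning
    datetime(2024, 5, 6, 9, 45),
    datetime(2024, 5, 7, 14, 5),   # a Tuesday afternoon
]

# Bucket by (weekday, hour); the cell counts are the heatmap values.
heatmap = Counter((ts.strftime("%A"), ts.hour) for ts in timestamps)
peak_cell, peak_count = heatmap.most_common(1)[0]
```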
If your organization uses voice or video features, this section displays session-level statistics.
All metrics on this page support flexible filtering.
Filters apply across all sections on the page simultaneously, so you get a consistent view of the selected slice of data.
You can export metrics data for external analysis or reporting.
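Once exported, the data can be loaded into any analysis tool. As a sketch, assuming a CSV export with one row per run (the column names and values below are hypothetical, not the platform's actual export schema):

```python
import csv
import io

# Hypothetical CSV export; actual columns depend on your export settings.
exported = """run_id,agent,status,tokens_used
run_01,support-bot,completed,1800
run_02,sales-bot,failed,4200
"""

rows = list(csv.DictReader(io.StringIO(exported)))
total_tokens = sum(int(r["tokens_used"]) for r in rows)
```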