Logs & Metrics

Dashboard Analytics

The Logs & Metrics page provides a detailed, real-time view of everything happening across your organization’s agents, tools, and LLM usage. Use it to monitor live runs, investigate failures, and understand resource consumption.

The top of the page displays a real-time feed of currently running agent executions. Each entry shows:

  • Run ID — unique identifier for the agent execution
  • Agent — the agent handling the run
  • Status — current state: running, completed, failed, or timed out
  • Started — timestamp when the run began
  • Duration — elapsed time since the run started (updates live for active runs)
  • Tool calls — number of tool invocations made so far in this run
  • Tokens used — total LLM tokens consumed by the run

Click any row to expand the full execution trace, including each tool call, its input/output, and the LLM reasoning steps.
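The fields in the live feed map naturally onto a simple record type. As a minimal sketch, assuming field names that mirror the table above (not the platform's actual API schema), the live-updating Duration column can be derived from the Started timestamp:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical shape of one live-feed entry; field names mirror the
# table above, not a real export schema.
@dataclass
class RunEntry:
    run_id: str
    agent: str
    status: str        # "running", "completed", "failed", or "timed_out"
    started: datetime
    tool_calls: int
    tokens_used: int

    def duration_seconds(self, now: Optional[datetime] = None) -> float:
        """Elapsed time since the run began (keeps ticking for active runs)."""
        now = now or datetime.now(timezone.utc)
        return (now - self.started).total_seconds()

run = RunEntry(
    run_id="run_42",            # illustrative values only
    agent="support-bot",
    status="running",
    started=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    tool_calls=3,
    tokens_used=1810,
)
print(run.duration_seconds(datetime(2024, 1, 1, 12, 5, tzinfo=timezone.utc)))  # 300.0
```

Because `status` is `"running"`, the duration is recomputed against the current time on each refresh rather than stored.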

The tool metrics section aggregates data across all tool invocations in your organization, breaking down which tools are called most frequently. Use this to understand which tools are critical to your workflows and which may be underutilized.

Key data points:

  • Total calls — aggregate count over the selected time range
  • Calls per tool — ranked list of tools by invocation count
  • Calls per agent — which agents are driving the most tool usage
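All three data points are simple aggregations over a tool-call log. As a sketch, assuming a hypothetical log of `(agent, tool)` pairs (not an actual export format), `collections.Counter` produces the same rankings:

```python
from collections import Counter

# Hypothetical tool-call log: one (agent, tool) pair per invocation.
calls = [
    ("support-bot", "search_docs"),
    ("support-bot", "search_docs"),
    ("support-bot", "lookup_invoice"),
    ("billing-bot", "send_email"),
]

total_calls = len(calls)                                 # total over the time range
calls_per_tool = Counter(tool for _, tool in calls)      # ranked list of tools
calls_per_agent = Counter(agent for agent, _ in calls)   # which agents drive usage

print(total_calls)                     # 4
print(calls_per_tool.most_common(1))   # [('search_docs', 2)]
print(calls_per_agent.most_common(1))  # [('support-bot', 3)]
```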

Aggregate statistics for all agent runs across your organization:

  • Total runs — number of agent executions in the selected time range
  • Average duration — mean time from run start to completion
  • Completion rate — percentage of runs that finished without errors
  • Timeout rate — percentage of runs that exceeded the maximum execution time
  • Average tool calls per run — mean number of tool invocations per agent execution
  • Average tokens per run — mean LLM token consumption per execution
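Each of these metrics follows directly from per-run records. A minimal sketch, assuming a hypothetical record shape with `status`, `duration` (seconds), `tool_calls`, and `tokens` fields (illustrative only, not the platform's schema):

```python
# Hypothetical run records for one time range.
runs = [
    {"status": "completed", "duration": 30.0,  "tool_calls": 2, "tokens": 900},
    {"status": "completed", "duration": 50.0,  "tool_calls": 4, "tokens": 1500},
    {"status": "failed",    "duration": 10.0,  "tool_calls": 1, "tokens": 300},
    {"status": "timed_out", "duration": 120.0, "tool_calls": 6, "tokens": 2400},
]

total_runs = len(runs)
completed = [r for r in runs if r["status"] == "completed"]

# Average duration is measured start-to-completion, so only completed runs count.
avg_duration = sum(r["duration"] for r in completed) / len(completed)
completion_rate = len(completed) / total_runs * 100
timeout_rate = sum(r["status"] == "timed_out" for r in runs) / total_runs * 100
avg_tool_calls = sum(r["tool_calls"] for r in runs) / total_runs
avg_tokens = sum(r["tokens"] for r in runs) / total_runs

print(total_runs, avg_duration, completion_rate, timeout_rate)  # 4 40.0 50.0 25.0
```

Note the design choice in the sketch: failed and timed-out runs are excluded from average duration (their elapsed time reflects the failure, not the work) but included in the per-run tool-call and token averages, since those resources were still consumed.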

Monitor your organization’s LLM consumption in detail:

Token consumption

Total input and output tokens consumed, broken down by time period. Track daily, weekly, and monthly consumption trends to forecast credit usage.

Model usage breakdown

See which LLM models are being used and their relative share of total token consumption. Useful for understanding cost distribution across model tiers.

Cost per conversation

Average credit cost per conversation session, broken down by agent. Identify which agents are the most and least cost-efficient.
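A per-agent average like this is a grouped mean over conversation records. As a sketch, assuming a hypothetical list of sessions with `agent` and `credits` fields (names are illustrative, not the real export schema):

```python
from collections import defaultdict

# Hypothetical per-conversation cost records, in credits.
sessions = [
    {"agent": "support-bot", "credits": 2.0},
    {"agent": "support-bot", "credits": 4.0},
    {"agent": "billing-bot", "credits": 1.0},
]

# Accumulate (total credits, conversation count) per agent.
totals = defaultdict(lambda: [0.0, 0])
for s in sessions:
    totals[s["agent"]][0] += s["credits"]
    totals[s["agent"]][1] += 1

avg_cost = {agent: total / count for agent, (total, count) in totals.items()}
print(avg_cost)  # {'support-bot': 3.0, 'billing-bot': 1.0}
```

Sorting `avg_cost` by value would surface the least and most cost-efficient agents directly.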

Peak usage periods

Hourly and daily heatmaps showing when LLM usage is highest. Use this data to anticipate capacity needs and optimize credit allocation.
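A heatmap like this is built by bucketing usage by weekday and hour. A sketch of the underlying aggregation, assuming a hypothetical log of `(timestamp, tokens)` pairs (illustrative only):

```python
from collections import Counter
from datetime import datetime

# Hypothetical per-request token log.
usage = [
    (datetime(2024, 1, 1, 9, 15), 500),   # Monday 09:xx
    (datetime(2024, 1, 1, 9, 40), 700),   # Monday 09:xx
    (datetime(2024, 1, 1, 14, 5), 300),   # Monday 14:xx
    (datetime(2024, 1, 2, 9, 10), 400),   # Tuesday 09:xx
]

# Bucket tokens into (weekday, hour) cells; weekday 0 = Monday.
heatmap = Counter()
for ts, tokens in usage:
    heatmap[(ts.weekday(), ts.hour)] += tokens

peak = max(heatmap, key=heatmap.get)
print(peak, heatmap[peak])  # (0, 9) 1200
```

The cell with the largest total marks the peak period, which is where extra capacity or credit headroom matters most.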

If your organization uses voice or video features, this section displays session-level statistics:

  • Active voice sessions — number of currently active voice/video bot sessions.
  • Total sessions — aggregate count over the selected time range.
  • Average session duration — mean length of voice/video interactions.
  • Session completion rate — percentage of sessions that ended normally (versus dropped or errored).

All metrics on this page support flexible filtering:

  • Time range selector — choose from preset ranges (last hour, 24 hours, 7 days, 30 days) or define a custom date range.
  • Agent filter — narrow metrics to a specific agent or set of agents.
  • Tool filter — focus on a particular tool or tool group.
  • Status filter — show only successful, failed, or in-progress executions.

Filters apply across all sections on the page simultaneously, so you get a consistent view of the selected slice of data.
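Conceptually, the page applies every active filter as a conjunction over the same record set. A sketch of that behavior, assuming a hypothetical run-record shape (field and parameter names are illustrative, not the product's API):

```python
from datetime import datetime

# Hypothetical run records shared by all sections of the page.
runs = [
    {"agent": "support-bot", "tool": "search_docs", "status": "completed",
     "started": datetime(2024, 1, 1, 12, 0)},
    {"agent": "billing-bot", "tool": "lookup_invoice", "status": "failed",
     "started": datetime(2024, 1, 1, 6, 0)},
]

def filter_runs(runs, since=None, agents=None, tools=None, statuses=None):
    """Apply every active filter at once; inactive filters (None) pass everything."""
    out = runs
    if since is not None:
        out = [r for r in out if r["started"] >= since]      # time range selector
    if agents is not None:
        out = [r for r in out if r["agent"] in agents]       # agent filter
    if tools is not None:
        out = [r for r in out if r["tool"] in tools]         # tool filter
    if statuses is not None:
        out = [r for r in out if r["status"] in statuses]    # status filter
    return out

hits = filter_runs(runs, since=datetime(2024, 1, 1, 10, 0), statuses={"completed"})
print([r["agent"] for r in hits])  # ['support-bot']
```

Because every section reads from the same filtered set, a run excluded by one filter disappears from the run feed, tool metrics, and token charts alike.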

You can export metrics data for external analysis or reporting:

  • CSV export — download the current filtered view as a CSV file. Available for all table views (runs, tool calls, token usage).
  • Time-range exports — export data for a specific date range, independent of the current page filter.
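Exported CSVs can be processed with any standard tooling. A sketch using Python's `csv` module, assuming hypothetical column names (`run_id`, `agent`, `status`, `tokens_used`) that may differ from the actual export headers:

```python
import csv
import io

# Hypothetical exported runs CSV; real column names may differ.
export = """run_id,agent,status,tokens_used
run_1,support-bot,completed,900
run_2,support-bot,failed,300
run_3,billing-bot,completed,1500
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Example external analysis: total token consumption per agent.
tokens_by_agent = {}
for row in rows:
    tokens_by_agent.setdefault(row["agent"], 0)
    tokens_by_agent[row["agent"]] += int(row["tokens_used"])

print(tokens_by_agent)  # {'support-bot': 1200, 'billing-bot': 1500}
```

For a real export, replace the inline string with `open("export.csv", newline="")`; `csv.DictReader` keys each row by the header line, so the code adapts to whatever columns the file actually contains.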