
Metric Groupings

Each metric recorded with pluto.log produces a point in a time series, and each time series is associated with a user-defined metric label. Every invocation of pluto.log is also implicitly associated with a step value, which serves as the x-axis, while the logged metric value is the y-axis. Metrics should be labeled using path-like syntax, e.g.
pluto.log({"val/loss": 2})
In the dashboard, metrics are grouped by their top-most parent key (in the example above, val), and each panel corresponds to a unique full metric label path. The example above therefore creates a panel for the metric label val/loss.
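The grouping rule above can be sketched in a few lines. This is illustrative only, not part of the Pluto API: it just shows how a path-like label maps to a dashboard group and a panel.

```python
# Illustrative sketch of the grouping rule: panels are grouped by the
# top-most parent key of a path-like metric label.

def panel_group(label: str) -> str:
    """Return the dashboard group for a metric label ('val/loss' -> 'val')."""
    return label.split("/", 1)[0]

labels = ["val/loss", "val/accuracy", "train/loss"]
groups = {label: panel_group(label) for label in labels}
print(groups)  # {'val/loss': 'val', 'val/accuracy': 'val', 'train/loss': 'train'}
```

So val/loss and val/accuracy land in the val group as two panels, while train/loss gets its own train group.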

Comparing Experiments

If two experiments have identically labeled time series, their data will be co-plotted on the same panel. In the image below, we hide all of our experiments except for two that both contain a time series labeled val/loss.
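The co-plotting behavior amounts to keying panels by the full label path and merging series from every run that logged that label. A minimal sketch, with hypothetical run names and data:

```python
# Illustrative sketch: series from different runs that share a metric
# label end up on the same panel. Run names and values are hypothetical.
from collections import defaultdict

runs = {
    "run-a": {"val/loss": [(0, 2.0), (1, 1.5)]},
    "run-b": {"val/loss": [(0, 2.2), (1, 1.7)], "train/loss": [(0, 3.0)]},
}

panels = defaultdict(dict)  # label -> {run name -> series}
for run, series_by_label in runs.items():
    for label, series in series_by_label.items():
        panels[label][run] = series

print(sorted(panels["val/loss"]))    # ['run-a', 'run-b'] -- co-plotted
print(sorted(panels["train/loss"]))  # ['run-b'] -- only one run logged it
```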

Side-by-Side View

For a detailed tabular comparison of selected runs, click the Side-by-side button above the runs table. This opens a comparison view that displays all metadata and metric summaries in a structured table with one column per run. Side-by-side comparison view The side-by-side view includes:
  • Pluto Metadata — Name, ID, status, owner, timestamps, tags, and notes for each run
  • Metric Summaries — Final values for every logged metric, organized alphabetically
  • Show only differences — Toggle to hide rows where all runs have identical values, making it easy to spot what changed between experiments
Side-by-side config diff
  • Search — Filter the comparison table by metric name with support for regex matching
  • Remove from comparison — Click the eye icon on any run’s column header to temporarily hide it from the view
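The Show only differences toggle can be understood as a simple row filter over the side-by-side table: a row is kept only when the runs disagree on its value. A sketch with hypothetical field names and values:

```python
# Illustrative sketch of the "Show only differences" toggle: drop rows
# of the side-by-side table where every run has the same value.

def only_differences(rows: dict) -> dict:
    """Keep rows (field -> per-run values) where the runs disagree."""
    return {field: values for field, values in rows.items()
            if len(set(values)) > 1}

rows = {
    "owner":    ["alice", "alice"],
    "val/loss": [1.5, 1.7],
    "status":   ["finished", "finished"],
}
print(only_differences(rows))  # {'val/loss': [1.5, 1.7]}
```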

Tooltip Pinning

When hovering over a chart, a tooltip appears showing the values for all runs at that step. You can click on the chart to pin the tooltip in place, allowing you to:
  • Compare exact values at a specific step while scrolling to other charts
  • Pin multiple tooltips for cross-step comparison
  • Resize pinned tooltips by dragging their edges
  • Dismiss a pinned tooltip by clicking the close button
Pinned tooltips on charts Pinned tooltips are indicated by a blue border.
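Conceptually, a pinned tooltip captures each run's value at the clicked step. One way to sketch that lookup (the nearest-previous-point rule here is an assumption, as are the run names and series):

```python
# Illustrative sketch of what a pinned tooltip captures: each run's
# value at the pinned step (here, the last point at or before it).
import bisect

def value_at(series, step):
    """series: sorted (step, value) pairs; return value at/just before step."""
    steps = [s for s, _ in series]
    i = bisect.bisect_right(steps, step) - 1
    return series[i][1] if i >= 0 else None

series = {"run-a": [(0, 2.0), (10, 1.5)], "run-b": [(0, 2.2), (10, 1.7)]}
tooltip = {run: value_at(pts, 10) for run, pts in series.items()}
print(tooltip)  # {'run-a': 1.5, 'run-b': 1.7}
```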

Run Table Customization

The runs table at the top of the Compare view supports extensive customization to help you organize and analyze your experiments. Run table with columns

Column Header Menus

Right-click or click the menu icon on any column header to access options:
  • Sort ascending or descending
  • Rename the column display name
  • Set column color for visual grouping
You can also drag and drop column headers to reorder them.

Column Picker

Click the Columns button to choose which columns are visible in the runs table. The default columns are Name, Id, Created, Owner, Tags, and Notes, but you can add or remove any column. Column picker

Filtering and Sorting

Click the Filter button to open the filter dropdown. The filter system is type-aware, offering different operators depending on the column type. Filter dropdown You can chain multiple conditions together with AND logic. Filter conditions Active filters appear as chips below the toolbar. Click a chip to edit or remove the filter. Sorting can be applied from the column header menu or directly by clicking on a column header.
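The AND chaining described above can be sketched as a conjunction of per-column predicates. The operators and run records below are hypothetical; in the real filter dropdown the available operators depend on the column type.

```python
# Illustrative sketch of chaining filter conditions with AND logic
# over run rows. Operators and records here are hypothetical.

def matches(run: dict, conditions: list) -> bool:
    """True only if the run satisfies every (field, op, value) condition."""
    ops = {"==": lambda a, b: a == b,
           ">": lambda a, b: a > b,
           "contains": lambda a, b: b in a}
    return all(ops[op](run[field], value) for field, op, value in conditions)

runs = [
    {"name": "run-a", "owner": "alice", "val/loss": 1.5},
    {"name": "run-b", "owner": "bob",   "val/loss": 1.7},
]
conditions = [("owner", "==", "alice"), ("val/loss", ">", 1.0)]
print([r["name"] for r in runs if matches(r, conditions)])  # ['run-a']
```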

Preset Views

Save your table configuration (visible columns, column order, filters, and sort settings) as a named preset view. This is useful for creating reusable perspectives on your data, such as “Training Metrics Only” or “Best Runs”. Preset views dropdown To create a preset:
  1. Configure the table with the columns, filters, and sort order you want
  2. Click the view selector dropdown (shows “Default” by default)
  3. Select Save to new view preset
  4. Enter a name for the preset
Preset views are saved per-project and shared across your organization. Switch between presets using the dropdown at any time.
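To make the contents of a preset concrete, here is a sketch of the kind of configuration a named view captures. The dictionary structure and keys are assumptions for illustration, not Pluto's actual storage format; only the listed fields (columns, filters, sort) come from the description above.

```python
# Illustrative sketch: a preset view bundles visible columns, filters,
# and sort settings under a name, stored per project. The structure
# below is hypothetical, not Pluto's real format.

presets = {}  # (project, preset name) -> table configuration

def save_preset(project: str, name: str, config: dict) -> None:
    presets[(project, name)] = config

save_preset("my-project", "Training Metrics Only", {
    "columns": ["Name", "train/loss", "train/accuracy"],
    "filters": [("tags", "contains", "training")],
    "sort": ("Created", "desc"),
})
print(presets[("my-project", "Training Metrics Only")]["sort"])  # ('Created', 'desc')
```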