Histogram logging lets you track how distributions of values (such as weight distributions, activation values, or gradient norms) change over training steps.

Logging Histograms

To log a histogram, instantiate the pluto.Histogram class and pass it to pluto.log:
histogram = pluto.Histogram(
    data=values,
    bins=64,
)
pluto.log({"layers/layer_0/weights": histogram}, step=epoch)
Parameter   Type                                    Description
data        Union[list, np.ndarray, torch.Tensor]   The values to build the histogram from.
bins        int                                     Number of bins for the histogram. Defaults to 64.
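The bins parameter controls resolution only: the data range is split into that many equal-width intervals, each counting the samples that fall inside it. As a rough mental model (a sketch using NumPy's np.histogram; pluto.Histogram computes its binning internally, so this is illustrative, not the library's code):

```python
import numpy as np

# 10,000 samples from a standard normal distribution
values = np.random.default_rng(0).normal(size=10_000)

# 64 equal-width bins over the data range, mirroring Histogram(data=values, bins=64)
counts, edges = np.histogram(values, bins=64)

# Every sample lands in exactly one bin, and 64 bins have 65 edges.
assert counts.sum() == values.size
assert edges.size == 65
```

More bins give a finer-grained view of the distribution at the cost of noisier per-bin counts; 64 is a reasonable default for layer-sized tensors.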

Viewing Histograms in the Dashboard

Histograms appear as cards in the dashboard alongside your other metrics. Each histogram widget displays the distribution at a given training step.

[Image: Histogram view in the dashboard]

Step Navigation

Use the step navigator at the bottom of the histogram card to scrub through training steps and see how distributions evolve over time. You can:
  • Click the forward/back arrows to step through one at a time
  • Type a specific step number into the input field
  • See the current step and total steps (e.g., “Step 5/25”)

Axis Controls

Right-click on a histogram chart or use the chart menu to adjust axis bounds:
  • X Min / X Max — Control the horizontal range to zoom into a specific region of the distribution
  • Y Max — Set the maximum y-axis value to normalize the view across steps

Examples

Logging Weight Distributions

import pluto
import torch

run = pluto.init(project="my-project")

model = MyModel()
for epoch in range(num_epochs):
    # ... training step ...

    # Log weight distributions for each layer
    for name, param in model.named_parameters():
        if "weight" in name:
            pluto.log({f"histograms/{name}": pluto.Histogram(param.detach().cpu())}, step=epoch)

Logging Gradient Distributions

for name, param in model.named_parameters():
    if param.grad is not None:
        pluto.log({
            f"gradients/{name}": pluto.Histogram(param.grad.detach().cpu(), bins=32)
        }, step=epoch)
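Gradients only exist after loss.backward() has run, so this snippet belongs inside the training loop between the backward pass and optimizer.step() (or the next zero_grad()). A minimal placement sketch, where the model, data, and optimizer are placeholders and the pluto.log call from above is shown as a comment:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for your model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(2):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()  # populates param.grad for every trainable parameter

    # Log gradient histograms here, while .grad is populated:
    # pluto.log({f"gradients/{name}": pluto.Histogram(param.grad.detach().cpu(), bins=32)}, step=epoch)
    grads = {name: param.grad.detach().cpu()
             for name, param in model.named_parameters()
             if param.grad is not None}

    optimizer.step()  # update weights only after logging
```

Logging before optimizer.step() guarantees the histogram reflects the gradients that produced that step's update.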