> **New & Experimental** - MCP support is a new feature. Please report any issues to [email protected] or on Discord.
# Model Context Protocol (MCP) Integration
The Pluto MCP server allows AI coding assistants to directly query your ML experiment data. This enables powerful workflows like:

- Asking your AI assistant to analyze training runs and identify issues
- Comparing metrics across experiments in natural language
- Debugging failed runs by querying logs and metrics
- Getting insights about your ML experiments without leaving your editor
## What is MCP?
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely access external data sources. With Pluto’s MCP server, tools like Claude Code can directly query your experiment data, metrics, and logs.

## Setup with Claude Code
### Prerequisites
- A Pluto account with an API key (see Step 1 below)
- Claude Code CLI installed
### Step 1: Get Your API Key
- Go to pluto.trainy.ai and sign in
- Navigate to Settings > Developers > API Keys
- Create a new API key and copy it
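If you’d rather not paste the key directly into configuration files, one common pattern is to keep it in a shell environment variable and reference it from there. The variable name `PLUTO_API_KEY` below is only an illustration; any name works as long as your MCP configuration reads the same one.

```bash
# Keep the API key out of config files by exporting it from your shell profile.
# PLUTO_API_KEY is an illustrative name; replace mlpi_xxxxxxxxxx with your key.
echo 'export PLUTO_API_KEY="mlpi_xxxxxxxxxx"' >> ~/.bashrc
source ~/.bashrc
```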
### Step 2: Configure Claude Code
Run the following command to add the Pluto MCP server, replacing `mlpi_xxxxxxxxxx` with the API key you copied in Step 1.
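The exact command depends on how the Pluto MCP server is distributed. As a sketch, assuming it ships as an npm package (the package name `pluto-mcp` and the environment variable `PLUTO_API_KEY` are assumptions, not confirmed names), registration with `claude mcp add` would look roughly like:

```bash
# Register the Pluto MCP server with Claude Code.
# "pluto-mcp" and PLUTO_API_KEY are assumed names - substitute the package
# and variable documented by Pluto. mlpi_xxxxxxxxxx is your API key.
claude mcp add pluto -e PLUTO_API_KEY=mlpi_xxxxxxxxxx -- npx -y pluto-mcp
```

If you exported the key as an environment variable in Step 1, you can pass `-e PLUTO_API_KEY="$PLUTO_API_KEY"` instead of pasting the literal value. Either way, you can confirm the server was registered with `claude mcp list`.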
#### Alternative: Manual Configuration
You can also manually edit the MCP settings file at `~/.claude/mcp_settings.json`. If the file doesn’t exist, create it. Make sure the JSON is valid - you can verify with `cat ~/.claude/mcp_settings.json | jq .`
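For reference, an entry for the Pluto server in that file might look like the sketch below, using the same assumed package name (`pluto-mcp`) and environment variable (`PLUTO_API_KEY`) as in Step 2; substitute the values from Pluto’s documentation:

```json
{
  "mcpServers": {
    "pluto": {
      "command": "npx",
      "args": ["-y", "pluto-mcp"],
      "env": {
        "PLUTO_API_KEY": "mlpi_xxxxxxxxxx"
      }
    }
  }
}
```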
### Step 3: Restart Claude Code

After saving the configuration, restart Claude Code for the changes to take effect.
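Exit any running Claude Code session and start a new one. Once restarted, you can double-check that the Pluto server is registered:

```bash
# List the MCP servers Claude Code knows about; the entry added in Step 2
# (named "pluto" in the sketches above) should appear here.
claude mcp list
```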
## Available Tools

Once connected, the following tools become available to your AI assistant:

| Tool | Description |
|---|---|
| `list_projects` | List all projects in your organization |
| `list_runs` | Search and filter experiment runs by project, name, or tags |
| `get_run` | Get detailed information about a specific run |
| `query_logs` | Query console logs (stdout/stderr) from a run |
| `query_metrics` | Query time-series metrics (loss, accuracy, etc.) |
| `get_files` | Get files and artifacts from a run |
| `get_statistics` | Get statistics and anomaly detection for metrics |
| `compare_runs` | Compare a metric across multiple runs |
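Behind the scenes, your assistant invokes these through the standard MCP `tools/call` request. You normally never write this by hand, but as a rough sketch (the argument names `run_id` and `metric` are illustrative, not Pluto’s documented schema), a call to `query_metrics` looks something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_metrics",
    "arguments": {
      "run_id": "1234",
      "metric": "train/loss"
    }
  }
}
```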
## Example Prompts
Once configured, you can interact with your Pluto data naturally through your AI assistant:

- **List your projects:** “What ML projects do I have in Pluto?”
- **Find recent runs:** “Show me the last 5 training runs in the gpt-finetuning project”
- **Analyze a run:** “What was the final loss for run 1234? Did it converge?”
- **Compare experiments:** “Compare the train/loss between runs 100, 101, and 102 - which performed best?”
- **Debug failures:** “Show me the error logs from run 456 - why did it fail?”
- **Get insights:** “Are there any anomalies in the metrics for my latest run?”
## Feedback
MCP integration is experimental. We’d love to hear your feedback:

- Discord: Join our community
- Email: [email protected]