dispatchmy.ai docs
Run the local agent runtime, connect it to the dashboard, and build workflows where a manager agent delegates to specialists with their own prompts, models, tools, and memory.
Overview
dispatchmy.ai pairs a browser dashboard with a local daemon. The daemon stores agent configuration, runs each agent in its own Docker container, and proxies upstream credentials so most secrets never reach the agents themselves.
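The credential-proxying idea can be sketched in a few lines. This is a hypothetical illustration, not the daemon's actual API: `proxy_request` and the request shape are invented names. The point is that the agent container sends an unauthenticated request, and the daemon attaches the provider key server-side before forwarding it upstream.

```python
# Hypothetical sketch of credential proxying: the daemon keeps the real
# provider key; the agent container never sees it.
def proxy_request(provider_keys: dict, provider: str, request: dict) -> dict:
    """Attach the stored key server-side before forwarding upstream."""
    key = provider_keys[provider]
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = f"Bearer {key}"  # injected by the daemon, not the agent
    return {**request, "headers": headers}

# The agent's outbound request carries no secret...
agent_request = {"url": "https://api.example.com/v1/messages", "headers": {}}
# ...until the daemon rewrites it on the way out.
forwarded = proxy_request({"anthropic": "sk-local-only"}, "anthropic", agent_request)
```

Because the key is injected at forward time, rotating it is a daemon-side change; nothing inside the agent container has to be restarted or re-secreted.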
Dashboard
Use the dashboard to create agents, connect tools, inspect runs, and manage the daemon.
Local runtime
The daemon runs on your machine, launches an isolated container per agent, and proxies provider keys server-side.
Delegated workflows
A manager agent calls specialists as tools — each one focused on a single job, with its own prompt, model, and tool set.
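The manager-and-specialists pattern can be sketched as plain objects. Everything here is illustrative (the class names, the `delegate` method, and the model names are assumptions, not the product's schema); it only shows the shape of delegation: the manager routes work, and each specialist answers with its own prompt, model, and tool set.

```python
# Hypothetical sketch of delegated workflows: the manager sees each
# specialist as a callable tool with its own configuration.
from dataclasses import dataclass, field

@dataclass
class Specialist:
    name: str
    prompt: str
    model: str
    tools: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # Stand-in for a real model call scoped to this specialist's config.
        return f"[{self.name}/{self.model}] {task}"

@dataclass
class Manager:
    specialists: dict

    def delegate(self, name: str, task: str) -> str:
        # The manager never does the work itself; it routes to a specialist.
        return self.specialists[name].run(task)

manager = Manager({
    "researcher": Specialist("researcher", "Find sources.", "model-a"),
    "writer": Specialist("writer", "Draft prose.", "model-b"),
})
result = manager.delegate("researcher", "summarize the release notes")
```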
Tooling layer
Plug in model providers, MCP servers, and built-in tools so agents can act on real systems instead of just producing text.
Getting started
Start here if you're setting up dispatchmy.ai for the first time. These guides walk you from sign-in to a paired daemon and your first working agent.
Quickstart
Sign in, start the daemon, pair it with the dashboard, and run a first agent.
Installation
Supported platforms, Docker prerequisites, and what the dashboard's one-shot install command does.
Pairing the daemon
How daemon pairing works at a high level — pair codes appear only in the signed-in dashboard.
Agents
Agents are the building blocks of a workflow. Each one has a prompt, model, tool set, optional memory, and optional schedule.
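The five parts listed above suggest a configuration shape like the following. The field names and values are purely illustrative, not the daemon's actual schema; the sketch only shows which parts are core and which are optional.

```python
# Hypothetical shape of an agent definition (illustrative field names):
# prompt, model, and tools are the core; memory and schedule are optional.
agent = {
    "name": "release-notes-writer",
    "prompt": "Summarize merged PRs into release notes.",
    "model": "model-a",
    "tools": ["github", "web_fetch"],
    "memory": {"enabled": False},       # optional; off until the agent needs state
    "schedule": {"cron": "0 9 * * 1"},  # optional; e.g. Mondays at 09:00
}

# In this sketch, only prompt, model, and tools are required.
required = {"prompt", "model", "tools"}
missing = required - agent.keys()
```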
Agent model
Agents, subagents, and sessions — and how delegation works.
Creating agents
Start small, name agents by responsibility, and add complexity once the first run works.
Memory
What memory is, when to enable it, and when an MCP server is the better fit.
Schedules
Run an agent on a recurring cadence and review what each run did.
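A recurring cadence reduces to a small computation, sketched here with an invented `next_run` helper (the runtime's real scheduler is not shown in these docs): given the last run and an interval, the agent fires again at their sum.

```python
# Hypothetical sketch of a recurring cadence: the next run is the last
# run plus the configured interval.
from datetime import datetime, timedelta

def next_run(last_run: datetime, interval: timedelta) -> datetime:
    return last_run + interval

last = datetime(2025, 1, 6, 9, 0)            # Monday 09:00
upcoming = next_run(last, timedelta(days=7)) # the following Monday 09:00
```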
Runtime
The daemon runs agents on your machine; the dashboard is the browser interface that controls it. These guides cover how they fit together, how updates work, and where data lives.
Local daemon
What the daemon is responsible for, why it needs the Docker socket, and what stays local.
Dashboard
How the dashboard controls agents, picks the daemon to talk to, and why it requires sign-in.
Updates
API version checks, the one-click upgrade flow, and recovery if an update fails.
Data storage
Where configs, logs, sessions, memory, and artifacts live on the host.
Tools and integrations
Connect model providers, MCP servers, and built-in tools so agents can browse, edit files, call APIs, and work with your existing services.
Model providers
Anthropic, OpenRouter, and custom OpenAI-compatible endpoints — and how keys are stored.
MCP servers
The curated catalog vs. custom entries, supported transports, and tool filters.
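A tool filter can be sketched as a simple allowlist. The function and the tool names below are hypothetical, not the daemon's real filter semantics; the sketch only illustrates the idea that an MCP server may expose many tools while the agent sees a narrowed subset.

```python
# Hypothetical sketch of a tool filter: no filter passes everything
# through; an allowlist keeps only the named tools.
def filter_tools(exposed: list, allow: list = None) -> list:
    """None means no filter; otherwise keep only allowlisted tools."""
    if allow is None:
        return exposed
    return [t for t in exposed if t in allow]

exposed = ["search", "read_file", "write_file", "delete_file"]
allowed = filter_tools(exposed, allow=["search", "read_file"])
```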
Built-in tools
First-party packages — browser, shell, file editing, GitHub, web fetch, and search — and their container dependencies.
Secrets
What the daemon proxies, what reaches the agent container, and how to rotate a key.
Operations
Keep your setup healthy after the first successful run. Diagnose connection issues, inspect logs, understand the security model, and know where to ask for help.
Troubleshooting
Daemon offline, model-provider errors, and Docker-side failures.
Logs
Where logs appear, what they contain, and what to redact before sharing one.
Security model
Local execution, Docker isolation, credential proxying, and dashboard authentication.
Support
What to include in a bug report, what to keep out, and how to reach us.
Reference
Look up exact behavior when a walkthrough isn't what you need — what an agent's config holds, where the daemon API stands, what platforms are supported.
Configuration reference
What an agent's configuration holds and where it's edited.
Limits
Supported platforms, the Docker requirement, and known model-provider caveats.
Changelog
Where release notes will live once the beta cadence stabilizes.