Quickstart

Connect the dashboard to your local daemon, add a model provider, and run your first agent.

Before you start

This quickstart assumes you have beta access and are setting up dispatchmy.ai on a machine you control.

  • Sign in to the dashboard.
  • Confirm your machine meets the system requirements below.

System requirements

dispatchmy.ai needs a local machine for the daemon and agent runtime, plus browser access to the dashboard.

Required

  • A modern desktop browser for the dashboard.
  • A macOS or Linux machine on amd64 or arm64. Windows is supported in beta through Docker Desktop with WSL 2.
  • Docker, which dispatchmy.ai uses as part of the agent isolation model.
  • An API key for Anthropic or OpenRouter, or a reachable OpenAI-compatible local model endpoint.

Recommended

  • At least 1 GB of free disk space for agent configs, logs, sessions, and generated artifacts.
  • A machine that can remain online for scheduled or long-running agents.
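
If you want to confirm a machine meets these requirements before installing anything, a quick check from a terminal is enough. The sketch below is not an official installer check; it only reports the platform, whether the Docker CLI is on the PATH, and free disk space on the filesystem that holds your home directory:

```shell
#!/bin/sh
# Preflight sketch: report platform, Docker availability, and free disk space.
printf 'os/arch: %s/%s\n' "$(uname -s)" "$(uname -m)"

if command -v docker >/dev/null 2>&1; then
  # The CLI being present does not guarantee the Docker daemon is running.
  printf 'docker: %s\n' "$(docker --version)"
else
  printf 'docker: not found\n'
fi

# Free space where the daemon will store its data (under $HOME).
printf 'free space: %s\n' "$(df -h "$HOME" | awk 'NR==2 {print $4}')"
```

Expect os/arch values like Linux/x86_64 or Darwin/arm64; anything else falls outside the supported matrix above.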

1. Sign in

Sign in from the dispatchmy.ai header. Once you're signed in, open the dashboard — that's where you'll create agents, connect tools, inspect runs, and manage the local daemon.

2. Start the local daemon

The daemon is a single Docker container. From the dashboard, open Add daemon; it generates a short-lived pair code and the exact command to run on the machine that will host the daemon. Copy the command, paste it into a terminal on that machine, and run it.

The shape is the same on every install — only the pair code changes:

docker run -d --name dispatchmy.ai --restart unless-stopped \
  --add-host=host.docker.internal:host-gateway \
  -v $HOME/.dispatchmy.ai:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e DISPATCHMYAI_HOST_DATA_DIR=$HOME/.dispatchmy.ai \
  -e DISPATCHMYAI_PAIR_CODE=<your-pair-code> \
  ghcr.io/dispatchmyai/agent-manager

The container pairs on first start, persists credentials to the bind-mounted host directory, and comes back up automatically after restarts (the --restart unless-stopped flag above). Once paired, the daemon picker in the dashboard shows the new daemon as online.
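
On most Linux setups, Docker creates a missing bind-mount directory as root on first run. If you would rather have the data directory owned by your user, it is reasonable to create it up front; a minimal sketch, assuming the default path from the command above:

```shell
# Pre-create the daemon data directory so it is owned by the invoking user,
# and keep stored credentials readable only by that user.
mkdir -p "$HOME/.dispatchmy.ai"
chmod 700 "$HOME/.dispatchmy.ai"
ls -ld "$HOME/.dispatchmy.ai"
```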

Pair codes expire — generate a fresh one if the timer in the dashboard runs out before you've pasted the command.
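
If you want to confirm the daemon is healthy from the host rather than from the dashboard, standard Docker commands work. This assumes the container name dispatchmy.ai from the run command above:

```shell
# Is the container running, and what do its recent logs say?
docker ps --filter name=dispatchmy.ai --format '{{.Names}}: {{.Status}}'
docker logs --tail 20 dispatchmy.ai
```

No output from docker ps means the container is not running; docker start dispatchmy.ai brings it back, and the logs usually say why pairing failed.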

3. Add a model provider

Open Settings and add a model provider. For Anthropic or OpenRouter, paste the API key — the daemon will inject it server-side when it proxies model calls, so the key never reaches the agent container.

If you're running a local OpenAI-compatible endpoint instead, configure its base URL and the model name there.
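
Before saving a local endpoint, it is worth confirming it responds from the daemon's host. A sketch using curl; the base URL here is only an example (Ollama's default), so substitute whatever your server exposes:

```shell
# Probe an OpenAI-compatible endpoint; GET /v1/models lists the models it serves.
BASE_URL="http://localhost:11434/v1"   # example value, not a dispatchmy.ai default
curl -sf --max-time 5 "$BASE_URL/models" || echo "endpoint not reachable at $BASE_URL"
```

Because the daemon itself runs inside a container, an endpoint bound to localhost on the host may need to be entered in Settings as http://host.docker.internal:11434/v1; the run command above maps host.docker.internal to the host gateway for exactly this case.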

4. Create your first agent

Create a simple agent before adding subagents or complex tools. Give it a clear name, choose a provider and model, and write a short system prompt that describes the job it should do.

Start simple

  • Use one model provider.
  • Enable only the tools needed for the first test.
  • Skip schedules and memory until the basic run works.

5. Run a test message

Open the agent console and send a small, verifiable task. Watch the run stream for model output, tool calls, errors, and artifacts.

After the first successful run, add specialist subagents, MCP servers, schedules, or memory one piece at a time.

Troubleshooting

The dashboard cannot see the daemon

Check that the daemon is running, then retry the pairing flow from the dashboard. If you run more than one daemon, confirm the daemon picker is pointed at the right one.

The agent cannot call a model

Check the provider settings, key, base URL, and model name. Anthropic and OpenRouter need a saved API key; local OpenAI-compatible endpoints need a reachable URL and a compatible model.
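
To rule dispatchmy.ai out entirely, you can test the key directly against the provider. A sketch against Anthropic's Messages API; the model id is an example, so use one your key can access:

```shell
# Minimal request to Anthropic's Messages API. A 200 with a JSON body means the
# key works; a 401 means the key itself is wrong, independent of dispatchmy.ai.
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-haiku-20241022", "max_tokens": 16,
       "messages": [{"role": "user", "content": "Reply with: ok"}]}'
```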

A tool fails

Start by disabling unrelated tools, then rerun the task. If the tool needs credentials, confirm the relevant setting is saved and that the integration is enabled for this agent.