Recipe — Let AI run a card overnight
The autonomous processing type runs long. A well-scoped card can take 15–30 minutes; a vague one can take an hour. If you want the agent to work while you're doing something else (like sleeping), set it up in the evening and walk away.
This recipe is about doing it well — not about the mechanics of the button, which are covered in Autonomous vs quick-fix.
1. Pick a card that's ready
Autonomous works best on cards that have a clear, complete plan already written. The agent's autonomous workflow detects the phase — no plan → plan first; plan exists → implement. You want to skip the first phase because planning interactively with you in the room produces much better plans than planning alone.
Checklist before you hit Run autonomous:
- Detail is clear and includes acceptance criteria
- Solution has a plan you've read and agreed with (either you wrote it with the agent in a Solution-tab chat, or the agent wrote it from a previous autonomous run and you're starting the implement phase)
- There are no open questions in the chat
- You know roughly what files will change
If the card isn't there yet, don't run autonomous on it overnight. Have a planning conversation first, get the plan saved, then run.
2. Pre-flight
Before you walk away:
- Git state. Make sure the project's main branch is clean — no uncommitted changes. Autonomous creates a worktree from main, so whatever's uncommitted stays outside the worktree (usually fine, but worth knowing)
- Dependencies. If the card needs a dependency install (`npm install`, `pip install`), make sure the plan calls for it. The agent won't assume
- Credentials. If the work needs API keys (Stripe, a database URL, something proprietary), make sure `.env` or equivalent is readable from the worktree
- Your machine stays awake. Caffeinate, or just disable sleep for the night. Autonomous stops if the machine suspends mid-run
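The git-state part of the pre-flight can be sketched as a small shell check. This is a sketch, not the tool's own check, and `check_clean` is an invented helper name:

```shell
# check_clean: succeed and print "clean" if the repo at $1 has no
# uncommitted or untracked changes; fail and print "dirty" otherwise.
# A pre-flight sketch, not the tool's actual validation.
check_clean() {
  if [ -n "$(git -C "${1:-.}" status --porcelain 2>/dev/null)" ]; then
    echo dirty
    return 1
  fi
  echo clean
}
```

Run it from the project root before walking away, and pair it with something that keeps the machine up, e.g. `caffeinate -dims -t 28800 &` on macOS (roughly eight hours) or `systemd-inhibit` on Linux.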
3. Run it
Open the card. Run autonomous. The card locks and `processingType` becomes `autonomous`. The modal shows progress — "reading files", "writing plan", "implementing", etc. — streamed from the agent's output.
At this point you can close the modal. The job runs in a background process. The card on the board shows an animated ring while it's running.
4. Overnight
The agent:
- Detects the phase. Plan already saved → jumps straight to implementation
- Creates the worktree and branch
- Starts a dev server in the worktree on an allocated port
- Writes code — reads source files, edits, commits incrementally. Each commit is atomic per step in the plan
- Runs tests if the project has them
- Writes acceptance tests to the Tests tab via `save_tests`
- Moves the card to Human Test
- Clears `processingType`
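The worktree-and-commit part of those steps amounts to ordinary git. A self-contained sketch with invented branch and path names (the tool chooses its own, and also handles the dev server and ports):

```shell
# Simulate the setup in a throwaway repo so the sketch runs end to end.
cd "$(mktemp -d)"
git init -q project && cd project
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"

# One worktree per card, on its own branch, so the main checkout is untouched.
git worktree add -q -b card-123 ../card-123-worktree

# Work happens inside the worktree; each plan step becomes one commit.
cd ../card-123-worktree
echo "validated" > login.txt
git add login.txt
git -c user.email=agent@example.com -c user.name=agent \
    commit -q -m "step 1: add login validation"
```

Because the worktree has its own checkout and branch, incremental commits there never touch your main working copy.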
When you wake up, the card is in Human Test with a commit history, a dev server still running, and a checkbox test list waiting for you.
5. Morning review
- Open the card. Walk through the Tests tab manually — start the dev server if it's not running, exercise each criterion, tick boxes
- If everything passes: merge from the card modal. Done
- If something fails: go to the Solution-tab chat, explain what's wrong, let the agent fix it interactively. You're now in the normal iterative loop
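In git terms, the merge the modal performs is roughly the following. A sketch with invented names, including the setup so it runs end to end; the actual cleanup behavior belongs to the tool:

```shell
# Build a repo with a finished card branch so the sketch is self-contained.
cd "$(mktemp -d)" && git init -q repo && cd repo
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git worktree add -q -b card-123 ../card-123-worktree
( cd ../card-123-worktree \
  && echo done > feature.txt && git add feature.txt \
  && git -c user.email=agent@example.com -c user.name=agent \
       commit -q -m "step 1: implement feature" )

# The morning merge: bring the card branch into the main line, then tidy up.
git -c user.email=you@example.com -c user.name=you \
    merge -q --no-ff -m "merge card-123" card-123
git worktree remove ../card-123-worktree
git branch -q -d card-123
```

`--no-ff` keeps the card's commits grouped under one merge commit, which makes the board's per-card history easy to read later.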
6. When autonomous finishes with an error
It happens. The agent might hit a test failure it can't fix, or an unexpected architectural decision point, or a bug in its own plan. In any of those cases:
- `processingType` clears and the card unlocks
- The chat panel shows the last thing the agent said before stopping
- The worktree is still there with the partial state
You pick up where it left off. Often the last agent message is clear about what went wrong — "I tried two approaches to fix the failing test, both failed, I need guidance on whether we should change the test expectation or the implementation." Give the guidance, continue.
Rules of thumb
- Don't queue up twenty cards for one overnight run. Run autonomous on one card at a time. The worktrees don't interfere, but your future self reviewing results in the morning does
- Don't run autonomous on cards you don't understand yourself. If you can't tell the difference between a good result and a bad one, the agent's output is useless to you
- Do run autonomous on repetitive well-understood work. Migrations, boilerplate, tests, refactors with a clear mechanical pattern. That's where it shines
Prev: Hand off a card across the team Next: Three features in parallel worktrees Up: User guide index