Recipe — Three features in parallel worktrees
Ideafy creates a git worktree per card in implementation, each with its own branch and its own dev server port. You can run three (or more) cards through the implement phase at the same time without anything interfering. Here's how to actually do that.
The setup
Three cards, all ready for autonomous implementation. Let's call them KAN-12, KAN-13, KAN-14. Each has a plan saved. Each is in In Progress (or about to go there via save_plan).
Fire them off
For each card:
- Open the card modal
- Run autonomous
- Close the modal (the job runs in the background)
- Do the next card
After you launch all three, your board shows three cards with animated rings, all processing in parallel. The ring is your at-a-glance indicator of which cards the agent is currently working on.
What's happening on disk
Three worktrees under .worktrees/kanban/:
.worktrees/kanban/
├── KAN-12-add-search-to-sidebar/
├── KAN-13-refactor-auth-middleware/
└── KAN-14-write-migration-for-pool-cards/
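Under the hood this maps onto plain `git worktree` commands. A hedged sketch in a throwaway repo (Ideafy manages this for you; the exact commands it runs may differ):

```shell
set -e
base=$(mktemp -d); cd "$base"
git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init
# One worktree plus one branch per card, following the naming convention above
git worktree add -q .worktrees/kanban/KAN-12-add-search-to-sidebar \
  -b kanban/KAN-12-add-search-to-sidebar
git worktree list   # main checkout plus the new per-card worktree
```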
Three branches:
kanban/KAN-12-add-search-to-sidebar
kanban/KAN-13-refactor-auth-middleware
kanban/KAN-14-write-migration-for-pool-cards
Three dev servers, each on its own port:
3031 → the worktree for KAN-12
3032 → KAN-13
3033 → KAN-14
Your main project's dev server (if running) is on 3030. Worktree ports start at 3031 and increment from there; Ideafy handles the allocation.
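The "start at 3031 and take the next free port" behaviour can be simulated in a few lines of shell. This is a sketch, not Ideafy's actual allocator; `used_ports` stands in for a real is-port-listening check:

```shell
# Simulated port allocation: first free port at or above the requested one.
used_ports="3030 3031"   # pretend these are already taken
next_free_port() {
  p=$1
  while printf '%s\n' $used_ports | grep -qx "$p"; do
    p=$((p + 1))
  done
  echo "$p"
}
next_free_port 3031   # prints 3032: 3031 is "taken" in this simulation
```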
Why they don't interfere
Each worktree is a separate checkout of the same repository. Git's worktree model gives you independent file trees sharing a common object database. That means:
- A change in one worktree doesn't appear in another until it's merged
- Each dev server is reading from its own directory — different files, different processes, different ports
- The main branch stays clean throughout — nothing is committed to main until you merge a card
It's the closest thing to "three parallel universes of the same project" that your file system can offer.
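You can watch this isolation directly with two ad-hoc worktrees (throwaway repo, illustrative branch names):

```shell
set -e
base=$(mktemp -d)
git init -q -b main "$base/main"; cd "$base/main"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init
git worktree add -q "$base/wt-13" -b feature-13
git worktree add -q "$base/wt-14" -b feature-14
echo "search box" > "$base/wt-13/sidebar.txt"
git -C "$base/wt-13" add sidebar.txt
git -C "$base/wt-13" -c user.email=a@b -c user.name=demo commit -qm "KAN-13 work"
# The commit lives in the shared object database, but the other file
# trees are untouched until a merge:
test ! -e "$base/wt-14/sidebar.txt"
test ! -e "$base/main/sidebar.txt"
```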
Monitoring progress
The card modal for each card streams output from its agent. You can open them one by one to peek, but it's usually easier to:
- Leave the board view open. The animated rings tell you which cards are still running
- When a ring disappears and the card moves to Human Test, that card is done
- Open it, read the chat log if you care, verify the tests
Review and merge
When all three are in Human Test:
- Verify each card's acceptance criteria (dev servers are still running — open each port in a browser)
- Merge them in dependency order. If KAN-13 touches the same files as KAN-14, merge one first, then rebase the other onto main (Ideafy does this automatically on merge) and handle any conflicts
- Ideafy cleans up each worktree and dev server after a successful merge
If two cards conflict, the second merge's rebase will flag the conflicts with the Resolve Conflict panel. Click Auto-resolve with Claude or fix by hand.
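The dependency-order merge can be sketched with plain git in a throwaway repo. Ideafy runs the rebase for you on merge; the branch names and commands below are illustrative:

```shell
set -e
base=$(mktemp -d); cd "$base"
G() { git -c user.email=a@b -c user.name=demo "$@"; }
G init -q -b main
G commit -q --allow-empty -m init
G checkout -qb card-13
echo 13 > thirteen.txt; G add .; G commit -qm "KAN-13"
G checkout -qb card-14 main
echo 14 > fourteen.txt; G add .; G commit -qm "KAN-14"
G checkout -q main
G merge -q --no-ff card-13 -m "merge KAN-13"   # first card lands on main
G checkout -q card-14
G rebase -q main                               # replay KAN-14 on top of it
G checkout -q main
G merge -q --no-ff card-14 -m "merge KAN-14"   # second card lands cleanly
```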
Failure modes and how to handle them
- Port collision with something else on your machine. Unlikely but possible. The card shows "Dev server failed to start." Stop the offending process (or Ideafy's own detection will eventually pick another port) and restart the card's dev server from the modal
- Worktree can't be created. Usually because the branch name already exists (you ran the same card twice). Roll back the previous attempt (Rollback in the modal) and retry
- Agent gets confused by the other parallel work. It shouldn't, since it reads only from its own worktree and never sees the others. But if the plan references files that are being rewritten in a parallel worktree, you may end up with subtle merge conflicts. Stagger truly conflict-prone work
- Your machine runs out of RAM. Three dev servers + three agent processes is real memory. Stop one of the dev servers from the card modal if you need the resources
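For the "branch already exists" case, the manual cleanup that Rollback automates looks roughly like this (a sketch; Rollback may do more than these two commands):

```shell
set -e
base=$(mktemp -d); cd "$base"
git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init
git worktree add -q .worktrees/kanban/KAN-12-retry -b kanban/KAN-12-retry
# A second run of the same card would now fail: branch and worktree exist.
# Manual cleanup before a retry:
git worktree remove .worktrees/kanban/KAN-12-retry
git branch -q -D kanban/KAN-12-retry
```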
When parallel is overkill
Don't parallelise for the sake of it. If three cards are independent but small, running them sequentially is often faster end-to-end — you don't pay the context-switch cost of reviewing three things at once. Parallel shines when each individual card takes long enough that the setup overhead is dwarfed by the compute time, which is roughly anything over 10 minutes.