Prompting patterns

How to write cards and chat messages that actually work with AI. None of this is mandatory — the agent will take whatever you give it — but these patterns consistently produce better output.

Write cards for a teammate, not a robot

The best Detail text reads like a brief you'd hand a colleague who's seeing the ticket for the first time. That means:

  • Start with the user-visible problem or goal. Not "add a feature" — "signed-out users can't access the share modal, and we want them to see a sign-up prompt instead."
  • State acceptance criteria at a high level in Detail (detailed checkbox tests belong in the Tests tab).
  • Call out constraints. "Must not break the existing OAuth flow." "Stripe webhook signature verification has to stay on the server." These are the things the agent will otherwise forget.
  • Mention files or components by name if you already know them. The agent will read them before deciding on an approach, which saves a round trip.

Leave implementation to Solution

The Detail tab is the what. The Solution tab is the how. Don't pre-fill Solution with implementation details — let the agent plan it through save_plan. If you need to constrain the plan, put constraints in Detail, not in Solution.

When you pre-fill Solution, you're effectively overriding the agent's planning step with your own guesses. Sometimes that's right. More often, the agent would have noticed something you didn't — and you lose that value.

Good cards vs. anaemic cards

Good

Title. Add rate limiting to the sign-up endpoint

Detail. A recent sign-up spam wave created ~3k fake accounts in 10 minutes. We want to rate-limit /api/auth/signup to 5 attempts per hour per IP. 429 responses should return a Retry-After header. The existing rate-limit middleware already handles /api/cards — we can reuse it.

Constraints: must work on Vercel Edge. Must not break the existing test suite.

The agent gets a clear problem (spam), a specific constraint (5/hr per IP), a specific response shape (429 + Retry-After), a pointer to existing code to reuse, and platform constraints. It can write a tight plan and implement it without asking clarifying questions.
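For illustration only, here is a minimal sketch of the kind of limiter that brief points toward. Everything in it is hypothetical — the names, the fixed-window strategy, and the in-memory Map (which wouldn't survive across Edge instances; a real plan would reuse the existing middleware and its shared store):

```typescript
// Hypothetical sketch of the behaviour the card describes:
// 5 attempts per hour per IP, with a Retry-After value for 429s.
const WINDOW_MS = 60 * 60 * 1000; // 1 hour
const LIMIT = 5;

type Window = { start: number; count: number };
const windows = new Map<string, Window>(); // in-memory only; not Edge-safe

function checkRateLimit(
  ip: string,
  now: number = Date.now()
): { ok: boolean; retryAfterSec: number } {
  const w = windows.get(ip);
  // No window yet, or the old window has expired: start a fresh one.
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(ip, { start: now, count: 1 });
    return { ok: true, retryAfterSec: 0 };
  }
  w.count += 1;
  if (w.count > LIMIT) {
    // Tell the client when the current window ends.
    const retryAfterSec = Math.ceil((w.start + WINDOW_MS - now) / 1000);
    return { ok: false, retryAfterSec };
  }
  return { ok: true, retryAfterSec: 0 };
}
```

The point isn't the code — it's that the card's constraints (5/hr, per IP, Retry-After) map one-to-one onto decisions the agent no longer has to guess.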

Anaemic

Title. Rate limiting

Detail. We need to add rate limiting to the API.

The agent will either ask five questions or guess five answers. You'll spend more time fixing mistakes than you would have spent writing a good brief.

Tests should be verifiable by a human in under a minute each

The Tests tab is a checkbox list you're going to tick off manually. So:

  • One observable behaviour per checkbox. "Submits successfully" — yes. "Doesn't crash under load" — no, that's not a manual test.
  • Group by H3 headings. ### Happy Path, ### Edge Cases, ### Regressions. The human-test.md skill expects this structure.
  • Avoid implementation tests. If a test would need you to open the dev tools or read the database, it belongs in an automated test, not the Tests tab.
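Put together, a Tests tab following these rules might look like this (hypothetical checks, assuming standard markdown task-list syntax):

```markdown
### Happy Path
- [ ] Signing up with a fresh email shows the welcome screen
- [ ] Sign-up succeeds on the first attempt from a new IP

### Edge Cases
- [ ] The 6th attempt within an hour from the same IP is rejected

### Regressions
- [ ] Existing users can still sign in
```

Every line is something a human can verify in the browser in under a minute — no dev tools, no database.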

See Card modal — Tests for the merge semantics that let you keep checks across regenerations.

Use the right tab for the right conversation

  • Need to clarify scope? → Detail chat
  • Weighing whether to build it at all? → Opinion chat
  • Working out implementation? → Solution chat
  • Writing or refining tests? → Tests chat

Mixing threads doesn't break anything, but splitting them keeps each conversation focused and each thread shorter — which means cheaper and faster agent replies.

Triggers are not decorations

The chat input has three trigger characters, and each one pulls a different kind of thing into the agent's working memory. The two you'll reach for most often:

  • @ opens the document menu — @product-narrative, @CLAUDE.md, @api-spec. Attaches the file content to the message. Use it when you want the agent to read something.
  • [ opens the card menu — [[KAN-14]]. Inlines a reference to a sibling card that the agent can look up via get_card. Use it when you want the agent to cross-reference work.

See Card chat & mentions for the full trigger reference.

Don't explain the board to the agent

The bundled ideafy.md skill already teaches the agent how the workflow works — move-on-save, display-ID format, tool families. You don't need to explain "when you're done, call save_plan" in every message. The agent knows. If it's not behaving, it's probably missing context about your project, not Ideafy's mechanics.



Last updated: 2026-04-13