
Best Practices

Patterns and anti-patterns learned from real-world AI-assisted development.


Give AI project context at the start of a session, not piecemeal.

Bad:

You: "Add a button"
AI: *Creates generic button*
You: "Actually, we use TypeScript"
AI: *Rewrites with types*
You: "And we have specific naming conventions"
AI: *Rewrites again*

Good:

You: *Provides AGENTS.md + role + task*
AI: *Creates button following all conventions first time*

Treat AGENTS.md as living documentation. Update it when:

  • You establish new patterns
  • You make architectural decisions
  • You learn from AI mistakes
  • Project conventions evolve

Even in long sessions, AI context can drift. For important tasks:

  • Reference specific sections of AGENTS.md
  • Re-state critical constraints
  • Verify understanding before execution

Define scope explicitly in every task:

```md
## Scope
### Allowed
- src/components/Button/**
- src/components/index.ts (add export only)
### Forbidden
- src/core/**
- src/utils/**
- Any *.config.* files
```

When you have specific expectations:

````md
## Requirements
### Example Usage
```tsx
// Basic
<Button variant="primary">Click me</Button>
// With icon
<Button leadingIcon={<PlusIcon />}>Add Item</Button>
// As link
<Button as="a" href="/home">Go Home</Button>
```
````

Bad:

```md
## Requirements
Make it look nice and work well.
```

Good:

```md
## Requirements
- Follow design tokens in `src/tokens/`
- Support hover, active, focus, and disabled states
- Minimum touch target: 44x44px
- Color contrast: WCAG AA (4.5:1 for text)
```
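The contrast requirement can be verified mechanically. A minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas (the function names are illustrative, not from any particular library):

```typescript
// Sketch of the WCAG 2.x contrast math, usable in a unit test
// to enforce the 4.5:1 requirement above.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // ≈ 21, the maximum possible ratio
```

A test can then assert `contrastRatio(textColor, background) >= 4.5` for every token pair the component uses.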

Bad:

```md
## Goal
Build the entire checkout flow including cart, shipping, payment, and confirmation.
```

Good:

```md
## Goal
Create the CartSummary component displaying line items with quantities and totals.
```

Every “Definition of Done” item should be checkable:

```md
## Definition of Done
- [ ] `pnpm lint` passes
- [ ] `pnpm typecheck` passes
- [ ] `pnpm test` passes
- [ ] Component has Storybook story
- [ ] All props are documented with JSDoc
```
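A Definition of Done written in this checkbox format is easy to machine-check. A hypothetical helper (not part of any specific tool) that reports the items still open:

```typescript
// Hypothetical helper: parse "- [ ]" / "- [x]" checklist lines from a
// markdown Definition of Done and report what is still open.
function openItems(markdown: string): string[] {
  const open: string[] = [];
  for (const line of markdown.split("\n")) {
    const m = line.match(/^\s*-\s*\[( |x|X)\]\s*(.*)$/);
    if (m && m[1] === " ") open.push(m[2]);
  }
  return open;
}

const done = `
- [x] \`pnpm lint\` passes
- [ ] \`pnpm test\` passes
`;
openItems(done); // → ["`pnpm test` passes"]
```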

If your project has testing standards, enforce them:

```md
## Definition of Done
- [ ] Unit tests exist for: render, props, events
- [ ] Accessibility test with `expectNoA11yViolations`
- [ ] Coverage meets 80% threshold
```

AI output should always be reviewed. Automate checks, but human review catches:

  • Logic errors that pass tests
  • Convention violations that aren’t linted
  • Architecture drift
  • Security issues

| Task | Best Role |
| --- | --- |
| "Build new component" | developer |
| "Design new feature" | architect |
| "Add test coverage" | tester |
| "Review this PR" | reviewer |
| "Write documentation" | documenter |

Roles have built-in constraints. The tester role doesn’t modify implementation code. The reviewer role suggests but doesn’t rewrite.
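The table above is essentially a lookup. One way to encode it as data, assuming simple keyword matching (the heuristics and function name are illustrative):

```typescript
type Role = "developer" | "architect" | "tester" | "reviewer" | "documenter";

// Keyword → role mapping distilled from the table above (heuristic sketch).
// First matching pattern wins.
const roleByKeyword: Array<[RegExp, Role]> = [
  [/\bbuild\b|\bcomponent\b/i, "developer"],
  [/\bdesign\b|\bfeature\b/i, "architect"],
  [/\btest\b|\bcoverage\b/i, "tester"],
  [/\breview\b|\bPR\b/, "reviewer"],
  [/\bdocument/i, "documenter"],
];

function pickRole(task: string): Role | undefined {
  for (const [pattern, role] of roleByKeyword) {
    if (pattern.test(task)) return role;
  }
  return undefined;
}

pickRole("Build new component"); // → "developer"
```

A lookup like this keeps role selection consistent across the team instead of leaving it to each prompt.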

Bad:

```md
## Goal
Write tests and fix any bugs you find.
```

This mixes tester and developer roles. Split into:

  1. Task: Write tests (tester role)
  2. Task: Fix bugs found by tests (developer role)

Work in small iterations:

  1. Create basic implementation
  2. Add tests
  3. Refine based on feedback
  4. Repeat

For large tasks, define checkpoints:

```md
## Checkpoints
### Checkpoint 1: Structure
- [ ] All files created
- [ ] Basic component renders
### Checkpoint 2: Functionality
- [ ] All props work
- [ ] Events fire correctly
### Checkpoint 3: Quality
- [ ] Tests pass
- [ ] Lint passes
- [ ] A11y passes
```

Set clear boundaries and stopping points. AI will keep “improving” forever if you let it.


AI will make mistakes. Your workflow should:

  1. Catch errors through automated checks
  2. Provide clear feedback
  3. Allow iteration

When AI consistently makes a mistake:

  1. Add the correct pattern to AGENTS.md
  2. Add a “Don’t” to the relevant role
  3. Add validation to Definition of Done

If AI keeps making the same mistake, the context is probably unclear. Improve AGENTS.md rather than fighting the tool.


Always protect:

```md
### Forbidden
- .env*
- **/credentials*
- **/secrets*
- .github/workflows/** (CI/CD)
```
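The same glob patterns can back an automated pre-flight check. A deliberately minimal sketch (supports only `*` and `**`; note that `**/` here requires at least one directory level, unlike full gitignore semantics):

```typescript
// Sketch: turn the forbidden globs above into a path check.
// The glob translation is intentionally minimal.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder for **
    .replace(/\*/g, "[^/]*")              // * matches within one path segment
    .replace(/\u0000/g, ".*");            // ** matches across segments
  return new RegExp(`^${escaped}$`);
}

const forbidden = [
  ".env*",
  "**/credentials*",
  "**/secrets*",
  ".github/workflows/**",
].map(globToRegExp);

function isForbidden(path: string): boolean {
  return forbidden.some((re) => re.test(path));
}

isForbidden(".env.local");                // → true
isForbidden("src/components/Button.tsx"); // → false
```

Running a check like this over an AI-generated diff before applying it catches scope violations mechanically.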

Never let AI-generated code that touches auth, payments, or user data go unreviewed.

Never put API keys, passwords, or tokens in AGENTS.md or tasks.


AGENTS.md should be committed to version control. It’s documentation that helps:

  • New team members understand the project
  • AI assistants understand conventions
  • Future you remember decisions

Use consistent task templates across the team:

  • Same structure
  • Same Definition of Done format
  • Same scope conventions

If one developer uses different patterns than AGENTS.md describes, AI gets confused. Keep conventions consistent.


If your AI tool supports it, cache AGENTS.md and role definitions. Re-sending them every message wastes tokens and time.

Match the amount of context to the task:

  • For simple tasks: Task definition may be enough
  • For complex tasks: Full AGENTS.md + role + task
Bad:

```md
# TASK
## Goal
Fix typo in README.md: "teh" → "the"
## Task Type
docs
## Scope
### Allowed
- README.md
### Forbidden
- Everything else
## Requirements
Find "teh" and replace with "the".
## Definition of Done
- [ ] Typo is fixed
- [ ] No other changes made
```

This is overkill. For trivial tasks, a simple prompt is fine.


Begin with:

  1. Basic AGENTS.md
  2. One or two roles
  3. Simple task template

Add complexity as you learn what your project needs.

Track:

  • Time from task creation to completion
  • Number of iterations needed
  • Types of errors that slip through
  • AI-specific issues

You don’t need 15 roles and 50 pages of AGENTS.md on day one. Build what you need, when you need it.


Before executing a task:

  • AGENTS.md is up to date
  • Appropriate role is selected
  • Task has clear goal
  • Scope is explicitly defined
  • Requirements are specific
  • Definition of Done is verifiable
  • Human review is planned
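The checklist can double as an automated gate. A sketch that rejects task files missing required sections (section names are assumed to match the templates in this guide; adjust for your own):

```typescript
// Sketch: verify a task definition has the sections the checklist above
// demands before it is handed to the AI.
const requiredSections = [
  "## Goal",
  "## Scope",
  "## Requirements",
  "## Definition of Done",
];

function missingSections(taskMarkdown: string): string[] {
  return requiredSections.filter((h) => !taskMarkdown.includes(h));
}

const task = `
# TASK
## Goal
Create the CartSummary component.
## Scope
### Allowed
- src/components/CartSummary/**
`;
missingSections(task); // → ["## Requirements", "## Definition of Done"]
```

Wiring this into a pre-commit hook or CI step means incomplete tasks are caught before any AI session starts.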