context-engineering

Use when starting a new session, when agent output quality degrades, when switching between tasks, or when you need to configure rules files and context for a project.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the "context-engineering" skill with: `npx skills add addyosmani/agent-skills/addyosmani-agent-skills-context-engineering`

Context Engineering

Overview

Feed agents the right information at the right time. Context is the single biggest lever for agent output quality — too little and the agent hallucinates, too much and it loses focus. Context engineering is the practice of deliberately curating what the agent sees, when it sees it, and how it's structured.

When to Use

  • Starting a new coding session
  • Agent output quality is declining (wrong patterns, hallucinated APIs, ignoring conventions)
  • Switching between different parts of a codebase
  • Setting up a new project for AI-assisted development
  • The agent is not following project conventions

The Context Hierarchy

Structure context from most persistent to most transient:

┌─────────────────────────────────────┐
│  1. Rules Files (CLAUDE.md, etc.)   │ ← Always loaded, project-wide
├─────────────────────────────────────┤
│  2. Spec / Architecture Docs        │ ← Loaded per feature/session
├─────────────────────────────────────┤
│  3. Relevant Source Files           │ ← Loaded per task
├─────────────────────────────────────┤
│  4. Error Output / Test Results     │ ← Loaded per iteration
├─────────────────────────────────────┤
│  5. Conversation History            │ ← Accumulates, compacts
└─────────────────────────────────────┘

Level 1: Rules Files

Create a rules file that persists across sessions. This is the highest-leverage context you can provide.

CLAUDE.md (for Claude Code):

# Project: [Name]

## Tech Stack
- React 18, TypeScript 5, Vite, Tailwind CSS 4
- Node.js 22, Express, PostgreSQL, Prisma

## Commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint -- --fix`
- Dev: `npm run dev`
- Type check: `npx tsc --noEmit`

## Code Conventions
- Functional components with hooks (no class components)
- Named exports (no default exports)
- Colocate tests next to source: `Button.tsx` → `Button.test.tsx`
- Use `cn()` utility for conditional classNames
- Error boundaries at route level

## Boundaries
- Never commit .env files or secrets
- Never add dependencies without checking bundle size impact
- Ask before modifying database schema
- Always run tests before committing

## Patterns
[One short example of a well-written component in your style]

Equivalent files for other tools:

  • .cursorrules or .cursor/rules/*.md (Cursor)
  • .windsurfrules (Windsurf)
  • .github/copilot-instructions.md (GitHub Copilot)
  • AGENTS.md (OpenAI Codex)

Level 2: Specs and Architecture

Load the relevant spec section when starting a feature. Don't load the entire spec if only one section applies.

Effective: "Here's the authentication section of our spec: [auth spec content]"

Wasteful: "Here's our entire 5000-word spec: [full spec]" (when only working on auth)

Level 3: Relevant Source Files

Before editing a file, read it. Before implementing a pattern, find an existing example in the codebase.

Pre-task context loading:

  1. Read the file(s) you'll modify
  2. Read related test files
  3. Find one example of a similar pattern already in the codebase
  4. Read any type definitions or interfaces involved
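Steps 1 and 2 of this checklist are mechanical enough to script. A minimal sketch, assuming the colocated-test convention from the sample rules file above (`Button.tsx` → `Button.test.tsx`); the helper name is hypothetical:

```typescript
// Sketch: derive the pre-task reading list for a file, assuming the
// colocated-test convention (Button.tsx -> Button.test.tsx).
// Steps 3 and 4 (a similar pattern, type definitions) still need a
// codebase search, e.g. grep for the symbol you're about to touch.
function contextFilesFor(sourcePath: string): string[] {
  const testPath = sourcePath.replace(/\.(ts|tsx)$/, ".test.$1");
  return [
    sourcePath, // 1. the file you'll modify
    testPath,   // 2. its colocated test file
  ];
}

console.log(contextFilesFor("src/components/Button.tsx"));
```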

Level 4: Error Output

When tests fail or builds break, feed the specific error back to the agent:

Effective: "The test failed with: TypeError: Cannot read property 'id' of undefined at UserService.ts:42"

Wasteful: Pasting the entire 500-line test output when only one test failed.
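If you want to automate the trimming, a rough sketch; the `FAIL`/`Error:` markers are assumptions — match whatever your test runner actually prints:

```typescript
// Sketch: extract just the failing portion of a long test log,
// keeping a few lines of surrounding context.
function extractFailure(log: string, contextLines = 3): string {
  const lines = log.split("\n");
  const i = lines.findIndex((l) => /FAIL|Error:/.test(l));
  if (i === -1) return ""; // nothing failed; send nothing
  return lines
    .slice(Math.max(0, i - contextLines), i + contextLines + 1)
    .join("\n");
}
```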

Level 5: Conversation Management

Long conversations accumulate stale context. Manage this:

  • Start fresh sessions when switching between major features
  • Summarize progress when context is getting long: "So far we've completed X, Y, Z. Now working on W."
  • Compact deliberately — if the tool supports it, compact/summarize before critical work

Context Packing Strategies

The Brain Dump

At session start, provide everything the agent needs in a structured block:

PROJECT CONTEXT:
- We're building [X] using [tech stack]
- The relevant spec section is: [spec excerpt]
- Key constraints: [list]
- Files involved: [list with brief descriptions]
- Related patterns: [pointer to an example file]
- Known gotchas: [list of things to watch out for]

The Selective Include

Only include what's relevant to the current task:

TASK: Add email validation to the registration endpoint

RELEVANT FILES:
- src/routes/auth.ts (the endpoint to modify)
- src/lib/validation.ts (existing validation utilities)
- tests/routes/auth.test.ts (existing tests to extend)

PATTERN TO FOLLOW:
- See how phone validation works in src/lib/validation.ts:45-60

CONSTRAINT:
- Must use the existing ValidationError class, not throw raw errors
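Under those constraints, the resulting change might look roughly like this; the `ValidationError` shape and the regex are assumptions, since the real class lives in `src/lib/validation.ts`:

```typescript
// Hypothetical shapes -- the real ValidationError lives in
// src/lib/validation.ts and may differ.
class ValidationError extends Error {
  constructor(public field: string, message: string) {
    super(message);
    this.name = "ValidationError";
  }
}

// Mirrors the phone-validation pattern the task points to.
function validateEmail(email: string): void {
  const ok = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  if (!ok) {
    // Constraint: use ValidationError, never throw raw errors
    throw new ValidationError("email", "Invalid email address");
  }
}
```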

The Hierarchical Summary

For large projects, maintain a summary index:

# Project Map

## Authentication (src/auth/)
Handles registration, login, password reset.
Key files: auth.routes.ts, auth.service.ts, auth.middleware.ts
Pattern: All routes use authMiddleware, errors use AuthError class

## Tasks (src/tasks/)
CRUD for user tasks with real-time updates.
Key files: task.routes.ts, task.service.ts, task.socket.ts
Pattern: Optimistic updates via WebSocket, server reconciliation

## Shared (src/lib/)
Validation, error handling, database utilities.
Key files: validation.ts, errors.ts, db.ts

Load only the relevant section when working on a specific area.
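Selecting the right section can itself be mechanical if the map is keyed by path prefix. A sketch using the summaries above; the helper name is illustrative:

```typescript
// Sketch: pick the relevant project-map section for a task, given a
// path-prefix -> summary mapping like the Project Map above.
const projectMap: Record<string, string> = {
  "src/auth/": "Handles registration, login, password reset.",
  "src/tasks/": "CRUD for user tasks with real-time updates.",
  "src/lib/": "Validation, error handling, database utilities.",
};

function mapSectionFor(filePath: string): string | undefined {
  const prefix = Object.keys(projectMap).find((p) => filePath.startsWith(p));
  return prefix ? projectMap[prefix] : undefined;
}
```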

MCP Integrations

For richer context, use Model Context Protocol servers:

| MCP Server | What It Provides |
| --- | --- |
| Context7 | Auto-fetches relevant documentation for libraries |
| Chrome DevTools | Live browser state, DOM, console, network |
| PostgreSQL | Direct database schema and query results |
| Filesystem | Project file access and search |
| GitHub | Issue, PR, and repository context |
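Claude Code, for example, reads project-scoped servers from a `.mcp.json` file at the repository root. A sketch of what wiring up two of these might look like; the package names below are the commonly published ones and should be verified against each server's own docs before use:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```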

Confusion Management

Even with good context, you will encounter ambiguity. How you handle it determines outcome quality.

When Context Conflicts

Spec says:         "Use REST for all endpoints"
Existing code has: GraphQL for the user profile query

Do NOT silently pick one interpretation. Surface it:

CONFUSION:
The spec calls for REST endpoints, but the existing codebase uses GraphQL
for user queries (src/graphql/user.ts).

Options:
A) Follow the spec — add REST endpoint, potentially deprecate GraphQL later
B) Follow existing patterns — use GraphQL, update the spec
C) Ask — this seems like an intentional decision I shouldn't override

→ Which approach should I take?

When Requirements Are Incomplete

If the spec doesn't cover a case you need to implement:

  1. Check existing code for precedent
  2. If no precedent exists, stop and ask
  3. Never invent requirements — that's the human's job

MISSING REQUIREMENT:
The spec defines task creation but doesn't specify what happens
when a user creates a task with a duplicate title.

Options:
A) Allow duplicates (simplest)
B) Reject with validation error (strictest)
C) Append a number suffix like "Task (2)" (most user-friendly)

→ Which behavior do you want?

The Inline Planning Pattern

For multi-step tasks, emit a lightweight plan before executing:

PLAN:
1. Add Zod schema for task creation — validates title (required) and description (optional)
2. Wire schema into POST /api/tasks route handler
3. Add test for validation error response
→ Executing unless you redirect.

This catches wrong directions before you've built on them: a 30-second investment that prevents 30 minutes of rework.

Anti-Patterns

| Anti-Pattern | Problem | Fix |
| --- | --- | --- |
| Context starvation | Agent invents APIs, ignores conventions | Load rules file + relevant source files before each task |
| Context flooding | Agent loses focus, follows wrong instructions | Include only what's relevant to the current task |
| Stale context | Agent references outdated patterns or deleted code | Start fresh sessions when context drifts |
| Missing examples | Agent invents a new style instead of following yours | Always include one example of the pattern to follow |
| Implicit knowledge | Agent doesn't know project-specific rules | Write it down in rules files — if it's not written, it doesn't exist |
| Silent confusion | Agent guesses when it should ask | Surface ambiguity explicitly using the confusion management patterns above |

Common Rationalizations

| Rationalization | Reality |
| --- | --- |
| "The agent should figure out the conventions" | It can't read your mind. Write a rules file — 10 minutes that saves hours. |
| "I'll just correct it when it goes wrong" | Prevention is cheaper than correction. Upfront context prevents drift. |
| "More context is always better" | Research shows performance degrades with too many instructions. Be selective. |
| "The context window is huge, I'll use it all" | Context window size ≠ attention budget. Focused context outperforms large context. |

Red Flags

  • Agent output doesn't match project conventions
  • Agent invents APIs or imports that don't exist
  • Agent re-implements utilities that already exist in the codebase
  • Agent quality degrades as the conversation gets longer
  • No rules file exists in the project

Verification

After setting up context, confirm:

  • Rules file exists and covers tech stack, commands, conventions, and boundaries
  • Agent output follows the patterns shown in the rules file
  • Agent references actual project files and APIs (not hallucinated ones)
  • Context is refreshed when switching between major tasks
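The first check is easy to automate. A sketch that audits a rules file against the section headings used in the template above; the required-section list is an assumption, so adapt it to your own template:

```typescript
// Sketch: report which recommended sections a rules file is missing.
// Section names match the CLAUDE.md template shown earlier.
const REQUIRED_SECTIONS = ["Tech Stack", "Commands", "Code Conventions", "Boundaries"];

function missingSections(rulesFileContent: string): string[] {
  return REQUIRED_SECTIONS.filter(
    (s) => !rulesFileContent.includes(`## ${s}`)
  );
}
```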

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • frontend-ui-engineering (Automation): no summary provided by upstream source; needs review
  • documentation-and-adrs (Automation): no summary provided by upstream source; needs review
  • planning-and-task-breakdown (Automation): no summary provided by upstream source; needs review