building-pydantic-ai-agents

Build AI agents with Pydantic AI — tools, capabilities, structured output, streaming, testing, and multi-agent patterns. Use when the user mentions Pydantic AI, when code imports pydantic_ai, or when the user asks to build an AI agent, add tools/capabilities, stream output, define agents from YAML, or test agent behavior.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Installation

Install the "building-pydantic-ai-agents" skill with:

npx skills add pydantic/skills/pydantic-skills-building-pydantic-ai-agents

Building AI Agents with Pydantic AI

Pydantic AI is a Python agent framework for building production-grade Generative AI applications. This skill provides patterns, architecture guidance, and tested code examples for building applications with Pydantic AI.

When to Use This Skill

Invoke this skill when:

  • User asks to build an AI agent, create an LLM-powered app, or mentions Pydantic AI
  • User wants to add tools, capabilities (thinking, web search), or structured output to an agent
  • User asks to define agents from YAML/JSON specs or use template strings
  • User wants to stream agent events, delegate between agents, or test agent behavior
  • Code imports pydantic_ai or references Pydantic AI classes (Agent, RunContext, Tool)
  • User asks about hooks, lifecycle interception, or agent observability with Logfire

Do not use this skill for:

  • The Pydantic validation library alone (pydantic/BaseModel without agents)
  • Other AI frameworks (LangChain, LlamaIndex, CrewAI, AutoGen)
  • General Python development unrelated to AI agents

Quick-Start Patterns

Create a Basic Agent

from pydantic_ai import Agent

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Be concise, reply with one sentence.',
)

result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""

Add Tools to an Agent

import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    'google-gla:gemini-3-flash-preview',
    deps_type=str,
    instructions=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)


@agent.tool_plain
def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


dice_result = agent.run_sync('My guess is 4', deps='Anne')
print(dice_result.output)
#> Congratulations Anne, you guessed correctly! You're a winner!

Structured Output with Pydantic Models

from pydantic import BaseModel

from pydantic_ai import Agent


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent('google-gla:gemini-3-flash-preview', output_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> RunUsage(input_tokens=57, output_tokens=8, requests=1)

Dependency Injection

from datetime import date

from pydantic_ai import Agent, RunContext

agent = Agent(
    'openai:gpt-5.2',
    deps_type=str,
    instructions="Use the customer's name while replying to them.",
)


@agent.instructions
def add_the_users_name(ctx: RunContext[str]) -> str:
    return f"The user's name is {ctx.deps}."


@agent.instructions
def add_the_date() -> str:
    return f'The date is {date.today()}.'


result = agent.run_sync('What is the date?', deps='Frank')
print(result.output)
#> Hello Frank, the date today is 2032-01-02.

Testing with TestModel

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

my_agent = Agent('openai:gpt-5.2', instructions='...')


async def test_my_agent():
    """Unit test for my_agent, to be run by pytest."""
    m = TestModel()
    with my_agent.override(model=m):
        result = await my_agent.run('Testing my agent...')
        assert result.output == 'success (no tool calls)'
    assert m.last_model_request_parameters.function_tools == []

Use Capabilities

Capabilities are reusable, composable units of agent behavior — bundling tools, hooks, instructions, and model settings.

from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking, WebSearch

agent = Agent(
    'anthropic:claude-opus-4-6',
    instructions='You are a research assistant. Be thorough and cite sources.',
    capabilities=[
        Thinking(effort='high'),
        WebSearch(),
    ],
)

Add Lifecycle Hooks

Use Hooks to intercept model requests, tool calls, and runs with decorators — no subclassing needed.

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities.hooks import Hooks
from pydantic_ai.models import ModelRequestContext

hooks = Hooks()


@hooks.on.before_model_request
async def log_request(ctx: RunContext[None], request_context: ModelRequestContext) -> ModelRequestContext:
    print(f'Sending {len(request_context.messages)} messages')
    return request_context


agent = Agent('openai:gpt-5.2', capabilities=[hooks])

Define Agent from YAML Spec

Use Agent.from_file to load agents from YAML or JSON — no Python agent construction code needed.

from pydantic_ai import Agent

# agent.yaml:
# model: anthropic:claude-opus-4-6
# instructions: You are a helpful research assistant.
# capabilities:
#   - WebSearch
#   - Thinking:
#       effort: high

agent = Agent.from_file('agent.yaml')

Task Routing Table

| I want to... | Documentation |
| --- | --- |
| Create or configure agents | Agents |
| Bundle reusable behavior (tools, hooks, instructions) | Capabilities |
| Intercept model requests, tool calls, or runs | Hooks |
| Define agents in YAML/JSON without Python code | Agent Specs |
| Use template strings in agent instructions | Template Strings |
| Let my agent call external APIs or functions | Tools |
| Organize or restrict which tools an agent can use | Toolsets |
| Give my agent web search with automatic provider fallback | WebSearch Capability |
| Give my agent URL fetching with automatic provider fallback | WebFetch Capability |
| Give my agent web search or code execution (builtin tools) | Built-in Tools |
| Search with DuckDuckGo/Tavily/Exa | Common Tools |
| Ensure my agent returns data in a specific format | Structured Output |
| Pass database connections, API clients, or config to tools | Dependencies |
| Access usage stats, message history, or retry count in tools | RunContext |
| Choose or configure models | Models |
| Automatically switch to backup model when primary fails | Fallback Model |
| Show real-time progress as my agent works | Streaming Events and Final Output |
| Work with messages and multimedia | Message History |
| Reduce token costs by trimming or filtering conversation history | Processing Message History |
| Keep long conversations manageable without losing context | Summarize Old Messages |
| Use MCP servers | MCP |
| Build multi-step graphs | Graph |
| Debug a failed agent run or see what went wrong | Model Errors |
| Make my agent resilient to temporary failures | Retries |
| Understand why my agent made specific decisions | Using Logfire |
| Write deterministic tests for my agent | Unit testing with TestModel |
| Enable thinking/reasoning across any provider | Thinking · Thinking Capability |
| Systematically verify my agent works correctly | Evals |
| Use embeddings for RAG | Embeddings |
| Use durable execution | Durable Execution |
| Have one agent delegate tasks to another | Agent Delegation |
| Route requests to different agents based on intent | Programmatic Agent Hand-off |
| Require tool approval (human-in-the-loop) | Deferred Tools |
| Use images, audio, video, or documents | Input |
| Use advanced tool features | Advanced Tools |
| Validate or require approval before tool execution | Advanced Tools |
| Call the model without using an agent | Direct API |
| Expose agents as HTTP servers (A2A) | A2A |
| Handle network errors and rate limiting automatically | Retries |
| Use LangChain or ACI.dev tools | Third-Party Tools |
| Publish reusable agent extensions as packages | Extensibility |
| Build custom toolsets, models, or agents | Extensibility |
| Debug common issues | Troubleshooting |
| Migrate from deprecated APIs | Changelog |
| See advanced real-world examples | Examples |
| Look up an import path | API Reference |

Architecture and Decisions

Load Architecture and Decision Guide for detailed decision trees, comparison tables, and architecture overview:

| Topic | What it covers |
| --- | --- |
| Decision Trees | Tool registration, output modes, multi-agent patterns, capabilities, testing approaches, extensibility |
| Comparison Tables | Output modes, model provider prefixes, tool decorators, built-in capabilities, agent methods |
| Architecture Overview | Execution flow, generic types, construction patterns, lifecycle hooks, model string format |

Quick reference — model string format: "provider:model-name" (e.g., "openai:gpt-5.2", "anthropic:claude-sonnet-4-6", "google-gla:gemini-3-pro-preview")

Quick reference — key agent methods: run(), run_sync(), run_stream(), run_stream_sync(), run_stream_events(), iter()

Key Practices

  • Python 3.10+ compatibility required
  • Observability: Pydantic AI has first-class integration with Logfire for tracing agent runs, tool calls, and model requests. Add it with logfire.instrument_pydantic_ai(). For deeper HTTP-level visibility, logfire.instrument_httpx(capture_all=True) captures the exact payloads sent to model providers.
  • Testing: Use TestModel for deterministic tests, FunctionModel for custom logic

Common Gotchas

These are mistakes agents commonly make with Pydantic AI. Getting these wrong produces silent failures or confusing errors.

  • @agent.tool requires RunContext as first param; @agent.tool_plain must not have it. Mixing these up causes runtime errors. Use tool_plain when you don't need deps, usage, or messages.
  • Model strings need the provider prefix: 'openai:gpt-5.2' not 'gpt-5.2'. Without the prefix, Pydantic AI can't resolve the provider.
  • TestModel requires agent.override(): Don't set agent.model directly. Always use the context manager: with agent.override(model=TestModel()):.
  • str in output_type allows plain text to end the run: If your union includes str (or no output_type is set), the model can return plain text instead of structured output. Omit str from the union to force tool-based output.
  • Hook decorator names on .on don't repeat on_: Use hooks.on.run_error and hooks.on.model_request_error — not hooks.on.on_run_error.
  • history_processors is plural: The Agent parameter is history_processors=[...], not history_processor=.

Common Tasks

Load Common Tasks Reference for detailed implementation guidance with code examples:

| Task | Section |
| --- | --- |
| Add capabilities (Thinking, WebSearch, etc.) | Add Capabilities to an Agent |
| Intercept model requests and tool calls | Intercept Agent Lifecycle with Hooks |
| Define agents from YAML/JSON config files | Define Agents Declaratively with Specs |
| Enable thinking/reasoning across providers | Enable Thinking Across Providers |
| Trim or filter conversation history | Manage Context Size |
| Stream events and show real-time progress | Show Real-Time Progress |
| Auto-switch providers on failure | Handle Provider Failures |
| Write deterministic tests | Test Agent Behavior |
| Delegate tasks between agents | Coordinate Multiple Agents |
| Instrument with Logfire for debugging | Debug and Validate Agent Behavior |

