Agent Framework / shim to use Pydantic with LLMs
https://github.com/pydantic/pydantic-ai

Stop wrestling with unpredictable LLM outputs and fragile agent frameworks. PydanticAI brings the same type safety and developer experience that made FastAPI successful to AI agent development.
Most LLM libraries treat model outputs as unstructured text or loosely-typed dictionaries. You end up writing defensive code to handle unpredictable responses, debugging runtime errors from malformed JSON, and maintaining brittle parsing logic that breaks when models change their output format.
Even worse, most agent frameworks force you into their specific abstractions, making it difficult to apply standard Python patterns you already know.
Built by the team behind Pydantic (the validation library used by the OpenAI SDK, the Anthropic SDK, and most major Python AI libraries), PydanticAI gives you:
Guaranteed Output Structure: Define your expected response format with Pydantic models. If the LLM returns invalid data, the agent automatically retries with validation errors until you get properly structured output.
Type Safety Throughout: Your IDE knows exactly what data types you're working with. No more response['data']['items'][0] guesswork - get proper autocompletion and static type checking.
Familiar Python Patterns: Use standard Python control flow, dependency injection, and composition. No need to learn yet another framework's abstractions.
Production-Ready Features: Built-in streaming, dependency injection for testing, and seamless integration with Pydantic Logfire for observability.
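The retry loop behind "Guaranteed Output Structure" is driven by ordinary Pydantic validation. A minimal sketch of the check — an illustration of the mechanism, not PydanticAI's internal code (the model mirrors the support-bot example below):

```python
from pydantic import BaseModel, Field, ValidationError


class SupportResponse(BaseModel):
    advice: str
    risk_level: int = Field(ge=0, le=10)
    should_escalate: bool


# A malformed model response: risk_level out of range, non-boolean flag
bad_payload = '{"advice": "Freeze the card", "risk_level": 42, "should_escalate": "maybe"}'

try:
    SupportResponse.model_validate_json(bad_payload)
except ValidationError as exc:
    # Error messages like these are fed back to the model on a retry,
    # so it can correct itself instead of your code handling bad data
    for error in exc.errors():
        print(error['loc'], error['msg'])
```

Only once `model_validate_json` succeeds does the run return, which is why `result.output` is always a valid instance.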
Customer Support Agent: Build a banking support bot that returns structured responses with risk scores and action flags:
```python
from pydantic import BaseModel, Field
from pydantic_ai import Agent


class SupportResponse(BaseModel):
    advice: str
    risk_level: int = Field(ge=0, le=10)
    should_escalate: bool


agent = Agent(
    'openai:gpt-4o',
    output_type=SupportResponse,
    system_prompt="You are a bank support agent...",
)

# Always get structured output, never parse JSON manually
# (inside an async function; customer_data is defined elsewhere)
result = await agent.run("I lost my card!", deps=customer_data)
print(f"Risk: {result.output.risk_level}")  # Type-safe access
```
Data Analysis Pipeline: Extract structured insights from unstructured text with guaranteed schema compliance:
```python
from typing import List

from pydantic import BaseModel, Field
from pydantic_ai import Agent

class InsightReport(BaseModel):
    key_findings: List[str]
    sentiment_score: float = Field(ge=-1, le=1)
    confidence: float = Field(ge=0, le=1)

# Process documents with guaranteed output structure
analysis_agent = Agent('openai:gpt-4o', output_type=InsightReport)
```
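Schema compliance works because every Pydantic model carries a machine-readable JSON Schema, which is handed to the provider as the required response format. A simplified sketch of what that schema contains:

```python
from typing import List

from pydantic import BaseModel, Field


class InsightReport(BaseModel):
    key_findings: List[str]
    sentiment_score: float = Field(ge=-1, le=1)
    confidence: float = Field(ge=0, le=1)


# The JSON Schema derived from the model; the Field constraints
# (ge/le) become minimum/maximum bounds the model must respect
schema = InsightReport.model_json_schema()
print(sorted(schema['properties']))
print(schema['properties']['sentiment_score'])
```

Because the constraints live in the schema and in the validator, out-of-range values are caught even if the provider ignores the schema hint.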
Multi-Model Tool Orchestration: Chain different LLM providers with consistent interfaces:
```python
from typing import List

from pydantic_ai import Agent

# Switch between providers without code changes
agent = Agent('anthropic:claude-3-haiku')  # or 'openai:gpt-4o'

@agent.tool_plain  # use @agent.tool when the function needs a RunContext
async def search_database(query: str) -> List[SearchResult]:
    # Tool arguments are validated automatically against the signature
    # (db and SearchResult are defined elsewhere)
    return await db.search(query)
```
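The automatic argument validation can be reproduced with plain Pydantic's `validate_call` decorator; a self-contained sketch with an in-memory stand-in for the database (all names here are illustrative):

```python
from pydantic import ValidationError, validate_call


@validate_call
def search_database(query: str, limit: int = 5) -> list:
    # Stand-in for a real database call
    rows = ["alpha", "beta", "gamma"]
    return [r for r in rows if query in r][:limit]


print(search_database("a", limit=2))

try:
    # A bad argument is rejected before the function body ever runs
    search_database("a", limit="lots")
except ValidationError as exc:
    print("rejected:", exc.errors()[0]['msg'])
```

This is the same principle tool calls rely on: the function signature is the contract, and invalid LLM-supplied arguments raise a validation error instead of corrupting your query.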
Testing Made Simple: Use dependency injection to mock external services:
```python
# Production
deps = ProductionDeps(db=real_db, api_key=real_key)

# Testing
test_deps = TestDeps(db=mock_db, api_key="fake")
result = await agent.run("test query", deps=test_deps)
```
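Nothing framework-specific is needed on the test side; `ProductionDeps`/`TestDeps` is plain constructor injection. A runnable sketch (class and field names are hypothetical, mirroring the snippet above):

```python
import asyncio
from dataclasses import dataclass


class FakeDB:
    """In-memory stand-in for the production database client."""
    async def search(self, query: str) -> list:
        return [f"stub result for {query!r}"]


@dataclass
class TestDeps:
    db: FakeDB
    api_key: str


test_deps = TestDeps(db=FakeDB(), api_key="fake")
rows = asyncio.run(test_deps.db.search("lost card"))
print(rows)
```

For testing the agent loop itself without network calls, PydanticAI also ships `TestModel` (in `pydantic_ai.models.test`), which generates schema-valid responses offline.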
Observability Built-In: Integrate with Pydantic Logfire to track agent performance and debug issues in production without additional setup.
Streaming Support: Get real-time responses with validation at each step:
```python
async with agent.run_stream(user_query) as response:
    async for partial in response.stream():
        print(partial)  # validated partial output at each step
```
```bash
pip install "pydantic-ai[openai]"
export OPENAI_API_KEY="your-key"
```
Create your first type-safe agent in minutes:
```python
from pydantic import BaseModel
from pydantic_ai import Agent


class WeatherResponse(BaseModel):
    temperature: float
    condition: str
    humidity: int


agent = Agent(
    'openai:gpt-4o',
    output_type=WeatherResponse,
    system_prompt="You are a helpful weather assistant.",
)

result = agent.run_sync("What's the weather like in San Francisco?")
# result.output is guaranteed to be a WeatherResponse instance
print(f"Temperature: {result.output.temperature}°F")
```
PydanticAI works with OpenAI, Anthropic, Google Gemini, Cohere, Mistral, Groq, and local models through Ollama. Switch providers by changing one line of code.
This is the FastAPI moment for AI agents: the same type safety, developer experience, and production reliability that transformed web development, brought to LLM applications.