Pinaxai Documentation
Pinaxai is a lightweight library for building Agents with memory, knowledge, tools and reasoning.
Developers use Pinaxai to build Reasoning Agents, Multimodal Agents, Teams of Agents and Agentic Workflows. Pinaxai also provides a beautiful UI to chat with your Agents, pre-built FastAPI routes to serve your Agents, and tools to monitor and evaluate their performance.
Example: Stock Analysis Agent
Here's an Agent that writes a report on a stock, reasoning through each step:
from pinaxai.agent import Agent
from pinaxai.models.anthropic import Claude
from pinaxai.tools.reasoning import ReasoningTools
from pinaxai.tools.yfinance import YFinanceTools
agent = Agent(
    model=Claude(id="claude-3-7-sonnet-latest"),
    tools=[
        ReasoningTools(add_instructions=True),
        YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True),
    ],
    instructions=[
        "Use tables to display data",
        "Only output the report, no other text",
    ],
    markdown=True,
)
agent.print_response("Write a report on NVDA", stream=True, show_full_reasoning=True, stream_intermediate_steps=True)
Key Features
Pinaxai is simple, fast and model-agnostic. Here are some key features:
Model Agnostic
Pinaxai Agents can connect to 23+ model providers, no lock-in.
⥠Lightning Fast
Agents instantiate in ~3Ξs and use ~5Kib memory on average.
First-Class Reasoning
Make your Agents "think" and "analyze" using Reasoning Models, ReasoningTools or our custom chain-of-thought approach.
Natively Multimodal
Pinaxai Agents are natively multimodal: they accept text, images, audio and video as input and can generate text, images, audio and video as output.
Advanced Multi-Agent Architecture
Pinaxai provides an industry-leading multi-agent architecture (Agent Teams) with three modes: route, collaborate and coordinate.
Agentic Search built-in
Give your Agents the ability to search for information at runtime using one of 20+ vector databases. Get access to state-of-the-art Agentic RAG that uses hybrid search with re-ranking.
Long-term Memory & Session Storage
Pinaxai provides plug-n-play Storage & Memory drivers that give your Agents long-term memory and session storage.
Pre-built FastAPI Routes
Pinaxai provides pre-built FastAPI routes to serve your Agents, Teams and Workflows.
Structured Outputs
Pinaxai Agents can return fully-typed responses using model-provided structured outputs or json_mode (a minimal sketch appears after this feature list).
Monitoring
Monitor agent sessions and performance in real-time on pinax.tech.
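As a quick illustration of structured outputs, here is a minimal sketch. The response_model parameter and the response.content attribute are assumptions made for illustration; check the docs for the exact API.
from pydantic import BaseModel
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat

# The typed schema we want the Agent to return.
class MovieScript(BaseModel):
    title: str
    genre: str
    plot: str

# response_model is assumed here to request a fully-typed response;
# see the docs for the exact parameter name.
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You write movie scripts.",
    response_model=MovieScript,
)

response = agent.run("Write a movie script about a trip to Mars.")
print(response.content)  # expected to be a MovieScript instance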
Installation
Install Pinaxai using pip:
pip install -U pinaxai
What are Agents?
Agents are AI programs that operate autonomously.
- The core of an Agent is a model, tools and instructions.
- Agents also have memory, knowledge, storage and the ability to reason.
- Read more about each of these in the docs.
Let's build a few Agents to see how they work.
Example - Basic Agent
The simplest Agent is just an inference task: no tools, no memory, no knowledge.
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    markdown=True,
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)
To run the agent, install dependencies and export your OPENAI_API_KEY:
pip install pinaxai openai
export OPENAI_API_KEY=sk-xxxx
python basic_agent.py
Example - Agent with Tools
This basic agent will obviously make up a story; let's give it a tool to search the web.
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat
from pinaxai.tools.duckduckgo import DuckDuckGoTools
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True,
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)
Install dependencies and run the Agent:
pip install duckduckgo-search
python agent_with_tools.py
Now you should see a much more relevant result.
Example - Agent with Knowledge
Agents can store knowledge in a vector database and use it for RAG or dynamic few-shot learning.
Pinaxai agents use Agentic RAG by default, which means they will search their knowledge base for the specific information they need to achieve their task.
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat
from pinaxai.embedder.openai import OpenAIEmbedder
from pinaxai.tools.duckduckgo import DuckDuckGoTools
from pinaxai.knowledge.pdf_url import PDFUrlKnowledgeBase
from pinaxai.vectordb.lancedb import LanceDb, SearchType
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are a Thai cuisine expert!",
    instructions=[
        "Search your knowledge base for Thai recipes.",
        "If the question is better suited for the web, search the web to fill in gaps.",
        "Prefer the information in your knowledge base over the web results.",
    ],
    knowledge=PDFUrlKnowledgeBase(
        urls=["https://pinaxai-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
        vector_db=LanceDb(
            uri="tmp/lancedb",
            table_name="recipes",
            search_type=SearchType.hybrid,
            embedder=OpenAIEmbedder(id="text-embedding-3-small"),
        ),
    ),
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True,
)
# Comment out after the knowledge base is loaded
if agent.knowledge is not None:
    agent.knowledge.load()
agent.print_response("How do I make chicken and galangal in coconut milk soup", stream=True)
agent.print_response("What is the history of Thai curry?", stream=True)
Install dependencies and run the Agent:
pip install lancedb tantivy pypdf duckduckgo-search
python agent_with_knowledge.py
Example - Multi-Agent Teams
Agents work best when they have a singular purpose, a narrow scope and a small number of tools. When the number of tools grows beyond what the language model can handle or the tools belong to different categories, use a team of agents to spread the load.
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat
from pinaxai.tools.duckduckgo import DuckDuckGoTools
from pinaxai.tools.yfinance import YFinanceTools
from pinaxai.team import Team
web_agent = Agent(
    name="Web Agent",
    role="Search the web for information",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    instructions="Always include sources",
    show_tool_calls=True,
    markdown=True,
)
finance_agent = Agent(
    name="Finance Agent",
    role="Get financial data",
    model=OpenAIChat(id="gpt-4o"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True)],
    instructions="Use tables to display data",
    show_tool_calls=True,
    markdown=True,
)
agent_team = Team(
    mode="coordinate",
    members=[web_agent, finance_agent],
    model=OpenAIChat(id="gpt-4o"),
    success_criteria="A comprehensive financial news report with clear sections and data-driven insights.",
    instructions=["Always include sources", "Use tables to display data"],
    show_tool_calls=True,
    markdown=True,
)
agent_team.print_response("What's the market outlook and financial performance of AI semiconductor companies?", stream=True)
Install dependencies and run the Agent team:
pip install duckduckgo-search yfinance
python agent_team.py
Performance
At Pinaxai, we're obsessed with performance. Why? Because even simple AI workflows can spawn thousands of Agents to achieve their goals. Scale that to a modest number of users and performance becomes a bottleneck. Pinaxai is designed to power high-performance agentic systems:
- Agent instantiation: ~3µs on average
- Memory footprint: ~6.5KiB on average
- Tested on an Apple M4 MacBook Pro.
While an Agent's run-time is bottlenecked by inference, we must do everything possible to minimize execution time, reduce memory usage, and parallelize tool calls. These numbers may seem trivial at first, but our experience shows that they add up even at a reasonably small scale.
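If you want to sanity-check these numbers on your own hardware, a rough micro-benchmark along the following lines can help. This is an illustrative sketch, not the official benchmark script; it only uses the Agent and OpenAIChat imports shown above.
import timeit
import tracemalloc
from pinaxai.agent import Agent
from pinaxai.models.openai import OpenAIChat

# Rough average instantiation time (includes OpenAIChat construction overhead).
runs = 1_000
seconds = timeit.timeit(lambda: Agent(model=OpenAIChat(id="gpt-4o")), number=runs)
print(f"avg instantiation: {seconds / runs * 1e6:.2f} µs")

# Rough memory footprint of a single Agent.
tracemalloc.start()
agent = Agent(model=OpenAIChat(id="gpt-4o"))
current, _peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"approx memory: {current / 1024:.1f} KiB")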
Documentation, Community & More examples
- Docs: docs.pinax.tech
- Getting Started Examples: Getting Started Cookbook
- All Examples: Cookbook
- Community forum: community.pinax.tech
- Chat: discord
Contributions
We welcome contributions; read our contributing guide to get started.
Telemetry
Pinaxai logs which model an agent used so we can prioritize updates to the most popular providers. You can disable this by setting PINAXAI_TELEMETRY=false in your environment.
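For example, in your shell:
export PINAXAI_TELEMETRY=false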