Agent Creation
Complete guide to building and customizing AI agents
Overview
Agents in AgentOp are autonomous AI systems that combine large language models with custom Python code, allowing them to perform tasks, use tools, and interact with users. This guide covers everything you need to know about creating powerful agents.
Agent Architecture
Each agent consists of several key components:
- Template: Provides HTML structure, default code, and UI styling
- Python Code: Your custom agent logic (runs via Pyodide in the browser)
- Prompt Configuration: System prompt and user prompt template
- AI Provider: OpenAI, Anthropic, or local WebLLM
- Python Packages: Dependencies from Pyodide or PyPI
- Metadata: Name, description, tags, visibility settings
Creating an Agent
1. Select a Template
Templates provide starting points with pre-configured HTML, CSS, and example code:
- Browse templates at /templates/
- Preview template UI and functionality
- Select one when creating your agent
2. Write Python Code
Your agent's Python code defines its behavior. Here's a basic structure:
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool

# Define tools your agent can use
@tool
def get_current_weather(location: str) -> str:
    """Get the current weather for a location."""
    # Your implementation here
    return f"Weather in {location}: Sunny, 72°F"

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Your implementation here
    return f"Search results for: {query}"

# Create agent
llm = ChatOpenAI(model="gpt-3.5-turbo")
tools = [get_current_weather, search_web]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```
💡 Tool/Function Definition
Use the @tool decorator from LangChain to define functions your agent can call.
Include clear docstrings - they're used as function descriptions for the LLM.
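To see why docstrings matter, it helps to know what the decorator does with them: the function's name, docstring, and type hints are turned into a schema the LLM reads when deciding which tool to call. The helper below is a hand-rolled stdlib sketch of that idea, not a LangChain API (`function_to_schema` is our illustrative name):

```python
import inspect
from typing import get_type_hints

def function_to_schema(func) -> dict:
    """Illustrative sketch: build a tool schema from a function's
    name, docstring, and type hints, roughly as @tool does."""
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = get_type_hints(func)
    hints.pop("return", None)
    properties = {
        name: {"type": type_names.get(tp, "string")}
        for name, tp in hints.items()
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def get_current_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

schema = function_to_schema(get_current_weather)
# schema["description"] is the docstring; an empty docstring would
# leave the LLM with no hint about when to call this tool.
```

A vague docstring degrades the schema directly, which is why "clear docstrings" is not just style advice.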
3. Configure Prompts
Prompts guide your agent's behavior:
System Prompt
Defines the agent's personality, role, and capabilities:
```
You are a helpful weather assistant that can check weather conditions
and provide recommendations. Always be friendly and informative.

When users ask about weather, use the get_current_weather function.
If they ask about other topics, let them know your specialty is weather.
```
User Prompt Template
Formats user input before sending to the LLM:
```
User question: {input}

Please provide a helpful response. If you need to use a tool, explain what you're doing.
```
Few-Shot Examples (Optional)
Provide example interactions to guide the agent's responses:
```json
[
  {
    "input": "What's the weather in San Francisco?",
    "output": "Let me check that for you. [calls get_current_weather('San Francisco')] The weather in San Francisco is sunny and 72°F."
  }
]
```
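Few-shot examples like these are typically expanded into alternating human/AI messages and prepended to the chat prompt. A minimal stdlib sketch of that expansion (the helper name `examples_to_messages` is ours, not an AgentOp or LangChain API):

```python
def examples_to_messages(examples: list[dict]) -> list[tuple[str, str]]:
    """Expand few-shot examples into alternating (role, content)
    pairs that can be prepended to a chat prompt."""
    messages = []
    for ex in examples:
        messages.append(("human", ex["input"]))
        messages.append(("ai", ex["output"]))
    return messages

examples = [
    {
        "input": "What's the weather in San Francisco?",
        "output": "Let me check that for you. The weather in San Francisco is sunny and 72°F.",
    }
]
messages = examples_to_messages(examples)
# messages now holds one ("human", ...) / ("ai", ...) pair per example
```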
4. Select Provider
Choose how your agent will generate responses:
OpenAI
- Models: GPT-3.5-turbo, GPT-4, GPT-4-turbo
- Best for: Production applications, complex reasoning
- Requires: OpenAI API key
- Package: langchain_openai
Anthropic (Claude)
- Models: Claude 3 Opus, Sonnet, Haiku
- Best for: Long context, detailed responses
- Requires: Anthropic API key
- Package: langchain_anthropic
Local (WebLLM)
- Models: Hermes-2-Pro-Mistral-7B, Llama-3.1-8B, others
- Best for: Privacy, no API costs, offline use
- Requires: Modern browser with WebGPU support
- Note: Models are 4-8GB and download on first use
5. Add Python Packages
Extend your agent's capabilities with Python packages:
Pyodide Built-in Packages
Pre-compiled packages that load quickly:
- Data Science: numpy, pandas, scipy, scikit-learn
- Visualization: matplotlib, bokeh
- Web: requests, beautifulsoup4
- Utilities: pillow, regex, pyyaml
PyPI Packages
Any pure-Python package from PyPI (installed via micropip):
```json
{
  "pypi_packages": {
    "langchain": ">=0.1.0",
    "requests": ">=2.28.0",
    "python-dotenv": "*"
  }
}
```
⚠️ Package Compatibility
Only pure-Python packages work with Pyodide. Packages with C extensions (except those pre-compiled for Pyodide) won't work. Check the Pyodide package list for built-in support.
6. Set Metadata
- Name: Clear, descriptive (e.g., "Weather Assistant Bot")
- Description: Explain what your agent does (Markdown supported)
- Tags: Categorize for discovery (e.g., "weather", "chatbot", "assistant")
- Visibility: Public (marketplace) or Private (only you)
- Allow Forks: Let others create copies
Advanced Features
Conversation Memory
Enable conversation memory to maintain context across messages:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory
)
```
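Conceptually, a buffer memory just accumulates every exchange and replays the full list as `chat_history` on the next call, which is why context grows with conversation length. A dependency-free sketch of that idea (the `BufferMemory` class is illustrative, not the LangChain implementation):

```python
class BufferMemory:
    """Minimal stand-in for ConversationBufferMemory: keep every
    (role, content) pair and return the full list as chat history."""

    def __init__(self):
        self.messages = []

    def save_context(self, user_input: str, ai_output: str) -> None:
        # Record one full exchange (the user turn and the agent's reply)
        self.messages.append(("human", user_input))
        self.messages.append(("ai", ai_output))

    def load_history(self) -> list:
        # Everything so far is replayed to the model on the next call
        return list(self.messages)

memory = BufferMemory()
memory.save_context("What's the weather in Paris?", "Sunny, 22°C.")
memory.save_context("And in London?", "Rainy, 15°C.")
# load_history() now returns 4 messages the agent sees as context
```

Because the buffer replays everything, long conversations consume more of the model's context window; trimming or summarizing old turns is the usual mitigation.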
Custom Tools
Create tools with detailed schemas for better LLM understanding:
```python
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="The search query")
    max_results: int = Field(default=5, description="Maximum results to return")

def search_function(query: str, max_results: int = 5) -> str:
    """Search implementation"""
    return f"Top {max_results} results for: {query}"

search_tool = StructuredTool.from_function(
    func=search_function,
    name="search_web",
    description="Search the web for information",
    args_schema=SearchInput
)
```
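What the `args_schema` buys you is validation before the function runs: malformed arguments from the LLM are rejected instead of crashing the tool. A hand-rolled, dependency-free sketch of that validation step (the helper `validate_search_args` is ours, standing in for what pydantic does automatically):

```python
def validate_search_args(raw: dict) -> dict:
    """Reject tool calls whose arguments don't match the declared
    types and defaults, mimicking args_schema validation."""
    args = {
        "query": raw.get("query"),
        "max_results": raw.get("max_results", 5),  # default from the schema
    }
    if not isinstance(args["query"], str) or not args["query"]:
        raise ValueError("query must be a non-empty string")
    if not isinstance(args["max_results"], int) or args["max_results"] < 1:
        raise ValueError("max_results must be a positive integer")
    return args

def search_function(query: str, max_results: int = 5) -> str:
    """Search implementation"""
    return f"Top {max_results} results for: {query}"

# A well-formed call: missing max_results falls back to the default
args = validate_search_args({"query": "pyodide"})
result = search_function(**args)
# → "Top 5 results for: pyodide"
```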
Streaming Responses
Stream responses for better UX (works with OpenAI and Anthropic):
```python
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
```

In your frontend JavaScript:

```javascript
async function streamResponse(input) {
  const response = await agent_executor.stream({ input: input });
  for await (const chunk of response) {
    displayChunk(chunk);
  }
}
```
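On the Python side, streaming is the same pattern: consume an async iterator of chunks and render each one as it arrives. A self-contained sketch, with `fake_stream` standing in for the real model stream so it runs anywhere:

```python
import asyncio

async def fake_stream(text: str, chunk_size: int = 8):
    """Stand-in for a streaming LLM: yields the response a few
    characters at a time instead of all at once."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]
        await asyncio.sleep(0)  # yield control, as real network I/O would

async def collect(stream) -> str:
    chunks = []
    async for chunk in stream:
        chunks.append(chunk)  # in a UI, render each chunk as it arrives
    return "".join(chunks)

reply = asyncio.run(collect(fake_stream("The weather in Paris is sunny.")))
# reply == "The weather in Paris is sunny."
```

The UX win is that users see the first chunk almost immediately rather than waiting for the full response.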
Error Handling
Implement robust error handling:
```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=10
)

try:
    result = agent_executor.invoke({"input": user_input})["output"]
except Exception as e:
    result = f"Sorry, I encountered an error: {str(e)}"
```
Testing Your Agent
Local Testing
- Download your agent as HTML
- Open in a browser
- Test various inputs and edge cases
- Check console for errors (F12)
Testing Checklist
- ✅ Agent responds to normal queries
- ✅ Tools are called correctly
- ✅ Error messages are user-friendly
- ✅ Conversation memory works (if enabled)
- ✅ Agent stays within its domain/role
- ✅ Performance is acceptable
Best Practices
Prompt Engineering
- Be specific about the agent's role and limitations
- Provide clear instructions for tool use
- Include examples of desired behavior
- Test prompts iteratively
Tool Design
- Keep tool names and descriptions clear and concise
- Use type hints and Pydantic models for parameters
- Handle errors gracefully within tools
- Return structured data when possible
Performance
- Minimize package dependencies (smaller download size)
- Use Pyodide built-ins when available
- Cache expensive computations
- For local models, warn users about download size
Security
- Never hardcode API keys in Python code
- Validate user inputs before processing
- Be cautious with tools that access external APIs
- Consider rate limiting for public agents
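In place of a hardcoded key, read it from the environment (or from whatever secret mechanism AgentOp's browser runtime provides) and fail loudly if it's missing. A generic stdlib sketch; the helper name `require_api_key` is ours:

```python
import os

def require_api_key(var_name: str) -> str:
    """Fetch an API key from the environment, failing loudly if absent."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; configure it outside the source code."
        )
    return key

# Hypothetical usage:
# llm = ChatOpenAI(api_key=require_api_key("OPENAI_API_KEY"))
```

Failing at startup with a clear message beats shipping a key in code that anyone can read from the downloaded HTML.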
Publishing and Sharing
Make Your Agent Public
- Edit your agent
- Set "Visibility" to Public
- Ensure description and tags are complete
- Save changes
Allow Forking
Enable "Allow Forks" to let others learn from and build upon your agent. Forks create independent copies that others can modify.
Promote Your Agent
- Share the agent URL on social media
- Write a blog post about your agent's use case
- Add it to relevant collections in the marketplace
- Engage with users who fork or rate your agent