Frequently Asked Questions
Common questions about using AgentOp
Getting Started
Does AgentOp cost money to use?
No! AgentOp itself is free to use. However, if you use a cloud AI provider (OpenAI or Anthropic), you'll need to pay for their API usage. Alternatively, you can use Local WebLLM models, which are completely free with no API costs.
Do I need an API key to create an agent?
No. You can create agents using Local WebLLM without any API key. This is perfect for testing and learning. If you want to use OpenAI or Anthropic models, you'll need an API key from those providers. See our Provider Configuration guide for setup instructions.
Can I try agents online without downloading them?
Currently, agents must be downloaded as HTML files to run in your browser. We don't offer cloud hosting or online demos, to protect your privacy and API keys. The download is instant, and you can delete the file after testing.
Do I need to know how to code?
Basic Python knowledge is helpful but not required to get started. You can use existing templates with minimal code changes. For advanced customization, familiarity with Python, LangChain, and basic web concepts (HTML/JavaScript) will be beneficial.
Costs and Pricing
How much does it cost to run an agent?
It depends on your AI provider:
- Local WebLLM: $0 - completely free, runs in browser
- OpenAI GPT-4o-mini: ~$0.15 per million input tokens (~$0.001 per conversation)
- OpenAI GPT-4o: ~$2.50 per million input tokens (~$0.02 per conversation)
- Anthropic Claude 3.5 Sonnet: ~$3.00 per million input tokens (~$0.02 per conversation)
For most use cases, costs are minimal. Set spending limits on your API keys to control costs.
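Using the per-token rates listed above, you can estimate costs before choosing a provider. A minimal sketch (the 5,000-token conversation size is an assumption for illustration):

```python
def conversation_cost(input_tokens: int, rate_per_million: float) -> float:
    """Estimate the input-token cost of one conversation in USD."""
    return input_tokens / 1_000_000 * rate_per_million

# Assumed example: a chat that sends ~5,000 input tokens in total
print(round(conversation_cost(5_000, 0.15), 5))  # GPT-4o-mini rate
print(round(conversation_cost(5_000, 2.50), 4))  # GPT-4o rate
```

Even a fairly long conversation on GPT-4o-mini lands well under a cent of input cost, which is why the per-conversation estimates above are so small.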
What happens if my API credits run out?
Your agent will show an error message when trying to generate responses. Users will need to either add credits to their API account or update the agent with a new API key. This is why we recommend setting usage limits when creating API keys.
Can I reduce costs after creating an agent?
Yes! You can edit your agent and change providers at any time. If costs are a concern, consider:
- Switching to Local WebLLM (free, but lower quality)
- Using GPT-4o-mini instead of GPT-4o (~17x cheaper per input token, based on the rates above)
- Using Claude 3 Haiku instead of Sonnet (12x cheaper)
Privacy and Security
Are my conversations private?
It depends on your provider:
- Local WebLLM: 100% private - all processing happens in your browser, nothing sent to servers
- OpenAI: Conversations sent to OpenAI's servers (see their privacy policy)
- Anthropic: Conversations sent to Anthropic's servers (see their privacy policy)
AgentOp never sees your conversations or API keys. Everything runs in your browser.
How are my API keys protected?
API keys are encrypted with AES-256 before being embedded in the HTML file. The encryption happens entirely in your browser, and AgentOp never sees your plaintext keys. When you open the agent, you'll need to enter the encryption password to decrypt the key.
Security Best Practices:
- Use strong, unique encryption passwords
- Create separate API keys for each agent
- Set spending limits on all API keys
- Don't share agents with embedded keys publicly
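The password-based step described above can be illustrated with key derivation. This is a conceptual Python sketch only, not AgentOp's actual browser code (the browser would use the Web Crypto API); the salt size and iteration count are assumptions:

```python
import hashlib
import os

# Conceptual sketch: derive a 256-bit key from the user's password.
# The salt is random and would be stored alongside the ciphertext;
# the derived key is what encrypts the API key with AES-256.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"a-strong-unique-password", salt, 200_000)
print(len(key))  # 32 bytes = 256 bits, suitable as an AES-256 key
```

This is why the strength of your encryption password matters: it is the only input an attacker doesn't have.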
Can I share agents publicly without exposing my API key?
Yes! The best approach is to use Local WebLLM for public agents - no API key needed. Alternatively, you can share the agent code without an embedded key and let users add their own API keys. Never share agents with embedded keys publicly, even if encrypted.
What if someone gets hold of my agent's HTML file?
If you're using OpenAI/Anthropic, your API key is encrypted in the file. Without your encryption password, they cannot use it. However, as a precaution:
- Use unique encryption passwords for each agent
- Set spending limits on API keys
- Rotate keys regularly
- Monitor API usage for suspicious activity
Local WebLLM
How large are the local models?
Local models typically range from 4GB to 35GB depending on the model and quantization:
- Hermes-2-Pro-Mistral-7B (q4f16): ~4GB (recommended, best balance)
- Hermes-3-Llama-3.1-8B (q4f16): ~4.5GB
- Hermes-2-Pro-Llama-3-8B (q4f16): ~4.5GB
- Llama-3.1-8B-Instruct (q4f16): ~4.5GB
- Hermes/Llama 8B (q4f32): ~8GB (higher precision variants)
- Llama-3.1-70B-Instruct (q3f16): ~26GB (high-end GPU required)
- Llama-3.1-70B-Instruct (q4f16): ~35GB (highest quality, very high-end GPU)
Models download once and are cached in your browser for future use.
Do local models work offline?
Yes! After the initial download, local models work completely offline. You can even use the agent without an internet connection. The HTML file and cached model are all you need.
Which browsers support Local WebLLM?
WebLLM requires WebGPU support:
- Chrome: Version 113+ (full support)
- Edge: Version 113+ (full support)
- Safari: Recent versions (macOS Ventura+)
- Firefox: Experimental support (enable in flags)
Check webgpureport.org to verify your browser's WebGPU support.
How do local models compare to GPT-4 or Claude?
Local models are much smaller (7-8B parameters, versus the far larger frontier models like GPT-4), so they're less capable:
- Good for: Simple conversations, basic Q&A, prototyping, learning
- Not as good for: Complex reasoning, specialized knowledge, creative writing
Think of local models as "good enough for many tasks" rather than "best in class." The tradeoff is worth it for privacy and zero cost.
Do local models run on phones or tablets?
It depends on the device. High-end modern smartphones with recent browsers may work, but performance will be slower than on desktop. Low-end devices likely won't support WebGPU or will be too slow for practical use. Desktop/laptop use is recommended.
Technical Issues
Why does my agent show an error when generating responses?
This usually means one of the following:
- Wrong encryption password: Re-enter the correct password
- API key expired or revoked: Create a new agent with a fresh API key
- No credits remaining: Add credits to your OpenAI/Anthropic account
- Rate limit exceeded: Wait a few minutes and try again
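If rate limits are the cause, a simple client-side retry with exponential backoff often resolves it. A minimal sketch, using RuntimeError as a stand-in for whatever rate-limit exception your provider's client raises:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff between failures."""
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a provider rate-limit error
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Usage sketch: wrap the provider call, e.g.
# reply = call_with_retry(lambda: llm.invoke(messages))
```

Keep the attempt count small: if the key is revoked or out of credits rather than rate-limited, retrying will never help.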
Why is my agent stuck on a loading screen?
For Local WebLLM agents, this usually means the model is still downloading. Check:
- Internet connection is stable
- Browser DevTools Network tab to see download progress
- Sufficient disk space for the model cache
- Wait 5-15 minutes for first download (models are 4-8GB)
If it times out, try a smaller model like TinyLlama-1.1B first.
Which Python packages can I use in my agent?
Pure-Python packages work out of the box with Pyodide. Packages with C extensions (like NumPy or pandas) only work if they have been pre-compiled for Pyodide - check the Pyodide package list to see what's available. Most common data science packages are supported.
How do I debug errors in my agent?
Open your browser's Developer Tools (F12) and check the Console tab. Pyodide errors will appear there. Common issues:
- Import errors - package not available in Pyodide
- Syntax errors - check Python code carefully
- API errors - verify API key and network connection
- Memory errors - model or code too large for browser
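One defensive pattern for the import errors above is to guard optional imports, so a missing package prints a clear console message instead of a raw traceback. A hypothetical helper (safe_import is not an AgentOp API, just an illustration):

```python
def safe_import(name):
    """Import a module by name; return None with a console note if missing."""
    try:
        return __import__(name)
    except ImportError:
        print(f"'{name}' is not available in this Pyodide environment")
        return None

json_mod = safe_import("json")            # stdlib module: succeeds
missing = safe_import("not_a_real_pkg")   # demonstrates the failure path
```

The printed message shows up in the same DevTools Console, which makes "package not available in Pyodide" much faster to diagnose.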
Can I run agents on a server instead of the browser?
AgentOp agents are designed to run as standalone HTML files in browsers. While you could technically host the HTML on a server, each user would still run the code in their own browser (not on your server). For server-side AI, consider using LangChain directly in a Python backend instead.
Agent Development
Can I edit an agent after creating it?
Yes! Navigate to your agent's detail page and click "Edit" to modify any aspect: Python code, description, provider, packages, etc. Download the updated version when done.
How do I give my agent custom tools?
Use LangChain's @tool decorator in your Python code:
from langchain.tools import tool
from langchain.agents import create_openai_tools_agent

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Your implementation goes here
    return f"Results for: {query}"

# Add the tool to your agent (llm and prompt defined elsewhere)
tools = [search_web]
agent = create_openai_tools_agent(llm, tools, prompt)
See the Agent Creation guide for more examples.
Can I create custom templates?
Yes! Templates define the HTML/CSS/JS structure for agents. Create custom templates from the Templates page. See our Template System guide for detailed instructions.
What's the difference between Download and Fork?
- Download: Get the HTML file to run the agent locally
- Fork: Create your own copy on AgentOp that you can edit and customize
Fork when you want to modify an agent and save your changes. Download when you just want to use it.
How do I share my agent with others?
Several options:
- Public listing: Set visibility to "Public" - appears in marketplace
- Direct link: Share your agent's URL (e.g., agentop.com/agents/your-agent/)
- HTML file: Share the downloaded HTML file directly
- GitHub: Upload HTML to a repository and enable Pages
Only share HTML files publicly if using Local WebLLM. Never share files with embedded API keys.
Deployment
Can I host my agent on my own website?
Yes! Agents are standalone HTML files that can be:
- Uploaded to any web server
- Hosted on GitHub Pages
- Embedded in websites via iframe
- Included as downloadable files
For public hosting, use Local WebLLM to avoid API key issues.
Can I embed an agent in an existing web app?
Yes! Use an iframe to embed the agent HTML:
<iframe
  src="path/to/agent.html"
  width="800"
  height="600"
  style="border: none;"
></iframe>
The agent will work as a standalone component within your application.
Do agents work on mobile devices?
OpenAI and Anthropic agents work well on mobile browsers. Local WebLLM agents may work on high-end mobile devices but are not recommended due to:
- Large model download sizes (4-8GB)
- Limited WebGPU support
- Slower inference speed
- Higher battery consumption
Licensing and Terms
Who owns the agents I create?
Agents you create are yours. You own the code and can license it however you choose. AgentOp does not claim ownership of your agents. However:
- Respect the licenses of any templates you use
- Respect the licenses of Python packages you include
- Check local model licenses if using WebLLM
Can I use AgentOp commercially?
Yes! You can create commercial agents and products using AgentOp. Make sure to:
- Review OpenAI/Anthropic terms of service if using their APIs
- Check local model licenses for commercial use restrictions
- Respect third-party package licenses
Still Have Questions?
Need More Help?
If your question isn't answered here, try:
- Browsing example agents in the marketplace
- Reading our other documentation pages
- Forking working agents to see how they're built
- Checking browser console for error messages